
Why is understanding connectors critical?
Connectors are the silent enablers of every Power Platform solution. Whether you're automating approvals, syncing customer records, or generating reports, connectors are doing the heavy lifting in the background.
But here’s the catch: Not all connectors are created equal, and not all are included in your M365 license.
Understanding how your connector is licensed isn’t just a technical detail. It affects:
- Your cost model
- Your app’s scalability and maintainability
- And your ability to respond to Microsoft’s evolving licensing rules
If you don’t understand the connector landscape, you’re building on shaky ground. This blog post helps IT operations teams, platform admins, and Center of Excellence leaders make smart, future-proof decisions about connector use.
This is the fourth part of our Power Platform licensing series. In our previous articles, we covered
- Microsoft Power Platform licensing changes in 2025 and how they affect users,
- Power Platform Licensing Within M365 & D365 and what bundled access actually includes
Standard vs. Premium connectors: What’s the difference and why does it matter?
Microsoft splits connectors into two main categories: Standard and Premium. Custom Connectors fall under Premium too. If neither Microsoft, a third party, nor the community has built a connector for your specific system, you can create your own to tailor integrations to your exact needs.
Understanding the difference between Standard and Premium is critical to staying compliant and budgeting correctly. The type of connector you use directly impacts your licensing model. You might start with a simple app that uses SharePoint and suddenly need a premium licence just because you added a single connection to Dataverse or SQL Server.
Here’s what you need to know:
Standard connectors
These are included with most Microsoft 365 licenses and cover tools your team likely already uses:
- SharePoint
- Outlook
- Excel Online
- OneDrive
- Planner
Great for: internal, low-complexity apps that don’t require external system integration.
Premium connectors
These require additional licensing, either via Per App, Per User, or Pay-As-You-Go plans.
Examples include:
- Dataverse
- SQL Server
- Salesforce
- SAP
- HTTP, Azure DevOps, ServiceNow
Great for: unlocking richer integrations, but they come with licensing implications.
Keep in mind that even a single Premium connector will upgrade the entire app’s license requirement.
Why this matters
Connector classifications aren’t static. We’ve seen connectors reclassified from Standard to Premium, but these changes are typically announced in advance, giving teams time to prepare.
Keep in mind that Premium connectors can significantly alter your app’s costs. Adding just one can shift a solution from being covered under a Microsoft 365 license to requiring a Premium license for every user.
Real-life example: The SQL connector shift
The SQL Server connector was originally classified as Standard, making it a go-to choice for internal apps connecting to on-prem databases or Azure SQL. Teams across industries built solutions under the assumption that they were operating within the boundaries of their Microsoft 365 licenses.
Then came the change. Microsoft reclassified the SQL connector as Premium. This meant that the connector that powered dozens of reliable business apps was no longer included in base licensing.
Apps that had been running smoothly now required Power Apps Premium licenses or a Pay-As-You-Go model to stay compliant. IT teams scrambled to re-architect solutions, request unplanned budget approvals, or freeze deployments altogether.
The SQL connector shift is a reminder that connector classifications aren’t set in stone, and that licensing assumptions can quickly become liabilities.
Lessons learned
Don’t assume a connector’s classification is permanent. Instead, design apps with licensing flexibility in mind, avoiding hardcoded architectural decisions that rely solely on current connector classifications.
Microsoft is getting better at communicating connector changes, but surprises still happen
To their credit, Microsoft has made real progress in making things clearer:
- It’s now easier to find out which connectors are Standard vs. Premium in the official docs.
- Release Wave updates highlight what’s changing before it happens.
- Admin Center and Message Center posts give early warnings so you can plan ahead.
But there’s still a lag between policy updates and their impact in real-world apps. And some changes appear with little to no warning, especially for lesser-known connectors or third-party services.
What to keep in mind:
- Always double-check connector classification before starting a project, not just before deployment.
- Previously free connectors can be reclassified.
- New connectors may launch as Premium from day one.
How to manage connector risk proactively
If you're running a Power Platform environment at scale, connector governance is just as important as app governance.
Here’s how to get ahead of it:
- Maintain an internal approved connector list
Track which connectors are Standard vs Premium, add usage notes, and include business owners for accountability.
- Start with Standard connectors, upgrade to Premium when it’s necessary or adds value
Default to Standard connectors to control costs and streamline deployment. But don’t rule out Premium connectors as they can unlock valuable functionality. The key is alignment: choose Premium only when those extra features directly support your use case.
- Monitor for classification changes
Set alerts from Microsoft’s Message Center and make sure someone regularly reviews Release Wave updates. Connector statuses can change.
- Regularly audit apps
Identify apps using Premium connectors and regularly check whether the current licences still fit. Flag anything at risk if classifications shift again.
- Educate makers
Many citizen developers don’t realise that using just one Premium connector upgrades the licence requirement for everyone. Share clear internal guidelines from the start.
Bonus tip: Don’t forget connectors in flows
It’s easy to focus on connectors in Power Apps, but don’t overlook Power Automate.
Flows using Premium connectors (e.g., Dataverse, SQL, custom APIs) follow the same licensing rules. If a flow triggers via a Premium connector, the user (or the flow owner) must have the proper license. This is one of the most common compliance gaps we see in audits.
Smart connector choices = Long-term app stability
Choosing connectors isn’t just about capability, but about sustainability too. You need to know exactly what you’re using, design apps that can adapt if licensing changes, and validate connector classifications early and often. This approach helps you build apps that are scalable, cost-effective, and resilient to change.
If you’re not sure which license setup is best for your team, contact us to discuss your use case.
Up next in this series:
- Request management made easy — Staying within limits and budget
- Smarter spending with Power Platform — Managing costs for future scaling

“I thought this was included with Dynamics 365. Why are we getting license errors?”
If you’re responsible for automating team processes, like building flows, setting up approvals, or managing requests, you’ve probably heard this more than once. Maybe you’ve said it yourself.
Let’s say you rolled out a Power App to streamline onboarding. It’s using SharePoint, Outlook, maybe even Teams. No problem so far. But the moment someone adds a Dataverse table or a Power Automate flow that hits a SQL database? It suddenly prompts you to start a trial or upgrade to a premium license. Now you’re faced with licensing decisions.
This confusion is one of the most common traps for operations teams using Power Platform in Microsoft 365 or Dynamics 365 environments. The tools look free. The makers assume they’re included. But under the hood? It’s more complicated.
This is the second part of our Power Platform licensing series. In our previous article, we covered Microsoft Power Platform licensing changes in 2025 and how they affect users.
Which Power Platform features are included in M365 and D365?
Bundled access comes with hidden limits. Let’s break it down.
M365: Good for standard connectors, but that’s it
Microsoft 365 plans (like E3 or E5) include:
- Power Apps with standard connectors (SharePoint, Excel, Outlook, and many more)
- Power Automate with standard connectors (triggers and actions)
- Canvas apps embedded in Teams
But the moment your app or flow uses:
- Premium connectors (like SQL, Dataverse, Salesforce, or custom APIs)
- Model-driven apps with richer logic and relational data
- Standalone Power Apps portals (now Power Pages)
- AI capabilities or Copilot integrations
…you’ve left the “free with M365” zone. Even read-only access to premium data still requires a premium license — a common oversight that leads to compliance issues.
D365: More power, but only for licensed users — and only for the specific app
Dynamics 365 plans (like Sales, Customer Service, or Field Service) come with broader Power Platform entitlements — but there are two strict boundaries:
- Only licensed D365 users get the extra capabilities
- Only for scenarios tied to their specific D365 app
So, if someone with a Dynamics 365 Sales license builds a Power Automate flow that connects SharePoint and Dataverse for a sales process?
Covered.
But if a non-Sales user tries to use that same app or flow?
They’ll need their own premium license.
And if the Sales-licensed user builds an app or flow for HR, Finance, or Operations?
That falls outside the licensed scope — even if it uses the same Power Platform components — and may not be compliant.
Bottom line: D365 licensing is generous within the app boundary, but it doesn’t transfer across departments, scenarios, or users.
Are hidden assumptions breaking your automations?
Let’s say your team builds a Power Automate flow to route vacation approvals. It uses SharePoint and Outlook, so you assume it’s covered under your M365 license.
But then someone quietly adds a premium connector, say Entra ID or Dataverse. Nobody flags it. The flow still works, more or less. Then you start seeing:
- Flow throttling
- Unexpected errors
- Sudden license warnings
Admins are confused. Users are frustrated. And now you’re chasing down compliance gaps and trying to keep things running, instead of focusing on scaling meaningful work.
This is the risk of assumptions. Power Platform doesn't always block you upfront — it lets you build and run… until usage crosses an invisible line.
Does usage mean you're licensed?
Here’s the tricky part: Just because something runs doesn’t mean it’s licensed.
Power Platform often doesn’t block you at the start. Apps and flows may run smoothly at first. But that doesn’t mean you’re in the clear.
Problems tend to appear later, when:
- A new enforcement rule quietly kicks in
- A background API call exceeds your entitlements
- A usage audit flags non-compliance
And by then, it's not just a licensing problem. It’s a business disruption.
If you're not proactively monitoring usage against entitlements, you're one policy change away from broken automation and user downtime.
How to stay in control before Microsoft starts monitoring your team
If you’re not monitoring entitlements proactively, you’re not in control — Microsoft is.
If you want to avoid surprises, you need a licensing-aware automation strategy. Here’s how:
1. Know what’s “premium”
Keep a cheat sheet of premium connectors, features, and app types. Share it with your makers and approvers so they understand when they’re entering license territory.
2. Map users to roles and needs
Who’s building? Who’s consuming? What data sources are in play? Don’t assign licenses blindly. Align them with usage patterns.
3. Monitor usage centrally
Use these tools to track and stay ahead:
- Power Platform Admin Center
See request volumes, connector usage, and license assignment gaps across environments.
- Azure Monitor (optional)
Set alerts when flows near usage limits or exceed thresholds — useful for high-scale environments.
4. Watch for “inherited” access
Just because someone is part of a Teams channel or D365 group doesn’t mean they’re licensed for the app or flow embedded there. Shared access ≠ shared entitlement.
Don’t assume, assess
If you’re building automation at scale, especially in hybrid M365 + D365 environments, licensing can’t be an afterthought.
- M365 gives you the basics but not the premium connectors most real-world apps need.
- D365 licenses go deeper but only within narrow boundaries.
- And enforcement is now active and automated.
So, if you want to keep building without friction, make license visibility part of your ops playbook. Stay ahead of usage, keep your team up-to-date, and model costs before they spiral.
If you’re not sure which license is best for your team, contact us to discuss your use cases.
Up next in our Power Platform licensing series:
- Staying ahead of connector changes in Power Platform
- Request management made easy: Staying within limits and budget
- Scaling without breaking your budget

Imagine describing an app you need in your own words and getting a basic app framework in minutes. With Plan Designer in Power Apps, that’s already becoming possible.
What is the Plan Designer?
Plan Designer is a new Copilot experience within Power Apps. It allows users to describe their app in natural language and receive a structured starting point.
This is part of Microsoft’s broader move to bring generative AI into everyday business tools. While it doesn’t yet deliver complete or production-ready applications, it offers a strong foundation that helps teams move faster, validate ideas earlier, and collaborate more effectively with dev teams when it’s time to build.
Important to know: It’s still in preview
Plan Designer is currently available as a public preview feature. That means it’s not production-ready yet, and it’s not recommended for complex or business-critical use cases.
It’s a promising direction, and there are many more improvements in the pipeline. But for now, think of it as a way to jumpstart your ideas, not as a full replacement for expert-built solutions. Let’s see how:
From idea to app structure, without coding
Some of the best ideas for internal apps come from the people who work closest to the process.
You’ve likely experienced it yourself: you know exactly what your team needs, whether it’s a simple PTO planning tool or a way to track field tasks. You understand the workflow, the challenges, and the users. But when it comes to turning that insight into a working app, you’re not sure how to get started.
That’s been the reality for many business users.
Historically, Power Apps has been aimed at non-developers: people in HR, customer service, field operations, and sales. These users know their business inside and out but often lack the technical or systems-thinking skills to design a well-structured, scalable app. As a result, many apps were either overly simple or hard to maintain and improve.
That’s where Plan Designer comes in.
It offers a more guided way to get started. Instead of starting from scratch, you describe what you need in natural language, for example, “I need a tool to assign jobs to field technicians.” You can even upload visuals, like a screenshot of an old tool or a process diagram.

Based on your input, Copilot generates a structured draft of your app.
What you get is a smart skeleton, with suggested tables, screens, user roles, and basic logic. It proposes a data model and automation ideas using Power Automate, all based on your prompts. You can then review, adjust, or approve what Copilot gives you before it builds out the logic.
It won’t give you a finished app, but it gives you a strong starting point, one that reflects your intent and helps you think through how your app should be structured. That’s a big step forward for anyone who understands the business problem but not the development process.
What can you currently do with Plan Designer?
To access the Plan Designer, you need a preview environment with early feature access enabled. Once set up, you can start designing solutions directly from the Power Apps homepage by toggling on the new experience.
It’s still early days, so it’s important to set the right expectations. As of April 2025, Plan Designer has the following capabilities:
Natural language input
Based on natural language input, the Plan Designer will generate a solution tailored to your needs. This includes creating user roles, user stories, and data schemas.
Solution generation
The tool can create basic end-to-end solutions, including:
- Dataverse tables
- Canvas apps
- Model-driven apps
Iterative development
You can refine your plans by providing feedback during the design process to make sure that the generated solution aligns with your specific needs.
Collaboration and documentation
The generated plan serves as both a blueprint for development and documentation for future reference to help teams align on business goals and technical execution.
Integration with Power Platform tools
While still in preview, the tool integrates with other Power Platform components like Dataverse and Power Apps. However, some features (e.g., Power Pages support and advanced data modeling) are not yet available.
Limitations in the preview
The tool currently does not support generating full Power Automate flows or using common data model tables like accounts or contacts. Features like analytics integration, Azure DevOps compatibility, and document uploads (e.g., RFPs) are not yet implemented.
The feature set is evolving rapidly, with updates rolling out every few days. One recent improvement: Copilot now explains which AI agents are working on different tasks, for example, the requirement agent, data agent, or solution agent.

To sum up, Plan Designer helps you get the core pieces in place in just a few minutes. It’s especially useful for:
- Prototyping apps without waiting for a developer
- Practicing prompt-writing to refine app design
- Getting a better understanding of how systems and logic fit together
It’s great for playing around, testing out concepts, and learning how to approach app development with systems thinking. Let’s see how this might change in the coming years.
How you’ll use Plan Designer in the near future
Let’s say there’s a process in your team that’s manual, slow, or inconsistent, and you know exactly how it should work. Maybe it’s tracking field work, collecting customer data, or planning PTO.
You have the knowledge to solve it. What you don’t always have is the time, tools, or technical background to build the solution yourself.
That’s the direction Plan Designer is moving in. It will help you translate your ideas into something concrete: a data model, screens, and suggested relationships. It will give you a head start, so you won’t have to start from scratch.
Here’s what that might look like in practice:
- You’re a field manager who needs to track technician assignments and jobs.
You describe your idea to Copilot, and it creates basic tables like “Jobs” and “Technicians,” with suggested relationships between them. The logic and visuals still need work, but you now have a structure to build on.
Looking for inspiration to improve efficiency in Field Service? Check out our use cases here.
- You’re in sales and want to explore upsell recommendations for client visits.
Copilot sets up a rough draft with placeholders for customer info and past purchases. It doesn’t connect to CRM data yet, but it helps you map out the concept before looping in technical teams.
- You’re on a support team and want to build a customer intake form.
You describe the form and basic routing needs, and Copilot creates a simple layout with suggested fields and logic. You’ll need to tweak it, but it’s a much faster way to get started.
While these examples are simple, they give you an idea of where things are heading. Plan Designer isn’t here to replace software engineers but to let business teams move faster and speak the same language as their dev teams.
Turning your starting point into a real solution
At VisualLabs, we follow every development in the Microsoft ecosystem closely and we’re excited about Plan Designer’s progress. It’s already a powerful tool for creating skeleton apps, exploring ideas, and learning how data models and logic come together.
But when you need more than just a starting point, when performance, integration, scalability, and usability matter, our team is here to help. We bring the expertise to take your idea and turn it into a reliable, well-designed app that fits your organisation’s needs.
AI is changing how we build apps, but human insight still makes the difference.
Interested in what use cases our customers are prioritising? Check out our case studies here.

I first joined VisualLabs in the summer of 2020 as a junior business analyst. As you can see from the timeline, I was part of the mass junior recruitment: with the three of us joining, the company grew to eight people at the time.

In the year and a bit I worked here between 2020 and 2021, I was involved in quite a variety of tasks: building and improving Power BI reports, working a lot on a contract management application I built using the Power Platform, and gaining insight into the beauty of Business Central. The latter also gave rise to some comical memories, such as the painstaking work of recording and subtitling training videos for clients, and how, as an undergraduate student, I ended up on 'duty' over Christmas because I had no holidays left for the year. But I got a lot of support from my senior colleagues in all of this; they didn't let me get lost in the shuffle.
Three years later, in the summer of 2024, I rejoined VL, but now I work specifically with ERP. One thing that was very nice and new to me was the company timeline. Whereas last time I was one of the mass junior hires, now I'm part of the company's life.

An amazing amount has happened in my time away, and it's great to see these events being shared by my colleagues, creating a stronger sense of belonging.
What has actually changed in these 3 years? I haven't had the chance to go through everything since I rejoined, and there's not enough space to go into it all here, so I'll just give you a few snippets.
Office
The first of these is probably the new office: the move from Zsigmond Square to Montevideo Street had already happened while I was still here as a junior. But I couldn't really enjoy it then, and I wasn't part of the "moving in"; still, when I returned three years later, I felt like I had helped shape it. By that I mean that the ethos that makes VisualLabs VisualLabs has, I think, changed very little, and the homeliness of the office reflects that.
Specialisation
The company has made huge progress in terms of specialisation and staff numbers while I was away: the team has grown to 35 people, and there are now separate business units for all the tasks I had the opportunity to join on a rotational basis as a junior. These are the CE team, who build business applications for clients; the data team, who deliver data analytics and visualisation solutions; and the ERP team, of which I became part, where we introduce Microsoft's enterprise management solutions (Dynamics 365 Finance and Operations and Business Central) to clients.
What I would perhaps highlight from this is that even though these specialisations have evolved, it has not brought with it a siloed operation. To deliver work in our own area, we have access to the knowledge of other areas, and we mutually help each other across teams to deliver the highest quality service possible. From this perspective, what has changed in 3 years? I would say nothing; what worked then on a small scale works now on a bigger scale.
Agile operation
We have had a solid vision of how we deliver solutions since I was a junior employee here: the agile methodology. What was in its infancy is now mature. If not fully agile, we use the agile elements so well that they support our day-to-day work to a great extent.
It helps us communicate internally and with our customers by allowing them to post issues in DevOps that we help them resolve; we write features, user stories, and test cases that support needs assessment and implementation. We have daily stand-up meetings with the team in the mornings where we discuss our stumbling blocks, at the end of the week we have sprint rounds where we plan the next week's sprint, and monthly we have a retro where we pay special attention to giving each other feedback, looking back on the past month.
Team and all the fun
Unfortunately, during my first stint I didn't get much of this in person because of Covid, but even then the short conversations at the beginning of a call, or at the morning "all-people" DSMs, reinforced the sense of belonging to the team and the good atmosphere. Fortunately, we have kept this habit ever since, so no call is ever dull. And once the epidemic subsided, these community events only grew stronger, with regular team-building events, VL team-building retreats, and co-hosted Christmas and Halloween parties.
Days in the office are also great. Although it varies from day to day, we have little rituals that colour the days and take the focus off work: the daily lunch together in the office, chit-chat while making coffee, a funny comment passed to the next desk, or the monthly office day when we all go in and look back over the past month. In short, you never get bored here. 😊
Coming back to a place where I've worked before is a special experience - especially when so much has changed in the meantime. VisualLabs has retained the supportive community and vibrancy that I grew to love, while reaching new levels of development and professionalism. This journey has been a learning experience not only for the company, but also for me, as the old and new experiences have given me a stronger, more mature perspective. I look forward to being a part of the next chapter and seeing where the company goes in the future!

Hey everyone! Here’s a summary of the Budapest BI Forum 2024, where I had the chance to dive into some intriguing topics and engage in inspiring conversations.
The first day was a full-day Tabular Editor workshop, where we covered the basics and discussed topics such as controlling perspectives, writing macros, and refreshing partitions. The other two days of the conference were packed with learning, and here are my key takeaways from my favorite sessions.
Keynote Speech: BI Trends
The day kicked off with a keynote that explored current and future BI trends.
Bence, the main organizer and host of the event, supported his key points with insights from Gartner research and similar studies. A few highlights that caught my attention:
- By 2025, data security and data governance are expected to top the list of priorities for executives.
- The rapid rise of AI introduces scenarios where users export data from dashboards to Excel, feed it into tools like ChatGPT, and generate their own insights. While exciting, this raises concerns about security and "shadow reporting," issues companies have tried to curb for years.
As a contractor and consultant I find this especially ironic. Large companies often hesitate to share data, even when it’s crucial for project development. They implement robust policies like VPNs and restricted searches to prevent leaks. But, at the same time, they struggle to monitor and control employees' behaviors, such as inadvertently sharing sensitive data.
This evolving dynamic between AI, data security, and governance will definitely be a space to watch closely.
Read more about Gartner’s 2024 BI trends here.
PBIR: Report Development in Code
This technical session introduced the PBIR format, a preview feature that allows Power BI reports to be stored as individual JSON files for each visual and page, instead of a monolithic file.
The feature’s potential for bulk modifications was the most exciting part. The presenter showed how Python scripts could iterate through the JSON files to apply changes (e.g., adding shadows to all KPI cards) across the report.
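As a rough illustration of that idea, a script along these lines could walk an extracted report definition and patch every card visual. The folder layout and property names used here are assumptions for the sake of the example, and the PBIR schema is still in preview, so treat this as a sketch rather than a recipe:

import json
from pathlib import Path

# Path to an extracted PBIR report definition (hypothetical example path)
REPORT_DIR = Path("MyReport.Report/definition/pages")

# Walk every visual.json in the report; the folder layout and property names
# below are assumptions for illustration - check the actual PBIR schema of
# your report before running anything like this against real files.
for visual_file in REPORT_DIR.glob("*/visuals/*/visual.json"):
    data = json.loads(visual_file.read_text(encoding="utf-8"))

    # Only touch card visuals (the property path is an assumption)
    if data.get("visual", {}).get("visualType") == "card":
        # Add a drop-shadow setting (again, an assumed property structure)
        objects = data.setdefault("visual", {}).setdefault("objects", {})
        objects["dropShadow"] = [
            {"properties": {"show": {"expr": {"Literal": {"Value": "true"}}}}}
        ]
        visual_file.write_text(json.dumps(data, indent=2), encoding="utf-8")
        print(f"Updated {visual_file}")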
While still in preview and somewhat buggy, it’s a promising direction. I’m also intrigued by the integration possibilities with VS Code and GitHub Copilot, which could simplify automation for non-coders.
However, it seems TMDL language won’t be integrated into PBIR anytime soon—a bit disappointing, but I’m optimistic this will eventually happen.
TMDL Enhancements in Power BI & VS Code
One of the most exciting parts of the forum was exploring updates to TMDL (Tabular Model Definition Language), designed to make Power BI model development more efficient.
TMDL View in Power BI
This might be the feature I’m most excited about! The ability to edit your semantic model as code directly inside Power BI is a massive leap forward. Combining drag-and-drop, Copilot, and coding will make development smarter and faster.

Immediate Code Updates in Power BI (Planned for Next Year)
A handy feature to look forward to is real-time synchronization between modified TMDL code and Power BI. Changes to the model will reflect instantly in Power BI without reopening the file, saving tons of time during development.
VS Code TMDL Extension
The TMDL extension in VS Code offers:
- Formatting: Automatically organizes TMDL syntax.
- IntelliSense and Autocomplete: Speeds up coding with intelligent suggestions.
- Expand/Collapse Functionality: Makes navigating larger TMDL files easier.
Copilot Integration in VS Code
Copilot lets you generate measures, calculations, and scripts with AI assistance. For example, as you type "Profit," Copilot suggests a complete formula based on the context. It’s a productivity boost I can’t wait to leverage more!

Online Editing with VSCode.dev
You can now edit repositories directly in your browser by adding the vscode.dev prefix to your repository URL (e.g., https://vscode.dev/github/<owner>/<repo>). It’s perfect for quick edits without setting up a local environment.
These updates are poised to make model development faster, smarter, and more collaborative for teams using GitHub and VS Code.
Lunch Break with Insights from Microsoft
Lunch turned into one of the highlights of the day when Tamás Polner, a key figure at Microsoft, joined our table. Tamás shared some fascinating insights about the current direction of Microsoft’s data ecosystem and upcoming trends:
- Fabric focus: Microsoft is heavily prioritizing Fabric over tools like ADF and Synapse, which are expected to receive basically no new feature updates as development resources shift toward Fabric. While this has been an industry assumption for a while, it was great to have this firsthand confirmation. The message is clear: Fabric is the future of Microsoft’s data ecosystem.
- Data security: Reflecting on the keynote’s emphasis on data security, Tamás explained that this aligns with what he’s seeing at Microsoft. The number of developers in the security team is increasing significantly, and this trend doesn’t seem to be slowing down.
- Optimized compute consumption: We also discussed CU (Compute Unit) optimization in Fabric. Tamás reaffirmed something I’d heard in Fabric training sessions: notebooks are far more powerful and efficient than UI-powered features like Dataflow Gen2. They use significantly less compute capacity, making them the better choice for many workflows.
- DP-600 exam: Tamás mentioned that the DP-600 exam has become one of the most successful certifications in Microsoft’s history, with a record-high number of certifications achieved in a short time.
- Copilot and AI: Copilot is a major focus for Microsoft, but its rollout faces challenges due to the high resource intensity of AI models. Tamás noted that, like other companies deploying built-in AI solutions, Microsoft needs to continue investing heavily in CAPEX for computing power to make these solutions broadly accessible.
This conversation provided valuable context and insight into Microsoft’s strategic priorities and was a great opportunity to discuss industry trends and technical strategies in detail.
Storytelling with Power BI
This session revisited a topic close to my heart: how to create Power BI reports that truly connect with their audiences. The presenter broke it down into three key phases:
- Research: Start by understanding the report’s purpose. Who will use the report? What decisions should it support? Can the goal be summarized in one clear, concise sentence?
- Create: Develop the report based on your research. Ensure that the visuals, design, and structure align with the user’s needs and the intended outcomes.
- Deliver: It’s not just about handing over the report and documentation, then walking away. True success lies in monitoring how the report is used and gathering user feedback. This feedback often reveals both strengths and weaknesses you didn’t anticipate, providing opportunities to refine and enhance the report further.
While much of this was a confirmation of what I already practice, it underscored an essential point: The discovery phase and follow-ups are just as critical as the actual development process.
It also reinforced for me that educating clients about the value of these stages is crucial. When clients understand that investing time and resources into proper research and post-delivery follow-ups leads to better reports and happier users, they’re much more likely to embrace these processes.
Final Thoughts
The day was packed with insights, but what truly stood out was the seamless blend of technical innovation and strategic foresight. Whether it was exploring new options like TMDL and PBIR, or gaining a deeper understanding of the big-picture trends shaping the future of BI, the forum offered something valuable for everyone.
Of course, the lunch chat with Tamás was a treasure trove of insider knowledge—easily one of the event’s highlights for me. Another personal highlight was a heartfelt conversation with Valerie and Elena, who encouraged me to take the next step in my professional journey: becoming a conference speaker.
If any of these topics piqued your interest or you’d like me to dive deeper into specific sessions, just let me know—I’d be happy to share more!

When working with data from REST APIs, it's common to encounter limitations on how much data can be retrieved in a single call. Recently, I faced a challenge where the API limited responses to 1000 rows per call and lacked the usual pagination mechanism, such as a "next page URL" parameter in the response. This absence makes it difficult for developers to automate the data retrieval process, as there's no clear way to determine when all the data has been retrieved.
In scenarios like data backup, migration, or reporting, this limitation can become an obstacle. For instance, using a real-life scenario as an example: if your company manages its HR-related processes in SAP SuccessFactors, you’ll encounter this same challenge. Without a native connection between SuccessFactors and Power BI, one of the viable options for pulling data for reporting or building a data warehouse (DWH) is through REST API calls. While Power BI offers a quick solution via Power Query, it’s not always the best tool—particularly when dealing with larger datasets or when corporate policies limit your options. This is where Azure Data Factory (ADF) becomes a more appropriate choice. However, ADF presents its own challenges, such as handling API responses larger than 4MB or managing more than 5000 rows per lookup activity.
This article will walk you through overcoming these limitations when working with JSON files and API responses in ADF. By the end, you'll learn how to append multiple API responses into a single result and handle API calls with unknown pagination parameters.
While alternative solutions like Python or Power BI/Fabric Dataflow Gen2 may offer quicker implementations, there are cases where ADF is necessary due to restrictions or specific use cases. This guide is tailored for ADF developers or anyone interested in experimenting with new approaches.
If you're familiar with working with APIs, you're likely aware that limitations on the number of rows per call are a common practice. Typically, APIs will let users know if there’s more data to retrieve by including a "next page URL" or similar parameter at the end of the response. This indicates that additional API calls are necessary to retrieve the full dataset.
However, in the scenario we faced, this part of the URL was missing, leaving no clear indication of whether more data remained in the system. Without this pagination rule, it's challenging to determine from a single successful API call whether more API requests are required to retrieve the complete dataset. This makes automating the data retrieval process more complicated, as you have to implement a method to check whether further calls are necessary.
This is the first issue we’ll solve using ADF.
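For reference, since Python was mentioned above as the quicker alternative, here is a minimal sketch of the logic we are about to rebuild in ADF: keep skipping rows until a call comes back empty. The endpoint, the $skip/$top parameter names, and the d.results response shape are assumptions for illustration, not the actual SuccessFactors API contract:

import requests

BASE_URL = "https://your-api.example.com/odata/v2/EmployeeData"  # hypothetical endpoint
PAGE_SIZE = 1000   # the API returns at most 1000 rows per call
MAX_CALLS = 500    # safety guard against an endless loop

all_rows = []
skip = 0

for _ in range(MAX_CALLS):
    # Ask for the next "page" by skipping the rows we already have
    response = requests.get(
        BASE_URL,
        params={"$skip": skip, "$top": PAGE_SIZE},  # parameter names are assumptions
        timeout=60,
    )
    response.raise_for_status()
    rows = response.json().get("d", {}).get("results", [])

    # No next-page URL is returned, so an empty batch is our only stop signal
    if not rows:
        break

    all_rows.extend(rows)
    skip += PAGE_SIZE

print(f"Retrieved {len(all_rows)} rows in total")

Note the MAX_CALLS guard: the same endless-loop concern applies to the ADF pipeline, which is why a fail or timeout condition is recommended further down.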
The second issue involves handling API responses in JSON format and merging different JSON files into a single result.
If the requirement is to store all the data in one single JSON file, there are several approaches you can take:
- Create one single JSON file from all the responses and store or process it later.
- Generate multiple JSON files, one for each API call, and then flatten them into one file at a later stage in the ADF pipeline.
- Alternatively, write the data directly to a SQL table if the final destination is a database.
In this article, we’ll focus on Option 1—creating one single JSON file from multiple responses. While this may not be the ideal solution, it presents a unique challenge when working with JSON arrays in ADF.
While the solution itself is not overly complicated, there are a few important points where developers should proceed with caution.
First, be aware of the limitations ADF imposes on each Lookup or Web activity: the output size is restricted to 4 MB or 5000 rows. If your response size slightly exceeds this limit, you can adjust the settings, lowering the number of rows per call from 1000 to, say, 800. However, keep in mind that this adjustment could significantly increase the overall runtime, especially if you're dealing with many columns of data. In such cases, consider an alternative approach, such as using the Copy activity to write the data directly into a SQL database, or generating multiple JSON files and merging them into one later.
Another critical point is the use of loops. The solution involves two loops, so it’s essential to carefully handle scenarios that could result in endless loops. Proper checks and conditions must be implemented to avoid such issues and ensure smooth execution.
Implementation - ADF
Here is the logic of the entire pipeline:

To manage API pagination and build a single valid JSON file in ADF, you will need to define several variables, as shown in the image above.
Variables Setup:
- Skip X Rows (Integer): This variable stores the number of rows to skip in each REST API call, which is the dynamic part of the URL.
- Skip X Rows - Temporary (Integer): This variable is needed because ADF doesn’t support self-referencing for variables. You can’t directly update Skip X Rows using itself, so this temporary variable helps track progress.
- REST API Response is empty? (Boolean): This Boolean flag indicates whether the last API response was empty (i.e., no more data), triggering the loop to stop.
- API Response Array (Array): Used to store each individual API response during the loop. This allows you to gather all responses one by one before processing them.
- All API Response Array (Array) [Optional]: This array is optional and can be used to store all responses combined after the loop finishes.
- Current JSON (String): Stores one individual API response in JSON format as a string.
- Interim Combined (String): Stores the concatenated JSON responses as you append them together in the loop.
- Combined JSON (String): Holds the final complete JSON result after all responses have been processed and combined.
Step-by-Step Execution
1) Initialize Variables:
- Set Skip X Rows to 0. This represents the starting point for the API.

- Set Skip X Rows - Temporary to 0. This is a temporary counter to help update the primary skip rows.

- Set REST API Response is empty? to false. This Boolean will control when to stop the loop.

2) Add an Until activity: Set up an Until activity (ADF's equivalent of a while loop) with the condition @equals(variables('REST API Response is empty?'), true), so it keeps running as long as there is data to retrieve and stops once the flag turns true.

3) Inside the Until loop:
a) Lookup Activity (Initial API Call):
- Perform a Lookup activity calling the REST API, but limit the returned data to only one column (e.g., just the ID, which should never be empty if a record exists). This keeps the response light and allows you to check whether more data exists.
b) IF Condition (Check Response):
- If the response is empty, set REST API Response is empty? to true to end the loop.
- If not empty, proceed to the next step.
c) Full API Call:
- If the response is not empty, perform the full REST API call to retrieve the desired data.
- Append the response to the API Response Array variable.
d) Update Variables:
- Increase Skip X Rows - Temporary by the number of rows retrieved (e.g., 1000).
- Set Skip X Rows to the value of Skip X Rows - Temporary to update the dynamic part of the API URL.
4) Handle Failure Scenarios:
- Optionally, but highly recommended: add a Fail condition or a timeout check. This will break the loop if there is a problem with the API response (e.g., a 404 error).
After gathering all the API responses, you'll have a list containing multiple JSON arrays. You’ll need to remove the unnecessary brackets, commas, and other JSON elements. To do that, you’ll need a ForEach loop that iterates over all the JSON arrays in the array variable and modifies them accordingly (a short sketch of the same string manipulation follows these steps).
The steps followed inside the ForEach loop:
1. Save the currently iterated JSON into a variable as a string (string format is needed so you can manipulate the response as text).
2. Modify the JSON: to flatten the JSON pieces into a single file, remove the leading "[" or trailing "]" character so that concatenating them results in a valid file.
3. Save it to another variable, which will store the final JSON. Due to the lack of a self-referencing option in ADF, you need to update the Combined JSON variable every time a new JSON piece is added.
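If it helps to see the string manipulation outside of ADF, here is a tiny Python sketch of the same idea: strip the surrounding brackets from each response, join the fragments, and re-wrap them once. The variable names simply mirror the pipeline variables, and the sample responses are made up:

import json

# Pretend these are the individual API responses collected in the loop,
# each one a JSON array serialized as a string (as stored in "API Response Array")
api_response_array = [
    '[{"id": 1}, {"id": 2}]',
    '[{"id": 3}, {"id": 4}]',
    '[{"id": 5}]',
]

pieces = []
for current_json in api_response_array:
    # Remove the leading "[" and trailing "]" so the fragments can be joined
    pieces.append(current_json.strip()[1:-1])

# Re-wrap the concatenated fragments in a single pair of brackets
combined_json = "[" + ", ".join(pieces) + "]"

# Sanity check: the result is one valid JSON array containing all rows
print(len(json.loads(combined_json)))  # -> 5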

And that’s it! You have successfully addressed both the pagination and JSON file handling challenges using ADF.
Implementation – Power BI
Compared to this, the solution in Power Query is much more straightforward. You need one function in which you can control the number of rows to skip, and which calls the API 1000 rows at a time. And you need another query that acts as a while loop, calling the API repeatedly until it returns an empty response. Once it’s ready, you can combine the list of tables into one table. By expanding it, you’ll end up with the complete dataset. Here is the code of the function:

let
    Source = ( rows_to_skip as number ) =>
        let
            Base_URL = "https://Your_API_URL",
            Relative_URL = "The relative URL part of your API call",
            Source = Json.Document(
                Web.Contents(
                    Base_URL,
                    [ RelativePath = Relative_URL & Number.ToText( rows_to_skip ) ]
                )
            ),
            // Additionally, you can convert the response directly to a table with this function
            Convert_to_Table = Table.FromRecords( { Source } )
        in
            Convert_to_Table
in
    Source

Here is the query which invokes the function and acts as a while loop:

let
    Source =
        // Create a list of tables
        List.Generate(
            () =>
                // Try to call the function, setting the input parameter to 0 for the first call
                [ Result = try Function_by_1000( 0 ) otherwise null, Page = 0 ],
            // Check whether the first row of the response (referenced by the "{0}" part) is empty.
            // Due to the logic of this particular API, it checks the "results" list inside the "d" parameter of the response.
            each not List.IsEmpty( [Result]{0}[d.results] ),
            // Call the function again, increasing its input parameter by 1000 (the maximum number of rows per API call)
            each [ Result = try Function_by_1000( [Page] + 1000 ) otherwise null, Page = [Page] + 1000 ],
            each [Result]
        )
in
    Source
