The lab

Discover. Learn. Innovate.

A collection of valuable insights, real success stories, and expert-led events to keep you informed.

Insights & ideas

Stay ahead with expert articles, industry trends, and actionable insights to help you grow.

How can we introduce AI into our business processes safely?
December 10, 2025
10 mins read

TL;DR

Most organisations want AI but aren’t ready for it. Real AI adoption means either augmenting employees with copilots or creating autonomous agents—but both require clean data, documented processes, and strong governance. If workflows live in Excel, approvals happen in chats, or data is scattered, AI has nothing reliable to operate on. Once processes are structured and people understand how to work with AI, the organisation can finally unlock decision intelligence, safe automation, and meaningful impact. AI doesn’t fail because the model is bad—it fails because the foundations aren’t there. Build readiness first, value follows.

What companies get wrong about “adding AI”

Every organisation wants to “implement AI”, but few can describe what that actually means.

Is it adding Copilot to meetings?
Automating tasks with Power Automate?
Building agents that take decisions on your behalf?

The reality is that most companies don’t yet know what they want to achieve with AI, and even fewer are ready for it. Not because they lack tools, but because their people, processes, and technology aren’t structured for AI to operate safely, reliably, and at scale.

This post breaks down, in practical terms, what organisations truly need for AI-enabled business processes, the common pitfalls we see again and again, and a clear framework your organisation can actually use to get started.

What “adding AI” really means

When most teams say they want to “add AI”, they usually mean one of two things, and each has very different requirements.

1. Extend the worker (AI-augmented work)

This is where copilots and conversational assistants truly shine: helping employees search company knowledge, summarise decisions, retrieve documents, and take routine actions. But this only works if:

  • the AI actually understands your business data,
  • the data is structured and governed, and
  • the agent is not given decision rights that introduce risk.

The system must understand the company’s knowledge, not just respond to prompts.

2. Create autonomous workflows (AI agents)

This is the more advanced path: agents that make limited decisions, move work between systems, and act without constant human supervision.

But autonomy does not mean freedom. Governance is key. An agent should operate within a clearly defined scope and take business-critical decisions only when it has been given clear criteria.

This distinction matters because it forces organisations to re-examine how they work. If your processes are unclear, inconsistent, or undocumented, AI will reveal that very quickly.

Before you automate anything, understand the real process

One of the first questions we ask in readiness workshops is deceptively simple:
“How does this process actually work today?”

Almost always, the answer reveals a gap between intention and reality:

  • Sales opportunities tracked in Excel
  • Approval steps handled informally in Teams chats
  • Documents scattered across personal drives
  • Edge cases handled by “whoever knows how to do it”

This is where it all breaks down. AI cannot automate a process if even humans cannot describe it. If a process isn’t documented, it's technical debt.

Another red flag is organisations that want to “keep the process exactly as it is” and simply add AI on top. AI doesn’t work that way. If the process itself is inefficient, undocumented, or built on manual workarounds, no amount of automation will save it.

To get real value, the process must be worth automating in the first place, ideally delivering a 10x improvement when reimagined with AI.

The hidden bottleneck: your data

Every AI workflow, from copilots to autonomous agents, relies on data being structured, governed, consistent, discoverable, and stored in systems designed for long-term work.

If you’re tracking key business processes in Excel, you’re not AI-ready. Excel is brilliant for calculations, but it is not designed for workflow execution, audit trails, role-based access, entity relationships, or system-to-system integration.

Excel is unstructured data. You cannot build AI on manual data.

The good news is that Microsoft’s systems are AI-ready by design:

  • Dynamics 365 for structured sales and service processes
  • Dataverse for the unified data backbone
  • SharePoint for document lifecycle and governance
  • Teams and Loop for shared context and collaboration

If your processes live outside these systems, your AI will operate without context, or worse, without safety.

And if your data sits in old on-premises servers? Connecting them to modern AI systems becomes slow, fragile, and expensive. AI thrives in the cloud because the cloud creates the structure AI needs.

Designing workflows where AI and humans work together, safely

Once processes are structured and data is governed, the next question is:
what should AI do, and what should humans do?

There’s a simple rule of thumb:

  • High-impact, high-risk, or ambiguous decisions → human
  • High-volume, low-risk, routine steps → AI

This is where human-in-the-loop design becomes essential. A well-designed AI workflow should:

  • Define exactly where humans intervene
  • Log every AI action for traceability
  • Provide confidence scores and explanations
  • Avoid overwhelming people with unnecessary alerts
  • Keep the final accountability with the human owner

Humans should use judgement, handle exceptions, and ensure ethical and correct outcomes. AI should do the repetitive work, the data consolidation, and the first pass of tasks.
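
To make the division of labour concrete, here is a minimal sketch of such a routing rule. It is illustrative only: the risk labels, the confidence threshold, and the in-memory audit_log are assumptions for the example, not any specific product’s API.

    from dataclasses import dataclass
    from datetime import datetime, timezone

    audit_log = []  # stand-in for a real, durable audit trail

    @dataclass
    class Task:
        name: str
        risk: str             # "low" or "high", set by the process owner
        ai_confidence: float  # 0.0-1.0, supplied by the model

    def route(task: Task, threshold: float = 0.85) -> str:
        """Decide whether AI may handle a task or a human must own it."""
        if task.risk == "high" or task.ai_confidence < threshold:
            owner = "human"  # high-impact, risky, or ambiguous
        else:
            owner = "ai"     # high-volume, low-risk, routine
        # Log every routing decision so the workflow stays traceable.
        audit_log.append({
            "task": task.name,
            "owner": owner,
            "confidence": task.ai_confidence,
            "at": datetime.now(timezone.utc).isoformat(),
        })
        return owner

    print(route(Task("categorise incoming invoice", "low", 0.93)))  # -> ai
    print(route(Task("approve contract change", "high", 0.99)))     # -> human

The exact threshold matters less than the shape: every hand-off is explicit, and every action leaves a trace.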

AI readiness is also about people, not just systems

One of the most underestimated aspects of AI readiness is human behaviour. For AI to work as intended, business users must:

  • Be curious
  • Know how to break their work into steps
  • Be willing to adapt workflows
  • Understand where data lives
  • Ask questions and refine prompts
  • Avoid bypassing the process when “it’s easier to do it my way”

Processes fail when people resist the change because they don’t understand the “why”. And they fail just as quickly when employees work around the automation or keep using personal storage instead of governed systems.

AI introduction is as much a cultural shift as it is a technical programme.

What you can finally ask once AI-readiness is achieved

Once the foundations are in place, people begin asking questions that were previously impossible:

“Which of our suppliers pose the highest risk based on the last 90 days of invoices?”

“What decisions were made in the last project meeting, and who owns them?”

“Show me opportunities stuck for more than 30 days without activity.”

“Draft a customer update using the last three emails, the CRM history, and the contract.”

“Alert me when unusual patterns appear in our service requests.”

These are questions an agent, not a chatbot, can answer. But only if the process is structured and the data is clean.

AI doesn’t fail because the model is bad. It fails because the organisation isn’t ready

Before building agents, copilots, or automations, ask yourself:

  • Would AI understand our processes, or would it get lost in exceptions?
  • Is our data structured, governed, and accessible?
  • Do our people know how to work with AI, not around it?
  • Are we prepared to support safe, auditable, and reliable AI operations?

If the answer is “not yet”, you’re not alone. Most organisations are still early in their readiness journey. But once the foundations are there, AI value follows quickly, safely, and at scale.

Want to move from AI curiosity to real, measurable impact? Get in touch for an AI readiness workshop.  

How Agent 365 changes enterprise AI
December 3, 2025
10 mins read

TL;DR

A365 is Microsoft’s new identity, governance and management layer for AI agents, giving each agent its own permissions, lifecycle, audit trail and operational controls. It's a signal that AI isn’t a side feature anymore; it’s becoming a governed, scalable digital workforce inside the enterprise stack. Instead of scattered pilots and experimental bots, enterprises get a unified way to build, manage and scale agents across CRM, ERP, HR, finance and data workflows. This is the shift from “AI as a helper” to “AI as part of the workforce,” and it raises a simple question: are you preparing your processes, data and governance for digital labour, or will you be catching up later?

How will Agent 365 reshape the way organisations work?

Most organisations spent the last year wrapping their heads around Copilot: what it can do, where it fits, and how to introduce it without overwhelming employees. But while everyone was busy figuring out prompts and pilots, Microsoft was preparing something far bigger.

Agent 365 is the moment enterprise AI stops being a clever assistant and becomes a managed digital workforce.

There’s an important detail that wasn’t obvious at first: the A365 icon sits inside Microsoft’s AI Business Applications stack, the same family as Dynamics 365 and the Power Platform. What looked at first like a Modern Work / Office feature is actually positioned alongside enterprise-grade business applications.  

And they gave it the “365” name. When Microsoft attaches “365” to a product, it becomes part of the workplace operating system: SharePoint, Teams, Excel, Dynamics. These aren’t just tools, they’re the foundation of daily work. This isn’t accidental positioning. By putting agents in the 365 family, Microsoft is sending a clear message:

AI agents are not experiments anymore. They are part of the enterprise stack.

And this has huge implications for IT Ops, Security, CoE teams, and business leaders.

From scattered bots to a unified agent ecosystem

If you’ve worked with Copilot Studio or any of the early Microsoft agents, you know the experience hasn’t been consistent. Agents lived in different places, were created in different ways, and had different capabilities. Some behaved like chatbots, others like automations. A few acted like full digital workers, if you were brave enough to give them permissions.

Agent 365 is the first attempt to bring order to this chaos. Instead of agents scattered across the Microsoft ecosystem, there will be one place to see them, manage them, and govern them. Microsoft calls it the Monitoring Admin Center, where agents are treated like real operational entities.

For the first time, IT teams can:

  • see all agents in one view
  • assign each agent its own permissions
  • scale them independently
  • isolate them if needed
  • monitor activity
  • apply governance policies the same way they do for users

This is the shift organisations have been waiting for. AI is no longer a set of small tools you sprinkle across teams. It becomes a proper enterprise layer, where you can administer, secure, and scale agents.

Copilot vs Agent 365

What’s the difference? A useful way to think about it:

  • Copilot is the interface where people talk to AI.
  • Agents are the products that actually perform the work.

Copilot will remain the interaction layer used across Microsoft products, but the deeper AI ecosystem (the one that will actually power work) is Agent 365.

This means that agents are moving into infrastructure territory.

A unique identity for every agent changes everything

The most important and least understood part of the announcement is Microsoft Entra Agent ID.

Until now, most AI agents have run under user identities, app registrations, or custom service accounts. Agent ID introduces a new, first-class identity type in Entra that is purpose-built for agents.

With Agent ID, an enterprise agent can finally have:

  • its own identity in Entra
  • its own assigned permissions instead of inheriting a user or app profile
  • its own access and governance policies, including Conditional Access
  • its own lifecycle management (creation, assignment, decommissioning)
  • its own auditability, with logs that show what the agent did and when
  • its own compliance surface, so organisations can apply the same Zero Trust, monitoring and oversight they use for other identities

In short: Agent ID gives agents a proper identity layer, separate from people and apps, and creates the foundation for secure, governed, enterprise-grade agentic automation.

You’re no longer tying a bot to a user’s permissions and hoping nothing goes wrong. You can now manage a digital worker with the same clarity as a human one, without the HR paperwork.

For IT Ops and Security teams, this is the part that makes scalable AI realistic. Without clear identity, real autonomy is impossible. Agent ID is the foundation for everything Microsoft wants to build next.
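
As a purely conceptual illustration (this is not the Entra Agent ID schema or API, just a sketch of the properties the announcement describes), an agent identity bundles exactly these things: its own permissions, its own lifecycle state, and its own audit trail.

    from dataclasses import dataclass, field

    @dataclass
    class AgentIdentity:
        """Illustrative only; not the Entra Agent ID schema."""
        agent_id: str
        display_name: str
        permissions: list[str]           # granted to the agent itself, not inherited
        lifecycle_state: str = "active"  # created -> assigned -> decommissioned
        audit_trail: list[str] = field(default_factory=list)

        def act(self, action: str) -> bool:
            allowed = action in self.permissions
            self.audit_trail.append(f"{action}: {'allowed' if allowed else 'denied'}")
            return allowed

    invoice_agent = AgentIdentity(
        agent_id="agent-0042",
        display_name="Invoice Triage Agent",
        permissions=["read:invoices", "write:erp-drafts"],
    )
    invoice_agent.act("read:invoices")   # allowed, and logged
    invoice_agent.act("delete:vendor")   # denied, and logged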

Tools turn agents into real digital workers

Early AI agents were impressive but limited. They could answer questions or summarise documents, but they couldn’t take action beyond that.

Agent 365 changes that by introducing a real tool model: secure, isolated, pre-defined capabilities that agents can invoke to complete tasks on your behalf.

This brings a new class of role-specific agents. Some use cases we expect to see soon:

  • An agent with invoice-reading capabilities can take on routine finance tasks.
  • An agent that can post into your ERP can handle basic accounting work.
  • An agent that can update your CRM can manage SDR-level activities.

In other words: your business systems stay the same, but what your agents can do inside them expands dramatically.

The tools define the scope of work, and the governance layer defines the boundaries.
Once those two connect, something significant happens:

AI stops being a helper and becomes a decision-maker. That’s why companies need structure, identity, and controls before they deploy anything serious. And this is exactly what Agent 365 provides.
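
A minimal sketch of that idea, with made-up tool names and policy numbers: the tool set defines what the agent can do, and a governance policy decides when the work must escalate to a human.

    # Both the tool names and the policy numbers are invented for illustration.
    TOOLS = {
        "read_invoice": lambda inv: {"supplier": inv["supplier"], "amount": inv["amount"]},
        "post_to_erp":  lambda inv: f"posted draft entry for {inv['supplier']}",
    }

    POLICY = {"post_to_erp": {"max_amount": 5000}}  # the governance boundary

    def invoke(tool: str, invoice: dict):
        if tool not in TOOLS:
            raise PermissionError(f"'{tool}' is not a granted capability")
        limit = POLICY.get(tool, {}).get("max_amount")
        if limit is not None and invoice["amount"] > limit:
            return "escalate to human"  # outside the boundary, a person decides
        return TOOLS[tool](invoice)

    print(invoke("post_to_erp", {"supplier": "Acme", "amount": 1200}))   # agent acts
    print(invoke("post_to_erp", {"supplier": "Acme", "amount": 90000}))  # escalates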

Microsoft will ship out-of-the-box agents

Microsoft doesn’t hide the direction anymore: they’re building their own out-of-the-box agents for major business functions.

Expect products like:

  • Sales Development Agent
  • HR Lifecycle Agent
  • Customer Service Agent
  • Finance/ERP Agent
  • Fabric Data Agent
  • Security and Compliance Agents

These will be real, supported Microsoft products. And they will almost certainly be licensed per agent, just like every other 365 workload.

This will raise important organisational questions:

"How many agents do we need?"

"Which roles replace manual steps with agents first?"  

"Should we start with one per department or buy bundles?"  

"What does ROI look like at the agent level?"

Licensing will likely become more complex, but the value will grow even faster for organisations that introduce agents deliberately, not reactively.

Where businesses will see early wins

In the next 12 months, the most realistic value will come from processes that already run inside Microsoft systems and already require repetitive, structured work:

  • Sales teams cleaning pipelines
  • Finance teams processing invoices
  • Customer service teams triaging cases
  • Data teams preparing datasets
  • HR teams onboarding people

Anywhere a human currently moves structured data between structured systems, an agent will do it faster, cleaner, and more consistently.

And the mistakes to avoid

Agent 365 brings enormous potential, but, like every major Microsoft release, it also comes with predictable, avoidable traps.  

As with every AI initiative, readiness is key. Before you commit to licences, tools or departmental rollouts, make sure you’re not walking into the same issues that slow organisations down every time a new solution arrives.

  • Don’t skip process mapping.
    Automating a broken process only creates a faster, more expensive version of the same problem. Use frameworks like MEDDIC or Value Architecture Design to map the journey first, so you’re automating a clean, well-understood workflow instead of scaling a broken one.
  • Don’t buy more agents than your teams can adopt.
    Start small. A controlled pilot with a handful of agents will always outperform a large purchase no one is ready for.
  • Don’t roll out everything at once.
    Introduce agents gradually so users have the space to understand how each one fits into their workflow before the next arrives.
  • Don’t underestimate data quality.
    Agents make decisions based on the information you give them. If your CRM, ERP or SharePoint data is inconsistent, the agent’s actions will be too.
  • Don’t assume governance will “figure itself out.”
    Without clear ownership, shadow agents, over-permissioned tools and ambiguous access boundaries will appear quickly.

When these pitfalls are ignored, the same uncomfortable questions always come back:

“Why isn’t anyone using what we bought?”

“Why isn’t this delivering the value we expected?”

“How did this agent end up with access to everything?”

The organisations that succeed aren’t the ones who rush. They’re the ones who pause long enough to define clean data, clear ownership, intentional design and a rollout plan that respects how humans, not machines, adapt to new ways of working.

The future of work will be humans + agents

Agent 365 is the moment Microsoft finally aligns its tools, its platform, and its vision:
every person will work through Copilot, and every system will be operated by agents.

The question for organisations now is simple:

Are you preparing for a future where digital labour is part of your workforce, or will you be retrofitting governance after the agents have already arrived?

We can help with the clarity, structure, and safe adoption you’ll need. Join our free webinar where we'll walk you through how to get AI-ready in 90 days.  

How to get ready for agentic AI
November 26, 2025
10 mins read

TL;DR

90% of organisations aren’t ready for agentic AI because their people, processes, and technology are still fragmented. Before building copilots or custom agents, companies must become data-first organisations: establishing strong governance, integrating foundational systems, and replacing manual processes with structured, interconnected workflows. With agentic blueprints and a proven methodology grounded in value, architecture, and design patterns, Microsoft-centric organisations can gain AI value faster and more safely.

90% of organisations aren’t AI-ready. Are you?

Everyone is talking about AI agents — or copilots — that summarise meetings, answer questions, trigger workflows, and automate the routine.

Most organisations believe they’re “ready for AI” because they use ChatGPT or Copilot. But agentic AI only becomes valuable when the foundations are in place — governed data, consistent processes, and interconnected systems.

In reality, around 90% of organisations aren’t there yet. Their data lives in silos, processes run on spreadsheets, and collaboration happens in unstructured ways that AI cannot interpret.

So before you rush to “add AI,” stop for a moment. Is your organisation truly ready for an agentic AI strategy, or are you still running on Excel?

From automation to augmentation

Many companies start here: Sales teams track opportunities in Excel. Documents live in personal folders. Collaboration happens over email or private Teams chats.

It works until it doesn’t. Because when you ask, “Can we plug AI into this?” the answer depends entirely on how your work is structured.

For AI to deliver value, your processes and data need to be consistent and governed. If information sits in silos or moves around without clear ownership, no Copilot will sort it out for you.

Step 0: Look at how you work

AI can only operate within the workspace it lives in. Before talking about technology, ask a simple question: How does your team actually get work done every day?

  • Where do we keep track of our work, such as opportunities, sales/purchase orders, contacts, customers, and contracts?
  • Who updates them?
  • How are documents stored, shared, and versioned?

If the answer includes “Excel,” “someone keeps a list,” or any other manual step, that’s not AI-ready. Manual tracking makes automation impossible and governance invisible.

When we assess readiness, we start by examining your value patterns: how your teams create value across people, process, and technology. These patterns reveal which activities need to be structured into systems that log every action consistently. Only then can an agent analyse, predict, and assist.

Microsoft’s modern workspace is AI-ready by default

Microsoft’s modern workspace, including SharePoint, Teams, Loop, Dataverse, and Copilot, is already agent-ready by design.

Chat, files, and meeting notes create structured, secure data in the cloud. When your team works in this environment, an AI agent can see what’s happening, and safely answer questions like:

  • “What was decided in the last project meeting?”
  • “Show me invoices from vendors in Q3.”
  • “Which opportunities need follow-up this week?”

With even basic tools, you can achieve impressive results. A simple SharePoint automation can pull in invoices, let AI read and structure them into columns (supplier, invoice number, amount), and feed the data into Power BI, all in an afternoon.
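
As a rough sketch of that afternoon project: the snippet below turns raw invoice text into the three columns mentioned above and writes a CSV that Power BI could consume. A simple regex stands in for the AI-extraction step, and the sample invoices are invented for illustration.

    import csv
    import re

    # Stand-in for the AI step; in practice an AI model in the automation
    # would extract these fields from the incoming invoice.
    def extract_fields(invoice_text: str) -> dict:
        return {
            "supplier": re.search(r"Supplier:\s*(.+)", invoice_text).group(1),
            "invoice_number": re.search(r"Invoice\s*#\s*(\S+)", invoice_text).group(1),
            "amount": float(re.search(r"Total:\s*([\d.]+)", invoice_text).group(1)),
        }

    invoices = [  # invented sample data
        "Supplier: Acme Ltd\nInvoice # INV-1042\nTotal: 1250.00",
        "Supplier: Contoso\nInvoice # INV-1043\nTotal: 980.50",
    ]

    # Write structured rows to a CSV that Power BI (or Dataverse) can consume.
    with open("invoices.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["supplier", "invoice_number", "amount"])
        writer.writeheader()
        writer.writerows(extract_fields(text) for text in invoices)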

Step 1: governance first, AI second

When someone logs into Copilot and asks a question, Copilot will find everything they can access. That’s both the promise and the risk: without strong data loss prevention, AI may surface information you never intended to expose.

This is why governance is the first pillar of any AI readiness strategy, and it’s the foundation of our value–architecture–design pattern methodology. Without clear ownership, access, and data controls, no agent can operate safely.

When we run readiness audits, the first questions aren’t about models or copilots — they’re about access and accountability:

  • Who owns each SharePoint site?
  • Who has edit rights?
  • Is sensitive data over-shared across Teams?
  • What happens if a site owner leaves the company?

The good news is that Microsoft’s audit tools automatically flag ownership gaps, oversharing, and risky access patterns so you can act before an AI ever touches the data.

Step 2: structure your business data

Even with strong governance, your data still needs structure. AI can read unstructured notes and spreadsheets, but it can’t extract meaningful insights without a consistent data model.

This is where Microsoft's data ecosystem helps. Their tools connect sales, service, finance, and other processes into a single, governed data layer. Every record — contact, invoice, opportunity — sits in one place with shared logic.

Structuring business data turns your architecture patterns into reality. When CRM, ERP, SharePoint, and collaboration systems are interconnected, you create the unified backbone that agentic workflows rely on.

And that’s where agentic AI truly begins. You can build agents that review opportunities, identify risks, and recommend next steps based on the clean, consistent data flowing through Microsoft 365.

Step 3: from readiness to reality

Once the foundation is solid, the strategy becomes clear:

  1. Audit your workspace and permissions.
  2. Standardise how data is collected and stored.
  3. Govern collaboration and access through Teams and SharePoint admins.
  4. Enable your first agent — a Copilot, chatbot, or custom agent using Copilot Studio — to assist in everyday processes.

From there, you can start to ask more ambitious questions:

  • Which of our processes could an agent safely automate?
  • How do we combine Copilot with custom workflows to handle domain-specific tasks?
  • What guardrails do we need so that AI doesn’t just act, but acts responsibly?

You also don’t have to start from scratch. Our proprietary baseline agents for Microsoft ecosystems cover common enterprise scenarios and act as accelerators, reducing implementation time and giving you a proven foundation to tailor AI behaviour to your organisation.

Want to learn more? Come to our free webinar.

The right question isn’t “how fast”, it’s “how ready”

Every organisation wants to move fast on AI. But the real differentiator isn’t how early you adopt, it’s how prepared you are when you do.

For small teams, AI readiness can be achieved in weeks. For large, established enterprises, it’s a transformation touching governance, data models, core systems and ways of working.

So before asking “How soon can we deploy an agent?” ask instead:

  • “Would our current systems help or confuse the AI?”
  • “Can we trust the AI to find, but not expose, our data?”
  • “Do our people know how to work with agents, not against them?”

That’s what an agentic AI strategy really is. Not just technology, but the deliberate design of trust, control, and collaboration between humans and AI.

Before you deploy agents, build the trust they need to work

AI adoption is no longer about experimentation. It’s about building the right foundations — governance, structure, and readiness — so your first agents don’t just answer questions, but deliver real, secure value.

Agentic AI starts with readiness. Once your systems, data, and people are ready, intelligence follows naturally.

We help Microsoft-centric organisations move from AI-curiosity to real impact, creating environments where AI agents can operate safely, efficiently, and intelligently.

Join our free 45-minute webinar — we’ll walk you through how to get AI-ready in 90 days.

How does Microsoft Fabric licensing actually work?
November 12, 2025
10 mins read

TL;DR

Microsoft Fabric doesn’t work like a simple “per-user licence”. Instead, you buy a capacity (a pool of compute called Capacity Units, or CUs) and share it across all the resources you use inside Fabric (lakehouses, SQL endpoints, notebooks, pipelines, Power BI reports, etc.) for data engineering, warehousing, dashboards, and analytics. On top of this, you pay separately for storage, and for certain user licences if you publish dashboards. The smart path: start small, track your actual consumption, and align capacity size and purchasing model (pay-as-you-go vs reserved) with your usage pattern, so Fabric becomes cost-efficient rather than a budget surprise.

What makes Fabric’s pricing model different

If you’re used to licensing models for analytics tools, such as per-user dashboards or pay-per-app, Fabric introduces a different mindset. It’s no wonder many teams are confused and asking questions like:

“Will moving our Power BI set-up into Fabric make our costs spiral?”
“We’re licensing for many users. Do we now have to keep paying per user and capacity?”
“What happens when our workloads spike? Will we pay through the roof?”

Reddit users are already asking:
“Can someone explain me the pricing model of Microsoft Fabric? It is very complicated …”

So yes, it’s new, it’s different, and you should understand the mechanics before you start using it.

The basics: what you must understand

Here are the key building blocks:

Capacity = compute pool

Fabric uses “capacity” measured in CUs (Capacity Units). For example, an “F2” capacity gives you 2 CUs, “F4” gives 4 CUs, and so on up to F2048.

That pool is shared by all workloads in Fabric: data flows, notebooks, lakehouses, warehousing, and dashboards.

Important: for most functionality you’re not buying a licence per user, you’re buying compute capacity. It’s also important to note that on a pay-as-you-go model you pay whenever the capacity is turned on, whether you are actively using it or not. You’re billed for the time (in minutes) the capacity was running, regardless of whether you used all of your CUs or none of them.
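
A quick illustration of what “billed for time on, not CUs used” means in practice. The hourly rate below is a placeholder, so check the Azure price list for your region:

    # You pay for the minutes the capacity is on, not for the CUs you used.
    RATE_PER_CU_HOUR = 0.18  # USD, illustrative only; varies by region

    def payg_cost(cu: int, minutes_on: int) -> float:
        return cu * (minutes_on / 60) * RATE_PER_CU_HOUR

    # An F4 (4 CUs) left on for an 8-hour workday costs the same whether it
    # sat idle or ran flat out:
    print(round(payg_cost(cu=4, minutes_on=8 * 60), 2))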

Storage is separate

Storage (via OneLake) is billed separately per GB. The capacity you buy doesn’t include unlimited free storage.
There are additional rules, e.g. free mirroring storage allowances depending on capacity tier.

User licences still matter, especially for dashboard publishing

If you use Fabric and you want to publish and share dashboards via Power BI, you’ll still need individual licences (for authors/publishers) and in certain cases viewers may need licences depending on capacity size.

So compute + storage + user licences = the full picture.

Purchasing models: pay-as-you-go (PAYG) vs reserved

  • Pay-as-you-go: You pay for the capacity per hour (minimum one minute) as you run it, and you can scale up/down. Good if usage is variable.
  • Reserved capacity: You commit to a capacity size for a year (or more) and get ~40% discount—but you pay whether it’s used or not. Good if your workloads are steady.
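
A back-of-the-envelope way to compare the two models, assuming the roughly 40% discount above: reserved starts winning once the capacity is on for about 60% of the month, which is also why the checklist later in this post suggests revisiting reservations at around 60% utilisation.

    RESERVED_DISCOUNT = 0.40  # approximate, as above

    def cheaper_option(hours_on_per_month: float, hours_in_month: float = 730) -> str:
        payg = hours_on_per_month / hours_in_month  # relative PAYG cost
        reserved = 1 - RESERVED_DISCOUNT            # relative reserved cost
        return "reserved" if reserved < payg else "pay-as-you-go"

    print(cheaper_option(300))  # on ~41% of the time -> pay-as-you-go
    print(cheaper_option(550))  # on ~75% of the time -> reserved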

How to pick the right model and avoid common mistakes

1. Track your workload and consumption before committing

Because you’re buying a pool of compute rather than per-user seats, you need to know how many jobs, how many users, how many refreshes, how much concurrency you’ll have.

For example: if you pick a small capacity and your workloads go beyond it, Fabric may apply “smoothing” or throttle you. So start with PAYG or a small capacity, and track consumption for 60-90 days.

2. Ask the right questions up front

  • “Will our usage spike at month-end or quarters?”
  • “How many users are viewing dashboards vs authoring?”
  • “How many data transformation jobs run concurrently?”
  • “Can we pause capacity on nights/weekends or non-business hours?”

If you don’t answer these, you risk buying too much (wasted spend) or too little (performance issues).

3. Beware the “viewer licence” threshold

There’s a key capacity size (F64) where things change: for capacities below F64, users may still need separate Power BI Pro licences to consume reports or dashboards; at F64 and above, viewers may not need individual licences.
If you migrate from an older Power BI model into Fabric without checking this, you could pay more.

4. Storage charges can sneak in

Large datasets, duplicates, backup snapshots, and deleted workspaces (retention periods) all consume storage. Storage costs may be modest ($0.023 per GB/month as of November 2025) compared to compute, but with volumes they matter.

Also, networking fees (data transfer) are “coming soon” as a Fabric cost item.

5. Don’t treat capacity like a fixed server

Because Fabric allows bursting (temporarily going above your base capacity) and smoothing (spreading load), your cost isn’t purely linear with peak loads. But you still pay for what you consume, so design workloads with efficiency in mind (incremental refresh, partitioning, avoiding waste).

A simplified checklist for your team

  • Audit actual workloads: data jobs, refreshes, user views.
  • Choose initial capacity size: pick a modest SKU (F4, F8) for pilot.
  • For smaller solutions (below F64) it might make sense to combine Fabric PAYG capacity with Power BI Pro licences to get the best performance and price.
  • Run PAYG for 60-90 days: monitor CUs used, storage, spikes.
  • Analyse when you hit steady-state usage (>60% utilisation) → consider reserved capacity.
  • Map user roles: who authors dashboards (needs Pro/PPU licence), who views only (maybe Free licence depending on capacity).
  • Optimise data architecture: incremental loads, partitioning, reuse instead of duplication.
  • Monitor monthly: CUs consumed, storage growth, unused capacity, aborted jobs.
  • Align purchase: Scale up/down capacity ahead of known events (e.g., month-end), pause non-prod when idle.  

Frequently-asked questions  

“Will moving our Power BI setup into Fabric make our costs spiral?”
Not necessarily — if you migrate smartly. If you move without revisiting capacity size/user licences you could pay more, but if you use this as an opportunity to right-size, share compute across workloads, optimise refreshes and storage, you might actually get better value.

“Do I need a user licence for every viewer now?”
It depends on your capacity size. If your capacity is below certain thresholds, viewers may still need licences. At F64+ capacities you may allow free viewers for published dashboards. Monitor your scenario.

“What happens if we only run analytics at month-end? Can we scale down otherwise?”
Yes, with PAYG you can scale down or pause capacity when idle, hence paying only when needed. With reserved you lock in purchase. Choose based on your workload patterns.

“Is storage included in capacity cost?”
No, storage is separate. You’ll pay for OneLake storage and other persistent data. If you have massive data volumes, this needs budgeting.

Get Fabric licensing right from the start

While Microsoft Fabric’s licensing and pricing model may feel unfamiliar compared to traditional per-user or per-service models, it offers substantial flexibility and potential cost-efficiency if you approach it intentionally. Buy capacity, manage storage, plan user licences, and track your consumption closely in the early phase.

Teams that treat this as an after-thought often get surprised by bills or performance constraints. Teams that design for efficiency from the start get a shared analytics and data-engineering platform that truly scales.

Licensing clarity is the first step to a smooth Fabric journey.
Our experts can assess your current environment, model different licensing scenarios, and build a right-sized plan that keeps costs predictable as you scale.

Book a free assessment.

How to start using Microsoft Fabric for data and analytics?
November 5, 2025
10 mins read

TL;DR

Use our step-by-step guide to start using Fabric safely and cost-effectively, from setting up your first workspace and trial capacity to choosing the right entry path for your organisation. Build a lakehouse, connect data, create Delta tables, and expand gradually with pipelines, Dataflows, and Power BI. Whether you’re starting from scratch or integrating existing systems, it shows how to explore, validate, and adopt Fabric in a structured, low-risk way.

How to start using Fabric safely and cost-effectively

Before committing to a full rollout, start small. Activate a Fabric trial or set up a pay-as-you-go capacity at a low tier (F2–F8). These are cost-effective ways to explore real workloads and governance models without long-term commitments.

Begin by creating a workspace. From here, you can take one of two paths: starting fresh or integrating with what you already have.

1. Starting fresh (greenfield)

If you don’t yet have a mature data warehouse or analytics layer, Fabric lets you build the essentials quickly with minimal infrastructure overhead.
You can:

  • Create a lakehouse in your workspace
  • Import sample data (e.g. the ready-to-use Contoso sample) or upload Excel/CSV files
  • Explore them with SQL or notebooks

This gives you a safe sandbox to understand how Fabric’s components interact and how data flows between them.

2. Integrating with what you already have

Most organisations already have data systems — SQL databases, BI tools, pipelines, or on-prem storage. Keep what works; Fabric can extend it.
You can:

  • Use Dataflows Gen2 or pipelines to ingest and transform data from existing sources
  • Create OneLake shortcuts to reference external storage
  • Bring in exports or snapshots (for example, CRM tables or logs)
  • Use Fabric as an analytical or orchestration layer on top of your current systems

This hybrid approach lets you test Fabric on real data without disrupting production systems, helping you identify where it delivers the most value before scaling further.

Next steps once you’re up and running

After choosing your entry path, expand iteratively. Fabric rewards structure, not speed.

Add ingestion and transformation

Continue shaping data with Notebooks, Dataflows Gen2 or pipelines, schedule refreshes, and test incremental updates to validate performance.

Expose for analysis

Create a warehouse or semantic model, connect Power BI, and check performance, permissions, and security. Involve your Power BI administrators early — Fabric changes how capacities, roles, and governance interact.

Introduce real-time scenarios

Connect streaming sources, create real-time tables or queries, and trigger alerts or automated actions using activators.

Advance to AI and custom workloads

Train and score models in notebooks, or use the Extensibility Toolkit to build custom solutions integrated with pipelines.

Govern, monitor, and iterate

Apply governance policies, monitor cost and performance, and use CI/CD with Git integration to manage promotion across environments and maintain auditability.

Core Fabric building blocks and how to use them

Lakehouses & delta tables

Lakehouses in Fabric combine data lake flexibility with analytic consistency. Under the hood, Fabric stores everything in Delta Lake tables, which handle updates and changes reliably without breaking data consistency.

You can ingest raw files into lakehouse storage, define structured tables, and then query them with SQL or Spark notebooks. Use delta features to handle changes and versioning.
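
A minimal notebook sketch of that flow, assuming you are inside a Fabric notebook (where the spark session is pre-created) and have uploaded a CSV to the lakehouse’s Files area; the path and column names are placeholders:

    # Runs in a Fabric notebook, where a `spark` session is already provided.
    # The path points at a CSV uploaded to the lakehouse's Files area.
    df = (spark.read
          .option("header", True)
          .option("inferSchema", True)
          .csv("Files/raw/invoices.csv"))

    # Persist it as a managed Delta table; Delta handles updates and versioning.
    df.write.format("delta").mode("overwrite").saveAsTable("invoices")

    # Query it straight back with SQL.
    spark.sql("""
        SELECT supplier, SUM(amount) AS total
        FROM invoices
        GROUP BY supplier
    """).show()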

Pipelines & Dataflows

Fabric includes pipeline orchestration similar to Azure Data Factory. Use pipelines (copy, transformation, scheduled) for heavier ETL/ELT workloads.  

Use Dataflows Gen2 (Power Query–style) for lighter transformations or data prep steps. These can be embedded or called from pipelines.  

If you prefer a pro-code/code-first approach, you can use PySpark, Spark SQL, or even plain Python to transform and analyse your data in a Notebook.

Together, they let you build end-to-end ingestion workflows, from source systems into lakehouses or warehouses.

Warehouses & SQL query layer

Once data is structured, you may want to provide a SQL query surface. Fabric lets you spin up analytical warehouses (relational, MPP) to serve reporting workloads.

These warehouses can sit atop the same data in your lakehouse, leveraging delta storage and ensuring you don’t duplicate data.

Real-time intelligence

One of Fabric’s differentiators is built-in support for streaming and event-based patterns. You can ingest event streams, process them, store them in real-time tables, run KQL queries, and combine them with historical datasets.

You can also define activators or automated rules to trigger actions based on data changes (e.g. alerts, downstream writes).

Data science & AI

Fabric includes native support for notebooks, experiments (MLflow), model training, and scoring. You can ingest data from the lakehouse, run Python/Spark in notebooks, train models, register them, and score them at scale.

Because the same storage underlies all workloads, you don’t need to copy data between ETL, analytics, and AI layers.
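
A minimal train-and-score sketch, assuming a Fabric notebook environment where scikit-learn and MLflow are available; the data here is synthetic and the model deliberately simple:

    import mlflow
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    mlflow.autolog()  # records the run as an experiment automatically

    # Synthetic stand-in for lakehouse data, for illustration only.
    X, y = make_classification(n_samples=500, n_features=8, random_state=42)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print("holdout accuracy:", model.score(X_test, y_test))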

Extensibility & workloads

For development teams or ISVs, Fabric supports custom workload items, manifest definitions, and a DevGateway. Microsoft provides a Starter-Kit that helps you scaffold a "HelloWorld" workload to test in your environment.

You can fork the repository, run local dev environments, enable Fabric developer mode, and build custom apps or tools that operate within Fabric’s UI.

Common scenarios and example workflows

Speeding up Power BI reports
Move slow or complex dataflows into a lakehouse, define delta tables, and connect Power BI directly for faster, incremental refreshes.

Real-time monitoring
Ingest IoT or application logs into real-time tables, run KQL queries to detect anomalies, and trigger automated alerts or actions as events occur.

Predictive analytics
Use lakehouse data to train and score models in notebooks, then surface results in Power BI for churn, demand, or risk forecasting — all within Fabric.

Custom extensions
Build domain-specific tools or visuals with the Extensibility toolkit and integrate them directly into Fabric’s workspace experience.

Best practices and things to watch out for

Data discipline matters — naming, ownership, and refresh planning remain essential. Start small and build confidence. Begin with one or two use cases before expanding.  

Treat migration as iterative; don’t aim to move everything at once. Sync with your BI and governance teams early, as changes in permission and capacity models affect all users.  

Use Microsoft’s Get started with Microsoft Fabric training. It walks you through each module step by step. And take advantage of the end-to-end tutorials covering ingestion, real-time, warehouse, and data science flows.

Fabric delivers the most value when aligned with your goals. Our team can help you plan, pilot, and scale it effectively — get in touch to get started.

Resources:

https://learn.microsoft.com/en-us/training/paths/get-started-fabric/


Recent events

Stay in the loop with industry-leading events designed to connect, inspire, and drive meaningful conversations.

VisualLabs is an official EPPC Silver Sponsor
June 16, 2025

The Unified Enterprise
April 1, 2025

The Copilot Blueprint: Best Practices for Business AI
February 25, 2025

Expert-led webinars

Gain valuable insights from top professionals through live and on-demand webinars covering the latest trends and innovations.

Watch out for these if you're managing field work with an EV fleet
May 21, 2025
30 min watch

Leveraging Generative AI and Copilots in Dynamics CRM
Apr 1, 2025
49 min 15 sec

Accelerating sustainability with AI - from reporting to green value creation
Apr 1, 2025
46 min 23 sec

How Dallmayr Hungary became the digital blueprint of the Group
Apr 2, 2025
38 min 32 sec

The future of digital transformation - key trends and themes for leadership
Apr 2, 2025
41 min 45 sec

Real results

See how companies like you have succeeded using our solutions and get inspired for your own journey.

ZNZ Preventive Technologies
5 min read
Dec 8, 2025

How ZNZ Preventive Technologies has transformed its field service operations through digitalisation

Kontron
5 min read
May 12, 2025

How Kontron aligned their global operations with data-driven decision-making

Dallmayr Hungary
May 12, 2025

How Dallmayr Hungary increased operational efficiency by 30% and became the digital blueprint of the group

Tools & Guides

Access helpful eBooks, templates, and reports to support your learning and decision-making.

Improve First-Time Fix Rates In The Field

Where and How to Get Started When Digitising Field Operations

Get the guide

Ready to talk about your use cases?

Request your free audit by filling out this form. Our team will get back to you to discuss how we can support you.

Stay ahead with the latest insights
Subscribe to our newsletter for expert insights, industry updates, and exclusive content delivered straight to your inbox.