The lab

Discover. Learn. Innovate.

A collection of valuable insights, real success stories, and expert-led events to keep you informed.

Insights & ideas

Stay ahead with expert articles, industry trends,
and actionable insights to help you grow.

The biggest mistakes companies make when implementing agentic AI
January 30, 2026
10 mins read

Read blog

TL;DR


Most agentic AI initiatives fail not because of the technology, but because organisations try to use AI to compensate for weak data, broken processes, misaligned behaviours, and unclear ownership. Common mistakes repeat across maturity stages: assuming AI will clean up chaos, underestimating change management, treating PoCs as proof of value, using rigid delivery models, automating unstable processes, and applying governance that either blocks learning or erodes trust.  The companies that succeed treat agentic AI as a maturity journey, stabilising processes first, aligning people and incentives, designing for iteration, and using governance to enable safe scaling.

Agentic AI has moved fast from hype to experimentation. Most organisations now have at least one agent running somewhere; a Copilot, a workflow assistant, a triage bot, or a proof of concept built on Power Platform or Azure.

And yet, months later, many leaders are left asking the same question:

What did this actually change for the business?

The issue is rarely the technology. The biggest mistakes we’ve seen are structural, organisational, and related to AI readiness. They tend to repeat themselves depending on where a company is in its agentic AI journey.

Below, we break down the most common mistakes by stage, explain why they are serious, and show how to avoid them.

1. Planning stage: You’re assuming AI can fix chaos

Believing that AI will compensate for poor data and unclear processes is one of the most common mistakes we see.

Assumptions often sound like this:
“Once we add AI, things will become cleaner and smarter.”

In reality, agentic AI amplifies whatever it touches. If your data is inconsistent, fragmented, or manually maintained (for example in spreadsheets), the agent will not magically improve it. It will inherit the same confusion — and distribute it faster.

A useful rule of thumb is simple:

If you cannot make sense of your data, you cannot expect AI to make sense of it either.

Why this is serious

Dirty data usually isn’t just a data problem. It’s a process problem. Most “bad” data comes from manual handovers, workarounds, and parallel systems that exist because the underlying process never worked properly.

Trying to clean data without addressing the process that produces it only creates technical debt.

How to avoid it

  • Start with process ownership, not AI tooling.
  • Replace uncontrolled manual steps with systems designed to manage business processes (for example CRM instead of Excel-based tracking).
  • Accept that some long-standing habits will need to change. AI readiness requires organisational courage, not just technical effort.

2. Preparation stage: You’re underestimating the cost of change

Thinking that preparation is mainly a technical exercise is another common misconception.

Even when organisations recognise the need for better data and processes, they often underestimate the human side of the change. Long-standing “this is how we’ve always done it” behaviours don’t disappear just because a new platform is introduced.

Resistance often comes from experienced employees who feel their proven ways of working are being questioned.

Why this is serious

Agentic AI depends on consistent behaviour. If people continue to bypass systems or maintain shadow processes, agents will never see a complete or reliable picture of reality.

This is also where dissatisfaction can surface, usually coming from teams feeling stuck with outdated tools while leadership talks about “modern AI”.

How to avoid it

  • Be explicit about why change is necessary, not just what is changing.
  • Treat system adoption as a business initiative, not an IT rollout.
  • Measure progress not only in features delivered, but in behaviours changed.

3. PoC stage: You’re mistaking early success for scalability

Many teams overestimate the value of proofs of concept. Everyone wants PoCs. They are fast, visible, and relatively safe. And they often work impressively in isolation.

The problem is that PoCs are rarely designed to scale.

They prove that something can be done, not that it should be done or that it will survive real operational complexity.

Why this is serious

Many organisations get stuck in a loop of perpetual experimentation. Agents are demonstrated, praised, and quietly abandoned when they fail to deliver measurable impact.

This creates AI fatigue and scepticism long before the technology has had a fair chance.

How to avoid it

  • Define success in operational terms from day one.
  • Ask early: What process does this change? Who owns it? How will we measure improvement?
  • Treat PoCs as learning tools, not as evidence of ROI.

4. Pilot stage: You’re choosing the wrong delivery model

Many organisations default to waterfall-style delivery when building agentic AI. While the waterfall approach is effective in stable environments, it relies on fixed requirements defined upfront. Agentic AI rarely works like that.

The hardest part isn’t building the agent. It’s discovering what the agent needs to know, and that knowledge only emerges through use, feedback, and edge cases.

Why this is serious
Rigid delivery models make it difficult to adjust as reality surfaces. Teams end up locking in assumptions that turn out to be wrong, and pilots struggle to adapt.

How to avoid it

  • Accept that agentic AI requires continuous discovery.
  • Use iterative delivery to surface hidden assumptions early.
  • Involve people who are willing to be “confused on purpose” and ask uncomfortable questions about how work actually happens.

Agile ways of working are not free. They require time, discipline, and strong collaboration. But they significantly reduce the risk of building something that looks right and works nowhere.

5. Go-live stage: You’re trying to automate broken processes

Placing AI on top of unclear or fragile processes almost never works.  

A common question we hear is:
“Can’t we just add AI to what we already have?”

You can. But it is one of the fastest ways to stall ROI.

Agentic AI does not fix broken processes. It inherits them.

Why this is serious

Unclear ownership, undocumented exceptions, and tribal knowledge create unpredictable agent behaviour. This is often misdiagnosed as a model issue, when it is actually a process issue.

Employees may request AI support because something is painful, not because it is ready to be automated.

How to avoid it

  • Stabilise and simplify processes before introducing agents.
  • Make decision points, exceptions, and escalation paths explicit.
  • Treat agent design as an opportunity to improve the process, not just automate it.

6. Adoption and scaling stage: You’re getting governance wrong

Being either too restrictive or too loose with governance is a common mistake. Fear-driven governance can be as damaging as no governance at all.

If access is too restricted, domain experts cannot experiment, prompts never advance, and agents remain disconnected from real work. If governance is too loose, trust erodes quickly when something goes wrong.

Why this is serious

Agentic AI sits at the intersection of business and IT. Scaling requires both sides to work together. Without clarity on decision rights, accountability, and maintenance, adoption stalls.

How to avoid it

  • Define who owns agents, risks, and ongoing changes.
  • Enable domain experts to work with AI, not around it.
  • Treat governance as an enabler of trust, not a barrier to progress.

A final mistake: locking yourself into narrow assumptions

Across all stages, one pattern appears again and again: organisations arrive with strong hypotheses and only look for evidence that confirms them.

This often leads to missed opportunities. Teams optimise locally while overlooking areas with far greater potential impact.

Agentic AI rewards openness. The biggest gains often appear where organisations are willing to question long-held assumptions about how work should be done.

How to move forward safely

Introducing agentic AI is not a single decision. It is a maturity journey. The organisations that succeed are not the ones deploying the most agents, but the ones willing to clean up their foundations, rethink the processes agents will sit inside, align people and governance early, and stay open to the uncomfortable discovery that comes with making implicit work explicit.  

Want a clear view of where you are today and what to fix first?  

We can run a short AI readiness review and help you prioritise the changes that will make agentic AI safe, adoptable, and measurable.  

The skills AI agents need for business-critical work and how to build them
January 21, 2026
10 mins read

Read blog

TL;DR

Agentic AI is moving from experimentation to execution, and access to tools is no longer the limiting factor. What separates stalled pilots from real impact is whether agents have the operational skills to work safely inside real processes. These skills include clean inputs, process context, governance, and the ability to validate and escalate decisions. Defining them forces organisations to confront how work actually gets done, not how it’s documented. Scaling agentic AI therefore means scaling maturity and skills first, not rolling out more agents or licences.

2026: the year agent skills matter more than tools

2025 was the year of agents, with almost everyone experimenting. Copilots appeared in familiar tools, pilots were launched, and AI proved it could accelerate knowledge work.

In 2026, the focus will change. The organisations that move ahead won’t be the ones deploying more agents. They’ll be the ones investing in something far harder: developing the skills that allow agents to operate safely and reliably inside real business processes.

Access to AI is no longer the constraint. The real question has become:  

Do our agents have the skills required to execute business-critical work?

This is the fifth part of our series on agentic AI.

Why defining the right skills is harder than getting the tools

Until recently, AI adoption was limited by access to models, platforms, compute, or licences. That barrier has largely disappeared. Competitive LLMs exist both inside and outside the Microsoft ecosystem, and most enterprises already have Copilot or similar tools in place.

Yet many initiatives stall after the pilot phase. Not because the technology fails, but because organisations are unprepared for what agents actually need to be effective: clean inputs, defined processes, traceable decisions, and safe execution paths.

The LLMs and embedding frameworks are already here. The question is whether your operational maturity is.

What do we actually mean by “agent skills”?

Agent skills are not prompts or plugins. They are the operational capabilities that allow an agent to do real work without becoming a risk.

In practice, skills combine:

  • access to systems and artefacts,
  • context about domain rules and process history,
  • the ability to reason, execute, validate, and escalate,
  • and clear boundaries for governance and safety.

This is why the conversation has moved from “Do we have the right tools?” to “Do our agents have the skills to handle business-critical processes?”

Why terminal access matters for agents

Many agents today operate in chat-only mode. That is useful for summarising, drafting, or answering questions, but it quickly becomes a ceiling.

To unlock real capability, agents often need controlled terminal access. Modern agents behave less like chatbots and more like junior engineers:  

  • they need to inspect repositories,  
  • review work item history,  
  • understand configuration,  
  • and correlate changes across systems.

A typical example is enabling read access to Azure DevOps or GitHub using scoped Personal Access Tokens. Combined with Azure CLI or repository access, an agent can begin to understand how a process evolved, not just what it looks like today.

This is where agents become genuinely useful for IT Ops. With access to work item history, commits, and deployment context, an agent can investigate recurring issues, surface undocumented decisions, or even generate accurate documentation, which is something humans rarely have time to do consistently.
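To make this concrete, here is a minimal sketch of what read-only inspection can look like against Azure DevOps. The organisation, project, and repository names are placeholders, and it assumes a Personal Access Token scoped to read-only Work Items and Code access.

```python
# Minimal sketch: read-only inspection of Azure DevOps work items and commits
# using a scoped Personal Access Token (PAT). ORG, PROJECT, and REPO are
# placeholders; the PAT is assumed to have only Work Items (Read) and
# Code (Read) scopes.
import os
import requests

ORG = "your-org"          # placeholder
PROJECT = "your-project"  # placeholder
REPO = "your-repo"        # placeholder
PAT = os.environ["AZDO_PAT"]  # scoped, read-only token

auth = ("", PAT)  # Azure DevOps uses basic auth with an empty username
base = f"https://dev.azure.com/{ORG}/{PROJECT}/_apis"

# Recently changed work items via a WIQL query (read-only)
wiql = {"query": "SELECT [System.Id] FROM WorkItems ORDER BY [System.ChangedDate] DESC"}
result = requests.post(f"{base}/wit/wiql?api-version=7.0", json=wiql, auth=auth).json()
work_item_ids = [item["id"] for item in result["workItems"][:10]]

# Full details for those work items, including their change history context
items = requests.get(
    f"{base}/wit/workitems",
    params={"ids": ",".join(map(str, work_item_ids)), "api-version": "7.0"},
    auth=auth,
).json()

# Recent commits from the repository, to correlate code changes with work items
commits = requests.get(
    f"{base}/git/repositories/{REPO}/commits",
    params={"searchCriteria.$top": 20, "api-version": "7.0"},
    auth=auth,
).json()

for c in commits["value"]:
    print(c["commitId"][:8], c["comment"])
```

With visibility like this, the agent can explain how a process evolved before it is ever allowed to change anything.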

Why does agent development force uncomfortable discovery?

When you define what an agent would need to execute a workflow safely, you are forced to map the real process, not the idealised version.

Questions quickly surface:

  • Is there a template for this request?
  • Who validates this step?
  • Who is accountable for the decision?
  • What evidence do we keep that the process was followed?

These questions are often new, not because the process is new, but because it was never formalised. Agent development turns hidden assumptions into explicit requirements. That can be uncomfortable, but it’s also where real improvement starts.

This is why scaling agentic AI isn’t only about building agents but about upskilling them: designing them with the right decision rules, guardrails, and proof points so they can operate safely in the real world, not the imagined one.

What does “upskilling an agent” actually look like?

To upskill an agent, you don’t just retrain the model. You also need to progressively expand trust.

Typically, this starts with visibility rather than action. The agent is allowed to inspect and explain before it is allowed to execute. Validation and approval steps are introduced early, and only once the process is stable does automation expand.
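A rough illustration of what progressively expanding trust can look like in practice; the stage names, capability flags, and approval logic below are illustrative assumptions, not a specific product feature.

```python
# Illustrative sketch of progressive trust for an agent: capabilities are
# unlocked stage by stage, and anything outside the current stage requires
# human approval. Stage names and flags are hypothetical, not a product API.
from dataclasses import dataclass, field

@dataclass
class AgentTrustPolicy:
    stage: str = "observe"  # observe -> recommend -> act_with_approval -> act
    allowed_actions: set = field(default_factory=lambda: {"read", "summarise"})
    requires_human_approval: bool = True

POLICIES = {
    "observe": AgentTrustPolicy("observe", {"read", "summarise"}, True),
    "recommend": AgentTrustPolicy("recommend", {"read", "summarise", "draft"}, True),
    "act_with_approval": AgentTrustPolicy(
        "act_with_approval", {"read", "summarise", "draft", "update_ticket"}, True
    ),
    "act": AgentTrustPolicy("act", {"read", "summarise", "draft", "update_ticket"}, False),
}

def is_allowed(policy: AgentTrustPolicy, action: str, approved_by_human: bool) -> bool:
    """An action runs only if the stage permits it and, where required, a human approved it."""
    if action not in policy.allowed_actions:
        return False
    return approved_by_human or not policy.requires_human_approval

# An agent in the 'recommend' stage may draft a response after human review,
# but it may not update a ticket yet.
print(is_allowed(POLICIES["recommend"], "draft", approved_by_human=True))          # True
print(is_allowed(POLICIES["recommend"], "update_ticket", approved_by_human=True))  # False
```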

Agents often work surprisingly well with “just” access, but only if the underlying data and process history are clean. If DevOps tickets lack context or key decisions live only in meetings, the agent will reflect those gaps back to you.

In that sense, upskilling your agents and improving your processes happen together.

Why scaling always requires scaling agents’ skills first

Many organisations try to scale agent adoption by enabling more chat surfaces or rolling out more licences. While usage increases, outcomes rarely do.

Without skills, scaling leads to inconsistency and risk. Agents amplify whatever they are given: clean, structured processes scale well; messy, undocumented ones scale badly.

That’s why scaling requires skilling. Before organisation-wide adoption, you need:

  • reusable patterns,  
  • ownership clarity,  
  • observability,  
  • and human-in-the-loop controls.  

Otherwise, trust erodes quickly.

Domain experts are critical here. They are not just reviewers at the end, but co-builders of the skills agents rely on. This work must be iterative, because no one can fully predict how a process behaves until it is made explicit.

What does a realistic maturity path for agent adoption look like?

Successful adoption never starts with a large, end-to-end agent for a complex process. That approach almost always fails.

Instead, one capability is broken into smaller parts you can test and develop iteratively. Our team typically follows a simple cycle:

  • discovery of how the process really works,
  • hypothesis about what the agent should do next,
  • validation with real cases,
  • approval before expanding scope.

Short sprints and tight feedback loops are essential. Skeletons will come out of the closet: undocumented steps, unclear ownership, inconsistent execution. Treat this process as discovery, not failure.

How can you make agentic AI safer?

For end users, the goal is simple: they should be able to interact with AI safely. For IT Ops, safety comes from orchestration.

Process orchestration allows deterministic control where needed, dynamic agent behaviour where valuable, and human intervention where risk is high. It provides observability, auditability, and governance: the foundations that turn agentic AI from a demo into a dependable capability.

Where should you get started?

Start small, but deliberately.  

  1. Choose one process with real pain and clear boundaries.
  2. Then ask what skills an agent would need to execute it reliably.

That exercise alone will highlight what needs to change: templates, ownership, documentation, or process clarity.

The success of Agentic AI doesn’t just depend on the technology you use, but on how your organisation matures with it. And the organisations that treat 2026 as the year of skilling — not just tooling — will be the ones that move beyond pilots and build lasting capability.

Want to move from AI pilots to governed, orchestrated agent skills that deliver measurable impact? If you’re unsure what it takes to scale safely, we can run a free audit — get in touch.

How to improve the ROI of agentic AI
January 15, 2026
10 mins read

Read blog

TL;DR

Most agentic AI pilots fail to show ROI not because the technology is weak, but because they are built without clear processes, ownership, or measurable outcomes. Intelligent agents that sit beside broken or undocumented workflows can feel useful but rarely change results. Real value comes when implicit knowledge is made explicit, processes are stabilised, and agents are embedded into orchestrated workflows with governance. When organisations optimise for outcomes instead of intelligence, agentic AI becomes predictable, scalable, and measurable.

Why most pilots fail

Agentic AI has moved fast from hype to experimentation.

From autonomous agents through Copilots to digital workers, most organisations now have at least one agent running somewhere. It might summarise content, answer questions, triage requests, or support a workflow built with Power Automate.

And yet, a few months later, decision makers often ask a simple question:

What value did this actually deliver?

Too often, the answer is vague. The agent works. People like it. But the real business impact is difficult to prove. Not because agentic AI lacks potential but because most initiatives are built on the wrong foundations.

This article looks at why many agentic AI pilots struggle to deliver ROI, and what needs to change to turn experimentation into reliable delivery of business value.

This is the fourth part of our series on agentic AI.

Why do most agentic AI pilots look impressive but never show real ROI?

Because they optimise for intelligence, not outcomes. Many early agentic AI initiatives are designed to showcase what the technology can do. A smart agent that drafts responses, analyses text, or answers questions is genuinely useful.  

But usefulness alone doesn’t guarantee an actual return on investment. If the agent doesn’t change how work flows through the organisation, its impact remains local and limited.

Real ROI comes when agents are embedded into business processes with clear ownership and measurable outcomes.  

Without that connection, teams end up with intelligent tools that sit beside the work rather than transforming it. Productivity may improve slightly, but the underlying process remains unchanged, and so do the results.

What’s the biggest hidden blocker to scaling agentic AI?

Implicit knowledge. Every organisation relies on knowledge that isn’t written down.

  • Who really owns a process
  • Where data actually comes from
  • Which exceptions are acceptable and which ones trigger escalation

These things are “known” but rarely documented.

The problem is that people often can’t clearly articulate this knowledge when asked. Not because they don’t understand their work, but because experience blurs the line between what feels obvious and what needs to be explained. Inside one team, this usually works. For an AI agent, it doesn’t.

Why do AI agents behave unpredictably even with good prompts?

Because prompting can’t compensate for unclear processes. An AI agent doesn’t infer organisational context the way humans do. If instructions, boundaries, and decision logic aren’t explicit, the agent fills the gaps on its own: sometimes acceptably, sometimes not. This is often mistaken for a model problem, when in reality it’s a knowledge problem.

Agentic AI forces organisations to confront how much of their operation runs on assumptions. If that implicit knowledge isn’t surfaced and structured, it’s no surprise when an agent starts behaving inconsistently. It was never given a clear picture of the process it’s meant to support.

Designing agentic AI is closer to teaching than coding. You’re not just telling the system what to do, you’re explaining how work actually happens.  

If you can’t explain the process clearly enough that a grandmother could follow it, an AI agent won’t be able to follow it either.

That doesn’t mean over-documenting. It means being precise about what matters: the steps, the handovers, the decision points, the exceptions, and the limits. The clearer the process, the more predictable and valuable the agent becomes.
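One way to be precise about those steps, decision points, exceptions, and limits is to write the process down as structured data before any agent is built. A hypothetical sketch; the expense-approval example, field names, and thresholds are invented for illustration.

```python
# Hypothetical example of making an implicit process explicit before handing
# it to an agent: steps, owners, decision points, exceptions, and limits are
# written down as data. The expense-approval process and thresholds are invented.
process = {
    "name": "expense_approval",
    "trigger": "employee submits an expense claim",
    "steps": [
        {"step": "validate_receipt", "owner": "finance_assistant",
         "rule": "receipt attached and total matches line items"},
        {"step": "route_for_approval", "owner": "line_manager",
         "decision": "approve if total <= 500 EUR, else escalate"},
        {"step": "escalate", "owner": "finance_controller",
         "when": "total > 500 EUR or policy exception claimed"},
    ],
    "exceptions": ["missing receipt", "foreign currency", "duplicate claim"],
    "limits": {"agent_may_approve": False, "agent_may_draft_response": True},
    "success_measure": "claims processed within 3 working days",
}

# An agent grounded in a definition like this knows where it may act, where it
# must only draft, and when a human decision is required.
for step in process["steps"]:
    print(step["step"], "->", step["owner"])
```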

Can’t we just add AI to existing processes?

You can, but it’s one of the most common reasons ROI stalls.

Many organisations try to layer AI on top of processes that are already fragile. These processes often rely on workarounds, undocumented rules, and individual judgement. Adding an agent doesn’t fix those issues.

This is why employees frequently ask for AI help in areas that shouldn’t be automated yet. The request isn’t really about intelligence; it’s about pain. When you look closer, the real issue is usually missing ownership, unclear inputs, inconsistent data, or accumulated technical debt.

Agentic AI works best when the process it sits on is stable enough to support it. Otherwise, you’re automating confusion, and probably paying for it later.

What does good business process discovery look like for agentic AI?

It starts before any agent is built. Good discovery means being able to describe the business process in concrete terms:  

  • what triggers it,  
  • what systems are involved,  
  • who owns each step,  
  • where decisions are made,  
  • and how success is measured.  

This is harder than it sounds, especially because internal processes vary widely between organisations and teams.

Domain experts play a critical role here. They understand where the real pain points are, what expectations are realistic, and which edge cases matter. Without them, teams often build agents for the wrong problems or for processes that need fixing before automation makes sense.

In practice, AI readiness work — mapping processes, clarifying responsibilities, and making assumptions explicit — often delivers value on its own. It creates the conditions in which agentic AI can succeed.

How do we move from isolated agents to workflows that actually scale?

This is where process orchestration enters the picture.  

  • Isolated agents are good at individual tasks.  
  • Orchestrated workflows are what deliver business outcomes.  

Orchestration allows organisations to combine deterministic steps, where control and predictability matter, with AI-driven decisions where flexibility adds value.

In Microsoft-based environments, this often means using Power Automate to manage workflows, while agents contribute reasoning, classification, or decision support within that structure. Instead of asking whether an agent works, teams can measure whether the overall process performs better.
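As a rough sketch of that split between deterministic and AI-driven steps (not Power Automate itself; the classification helper and confidence threshold are assumptions), an orchestrated handler might look like this:

```python
# Rough illustration of orchestration: deterministic steps stay in the workflow,
# the agent contributes one bounded decision (classification), and low-confidence
# cases go to a human. classify_with_agent() and the 0.8 threshold are assumptions.
from dataclasses import dataclass

@dataclass
class Request:
    request_id: str
    text: str

def classify_with_agent(text: str) -> tuple[str, float]:
    """Placeholder for the agent call; returns (category, confidence)."""
    return ("billing", 0.92)

def handle(request: Request) -> str:
    # Deterministic step: validate input before any AI is involved
    if not request.text.strip():
        return "rejected: empty request"

    # AI-driven step: bounded to a single classification decision
    category, confidence = classify_with_agent(request.text)

    # Human-in-the-loop where risk is high (low confidence)
    if confidence < 0.8:
        return f"queued for human review (confidence {confidence:.2f})"

    # Deterministic step: routing stays predictable and auditable
    routes = {"billing": "finance-queue", "outage": "ops-queue"}
    return f"routed to {routes.get(category, 'general-queue')}"

print(handle(Request("REQ-001", "Invoice 4711 was charged twice")))
```

Because the workflow around the agent is deterministic, the team can measure the end-to-end outcome (resolution time, error rate) rather than only the quality of individual answers.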

This shift from task optimisation to outcome optimisation is where ROI starts to scale.

Why do waterfall-style AI projects fail so often?

Because agentic AI requires continuous learning, not fixed requirements. The hardest part of building agentic systems is uncovering what the agent needs to know and realising that you don’t know all of that upfront. Understanding improves through iteration, feedback, and encountering edge cases.

This is why forward-deployed engineers are so effective in agentic AI initiatives. Their role isn’t just implementation. It’s asking the naïve questions, surfacing assumptions, and forcing implicit knowledge into the open. In other words, they do the discovery work the agent itself cannot do.

How do governance and guardrails enable more autonomy?

Without governance, organisations keep agents small and disconnected because the risk feels too high. With well-designed guardrails, agents can safely access the systems and data they need, and nothing more.

Security by design doesn’t reduce autonomy; it enables it. When access, behaviour, and decision-making are observable and controlled, organisations can let agents operate closer to real business processes with confidence.

From experimentation to impact

Most agentic AI pilots don’t fail because the technology isn’t ready. They fail because the foundations aren’t.

When organisations invest in optimising processes, cleaning data, making implicit knowledge explicit, involving domain experts, and designing for orchestration and governance, agentic AI stops being a demo. It becomes real value.  

And that’s when ROI becomes measurable and repeatable.

We help organisations move from isolated AI experiments to orchestrated, governed agentic workflows that deliver real business impact.

If you’re trying to understand why ROI is stalling — or what it would take to scale value safely — we’re happy to help. Get in touch for a free audit.

How can we help employees adopt agentic AI?
January 8, 2026
10 mins read

Read blog

TL;DR  

Real AI adoption starts after the PoC. To scale successfully, pick a PoC that delivers real business value, built on well-defined processes and measurable outcomes. Treat AI like a product: iterate through MVP cycles with strong governance, clean data, and clear ownership. Maximise impact by building cross-functional capability, aligning IT and business, communicating openly, and starting with use cases that show quick, visible wins.

How to improve AI adoption and avoid money down the drain

When organisations reach a certain stage — the PoC is complete, the checklist is ticked off, SharePoint is clean, governance is in place, access controls are set, and Copilot is already live across the business — the next question becomes very simple:

What should we build next so that AI actually generates value, not just another experiment?  

This is also the stage where most AI initiatives stall. The technology might be ready, but the organisation isn’t designing for value, adoption, and iteration.  

We call this Value Architecture Design: identifying where AI can create value and designing solutions in a way that people will actually use.  

In this post, we outline how to select the right PoCs, how to scale from early wins to managed AI services, and how to prepare your workforce for meaningful, trustworthy adoption.  

This is the third part of our series on agentic AI.

What does real AI adoption look like?  

AI isn’t truly “adopted” the moment a proof of concept runs. It’s adopted when people can use it with confidence: when teams understand how agents work, when reusable building blocks like prompts, agents, flows, and APIs start to take shape and get shared, and when the business begins improving AI solutions the same way it improves apps, iterating on MVPs, learning from real usage, and refining what works.

And it’s when decision-makers understand enough themselves to keep the momentum going: beyond approving a pilot, they can actively lead the next steps.

How to choose a PoC that delivers value and actually gets used  

A good PoC is not the most exciting part of the project, but it’s essential.  It needs to:  

  • sit on an already successful business process  
  • be well-defined and constrained  
  • have clear, measurable outcomes  
  • deliver relief from repetitive, manual work  
  • create a sense of “finally, I don’t have to do this like a robot anymore”  

This is what we call Proof of Value, not Proof of Concept. Early lighthouse projects should:  

  • reduce time spent on manual categorisation or triage  
  • replace low-value cognitive tasks (“read, sort, route, summarise”)  
  • demonstrate visible time savings or cost avoidance within weeks  
  • be easy to explain and easy to show  
  • create appetite for “what else can we automate?”  

A simple example:  
A flow categorises incoming emails → when it finds a certain category, it triggers an agent → the agent decides where the request should go and completes the next action.  

It’s clear, repeatable, and it removes the repetitive manual work from the process.

That’s the pattern you want.  

Different users need different AI pathways  

Once the fundamentals are in place (SharePoint cleaned up, governance set, access controls defined), adoption becomes a layered journey:  

Layer 0 — Business users with no technical background  

  • Use AI for information synthesis  
  • Build small, safe mini-apps with Copilot Studio Light  
  • No creation of new systems, just better access to existing knowledge  

Layer 1 — Managed Copilot Studio solutions  

  • Built and iterated by more technical users  
  • Governance, data connections, compliance configuration  
  • Where structured APIs and reusable prompt libraries emerge  

Layer 2 — Pro-code engineering for fully custom solutions  

  • Complex integrations, advanced orchestration  
  • High-value automation tied into business-critical systems  
  • Requires agile delivery: MVP → iterated improvements → continuous optimisation  

All three layers require different adoption strategies. All three can deliver value.  
But the PoC you choose determines which layer you are enabling.  

The biggest non-technical blockers are culture, clarity, and trust  

Technology rarely blocks adoption. People do.  

We see four blockers appear again and again:  

Poor stakeholder management  

Executives, end users, and IT all need to be aligned, and they rarely start that way.

Fear of automation  

People need to hear clearly: “This helps you. It does not replace you.”  

Disconnect between IT and the business  

Business knows the process; IT knows the tools. Agents require both sides to collaborate.  

Lack of clarity about decision rights  

  • Who approves agents?  
  • Who owns risks?  
  • Who maintains the agent when the process changes?  

Without clear answers, trust is hard to establish and even harder to sustain.

How to prepare your workforce to collaborate with agents  

Adoption is ultimately about behaviour change. The mindset shift is:  

“AI is an extension of my tools, not a black box that takes over.”  

Organisations should focus on:  

  • Training champions who mentor, explain limitations, and build confidence  
  • Teaching teams how to design good prompts and document them in a prompt library  
  • Regular feedback cycles (“What’s working? What’s frustrating?”)  
  • Making the agent’s role transparent: what it does, where the data goes, how decisions are made  
  • Ensuring agents always use up-to-date information  
    (The fastest way to break trust? Let an agent read from outdated files.)  

Think of this as AI workplace readiness, not AI training.  

The most successful teams build cross-functional capability, bringing together:

  • business process experts,
  • prompt engineers or AI solution designers,
  • data specialists,
  • integration and pro-code developers,
  • governance and security specialists,
  • and product owners who treat agents as evolving applications.

Their mindset is agile rather than waterfall: start with an MVP, release it, gather feedback, and iterate continuously.  

Governance is the foundation for sustainable, safe AI  

Good AI governance is not bureaucracy. It is clarity.  

Organisations need defined roles for:  

  • Policy ownership and risk management (usually IT + security)  
  • Quality assurance for prompts, agents, and data sources  
  • Access control and data protection  
  • Decision rights about when AI can act autonomously vs. when humans must step in  

Business criticality becomes the deciding factor:  
“What must remain human-in-the-loop?”

“What can be automated end-to-end?”  

Well-designed governance enables scale. Poor governance kills it.  

 

How to select a lighthouse use case for quick value and easy adoption  

A great lighthouse project has three characteristics:  

  1. Clear boundaries: the business process is simple and well understood.
  2. Measurable results: time saved, cost reduced, fewer errors.
  3. Heavy manual effort: repetitive tasks where humans feel like “bio-robots”.

These are the opportunities where agents shine immediately:  
categorisation, routing, triage, summarisation, document extraction, escalation decisions. This is where momentum comes from.  

How to build trust that drives real adoption  

Trust is not created by accuracy alone. Users trust AI when:  

  • they understand its limitations  
  • champions are available to advise and mentor  
  • they see a clear audit trail of what the agent did and why  
  • their data and identity feel protected  
  • feature requests and feedback loops visibly shape the next iteration  

Trust grows with use. Use grows with clarity. Clarity grows with good governance and good communication.  

Avoid these mistakes  

  • Over-automating without understanding the process
  • Building agents without guardrails  
  • No single owner for the solution  
  • Ignoring user needs, for example by having poor UX, unclear instructions, or wrong expectations  
  • Messy data and outdated SharePoint structures  
  • Not communicating early and often  

AI adoption succeeds when it is treated like product development  

Real value happens when organisations stop thinking about AI as a one-off pilot and start treating it as:  

  • a managed service  
  • an evolving product  
  • a collaboration between humans and agents  
  • an iterative improvement cycle  

The PoC is only the start. The real work and the real payoff begin with intentional adoption, strong governance, cross-functional collaboration, and continuous improvement.  

 

Want to move beyond experimentation and get ready for AI that drives real value? Get in touch for an AI-readiness workshop.  

 

Work IQ, Fabric IQ, Foundry IQ vs Microsoft Graph?
January 2, 2026
10 mins read

Read blog

TL;DR

Microsoft Graph provides permission-aware access to Microsoft 365 data, but it doesn’t interpret meaning. The IQ layers add context so AI can reason safely: Work IQ helps Copilot connect people, conversations, content, and activity into usable work context; Fabric IQ (preview) adds governed business meaning so AI understands what data represents and how key entities relate; and Foundry IQ grounds custom agents in trusted enterprise knowledge via Azure AI Search, enabling secure retrieval and governance. In short, Graph enables access; IQ enables understanding.

Work IQ, Fabric IQ, Foundry IQ vs Microsoft Graph

Over the past year, Microsoft has introduced a new family of concepts: Work IQ, Fabric IQ, and Foundry IQ.

If you’ve been following Copilot, Power Platform, Dynamics 365, or Azure AI Foundry, you’ve probably seen “IQ” mentioned more and more, often without a clear explanation of what it actually is, how it relates to Microsoft Graph, or why it matters for real business outcomes.

This post cuts through unnecessary complexity.

General AI is no longer the differentiator

A year ago, access to powerful AI models felt like an advantage. Today, it’s a must.

Every enterprise has access to strong foundation models. The real difference is no longer how smart the model is, but how well it understands your organisation.

What AI lacks is not general knowledge but enterprise context:

  • how your processes actually work
  • how your data is structured and governed
  • how decisions are made
  • what is allowed, restricted, or risky
  • what is happening right now in your workflows

This is where the new “IQ” concepts come in. At its core, IQ is Microsoft’s way of describing an enterprise context engine. It’s the layer that turns raw data into something AI can reason over safely.

Microsoft Graph vs IQ: access vs understanding

Let’s start with the foundation: Microsoft Graph.

Microsoft Graph is:

  • a unified API and access layer, and
  • a data model that spans services, connecting users, emails, files, calendars, Teams, SharePoint, and more.

The Graph name isn’t a coincidence. It reflects a connected data model of entities and relationships across Microsoft 365.

You can think of Graph as the unified access layer and permission model that gives consistent access to data stored across Microsoft 365 services.

What Graph does not do is interpret meaning.

Graph answers questions like:

  • Which emails exist?
  • Which files belong to this user?
  • Which meetings happened last week?

It gives you access. The IQ layers sit above this. They don’t replace Graph, but they use it, enrich it, and reason over it.
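To illustrate what “access” means in practice, here is a minimal sketch of calling Microsoft Graph directly. It assumes an access token obtained elsewhere (for example via MSAL) with delegated Mail.Read and Calendars.Read permissions; the dates are placeholders.

```python
# Minimal sketch of Microsoft Graph as an access layer: the same authenticated
# call pattern retrieves mail or calendar data, but nothing here interprets
# what the items mean. The token is assumed to come from an existing app
# registration with delegated Mail.Read / Calendars.Read permissions.
import os
import requests

token = os.environ["GRAPH_ACCESS_TOKEN"]  # assumed to be acquired elsewhere (e.g. MSAL)
headers = {"Authorization": f"Bearer {token}"}

# "Which emails exist?" — raw access, no interpretation
messages = requests.get(
    "https://graph.microsoft.com/v1.0/me/messages?$top=5&$select=subject,from",
    headers=headers,
).json()

# "Which meetings happened last week?" — again, access only (placeholder dates)
events = requests.get(
    "https://graph.microsoft.com/v1.0/me/calendarView"
    "?startDateTime=2026-01-19T00:00:00Z&endDateTime=2026-01-25T23:59:59Z",
    headers=headers,
).json()

for m in messages.get("value", []):
    print(m["subject"])
```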

A simple way to frame it:

Graph enables access. IQ enables understanding.

Work IQ: understanding how work actually happens

Work IQ is the intelligence layer Copilot uses to understand day-to-day work.

It builds a dynamic picture of:

  • emails, chats, meetings, files
  • tasks and decisions
  • Dynamics 365 and Dataverse entities (when connected through Copilot experiences, plugins, or agents)
  • relationships between people, content, and actions
  • how work evolves over time

Crucially, Work IQ doesn’t just retrieve information, it interprets context.

That’s why Copilot can answer questions like:

  • “What did we decide last week about the field project budget?”
  • “Summarise the latest customer escalations and draft a report.”

It’s not searching like a document library. It’s reasoning over signals, patterns, and workflows.

A helpful analogy:

  • Microsoft Graph is the access layer
  • Work IQ is the intelligence layer that makes work context searchable, explainable, and useful

Work IQ also learns work patterns, preferences, and typical next actions. This is why it feels “personal” without breaking security boundaries.

From an organisational point of view:

  • Work IQ is accessed primarily through Copilot
  • It is out of the box
  • You don’t need to define complex use cases to see value

But it only works well if your work is structured.

Copilot cannot infer intent from SharePoint chaos.

Fabric IQ: giving AI business meaning, not just data

If Work IQ understands work, Fabric IQ (preview) understands data.  

In Fabric, IQ is essentially a governed business knowledge layer — a way to define entities, relationships, and meaning so AI can query and reason over data correctly.

Microsoft Fabric already centralises analytics across OneLake, data warehouses, lakehouses, and Power BI semantic models.

Fabric IQ adds a critical layer on top: business meaning. It captures:

  • data models and relationships
  • semantic definitions
  • measures and business logic
  • lineage and governance rules

In other words, Fabric IQ allows AI to understand things like what the data represents, how entities relate, which numbers matter, and which rules must be respected.
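As a generic illustration of what “business meaning” can capture (deliberately not Fabric IQ syntax), a semantic definition might record entities, relationships, and governed measures like this:

```python
# Generic illustration (not Fabric IQ syntax) of what a semantic layer captures:
# what entities mean, how they relate, and how a governed measure is defined.
semantic_model = {
    "entities": {
        "ServiceRequest": {"key": "request_id", "description": "A customer-raised service request"},
        "Region": {"key": "region_code", "description": "Sales/service region"},
    },
    "relationships": [
        {"from": "ServiceRequest.region_code", "to": "Region.region_code", "type": "many-to-one"},
    ],
    "measures": {
        "request_volume": {
            "definition": "COUNT(ServiceRequest.request_id)",
            "grain": ["Region", "month"],
            "owner": "service-operations",  # governance: who owns the definition
        },
    },
}

# With this in place, a question like "Why did request volume spike in the
# North region last month?" maps to a governed measure, not an ad-hoc query.
print(semantic_model["measures"]["request_volume"]["definition"])
```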

This is a shift away from basic retrieval-augmented generation. Fabric IQ allows agents to ask analytical questions, generate code, spot anomalies, or explain trends in business terms.

For example:

  • “Why did request volume spike in the North region last month?”
  • “Show anomalies in field service cycle time.”

The difference is subtle but important: Fabric IQ grounds AI in what the data means, not just where it lives.

Foundry IQ: how custom agents stay grounded in trusted data

Foundry IQ is the knowledge grounding layer in Azure AI Foundry — it helps agents retrieve the right information from approved sources (securely and with governance), so they can reason and act with the right context.  

While Work IQ and Fabric IQ are largely plug-and-play, Foundry IQ is fully custom, designed for Copilot agents, pro-code development, and multi-agent collaboration.

Foundry IQ brings together:

  • knowledge sources (documents, databases, APIs)
  • indexing + retrieval orchestration
  • permission-aware grounding
  • citations / traceability (where supported)
  • governance + safety controls for knowledge use

If Work IQ is about understanding work and Fabric IQ is about understanding data, then:

Foundry IQ is the knowledge layer for Azure AI Foundry agents, built on Azure AI Search. It helps developers create reusable knowledge bases that agents can query through one API, with retrieval planning, source routing, and permission-aware grounding.
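Because Foundry IQ builds on Azure AI Search, the retrieval underneath it looks roughly like a search-index query. A minimal sketch for illustration: the service name, index name, field names, and API version are placeholders, and in Foundry itself agents query knowledge bases through the Foundry APIs rather than calling the index directly.

```python
# Minimal sketch of the retrieval layer underneath Foundry IQ: a direct query
# against an Azure AI Search index. Service name, index name, field names, and
# API version are placeholders; agents in Foundry query knowledge bases through
# the Foundry APIs rather than hitting the index directly.
import os
import requests

SEARCH_SERVICE = "your-search-service"   # placeholder
INDEX_NAME = "enterprise-knowledge"      # placeholder
API_VERSION = "2023-11-01"               # adjust to your deployed version
api_key = os.environ["AZURE_SEARCH_API_KEY"]

url = (
    f"https://{SEARCH_SERVICE}.search.windows.net/"
    f"indexes/{INDEX_NAME}/docs/search?api-version={API_VERSION}"
)

payload = {
    "search": "field service escalation policy",
    "top": 5,
    "select": "title,chunk,source",   # assumes these fields exist in the index
}

results = requests.post(url, json=payload, headers={"api-key": api_key}).json()

for doc in results.get("value", []):
    # Keep the source with each chunk so answers stay traceable to approved documents
    print(doc.get("title"), "->", doc.get("source"))
```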

Without clear objectives, well-defined processes, and strong governance, Foundry IQ can quickly become expensive and ineffective.

It is powerful, but it’s not something to adopt without a clear business case.

Security, grounding, and guardrails

One common concern we hear is security. A critical point often missed:

Enterprise access control is enforced by the infrastructure (identity + permissions), not by the model.

Permissions, identity, access control, environment separation, and governance all sit below the IQ layers.

The IQ concepts don’t bypass security. They operate within it. This is why custom security policies and guardrails must be designed upfront, not added later.

Always keep in mind that AI is only as trustworthy as the context and constraints you give it.

What Microsoft-centric organisations should do now

For Dynamics 365 and Power Platform customers, the message is clear: go back to basics.

Before moving on to advanced agent scenarios:

  • clean up SharePoint structures
  • reduce duplication
  • clarify ownership and versioning
  • document key processes
  • align Dataverse models with real business logic

AI is not magic; it just amplifies what already exists.

Where you can get real, immediate value

We see the fastest returns where context is already well defined. A classic example is onboarding.

If onboarding processes are documented:

  • new hires can ask Copilot how things work
  • find the right documents instantly
  • understand workflows without tribal knowledge

Other early wins include personal work summaries, prioritisation (“what needs my attention?”), and light decision support.

These are safe entry points that build trust before moving into deeper workflow augmentation with Foundry IQ.

The context advantage

In today’s AI landscape general knowledge is commoditised and models are interchangeable. What isn’t interchangeable is your context. The organisations that win will be those that make their context explicit, govern it properly, and design it deliberately for AI.

AI is only as smart as the context you allow it to see.

That’s where the real advantage lies.

Want to know where your organisation actually stands?
We help Microsoft-centric teams assess their readiness and identify where agents will create real value. Get in touch for a free audit.  

This is some text inside of a div block.

Recent events

Stay in the loop with industry-leading events designed to connect, inspire, and drive meaningful conversations.

View all
VisualLabs is an official EPPC Silver Sponsor
June 16, 2025

View event
The Unified Enterprise
April 1, 2025

View event
The Copilot Blueprint: Best Practices for Business AI
February 25, 2025

View event

Expert-led webinars

Gain valuable insights from top professionals through live and on-demand webinars covering the latest trends and innovations.

View all
Accelerating sustainability with AI - from reporting to green value creation
Apr 1, 2025
46 min 23 sec

Read more
Watch out for these if you're managing field work with an EV fleet
May 21, 2025
30 min watch

Read more
Leveraging Generative AI and Copilots in Dynamics CRM
Apr 1, 2025
49 min 15 sec

Read more
How Dallmayr Hungary became the digital blueprint of the Group
Apr 2, 2025
38 min 32 sec

Read more
Intelligent Business Transformation with Power Platform in the age of AI
Apr 1, 2025
39 min 23 sec

Read more

Real results

See how companies like you have succeeded using our solutions and get inspired for your own journey.

View all
ZNZ Preventive Technologies
5 min read
Dec 8, 2025

How ZNZ Preventive Technologies has transformed its field service operations through digitalisation

Read more
Kontron
5 min read
May 16, 2025

How Kontron aligned their global operations with data-driven decision-making

Read more
Dallmayr Hungary
May 16, 2025

How Dallmayr Hungary increased operational efficiency by 30% and became the digital blueprint of the group

Read more

Tools & Guides

Access helpful eBooks, templates, and reports to support your learning and decision-making.

View all

Improve First-Time Fix Rates in the Field

Where and How to Get Started When Digitising Field Operations

Get the guide

Would you like us to take a look at your processes?

Sign up for a free assessment and our colleagues will get in touch shortly to arrange a time. Just tell us what you need help with, and we'll share what we've seen work.

Stay ahead with the latest insights
Subscribe to our newsletter for expert insights, industry updates, and exclusive content delivered straight to your inbox.