How to get ROI from AI: Six ingredients that make adoption work

Article Summary

Learn six ingredients that make AI ROI possible, from business value and data readiness to governance, adoption and safe scaling.


AI has moved quickly from experimentation to expectation.

Boards are asking where the value is. Executives are asking which use cases should move forward. Teams are testing tools, copilots and agents. And in many organisations, AI is already being used in day-to-day work, whether there is a formal strategy in place or not.

That creates a clear challenge for leaders: How do you move from activity to impact?

According to John Valastro, Director – Digital Innovation and Growth here at Avec, the answer is not simply more experimentation.

“If I’m going to answer the ROI question upfront, it comes from deliberate adoption, not from more experimentation.”

This distinction is now critical. Many organisations have no shortage of AI ideas; what they often lack is a disciplined pathway for turning those ideas into measurable, trusted and operationally sustainable outcomes.

As John puts it:

“The market loves to sell us the finish line, but it’s not very good at selling how you get there.”

The finish line is compelling: faster service, better decisions, reduced manual effort, improved customer experience and new ways of working. But the path to that value is rarely a straight line. It requires business clarity, the right type of work, trusted data, governance, adoption planning and measurable outcomes.

Useful AI is not just a technology discipline, but a value, workflow, data, governance and adoption discipline.

Here are the six ingredients that make AI ROI possible.

1. Start with a real business problem

AI initiatives often struggle when they start with a tool instead of a problem.

The conversation begins with a platform, a model, a copilot or an agent, then works backwards to find something it might do. That approach can generate demos, but it rarely creates lasting business value.

John sees this pattern often:

“We talk a lot about the tools, but not a lot about the value.”

The better starting point is a real business problem with a clear link to one or more measurable outcomes. This might include cost, revenue, risk, service quality, cycle time, employee effort or decision quality.

A strong AI use case should be able to answer questions such as:

  • What business issue are we solving?
  • Why does this issue matter now?
  • What is the cost of leaving it unresolved?
  • Who owns the outcome?
  • How will we know whether the solution has worked?

Without that clarity, AI becomes a technology exercise. With it, AI becomes a business improvement lever.

This is where delivery discipline matters. Organisations do not need to pursue the biggest or most complex use case first and, in fact, they usually shouldn’t. The strongest starting point is often a specific, high-friction workflow where the value is visible and the outcome can be measured.

The aim is not to prove that AI can do something interesting, but to prove that AI can improve something that matters.

2. Choose the right work pattern

Not every business problem needs AI.

That may sound obvious, but it’s one of the most common mistakes organisations make. In the rush to adopt AI, many use cases are framed as AI opportunities when they are actually automation, process improvement, data quality or workflow design problems.

John’s view is direct:

“Automation is not being replaced. It’s evolving.”

Traditional automation remains highly effective for deterministic, rules-based work: tasks where the process is known, the inputs are structured, and the decision logic is clear.

AI is better suited to work that involves interpretation, synthesis, triage, reasoning or coordination. It is strongest where a process requires judgement, context or the ability to make sense of unstructured information.

As John explains:

“Keep to automation when it’s clear and deterministic. Add AI when the work is probabilistic.”

Understanding this distinction is critical for ROI. Applying AI to the wrong type of work can increase cost, risk and complexity without improving the outcome.

For example, a rules-based process with clear inputs and outputs may be better served by automation. A workflow that requires reading multiple documents, summarising competing information, triaging requests or supporting a decision may be a stronger AI candidate.

The question is not: “Can AI do this?”

The better question is: “Is AI the right way to create value here?”

3. Establish a measurable baseline

You cannot prove ROI if you do not know the starting point.

Before building an AI solution, organisations need to understand the current cost, effort, delay, error rate or experience issue associated with the workflow. This baseline doesn’t need to be perfect, but it does need to be credible enough to support a scale decision later.

A useful baseline might include:

  • How long the process takes today
  • How many people are involved
  • How many handoffs occur
  • Where rework or errors happen
  • What the current service experience looks like
  • How often decisions are delayed or escalated
  • What the current cost to serve is

Without this, success becomes subjective. A solution may feel faster, smarter or more impressive, but leaders will struggle to determine whether it has created meaningful value.

John is clear that this discipline is often missing:

“People are not measuring the baseline. They’re not actually working out whether there’s going to be a return on the investment.”

This is where many AI experiments stall. They demonstrate capability, but not impact.

A baseline also helps organisations choose better use cases. If a process has low volume, low risk and low friction, it may not justify the investment required to design, govern and support an AI-enabled solution. If a process is high volume, high cost, slow, error-prone or strategically important, the business case becomes easier to establish.

Measure before you build. Otherwise, ROI becomes an opinion.
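To make the arithmetic behind a baseline concrete, here is a minimal sketch for a hypothetical workflow. Every figure and variable name below is an illustrative assumption, not a benchmark: the point is that the baseline cost is measured before the build, and the scale decision compares it against the measured post-pilot cost plus the cost of running and governing the solution itself.

```python
# Hypothetical baseline for a single workflow; all figures are illustrative assumptions.
monthly_volume = 2_000        # requests handled per month
minutes_per_request = 15      # average handling time today
hourly_cost = 60.0            # fully loaded cost per hour of effort
error_rate = 0.05             # share of requests that need rework
rework_minutes = 30           # extra minutes per reworked request

# Baseline effort: normal handling plus rework, converted to hours and cost.
baseline_hours = (monthly_volume * minutes_per_request
                  + monthly_volume * error_rate * rework_minutes) / 60
baseline_monthly_cost = baseline_hours * hourly_cost

# Assumed figures measured after a pilot, including the cost of
# operating, supporting and governing the AI-enabled solution.
post_pilot_monthly_cost = 20_000
solution_run_cost = 4_000

monthly_saving = baseline_monthly_cost - (post_pilot_monthly_cost + solution_run_cost)
print(f"Baseline: ${baseline_monthly_cost:,.0f}/month, net saving: ${monthly_saving:,.0f}/month")
```

If the net saving is small or negative once run and governance costs are included, the baseline has done its job: it has shown that this use case does not justify scaling, before significant investment is made.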

4. Make sure the data and access are fit for purpose

AI is only as useful as the context it can safely access.

That context depends on data quality, permissions, source-of-truth systems, access controls and information architecture. If the underlying data is incomplete, outdated, duplicated, poorly governed or difficult to trace, the AI solution will inherit those weaknesses.

John puts it simply:

“Whatever knowledge base you’re using to give AI context has to be clean. If it’s not, your experiment is a waste of time.”

However, this doesn’t mean an organisation needs to fix its entire data estate before starting. That would stop most AI programs before they begin. The more practical approach is to define the specific data surface required for the use case, then make sure that data is fit for purpose.

For each AI use case, leaders should understand:

  • Which data sources are being used
  • Whether they are trusted and current
  • Who has permission to access them
  • How sensitive information is protected
  • Where the source of truth sits
  • How outputs can be checked or traced

This is especially important in high-trust environments such as government, healthcare, financial services, utilities and other regulated sectors. AI adoption in these contexts cannot rely on novelty. It needs to be safe, explainable and supportable.

Data quality is also central to bias and trust. As John notes:

“Most of the bias doesn’t come from the model, necessarily. It comes from your data.”

Better data foundations don’t remove every risk, but they materially improve the likelihood that AI outputs are useful, reliable and appropriate for the workflow.

5. Build governance into delivery from day one

AI governance is often treated as a policy task: something to document after the solution has been designed.

That is the wrong sequence.

John’s position is clear:

“Governance is not something that comes after. In the context of AI, governance is part of the solution.”

Good governance includes testing, monitoring, escalation, auditability, accountability, security, human oversight and decision controls. These are not administrative extras. They are the mechanisms that allow AI to move from pilot to production safely.

This matters because AI can fail differently from traditional systems.

“The biggest risk is not so much that AI could fail. It’s that it can fail silently.”

A silent failure may look like a plausible but incorrect summary, a recommendation based on incomplete context, a decision-support output that appears confident but misses a key exception, or a workflow that gradually drifts away from expected performance.

Without monitoring and escalation pathways, these failures may not be visible until they’ve already created risk.

Governance should answer practical delivery questions:

  • Who is accountable for the AI-enabled process?
  • What must remain human-led?
  • When does the system escalate to a person?
  • How are outputs tested before release?
  • How is performance monitored over time?
  • How are errors identified and corrected?
  • When should the solution be paused, changed or shut down?

John also warns against weak or unclear ownership:

“Lukewarm ownership is not going to get the right outcome.”

For AI to scale, accountability must be visible. Someone needs to own the business outcome, the risk profile, the user experience and the decision to continue, change or stop.

6. Design a path to adoption

A working AI solution is not the same as an adopted AI solution.

Many initiatives fail not because the technology is poor, but because people don’t use it, trust it or understand how it fits into their work.

Adoption needs to be designed with the same discipline as the solution. That means identifying the users, understanding their workflow, building the support model, planning the change approach, and tracking whether the solution is actually being used in the intended way.

At a minimum, an AI adoption pathway should include:

  • A named business owner
  • Defined users and use cases
  • Clear guidance on appropriate use
  • Training and support
  • A feedback mechanism
  • A scale decision process
  • ROI tracking after implementation

This is also where organisations need to be honest about whether a process is ready for AI.

As John puts it:

“You don’t automate a bad process. Guess what? You don’t do that with AI either.”

If the workflow is unclear, inconsistent or poorly owned, AI may simply accelerate the confusion. In some cases, the right first step is process redesign. In others, it may be data clean-up, governance design or automation.

AI can improve work. It shouldn’t be used to compensate for work that has not been understood.

From experimentation to operational impact

AI ROI is not created by adopting the latest tool, launching a proof of concept or building an impressive demo.

It comes from disciplined delivery.

That means starting with value, choosing the right work pattern, measuring the baseline, preparing the data, designing governance into the solution and creating a practical path to adoption.

John summarises the approach clearly:

“Start with the value, choose work that suits, design small but for scale, build trust and prove the value before you scale.”

This is the shift organisations need to make now.

AI experimentation has been useful in helping leaders understand what’s possible, but the next phase is different. It requires organisations to move from curiosity to capability, and from isolated pilots to trusted, scalable delivery.

The organisations that succeed will not be the ones that test the most tools, but the ones that connect AI to real business problems, build confidence through governance and measure value before scaling.

Because useful AI is not just about what the technology can do. It’s about whether the organisation can make it work safely and sustainably in the real world.
