
5 Priorities For Business Leaders


Mark Thirlwell, Global Digital Director at BSI.

With the unpredictability of the last decade, it’s harder than ever to predict what lies in store in the year ahead—events have a way of pushing expectations off course. Whether 2026 will be the year the AI bubble bursts is hard to say from this vantage point.

But in lieu of a crystal ball, we do have insights into how businesses are viewing AI. Last year, BSI used an AI model to analyze how AI was being publicly framed by businesses in their annual reports, then paired this with executive-level insights. There are five things I think should be front of mind for business leaders, whatever the next 12 months bring.

1. AI isn’t necessarily the answer. It depends on the question.

Just as my kids forget once-cherished toys the moment a shiny new one arrives, many businesses have done the same in the years since ChatGPT landed. They’re chasing AI as if it were the holy grail for boosting productivity, cutting costs and driving growth in a tough economic climate. They believe AI will solve their problems, and maybe it will. But if businesses are putting all their eggs in an AI basket just because they’re in an arms race with competitors, or their investors want them to, then that’s not a recipe for success.

In the BSI report, small businesses were much less likely to say they were seeing tangible benefits such as growth, innovation or efficiencies from AI investment, while more than two-fifths (43%) of business leaders said “AI investment has taken resources that could have been used on other projects.”

At best, businesses investing in AI without a strategic roadmap are wasting funds on duplication or on tools that don't work. At worst, they are exposing themselves to new risks. The point is that AI investments need to be considered as part of a longer-term strategy, with greater focus on the value being derived.

2. AI governance isn’t a priority now, but this needs a rethink for 2026.

If AI is akin to a shiny new toy, then laying a foundation of good governance can feel like the parent asking the kid to tidy their room first. That doesn't make it any less important.

What’s clear is that many businesses are not focused on the guardrails. BSI’s research showed firms underweighting risk and security considerations. For example, less than a quarter (24%) reported having an AI governance program, while under half (47%) said “AI use is controlled by formal processes” and only 34% reported “using voluntary codes of practice.” It also appears rare for businesses to have formal risk assessment processes evaluating where AI may be introducing new vulnerabilities.

Put simply, business leaders are not routinely asking the right questions. Without trying to anticipate and address future vulnerabilities, they’re at risk of sleepwalking into a governance crisis. In 2026, businesses must focus on what formal processes need introducing, adding AI risk into provisions around continuity planning and incident response and, ultimately, striking a balance between innovation and risk management.

3. Data transparency should be a key consideration.

Sitting alongside this is data and how it is being collected, stored and used to train AI models. It’s staggering that “only 28% of business leaders know what sources of data their business uses to train or deploy its AI tools,” while just 40% “said their business has clear processes in place around use of confidential data for AI training.”

Full transparency on data collection and model inputs is key to building trust. Bias is often unavoidable in AI tools because the underlying data is itself biased. That doesn't mean such tools shouldn't be used; it means businesses employing them need to confront this head-on: understand their data sources, show their working and be able to explain discrepancies.

Businesses are already required to have protocols to protect personal data. This good practice should extend to any data used by AI models. Given the scale and speed at which the technology processes and interprets data, a well-governed approach will be critical to remaining in control and building confidence with stakeholders.

4. Prepare for the unexpected, and view AI through a business resilience lens.

Nearly a third of executives surveyed “felt AI has been a source of risk or weakness for their business.” Yet while firms regularly talk about embracing AI, there’s less discussion about business continuity and resilience. This is despite a fifth admitting that “if generative AI tools were unavailable for a period of time, their business could not continue operating.”

As reliance on AI increases, now is the time to put continuity planning in place by learning from past technology rollouts. That means not assuming everything will work out, but planning in case it doesn’t.

After all, high-profile cyberattacks or server issues regularly bring businesses to their knees. Why should AI be different? Incidents with customer service chatbots, algorithmic bias or misuse of personal data are already well-documented. Trying to prevent incidents with AI from occurring is one component of good governance; having plans in place to manage the fallout when they do is another.

5. AI cannot replace human oversight, and businesses will likely learn the hard way.

In the same vein, I expect 2026 to bring a wave of red-faced CEOs issuing apologies for betting too heavily on AI. The reality is, when it comes to outsourcing human tasks, AI can do plenty of the legwork. It can organize and process data, analyze findings and make recommendations, but it can also hallucinate, display bias or make mistakes. Human checks need to be built in. As McKinsey's research suggested, work in the future will be a partnership between people, agents and robots.

The differentiation will be between businesses using AI to enhance what they offer and provide better service and those simply swapping humans for AI. Some will succeed in the short term, but in the end, quality will win out.

Ultimately, AI has huge potential for good if there are guardrails in place. Moving beyond reactive compliance to proactive, comprehensive AI governance should be top of the business to-do list.


Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives.
