
Three AI Myths That Need Clarity In 2026

Heiko Claussen is Chief Technologist at Emerson’s Aspen Technology business, leading its AI research and technology strategy.

While artificial intelligence (AI) has been around for decades, tools like generative AI have helped it gain mainstream attention in recent years. AI adoption across almost every industry has accelerated, including among industries known for slower adoption curves.

For instance, process industries such as oil and gas and chemicals run mission-critical operations in which AI hallucinations or inaccurate results could have catastrophic consequences. Understandably, this made some industrial companies hesitant at first to adopt general AI, including generative AI. Despite this, industry is now adopting AI at a rapid rate, based on the understanding that not all AI is created equal. According to a 2025 survey by KPMG, 70% of industrial manufacturers have seen significant operational improvements due to AI.

Unlike general AI, industrial AI is purpose-built for complex industrial environments that have specific requirements for the safety and explainability of AI results. Industrial AI establishes guardrails based on domain expertise and the first principles of engineering. It produces trusted, explainable outcomes that improve how organizations run—whether the AI is making process control recommendations or providing predictive alerts to deliver early failure warnings.

As we head into 2026, continuous education around the nuances and misconceptions of AI will support process industries—and many others—in accelerating adoption. Here are three common AI myths that deserve clarity in the new year:

1. AI hype will slow down.

Like many other technologies that have become nearly ubiquitous, AI is subject to its fair share of skepticism. Some of the recent comments around AI hype are rooted in concerns that enterprises aren’t seeing the ROI on AI they’re expecting.

Concerns over whether an AI project will provide value can often be traced back to the project’s origins. Projects driven by the technology rather than by a business need almost always lead to disappointment: implementing AI for AI’s sake, or to appease pressure from stakeholders, is a surefire way for a project to fall short of expectations.

Instead, organizations should start with the problem they’re trying to solve and its business value, and only then decide whether AI is the best tool for it. Breaking an AI project into smaller steps, with quick wins and feedback collected along the way, lets project owners demonstrate value early and build on success before significant time and money are invested. For example, with an AI-driven asset performance management (APM) tool for condition monitoring, an organization could select a scalable asset class, create a monitoring template, and roll it out across that class before moving on to the next opportunity, repeating the pattern for each success.

By taking a value-driven approach and being realistic about what AI can—and cannot—do, organizations increase the likelihood that their latest AI project lives up to expectations.

2. AI needs a lot of field data.

For AI to be successful, having the right data is far more important than having a lot of data. Years ago, when digital transformation began to take hold, it was common for organizations to collect as much data as possible, under the assumption that it would be needed for future use cases. By now, organizations are storing an overwhelming amount of data that is difficult to sort through and clean, often making it ineffective for AI projects.

Luckily, many successful AI use cases are not purely based on data from the field. In particular, industrial AI can be highly effective based solely on first principles and simulation models. First principles and simulation models—alongside physical AI, which is known for giving AI systems the ability to perceive, understand and reason in the physical world—are highly data efficient. They can be operational without any data from the field, then refined and improved as additional information is collected.
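The refine-as-data-arrives pattern above can be sketched in a few lines. This is a minimal illustration, not a real industrial system: the orifice-style flow model, the flow coefficient, and all numbers are hypothetical assumptions chosen only to show how a physics-based prediction can run with zero field data and then absorb measurements as they come in.

```python
# Hypothetical sketch: a first-principles model runs before any field data
# exists, then a simple running bias correction is learned from measurements.
# The flow model, coefficient, and values are illustrative assumptions.

def first_principles_flow(pressure_drop_pa: float) -> float:
    """Orifice-style flow estimate (m^3/h) from physics alone."""
    K = 0.8  # assumed flow coefficient, e.g. taken from equipment specs
    return K * pressure_drop_pa ** 0.5

class HybridModel:
    """Physics prediction plus a bias correction averaged over field data."""

    def __init__(self) -> None:
        self.bias = 0.0  # learned offset between model and plant
        self.n = 0       # number of field observations seen so far

    def predict(self, pressure_drop_pa: float) -> float:
        return first_principles_flow(pressure_drop_pa) + self.bias

    def update(self, pressure_drop_pa: float, measured_flow: float) -> None:
        # Incrementally average the residual (measured minus modelled).
        residual = measured_flow - first_principles_flow(pressure_drop_pa)
        self.n += 1
        self.bias += (residual - self.bias) / self.n

model = HybridModel()
print(model.predict(2500.0))   # usable immediately, no field data needed
model.update(2500.0, 42.0)     # refine as plant measurements arrive
print(model.predict(2500.0))   # prediction now reflects observed behavior
```

Real industrial AI platforms use far richer simulation and correction schemes, but the principle is the same: the physics carries the model on day one, and field data improves it over time.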

In 2026, organizations will be driven to adopt data fabrics that connect data from disparate locations across the organization, provide a secure single source of truth and give the data meaning. Feeding AI models with in-context data makes it easier to create and sustain meaningful AI models to close the simulation-reality gap and predict future outcomes based on past observations.

3. AI is just large language models (LLMs).

ChatGPT is a leading example of the significant technological advancements we’ve seen in LLMs in the last few years. Large foundation models like LLMs have become so pervasive that it can be easy to forget that modeling everything with one giant model is neither the only approach to AI nor always the best one.

In 2026, we’ll see more small language models used to produce narrow, focused results. As AI maturity grows, this will become the preferred method for highly specific use cases, such as helping a chemical engineer solve an operational problem at a refinery, or applications where the fundamental laws of physics are already incorporated into the model. At a time when critics are raising concerns over AI’s intensive energy requirements, small language models will also gain attention for being more efficient overall, requiring less data and less compute.

Global industries and consumers alike are already on the way to transforming the way they work and live with AI. With the right amount of clarity and realism, AI will continue to be a powerful tool in 2026 and beyond.


Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives.



