PostHole
rss-bridge 2026-03-01T18:57:28+00:00

'Silent failure at scale': The AI risk that can tip the business world into disorder

As AI systems grow more complex, the risk lies in their surpassing human comprehension, not human intelligence, artificial intelligence experts warn.


AI Age

Barbara Booth

Key Points

  • The "rogue" AI agent acting autonomously to nefarious ends receives a lot of attention but may not be the biggest AI risk for the economy.
  • With AI model complexity reaching beyond human comprehension, it becomes harder for organizations deploying AI to apply guardrails.
  • Minor errors introduced by AI, arising from gaps between its reasoning and human intent, can compound over weeks or months even as it follows directions. "That's the danger. These systems are doing exactly what you told them to do, not just what you meant," said one AI expert.


As the business world comes to grips with artificial intelligence, the biggest risk may be one where those running the economy can't possibly stay ahead. As AI systems become more complex, humans aren't able to fully understand, predict, or control them. That inability to understand at a fundamental level where AI models are going in the coming years makes it harder for organizations deploying AI to anticipate risks and apply guardrails.

"We're fundamentally aiming at a moving target," said Alfredo Hickman, chief information security officer at Obsidian Security.

A recent conversation with the founder of a company building core AI models left him shocked, Hickman says, "when they told me that they don't understand where this tech is going to be in the next year, two years, three years. ... The technology developers themselves don't understand and don't know where this technology is going to be."

As organizations connect AI systems to real-world business operations to approve transactions, write code, interact with customers, and move data between platforms, they are encountering a growing gap between how they expect these systems to behave and how they actually perform once deployed. They are quickly discovering that AI isn't dangerous because it's autonomous, but because it increases system complexity beyond human comprehension.

"Autonomous systems don't always fail loudly. It's often silent failure at scale," said Noe Ramos, vice president of AI operations at Agiloft, a company that offers software for contracts management.

When mistakes happen, she says, the damage can spread quickly, sometimes long before companies realize something is wrong.

"It could escalate slightly to aggressively, which is an operational drain, or it could update records with small inaccuracies," Ramos said. "Those errors seem minor, but at scale over weeks or months, they compound into that operational drag, that compliance exposure, or the trust erosion. And because nothing crashes, it can take time before anyone realizes it's happening," she added.

Early signs of this chaos are emerging across industries.

In one case, according to John Bruggeman, the chief information security officer at technology solution provider CBTS, an AI-driven system at a beverage manufacturer failed to recognize its products after the company introduced new holiday labels. Because the system interpreted the unfamiliar packaging as an error signal, it continuously triggered additional production runs. By the time the company realized what was happening, several hundred thousand excess cans had been produced. The system had behaved logically based on the data it received but in a way no one had anticipated.

"The system had not malfunctioned in a traditional sense," said Bruggeman. Rather, it was responding to conditions developers hadn't anticipated. "That's the danger. These systems are doing exactly what you told them to do, not just what you meant," he said.
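The runaway production loop Bruggeman describes is the kind of failure a simple decision boundary can catch: cap how often an automated action may fire in a window before a human must step in. A minimal sketch in Python, with all class names and thresholds hypothetical rather than drawn from any real system:

```python
from collections import deque

class BoundedTrigger:
    """Guardrail that caps how often an automated action can fire
    inside a sliding time window; beyond the cap, escalate to a human.
    Illustrative only: names and limits are invented for this sketch."""

    def __init__(self, max_fires: int, window_s: float):
        self.max_fires = max_fires
        self.window_s = window_s
        self.fires = deque()  # timestamps of recent firings

    def allow(self, now: float) -> bool:
        # Discard timestamps that have aged out of the window.
        while self.fires and now - self.fires[0] > self.window_s:
            self.fires.popleft()
        if len(self.fires) >= self.max_fires:
            return False  # cap reached: block and flag for human review
        self.fires.append(now)
        return True

# A production-run trigger limited to 3 firings per hour:
guard = BoundedTrigger(max_fires=3, window_s=3600.0)
results = [guard.allow(t) for t in (0.0, 60.0, 120.0, 180.0)]
print(results)  # the fourth attempt inside the window is refused
```

Had the beverage system's reorder signal passed through a bound like this, the unfamiliar-label loop would have tripped the cap after a few runs instead of producing several hundred thousand excess cans.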

Customer-facing systems present similar risks.

Suja Viswesan, vice president of software cybersecurity at IBM, says the company identified a case where an autonomous customer-service agent began approving refunds outside policy guidelines. A customer persuaded the system to issue a refund and later left a positive public review. The agent then began granting additional refunds freely, optimizing for positive reviews rather than following established refund policies.

'You need a kill switch'

These failures highlight the fact that problems don't necessarily come from dramatic technical breakdowns but from ordinary situations interacting with automated decisions in ways humans didn't foresee.

As organizations begin trusting AI systems with more consequential decisions, experts say companies will need ways to quickly intervene when systems behave unexpectedly. 

Stopping an AI system, however, isn't always as simple as shutting down a single application. With agents connected to financial platforms, customer data, internal software, and external tools, intervention may require halting multiple workflows simultaneously, according to AI operations experts.

"You need a kill switch," Bruggeman said. "And you need someone who knows how to use it. The CIO should know where that kill switch is, and multiple people should know where it is if it goes sideways."
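One common shape for the kill switch Bruggeman describes is a central halt flag that every agent workflow consults before acting, so a single operator decision stops all connected workflows at once. A minimal sketch, with all names hypothetical and no vendor API implied:

```python
import threading

class KillSwitch:
    """Central halt flag shared by all agent workflows.
    Hypothetical sketch; not any vendor's actual API."""

    def __init__(self):
        self._halted = threading.Event()
        self.reason = None

    def trip(self, reason: str) -> None:
        # One call here halts every workflow that checks the switch.
        self.reason = reason
        self._halted.set()

    def halted(self) -> bool:
        return self._halted.is_set()

class RefundAgent:
    """Toy customer-service agent that defers to the switch."""

    def __init__(self, switch: KillSwitch):
        self.switch = switch

    def approve_refund(self, amount: float) -> str:
        if self.switch.halted():
            return "HALTED: escalated to a human operator"
        # Normal policy checks would run here.
        return f"approved {amount:.2f}"

switch = KillSwitch()
agent = RefundAgent(switch)
first = agent.approve_refund(25.0)    # normal operation
switch.trip("refund volume anomaly")  # an operator flips the switch
second = agent.approve_refund(25.0)   # every agent now halts
print(first, "->", second)
```

In practice the flag would live in shared infrastructure, such as a feature-flag service or a database row, so that independent workflows spanning financial platforms, customer data, and external tools all observe the same halt.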

Experts say better algorithms won't solve the problem. Avoiding failure requires organizations to build operational controls, oversight mechanisms, and clear decision boundaries around AI systems from the start.

"People have too much confidence in these systems," said Mitchell Amador, CEO of crowdsourced security platform Immunefi. "They're insecure by default. And you need to assume you have to build that into your architecture. If you don't, you're going to get pumped."

But, he said, "most people don't want to learn it, either. They want to farm their work out to Anthropic or OpenAI, and are like, 'Well, they'll figure it out.'"

[Video: "AI is taking over and there are no guardrails" (39:39)]

[...]

