Businesses are letting AI run unsupervised. It’s like giving an intern big decisions instead of simple tasks – though they’re just not smart enough yet.
Imagine passing by a crowded train during peak hours. You might not know where it’s going, but since so many people are aboard, you assume they must be headed somewhere worthwhile – so you jump on too.
Does that make sense? Not really. Yet it’s no less illogical than local businesses rushing to implement some AI agent just because other companies are doing it. But do they actually know what an AI agent does, or how to decide if the benefits outweigh the risk?
The definition of an AI agent itself might give you shivers. An AI agent is a system that can perform tasks autonomously. AI agents provide personalized and comprehensive responses, and learn to adapt to their users over time. But to do this, they need to know the ins and outs of your organization – and that’s where the whole problem starts.
AI technology is still very nascent, and we’re not very familiar with how it works. As users, we’re suspicious of it, and the backlash against AI is growing as it violates copyright laws, puts people out of jobs, and exaggerates existing societal biases.
For businesses, it jerks the door wide open to new risks.
Here’s one curious example. A new study by researchers at Princeton University and Sentient, an AI firm, has found that AI agents are vulnerable to memory injection attacks that can then manipulate their behavior. In a nutshell, it means that an attacker can plant a fake memory into an AI agent, which it then uses to make future decisions.
The paper urged addressing the threat, as such an attack could lead to persistent, cross-platform security breaches. Along with these breaches come the loss of user trust, system integrity, and operational safety.
Malicious hackers are pursuing every opportunity to exploit our increasing reliance on AI. For example, in one well documented case, Google’s Gemini was shown to be vulnerable to long-term memory corruption.
Hackers are working hard to mess with AI agents. Given you most likely trusted an AI agent with your internal data and processes, it can become your organization’s Achilles’ heel. This issue is constantly being addressed, with security researchers relentlessly working to patch various vulnerabilities.
But there’s something much harder to patch – it’s our biases. According to this Forbes story from 2021, AI bias caused 80% of black mortgage applicants to be denied. Years go by but the problem in the industry persists: last year, researchers from Lehigh University found that LLM training data likely reflects persistent societal biases.
Many of us mistakenly believe AI is objective and unbiased because it’s just math, software, or a machine. We couldn’t be more wrong. First of all, all LLMs are taught using flawed databases, collected and organized by humans over centuries. As a result, algorithms know more about white males than people of color, women, or other historically underrepresented groups in various fields and archives.
Would you be comfortable tasking an AI agent with giving out or denying loans? What about who gets promoted? Would you trust an AI agent to screen job applicants or decide who gets an interview? Would you use AI agents to recommend bail or sentencing decisions? And if you would, would you be 100% transparent about how you are using AI?
When your robotic intern fails, will you be able to explain the mistake and the reasoning behind it? I loved how the Harvard Business Review put it – ethical nightmares multiply with AI advances.
Don’t get me wrong. I’m all for delegating tasks and leaving the most mundane and time-consuming jobs to a piece of code. But with so many organizations implementing AI at breakneck speed, I’m just worried there isn’t enough discussion around the most pressing issues.
(To digress a little, employees actually complain about their employers being too slow to approve some AI tools. As a result, they are using some algorithms secretly, which is even worse.)
On top of mounting ethical problems and security issues, the use of agentic AI inflates costs while delivering questionable value to businesses.
Gartner, an American research and advisory firm, predicts that by the end of 2027, more than 40% of agentic AI projects will be canceled. Why? Because of huge costs, unclear value, and inadequate risk controls. Experts believe most agentic AI projects are driven by hype and often misapplied. They advise pursuing agentic AI only when it offers clear value or a solid return on investment.
Be smarter than those jumping on the bandwagon out of simple fear of missing out. We introverts like to joke about the fear of being included. Your company might be humming the “fail fast” mantra, but since AI is dealing with private data, such a failure won’t be painless.
You might fail fast, but you will likely fail big, too. So, take your time, think critically, and prioritize safety over speed. Before diving in, consider these practical steps:
- Conduct thorough risk assessments to identify potential vulnerabilities and impacts.
- Start with small pilot projects to test AI agents in controlled environments before scaling up.
- Implement strong data governance policies to protect sensitive information and ensure compliance.
- Ensure transparency in AI decision-making by documenting how decisions are made and making processes auditable.
- Invest in ongoing monitoring and auditing to catch issues early and continuously improve your systems.
- Plenty of companies end up calling in human experts to clean up after AI messes which usually costs more than just hiring an expert from the start. Save yourself the headache (and extra bills) by getting a real person to double-check your AI’s work.
Remember that a lot of fuss around AI agents is nothing more than a smoke screen to disguise the fact that we are dealing with a beast we don’t know much about. So maybe we shouldn’t let it decide on its own just yet.


