Are we ready to hand AI agents the keys?
Agents are already everywhere—and have been for many decades. Your thermostat is an agent: It automatically turns the heater on or off to keep your house at a specific temperature. So are antivirus software and Roombas. They’re all built to carry out specific tasks by following prescribed rules. But in recent months, a new class of agents has arrived on the scene: ones built using large language models.
Tech CEOs and scholars alike are taking AI agents seriously. In this edition of What’s Next in Tech, we examine whether we’re ready to allow this new type of agent into our everyday lives.
📩 We want to hear from you! Do you have a question about something you read, or a topic that we cover? Let us know at newsroom@technologyreview.com and we'll answer some of your questions in our next magazine issue.
We’re starting to give AI agents real autonomy, and we’re not prepared for what could happen next.
Operator, an agent from OpenAI, can autonomously navigate a browser to order groceries or make dinner reservations. Systems like Claude Code and Cursor’s Chat feature can modify entire code bases with a single command. Manus, a viral agent from the Chinese startup Butterfly Effect, can build and deploy websites with little human supervision. Any action that can be captured by text—from playing a video game using written commands to running a social media account—is potentially within the purview of this type of system.
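Under the hood, most of these systems share the same simple pattern: the model reads its current situation as text, proposes the next action as text, that action gets executed, and the result is fed back into the conversation. The sketch below illustrates the idea, with the caveat that everything in it is an illustrative assumption: call_llm() is a hypothetical stand-in for whatever model API a real agent would use, and the "SHELL:"/"DONE" action format is invented here, not how Operator, Claude Code, or Manus actually work.

```python
import subprocess


def call_llm(conversation: list[str]) -> str:
    """Hypothetical stand-in for a real chat-completion API call."""
    raise NotImplementedError("Plug in a real model client here.")


def run_agent(goal: str, max_steps: int = 10) -> list[str]:
    """Core agent loop: the model proposes a step as text, we run it, and feed back the result."""
    history = [f"GOAL: {goal}"]
    for _ in range(max_steps):
        step = call_llm(history)            # model decides what to do next, expressed as text
        if step.startswith("DONE"):         # model signals the goal is complete
            break
        if step.startswith("SHELL:"):       # model asks to run a shell command
            result = subprocess.run(
                step.removeprefix("SHELL:").strip(),
                shell=True, capture_output=True, text=True,
            )
            history.append(f"OUTPUT: {result.stdout or result.stderr}")
        else:
            history.append(f"NOTE: {step}")  # anything else is kept as context for the next call
    return history
```

The loop is what makes these systems both powerful and hard to constrain: whatever the model writes, the harness will try to carry out.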
LLM agents don’t have much of a track record yet, but to hear CEOs tell it, they will transform the economy—and soon.
Scholars, too, are taking agents seriously. “Agents are the next frontier,” says Dawn Song, a professor of electrical engineering and computer science at the University of California, Berkeley. But, she says, “in order for us to really benefit from AI, to actually [use it to] solve complex problems, we need to figure out how to make them work safely and securely.”
That’s a tall order, because, like the chatbots built on LLMs, agents can be chaotic and unpredictable.
As of now, there’s no foolproof way to guarantee that AI agents will act as their developers intend or to prevent malicious actors from misusing them. And though researchers like Yoshua Bengio, a professor of computer science at the University of Montreal and one of the so-called “godfathers of AI,” are working hard to develop new safety mechanisms, they may not be able to keep up with the rapid expansion of agents’ powers. “If we continue on the current path of building agentic systems,” Bengio says, “we are basically playing Russian roulette with humanity.”
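To make the problem concrete, here is one simple mitigation developers reach for today: restrict the agent to an allowlist of low-risk actions and require human sign-off for anything riskier. This is a minimal sketch under assumed action names and an assumed policy; it is not a description of any particular product, nor of the stronger safety mechanisms researchers like Bengio are pursuing, and it plainly does not offer a guarantee.

```python
ALLOWED_ACTIONS = {"read_file", "search_web"}    # low-risk: auto-approved
CONFIRM_ACTIONS = {"write_file", "send_email"}   # higher-risk: human in the loop


def is_allowed(action: str) -> bool:
    """Return True only if the action is low-risk or a human explicitly approves it."""
    if action in ALLOWED_ACTIONS:
        return True
    if action in CONFIRM_ACTIONS:
        reply = input(f"Agent wants to perform '{action}'. Approve? [y/N] ")
        return reply.strip().lower() == "y"
    return False  # anything unrecognized is refused by default


def gate(action: str) -> str:
    """Check an agent's proposed action against the policy before any tool runs."""
    if not is_allowed(action):
        return f"REFUSED: {action}"
    return f"APPROVED: {action}"  # a real system would dispatch to the actual tool here
```

The weakness is obvious: the gate only sees the action labels it was told to expect, and an agent (or an attacker prompting it) can cause harm through actions that look benign, which is why researchers argue stronger guarantees are needed.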
⚡ What's Next in Tech Readers: Claim your special, FREE 30-day trial subscription today.
Get ahead with these related stories:
Image: Patrick Leger