Unless and until agents really do work at an expert level, the benefits of AI use will be contingent on the skills of the AI user, the jagged abilities of the particular AI, the process into which you integrate it, your experience with the AI system, and the task itself. Users, especially experts, are often quite capable of figuring out ways to use AI systems for significant productivity and performance gains, but it takes some time and effort. Anyone can figure out how to use AI for rote tasks quite easily, but discovering optimal approaches will still require R&D.
Even when agents can work at an expert level, we can't lose track of the human experience. That's why we built our own AI dev tools from scratch: to prioritize the developer's capability and engagement, and to make developers better able to engage with complexity. Productivity is all well and good, but at the end of the day it's human beings using AI. That's why our design priority remains reflecting the improvements in AI back onto humans.
Couldn't agree more. And while we don't know how the former will turn out (i.e. whether or not AI agents will proliferate and work at an expert level across different fields), we can say with a fairly high degree of confidence that prompt engineering (i.e. giving chatbots/agents context) will be increasingly useful.
I think Agents and RPA bots will go further together. Companies such as UiPath, Automation Anywhere, and Blue Prism should have the experience, technical know-how, and client base to embed Agents into their tech stacks.
Spot on, professor.
Those experts who do figure out ways to use AI systems productively, especially when they are software developers, can work with industry to expose those ways to non-experts through AI-powered applications. In effect, they package up productivity by constraining the open-endedness of GenAI into informed experiences. This is what Elena Oncevska Ager (educator) and I (software) have been building together and using with learners at Noticing Network: pedagogically informed support for learning. The feedback so far is that it is usable, useful, and authentic, and doesn't suffer from cognitive offloading. Having said that, isn't this what software has always done: make great but complex power accessible? Why do we presume that the right way to interact with AI is directly, when we don't presume that for, e.g., AWS?
In a world where you can pretty much look up everything, it is perhaps the knowledge we carry inside our heads that is more valuable than ever.
Kierstin Geary - ^ on the effort to train a fully autonomous agent. We are at a point in time where complex AI workflows often require the human user to orchestrate and evaluate critical outputs.
AI can solve zero percent of "hard" problems. AI can never replace the artist, but it can (eventually) replace the artisan. And if one really thinks about it, this is obvious and by design: AI can only learn what it has been taught, so it cannot arrive at novel, nuanced, or truly creative solutions.