Pillar 4: High-Performing Human and Agent Teams
This is the fourth article in our six-part series on the core pillars of Enterprise Agentic AI. The journey so far has taken us through Safety and Trust (responsible agent design), Control (centralised oversight), and Quality (performance monitoring and continuous improvement). Now we reach what may be the most transformative and complex frontier: creating high-performing hybrid teams of humans and agents. This fourth article is written collaboratively by Rob Price, co-founder of Futuria, Chris Nicholls, Chief People Officer and HR professional, and Tom Geraghty, founder and CEO of Psychological Safety (psychsafety.com).
This is not a side note. This is the new operating model.
To unlock value, productivity, and resilience at scale, organisations must design environments where agents are not standalone tools but collaborative, evolving, and trusted teammates — working with humans and alongside other agents, across different systems and domains.
1. Train and Develop Agents Continuously — Just Like People
Agents are not one-shot automations. Like employees, they require ongoing learning and development, updated contextual knowledge, and clear feedback on performance. Without this they degrade, or worse, make decisions based on stale assumptions. Development pipelines must ensure agents are re-briefed, re-trained, and re-deployed in line with changes to the business, its processes, systems, and data, at the same time as the employees they work alongside.
Importantly, this applies to both individual agents and multi-agent teams, where the training must also cover collaboration patterns and role clarity.
Training hybrid teams will require specific onboarding protocols for integrating new agents into existing human teams, and vice versa. Organisations will need to develop role clarity frameworks that define which tasks remain human-only, agent-only, or collaborative. It becomes important to document optimal human-to-agent ratios for different work types and to maintain knowledge transfer protocols for when agents are updated, replaced, or moved across departmental lines.
These role frameworks must also be underpinned by capability and competency models that make sense of role content for both agents and employees, and that allow learning and development approaches and content (including training) to be created to address the changing skill mix in roles.
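To make this concrete, here is a minimal sketch, in Python with entirely hypothetical names, of how a role clarity framework might be encoded so that task ownership, required competencies, and target human-to-agent ratios can be versioned and reviewed alongside agent re-deployments:

```python
from dataclasses import dataclass, field
from enum import Enum

class Ownership(Enum):
    HUMAN_ONLY = "human_only"
    AGENT_ONLY = "agent_only"
    COLLABORATIVE = "collaborative"

@dataclass
class RoleDefinition:
    """One task or responsibility within a hybrid team's role framework."""
    task: str
    ownership: Ownership
    required_competencies: list[str] = field(default_factory=list)
    # Target ratio of humans to agents for collaborative tasks, e.g. (1, 3).
    human_agent_ratio: tuple[int, int] | None = None

# Hypothetical example: a claims-handling team's framework, reviewed
# whenever agents are re-trained, replaced, or moved across departments.
framework = [
    RoleDefinition("final claim approval", Ownership.HUMAN_ONLY,
                   ["regulatory judgement"]),
    RoleDefinition("document triage", Ownership.AGENT_ONLY,
                   ["classification"]),
    RoleDefinition("customer correspondence", Ownership.COLLABORATIVE,
                   ["tone of voice", "summarisation"],
                   human_agent_ratio=(1, 3)),
]
```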
2. Agent-to-Agent Interoperability: Beyond Tech Stack Boundaries
Agent collaboration will not be confined to a single platform or vendor. Enterprises already span multiple tech stacks — cloud platforms, LLMs, RPA systems, workflow engines, CRM suites, and more.
So how will agents:
Discover each other’s capabilities?
Share state, goals, and decisions?
Delegate tasks or escalate issues across domains?
This demands standardised agent-to-agent interaction protocols, treated with the same importance we once placed on APIs. But no dominant standard yet exists in today's landscape, beyond emergent interoperability efforts such as A2A (Agent2Agent, from Google) and MCP (Model Context Protocol, from Anthropic).
We may need:
Communication agents, to translate between platforms and LLM types
Broker or orchestration agents, to manage coordination and prioritisation
Enterprise context agents, maintaining a shared knowledge layer between all actors.
These meta-agents could enable interoperability while maintaining security, privacy, and governance boundaries. Designing for explicit agent interactions across technology silos will be a critical capability for future operating models.
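As an illustration only, the broker pattern might look something like the following sketch. All names here are hypothetical, and a real implementation would sit on top of a protocol such as A2A or MCP rather than in-process registration:

```python
from typing import Protocol

class Agent(Protocol):
    name: str
    capabilities: set[str]
    def handle(self, task: dict) -> dict: ...

class BrokerAgent:
    """Routes tasks across platform boundaries based on advertised capabilities."""
    def __init__(self) -> None:
        self.registry: list[Agent] = []

    def register(self, agent: Agent) -> None:
        # In practice, discovery would happen over a protocol like A2A
        # rather than by in-process registration.
        self.registry.append(agent)

    def delegate(self, task: dict) -> dict:
        needed = task["capability"]
        for agent in self.registry:
            if needed in agent.capabilities:
                return agent.handle(task)
        # No capable agent found: escalate to a human rather than fail silently.
        return {"status": "escalated", "reason": f"no agent offers '{needed}'"}
```

The design point is that discovery and delegation are explicit: when no capable agent exists, the broker escalates to a human instead of failing silently inside a technology silo.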
In addition, human roles across multi-agent teams are likely to evolve. Problem-solving and critical-reasoning skills will be tested as the boundary question of "what business issues are we trying to address?" is pushed outward by the new opportunities that interoperable hybrid agent teams create. Team structures that allow flexible placement of humans and agents into roles may trump more rigid structures.
3. Design Human-Agent UX with Triggers, Modes, and Patterns
A well-designed user interface isn’t just about clarity — it’s about timing, tone, and trust. What matters is not only what the agent says or does, but when and why it chooses to engage the human.
There are multiple trigger points for human-agent interaction:
Event-driven: The agent completes a task or detects a condition (e.g. anomaly, approval request)
Chat-initiated: The human queries the agent directly
Agent-initiated: The agent reaches out to prompt, clarify, warn, or escalate
Scheduled: Regular summaries, reviews, or handoffs between human and agent at defined intervals.
Each mode requires thoughtful interface design, ensuring humans understand:
What the agent knows (and doesn’t)
What the agent is doing (and why)
When and how to intervene.
This is especially complex in multi-agent scenarios, where agents may surface conflicting options or partial answers. UX patterns must evolve to support team-level transparency, not just individual tool outputs.
There is a balance to be found between 'watching the agents work', with explainability and clarity about how they reached an answer, and simply needing to achieve the work output in the shortest time available. Realistically, humans cannot 'see' everything, given the speed at which agents operate.
Teams will need to establish communication protocols and etiquette standards for human-agent interactions. It becomes crucial to define conflict resolution procedures for when humans and agents disagree on priorities or approach. Organisations should create decision-making hierarchies that clarify authority structures, especially when agents can flag performance issues. Teams will also need to design real-time collaboration mechanics for simultaneous work on deliverables, including version control and handoff protocols.
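One way to reason about these trigger points, sketched below in Python with hypothetical names, is to model each interaction explicitly so the interface layer can decide how urgently to engage the human:

```python
from dataclasses import dataclass
from enum import Enum, auto

class Trigger(Enum):
    EVENT_DRIVEN = auto()     # task completed or condition detected
    CHAT_INITIATED = auto()   # human queried the agent directly
    AGENT_INITIATED = auto()  # agent prompts, clarifies, warns, or escalates
    SCHEDULED = auto()        # regular summaries, reviews, or handoffs

@dataclass
class Interaction:
    trigger: Trigger
    agent: str
    summary: str        # what the agent is doing, and why
    confidence: float   # what the agent knows (and how well it knows it)
    needs_human: bool   # whether intervention is required

def routing_priority(i: Interaction) -> str:
    """Decide how urgently the UX should surface this interaction."""
    if i.needs_human and i.confidence < 0.5:
        return "interrupt"  # low confidence plus required input: interrupt now
    if i.needs_human:
        return "notify"     # required input, but can wait for the human's attention
    return "log"            # transparency without added cognitive load
```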
4. Support the Human Side of Change
The most common failure mode in early agent deployments isn’t technical — it’s cultural.
Whilst it is still early days, agents are often launched without sufficient:
Onboarding for human teams
Explanation of value and limits
Mechanisms for human feedback, oversight, and control.
Enterprises must invest in change management as a continuous capability, helping people adapt their workflows, roles, and expectations as agents evolve. This includes:
Educating managers on new patterns of delegation and trust
Enabling staff to give structured feedback to agents
Reinforcing that humans are augmented, not displaced.
Hybrid teams don’t just reshape workflows; they reshape our identities as team members. We (in part) define ourselves by our skills and roles. If an “agent teammate” takes on tasks that once conferred someone’s value, the risk isn’t just displacement: it’s a significant loss of status and purpose. Organisations that succeed will be those that openly acknowledge this and create new career paths such as agent supervisors, hybrid workflow designers, or roles focused on agent oversight and integration. The future of work won’t just be about working alongside AI: it will be about building professions (and our career collateral) around it. Agents, or agent teams, that can be taught, will only evolve if their human colleagues are motivated to teach them (more on this in Pillar 6).
Organisations must address psychological safety concerns when humans work alongside algorithmic teammates. It becomes important to develop learning and development programs for effective human-agent collaboration skills and evolve career development paths to include hybrid team competencies. Teams should manage cognitive load by preventing human fatigue from constant context-switching between human and agent interactions. Throughout the integration process, it's essential to maintain human agency and decision-making confidence.
5. Build Shared Accountability in Hybrid Teams
As agents integrate into core operations, we move from tool usage to shared performance.
Agents should be able to:
Raise flags if human input is delayed or inconsistent
Log performance issues in multi-agent workflows
Adapt behaviour in response to both system and human signals.
This opens up a fascinating — and sensitive — frontier: Can agents report on human performance?
The answer will depend on context and culture. In regulated environments, an agent may need to report that a manual check was skipped. In collaborative settings, it may simply nudge or shadow. But either way, structured mechanisms for bi-directional accountability will become increasingly necessary — supporting better outcomes, not blame.
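A minimal sketch of what such a bi-directional mechanism might record, deliberately framed around the workflow step rather than the individual (all names here are hypothetical):

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class AccountabilityFlag:
    """A learning-oriented record of a workflow issue, raised by agent or human.

    Framed around the workflow step, not the person: Just Culture treats
    most errors as systemic rather than individual failings.
    """
    raised_by: str           # agent id or human role, not an individual's name
    workflow_step: str       # e.g. "manual compliance check"
    observation: str         # what happened, factually stated
    raised_at: datetime
    regulated: bool = False  # regulated contexts may require formal reporting

def raise_flag(step: str, observation: str, raised_by: str,
               regulated: bool = False) -> AccountabilityFlag:
    # In a real system this flag would be routed to a compliance log in
    # regulated contexts, or to the team's shared review queue otherwise;
    # here we simply construct and return it.
    return AccountabilityFlag(raised_by, step, observation,
                              datetime.now(timezone.utc), regulated)
```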
One of the biggest risks in human–agent collaboration isn’t technical failure, but human silence. If people feel monitored, judged, or second-guessed by agents, they are likely to adapt their behaviour to reduce risk: by withholding concerns, gaming the system (consciously or not), or bypassing agents altogether. This can undermine the whole point of hybrid teamwork.
Psychological safety (the belief that it’s safe to speak up, raise concerns, suggest ideas and admit mistakes) must extend to hybrid teams. That means designing agents and workflows where it’s not only possible but encouraged to question outputs, override decisions, and report problems without fear of blame. Just as in aviation and healthcare, the ability to challenge across authority gradients is essential. Agents change the gradient (we don’t necessarily know how yet) but the principle stays the same.
“Shared accountability” between humans and agents mustn’t slip into shared blame. If agents log human performance issues, we need to apply the principles of Just Culture and Local Rationality: recognising that most (if not all) errors are systemic rather than individual failings. If accountability mechanisms become punitive, they’ll drive defensiveness, silence, and workarounds. If they’re framed around learning, they’ll surface exactly the insights that hybrid teams need to improve.
Furthermore, traditional team performance metrics fail to capture hybrid team dynamics. Organisations will need to establish new KPIs that measure collaborative output rather than individual contributions. It becomes important to track attribution patterns to understand how human and agent contributions combine to create value. Teams should monitor cohesion indicators like conflict resolution speed, knowledge sharing effectiveness, and adaptation to changing requirements. The goal is to measure both productivity gains and human satisfaction to ensure sustainable performance improvements.
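Purely as an illustration, KPIs of this kind might be captured together as a single team-level structure, so collaborative output, attribution, and cohesion are tracked side by side rather than as individual league tables (the field names below are assumptions, not an established standard):

```python
from dataclasses import dataclass

@dataclass
class HybridTeamKPIs:
    """Team-level metrics: collaborative output, not individual scores."""
    collaborative_output: float       # e.g. deliverables completed per sprint
    human_contribution_share: float   # attribution: fraction of value from humans
    agent_contribution_share: float   # ...and from agents (shares sum to ~1.0)
    conflict_resolution_hours: float  # cohesion: mean time to resolve disagreements
    knowledge_sharing_events: int     # handoffs, briefings, agent re-training sessions
    human_satisfaction: float         # survey score, to keep gains sustainable
```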
Building trust and confidence is also essential. Whilst confidence can be engendered by achieving clear outcomes and logical success factors, trust inevitably has to be earned. Conventional human-only teams earn it through a variety of formal and informal means. In hybrid operating teams, predictability of outcome, mutual success, and engagement in the process of reaching a solution are critical. Equally important are the safeguards of effective hybrid teams: not exposing or deliberately undermining each other's performance, protecting opportunities to learn and develop, and adhering to ethical standards.
6. Do Team Design Principles Still Apply?
In the human world, we’ve developed robust models for designing effective teams. Approaches like Belbin’s Team Roles, Team Topologies, and other frameworks based on psychological diversity, communication styles, and responsibility mapping have helped leaders build cohesive, adaptable teams.
So what happens when some team members aren’t human?
Some elements still hold:
Agents can adopt functional roles (e.g. implementer, analyst, coordinator)
Workflows can be designed around cognitive complementarity — playing to different strengths
Systems like Team Topologies still help us think about team interfaces, autonomy, and flow.
But others need rethinking:
Psychometric profiling is irrelevant to agents — but version, model type, and confidence metrics matter
Trust is built not through personality, but predictability, transparency, and performance
Feedback loops must account for machine logic and human emotion, simultaneously.
We don’t need to abandon human team-building wisdom — but we do need to evolve it for hybrid environments. Future leaders will need fluency in both worlds.
Agents promise efficiency, but poorly designed interactions can actually increase cognitive load rather than reduce it. Humans can quickly become fatigued by constant context switching, opaque decision-making, or the need to second-guess agent recommendations. This is the classic “out-of-the-loop” or “automation paradox” problem seen in aviation automation: the more advanced the system, the more tempting it is to disengage, until the moment a human must quickly step in to avoid disaster.
The answer isn’t just more training, but better design. Hybrid teams must build graceful failure into workflows, making it obvious when an agent is uncertain, when intervention is required, and how to safely re-engage humans without pushing them out of the loop.
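A minimal sketch of that graceful-failure idea, assuming a hypothetical confidence score on each agent recommendation, makes uncertainty an explicit branch in the workflow rather than something the human must infer:

```python
def act_on_recommendation(recommendation: dict,
                          confidence_floor: float = 0.8) -> str:
    """Make agent uncertainty explicit instead of silently proceeding."""
    confidence = recommendation["confidence"]
    if confidence >= confidence_floor:
        # High confidence: proceed, but keep the human informed via the log.
        return f"auto-applied (confidence {confidence:.2f}); logged for review"
    if confidence >= 0.5:
        # Uncertain: pause and re-engage the human with the agent's reasoning,
        # so they step back in with context rather than from cold.
        return f"paused for human review: {recommendation['rationale']}"
    # Very low confidence: hand the whole task back to the human,
    # with the agent flagging what it could not determine.
    return "handed back to human with the agent's open questions"
```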
Leaders will need to develop new management practices for supervising hybrid teams, including agent performance coaching and hybrid workflow design. Organisations should create institutional memory systems that balance agent-maintained knowledge with human expertise. It becomes important to establish meeting dynamics and participation protocols when agents are team members, not just tools.
The blending and mixing of teams will still need care and thought as before – that doesn’t change. What does change is that the content of the skills and capabilities will be different – and likely expressed differently as roles are designed for the new human part of the hybrid team. Leaders will need to pay more attention to briefing effectively and setting goals that are necessarily different, but still tangible and achievable for the team overall and for the individual humans within it.
Leaders will need to be much more proactive in creating learning and development opportunities for human team members to adjust and adapt to the changes in the skills and capabilities they need to deploy. For HR professionals, the shift in skills and capabilities points to a rethink of traditional talent models, as they address different patterns of skills acquisition, development, and utilisation. This will likely impact performance goal setting and measurement, talent strategies, and succession planning as roles evolve and change. All of this needs to be considered alongside appropriate employee communications and engagement.
7. Lay the Groundwork for Scalable Team Patterns
To scale effectively, we’ll need repeatable patterns of hybrid team design:
Shared workflows
Coordination frameworks
Onboarding and knowledge transfer routines
Multi-agent troubleshooting and escalation paths.
These patterns should be grounded in:
Safety and Trust, established through responsible agent development
Control, enforced via governance and repositories
Quality, driven by performance benchmarking and feedback loops.
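By way of illustration, a repeatable pattern such as a multi-agent escalation path might be expressed as declarative configuration, so the same pattern can be stamped out across teams; the step and role names here are hypothetical:

```python
# A reusable multi-agent escalation path, expressed as data so that new
# hybrid teams can adopt the same pattern with their own role names.
ESCALATION_PATH = [
    {"step": "retry",       "owner": "originating agent", "max_attempts": 2},
    {"step": "peer_review", "owner": "second agent",      "timeout_minutes": 10},
    {"step": "broker",      "owner": "orchestration agent"},  # cross-domain reroute
    {"step": "human",       "owner": "team supervisor"},      # final authority
]

def next_step(current: str) -> str | None:
    """Return the escalation step after the current one, if any.

    Raises ValueError if the current step is not part of the path.
    """
    names = [s["step"] for s in ESCALATION_PATH]
    i = names.index(current)
    return names[i + 1] if i + 1 < len(names) else None
```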
High-performing hybrid teams won’t happen by accident. They will be designed — operationally, architecturally, and culturally. And those who master this new way of working will outperform those who simply layer AI into legacy roles and processes.
The real test of high-performing hybrid teams won’t be how seamlessly agents talk to each other, but how confidently and safely humans work alongside them. If people don’t feel able to question, challenge, make mistakes and learn in these new teams, the technology will deliver efficiency at the expense of resilience. We must not mistake efficiency for resilience.
In our next article, we’ll turn to Infrastructure — the practical, often invisible foundations that make scalable Agentic AI possible: orchestration, memory, context layers, data pipelines, interoperability layers, monitoring tools, and more. Because even the best teams, human or agent, can’t perform without the right environment to support them.
References
Corporate Digital Responsibility Manifesto https://corporatedigitalresponsibility.net/cdr-manifesto
AI for the Rest of Us meet-up June 2025 report https://www.linkedin.com/posts/rob-price-4a44884_meetups-responsible-aiagents-activity-7341763195696496640-m9JB?utm_source=share&utm_medium=member_desktop&rcm=ACoAAADIMLwB3sMRAwvp8vwrXA7QJGoNpEopYac
The Usher-Middleton scale for Multi-Agent Teams https://www.linkedin.com/posts/rob-price-4a44884_multiagentteams-agenticai-activity-7340685962546454528-xHXy?utm_source=share&utm_medium=member_desktop&rcm=ACoAAADIMLwB3sMRAwvp8vwrXA7QJGoNpEopYac
Futurise Podcast https://open.spotify.com/show/3BFEdGmKB1qiQptZf37aSc?si=48f3368a4dcb453a
Scaling Multi-Agent Teams https://www.linkedin.com/pulse/scaling-multi-agent-teams-rob-price-hmsae/?trackingId=jBLQGeV2zOubHRsWps7Qag%3D%3D
Pillar 1 - Safety & Trust https://www.linkedin.com/posts/rob-price-4a44884_safetyandtrust-control-quality-activity-7358430398210547712-seiD/
Pillar 2 - Control & Governance https://www.linkedin.com/posts/rob-price-4a44884_quality-activity-7360952007643328512-RcYg/
Pillar 3 - Quality & Performance https://www.linkedin.com/posts/rob-price-4a44884_agenticai-activity-7363510347867029504-2taM/
Futuria is a UK- and US-based business that builds Agentic AI multi-agent team solutions for high-assurance, highly secure organisations https://futuria.ai
This article was written with some occasional use of ChatGPT and Claude.