Last Week in AI, Edition #5: Google’s Agent IDE, AWS Agent Rules, Transformers v5


December 8, 2025

Welcome back to Last Week in AI, your weekly roundup of what engineers, builders, and hiring leaders need to know.

The past two weeks brought meaningful updates across agent-native development, delivery governance, and multi-cloud orchestration. Google announced a new agent-first IDE, AWS added runtime policy enforcement for agents, Hugging Face shipped Transformers v5 with its first built-in model server, and Akuity expanded GitOps pipelines to support multi-cloud workflows.

Here’s more on what shipped:

1 | Google announces Antigravity, an agent-first development platform

Google introduced Antigravity, a new developer environment built from the ground up for autonomous AI agents.

Agents operate across the editor, terminal, and browser, maintain state across tasks, and can be managed via a dedicated control surface. Projects run locally with full version control.

Image: Google for Developers

This marks a shift from single-assistant interactions to coordinated, multi-agent development workflows. Developers focus less on typing commands and more on directing stateful, parallel workers that execute and track tasks end to end.
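Antigravity's internal APIs are not public in this summary, so the sketch below is purely illustrative: it uses plain Python (`concurrent.futures`) to model the shift the announcement describes, from issuing one command at a time to directing stateful, parallel workers whose progress persists and can be reviewed. The `AgentTask` class and its fields are assumptions for illustration, not Antigravity's API.

```python
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass, field

# Illustrative only: not Antigravity's API. Models the idea of stateful,
# parallel agent workers whose progress a developer directs and reviews.

@dataclass
class AgentTask:
    name: str
    steps: list[str]
    log: list[str] = field(default_factory=list)  # state persisted per task

    def run(self) -> "AgentTask":
        for step in self.steps:
            # A real agent would act in the editor/terminal/browser here.
            self.log.append(f"done: {step}")
        return self

tasks = [
    AgentTask("fix-flaky-test", ["reproduce", "patch", "verify"]),
    AgentTask("bump-deps", ["audit", "upgrade", "run-ci"]),
]

# Workers run in parallel; each keeps its own task log (its "state").
with ThreadPoolExecutor() as pool:
    finished = list(pool.map(AgentTask.run, tasks))

for t in finished:
    print(t.name, "->", len(t.log), "steps complete")
```

The developer's job in this model is the loop at the bottom: inspecting what each worker did, not typing each step.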

2 | AWS adds agent policies and launches Kiro

At re:Invent 2025, AWS introduced AgentCore policy controls, allowing teams to define hard rules for how agents operate — similar to IAM roles but for runtime behavior. These policies let developers constrain what AI agents can access, call, or modify.
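AWS's actual policy syntax isn't reproduced here, so the snippet below is a hedged Python sketch of what runtime enforcement amounts to: an explicit allowlist that the agent runtime consults before every action, denying anything not granted. The names `ALLOWED_ACTIONS`, `is_allowed`, and `invoke_tool` are illustrative, not AgentCore's API.

```python
# Illustrative sketch, not AgentCore's policy language or API.
# The idea: like IAM for identities, a runtime policy states which
# actions an agent may take, and the runtime checks before acting.

ALLOWED_ACTIONS = {
    ("orders-agent", "db:read"),
    ("orders-agent", "http:get"),
    # note: ("orders-agent", "db:write") is absent, so writes are denied
}

def is_allowed(agent: str, action: str) -> bool:
    """Return True only if (agent, action) is explicitly permitted."""
    return (agent, action) in ALLOWED_ACTIONS

def invoke_tool(agent: str, action: str) -> str:
    """Gate every tool call behind the policy check."""
    if not is_allowed(agent, action):
        raise PermissionError(f"{agent} may not perform {action}")
    return f"{agent} performed {action}"

print(invoke_tool("orders-agent", "db:read"))
```

The design choice worth noting is default-deny: the agent can only do what the policy grants, which is what makes this "hard rules" rather than prompt-level guidance.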

Image: AWS re:Invent

AWS also launched Kiro, a long-running AI assistant that understands your architecture, backlog, and codebase. It can handle multi-repo upgrades or work interactively on scoped changes.

Together, these tools give teams more explicit control and make it easier to assign durable work to agent systems that operate autonomously over time.

3 | Hugging Face releases Transformers v5 with built-in model server

Hugging Face released Transformers v5.0, a major update focused on simplification and deployment.

Image: Hugging Face

The library now runs solely on PyTorch, with a leaner codebase and fewer dependencies.

The release adds a new CLI tool, transformers serve, which lets developers expose any compatible model behind an OpenAI-style API. It supports batching, paged attention, and 4-bit/8-bit quantization — no custom inference stack required.
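Because the server exposes an OpenAI-style API, any OpenAI-compatible client can talk to it. The sketch below builds the standard chat-completions payload such a client would POST; the base URL and model name are assumptions for illustration, not documented defaults — check `transformers serve --help` for the actual bind address.

```python
import json

# Sketch of talking to a local `transformers serve` endpoint via its
# OpenAI-compatible chat API. The host/port and model name below are
# assumptions for illustration.

BASE_URL = "http://localhost:8000/v1"  # assumed bind address

def build_chat_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-style chat-completions payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_chat_request("my-org/my-finetuned-model", "Summarize GitOps.")
print(json.dumps(payload, indent=2))

# With the server running, a client sends:
#   POST {BASE_URL}/chat/completions  with the JSON body above.
```

The point of the OpenAI-compatible surface is exactly this: existing client code keeps working, so swapping a hosted model for a self-served fine-tune is a URL change, not a rewrite.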

This gives teams a faster way to go from fine-tuned model to usable endpoint with minimal integration overhead.

4 | Akuity adds multi-cloud promotion to GitOps workflows

Akuity expanded its GitOps platform to support multi-environment promotions across cloud targets and workload types.

Teams can now declaratively roll out infrastructure and application changes — from Terraform-managed resources to Kubernetes deployments — with audit trails and policy gates.

The update supports promotions across AWS, GCP, on-prem clusters, and more, consolidating release workflows into a single, structured layer.

It reduces the need for glue scripts or pipeline workarounds when shipping across environments.
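Akuity's actual configuration format isn't shown in this summary, so the following is a hedged Python sketch of the core promotion idea: a change moves environment to environment only after its policy gates pass, and every decision lands in an audit trail. `ENVIRONMENTS`, `gates_pass`, and `promote` are illustrative names, not Akuity's API.

```python
from datetime import datetime, timezone

# Illustrative only: not Akuity's configuration or API. Sketches the
# promotion model: changes advance one environment at a time, each
# hop passes policy gates, and every decision is audited.

ENVIRONMENTS = ["dev", "staging", "prod"]
AUDIT_LOG: list[dict] = []

def gates_pass(change: str, env: str) -> bool:
    # Real gates: test results, approvals, policy checks. Stubbed here.
    return True

def promote(change: str, source: str, target: str) -> bool:
    """Promote a change one environment forward, recording the decision."""
    assert ENVIRONMENTS.index(target) == ENVIRONMENTS.index(source) + 1, \
        "promotions advance one environment at a time"
    allowed = gates_pass(change, target)
    AUDIT_LOG.append({
        "change": change, "from": source, "to": target,
        "allowed": allowed, "at": datetime.now(timezone.utc).isoformat(),
    })
    return allowed

promote("app-v1.4.2", "dev", "staging")
promote("app-v1.4.2", "staging", "prod")
print(len(AUDIT_LOG), "audited promotions")
```

The structural point is that the promotion path and its gates live in one declared place, which is what replaces the per-pipeline glue scripts the update targets.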

The through-line

Developer workflows are shifting from single-task implementation to coordinating larger systems.

Agents now persist across sessions. Policies define what they can do. Model deployment is becoming an API call. Infra promotion spans multiple clouds by default.

More of the system is now managed in code — by developers — at the point where behavior, rollout, and coordination are first defined.

What’s new from HackerRank 

In case you missed it: Edition #1 (10/9) | Edition #2 (10/24) | Edition #3 (11/6) | Edition #4 (11/19)

