Hypervelocity Engineering: Start Slow and Accelerate
Following my posts on Becoming an AI Engineering Team and What is Hypervelocity Engineering, I want to offer some practical guidance on how teams can transition to this new paradigm without crashing and burning. While my previous post on Junior Engineers and Hypervelocity Engineering described long-term talent development, this post focuses on immediate, actionable steps your team can take to begin the journey toward hypervelocity responsibly.
This is speculative guidance – though I do have more early signals supporting these approaches than for my junior engineer growth strategies (which will naturally take years to validate). The key insight I'm proposing is simple: move deliberately at first, then accelerate as your team builds capability. We're all experimenting here, and I'm sharing what I believe could help your team navigate these uncharted waters.
Map Your Real Bottlenecks – Beyond Just Coding
The first step for any team looking to harness AI isn't to jump straight into automating production code. It's to understand your whole lifecycle and document your actual bottlenecks – and these often extend far beyond writing code. Is security approval for new services consuming weeks of your schedule? Are you spending excessive time convincing stakeholders of Return on Investment (ROI)? Do Architecture Decision Record (ADR) approval and Pull Request (PR) review cycles create frustrating bottlenecks? Is documentation consistently neglected due to time constraints?
Work with your team to identify which of these friction points could benefit from AI assistance. Often, developing small, targeted AI-enabled tools to address specific bottlenecks can provide immediate value while helping your team learn how to use these systems effectively without compromising quality. Measure these friction points in some way – whether it's time from PR submission to merge, or even just survey results on engineer satisfaction with a process. Gathering initial measurements will help you understand whether your AI improvements are “moving the needle”.
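As one concrete example of a baseline metric, a short script can pull PR cycle times from the GitHub REST API. This is a minimal sketch, assuming a GITHUB_TOKEN environment variable and a hypothetical your-org/your-repo repository – adapt it to whatever bottleneck you're actually measuring:

```python
# Minimal sketch: median PR cycle time via the GitHub REST API.
# Assumes a GITHUB_TOKEN env var; "your-org/your-repo" is hypothetical.
import os
import statistics
from datetime import datetime

import requests

def pr_cycle_times(repo: str, token: str, limit: int = 50) -> list[float]:
    """Return hours from PR creation to merge for recently closed PRs."""
    resp = requests.get(
        f"https://api.github.com/repos/{repo}/pulls",
        params={"state": "closed", "per_page": limit},
        headers={"Authorization": f"Bearer {token}"},
        timeout=30,
    )
    resp.raise_for_status()
    hours = []
    for pr in resp.json():
        if pr.get("merged_at"):  # skip PRs that were closed without merging
            created = datetime.fromisoformat(pr["created_at"].replace("Z", "+00:00"))
            merged = datetime.fromisoformat(pr["merged_at"].replace("Z", "+00:00"))
            hours.append((merged - created).total_seconds() / 3600)
    return hours

if __name__ == "__main__":
    times = pr_cycle_times("your-org/your-repo", os.environ["GITHUB_TOKEN"])
    print(f"median PR cycle time: {statistics.median(times):.1f}h over {len(times)} PRs")
```

Run this before you change anything, and again after each AI-enabled improvement, so you're comparing against a real baseline rather than gut feel.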
Hypervelocity Engineering isn't about coding faster – it's about solving your actual bottlenecks with intelligent assistance.
A potentially valuable application might be in streamlining the creation of comprehensive status reports for stakeholders. A team could experiment with an AI-enabled reporting tool that transforms raw project data into clearly structured updates. You could then compare the time it took before and after to generate this report, and qualitative judgments on whether the AI-enabled report generation meets or exceeds the quality bar of previous reports. This type of relatively "low-risk" application could deliver immediate value while building the team's confidence in working with these systems – providing a gentle on-ramp to more ambitious applications.
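A minimal sketch of what that reporting experiment might look like, using the openai Python package; the model name, prompt, and report sections are illustrative assumptions, not a prescription:

```python
# Hedged sketch of an AI-assisted status report generator.
# Assumes the openai package and an OPENAI_API_KEY env var; the model
# name and section structure are illustrative – use your approved model.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def draft_status_report(project_data: dict) -> str:
    """Turn raw project data into a structured stakeholder update draft."""
    response = client.chat.completions.create(
        model="gpt-4o",  # swap in whichever model your org has approved
        messages=[
            {
                "role": "system",
                "content": (
                    "You draft weekly status reports with sections: Summary, "
                    "Progress, Risks, Next Steps. Be concise; never invent facts."
                ),
            },
            {"role": "user", "content": json.dumps(project_data)},
        ],
    )
    return response.choices[0].message.content

# The output is a draft: a human still reviews it against the raw data
# before it goes to stakeholders.
```

Timing how long report prep takes with and without this tool gives you exactly the before/after comparison described above.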
In our Data Science teams, we've been using HVE to help build simple Gradio data visualization apps that make it easier to label data and identify bad data points. This work is core to improving our results but is often a painful manual process, which made it a natural inroad for HVE: the コスパ (cospa, or cost/performance ratio) was quite high and the risk was low.
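For flavor, here's a minimal sketch of the kind of Gradio labeling helper described above – the CSV path, column names, and labeling scheme are all hypothetical:

```python
# Minimal sketch of a Gradio labeling helper; paths and columns are hypothetical.
import gradio as gr
import pandas as pd

df = pd.read_csv("samples.csv")  # hypothetical dataset to be labeled
df["label"] = ""                 # column the team fills in

def label_row(index: float, label: str) -> pd.DataFrame:
    """Record a label for one row and return the updated table."""
    df.loc[int(index), "label"] = label
    return df

with gr.Blocks() as demo:
    table = gr.Dataframe(value=df, label="Samples")
    idx = gr.Number(label="Row index", precision=0)
    choice = gr.Radio(["good", "bad"], label="Label")
    btn = gr.Button("Apply label")
    btn.click(label_row, inputs=[idx, choice], outputs=table)

demo.launch()
```

A tool like this is throwaway by design – the point is that AI can produce it in minutes, turning a painful manual chore into a quick internal app.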
Be Vigilant About Models and Data
As you begin integrating AI into your workflow, establish clear guardrails. This isn't just about compliance – it's about building sustainable practices that will scale as your AI adoption accelerates.
Never use unauthorized models
Only work with AI systems your organization or customer has explicitly approved.
Protect sensitive data
Ensure that the data you're providing to models is approved for sharing with that system. Note that as of this writing, new Workspaces in VS Code will revert to the default model for the GitHub Copilot Agent, which may not be one you've approved. Watch out for subtle issues like this in your approved tools – they can cause unintended data leakage.
Understand data governance policies
Know the locality and retention policies of any AI system you adopt before sending it data. AI can help you find this information, but make sure you click through to the references – your business is on the line.
Document these decisions as part of your team's established processes, creating a framework that guides both your human teammates and any AI systems you're working with. Use a format like Markdown that both humans and AI can read well, and make sure that your AI is aware of it (either through explicit reference/attachment in the context, or through pointers from copilot-instructions.md for instance). When your team has clear boundaries, both humans and AI can operate with confidence.
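For instance, a guardrails section in .github/copilot-instructions.md might look something like this sketch (the paths and policies are illustrative – substitute your organization's actual rules):

```markdown
<!-- .github/copilot-instructions.md – illustrative excerpt -->
## AI guardrails
- Approved models: see docs/ai-approved-models.md (hypothetical path).
- Never include customer data, credentials, or unreleased financials in prompts.
- All AI-generated code goes through the normal PR review process.
- Before sending a new class of data to any AI system, confirm its locality
  and retention policies against our data governance doc.
```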
Start with Pairing, Not Delegation
While it may be tempting to immediately delegate entire workflows to AI systems, the most successful transitions I’ve seen begin with using AI as a pairing partner. This approach teaches your team how to effectively communicate with AI to achieve optimal results. It builds intuition about which models work best for different types of tasks. And perhaps most importantly, it prevents the skill erosion I warned about in earlier posts by maintaining human oversight and understanding.
Pairing with AI builds the muscles your team needs before delegating whole workflows.
Consider organizing "AI pairing sessions" where team members collaboratively work with an AI to solve a specific problem. Make the explicit goal of these sessions learning how to interact with these systems effectively, and document the prompts and approaches that yield the best results. Readers, what do you think “best results” look like – is it reduction in time-to-feature? Is it consistent quality across the team? Would love to hear your thoughts.
Match Models to Tasks
Your collection of AI tools should be as diverse as your team's responsibilities. Different work demands different AI capabilities:
Strategic thinking & stakeholder engagement: Claude or M365 Copilot might excel here with their nuanced understanding of communication and business context
Technical documentation & planning: Claude or o3 could be your go-to for drafting ADRs and organizing repo documentation
Coding assistance: GitHub Copilot paired with models like o3, o4-mini, or Gemini 2.5 Pro shows particular strength in this domain
Models change fast, and the best model for a task today may be overshadowed next week. Build a culture of re-evaluating your assumptions.
The landscape is evolving rapidly, so what works best today might change in six months. Encourage your team to experiment across platforms to discover which combinations work best for your specific context, and build both a culture and a cadence of re-evaluating tool and model choices regularly.
Re-evaluate tool and model choices regularly
Have a set of tasks that you can test models and tools on, and a way of evaluating the results. Even if your evaluation “metrics” are qualitative judgments by the people on your team, being able to repeat the same experiments on new models and tools will go a long way toward making more intentional decisions as you shift tooling over time. A minimal sketch of such a harness follows.
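This sketch assumes a hypothetical run_model hook into your approved tooling and a simple 1–5 human rating as the “metric” – the task list and rubric are placeholders to adapt:

```python
# Hedged sketch of a repeatable model-evaluation harness. The tasks,
# run_model stub, and 1-5 rubric are all assumptions to adapt.
import csv
from datetime import date

TASKS = [
    "Draft an ADR for moving our batch jobs to a queue-based design.",
    "Write unit tests for the date-parsing helper in utils.py.",
    "Summarize this week's incident reports for stakeholders.",
]

def run_model(model_name: str, task: str) -> str:
    """Stub: wire this to your approved AI tooling."""
    return f"[{model_name} output for: {task}]"

def evaluate(models: list[str], out_path: str = "model_evals.csv") -> None:
    """Run every task against every model and collect 1-5 human ratings."""
    with open(out_path, "a", newline="") as f:
        writer = csv.writer(f)
        for model in models:
            for task in TASKS:
                output = run_model(model, task)
                print(f"\n--- {model} ---\n{task}\n\n{output}\n")
                rating = input("Rate 1 (poor) to 5 (excellent): ")
                writer.writerow([date.today(), model, task, rating])

evaluate(["model-a", "model-b"])  # hypothetical model names
```

The value isn't the tooling – it's that the same tasks get re-run whenever a new model ships, so comparisons stay apples-to-apples even when the scores are subjective.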
Document and Share Your Learning
The most critical element of a successful AI Engineering Team transition is knowledge sharing. Without it, you end up with siloed expertise and inconsistent practices across the team.
A promising approach might be creating an "AI Engineering Playbook" that lives in your team's repo, documenting patterns that seem promising, model selection guidance based on early experiments, and example prompts that have shown potential. This could become a living document that evolves as your team explores AI capabilities and builds a shared understanding of what works in your specific context.
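One possible skeleton for such a playbook – the section names below are suggestions to adapt, not a fixed format:

```markdown
# AI Engineering Playbook (skeleton – adapt to your team)

## Approved models and tools
Which systems are cleared for which data, and who approved them.

## Model selection guidance
What we currently reach for per task type, and when we last re-evaluated.

## Prompt patterns
Prompts and approaches that have worked, with links to what they produced.

## Known failure modes
Where AI assistance has fallen short, so nobody relearns it the hard way.
```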
The teams that win at Hypervelocity Engineering aren't the ones with the fanciest AI tools – they're the ones that build institutional knowledge about how to use them effectively.
Establish Feedback Loops
As your team begins accelerating, establish regular retrospectives specifically focused on AI integration. These conversations should happen frequently at first – perhaps biweekly – then can move to a monthly cadence as practices stabilize. Use them to answer questions like: What processes have successfully incorporated AI? Where has AI assistance fallen short of expectations? Which team members have discovered effective techniques others could adopt? How have the team's velocity and quality metrics evolved?
Make AI integration a regular part of your team's retrospectives. Creating this deliberate space for reflection ensures that knowledge is shared across the team and prevents silos of expertise from forming.
Speculating on the Acceleration Curve
While we're all still experimenting with Hypervelocity Engineering adoption, we can look to historical technology adoption patterns for perspective. Previous technological shifts – from waterfall to agile, from on-prem to cloud, and from manual testing to CI/CD – suggest certain patterns might emerge:
An initial learning phase where productivity may temporarily dip as teams build new muscles
A potential acceleration period as practices mature and confidence grows
Eventually, a new baseline as teams integrate the technology into their normal workflows
Patience in the early phase of adoption pays dividends later – teams that rush to implement new paradigms without building fundamentals often create more problems than they solve.
The history of DevOps adoption might be particularly instructive here. Early DevOps adopters often saw initial friction as they built automation capabilities, followed by dramatic productivity improvements once those systems were in place. However, teams that rushed to implement CI/CD without building the underlying engineering discipline often created more problems than they solved.
Without sufficient data on HVE specifically, caution is warranted. By deliberately starting slow, documenting what works, and gradually expanding AI's role on your team, you can establish the foundations for what might become sustainable acceleration while minimizing risk. Models and tools are also advancing quickly, so by moving deliberately and building confidence now, you'll reach the point of writing production code with a stronger technological foundation beneath you.
In a future post, I'll explore how to measure and evaluate the effectiveness of HVE initiatives and establish repeatable (not necessarily reproducible) experiments to guide your team's continued evolution.
I'd love to hear how your teams are approaching this transition. What bottlenecks have you identified where AI could help? What pairing strategies have proven most effective? How are you measuring your success (or challenges)? Share your experiences in the comments.
#HypervelocityEngineering #AIEngineering #TechLeadership #SoftwareDevelopment #EngineeringExcellence #AIProductivity #TechInnovation #EngineeringTeams