Technology Radar — April 2025 review — Part 1
A review of the recent Technology Radar April 2025 update — I review at least three items from Techniques and Tools in this part
Originally published on Medium: https://medium.com/cloudweed/technology-radar-april-2025-review-part-1-dbe8dee83219
Yes! Vol. 32 is out now and this is my review. The Tech Radar gives the software engineering community a very good glimpse of which technologies, techniques, patterns, tools, languages and frameworks are recommended as Adopt, Trial, Assess and Hold across its four quadrants.
You can also create your own radar (more on that at the end of this article). These are, however, only guidelines as they stand, based on the research performed by ThoughtWorks. Needless to say, these recommendations don't suit every organisation; it depends on your needs. What you are encouraged to do, though, is create your own Technology Radar; see thoughtworks.com for more details.
This article gives you my perspective on the techniques I identify as ready to be adopted and a fit for the current architectural and system-design needs of many organisations, no matter the size of your team, how disruptive your work is or what you are building. You can also subscribe to the radar so you won't miss future editions as they are published.
Check out the interesting themes for this edition as I have summarised here:
The standout theme is the rapid innovation in generative AI, particularly around coding assistants and observability tools. The Radar highlights the trend of supervised agents in coding assistants, where AI tools like Cursor, Cline, Windsurf and GitHub Copilot are integrated into IDEs to assist developers in navigating and modifying code, updating tests, executing commands and fixing errors. Despite these advancements, there is caution about potential complacency with AI-generated code and the need for vigilant code review.
As ThoughtWorks themselves put it: "The Radar is a document that sets out the changes that we think are currently interesting in software development — things in motion that we think you should pay attention to and consider using in your projects. It reflects the idiosyncratic opinion of a bunch of senior technologists and is based on our day-to-day work and experiences. While we think this is interesting, it shouldn't be taken as a deep market analysis."
Birth of Technology Radar
As a supplement, if you want to know about the history of the Technology Radar, the article below will help.
Birth of the Technology Radar
By Darren Smith
Techniques
Interactive radar: https://www.thoughtworks.com/radar/techniques
TRIAL: GraphRAG
GraphRAG is a new technique developed by Microsoft Research to enhance the capabilities of Large Language Models (LLMs) in handling private datasets. It uses LLM-generated knowledge graphs to improve question-and-answer performance.
This pattern involves a two-step approach: first, chunking documents and using an LLM-based analysis to create a knowledge graph; second, retrieving relevant chunks at query time via embeddings and following edges in the knowledge graph to discover additional related chunks.
Baseline Retrieval-Augmented Generation (RAG) struggles with connecting disparate pieces of information and understanding summarised semantic concepts over large datasets. GraphRAG addresses these issues by using knowledge graphs for better context and grounding.
This approach enhances LLM-generated responses and has been beneficial in understanding legacy codebases by using structural information like abstract syntax trees and dependencies to build the knowledge graph.
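The two-step pattern described above can be sketched in a few lines. This is a toy illustration only: the hand-written `chunks` and `graph` stand in for LLM-extracted entities and relations, and simple keyword overlap stands in for embedding similarity.

```python
# Toy sketch of the GraphRAG retrieval pattern: retrieve chunks by
# similarity, then follow knowledge-graph edges to pull in related chunks.

chunks = {
    "c1": "the billing service calls the payments API",
    "c2": "the payments API writes to the ledger database",
    "c3": "the ledger database is backed up nightly",
    "c4": "the marketing site is a static page",
}

# Step 1 (index time): a knowledge graph linking related chunks.
# In GraphRAG this is produced by LLM-based entity/relation extraction.
graph = {"c1": ["c2"], "c2": ["c1", "c3"], "c3": ["c2"], "c4": []}

def similarity(query, text):
    """Stand-in for embedding similarity: fraction of shared words."""
    q, t = set(query.split()), set(text.split())
    return len(q & t) / len(q | t)

def graph_rag_retrieve(query, top_k=1, hops=1):
    """Step 2 (query time): top-k similar chunks, expanded along graph edges."""
    ranked = sorted(chunks, key=lambda c: similarity(query, chunks[c]), reverse=True)
    selected = ranked[:top_k]
    frontier = list(selected)
    for _ in range(hops):
        frontier = [n for c in frontier for n in graph[c] if n not in selected]
        selected.extend(frontier)
    return selected

# "billing" only matches c1 directly, but the graph edge pulls in c2,
# giving the LLM the downstream context a plain RAG lookup would miss.
print(graph_rag_retrieve("how does the billing service work"))
```

The graph expansion step is what distinguishes this from baseline RAG: the second chunk is retrieved not because it resembles the query but because the knowledge graph says it is related to a chunk that does.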
The GraphRAG pattern has gained traction, with tools and frameworks like Neo4j’s GraphRAG Python package emerging to support it. Additionally, Graphiti is seen as fitting a broader interpretation of GraphRAG as a pattern.
Microsoft Research plans to continue developing and applying GraphRAG to various domains, including social media, news articles, workplace productivity, and chemistry. Ongoing work includes improving evaluation metrics and working closely with customers.
TRIAL: Small Language Models
Small Language Models (SLMs) are artificial intelligence models designed to process, understand and generate natural language content with fewer parameters than large language models (LLMs). The key points about SLMs are their size and efficiency, their range of applications, the model compression techniques (such as distillation, pruning and quantisation) used to produce them, and their underlying transformer architecture.
A recent illustration is DeepSeek R1: at 671 billion parameters, the full model is hardly small and requires a mini cluster of eight state-of-the-art NVIDIA GPUs to run. However, DeepSeek R1 is also available in smaller, distilled versions based on Qwen and Llama, which can run on much more modest hardware while still offering significant performance improvements over previous SLMs.
The Radar also highlights other innovations in the SLM space, including Meta's introduction of Llama 3.2 at 1B and 3B sizes, Microsoft's release of Phi-4 with a 14B model, and Google's release of PaliGemma 2, a vision-language model available in 3B, 10B and 28B sizes. These developments indicate a trend towards smaller, more efficient models that continue to push the boundaries of AI performance.
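One of the compression techniques behind these smaller, distilled models is knowledge distillation, where a student model is trained to match a teacher's softened output distribution. A minimal sketch of the distillation loss, in pure Python with illustrative logits; the temperature-squared scaling follows Hinton et al.'s classic formulation:

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw logits to probabilities at a given temperature."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence from the teacher's softened distribution to the student's.

    A higher temperature softens both distributions, exposing the teacher's
    knowledge about *relative* similarities between classes, not just the top one.
    """
    p = softmax(teacher_logits, temperature)  # teacher targets
    q = softmax(student_logits, temperature)  # student predictions
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
    return temperature ** 2 * kl

# A student that mirrors the teacher's preferences incurs little loss;
# one that inverts them incurs much more.
teacher = [4.0, 1.0, 0.2]
close_student = [3.8, 1.1, 0.3]
far_student = [0.2, 1.0, 4.0]
assert distillation_loss(teacher, close_student) < distillation_loss(teacher, far_student)
```

In practice this loss is combined with the ordinary cross-entropy on ground-truth labels, and the student is orders of magnitude smaller than the teacher, which is what makes models like the distilled DeepSeek R1 variants runnable on modest hardware.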
Tools
Interactive radar: https://www.thoughtworks.com/radar/tools
The Tools quadrant is looking good, with nothing on Hold, which means it is all up for grabs in terms of any R&D to discover what suits your team or organisation. Here is my review:
ADOPT: Vite
Vite, a high-performance front-end build tool, has gained widespread adoption and is now recommended by frameworks like Vue, SvelteKit, and React, which has deprecated create-react-app. Recently, Vite received significant investment, leading to the creation of VoidZero to support its development and sustainability.
It is a modern build tool and development server designed to provide a faster, more efficient development experience for web projects. Its main features and benefits include a fast development server, an optimised build process, support for modern JavaScript, a plugin system, ease of use and strong performance.
References
GitHub: vitejs/companies-using-vite, a list of companies using Vite.
TRIAL: Cursor
Cursor is an AI-first code editor and a leader in the AI coding-assistance space. Cursor is known for its effective code context orchestration and support for a wide range of models, including the option to use a custom API key. The Cursor team is innovative, often introducing user-experience features before other vendors. They include an extensive list of context providers in their chat, such as referencing git diffs, previous AI conversations, web search, library documentation and MCP integration.
Cursor stands out for its strong agentic coding mode, which allows developers to guide their implementation directly from an AI chat interface. This mode enables the tool to autonomously read and modify files, as well as execute commands. Additionally, Cursor can detect linting and compilation errors in generated code and proactively correct them.
Key features of Cursor include intelligent code suggestions, code generation, contextual help and documentation, debugging support, code editing and error detection. Cursor is designed to cater to developers of all levels, from novices to seasoned professionals, making coding more efficient and productive.
TRIAL: Software Engineering agents
Over the past six months, the term "software engineering agents" has still not settled into a clear definition. A significant development, however, is supervised agentic modes within the IDE, which enable developers to implement code through chat. Tools like Cursor, Cline and Windsurf lead this space, with GitHub Copilot catching up. These modes, utilising models such as Claude's Sonnet series, enhance coding speed but require small problem scopes for effective review. To avoid complacency, pair programming and disciplined review practices are recommended, especially for production code.
Software engineering agents are autonomous systems designed to assist with various software development tasks. These agents leverage advanced AI models, such as large language models (LLMs), to perform tasks that traditionally require human intervention. Key aspects include the automation of development tasks, tool usage and command execution, error detection and fixing, customisable interfaces, and a growing set of example implementations such as the agentic modes mentioned above.
These agents are designed to enhance productivity, reduce errors, and streamline the software development lifecycle.
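The supervised agentic loop these tools implement (generate a change, run checks, feed errors back, stop for human review) can be caricatured in a few lines. Everything here is a stand-in: the canned `candidate_patches` list replaces LLM calls, and a tiny `exec`-based check replaces real linting and test runs.

```python
# Minimal sketch of a supervised agent loop: propose a change, verify it,
# feed failures back, and hand off for human review once the check passes.

def run_check(code):
    """Stand-in for compiling/linting/testing the generated code."""
    try:
        namespace = {}
        exec(code, namespace)
        assert namespace["add"](2, 3) == 5
        return None  # success: nothing to report
    except Exception as exc:
        return f"{type(exc).__name__}: {exc}"

candidate_patches = [
    "def add(a, b): return a - b",   # first attempt: wrong operator
    "def add(a, b): return a + b",   # corrected after error feedback
]

def agent_loop(patches, max_iters=5):
    for attempt, code in enumerate(patches[:max_iters], start=1):
        error = run_check(code)
        if error is None:
            return code, attempt  # stop here: a human reviews the result
        # In a real agent, the error string is appended to the LLM context
        # so the next generation can react to it.
    raise RuntimeError("agent failed to converge; escalate to developer")

code, attempts = agent_loop(candidate_patches)
print(f"accepted after {attempts} attempt(s)")
```

The important property is the stopping condition: the loop converges on a change that passes its checks, but the result is handed to a human rather than merged automatically, which is exactly the review discipline the Radar cautions is still necessary.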
References
How do AI software engineering agents work?
Coding agents are the latest promising Artificial Intelligence (AI) tool, and an impressive step up from LLMs.
Create Your Radar
You can create your own technology radar and see where your blips sit compared to the ones published by ThoughtWorks. You need to understand what differentiates your context, what makes sense for you and why. Your radar also needs constant review, so it can be adjusted when your team wants to adopt a new framework or technique and has a credible reason or use case for it. Be mindful, too, that you will need to produce some artefacts, including a lightweight proof of concept, so that you do not leave it too late to discover any major constraints with the items on your radar, and you should perform regular market scans.
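As a concrete starting point, ThoughtWorks' open-source build-your-own-radar tool renders a radar from a simple spreadsheet. Below is a sketch of generating such a CSV in Python; the column names follow that tool's documented sheet format, and the blips themselves are illustrative examples only.

```python
# Generate a CSV in the format expected by ThoughtWorks' build-your-own-radar
# tool (columns: name, ring, quadrant, isNew, description). The blips here
# are examples drawn from this review, not a recommendation.
import csv
import io

blips = [
    ("GraphRAG", "Trial", "Techniques", "TRUE", "Knowledge-graph-backed RAG."),
    ("Vite", "Adopt", "Tools", "FALSE", "Front-end build tool."),
    ("Cursor", "Trial", "Tools", "TRUE", "AI-first code editor."),
]

def write_radar_csv(blips, stream):
    writer = csv.writer(stream)
    writer.writerow(["name", "ring", "quadrant", "isNew", "description"])
    writer.writerows(blips)

buffer = io.StringIO()
write_radar_csv(blips, buffer)
print(buffer.getvalue())
```

Keeping the radar as a small, versioned file like this makes the constant-review step above cheap: adding or moving a blip is a one-line change you can discuss in a pull request.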
Have you created and used your own Technology Radar for your project/organisation? It’d be great to hear your feedback and experience (comments welcome)!
muddasirkhan@neoteksoft.com