Technology Radar — April 2025 review — Part 1
A review of the recent Technology Radar April 2025 update — I review at least three items from Techniques and Tools in this part
Originally published on Medium: https://medium.com/cloudweed/technology-radar-april-2025-review-part-1-dbe8dee83219
Yes! Vol. 32 is out now and this is my review. The Tech Radar gives the software engineering community a good view of which techniques, platforms, tools, languages and frameworks are placed in the Adopt, Trial, Assess or Hold rings across its four quadrants.
You can also create your own radar; more on that at the end of this article.
These are, however, only guidelines, based on the research performed by ThoughtWorks. Needless to say, these recommendations don't suit every organisation; it depends on your needs. What you are encouraged to do, though, is create your own Technology Radar; see thoughtworks.com for more details.
This article gives you my perspective on the techniques and tools that I consider ready to be adopted and that fit the current architectural and system-design needs of many organisations, regardless of team size, how disruptive your product is or what you are building. You can also subscribe to the radar so you won't miss new editions as they are published.
Check out the interesting themes for this edition as I have summarised here:
The standout theme is the rapid innovation in generative AI, particularly around coding assistants and observability tools. The edition highlights the trend of supervised agents in coding assistants, where AI tools like Cursor, Cline, Windsurf and GitHub Copilot are integrated into IDEs to help developers navigate and modify code, update tests, execute commands and fix errors. Despite these advancements, the authors caution against complacency with AI-generated code and stress the need for vigilant code review.
In ThoughtWorks' own words: "The Radar is a document that sets out the changes that we think are currently interesting in software development — things in motion that we think you should pay attention to and consider using in your projects. It reflects the idiosyncratic opinion of a bunch of senior technologists and is based on our day-to-day work and experiences. While we think this is interesting, it shouldn't be taken as a deep market analysis."
Birth of Technology Radar
As a supplement, if you want to know about the history of Technology Radar, this will help.
Birth of the Technology Radar
By Darren Smith
Techniques
Interactive radar: https://www.thoughtworks.com/radar/techniques
TRIAL: GraphRAG
GraphRAG is a new technique developed by Microsoft Research to enhance the capabilities of Large Language Models (LLMs) in handling private datasets. It uses LLM-generated knowledge graphs to improve question-and-answer performance.
This pattern involves a two-step approach: first, chunking documents and using an LLM-based analysis to create a knowledge graph; second, retrieving relevant chunks at query time via embeddings and following edges in the knowledge graph to discover additional related chunks.
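To make the two-step pattern concrete, here is a toy sketch in Python. This is not Microsoft's implementation: word overlap stands in for embedding similarity, and the chunk texts and graph edges are invented for illustration.

```python
# Toy sketch of the GraphRAG pattern (illustrative only). Chunks are
# indexed for retrieval and linked by knowledge-graph edges extracted at
# indexing time; at query time we fetch the best-matching chunk, then
# follow graph edges to pull in related chunks as extra context.

chunks = {
    "c1": "Acme Corp acquired BetaSoft in 2021.",
    "c2": "BetaSoft builds the PayFlow billing engine.",
    "c3": "The cafeteria menu changes every Monday.",
}

# Edges an LLM-based analysis might have extracted (entity co-occurrence).
graph_edges = {"c1": ["c2"], "c2": ["c1"], "c3": []}

def similarity(query: str, text: str) -> float:
    """Stand-in for embedding similarity: fraction of shared words."""
    q, t = set(query.lower().split()), set(text.lower().split())
    return len(q & t) / max(len(q), 1)

def graph_rag_retrieve(query: str, hops: int = 1) -> list[str]:
    # Step 1: vector-style retrieval of the best-matching chunk.
    best = max(chunks, key=lambda cid: similarity(query, chunks[cid]))
    selected = [best]
    # Step 2: expand along knowledge-graph edges to gather related chunks.
    frontier = [best]
    for _ in range(hops):
        frontier = [n for cid in frontier for n in graph_edges[cid]
                    if n not in selected]
        selected.extend(frontier)
    return [chunks[cid] for cid in selected]

print(graph_rag_retrieve("Who acquired BetaSoft?"))
```

Note how the acquisition question pulls in the chunk about what BetaSoft builds via the graph edge, even though that chunk shares no words with the query; that is the kind of "connecting disparate pieces of information" baseline RAG struggles with.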
Baseline Retrieval-Augmented Generation (RAG) struggles with connecting disparate pieces of information and understanding summarised semantic concepts over large datasets. GraphRAG addresses these issues by using knowledge graphs for better context and grounding.
This approach enhances LLM-generated responses and has been beneficial in understanding legacy codebases by using structural information like abstract syntax trees and dependencies to build the knowledge graph.
The GraphRAG pattern has gained traction, with tools and frameworks like Neo4j’s GraphRAG Python package emerging to support it. Additionally, Graphiti is seen as fitting a broader interpretation of GraphRAG as a pattern.
Microsoft Research plans to continue developing and applying GraphRAG to various domains, including social media, news articles, workplace productivity, and chemistry. Ongoing work includes improving evaluation metrics and working closely with customers.
TRIAL: Small Language Models
Small Language Models (SLMs) are artificial intelligence models designed to process, understand and generate natural language content with far fewer parameters than large language models (LLMs). Here are some key points about SLMs.
Size and Efficiency:
SLMs typically have a few million to a few billion parameters, whereas LLMs can have hundreds of billions or even trillions of parameters.
They require less memory and computational power, making them ideal for resource-constrained environments like edge devices and mobile apps.
Applications:
SLMs are used in scenarios where quick responses and efficiency are crucial, such as real-time performance on smartphones, tablets, or smartwatches.
They are suitable for tasks like text generation, summarisation, sentiment analysis, and more.
Model Compression Techniques:
Techniques like pruning, quantization, low-rank factorization, and knowledge distillation are used to build SLMs from larger models.
These methods help reduce the size of the model while retaining as much accuracy as possible.
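As an illustration of one of these techniques, here is a minimal pure-Python sketch of the knowledge distillation objective: the student is penalised for diverging from the teacher's softened output distribution. The logits and temperature value below are made up for the example.

```python
import math

# Toy sketch of knowledge distillation: the small "student" model is
# trained to match the large "teacher" model's softened outputs.

def softmax(logits, temperature=1.0):
    """Convert logits to probabilities; higher temperature = softer."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between softened teacher and student distributions."""
    p = softmax(teacher_logits, temperature)   # teacher (target)
    q = softmax(student_logits, temperature)   # student (prediction)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

teacher = [3.0, 1.0, 0.2]        # large model's logits for 3 classes
student_good = [2.8, 1.1, 0.3]   # student that mimics the teacher
student_bad = [0.1, 0.2, 3.0]    # student that disagrees

print(distillation_loss(teacher, student_good))  # small loss
print(distillation_loss(teacher, student_bad))   # much larger loss
```

In practice this loss is combined with the ordinary cross-entropy on ground-truth labels, but the core idea is exactly this: minimise the gap between the two distributions.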
Transformer Architecture:
SLMs employ a neural network-based architecture known as the transformer model, which is fundamental in natural language processing (NLP).
Transformers use mechanisms like self-attention to focus on important tokens in the input sequence and generate accurate outputs.
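The self-attention mechanism can be sketched in a few lines of pure Python. This is a simplification: real transformers apply learned query, key and value projections and use multiple heads, whereas here the queries, keys and values are simply the input vectors themselves.

```python
import math

# Minimal sketch of scaled dot-product self-attention: every token
# produces a context vector that is a weighted average of all tokens,
# with weights derived from pairwise similarity.

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def self_attention(tokens):
    """tokens: list of equal-length vectors; returns one context vector per token."""
    d = len(tokens[0])
    outputs = []
    for q in tokens:                      # each token attends to all tokens
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in tokens]        # scaled dot products
        weights = softmax(scores)         # attention weights sum to 1
        outputs.append([sum(w * v[i] for w, v in zip(weights, tokens))
                        for i in range(d)])
    return outputs

x = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
out = self_attention(x)
```

Because the weights are a softmax, each output is a convex combination of the inputs; the "important tokens" are simply those with the highest dot-product similarity to the query.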
A notable recent announcement is DeepSeek-R1 which, at 671 billion parameters, is not small at all: it requires a mini cluster of eight state-of-the-art NVIDIA GPUs to run. However, DeepSeek-R1 is also available in smaller versions distilled into Qwen and Llama models, which can run on much more modest hardware while still offering significant performance improvements over previous SLMs.
The radar also highlights other innovations in the SLM space, including Meta's Llama 3.2 at 1B and 3B sizes, Microsoft's Phi-4 at 14B, and Google's PaliGemma 2, a vision-language model available in 3B, 10B and 28B sizes. These developments indicate a trend towards smaller, more efficient models that continue to push the boundaries of AI performance.
Tools
Interactive radar: https://www.thoughtworks.com/radar/tools
The Tools quadrant is looking good, with nothing in Hold, which means everything is up for grabs in terms of R&D to discover something suitable for your team or organisation. Here is my review:
ADOPT: Vite
Vite, a high-performance front-end build tool, has gained widespread adoption and is now recommended by frameworks like Vue, SvelteKit, and React, which has deprecated create-react-app. Recently, Vite received significant investment, leading to the creation of VoidZero to support its development and sustainability.
It is a modern build tool and development server designed to provide a faster and more efficient development experience for web projects. Here are the main features and benefits of Vite[1][2][3]:
Fast Development Server:
Vite offers extremely fast Hot Module Replacement (HMR), allowing developers to see changes instantly without refreshing the page.
Optimized Build Process:
It uses Rollup for bundling code, producing highly optimized static assets for production.
Support for Modern JavaScript:
Vite targets modern browsers during development, leveraging native ES modules and other latest JavaScript features.
Plugin System:
Vite is highly extensible through its Plugin API, allowing integration with various frameworks and tools.
Ease of Use:
It comes with sensible defaults out of the box and supports various templates for popular frameworks like React, Vue, Svelte, and more.
Performance:
Vite pre-bundles dependencies using esbuild, which is significantly faster than traditional JavaScript bundlers.
References
[2] What is Vite and Why Should You Use It Instead of Create React App?
[3] What is Vite (and why is it so popular)? — StackBlitz
GitHub - vitejs/companies-using-vite: A list of companies using Vite.
TRIAL: Cursor
Cursor is an AI-first code editor and a leader in the AI coding assistance space. It is known for its effective code context orchestration and support for a wide range of models, including the option to use a custom API key. The Cursor team is innovative, often introducing user-experience features before other vendors. Their chat includes an extensive list of context providers, such as referencing git diffs, previous AI conversations, web search, library documentation and MCP integration.
Cursor stands out for its strong agentic coding mode, which allows developers to drive their implementation directly from an AI chat interface. This mode enables the tool to autonomously read and modify files, as well as execute commands. Additionally, Cursor can detect linting and compilation errors in generated code and proactively correct them.
Here are some of the key features and possibilities of Cursor AI:
Intelligent Code Suggestions:
Provides real-time code suggestions based on the context of the code being written.
Helps developers find relevant snippets, libraries, or functions quickly.
Code Generation:
Generates complete code segments based on user prompts or descriptions of desired functionality.
Speeds up development and serves as a learning tool for beginners.
Contextual Help and Documentation:
Integrates with documentation systems to provide relevant explanations and information without switching between environments.
Offers contextual assistance to understand code better.
Debugging Support:
Analyzes code to identify potential errors or performance issues.
Provides debugging assistance to improve code quality.
Code Editing:
Allows developers to highlight sections of code and apply edits based on natural language commands.
Simplifies the process of making changes to the codebase.
Error Detection:
Detects errors and suggests fixes to ensure the code runs smoothly.
Enhances the reliability of the code.
Cursor AI is designed to cater to developers of all levels, from novices to seasoned professionals, making coding more efficient and productive[1][2].
References
[1] What is Cursor AI, the ChatGPT Replacement for Coding
[2] What is Cursor AI? Understanding the Future of Coding
TRIAL: Software Engineering agents
Over the past six months, the term "software engineering agents" has remained loosely defined. A significant development is supervised agentic modes within the IDE, which enable developers to drive implementation through chat. Tools like Cursor, Cline and Windsurf lead this space, with GitHub Copilot catching up. These modes, using models such as Claude's Sonnet series, speed up coding but require small problem scopes for effective review. To avoid complacency, pair programming and disciplined review practices are recommended, especially for production code.
Software Engineering Agents are autonomous systems designed to assist with various software development tasks. These agents leverage advanced AI models, such as large language models (LLMs), to perform tasks that traditionally require human intervention. Here are some key aspects of software engineering agents.
Automation of Development Tasks:
They can automate code synthesis, program repair, test generation, and other software development activities.
This reduces the manual effort required and speeds up the development process.
Tool Usage and Command Execution:
These agents can use tools, run commands, and observe feedback from the environment to make informed decisions.
They interact with development environments to perform tasks autonomously.
Error Detection and Fixing:
They can identify and fix issues in code, improving the overall quality and reliability of software.
This includes debugging and optimizing code.
Customizable Interfaces:
Software engineering agents often come with configurable interfaces that allow them to interact with different development tools and environments.
This flexibility makes them adaptable to various software engineering tasks.
Examples and Implementations:
SWE-agent: A system that uses agent-computer interfaces to autonomously solve software engineering tasks, such as fixing issues in GitHub repositories[1].
Agentless: A simpler approach that focuses on localization, repair, and patch validation without complex agent-based setups[2].
These agents are designed to enhance productivity, reduce errors, and streamline the software development lifecycle.
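The observe-act loop these points describe can be sketched as follows. Everything here is hypothetical: the "test runner" and "LLM" are toy stand-ins, invented so the control flow is runnable end to end.

```python
# Hypothetical sketch of an agent loop: pick an action, execute it as a
# tool call, observe the environment's feedback, repeat until tests pass.

def run_tests(code: str) -> list[str]:
    """Toy stand-in for a test runner: reports missing pieces as failures."""
    failures = []
    if "def add" not in code:
        failures.append("NameError: add is not defined")
    elif "return a + b" not in code:
        failures.append("AssertionError: add(1, 2) != 3")
    return failures

def propose_fix(code: str, failure: str) -> str:
    """Toy stand-in for the LLM: maps an observed failure to an edit."""
    if "NameError" in failure:
        return code + "def add(a, b):\n    pass\n"
    return code.replace("    pass", "    return a + b")

def agent_loop(code: str, max_steps: int = 5) -> tuple[str, int]:
    for step in range(1, max_steps + 1):
        failures = run_tests(code)             # observe feedback
        if not failures:
            return code, step - 1              # done: tests pass
        code = propose_fix(code, failures[0])  # act on the observation
    return code, max_steps

fixed, steps = agent_loop("")
```

The `max_steps` cap reflects an important practical point from the radar text: agents are bounded and supervised, and a human still reviews the final diff rather than trusting the loop to terminate correctly on its own.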
References
[1] GitHub — SWE-agent/SWE-agent: SWE-agent takes a GitHub issue and tries …
[2] Agentless: Demystifying LLM-based Software Engineering Agents
SWE-agent: Agent-Computer Interfaces Enable Automated Software Engineering
How do AI software engineering agents work?
Create Your Radar
You can create your own technology radar and see where your blips sit compared to the ones published by ThoughtWorks. You need to understand what differentiates your context, what makes sense for you and why. Your radar also needs constant review: adjust it when your team wants to adopt a new framework or technique and has a credible reason or use case for it. Be mindful that you'll also need some supporting artefacts, including a lightweight proof of concept, so you don't leave it too late to discover major constraints in the items on your radar, and perform regular market scans.
Have you created and used your own Technology Radar for your project/organisation? It’d be great to hear your feedback and experience (comments welcome)!