In our last two tutorials, we explored how to give AI agents real capabilities with ADK-TS: from building calculators to fetching weather data and calling APIs 🧮 ☁️

But what if you could connect your agent directly to Telegram, Discord, GitHub, databases, and dozens of other services… without writing any integration code? That’s exactly what MCP tools make possible. 🔌

MCP (Model Context Protocol) is like a universal adapter for AI agents. Just as USB standardized how devices connect to computers, MCP standardizes how agents connect to external services. With MCP, your agents can:
✔️ Plug into external platforms instantly
✔️ Interact with the ecosystem of tools your organization already uses
✔️ Scale their usefulness without custom integrations

In this short tutorial, we show you how to get this done!

📺 Watch Part 3 here: https://lnkd.in/d6vPepbk
Full playlist: https://lnkd.in/dySt2yii

Know any TypeScript developers? Don't hesitate to share this with them 🤝
How to connect AI agents to external services with MCP
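The video covers the ADK-TS side of this; as a rough companion, here is a minimal sketch of the same idea using the official MCP TypeScript SDK directly: spawn an MCP server, list its tools, and call one. The GitHub server package and the search_repositories tool name are illustrative assumptions, and ADK-TS wraps this plumbing for you.

```typescript
// Minimal sketch (not the ADK-TS API): talking to an MCP server with the official
// MCP TypeScript SDK. The GitHub server and tool name below are illustrative.
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

async function main() {
  // Launch the MCP server as a child process and talk to it over stdio.
  const transport = new StdioClientTransport({
    command: "npx",
    args: ["-y", "@modelcontextprotocol/server-github"],
  });

  const client = new Client(
    { name: "demo-agent", version: "1.0.0" },
    { capabilities: {} }
  );
  await client.connect(transport);

  // Discover what the server offers, then call a tool by name.
  const { tools } = await client.listTools();
  console.log("Available tools:", tools.map((t) => t.name));

  const result = await client.callTool({
    name: "search_repositories",        // tool names depend on the server
    arguments: { query: "adk-ts" },
  });
  console.log(JSON.stringify(result, null, 2));

  await client.close();
}

main().catch(console.error);
```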
More Relevant Posts
-
✍️ I’ve written a new article that continues my series on AI development workflows. This time I go into how I manage references and context sources (repos, docs, Context7 MCP) to keep coding agents accurate, up to date, and reusable across projects. I also explain how I recycle past implementations as references to speed up new builds. Read it here 👉 https://lnkd.in/diXB7GUt
-
In our first video, we introduced the Agent Development Kit for TypeScript (ADK-TS). Now, in Part 2, we’re showing you its real power: tools and AI SDK integration.

✨ Tools are what turn a simple chatbot into a true assistant. Need quick calculations? Add a calculator tool. Want real-time weather updates? Build a weather tool. Looking to connect with databases, send emails, or call external APIs? Tools make it possible.

✨ SDK integration gives you the freedom to choose the brain behind your agent — GPT-4, Claude, Gemini… whichever suits your use case best.

By the end of this tutorial, you’ll know how to:
🔹 Build and customize your own tools
🔹 Integrate them with agents
🔹 Run them on any AI model you prefer

This is where your agents go from answering questions to interacting with the real world 🌏

Watch Part 2 here: https://lnkd.in/dgjNC2fJ
And if you missed Part 1, you can catch it here: https://lnkd.in/d-Vwb2cP
ADK-TS Tools & AI SDK Integration - Give Your AI Agents Real Capabilities
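To give a flavor of what a tool definition can look like, here is a minimal sketch using the Vercel AI SDK's tool() helper rather than ADK-TS's own tool API (which the video covers). The model name, the parameters field, and the maxSteps option follow AI SDK v4 conventions and are assumptions, not the ADK-TS surface.

```typescript
// Minimal sketch (Vercel AI SDK v4 conventions, not the ADK-TS tool API):
// a calculator tool the model can decide to call during generation.
import { generateText, tool } from "ai";
import { openai } from "@ai-sdk/openai";
import { z } from "zod";

const calculator = tool({
  description: "Evaluate a basic arithmetic expression such as '12 * 7'",
  parameters: z.object({ expression: z.string() }),
  execute: async ({ expression }) => {
    // Illustrative only: a real tool should parse and validate rather than eval input.
    const result = Function(`"use strict"; return (${expression});`)();
    return { result };
  },
});

// Top-level await assumes an ESM module ("type": "module" in package.json).
const { text } = await generateText({
  model: openai("gpt-4o"), // swap in another provider to change the "brain"
  tools: { calculator },
  maxSteps: 2,             // let the model call the tool, then answer with the result
  prompt: "What is 12 * 7? Use the calculator tool.",
});

console.log(text);
```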
-
🚀 How can we truly deliver high-quality software with AI in our daily work?

I just published a new article where I share a real-world case using React, showing how we combined AI + best practices to accelerate development without losing focus on quality, maintainability, and accessibility.

In the article, I walk through:
- A real scenario: building a multi-step wizard for group creation
- How AI helped us move faster (without replacing fundamentals)
- Key practices: Clean Code, accessibility, architecture, and testing
- Why strong fundamentals + AI is the winning formula

This isn’t theory; it’s what we actually applied in a project, and the results were impressive.

👉 Check out the full article and let me know your thoughts. Curious to hear: how are you using AI in your daily development?
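As a rough illustration of the kind of wizard the article describes, here is a minimal React sketch in TypeScript. The step names and components are hypothetical, and the aria-live heading is one small accessibility touch of the sort the article advocates.

```tsx
import { useState } from "react";

// Placeholder step bodies; in the real wizard each step renders its own form.
const GroupDetailsStep = () => <p>Group details form…</p>;
const AddMembersStep = () => <p>Add members form…</p>;
const ReviewStep = () => <p>Review and confirm…</p>;

const steps = [
  { label: "Group details", Component: GroupDetailsStep },
  { label: "Add members", Component: AddMembersStep },
  { label: "Review & create", Component: ReviewStep },
];

export function CreateGroupWizard() {
  const [stepIndex, setStepIndex] = useState(0);
  const { label, Component } = steps[stepIndex];
  const isLast = stepIndex === steps.length - 1;

  return (
    <section aria-labelledby="wizard-title">
      {/* aria-live announces progress to assistive tech as the user advances. */}
      <h2 id="wizard-title" aria-live="polite">
        Step {stepIndex + 1} of {steps.length}: {label}
      </h2>
      <Component />
      <button disabled={stepIndex === 0} onClick={() => setStepIndex((i) => i - 1)}>
        Back
      </button>
      {/* The real wizard would submit on the last step instead of just capping the index. */}
      <button onClick={() => setStepIndex((i) => Math.min(i + 1, steps.length - 1))}>
        {isLast ? "Create group" : "Next"}
      </button>
    </section>
  );
}
```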
-
Originally, I thought I’d need to build a full cloud-based stack to enable real-time voice interaction in ActSolo.AI. Then, while researching another project, I came across OpenAI’s Realtime API + Voice Activity Detection (VAD) - and it clicked.

Now, instead of spinning up a whole new part of this project, I’m working on updating the ActSolo teleprompter to support speech-to-speech (S2S) in its current state. With the ability to keep the mic open, the OpenAI addition will enable:
- Cue-word detection
- Detection of when the user has started or stopped speaking
- Turn-taking based on end-of-speech, so the AI voices from ElevenLabs respond naturally when an actor finishes their line

At this stage, the project has become complex enough to need some real engineering, so I began using VS Code with Lovable to fully support the build. I’m sure this is old news for developers, but VS Code has been awesome for a first-time user:
- I upload my .md project plans so I can always reference them with Lovable
- I ask questions and debug in context, using Agent + Chat modes
- Every edit is tracked
- I can test, debug, and more easily see the full project’s components
- Most importantly, it cuts down on Lovable credits by reducing chat volume

Recently, I successfully debugged and connected the OpenAI API locally through the terminal after Lovable kept running into errors with it. With the help of a custom GPT (S/O Lazar Jovanovic), I figured out how to:
- Run Supabase CLI commands
- Set and update secrets
- Deploy edge functions
- Customize logs and test error flows in real time

I’m not a developer, but for every new project I’m going to use VS Code with Lovable, and I’d recommend that anyone starting out with Lovable include it in their workflow. Sure, you can start with vibes, but it’s better when it feels like a real product team across tools.
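For anyone curious what the turn-taking piece looks like in code, here is a minimal sketch of the OpenAI Realtime API's server-side VAD from a Node/TypeScript client. Event and field names follow OpenAI's Realtime API docs at the time of writing, the silence threshold is an arbitrary example value, and this is not ActSolo's actual implementation.

```typescript
// Minimal sketch: using the OpenAI Realtime API's server-side VAD to detect when a
// speaker starts and stops talking. Verify event names against the current docs.
import WebSocket from "ws";

const url = "wss://api.openai.com/v1/realtime?model=gpt-4o-realtime-preview";
const ws = new WebSocket(url, {
  headers: {
    Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    "OpenAI-Beta": "realtime=v1",
  },
});

ws.on("open", () => {
  // Ask the server to run voice activity detection and manage turn-taking.
  ws.send(
    JSON.stringify({
      type: "session.update",
      session: { turn_detection: { type: "server_vad", silence_duration_ms: 500 } },
    })
  );
});

ws.on("message", (raw) => {
  const event = JSON.parse(raw.toString());
  if (event.type === "input_audio_buffer.speech_started") {
    console.log("Actor started speaking");
  }
  if (event.type === "input_audio_buffer.speech_stopped") {
    // End of the actor's line: this is where the scene partner's reply would be cued,
    // e.g. synthesizing the response voice with ElevenLabs.
    console.log("Actor finished speaking; cue the AI scene partner");
  }
});
```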
-
I'm teaching a new class this week titled 'AI in Software Testing.' I assumed the students would be software developers. It turns out they mostly do functional testing. ANYWAY, I'm still showing them how to use AI to help write and refactor unit tests, but I've also whipped up a cool little app for them tonight that you might be interested in.

The app takes natural language instructions and generates working Playwright test code. How it works:
* GPT-4 analyzes the natural language prompt
* MCP server handles browser automation commands
* Playwright executes the actual browser interactions
* Express server with WebSocket provides real-time feedback
* Generated test code is immediately usable

https://lnkd.in/gr3DcezV
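For the core "natural language in, Playwright code out" step, here is a minimal sketch of how the model call could be wired up behind an Express endpoint. The route, model name, and prompt wording are illustrative assumptions; the real app also drives the browser through an MCP server and streams feedback over WebSocket, which this sketch omits.

```typescript
// Minimal sketch of the generation step only: an Express endpoint that asks a model
// to turn a natural-language instruction into Playwright test code.
import express from "express";
import OpenAI from "openai";

const app = express();
app.use(express.json());
const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

app.post("/generate-test", async (req, res) => {
  const { instruction } = req.body; // e.g. "log in with bad credentials and expect an error"

  const completion = await openai.chat.completions.create({
    model: "gpt-4o",
    messages: [
      {
        role: "system",
        content:
          "You write Playwright tests in TypeScript using @playwright/test. " +
          "Return only runnable test code, no explanations.",
      },
      { role: "user", content: instruction },
    ],
  });

  res.json({ code: completion.choices[0].message.content });
});

app.listen(3000, () => console.log("Test generator listening on :3000"));
```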
-
🐙 This Week on GitHub + AI

Developers are in the middle of a wave of AI-powered updates:
⚡ Mistral Le Chat adds 20+ connectors, from GitHub to Stripe.
⚡ Warp Code introduces “agent steering” for smarter code reviews.
⚡ Cloudsmith launches its ML Model Registry, integrating with Hugging Face.

For GitHub’s ecosystem, these aren’t just add-ons — they’re signals of a future where code, compliance, and collaboration all run with AI in the loop.

💡 What excites you most: better AI code reviews, smarter chat connectors, or centralized model governance?
-
Discover Async, an open-source developer tool streamlining AI coding with integrated task management and code review. Async combines Claude Code, Linear, and GitHub PRs into one workflow. [https://lnkd.in/gVXgDRXs]

Async automates research, executes cloud-based code changes, and breaks down tasks for code review.

Use Cases:
1. Automate security patch implementation across multiple repositories.
2. Refactor legacy code by breaking it into manageable subtasks, ensuring each change is thoroughly reviewed.

#AI #SoftwareDevelopment #OpenSource #DevTools
-
Thrilled to share the progress on Basir V2 – AI Action Agent! ✨

Over the past sprint cycles, we’ve developed and refined an autonomous browser automation agent that executes complex web tasks directly from natural language prompts. Key highlights of the project include:
🔹 Custom Browser & Context – ensuring stable multi-tab handling, conflict-free debugging, and resilient automation pipelines.
🔹 Prompt Engineering & Splitting – improved workflow reliability and context management.
🔹 Integration of browser-use components – controllers, deep search, hooks, and extended system messages for better adaptability.
🔹 Dockerization & Refactoring – preparing the agent for production readiness and long-term maintainability.
🔹 Live Video Streaming – enabling real-time visualization of the agent’s browser interactions.

These iterations resulted in a production-ready foundation with promising outputs, and our next steps will focus on scaling consistency through stronger model APIs such as OpenAI or Claude.

I’m proud to have collaborated with brilliant teammates Mona Khaled, Farah Mostafa, and Nada Mohamed, and I’d like to extend my gratitude to Hamza Moussi for his invaluable guidance throughout this journey. Onwards to pushing the boundaries of AI-driven automation! ⚡

#AI #ActionAgents #BrowserAutomation #LangChain #Playwright #Teamwork #Innovation
-
🚀 New in Stark: AI Code Remediation to help close the gap, especially for non-developers.

Issues flagged? Click “Help me fix it” and watch as Stark takes the relevant code snippet and presents back two things, side by side (in your assets or bug tickets):
1. The Problem → Why the issue matters, and the impact it has on real users.
2. The Solution → Copy the clear, actionable code snippet to fix it.

It’s what you’ve come to know from Stark — remediation and contextual education right inside your workflow. https://lnkd.in/e58vNAwD