Smarter labs, faster fixes: How we’re using AI to provision our virtual labs more effectively


We’re transforming the way we provision virtual labs at Microsoft with AI and continuous improvement.

Providing technical support at an enterprise the size of Microsoft is a constant balancing act between speed, quality, and scalability. Systems grow more complex, user expectations continue to rise, and traditional support models often struggle to keep up. Beyond the usual apps and systems everyone uses, many of our employees need virtual labs provisioned for diverse tasks across our businesses, and supporting these virtualized environments is a challenge of its own.

To meet the growing demand for virtual lab usage across the organization, we turned to AI, not just to automate support responses but to fundamentally rethink how user issues are identified and resolved. This vision came to life through the MyWorkspace platform, where we in Microsoft Digital, the company’s IT organization, introduced a domain-specific AI assistant to streamline how we empower our employees to deploy new virtual labs.

The results have been dramatic: what was once a slow, manual process is now fast, efficient, and frictionless.

But the benefits extend beyond faster resolution times. This transformation represents a new approach to enterprise support—one that uses AI not just as a tool for efficiency, but as a strategic enabler. By embedding intelligence into the support experience, we’re turning complexity into a competitive advantage.

Scaling support in a high-demand environment

MyWorkspace is our internal platform for provisioning virtual labs. Originally developed to support internal testing, diagnostics, and development environments, it has since grown into a critical resource used by thousands of engineers and support personnel across the company.

Scaling the platform infrastructure was straightforward: adding capacity for tens of thousands of virtual labs was a technical challenge we could solve with ease, thanks to our Microsoft Azure backbone. As usage grew, the real strain didn't show up in CPU load or storage limits, but in the support queue. Every few months, a new wave of users was onboarded to MyWorkspace: partner teams, internal engineers, and external vendors. These new users, often with minimal experience with the platform, needed fast access and guidance from support.

The questions, though simple, piled up quickly.

Several Tier 1 support engineers repeatedly encountered the same questions from users, such as how to start a lab, what an error meant, and which lab to use for a particular test. These weren’t complex technical issues—they were basic, repetitive onboarding questions that represented a huge opportunity to introduce automation.

“We also found that there were a lot of users who found more niche issues, and those issues had been solved either by our community or by ourselves. In fact, we had a dedicated Teams channel specific to customer issues, and we found that there was a lot of repetition and that other customers were facing similar issues, and we did have a bit of a knowledge base in terms of how to solve these issues.”

Joshua Deans, software engineer, Microsoft Digital

Unblocking a bottleneck with AI

Our support team set out to tackle a familiar but costly challenge: high volumes of low-complexity tickets that consumed valuable time without delivering meaningful impact. Instead of treating this as an unavoidable burden, we saw an opportunity to turn it into a self-scaling solution. If the same questions were being asked repeatedly—and the answers already existed in documentation, internal threads, or institutional knowledge—then an AI system should be able to surface those answers instantly, without human intervention.

“We also found that there were a lot of users who found more niche issues, and those issues had been solved either by our community or by ourselves,” says Joshua Deans, a software engineer within Microsoft Digital. “In fact, we had a dedicated Teams channel specific to customer issues, and we found that there was a lot of repetition and that other customers were facing similar issues, and we did have a bit of a knowledge base in terms of how to solve these issues.”

That insight led the MyWorkspace team to begin building what would become a transformative AI assistant: an automated support layer purpose-built for the MyWorkspace platform. Unlike traditional chatbots that rely on scripted responses or rigid decision trees, this assistant would leverage generative AI trained on a rich dataset of real-world support conversations, internal FAQs, and official documentation.

“So that’s where we found this opportunity to turn this scaling challenge into a scaling advantage, with help from AI. We took all those historical conversations of tier one staff helping new users—trained our AI to provide user education based on that training—and saved our Tier 1 staff from answering potential tickets.”

Vikram Dadwal, principal software engineering manager, Microsoft Digital

The result was a context-aware, responsive system capable of resolving common issues in seconds—not hours or days—dramatically easing the load on support teams while improving the user experience.
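As a rough sketch of how historical Q&A pairs can ground responses like these, the snippet below embeds past questions and retrieves the closest match to enrich the prompt. This is one plausible approach rather than our exact pipeline, and the deployment name, endpoint, and sample data are placeholders.

```python
# Minimal retrieval-grounding sketch (illustrative, not our exact pipeline).
# Requires the `openai` and `numpy` packages and an Azure OpenAI embeddings deployment.
import numpy as np
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",  # placeholder
    api_key="<api-key>",                                        # placeholder
    api_version="2024-02-01",
)

# Hypothetical knowledge distilled from historical support threads.
knowledge_base = [
    {"q": "How do I start a lab?", "a": "Open MyWorkspace, pick a template, and select Start."},
    {"q": "What does error X mean?", "a": "The lab image is still provisioning; retry in a few minutes."},
]

def embed(texts):
    """Return one embedding vector per input string."""
    response = client.embeddings.create(model="<embedding-deployment>", input=texts)
    return np.array([item.embedding for item in response.data])

kb_vectors = embed([item["q"] for item in knowledge_base])

def retrieve_context(user_question, top_k=1):
    """Find the stored Q&A pairs most similar to the user's question."""
    query = embed([user_question])[0]
    scores = kb_vectors @ query / (
        np.linalg.norm(kb_vectors, axis=1) * np.linalg.norm(query)
    )
    best = np.argsort(scores)[::-1][:top_k]
    return [knowledge_base[i] for i in best]

# The retrieved pairs are appended to the prompt so the model answers from
# vetted MyWorkspace knowledge rather than guessing.
print(retrieve_context("How can I spin up a lab?"))
```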

Built on Azure and Semantic Kernel

MyWorkspace’s core infrastructure is fully built on Azure services. At any given moment, it manages tens of thousands of virtual machines, scaling up and down with demand. That elasticity, combined with our internal developer tooling and AI orchestration capabilities, provided the perfect environment for an AI-powered support layer.

“So that’s where we found this opportunity to turn this scaling challenge into a scaling advantage, with help from AI,” says Vikram Dadwal, a principal software engineering manager within Microsoft Digital. “We took all those historical conversations of tier one staff helping new users—trained our AI to provide user education based on that training—and saved our Tier 1 staff from answering potential tickets.”

To build the assistant, the team used Semantic Kernel, our open-source Microsoft framework for generative AI integration. Semantic Kernel lets engineers create prompt-driven, modular systems that interact with large language models (LLMs) without vendor lock-in.

This approach gave the team several advantages:

  • Flexibility in choosing and switching between LLM providers.
  • Fine-grained control over how prompts were structured and updated.
  • Extensibility through plugins and actions that tie the assistant into the broader ecosystem.
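As a rough illustration of that plugin model, here's a minimal sketch of how a MyWorkspace-style action could be registered with Semantic Kernel's Python SDK. The plugin, its function, and the configuration values are hypothetical, and module paths can shift between SDK versions.

```python
# Minimal Semantic Kernel sketch (Python SDK ~1.x; module paths may vary by version).
from semantic_kernel import Kernel
from semantic_kernel.connectors.ai.open_ai import AzureChatCompletion
from semantic_kernel.functions import kernel_function

class LabPlugin:
    """Hypothetical plugin exposing MyWorkspace actions to the assistant."""

    @kernel_function(name="start_lab", description="Provision a virtual lab of the given type.")
    def start_lab(self, lab_type: str) -> str:
        # In a real system this would call the MyWorkspace provisioning API.
        return f"Provisioning request submitted for a '{lab_type}' lab."

kernel = Kernel()

# The chat service is swappable, which is how vendor lock-in is avoided.
kernel.add_service(
    AzureChatCompletion(
        service_id="chat",
        deployment_name="<chat-deployment>",                  # placeholder
        endpoint="https://<your-resource>.openai.azure.com",  # placeholder
        api_key="<api-key>",                                  # placeholder
    )
)

# Registering the plugin lets the model invoke start_lab as a tool call.
kernel.add_plugin(LabPlugin(), plugin_name="labs")
```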

Crucially, the assistant was designed to be part of the platform's architecture, capable of operating at the same level of scale and responsiveness as the labs it supported. It was also initialized with a well-scoped system prompt that limits its responses strictly to the MyWorkspace domain.
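Scoping the assistant can be as simple as seeding every conversation with a domain-restricted system message, as in this continuation of the sketch above. The prompt text is illustrative rather than our production prompt.

```python
# Continuing the sketch: a domain-scoped system prompt keeps answers on topic.
# Module paths follow the Semantic Kernel Python SDK ~1.x and may vary by version.
from semantic_kernel.contents import ChatHistory

SYSTEM_PROMPT = (
    "You are the MyWorkspace lab assistant. Answer only questions about "
    "provisioning, configuring, and troubleshooting MyWorkspace virtual labs. "
    "If a question is outside that scope, say so and point the user to the "
    "general IT helpdesk."
)

history = ChatHistory()
history.add_system_message(SYSTEM_PROMPT)          # every session starts scoped
history.add_user_message("How do I start a SharePoint lab?")

# The chat service registered on the kernel is then asked to complete `history`
# and the reply is returned to the user, for example (async, version-dependent):
#   reply = await chat_service.get_chat_message_content(history, settings)
```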

“On average, we measured these interactions at around 20 minutes from ticket submission to problem resolution. Now compare that with a 30-second AI interaction for resolving the same class of issues—that’s a 98% reduction in resolution time, a number we’ve validated with our support team and continue to track.”

Nathan Prentice, senior product manager, Microsoft Digital

Shifting from tickets to conversations

Whether users had questions about lab types, needed clarification on configuration details, or sought guidance during onboarding, the AI provided accurate, interactive responses without requiring human escalation. The experience was both faster and significantly better than the ticket-driven process it replaced. Support engineers saw a noticeable reduction in repeat tickets, as common issues were resolved on the spot. Onboarding friction decreased, and users were confident that they could get the answers they needed instantly: no ticket, no delay, no need to track a support contact.

“On average, we measured these interactions at around 20 minutes from ticket submission to problem resolution,” says Nathan Prentice, a senior product manager within Microsoft Digital. “Now compare that with a 30-second AI interaction for resolving the same class of issues—that’s a 98% reduction in resolution time, a number we’ve validated with our support team and continue to track.”

Smart, interactive, and intuitive

Our Microsoft Digital team has recently implemented a new version of the MyWorkspace AI assistant that includes several major enhancements. The assistant now features adaptive cards, polished layouts, and a Microsoft 365 Copilot-aligned user experience, making it feel familiar and trustworthy for internal teams. The assistant can now distinguish between a question and an action. If a user says, “Start a SharePoint lab,” it responds with an interactive card and begins provisioning, bridging the gap between passive support and active enablement.
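To show what the question-versus-action split can look like, the sketch below routes an action-style request to a hypothetical provisioning call and returns an Adaptive Card payload for the client to render. The card content, URL, and helper functions are assumptions; only the JSON shape follows the standard Adaptive Cards schema.

```python
# Hypothetical routing between answering a question and performing an action.

def build_lab_card(lab_type: str, request_id: str) -> dict:
    """Return an Adaptive Card confirming that provisioning has started."""
    return {
        "type": "AdaptiveCard",
        "version": "1.5",
        "body": [
            {"type": "TextBlock", "size": "Medium", "weight": "Bolder",
             "text": f"Provisioning a {lab_type} lab"},
            {"type": "TextBlock", "wrap": True,
             "text": f"Request {request_id} submitted. You'll be notified when the lab is ready."},
        ],
        "actions": [
            {"type": "Action.OpenUrl", "title": "View lab status",
             "url": f"https://myworkspace.example/labs/{request_id}"},  # placeholder URL
        ],
    }

def handle_message(text: str) -> dict:
    """Very small stand-in for the assistant's intent routing."""
    if text.lower().startswith("start"):            # action intent
        request_id = submit_provisioning(text)      # hypothetical provisioning call
        return build_lab_card("SharePoint", request_id)
    return {"type": "answer", "text": answer_question(text)}  # question intent

def submit_provisioning(text: str) -> str:
    return "req-0001"                               # stub for the sketch

def answer_question(text: str) -> str:
    return "Answered by the LLM with the scoped system prompt."  # stub

print(handle_message("Start a SharePoint lab"))
```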

“One of the primary bottlenecks we previously faced in creating an AI solution to address frequently asked user questions was the lack of technology capable of generating accurate answers for complex technical queries and understanding nuanced user input. With the availability of Azure OpenAI models, we were able to effectively overcome this challenge, enabling our AI solution to deliver precise and context-aware responses at scale.”

Anjali Nair, senior software engineer, Microsoft Digital

To guide our employees and improve discoverability, the assistant offers recommended prompts—just like Copilot does—helping new users understand what they can ask and how to get started.

Users can now rate responses, giving a thumbs up or down. These signals are aggregated and reviewed by the engineering team, ensuring continuous improvement and fine tuning over time.
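As a simple illustration (not our actual telemetry pipeline), ratings like these can be logged against the prompt version in use, so per-version approval rates surface regressions after prompt changes.

```python
# Illustrative feedback aggregation, not the team's actual telemetry pipeline.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Feedback:
    question: str
    prompt_version: str
    thumbs_up: bool

def summarize(feedback: list[Feedback]) -> dict[str, float]:
    """Approval rate per prompt version, so regressions show up after prompt edits."""
    totals, ups = defaultdict(int), defaultdict(int)
    for item in feedback:
        totals[item.prompt_version] += 1
        ups[item.prompt_version] += item.thumbs_up
    return {version: ups[version] / totals[version] for version in totals}

ratings = [
    Feedback("How do I start a lab?", "v3", True),
    Feedback("What does error X mean?", "v3", False),
    Feedback("What does error X mean?", "v4", True),
]
print(summarize(ratings))   # e.g. {'v3': 0.5, 'v4': 1.0}
```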

Intelligent provisioning with multi-agent orchestration 

At Microsoft Digital, we’re reimagining how labs are provisioned by integrating AI-driven intelligence into the process. Traditionally, users are expected to know exactly what kind of lab environment they need, but in complex virtualization and troubleshooting scenarios that assumption often falls short. Should a user troubleshooting hybrid issues with Microsoft Exchange spin up a basic Exchange lab, or one that includes Azure AD integration, conditional access policies, and hybrid connectors?

To eliminate this guesswork, our team is building a multi-agent system powered by the Semantic Kernel SDK’s multi-agent framework. This system interprets the user’s support context, often expressed in natural language, and automatically provisions the most relevant lab environment.

For example, a user might say, “I’m seeing sync issues between SharePoint Online and on-prem,” and the assistant will orchestrate the creation of a tailored lab that replicates that exact scenario, enabling faster diagnosis and resolution.

With agent orchestration, each agent in the system is specialized: one might handle context interpretation, another lab configuration, and another cost optimization. These agents collaborate to ensure that the lab not only meets technical requirements but is also cost-effective. By leveraging telemetry and historical usage data, the system can recommend leaner configurations—such as using ephemeral VMs, auto-pausing idle resources, or selecting lower-cost SKUs—without compromising diagnostic fidelity. This intelligent provisioning framework is designed to scale, adapt, and continuously learn from usage patterns.
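As a rough, hand-rolled illustration of that division of labor (the production system uses the Semantic Kernel multi-agent framework rather than plain functions), three specialized roles could be chained like this. The lab schema, SKUs, and defaults shown here are assumptions.

```python
# Hand-rolled illustration of the specialized-agent pattern described above.
from dataclasses import dataclass, field

@dataclass
class LabSpec:
    scenario: str
    components: list[str] = field(default_factory=list)
    sku: str = "Standard_D4s_v5"          # placeholder SKU
    auto_pause_minutes: int | None = None

def context_agent(user_report: str) -> LabSpec:
    """Interprets the support context and names the components needed to replicate it."""
    spec = LabSpec(scenario=user_report)
    if "sharepoint" in user_report.lower():
        spec.components = ["SharePoint Online tenant", "on-prem farm", "hybrid sync"]
    return spec

def config_agent(spec: LabSpec) -> LabSpec:
    """Turns the interpreted scenario into a concrete lab configuration."""
    spec.components.append("diagnostic tooling")
    return spec

def cost_agent(spec: LabSpec) -> LabSpec:
    """Trims the configuration using telemetry-informed defaults."""
    spec.auto_pause_minutes = 30           # pause idle labs
    spec.sku = "Standard_D2s_v5"           # cheaper SKU that preserves fidelity (assumed)
    return spec

def provision(user_report: str) -> LabSpec:
    # Each agent hands its output to the next, mirroring the orchestration flow.
    return cost_agent(config_agent(context_agent(user_report)))

print(provision("I'm seeing sync issues between SharePoint Online and on-prem"))
```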

“One of the primary bottlenecks we previously faced in creating an AI solution to address frequently asked user questions was the lack of technology capable of generating accurate answers for complex technical queries and understanding nuanced user input,” says Anjali Nair, a senior software engineer within Microsoft Digital. “With the availability of Azure OpenAI models, we were able to effectively overcome this challenge, enabling our AI solution to deliver precise and context-aware responses at scale.”

With multi-agent orchestration, we’re taking a step towards a future where lab environments are not just automated, but intelligently orchestrated, context-aware, and cost-optimized—empowering engineers to focus on solving problems, not setting up infrastructure.

Scaling support without scaling headcount

The MyWorkspace assistant is a powerful example of how enterprise support can evolve through intelligence. By embedding AI into the support experience, we’ve turned complexity into a competitive edge—reshaping knowledge work and operations through AI’s problem-solving capabilities. As Microsoft advances as a Frontier Firm, MyWorkspace shows how we can scale support on demand, with intelligence built in. Routine queries are offloaded to AI, freeing Tier 1 teams to focus on critical issues and giving Tier 2 engineers space to innovate. Most importantly, support now scales with user demand—not headcount.

But this system does more than just respond—it learns. Every interaction becomes a data point. Each resolved issue feeds back into the assistant, sharpening its accuracy and expanding its knowledge. What started as a reactive Q&A tool is now growing into a proactive orchestrator that surfaces insights and points users to solutions, resolving issues before they ever become tickets.

“We have a lot more telemetry now, so users can provide feedback to our responses—for example, thumbs up or thumbs down feedback,” Deans says. “And we can actually view where the model is giving incorrect or inappropriate information, and we can use that to make adjustments to the prompt provided to the model.”

In this model, support becomes a seamless extension of the user experience. With the right AI architecture in place, it transforms a cost center into a strategic asset. The MyWorkspace assistant fulfills its role as an embedded, intelligent teammate—delivering answers, driving actions, and continuously improving over time.

Ultimately, our journey with MyWorkspace shows that meaningful AI adoption doesn’t have to begin with sweeping transformation. Sometimes, it starts with a helpdesk queue, a recurring issue, and the choice to build something smarter—something that learns, adapts, and empowers at every step.

Key takeaways

Here are some of our top insights from boosting our internal deployment of MyWorkspace with AI and continuous improvement.

  • Start small and specific. Focus on a defined domain—like MyWorkspace—and use existing support logs to train your assistant.
  • Invest in AI infrastructure. Tools like Semantic Kernel provide flexibility, especially in enterprise settings where vendor neutrality and customization matter.
  • Design for trust. Align your assistant’s UI with well-known systems like Microsoft Copilot to build user confidence.
  • Don’t wait for perfection. Launch a V1, gather feedback, and make improvements. AI assistants get better over time if you let them learn.
  • Think outside the ticket queue. The future isn’t just faster support—it’s intelligent, anticipatory systems that eliminate friction before it begins.
