The Impact of Direct Skill Invocation in Automations
Introduction
As discussed in the previous article Boosting Your SOC Operations with Optimized Automation Using Security Copilot | LinkedIn, automation in security operations must be optimized so that its benefits remain economically sustainable. Greater optimization means a lower cost per automation, which in turn makes it possible to increase the number and frequency of automations—ultimately enhancing the effectiveness of SOC activities.
In the context of automations built with Security Copilot and Azure Logic Apps, I recommend reading that previous article for background on the optimization techniques it covers.
What’s New in This Solution
The solution presented in this new article is available here: 🔗DefenderIncidentInvestigation.
It is an evolution of the one shared in my previous article. The earlier version allowed placeholders in the prompt’s JSON parameter, but these could only be replaced at the start of execution, using values received from the Logic App’s input trigger.
The new solution introduces a significant enhancement: it allows the inclusion of placeholders in prompts that can be dynamically replaced at runtime. These placeholders are linked to specific entity types—referenced by the placeholder name—and are resolved using identifiers extracted during the execution of previous prompts.
This mechanism for replacing placeholders with values (entity identifiers) extracted from previous prompts supports two main scenarios: expanding a single prompt definition into multiple prompts, one per extracted identifier, and passing all identifiers of a given entity type to one prompt as a comma-separated list (placeholders prefixed with “multiple-”).
In practice, this solution shifts short-term memory from the session-based model implemented by Security Copilot to an externally managed application logic—specifically, within the Logic App. This logic relies on a step-by-step approach, where each prompt execution contributes to populating and querying a “Property Bag” that stores the identifiers of entities analyzed up to that point.
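To make this concrete, here is a minimal Python sketch of the idea. The {EntityType} / {multiple-EntityType} placeholder syntax, the property bag content, and the function name are illustrative assumptions of mine; in the shared solution this logic is implemented with Logic App actions rather than code.

```python
import re

# Property bag: entity identifiers collected from the outputs of previous prompts.
property_bag = {
    "DeviceId": ["dev-001", "dev-002"],
    "AccountUpn": ["alice@contoso.com"],
}

# Assumed placeholder syntax: {EntityType} or {multiple-EntityType}.
PLACEHOLDER = re.compile(r"\{(multiple-)?(\w+)\}")

def expand_prompt(template: str) -> list[str]:
    """Return the concrete prompt(s) produced by one prompt definition."""
    match = PLACEHOLDER.search(template)
    if match is None:
        return [template]  # no placeholder: submit the prompt as-is
    prefix, entity_type = match.groups()
    values = property_bag.get(entity_type, [])
    if prefix:  # "multiple-": one prompt receiving a comma-separated list
        return [template.replace(match.group(0), ", ".join(values))]
    # No prefix: the definition expands into one prompt per identifier.
    return [template.replace(match.group(0), value) for value in values]

print(expand_prompt("Get the compliance state of device {DeviceId}"))
print(expand_prompt("List the authentication methods of {multiple-AccountUpn}"))
```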
This design choice has a collateral implication: unless you explicitly include one or more prompts in the flow that process all previously gathered content (e.g., a final summary prompt for the investigation), the Logic App can be configured to avoid executing all prompts within a single Security Copilot session. In theory, this should lead to additional savings in resource consumption. However, in the tests conducted using the prompt sequence shared in this article, no significant benefit was observed—likely because the individual prompts were already highly optimized. That said, for workflows involving particularly verbose question-and-answer sequences, this remains a valuable option to consider.
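The session-handling option can be sketched in the same style. The connector wrapper below is a stub with assumed field names, shown only to illustrate the difference between reusing one session and starting a fresh session per prompt.

```python
import uuid

def submit_prompt(prompt: str, session_id: str | None) -> dict:
    # Stub standing in for the Security Copilot connector action;
    # the field names are assumptions made for this illustration.
    return {"sessionId": session_id or str(uuid.uuid4()),
            "evaluationResult": f"<answer to: {prompt}>"}

def run_prompts(prompts: list[str], single_session: bool = False) -> None:
    session_id = None
    for prompt in prompts:
        result = submit_prompt(prompt, session_id)
        if single_session:
            # Reuse one session: every prompt sees the conversation so far.
            session_id = result["sessionId"]
        # Otherwise session_id stays None and each prompt runs in a fresh
        # session, because the property bag (not Security Copilot's session
        # memory) carries the state between prompts.
```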
For a full evaluation of the benefits and limitations of this approach, please refer to the final considerations section.
Results of the Test
The test conducted to compare the resource consumption of the Logic App with that of manual prompting in natural language consisted of submitting the same 10 incident-investigation prompts through both channels.
Within the Logic App, these 10 prompts are generated by 5 prompt definitions described in the JSON parameter. Four of them leverage the Direct Skill Invocation technique. The only prompt definition that uses natural language is the second one, which targets the newly announced Graph API-based plugin for Microsoft Entra.
Resource Consumption with Manual Natural Language Prompts
(Note: executing a promptbook was not feasible in this case, because a promptbook cannot include a single prompt capable of analyzing the compliance status of five dynamically retrieved devices. The 10 prompts were therefore submitted manually, one by one.)
The execution of the 10 prompts manually, using natural language, resulted in a total consumption of 8.7 SCUs.
Resource Consumption with Logic App Execution
The execution of the 10 prompts in the Logic App with Direct Skill Invocation resulted in a total consumption of 2.8 SCUs.
Resource Consumption Comparison
The following table compares SCU consumption on a prompt-by-prompt basis.
Prompt Sequence in the JSON parameter (Logic App)
For the sake of completeness, here is a summary of the structure of the JSON parameter used to define the prompts within the Logic App. You can find the exact JSON here: Create example of prompts JSON parameter.json · stefanpems/cfs@3931ea4.
As previously explained, some of these five prompt definitions were automatically expanded at runtime, resulting in the execution of the ten prompts shown in the tables above.
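For illustration only, the parameter is conceptually a list of prompt definitions along these lines. The field names and skill names below are assumptions made for readability, not the actual schema; refer to the linked JSON file for the real structure.

```python
# Conceptual, abridged shape of the prompts parameter (3 of the 5 definitions shown).
prompt_definitions = [
    {   # Direct Skill Invocation: a skill name plus its inputs.
        "mode": "skill",
        "skillName": "GetDefenderIncident",        # hypothetical skill name
        "inputs": {"incidentId": "{IncidentId}"},  # resolved from the trigger
    },
    {   # The one natural language prompt, targeting the Entra plugin.
        "mode": "naturalLanguage",
        "prompt": "List the authentication methods and risk state of {multiple-AccountUpn}",
    },
    {   # Expanded at runtime into one prompt per DeviceId found earlier.
        "mode": "skill",
        "skillName": "GetDeviceComplianceState",   # hypothetical skill name
        "inputs": {"deviceId": "{DeviceId}"},
    },
]
```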
Considerations for Using the New Plugin for Microsoft Entra
The new Graph API-based custom plugin for Entra is a powerful tool capable of constructing and executing complex plans composed of multiple Graph API queries, all initiated from a natural language request. This flexibility comes at a price: the associated SCU usage can be relatively high. If you intend to run this automation very frequently, it may be more efficient to replace the natural language prompt with a Direct Skill Invocation of a purpose-built Graph API-based plugin. Below are two examples of such custom plugins:
In such cases, you will most likely achieve a consumption rate of approximately 0.1 SCUs per invocation. Keep in mind, however, that multiple invocations may be required: retrieving both the authentication methods and the risk status of two different users, for example, takes at least four invocations (two queries times two users).
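As a reference point, the sketch below shows the two Graph calls such a purpose-built plugin would wrap for a single user, using the authentication-methods and Identity Protection endpoints. Token acquisition is omitted, and the required Graph permissions (for example UserAuthenticationMethod.Read.All and IdentityRiskyUser.Read.All) are assumed to be in place.

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"

def get_auth_methods(token: str, user: str) -> dict:
    # One invocation per user: registered authentication methods.
    response = requests.get(
        f"{GRAPH}/users/{user}/authentication/methods",
        headers={"Authorization": f"Bearer {token}"},
    )
    response.raise_for_status()
    return response.json()

def get_risk_state(token: str, user_object_id: str) -> dict:
    # One invocation per user: risk state from Identity Protection
    # (the riskyUsers resource is keyed by the user's object id).
    response = requests.get(
        f"{GRAPH}/identityProtection/riskyUsers/{user_object_id}",
        headers={"Authorization": f"Bearer {token}"},
    )
    response.raise_for_status()
    return response.json()

# Two users, two queries each: at least four invocations in total.
```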
Final Considerations
In the current state of the technology, using this solution for automation scenarios—compared to promptbooks—offers two key advantages: prompts can reference entities that are discovered dynamically at runtime, which a promptbook’s static inputs cannot express, and SCU consumption is significantly lower thanks to Direct Skill Invocation (2.8 versus 8.7 SCUs in the test above).
Some components of the Logic App could (should...!) be replaced with Azure Functions to improve performance and readability. For instance, the logic that identifies placeholders in prompts is quite convoluted in the Logic App because the workflow expression language offers no regex functions for string manipulation. This task could be handled far more concisely by an Azure Function. On the other hand, adding Azure Functions slightly complicates the deployment of the solution.
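As a sketch of that idea, here is a minimal HTTP-triggered Azure Function (Python v2 programming model) that the Logic App could call to list the placeholders found in a prompt. The {EntityType} / {multiple-EntityType} syntax is the same assumption used earlier.

```python
import json
import re

import azure.functions as func

app = func.FunctionApp(http_auth_level=func.AuthLevel.FUNCTION)

# Assumed placeholder syntax: {EntityType} or {multiple-EntityType}.
PLACEHOLDER = re.compile(r"\{(multiple-)?(\w+)\}")

@app.route(route="placeholders", methods=["POST"])
def placeholders(req: func.HttpRequest) -> func.HttpResponse:
    """Return the placeholders found in the posted prompt text."""
    prompt = req.get_json().get("prompt", "")
    found = [
        {
            "raw": m.group(0),
            "entityType": m.group(2),
            "acceptsMultiple": m.group(1) is not None,
        }
        for m in PLACEHOLDER.finditer(prompt)
    ]
    return func.HttpResponse(json.dumps(found), mimetype="application/json")
```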
The Logic App shared here is complex, and therefore not optimal in terms of readability; troubleshooting may be challenging. Unfortunately, this is the trade-off for offloading part of the logic—normally handled by natural language interpretation within the LLM—into external, fixed application logic for the sake of cost optimization. In practice, if you want to replace a natural language instruction like “refer to previously identified values of a certain type” with fixed application logic inside a prompt, you must bear the cost of implementing and maintaining that logic, which can become quite complex. The documented reduction in SCU consumption—in this example, from 8.7 to 2.8 SCUs—comes with this added complexity.
In the near future, I’ll likely share additional notes on the naming convention used for the placeholders in the prompt JSON parameter—particularly the use of the prefix “multiple-”, which indicates that the current prompt accepts multiple comma-separated identifiers of that specific entity type.