2. Who Am I?
pacodelacruz.io
github.com/pacodelacruz
Paco de la Cruz
Principal Cloud Solution Architect
@ Deloitte Engineering
Melbourne, Australia
Married with 2 daughters
Like to learn about
parenting, health,
& body weight exercises
Specialised in:
Enterprise Integration (20+ years)
Cloud Platforms & Distributed Apps (9+ years)
Azure & .NET (7 years)
3. Agenda
Introduction & Context
What are Prompts and Prompt Engineering?
Demo: Extending an API with GenAI
The good, the bad, and the ugly of extending your APIs with GenAI
Key Takeaways
Opinions are my own and
do not express the views of my employer
4. Introduction & Context
GenAI is revolutionising software engineering, from code generation to
smarter applications.
GenAI-augmented apps can be more than just chatbots.
We can use large language models (LLMs) together with prompt
engineering to make our software solutions smarter.
But can we enhance our APIs and make them smarter by extending them
with GenAI?
My goals for this session are to:
Show, on a small scale, what's possible when leveraging GenAI in APIs.
Inspire you to explore new ways to extend your APIs using GenAI.
Discuss the potential benefits, challenges, and pitfalls of embedding GenAI into
your APIs.
5. What are Prompts and Prompt Engineering?
We use prompts to interact with LLMs
Prompts are natural language instructions defining a task and context
Prompt engineering is the iterative process of shaping the inputs to an AI
model so that it reliably produces the desired outputs.
Strategies for prompt engineering
Give clear instructions and context
Provide examples
Split complex tasks into subtasks
Ask the model to explain its thought process
Test prompts systematically
The temperature parameter controls the randomness of the output
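The strategies above can be sketched as code. Below is a minimal Python sketch of assembling a request for a chat-completions-style LLM API; the function name `build_request`, the `"gpt-4"` model name, and the payload shape are illustrative assumptions, not the demo's actual implementation (which is a .NET Azure Function).

```python
def build_request(task_prompt: str, examples: list[tuple[str, str]],
                  user_input: str, temperature: float = 0.0) -> dict:
    """Assemble a chat-completions-style payload.

    A temperature close to 0 makes outputs more deterministic, which suits
    rule-evaluation tasks; higher values increase randomness.
    """
    # Clear instructions and context go in the system message.
    messages = [{"role": "system", "content": task_prompt}]
    # Few-shot examples show the model the desired input/output shape.
    for example_input, example_output in examples:
        messages.append({"role": "user", "content": example_input})
        messages.append({"role": "assistant", "content": example_output})
    messages.append({"role": "user", "content": user_input})
    return {"model": "gpt-4", "temperature": temperature, "messages": messages}

payload = build_request(
    "You are a business rules engine that assesses expense claims.",
    [('{"type": "meals", "amount": 30}', '{"status": "Approved"}')],
    '{"type": "meals", "amount": 80}',
)
```

Testing prompts systematically then becomes a matter of running the same payload builder over a suite of inputs and asserting on the parsed responses.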
7. Demo – Scenario & Requirements
Scenario:
Build an API that acts as a semantic business rules engine for
automated expense claim approvals
Functional requirements:
Use business rules written in natural language
Use semantics, i.e., use the data and the data’s meaning to apply business rules
Support semi-structured input data
Return structured output in JSON format
Non-functional requirements:
Allow business rules to be updated without recompilation or redeployment.
Allow automated testing
8. Demo - Solution Architecture
Source code: https://github.com/pacodelacruz/openai-business-rules-engine-demo
Components: HTTP client, Web API (Azure Function), business rules prompt, OpenAI.
1. A prompt is prepared with a task for the LLM to behave as a business rules engine (BRE) for an expense approval process. The prompt also contains the business rules in natural language.
2. An HTTP client sends an expense claim in JSON format to a Web API (Azure Function).
3. The Web API reads the BRE task prompt.
4. The Web API combines the BRE task and the expense document into an LLM prompt and sends it to OpenAI.
5. OpenAI receives the prompt, behaves as a BRE for the expense approval process, and returns the assessment.
6. The HTTP client receives the assessment result in JSON format from the Web API.
9. Task Prompt
You are a business rules engine that specialises in expense claim approvals.
You receive expenses in JSON format and assess the expense.
You will return your response as a JSON document.
Once you assess the expense, you need to return one of the following statuses:
'Approved', 'Rejected', or 'RequiresManualApproval'.
The 'statusReason' field must have a value for all responses.
When setting the 'statusReason' field value, state clearly the rule applied,
including the values in the claim that were used for the assessment and
explain in detail your thought process.
Be as precise and deterministic as possible when calculating the status.
When data is not explicit, use the rest of the document to derive the data.
Below are the rules you need to follow to assess the expense:
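Because the task prompt pins down the output contract (one of three statuses, plus a mandatory 'statusReason'), the API can validate each LLM response before returning it. A minimal sketch, with the function name `validate_assessment` assumed for illustration:

```python
VALID_STATUSES = {"Approved", "Rejected", "RequiresManualApproval"}

def validate_assessment(response: dict) -> list[str]:
    """Return a list of validation errors (empty when the response is valid)."""
    errors = []
    # The prompt allows exactly three statuses.
    if response.get("status") not in VALID_STATUSES:
        errors.append(f"Unexpected status: {response.get('status')!r}")
    # The prompt requires 'statusReason' to have a value for all responses.
    if not response.get("statusReason"):
        errors.append("'statusReason' must have a value")
    return errors
```

A guard like this is one practical mitigation for the unpredictable-output problems discussed later in the talk.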
10. Business Rules
- Expenses of type 'flight' for a domestic flight within Australia, where the
employee level is not 'Boss', must be approved when the amount is less than or
equal to 1500 AUD and must be rejected when the amount is greater than 1500 AUD.
- Expenses of type 'flight' for an international flight, where the employee
level is not 'Boss', must be rejected when the amount is greater than 3000 AUD
and must be manually approved when the amount is less than or equal to 3000 AUD.
- Expenses of type 'flight' for a domestic flight within Australia, where the
employee level is 'Boss', must be approved when the amount is less than or
equal to 2500 AUD and must be manually approved when the amount is greater
than 2500 AUD.
- Expenses of type 'flight' for an international flight, where the employee
level is 'Boss', must be approved when the amount is less than or equal to
3500 AUD and must be manually approved when the amount is greater than 3500 AUD.
- Expenses of type 'meals' that occurred during a weekday, where the employee
level is not 'Boss', must be approved when the amount is less than or equal to
50 AUD and must be rejected when the amount is greater than 50 AUD.
- Expenses of type 'meals' that occurred during a weekend, where the employee
level is not 'Boss', must be rejected when the amount is greater than 50 AUD
and must be manually approved when the amount is less than or equal to 50 AUD.
- Expenses of type 'meals', where the employee level is 'Boss', must be
approved when the amount is less than or equal to 1000 AUD and must be
rejected when the amount is greater than 1000 AUD.
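For comparison, the 'meals' rules above are simple enough to re-implement deterministically in conventional code. A sketch like this (the function name and parameters are illustrative) can also serve as an oracle when automating tests against the GenAI-based BRE:

```python
def assess_meal_expense(amount: float, is_weekend: bool, is_boss: bool) -> str:
    """Deterministic re-implementation of the 'meals' rules above."""
    if is_boss:
        # Boss meals: approved up to 1000 AUD, rejected beyond that.
        return "Approved" if amount <= 1000 else "Rejected"
    if is_weekend:
        # Non-boss weekend meals: never auto-approved.
        return "RequiresManualApproval" if amount <= 50 else "Rejected"
    # Non-boss weekday meals.
    return "Approved" if amount <= 50 else "Rejected"
```

The trade-off the demo explores: the natural-language rules can be edited by business users without redeployment, while the coded version is cheap, fast, and fully predictable.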
17. Semantic features of the GenAI-based BRE
The LLM was able to make the following deductions without having
explicit instructions:
Whether the date was a weekday or a weekend.
Whether the flight was international or domestic using airport codes in the
description field.
Expense type using the description field when the expense type field was
not present.
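To appreciate what the LLM inferred implicitly, here is what the first two deductions would look like as explicit code. The airport-code list is a small illustrative subset I am assuming, not data from the demo; the point is that the LLM needed no such lookup table or calendar logic.

```python
import re
from datetime import date

# Illustrative subset of Australian airport codes (an assumption for this
# sketch); the LLM drew on this kind of knowledge without being given it.
AUSTRALIAN_AIRPORTS = {"SYD", "MEL", "BNE", "PER", "ADL", "CBR", "OOL"}

def is_weekend(expense_date: date) -> bool:
    # Monday=0 ... Saturday=5, Sunday=6
    return expense_date.weekday() >= 5

def is_domestic_flight(description: str) -> bool:
    # Extract three-letter uppercase tokens as candidate airport codes.
    codes = set(re.findall(r"\b[A-Z]{3}\b", description))
    return bool(codes) and codes <= AUSTRALIAN_AIRPORTS
```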
18. The good, the bad, and the ugly of extending your enterprise
APIs with GenAI
The good:
- Unlocking new and innovative use cases
- Easy to start experimenting
- Productivity gains: less code & richer features
- Enhanced collaboration between business users & developers
- Structured outputs
- Options available to keep inputs and outputs secure
The bad:
- Natural language challenges: ambiguous and difficult to maintain
- Constant changes and preview releases
- Underlying costs
- Added latency
- Complex testing
- Build vs wait-and-buy dilemma
- Hard to incorporate human oversight
The ugly:
- Language understanding limitations: context sensitivity, nuances, tokenisation, etc.
- Unpredictable behaviour
- Hallucinations
- Black-box nature of AI: difficult to diagnose & fix
- Model biases
19. Key Takeaways
My demo was a small-scale experiment showing what's possible, not an
enterprise-ready solution.
GenAI can help us build intelligent apps beyond chatbots.
It's easy to start experimenting and learn as you go.
We can use LLMs to perform AI-augmented tasks within our APIs.
An API that acts as a semantic BRE is just one of many potential use cases.
There are challenges we need to consider when using GenAI.
I hope this demo has inspired you to experiment with different use cases in
your own APIs.