No AI Team? No Use Case? No Problem.

Generative AI and Prompt Engineering: Practical Steps for Businesses

Many companies are at the early stages of adopting generative AI, often lacking a comprehensive AI strategy, clear use cases, or access to specialized talent such as data scientists. If this describes your business, don't worry: you're not alone. A practical starting point is leveraging off-the-shelf Large Language Models (LLMs). Although these models lack the domain-specific expertise of customized AI solutions, experimenting with prompt engineering can illuminate your path forward. Prompt engineering is the craft of designing specific prompts or workflows that effectively guide AI output. The process helps leaders understand the strengths and weaknesses of AI tools and visualize what initial success with AI could look like for their organizations.
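As an illustration of what prompt engineering looks like in practice, the sketch below (plain Python, no real model call; all names are made up for the example) assembles a structured prompt from a role, a task, context, and an explicit output format, the kind of scaffolding that tends to guide off-the-shelf models more reliably than a single vague question:

```python
def build_prompt(task: str, context: str, output_format: str) -> str:
    """Assemble a structured prompt. A stated role, a concrete task, and an
    explicit output format tend to steer off-the-shelf LLMs more reliably
    than one vague question."""
    return (
        "You are an analyst for our product team.\n"
        f"Task: {task}\n"
        f"Context:\n{context}\n"
        f"Respond strictly in this format: {output_format}"
    )

# Illustrative usage with two toy reviews.
prompt = build_prompt(
    task="List the three most common complaints in the reviews below.",
    context="Review 1: Battery dies fast. Review 2: Great screen, poor battery.",
    output_format="a numbered list, one complaint per line",
)
print(prompt)
```

The resulting string would be sent to whichever hosted LLM your team is experimenting with; the value of the exercise is iterating on the template, not the specific wording shown here.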

A Practical Use Case: Automating Product Review Summarization

Consider the common yet tedious task of analyzing customer reviews. Typically, a modest team manually reads, summarizes, and categorizes reviews to identify areas for improvement and customer-satisfaction issues. The sheer volume of reviews makes consistency and thoroughness hard to maintain, leading to fatigue and inefficiency. Off-the-shelf LLMs can significantly enhance this process. By prompting an LLM with targeted questions, such as identifying common complaints, favorite features, or perceived product value, businesses can quickly generate insightful summaries. Product managers can then focus on addressing problems rather than merely detecting them. Of course, accuracy and bias remain legitimate concerns. Some organizations mitigate these risks with fine-tuned LLMs trained on approved content, ensuring alignment with internal communication standards. Moreover, since review summaries are for internal use, occasional inaccuracies can typically be caught and managed.
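The review-summarization workflow can be sketched in a few lines. This illustrative Python (again, no model call; the word budget is a crude stand-in for a real token limit) batches reviews so each batch fits in one LLM request, then builds the targeted prompt for each batch:

```python
def batch_reviews(reviews, max_words=200):
    """Group reviews into batches under a rough word budget, so each batch
    can be summarized in a single LLM call. Word count is a crude proxy
    for the model's actual token limit."""
    batches, current, count = [], [], 0
    for review in reviews:
        words = len(review.split())
        if current and count + words > max_words:
            batches.append(current)
            current, count = [], 0
        current.append(review)
        count += words
    if current:
        batches.append(current)
    return batches

def summarization_prompt(batch):
    """Build one targeted prompt per batch of reviews."""
    joined = "\n".join(f"- {r}" for r in batch)
    return (
        "Summarize the reviews below. Report: (1) common complaints, "
        "(2) favorite features, (3) overall sentiment.\n" + joined
    )

reviews = ["Love the camera but the app crashes.", "Battery barely lasts a day."]
prompts = [summarization_prompt(b) for b in batch_reviews(reviews)]
```

A production version would swap the word count for the provider's tokenizer, but the shape of the pipeline, batch then prompt then collect summaries, stays the same.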

Enhancing Model Performance with Retrieval-Augmented Generation (RAG)

Retrieval-Augmented Generation (RAG) is another technique businesses can adopt to boost AI performance. RAG supplies the model with supplementary data at query time, improving the accuracy and relevance of responses without major customization. For instance, a business can upload nonsensitive documents such as employee handbooks or instruction manuals, letting teams query internal processes or support customer-service efforts with ease. RAG reduces common AI pitfalls, such as inaccuracies and outdated information, and provides a cost-effective way to see immediate improvements in AI capabilities. RAG does have limits, however: when a company hits the performance ceiling of RAG-assisted models, that is the signal to explore heavier-weight solutions involving more substantial customization and data.
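A minimal sketch of the RAG idea follows, using word overlap as a toy stand-in for the embedding search a production system would use (the handbook snippets and function names are illustrative):

```python
def retrieve(query, documents, top_k=2):
    """Rank documents by word overlap with the query. This is a toy
    scoring function standing in for the embedding-based similarity
    search a real RAG system would use."""
    query_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(query_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def rag_prompt(query, documents):
    """Augment the user's question with the retrieved context and
    instruct the model to stay grounded in it."""
    context = "\n".join(retrieve(query, documents))
    return (
        "Answer using ONLY the context below. If the answer is not in "
        f"the context, say so.\nContext:\n{context}\n\nQuestion: {query}"
    )

handbook = [
    "Vacation requests must be submitted two weeks in advance.",
    "The office dress code is business casual.",
    "Expense reports are due by the fifth of each month.",
]
prompt = rag_prompt("How do I submit a vacation request?", handbook)
```

The "answer only from the context" instruction is what curbs hallucination and stale knowledge: the model is asked to ground its reply in documents you control rather than in its training data.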

Moving Towards Fine-Tuning and Beyond

When an organization fully understands the capabilities and limitations of general-purpose models, it may be ready to move to fine-tuning, where a generic model is customized using company-specific data, for example, fine-tuning a model so that teams can query diverse data silos in natural language. Fine-tuning requires significant resources, secure data-management practices, and clear operational strategies. Companies at this stage typically have explicitly defined use cases, leadership buy-in, and a well-structured data architecture.
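Much of the fine-tuning work is data preparation. The sketch below formats question-to-SQL pairs in the chat-style JSONL layout that several fine-tuning services accept; exact field names vary by provider, so treat the `messages` shape here as an assumption, and the database schema as invented for the example:

```python
import json

def to_training_example(question, sql):
    """Convert one natural-language/SQL pair into a chat-format training
    record. The {'messages': [...]} layout follows a common convention
    among hosted fine-tuning services; check your provider's spec."""
    return {
        "messages": [
            {"role": "system",
             "content": "Translate questions about the sales database into SQL."},
            {"role": "user", "content": question},
            {"role": "assistant", "content": sql},
        ]
    }

# One toy example; a real fine-tuning set would have hundreds or more.
pairs = [
    ("Total revenue last quarter?",
     "SELECT SUM(revenue) FROM sales WHERE quarter = 'Q3';"),
]
jsonl = "\n".join(json.dumps(to_training_example(q, s)) for q, s in pairs)
```

Assembling examples like these is also where the secure-data-management requirement bites: every record drawn from internal silos has to be vetted before it leaves your environment for training.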

Pretraining Models: A Resource-Intensive Commitment

Pretraining an AI model from scratch, where the model is trained entirely on your own datasets, is an option for organizations with highly unique data, specific domain requirements, or a strong need for data control and security. However, it demands considerable computational resources, sophisticated tooling, and robust data management practices.
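To get a feel for the scale involved, a widely cited rule of thumb puts total training compute at roughly 6 FLOPs per parameter per token. The back-of-the-envelope sketch below applies it; the sustained-throughput and utilization figures are illustrative assumptions, not benchmarks of any specific GPU:

```python
def training_flops(n_params, n_tokens):
    """Rule-of-thumb total training compute: about 6 FLOPs per parameter
    per token. An approximation, not an exact figure for any architecture."""
    return 6 * n_params * n_tokens

def gpu_days(flops, gpu_flops_per_sec=1e14, utilization=0.4):
    """Convert FLOPs to GPU-days, assuming hypothetical hardware sustaining
    1e14 FLOP/s at 40% utilization -- both numbers are assumptions."""
    seconds = flops / (gpu_flops_per_sec * utilization)
    return seconds / 86400

# Illustrative scenario: a 7B-parameter model pretrained on 1T tokens.
flops = training_flops(n_params=7e9, n_tokens=1e12)
days = gpu_days(flops)
```

Even under these rough assumptions the answer lands in the thousands of GPU-days, which is why pretraining from scratch is reserved for organizations with truly unique data and the budget to match.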

Continuous Evaluation and Monitoring of LLMs

Regardless of the stage, continuously evaluating and monitoring AI models is crucial. The dynamic nature of AI and the data it processes necessitates regular assessments to ensure accuracy, mitigate biases, and maintain effectiveness. Many businesses are now exploring "LLM-as-a-judge" approaches, using advanced models to evaluate other models, which scales evaluation efforts while staying closely aligned with human judgment.
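The LLM-as-a-judge pattern reduces to two small pieces: a rubric prompt for the judging model and a parser for its verdict. The sketch below shows both; the rubric wording and the 1-to-5 scale are illustrative choices, not a standard:

```python
import re

def judge_prompt(question, answer):
    """Ask a (typically stronger) model to grade another model's answer.
    The rubric and the 1-5 scale here are illustrative choices."""
    return (
        "You are grading an AI assistant's answer.\n"
        f"Question: {question}\nAnswer: {answer}\n"
        "Rate accuracy from 1 (wrong) to 5 (fully correct). "
        "Reply with a line of the form 'Score: N'."
    )

def parse_score(judge_reply):
    """Extract the numeric score from the judge's reply, or None if the
    reply did not follow the requested format."""
    match = re.search(r"Score:\s*([1-5])", judge_reply)
    return int(match.group(1)) if match else None

score = parse_score("Reasoning: mostly right.\nScore: 4")
```

Forcing a machine-parseable verdict line is what makes the technique scale: thousands of answers can be graded and aggregated without a human reading each one, with humans spot-checking a sample to confirm the judge stays calibrated.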

Start Small, Think Big

Prompt engineering and off-the-shelf AI solutions represent accessible entry points for businesses new to AI. Experimentation at this stage can yield immediate operational benefits and inform strategic decisions regarding future investments in AI. By progressively adopting more sophisticated approaches such as RAG, fine-tuning, and potentially pretraining, businesses can steadily harness AI's transformative power, aligning their technology roadmap with operational goals and practical constraints.

