Best Practices for Real-World Prompting (Prompt Engineering 2 of 3)
Tuning the AI’s “knobs” – a little clarity here, a bit of creativity there – to craft the perfect prompt.

Key Takeaways:

  • Clarity is King: Be specific and clear – a well-phrased prompt beats a vague one every time.
  • Add Context & Roles: Set the scene for the AI (e.g. “You are a travel guide…”) to guide tone and relevance.
  • Examples & Structure Help: Show the AI exactly what you want by giving examples or step-by-step instructions.
  • Tune the Creativity: Adjust AI “randomness” settings – higher temperature/top-k for creativity, lower for precision.
  • Iterate and Refine: Don’t settle for the first response; tweak your prompt and try again to get the perfect answer.

Welcome back! So you survived Part 1 and now know what prompt engineering is and why you should care. (If not, I’ll wait here while you go check it out… Jeopardy music plays… All caught up? Great!). In this second installment of our Prompt Engineering Series, we’re moving from the “what” to the “how.” How do you actually write a prompt that makes an AI deliver useful answers (instead of nonsense or an essay about the meaning of life)? As someone who’s spent countless hours coaxing AI models to behave, I’ve compiled a handy toolkit of best practices. Let’s dive in – with our trademark self-deprecating humor and real-world examples along the way.

1. Be Specific and Clear

This is prompt writing 101. Ambiguity is your enemy. AI models are literal-minded. If you ask, “Tell me about financial markets,” don’t be surprised if you get a textbook-style overview that’s a mile wide and an inch deep. The AI is thinking, “Uh, where do I even start?” Instead, zoom in on exactly what you want: e.g. “Explain in simple terms how the stock market works, in one paragraph.” Now the AI has a clear mission. Specificity guides the AI like rails on a track. Include details: who, what, where, when, how long, what style… whatever matters to your query. It’s the difference between asking a friend “How do I fix my bike?” versus “How do I fix a flat tire on my mountain bike’s rear wheel?” The latter will get you a much more useful answer.

Being specific also means deciding what not to ask. If you lump multiple questions in one prompt, the AI might get confused or only answer one. For example: “Explain what ChatGPT is and write a poem about AI.” You’ll probably get one of those tasks done (or a weird combination). It’s better to split that into two prompts. In short: clarity. If your prompt could be interpreted in two or three different ways, chances are the poor AI will pick the wrong one. Save it (and yourself) the trouble – be unambiguous.

Pro tip: If you notice the AI’s answer seems off-base, reread your prompt. You might find you weren’t as clear as you thought. (I’ve had many facepalm moments discovering that my question could be read two ways – and of course the AI chose the wrong interpretation.) Remember, the AI isn’t trying to be sneaky; it just follows the words you give it. So give it good ones!

2. Provide Context and Set a Role

AI models don’t have a persistent personality or context unless you give them one. They start with a blank slate for each prompt (aside from any conversation history you’ve built). This means you can and should set the scene. If you need an answer in a certain style or perspective, tell the AI that upfront. For example, start with something like: “You are an experienced career counselor. Now, answer the following question…” or “Act as a friendly tour guide for New York City:”. By assigning a role or context, you guide the tone and level of detail in the response. It’s like giving the AI a persona to adopt for that answer.

Why does this help? Imagine asking two people, one a physics professor and one a 10-year-old, “Explain gravity.” You’ll get very different answers. With AI, you can decide who it pretends to be when answering. If you want a super technical answer, you might prompt it as “You are a physics PhD.” If you want a casual, easy explanation, maybe “You are my helpful friend who’s great at simplifying complex ideas.” I love using this trick and it really works. It’s like casting the AI in a role play. And unlike human actors, the AI never complains about method acting!

Context isn’t just persona; it can be the scenario or background info. For instance, if you’re asking for advice, frame the situation: “I have a team of 5 engineers and we’re struggling with communication. As a management expert AI, how would you improve this?” Now the AI knows the context (team of 5 engineers) and its role (management expert) before it even begins to answer. You’ll get a much more relevant response than if you just said, “Our team has issues, help?”

One more example: Instead of asking “Where should I go on vacation?” (to which the AI might default to generic popular spots), try “I’m looking for a vacation spot in Europe in July, off the beaten path, with great hiking.” You’ve given context about your preferences. The answer you get will likely be way more tailored (and useful) than a one-size-fits-all list of tourist traps. Context is queen.
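If you are prompting through an API rather than a chat window, the role-plus-context trick usually lives in a “system” message. A minimal sketch in Python – the message format mirrors the common OpenAI-style chat convention, and no actual API call is made:

```python
# Sketch: casting the AI in a role via a "system" message before asking
# the real question. Only the message structure is shown; the client
# call that would send it is omitted.
def build_role_prompt(role: str, context: str, question: str) -> list[dict]:
    """Return a chat-message list that sets a persona and scenario first."""
    return [
        {"role": "system", "content": f"You are {role}. Context: {context}"},
        {"role": "user", "content": question},
    ]

messages = build_role_prompt(
    role="a management expert",
    context="a team of 5 engineers struggling with communication",
    question="How would you improve our team's communication?",
)
```

The same structure works for the travel-guide and career-counselor examples above – swap in a different role string and the rest stays identical.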

3. Give Examples (Few-Shot Prompting)

Sometimes the easiest way to get the output you want is to show the AI what you expect. This is often called few-shot prompting, but let’s keep it simple: it’s providing examples. If you want the AI to follow a format or style, give it a sample to imitate.

For instance, suppose you want an AI to generate some customer support replies in a friendly tone. You could prompt:

“Example of a friendly support reply:

Customer: I can’t log into my account, it keeps saying error 502.

Support: Hi there! I’m sorry you’re bumping into that error. A 502 usually means our end is having a hiccup. Let me fix that for you right away… [etc.]

Now please write a similar friendly support reply to this customer question: ‘My package is late, where is it?’”

By showing an example first, the AI gets a clear picture of the style and structure you want. It will then produce a reply in that same vein. This technique is golden when you have a very specific output in mind. I’ve used it to generate things like code in a certain format, or responses with a desired tone. Instead of hoping the AI magically guesses the style, I just demonstrate it. It’s the old “monkey see, monkey do,” except the monkey is a giant neural network. And it works surprisingly well: AI researchers have found that providing a couple of examples can significantly steer the model’s output to be more accurate or relevant.

A related tip: if you’re asking the AI to follow a formula or solve a problem step-by-step, you can first walk through a simple example in the prompt. E.g., “Here’s how to solve a simple math problem: 2+2 -> First, add the numbers: 4. Now using that method, solve 5+7.” The AI will follow the demonstrated method. This is like giving the AI a mini training on the spot. Just be mindful of length (the examples eat up some of the prompt space, which is limited). But for many cases, a small example or two can dramatically improve the output. It’s a neat trick that feels like you’re programming the AI with plain language examples.
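If you assemble prompts in code, few-shot examples are just text you prepend before the new question. A toy sketch – the helper function is my own illustration, not any particular library’s API:

```python
# Sketch: building a few-shot prompt from (customer, support) example
# pairs so the model imitates their tone. The example text is invented
# for illustration.
def few_shot_prompt(examples: list[tuple[str, str]], new_question: str) -> str:
    """Prepend example exchanges, then ask for a reply in the same style."""
    parts = ["Example of a friendly support reply:\n"]
    for customer, support in examples:
        parts.append(f"Customer: {customer}\nSupport: {support}\n")
    parts.append(
        "Now please write a similar friendly support reply to this "
        f"customer question: '{new_question}'"
    )
    return "\n".join(parts)

prompt = few_shot_prompt(
    examples=[(
        "I can't log into my account, it keeps saying error 502.",
        "Hi there! I'm sorry you're bumping into that error. "
        "A 502 usually means our end is having a hiccup...",
    )],
    new_question="My package is late, where is it?",
)
```

Add a second or third example pair to the list and the style signal gets even stronger – just remember each one eats prompt space.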

4. Structure Your Prompt (Organize the Task)

If your request is complex or has multiple parts, structure it! Large language models actually respond well to an organized prompt – it’s easier for them to parse. Use bullet points or numbered steps in your prompt if that makes sense. For example: “Help me draft an email with the following: 1) A greeting to a new client, 2) A brief intro of our company, 3) A request to schedule a meeting, 4) A polite sign-off.” By numbering these points, you’re essentially giving the AI a checklist. The output is far more likely to cover everything in a neat order, perhaps even numbered or paragraphed accordingly.

I often do this when I want a certain format. Another scenario: “Please provide: \n- A one-sentence summary of the article.\n- Three key takeaways as bullet points.\n- A closing statement.” The AI will usually mirror that structure in its answer, making it super easy for me to read or copy into a report. If I had just said “Summarize the article and give key points and a closing,” I might get a big blob of text that I have to disentangle. Structured prompt, structured answer. Everyone’s happy (especially me, who doesn’t have to do the extra formatting work).

Also, consider breaking a big task into smaller prompts if needed. Let’s say you want the AI to write a short story and then also summarize it. Rather than one giant prompt like “Write a story about X and then summarize it and give a title,” you could prompt: “Write a short story about X.” Get the story, then on the next line: “Great. Now give me a 2-sentence summary of that story and a catchy title.” Two prompts, clearer focus for the AI on each task. It’s not that the AI can’t do multiple things at once – it can – but you reduce the chance it fumbles one of them. Think of yourself giving instructions to a person: if you rattle off a long list of to-dos in one breath, something’s going to be missed. The AI is similar – one step at a time, when possible.
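The numbered-checklist trick is easy to automate if you build prompts programmatically. A small sketch – the function is hypothetical, plain string assembly:

```python
# Sketch: turning a checklist into a numbered prompt so the answer is
# likely to mirror the structure, item by item.
def checklist_prompt(task: str, items: list[str]) -> str:
    """Number each requirement so the model treats it as a checklist."""
    numbered = "\n".join(f"{i}) {item}" for i, item in enumerate(items, start=1))
    return f"{task} with the following:\n{numbered}"

prompt = checklist_prompt(
    "Help me draft an email",
    [
        "A greeting to a new client",
        "A brief intro of our company",
        "A request to schedule a meeting",
        "A polite sign-off",
    ],
)
```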

5. Mind the Limits (Tokens and Length)

This one is a bit “under the hood,” but important for real-world prompting: AI models have a token limit. Tokens are pieces of words – basically a way the AI measures text length. You don’t need the exact science, but know that there’s a cap on how much input + output an AI model can handle in one go. If you overload it with a huge prompt (like pasting a 10,000-word chapter) and ask for a huge output, the model might truncate the answer or even refuse because it’s too much. Each model (GPT-3, GPT-4, etc.) has its own token limit (e.g., around 4,000 tokens for some, which is roughly 3,000 words; newer models allow more).

What does this mean for you? If you have a very large input, you may need to summarize or feed it in parts. And if you want a very long output (like a full essay), be aware the model might stop halfway if it hits its limit. A common scenario: you ask for a long story or a detailed report, and the AI stops mid-sentence. It didn’t have a stroke – it just ran out of its allotted length. When this happens, you can prompt “Please continue” and it will usually pick up where it left off. Crisis averted.

To avoid hitting limits unknowingly, keep prompts focused. Don’t dump irrelevant info into the prompt thinking the AI will magically ignore it – it could waste precious tokens. Also, if the response must be of a certain length, you can say “keep the answer under 200 words” or similar. The AI will try to obey (though it’s not perfect at counting, it gives a good effort). Conversely, if you want a thorough answer, you might say “give me a detailed answer, around 4-5 paragraphs.” This helps the AI gauge how much to write. Without guidance, you might get either a one-liner or a novella, depending on the model’s mood. So, set expectations about length and remember you can always ask for more detail or more brevity in a follow-up prompt.
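If you want a quick sanity check before sending a big prompt, you can estimate tokens in code. A common rule of thumb is roughly 4 characters per English token – real tokenizers (tiktoken, for instance) are far more accurate – and the 4,000-token limit below is an assumption for illustration:

```python
# Sketch: a rough token-budget check. The ~4-characters-per-token rule
# and the 4,000-token model limit are assumptions for illustration; a
# real tokenizer gives exact counts.
def estimate_tokens(text: str) -> int:
    """Very rough estimate: ~4 characters per English token."""
    return max(1, len(text) // 4)

def fits_budget(prompt: str, max_output_tokens: int, model_limit: int = 4000) -> bool:
    """Check that the input plus the requested output stay under the limit."""
    return estimate_tokens(prompt) + max_output_tokens <= model_limit

prompt = "Explain in simple terms how the stock market works, in one paragraph."
```

If `fits_budget` comes back `False`, that’s your cue to summarize the input, feed it in parts, or request a shorter answer.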

6. Adjust the Settings (Temperature, Top-k, etc.) - Yes, I had AI Help Me Here!

If you’re using a consumer AI chat interface, you might not see these settings explicitly (some UIs just have a simple “Creative ↔ Precise” slider). But if you have access to advanced settings or you’re coding with an API, understanding temperature, top-k, top-p, and max tokens can elevate your prompting game to the next level. Don’t worry, we’ll keep it layman-friendly:

Temperature: This controls randomness. A low temperature (0 to 0.3) makes the AI more deterministic – it will give more straightforward, likely answers (good for factual Q&A). A high temperature (0.7 to 1) makes outputs more varied and creative – the AI might take more risks in wording and ideas (good for brainstorming, storytelling). At temperature 0, if you ask the same prompt multiple times, you’ll basically get the same answer. At temperature 1, you might get different answers every time. I think of temperature like the AI’s “creativity vs. consistency” dial. Feeling adventurous? Turn it up. Need precise, repeatable results? Turn it down. (Analogy: Low temp = the AI is on script; High temp = the AI is ad-libbing.)

Top-k: This is another way to control randomness. It limits the AI to considering only the top K most likely words at each step. If k=1, the AI always picks the single most likely next word (making it very predictable and sometimes overly conservative). If k=50, it has a broader selection of words it might choose from, injecting creativity. Many people just leave this alone and focus on temperature, but it’s good to know. A smaller top-k = safer, a larger top-k = more variety.

Top-p (nucleus sampling): Instead of a fixed number of words like top-k, top-p says “consider from the most likely words until they cumulatively account for p percent of probability.” For example, top-p=0.9 means the AI will only choose from the set of words that make up 90% of the probability distribution for the next word. This often dynamically adjusts how many options it considers. The effect is similar to top-k: lower p (like 0.5) really narrows the choices = more predictable output; p=1.0 lets it consider pretty much everything = can get wackier. In practice, you’ll usually tweak either top-p or top-k (not both) along with temperature. The goal of all these is to tune how creative vs. accurate you want the AI to be. One Kaggle AI podcast summary noted that by manipulating settings like temperature and top-k, you can get more creative or more precise results from the model. In other words, these are your creativity knobs.

Max tokens: We touched on this earlier with limits. If you have control over max tokens for the output, set it according to what you need. If you only want a quick answer, you can cap it low. If you want a thorough essay, set it high (just not so high that you exceed the model’s total capacity). If you’re not sure, a moderately high number is fine – the AI will stop on its own if it feels it’s done (they usually have some sense of closure). Just remember, if you get cut-off answers, bump this up next time.

For most casual users, you won’t fiddle deeply with top-k or top-p, but understanding temperature is handy. Some apps label it as “creativity”. Essentially, low temp = factual and repetitive, high temp = inventive and varied. If you ask an AI at temperature 0.2 to “write a story about a cat”, you might get a very bland cat story the same way each time. At temp 0.9, one time you get a cat astronaut saga, another time a cat comedy sketch – who knows! Use the appropriate level for the task: coding or math – low temp; poetry or brainstorming wild ideas – high temp. And if you can’t set these at all in your interface, no worries – just know the AI might have some default randomness. You can still control style a lot through wording in the prompt itself (e.g., saying “be creative” or “stick to facts” actually does influence many models).
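To demystify what those knobs actually do, here’s a toy sketch in plain Python of how temperature, top-k, and top-p narrow the pool of candidate next words before one is sampled. The word scores (“logits”) are invented for illustration:

```python
import math

# Sketch: how temperature, top-k, and top-p filter the candidate next
# words. The toy logits are made up; a real model produces thousands.
def candidates(logits: dict[str, float], temperature: float,
               top_k: int, top_p: float) -> list[str]:
    """Return the words the sampler may pick from, most likely first.
    temperature must be > 0 (sampling at exactly 0 is just argmax)."""
    # Temperature rescales logits: low temp sharpens the distribution,
    # high temp flattens it.
    scaled = {w: l / temperature for w, l in logits.items()}
    total = sum(math.exp(l) for l in scaled.values())
    probs = sorted(((w, math.exp(l) / total) for w, l in scaled.items()),
                   key=lambda x: -x[1])
    # Top-k: keep only the k most likely words.
    probs = probs[:top_k]
    # Top-p: keep words until their cumulative probability reaches p.
    kept, cumulative = [], 0.0
    for word, p in probs:
        kept.append(word)
        cumulative += p
        if cumulative >= top_p:
            break
    return kept

words = candidates({"the": 3.0, "a": 2.0, "cat": 1.0, "moon": 0.5},
                   temperature=1.0, top_k=3, top_p=0.9)
```

Shrink `top_k` or `top_p` and the returned pool shrinks with it; a real sampler then picks randomly from that pool, weighted by probability, which is where the “creative vs. precise” feel comes from.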

7. Iterate and Refine (Don’t One-and-Done)

Finally, perhaps the most important “best practice”: Treat the first AI answer as a draft. Prompt engineering is an iterative process. Rarely will your first prompt yield the perfect response (if it does, buy a lottery ticket, because you got lucky!). Typically, you’ll want to review what the AI said and then adjust your prompt or ask follow-up questions to refine the result. This back-and-forth is not only normal, it’s expected.

For example, I recently asked an AI writing assistant to help me draft some rewrites of my own content. The first version it gave me was okay but a bit generic. Instead of tossing it out, I prompted again: “That’s a good start. Can you make it more playful and include a call-to-action at the end?” Boom – the next answer was much better, with a fun tone and a nice call-to-action at the end. Sometimes I’ll realize I left out a key detail in my prompt. Rather than starting from scratch, I’ll just clarify: “Actually, assume the reader already knows about our product’s basics, focus on the new features.” The AI will adjust in the next response. It’s like sculpting – you chip away or add a bit with each prompt to get the shape you want.

Even at big companies with AI teams, prompt engineers follow this process. They’ll test a prompt, see the output, tweak the wording, maybe adjust a parameter, and test again. In fact, when Morgan Stanley was integrating GPT-4 to help their financial advisors, their team (including prompt engineers) iteratively refined prompts and evaluated the outputs for quality. They didn’t expect the AI to be perfect on the first try; they experimented, learned, and improved the prompts continuously. So give yourself the same grace. If the AI’s answer isn’t right, think about why. Was your question too broad? Did you forget to mention the format you wanted? Was the AI too “creative” (maybe lower the temperature) or too stiff (increase it, or literally say “have fun with it” in the prompt)? Then try again. The AI never gets impatient, and it doesn’t mind repeating itself or changing its answer. You won’t hurt its feelings by saying “let’s try that again.” I often joke that prompt engineering is 30% knowing what to ask, and 70% trial-and-error.

To recap our toolbox: be clear, add context/role, use examples, structure your asks, be mindful of length, tune the settings if you can, and always iterate. If this sounds like a lot, don’t worry – you don’t need every trick for every prompt. Even just remembering to be specific and to add a bit of context will instantly level-up most of your AI interactions. The rest you can apply as needed. Before long, you’ll develop an intuition for it. You’ll start phrasing questions to AI in a way that feels almost like talking to a smart human, except you’re also giving that human a full brief of what you need upfront (something we probably should do with actual humans more often, to be honest!).

In the final part (Part 3) of this series, we’ll switch gears and look at the “what not to do” side of things. That means common mistakes, funny failures, and cautionary tales from the prompt engineering trenches. I’ll share a few times I hilariously messed up a prompt (and got gibberish), plus some real-world stories like the now-infamous case of lawyers misusing ChatGPT and ending up in hot water. More importantly, we’ll talk about how to fix a prompt gone wrong. It’s like our bloopers reel and troubleshooting guide rolled into one. Until then, go forth and experiment with some of these tips – your AI assistant won’t know what hit it (in a good way)!

Sources:

  • Kaggle (OpenTools.ai) – Whitepaper Companion Podcast Summary (importance of prompt techniques and settings, e.g. adjusting temperature and top-k for creative vs. precise outputs).
  • OpenAI Blog – Morgan Stanley uses GPT-4 (prompt engineers refined prompts with feedback to improve an AI assistant’s output quality in a real-world use case).
  • mxmoritz.com – Common Mistakes in Prompt Engineering (highlights the need for clarity, context, and iteration in prompting, which inform best practices).


Hashtags: #AI #PromptEngineering #TechTips
