Lessons from Heroic Internal Experimentations with AI

We hear more and more about how companies are externally experimenting with AI in their marketing - from agile creative generation through to clever media and optimisation. Personally, I'm fascinated by the often quiet ways in which people are starting to use it internally in their day-to-day jobs.

I don't think we talk about that enough as an industry! Every now and then I stumble upon a friend or colleague who is using AI in some super clever way to automate a task, simplify their lives or allow them to do something they never even imagined. I'm far from an expert but, as I've written about before and recently spoken about at Mad//Fest, I decided I needed to at least play with it, and have taken to experimenting in some increasingly bonkers ways. Here's a bit of what I've learnt along the way:

Co-Writing A LinkedIn Article:

I trained a custom GPT (you’ll probably need a corporate or paid subscription to be able to do so) based on over 100 things I’d written over the years and tried to get AI to write an interesting new article based on that quite thorough training.

Great Reader & Summariser - it's obvious to say but AI is REALLY good at absorbing a large amount of information and giving pretty accurate summaries of it. It digested endless amounts of my twaddle and did quite accurately summarise my passion points, areas of interest, and my dedication to battling the myth of 'organic reach'.

Looks Backward… Slightly Generically - it's an inevitable flaw of AI that it's better at summarising and bubbling up topics it's already trained on than it is at pushing forward into new spaces. I was a little more surprised that, despite all the training data, the article it wrote didn't really read like something I'd have typed myself (as I naively assumed it would!) and it took slightly generic stances on issues I tested it with, rather than really applying the #DigitalSense lens to consider them.

Building an Internal Media Bot:

The next evolution has been an upgraded internal version of this bot in our secure RBI ChatGPT Enterprise environment. Here I've been able to upload additional internal documents (such as our Media Brilliant Basics and recent MMMs and measurement) as well as more reference material to shape its world view (it is quite the expert on Byron Sharp and Binet & Field these days).

The bot is available internally to colleagues who want to ask it basic media questions, or who even want to upload a media plan/strategy or campaign report to get an analysis of it through our prescribed lens… and as far as I know we’re one of the very first brands in the world to go a step further and actually use this AI bot as one of the evaluation elements in a series of recent media agency RFPs.

Prompts Good Questions & Highlights Differences - I wouldn’t say the bot’s answers are always perfect or even always that insightful. When faced with a large amount of data however (especially for instance all the different technical submissions that might make up a large pitch process) we found it powerful at identifying key areas of difference between partners. 

Overall I find it very helpful in prompting areas we should look at and analyse more, and where there are good questions to be asked - at first to interrogate whether its observations are accurate, and then to push on what we could improve. It's a powerful tool to be able to throw a market's plan into and within seconds have a simple scorecard against our Brilliant Basics criteria to prompt an in-depth local conversation.

Needs Plenty of Training - It wasn’t quite as simple as uploading a few documents and watching the magic unfold. I ended up spending a considerable amount of time teaching it specific stock responses to some questions (including some universal prompts like always encouraging people to speak to their media team for more information) as well as battling some issues where it chose to start analysing training data it had onboarded as if it was a live request.

To make it a useful partner in evaluating different agency submissions or pitches I had to quite manually train it to ask for certain documents in sequence (eg the brief, scorecards, then each agency's submission) and then to go through a series of set next steps in terms of the analysis it would produce and questions it would ask. Along the way you have to put some real checks & balances in place to be sure it is understanding everything it's submitted.
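That document sequencing is really just a small state machine. Here's a minimal sketch of the idea in Python - the stage names and wording are illustrative assumptions on my part, not the actual bot's configuration, which was all done through prompting rather than code:

```python
# Sketch of the sequenced intake described above: ask for documents in a
# fixed order and only offer the analysis once everything has arrived.
# Stage names are illustrative placeholders.

STAGES = ["brief", "scorecards", "agency submissions"]

class PitchIntake:
    def __init__(self):
        self.received = {}

    def next_needed(self):
        """First stage we have not yet received a document for."""
        for stage in STAGES:
            if stage not in self.received:
                return stage
        return None

    def submit(self, stage, document):
        needed = self.next_needed()
        if stage != needed:
            return f"Please upload the {needed} first."
        self.received[stage] = document
        remaining = self.next_needed()
        if remaining:
            return f"Got the {stage}. Next, please upload the {remaining}."
        return "All documents received - ready to run the analysis."

intake = PitchIntake()
print(intake.submit("agency submissions", "..."))  # rejected: the brief comes first
print(intake.submit("brief", "..."))
```

The value of the fixed order is that the bot can never start analysing with half the context missing - the same discipline the manual training was trying to enforce.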

In particular getting it to read an actual Excel media plan was a challenge. Partly this was because it needed a lot of very manual instruction on what to look for and where in the document; exactly which columns and rows contained what kind of information and context. Disappointingly even after being trained to fully understand one agency’s template I found it struggled to reapply and adapt that knowledge to a different template which a human would more easily understand to be similar.

There are also some fundamental things that ChatGPT, at least, just cannot read. It's common practice to mark when a channel is on air by shading in a box in the Excel calendar for instance, but my bot simply couldn't see that. I had to manually put 'X's in all the active weeks (or at least the first and last one) so it had something to identify. It raises an interesting question of how we might adapt media charts in the future to be more AI friendly.
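The shading problem does have a mechanical workaround: pre-process the plan so that shading becomes explicit text before the file ever reaches the bot. A rough sketch using the openpyxl library (this is my illustration of the fix, not something from our actual process, and real plans would need more careful scoping than "every shaded empty cell"):

```python
from openpyxl import load_workbook

def mark_shaded_weeks(path, out_path, marker="X"):
    """Replace shaded-but-empty calendar cells with an explicit text marker,
    since a fill colour is invisible to the bot but text is not."""
    wb = load_workbook(path)
    for ws in wb.worksheets:
        for row in ws.iter_rows():
            for cell in row:
                # A cell with no fill has patternType None; a shaded
                # calendar cell typically has a "solid" pattern fill.
                shaded = cell.fill is not None and cell.fill.patternType is not None
                if shaded and cell.value is None:
                    cell.value = marker
    wb.save(out_path)
```

Run once over an agency's template, the exported copy carries the on-air weeks as data the bot can actually see.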

Willing to Lie & Change its Mind - If you’ve spent much time with AI you’ll probably know this already, but generally speaking AI models are much happier to lie than admit they don’t know something. In the case of media plans for instance the bot will always gleefully confirm it has read and understood a media plan, but basic questioning often reveals it really has not and it needs to be trained on that exact template.

Often a prompt as simple as 'are you sure?' or 'that doesn't sound right?' can be enough to prompt a full-scale confession that it was indeed just making it up, or completely wrong. In the pitch review process that we created the AI, on receiving a media plan, tells you it has understood it but is then manually coded to suggest you double check by asking a simple question. All the user has to do is agree and the bot will ask itself if it knows how many weeks on air TV has in the plan, to which it quite often realises itself that it does not.
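That double-check step generalises nicely: after any upload, ask the model a question whose answer you can verify from the file itself, and only proceed if it matches. A hedged sketch of the pattern, with the chat call stubbed out as any callable (the question, numbers and wording here are all illustrative):

```python
# Sketch of the post-upload sanity check described above. `ask_model` stands
# in for whatever chat call you use; the checks are illustrative assumptions.

def verify_understanding(ask_model, channel="TV", true_weeks=12):
    """Ask a question we already know the answer to; flag any mismatch."""
    answer = ask_model(
        f"How many weeks on air does {channel} have in this plan? "
        "Reply with a number only, or 'unknown'."
    )
    answer = answer.strip().lower()
    if answer == "unknown":
        return "Bot admits it has not understood the plan - retrain on this template."
    try:
        claimed = int(answer)
    except ValueError:
        return f"Unparseable answer ({answer!r}) - treat the upload as not understood."
    if claimed != true_weeks:
        return f"Bot claims {claimed} weeks but the plan says {true_weeks} - do not trust its analysis."
    return "Check passed."
```

The point is that the verification answer comes from you (or a deterministic parse of the file), never from the model's own cheerful self-assessment.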

Oh, and then there's the fact that despite the rigorous training, clear guidelines and scientific feel of it all… the same inputs can produce different outputs every time. Just as a sense check after finally completing our first full pitch analysis (it honestly took the best part of a day to fully coach it) I fairly quickly ran a repeat exercise - although the general analysis and key points were very similar, it actually scored all the agencies slightly differently on that run-through.
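One pragmatic response to that variance (my suggestion, not part of our actual process) is to run the scoring several times and report the spread alongside the mean, so a one-point wobble is visible rather than hidden inside a single run:

```python
import statistics

def aggregate_scores(run_scores):
    """run_scores: {agency: [score per repeat run]} -> mean and spread."""
    summary = {}
    for agency, scores in run_scores.items():
        summary[agency] = {
            "mean": round(statistics.mean(scores), 2),
            "spread": max(scores) - min(scores),  # how much the runs disagree
        }
    return summary

runs = {"Agency A": [7.5, 7.0, 7.5], "Agency B": [6.5, 7.5, 7.0]}
print(aggregate_scores(runs))
```

A large spread on one agency is itself useful information - it tells you the bot's view of that submission is unstable and a human needs to look closer.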

In summary we’re a long way off actually wanting to hand such decisions over to AI, but it can certainly be an interesting input into your evaluation IF you put the effort in to train it on your needs, what it’s looking at, and what kind of outputs to provide.

Getting Heroically Creative:

Why SHOULD only creative teams get to play with generative AI and visuals eh? Image generation tools and AI video animators put in all our hands the ability to create almost anything we could imagine. Briefs which might require weeks to produce and tens of thousands of dollars (or even be entirely impossible) can now be done in minutes. There is of course a next level in quality, execution and consistency that manual design still most definitely brings, but for some tasks…

I’ve experimented with AI videos both from an internal comms perspective (as a fun and unexpected way of introducing our new media team and capabilities to our other departments and franchisees) and as part of an inexplicable LinkedIn content strategy. Bored of writing about the #DigitalSense we need to bring to the discipline of marketing I took the logical next step and created an animated super hero cartoon to land the same messages.

You Get Out What You Put In - This is probably true of any creative brief, but if you ask for something vaguely and generically then expect to get something vague and generic in response. Both in fleshing out an initial visual style and in every frame you create, the more detailed and specific you can be, the more rich detail will be packed in. By default ChatGPT seems to regress towards simpler and lazier drawings and animations; I eventually found that prompts as simple as “add more detail and elevate the visual style” can totally transform the quality of what it produces (see the new Captain Digital Sense Shorts versus the original series!)

This is applicable even in less creative prompts - the more detailed the brief, the better and more specific your answers. It's easy to ask a GPT a general question, but the more specific you are with your inputs and in demanding exactly what you want in your outputs, the better. Again, you can simply ask your AI bot to produce a more detailed report in the style of your favourite analyst and it will up its game. For really important queries try working on the brief with the AI before you finalise it.

Consistency is Difficult - If you’re trying to create any sort of cartoon or visual identity you probably want it to be consistent. You want your key characters to look the same, the graphical style to be the same, your brand to perhaps look right.

You can do a lot to help with this - most notably by carrying out a detailed initial setup and getting really specific on character, world and design details. Get your GPT to consolidate these in a written guide and sets of reference images of key design elements. Eventually most AI chatbots will tell you a conversation has become too long and so you'll need these to brief a fresh conversation, but you'll probably also fall back on them every now and then when the bot you start with floats away from the brief.
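Re-briefing a fresh thread is really just assembling a standard opening prompt from the saved guide and references. A trivial sketch of that assembly step (the guide text, file names and wording are all made-up placeholders for illustration):

```python
# Sketch of re-briefing a fresh conversation from a saved style guide, as
# described above. All content strings here are illustrative placeholders.

def build_opening_brief(style_guide, reference_images, task):
    refs = "\n".join(f"- {name}" for name in reference_images)
    return (
        "You are continuing an established cartoon series. Follow this style "
        "guide exactly and treat the listed reference images as canonical:\n\n"
        f"{style_guide}\n\nReference images to attach:\n{refs}\n\n"
        f"Today's task: {task}"
    )

brief = build_opening_brief(
    "Bold comic style; the hero wears a blue cape with a #DS emblem.",
    ["hero_front.png", "hero_side.png", "logo.png"],
    "Draw the hero battling the organic reach myth.",
)
print(brief)
```

Keeping the guide as a single reusable artefact means every new thread starts from the same canonical description, rather than from whatever the last thread had drifted into.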

In reality, in the current state of AI, the bots are NOT consistent - this will improve for sure. You see it in inconsistent answers to the same written questions and scoring, and you definitely see it in visuals. In reality the cartoons I've made would drastically fail any real-world animator's sense checks - the drawing style, the character details, fundamental elements of logo design and character activity vary greatly from image to image. By the time you throw an image into an AI animator almost anything could happen.

I have a fairly high tolerance for this, and part of the secret is creating distinctive enough characters and themes that, despite their constant variation, no one would really notice or worry about it. That said, there are many times when I've been in a battle with countless amends to try and remove dramatically inaccurate elements that creep in, often with limited success.

Which brings me to the one thing AI can be very consistent at, and that's making mistakes. Our internal AI bot developed an odd bug whereby, when asked to analyse a media plan, rather than asking you to submit one it would dig into its training memory, pull out a random MMM or campaign it had data from and then analyse that. I wrestled with the AI for over an hour (each time it confidently told me it would update its code and not do this again) before I managed to find fairly watertight ways of blocking this mistake.

As the AI slowly generates its latest image it's depressing when you see a glaring error slowly emerging. Sometimes it's on you for not providing a clear enough prompt not to be misunderstood. Sometimes it just ignores all its training, references and prompting and generates something wrong. When challenged, sometimes it accepts this; often at first it denies it… and even when really pushed the AI can be very bad at correcting even simple mistakes. The biggest curse of creating a long thread of images is that certain elements from previous image requests can get lodged in the AI's mind, and then sometimes the battle is lost. However hard you try and convince it not to include said element in your next image - there it is again! When completely changing to a different brief doesn't work, I've had to start a whole new thread - annoyingly consistent, for a change.

Frustratingly Slow and Manual… But Fun to Experiment - All of this means that with current tools the whole process can be incredibly slow and frustrating. To get to even this level of consistency I employ a very manual process of individually generating each static visual key frame. Then I now use a different tool (Pika) to animate a string of them together. And for now I then totally manually piece together the resulting animation clips, record a manual voice over, try and still have the energy to drop some audio in the background.

When each image you generate can take over a minute to come to life, each error or anomaly can mean a 5-10 minute battle to course correct. It's only because visuals are so obvious that you can spot such errors; in written or other AI responses you might not spot them at all. The amount of content I've ended up producing is directly correlated to the amount of time I've spent travelling and waiting around in airport lounges trying to pass the time.

That said I find it creatively wonderful - to be able to think up almost any visual scenario and see it magically coming to life. I’ve played around with it with my nieces and nephews, in group chats and just as a way to entertain myself. Whilst some of my experimentation has been quite frivolous, and arguably serves no real purpose, I’ve enjoyed doing it all the same… and whilst I might never truly be required to have the skills needed to create a cartoon mini series, I have learnt a huge amount about how to prompt and negotiate with AI as a result.

My conclusion really loops back exactly to where I started - AI is changing and evolving so much that it's far from clear how we'll end up really leveraging it, but it's a great idea to find small ways to start experimenting with it in your everyday life. From summarising documents or shortening emails, through to more advanced analysis, it really does pay to play. And if you end up embarking on your own mad creative journey… then I can lend you a GPT briefing and reference images to work on your side of the crossover episode.

And no... quite clearly I didn't get ChatGPT to write, or even summarise, this article.

Alice Beverton-Palmer

Commercial strategy consultant for digital media brands (ex-Twitter, PinkNews, Hearst)

This is great and fascinating, thank you for sharing. I read recently that a key difference between AI adoption and social media adoption back in the day, is that while our social media selves were influenced by the element of performance, our AI selves and all the fascinating ways we're using it are private unless shared like this. It both means that AIs will likely see people's darkest or weirdest impulses (I think this was in Garbage Day writing about the Meta AI feed), and that we will miss out on useful learning from each other - so thanks!

Daniel Feldstein

Playing Games with Ad Tech | Marketer | Creative Problem Solver | Connector

Love this Jerry, appreciate the transparency and great to see what you’re building!
