AI as a Concept-Proofing Vehicle
As part of my ongoing work in feeling out the capabilities of popular Large Language Models, I've taken to validating some of my ideas using ChatGPT. This validation does not come in the form of asking ChatGPT to tell me whether something is a good idea. That's not just because it's a yes-man, confidently incorrect nearly all the time, and generally a lazy reader; it's because all of that is anthropomorphizing, and that kind of prompt won't give me what I want from this model.
For example, on a whim, I decided to see how a circus-themed sticker set might look.
Welcome to the Circus (example 1):
To create these mocks, I took some boilerplate prompt language that defines a particular style and asked GPT to give me trigrams (three-word chunks) describing actions that happen in a circus. Then I interpolated each trigram into the prompt, one sticker-sheet prompt per trigram (a rough sketch of that step follows). I'll circle back to this in another article.
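For illustration, here is a minimal sketch of that interpolation step. The boilerplate style text, the trigrams, and the {action} placeholder are all made up for the example; they are not the actual prompt language I used.

```python
# Minimal sketch: slot each GPT-supplied trigram into a boilerplate style prompt.
# The style text and trigrams below are invented placeholders, not the real prompt.

STYLE_BOILERPLATE = (
    "A sticker sheet of flat, pastel, die-cut illustrations with thick outlines, "
    "playful and whimsical, depicting {action}."
)

# Trigrams ("three-word chunks") returned by GPT in the earlier step.
trigrams = [
    "juggling flaming torches",
    "taming roaring lions",
    "walking the tightrope",
]

# Interpolate each trigram into the boilerplate, one image prompt per trigram.
prompts = [STYLE_BOILERPLATE.format(action=trigram) for trigram in trigrams]

for prompt in prompts:
    print(prompt)
```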
The results were mixed. Here are 2 of the sheets I generated.
While the first sheet was as whimsical and fun as intended, the second sheet looked macabre. If I were taking this business of selling sticker sheets seriously, here is how I would use these experimental results.
If I were under extreme budget constraints, unable to draw myself, and generally had loose ethics, I would inspect the prompt used to create the first sheet and iterate on content, but not on look and feel. I would have ChatGPT evaluate the prompt that created the second image for anything that might generate negative imagery, and iterate on that a few times, checking the DALL-E output after each iteration (there's a sketch of that loop after the next paragraph).
If I were ethical, or could afford to, I would hire an artist and use the AI-generated materials as inspiration. (This is the option I actually use for my projects.)
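If the first option's revise-and-check loop were automated rather than done by hand in ChatGPT, it might look roughly like this. This is a sketch under assumptions: it uses the openai Python SDK, the model names and critique wording are placeholders, and the original prompt is elided.

```python
# Rough sketch of the revise-and-check loop from the first option.
# Assumes the openai Python SDK with OPENAI_API_KEY set in the environment.
# The model names, critique wording, and iteration count are assumptions.
from openai import OpenAI

client = OpenAI()

prompt = "..."  # the prompt that produced the macabre second sheet (not shown here)

for _ in range(3):  # "iterate on that a few times"
    # Ask the chat model to flag and rewrite anything likely to produce negative imagery.
    review = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": (
                "Here is an image-generation prompt:\n\n"
                f"{prompt}\n\n"
                "Point out any wording that may generate negative or macabre imagery, "
                "then return a revised prompt that keeps the style but removes that wording. "
                "Return only the revised prompt."
            ),
        }],
    )
    prompt = review.choices[0].message.content

    # Check the DALL-E output after each iteration.
    image = client.images.generate(model="dall-e-3", prompt=prompt, n=1, size="1024x1024")
    print(image.data[0].url)
```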
Self-healing data (example 2):
A while back, I began a project on self-healing data. I did start writing about it, but a lot of things began happening very quickly at work and I had to put that on hold. Anyway, that project is another example of what I'm talking about: validating ideas, whether for business or not. I have two approaches that I use.
Approach 1: The direct way
In this approach, I simply ask ChatGPT to give me an approach. Literally, here's the prompt:
What almost always happens when using this approach is that ChatGPT will cough up some nonsense. As I said, it is typically confidently incorrect. Here's the response to that particular prompt.
And so on, and so forth.
I find that ChatGPT gives consistently poor outputs for this task type. Actually, its consistency in giving poor responses is exactly why I engage it for this process. This is a first pass. This is the low-level and horrible thinking that I could have spent a few hours or days doing (and have, and probably will again). Instead, I get this from ChatGPT and I can almost immediately see what's wrong with it... because they aren't my ideas. I'm not killing my darlings by eviscerating this sample output. So, we collapse the time horizon for iterating on new ideas.
Approach 2: The backdoor
As I've stated before, I'm really big on the idea of inversion. I tend to use inverted methods for a lot of things intuitively. It's only recently that I've begun to do it deliberately.
To approach this example the inverted way, I would instead ask ChatGPT to tell me why something won't work. Again, being literal.
What usually happens is that ChatGPT identifies the biggest and most obvious pitfalls in an idea. Now, if you ask it for pitfalls explicitly, it won't return this list. It's one of those quirks of a corpus-driven predictive model. Here's the output:
Et cetera, et cetera.
In surfacing these obvious flaws, ChatGPT is doing the work of a critic or a good friend. It is painful, expensive, and time-consuming to fall into pitfalls. It's also embarrassing to make a mistake that is so clear in hindsight. ChatGPT collapses the time horizon on that learning cycle and also gives you a hit list: invert each of the items it gives you and you very likely have the outline of a successful operation. It's a slam dunk. Where it falls short is the implementation details, which you'll need to work out sans cognitive aides.
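To make the contrast between the two approaches concrete, here is a toy sketch of the direct and inverted framings side by side. It assumes the same openai Python SDK as the earlier sketch; the prompt wording is a hypothetical stand-in, not the actual prompts from the examples above.

```python
# Toy sketch contrasting the direct ask (Approach 1) with the inverted ask (Approach 2).
# Prompt wording is hypothetical; these are not the prompts used in the examples above.
from openai import OpenAI

client = OpenAI()

idea = "self-healing data"

framings = {
    "direct": f"Give me an approach for building {idea}.",  # Approach 1: the direct way
    "inverted": f"Tell me why {idea} won't work.",          # Approach 2: the backdoor
}

for label, prompt in framings.items():
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {label} ---")
    print(response.choices[0].message.content)
```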