Of edges, flying, and falling – or: how AI misleads strategists

A few weeks ago I went out for drinks (a rare occasion) and ended up chatting with a guy about work (a not so rare occasion). “Well, I’m going to lose my job soon”, he shrugged, taking a big swig of his drink. It was the first time that this conversation, the robots-taking-over-our-jobs conversation, hit that close to home. A storyboard artist who knows he can’t compete with generic AI because, for most people, fast, fine, and cheap beats fast, amazing, and fair. I bought him another drink and we finished the conversation wondering whether his diving instructor training offered not just a better perspective on life but also better prospects in the age of AI.

Since then, the floodgates have opened and not a day goes by without dozens of posts about the best “ChatGPT prompts to get more done and reduce work hours” or “15 Best ChatGPT prompts to finish hours of work in seconds” or “The most useful ChatGPT prompts (in order)” or “Let GPT-4 transform how you journal and review your week” being flushed through my timeline.

Unsurprisingly, a few people have started writing about how strategists can use AI to improve their output. Which would be great – why wouldn’t we all want to continuously improve and get better at our craft – if “improving” didn’t always seem to equate to “speeding up.” And therein lies a bit of a problem, particularly if “speeding up” means “cutting short”. While I’m intrigued by the potential of LLMs for strategy development and love a good shortcut as much as the next person, I worry about the future of our profession when we start shortcutting our thinking.

Join the free and fun race to the bottom here

Julian Cole offers a great source of shortcuts for aspiring strategists, the ones that come in the form of (mostly) helpful frameworks. He’s been trialing ways of incorporating AI into those processes, shared a prompt by someone else to turn ChatGPT into a (poorly performing) strategy coach, and recently ran a free webinar on using AI as a strategist. (And he also warns you to use ChatGPT “like salt”, i.e. carefully.)

Use ChatGPT like salt

One of the seemingly more harmless examples he shared made immediate sense to me because I could relate to the pain point it aims to solve: finding, i.e. creating, better images to make your presentations look better. Isn’t it great that you can now simply ask Midjourney and the like to make the image that illustrates your point perfectly? No more scouring the depths of the internet or the streets of the world for that great image to land an argument. A precise prompt, and voilà: image done, point made.

The aesthetics of the logic of an argument

But something doesn’t quite sit right here. And it has nothing to do with the desire to make a presentation look better. And everything to do with the danger of making an argument look better than it is. Because the reason you can’t find the perfect image to illustrate your point might not be that you suck at image research. More likely, it’s that you suck at making a point.

Of course Julian is not suggesting anyone make up fake evidence. (Even though I could probably ask Midjourney to create photographic evidence to land my point.) He just wants your decks to look nicer because it’ll help sell your strategy.

An image I created in Midjourney for a client presentation. I never used it.

This article is purely me thinking all of this a step further. (And maybe a step too far.)

The perfect aesthetics of the illogic of a fake argument

If you have to – and now can – manufacture the precise imagery for your hypothesis and ideas, are that hypothesis and those ideas even real?

Generative AI is an amazing tool for creating fantastic imagery and stories. The results are hilariously entertaining. Think Harry Potter dressed by Demna, or Pope Francis in his white Balenciaga puffer. And while sometimes you really want them to be true (like when Wankers of the World created this stunning series of Tories without the privilege), they’re mostly just that: images and stories that are too good to be true.

https://twitter.com/wankersotw/status/1641730609000194052?s=46&t=8HAdf_MDI7ln8I11YpQe5g
Too good to be true

Our job as strategists isn’t to tell fantastic stories. It’s to lay the foundations for them. Our job is groundbreaking truths. Truths that cut deep, hit hard, and root great ideas. Our job is to find the human truth that sparks behaviour change and sets mass culture alight.

Now, if the image depicting the truth of your argument doesn’t exist, what if the truth of your argument doesn’t either?

Generative AI and the advent of the generic

Generative AI and the tools we are currently playing around with, in an arms race for some sort of competitive advantage, are not necessarily helping with that search. What they produce are simulations of the truth, based on probabilities determined by the quality, biases, gaps, and flaws of their “training data.”

This training data was never intended to be applied to the very specific problems of a marketing/brand/creative strategist. It was assembled to produce reasonable continuations of any kind of general input. Sure, it can be used for creative strategy, just like you can use window cleaner to cure any kind of ailment, but you’d better take the machine’s output, the data, with a pinch of salt. Somebody recently tweeted that no data is better than bad data. And, currently, there are a lot of strategists out there filling their presentations and their clients’ heads with bad data.

Using ChatGPT to help you get to better strategy

They might have done that before ChatGPT, too, but now they’re doing it at a pace that is hard to keep up with and with a (false) confidence that is hard to reason with. And while it might feel like they’re creating an advantage for themselves, they’re most certainly only creating false ones for their clients – whether that’s businesses in the market or creatives in the agency.

How are we supposed to create a truly distinctive idea, a truly differentiating approach, a truly unique product when all the inputs are generic averages, just another reasonable continuation? Naive strategists high on the fumes of AI’s seeming rocket fuel are at risk of driving their (and our) credibility off the cliff – at high speed. Yes, it will feel like flying, for a bit, when it’s really just falling.

Good strategists were always said to be a little lazy. The danger now is that lazy strategists think they’re actually good strategists.

The pros and cons and ways to go

Now, this can easily be misconstrued as a rant against AI from someone worried he won’t be able to keep up. Well, I won’t be able to keep up, I’ve learnt that much in the past weeks. And I’m not saying AI tools won’t be able to improve the strategy process. They can probably speed it up – but so far I haven’t seen them improve the quality of the process. False and/or made-up citations, a.k.a. artificial hallucinations, are just one problem. Biases in the training data leading to cultural misrepresentation are another. The current tools seem to get you to average faster (at best), but don’t seem to be able to reach good. (Yet.)

To get to good, we need models trained on specific data that is relevant to our jobs. Think of the IPA database, APG papers, the catalogue of Effie and AME case studies, paired with the potentially massive archives of every focus group transcript and ethnographic research report. Then it might get interesting and useful enough to get to better briefs and better strategies. Until then, we should all use it like Tom Roach does: to set the floor of expectations.

So next time you can’t find that perfect image to illustrate your point, why not pause for a second to ask yourself a simple question: do you really have a point?

If you really think so, strap on that rocket. I’ll see you at the cliff.

