The Disruption and Opportunity of Generative AI

Captain’s Log - Issue #141

Disruption may be noisy, or quiet. An earthquake is destructive and violent, its effects both tragic and immediately apparent. It changes things through fracture and loss. But an idea may have an equal or greater impact, over either a short or a long span of time.

There is a high degree of interest in the disruption provided by the emergent stable of generative AI systems. The crest of this wave is caused partly by their immediate, conceivable and shareable impact (pretty much anyone can figure out how to use them, and they generate text and images which can be easily consumed and shared), partly by their familiarity of format (they do not produce new modes of text, image, or soon music and film, but rather they produce the strikingly familiar), and partly by their almost immediate abstraction of expertise (anyone can be an artist, anyone a writer).

But right now, their disruption is largely by addition: they are used as a curiosity, a toy, a game, a fascination, a distraction. Except in certain quite specific contexts, they have not yet at scale taken anything from us. They have not yet had their true impact. And they have not yet grown: what we see now are simply systems of nascent potential. The very first tremors of what is to come.

What’s fascinating is the almost concurrent polarisation of opinion, the narrowness of much debate, and the certainty with which some people are holding opinions. This week I have seen opinions ranging from ‘the death of writing’ to claims that ‘AI will never replace writing’, alongside a slew of announcements from Google, Microsoft and others about integration into core products, and much self-assured narrative about how AI is not creative/creates bias/frees us from bias/is often wrong.

Well, it is often wrong, but then again, nobody claimed it was perfect: systems like ChatGPT are testbeds, highly capable within a narrow focus, but how accurate they are, or how well they serve us, are largely transient questions: we have already seen GPT-4 roll out this week, and that wagon is just starting to roll. Improvements are measured in tens of percentage points, not fractions.

I wrote something about my own perspective on it this week: I think that we are clearly at a point of fracture, but we are not yet transformed, and the imperative is to learn.

Instead of relying on other people’s dogma or certainty, this is almost certainly a time to build a working group and trial the available technology in real contexts. I would be creating elective programmes using a prompt-based and scaffolded learning design, with high levels of support, for an engaged small group, and seeing what we learn.

Waiting for a vendor or supplier to ‘give’ us an answer for application, or simply standing by and shouting an opinion, neither feels particularly constructive at this stage.

Systems in crisis respond with spasms: banning technology, resorting to certainty, ignoring the shaking, shouting loudly.

Systems that adapt will most likely hold their certainty lightly, will clearly articulate how they will take on risk, will create space and agency for experimentation, and will recognise that there is no ‘answer’ to find right now, but there is learning to be done.
