A useful lens through which to understand the planned, and tacit, behaviour of Organisations is that of tensions. Surfacing and mapping these can help us to recognise the forces that amplify or inhibit change, as well as giving visibility of our blind spots. But the tensions inherent in Generative AI create new fault lines, which may lie hidden either because of their innate subtlety, or through a human exceptionalism (a belief that ‘AI will never be able to do this…’) that blinds us, or even through the dogma of systems (the notion of ‘epistemes’ that form boundaries to our ways of knowing) that codify ideas into truth.
Some of these are obvious: we have a tension between the need to do what our systems know how to do (learn, procure, integrate, leverage, drive for efficiency, codify into systems, etc.) vs the need to do what we are less good at (discarding power, relinquishing control, evolving our narratives of truth and validation, shifting to dynamic and short-term models and codifications of certainty, etc.). Some are less immediately clear, such as the tension between ‘integration’ of AI into existing socio-cultural perspectives (where technology fits into an established social model, as a tool) and a conceptual reframing of society into an anthropo-technic model (human plus machine, but not necessarily with humans in charge), which is more synergistic than utilitarian.
Tensions lie in the personal space (what I do with GenAI vs what I disclose that I do), as well as the Organisational one (what efficiencies I gain vs what I pass on), and, to complicate things, the landscape in which we stand is still in motion – and will be – possibly forever.
There are systemic features that seek stability or predictability (market forces, the imposition of markets, reporting, legislation, etc.), which may continually reframe, but which are poor at handling fracture – and fracture may be a central feature of the change we are experiencing, or rather will experience.
As an example, we have seen great efficiencies in areas such as learning design and data processing, in parallel with pressure from clients to reduce rates (clients are not daft, and know that these efficiencies are in play). So the window of opportunity to exploit the profit of innovation is virtually nil in these cases – being efficient is more table stakes – but we have also seen markets innovate by charging a ‘premium’ for the human (although really this is a bet on ‘familiarity’, as opposed to something inherently ‘better’). We can rely on human exceptionalism to prop up or distort the market for a while, but it’s not a stable and inherent feature.
We already know that what people say and what they do are two different things: people may believe that they will prefer a human medic, but once they have had a deeply pleasing and fast experience with an AI medic (as opposed to a frustrating wait with a human one), they may change their minds. And the more we experience rapid and efficient chatbot interactions, as against degraded and cost-efficiency-driven models of human engagement, perhaps the less keen we will be to go back.
There are also deeper tensions running underneath this – partly features of the Social Age, partly features of our Post-Pandemic recovery (or legacy, or failure to learn) – such as challenges to notions of engagement, jobs, and career, to the notion of ‘local’, and even to ideas of citizenship and belonging, as I explore in the Planetary Philosophy work.
What we need are not easy answers, but rather a radical sense-making capability, at both individual and structural levels: the ability to hold multiple narratives, and hence multiple views of power, predictability, and clarity, which speaks to an ability to hold ambiguity.
Tension and fracture are related, but the one does not inevitably lead to the other: some tension is a strength, something we should seek, not fear – much as some risk should be mitigated, but some should be burned as the fuel of change.
Today I am #WorkingOutLoud on new work around Generative AI.