Here are some observations I made from the discussion we had throughout the AI and the Art of Management day organised by ComplexitySalon in Sydney on 25th August. The day was governed by the Chatham House Rule, so I have generalised. I have not attempted a complete summary of what was a very rich day, and some of these remarks carry my own interpretation.
Just to reflect upon why we organised the day in the first place: the current state of the world and the pace of change can feel overwhelming. The day was intended as an oasis of reflection, where we could sit together, bodies in a room, and make as good sense as we could of the processes and changes in which we have no choice but to participate. It is a way of making ourselves more responsible, a small opportunity for action at least in sensemaking. Those who refuse to struggle have no right to hope.
So we discussed that we are undoubtedly in the midst of a technological revolution, the precise reach of which is still contestable – it’s worth remembering that a good degree of hype always accompanies such changes in technological capacity and reach, and to keep the fanfare going benefits the companies which are promoting it. But there are already breakthroughs for medical research, commerce and routinizing boring and repetitive tasks (as long as there are always humans in the technological loop), many of which are hugely beneficial. Some of these developments have surprised even those who are developing AI systems: the precise workings become opaque when AI systems interact with other AI systems.
The promise of generative AI has yet to be fully delivered, but change is happening so fast it is important never to say never. AI models are statistical probability machines, but they are more than this. It is easy to anthropomorphise these systems and talk as though they can ‘think’ and ‘reason’, yet both thinking and reasoning are human, social activities. Although AI systems can mostly give an account of how they reached an answer, depending on whether the algorithms are proprietary or not, this is not the same as giving an account to one another and working through our differences, a task we undertook in the room.
We talked about the difference between thinking and critical thinking. AI seems capable of working something out, and of a second-order process of noticing and reframing the working out. But critical thinking, a third-order process we sometimes call reflexivity, is currently beyond it. Reflexivity implies the ability to think about how we are thinking. In humans the mind is more than the brain; it manifests through our bodily experience. Amongst a small handful of mammals, we are capable of taking ourselves as objects to ourselves. We can notice ourselves as we engage with others.
We are also feeling animals: the ability to mimic, respond to and call out emotions in humans is not the same as having a feeling capacity. Can being appealed to emotionally ever substitute for caring for one another directly?
If we were to apply critical thinking to AI systems, then one way of framing this is to think in terms of the four ‘I’s: interests, ideology, identity, and institutions. Technological development is never just neutral, nor just for the good.
Cui bono? Whose interests does the current development of AI benefit? The AI agenda in the West seems largely driven by a small handful of large technology companies, some of whose leaders have openly interfered in elections and have demonstrated fealty to regressive political regimes. These same technology companies have co-created an economy that has contributed to widening inequality and has amplified social divisions and fragmentation. The interests driving AI development in China seem less clear, apart from competition with the West.
Ideology: AI seems driven by a variety of ideologies, from techno-utopianism, which holds that outcomes can only be good, to pessimism about flawed human beings. Techno-utopianism is a form of scientism that believes that ultimately all qualities can be turned into quantities, as long as we have enough data points. As for pessimism, we are deemed to be so broken and imperfect, incapable of rationality and logic, that only AI can fix us: the sooner we hand over to, or fuse with, our more logical surrogates, the better. There is also a quieter and more modest commitment to making AI work for human betterment and insisting that it works for us, rather than the other way round.
While acknowledging the need for legal and ethical ‘guardrails’ to constrain the development of AI, and the need to include a range of social scientists in AI development teams, we also recognised that there is an ‘arms race’, in which very few governments are likely to constrain development for fear of missing out. One wonders whether guardrails will ever be sufficient to contain an arms race.
Identity: who are we becoming as we hand more and more tasks over to AI? Do we lose the capacity to think for ourselves and take responsibility? If moral reasoning and critical thinking are like muscles which need to be exercised, then how do we stay in shape?
Institutions: what will the world of work become? If our institutions depend upon AI for streamlining recruitment, for detecting fraud, for awarding exam results, and we know that systems are designed by flawed human beings, then how do we ensure that we don’t just amplify our prejudices and project them forwards?
In our discussions we reflected on what gets lost in an algorithm-based system which synthesises and represents data from the internet, most of which has been created by a privileged minority. It constitutes a very narrow slice of human knowledge in history, ways of knowing, and ways of being together. We still have traditions in the world where knowledge is imparted through transmission: teaching music and drama, supervising a doctoral student, Zen Buddhism – all depend upon a relationship between someone who has been through the gate supporting an initiate to do the same. So much of human experience depends upon human bodies resonating in a room together.
At one point we wondered whether AI is a threat at all, particularly in comparison with runaway environmental catastrophe. But it is not so hard to see the connection between the two. Both are symptoms of a particular instrumental attitude to the world and to ourselves, what one sociologist has described as treating the world as a point of aggression: that it should be made known, dominated and pressed into greater use.
Turning to the question of whether we should be optimistic or pessimistic about these revolutionary changes, one might conclude that neither is appropriate, and both are superficial responses: it’s just as easy to be either. But hope may be a much more appropriate response if we understand it as hope against hope: we go on together with no expectation that things will necessarily get better, or that our problems will be solved, but we are still prepared to take action together to bring about the best we can. Being flawed and human is not a design fault; it’s what allows the world to be continuously surprising and emergent. And in the experiential group we experimented with exactly this imperfect, resonant process, trying to be visible to each other to negotiate a better understanding of the sense we had made together of the day.
In Copenhagen. Thanks for the blog.
Still in Sydney?
Some thoughts on ideology: https://tempo.substack.com/p/an-optimism-of-steel
I am disappointed that AI has failed to take my job yet.
There seems to be a wave of redundancies where employers believe AI will replace workers. About half of those workers will be hired back, as the AI is not that good yet.