Artificial General Intelligence: Hype or Not? A Heated Theoretical Exchange Between Two University of the Free State Professors
AI shares with apartheid a corrupt impulse to alter what it means to be human
In April 2023 Professor Richard Ocaya and I had a fairly heated exchange via work email on the subject of Artificial General Intelligence (AGI). This was a continuation of a debate on ChatGPT and Large Language Models that had begun a month earlier at a retreat in Clarens, near the border with Lesotho. Both of us are based at the University of the Free State. Richard is a Professor in the Physics department and I am in Communication Science. He is at the QwaQwa campus; I am in Bloemfontein – which explains the electronic nature of the exchange. Richard and I have contrasting – and occasionally strong – positions on Artificial General Intelligence, its present, and its future. Richard is the expert. He has built computers, worked for IBM, and is currently training his own offline AI system at home. In fact, Richard recently whispered to me that he was forced to shut down his home AI because he thought that, one morning, it had begun to self-prompt!
Richard is convinced that computers can become sentient, and also that they will take over most jobs currently held by humans. I think that Richard is soundly mistaken, and that he is caught up in the hype. I am not an expert on AGI, and so my contribution is non-technical and non-specialist. Yet my field, communication, more or less pioneered Information Science with Shannon’s ground-breaking 1948 paper, “A Mathematical Theory of Communication”. I also believe that subjects such as AI are too important to leave to programmers and algorithm-pushers. Rather, the social sciences and the humanities have a large role to play, particularly now that AI has shown a propensity for lying, naked hallucination, and cannibalism. Richard believes the hype around AGI is fully deserved, and he provides reasons to support this view. His claims are presented in the paper. I remain broadly unconvinced about AGI. My contribution in our ensuing exchange draws on my work in the emerging field of Apartheid Studies (AS), which I pioneered to focus on human action in the “design” of persistent harm in human society. To my mind, the study of apartheid “design” offers surprisingly lucid entry points into understanding the limits of algorithmic design.
My theoretical work in AS, for one, leads me to doubt the efficacy of the Turing Test. I make an argument about the “nature of nature” and the “nature of truth” to show that neither nature nor truth can pass the Turing Test. This is because nature and truth cannot seem. The only things that can pass the Turing Test are those that can seem, like computers, which use training data to seem human. Seeming can never iterate into being. Computer programmes that seem human can never be trained to be human.
Furthermore, my research in Apartheid Studies on the nature of harm, rest, and entropy suggests that humans will always be infinitely faster and better than computers at certain tasks. This follows from a simple test that I devised, which I call the “same-hands test”. Using this test, I show that robots cannot be programmed to wash hands, for instance. I also show that machines and algorithms cannot rest, because rest is neither input nor output. Rather, rest alternates with work as a ceaseless string of 0s and 1s that never superpose. There is no quantum superposition in social relations.
I concluded, to Richard’s dismay, that the hype around machine learning is undeserved because it is based on a fundamental, perhaps corrupt, misunderstanding of how humans relate to each other and to the world. It is of some interest to me that apartheid, likewise, expresses a corrupt impulse to alter what it means to be human. This paper presents our exchange, framed by my broader theorising of harm, truth, and rest in Apartheid Studies. I offer the paper here as a footnote-in-progress to the emerging debates on Artificial General Intelligence. This Abstract has been submitted to the inaugural Foundational Digital Capabilities Research (FDCR) Seminar 2023.
Thanks, Elvis. I will share the paper with you (so you get Richard's thoughts) – it's still a draft at the moment. But the issue of AI is one that scholars need to assess soberly and engage with. Certainly, it is too important to leave to engineers.
Wow. Your contribution is logically presented. I have little to no knowledge about computers, but I definitely understand some of the components that make up our social scientific models of communication; therefore, I concur with you, Prof.
However, I would still love to have a taste of what Prof Richard had to say as a counter to this argument… it sounds interesting.