The philosophy of tech

Some of the burning questions about tech, AI or robotics are actually questions of philosophy. What is our true goal here?

There is a lot of debate about the change AI will bring to our ways of living and working, and rightly so. However, in many discussions, we could easily replace the word “AI” with “tech”, as shorthand for “information technology”, without changing the debate. Moreover, we can trace the debate back to underlying positions in the philosophy of tech, such as materialism or idealism.

Let me explain.

These days, there’s pretty much a consensus that AI will take over tasks that so far only human beings can do. In principle, that’s nothing new. The calculator I used at school could solve many maths problems faster than I could, and handle more complex ones, too. In class, we could delegate certain tasks to the calculator, allowing us to tackle new, different questions. That’s what tech in general, and AI in particular, does.

Nowadays, AI is a fuzzy concept. A few years ago, it meant “machine learning”. Now, it’s more commonly used for LLMs and their equivalents. Suffice it to say that it will mean something different in a few years’ time. Clearly, we are on one of the front lines of information technology, or “tech”.

The definition of being human

Now, what seems to be clear is that tech, or AI, isn’t equivalent to “human”. Or is it? A strain of transhumanism strives for the fusion of human beings and machines, but at least for now, we can maintain the distinction. With every task AI (or tech) takes over, it touches on the very definition of being human; it raises ontological questions.

It seems to be almost a consensus that AI will impact many, if not most, jobs. Think about other general-purpose technologies, and you’ll see that computers and the internet affect nearly every job today, be it directly or indirectly.

The effect of AI goes beyond that. It touches on ontological questions, ones that concern the human condition. In 1917, Sigmund Freud coined the term “slights to humanity” (Kränkungen der Menschheit) for disruptive scientific discoveries which, according to his theory, called humanity’s self-image into question and inflicted a narcissistic injury.

Back in 1994, the German physicist and philosopher Gerhard Vollmer pointed out ten mortifications of humanity. Among them was the prospect of machines (artificial intelligence) that match and even surpass our intellectual achievements. Human beings define themselves and their value partly through work and through their self-perception as a unique species. AI threatens both.

Especially in the realm of creativity, AI has forced us to rethink our assumptions. We thought, and somehow continue to think, that creativity is part of the human condition. When machines take over creative tasks, do we then change our definition of creativity – or of human beings?

This brings us to the philosophy of tech, in particular to positions such as materialism and idealism. Put simply, materialism is the view that everything – including human consciousness – can be traced back to matter. Physicalism goes one step further with the thesis that everything that exists is ultimately physical. From a materialist perspective, there is no fundamental difference between human beings and machines.

Artificial consciousness

Strictly speaking, there is no problem here for the materialist. For the idealist, it’s a different matter. In idealism, consciousness is fundamental. As long as machines aren’t conscious, they are fundamentally different from human beings. However, artificial consciousness at least seems to be possible. But maybe this only means that we are back at the crossroads between materialism and idealism.

For the materialist, artificial consciousness is something to be expected because he has no objection to it in principle. The idealist would object and stick to the distinction between conscious human beings and unconscious machines. At this point, we can leave this debate open.

However, there is a tendency to contrast machine perfection with human imperfection. The underlying assumptions go back to Romanticism, as a counter-movement to industrialisation and the Enlightenment. The keywords here are subjectivity, imagination, nature, intuition, passion, and beauty. We can see traces of Romanticism in today’s green movements or in the appreciation of the artisanal as opposed to the industrial.

The irony is that we can only afford the artisanal on the basis of the industrial. At least, that’s the case as long as we want to keep our current standard of living and not go back to pre-industrial levels. The latter is something that only a minority would accept.

Artificial imperfection

But the question goes deeper. As Christian Souche, who leads robotics at Accenture Innovation, puts it:

Should the ultimate goal of AI in robotics be to replicate human performance, including our imperfections, or to achieve a flawless, perfect performance devoid of human error?

We could, he argues,

create machines that not only perform tasks as humans do but also embody the nuances and imperfections inherent in human behavior. By integrating these imperfections, AI can achieve greater relatability, adaptability, and acceptance in human-centric environments. This strategy underscores the importance of empathy, intuition, and the organic variability that characterizes human interactions and decision-making processes.

On the other hand, the pursuit of perfect performance emphasizes the elimination of human errors and the optimization of efficiency and precision. This perspective advocates for leveraging AI’s potential to surpass human limitations, creating systems that operate with unparalleled accuracy, consistency, and reliability. Here, the focus is on achieving superior outcomes, reducing risks, and enhancing productivity across various sectors, from healthcare to manufacturing.

Is there a class of tasks that AI can’t take over as long as it pursues perfect performance? Can tech be empathic and have intuition, or at least simulate these human qualities?

But then there is another problem, one at odds with perfection: the rise of mediocrity. Generative AI tends to produce more of the same – not perfect, not necessarily bad, but interchangeable. It is commoditisation on steroids. It culminates in a trend towards meh-diocrity that has many causes – not just AI and tech. I don’t think we can escape that trap through more automation.

These are questions of robotics, AI, or tech in general – or rather, of philosophy. The answers depend on our philosophical stance towards tech (and there is more to it than idealism, materialism, or Romanticism) – and on our wishes for a world that is increasingly characterised by tech and AI.

Picture by Alex Knight / Unsplash.