London Calling Back, #2–Invasion of the New Daleks: Alienation, Authenticity, and The Preacher on the Train.


APP EDITORS’ NOTE:

LONDON CALLING BACK, by Emre Kazim, is a series about philosophy, society, and politics, from a British and non-North-American point of view, emphasizing a new critical-dignitarian, edgy, and thoroughly push-backarian philosophical, social, and political ferment on the rise in London, recalling the heady days of politicized punk and The Clash.


EARLIER INSTALLMENTS:

#1: “Human Nature”


#2: Invasion of the New Daleks: Alienation, Authenticity, and The Preacher on the Train.

With the emergence of new digital technologies, we’re seeing the increasing automation of actions, and a “digitalisation” of decision-making that was previously the purview of human beings.

So perhaps the most appropriate motto and call-to-arms in this era is: “the new Daleks are taking over!”

In this context, serious questions of human authenticity are emerging.

By authenticity I’m referring to the relationship between a person and their thoughts, feelings, desires, and choices, together with the actions they perform.

Actions performed by someone as a causal source, flowing from that person’s own free choices, can be categorically contrasted with things that merely happen to a person, as a result of someone else’s, or something else’s, causal influence.

The more a person is subject to actions and occurrences that merely happen to them, and that therefore did not flow from their own free choice, as a causal source, the less authenticity the person has.

According to a broadly Marxist picture, with the advent of our modern means of capitalist production, the person as an object that’s simply moved and moulded by external forces suffers alienation.

The term “alienation” in English of course contains, and is derived from, the term “alien,” used in our everyday language to refer to something altogether foreign and different: especially including controlling, invasive, threatening extra-terrestrial life-forms like the Daleks.

But what precisely does this term mean in the context of our own alienation?

In this sense, as alienated, we’re not emotionally or physically isolated from the society of others (although obviously that can happen too): instead, people are alienated from themselves.

According to this conception of alienation, there’s a split in the personality of a person, whereby, on the one hand, there’s a thinking self, and on the other there’s a self that’s objectified, a self-in-the-world that’s merely being externally caused and moved as an object: thus the person is at once aware of themselves from the inside (the thinking self) and also observes themselves as helplessly subject to external forces (the objectified self).

The disconnect is grounded precisely in the gap or split between the thinking self and the self that’s being acted upon.

There would be no disconnect, gap, or split if (and only if) the thinking self and the self-in-the-world were coherently integrated—and here, our thoughts, feelings, desires, choices, and actions would be defined by our own self-organized deliberation and sourcehood.

Again, that’s authenticity.

Putting these two lines of thought together, then: there’s a direct relationship between automation and personal alienation, i.e., inauthenticity.

The question is, what is the cost of such inauthenticity?

It may be argued that increased automation makes things easier for us, and that life is better as a result.

In some circumstances, such as trivial tasks, this is obviously the case.

Nevertheless, in areas that are far more consequential and significant, such as education, friendship, love, social institutions, and politics, automation only increases the load of external forces exerted upon people; and in all such contexts those forces, which seek to automate our thinking and our lives, do so with a view to manipulating us for the benefit of those who pursue a particular agenda, or set of agendas, typically at once economically-driven and State-driven.

In this way, the automation of thinking itself—via algorithmic nudging that’s highly efficient in affecting and effecting human behaviour—is an attack on the very idea of what it is to be human, where “to be human” is to be a conscious, self-conscious, thinking, feeling, desiring, choosing, acting rational human animal.

Above, I talked about “alienation” in terms of a thinking self that’s more or less self-consciously aware that its own personhood is inherently divided—gappy and split.

What’s so frightening about contemporary automation—the invasion of the new Daleks—is therefore that the thinking, conscious, self-conscious self is now directly subject to the forces of automation itself.

If our thinking itself is automated, prior to or over and above action, then even the awareness that “Something is wrong! I am not one with myself!” will be dissolved and humans will truly become new Daleks and merely robotic post-human creatures, fully automated, and entirely “lost in space.”

Our humanity, as such, will be lost.

To be authentic, as I’ve said, is to have forged a coherent relationship between all of one’s thoughts, feelings, desires, choices, and actions, both at any given time and also over time.

I think, feel, desire, choose, and act, coherently and for myself, therefore I authentically am—that’s what we’re striving for.

Without the satisfaction of all the elements of the antecedent clause, however, we merely exist, as new Daleks.

Allow me to narrate something that happened earlier this year.

During the summer, I was reading about the emergence of emotional artificial intelligence, which is the development of automation and “intelligence” that reads a person’s emotive states.

One source of enthusiasm for this technology lies in “welfare delivery”: e.g., robots equipped with emotional AI can potentially be used to care for the elderly, to assist in mental health therapy, and even to serve as “companions” for those in prison.

Traditionally, these roles have required care-givers, hence interventions by conscientious and caring people.

In other words, they’re roles that require concern, empathy, and respect for the dignity of persons.

How could robots ever actually be concerned, empathic, or respectful?

At most, they could only manifest a set of superficial behavioral indicators that cog-sci and marketing “experts,” by means of empirical studies, deem indicative of those essentially human qualities: in other words, pseudo-concern, pseudo-empathy, and pseudo-respect.

Let’s consider empathy in particular.

We can think of two kinds of empathy.

The first is genuine empathy: here, a person is actually being empathetic towards another person, as a result of genuinely caring for that other person.

The second is mimicked empathy, mock empathy, phony empathy, pretend empathy, or pseudo-empathy: here, someone merely simulates the behavioral indicators of empathy.
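Since this mimicry is the crux of everything that follows, here’s a deliberately crude, purely hypothetical sketch, in Python, of what merely simulating the behavioral indicators of empathy amounts to: a program that pattern-matches on surface cues in what someone says and emits canned, empathic-sounding replies. Every cue, reply, and function name below is invented for illustration; nothing in the program feels anything.

```python
# A crude, hypothetical sketch of "mimicked empathy": the program matches
# surface features of what a person says and returns canned
# empathic-sounding responses. No inner state of caring exists anywhere.

import random

# Invented-for-illustration table: surface cues -> stock "empathic" replies.
CUE_RESPONSES = {
    "sad": ["That sounds really hard.", "I'm so sorry you're going through this."],
    "lonely": ["You're not alone in feeling this way.", "I'm here with you."],
    "angry": ["It makes sense that you'd feel frustrated.", "Anyone would be upset by that."],
}

DEFAULT_RESPONSES = ["Tell me more.", "I hear you.", "That must be difficult."]

def pseudo_empathic_reply(utterance: str) -> str:
    """Return a canned reply keyed to surface cues in the utterance.

    This reproduces only the behavioral indicators of empathy:
    nothing first-personal occurs anywhere in the pipeline.
    """
    lowered = utterance.lower()
    for cue, replies in CUE_RESPONSES.items():
        if cue in lowered:
            return random.choice(replies)
    return random.choice(DEFAULT_RESPONSES)

print(pseudo_empathic_reply("I've been feeling so lonely lately."))
# -> e.g. "I'm here with you."
```

From the recipient’s side, such output may be hard to distinguish from the words of a genuinely caring friend, and that is precisely the challenge.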

But if we assume that the person who is the recipient, i.e., the one experiencing the empathy, experiences exactly the same thing, whether it’s the first or the second kind of empathy, i.e., authentic or inauthentic empathy, then we can ask: what’s the difference?

Someone might say: what’s really important is the experienced empathy, not the authenticity of the source.

So the developers of empathic, emotional AI can argue that authenticity is irrelevant.

These summer readings deeply troubled me, because I had no direct and effective response to that argument.

To return to my earlier line of thinking: if a person is so effectively manipulated that they don’t self-consciously experience the division, gap, or split between their thinking self and their self-in-the-world, then what’s there to worry about?

While reading an article on precisely this topic—arguing for the irrelevance of authenticity in emotional, empathic AI—I was sitting on a train on the London Underground, on my way to work.

On the particular Underground line that I use, the trains have been updated so that there’s no division between the carriages—hence it’s possible to walk along the entire length of the train.

“Your life has meaning!”—a raised voice reached my ear, emanating from a man a few carriages down, who was walking slowly through the train.

Raising my head, I watched him slowly advance towards me.

He was proclaiming various fairly familiar evangelical Christian slogans, like “God loves you!,” “Love your neighbour!,” and so on.

My particular favourite was, “You’re made of light!”

As the preacher progressed along the train, I looked at the other passengers: I was the only person keeping my eyes on the man; others glanced up momentarily and then quickly returned to their statue-like poses, most of them with headphones in their ears and their eyes glued to their phone screens.

But the man went on and on, passing me and continuing along the train, his voice dimming with the distance.

Fascinated, I didn’t return to the text I was reading: instead, I closely observed the other people on the train, all of whom appeared utterly unaffected and unmoved.

Reflecting further, I thought about what to make of the man and his loud slogans; I’m sure that most people just dismissed him as unhinged, with a few others perhaps feeling embarrassed for him, or feeling pity for someone who’d been reduced to this pathetic, deluded state.

Perhaps the most striking fact, for me, was their sheer indifference.

By contrast, what struck me about him was his self-confidence and apparent lack of any inhibiting self-awareness—the kind that makes us fear speaking in public, or, even more so, fear being publicly laughed at or shamed.

Centrally, I was moved by his unique subjectivity: by which I mean the way he existed in and for himself in those moments on the train.

The man was unique, with a powerful character and a beaming smile.

His existence was his own self-organized existence: no one on the train was like him.

But aside from him, everyone—including myself—was nothing but a digitized copy of everyone else.

I don’t subscribe to his worldview, but I was intensely envious of his commitment and purpose.

His life had meaning, and it was in direct contrast to the rest of us new Daleks—mere automata—on the train.

This experience, in turn, helped me to develop a critical response to the troubling argument that as long as we’re unaware that we’ve become mere automata, or as long as emotive manipulation is experientially successful, there’s no problem.

My response is as follows, step by step.

1. Emotional and empathic manipulation relies on machines that simulate the behavioral indicators of empathic states.

2. To simulate successfully is to trick us successfully.

3. But real emotions (feelings, desires, etc.) are inherently subjective states that would have to be functionally and statistically modelled, in order to be reproduced.

4. And real emotions, as inherently subjective and first-personal, are not merely functional and statistical phenomena.

5. Therefore, real emotions cannot be reproduced.

6. Therefore, emotional AI cannot be truly effective.

This argument relies, in steps 3-5, on the notion of real emotions, which, I’m claiming, are inherently subjective and first-personal, and therefore functionally and statistically unmodellable.

Obviously, for that part of my argument, I’m drawing on classical Thomas-Nagel-style irreducibility arguments: but I’m also appealing directly to the essentially embodied phenomenology of human emotion, which is inherently deeper and richer than simple consciousness, i.e., subjective experience, on its own.

Now when a cognitive-scientific model of emotion is created, it does not reproduce real human emotion, but instead creates a behavioral and neural image of human emotion.
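To make the notion of a behavioral and neural “image” concrete, here’s a minimal, purely illustrative sketch, assuming scikit-learn is available: a statistical classifier trained on made-up third-person measurements (standing in for facial-expression and vocal features) paired with self-reported emotion labels. All numbers and labels are invented; the point is just that nothing first-personal enters the model at any stage.

```python
# A toy "statistical model of emotion": a classifier mapping third-person
# observable features to emotion labels. Everything it touches is
# third-person data; the felt quality of an emotion never enters.

from sklearn.linear_model import LogisticRegression

# Invented feature vectors: [smile_intensity, brow_furrow, vocal_pitch_variance]
X = [
    [0.9, 0.1, 0.7],  # measured while subjects reported "joy"
    [0.8, 0.2, 0.6],
    [0.1, 0.9, 0.2],  # measured while subjects reported "distress"
    [0.2, 0.8, 0.3],
]
y = ["joy", "joy", "distress", "distress"]

model = LogisticRegression().fit(X, y)

# The output is a probability distribution over labels: a functional,
# statistical image of emotion, not an emotion.
print(model.classes_)                             # ['distress' 'joy']
print(model.predict_proba([[0.15, 0.85, 0.25]]))  # e.g. [[0.7 0.3]]
```

Such a model can predict, and even help steer, behaviour; but what it traffics in is the image.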

Assuming that the algorithmic manipulation of human behaviour is indeed experientially effective, as a highly powerful means of manipulation, its effect is to generate an emotional state in people that’s not identical with what their real emotional state would be, independent of that manipulation, but is instead only an experience of a behavioral and neural image of real human emotion.

Correspondingly, this experience implements precisely that disconnect, gap, and split between the thinking self and the self-in-the-world, that’s characteristic of personal alienation.

In short, emotional AI cannot reproduce the experience of real human empathy, but instead it can only induce or produce personal alienation by fooling people into believing they’re experiencing someone else’s empathy.

Now back to the preacher on the train: as far as I could tell, his thoughts, feelings, desires, choices, and acts all aligned with the guiding ideals of his life.

His existence was wholehearted.

Compare this, now, to the “normal, sane” contemporary adult human being, i.e., to the other folks on the train, who would conventionally be sharply contrasted with the preacher, who’s taken to be “bonkers” or a “nutter”—in short, insane.

The “normal, sane” contemporary adult human being, compliantly, diligently, and obediently executing the duties that society and the State have demanded of them, and living the life of homo economicus, is an instantiation of a socially-constructed human type, the digitized subject, the new Dalek.

The AI algorithms work on this digitized subject and are altogether effective, as we know from many empirical studies.

Why then did everyone on the train, bar the preacher, look so miserable?

It could quite easily be argued that I’m in no position to cast aspersions on the people on the train—for how could I truly know their internal states, hiding behind their everyday social masks?

That’s obviously true: I can’t know their internal states with certainty.

Allow me then instead to appeal directly to you, the reader, and ask: to what extent do you join me in my envy of the commitment and purposefulness of the preacher on the train?

Of his bravery and uniqueness?

Of his “I don’t give a damn” attitude about externally-imposed social norms, in pursuit of authenticity?

So I’ll close with a slightly modified version of Nietzsche’s Parable of the Madman:

Have you not heard of that madman who lit a lantern in the bright morning hours, ran to the market place, and cried incessantly: “I seek authenticity! I seek my humanity!” —As many of those who did not believe in such things were standing around just then, he provoked much laughter.

The madman jumped into their midst and pierced them with his eyes. “Whither is the Human?” he cried; “I will tell you. We have killed humanity—you and I. All of us are murderers. But how did we do this? How could we drink up the sea? Who gave us the sponge to wipe away the entire horizon? What were we doing when we unchained this earth from its sun? Whither is it moving now? Whither are we moving? Away from all suns? Are we not plunging continually? Backward, sideward, forward, in all directions? Is there still any up or down? Are we not straying, as through an infinite nothing? Do we not feel the breath of empty space? Has it not become colder? Is not night continually closing in on us? Do we not need to light lanterns in the morning? Do we hear nothing as yet of the noise of the gravediggers who are burying Humanity?”

“How shall we comfort ourselves, the murderers of all murderers? What was holiest and mightiest of all that the world has yet owned has bled to death under our knives: who will wipe this blood off us? What water is there for us to clean ourselves? There has never been a greater deed! This deed is still more distant from us than most distant stars—and yet we have done it ourselves.”

