Stoic advice column: should I worry about A.I. and automation?

[Feel free to submit a question for this column, but please consider that it has become very popular and there is now a backlog; it may take me some time to get to yours.]

D. asks: “How should a Stoic evaluate their thoughts about predictions for future A.I. and automation when they are concerned for human jobs and the value of the human being?”

Wow, this truly is Stoicism for the 21st century, and beyond! I doubt Seneca ever thought he had to contemplate that sort of ethical conundrum (though he probably should have thought more carefully about the institution of slavery, which he took for granted as one of the foundations of Roman power).

Let me begin by saying that — as both a biologist and a philosopher — I’m not really concerned about the sudden development of super-human A.I., the so-called Singularity event (I have explained why here and here).

That said, more run-of-the-mill A.I., as well as automation at many levels, is both a reality and a social concern, as your question implies. Of course, one could argue that this is nothing new. The famous Luddite movement of the early 19th century railed against, and unsuccessfully opposed, the introduction of weaving machinery at the onset of the Industrial Revolution. And when I was growing up in Italy in the ’70s, workers at the FIAT motor company were protesting against early experiments in the robotization of their workplace, fearing — correctly — that they would either lose their jobs or face lower pay in the future. All of this on top of the additional labor problems created by recent trends in globalization and multinational corporatization.

I believe the Stoic take on this is to put human beings first, the efficiency of production a distant second, and corporate profits a very, very distant third. This is for a number of reasons.

To begin with, as Marcus reminds us, our job is that of trying to be as decent a human being as possible, concerned with the welfare of others and of society at large:

“In the morning, when you rise unwillingly, let this thought be present: I am rising to the work of a human being.” (Meditations V.1)

“Labor not as one who is wretched, nor yet as one who would be pitied or admired; but direct your will to one thing only: to act or not to act as social reason requires.” (Meditations, IX.12)

This, of course, is in agreement with the general injunction to live “according to nature,” i.e., taking seriously our nature as social beings capable of reason. (See here my discussion of the relevant passage in Cicero’s De Finibus, book III, section 20.)

It is also in agreement with the famous Stoic concept of oikeiosis, the idea that we should “appropriate” other people’s concerns, most famously represented by the image of Hierocles’ concentric circles, centered in oneself but surrounded by those of family, friends, fellow citizens, and eventually humanity at large. (See the last section of my essay On Hierocles.)

Finally, your preoccupation is also in line with Epictetus’ discipline of action, which is itself related to the virtue of justice and the topos of ethics, arguably the most fundamental of the three spheres of Stoic study (the other two being physics and logic).

So it seems to me that you are on very solid Stoic ground when you worry about the effects of A.I. and automation on the welfare of fellow human beings. The question, of course, is what to do about it.

History — particularly the above-mentioned Luddite movement — tells us that it is a fool’s errand to simply oppose the advancement of technology. Indeed, to do so would be a poor application of the virtue of prudence (phronesis), or practical wisdom. This is the virtue related to the discipline of assent, and it serves as your guide for navigating complex situations in the most ethical manner. I suggest that outright rejection of technology would not be the “prudent” (in the Stoic sense) thing to do.

Then what? I think a Stoic would want to fight for justice here, meaning neither the “social justice warrior” approach — which in my mind is well intentioned but smells too much of self-righteousness and even occasionally of narcissism — nor adopting a general theory of justice à la, for instance, John Rawls (because general theories of justice regularly fall short of accounting for the complexity of actual human societies and situations). Rather, it means always being mindful to treat other human beings justly, with fairness, even at a cost to one’s own convenience or finances.

Let me give you an example that has to do more with the “sharing economy” than with automation per se, though the same principle applies. I live in New York City, where currently I enjoy — lucky for me — six basic choices in order to get around: I can drive (but I’m not crazy, I don’t own a car!); I can walk (which I do, often); I can use a bike (either my own, or the ones the city rents out — I do this occasionally, if the weather is okay); I can use public buses and subways (which I do most often); I can take a cab; or I can use one of the private car services like Lyft and Uber.

In the few instances in which I’m either forced, or it is honestly much more convenient, to take a cab or a car service, I always opt for cabs on the grounds that their workers are treated better than those of the private services. When I do (rarely) use a private service, I always opt for Lyft over Uber, because of the notorious corporate culture at the latter company, which includes systemic sexual harassment of its female employees, as well as highly unethical treatment of both drivers and customers (the latter through the despicable practice of surging fares even when it is obviously objectionable to do so, like after a terrorist attack in Australia, when people were trying to get away from danger).

I’m perfectly aware that: i) the situation is complicated, because for instance in avoiding Uber I do indirectly hurt its drivers; and ii) my individual choices are a small drop in a very large bucket.

Nonetheless, my own phronetic analysis led me to the above (always revisable, in the light of new facts or better reasoning) choices. Something similar can be done when it comes more directly to automation. For instance, once Amazon introduces drone delivery, I will either opt for another provider or pay an extra premium to have the “privilege” of my goods being delivered by an actual human being.

Finally, I also make a point of talking to my representatives in Congress and of voting for people who are defenders of strong labor laws aimed at minimizing the impact of modern technology (and corporate greed) on workers. I know, it ain’t easy, or convenient, to be a Stoic. But that has never been the promise:

“‘Is there no further reward?’ Do you look for any greater reward for a good man than to do what is noble and right? At Olympia you do not want anything else; you are content to have been crowned at Olympia. Does it seem to you so small and worthless a thing to be noble and good and happy?” (Epictetus, Discourses III, 24)


Categories: Stoic Advice

32 replies

  1. “Of course, one could argue that this is nothing new. The famous Luddite movement of the early 19th century railed against, and unsuccessfully opposed, the introduction of weaving machinery at the onset of the Industrial Revolution.”

    Even for the Luddites it was the soft-ware (the punch cards) more than the machinery that adversely affected their jobs.


  2. For the record, corporations are persons ..

    My limited understanding is that the original purpose of ‘corporation as person’ was a legal fiction, so that corporations could make contracts and be sued, thus allowing them to function in commerce.

    If they commit crimes (the killer Pinto), the people responsible should be prosecuted and go to jail. You can’t punish a fiction; if you try, the very nearly innocent stockholders are the people who suffer.

    Corporations are not people and before this absurd manipulation of ‘person’ they were only ‘person’ in a very narrow sense.

