I’ve just published Winning Slowly 7.13, the final episode of this season, in which we attempted to make some gestures toward what a fruitful ethic of technology might look like. In listening to the episode as I edited it, I noticed one particular claim Stephen made to which I didn’t really have a good opportunity to respond. However, I do have a blog (and a standing commitment to write at least 500 words each day this month), so: here we are!
Early on in the episode, I noted that people’s feeling of decline is itself a kind of actual decline, and Stephen disagreed:
I don’t think that’s necessarily even true: because we have Twitter bots that make things seem true, and… there’s literally not even anybody making that idea. Twitter bots picked it up out of the air and made it a thing…
On air, I chose to dig into some of the meatier kinds of decline, which are harder to argue with. Here, however, I do want to note my disagreement with the claim as Stephen made it on air. Twitter bots don’t make things up “out of the air”! To the contrary: they are designed, at minimum, to amplify the sentiments they encounter. Moreover, much of the Twitter bot activity of the last few years has in fact been self-consciously directed by malicious actors to bring about exactly the kinds of negative outcomes we have seen.
This way of talking about these technologies is a mistake, the same mistake computer technologists have been making for a very long time, perhaps still best exemplified by Kevin Kelly’s What Technology Wants: to attribute agency to the program and thereby to absolve the humans behind those programs of responsibility.
I think we do ourselves and our neighbors a very great disservice when we speak this way. It is possible that things are in fact only perceived to be bad, and that the perception is worse than the reality: that perception is the full extent of the effect of Twitter bots’ amplification of negativity and discontent. Even if we grant this for the sake of argument, there are still real negative outcomes in the world as a result! That perception leaves people feeling more alienated from those around them, likelier to be depressed, and so on, with all the attendant ills.
What is more, though: I do not think those attendant ills, real and serious though they are, represent the full extent of the outcome of Twitter bots. Neither do I think that Twitter bots have created, undirected, out of the air, the malaise so many people feel. Insofar as some of these problems are “only perception,” that remains a real problem; but more than that, the bots are amplifying pre-existing problems, at the behest of their makers. We owe it to each other to say this clearly and truthfully — to hold accountable those who make and employ these technologies, and to counter their effects.
Stephen has since responded with a very thoughtful piece: On Twitter Bots and the Presence of Disinformation. This paragraph in particular captures quite clearly both what I aimed to say on the episode and what I gestured at above:
There’s another sense in which the presence of disinformation is real and that presence itself can contribute to a sense of decline. Discussing the presence of disinformation as a factor that contributes to a perceived sense of decline counts disinformation as “real.” It is “real” in that it is an actual factor contributing to a perceived sense of decline, despite its content being untrue. Things that are not true, do not exist, or never happened should not worry you; their nonexistence cannot affect you in a material sense. But those things being “untrue” does not necessarily mean that the disinformation does not exist–even if it should not exist. And the presence of disinformation can contribute to a perceived sense of decline (and perhaps rightfully so); the actual disinformation can contribute to a perceived sense of decline (but I am saying that this should not be so, and we should fight against this tendency).
I commend the whole post to you.