#lang pollen

Solarpunk and the Way Out

(I’ve found it! The way out! No more Paglen! Depressing old man!)

[note: this was written for a class on the cultural implications of image-based machine learning and assumes one has read Trevor Paglen's “Invisible Images”]
“I will be an 85-year-old man in the Catalonian wilderness, teaching a robot pottery.”
- Zach Mandeville, A future vision

When I first stumbled on the writings of the other Zach, I was in a deep tech-alienation spiral. I was in the early stages of actually acquainting myself with what my computer could do, using it to build all manner of net.art-y things that felt like they opposed the corporate web. It was the middle of winter break, and I had been excited to use the time off to become a fully realized 10x developer (not really, but I wanted to teach myself a bunch of computer stuff). But even though the process of learning and building (and deliberately misusing web technologies) felt in some sense countercultural, I just couldn’t motivate myself to do any of it. Any attempts at resisting our corporate overlords felt hopeless/pointless, and I remember giving up for several days. If I had read about Paglen’s prison of invisible images then, I surely would have plunged even deeper into despair. Thankfully I instead happened upon the peer-to-peer web and found a whole community of people who felt the same way and were actively imagining a new relationship with technology.

Computers are wonderful magic machines (and potential friends) but are currently configured primarily to convince us to purchase more and more content (food, Netflix, AirPods, etc.) instead of encouraging any emergent relationality. The dominant user-centered design paradigm discourages any knowledge of the machine: its entire essence is abstracted behind the big ‘BUY’ button. Even knowledge of how computers work doesn’t alleviate this alienation entirely. Computers are still set up to be expensive vehicles for ads– we can’t have computer-friends without a different sort of computer. Paglen is useful in some sense: he gives us a (maybe too alarmist) picture of the current state of the world. But he refuses to imagine an alternative, trapping us in a hopeless world of surveillance, marketing, and corporate control. This is why the robot pottery lesson is so important– with hopeful imaginations aided by a sprinkling of Simondon and the possibilities opened up by deep learning, we can begin charting a way out of our current techno-alienation.

I led with this snippet of speculative fiction because I want hope to be the starting point of our inquiry, not an afterthought. The harms of image-based deep learning are real and our current situation is dire, but ‘the discourse’ has paid these phenomena plenty of attention while largely omitting any gesture towards hopefulness. In this paper, I think with Simondon’s On the Mode of Existence of Technical Objects, ‘solarpunk’ futures, and deep learning algorithms to draw a (hazy/uncertain) line from our current dystopia towards a world of teaching mushroom-powered robots not to push down so hard on their thumb pots.

Simondon makes the stakes of this work clear, arguing that the most potent form of social alienation derives from a ‘non-knowledge’ of the machine, which is created by the pursuit of ‘automatism’ over ‘technicity’. (Simondon 16-17) Since the time of his writing (1958), the alienation caused by technology has only worsened. Kevin Slavin’s article on user-centered design helps us understand this alienation. He argues that the contemporary logic of rationalization has created machines that are ignorant of and alienated from their environments (what Simondon would call their associated milieus). For Simondon, the relationships between the elements of the machine, and between the machine and its environment, must change in order to bring technics into culture. Deep learning and its understanding of images have opened up the potential for such a radical reconfiguration. Because neural networks demonstrate the potential for computing with meaning rather than abstracted data, they propose a different relationship between human and technology (and society) that Simondon could only have dreamed of.

First, let us dig into the forced separation of technics (machines) from culture. Simondon compares culture’s view of machines to a ‘xenophobia’ that rejects the machine-stranger, even though the machine contains something human within it. While aesthetic objects are granted meaning, machines are reduced to tools that have only function. Those who appreciate the meaning contained within machines– those with technical knowledge– are thus forced to elevate the machine to a sacred position, since culture will not grant machines an aesthetic one. From its sacred position, the machine creates a ‘technocratic aspiration to unconditional power’: it becomes a tool for the domination of others. This domination is embodied in a will towards automating the machine to as high a degree as possible, producing what Simondon describes as a “duplicate of man devoid of interiority”. (Simondon 16) So, culture is left with two contradictory attitudes toward the machine: that it is a “pure assemblage of matter”, i.e. a tool devoid of meaning, or that it is the mythical robot duplicate of man that harbors the permanent possibility of aggression against us. And, seeking to prevent the latter, culture reduces the machine to an automated, enslaved tool. (Simondon 17)

Simondon argues that the pursuit of automatism in machines (spurred on by industrial modernization) is the root of this issue. Automatism closes machines off to information and their ‘associated milieus’ in the name of reducing them to an invariable, sub-human tool. When machines are more sensitive to the world around them, their results are less predictable (lower automatism), but they possess greater possibilities for use and a more involved relationship with humans (higher technicity). This is not to say automatic machines have no relation with man, but that their automatism seeks to obscure and stabilize that relationship. (Simondon 17-18) Furthermore, this closedness to external processes reduces the machine to a state of artificiality, requiring human intervention to protect it from the natural environment. (Simondon 49)

Though he was writing about machines like engines and diodes, we can see Simondon’s theory at work in our relationship with today’s computers. The current configuration of (non-deep-learning) computers is one of automatism and artifice. Computers are designed to deliver us content (whatever form that takes); they are the glue between Amazon’s networks of production and delivery and us, the consumers. The computer through which we click ‘purchase’ has no knowledge of the meaning of that action; it just unquestioningly passes the correct bytes to the correct server. It is fundamentally alienated from the world. Because computers are designed to run exactly the same no matter their environment or their user, they possess an automatism that closes them off to the world. Our computer only knows which buttons we click; it is restricted to manipulating abstract bits. Such a machine cannot survive out in the world. It is artificial, a slave to its human protector. (Simondon 49)

Though they demand our attention, computers do not relate to us. They were designed for users, not collaborators or participants. The user-centered design paradigm has smoothed over our interactions with computers. There is no indeterminacy, only intuitiveness– a program that surprises or confuses is a bad program. Rather than feeling like a technical object, the computer should feel like nothing. Is this not the dream of the ever-thinning iPhone? To disappear completely into your hand– a smudge-free window into the world of information? Though I argue that deep learning proposes a way out of this prison, it is not immune to this phenomenon. Natural language processing has enabled the rise of Siri and Alexa– automated voice assistants that are not your friend, more like your secretary. Even when we give machines a voice, it is only to make it easier for the user to access information (and products). Moreover, much of machine learning today is directed towards figuring out how to convince each of us to watch the next show, order the next pizza. Computers fit neatly into the cultural binary Simondon wants to undo– they have been made both the deferential servant and the dystopian manipulator.

“The prime condition for the incorporation of technical objects into culture would thus be for man to be neither inferior nor superior to technical objects, but rather that he would be capable of approaching and getting to know them through entertaining a relation of equality with them, that is, a reciprocity of exchanges; a social relation of sorts.”
- Simondon p. 105

I have laid out a description of our current condition and the case for change. But how do we get to the point where computers are our friends? It begins with images. Images (and videos) contain meaning that, until recently, computers struggled to access. But with the rise of deep learning, computers have begun unpacking images in terms of human meaning. Object detection is, for most practical purposes, a solved problem. Higher-level meaning-computations like describing photos with words, identifying human body position, etc. are approaching similar levels of accuracy. One might argue that such a computer could not understand iconography or metaphor, and, while this may be true, I do not think it matters much. How would you explain to a child what Pinochet represents and why we might want to remove photos of him from social media? In a future where machines understand language and can extract objects from images, it seems like one could simply explain the issue in a similar way. Regardless, I am willing to concede this point– if our goal for computers is fascist-free social media, maybe deep learning cannot solve that. But a future where we can build and teach our own robot friends seems eminently possible.
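To make that claim about meaning-extraction concrete, here is a minimal sketch of present-day object detection using a pretrained model from torchvision. The photo filename is hypothetical, and any off-the-shelf detector would make the same point.

```python
# A minimal sketch: a pretrained network turns raw pixels into
# human-legible labels. "studio.jpg" is a hypothetical photo.
import torch
from torchvision.io import read_image
from torchvision.models.detection import (
    fasterrcnn_resnet50_fpn,
    FasterRCNN_ResNet50_FPN_Weights,
)

weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights)
model.eval()

img = read_image("studio.jpg")
batch = [weights.transforms()(img)]

with torch.no_grad():
    prediction = model(batch)[0]

# The network returns meaning ("person", "bowl", "cup"...),
# not just abstracted bytes.
labels = [weights.meta["categories"][i] for i in prediction["labels"].tolist()]
print(labels)
```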

If we accept this future as possible, it has important implications for Simondon’s own work. In a sense, the way deep learning programs are trained bears out Simondon’s theories about indeterminacy. Artificially adding indeterminacy (e.g. rounding decimals, randomly silencing parts of the network) to the training process of neural networks has been shown to increase their accuracy when given real-world data. That is to say, machines with in-built indeterminacy function better in the real world. So, deep learning can be read as a sort of Simondonian praxis– we are opening machines up to the world, making them more natural than artificial.
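A minimal sketch of that built-in indeterminacy, using dropout (the best-known version of the idea) in PyTorch: during training the network randomly silences half of its activations, and that noisiness is precisely what helps it generalize to the messy real world.

```python
# Indeterminacy as praxis: with dropout enabled, the same input takes
# different pathways through the network on every pass.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(784, 256),
    nn.ReLU(),
    nn.Dropout(p=0.5),  # training: half the activations randomly zeroed
    nn.Linear(256, 10),
)

x = torch.randn(1, 784)

model.train()      # indeterminate mode
print(model(x))    # these two outputs differ...
print(model(x))    # ...despite identical input

model.eval()       # deterministic mode: dropout disabled
print(model(x))    # these two outputs are identical
print(model(x))
```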

So what lies between us and teaching a robot pottery? Computers are well on their way towards understanding language and their environment (through a video feed). And we’ve developed models that allow humans to teach neural nets how to operate. Now we just need to open them up to bespoke data. We need a computer that is so open to its world that it can learn how to exist just by interacting with us and the rest of its environment. Vast training sets have brought us to this point, but for a computer to be a friend, it must be open to learning a lot about a very specific environment instead of a little bit about the entire world. Today, this sort of learning exists only to serve us more effective ads or to surveil us better. I struggle to see how reclaiming the use of our data happens without political change first. Maybe as processors get faster/smaller/cheaper, training personal neural nets will become accessible to all. I don’t feel confident enough to take a position on this, but I’m happy to leave this thread loose.
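The closest present-day mechanism to that ‘a lot about a very specific environment’ learning is probably transfer learning: a network pretrained a little on the whole world, fine-tuned heavily on one small corner of it. A hedged sketch (the photo folder and its four classes are made up):

```python
# Transfer learning: freeze the generic, world-scale knowledge and
# retrain only the last layer on a small, bespoke dataset.
import torch
import torch.nn as nn
from torchvision import datasets, models

weights = models.ResNet18_Weights.DEFAULT
model = models.resnet18(weights=weights)

for param in model.parameters():
    param.requires_grad = False          # keep the pretrained features
model.fc = nn.Linear(model.fc.in_features, 4)  # hypothetical: 4 pottery stages

# "my_studio_photos/" is a hypothetical folder of labeled personal images.
data = datasets.ImageFolder("my_studio_photos/", transform=weights.transforms())
loader = torch.utils.data.DataLoader(data, batch_size=8, shuffle=True)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:            # one pass over the bespoke data
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```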

Where are the images in all of this? Though I may have left them slightly behind, the image and its relationship to language are central to this work. Computers did not really begin to understand human language until language was coupled with images and video. Furthermore, consumer computers today are not pottery-robots, but screens with a keyboard. Their entire function is to display endless images to us. If deep learning allows computers to extract meaning from images, it seems like we could start building a relationship of technicity (no master-slave binary) through images now, without waiting for our pottery-robot. I won’t pretend to have a concrete answer for how, but I think we should start with our robot-friend and imagine it behind a screen. Rather than communicating with words and gesture, we would have to learn to think in images. In retrospect, this language of images is probably what I should’ve written this paper about, but it took all this wonderful thinking about Simondon and robot pottery for me to even get here, so I do not regret it.

Before you close this page, I want to pose a couple more questions and offer some partial answers. First, does dancing in front of a Mo-Bamba-image-generating computer represent increased technicity or point towards this language of images? My gut says both: if the images were less broken it might feel like we were just puppeting the computer, but in its current hallucinatory, not-quite-cohesive state, the dance feels more like a conversation. One can feel the agency of the machine in the images; it is not the ever-disappearing smartphone. What do we do with that? Embrace indeterminacy in our AI art. Refuse to allow space for ‘proof-of-function’ works (ahem, Paglen). Ask how each piece invites us to think with the computer, not just what it does to us.

Lastly, in a possible future of meaning-computing machines that are our friends and equals, how would our relationship to nature change? If machines can compute human-legible meaning, can they compute plant-legible meaning? Or bear-legible meaning? Simondon’s technicity is not just a relationship between humans and machines, but between machine and environment as well. “The robot is fueled by the sun and mushroom pulp,” and blends in with the pottery. Our future computer friends are not only deeply entangled with people, but with nature. I don’t even know how such machines would communicate with nature, but if machines can be taught to think with humans, it seems possible that we could teach them to think with non-humans. Maybe this would even give us a way to think with non-humans. I want a world where we teach robots pottery, and they teach us how to talk to the mushrooms that are their food.

References:

Paglen, Trevor. “Invisible Images: Your Pictures Are Looking at You.” Architectural Design 89, no. 1 (2019): 22-27.

Mandeville, Zach. “A Future Vision.” Accessed March 15, 2019.

Simondon, Gilbert. On the Mode of Existence of Technical Objects. Translated by Cécile Malaspina and John Rogove. Minneapolis, MN: Univocal Publishing, 2017.

Slavin, Kevin. “Design as Participation.” Journal of Design and Science, February 24, 2016. Accessed March 15, 2019.

bonus---> Alexander, Christopher. The Timeless Way of Building. New York: Oxford University Press, 1979.