Chapter 01
What is poetic about computation?
Introduction
Greetings. Welcome to the first class of Poetics and Politics of Computation at the School for Poetic Computation (SFPC). I’d like to begin the class by asking “What is poetic computation?” First, there is the poetics of code, which refers to code as a form of poetry. There is something poetic about code itself, the way that syntax works, the way that repetitions work, and the way that instruction becomes execution through abstraction. There is also what I call the poetic effect of code, which is an aesthetic experience realized through code. In other words, when the mechanics of words are in the right place, the language transcends its constraints and rules, and in turn, creates this poetic effect whereby thought is transformed into experience.
Together, the poetics of code and the poetic effect of code form ‘poetic computation.’ The terms code and computation are often used interchangeably, but I should note that code is only one aspect of computation. Code is a series of instructions for computation, and it requires logical systems and hardware to make those instructions computable. In that sense, computation is a higher-level concept than code. For our purposes, however, we can use poetics of code and poetics of computation interchangeably throughout these discussions.
To a non-coder, non-artist friend, or to those just beginning to learn to program, I often say code may look like poetry in an alien language. And to those more experienced with code, writing code sometimes feels like writing poetry because it doesn’t always ‘work.’ I mean two things by ‘work’: on one hand, does it work as an art form? Is it good poetry? On the other hand, I mean ‘work’ in a more utilitarian sense. Does it have a practical application?

At SFPC, we like to think that poetic computation is when language meets mathematics, and logic meets electricity. Sometimes, poetic computation is literally writing poems with code. Some of our teachers and students write poetry with algorithms to explore what the language can do. When we started the school, a lot of people asked if the school is for generative poetry or electronic literature. We clarified that while we are definitely interested in the intersection of language and computation, we want to explore a broader definition of the ‘poetic.’ We want to investigate the art of computation as well as the expressive qualities of code, including its aesthetic, visual, aural and material aspects.
While this artistic potential lies at the core of the school’s excitement about code and computation, I’m interested in how this turn towards art may help us explore political possibilities. In this class, I consider computation to be a lens for examining reality and thinking about emergent issues in the world. In other words, computation can be a vehicle for imagining new ways of being in the world. Let’s first step back to look at material precedents of modern computation and computers.
Genealogy of computers
The first computers were human.1 Long before electronic computers were invented, ‘computing’ was a profession for people who calculated and managed data. The (human) computers worked with mathematicians to execute algorithms and theories. Mathematicians would ask the (human) computers to work on the numbers.2 Oftentimes, there would be multiple (human) computers working on the same algorithms in order to detect and prevent mistakes. Considering this history, it’s curious that we’ve created such a dichotomy between computers and humans these days. In a sense, we are all computers (people who compute). Computers need not always be metallic, electronic or very distinct from us. Computer scientists, among others, may be wary of my broad definition of computers, but I like to think computing happens whenever a logical way of thinking is applied to a given problem.
We can trace the evolution of contemporary computers from operations research around World War II to management science in the second half of the 20th century. Operations research mainly focused on calculations for ballistic missiles and planning logistics for moving large numbers of troops at the same time, while management science, which grew out of operations research, included anything from accounting to quantitative research. Actually, much of the software we use today, such as Microsoft Excel and Word, Gmail and Facebook, shares a distant lineage with both.
Operations research and management science are related because they were both influenced by the discipline of cybernetics, the theory of self-regulating systems built from feedback loops. When these self-regulating systems were paired with powerful computers, they made possible the centralization and decentralization of information and material goods on a vast scale. The tension between these two states marked temporary crises and resolutions in capitalism, manifesting as production and dispersion, times of abundance and scarcity, or even war and peace. In this way, war machines and international finance share the same ancestors. As we move on, it’s important to keep in mind that I’m presenting an incomplete genealogy of computers, and I encourage you to go back after the talk and explore the specifics.
Mechanical computer
In the early 19th century, Charles Babbage, an English mathematician and engineer, invented arguably the first mechanical computer. This tabulating machine, designed to calculate large sets of data, was built with the materials and technology available to Babbage at the time. It was an era of ships, railroads, and lots of mechanical inventions, so he constructed his computer as a system of moving gears. Very mechanical!
It’s interesting to think in general about how people’s ideas for inventions were constrained by the materials available to them. In fact, the history of computers is closely related to the discovery of new materials. It is thus remarkable that Babbage managed to imbue his mechanical computer with the conceptual framework of the not-yet-possible computers. The Analytical Engine, one of Babbage’s incomplete prototypes, became a platform for Ada Lovelace, a mathematician, to collaborate and create algorithms. In this way, Lovelace came to be considered one of the first ‘computer programmers,’ a person who instructs machines in automated tasks.3
Analog computer
The analog computer was a stepping stone to the digital computer because while it still had mechanical components, it also had analog components that used a continuous (electrical) signal.4 Vannevar Bush, a mathematician and electrical engineer, who we’ll discuss more next week, did critical work on the analog computer. Its components included disc and wheel mechanisms that could calculate, for example, the trajectory of a missile. Computers at this time, however, were still slow, prone to failure and in need of endless fiddling by engineers.
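To get a feel for the kind of work those disc-and-wheel integrators did, here is a rough sketch in code. It is not a model of Bush’s machine; it simply steps a projectile’s motion forward in time, a digital stand-in for the continuous integration a differential analyzer performed with rotating parts. The function name and values are mine, for illustration only.

```python
import math

def trajectory(v0=100.0, angle_deg=45.0, dt=0.01, g=9.81):
    """Step a projectile's motion forward in time with simple Euler updates."""
    vx = v0 * math.cos(math.radians(angle_deg))  # horizontal speed
    vy = v0 * math.sin(math.radians(angle_deg))  # vertical speed
    x, y = 0.0, 0.0
    points = [(x, y)]
    while True:
        x += vx * dt
        vy -= g * dt          # gravity gradually reverses the vertical speed
        y += vy * dt
        if y < 0.0:           # stop when the projectile returns to the ground
            break
        points.append((x, y))
    return points

path = trajectory()
print(f"range is roughly {path[-1][0]:.1f} m after {len(path)} time steps")
```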
Electronic computer
The next major leap came in 1937, when Claude Shannon, a very bright student of Bush’s, wrote a master’s thesis at MIT called “A Symbolic Analysis of Relay and Switching Circuits.”5 It showed that electrical relays could be used to carry out binary logic operations. Until this point, the concept of binary logic existed, but there were no reliable electrical components to execute it. Shannon saw that relays, by switching on and off, could open and close circuits and thereby carry out logical operations. This is very similar to how transistors work; they were invented about ten years later at the Bell Telephone Labs in New Jersey.6
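As a small sketch of Shannon’s insight in today’s terms (my own illustration, not drawn from his thesis): a switch is either open or closed, and wiring switches in series or in parallel gives you logical operations.

```python
# Shannon's observation, restated in code: a relay contact is either open
# or closed (False or True). Contacts in series behave like AND, contacts
# in parallel behave like OR, and a normally-closed contact behaves like NOT.

def series(a: bool, b: bool) -> bool:
    """Two switches in series: current flows only if both are closed (AND)."""
    return a and b

def parallel(a: bool, b: bool) -> bool:
    """Two switches in parallel: current flows if either is closed (OR)."""
    return a or b

def normally_closed(a: bool) -> bool:
    """A normally-closed contact: energizing the relay breaks the path (NOT)."""
    return not a

# Truth table for the series (AND) circuit
for a in (False, True):
    for b in (False, True):
        print(int(a), int(b), "->", int(series(a, b)))
```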
We can think of the transistor as the smallest conceptual building block in a computer. Transistors have three legs, or terminals, called the collector, base and emitter. A small signal at the base controls a much larger current flowing from the collector to the emitter. This is one of the essential features of the transistor: it can amplify a signal. Even if the input signal is very weak, the transistor can produce a strong output. This makes long distance communication possible, because while we can easily talk to each other in this room, we’d need to amplify the signal to communicate over a larger distance.
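A toy numeric illustration of that amplifying behavior (a caricature, not a circuit model): a small base current is scaled up by the transistor’s gain. The gain value and signal numbers below are made up for the example.

```python
# Toy illustration of amplification: a tiny base current controls a much
# larger collector current, scaled by the transistor's current gain (beta).

BETA = 100  # a typical current gain for a small-signal transistor (assumed)

def amplify(base_current_mA: float) -> float:
    """Idealized collector current: gain times base current."""
    return BETA * base_current_mA

weak_signal = [0.010, 0.020, 0.015, 0.005]          # milliamps at the base
strong_signal = [amplify(i) for i in weak_signal]   # same shape, 100x larger
print(strong_signal)
```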
The other essential feature of the transistor is its ability to switch on and off, thereby enabling binary logic. Recall that Shannon’s relays, which directly preceded the transistor, also made binary logic possible. By switching on or off, a transistor can start or stop the flow of current. The zeroes and ones in computers, by the way, are simply these two states of the electrical signal. These simple characteristics of transistors made it possible to build electrical circuits that could compute exceedingly complex logic.
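To suggest how ‘exceedingly complex logic’ grows out of nothing but two-state switches, here is a minimal sketch of a half adder (my own example, not from the text): it combines an XOR and an AND to add two single bits, and chaining such adders yields circuits that add numbers of any length.

```python
# Complex logic from simple on/off elements: a half adder combines an XOR
# gate (sum bit) with an AND gate (carry bit) to add two one-bit numbers.
# Real hardware builds these gates out of transistor switches.

def half_adder(a: int, b: int) -> tuple[int, int]:
    """Add two one-bit numbers; return (sum, carry)."""
    s = a ^ b   # XOR: 1 when exactly one input is 1
    c = a & b   # AND: 1 only when both inputs are 1
    return s, c

for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        print(f"{a} + {b} -> carry {c}, sum {s}")
```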
Reference Reading
1 Charles Petzold, Code: The Hidden Language of Computer Hardware and Software (Redmond: Microsoft Press, 1999), http://www.charlespetzold.com/code/index.html.
2 David Alan Grier, When Computers Were Human (Princeton: Princeton University Press, 2005).
3 Charles Petzold, Code: The Hidden Language of Computer Hardware and Software.
4 As we’ll see, this is different from digital computers, which operate with discrete (electronic) signals with decision-making capacity.
5 Claude Elwood Shannon, “A Symbolic Analysis of Relay and Switching Circuits” (Cambridge: Massachusetts Institute of Technology, 1940), http://hdl.handle.net/1721.1/11173.
6 Priya Ganapati, “Dec. 23, 1947: Transistor Opens Door to Digital Future,” Wired, December 23, 2009, https://www.wired.com/2009/12/1223shockley-bardeen-brattain-transistor/.