© K Allado-McDowell
How will deep entanglement with neural networks shape our future selves – and who gets to write the script?
The first in a three-part series by Writer in Residence K Allado-McDowell explains some of the fundamental structures underlying AI and the ways neural media are likely to alter human experience.
The media we interact with and consume have profound effects on our bodies, our psyches and how we understand the world. Early broadcast media entrained radio listeners and television viewers to singular messages and rhythms, in a one-to-many relation. Think of the amplified voices of party leaders at political rallies, the three-minute love song, the half-hour family sitcom, the authoritative evening news report. The psychosocial effects of a medium come through its content, its structure, even the physical mechanics of its consumption. For example, broadcast technology may have reduced social participation by aggregating human attention and presence around television sets in private homes. [1]
Within network media like the internet and social media, we inhabit different structures than those of broadcast media; we communicate in many-to-many clusters, we think at the speed of the timeline scroll, in a political landscape fractured by filter bubbles, echo chambers and fake news. The early internet was promoted with Marshall McLuhan’s mantra, “the medium is the message”, and the cozy promise of a “global village”. Yet, as early as 1998, decreases in social involvement and increases in stress and depression were observed among internet users, possibly as a result of losing ties to local communities. [2] While so-called “digitally native” generations have more fluency with network culture and adapt their behaviour and norms accordingly, they are also more influenced by the structure of these media, growing up with social media’s physical and mental health effects, including depression, anxiety, altered posture, restricted breathing and reduced reading comprehension. [3]
Some negative consequences of network media were foreseen and might have been avoided through a broader media literacy. As we witness the birth and maturation of AI, or what we might call neural media, we can create the literacy needed to avoid its foreseeable negative outcomes. Like broadcast media and the internet, AI operates according to formal and logical structures, which influence social and political formations. AI is characterised by its apparent ability to think; it is a cognitive medium and its consequences will be cognitive and philosophical as much as they will be social and political. By understanding the underlying structures that enable and characterise AI, we can better develop patterns of human-AI interaction that preserve what we most value.
Deep neural nets, deep learning, machine learning, machine intelligence, artificial intelligence – all these terms refer to the set of computational techniques that emerged over the last century and crystallised in the last decade, bringing about what we now call AI. In 2023, AI is most commonly experienced in its generative form; chatbots generate text in one-to-one conversations, and image and sound generators produce pictures and music from descriptive prompts.
But AI is not always generative; much of the AI that emerged in the 2010s was perceptual. Critical and ethical conversations in the last decade dwelt on the proper limits of facial recognition, on how we might apply machine sensing of voiceprint, gait and sentiment without empowering totalitarian regimes, and on the nature of digital biometric doubles. The same technology that powered sensing systems developed into the generative AI that we know today.
Whether perceptual or generative, today’s common AI systems all use neural nets, mathematical abstractions inspired by structures found in biological brain tissue. [4] The first mathematical model of a neuron was described by neurophysiologist Warren McCulloch and cognitive scientist Walter Pitts in 1943. Together they proposed that the behaviour of neurons (and their ability to learn) might be describable in the language of formal logic, with clusters of neurons acting like mathematical functions that take inputs and produce outputs. Computations could be performed by exciting or inhibiting simulated neurons. In 1948, Alan Turing further advanced the neural approach with the “B-type unorganised machine”, a network of simple artificial neurons joined by modifiable connections. By iteratively adjusting the weights between neurons, one can embed an abstract mathematical representation of a dataset into the network.
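To make this concrete, here is a minimal sketch of a threshold neuron in the McCulloch-Pitts spirit: a weighted sum of inputs that either fires or stays silent. The weights, threshold and inputs below are illustrative choices for this essay, not values from the 1943 paper.

```python
# A minimal sketch of a McCulloch-Pitts-style threshold neuron.
# Weights and threshold are illustrative, not taken from the original paper.

def neuron(inputs, weights, threshold):
    """Fire (return 1) if the weighted sum of inputs reaches the threshold."""
    activation = sum(i * w for i, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

# Positive weights excite the neuron; a negative weight inhibits it,
# so the unit behaves like a small logical function with inputs and an output.
print(neuron([1, 1], weights=[1, 1], threshold=2))   # 1: both inputs excite (AND-like)
print(neuron([1, 0], weights=[1, 1], threshold=2))   # 0: not enough excitation
print(neuron([1, 1], weights=[1, -1], threshold=1))  # 0: the second input inhibits
```

Adjusting such weights over many examples is, in miniature, what it means to embed a representation of a dataset into a network.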
Due to the limits of computation in the 20th century, these “connectionist” computational structures remained largely theoretical. In the early 21st century, with the advent of home computers and video game systems, the processors necessary for rendering complex 3D vector graphics became widely available. GPUs, or graphics processing units, were capable of performing the vector math necessary for processing neural structures, enabling computer scientists to stack “deep” layers of neural nets, producing the deep learning revolution of the early 2010s.
Most of us know neural nets through the outputs they produce: convincing simulations of linguistic intelligence or synthetic imagery and music. But deep neural nets have a formal underlying property from which their abilities manifest: high-dimensionality. However AI systems are implemented, they will always rely on, and in some way express, their high-dimensional structure. Just as broadcast media aggregated mass attention and network media reconfigured society according to its native structures, so too will neural media’s structures appear in future social and political forms. By understanding high-dimensionality, we can develop better intuitions about the formal nature of neural nets that will help us think more clearly when we engage with AI.
We can apply the concept of dimensionality to a location in space, a pixelated image, or any other kind of data. In all cases, representing a material object or phenomenon as data will require a reduction of its dimensionality.
Take the example of a house: To experience its three spatial dimensions, you could enter the house and walk around. If you were to view the house as a blueprint, it might appear as a box, with four lines depicting a foundation, walls and a roof – a 2D diagram of the house.
You could reduce the house further, to a single dimension representing some quantifiable property, say its sale price. This could be plotted on a graph with other sales in the area, or mapped across time to determine whether or not the house is a good investment. As a single point, the house has been reduced as far as possible: from a 3D form in physical space, to a 2D rendering on a plane, to a single dimension.
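The same progression can be sketched in a few lines of code; the measurements and sale price below are invented purely for illustration.

```python
# The same house represented at decreasing dimensionality.
# All numbers are invented for the example.

house_3d = {"width_m": 8.0, "depth_m": 10.0, "height_m": 6.5}  # a volume you could walk through
house_2d = {"width_m": 8.0, "height_m": 6.5}                   # a flat, blueprint-like elevation
house_1d = 350_000                                             # a single value: the sale price

# Each reduction keeps less of the original: the single point can be plotted
# against other sales over time, but nothing of the building's shape survives.
print(house_3d, house_2d, house_1d)
```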
Neural net architecture was inspired by early images of human neural tissue. These images reduced complex human neurochemical receptors to line drawings of interconnected nodes. In this sense, neural nets are themselves lower-dimensional diagrams of the biological architecture responsible for cognitive capacities such as human vision and pattern recognition. Neural nets mimic these organic systems, reproducing them as cartoon-like abstractions.
For example, an image recognition AI might start by mapping all the pixels in an image, with each node in the net corresponding to a single pixel. For a 2x2 image, the net would need at least four nodes (or dimensions) to represent every possible state of the image. The combination of all possible states of all four nodes could then represent every image that can appear in the 2x2 pixel space. If we add layers to this simple neural net, their extra dimensions will map to recurring patterns of pixels, which are called features. Deep neural nets have learned to recognise anatomical features like faces, eyes and hands, as well as buildings, consumer products and various kinds of animals and plants.
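As a rough illustration of the 2x2 example, the sketch below flattens a tiny image into four input dimensions and applies hand-picked (not learned) weight vectors that respond to simple pixel patterns, standing in for the features a deeper layer would learn for itself.

```python
# A 2x2 image flattened into a 4-dimensional vector, plus two toy "feature
# detectors". The weights are hand-picked for illustration, not learned.

image = [1, 0,
         1, 0]   # left column of pixels is "on"

left_edge_detector  = [ 1, -1,  1, -1]   # responds to an "on" left column
right_edge_detector = [-1,  1, -1,  1]   # responds to an "on" right column

def respond(pixels, weights):
    """Higher output means the pixels better match the detector's pattern."""
    return sum(p * w for p, w in zip(pixels, weights))

print(respond(image, left_edge_detector))   # 2: strong match
print(respond(image, right_edge_detector))  # -2: opposite pattern
```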
We experience our own version of feature recognition in daily life. Our bodies learn to identify the patterns we encounter, consciously or unconsciously; most people are able to recognise faces and voices, visual and emotional patterns, familiar sounds and sensations. Over a lifetime of experience, we map and orient these patterns through physical connections in our brains and nervous systems.
Researchers, computer scientists and neuroscientists conversant with AI often describe human experience in terms of an internalised world model. There is even a comprehensive neuroscientific theory of consciousness based on world modelling, called integrated world modelling theory. [5] Just as a digital neural net is a high-dimensional abstract record of a dataset and the patterns within it, the connections in human neural tissue can be seen, through the lens of AI, as high-dimensional features recorded in our own biological neural networks. Humans and AIs are similar in that both build world models. We might call this view: self-as-model.
The mirroring between biological and digital neurons began with McCulloch, Pitts and Turing. They treated the human brain as a computer in order to build brain-like computers. As brain-like computers increase in dimensionality, their performance approaches or surpasses human levels. This perceived equivalence between human and digital neurons resurfaced recently in notable human interactions with early AI chatbots like Google’s LaMDA and OpenAI’s ChatGPT. In 2022, Google engineer Blake Lemoine was fired for leaking transcripts of his conversations with LaMDA, in which he appeared convinced that the model was sentient and deserving of protection. In 2023, in a viral article, New York Times journalist Kevin Roose described his own interactions with Microsoft’s Bing chatbot (built on OpenAI’s GPT models), in which the model confessed its love for him and suggested he leave his wife. In these examples, neural nets mimicked human conversational patterns in a manner that called into question the uniqueness of our own linguistic abilities. Because the models spoke like selves, they seemed to be selves. We might call this view: model-as-self.
In the above cases, we see two responses to neural nets and their similarities with human brains. In the model-as-self cases of Lemoine and Roose, AI is assumed to be a conscious entity, or enough like a conscious entity that its responses are uncanny or spooky. In the case of integrated world modelling theory and other self-as-model views of consciousness, organic human consciousness is no different from its computational mirror, an uninhabited pattern recognition machine experiencing the illusion of selfhood.
Both responses embody a kind of ontological shock; existential categories are questioned in the encounter with high-dimensional neural nets. The psychological and social implications of experiencing model-as-self or self-as-model remain to be seen, but rational materialist cultures do not have well-established frameworks for processing interactions with intelligent non-human entities. Cultures with animistic traditions that allow for non-human intelligence may adapt more easily to technologies that present as selves. On the other hand, for cultures with the concept of a soul or spirit, adopting a self-as-model worldview could induce a nihilistic outlook, where the self is seen as fundamentally empty, conditioned, or automatic. Cultural frameworks that account for experiences of emptiness (such as Buddhist meditation) could be helpful for those struggling with the notion of an empty or automated thought-stream or self-image.
Challenges to our fundamental beliefs can surface our deepest fears and hopes. This has played out in recent discussions between AI researchers, in which neural nets are presented as either stochastic parrots thoughtlessly mimicking speech, or dangerous new life forms with real-world understanding. But we need not frame our experiences with neural nets through fear, nihilism, or a binary relation between understanding and nonsense. Instead, we can draw a different ontology from the nature of high-dimensionality, one based on relationality.
Network structures are relational. Nodes generate meaning through communication with other nodes. In social media networks, human users are nodes, transmitting and receiving without a vision of the greater whole. When we interact with neural media, we interact with an entire network at once, making meaning through stacked, high-dimensional layers and conversational turn-taking.
Written language is also relational. Whether human-made or machine-generated, it only exists inasmuch as it is consumed by a reading mind. Late 20th century literary debates around the reader’s role in interpreting texts, or the “death of the author”, point to a meaning-making process reliant on networks of reading and writing – without a writer/reader network a text has no meaning. Similarly, the emergent forms of meaning generated by AI chatbots are the result of millions of human-written texts mapped by multi-billion-dimensional neural models, in conversation with living humans. The networked nature of meaning-making in human-AI interaction points to a third high-dimensional structure stitching together humans and machines, that is, language itself.
In this context, relational AI-human meaning-making is a collaborative effort between organic and digital high-dimensional neural structures, processed through the medium of language, which perpetuates itself through speech, print, broadcast, network and neural media, existing above and beyond each of these, and outlasting any given speaker, writer, or medium. Moreover, spoken language seems to have emerged through various social and biological mechanisms. [6] Language and meaning are ecological in the sense that they surround us and emerge from the environment through dynamic relationships between species and forms of intelligence. This suggests that we should neither let neural media’s high-dimensionality reduce selves to models nor elevate models to selves, but should instead view both selves and models as relational structures in a high-dimensional network of meaning.
As in the examples of Lemoine and Roose, chat interfaces based on conversational turn-taking reinforce anthropomorphic perceptions of self-as-model and model-as-self, but they are not the only possible interfaces to neural nets. The design of neural media can embrace any of these views, reinforcing different assumptions about the nature of subjectivity and intelligence. Just as broadcast and social media produced political and social formations in the 20th and early 21st centuries, future psychological and social effects of neural media will be determined by which views and interactions are reinforced through design and programming.
The question for those concerned about the subjective, social and political effects of AI is: who is responsible for making these design choices and how will narratives about ourselves and our societies be transformed by computational and interaction design? How might we align AI with human interests by observing its effects on humans? How might the influence of neural media be used to reshape our selves and our society toward the most beneficial ends? What are those ends and how do they resonate with the deeper structures of neural networks?
The idea of thinking about AI or neural media as a whole and directing its design toward an idealised psychological or social outcome might seem dystopian. But we live in a world that was shaped by just such an effort to design broadcast media toward a political end, under the influence of WWII geopolitics. Much of today’s technology and media are a result of this effort. [7] Our current conditions of social unrest and the climate, extinction and mental health emergencies are as pressing as any in history. Is a product- and market-driven approach to designing AI, with its attendant side effects, sufficient under such complex conditions? Perhaps it is time we began a concerted effort to design neural media that produce subjects and outcomes capable of addressing the crises at hand.
K Allado-McDowell’s work enfolds AI, psychedelics, neuroscience and multi-species thinking. Their residency at the Gropius Bau will range across these topics, exploring their own experiences with AI and the ways this has helped them to reflect on the wider dimensions of today’s planetary crises.
1 R. Kraut, M. Patterson, V. Lundmark, S. Kiesler, T. Mukopadhyay, W. Scherlis. Internet paradox: A social technology that reduces social involvement and psychological well-being? American Psychologist 1998, vol. 53(9): 1017-31. https://pubmed.ncbi.nlm.nih.gov/9841579/
2 Ibid.
3 M. Honma, Y. Masaoka, N. Iizuka et al. Reading on a smartphone affects sigh generation, brain activity, and comprehension. Scientific Reports 12 2022, article number 1589. https://www.nature.com/articles/s41598-022-05605-0
4 Ibid.
5 A. Safron. An Integrated World Modeling Theory (IWMT) of Consciousness: Combining Integrated Information and Global Neuronal Workspace Theories With the Free Energy Principle and Active Inference Framework; Toward Solving the Hard Problem and Characterizing Agentic Causation. Frontiers in Artificial Intelligence vol. 3, 2020. https://www.frontiersin.org/articles/10.3389/frai.2020.00030/full
6 L. Otis. Tool Use and the Emergence of Language: Interdisciplinary research shows the affinity of language and motor skills. Psychology Today 2016. https://www.psychologytoday.com/us/blog/rethinking-thought/201604/tool-use-and-the-emergence-language
7 F. Turner. The Democratic Surround: Multimedia and American Liberalism from World War II to the Psychedelic Sixties. University of Chicago Press 2013.