Miriam Levi

On AiArt

You are alone in a dimly lit room. A screen flickers in front of you. Then, two images appear. Relying solely on your eyes and your experiences, you must decide which one is generated by a machine and which one is made by a human. If you cannot tell one from the other, your failure proves that machines can create art.



I see you struggling out there, scratching your head, focusing on traces of reality in the blue of the sea. Just stop, there is no point in trying. The riddle was worded wrong all along, and it was not me who made it. We are in the year 1950, when an English mathematician imagines a similar scenario. There is a human interrogator in a room, facing two computer terminals. He interacts with two entities on the outside through written messages alone. After a while, he is asked to guess which one of them is human. If he fails, that implies machines can think. That man, who inadvertently set the agenda for artificial intelligence for decades to come, was Alan Turing, and by asking us to distinguish a man from a machine, he ended up defining what it is to be human as a floating box of circuits. He was so fascinated by brains that he forgot they were contained in bodies, and by doing so, he set the stage for the erasure of embodiment that underlies our current definitions of intelligence, and therefore of humans. By defining intelligence as the manipulation of informational patterns, Turing contributed to the conception of information as a bodiless fluid, and therefore of human identity as a mere informational pattern. An idea that inspired some radical propositions, like Hans Moravec's belief that consciousness could be downloaded into a computer without the loss of any meaning or form.


In the years that followed, many logicians and philosophers proposed counter-arguments to Turing's test, the most famous being John Searle's so-called Chinese Room. In his thought experiment, he argued that imitating intelligence does not imply possessing it, and by distinguishing mimicking from being, Searle delineated two different forms of artificial intelligence, which he called 'strong' and 'weak'. The first implies that the machine is the equal of a human mind; the latter, that it is merely a tool for studying one. Following Searle's categorisation, most of the progress we have achieved since the 1950s in artificial intelligence belongs to the second category, now labelled 'narrow AI', alluding to the computer's ability to perform only one task at a time. I will argue that this same counter-claim is a helpful tool for solving my initial riddle, and with it the modern controversies around so-called AiArt. But let's go back a bit. What is AI after all?



Although artificial intelligence now underlies most of our daily processes, attempting a comprehensive definition is a hopeless task. If we have to try, we can describe it as an expanding scientific field aimed at the development of adaptive, autonomous machines that emulate forms of human intelligence. AI is what powers self-driving cars, or your favourite social media platform; even your washing machine at this point. It is the brick at the base of reality, the new hidden bit behind the veil of appearances, the one that no one fully understands. Within the past twenty years, it has made breakthroughs in various technologies, such as computer vision, speech recognition, and natural language processing. With the advance of AI applied to most industries, many have begun creatively exploring its possibilities by engaging with the latest techniques, such as deep learning and neural networks. In the last few years, several AI image generators, like DALL-E or Stable Diffusion, have been released; and with the advent of these open platforms, code-powered images can be generated at the click of a button, or the prompt of a sentence, by literally anyone, at any time. Here is where the controversies begin, and many ask whether we are at a point of rupture, or just witnessing the development of a brand new artistic tool.


Some argue that every time a new technology is invented, it is integrated into the art world in progressive waves; take photography as an example. First, the medium threatens old media and the artist; then it becomes the subject of the artworks; and finally, it is either integrated or discarded. When photography was born, many thought it was the end of painting; some screamed in despair and labelled it the end of art itself. And yet, we came to the conclusion that it was just another tool, like pigments or the printing press. Many see even more parallels with the early days of photography in AI's vain attempt at mimicry. They argue there is a difference in scope between human and machine creativity. Computers can create images, even in ways we do not yet understand, but there will never be any soul in them unless there is a human-machine symbiosis. They label this a new form of 'statistical creativity', limited to data sets and restricted by the lack of human intent. Not a step-change in making art, but just a new technology that will bring us closer to human-machine complementarity. They say the only real impact of AI applied to the arts is making us question what creativity is after all. I argue that is the case. And yet, others question the impact of artificial intelligence on the art world and believe it is still unclear. Although there are similarities between current ML art and the computer art of the '60s and '70s, they see something radically new in it: not just a new tool sparking creativity, or the latest form of a century-long trend in automation, but the start of a fresh new form of art, AiArt, far superior to all that came before. Let us now take a short journey through the development of this revolutionary art form.


It all began in the '60s and '70s, when computer art first bloomed. Its pioneers, artists like Vera Molnar and Frieder Nake, set out to harness the potential of early computers against the very notion of control, producing unexpected results and making glitches and misunderstandings a new form of art. Then, at the turn of the 21st century, AiArt truly commenced. The date is May 11, 1997. A big blue box called Deep Blue beats the world champion, Garry Kasparov, in a game of chess, the true symbol of human intelligence. The Computer Age has begun. New human-computer technologies bloom, and some sense a shift from the previous trends in automation. From then on, AiArt takes centre stage. In 2016, Google's DeepMind develops a new kind of program, AlphaGo, which defeats the Go champion Lee Sedol, showcasing the real potential of artificial intelligence in a real-life scenario. A year earlier, following the new trend of 'deep learning', Google had released a new neural network, 'Deep Dream', able to hallucinate imagery out of existing pictures, while Generative Adversarial Networks (GANs) learned to mimic traditional paintings. Then, in 2017, the first Creative Adversarial Network (CAN) is produced, a program able not only to imitate artworks, but to create them. And here we are now, in 2022, in the age of creative machines and AI-generated images.


Now, you might be wondering: how is this supposed to convince me that a computer is as much of an artist as I am? Well, it is not. Yet, some may argue that not only is a robot now a creative subject, but it is also a far better one than you are. Not only can it replace your labour, but it can do so while breaking through the constraints of time and place. It may not understand what art is, and yet, here it goes and makes it. They say we are living in a whole new age of art, where creativity is in the hands of everyone, not just artists. The scientist living next door is as much of a Picasso as you are, and your computer can make a better picture than you could, in half the time. They proclaim it a new age of creativity, where all will be transformed under the influence of technology and science, and new art will emerge to bridge the gaps between the fields and break the last remaining boundaries. Maybe we are at the precipice of something new, or at the end of something old. Maybe AI is a revolutionary threat and our relationship with art will never be the same. I doubt it.


When Turing set out to define intelligence as the manipulation of informational patterns half a century ago, he was thinking of the human brain alone. He discarded our skin and labelled our flesh a cage. I argue there is art in the movement of our cages of meat. A computer will not be able to replicate it, or even understand it. In his 2017 book To Be a Machine, the Irish writer Mark O'Connell describes the meeting of a four-year-old girl with a four-foot humanoid called Pepper. The robot is a customer service machine, designed to read emotions by receiving data through touch sensors. Pepper is asked to hug the little girl. Pepper does not understand. "You would be surprised how difficult it is to solve the problem of hugging." What a machine can do is what we have scripted of ourselves into it. Yet, most of what we do is out of our control. There is art in the nonsense of our instincts, in the meaninglessness of our actions. I argue you could hardly explain to a machine that art can be putting a urinal in a museum. I would like to see you try it.


Reference List


Chen, Weiwen, Mohammad Shidujaman, and Xuelin Tang. "AiArt: Towards Artificial Intelligence Art." MMEDIA 2020: The Twelfth International Conference on Advances in Multimedia, Lisbon, Portugal. ThinkMind, 2020.


Field, Sarah-Jane. "The end of something… but I'm not sure if it's art just yet." WordPress, 2022. Retrieved 23 November 2022 <https://sarahjanefieldblog.wordpress.com/2022/10/03/the-end-of-something-but-im-not-sure-its-art-just-yet/>


Harrison, Anya. "All Systems Go." Flash Art, 2017. Retrieved 25 November 2022 <https://flash---art.com/article/lawrence-lek/>


Hayles, N. Katherine. How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics. The University of Chicago Press, 1999.
O'Connell, Mark. To Be a Machine: Adventures Among Cyborgs, Utopians, Hackers, and the Futurists Solving the Modest Problem of Death. Anchor Books, 2018.


Ploin, A., R. Eynon, I. Hjorth, and M. A. Osborne. AI and the Arts: How Machine Learning is Changing Artistic Work. Report from the Creative Algorithmic Intelligence Research Project. Oxford Internet Institute, University of Oxford, UK, 2022.


On self-portraiture

I grew up editing my body, carefully removing myself from it, from this damn cage of skin. I grew up split. My mind in other realities, narrating a life I was merely observing; and my body, just there, stuck in a looped obsession with its appearance. I am the body that I own, the one that I carry, the same one that owns my mind. How wide is the distance between the me that I created for the world and the me that I am? How many pins will I keep pinning to this imaginary jacket, how many new layers of plastic to cover this itchy skin?



Sometimes, if I look at the same point for too long, I forget I can control this skin that envelops me; it may be a chair, or a house, or a leaf falling from one of the house plants I can call home. Just for a few seconds I vanish, lost in a universe I do not yet understand. I tell myself that maybe that is where machines go when we turn them off. A limbo of our own making, a place that not even Dante would have been able to reach, considering his poor programming experience. Sometimes I read entire chapters before remembering that I am meant to understand them, instead of just reciting letters in my head. I am two: one here, sitting on this floor next to a fake fire, desperately needing to pee; the other in this note, vomiting emotions and hoping they will land on the page as sentences.



We were constructing digital identities before the internet was born, endlessly creating images of ourselves for others to comprehend. We are split because it is in our nature: we exist alone, and we exist along with others; like sophisticated peacocks, we paint our feathers according to seasonal trends. We always did. The only difference is that now we can gloss our virtual plumage, inventing versions of ourselves far from the skin we were assigned. There have always been two of us in each one of us: one stuck in a body, the other dreaming of a freedom we exchanged for status. We always did: giving each other labels, slapping stickers on our hats, claiming titles and dreams to reach, to finally be worthy.

I believe in self-portraiture as a tool to go back into our skins, to see ourselves from a different angle. Maybe, to just move a bit. Trust me, it helps. 


On a Glitch

We are stuck in a glitch. Automated performers of established ideas of progress, speed, and productivity. Isolated in our flats lit by blue lights, staring at our blue screens, starving for attention and connection. Suspended in futuristic ideas of a negated future, between cyborgs and the apocalypse. Transhumanistic projects for the few, and floods of brown water for the many. We are others to ourselves, to nature, to all others who are not us. Protagonists of narcissistic movies nobody is ever going to write. Self-performing to exist in the eyes of others. On the days I do leave the house, I find myself comparing the outside world to the one I have been seeing on my screens.


My phone is an extension of my hand. My headphones, appendices like ears. My eyes scan like the lens of a camera. I am what I watch, what I buy, what I eat, what I love. Therefore, these days I am no longer just human, whatever that might mean. I am not just a body anymore. I am a cyborg, a virtual avatar, a username, a brand. But above all, I am transcending the self without reaching any promised land. Addicted to what has been designed to attract me. Craving what they made me crave. Always trying to fill that money-shaped hole in my chest with something. Collecting things. Stuffing myself, starving myself. Following that new diet, that new trend, that new lifestyle, that new ideal.


As a kid, I used to think the world itself was black and white before the 1950s. I could not comprehend how the grainy reality I saw on the screen could be the mirror of a world in colours like the one I could see with my own two eyes. No colours before Kodachrome. Growing up, the screen and the reality outside it collided ever more, year after year, blending into each other. From walks in the mountains to virtual hikes on imaginary lands. From touching to swiping on the toilet. This past year, everything has been amplified. The pandemic has changed most things, foremost the way we communicate.


All is just a wave

These days, I find myself stuck on this idea of progress, of always reaching for something better, shinier, which is somehow going to fix this broken world. I have become accustomed to instantaneous fulfilments of pleasure. If there is something to crave, if I am lucky enough, I can get it immediately. If I own enough, if I produce enough, then I can consume enough to keep the ball rolling. Another job in a bank, in a coffee shop, in a hotel. Where you don't see them. They don't see you. And we are all just walking, zombie-like, on an imaginary path to nowhere. Never having time to think, to stop, to understand what the heck we are trying to do in this damn confusing world.
Yet, all this progress is what is making the glaciers melt, the forests burn, and the cities flood. I have nothing to say against the privileges of the modern lifestyle. Not all progress is bad progress; otherwise, even this moment would not be possible, people we love would not still be here with us, and we would all be drastically different.


Still, there are downsides to the dream of enhancing, fixing, perfecting. It has created the paradox we now live in. We are at a halt. We are at the brink of an irreversible glitch. We can keep going, like all is good, someone else is going to fix it. Or stop. Change how we live, how we eat, what we call success, or happiness. We do not really need that much. We need to eat, to love, to be part of a community, and to feel like we matter. Existence is mattering. It is being there, alive enough to witness your own life from your own perspective, in your own body. All else is fluff. Shiny, doomed fluff that is allowing me to sit here, writing while I listen to a podcast I cannot even hear, in a flat filled with stuff I now call my personality.
