
Invisible Art

By George Wilson



Deep below the ground near Singapore’s Changi Airport, through corridors, vaults and seven-tonne doors, lies a hoard of treasures. Damien Hirsts, Francis Bacons, Cézannes, Hockneys – you name it: they’re probably all down there. But you won’t ever know, because Le Freeport is one of the largest freeport art storage facilities in the world: a tax-free zone, soon to be expanded to 538,000 square feet. As you can imagine, documentation surrounding the contents of this bunker-cum-gallery is not readily available. You might call a place like this a ‘secret museum’. Its contents are certainly impressive enough to rival the MoMA in New York. But there is one crucial difference: these artworks can move from floor to floor of the same storage facility for years, changing hands without ever being unboxed or even seen by anyone. They become invisible art.


The company that runs Le Freeport previously established a storage facility in Geneva that is rumoured to house about a thousand Picassos, all obscured by discreet documentation. The space was not officially considered part of Switzerland until a few years ago. Unlike museums, which are often tethered to a specific nation-state, then, these freeports are stateless. The artworks within them are not part of a museum narrative of national culture or prestige, because the nation they rest in has relinquished control of them. They float within a zone of legal exception. This uncertainty threatens to destabilise the worth of these pictures: how does one value an artwork if it cannot be seen, if it has no location? What is the point?


Le Freeport’s website is certainly not advertising to everyone. It describes ‘breathtaking aesthetics’ and private bespoke showrooms which offer ‘a confidential and luxurious environment ideal for the display of any valuables, be it for transaction or enjoyment purposes’: in other words, far away from the public eye. This is art for the few – to be enjoyed only by the freeport’s art handlers, the artwork’s owner and perhaps their limo driver.


This invisible art phenomenon finds a curious analogue in the technology world, where image recognition software has become the norm. Neural networks trained to recognise formulaic patterns (such as a face) in a range of images are now used at border controls, on social media, in drone warfare and for unlocking your iPhone. Putting this software to work on the canon of Western art history, however, exposes some of its shortcomings. Without specific training, a neural network cannot ‘recognise’ elements in a picture in the way that a human art historian might recognise a particular painter’s hand. In the most formulaic way, the results from Google’s recognition software suggest some knowledge of the history of art. If you search Google with a cropped PNG file of the artist’s chin in a Rembrandt ‘Self Portrait’, the software correctly identifies it as a Rembrandt, a ‘portrait’, and offers a plethora of other oil paintings of young men in brown and red hues. When the image is cropped again, however, reduced to a smudge of red and brown, the software gets confused and shows results for pictures of floors. The software encounters this final cropped image very differently from how a human might understand a cropped section of the original painting in real life. Texture, quality of light and the physical presence and weight of the canvas are lost in Google’s set of data. Without this tactility, a neural network can easily mistake a flat painted plane for a three-dimensional view of a floor and a wall.
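For the technically curious, the experiment is easy to approximate at home. The sketch below is a rough approximation only: it uses an off-the-shelf classifier (torchvision’s ResNet-50) rather than Google’s unpublished systems, and the filename is a placeholder. It shows how a model’s best guess can collapse as an image is cropped down to a patch of texture.

```python
# A minimal sketch of the cropping experiment described above, using an
# off-the-shelf classifier (ResNet-50 via torchvision) rather than
# Google's own systems, whose internals are not public.
import torch
from PIL import Image
from torchvision import models
from torchvision.models import ResNet50_Weights

weights = ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()
labels = weights.meta["categories"]

def top_guess(img: Image.Image) -> str:
    """Return the classifier's best label and its confidence."""
    batch = preprocess(img).unsqueeze(0)
    with torch.no_grad():
        probs = model(batch).softmax(dim=1)
    conf, idx = probs.max(dim=1)
    return f"{labels[idx.item()]} ({conf.item():.0%})"

# 'self_portrait.png' is a hypothetical filename for illustration.
img = Image.open("self_portrait.png").convert("RGB")
print("full image:", top_guess(img))

# Crop down to a small patch: as context disappears, the predicted
# label drifts towards whatever texture the patch resembles.
w, h = img.size
patch = img.crop((w // 3, h // 2, w // 2, h - h // 4))
print("cropped patch:", top_guess(patch))
```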


Google’s neural networks can scan thousands of images to appear as if they know what a portrait is. It would be physically impossible for a human to achieve this speed and breadth of viewing. But it goes without saying that a neural network doesn’t really ‘know’ anything in the way a human does, nor can it appreciate a picture or its emotional significance. (It’s OK, art historians: the computers aren’t taking your jobs just yet!) This formulaic pattern-filtering software is flawed, defined in its entirety by a vast number of images that we, as humans, can never see. It will never learn more than its formula allows – never more than the images it has been shown.


Trained neural networks have proved an important tool for Forensic Architecture, the Turner Prize-nominated research group who investigate cases of state violence and human rights violations around the world. In their video Triple-Chaser, FA track government forces’ use of tear gas canisters, made by the weapons manufacturer Safariland, against peaceful protestors and refugees. The video explains the difficulty of training software to recognise images of tear gas canisters online when they have so few pictures to show the software in the first place. With fewer than a hundred images of the canisters, they instead have to render simulated images of the canister in computer-generated environments (a ‘synthetic training data set’) to teach the classifier what to recognise. This practice of showing simulated images to networks can lead to dangerous oversights.
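Forensic Architecture’s actual pipeline rendered photorealistic 3D models of the canister; the sketch below illustrates the underlying idea in a deliberately simplified 2D form, compositing a single cut-out render onto varied backgrounds to multiply a handful of source images into thousands of synthetic training images. All filenames are hypothetical.

```python
# A simplified sketch of synthetic training data generation. This 2D
# analogue composites one cut-out render onto varied backgrounds at
# random scales and rotations; all filenames are hypothetical.
import random
from pathlib import Path
from PIL import Image

canister = Image.open("canister_cutout.png")      # RGBA cut-out render
backgrounds = list(Path("backgrounds").glob("*.jpg"))
out_dir = Path("synthetic_set")
out_dir.mkdir(exist_ok=True)

for i in range(1000):  # multiply a handful of sources into thousands
    bg = Image.open(random.choice(backgrounds)).convert("RGB")
    obj = canister.rotate(random.uniform(0, 360), expand=True)
    scale = random.uniform(0.1, 0.4)
    obj = obj.resize((int(obj.width * scale), int(obj.height * scale)))
    x = random.randint(0, max(1, bg.width - obj.width))
    y = random.randint(0, max(1, bg.height - obj.height))
    bg.paste(obj, (x, y), obj)  # alpha channel doubles as paste mask
    bg.save(out_dir / f"synthetic_{i:04d}.jpg")
    # In practice each image would be saved with its bounding box
    # (x, y, obj.width, obj.height) so the classifier learns location too.
```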


When FA simulated images of canisters from different angles out of their collected data, they were using the same pattern-grouping formula as a search engine’s neural network, which clusters similar images. Google engineer Alexander Mordvintsev has shown how this process can be completely inverted in his computer vision program, DeepDream, which simulates images in a different way. Rather than identifying and classifying an image, DeepDream reverses the network used by Google’s search engines: the software adjusts an input image to excite a chosen output neuron, so that the image comes to look more like a face (or an animal, or a particular shape). The hallucinatory results of this process demonstrate the biased nature of the network: it has been programmed to see something similar to the many images it has seen before, and so it will see it. The images involved in this process will always remain invisible to us.
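The mechanism behind DeepDream is surprisingly compact: rather than adjusting a network’s weights to fit an image, it adjusts the image itself, by gradient ascent, to excite a chosen neuron. A minimal sketch follows. Note that DeepDream originally used Google’s Inception network; the VGG16 model and the layer and channel indices here are arbitrary stand-ins for illustration.

```python
# A minimal sketch of the gradient-ascent idea behind DeepDream:
# adjust the image, not the network, until a chosen neuron fires
# strongly. VGG16 and the layer/channel indices are arbitrary choices.
import torch
from torchvision import models

features = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()
for p in features.parameters():
    p.requires_grad_(False)  # only the image is optimised

LAYER, CHANNEL = 20, 33  # which neuron to amplify (arbitrary choice)

img = torch.rand(1, 3, 224, 224, requires_grad=True)  # start from noise
optimizer = torch.optim.Adam([img], lr=0.05)

for step in range(200):
    optimizer.zero_grad()
    x = img
    for i, layer in enumerate(features):
        x = layer(x)
        if i == LAYER:
            break
    # Maximise the mean activation of one channel: gradient *ascent*,
    # expressed as descent on the negated activation.
    loss = -x[0, CHANNEL].mean()
    loss.backward()
    optimizer.step()
    img.data.clamp_(0, 1)  # keep pixel values in a displayable range
```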


This has implications for our quotidian lives. A world of invisible images teaches these networks what to see, and subsequently what data to show us online. As a result, race, religion and gender biases are reflected back to us in the images Google shows. Search ‘doctor’, for instance, and Google will give you thousands of images of white men in white coats with stethoscopes. This software, used every day by millions of people, is reinforcing dangerous prejudices from amassed data which we cannot see in its entirety. This may seem a world away from freeport trade facilities, but both represent increasingly privatised and hidden examples of image hoards which, although invisible, affect the public realm.


All of these invisible images have become a new form of currency, as Hito Steyerl points out in her latest book, Duty Free Art: Art in the Age of Planetary Civil War (2017). This short collection of essays on art and modern technology tracks the development of neural networks and invisible images as they become intrinsically linked to political exploitation. A painting, we learn, can be used as an easy means of transferring wealth, changing hands within the safety of a tax-free haven (The Economist once described freeports as ‘permanent homes for accumulated wealth’). So too the collected data of a biometric passport scanner is valuable currency for corporations, international intelligence and security organisations, with its capacity to inform stock market values, map out territories in warzones, decide which refugees are terrorists and send targeted adverts. And yet, as Steyerl warns, these decisions should not be left to software. Although engineers at Google state that these neural networks can only simulate a projected pattern of recognition rather than truly ‘knowing’ what a portrait or a face is, such data is taken very seriously in other hands. (Take China’s social credit system, which collects patterns such as shopping habits to rank and classify citizens within a social hierarchy.)


Zach Blas is an artist whose project, ‘Facial Weaponization Suite’ (2011-14), manipulates pattern recognition technology in order to escape it. Blas created a set of three-dimensional globulous masks, generated from the biometric data of thousands of homosexual men’s faces. Masks were also generated from the collected data of ethnic minority faces. According to the artist’s website, corporations in the security sector use biometric technology ‘with the hope of manufacturing the perfect automated identification tool that can successfully read a core identity off the body’. This software is used by states ‘to profile various sectors of the public into potential risk categories, like activists’. Crucially, it relies ‘heavily on stable and normative conceptions of identity’, creating the potential to ‘discriminate against race, class, gender, sex and disability’. The masks replace the wearer’s face entirely with a rippled surface that is unrecognisable to neural networks; they are ‘anti recognition’, ‘biometric devices’ for protesting. Blas describes this ‘autonomous invisibility’ as liberating, using the prejudice of facial recognition software to establish the ‘power of the collective face’.


Attacking the assumed reliability of the data selected and filtered by neural networks has never been more crucial. Organisations like the USA’s National Security Agency have access to vast amounts of data, but require advanced sifting and decoding procedures to handle the waves of unintelligible encrypted information. The Human Rights Data Analysis Group estimates that around 99,000 Pakistanis may have been wrongly classified as terrorists by SKYNET, an NSA program that sifted through data from mobile phone customers. Although we do not know how this collected data was eventually used, the consequences of such a number are potentially devastating, considering that an estimated 2,500 to 4,000 people have been killed since 2004 by the USA in its drone-war campaign against suspected militants. These networks are mechanically acting on the images of millions – images we cannot see because they are both private and encrypted – to create and destroy our reality.
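The 99,000 figure is, at bottom, base-rate arithmetic. The sketch below assumes the numbers reported in coverage of the leaked SKYNET slides – roughly 55 million mobile records scanned and a false-positive rate of around 0.18 per cent – to show how even a tiny error rate, applied to an overwhelmingly innocent population, swamps any genuine targets.

```python
# Base-rate arithmetic behind the HRDAG critique. The ~55 million
# records and ~0.18% false-positive rate are figures reported in
# coverage of the leaked SKYNET slides, assumed here for illustration.
population = 55_000_000        # mobile phone records scanned
false_positive_rate = 0.0018   # share of innocent people misflagged

false_positives = population * false_positive_rate
print(f"{false_positives:,.0f} people wrongly flagged")  # ~99,000
```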


What are the implications for the history of art, when the most powerful images in our technology-dependent society are invisible to us? Without the aid of decryption software and a knowledge of coding, the images shaping our borders, our security and our identities to potentially exploitative ends are unintelligible to humans. Steyerl playfully illustrates this in a chapter where the images on the page have been replaced by code – paragraphs of seemingly meaningless numbers and letters are captioned ‘Image captured by my camera as its viewfinder was being used by onlookers to locate Daesh positions in Kobane, Syria, October 8, 2014’. Such an elusively coded image can be hard to value, unlike a freeport Picasso painting, which will always have market value no matter how many times it changes hands or moves around the freeport vaults – even if it is never seen. Nonetheless, the only tangible evidence of the Picasso painting’s existence lies in the ‘real’ public realm of insurance paperwork, art catalogue footnotes and photographic reproductions.


Hito Steyerl concludes that ‘not seeing anything intelligible is the new normal’. In the final lines of Duty Free Art, she wonders how humans’ physical bodies will evolve to deal with the intrusion of unintelligible technology into our daily lives, articulated in the new language of code. Steyerl proposes this as an exciting, even utopian vision for the future of our species – growing strange, new biological adaptations to communicate in a super-fast collective language. The reality (at least, for now) is something more like groping around in the dark. We don’t know what these images are used for or who is using them. They are privately owned. We don’t even know when our own faces are being collected as data for neural networks, as public spaces in cities become increasingly privatised. As these invisible images become more prevalent, their existence must be brought into public awareness. By understanding that a vast wealth of images is constantly being circulated amongst the computers of corporations, states and private parties, we can begin to search for what is hidden. Unintelligible code and blank spots are the new visual material for art historians.


GEORGE WILSON reads History of Art at St John’s College, but in her spare time she paints male nudes on canvas, ceramics, bits of wood from a skip, match-boxes, lamp shades, T-shirts...


Art by Alex Haveron-Jones
