As we reach the end of 2022, we look back at a very busy year and forward to an even more intense 2023. In this series of posts, we have selected some of our favorite artcasts, artists, artworks, articles, and interviews. They offer an overview of what has happened on Niio over the last months and highlight the work of the artists and galleries with whom we are proud to collaborate. However, there is much more than what fits on this page! We invite you to browse our app and discover our curated art program, as well as our editorial section.
Five artcasts from 2022
Our curated virtual exhibitions are characterized by their flexibility, bringing art in a digital format to any screen: in the homes of collectors and art fans, as well as in the framework of international exhibitions. This year, we have featured commissioned artworks by outstanding artists, participated in the ISEA2022 Barcelona 27th International Symposium on Electronic Art among other events, and introduced photography artcasts with celebrated photographers in collaboration with the Fahey-Klein gallery.
We have chosen five artcasts from almost 60 launched since March this year, featuring the work of more than 80 artists. Click on the titles to explore each selection.
Niio joined the exhibitions of the ISEA2022 Barcelona 27th International Symposium on Electronic Art with a selection of artworks addressing the main themes of the symposium. The screen-based works address the notion of possibles in different ways, from the dynamics of microscopic particulate matter to the global effects of climate change, from new worlds we could inhabit to those that are fading away, and from our individual perception of the world to the realization that even machines can forget. Participating artists: Frederik de Wilde, Diane Drubay, Jeppe Lange, Sabrina Ratté, Antoine Schmitt, and Snow Yunxue Fu.
Artists create with the weight of art history on their shoulders. The canons from Antiquity, the Renaissance, the Baroque and Neoclassical periods, as well as Modernity have shaped the perception of the Fine Arts and the objects that an artist is supposed to create. Artists nowadays have the possibility, through digital technologies, to incorporate, remix, and reshape the art from the past in order to create new artworks that question the need for a static piece of marble or a canvas, and instead present an ongoing process. Participating artists: Quayola, Daniel Canogar, Frederik de Wilde, and Julian Brangold.
A selection of photographs by Steve Schapiro (1934-2022), one of the most prominent figures of documentary photography in the United States, initiated our series of photography artcasts curated by Nicholas Fahey, owner of the Fahey-Klein gallery in Los Angeles. Devoted to photojournalism from a young age, he worked as a freelance photographer for Life and other magazines such as Time, Newsweek, the Saturday Evening Post, and Paris Match. An exceptional witness of the civil rights movement, he captured key moments in American history with a sharp eye and caring attention to the subjects of his portraits.
In this series of works, artist Carla Gannis and her avatar C.A.R.L.A.G.A.N. travel around different parts of the world, in cities and the countryside, scanning and recording their experiences. The artist references historical philosophers, thinkers, and artists, considering developments and similarities between the past and the future, between the pre-digital and the post-digital ages. The artworks derive from a series of 3D LiDAR scans taken with the artist's iPhone, which are then recreated as fragmentary sceneries through a multimedia process that includes post-photography, 3D animation, digital painting, and AI-generated imagery.
The pace at which city dwellers move is faster the bigger the city is. This was already shown by Marc and Helen Bornstein in their often-quoted essay The Pace of Life from 1979, and it has become even clearer nowadays, when our physical movements in the city are paired with relentless digital activity. Yuge Zhou revisits her exploration of urban environments and the flows of commuters in these two commissioned artworks, creating video collages of passersby in different U.S. cities, walking on sidewalks or rushing through subway stations. The collage technique allows her to introduce repetitions and create a sense of rhythm in these observations of daily life. “Interlinked I” and “Interlinked II” are part of the Niio Commissions Vol. 3.
Ask Me Anything is a series of articles in the form of conversations, aiming to clarify certain terms, techniques, and debates related to digital art. Our Senior Curator puts 20 years of expertise in digital art at your service to answer your questions, taking only 5 minutes of your time.
Mark Amerika. Mobile Beach, 2007
Hey, what happened? The screen is broken!
What? Ah, don’t worry, the screen is fine. What you are seeing is glitch art.
This is art? But there’s something wrong with the image, it’s not loading properly. Did you check the cables?
Yes, precisely that is how the image is supposed to look. The glitches are what this type of art is about.
Oh, you can’t be serious… How can this…? I mean, ah… I can’t concentrate with this image jumping around and… and getting all pixelated and broken…
I understand. Let me recommend a simple exercise: take a deep breath… and stop trying to fix the image, just look at the changing patterns on the screen. Don’t think of it as an image of a river, or the portrait of a woman, or whatever it is you are trying to see there. That is just an illusion. The image does not exist, it is just information interpreted by a program and displayed on a screen.
I don’t understand. When I download an image to my computer, it is always an image. I see a thumbnail on the desktop, I click on it, and there it is: an image.
Yes, because it is interpreted as such every step of the way by the operating system. But try this simple trick:
1. Click on the filename. Change the extension from .jpg or .png to .txt
2. Open the file. The operating system will use a text editor.
3. You’ll see strings of weird characters that make no sense. Select some and erase them.
4. Save the file. Change the extension back to .jpg or .png
5. Open the file. The image has changed: it is probably broken or cut off at some point.
This shows you what I explained before. When everything goes well, you are deceived into seeing a sharp, beautiful image, but when the data is corrupted, not properly transmitted, or there is an error in the program interpreting the data, this is what happens.
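If you would rather not edit the bytes by hand, here is a minimal sketch in Python that does the same thing programmatically: it flips a few random bytes in a copy of a JPEG. The file name photo.jpg and the number of corrupted bytes are only illustrative, and skipping the first few hundred bytes helps keep the file header readable.

```python
import random
from pathlib import Path

# Glitch a copy of a JPEG by overwriting a few random bytes in its body.
# "photo.jpg" is a placeholder: use any image you do not mind breaking.
src = Path("photo.jpg")
data = bytearray(src.read_bytes())

for _ in range(20):                                # corrupt 20 random bytes
    position = random.randrange(500, len(data))    # skip the header region
    data[position] = random.randrange(256)

Path("photo_glitched.jpg").write_bytes(data)
print("Saved photo_glitched.jpg - open it to see the glitch")
```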
Ok I get it. But then, why is this art?
Glitch art is mainly about exploiting an error in a computer system, exposing its inner workings. It is hard to offer a specific definition, since there are many types of glitches and ways of interpreting what a “glitch” can be. Artist and researcher Rosa Menkman, who has extensively worked and theorized about glitch art, puts it this way:
“A glitch is the most puzzling, difficult to define and enchanting noise artifact; it reveals itself to perception as accident, chaos or laceration and gives a glimpse into normally obfuscated machine language. Rather than creating the illusion of a transparent, well-working interface to information, the glitch captures the machine revealing itself. Glitch artists make use of the accident to ‘disfigure’ flow, image and information, or they exploit the void – a lack of information that creates space for deciphering or interpreting the process of creating (new kinds of) meaning.” [1]
Menkman argues that glitch art goes beyond the aesthetic or the machinic, revealing flaws that are also present in social, political, and knowledge systems.
How can art be about error and nonsense?
Different art movements have explored the creative potential of errors and played with the absurd. Take for instance the Dadaists, who proclaimed the futility of art and their distrust of the art system. “Everything one looks at is a fake,” said Tristan Tzara in his Dada Manifesto of 1918. The Surrealists also wanted to disrupt the creative process and access less formal and rational ways of creating art by introducing randomness and spontaneity.
Ok, but the Dadaists and Surrealists did not use computers.
No, but they faced structured systems with codes and an internal logic that they wanted to disrupt. Using random words to create a poem or creating one out of unintelligible words, such as “dll rrrrr beeeee bö fümms bö,” as Kurt Schwitters did in his Ursonate (1932), is akin to creating a glitch in language, understood as a formal system, and actually developing a different kind of language. Similarly, Glitch Art is not simply about creating a disruption in a computer system, but exploring the creative and expressive capabilities of integrating glitches into a digital image, video, text, sound, or software, among other mediums.
Hold on, you’re saying that the artists create the glitches?
They sometimes appropriate them, or create the conditions for the glitches to happen. Musicians working with electronic synthesizers already experimented with disrupting the circuits to create noise. Artists like the duo JODI, who are among the pioneers of net art, explored the aesthetic capabilities of the code hidden behind every website, as well as the first modifiable versions of videogames such as Wolfenstein 3D or Quake. Some artists appropriate glitches that happen while using computer software: for instance Ant Scott, who in the early 2000s built a blog collecting screenshots and photos of software crashes and offered an initial definition of glitch art. Others initiate a process aimed at making glitches happen: this is the case of Mark Amerika’s experiments with mobile video in the late 2000s, pushing the capabilities of the mobile phone against the limitations of streaming HD video to generate “datamoshing,” a visible error caused by video compression. In 2011, Rosa Menkman and Johan Larsby created a glitch generator called Monglot as a way of teaching about glitch at a moment in which it had been widely adopted as a purely aesthetic visual style in music videos and graphic design. More recently, glitch has been frequently adopted in the NFT art scene, as can be seen in the work of Domenico Barra, or notably in generative art projects such as Kim Asendorf’s Sabotage. Finally, some artists are inspired by glitch art but move beyond it, as is the case of Yoshi Sodeoka, whose work connects noise music and glitch into an audiovisual language of his own.
Yoshi Sodeoka, Synthetic Liquid 7, 2022.
Wow, you went full art historian mode there.
I just scratched the surface.
Understood, but now that we have increasingly better high resolution screens, hyperrealistic 3D simulations, and immersive virtual reality devices, what’s the point of glitch? Isn’t it a bit nostalgic and passé?
I would argue quite the opposite, that as the means to create a convincing virtual reality around us are getting better, we need to counter this sleek, fake hyper-reality with a bit of glitch. Besides, glitch is fun.
Yes, I have to say that once you “get it,” it’s quite fun.
Told ya.
[1] Rosa Menkman, The Glitch Moment(um). Amsterdam: Institute of Network Cultures, 2011, pp. 29-30, 33.
ChatGPT is a machine learning model developed by OpenAI which was recently opened to the public as a research preview, allowing users to test it freely. Just as DALL-E 2 drew widespread attention for its ability to create impressively realistic or stylized images based on text prompts, ChatGPT is now receiving a similar response, since it is capable of producing reasoned explanations and providing answers to follow-up questions, apparently grasping the context of what is being asked. As explained by OpenAI, the model was trained using supervised learning, based on conversations written by humans in which a question and an answer were provided. Then a reward model was trained using several answers to the same question and having a human labeler rank them from best to worst. Finally, a reinforcement learning algorithm was used to optimize the model.
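As a rough illustration of the reward-modeling step described above, here is a minimal sketch in Python (using PyTorch) of a pairwise ranking objective that such a setup can use; the function, the example scores, and the averaging are simplified assumptions, not OpenAI's actual implementation.

```python
import torch
import torch.nn.functional as F

def ranking_loss(scores: torch.Tensor) -> torch.Tensor:
    """Pairwise ranking loss for a reward model.

    `scores` holds the reward model's scalar outputs for several answers to
    the same question, already ordered from best to worst by a human labeler.
    The loss pushes each answer's score above every answer ranked below it.
    """
    loss, pairs = scores.new_zeros(()), 0
    for i in range(len(scores)):
        for j in range(i + 1, len(scores)):
            loss = loss - F.logsigmoid(scores[i] - scores[j])
            pairs += 1
    return loss / pairs

# Hypothetical scores for four candidate answers, best to worst
scores = torch.tensor([2.1, 1.3, 0.4, -0.7], requires_grad=True)
print(ranking_loss(scores))  # lower loss means the ranking is better respected
```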
ChatGPT is able to explain concepts and provide answers considering different aspects of an issue, often maintaining a neutral attitude that seeks to balance out opposing points of view. It usually concludes with a short paragraph that summarizes its previous statements. Seeking to test it, I asked a series of questions regarding art created with Artificial Intelligence programs. It seemed fitting to have an AI system explain AI art to humans. The result is the interview below.
Please note: ChatGPT provides long answers, sometimes using repetitive formulas. I have decided not to make any edits to its text in order to remain true to its outputs. To avoid forcing readers to scroll through a very long article, I have included an index below: feel free to click on the questions you find most interesting. They are grouped into themes to make browsing easier.
AI art is artwork created by artificial intelligence. This can take a variety of forms, but typically involves using algorithms and machine learning techniques to generate visual artwork, music, or other forms of creative expression. AI art often explores the intersection of art and technology, and can involve the use of neural networks and other forms of AI to create unique and unpredictable artistic experiences. Some AI art is created with the goal of imitating human creativity, while other AI art is intended to push the boundaries of what is possible with technology and create entirely new forms of art.
AI art is a type of art that is created using artificial intelligence algorithms and technology. AI art can take many forms, including digital art, paintings, sculptures, and other types of visual art. The algorithms and technology used in AI art can be trained to create original works of art or can be used to enhance or manipulate existing artwork. AI art can be created by both human artists and AI algorithms working together, or it can be created entirely by AI.
AI art can be considered a subset of algorithmic art. Algorithmic art is a term that refers to art that is created using algorithms and other computational processes. AI art is a specific type of algorithmic art that is created using AI technology and algorithms.
In this sense, AI art can be considered a subset of algorithmic art, because it is a specific type of art that is created using algorithms and computational processes. However, it is worth noting that not all algorithmic art is AI art, and there are many other types of algorithmic art that do not use AI technology.
Overall, while AI art is a type of algorithmic art, it is not the only type of algorithmic art, and there are many other types of algorithmic art that do not use AI technology.
What role does AI art play in the history of contemporary art?
AI art is a relatively new development in the history of modern and contemporary art, and its role is still evolving. However, AI art has already had a significant impact on the art world, and it has introduced new ideas and possibilities for artists and art enthusiasts.
One of the key ways in which AI art has impacted the art world is by challenging traditional notions of what constitutes art and who can create it. Because AI algorithms and technology can be used to create art without the need for human intervention, AI art has called into question the idea that art must be created by human artists in order to be considered “real” art.
Additionally, AI art has also introduced new possibilities for collaboration between human artists and AI technology. Many artists are now experimenting with using AI algorithms and technology as part of their creative process, and this has led to the emergence of new styles and genres of art that are not possible without the use of AI technology.
Overall, while AI art is a relatively new development in the history of modern and contemporary art, it has already had a significant impact on the art world, and it is likely to continue to play an important role in the future.
AI art has its roots in the field of artificial intelligence (AI), which is the study of how to create computer systems that can simulate human intelligence. The development of AI technology has been ongoing for several decades, and it has led to the creation of many different AI algorithms and technologies that can be used to solve complex problems and to automate various tasks.
One of the areas in which AI technology has been applied is in the creation of art. By training AI algorithms on large datasets of artwork, it is possible to create AI programs that can generate novel and original artworks based on the information they have learned.
The first known examples of AI art date back to the 1960s, when researchers began to experiment with using AI algorithms to create visual art. Since then, the field of AI art has continued to evolve and grow, and there are now many different AI algorithms and technologies that can be used to create a wide range of different types of art.
Overall, AI art has come about as a result of the development of AI technology and algorithms, and the application of these technologies to the field of art.
The first known examples of AI art date back to the 1960s, when researchers began to experiment with using AI algorithms to create visual art. One of the first known AI artworks was created by the computer scientist Harold Cohen in the early 1960s. Cohen developed a program called AARON, which was able to create abstract drawings based on a set of rules and instructions that Cohen provided.
Another early example of AI art was created by the artist Harold Cohen in the early 1970s. Cohen developed a program called AARON, which was able to create abstract drawings based on a set of rules and instructions that Cohen provided.
Overall, while the exact details of the first AI artwork are not well-documented, it is likely that one of the first AI artworks was created by Harold Cohen in the early 1960s.
Can you name the main artists who have created art with AI?
There are many artists who have created art using AI technology and algorithms. Some of the most notable and well-known artists who have created AI art include:
Mario Klingemann: Klingemann is a German artist who creates AI art using a variety of different algorithms and technologies. His work often explores the intersection of art and technology, and he has exhibited his AI art in galleries and museums around the world.
Trevor Paglen: Paglen is an American artist who creates AI art that explores the relationship between technology, surveillance, and power. His AI artworks often take the form of large-scale installations and projections, and he has exhibited his work in galleries and museums around the world.
Anna Ridler: Ridler is a British artist who creates AI art using deep learning algorithms and machine learning techniques. Her AI art often explores themes of identity and the human condition, and she has exhibited her work in galleries and museums around the world.
Sougwen Chung: Chung is a Canadian artist who creates AI art that explores the relationship between humans and machines. Her work often involves collaborative projects between human artists and AI algorithms, and she has exhibited her AI art in galleries and museums around the world.
Overall, there are many artists who have created AI art, and their work covers a wide range of styles, techniques, and themes. These artists are at the forefront of exploring the potential of AI technology and algorithms to create art, and their work has been exhibited and recognized in galleries and museums around the world.
Both the artist and the machine can create AI art. Some AI art is created entirely by AI algorithms, without any human input. In these cases, the algorithms themselves are the “artists” creating the artwork. Other AI art is created by human artists using AI technology and algorithms to assist them in the creative process. In these cases, the human artist is the creator of the artwork, but the AI technology and algorithms are an important part of the creative process. In some cases, AI art may be created by a collaboration between a human artist and AI algorithms, with both contributing to the final artwork.
How can an algorithm be an artist? Isn’t creativity exclusively human?
The question of whether creativity is exclusively human is a complex one, and there is no definitive answer. Some people believe that creativity is a uniquely human trait, while others believe that it is possible for machines and algorithms to be creative.
One way to approach this question is to consider what we mean by creativity. Creativity can be defined as the ability to produce something new and original. While this definition may seem to exclude machines and algorithms, it is important to remember that these tools can be used in ways that were not possible before, and that they can help us to explore new ideas and create new and original works of art.
In the case of AI art, the algorithms and technology used can be trained on vast amounts of data and can be designed to explore a wide range of possibilities. This can allow the algorithms to generate artwork that is novel and original, even if it is created entirely by a machine. In this sense, the algorithms themselves can be seen as creative agents, capable of producing new and original works of art.
Ultimately, whether we consider AI art to be the work of a human artist, the machine, or a collaboration between the two, will depend on our individual perspectives and values. What is important is that AI art allows us to explore new possibilities and to create art in ways that were not previously possible.
Why would an AI program create art for humans? Wouldn’t it be more interesting to create art that only another AI can understand?
AI programs can create art for humans or for other AI programs, depending on the specific goals and objectives of the program and the designers who created it. In some cases, an AI program may be designed to create art that only another AI program can understand, for example, as part of a research project or to explore the capabilities of AI algorithms.
However, in many cases, AI programs are designed to create art that can be understood and appreciated by humans. This is because art is often a way for people to express themselves and to communicate with others, and creating art that can be understood and appreciated by humans is an important part of this process.
Additionally, creating art that can be understood and appreciated by humans can also help to demonstrate the capabilities of AI technology and algorithms. By creating art that people can understand and appreciate, AI programs can show the potential of AI technology to generate novel and interesting ideas, and to explore new creative possibilities.
Overall, whether an AI program creates art for humans or for other AI programs will depend on the specific goals and objectives of the program and the designers who created it.
The question of who should sign an artwork created with AI is a complex one, and there is no definitive answer. The specific approach that an artist takes when signing AI-generated art will depend on the individual artist and their goals and values.
In some cases, an artist may choose to sign an artwork created with AI themselves, as the primary creator of the artwork. This could be because the artist believes that they were the primary driving force behind the creation of the artwork, and that the AI program was only a tool that was used to assist in the creative process.
On the other hand, some artists may choose to sign the artwork with the name of the AI program that was used to create the art. This could be seen as a way of recognizing the contributions of the AI program to the creation of the artwork, and it could be an interesting and unique way to explore the relationship between human artists and AI technology.
In some cases, an artist may choose to sign an artwork created with AI using both their own name and the name of the AI program. This could be seen as a way of recognizing the contributions of both the human artist and the AI program to the creation of the artwork.
Ultimately, the approach that an artist takes when signing AI-generated art will depend on their individual goals and values. Some artists may choose to sign the artwork themselves, while others may choose to sign it with the name of the AI program, and still others may choose a combination of both. What is important is that the artist is transparent about their approach and that they are fair and reasonable in their dealings with the AI program and any other parties involved.
If AI programs are written by humans, isn’t AI art entirely created by humans, in the end?
It is true that AI programs are written by humans, and that the technology and algorithms used in AI art are created by humans. However, it is important to remember that AI programs are not just simple, pre-programmed tools that are used by human artists. Instead, AI programs are capable of learning from data and of making decisions and predictions based on that learning.
In the case of AI art, this means that the AI algorithms and technology are not just blindly executing instructions that have been written by humans. Instead, the AI algorithms are able to generate novel and original artwork based on the information they have learned from the training data.
This means that while AI art is not created entirely by humans, it is also not created entirely by AI algorithms. Instead, it is the result of a collaboration between human creators and AI technology, with both playing important roles in the creation of the final artwork.
Overall, while AI art is not created entirely by humans, it is also not created entirely by AI algorithms. Instead, it is the result of a collaboration between human creators and AI technology.
There are many different tools and technologies that can be used to create AI art. Some of the most commonly used tools and technologies for creating AI art include:
Machine learning algorithms, such as neural networks, that can be trained on data to generate art
Generative adversarial networks (GANs), which can be used to create realistic images and other types of art
Deep learning algorithms, which can be used to create detailed and complex artworks
Computer vision algorithms, which can be used to analyze and interpret visual information
Natural language processing algorithms, which can be used to generate text-based art, such as poetry or fiction
These are just a few examples of the tools and technologies that can be used to create AI art, and there are many other options available. The specific tools and technologies used to create AI art will depend on the individual artist and their goals and objectives.
There are many different AI programs and technologies that are commonly used for creating AI art. Some of the most commonly used AI programs include:
GANs (Generative Adversarial Networks): GANs are a type of AI algorithm that is often used for creating AI art. GANs consist of two neural networks – a generator network and a discriminator network – that work together to generate novel and original artworks based on a dataset of existing artwork.
Style transfer algorithms: Style transfer algorithms are a type of AI algorithm that is used to apply the style of one image to another image. This can be used to create AI art by applying the style of a famous painting or artist to a new image, creating a novel and original artwork that combines the style of the original with the content of the new image.
Evolutionary algorithms: Evolutionary algorithms are a type of AI algorithm that is inspired by the principles of natural selection and evolution. These algorithms can be used to create AI art by starting with a population of random artworks, and then using a set of rules and criteria to select the “fittest” artworks from each generation, allowing the art to evolve over time.
Deep learning algorithms: Deep learning algorithms are a type of AI algorithm that is based on the structure and function of the human brain. These algorithms can be used to create AI art by training the algorithm on a dataset of existing artworks, and then using the trained algorithm to generate novel and original art.
Overall, there are many different AI programs and technologies that are commonly used for creating AI art, and the specific program that is used will depend on the goals and objectives of the artist.
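For readers curious about what the adversarial setup described above looks like in practice, here is a minimal sketch in Python (using PyTorch); the layer sizes, learning rates, and the assumption of 64x64 grayscale images flattened into vectors are illustrative choices rather than any artist's actual pipeline.

```python
import torch
import torch.nn as nn

latent_dim, img_dim = 100, 64 * 64  # noise size and flattened image size (assumed)

generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, img_dim), nn.Tanh(),        # fake image with pixels in [-1, 1]
)
discriminator = nn.Sequential(
    nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),           # probability that the input is real
)

bce = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_images: torch.Tensor):
    """One adversarial round: the discriminator learns to tell real images
    from generated ones, then the generator learns to fool it."""
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # Discriminator update
    fake_images = generator(torch.randn(batch, latent_dim)).detach()
    d_loss = bce(discriminator(real_images), real_labels) + \
             bce(discriminator(fake_images), fake_labels)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator update: make the discriminator label its output as real
    g_loss = bce(discriminator(generator(torch.randn(batch, latent_dim))), real_labels)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

# Example call with a batch of random "images" standing in for a dataset
print(train_step(torch.rand(8, img_dim) * 2 - 1))
```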
Which programming languages are used to create AI art?
There are many different programming languages that can be used to create AI art. Some of the most commonly used programming languages for creating AI art include:
Python: Python is a popular and versatile programming language that is often used for AI and machine learning applications. It has a large and active community of developers and users, and it offers a wide range of libraries and tools for working with AI algorithms and technology.
R: R is a programming language that is commonly used for statistical computing and data analysis. It has a strong focus on data visualization and is often used for creating AI art that involves the generation of complex images and other visual elements.
JavaScript: JavaScript is a popular programming language that is often used for web development. It is a versatile language that can be used for a wide range of different AI and machine learning applications, including the creation of AI art.
Java: Java is a popular and widely used programming language that is often used for developing large-scale, enterprise-level applications. It is a versatile language that can be used for creating AI art, and it has a large and active community of developers and users.
These are just a few examples of the programming languages that can be used to create AI art, and there are many other options available. The specific programming language that is used to create AI art will depend on the individual artist and their goals and objectives.
Does an artist need to code to create this type of art?
An artist does not necessarily need to know how to code in order to create AI art. While knowledge of coding can be helpful for creating AI art, it is not a requirement.
There are many different tools and technologies available that can be used to create AI art without needing to know how to code. For example, there are several user-friendly AI platforms and tools that can be used to create AI art without needing to know how to code. These tools often provide a simple, intuitive interface that allows users to create AI art by dragging and dropping different elements and using pre-defined algorithms and settings.
Additionally, there are many artists who collaborate with coders and other technical experts to create AI art. In these cases, the artist can focus on the creative aspects of the art, while the coders and technical experts handle the technical aspects of creating the AI art.
Overall, while knowledge of coding can be helpful for creating AI art, it is not a requirement, and there are many different tools and technologies available that can be used to create AI art without needing to know how to code.
The specific hardware requirements for creating AI art will depend on the specific AI algorithms and technologies that are being used, as well as the complexity and scale of the artworks being created. In general, however, creating AI art often requires a significant amount of computing power, and it is not something that can be done on a standard desktop or laptop computer.
In many cases, creating AI art will require the use of a high-performance computer with a powerful processor and a large amount of RAM. This type of computer will be able to handle the complex computations and data processing that is required for creating AI art, and it will be able to generate high-quality artworks in a reasonable amount of time.
Additionally, some AI algorithms and technologies may require the use of specialized hardware, such as graphics processing units (GPUs), in order to run efficiently and to generate high-quality artworks. In these cases, a computer with one or more powerful GPUs will be required in order to create AI art.
Overall, while the specific hardware requirements for creating AI art will vary depending on the algorithms and technologies being used, it is generally a resource-intensive task that requires a powerful computer with a large amount of computing power.
Does this mean, then, that only a handful of artists can create AI art?
The ability to create AI art is not limited to a small group of artists. While creating AI art does require a certain level of technical expertise and access to the necessary tools and technologies, these are not necessarily difficult or expensive to obtain.
There are many different tools and technologies available that can be used to create AI art, and many of these are freely available or can be purchased at a low cost. Additionally, there are many online resources and tutorials that can help artists to learn the technical skills and knowledge that are needed to create AI art.
As a result, while creating AI art does require some technical expertise and access to the necessary tools and technologies, it is not something that is limited to a small group of artists. Anyone who is interested in creating AI art can learn the necessary skills and obtain the necessary tools and technologies, and there are many different resources available to help them do so.
The use of AI technology in the creation of art will not necessarily make artists lazy. In fact, AI technology can be a powerful tool that can assist artists in the creative process and help them to explore new ideas and possibilities.
However, it is important to remember that AI technology is just a tool, and it cannot replace the creativity and vision of human artists. While AI algorithms and technology can be used to generate suggestions or ideas for artwork, it is ultimately up to the human artist to decide which ideas to pursue and how to incorporate them into their work.
Additionally, the use of AI technology in the creation of art can also help to inspire and challenge artists to think in new ways and to push the boundaries of what is possible. In this way, the use of AI technology can actually encourage artists to be more creative and innovative, rather than making them lazy.
Overall, while the use of AI technology in the creation of art can be a powerful tool, it will not necessarily make artists lazy. Instead, it can assist artists in the creative process and help them to explore new ideas and possibilities.
The use of AI programs in the creation of art could potentially lead to an overproduction of art. Because AI algorithms and technology can be used to generate art quickly and efficiently, it is possible that the use of AI could result in a large number of artworks being produced in a short period of time.
However, it is important to remember that the use of AI in the creation of art is not a new development, and it has been happening for several years. As a result, the art market has already had time to adjust to the use of AI technology, and the potential for overproduction of art is not as great as it may seem.
Additionally, it is worth considering that the quality and value of AI-generated art will vary, and not all AI-generated art will be of equal quality or worth. This means that while there may be a large number of AI-generated artworks produced, not all of them will be considered valuable or worth collecting.
Overall, while the use of AI programs in the creation of art could potentially lead to an overproduction of art, the potential for overproduction is not as great as it may seem, and the quality and value of AI-generated art will vary.
The environmental impact of creating AI art will depend on a variety of factors, including the specific tools and technologies that are being used, the amount of computing power that is required, and the conditions under which the art is created.
In general, however, creating AI art can be a resource-intensive process that requires a significant amount of computing power. This can have an impact on the environment, both in terms of the energy that is consumed and the potential for electronic waste.
Additionally, some AI algorithms and technologies may require the use of specialized hardware, such as graphics processing units (GPUs), which can be expensive to purchase and difficult to dispose of in an environmentally-friendly manner.
Overall, the environmental impact of creating AI art will depend on the specific tools and technologies that are being used, and it is important for artists to consider this impact when creating AI art. There are many different ways to reduce the environmental impact of creating AI art, such as using energy-efficient hardware and recycling or repurposing old or obsolete technology.
And if in the future many more artists can use AI programs and these are resource-intensive, does this mean that AI art can be bad for the planet?
If the use of AI programs for creating art becomes more widespread in the future, it is possible that it could have a negative impact on the environment. As more and more artists use AI algorithms and technology to create art, the demand for computing power and specialized hardware will increase, and this could lead to an increase in energy consumption and electronic waste.
However, it is important to note that the potential negative impact of creating AI art on the environment is not unique to AI art, and it is something that is also a concern for other forms of art making that are resource-intensive. Additionally, it is also possible that advancements in AI technology and algorithms could lead to more efficient and environmentally-friendly ways of creating AI art, which could help to mitigate the potential negative impact on the environment.
Overall, while the use of AI programs for creating art could potentially have a negative impact on the environment, this is not unique to AI art, and it is something that can be addressed through the use of efficient and environmentally-friendly technology and practices.
Do people really like AI art, or are they in it just for the novelty?
It is difficult to say whether people truly like AI art, or if they are simply interested in its novelty. There are certainly many people who are intrigued by the idea of AI art, and they may be drawn to these artworks because they are something new and different.
However, there are also many people who genuinely appreciate the aesthetic qualities of AI art, and who enjoy the unique perspective and ideas that these artworks can offer. Some people may find AI art to be thought-provoking and engaging, and they may enjoy the challenge of trying to understand and interpret these artworks.
Overall, it is likely that people’s reactions to AI art will vary, and there will be some people who are interested in it simply because it is novel, while others will genuinely appreciate and enjoy the art itself.
Why do most AI generated artworks produce nightmarish images?
Most AI-generated artworks do not produce nightmarish images. In fact, many AI-generated artworks are quite beautiful and interesting. However, it is true that some AI-generated artworks can produce images that some people might find disturbing or unsettling.
There are a few reasons why this may be the case. One reason is that AI algorithms are designed to explore a wide range of possibilities, and this can sometimes result in the generation of images that are unusual or unexpected. Another reason is that the training data used to teach the AI algorithm may contain images that some people find disturbing, and this can influence the types of images the algorithm generates.
It is also worth noting that what one person finds disturbing or nightmarish may not be the same for another person. What is considered “nightmarish” is a matter of personal opinion, and different people may have different reactions to the same image.
Overall, while some AI-generated artworks may produce images that some people find disturbing, this is not the case for all AI-generated art, and many AI-generated artworks are beautiful and interesting.
Why does GAN generated art look like bad painting?
Not all GAN-generated art looks like bad painting. GANs, or Generative Adversarial Networks, are a type of AI algorithm that can be used to generate art, and the specific appearance of the art generated by a GAN will depend on the training data and other factors.
It is true that some GAN-generated art may appear to be low-quality or “bad” painting. This can happen for a few reasons. One reason is that the training data used to teach the GAN may not be of high quality, which can result in the GAN generating art that is less detailed or less realistic than it could be. Another reason is that the GAN may not be trained for a long enough time, or it may not be trained on a large enough dataset, which can also result in lower-quality art.
It is also worth noting that what one person considers to be “bad” painting may not be the same for another person. Beauty and quality are subjective, and different people may have different opinions about the same artwork.
Overall, while some GAN-generated art may appear to be low-quality or “bad” painting, this is not the case for all GAN-generated art, and the specific appearance of GAN-generated art will depend on a variety of factors.
Should we reconsider our notions of art when exploring art made with AI?
The use of AI technology and algorithms to create art can certainly challenge our traditional notions of art and the creative process. Because AI algorithms and technology can be used to create art without the need for human intervention, AI art calls into question the idea that art must be created by human artists in order to be considered “real” art.
Additionally, AI art can also challenge our ideas about what makes art valuable and interesting. Because AI algorithms are able to generate artworks that are highly detailed and complex, they can create art that is aesthetically pleasing and engaging. This can challenge the idea that art must be created by human artists in order to have value or meaning.
Overall, exploring AI art can certainly lead us to reconsider our notions of art and the creative process. By challenging our traditional ideas about what constitutes art and what makes art valuable, AI art can help us to see art in new and interesting ways.
The potential for AI art to be a good investment will depend on a variety of factors, including the specific artworks being considered, the reputation and track record of the artist, and the overall state of the art market.
In general, however, AI art is still a relatively new and untested area of the art market, and there is a lot of uncertainty around the value and potential appreciation of AI artworks. Many buyers and collectors may be hesitant to invest in AI art due to its novelty and the lack of a track record for these types of artworks.
Additionally, the market for AI art is still relatively small, and there may not be a large number of buyers and collectors who are interested in these artworks. This could limit the potential for AI art to appreciate in value over time, and it could make it difficult for investors to find buyers for their AI artworks.
Overall, while there may be some potential for AI art to be a good investment, there are also many factors that could limit its value and potential appreciation. It is important for investors to carefully consider these factors before deciding to invest in AI art.
When selling art created with AI, should the artist split their benefits with the AI program?
Whether or not an artist should split their benefits with an AI program when selling art created with AI will depend on the specific circumstances and the agreement between the artist and the AI program. In some cases, the artist and the AI program may have agreed to share the benefits from the sale of the art, in which case the artist would need to split their benefits with the AI program.
However, in other cases, the artist and the AI program may not have agreed to share the benefits from the sale of the art. In these cases, the artist would not be obligated to split their benefits with the AI program, and they would be entitled to keep all of the proceeds from the sale of the art.
Ultimately, whether or not an artist should split their benefits with an AI program when selling art created with AI will depend on the specific circumstances and the agreement between the artist and the AI program. It is always a good idea for the artist and the AI program to carefully consider and agree on these issues before starting to create art together.
Will AI-generated art be preserved in the same way as paintings and sculptures? Will it last for centuries?
It is not clear whether AI-generated art will be preserved in the same way as paintings and sculptures. The preservation of art is a complex process that involves a variety of factors, including the materials used to create the art, the conditions in which the art is stored, and the care and attention given to the art over time.
In the case of AI-generated art, the specific approach to preservation will depend on the materials and technology used to create the art. For example, some AI-generated art may be created using digital media, such as video or audio, which may be more susceptible to degradation or loss over time. In these cases, the art may require special preservation techniques in order to ensure that it is preserved for future generations.
On the other hand, some AI-generated art may be created using physical materials, such as paint or clay, which may be more durable and easier to preserve over time. In these cases, the art may be more likely to last for centuries, just like traditional paintings and sculptures.
Overall, the preservation of AI-generated art will depend on the materials and technology used to create the art, as well as the care and attention given to the art over time. It is not clear whether AI-generated art will be preserved in the same way as traditional paintings and sculptures, and it will ultimately depend on the specific artworks and the circumstances in which they are created.
It is unlikely that AI-generated art will replace other forms of art making. While AI technology and algorithms can be used to create interesting and novel artworks, they are not a replacement for the creativity and vision of human artists.
Additionally, there are many different forms of art making, and the use of AI technology and algorithms is just one of many different approaches that artists can take. As a result, it is unlikely that AI-generated art will replace other forms of art making, and it will likely continue to exist alongside other forms of art making.
Overall, while AI technology and algorithms can be used to create interesting and novel artworks, it is unlikely that AI-generated art will replace other forms of art making. Instead, it is likely that AI-generated art will continue to exist alongside other forms of art making, as one of many different approaches that artists can take.
Do you think that AI art will get better with time?
It is likely that AI art will get better with time, as AI technology and algorithms continue to advance and improve. As AI algorithms become more sophisticated and are able to learn from larger and more diverse datasets, they will be able to generate more detailed and realistic artwork.
Additionally, as more and more artists begin to experiment with AI technology and algorithms in their creative process, they will develop new techniques and approaches that can help to improve the quality of AI-generated art. This could lead to the emergence of new styles and genres of AI art, and it could help to push the boundaries of what is possible with AI technology.
Overall, it is likely that AI art will continue to improve and evolve over time, as AI technology and algorithms advance and as more artists begin to experiment with AI in their creative process.
Over its history, AI technology has frequently promised more than it could deliver. Is this the case with AI art right now?
AI technology has indeed often promised more than it could deliver in the past. However, this is not necessarily the case with AI art. While there are limitations to what AI technology and algorithms can currently do in terms of creating art, there are also many interesting and impressive examples of AI art that demonstrate the potential of this technology.
One of the key limitations of AI art is that it is currently unable to create art that is truly original and creative in the same way that human artists can. AI algorithms and technology are still limited in their ability to understand and generate novel ideas, and most AI-generated art is based on existing data and patterns.
However, despite this limitation, there are many examples of AI art that are impressive and thought-provoking. Some AI algorithms and technologies are able to create art that is highly detailed and complex, and there are many interesting and novel ways in which AI technology can be used to create art.
Overall, while there are limitations to what AI technology and algorithms can currently do in terms of creating art, there are also many interesting and impressive examples of AI art that demonstrate the potential of this technology.
Tamiko Thiel is a pioneering visual artist exploring the interplay of place, space, the body, and cultural identity in works encompassing an artificial intelligence (AI) supercomputer, objects, installations, digital prints in 2D and 3D, videos, interactive 3D virtual worlds (VR), augmented reality (AR), and artificial intelligence art. In this conversation, which took place on the occasion of the launch of her solo artcast Invisible Nature, curated by DAM Projects, she discusses the evolution of technology over the last three decades, her early AR artworks, and her commitment to creating art that invites reflection.
Your work is characterized by the use of Virtual Reality and Augmented Reality technologies, with pioneering artistic projects. Which technical challenges have you met over the last decades in the creation of these projects?
My first exposure to real time computer graphics was at MIT when I was a graduate student in 1982. At that point, writing everything from scratch, you had to program for a semester in order to get a cube that would rotate in three dimensions. Coming from an artistic and design background, I felt that this is not really where I want to create art right now, I’ll have to wait. And then about 10 years later, in 1992, Silicon Graphics came out with OpenGL, an open standard that made it possible to do real time interactive computer graphics on PCs. Then in 1994, I started to work with a company called Worlds Incorporated, which was building on this new potential for doing interactive 3D computer graphics on PCs connected to the Internet. At that time I worked with Steven Spielberg on the Starbright World project, the first 3D online Metaverse for ill children, a virtual world where they could momentarily escape the space of the hospital. This first Metaverse was running on high end PCs, with fast connections provided by various high tech companies, but it was still unaffordable for people at home. The project ran from 1994 to 1997, and at that time the technology was still unstable.
So you must jump from that to 10 years later, when Second Life came about and this time people had more powerful graphics cards and ADSL connections at home. Second Life was able to create a much more developed virtual world, which seemed like the next phase of the Internet, and all the corporations wanted to move there. Then around 2007-2008, probably due to the financial crisis, but also the rise of Facebook, which allowed people to share photographs on a common platform, the excitement around Second Life fizzled. And then if we jump another 15 years, we find ourselves with still bigger processing power and faster connections. Now it is much easier to create virtual worlds than it was 25 years ago, partly because it is easier to create 3D objects, or you can buy them online, and also because of the advancements in hardware and software.
So, as you can see, big steps come later than you think. It takes maybe 10 to 15 or 20 years to get there instead of the five years that all the evangelists predict. People talked about virtual reality at that time in the 90s as being a failure, just as they talked about AI being a failure in the 80s and 90s. And what they don’t realize is that technological change takes longer than you’d want it to. So it’s wrong to call it a failure. It’s more like: “Okay, we have to keep on working on this.” And if you wait long enough, 20 years or so, then you’ll get it.
Video by Tamiko Thiel, Rewilding the Smithsonian, 2021. Created with the ReWildAR AR app (2021, with /p). Commissioned by curator Ashley Molese for the 175th anniversary of the Smithsonian Institution, in the Arts and Industries Building.
Interactive 3D and VR artworks such as Beyond Manzanar and Virtuelle Mauer have a strong narrative component as they explore historic and political issues. What is the role of the user in constructing these narratives?
Basically, what I tend to do is look for key moments that I think can be expressed, experienced, and communicated better in virtual reality than in other media. In Beyond Manzanar, for me that was the moment where you’re sitting in a beautiful Paradise Garden, and you see the mountains covered in snow around you. This is an image from the book Farewell to Manzanar by Jeanne Wakatsuki Houston: the author recalls that when she was an eight-year-old imprisoned in the camp, she would pick a viewpoint where she couldn’t see any guard towers, any barracks, nor the barbed wire fence. And she tried not to move for the longest time, because as long as she didn’t move, she could preserve the illusion that she was in paradise of her own free will. As soon as she moved, she saw that she was indeed in prison; she fell out of paradise back into prison. And so this moment occurs in Beyond Manzanar, where you enter a garden which is framed by the beautiful mountains. But if you go too deeply into the garden, then boom! – the garden disappears, and you’re back in the prison camp.
My second piece, The Travels of Mariko Horo, has a much more complicated structure, with several heavens imagined by a time-traveling 12th century Japanese female artist inventing the West in her imagination. In this work there is this moment when you enter the different churches, which are in fact liminal spaces between prosaic everyday life and the world of the supernatural. When you cross that threshold, Mariko Horo takes you to heaven or takes you to hell. But it is always by your own free will, you’re always making the decision and making the motions that all of a sudden present you with the consequences of your decisions.
Finally, in Virtuelle Mauer/ReConstructing the Wall, I introduced some characters that take you on a time travel through the history of the Berlin Wall. But if you cross over the invisible boundaries of the former Death Strip, then you fall back into the 80s and the Wall appears behind you. So in all three pieces, it’s really about letting you feel like you have the freedom to go anywhere you want and do anything you want to do. But then you must face the consequences of these actions, which might take you to Paradise or they might take you to prison. But you always feel like it was your decision to go there, or to examine this, and therefore you’re sort of complicit with whatever happens to you.
Video by Tamiko Thiel, Atmos Sphaerae, 2022. Created with the Atmos Sphaerae VR artwork, 2021.
Creating artworks in Augmented Reality offers the possibility of intervening in institutional art spaces uninvited, as you did at MoMA, the Venice Biennale, or TATE Modern, or within a curated exhibition, as is the case with Unexpected Growth, which was shown at the Whitney Museum of American Art. Can you tell us about the creative process in both cases and your experience with “guerrilla” interventions versus curated exhibitions using the same technology?
Let’s start with We AR in MoMA, an augmented reality project created by Sander Veenhof and Mark Skwarek that took place at the Museum of Modern Art in New York on October 9th, 2010. The iPhone had been around since 2007, as well as other smartphone models, and in the course of 2009 both Mark and Sander had been playing around with the technology and developing AR artworks on mobiles in public spaces. And then they realized they could also geolocate the artworks to have them appear in certain spaces, so they came up with this idea of doing the spectacular intervention at MoMA. I knew Mark from art circles before; we had both shown in the 2009 Boston CyberArts Festival, so he dropped me and many of his artist friends an email saying: “Hey, we’re able to do this now. Send me some content and I’ll put it up and we’ll do a flashmob at MoMA.” They were not asking permission from MoMA. The museum didn’t know about it, and they couldn’t stop us. At that time, people didn’t realize that location-based AR could be used anywhere. But then it turned out that they did find out about it beforehand, because Mark and Sander were doing the intervention as part of a citywide public art festival of psychogeography, so it was publicly announced by the festival on Twitter. MoMA actually posted a link to the festival and said: “Hey, looks like we’re going to be invaded by AR,” which was very forward thinking, embracing this new development in technology. So, that was incredibly good publicity. It was a really exciting moment, when we realized that there were these possibilities that the new technology was bringing about. I would say this was a path-breaking exhibit in the history of media.
After this intervention at MoMA, the artists who took part in it created the group Manifest.AR. We were thinking about where to do the next incursion, and since I live in Munich, which is a six-and-a-half-hour beautiful train ride to Venice, I suggested we go to the Venice Biennale in 2011. It was a group of about eight of us. We created virtual pavilions that were located inside the Giardini and at Piazza San Marco, so that people who didn’t want to spend money to enter the Giardini could also experience the artworks in a public space, because the Giardini, with its walls around it, is a classically closed curatorial space. The point was that having your work shown at MoMA or the Biennale is a sign of achievement, of having been able to enter these closed curatorial spaces, but with AR interventions that was not true anymore: anybody can place their artwork wherever they want. But then people’s reaction was: “Oh, wow, you’re showing in the Venice Biennale, you’ve made it!” Then we told them we hadn’t been curated and that we were doing this of our own accord, but people would respond: “Oh, that’s even better.” So we thought we were doing this sort of Duchampian breakdown of all sorts of structures that define prominence in the art world. Duchamp exhibited his famous urinal not to say that an artwork becomes an artwork when an artist says it’s an artwork and places it in an art context, but to state that this whole thing is ridiculous.
These interventions gave us a feeling of exhilaration that we could hold our own exhibits anywhere, even though no one in the art world was interested in media art at that moment. And we could also play off-site. Because AR is a site-specific medium, you’re always dealing with the site. And that opened up whole new possibilities. Interestingly, shortly after that, George Fifield, the Boston Cyberarts director, arranged our first invitational show at the ICA Boston. This was in April of 2011. The ICA curators didn’t understand how the technology worked. They said: “Okay, you can do it on the first floor, but not on the second floor. You can do it in the lobby and outside, but you can’t do it inside of the galleries.” And we had to tell them it doesn’t work that way: the artworks are triggered by a GPS location, which has a radius of a mile or so.
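For readers curious about that last point, the triggering logic behind a geolocated AR artwork is essentially a geofence test: the app compares the viewer’s GPS position against the artwork’s anchor coordinates and shows the piece only when the distance falls within the trigger radius. Below is a minimal sketch of that idea in Python, not the code of Layar or of Thiel’s apps; the coordinates, radius, and function names are purely illustrative.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two GPS points, in meters."""
    r = 6371000  # mean Earth radius in meters
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlon = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlon / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical anchor point and trigger radius -- illustrative values only.
ARTWORK_LAT, ARTWORK_LON = 40.7396, -73.9876   # a point in lower Manhattan
TRIGGER_RADIUS_M = 1600                        # "a mile or so"

def artwork_visible(user_lat, user_lon):
    """Return True if the viewer's position falls inside the geofence."""
    return haversine_m(user_lat, user_lon, ARTWORK_LAT, ARTWORK_LON) <= TRIGGER_RADIUS_M

print(artwork_visible(40.7400, -73.9880))  # True: standing nearby
print(artwork_visible(48.1372, 11.5756))   # False: standing in Munich
```

Because a radius of that size easily spans a whole building and its surroundings, restricting a piece to “the lobby but not the galleries” is simply not something the geolocation layer can enforce.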
As for showing Unexpected Growth at the Whitney Museum of American Art in New York, it was thanks to Christiane Paul, the adjunct curator of media art at the museum. I have known her for quite a while, I think since about 2002, and she has curated me into many of her shows over the years in different venues, but this was the first time at the Whitney. She had of course done the visionary work of creating Artport, a space for net art supported by the museum, but she still hadn’t placed an AR artwork inside the museum. Then in 2014 she commissioned an AR intervention by Will Pappenheimer, Proxy, 5-WM2A, at the Whitney’s final closing gala for the old Breuer Building. So when she contacted me in 2018 to create an artwork to show at the Whitney, she had already gone through the process of introducing this technology in the museum. She invited me to create an artwork for the terrace, which is 20 by 10 meters in size. Since this was a big show, I needed to make sure that the piece would work properly, so I contacted the people at Layar, the AR app we had used in all our previous interventions, but they told me they were about to shut down their servers, so I had to find a solution. My husband Peter Graf, who is a software developer, told me he could write an app for me. We worked side by side on this project, so I realized he should co-author it with me, and he came up with the artist name /p, so now the artwork is in the Whitney collection credited to myself and /p in collaboration. The artwork is no longer officially on view at the museum, but if you download our app and go to the terrace you can still experience it.
Video by Tamiko Thiel, Unexpected Growth (Whitney Museum Walk1), 2018. Created with the Unexpected Growth AR app (2018, with /p), commissioned by and in the collection of the Whitney Museum of American Art.
There is also the fact that the artworks are invisible, so how did you communicate their existence and solve the technical problems associated with having the proper device, software, and connectivity?
At the Venice Biennale intervention, Sander got in touch with Simona Lodi, director of the Share Festival in Turin, and the artist group Les Liens Invisibles, who were together mounting another AR intervention, The Invisible Pavilion. We created a common postcard with QR codes to download the app. We also invited people to come to Piazza San Marco and the Giardini on certain days and times and helped them experience the artworks. Collaborating with the team from the Share Festival was a huge help, because those of us from outside of Italy had terrible connection issues, and it was also the first Venice Biennale when hordes of people were walking around with their cellphones, overloading the networks. The Vodafone network actually broke down in the Venice area. Gionatan Quintini of Les Liens Invisibles loaned me his smartphone to show my work, and this is an example of the kind of collaborative atmosphere that you get in the media art world and that is not that easy to find in the contemporary art world. By connecting our networks with those of Share, we got a lot of publicity for both the interventions at MoMA and at the Venice Biennale, and that put AR, at this early time, into the media art history books, and therefore into the art canon.
Video by Tamiko Thiel, Sponge Space Trash Takeover (Walk1), 2020. Created with the VR space “Sponge Space Trash Takeover” courtesy of Cyan Planet and xR Hub Bavaria.
The artworks in your latest artcast, titled Tamiko Thiel: Invisible Nature, all deal with different aspects of our intervention in the natural environment. What has been your experience addressing this subject in terms of the balance between artistic expression and the message you want to convey?
Perhaps because I started out as a product designer, with the Connection Machine being what I consider my first artwork, I am always thinking of my audience and how to communicate with them. When I approach political or social issues, such as climate-related problems, I know that the really shocking photographs (for instance, a dead bird whose stomach is full of plastic) give you an immediate emotional jolt and make you realize that this is a serious problem. But I personally cannot look at those images day after day, time and time again. So, balancing my work as an artist with my desire to communicate, sometimes I think that I should be a journalist, so I could write articles that go into the details in much more depth. But how often do you reread the same article? So I think the true value of an artist making work about subjects such as these is that the artwork can be exhibited time and time again, in different places around the world. And people might see it again, they may be willing to look at it time and time again, but not if it is something horrible and shocking. I’m traumatized enough by what’s happening in the world, so I’d rather create something that is not traumatizing for people, but that at the same time makes you think.
For instance, Unexpected Growth shows a very colorful, bright coral reef on the terrace of the Whitney. And when you look at it more closely, you realize this beautiful coral reef is made out of virtual plastic garbage. So people are confronted with something that is really beautiful, but after a while they realize that they are surrounded by garbage. So my strategy is to seduce people with a strong visual composition that is captivating. And then, when I’ve got their attention, I let them figure out that there is actually something else going on here, if they spend the time to look at it.
Video by Tamiko Thiel, Evolution of Fish – Anthropocene Daze #1, 2019. Created with the AR app Evolution of Fish (2019, with /p).
Serafín Álvarez is an artist and researcher based in Barcelona who explores themes and concepts associated with liminality, non-human otherness, the journey into the unknown, and changes in the perception of reality, and how these are imagined and depicted in contemporary popular culture, with a particular interest in science fiction and fantasy film and video games. Encompassing 3D animation and interactive simulated environments, sculpture and installation, his work has been exhibited internationally.
The work of Serafín Álvarez has been featured in Niio in the artcasts Worlding with the Trouble (curated by Fabbula) and Heterotopias, alongside other international artists. The recent artcast Places of Otherness brings together four of his works, spanning the last five years of his career. On the occasion of this presentation, we talked with him about the process and concepts behind his work.
You have stated that the inspiration for Maze Walkthrough comes from the experience of going from one airport to another while you were producing a previous project. Would you say that both airports and videogame environments are “non-places” meant for endless circulation?
Indeed, airports have often been associated with Marc Augé’s concept of non-place, but I would not put, generally speaking, video game environments in that category, since they are, for many players, places where meaningful relationships are established. In any case, when I did these works I was not so much thinking about the concept of non-place as about liminality. In both cases I looked at certain architectural spaces (corridors and airports) as spaces for transit, circulation, change. Spaces that have not been designed to be inhabited, but to connect other spaces.
You are interested in science fiction as an exploration of the Other. In your work, would this Other be the space itself, strange and unpredictable?
One of the things that interests me most about science fiction is the speculation about the unknown and the ways of representing it. That unknown can be an Other (understood as someone different, whether human or of another species), but it can also be a place, a state of consciousness, a mutation, and so on. In my work I have looked at multiple resources that science fiction uses to represent what we don’t know: visual effects, soundtracks, costumes… but you are right that in most of my work there is an important spatial component, an active interest in spaces of otherness.
In your works you seek to create an experience, which becomes immersive by allowing the viewer to wander freely through the spaces and free themselves from the impositions of gameplay. How do the sculptural elements you create for exhibitions in physical spaces participate in this immersion?
My work is predominantly digital, but when I exhibit it I’m very interested in its physical dimension. I like sculpture very much and I try to incorporate in my own work that physical relationship between bodies that I enjoy so much when looking at physical objects in the real world. On the other hand, digital work can become a bit schizophrenic, because you can edit and polish details ad infinitum, try one thing, undo it and try another one endlessly. Working with matter is different, it allows me and encourages me to be more intuitive, to let myself go, to establish a less controlling relationship with the materials, and I personally think that brings very positive things to my work.
Serafín Álvarez, A Full Empty, 2018
You have distributed your work as downloadable files that the public can buy for whatever price they want, or even download for free. What has this kind of distribution meant for you? Do you see other ways of distribution that would be conducive to your work, particularly because of its identification with the language of videogames?
I have two pieces of interactive software on itch.io, an interesting platform for independent video games with a very active community. I usually work with physical exhibitions in mind, but distributing part of my work digitally has allowed me to reach other audiences; it has given me a certain autonomy to show and make my work known without having to depend exclusively on institutions, galleries and curators; and being attentive to digital platforms for art distribution has allowed me to get to know the work of a large number of very interesting artists who are active online although they may not have as much presence in the conventional channels of contemporary art.
It seems that Maze Walkthrough has been better understood in the field of videogames than in the contemporary art world. Do you think this is due more to the aesthetics or to its “navigability”?
I don’t know if better, but different. When I published Maze Walkthrough it was reviewed in some media outside the field of contemporary art and it was very well received. Many people wrote to me, many people commented and shared both the piece of software and the collection of corridors at scificorridorarchive.com that I made while conceiving the project. Audiences around science fiction and video games have always interested me, and that such audiences valued my work was something that filled me with joy. One of the things I liked most about that reception was to see people enjoying the piece in a different way than the contemporary art audiences I’m used to, which tend to look at the work in a reflexive way, pondering possible interpretations. I’m very interested in hermeneutics, but it was refreshing to also see people enjoying Maze Walkthrough more from experience than intellect.
Serafín Álvarez, Maze Walkthrough, 2014
A Full Empty, the video you presented as part of the artcast curated by Fabbula, shows a world in which nature has run its course after an industrial era that fell into decay. Do you see in this work an interest in dealing with environmental issues through simulation, or do you continue to explore spaces linked to science fiction narratives?
Both. This work is based on two fictional texts: Andrei Tarkovsky’s film Stalker and, especially, the novel Roadside Picnic by the Strugatsky brothers, on which Tarkovsky based his film. Both texts are about a forbidden zone to which humans have restricted access and which develops its own ecology, and while making that video I found myself thinking about what the planet would be like once we are no longer here.
You are interested in freeing the viewer from the tyranny of the camera, but there’s actually an interesting aspect to the camera movement in your work. Normally it’s a forward traveling shot, following the logic of video game exploration, but in A Full Empty it is, conversely, a backward traveling shot, which gives it a more cinematic character. Was this a conscious decision in the creation of this piece? Have you thought about working more with camera movements in future works?
Yes, of course it was a very conscious decision. In Roadside Picnic the scientists who study the forbidden zone explore it with great care, because it is full of deadly traps. They have developed hovering vehicles with a “route memorizer” system that, once they have finished an exploration journey into the zone, brings them back in an automated way to reduce the danger, retracing on the way back exactly the same route they took on the way out and therefore avoiding the traps they have already bypassed. The video is influenced by this automated journey of return after having entered a strange place in search of something.
I’m sure I’ll continue working with camera movements; it’s something that fascinates me. Right now I’m involved in developing live simulations that are much less cinematic than the video A Full Empty, but I still think and care a lot about camera movements, no matter how simple they are. Moving the camera is a wonderful expressive resource.
Serafín Álvarez, Now Gone, 2020
In Now Gone you adopt a different aesthetic, which resembles the point clouds created by 3D scanners, to show a mysterious cave inspired by the film Prometheus and the universe of H.R. Giger. What led you to this aesthetic and how would you link this piece to your other works?
The link with other works is a similar interest in the journey, in the passing from one place (or condition, or state…) to another. Also, the arrangement of “intertextual elements”, vestiges that refer to fictional stories as if they were a kind of archaeological object… although it is true that the aesthetics of Now Gone is different from my previous works. Now Gone was born from an invitation to participate in a publication, Today is a Very Very Very Very Very Very Very Gummy Place by Pablo Serret de Ena and Ruja Press. They sent me a very ambiguous map and asked me to make something from it. My proposal was to build an environment with video game technology. Since the publication was going to be edited in black and white, I started to try things using this limitation in a creative manner and, after several experiments, something that worked very well for what I wanted to achieve was to render the images using a 1-bit dither (a graphic technique in which there are only black or white pixels, organized in such a way that they produce the illusion of grays, much like Ben-Day dots in comics). I’m very pleased with the result; in fact I soon returned to a very similar aesthetic in a later work, A Weeping Wound Made by an Extremely Sharp Obsidian Knife, and I’m currently looking at different ways to develop it further in the future.
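As a side note for readers unfamiliar with the technique, 1-bit ordered dithering compares each pixel’s brightness against a small, repeating threshold matrix and outputs only pure black or pure white; viewed from a distance, the pattern reads as shades of gray. The sketch below is a generic illustration in Python using NumPy and Pillow, not the renderer Álvarez built with game-engine tools; the gradient input and output file name are placeholders.

```python
import numpy as np
from PIL import Image

# 4x4 Bayer threshold matrix, normalized to the range [0, 1).
BAYER_4X4 = np.array([
    [ 0,  8,  2, 10],
    [12,  4, 14,  6],
    [ 3, 11,  1,  9],
    [15,  7, 13,  5],
]) / 16.0

def dither_1bit(gray: np.ndarray) -> np.ndarray:
    """Ordered (Bayer) dithering: map a grayscale array in [0, 255]
    to an array containing only 0 (black) and 255 (white)."""
    h, w = gray.shape
    # Tile the threshold matrix over the whole image, then compare.
    thresholds = np.tile(BAYER_4X4, (h // 4 + 1, w // 4 + 1))[:h, :w]
    return (gray / 255.0 > thresholds).astype(np.uint8) * 255

# Demo on a synthetic horizontal gradient, so no input file is needed.
gradient = np.tile(np.linspace(0, 255, 256), (64, 1))
Image.fromarray(dither_1bit(gradient)).save("gradient_1bit.png")
```

Pillow’s own Image.convert("1") applies a comparable 1-bit reduction (using Floyd–Steinberg error diffusion rather than an ordered matrix), which is the quickest way to experiment with the look on an existing image.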
Fabbula specializes in curating Virtual Reality projects and immersive experiences. In relation to your work, how do you see the possibilities offered by current VR devices for the dissemination of digital artworks?
At the moment I haven’t seriously started working with VR. As I mentioned in a previous question, I’m very interested in the relationship between the work, the viewer and the physical space, but generally speaking VR experiences tend to remove that physical space. I’m sure there are interesting ways to incorporate it, but for the moment I haven’t worked in that direction yet.
Eelco Brand (Rotterdam, 1969) creates virtual 3D models that resemble beautiful natural environments full of lush vegetation, bathed in the warm light of the sun or an entrancing moonlight. While photorealistic, his artworks are not based on photography or 3D scanning. They are painstakingly created from scratch, layer by layer, with the patience of a devoted painter. The scenes he creates have no conclusion. They simply play out endlessly in seamless loops, depicting a surreal activity that, by repetition, becomes natural.
Brand is represented by DAM Projects, the pioneering digital art gallery founded and owned by Wolf Lieser in Berlin, which is presenting its most outstanding artists on Niio. Our recent artcast Sprout features a selection of artworks by Brand that depict scenes of nature with a mysterious twist. We sat down with the artist to discuss the concepts behind his 3D animations and the techniques he uses to create them.
Eelco Brand. WT.movi, 2019
Can you take us through the process of creating one of your animations?
A project starts with small pencil sketches, followed by animating the movements in simple test scenes with dummy objects. When things seem to be technically possible and it might become an interesting work, I start building with 3D modeling. Then I import the 3D objects into a construction that could best be described as a virtual film studio. Lights and a camera are used as on a real film set. Only the area within the viewport of the camera is relevant; zooming out would reveal it is an illusion, as in a studio scene of a feature film.
At the same time, it is often impossible to build a whole scene in one construction because of the limitations of computer memory and render power. So I use separately rendered layers and place them on top of each other in a film editing program, which gives additional tools for adjusting image details.
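Brand does not name the software he uses, but the layer-stacking step he describes corresponds to standard back-to-front alpha compositing: each separately rendered pass carries an alpha channel, and the editing program blends the passes in order. A minimal sketch of that operation, assuming Pillow and placeholder file names for the rendered passes:

```python
from PIL import Image

# Hypothetical file names -- in practice these would be the separately
# rendered passes (e.g. background, midground vegetation, foreground)
# exported from the 3D software with alpha channels, all at the same size.
LAYER_FILES = ["background.png", "midground.png", "foreground.png"]

def composite_layers(paths):
    """Stack pre-rendered RGBA layers back-to-front, as a film editing
    program does when the layers are placed on top of each other."""
    layers = [Image.open(p).convert("RGBA") for p in paths]
    result = layers[0]
    for layer in layers[1:]:
        # alpha_composite blends the upper layer over the result so far,
        # respecting each pixel's transparency.
        result = Image.alpha_composite(result, layer)
    return result

composite_layers(LAYER_FILES).save("final_frame.png")
```

The only requirement is that all passes are rendered at the same resolution from the same camera, so they line up exactly when stacked.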
Your artworks integrate elements of the languages of both painting and cinema. What role do these elements play?
In my animations there isn’t a narrative, no story development. Shown within the edges of a screen hanging on a wall, the similarity with a painting is obvious. It doesn’t matter when you start looking at it. This can be bothersome in a video art exhibition, when it is unclear whether you are at the end or at the beginning. A painting, on the other hand, is static and is often looked at for just a brief moment, trained as we are to see and judge an image in a split second because of the visual bombardment we are subjected to each day. So, as a painter, being able to use movement to attract and hold the attention of a viewer has always felt like a powerful quality. In maintaining the resemblance with a painting I prefer to use slow movements or keep the camera standing still. For me, the slow rhythm and iterations are a welcome opposite of the constant flow of images in a fast, hypertense society.
Light and colors in the animations, as well as camera angle and depth of field are mostly a consequence of the scenery. The most attractive way of working on an animation is when the whole construction seems to take over and evolves by its own logic.
Eelco Brand. HH.movi, 2017
You do not use photographs or scanned objects in the making of your artworks. Why did you choose this method of creation? Do you keep libraries of elements that you can re-use in different artworks?
Yes, I re-use objects when I can. The sculpting and texturing of a 3D object is quite a lot of work each time, but the handmade aspect is essential in my opinion. It would be perfectly possible to obtain existing 3D objects, but that feels like cheating. From beginning to end, down to every detail, it feels important to construct everything myself.
At the same time, I find it interesting to question to what extent the 3D software is only a technical toolset and whether you can consider yourself the creator of every aspect. It can be said that there is a sort of anonymous collaboration between the designers of the software and the artist, particularly when certain typical effects are applied. I always try to be careful and avoid using the newest effects of 3D software, because at a certain point it is no longer so much the artist using a toolset: what you see is in fact the coolest new wizardry made by the software designers.
And nothing becomes outdated as fast as the newest, flashiest techniques.
Eelco Brand. OBJ.movi, 2021
You underscore the fact that landscape is a fiction, and so your depiction of nature is at the same time photorealistic and playfully fantastic. Is this your intention, to lead the viewer to question their perception of reality?
Nature is, on the one hand, an infinitely refined machinery, a biochemical machine: everything, down to the tiniest protein and molecule, behaves according to the laws of physics. And, on the other hand, nature is mystical, magical and divine. Or is that the human mind, projecting its thoughts and feelings? Indeed a landscape is fictional. It is our perception that creates a landscape out of trees and rocks and fields that are just randomly placed.
For me the fascinating quality of 3D animation is its immaterial aspect. It can be compared to the substance that dreams are made of. While fully virtual, it can be convincingly real. And with VR techniques and virtual environments rapidly evolving, the boundaries between fiction and reality will fade more and more.
One would say that some of your animations depict particular moods, such as joy, longing, or sadness. Is there an emotional dimension in these landscape and still life compositions?
I think that the works can have a certain atmosphere depending on the interpretation of the viewer. I entertain the idea that it is a personal issue. Nevertheless, the fact that a scene could trigger a certain emotion is very welcome.
Eelco Brand. KB.movi, 2021
The titles of your artworks are particularly puzzling, since they are reduced to a string of letters and the file extension. Is this a way to remove all possible interpretations of the artwork beyond the fact that it is a 3D rendered animation?
Exactly. They could all be named N.T., but the different letters help me organize the artworks. They are often just abbreviations of the project folder on the computer. For example, ‘Fir Tree Project’ would be FT.movi.
Eelco Brand. QTQ.movi, 2018
Since you create such detailed scenes with 3D modeling, have you considered expanding your landscapes to immersive 360 environments for virtual reality? Or do you prefer the image to stay inside a frame?
I have tried some things with Unreal Engine and it is fascinating. The visual impact of a VR environment is huge and a big promise for the future. But still, having to put a device on your head isn’t ideal. Especially in an exhibition setting, I don’t think it works very well.
Your sculptures seem to go in the opposite direction of the animations, as they are artificial objects that seem extracted from a 3D rendering process and placed in a world where they don’t belong. What is your main interest in the creation of these pieces?
Because of the immaterial quality of 3D modeling, it felt almost magical to touch a real sculpture after production, having designed it on the computer as an intangible object. In several pieces I have an animation in which a shiny, unnatural shape moves. And in the exhibition the sculpture lies materialized next to the monitor, like the actor out of the movie. It was interesting that some people were convinced they saw the sculpture moving after watching the animation.