Lisa Jevbratt

[image: lisa1.jpg]


I have been studying, exhibiting, teaching and curating in the field of software/Internet-based art since I came to the USA from Sweden in 1994 for graduate school. My projects mostly consist of software that collects data from (and is concerned with) the use and functionality of the Web, the Internet and e-mail communication, and that provides alternate ways of accessing that data. From the very start I was interested in the collective aspect of the Internet: how it allows us to experience and map ourselves, humans, on a community/species level. The Internet enables ‘species collaboration’ – inexplicit collaborations that are typically non-hierarchical and to some degree self-organizing, in which we participate more or less unintentionally. Examples of such collaborations are abundant, ranging from open-source software development to collaborative information filtering à la amazon.com and, of course, the Internet itself.

Several of my projects are concerned with the totality of the Internet, and in particular with the part of the Internet defined by the HTTP protocol, the Web. In addition, in all my projects there is an interest in the patterns and synchronicities that seem to emerge in collective entities such as the Web – a question with a more esoteric potential. I will discuss a couple of projects, older and ongoing, that have been important for me because they brought some key issues to my attention, issues which have become part of the conceptual and aesthetic foundation for my work in general.

1:1 – towards a new type of image

1:1, created in 1999, consisted of a database that would eventually contain the addresses of every Web site in the world, and of interfaces through which to view and use the database. Crawlers (software robots, which can be thought of as automated Web browsers) were sent out on the Internet to determine whether there was a Web site at a specific IP address. If a site existed, whether it was accessible to the public or not, the address was stored in the database. When the project was first created, approximately two percent of the IP addresses were searched and 186,100 sites were included in the database. In the fall of 2001 the search was started again. The initial idea was to continuously search the IP space until the whole spectrum had been covered, but since the Web had changed drastically since 1999, it seemed more interesting to search the same areas again in order to make comparisons between the Web of 1999 and of 2001. Five interfaces (Hierarchical, Every, Random, Excursion, Migration) visualize the databases and provide means of using the database to access and navigate the Web. The first four interfaces show the two databases in parallel. The fifth interface, Migration, reveals in one image how the Web “moved” over the years. The database was updated once more in 2004, and the Migration interface was updated to include the new database and expanded into a project in itself.

In the Migration image each pixel location represents 255 IP addresses: the pixel in the top left corner represents the addresses that start with 0.0.0 and the one in the lower right corner those that start with 255.255.255. If there are any Web sites starting with the numbers a pixel represents, a “blob” is placed there; the size of a blob is determined by how many sites it represents (1 to 255). In interface 1 the red blobs represent Web sites found in 1999, the green ones the Web sites found in 2001/2002, and the blue ones the sites found in 2004. In interface 2 the color of a blob represents the oldest creation date of the pages it represents. Gray, non-shaded blobs represent sites that no longer existed when the visualization was created in 2004/05 (sites represented in red and green in interface 1).
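The mapping can be sketched in a few lines of code. This is an illustrative reconstruction under assumed parameters, not the project’s actual code: the 4096 × 4096 canvas size and the function names are my assumptions.

```python
# Sketch of the Migration mapping described above: one pixel per a.b.c
# prefix, 0.0.0 in the top left corner, 255.255.255 in the lower right.

WIDTH = 4096  # assumed canvas: 4096 * 4096 == 2**24, one pixel per prefix

def pixel_position(ip: str) -> tuple[int, int]:
    """Map an address a.b.c.d to the pixel of its a.b.c prefix."""
    a, b, c, _ = (int(part) for part in ip.split("."))
    index = (a << 16) | (b << 8) | c   # linear index of the a.b.c prefix
    return index % WIDTH, index // WIDTH

def blob_size(site_count: int) -> int:
    """The blob grows with the number of sites found in the prefix (1 to 255)."""
    return max(1, min(site_count, 255))

# All sites whose addresses start with 128.111.52 land on the same pixel.
x, y = pixel_position("128.111.52.23")
```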

In the interface Every(IP) the image is composed of pixels, each representing one IP address stored in the database. The lowest IP address in the database is represented in the top left corner and the highest in the lower right. The color of a pixel is a direct translation of the IP address it represents: it is generated by using the second part of the IP address for the red value, the third for the green, and the fourth for the blue value. The variations in the complexity of the striation patterns are indicative of the distribution of Web sites on the Internet. An uneven and varied topography indicates larger gaps in the numerical space, i.e. the servers represented there are far apart, while smoother tonal transitions indicate networks hosting many servers, which because of their density have similar IP addresses.
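The color rule is simple enough to state directly. A minimal sketch of the rule as described, with illustrative function names of my own:

```python
# Every(IP): the second, third and fourth parts of an IP address become the
# red, green and blue values of its pixel; pixels are laid out in address
# order from lowest to highest.

def ip_to_int(ip: str) -> int:
    a, b, c, d = (int(part) for part in ip.split("."))
    return (a << 24) | (b << 16) | (c << 8) | d

def ip_to_rgb(ip: str) -> tuple[int, int, int]:
    _, r, g, b = (int(part) for part in ip.split("."))
    return (r, g, b)

# Dense networks yield runs of similar addresses, hence similar neighboring
# colors (smooth tonal transitions); sparse regions yield abrupt striations.
sample = ["12.0.3.1", "12.0.3.7", "12.0.4.2", "199.45.200.9"]
pixels = [ip_to_rgb(ip) for ip in sorted(sample, key=ip_to_int)]
```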

When working on 1:1 I realized that the interfaces/visualizations were not maps of the Web but were, in some sense, the Web. They were a new kind of image of the Web, and they were a new kind of image. These images are “realistic” in that they have a direct correspondence to the reality they are mapping. Each visual element has a one-to-one correlation to what it represents; the positioning, color, and shape of the visual elements have one graspable function. Yet the images are not “realistic”; they are real, objects for interpretation, not interpretations. I realized that I wanted visualizations to be experienced, not to function as a dialogue about experience. Visualizations should avoid looking like something we have seen before, or they could playfully allude to some recognizable form but then slip away from it. Instead of representing data symbolically by filtering it through known visual forms, the collected data should be represented by leaving an imprint; the images are “rubbings”, indexical traces of reality, and they are reality. The reason this minimal “indexical” method of visualization occurred to me was that I wanted to make images of the whole Web, making it visible and accessible in one glance. There was no room for three-dimensionality with its overlapping shapes, for pixels spent on anything purely “decorative”, or for in-between spaces. Every pixel mattered.

In the article “Systems Esthetics” (1968),1 Jack Burnham wrote about the complex, process-oriented society he saw emerging: a new era of highly complex economic and cultural systems that begs for new modes of understanding. Burnham argues that because we can’t grasp all the details of our highly complex systems (economic, cultural, technical, etc.), we cannot make “rational” decisions within them or understand them by analyzing their parts, or even the system as a whole. The way to make decisions within them and to understand them is by making more intuitive “esthetic decisions”, a concept he borrows from the economist J. K. Galbraith.

This idea has an intriguing parallel in the philosopher Immanuel Kant’s reasoning about the energizing effect the sublime has on our organizing abilities. Kant claims that in experiencing the sublime, by facing large amounts of information, huge distances and ungraspable quantities, our senses and our organizing abilities are mobilized. Contrary to what might be believed, we feel empowered, able to make decisions, and capable of acting. This is of great interest to the field of data visualization. Many strategies for aiding people in the task of turning large sets of data into knowledge assume that they should be presented with less information and fewer options in order to make sense of the data. However, humans are capable of sorting through enormous amounts of visual information and making sensible, complex decisions in a split second (the ability to drive a car is one example). Supported by Kant’s idea, I propose that under the right circumstances, drawing on sensations of the sublime, people faced with huge quantities of data can be mobilized to arrive at intuitive understandings of it. Many information visualizations, artistic or scientific, result from the mistake of compressing the information too much, decreasing the amount of information through calculations that embody assumptions that are never explained. The most common mistake in data visualizations is not too much information but too little; their “images” of the data landscape are not of high enough resolution for an esthetic decision to be made.

Infome Imager Lite – The unintended

The problem with compressing data, allowing more or less explicit assumptions to drive the visualization, is that one taints the data with intention. When I was working on 1:1 I started to think of myself as an alien, landing on planet Earth and trying to find out if there is life/intelligence here, choosing to pretend that I don’t know, that I don’t see the intentions in the networks I look at. I have found several interesting examples of how identity is found not in deliberate attempts, in intentional acts, but through the less intentional parts and expressions of a system.

Some years ago a student of mine made an interesting discovery in a project he made.2 It was Web software that returned the results of a search on three different search engines in reversed order: the most relevant site (however the search engine defines that) was last on the list, and the least relevant of the relevant sites was shown first. The result was significant. The least relevant sites, the ones usually so many clicks away that we don’t bother to look at them, varied greatly between the different search engines. The most relevant results, the ones usually displayed on top, were all the same.

More than a century earlier, Giovanni Morelli (1816-1891) made a related discovery. He sought a method for determining the authorship of paintings and came upon the fact that authorship is more detectable in the parts of a painting done with less intention, the parts which are not significant for the author or the genre in which the painting is made, such as earlobes and fingernails. His method is now called “The Morelli Method”. In art historian Edgar Wind’s words, it is interesting that “personality is found where personal effort is the weakest”.3

The physicist Albert-László Barabási made a similar discovery in the field of biology. In his book Linked: The New Science of Networks4 he explains his research on network structures and linkage systems in various fields, from computer networks to biology. He finds that “for the vast majority of organisms the ten most-connected molecules are the same” (p. 186) and “[t]hough the hubs are identical, when it comes to the less connected molecules, all organisms have their own distinct varieties” (p. 187). The highly connected molecules, hubs in Barabási’s terminology, are equivalent to the most relevant pages in a Web search or the traditionally most “important” features in a painting. These are the items, the nodes, with the most intent. And just as the least relevant Web pages are the most dissimilar, and the least important features such as earlobes say more about the painter, the difference between organisms – the production of their identity – lies in the least connected, least used or least significant molecules.

With the project Infome Imager Lite (started in 2002) I continued to explore the idea of indexical visualizations, with an explicit focus on visualizing data that is commonly not considered to be “content” but rather the artifacts of the system creating the “content”: data that is unintended and thus has the potential of being a more direct expression of the identity of the entity producing it, the Web. In the Infome Imager project the idea of the Internet and the Web as a “collective”, an entity with an identity worth investigating, is brought to the forefront by introducing the idea of the Web as an organism. The term Infome in the title is derived from the word “information” and the suffix “ome”, used in biology and genetics to mean the totality of something, as in chromosome and genome.

Infome Imager Lite allows the user to create crawlers that gather data from the Web, and it provides methods for visualizing the collected data. Some of the functionality of the Infome Imager Lite crawler is similar to that of the crawlers search engines such as Google use, but with some significant differences. The search engine crawler collects data about the intended content of a page, the actual words, in an effort to index the Web according to the “meaning”, the semantics, of Web pages. The Infome Imager Lite crawler collects “behind the scenes” data, such as the length of a page, when a page was created, what network the page resides on, the colors used in a page, and other design elements.
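A speculative sketch, using only Python’s standard library, of the kind of “behind the scenes” data such a crawler might record; the field names and the patterns are illustrative assumptions, not the project’s actual schema:

```python
import re
import urllib.request

def page_metadata(url: str) -> dict:
    """Fetch one page and keep only its incidental, unintended data."""
    with urllib.request.urlopen(url, timeout=10) as response:
        headers = response.headers
        html = response.read().decode("utf-8", errors="replace")
    return {
        "length": len(html),                             # how long the page is
        "last_modified": headers.get("Last-Modified"),   # when it was changed
        "server": headers.get("Server"),                 # what software serves it
        "colors": re.findall(r"#[0-9a-fA-F]{6}", html),  # colors used in the page
        "links": re.findall(r'href="([^"]+)"', html),    # where to crawl next
    }
```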

It glances down into the subconscious of the Web in hopes of revealing its inherent structure, in order to create new understandings of its technical and social functionalities. The Infome Imager interface allows the user to manipulate the crawler’s behavior in several ways. The user decides where it should begin crawling: it could, for example, start on a Web page specified by the user, on a page resulting from a search on a search engine, or on a random Web page. The crawler can be set to either visit a page once or every time it encounters a link to it. The data resulting from many revisits will create repetitive patterns in the visualization, revealing the linkage structure of the Web sites, while data resulting from single visits will generate distinct data. The user also sets how many pages the crawler should visit. The activity and the results of the crawler can be monitored on the Web site. The visualizations created by the crawler function as an interface linking to all the sites the crawler visited. The images are produced in a fashion similar to the images in 1:1: each datum is represented by one distinct entity, a pixel or a user-defined icon. The patterns that occur in the Imager visualizations are due to how Web designers and webmasters write their pages and how they construct their sites.
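The visit-once versus revisit policy can be sketched as a simple crawl loop. This is an illustration building on the page_metadata() sketch above, not the project’s code; revisits re-emit the same data, which is what produces the repetitive patterns in the visualization.

```python
from collections import deque

def crawl(start_url: str, max_pages: int, revisit: bool) -> list[dict]:
    queue = deque([start_url])
    seen: set[str] = set()
    data: list[dict] = []
    while queue and len(data) < max_pages:
        url = queue.popleft()
        if not revisit and url in seen:
            continue  # "visit once" mode skips pages already recorded
        seen.add(url)
        record = page_metadata(url)   # from the sketch above
        data.append(record)           # one datum becomes one pixel/icon
        # follow absolute links only, for brevity
        queue.extend(link for link in record["links"] if link.startswith("http"))
    return data
```

Each record then becomes one row of pixels or user-defined icons in the resulting image.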

The ‘Infome Imager Lite’ Web site can be seen as a form of distributed computing, like the seti@home project.5 However, while seti@home uses people’s computers in a collective search for intelligence in the universe, ‘Infome Imager Lite’ uses people themselves, or more precisely their aesthetic sensibilities, to collectively find occurrences of identity in the environment/organism created by the HTTP protocol. The collective environment was enhanced further in a workshop installation of ‘Infome Imager Lite’ exhibited in ‘Techno Sublime’ at the UC Boulder Art Museum in February/March 2005. There the audience sat down together, creating and printing visualizations and hanging them on the walls adjacent to the computers. The audience shared their findings with others in physical space and with people who did not wish to engage with a computer. By taking the visualizations outside the computer, to the museum wall and to people’s offices or living rooms, more time can be spent interpreting the imagery. The images can be seen and interpreted at more random times, while doing other things, while eating breakfast or talking on the phone. This allows people’s ‘esthetic decision-making’ processes more time and context to develop, and hopefully a larger understanding of the environment, the Web, can emerge collectively.

[image: lisa3.jpg]

Seeing

So what are these “indexical visualizations” traces of? What do the patterns indicate? Some of the meanings are right there on the surface, such as the insight from the 1:1 interface every(access) that most of the Web in 1999 was inaccessible and operated by the US military (while we had thought it mostly consisted of porn and corporate home pages begging to be found and visited). Or, in the case of Infome Imager, we see a design strategy revealed in the repetitious patterns of a corporate Web site such as macromedia.com (which, by using the uncommon attribute ‘tabindex’, produces one of the prettiest sites around, seen from the perspective of the Infome Imager). But the most interesting ‘meanings’ are the more elusive ones, the ones we feel through the aesthetic sensibilities described by Kant or Burnham. In the following text,6 written for my project Out of the Ordinary in 2002 – packet-sniffing visualization software that visualized the probability of communication between computers on the Internet, looking for out-of-the-ordinary occurrences – Jan Ekenberg offers his description of what one could find:

‘What’s an “ordinary” occurrence? And what’s an “out of the ordinary” occurrence? Not as in ordinary – same old, same old – dull. And not as in not ordinary – something fantastic just happened. And probably not as in the mathematical occurrence, either. Too boring. Rather: an event that draws attention to itself. TO ITSELF – it need not be pointed out! Something reveals itself. Something lets us know it has meaning. Combinatorial Revelation – two things, which by themselves offer little or no meaning, combined become very meaningful. Deviation – something that is very unlikely happens. Simultaneity – events that become important in that they happen at the same time; if they didn’t, they’d be ordinary. Stuttering – events that are only supposed to happen once occur two, or several, times. This notion of meaning – meaning as presage, a sign that draws attention to itself – is the focus of many of Lisa Jevbratt’s artworks. This focus becomes especially clear in ‘Out of the Ordinary’. This project, with its gothic minimalism, is a trap, or revealer, for this type of sign as it happens (or doesn’t) “in the pipe” of a computer network. Because: possession just doesn’t look the same in 2002 as in 1627. Let’s say you meet three men approximately three minutes apart on your daily walk. They all carry unusual small suitcases, are dressed too warm for the season, and all have long noses. Or… an email packet and an http packet go through your network at the exact same time and they have exactly the same size (3,789Kb). Or… you are on the beach, and behind you a deer falls off the cliff, runs into the ocean, almost drowns and then gets up and runs away. Know what I mean? And then there’s the Hollywood movie trope from the giant-creature films like King Kong (De Laurentiis’ version): a group of men leaning over a radar screen. Suddenly a large green blotch appears in the sweep with a simultaneous loud and startling beep.

“What the hell was that!”’

1 Jack Burnham, “Systems Esthetics”, Artforum, September 1968; 2 Ryan Gielow, San Jose State University, 1999; 3 Carlo Ginzburg, “Morelli, Freud and Sherlock Holmes: Clues and Scientific Method”, in History Workshop Journal, 1980; 4 Albert-László Barabási, Linked: The New Science of Networks, Perseus Publishing, Cambridge, Mass., 2002; 5 http://setiathome.ssl.berkeley.edu/; 6 Jan Ekenberg, “Out of The Ordinary Occurrences”, 2002

[image: lisa2.jpg]

Lisa Jevbratt

She is a Swedish artist and an Assistant Professor in the Media Arts and Technology Program and the Art Department at the University of California, Santa Barbara. Her work has been exhibited internationally in venues such as the New Museum in New York, the Walker Art Center in Minneapolis, Ars Electronica in Linz, Transmediale in Berlin, and the 2002 Whitney Biennial in New York.

Education

MFA Computers in Fine Art 1997, San Jose State University (CADRE), San Jose, CA, USA

Latest main projects/exhibitions

2005: Techno Sublime, The University Art Museum, University of Colorado, Boulder (Infome Imager Workshop Installation)

2004: Database Imaginary, Banff Centre for the Arts, Banff, Canada (1:1 billboard print); Villette Numerique, Le Parc de la Villette, Paris (Out of the Ordinary); Ciberart Bilbao, Bilbao, Spain (Infome Imager Lite)

2002: Troika, an alt.interface commissioned by rhizome.org – an online interface to the Rhizome database, exhibited at the New Museum in New York, October 2002; Electrohype 2002, Malmö, Sweden (Out of the Ordinary); Sight Unseen, Exploratorium, San Francisco, CA (Mapping the Web Infome); The 2002 Whitney Biennial, The Whitney Museum of American Art, New York, NY (1:1); Korea Web Art Festival (Syncro Mail – Unconscious Collective)

2001: The Altoids Curiously Strong Collection, The New Museum

2000: Transmediale, Berlin, Germany (1:1)

1999: Open X at Ars Electronica, Linz, Austria (1:1)

1998: A Stillman Project for the Walker Art Center Web Site, a parasitic art system hosted by the Walker Art Center Web site, commissioned by Steve Dietz/Gallery 9, The Walker Art Center, Minneapolis, MN

Personal main writings

Winter 2006: “Inquiries in Infomics”, a chapter in a book edited by Tom Corby, Routledge. (Other authors include 0100101110101101.org, Thomson & Craighead, Sarah Cook, Natalie Bookchin, Jonah Brucker-Cohen.)

October 2006: Anthology on Digital Technologies in Visual Art, ed. Marie Roemer Westh, Royal Academy of Art (Wall and Space Department), Copenhagen, Denmark. (Images and texts from the Infome Imager Lite Web site.)

Summer 2004: “A Prospect of the Sublime in Data Visualizations”, Ylem Journal Volume 24 Number 8, and Scale online Journal.

Spring 2003: “Coding the Infome. Writing Abstract Reality” in Dichtung Digital.

Summer 2000: “Mingling Theory: Invitational Roles in Hypertextual Networks” in Spectra (issue 1, v. 27, Summer), edited by Steve Dietz.