Half a shade off from the reality we are living already: an interview with Jacob Garbe of ISA

Jacob Garbe is a Bay Area artist, designer, and MFA student at UC Santa Cruz. His latest ARG-like creation, XMPLAR, involves an iOS and Android app, a frighteningly believable fictional corporation (of which Jacob is apparently an employee), a series of physical installations involving a dizzying array of display systems and interfaces, and a live performance. If you’re in the area, you can experience the dramatic culmination of this phase of XMPLAR at the UCSC DANM ground (ctrl) exhibition on May 2, 2013. The exhibit runs until the 5th, but the opening reception on the 2nd promises to be extra special. Wherever you are, you can explore the app and website now and for the foreseeable future. Jacob spoke with me via email over the weekend:

First, could you tell us a little bit about XMPLAR, both from a storyworld point of view, and from your perspective as an artist?

XMPLAR is about a collective of nascent artificial intelligences created to learn and evolve with stimuli from crowd-sourced photography/surveillance. The gist of the experience is that the player is put into an initially uneasy partnership with an AI, one that gradually matures over time into a more whole-hearted commitment to its concerns and desires. It’s a world only half a shade off from the reality we are living already, with a soupçon of magical realism thrown in to spice things up.

As an artist, this piece is trying to concretize some ideas I’ve had for a while now about the use of technology to create persistently reactive work. The intention is to make something that evolves over time, but never requires people to start from scratch. I’m looking to build a long-term relationship with my audience, over multiple experiences in different media. It’s also an exorcism/indictment of the always-hungry corporate façades doing their best to monetize, control, or package a product from the world around us.

How did you get into this kind of practice? What’s your background, and why are you interested in this strange hybrid of narrative, interaction design, and performance?

I was into computer science and robotics when I was younger, but had a change of heart while in undergrad and ran headlong into the humanities. I’ve also always considered myself a writer in practice, if not so much in product at times. So there’s a thread of narrative to all my concerns.

After graduating, I started making my peace with the science/art, hard/soft divide through works of hyperfiction, which got me interested in using anonymized tracking to make readers’ experiences persistent. I was working as a web and graphic designer in Kentucky at the time.

I entered the UC Santa Cruz Digital Arts New Media program two years ago, and everything has exploded from there. I’ve gotten really interested in using web technology to make reactive projection installations, as well as bringing back my work with physical materials through electronics.

To me, working with all these different media is a way to push myself, and to also break through the barrier of normalcy we’ve built up around technology. I want to make it magical again. I love to make things composed of ordinary parts that, when added up, become extraordinary.

Who or what are some touchstone inspirations for you?

I’m inspired materially by the growing normalization of surveillance, both on the person-to-person level and the organizational level, through mobile apps and GPS. A business like Internet Eyes, where we’re given the ability to spy on each other through its CCTV cameras with sanction from the government, and given “prizes” for catching criminals, is a source of constant amusement and horror to me. There are so many corporate entities out there that eclipse any fiction I can create; the best I can do is pull faces at them and expose that to my audience.

On a lighter note, I find a lot of inspiration in the work of other ARGs like the Jejune Institute and your own Reality Ends Here! I think these works are ultimately a real labor of love, and those sorts of experiences, where creators take an intensely individual focus on the recipients, are really ballsy and laudable.

I’m fascinated by the role of chaos in this project, particularly with respect to narrative. At first, the prompts I receive from my XMPLAR seem totally random. But as things move on, various structures — story figures, characters, etc — start to emerge. How did you do this — and, perhaps more importantly, why?

There are a couple of intentions at work in the code. On one level there is an element of randomness, within the bounds of a selected set. I’m drawing from a database of millions of concepts, so things can naturally diverge quite quickly. But I try to build in checks such that the player is drawn in certain directions as they move through the experience. It sort of builds a bank as you go, and that informs its selection process. But it’s important to me to allow space for the player to map their own ideas onto the XMPLAR’s workings. There is nothing more interesting to me than hearing people offer theories on what they think the XMPLAR are doing when they take that picture!
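The mechanism Jacob describes — random draws from a large concept pool, nudged by a “bank” that accumulates from the player’s earlier choices — can be sketched in a few lines. This is purely an illustrative model, not XMPLAR’s actual code; the class, concept set, and theme tags are all invented for the example.

```python
import random

class XmplarSketch:
    """Toy model of biased prompt selection: random choice from a
    concept pool, weighted by a 'bank' of themes accumulated from
    earlier selections. Names and structure are hypothetical."""

    def __init__(self, concepts, seed=None):
        # concepts: {concept_name: set of theme tags}
        self.concepts = concepts
        self.bank = {}  # theme tag -> accumulated weight
        self.rng = random.Random(seed)

    def next_prompt(self):
        names = list(self.concepts)
        # Each concept gets a base weight of 1, plus the banked weight
        # of its themes, so earlier choices gently steer later ones
        # without ever ruling anything out.
        weights = [
            1.0 + sum(self.bank.get(t, 0.0) for t in self.concepts[n])
            for n in names
        ]
        choice = self.rng.choices(names, weights=weights, k=1)[0]
        # Deposit this concept's themes into the bank.
        for theme in self.concepts[choice]:
            self.bank[theme] = self.bank.get(theme, 0.0) + 1.0
        return choice

# Invented miniature concept database for demonstration.
concepts = {
    "fine-grained parallelism": {"machines"},
    "murmuration": {"nature"},
    "panopticon": {"machines", "surveillance"},
}
ai = XmplarSketch(concepts, seed=7)
prompts = [ai.next_prompt() for _ in range(5)]
```

Over repeated draws, whichever themes the player happens to land on early become progressively more likely to recur, which is one simple way a structure can seem to “emerge” from apparent randomness.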

There is also a particular story I am hoping to tell with this first chapter. But as with all good ARGs, it’s important to me to see what the players are thinking, and to let that shape the story moving forward. I’m hoping it will be a highly-mediated, but highly-responsive, dialogue!

I know you’re collecting a lot of data in real-time about usage of the app, and that this data is going to appear in a variety of ways at the exhibition. Are you seeing anything surprising in the ways that people are using the app? Are there any common trends in the way that people engage with their XMPLARs?

I’ve been surprised by how wildly the engagement varies. I was also surprised to see a fair number of people dive into the app before I’d really planned any way of getting the word out! Thankfully, having that information available made it possible to react quickly. It’s probably also horribly American of me, but I’m surprised at how evenly the users have been split between inside and outside the US. And to be honest, I’ve been surprised and a little unnerved at how the XMPLAR processes have been recovering from errors and ushering people onward in the experience. The chaos I’ve introduced into the project is hopefully a wave I can continue to ride!

I like interacting with my XMPLAR in front of my TV. It’s actually a great way of watching TV, as it gets me searching through the channels for images I could use as responses to the prompts. I find myself watching stuff I wouldn’t normally watch, and looking at parts of the frame I usually tune out. In this way, my XMPLAR is detourning my TV watching experience. How is this typical (and/or atypical) of the way you expected people to engage with the project?

Oh wow, I hadn’t even thought of that! That’s amazing. Taking a picture of a picture of a picture! But that’s exactly the experience I am hoping to create. I’ve had players come up to me and tell me how they had never noticed something totally weird in their day-to-day world until their XMPLAR asked them to take a picture of “fine-grained parallelism” or something like that. And that’s what I’m shooting for with this first chapter.

Where and when is the exhibition, and what can people expect to see there?

The big opening is on May 2nd from 8-10pm…this Thursday! It’s at the Digital Arts Research Center up at UC Santa Cruz. I’m exhibiting with other members of my cohort in the Digital Arts program. In addition to getting a chance to see some of the data visualizations of what’s currently happening in the game, there are some interesting plans in motion that should hopefully result in a very punctuated, transitory, and shocking experience. There will be some recording happening as well, I think (as any automated security company worth its salt would do), so people physically unable to attend should keep an eye on the ISA website.

Finally, what’s next for XMPLAR — and for you?

If you can believe it, I’m going to be starting a PhD in Expressive Intelligence at UC Santa Cruz this fall. So there may be more fact in XMPLAR than people suspect by the end of things, as far as AI goes. The idea is to fold this ongoing piece into my practice moving forward, pushing myself in the “harder” areas of artificial intelligence to substantiate the fiction more fully, while also continuing to move the experience into weird areas like telerobotics and cybernetic systems!

Thanks, Jacob!

Further information: Integrated Security Automation, Inc., UCSC DANM ground (ctrl), Jacob Garbe.

IBM supercomputer “Watson” to appear on Jeopardy

…just don’t count on him to open the pod bay doors:

Technically speaking, Watson wasn’t in the room. It was one floor up and consisted of a roomful of servers working at speeds thousands of times faster than most ordinary desktops. Over its three-year life, Watson stored the content of tens of millions of documents, which it now accessed to answer questions about almost anything. (Watson is not connected to the Internet; like all “Jeopardy!” competitors, it knows only what is already in its “brain.”) During the sparring matches, Watson received the questions as electronic texts at the same moment they were made visible to the human players; to answer a question, Watson spoke in a machine-synthesized voice through a small black speaker on the game-show set. When it answered the Burj clue — “What is Dubai?” (“Jeopardy!” answers must be phrased as questions) — it sounded like a perkier cousin of the computer in the movie “WarGames” that nearly destroyed the world by trying to start a nuclear war.

This time, though, the computer was doing the right thing. Watson won $1,000 (in pretend money, anyway), pulled ahead and eventually defeated Gilmartin and Kolani soundly, winning $18,400 to their $12,000 each.

“Watson,” Crain shouted, “is our new champion!”

It was just the beginning. Over the rest of the day, Watson went on a tear, winning four of six games. It displayed remarkable facility with cultural trivia (“This action flick starring Roy Scheider in a high-tech police helicopter was also briefly a TV series” — “What is ‘Blue Thunder’?”), science (“The greyhound originated more than 5,000 years ago in this African country, where it was used to hunt gazelles” — “What is Egypt?”) and sophisticated wordplay (“Classic candy bar that’s a female Supreme Court justice” — “What is Baby Ruth Ginsburg?”). (New York Times)

Video here.

Crucial Juncture for Blue Brain

The Blue Brain project is now at a crucial juncture. The first phase of the project—”the feasibility phase”—is coming to a close. The skeptics, for the most part, have been proven wrong. It took less than two years for the Blue Brain supercomputer to accurately simulate a neocortical column, which is a tiny slice of brain containing approximately 10,000 neurons, with about 30 million synaptic connections between them. “The column has been built and it runs,” Markram says. “Now we just have to scale it up.” Blue Brain scientists are confident that, at some point in the next few years, they will be able to start simulating an entire brain. “If we build this brain right, it will do everything,” Markram says. I ask him if that includes self-consciousness: Is it really possible to put a ghost into a machine? “When I say everything, I mean everything,” he says, and a mischievous smile spreads across his face. (Seed magazine)