An Uncomfortably Exciting Year

November 08, 2017

I am loath to add yet another “my election night 2016 feelings” essay to the still-smoldering pyre of funereal takes. But there were other things I wrote in 2016 and didn’t share that, a year later, might be worth going back to. Anyone who isn’t directly named in this can be easily identified by anyone with enough knowledge of this space, but I don’t particularly want my personal reflections to become fodder for personal attacks on people who probably weren’t looking for a fight with a relative stranger.


A few days before the 2016 election I got on a plane to Paris to stand in a room I had no business being in. Someone with a weird sense of humor or a desire to cause trouble recommended me to some very nice people at Google who, on pretty short notice, invited me to attend something called the Arts and Machine Learning Summit at the Google Cultural Institute, a Paris-based arm of the company that I frankly didn’t think was a real thing when I first received the invitation.

It’s been hard for me to take most discourse about machine learning (either its promising potential or its discontents) all that seriously, in part because I am instinctively skeptical of any emerging technology where it appears that everyone working with it has entirely too much money. I’ve been hesitant to really engage with the hype cycle around AI for the same reason, and also because of something Ella offhandedly said to me a few years ago–”well, corporations are basically a form of advanced artificial intelligence that we’ve been living with for years.” To me, this is kind of the actual blood-and-guts problem with most of the hand-wringing over the social implications of AI. If we don’t talk about what’s fundamentally broken with capitalism and corporate power structures, we’re never going to address what’s politically broken in either the business or creative applications of this technology. If we can’t meaningfully regulate or govern Google or Facebook’s tax offshoring, why do we even think they’ll care about governance for automated systems?

So when perhaps the most sentient corporation of them all offered me a transatlantic flight and accommodation just to be in the audience of an event specifically about this field I’ve been vocally skeptical of, I was pretty confused and ambivalent. Ultimately, I went to this Arts and Machine Learning Summit partly because I just wanted to understand what the hell the Google Cultural Institute actually was, and because I doubted I would ever get invited into this belly of the beast under other circumstances. I also doubted there would be many more circumstances for invitation, seeing as mid-2016 was a banner year for Alphabet’s moonshot killing spree. The brief glimpses, between slide deck setups, of a desktop wallpaper for the “Google BrandLab” were a reminder that in all likelihood the folks at the Cultural Institute wear many hats, and conversations with other attendees and further reading suggest that its main purpose is to act as a source of soft power in Europe as Google continues to establish itself in countries that are far more skeptical about the company’s aims.

Based on two very jetlagged days in Paris, I’m not sure I got a better idea of what the future of artists using machine learning looks like. I do know that Google wants to be the one who decides what that future looks like. Mostly the whole event made me think a lot about how Google as a company is trying to position itself as a cultural and social actor in this space, and it made me curious about the topics that remain absent in discussions of ML and culture. Looking back on it a year later, it also serves as a weird object lesson in how steadily large platforms have managed to maintain a stance of plausible deniability about the adverse impacts they have on political life and civil society.

The details I pay attention to in situations like this sometimes make me feel like I’m missing the point of the gathering. But setting the scene, and the representations of taste and wealth that pervaded it, seems important here. Or maybe I just don’t want to be the kind of person who takes these details for granted, just as I don’t want to take the privilege that lands me in these spaces (for now) for granted.

The Cultural Institute is in a fairly small but well-appointed building, part of a larger Beaux-Arts compound that holds other Google offices in the heart of Paris. The casual, living-room-style entrance featured tasteful, modern LED-and-wood chandeliers that, I am just guessing, cost more than three months of my rent. There were almost no visible trash receptacles throughout the space–presumably because there was staff on hand during the event to pick up after attendees, but maybe also because the idea of waste or useless things was contrary to the entire aesthetic. (Naturally, I spilled a cup of coffee all over the very pristine white floor.) There were lots of screens, lots of art on the screens, primarily made with the machine learning tools built by Google and used by the Cultural Institute’s Artists in Residence (who tend to make very Google-y art using Google tools–inoffensive, blithely fun, not necessarily groundbreaking, a lot of stuff that would probably be well-suited to a museum kiosk).

At the same time, there were a number of faked-expensive gestures and hints of Ikea-cheapness that offered a reminder that Google’s Arts and Culture division probably doesn’t have endless money to burn. The talks took place in a long, narrow corridor of a room that seemed to be optimized for video documentation, which would make the event look much larger and better-attended (looking at photographs on the Cultural Institute website seems to confirm this). The corridor connecting the event space to the main entrance was a gleaming, pearlescent white, the kind one might walk through while wearing white knee-high boots in a 1967 depiction of a space cruise ship. Upon closer inspection, this effect was achieved by buffing down poorly applied plaster on white walls and adding strips of red, green, and blue LEDs above them.

Attendees signaled their own wealth in various ways as well. The fashion trended toward Conspicuously, Expensively Casual, a hard-to-place but impossible-to-miss anti-brand aesthetic favored by people who might capitalize the word Creative on their LinkedIn profile. By people who have and maintain LinkedIn profiles. People who could complain that next time they’d book their own flights and just get reimbursed, because, well, they could afford to do that. Of course, talking about money (or its absence) is generally frowned upon in expensive spaces, especially art-related ones. Money, and how Google makes money, came up only rarely and obliquely over the course of the conference.

Ultimately it’s hard for me to look at events like these without imagining them via postapocalyptic hindsight, heavily colored by William Gibson’s imagined end of the world in The Peripheral. Someday, the participants of this conference would look back and recall how they tried to find a path toward a better future, but that unfortunately gathering for overproduced recorded talks in expensive corporate settings was somehow not enough to prevent the systemic violence of climate change, antibiotic failure, and total war from killing 80% of the world’s population. Those who survived into this remarkable new future would be sad, and sorry, but would point to the Important Conversations they were part of–if only power had listened! At least we steered this technology that now only serves the survivors in the right direction, they might say.

When I imagine these moments of looking back, I’m not sure if I’m among the people reminiscing. Given the privilege I currently hold, I’m better positioned to be in that survivor’s future than a lot of other people. I don’t really know if I’m mercenary enough to survive like this.

In an opening keynote, a key figure in Google’s Art and Machine Learning team kind of set the tone for this postapocalyptic perspective when he described the moment we live in as “uncomfortably exciting times.” Our keynote speaker made an effort to prove his earnest woke credentials, demonstrating that he was not merely a Clueless Tech Man; he quoted Deleuze and Guattari and he spoke sincerely about the need for diversity (that being said, he didn’t say much about the diversity of his own team, so it’s unclear whether he’s really practicing what he preaches). In an attempt to reinforce his case for greater diversity, he proceeded to make an argument about gender fluidity that somehow concluded with him telling the audience that “in a way, we are all queer”, and also, maybe, X-Men (it got kind of confusing here). He probably meant well. Men like this always do.

While I was kind of encouraged to know that Google keeps around people smart enough to at least pay lip service to the limitations and potential harms of the technology they’re building, I also thought about Ella’s offhand comment about corporate AI and wondered at the business incentives for a company like Google to normalize, monetize, and subsume transgression (and, in this case, apparently queerness and X-Men) into a paradigm of compliance. The phrase “uncomfortably exciting times” would ring bitterly in my ears days later as I marched up Fifth Avenue in a large post-election protest outside Trump Tower. Months later, a friend would text to inform me that Congressional testimony indicated said march had been called by a Russian propaganda Facebook page. Even catharsis is merely another genre of clickable content to be manipulated.

When art historian Robin Oppenheimer, on a panel about the long history of artist-technologist collaborations, offered her full disclosure that she was currently being paid by Google to work on a book about artist-technologist collaborations, my postapocalyptic-hindsight perspective felt slightly more justified. The summit wasn’t merely an attempt by the Google Cultural Institute to justify its expensive existence to the mothership, or a pretext for a bunch of well-heeled tech artists to hang out in Paris. We were part of the production of an art hagiography of Google, a company placing itself in the paradigm of Bell Labs circa the Experiments in Art and Technology era. We were there to perform the role of future art-historical documentation of a particular scene of artists and technologists working together, like archival photos of Klüver and Merce getting ready for the Nine Evenings.

I don’t say this to entirely knock Google–if there’s one thing that artists working with technology are bad at, it’s documenting their own history, and we could absolutely use the help. And large piles of unevenly distributed money making their way to artists is sort of the entire history of the Western canon. But I question who and what gets to be part of this particular history, and whose agendas are served by its construction. This was maybe best exemplified by the moment when Doug Eck from Magenta compared his work to that of electric guitar inventor Adolph Rickenbacker–in contrast to artists like Jimi Hendrix who unleashed the guitar’s potential, Rickenbacker “merely” created the vehicle that allowed Hendrix to create. (This is, I later learned, one of Eck’s preferred talk chestnuts.) Considering the number of Google commissions and Google-driven projects seen during the summit, this argument seemed a bit like the Medicis being both the patrons of artists and the primary manufacturers of raw pigment and paintbrushes.

While Eck conceded that the Magenta team could not know what a real creative breakthrough made by an artist using their technology might look like, it was clear that Google very much wants to be at the center of any creative breakthrough associated with machine learning, if not to monetize it then to canonize themselves. Or maybe canonization is the best way the Arts and Culture team could conceive of justifying Google’s continued investment in their efforts (or of making their efforts complementary to the entirely separate Art and Machine Learning group–the conference may also have been a weird attempted detente in an internal Google turf war). Monetization didn’t really come up all that much.

The limited mentions of advertising or monetization maybe speak to the paradox inherent in Google supporting art and machine learning. In general, the thing that makes really good art interesting is its ability to convey something unexpected or poetic, which can only happen when the artist gives up some control over the outcome–and the most interesting ML-based work shown at this event did exactly that. Business applications demand reliability, accuracy, and control–forecasting quarterly results, knowing the probability that you’ll buy something, personalizing an interface to your needs. The art in some ways just acts as a proof of concept, or a relatively cheap approach to R&D (assuming artists work for less money than engineers, which they usually do), to improve tools for marketable industry applications. (It’s also probably worth noting here that the most successful output of the Google Cultural Institute thus far has been Google Cardboard, which is arguably the most strategic approach to normalizing and expanding VR media–advantageous both to broke artists and to Google, in pursuit of new markets.)

There were other notably absent themes. Climate came up obliquely, primarily as an abstract Big Problem to be worked on, or a problem our historical models might no longer be able to forecast, rather than something that, say, the insistence on a future more and more reliant on complex computation at scale might actually exacerbate. (To their credit, Google has done substantial work on improving the energy efficiency of their own data centers using machine learning, and invested heavily in adding renewable energy to power grids around the world–I remain unsure if building systems requiring that much energy in the first place is a worthy pursuit, but perhaps that is not the hill to die on.) Hito Steyerl offered the only sincere (and sincerely enjoyable) references to occultism (which, admittedly, I just always want to see more of, in any context)–an odd absence, given that the most well-known experiments in machine learning and art from Google are the neon-eldritch surrealism of DeepDream puppyslugs. The looming 2016 election hovered over conversations mostly as an “uncomfortably exciting” vector, but there was still a glimmer of hope, or at least enough denial to look like hope.

In the year since I was told we live in “uncomfortably exciting times” it seems like neither the discomfort nor my skepticism has lessened, but there have been some meaningful shifts and developments. US regulation of giant platform companies seems a far more conceivable goal now than it did a year ago, though most regulatory frameworks conveniently dodge the question of whether it would be better for everyone to simply abolish rather than triage these business models. There appears to be a much more honest conversation within these massively ungoverned systems about what it is they’re doing. But in the past year I have glimpsed more well-appointed-but-not-too-well-appointed rooms like that one in Paris, more invitation-only summits, more funding and more thinkpieces and more bad demo art and some better, less demo-y art, and literal fascism showing its face in a way that well-meaning centrism can’t dismiss anymore. While there are fits and starts of hope, it still feels like the struggle is to accommodate these hulking ancient AIs of corporate systems such that they might crush slightly fewer democratic norms under their weight but still continue to prosper. Which, maybe there’s a better metaphor here about appeasing old gods, about burning down the temple.
