Q & A: Jaron Lanier, author of You Are Not a Gadget


      There are plenty of people raising warning flags about the adverse cultural effects of the Internet, but few of them have the credentials of Jaron Lanier. Back in the digital era’s Cretaceous period, in the 1980s, the New Mexico–born Lanier was one of a small group of pioneering Silicon Valley programmers working on virtual reality and other technologies that bordered on science fiction. He was also a founding contributing editor of the hugely influential tech magazine Wired.

      He’s worked in the field ever since, designing sophisticated applications for university networks, as well as computer simulations for medical training and research. But as the years have passed and computers have shaped more and more of our lives, Lanier has become increasingly wary. His new book, You Are Not a Gadget (Knopf, 209 pp), is a manifesto that sets out a searing critique of Web idealism.

      Much of it is aimed squarely at what’s sometimes called Web 2.0—Wikipedia, Facebook, YouTube, and other famous and widely used sites that rely on user interaction and information-sharing. Lanier argues that the effect of these Web-based phenomena on who we are and how we see ourselves is often dehumanizing.

      “Anonymous blog comments, vapid video pranks, and lightweight mashups may seem trivial and harmless,” he writes, “but as a whole, this widespread practice of fragmentary, impersonal communication has demeaned interpersonal interaction.”

      The Straight caught up with Lanier by phone during a brief visit he recently made to Toronto.

      Georgia Straight: You argue in the book that the design of the information systems we now use constantly—especially the design of social networks and other examples of what’s called Web 2.0—alters basic things about us as humans, and not always for the better.

      Jaron Lanier: What I’ve observed is that slight changes in technology can really change the way people behave and the way they conceive of themselves. For instance, if in Facebook you’re given a set of categories to choose from that describes something about you or your life, like romantic status or something like that, you tend to start thinking of your life in terms of that system, because you’re interacting with everybody via it, you’re planning things via it. So at a certain point it just becomes real for you, even if otherwise you might have thought about things slightly differently. And the slight differences are really where the core of meaning is. So these slight adjustments shouldn’t be discounted.

      If I can give you one other example, one of the ways that behaviour modification is being most effectively offered on-line is that we dole out little dollops of usability or ease of use. So, for instance, right now, if you’re a programmer, if you’re digitally skilled, then you do have a shot at controlling your privacy settings on Facebook. But if you’re a normal person, if you’re not technical, you really don’t.

      And so people change their ideas about privacy rather than having to learn to be programmers, given that choice. In other words, if you just tell somebody, “Hey, why don’t you give up privacy so this company that’s operating like a spy agency can gather all the information about your life in order to help advertisers reach you better?” they’d say, “Are you kidding? No way.”

      But if it’s given to you as a choice—like, “Either learn to be a programmer, so you can control it, or just accept that you’re going to have less privacy, and that you’re going to grant all this power to this other company”—you know, if it’s put as a choice that way, people say, “Well, I don’t want to learn to program.” So all of a sudden their standards change.

      GS: This is a gradual process, then, so that you don’t really notice your own attitudes shifting in these ways.

      JL: Right. Well, actually, more to the point, it’s as young people come up—it’s a generational process. And we’re presenting a set of choices to a younger generation as if they’re normal.

      Furthermore, we’re telling that younger generation that they should identify with those choices, so that anybody who disagrees with them should be treated as an old fogey, when in fact the people pushing those choices on them are as old as or older than anyone who might criticize them. In my case, they’re the same age, because the people who are pushing it are my old friends. It’s like one circle of people who are both opposing it and pushing it.

      GS: One way you sum up your argument about Facebook is in the idea that “information underrepresents reality” by turning us into these fields of factoids. But isn’t that underestimating the service’s users? Isn’t there enough of the old, “personal” part of their identity that they recognize there’s a difference between who they are and these lists of facts?

      JL: It depends how old they are. My observation is that there’s a generational divide, and it favours older people [laughs]. So if you are old enough to have a life—you have a job, say, or you have a family, and you have a persona developed as an adult—well, then, I don’t think Facebook is going to change that.

      Let’s talk about a younger person—let’s talk about a 16-year-old. There, the problem I see is that the relentless precision of the database representation of you on Facebook presents a couple of challenges that make what was already the difficulty of being a teenager really, transformatively different.

      One thing is, now, 24 hours a day, your persona is vulnerable to losing status, so you have to tend to it all the time. It’s like you’re always running for president. You’re always Obama, having to be careful of every little thing you say, from when you’re a kid. And it just seems ridiculous. But that’s part of the dynamic on-line, where status groups form and so forth—and, I mean, that’s just human. It’s always been true for teenagers. But by turning it into this clinical digital thing, you can never escape it.

      You never get those breaks from it, or those cracks in time when you can reinvent yourself. And in particular, you can’t move to a new place and forge a new persona—you’re not allowed to do strategic forgetting as part of personality formation. And that seems to me to be a huge loss, and in a way infantilizes people.

      Now, when I’m talking about this stuff I always rush in to point out—just so nobody else has to—how in a sense awkward and unfair it is that I’m talking about these things, because I’m not 16 anymore, by a long shot. And you can say what right do I have to judge them, and that’s correct. I don’t, and I’m not really judging anyone in particular.

      But I do see this pattern so strongly in the many younger people I talk to that I just feel like I ought to say that I’m seeing it. Ultimately, it’s up to them to judge themselves, and I’m certainly not in that role. But I still want to point it out and see what they make of it.

      GS: Others in this debate like to point out that technological revolutions are always condemned for personal or moral degradation—that books were seen this way after Gutenberg, and then newspapers were called a corrupting influence, and then radio and then TV, and so forth. What do you make of this as a response to what you’re saying?

      JL: It’s a bunch of nonsense. So, like, there’s one true technology path? That’s ridiculous. It’s a thing that’s used to make people who aren’t technologists feel inferior somehow, or afraid. I don’t have that problem, because I invented a lot of this stuff.

      So I would say just exactly the reverse: that if we’re going to invent a new world and then refuse to believe there could be any problem with it, if we’re going to believe that whatever we did must have been perfect on the first pass, then we must surely be idiots.

      GS: What motivates the passion to believe in this way?

      JL: I think part of it is that, when we first came up with a lot of the dogmas that didn’t work out—the open-culture stuff and all that—a lot of it was to fill a void that had opened up in a lot of people’s hearts and minds when the Soviet Union fell and the sort of neo-Marxist-lefty thing just really seemed like it had failed. And in a lot of ways, I think, people need some sort of a structure to believe in, and young people need something new to believe in that’s, like, some better way than the world they’ve come into. And they should have that.

      But I think, for us and for a lot of the people who came after, it filled the need for this set of ideas to believe in, and it became part of our identity and part of a bonding force between friends and fellow travellers. So the ideas are more than ideas—they’re part of an identity. And it’s very hard to change one’s mind when ideas are more than ideas.

      I hope this doesn’t come off as arrogant, but I think a lot of people, if they really consider the evidence from the world and look at what’s happening, they’ll agree with me, at least to some degree, that we need to rethink these things.

      GS: There’s this constant characterization of these open-culture developments on the Web—wikis and so on—as creating a kind of digital democracy, promoting plurality of opinion. But you argue that they have the opposite effect in many cases.

      JL: If there’s going to be some complex new idea, which might be a new novel or a new law or a new piece of music, instead of just opening it up totally so anybody can edit it while it’s in process and everybody always has equal access to it, you can give people some time to work on it, test it, work out the kinks, before they present it. And that allows them to build something with more subtlety, complexity, and something that has more of a voice and more of a statement, that’s more whole and more valuable than would be the case if everything’s absolutely, totally open all the time.

      I’ll give you a couple of examples that are very different. One is in the scientific community. Obviously scientists have to publish at some point, or they perish, as the saying goes. But before they publish they have some time to get their ducks in a row. They have to get the data, they have to analyze it, they have to go through a review process. So by the time it’s published, it’s at least decent.

      But here’s an even better example of exactly the same concept, which is life on Earth. There was an ancient time when there was just a big gloop of organisms sharing DNA. With the invention of the cell wall, nature created the possibility of making something more complex, where a particular combination of genes could be tested for a period of time, so that the result could mean more than just a constant, random recombination. And that’s what led to the explosion of complexity and subtlety and beauty, instead of just a mass of slime and oozes. And essentially we’ve created an Internet that encourages people to create slimes and oozes.

      GS: At one point in You Are Not a Gadget, you express disappointment that the best that Web 2.0 has come up with is Wikipedia, a copy of the Encyclopedia Britannica.

      JL: It’s not precisely that. I mean, I try to give a balanced presentation in the book on Wikipedia, which is not entirely condemnatory. But fundamentally it is true. I think the way I put it in the book is that if 30 years ago I’d said to people, “Do you realize that in decades to come computers will be millions of times faster, not just thousands but millions, and the great prize humanity will gain as a result of that is a new encyclopedia and another release of UNIX?”—that would’ve just killed interest in digital stuff right there. That would’ve been enough to just knock out the whole process. But, you know, we’ve sort of slipped into it.

      And oh, God, the Wikipedia. There’s nothing that can make any topic boring more quickly than the Wikipedia, except for certain pop culture things. It’s just like this generic, bland statement about things that isn’t for anybody.

      GS: Elsewhere in the book you contend that, in the face of this whole process, figures such as professional artists and musicians and filmmakers are becoming what you call the “peasants” of the digital economy.

      JL: The new order is based on the idea of using computers to gather data on everybody and then manipulate the world and pull power out of it. So if you own the giant server at the middle of everything, then you can become arbitrarily wealthy and powerful. And everybody wants to be the most meta, which means getting closer and closer to owning that castle in the centre—and I do tend to use feudal metaphors here, because I do think they’re the ones that match.

      So there are three kinds of servers at the centre right now. One runs the hedge fund, another runs the NSA [the U.S.’s National Security Agency], and the third is Google. And they’re all astonishingly similar. I mean, they’re barely distinguishable, if you just step back for a moment and think about what they do and how they operate.

      That’s not to say there shouldn’t be such a thing or that they’re all bad. I personally think the intelligence agencies probably have to exist, and I accept that the world can’t be perfect. The hedge-fund ones do not have to exist and I think they should go away. And the Google one, I think, just needs a different business plan, because it’s currently on a path that’s leading it more and more into a sort of negative impact on the world.

      So the idea of becoming a peasant—it’s not just artists and musicians. Eventually, it’s everybody. The thing that we have to realize is that as technology gets better and better, either we enter into a future where you have some way to make a living from your heart and your brain more and more as a result, or you enter into a future where you have a way to make a living from your heart and brain less and less.

      The current open-culture idea is that, sure, you have to give away the music, but you can sell the T-shirt or something. So you move away from intellectual or artistic expression as a way to make a living directly, and instead you move to something more physical, which might be travelling to give live gigs in one case—and there are problems with that, but at least it’s something—or selling T-shirts or something. But essentially we’re telling you to do what the Maoists told the artists and the intellectuals, which is to crawl back toward the field, to make your living in more and more treacherous and physical ways, instead of in more and more elevated ways. So that’s why I call them peasants, because essentially we’re giving that message that you should become a peasant.

      But the musicians and the journalists are just the canaries in the coal mine—they’re just the first wave. Eventually, as technology gets better and better, it encompasses absolutely everything that everybody might do with their brains and their hearts…. I mean, anything you do with your brain is vulnerable to this. For civilization to survive, we have to make more ways for people to make a living from their brain, not less.

      GS: It seems in some ways—especially to people outside your field—that technology always feels like a juggernaut and you can’t resist it. It seems any concerns about the downsides bring a response of “Hey, that’s just the way things are going.”

      JL: Yeah, well, this is very similar to this argument that “Oh, if you don’t like exactly what we’re doing, you must have hated Gutenberg and you would have wanted to throw Darwin in jail, and you’re this horrible, backward whatever.” And ultimately technologists should be servants to the species as a whole, or what are we doing otherwise? So, not to include people, to give them this feeling of helplessness, is unethical in the first place. But then, also, it’s just false. It’s just power-mongering—it’s just a way to try to manipulate and befuddle people.
