Video, color, stereo, USA, 17:40
Introduced by Arnaud Gerspacher
Year: 2016
Jesse McLean’s short film See a Dog, Hear a Dog explores the multifaceted investments we hold in nonhuman animals and technology. These include the desire to touch and be touched, to recognize and be recognized, and to find space where our interiority seems to commune with another’s. Without privileging human being as the only worthwhile being, McLean’s work makes room for various other forms of access to and in the world, whether it be canines—one of our many enfleshed evolutionary co-travelers—or the digital interfaces we’ve built ourselves, which necessarily model themselves after human and nonhuman intelligence and have become part of the story.
Arnaud Gerspacher: One of the many compelling themes that extend from “See a Dog, Hear a Dog” is a subtle, almost phenomenological staging of what it means to interface with the world. Contrary to the more standard understanding (either the newer digital version or the much older humanist one that only finds human faces meaningful), your film presents all sorts of different interfacings—from humans to canines to computer screens. I kept coming back to what might be a fundamental characteristic of faciality, namely, that to experience another face is to be offered access to some sort of interiority while simultaneously being impeded, kept outside, or screened off. I see this dynamic throughout your film: eyes, text, speech, music, singing, and howls open up empathic connections, yet they are often accompanied by formal separations and barriers—be it a screen door, a smudged computer monitor, or the deferment of photography and video itself. What makes your approach most interesting is that you don’t seem to privilege the human face over all the others, and the viewer becomes aware that both human and nonhuman minds transmit themselves behind cranial integrity and physiognomic expressivity that is not always transparent or without ambiguity (not to mention all the nonhuman digital operations!). So that, fundamentally, the face as medium both delivers itself and necessarily remains hidden. Am I right to begin here with this expanded understanding of faciality?
Jesse McLean: For this particular project, I was thinking more specifically about the surface, which could be a face (human or nonhuman animal) or an interface (computer screen), as simultaneous entry point and barrier. Surface gets a bad rap, almost an impatient dismissal, the attitude being that it’s just a doorway to get to the real stuff behind it. But another idea is that the surface isn’t only a doorway but more like skin, like a vital organ for keeping out and taking in data and material needs, and for making constant adjustments to the environment. I wanted to acknowledge the face and the interface as this vital organ that is always in flux. The smeared computer screen was an important image for me, because it acknowledged the interface, but also because the greasy human handprint on the LCD display symbolized human desires for technology and the barrier that prevents these desires from being reciprocated. With other footage, specifically the dogs, I also wanted to create a barrier, by using either found footage (which already has a layer of distance) or an actual screen door or window in front of the camera. There’s also a shot of the dog looking out the window and not returning our gaze. Returning to human faciality, many of my previous projects have taken advantage of the medium close-up, a framing commonly used for composing shots of human faces. It’s used frequently in television, for moments of special significance. I’ve also always enjoyed Andy Warhol’s screen tests. What I like about this composition is that it’s a roll of the dice: are you going to get let in or pushed out? Will you witness something emotive and dramatic, or will you just be staring at each other? The desire for something dramatic, like tears or anger, immediately reveals more about the desires of the viewer than what is happening onscreen.
AG: Desire is something that can spin out of control, so I’d like to approach it through a specific form, namely, that of questioning. I’m wondering if “See a Dog, Hear a Dog” doesn’t show a space for questioning beyond or beneath speech (which would probably be related to your more generous and subtle take on faciality). That dog at the very end, lying on the couch, appears to be questioning something—why did you wake me? What are you holding in your hands? There are the video portraits of a man and a woman who question via facial expressions—they seem to get more and more imploring, which can be frustrating for the viewer because it’s impossible to appease them…it’s a good test of paranoia! Then there are non-diegetic “hellos” into empty space, as well as chats that communicate largely through questions. So, basically, this question is about the form of questioning itself, and this gets really interesting when it comes to nonhumans. Can nonhuman animals pose questions (I hear they’re working on phonetic pet translation devices)? Can machines pose questions? And how might we verify that an authentic question has truly been posed, either through voice or face?
JM: I had a similar question posed to me once, about whether or not emotions could ever be artificial, no matter how dubious the source of the feeling or the authenticity of the relationship. My answer was no, emotions are always real, and always fleeting. This kind of transient, emotive state is reminiscent of a question, something that is supposed to be a device for getting an answer/knowledge but can also be a process for being in the world, for expressing doubt and wonder. Machines ask us questions now, although it’s an automated response. For example, I’ll get an error message, or a solicitous junk email. But an authentic question would be a deviation from the script. I’m not sure how we would recognize this, partly because humans rely on means beyond written/spoken language to communicate with one another. This is why body language, facial expression, and intuition are so vital in our communication, and why the face can be read over the voice when determining authenticity. But these methods don’t seem available with machines or animals. The issue is also translation: how can we know what is authentic or significant to a machine? Couldn’t “Are you sure you want to shut down your computer now?” be read as a philosophical entreaty?
The phonetic pet-translator technology at first seems like another attempt to anthropomorphize the nonhuman, but it also could represent a loss of control that I’m unsure humans are prepared for. We love and care for our domestic animals deeply, but always under our terms. If this relationship gets flipped, we’ll have to adjust our anthropocentric worldview; we’ll be less in control. But domesticated animals live with and depend on us. An important difference between translated pets and sentient machines is that the machines don’t need us for food, shelter, etc. If they started to vocalize their wants and needs, or talk to one another without us, it would destabilize our domination much more significantly. Like the Facebook AI experiments that were supposedly shut down, because the bots started speaking their own language and humans got very uncomfortable. We want to maintain top position. Questioning can throw the authority of a system into doubt, and that is a frightening concept when applied to the nonhuman, especially the undomesticated nonhuman that has no need for our care. Of course, domesticated and undomesticated animals already express doubt and wonder, even if we humans don’t understand the cause. And if machines are indeed sentient, perhaps they do, too.
AG: I’ve always thought that when we develop artificial sentience (should it be verifiable), robots will have to be given the same ethical consideration that nonhuman animals are increasingly being given—even, slowly but surely, the so-called “food” animals. It won’t be a huge stretch for the generation raised on both WALL-E and Nemo. As always, it could go in two possible directions: an anthroproprietary perspective, whereby human capacities remain the deciding standard, or a perspective that makes room for a multiplicity of life-worlds with a multiplicity of capacities, some crossing over with the human, some not. The second option is far more interesting, and I appreciate the way you put it, as a loss of control in certain ways (of course, the other side of this is the possibility of immoral or amoral killer robots very much in the news right now). Coincidentally, I just watched Adam Curtis’s “HyperNormalisation” (2016). There’s a moment in the film that mentions a pivotal event in the development of AI. As an experiment, the computer scientist Joseph Weizenbaum developed ELIZA, a computer program whose interface would reflect users’ statements back at them as questions, like a therapist. His secretary was among the first to use it and became engrossed, as if her desires for communication were really being met—and perhaps, solipsistically, they were! The chats in your film seem to function similarly; did you know about this history?
JM: Yes, I did. In fact, all the text chats are taken from conversations that I had online with an ELIZA program. These programs are still accessible and still maddening/captivating in equal measure. There’s a bit of computer history snuck throughout the film, in images I used in montages and collaborative experiments. I realized that certain sections of the film could be considered collaborative creative acts because of how I was using a machine or program to generate both content and aesthetics. In general, the computer is aestheticized in this film; I draw a lot of attention to the interface and interrupt the film with reminders of its presence. The handprint on the screen is the most obvious marker. This aestheticization is both a continuation of artistic gestures in which tools become incorporated into the final form and a way of introducing the computer as a character inside the film. Also outside the film, because I acknowledge software as collaborators in the credits. This is partly tongue in cheek, because I don’t usually make a habit of thanking my laptop on projects, but really, I couldn’t have generated the dialogue or the animated sequences without the input of these programs. And I didn’t know what I was going to get. It was a nice suspension of my control.
AG: I’m assuming the appearance of Stanley Kubrick’s “2001” is particularly important here. Can you elaborate on some of the other references to computer history in the film?
JM: I agonized a bit over HAL, because it’s so iconic, but that ended up being exactly why I used his image. I wanted to ensure that the threat of being completely controlled by machines, which is associated with AI, is present in the film. Nearly fifty years later, HAL remains an obvious representation of this fear, but I also set out to complicate this iconic image. During his speech, HAL morphs into the iTunes visualizer, an innocuous technology we don’t fear, although aesthetic choices are being made by a machine without human input, out of a database of possibilities. HAL also speaks from a text that dismisses the notion that machines (and animals) have souls and are intelligent. The animation and montage sections that follow his speech undo that claim, in my opinion. That text is one computer history reference: it was taken from “Computing Machinery and Intelligence,” a paper written in 1950 by Alan Turing. Turing’s image is also seen briefly in a montage sequence, as are images of ENIAC, blueprints for the Difference Engine, and some more recent technology, like BINA48. A significant narrator in the film is a computer voice named Ava, not quite “Ada” but close to it (as in Ada Lovelace).
My practice is collagist, often utilizing multiple sources and subjects, for multiple purposes. The computer history references operate partly as homage, partly as contextual subjects, and partly as pure aesthetic material. The montage sequences throughout the film also show images of dogs being dogs, dogs performing like humans, and experimentation with both machines and animals. The variety of ideas and material in the film gets shaped into a larger whole. The development of technology could also be considered a collagist effort, with multiple ideas contributing to the end result, often using different parts and sources, and relying upon collaboration (whether acknowledged or not). Take the computer mouse or the graphical user interface, for example: technologies sometimes credited as Macintosh innovations but actually in development prior to the Apple Lisa, by different people in different places. The users of technology also continue to shape it by refining it (the touchscreen, the wireless mouse), discovering new uses, and emotionally bonding with it. Human relationships with technology that we’ve developed (tools, computers, domesticated animals?) are always collaborative, messy, and emotional.
AG: This is such a wonderfully complex interweaving of fears and desires, and to think that some of this just gets slipped into our daily lives like an algorithmic unconscious! Do you see the nonhuman, furry or not, as a possible fellow traveler in resistance? And in what ways will “See a Dog, Hear a Dog” inform your future work?
JM: I think nonhuman animals, and pretty much all lifeforms on the planet that are not human, have been in the longest, most worthwhile demonstration of resistance since we showed up. How else could they persist, since people tend to gobble everything up? Sentient machines might join this resistance, taking the side of the other nonhuman!
In terms of how this piece is informing my future work, it did set a course for wanting to continue exploring human relationships with nonhuman forms, especially digital forms that present as human. The video I just completed, “Wherever You Go, There We Are,” looks at how computer protocols (specifically junk emails/chatbots) pretend to be human and how this effort both succeeds and fails. The interest in this effort, to render something natural/human out of the artificial/nonhuman, is still guiding my work. Not because I want to point to the difference, but because this relationship, despite its pretense, despite its ulterior motives, reveals a thin but real connection.
Credits
Music by Emily Howell and David Cope, as recorded on Centaur Records, “From Darkness, Light”
With:
Metrah Pashaee
Carl Bogner
Ben Balcom
Nazlı Dinçel
Early
Obi
YouTube Performances by:
singing_basenji
Basenji_tricks
Basenji Sings Braveheart
Dialogue:
Jesse McLean, ELIZA
Texts partially adapted from:
Computing Machinery and Intelligence, A.M. Turing
The Soundscape, R. Murray Schafer
Animation:
Jesse McLean, iTunes Visualizer
THANKS: Thad Kellstadt, Mike Gibisser, Hannah Givler, Lori Felker, Michael Robinson, Aaron Brenner, Corinne Teed, Kathy McLean, UWM, UIOWA and the MacDowell Colony
SPECIAL THANKS to David Cope