I look forward to this Correspondence. We have only met briefly, in passing, at a conference on “Theorizing the Web” that took place at the International Center of Photography exhibition space in New York on the Bowery. At that time it was a pre-museum, a raw space with empty walls and folding chairs and, as I remember, a small area for children to play. In the physical emptiness, the echoing rooms, it seemed there was the psychic space in which new thoughts could be imagined, new ideas presented. Time had a chance to reassert itself in its passing, less interrupted by the beeps of cellphones and the nearly constant rain of emails. It felt like a cave Plato might have inhabited, where the images were kept indistinct and at bay, and imagination was, for once, to be called into play.
This is similar to how the Web felt to me in its earliest days in the 1990s, when we went from the non-visual Internet to the Web with its capacity for photography and video, limited at the time by slow telephone modems. The Web, rather than being fast-loading and multitudinous, a consumerist fantasy in which every link would provide new, if not always interesting, delights, seemed for a moment a manifestation of human possibility that could be made widely accessible, in which meaning could be elicited slowly, more like reading a book on Web “pages” or contemplating a painting than the speedy attention-deficit electric surfing that would soon be enabled via cable modems and wifi. It was an era of stand-alone CD-ROMs in which experiments in hypertext were being conducted, poems and artworks produced that had to be read in non-linear ways, authored by individuals who took responsibility for the pathways that would emerge and open. It was not like watching TV then, channel surfing, but rather more like looking at a specific work or publication and spending time dedicated to exploring it in some depth.
The speeded-up Web that followed, dominated by content-management systems, in which templates predominated over the idiosyncratic visions of an author, seemed like a fall from grace. Suddenly the mentality of the corporate brochure, the rigidity of a supermarket with its aisles and departments, began to efface or at least diminish the taste and the vision of the individual. Certainly there were always exceptions, artists creating sites that had far more to them than a manifestation of a template, but for the most part the Web seemed an exercise in efficiency, in giving the consumer (the “user”) an array of choices, making it easy for the viewer to immediately click and be rewarded with another screen full of something else, whatever else, as long as there was more. I remember teaching in a graduate program at the university where students were told that in building Web sites they had to make it obvious for the viewer to click within three seconds, or else the viewers would leave their site without having engaged. I remember telling my students that if the viewer could click in less than thirty seconds their site might be a failure – I wanted what was shown on the Web to challenge the viewer, just as reading Dostoevsky, or looking at an abstract painting, or listening to John Coltrane, had challenged me, an experience that took much longer than three seconds.
Digital media, with its emphatically anti-hierarchical bias, its encouragement of the individual to be both reader (or perhaps skimmer or scanner is more accurate) and producer, began to resemble a warehouse—a jumble of possibilities radiating a sense that there might be a treasure somewhere if one only looks long enough (this seemed to me a possibility that would be of interest to those with large amounts of free time rather than those whose days were less flexible, taken up with work and other responsibilities). Curation and editing, the filtering that focused the attention of both large and small segments of society, were largely abandoned for a “democratization” of media that also seemed a virulent form of nearly out-of-control consumerism. Rather than trust the taste of others (as in a retail store, or a museum, or a newspaper), social media adherents began to deride specialists as exemplifying a form of vacuous elitism. The power needed to be with the people, and as we segmented and splintered into a myriad of special-interest groups, this was articulated as the prerogative of the individual in democratic/consumerist societies, a displacement of the public square into, if you will allow me, a vast mosaic of individual pixels. Of course this is exactly what we are complaining about now in the populist politics of this moment, including the red state-blue state, rural-urban divides within the United States, a splintering that led to the election of a reality-television star as president.
I am certainly aware of the limitations of the analog media, with its politics of exclusion, its parochial tastemakers coming from only certain classes. But I am also aware that by focusing society on certain issues to the exclusion of others, it was possible to have a civic dialogue in which large numbers of people shared certain of the same reference points. It was also a way for the individual writer or photographer or filmmaker to benefit from the expertise of a larger group pre-publication rather than the more profligate self-publishing we see today, even if that larger group could at times be insensitive, repressive and unhelpful. We could concentrate on the war in Vietnam for example, with front pages documenting its progression, and rally to stop it—unlike the war in Syria or Iraq or Afghanistan, where much of what happens seems a blur of competing narratives, taken in as fragments with little sense of the context or the history (why did we have so many iconic images from the war in Vietnam to rally around and against and none from the war in Afghanistan, for example, even if it is now the US’s longest war?). Information now appears and disappears within minutes in a 24-hour news cycle, with little chance for a coherent narrative, as images are appropriated and re-contextualized (with and without permission). Authorship is challenged.
I am not trying here to be overly nostalgic. I am aware that the Web offers enormous possibilities that transcend much of the mundane uses that we are seeing today, but what troubles me is that so few are being realized. I see the Internet as a quantum universe of probabilities and possibilities that differs profoundly in its logic from the Newtonian cause-and-effect universe of the analog, a coded environment that, as I have written before, offers enormous opportunities to explore what it means for us to be coded, DNA-based beings. But I do find that the “aura of the original,” the sense of specialness and even of the sacred, are difficult to find when confronted with trillions of images and a multitude of words. And I find that the many artists aggregating the proliferation of images and words on the Web as a new archive do not often move us into new ways of understanding the human spirit and its potentials.
I think more these days of the great picture magazines of the 1920s and ‘30s, such as Vu and Life, in which imagery was placed on the page, designed with a sense of scale and contrast, the printing controlled, the typography selected so as to enhance and emphasize particular meanings from the photographs themselves. Form interacted with content both within and outside the rectangle of the photograph, creating a dynamic that was more than the sum of its parts. Now the Web tends to showcase photographs that vary in size depending upon the screen each person is looking at, from that of a cellphone to a laptop, with uncontrolled color palettes and bland typography. There is little meaningful interplay between form and content, but more of a sense that one is looking at a great deal of “stuff” that can always be abandoned and replaced with a fast click of the mouse. So the most vivid photographs are placed at the beginning of a slide show in the hope that the viewer stays longer, rather than creating a visual rhythm that aids a more nuanced expression of the work, in which some of the strongest images are shown at the end.
Where I think that the Web offers more than the analog can provide is in the non-linear narrative, hypertext, an intelligent “interactivity” rather than a facile one. We should not be creating ATM-like “interactivity” that is more about the click than the result, but an interactivity that leads to more complex and unexpected experiences. This is a different form of collaboration than that between a writer and reader in a book, or a painter and viewer, or a filmmaker and her audience. This is a non-linearity which allows the reader/viewer a freedom to roam, to explore, and in doing so to add meaning to the relationship with the text and its author. This is more of what I had hoped we would be getting online.
I am interested in your responses, to have a sense of where you think the light is at the end of this digital tunnel. The Web’s capacities for aggregation and efficient distribution are evident—what I am searching for is more of a sense of discovery of the transcendent and the transformative, a potential for expanding not only data and information but also knowledge and wisdom, in whatever forms they might emerge.
Next “Correspondence” by Nathan Jurgenson will be published on May 30th.
I, too, look forward to this correspondence and I read your first letter with great interest. It is an honor to reply, as your work, especially with respect to contemporary photographic technologies, has deeply influenced my thinking as I begin to approach these topics. It was very good to meet you at the Theorizing the Web conference last year, and I appreciate your kind words about the imaginative possibilities provided by the event space as reminiscent of the early web. As the conference’s co-founder and co-chair, I think that messiness and unpredictability are what make a space worthy – and, as an organizer, also a little scary! Another parallel with the internet.
I quite like your telling of the story of the web and appreciate how it avoids utopian and dystopian platitudes as well as the bad habit of fixating on the companies and gadgets. It’s a good place for us to begin, framing just what the internet is, what it has done, where it will go, what has been lost and what can still be gained. This story is often reduced in popular and even academic discussions to asking if the internet is good or bad. On one side is a silly techno-utopianism that thinks that if people in repressive regimes get new phones then democracy will just emerge, or that social ills like crumbling institutions can be solved with ever-Bigger Data. On the other is a kind of knee-jerk dystopian Black Mirror-ization of tech discourse that trades actual criticism for cynical cliché. We should reject such framing because the internet is both good and bad. Of course. But that’s the easy part.
The more important question this leaves me with is why are these new digital technologies that are so deeply interwoven and ubiquitous through sociality framed so often in these good-or-bad terms? Let’s take phone applications as an example: they encompass identity, intimacy, friendship, families, news gathering, and most anything else that makes up sociality. They aren’t used by everyone, but still touch a wonderfully diverse arrangement of people with different interests and experiences and politics and vulnerabilities. Anything that comprises human sociality will be as complex and multivalent as human sociality. This impossibly complicated entwinement of technologies and people and groups and environments is met with, well, is it good or bad? It’s like asking if talking is good or bad. More than just rejecting the unfair and unproductive good-versus-bad framing, we should ask, what makes it possible to so routinely ask such an incoherent question of technology?
I think the answer is that there is an underlying and often unspoken presupposition that the internet isn’t real or native to this reality but is instead its own realm or environment. The idea of a zero-sum “on” versus “off” line, or of digitality as a separate virtual sphere, made for good fiction (Snow Crash, The Matrix, and so on) but doesn’t best capture what the web really is, and does. If we think of all this as a technological appendage to reality, the question of good-or-bad makes more sense. But thought of as essentially interlaced with reality, as partly constituting reality, and as a flavor of information that intersects all of sociality as it occurs on and off the screen, the good-or-bad question would be replaced with asking which particular technological affordance is good or bad, for whom, when, and why. Technology is reduced to good-or-bad because people wrongly think of it as less than real.
For instance, while I share nostalgia for the early messy and unpredictable internet, it’s also true that some of that free and open and unregulated internet was, and continues to be, an opportunity for hate, harassment, and bigotry. Far from disappearing, some of these hurtful elements have a larger audience and influence than ever, even in the White House. In this way, the internet is tragically normal, given that a fact of human experience is the commonality of suffering and evil. As was always the case but perhaps more obvious now, the internet is merely real.
Doomed to such reality, it should not be surprising that it looks a lot like us: sometimes beautiful, challenging, evil, but most often pretty mundane. As part of and imbricated within the rest of reality, it’s no surprise that profit and scale took precedence over weirdness and challenging art. Built and used by humans with bodies and geographies and demographics and politics and feelings, the web is real and as such looks like real life, full of people who, for example, don’t obsessively read the news or make art. And that routine and centralized internet we complain about is also full of people who love, of families reaching out across long distances or just keeping in touch about their day. It’s the dailiness, the routine banality the internet absorbs, that gives us a lot to complain about but also shouldn’t be underestimated.
But I certainly do not want to preempt discussion about what has changed. This discussion is, I think, best situated within the short history of modernity and the existential cultural preoccupations that are older and go deeper than any particular new technology. By modernity I mean the industrial revolutions, the rise of science, secularism, capitalism, and democracy, the growth of transportation technologies like steam, automotive, and flight, and the rise of various communication technologies like mass print, photography, radio, television, and the web. For as long as these processes have been going on (and the special acceleration of these in the 19th century that my discipline of sociology emerged to describe), we have seen a familiar excited worry about people contemplating less, society moving too fast, and a growing anxiety or anomie from epistemic unmooring. The state of being bombarded with text, images, and information is as old as mass media, literacy, and modernity. Technological novelty, shock, and acceleration are by now predictable (imagine the shock if any of this slowed!) but each development listed here had its own distinct and profound effects on culture. From this historical grounding, we can best sensitize ourselves to the most recent changes.
To disagree only slightly, I do not think the internet is primarily anti-hierarchical or primarily hierarchical, but that it affords each in new ways at new scales. The anti-hierarchical participatory and interactive components you describe were novel and world-changing. Though the degree to which much of what happens on screens is centrally conceived and controlled, and the way all of that user information is often run through black-box-like algorithms, is not just hierarchical but hierarchical in new and profound ways. We have at once a chaos of user-generated information almost anyone can contribute to, and at the same time this is afforded by and then sorted by corporate algorithms. And in this example, I think you are very correct to underscore the loss of verticality, especially in news consumption.
I like to read debates about the internet from the 1990s, which I’m sure is a different experience than being there for them, but it seems there was from the start some worry about what horizontalism would mean for news, where the sections in a newspaper would be less obvious as each story is its own page, and where people come straight to the story without seeing the front page, thus losing the editor’s role of ranking information. Each story on the web is placed next to the others without much context about how it relates to them, as opposed to print papers, which could control the font size of a story’s title, its location on the front page versus the back page, and which section it appeared in. I think that worry about horizontalism was very well founded but seemingly ignored, and we’re dealing with the consequences today. That opinion pieces look almost exactly like reported pieces online – as opposed to opening a paper and seeing a giant “Opinion” as the title of one specific section – is to me a very underappreciated and detrimental editorial (and financial) choice that should be rethought.
However, as opposed to essentialisms about what digitality tends towards, we should stress that social media is very young and it doesn’t have to be the way it is. Will news as it occurs through connected screens always be this way? I do not think there’s any necessary reason the pendulum could not swing back towards some editor-controlled news spaces on a connected screen. And then we can go back to the Frankfurt School-style critiques of the culture industry controlled vertically by elites. (Arguably, that critique is already desperately needed, specifically in how online news is sorted editorially, with virality as an editorial philosophy producing a politics of chaos. Though perhaps I’m chasing tangents for another discussion.)
It’s within this perspective that I hope to have a start at answering your question about the light at the end of the digital tunnel. I appreciate your hopeful conclusion about interactivity, not because it is hopeful but because it provides a useful framework for looking ahead. You describe an interactivity that doesn’t lead just to more information but to complex and unexpected experiences. Your terms remind me of Walter Benjamin’s essay “The Storyteller”. There, he makes a similar distinction between information and experience, where information is the fact of the matter at any given time, versus the capacity to articulate experience, something storytelling can uniquely accomplish because it is not anchored to true-or-false but conveys timeless emotion and wisdom. Benjamin felt that, relative to oral cultures, written text tended towards information rather than storytelling, and I wonder if a potential of digital communication is a reengagement with storytelling, a renewed ability to share experience (though, to be sure, one without any slowdown in the proliferation of information).
An example, and I hope a productive way forward in our conversation, is the recent history of image making. If I may be a little provocative, perhaps I would say that the central cultural importance of photography in the 20th Century was about information, about creating records and documents. Today, with so many people making so many de facto (and sometimes literally) ephemeral images, photography has also become about talking in a form closer to the oral tradition of sharing experience, and as such becomes something more linguistic than formally artistic. Social photos today with filters and augmented reality manipulations aren’t as focused on accurately representing the photons (i.e., information) but are more expressive as a kind of experiential storytelling. Perhaps the social photo, liberated from the fact of the matter, is able to tell new truths.
I look forward to your reply.
Next “Correspondence” by Fred Ritchin will be published on June 13th.
Let me first back up for a bit into the pre-Web world of print.
While working at the New York Times Magazine as picture editor in the late 1970s and early 1980s, I was extremely conscious of my role as an arbiter of how people might see the world, as the person situated between the photographer in the field and the reader. I would double-check captions, afraid of damaging the public trust, which even then could be fragile. And I was acutely aware of the multiplicity of messages that could be sent via photographs that would not always be noticed by many of my colleagues, given that their expertise was usually in words, not in the reading of imagery—for me the passion was not in illustrating a caption or any preconceived idea but in eliciting other ways of knowing. I was very conscious as well of sequence – in a linear publication, if a certain photograph were to follow another it would change the meanings of both images.

I had come up in the ranks of the image rather than text (although early on I chose writing as my métier, rather than being a photographer), in large part because the image had more freedom. Whereas the person well versed in photography, with an understanding of design and text, would almost never become the person in charge of a publication, the picture editor had at his or her disposal a vastness of possibility to roam, in both the archives and in new photographs, an image universe that was rich with ideas while still somewhat, for most people, unknown.
There were certainly barriers to be dealt with. For example, at the very beginning of my tenure I was told that I could not publish a color photograph (as I remember it was for the cover) of Philippe de Montebello, the director of the Metropolitan Museum, in the basement storeroom of the museum surrounded by Greek statues from antiquity—because their breasts were visible, and it was a “family magazine.” At a previous job, at Time-Life Books, I could not publish a photograph of the world-renowned classical pianist Arthur Rubinstein in a photo essay on the creative process because, I was told, classical musicians were not creative but only played someone else’s music. It’s true that it was not always possible to fight the prejudices of those who made such decisions, and social media now allows wholesale contradictions—although what is lost is a singular, authoritative voice.
At that time, particularly when working at the New York Times, I often found myself wrestling with choices, asking myself if I knew enough to make certain selections, frequently speaking both with the photographers who had been in the field, their agents (often French) who sometimes knew more about what was going on than anyone else, and the writer whose piece we were publishing. If I selected one photograph of Palestinians in pain, did I need to pick one of Israelis to balance it out? What if one image showed more devastation than the other—how was I to decide? Would people be able to read the differences between the two images? It was a job that required an intense dedication along with a sense of responsibility that one was helping to shape public perception on issues both enormous and small, and one needed to be as fair as possible.
At that time there was a photograph that came to my attention of a nude young woman running from her burning house somewhere in the United States. It was a beautiful photograph, with her home visible behind her as she ran towards the camera. I remember thinking that we would never publish such a photograph because there was no reason to—at the Times we used the Golden Rule, asking if any of us, or our sister or wife or mother or daughter, had been depicted in the photograph, would we have published it. Certainly the publication of this image would have no effect on future house fires, and we would not have wanted to make people think in some later instance that they would have to run back into their burning home to get a towel to cover themselves in order not to be photographed nude. In another instance – the 1972 photograph by Nick Ut of a young Vietnamese girl, Phan Thi Kim Phúc, who was similarly nude and burning from napalm – the thinking was different. In this case her predicament as shown in the photograph would lead to widespread condemnation and re-examination of the war, and although it is impossible to posit any direct cause and effect, US troops would be pulled out of Vietnam the year after this iconic image was published.
Why do I offer these examples? In part because of a sense that, in the enormous number of cases of dubious ethical decisions—from Donald Trump’s decision to launch 59 missiles at Syria while eating the “most beautiful piece of chocolate cake that you’ve ever seen” with the president of China, because of imagery of a poison gas attack that is said to have upset his daughter, to the recent controversy over a photograph that identifies a 16-year-old Indian girl being raped while forced into prostitution—the absence of gatekeepers is having an enormous impact on credibility and knowledge that is only to a certain extent balanced by the power of social media, in large part because it is so diffuse. The press is unable to provide a coherent and powerful deterrent—a narrative, if you will—to political incoherence, and each day presents a new chaos of charges of “fake news” and off-the-cuff policies. The press, in its weakened financial state, has fewer resources and less credibility in the digital era. While people are indeed conversing over social media, a kind of fierce tribalism is emerging in which there is an absence of societal reference points that can be widely shared. During the Vietnam War, whether one was for it or against it, there was an agreement, whatever one’s political orientation, that what was reported in the press had some validity, a strong connection to the factual. Now we can’t even agree that climate change is unfolding at an enormously speeded-up rate, let alone do something about it.
Your sense that the 20th-century photograph was more about information, and today’s photography about conversation, is both useful and portentous. How do we set our compasses, establishing parameters, if the imagery we are looking at for the most part emerges from the explicit subjectivity of conversation? Perhaps it is finally time to retire the term “photograph” in favor of “digital image,” taking into account the billions of images that are produced to take part in this conversation daily? Perhaps the term “photograph” should only be used for the indexical image in which the signifier and the signified are still strongly linked?
But I think that the use of the term “information” applied to photography misses an extraordinarily important role—that of discovery. “I photograph to see what things look like when they are photographed,” Garry Winogrand’s famous phrase, distinguishes camera vision from human vision, arguing that the photograph need not fulfill preconceptions but can elicit other truths, and that the application of a caption to define and limit the photograph is equivalent to the quantum collapse that happens when the ambiguities of matter are reified as either wave or particle. It is these discoveries, the John Szarkowski mirror/window dynamic in which the most interesting and revelatory photographs are those that express both an outer and an inner world, that would be sacrificed to the conversational mode, in which the image tends to simplify reality, often reducing it to the equivalent of a series of nouns with a few adjectives, categorizing it (“my breakfast,” “my new girlfriend,” “the Eiffel Tower”) rather than creating a new sense of experience independent of previous vocabularies.
And you are right that the Internet will evolve. But I think that here we have to ask how we want it and us to evolve, and what we will do to try to make that happen. I am convinced that brain implants are not far away as a means of making the Internet into an amplified (and/or diminished?) part of our reality. I am terrified of this in large part because the seductiveness of digital media has largely overwhelmed human will and autonomy, because Google knows everything and we really don’t have to read books, to learn how to wrestle with ideas, because we have everything at our fingertips (just ask Siri).
So here I am going to end with a reconceptualization of the photograph, or more specifically of the image that looks like a photograph. The great majority are being made by machines for other machines involved in activities such as surveillance that don’t require the machines to “see” any image but just to read each other’s code, from which the image, for human use, is derived. Then I would suggest that the serious image-maker can no longer depend on the photograph’s indexical value, because that is for the most part no longer credible, putting us in a situation of photography after photography, as I argued in a 2008 book called After Photography. And as the digital image becomes more and more a computational construct, with, for example, the values of pixels being approximated by algorithms rather than recorded from the light that comes through the lens, we are not talking about photography when we talk about digital imaging.
So the code-based digital image becomes more an investigation of genotype than phenotype, as photography had been. We are less interested in appearances than in systems, for better and for worse. Part of this is self-interest—as a species we may not be able to survive counting on “nature” given what we have done to it, and will need to invent and reinvent what we now think of as biological and other modes of life. This may be the major contribution of not only digital imaging but all digital media—it is more of a life-and-death game that we are in than we are currently prepared to admit.
We need, as we progress, to find ways to slow down, to reassert control over our hybrid environment that is, as you assert, part digital and part analog. We need to find ways to underline the spiritual in our lives. And we need to stop privileging our consumerist selves, arguing for an entitlement that is camouflaged as democratic when it is too often a wrong-headed impulse for status in a profligate world.
And as to storytelling – your reference to Benjamin reminds me of William Carlos Williams. Benjamin put it that “every morning brings us the news of the globe yet we are poor in noteworthy stories.” Williams wrote: “It is difficult to get the news from poems yet men die miserably every day for lack of what is found there.” Certainly we are telling each other stories on social media. Yet what are the narratives that we can all share? As Paul Stookey of the singing group Peter, Paul and Mary once reflected onstage—after the demise of Life magazine, the next popular magazine was People, a title that in its focus left out much of “life,” which then was followed by Us magazine, thus excluding most “people,” which in turn led to Self, a title that excluded just about anybody and anything else. Nicholas Negroponte, the pioneering founder of the MIT Media Lab, then suggested a customized newspaper called the “Daily Me.” Social media has further intensified this concentration on the self and its immediate surroundings, from Facebook and other platforms as sources of customized news to the obsession with selfies as an existential declaration that one actually matters.
There was a time when print media allowed us to encounter a few major topics on a front page of a newspaper, some of them quite unknown. While this limited our purview, it did focus us on certain issues that we could try, at times, to solve. Today a hybridized front page that links to many other divergent points of view is also possible. When I was hired to create the first multimedia version of the New York Times in 1994-95, I was intent on retaining and amplifying the front page and specific sections of the newspaper with dedicated software as a first step as we entered the digital environment. Then the Web emerged as the default platform, and the newspaper of record became, to some extent, just another source of information and comment among many others. Now it has become more assertive, with more digital subscribers as people respond, in part, to Trump and the “post-truth” era. Many other journalistic publications have also stepped up their defense of facts and reason. This, I agree, is hopeful.
What then are the next steps? Which better world can the online help take us to?
I look forward to your response.
Next “Correspondence” by Nathan Jurgenson will be published on June 27th.
“More information, more hype, bigger ratings: these are the values handed down from the past that need to be unlearned.”
I very much share your interest in these debates about terminology, about whether what people are doing so routinely today is a shift within “photography” or something requiring a new vocabulary. As always, wherever we run out of words is where we should devote extra conceptual work. When words stop making sense and new sets of terms proliferate, as is happening around new, visual social media, there must be something more interesting happening under the semantic surface.
So, should we call what people are doing with mobile, digital, and networked cameras “photography”? The mechanics of image making have changed, as you described at the time of the rise of computational photography, and as such the fundamental ontology of what a camera produces has changed too. No longer mechanical, no longer having such a strong and necessary correlation with the outside world, no longer needing to exist in a pictorial form when not called upon by a computer, this new digital image making is very different from traditional photography. Yet people seem to intuitively and routinely cling to that term to describe the everyday images being taken and shared.
That most people still call all of this “photography” isn’t a convincing argument one way or the other. Instead, I think the continued popularity of the term points to how the contemporary digital image retains many of the functions of traditional photography. Enough, it seems, for most people to continue using the term. While the image can be drawn algorithmically with little essential grounding in the world around us, the way the image is used, understood, and shared still often has much to do with accuracy and reality. I think this continued truth-telling function is what links contemporary image making with traditional photography, and why it is still called as such.
“The term ‘camera’ also interests me. What image appears in people’s minds when they hear the term?”
For example, in my previous letter I suggested that digital photography privileges expression over mere information, an epistemic shift from an objective to a more subjective knowing, but still an epistemic function. And we can think of examples where digital photography excels at more objective truth-telling as well. A speculative case, if I may: digitality and network connection mean there are many more photograph-like images, and while each object on its own may have a reduced bearing on objectivity, taken together they might accomplish quite the opposite. The one shot could be manipulated, but surely very different people from many different vantage points didn’t all fake it the same way. A single image may be less trustworthy today, but a crowd taking images from many angles can provide more proof than ever. (And this is similar to your previous writings on the idea of newspapers publishing more photos from more angles to better describe a newsworthy scene.)
This example points to a way digital imagery is used for something like objectivity (in aggregate) even though each image is (individually) more manipulable. Digitality, I think, creates new opportunities for image truth and fiction, and as such shares the same objective/subjective tension all photography has. (I am remembering how Sontag put it, that every photograph is made by both the poet and the scribe.) I certainly do not want to try to make a case for whether mechanical or computational photography is more or less “true” – that seems like an impossible task – but instead I just want to acknowledge that many of the meanings and functions of photography are retained even though the ontological status has changed. And as a sociologist, my taste for terms is usually driven more by cultural uses than mechanical workings or even ontological status. So I tend to continue using the word “photograph,” though this discussion makes me wonder: for how long?
The term “camera” also interests me. What image appears in people’s minds when they hear the term? Maybe people weren’t widely aware of the ontological sea change of computational photography that we are dealing with here because, at the rise of digital photography, the images still looked like photographs and the mechanisms producing them still looked like cameras? The hardware of digital cameras has aspired to be like those that came before. And their output was judged by how close the images could resemble their film counterparts. Even with camera phones and social networks, most of the platforms aspire to the skeuomorphic “photo album” or “gallery.” Even the radical editing possibilities of digital photography at first harkened back to film, for example the early “filtering” platforms like Hipstamatic and Instagram provided interfaces and filters that would make your digital image look like a paper photo, aged, saturated, vintage, and nostalgic. And the icons for social photography apps usually align with what people imagine when they hear the word “camera”: the camera hardware, body, lens, and shutter.
But is the “camera” really, centrally, the hardware anymore? This entire dialogue we are having mostly centers on the software, where “digital photography” or “computational photography” of course means “software-assisted photography.” Given our conversation and the changing ontology of image-making, I’d like to posit that the term “camera” should be understood today as centered in software more than hardware. It is the software that allows images to be more and less than they were, to be expressive and communicative and whatever else they’ll turn into in the future. And most importantly, the software is increasingly being used to do much more with imagery than simply emulate what hardware did before.
An admittedly off-hand and probably too simplistic start might be to say that a camera is a tool that uses the outside world to make images and unlock a vast field of visual possibilities, from beautiful portraits and landscapes to selfies and everyday phatic expressions of visual communication, including image processing from filters to augmented reality. Indeed, the rise of augmented reality makes clear that digital imaging is now more routinely going beyond emulating film, and is the biggest reason why I am centering software rather than hardware in what a “camera” is. In this provocation, the hardware – the lens and shutter and now the sensor – is merely a component of what the software does. Why the image was taken, what it looks like, how it is used, how it is shared, and why it is important is centered in the code the visual information is run through. Perhaps when we hear the term “camera” we should think of the software first?
I am curious what you think of the status of the “camera” today, in parallel with your previous comments on the term “photograph.”
So far, I think we are developing two lines of thought, the recent changes in photography and also the recent rise of a so-called “post-truth” and Trumpian politics. I think they might have less to do with each other than you have described. I don’t presume such a necessary connection firstly because I think computational photography still has these truth-telling functions and advantages I described above. And, secondly, because I do not think our current epistemological chaos is so different from what came before. The intensity of our current confusion is perhaps more the result of the previous order rather than the outcome of new technologies.
The task of imagining what better world might come next can’t be done until we come to terms with how epistemic authorities have forfeited legitimacy and trust just as much as these were taken from them. One typical critique of the phrase “post-truth” is to ask whether there ever really was such a popular, stable order of facts and truth before. There is no evidence that many more people were once reading books and engaging with complex ideas in ways that are now gone. In fact, there is some evidence these things are more popular than ever, though of course, and as always, still a niche. Today, this niche doesn’t have a monopoly on being heard, so it can feel like a kind of decline. With the rise of a plurality of media outlets as well as the social web, mainstream organizations failed to invest in their unique advantage: legitimacy and trust in fact-based reporting. That is, the important and expert editorial insights and skills you describe. Instead, the outlets sold this for profitable ratings and, later, virality. The rejection of facts and the willingness to profit from the resulting chaos has been a long story in formerly trusted media outlets, and I don’t believe the internet is even the most important chapter; just the most recent.
“As far back into the last century as we look into news media coverage we find an acknowledgement of spectacle, of fiction making, of ratings and profits over substance.”
Daniel Boorstin’s “The Image” in 1961 influentially described how political news was inherently cheapened by new media technologies. Joan Didion’s famous 1988 essay “Insider Baseball” similarly described political news as a kind of fiction made to be covered. We can imagine the many more works that could be cited here, and today it continues. From CNN to FiveThirtyEight to most other outlets, politics is treated like a fake sports-like drama with countdown clocks, constant poll statistics, and melodramatic visuals and music. As far back into the last century as we look into news media coverage we find an acknowledgement of spectacle, of fiction making, of ratings and profits over substance.
Twenty-four-hour cable news has done so much work for so long to degrade the idea of a common fact. It has instead insisted that news events are reality-TV entertainment, offering as a consistent theme a world that is scary, chaotic, and unknowable. It is hard for me to distinguish where the newer impact of the internet (let alone digital image making) fits in. There was already a popular foundation for an entertainment-and-profit model of news reporting, one in which mainstream knowledge-producing institutions were forfeiting their own legitimacy and undermining the very notion that there could be any facts at all. Print, television, and digital networks all achieve ratings and profits through Trump; he is partly their creation, and they certainly benefit from his ever-expanding presence. As I type this today, in the lead-up to James Comey’s Senate testimony, CNN is enjoying the ratings, billing it as “the Super Bowl,” a kind of ratings grab to sell attack ads against. Newsrooms are throwing parties. There is excitement and profit with each new drama. Between expressions of shock and anger over Trump, I get at the same time the sense that we’re getting what we always wanted.
“Trump wasn’t the first post-truth candidate but surely was the most explicit in admitting the whole process was a kind of fiction.”
This is the groundwork the rise of digital communications was born into: a fabric of power through attention where news is told according to ratings and candidates run on celebrity. Trump wasn’t the first post-truth candidate but surely was the most explicit in admitting the whole process was a kind of fiction. (A fiction, of course, with a very real bigotry and very real material and harmful consequences, especially for the most vulnerable.) The logic of the spectacle, of positing the world and its coverage as a reality show, is of course much older than the internet. And I do not think there is anything essential to the web that allows it to somehow undo this groundwork.
So, how do we build a better world? I think we do it by not romanticizing the past and not treating the web as the only problem or the central solution. I don’t want to embrace the past as a solution because past editorial philosophies gave us this mess. These old institutions that we mourn were also troublingly comfortable at producing spectacle rather than truth, and now this logic has attached itself to an even more powerful media dissemination technology that is the internet. Instead, the solution is somehow undoing this long-standing groundwork that links truth to attention and attention to profit. To build something new and better we need to undermine this underlying logic, a logic born from print papers and television networks. We need to break the idea that more information somehow provides more clarity when in reality it often produces more confusion. More information, more hype, bigger ratings: these are the values handed down from the past that need to be unlearned.
My hopeful conclusion is that we can use this moment to describe and undo these longer trends, using the novelty of our new digital tools as an opportunity. But where are the incentives to do any of this?
Next “Correspondence” by Fred Ritchin will be published on July 11th.
I am writing this a day or so after the president of the United States, widely thought to be the most powerful person in the world, tweeted a fabricated video of himself wrestling a CNN logo to the ground and then pummeling it. The 28-second clip ends with an onscreen reconfiguration of the CNN logo as “FNN: Fraud News Network.”
I find this latest manifestation of presidential animus horrifying on multiple counts, as well as interesting for what it says about our politics and our media, and it is in line with what we have been addressing in our correspondence. It can be seen as a paean to self-publishing on a platform that has immediate and widespread distribution, so that anyone, no matter how mean-spirited and vapid, can make a statement that has the potential of attracting enormous attention. And as is so frequently the case, Trump’s statement was made by appropriating other imagery, apparently from an appearance he had made years ago at WrestleMania, an annual professional wrestling event. Interestingly enough, the attack on CNN and on media in general belies their enormous support for his candidacy, allotting him coverage far outdistancing any of his competitors (given the high ratings, this coverage was also beneficial for media companies intent on selling advertising).
Perhaps Trump’s tweet, as well as many of his other extraordinary statements that mix the public with the personal (such as his reveling in “the most beautiful piece of chocolate cake” while launching 59 missiles at Syria that I had mentioned in a previous letter), are acknowledgments that the ability to elide filters – whether they be brick-and-mortar retail stores or legacy news media – is seen by many as a defining triumph of consumer capitalism that has been made possible by digital media. On the physical plane, more murderous and distorted forms of this unfiltered lack of constraint are the immoral wars, terrorism, and gun violence that plague us in so many countries today.
Trump’s tweet is also an acknowledgment that bullying and harassment remain an important currency of online behavior, even if, as in this case, the wife of the person doing the bullying has articulated her mission as First Lady as dedicated to eradicating such behavior. And given the recent physical assault on a journalist for The Guardian by a Republican politician in Montana, among many other acts of harassment, Trump’s tweet can be seen as an approval of such activities and as an incitement to more violence against various “elites.” (As Jim Rutenberg pointed out in the New York Times, the National Rifle Association head Wayne LaPierre “recently called ‘academic elites, political elites and media elites’ America’s ‘greatest domestic threats.’”)
“Legacy media used to be good at the visual – ‘the weight of the words, the shock of the photos,’ as Paris Match advertised itself.”
It may be a good moment, then, to bring in Marshall McLuhan’s “the medium is the message” as a way of contrasting, let’s say, Twitter to CNN. Media themselves affect the ways in which we perceive the world—Adolf Hitler on television, McLuhan’s “cool” medium, would have had considerably less success in stirring up his fellow Germans to an embrace of Nazism than did his voice on radio, a “hot,” high-definition medium, much as a comparatively suave-looking John F. Kennedy won the 1960 US election debate on television while Richard Nixon won the same debate on the radio.
Certainly, to your point, CNN or any other legacy medium can and does distort our sense of reality, sometimes on purpose (to sell advertising, as with the 24/7 coverage of Trump) and sometimes because aspects of the medium make certain issues and events easier to cover (for a visual medium, the propensity is to depict a famine rather than analyze its causes). An emphasis on the visual may well leave the viewer with a sense of the spectacle rather than of the systems that were necessary for an event to occur. A solution here would be to link the two—to attract the viewer’s attention with specific imagery and then explain how and why the event happened and, in the case of a disaster, to try to posit ways for it not to happen again.
Legacy media used to be good at the visual – “the weight of the words, the shock of the photos,” as Paris Match advertised itself. But right now, in the chaos of media in which we find ourselves, the image floats about online, able to be re-contextualized for any reason. Imagery unanchored, or torn from its moorings, can be used for just about any point that one wants to make. Now the filter of “fake news” and “alternative facts” demolishes the old adage that, while at times useful, made so many of us in the field of photography uncomfortable: “the camera never lies.”
It would seem then that the truncated tweet is much more akin to a “hot” medium that is capable, dagger-like, of inflaming and roiling contemporary parameters, including those of decency and logic. I don’t believe that a drawing of Trump attacking CNN would have been as successful at enraging the population although, in more traditional societies, drawings of Muhammad did just that. Nor would Trump have achieved a similar result in castigating the free press in a print medium that would have required a certain amount of logic in building a verbal argument that could then be contested and possibly eviscerated by others. It is as if logic is the very constraint that Trump and his supporters are decrying, whether it be among CNN reporters or climate change scientists, abandoning a Newtonian sense of cause and effect for a different worldview that aligns much more closely to a quantum set of probabilities and possibilities.
I agree with you that any medium, whether “hot” or “cold” or anything else, can be used for betterment and there are no innate parameters that make it inherently good or bad, just as a gun at times may be essential for survival and atomic energy might actually be helpful. But I do believe that the character of specific kinds of media do push us in certain directions, and we need to be conscious of their propensities in order to diminish their negative effects and maximize the positive ones (the requirement that people wear seat belts would be an example of how society has thoughtfully responded to the dangers of advances in automobile and other motorized technologies).
For example, despite the proliferation of “connecting” media such as the telephone, the television, and social media, including the exchanges over Facebook and Instagram, there seems to be at the same time an extraordinary estrangement and aloneness in society today. In our country, similar to others, we are divided between red and blue states, rural and urban, Republican and Democrat, globalists and nationalists, rich and poor, and along religious, gender, racial, age and ethnic dividing lines.
I was recently reading an interview on Vox, pertinent to this discussion, by Sean Illing with the scholar Lyndsey Stonebridge. The latter, riffing off of Hannah Arendt, remarked:
“The big price we pay for mass loneliness is the loss of a shared reality. Arendt disagreed with Orwell that everyone knows two plus two doesn’t make five. We’re not idiots. We know a lie. But the problem is when people decide they don’t have to accept this reality. Then everyone begins to inhabit their own world, and that loss of a shared reality is what produces the loneliness, and that’s what makes the chaos of post-truth and willful lies so politically and existentially traumatic.”
Despite the omnipresent social media that surround us today, or perhaps because of them, society has accelerated in its disintegration. As Stonebridge put it, “Once you’re uprooted from your sense of reality as a community, that allows all sorts of other uprootings to take place. We lose our human connection to other people, and that’s when the conditions are in place for tribalism and mass violence, for the extermination of ‘superfluous people,’ for ‘others.’ This [is] something Arendt understood all too well.”
So what do we do? Bringing back Orwell, where war may be peace and peace is actually war, I would wonder whether “social media” might just as well be characterized as “anti-social media,” so that we have constructed a system that negates who we are as individuals. It may be that there is a hollowness to the voice that we are given on social media, so that it becomes a hobbyist’s version of media rather than a fuller, more engaged and nuanced one, and each photo uploaded is undermined by the billions of others uploaded daily, while each voice is immediately vulnerable to being undercut by comments, many of them unkind, from others. (This helps explain, in the photography community, the enormous interest in books as autonomous statements.)
“You ask at the end of your letter ‘where are the incentives to do any of this?’ The incentives, for me, are overwhelmingly ethical and spiritual.”
Perhaps, at some level, this mass aggregation of points of view online is untenable without a filter that is both effective and transparent—and perhaps, in essential ways, this lack of transparent filtering in a universe of concealed algorithms is what has unnecessarily widened the separation between print and online media, creating a chasm that was never necessary. For example, this correspondence in which you and I are partaking allows each of us to write at length, to delve into different ideas, to agree and to disagree, and then to have our words read in two languages by people in various parts of the world. There are risks involved, but also safety nets. This correspondence is, in my opinion, an example of a slowed-down, healthier use of the possibilities of the online, more so than what we usually experience.
You ask at the end of your letter “where are the incentives to do any of this?” The incentives, for me, are overwhelmingly ethical and spiritual. We have no choice but to deconstruct media in order to reconstruct it as best we can. And I view this correspondence as a tentative step in that direction, and as a preliminary articulation that there are productive paths to be followed in this pursuit.
And to respond to your question as to whether the term “camera” is still appropriate, or whether it is really all about software, I am of the opinion that what we now call the camera is itself a manifestation of the online universe, currently producing nodes more than signifiers, conversational gestures that lead in multiple directions. It is as if the “camera” is part of the “phone” not only physically but as part of a similar conversational strategy, an attempt to affirm common reference points among our online and physical communities. The camera is a tool to assert that each of us exists and is important in the swirling chaos, and that even if each of us is not recognized for his or her individuality, at the very least we can each be geo-tagged physically, in itself a kind of a spiritual victory.
When not all that long ago the American poet Robert Frost wrote “two roads diverged in a yellow wood, and I – I took the one less travelled by, and that has made all the difference,” his words still leave me breathless at the simplicity of his choices, and their integrity. In a world without hyperlinks, certain kinds of logic and self-actualization were possible. This may also explain some of the loneliness that has infected us—it is no longer a question of the road less travelled by as much as the fact that there is no longer a continuous path presented to us which we can follow, alone, but more of an unending series of leaps among parallel universes in our media-saturated world.
This too, of course, may eventually lead us to someplace better. But I am, right now, backing up some, looking for reference points like any befuddled traveler might, so as to know if any wrong turns were taken and how to compensate for them. I want to know how to move forward in a way that all of our accumulated intelligences and passions can find more positive synergies. It is not nostalgia on my part as much as it is an admission of a profound disquiet. We are in a quandary that needs to be unpacked and understood, which is what in these letters both of us have been trying to do.
Next and last “Correspondence” by Nathan Jurgenson will be published on July 25th.
I like where this correspondence has gone and has ended up, with some disagreements along the way and finishing with a mutual appreciation about the value of this kind of dialogue. Our discussion in its own way is an honest reflection of this peculiar political moment, one where any conversation is interrupted by important breaking news that is itself overshadowed by bigger news the following week, one where each conversation finds its way back to one person, Donald Trump.
“Thinking about the use of images and political spectacle and the way any conversation circles back to one man like a whirlpool makes me wonder if anyone has ever been as famous as Donald Trump is right now?”
This habit is predictable and tiring and unavoidable, but I don’t say this out of any negativity towards this conversation. Like a photograph, looking back from the future on this correspondence will help me remember what it was like to think about anything in this time. You’ve referenced the background news events in your letters, as have I: the James Comey testimony happened during my previous letter as the news screamed “impeachment,” and I am writing today during the Donald Trump Jr./Russia emails story as the news screams “treason.” And all the while I’m not sure my future self will remember any of these discrete events. What will “Trump Jr./Russia emails” come to mean? I’m not even sure if the media machinery will still care about any of this next month.
Looking back from the future, will what we are expressing here seem strange or normal? Or will our moment, which feels like too much news, too fast, too confusing, seem with hindsight a kind of quaint slowness to be nostalgic for? The acceleration of these things across modernity is perhaps quite predictable.
Thinking about the use of images and political spectacle and the way any conversation circles back to one man like a whirlpool makes me wonder if anyone has ever been as famous as Donald Trump is right now? To so thoroughly saturate discourse both more and less intellectual or professional, to dominate the major headlines every day, to burrow so deeply into how we think about and describe our world is a kind of omnipresence the term “fame” seems almost too small to describe.
I think a lot about the mere existence of the little blue “tweet” icon on Trump’s phone screen. That icon is connected to an impossibly massive assemblage of technologies, from the Twitter platform to his phone to the entire internet infrastructure to the billions of devices a tweet will be projected to and the countless media platforms the tweet will be discussed on. All of this comes together at that little icon on his screen so that one man can at any impulse type a thought, alone, and with one tap the entire world has to care. One tap and millions of hours of human attention and energy are spent reading and caring and discussing in response. If we could possibly bracket debates about what is a “presidential” use of social media or how the news should cover his tweets, I can’t help but marvel at the plain fact that we have collectively designed a world where that “tweet” icon on his device exists with its almost supernatural power.
It is within this technological arrangement that we encounter a President Trump and try to make sense of a unique kind of political senselessness. It is tempting, as you argue, to correlate these new digital technologies with our current political order, but I do worry the analysis so far is a bit technodeterminist – that is, granting too much causal power to the technology or medium and not enough to the longstanding political through-lines that cut across the technological waves of the last century.
“Also, you bring in McLuhan’s point about how different mediums have different messages, that print or audio or video each afford different ways of speaking, thinking, and knowing, with different styles of truth, justification, and convincingness”
For example, I do like your argument that part of Trump’s appeal is his ability to elide filters or traditional gatekeepers, something social media makes more possible and valued. Not to disagree as much as to add an equally important layer: Trump’s fame and political career were born and maintained primarily by such traditional gatekeepers and their institutions. Real estate, tabloids, entertainment, and cable news: this is where he comes from and also who his celebrity most benefits. And, yes, new digital media are the latest and perhaps most important chapter.
Also, you bring in McLuhan’s point about how different mediums have different messages, that print or audio or video each afford different ways of speaking, thinking, and knowing, with different styles of truth, justification, and convincingness. And perhaps Trump is the message for the digital news medium. I know many people call Trump “The Twitter President,” and he certainly understands the medium better than most politicians. But we could also make the case that he is the cable news president. Or the tabloid president. And we might remember how one of his earlier political moves was buying full-page advertising space in four New York City newspapers in 1989 to write a racist op-ed during the so-called “Central Park Five” commotion, exploiting white fear of the city. This is an example of how his use of a print medium presaged how he’d exploit fear on a bigger scale later on. To try to rank how much Trump’s success is spread differently across text versus voice versus video seems like a task missing the bigger point. Sometimes the message is more important than the medium, and Trump’s message of fear, intolerance, and chaos has succeeded across these mediums. This has worked because each of the mediums, despite their differences, is operating under the bigger logic that links power and profit through commodified attention. Nothing about newspapers, cable news formats, Twitter, or Facebook is immune to this deeper cultural logic.
That cultural logic undermined journalistic institutions long before the internet came along. It’s the logic that compels CNN to treat political news like a sport, or like reality television, or the most perfect intersection of both, professional wrestling. Trump responding to CNN with a professional wrestling gif was fitting. Some called the tweet “an attack” on the press, but Trump was joining a game CNN and others created and benefit from. Nothing Trump does is antithetical to CNN’s values or actions. Politics as a violent, corrosive sport fought among mutually exclusive teams that at best produce entertaining chaos is the logic Trump and his coverage share. It’s not an attack but, like professional wrestling, a performative, mutually beneficial rivalry. Each self-inflicted injury to the networks’ own credibility also pays off as a ratings play, a state of Constant News and a political reality that reflects their own cynicism.
For all the destruction, for all the material harm Trump causes, especially to the most vulnerable, at a minimum he lays this state of affairs bare. The performativity and manipulation aren’t unlike previous administrations and their coverage, but Trump does it all a little too on-the-nose, with a movie-like uncanniness. There are a few who still think the political class (Democrats, Republicans, and the news media that cover them) are doing real politics in good faith. Their ranks are rightfully decreasing. Given this, some have turned to nihilism and bigoted tribalism, and those attitudes can certainly be found online. They bypass existing gatekeepers to create chaos and fight for the impossibility of a public sphere. Others, instead, want to bypass existing gatekeepers to rebuild a newer, stronger public sphere.
The internet-discourse pendulum swings from “the internet strengthens democracy” to “the internet undermines all sociality.” Of course it does both, and does each in new ways, as we both have described in these letters. I’d like to end on a hopeful note, perhaps one that will appear naive looking back from the future. The first generation in the United States that grew up with social media is also the generation that largely wants to rebuild social institutions. Those who get their news from social media read from a more diverse set of perspectives than their counterparts, despite the still-justifiable “filter bubble” worries. And those who grew up with the internet also believe, to a much larger degree, in rebuilding stronger social institutions by supporting things like universal healthcare and debt-free college tuition. Social media has meant new kinds of political thinking, for better and worse. We’re feeling the worst now; what we need to build next is something better. Not a new medium but a stronger message.
Thank you again for this conversation.