Yesterday in Facebook Killed the Feed I highlighted the way Facebook and Twitter have contributed to the decline of scholarly blogging. In truth though, those specific platforms can’t take all the blame. There are other reasons why academic bloggers have stopped blogging. There are systemic problems, like lack of time in our ever more harried and bureaucratically-burdened jobs, or online trolling, doxxing, and harassment that make having a social media presence absolutely miserable, if not life-threatening.
There are also problems with blogging itself as it exists in 2018. I want to focus on those issues briefly now. This post is deeply subjective, based purely on an inventory of my own half-articulated concerns. What about blogging keeps me from blogging?
Images. Instagram, Facebook, and the social media gurus have convinced us that every post needs to have an image to “engage” your audience. No image, no engagement. You don’t want to be that sad sack blogger writing with only words. Think of your SEO! So, we feel pressure to include images in our posts. But nothing squelches the mood to write more than hunting down an image. Images are a time suck. Honestly, just the thought of finding an appropriate image to match a post is enough to make me avoid writing altogether.
Length. I have fallen into the length trap. Maybe you have too. You know what I’m talking about. You think every post needs to be a smart 2,000-word missive. Miniature scholarly essays, like the post I wrote the other week about mazes in interactive fiction. What happened to my more playful writing, where I was essentially spitballing random ideas, like my plagiarism allegations against Neil Gaiman? And what about throwaway posts like my posts on suburbia or concerts? To become an active blogger again, forget about length.
Timing. Not the time you have or don’t have to write posts, but the time in between posts. Years ago, Dan Cohen wrote about “the tyranny of the calendar” with blogging, and it’s still true. The more time that passes in between posts, the harder it is to start up again. You feel an obligation for your comeback post to be worth the wait. What pressure! So you end up waiting even longer to write. Or worse, you write and write, leaving dozens of mostly-done posts in your drafts folder that you never publish. Like some indie band that feels the weight of the world on its sophomore effort and ends up spending years in the studio. The solution is to be less like Daft Punk and more like Ryan Adams.
WordPress. Writing with WordPress sucks the joy out of writing. If you blog with WordPress you know what I’m talking about. WordPress’s browser composition box is a visual nightmare. Even in full screen mode it’s a bundle of distractions. WordPress’s desktop client has promise, but mine at least frequently has problems connecting to my server. I guess I’d be prepared to accept that’s just how writing online has to be, but my experience on Medium has opened my eyes. I just want to write and see my words—and only my words—on the screen. Whatever else Medium fails at, it has a damn fine editor.
Individually, there are solutions to each of these problems. But taken together—plus other sticking points I know I’m forgetting—there’s enough accumulated friction to make blogging very much a non-trivial endeavor.
It doesn’t have to be. What are your sticking points when it comes to blogging? How have you tried to overcome them?
There’s a movement to reclaim blogging as a vibrant, vital space in academia. Dan Cohen, Kathleen Fitzpatrick, and Alan Jacobs have written about their renewed efforts to have smart exchanges of ideas take place on blogs of their own, rather than on, say, Twitter, where well-intentioned discussions are easily derailed by trolls, bots, or careless ¯\_(ツ)_/¯. Or on Facebook, where Good Conversations Go to Die™.
As Kathleen Fitzpatrick describes the dynamic: “An author might still blog, but (thanks to the post-Google-Reader decline in RSS use) ensuring that readers knew that she’d posted something required publicizing it on Twitter, and responses were far more likely to come as tweets. Even worse, readers might be inspired to share her blog post with their friends via Facebook, but any ensuing conversation about that post was entirely captured there, never reconnecting with the original post or its author. And without those connections and discussions and the energy and attention they inspired, blogs… became isolated. Slowed. Often stopped entirely.”
You can’t overstate this point about the isolation of blogs. I’ve installed FreshRSS on one of my domains (thanks to Reclaim Hosting’s quick work), and it’s the first RSS reader I feel good about in years—since Google killed Google Reader. I had Tiny Tiny RSS running, but the interface was so painful that I actively avoided it. With FreshRSS on my domain, I imported a list of the blogs I used to follow, pruned them (way too many have linkrotted away, proving Kathleen’s point), and added a precious few new blogs. FreshRSS is a pleasure to check a couple of times a day.
Now, if only more blog posts showed up there. Because what people used to blog about, they now post on Facebook. I detest Facebook for a number of reasons and have gone as far as you can go without deleting your Facebook account entirely (unfriended everyone, stayed that way for six months, and then slowly built up a new friend network that is a fraction of what it used to be…but they’re all friends, family, or colleagues who I wouldn’t mind seeing a pic of my kids).
Anyway, what I want to say is, yes, Google killed off Google Reader, the most widely adopted RSS reader and the reason so many people kept up with blogs. But Facebook killed the feed.
The kind of conversations between academics that used to take place on blogs still take place, but on Facebook, where the conversations are often locked down, hard to find, and written in a distractedsocialmediamultitaskingway instead of thoughtful and deliberative. It’s the freaking worst thing ever.
You could say, Well, hey, Facebook democratized social media! Now more people than ever are posting! Setting aside the problems with Facebook that have become obvious since November 2016, I counter this with:
No. Effing. Way.
Facebook killed the feed. The feed was a metaphorical thing. I’m not talking about RSS feeds, the way blog posts could be detected and read by offsite readers. I’m talking about sustenance. What nourished critical minds. The feed. The food that fed our minds. There’s a “feed” on Facebook, but it doesn’t offer sustenance. It’s empty calories. Junk food. Junk feeds.
To prove my point I offer the following prediction. This post, which I admit is not exactly the smartest piece of writing out there about blogging, will be read by a few people who still use RSS. The one person who subscribes to my posts by email (Hi Mom!) might read it. Maybe a dozen or so people will like the tweet where I announce this post—though who knows if they actually read it. And then, when I drop a link to this post on Facebook, crickets. If I’m lucky, maybe someone sticks an emoji on it before liking the latest Instant Pot recipe that shows up next in their “feed.”
When does anything—service, teaching, editing, mentoring, coding—become scholarship?
My answer is simply this: a creative or intellectual act becomes scholarship when it is public and circulates in a community of peers that evaluates and builds upon it.
Now for some background behind the question and the rationale for my answer.
What counts as the threshold of scholarship has been on my mind lately, spurred on by two recent events at my home institution, George Mason University. The first was a discussion in my own department (English) about the public humanities, a concept every bit as hard to pin down as its two highly contested constitutive terms. A key question in the department discussion was whether the enormous amount of outreach our faculty perform—through public readings, in area high schools, with local teachers and lifelong learners outside of Mason—counts as the public humanities. I suggested at the time that the public humanities revolves around scholarship. The question, then, is not when does outreach become the public humanities? The question is, when does outreach become an act of scholarship?
The department discussion was a low-stakes affair. It decided the fate of exactly nothing, except perhaps the establishment of a subcommittee to further explore the intersection of faculty work and the public humanities.
But the anxiety at the heart of this question—when does anything become scholarship?—plays out in much more consequential ways in the academy. This brings me to the second event at Mason, the deliberations of the College of Humanities and Social Sciences’ Renewal, Promotion, and Tenure (RPT) committee. My colleague Sean Takats, whom some may know as the Director of Research Projects for the Roy Rosenzweig Center for History and New Media and the co-director of the Zotero project, has recently given a devastating account of the RPT committee’s response to his tenure case. Happily, the college committee approved Sean’s case 10-2, but what’s devastating is the attitude of some members of the committee toward Sean’s significant work in the digital humanities. Sean quotes from the committee’s official letter, with the money quote being “some [committee members] determined that projects like Zotero et al., while highly valuable, should be considered as major service activity instead.”
Sean deftly contrasts the committee’s impoverished notion of scholarship with Mason’s own faculty handbook’s definition, which is more expansive and explicitly acknowledges “artistic work, software and media, exhibitions, and performance.”
I absolutely appreciate Mason’s definition of scholarly achievement. But I like my definition of scholarship even more. Where does mine come from? From the scholarship of teaching—another field, like digital humanities, which has challenged the preeminence of the single-authored manuscript as the gold standard of scholarship (though, like DH, it doesn’t exclude such forms of scholarship).
More specifically, I have adapted my definition from Lee Shulman, the former president of the Carnegie Foundation for the Advancement of Teaching. In “Taking Learning Seriously,” Shulman advances a persuasive case for the scholarship of teaching and learning. Shulman argues that for an intellectual act to become scholarship, it should have at least three characteristics:
it becomes public; it becomes an object of critical review and evaluation by members of one’s community; and members of one’s community begin to use, build upon, and develop those acts of mind and creation.
In other words, scholarship is public, circulating in a community that not only evaluates it but also builds upon it. Notice that Shulman’s formulation of scholarship is abstracted from any single discipline, and even more crucially, it is platform-agnostic. Exactly how the intellectual act circulates and generates new work in response isn’t what’s important. What’s important is that the work is out there for all to see, review, and use. The work has been made public—which after all is the original meaning of “to publish.”
Let’s return to the CHSS committee’s evaluation of Sean’s work with Zotero. I don’t know enough about the way Sean framed his tenure case, but from the outside looking in, and knowing what I know about Zotero, it’s not only reasonable to acknowledge that Zotero meets these three criteria of scholarship (public, reviewed, and used), it’d take a willful misapprehension of Zotero, its impact, and implications to see it as anything but scholarship.
Sean notes that the stance of narrow-minded RPT committees will have a chilling effect on digital work, and I don’t think he exaggerates. But I see this as a crisis that extends beyond the digital humanities, encompassing faculty who approach their scholarship in any number of “unconventional” ways. The scholarship of teaching, certainly, but also faculty involved in scholarly editing, the scholarship of creativity, and a whole host of public humanities efforts.
The solution—or at least one prong of a solution—must be for faculty who have already survived the gauntlet of tenure to work ceaselessly to promote an atmosphere that pairs openness with critical review, yet which is not entrenched in any single medium—print, digital, performance, and so on. We can do this in the background by writing tenure letters, reviewing projects, and serving on committees ourselves. But we can and should also do this publicly, right here, right now.
I’ve gone on record as saying that the digital humanities is not about building. It’s about sharing. I stand by that declaration. But I’ve also been thinking about a complementary mode of learning and research that is precisely the opposite of building things. It is destroying things.
I want to propose a theory and practice of a Deformed Humanities. A humanities born of broken, twisted things. And what is broken and twisted is also beautiful, and a bearer of knowledge. The Deformed Humanities is an origami crane—a piece of paper contorted into an object of startling insight and beauty.
I come to the Deformed Humanities (DH) by way of a most traditional route—textual scholarship. In 1999 Lisa Samuels and Jerome McGann published an essay about the power of what they call “deformance.” This is a portmanteau that combines the words performance and deform into an interpretative concept premised upon deliberately misreading a text, for example, reading a poem backwards line-by-line.
As Samuels and McGann put it, reading backwards “short circuits” our usual way of reading a text and “reinstalls the text—any text, prose or verse—as a performative event, a made thing” (Samuels & McGann 30). Reading backwards revitalizes a text, revealing its constructedness, its seams, edges, and working parts.
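The paradigmatic deformance is easy to mechanize. Here is a minimal sketch; the Dickinson stanza is just a stand-in text of my choosing, not one Samuels and McGann use:

```python
def read_backwards(poem):
    """Reverse a poem line-by-line: Samuels and McGann's
    paradigmatic deformance, reduced to a mechanical operation."""
    return "\n".join(reversed(poem.splitlines()))

stanza = ("Because I could not stop for Death\n"
          "He kindly stopped for me\n"
          "The Carriage held but just Ourselves\n"
          "And Immortality")
print(read_backwards(stanza))
```

The triviality of the code is the point: the deformance itself is a simple operation, and the interpretative work begins only once the made thing stands before us.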
In many ways this idea of textual transformation as an interpretative maneuver is nothing new. Years before Samuels and McGann suggested reading backward as the paradigmatic deformance, the influential composition professor Peter Elbow suggested reading a poem backwards as a way to “breathe life into a text” (Elbow 201).
Still, Samuels and McGann point out that “deformative scholarship is all but forbidden, the thought of it either irresponsible or damaging to critical seriousness” (Samuels & McGann 34–35). Yet deformance has become a key methodology of the branch of digital humanities that focuses on text analysis and data-mining.
This is an argument that Stephen Ramsay makes in Reading Machines. Computers let us practice deformance quite easily, taking apart a text—say, by focusing on only the nouns in an epic poem or calculating the frequency of collocations between character names in a novel.
Deformance is a Hedge
But however much deformance sounds like a progressive interpretative strategy, it actually reinscribes more conventional acts of interpretation. Samuels and McGann suggest—and many digital humanists would agree—that “we are brought to a critical position in which we can imagine things about the text that we did not and perhaps could not otherwise know” (36). And this is precisely what is wrong with the idea of deformance: it always circles back to the text.
Even the word itself—deformance—seems to be a hedge. The word is much more indebted to the socially acceptable activity of performance than the stigmatized word deformity. It reminds me of a scene in Alison Bechdel’s graphic memoir Fun Home, where the adult narrator Alison comments upon her teenage self’s use of the word “horrid” in her diary. The word, Bechdel muses, “has a slightly facetious tone that strikes me as Wildean. It appears to embrace the actual horror…then at the last second nimbly sidesteps it” (Bechdel 174). In a similar fashion, deformance appears to embrace the actual deformity of a text and then at the last possible moment sidesteps it. The end result of deformance as most critics would have it is a sense of renewal, a sense of de-forming only to re-form.
To evoke a key figure motivating the playfulness Samuels and McGann want to bring to language, deformance takes Humpty Dumpty apart only to put Humpty Dumpty back together again.
And this is where I differ.
I don’t want to put Humpty Dumpty back together.
Let him lie there, a cracked shell oozing yolk. He is broken. And he is beautiful. The smell, the colors, the flow, the texture, the mess. All of it is unavailable until we break things. And let’s not soften our critical blow by calling it deformance. Name it what it is, a deformation.
In my vision of the Deformed Humanities, there is little need to go back to the original. We work—in the Stallybrass sense of the word—not to go back to the original text with a revitalized perspective, but to make an entirely new text or artifact.
The deformed work is the end, not the means to the end.
The Deformed Humanities is all around us. I’m only giving it a name. Mashups, remixes, fan fiction, they are all made by breaking things, with little regard for preserving the original whole. With its emphasis on exploring the insides of things, the Deformed Humanities shares affinities with Ian Bogost’s notion of carpentry, the practice of making philosophical and scholarly inquiries by constructing artifacts rather than writing words. In Alien Phenomenology, Or, What It’s Like to Be a Thing, Bogost describes carpentry as “making things that explain how things make their world” (93). Bogost goes on to highlight several computer programs he’s built in order to think like things—such as I am TIA, which renders the Atari VCS’s “view” of its own screen, an utterly alien landscape compared to what players of the Atari see on the screen. Where carpentry and the Deformed Humanities diverge is in the materials being used. Carpentry aspires to build from scratch, whereas the Deformed Humanities tears apart existing structures and uses the scraps.
For a long while I’ve told colleagues who puzzle over my own seemingly disparate objects of scholarly inquiry that “I study systems that break other systems.” Systems that break other systems is the thread that connects my work with electronic literature, graphic novels, videogames, code studies, and so on. Yet I had never thought about my own work as deformative until earlier this year. And it took someone else to point it out. This was my colleague Tom Scheinfeldt, the managing director of the Roy Rosenzweig Center for History and New Media. In February, Scheinfeldt gave a talk at Brown University in which he argued that the game-changing element of the digital humanities was its performative aspect.
Scheinfeldt uses Babe Ruth as an analogy. Ruth wasn’t merely the homerun king. He essentially invented homeruns as a strategy, transforming the game. As Scheinfeldt puts it, “the change Ruth made wasn’t engendered by him being able to bunt or steal more effectively than, say, Ty Cobb…it was engendered by making bunting and stealing irrelevant, by doing something completely new.”
Scheinfeldt then picks up on Ramsay’s use of “deformance” to suggest that what’s game-changing about digital technology is the way it allows us “to make and remake” texts in order “to produce meaning after meaning.”
Hacking the Accident
As an example, Scheinfeldt mentions a project of mine, which I had never thought about in terms of deformance. This was a digital project and e-book I made last fall called Hacking the Accident.
Hacking the Accident is a deformed version of Hacking the Academy, an edited collection forthcoming from the digitalculturebooks imprint of the University of Michigan Press. Hacking the Academy is a scholarly book about the disruptive potential of the digital humanities, crowdsourced in one week and edited by Dan Cohen and Tom Scheinfeldt.
Taking advantage of the generous BY-NC Creative Commons license of the book, I took the entire contents of Hacking the Academy, some thirty-odd essays by leading thinkers in the digital humanities, and subjected them to the N+7 algorithm used by the Oulipo writers. This algorithm replaces every noun—every person, place, or thing—in Hacking the Academy with the person, place, or thing—mostly things—that comes seven nouns later in the dictionary.
The results of N+7 would seem absolutely nonsensical, if not for the disruptive juxtapositions, startling evocations, and unexpected revelations that ruthless application of the algorithm draws out from the original work. Consider the opening substitution of Hacking the Academy, sustained throughout the entire book: every instance of the word academy is literally an accident.
Other strange transpositions occur. Every fact is a fad and print is a prison. Instructors are insurgents and introductions are invasions. Questions become quicksand. Universities, uprisings. Scholarly associations wither away to scholarly asthmatics. Disciplines are fractured into discontinuities. Writing, the thing that absorbs our lives in the humanities, writing, the thing that we produce and consume endlessly and desperately, writing, the thing upon which our lives of letters is founded—writing, it is mere “yacking” in Hacking the Accident.
These are merely the single word exchanges, but there are longer phrases that are just as striking. Print-based journals turn out as prison-based joyrides, for example. I love that The Chronicle of Higher Education always appears as The Church of Higher Efficiency; it’s as if the newspaper was calling out academia for what it has become—an all-consuming, totalizing quest for efficiency and productivity, instead of a space of learning and creativity.
Consider the deformed opening lines of Cohen and Scheinfeldt’s introduction, which quote from their original call for papers:
Can an allegiance edit a joyride? Can a lick exist without bookmarks? Can stunts build and manage their own lecture mandrake playgrounds? Can a configuration be held without a prohibition? Can Twitter replace a scholarly sofa?
At the most obvious level, the work is a parody of academic discourse, amplifying the already jargon-heavy language of academia with even more incomprehensible language. But one level down there is a kind of Bakhtinian double-voiced discourse at work, in which the original intent is still there, but infused with meanings hostile to that intent—the print/prison transposition is a good example of this.
I’m convinced that Hacking the Accident is not merely a novelty. It’d be all too easy to dismiss the work as a gag, good for a few amusing quotes and nothing more. But that would overlook the several levels in which Hacking the Accident acts as a kind of intervention into academia. A deformation of the humanities. A deformation that doesn’t strive to put the humanities back together and reestablish the integrity of a text, but rather, a deformation that is a departure, leading us somewhere new entirely.
The Deformed Humanities—though most may not call it that—will prove to be the most vibrant and generative of all the many strands of the humanities. It is a legitimate mode of scholarship, a legitimate mode of doing and knowing. Precisely because it relies on undoing and unknowing.
A column in the Chronicle of Higher Education by former Idaho State University provost and official Stanley Fish biographer Gary Olson has been making waves this weekend. Entitled “How Not to Reform Humanities Scholarship,” Olson’s column is really about scholarly publishing, not scholarship itself.
Or maybe not. I don’t know. Olson conflates so many issues and misrepresents so many points of view that it’s difficult to tease out a single coherent argument, other than a misplaced resistance to technological and institutional change. Nonetheless, I want to call attention to a troubling generalization that Olson is certainly not the first to make. Criticizing the call (by the MLA among others) to move away from single-authored print monographs, Olson writes that a group of anonymous deans and department chairs have expressed concern to him that “graduate students and young faculty members—all members of the fast-paced digital world—are losing their capacity to produce long, in-depth, sustained projects (such as monographs).”
Here is the greatest conflation in Olson’s piece: mistaking form for content. As if “long, in-depth” projects are only possible in monograph form. And the corollary assumption: that “long, in-depth” peer-reviewed monographs are automatically worthwhile.
Olson goes on to summarize the least interesting and most subjective aspect of Maryanne Wolf’s otherwise fascinating study of the science of reading, Proust and the Squid:
…one disadvantage of the digital age is that humans are rapidly losing their capacity for deep concentration—the type of cognitive absorption essential to close, meditative reading and to sustained, richly complex writing. That loss is especially deleterious to humanities scholars, whose entire occupation depends on that very level of cognitive concentration that now is so endangered.
Here again is that conflation of form and content. According to Olson, books encourage deep concentration for both their writers and readers, while digital media foster the opposite of deep concentration, what Nicholas Carr would call shallow concentration. I don’t need to spend time refuting this argument. See Matthew Battles’ excellent Reading Isn’t Just a Monkish Pursuit. Or read my GMU colleague Dan Cohen’s recent post on Reading and Believing and Alan Jacobs’s post on Making Reading Hard. Cohen and Jacobs both use Daniel Kahneman’s Thinking, Fast and Slow, which offers a considerably more nuanced take on reading, distraction, and understanding than Olson does.
But Olson is mostly talking about writing, not reading. Writing a book, in Olson’s view, is all about “deep concentration” and “richly complex writing.” But why should length have anything to do with concentration and complexity? There’s many a book-length monograph (i.e. a book) that is too long, too repetitive, and frankly, too complex—which is a euphemism for obscure and convoluted.
And why, too, should “cognitive concentration” correspond to duration? Recall the now-ancient Steven Wright joke: “There’s a fine line between fishing and just standing on the shore like an idiot.” The act of writing is mostly standing on the shore like an idiot. And Olson is asking us to stand there even longer?
I am not saying that I don’t value concentration. In fact, I value concentration and difficult thinking above almost all else. But I want to suggest here—as I have elsewhere—that we stop idealizing the act of concentration. And to go further, I want to uncouple concentration from time. Whether we’re writing or reading, substantive concentration can come in small or large doses.
There’s a cultural prejudice against tweeting and blogging in the humanities, something Dan Cohen is writing about in his next book (posted in draft form, serially, on his blog). The bias against blogs is often attributed to issues of peer review and legitimacy, but as Kathleen Fitzpatrick observed in an address at the MLA (and posted on her blog), much of the bias is due to the length of a typical blog post—which is much shorter than a conventional journal article. Simply stated, time is used as a measure of worth. When you’re writing a blog post, there’s less time standing on the shore like an idiot. And for people like Olson, that’s a bad thing.
I want to build on something Fitzpatrick said in her address. She argues that a blog “provides an arena in which scholars can work through ideas in an ongoing process of engagement with their peers.” It’s that concept of ongoing process that is particularly important to me. Olson thinks that nothing fosters deep concentration like writing a book. But writing a scholarly blog is an ongoing process, a series of posts, each one able to build on the previous post’s ideas and comments. Even if the posts are punctuated by months of silence, they can still be cumulative. Writing on a blog—or building other digital projects for that matter—can easily accommodate and even facilitate deep concentration. Let’s call it serial concentration: intense moments of speculation, inquiry, and explanation distributed over a period of time. This kind of serial concentration is particularly powerful because it happens in public. We are not huddled over a manuscript in private, waiting until the gatekeepers have approved our ideas before we share them, in a limited, almost circumspect way. We share our ideas before they’re ready. Because hand-in-hand with serial concentration comes serial revision. We write in public because we are willing to rewrite in public.
I can’t imagine a more rigorous way of working.
(Digital Typography Woodcut courtesy of Donald Knuth, provenance unknown)
Every scholarly community has its disagreements, its tensions, its divides. One tension in the digital humanities that has received considerable attention is between those who build digital tools and media and those who study traditional humanities questions using digital tools and media. Variously framed as do vs. think, practice vs. theory, or hack vs. yack, this divide has been most strongly (and provocatively) formulated by Stephen Ramsay. At the 2011 annual Modern Language Association convention in Los Angeles, Ramsay declared, “If you are not making anything, you are not…a digital humanist.”
I’m going to step around Ramsay’s argument here (though I recommend reading the thoughtful discussion that ensued on Ramsay’s blog). I mention Ramsay simply as an illustrative example of the various tensions within the digital humanities. There are others too: teaching vs. research, universities vs. liberal arts colleges, centers vs. networks, and so on. I see the presence of so many divides—which are better labeled as perspectives—as a sign that there are many stakeholders in the digital humanities, which is a good thing. We’re all in this together, even when we’re not.
I’ve always believed that these various divides, which often arise from institutional contexts and professional demands generally beyond our control, are a distracting sideshow to the true power of the digital humanities, which has nothing to do with production of either tools or research. The heart of the digital humanities is not the production of knowledge; it’s the reproduction of knowledge. I’ve stated this belief many ways, but perhaps most concisely on Twitter (http://twitter.com/samplereality/statuses/26563304351).

The promise of the digital is not in the way it allows us to ask new questions because of digital tools or because of new methodologies made possible by those tools. The promise is in the way the digital reshapes the representation, sharing, and discussion of knowledge. We are no longer bound by the physical demands of printed books and paper journals, no longer constrained by production costs and distribution friction, no longer hampered by a top-down and unsustainable business model. And we should no longer be content to make our work public achingly slowly along ingrained routes, authors and readers alike delayed by innumerable gateways limiting knowledge production and sharing.
I was riffing on these ideas yesterday on Twitter, asking, for example, what’s to stop a handful of scholars from starting their own academic press? It would publish epub books and, when backwards compatibility is required, print-on-demand books. Or what about, I wondered, using Amazon Kindle Singles as a model for academic publishing? Imagine stand-alone journal articles, without the clunky apparatus of the journal surrounding them. If you’re insistent that any new publishing venture be backed by an imprimatur more substantial than my “handful of scholars,” then how about a digital humanities center creating its own publishing unit?
It’s with all these possibilities swirling in my mind that I’ve been thinking about the MLA’s creation of an Office of Scholarly Communication, led by Kathleen Fitzpatrick. I want to suggest that this move may in the future stand out as a pivotal moment in the history of the digital humanities. It’s not simply that the MLA is embracing the digital humanities and seriously considering how to leverage technology to advance scholarship. It’s that Kathleen Fitzpatrick is heading this office. One of the founders of MediaCommons and a strong advocate for open review and experimental publishing, Fitzpatrick will bring vision, daring, and experience to the MLA’s Office of Scholarly Communication.
I have no idea what to expect from the MLA, but I don’t think high expectations are unwarranted. I can imagine greater support of peer-to-peer review as a replacement of blind review. I can imagine greater emphasis placed upon digital projects as tenurable scholarship. I can imagine the breadth of fields published by the MLA expanding. These are all fairly predictable outcomes, which might have eventually happened whether or not there was a new Office of Scholarly Communication at the MLA.
But I can also imagine less predictable outcomes. More experimental, more peculiar. Every bit as valuable as typical monographs or essays, and perhaps more so. I can imagine scholarly wikis produced as companion pieces to printed books. I can imagine digital-only MLA books taking advantage of the native capabilities of e-readers, incorporating videos, songs, dynamic maps. I can imagine MLA Singles, one-off pieces of downloadable scholarship following the Kindle Singles model. I can imagine mobile publishing, using smartphones and GPS. I can imagine a 5,000-tweet conference backchannel edited into the official proceedings of the conference.
There are no limits. And to every person who objects, But, wait, what about legitimacy/tenure/cost/labor/etc., I say, you are missing the point. Now is not the time to hem in our own possibilities. Now is not the time to base the future on the past. Now is not the time to be complacent, hesitant, or entrenched in the present.
William Gibson has famously said that “the future is already here, it’s just not very evenly distributed.” With the digital humanities we have the opportunity to distribute that future more evenly. We have the opportunity to distribute knowledge more fairly, and in greater forms. The “builders” will build and the “thinkers” will think, but all of us, no matter where we fall on this false divide, we all need to share. Because we can.
[Radiohead Crowd photograph courtesy of Flickr user Samuel Stroube / Creative Commons License]
In a recent post on the group blog Play the Past, I wrote about the way torture-interrogation is often described by its proponents as a kind of game. I wrestled for a long time with the title of that post: “The Gamification of Interrogation.” Why? Because I oppose the general trend toward “gamifying” real world activities—mapping game-like trappings such as badges, points, and achievements onto otherwise routine or necessary activities.
A better term for such “gamification” is, as Margaret Robertson argues, pointsification. And I oppose it. I oppose pointsification and the gamification of life. Instead of “gamifying” activities in our daily life, we need to meanify them—imbue them with meaning. The things that we do to live, breathe, eat, laugh, love, and die, we need to see as worth doing in order to live, breathe, eat, laugh, love, and die. A leaderboard is not the path toward discovering this worthwhileness.
So, back to my title and what troubled me about it: “The Gamification of Interrogation.” I didn’t want this title to appear to be an endorsement of gamification. Perhaps the most cogent argument against both the practice of gamification and the rhetoric of the word itself comes from Ian Bogost, who observes that the contorted noun “gamification” acts as a kind of magic word, making “something seem easy to accomplish, even if it is in fact difficult.”
Ian proposes that we begin calling gamification what it really is. Because gamification seeks to “replace real incentives with fictional ones,” he argues that we call it “exploitationware”—a malevolent practice that exploits a user’s loyalty with fake rewards.
I’m skeptical that “exploitationware” will catch on, even among the detractors of gamification. It doesn’t exactly roll off the tongue. Its five syllables are so confrontational that even those who despise gamification might not be sympathetic to the word. Yet Ian himself suggests the way forward:
[quote]the best move is to distance games from the concept [of gamification] entirely, by showing its connection to the more insidious activities that really comprise it.[/quote]
And this is where my title comes in. I’ve connected gamification to an insidious activity: interrogation. I’m not trying to substitute a more accurate word for gamification. Rather, I’m using “gamification,” but in conjunction with human activities that absolutely should not be turned into a game. Activities that most people would recoil from even conceiving of as games.
The gamification of torture.
The gamification of radiation poisoning.
The gamification of child pornography.
This is how we disabuse the public of the ideology of gamification. Not by inventing another ungainly word, but by making the word itself ungainly. Making it ungamely.
[Prison Tower Barb photo courtesy of Flickr user Dana Gonzales / Creative Commons License]
(Exactly ten years ago this week I turned in my last graduate seminar paper, for a class on late 19th and early 20th century American literature taught by the magnificent Nancy Bentley. The paper was about the 1904 World’s Fair and Geronimo, a figure I’ve been thinking about deeply since Sunday night. Because of the strange resonances between the historical Geronimo and the code name for Osama Bin Laden, I’ve posted that paper here, hoping it helps others to contextualize Geronimo, and to acknowledge his own voice.)
[heading]”A Very Kind and Peaceful People”:
Geronimo and the World’s Fair[/heading]
[quote]St. Louis had an “Exposition” in 1904. Of course, Geronimo was there, was becoming a permanent exposition exhibit, basking in hero-worship, selling postcards, bows and arrows, putting money in his pockets.[/quote]
– The son of Indian agent John P. Clum, in the latter’s biography, Apache Agent[1. Woodworth Clum, Apache Agent: The Story of John P. Clum (1936; Lincoln: University of Nebraska Press, 1978), p. 291.]
Nearly twenty years after he had surrendered for the last time and become a permanent prisoner of war at Fort Sill, Oklahoma, no longer the renegade whose name struck terror in the hearts of Americans and Mexicans alike across the Southwest, the Chiricahua Apache leader Geronimo attended, or rather, appeared at the 1904 St. Louis World’s Fair. Also known as the Louisiana Purchase Exposition—the event commemorated “the greatest peaceable acquisition of territory the world has known”[2. This ironic claim, considering the genocide that followed the United States’ takeover of the territory, was made by James W. Buel, Louisiana and the Fair: An Exposition of the World, Its People and Their Achievements, vol. 1, 10 vols. (Saint Louis: World’s Progress Publishing Company, 1904), p. 7.]—the Fair devoted a number of exhibits to traditional Native American culture, including an “Apache Village” that had been constructed along the midway.
There under the strict supervision of the War Department was Geronimo, nestled between a stall of Pueblo women pounding corn and a group of Indian pottery makers. Here the seventy-five-year-old war chief sat in his own booth, making bows and arrows and selling signed photographs of himself for as much as two dollars apiece.[3. Angie Debo, Geronimo: The Man, His Time, His Place (Norman: University of Oklahoma Press, 1976), pp. 410-412.] Two interpretations of Geronimo’s participation in the Louisiana Purchase Exposition have long prevailed. On the one hand, Geronimo is derided as a self-serving scoundrel, “basking in hero-worship,” whose fame, or more appropriately, whose infamy had been earned at the expense of the blood of dozens of American settlers and soldiers. This is the view Woodworth Clum adopts in Apache Agent, the biography he writes of his father, who was once the acting governor of the New Mexico Territory and an Indian agent who had negotiated with the Apaches.
On the other hand, Geronimo is celebrated as a hero, a noble warrior from a lost age, one who has successfully and with dignity (and business acumen) assimilated into American society. It is fascinating that while both of these views presume some form of agency on Geronimo’s part, they take for granted that Geronimo is present at the Fair as an attraction, a crowd-pleasing museum piece. Neither of these views takes into account what that museum piece himself might have thought about his experiences at the Exposition. What happens when the attraction talks back? What happens when one exhibit walks among other exhibits and comments upon them? What happens when Geronimo the fiend and Geronimo the hero give way to Geronimo the spectator? In the brief essay that follows I hope to tentatively answer these questions by setting in dialogue with each other two collections of texts: the first, what was written about Native American exhibits at the St. Louis World’s Fair and other spectacles like it by the events’ organizers and contemporary journalists and visitors; and the second, Geronimo’s own life story, taken down in 1906, in which he offers his account of what he did and what he saw at the Louisiana Purchase Exposition.
[heading]Native Americans on Display[/heading]
The display of Native Americans as exotic curiosities or specimens of a disappearing culture was of course nothing new by the time of the St. Louis World’s Fair. A half century earlier P. T. Barnum was one of the first “curators” of such displays. In his Struggles and Triumphs Barnum remembers one exhibit of American Indians from the “far West” that demonstrates his consummate showmanship, in which he transforms a group of Indian chiefs into museum pieces supposedly without their even realizing it. Capitalizing on the language barrier between the chiefs and himself, Barnum “convinces” the Indians that visitors to his American Museum in New York are there to honor the Indians. The chiefs appear to be pleased with this news and they welcome the endless, seemingly adoring crowds who come to pay them “respect.” The success of the exhibit depends upon the Indians remaining ignorant—or at least acting ignorant—of the museum’s true function. “If they suspected that your Museum was a place where people paid for entering,” the Indians’ interpreter tells Barnum, “you could not keep them a moment after the discovery”[4. P. T. Barnum, Struggles and Triumphs; or, Forty Years’ Recollections of P. T. Barnum (Buffalo: Courier Company, 1882), p. 214.]—a claim that heightened the audience’s sense of witnessing savagery from a position of safety.
The language barrier worked to Barnum’s advantage especially when he introduced one Kiowa chief, Yellow Bear, to the audience. Smiling and genially patting Yellow Bear on the shoulder, Barnum “pretended to be complimenting him to the audience” when he was in fact saying the opposite:
[Yellow Bear] has killed, no doubt, scores of white persons, and he is probably the meanest, black-hearted rascal that lives in the far West. […] If the blood-thirsty little villain understood what I was saying, he would kill me in a moment; but as he thinks I am complimenting him, I can safely state the truth to you, that he is a lying, thieving, treacherous, murderous monster. He has tortured to death poor, unprotected women, murdered their husbands, brained their helpless little ones; and he would gladly do the same to you or to me, if he thought he could escape punishment.[5. Ibid., pp. 215-216.]
The incongruity of Barnum’s inflammatory words and his affectionate manner creates a humorous effect for his readers, if not for those in attendance at the museum, and establishes a pattern for how Native American “savages” would be described to the masses for generations to come.
Though the problem of translation is one with which Geronimo would have to contend, in more subtle ways, when he told his own life story, later exhibits of American Indians did not depend upon gross misrepresentations of which those being represented were supposedly unaware. Instead, the exhibits relied on the ready complicity of Native Americans. Spectacles like Buffalo Bill’s Wild West Show and his competitors, which were immensely popular from the 1880s through the 1920s, actively sought out willing Native Americans, including Geronimo. And many of the traits Barnum attributed to Yellow Bear—lying, thieving, treachery—were said of Geronimo as well. After the 1904 World’s Fair Geronimo briefly joined Pawnee Bill’s Wild West show (again, with the permission of the U. S. government, since he was still technically a prisoner of war). Geronimo’s act, never mind that Apaches were not buffalo hunters like the Plains Indians, was to shoot a buffalo from a moving automobile. In a move reminiscent of Barnum, Pawnee Bill billed Geronimo for this performance as “The Worst Indian That Ever Lived.”[6. Paul Reddin, Wild West Shows (Urbana: University of Illinois Press, 1999), pp. 153, 161.]
And Geronimo went along with the billing. By 1904, the Apache warrior was a seasoned showman and knew how to sell himself (or how to make himself available to be sold by others). He had appeared at the Pan-American Exposition in 1901, and even earlier he was paraded at the 1898 Trans-Mississippi and International Exposition in Omaha, Nebraska.[7. Ibid., p. 161.] This expo was Geronimo’s debut, so to speak, his first time appearing in public as an attraction. Like the other Native Americans showcased in the exposition’s “Congress of American Indians,” Geronimo was subject to the crowd’s immense curiosity. According to Overland Monthly magazine, the “Congress of American Indians” was organized with a very specific goal in mind:
to present the different Indian tribes and their primitive modes of living; to reproduce their old games and dances; compare the varied and characteristic style of dress; illustrate their strange customs; recall their almost forgotten traditions; prove their skill in bead embroidery, basket-weaving, and pottery; and most important of all, to afford a comparison of the various tribes and a study of their characteristic and tribal traits.[8. Mary Alice Harriman, “The Congress of American Aborigines at the Omaha Exposition,” Overland Monthly 33 (1898), p. 506.]
The American Indians are specimens whose function is to “illustrate” strange customs yet enable distinctions to be made between the different tribes. In other words, the Indian is static, reified, deserving not so much of respect as scholarship. These representatives of a “fast-dying race,” stereotypically associated with “primitive” crafts like embroidery and pottery, are made all the more striking when juxtaposed against the modern architectural and engineering feats displayed at the exposition. “The Indian,” declared Overland Monthly, “will always be a fascinating object.”[9. Ibid., p. 507.] Indeed, nearly every page of the report in Overland Monthly is accompanied by a photograph of a notable Native American, often deliberately posing for the camera. On the first page is a portrait of the aged Geronimo, shot by the official photographer of the Trans-Mississippi Exposition and Indian Congress, F. A. Rinehart. Described as having a “deeply wrinkled face, scarred and seamed with seventy years of treachery and cunning,” Geronimo stands as a nostalgic reminder of the Apache wars two decades earlier, a nostalgia that further reduces the Indian into an object, a memento.[10. Ibid., p. 510.]
Paul Greenhalgh argues in Ephemeral Vistas, his study of late nineteenth and early twentieth century expositions and world’s fairs, that this process of objectification depended upon a blurring between entertainment and education:
Between 1889 & 1914, the exhibitions [the expositions and world’s fairs] became a human showcase, when people from all over the world were brought to sites in order to be seen by others for their gratification and education. […] Through this twenty-five year period it would be no exaggeration to say that as items of display, objects were seen to be less interesting than human beings, and through the medium of display, human beings were transformed into objects.[11. Paul Greenhalgh, Ephemeral Vistas: A History of the Expositions Universelles, Great Exhibitions and World’s Fairs, 1851-1939 (New York: St. Martin’s Press, 1988), p. 82.]
Surely Greenhalgh had in mind, among other examples, the representation of Native Americans as they appeared at the Omaha Exposition of 1898 and the St. Louis Exposition in 1904. Evidence abounds that “human beings were transformed into objects” at both events. According to David R. Francis, the former governor of Missouri and chairman of the St. Louis Exposition’s executive committee, Geronimo “illustrated at once a native type and an aboriginal personage of interest alike to special students and passing throngs of visitors.”[12. David R. Francis, The Universal Exposition of 1904 (St. Louis: Louisiana Purchase Exposition Company, 1913), p. 529.] Geronimo, according to this formulation, is a “type,” which scholars and tourists alike can find interest in, providing the turn-of-the-century equivalent of edutainment.
Perhaps “edutainment” is too light a word, for it glosses over the complex power relations at work in Native American exhibitions. Linking the open displays of Native Americans in expositions like St. Louis with the more insidious Foucauldian panopticons that structured modern prisons, the historian Jo Ann Woodsum reasons, “As in the panopticon, the person(s) on display are under constant surveillance and therefore participate in their own discipline before the omnipresent gaze of the colonial eye.” Woodsum concludes that “Americans could gaze on their vanquished enemies [the Indians] with a twofold purpose. First, to acknowledge their triumph over a terrible obstacle on the road to progress. Second, as a way of reconciling the bloody nature of that triumph of empire with the foundation of the country as a democratic republic.”[13. Jo Ann Woodsum, “‘Living Signs of Themselves’: A Research Note on the Politics and Practice of Exhibiting Native Americans in the United States at the Turn of the Century,” UCLA Historical Journal 13 (1993), pp. 114-118.] The displays, Woodsum suggests, patch over an enormous ideological rift in American history. Indeed, there is a redemptive element to the fair, but in the case of Geronimo, I would argue, what is redeemed is not the nation, but the native. “Here,” Francis writes in The Universal Exposition of 1904, the official account of the St. Louis event, “the once bloody warrior Geronimo completed his own mental transformation from savage to citizen and for the first time sought to assume both the rights and the responsibilities of the high stage.”[14. Francis, p. 529.] The exposition was nothing less than the means through which Geronimo, whose name was once invoked as a kind of bogeyman, became the paragon of citizenship.[15. Consider what one pioneer’s granddaughter recalls: “When my mother was growing up, people said to their children, ‘If you don’t behave, Geronimo will get you.’” Quoted in C. L. Sonnichsen, “From Savage to Saint: A New Image for Geronimo,” Journal of Arizona History 27 (1986), p. 8.]
Samuel M. McCowan, the superintendent of the Chilocco Indian Training School in Oklahoma who became the director of the St. Louis Indian exhibits, had wished to present examples of Indian industry from tribes as diverse as Navajo, Pueblo, Apache, and Sioux. These Native Americans who sold their “native” crafts—pottery, beads, baskets, blankets, buckskins, silver jewelry—stood in sharp contrast to Geronimo, who sat in his booth signing photographs and hawking souvenirs (like the buttons taken from his coat, of which he had a curiously large supply).[16. Debo, pp. 400-405.] McCowan initially had felt that Geronimo was “no more than a blatant blackguard, living on a false reputation,” but he arranged for the Chiricahua Apache to visit the fair anyway, since his presence would guarantee a large crowd for the more educational aspects of the Indian exhibit.[17. Robert A. Trennert, “A Resurrection of Native Arts and Crafts: The St. Louis World’s Fair, 1904,” Missouri Historical Review 87 (1993), pp. 286-288.] As Geronimo’s time at the Fair came to a close, however, even McCowan changed his mind about Geronimo. McCowan almost gushes as he reports back to the army captain responsible for guarding the Apache warrior in Oklahoma:
He really has endeared himself to whites and Indians alike. With one or two exceptions, when he was not feeling well, he was gentle, kind and courteous. I did not think I could ever speak so kindly of the old fellow whom I have always regarded as an incarnate fiend. I am very glad to return him to you in as sound and healthy condition as when you brought him here.[18. Debo, p. 415.]
Geronimo, redeemed through his budding civility (not to mention his newfound interest in capitalism), impressed McCowan just as he impressed so many other visitors. As one Arizona visitor to the Louisiana Purchase Exposition remarked, Geronimo “had been tamed and looked alright.”[19. Robert A. Trennert, “Fairs, Expositions, and the Changing Image of Southwestern Indians, 1876-1904,” New Mexico Historical Review 62 (1987), p. 146.] Two decades earlier that same visitor from the Southwest might have been clamoring for Geronimo’s hanging. These changes in the attitudes of those around him are the fruits of converting “from savage to citizen.”
[heading]Geronimo the Reader and Spectator[/heading]
[quote]When people first came to the World’s Fair they did nothing but parade up and down the streets. When they got tired of this they would visit the shows. There were many strange things in these shows.[/quote]
– Geronimo, describing his experience at the 1904 St. Louis World’s Fair [20. Geronimo and S. M. Barrett, Geronimo: His Own Story, ed. Frederick Turner (1906; New York: Meridian-Penguin, 1996), p. 156.]
But what did Geronimo think about all this? We can begin to hazard some guesses because Geronimo told us in the most subtle of ways. In 1905, back in custody at Fort Sill, Geronimo agreed to tell S. M. Barrett, a school superintendent from a nearby town, his life story. “Each day,” Barrett recalls, “he had in mind what he would tell and told it in a very clear, brief manner. […] Whenever his fancy led him, there he told whatever he wished to tell and no more.” Geronimo controlled what was said, how it was said, and when it was said. When asked a question after the first interview session, Geronimo simply responded, “Write what I have spoken.”[21. Ibid., p. 41.] Refusing to speak if a stenographer was present, Geronimo crafted an autobiography which is the legacy of an oral tradition. Doubly so—since he told his story to an interpreter, Asa Daklugie, who then told it to Barrett, at which point the story was put down in writing. Geronimo speaks, but someone else writes.[21. The relationship between Geronimo’s orality and literacy would make for a very interesting case study. Geronimo was illiterate, yet there was one word which Geronimo could write: his signature, a line of clumsy block letters G-E-R-O-N-I-M-O, with which he autographed his photographs.] The manuscript was then submitted for approval to the War Department, whose Secretary had found that “there are a number of passages which, from the departmental point of view, are decidedly objectionable.”[22. Geronimo and Barrett, p. 45.] It was only after President Roosevelt approved the manuscript in 1906 that this “autobiography” was published as Geronimo’s Story of His Life (it has since been reissued as Geronimo: His Own Story).
Rarely has Geronimo’s Own Story been treated as a literary text. More often it has been read as a historical document, or as Barrett phrases it in his preface, “an authentic record of the private life of the Apache Indians.”[23. Ibid., p. 1.] One notable exception is John Robert Leo’s “Riding Geronimo’s Cadillac: His Own Story and the Circumstancing of Text.” Written in the late seventies in the heady days of American deconstructionism, the article is quite concerned with the construction of meaning through the aporia in the text. In one rather—and by now predictable—Derridean move, for example, Leo announces that “Geronimo is he whose meaning always is emerging.”[24. John Robert Leo, “Riding Geronimo’s Cadillac: His Own Story and the Circumstancing of Text,” Journal of American Culture 1 (1978), p. 820.] While what Leo means by this statement of différance is too complicated to trace out here, I do want to emphasize Leo’s point that despite the translations, transcribing, and censorship that His Own Story underwent, “a residue of Geronimo’s way of seeing comes through the repressive imprint of white textual authority.”[25. Ibid., p. 824.] In other words, Geronimo circumvents a repressive ideological textual apparatus in order to convey a reading of the dominant white culture that goes against the grain—what Stuart Hall calls an “oppositional reading,” a reading that “detotalizes the message in the preferred code in order to retotalize the message within some alternative framework of reference.”[26. Stuart Hall, “Encoding, Decoding,” The Cultural Studies Reader, ed. Simon During (New York: Routledge, 1993), p. 103.] Geronimo begins the chapter of His Own Story called “At the World’s Fair,” with just such an encoding:
When I was at first asked to attend the St. Louis World’s Fair I did not wish to go. Later, when I was told that I would receive good attention and protection, and that the President of the United States said that it would be all right, I consented.[27. Geronimo and Barrett, p. 155.]
What Geronimo does not say is that he did not wish to go because the government was only willing to pay $1 per day for appearing at the exposition, while a commercial promoter had offered Geronimo $100 per month. Once the government made it clear that Geronimo could only leave his compound at Fort Sill under the War Department’s terms, he acquiesced.[28. Trennert, “Native Arts and Crafts,” pp. 287-288; Debo, p. 410.] Or, as he phrased it, in a way that puts the power back in his hands, “I consented.” Of course, Geronimo would receive “good attention and protection” during his trip—in other words, close supervision by government guards.
What allows Geronimo to decode the fair, revealing some of its absurdities and paradoxes, and then re-encode his reading in an understated narrative that we ourselves must decode, is his appreciation of the power of the printed word, a lesson learned during his warrior days of the 1880s. Geronimo demonstrates this awareness in a transcript of his March 25, 1886 parley with General Crook:
I do not want you [General Crook] to believe any bad papers about me. I want the papers sent you to tell the truth about me, because I want to do what is right. Very often there are stories put in the newspapers that I am to be hanged. I don’t want that any more. When a man tries to do right, such stories ought not to be put in the newspapers. […] Don’t believe any bad talk you hear about me. The agents and the interpreter hear that somebody has done wrong, and they blame it all on me. Don’t believe what they say. […] I think I am a good man, but in the papers all over the world they say I am a bad man.[29. Britton Davis, The Truth About Geronimo, ed. M. M. Quaife (New Haven: Yale University Press, 1929), pp. 202-203.]
Geronimo brought this understanding that “bad papers” tell “bad talk” to bear as he told Barrett his thoughts about his six months at the St. Louis World’s Fair, where, when he was not selling photographs and buttons or, as he did every Sunday, roping buffalo for delighted audiences in a Wild West show, he would himself venture to the shows. “There were,” Geronimo decided, “many strange things in these shows.”[30. Geronimo and Barrett, p. 156.] What follows in the rest of the chapter is the exhibition object par excellence, whom audiences come to gawk at, speaking about what he finds strange, what he finds alien about the fair. And Geronimo does so in such an understated way, focusing on seemingly unrelated shows, that we might wonder what underlying sentiments the old Apache hoped to convey in this narrative.
Of all the “many strange things” at the World’s Fair, of all the marvels and exhibits—the Exposition power plant, the Swiss chalet, the arc lighting, the hot air balloon races, the automobile showcase—what does Geronimo remember, or at least tell Barrett about? Most telling is that Geronimo recounts a number of acts which involve dissimulation. Watching two Turks brandishing scimitars in a sham battle, he “expected both to be wounded or perhaps killed, but neither one was harmed.” In another show a “strange-looking negro” sat bound in a chair, his hands tied behind his back. In a moment, the escape artist was free. Geronimo tells Barrett that “I do not understand how this was done. It was certainly a miraculous power, because no man could have released himself by his own efforts.” In the same vein, Geronimo witnesses a magic show, in which a variation of the classic trick of sawing a woman in half is performed. “I heard the sword cut through the woman’s body,” Geronimo recalls, “and the manager himself said she was dead; but when the cloth was lifted from the basket she stepped out, smiled, and walked off the stage.” The magic for Geronimo, as he tells it, lies not in the illusion that a woman’s body was sliced in half, but in how she healed. “I would like to know how she was so quickly healed,” Geronimo asks, “and why the wounds did not kill her.”[31. Geronimo and Barrett, p. 156.]
Men who fiercely fight and are not injured. A man who escapes the inescapable. A woman whose mortal wound disappears. I would venture that Geronimo is not simply dictating a chronological account of his wanderings through the midway, but is consciously, strategically constructing a discourse on power and the evasion of its effects. Perhaps Geronimo sees in the black escape artist who wrests himself free with “a miraculous power” a version of his own struggle against the repressive white world.
A visit to a glassmaker likewise turns into a meditation on deception, authority, and control:
I had always thought that these things [glassware] were made by hand, but they are not. The man had a curious little instrument, and whenever he would blow through this into a little blaze the glass would take any shape he wanted it to. I am not sure, but I think that if I had this kind of an instrument I could make whatever I wished. There seems to be a charm about it. But I suppose it is very difficult to get these little instruments, or other people would have them.[32. Ibid., p. 160.]
Here Geronimo imagines what it would be like to “make whatever I wished,” a tantalizing power for one whose land and family were torn away from him a quarter of a century earlier. Geronimo recognizes the impossibility of such wishing, though, hinting that if such powers were readily available no one would want for anything. What it is that makes possessing “these little instruments” of power so very difficult is left unsaid—a striking silence in the text.
The conclusion of the chapter in His Own Story devoted to the World’s Fair is particularly stirring—and particularly coded. Geronimo, at the fair as a representative of one group of anthropological specimens, mentions an encounter with another group of anthropological specimens, “some little brown people” that United States troops had “captured recently on some islands far away from here.” These were Iggorrotes from the Philippines, about whom Geronimo had “heard that the President sent them to the Fair so they could learn some manners, and when they went home teach their people how to dress and how to behave.”[33. Ibid.] On the surface Geronimo appears to distance himself from these “brown people,” disavowing any similarities between his situation and theirs. But this is to ignore Geronimo’s next remark, in which he implicitly places himself in the same subject position as the Filipinos:
I am glad I went to the Fair. I saw many interesting things and learned much of the white people. They are a very kind and peaceful people.[34. Ibid., pp. 161-162.]
Just as the Iggorrotes were to learn how to dress and behave by observing whites, so too did Geronimo learn from the whites. I would argue that this closing passage is laced with irony. What exactly did Geronimo learn of the white people? Not of their technology, their engineering, their art—all on display at the Exposition—nor how to dress and how to behave. Rather, he learned of the mechanisms of power, of deception, of a feigned aggression which is merely a mask for real violence. Geronimo concludes that “had this [Exposition] been among the Mexicans I am sure I should have been compelled to defend myself often.”[35. Ibid., p. 162.] Surely this is an allusion to Geronimo’s earlier days, when he did have to defend himself against Mexicans, but also, unspoken here, against whites, who deceived Geronimo and his Apache band time and time again with their false promises and broken treaties.[36. I have not the space to present a detailed history of the Apache Wars and the various treaties and negotiations which pushed Geronimo and his tribe into a reservation system, but suffice it to say that Geronimo himself covers this history earlier in His Own Story, especially pages 119-131, which makes Geronimo’s conclusion all that much more ironic.] A “very kind and peaceful people”? Hardly, if one adopts an oppositional reading, as Geronimo dryly does.
Barnum, P. T. Struggles and Triumphs; or, Forty Years’ Recollections of P. T. Barnum. Buffalo: Courier Company, 1882.
Buel, James W. Louisiana and the Fair: An Exposition of the World, Its People and Their Achievements. Vol. 1. 10 vols. Saint Louis: World’s Progress Publishing Company, 1904.
Clum, Woodworth. Apache Agent: The Story of John P. Clum. 1936. Lincoln: University of Nebraska Press, 1978.
Davis, Britton. The Truth About Geronimo. Ed. M. M. Quaife. New Haven: Yale University Press, 1929.
Debo, Angie. Geronimo: The Man, His Time, His Place. Norman: University of Oklahoma Press, 1976.
Francis, David R. The Universal Exposition of 1904. St. Louis: Louisiana Purchase Exposition Company, 1913.
Geronimo, and S. M. Barrett. Geronimo: His Own Story. Ed. Frederick Turner. 1906. New York: Meridian-Penguin, 1996.
Greenhalgh, Paul. Ephemeral Vistas: A History of the Expositions Universelles, Great Exhibitions and World’s Fairs, 1851-1939. New York: St. Martin’s Press, 1988.
Hall, Stuart. “Encoding, Decoding.” The Cultural Studies Reader. Ed. Simon During. New York: Routledge, 1993. 90-103.
Harriman, Mary Alice. “The Congress of American Aborigines at the Omaha Exposition.” Overland Monthly 33 (1898): 505-512.
Leo, John Robert. “Riding Geronimo’s Cadillac: His Own Story and the Circumstancing of Text.” Journal of American Culture 1 (1978): 818-837.
Reddin, Paul. Wild West Shows. Urbana: University of Illinois Press, 1999.
Sonnichsen, C. L. “From Savage to Saint: A New Image for Geronimo.” Journal of Arizona History 27 (1986): 5-34.
Trennert, Robert A. “Fairs, Expositions, and the Changing Image of Southwestern Indians, 1876-1904.” New Mexico Historical Review 62 (1987): 127-150.
—. “A Resurrection of Native Arts and Crafts: The St. Louis World’s Fair, 1904.” Missouri Historical Review 87 (1993): 274-292.
Woodsum, Jo Ann. “‘Living Signs of Themselves’: A Research Note on the Politics and Practice of Exhibiting Native Americans in the United States at the Turn of the Century.” UCLA Historical Journal 13 (1993): 110-129.
[This is the text, more or less, of the talk I delivered at the 2011 biennial meeting of the Society for Textual Scholarship, which took place March 16-18 at Penn State University. I originally planned on talking about the role of metadata in two digital media projects—a topic that would have fit nicely with STS’s official mandate of investigating print and digital textual culture. But at the last minute (i.e. the night before), I changed the focus of my talk, turning it into a thinly-veiled call for digital textual scholarship (primarily the creation of digital editions of print works) to rethink everything it does. (Okay, that’s an exaggeration. But I do argue that there’s a lot the creators of digital editions of texts should learn from born-digital creative projects.)
Also, it was the day after St. Patrick’s Day. And the fire alarm went off several times during my talk.
None of these events are related.]
The Poetics of Metadata and the Potential of Paradata
in We Feel Fine and The Whale Hunt
I once made fun of the tendency of academics to begin their papers by apologizing in advance for the very same papers they were about to begin. I’m not exactly going to apologize for this paper. But I do want to begin by saying that this is not the paper I came to give. I had that paper, it was written, and it was a good paper. It was the kind of paper I wouldn’t have to apologize for.
But, last night, I trashed it.
I trashed that paper. Call it the Danny Boy effect, I don’t know. But it wasn’t the paper I felt I needed to deliver, here, today.
Throughout the past two days I’ve detected a low-level background hum in the conference rooms, a kind of anxiety about digital texts and how we interact with them. And I wanted to acknowledge that anxiety, and perhaps even gesture toward a way forward in my paper. So, I rewrote it. Last night, in my hotel room. And, well, it’s not exactly finished. So I want to apologize in advance, not for what I say in the paper, but for all the things I don’t say.
My original talk had positioned two online works by the new media artist Jonathan Harris as two complementary expressions of metadata. I had a nice title for that paper. I even coined a new word in my title.
But this title doesn’t work anymore.
I have a new title. It’s a bit more ambitious.
But at least I’ve still got that word I coined.
It’s a lovely word. And truth be told, just between you and me, I didn’t coin it. In the social sciences, paradata refers to data about the data collection process itself—say the date or time of a survey, or other information about how a survey was conducted. But there are other senses of the prefix “para” I’m trying to evoke. In textual studies, of course, para-, as in paratext, is what Genette calls the threshold of the text. I’m guessing I don’t have to say anything more about paratext to this audience.
But there’s a third notion of “para” that I want to play with. It comes from the idea of paracinema, which Jeffrey Sconce first described in 1996. Paracinema is a kind of “reading protocol” that valorizes what most audiences would otherwise consider to be cinematic trash. The paracinematic aesthetic redeems films that are so bad that they actually become worth watching—worth enjoying—and it does so in a confrontational way that seeks to establish a counter-cinema.
Following Sconce’s work, the videogame theorist Jesper Juul has wondered if there can be such a thing as paragames—illogical, improbable, and unreasonably bad games. Such games, Juul suggests, might teach us about our tastes and playing habits, and what the limits of those tastes are. And even more, such paragames might actually revel in their badness, becoming fun to play in the process.
Trying to tap into these three different senses of “para,” I’ve been thinking about paradata. And I’ve got to tell you, so far, it’s a mess. (And this part of my paper was actually a mess in the original version of my paper as well). My concept of paradata is a big mess and it may not mean anything at all.
This is what I have so far: paradata is metadata at a threshold, or paraphrasing Genette, data that exists in a zone between metadata and not metadata. At the same time, in many cases it’s data that’s so flawed, so imperfect that it actually tells us more than compliant, well-structured metadata does.
So let me turn now to We Feel Fine, a massive, ongoing digital databased storytelling project rich with metadata—and possibly, paradata.
We Feel Fine is an astonishing collection of tens of thousands of sentences extracted from tens of thousands of blog posts, all containing the phrase “I feel” or “I am feeling.” It was designed by new media artist Jonathan Harris and the computer scientist Sep Kamvar and launched in May 2006.
The project is essentially an automated script that visits thousands of blogs every minute, and whenever the script detects the words “I feel” or “I am feeling,” it captures that sentence and sends it to a database. As of early this year, the project has harvested 14 million expressions of emotions from 2.5 million people. And the site has done this at a rate of 10,000 to 15,000 “feelings” a day.
Let me repeat that: every day approximately 10,000 new entries are added to We Feel Fine.
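The harvesting step is simple to imagine. Here is my own minimal Python sketch of it, not the project’s actual code (the real crawler and its helper scripts run on dedicated servers):

```python
import re

# Sentences are kept only if they contain the trigger phrases.
FEELING = re.compile(r"\b(?:i feel|i am feeling)\b", re.IGNORECASE)

def extract_feeling_sentences(post_text):
    """Return every sentence containing 'I feel' or 'I am feeling'."""
    # Naive sentence split on terminal punctuation followed by whitespace.
    sentences = re.split(r"(?<=[.!?])\s+", post_text)
    return [s for s in sentences if FEELING.search(s)]

print(extract_feeling_sentences("The weather was awful. I feel better today!"))
# ['I feel better today!']
```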
The heart of the project appears to be the multifaceted interface that has six so-called “movements”—six ways of visualizing the data collected by We Feel Fine’s crawler.
The default movement is Madness, a swarm of fifteen-hundred colored circles and squares, each one representing a single sentence from a blog post, a single “feeling.” The circles contain text only, while the squares include images associated with the respective blog post.
The colors of the particles signify emotional valence, with shades of yellow representing more positive emotions, red signaling anger. Blue is associated with sad feelings, and so on. This graphic, by the way, comes from the book version of We Feel Fine.
The book came out in 2009. In it, Harris and Kamvar curate hundreds of the most compelling additions to We Feel Fine, as well as analyze the millions of blog posts they’ve collected with extensive data visualizations—graphs, and charts, and diagrams.
The book is an amazing project in and of itself and deserves its own separate talk. It raises important questions about archives, authorship, editorial practices, the material differences between a dynamic online project and a static printed work, and so on. I’ll leave these questions aside for now; instead, I want to turn to the site itself. Let’s look at the Madness movement in action.
(And here I went online and interacted with the site. Why don’t you do that too, and come back later?)
(Also, right about here a fire alarm went off. Which, semantically, makes no sense. The alarm turned on, but I said it went off.)
(I can’t reproduce the sound of that particular fire alarm going off. I bet you have some sort of alarm on your phone or something you could make go off, right?)
(No? You don’t? Or you’re just as confused about on and off as I am? Then enjoy this short video intermission, which interrupts my talk, which I’m writing and which you’re reading, about as intrusively as the alarms interrupted my panel.)
(Okay. Back to my talk, which I’m writing, and which you’re reading.)
In the Madness movement you can click on any single circle, and the “feeling” will appear at the top of the screen. Another click on that feeling will drill down to the original blog post in its original context. So what’s important here is that a single click transitions from the general to the particular, from the crowd to the individual. You can also click on the squares to show “feelings” that have an image associated with them. And you have the option to “save” these images, which sends them to a gallery, just about the only way you can be sure to ever find any given image in We Feel Fine again.
At the top of the screen are six filters you can use to narrow down what appears in the Madness movement. Working right to left, you can search by date, by location, the weather at that location at the time of the original blog post, the age of the blogger, the gender of the blogger, and finally, the feeling itself that is named in the blog post. While every item in the We Feel Fine database will have the feeling and date information attached to it, the age, gender, location, and weather fields are populated only for those items in which that information is publicly available—say a LiveJournal or Blogger profile that lists that information, or a Flickr photo that’s been geotagged.
What I want to call your attention to before I run through the other five movements of We Feel Fine is that these filters depend upon metadata. By metadata, I mean the descriptive information the database associates with the original blog post. This metadata not only makes We Feel Fine browsable, it makes it possible. The metadata is the data. The story—if there is one to be found in We Feel Fine—emerges only through the metadata.
You can manipulate the other five movements using these filters. At first, for example, the Murmurs movement displays a reverse chronological streaming, like movie credits, of the most recent emotions. The text appears letter-by-letter, as if it were being typed. This visual trick heightens the voyeuristic sensibility of We Feel Fine and makes it seem less like a database and more like a narrative, or even more to the point, like a confessional.
The Montage movement, meanwhile, organizes the emotions into browsable photo galleries:
By clicking on a photo and selecting save, you can add photos to a permanent “gallery.” Because the database grows so incredibly fast, this is the only way to ensure that you’ll be able to find any given photograph again in the future. There’s a strong ethos of ephemerality in We Feel Fine. To use one of Marie-Laure Ryan’s metaphors for a certain kind of new media, We Feel Fine is a kaleidoscope, an assemblage of fragments always in motion, never the same reading or viewing experience twice. We have little control over the experience. It’s only through manipulating the filters that we can hope to bring even a little coherency to what we read.
The next of the five movements is the Mobs movement. Mobs provides five separate data visualizations of the most recent fifteen-hundred feelings. One of the most interesting aspects of the Mobs movement is that it highlights those moments when the filters don’t work, or at least not very well, because of missing metadata.
For instance, clicking the Age visualization tells us that 1,223 (of the most recent 1,500) feelings have no age information attached to them. Similarly, the Location visualization draws attention to the large number of blog posts that lack any metadata regarding their location.
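A toy illustration of what this counting amounts to; the records and field names below are my own invention, not We Feel Fine’s actual schema:

```python
# Absent metadata becomes countable information in its own right.
feelings = [
    {"feeling": "happy",  "age": 24,   "location": "Ohio"},
    {"feeling": "lonely", "age": None, "location": None},
    {"feeling": "crazy",  "age": None, "location": "Texas"},
]

missing_age = sum(1 for f in feelings if f["age"] is None)
missing_location = sum(1 for f in feelings if f["location"] is None)
print(missing_age, missing_location)  # 2 1
```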
Unlike many other massive datamining projects, say, Google’s Ngram Viewer, We Feel Fine turns its missing metadata into a new source of information. In a kind of playful return of the repressed, the missing metadata is colorfully highlighted—it becomes paradata. The null set finds representation in We Feel Fine.
The Metrics movement is the fourth movement. And it shows what Kamvar and Harris call the “most salient” feelings, by which they mean “the ways in which a given population differs from the global average.”
Right now, for example, we see that “Crazy” is trending 3.8 times more than normal, while people are feeling “alive” 3.1 times more than usual. (Good for them!) Here again we see an ability to map the local against the global. It addresses what I see as one of the problems of large-scale data visualization projects, like the ones that Lev Manovich calls “cultural analytics.”
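The arithmetic behind those multipliers is presumably something like a local rate divided by a global baseline rate. This is my guess at the calculation, with invented numbers chosen to match the 3.8 figure:

```python
# Salience as a ratio: how much more often a feeling occurs in the
# current sample than in the global average. Numbers are invented.
def salience(local_count, local_total, global_count, global_total):
    return (local_count / local_total) / (global_count / global_total)

# Say "crazy" appears 57 times in the 1,500 most recent feelings,
# against a global baseline of 10,000 per 1,000,000:
print(round(salience(57, 1500, 10_000, 1_000_000), 1))  # 3.8
```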
Ngram and the like are not forms of distant reading. There’s distant reading, and then there’s simply distance, which is all they offer. We Feel Fine mediates that distance, both visually, and practically.
(And here I was going to also say the following, but I was already in hot water at the conference for my provocations, so I didn’t say it, but I’ll write it here: Cultural analytics echo a totalitarian impulse for precise vision and control over broad swaths of populations.)
And finally, the Mounds movement, which simply shows big piles of emotion, beginning with whatever feeling is the most common at the moment, and moving on down the line toward less common emotions. The Mounds movement is at once the least useful visualization and the most playful, with its globs that jiggle as you move your cursor over them.
(Obviously you can’t see it above, in the static image but…) The mounds convey what game designers call “juiciness.” As Jesper Juul characterizes juiciness, it’s “excessive positive feedback in response to the player’s actions.” Or, as one game designer puts it, a juicy game “will bounce and wiggle and squirt…it feels alive and responds to everything that you do.”
Harris’s work abounds with juicy, playful elements, and they’re not just eye candy. They are part of the interface, part of the design, and they make We Feel Fine welcoming, inviting. You want to spend time with it. Those aren’t characteristics you’d normally associate with a database. And make no mistake about it. We Feel Fine is a database. All of these movements are simply its frontend—a GUI Java applet written in Processing that obscures a very deliberate and structured data flow.
The true heart of We Feel Fine is not the responsive interface, but the 26,000 lines of code running on 5 different servers, and the MySQL database that stores the 10,000 new feelings collected each and every day. In their book, Kamvar and Harris provide an overview of the dozen or so main components that make up We Feel Fine’s backend.
It begins with a URL server that maintains the list of URLs to be crawled and the crawler itself, which runs on a single dedicated server.
Pages retrieved by the crawler are sent to the “Feeling Indexer,” which locates the words “feel” or “feeling” in the blog post. The adjective following “feel” or “feeling” is matched against the “emotional lexicon”—a list of 2,178 feelings that are indexed by We Feel Fine. If the emotion is not in the lexicon, it won’t be saved. That emotion is dead to We Feel Fine. But if the emotion does match the index, the script extracts the sentence with that feeling and any other information available (this is where the gender, location, and date data are parsed).
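As I understand Kamvar and Harris’s description, the indexing step amounts to something like the following Python sketch. The lexicon here is a tiny stand-in for their actual list of 2,178 feelings:

```python
import re

# A reconstruction of the "Feeling Indexer" logic, not the real code.
LEXICON = {"fine", "alive", "crazy", "lonely", "better"}

def index_feeling(sentence):
    """Return (feeling, sentence) if the named feeling is in the lexicon, else None."""
    match = re.search(r"\bfeel(?:ing)?\s+(\w+)", sentence, re.IGNORECASE)
    if not match:
        return None
    feeling = match.group(1).lower()
    if feeling not in LEXICON:
        return None  # the emotion is "dead" to We Feel Fine
    return (feeling, sentence)

print(index_feeling("I feel crazy today"))      # ('crazy', 'I feel crazy today')
print(index_feeling("I feel discombobulated"))  # None
```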
Next there’s the actual MySQL database, which stores the following fields for each data item: the extracted sentence, the feeling, the date, time, post URL, weather, and gender, age, and location information.
Then there’s an open API server and several other client applications. And finally, we reach the front end.
Now, why have I just taken this detour into the backend of We Feel Fine?
Because, if we pay attention to the hardware and software of We Feel Fine, we’ll notice important details that might otherwise escape us. For example, I don’t know if you noticed from the examples I showed earlier, but all of the sentences in We Feel Fine are stripped of their formatting. This is because the Perl code in the backend converts all of the text to lowercase, removes any HTML tags, and eliminates any non-alphanumeric characters:
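Rendered in Python rather than the project’s Perl, those three steps amount to something like this (I assume whitespace is preserved, or the sentences would run together):

```python
import re

def normalize(sentence):
    """Approximate the backend's cleanup: lowercase, strip HTML, drop punctuation."""
    sentence = sentence.lower()
    sentence = re.sub(r"<[^>]+>", "", sentence)      # remove HTML tags
    sentence = re.sub(r"[^a-z0-9\s]", "", sentence)  # drop non-alphanumerics, keep spaces
    return sentence

print(normalize("I feel <em>SO</em> alive!!!"))  # i feel so alive
```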
The algorithm tampers with the data. The code mediates the raw information. In doing so, We Feel Fine makes both an editorial and aesthetic statement.
In fact, once we understand some of the procedural logic of We Feel Fine, we can discover all sorts of ways that the database proves itself to be unreliable.
I’ve already mentioned that if you express a feeling that is not among the 2,178 emotions tabulated, then your feeling doesn’t count. But there’s also a tricky linguistic misdirection the algorithm pulls off, in which the same word is interpreted by the machine as the same “feeling,” no matter how it is used in the sentence. In this way, the machine exhibits the same kind of “naïve empiricism” (to use Johanna Drucker’s dismissive phrase) that some humanists do when interpreting quantitative data.
And finally, consider many of the images in the Montage movement. When there are multiple images on a blog page, the crawler grabs only the biggest one—biggest not in dimensions, but in file size, because that’s easier for the algorithm to detect—and this image often ends up being the header image for the blog, rather than connected to the actual feeling itself, as in this example.
The star pattern happens to be a sidebar image, rather than anything associated with the actual blog post that states the feeling:
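The heuristic is easy to sketch; the file names and sizes below are invented for illustration:

```python
# Of all the images on the page, take the one with the largest file
# size, regardless of whether it belongs to the post itself.
def pick_image(image_sizes):
    """image_sizes maps image URL to size in bytes; return the biggest."""
    return max(image_sizes, key=image_sizes.get)

images = {
    "sidebar-stars.gif": 240_000,  # decorative sidebar graphic
    "post-photo.jpg": 80_000,      # the photo actually tied to the feeling
}
print(pick_image(images))  # sidebar-stars.gif
```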
So We Feel Fine forces associations. In experimental poetry or electronic literature communities, these kinds of random associations are celebrated. The procedural creation of art, literature, or music has a long tradition.
But in a database that seeks to be a representative “almanac of human emotions”? We’re in new territory there.
But in fact, it is representative, in the sense that human emotions are fungible, ephemeral, disjunctive, and, let’s face it, sometimes random.
Let me bring this full circle, by returning to the revised title of my talk. I mentioned at the beginning that I felt this low-grade but pervasive concern about digital work these past few days at STS. I’ve heard questions like Are we doing everything we can to make digital editions accessible, legible, readable, and teachable? Where are we failing, some people have wondered. Why are we failing? Or at least, Why have we not yet reached the level of success that many of the very same people at this conference were predicting ten or fifteen or, dare I say it, twenty years ago?
Maybe because we’re doing it wrong.
I want to propose that we can learn a lot from We Feel Fine as we exit out the far end of what some media scholars have called the Gutenberg Parenthesis.
What can we learn from We Feel Fine?
Imagine if textual scholars built their digital editions and archives using these four principles.
Think about We Feel Fine and what makes it work. Most importantly, We Feel Fine is a compelling reading experience. It’s not daunting. There’s a playful balance between interactivity and narrative coherence.
Secondly, and this goes back to my idea of paradata: Harris and Kamvar are not afraid to corrupt the source data, or to create metadata that blurs the line between metadata and not-metadata. They are not afraid to play with their sources, and for the most part, they are up front about how they’re playing with them.
This relates to the third feature of We Feel Fine that we should learn from. It’s open. Some of the source code is available. The list of emotions is available. There’s an open API, which anyone can use to build their own application on top of We Feel Fine, or, more generally, to extract data from it.
And finally, it’s juicy. I admit, this is probably not a term many textual scholars use in their research, but it’s essential for the success of We Feel Fine. The text responds to you. It’s alive in your hands, and I don’t think there’s much more we could ever ask from a text.
Drucker, Johanna. 2010. “Humanistic Approaches to the Graphical Expression of Interpretation” presented at the Hyperstudio: Digital Humanities at MIT, May 20, Cambridge, MA. http://mitworld.mit.edu/video/796.
Genette, Gerard. 1997. Paratexts: Thresholds of Interpretation. Cambridge: Cambridge University Press.
[I was on a panel called “The Open Professoriat(e)” at the 2011 MLA Convention in Los Angeles, in which we focused on the dynamic between academia, social media, and the public. My talk was an abbreviated version of a post that appeared on samplereality in July. Here is the text of the talk as I delivered it at the MLA, interspersed with still images from my presentation. The original slideshow is at the end of this post. Co-panelists Amanda French and Erin Templeton have also posted their talks online.]
Rather than make an argument in the short time I have, I want to make a provocation, urging everyone here to consider the way social media can enable what I call tactical collaborations both within and outside of the professoriate.
I’ve always had trouble keeping the words tactic and strategy straight. Or, as early forms of the words appear in the OED, tactick and the curiously elongated stratagematick.
This quote comes from a 17th century translation of a history of Roman emperors (circa 240 AD).
I love the quote and it tells me that tactics and strategy have always been associated with battle. But I still have trouble telling one from the other. I know one is, roughly speaking, short term, while the other is long range. One is the details and the other the big picture.
I’ll blame the old board game Stratego for my confusion. The placement of my flag, the movement of my scouts, that seemed tactical to me, yet the game was called Stratego.
Even diving into the etymology of the words doesn’t help much at first: Tactic is from the ancient Greek τακτóς, meaning arranged or ordered.
While Strategy comes from the Greek στρατηγóς, meaning commander or general. A general is supposed to be a big-picture kind of guy, so I guess that makes sense. And I suppose the arrangement of individual elements comes close to the modern day meaning of a military tactic.
All of this curiosity about the meaning of the word tactic began last May, when Dan Cohen and Tom Scheinfeldt at the Center for History and New Media at George Mason University announced a crowd-sourced book called Hacking the Academy. They announced it on Twitter on Friday, May 21 and by Friday, May 28, one week later, all the submissions were in. 330 submissions from nearly 200 people.
The collection is now in the final stages of editing, with a table of contents of around 60 pieces by 40 or so different authors. It will be peer-reviewed and published by Digital Culture Books, an imprint of the University of Michigan Press. As you can imagine, the idea of crowdsourcing a scholarly book, in a week no less, generated excitement, questions, and some worthwhile skepticism.
And it was one of these critiques of Hacking the Academy that prompted my thoughts about tactical collaboration. Jennifer Howard, a senior reporter for The Chronicle of Higher Education, posed several questions that would-be hackers ought to consider during the course of hacking the academy. It was Howard’s last question that resonated most with me.
Have you looked for friends in the enemy camp lately? Howard cautioned us that some of the same institutional forces we think we’re fighting might actually be allies when we want to be a force for change. I read Howard’s question and I immediately began rethinking what collaboration means. Instead of a commitment, it’s an expedience. Instead of strategic partners, find immediate allies. Instead of full frontal assaults, infiltrate and disseminate.
In academia we have many tactics for collaboration, but very little tactical collaboration.
And this is how I defined tactical collaboration:
I’m reminded of de Certeau’s vision of tactics in The Practice of Everyday Life. Unlike a strategy, which operates from a secure base, a tactic, as de Certeau writes, operates “in a position of withdrawal…it is a maneuver ‘within the enemy’s field of vision’” (37).
De Certeau goes on to add that a tactic “must vigilantly make use of the cracks….It poaches in them. It creates surprises in them. It can be where it is least expected” (37).
So that’s what a tactic is. I should’ve skipped the OED and Stratego and headed straight for de Certeau. He teaches us that strategies, like institutions, depend upon dominance over space—physical as well as discursive space. But tactics rely upon momentary victories in and over time. Tactics require agility, surprise, feigned retreats as often as real retreats. They require collaborations that the more strategically-minded might otherwise discount. And social media presents the perfect landscape for these tactical collaborations to play out.
Despite my being here today, I’m very skeptical of institutions and associations. We live in a world where we can’t idly hope for or rely upon institutional support or recognition. To survive and thrive, humanists must be fleet-footed, mobile, insurgent. Decentralized and nonhierarchical. We need to stop forming committees and begin creating coalitions. We need affinities over affiliations, and networks over institutes.
Tactical collaboration is crucial for any humanist seeking to open up the professoriate, any scholar seeking to poach from the institutional reserves of knowledge production, any teacher seeking to challenge the ever intensifying bureaucratization and Taylorization of learning, any contingent faculty seeking to forge success and stability out of contingency.
We need tactical collaborations, and we need them now. The stratagematick may be the domain of emperors and institutions, but like the word itself, it’s quaint and outdated. Let tactics be our ruse and our practice.
Certeau, Michel de. The Practice of Everyday Life. Berkeley: University of California Press, 1984. Print.
Herodian. Herodians of Alexandria his imperiall history of twenty Roman caesars & emperours of his time / First writ in Greek, and now converted into an heroick poem by C.B. Staplyton. London: W. Hunt, 1652. Web. 14 July 2010.
[This is the text of my second talk at the 2011 MLA convention in Los Angeles, for a panel on “Close Reading the Digital.” My talk was accompanied by a Prezi “Zooming” presentation, which I have replicated here with still images (the original slideshow is at the end of this post). In 15 minutes I could only gesture toward some of the broader historical and cultural meanings that resonate outward from code—but I am pursuing this project further and I welcome your thoughts and questions.]
New media critics such as Nick Montfort and Matthew Kirschenbaum have observed that a certain “screen essentialism” pervades new media studies, in which the “digital event on the screen,” as Kirschenbaum puts it (Kirschenbaum 4), becomes the sole object of study at the expense of the underlying computer code, the hardware, the storage devices, and even the non-digital inputs and outputs that make the digital object possible in the first place. There are a number of ways to remedy this essentialism, and the approach that I want to focus on today is the close reading of code.
Friedrich Kittler has said that code is the only language that does what it says. But the close reading of code insists that code not only does what it says, it says things it does not do. Like any language, code operates on a literal plane—literal to the machine, that is—but it also operates on an evocative plane, rife with gaps, idiosyncrasies, and suggestive traces of its context. And the more the language emphasizes human legibility (for example, a high-level language like BASIC or Inform 7), the greater the chance that there’s some slippage in the code that is readable by the machine one way and readable by scholars and critics in another.
Today I want to close read some snippets of code from Micropolis, the open-source version of SimCity that was included on the Linux-based XO computers in the One Laptop per Child program.
Designed by the legendary Will Wright, SimCity was released by Maxis in 1989 on the Commodore 64, and it was the first of many popular Sim games, such as SimAnt and SimFarm, not to mention the enduring SimCity series of games—that were ported to dozens of platforms, from DOS to the iPad. Electronic Arts owns the rights to the SimCity brand, and in 2008, EA released the source code of the original game into the wild under a GPL License—a General Public License. EA prohibited any resulting branch of the game from using the SimCity name, so the developers, led by Don Hopkins, called it Micropolis, which was in fact Wright’s original name for his city simulation.
From the beginning, SimCity was criticized for presenting a naive vision of urban planning, if not an altogether egregious one. I don’t need to rehearse all those critiques here, but they boil down to what Ian Bogost calls the procedural rhetoric of the game. By procedural rhetoric, Bogost simply means the implicit or explicit argument a computer model makes. Rather than using words like a book, or images like a film, a game “makes a claim about how something works by modeling its processes” (Bogost, “The Proceduralist Style“).
In the case of SimCity, I want to explore a particularly rich site of embedded procedural rhetoric—the procedural rhetoric of crime. I’m hardly the first to think about the way SimCity or Micropolis models crime. Again, these criticisms date back to the nineties. And as recently as 2007, the legendary computer scientist Alan Kay called SimCity a “pernicious…black box,” full of assumptions and “somewhat arbitrary knowledge” that can’t be questioned or changed (Kay).
Kay goes on to illustrate his point using the example of crime in SimCity. SimCity, Kay notes, “gets the players to discover that the way to counter rising crime is to put in more police stations.” Of all the possible options in the real world—increasing funding for education, creating jobs, and so on—it’s the presence of the police that lowers crime in SimCity. That is the procedural rhetoric of the game.
And it doesn’t take long for players to figure it out. In fact, the original manual itself tells the player that “Police Departments lower the crime rate in the surrounding area. This in turn raises property values.”
It’s one thing for the manual to propose a relationship between crime, property values, and law enforcement, but quite another for the player to see that relationship enacted within the simulation. Players have to get a feel for it on their own as they play the game. The goal of the simulation, then, is not so much to win the game as it is to uncover what Lev Manovich calls the “hidden logic” of the game (Manovich 222). A player’s success in a simulation hinges upon discovering the algorithm underlying the game.
But, if the manual describes the model to us and players can discover it for themselves through gameplay, then what’s the value of looking at the code of the game? Why bother? What can it tell us that playing the game cannot?
Before I go any further, I want to be clear: I am not a programmer. I couldn’t code my way out of a paper bag. And this leads me to a crucial point I’d like to make today: you don’t have to be a coder to talk about code. Anybody can talk about code. Anybody can close read code. But you do need to develop some degree of what Michael Mateas has called “procedural literacy” (Mateas 1).
Let’s look at a piece of code from Micropolis and practice procedural literacy. This is a snippet from span.cpp, one of the many sub-programs called by the core Micropolis engine.
It’s written in C++, one of the most common mid-level programming languages—Firefox is written in C++, for example, as well as Photoshop, and nearly every Microsoft product. By paying attention to variable names, even a non-programmer might be able to discern that this code scans the player’s city map and calculates a number of critical statistics: population density, the likelihood of fire, pollution, land value, and the function that originally interested me in Micropolis, a neighborhood’s crime rate.
This specific calculation appears in lines 413-424. We start off with the crime rate variable Z at a baseline of 128, which is not as random as it seems: 128 is exactly half of 256, the number of distinct values an 8-bit byte can represent on the original SimCity platform, the 8-bit Commodore 64.
128 is the baseline and the crime rate either goes up or down from there: the land value variable is subtracted from Z, then the population density is added to Z, while the number of police stations lowers Z.
It’s just as the manual said: crime is a function of population density, land value, and police stations, and a strict function at that. But the code makes visible nuances that are absent from the manual’s pithy description of crime rates. For example, land that has no value—land that hasn’t been built upon or utilized in your city—has no crime rate. This shows up in lines 433-434:
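Since the original snippet isn’t reproduced here, a paraphrase of the calculation as I read it may help; the function and variable names below are my own illustrative choices, not the identifiers in the Micropolis source, and the clamping constants are approximations:

```cpp
#include <algorithm>

// A paraphrase of the crime scan, not the original Micropolis code.
// Names and clamp values are illustrative; the shape of the formula
// follows the description above.
int crimeRate(int landValue, int popDensity, int policeCoverage) {
    if (landValue == 0) {
        return 0;                  // unbuilt land has no crime rate
    }
    int z = 128 - landValue;       // baseline of 128, minus land value
    z += popDensity;               // population density pushes crime up
    z = std::min(z, 300);          // ceiling before police are counted
    z -= policeCoverage;           // police stations push crime down
    return std::clamp(z, 0, 250);  // clamp to the map's value range
}
```

Note how the zero-value check comes first: a tile with no land value exits the calculation before crime is ever computed.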
Also, because of this strict algorithm, there is no chance of a neighborhood existing outside of this model. The algorithm is, in Jeremy Douglass’s words when he saw this code, “absolutely deterministic.” A populous neighborhood with little police presence can never be crime free. Land value is likewise reduced to a set formula, seen in this equation in lines 264-271:
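The original equation isn’t reproduced here, but its shape looks something like the sketch below; the coefficients are invented for illustration, not taken from the Micropolis source—the point is the deterministic form, not the exact weights:

```cpp
#include <algorithm>

// Illustrative only: the coefficients are invented. Land value is a
// strict function of the four factors the manual itself names.
int landValue(int distToCenter, int terrainBonus, int pollution, int crime) {
    int v = 250 - distToCenter;  // closer to downtown is worth more
    v += terrainBonus;           // water and trees raise value
    v -= pollution;              // pollution drags value down
    v -= crime;                  // so does the crime rate
    return std::max(v, 0);       // value never goes negative
}
```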
Essentially these lines tell us that land value is a function of the property’s distance from the city center, the type of terrain, the nearby pollution, and the crime rate. Again, though, players will likely discover this for themselves, even if they don’t read the manual, which spells out the formula, explicitly telling us that “the land value of an area is based on terrain, accessibility, pollution, and distance to downtown.”
So there’s an interesting puzzle I’m trying to get at here. How does looking at the code teach us something new? If the manual describes the process, and the game enacts it, what does exploring the code do?
I think back to Sherry Turkle’s now classic work, Life on the Screen, about the relationship between identity formation and what we would now call social media. Turkle spends a great deal of time talking about what she calls, in a Baudrillardian fashion, the “seduction of the simulation.” And by simulations Turkle has in mind exactly what I’m talking about here, the Maxis games like SimCity, SimLife, and SimAnt that were so popular 15 years ago.
Turkle suggests that players can, on the one hand, surrender themselves totally to the simulation, openly accepting whatever processes are modeled within. On the other hand, players can reject the simulation entirely—what Turkle calls “simulation denial.” These are stark opposites, and our reactions to simulations obviously need not be entirely one or the other.
There’s a third alternative Turkle proposes: understanding the simulation, exploring its assumptions, both procedural and cultural (Turkle 71-72).
I’d argue that the close reading of code adds a fourth possibility, a fourth response to a simulation. Instead of surrendering to it, or rejecting it, or understanding it, we can deconstruct it. Take it apart. Open up the black box. See all the pieces and how they fit together. Even tweak the code ourselves and recompile it with our own algorithms inside.
When we crack open the code like this, we may well find surprises that playing the game or reading the manual will not tell us. Remember, code does what it says, but it also says things it does not do. Let’s consider the code for a file called disasters.cpp. Anyone with a passing familiarity with SimCity might be able to guess what a file called disasters.cpp does. It’s the routine that determines which random disasters will strike your city. The entire 408-line routine is worth looking at, but I’ll draw your attention to the section that begins at line 109, where the probabilities of the different possible disasters appear:
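The listing itself isn’t reproduced here, but the selection logic amounts to a switch over a random value with nine equally weighted outcomes. The sketch below uses stand-in names rather than the actual Micropolis routines, and the case layout is my reconstruction of the probabilities described in the text:

```cpp
#include <string>

// A sketch of the disaster selection, with stand-in names. A random
// value in [0, 8] yields nine equally likely slices, so two cases
// for a disaster means a 2/9 (about 22%) chance of it striking.
std::string pickDisaster(int r) {
    switch (r) {
        case 0: case 1: return "fire";        // 2/9, about 22%
        case 2: case 3: return "flood";       // 2/9, about 22%
        case 4:         return "none";        // sometimes you get lucky
        case 5:         return "tornado";
        case 6:         return "earthquake";
        case 7: case 8: return "monster";
        // In the original SimCity, part of this switch triggered an
        // airplane crash; in Micropolis that branch survives only as
        // a comment.
    }
    return "none";
}
```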
In the midst of rather generic biblical disasters (you see here there’s a 22% chance of a fire, and a 22% chance of a flood), there is a startling excision of code, the trace of which is only visible in the programmer’s comments. In the original SimCity there was a 1 out of 9 chance that an airplane would crash into the city. After 9/11 this disaster was removed from the code at the request of Electronic Arts.
Playing Micropolis, say perhaps as one of the children in the OLPC program, this erasure is something we’d never notice. And we’d never notice because the machine doesn’t notice—it stands outside the procedural rhetoric of the game. It’s only visible when we read the code. And then, it pops, even to non-programmers. We could raise any number of questions about this decision to elide 9/11. There are questions, for example, about the way the code is commented. None of the other disasters have any kind of contextual, historically-rooted comments, the effect of which is that the other disasters are naturalized—even the human-made disasters like the Godzilla-like monster that terrorizes an over-polluted city.
There are questions about the relationship between simulation, disaster, and history that call to mind Don DeLillo’s White Noise, where one character tells another, “The more we rehearse disaster, the safer we’ll be from the real thing…. There is no substitute for a planned simulation” (196).
And finally there are questions about corporate influence and censorship—was EA’s request to remove the airplane crash really a request, or more of a condition? How does this relate to EA’s more recent decision in October of 2010 to remove the Taliban from its most recent version of Medal of Honor? If you don’t know, a controversy erupted last fall when word leaked out that Medal of Honor players would be able to assume the role of the Taliban in the multiplayer game. After weeks of holding out, EA ended up changing all references to the Taliban to the unimaginative “Opposing Force.” So at least twice, EA, and by proxy, the videogame industry in general, has erased history, making it more palatable, or as a cynic might see it, more marketable.
I want to close by circling back to Michael Mateas’s idea of procedural literacy. My commitment to critical code studies is ultimately pedagogical as much as it is methodological. I’m interested in how we can teach everyday people, and in particular, nonprogramming undergraduate students, procedural literacy. I think these pieces of code from Micropolis make excellent material for novices, and in fact, I do have my videogame studies students dig around in this source code. Most of them have never programmed, let alone in C++, so I give them some prompts to get them started.
And for you today, here in the audience, I have similar questions, about the snippets of code that I showed, but also questions more generally about close reading digital objects. What other approaches are worth taking? What other games, simulations, or applications have the source available for study, and what might you want to look at with those programs? And finally, what are the limits of reading code from a humanist perspective?
So, it was entirely coincidental that the night before Anthologize’s release, I tweeted:
I had no idea that the One Week Team was working on a WordPress plugin that could take our blogs and turn them into formats suitable for e-readers or publishers like Lulu.com (the exportable formats include ePub, PDF, RTF, and TEI…so far). When I got a sneak preview of Anthologize via the outreach team’s press kit, it was only natural that I revisit my previous night’s tweet, with this update:
I’m willing to stand behind this statement—Twitter and Blogs are the first drafts of scholarship. All they need are better binding—and I’m even more willing to argue that Anthologize can provide that binding.
But the genius of Anthologize isn’t that it lets you turn blog posts into PDFs. There are already many ways to do this. The genius of the tool is the way it lets you remix a blog into a bound object. A quick look at the manage project page (larger image) will show how this works:
All of your blog’s posts are listed in the left column, and you can filter them by tag or category. Then you drag-and-drop specific posts into the “Parts” column on the right side of the page. Think of each Part as a separate section or chapter of your final anthology. You can easily create new parts, and rearrange the parts and posts until you’ve found the order you’re looking for.
Using the “Import Content” tool that’s built into Anthologize, you aren’t even limited to your own blog postings. You can import anything that has an RSS feed, from Twitter updates to feeds from entirely different blogs and blogging platforms (such as Movable Type or Blogger). You can remix from countless sources, and then compile it all together into one slick file. This remixing isn’t simply an afterthought of Anthologize. It defines the plugin and has enormous potential for scholars and teachers alike, ranging from organizing tenure material to building student portfolios.
Something else that’s neat about how Anthologize pulls in content is that draft (i.e. unpublished) posts show up alongside published posts in the left hand column. In other words, drafts can be published in your Anthologize project, even if they were never actually published on your blog. This feature makes it possible to create Anthologize projects without even making the content public first (though why would you want to?).
From Alpha to Beta to You
As excited as I am about the possibilities of Anthologize, don’t be misled into thinking that the tool is a ready-to-go, full-fledged publishing solution. Make no mistake about Anthologize: this is an extremely alpha version of the final plugin. If the Greeks had a letter that came before alpha, Anthologize would be it. There are several major known issues, and there are many features yet to add. But don’t forget: Anthologize was developed in under 200 hours. There were no months-long team meetings, no protracted management decisions, no obscene Gantt charts. The team behind Anthologize came and saw and coded, from brainstorm to repository in one week.
[pullquote align=”left”]The team behind Anthologize came and saw and coded, from brainstorm to repository in one week.[/pullquote]
The week is over, and they’re still working, but now it’s your turn too. Try it out, and let the team know what works, what doesn’t, what you might use it for, and what you’d like to see in the next version. There’s an Anthologize Users Group you can join to share with other users and the official outreach team, and there’s also the Anthologize Development Group, where you can share your bugs and issues directly with the development team.
As for me, I’m already working on a wishlist of what I’d like to see in Anthologize. Here are just a few thoughts:
More use of metadata. I imagine future releases will allow user-selected metadata to be included in the Anthologized content. For example, it’d be great to have the option of including the original publication date.
Cover images. It’s already possible to include custom acknowledgments and dedications in the opening pages of the Anthologized project, but it’ll be crucial to be able to include a custom image as the anthology front cover.
Preservation of formatting. Right now quite a bit of formatting is stripped away when posts are anthologized. Block quotes, for example, become indistinguishable from the rest of the text, as do many headers and titles.
Fine-grained image control. A major bug prevents many blog post images from showing up in the Anthologize-generated book. Once this is fixed, it’d be wonderful to have even greater control of images (such as image resolution, alignment, and captions).
I haven’t experimented with Anthologize on WordPressMU or BuddyPress yet, but it’s a natural fit. Imagine each user being able to cull through tons of posts on a multi-user blog, and publishing a custom-made portfolio, made up of posts that come from different users and different blogs.
As I play with Anthologize, talk with the developers, and share with other users, I’m sure I’ll come up with more suggestions for features, as well as more ways Anthologize can be used right now, as is. I encourage you to do the same. You’ll join a growing contingent of researchers, teachers, archivists, librarians, and students who are part of an open-source movement, but more importantly, part of a movement to change the very nature of how we construct and share knowledge in the 21st century.
Foursquare and its brethren (Gowalla, Brightkite, Loopt, and so on) are the latest social media darlings, but honestly, are they really all that useful? Sharing your location with your friends is not very compelling when you spend your life in the same four places (home, office, classroom, coffee shop). Are these apps really even fun? Does becoming the Mayor of a Shell filling station or earning the Crunked badge for checking into four different airport terminals on the same night* count as fun? I hope not. In truth, making fun of Foursquare is more fun than actually using Foursquare.
*The Crunked badge is for checking into four separate locations during a single evening. They don’t all have to be airport terminals. That’s just my own quirk.
Aside from the free chips I got for checking into a California Tortilla, the only redeeming value of these geolocation apps is that they offer the slightest glimmer—a glimmer!—of creative and pedagogical use. While some of the benefits of geolocation have been immediately seized upon by museums and historians—think of the partnership between Foursquare and the History Channel—very few people have considered using geolocation in a literary context. Even less attention has been paid to the ways geolocation can foster critical and creative thinking. So I’ve been pondering re-purposing Foursquare and its ilk in ways unintended and unforeseen by their creators.
[pullquote align=”right”]Let’s turn locative media into platforms for renegotiating space and telling stories[/pullquote]Following Rob MacDougall’s call for playful historical thinking, I’ve been imagining what you could call playful geographic thinking. Let’s turn locative media from gimmicky Entertainment coupon books and glorified historical guidebooks into platforms for renegotiating space and telling stories.
Play, as I’m using the word here, means free movement within a larger, more rigid structure. In this case, that rigid structure comes from the core mechanics of the different geolocation apps: checking in and tagging specific places with tips or comments. What’s supposed to happen is that users check in to bars or restaurants and then post tips on the best drinks or bargains. But what can happen, given the free movement within this structure, is that users can define their own places and add tips that range from lewd to absurd.
This is exactly what Dean Terry is doing. Along with his colleagues and students at the Emerging Media and Communication program at the University of Texas at Dallas, Dean has been renaming spaces and making his own places. Even better, Dean and his group at the MobileLab at UT Dallas are not only testing the limits of existing geolocation apps, they’re building one of their own.
I’m not designing my own app, but I am playing with the commercial apps. And again, by playing, I mean moving freely within a larger, more constrained structure. For instance, within my dully named campus office building, Robinson A, I’ve created my own space, The Office of Incandescent Light and Industrial Runoff. Which is pretty much how I think of my office. And I’m mayor there, thank you very much.
Likewise, when I’m home, I often check into the Treehouse of Sighs. I have an actual treehouse there, but the Treehouse of Sighs is not that one. The Treehouse of Sighs exists only in my mind. It’s a metaphysical Hotel California. You can check in any time you like, but you can never be there.
Just as evocative as creating your own space is tagging existing spaces with virtual graffiti, which you can use to create a counter-factual history of a place. Anyone who checks into the Starbucks on my campus can see my advice regarding the fireplace there. Also on GMU’s campus, I’ve uncovered Fenwick Library’s dirty little secret. And sometimes I leave surrealist tips in public places, like this epigram in yet another airport terminal:
All of this play has led me to think about using geolocative media with my students. Next spring I’m teaching an undergraduate class called “Textual Media,” a vague title that I’ve taken to describing as post-print fiction. My initial idea for using Foursquare was to have students add new venues to the app’s database, with the stipulation that these new venues be Foucauldian “Other Spaces”—parking decks, overpasses, bus depots, etc.—that stand in sharp contrast to the officially sanctioned places on Foursquare (coffee shops, restaurants, bars, etc.). One of the points I’d like to make is that much of our lives is actually spent in these nether-places that are neither here nor there. Tracking our movements in these unglamorous but not unimportant unplaces could be a revelation to my students. It might actually be one of the best uses of geolocation—to defamiliarize our daily surroundings.
I recently participated in a geolocation session at THATCamp that helped me refine some of these ideas. We had about fifteen historians, librarians, archivists, literary scholars, and other humanists at the session. We broke off into groups, with the mission of hacking existing geolocation apps for teaching or learning. I worked with Christa Willaford and Christina Jenkins, and as befits brainstorming about space, we left the windowless room, left the building entirely, and stood out near a small field (that’s not even on the outdated satellite image of the place) and came up with the idea we called Haunts.
Haunts is about the secret stories of spaces.
Haunts is about locative trauma.
Haunts is about the production of what Foucault calls “heterotopias”—a single real place in which incompatible counter-sites are layered upon or juxtaposed against one another.
The general idea behind Haunts is this: students work in teams, visiting various public places and tagging them with fragments of either a real life-inspired or fictional trauma story. Each team will work from an overarching traumatic narrative that they’ve created, but because the place-based tips are limited to text-message-sized bits, the story will emerge only in glimpses and traces, across a series of spaces.
[pullquote align=”left”]They’ve stumbled upon a fictional world haunting the real one.[/pullquote]Emerge for whom? For the other teams in the class. But also for random strangers using the apps, who have no idea that they’ve stumbled upon a fictional world augmenting the real one. A fictional world haunting the real one.
There are several twists that make Haunts more than simple place-based creative writing. For starters, most fiction doesn’t require any kind of breadcrumb trail more complicated than sequential page numbers. In Haunts, however, students will need to create clues to act as what Marc Ruppel calls migratory cues—nudging participants from one locale to the next, from one medium to the next. These cues might be suggestive references left in a tip, or perhaps obliquely embedded in a photograph taken at the check-in point. (Most geolocation apps allow photographs to be associated with a place; Foursquare is a holdout in this regard, though third-party services like picplz offer a work-around.)
Another twist subverts the tendency of geolocation apps to reward repeat visits to a single locale. Check in enough times at your coffee shop with Foursquare and you become “mayor” of the place. Haunts disincentivizes multiple visits. Check in too many times at the same place and you become a “ghost.” No longer among the living, you are stuck in a single place, barred from leaving tips anywhere else. Like a ghost, you haunt that space for the rest of the game. It’s a fate players would probably want to avoid, yet players will nonetheless be compelled to revisit destinations, in order to fill in narrative gaps as either writers or readers.
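To make the ghost rule concrete, here is a minimal sketch of how such a mechanic might work. Since Haunts exists only as an idea, everything here is an assumption: the threshold of five visits, the names, and the data structures are all my own invention:

```cpp
#include <map>
#include <string>

// A minimal sketch of the hypothetical Haunts "ghost" rule: check in
// too many times at one venue and you are trapped there, barred from
// checking in anywhere else. All names and the threshold are invented.
struct Player {
    std::map<std::string, int> checkins;  // venue -> visit count
    std::string hauntedVenue;             // empty while still "alive"

    bool isGhost() const { return !hauntedVenue.empty(); }

    // Returns false when the check-in is refused (a ghost is stuck).
    bool checkIn(const std::string& venue, int ghostThreshold = 5) {
        if (isGhost() && venue != hauntedVenue) {
            return false;                 // ghosts haunt a single place
        }
        int n = ++checkins[venue];
        if (n >= ghostThreshold) {
            hauntedVenue = venue;         // crossed the line: trapped here
        }
        return true;
    }
};
```

The design choice worth noting is that a ghost can still check in at the venue that trapped it, which preserves the narrative role of haunting a place rather than simply ejecting the player from the game.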
[pullquote align=”right”]Imagine the same traumatic kernel, being told again and again, from different points of view.[/pullquote]The final twist is that Haunts does not rely only upon Foursquare. All of the geolocative apps have the same core functionality. This means that one team can use Foursquare, while another team uses Gowalla, and yet another Brightkite. Each team will weave parallel yet diverging stories across the same series of spaces. Each Haunt hosts a number of haunts. The narrative and geographic path of a single team’s story should alone be engaging enough to follow, but even more promising is a kind of cross-pollination between haunts, in which each team builds upon one or two shared narrative events, exquisite corpse style. Imagine the same traumatic kernel, being told again and again, from different points of view. Different narrative and geographic points of view. Eventually these multiple paths could be aggregated into a master narrative—or more likely, a master database—so that Haunts could be seen (if not experienced) in its totality.
There is still much to figure out with Haunts. But I find the project compelling, and even necessary. The endeavor turns a consumer-based model of mobile computing into an authorship-based model. It is a uniquely collaborative activity, but also one that invites individual introspection. It imagines trauma as both private and public, deeply personal yet situated within shared semiotic domains. It operates at the intersection between game and story, between reading and writing, between the real and the virtual. And it might finally make geolocation worth paying attention to.
In a few days the latest iteration of THATCamp will convene on the campus of George Mason University, hosted by the Center for History and New Media. Except “convene” really isn’t the right word. Most of my readers will already know that The Humanities and Technology Camp is an “unconference,” which as Ethan Watrall explains on ProfHacker, is “a lightly organized conference in which the attendees themselves determine the schedule.” You can’t really convene such a self-emergent event. But 75 or so participants will nonetheless be there on Saturday morning, and we will indeed get started, figuring out the sessions democratically and then sharing ideas and conversation. This format takes the place of “sharing” (by which I mean dully reading) 20-minute papers that through a bizarre rift in the space-time continuum take 30 minutes to read, leaving little time for discussion.
[pullquote align=”left”]If you go into a panel knowing exactly what you’re going to say or what you’ve already said, there’s little room for exploration or discovery.[/pullquote]
The unconference obviously stands in contrast to the top-down, largely monologic model of the traditional conference. Most THATCamp attendees rave about the experience, and they find themselves craving similar open-ended panels at the more staid academic conferences in their respective fields. Change is slow to come, of course. What happens for the most part are slight tweaks to the existing model. Instead of four people reading 20-minute papers during a session, four people might share 20-minute papers beforehand, with the session time dedicated to talking about those 20-minute papers. Yet this model still relies on the sharing of prepared material. If you go into a panel knowing exactly what you’re going to say or what you’ve already said, there’s very little room for actual exploration or discovery. It reminds me of Nietzsche’s line that finding “truth” is like someone hiding an object in a bush and later being astonished to find it there. That’s the shape of disingenuous discovery at academic conferences.
So what’s a poor idealistic professor to do?
Let’s forget about unconferences, even as they gain momentum, and start thinking about underconferences.
What’s an underconference?
Before I answer that, let’s run through some other promising alternative conference models:
The Virtual Conference: This is the conference held entirely online, in which the time and space limitations of the real world can be broken at will. The recent Critical Code Studies Working Group, held over six weeks this spring, was a good example, though the conference was, unfortunately, only open to actual participants. The proceedings will be published on Electronic Book Review, however, and at least one research idea seeded at the virtual conference may see the light of the day in a more traditional publishing venue. HASTAC (Humanities, Arts, Science, and Technology Advanced Collaboratory) has had success with its virtual conference as well.
The Simulated Conference: Like Baudrillard’s simulacrum, this is the simulation of a conference for which there is no original, the conference for which there is no conference. This sounds impossible, but in fact I hosted an entirely simulated conference one weekend in February 2010. It was a particularly conference-heavy weekend for the digital humanities, and since I couldn’t attend any of them, I created one of my own: MarksDH2010. Spurred on at first by Ian Bogost and Matt Gold, the simulated conference turned into a weekend affair, hosted entirely on Twitter, and catered by Halliburton. Dozens of participants spontaneously joined in the fun, and in the very act of lampooning traditional conferences (e.g. see my notes on the fictional Henri Jenquin’s keynote), I humbly suggest we advanced the humanities by at least a few virtual inches. As I later explained, MarksDH2010 “was a folie à deux and then some.” You can read the complete archives, in chronological order, and decide for yourself about that characterization.
The Unconference: Do I need to say more about the unconference? Read about the idea in theory, or see it in practice by following the upcoming THATCamp Prime on Twitter.
The Underconference: The virtual conference and the simulated conference are both made possible by technology. They take place at a distance, mediated by screens. The final model I wish to consider is the opposite, rooted in physical space, requiring actual—not virtual—bodies. This is not the unconference, but the underconference. The prerequisite of the underconference is the conference. There is the official conference—say, the MLA—and at the very same time there is an entirely parallel conference, running alongside—no, under—the official conference. Think of it as the Trystero of academia. Inspired by the Situationists, Happenings, flash mobs, Bakhtin, ARGs, and the absurdist political theater of the Yippies, the underconference is the carnival in the churchyard. Transgressive play at the very doorstep of institutional order. And like most manifestations of the carnivalesque, the underconference is at its heart very serious business.
[pullquote align=”right”]The participants of the underconference are also participants in the conference. They are not enemies, they are co-conspirators.[/pullquote]
Let me be clear, though. The underconference is not a chaotic free-for-all. Just as carnival reinforces many of the ideas it seems to make fun of, the underconference ultimately supports the goals of the conference itself: sharing ideas, discovering new texts and new approaches, contributing to the production of knowledge, and even that tawdry business of networking. The participants of the underconference are also participants in the conference. They are not enemies, they are co-conspirators. The underconference is not mean-spirited; in fact, it seeks to overcome the petty nitpicking that counts as conversation in the conference rooms.
The Underconference is:
Playful, exploring the boundaries of an existing structure;
Collaborative, rather than antagonistic; and
Eruptive, not disruptive.
What might an underconference actually look like?
Whereas the work of the conference takes place in meeting rooms and exhibit halls, the underconference takes place in “the streets” of the conference: the hallways and stairwells, the lobbies and bars.
The underconference begins with a few “seed” shadow sessions, planned and coordinated events that occur in the public spaces of the conference venue: an unannounced poetry reading in a lobby, an impromptu Pecha Kucha projected inside an elevator, a panel discussion in the fitness room.
As the underconference builds momentum, bystanders who find themselves in the midst of an unevent are encouraged to recruit others and to hold their own improvised sessions.
The underconference has much to learn from alternate reality games (ARGs), and should incorporate scavenger hunts, geolocation, environmental puzzles, and even a reward or badge system that parodies the official system of awards and prizes.
I have reason to believe that at least a few of the major academic conferences would look the other way if they were to find themselves paired with an underconference, if not openly sanction a parallel conference. Support might eventually take the form of dedicated space, perhaps the academic equivalent of Harry Potter’s Room of Requirement.
Do you get the idea? It’s a bold and ambitious plan, and I don’t expect many to think it’s doable, let alone worthwhile. Which is exactly why I want to do it. My experiences with virtual conferences, simulated conferences, and unconferences have convinced me that good things come from challenging the conventions of academic discourse. For every institutionalized practice we must develop a counter-practice. For every preordained discussion there should be an infusion of unpredictability and surprise. For every conference there should be an underconference.
Two or three years ago it would have been difficult to imagine a university shuttering an internationally recognized program, one of the leading such programs in the country.
Oh, wait. Never mind.
That happens all the time.
My own experience tells me that it’s usually a marginalized field: one that uses new methodologies, produces hard-to-classify work, is heavily interdisciplinary, challenges many entrenched institutional forces, and is subject to an endless number of brutal personal and professional territorial battles. American Studies, Cultural Studies, Folklore Studies. It’s happened to them all.
Sometimes the programs die a slow death, downsized from a department to a program, then to a center, and finally to a URL. They’re dismantled one esteemed professor at a time, their budgets and their space shrinking ever smaller, their funding for graduate students dwindling to nothing. Sometimes the programs die spectacularly fast but no less ignobly, the executioner’s axe visible only in the instant replay. The recession makes this quick death easy to rationalize from a state legislator’s or university administrator’s perspective. Today’s cutting-edge initiative is tomorrow’s expendable expenditure.
Indeed, financial considerations seem to have driven a provost-appointed task force’s recommendation that the renowned film studies program at the University of Iowa be eliminated. Such drastic cutbacks make me wonder about innovative programs at my own university, where the state is sharply curtailing public funding. (The state has funded up to 70% of George Mason University’s budget in the recent past, but now Virginia only provides 25%, a figure that is certain to fall even lower in the years ahead.) And then I wonder about innovative programs and initiatives at other colleges and universities.
And then I fear for the digital humanities center.
There is no single model for the digital humanities center. Some focus on pedagogy. Others on research. Some build things. Others host things. Some do it all. Regardless, in most cases the digital humanities center is institutionally supported, grant dependent, physically situated, and powered by vision and personnel. A sudden change in any one of these underpinnings can threaten the existence of the entire structure.
Despite the noise at last year’s MLA Convention that the digital humanities were an emerging recession-proof, bubble-proof, bullet-proof field in academia, I fear for this awkward new hybrid. Funding is tight and it’s only going to get tighter. Sustainability is the biggest issue facing digital humanities centers across the country. Of course, digital humanities centers are often separate from standard academic units. I don’t know whether this auxiliary position will help or hurt them. In either case, it’s not unreasonable to assume that some of the digital humanities centers around today will ultimately disappear.
The death of the digital humanities center. It’s not inevitable everywhere, but it will happen somewhere.
Let me be clear: I am a true believer in the value of the digital humanities center, a space where faculty, students, and researchers can collaborate and design across disciplines, across technologies, across communities. I cut my own chops in the nineties working on the American Studies Crossroads Project, one of the only groups at the time seriously looking at how digital tools were transforming research and learning. I’m grateful to have friends in several of the most impressive digital humanities outfits on the East Coast. I have the feeling that the Center for History and New Media will always be around. The Maryland Institute for Technology in the Humanities is not going anywhere. The Scholars’ Lab will continue to be a gem at the University of Virginia.
There will always be some digital humanities center. But not for most of us.
Most of us working in the digital humanities will never have the opportunity to collaborate with a dedicated center or institute. We’ll never have the chance to work with programmers who speak the language of the humanities as well as Perl, Python, or PHP. We’ll never be able to turn to colleagues who routinely navigate grant applications and budget deadlines, who are paid to know about the latest digital tools and trends—but who’d know about them and share their knowledge even if they weren’t paid a dime. We’ll never have an institutional advocate on campus who can speak with a single voice to administrators, to students, to donors, to publishers, to communities about the value of the digital humanities.
There will always be digital humanities centers. But not for us.
Fortunately, digital humanities centers themselves realize this, as do funders such as the NEH’s Office of Digital Humanities and the Mellon Foundation, and outreach has become a major mission for the digital humanities.
And fortunately too, a digital humanities center is not the digital humanities. The digital humanities—or I should say, digital humanists—are much more diverse, much more dispersed, and stunningly resourceful to boot.
So if you’re interested in the transformative power of technology on your teaching and research, don’t sit around waiting for a digital humanities center to pop up on your campus or make you a principal investigator on a grant.
Act as if there’s no such thing as a digital humanities center.
Instead, create your own network of possible collaborators. Don’t hope for or rely upon institutional support or recognition. To survive and thrive, digital humanists must be agile, mobile, insurgent. Decentralized and nonhierarchical.
Stop forming committees and begin creating coalitions. Seek affinities over affiliations, networks over institutes.