“Superstitious Appliances” and Abusive Game Design

“Now It’s Personal: On Abusive Game Design” by Douglas Wilson and Miguel Sicart discusses the meaning of abusive game design and the rhetoric behind this design approach. Abusive game design focuses on creating a dialogue between the game designer and the player to force the player to experience something out of the ordinary and beyond his/her expectations. As the player experiences the game, he/she can begin to understand the designer behind the system.

Abusive game design differs from conventional, or contemporary, game design theory in that abusive game design seeks to establish a dialogue between player and designer through games that push the player outside the normal expectations, whereas conventional game design seeks to satisfy players’ desires so that they are challenged just enough and will feel satisfied with their actions. Conventional game design is a one-sided arrangement in which the game design adapts to the ideal and potential performances of the players so that the game always satisfies the user—the game designer is catering to the audience’s needs and wants. For instance, the game Frogger could be seen as having a conventional game design. As the player moves onto more challenging levels, the game is not impossible to beat and it is challenging enough to make the player feel accomplished when he/she beats a tough level or receives a high score. In addition, the designers of Frogger release expansion packs and numerous sequels to meet the players’ needs and some of these versions enable the players to access extra-hard modes or secret levels to showcase their skills and expertise. There are certain expectations that come with the game as well, for instance the themed levels, number of lives, and the intuitive way to play.

In contrast, games classified as having abusive game design force the player to think outside of how he/she would normally play a game and to have uncomfortable and unexpected experiences. Jason Nelson’s “Superstitious Appliances” demonstrates Aesthetic Abuse, specifically attacking the player’s sense of hearing. The homepage of the game emits overlapping voices that repeat the same sentences over and over, one of which sounds guttural and robotic. As the player clicks on certain areas of the homepage, he/she experiences various sounds consisting of high-pitched tones, bombing/exploding sounds, and one piece with uncomfortable silence. In terms of the player’s visual perception, the pictures are hard to decipher, with flickering images and hard-to-read, overlapping text that does not stay still long enough for the player to read.

This “user-unfriendliness” is what brings about an interaction between the player and the designer. The designer pushes the player right up to the breaking point, but still keeps the player intrigued, and the player feels as if he/she is fighting with the designer to make some sense of the game. In addition, the design of “Superstitious Appliances” supports continuous surprises and new insights over the course of encounters between different players through eccentric, unexpected, and confrontational experiences.

 

Procedural Rhetoric

In the first chapter of his book, Persuasive Games, Ian Bogost coins the term “procedural rhetoric.” First he goes on at length about procedure; then he goes on at length about rhetoric. Finally he pulls them together to make the argument that procedures themselves can make arguments. Since I’ll be discussing this in class tonight, I’m going to use this blog post to go over some of what won’t make it into my presentation.
I’m a technical writer, so I document procedures for a living. I also audit procedures, which means that I read written procedures and look for evidence that they are being followed. Both of these tasks are soul-suckers because life isn’t about procedures, it’s about goals. I want, for example, to feed my kids. There’s a procedure for that, and it goes a little something like this: 1) plan menu; 2) go to grocery store; 3) cook food. Yes, it’s a grand oversimplification, but even at that macro level, there are flaws. What if I didn’t plan the menu and go to the store, and it’s already dinner time? What if I did plan the menu and go to the store, but the lettuce is starting to turn brown, so I decide to throw it out? What if I planned the menu and went to the store, but the power went out, so I can’t cook? I can still feed my kids under any of these circumstances, but the procedural rhetoric tells me that I’m doing it wrong.
The procedure, which may be helpful in many circumstances, makes an argument for the right way to do something. So now if I order a pizza, I’m doing something wrong. If we eat cereal for dinner, I’m doing something wrong. If we go over to my mom’s house for dinner, I’ve failed. This is why people hate procedures. They don’t account for our human ability to reason and inject logic and creativity into our lives.
The computer can’t deviate from its established procedures. It is programmed to do one thing, and it can’t reason its way into a creative solution if it hasn’t been programmed to do so. That’s why procedural rhetoric is a topic worthy of exploration: it can weasel its way into your psyche with its intractability. It limits your creativity and your autonomy.
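The point about rigidity can be made concrete in code. Below is a minimal sketch (my own illustration, not anything from Bogost) of the dinner procedure as a program: every name and condition here is invented for the example. Where a human would order pizza or eat cereal, the program simply fails, because deviation was never written into it.

```python
# A toy "feed the kids" procedure: plan, shop, cook -- and nothing else.
# Any circumstance outside the scripted path is treated as an error.
def feed_the_kids(menu_planned, groceries_bought, power_on):
    """Follow the fixed procedure; no improvised fallback exists."""
    if not menu_planned:
        raise RuntimeError("Procedure violated: no menu planned")
    if not groceries_bought:
        raise RuntimeError("Procedure violated: no groceries bought")
    if not power_on:
        raise RuntimeError("Procedure violated: cannot cook without power")
    return "kids fed"

# The power is out. A person would adapt; the procedure just halts.
try:
    feed_the_kids(menu_planned=True, groceries_bought=True, power_on=False)
except RuntimeError as error:
    print(error)
```

The argument embedded in the procedure is visible in the structure itself: only one path returns success, so every other way of feeding the kids is, by construction, "doing it wrong."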

Meaning Maker

“The computer program has no real understanding of the user’s input” (Bogost 11).

Writing about the Rogerian psychotherapist program ELIZA, Ian Bogost makes the point that the computer itself cannot really understand the input of the user, but can only process the input and respond based on procedural rules that the programmer has set up in the system. Although it is clear that a computer program has limits to its functionality as a result of its design, this question of “meaning,” I believe, is critical to the discussion of the relative persuasiveness of computer-produced rhetoric.
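Bogost’s point is easy to see in miniature. The sketch below is a drastically simplified, ELIZA-style responder of my own construction (not Weizenbaum’s actual script): it matches surface patterns in the text and echoes fragments back, with no model of what any of it means.

```python
import re

# Toy ELIZA-style rules: (pattern, response template).
# The program "responds" by pattern-matching, never by understanding.
RULES = [
    (re.compile(r"\bI am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bI feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bmy (\w+)", re.I), "Tell me more about your {0}."),
]

def respond(user_input):
    """Return the first rule's templated reply, or a stock prompt."""
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            return template.format(*match.groups())
    return "Please go on."  # default when no surface pattern matches

print(respond("I am worried about my job"))
```

Here “worried about my job” is captured and reinserted purely as a string of characters; the program would handle nonsense input exactly the same way, which is Bogost’s point about processing without understanding.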

“Wherever there is meaning there is persuasion” (Bogost quoting Kenneth Burke, 21). If that is so, then for a piece of digital rhetoric to be meaningful, it must be persuasive. That persuasiveness is, according to Bogost, a consequence of the process written into the program and the effectiveness of its expression. I would extrapolate further and propose that “meaning” is also created in the interaction between the user and the program, and that the interaction follows the procedure of the designer and is constrained by the designer’s authorship of “potential events” (Bogost 64). In other words, meaning is not inherent to the actual physical technology, and the efficacy of the program can be limited by the foresight of the designer.

The design or process crafted by the programmer creates a space in which a discourse is possible and clearly intended. The interface of digital media allows for new ways to interact with information. In addition, the programs are able (by their processes) to articulate that information successfully to the user. Working as a medium for the purposes of the creator, computer-generated rhetoric is a powerful tool in producing avenues of meaning and understanding that are no longer chained to geographical locations or limited by physicality. Transcending the boundaries of all the forms it utilizes (i.e. text, image, film, sound, etc.), the computer program can be undeniably persuasive in its ability to communicate meaning.

Mario is an unreliable narrator

Television will rot your brain. Heavy metal makes you worship Satan. Dungeons & Dragons will make you a homicidal lunatic. These are all claims I have heard growing up, and the people who said them honestly believed them. Of course, television has also brought us “Sesame Street”, heavy metal has introduced millions to Samuel Taylor Coleridge, and Dungeons & Dragons has taught many lonely, lonely young men how to do math. (They’re still lonely, though.)

As Ian Bogost et al. note in their article “Documentary,” “[c]ontroversy is nothing new for video games – it is a medium that has been accused of inspiring prurience, brutality, and sloth for decades.” This puts video games in excellent company, particularly as they attempt to make the difficult transition from entertainment to documentary. However, despite all of the strengths that video games bring to the genre, there are certain inherent qualities of video games that have to be taken into consideration along the way, qualities that may be more easily masked than in other media.

As N. Katherine Hayles has suggested, “print is flat, code is deep”. This “flatness” applies equally well to film. Video games have an underlying structure that, while it may seem to offer more freedom of control in a given environment, still does not reflect the real world. The programmers make the rules, and those rules can shape the outcome either blatantly (such as with Kuma\War, wherein the player’s actions are tightly constrained to reflect John Kerry’s account of events) or subtly (such as with JFK Reloaded, wherein the game physics define the possibility of “making the shot”). By shaping the outcome in this way, the programmer has the option of shaping the narrative despite any protestations of objectivity (such as Kuma’s insistence that “the players can decide for themselves”).

Another way that games’ protean nature allows them to differ from more traditional media is in presenting procedural reality. Rather than creating a specific situation with established characters in a linear story, the game creates a situation with a rule set that players engage with, making their own decisions. This gives the illusion of freedom and a lack of bias, but invokes Noah Wardrip-Fruin’s argument that “we must read both process and data.” How the character interacts with the scenario is dictated by the rules as they are provided, and choices are constrained both by the data given and the processes set forth. As an example, the game Peacemaker involves several sets of variables that include potential points of bias. How each “leader” is programmed to respond to a situation, the resources given to work with, and even the intermediary goals can be opportunities to shape the overall narrative, as can the fact that the only available solution is a two-state solution.

While video games have great potential as documentary, they are no more or less objective in nature than their predecessors, and it is important to be aware of the ways that they can be manipulated by their creators, their supporters, and their detractors.

 

What’s the difference?

Sure, I play my fair share of shooters–Call of Duty, Halo, Medal of Honor–but when reading this article, I was taken aback by the notion of rewarding a player $100,000 for emulating the ballistics of JFK’s assassination in hopes of dispelling the many conspiracy theories that surround that tragic day. The question I asked myself while reading was: how is this different from Call of Duty, or from any game, for that matter, that involves the assassination of a figure of historical relevance?

Tracey Fullerton introduces the term documentary games, which serves as an “umbrella term for commercial war games that feature fictional recreations” (62). Now, this may apply to Call of Duty and Medal of Honor, but it is applied just the same to JFK Reloaded, which doesn’t make sense–the assassination wasn’t fictional; it really happened.

How is this different from killing Fidel Castro in Call of Duty: Black Ops, for example? In the game, players are given the objective to storm Fidel Castro’s stronghold and neutralize him. As you break through security, exchanging gunfire with hostiles, you reach Castro and put a bullet right between the eyes–awesome, right? Of course it is! You completed the objective. But would players feel awesome putting a bullet through JFK’s head, and if so, why? Is it because you accomplished the assassination, or is it because your ballistic marks match those of Lee Harvey Oswald?

This article was interesting because it questioned my morals. How is it that I can, straight-faced, run into an enemy stronghold, kill everyone, put a hole through the head of a communist revolutionary, continue through the rest of the campaign, and not feel disturbed? I can’t really see JFK Reloaded as a game–a documentary, sure, but not a game. In Call of Duty you can get a game over very easily and continue from a checkpoint. Conversely, JFK Reloaded only has one way to win–there are no checkpoints or game over screens.

The Assassination of Documentary Videogames

Ian Bogost writes in his piece “Procedural Rhetoric” that through the works of Aristotle, rhetoric has come “to refer to effective expression—writing, speech, or art that both accomplishes the goals of the author and absorbs the reader or viewer.” While he doesn’t use it in reference to modes of persuasion, this definition nonetheless applies to the documentary videogames Bogost, Simon Ferrari, and Bobby Schweizer detail in “Documentary” that were meant to convey a communal, emotional angst but received criticism for callous and offensive material.

In JFK Reloaded and Super Columbine Massacre RPG!, videogames that the above authors reference, the intention of the creators was to portray a series of actions that culminated in a single historical event: the assassination of President John F. Kennedy or the school shooting at Columbine High School. In the latter case, the creator of the game had a deep and personal connection with the events he depicted, signifying what I feel is the opposite of an attempt at mockery. In both games, I feel the creators used visual rhetoric not to influence the attitudes or opinions of their viewers, but to affect their awareness of the events. After all, as Bogost says, “images are more ‘vivid’ than text or speech, and therefore they are more easily manipulated toward visceral responses.” For an audience in which the majority of people did not have an eyewitness account of JFK’s assassination, these games have a more evocative power than any description could hope to possess.

Bogost has two similar ideas that conflict with the goals of human interest/documentary videogames. In “Documentary,” he says that “experience means something much more abstract: the emotional sensation of an event…if citizens were able to experience the sensations of an experience through simulation rather than by description, perhaps they would better connect world events with [emotions] in their own lives.” Similarly, Bogost says in “Procedural Rhetoric” that “the closer we get to real experience, the better…the best interactivity [comes] closest to real experience.” Through this appeal, those who play games like Darfur is Dying or September 12th should become more perceptive and aware of the facts: the choices a family in Darfur had to decide between to get water, the consequences of their actions, and why they were made to act so in the first place. Why, then, have these types of games received so much criticism? Is it investigative reporting devolving into “fear-mongering,” or is it that the general public just isn’t ready to have videogames rip open old wounds with evocative images? Considering that JFK was killed almost 50 years ago, I’m less likely to concur with the latter. My own conclusion is that people consider these historical events taboo (uncomfortable to talk about beyond a certain point) and would really prefer not to have related graphic images shoved in their faces, reminding them of what they shouldn’t, or won’t, talk about.

Will we ever be ready for documentary games?

According to Ian Bogost, Simon Ferrari, and Bobby Schweizer, games such as Walden and JFK Reloaded seek to record an event, its space, and its stakeholders for posterity. These goals categorize the games as documentary games. Now, can videogames represent actuality (the truth of an event, not just the way something looks) in the way that cinema, photography, and nonfiction writing have done? My immediate answer is yes. Videogames are so entwined with and dependent on cinema, photography, and nonfiction that of course they can. The authors agree, stating that videogames can engage actuality in three ways: explorable spatial reality, operational reality, and procedural reality.

These ways seem nearly foolproof. But in the end, does proof that videogames can represent actuality really matter if the public cannot accept them? One of the last points made in the article is the controversy over documentary games, and I can see why.

Newsgames – Journalism at Play points out that a procedural documentary does not weave a path through evidence, as film or articles do, to provide a backdrop to the historical situation. Instead, it models the behavior and dynamics of the situation. Characters, setting, and even events are just a side effect of the overall logic. It’s because of this that I feel documentary games appear distant and cruel. For the most part, the public seems used to games that put more emphasis on the self, if not on emotion. There’s an isolated and awful feel to playing events as they actually happened. This unfavorable reaction is only heightened by events with mass casualties, as in games such as Super Columbine Massacre RPG! and 9-11 Survivor.

It’s with games like these that documentary games are more likely to be labeled “survival horrors.” Tampte claims “a realistic portrayal of the battle must frighten the player, like a horror game might do,” and I have to agree. To portray a historical event truthfully, the maker should depict it in all of its frightening glory. But as far as the general public is concerned, I don’t think they will ever be completely ready or happy to play videogames that reenact charged events and memories. I can imagine it would be torture for some people to relive the past so accurately. This is where I think documentary videogames can take actuality a step further than the other mediums. But by no means should we expect the public to willingly go through the trauma that some of these documentary games could cause them.

The Blurring of Big Data

Manovich states that there is a specific definition for “big data” in the computer industry that differs from the designation he uses in his article “Trending: The Promises and Challenges of Big Social Data,” illustrating that separate disciplines perceive and use data in different ways. However, I would argue that the number of scholarly fields that rely on data for their research is actually increasing, so that eventually there may be crossover between these two definitions of “big data.” I deduce this from his focus on the latter definition, in which he discusses the fields in the humanities that use social data to study human behavior, such as sociology student Nathan Eagle, who collected data from 100 MIT students’ phones for nine months (4).

He excitedly mentions the new possibilities that “big data” brings; however, he also discusses its limitations, such as how much a “lay person” can analyze and manipulate data that is now readily available to the public (versus a person trained in technology, who can better manipulate it) (10-11). Thus, while access to big data is enabling the “non-technical” fields of study to utilize technological methods of interpretation (within limits), it is blurring the separation between science and the humanities.

Technology experts, economists, and Manovich agree that we are now part of the “industrial revolution of data” (qtd. 1). Therefore, the infiltration of “big data” into the lives of individuals and into industries (e.g. retail, travel, hotels, education, manufacturing) is bound to produce crossover. Technology itself enables more efficient communication and connection between people and groups; it follows that this melding of society would also penetrate distinct methodologies as big data becomes a “way of life” and grants us new ways of accomplishing things.

A Remedy for Stagnation?

As I read Lev Manovich’s “Trending: The Promises and Challenges of Big Social Data,” I kept wondering whether Manovich would address how big data could be made accessible to and usable by all researchers, not just people in special positions at particular companies and with special skills. I was rather disappointed when he ended his article with that question and no answer. This question of accessibility and usefulness for all people seems a rather important one. Manovich acknowledges that if big data research is limited to people in these specified areas of work and knowledge, it is almost going to waste, but I really do think the above question needs an answer.

Manovich’s only answer to this question concerns how the people themselves need to change in order for big data to be used. On page 14, Manovich states, “However, this requires a big change in how students, particularly in humanities, are being educated.” He is talking about how, in order to understand and manipulate the data, the user must know programming. They must, essentially, be computer scientists. Though I don’t disagree with this, I think most of the change needs to be in how accessible these pieces of data are to ordinary people. It appears that, according to Manovich, it is impossible for an ordinary person to get most of the information that is collected.
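To give a sense of scale for the skill Manovich is asking for: even the most basic analysis of social data requires writing code rather than just reading. The sketch below is my own invented example (the posts are made up), showing roughly the first kind of program a humanities student learning to work with data might write: counting word frequencies across a small collection of posts.

```python
from collections import Counter

# A made-up collection of social media posts, standing in for a dataset.
posts = [
    "loving the new semester",
    "the library is packed again",
    "new coffee place near the library",
]

# Split each post into words and count how often each word appears.
words = Counter(word for post in posts for word in post.split())

# The two most frequent words across the whole collection.
print(words.most_common(2))
```

Trivial as it is, this already assumes familiarity with loops, data structures, and a standard library, which is exactly the educational gap Manovich points to; but note that the harder gap he ends on, actually obtaining the data, is not one more programming can close.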

If this problem cannot be remedied, then it is pointless to educate students differently so that they can analyze and interpret the data. Is there truly a way for this big data to be available to more people? I completely agree with Manovich that this data is there and should be used for research. Currently, it seems to be going almost completely to waste. It is available to such a small network of people that, compared to how much information there is, it is effectively useless. So, is there a way to make this information available to researchers, including students? Is there any growth in the availability of this data? If not, perhaps this whole conversation is pointless. Perhaps we are stuck.

Critical Analysis

I thought Lev Manovich did a good job in his article of explaining why he thinks the emerging form of big databases could benefit not only the sciences but also the humanities. I absolutely agree that with the development of new technologies and databases, the capacity for scientific research and the study of humans will only improve. However, I also appreciated that Manovich stated ways in which changes in how people collect and manage data could affect research and people negatively. Manovich listed four arguments against his claim that big databases could benefit the sciences, with which I agree. I think the emerging forms of database are a good thing for research, but I do see how they could have questionable influences on the media.

The first argument concerns the amount of data people would have access to. With the amount of information that can now be stored and accessed by certain companies or individuals, I think it is a liability to have so much information accessible to one entity or person. For instance, consider the controversy over the newly developed program by Google, through which Google could access your account information and use it for advertising purposes. Large amounts of data need to be managed properly in order to prevent controversy.

The second argument was that not all the information collected online is accurate, which is absolutely true. On social networks, people post and comment according to what they want other people to think about them. People can filter what they share, or lie about what they share with others. Also, certain websites have different political views or outlooks on fact and can interpret information differently.

The third argument addressed how information can change through the proxy by which it is obtained. An anthropologist will gather different information through human interaction than through technology or computer data. The information being obtained is subjective, based on how it is collected.

The fourth argument is that in order to use the technology to its full advantage, one must be able to understand the computer language. People will not be able to benefit from the new form of data if they do not have the proper education.