Human Developers

When working in a silo, developers often begin to think of themselves as, and to act as if they were, non-human: foregoing sleep, food, and the niceties of human life (like rising from a chair…or blinking). The closer to the metal, the further from the flesh. Peer review by other developers helps those developers to see themselves as users—humans. Not just a computing brain with inputs and outputs and advanced algorithms and fingers for clicking, or a demiurge with all power and control, but a real live human.

It might take only a three-year-old to review someone’s application, because a three-year-old is human too.

Experiencing someone else’s application is a simple human affair, but requires openness to learn how another person thinks, sees and organizes the world. For a developer reviewing another’s work, you are no longer god (as you are when you are the coder, creating worlds with your words)—you are Adam or Eve, newly born in a green garden with opportunities all around. It is the Adams and Eves that we design our worlds for. Human developers, who know first-hand how to be human, make more human-like applications. Developers ought to practice being human every once in a while. To review and to be reviewed.

And so in training other Web developers, we may turn them into computers, or we may lead them to develop into humans. We cannot give what we do not have. We must be humans, we must develop humans, and we must develop worlds for humans. Open the door to your world and let some humans in.

Making Pi

Originally posted on Mizzou Server Hackerspace:

Photo from Wikimedia by cowjuice, http://commons.wikimedia.org/wiki/File:Raspberry_Pi_Photo.jpg?uselang=en

Raspberry Pi: inedible, but oh so sweet

These last two sessions (with a Spring Break in the middle), our group has looked at setting up a Raspberry Pi as a server and we are still working on connection issues with Cloud9 IDE.

For the Raspberry Pi server, we had a fun string of events. First, we had to find power for the thing (no power cord). We used the micro USB port onboard and connected it to a powered USB port on a computer. Problem solved. After connecting an Ethernet cable, a Mac mouse/keyboard combo, and an HDMI cable, we realized that no monitors or Macs in the area had HDMI input. What a crock! After scrounging around in the office for a usable monitor, and even trying an old Panasonic TV (the room looked like a Texas Instruments developer’s lab from the 80s before we were done), we finally found…

View original 150 more words

The Rise of the Machines

rwadholm:

We have started a Mizzou Server Hackerspace at the Digital Media Zone that I manage, with a blog to match.

Originally posted on Mizzou Server Hackerspace:

Old-timey machine shop from http://commons.wikimedia.org/wiki/File:Machine-shop-r.jpg

The age of the machines has come, and we as mere humans want to understand our overlords better. This week, in celebration of the World Wide Web’s 25th anniversary, we have inaugurated an informal weekly get-together at the University of Missouri–Columbia Digital Media Zone around the topic of the machines that make the Web work: servers.

This blog (and possibly others) will serve as the log of our adventures into all things server-related. Our weekly meetings, Thursdays 6pm-7:30pm in the Digital Media Zone in Townsend Hall, are going to be short hackathons on the following topics:

  • Making our own servers (with Raspberry Pi, Android, Intel Galileo and such)
  • Working with our own Apache server space (free unlimited space if you join the club, which is also free)
  • Using JS server-side (Node.js)
  • MySQL, PHPMyAdmin, MongoDB, CouchDB, PouchDB, LocalStorage syncing with a RESTful API
  • PHP
  • What makes blogs work on the…

View original 269 more words

The Puzzle of Transparency

In Experience and Experiment in Art, Alva Noë (2000) argues that experience is more complex than it seems. Reflection on art can aid in our understanding of perceptual consciousness, and be a “tool of phenomenological investigation” (p. 123). Reflection on specific kinds of art may help us solve the puzzle of the transparency of perception. The puzzle of transparency is this: in art (and experience in general) we have a tendency to see through our own perceptions (to not reflect on our perceptions themselves) to the objects of our experience in the outside world. When we attempt to reflect on our window to the world (perceptions), we look through it and end up reflecting on the world itself. When we try to reflect on our seeing, we end up describing what we are seeing, rather than the experience of seeing. Experience is transparent to our descriptions and reflections (we instead reflect on and describe the experienced).

So, to solve the problem, we need to think of perceptual experience as a temporally extended process, and we need to look at the activities of this process (what we do as we experience our environment). Instead of describing the window, we describe the window’s actions. Noë sees the sculpture of Richard Serra (particularly his Running Arcs) as exemplifying this idea: the sculpture allows for active meta-perception (perception of the act of perceiving). The work causes us to reflect on our experience with it and on our own perceptions by making us feel off-balance, intimidated, as if the piece were a whole with its environment–things that highlight the nature of our perceptions rather than just the objects themselves. It allows us to perceive our situated activity of perceiving, the act of exploring our world. Serra’s work, as what Noë terms “experiential art”, brings us into contact with our own act of perceiving: the window’s transparency and the act of mediating that which lies beyond.

Noë, Alva. (2000). Experience and experiment in art. Journal of Consciousness Studies, 7(8–9), 123–135.

Kant’s Subjective Universal Validity

In Analytic of the Beautiful (Book 1 in Critique of Judgment), Kant discusses the judgment of taste, and presents arguments concerning the nature of beauty in comparison with that which is pleasant or good. Kant argues that judgments of taste (specifically of beauty) have subjective universal validity.

Judgments of taste are not logical or evaluations of reason. Instead, their determining ground is subjective. Judgments of taste are tied to internal feelings of satisfaction and pleasure, and so are aesthetical and subjective. Satisfaction in judgments of taste is disinterested and indifferent to the existence of objects. In contrast, both satisfaction with the pleasant and satisfaction with the good are interested in the state of the existence of objects. In pleasure we seek gratification, while in the good we desire either utility (the mediate good—good for something) or good in itself (the immediate and absolute good). What gratifies is pleasant, what is esteemed is good, but what merely pleases is said to be beautiful (and note that what pleases is subjective).

The universality of the judgment of taste is related to its subjectivity: because it is both subjective and disinterested, this feeling of pleasure is valid for all humans (i.e. beauty is imputed to all as universally satisfying). Each individual feels pleasure in the beautiful, without reference to themselves and without interest in the object, and infers that this same satisfaction is universal (it is not bound to the subject’s interests or to the object’s existence). While the pleasant is individually satisfying, and the good is conceptual in nature, the beautiful is universally satisfying and non-conceptual. Judgments of beauty are not postulated as universal: all who make a claim of “beauty” impute universal agreement.

Kant may be seen as conflating satisfaction with the beautiful with taste: for Kant, each person cannot possibly have their own taste, because tastes have subjective universal validity. However, his arguments seem only to suggest that satisfaction with the beautiful has subjective universal validity, not that all judgments of taste do. If satisfaction with beauty is a subset of tastes, as some understand Kant to be saying, then there are other members of the subset that may or may not share all of the same characteristics as beauty, and thus what is always true of beauty is not always true of all tastes (and vice versa). On the other hand, if satisfaction with beauty is synonymous with taste in Kant’s writings (and not a mere subset), we are left with the problem of why tastes differ, yet satisfaction with beauty does not. All humans find satisfaction with beauty (they impute an absolutely universal experience of beauty in beautiful objects), though they find beauty in different objects and differ among themselves about which objects are beautiful. In contrast, all humans find satisfaction with their own judgments of taste, but they do not impute an absolutely universal experience of taste with pleasant objects. In short, there seems to be an imputation of general objectivity (or at least intersubjectivity) in judgments of the beautiful, even though the experiences are subjective, while there seems to be no such imputation in judgments of taste in general: judgments of taste have subjective and indexical validity, but not always imputed general universality.

New Media Literacy & Video Games

This past week I have been studying up on new media, media literacy, and new media literacy. It is interesting to me how much of this literature surrounds the use of video games in education. New media is defined in different ways by researchers in different disciplines, but my gloss definition is something like “digital interactive communication formats”. Why is this kind of literacy important? Why should people be able to intelligently and critically consume, analyze and create digital interactive media? Education is not only about concepts: it is also about artifacts. As the artifacts (the tools and things we humans create and use) in our world change, so must our education. What it means to live in a high-functioning way in our world changes as the artifacts change (and because artifacts are a part of culture, as the culture changes). Similarly, as the formats and functions of the artifacts change (for instance, from paper to digital, from read-only to read-write), so must our education change. If K-12 education is to bring our children up to speed with the world they inhabit, and empower them to become meaning-making citizens, we must attend to the enculturation process and the products of our culture (and reflect on those processes and products).

Where video games fit into this artifactual nature of our current digital world is an interesting question (interesting at least to me). How can we operationalize new media literacy education, what are the ultimate goals of education in general (and do these also change with cultural/artifactual changes), and how can video games help us (or hinder us) in achieving those goals? Does new media even represent a change, or is it just hype? To be sure, new media literacy education is merely new media education if we fail to focus on reflection, critique and synthesis. Video games are fun and learning experiences themselves, but can they be also used as contexts for reflection, tools for creation, and worlds of transferable learning scenarios and skills? If so, what affordances are necessary for this to occur?

Violent Video Games Study Design

Thinking about the several research studies that have been done in the last decade on the impact of violent video games on levels of aggression and violence in players, I think a major stumbling block has been in the area of research design–the validity of studies has been questioned. If we really want to scientifically evaluate the hypothesis that violent video game exposure is correlated with aggression/violence, it seems to me that we need a more rigorous research design. I do not think that violent video games in general cause or are correlated with violence/aggression, but it would be interesting to look at this from a research perspective and test these ideas quantitatively. Note that I am not a quantitative researcher by background, and am not fully convinced of the total efficacy of “scientific research” of whatever variety. I am convinced that it is a good idea to look into our world, and to develop hypotheses and theories that we can test; I am just not convinced that the results of those analyses are dependable as truth/facts (I don’t go for “it turns out that” research results–I do go for “it seems to us that”), though I do believe we should act on what we believe based on justifications.

That said, I think it would be good to have groups of around 100 children of about the same age/demographic background who are given a “pre-test” on current aggression/violence measures (and have several of these groups). The first group plays no video games for one month, and is given a post-test, then plays violent video games for two months and is given a post-test again. The second plays no video games for two months, and is given a post-test, then is given several specific violent video games to play regularly for one month and is given another post-test. The third plays no video games for three months and is given a post-test. The fourth plays several specific violent video games regularly for three months and is given a post-test. The same thing is done with non-violent games. The pre- and post-tests would ask parents about the violence/aggression of the child, the same of peers, and the same of the children themselves. The pre-test would also gather data about previous violent video game exposure (and try to place children in groups evenly distributed based on this). The children would also be given pre- and post-test physical skills tests related to violent video games (skill level in violent endeavors like wrestling, paintball gun competition, shooting, etc.). Then we could better see whether children actually learn skills in violent video games, and whether aggression levels and violence are significantly different between groups, between the start of the study and the end, and between no video games, non-violent video games, and violent video games.
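The group schedules described above can be laid out in code. Here is a minimal, hypothetical Python sketch of the design; the group labels, the month counts, and the change measure are my own illustrative assumptions, not an actual research instrument:

```python
# Hypothetical sketch of the four-group pre-/post-test schedule.
# Every group spans three months total; "tests" lists when each
# aggression/violence measure would be administered.

GROUPS = {
    "group_1": {"no_games_months": 1, "violent_games_months": 2,
                "tests": ["pre", "post_month_1", "post_month_3"]},
    "group_2": {"no_games_months": 2, "violent_games_months": 1,
                "tests": ["pre", "post_month_2", "post_month_3"]},
    "group_3": {"no_games_months": 3, "violent_games_months": 0,
                "tests": ["pre", "post_month_3"]},  # control group
    "group_4": {"no_games_months": 0, "violent_games_months": 3,
                "tests": ["pre", "post_month_3"]},
}

def mean_change(pre_scores, post_scores):
    """Mean pre-to-post change in an aggression measure for one group."""
    deltas = [post - pre for pre, post in zip(pre_scores, post_scores)]
    return sum(deltas) / len(deltas)
```

Comparing `mean_change` across groups (e.g., group 4 versus the control group 3) would be a first rough look at whether exposure condition tracks with a change in measured aggression, before any proper statistical testing.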

Depiction & Reflection

Are reflections depictive? Does my reflection in the mirror depict me? At first blush I would say no, reflections do not depict. The mirror reflection of Bob Wadholm is not a depiction of Bob Wadholm. However, if this is true, it seems that all photography is likewise not depictive, because photography collects the light, and then using mediating technologies allows new light to display a reflected image of a sort back to the perceiver. But I refuse to believe that the photograph in my wallet is not a picture, so something must be wrong with this assessment. Does a depiction require a capture of that light for storage and later display: and is this why I feel that the mirror does not depict me, but that photographs do? But I may set up a web cam to record my face and display it back to me in real-time, functioning as a mirror. And if I do not record this video, is it depictive (i.e., does a video reflection of me depict me)?

Accessibility of Video Game Research

So sorry to any regular readers of my posts on video games: I wrote a post this past week, along with a podcast, about violent video games that I could not share with a wider audience. Why? Because the article I was discussing is not open access. What does that mean? It means, in this case, that if you don’t own the rights to read the article (say, by paying an exorbitant price for a subscription to a particular research journal), you can’t access my thorough discussion of the study and its findings. This is knowledge you can’t get (maybe you are not sad about this, but I am). And you wouldn’t find the article unless I told you first what it was. I could tell you the name of the article, but I don’t want to call out this particular journal or these authors (I feel it is a systemic problem, not a particular problem with a specific journal or group of authors).

What I do want to do is this: provide you, the general reader, with articles that I find useful and interesting. I want you to have the knowledge I have access to. I think it is wrong for me to get to read these articles and benefit from them, and for you to not be able to even really know what they are saying. Knowledge is being locked up behind a paywall. And I think that is not good–especially because I think it is important knowledge. So if you are writing or thinking about video game use in education, please continue to do so. But please make your work available to the masses who actually play video games, and not just to the very few of us who happen to work at a huge university that can afford a journal subscription to read your article. Release your work under a Creative Commons license of some sort. I don’t think you want to keep your knowledge bound up where no one can read it. And we want to read it. So make it accessible, make it readable by everyone, make it available for us to use and talk about freely and openly. The world is only made worse when useful video game research (along with much other useful research) is not available to be read legally by those whom it would benefit the most. End of diatribe.

Learning Analytics & Serious Games

Do learning analytics have a place in serious games? Learning analytics is “the measurement, collection, analysis and reporting of data about learners and their contexts, for purposes of understanding and optimising learning and the environments in which it occurs” (https://tekri.athabascau.ca/analytics/). Don’t games already have built-in game mechanics that do this? Yes and no. Performance is tested in-game. Game designers think through how a task will be learned and accomplished, and try to optimize this through testing. So if a learner in a serious game gets through the game, they should have learned what it takes to get through the game, meaning that they learned what the game intended them to learn (as long as the game designers were teaching and testing what they intended to teach and test).

However, this data (about learners) is not always measured. What about a task that most learners have trouble with? Game designers (counterintuitively) do not always know the most optimal way to get through their own games (or to learn their own games), even after sitting and observing representative testers. Expert players may, in the end, come up with more optimal solutions than what the original designers thought of in the first place. Without measuring this, collecting data about it, analyzing, and reporting this data about learners and contexts, it is difficult to know or share knowledge about such learner optimizations (or oppositely, sub-optimizations). This does not just happen in-game. Designers can be assisted in creating better games if they are better informed about optimal and sub-optimal performances in games. And game data can help bring this knowledge to designers. Learning analytics seem to have the potential to play an important role in serious game development, getting the designer to the understandings they need to frame the world that the learner needs.
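The measure, collect, analyze, and report loop described above can be sketched minimally. A hypothetical Python sketch follows; the `TaskLog` class, the event names, and the "struggle" threshold are illustrative assumptions of mine, not part of any real analytics toolkit:

```python
from collections import defaultdict

class TaskLog:
    """Toy in-game learning analytics: log each attempt at a task,
    then report tasks where learners need unusually many attempts."""

    def __init__(self):
        self.attempts = defaultdict(int)     # task -> total attempts
        self.completions = defaultdict(int)  # task -> total completions

    def record_attempt(self, task):
        self.attempts[task] += 1

    def record_completion(self, task):
        self.completions[task] += 1

    def struggle_report(self, threshold=3.0):
        """Tasks whose average attempts-per-completion exceed the threshold."""
        report = {}
        for task, n in self.attempts.items():
            done = self.completions[task]
            if done and n / done > threshold:
                report[task] = n / done
        return report
```

A report like this is exactly the kind of data that could tell a designer which tasks players are optimizing in unexpected ways, or where the game is teaching something other than what was intended.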

Hume’s “Of the Standard of Taste”

In the essay “Of the Standard of Taste” (http://www.csulb.edu/~jvancamp/361r15.html), David Hume (1711–1776) briefly argues that although tastes, like opinions, differ, tastes are not right or wrong (as opinions might be), though sentiment does admit of varieties of delicacy that may be used to establish a kind of standard of taste. People speak of beauty with language that admits of absolutes. In moral matters, use of language like “vice” entails blame, while use of language like “virtue” entails praise—all people universally agree that vices are reprehensible and virtues are praiseworthy; the question is “what are the virtues and what are the vices?” So too in art appreciation, aspects of beauty are universally acknowledged, while individual tastes seem to differ with sentiment rather than with reason. There are many opinions, and there are many sentiments. However, while one opinion among many may be true, sentiment is always true and right: “each mind perceives a different beauty.” Seeking true taste is like seeking true sweet or true bitter. Individual dispositions of biological faculties are at play, seeming to make beauty in the eye of the beholder.

While this has passed as common truth, it is also acknowledged that we do seem to consider it absurd when tastes are widely different from the norm, and do not fit with ideals of beauty. Humans do seem to generally share (please forgive the split infinitive) a common sentiment (albeit in extreme example cases). There are, Hume argues, in the midst of the myriad differences of taste, real general principles for praise or blame. People sometimes mis-see beauty because of personal defect. Objects have certain qualities that produce particular feelings, but small differences (subtle admixtures of beauty and deformity) are more difficult to distinguish. Resemblance to the familiar brings a percipient greater pleasure, preference, and prejudice in the appreciation of a work of art, and may muddy the water. However, you can prove one critic’s taste is better than another’s, and the deficient critic will acknowledge their taste indelicacy when presented with evidence derived from general principles about beauty that are universally acknowledged. Delicacy of taste in beauty requires practice, comparison, perspective, and absence of prejudice. Reason is required to check prejudice, and keep it from influencing one’s sentiment of beauty. So, although principles of taste are universal, still few people have come to the heights of delicacy needed to “establish their own sentiment as the standard of beauty.” Nevertheless, when these elite few issue a joint verdict, we have on our hands the true standard of beauty and taste.

So for Hume, while judgments of taste may be based on sentiment (and sentiment may vary wildly), aesthetic relativism is not the answer. There are absolutes, and these are to be found in persons of delicate and practiced (though unprejudiced) taste. This seems to be an argument for taste elitism (a kind of tastocracy), which makes me wonder whether Hume sees himself in the ranks of this group of arbiters of true beauty. Beyond this criticism, there is potentially a more fundamental weakness with this strategy: admitting that tastes are always right, but that some tastes are better (more delicate) than others, providing a standard by which to judge tastes as good or bad, seems to say that while a particular taste may be right, it may not fit the standard (and thus be deformed, deficient and/or wrong in its verdict). For instance, in contradistinction to Hume, I find John Bunyan’s work of greater genius and elegance than Joseph Addison’s, based on sentiment. Another critic will likely disagree with me, and find the opposite (as does Hume, who characterizes detractors as “absurd” and “ridiculous”). Can we prove one taste right, and another wrong? Is my sentiment wrong or deficient? If you “prove” that, you provide reasons, which is not what taste is based on for Hume (it is based on sentiment). If sentiment is the basis of judgments of taste, no reasons may overrule my verdict. So, perhaps, as Hume seems to suggest, a better, more informed sentiment may be the basis of judgment of my taste. But what makes it better or more informed or more delicate if it is in the end still mere sentiment (is it closer to an absolute truth)? And does the verdict of the elite group turn my “true” verdict based on sentiment into a “false” verdict based on sentiment?
Further, if Hume’s argument is successful (and the elite group’s sentiments are better or more delicate than my own in regard to esteeming Bunyan’s work), this does not mean they are a standard, but that their sentiment is more well-informed (or better in some other way). Even if that is the case, it may also be that a group of even more well-informed critics later looks down its nose at these elitists’ sentiments, is even more well-informed in its valuations, and pronounces my earlier valuation more delicate than my current detractors’. In this case, is the standard of taste getting more standard? Does an absolute standard admit of degrees? It does not seem logical that it could.

Experienced Resemblance of Outline Shape: Pictures & Depiction

In “The Domain of Depiction,” Dominic Lopes (2005) argues against the EROS (experienced resemblance of outline shape) theory of depiction and proposes and defends a recognition theory of depiction. I would here like to focus on what I think is Lopes’ strongest objection against EROS, and to show how a defender of EROS theory might reply.

There are six cardinal truths about depiction that EROS theory attempts to explain: 1- depiction is not property-less: to depict something, you must depict it as having some property; 2- depiction is always from a point of view; 3- only visible objects can be depicted; 4- objects can only be partially misrepresented in pictures (total misrepresentation is merely non-representation); 5- understanding a picture entails knowing what the depicted object looks like; and 6- knowing how the depicted object looks is necessary and sufficient for understanding a depiction of the object (2005, p. 162). In EROS theory, depiction is explained (though not defined) as an experience of resemblance of an object in outline shape. This experience is characterized as “seeing-in”, but depiction requires more than just seeing-in: it also requires the right kind of causal relation between the picture and the object, or requires that the picture is made with the intent for the object to be seen in the picture (p. 163).

Lopes offers several objections to EROS theory, and perhaps his strongest objection is his analysis of the limits of EROS. This objection takes the form of four illustrations of problematic depictions:

  1. A cube shown in parallel oblique perspective (EROS either says that in many cases our intuition about what is depicted is incorrect, or that EROS is so flexible that it does not have to track with objective resemblance in outline shape—either way, EROS is not necessary for seeing-in),
  2. Three outline shapes of parts of a cube (recognizable outline shapes depend on the presence of significant contours or boundaries in the depiction, so EROS is not sufficient for seeing-in),
  3. The outline of shading on a face, shown in positive, negative and outline (outline shape is only recognizable with the presence of an illumination boundary, so again EROS is not sufficient for seeing-in), and
  4. R.C. James’s “Photograph of a Dalmatian” (the outline shape of the dog is not seen until the percipient sees the dog in the picture, so EROS depends upon seeing-in).

So EROS is not sufficient or necessary for seeing-in, and seems to imply circularity.

How might an EROS theorist respond? I think one avenue might be to look back at what EROS theory claims: that depiction is seeing-in plus intention/causality. The cube depicts an object as cubical, we see a cube in the picture, and we experience the outline shape in the picture as matching a cube in outline shape. How? EROS allows for multiple points of view of an object in a depiction, and this can be taken to be one such case of multiple perspectives (and EROS is still necessary for seeing-in). In the illustration of three outline shapes of parts of a cube, only the first is depicting a cube: there is no indication that the other two (the irregularly shaped outlines) are intended to depict a cube. In fact, as the outlines are presented here, they are intended to not depict a cube (or they would be poor examples for Lopes’ argument). Similarly, for the outline shading on a face, the last picture is not intended to depict a face (and it does not). The “Photograph of a Dalmatian” case is a bit different. The picture intentionally leaves out significant portions of the outline shape, which are provided by the percipient (via Gestalt). The viewer never sees the outline shape of a dog, but rather creates it in their mind (infers the presence of such a shape given the parts of the picture’s surface, and the percipient’s previous experiences of outline shapes of dogs). This picture is not made with the intention of the dog being seen: the point is to not see the dog in the picture, but to infer the resemblance to the outline shape of the dog in the picture (and from there to experience the resemblance to the outline shape of the dog in the picture). So the conditions for depiction are not met (we are not intended to see the dog). In sum, these cases can be argued to be by and large not depiction (due to lack of intent) or to be cases of multiple perspectives.
No circularity seems to be involved, and it seems that EROS can still be said to be sufficient and necessary for seeing-in.

Lopes, D. M. (2005). The domain of depiction. In Matthew Kieran (Ed.), Contemporary Debates in Aesthetics and Philosophy of Art, pp. 160-175. Oxford: Blackwell.

Mod Me Baby, One More Time

One of the great things about many popular video games (especially games in a series) is that modification of the game itself is possible, either through cracking or through in-game allowances for modification. For instance, the violent video game GTA IV has a modding community that has grown up around it, and mods include playing as SpongeBob or R2D2 and driving a birthday cake (http://www.youtube.com/watch?v=WoBH_HgHsKg). These games have their own worlds, their own characters, and their own plots, but players have re-envisioned the world, characters and plots to fit their pleasure. If games are learning environments (which I think they are), one of the things players are learning is how to change their learning environments. Here we see new media skills in action. Hacking and cracking these games requires skills and know-how, and often results in communities of practice (not just around the game itself, but in modification of the game). New vocabularies spring up, new popular twists arise, and users become creators. Now as game makers, players are embodying their skills and knowledge in their modifications. Playing through a game often leaves very few digital artifacts. Creating a game is all about the artifacts. Mastery is visible. Game modding often gets a person to the highest level of thinking in Bloom’s taxonomy. Pulling off a good mod is personally satisfying and publicly rewarding.

Distilled from games, what can we say about modding? Successful modification of our environment, identity, and motivations/directions is a part of growing up. What affordances are there for modification in learning? When we are able to have control over where we learn and in what manner, when we become the teacher or identify with what is being taught, and when we change our mind or surge toward something from inner compulsion, that is a successful learning mod. But learning modification requires free will. And not all who are allowed to mod end up modding. A person must choose to be a modifier. They must step into modification. They must choose to move from situated learning (as in video games) to resituated learning (as in video game mods). As the learner resituates the learning, they may fail miserably at the tasks before them. But given support, encouragement, and teaching regarding how to succeed (just in time and as needed), they will mod their learning, and hopefully mod their lives. Beyond gamification of learning we might think about focusing on the modification of learning. And modification of modification of learning…

Genuine Rational Fictional Emotions

In “Genuine Rational Fictional Emotions,” Gendler and Kovakovich (2006) seek to resolve the “paradox of fictional emotions.” The paradox goes like this: 1- we have genuine rational emotional responses to specific fictional characters and situations (the response condition); 2- we believe that those characters/situations are fictional (the belief condition); but 3- we can only have genuine rational emotional responses toward characters/situations if we believe they are not fictional (the coordination condition). Each of these claims is plausible at first blush. For instance, I can respond to Frodo’s plight in Mordor with genuine rational sadness, I can believe that Frodo is merely fictional, and I am not feeling genuine rational emotions for a three-foot-high, furry-footed, curly-haired person (because I believe Frodo is fictional, and so I cannot feel genuine rational emotion for the poor guy). But these three conditions cannot all be true simultaneously.

The solution hinges on the question of whether emotions about actual characters/situations are similar enough to emotions about fictional characters/situations to warrant considering them “two species of the same genus” (p. 243). The differences between these two kinds of emotions (dubbed here “fictional emotions” and “actual emotions”) are related to subject matter (real or fictional) and to motivation (I do not respond to Frodo in the same way as I would if I actually met a real hobbit in such tormented circumstances). Earlier resolutions of the paradox posit that fictional emotions are not genuine, or are not rational (addressing the response condition), or that we lose track of our own belief (addressing the belief condition).

Gendler and Kovakovich deny the coordination condition, and do so partially on the basis of recent empirical research which suggests that autonomic responses (response behaviors linked to the involuntary nervous system) help people in practical reasoning by the following process: we imagine consequences of our actions, which activates emotional responses, and these become reinforced to the point of automatic responses which help us behave rationally (based in part on these automatized responses). So autonomic emotional responses tied to future circumstances (“what if” scenarios) may help us behave rationally. Automatic emotional responses to imaginary events are a part of rationality. Further, when we fear actual future events, these emotions are genuine, even though the events have not happened (and may not happen). So we can have genuine rational emotions about things that do not exist. Concerning the belief condition, with optical illusions we may perceive and respond to things we do not believe. This is because we have automatized our responses to stimuli in such a way as to act subdoxastically (without requiring belief). If this is true, we may have genuine rational emotions toward fictional characters/situations without needing to believe those characters/situations are real (the emotions occur subdoxastically). Without such emotional engagement in fiction, we would be limited in our capacity to behave rationally (we would be limited to our own narrow real circumstances for building up autonomic responses).

This proposed solution is both elegant and convincing, though there are several potential weaknesses. First, the cases given for subdoxastic responses may turn out to be doxastic (but be false beliefs that are overturned by further evidence). When I get near a window in a high-rise apartment, I believe I will fall to my death (I am afraid of heights, and of falling to my death). I also have other beliefs that outweigh this false belief, and which sometimes allow me to stand near the window and enjoy the view without a response of fear. It is not irrational to hold two beliefs at the same time that are contradictory: it is irrational to still hold both after evaluating the merits of each (and recognizing they are incompatible). Second, the force of the argument depends upon our acceptance of the idea that the similarities between actual and fictional emotions are more striking than the differences, but what if we were able to show there are additional differences between the two? For instance, fictional emotion is a source of pleasure (even when the emotion is fear or sadness), which we derive in part from knowing that the fiction is not real. If actual and fictional emotions are indeed different, and if our emotions are not subdoxastic in the case of responses to fiction, then the argument presented here may fail to convince.


Gendler, Tamar Szabo, & Kovakovich, Karson. (2006). Genuine rational fictional emotions. In Matthew Kieran (Ed.), Contemporary Debates in Aesthetics and the Philosophy of Art (pp. 241-253). Malden, MA: Blackwell.

Using a TV as a Hammer: From Entertainment to Usability

I am pursuing research in the direction of testing the pedagogical usability of popular off-the-shelf video games (that are not intended to be educational). Pedagogical usability is basically the affordances of a tool for use in learning (or as a learning environment). One question arises from this research, and I have not fully wrapped my mind around it: what business do we have assessing the use of entertainment-centered video games for their educational utility? Is this like measuring the aesthetic features of PhD candidates?

And when we use tools like blogs (intended for journaling self-reflective thoughts in a public fashion online) for educational purposes, are we misusing the tool? When we use a tool like a television to hammer in a nail, are we abusing the tool? Similarly, in my research I will need to address this important problem. Am I suggesting through my research that we should take every fun and interesting thing in the world and use it for our own purposes, against the intents of the creators of those tools? Ought we to misuse and/or purposefully thwart the true intent of a tool if we perceive our own purposes for the tool as more important to society?

Popular video games are generally not created to be educational, or to be used in education. We may learn new media skills using them, but that is typically not their purpose. What right do we have to judge them for something they don’t even intend? Or to use them against their intended purposes?

Here might be one way out of this dilemma: if we view this problem from a holistic standpoint, we might be able to understand the several major elements at play in this problem, and to create a model for synthetic artifactuality to apply to these kinds of situations. First, we have creators/designers of tools, users of tools, and the tools themselves. Second, we have interactions: users interact with tools in ways that the original designer intended. But they also use them in new and unique ways not intended, sometimes with better results than what was intended with the original design. When this beneficial use occurs, we may see this merely as chance, or as a second designer in action: the user. The user comes to the tool and the way in which it is designed, sees its intended purposes, understands its affordances, and synthesizes his or her own thoughts about possible affordances of the tool with those of the original designer. This new design is then tested as the tool is used for the new purpose. We see old tools put to new uses all of the time, so this is not a new idea. The question is the beneficiality of the new use over the old use, and perhaps also the possibility of negative effects on the tool itself or its ability to fulfill its original purposes.

Are video games negatively affected by their use in educational settings? Maybe. Maybe they don’t look as cool anymore, or as enjoyable. After all, if motivation is supposed to rub off of video games onto educational content when used in an educational setting, could not negative connotations of formal schooling rub off on games as well? Is it a possibility that video games might be less entertaining if we are worrying about how they are educating? It seems like a live possibility. Might the act of assessing them against their designers’ intentions cause them to become less valued in their original state (or for us to undervalue their original intentions)? I sure hope not. Let’s hope that we don’t ruin these tools. More important still, let’s hope that we don’t ruin the learners.

Where is all of this blood coming from?

The causal link between aggression and video game violence is an interesting topic. Craig Anderson (along with Karen Dill) has done a lot of empirical research in this regard, and suggests that exposure to violence in video games increases aggressive thoughts and behavior in both the short and long term (2000). The explanation for this is that continually playing violent video games actually teaches a person something: the gamer is learning, rehearsing, and reinforcing aggression-related knowledge structures (p. 775). They are learning a way to think and behave in the world. This is because video games are successful learning environments and exemplary teachers, a line of thought taken up recently by Douglas Gentile and Ronald Gentile (2008). Video games (even the violent entertainment-centered ones) follow many of the best practices of learning and instruction. Further, people who play a more varied assortment of violent video games are more likely to learn to be aggressive and to have aggressive thoughts, and players who play for the same amount of time but more frequently are also more likely to learn these thoughts and behaviors and to retain these long term (Gentile & Gentile, 2008). One question remains: where did all of the violence in video games come from?

Many violent video games are gory. We see blood, guts, brains, and body parts being splattered, spilled, exploded, ripped, sprayed on objects…the gore can even be seen dripping from the player’s avatar at times (like when a zombie takes a bite out of your head). Where is this gore coming from? Do 3D computer objects have blood, hearts, and brains? Do we need the gore for fun or entertainment? Further, do we need the violence? And where does this violence arise? Are video game designers an extremely violent or aggressive lot by nature? Are they nefariously trying to get all of the rest of us mere humans to kill each other so that they can take over the world? They are teaching us to be more aggressive: is this intentional? If not, does this unintentional (but highly successful) learning occur in a vacuum: where does the violence originate?

It would be interesting to study whether designers of violent video games are more aggressive individuals than designers of non-violent video games. I would venture a guess that they are not. I would further surmise that the violence arises from the genre, and the genre exists because people buy those games, and people buy those games (in part) because they are cognitively aggressive and enjoy pretending to perform violent actions. I know that is why I like some violent video games. Violent video games don’t necessarily teach us violence, but rather aggression. But are we learning aggression from video game designers, or are they learning it from us (and mirroring back to us what we want to see)?

Another related question: are players of violent video games more likely to actually cause blood to flow than players of other kinds of video games? What about other activities that we think of in a positive light, like sports? Are football, basketball, and soccer players more likely to be aggressive than people not frequently engaging in these activities? Are athletes by and large taught to be more aggressive by engaging in aggressive thoughts and behaviors in their sport? Yes (Oproiu, 2013). Taking this further, are athletes more likely to cause physical violence to other people than non-athletes? And are they more likely than violent video game players to draw blood–to commit violent actions? Well, yes. This happens on the field quite often. Why are we engaged in these activities (sports and violent video games) that we know affect our levels of aggression?

I think because they are fun, and because we think that aggression is not necessarily negative. Violence in both sports and video games transpires in a ruled environment: there are rules governing the use of violence, who can commit violent actions, and what (and who) can properly have violent actions perpetrated against them. There is unlawful violence and negative aggression in both sports and video games, allowing for proper and improper (or disallowed) violent behaviors. The question is “How permissive are the rules, and do the rules map well with real life?” In other words, is this an acceptable thought or action in the real world? If I learn well from what I am doing (in sports, video games, and elsewhere), and I transfer that knowledge and those skills, will I (and the world around me) be better? How can I reflect best on my practices (even my virtual practices) and critique my thoughts so that I transfer the good and leave the ill-mapping thoughts and behaviors where they belong (in the game)?


Anderson, Craig A., & Dill, Karen E. (2000). Video games and aggressive thoughts, feelings, and behavior in the laboratory and in life. Journal of Personality and Social Psychology, 78:4, pp. 772-790.

Gentile, Douglas A., & Gentile, J. Ronald. (2008). Violent video games as exemplary teachers: A conceptual analysis. Journal of Youth and Adolescence, 37, pp. 127-141.

Oproiu, Ioana. (2013). A study on the relationship between sports and aggression. Sport Science Review, 22:1-2, pp. 33-48.

Pedagogical Usability & Video Games: Turning Fun into Learning, While Learning if Fun is Actually Good for Learning

I have been thinking a bit about pedagogical usability as of late, as I am prepping for presenting a paper on pedagogical usability at the 2013 E-Learn conference in Las Vegas. Petri Nokelainen and others have come up with evaluations of pedagogical usability for online courses, learning management systems, knowledge management systems, wikis, and websites, and my presentation (with two friends of mine from the University of Missouri) is a pedagogical usability evaluation of a digital library system, a unique application for this kind of evaluation. Pedagogical usability is the affordance in a system for (or presence of): learner control, learner activity, cooperative/collaborative learning, goal orientation, applicability, added value, motivation, valuation of previous knowledge, flexibility, and feedback. These components go hand in hand with usability of the normal technical variety (what helps you understand and get through a website or designed experience efficiently and effectively). But pedagogical usability has a constructivist pedagogical leaning, and seeks to assess a system’s pedagogical affordances in this regard.
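To make the idea concrete, the ten components above could be organized as a simple scoring rubric. This is only a sketch of my own: the component names come from the paragraph above, but the 1-5 rating scale, the midpoint default, and the function names are hypothetical conventions, not Nokelainen’s actual instrument (which uses Likert-scale statements).

```python
# Hypothetical pedagogical usability rubric. Component names come from the
# post above; the 1-5 scale and averaging scheme are invented for illustration.

COMPONENTS = [
    "learner control", "learner activity", "cooperative/collaborative learning",
    "goal orientation", "applicability", "added value", "motivation",
    "valuation of previous knowledge", "flexibility", "feedback",
]

def score_system(ratings: dict) -> float:
    """Average the 1-5 ratings across all ten components.

    Components missing from `ratings` default to the scale midpoint (3),
    so an evaluator can score only the components they observed.
    """
    return sum(ratings.get(c, 3) for c in COMPONENTS) / len(COMPONENTS)

# Example: a game strong on motivation and feedback, weak on goal orientation.
ratings = {"motivation": 5, "feedback": 4, "goal orientation": 2}
print(round(score_system(ratings), 2))  # prints 3.2
```

A real evaluation would weight components by pedagogical aim rather than averaging them equally, but even this crude average lets an educator compare two candidate games on the same scale.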

I have been reading through some of the literature on the use of video games in education (for instance, see Leonard Annetta’s 2008 Video games in education: Why they should be used and how they are being used), specifically the use of popular entertainment games for learning purposes (like the Civ series). Educators are using these games, and there has been extensive research on how they are using them (and theories about why they are using them). But I have not found research that offers educators ways to assess how useful these games are for pedagogical purposes. How would you even test that?

Pedagogical usability evaluations may be one way forward. If we take the top popular video games being used in education for learning purposes (maybe just the top 3), analyze how they are being used, and then assess their pedagogical usability, we could offer both assessments of those specific games, and models for pedagogical assessments of any other games in the future. Educators would then have a tool in hand to test the latest-greatest game and see if it will meet their specific pedagogical aims. Omar, Ibrahim & Jaafar (2011) have presented a very brief outline of doing this with educational computer games. I am suggesting something further: assess the pedagogical usability of games that are not meant to be used in education at all. Basically, game designers make a game they intend to be played for fun, educators use it for learning, and we test it to see if that is a good idea with that specific game (given its pedagogical usability–usefulness for educational purposes).


Omar, H. M., Ibrahim, R., & Jaafar, A. (2011). Methodology to evaluate interface of educational computer game. 2011 International Conference on Pattern Analysis and Intelligent Robotics. http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=05976931

Aesthetic Anti-Intentionalism: Art Tells Its Own Story

In “Art, meaning, and artist’s meaning”, Daniel Nathan (2006) offers a defense of anti-intentionalism, an explanation of various vulnerabilities that have been leveled against classic anti-intentionalism, and a re-formulation of anti-intentionalism, with several arguments laid out in its defense. Nathan’s argument is set in the context of art interpretation, interpretation that: 1- discovers or discloses meaning, 2- pertains to understanding “artistically pertinent properties” of a work (p. 282), and 3- is the ordinary activity of the everyday appreciator of art (as well as the professional critic). The question is simple: are an artist’s intentions relevant (and/or limiting) in our interpretation of their work? We will here look at how anti-intentionalism is characterized, and at Nathan’s central argument on its behalf.

The classic anti-intentionalist argument is straightforward. An artist intends something with her work. These intentions may be successfully embodied in the work, or not. For instance, an artist may intend a poem to be a treatise on death, but the work is usually interpreted as describing a birthday cake, because there are no indications in the work that suggest “treatise on death” as a possible intention. In this case, the artist’s intentions are not successfully embodied in the work. If the artist is successful in embodying her intentions (for instance, if we readily perceive her intentions regarding “death” as encoded in the work), we do not need to look elsewhere for meaning. If she is not successful (as in the birthday cake example), when we look at things other than the work itself (like the artist’s notes about writing a “treatise on death”), we are being taken away from the art and toward what is irrelevant—we end up reading into the work rather than reading the work (eisegesis, rather than exegesis). It is no longer the art that is speaking to us or that we are interpreting: it is the artist, or the context, or something external to the work. While artwork may be evidence of original intent (p. 287), original intent found outside of the encoded work (such as in an artist’s interview about a poem) is irrelevant to our interpretation of the work. At its heart, anti-intentionalism emphasizes what the artwork means rather than what the artist means.

Nathan offers a slightly different approach. His anti-intentionalism is built on distinctive features of art that seem to be different from ordinary experience (and that make intentionalism seem out of place): causal explanations do not help us interpret art, our response to representational objects in art is often different from our response to the objects they represent (we fear a stalker, but we do not fear a painting of a stalker), we are looking for aesthetic excellence in art but not in ordinary objects, etc. Nathan’s central proposal, linked to the idea that art is different from a conversation and ordinary experience, is that there is a paradox of intention: works of art are intended by artists to be publicly appreciated and interpreted. An artist intends for his sculpture to stand on its own two feet, so to speak, and explain itself to the world (without any outside help). There is a hierarchy of intentions going on here: the artist intends his work to mean something (and this meaning is relevant to our interpretation in the intentionalist account), but the artist also intends for his work to not require outside information for its interpretation—it should tell its own story. The first intention is a first-order intention, the second is a second-order intention. And the second-order intention (for the work to speak for itself) is claimed to be a “necessary presupposition of the artistic endeavor” (p. 293), whether the artist is aware of this or not. The force of Nathan’s argument is this: artists intend their work to encode the artists’ intentions: if we are charitable, we must interpret those intentions from the work itself (rather than deriving those intentions elsewhere). This is not meant to downplay the importance of original intent or context, but to argue that the work itself is a unity, and is the public embodiment of the meaning.
After the work is sent out into the world, it succeeds in telling the story the artist intended, or it fails and tells a different story—either way, according to the anti-intentionalist, it tells its own story. Any external story is just that: an external story (and not the story of the work itself).


Nathan, Daniel O. (2006). Art, meaning, and artist’s meaning. In Matthew Kieran (Ed.), Contemporary Debates in Aesthetics and the Philosophy of Art. Malden, MA: Blackwell.

Aesthetics & Rock’n’Roll

In “Rock ‘n’ Recording: The Ontological Complexity of Rock Music”, John Fisher (1998) argues that the ontology of rock music is unique, in that the primary object of appreciation is the recording, rather than the performance, song, or score, as in standard aesthetic accounts of music. For Fisher, these concepts do not fully account for rock music, which he asserts is primarily concerned with recordings. The definitive version of a rock song is the studio recording, rather than the performance or the score (which cannot sufficiently describe the sound events in rock music), making rock music more akin to film or printmaking than classical music.

Part of Fisher’s concern is to articulate what a recording is: the temporal sequence of sounds encoded in a medium, not the medium itself. When we refer to music on a rock album, we refer to the sounds, not to the physicality of the medium (though the physicality of the medium makes a difference as to the fidelity and quality of the sound events). Fisher here refers to the idea of a “norm-kind”, a kind of thing that can have defective instances (like the species “lion”, and actual instances of lions, which may fail to meet the standards of the ideal lion). Rock music’s recordings are this ideal sound event, for which the sound events at playback of a master tape are mere descriptive-kinds. The musician authorizes the norm-kind (recording) and declares it the ideal form of that work. If the recording does not meet the standards of the norm-kind (the original standard sound event), it is said to be defective in some way.

Rock recordings are often constructive, in the sense that they do not record how a live performance might sound, but are “built up” in a studio from various sounds and takes. Because of this, the recording itself is a musical work—the primary musical work. The producer, engineers, and musicians are all the artists. A recording is not merely of a performance of a song (indeed, the sound event may not even be able to be performed), or even an arrangement (rock recordings cannot be fully accounted for by arrangements or scores, and there are often no pre-existing scores). When the sound recording is authorized by its creators, that is the moment the work is brought into existence (it is now the norm). People also commonly ascribe the properties of the recording (for instance, studio-created abrupt song beginnings) to the work (abruptness is supposedly not a property of the song, but of the recording). So these three make up the meat of Fisher’s argument: 1- the record production process is constructive, so that the resulting recording is the artistic work, 2- the absence of pre-existing scores and the inability to account for rock sound events with scores shows that scores are not the primary work, and 3- properties of the work are ascribed as properties of the recording, but not of the song. Fisher denies that rock music can be reduced to the recording, and believes that live performances are still very important in rock, though we should look at the musical work here as the product of collaboration, so that we appreciate the work more fully as we understand the complexity of its creation. While the musical objects include the recording, the song, and the arrangement, the recording is the primary work.

It seems to me that Fisher’s account focuses on classic rock, and fails to account for the current rock music scene. Music videos of rock songs (which are often sonic variations of the “recorded” work) and the importance of live performances and recordings of those performances, as well as variations of songs, are just as important in current rock music as a “normative” studio recording from a specific album (note that many of the works in Pandora’s collection are of live performances, or are variations of songs). His account applies much more readily to the rock music of his own youth than it does to the rock music of the present. Further, classic rock may not be unique in its ontology: what about movie soundtracks? Are these instances where the primary work is often the recording, and where the arrangement or song is secondary?


Fisher, John Andrew. (1998). Rock ‘n’ recording: The ontological complexity of rock music. In Philip Alperson (Ed.), Musical worlds: New directions in the philosophy of music (pp. 109-123). University Park, PA: Penn State Press.

Learnification of Gaming

What if educators, instead of trying to create educational video games that engage and spur learning, focused on the games that people are already playing, and found ways to have learners reflect on those games? Instead of learning games, or the gamification of learning (which I think is a cool idea as well), what about the learnification of gaming? Turn entertaining games into learning opportunities. Students are already engaged. Why not ask them about what they are already engaged in? Students have knowledge; what knowledge do they have?

In a 2008 Edutopia interview (included below), James Paul Gee, an academic who has written extensively on games and learning, says: “The first thing the teacher needs to do is to understand what kids do and the range of it; she has to understand what her own children do. Let them teach you how they engage with games and other digital media. Let them talk about it, reflect on it, because this is very good for their learning. For them to become meta-aware about what they do, how they do it, why they do it, what more they could do with it — that itself is a good teaching tool. So the first thing you’d need to do is be an ethnographer of your own kids, with respect for their knowledge and letting them teach it, and for the diversity of kids’ interests.” (http://www.edutopia.org/technology-integration-experts)

This is an interesting perspective, one that I think fits well with Paulo Freire’s pedagogy of the oppressed: have students learn their own world, not just ours. And don’t think of education as a bank, where the teacher fills the students’ minds with knowledge, or of video games as best put to use by educators to fill students’ minds with what they ought to know (based on what the designer thinks they ought to know); think of games instead as part of the students’ worlds that form their own experiences of reality. Start from where the student is.

If a student is interested in a particular video game, what kind of game is it? Why do they like it? What kinds of words would they use to describe it? What strategies help them when they play it? Why is the game fun? Who are the characters, and how do they interact? If you lived in the game world (as a full-time character) what skills would you need to have to succeed? What secrets? How would our world be different if the game world suddenly became our world (would people disappear, or look different, or be always trying to kill you, or be very blocky)? Do students think of themselves as being “in the game” (do they identify with their character in the game if it is a game where players have a character)?

Can this kind of learnification be used to learn anything? And will it work with every audience? Well, if the audience doesn’t play games, no. And if you are teaching trigonometry, you probably need a different approach. But reflection practices and developing a rhetoric in a domain can be potentially empowering things for learners. This is a very specific kind of learning, following a specific kind of pedagogical model (a form of constructivism). If all cognition is situated cognition, where are our students situated, and how will knowing that help us to understand their cognition?

Note: This kind of activity could also be gamified. In a class, when a student references a game and how it applies to a subject everyone is learning in the class, they can be given “points” and can “level up” to achieve different objectives in class discussions (students’ roles in discussions could be a function of mastery levels).
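A minimal sketch of that points-and-levels mechanic might look like the following. Everything here is invented for illustration: the thresholds, the role names, and the award amount are all hypothetical choices a teacher would tune for their own class.

```python
# Hypothetical class-discussion gamification: students earn points for
# connecting a game to the course subject, and take on new discussion
# roles at fixed thresholds. Thresholds and role names are invented.

LEVELS = [(0, "observer"), (10, "contributor"), (25, "discussion leader")]

def role_for(points: int) -> str:
    """Return the highest discussion role whose point threshold is met."""
    role = LEVELS[0][1]
    for threshold, name in LEVELS:
        if points >= threshold:
            role = name
    return role

points = 0
for reference in ["Civ and supply lines", "Portal and momentum"]:
    points += 6  # award points for each on-topic game reference

print(points, role_for(points))  # prints: 12 contributor
```

The design choice worth noting is that roles are a function of mastery levels, as the note above suggests: the mechanic changes what a student *does* in discussion, not just a score they accumulate.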

This kind of learning would be student-specific, and would require time and a great amount of skill by the person given the instructor role. If you think it is impossible, think about parenting. If learnification of activities is not occurring in parent-child relationships (i.e., reflection, prediction, dialogue about concepts, relationships and things), something is missing. If you are a parent, you should already be practicing learnification with your children. This is not mere enculturation–it is personal and social examination of what is, in order to inform what can become.