Identity & Identicality

Identity and identicality are two different things. I may have an identical twin who shares all of my physical characteristics, and yet who is not me. We are different persons. I have one identity, and he has another. However, is my twin identical to me in every way? Do we share every attribute? For instance, are we in the same place at the same time, and do we share all characteristics in common? If so, I have no twin. There is only one of me. Exact identicality seems always to entail identity. People can identify me (mark my identity) based on my exact identicality with myself (if that even makes sense). If I look like a duck, swim like a duck, taste like a duck, etc., I must be a duck.

On the other hand, identity cannot always be said to entail identicality. I am not always identical with myself. For instance, I changed my clothes this morning, my mind this afternoon, and my haircut this evening, yet I remain me. I can be non-identical with myself at a different point in time. In fact, merely being at a different time means that I am now different than myself yesterday, even if everything else about me remained exactly as it was (if that were possible): being in a different time is itself a non-identical attribute, and I am different relative to the external world. Even though my brain cells change (for instance through learning and growing, through new cell growth, and through loss of cells), I continue to be me. I hold on to my identity through change.

So I can be identical with myself (I am), and non-identical with myself (for instance, myself from a different point in time) and continue in my identity.
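This distinction has a rough analogue in programming that may make it concrete. Python, for example, separates object identity (the `is` operator and `id()`) from structural equality (the `==` operator). The sketch below is only an illustrative analogy; the names (`twin_a`, `me`, and so on) are invented for the example:

```python
# Two "twins": identical in every listed attribute, yet two objects.
twin_a = ["human", "brown hair", "loves ducks"]
twin_b = ["human", "brown hair", "loves ducks"]

print(twin_a == twin_b)   # True: structurally identical (identicality)
print(twin_a is twin_b)   # False: two distinct objects (two identities)

# One object changing over time: it loses identicality with its former
# state, yet keeps its identity (it is still the very same object).
me = {"hair": "long", "mind": "made up"}
me_before = id(me)
me["hair"] = "short"          # changed my haircut this evening
print(id(me) == me_before)    # True: identity persists through change
```

In the analogy, equality of all listed attributes does not make the twins one object, and mutation does not make the one object another.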

This idea might be fruitfully applied to our understanding of the Trinity. In the three persons of the Trinity, we have God the Father, God the Son, and God the Holy Spirit. God the Father, God the Spirit and God the Son all have the identity of being God, yet are not identical (they do not share all attributes in common). Many of the attributes of God are enjoyed by all of the persons of the Trinity, such as being eternal, but some others are specific to specific persons. For instance, Jesus underwent death. The Father did not (he is a spirit). In this they are not identical. This does not mean they are not one. It also does not mean that they are mere modes of one thing (or are parts of one thing). Jesus is all God. He is the exact representation of the Godhead bodily. When we see Jesus physically, we see the Father. Yet no man has seen God (that which is identical with God). Nevertheless, Jesus, who is said to sit at his right hand, has seen the Father (he who has the identity of God), and testified of him, revealing him to the world through his own person. In this, Jesus can change, and yet God does not change. Jesus is identical with Jesus, and is non-identical with Jesus (himself from a different point in time/space or with differing attributes), and he continues in his identity as Jesus. Jesus is also non-identical with God the Father, non-identical with the Holy Spirit, yet continues in his identity as God the one and only. So being God is not the same as being identical to God (because God can be non-identical to God). In this way Jesus can empty himself of the glory of God through kenosis, and yet be God. He can live in time and be timeless. Jesus in time is not identical with Jesus that is timeless, but Jesus is the timeless God, and Jesus is God in time.

The distinction between identity and identicality might also help us to understand ourselves. I am me (I share the same identity with myself, if that even makes sense). I am not always self-identical though, as we discussed above. Further, my spirit is me. He (me) is the spirit of Bob. My spirit is not identical with me (we do not share all properties in common, such as having material substance and dying in the future). I do have material substance and will likely die in the future. Yet my spirit is me. My spirit is fully Bob (it is me, my identity). The same goes for my physical body, which is me, is not a part of me (it is wholly me), is not identical with my spirit, and yet is truly me. When you point at my body, you can truly say “You are Bob”. My body has the identity of Bob. My body is not a part of Bob (a part of my identity), my body is Bob (it is my identity, though it is sometimes non-identical). If my body dies, I die. I cannot be said to die if my body is not me. If my spirit lives on, I live on. I cannot be said to live on after death if my spirit does not live on. If my body is resurrected, I live on bodily. I cannot be said to live on bodily if my resurrected body is not me.

The differentiation of identity and identicality helps us solve several other major metaphysical puzzles, for instance, Zeno’s paradox of motion, the cat and its fur (fuzzy identities) and identifying a cloud (where does it start and where does it end?).

One puzzle presents itself in this analysis: how different can a being be from itself before it is not itself? Well, we might answer (tongue in cheek) that a being can be as different from itself as possible and still be itself. It does not cease to be itself through loss of identical attributes. Or does it? If a shovel has had the handle replaced multiple times, the wood stem replaced several times, and the scoop replaced several times, and has had different owners, is it the same shovel as when first created? Most would agree that it is not identical to the original shovel. Does it keep its identity through all of these changes, or is there a point (for instance, when the last remnant of the original is replaced) that it ceases to be itself, and something else is there (for instance, a new shovel)? If it is still called a shovel, it retains at least one attribute of the original. Perhaps to make the original shovel not be, it must cease to exist in totality (no attributes shared with the original). If this is the case, is it even possible for things to lose their identity? (Can’t things always be said to retain at least one attribute that is identical?)
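The shovel puzzle can be given a toy model, under the loudly flagged assumption that reference identity stands in for "identity" and attribute-by-attribute equality stands in for "identicality" (the part names are invented for illustration):

```python
# The shovel: replace every part, one at a time, until no original remains.
shovel = {"handle": "original", "stem": "original", "scoop": "original"}
same_shovel = shovel              # a second reference to the one shovel
first_state = dict(shovel)        # a snapshot of how it first existed

for part in shovel:
    shovel[part] = "replacement"  # the last original remnant is replaced

print(shovel is same_shovel)      # True: it remains the very same object
print(shovel == first_state)      # False: no identical attribute remains
```

In the model, the shovel keeps its identity through total replacement of its parts, which is one of the two answers the paragraph above entertains; the model does not settle which answer is metaphysically correct.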

In admitting that identity might be lost only through loss of every identical attribute, are we still holding on to some form of “identity equals identicality”? Because in this, we seem to be saying that identity requires at least some identicality (though not necessarily total identicality).

On the other hand, identicality of some attributes does not equal identity (you are not me just because you are also human). Loss of all attributes that are identical might mean loss of identity, but this would also require loss of being, since one of the attributes of the original identity would include being. I suppose it should come as no surprise that if a thing ceases to exist entirely it also ceases to retain its identity.

I don’t actually believe in the existence of attributes or properties of objects as things in themselves. How might this change the conversation? I believe that what we describe as attributes or properties of objects are actually merely “the way the thing exists”, in its relative relationship with other things and itself. “Being” is not part of the way a thing exists. While “being” could be construed to be a property of a thing (that which makes it exist), I don’t believe in properties. Instead, in my framework “the way the thing exists” presupposes the thing’s existence in order to describe it. So if (as in the previous paragraph) a thing suddenly ceased to exist completely, we could not speak of “the way it exists,” since it does not exist any longer. So, in reiterating our previous thesis, identicality of some “ways a thing exists” does not equal identity (though identicality of all the ways a thing exists does entail identity). For a specific object, loss of all the ways of existing that are identical ways of existing might mean loss of identity (though not necessarily loss of existence). It would take a loss of all ways of existing (even those that are non-identical) in order to lose existence entirely. Why would existence cease when ways of existing ceased? If a thing does not have any way that it exists, it should also be said to not exist. For example, there is no way of walking for the person who does not walk. And if a person loses all ways of walking they should be said to not walk. Similarly, if a person loses all ways of existing, they do not exist.

This might be applied to hermeneutics as well. The meaning I get from a text might share the identity with the meaning of the original author, though it might not be identical with the author’s meaning (it might not share everything in common with it). The meaning I get might share some (though not all) identical ways of being with the author’s meaning, and as such retains its identity as the original meaning, while not requiring identicality with that meaning. I can understand without fully understanding.

Similarly with perception. I perceive the way a thing is (outside of me), and I believe it is a certain way (the way I just perceived it). For instance, I perceive the cake is hot (I see steam rising from it), and I am thereby justified in believing the cake is hot (in the absence of defeaters). I perceive the way the cake exists (hot), and I believe it is a certain way (hot). But in this case, let’s say that the cake is actually cold, and there is a hot cup of coffee behind the cake that is letting off steam (because it is hot). I mistakenly think the cake is hot, but I might still be justified in thinking the cake is hot. The reality of the cake (that it is cold) is not exactly identical with what I perceive (that it is hot). Yet my perception matches the identity of the cake with the cake in the external world (and the way it exists). How? The cake looks brown to me, and the cake actually does let off a brown hue in this light (my perception in this case is identical with how the cake exists). Something of my perception is identical with the way the cake actually exists, meaning the identity of the cake is perceived, even though my perception (and resulting belief) do not identically match the state of the world that is external to me. So the statement “the cake is hot” is false (it does not correspond with external reality—the cake is actually not hot), but I am still referring to the same cake nevertheless (though it is not identical with what I believe or what I perceive). As long as some one way of existing crosses the boundary from the external world to my perception, I have (at least imperfectly) perceived the identity of the thing (because I have perceived one of the ways that it exists).

This last example presents an interesting problem. What if I perceive a way of existing for the cake that is not possible for cakes? What if the identity of the thing is irreconcilable with one of the ways of existing that I perceive (and I go on to believe this perception)? For instance, I perceive that the cake is alive and walking around. It is not possible for a cake to be a real cake and also to be alive and walking around. That’s not a possible way of existing for cakes (at least, for what we mean when we say “a real cake”). In that case, the identity is broken. A real cake cannot be alive and walk, but I perceive a cake that is living and walking, and therefore I do not perceive a real cake. The identity of the real cake is lost entirely, not as the result of missing all identical ways of existing (as before), but only by the perception of one way of existing that is not possible for that identity. For instance, a human being cannot turn into a cloud. If that is true, then my identity as “human” is lost if I suddenly become a cloud (in other words, “I” do not become a cloud—rather, a cloud is where I once was, or in some other way shares identical attributes with me, though it is not me).
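Both halves of this analysis of perception can be sketched as set operations, under the simplifying assumption that "ways of existing" can be listed as elements of a set (the particular sets here are invented for illustration): identity is perceived when at least one way crosses over, and broken when a perceived way is impossible for that kind of thing.

```python
actual_cake = {"cold", "brown", "round"}           # the ways the cake exists
possible_for_cakes = {"hot", "cold", "brown", "round", "frosted"}

# Case 1: the steam misleads me, but some ways still cross the boundary.
perceived = {"hot", "brown", "round"}
shared = actual_cake & perceived                   # ways that cross over
print(bool(shared))        # True: identity perceived, though imperfectly

# Case 2: I perceive a way that is impossible for cakes.
perceived_walking = {"brown", "alive", "walking"}
impossible = perceived_walking - possible_for_cakes
print(bool(impossible))    # True: the identity is broken; not a real cake
```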

To sum up, identity and identicality are different things, and an appreciation (and analysis) of how they differ helps us to understand ourselves, our God, philosophical puzzles, hermeneutics, perception, and even cakes. Exact identicality entails identity (if it quacks like a duck…). Identity does not entail identicality (I can be different than myself). Some identicality is necessary for identity (there must be something that is the same). Loss of all identicality entails loss of identity (loss of every identical attribute entails loss of identity), but difference of identity does not entail the absence of identicality (two different persons may have many things in common). And finally, there are essential identical ways of existing that, if absent, entail absence of identity (for instance, there are some ways of existing that are not possible for humans).


On the Timelessness of an In-Time God

The following is a response to Roger E. Olson’s short online article An Example of Unwarranted Theological Speculation: Divine Timelessness. Olson’s main thesis is that the idea of God’s timelessness is a flawed, alien concept borrowed from Greek philosophy that has no basis in biblical Christianity. He claims that timelessness has no scriptural support, and that such a God could not possibly interact with creatures in time. Contrary to Olson, there is scriptural and logical warrant for belief in a timeless God who interacts in time yet is not bound by it. This God can understand our time and act within it, while also being eternally existent.

A Timeless God in the Book

Olson claims that the scriptures do not say or hint that God is outside of time. I present here a brief overview of what I think is compelling evidence to the contrary. Jesus is called the Alpha and Omega, the beginning and the end, the one who is, who was and who is to come (Rev. 1:8). By this, is it only meant that he lives long, and that he has always been? Or is this to be understood as a testament to God’s timelessness and in-timeness, since he exists in the future as well as the past? Again, one day is as a thousand years, and a thousand years are as a day (2 Pet. 3:8). A short amount of time is long to God. A long amount of time is short to God. Does this not speak of (or at least hint at) God’s being outside of time, or that time is a concept that does not bind God in his being? He is not a God of the dead, but of the living, for all live to him (Luke 20:38). All currently live to God. He is currently the God of Abraham, Isaac, and Jacob, who are apparently alive for him. Jesus states this in the context of the resurrection. Has the resurrection of the just and unjust occurred already? Perhaps it has in God’s perspective (he is currently the God of already-not-yet resurrected people). Jesus was before John the Baptist and Abraham, yet came after them (John 1:15; 8:56-58): before Abraham was (in the past), “I am” (in the present). He (Jesus) takes on himself many times the words “I am”, the one who is. He is YHWH, the God who is. The world and all in it will grow old and change, but God is the same (he does not change) and his years will have no end (Psalm 102:26-27).

Similarly, the creation event seems to point to God’s timelessness. “In the beginning God created the heavens and the earth” (Gen. 1:1). Does this mean that God also had his beginning when he created the physical universe (including time)? Or was time not created, but existed, along with God, for all eternity? From outside of scripture, we find evidence of the linkage of space with time: according to relativity theory, time is part and parcel of the physical universe. Time and space make up a continuum. If this is so, then either 1) God comes into being with the universe (in the beginning), 2) the universe has no beginning (in conflict with much biblical and scientific evidence), or 3) God created time. God chose us in Christ before the foundation of the world (the creation of the physical universe) (Eph. 1:4), and perhaps Christ died before creation (Rev. 13:8). We existed for him to choose before anything existed. These things are spoken of as having taken place before time (if time was created in the beginning), and certainly they took place before we existed. Before I existed, I existed to him. In creation and throughout scripture, God points us to his being that is not bounded by time or by our line of events, and he does this by revealing himself in time.

Alien Invasion

Olson claims that the idea of timelessness came from Greek philosophy. There is no such thing. There are Greek philosophies, and there are many of them, but there is not one (maybe there was only one Greek philosophy at a time when there was only one Greek philosopher). There was more divergence, if possible, among Greek philosophical systems than there is among current philosophical systems. I am fairly sure that Olson is not ignorant of this fact, but he does not tell us which specific Greek philosophy he is actually referring to. Is he referring to some form of Platonist philosophy? Why lump all Greek philosophy together and not name his opponent?

Olson uses invective language to say that this view “invaded” Christian theology from “alien sources.” Is this just rhetoric, or does he take his words seriously? Are Greek philosophers alien sources? Alien to what? They are humans. Aren’t humans the ones who do theology? I would think that at least some theology transpires among humans. These philosophers are not alien to me (I consider myself a human). Maybe Olson means alien to the Bible and/or the thoughts of the biblical authors and audience, or the community of faith within Judeo-Christianity? By alien and invading, does he mean to say this influence is evil in some way? But aren’t the ideas of the philosophers and theologians whom Olson quotes (and whose arguments he borrows), like Pannenberg, Moltmann, Polkinghorne, and Torrance, also alien in similar ways? Might he be liable to taking away one alien view and replacing it with another: ideas that are outside of the minds of the biblical authors?

Is Greek philosophy not to be trusted as a source for truth? If a Greek philosopher were to argue for the existence of one God, must we reject his proposition because he is a Greek philosopher? (So that if a Greek philosopher says there is one God, he must be wrong; or do we mean he must be backed up by scripture for his words to be correct, and until that time his thoughts are incorrect?) Is the god of some ancient Greek philosophers the God of the Bible? Olson responds with a resounding “No”—different God entirely. It seems to me that perhaps Paul in Athens points to the idea that Greek philosophers might have some true knowledge about God: “In him we live and move and have our being” and “we are his offspring”—what is true of God to Greeks is true of the true God (Acts 17:28).

If we deny any and all truth about God in cultures outside of scriptural revelation (like that of Greek philosophers), we fail to recognize and appreciate that our own doctrines of Christ’s two natures and the Trinity are derived in part from the ideas of specific Greek philosophers. These would also seem to fall under the rhetoric of “invasions from alien sources.” Must we then deny the hypostatic union of Christ’s two natures and the substance of the Trinitarian creed for these reasons? And because the apostle John used the Greek concept Logos (with very important ties to specific Greek philosophers’ ideas) referring to God in Christ, must we also reject this revelation on the grounds that this is an alien concept invading the theological landscape? What languages were the NT scriptures written in? Greek? To whom were the NT scriptures written? Some Greeks? Does being Greek disqualify a human from knowing the truth? From being the object of revelation? From speaking truly about the true God?

Timelessly In-time

Olson argues that “a timeless being cannot interact with temporal beings.” This does not seem to me to be a true statement. By timeless, we do not mean (unless we are transcendentalists of some kind) that God does not or cannot interact with and in time. We do not mean that God exists only outside of time (or that God has parts that exist in time and others that exist outside of time). If we acknowledge that God exists everywhere spatially, we similarly acknowledge that God exists everywhere temporally (we seem to live in a space-time continuum after all).

No Time & G-Time

In the comments on his blog post, Olson remarks that “I tend to agree that God created time and entered into it with us. ‘Before’ time began there was either no time or (Barth) ‘divine temporality’ that is somehow different from our time.” If Olson agrees that God created time, then God exists outside of time (in order to create it). That does not mean he does not also exist in time. He created the universe from not-universe, and he created time from not-time. If Olson believes in a God before creation without time, this God is timeless (there was a God who was timeless, even if for Olson he is not timeless now). Olson seems to me to believe in what he denies (the possibility of a God who is timeless). If God can be timeless before creation, and yet create the universe (even if Olson does not acknowledge him to be timeless now), apparently a timeless being can interact with beings in time (by creating). The creation event seems to me to provide logical warrant for belief in a timeless God, and this warrant exists in Olson’s perspective as well, though he does not acknowledge it.

Belief in the creation of time (and belief in a God existing in no-time “before” creation) seems to be either 1) belief in a God that came into being with the creation of the universe (something opposed to scripture), or 2) belief in a God that exists outside of time creating time (along with creating the universe). I don’t imagine that Olson believes 1, so we are left with 2. If he believes 2, then God was at some point timeless (and this with justification from scripture: he created the universe in the beginning). If God was timeless and interacted with time (to create it), then this is possible, and Olson’s argument loses its force: a timeless being interacted with temporal beings at creation, even for Olson.

On the other hand, what if Barth is correct and there is some different divine temporality (other kind of time) outside of our time, in which God existed to create the universe (this has been called by some “G-time”)? If G-time is not the time we speak of when we talk about time in our universe, then it is not time (it is outside of our time, which is all we need to mean when we say that God is “timeless”—he is not restricted by time and/or is outside of our time, though he is not bounded by his timelessness and can still be in our time). On the other hand, if by time we mean “that by which we judge the progression of events”, then we might ask “how is God’s divine temporality different than ours?” Perhaps we might say that ours is created and dependent on materiality, while divine temporality is not? Then I agree that God is not bound by our created time that is dependent on materiality. But we should understand that by acknowledging this, we are also saying that our progression of events is possibly of another sort than God’s (though it is not to say that he does not participate in our progression of events). My tomorrow is not of necessity his tomorrow. If that is true, we are back to the timelessness of God under a different name (G-time/divine temporality and our time).

A God Who Can Tell Time

Olson argues that a timeless God cannot know that “today is February 18, 2015.” But for Olson, can a God who exists in G-time know that the day of creation is the day of creation? Or can a God who existed without time “before” creation know the day of creation is the day of creation? Olson might respond that because God is no longer timeless he can tell the time of day (he has entered into time now). One question: for God, is the day Olson speaks of the 18th or the 17th of February (which time zone does God live in)? If God does not live in a time zone, and it is both the 17th and 18th for God, we are dealing with a God not bound by our time. If we have a God bound by time (as in Olson’s conception, unless I misunderstand him), we also have a God bound by space (he must be in a specific part of the universe, and only that part). If God is only in our time, he cannot be omnipresent as it is traditionally understood (and conversely, if God is omnipresent, he cannot only be in our time).

Can a timeless God know that “today is February 18, 2015” for me? In Olson’s blog post, he states that “today is February 18, 2015.” I believe that I understand what he means, even though today’s date (for me) as I write this is February 24th. I understand that for him it was the 18th when he originally wrote his article. For him, in that writing, it is the 18th. I am outside of that, and for me it is the 24th, yet my knowledge of that event is not bound by my time. I can enter into the world of Olson and know what it is for the day’s date to be February 18th for him. Similarly, I can read a book of fiction in which I live completely outside of the world of the story, and I can know that “today is February 18, 2015” for the characters in the story. I am in a timeless place relative to the characters in the story world, yet I know the day’s date in that world. I could have written the story myself, and for me, relative to the story characters, I live in a timeless place (their world is an eternal now to me). I can still know that “today is February 18, 2015” for the characters in the story at a specific point in time for that world. If I write myself into the story, and interact with story characters within their world and time, I am not thus bound to their time. I can exist in their time and without it. By analogy, can this not also be true of God?


Asserting the timelessness of God is not to deny that God is in time, but that he is beyond and not bound by it. We can affirm with Olson and others that God does indeed act in and for creatures in time. That he lives in time with us and for us. And that we live in and for him. We should additionally affirm, though, that God is not bound by this time, by this space. And we can affirm this timelessness based on scripture, on logic, and on analogy. As the creator of time and space, he is, and there is none like him.


Human Developers

When working in a silo, developers often begin to think of themselves, and to act, as if they were non-human: forgoing sleep, food, and the niceties of human life (like rising from a chair…or blinking). The closer to the metal, the further from the flesh. Peer review by other developers helps those developers to see themselves as users: humans. Not just a computing brain with inputs and outputs and advanced algorithms and fingers for clicking, or a demiurge with all power and control, but a real live human.

It might only take a three-year-old to review someone’s application for them, because they are human too.

Experiencing someone else’s application is a simple human affair, but it requires openness to learn how another person thinks, sees, and organizes the world. When you review another developer’s work, you are no longer god (as you are when you are the coder, creating worlds with your words); you are Adam or Eve, newly born in a green garden with opportunities all around. It is the Adams and Eves that we design our worlds for. Human developers, who know first-hand how to be human, make more human-like applications. Developers ought to practice being human every once in a while: to review and to be reviewed.

And so in training other Web developers, we may turn them into computers, or we may lead them to develop into humans. We cannot give what we do not have. We must be humans, we must develop humans, and we must develop worlds for humans. Open the door to your world and let some humans in.


Making Pi

Originally posted on Mizzou Server Hackerspace:

Photo from Wikimedia by cowjuice, Raspberry Pi: inedible, but oh so sweet

These last two sessions (with a Spring Break in the middle), our group has looked at setting up a Raspberry Pi as a server and we are still working on connection issues with Cloud9 IDE.

For the Raspberry Pi server, we had a fun string of events. First, we had to find power for the thing (no power cord). We used the micro USB port onboard and connected it to a powered USB port on a computer. Problem solved. After connecting an Ethernet cable, a Mac mouse/keyboard combo, and an HDMI cable, we realized that no monitors or Macs in the area had HDMI input. What a crock! After scrounging around in the office for a usable monitor, and even trying an old Panasonic TV (the room looked like a Texas Instruments developer’s lab from the ’80s before we were done), we finally found…



The Rise of the Machines


We have started a Mizzou Server Hackerspace at the Digital Media Zone that I manage, with a blog to match.

Originally posted on Mizzou Server Hackerspace:

Old-timey machine shop

The age of the machines has come, and we as mere humans want to understand our overlords better. This week, in celebration of the World Wide Web’s 25th anniversary, we have inaugurated an informal weekly get-together at the University of Missouri Columbia Digital Media Zone around the topic of the machines that make the Web work: servers.

This blog (and possibly others) will serve as the log of our adventures into all things server-related. Our weekly meetings, Thursdays 6pm-7:30pm in the Digital Media Zone in Townsend Hall, are going to be short hackathons on the following topics:

  • Making our own servers (with Raspberry Pi, Android, Intel Galileo and such)
  • Working with our own Apache server space (free unlimited space if you join the club, which is also free)
  • Using JS server-side (Node.js)
  • MySQL, phpMyAdmin, MongoDB, CouchDB, PouchDB, localStorage syncing with a RESTful API
  • PHP
  • What makes blogs work on the…



The Puzzle of Transparency

In Experience and Experiment in Art, Alva Noë (2000) argues that experience is more complex than it seems. Reflection on art can aid our understanding of perceptual consciousness and be a “tool of phenomenological investigation” (p. 123). Reflection on specific kinds of art may help us solve the puzzle of the transparency of perception. The puzzle of transparency is this: in art (and in experience in general) we have a tendency to see through our own perceptions (to not reflect on the perceptions themselves) to the objects of our experience in the outside world. When we attempt to reflect on our window to the world (our perceptions), we look through it and end up reflecting on the world itself. When we try to reflect on our seeing, we end up describing what we are seeing rather than the experience of seeing. Experience is transparent to our descriptions and reflections (we instead reflect on and describe the experienced).

So, to solve the problem, we need to think of perceptual experience as a temporally extended process, and we need to look at the activities of this process (what we do as we experience our environment). Instead of describing the window, we describe the window’s actions. Noë sees the sculpture of Richard Serra (particularly his Running Arcs) as exemplifying this idea: the sculpture allows for active meta-perception (perception of the act of perceiving). The work causes us to reflect on our experience with it, and on our own perceptions, by making us feel off-balance, intimidated, as if the piece were a whole with its environment: qualities that highlight the nature of our perceptions rather than just the objects themselves. It allows us to perceive our situated activity of perceiving, the act of exploring our world. Serra’s work, as what Noë terms “experiential art”, brings us into contact with our own act of perceiving: the window’s transparency and its act of mediating that which lies beyond.

Noë, Alva. (2000). Experience and experiment in art. Journal of Consciousness Studies, 7(8-9), 123-135.

Kant’s Subjective Universal Validity

In Analytic of the Beautiful (Book 1 in Critique of Judgment), Kant discusses the judgment of taste, and presents arguments concerning the nature of beauty in comparison with that which is pleasant or good. Kant argues that judgments of taste (specifically of beauty) have subjective universal validity.

Judgments of taste are not logical or evaluations of reason. Instead, their determining ground is subjective. Judgments of taste are tied to internal feelings of satisfaction and pleasure, and so are aesthetical and subjective. Satisfaction in judgments of taste is disinterested and indifferent to the existence of objects. In contrast, both satisfaction with the pleasant and satisfaction with the good are interested in the state of the existence of objects. In pleasure we seek gratification, while in the good we desire either utility (the mediate good—good for something) or good in itself (the immediate and absolute good). What gratifies is pleasant, what is esteemed is good, but what merely pleases is said to be beautiful (and note that what pleases is subjective).

The universality of the judgment of taste is related to its subjectivity: because it is both subjective and disinterested, this feeling of pleasure is valid for all humans (i.e. beauty is imputed to all as universally satisfying). Each individual feels pleasure in the beautiful, without reference to themselves and without interest in the object, and infers that this same satisfaction is universal (it is not bound to the subject’s interests or to the object’s existence). While the pleasant is individually satisfying, and the good is conceptual in nature, the beautiful is universally satisfying and non-conceptual. Judgments of beauty are not postulated as universal: all who make a claim of “beauty” impute universal agreement.

Kant may be seen as conflating satisfaction with the beautiful with taste: for Kant each person cannot possibly have their own tastes because tastes have subjective universal validity. However, his arguments seem to only suggest that satisfaction with the beautiful has subjective universal validity, not that all judgments of taste have subjective universal validity. If satisfaction with beauty is a subset of tastes, as some understand Kant to be saying, then there are other tastes outside that subset that may or may not share all of the same characteristics as beauty, and thus what is always true of beauty is not always true of all tastes (and vice versa). On the other hand, if satisfaction with beauty is synonymous with taste in Kant’s writings (and not a mere subset), we are left with the problem of why tastes differ, yet satisfaction with beauty does not. All humans find satisfaction with beauty (they impute absolute universal experience of beauty in beautiful objects), though they find beauty in different objects and differ amongst themselves about what objects are beautiful. In contrast, all humans find satisfaction with their own judgments of taste, but they do not impute absolute universal experience of taste with pleasant objects. In short, there seems to be an imputation of general objectivity (or at least intersubjectivity) in judgments of the beautiful, even though the experiences are subjective, while there seems to be no imputation of general objectivity in judgments of taste (in general): judgments of taste have subjective and indexical validity, but not always imputed general universality.
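The subset point here is elementary set logic, and a toy sketch can make it explicit. The example judgments below are my own invention (not Kant's examples): a property can hold universally over a subset without holding over the containing set.

```python
# Toy illustration (invented examples, not Kant's): "imputes universal
# agreement" holds of every judgment of beauty, yet not of every
# judgment of taste, if beauty-judgments are a proper subset of tastes.
judgments_of_taste = {"this rose is beautiful", "this wine is pleasant"}
judgments_of_beauty = {"this rose is beautiful"}  # assumed to be a subset
imputes_universal_agreement = {"this rose is beautiful"}

def universally(members, having):
    """True iff every member of `members` has the property (is in `having`)."""
    return members <= having

# Holds over the subset, but not over the containing set:
subset_ok = universally(judgments_of_beauty, imputes_universal_agreement)
whole_ok = universally(judgments_of_taste, imputes_universal_agreement)
```

Nothing here settles the interpretive question, of course; it only shows that "always true of beauty" and "always true of all tastes" come apart as soon as beauty is read as a proper subset.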

New Media Literacy & Video Games

This past week I have been studying up on new media, media literacy, and new media literacy. It is interesting to me how much of this literature surrounds use of video games in education. New media is defined in different ways by researchers in different disciplines, but my gloss definition is something like “digital interactive communication formats”. Why is this kind of literacy important? Why should people be able to intelligently and critically consume, analyze and create digital interactive media? Education is not only about concepts: it is also about artifacts. As the artifacts (the tools and things we humans create and use) in our world change, so must our education. What it means to live in a high-functioning way in our world changes as the artifacts change (and because artifacts are a part of culture, as the culture changes). Similarly, as the formats and functions of the artifacts change (for instance, from paper to digital, from read-only to read-write), so must our education change. If K-12 education is to bring our children up to speed with the world they inhabit, and empower them to become meaning-making citizens, we must attend to the enculturation process and the products of our culture (and reflect on those processes and products).

Where video games fit into this artifactual nature of our current digital world is an interesting question (interesting at least to me). How can we operationalize new media literacy education, what are the ultimate goals of education in general (and do these also change with cultural/artifactual changes), and how can video games help us (or hinder us) in achieving those goals? Does new media even represent a change, or is it just hype? To be sure, new media literacy education is merely new media education if we fail to focus on reflection, critique and synthesis. Video games are fun and learning experiences in themselves, but can they also be used as contexts for reflection, tools for creation, and worlds of transferable learning scenarios and skills? If so, what affordances are necessary for this to occur?

Violent Video Games Study Design

Thinking about the several research studies that have been done in the last decade on the impact of violent video games on levels of aggression and violence in players, I think a major stumbling block has been in the area of research design–the validity of studies has been questioned. If we really want to scientifically evaluate the hypothesis that violent video game exposure is correlated with aggression/violence, it seems to me that we need a more rigorous research design. I do not think that in general violent video games cause or are correlated with violence/aggression, but it would be interesting to look at this from a research perspective and test these ideas quantitatively. Note that I am not a quantitative researcher by background, and am not fully convinced of the total efficacy of “scientific research” of whatever variety. I am convinced that it is a good idea to look into our world, and to develop hypotheses and theories that we can test; I am just not convinced that the results of those analyses are dependable as truth/facts (I don’t go for “it turns out that” research results–I do go for “it seems to us that”), though I do believe we should act on what we believe based on justifications.

That said, I think it would be good to have groups of around 100 children of about the same age/demographic background who are given a “pre-test” on current aggression/violence measures (and have several of these groups). The first group plays no video games for one month, and is given a post-test, then plays violent video games for two months and is given a post-test again. The second plays no video games for two months, and is given a post-test, then is given several specific violent video games to play regularly for one month and is given another post-test. The third plays no video games for three months and is given a post-test. The fourth plays several specific violent video games regularly for three months and is given a post-test. The same design is repeated with non-violent games. The pre- and post-tests would ask parents about the violence/aggression of the child, ask the same of peers, and the same of the children themselves. The pre-test would also gather data about previous violent video game exposure (and try to place children in groups evenly distributed based on this). The children would also be given pre- and post-test physical skills tests related to violent video games (skill level in violent endeavors like wrestling, paintball competition, shooting, etc.). Then we could see better whether children actually learn skills in violent video games, and whether aggression levels and violence are significantly different between groups, between the start of the study and the end, and between no video games, non-violent video games, and violent video games.
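As a rough sketch, the balanced group assignment and the pre/post comparison described above could look something like this in Python. The condition labels, measure names, and all scores below are invented placeholders, not a worked-out protocol:

```python
import statistics

# Hypothetical conditions mirroring the four groups described above.
CONDITIONS = [
    "no_games_1mo_then_violent_2mo",
    "no_games_2mo_then_violent_1mo",
    "no_games_3mo",
    "violent_games_3mo",
]

def assign_balanced(children, conditions=CONDITIONS):
    """Sort children by prior violent-game exposure (from the pre-test),
    then deal them round-robin so each condition gets a similar spread."""
    ordered = sorted(children, key=lambda c: c["prior_exposure"])
    groups = {cond: [] for cond in conditions}
    for i, child in enumerate(ordered):
        groups[conditions[i % len(conditions)]].append(child)
    return groups

def mean_change(group, pre="pre_aggression", post="post_aggression"):
    """Average pre-to-post change in an aggression measure for one group."""
    return statistics.mean(c[post] - c[pre] for c in group)

# Fabricated example data: 100 children with varying prior exposure.
children = [
    {"prior_exposure": i % 5, "pre_aggression": 10, "post_aggression": 10 + (i % 3)}
    for i in range(100)
]
groups = assign_balanced(children)
# Each of the four conditions receives 25 children with a similar
# prior-exposure distribution; mean_change can then be compared across them.
```

A real analysis would of course need inferential statistics (for example, an ANOVA across conditions with the pre-test as a covariate) rather than raw mean differences, but the sketch shows where the exposure balancing and the pre/post measures enter the design.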

Depiction & Reflection

Are reflections depictive? Does my reflection in the mirror depict me? At first blush I would say no, reflections do not depict. The mirror reflection of Bob Wadholm is not a depiction of Bob Wadholm. However, if this is true, it seems that all photography is likewise not depictive, because photography collects the light, and then using mediating technologies allows new light to display a reflected image of a sort back to the perceiver. But I refuse to believe that the photograph in my wallet is not a picture, so something must be wrong with this assessment. Does a depiction require a capture of that light for storage and later display: and is this why I feel that the mirror does not depict me, but that photographs do? But I may set up a web cam to record my face and display it back to me in real-time, functioning as a mirror. And if I do not record this video, is it depictive (i.e., does a video reflection of me depict me)?

Accessibility of Video Game Research

So sorry to any regular readers of my posts on video games: I wrote a post this past week along with a podcast about violent video games that I could not share with a wider audience. Why? Because the article I was discussing is not open access. What does that mean? That means, in this case, that if you don’t own the rights to read the article (like by paying an exorbitant price for a subscription to a particular research journal) you can’t access my thorough discussion of the study and its findings. This is knowledge you can’t get (maybe you are not sad about this, but I am). And you wouldn’t find the article unless I told you first what it was. I could tell you the name of the article, but I don’t want to call out this particular journal or these authors (I feel like it is a systemic problem, not a particular problem with a specific journal or group of authors).

What I do want to do is this: provide you, the general reader, with articles that I find useful and interesting. I want you to have the knowledge I have access to. I think it is wrong for me to get to read these articles and benefit from them, and for you to not be able to even really know what they are saying. Knowledge is being locked up behind a pay door. And I think that is not good–especially because I think it is important knowledge. So if you are writing or thinking about video game use in education, please continue to do so. But please make your work available to the masses who actually play video games, and not just to the very few of us who happen to work at a huge university that can afford a journal subscription to read your article. Release your work under a Creative Commons license of some sort. I don’t think you want to keep your knowledge bound up where no one can read it. And we want to read it. So make it accessible, make it readable by everyone, make it available for us to use and talk about freely and openly. The world is only made worse when useful video game research (along with much other useful research) is not available to be read legally by those whom it would benefit the most. End of diatribe.

Learning Analytics & Serious Games

Do learning analytics have a place in serious games? Learning analytics is “the measurement, collection, analysis and reporting of data about learners and their contexts, for purposes of understanding and optimising learning and the environments in which it occurs”. Don’t games already have built-in game mechanics that do this? Yes and no. Performance is tested in-game. Game designers think through how a task will be learned and accomplished, and try to optimize this through testing. So if a learner in a serious game gets through the game, they should have learned what it takes to get through the game, meaning that they learned what the game intended them to learn (as long as the game designers were teaching and testing what they intended to teach and test).

However, this data (about learners) is not always measured. What about a task that most learners have trouble with? Game designers (counterintuitively) do not always know the most optimal way to get through their own games (or to learn their own games), even after sitting and observing representative testers. Expert players may, in the end, come up with more optimal solutions than what the original designers thought of in the first place. Without measuring this, collecting data about it, analyzing, and reporting this data about learners and contexts, it is difficult to know or share knowledge about such learner optimizations (or oppositely, sub-optimizations). This does not just happen in-game. Designers can be assisted in creating better games if they are better informed about optimal and sub-optimal performances in games. And game data can help bring this knowledge to designers. Learning analytics seem to have the potential to play an important role in serious game development, getting the designer to the understandings they need to frame the world that the learner needs.
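To make the measurement point concrete, here is a minimal sketch of what such in-game data collection might look like. The class, event fields, and threshold below are my own invented illustration, not any particular analytics framework:

```python
import statistics
from collections import defaultdict

# A toy learning-analytics collector for a serious game: log how many
# attempts each player needed per task, then report which tasks most
# players struggle with. All names and the threshold are hypothetical.
class TaskAnalytics:
    def __init__(self):
        self.attempts = defaultdict(list)  # task_id -> list of attempt counts

    def log_completion(self, player_id, task_id, attempts_needed):
        """Record that a player cleared `task_id` after N attempts."""
        self.attempts[task_id].append(attempts_needed)

    def trouble_spots(self, threshold=3.0):
        """Tasks whose mean attempt count exceeds the threshold:
        candidates for redesign, or for studying how expert players
        found more optimal routes than the designers anticipated."""
        return sorted(
            task
            for task, counts in self.attempts.items()
            if statistics.mean(counts) > threshold
        )

analytics = TaskAnalytics()
analytics.log_completion("p1", "bridge_puzzle", 5)
analytics.log_completion("p2", "bridge_puzzle", 4)
analytics.log_completion("p1", "intro_jump", 1)
hard_tasks = analytics.trouble_spots()  # ["bridge_puzzle"]
```

The point of the sketch is the reporting step: once attempt data is collected and aggregated, the designer can see not only that a task is hard, but can go looking at how the successful players got through it.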

Hume’s “Of the Standard of Taste”

In the essay “Of the Standard of Taste”, David Hume (1711-1776) briefly argues that although tastes, like opinions, differ, tastes are not right or wrong (as opinions might be), though sentiment does admit of varieties of delicacy that may be used to establish a kind of standard of taste. People speak of beauty with language that admits of absolutes. In comparison with moral matters, use of language like “vice” entails blame, while use of language like “virtue” entails praise—all people universally agree that vices are reprehensible and virtues are praiseworthy; the question is “what are the virtues and what are the vices?” So too in art appreciation, aspects of beauty are universally acknowledged, while individual tastes seem to differ with sentiment rather than with reason. There are many opinions, and there are many sentiments. However, while one opinion among many may be true, sentiment is always true and right: “each mind perceives a different beauty.” Seeking true taste is like seeking true sweet or true bitter. Individual dispositions of biological faculties are at play, seeming to place beauty in the eye of the beholder.

While this has passed as common truth, it is also acknowledged that we do seem to consider it absurd when tastes are widely different from the norm, and do not fit with ideals of beauty. Humans do seem to generally share (please forgive the split infinitive) a common sentiment (albeit in extreme example cases). There are, Hume argues, in the midst of the myriad differences of taste, real general principles for praise or blame. People sometimes mis-see beauty because of personal defect. Objects have certain qualities that produce particular feelings, but small differences (subtle admixtures of beauty and deformity) are more difficult to distinguish. Resemblance to the familiar brings a percipient greater pleasure, preference, and prejudice in the appreciation of a work of art and may muddy the water. However, you can prove one critic’s taste is better than another’s, and the deficient critic will acknowledge their taste indelicacy when presented with evidence derived from general principles about beauty that are universally acknowledged. Delicacy of taste in beauty requires practice, comparison, perspective, and absence of prejudice. Reason is required to check prejudice, and keep it from influencing one’s sentiment of beauty. So, although principles of taste are universal, still few people have come to the heights of delicacy to “establish their own sentiment as the standard of beauty.” Nevertheless, when these elite few issue a joint verdict, we have on our hands the true standard of beauty and taste.

So for Hume, while judgments of taste may be based on sentiment (and sentiment may vary wildly), aesthetic relativism is not the answer. There are absolutes, and these are to be found in persons of delicate and practiced (though unprejudiced) taste. This seems to be an argument for taste elitism (a kind of tastocracy), which causes me to wonder whether Hume sees himself in the ranks of this group of arbiters of true beauty. Beyond this criticism, there is potentially a more fundamental weakness with this strategy: admitting that tastes are always right, but that some tastes are better (more delicate) than others, providing a standard by which to judge tastes as good or bad, seems to say that while a particular taste may be right, it may not fit the standard (and thus be deformed, deficient and/or wrong in its verdict). For instance, in contradistinction to Hume, I find John Bunyan’s work of greater genius and elegance than Joseph Addison’s, based on sentiment. Another critic will likely disagree with me, and find the opposite (as does Hume, who characterizes detractors as “absurd” and “ridiculous”). Can we prove one taste right, and another wrong? Is my sentiment wrong or deficient? If you “prove” that, you provide reasons, which is not what taste is based on for Hume (it is based on sentiment). If sentiment is the basis of judgments of taste, no reasons may overrule my verdict. So, perhaps, as Hume seems to suggest, a better, more informed sentiment may be the basis of judgment of my taste. But what makes it better or more informed or more delicate if it is in the end still mere sentiment (is it closer to an absolute truth)? And does the verdict of the elite group turn my “true” verdict based on sentiment into a “false” verdict based on sentiment?
Further, if Hume’s argument is successful (and the elite group’s sentiments are better or more delicate than my own in regard to esteeming Bunyan’s work), this does not mean they are a standard, but that their sentiment is more well-informed (or better in some other way). Even if that is the case, it may also be that a group of even more well-informed critics later looks down its nose at these elitists’ sentiments, is even more well-informed in its valuations, and pronounces my earlier valuation more delicate than those of my current detractors. In this case, is the standard of taste getting more standard? Does an absolute standard admit of degrees? It does not seem logical that it could.

Experienced Resemblance of Outline Shape: Pictures & Depiction

In “The Domain of Depiction,” Dominic Lopes (2005) argues against the EROS (experienced resemblance of outline shape) theory of depiction and proposes and defends a recognition theory of depiction. I would here like to focus on what I think is Lopes’ strongest objection against EROS, and to show how a defender of EROS theory might reply.

There are six cardinal truths about depiction that EROS theory attempts to explain: 1- depiction is not property-less: to depict something, you must depict it as having some property; 2- depiction is always from a point of view; 3- only visible objects can be depicted; 4- objects can only be partially misrepresented in pictures (total misrepresentation is merely non-representation); 5- understanding a picture entails knowing what the depicted object looks like; and 6- knowing how the depicted object looks is necessary and suffices for understanding a depiction of the object (2005, p. 162). In EROS theory, depiction is explained (though not defined) as an experience of resemblance of an object in outline shape. This experience is characterized as “seeing-in”, but depiction requires more than just seeing-in, it also requires the right kind of causal relation between the picture and the object, or requires that the picture is made with the intent for the object to be seen in the picture (p. 163).

Lopes offers several objections to EROS theory, and perhaps his strongest objection is his analysis of the limits of EROS. This objection takes the form of four illustrations of problematic depictions:

  1. A cube shown in parallel oblique perspective (EROS either says that in many cases our intuition about what is depicted is incorrect, or that EROS is so flexible that it does not have to track with objective resemblance in outline shape—either way, EROS is not necessary for seeing-in),
  2. Three outline shapes of parts of a cube (recognizable outline shapes depend on the presence of significant contours or boundaries in the depiction, so EROS is not sufficient for seeing-in),
  3. The outline of shading on a face, shown in positive, negative and outline (outline shape is only recognizable with the presence of an illumination boundary, so again EROS is not sufficient for seeing-in), and
  4. R.C. James’s “Photograph of a Dalmatian” (the outline shape of the dog is not seen until the percipient sees the dog in the picture, so EROS depends upon seeing-in).

So EROS is not sufficient or necessary for seeing-in, and seems to imply circularity.

How might an EROS theorist respond? I think one avenue might be to look back at what EROS theory claims: that depiction is seeing-in plus intention/causality. The cube depicts an object as cubical, we see a cube in the picture, and we experience the outline shape in the picture as matching a cube in outline shape. How? EROS allows for multiple points of view of an object in a depiction, and this can be taken to be one such case of multiple perspectives (and EROS is still necessary for seeing-in). In the illustration of three outline shapes of parts of a cube, only the first is depicting a cube: there is no indication that the other two (the irregular shaped outlines) are intended to depict a cube. In fact, as the outlines are presented here, they are intended to not depict a cube (or they would be poor examples for Lopes’ argument). Similarly, for the outline shading on a face, the last picture is not intended to depict a face (and it does not). The “Photograph of a Dalmatian” case is a bit different. The picture intentionally leaves out significant portions of the outline shape, which are provided by the percipient (via Gestalt). The viewer never sees the outline shape of a dog, but rather creates it in their mind (infers the presence of such a shape given the parts of the picture’s surface, and the percipient’s previous experiences of outline shapes of dogs). This picture is not made with the intention of the dog being seen: the point is to not see the dog in the picture, but to infer the resemblance to the outline shape of the dog in the picture (and from there to experience the resemblance to the outline shape of the dog in the picture). So the conditions for depiction are not met (we are not intended to see the dog). In sum, these cases can be argued to be by and large not depiction (due to lack of intent) or to be cases of multiple perspectives. 
No circularity seems to be involved, and it seems that EROS can still be said to be sufficient and necessary for seeing-in.

Lopes, D. M. (2005). The domain of depiction. In Matthew Kieran (Ed.), Contemporary Debates in Aesthetics and Philosophy of Art, pp. 160-175. Oxford: Blackwell.

Mod Me Baby, One More Time

One of the great things about many popular video games (especially games in a series) is that modification of the game itself is possible, either through cracking or through in-game allowances for modification. For instance, the violent video game GTA IV has a modding community that has grown up around it, and mods include playing as SpongeBob or R2D2 and driving a birthday cake. These games have their own worlds, their own characters, and their own plots, but players have re-envisioned the world, characters and plots to fit their pleasure. If games are learning environments (which I think they are), one of the things players are learning is how to change their learning environments. Here we see new media skills in action. Hacking and cracking these games require skills and know-how, and often result in communities of practice (not just around the game itself, but in modification of the game). New vocabularies spring up, new popular twists arise, and users become creators. Now as game makers, players are embodying their skills and knowledge in their modifications. Playing through a game often leaves very few digital artifacts. Creating a game is all about the artifacts. Mastery is visible. Game modding often gets a person to the highest level of thinking in Bloom’s taxonomy. Pulling off a good mod is personally satisfying and publicly rewarding.

Distilled from games, what can we say about modding? Successful modification of our environment, identity, and motivations/directions is a part of growing up. What affordances are there for modification in learning? When we are able to have control over where we learn and in what manner, when we become the teacher or identify with what is being taught, and when we change our mind or surge toward something from inner compulsion, that is a successful learning mod. But learning modification requires free will. And not all who are allowed to mod end up modding. A person must choose to be a modifier. They must step into modification. They must choose to move from situated learning (as in video games) to resituated learning (as in video game mods). As the learner resituates the learning, they may fail miserably at the tasks before them. But given support, encouragement, and teaching regarding how to succeed (just in time and as needed), they will mod their learning, and hopefully mod their lives. Beyond gamification of learning we might think about focusing on the modification of learning. And modification of modification of learning…

Genuine Rational Fictional Emotions

In “Genuine Rational Fictional Emotions,” Gendler and Kovakovich (2006) seek to resolve the “paradox of fictional emotions.” The paradox goes like this: 1- we have genuine rational emotional responses to specific fictional characters and situations (the response condition); 2- we believe that those characters/situations are fictional (the belief condition); but 3- we can only have genuine rational emotional responses toward characters/situations if we believe they are not fictional (the coordination condition). Each of these claims is plausible at first blush. For instance, I can respond to Frodo’s plight in Mordor with genuine rational sadness, I can believe that Frodo is merely fictional, and I am not feeling genuine rational emotions for a three-foot-high furry-footed curly-haired person (because I believe Frodo is fictional, and so I cannot feel genuine rational emotion for the poor guy). But these three conditions cannot all be true simultaneously.

The solution hinges on the question of whether emotions about actual characters/situations are similar enough to emotions about fictional characters/situations to warrant considering them “two species of the same genus” (p. 243). The differences between these two kinds of emotions (dubbed here “fictional emotions” and “actual emotions”) are related to subject matter (real or fictional) and to motivation (I do not respond to Frodo in the same way as I would if I actually met a real hobbit in such tormented circumstances). Earlier resolutions of the paradox posit that fictional emotions are not genuine, or are not rational (addressing the response condition), or that we lose track of our own belief (addressing the belief condition).

Gendler and Kovakovich deny the coordination condition, and do so partially on the basis of recent empirical research which suggests that autonomic responses (response behaviors linked to the involuntary nervous system) help people in practical reasoning by the following process: we imagine consequences of our actions, which activates emotional responses, and these become reinforced to the point of automatic responses which help us behave rationally (based in part on these automatized responses). So autonomic emotional responses tied to future circumstances (“what if” scenarios) may help us behave rationally. Automatic emotional responses to imaginary events are a part of rationality. Further, when we fear actual future events, these emotions are genuine, even though the events have not happened (and may not happen). So we can have genuine rational emotions about things that do not exist. Concerning the belief condition, with optical illusions we may perceive and respond to things we do not believe. This is because we have automatized our responses to stimuli in such a way as to act subdoxastically (without requiring belief). If this is true, we may have genuine rational emotions toward fictional characters/situations without needing to believe those characters/situations are real (the emotions occur subdoxastically). Without such emotional engagement in fiction, we would be limited in our capacity to behave rationally (we would be limited to our own narrow real circumstances for building up autonomic responses).

This proposed solution is both elegant and convincing, though there are several potential weaknesses. First, the cases given for subdoxastic responses may turn out to be doxastic (but be false beliefs that are overturned by further evidence). When I get near a window in a high-rise apartment, I believe I will fall to my death (I am afraid of heights, and of falling to my death). I also have other beliefs that outweigh this false belief, and which sometimes allow me to stand near the window and enjoy the view without a response of fear. It is not irrational to hold two beliefs at the same time that are contradictory: it is irrational to still hold both after evaluating the merits of each (and recognizing they are incompatible). Second, the force of the argument depends upon our acceptance of the idea that the similarities between actual and fictional emotions are more striking than the differences, but what if we were able to show there are additional differences between the two? For instance, fictional emotion is a source of pleasure (even when the emotion is fear or sadness), which we derive in part from knowing that the fiction is not real. If actual and fictional emotions are indeed different, and if our emotions are not subdoxastic in the case of responses to fiction, then the argument presented here may fail to convince.

Gendler, Tamar Szabo, & Kovakovich, Karson. (2006). Genuine rational fictional emotions. In Matthew Kieran (Ed.), Contemporary Debates in Aesthetics and the Philosophy of Art (pp. 241-253). Malden, MA: Blackwell.

Using a TV as a Hammer: From Entertainment to Usability

I am pursuing research in the direction of testing the pedagogical usability of popular off-the-shelf video games (that are not intended to be educational). Pedagogical usability is basically the set of affordances a tool has for use in learning (or as a learning environment). One question arises from this research, and I have not fully wrapped my mind around it: what business do we have assessing the use of entertainment-centered video games for their educational utility? Is this like measuring the aesthetic features of PhD candidates?

And when we use tools like blogs (intended for journaling self-reflective thoughts in a public fashion online) for educational purposes, are we misusing the tool? When we use a tool like a television to hammer in a nail, are we abusing the tool? Similarly, in my research I will need to address this important problem. Am I suggesting through my research that we should take every fun and interesting thing in the world and use it for our own purposes, against the intents of the creators of those tools? Ought we to misuse and/or purposefully thwart the true intent of a tool if we perceive our own purposes for the tool as more important to society?

Popular video games are generally not created to be educational, or to be used in education. We may learn new media skills using them, but that is typically not their purpose. What right do we have to judge them for something they don’t even intend? Or to use them against their intended purposes?

Here might be one way out of this dilemma: if we view this problem from a holistic standpoint, we might be able to understand the several major elements at play in this problem, and to create a model for synthetic artifactuality to apply to these kinds of situations. First, we have creators/designers of tools, users of tools, and the tools themselves. Second, we have interactions: users interact with tools in ways that the original designer intended. But they also use them in new and unique ways not intended, sometimes with better results than what was intended with the original design. When this beneficial use occurs, we may see this merely as chance, or as a second designer in action: the user. The user comes to the tool and the way in which it is designed, sees its intended purposes, understands its affordances, and synthesizes his or her own thoughts about possible affordances of the tool with those of the original designer. This new design is then tested as the tool is used for the new purpose. We see old tools put to new uses all of the time, so this is not a new idea. The question is the benefit of the new use over the old, and perhaps also the possibility of negative effects on the tool itself or on its ability to fulfill its original purposes.

Are video games negatively affected by their use in educational settings? Maybe. Maybe they don’t look as cool anymore, or as enjoyable. After all, if motivation is supposed to rub off of video games onto educational content when used in an educational setting, could not negative connotations of formal schooling rub off on games as well? Is it a possibility that video games might be less entertaining if we are worrying about how they are educating? It seems like a live possibility. Might the act of assessing them against their designers’ intentions cause them to become less valued in their original state (or for us to undervalue their original intentions)? I sure hope not. Let’s hope that we don’t ruin these tools. More important still, let’s hope that we don’t ruin the learners.


Where is all of this blood coming from?

The causal link between aggression and video game violence is an interesting topic. Craig Anderson (along with Karen Dill) has done a lot of empirical research in this regard, and suggests that exposure to violence in video games increases aggressive thoughts and behavior in both the short and long term (2000). The explanation for this is that continually playing violent video games actually teaches a person something: the gamer is learning, rehearsing, and reinforcing aggression-related knowledge structures (p. 775). They are learning a way to think and behave in the world. This is because video games are successful learning environments and exemplary teachers, a line of thought taken up recently by Douglas Gentile and Ronald Gentile (2008). Video games (even the violent entertainment-centered ones) follow many of the best practices of learning and instruction. Further, people who play a more varied assortment of violent video games are more likely to learn to be aggressive and to have aggressive thoughts, and players who play for the same amount of time but more frequently are also more likely to learn these thoughts and behaviors and to retain them long term (Gentile & Gentile, 2008). One question remains: where did all of the violence in video games come from?

Many violent video games are gory. We see blood, guts, brains, and body parts being splattered, spilled, exploded, ripped, sprayed on objects…the gore can even be seen dripping from the player’s avatar at times (like when a zombie takes a bite out of your head). Where is this gore coming from? Do 3D computer objects have blood, hearts, and brains? Do we need the gore for fun or entertainment? Further, do we need the violence? And where does this violence arise? Are video game designers an extremely violent or aggressive lot by nature? Are they nefariously trying to get all of the rest of us mere humans to kill each other so that they can take over the world? They are teaching us to be more aggressive: is this intentional? If not, does this unintentional (but highly successful) learning occur in a vacuum: where does the violence originate?

It would be interesting to study whether designers of violent video games are more aggressive individuals than designers of non-violent video games. I would venture a guess that they are not. I would further surmise that the violence arises from the genre, and the genre exists because people buy those games, and people buy those games (in part) because they are cognitively aggressive and enjoy pretending to perform violent actions. I know that is why I like some violent video games. Violent video games don’t necessarily teach us violence, but rather aggression. But are we learning aggression from video game designers, or are they learning it from us (and mirroring back to us what we want to see)?

Another related question: are players of violent video games more likely to actually cause blood to flow than players of other kinds of video games? What about other activities that we think of in a positive light, like sports? Are football, basketball, and soccer players more likely to be aggressive than people not frequently engaging in these activities? Are athletes by and large taught to be more aggressive by engaging in aggressive thoughts and behaviors in their sport? Yes (Oproiu, 2013). Taking this further, are athletes more likely to cause physical violence to other people than non-athletes? And are they more likely than violent video game players to draw blood, to commit violent actions? Well, yes. This happens on the field quite often. Why are we engaged in these activities (sports and violent video games) that we know affect our levels of aggression?

I think because they are fun, and because we think that aggression is not necessarily negative. Violence in both sports and video games transpires in a ruled environment: there are rules governing the use of violence, who can commit violent actions, and what (and who) can properly have violent actions perpetrated against them. There is unlawful violence and negative aggression in both sports and video games, allowing for proper and improper (or disallowed) violent behaviors. The question is “How permissive are the rules, and do the rules map well onto real life?” In other words, is this an acceptable thought or action in the real world? If I learn well from what I am doing (in sports, video games, and elsewhere), and I transfer that knowledge and those skills, will I (and the world around me) be better? How can I best reflect on my practices (even my virtual practices) and critique my thoughts so that I transfer the good and leave the ill-mapping thoughts and behaviors where they belong (in the game)?

Anderson, Craig A., & Dill, Karen E. (2000). Video games and aggressive thoughts, feelings, and behavior in the laboratory and in life. Journal of Personality and Social Psychology, 78:4, pp. 772-790.

Gentile, Douglas A., & Gentile, J. Ronald. (2008). Violent video games as exemplary teachers: A conceptual analysis. Journal of Youth and Adolescence, 37, pp. 127-141.

Oproiu, Ioana. (2013). A study on the relationship between sports and aggression. Sport Science Review, 22:1-2, pp. 33-48.


Pedagogical Usability & Video Games: Turning Fun into Learning, While Learning if Fun is Actually Good for Learning

I have been thinking a bit about pedagogical usability as of late, as I am prepping for presenting a paper on pedagogical usability at the 2013 E-Learn conference in Las Vegas. Petri Nokelainen and others have come up with evaluations of pedagogical usability for online courses, learning management systems, knowledge management systems, wikis, and websites, and my presentation (with two friends of mine from Missouri University) is a pedagogical usability evaluation of a digital library system, a unique application for this kind of evaluation. Pedagogical usability is the affordance in a system for (or presence of): learner control, learner activity, cooperative/collaborative learning, goal orientation, applicability, added value, motivation, valuation of previous knowledge, flexibility, and feedback. These components go hand in hand with usability of the normal technical variety (what helps you understand and get through a website or designed experience efficiently and effectively). But pedagogical usability has a constructivist pedagogical leaning, and seeks to assess a system’s pedagogical affordances in this regard.
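As an illustration only, the ten components above could be operationalized as a simple scoring rubric. The sketch below is my own hypothetical construction: the 1-5 rating scale, the equal weighting across criteria, and the function name are assumptions for the sake of the example, not features of Nokelainen's published evaluations.

```python
# Hypothetical rubric over the ten pedagogical usability criteria listed above.
# The 1-5 Likert scale and simple averaging are illustrative assumptions only.

CRITERIA = [
    "learner control", "learner activity", "cooperative/collaborative learning",
    "goal orientation", "applicability", "added value", "motivation",
    "valuation of previous knowledge", "flexibility", "feedback",
]

def score_tool(ratings: dict[str, int]) -> float:
    """Average an evaluator's 1-5 ratings across all ten criteria."""
    missing = [c for c in CRITERIA if c not in ratings]
    if missing:
        raise ValueError(f"unrated criteria: {missing}")
    if any(not 1 <= r <= 5 for r in ratings.values()):
        raise ValueError("ratings must be on a 1-5 scale")
    return sum(ratings[c] for c in CRITERIA) / len(CRITERIA)

# Example: a hypothetical game rated 4 on every criterion.
example = {c: 4 for c in CRITERIA}
print(score_tool(example))  # 4.0
```

A real instrument would of course weight criteria, use multiple items per criterion, and aggregate across evaluators, but even a toy rubric like this makes the idea of "assessing pedagogical affordances" concrete.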

I have been reading through some of the literature on the use of video games in education (for instance, see Leonard Annetta’s 2008 Video games in education: Why they should be used and how they are being used), specifically the use of popular entertainment games for learning purposes (like the Civ series). Educators are using these games, and there has been extensive research on how they are using them (and theories about why they are using them). But I have not found research that offers educators ways to assess how useful these games are for pedagogical purposes. How would you even test that?

Pedagogical usability evaluations may be one way forward. If we take the top popular video games being used in education for learning purposes (maybe just the top 3), analyze how they are being used, and then assess their pedagogical usability, we could offer both assessments of those specific games, and models for pedagogical assessments of any other games in the future. Educators would then have a tool in hand to test the latest-greatest game and see if it will meet their specific pedagogical aims. Omar, Ibrahim, & Jaafar (2011) have presented a very brief outline of doing this with educational computer games. I am suggesting something further: assess the pedagogical usability of games that are not meant to be used in education at all. Basically, game designers make a game they intend to be played for fun, educators use it for learning, and we test it to see if that is a good idea with that specific game (given its pedagogical usability, that is, its usefulness for educational purposes).

Omar, H. M., Ibrahim, R., & Jaafar, A. (2011). Methodology to evaluate interface of educational computer game. 2011 International Conference on Pattern Analysis and Intelligent Robotics.


Aesthetic Anti-Intentionalism: Art Tells Its Own Story

In “Art, meaning, and artist’s meaning”, Daniel Nathan (2006) offers a defense of anti-intentionalism, explains various objections that have been leveled against the classic version, and presents a re-formulation of anti-intentionalism, with several arguments laid out in its defense. Nathan’s argument is set in the context of art interpretation, interpretation that (1) discovers or discloses meaning, (2) pertains to understanding “artistically pertinent properties” of a work (p. 282), and (3) is the ordinary activity of the everyday appreciator of art (as well as the professional critic). The question is simple: are an artist’s intentions relevant (and/or limiting) in our interpretation of her work? We will here look at how anti-intentionalism is characterized, and at Nathan’s central argument on its behalf.

The classic anti-intentionalist argument is straightforward. An artist intends something with her work. These intentions may be successfully embodied in the work, or not. For instance, an artist may intend a poem to be a treatise on death, but the work is usually interpreted as describing a birthday cake, because there are no indications in the work that suggest “treatise on death” as a possible intention. In this case, the artist’s intentions are not successfully embodied in the work. If the artist is successful in embodying her intentions (for instance, if we readily perceive her intentions regarding “death” as encoded in the work), we do not need to look elsewhere for meaning. If she is not successful (as in the birthday cake example), when we look at things other than the work itself (like the artist’s notes about writing a “treatise on death”), we are being taken away from the art and toward what is irrelevant: we end up reading into the work rather than reading the work (eisegesis, rather than exegesis). It is no longer the art that is speaking to us or that we are interpreting: it is the artist, or the context, or something external to the work. While artwork may be evidence of original intent (p. 287), original intent found outside of the encoded work (such as in an artist’s interview about a poem) is irrelevant to our interpretation of the work. At its heart, anti-intentionalism emphasizes what the artwork means rather than just what the artist means.

Nathan offers a slightly different approach. His anti-intentionalism is built on distinctive features of art that seem to be different from ordinary experience (and that make intentionalism seem out of place): causal explanations do not help us interpret art, our response to representational objects in art is often different from our response to the objects they represent (we fear a stalker, but we do not fear a painting of a stalker), we are looking for aesthetic excellence in art but not in ordinary objects, etc. Nathan’s central proposal, linked to the idea that art is different from a conversation and from ordinary experience, is that there is a paradox of intention: works of art are intended by artists to be publicly appreciated and interpreted. An artist intends for his sculpture to stand on its own two feet, so to speak, and explain itself to the world (without any outside help). There is a hierarchy of intentions going on here: the artist intends his work to mean something (and this meaning is relevant to our interpretation in the intentionalist account), but the artist also intends for his work to not require outside information for its interpretation: it should tell its own story. The first intention is a first-order intention, the second is a second-order intention. And the second-order intention (for the work to speak for itself) is claimed to be a “necessary presupposition of the artistic endeavor” (p. 293), whether the artist is aware of this or not. The force of Nathan’s argument is this: artists intend their work to encode the artists’ intentions: if we are charitable, we must interpret those intentions from the work itself (rather than deriving those intentions elsewhere). This is not meant to downplay the importance of original intent or context, but to argue that the work itself is a unity, and is the public embodiment of the meaning. After the work is sent out into the world, it succeeds in telling the story the artist intended, or it fails and tells a different story; either way, according to the anti-intentionalist, it tells its own story. Any external story is just that: an external story (and not the story of the work itself).

Nathan, Daniel O. (2006). Art, meaning, and artist’s meaning. In Matthew Kieran (Ed.), Contemporary Debates in Aesthetics and the Philosophy of Art. Malden, MA: Blackwell.
