A Critique of Bertrand Russell’s Why I Am Not a Christian

Bertrand Russell, in a lecture titled “Why I Am Not a Christian,” states the following:

“We want to stand upon our own feet and look fair and square at the world—its good facts, its bad facts, its beauties, and its ugliness; see the world as it is, and be not afraid of it. Conquer the world by intelligence, and not merely by being slavishly subdued by the terror that comes from it. The whole conception of God is a conception derived from the ancient Oriental despotisms. It is a conception quite unworthy of free men. When you hear people in church debasing themselves and saying that they are miserable sinners, and all the rest of it, it seems contemptible and not worthy of self-respecting human beings. We ought to stand up and look the world frankly in the face. We ought to make the best we can of the world, and if it is not so good as we wish, after all it will still be better than what these others have made of it in all these ages. A good world needs knowledge, kindliness, and courage; it does not need a regretful hankering after the past, or a fettering of the free intelligence by the words uttered long ago by ignorant men. It needs a fearless outlook and a free intelligence. It needs hope for the future, not looking back all the time towards a past that is dead, which we trust will be far surpassed by the future that our intelligence can create.”

Reading Russell is always enormously fruitful, not only because of his beautiful use of language (he won the Nobel Prize in Literature), but also because of the cogency of his arguments and his great influence on so many philosophers and thinkers after him (as well as on popular culture). This particular essay is one of his most optimistic writings, and it provides an outline that Christopher Hitchens, Daniel Dennett, Richard Dawkins, and others have greatly elaborated, which I think makes it foundational, to some extent, in current Western thought. I do not find this piece as logically strong as his The Problems of Philosophy or his Introduction to Mathematical Philosophy. As a side note, I must say I find it funny that he tells us not to look back all the time toward a past that is dead, and speaks of “words uttered long ago by ignorant men,” yet he wrote an extensive History of Western Philosophy, which I consider one of the best ever written. That tome is evidence that he thought neither that all men of long ago were ignorant nor that history was a fruitless subject of inquiry (indeed, looking back to history even helps us progress as a society).

Russell’s claim that the concept of God is derived from Oriental despotisms seems to me historically inaccurate. People all over the world and throughout history have believed in gods, and even in a supreme God, regardless of their cultural heritage, their political and societal organization, or the “progress” of their civilizations; think of the Amerindians or the Celts, who held these conceptions outside of any Oriental despotism. If we push back in time to the beginning of theistic religion, we go back before written history, and if all humans everywhere had a common origin, we come to a time where (if we ignore later histories and accounts) we cannot know which came first, despotism or belief in a god. If we do not ignore later histories and accounts, belief in gods came before despotism. This thought of Russell’s seems to me to be based more on his view of human progress (that we are becoming more enlightened and thus ought to give up our outmoded beliefs from the past) than on the actual history of human civilization.

The idea of god not being a concept worthy of free men also seems to beg the question—what do we mean by “free men”? Do individuals in the classical Greek and Roman world not count as free men (and did they not by and large believe in gods)? It seems to me that if a man is free (and his intelligence is free), he is free to believe in a god or not to believe.

If by “free men” we mean people who do not have this concept of god, then Russell is perhaps not a free man (he does, after all, have the concept of god, as shown by his words; so either he is doing something unworthy of a free man by having this concept, or he is not a free man). If he is a free man who is having unworthy thoughts (thoughts concerning God), he should stop having those thoughts. If the conception that is unworthy is that we are sinners who should fear god, then it is not the concept of a god that is unworthy of a free man, but that particular concept of a god (one who judges sinners). But most believers in gods throughout history (including today) do not have this concept of God—Russell is concerned here primarily with monotheists in the Judeo-Christian and Muslim traditions. But why actively disbelieve in the other gods, or in a supreme god other than the god of Jews, Christians and Muslims? More to the point, why is it more free to disbelieve?

Russell here posits what he terms “intelligence” as what is needed for progress in society, for it to be less bad, less fearful, and so on. But why must intelligence be antithetical to belief in a god? Most philosophers, mathematicians, physicists, and the like have been theists of some variety—were they not intelligent? If there actually happens to be a god or gods, then the intelligence of these men and women would not be greater for denying it (you would think them more intelligent for believing in a god if he did in fact exist). “Free intelligence” does not entail lack of belief in god. Intelligence is a rhetorical device here—the thought seems to be that you either have free intelligence or you have belief in god (but not both). Russell is not merely saying that we should push for free intelligence and free inquiry, with belief or disbelief in god included as part of that freedom; he is saying that the two are somehow opposed to one another, so that our fear of a god will logically disallow us from free intelligence.

However, Socrates had a free intelligence (and was killed for it), yet he believed in a god (and feared him). We might formalize this as an argument:

  1. Socrates was intelligent.
  2. Socrates’ intelligence was free.
  3. Socrates believed in a god.
  4. Socrates feared the god that he believed in.

Therefore, free intelligence is possible alongside belief in a god that is to be feared.
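
Put semi-formally (a rough sketch of my own, not Russell’s notation: let s name Socrates, with I for “is intelligent”, F for “has free intelligence”, B for “believes in a god”, and D for “fears the god he believes in”):

  \[ I(s),\ F(s),\ B(s),\ D(s) \;\vdash\; \exists x\, \big( F(x) \land B(x) \land D(x) \big) \]

The inference is simply existential generalization: one actual case of free intelligence joined to fearful belief suffices to show that the combination is possible.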

If all of the premises above are true, then the conclusion must be true as well, and in that case Russell’s claim is false (or I have misunderstood his argument, which is always a possibility). To overcome this problem, Russell would have to either a) retract the claim that free intelligence and belief in god are antithetical (the claim he appears to make here), or b) reject one of the premises above. If I had to guess, were he to take this second route he would dismiss premise 2 (Socrates’ intelligence was free). But in that case, note that Socrates’ belief in a god caused him to be charged with atheism (because of what he believed about the supreme god, even though he seemed still to believe in the Greek gods), and thus he showed the freedom of his intelligence by believing in a god (and even by fearing him). I do not think there is a way out of the premises above, or the conclusion—free intelligence might allow for belief in a god (and even fear of that god), given the example of someone like Socrates.

One problem is that if a god exists as a fact, and someone believes that fact, they may be intelligent because they believe that fact—free intelligence is not freedom from facts. The question is whether or not such a god exists: if he does, and a person seems to be justified in believing it, their free intelligence should be allowed to lead them in that direction. If a person seems to be justified in disbelieving in a god (whether a god exists or not in fact), their free intelligence should be allowed to lead them in that direction. Free intelligence, the existence of a god, and belief in that god are not paradoxical even in the same person, and I think they entail no logical contradictions.

But Russell seems (throughout the essay) not merely to focus on individual freedom of thought, but on societal allowance of “free intelligence.” On that point, I wholeheartedly agree with Russell—I think a better society would result from people allowing for divergence of belief in a god and encouraging free thought on that and other issues. We ought to question theism, just as we ought to question atheism and skepticism. We all ought to believe what we seem to be justified in believing, and to disbelieve what we seem to be justified in disbelieving. At that point, we are left with the facts and our justifications—these are the rightful data upon which free intelligence may then work. Then the argument is no longer about intelligence, fear, and the like, but about what actually seems to be the case: what are our justifications for believing in a god (or for actively disbelieving, as does Russell, or even for passively not believing, as seems to be the case in some forms of agnosticism)?

If there is in fact a god, and if we are indeed justified to believe that, and if in fact this god is a god that should be feared and that will judge all according to their actions in this life, then free intelligence might be free to believe this and act upon it. Free intelligence might not be free from fear, it should not be free from facts, and it need not be free from belief in god.

 

What is Teaching? What is Learning?

What is teaching, and what is learning? Is teaching the conveyance of information, the banking of knowledge (both knowing how and knowing that) into additional human minds and lives? Is it mere replication of stored knowledge and skills, authoritatively passed on to the next generation? Are we uploaders?

Is learning all about downloading the correct information, the replication of outside knowledge and skills inside one’s own life and mind? Is it the receipt of that which is passed down? (If so, couldn’t we say that computers learn in some sense?)

Socrates thought that maybe knowledge was recognition of the truth, and that teaching was the asking of questions that drove a person to seek the truth. Teachers are midwives who help learners to give birth to the truth in their own lives (and who help them to recognize when what is being birthed is not truth). Do questions and conversations count as teaching and learning?

Jesus’ teaching seems to show the use of questions, as well as story-telling. Perhaps we might ask if Jesus’ pedagogy was a mix of:

  • Example (he healed, and had his disciples heal; he taught, and had his disciples teach; he loved and served, and had his disciples love and serve),
  • Enigma (Jesus was always saying crazy things—why?), and
  • Embodied simulation (a.k.a. storytelling, which brings the hearer to embody themselves in the midst of the story world and “live” the story in order to understand the point—note that his stories were not mere anecdotes from his own life, but were diverse in their settings and characters, often centered on people other than himself, and usually included surprising twists on common life).

Jesus’ teaching was often about full buy-in: to learn was to love and give one’s life, just as to teach was to love and give one’s life. In this light, are teachers and learners fellow slaves in God’s kingdom?

Paul seemed to like to teach using rhetoric (purposely persuasive speech patterns) from the truths of previous writings and accepted thoughts, through the lens of Jesus, and toward good works. Did he see learning as a kind of basking in the truth, and teaching as a kind of pouring out of the teacher? To learn is to be dipped like bread into wine, and to teach is to provide the wine? What is your wine, and how can you help learners to soak in it? (And do you have a good cheese to pair with it?)

Paulo Freire, a modern theorist of pedagogy, speaks of learning as transformational and communal concept-making. Learners are participants in creating their world, and teachers act as liberators from oppression (liberators through assisting learners to discover their world in order to transform it). Learning is never to be a mere acceptance of the way things are, but is instead to be always a challenge to make things the way they ought to be. Should we be so bold as to question our world with students?

What is the ultimate result of teaching or learning? Students who can pass tests, write papers and take notes? Replicates who think and act like professionals in a particular field? Answers to life’s great questions? Transformation of persons and societies? Death and cross carrying? Glory and immortality? All of the above?

On Time Travel

Deterministic fatalism is the view that all events are determined or predetermined by prior events, usually with the result that free will is an illusion. Anything that happens would always have happened that way, and no (material) event can occur that was not already determined once the world’s order of cause and effect was established.

Whatever will be, will be.

I will argue here that deterministic fatalism has more than a single variety. Most humans do not live or act as if they believe that their present actions or decisions are the necessary results of a long line of necessary cause-effect chains of actions and events. It seems, instead, that most believe that their actions and decisions make a difference. However much this may seem necessary (that we have at least limited free action as agents in the universe), many people nevertheless subscribe to a limited form of deterministic fatalism—namely, with regard to past events. While we may allow for options in the present and/or future, the past is thought to be “set in stone,” fatalistically determined even.

Whatever was, was.

Let us assume a fictional state of affairs to get at the heart of this argument (and have some fun while we’re at it). We have somehow invented a time machine and traveled to the future. No really, this happened, and it was great. We meddled with events in the future and came back to our own original time. Is anything different? If you are not an absolute deterministic fatalist, then you might answer “No, nothing has changed.” Being in the future was not much different from affecting things in the present. It is present and past events that help to determine the future state of events—the future does not determine the past or present.

However, let’s say that we travel to the past and influence some past event by our presence there. Now we travel back to our original time. What effect will our actions have on our original present state of affairs? You might say that some kind of “butterfly effect” would occur: tiny ripples in the past (like a butterfly being crushed) might lead to large consequences in the present and future (like a different Prime Minister in power). Or you might think a paradox would occur, such as killing one’s earlier self or a grandparent, or accidentally causing the time machine itself to have never been created. You might also respond that an alternate history would be created, separate from the original “time stream,” in which a different set of affairs transpires in the present due to the change in the past (through alternate universes, multiverses, or alternate causal time streams).

As a Christian, you might think, instead, that changes cannot occur in the past for the time traveler because of (1) God’s providence, (2) the immutability (unchangeable character) of the past, or (3) the impossibility of direct access to the past (for theological, historical, or physical reasons).

But if we are free to make choices today (and tomorrow) that affect the future, even given providence, then why not in the past?

And is the past fundamentally immutable (unchangeable)? What makes it so, and must it be that way? Further, how do we know that the past cannot be changed? After all, all of our data up to now seems to suggest that the past has been changed over and over again—in the past. And if our argument is that the future cannot change the past, that argument actually falls outside the purview of our case of a time traveler to the past, since the time traveler makes changes in the past from the past (i.e., her future actions do not affect the past; rather, her actions in the past affect later events).

If we argue that direct access to the past is not possible (on any grounds), then we rule out the time traveler by necessity. It is not truly time travel, pure and simple (it seems to me), if our time traveler is not allowed to change or interact with anything in the past. I think the real basis of this thought (of no direct access) is a belief in deterministic fatalism about the past. I think this is a variety of absolute deterministic fatalism, and it may be that belief in one requires belief in the other if either is to be coherent.

For instance, may not each argument made on behalf of deterministic fatalism of the past variety also be used on behalf of absolute deterministic fatalism (God’s providence, the present is immutable, direct access to the present is not possible, etc.)? Would this not force (if a person were believing rationally and coherently) acceptance of the absolute variety if the arguments of the past variety were acceptable? (Accept one and you accept both?)

At least this seems to me to be the case if a time traveler goes to the past. If it is the case that time travel is not possible (and not merely improbable), then deterministic fatalism of the past variety need not even come up. Of course, since the problem has arisen (if only in our argument here), even without the possibility of time travel, we may as well discuss it.

Do you believe the past is unchangeable? Is it fully determined and uneditable? What if it were not? What would that mean? How might the world be if we could edit the past? (Not merely edit the history books, but history itself?) Maybe we can! We edit the past in many moments throughout every day. Our actions are decided in the present, but their outcomes lie in the future at the moment we decide; those outcomes then pass into our past, and we continue to act based on our edits of the past.

Take a cup of hot coffee, for example. I make a decision to drink it right now, knowing full well that the decision is an event in the present whose outcome lies in the near future, an outcome that will already be past by the time I perceive that I have taken a drink; I will then continue to drink based on feedback about how good the coffee tasted in the past. I edited the past by taking a sip of the strong java (the past event of tasting the coffee was the result of an edit I made). While deciding, I was in the present deciding a future event, and when I contemplated drinking more, I did so based in part on my past decision that affected my past (I had already drunk some, after all). I edited the past by deciding on the future in the present.

And as a time traveler I would be in the present “in the past” (for instance, if I went back to my own birth). For me, at that time, it would be my present. For instance, I could hold my own infant hand as a present event. I would decide in the present to do an action in the future (hold my baby hand), and this would edit the past, perhaps in just the same way as I edit my own past when drinking coffee.

You might ask “What if you accidentally slipped and landed on your baby self, and you killed your baby self in the past?” Paradox! (And a very bad day.)

Poof! I cease to exist. But then again, I never was able to kill my younger self if I died before I was able to go back in time, so I never died; but then I went back in time and killed my younger self, and on and on. Maybe this describes a time-loop? But what if deterministic fatalism of the past variety is not the case in our world? If changes can be made in the past (if it is not wholly determined), then changes made there need not irrevocably alter the present or the future. So a time traveler might accidentally kill their younger self, yet might not cease to exist in the present (and future) as an adult. They would just be dead as a baby. This might not cause a time paradox, even if they never grew to be an adult able to come back and accidentally kill themselves.

Here is the line of reasoning:

  1. Adult time traveler goes back in time
  2. Traveler kills younger self
  3. Younger self never grows up to be time traveler

From the point of view of the time traveler, however, here might be what happened:

  1. Time traveler is born
  2. Traveler grows up to invent machine to travel to the past
  3. Adult traveler goes back in time
  4. Traveler accidentally kills younger self
  5. Traveler continues to live their life in the past as an adult (or maybe they go back to their original time)

Maybe there is no time traveler’s young life in between their death as a baby and their future adult life. Maybe the past can be changed for the traveler so that their intermediate state was no more. But then who invented the time machine? Well, the time traveler seems to have invented it. How? He did in the past, the past that was edited. Now that he is back to the present, perhaps he did not exist at the time of the invention (after all, he died years earlier), but perhaps his non-existent self (or non-self) invented it. Or maybe he didn’t. Even given no inventor, and no time machine invented (in the future), still the time traveler might come to the original time in his non-invented time machine. We could posit a miracle if we wanted to, or we might argue that causality is still satisfied through earlier unedited history. Every causal link might still exist (in the edited past).

Again, if we have the choice to edit the future in the present, why not the past (by going back and editing the future from there)?

A further thought experiment: what if the time traveler went to the past and kidnapped his younger self, brought his younger self to his original time and dropped him off, then went back to the past and lived there as an older self (his younger self being stuck in his older self’s future). What happens if his past self uses the time machine after a while and travels again to the past in it? He can live his life over again (and edit it differently each time), sending his younger self to the future each time. What happens to his original younger selves in the future? Apparently each of them lives a life as normal (except they are at later points in time than they should be) and each grows old (older than their kidnapping self who is now in their past). Multiply-instanced selves? That’s what we had when we travelled back in time to visit our baby selves in the past anyway, isn’t it?

Granted, this is confusing and ridiculous. But remember that even though this is a ridiculous mess of a problem, the question is not “Should we travel into the past?” but “Can we travel into the past?” If we can in any meaningful way, deterministic fatalism of the past variety must not be an option. We must be able to make decisions in the past that edit the past in some meaningful way, and I have outlined here a bit of what this might look like (if it were possible).

To the point: don’t invent a time machine (I’m talking to you, future self). You don’t want to have to live with those kinds of decisions as a future past self.

Identity & Identicality

Identity and identicality are two different things. I may have an identical twin who shares all of my physical characteristics, and yet who is not me. We are different persons. I have one identity, and he has another. However, is my twin identical to me in every way? Do we share every attribute? For instance, are we in the same place at the same time and share all characteristics in common? If so, I have no twin. There is only one of me. Exact identicality seems to always entail identity. People can identify me (mark my identity) based on my exact identicality with myself (if that even makes sense). If I look like a duck, swim like a duck, taste like a duck, etc., I must be a duck.

On the other hand, identity cannot always be said to entail identicality. I am not always identical with myself. For instance, I changed my clothes this morning, my mind this afternoon, and my haircut this evening, yet I remain me. I can be non-identical with myself at a different point in time. In fact, being in a different time means that I am now different from myself yesterday, even if everything else about me remained exactly as it was yesterday (if that were possible), because I am in a different time (a non-identical attribute); I am different relative to the external world. Even though my brain cells change (for instance through learning and growing, through new cell growth, and through loss of cells), I can continue to be me. I hold on to my identity through change.

So I can be identical with myself (I am), and non-identical with myself (for instance, myself from a different point in time) and continue in my identity.
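
In rough second-order notation (a sketch on my own terms: a and b name things, F ranges over attributes, and t_1 and t_2 index times), the two claims so far are:

  \[ \forall F\, \big( F(a) \leftrightarrow F(b) \big) \Rightarrow a = b \qquad \text{while} \qquad a = b \not\Rightarrow \forall F\, \big( F(a, t_1) \leftrightarrow F(a, t_2) \big) \]

Exact identicality entails identity, but identity tolerates non-identicality of the same thing across times.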

This idea might be fruitfully applied to our understanding of the Trinity. In the three persons of the Trinity, we have God the Father, God the Son, and God the Holy Spirit. God the Father, God the Spirit and God the Son all have the identity of being God, yet are not identical (they do not share all attributes in common). Many of the attributes of God are enjoyed by all of the persons of the Trinity, such as being eternal, but some others are specific to specific persons. For instance, Jesus underwent death. The Father did not (he is a spirit). In this they are not identical. This does not mean they are not one. It also does not mean that they are mere modes of one thing (or are parts of one thing). Jesus is all God. He is the exact representation of the Godhead bodily. When we see Jesus physically, we see the Father. Yet no man has seen God (that which is identical with God). Nevertheless, Jesus, who is said to sit at his right hand, has seen the Father (he who has the identity of God), and testified of him, revealing him to the world through his own person. In this, Jesus can change, and yet God does not change. Jesus is identical with Jesus, and is non-identical with Jesus (himself from a different point in time/space or with differing attributes), and he continues in his identity as Jesus. Jesus is also non-identical with God the Father, non-identical with the Holy Spirit, yet continues in his identity as God the one and only. So being God is not the same as being identical to God (because God can be non-identical to God). In this way Jesus can empty himself of the glory of God through kenosis, and yet be God. He can live in time and be timeless. Jesus in time is not identical with Jesus that is timeless, but Jesus is the timeless God, and Jesus is God in time.

The distinction between identity and identicality might also help us to understand ourselves. I am me (I share the same identity with myself, if that even makes sense). I am not always self-identical though, as we discussed above. Further, my spirit is me. He (me) is the spirit of Bob. My spirit is not identical with me (we do not share all properties in common, such as having material substance and dying in the future). I do have material substance and will likely die in the future. Yet my spirit is me. My spirit is fully Bob (it is me, my identity). The same goes for my physical body, which is me, is not a part of me (it is wholly me), is not identical with my spirit, and yet is truly me. When you point at my body, you can truly say “You are Bob”. My body has the identity of Bob. My body is not a part of Bob (a part of my identity), my body is Bob (it is my identity, though it is sometimes non-identical). If my body dies, I die. I cannot be said to die if my body is not me. If my spirit lives on, I live on. I cannot be said to live on after death if my spirit does not live on. If my body is resurrected, I live on bodily. I cannot be said to live on bodily if my resurrected body is not me.

The differentiation of identity and identicality helps us solve several other major metaphysical puzzles, for instance, Zeno’s paradox of motion, the cat and its fur (fuzzy identities) and identifying a cloud (where does it start and where does it end?).

One puzzle presents itself in this analysis: how different can a being be from itself before it is not itself? Well, we might answer (tongue in cheek) that a being can be as different from itself as possible and still be itself. It does not cease to be itself through loss of identical attributes. Or does it? If a shovel has had the handle replaced multiple times, the wood stem replaced several times, and the scoop replaced several times, and has had different owners, is it the same shovel as when first created? Most would agree that it is not identical to the original shovel. Does it keep its identity through all of these changes, or is there a point (for instance, when the last remnant of the original is replaced) that it ceases to be itself, and something else is there (for instance, a new shovel)? If it is still called a shovel, it retains at least one attribute of the original. Perhaps to make the original shovel not be, it must cease to exist in totality (no attributes shared with the original). If this is the case, is it even possible for things to lose their identity? (Can’t things always be said to retain at least one attribute that is identical?)

In admitting that identity might be lost only through loss of every identical attribute, are we still holding on to some form of “identity equals identicality”? Because in this, we seem to be saying that identity requires at least some identicality (though not necessarily total identicality).

On the other hand, identicality of some attributes does not equal identity (you are not me just because you are also human). Loss of all attributes that are identical might mean loss of identity, but this would also require loss of being, since one of the attributes of the original identity would include being. I suppose it should come as no surprise that if a thing ceases to exist entirely it also ceases to retain its identity.

I don’t actually believe in the existence of attributes or properties of objects as things in themselves. How might this change the conversation? I believe that what we describe as attributes or properties of objects are actually merely “the way the thing exists”, in its relative relationship with other things and itself. “Being” is not part of the way a thing exists. While “being” could be construed to be a property of a thing (that which makes it exist), I don’t believe in properties. Instead, in my framework “the way the thing exists” presupposes the thing’s existence in order to describe it. So if (as in the previous paragraph) a thing suddenly ceased to exist completely, we could not speak of “the way it exists,” since it does not exist any longer. So, in reiterating our previous thesis, identicality of some “ways a thing exists” does not equal identity (though identicality of all the ways a thing exists does entail identity). For a specific object, loss of all the ways of existing that are identical ways of existing might mean loss of identity (though not necessarily loss of existence). It would take a loss of all ways of existing (even those that are non-identical) in order to lose existence entirely. Why would existence cease when ways of existing ceased? If a thing does not have any way that it exists, it should also be said to not exist. For example, there is no way of walking for the person who does not walk. And if a person loses all ways of walking they should be said to not walk. Similarly, if a person loses all ways of existing, they do not exist.

This might be applied to hermeneutics as well. The meaning I get from a text might share identity with the meaning of the original author, though it might not be identical with the author’s meaning (it might not share everything in common with it). The meaning I get might share some (though not all) identical ways of being with the author’s meaning, and as such it retains its identity as the original meaning, while not requiring identicality with that meaning. I can understand without fully understanding.

Similarly with perception. I perceive the way a thing is (outside of me), and I believe it is a certain way (the way I just perceived it). For instance, I perceive the cake is hot (I see steam rising from it), and I am thereby justified in believing the cake is hot (in the absence of defeaters). I perceive the way the cake exists (hot), and I believe it is a certain way (hot). But in this case, let’s say that the cake is actually cold, and there is a hot cup of coffee behind the cake that is letting off steam (because it is hot). I mistakenly think the cake is hot, but I might still be justified in thinking the cake is hot. The reality of the cake (that it is cold) is not exactly identical with what I perceive (that it is hot). Yet my perception matches the identity of the cake with the cake in the external world (and the way it exists). How? The cake looks brown to me, and the cake actually does let off a brown hue in this light (my perception in this case is identical with how the cake exists). Something of my perception is identical with the way the cake actually exists, meaning the identity of the cake is perceived, even though my perception (and resulting belief) do not identically match the state of the world that is external to me. So the statement “the cake is hot” is false (it does not correspond with external reality—the cake is actually not hot), but I am still referring to the same cake nevertheless (though it is not identical with what I believe or what I perceive). As long as some one way of existing crosses the boundary from the external world to my perception, I have (at least imperfectly) perceived the identity of the thing (because I have perceived one of the ways that it exists).

This last example presents an interesting problem. What if I perceive a way of existing for the cake that is not possible for cakes? What if the identity of the thing is irreconcilable with one of the ways of existing that I perceive (and I go on to believe this perception)? For instance, I perceive that the cake is alive and walking around. It is not possible for a cake to be a real cake and also to be alive and walking around. That’s not a possible way of existing for cakes (at least, for what we mean when we say “a real cake”). In that case, the identity is broken. A real cake cannot be alive and walk, but I perceive a cake that is living and walking, and therefore I do not perceive a real cake. The identity of the real cake is lost entirely, not as the result of missing all identical ways of existing (as before), but only by the perception of one way of existing that is not possible for that identity. For instance, a human being cannot turn into a cloud. If that is true, then my identity as “human” is lost if I suddenly become a cloud (in other words, “I” do not become a cloud—rather, a cloud is where I once was, or in some other way shares identical attributes with me, though it is not me).

To sum up, identity and identicality are different things, and an appreciation (and analysis) of how they differ helps us to understand ourselves, our God, philosophical puzzles, hermeneutics, perception, and even cakes. Exact identicality entails identity (if it quacks like a duck…). Identity does not entail identicality (I can be different than myself). Some identicality is necessary for identity (there must be something that is the same). Loss of all identicality entails loss of identity (loss of every identical attribute entails loss of identity), but difference of identity does not entail the absence of identicality (two different persons may have many things in common). And finally, there are essential identical ways of existing that, if absent, entail absence of identity (for instance, there are some ways of existing that are not possible for humans).

On the Timelessness of an In-Time God

The following is a response to Roger E. Olson’s short online article An Example of Unwarranted Theological Speculation: Divine Timelessness (http://www.patheos.com/blogs/rogereolson/2015/02/an-example-of-unwarranted-theological-speculation-divine-timelessness/). Olson’s main thesis is that the idea of God’s timelessness is a flawed alien concept borrowed from Greek philosophy that has no basis in biblical Christianity. He claims that timelessness has no scriptural support, and that such a God could not possibly interact with creatures in time. Contrary to Olson, there is scriptural and logical warrant for the belief in a timeless God that interacts in time yet is not bound by it. This God can understand our time and interact in it, while also being eternally existent.

A Timeless God in the Book

Olson claims that the scriptures do not say or hint that God is outside of time. I present here a brief overview of what I think is compelling evidence to the contrary. Jesus is called the Alpha and Omega, the beginning and the end, the one who is, who was and who is to come (Rev. 1:8). By this, is it only meant that he lives long, and that he has always been? Or is this to be understood as a testament to God’s timelessness and in-timeness, since he exists in the future as well as the past? Again, one day is as a thousand years, and a thousand years are as a day (2 Pet. 3:8). A short amount of time is long to God. A long amount of time is short to God. Does this not speak of (or at least hint at) God’s being outside of time, or that time is a concept that does not bind God in his being? He is not a God of the dead, but of the living, for all live to him (Luke 20:38). All currently live to God. He is currently the God of Abraham, Isaac, and Jacob, who are apparently alive for him. Jesus states this in the context of the resurrection. Has the resurrection of the just and unjust occurred already? Perhaps it has in God’s perspective (he is currently the God of already-not-yet resurrected people). Jesus was before John the Baptist and Abraham, yet came after them (John 1:15; 8:56-58): before Abraham was (in the past), “I am” (in the present). He (Jesus) takes on himself many times the words “I am,” the one who is. He is YHWH, the God who is. The world and all in it will grow old and change, but God is the same (he does not change) and his years will have no end (Psalm 102:26-27).

Similarly, the creation event seems to point to God’s timelessness. “In the beginning God created the heavens and the earth” (Gen. 1:1). Does this mean that God also had his beginning when he created the physical universe (including time)? Or was time not created, but has it existed along with God for all eternity? From outside of scripture, we find evidence of the linkage of space with time: according to relativity theory, time is part and parcel of the physical universe. Time and space make up a continuum. If this is so, then (1) God comes into being with the universe (in the beginning), or (2) the universe has no beginning (in conflict with much biblical and scientific evidence to the contrary), or (3) God created time. God chose us in Christ before the foundation of the world (the creation of the physical universe) (Eph. 1:4), and perhaps Christ died before creation (Rev. 13:8). We existed for him to choose before anything existed. These things are spoken of as having taken place before time (if time was created in the beginning), and certainly they took place before we existed. Before I existed, I existed to him. In creation and throughout scripture, God points us to his being that is not bounded by time or by our line of events, and he does this by revealing himself in time.

Alien Invasion

Olson claims that the idea of timelessness came from Greek philosophy. There is no such thing. There are Greek philosophies, and there are many of them, but there is not one (maybe there was only one Greek philosophy at a time when there was only one Greek philosopher). There was more divergence, if possible, in Greek philosophical systems than there is in current philosophical systems. I am fairly sure that Olson is not ignorant of this fact, but he does not tell us which specific Greek philosophy he is actually referring to—is he referring to some form of Platonist philosophy? Why lump all Greek philosophy together and not name his opponent?

Olson uses invective, saying that this view “invaded” Christian theology from “alien sources.” Is this just rhetoric, or does he take his words seriously in these cases? Are Greek philosophers alien sources? Alien to what? They are humans. Aren’t humans the ones who do theology? I would think that at least some theology transpires among humans. These philosophers are not alien to me (I consider myself a human). Maybe Olson means alien to the Bible and/or the thoughts of the biblical authors and audience, or the community of faith within Judeo-Christianity? By alien and invading, does he mean to say this influence is evil in some way? But aren’t the ideas of the philosophers and theologians whom Olson quotes and whose arguments he borrows (Pannenberg, Moltmann, Polkinghorne, and Torrance) also alien in similar ways? Might he be liable to taking away one alien view and replacing it with another: ideas that are outside of the minds of the biblical authors?

Is Greek philosophy not to be trusted as a source for truth? If a Greek philosopher were to argue for the existence of one God, must we reject his proposition because he is a Greek philosopher? (So that if a Greek philosopher says there is one God, he must be wrong; or do we mean he must be backed up by scripture for his words to be correct, and until that time his thoughts are incorrect?) Is the god of some ancient Greek philosophers the God of the Bible? Olson responds with a resounding “No”—different God entirely. It seems to me that perhaps Paul in Athens points to the idea that Greek philosophers might have some true knowledge about God: “In him we live and move and have our being” and “we are his offspring”—what is true of God to Greeks is true of the true God (Acts 17:28).

If we deny any and all truth about God in cultures outside of scriptural revelation (like that of Greek philosophers), we fail to recognize and appreciate that our own doctrines of Christ’s two natures and the Trinity are derived in part from the ideas of specific Greek philosophers. These would also seem to fall under the rhetoric of “invasions from alien sources.” Must we then deny the hypostatic union of Christ’s two natures and the substance of the Trinitarian creed for these reasons? And because the apostle John used the Greek concept Logos (with very important ties to specific Greek philosophers’ ideas) referring to God in Christ, must we also reject this revelation on the grounds that this is an alien concept invading the theological landscape? What languages were the NT scriptures written in? Greek? To whom were the NT scriptures written? Some Greeks? Does being Greek disqualify a human from knowing the truth? From being the object of revelation? From speaking truly about the true God?

Timelessly In-time

Olson argues that “a timeless being cannot interact with temporal beings.” This does not seem to me to be a true statement. By timeless, we do not mean (unless we are transcendentalists of some kind) that God does not or cannot interact with and in time. We do not mean that God exists only outside of time (or that God has parts that exist in time and others that exist outside of time). If we acknowledge that God exists everywhere spatially, we similarly acknowledge that God exists everywhere temporally (we seem to live in a space-time continuum after all).

No Time & G-Time

In the comments on Olson’s blog post, Olson remarks that “I tend to agree that God created time and entered into it with us. ‘Before’ time began there was either no time or (Barth) ‘divine temporality’ that is somehow different from our time.” If Olson agrees that God created time, then God exists outside of time (to create it). That does not mean he does not also exist in time. He created the universe from not-universe; he created time from not-time. If Olson believes in a God before creation without time, this God is timeless (there was a God who was timeless, even if for Olson he is not timeless now). Olson seems to me to believe in what he denies (the possibility of a God who is timeless). If God can be timeless before creation, and yet create the universe (even if he is not acknowledged to be timeless now by Olson), apparently a timeless being can interact with beings in time (by creating). The creation event seems to me to provide logical warrant for belief in a timeless God (and this warrant exists on Olson’s view as well, though it goes unacknowledged).

Belief in the creation of time (and belief in a God existing in no-time “before” creation) seems to be either (1) belief in a God that came into being with the creation of the universe (something opposed to scripture), or (2) belief in a God that exists outside of time creating time (along with creating the universe). I don’t imagine that Olson believes (1), so we are left with (2). If he believes (2), then God was at some point timeless (and this with justification from scripture—he created the universe in the beginning). If God was timeless and interacted with time (to create it), then this is possible, and Olson’s argument loses its force: a timeless being interacted with temporal beings at creation, even for Olson.

On the other hand, what if Barth is correct and there is some different divine temporality (another kind of time) outside of our time, in which God existed to create the universe (what some have called “G-time”)? If G-time is not the time we speak of when we talk about time in our universe, then it is not time (it is outside of our time, which is all we need to mean when we say that God is “timeless”: he is not restricted by time and/or is outside of our time, though he is not bounded by his timelessness and can still be in our time). On the other hand, if by time we mean “that by which we judge the progression of events,” then we might ask “how is God’s divine temporality different from ours?” Perhaps we might say that ours is created and dependent on materiality, while divine temporality is not? Then I agree that God is not bound by our created time that is dependent on materiality. But we should understand that by acknowledging this, we are also saying that our progression of events is possibly of another sort than God’s (though this is not to say that he does not participate in our progression of events). My tomorrow is not of necessity his tomorrow. If that is true, we are back to the timelessness of God under a different name (G-time/divine temporality and our time).

A God Who Can Tell Time

Olson argues that a timeless God cannot know that “today is February 18, 2015.” But for Olson, can a God who exists in G-time know that the day of creation is the day of creation? Or can a God who existed without time “before” creation know the day of creation is the day of creation? Olson might respond that because God is no longer timeless he can tell the time of day (he has entered into time now). One question: for God, is the day Olson speaks of the 18th or the 17th of February (which time zone does God live in)? If God does not live in a time zone, and it is both the 17th and 18th for God, we are dealing with a God not bound by our time. If we have a God bound by time (as in Olson’s conception, unless I misunderstand him), we also have a God bound by space (he must be in a specific part of the universe, and only that part). If God is only in our time, he cannot be omnipresent as it is traditionally understood (and conversely, if God is omnipresent, he cannot only be in our time).

Can a timeless God know that “today is February 18, 2015” for me? In Olson’s blog post, he states that “today is February 18, 2015.” I believe that I understand what he means, even though today’s date (for me) as I write this is February 24th. I understand that for him it was the 18th when he originally wrote his article. For him, in that writing, it is the 18th. I am outside of that, and for me it is the 24th, yet my knowledge of that event is not bound by my time. I can enter into the world of Olson and know what it is for the day’s date to be February 18th for him. Similarly, I can read a book of fiction in which I live completely outside of the world of the story, and I can know that “today is February 18, 2015” for the characters in the story. I am in a timeless place relative to the characters in the story world, yet I know the day’s date in that world. I could have written the story myself, and for me, relative to the story characters, I live in a timeless place (their world is an eternal now to me). I can still know that “today is February 18, 2015” for the characters in the story at a specific point in time for that world. If I write myself into the story, and interact with story characters within their world and time, I am not thus bound to their time. I can exist in their time and without it. By analogy, can this not also be true of God?

Conclusion

Asserting the timelessness of God is not to deny that God is in time; it is to affirm that he is beyond time and not bound by it. We can affirm with Olson and others that God does indeed act in and for creatures in time. That he lives in time with us and for us. And that we live in and for him. We should additionally affirm, though, that God is not bound by this time, by this space. And we can affirm this timelessness based on scripture, on logic, and on analogy. As the creator of time and space, he is, and there is none like him.

Human Developers

When working in a silo, developers often begin to think of themselves, and to act, as if they were non-human: forgoing sleep, food, and the niceties of human life (like rising from a chair…or blinking). The closer to the metal, the further from the flesh. Peer review by other developers helps those developers to see themselves as users—humans. Not just a computing brain with inputs and outputs and advanced algorithms and fingers for clicking, or a demiurge with all power and control, but a real live human.

It might only take a three-year-old to review someone’s application for them, because they are human too.

Experiencing someone else’s application is a simple human affair, but it requires openness to learn how another person thinks, sees, and organizes the world. As a developer reviewing another’s work, you are no longer god (as you are when you are the coder, creating worlds with your words)—you are Adam or Eve, newly born in a green garden with opportunities all around. It is the Adams and Eves that we design our worlds for. Human developers, who know first-hand how to be human, make more human-like applications. Developers ought to practice being human every once in a while. To review and to be reviewed.

And so in training other Web developers, we may turn them into computers, or we may lead them to develop into humans. We cannot give what we do not have. We must be humans, we must develop humans, and we must develop worlds for humans. Open the door to your world and let some humans in.

Making Pi

Mizzou Server Hackerspace

Photo from Wikimedia by cowjuice (http://commons.wikimedia.org/wiki/File:Raspberry_Pi_Photo.jpg?uselang=en). Raspberry Pi: inedible, but oh so sweet.

Over these last two sessions (with a Spring Break in the middle), our group has looked at setting up a Raspberry Pi as a server, and we are still working on connection issues with Cloud9 IDE.

For the Raspberry Pi server, we had a fun string of events. First, we had to find power for the thing (no power cord). We used the micro USB port onboard and connected it to a powered USB port on a computer. Problem solved. After connecting an Ethernet cable, a Mac mouse/keyboard combo, and an HDMI cable, we realized that no monitors or Macs in the area had HDMI input. What a crock! After scrounging around in the office for a usable monitor, and even trying an old Panasonic TV (the room looked like a Texas Instruments developer’s lab from the ’80s before we were done), we finally found…


The Rise of the Machines

We have started a Mizzou Server Hackerspace at the Digital Media Zone that I manage, with a blog to match.

Mizzou Server Hackerspace

Old-timey machine shop from http://commons.wikimedia.org/wiki/File:Machine-shop-r.jpg

The age of the machines has come, and we as mere humans want to understand our overlords better. This week, in celebration of the World Wide Web’s 25th anniversary, we have inaugurated an informal weekly get-together at the University of Missouri-Columbia Digital Media Zone around the topic of the machines that make the Web work: servers.

This blog (and possibly others) will serve as the log of our adventures into all things server-related. Our weekly meetings, Thursdays 6pm-7:30pm in the Digital Media Zone in Townsend Hall, are going to be short hackathons on the following topics:

  • Making our own servers (with Raspberry Pi, Android, Intel Galileo and such)
  • Working with our own Apache server space (free unlimited space if you join the club, which is also free)
  • Using JS server-side (Node.js; see the sketch below)
  • MySQL, phpMyAdmin, MongoDB, CouchDB, PouchDB, LocalStorage syncing with a RESTful API
  • PHP
  • What makes blogs work on the…

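As a taste of the server-side JavaScript topic in the list above, here is a minimal sketch of the kind of first server we might run on a Pi (my own illustration, not code from the original post; the port number and greeting are arbitrary choices):

  // A minimal first HTTP server using Node's built-in http module.
  var http = require('http');

  var server = http.createServer(function (req, res) {
    // Answer every request with a plain-text greeting.
    res.writeHead(200, { 'Content-Type': 'text/plain' });
    res.end('Hello from the Raspberry Pi!\n');
  });

  // Listen on all interfaces so other machines on the LAN can reach the Pi.
  server.listen(8080, '0.0.0.0', function () {
    console.log('Server running on port 8080');
  });

Run it with node server.js on the Pi and visit port 8080 from any machine on the network.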

The Puzzle of Transparency

In Experience and Experiment in Art, Alva Noë (2000) argues that experience is more complex than it seems. Reflection on art can aid in our understanding of perceptual consciousness, and be a “tool of phenomenological investigation” (p. 123). Reflection on specific kinds of art may help us solve the puzzle of the transparency of perception. The puzzle of transparency is this: in art (and experience in general) we have a tendency to see through our own perceptions (to not reflect on our perceptions themselves) to the objects of our experience in the outside world. When we attempt to reflect on our window to the world (perceptions), we look through it and end up reflecting on the world itself. When we try to reflect on our seeing, we end up describing what we are seeing, rather than the experience of seeing. Experience is transparent to our descriptions and reflections (we instead reflect on and describe the experienced).

So, to solve the problem, we need to think of perceptual experience as a temporally extended process, and we need to look at the activities of this process (what we do as we experience our environment). Instead of describing the window, we describe the window’s actions. Noë sees the sculpture of Richard Serra (particularly his Running Arcs) as exemplifying this idea: the sculpture allows for active meta-perception (perception of the act of perceiving). The work causes us to reflect on our experience with it and on our own perceptions by making us feel off-balance, intimidated, or as if the piece were a whole with its environment: effects that highlight the nature of our perceptions rather than just the objects themselves. It allows us to perceive our situated activity of perceiving, the act of exploring our world. Serra’s work, as what Noë terms “experiential art,” brings us into contact with our own act of perceiving: the window’s transparency and the act of mediating that which lies beyond.

Noë, Alva. (2000). Experience and experiment in art. Journal of Consciousness Studies, 7(8-9), 123-135.

Kant’s Subjective Universal Validity

In Analytic of the Beautiful (Book 1 in Critique of Judgment), Kant discusses the judgment of taste, and presents arguments concerning the nature of beauty in comparison with that which is pleasant or good. Kant argues that judgments of taste (specifically of beauty) have subjective universal validity.

Judgments of taste are not logical judgments or evaluations of reason; instead, their determining ground is subjective. Judgments of taste are tied to internal feelings of satisfaction and pleasure, and so are aesthetical and subjective. Satisfaction in judgments of taste is disinterested and indifferent to the existence of objects. In contrast, both satisfaction with the pleasant and satisfaction with the good are interested in the existence of their objects. In pleasure we seek gratification, while in the good we desire either utility (the mediate good, good for something) or the good in itself (the immediate and absolute good). What gratifies is pleasant, what is esteemed is good, but what merely pleases is said to be beautiful (and note that what pleases is subjective).

The universality of the judgment of taste is related to its subjectivity: because it is both subjective and disinterested, this feeling of pleasure is held to be valid for all humans (i.e., beauty is imputed to all as universally satisfying). Each individual feels pleasure in the beautiful, without reference to themselves and without interest in the object, and infers that this same satisfaction is universal (since it is bound neither to the subject’s interests nor to the object’s existence). While the pleasant is individually satisfying, and the good is conceptual in nature, the beautiful is universally satisfying and non-conceptual. The universality of judgments of beauty is not postulated from concepts; rather, all who make a claim of “beauty” impute universal agreement to their judgment.

Kant may be seen as conflating satisfaction with the beautiful with taste: for Kant each person cannot possibly have their own tastes, because tastes have subjective universal validity. However, his arguments seem only to suggest that satisfaction with the beautiful has subjective universal validity, not that all judgments of taste do. If satisfaction with beauty is a subset of tastes, as some understand Kant to be saying, then other members of the subset may or may not share all of the same characteristics as beauty, and thus what is always true of beauty is not always true of all tastes (and vice versa).

On the other hand, if satisfaction with beauty is synonymous with taste in Kant’s writings (and not a mere subset), we are left with the problem of why tastes differ, yet satisfaction with beauty does not. All humans find satisfaction with beauty (they impute absolute universal experience of beauty in beautiful objects), though they find beauty in different objects and differ amongst themselves about which objects are beautiful. In contrast, all humans find satisfaction with their own judgments of taste, but they do not impute absolute universal experience of taste with pleasant objects. In short, there seems to be an imputation of general objectivity (or at least intersubjectivity) in judgments of the beautiful, even though the experiences are subjective, while there seems to be no imputation of general objectivity in judgments of taste in general: judgments of taste have subjective and indexical validity, but not always imputed universality.

New Media Literacy & Video Games

This past week I have been studying up on new media, media literacy, and new media literacy. It is interesting to me how much of this literature surrounds the use of video games in education. New media is defined in different ways by researchers in different disciplines, but my gloss definition is something like “digital interactive communication formats”. Why is this kind of literacy important? Why should people be able to intelligently and critically consume, analyze and create digital interactive media? Education is not only about concepts: it is also about artifacts. As the artifacts (the tools and things we humans create and use) in our world change, so must our education. What it means to live in a high-functioning way in our world changes as the artifacts change (and, because artifacts are a part of culture, as the culture changes). Similarly, as the formats and functions of the artifacts change (for instance, from paper to digital, from read-only to read-write), so must our education change. If K-12 education is to bring our children up to speed with the world they inhabit, and empower them to become meaning-making citizens, we must attend to the enculturation process and the products of our culture (and reflect on those processes and products).

Where video games fit into this artifactual nature of our current digital world is an interesting question (interesting at least to me). How can we operationalize new media literacy education, what are the ultimate goals of education in general (and do these also change with cultural/artifactual changes), and how can video games help us (or hinder us) in achieving those goals? Does new media even represent a change, or is it just hype? To be sure, new media literacy education is merely new media education if we fail to focus on reflection, critique and synthesis. Video games are fun, and are learning experiences in themselves, but can they also be used as contexts for reflection, tools for creation, and worlds of transferable learning scenarios and skills? If so, what affordances are necessary for this to occur?

Violent Video Games Study Design

Thinking about the several research studies conducted in the last decade on the impact of violent video games on levels of aggression and violence in players, I think a major stumbling block has been research design: the validity of many of these studies has been questioned. If we really want to scientifically evaluate the hypothesis that violent video game exposure is correlated with aggression/violence, it seems to me that we need a more rigorous research design. I do not think that violent video games in general cause or correlate with violence/aggression, but it would be interesting to look at this from a research perspective and test these ideas quantitatively. Note that I am not a quantitative researcher by background, and am not fully convinced of the total efficacy of “scientific research” of whatever variety. I am convinced that it is a good idea to look into our world and to develop hypotheses and theories that we can test; I am just not convinced that the results of those analyses are dependable as truth/facts (I don’t go for “it turns out that” research results; I do go for “it seems to us that”), though I do believe we should act on what we believe based on justifications.

That said, I think it would be good to have several groups of around 100 children of about the same age and demographic background, each given a pre-test on current aggression/violence measures, on the following schedule:

  1. The first group plays no video games for one month and is given a post-test, then plays specific violent video games regularly for two months and is given another post-test.
  2. The second group plays no video games for two months and is given a post-test, then plays the specific violent video games regularly for one month and is given another post-test.
  3. The third group plays no video games for three months and is given a post-test.
  4. The fourth group plays the specific violent video games regularly for three months and is given a post-test.

The same design would be run with non-violent games. The pre- and post-tests would ask parents about the violence/aggression of the child, and ask the same of peers and of the children themselves. The pre-test would also gather data about previous violent video game exposure (and children would be distributed evenly across groups based on this). The children would also be given pre- and post-test physical skills assessments related to violent video games (skill level in violent endeavors like wrestling, paintball competition, shooting, etc.). Then we could better see whether children actually learn skills from violent video games, and whether aggression levels and violence differ significantly between groups, between the start of the study and the end, and between no video games, non-violent video games, and violent video games.
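To make the design concrete, here is a minimal sketch of the group schedule and a baseline comparison in Python. The group names, phase labels, and scores are all hypothetical placeholders of my own invention, not measures or data from any actual study.

```python
# A minimal sketch of the proposed group schedule; all names and numbers
# are hypothetical placeholders, not real study data.

# Each group is a list of (activity, months) phases; a post-test on the
# aggression/violence measures follows each phase.
schedule = {
    "group_1": [("no_games", 1), ("violent_games", 2)],
    "group_2": [("no_games", 2), ("violent_games", 1)],
    "group_3": [("no_games", 3)],
    "group_4": [("violent_games", 3)],
}

def change_from_baseline(pre_score, post_scores):
    """Change in an aggression measure after each phase, relative to the pre-test."""
    return [round(post - pre_score, 2) for post in post_scores]

# Hypothetical group-average scores: pre-test 10.0, then one post-test per phase.
for group, phases in schedule.items():
    print(group, phases, change_from_baseline(10.0, [9.8, 10.4][: len(phases)]))
```

Comparing per-phase changes across groups (rather than raw post-test scores) is what would let us separate the effect of the games from drift in the measures over the three months.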

Depiction & Reflection

Are reflections depictive? Does my reflection in the mirror depict me? At first blush I would say no, reflections do not depict. The mirror reflection of Bob Wadholm is not a depiction of Bob Wadholm. However, if this is true, it seems that all photography is likewise non-depictive, because a photograph collects light and then, using mediating technologies, displays a reflected image of a sort back to the perceiver with new light. But I refuse to believe that the photograph in my wallet is not a picture, so something must be wrong with this assessment. Does a depiction require a capture of that light for storage and later display: and is this why I feel that the mirror does not depict me, but that photographs do? But I may set up a webcam to record my face and display it back to me in real time, functioning as a mirror. And if I do not record this video, is it depictive (i.e., does a video reflection of me depict me)?

Accessibility of Video Game Research

So sorry to any regular readers of my posts on video games: I wrote a post this past week, along with a podcast, about violent video games that I could not share with a wider audience. Why? Because the article I was discussing is not open access. What does that mean? It means, in this case, that if you don’t own the rights to read the article (say, by paying an exorbitant price for a subscription to a particular research journal), you can’t access my thorough discussion of the study and its findings. This is knowledge you can’t get (maybe you are not sad about this, but I am). And you wouldn’t find the article unless I told you first what it was. I could tell you the name of the article, but I don’t want to call out this particular journal or these authors (I feel it is a systemic problem, not a particular problem with a specific journal or group of authors).

What I do want to do is this: provide you, the general reader, with articles that I find useful and interesting. I want you to have the knowledge I have access to. I think it is wrong for me to get to read these articles and benefit from them while you cannot even really know what they are saying. Knowledge is being locked up behind a paywall. And that is not good, especially because I think it is important knowledge. So if you are writing or thinking about video game use in education, please continue to do so. But please make your work available to the masses who actually play video games, and not just to the very few of us who happen to work at a huge university that can afford a journal subscription to read your article. Release your work under a Creative Commons license of some sort. I don’t think you want to keep your knowledge bound up where no one can read it. And we want to read it. So make it accessible, make it readable by everyone, make it available for us to use and talk about freely and openly. The world is only made worse when useful video game research (along with much other useful research) cannot legally be read by those whom it would benefit the most. End of diatribe.

Learning Analytics & Serious Games

Do learning analytics have a place in serious games? Learning analytics is “the measurement, collection, analysis and reporting of data about learners and their contexts, for purposes of understanding and optimising learning and the environments in which it occurs” (https://tekri.athabascau.ca/analytics/). Don’t games already have built-in game mechanics that do this? Yes and no. Performance is tested in-game. Game designers think through how a task will be learned and accomplished, and try to optimize this through testing. So if a learner in a serious game gets through the game, they should have learned what it takes to get through the game, meaning that they learned what the game intended them to learn (as long as the game designers were teaching and testing what they intended to teach and test).

However, this data (about learners) is not always measured. What about a task that most learners have trouble with? Game designers (counterintuitively) do not always know the most optimal way to get through their own games (or to learn their own games), even after sitting and observing representative testers. Expert players may, in the end, come up with more optimal solutions than the original designers thought of in the first place. Without measuring this, collecting data about it, and analyzing and reporting this data about learners and contexts, it is difficult to know or share knowledge about such learner optimizations (or, conversely, sub-optimizations). And the benefits are not confined to what happens in-game: designers can create better games if they are better informed about optimal and sub-optimal performances in their games, and game data can help bring this knowledge to designers. Learning analytics seem to have the potential to play an important role in serious game development, getting the designer to the understandings they need to frame the world that the learner needs.
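As an illustration of that measurement/collection/analysis/reporting loop applied to a serious game, here is a minimal sketch in Python. The event names, fields, and numbers are hypothetical placeholders I made up for illustration, not any actual game’s telemetry.

```python
# A minimal sketch of measuring, collecting, analyzing, and reporting
# learner data from a serious game; all values are hypothetical placeholders.
from collections import defaultdict
from statistics import mean

# Measurement/collection: each logged event is
# (player_id, task_id, attempts, seconds_to_complete).
events = [
    ("p1", "bridge_puzzle", 1, 42.0),
    ("p2", "bridge_puzzle", 5, 310.0),
    ("p3", "bridge_puzzle", 4, 275.0),
    ("p1", "tutorial", 1, 60.0),
    ("p2", "tutorial", 2, 95.0),
]

# Analysis: group the events by task.
by_task = defaultdict(list)
for player, task, attempts, seconds in events:
    by_task[task].append((attempts, seconds))

# Reporting: rank tasks by mean attempts, surfacing the ones most learners
# struggle with (candidates for redesign, or for studying expert solutions).
for task, rows in sorted(by_task.items(),
                         key=lambda kv: -mean(a for a, _ in kv[1])):
    print(task,
          "mean attempts:", round(mean(a for a, _ in rows), 1),
          "mean seconds:", round(mean(s for _, s in rows), 1))
```

Even a report this crude would tell a designer which tasks players grind against, and flag where an expert player’s unexpectedly fast completion might reveal a more optimal path than the one the designers intended.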

Hume’s “Of the Standard of Taste”

In the essay “Of the Standard of Taste” (http://www.csulb.edu/~jvancamp/361r15.html), David Hume (1711-1776) briefly argues that although tastes, like opinions, differ, tastes are not right or wrong (as opinions might be), though sentiment does admit of varieties of delicacy that may be used to establish a kind of standard of taste. People speak of beauty with language that admits of absolutes. In moral matters, use of language like “vice” entails blame, while use of language like “virtue” entails praise; all people universally agree that vices are reprehensible and virtues praiseworthy, and the question is only “what are the virtues and what are the vices?” So too in art appreciation: aspects of beauty are universally acknowledged, while individual tastes seem to differ with sentiment rather than with reason. There are many opinions, and there are many sentiments. However, while only one opinion among many may be true, sentiment is always true and right: “each mind perceives a different beauty.” Seeking true taste is like seeking true sweet or true bitter. Individual dispositions of biological faculties are at play, seeming to place beauty in the eye of the beholder.

While this has passed as common truth, it is also acknowledged that we do seem to consider it absurd when tastes are widely different from the norm and do not fit with ideals of beauty. Humans do seem to generally share (please forgive the split infinitive) a common sentiment (albeit in extreme example cases). There are, Hume argues, in the midst of the myriad differences of taste, real general principles for praise or blame. People sometimes mis-see beauty because of personal defect. Objects have certain qualities that produce particular feelings, but small differences (subtle admixtures of beauty and deformity) are more difficult to distinguish. Resemblance to the familiar brings a percipient greater pleasure, preference, and prejudice in the appreciation of a work of art, and may muddy the water. However, one critic’s taste can be proven better than another’s, and the deficient critic will acknowledge the indelicacy of their taste when presented with evidence derived from universally acknowledged general principles about beauty. Delicacy of taste in beauty requires practice, comparison, perspective, and absence of prejudice. Reason is required to check prejudice and keep it from influencing one’s sentiment of beauty. So, although principles of taste are universal, still few people have come to the heights of delicacy needed to “establish their own sentiment as the standard of beauty.” Nevertheless, when these elite few issue a joint verdict, we have on our hands the true standard of beauty and taste.

So for Hume, while judgments of taste may be based on sentiment (and sentiment may vary wildly), aesthetic relativism is not the answer. There are absolutes, and these are to be found in persons of delicate and practiced (though unprejudiced) taste. This seems to be an argument for taste elitism (a kind of tastocracy), which causes me to wonder whether Hume sees himself in the ranks of this group of arbiters of true beauty. Beyond this criticism, there is potentially a more fundamental weakness in this strategy: admitting that tastes are always right, but that some tastes are better (more delicate) than others and so provide a standard by which to judge tastes as good or bad, seems to say that while a particular taste may be right, it may not fit the standard (and thus be deformed, deficient and/or wrong in its verdict). For instance, in contradistinction to Hume, I find John Bunyan’s work of greater genius and elegance than Joseph Addison’s, based on sentiment. Another critic will likely disagree with me, and find the opposite (as does Hume, who characterizes detractors as “absurd” and “ridiculous”). Can we prove one taste right, and another wrong? Is my sentiment wrong or deficient? If you “prove” that, you provide reasons, which is not what taste is based on for Hume (it is based on sentiment). If sentiment is the basis of judgments of taste, no reasons may overrule my verdict.

So, perhaps, as Hume seems to suggest, a better, more informed sentiment may be the basis for judging my taste. But what makes it better or more informed or more delicate if it is in the end still mere sentiment (is it closer to an absolute truth)? And does the verdict of the elite group turn my “true” verdict based on sentiment into a “false” verdict based on sentiment? Further, if Hume’s argument is successful (and the elite group’s sentiments are better or more delicate than my own in regard to esteeming Bunyan’s work), this does not make them a standard; it means only that their sentiment is better informed (or better in some other way). Even so, it may be that a group of still better-informed critics later looks down its nose at these elitists’ sentiments and pronounces my earlier valuation more delicate than my current detractors’. In that case, is the standard of taste getting more standard? Does an absolute standard admit of degrees? It does not seem logical that it could.

Experienced Resemblance of Outline Shape: Pictures & Depiction

In “The Domain of Depiction,” Dominic Lopes (2005) argues against the EROS (experienced resemblance of outline shape) theory of depiction and proposes and defends a recognition theory of depiction. I would here like to focus on what I think is Lopes’ strongest objection against EROS, and to show how a defender of EROS theory might reply.

There are six cardinal truths about depiction that EROS theory attempts to explain (2005, p. 162):

  1. Depiction is not property-less: to depict something, you must depict it as having some property;
  2. Depiction is always from a point of view;
  3. Only visible objects can be depicted;
  4. Objects can only be partially misrepresented in pictures (total misrepresentation is merely non-representation);
  5. Understanding a picture entails knowing what the depicted object looks like; and
  6. Knowing how the depicted object looks is necessary and suffices for understanding a depiction of the object.

In EROS theory, depiction is explained (though not defined) as an experience of resemblance of an object in outline shape. This experience is characterized as “seeing-in”, but depiction requires more than just seeing-in: it also requires the right kind of causal relation between the picture and the object, or that the picture is made with the intent for the object to be seen in the picture (p. 163).

Lopes offers several objections to EROS theory, and perhaps his strongest objection is his analysis of the limits of EROS. This objection takes the form of four illustrations of problematic depictions:

  1. A cube shown in parallel oblique perspective (EROS either says that in many cases our intuition about what is depicted is incorrect, or that EROS is so flexible that it does not have to track with objective resemblance in outline shape—either way, EROS is not necessary for seeing-in),
  2. Three outline shapes of parts of a cube (recognizable outline shapes depend on the presence of significant contours or boundaries in the depiction, so EROS is not sufficient for seeing-in),
  3. The outline of shading on a face, shown in positive, negative and outline (outline shape is only recognizable with the presence of an illumination boundary, so again EROS is not sufficient for seeing-in), and
  4. R.C. James’s “Photograph of a Dalmatian” (the outline shape of the dog is not seen until the percipient sees the dog in the picture, so EROS depends upon seeing-in).

So EROS is neither sufficient nor necessary for seeing-in, and the theory seems to involve circularity.

How might an EROS theorist respond? I think one avenue is to look back at what EROS theory claims: that depiction is seeing-in plus intention/causality. The cube depicts an object as cubical, we see a cube in the picture, and we experience the outline shape in the picture as matching a cube in outline shape. How? EROS allows for multiple points of view of an object in a depiction, and this can be taken to be one such case of multiple perspectives (so EROS is still necessary for seeing-in). In the illustration of three outline shapes of parts of a cube, only the first depicts a cube: there is no indication that the other two (the irregularly shaped outlines) are intended to depict a cube. In fact, as the outlines are presented here, they are intended not to depict a cube (or they would be poor examples for Lopes’ argument). Similarly, for the outline shading on a face, the last picture is not intended to depict a face (and it does not). The “Photograph of a Dalmatian” case is a bit different. The picture intentionally leaves out significant portions of the outline shape, which are supplied by the percipient (via Gestalt). The viewer never sees the outline shape of a dog, but rather creates it in their mind, inferring the presence of such a shape from the parts of the picture’s surface and from previous experiences of outline shapes of dogs. The picture is not made with the intention of the dog being straightforwardly seen: the point is first to infer the outline shape of the dog, and only from there to come to experience the resemblance in outline shape. So the conditions for depiction are not met (we are not intended simply to see the dog). In sum, these cases can be argued to be by and large not depictions (due to lack of intent) or to be cases of multiple perspectives. No circularity seems to be involved, and it seems that EROS can still be said to be sufficient and necessary for seeing-in.

Lopes, D. M. (2005). The domain of depiction. In Matthew Kieran (Ed.), Contemporary Debates in Aesthetics and the Philosophy of Art (pp. 160-175). Oxford: Blackwell.

Mod Me Baby, One More Time

One of the great things about many popular video games (especially games in a series) is that modification of the game itself is possible, either through cracking or through in-game allowances for modification. For instance, the violent video game GTA IV has a modding community that has grown up around it, with mods that include playing as SpongeBob or R2D2 and driving a birthday cake (http://www.youtube.com/watch?v=WoBH_HgHsKg). These games have their own worlds, their own characters, and their own plots, but players have re-envisioned the worlds, characters and plots to fit their pleasure. If games are learning environments (which I think they are), one of the things players are learning is how to change their learning environments. Here we see new media skills in action. Hacking and cracking these games requires skill and know-how, and often results in communities of practice (not just around the game itself, but around modification of the game). New vocabularies spring up, new popular twists arise, and users become creators. Now, as game makers, players are embodying their skills and knowledge in their modifications. Playing through a game often leaves very few digital artifacts. Creating a game is all about the artifacts. Mastery is visible. Game modding often gets a person to the highest level of thinking in Bloom’s revised taxonomy: creating. Pulling off a good mod is personally satisfying and publicly rewarding.

Distilled from games, what can we say about modding? Successful modification of our environment, identity, and motivations/directions is a part of growing up. What affordances are there for modification in learning? When we are able to have control over where we learn and in what manner, when we become the teacher or identify with what is being taught, and when we change our mind or surge toward something from inner compulsion, that is a successful learning mod. But learning modification requires free will. And not all who are allowed to mod end up modding. A person must choose to be a modifier. They must step into modification. They must choose to move from situated learning (as in video games) to resituated learning (as in video game mods). As learners resituate their learning, they may fail miserably at the tasks before them. But given support, encouragement, and teaching regarding how to succeed (just in time and as needed), they will mod their learning, and hopefully mod their lives. Beyond gamification of learning, we might think about focusing on the modification of learning. And the modification of the modification of learning…

Genuine Rational Fictional Emotions

In “Genuine Rational Fictional Emotions,” Gendler and Kovakovich (2006) seek to resolve the “paradox of fictional emotions.” The paradox runs as follows: 1- we have genuine rational emotional responses to specific fictional characters and situations (the response condition); 2- we believe that those characters/situations are fictional (the belief condition); but 3- we can only have genuine rational emotional responses toward characters/situations if we believe they are not fictional (the coordination condition). Each of these claims is plausible at first blush. For instance, I can respond to Frodo’s plight in Mordor with genuine rational sadness, I can believe that Frodo is merely fictional, and I am not feeling genuine rational emotions for a three-foot-high, furry-footed, curly-haired person (because I believe Frodo is fictional, and so I cannot feel genuine rational emotion for the poor guy). But these three conditions cannot all be true simultaneously.

The solution hinges on the question of whether emotions about actual characters/situations are similar enough to emotions about fictional characters/situations to warrant considering them “two species of the same genus” (p. 243). The differences between these two kinds of emotions (dubbed here “fictional emotions” and “actual emotions”) are related to subject matter (real or fictional) and to motivation (I do not respond to Frodo in the same way as I would if I actually met a real hobbit in such tormented circumstances). Earlier resolutions of the paradox posit that fictional emotions are not genuine, or are not rational (addressing the response condition), or that we lose track of our own belief (addressing the belief condition).

Gendler and Kovakovich deny the coordination condition, and do so partially on the basis of recent empirical research suggesting that autonomic responses (response behaviors linked to the involuntary nervous system) help people in practical reasoning by the following process: we imagine the consequences of our actions, which activates emotional responses, and these are reinforced to the point of becoming automatic, helping us behave rationally (based in part on these automatized responses). So autonomic emotional responses tied to future circumstances (“what if” scenarios) may help us behave rationally. Automatic emotional responses to imaginary events are a part of rationality. Further, when we fear actual future events, these emotions are genuine, even though the events have not happened (and may not happen). So we can have genuine rational emotions about things that do not exist. Concerning the belief condition: with optical illusions we may perceive and respond to things we do not believe. This is because we have automatized our responses to stimuli in such a way as to act subdoxastically (without requiring belief). If this is true, we may have genuine rational emotions toward fictional characters/situations without needing to believe those characters/situations are real (the emotions occur subdoxastically). Without such emotional engagement in fiction, we would be limited in our capacity to behave rationally (we would be limited to our own narrow real circumstances for building up autonomic responses).

This proposed solution is both elegant and convincing, though there are several potential weaknesses. First, the cases given as subdoxastic responses may turn out to be doxastic (involving false beliefs that are overturned by further evidence). When I get near a window in a high-rise apartment, I believe I will fall to my death (I am afraid of heights, and of falling to my death). I also have other beliefs that outweigh this false belief, and which sometimes allow me to stand near the window and enjoy the view without a response of fear. It is not irrational to hold two contradictory beliefs at the same time: it is irrational to still hold both after evaluating the merits of each (and recognizing they are incompatible). Second, the force of the argument depends upon our acceptance of the idea that the similarities between actual and fictional emotions are more striking than the differences, but what if we were able to show there are additional differences between the two? For instance, fictional emotion is a source of pleasure (even when the emotion is fear or sadness), which we derive in part from knowing that the fiction is not real. If actual and fictional emotions are indeed different, and if our emotions are not subdoxastic in the case of responses to fiction, then the argument presented here may fail to convince.

Gendler, Tamar Szabo, & Kovakovich, Karson. (2006). Genuine rational fictional emotions. In Matthew Kieran (Ed.), Contemporary Debates in Aesthetics and the Philosophy of Art (pp. 241-253). Malden, MA: Blackwell.

Using a TV as a Hammer: From Entertainment to Usability

I am pursuing research in the direction of testing the pedagogical usability of popular off-the-shelf video games (games that are not intended to be educational). Pedagogical usability is basically the set of affordances a tool offers for use in learning (or as a learning environment). One question arises in my mind from this research, and I have not fully wrapped my head around it: what business do we have assessing entertainment-centered video games for their educational utility? Is this like measuring the aesthetic features of PhD candidates?

And when we use tools like blogs (intended for journaling self-reflective thoughts in a public fashion online) for educational purposes, are we misusing the tool? When we use a tool like a television to hammer in a nail, are we abusing the tool? In my research I will need to address this important problem. Am I suggesting through my research that we should take every fun and interesting thing in the world and use it for our own purposes, against the intentions of the creators of those tools? Ought we to misuse and/or purposefully thwart the true intent of a tool if we perceive our own purposes for the tool as more important to society?

Popular video games are generally not created to be educational, or to be used in education. We may learn new media skills using them, but that is typically not their purpose. What right do we have to judge them by something they don’t even intend? Or to use them against their intended purposes?

Here might be one way out of this dilemma: if we view this problem from a holistic standpoint, we might be able to understand the several major elements at play and to create a model of synthetic artifactuality to apply to these kinds of situations. First, we have the creators/designers of tools, the users of tools, and the tools themselves. Second, we have interactions: users interact with tools in ways that the original designer intended. But they also use them in new and unique ways not intended, sometimes with better results than the original design intended. When this beneficial use occurs, we may see it merely as chance, or as a second designer in action: the user. The user comes to the tool and the way in which it is designed, sees its intended purposes, understands its affordances, and synthesizes his or her own thoughts about possible affordances of the tool with those of the original designer. This new design is then tested as the tool is used for the new purpose. We see old tools put to new uses all of the time, so this is not a new idea. The question is whether the new use is more beneficial than the old, and perhaps also whether it has negative effects on the tool itself or on the tool’s ability to fulfill its original purposes.

Are video games negatively affected by their use in educational settings? Maybe. Maybe they don’t look as cool anymore, or as enjoyable. After all, if motivation is supposed to rub off from video games onto educational content when the games are used in an educational setting, could not the negative connotations of formal schooling rub off on the games as well? Might video games be less entertaining if we are worrying about how well they educate? It seems a live possibility. Might the act of assessing them against their designers’ intentions cause them to become less valued in their original state (or cause us to undervalue their original intentions)? I sure hope not. Let’s hope that we don’t ruin these tools. More important still, let’s hope that we don’t ruin the learners.