2011-06-28 : Designing Philosophical Arguments

Ry asks:

How do you take a philosophical argument that's very, very important to you and express it in a game - without being pedantic?

(like you did with compassionate and uncertain vs. judgmental and certain in Dogs)

Well, wow. Mostly I fail!

I want to say that I don't try to design games that way, but it's not true. All the time I'm writing things in my notebook like "so, Vincent, compassionate vs judgmental, right? How?" But it turns out that when I approach a design that way, nothing ever comes of it. Ever! It's not fruitful for me.

When it works, it works because I ignore the philosophical argument and design the realities of the situation instead. What circumstances can a character find herself in, what can she do, and what can come of it?

At least, that's how it seems to me this afternoon. What do you think? It's a good topic - do you want to ask or say more?

1. On 2011-06-28, Ry said:

R. Scott Bakker does this with his fiction.  He has a knack for transforming a philosophical point into something that can hurt people.  Not as a monster, necessarily, but as a fact that monsters can exploit.

So when he wants to talk about how little we can see our own brains at work - how inaccurate our self-judgment is - he invents a neurosurgeon serial killer.  The killer provokes the reader / detectives with tapes of the murders where we can see the victims behaving against our assumptions about human nature, but which agree with what we know about the brain.

That connector - the tissue from 'the brain doesn't work the way we casually think it works' to 'serial killer' is the thing between me having, say, 3 insights and writing a game.  Or 2 insights and an apocalypse world hack.


2. On 2011-06-28, Ry said:

So to keep the topic from getting mired in neurology and serial killers, can we talk about how to make a game that sets group rights against individual rights?


3. On 2011-06-28, Simon C said:

Ooh! I love this topic!

I think it's like any good writing. If you sit down to write a poem or a story or anything "about" a philosophical subject you (or at least, I) get didactic wank. If you sit down to write about some characters you care about, or a situation you find interesting, or a difficult problem, you get something that actually addresses issues (maybe even the ones you wanted to) in an interesting way. And then people read it and they get something TOTALLY DIFFERENT from it.

"Realities of the situation" works for me too. I'm noodling with a police procedural game, and every time I write a move that's about portraying the realities of the situation (as I see them), it turns out to be about choosing what's easy or what works.


4. On 2011-06-28, Ben Lehman said:

I find that human situations I find compelling naturally result in philosophical conflict.

I can't imagine this is an accident.



5. On 2011-06-28, Ry said:

Ben, I see what you're saying - create deep situations and you'll naturally find philosophy there.

But I think some of these are trickier - for example, I can't imagine addressing the conscious vs. unconscious without deliberately going for it.

I mean, normally when I talk for my character I'm using my conscious brain to direct what my character's conscious brain does. If I add some mood I can suggest my character's unconscious. But I never have to face the otherness of the unconscious. I also don't have to see how I lionize myself in my personal narrative.  Those things stay implicit like they already are in life. So my game isn't "about" our self-knowledge the way Xxxxtreme Street Luge is 'about' microcelebrity.


6. On 2011-06-29, Neon Fox said:

can we talk about how to make a game that sets group rights against individual rights?

I think a lot of games already do that on some level, any time they have certain kinds of actions in them.  Specifically, once you have rules that can benefit one character at the expense of the group, or vice versa, there you go.


7. On 2011-06-29, Ry said:

I can see that.  We can ask how the group's choices constrain the individual - i.e. how hard-fought victories for the group might be bad for individual members.  We can answer that by making a game where hard new rules for groups happen, and driving play towards the people at the edges.

Maybe that's part of the problem - philosophical questions about how people should interact with each other might be easier to turn into games than philosophical problems about the mind / the self.


8. On 2011-06-29, Vincent said:

Just start with the realities of the situation.

People don't own their own decisions, but think that they do - great! Make a game where that's true. Who is it about? What do they do?


9. On 2011-06-30, Ry said:

Okay, it's about cops and the Hell's Angels.  But really about the web of people that get hurt in between.

Some of those in-between people act nervous near a cop on the cop's really bad day.  Or are taken by the wrong impulse and flip off a Hell's Angel on the highway.  Or kick their dog 'for being bad' when really, they're mad that their boss is threatening to lay them off.

This thing about the unconscious and how it snowballs is as much about kicking that dog as it is about the mass arrests that happened in my town last year. And the way people afterwards demonize those arrested because it's easier than thinking about police as people that can be impulsive or make mistakes.

When it comes to cognitive fallacies / the unconscious, the rubber really hits the road in law enforcement (and lawmaking, and the courts).


10. On 2011-06-30, Ry said:

Here's the problem - the nervous guy, the cop investigating the nervous guy, the Hell's Angel beating the guy, the guy kicking the dog - none of those guys own their decisions, but they perceive all of their actions as if they DID own them.


11. On 2011-06-30, Vincent said:


Now that's just a design problem, not a philosophical argument.

Do you have an insight into roleplaying as a practice that goes with it, so you can approach the design problem?


12. On 2011-06-30, Ry said:

Sure, but it's not a helpful insight.

When we roleplay, we use our words, which a lot of the time means we're using the executive functions of our brains to make decisions for our characters.  Put another way, our characters don't do anything that isn't intentional.  In essence our characters don't have the rich set of less-than-conscious drivers that we do.  If my character runs away when I want him to stick around, I feel like the GM or the game screwed me.  My character doesn't act like they made that decision - doesn't do the work we all do to justify it.  The character says "that was a failed save, that was DONE to me", whereas we would say "I ran. I was being smart."

Our characters can tell the difference between actions that were imposed on them by their less-than-perfect brains and actions that they used executive functions to choose.  We can't make those distinctions.

So my insight about roleplaying is "Roleplaying games are limited in their presentation of the self because all interactions are channeled through the intentional, speaking, thinking self, i.e. the parts of our brains that play the game."


13. On 2011-06-30, Vincent said:

Yep. I can see all that.

It's possible that you could buy the player in on the preconscious driver's side, against the character's conscious self, so it's not like feeling screwed by a bad save, but I haven't thought very hard about that.

Is there a metaphor for our consciousness vs preconsciousness that's compelling enough to you, or do you have to play it straight? (Now we're in "insight into your subject matter" territory.) What about robots?


14. On 2011-06-30, Ry said:

All I can think of is:

"She was an ex District Attorney, sick of seeing hardened criminals walk on technicalities.

"He was the unforeseen consequences of design decisions in the robotics lab.

"Together, they fight crime."

So I'll need to take some more time to answer that one.


15. On 2011-06-30, Ry said:

I see what you're getting at about the robot.  Maybe it's a giant fighty robot with a human pilot.  The fighty robot's systems are the analogy for the consciousness, and the human pilot is the analogy for the preconsciousness.

I like that idea, but when I think hard about it I don't know if just splitting them solves the problem.

Regardless of where an action came from, the robot has to feel like "I did that" - but I think the human controller has to say "I did that" too.  I mean, both of them see themselves as the one in the driver's seat.

My intentional brain thinks "I decided to go to work today, and I left early enough not to be late, so my coworkers won't think less of me", my preconscious brain thinks "I got to the safe place in time to prevent the other sapiens from making angry faces that make my heart start racing."  They're both under the impression that it's all about them.  They only talk when I meditate.  Right?

The mech thinks "I solved the problem of getting object X, labeled 'missile', to intercept object Y, labeled 'threat'.  Y was on an intercept course to Z.  I solved the problem because Y was removed from the grid before Y and Z could coincide in 3-space."  The pilot thinks "I piloted my mech and fired a missile at the monster before it could reach the town."

What I'm saying is that I love the giant fighty robot metaphor but I don't think the pilot gets to be the smart one to the robot's dumb one.  They're both smart and both dumb.  Problems arise when they're out of sync.  Does that make sense?


16. On 2011-06-30, Vincent said:

If your character's a robot with game mechanics telling you what its programming says to do, will you feel the same "I'm screwed because I missed my saving throw"?

Or, what if I'm playing the robot and you're playing its programming?

Or, what if the PCs are Hell's Angels and people in that web, and the cops are the robots?

I don't know if workshopping your game, Ry, is really what you want out of this thread, or whether I should now just say, yeah, sometimes we want to design a game we can't figure out how to design. That's a thing too.


17. On 2011-06-30, Ry said:

I'm not sure I need to workshop my game here either.  I'm OK with "sometimes we want and we can't figure out how" especially if the "we" includes people who design games.  This discussion makes me wonder if the game that does this right won't be any fun to play... because it's no fun to be confronted by the places where anthropomorphism doesn't describe what we see in the mirror.

But I'm not trying to workshop here.  I'm looking for alternate guidelines for framing an issue like that into a game.

For example, here's the only guideline that's been effective for me:

Take the "thing" and try to express it in the game in a way that people can get hurt.

That's a start, but it sounds like one of six or seven good rules for how to turn a philosophical point into a game.

I feel like we did something else in the last 10 posts but I'm having a hard time teasing out what the principles behind that are.


18. On 2011-06-30, Moreno R. said:

Ry said: "So to keep the topic from getting mired in neurology and serial killers, can we talk about how to make a game that sets group rights against individual rights?"

Isn't that Dogs in the Vineyard?

(Dogs in the Vineyard is maybe more about community cohesion against individual rights, but I think it's near enough...)


19. On 2011-06-30, Gregor said:

"[S]ometimes we want to design a game we can't figure out how to design. That's a thing too."

Damn true. And frustrating.


20. On 2011-06-30, Simon C said:

I think another question that's good to ask is "Why does this need to be a game?"

Like, what do we get out of playing this game? To my mind, productive play comes from the space between two sides of an issue. What are the two sides of your issue, Ryan? Who are the people stuck between them? For whom is it a critical, life or death issue that their conscious and subconscious minds don't always agree?


21. On 2011-07-01, Zac in Virginia said:

I'd just add that our philosophical prejudices (in the general sense of the term) are always informing our creativity.

I think it's important to figure out what we really think and what we're just parroting, but aside from that, be yourself :)

For example - I could never write a game about fighting crime. I could write a game about crooked cops, though.

Also - to stir the pot a bit, I'd add that, ime, what's perceived as pedantry stems from bad writing or from ideas with which the reader strongly disagrees.


22. On 2011-07-01, Vincent said:

Simon, my point and suggestion is to simply assert that our decisions aren't really our own and design the game accordingly, not to make it the issue at all. Just like I simply assert that the person losing the argument is the one who throws the punch.


23. On 2011-07-01, Simon C said:

Vincent, yup!

But like, that wouldn't have much juice if the game weren't about people who need to get into arguments a lot, but also who might like to not always be punching people. Right?


24. On 2011-07-01, David Berg said:

Finding the right relationship between (a) the philosophical position and (b) what the game is ostensibly about doing—this is tricky for me.

My recent game Within My Clutches says "the shit you own ends up owning you".  The mechanics enforce that—when you achieve stuff, your achievements saddle you with responsibilities that demand your resources away from other stuff you might want to do.

So, who experiences those kinds of situations?  I said "supervillains".  So there's some color and some mechanics that I'm happy with.

The odd bit, though, is that what you do when you play your supervillain is you try to achieve stuff.  So the philosophical message kinda traps you.  Every time you achieve stuff, it owns you, hosing your ability to achieve more stuff, and sending you on a downward spiral that can encompass play entirely if you're not clever about alternatives.

Compare this to Dogs, where your objective is not to lose arguments, but rather to judge situations in which you may wind up losing arguments.  You can encounter the "argument loser throws first punch" message independent of whether you succeed or fail at your primary objective.

Perhaps a takeaway is that the message should be attached to the process of play rather than its product?  The momentary feedback rather than the large-scale rewards?  Or at least that's one approach...


25. On 2011-07-01, Ry said:

David, I think you're really on there.


26. On 2011-07-05, GB Steve said:

Most games of D&D at some point involve a personal v group conflict, especially when it comes to dividing up the loot. There are several ways groups resolve this with a mixture of IC and OOC mechanics. It seems to be a natural emergent feature of groups, especially where resources are discrete.

In our game of Mutants & Masterminds, the main effort of the group was in arguing about its name. My character had a massive intimidate bonus which she used to impose a name but it was only ever used when she was around and didn't get on the stationery.

All you have to do is provide one less resource than there are characters.


27. On 2011-07-07, Josh W said:

Philosophical points don't have to be cautionary-tale type things, you can use them to compensate people for stuff too...


28. On 2011-07-08, Josh W said:

Putting a bit more meat on that, the obvious way to include philosophy is mechanically, either focusing on implementing a present-tense tradeoff/constriction of action or a later pattern of consequence (reward cycle stuff, gotchas, whatever).

This can be great, but you are naturally making your design more rigid, constraining the possibilities of the story.

The other way to do it (and this is best for things relating to human psychology) is to leave mechanics out on the basis of that philosophy:

Say you feel that people gain satisfaction not so much in big things going their way as in a pattern of small consistent successes.

So you build a game where more success equals a bigger scope of possible action, which will inevitably put you into conflict with other players. Then you make it so that the primary result of losing conflicts is to have your scope of effect automatically reduced, in such a way that you're no immediate threat to people but it's actually difficult for them to oppose you.

Then that player who loses will keep plodding on doing whatever, and if that philosophical thing is right, will gain satisfaction in it because of its reliability. You don't need to add a mechanic to compensate them for reduced influence because withdrawing from the fight can itself be a compensation.

That's not a very good example, but you get the idea. You set up the mechanics so whatever you're talking about is likely to occur, so the players have a chance to prove you right, rather than reacting to you mandating it.

