thread: 2005-08-30 : Joshua BishopRoby on Rules

On 2005-09-01, Mike Holmes wrote:

I'd say that, if this is correct, this is actually a complete description of the arrow between Ron's Exploration level and CA level.

A few notes. Josh, if you're reading: you're making the same mistake that Adam is making. That is, "dysfunction" at the Forge refers to RPGs, even though it doesn't explicitly say so. After all, it's what we talk about. Adam, when he says things like "System validates SIS," he expects you to understand that it means "players employ the system to validate SIS." Both of your protests seem oddly nitpicky.

How do CA and this model interact? What everybody somehow forgets is that CA is behavioral, and does not address goals specifically. So Josh is more or less right, but I'd put it like this: CAs are behaviors that players adopt to obtain their goals. Ron's model ignores the specifics of the goals, as does Josh's (both wisely, I'd argue, since goals are just too broad). Josh is just pointing out that they do exist - which nobody ever said they didn't - and that there are things that players do to support their goals. The result of all of these interactions seems to me to be CA. That is, how we behave to get to our goals.

Now, I laud the attempt made here. But I see a few things wrong with it. First, I'm sensing something that I see in game design a lot, which is a need to create symmetry to create a memorable model. That is, especially with Josh's terms, I think that he's used the thesaurus to discover the full cycle. Whether or not that full cycle exists, I think, has yet to be seen. His model looks like a feedback cycle, too, which may be yet another case of trying to make the model look like something familiar.

For instance, I think that a good argument could be made that goals don't really influence the SIS at all, especially if you accept Lumpley. That is, no matter what is being introduced into the SIS, it comes via system by definition. So I'm seeing a potential linear model where the loop is just a line like: Goal



And that's just without looking really closely at the particulars. Which is to say that I may be completely wrong at this point, but I'm very suspicious about the model here. I think that it may however be a very good starting point to looking at how CA is obtained. In Ron's model it's a black box - we decide to explore, we decide to do that in a certain way. No explanation of how that happens.

So I agree that if this is dissected properly that we might find some very useful concepts to be able to discuss precisely what the model intends to cover - the very basic sorts of problems that cause dysfunction. Again, I see this precisely as Ron's model, however, just a closer look. That is, CA problems cause dysfunction. This is just looking at how those problems form.

Or at least that's what I'm seeing.

homeydont AT hotmail DOT com


This makes BR go "There is a reason it looks like a feedback model"
Not because it was supposed to be familiar, but because it is supposed to be a feedback model. Josh's hypothesis is that if you do these things, it creates a self-reinforcing cycle that builds good play by bringing the elements in different players' heads closer together.

This makes BR go "Also, Goal and Imagined only do not link directly if you ignore the emotionality of the game"
Which, funnily enough, is something I took Josh to task for -- but also something I often find utterly lacking in the cerebral, functional-shell-model descriptions of GNS and Big Model theory.
