So I've been thinking about this a lot over the last several days. Actually I've been obsessively thinking about it non-stop and reading everything I can on related topics online.
I tend to do that for a different thing every week. This week, it's been this.
So there are a few pieces I've been focusing on that seem most crucial to the success of a programming or game development tool for artists. There are probably others, but I thought I'd share what I've been thinking about so far.
One is readability through R-mode perception.
First of all, a disclaimer: when I say "R-mode" versus "L-mode", as in "Right brain" or "Relational" or "Rtistic" versus "Left brain" or "Logical" or "Linear", I don't mean to suggest that the brain is really divided into strict differences between its physical right and left halves. That is an outdated belief. But I find the terminology to be a useful shorthand.
Thinking about Michael's comments on visual flowcharts being easier for him to read than linguistic code, and looking back on my own experience, I think there really is something significant about how the code is presented and perceived, even when the underlying logic is the same.
When I am reading code (or a book!) I am usually using what I call "L-mode" perception - going through in a linear, linguistic way, building up my mental model one step, one line of code at a time, following the logic that is expressed symbolically, in sequence.
However, sometimes my mind is in an "R-mode" of all-at-once, spatial perception like you'd use for looking at a painting or trying to find a certain LEGO piece in a big box of pieces. When I am in that state of mind, and I look at code (or to a lesser extent, written language) I see all the words at once and perceive the spatial relationships between them, and the underlying logic of the code is utterly incomprehensible to me. Obviously not the "right" way to read code.
But maybe it could be.
Ha, that would be a good tagline. "The right way to code."
The thing about R-mode perception is that it's a lot easier to be creative when you're in it. The other thing about R-mode perception is that artists are usually a lot more skilled at functioning in R-mode than they are in L-mode.
Therefore, if you had a tool that let you do programming while in R-mode rather than L-mode, it might be slightly easier to do creative things with it. At the least, there would be less inefficiency caused by switching between R-mode and L-mode whenever you think about what you want to change and then have to dive into the code to actually change it.
However, this may not even be possible.
All the visual programming editors I've seen, all the examples that have been posted here, require an uncomfortable mix of L-mode and R-mode perception in order to use. What I tend to see is a bunch of visually identical boxes connected by lines, and differentiated by text.
What you see in R-mode is the set of relationships between the boxes. But you can't tell what each of the boxes does. To do that, you must read the text and think symbolically, in L-mode. Really, very little information is conveyed through spatial relationships, through R-mode. Most of it is still sequential and symbolic.
For that reason, I find that pure written code, all L-mode, is much easier for me to deal with, since I don't need to switch around multiple times a second just to figure out what everything means. However, I suspect that there may be a way to create a pure R-mode method of programming too. But I'm not confident that it's actually possible. Just intrigued enough to try.
There are some programmers who hate the idea of visual programming, and say that it's a waste of time to use spatial relationships to convey the meaning of code. If you are one of those people and you use syntax coloring or indentation, you are a hypocrite.
So there's one aspect. Make sure your tool is R-mode accessible, if you want artists to be able to use it.
The second thing is building with functional pieces in real (or almost real) time.
Artists tend to appreciate tools where "what you see is what you get" - you're manipulating the end result, so you can immediately see the results of your actions. The process becomes more like sculpting.
Programmers tend to discount such tools as nice but unnecessary. They are used to typing in code for an hour, hitting a button, and waiting a minute for everything to compile and show up on the screen.
These are two fundamentally different mindsets, as different as a slideshow and an animation.
When you operate in the slow, "slideshow" approach, development and creativity tend to happen in an architected, "top-down" way. You have a plan, which is in your head, and then you put in a bunch of time and hard work to mold reality into the shape of that plan.
There is a fundamental shift that occurs as you decrease the time between action and result. It's as real as the shift that occurs when you hit 24 frames per second - from slideshow to animation. To your brain, it's alive, it's moving.
When you operate in the immediate, "animation" way, development tends to happen in a more exploratory, "bottom-up" way. You don't have to have an entire plan in your head. You see the results of your actions immediately, and if they are surprising or unexpected, you can adjust your plan. You can try random things and follow them if they prove to be interesting.
In the area of game design, innovation is much more likely to come out of an exploratory process than an architected one. As Jonathan Blow said earlier, it's hard to do things that haven't been done before if you have to plan it all out in your head first.
So we want a tool that allows us to sculpt the end result, with immediate feedback.
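Just to make that concrete, here's a rough sketch of what I mean by closing the loop - a toy Python snippet, not any real engine, where rules.py, update() and draw() are all made-up stand-ins. The only point is the shape of the loop: edits get picked up between frames, so the time from action to result is one frame instead of one compile.

```python
# A minimal sketch of the "animation" loop. Everything here is hypothetical:
# rules.py is an imagined module holding the game's current rules, with
# update(dt) and draw() functions defined by whoever is sculpting the game.
import importlib
import os
import time

import rules  # hypothetical module containing the live game rules

def live_loop(frame_time=1.0 / 24):  # 24 fps: the slideshow-to-animation threshold
    last_mtime = os.path.getmtime(rules.__file__)
    while True:
        # If the artist has saved a change, pick it up before the next frame.
        mtime = os.path.getmtime(rules.__file__)
        if mtime != last_mtime:
            try:
                importlib.reload(rules)   # new rules take effect immediately
                last_mtime = mtime
            except Exception as err:      # a broken edit shouldn't stop the world
                print("kept old rules:", err)
        rules.update(frame_time)
        rules.draw()
        time.sleep(frame_time)
```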
Part of this is that everything you can make should work. It may not work in the way you desire or expect, but it should still do something.
If you are painting with pixels in an art program, no matter how you put those pixels down on the screen, it will always be a functional, viewable image. It might not be pretty, but you can still see it. You are never going to run into an error message that says, "Invalid pixels at position 55, 46. Image cannot be displayed."
But if you are writing the code for a program, this sort of thing happens all the time. Most of the things you can type won't work at all. They won't turn into a program, even a broken one. There are right ways to write code, and wrong ways to write it.
I would say that this also makes a big difference. Perhaps the biggest difference is that writing code has a much higher barrier to entry: you have to spend more time learning how to do things at all before you can start learning how to do them well. But it also makes experimentation so much more difficult. You can't throw a bunch of random stuff together just to see what happens. Because what happens is nothing. It just won't do anything at all.
So if you could build only with pieces that work, and immediately see what changes, truly artistic interactive art would be much easier to create.
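To show what I mean by "everything works", here's a toy sketch - made up purely for illustration, not a real tool - of a tiny language where any string of words is a valid program. Unknown words just do nothing, the way stray pixels are still pixels.

```python
# A toy "language" where every possible input is a valid program.
# Known words move a cursor or make a mark; anything else is a no-op.
# There is no such thing as a syntax error, so random experimentation
# always produces *something* you can look at.
def run(source, width=20):
    x, canvas = 0, ["." for _ in range(width)]
    for word in source.split():
        if word == "right":
            x = min(x + 1, width - 1)
        elif word == "left":
            x = max(x - 1, 0)
        elif word == "mark":
            canvas[x] = "#"
        # any other word is simply ignored instead of raising an error
    return "".join(canvas)

print(run("right right mark banana right mark"))  # -> ..##................
```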
The last thing is expressing general logic through specific examples.
This is probably the most impossible and most revolutionary but least important of the three. If you just had a tool that you'd use in R-mode, that let you shape the end results with immediate feedback, that could be awesome, and probably enough to make a huge difference.
But at the same time I am intrigued by this further vision I have of providing specific examples, which the system will extrapolate into possible general rules for creating those examples, which you will then provide feedback on and refine in order to guide the system's hypotheses toward the end you have in mind.
Because I don't see how to actually avoid symbols when describing logic, or how to directly manipulate end results in a general way. Because games are systems, and the end results happen when you take the rules that you have set up and run them through their paces.
So maybe this is the only way to achieve those first two goals in their entirety.
What am I talking about?
You know how you draw diagrams and mockups for different things that happen in different situations in a game? Like this. It's a pretty common way to organize your thoughts when you're designing. The thing about those is they're all organized around specific examples, not general rules. So you might draw a diagram with a guy hitting a wall, showing how he bounces off or breaks through it or whatever. It's not completely specific, as you might have an abstract line standing in for any kind of wall, and a stick figure representing any kind of guy, but at the same time it's very concrete. And you can add general connotations by writing in little notes, to explain the rules behind the example more clearly.
The reason that we don't just stop there is that our game development tools require everything to be spelled out exactly - they cannot extrapolate from these examples, because there is so much ambiguity. It could mean this or it could mean that.
However, we run into a similar problem when trying to communicate our ideas to other people who are helping us make them into a reality. Especially if we are designers and we are telling programmers what to do. How do we solve this problem with other people?
Part of it is clarifying with more examples when an area is unclear - starting at the highest level and breaking things down into more specific situations when necessary. Another part is conversation, asking "It sounds like you're describing this... Is that right?" and responding "Yes, exactly!" or "No, I was thinking something more like this..."
Both of these could be accomplished with a special computer program instead of a human programmer. Maybe not as well, especially in terms of accuracy of translation, but in some ways better - particularly in the time between your description of a design and seeing something on the screen. And this increase in speed could make up for the lack of accuracy, since you can adjust and correct much more quickly. And as a result, make use of exploratory design instead of architecture.
Break dynamics down into stories instead of rules. A playthrough of the entire game could be an example story, and you could create example stories of successively smaller and smaller pieces of the game until you have specified it completely. Or completely enough.
The tool generates possible rules that could create the situations you specify, and presents several for you to try out. Most likely none of them work the way you want. Pick the one that's closest, and let it generate more possibilities based on that. It's an evolutionary search. Like Biomorphs.
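Roughly, I imagine the loop working something like this - a toy Python sketch, where a "rule" is just a bag of numbers, and render() and choose() are hypothetical stand-ins for showing a candidate's behavior and for the artist clicking on the one that's closest.

```python
# A minimal sketch of a Biomorphs-style interactive evolutionary search.
# Hypothetical stand-ins: a "rule" is a list of numeric parameters,
# render(i, rule) would show how candidate i behaves (e.g. play a short
# clip of the guy hitting the wall), and choose() returns the index the
# artist picked as closest to what they had in mind.
import random

def mutate(rule, amount=0.2):
    """Return a slightly varied copy of a candidate rule."""
    return [value + random.uniform(-amount, amount) for value in rule]

def evolve(seed_rule, render, choose, generations=10, brood=6):
    current = seed_rule
    for _ in range(generations):
        candidates = [mutate(current) for _ in range(brood)]
        for i, candidate in enumerate(candidates):
            render(i, candidate)        # show each candidate's behavior
        current = candidates[choose()]  # artist picks the closest one
    return current
```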
And stories don't always have to be specific stories about specific instances. They can be more or less abstract and universal. Like Raven, with a capital "R", who is both the character Raven and all ravens and all tricksters at once. Or the princess in the tower, or the wise old man, or the dragon in the cave. Or the stick figure on the crosswalk sign who represents all pedestrians who could ever walk this way. There is a continuum between the specific and the symbol.
I am particularly inspired by the concept of the Dreamtime. The translation of this name is misleading, as it does not refer to a time in history. It is like a parallel slice of the world running alongside and underneath the specific, physical world, where the gods and heroes walk, creating and personifying the dynamics and processes that underlie everything we see in reality.
I want a tool where I can not only shape the world as a level designer, but also shift into the Dreamtime and shape the dynamics of that world as concretely as I would shape the placement of coins and mushrooms.
When this happens, we will get our interactive art.
Update:
Just reposted on my blog...