Thoughts on Open Offices

There's been a lot of talk on Twitter lately (some of it pushed by me) about open offices, interruptions at work, and generally how to avoid being multitasked to the point of lost productivity (which is frequently what happens when you're interrupted and asked to fix too many things at once).

From an individual standpoint, there are things you can do to make sure you stay on task, but that's the subject of another post. For now, I want to focus on the external factors of interruption, and the negative impacts there. This is probably the first of several posts on the factors that lead to interruptions at the office.

First, let me talk about open offices. Many of us, especially at game companies, work in open offices. No one really knows why, except that it's less expensive to set up cubicles and tables than it is to build actual offices. The stated reason for open offices is that they foster team collaboration and information sharing, and that they work as a way to bring the team together. In an open office, people can (essentially) eavesdrop on each other's conversations and offer their input on creative or technical decisions, should they want to. The downside, though, is that everyone can hear your conversations, whether they want to or not.

"Open offices aren't a problem," you say. "If you don't want to be disturbed, just put on your headphones. Have a standing rule that you can't be disturbed if you have headphones on, and your music will drown out the office noise." But what if you're like me and don't like the feel of headphones (they push my glasses into the side of my head)? In addition, I can't think clearly about hard problems with music playing. I can only listen to music when I have a clear direction, or when I'm doing a less thought-intensive task (I'm sure many people are like me in that respect). If you have a "can't be disturbed" rule and you're actively encouraging people to escape from office chatter, how can you say that the open office actually encourages creativity and collaboration? Instead, it forces interruption unless people explicitly block it out.

Interestingly, I've not been able to find any studies showing that open offices foster collaboration or productivity, and neither could the authors of Peopleware, though they found numerous studies showing how interruptions and open offices hurt individual productivity. Though it isn't backed by science either, listen to Jason Fried's TED talk about open offices and why "work doesn't happen at work." Anecdotally, I'm sure you've experienced exactly what he's talking about. Fog Creek takes this to the extreme: everyone has an office and can't be interrupted when the door is closed. It also has a no-meeting culture, which I may write another blog post on. People communicate through private chat, and bugs and support requests must go through their bug tracker.

The problem is, I think an open floor plan does encourage communication in some ways. Not through eavesdropping, but by removing the psychological barriers between you and other people. I think people are more likely to come talk to you if you're sitting at a desk on an open floor than if you're sitting in an office. Additionally, being able to just poke your head up and ask a question feels less intrusive than having to walk into someone's office. But that's exactly the problem. If you do a Google search on the cost of even minor interruptions, you'll find that they can be extremely damaging to productivity, to stress levels, and to quality of work.

For me, 90% of these "micro meetings," questions, etc., are better handled somewhere that keeps a record, a place where questions can be asked just as easily, and more efficiently, and be easily ignored if someone is "in the zone" or doesn't want to be disturbed. Fog Creek recommends HipChat (which is what Fire Hose uses), but 37signals' own Campfire also gets good reviews. Both have the added benefit that, if you have remote workers, they're less likely to be excluded from the decision-making process, provided your team is good about using chat for minor / micro discussions instead of open office eavesdropping.

What do other people think? Am I missing a benefit of open office plans (other than cost)? Do people feel that the interruptions of an open office aren't as bad as I make them out to be?

Transparent Persistence

I'm in the process of refactoring some code for Go Home Dinosaurs and I've run into an interesting problem.

Because GHD was originally designed to be a microtransaction-based game, and because we wanted to have (essentially) cloud saves, it uses web services to keep track of player progress, coin totals, purchases, inventory, etc. Now we want portions of this to be pushed into local save data. Because the schedule on GHD was so tight, there's a ton of code that just assumes the server will be there, and it also (correctly) assumes that we want to update player information incrementally on the server. It's therefore filled with calls like this:

    GameDataServer.SendServerMessage("updateField_request", data);

    // in some other portion of code
    void HandleMessage(Message msg)
    {
        switch (msg.Type)
        {
            case "updateField_response":
                // actually update the field with the new value
                field = msg.Data.NewValue;
                break;
        }
    }

The problem is that this doesn't really fit with an offline mode. In order to combat this, one of my colleagues wrote a "fake response" method which, if there's no server to talk to, queues a fake "success" message for every call. While this was certainly the fastest, most straightforward way to get an "offline" mode working quickly, I'm not a huge fan. It's too coupled to the idea that there's a server between us and the persistence store. It also means that the server's responses and the "fake" responses need to be kept in sync, and I'm never a fan of "if you change this, make sure to change that" code. It's way too likely to break.

But how to combat this? I'm not sure I have an answer for this, given the current design. All I can do is outline what might work, and what I want.

Generally, what I want is transparent persistence. I don't want to have to understand *how* the platform wants to persist things. I want to edit my data, say "persist," and have the platform figure out the best way to do it.

The first step here, I think, is to take a lesson from MMOs and stop trying to cache unconfirmed changes. Most MMOs make the change locally and send a message to the server requesting that the change be made. For persistence, successful calls are generally ignored, because we know that what the server has and what the client has are now in sync (or were in sync when we sent the message). It's only in the case of an error that things have to be handled. Generally, the correct response then is to roll back whatever transaction was made on the client and hope the player doesn't notice (or present an error message). Since the server can (and should) return its version of the variable in the error response, you can remove a lot of special-case caching code that can easily get mucked up (or start performing double duty, which is the case with at least one of our caching variables). It also means that platforms that always succeed in changing data don't have to send fake "success" messages. Only the (hopefully rare) errors in server calls need to be handled.
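To make that concrete, here's a minimal sketch of the pattern. The names here (`CoinTotal`, `ServerReply`, the `send` callback) are my own illustrations, not actual GHD code: make the change locally, fire off the request, and only touch the local value again if the server reports an error, using the authoritative value the server sends back.

```cpp
#include <functional>

// What the server sends back; the authoritative value only matters on error.
struct ServerReply {
    bool ok;
    int serverValue;  // server's version of the field, used when !ok
};

class CoinTotal {
public:
    explicit CoinTotal(int coins) : mCoins(coins) {}

    // 'send' stands in for the real server call.
    void AddCoins(int amount, const std::function<ServerReply(int)>& send) {
        mCoins += amount;                // optimistic local change
        ServerReply reply = send(amount);
        if (!reply.ok) {
            mCoins = reply.serverValue;  // roll back to the server's version
        }
        // On success there is nothing to do: client and server agree,
        // and an always-succeeding local platform never needs to fake it.
    }

    int Coins() const { return mCoins; }

private:
    int mCoins;
};
```

A local-save platform would simply always return `ok`, so the error path never runs and no fake "success" messages are needed.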

The second step is to start keeping track of "dirty" fields and updating them as needed (whenever persist gets called). However, our current server code (which I didn't write, and some of which I'm trying to keep intact for a bit) assumes (again, correctly) an update for every single transaction. It's hard to say "the player has x coins." Instead, the server asks to be told how many coins the player gained. You can't send the entire inventory; you have to send "I'm adding / removing the following item from the inventory," or "I'm purchasing the following item and it should be added to the inventory." These are common client-server interactions, even in MMOs, because they mean the server can actually confirm that the logic you're performing matches what it expects, but they make transparent persistence very difficult.

The only way I can think to do this even remotely well is to push each update to the player's data through a platform intermediary. Platforms that persist the data locally ignore incremental updates and just save all the data once a full "persist" call is made. Platforms that persist the data remotely do the opposite: they push each update to the server, roll back errors, and either ignore requests for full persistence or send them only to confirm that the local and remote versions are still in sync.
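Here's a rough sketch of what that intermediary might look like. All of the class and method names are hypothetical, not actual GHD code; the point is just that each backend sees both the incremental updates and the full persist call, and acts on whichever one suits it.

```cpp
#include <map>
#include <string>

// Game code reports every incremental change plus occasional full persists;
// each platform backend decides which of the two it cares about.
class PersistenceBackend {
public:
    virtual ~PersistenceBackend() {}
    // Called for every incremental update ("I gained 5 coins").
    virtual void OnFieldChanged(const std::string& field, int newValue) = 0;
    // Called when the game asks for a full "persist".
    virtual void Persist(const std::map<std::string, int>& allData) = 0;
};

// Local saves ignore the increments and write everything on Persist().
class LocalSaveBackend : public PersistenceBackend {
public:
    void OnFieldChanged(const std::string&, int) override {}  // no-op
    void Persist(const std::map<std::string, int>& allData) override {
        savedData = allData;  // stands in for writing the save file
    }
    std::map<std::string, int> savedData;
};

// A server backend does the opposite: push each update as it happens
// (rolling back on errors, as above) and treat Persist() as a no-op
// or as a consistency check.
class ServerBackend : public PersistenceBackend {
public:
    void OnFieldChanged(const std::string& field, int newValue) override {
        ++messagesSent;               // stands in for a server message
        serverData[field] = newValue;
    }
    void Persist(const std::map<std::string, int>&) override {}  // ignored
    int messagesSent = 0;
    std::map<std::string, int> serverData;
};
```

The game code talks only to `PersistenceBackend`, so it never has to know whether the persistence store is a save file or a web service.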

Resources from the Professional Programmer’s Panel

I got an email recently asking about the resources we went over in our GDC 2012 panel this year. I figured it would be useful to have these in one place, so here we go!

Game development sites mentioned

Blogs:

Books:

  • Code by Charles Petzold

Mike also recommended (several times) that you get on Twitter (http://www.twitter.com), which is a piece of advice I wholeheartedly agree with. Then you can follow all of the panelists.

When we were talking about game jams, we mentioned some longer game jams, including the Experimental Gameplay Project, Ludum Dare, and the competitions held on TIGSource.

If you have other resources for beginning game programmers, let me know and I'll add them to the list.

Finding D3D9 Leaks

I ran into this problem working on a DirectX 9 application recently, and found the solution useful enough that I thought I'd share. We were running into an issue where returning from an alt-tab was not recreating the device properly. This is usually caused by a D3DPOOL_DEFAULT asset not being released (and later recreated) before attempting to recover the lost device. As a result, you get this message:

Direct3D9: (ERROR) :All user created D3DPOOL_DEFAULT surfaces must be freed before ResetEx can succeed. ResetEx Fails.
Direct3D9: (ERROR) :ResetEx failed and ResetEx/TestCooperativeLevel/Release are the only legal APIs to be called subsequently

One additional thing that was throwing me was this error message:

Direct3D9: :Window has been subclassed; cannot restore!

This can probably safely be ignored, and may be a symptom of the back buffer leaking specifically, but generally this whole error set is a sign of leaking D3D objects.

If the leak is something innocuous, like a single texture, you can sometimes figure it out by simply enabling the DirectX debug libraries, then running and exiting your application, which will give you the AllocId of the leaking asset, which you can then break on. However, a large number of leaks indicates that something lower level may be leaking, potentially a reference to something like the back buffer (which was our case). How do you find it?

Here's what I ended up having to do.

First, we did what we need to do for every bug: reduce it to the fewest reproduction steps possible.

Next, run PIX and take a full capture until you've gotten to the point of the leak, then close the application. If you have a lot of steps to reproduce, you'll want to make sure you've got a lot of hard drive space and some time on your hands, because it can take a while.

Once you have your full PIX run you can look at this column:

And set it to Never. This will show you every D3D object that's leaking, regardless of how many AllocIds it has (interesting tidbit: a single D3D object can allocate multiple AllocIds). If you move to the last frame (and wait for PIX to catch up, which, again, can take a while), you can even see how many references exist during shutdown. You could even walk each frame and see when the reference counts of your leaking assets go up.

In our case, like I said, we were leaking a reference to the back buffer. The obvious step was to look for calls to IDirect3DDevice9::GetBackBuffer, but none of those were leaking. Less obvious was to look for calls to IDirect3DDevice9::GetDepthStencilSurface and IDirect3DDevice9::GetRenderTarget(0). In our case, a call to GetRenderTarget was leaking as a result of a short circuit in a logic check, causing the back buffer reference to leak in a very specific case.
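To illustrate the bug pattern (this is a mock, not our actual rendering code): the Get call bumps the reference count, and a short-circuited condition can skip the Release that's supposed to balance it.

```cpp
// Mock of a COM-style object: every Get AddRefs it, and every AddRef
// must be balanced by a Release or the object leaks.
struct MockSurface {
    int refCount = 0;
    void AddRef() { ++refCount; }
    void Release() { --refCount; }
};

// Stands in for IDirect3DDevice9::GetRenderTarget, which AddRefs the
// surface it hands back.
void GetRenderTarget(MockSurface& backBuffer, MockSurface** out) {
    backBuffer.AddRef();
    *out = &backBuffer;
}

// Buggy version: when 'condition' is false, the && short-circuits and
// the Release never runs, leaking one reference per call.
void CheckTargetBuggy(MockSurface& backBuffer, bool condition) {
    MockSurface* target = nullptr;
    GetRenderTarget(backBuffer, &target);
    if (condition && target != nullptr) {
        target->Release();
    }
}

// Fixed version: the Release is unconditional once the Get succeeded;
// the logic check no longer guards it.
void CheckTargetFixed(MockSurface& backBuffer, bool condition) {
    (void)condition;
    MockSurface* target = nullptr;
    GetRenderTarget(backBuffer, &target);
    if (target != nullptr) {
        target->Release();
    }
}
```

In PIX, this shows up exactly as described above: the back buffer's reference count creeps up on the frames where the condition happens to be false.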

Granted, this kind of leak does not pose a lost-device recovery problem in DirectX 10 or DirectX 11, but since many engines are still DX9 based, I thought I would share.

Debugging Embedded Mono

It's been announced that Go Home Dinosaurs will be released using Google's Native Client technology later this year, but what isn't as well known is that we're using Mono / C# as our "higher level" logic and scripting system. Generally, I'm a huge fan of C# as a language, and the performance benefits just seal the deal for me. The one issue we have right now, though, is debugging embedded Mono, especially in conjunction with the game running under Visual Studio. There aren't very many instructions on how to do this online, so I thought I'd share our experience.

There are two very weird issues when debugging embedded Mono. First, it currently works exactly opposite to how most debuggers work: in most soft debuggers, the application starts the server and the debugger connects to it, but in Mono soft debugging, the debugger creates the server and the client connects to it. Second, the Mono soft debugger uses signals / exceptions to halt threads. This means that you can't debug with both Visual Studio and MonoDevelop at the same time, because Visual Studio will pull the signals before they get to the application.

There are changes in the works from the Unity team that fix both of these issues. However, they are not currently part of a stable Mono release (or in Elijah's version for Native Client). Once they are, I may revisit this whole article, but that's where we are right now.

So, with all of that in mind, how do you get this working right now?

Set Up MonoDevelop

First up, grab MonoDevelop.

Second up, you will want to add an environment variable to your system called MONODEVELOP_SDB_TEST and set it to 1. This gives you access to the following menu item when you have a solution loaded:

Which brings up this dialog:

Now, for some reason, I've never been able to get MonoDevelop to launch the application directly. It always fails attempting to redirect standard out, and then immediately crashes (I could look at the code to figure out why, but I haven't had a chance to). We'll come back to this dialog in a minute.

Prepare Your Code

There are a few things you need to add to your code to get debugging to work. The first few lines can occur at initialization, regardless of whether you're debugging or not:

mono_debug_init(MONO_DEBUG_FORMAT_MONO);
mono_debug_domain_create(mMonoDomain);

These should be done before you load any assemblies.

In addition, if you're running Mono from a background thread (like we are), you need to detach the foreground thread after initialization. When Mono hits a breakpoint, it attempts to halt all attached threads, and it can only halt a thread while that thread is executing Mono code. If your main thread never executes any Mono code, it never halts, which means debugging doesn't work. This is good practice regardless, because if the GC needs to halt all of your threads, it will also fail if a non-Mono-executing thread is still attached.
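The detach itself is only a couple of calls. This is a sketch rather than our exact code, and it assumes the standard embedding API entry points (`mono_thread_current`, `mono_thread_detach`, `mono_thread_attach`):

```cpp
// After initialization on the main thread, detach it so Mono never
// waits on a thread that won't run managed code again.
mono_thread_detach(mono_thread_current());

// The background thread that actually runs managed code attaches
// itself to the root domain before calling into Mono.
mono_thread_attach(mono_get_root_domain());
```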

Last thing you'll need to do in code is set up the debugger. Here's the code:

    bool isDebugging = AresApplication::Instance()->IsDebuggingMono();
    if (isDebugging)
    {
        const char* options[] = {
            "--debugger-agent=transport=dt_socket,address=127.0.0.1:10000"
        };
        mono_jit_parse_options(1, (char**)options);
    }

This needs to be combined with any other options you may have (I only have one other, which is --soft-breakpoints), and should only be done if you want to start debugging. If the Mono client can't connect to the debugging server when it starts up, it will call exit, which is no good.

Lastly, if you're using the Microsoft C# compiler instead of the Mono one, you'll need to generate .mdb files from all of your .pdb files. Thankfully, there's a utility for this. We now perform this step as a post-build event, which keeps the .mdb files up to date with the latest code.
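As an illustration (the exact path to the utility depends on your Mono install, so treat this as an assumption to adjust), the post-build event can be as simple as running pdb2mdb on the build output:

```
REM Visual Studio post-build event: regenerate the .mdb next to the .dll
pdb2mdb "$(TargetPath)"
```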

Put it all together

Alright, so let's get this party started!

Open MonoDevelop. Set some breakpoints, then open this dialog again.

Don't change anything, and click "Listen".

Now start your application outside of Visual Studio with whatever command line arguments you chose to signal to your app you want to debug in Mono.

Everything should now connect and you should be debugging Mono in your app!

Let me know if you have any issues!

Post GDC 2012 Post

I'm back from GDC 2012, my 11th straight GDC, and I'm almost fully recovered. This year was certainly better than the last two, but it doesn't take much to top getting minor food poisoning and getting (almost) deathly ill (I still can't top Darius who, three years ago, entirely lost his voice). I have yet to find that balance of parties and not-parties, and when to just go hang out at the various hotel bars. I think I did better this year, but I still don't have it quite down.

This year, the high points were definitely the two events I ran. The first was the awesome Pre-GDC board game bash, which hosted about 130 people this year. This is always a lot of fun, and way more low-key than most other GDC parties, which is why I like it. We're growing again next year, probably to 150+ people, but I'm also looking for a bigger room with more tables (table space was certainly a problem this year).

In addition to that awesome event, this year I organized a private scotch tasting. This was really, really awesome. So awesome, in fact, that I don't have pictures of it. It was another great, low-key, and educational event. Everyone there was really awesome, and I hope everyone had a great time. We'll see if it happens next year, and whether it grows or not.

So what's up for next year? First, the board game night will certainly be running again. I've already started putting out requests for space the Sunday before GDC (and already been turned down once o_O!). Other events? Maybe the scotch tasting, and potentially a Black Tie Only reception. These last two require a good amount of planning and funding, so they'll probably be maybes until closer to GDC.

What else? I'll be posting more! Fire Hose announced that its next game (and the game I'm working on), Go Home Dinosaurs, will be shipping on the Chrome Web Store using Native Client. I haven't been able to talk about this too much until now, but now that I can, I'll be posting a bit more on my experiences with Native Client and with Mono (which is what we're using for scripting). So keep an eye out for that!

GGJ 2012: Eat Sheep (and Die)

This weekend I participated in the Global Game Jam for the third time (after taking a hiatus in 2011 to go speak at a conference overseas). This year, I was at the UVA site and took part in a three-person team with my brother-in-law and an awesome UVA student. The result was an interesting two-player competitive game called Eat Sheep (and Die).

This jam was way less stressful than any other jam I've participated in. We essentially had something fun to play by the middle of day 2, and just polished things through the end of day 2 and day 3. As a result, this is probably the most complete jam game I've ever worked on. Very little went wrong. It was great.

So what went right?

  1. AngelXNA – The power of Angel combined with the power of XNA! This was the third time I've used Angel in a jam, and it's improved every time. We were able to get levels into the game quickly, spawn actors, and do all sorts of things quickly and easily. It actually worked really well. Again.
  2. Abandoning Design – The original point of the game was to have a cyclic back and forth of helping and hurting the other player. On your turn, you would have your goal, and a directive to help or hurt your opponent. On the next turn, you would be given a bonus based on how well or poorly your opponent did. At some point, we realized there was never any reason to help your opponent. It just wasn't worth it. So we abandoned the helping, turn based component, and went with real time versus. This makes it questionable whether we obeyed the theme, but whatever. The game was much better as a result.

What went wrong?

  1. AngelXNA? – So, the one thing I don't like about Angel and XNA is that it presents a huge barrier to entry for people playing the game after the jam. I know I don't bother playing a GGJ game that requires a download unless it comes VERY highly recommended. I will, however, look at pretty much every HTML5 or Flash game. That said, Angel is awesome, so maybe I have to figure out a way to fix this. Native Client maybe?

That's it. Not too many lessons this time out because everything went so well. It's a weird feeling. Really looking forward to next year!

Updates to AngelXNA

One of the tips I gave in my recent talk is to have a tool for GGJ. My tool of choice is AngelXNA, because it combines the power of XNA with the power of Angel. When I was demoing it to the other attendees, I realized I hadn't added a few demo screens that are really necessary.

To that end, I've added two demo screens showcasing two features that are in AngelXNA but aren't in Angel. The first is the Editor. While I had a problem with the Editor in the last GGJ, Fire Hose has used it a lot for prototyping, and I think we've worked out most of the kinks. It's not perfect, but it does make creating levels in Angel super easy. There are shortcuts for cloning, scaling, and rotating actors, which make the level editing process super quick.

The second feature has several aspects, but comes down to the ability to use sprite sheets and to create named, data-driven animations. There are two parts to this. The first is the tiling system. If you add a .tileinfo file parallel to any given image file, Angel will automatically start rendering Actors that use that image as tiled. Here's an example .tileinfo:

IsAnim = true
FrameSize = Vector2(160, 200)
AnimDirection = TiledAnimDirection.Horizontal
StartFrame = Vector2(0, 0)
EndFrame = Vector2(0, 2)

It specifies that this file contains an animation, that each frame is 160x200, that the animation frames run left to right, and that the animation starts at (0, 0) and ends at (0, 2). You can also specify a non-anim and a specific tile to render.

Once you have these .tileinfos specified, you can specify animation files. Here's the example anim file:

BeginAnimation("Idle", TiledAnimRenderPath.Create())
    AnimDelay = 0.03
    File = "images/devo/devo_idle_strip"
    AnimType = AnimationType.Loop
EndAnimation()

BeginAnimation("Run", TiledAnimRenderPath.Create())
    AnimDelay = 0.03
    File = "images/devo/devo_run_strip"
    AnimType = AnimationType.Loop
EndAnimation()

BeginAnimation("Fall", TiledAnimRenderPath.Create())
    AnimDelay = 0.03
    File = "images/devo/devo_fall_strip"
    AnimType = AnimationType.OneShot
EndAnimation()

This allows you to call just Actor.PlayAnimation("Idle") or Actor.PlayAnimation("Run").

This is still a bit more complicated than I'd like, but it does work really well. The only thing I'd like to improve is the specification of the render path. AngelXNA generally selects the correct render path automatically, but the data-driving system, specifically the animation system, isn't detecting its render path. I can probably fix this, which I'll hopefully do after GGJ.

The Up and Coming

Update: Better late than never, my talk is now on my site.

Happy New Year!

For those that don't know, in May (or so) I moved from Boston to Charlottesville, VA. Thankfully, the awesome people at Fire Hose have allowed me to stay with them and work remotely, so I haven't moved companies. I have, however, moved communities.

Charlottesville isn't a huge center for game development, but I think it could be a great place for game companies in the future. There's a lot of talent to be had from the neighboring universities (UVA, JMU, GMU, Virginia Tech, and William and Mary are all fairly close), and it's just a great place to live. It's not a big city, but that's actually what I like about it.

To try to help this along, I've created the Charlottesville Game Developers meet-up group. We've had a few small meetings so far, with mostly me talking, but coming up over the next few months, we're going to have some great talks about starting companies, working with the city of Charlottesville, and maybe more! (By the way, if you'd like to come visit Charlottesville and give a talk, let me know and we'll work something out).

Tonight, I'm giving a talk on tips for game jams. Since UVA has been nice enough (at my urging) to host a Global Game Jam site this year, I'm making sure everyone comes in prepared. Most of the tips come from my previous post-mortems of game jams, but there will be some new stuff in there. I'll try to post the slides tomorrow as well, so everyone can get the benefit.

Anyway, if you're in the Charlottesville area, come out to our meetings! I'm really looking forward to seeing what the Charlottesville community can produce over the next year.