Debugging Embedded Mono

It's been announced that Go Home Dinosaurs will be released using Google's Native Client technology later this year, but what isn't as widely known is that we're using Mono / C# as our "higher level" logic and scripting system. Generally, I'm a huge fan of C# as a language, and the performance benefits just seal the deal for me. The one issue we have right now, though, is debugging embedded Mono, especially in conjunction with the game running in Visual Studio. There aren't very many instructions on how to do this online, so I thought I'd share our experience.

There are two very weird issues when debugging embedded Mono. First, it currently works exactly opposite of how most debuggers work: in most soft debugging setups, the application starts the server and the debugger connects to it, but in Mono soft debugging, the debugger creates the server and the client connects to it. Second, the Mono soft debugger uses signals / exceptions to halt threads. This means you can't debug with both Visual Studio and MonoDevelop at the same time, because Visual Studio will pull the signals before they ever get to the application.

There are changes in the works from the Unity team that fix both of these issues, but they aren't yet part of a stable Mono release (or of Elijah's version for Native Client). Once they are, I may revisit this whole article, but that's where things stand right now.

So, with all of that in mind, how do you get this working right now?

Set Up MonoDevelop

First up, grab MonoDevelop.

Second up, you will want to add an environment variable to your system called MONODEVELOP_SDB_TEST and set it to 1. This gives you access to the following menu item when you have a solution loaded:

Which brings up this dialog:

Now, for some reason, I've never been able to get MonoDevelop to launch the application directly. It always fails attempting to redirect standard out, and then immediately crashes (I could look at the code to figure out why this is, but I haven't had a chance to). We'll come back to this dialog in a minute.

Prepare Your Code

There are a few things you need to add to your code to get debugging to work. The first few lines can occur at initialization, regardless of whether you're debugging or not:

mono_debug_init(MONO_DEBUG_FORMAT_MONO);
mono_debug_domain_create(mMonoDomain);

These should be done before you load any assemblies.
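
To put those calls in context, here's a minimal sketch of the ordering that worked for us ("Ares" and "Game.dll" are stand-ins for your own domain name and assemblies):

     // Enable support for .mdb symbol files; in our setup this happens
     // before mono_jit_init.
     mono_debug_init(MONO_DEBUG_FORMAT_MONO);

     // Create the root domain as usual.
     mMonoDomain = mono_jit_init("Ares");

     // Register the domain with the debugging machinery.
     mono_debug_domain_create(mMonoDomain);

     // Only now load assemblies, so the debugger picks up their symbols.
     MonoAssembly* assembly = mono_domain_assembly_open(mMonoDomain, "Game.dll");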

In addition, if you're running Mono from a background thread (like we are) rather than your foreground thread, you need to detach the foreground thread after initialization. This is because when Mono hits a breakpoint it attempts to halt all attached threads, and it can only halt a thread while that thread is executing Mono code. If your main thread never executes any Mono code, it never halts, which means debugging doesn't work. This is good practice regardless: if the GC needs to halt all of your threads, it will also fail if a non-Mono-executing thread is still attached.
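
A minimal sketch of that detach, assuming initialization just finished on the foreground thread:

     // The foreground thread ran initialization, so Mono considers it attached.
     // Detach it so breakpoints (and the GC) aren't left waiting on a thread
     // that will never execute Mono code.
     mono_thread_detach(mono_thread_current());

The background thread that actually runs your scripts should do the opposite, calling mono_thread_attach(mMonoDomain) before it touches Mono.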

Last thing you'll need to do in code is set up the debugger. Here's the code:

     bool isDebugging = AresApplication::Instance()->IsDebuggingMono();
     if (isDebugging)
     {
          const char* options[] = {
               "--debugger-agent=transport=dt_socket,address=127.0.0.1:10000"
          };
          mono_jit_parse_options(1, (char**)options);
     }

This needs to be combined with any other options you may have (I only have one other, which is --soft-breakpoints), and should only be done if you want to start debugging, hence the isDebugging check above. If the Mono client can't connect to the debugging server when it starts up, it will call exit, which is no good.
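
For what it's worth, combining both of our options looks like this:

     const char* options[] = {
          "--soft-breakpoints",
          "--debugger-agent=transport=dt_socket,address=127.0.0.1:10000"
     };
     // The first argument must match the number of option strings.
     mono_jit_parse_options(2, (char**)options);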

Lastly, if you're using the Microsoft C# compiler instead of the Mono one, you'll need to generate mdbs from all of your pdbs. Thankfully, there's a utility for this. We actually now perform this step as a post-build event, which keeps the mdbs up to date with the latest code.
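
As a sketch, assuming the pdb2mdb utility that ships with Mono and Visual Studio's $(TargetPath) macro, the post-build event can be a one-liner:

     REM Post-build event: emit an .mdb next to the freshly built assembly
     pdb2mdb.exe "$(TargetPath)"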

Put It All Together

Alright, so let's get this party started!

Open MonoDevelop. Set some breakpoints, then open this dialog again.

Don't change anything, and click "Listen".

Now start your application outside of Visual Studio with whatever command line arguments you chose to signal to your app you want to debug in Mono.

Everything should now connect and you should be debugging Mono in your app!

Let me know if you have any issues!

Jamming Post Mortem 2010 Edition

Last weekend, I took part in the Global Game Jam, like I did last year, and let me say it was just as fun, if not MORE fun, this year than last. Our game, Quest for Stick, was really, really awesome this year, and you can learn more about it from the GGJ page and from our Twitter account. We even have a video of a complete playthrough of the game. The game is super pretty, only a little bit buggy, and generally I think it accomplished everything we wanted.

But this year I went in knowing what to expect. How'd I do this year? What did I learn?

What Went Right

  1. The Team: Last year, I said one of the things that went right was having a team. This year, that was even more true. We had a total of 7 people working full time on the game, which, initially, I thought was way too many. So much so that I actually asked people to leave the team, and considered leaving it myself, to reduce the number of people. But, when it came down to it, I decided I wanted to work on the game idea and went with the other 6 people to create the game. Honestly, 7 people may still have been too many, as communication and tasking did get hard near the end of the project, but there's no way the game would have been anything near what it was if we hadn't had at least that many people. Everyone was basically tasked the whole time, and the game came out great because of it.
  2. Getting Down To Business: We spent very little time talking design this time, which worked out to our advantage. Although we spent a lot of time later arguing about how exactly the game was going to play, it didn't take away from everyone working, which was good. We got down to making something playable quickly, and didn't try to design too much stuff up front.
  3. Tools Choice: Last year, I was super happy with XNA. This year, the team used AngelXNA, even though I was the only one at all familiar with it, and really the only one well versed in XNA. Even though I spent a lot of time helping people understand Angel / XNA, it was still, by far, better than attempting to use only XNA. It performed a lot of the heavy lifting for us in terms of doing animations, placing and managing actors, and, surprisingly, editing levels, though the editor gets its own bullet point below.

What Went Wrong

  1. Unclear tasking: Occasionally, we got duplicated work or weird moments of downtime because, like most game jams, people just shouted out things they needed. Kate was really the only person keeping track of most of these tasks, and really only for herself. For the artists, no one was really in charge of knowing what art was still needed and who was doing what. For a team this large, what we needed was to create and consult a list on a whiteboard or cork board of all asset requests: who was potentially doing them, what was in progress, and what was up for grabs. This would have avoided duplicated work and would have given us an idea of how much work was left.
  2. Late Playable: Despite my work to prevent this (more on this later), we still didn't end up with an actual playable game until midday on the last day. Just having *SOMETHING* sometime on Saturday to hand to the artists and designers to make levels with would have helped. We had lots of pieces that essentially worked, but didn't get them integrated together fast enough.
  3. Encapsulation problems: We had three programmers working together on individual parts of the game, which not only kept everyone tasked without stomping on each other, but also meant people were in charge of very small systems. However, some of the systems were weirdly encapsulated, and required copying and pasting when we actually got to the point of integrating. Though this actually ended up *helping* at the very end, I would have liked fewer instances where I had to copy and paste code from one class to another in order to integrate a new system into the main game.

What Didn't Work

These things didn't exactly go wrong; they're just things I was hoping would help us during the jam, but didn't.

  1. The Simple CI: Before the Jam, I wrote a simple Python script that would query a Mercurial repository, pull down new code, build it, copy it up to a network location, then message everyone over gchat. This was awesome in theory, but not so much in practice, for a few reasons. First, the messaging was fragile: if anyone was signed out of gchat when the script went to message them, the CI would get stuck in an endless loop. Second, the network drive would occasionally flake out and not be able to take the new build. Third, we didn't have anything the team could play until Sunday, so the CI ended up being useless until then.
  2. The Angel Editor: The editor in Angel was an awesome idea, but when we got to the Jam, it was buggy and untested. It didn't save things out correctly, crashed, spawned items in weird places, and didn't work at all with our custom actors. In addition, the editor saved all levels out to the build directory, which was great for everyone but the people who were using it. Besides fixing the other editor bugs, in the future the editor will probably need to detect whether a debugger is attached and figure out where to put the levels from there, or save to a custom levels folder that can easily be moved back and forth into an integratable build.

All in all, an awesome Jam. Please play Quest for Stick, and let me know what you think. I'm super proud of it.

On Distributed Version Control

So I had a conversation last night with my good friend Steve about my decision to start using Mercurial. Talking to him, I realized I hadn't really posted much on what I think about distributed version control systems (DVCS), so the switch to Mercurial may have taken many people off guard. So, I wanted to spend a post (maybe two) talking about what I see as the advantages of DVCS over a traditional "central server" (VCS) mentality.

The Leap of Faith

Like many people, the idea of version control without a central server scared me. I don't trust my own machine, or my developers' machines, to be in any way fault tolerant. My server, on the other hand, has a RAID 5 controller in it, and is backed up off-site weekly (though if I were checking in more I'd probably back up nightly). Having that central server keep track of my changes feels safer, so I shied away from DVCS, opting instead for centralized version control with Subversion with user branches (more on that here). Even with tools like svnmerge, though, Subversion utterly fails at merge tracking, especially merging from multiple branches bidirectionally. It just was never designed to do that. Since then, I've watched several videos on the design of distributed version control (two on Git (here and here), one on Mercurial), and I made the leap of faith to see what DVCS is like.

The Centralized Model, Distributed

Now, distributed version control is amazing for open source projects, but what about working in a company where team communication and process is key? Where coordination is a requirement? Well, the nice thing about distributed version control is that it can, if you need it to, support a centralized model. Even in open source projects, you have what's called an "upstream" server, which is what everyone consults for the latest and greatest "official" changes to a product. You could, if you wanted to, push every commit you made to the upstream server, and pull every so often to make sure you have the latest version. In that case it would be exactly like using centralized version control (except that it takes more hard drive space, since you're holding a whole repository on your hard drive, not just the source code). Sure, you have to train people to commit, then push, but once they understand that, there's no issue. In addition, your centralized version of your tree can have pre/post-commit hooks just like any other version control system, allowing you to create check-in gauntlets should the need arise. That said, I'm sure other source control providers make it easier (the new Source Safe and Perforce, I believe, have GUIs for creating check-in gauntlets, but I could be wrong). Still, with an easily extended system (like Mercurial), these additions wouldn't be hard.
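
Concretely, the day-to-day Mercurial loop in that model looks something like this (the upstream URL is made up for illustration):

     hg pull https://hg.example.com/game     # grab the latest "official" changes
     hg update                               # bring the working copy up to date
     # ... hack, hack, hack ...
     hg commit -m "Fix the jump physics"     # commit locally first
     hg push https://hg.example.com/game     # then publish upstream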

Encouraging Experimentation and Checkpoints

So, in this case, why use a DVCS if it's just going to work like centralized version control? Well, even in this situation, you now have more options available to you than you did before. First is the ability to checkpoint whenever you want, without affecting other developers. The key reason most people give for using version control is the Chinese proverb "The palest ink is better than the best memory." So why not increase your memory by encouraging your developers to commit as often as possible? And why not let them do it even when their build is broken, or when they're not completely finished with something? In a centralized model, this is almost impossible (though packing apparently solves this partially). Then, when a developer is done, their pushed work can carry all of the changes they just made, with a history of what they did. Of course, it doesn't have to (rebasing is always an option), but having it there can help you see potential problems or flaws in the thought process.

In addition, distributed version control can encourage experimentation in branches. The problem I have had with branches in the past is that they are a pain in the ass to manage on a centralized server. Even Subversion, which makes branching easy, doesn't manage merging well at all, especially bi-directional merging. I've even had problems with Perforce merging in the past. Regardless of that, though, working off of branches and creating branches is never easy. Not as easy as it is in distributed version control environments anyway. In Mercurial, branches are just clones of the current repository. You can work in them, share them with others, and delete them without the central server being involved. This may sound bad, but it does encourage developer experimentation and sharing. If I can branch simply off of whatever my current state is, experiment a bit, show that change to other developers and get feedback, then merge back into my main development branch easily, that opens up a lot of opportunities for small team experimental work.
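
For example, the whole experiment workflow can be nothing more than a local clone (the paths here are illustrative):

     hg clone game game-experiment      # "branching" is just cloning
     cd game-experiment
     # ... experiment, committing as often as you like ...
     cd ../game
     hg pull ../game-experiment         # liked the result? pull it back
     hg merge                           # merge it into your main line
     hg commit -m "Merge experiment"

And if the experiment turns out to be a dead end, you just delete the directory; the central server never even knew it existed.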

Here's an example of where I could have used this in the past. One of my former companies did not do and did not encourage unit testing, and I could not convince my coworkers (or the management) to let me try it in some of our newer libraries. I decided, though, to do unit testing on any new modules I wrote anyway. I made a copy of the library into a directory that had a unit test environment set up, worked on the library (writing new tests, making them pass, etc), then copied my changes back over when I was done, and committed them to the central server. The unit test folder, though, was completely unversioned. If I'd had a DVCS, I could have cloned the library, done my work, then pulled changes back, leaving the unit tests versioned in their own folder. In addition, any other developer that was also working on those libraries could have easily pulled the unit tests from me, thus spreading a new practice at the company. This may be an edge case, but it is an example where simple branching and merging would have encouraged experimentation between developers.

Shelving, Packing, Branching

This gets into the second option you have with distributed version control. DVCS makes it easy for developers to share changes with each other before those changes ever reach the upstream server. Other source control providers give you this option through "shelving" or "packing", but I can't see it being as easy as it is with distributed version control. I can pull changes from anyone, on any branch, into a cloned version of my own repository to test things out without talking to a central server. Those changes don't need to be based off of the same trunk, and can easily be merged by whichever developer at whatever point, then pushed to the upstream server, all while retaining who made the original changes. This has to be experimented with to really see the full benefit, and honestly I haven't done it enough, but I can see where it would be useful. Unlimited simple branching, with the ability to push and pull changes from any developer, coupled with a strong revision history, just sounds nice to me.
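
As a quick sketch, sharing in-progress work between two developers needs no central server at all; one of you serves your repository and the other pulls from it directly (hostname and port are made up):

     # On your coworker's machine: expose their repository over HTTP
     hg serve --port 8000

     # On yours: pull their unpushed changesets straight across
     hg pull http://coworker-pc:8000/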

Fault Tolerance

The last thing I want to talk about is when things go wrong. We try to avoid it, but every so often, our central server goes down. Sure, we have backups off-site, and maybe we have a passive failover server, but if we don't, our central server going down is very problematic. I have had this happen to me, and at fairly large companies that can afford lots of nice servers. If this happens in a traditional VCS, development (basically) stops for the day. Not so in distributed environments. Since I can commit to my local repository all I want, and share with everyone else without going through a central server, a dead upstream doesn't affect me at all. This, in my mind, is pretty awesome.

More…

I will say that I'm using a DVCS in a very small team environment, but looking at it, I believe it could scale very easily. I also think there are lots of interesting ways to use DVCS, especially in agile environments where small teams may need to communicate changes without affecting central efforts. I really want to go into these, but this is already too long, so maybe I'll talk about them tomorrow. Until then, let me know what you think.

Open Source

I found it interesting a few weeks back when Warren posted something about not wanting to use open source projects because of fear of legal retribution. I can kind of understand this when using anything distributed under the GPL license, and maybe the LGPL if you figure you’re going to make lots of changes to the work, but lately a lot more stuff has appeared under some pretty lenient open source licenses (like the MIT and Apache licenses), and it’s pretty easy to get a good understanding of what you’re allowed to do and what you shouldn’t do with a simple Wikipedia lookup. That said, IANAL, and if Warren’s lawyers are telling him to stay away, I can completely respect that.

I, however, have been interested in using and contributing to open source software for a while, and working at Orbus, I’ve actually had a chance to work on some (I gave myself permission). What I really like about open source is that if something doesn’t quite work the way you want, or if you want to add functionality, you usually can. Open source projects can be treated as black boxes, or they can be changed to your liking. Now, open source may not always be as stable as some off-the-shelf products, or have all of the same well-rounded features, but sometimes that’s okay: off-the-shelf products can be ridiculously expensive, and I’m willing to take customizability and price point over a feature or two. Additionally, most open source projects are based on standards, not made-up protocols and file formats, so you can usually find other tools that work with them (the same reason I use XML over many other text-based data formats).

So, while using open source, I’m also contributing back to open source. I’ve made some changes to the STOMP clients, and I’ve started a new project for doing mDNS and DNS-SD over at Google code called Mahalo. I’m hoping that these will just be my first contributions to the open source community, and that both projects will be around (and used) for a while. That way, I can feel all good about contributing back to the community.