Debugging Embedded Mono

It's been announced that Go Home Dinosaurs will be released using Google's Native Client technology later this year, but what's less well known is that we're using Mono / C# as our "higher level" logic and scripting system. Generally, I'm a huge fan of C# as a language, and the performance benefits just seal the deal for me. The one issue we have right now, though, is debugging embedded Mono, especially in conjunction with running the game in Visual Studio. There aren't very many instructions on how to do this online, so I thought I'd share our experience.

There are two very weird issues when debugging embedded Mono. First, it currently works exactly opposite of how most debuggers work. In most soft debuggers, the application starts the server and the debugger connects to it. In Mono soft debugging, the debugger creates the server and the application connects to it as the client. Second, the Mono soft debugger uses signals / exceptions to halt threads. This means that you can't debug with both Visual Studio and MonoDevelop at the same time, because Visual Studio will intercept the signals before they ever reach the application.

There are changes in the works from the Unity team which fix both of these issues. However, they are not currently part of a stable Mono release (or in Elijah's version for Native Client). Once they are, I may revisit this whole article, but that's where we are right now.

So, with all of that in mind, how do you get this working right now?

Set Up MonoDevelop

First up, grab MonoDevelop.

Second up, you will want to add an environment variable to your system called MONODEVELOP_SDB_TEST and set it to 1. This gives you access to the following menu item when you have a solution loaded:

Which brings up this dialog:

Now, for some reason, I've never been able to get MonoDevelop to launch the application directly. It always fails attempting to redirect standard out, and then immediately crashes (I could look at the code to figure out why this is, but I haven't had a chance to). We'll come back to this dialog in a minute.

Prepare Your Code

There are a few things you need to add to your code to get debugging to work. The first few lines can occur at initialization, regardless of whether you're debugging or not:

mono_debug_init(MONO_DEBUG_FORMAT_MONO);
mono_debug_domain_create(mMonoDomain);

These should be done before you load any assemblies.

In addition, if you're running Mono from a background thread (like we are), rather than on your foreground thread, you need to detach the foreground thread after initialization. When Mono hits a breakpoint, it attempts to halt all attached threads, but it can only suspend a thread while that thread is executing Mono code. If your main thread is attached but never executes any Mono code, it never halts, which means debugging doesn't work. Detaching is good practice regardless, because if the GC needs to halt all of your threads, it will also fail if an attached thread never executes Mono code.
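
A minimal sketch of the detach, assuming the main thread is the one that initialized Mono (mono_thread_current and mono_thread_detach are part of the embedding API):

// After initialization on the main thread, detach it so Mono's
// debugger and GC only have to suspend threads that run managed code.
mono_thread_detach(mono_thread_current());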

Last thing you'll need to do in code is set up the debugger. Here's the code:

     bool isDebugging = AresApplication::Instance()->IsDebuggingMono();
     if (isDebugging)
     {
          const char* options[] = {
               "--debugger-agent=transport=dt_socket,address=127.0.0.1:10000"
          };
          mono_jit_parse_options(1, (char**)options);
     }

This needs to be combined with any other options you may have (I only have one other, which is --soft-breakpoints), and should only be done if you want to start debugging. If the Mono client can't connect to the debugging server when it starts up, it will call exit, which is no good.
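
For example, combined with --soft-breakpoints, the array becomes (note that the count passed to mono_jit_parse_options changes too):

     const char* options[] = {
          "--soft-breakpoints",
          "--debugger-agent=transport=dt_socket,address=127.0.0.1:10000"
     };
     mono_jit_parse_options(2, (char**)options);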

Lastly, if you're using the Microsoft C# compiler instead of the Mono one, you'll need to generate .mdb files from all of your .pdb files. Thankfully, there's a utility for this (pdb2mdb, which ships with Mono). We actually now perform this step as a post build event, which keeps the .mdb files up to date with the latest code.
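
As a sketch, the post build event can be a single call to the converter (the install path here is an assumption, so adjust it for your machine; $(TargetPath) is the standard Visual Studio macro for the built assembly):

REM Hypothetical post-build event; point this at your Mono install's pdb2mdb
"C:\Program Files\Mono\bin\pdb2mdb.bat" "$(TargetPath)"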

Put it all together

Alright, so let's get this party started!

Open MonoDevelop. Set some breakpoints, then open this dialog again.

Don't change anything, and click "Listen".

Now start your application outside of Visual Studio, with whatever command line arguments you chose to signal to your app that you want to debug Mono.

Everything should now connect and you should be debugging Mono in your app!

Let me know if you have any issues!

Updates to AngelXNA

One of the tips I gave in my recent talk is to have a tool for GGJ. My tool of choice is AngelXNA, because it combines the power of XNA with the power of Angel. When I was demoing it to the other attendees, I realized I hadn't added a few demo screens that are really necessary.

To that end, I've added two demo screens showcasing two features that are in AngelXNA that aren't in Angel. The first is the Editor. While I had a problem with the Editor in the last GGJ, Fire Hose used it a lot for prototyping and I think we've worked out most of the kinks. It's not perfect, but it does make creating levels in Angel super easy. There are shortcuts for cloning, scaling, and rotating actors, which makes the level editing process super quick.

The second feature has several aspects, but comes down to the ability to use sprite sheets and to create named, data driven animations. There are two parts to this. The first part is the Tiling system. If you add a .tileinfo file parallel to any given image file, AngelXNA will automatically start rendering Actors that use that image as tiled. Here's an example .tileinfo:

IsAnim = true
FrameSize = Vector2(160, 200)
AnimDirection = TiledAnimDirection.Horizontal
StartFrame = Vector2(0, 0)
EndFrame = Vector2(0, 2)

It specifies that this file contains an animation, that each frame is 160x200, and that the animation frames run left to right, starting at (0, 0) and ending at (0, 2). You can also mark a file as a non-animation and specify a single tile to render.

Once you have these .tileinfos in place, you can create animation files. Here's an example anim file:

BeginAnimation("Idle", TiledAnimRenderPath.Create())
    AnimDelay = 0.03
    File = "images/devo/devo_idle_strip"
    AnimType = AnimationType.Loop
EndAnimation()

BeginAnimation("Run", TiledAnimRenderPath.Create())
    AnimDelay = 0.03
    File = "images/devo/devo_run_strip"
    AnimType = AnimationType.Loop
EndAnimation()

BeginAnimation("Fall", TiledAnimRenderPath.Create())
    AnimDelay = 0.03
    File = "images/devo/devo_fall_strip"
    AnimType = AnimationType.OneShot
EndAnimation()

This allows you to just call Actor.PlayAnimation("Idle") or Actor.PlayAnimation("Run").

This is still a bit more complicated than I'd like, but does work really well. The only thing I'd like to improve is the specification of the render path. AngelXNA actually generally selects the correct render path automatically, but the data driving system, specifically the animation system, isn't detecting its render path. I can probably fix this, which I'll hopefully do after GGJ.

Simplicity in AngelXNA

I know I've promised tutorials on AngelXNA, and I actually have the first one written, but I've been distracted by crunch on my current project and don't feel comfortable just posting the first portion without more to follow. In addition, the first tutorial feels way too long, so I'm hoping to add some minor changes to Angel to make getting started even easier, specifically by setting up GameManager and default screens in ClientGame, rather than forcing you to make them yourself.

But enough about that, this post is about rendering.

Angel is really nice in that it makes a lot of things really simple and really transparent. With a few exceptions, everything does what you expect. There are very few pre-conditions for any given call in Angel, so you don't have to worry about whether you've set up X or Y before calling things, or worry that calls will fail because objects weren't in the right state. That kind of worry just doesn't exist in Angel. With that in mind, I'm looking for a way to add a piece of complexity and functionality to Angel without sacrificing the external simplicity.

In AngelXNA, all actors render as a rectangle by default (we didn't port over circles from Angel C++), so by just putting an actor in the scene, you get something rendering. If you want to make it a sprite, you call Actor.SetSprite or (simpler) Actor.Sprite = "ContentFile". Animations follow a similar pattern, loading a sequence of files using Actor.LoadSpriteFrames (or, again, you can use Actor.Sprite = "ContentFile_001" and it will automatically load the sequence). Then you can play the animation sequence using Actor.PlaySpriteAnimation.

The problem is, now I want to add two additional render paths: one for rendering static and animated sprite sheets, and one that allows you to specify and load multiple named animations (so you could say Actor.PlaySpriteAnimation("Jump")). This is easily done by inheriting from either Renderable or Actor, but inheriting from Renderable loses all of the functionality in Actor, and inheriting from Actor means I can't use these render paths in other subclasses of Actor (namely PhysicsActor). In addition, if these render paths were reusable *outside* of Actor, it might alleviate some problems we've had where things get added as Actors just to get their rendering properties.

So my question is, how do I make this simple? One option is to just add more methods and properties: Actor.SetSpriteSheet(), Actor.SpriteSheet, Actor.LoadSpriteSheetAnim(), etc. This has the benefit of being consistent, but doesn't make those render paths reusable. Alternatively, I could create an IRenderPath interface with one implementation per render path, use SetRenderPath to assign a specific render path to an Actor, and remap the current functions to create those render paths. It also means the SpriteSheetRenderPath could make use of a reusable SpriteSheet for both tiled maps and character animations. This, however, I feel makes Angel overly complicated. Maybe some combination of the two makes more sense?
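
To make that concrete, here's a rough sketch of what the IRenderPath option might look like (all names are hypothetical; none of this is in AngelXNA yet):

// Hypothetical sketch of the render path interface
public interface IRenderPath
{
    void Update(float dt);
    void Render();
}

// One implementation per render path; a reusable SpriteSheet class
// (also hypothetical) could back both tiled maps and character anims
public class SpriteSheetRenderPath : IRenderPath
{
    private SpriteSheet mSheet;
    private int mCurrentFrame;

    public void Update(float dt) { /* advance mCurrentFrame from anim timing */ }
    public void Render() { /* draw mCurrentFrame of mSheet at the owner's position */ }
}

// Actor would then expose something like:
// public IRenderPath RenderPath { get; set; }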

I feel like I've spent way too much time thinking about this, and would love to get it implemented, hopefully soon.

Angel, New Features, Documentation?

I just pushed a new feature to AngelXNA which integrates a very simple (and currently fairly dirty and incorrect) lexer / parser into the console. The nice thing about this lexer / parser is that it allows the Angel console to now track C# objects, instead of just strings. In addition, you can bind various properties and methods of those C# objects to be console callable, which has made actor and level definition pretty easy to do in the new versions.

Here's an example of the old Angel adf files:

ActorFactoryInitializeActorClass PhysicsActor

ActorFactorySetColor 0.5 0.5 0.8 1.0
ActorFactorySetSize 3 3
ActorFactorySetDensity 0
ActorFactoryAddTag maze_wall

Each of these commands was backed by a method marked with the [ConsoleMethod] attribute, bound to a singleton via reflection. It was cool, and it worked, but here's the new system:

ActorFactory.InitializeActor(PhysicsActor.Create())

Color = Color(0.5, 0.5, 0.8, 1.0)
Size = Vector2(3, 3)
Density = 0
Tag("maze_wall")

In the back end, ActorFactory.InitializeActor and PhysicsActor.Create are static methods on C# classes, marked with [ConsoleMethod], and the Console class automatically finds them. InitializeActor has an implicit "Using", which means that the object passed to it is the subject of any calls until an EndUsing (or EndActor). So, in reality, the new adfs are shortcuts for console scripts that would look like this:

newActor = PhysicsActor.Create()

newActor.Color = Color(0.5, 0.5, 0.8, 1.0)
newActor.Size = Vector2(3, 3)
newActor.Density = 0
newActor.Tag("maze_wall")

World.Add(newActor)

Each of the calls to set actor properties (Color, Size, Density, etc.) is just a property on the C# Actor object, marked with [ConsoleProperty].
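
Conceptually, the discovery step is a reflection scan along these lines (a simplified sketch, not the actual Console code; Register is a hypothetical stand-in for the console's binding table):

using System;
using System.Reflection;

public static class ConsoleScanner
{
    public static void Scan(Assembly assembly)
    {
        foreach (Type type in assembly.GetTypes())
        {
            // Bind methods tagged [ConsoleMethod]
            foreach (MethodInfo method in type.GetMethods())
            {
                if (method.GetCustomAttributes(typeof(ConsoleMethodAttribute), true).Length > 0)
                    Register(type, method.Name);
            }

            // Bind properties tagged [ConsoleProperty]
            foreach (PropertyInfo prop in type.GetProperties())
            {
                if (prop.GetCustomAttributes(typeof(ConsolePropertyAttribute), true).Length > 0)
                    Register(type, prop.Name);
            }
        }
    }

    // Hypothetical: add the member to the console's name -> member table
    private static void Register(Type type, string memberName) { }
}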

What all of this means is that if you want a class that's accessible from the console, with properties and methods you can call, here's what you do. First, create a class and tag the methods and properties you want console accessible, like so:

public class MyConsoleClass
{
    public MyConsoleClass()
    {
    }

    [ConsoleProperty]
    public string Name
    {
        get; set;
    }

    [ConsoleProperty]
    public string Value
    {
        get; set;
    }

    [ConsoleMethod]
    public string GetMyInfo()
    {
        return String.Format("{0}:{1}", Name, Value);
    }

    [ConsoleMethod]
    public static MyConsoleClass Create()
    {
        return new MyConsoleClass();
    }
}

Second, compile and run your AngelXNA application. It will automatically detect your new class and allow you to issue the following commands in the console:

myVar = MyConsoleClass.Create()
myVar.Name = "My Name"
myVar.Value = "My Value"
Echo(myVar.GetMyInfo())

So, that's pretty cool! The whole system needs work, as it's fairly dirty. The execution / parse tree can't handle chaining right now, so commands like myVar.Actor.PerformAction() won't work. In addition, there's a lot of boxing / unboxing of values going on, and some wonkiness where the console works mostly in floats. I'm not sure how much of this I'll have a chance to correct, but hopefully I'll get to much of it as we move forward.

In addition, this whole system needs documentation on our wiki. If you want to help out please let me know.

Lastly, Ian Bogost and Borut Pfeifer have pointed out (via twitter) that AngelXNA definitely needs to be more user friendly to start with. As Ian said, we need things that "will help people get started" and "help them get up and running" quickly. Borut pointed out we may need more screens for the Intro game to explore a lot of the features, but I'm thinking we need more than that. For people that have tried Angel / AngelXNA, what keeps you from starting work quickly? What's confusing? How can we improve the experience and get you working on prototypes quickly and efficiently?

AngelXNA v1.0

Thanks to a lot of help from Darren, today we're officially announcing the release of AngelXNA 1.0. For those that don't know, AngelXNA is a port of the Angel prototyping engine made by EALA and released open source not too long ago. The justification for making a C#/XNA version is that I, personally, like working with C# more than I like working with C++, at least when I'm trying to do prototypes. C# allows me to program faster and worry less about things like memory leaks, memory trashing, and weird side effects. By utilizing XNA, we get a lot of stuff for free, including input handling, sound, and music handling, along with the possibility (though it has not been tested) of running prototypes on the 360.

The interesting thing about creating / porting Angel was a question Darius posed to me while we were working on it: were there any decisions we'd made about the design of Angel that favored simplicity of creating games over speed / efficiency? Did we do anything that we wouldn't do in an actual game engine, just because it made programming games simpler? It's a pretty good question. Certainly, neither Angel nor AngelXNA takes advantage of memory optimizations like object recycling or pooling, and there are no inlining optimizations in AngelXNA. In addition, we didn't optimize the rendering or animation systems by doing things like batching similar objects together, and AngelXNA ends up opening SpriteBatch blocks a bit more often than it should. But could either version of Angel be optimized this way and still keep the same interface? Right now, I'm not sure. Regardless, I think AngelXNA is pretty easy to develop on and runs fast enough to make it a pretty nifty prototyping system, so at least that part is a success.
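
For anyone unfamiliar with the technique, object pooling just means recycling instances instead of allocating new ones each time. A generic sketch (not anything actually in Angel):

using System.Collections.Generic;

// Minimal generic pool: Get() reuses a previously returned instance
// when one is available, avoiding a new allocation (and later GC work).
public class Pool<T> where T : new()
{
    private readonly Stack<T> mFree = new Stack<T>();

    public T Get()
    {
        return mFree.Count > 0 ? mFree.Pop() : new T();
    }

    public void Return(T item)
    {
        mFree.Push(item);
    }
}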

If you'd like, you can grab the v1.0 tag from our bitbucket site (zip direct link), and Darren did a great job of creating some initial documentation on the wiki, so please check it out and tell us what you think! Darren and I (and anyone else who'd like to contribute?) will be looking at moving on to version 1.1 soon, which should include some simple pathing using A* (something that's in the original Angel) and some other simple AI.

XNA and Community Games Sales Disappointment

Right at the tail end of GDC, Microsoft released to developers the XNA Community Games sales figures. While many of the top sellers are keeping their data private (including Miner Dig Deep and the Masseuse products), many have released their data to Gamasutra, which has a very nice in-depth discussion. The general consensus? Most developers are disappointed.

Now before I get into defending Microsoft and the XNA platform as a whole, let me say that the release of sales figures after GDC was certainly not a mistake. If sales had been going well, Microsoft would have done everything in its power to get those figures out either before or during GDC in order to bolster more developer support for the platform. As it is, they knew most developers would be disappointed with the figures, and thus waited until after GDC when the news would have the smallest impact. PR guys are smart like that. So trust me, Microsoft knew you'd be sad and is trying to cover up its mistakes on the platform.

Now I'm going to play the devil's advocate against the indies here (who I respect greatly), and just say that I think Microsoft is getting a bad rap for poor sales figures. Most of the articles I'm seeing are about how Microsoft isn't promoting the platform, how it isn't promoting the games, and how Community Games are hard to find on the NXE (TRUE!!! I complained about this already!). No one's blaming themselves for the lack of good games on the system, which is relegating it to second class citizenship. No one is complaining about their own lack of marketing. In fact, some of the games that have pretty good sales figures (not great) are marketing themselves, and are seeing profits as a result. You can't expect Microsoft to do all of this for you, just because you're on a service they provide you mostly for free. That said, even those with good marketing (like Weapon of Choice) aren't doing great, but at least it doesn't look like they're blaming Microsoft at all. They're just disappointed.

Of course, many are taking this opportunity to talk about how much better iPhone would be to develop for, but I can't stand these comparisons, mostly because the type of game that is going to sell well on the iPhone is going to be different from one that is going to sell well on a console. iPhone is all about the impulse buy. $1 for some small app that looks fun that I want to play right now because I'm bored on a bus, or that serves a need I have right now, but may never have in the future. If that's the game (or app) you're developing, you need to be on iPhone because your game isn't going to sell well on XBLCG anyway. Console sales just are not going to work that way, and you shouldn't expect them to. The amount of money, time, and polish that needs to go in to something that will be sold on a console is higher (IMO) than something sold on a mobile platform because it needs to capture the user's attention for longer. On iPhone you develop for a short attention span, and on XBLCG, you need to develop for the longer attention span. It's just necessary.

Lastly, I want to reiterate that Microsoft and Apple are the only companies providing an open platform for development, and should be lauded for that alone. It seems both companies thought of the idea at the same time. The 1.0 refresh of XNA was released in April of 2007, a full year ahead of the iPhone SDK, and Microsoft announced Community Games in February of 2008, a month ahead of Apple's SDK announcement (which included their app store announcement). However, Apple shipped first and with more features (including sales figures) starting in March of 2008, whereas the NXE (which included the CG store) launched in November of 2008, and didn't get sales figures until last week. In general, both companies deserve kudos for opening previously closed development platforms, and for giving the average person the opportunity to make money on them. But when it comes down to it, they're still very different platforms with different concerns, and attempting to compare them (in my mind) is absolutely ludicrous. So much so that I won't even talk about the millions of features Microsoft HAS to offer in order to keep up with their own XDK technologies (including parental control) that Apple hasn't even touched on yet.

Really lastly, I have ideas on what really needs to happen for CG to really be a "quit your job" platform, but that will have to wait for another post.

Jamming Postmortem

I took part in the Global Game Jam this weekend, and I have to tell you, it was a lot of fun. Version 1 of the game we created, The Game Of Nom, is available from the Global Game Jam site, and was voted third favorite at the location we were participating in, and I think that was a fair place for it to be (Move Mouse To Fulfill Destiny and The Beat were really awesome). I'm really happy with how the game turned out. It had the right feel and I think it really extracted the emotions from players that we wanted. The rules were simple enough that you could easily sit down and play it, hard enough that you could play for a while before winning, and interesting enough to be fun. That all said, the game is fairly buggy, especially when you're moving around flocks or trying to combine them, and that's a huge detriment to the game. At some point, Darren or I may actually fix a few of the issues and post a new version on the game jam site, but don't hold your breath.

So, for my own sanity and for future reference for everyone, I thought I'd do a post mortem of my experience.

What Went Right

  1. Enlisting the full time help of an artist. Amanda did an amazing job of giving us a feel for the game very early. I have no doubt that without her, the game wouldn't have been nearly as fun or interesting, and wouldn't have achieved this balance of fun and message that we wanted. By having a cute style to the game, we were able to present the dark message without seeming overly pretentious, which was awesome. My new rule is "artists make things look cool quickly," so get them involved early and things will look cool early, and get everyone really energized for the rest of the jam.
  2. Having a team. The first game jam I participated in, it was just me. Now, that was great for rapid iteration, but not for making something really interesting. I didn't have anyone to bounce ideas off of, and no one to really keep me focused and in line. Working with Darren not only allowed us to do something a little bit more complicated than we would have been able to do alone, but also ended up producing a much better product.
  3. Not sweating the small stuff. For the most part, I think we did a good job not worrying about some of our problems until later, and getting the game playable quickly so we could test it and refine it as needed, instead of spending lots of time doing things like improving the flocking behavior (which, I'll admit, I spent a little too much time on anyway ;)). The key to Jams is knowing when things are "good enough," and I think we did a pretty good job with that.
  4. Tools choice. Although we had some problems with it, XNA/C# is a really great prototyping language. Right before the Jam, Darren and I were considering other options, including the beta of Unity that was made available to the jammers. The thing was, we didn't want to spend lots of time fighting to get things on screen and working, when we could spend that time on the game play. XNA didn't give us a lot of pain for our simple little 2D game, and for that we were pretty thankful.

What Went Wrong

  1. Needing a prototyping framework. XNA is awesome, but it's not a great prototyping framework. As I don't do too much prototyping, I really don't know what I need and what's overkill. I found that the two things I really ended up wanting / needing were a simple object manager and an actor framework / state machine framework. We actually implemented states very late in the process and they were very hacked together. I frequently found myself wishing we'd had OnEnter / OnExit / ChangeState for the little blobs (something like the sketch after this list), but implementing states would have taken more time than hacking around them. In this respect, we maybe should have gone with Angel, which has that stuff already built in, but it had come out the day of the Jam, and I didn't want to try to learn it while Jamming (I've learned my lesson from the OLPC jam).
  2. Clear message, not so clear implementation. We knew what we wanted to get across to the player early, but not how to do it, and trying to discuss it mid jam was hard. Another twenty minutes talking about implementation would have helped, though during our initial discussion I was itching to get things running. What we really should have done is a "stand up" style meeting when everyone arrived in the morning to discuss where we were, and where we wanted to be each day. I think it would have helped a lot.
  3. Not enough testing / balancing. We should have pulled in more people to play the game earlier, and should have gotten builds to Amanda so she could see the results of her art changes quickly. As it was, I spent most of Saturday and Sunday balancing, but was so close to the game that I missed little problems. Having just one person play mid-day Saturday would have exposed lots of problems that could have been fixed by the deadline.
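
For reference, the OnEnter / OnExit / ChangeState pattern I kept wishing for is tiny. A minimal sketch (hypothetical, not our actual jam code):

// Minimal OnEnter / OnExit / ChangeState pattern for a jam entity
public abstract class State
{
    public virtual void OnEnter() { }
    public virtual void OnExit() { }
    public virtual void Update(float dt) { }
}

public class StateMachine
{
    private State mCurrent;

    public void ChangeState(State next)
    {
        if (mCurrent != null)
            mCurrent.OnExit();
        mCurrent = next;
        mCurrent.OnEnter();
    }

    public void Update(float dt)
    {
        if (mCurrent != null)
            mCurrent.Update(dt);
    }
}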

I'd love to know what people have to say about the game. We're rating well, and I think if we get around to fixing the bugs, it will rate even better. Thanks to everyone who organized the Game Jam for this great opportunity!

Parent-Child Unit Tests

So, I'm kind of wondering if something like this exists. I currently have a set of tests that share common set up code which I only want to execute once for all of the tests, as well as some set up code that I want executed before each test. In addition, though, there are sets of tests that share set up that I want to execute only once for that set of tests. Basically, here's what I want to happen:

Parent Fixture Set Up
    Child Fixture Set Up
        Parent Set Up
        Child Set Up
            Test
        Child Tear Down
        Parent Tear Down

        Parent Set Up
        Child Set Up
            Test
        Child Tear Down
        Parent Tear Down
        
        (Etc…)
    Child Fixture Tear Down
    Child Fixture Set Up
        (Etc…)
    Child Fixture Tear Down
Parent Fixture Tear Down

There are two ways I can see of doing this. One: through inheritance. Basically, the parent / child relationship is expressed through a base / derived relationship. NUnit may support this, but I haven't tried it. I'm not really sure what happens when you provide two SetUp attributes for one class in NUnit (even if they're in a base / derived relationship).
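
For what it's worth, here's a sketch of the inheritance approach (a sketch only; how NUnit orders, or whether it even allows, [SetUp] methods in both a base and a derived class varies by version, which is exactly my uncertainty above):

using NUnit.Framework;

public class ParentFixture
{
    [SetUp]
    public void ParentSetUp()
    {
        // Per-test set up shared by all child fixtures
    }

    [TearDown]
    public void ParentTearDown()
    {
        // Per-test tear down shared by all child fixtures
    }
}

[TestFixture]
public class ChildFixture : ParentFixture
{
    [SetUp]
    public void ChildSetUp()
    {
        // Ideally runs after ParentSetUp -- version dependent
    }

    [Test]
    public void SomeTest()
    {
    }
}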

Second, and this is the method I would like to use, you could do it through a containment system. The parent contains the child, like so:

[TestFixture]
public class ParentFixture
{
    [TestFixtureSetUp]
    public void FixtureSetUp()
    {
    }

    [TestFixtureTearDown]
    public void FixtureTearDown()
    {
    }

    [TestFixture]
    public class ChildFixture
    {
        // etc.
    }
}

The only problem there is that in order to access, say, private data or methods of the parent fixture, you need to pass an instance of the parent class in to the child. I'm pretty sure this is not supported by NUnit.

Does anyone know of an addin that might do this? Has anyone had a similar use case for any unit testing environment? How do you get around it?

On PowerShell

So, I've been using PowerShell a whole bunch lately, mostly to write prebuild / postbuild scripts for our product. Now, for most things, a batch file works fine. However, I wanted a few features that PowerShell really shines at (like date comparisons without downloading extra tools), and I really wanted to see just how cool PowerShell is.

For those of you that don’t know what PowerShell is, imagine working in a command line shell where each command is very tiny, and has a very consistent naming convention. In order to work with these commands, you pipe the output of one command into the next, but instead of piping text, you’re piping .NET objects. Objects which you can evaluate parameters on and do crazy things with. That’s (basically) PowerShell, and it’s freaking crazy awesome.
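
As a quick taste of what that looks like in practice (a trivial example, unrelated to our build scripts):

# The FileInfo objects in the pipe keep their .NET properties,
# so you can filter and sort on them directly.
Get-ChildItem -Recurse |
    Where-Object { !$_.PSIsContainer } |
    Sort-Object Length -Descending |
    Select-Object -Property Name, Length -First 5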

Now, overall, I'm very impressed with PowerShell, but since I'm still learning (and it's hard to find good tutorials on the subject), I feel like there are a lot of hoops I have to jump through just to get simple things done. The simplest PowerShell script I've written is around 3 lines: a simple date compare before calling out to a generator (avoids extra work during the build process). That's an excellent use of PowerShell over batch files, since doing date compares in DOS is nigh impossible. The most complicated, the one I'm writing now, would have been a simple 1 line batch file call to xcopy, but the PowerShell version is much more complicated.

The task is to copy over all .h files from a directory into an “Include” directory for deployment, ignoring the “Test” directories, and not copying empty directories. The xcopy command for this is simple:

xcopy *.h  ..\Include /S /Exclude:Exclude.txt

Where Exclude.txt would contain test* (which actually excludes all files starting with test, but that… might be okay). The PowerShell script, on the other hand, requires much more work. Here’s the “simplest” I could get it:

# Assumes these were defined earlier in the script:
$currentPath = (Get-Location).Path
$destPath = [System.IO.Path]::GetFullPath((Join-Path $currentPath "..\Include"))

foreach ($file in Get-ChildItem -Filter *.h -Recurse)
{
	# Map the source file to its mirrored path under Include
	$dest = $file.FullName.Replace($currentPath, $destPath)
	$destDir = [System.IO.Path]::GetDirectoryName($dest)

	# Skip anything destined for a Test directory
	if (!$destDir.ToLower().Contains("test"))
	{
		# Only create the directory when there's a file to put in it,
		# so empty directories never appear
		if (!(Test-Path $destDir))
		{
			New-Item -Type Directory -Path $destDir | Out-Null
		}
		Copy-Item -Path $file.FullName -Destination $dest
	}
}

Now, I am pretty new to PowerShell, so I may be missing places where I could make simple changes to make the whole thing more readable, but I know I can't use Copy-Item directly, since it will copy over empty directories when given the -Recurse flag, and I can't get -Exclude "Test*\" (or any other variant) to work with Get-ChildItem for some reason. I'm sure there's a way to make the -Filter parameter accept both inclusions and exclusions, but so far I've yet to find the help file to do it.
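
One possibly tighter variant I haven't fully tested, filtering with Where-Object under the same $currentPath / $destPath setup:

Get-ChildItem -Filter *.h -Recurse |
    Where-Object { $_.DirectoryName -notmatch "test" } |
    ForEach-Object {
        $dest = $_.FullName.Replace($currentPath, $destPath)
        # -Force returns the directory whether or not it already exists
        New-Item -Type Directory -Path (Split-Path $dest) -Force | Out-Null
        Copy-Item -Path $_.FullName -Destination $dest
    }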

Has anyone else played with PowerShell? Had any luck with it?

Firefox, Silverlight, Services and ASP.NET Debugging

Edit: If you've been having problems with this, it's because I accidentally missed a step. Firefox will always look for an existing process regardless of whether you want to start as a separate profile. To fix this, you need to add the MOZ_NO_REMOTE environment variable with a value of 1. Note: this will make it so you can't click Firefox again to open a new window. You'll have to use File->New Window instead. This whole problem kinda sucks, but at least there's a way around it.
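
If you prefer the command line to the System Properties dialog, this should do it (setx is built in on Vista and up; on XP it comes with the support tools):

setx MOZ_NO_REMOTE 1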

Edit 2: That fix also doesn't work. It won't allow you to browse "as normal," since running the program with the same profile causes it to error out. It doesn't really matter, though, since the recently released Orcas Beta 2 broke this fix entirely. For some reason, starting an external program pointed at the location doesn't have the Silverlight debugging system attach correctly, which just ruins everything. So, you're pretty much stuck closing down Firefox entirely if you want to debug Silverlight.

So this one is really interesting, and took me quite a bit of time to figure out. We’re currently working on trying to do some interesting visualizations of our data using Microsoft’s new Silverlight platform, and, despite some initial problems with setting up the service, it looked like we had everything figured out. Except there were still some debugging problems.

Basically, after an initial success, whenever we tried to start debugging, the debugger would attempt to start, then immediately exit with no error other than:

The program '[x] WebDev.WebServer.EXE: Managed' has exited with code 0 (0x0).

The page would load in Firefox (on a new tab) but none of the changes to our Silverlight components were actually taking effect.

As near as I can figure out, here’s what’s happening: When debugging Silverlight, Orcas attaches to both your web browser and WebDev.WebServer.Exe. However, if Firefox was already open and had the Silverlight .dll loaded (which is common if you open all new links in a new tab), the Orcas debugger was unable to push the new copy of the dll to the cache for Firefox to use. As a result, Orcas immediately shuts down with the incredibly cryptic "error" message. Firefox, meanwhile, happily uses a cached version and you are left frustrated.

So, how do you fix it and still be able to browse your web pages normally? The simple solution is to have Firefox start in a separate process and in a separate, non-default user profile. Here are the steps:

  1. Start Firefox’s profile manager with:

    Firefox.exe -ProfileManager

    You will be greeted with the following screen:
    Firefox Profile Manager

  2. Add a new profile. As you can see, I chose the name “Testing User.” Setting everything else to default is fine.
  3. Change the start up options for your website to execute Firefox with the new user profile. (See screen below). This will start Firefox in the separate user profile in a different process space that will close when you stop debugging.
    The Start Options for an ASP.NET web page
    The values are:

    Start External Program: [Firefox path]\firefox.exe
    Command line arguments: -P "Testing User" http://localhost:[port]/WebSite/Page.aspx
    Working Directory: [Firefox path]
    

    (Note: If anyone knows how to get the virtual path for the command line arguments, I’d appreciate knowing)

And that’s it! You should now be able to run Firefox normally and debug your Silverlight applications without problem.