Game Development Tools

(Note: The whole point of this post is to get you to join the IGDA Tools SIG, but I don’t get to that until below the fold, so I’ve decided to push it up front here for those of you who don’t feel like reading my long tools rant. Please just go join here.)

I’m a firm believer in tools.

I actually believe that good tools, provided both internally (from actual tools departments) and externally, are key to the success of any software development company, but especially game companies. I would go so far as to say that 80% of any programmer’s job in a game development company is, or at least should be, making tools that make artists, designers, and other programmers happy. These are your engine programmers. They do good work, and they’re very important.

However, there should also be a good number of programmers whose *only* job is to keep your artists, designers, and programmers happy. These are your dedicated tools programmers, who do nothing but code and fix tools and process all day. If you don’t have these people, you’re missing out. Why? Think of all the places tools touch, or should touch, in game development and process that don’t actually involve working with your engine.

First, everyone benefits from a good build pipeline. Whether you’re doing continuous integration or not, having fast turnaround on art and code changes is always a good thing. Additionally, if everyone can grab changed assets quickly from central servers, work can get underway faster in the morning. The tools to do these types of things are usually available, but if you think an engine programmer (or worse, a producer) is going to go find them, you are sorely mistaken. Of course, having that good build pipeline also means having good source control, and usually means having good integration between your source control and your tools for creating and exporting assets. And here, there usually aren’t tools available to you. You have to make them yourself, but the jump in productivity from this type of integration is enormous.

Additionally, programmers need tools themselves, but only a select few will think to make them, usually only for themselves, on an ad-hoc basis. Deployment scripts, revision / code review scripts, Visual Studio / WinMerge integration: these are all tools I’ve seen created by fellow programmers at game studios that never get passed around. Beyond those, there are incredibly useful tools that an engine programmer might never get around to: memory inspectors, automatic bug reporters (and potentially bug collators), output filtering systems, external debug consoles, formal logging / metrics. All of these things are useful for programmers (and, in the case of formal logging, designers), but very few have (or will take) the time to really look into them as possibilities. Not My Job Syndrome.

Lastly, dedicated tools programmers have the time and ability to keep up to date with the latest tool releases and technologies that people are using for tools in the game industry. This is key, because new concepts (like monkey testing, automatic crash reporting, formal logging / metrics, and uses of XML / XSLT) pop up every day, and sometimes lead to enormous gains in productivity. Without the time to look into them, engine programmers tend to lean on old technologies, with the “it’s not broke, don’t fix it” mentality. This is dangerous to your productivity, since sometimes it is broke, and your programmers just don’t realize it.

Now, if you feel like I do, or you want to learn more, I highly recommend joining the new(ish) IGDA Tools SIG. The SIG is dedicated to staying on top of best practices for tools development, posting about new tools that are becoming available, and hopefully (eventually) being a great educational and discussion resource for tools developers. If you are interested, I urge you to join. You will not regret it.

Open Source

I found it interesting a few weeks back when Warren posted something about not wanting to use open source projects for fear of legal retribution. I can kind of understand this when using anything distributed under the GPL, and maybe the LGPL if you figure you’re going to make lots of changes to the work, but lately a lot more software has appeared under some pretty lenient open source licenses (like the MIT and Apache licenses), and it’s easy to get a good understanding of what you are and aren’t allowed to do with a simple Wikipedia lookup. That said, WINAL, and if Warren’s lawyers are telling him to stay away, I can completely respect that.

I, however, have been interested in using and contributing to open source software for a while, and working at Orbus, I’ve actually had a chance to work on some (I gave myself permission). What I really like about open source is that if something doesn’t quite work the way you want, or if you want to add functionality, you usually can. Open source projects can be treated as black boxes, or they can be changed to your liking. Now, open source may not always be as stable as some off-the-shelf products, or have all of the same well-rounded features, but sometimes that’s okay: off-the-shelf products can be ridiculously expensive, and I’m sometimes willing to take customizability and price point over a feature or two. Additionally, most open source projects are based on standards, not made-up protocols and file formats, so you can usually find other tools that work with them (the same reason I use XML over many other text-based data formats).

So, while using open source, I’m also contributing back to open source. I’ve made some changes to the STOMP clients, and I’ve started a new project for doing mDNS and DNS-SD over at Google Code called Mahalo. I’m hoping that these will just be my first contributions to the open source community, and that both projects will be around (and used) for a while.

More on the Server Conundrum

So, I’ve started development on multiple virtual servers, and I’ve found that VMWare actually makes things easier than I thought it would, at least for the structure I discussed at the end of the last post. Some things I’ve found that make this setup a bit easier:

First, you can set VMWare to put all virtual servers on their own virtual network, shared by the host machine. This allows you to name all the servers by their use and not have them conflict between developers’ machines. So I can have a virtual machine named JBoss on my network and you can be developing against the “same” virtual machine named JBoss, but there won’t be a naming collision, because both machines are on separate virtual intranets, hidden behind our actual boxes. This is extremely useful, and because it’s NAT, your virtual machines can still access the internet. Double bonus.

Second, I’ve found that having VMWare split the disk into multiple 2 GB files, and not having it allocate its entire disk space at once, is a very good thing. Why? Because copying a full disk (even a small one) over a network is a slow process, and even though you suffer a performance drop on the actual server, it’s worth it in the long run in developer hours.

Third, although I haven’t implemented this yet, I would recommend setting up the “basic box” and then having a source control repository for anything anyone might need to change in terms of configuration files, server deployments, or source code locations. You may need to supply some sort of name translation system if you’re not running DNS on your local network (which, yeah, I’m not…), but having developers constantly copy disk images around would probably become a huge pain. Better to only move drives around when something major on the box changes, like upgrading the OS or certain specific pieces of software.
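For the name translation, I’m picturing something as simple as a checked-in mapping file that the test code consults. Here’s a hypothetical C# sketch (the file name and format are made up for illustration):

using System;
using System.Collections.Generic;
using System.IO;

// Hypothetical name-translation shim for networks without local DNS:
// a checked-in "servers.txt" maps logical server names to addresses
// (e.g. "jboss=192.168.80.128"), so test code never hard-codes an IP.
static class ServerNames
{
    static readonly Dictionary<string, string> _map = Load("servers.txt");

    static Dictionary<string, string> Load(string path)
    {
        Dictionary<string, string> map = new Dictionary<string, string>();
        foreach (string line in File.ReadAllLines(path))
        {
            string[] parts = line.Split('=');
            if (parts.Length == 2)
                map[parts[0].Trim()] = parts[1].Trim();
        }
        return map;
    }

    public static string Resolve(string logicalName)
    {
        // Fall back to the logical name, so real DNS still works if present
        return _map.ContainsKey(logicalName) ? _map[logicalName] : logicalName;
    }
}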

Lastly, don’t try to dynamically start / stop servers in your unit tests, or even before your unit tests. It takes way too long, and even after the server is powered up, there’s no way for you to tell when it’s fully booted. Better to just leave it running on your build / test machine and have developers start it when they need it. That is, of course, assuming that your build machine is up to the task (ours currently isn’t, but I’m fixing that…)
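If you go the “leave it running” route, a cheap guard is to fail fast with a clear message when the server isn’t reachable, rather than letting every test time out. A minimal sketch, assuming NUnit and a hypothetical JBoss VM name and port:

using System;
using System.Net.Sockets;
using NUnit.Framework;

[TestFixture]
public class JBossIntegrationTests
{
    // Hypothetical names; in our setup these would come from the
    // virtual-network naming scheme described above.
    const string ServerHost = "JBoss";
    const int ServerPort = 8080;

    [TestFixtureSetUp]
    public void CheckServerIsUp()
    {
        try
        {
            // A plain TCP connect is enough to prove the VM is up
            using (TcpClient client = new TcpClient())
            {
                client.Connect(ServerHost, ServerPort);
            }
        }
        catch (SocketException)
        {
            Assert.Fail("JBoss isn't reachable at " + ServerHost + ":" + ServerPort +
                        ". Start the virtual server before running these tests.");
        }
    }
}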

Even with that done, there’s still a lot more to figure out here, and I’m still wondering if this is the best way to go about it. In most cross-platform development, you just re-compile and re-run your tests, either on a virtual machine or an actual machine of that platform, and you’re done. But having one library that needs to connect to multiple different types of servers in many different combinations, and unit testing all of them… I’m not sure how often that’s a problem. Usually, I think (especially in middleware), some sort of architecture is just assumed, or you force the issue. Here, I’m trying to be as flexible as possible, even if it means having my build server run four different virtual servers, just to make sure all possible combinations of integration work.

The Server Conundrum

The next big task at Orbus is to get our systems working with what I call “intermediary” servers: enterprise-style message queues and the like. Basically, the idea is that the game can asynchronously pump metrics to the queue without sacrificing performance in the game, and the queue will catch up during times of low server load. These systems are also easy to run in parallel and in high-availability modes, so if one messaging server does get overloaded (or dies), another is there to pick up the slack. Right now, we're looking at JBoss, since it supports all of that under the LGPL, which is nice.
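To make the idea concrete, here’s a minimal C# sketch of what that asynchronous pumping might look like. This is illustrative only, not our actual implementation, and the queue-client call is stubbed out:

using System;
using System.Collections.Generic;
using System.Threading;

// Minimal sketch: the game thread enqueues metrics without blocking,
// and a background thread drains the queue to the message server.
public class MetricPump
{
    private readonly Queue<string> _pending = new Queue<string>();
    private readonly Thread _worker;
    private volatile bool _running = true;

    public MetricPump()
    {
        _worker = new Thread(Drain);
        _worker.IsBackground = true;
        _worker.Start();
    }

    // Called from the game loop; returns immediately.
    public void Log(string metric)
    {
        lock (_pending) { _pending.Enqueue(metric); }
    }

    private void Drain()
    {
        while (_running)
        {
            string metric = null;
            lock (_pending)
            {
                if (_pending.Count > 0) metric = _pending.Dequeue();
            }
            if (metric == null) { Thread.Sleep(50); continue; }
            SendToQueue(metric);
        }
    }

    private void SendToQueue(string metric)
    {
        // Placeholder for the actual message-queue publish
        // (e.g. a JMS or STOMP client call).
    }

    public void Shutdown() { _running = false; }
}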

Anyway, it’s time for me to install this server and start writing software against it. Of course, I’m planning on unit testing connections to it and making sure that everything actually works the way it’s supposed to. However, to do that, I’ll need to start up a JBoss server on my machine, on top of the three other servers currently running simply as multiplatform test beds. Our build / database machine is also starting to get overloaded with servers: IIS, CCNet, MySQL, MSSQL, soon JBoss, maybe eventually Postgres and MSMQ, and a small possibility of other Java application servers thrown in for good measure.

It’s starting to get ridiculous. The machines can handle it, but I’d much rather see a situation where this wasn’t necessary, especially when it comes to my own development machine. Still, it’s almost necessary to have each of those servers installed locally to develop against, as well as another set up on the build machine to run the pre-deploy tests against. And all of these systems have to be in sync: all of the developers’ databases need to match the database I developed against, and the test servers need to match mine before the tests are run. This should be done automatically, but for the time being it’s not (I’ll get around to it!).

So anyway, it’s a big fiasco. The way I see it, a perfect development / test server environment would have the following properties:

  1. You have copies of all the local servers installed, but they only run while you’re developing against them. Otherwise they’re disabled.
  2. The build / test server has all servers installed and running, so long as they don’t conflict with one another.
  3. All of these servers have the same names, usernames, and passwords, or can generate similar ones in the unit tests.

I’ve been thinking about virtualization as a possible way to overcome this problem. Basically, have each of the server collections running in a separate virtual server. So, my database servers run on a virtual server on my box that I can shut off when I don’t need it. Then JBoss runs on a separate virtual server, and MSMQ on another, named along the lines of “hostname_dbs” and “hostname_jboss”. The idea here is that the names of the servers could be generated in the unit tests, and the server images copied (with the host name changed) to all the developers pretty easily.
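Generating those names in the tests could be as simple as something like this (a hypothetical helper following the naming convention above):

using System;

// Each developer's tests derive the virtual server names from their own
// machine name, so the same test code works on every box.
static class TestServers
{
    public static string DatabaseHost
    {
        get { return Environment.MachineName.ToLower() + "_dbs"; }
    }

    public static string JBossHost
    {
        get { return Environment.MachineName.ToLower() + "_jboss"; }
    }
}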

For now, I'll probably just install JBoss this way and see how it works. I'll let you know.

On PowerShell

So, I’ve been using PowerShell a whole bunch lately, mostly to write prebuild / postbuild scripts for our product. Now, for most things, a batch file works fine. However, I wanted a few features that PowerShell really shines at (like date comparisons without downloading extra tools), and I really wanted to see just how cool PowerShell is.

For those of you that don’t know what PowerShell is, imagine working in a command line shell where each command is very tiny and has a very consistent naming convention. To work with these commands, you pipe the output of one command into the next, but instead of piping text, you’re piping .NET objects: objects whose properties you can inspect and do crazy things with. That’s (basically) PowerShell, and it’s freaking crazy awesome.

Now, overall, I’m very impressed with PowerShell, but since I’m still learning (and it’s hard to find good tutorials on the subject), I feel like there are a lot of hoops I have to jump through just to get simple things done. The simplest PowerShell script I’ve written is around three lines: a simple date compare before calling out to a generator (which avoids extra work during the build process). That’s an excellent use of PowerShell over batch files, since doing date compares in DOS is nigh impossible. The most complicated, the one I’m writing now, would have been a simple one-line batch file call to xcopy, but the PowerShell version is much more complicated.

The task is to copy over all .h files from a directory into an “Include” directory for deployment, ignoring the “Test” directories, and not copying empty directories. The xcopy command for this is simple:

xcopy *.h  ..\Include /S /Exclude:Exclude.txt

Where Exclude.txt would contain test* (which actually excludes all files starting with test, but that… might be okay). The PowerShell script, on the other hand, requires much more work. Here’s the “simplest” I could get it:

# Assumes we're running from the source root; $destPath is the deployment
# "Include" directory the headers should end up in.
$currentPath = (Get-Location).Path
$destPath = Join-Path $currentPath "..\Include"

foreach ($file in Get-ChildItem -Filter *.h -Recurse)
{
	# Map the source file's path onto the destination tree
	$dest = $file.FullName.Replace($currentPath, $destPath)
	$destDir = [System.IO.Path]::GetDirectoryName($dest)

	# Skip anything headed for a "Test" directory
	if (!$destDir.ToLower().Contains("test"))
	{
		# Only create the destination directory when there's a file to
		# put in it, which is what avoids copying empty directories
		if (!(Test-Path $destDir))
		{
			New-Item -Type Directory -Path $destDir | Out-Null
		}
		Copy-Item -Path $file.FullName -Destination $dest
	}
}

Now, I am pretty new to PowerShell, so I may be missing places where I could make simple changes and make the whole thing more readable, but I know I can’t use Copy-Item directly, since it will copy over empty directories when given the -Recurse switch, and I can’t get -Exclude "Test*\" (or any other variant) to work with Get-ChildItem for some reason. I’m sure there’s a way to make the -Filter parameter accept both inclusions and exclusions, but so far I’ve yet to find the help file that explains it.

Has anyone else played with PowerShell? Had any luck with it?

Firefox, Silverlight, Services and ASP.NET Debugging

Edit: If you've been having problems with this, it's because I accidentally missed a step. Firefox will always look for an existing process, regardless of whether you want to start with a separate profile. To fix this, you need to add the MOZ_NO_REMOTE environment variable with a value of 1. Note: this will make it so you can't click Firefox again to open a new window; you'll have to use File->New Window instead. This whole problem kinda sucks, but at least there's a way around it.

Edit 2: That fix also doesn't work. It won't allow you to browse "as normal", since running the program with the same profile causes it to error out. It doesn't really matter, though, since the recently released Orcas Beta 2 broke this fix entirely. For some reason, starting an external program at the location doesn't have the Silverlight debugging system attach correctly, which just ruins everything. So, you're pretty much stuck closing down Firefox entirely if you want to debug Silverlight.

So this one is really interesting, and took me quite a bit of time to figure out. We’re currently working on trying to do some interesting visualizations of our data using Microsoft’s new Silverlight platform, and, despite some initial problems with setting up the service, it looked like we had everything figured out. Except there were still some debugging problems.

Basically, after an initial success, whenever we tried to debug, the debugger would attempt to start, then immediately exit with no error other than:

The program '[x] WebDev.WebServer.EXE: Managed' has exited with code 0 (0x0).

The page would load in Firefox (on a new tab) but none of the changes to our Silverlight components were actually taking effect.

As near as I can figure out, here’s what’s happening: when debugging Silverlight, Orcas attaches to both your web browser and WebDev.WebServer.exe. However, if Firefox was already open and had the Silverlight .dll loaded (which is common if you open all new links in a new tab), the Orcas debugger is unable to push the new copy of the .dll to the cache for Firefox to use. As a result, Orcas immediately shuts down with the incredibly cryptic "error" message above. Firefox, meanwhile, happily uses a cached version, and you are left frustrated.

So, how do you fix it and still be able to browse your web pages normally? The simple solution is to have Firefox start in a separate process and in a separate, non-default user profile. Here are the steps:

  1. Start Firefox’s profile manager with:

    Firefox.exe -ProfileManager

    You will be greeted with the profile manager screen:
    [Screenshot: Firefox Profile Manager]

  2. Add a new profile. As you can see, I chose the name “Testing User.” Setting everything else to default is fine.
  3. Change the startup options for your website to execute Firefox with the new user profile (see the screen below). This will start Firefox with the separate user profile in a different process space that will close when you stop debugging.
    [Screenshot: the Start Options for an ASP.NET web page]
    The values are:

    Start External Program: [Firefox path]\firefox.exe
    Command line arguments: -P "Testing User" http://localhost:[port]/WebSite/Page.aspx
    Working Directory: [Firefox path]
    

    (Note: If anyone knows how to get the virtual path for the command line arguments, I’d appreciate knowing)

And that’s it! You should now be able to run Firefox normally and debug your Silverlight applications without problem.

MySQL and Visual Studio

So, I’ve been working a lot with Visual Studio, and although it doesn’t do everything perfectly, I tend to like it. One of the things I like about it is its ability to group lots of seemingly disparate technologies into a single solution file, so that you can have all the information about your “solution” right at your fingertips.

Today, in preparation for porting our technology to MySQL, I decided to try to make Visual Studio database projects and MySQL work nicely together. Believe it or not, this is insanely hard without purchasing an OLE DB provider, and I couldn’t find a single site through Google that told me how to get Visual Studio database projects working with MySQL. Even MySQL’s Visual Studio plugin doesn’t actually support everything you’d think it should (you can’t run scripts, for example, which was the major piece of the puzzle I wanted). So what’s a programmer to do? In this case, I found a kind of solution: it doesn’t give you all of the features of the MySQL plugin in the Server Explorer, but it does give you the run feature, which I feel is much more important.

For those of you that are curious, here’s the solution. First, download and install the MySQL Connector/ODBC libraries. In my case, I used the 5.0 libraries, because we’re developing against beta software anyway (the 5.1 release of MySQL), but I have a feeling the 3.51 release will work just as well. Next, open your Server Explorer and add a data connection. You’ll be presented with this window:

[Screenshot: the Add Connection window]

Click “Change” next to the data source and select “Microsoft ODBC Data Source”, and the Add Connection window will change to this:

[Screenshot: the Add Connection window for ODBC]

Now, you could go through the trouble of making system or user data sources here, but you’re better off just making a connection string, so select “Use Connection String” and type in your ODBC connection string. For me, this was similar to:

Driver={MySQL Connector/ODBC v5};server=localhost;database=Database;

and add the user name and password below. (Note that if you’re using the older version, I believe the driver needs to be set to “mySQL ODBC 3.51 Driver”, though I’m not actually sure.) Test the connection, and you’re all set: multi-line scripts should now run just fine, and you can develop as if you were working on SQL Server.
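For what it’s worth, the same connection string should also work from code via System.Data.Odbc. Here’s a rough sketch with made-up credentials and a toy script; I execute the statements one at a time, since I’m not certain the driver allows multi-statement commands without extra connection options:

using System;
using System.Data.Odbc;

class MySqlOdbcExample
{
    static void Main()
    {
        // Same driver string as above; uid/pwd are placeholders.
        string connStr = "Driver={MySQL Connector/ODBC v5};" +
                         "server=localhost;database=Database;" +
                         "uid=someuser;pwd=somepassword;";

        using (OdbcConnection conn = new OdbcConnection(connStr))
        {
            conn.Open();

            // A toy script, run one statement per command to stay safe.
            string[] script =
            {
                "CREATE TABLE IF NOT EXISTS Metrics (Id INT, Name VARCHAR(64))",
                "INSERT INTO Metrics VALUES (1, 'FirstEvent')"
            };

            foreach (string statement in script)
            {
                using (OdbcCommand cmd = new OdbcCommand(statement, conn))
                {
                    cmd.ExecuteNonQuery();
                }
            }
        }
    }
}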

Edit: Although this works for executing multi-statement queries, it doesn't work for things like creating procedures. MySQL chokes on the semicolons used inside multi-statement procedures, and there's no way to use the delimiter command to change the end-of-statement delimiter. In my opinion, this is a huge problem, and the whole concept of requiring that you change delimiters feels like a hack. I'm not sure if there's a way around this, but you can be sure I'll write about it if there is.

More on Generated Tests

Shortly after posting this post about using .dlls to run generated code to make sure it’s actually generated correctly, I realized that, unfortunately, that approach doesn’t work 90% of the time, and doesn’t work at all if you’re using C++, unless you find a workaround for importing objects. And if you’re using STL (especially as parameters to functions), you’re kind of screwed when trying to implement your unit tests in C#.

So what to do? We still have the same problems, but I’ve come up with a different approach. In the case of this generated code, I can deem it “correct” if it conforms to the following properties:

  1. It calls the correct functions in an external library with the correct parameters.
  2. It puts data received from the external library into the correct place.

I decided to test these in two different ways, which will show me not only that my generator created the correct code, but also that it created the correct unit tests for that code. Basically, here was my end solution:

  1. Code a Mock Object To Impersonate the External Library. In my case, the mock object is a mock connection to a database that always returns one row of data with fixed values. The fixed values are dependent on type, which makes it easier to track down if I’ve accidentally put the wrong data in an address. In addition, it logs ALL of the functions called on it, which I store for later use (a rough sketch of the idea follows this list).
  2. Generate Unit Tests using the same generator as the code generator. The unit tests initialize the objects using the mock connection. The objects then do what they’re supposed to, and the tests check to make sure they get the right data back. At the end of the run, the tests output the calls made to the mock connection object as XML (something coded into the mock object) to a predefined folder.
  3. Code the C# Unit Tests. In this case, the unit tests generate code based off a known value which has all of the features our generator supports. They then build the generated code and its unit tests and run them (actually as part of the post-build step). If the unit tests fail, there’s something wrong in either the generated code or the generated tests. If they succeed, I go the extra mile and compare the output XML to files I’ve written (or at least verified) using Microsoft’s XmlDiffPatch library. This library was *also* how I checked that my mock object was outputting the XML correctly.
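To give a flavor of the mock connection idea, here’s a rough C# sketch. The names are simplified for illustration; the real object mimics our actual connection interface:

using System;
using System.Collections.Generic;
using System.Xml;

// Records every "procedure call" made by the generated code, then dumps
// the log as XML so a diff tool can compare it against a known-good file.
public class MockConnection
{
    private readonly List<KeyValuePair<string, object[]>> _calls =
        new List<KeyValuePair<string, object[]>>();

    // Generated code calls this instead of a real database procedure.
    public void Execute(string procedureName, params object[] parameters)
    {
        _calls.Add(new KeyValuePair<string, object[]>(procedureName, parameters));
    }

    // Write the call log out as XML for offline comparison.
    public void DumpCalls(string path)
    {
        using (XmlWriter writer = XmlWriter.Create(path))
        {
            writer.WriteStartElement("Calls");
            foreach (KeyValuePair<string, object[]> call in _calls)
            {
                writer.WriteStartElement("Call");
                writer.WriteAttributeString("procedure", call.Key);
                foreach (object parameter in call.Value)
                {
                    writer.WriteElementString("Parameter", parameter.ToString());
                }
                writer.WriteEndElement();
            }
            writer.WriteEndElement();
        }
    }
}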

So why XML over just doing a diff on the code files? Again, all I care about are the calls to the external library. So long as they happen, I don’t care what the code internally looks like, and I actually may want (or need) to change the code to optimize things further down the line, so comparison against the code text is out of the question. XML is nice because you can set the diff program to ignore whitespace and order, which I need.
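The comparison itself ends up being just a few lines with XmlDiffPatch. Something like this (the file names are illustrative):

using Microsoft.XmlDiffPatch;
using NUnit.Framework;

[TestFixture]
public class GeneratedCodeXmlTests
{
    [Test]
    public void GeneratedCallsMatchKnownGood()
    {
        // Ignore whitespace and child order so only real differences
        // in the logged calls fail the test.
        XmlDiff diff = new XmlDiff(XmlDiffOptions.IgnoreWhitespace |
                                   XmlDiffOptions.IgnoreChildOrder);

        bool identical = diff.Compare("ExpectedCalls.xml", "ActualCalls.xml", false);

        Assert.IsTrue(identical,
            "The generated code drove the mock connection differently than expected.");
    }
}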

So yeah, hopefully you find that helpful. By the way, I promise to go back to writing about design sometime in the future. I’m just finding a lot of this code stuff really exciting (and I can talk about it now, which is nice), so I figured… why not? Hopefully people are finding this interesting. I'd love to hear if anyone has feedback on the shift in focus over the past few weeks.

Testing Generated Code

Edit: This post was superseded by this one. Basically, I realized that this approach for C++ is way too hard and not worth the effort. So, if you're interested in testing generated code, especially in C++, I'd go read the other article. However, if you're curious as to how to build, load, and test a C .dll, this is the article for you.

So, as you may have noticed, I’m enamored with the concept of automated testing. In my opinion, it’s really the only way you can actually be sure your code is working as intended. You can’t rely on your code-reading skills, and even if you could, things that “make sense” don’t always work. Now, I know unit testing doesn’t actually prove your code works (though there are ways to do that as well), but it does let you know that it works in at least the cases you’ve tested (in perpetuity, if you run the tests after every build).

Recently, I ran into an interesting problem concerning unit testing. For our metrics suite, Orbus is creating a code generator and supplying templates for various languages. The problem: how do you write a unit test (or functional test, in this case) for a code generator? In my mind there are only a few options:

  1. Take a test case and write out the code you want the generator to generate. Compare the generated code to the known value with a diff utility.
  2. Run the generator once, and take this as the known value. Compare the generated code to the known value, again with a diff utility.
  3. In a functional test, run the generator, compile the resulting code, then run it with various inputs, testing its outputs (much like a standard unit test).

In my mind, the first and second options are an exercise in futility. In reality, you don’t care about the text in the functions; you care that the generator produced functions that take certain parameters and produce certain output. By trying to compare the text, you don’t actually get one of the main benefits of automated testing: discovering whether changes you made internal to the code (via refactoring or by adding additional functionality) have broken it. In this case, you want to be able to change the code generated inside the functions whenever you want, so long as the signatures remain the same and so long as they produce the proper output to external libraries. Basically, you want to treat your generated code as a black box.

The problem with the third option is that it’s really hard to do for some languages. For scripting languages it’s pretty easy, right? Just load up your trusty interpreter and feed it a few scripts that utilize your library. .NET languages have it pretty easy as well, since most of them have CodeDom objects that allow you to compile assemblies on the fly and load them, and .NET’s reflection also allows you to check the number and types of the parameters you’re loading, so no problems there.
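As a quick illustration of that CodeDom route, here’s a minimal sketch. The toy generated source and all the names are made up for the example; this isn’t our actual test code:

using System;
using System.CodeDom.Compiler;
using System.Reflection;
using Microsoft.CSharp;

// Compile some "generated" source in memory, then use reflection to
// verify that it exposes the signature we asked the generator for.
class GeneratedCodeCheck
{
    static void Main()
    {
        string generatedSource =
            "public static class Metrics { " +
            "  public static int LogEvent(string name, int value) { return 0; } " +
            "}";

        CompilerParameters options = new CompilerParameters();
        options.GenerateInMemory = true;

        CompilerResults results = new CSharpCodeProvider()
            .CompileAssemblyFromSource(options, generatedSource);

        if (results.Errors.HasErrors)
            throw new Exception("Generated code failed to compile");

        MethodInfo method = results.CompiledAssembly
            .GetType("Metrics")
            .GetMethod("LogEvent", new Type[] { typeof(string), typeof(int) });

        Console.WriteLine(method != null ? "Signature OK" : "Signature missing");
    }
}

In reality, then, the problem is really only with native, compiled languages. For me, this was specifically a problem with native C++. Thankfully, I’ve actually “solved” the problem (at least for a C interface) in C# unit testing. Here are the steps our functional tests go through to test our generated code: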

  1. Generate the code to a temporary directory and build it into a .dll. The .dll I built exposes all of the API functions that someone could use from the generated code. This isn’t hard to do using preprocessor macros and the like, and we need to do it anyway so that potential clients can use our API in a .dll if they so choose. To aid in the building process, the generator also builds a .vcproj file (it’s just XML) for all of the generated files and then uses devenv /build to build it.
  2. Load the library and do any initialization. Using LoadLibrary, you can get a handle to a loaded .dll pretty easily. Initialization is where things get hairy. In our API, we need to supply a connection object to a (static) initialization function. This object is really what generates the “output” of our library, so I wanted the connection object to be accessible to our C# unit tests. To do this, I created two mock objects in C++/CLI: one managed connection object, and one unmanaged connection object that contains an auto_gcroot handle to the managed connection. I then pass the native class to the initialization routine, like so:
    // Look up the exported init function and hand it our mock connection
    typedef bool (*InitFunc)(Connection*);
    InitFunc proc = (InitFunc) GetProcAddress(module, "InitFunction");
    MockNativeConnection* pconnection = new MockNativeConnection(managedConnection);
    proc(pconnection);
    

    Now all calls into my library will use that connection object (it’s a singleton), so it’s a pretty easy way for me to check that everything is running nicely.

  3. Use P/Invoke to call out to your system. Now, you could write typedefs for all the functions you’ve generated and call them in C++/CLI using the same GetProcAddress approach from above, but since I want to do the actual tests in C#, it makes more sense to use the DllImport attribute from C# to import the methods you’re looking for. Furthermore, P/Invoke will do automatic marshaling of most of your types, and will automatically do name matching for you. In general, it's just an easier interface to use than GetProcAddress. Here’s an example:
    [DllImport("MyDll.dll", CharSet = CharSet.Ansi)]
    static extern int LogEvent(string asDataPoint1, int aiDataPoint2);
    
    [Test]
    public void TestLogEvent()
    {
    	LogEvent("My Test", 1);
    
    	Assert.AreEqual("LogEvent", _connection.ProcedureName);
    	Assert.AreEqual("My Test",  _connection.Parameter[0]);
    	Assert.AreEqual(1, _connection.Parameter[1]);
    }
    

    This tests that my generated event method (LogEvent) told the connection object to execute the “LogEvent” procedure with the two parameters I supplied.

Now, this may seem like a lot of work, but it’s worth it if you feel that you’re either going to be changing your generators a lot (or if you want the peace of mind of having these tests around), or if (like me) you’re going to be generating for a lot of different languages and platforms that can all compile to .dlls. With this library, I should be able to test any language that compiles to a native .dll, no problem (so long as it exposes a C-like interface and doesn’t use any structures….)

Next, I’m going to work on grabbing C++ objects from the .dll instead of just a straight C interface and testing those. This stands to be a much more daunting task, since, unlike P/Invoke, I can’t rely on auto marshaling, and I can’t actually code the structures if I want them to be reusable. I’ll write up what I find.

Tools of the Trade

So, the official Orbus Gameworks blog, Measuring Gameplay, had a post yesterday about some work I’ve done integrating metrics into GtkRadiant. Something Darius mentioned, but I don’t think he made a big enough deal about, is my commitment (and, really, Orbus’s commitment) to making sure that our APIs and utilities are easy to integrate into existing pipelines and tools, no matter what language or system they’re running on. The GtkRadiant work is just a starting proof of concept of how that might work, and how people might want to do metric integration with current tools.

For me, making integration APIs easy just makes sense, for a number of reasons. First of all, from a practical perspective, there’s absolutely no way that Orbus could support loading all of the various pieces of game data (including map files) for every single game our clients would want metrics for. It’s just not feasible. I even felt that writing our own Quake 3 map loader, though simple (and there are many tutorials and plenty of example code on how to do it), didn’t make sense. It was easier (and more productive) simply to learn GtkRadiant’s plugin system and work through that. I can’t even imagine how hard it would be to make loaders for closed level formats.

Second, and actually more importantly, if you want your designers to look at, analyze, and act on your metrics (which is the whole point, right?), you need to be able to display them in something they’re already comfortable with. This means integrating into the toolsets that your designers use every day, which are usually your level editors. Additionally, integrating into the tools that actually caused the phenomenon in the first place alleviates the need for your designers to task-switch between the metrics tools and the tools that will fix the problem. This holds as true for death locations being shown in your level editor as it does for pick-up-and-use metrics being shown in your spawn and power tables.

In my opinion, this shows why making the metric gathering system easy to use is just as important as making the analysis and integration systems easy to use. Gathering the data is just the first step. Putting it somewhere you can use it is just as important, and it’s our job to make absolutely sure that doing that is as quick and painless as possible.