More on the Server Conundrum

So, I’ve started development on multiple virtual servers, and I’ve found that VMware actually makes things easier than I thought it would, at least for the structure I discussed at the end of the last post. Some things I’ve found that make this setup a bit easier:

First, you can set up VMware so that all virtual servers are on their own virtual network, shared only with the host machine. This lets you name each server by its role without the names conflicting between developers’ machines. So I can have a virtual machine named JBoss on my network and you can be developing against the “same” virtual machine named JBoss, but there won’t be a naming collision, because both machines are on separate virtual intranets hidden behind our actual boxes. This is extremely useful, and because it’s NAT, your virtual machines can still access the internet. Double bonus.
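For reference, this is roughly how the relevant setting looks in a guest’s `.vmx` file (a sketch from memory — the exact keys can vary by VMware version, and the GUI will normally write these for you):

```
# Give the guest one virtual NIC, attached to the NAT network
# (vmnet8 by default in VMware Workstation).
ethernet0.present = "TRUE"
ethernet0.connectionType = "nat"
```

Each developer’s host runs its own private copy of that NAT network, which is why identically named guests on different machines never collide.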

Second, I’ve found that having VMware split the disk into multiple 2 GB files, and not having it allocate the entire disk space up front, is a very good thing. Why? Because copying a full disk image (even a small one) across the network is a slow process, and even though you take a performance hit on the virtual server itself, it’s worth it in the long run in developer hours.
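If you’re creating disks from the command line rather than the New Virtual Machine wizard, VMware ships a `vmware-vdiskmanager` tool that exposes the same choice. A sketch (check your version’s help output, since the flag meanings may differ between releases):

```
# -c    create a new virtual disk
# -s    maximum size (space is not preallocated for growable types)
# -t 1  growable disk split into 2 GB files -- the layout discussed above
vmware-vdiskmanager -c -s 20GB -t 1 jboss.vmdk
```

The split, growable layout keeps each file small enough to copy around sensibly.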

Third, although I haven’t implemented this yet, I would recommend setting up the “basic box” and then keeping a source control repository for anything anyone might need to change: configuration files, server deployments, or source code locations. You may need to supply some sort of name translation system if you’re not running DNS on your local network (which, yeah, I’m not…), but having developers copy disk images around constantly would probably become a huge pain. Better to only need to move drives around when something major on the box changes, like an OS upgrade or an upgrade to certain specific pieces of software.
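As a sketch of what I mean by a name translation system (the file format and names here are hypothetical — just a mapping, checked into source control, from server names to the IPs on each developer’s virtual network):

```python
def load_host_map(path):
    """Parse a simple 'name address' mapping file checked into source control.

    Expected (hypothetical) format, one entry per line:
        jboss   192.168.87.128   # '#' starts a comment
    """
    hosts = {}
    with open(path) as f:
        for line in f:
            line = line.split("#", 1)[0].strip()  # strip comments and blanks
            if not line:
                continue
            name, addr = line.split()
            hosts[name.lower()] = addr  # case-insensitive lookup key
    return hosts

def resolve(hosts, name):
    """Look up a server name case-insensitively; returns None if unknown."""
    return hosts.get(name.lower())
```

Each developer keeps a local copy of the mapping file pointing at their own virtual intranet, so shared code can ask for “jboss” without caring which box it actually lands on.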

Lastly, don’t try to dynamically start/stop servers in your unit tests, or even before your unit tests. It takes way too long, and even after the server is powered up, there’s no easy way to tell when it’s fully booted. Better to just leave it running on your build/test machine and have developers start it when they need it. That is, of course, assuming that your build machine is up to the task (ours currently isn’t, but I’m fixing that…)
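For what it’s worth, the closest thing I know of to an “is it booted yet?” check is polling a TCP port until the service accepts connections. It’s a rough heuristic at best (an open port doesn’t mean the app server has finished deploying), and it also illustrates why start-in-tests is so slow. A minimal sketch:

```python
import socket
import time

def wait_for_port(host, port, timeout=60.0, interval=1.0):
    """Poll until a TCP connection to host:port succeeds.

    Returns True once a connection is accepted, False if the timeout
    expires first. Caveat: an accepting socket only proves the listener
    is up, not that the server has finished initializing.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=interval):
                return True
        except OSError:
            time.sleep(interval)
    return False
```

Even with a helper like this, you’re burning a minute or more per test run just waiting on boot — which is exactly why leaving the server running wins.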

Even with that done, there’s still a lot more to figure out here. I’m still wondering if this is the best way to go about it. I know that in most cross-platform development, you just recompile and re-run your tests, either on a virtual machine or an actual machine of that platform, and you’re done. But having one library that needs to be able to connect to multiple different types of servers in many different combinations, and unit testing all of them… I’m not sure how often this is a problem. Usually, I think (especially in middleware), some sort of architecture is just assumed, or you force the issue. Here, I’m trying to be as flexible as possible, even if it means my build server hosting four different virtual servers, just to make sure all possible combinations of integration work.