A .Net Developer’s Adventure in Minecraft Modding
Why Mod Minecraft
My kids love Minecraft. Actually, love is too weak a word. They are always asking me to get this mod and that skin, and they are just sooo passionate about this game. They watch YouTube videos of mod developers showing off their new mods, and it occurred to me that I could probably make a mod. I mean, I am a software engineer. It's just Java. Mind you, I haven't touched Java in many years, but it's object-oriented programming; how different can it be from C#? So, I told my kids what I wanted to do and that I would need their help, and they were very excited. This gives me a chance to discover something they are into, and I get to introduce them to programming... win-win. And the adventure begins.
Java Development Environment
The first order of business is to get a Java Development environment up and running. I decided on Eclipse as my IDE as it comes highly recommended and has an awesome community. Below is what I installed.
Java JDK (Java Development Kit) – I downloaded JDK 7 – http://www.oracle.com/technetwork/java/javase/downloads/index.html
Java JRE (Java Runtime Environment) – this actually came with the JDK and I already had it installed.
Eclipse – I downloaded the standard – http://www.eclipse.org/downloads/
While I was at it, I decided to round out my dev rig with JUnit for testing and a private GitHub account for source control (I will open it up to the public when I have something that won't crash and burn). You will also see later in this post that I added the Gradle build tool and some Java functional programming goodness with Scala. If I'm going to do this, I'm going all out.
Minecraft Loader
The Minecraft loader is the program that launches the Minecraft game. You purchase the loader on minecraft.net.
Minecraft Forge
Minecraft Forge is a Minecraft Mod Loader and API. As I understand it right now, it is a wrapper around the official Minecraft game that allows you to load mods (modifications). Forge provides a simplified API for working with the Minecraft source code to make mods. The Forge Mod Loader allows you to load mods made with the Forge API.
I decided on Minecraft Forge because it seemed to have a good community. I originally wanted to use something called MCP; from what I understand, Forge actually uses MCP to decompile the Minecraft source, though I also read that Forge is working on its own decompiler. I didn't use MCP directly because I got frustrated; I am a noob and couldn't figure some things out in the install process. Plus, the kids tell me that Forge has the cool mods.
Anyway, you download the latest version of Forge for your version of Minecraft, but a community member told me that version 1.6.4 had the most mods at the time I wrote this. Minecraft.exe currently defaults the game version to 1.7.4 while Forge is at 1.7.2, and this gave me the biggest headache trying to get the two to work together, until I figured out you can edit your Minecraft profile in the loader and select the version you want to play (this would have fixed my MCP issue too). So let's get Minecraft 1.6.4 working with Forge 1.6.4. Here is how I got the Forge loader working:
- Launch Minecraft
- Update profile to use the 1.6.4 version and save it
- Click the Play button (this sets up the 1.6.4 files that are needed for Forge)
- Then download the recommended Forge 1.6.4 installer from http://files.minecraftforge.net/
- Run the installer
- Press the Windows key plus R to open the Run dialog, type %appdata%, and click OK (you can also type this in Windows Explorer)
- Open the .minecraft folder; this holds all of the Minecraft assets
- Open the versions folder
- Open the Forge folder for the Forge version you downloaded and copy the two files
- Go back to the versions folder and create a new folder, naming it whatever you want (remember this name, you will need it)
- Open the folder and paste the files you copied
- Rename both files to the same name you used for the folder
- Open the file with the .json extension, find the id field, change it to the name you used for the folder, and save
- Launch Minecraft
- Update profile to use the version that matches the name of the new version folder and save it
- Click the Play button
- You will now be running the Forge mod loader you just installed, and you can add Forge-based mods by copying the mod zip files to the mods folder inside the .minecraft folder (make sure your mods match the version you are running or it will crash and burn)
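The copy/rename/ID-edit dance in steps 5 through 9 is easy to get wrong by hand. Here is a rough Python sketch of those same steps (Python just for illustration; the folder and file names below are stand-ins, not the real Forge artifacts):

```python
import json
import shutil
import tempfile
from pathlib import Path

def install_forge_profile(minecraft_dir: Path, forge_version_dir: str, new_name: str) -> Path:
    """Mirror the manual steps: copy the two Forge files into a new
    versions/<new_name> folder, rename them to match the folder, and
    change the "id" field in the .json to the folder name."""
    versions = minecraft_dir / "versions"
    source = versions / forge_version_dir
    target = versions / new_name
    target.mkdir()
    for src in source.iterdir():
        # Rename both files to the same name you used for the folder (keep extensions).
        shutil.copy(src, target / (new_name + src.suffix))
    json_file = target / (new_name + ".json")
    data = json.loads(json_file.read_text())
    data["id"] = new_name  # the step where you edit the ID field and save
    json_file.write_text(json.dumps(data))
    return target

# Demo against a fake .minecraft layout (placeholder names, not real Forge files).
root = Path(tempfile.mkdtemp())
src_dir = root / "versions" / "1.6.4-forge"
src_dir.mkdir(parents=True)
(src_dir / "1.6.4-forge.jar").write_bytes(b"fake jar")
(src_dir / "1.6.4-forge.json").write_text(json.dumps({"id": "1.6.4-forge"}))

new_dir = install_forge_profile(root, "1.6.4-forge", "my-forge-164")
print(sorted(p.name for p in new_dir.iterdir()))  # ['my-forge-164.jar', 'my-forge-164.json']
```

The name you pick for the folder is the same name you select in the Minecraft profile afterwards.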
Mod Development Setup
Now we will continue with our Development Environment setup to get it ready for mod development.
- Download the recommended Forge 1.6.4 src from http://files.minecraftforge.net/ (you can get whatever version of Minecraft you want to develop for)
- Extract the zip to a folder anywhere you want
- Then open the folder and run the install.cmd
- After the install completes, open Eclipse
- Select a workspace by browsing to the mcp/eclipse folder inside your Forge install and click OK
This will import the Minecraft source code into the IDE, and you can get to work modding your Minecraft world. The install took a while, so you can take a break after you start it.
Other Goodies
Scala
During the Forge install I noticed in the command window that it works with MCP and does a lot of decompiling, updating, and recompiling of the Minecraft source code. I also noticed a message: "scalac" is not found on the PATH. Scala files will not be recompiled. I am not sure if I will need Scala, but I have always wanted to use it, so this is as good a time as any to get it set up in my Java environment. I downloaded the Scala installer from http://www.scala-lang.org/ and allowed the setup to update my system path variables. I didn't feel like rerunning the Forge installer, so hopefully I won't need the recompiled Scala files for basic learning.
Gradle
In earlier versions of Forge you had to use the Gradle build tool to set up your source code and Eclipse for development. Even though it isn't necessary in the version I am using, I still set up Gradle because it seems very cool, well, geek cool. I really need to look into Gradle more for my .Net environment, as it has some very interesting concepts for build environments. Anyway, you can download Gradle from http://www.gradle.org/downloads and just unzip it to some location on your machine. Then copy the path to the Gradle bin folder and add it to your system PATH environment variable. That's it; you have a Java build tool.
Minecraft Server
I want to host our mods on our own Minecraft server, but I ran into the version issue again. I have the 1.7.4 server, but my mods will be 1.6.4. Well, with a little URL hacking you can download the 1.6.4 version of the server from the Minecraft download server. This is the same server jar URL as the latest release; I just changed the release from 1.7.4 to 1.6.4: https://s3.amazonaws.com/Minecraft.Download/versions/1.6.4/minecraft_server.1.6.4.jar
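The hack generalizes because the version number appears twice in the path. A tiny Python sketch of the URL construction (assuming Mojang keeps the same S3 layout it had at the time of writing):

```python
def server_jar_url(version: str) -> str:
    """Build the Minecraft server jar download URL for a given release.
    Assumes the s3.amazonaws.com path layout used at the time of writing."""
    return ("https://s3.amazonaws.com/Minecraft.Download/versions/"
            f"{version}/minecraft_server.{version}.jar")

print(server_jar_url("1.6.4"))
```

Swap in whatever release matches the mods you want to host.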
Conclusion
That's it for now; I will try to post some of my modding experiences later. I am in a very strange land, but I think it will be great doing something both constructive and fun with my kids.
Validating Tab Order with WebDriver
I had a spec that defined the tab order on a form. Starting with the default form field, the user can press the tab key to move the cursor to the next form field, and tabbing through the fields must follow a specific order. I couldn't find much on Google or Bing to help automate this with WebDriver; maybe I'm losing my search skills.
Below is code to implement this with WebDriver. In production I use a SpecFlow Table instead of an array to hold the expected tab order, and I have a custom wrapper around WebDriver, so much of this code is hidden from test code. Below is the untested gist of my production implementation. Since all of my elements have IDs, and yours should too, we simply validate that the active element has the ID of the current element in the array iteration.
- If the element doesn’t have an ID, fail the test.
- If the element ID doesn’t match the expected ID, fail the test.
- If the ID matches, tab to the next element and loop.
public void TestTabOrder()
{
    //Code to open the page elided.
    ....
    //This is the expected tab order. The strings are element IDs, so the test assumes all of your elements have IDs.
    string[] orderedElementIds = new string[] { "FirstControl", "SecondControl", "NextControl" };
    foreach (var elementId in orderedElementIds)
    {
        //Get the current active element (the element with focus).
        IWebElement activeElement = webDriver.SwitchTo().ActiveElement();
        //Get the id of the active element.
        string id = activeElement.GetAttribute("id");
        //If the active element doesn't have an id, fail the test because all of our elements have IDs.
        if (string.IsNullOrWhiteSpace(id))
        {
            throw new AssertionException("Element does not have expected ID: " + elementId);
        }
        //If the active element doesn't match the current ID in our orderedElementIds array, fail the test.
        if (elementId != id)
        {
            throw new AssertionException("Element: " + elementId + " does not have focus.");
        }
        //Tab to the next element.
        activeElement.SendKeys(Keys.Tab);
    }
}
You don't have to assert anything, as the exceptions will fail the test (AssertionException is NUnit's; with MSTest you would throw AssertFailedException or call Assert.Fail), hence no exception equals a passing test. You get a bonus assert with this test in that it also verifies that a certain element has default focus (the first element in the array).
I am sure there is a better way to do this, but it works. Hope this helps someone, as it wasn't well publicized.
Calling Overridden Virtual Method from Base Class in C#
I had a serious brain freeze today. I forgot polymorphism 101. I couldn't remember whether a virtual method defined in a base class and called in the constructor of the base class would call the overridden virtual method in a derived class. Instead of ol' reliable (Google search), I decided to do a quick test, because doing beats searching; it kind of burns in the lesson for me. Hopefully, I won't forget this one for a while. Anyway, the answer is yes, it will call the override, and here is a test if you are having a brain freeze too and want to prove it (implemented with MSTest).
namespace BrainFreeze
{
    using System;
    using Microsoft.VisualStudio.TestTools.UnitTesting;

    [TestClass]
    public class PolymorphismTest
    {
        [TestMethod]
        public void BaseClassVirtualMethodWhenCalledFromBaseConstructorShouldCallDerivedVirtualOverride()
        {
            string expected = "ImpCallVirtual";
            Imp imp = new Imp();
            string actual = imp.Result;
            Assert.AreEqual(expected, actual);
        }

        public class Base
        {
            public Base()
            {
                CallVirtual();
            }

            public string Result { get; set; }

            public virtual void CallVirtual()
            {
                this.Result = "BaseCallVirtual";
            }
        }

        public class Imp : Base
        {
            public Imp()
                : base()
            {
            }

            public override void CallVirtual()
            {
                this.Result = "ImpCallVirtual";
            }
        }
    }
}
Typing Git Username and Password is Lame
I set up a local Git server to serve as a central repository. Every time I push changes I have to submit my username and password, and it got old real quick. I discovered that it is very easy to get around this, although what I am about to share is a little insecure, as I am storing my credentials in plain text (there are ways to secure this).
First a little background. I am using TortoiseGit as my Git client, I am on Windows 7, and my Git server is not exposed to the public internet.
To allow my credentials to be found I first ran this command:
setx HOME %USERPROFILE%
This sets up a Home environment variable on my system that points to my user profile (see this for more info http://technet.microsoft.com/en-us/library/cc755104.aspx).
Then I created a text file named _netrc in the root of my user profile folder (C:\Users\{yourusername}\_netrc). In the text file I list the machine name, login, and password for each Git server I want to interact with. I assume this could also work for any server that accepts HTTP credentials.
machine mycoolserver
login mysecretlogin
password mysecretpassword
machine someotherhost.com
login mysecretlogin2
password mysecretpassword2
Machine is the root name of the server you are connecting to; in my case I have a local server without a top-level domain (no .com). Then you add your credentials. Like I said, this is saved in plain text, so you have to be careful with it and make sure you use credentials that you don't use on any other accounts (e.g. your bank account).
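If you'd rather script the file creation than hand-edit it, here is a small Python sketch that writes a _netrc into whatever profile directory you point it at (the machine/login/password values are the placeholders from above, and the helper name is my own):

```python
import tempfile
from pathlib import Path

def write_netrc(profile_dir, entries):
    """Write a _netrc file: one machine/login/password triple per entry.
    This is plain text, so only use throwaway credentials."""
    lines = []
    for machine, login, password in entries:
        lines += [f"machine {machine}", f"login {login}", f"password {password}"]
    netrc = Path(profile_dir) / "_netrc"
    netrc.write_text("\n".join(lines) + "\n")
    return netrc

# Demo against a temp dir standing in for C:\Users\{yourusername}
profile = Path(tempfile.mkdtemp())
netrc_path = write_netrc(profile, [("mycoolserver", "mysecretlogin", "mysecretpassword")])
print(netrc_path.read_text())
```

On a real machine you would point profile at %USERPROFILE% after running the setx command above.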
Thanks to StackOverflow and VonC for help on this.
Setup a NuGet Server
Setting up a NuGet server is so easy that everyone should do it. Why? If you are beholden to corporate policies that restrict the applications and references your projects can have, you can still benefit from the awesomeness of NuGet by hosting corporate-approved packages. If you have a critical build process, you may not want to depend on the reliability of third-party servers. Oh, I could keep going, but I won't. The point is, with 5 easy steps (depending on how you break it down), you can have a NuGet server up and serving packages.
- Create an Empty Web Application (I’m using Visual Studio)
- Use NuGet to add a reference in the Web Application to "NuGet.Server"
- Add the nupkg files that you want to host to the Packages folder
- Deploy the Web Application
- Add the URL of the Web Application to your local NuGet package manager.
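For that last step, instead of clicking through the package manager UI you can also register the feed in a NuGet.config; something like this (the key name and URL are placeholders for your own server):

```xml
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <!-- "MyCompanyFeed" and the URL are placeholders for your deployed Web Application -->
    <add key="MyCompanyFeed" value="http://yourbuildserver/nugetserver/" />
  </packageSources>
</configuration>
```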
Thanks to docs.nuget.org and Adam James Naylor for opening my eyes to how simple this is:
http://docs.nuget.org/docs/creating-packages/hosting-your-own-nuget-feeds
http://www.adamjamesnaylor.com/2013/04/26/Setting-Up-A-Private-NuGet-Server.aspx
.NET Code Coverage with OpenCover
I made more progress in improving my Code Quality Pipeline: I added test code coverage reporting to my build script. I am using OpenCover and ReportGenerator to generate the code coverage reports. After getting these two tools from NuGet and Binging a few tips, I got this going by writing a batch script to handle the details and having NAnt run the bat in a CodeCoverage target. Here is my bat file:
REM This is to run OpenCover and ReportGenerator to get test coverage data.
REM OpenCover and ReportGenerator were added to the solution via NuGet.
REM Need to make this a real batch file or execute from NAnt.
REM See reference, https://github.com/sawilde/opencover/wiki/Usage, http://blog.alner.net/archive/2013/08/15/code-coverage-via-opencover-and-reportgenerator.aspx

REM Bring dev tools into the PATH.
call "C:\Program Files (x86)\Microsoft Visual Studio 12.0\Common7\Tools\VsDevCmd.bat"

mkdir .\Reports

REM Restore packages
msbuild .\.nuget\NuGet.targets /target:RestorePackages

REM Ensure build is up to date
msbuild "MyTestSolution.sln" /target:Rebuild /property:Configuration=Release;OutDir=.\Releases\Latest\NET40\

REM Run unit tests under OpenCover
.\packages\OpenCover.4.5.2316\OpenCover.Console.exe -register:user -target:"C:\Program Files (x86)\Microsoft Visual Studio 12.0\Common7\IDE\mstest.exe" -targetargs:"/testcontainer:.\source\tests\MytestProjectFolder\bin\Debug\MyTestProject.dll" -filter:"+[MyTestProjectNamespace]* -[MyTestProjectNamespace.*]*" -mergebyhash -output:.\Reports\projectCoverageReport.xml
REM The filter +[MyTestProjectNamespace]* includes all tested classes; -[MyTestProjectNamespace.*]* excludes items not tested.

REM Generate the report
.\packages\ReportGenerator.1.9.1.0\ReportGenerator.exe -reports:".\Reports\projectCoverageReport.xml" -targetdir:".\Reports\CodeCoverage" -reporttypes:Html,HtmlSummary -filters:-MyTestProject*

REM Open the report - this is just for local running
start .\Reports\CodeCoverage\index.htm
pause
Issues
I have integration tests that depend on files in the local file system. These were failing because OpenCover runs the tests from a different path than the one the files are copied to during the build. To overcome this, I added the DeploymentItem attribute to my test classes for all of the files my tests depend on. This attribute causes the files to be copied to the test run location along with the DLLs when OpenCover does its thing.
[TestClass]
[DeploymentItem("YourFile.xml")] //Can also be applied to [TestMethod]
public class YourAwesomeTestClass
{
}
Another problem prevented the database connection strings from being read from the app.config: I was running MSTest with the /noisolation command line option. I removed the option and it worked. It seems /noisolation is there to improve the performance of the test run. I don't see much difference in timing right now, and when I hit a wall in test execution time I will revisit it...no premature optimization for me.
Virtualization Strategy for Browser Based Testing
I have been ramping up my knowledge and strategies for browser-based testing on virtual machines (VMs) and thought I would capture some of the best practices I have so far.
- Start a new VM at the start of each test and destroy it at the end of the test.
- Keep VM images small. Only have the bare minimum of software needed to run your test included in the VM image. Get rid of any default software that won’t be used.
- Compress the VM image.
- Image storage
- SAN (storage area network) – expensive, but the best option for IO-intensive scenarios such as this.
- Use solid state drives – the next best option, but still expensive. You get more efficient access from one drive compared to rotating-head drives.
- One image per drive on rotating-head drives – the least expensive option, but also the least efficient. Since IO is slow on these drives, you can spread your images across multiple drives to improve parallel VM startup.
That’s where I am so far. Still need to get experience with various implementations of each practice. Should be fun.
NUnit, OK Maybe
Don't get me wrong, there is nothing wrong with NUnit, and it may or may not be superior to MSTest. I currently use MSTest in my personal projects, and the jury is still out on whether I will use it at work. I just never found a truly compelling reason to use one over the other. MSTest comes well integrated with Visual Studio out of the box and had the least amount of pain in terms of setup and getting a test project going. With the release of VS 2012, the playing field has been leveled a bit more, as I can run an NUnit test through the Test Explorer just like an MSTest/VSTest test. This is accomplished by adding a simple NuGet package to the test project: NUnit Test Adapter for VS2012 and VS2013.
Anyway, another compelling reason to choose one over the other that I keep bumping into is the ability to run tests in parallel. MSTest can run tests in parallel, but the implementation doesn't sound solid judging by some of the posts I have been reading. VSTest, the VS 2012+ default test engine, does not run tests in parallel. NUnit does not support parallel execution either, although the community has been waiting on the next version that is supposed to have this feature...if it is ever released.
Actually, the reason for this post is that I was doing a little reading up on PNUnit, which is supposed to run NUnit tests in parallel. I'm not sure how good the project is, but their website discusses the need to run tests across Windows and Linux. Ah-ha! There you go. If you need to run cross-platform tests you may lean towards NUnit, and with PNUnit providing parallelization you may lean a little bit more.
I guess I am going to toy around more with the NUnit VS2012 integration to see if I can get as comfortable a workflow as I have with MSTest. I will also toy around with PNUnit, as it would have an immediate impact on my choice of automation engine at work.
Page Object as Collection of Controls as Collection of Elements
I love the whole modular development ideal. Creating small discrete modular chunks of functionality instead of large monolithic globs of interdependent functionality helps to promote code reuse and increase maintainability of a code base. Well, for me, it was natural to extend this design pattern to the Page Object Model in functional browser based testing.
In ASP.NET Web Forms there is a style of web development that favors building the various parts of a page in User Controls. Your pages become a collection of User Controls that you can stitch together at runtime. So, your home page might have a main content control, slide show control, and secondary content control. Wrapping the home page could be a master page that has a masthead control, navigation control, and footer control. In each control would be the elements that make up a particular section of a web page. Anytime you need new functionality, you build a new user control and plug it into your application. When I am building my page objects for my tests I figured I would follow the same concept.
When I model a web page as a page object I start with the page wrapper that provides a shell for the other objects contained in the page. I will model the various user controls as control objects and I will add them to the page object that represents the page. This modularization also helps me to quickly and easily compose new pages as I don’t have to recreate common page parts.
The page wrapper just functions as a container control object and can provide communication between the controls and a central point to access page state. I try to keep the page wrapper light on functionality and focus on composition to provide the functionality that tests need.
I mentioned master pages; I model master pages through inheritance instead of composition. If the web page I am modeling uses a master page, its page object will inherit from another page object that models the master page. This is another way to cut down on duplication while increasing maintainability.
This pattern is probably common in the testing community, so I need to do more research on it. It is a work in progress for me, as I am still not solid on how to implement the composition. Should the child objects know about their containing page object? How should I manage the WebDriver across controls? What if a User Control is added to a page multiple times; how should I model that? I am trying different solutions to these problems, plus more common and edge cases that get me stuck in the mud from time to time. Hopefully, I can provide some tried and true strategies for this extension of the page object pattern as I exercise it over the next year.
For now here is sort of where I am. I start with an interface to define the page object contract and a base page to define the core functionality all pages should have. From these abstractions I build up an actual page model as I described earlier by composing control objects that in turn compose element objects.
Below is some early code for my page abstractions. I won’t go into the specifics of this code, but you can get the gist of where I am headed. One thing to note is that I have abstracted the concept of Browser and Test Environment. This gives me flexibility in the usage of various Browser Automation Frameworks and the ability to easily configure tests for various test environments. Actually, I also have a base control object to model User Controls and an object that models page elements (think WebDriver element, but abstract so I can wrap any Browser Automation Framework). OK, last note is PageKey is used in my reporting module. As test results are collected I also store the page key with the results so that I have traceability and can be more expressive with analysis of the result data.
//This is not production code
public interface IPage
{
    string PageKey { get; }
    string PageUrl { get; }
    string PageVirtualUrl { get; }
    ITestEnvironment TestEnvironment { get; }
    string Title { get; }

    bool HasTitle();
    bool HasUrl();
    bool IsOpen();
    void Open();
}

public class BasePage : IPage
{
    public BasePage(string pageKey, Browser browser, ITestEnvironment environment)
    {
        this.Initialize();
        this.PageKey = pageKey;
        this.Browser = browser;
        this.TestEnvironment = environment;
    }

    private BasePage()
    {
    }

    public Browser Browser { get; protected set; }

    public string PageKey { get; protected set; }

    public string PageUrl
    {
        get
        {
            return this.GetPageUrl();
        }
    }

    public string PageVirtualUrl { get; protected set; }

    public ITestEnvironment TestEnvironment { get; protected set; }

    public string Title { get; protected set; }

    public virtual bool HasTitle()
    {
        return this.Title == this.Browser.Title;
    }

    public virtual bool HasUrl()
    {
        if (!string.IsNullOrEmpty(this.PageUrl))
        {
            if (this.Browser.HasUrl(this.PageUrl))
            {
                return true;
            }
        }

        return false;
    }

    public virtual bool IsOpen()
    {
        if (!this.HasUrl())
        {
            return false;
        }

        return this.HasTitle();
    }

    public virtual void Open()
    {
        Browser.Open(this.PageUrl);
    }

    //Derived pages override this to set PageVirtualUrl, Title, etc.
    protected virtual void Initialize()
    {
    }

    private string GetPageUrl()
    {
        if (this.TestEnvironment == null)
        {
            return string.Empty;
        }

        string baseUrl = this.TestEnvironment.BaseUrl;
        string virtualUrl = this.PageVirtualUrl ?? string.Empty;
        if (string.IsNullOrEmpty(baseUrl))
        {
            return string.Empty;
        }

        if (!baseUrl.EndsWith("/"))
        {
            baseUrl += "/";
        }

        if (virtualUrl.StartsWith("/"))
        {
            virtualUrl = virtualUrl.Substring(1);
        }

        return string.Format("{0}{1}", baseUrl, virtualUrl);
    }
}
Have You Heard about Event Sourcing?
I cut my teeth in programming in the BASIC language on a computer that had a tape recorder as its persistent memory store (if you don't know what a tape recorder is, Google it). From there I transitioned to VBA and VBScript, which wasn't a stretch because it was all procedural: chaining a bunch of instructions together to make the computer do what I want.
During my VB scripting days I was exposed to ASP and relational databases, first through Access and then SQL Server. Codd's third normal form was not that much of a stretch for me to grasp. When .Net 1.0 came along, I jumped from ASP to ASP.NET as soon as it was released, and I took all of my procedural habits along with me to Visual Basic .NET.
Then, as I was barely getting started with .Net, I heard all the buzz around C# and object orientation, and I just didn't get it. I tried to force my procedural understanding into an OOP hole. In ASP I would create separate scripts for little pieces of functionality I wanted to reuse, and I thought I was getting the same benefit of reuse and object composition that everyone was raving about with C#. How little I knew.
Today I find myself in the same boat trying to understand Event Sourcing. I am trying to fit Event Sourcing into a relational hole, but this time I won’t spend a couple years just doing it absolutely wrong. My boss asked me to talk about Event Sourcing and I took it as an opportunity to learn more about it, even though I will likely never give the talk. I did quite a bit of research and this is more of a post on where you can find some useful info.
Everyone starts with Wikipedia definitions (not sure why), so here is Event Sourcing according to Wikipedia...wait, there isn't a Wikipedia page for it (as of 8/26/2013). Even Martin Fowler has Event Sourcing marked as a work in progress on his EAA page on the subject. So why the hell are we talking about it?
Event sourcing is in production on some of the most data intensive systems on the planet. People way smarter than me advocate it. Also, sometimes it’s nice to be on the cutting edge of a movement as it forces you to innovate.
Event Sourcing is a data persistence technique that focuses on capturing the state of an application over time. Each state change is captured in an event object, and the event objects are stored in sequential order according to the time the state changed. Once a state change is captured it can't be changed or undone; it is immutable. To fix a mistaken state change you have to issue a compensating state change. So your persisted state is the gospel: if it was stored, you can trust that it is true and wasn't tampered with (outside of some malicious change to mess with you).
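To make that concrete, here is a toy sketch (in Python, purely for illustration, and nothing like a production event store): an append-only log of events, current state derived by replaying them, and a compensating event instead of an edit:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)  # events are immutable once recorded
class Event:
    kind: str    # e.g. "Deposited" or "Withdrew"
    amount: int

@dataclass
class Account:
    events: list = field(default_factory=list)  # append-only event log

    def record(self, event: Event) -> None:
        self.events.append(event)  # never update or delete, only append

    def balance(self) -> int:
        # Current state is not stored; it is derived by replaying every event in order.
        total = 0
        for e in self.events:
            total += e.amount if e.kind == "Deposited" else -e.amount
        return total

acct = Account()
acct.record(Event("Deposited", 100))
acct.record(Event("Withdrew", 30))   # oops, should have been 20
acct.record(Event("Deposited", 10))  # compensating event corrects the mistake
print(acct.balance())                # 80
print(len(acct.events))              # the full history survives: 3 events
```

The mistaken withdrawal is never edited or deleted; the correction is itself an event, so the audit trail stays intact.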
OK, I'm not sure about you, but when I learned this it blew my mind. The idea of persisting the entire history of the state of my application was a red-pill-or-blue-pill moment for me. On one hand it seemed terribly inefficient to store the state of every object in my application, especially since most of the discussion is about using NoSQL DBs. How could you possibly query this data easily, and what benefit does it get me? Then I learned about the ease of data recovery and production incident research, and being able to replay events that happened in production last month on my local box today...what!
Then I had an epiphany. I have source control for my code; it took me a little while to get comfortable with it, and it provides a lot of benefits for me and hopefully for you too. Event Sourcing is a little like source control for application state. Actually, SVN is an example of Event Sourcing used in my environment at work today. This understanding made it feel like a practical solution to me, but I was still unclear on real-world usage and which scenarios would benefit most from Event Sourcing.
Being in the financial industry, auditing is a big deal for us, and Event Sourcing could provide an instant audit log of every transaction we record. Yet logging every event seemed a little overkill. I won't try to persuade you either way, or actually try to explain it all to you, as I couldn't do the subject justice, but I decided it was too much for my current projects. A couple of videos by one of the originators of CQRS (another concept I am researching) have a lot to offer on the subject of event sourcing. It's buried in these references, but it's all related and, in my opinion, all fascinating, especially if you are into broadening your coding horizons.