Category: Dev
Confirming MSBuild Project Dependency Build Behavior
So we have one giant Visual Studio solution that builds every application project for our product. Our goal is to one day build and deploy each application independently. Today, we need to build one project for x86 and all of the others as x64, which gives us another reason to explore per-application build and deploy. The x64 projects and the x86 project share some of the same dependencies, so this makes a good test case for per-application builds. The purpose of this exploration is to determine the best way to automate the separate platform builds and to lay the groundwork for per-application builds. These are just my notes and not meant to provide much meat.
First, some setup to provide a test solution and projects to experiment with. I created four projects in a new solution. Each project is a C# class library with only the default files in it.
- ProjA
- ProjB
- ProjC
- ProjD
Add project dependencies
- ProjA > ProjB
- ProjB > ProjC, ProjD
- ProjC > ProjD
Set the platform target (project properties build tab) for each project like so
- ProjA x64
- ProjB x64
- ProjC x64
- ProjD x86
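For reference, the Platform target drop-down on the Build tab simply writes a PlatformTarget property into the project file. A minimal sketch of what that looks like for ProjD (the surrounding property group varies by project and configuration):

<PropertyGroup Condition="'$(Configuration)|$(Platform)' == 'Debug|AnyCPU'">
  <!-- Written by the Build tab's Platform target drop-down -->
  <PlatformTarget>x86</PlatformTarget>
</PropertyGroup>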
Behavior when a Dependent Project is Built
Do a Debug build of the solution and inspect the bin folders.
- BuildTest\ProjA\bin\Debug
  - ProjA.dll 10:03 AM 4KB B634F390-949F-4809-B937-66069C5F058E v4.0.30319 / x64
  - ProjA.pdb 10:03 AM 8KB
  - ProjB.dll 10:03 AM 4KB B0C5B475-576D-44D2-BD41-135BDA69225E v4.0.30319 / x64
  - ProjB.pdb 10:03 AM 8KB
- BuildTest\ProjB\bin\Debug
  - ProjB.dll 10:03 AM 4KB B0C5B475-576D-44D2-BD41-135BDA69225E v4.0.30319 / x64
  - ProjB.pdb 10:03 AM 8KB
  - ProjC.dll 10:03 AM 4KB DBB9482F-6609-4CA5-AB00-009473E27CDA v4.0.30319 / x64
  - ProjC.pdb 10:03 AM 8KB
  - ProjD.dll 10:03 AM 4KB 4F0F7877-5046-4A32-8B8E-FAD8E2660CE6 v4.0.30319 / x86
  - ProjD.pdb 10:03 AM 8KB
- BuildTest\ProjC\bin\Debug
  - ProjC.dll 10:03 AM 4KB DBB9482F-6609-4CA5-AB00-009473E27CDA v4.0.30319 / x64
  - ProjC.pdb 10:03 AM 8KB
  - ProjD.dll 10:03 AM 4KB 4F0F7877-5046-4A32-8B8E-FAD8E2660CE6 v4.0.30319 / x86
  - ProjD.pdb 10:03 AM 8KB
- BuildTest\ProjD\bin\Debug
  - ProjD.dll 10:03 AM 4KB 4F0F7877-5046-4A32-8B8E-FAD8E2660CE6 v4.0.30319 / x86
  - ProjD.pdb 10:03 AM 8KB
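As an aside, the GUID next to each DLL above is what I used to tell binaries apart; assuming it is the assembly's MVID (which changes on every compile), you can read it, along with the runtime version and architecture, from PowerShell. The path below is a placeholder:

# Placeholder path; point it at any of the bin folders above
$path = "C:\BuildTest\ProjD\bin\Debug\ProjD.dll"
# Architecture without loading the assembly (X86, Amd64, MSIL...)
[System.Reflection.AssemblyName]::GetAssemblyName($path).ProcessorArchitecture
$asm = [System.Reflection.Assembly]::LoadFile($path)
$asm.ImageRuntimeVersion               # e.g. v4.0.30319
$asm.ManifestModule.ModuleVersionId    # the MVID; a new GUID every compile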
Do a Debug rebuild of ProjA
- ProjA all DLLs have new date modified.
- ProjB all DLLs have new date modified.
- ProjC all DLLs have new date modified.
- ProjD all DLLs have new date modified.
Do a Debug build of ProjA
- ProjA no DLLs have new date modified.
- ProjB no DLLs have new date modified.
- ProjC no DLLs have new date modified.
- ProjD no DLLs have new date modified.
Change Class1.cs in ProjA and do a Debug build of ProjA
- ProjA: ProjA.dll has new date modified; ProjB.dll does not.
- ProjB no DLLs have new date modified.
- ProjC no DLLs have new date modified.
- ProjD no DLLs have new date modified.
Change Class1.cs in ProjB and do a Debug build of ProjA
- ProjA all DLLs have new date modified.
- ProjB: ProjB.dll has new date modified; ProjC.dll and ProjD.dll do not.
- ProjC no DLLs have new date modified.
- ProjD no DLLs have new date modified.
Change Class1.cs in ProjC and do a Debug build of ProjA
- ProjA all DLLs have new date modified.
- ProjB: ProjB.dll and ProjC.dll have new date modified; ProjD.dll does not.
- ProjC: ProjC.dll has new date modified; ProjD.dll does not.
- ProjD no DLLs have new date modified.
Change Class1.cs in ProjD and do a Debug build of ProjA
- ProjA all DLLs have new date modified.
- ProjB all DLLs have new date modified.
- ProjC all DLLs have new date modified.
- ProjD all DLLs have new date modified.
Conclusion
- If a dependency has changes, it will be built when the dependent project is built.
Behavior When a Project with Dependents is Built
Next, I want to verify the behavior when a project that other projects depend on is built.
Clean the solution and do a Debug build of the solution.
Do a Debug build of ProjD
- ProjA no DLLs have new date modified.
- ProjB no DLLs have new date modified.
- ProjC no DLLs have new date modified.
- ProjD no DLLs have new date modified.
Change Class1.cs in ProjD and do a Debug build of ProjD
- ProjA no DLLs have new date modified.
- ProjB no DLLs have new date modified.
- ProjC no DLLs have new date modified.
- ProjD all DLLs have new date modified.
Change Class1.cs in ProjC and do a Debug build of ProjD
- ProjA no DLLs have new date modified.
- ProjB no DLLs have new date modified.
- ProjC no DLLs have new date modified.
- ProjD no DLLs have new date modified.
Conclusion
- If a project with dependents is built, any projects that depend on it will not be built.
Behavior When Bin is Cleaned
I manually deleted the DLLs in ProjD and built ProjA, and DLLs with the same date modified reappeared. Maybe they were fetched from the obj folder.
I did a clean on ProjD (this cleans obj) and built ProjA, and new DLLs were added to ProjD.
Conclusion
- The obj folder acts like a cache for builds.
Behavior when External Dependencies are Part of the Build
Add two new projects to the solution
- ExtA x64 > ExtB
- ExtB x86
I updated these projects so they output to the solution's Output/Debug folder.
Then I added file references to the ExtA and ExtB output DLLs
- ProjA > ExtA
- ProjB > ExtB
I did a solution rebuild and noticed something that may also be a problem in other tests. When building ProjC, ProjD, and ExtA we get a warning:
warning MSB3270: There was a mismatch between the processor architecture of the project being built "AMD64" and the processor architecture of the reference. This mismatch may cause runtime failures. Please consider changing the targeted processor architecture of your project through the Configuration Manager so as to align the processor architectures between your project and references, or take a dependency on references with a processor architecture that matches the targeted processor architecture of your project.
Also, ProjA and ProjB are complaining about reference resolution:
warning MSB3245: Could not resolve this reference. Could not locate the assembly "ExtA". Check to make sure the assembly exists on disk.
In Visual Studio I update the project dependencies for ProjA and ProjB to include the Ext projects. This fixes the MSB3245 warning.
Conclusion
- We need to build all dependencies with the same platform target as the dependent.
- We need to build external references before building any dependents of the external references (e.g. get NuGet dependencies).
- When a solution contains a project that depends on another project but does not have a project reference to it, update the solution's project dependencies to force the dependency to build first (see the sketch below).
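For reference, a solution-level dependency like the one I added lives in the .sln file as a ProjectDependencies section. A sketch (the GUIDs here are placeholders):

Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "ProjA", "ProjA\ProjA.csproj", "{PROJA-GUID}"
	ProjectSection(ProjectDependencies) = postProject
		{EXTA-GUID} = {EXTA-GUID}
	EndProjectSection
EndProject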
Separating Platform Builds
Add new platforms for x64 and x86. Update the configuration so each project can do an x86 and an x64 build. Have the Ext projects output to x86 and x64 folders for Release and Debug builds.
Add new projects ExtC and ExtD and have the respective Proj projects reference their Release output: ProjC should reference the ExtC x64 Release build and ProjD should reference the ExtD x86 Release build.
Issue Changing Platform on Newly Added Solution Projects
So, I am unable to change the platform target for ExtC/D because x86 and x64 do not appear in the drop-down, and I can't add them because the UI says they are already created. I manually add them to the project files.
<PropertyGroup Condition="'$(Configuration)|$(Platform)' == 'Debug|x64'">
  <DebugSymbols>true</DebugSymbols>
  <OutputPath>bin\x64\Debug\</OutputPath>
  <DefineConstants>DEBUG;TRACE</DefineConstants>
  <DebugType>full</DebugType>
  <PlatformTarget>x64</PlatformTarget>
  <ErrorReport>prompt</ErrorReport>
  <CodeAnalysisRuleSet>MinimumRecommendedRules.ruleset</CodeAnalysisRuleSet>
</PropertyGroup>
<PropertyGroup Condition="'$(Configuration)|$(Platform)' == 'Release|x64'">
  <OutputPath>bin\x64\Release\</OutputPath>
  <DefineConstants>TRACE</DefineConstants>
  <Optimize>true</Optimize>
  <DebugType>pdbonly</DebugType>
  <PlatformTarget>x64</PlatformTarget>
  <ErrorReport>prompt</ErrorReport>
  <CodeAnalysisRuleSet>MinimumRecommendedRules.ruleset</CodeAnalysisRuleSet>
</PropertyGroup>
<PropertyGroup Condition="'$(Configuration)|$(Platform)' == 'Debug|x86'">
  <DebugSymbols>true</DebugSymbols>
  <OutputPath>bin\x86\Debug\</OutputPath>
  <DefineConstants>DEBUG;TRACE</DefineConstants>
  <DebugType>full</DebugType>
  <PlatformTarget>x86</PlatformTarget>
  <ErrorReport>prompt</ErrorReport>
  <CodeAnalysisRuleSet>MinimumRecommendedRules.ruleset</CodeAnalysisRuleSet>
</PropertyGroup>
<PropertyGroup Condition="'$(Configuration)|$(Platform)' == 'Release|x86'">
  <OutputPath>bin\x86\Release\</OutputPath>
  <DefineConstants>TRACE</DefineConstants>
  <Optimize>true</Optimize>
  <DebugType>pdbonly</DebugType>
  <PlatformTarget>x86</PlatformTarget>
  <ErrorReport>prompt</ErrorReport>
  <CodeAnalysisRuleSet>MinimumRecommendedRules.ruleset</CodeAnalysisRuleSet>
</PropertyGroup>
Now I can update the output path for ExtC/D and update Configuration Manager to the proper platform.
Since ProjD is exclusively an x86 project, I removed it from the x64 build. I have ExtD building both x86 and x64. I updated the dependencies so ExtC/D build before ProjC/D.
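With the platforms in place, the separate builds can be scripted. A sketch of the MSBuild calls our automated build could make (the solution file name is assumed):

msbuild BuildTest.sln /t:Rebuild /p:Configuration=Release /p:Platform=x64
msbuild BuildTest.sln /t:Rebuild /p:Configuration=Release /p:Platform=x86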
Final Conclusion
This was a bit much to verify what may be common knowledge on the inter-webs, but I wanted to see for myself. There is more I want to experiment with, like NuGet, build performance, and optimization, but this gave me enough to move forward with an initial revamp of our automated build. I am going to proceed with a separate pipeline for an x86 build of the entire solution and a separate deploy for the x86 application. I really believe that, going forward, NuGet can become a key tool in standardizing and optimizing per-application build and deployment.
Summary
- If a dependency has changes, it will be built when the dependent project is built. Otherwise it will not be rebuilt. This is all a feature of MSBuild. When we move to per-application builds we will have to build separate solutions for each application in isolated pipelines. To prevent conflicts with other applications, we should build dependencies that are shared across applications in a separate pipeline.
- If a project with dependents is built, any projects that depend on it will not be built.
- The obj folder acts like a cache for builds. We can extend this concept to a common DLL repo where all builds send their DLLs, but we would need a reliable way of versioning the DLLs so that we always use the most recent or a specific version… sounds like I am proposing NuGet.
- We need to build all dependencies with the same platform target as the dependent. We may build the main solution as x64 and separately build the other projects that need x86. I believe this would be the most efficient approach since the current x86 projects will not change often.
- We need to build external references before building any dependents of the external references (e.g. get NuGet dependencies). We do this now with NuGet; we fetch packages first. When we move to per-application build and deploy, this will be handled automatically by Go.cd's fan-in feature. Application builds will have pipeline dependencies on any other application builds they need, so apps will always use the most recent successful build of an application. We can make this stronger by having applications depend on test pipelines to ensure the builds have been properly tested before integration.
- When a solution contains a project that depends on another project but does not have a project reference to it, update the solution's project dependencies to force the dependency to build first.
NOTE
This is actually a re-post of a post I did on our internal team blog. One comment there was that we should also do an Any CPU build so we can have one NuGet package with all versions.
What is Object Oriented Programming?
Until today I never contemplated what object oriented programming means. I have thought about OOP in the context of writing object oriented code and figuring out how to be better at OOP, but I never really studied its original design and intent. I was listening to a .NET Rocks podcast on the subject of lean functional programming, and the discussion led to Dr. Alan Kay's definition of OOP. Dr. Kay coined the phrase object oriented programming during his work developing the Smalltalk language. Even though he is credited with the term OOP, what OOP became is not what he envisioned.
I did a little research and landed on this email thread between Stefan Ram and Dr. Kay.
http://www.purl.org/stefan_ram/pub/doc_kay_oop_en
Dr. Kay credits his thoughts on OOP to his background in biology and mathematics. He says,
I thought of objects being like biological cells and/or individual computers on a network, only able to communicate with messages (so messaging came at the very beginning — it took a while to see how to do messaging in a programming language efficiently enough to be useful). Dr. Alan Kay, July 17, 2003
Even polymorphism wasn't a part of his original thoughts on OOP. He proposed another term to describe behavior in related objects: "genericity."
He ends the thread by giving his definition of OOP,
OOP to me means only messaging, local retention and protection and hiding of state-process, and extreme late-binding of all things. Dr. Alan Kay, July 23, 2003
Doesn't this sound like functional programming?
Even though this was years after Dr. Kay's original OOP work, and his thinking may have changed over the years, I still believe his intent holds true here. So it seems that through the years, OOP may have been forked or hijacked. Some argue that we are paying for it today. They say that OOP did not stay true to the OOP of Dr. Kay and his peers in the early days (am I spamming OOP here?).
Now, I have another reason to learn F#. With the little I know now, I have already started trying to remove the ill effects of mutable state and to use message passing to communicate across layers. I know there is so much more to learn, and I have been doing this a long time. I look forward to digging into F#, and maybe even Erlang, to see how it can influence my OOP.
Multitenant Thoughts
I am building my 3rd multitenant SaaS solution. I am not referencing any of my earlier work because I think it was way more work than it should have been. Also, I have since moved on from the whole ASP.NET Web Forms development mindset, and I want to start with a fresh perspective instead of trying to improve my big balls of spaghetti code.
Today, my thoughts center around enforcing the inclusion and processing of a tenant ID in every command and query. My tenant model keeps all tenant data in a shared database and tables. To keep everything segregated, every data write and read has to include a tenant ID so that we don't mess with the wrong tenant's data.
I have seen all kinds of solutions for this, some more complicated than I care to tackle at this moment. I am currently leaning towards enforcing it in the data repository.
I am using a generic repository for CRUD operations and an event repository for async event-driven workflows. In the repository APIs I want to introduce a validated tenant ID parameter on every write and read operation. This will force all clients to provide the ID when they call the repos.
I just have to update a couple of classes in the repos to enforce inclusion of the tenant ID when I write data. Also, every read will use the tenant ID to scope the result set to a specific tenant's data. I already have a proof of concept for this app, so this change will break my existing clients, but that is still not a lot of work, considering I almost decided to enforce the tenant ID in a layer higher than the repo, which would have been a maintenance nightmare.
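To make the idea concrete, here is a minimal C# sketch of the repository shape I am describing. The names (ITenantRepository, TenantGuard) are made up for illustration and are not my actual classes:

using System;
using System.Collections.Generic;

// Sketch only: every read and write takes the tenant ID as a required parameter.
public interface ITenantRepository<T> where T : class
{
    T Get(Guid tenantId, Guid id);                 // reads are scoped to one tenant
    IEnumerable<T> GetAll(Guid tenantId);
    void Add(Guid tenantId, T entity);             // writes are stamped with the tenant
    void Update(Guid tenantId, T entity);
    void Delete(Guid tenantId, Guid id);
}

public static class TenantGuard
{
    // One shared guard keeps the enforcement in a single place.
    public static void Require(Guid tenantId)
    {
        if (tenantId == Guid.Empty)
            throw new ArgumentException("A tenant ID is required.", "tenantId");
    }
}

Because the tenant ID is a required parameter instead of something resolved deep in the stack, the compiler does most of the enforcing for me.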
Is this best practice? No. I don’t think there is a best practice besides the fact that you should use a tenant ID to segregate tenant data in a shared data store. This solution works for my problem and I am able to maintain it in just a couple classes. If the problem changes I can look into the fancy solutions I read about.
Now, how will I resolve the tenant ID? Sub-folder, sub-domain, query string, custom domain…?
CQRS is so easy.
This is just a quick post as I sit in the hospital bored to death and recovering.
I have been intrigued by CQRS, Command Query Responsibility Segregation, ever since I heard about it in a talk by Greg Young at the first Code on the Beach conference. I decided to surf the blogosphere to see what people are doing these days with CQRS. It seems like there is still quite a bit of confusion about what it is.
I have to admit, at first it was extremely intimidating to me too. Not because CQRS is hard, but because, like many people, I blurred the lines between CQRS, DDD, and event sourcing. When you look at CQRS in the context of everything it is not, it tends to look more complicated than it really is.
I am going to borrow a trick that Greg uses and show a code example of the simplicity of CQRS. Let’s say we have a service that provides an API to manage a product catalog:
ProductManager
    void CreateProduct(Product product)
    Product GetProduct(string productId)
    bool IsProductInStock(string productId)
    void UpdateProductPrice(string productId, decimal price)
    void RemoveProduct(string productId)
    List<Product> GetProductRecommendations(string productId)
If we apply CQRS to this service, we would simply wind up with two services. One that we can optimize for write operations (commands) and another that is optimized for read operations (queries).
ProductQueries
    Product GetProduct(string productId)
    bool IsProductInStock(string productId)
    List<Product> GetProductRecommendations(string productId)

ProductCommands
    void CreateProduct(Product product)
    void UpdateProductPrice(string productId, decimal price)
    void RemoveProduct(string productId)
Easy peasy… With the read and write operations segregated into their own APIs, you are free to do all sorts of fancy optimizations and are on better footing to explore DDD and event sourcing.
Conclusion
The point is that CQRS is just an extension of the simple CQS pattern, moving the concept a step further from the method to the class or API level. Nothing more, nothing less. I believe most applications can benefit from CQRS even if you aren't going to do DDD or event sourcing. So, read all about CQRS, but if you are studying more than the simple separation of read and write operations, you have read too far and are getting into other concepts.
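To ground that last point, here is CQS at the method level (a contrived C# sketch, not from a real codebase): commands change state and return nothing; queries return state and change nothing. CQRS simply applies the same split to whole classes or services, as in the ProductQueries/ProductCommands example above.

using System.Collections.Generic;

public class ShoppingCart
{
    private readonly List<string> items = new List<string>();

    // Command: changes state, returns nothing.
    public void AddItem(string productId)
    {
        items.Add(productId);
    }

    // Query: returns state, changes nothing.
    public int GetItemCount()
    {
        return items.Count;
    }
}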
Quick Testing Legacy ASP.net Web Services
If you still have legacy ASP.NET web services, the old .asmx file variety, and you need to do a quick test from a server that doesn't have fancy testing tools, I found an easy way to test the service with just a browser and an HTML file.
Test Service GET Method
To test the service’s GET methods you can use a browser and a specially formatted URL.
http://domain/service.asmx/method?parameter=value
For example, I have
- a domain, legacywebservices.com
- it hosts a service, oldservice.asmx
- that has a GET method, GetOldData
- that accepts parameters, ID and Name
The URL to test this web service method would be
http://legacywebservices.com/oldservice.asmx/GetOldData?ID=1000&Name=Some%20Old%20Data (note that spaces in the Name value are URL-encoded as %20)
This would return an XML file containing the response from the service or an error to troubleshoot.
Test Service POST Method
To test the service’s POST methods you can use a simple HTML file containing a form. Just open the form in your browser, enter the values, and submit.
<form method="POST" action="http://domain/service.asmx/method">
  <div><input type="text" name="parameter" /></div>
  <div><input type="submit" value="method" /></div>
</form>
For example, I have
- a domain, legacywebservices.com
- it hosts a service, oldservice.asmx
- that has a Post method, SaveOldData
- that accepts parameters, ID and Name
The HTML form to test this web service method would be
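<!-- Based on the template above: one input per parameter, posting to the method URL -->
<form method="POST" action="http://legacywebservices.com/oldservice.asmx/SaveOldData">
  <div><input type="text" name="ID" /></div>
  <div><input type="text" name="Name" /></div>
  <div><input type="submit" value="SaveOldData" /></div>
</form>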
This would return an XML file containing the response from the service or an error to troubleshoot.
Troubleshoot
If you get a System.Net.WebException with a message indicating the request format is unrecognized, you need to do some configuration to get it to work, as explained in this KB. Just add this to the system.web node in the web.config of the web service and you should be good to go.
<webServices>
  <protocols>
    <add name="HttpGet"/>
    <add name="HttpPost"/>
  </protocols>
</webServices>
Conclusion
If you are sentenced to maintaining and testing legacy ASP.net web services, these simple tests can help uncover pesky connectivity, data and other issues that don’t return proper exceptions or errors because your app is old and dumb (even if you wrote it).
PowerShell in Visual Studio, Finally, At Last… Almost
Even if you don't like Microsoft or .NET, you have to admit that Visual Studio is a boss IDE. After being thrust into the world of scripting and PowerShell, it was disappointing to find the PowerShell support in Visual Studio lacking. Well, today I received notice that Microsoft joined Adam Driscoll's open source project, PowerShell Visual Studio Tools (PVST). They announced a new release and I am ready to give it another go.
Adam notes that Microsoft submitted a large pull request full of bug fixes and features. This project provides pretty nice PowerShell support inside my favorite IDE, including:
- Edit, run and debug PowerShell scripts locally and remotely using the Visual Studio debugger
- Create projects for PowerShell scripts and modules
- Leverage Visual Studio’s locals, watch, call stack for your scripts and modules
- Use the PowerShell interactive REPL window to execute PowerShell scripts and commands right from Visual Studio
- Automated Testing support using Pester
From https://visualstudiogallery.msdn.microsoft.com/c9eb3ba8-0c59-4944-9a62-6eee37294597
You can download it for free from the Visual Studio Gallery. A quick double-click install of the .vsix file you download and you're ready.
My first test was to create a PowerShell project. In the Visual Studio New Project window there’s a new project template type, PowerShell. Inside of it are two templates: PowerShell Module Project and PowerShell Script Project.
Scripting and Debugging
I start with a script project and bang out a quick Hello World script to see debugging in action.
$myName = "Charles Bryant"
$myMessage = "How you doin?"
function HelloWorld($name, $message) {
return "Hello World, my name is $name. $message"
}
HelloWorld $myName $myMessage
It feels very comfortable… like Visual Studio. I see IntelliSense, my theme works, and I can see highlighting. I can set breakpoints, step in/over, see locals, watches, call stack, console output… feeling good because it's doing what it said it can do, and scripting PowerShell now feels a little like coding C#.
REPL Window
What about the REPL window? After a little searching, I found it tucked away on the menu: View > Other Windows > PowerShell Interactive Window. You can also get to it with Ctrl + Shift + \. I threw some quick scripts at it… ✓, it works too.
Unit Testing
The last thing I have time for is unit testing. First, I install Pester in the solution. Luckily there's a NuGet package for that.
>Install-Package Pester
Then I create a simple test script file to test my Hello World script, named with the .tests.ps1 convention so the boilerplate below can locate the script under test.
$here = Split-Path -Parent $MyInvocation.MyCommand.Path
$sut = (Split-Path -Leaf $MyInvocation.MyCommand.Path).Replace(".tests.", ".")
. "$here\$sut"
Describe "HelloWorld" {
It "returns correct message" {
HelloWorld "Charles Bryant" "How you doin?" | Should Be "Hello World, my name is Charles Bryant. How you doin?"
}
}
Houston, there's a problem. When I open the Test Explorer I can see a bunch of tests that come with Pester, but I don't see my little test. I tried to reorganize the tests in the explorer and it froze. Not sure if this is a problem with PVST, Pester, NuGet, Visual Studio, or user error… oh well. I can't say it is a problem with PVST because I didn't try to find out what was wrong (I still have work to do for my day job).
Conclusion
OK, unit testing isn't as intuitive as the other operations, hence the "Almost" in the title. It will feel complete when I get unit testing working for me, but nonetheless, I like this tool a lot so far. I will definitely be watching it, and if I see something within my skills that I can contribute, I will pitch in, as this is something I can definitely use.
Rethrowing More Expressive Exceptions in C#
This post was contributed by Jonathan Hamm, one of the developer gurus I have the privilege of working with at my day job.
I did not realize this behavior with rethrowing exceptions existed: information can be added to the exception in one catch block, and that data remains available in subsequent catch blocks. It makes sense now with this test.
The innermost method throws an exception, the next level adds an element to the Data property and rethrows, and the outer level catches the exception with the additional Data entry.
void Main()
{
    try
    {
        LogAndRethrow();
    }
    catch (Exception ex)
    {
        ex.Data.Dump();
    }
}

void LogAndRethrow()
{
    try
    {
        CreateException();
    }
    catch (Exception ex)
    {
        ex.Data.Add("caught and rethrown", true);
        throw;
    }
}

void CreateException()
{
    throw new NotImplementedException();
}
Jon used LINQPad to explore this feature and run the code above (Dump is LINQPad's output helper). Actually, he does a lot of amazing things with LINQPad; you should definitely give this tool a try if you haven't already. Speaking of LINQPad, did you know you can run LINQPad scripts from the command line with lprun? Something to think about if you are looking to use your C# skills for continuous delivery automation.
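For example, if the snippet above were saved as a query file (the file name here is made up), running it headlessly would look like:

lprun RethrowDemo.linq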
Ctrl-S Rapid Feedback Loop
Kent Beck, inventor of Extreme Programming and TDD guru, did a short video on how he went about learning CoffeeScript. The beauty of what he described didn't have much to do with what he learned, but how. He based the video on his "Making Making Manifesto" and some examples of Making Making that inspired him.
What he did was create a quick little test framework that gave him instant feedback on the quality of his code every time he saved it (Ctrl-S on Windows). This gave him feedback not only on the code he was writing, but on his making making thought process while learning CoffeeScript, all at the same time.
I have seen rapid test feedback with MightyMoose for .NET, but that is slow in comparison to what he was able to achieve. It helps that JavaScript, even with CoffeeScript in the middle, doesn't have a heavy compilation step, since it is an interpreted language. I have also seen the benefits of file watchers when working with SASS and LESS for CSS to speed up feedback loops in UI development, and I have played with rapid feedback on HTML changes in Chrome Developer Tools (very fast). Yet, the idea of using this kind of loop to learn a new language never dawned on me. I have used numerous scripting sites, like Codecademy, to learn the basics of Perl, Ruby, and others by following a set guide to learning the language. I have never seen it done like this, with such ease, expressiveness, and the ability to experiment and wander while maintaining a constant sense that you are on the right path.
Anyway, with my intense, somewhat obsessive focus on improving feedback loops in software delivery, this was a great example of how automation can help increase efficiency. I wish we could do this in Visual Studio with similar speed:
- A test window to write tests for new code or code changes I want to write.
- A code window to write the new code or code changes.
- A result window to view instant results of the tests after saving either the test or code window.
Does a solution with speed similar to Kent's example, but for C#, reside somewhere in Roslyn? Maybe. It's possible that MightyMoose is the answer and is faster than when I first tried it years back. Will I find time to explore it? Probably not, but I would really like to.
Making Making Coffee
.Net Continuous Test
Chrome Developer Tools Live Editing
IIS 8 Configuration File
Note to self
The IIS 8 configuration file is located at %windir%\System32\inetsrv\config\applicationHost.config. It is just an XML file and the schema is well known. You can open it, edit it (if you are brave), and otherwise do configuration stuff with it. You can diff it from system to system to find inconsistencies, or save it in a source code repository to standardize on a base configuration across web server nodes, if your project needs that kind of thing. Lastly, you can manage it with PowerShell… you can manage it with PowerShell… you can manage it with PowerShell DSC!
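As a quick taste, here is a sketch using the WebAdministration module that ships with IIS; it reads site definitions straight out of applicationHost.config:

# Requires the WebAdministration module installed with IIS
Import-Module WebAdministration
# List every site defined in applicationHost.config, no Server Manager needed
Get-Website | Select-Object Name, State, PhysicalPath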
The possibilities are endless so stop depending so much on the IIS Server Manager UI like you are in Dev preschool. You are a big boy now, remove the training wheels, but you might want to wear a helmet.
I don’t want to have this discussion again!
Static Property Bug Caused by a Code Tool
Given the following private static fields at the top of a class:
private static readonly string taskTimeStampToken = string.Format("{0}Timestamp: ", taskToken);
private static readonly string taskToken = "[echo] @@";
What will this method in the same class print to the console when called?
public void Print()
{
Console.WriteLine(taskTimeStampToken);
}
For all you code geniuses that got this right away, I say shut up with your smug look and condescending tone as you say of course it prints
Timestamp:
So, why doesn’t it print
[echo] @@Timestamp:
Because taskToken hasn't been initialized yet, of course. Static field initializers run in the order the fields are declared, so the order of these declarations matters in your code. Don't forget it, especially if you use a tool that reorganizes your code.
I really did know this ;), but this fact caused me about an hour of pain. I use a wonderful little tool called CodeMaid to help keep my code standardized. One of its functions is to reorganize my code into a consistent format (e.g. private members above public, constructor before everything…).
I obviously had never run CodeMaid on this particular file with code similar to the above, because the unit tests had always passed. Well, I made a code change in said file, CodeMaid did its thing, it caused the order of the fields to flip as they are above, and unit tests started failing. It took me at least an hour before I took a deep breath and noticed that the fields had flipped.
Lessons Learned
- If you do any type of refactoring, even with a tool, make sure you have unit tests covering the file you refactor.
- Use a diff tool to investigate mysteriously failing tests. It will give better visual clues on changes to investigate instead of relying on tired eyes.
- Configure your code formatting tool to not reorganize your static properties, or request the feature to configure it.
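And if reordering does happen anyway, here are a couple of simple ways to make this code order-proof (a sketch using the fields from the example; the class name MyClass is made up):

// Option 1: keep the fields, but make the initialization order explicit
// with a static constructor.
private static readonly string taskToken;
private static readonly string taskTimeStampToken;

static MyClass()
{
    taskToken = "[echo] @@";
    taskTimeStampToken = string.Format("{0}Timestamp: ", taskToken);
}

// Option 2: compute on demand so there is no initialization order to break.
private static string TaskTimeStampToken
{
    get { return taskToken + "Timestamp: "; }
}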