Working with the Windows Registry with PowerShell 4.0

I figured I would rehash some of the learning I did on working with the registry with PowerShell. Most of my research on this topic came from a couple of TechNet pages.

There is nothing really new here; I am just trying to commit what I learned at TechNet to my brain.

WARNING: Editing your registry is dangerous. Make sure you know what you're doing, document your changes, and have a backup so you can revert when you mess up.

The first interesting tidbit I learned was that PowerShell treats the registry like a drive, so working with the registry is similar to working with files and folders. The big difference is that keys are treated like folders, and registry entries and their values are properties on the key. There is no concept of a file when working with the registry.

Viewing Registry Keys

Just like working with the file system in PowerShell, we can use the powerful Get-ChildItem command.

PS> Get-ChildItem -Path hkcu:\

Interesting, right? hkcu is the HKEY_CURRENT_USER registry hive, and it's treated like a drive with all of its keys as folders under the drive. In fact, hkcu is a PowerShell drive.

PowerShell Drives

PowerShell providers expose data stores as PowerShell drives, and this allows you to work with the registry the same way you work with the file system.

If you want to view a list of the PowerShell drives in your session run the Get-PSDrive command.

PS> Get-PSDrive

Did you notice the other drives like Variable and Env? Can you think of a reason to use the Env drive to get access to Path or other variables?
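One quick example of my own (not from the TechNet pages): the Env: drive lets you read environment variables the same way you browse registry keys, which is handy when a script needs to check or log the Path.

PS> Get-ChildItem -Path Env:
PS> (Get-Item -Path Env:\Path).Value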

Since we are working with a drive, we can achieve the same results that we did with Get-ChildItem using basic command line syntax.

PS> cd hkcu:\
PS> dir

Path Aliases

We can also represent the path with the registry provider name followed by "::". The registry provider name is Microsoft.PowerShell.Core\Registry and can be shortened to Registry. The previous example can be written as:

PS> Get-ChildItem -Path Microsoft.Powershell.Core\Registry::HKEY_CURRENT_USER
PS> Get-ChildItem -Path Registry::HKCU

The shortened syntax is much easier to type, but including the full provider name is more verbose and explicit about what is happening (fewer comments needed in the code to explain what's going on).

More Get-ChildItem Goodies

The examples above only list the top level keys under the path. If you want to list all keys you can use the -Recurse parameter, but if you do this on a path with many keys you will be in for a long wait.

PS> Get-ChildItem -Path Registry::HKCU -Recurse

We can use the Set-Location command to set our current location to a registry key. With a location set, we can use "." in the path to refer to the current location and ".." for the parent key.

PS> Set-Location -Path Registry::HKCU\Environment
PS> Get-ChildItem -Path .
PS> Get-ChildItem -Path '..\Keyboard Layout'

Above, we set the location to the Environment key, then we get the items for that key using only "." as the path. Finally, we get the items in another key by using ".." to represent the parent key, followed by the name of the key under the parent that we want to list (quoted because the name contains a space).

When using Get-ChildItem on the registry we have its parameters at our disposal, like Path, Filter, Include, and Exclude. Since these parameters only work against names, we have to use more powerful cmdlets to get more meaningful filtering done. In the example provided on TechNet, we are able to get all keys under HKCU:\Software with no more than one subkey and exactly four values:

 PS> Get-ChildItem -Path HKCU:\Software -Recurse | Where-Object -FilterScript {
     ($_.SubKeyCount -le 1) -and ($_.ValueCount -eq 4) 
}

Working with Registry Keys

As we saw, registry keys are PowerShell items, so we can use the other PowerShell item commands. Keep in mind that you can represent the paths in any of the ways that we already covered.

Copy Keys

Copy a key to another location.

PS> Copy-Item -Path Registry::HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion -Destination Registry::HKCU -Recurse

This copies the keys in the Path to the Destination. Since we added -Recurse, all of the keys, not just the top level, will be copied.

Creating Keys

Create a new key.

PS> New-Item -Path Registry::HKCU\_DeleteMe

Deleting Keys

Delete a key.

PS> Remove-Item -Path Registry::HKCU\_DeleteMe\* -Recurse

This will remove all items under _DeleteMe. The \* tells PowerShell to delete the items but keep the container. If we didn't use \*, the container, _DeleteMe, would be removed too. -Recurse removes all items in the container, not just the top-level items. If we attempted the removal without -Recurse and the item had child items, we would get a prompt warning that we are about to remove the item and all of its children; -Recurse suppresses that prompt.

Working with Registry Entries

Working with registry keys is simple because we get to reuse what we already know about working with the file system in PowerShell. One wrinkle is that registry entries are represented as properties of registry key items, so we have to do a little more work to deal with entries.

List Entries

The easiest way, IMHO, to view registry entries is with Get-ItemProperty.

PS>  Get-ItemProperty -Path Registry::HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion

This will list all of the entries for the key, along with PowerShell-related properties prefixed with "PS".

Get Single Entry

To get a single entry we use the same Get-ItemProperty and add the -Name parameter to specify the entry we want to return.

PS> Get-ItemProperty -Path HKLM:\Software\Microsoft\Windows\CurrentVersion -Name DevicePath

This will return just the DevicePath entry along with the related PS properties.

Create New Entry

We can add a new registry key entry with the New-ItemProperty command.

PS> New-ItemProperty -Path HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion -Name PowerShellPath -PropertyType String -Value $PSHome

There is a little more complexity to this operation, but it is still not rocket science. We added two more parameters. -PropertyType signifies the type of entry to create and must be a Microsoft.Win32.RegistryValueKind (how to deal with 64-bit views is something I haven't dealt with, so I leave it to you for now). -Value in the example uses the built-in PowerShell variable $PSHome, which is the install directory of PowerShell. You can use your own values or variables for -Value (an example with a different type follows the table below).

PropertyType values and their meanings:

Binary - Binary data
DWord - A number that is a valid UInt32
ExpandString - A string that can contain environment variables that are dynamically expanded
MultiString - A multiline string
String - Any string value
QWord - 8 bytes of binary data
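For example, creating a DWord entry looks just like the String example above; only the type and value change (the entry name here is one I made up for illustration):

PS> New-ItemProperty -Path Registry::HKCU\_DeleteMe -Name BuildCount -PropertyType DWord -Value 1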

Rename Entry

To rename an entry just specify the current name and the new name.

PS> Rename-ItemProperty -Path HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion -Name PowerShellPath -NewName PSHome -PassThru

-NewName is the new name for the entry. The -PassThru parameter is optional; it returns the renamed entry so you can see the result.

Delete Entry

To delete an entry just specify the name.

PS> Remove-ItemProperty -Path HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion -Name PSHome

Conclusion

Is it really this easy? Why yes it is, young Padawan… yes it is. You too can control the force to destroy your registry with PowerShell :).

 

Scripting Builds with C#, Yes I Said Scripting

If you haven't partaken of the delicious goodness that is Roslyn, don't fret; it's easy to get in on the fun. Have you heard of ScriptCS? It's kind of what I hoped PowerShell would become: scripting with C#. No stinking compiling and complex builds, no having to learn new complex syntax and functionality, just code in C# and go. This is what ScriptCS brings by way of Roslyn. I had fun just writing C# and running it, but I needed to find a practical reason to script C# in the real world.
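To give a sense of the "no compiling, just go" experience, here is a trivial sketch, assuming you have scriptcs installed (e.g., via Chocolatey): save it as hello.csx and run scriptcs hello.csx.

// hello.csx - no project file and no compile step; scriptcs runs the file directly.
using System;

Console.WriteLine("Hello from C# scripting!");
Console.WriteLine("It is now " + DateTime.Now);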

Then it hit me. I was in the middle of writing a build script for a project and I wondered how I could do it with ScriptCS. I studied my NAnt script for a way to port it to ScriptCS and failed to envision an easy way to do it. So, I ended up doing some searching and stumbled upon Nake (actually, I think a co-worker may have told me about it, I can't remember). As the author, Yevhen Bobrov, describes Nake: "Write your build automation scripts in C# without paying the angle bracket tax!" According to Yevhen, it uses the ScriptCS pre-processing engine and takes advantage of Roslyn's syntax rewriting features to rewrite task invocations.

Enough talk, let’s code. We will start with a build in NAnt using the sample from nant.org.

<?xml version="1.0"?>
<project name="Hello World" default="build" basedir=".">
  <description>The Hello World of build files.</description>
  <property name="debug" value="true" overwrite="false" />
  <target name="clean" description="remove all generated files">
    <delete file="HelloWorld.exe" failonerror="false" />
    <delete file="HelloWorld.pdb" failonerror="false" />
  </target>
  <target name="build" description="compiles the source code">
    <csc target="exe" output="HelloWorld.exe" debug="${debug}">
      <sources>
        <includes name="HelloWorld.cs" />
      </sources>
    </csc>
  </target>
</project>

We can do something similar in Nake like so:

using System;
using System.IO;
using Nake;

// The Hello World of build files.
public static string basedir = ".";
public static string configuration = "Debug";
public static string platform = "AnyCPU";

[Task] public static void Default()
{
    Build();
}

// Remove all generated files.
[Task] public static void Clean()
{
    File.Delete("HelloWorld.exe");
    File.Delete("HelloWorld.pdb");
}

// Compiles the source code.
[Task] public static void Build()
{
    Clean();

    MSBuild
        .Projects("HelloWorld.csproj")
        .Property("Configuration", configuration)
        .Property("Platform", platform)
        .Targets(new[] { "Rebuild" })
        .BuildInParallel();
}

I really like Nake; it feels like regular C# coding to me. There may be more lines, but they are easily readable lines, IMHO. Not to mention, I have access to the full power of C# and not just the features added to a scripting tool like NAnt.
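For example, since the script is plain C#, shared logic can live in ordinary methods that tasks call. This is just a sketch of my own (the version field, helper, and Package task are made up for illustration), not something from the Nake samples:

public static string version = "1.0.0";

// Plain C# helper; nothing Nake-specific about it.
public static string OutputPath(string fileName)
{
    return Path.Combine("artifacts", version, fileName);
}

[Task] public static void Package()
{
    Build();
    Console.WriteLine("Packaging to " + OutputPath("HelloWorld.zip"));
}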

After working with Nake for a little while I found another project that targets task scripting with C#: Bau. Bau's author, Adam Ralph, tags the project with "The C# task runner." It's built as a scriptcs script pack and is inspired by Rake, Grunt, and gulp. He uses an interesting approach where tasks are chained together like a long fluent build train. I haven't had the chance to actually use Bau, but I read through some of the source and documentation. Having to chain tasks together in Bau seems foreign and limiting, as I am not sure how to achieve a reusable, composable design in the manner I am accustomed to. It's probably very simple, just not as readily apparent to me as it is in Nake.

Well, I'll keep it short. It's good to see options beginning to emerge in this space, and I hope the community contributes to them. Both Nake and Bau open up build scripting to C# developers. We get to leverage what we know about C# and script with syntax and tools we are already familiar with. We get the task-based nature of NAnt with the familiarity of C#. So, if you aren't ready to take the plunge into Roslyn, how about testing the waters with C# build scripting?

Footnote: Nake hasn't had any commits in 5 months, and Yevhen lists his location as Kiev, Ukraine. Yevhen, I have been watching the news about all of the violence happening in Ukraine; if you are there, I hope that you are OK, and thanks for Nake.

TestPipe Test Automation Framework Release Party

Actually, you missed the party I had with myself when I unchecked private, clicked save on GitHub, and officially released TestPipe. You didn't miss your chance to check out TestPipe, a little open source project with the goal of making automated browser-based testing more maintainable for .NET'ters. The project source code is hosted on GitHub and the binaries are hosted on NuGet:

 

 

If you would like to become a TestPipe Plumber and contribute, I’ll invite you to the next party :).

 

Results of my personal "make a logo in 10 minutes" challenge.

ThoughtWorks Go Continuous Delivery, Now In Open Source Flavor

If you haven't heard, the ThoughtWorks Go Continuous Delivery server is now open source. The source code is located on GitHub, https://github.com/gocd/gocd. I decided to give it a test drive and I was pleased. Since I am primarily a Windows developer, my points of reference are CCNET (which is based on ThoughtWorks' CruiseControl continuous integration server), TFS Team Build, and TeamCity. I don't have a lot of TeamCity experience, but I can say that I can easily see automating many scenarios in Go that I was having a hard time conceiving in CCNET. Adding the concepts of Environments, Pipelines, Stages, and User Roles opened an easier path to automated production deployment for me.

Install

Install was pretty simple. Go is cross-platform, but I have a Windows server. I downloaded the Windows packages from http://www.go.cd/download/. I installed the server and agent on my server, opened it up in a browser, and it was there ready to go. Very easy; only a few minutes of clicking and I was ready to start. Before I started building pipelines, I made a few customizations for my environment. I want to use Git and NAnt in my build, test, and deploy process, so I added the path to their executables to the Path (Windows system environment variable). This makes it less painful to run them from Go.

Server User Authentication

I am eventually going to use LDAP for user authentication, but for now I set up an htpasswd file with usernames and SHA-1 hashed passwords. Then I entered the name of the file, htpasswd.txt, in the server configuration (Admin > Server Configuration > User Management > Password File Settings). I generated the contents for the SHA-1 hashed password file on http://aspirine.org/htpasswd_en.html, but I could have easily used a crypto library to hash the passwords myself (a sketch of that follows). Usernames are not case sensitive, and you shouldn't have colons, spaces, or equal signs in the username unless you escape them with a backslash.
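If you would rather generate the entries yourself, here is a rough C# sketch. It assumes the password file wants the username and a Base64-encoded SHA-1 digest separated by a colon, so verify the expected format against the Go docs for your version before using it:

using System;
using System.Security.Cryptography;
using System.Text;

public static class PasswordFileEntry
{
    // Produces a line like "buildadmin:BASE64HASH" for the htpasswd.txt file.
    public static string Create(string userName, string password)
    {
        using (SHA1 sha1 = SHA1.Create())
        {
            byte[] hash = sha1.ComputeHash(Encoding.UTF8.GetBytes(password));
            return userName + ":" + Convert.ToBase64String(hash);
        }
    }
}

// Example: Console.WriteLine(PasswordFileEntry.Create("buildadmin", "s3cret"));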

Configuration

The Go configuration is stored in an XML file, like Jenkins and CCNET. I know many people have a disdain for XML, but it doesn't bother me and it makes Go portable. I can deploy it to another server or even a developer workstation, use a common config file, and it's ready to start processing pipelines. You can use the UI to configure most of what you want to do, but I enjoy the fine-grained control of editing the XML directly. There is an XML validator, so when my error-prone fingers type the wrong character it will automatically reject the change and continue using the current configuration. Since the configuration is XML, I decided to put the file under source control. The reason for this is to have a backup of the config and to be able to configure the server from XML and automatically push the changes to the Go server with the Go server (sweet). This doesn't work both ways, so changes made through the UI won't be pushed to source control (although I can envision some convoluted solution for this). For now, I am the only person managing the server, and I will configure through the XML file and not the UI.

Pipelines

Pipelines are the unit of organization in Go. A pipeline is made up of stages, a stage is made up of jobs, and a job is made up of tasks. Tasks are the basic unit of work in Go. In my instance, most of my tasks are NAnt tasks that call targets in NAnt build scripts. There are all kinds of ways to create chains of actions and dependencies. This is probably going to be where I focus a lot of attention, as this is where the power of the system lies, IMHO. Being able to customize the pipelines and wire up various dependencies is huge for me. Granted, I could do this in CCNET to a certain degree, but Go just makes it plain to envision and implement. A rough sketch of how that hierarchy looks in the config XML is below.
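The names, URL, and build file here are placeholders, so treat this as an illustration of the pipeline > stage > job > task nesting rather than a copy-paste config:

<pipelines group="example">
  <pipeline name="MyApp">
    <materials>
      <git url="https://example.com/myapp.git" />
    </materials>
    <stage name="Build">
      <jobs>
        <job name="compile">
          <tasks>
            <nant buildfile="myapp.build" target="build" />
          </tasks>
        </job>
      </jobs>
    </stage>
  </pipeline>
</pipelines>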

NAnt Problems

Working with NAnt was a pain. Actually, this was the only major hurdle I had to cross. I couldn't figure out how to pass properties to the NAnt build file. Then I decided to try passing the properties through the target argument of the Go nant task, like this:

<nant buildfile="testcode\test\this-is-my-buildfile.xml" target="-D:this-is-a-nant-property=&quot;dev&quot; -D:another-nant-property=&quot;jun&quot; ThisIsMyNantTarget" />

Note: Paths are relative to your Agent pipeline working directory.

This worked great, but a more intuitive way of doing this would have been nice. Maybe an arguments attribute, so there is no confusion between NAnt properties and the NAnt target.

Conclusion

I know this post is light on details, but I just wanted to get a quick brain dump of my experience with Go. Go has pretty good documentation on the Go.cd website, and posting questions to support elicited pretty fast feedback for a free product. I am excited to get involved with Go and the Go community. Overall, it was very easy to get a powerful Continuous Delivery server up and running in no time. You should check it out.

Trust No One or a Strange Automated Test

Nullius in verba (Latin for “on the word of no one” or “Take nobody’s word for it”)
http://en.wikipedia.org/wiki/Nullius_in_verba

This is the motto of the Royal Society, the UK's academy of science. I bring this up because I inherited an automated test suite, and I am in the process of clearing current errors and developing a maintenance plan for them. As I went through the tests, I questioned whether I could trust them. In general, it's difficult to trust automated tests, and it's worse when I didn't write them. Then I remembered "nullius in verba" and decided that although I will run these tests, fix them, and maintain them, I cannot trust them. In fact, since I am now responsible for all automated tests, I can't put any value in any test unless I watch it run, understand its purpose, and ascertain the validity of its assumptions. This is not to say that the people who wrote the tests I maintain are incompetent or cannot be trusted. In fact, many of the tests that I maintain were crafted by highly skilled professionals. I just trust no one and want to see for myself.

Even after evaluating automated tests, I can't really trust them because I don't watch every automated test run. I can't say for certain that they passed or failed, or that a result is a false positive. Since I don't watch every test run, I can only hope they are OK. I can't fully trust someone else's manual testing, given the fallibility of man, so I can't trust an automated check written by an imperfect human either. So, I view automated tests like manual tests: they are tools in the evaluation of the software under test.

It would be impractical to manually run every test covered by the automated suite, so a good set of tests provides more coverage than manual execution alone. One way automated tests provide value is when they uncover issues that point to interesting aspects of the system that warrant further investigation. Failing tests or unusually slow tests can give a marker to focus on in manual exploration of the software. This is only true if the tests are good: focused on one concept, not flaky (sometimes passing, sometimes failing), and possessing the other attributes of a good automated test. If the tests are bad, their failures may not be genuine, and they take away all value from the automated suite because I have to waste time investigating them. In fact, an automated test suite plagued with bad tests can increase the effort required to maintain the tests so much that it negates any value they provide. Maintainability is a primary criterion I evaluate when I inherit tests from someone else, and I have to see for myself whether each test is good and maintainable before I can place any value in it.

So, my current stance is to not trust anyone else's tests. Also, I do not elevate automated tests to being de facto proof that the software works. Yet, I find value in automated tests as another tool in my investigation of the quality of the software. If they don't cost much in terms of maintenance or running them, they provide value in my evaluation of software quality.

Nullius in verba

Scientific Exploration and Software Testing

Test ideas by experiment and observation,

build on those ideas that pass the test,

reject the ones that fail.

Follow the evidence wherever it leads

and question everything.

Astronomer Neil deGrasse Tyson, Cosmos, 2014

This was part of the opening monologue to the relaunch of the Cosmos television series. It provides a nice interpretation of the scientific method, but it also fits perfectly with one of my new roles as a software tester. Neil finishes the statement with:

Accept these terms and the cosmos is yours. Now come with me.

It could be said, "Accept these terms and success in software testing is yours." What I have learned so far about software testing falls firmly in line with the scientific method. I know software testing isn't as vast as exploring billions of galaxies, but with millions of different pathways through a computer program, software testing still requires rigor similar to that of any scientific exploration.

Visual Studio Conditional Build Events

I wanted to run a Post Build Event on Release builds only. I had never done a conditional build event, but it turns out it isn't that difficult. From what I have found so far, there are two ways to accomplish this.

If you define your Post Build Event on the Build Events page of the project properties in Visual Studio, you can add a conditional if statement to define the condition you want the event to run on.

if $(ConfigurationName) == Release (
copy $(TargetPath) $(SolutionDir)\Plugins\$(TargetFileName)
)

In this example I compare the $(ConfigurationName) property to the text "Release". You could replace this with the name of the build configuration you want to run your post-build script on. A note on build events: they are translated to batch files and then run, so you can do anything in your event that you could do in a .bat file (this is a big assumption, as I haven't run every command in a build event yet, but I strongly suspect most cases will be OK).

If you define your build event directly in the project file, you can put the condition on the PropertyGroup:

<PropertyGroup Condition=" '$(Configuration)' == 'Release' ">
    <PostBuildEvent>copy $(TargetPath) $(SolutionDir)\Plugins\$(TargetFileName)</PostBuildEvent>
</PropertyGroup>

If you haven’t used Build Events you should check them out as you can bend your build to your will. You can preprocess files before your build, move files after the build, clean directories…basically anything you can do with a batch file you can do in a Build Event, because it is a batch file.

Reference

Build Events – http://msdn.microsoft.com/en-us/library/ke5z92ks.aspx

Batch Files – http://technet.microsoft.com/en-us/library/bb490869.aspx

Build Event Macros – http://msdn.microsoft.com/en-us/library/42x5kfw4.aspx

IE WebDriver Proxy Settings

I recently upgraded to the NuGet version of the IE WebDriver (IEDriverServer.exe). I started noticing that when I ran my tests locally I could no longer browse the internet, and I found myself having to go into internet settings to reset my proxy. My first thought was that the new patch I had just received from corporate IT may have botched a rule for setting the browser proxy. After going through the dance of running tests and then resetting the proxy, I got pretty tired and finally came to the realization that it must be the driver and not IT.

The first stop was to check Bing for tips on setting a proxy for WebDriver. I found lots of great stuff for Java, but no help for .NET. Then I stumbled upon an entry in the Selenium change log that said, "Adding type-safe Proxy property to .NET InternetExplorerOptions class." A quick browse of the source code and I had my solution.

In the code that creates the web driver, I added a Proxy object set to auto-detect.

Proxy proxy = new Proxy();
proxy.IsAutoDetect = true;
proxy.Kind = ProxyKind.AutoDetect;

This sets up a new Proxy that is configured for auto-detect. Next, I added two properties, Proxy and UsePerProcessProxy, to the InternetExplorerOptions:

var options = new OpenQA.Selenium.IE.InternetExplorerOptions
{
     EnsureCleanSession = true,
     Proxy = proxy,
     UsePerProcessProxy = true
};

Proxy is set to the proxy we previously set up. UsePerProcessProxy tells the driver that we want this configuration to be set per process, NOT GLOBALLY, thank you (shouldn't this be the default? I'm just saying). EnsureCleanSession clears the cache when the driver starts; it is not necessary for the proxy config and is something I already had set.

Anyway, with this set up, all we have to do is feed the options to the driver.

var webDriver = new OpenQA.Selenium.IE.InternetExplorerDriver(options);

My test coding life is back to normal, for now.

Running SQL Files in C# with SMO

I have used SMO (older versions of it) to run SQL from C# before, but I wanted to do it in a new application I'm writing to help with seeding databases for tests. Actually, it was pretty easy, and you may ask why not just use ADO.NET or sqlcmd. Well, the SQL I want to run has T-SQL statements that ADO.NET can't work with. The sqlcmd tool is awesome from the command line, but I wanted a C# solution, and SMO let me get everything up and running with less ceremony.

First, you have to get the SMO DLLs necessary to connect to SQL Server and execute the scripts. I am using the files for SQL Server 2012, and I found the DLLs in C:\Program Files\Microsoft SQL Server\110\SDK\Assemblies\. You will need 3 of them:

  • Microsoft.SqlServer.ConnectionInfo.dll
  • Microsoft.SqlServer.Management.Sdk.Sfc.dll
  • Microsoft.SqlServer.Smo.dll

You can copy them to a common folder in your solution and reference them in the project you will use to code up your SQL file runner. If you are into all the Ninja code stuff, you will probably host them on a private NuGet server. Next, all you need is a little code:

using System;
using System.Data.SqlClient;
using System.IO;
using Microsoft.SqlServer.Management.Common;
using Microsoft.SqlServer.Management.Smo;

public class ExecuteSqlScript
{
	public void FromFile(string filePath, string connectionString)
	{
		// Read the whole script; the using block ensures the reader is closed.
		string script;
		using (StreamReader reader = new FileInfo(filePath).OpenText())
		{
			script = reader.ReadToEnd();
		}

		this.FromString(script, connectionString);
	}

	public void FromString(string script, string connectionString)
	{
		// SMO's ServerConnection runs scripts that plain ADO.NET can't handle.
		using (SqlConnection connection = new SqlConnection(connectionString))
		{
			Server server = new Server(new ServerConnection(connection));
			server.ConnectionContext.ExecuteNonQuery(script);
		}
	}

	//Finally figured out how to display formatted code, yay!
}

I basically have two methods. One loads the SQL from a file path and the other accepts a string with the SQL you want to run. They are both pretty self-explanatory: you supply the file path or SQL string and a connection string, and it will execute. You should add some error handling and perhaps tweak the file-read security and performance for your situation (see refs below). One thing I will be adding is a method to run all SQL files in a directory, or to iterate over a config file containing the paths to the SQL files that need to be run.
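Here is a rough sketch of what that directory method might look like if you add it to the class above (the sort-by-name convention is just my own assumption for keeping run order predictable):

public void FromDirectory(string directoryPath, string connectionString)
{
	// Run every .sql file in the folder; sorting by name makes the order predictable
	// (e.g., 001-schema.sql runs before 002-seed.sql).
	string[] files = Directory.GetFiles(directoryPath, "*.sql");
	Array.Sort(files, StringComparer.OrdinalIgnoreCase);

	foreach (string filePath in files)
	{
		this.FromFile(filePath, connectionString);
	}
}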

Anyway, this gives you a basis to create a more robust solution. If you need more advanced interaction, like transactions, take a closer look at the API for ServerConnection and read the docs; it wasn't too hard to get through, as the API is simple.

References

SQL Server Management Objects (SMO) Programming Guide – http://technet.microsoft.com/en-us/library/ms162169.aspx

C# .Net: Fastest Way to Read Text Files – http://blogs.davelozinski.com/curiousconsultant/csharp-net-fastest-way-to-read-text-files

Happy Coding!

Get Deep .NET Code Insight with SonarQube

Mapping My .NET Code Quality Pipeline with SonarQube

This throwback Tuesday post is a draft from 2013 that I updated to use the latest SonarQube. I got the new server running, but SonarQube is not currently a part of our production pipelines. Honestly, it is a lot easier to just run the Docker image these days (docker pull sonarqube:latest), but doing it the hard way was a fun trip down memory lane.

Lately, I’ve been sharing updates about my Code Quality Pipeline. Today, I’m thrilled to report that the core pipeline is nearly operational. What’s even more exciting is that I’ve integrated SonarQube, a powerful tool to monitor and analyze code quality. For those unfamiliar, here’s how SonarQube defines itself:

SonarQube® is an open-source quality management platform. It is designed to continuously analyze and measure technical quality. This analysis ranges from project portfolios to individual methods. It supports multiple programming languages via plugins, including robust support for Java and .NET.

In this post, I’ll guide you on setting up SonarQube to monitor your Code Quality Pipeline. We will leverage its capabilities for a .NET-focused development environment.


Setting Up SonarQube for .NET: Step-by-Step

To get started, I grabbed the latest versions of the required tools.

The SonarQube docs were a helpful reference; they have been updated here. I'll share the specific steps I followed to install and configure SonarQube on a Windows 11 environment.


1. Database Configuration

SonarQube requires a database for storing analysis results and configuration data. Here’s how I set it up on PostgreSQL (reference):

  1. Create an empty database:
    • Must be configured to use UTF-8 charset.
    • If you want to use a custom schema and not the default “public” one, the PostgreSQL search_path property must be set:
      ALTER USER mySonarUser SET search_path to mySonarQubeSchema
  2. Create a dedicated SonarQube user:
    • Assign CREATE, UPDATE, and DELETE permissions.
  3. Update the conf\sonar.properties file with the database connection after unzipping the SonarQube package (see below):
      sonar.jdbc.url=jdbc:postgresql://localhost/sonar
      sonar.jdbc.username=your-sonarqube-user
      sonar.jdbc.password=your-password

2. Installing the SonarQube Web Server

The SonarQube server handles analysis and provides a web interface for viewing results.

  1. Unzip the SonarQube package.
  2. Open the conf\sonar.properties file and configure:
    • Database connection details (see above).
    • Web server properties:
      sonar.web.host=0.0.0.0
      sonar.web.port=9000
      sonar.web.context=/sonarqube
  3. Ensure Java JDK 17 is installed. Anything higher and I had issues with the SecurityManager.
  4. Start the server by running the batch file: \bin\windows-x86-{your-system}\StartSonar.bat
  5. Verify the server is running by visiting http://localhost:9000 in your browser. The default credentials are: Username: admin, Password: admin.

3. Adding Plugins for .NET Support

SonarQube’s plugins for .NET projects enhance its ability to analyze C# code quality.

  • Navigate to the Marketplace within the SonarQube web interface.
  • Install the ecoCode – C# language plugin and any additional tools needed for your pipeline.

4. Integrating Sonar Scanner

Sonar Scanner executes code analysis and sends results to the SonarQube server.

  1. Download and extract Sonar Scanner.
  2. Add its bin directory to your system’s PATH.
  3. Configure the scanner's global settings by editing conf\sonar-scanner.properties (for example, sonar.host.url=http://localhost:9000). Put project-level settings such as sonar.projectKey, sonar.projectName, and sonar.projectVersion in a sonar-project.properties file at the root of your project (see the sketch after this list).
  4. Run the scanner from the root of your project: sonar-scanner
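A minimal sonar-project.properties sketch to drop in the project root; the key, name, version, and sources values here are placeholders, so swap in your own project's details:

sonar.projectKey=my_project
sonar.projectName=My Project
sonar.projectVersion=1.0
sonar.sources=.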

Monitoring Key Metrics

One of my goals with SonarQube is to track critical operational metrics like:

  • Code Quality: Bugs, vulnerabilities, code smells.
  • Performance: Memory and CPU usage, database load, cache requests.
  • Application Metrics: Web server requests, bandwidth usage, key transactions (e.g., logins, payments, background jobs).

To achieve this, I’ll leverage SonarQube’s dashboards and custom reports. These tools make it easy to visualize and monitor these KPIs in real-time.


The Impact: A Quality-First Development Workflow

With SonarQube integrated, my Code Quality Pipeline is equipped to ensure:

  • Continuous Code Quality: Early detection of bugs and vulnerabilities.
  • Performance Optimization: Proactive monitoring of resource utilization.
  • Improved Collaboration: Shared insights into code quality for the entire team.

Ready to Level Up Your Code Quality?

SonarQube makes it simple to raise the bar on your development processes. Whether you’re optimizing legacy code or building new features, this tool provides the insights you need to succeed.

Start your journey today: Download SonarQube.

Have questions or need guidance? Let me know in the comments—I’d love to hear how you’re leveraging SonarQube in your own pipelines!