Category: Throwback Tuesdays

Estimates are Bullshit

We had an issue in a new environment we were building out. For some reason, branded images were not being found on one of the websites. At one point, six developers were focused on this one problem for about two hours with no results (that’s 12 hours of effort). How would we estimate this beforehand? How would we account for those lost 12 hours, since they are not in the estimate? Those 12 hours have to come from somewhere.

I have been involved in hundreds of planning and estimation sessions. Some resulted in dead-on estimates, but most were over or under. Projects ended up either with nothing to do near the end or skimping on quality and piling up technical debt to meet the estimated deadline. Estimates are contextual. They can change the moment we understand something new about the context. What we know, regardless of context, is that we want to deliver value to our customers. We want to deliver and maintain a quality product. We want to deliver often.

Business management wants to make sure the cost to deliver does not exceed the return on the value we deliver. Business wants to deliver to generate revenue and manage costs to increase profits. I am not a business major, so I may have a naïve, simplistic view, but this is what I have experienced. Business wants to control costs, so they ask for estimates. When a project is delivered over the estimate, people get upset. So how do we provide the business with what it needs without relying on our inability as humans to predict the future?

Lean and agile practitioners give some clues to a viable solution:

  • Break down deliverables into bite-sized pieces. Bite-sized meaning whatever makes sense for your team.
  • Provide an estimate on each piece based on the current understanding of the context. Take no more than 10-15 minutes estimating each piece. You can use hours, days, team weeks, story points…it doesn’t matter; no matter how hard you try, you can’t accurately predict the future 100% of the time.
  • Deliver in small iterations. You can commit to delivering a set number of pieces per iteration with a floating release date, or you can commit to a release date and deliver the pieces you have ready on that date.
  • At the end of each iteration, re-estimate the pieces in the backlog and break down new deliverables to replace the ones that have been promoted to an iteration.

What does this mean for the business? They still get their estimates from the mystic developers and their sprint tarot card readings, but the business has to understand that those estimates will be adjusted iteratively to match the reality we live in. The business has to be willing to lose its investment in a first iteration. If developers promise a working product at the end of the iteration, the product should be worth the investment. If developers don’t deliver, the business can opt not to continue based on the re-estimate, or be willing to keep losing until they get something shippable. In the first iteration, deliver a working prototype, demo it to the business, get their feedback, adjust scope, and re-estimate what it will take to deliver the next iterations based on the current understanding of the context.

If you believe that developers can give perfect estimates and deadlines, I have a bridge you can buy for a steal.

If the business needs to forecast the future, deliver in small, fast, continuous increments. This builds predictability into the system with an increasing level of confidence, until the system changes and the cycle starts again.

In the end, estimates are bullshit! 

What do you think?

The Copilots Are Coming

This is an unpublished throwback from 2023. Obviously, the Copilots are here, and it’s much scarier than I thought.

In “The age of copilots” Satya Nadella, the CEO of Microsoft, outlines the company’s vision for Microsoft Copilot, positioning it as an integral tool across all user interfaces.

Microsoft Copilot
Meet your everyday AI companion for work and life.

https://www.microsoft.com/en-us/copilot

Copilot incorporates search functionality, harnessing the context of the web. This was a genius pivot of Bing Chat into a multi-platform service. They even have an enterprise version with added data protection (they are listening to the streets). And they are giving power to the people: Microsoft 365 now features Copilot, which operates across various applications. As a developer, I can easily integrate my Semantic Kernel plugins and my OpenAI GPTs and Assistants. I can build some things, my team can build more things, and considering how many Copilot things the world currently needs, I’m excited. So many tasks to optimize, so many roles to bring efficiency to, so many jobs-to-be-done to be supported by automation and AI.

We believe in a future where there will be a copilot for everyone and everything you do. 

Satya Nadella, CEO of Microsoft

Nadella emphasizes the customizability of Copilot for individual business needs, highlighting its application in different roles. GitHub Copilot aids developers in coding more efficiently, while SecOps teams leverage it for rapid threat response. For sales and customer service, Copilot integrates with CRM systems and agent desktops to enhance performance.

Furthermore, Nadella speaks about extending Copilot through Copilot Studio, which allows for further role-specific adaptations. He notes the emerging ecosystem around Copilot, with various independent software vendors and customers developing plugins to foster productivity and insights. I hope this means there is a Copilot Store coming, with some revenue share for independent software vendors like me and the company I work for.

You will, of course, need to tailor your Copilot for your very specific needs, your data, your workflows, as well as your security requirements. No two business processes, no two companies are going to be the same. 

Satya Nadella, CEO of Microsoft

Lastly, Nadella touches on future innovations in AI with mixed reality, where user interactions extend beyond language to gestures and gazes, and in AI with quantum computing, where natural phenomena can be simulated and quantum advancements can accelerate these processes. He envisions a future where such technology empowers every individual globally (actually, Nadella expressed more on Microsoft’s vision of caring for the world, and I appreciated it), offering personalized assistance in various aspects of life.

Nadella did a good job of expressing Microsoft’s vision for caring for our world. Microsoft will be “generating 100 percent of the energy they use in their datacenters, from zero-carbon sources by 2025.” He said that, and next year is 2024. I hope they stay on track towards this goal.

Charles L. Bryant, Citizen of the World

The message concludes with a reference to a video featuring a Ukrainian developer’s experience with Copilot. This is also a lesson in the power of expressing the value of a product with story and emotion. Storyboard Copilot is coming too.

Streamlining Dependency Management: Lessons from 2015 to Today

In this Throwback Tuesday post, we revamp a dusty draft from 2015.

In 2015, I faced a challenging problem: managing dependencies across a suite of interconnected applications while ensuring efficient, safe builds and deployments. Our system included 8 web applications, 24 web services, and 8 Windows services, for a total of 40 pipelines for building, deploying, and testing. At the time, this felt manageable in terms of automation, but shared dependencies introduced complexity. It was critical that all applications used the same versions of internal dependencies, especially because they interacted with a shared database and dependency changes could alter that interaction.

Back then, we used zip files for our package format and were migrating to NuGet to streamline dependency management. NuGet was built for exactly this kind of challenge. However, we needed a system that built shared dependencies once, ensured version consistency across all applications, and handled local and server builds seamlessly.

Here’s how I approached the problem in 2015 and how I’d tackle it today, leveraging more modern tools and practices.


The 2015 Solution: NuGet as a Dependency Manager

Problem Statement

We had to ensure:

  1. Shared dependencies were built once and consistently used by all applications.
  2. Dependency versions were automatically synchronized across all projects (both local and server builds).
  3. External dependencies were handled individually per application.

The core challenge was enforcing consistent dependency versions across 40 applications without excessive manual updates or creating a maintenance nightmare.

2015 Approach

  1. Migrating to NuGet for Internal Packages
    We began by treating internal dependencies as NuGet packages. Each shared dependency (e.g., ProjB, ProjC, ProjD) was packaged with a version number and stored in a NuGet repository. When a dependency changed, we built it and updated the corresponding NuGet package version.
  2. Version Synchronization
    To ensure that dependent applications used the same versions of internal packages:
    • We used nuspec files to define package dependencies.
    • NuGet commands like nuget update were incorporated into our build process. For example, if ProjD was updated, nuget update ProjD was run in projects that depended on it.
  3. Automating Local and Server Builds
    We integrated NuGet restore functionality into both local and server builds. On the server, we used Cruise Control as our CI server. We added a build target that handled dependency restoration before the build process began. Locally, Visual Studio handled this process, ensuring consistency across environments.
  4. Challenges Encountered
    • Updating dependencies manually with nuget update was error-prone and repetitive, especially for 40 applications.
    • Adding new dependencies required careful tracking to ensure all projects referenced the latest versions.
    • Changes to internal dependencies triggered cascading updates across multiple pipelines, which increased build times.
    • We won’t talk about circular dependencies.

Despite these challenges, the system worked, providing a reliable way to manage dependency versions across applications.


The Modern Solution: Solving This in 2025

Fast forward to today, and the landscape of dependency management has evolved. Tools like NuGet remain invaluable. However, modern CI/CD pipelines have transformed how we approach these challenges. Advanced dependency management techniques and containerization have also contributed to this transformation.

1. Use Modern CI/CD Tools for Dependency Management

  • Pipeline Orchestration: Platforms like GitHub Actions, Azure DevOps, or GitLab CI/CD let us build dependencies once. We can reuse artifacts across multiple pipelines. Shared dependencies can be stored in artifact repositories (e.g., Azure Artifacts, GitHub Packages) and injected dynamically into downstream pipelines.
  • Dependency Locking: Tools like NuGet’s lock file (packages.lock.json) ensure version consistency by locking dependencies to specific versions.
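
A sketch of how this might look for the 2015 scenario, assuming modern SDK-style projects (the property and flag below are NuGet’s documented lock-file mechanism): each application opts in to generating packages.lock.json, and the server build restores in locked mode so any version drift fails fast.

<!-- In each application's .csproj: opt in to packages.lock.json generation -->
<PropertyGroup>
  <RestorePackagesWithLockFile>true</RestorePackagesWithLockFile>
</PropertyGroup>

On the build server, running dotnet restore --locked-mode then fails the restore whenever a resolved version no longer matches the committed lock file.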

2. Automate Version Synchronization

  • Semantic Versioning: Internal dependencies should follow semantic versioning (e.g., 1.2.3) to track compatibility.
  • Automatic Dependency Updates: Use tools like Dependabot or Renovate to update internal dependencies across all projects. These tools can automate pull requests whenever a new version of an internal package is published.
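
A minimal sketch of a .github/dependabot.yml for the NuGet case (assuming GitHub-hosted repositories; a private feed would additionally need a registries entry with credentials):

version: 2
updates:
  - package-ecosystem: "nuget"   # watch NuGet dependencies
    directory: "/"               # where the solution/project files live
    schedule:
      interval: "daily"          # check for new internal package versions daily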

3. Embrace Containerization

  • By containerizing applications and services, shared dependencies can be bundled into base container images. These images act as a consistent environment for all applications, reducing the need to manage dependency versions separately.

4. Leverage Centralized Package Management

  • Modern package managers like NuGet now include improved version constraints and dependency management. For example:
    • Use a shared Directory.Packages.props file to define and enforce consistent dependency versions across all projects in a repository (see the sketch after this list).
    • Define private NuGet feeds for internal dependencies and configure all applications to pull from the same feed.
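
Here is a minimal sketch of that central package management setup, using the hypothetical ProjD package from the case study below (ManagePackageVersionsCentrally and PackageVersion are NuGet’s documented central-package-management mechanism):

<!-- Directory.Packages.props at the repository root -->
<Project>
  <PropertyGroup>
    <ManagePackageVersionsCentrally>true</ManagePackageVersionsCentrally>
  </PropertyGroup>
  <ItemGroup>
    <!-- One place to bump an internal dependency for every application -->
    <PackageVersion Include="ProjD" Version="1.2.3" />
  </ItemGroup>
</Project>

<!-- In each application's .csproj: reference the package without a version -->
<ItemGroup>
  <PackageReference Include="ProjD" />
</ItemGroup>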

5. Monitor and Enforce Consistency

  • Dependency Auditing: Tools like WhiteSource or SonarQube can analyze dependency usage to ensure all projects adhere to the same versions.
  • Build Once, Deploy Everywhere: By decoupling build and deployment, you can reuse prebuilt NuGet packages in local and server builds. This ensures consistency without rebuilding dependencies unnecessarily.

Case Study: Revisiting ProjA, ProjB, ProjC, and ProjD

Let’s revisit the original example that helped me figure this out in 2015, but using today’s tools.

  1. When ProjD changes:
    • A CI/CD pipeline builds the new version of ProjD and publishes it as a NuGet package to the internal feed.
    • Dependency lock files in ProjB and ProjC ensure they use the updated version.
  2. Applications automatically update:
    • Dependabot identifies the new version of ProjD and creates pull requests to update ProjB and ProjC.
    • After merging, ProjA inherits the changes through ProjB.
  3. Consistency is enforced:
    • Centralized package configuration (Directory.Packages.props) ensures that local and server builds use the same dependency versions.

The Results

By modernizing our approach:

  • Efficiency: Dependencies are built once and reused, reducing redundant builds.
  • Consistency: Dependency versions are enforced across all projects, minimizing integration issues.
  • Scalability: The system can scale to hundreds of applications without introducing maintenance overhead.

Conclusion

In 2015, we solved the problem using NuGet and MSBuild magic to enforce dependency consistency. Today, with modern tools and practices, the process is faster, more reliable, and scalable. Dependency management is no longer a bottleneck; it’s an enabler of agility and operational excellence.

Are you ready to future-proof your dependency management? Let’s talk about optimizing your build and deployment pipelines today.

Writing Automated Integration Tests by the Numbers

This Throwback Tuesday post is a revamped draft from January 2014, where I wrote about writing SpecFlow tests. Here I am generalizing the process because I don’t use SpecFlow anymore.

One thing I learned in the Marine Corps was to do things by the numbers. It was a natural fit for my analytical mind. Plus, let’s face it, we were told we were useless maggots as dumb as a rock, and this training method was apparently the easiest way to teach a bunch of recruits. Naturally, it worked great for a dumb rock like me, OORAH!

Because of this lesson, I’ve always tried to distill common processes into neat little numbered lists. They’re easy to refer to, teach from, and optimize. When I find a pattern that works across a wide range of scenarios, I know I’ve hit on something useful. So, with that in mind, here’s how I approach writing automated integration tests by the numbers.


1. Understand the Test Data Needs

The first step in any integration test is figuring out the test data you need. This means asking questions like, “What inputs are required? What outputs am I validating?” You can’t test a system without meaningful data, so this step is non-negotiable.

2. Prepare the Test Data

Once you know what you need, it’s time to create or acquire that data. Maybe you generate it on the fly using a tool like Faker. Maybe you’ve got pre-existing seed scripts to load it. Whatever the method, getting the right data in place is critical to setting the stage for your tests.
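
For example, a rough sketch using @faker-js/faker (the property calls assume a recent version of the library; shape the record however your system needs):

// Generate a throwaway contact record for an integration test
import { faker } from '@faker-js/faker';

function buildTestContact(overrides = {}) {
  return {
    name: faker.person.fullName(),
    email: faker.internet.email(),
    address: faker.location.streetAddress(),
    ...overrides, // let individual tests pin the fields they assert on
  };
}

const contact = buildTestContact({ name: 'Known Test User' });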

3. Set Up the Environment

Integration tests usually need a controlled environment. This might involve spinning up Docker containers, running a seed script, or setting up mock services. Automating this step wherever possible is the key to saving time and avoiding headaches.

4. Run a Manual Sanity Check

Before diving into automation, I like to run the test manually. This gives me a feel for what the system is doing and helps catch any obvious issues before I start coding. If something’s off, it’s better to catch it here than waste time troubleshooting broken automation.

5. Create Reusable Test Components

If the test interacts with a UI, this is where I’d create or update page objects. For APIs or other layers, I’d build out reusable components to handle the interactions. Modular components make tests easier to write, maintain, and debug.
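
A bare-bones page object sketch (the selectors and the page handle are hypothetical; the same shape works with Playwright, Cypress, or Selenium bindings):

// Encapsulate the login screen so tests never touch raw selectors
class LoginPage {
  constructor(page) {
    this.page = page;
  }

  async login(username, password) {
    await this.page.fill('#username', username);
    await this.page.fill('#password', password);
    await this.page.click('#login-button');
  }
}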

6. Write and Organize the Tests

This is the core of the process: writing the test steps and organizing them logically. Whether you’re using SpecFlow, pytest, or any other framework, the principle is the same: break your tests into clear, reusable steps.
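
A rough sketch of that organization in any describe/it style framework (buildTestContact and contactApi stand in for the reusable components from the previous steps; the names are hypothetical):

describe('Contact API', () => {
  it('creates a contact and returns it by id', async () => {
    // Arrange: reusable test data helper
    const contact = buildTestContact();

    // Act: reusable interaction component
    const created = await contactApi.create(contact);
    const fetched = await contactApi.getById(created.id);

    // Assert
    expect(fetched.name).toBe(contact.name);
  });
});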

7. Tag and Manage Tests

In SpecFlow, I used to tag scenarios with @Incomplete while they were still under development. Modern frameworks let you tag or group tests to control when and how they run. This is handy for managing incomplete tests or running only high-priority ones in CI/CD pipelines.

8. Debug and Refine

Once the test is written, run it and fix any issues. Debugging is a given, but this is also a chance to refine your steps or improve your reusable components. The goal is to make each test rock-solid and maintainable.


Lessons Learned

Breaking things down by the numbers isn’t just about being organized—it’s about being aware of where the bottlenecks are. For me, steps 1 and 2 (understanding and preparing test data) are often the slowest. Knowing that helps me focus on building tools and processes to speed up those steps.

This approach also makes training others easier. If I need to onboard someone to integration testing:

  1. Pair with them on a computer.
  2. Pull out the “Integration Tests by the Numbers” list.
  3. Call them a worthless maggot as dumb as a rock (just kidding… mostly).
  4. Walk through the process step by step.

Relevance Today

Even though I don’t use SpecFlow anymore, this process still applies. Integration testing frameworks and tools have evolved, but the principles are timeless. Whether you’re using Playwright, Cypress, or RestAssured, these steps form the foundation of effective testing.

What’s different now is the tooling. Tools like Docker, Terraform, and CI/CD pipelines have made environment setup easier. Test data can be generated on the fly with libraries like Faker or FactoryBot. Tests can be grouped and executed conditionally with advanced tagging systems.

The key takeaway? Processes evolve, but the mindset of breaking things down by the numbers is as valuable as ever. It’s how I keep my integration tests efficient, maintainable, and scalable.

Unlock JavaScript Key Code Values for Developers

I have a mountain of unpublished blog posts. I can now use AI to make more sense of all of this content. So, I decided to start cleaning up the posts and releasing them in a series I’m calling “Throwback Tuesdays”. I aim to make them relevant for today as best I can. However, I will publish even if it’s a dead concept.

First up, JavaScript ASCII Key Code Values from January 2015. I was struggling to find key codes and got a list of values from Rachit Patel (see the end of this post for the list). This was just a reference post so I didn’t have to dig through Google search results to find a key code. It’s somewhat useless today with all the ready-made code and AI that can crank out key-code-based solutions.

Unlocking the Power of Keyboard Shortcuts in Web Applications

Why did I need a list of key codes? I needed to make keyboard shortcuts for various use cases. Let’s explore the possibilities.

Keyboard shortcuts are an essential tool for enhancing user experience and interaction in web applications. By responding to key presses, developers can create intuitive and powerful functionality, ranging from custom navigation to accessibility features. Below, we explore examples of why and how these shortcuts can be used, complete with code and explanations to inspire your next project.


Form Navigation

Use Case: Improve user experience by enabling seamless navigation between input fields.

Code:

document.addEventListener('keydown', (event) => {
    if (event.keyCode === 9) { // Tab key
        event.preventDefault();
        const inputs = Array.from(document.querySelectorAll('input, textarea'));
        const current = inputs.indexOf(document.activeElement);
        const next = (current + 1) % inputs.length;
        inputs[next].focus();
    }
});

Explanation:

  • Listens for the Tab key press (keyCode 9).
  • Prevents the default behavior and cycles focus through input and textarea fields in a custom order.

Custom Keyboard Shortcuts

Use Case: Provide power users with quick access to application features.

Code:

document.addEventListener('keydown', (event) => {
    if (event.ctrlKey && event.keyCode === 83) { // Ctrl+S
        event.preventDefault();
        console.log('Save shortcut triggered');
    }
});

Explanation:

  • Detects when the Ctrl key is pressed along with S (keyCode 83).
  • Prevents the browser’s default save dialog and triggers custom functionality, such as saving data.

Game Controls

Use Case: Enable interactive movement in games or apps.

Code:

document.addEventListener('keydown', (event) => {
    switch (event.keyCode) {
        case 37: // Left arrow
            console.log('Move left');
            break;
        case 38: // Up arrow
            console.log('Move up');
            break;
        case 39: // Right arrow
            console.log('Move right');
            break;
        case 40: // Down arrow
            console.log('Move down');
            break;
    }
});

Explanation:

  • Maps arrow keys to movement directions (left, up, right, down).
  • Switch statements check the keyCode and trigger corresponding actions.

Text Editor Commands

Use Case: Allow users to insert a tab character in text areas.

Code:

document.addEventListener('keydown', (event) => {
    if (event.keyCode === 9) { // Tab key
        event.preventDefault();
        const editor = document.getElementById('editor');
        const start = editor.selectionStart;
        editor.value = editor.value.slice(0, start) + '\t' + editor.value.slice(start);
        editor.selectionStart = editor.selectionEnd = start + 1;
    }
});

Explanation:

  • Overrides the default Tab key behavior to insert a tab character (\t) at the cursor position in a text editor.

Secret Feature Activation

Use Case: Trigger hidden features using specific key sequences.

Code:

let secretSequence = [38, 38, 40, 40, 37, 39, 37, 39, 66, 65]; // Konami Code
let inputSequence = [];

document.addEventListener('keydown', (event) => {
    inputSequence.push(event.keyCode);
    inputSequence = inputSequence.slice(-secretSequence.length); // keep only the last N key presses
    if (inputSequence.join(',') === secretSequence.join(',')) { // ',' separator avoids false matches like 3,838 vs 38,38
        console.log('Secret mode activated!');
    }
});

Explanation:

  • Tracks user key presses and compares them to a predefined sequence (e.g., the Konami Code).
  • Executes an action when the sequence is completed.

Virtual Keyboard Input

Use Case: Mimic physical keyboard input for touchscreen devices.

Code:

const virtualKeys = document.querySelectorAll('.virtual-key');
virtualKeys.forEach((key) => {
    key.addEventListener('click', () => {
        const keyCode = parseInt(key.dataset.keyCode, 10);
        const event = new KeyboardEvent('keydown', { keyCode });
        document.dispatchEvent(event);
    });
});

Explanation:

  • Creates virtual keys that simulate real key presses by dispatching synthetic keydown events.
  • Useful for applications that run on touchscreen devices.

Accessibility Features

Use Case: Provide shortcuts to assist users with disabilities.

Code:

document.addEventListener('keydown', (event) => {
    if (event.keyCode === 16) { // Shift key
        console.log('Accessibility shortcut triggered');
    }
});

Explanation:

  • Detects the Shift key press (keyCode 16) and performs an action, such as enabling high-contrast mode.

Media Controls

Use Case: Control video playback using the keyboard.

Code:

document.addEventListener('keydown', (event) => {
    const video = document.getElementById('videoPlayer');
    if (event.keyCode === 32) { // Spacebar
        video.paused ? video.play() : video.pause();
    } else if (event.keyCode === 37) { // Left arrow
        video.currentTime -= 5;
    } else if (event.keyCode === 39) { // Right arrow
        video.currentTime += 5;
    }
});

Explanation:

  • Spacebar toggles play/pause, while the left and right arrow keys adjust the playback position.

Form Validation

Use Case: Restrict input to numeric values only.

Code:

document.getElementById('numberInput').addEventListener('keydown', (event) => {
    const controlKeys = [8, 9, 37, 39, 46]; // backspace, tab, left/right arrows, delete
    if (controlKeys.includes(event.keyCode)) return; // keep basic editing keys working
    if ((event.keyCode < 48 || event.keyCode > 57) && // Numbers 0-9
        (event.keyCode < 96 || event.keyCode > 105)) { // Numpad 0-9
        event.preventDefault();
    }
});

Explanation:

  • Prevents non-numeric keys from being entered, while still allowing editing keys like Backspace and the arrows, ensuring valid input.

Fullscreen or Escape

Use Case: Toggle fullscreen mode or close a modal.

Code:

document.addEventListener('keydown', (event) => {
    if (event.keyCode === 27) { // Escape
        console.log('Modal closed');
    } else if (event.keyCode === 122) { // F11
        event.preventDefault();
        if (document.fullscreenElement) {
            document.exitFullscreen(); // already fullscreen, so exit
        } else {
            document.documentElement.requestFullscreen();
        }
    }
});

Explanation:

  • Escape key closes modals or cancels actions.
  • F11 toggles fullscreen mode, overriding default behavior.

Conclusion

By leveraging keyboard shortcuts, developers can create applications that are not only more user-friendly but also highly functional and accessible. These examples range from form navigation to hidden features. They demonstrate how key presses can enhance interactivity and usability in your web applications. Explore these ideas in your own projects to deliver delightful and intuitive user experiences.

JavaScript ASCII Key Code Values

| Key | KeyCode |
| --- | --- |
| backspace | 8 |
| tab | 9 |
| enter | 13 |
| shift | 16 |
| ctrl | 17 |
| alt | 18 |
| pause/break | 19 |
| caps lock | 20 |
| escape | 27 |
| page up | 33 |
| page down | 34 |
| end | 35 |
| home | 36 |
| left arrow | 37 |
| up arrow | 38 |
| right arrow | 39 |
| down arrow | 40 |
| insert | 45 |
| delete | 46 |
| 0 | 48 |
| 1 | 49 |
| 2 | 50 |
| 3 | 51 |
| 4 | 52 |
| 5 | 53 |
| 6 | 54 |
| 7 | 55 |
| 8 | 56 |
| 9 | 57 |
| a | 65 |
| b | 66 |
| c | 67 |
| d | 68 |
| e | 69 |
| f | 70 |
| g | 71 |
| h | 72 |
| i | 73 |
| j | 74 |
| k | 75 |
| l | 76 |
| m | 77 |
| n | 78 |
| o | 79 |
| p | 80 |
| q | 81 |
| r | 82 |
| s | 83 |
| t | 84 |
| u | 85 |
| v | 86 |
| w | 87 |
| x | 88 |
| y | 89 |
| z | 90 |
| left window key | 91 |
| right window key | 92 |
| select key | 93 |
| numpad 0 | 96 |
| numpad 1 | 97 |
| numpad 2 | 98 |
| numpad 3 | 99 |
| numpad 4 | 100 |
| numpad 5 | 101 |
| numpad 6 | 102 |
| numpad 7 | 103 |
| numpad 8 | 104 |
| numpad 9 | 105 |
| multiply | 106 |
| add | 107 |
| subtract | 109 |
| decimal point | 110 |
| divide | 111 |
| f1 | 112 |
| f2 | 113 |
| f3 | 114 |
| f4 | 115 |
| f5 | 116 |
| f6 | 117 |
| f7 | 118 |
| f8 | 119 |
| f9 | 120 |
| f10 | 121 |
| f11 | 122 |
| f12 | 123 |
| num lock | 144 |
| scroll lock | 145 |
| semi-colon | 186 |
| equal sign | 187 |
| comma | 188 |
| dash | 189 |
| period | 190 |
| forward slash | 191 |
| grave accent | 192 |
| open bracket | 219 |
| back slash | 220 |
| close bracket | 221 |
| single quote | 222 |

Software Delivery Metrics

This is a post from 2014 stuck in my drafts. Be free little post… be free.

We have been pondering metrics for software delivery at work. Let me tell you, trying to hammer down a core set of global metrics for an organization with thousands of developers is not an easy task. Fortunately, in my personal projects I am only concerned with:

  • How many defects are reported in production.
  • How fast are we fixing production defects.
  • How many production defects are recurring or repeat offenders.

Can there be more metrics? Absolutely, but until I have a good handle on these I don’t want to complicate things by tracking anything that doesn’t have a direct effect on my customers. Having 5, 10, 20…or more metrics that I actively track would make me over-analyze and spread my focus too wide. Keeping it simple and focused on the metrics that bring the most insight into keeping my customers happy with my product is what matters most.

Would this limited set of metrics work for every project, every company? No. My metrics are optimized for the goals of my small product and company. You have to find the thing that is most important to your company. This is where it gets difficult. There are so many opinions about what a good metric is, and people want to advocate the metrics that have worked for them. The answer for large-scale metrics programs may be to focus on achieving a core set of goals and only have metrics that correlate directly with those goals while having relevance in every part of the company. Easier said than done, but I believe this would force the scope of the metrics program downward. Fewer metrics is a good, good thing.

In fact, I believe that a burgeoning metrics program should focus on one thing at a time as you ramp up. Choose one problem to fix in your software delivery and find a metric that can shed light on a possible way to fix it. If you have a problem with delivery time and you follow Kanban, maybe some type of process-flow metric would benefit you. What you want to do is optimize your metrics for your particular problem space, and there isn’t a secret formula or magic bullet that someone can write in a blog to get you there. You have to try something. Pick a relevant metric, throw it at the wall, and if it sticks, run with it and find another one.

Once you have a metric, get your benchmark by querying your current data to see where you are with the metric. The benchmark is your measuring stick and the point from which you measure your good and bad trends. Once you have your benchmark, develop a tracking system: how to collect, store, and report on the metric. Begin tracking it and implementing programs to improve it. Follow the trend of the metric to see how your changes are affecting it. Then, when you have a handle on how the metric works for you, you will have a framework to develop additional metrics. You can call it the Minimum Viable Metric, if you will.
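
As a toy sketch of getting a benchmark from existing data, here is one way to compute a mean time-to-fix for production defects (the defect record shape is hypothetical; pull the real fields from your tracker):

// defects: [{ reportedAt: Date, fixedAt: Date | null }, ...] exported from your tracker
function meanTimeToFixHours(defects) {
  const fixed = defects.filter((d) => d.fixedAt); // only count resolved defects
  const totalMs = fixed.reduce((sum, d) => sum + (d.fixedAt - d.reportedAt), 0);
  return fixed.length ? totalMs / fixed.length / (1000 * 60 * 60) : 0;
}

That single number becomes the benchmark; rerun it each iteration and watch the trend.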

The point is, if you spin your wheels analyzing which metrics to use, months will roll by and you will be no better off. Precious data will just pass you by. Start today and you may find yourself with a wealth of actionable data at your disposal and the means to roll out more metrics.

Digital Services Playbook

https://playbook.cio.gov/

The US government’s Digital Services Playbook was born out of the failures of HealthCare.gov.

I thought it was awesome when I first wrote this as a draft post in 2014. After a quick peek at some of the plays, it’s still something that can be modified and used by many teams wanting to improve how they are delivering value through software in or out of government.

You can actually find this on GitHub, so it is open, which is a theme of Code for America.

Event Sourcing: Stream Processing with Real-time Snapshots

I began writing this a long time ago after I viewed a talk on Event Sourcing by Greg Young. It was just a simple thought on maintaining the current snapshot of a projection of an event stream as events are generated. I eventually heard a talk called “Turning the database inside-out with Apache Samza” by Martin Kleppmann, http://www.confluent.io/blog/turning-the-database-inside-out-with-apache-samza/. The talk was awesome, as I mentioned in a previous post. It provided structure, understanding and coherence to the thoughts I had.

It still took a while to finish this post after seeing the talk (because I have too much going on), but I would probably still be stuck on this post for a long time if I hadn’t heard the talk and looked further into stream processing.

Event Sourcing

Event sourcing is the storing of a stream of facts that have occurred in a system. Facts are immutable. Once a fact is stored it can’t be changed or removed. Facts are captured as events.

An event is a representation of an action that occurred in the past. An event is usually an abstraction of some business intent and can have other properties or related data.

To get the state for a point in time we have to process all of the previous events to build up the state to the point in time.

State Projections

In our system we want to store events that happen in the system. We will use these events to figure out the current state of the system. When we need to know the current state of the system we calculate a left fold of the previous facts we have stored from the beginning of time to the last fact stored. We iterate over each fact, starting with the first one, calculating the change in state at each iteration. This produces a projection of the current transient state of the system. Projections of current state are transient because they don’t last long. As new facts are stored new projections have to be produced to get the new current state.
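
In code, that left fold is just a reduce over the stored events. A minimal sketch (the event names follow the contact example used later in this post):

// Rebuild the current state of a projection by folding over its events
function project(events) {
  return events.reduce((state, event) => applyEvent(state, event), {});
}

function applyEvent(state, event) {
  switch (event.type) {
    case 'Create Contact':
      return { name: event.name, address: event.address };
    case 'Change Contact Address':
      return { ...state, address: event.address };
    default:
      return state; // unknown events leave the projection untouched
  }
}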

Projection Snapshots

Snapshots are sometimes shown in examples of event sourcing. Snapshots are a type of memoization used to help optimize rebuilding state. If we have to rebuild state from a large stream of facts, it can be cumbersome and slow. This is a problem when you want your system to be fast and responsive. So we take snapshots of a projection at various points in the stream so that we can begin rebuilding state from a snapshot instead of having to replay the entire stream. So, a snapshot is a cache of a projection of state at some point in time.
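
A small sketch of that optimization, reusing the applyEvent helper from the sketch above and assuming the snapshot remembers the stream position it covers:

// Rebuild state from the latest snapshot plus only the events stored after it
function projectFromSnapshot(snapshot, eventsAfterSnapshot) {
  const startingState = snapshot ? snapshot.state : {};
  return eventsAfterSnapshot.reduce((state, event) => applyEvent(state, event), startingState);
}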

Snapshots in traditional event sourcing examples have a problem because the snapshot is a version of state for some version of a projection. A projection is a representation of state based on current understanding. When the understanding changes there are problems.

Snapshot Issues

Let’s say we have an application and we understand the state as a Contact object containing a name and address property. Let’s also say we have a couple facts that alter state. One fact is a new contact was created and is captured by a “Create Contact” event containing “Name” and “Address” data that is the name and address of a contact. Another fact is a contact changed their address and is captured by a “Change Contact Address” event containing “Address” data with the new address for a contact.

When a new contact is added, a “Create Contact” event is stored. When a contact’s address is changed, a “Change Contact Address” event is stored. To project the current state of a contact that has a “Create Contact” and a “Change Contact Address” event stored, we first create a new Contact object, then get the first event, “Create Contact”, from the event store and update the Contact object from the event data. Then we get the “Change Contact Address” event and update the Contact object with the new address from the event.

That was a lot of words for a very simple concept. We created a projection of state in the form of a Contact object and changed the state of the projection from stored events. What happens when we change the structure of the projection? Instead of a Contact object with Name and Address, we now have a Contact object with Name, Address1, Address2, City, State, and Zip. We now have a new version of the projection, and previous snapshots made with other versions of the projection are invalid. To get a valid projection of the current state with the new projection, we have to recalculate from the beginning of time.

Sometimes we don’t want the current state. What if we want to see state at some other point in time instead of the head of our event stream? We could optimize rebuilding a new projection by using some clever mapping to transform an old snapshot version to the new version of the projection. If there are many versions, we would have to make sure all supported versions are accounted for.

CQRS

We could use a CQRS architecture with event sourcing. Commands write events to the event store, and queries read state from a projection snapshot. Queries would target a specific version of a projection. The application would be as consistent as the time it takes to produce a new snapshot from the previous snapshot, which was only one event earlier (fast).

Real-time Snapshots

A snapshot is like a cache of state, and you know how difficult cache invalidation is. If we instead create a real-time snapshot as facts are produced, we always have the current snapshot for a version of the projection. To maintain backwards compatibility, we can keep real-time snapshots for the various versions of projections we want to maintain. When we have a new version of a projection, we start rebuilding state from the beginning of time. When the rebuild has caught up with the current state, we start real-time snapshots. So, there will be a period of time where new versions of projections aren’t ready for consumption while they are being built. With real-time snapshots we don’t have to worry about running funky code to invalidate or rebuild state; we just read the snapshot for the version of the projection that we want. When we don’t want to support a version of a projection, we take the endpoint that points to it offline. When we have a new version that is ready for consumption, we bring a new endpoint online. When we want to upgrade or downgrade, we just point to the endpoint we want.
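
A minimal sketch of the idea (the event stream and snapshot store APIs are hypothetical): each supported projection version subscribes to the stream and persists an updated snapshot as every new fact arrives.

// Keep a per-version snapshot current as events are appended to the stream
function startRealTimeSnapshot(eventStream, snapshotStore, version, applyEvent) {
  eventStream.subscribe(async (event) => {
    const current = (await snapshotStore.get(version)) ?? { state: {}, position: 0 };
    const next = {
      state: applyEvent(current.state, event),
      position: event.position, // how far this snapshot has caught up in the stream
    };
    await snapshotStore.put(version, next); // readers always see the latest projection
  });
}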

Storage may be a concern if we are storing every snapshot of state. We could have a strategy to purge older snapshots. Deleting a snapshot is not a bad thing. We can always rebuild a projection from the event store. As long as we keep the events stored we can always create new projections or rebuild projections.

Conclusion

Well, this was just me trying to clean out a backlog of old posts and finishing some thoughts I had on real-time state snapshots from an event stream. If you want to read or watch a much better examination of this subject, visit “Turning the database inside-out with Apache Samza” by Martin Kleppmann, http://www.confluent.io/blog/turning-the-database-inside-out-with-apache-samza/. You can also check out implementations of these concepts with Apache Samza or something like it, such as Azure Stream Analytics.

Get Deep .NET Code Insight with SonarQube

Mapping My .NET Code Quality Pipeline with SonarQube

This Throwback Tuesday post is a draft from 2013 that I updated to use the latest SonarQube. I got the new server running, but SonarQube is not currently a part of our production pipelines. Actually, I think it is a lot easier to run the Docker image for this (docker pull sonarqube:latest), although doing it the hard way was a fun trip down memory lane.

Lately, I’ve been sharing updates about my Code Quality Pipeline. Today, I’m thrilled to report that the core pipeline is nearly operational. What’s even more exciting is that I’ve integrated SonarQube, a powerful tool to monitor and analyze code quality. For those unfamiliar, here’s how SonarQube defines itself:

SonarQube® is an open-source quality management platform. It is designed to continuously analyze and measure technical quality. This analysis ranges from project portfolios to individual methods. It supports multiple programming languages via plugins, including robust support for Java and .NET.

In this post, I’ll guide you on setting up SonarQube to monitor your Code Quality Pipeline. We will leverage its capabilities for a .NET-focused development environment.


Setting Up SonarQube for .NET: Step-by-Step

To get started, I grabbed the latest versions of the required tools.

The SonarQube docs were a helpful reference (they have been updated since I first drafted this). I’ll share the specific steps I followed to install and configure SonarQube on a Windows 11 environment.


1. Database Configuration

SonarQube requires a database for storing analysis results and configuration data. Here’s how I set it up on PostgreSQL (reference):

  1. Create an empty database:
    • Must be configured to use UTF-8 charset.
    • If you want to use a custom schema and not the default “public” one, the PostgreSQL search_path property must be set:
      ALTER USER mySonarUser SET search_path to mySonarQubeSchema
  2. Create a dedicated SonarQube user:
    • Assign CREATE, UPDATE, and DELETE permissions.
  3. Update the sonar.properties file with the database connection after unzipping the SonarQube package (see below):
      sonar.jdbc.url=jdbc:postgresql://localhost/sonar
      sonar.jdbc.username=your-sonarqube-user
      sonar.jdbc.password=your-password

2. Installing the SonarQube Web Server

The SonarQube server handles analysis and provides a web interface for viewing results.

  1. Unzip the SonarQube package.
  2. Open the conf\sonar.properties file and configure:
    • Database connection details (see above).
    • Web server properties:
      sonar.web.host=0.0.0.0
      sonar.web.port=9000
      sonar.web.context=/sonarqube
  3. Ensure Java JDK 17 is installed. Anything higher and I had issues with the SecurityManager.
  4. Start the server by running the batch file: \bin\windows-x86-{your-system}\StartSonar.bat
  5. Verify the server is running by visiting http://localhost:9000 in your browser. The default credentials are username admin and password admin.

3. Adding Plugins for .NET Support

SonarQube’s plugins for .NET projects enhance its ability to analyze C# code quality.

  • Navigate to the Marketplace within the SonarQube web interface.
  • Install the ecoCode – C# language plugin and any additional tools needed for your pipeline.

4. Integrating Sonar Scanner

Sonar Scanner executes code analysis and sends results to the SonarQube server.

  1. Download and extract Sonar Scanner.
  2. Add its bin directory to your system’s PATH.
  3. Configure the scanner by editing sonar-scanner.properties:
      sonar.host.url=http://localhost:9000
      sonar.projectKey=my_project
      sonar.projectName=My Project
      sonar.projectVersion=1.0
  4. Run the scanner from the root of your project: sonar-scanner
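
For .NET solutions specifically, the MSBuild-based scanner tends to be a better fit than the generic CLI. A rough sketch, assuming the dotnet-sonarscanner global tool and a token generated in the SonarQube UI (adjust the project key and URL to your setup):

dotnet tool install --global dotnet-sonarscanner
dotnet sonarscanner begin /k:"my_project" /d:sonar.host.url="http://localhost:9000" /d:sonar.login="your-token"
dotnet build
dotnet sonarscanner end /d:sonar.login="your-token"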

Monitoring Key Metrics

One of my goals with SonarQube is to track critical operational metrics like:

  • Code Quality: Bugs, vulnerabilities, code smells.
  • Performance: Memory and CPU usage, database load, cache requests.
  • Application Metrics: Web server requests, bandwidth usage, key transactions (e.g., logins, payments, background jobs).

To achieve this, I’ll leverage SonarQube’s dashboards and custom reports. These tools make it easy to visualize and monitor these KPIs in real-time.


The Impact: A Quality-First Development Workflow

With SonarQube integrated, my Code Quality Pipeline is equipped to ensure:

  • Continuous Code Quality: Early detection of bugs and vulnerabilities.
  • Performance Optimization: Proactive monitoring of resource utilization.
  • Improved Collaboration: Shared insights into code quality for the entire team.

Ready to Level Up Your Code Quality?

SonarQube makes it simple to raise the bar on your development processes. Whether you’re optimizing legacy code or building new features, this tool provides the insights you need to succeed.

Start your journey today: Download SonarQube.

Have questions or need guidance? Let me know in the comments—I’d love to hear how you’re leveraging SonarQube in your own pipelines!

Automated Testing in Visual Studio

NUnit in Visual Studio 2012/2013

I normally use MSTest as I like the integration with Visual Studio. At work we do a lot of NUnit, and it just feels dirty having to run my tests outside of Visual Studio (we don’t have a runner for VS). Well, we finally got a VS upgrade and I was able to install the NUnit Test Adapter for Visual Studio 2012 and 2013, http://nunit.org/index.php?p=vsTestAdapter&r=2.6.2.

I just installed it through the Extension Manager with NuGet, easy peasy. Now I can run NUnit tests without having to run the NUnit GUI or command line manually. Tests appear in the VS Test Explorer, so organizing, running, and debugging tests is incredibly simplified.

Dev Life

This was a blast from the past from a draft post from January 2013, when unit tests in Visual Studio were becoming easier to deal with. Below we review the current state of automated testing in Visual Studio.

Modern Automated Testing in Visual Studio: A Seamless Experience

As developers, we thrive on tools that simplify our workflows and integrate seamlessly into our development environments. Back in the day, running NUnit tests without a dedicated Visual Studio runner felt clunky and disjointed. Fast forward to today, Visual Studio has matured into an incredibly robust IDE for automated testing, offering rich support for various frameworks like MSTest, NUnit, and xUnit, all within the IDE itself. Let’s explore how modern Visual Studio takes automated testing to the next level.

Integrated Test Adapters

One of the major advancements in Visual Studio since the olden days is the seamless integration of test adapters via the NuGet package manager. For example, installing the NUnit Test Adapter or xUnit.net Test Adapter is as simple as adding the package to your test project. Once installed, tests are automatically discovered by the Visual Studio Test Explorer. No additional configuration is required, and the days of running separate GUIs or command-line tools are long behind us.

Test Explorer: The Nerve Center

The Visual Studio Test Explorer has become the central hub for managing and executing tests. It’s packed with features:

  • Test Discovery: Automatically detects and organizes tests from all supported frameworks in your solution.
  • Grouping and Filtering: Group tests by project, namespace, or custom traits for easy navigation. Filters allow you to focus on failing tests or specific categories.
  • Real-Time Feedback: See pass/fail results instantly, with options to debug failing tests directly from the Test Explorer.
  • Continuous Testing: Enable live unit testing (available in Visual Studio Enterprise) to automatically run tests impacted by your code changes in real-time.

Debugging Tests

Debugging unit tests is now as straightforward as debugging application code. Simply set breakpoints in your test or application code and run the test in debug mode from the Test Explorer. Visual Studio’s rich debugging tools, including watch variables, call stacks, and IntelliTrace (in Enterprise), make diagnosing issues a breeze.

Code Coverage and Test Impact Analysis

Understanding how well your tests cover your codebase is critical. Visual Studio provides built-in tools for:

  • Code Coverage Analysis: Highlighting which parts of your code are tested and which are not.
  • Test Impact Analysis: Identifying which tests are affected by your recent code changes, optimizing the tests you need to run.

Cross-Platform and CI/CD Integration

With .NET Core and .NET 6+, Visual Studio supports cross-platform testing, allowing you to run and debug tests on Windows, macOS, and Linux. Additionally, automated tests integrate seamlessly with CI/CD pipelines using Azure DevOps, GitHub Actions, or other CI platforms. Test results can be published as part of your build and deployment workflows, ensuring quality gates are met.

Choosing the Right Framework

While MSTest remains the native framework for Visual Studio, NUnit and xUnit are popular for their flexibility and extensive ecosystem. All three frameworks are first-class citizens in Visual Studio, making it easy to choose one based on your project’s needs or team preferences.

Getting Started

Here’s a quick guide to setting up automated testing in Visual Studio:

  1. Install the Framework: Use NuGet Package Manager to add MSTest, NUnit, or xUnit to your test project.
  2. Install the Adapter: Add the corresponding Test Adapter package for NUnit or xUnit.
  3. Write Your Tests: Create test classes and methods following the chosen framework’s conventions.
  4. Run Your Tests: Open Test Explorer and run or debug your tests directly from within Visual Studio.
  5. Analyze Results: Use Test Explorer’s grouping, filtering, and debugging tools to refine your tests.

Dev Life Today

Automated testing in Visual Studio has come a long way since the early days. Whether you’re building enterprise-grade software or experimenting with side projects, Visual Studio’s modern testing tools streamline the process, reduce context switching, and let you focus on delivering quality code.

Gone are the days of manually running tests in external tools. With Visual Studio, everything you need for automated testing is right at your fingertips, making it an indispensable tool for modern developers. We can also mimic a lot of this functionality in VS Code so .NET developers have options. What are your favorite testing features in Visual Studio or VS Code?