Category: Pipeline

Part 1 & 2: Get Up and Running with #React and #TypeScript

The “Get Up and Running with React and TypeScript” series is made up of posts from chapters in my book “Hello React and TypeScript“. If you have questions, comments or corrections, please reach out to me in the comments or on Twitter @charleslbryant.

You can view the posts in this series on the Hello React and TypeScript category page.

If you haven’t heard, I have been writing a little book called “Hello React and TypeScript”. It’s a simple book that chronicles the steps I took to learn how to develop a React application using TypeScript and ES6.

I’m going to post some of the chapters here on my blog. This will provide an easy reference for me because the chapters are really a reorganization of notes that I took as I learned to write React with TypeScript/ES6. It is my hope that these posts may be helpful for someone looking to learn.

I will post the chapters, but I won’t keep them in sync with the book. If you want the latest and greatest, please check out the free book.


To start this series off, we will look at the code samples. In this first post we will cover two chapters: Setting Up Samples and Component Basics. If you haven’t used React or TypeScript, I recommend you actually try out the code to get a better feel for both.



Setting Up Samples

https://charleslbryant.gitbooks.io/hello-react-and-typescript/content/SettingUpSamples.html

Requirements

All of the code samples have a couple of requirements:

  • npm package manager (Node 5.2.0 used in development)
  • IE 10+ or similar modern browser that supports the History API

This first sample is the source for the basic development environment. This includes the packages and automation that will be used in the samples.

Note – to get the latest version of the packages and automation you should clone the repository from the root, https://github.com/charleslbryant/hello-react-and-typescript.git. We will make updates as we move along with new samples, so version 0.0.1 is only a starting point.

Source Code

https://github.com/charleslbryant/hello-react-and-typescript/releases/tag/0.0.1

Installing

To install a sample you need to run npm to get the required dependencies. Open a console at the root path of the sample directory and run:

npm install

Then

tsd install

These commands will install the required npm packages and TypeScript Typings.

Run the Sample

To run the samples, open a console at the sample directory and run

gulp

This kicks off the default gulp task, which compiles the TypeScript, bundles our dependencies and code into one bundle.js file, moves all of the files needed to run the application to a dist folder, opens the index.html page in a browser, and reloads the site when changes are made to source files.
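For reference, here is a rough sketch of what a default task like that might look like with gulp 3, Browserify, and tsify (the plugin choices, task names, and paths are my assumptions; the browser-opening and live-reload steps are omitted, and the repo’s actual gulpfile is the source of truth):

// gulpfile sketch, assumptions only; see the sample repo for the real setup
import gulp = require('gulp');
import browserify = require('browserify');
import tsify = require('tsify');
import source = require('vinyl-source-stream');

// Compile the TypeScript/JSX and bundle dependencies and code into one bundle.js.
gulp.task('bundle', () => {
  return browserify({ entries: ['src/main.tsx'] })
    .plugin(tsify)
    .bundle()
    .pipe(source('bundle.js'))
    .pipe(gulp.dest('dist/scripts'));
});

// Copy the static files needed to run the application to the dist folder.
gulp.task('copy', () => {
  return gulp.src('src/index.html').pipe(gulp.dest('dist'));
});

// Default task: build everything, then rebuild when source files change.
gulp.task('default', ['bundle', 'copy'], () => {
  gulp.watch('src/**/*', ['bundle', 'copy']);
});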

Basic React Component

https://charleslbryant.gitbooks.io/hello-react-and-typescript/content/Samples/ComponentBasic.html

This is a very basic example of a single React component. We could make it even simpler, but this example is still basic enough to start with.

Source Code

https://github.com/charleslbryant/hello-react-and-typescript/releases/tag/0.0.2

src/index.html

<!DOCTYPE html>
<html lang="en">
<head>
<title>Hello World</title>
	<link rel="stylesheet" href="css/bundle.css"/>
</head>
<body>
<div id="app"></div>
<script src="scripts/bundle.js"></script>
</body>
</html>

If you don’t understand this, I recommend that you find an introduction to HTML course online.

The important part of this file is the <div id="app"></div> element. We will tell React to inject its HTML output into this div.

src/main.tsx

/// <reference path="../typings/tsd.d.ts" />
import * as React from 'react';
import * as DOM from 'react-dom';

const root = document.getElementById('app');

class Main extends React.Component<any, any> {
  constructor(props: any) {
    super(props);
  }

  public render() {
    return (
      <div>Hello World</div>
    );
  }
}

DOM.render(<Main />, root);

If you know JavaScript, this may not look like your mama’s JavaScript. At the time I wrote this, ES6 was only 6 months old, and this sample is written with ES6. If you haven’t been following the exciting changes in JavaScript, a lot of the syntax may be new to you.

This is a JavaScript file with a .tsx extension. The .tsx extension lets the TypeScript compiler know that this file needs to be compiled to JavaScript and contains React JSX. For the most part, this is just standard ES6 JavaScript. One of the good things about TypeScript is that we can write our code with ES6/ES7 and compile it to ES5 or older JavaScript for older browsers.

Let’s break this code down line by line.


/// <reference path="../typings/tsd.d.ts" />

The very top of this file is a JavaScript comment. This is actually how we define references to TypeScript typing files. It lets TypeScript know where to find the definitions for the types used in the code. I am using a global definition file that points to all of the actual definitions, which reduces the number of references I need to list in my file to one.


import * as React from 'react';
import * as DOM from 'react-dom';

The import statements are new in JavaScript ES6. They define the modules that need to be imported so they can be used in the code.


const root = document.getElementById('app');

const is one of the new ES6 variable declarations; it says that our variable is a constant, a read-only reference to a value. The root variable is set to the HTML element we want to output our React component to.


class Main extends React.Component<any, any> {

class is a new declaration for an un-hoisted, strict mode function defined with prototype-based inheritance. That’s a lot of fancy words, and if they mean nothing to you, it’s not important. Just knowing that class creates a JavaScript class is all you need to know, or you can dig in at https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Classes.

Main is the name of the class.

extends says that the Main class will be created as a child of the React.Component<any, any> class.

React.Component<any, any> is part of the TypeScript typing definition for React. It says that React.Component will allow the type any to be used for the component’s state and props objects (more on this later). any is a special TypeScript type that basically allows any type to be used. If we wanted to restrict the props and state objects to specific types, we would replace any with the types we are expecting. This looks like generics in C#.
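For example, a small sketch of what restricting props and state to specific types could look like (the Greeting component and its interfaces are made up for illustration; they are not part of the sample):

import * as React from 'react';

// Illustrative prop and state shapes.
interface GreetingProps { name: string; }
interface GreetingState { greeted: boolean; }

// Replacing <any, any> with <GreetingProps, GreetingState> lets the compiler
// check this.props and this.state against the declared shapes.
class Greeting extends React.Component<GreetingProps, GreetingState> {
  constructor(props: GreetingProps) {
    super(props);
    this.state = { greeted: true };
  }

  public render() {
    return <div>Hello {this.props.name}</div>;
  }
}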


constructor(props: any) {
    super(props);
}

The constructor method is used to initialize the class. In our sample, we only call super(props), which calls the constructor method on our parent class React.Component, passing along any props that were passed into our constructor. We are using TypeScript to define the props as the any type.

In addition to calling super, we could also initialize the state of the component, bind events, and do other tasks to get our component ready for use.
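A rough sketch of those extra constructor tasks, state initialization and event binding, might look like this (the Counter component and handler names are made up for illustration):

import * as React from 'react';

class Counter extends React.Component<any, { count: number }> {
  constructor(props: any) {
    super(props);
    this.state = { count: 0 };                      // initialize component state
    this.handleClick = this.handleClick.bind(this); // bind the event handler to this instance
  }

  private handleClick() {
    this.setState({ count: this.state.count + 1 });
  }

  public render() {
    return <div onClick={this.handleClick}>Clicked {this.state.count} times</div>;
  }
}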


public render() {
    return (
<div>Hello World</div>
);
}

Maybe the most important method in a React component is the render method. This is where the DOM for our component is defined. In our sample we are using JSX. We will talk about JSX later; for now, just understand that it looks like HTML and compiles to JavaScript.

JSX is not a requirement; you could use plain JavaScript. We can rewrite this section of code as:

public render() {
    return (
        React.createElement("div", null, "Hello World")
    );
}

The render method returns the DOM for the component. After the return statement you put your JSX. In our sample, this is just a simple div that says Hello World. It is good practice to wrap your return expression in parentheses.


DOM.render(<Main />, root);

Lastly, we call DOM.render and pass Main, the React component we just created, and root, the variable containing the element we want to output the component to.


That’s it, pretty simple and something we can easily build upon.

Event Sourcing: Stream Processing with Real-time Snapshots

I began writing this a long time ago after I viewed a talk on Event Sourcing by Greg Young. It was just a simple thought on maintaining the current snapshot of a projection of an event stream as events are generated. I eventually heard a talk called “Turning the database inside-out with Apache Samza” by Martin Kleppmann, http://www.confluent.io/blog/turning-the-database-inside-out-with-apache-samza/. The talk was awesome, as I mentioned in a previous post. It provided structure, understanding and coherence to the thoughts I had.

It still took a while to finish this post after seeing the talk (because I have too much going on), but I would probably still be stuck on this post for a long time if I hadn’t heard the talk and looked further into stream processing.

Event Sourcing

Event sourcing is the storing of a stream of facts that have occurred in a system. Facts are immutable. Once a fact is stored it can’t be changed or removed. Facts are captured as events.

An event is a representation of an action that occurred in the past. An event is usually an abstraction of some business intent and can have other properties or related data.

To get the state at a point in time, we have to process all of the events stored up to that point to build up the state.

State Projections

In our system we want to store events that happen in the system. We will use these events to figure out the current state of the system. When we need to know the current state, we calculate a left fold over the facts we have stored, from the beginning of time to the last fact stored. We iterate over each fact, starting with the first one, calculating the change in state at each iteration. This produces a projection of the current, transient state of the system. Projections of current state are transient because they don’t last long: as new facts are stored, new projections have to be produced to get the new current state.

Projection Snapshots

Snapshots are sometimes shown in examples of event sourcing. A snapshot is a type of memoization used to help optimize rebuilding state. If we have to rebuild state from a large stream of facts, it can be cumbersome and slow, which is a problem when you want your system to be fast and responsive. So we take snapshots of a projection at various points in the stream so that we can begin rebuilding state from a snapshot instead of having to replay the entire stream. In other words, a snapshot is a cache of a projection of state at some point in time.
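A minimal sketch of that optimization, assuming a snapshot that remembers its position in the stream (the type and function names are illustrative):

// Rebuild state from a cached snapshot instead of replaying the whole stream.
interface Snapshot<TState> {
  state: TState;     // the cached projection
  position: number;  // how far into the event stream the snapshot covers
}

function rebuildFromSnapshot<TState, TEvent>(
  snapshot: Snapshot<TState>,
  eventsSinceSnapshot: TEvent[], // only the events stored after snapshot.position
  apply: (state: TState, event: TEvent) => TState
): TState {
  return eventsSinceSnapshot.reduce(apply, snapshot.state);
}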

Snapshots in traditional event sourcing examples have a problem because the snapshot is a version of state for some version of a projection. A projection is a representation of state based on current understanding. When the understanding changes there are problems.

Snapshot Issues

Let’s say we have an application and we understand the state as a Contact object containing a name and address property. Let’s also say we have a couple facts that alter state. One fact is a new contact was created and is captured by a “Create Contact” event containing “Name” and “Address” data that is the name and address of a contact. Another fact is a contact changed their address and is captured by a “Change Contact Address” event containing “Address” data with the new address for a contact.

When a new contact is added, a “Create Contact” event is stored. When a contact’s address is changed, a “Change Contact Address” event is stored. To project the current state of a contact that has a “Create Contact” and a “Change Contact Address” event stored, we first create a new Contact object, then get the first event, “Create Contact”, from the event store and update the Contact object from the event data. Then we get the “Change Contact Address” event and update the Contact object with the new address from the event.
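Here is the same walkthrough as a small TypeScript sketch; the event and type names are illustrative and not tied to any particular framework:

interface ContactCreated { type: 'CreateContact'; name: string; address: string; }
interface ContactAddressChanged { type: 'ChangeContactAddress'; address: string; }
type ContactEvent = ContactCreated | ContactAddressChanged;

interface Contact { name: string; address: string; }

// Fold one event into the current projection to get the next state.
function apply(state: Contact, event: ContactEvent): Contact {
  switch (event.type) {
    case 'CreateContact':
      return { name: event.name, address: event.address };
    case 'ChangeContactAddress':
      return { name: state.name, address: event.address };
    default:
      return state;
  }
}

// The projection is a left fold over every stored event, oldest first.
function project(events: ContactEvent[]): Contact {
  return events.reduce(apply, { name: '', address: '' });
}

const current = project([
  { type: 'CreateContact', name: 'Ada', address: '1 Old Street' },
  { type: 'ChangeContactAddress', address: '2 New Street' },
]);
// current is { name: 'Ada', address: '2 New Street' }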

That was a lot of words, but it is a very simple concept. We created a projection of state in the form of a Contact object and changed the state of the projection from stored events. What happens when we change the structure of the projection? Instead of a Contact object with Name and Address, we now have a Contact object with Name, Address1, Address2, City, State, and Zip. We now have a new version of the projection, and previous snapshots made with other versions of the projection are invalid. To get a valid projection of the current state with the new projection, we have to recalculate from the beginning of time.

Sometimes we don’t want the current state. What if we want to see state at some other point in time instead of the head of our event stream? We could optimize rebuilding a new projection by using some clever mapping to transform an old snapshot version to the new version of the projection, but if there are many versions, we would have to make sure all supported versions are accounted for.

CQRS

We could use a CQRS architecture with event sourcing. Commands write events to the event store and queries read state from a projection snapshot. Queries would target a specific version of a projection. The application would be as consistent as the time it takes to take a new snapshot from the previous snapshot which was only one event earlier (fast).

Real-time Snapshots

A snapshot is like a cache of state, and you know how difficult cache invalidation is. If we instead create a real-time snapshot as facts are produced, we always have the current snapshot for a version of the projection. To maintain backwards compatibility we can keep real-time snapshots for the various versions of projections that we want to support.

When we have a new version of a projection, we start rebuilding its state from the beginning of time. When the rebuilding has caught up with the current state, we start taking real-time snapshots. So there will be a period of time where a new version of a projection isn’t ready for consumption while it is being built.

With real-time snapshots we don’t have to worry about running funky code to invalidate or rebuild state; we just read the snapshot for the version of the projection that we want. When we no longer want to support a version of a projection, we take the endpoint that points to it offline. When we have a new version that is ready for consumption, we bring a new endpoint online. When we want to upgrade or downgrade, we just point to the endpoint we want.
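A loose sketch of the idea, assuming the event store can notify a subscriber as each fact is appended (all names here are illustrative):

interface Snapshot<TState> {
  projectionVersion: string; // e.g. 'contact-v2'
  lastEventId: number;       // position in the event stream the snapshot reflects
  state: TState;
}

// Keeps one projection version continuously up to date as events arrive,
// so queries read the snapshot directly and never replay the stream.
class RealTimeSnapshot<TEvent, TState> {
  constructor(
    private snapshot: Snapshot<TState>,
    private apply: (state: TState, event: TEvent) => TState
  ) {}

  // Called by the event store subscription for every new fact.
  onEvent(eventId: number, event: TEvent): void {
    this.snapshot = {
      projectionVersion: this.snapshot.projectionVersion,
      lastEventId: eventId,
      state: this.apply(this.snapshot.state, event),
    };
  }

  // The query side reads the current snapshot for this projection version.
  read(): Snapshot<TState> {
    return this.snapshot;
  }
}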

Storage may be a concern if we are storing every snapshot of state. We could have a strategy to purge older snapshots. Deleting a snapshot is not a bad thing. We can always rebuild a projection from the event store. As long as we keep the events stored we can always create new projections or rebuild projections.

Conclusion

Well, this was just me trying to clean out a backlog of old posts and finishing some thoughts I had on real-time state snapshots from an event stream. If you want to read or see a much better examination of this subject, visit “Turning the database inside-out with Apache Samza” by Martin Kleppmann, http://www.confluent.io/blog/turning-the-database-inside-out-with-apache-samza/. You can also check out implementations of the concepts with Apache Samza or something like it with Azure Stream Analytics.

Recent Talks with Big Effect on My Development

I have seen many, many development talks, actually an embarrassing amount. There have been many talks that have altered how I develop or think about development. Since 2013, some talks have caused a major shift in how I think about full stack development; my mind has been in a major evolutionary cycle.

CQRS and Event Sourcing

My thought process about full stack development started down a new path when I attended Code on the Beach 2013. I watched a talk by Greg Young that included Event Sourcing, mind ignited.

Greg Young – Polyglot Data – Code on the Beach 2013

He actually followed this up at Code on the Beach 2014 with a talk on CQRS and Event Sourcing.


Reactive Functional Programming

Then I was introduced to React.js and experienced a fundamental paradigm shift in how I believed application UIs should be built, and I began exploring reactive functional programming. I watched a talk by Pete Hunt where he introduced the design decisions behind React, mind blown.

Pete Hunt – React, Rethinking Best Practices – JSConf Asia 2013


Stream Processing

Then my thoughts on how to manage data flow through my applications were significantly altered. I watched a talk by Martin Kleppmann that promoted subscribe/notify models and stream processing, and I learned more about databases than from any talk before it, mind reconfigured.

Martin Kleppmann – Turning the database inside-out with Apache Samza – Strange Loop 2014


Immutable Data Structures

Then my thoughts on immutable state were refined and I went in search of knowledge on immutable data structures and their uses. I watched a talk by Lee Byron on immutable state, mind rebuilt.

Lee Byron – Immutable Data in React – React.js Conf 2015


Conclusion

I have been doing paid application development since 2000. A lot was burned into my mind in terms of development, but these talks were able to cut through the internal program of my mind to teach this old dog some new tricks. The funny thing is that all of these talks are based on old concepts from computer science and our profession that I never had the opportunity to learn.

The point is, keep learning and don’t accept best practices as the only or best truth.

My New Book: “Hello React and TypeScript”

So, I have been doing a deep dive into React and TypeScript and found it a little difficult to find resources to help me. I could find plenty of great resources on React and TypeScript separately; just exploring the documentation and the many blogs and videos for the two is great. Yet, the resources that explore writing React applications with TypeScript were either old, incomplete, or just didn’t work for me for one reason or another. I have to admit that I haven’t done a deep search in a while, so that may have changed.

After some frustration and keyboard pounding, I decided to just consume as much as I could about both individually and get a Hello World example working. With the basics done, I went slowly all the way to a full-blown application. Since I was keeping notes on that journey, why not share them?

I was going to share them in a few blog posts, but I have been trying to do a book on GitBook for a while and this topic seemed like a good fit. Also, I enjoy digging into React and TypeScript, so this book is probably something I can commit to, unlike my other sorry book attempts.

My little guide book isn’t complete and is still very rough. It may not be completed any time soon with the rapid pace of change in JavaScript, TypeScript, and React. Also, I am thinking about moving from Gulp/Browserify to webpack. I have only completed a fraction of the samples I want to do… so much still to do. Even though I don’t think it is ready for prime time, it is a public open source book, so why not share it, get feedback, and iterate.

You can get the free book from GitBook

https://charleslbryant.gitbooks.io/hello-react-and-typescript/content/index.html

The source code for the samples in the book is on GitHub

https://github.com/charleslbryant/hello-react-and-typescript

If you don’t like the book, let me know. If you find something wrong, bad practice or advice, incoherent explanations… whatever, let me know. If you like it, let me know too; I can always use an ego boost :). Now I just have to convince myself to do some videos and talks… public speaking, bah :{

Enjoy!

Online Computer Science Education

I have always had an inferiority complex as a software developer. I am self-taught and I felt that I was missing the good base that people with a computer science degree have. I no longer feel as inferior as I used to because I have been doing this for 15 years and I am always able to hold my own. Yet, there are things that I don’t immediately understand that I think would be clearer if I had the grounding of a CS degree. But I am not going to spend a bunch of money to get one, and I don’t have the time to commit to school full time.

I am a Pluralsight junkie, and YouTube computer science’y videos are my TV. There are many top notch CS schools that have released lecture videos to the public for free. Why can’t I organize these videos into a curriculum and get a virtual CS education? I am constantly watching videos anyway. Well, I ran across two posts that suggest that very thing.

Free Online CS Education

The MIT Challenge chronicles Scott Young’s quest to get the equivalent of a 4 year CS education from MIT in 12 months. I am not that ambitious, but I appreciate his journey and he is very thoughtful in providing some help in actually getting through the challenge.

$200K for a computer science degree? Or these free online classes? is a post by Andrew Oliver where he puts forth a sample CS curriculum made up of Coursera videos. He thinks these videos would be as good as a CS degree. He says he hasn’t watched all of the videos, but based on the overview of the videos he says he would hire someone who went through them and did all the associated course work.

aGupieWare: Online Learning: A Bachelor’s Level Computer Science Program Curriculum is a good post that breaks down what’s involved in a CS degree program. They put forth a good list of online classes to choose from. They received a good amount of responses and based on feedback they posted an improved expanded list Online Learning: An Intensive Bachelor’s Level Computer Science Program Curriculum, Part II.

Conclusion

I can’t say that I would do all of these, it’s a lot, but I am going to take a more focused approach to getting a more structured CS education. I need to shore up my basic understanding of CS. If I am able to stick to it, I will post the courses I do, but don’t count on it.

If you don’t have a CS degree, you can still get a very good CS education. Even if you are a seasoned developer, if you haven’t received a formal education in CS, why not expand your understanding of our profession and take advantage of some of the excellent, free, online CS learning.

Testing Liskov Substitution Principle

In my previous post I talked about Liskov Substitution Principle in relation to TypeScript. I thought I would continue on my thoughts on LSP by defining it in terms of testing since testing has been a large part of my world for the past two years.

Here is another definition of LSP

Let q(x) be a property provable about objects x of type T. Then q(y) should be provable for objects y of type S where S is a subtype of T.

That’s how two titans of computer science, Barbara Liskov and Jeannette Wing, defined it in 1993. Unfortunately, when I first learned of LSP through this definition, the meaning of LSP eluded me because I was light on this strange genius speak. It wasn’t until I eventually stumbled onto seeing “provable” as some observable behavior or outcome that the definition of LSP clicked for me.

If I have an interface and I create an implementation of the interface, I can prove that the implementation is correct if it does what I expect it to. That expectation is represented by q in the definition above. Then if I create another implementation of the same interface, the same expectation, q, should hold true with this new type. Hey, q is a test… duh.

Testing LSP

I have an interface that can be implemented by types used to access source code repositories. One property of this interface that I expect is that I can get a list of all of the tags in a repository returned as a string array.

interface IRepository {
    string[] GetBranches();
    string[] GetTags();
    ...
}

So, I create an implementation that can connect to a Git repository and it returns a string array of tags in the repository. I hook up the implementation to a UI and my client is happy because they can see all of their tags in my awesome UI on their mobile phone.

Now, they want an implementation for their SVN repository. No problem. I do another implementation and I return a string array of tags from their SVN repository. All good, expectation met, no LSP violations. I know this not because I am a genius and can do a mathematical proof, but because I wrote a functional test to prove the behavior by asserting what I expected to see in the string array (a mathematical proof at a higher abstraction for non-geniuses). When I run the test the tags returned match my expectation. With my test passing, I follow another SOLID principle, Dependency Inversion Principle (DIP), and I easily hook this up to the UI with a loose coupling. Anyway, now my client can open the UI and see a list of tags for their Git and SVN repositories. As far as they are concerned the expectation (q) is correct. My implementations satisfy the proof and my client doesn’t call an LSP violation on me.
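Expressed as code, the expectation q from the Git and SVN story might look something like this rough sketch (the interface above is C#-flavored; this one is TypeScript, and the names and tags are illustrative):

interface Repository {
  getTags(): string[];
}

// q: the expectation every implementation must satisfy. GetTags returns the
// repository's tags, nothing more and nothing less.
function q(repo: Repository, expectedTags: string[]): boolean {
  const tags = repo.getTags().slice().sort();
  const expected = expectedTags.slice().sort();
  return tags.length === expected.length && tags.every((tag, i) => tag === expected[i]);
}

// Two implementations of the same interface checked against the same expectation.
const gitRepo: Repository = { getTags: () => ['v1.0', 'v1.1'] };
const svnRepo: Repository = { getTags: () => ['v1.0', 'v1.1'] };

console.log(q(gitRepo, ['v1.0', 'v1.1'])); // true: expectation met
console.log(q(svnRepo, ['v1.0', 'v1.1'])); // true: substitutable, no LSP violation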

My client says they now want to see a list of tags in their Perforce repository, and I assign this to another dev team because this is boring to me now :). The team misunderstood the spec because I didn’t adequately define what a tag is for q. So, instead of returning tags in an array of strings they return a list of labels. While it is true that every tag in Perforce is a label, every label isn’t a tag. What’s even worse is the team has a passing functional test that says they satisfied q. On top of this, we didn’t properly QA the implementation to determine if their tests or definition of q were correct, and we delivered the change to production. The client opens the UI and expects to see a list of tags from their Perforce repository and they see all the labels instead. They immediately call the LSP cops on us. This new implementation of the interface does not meet the expectation and is a violation of LSP.

Context is Key

Yes, this is a naive example of LSP, but it is how I understand it and how I apply it. If I have expectations when using an interface, abstract type, or implementation of some supertype, then every implementation or subtype should meet the expectation and be provable by the same expectation. The proof can be expressed as a mathematical equation, unit test, UI test, or visual observation as long as the expectation is properly expressed.

Conclusion

The point is, in order to not violate LSP we have to first have a shared understanding of the expectations expressed in our test (q). In our example, the development team had one expectation that wasn’t shared by the client and LSP was violated. To not violate LSP we have to understand how objects are expected to work, then we can define checks and tests to validate that LSP wasn’t violated.

This goes beyond just checking subtypes in traditional object-oriented inheritance. Every object that we create is an abstraction of something. If we create a People object, we expect it to have certain properties and behaviors. Even if we have a People type that won’t be subtyped, it can be said that it is a subtype of an actual person. The expectations that we define for the People object should hold true. We expect a real person to have a name, address, and age. We could have a paper form (a People form) that we use to capture this information, and the expectations are valid for the form. One day we decide to automate the form, so we create an abstract People type, and when we create an object of this type we expect it to have a name, address, and age. We can test this object and determine if our People object violates LSP because it is a subtype of the manual form and we use the same expectations for the form and for our new object.

Now this is a little abstract mumbo jumbo, but it is a tenet that I believe is very important in software development. Don’t violate LSP!

Liskov Substitution Principle (LSP) in TypeScript

Liskov Substitution Principle (LSP)

I was recently asked to name some of “Uncle Bob’s” SOLID OOP design principles. One of my answers was the Liskov Substitution Principle. I was told it is the least given answer; I wonder why?

Liskov Substitution Principle (wikipedia)
“objects in a program should be replaceable with instances of their subtypes without altering the correctness of that program.”

 

TypeScript

If you are a serious developer who stays up to date with the profession, you know what TypeScript is by now. TypeScript is a superset of JavaScript that provides type checking based on the structure or shape of values at compile time, and at design time if your IDE supports it. This structural type checking is sometimes referred to as duck typing: if it looks like the type and quacks like the type, it’s the type.

As I dive deeper into TypeScript I am looking for ways to keep my code SOLID. This is where the question that prompted this post surfaced. How do I maintain LSP in TypeScript?

LSP and TypeScript

When I subtype an object in JavaScript, it’s not the same inheritance that we have in Java or C#. In JavaScript I can have function B stand in for function A as long as function B has the same structure, which is different from static type checking. TypeScript provides syntactic sugar on top of JavaScript to enforce type constraints, but this doesn’t change the dynamic typing of JavaScript. In fact, I can write plain JavaScript in TypeScript, so how does LSP apply when subtypes are a different animal in JavaScript as opposed to a statically typed OOP-based language?

Then it hit me. The most important part of LSP (to me) is maintaining intent or semantics. It doesn’t matter if we are talking about strong typing or replacing ducks with swans, how we subtype isn’t as important as the behavior of the subtype with respect to the parent. Does the subtype pass our expectation q?

The principle asks us to reason about what makes the program correct. If you can replace a duck with a swan and the program holds its intent, LSP is not violated. Yet if a method only expects Mallards and maintains the proper behavior only with Mallards, then using a Pekin duck in that method breaks LSP. Pekin violates LSP for the expectation we have for Mallard: even if Pekin is a structurally correct subtype of Mallard, using Pekin changes the behavior of the program. Now if we have a Gadwall duck that is also a structurally correct subtype of Mallard, and using it I observe the same behavior as using a Mallard, LSP isn’t violated and I am safe to use a Gadwall duck in place of a Mallard.
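A rough TypeScript sketch of the duck example (the names and the quack behavior are made up to illustrate the point). Structurally, all three ducks satisfy the Mallard shape, so the compiler accepts them; LSP is about whether they preserve the behavior the program expects:

interface Mallard {
  quack(): string;
}

// The expectation (q) the program relies on: a Mallard-style quack.
function programExpectsMallard(duck: Mallard): boolean {
  return duck.quack() === 'quack';
}

const mallard: Mallard = { quack: () => 'quack' };
const gadwall: Mallard = { quack: () => 'quack' };   // same observable behavior: safe substitute
const pekin: Mallard   = { quack: () => 'QUAAACK' }; // structurally fine, behavior differs: LSP violated

console.log(programExpectsMallard(mallard)); // true
console.log(programExpectsMallard(gadwall)); // true
console.log(programExpectsMallard(pekin));   // false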

Conclusion

I believe LSP has merit in TypeScript and in programming languages in general. I believe that violating LSP goes beyond throwing a “not implemented” exception or hiding members of the parent class. For example, if you have an interface that defines a method that should only return even numbers, and then you have an implementation of the interface method that allows returning odd numbers, LSP is violated. Pay attention to subtype behavior with respect to expectations of behavior.

Respecting LSP in a program helps increase flexibility, reduces coupling, and makes reuse a good thing, but it also helps reduce bugs and increases the value of a program when you pay attention to behavioral intent.

What do you think, should we care about LSP in TypeScript?

Testing GoCD on Azure Linux VM

I need a GoCD server to test some configuration and API work I need to accomplish. So I thought, “Maybe I can get a GoCD server running on Azure.” There are no examples of doing this with GoCD that I could find on the interwebs, but I didn’t do a deep search. Anyway, it can’t be that hard… right?

Create VM

On the Azure portal I set up an Ubuntu VM with the lowest size, A0. I am using the Resource Manager version of the docs to walk me through this, https://azure.microsoft.com/en-us/documentation/articles/virtual-machines-linux-tutorial-portal-rm/.

I configured username/password instead of the more secure SSH key authentication, because this is a throwaway.

Connect to VM

I install PuTTY on my laptop. Actually, there is no install; I just download the exe and run it (http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html). PuTTY will allow me to SSH into the VM to manage it.
Then I just connect to the IP of my new VM, enter my username and password, and I have access to a command window to manage my new VM.

Package Management

I am a Windows groupie, so my first thought was: I wonder if I can use Chocolatey to install GoCD. Chocolatey NuGet is a machine package manager, somewhat like apt-get (clue), but built with Windows in mind.
Nope, no Chocolatey on Ubuntu, but the apt-get clue above was right on target (https://help.ubuntu.com/community/AptGet/Howto). I can use apt-get to easily install applications.

Sudo

Before I begin, I want to learn about sudo. I see sudo in a lot of Linux commands, so it must be important. Now is as good a time as any to get some context because I have virtually no Linux experience (no VM pun intended).
sudo (/ˈsuːduː/ or /ˈsuːdoʊ/) is a program for Unix-like computer operating systems that allows users to run programs with the security privileges of another user, by default the superuser.
sudo -s
This allows the current session to run as root (the superuser). I’m not sure of the significance of this or if it is 100% correct or best practice; all of the resources I read seem to indicate it as a way to elevate the user in the session to fix certain permission issues while running commands.
So sudo is kind of like a better, more friendly runas command.

Install GoCD Server

I use apt-get to install the server.
echo "deb http://dl.bintray.com/gocd/gocd-deb/ /" > /etc/apt/sources.list.d/gocd.list
wget --quiet -O - "https://bintray.com/user/downloadSubjectPublicKey?username=gocd" | sudo apt-key add -
apt-get update
apt-get install go-server

The first command adds the GoCD Debian package repo to the apt package source list. Then a key is added for the new repo (not sure how this is used yet, but it is presumably for verifying the packages). Next, apt-get update is run to refresh the package lists from the sources. Lastly, apt-get install is run to install the bits. This workflow is very familiar because it’s like Chocolatey. apt-get handles all of the dependencies, so this is as easy as can be.
I open PuTTY, connect to the server, and run sudo -s to run as root (I probably should try to install without this step because I don’t know if it is necessary). Then I run the commands to install the GoCD server, and it fails on the first command with “command not found.” I updated the command to use sudo sh -c, which runs the command as the superuser, and it ran.
sudo sh -c 'echo "deb http://dl.bintray.com/gocd/gocd-deb/ /" > /etc/apt/sources.list.d/gocd.list'
I run the rest of the commands and the magic started happening… finally.
Then the magic fades away on the last command to install go-server: “Unable to locate package go-server.” After poking around, I figured out how to open a file so I could inspect the .list file to see if it has the URL from the first command.
sudo vim /etc/apt/sources.list.d/gocd.list
This opens Vim; good thing I have been learning Vim. Anyway, the file has nothing in it. OK, the first command didn’t work for some reason. We’ll fix that later. Right now I decide to manually add the URL to the deb repo, http://dl.bintray.com/gocd/gocd-deb/ /.
I naively try to ctrl-c, copy, the URL into Vim. Of course I made another mistake, because Vim is a different animal (I have to learn how to say sit in German or something). I couldn’t figure out how to paste from the clipboard into Vim (another day), so I just typed the repo URL. As with many manual tasks, I messed up and entered 1 instead of l, and when I ran the command again I got 404 errors (d1.bintray.com instead of dl.bintray.com). After fixing this I was able to apt-get update and apt-get install… Yah!
Now, the GoCD dashboard is accessible at http://MyUbuntuVM.cloudapp.net:8153/go/pipelines.

Install GoCD Agent

Feeling accomplished, I install a GoCD agent:
apt-get install go-agent
Then, I edit the agent config to update the IP to the IP of my VM.
sudo vim /etc/default/go-agent
Then, I start the agent
/etc/init.d/go-agent start
Lastly, I check the dashboard to make sure the agent exists, http://MyUbuntuVM.cloudapp.net:8153/go/agents, and it does.
Mission Accomplished!

Conclusion

This was a lot easier than I expected, even with my goofs. Creating the VM was so simple I thought I had missed something (nice job, Azure team).
Now, how do I automate it? Can I use a Docker container to make this process even easier and faster? How do I move storage to a more durable, persistent location? How do I install Git (apt-get maybe)?

GTP for BDD

Graphical Test Plan

I read a little about graphical test planning, created by Hardeep Sharma and championed by David Bradley, both from Citrix. It’s a novel idea and somewhat similar to the mind map test planning I have played around with. The difference is you’re not capturing features or various heuristics and test strategies in a mind map; you are mapping expected behavior only. Then you derive a test plan from the graphical understanding of the expected behavior of the system. I don’t know a lot about GTP, so this is a very watered down explanation. I won’t attempt to explain it, but you can read all about it:

Plan Business Driven Development with GTP

What interested me was the fact that I could abstract how we currently spec features into a GTP-type model. I know the point of GTP is not to model features, but our specs model behavior and they happen to be captured in feature files. It’s classic Behavior Driven Development (BDD) with Gherkin. We have a feature that defines some aspect of value that the system is expected to provide to users. In the feature we have various scenarios that describe the expected behaviors of the feature. Scenarios have steps that define pre-conditions, actions, and expectations (PAE), or in Gherkin, Given-When-Then (GWT), which define how a user would execute the scenario. We also have feature backgrounds, which are feature-wide pre-conditions shared by all scenarios in the feature.
I said we use Gherkin, but our new test runner transcends just GWT. We can define PAE in plain English without the GWT constraints; we can select the terms to describe PAE instead of being forced to use GWT, which sometimes causes us to jump through hoops to make the GWT wording sound correct.

GTP Diagram

If we applied something like GTP we would model the scenarios, but there would be more hierarchy before we define the executable scenarios. We currently use tagging to group similar scenarios that exercise a specific subset of a feature’s scenarios. This allows us to provide faster feedback by running checks for just a subset instead of the entire feature when we are only concerned with changes to that subset.
In a GTP’ish model, the left-most portion of the diagram would hold generalized behavior specs, similar to how we use tagging, and as we move to the right the behavior becomes more granular until we hit a demarcation point for executable scenarios that can then be expressed in a linked test case diagram (TCD). In the GTP there are ways to capture metadata, like a related requirement/ticket ID, for traceability back to requirements. There is also metadata for the demarcation point (can’t think of a better name) to link to the TCD or feature file that further defines it.

Test Case Diagram

The test case diagram would define the various scenarios that describe the behavior of the demarcation points in the GTP. The TCD would also include background preconditions and the steps to execute the scenario. At this point it feels like an extra step: we have to write the TCD in a feature file, so diagramming it creates a redundant document that has to be maintained.
In the TCD there are shapes for behavior, preconditions, steps, and expectations. I think there should be additional shapes or metadata to express tags because this is important in how we categorize and control the running of scenarios. It may also help to have metadata that links back to the GTP the TCD is derived from so we can flow back and forth between the diagrams. Metadata in the TCD is important because it gives us the ability to extract understanding beyond just the test plan and design. We could have shapes, metadata descriptions, and links to:
  • execute automated checks
  • open a manual exploratory test tool
  • view current test state (pass/fail)
  • view historical data (how many times has this step failed, when was the last failure of this scenario…)
  • view flake analysis or score
  • view delivery pipeline related to an execution
  • view team members responsible for plan, develop, test and release
  • view related requirement or ticket
  • much more…

Since we also define manual tests by tagging features or scenarios with a manual tag, or by creating exploratory-test-based feature files, we could do this for both automated checks and manual tests.

GTP-BDD Binding

To get rid of the TCD redundancy we could generate the feature file from the diagram or vice versa. Being able to bind GTP to BDD would make GTP more valuable to me.
We would need an abstract object graph that could be used to generate both the diagram and the feature file (or an Excel spreadsheet, HTML page, or whatever else). We are almost there: we have a tool that can generate feature files from persisted objects and vice versa. We would just have to figure out how to generate the diagram and express it as an interactive UI and not just a static picture.
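A very rough TypeScript sketch of the kind of abstract object graph I have in mind (the type names and the rendering function are assumptions, not an existing tool):

interface Step { keyword: 'Given' | 'When' | 'Then' | 'And'; text: string; }

interface Scenario {
  name: string;
  tags: string[];      // used to group and control runs; could also drive diagram metadata
  steps: Step[];
}

interface Feature {
  name: string;
  background?: Step[]; // feature-wide pre-conditions shared by all scenarios
  scenarios: Scenario[];
  ticketId?: string;   // traceability back to the requirement or ticket
}

// The same graph could be rendered as a feature file, a GTP/TCD diagram, or anything else.
function toFeatureFile(feature: Feature): string {
  const lines: string[] = ['Feature: ' + feature.name];
  for (const scenario of feature.scenarios) {
    lines.push('');
    if (scenario.tags.length > 0) {
      lines.push('  ' + scenario.tags.map(tag => '@' + tag).join(' '));
    }
    lines.push('  Scenario: ' + scenario.name);
    for (const step of scenario.steps) {
      lines.push('    ' + step.keyword + ' ' + step.text);
    }
  }
  return lines.join('\n');
}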
What we have been struggling with is the ability to manually edit feature files and keep them in sync with the persisted objects. With a centralized UI this is easy because everyone uses the UI to update the objects. When people are updating feature files from a source code repository, we have to worry about merge conflicts (yuck) and about whether we consider the feature file or the persisted object the source of truth. So, we may have to reduce flexibility and force everyone to use the UI only. Everyone would have to have discipline and not touch the feature files even though we have nice tools built into our IDE to help write and manage them. The tool would have to detect when someone has violated the policy, and so on… I digress.

Conclusion

With a graphical UI modeled on GTP/TCD to manage BDD, we can provide an arguably simpler way to visualize tests and the ability to drill down into different aspects of test plans and designs and their related current and historical execution. With two-way binding from diagram to feature file, we have a new way to manage our executable specifications. This model could provide a powerful tool to aid not only test planning, but test management as a whole. The end result would hopefully be a better understanding for the team, increased flow in the delivery pipeline, enhanced feedback, and more value to the customer and the business.
Now let’s ask Google if something like this already exists so I don’t have to add it to my ever increasing backlog of things I want to build. Thanks to Hardeep Sharma, David Bradley, and Citrix for sharing GTP.

Extending the Reach of QA to Production

I have multiple lingering tasks for improving monitoring for our applications. I believe this is a very important step we need to take to assess the quality of our applications and measure the value that we are delivering to customers. If I had my way, I would hire another me just so I could concentrate on this.

Usability

We need to monitor usage to better understand how our customers actually use the application in production. This will allow us to make better product design decisions and optimizations, prioritize testing effort in terms of regression coverage, and provide a signal for potential issues when trends are off.

Exceptions

We need a better way to monitor and analyze errors. We currently get an email when certain exceptions occur. We also log exceptions to a database. What we don’t have is a way to analyze exceptions: how often do they occur, what is the most thrown type of exception, and what was the system’s health when the exception was thrown?

Health

We need a way to monitor and be alerted to health issues (e.g., current utilization of memory, CPU, and disk space; open sessions; processing throughput…). Ops has a good handle on monitoring, but we need to be able to surface more health data and make it available outside of the private Ops monitoring systems. It’s the old “it takes a village to raise an app” thing being touted by the DevOps movement.

Visibility

Everyone on the delivery team needs some access to a dashboard where they can see the usability, exceptions, and health of the app and create and subscribe to alerts for the various condition thresholds that interest them. This should even be shared with certain people outside of delivery, just to keep things transparent.

Conclusion

This can all be started in preproduction and, once we are comfortable with it, pushed to production. The point of having it is that QA is a responsibility of the entire team. Having these types of insight into production is necessary to ensure that our customers are getting the quality they signed up for. When the entire team can monitor production, it allows us to extend QA because we can be proactive and not just reactive to issues in production. Monitoring production gives us the ammo we need to take preemptive action to avert issues while giving us the data we need to improve the application.