Category: Dev

Part 1 & 2: Get Up and Running with #React and #TypeScript

The “Get Up and Running with React and TypeScript” series is made up of posts from chapters in my book “Hello React and TypeScript”. If you have questions, comments, or corrections, please reach out to me in the comments or on Twitter @charleslbryant.

You can view the posts in this series on the Hello React and TypeScript category page.

If you haven’t heard, I have been writing a little book called “Hello React and TypeScript”. It’s a simple book that chronicles the steps I took to learn how to develop a React application using TypeScript and ES6.

I’m going to post some of the chapters here on my blog. This will provide an easy reference for me because the chapters are really a reorganization of notes I took as I learned to write React with TypeScript/ES6. It is my hope that these posts may be helpful for someone looking to learn.

I will post the chapters, but I won’t keep them in sync with the book. If you want the latest and greatest, please check out the free book.


To start this series off, we will look at the code samples. This first post covers two chapters: Setting Up Samples and Component Basics. If you haven’t used React or TypeScript, I recommend you actually try out the code to get a better feel for both. You can reach out to me in the comments below or on Twitter @charleslbryant.


Setting Up Samples

https://charleslbryant.gitbooks.io/hello-react-and-typescript/content/SettingUpSamples.html

Requirements

All of the code samples have a couple of requirements:

  • npm package manager (Node 5.2.0 used in development)
  • IE 10+ or similar modern browser that supports the History API

This first sample is the source for the basic development environment. This includes the packages and automation that will be used in the samples.

Note – to get the latest version of the packages and automation, you should clone the repository from https://github.com/charleslbryant/hello-react-and-typescript.git. We will make updates as we move along with new samples, so version 0.0.1 is only a starting point.

Source Code

https://github.com/charleslbryant/hello-react-and-typescript/releases/tag/0.0.1

Installing

To install a sample you need to run npm to get the required dependencies. You should open a console at the root path of the sample directory and run

npm install

Then

tsd install

These commands will install the required npm packages and TypeScript Typings.

Run the Sample

To run the samples, open a console at the sample directory and run

gulp

This will kick off the default gulp task, which runs all the magic: it compiles the TypeScript, bundles our dependencies and code into one bundle.js file, moves all of the files needed to run the application to a dist folder, opens the index.html page in a browser, and reloads the site when changes are made to source files.
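
For context, here is a stripped-down sketch of what such a default task can look like with gulp, Browserify, and tsify. This is my illustration, not the exact gulpfile from the repository, and it omits the browser-opening and live-reload pieces:

var gulp = require('gulp');
var browserify = require('browserify');
var tsify = require('tsify');
var source = require('vinyl-source-stream');

gulp.task('default', function () {
    return browserify({ entries: ['src/main.tsx'] })
        .plugin(tsify, { target: 'es5', jsx: 'react' }) // compile TypeScript/JSX down to ES5
        .bundle()                                       // bundle dependencies and code together
        .pipe(source('bundle.js'))                      // name the output file bundle.js
        .pipe(gulp.dest('dist/scripts'));               // move it to the dist folder
});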

Basic React Component

https://charleslbryant.gitbooks.io/hello-react-and-typescript/content/Samples/ComponentBasic.html

This is a very basic example of a single React component. We could make it even simpler, but this example is still basic enough to start with.

Source Code

https://github.com/charleslbryant/hello-react-and-typescript/releases/tag/0.0.2

src/index.html

<!DOCTYPE html>
<html lang="en">
<head>
    <title>Hello World</title>
    <link rel="stylesheet" href="css/bundle.css"/>
</head>
<body>
    <div id="app"></div>
    <script src="scripts/bundle.js"></script>
</body>
</html>

If you don’t understand this, I recommend that you find an introduction to HTML course online.

The important part of this file is

<div id="app"></div>

We will tell React to inject its HTML output into this div.

src/main.tsx

/// <reference path="../typings/tsd.d.ts" />
import * as React from 'react';
import * as DOM from 'react-dom';

const root = document.getElementById('app');

class Main extends React.Component<any, any> {
    constructor(props: any) {
        super(props);
    }

    public render() {
        return (
            <div>Hello World</div>
        );
    }
}

DOM.render(<Main />, root);

If you know JavaScript, this may not look like your mama’s JavaScript. At the time I wrote this, ES6 was only 6 months old, and this sample is written in ES6. If you haven’t been following the exciting changes in JavaScript, a lot of the syntax may be new to you.

This is a JavaScript file with a .tsx extension. The .tsx extension lets the TypeScript compiler know that this file needs to be compiled to JavaScript and contains React JSX. For the most part, this is just standard ES6 JavaScript. One of the good things about TypeScript is that we can write our code in ES6/7 and compile it to ES5 or older JavaScript for older browsers.

Let’s break this code down line by line.


/// <reference path="../typings/tsd.d.ts" />

The very top of this file is a JavaScript comment. This is actually how we define references for TypeScript typing files. It lets TypeScript know where to find the definitions for types used in the code. I am using a global definition file that points to all of the actual definitions. This reduces the number of references I need to list in my file to one.


import * as React from 'react';
import * as DOM from 'react-dom';

The import statements are new in JavaScript ES6. They define the modules that need to be imported so they can be used in the code.


const root = document.getElementById('app');

const is one of the new ES6 variable declarations. It says that our variable is a constant, or a read-only reference to a value. The root variable is set to the HTML element we want to output our React component to.


class Main extends React.Component<any, any> {

class is a new declaration for an un-hoisted, strict mode function defined with prototype-based inheritance. That’s a lot of fancy words, and if they mean nothing to you, it’s not important. Just knowing that class creates a JavaScript class is all you need to know, or you can dig in at https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Classes.

Main is the name of the class.

extends says that the Main class will be created as a child of the React.Component<any, any> class.

React.Component<any, any> is part of the TypeScript typing definition for React. This is saying that React.Component will allow the type any to be used for the component’s state and props objects (more on this later). any is a special TypeScript type that basically says allow any type to be used. If we wanted to restrict the props and state objects to specific types, we would replace any with the types we are expecting. This looks like generics in C#.
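
For example, if we did want to restrict props to a specific type, we could define an interface and use it in place of any. This is my own illustration (HelloProps is a made-up name, not part of the sample):

interface HelloProps {
    name: string;
}

class Hello extends React.Component<HelloProps, any> {
    public render() {
        return (
            <div>Hello {this.props.name}</div>
        );
    }
}

// The compiler now flags <Hello name={42} /> because name must be a string.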


constructor(props: any) {
    super(props);
}

The constructor method is used to initialize the class. In our sample, we only call super(props), which calls the constructor method on our parent class React.Component, passing along any props that were passed into our constructor. We are using TypeScript to define the props as the any type.

In addition to calling super, we could also initialize the state of the component, bind events, and perform other tasks to get our component ready for use.
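
As a quick illustration of those extra tasks (my own example, not part of this sample), a constructor might initialize state and bind an event handler like this:

class Counter extends React.Component<any, any> {
    constructor(props: any) {
        super(props);
        this.state = { count: 0 };                      // initialize component state
        this.handleClick = this.handleClick.bind(this); // bind the handler to this instance
    }

    private handleClick() {
        this.setState({ count: this.state.count + 1 });
    }

    public render() {
        return (
            <div onClick={this.handleClick}>Count: {this.state.count}</div>
        );
    }
}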


public render() {
    return (
        <div>Hello World</div>
    );
}

Maybe the most important method in a React component is the render method. This is where the DOM for our component is defined. In our sample we are using JSX. We will talk about JSX later; for now, just understand that it is like HTML and outputs JavaScript.

JSX is not a requirement. You could use plain JavaScript. We can rewrite this section of code as

public render() {
    return (
        React.createElement("div", null, "Hello World")
    );
}

The render method returns the DOM for the component. After the return statement you put your JSX. In our sample, this is just a simple div that says Hello World. It is good practice to wrap your return expression in parentheses.


DOM.render(<Main />, root);

Lastly, we call DOM.render and pass Main, the React component we just created, and root, the variable containing the element we want to output the component to.


That’s it, pretty simple and something we can easily build upon.

Recent Talks with Big Effect on My Development

I have seen many, many development talks, actually an embarrassing amount. Many of them have altered how I develop or think about development. Since 2013 in particular, a handful of talks have caused a major shift in how I think about full stack development; my mind has been in a major evolutionary cycle.

CQRS and Event Sourcing

My thought process about full stack development started down a new path when I attended Code on the Beach 2013. I watched a talk by Greg Young that included Event Sourcing; mind ignited.

Greg Young – Polyglot Data – Code on the Beach 2013

He actually followed this up at Code on the Beach 2014 with a talk on CQRS and Event Sourcing.


Reactive Functional Programming

Then I was introduced to React.js, I experienced a fundamental paradigm shift in how I believed application UIs should be built, and I began exploring reactive functional programming. I watched a talk by Pete Hunt where he introduced the design decisions behind React; mind blown.

Pete Hunt – React, Rethinking Best Practices – JSConf Asia 2013


Stream Processing

Then my thoughts on how to manage data flow through my applications were significantly altered. I watched a talk by Martin Kleppmann that promoted subscribe/notify models and stream processing, and I learned more about databases than from any talk before it; mind reconfigured.

Martin Kleppmann – Turning the database inside-out with Apache Samza – Strange Loop 2014


Immutable Data Structures

Then my thoughts on immutable state were refined and I went in search of knowledge on immutable data structures and their uses. I watched a talk by Lee Byron on immutable state; mind rebuilt.

Lee Byron – Immutable Data and React – React.js Conf 2015


Conclusion

I have been doing professional application development since 2000. A lot was burned into my mind in terms of development, but these talks were able to cut through the internal program of my mind and teach this old dog some new tricks. The funny thing is that all of these talks are based on old concepts from computer science and our profession that I never had the opportunity to learn.

The point is, keep learning and don’t accept best practices as the only or best truth.

 

My New Book: “Hello React and TypeScript”

So, I have been doing a deep dive into React and TypeScript and found it a little difficult to find resources to help me. I could find plenty of great resources on React and TypeScript separately. Just exploring the documentation and the many blogs and videos for the two is great. Yet, resources that explored writing React applications with TypeScript were either old, incomplete, or just didn’t work for me for one reason or another. I have to admit that I haven’t done a deep search in a while, so that may have changed.

After some frustration and keyboard pounding, I decided to just consume as much as I could about each individually and get a Hello World example working. With the basics done, I went slowly all the way to a full-blown application. Since I was keeping notes on that journey, why not share them?

I was going to share them in a few blog posts, but I have been trying to do a book on GitBook for a while and this topic seemed like a good fit. Also, I enjoy digging into React and TypeScript, so this book is probably something I can commit to, unlike my other sorry book attempts.

My little guide book isn’t complete and is still very rough. It may not be completed any time soon given the rapid pace of change for JavaScript, TypeScript, and React. Also, I am thinking about moving from Gulp/Browserify to Webpack. I have only completed a fraction of the samples I want to do… so much still to do. Even though I don’t think it is ready for prime time, it is a public open source book, so why not share it, get feedback, and iterate.

You can get the free book from GitBook

https://charleslbryant.gitbooks.io/hello-react-and-typescript/content/index.html

The source code for the samples in the book is on GitHub

https://github.com/charleslbryant/hello-react-and-typescript

If you don’t like the book, let me know. If you find something wrong, bad practice or advice, incoherent explanations… whatever, let me know. If you like it, let me know too; I can always use an ego boost :). Now I just have to convince myself to do some videos and talks… public speaking, bah :{

Enjoy!

Online Computer Science Education

I have always had an inferiority complex as a software developer. I am self taught and I felt that I was missing the good base that people with a computer science degree have. I no longer feel as inferior as I used to, because I have been doing this for 15 years and I am always able to hold my own. Yet, there are things that I don’t immediately understand that I think would be clearer if I had the grounding of a CS degree. But I am not going to spend a bunch of money on one, and I don’t have the time to commit to school full time.

I am a Pluralsight junkie, and YouTube computer science’y videos are my TV. There are many top-notch CS schools that have released lecture videos to the public for free. Why can’t I organize these videos into a curriculum and get a virtual CS education? I am constantly watching videos anyway. Well, I ran across a few posts that suggest that very thing.

Free Online CS Education

The MIT Challenge chronicles Scott Young’s quest to get the equivalent of a 4 year CS education from MIT in 12 months. I am not that ambitious, but I appreciate his journey and he is very thoughtful in providing some help in actually getting through the challenge.

$200K for a computer science degree? Or these free online classes? is a post by Andrew Oliver where he puts forth a sample CS curriculum made up of Coursera videos. He thinks these videos would be as good as a CS degree. He hasn’t watched all of the videos, but based on their overviews he says he would hire someone who went through them and did all the associated course work.

aGupieWare: Online Learning: A Bachelor’s Level Computer Science Program Curriculum is a good post that breaks down what’s involved in a CS degree program. They put forth a good list of online classes to choose from. They received a good number of responses, and based on the feedback they posted an improved, expanded list: Online Learning: An Intensive Bachelor’s Level Computer Science Program Curriculum, Part II.

Conclusion

I can’t say that I would do all of these, it’s a lot, but I am going to take a more focused approach to getting a more structured CS education. I need to shore up my basic understanding of CS. If I am able to stick to it, I will post the courses I do, but don’t count on it.

If you don’t have a CS degree, you can still get a very good CS education. Even if you are a seasoned developer, if you haven’t received a formal education in CS, why not expand your understanding of our profession and take advantage of some of the excellent, free, online CS learning.

Liskov Substitution Principle (LSP) in TypeScript

Liskov Substitution Principle (LSP)

I was recently asked to name some of “Uncle Bob’s” SOLID OOP design principles. One of my answers was the Liskov Substitution Principle. I was told it is the least given answer. I wonder why?

Liskov Substitution Principle (wikipedia)
“objects in a program should be replaceable with instances of their subtypes without altering the correctness of that program.”

 

TypeScript

If you are a serious developer who stays up to date with the profession, you know what TypeScript is by now. TypeScript is a superset of JavaScript that provides type checking based on the structure or shape of values at compile time, and at design time if your IDE supports it. This structural type checking is sometimes referred to as duck typing. If it looks like and quacks like the type, it’s the type.
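
A quick example of structural typing (my own, invented names):

interface Duck {
    quack(): string;
}

// Never declares that it implements Duck, but it has the right shape.
const mallard = {
    quack: () => 'Quack!',
    wingSpan: 32
};

function listen(duck: Duck) {
    console.log(duck.quack());
}

listen(mallard); // OK: it looks like a Duck and quacks like a Duck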

As I dive deeper into TypeScript I am looking for ways to keep my code SOLID. This is where the question that prompted this post surfaced. How do I maintain LSP in TypeScript?

LSP and TypeScript

When I subtype an object in JavaScript, it’s not the same inheritance that we have in Java or C#. In JavaScript I can have function B stand in for function A as long as B has the same structure, which is different from static type checking. TypeScript provides syntactic sugar on top of JavaScript to enforce type constraints, but this doesn’t change the dynamic typing of JavaScript. In fact, I can write plain JavaScript in TypeScript, so how does LSP apply when subtypes are a different animal in JavaScript as opposed to a statically typed OOP language?

Then it hit me. The most important part of LSP (to me) is maintaining intent or semantics. It doesn’t matter if we are talking about strong typing or replacing ducks with swans; how we subtype isn’t as important as the behavior of the subtype with respect to the parent. In the formal definition, if q(x) is a property provable about objects x of type T, then q(y) should hold for objects y of a subtype S of T. Does the subtype pass our expectation q?

The principle asks us to reason about what makes the program correct. If you can replace a duck with a swan and the program holds its intent, LSP is not violated. Yet, if a method expects Mallards and maintains proper behavior only with Mallards, using a Pekin duck in that method breaks LSP. Pekin violates LSP for the expectation we have for Mallard: even if Pekin is a structurally correct subtype of Mallard, using Pekin changes the behavior of the program. Now, if we have a Gadwall duck that is also a structurally correct subtype of Mallard, and using it I observe the same behavior as using a Mallard, LSP isn’t violated and I am safe to use a Gadwall duck in place of a Mallard.
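
Here is a minimal sketch of the duck example (the types and the flying expectation are my own invention, just for illustration):

class Mallard {
    // Expectation: fly() always returns a positive altitude.
    public fly(): number {
        return 100;
    }
}

class Gadwall extends Mallard {
    public fly(): number {
        return 80; // still positive: the expectation holds, LSP preserved
    }
}

class Pekin extends Mallard {
    public fly(): number {
        return 0; // Pekins are flightless: type-correct, but the expectation breaks
    }
}

function migrate(duck: Mallard) {
    if (duck.fly() <= 0) {
        throw new Error('The program assumed every Mallard can fly.');
    }
}

migrate(new Gadwall()); // fine
migrate(new Pekin());   // compiles, but violates LSP at runtime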

Conclusion

I believe LSP has merit in TypeScript, and in any programming language in general. I believe that violation of LSP goes beyond throwing a “not implemented” exception or hiding members of the parent class. For example, if you have an interface that defines a method that should only return even numbers, and an implementation of the interface method returns odd numbers, LSP is violated. Pay attention to subtype behavior with respect to expectations of behavior.
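
The even number example as a tiny sketch (contrived names):

interface EvenNumberSource {
    // Contract: next() must return an even number.
    next(): number;
}

class GoodSource implements EvenNumberSource {
    public next(): number { return 2; }  // honors the contract
}

class BadSource implements EvenNumberSource {
    public next(): number { return 3; }  // type-checks, but violates LSP
}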

Respecting LSP in a program helps increase flexibility, reduce coupling, and make reuse a good thing, but it also helps reduce bugs and increase the value of a program when you pay attention to behavioral intent.

What do you think, should we care about LSP in TypeScript?

Monitoring Change Tickets in Delivery Pipelines

DevOps sounds cool, like some covert special operations IT combat team, but it misses the boat in many implementations because it only focuses on the relationship between Dev and Ops and is usually only championed by Ops. The name alienates important contributors on the software delivery team. The team is responsible for software delivery, including analysis, design, development, build, test, deploy, monitoring, and support. The entire team needs to be included in DevOps and needs visibility into delivery pipelines from end to end. This is an unrelated rant, but it led me to thinking about how a delivery team can monitor changes in delivery pipelines.

Monitor Change

I believe it is important that the entire team be able to monitor changes as they flow through delivery pipelines. There are ticket management systems that help capture some of the various stages that a change goes through, but it’s mostly project management related workflow stages, and they have to be changed manually. I’d like a way to automatically monitor a change as it flows from change request all the way to production, and to monitor actions that take place outside of the ticket or project management system.

Normally, change is captured in some type of ticket, maybe in a project management system or bug database (e.g. Jira, Bugzilla). We should be able to track the various activities that take place as tickets make their way to production. We need a way to trace various actions on a change request back to the change request ticket. I’d like a system where activities involved in getting a ticket to production automatically generate events that are related to ticket numbers and stored in a central repository.

If a ticket is created in Jira, a ticket created event is generated. When a developer logs time on a ticket, a time logged activity event is created that links back to the time log, or maybe holds data from the time log for the ticket number.

When an automated build that includes the ticket happens, a build started activity event is triggered with the build data. As various jobs and tasks happen in the automated build, a build changed activity event is triggered with log data for the activity. When the build completes, a build finished activity event is triggered. There may be more than one ticket involved in a build, so there would be multiple events with similar data captured, but hopefully changes are small and constrained to one or a few tickets… that’s the goal, right, small batches failing fast and early.

We may want to capture the build events and include every ticket involved instead of relating the event directly to the ticket; not sure, I am brainstorming here. The point is I want full traceability across my software delivery pipelines from change request to production, and I’d like these events stored in a distributed event store that I can project reports from. Does this already exist? Who knows, but I felt like thinking about it a little before I search for it.

Ticket Events

  1. Ticket Created Event
  2. Ticket Activity Event
  3. Ticket Completed Event

A ticket event will always include the ticket number and a date time stamp for the event (think Event Sourcing). A ticket created event occurs after the ticket is created in the ticket system. A ticket completed event occurs once the ticket is closed in the ticket system. Ticket activity events are captured based on the activities that are configured in the event system.
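
To make this concrete, here is a rough sketch of what the event shapes could look like (all names are my own assumptions, not an existing system):

interface TicketEvent {
    ticketNumber: string;   // e.g. 'PROJ-123'
    timestamp: Date;        // when the event occurred, per Event Sourcing
    eventType: 'TicketCreated' | 'TicketActivity' | 'TicketCompleted';
}

interface TicketActivityEvent extends TicketEvent {
    activity: string;       // e.g. 'Build', 'Deploy', 'Test'
    status: 'Started' | 'Changed' | 'Finished';
    data?: any;             // build log, test results, time log, etc.
}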

Ticket Activity Events

A ticket activity is an action that occurs on a change request ticket as it makes its way to production. Ticket activities will have an event for started, changed, and finished. Ticket activity events can include relevant data associated with the event for the particular type of activity. There may be other statuses included in each of these ticket activity events. For example, a finished event could include a status of error or failed to indicate that the activity finished but had an error or failed.

  • {Ticket Activity} Started
  • {Ticket Activity} Changed
  • {Ticket Activity} Finished

For example: a Deploy Started event that has the deploy log, a Build Finished event that has the build log, a Test Changed event that has new test results from an ongoing test run.

Maybe this is overkill? Maybe this should be simplified so that we only need one activity event per activity, and it includes data for started, changed, finished, and other statuses like error and fail. I guess it depends on whether we want to stream activity event statuses or ship them in bulk when an activity completes; again, I’m brainstorming.

Activities

Not every ticket will have ticket activity events triggered for every activity that the system can capture; tickets may not include every event that can occur on a ticket. Activity events are triggered on a ticket when the ticket matches the scope of the activity. Scope is determined by the delivery team.

Below are some of the types of activity events that I could see modeling for my project, but there can be different types depending on the team. So, ticket activity events have to be configurable: every team has to be able to add and remove the types of ticket activity events they want to capture (see the small sketch after the list).

  1. Analysis
    1. Business Analysis
    2. Design Analysis
      1. User Experience
      2. Architecture
    3. Technical Analysis
      1. Development
      2. DBA
      3. Build
      4. Infrastructure
    4. Risk Analysis
      1. Quality
      2. Security
      3. Legal
  2. Design
  3. Development
  4. Build
  5. Test
    1. Unit
    2. Integration
    3. End-to-end
    4. Performance
    5. Scalability
    6. Load
    7. Stress
  6. Deploy
  7. Monitor
  8. Maintain
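
A small sketch of what this configurable activity scope could look like (assumed names; I am still brainstorming):

// The activities this team has chosen to capture.
const teamActivities: string[] = [
    'Business Analysis',
    'Development',
    'Build',
    'Unit Test',
    'Deploy'
];

// Only trigger activity events that are in scope for this team.
function inScope(activity: string): boolean {
    return teamActivities.indexOf(activity) !== -1;
}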

Reporting and Dashboards

Once we have the events captured, we can make various projections to create reports and dashboards to monitor and analyze our delivery pipelines. With the ticket event data we can also create reports at other scopes. Say we want to report on a particular sprint or project: with the ticket ID we should be able to gather this and relate other tickets in the same project or sprint. It would take some thought as to whether we would want to capture project and sprint in the event data or leave this until the time we make the actual projection, but with the ticket ID we can expand our scope of understanding and traceability.
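
As a sketch of the kind of projection I mean, using the assumed event shape from earlier, here is a lead time report derived purely from the event stream:

// Project lead time (ms from created to completed) per ticket.
function leadTimes(events: TicketEvent[]): { [ticket: string]: number } {
    const created: { [ticket: string]: Date } = {};
    const result: { [ticket: string]: number } = {};
    for (const e of events) {
        if (e.eventType === 'TicketCreated') {
            created[e.ticketNumber] = e.timestamp;
        } else if (e.eventType === 'TicketCompleted' && created[e.ticketNumber]) {
            result[e.ticketNumber] =
                e.timestamp.getTime() - created[e.ticketNumber].getTime();
        }
    }
    return result;
}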

Conclusion

The main goal of this exploration into my thoughts on a possible application is to explore a way to monitor change as it flows through our delivery pipelines. We need a system that can capture the raw data for ticket created and completed events and all of the configured ticket activity events that occur in between. As I look for an existing app, I can refer back to this to see if it meets what I envisioned, or whether there may be a need to build it.

Confirming MSBuild Project Dependency Build Behavior

So we have one giant Visual Studio solution that builds every application project for our product. It is our goal to one day be able to build and deploy each application independently. Today, we have a need to build one project for x86 and all the others as x64. This requirement also provides a reason to explore per-application build and deploy. The x64 projects and the x86 project share some of the same dependencies, which provides a good test for per-application builds. The purpose of this exploration is to determine the best way to automate the separate platform builds and lay the groundwork for per-application builds. These are just my notes and not meant to provide much meat.

First, I will do some setup to provide a test solution and projects to experiment with. I created four projects in a new solution. Each project is a C# class library with only the default files in it.

  • ProjA
  • ProjB
  • ProjC
  • ProjD

Add project dependencies

  • ProjA > ProjB
  • ProjB > ProjC, ProjD
  • ProjC > ProjD

Set the platform target (project properties build tab) for each project like so

  • ProjA x64
  • ProjB x64
  • ProjC x64
  • ProjD x86

Behavior when a Dependent Project is Built

Do a Debug build of the solution and inspect the bin folders.

  • BuildTest\ProjA\bin\Debug
    • ProjA.dll               10:03 AM             4KB        B634F390-949F-4809-B937-66069C5F058E              v4.0.30319 / x64
    • ProjA.pdb             10:03 AM             8KB
    • ProjB.dll                10:03 AM             4KB        B0C5B475-576D-44D2-BD41-135BDA69225E          v4.0.30319 / x64
    • ProjB.pdb             10:03 AM             8KB
  • BuildTest\ProjB\bin\Debug
    • ProjB.dll               10:03 AM             4KB        B0C5B475-576D-44D2-BD41-135BDA69225E          v4.0.30319 / x64
    • ProjB.pdb            10:03 AM             8KB
    • ProjC.dll               10:03 AM             4KB        DBB9482F-6609-4CA5-AB00-009473E27CDA          v4.0.30319 / x64
    • ProjC.pdb            10:03 AM             8KB
    • ProjD.dll               10:03 AM             4KB        4F0F7877-5046-4A32-8B8E-FAD8E2660CE6            v4.0.30319 / x86
    • ProjD.pdb            10:03 AM             8KB
  • BuildTest\ProjC\bin\Debug
    • ProjC.dll               10:03 AM             4KB        DBB9482F-6609-4CA5-AB00-009473E27CDA          v4.0.30319 / x64
    • ProjC.pdb            10:03 AM             8KB
    • ProjD.dll               10:03 AM             4KB        4F0F7877-5046-4A32-8B8E-FAD8E2660CE6            v4.0.30319 / x86
    • ProjD.pdb            10:03 AM             8KB
  • BuildTest\ProjD\bin\Debug
    • ProjD.dll              10:03 AM             4KB        4F0F7877-5046-4A32-8B8E-FAD8E2660CE6            v4.0.30319 / x86
    • ProjD.pdb           10:03 AM             8KB

Do a Debug rebuild of ProjA

  • ProjA all DLLs have new date modified.
  • ProjB all DLLs have new date modified.
  • ProjC all DLLs have new date modified.
  • ProjD all DLLs have new date modified.

Do a Debug build of ProjA

  • ProjA no DLLs have new date modified.
  • ProjB no DLLs have new date modified.
  • ProjC no DLLs have new date modified.
  • ProjD no DLLs have new date modified.

Change Class1.cs in ProjA and do a Debug build of ProjA

  • ProjA: ProjA.dll has a new date modified; ProjB.dll does not.
  • ProjB no DLLs have new date modified.
  • ProjC no DLLs have new date modified.
  • ProjD no DLLs have new date modified.

Change Class1.cs in ProjB and do a Debug Build of ProjA

  • ProjA all DLLs have new date modified.
  • ProjB: ProjB.dll has a new date modified; ProjC.dll and ProjD.dll do not.
  • ProjC no DLLs have new date modified.
  • ProjD no DLLs have new date modified.

We change Class1.cs in ProjC and do a Debug Build of ProjA

  • ProjA all DLLs have new date modified.
  • ProjB: ProjB.dll and ProjC.dll have a new date modified; ProjD.dll does not.
  • ProjC: ProjC.dll has a new date modified; ProjD.dll does not.
  • ProjD no DLLs have new date modified.

We change Class1.cs in ProjD and do a Debug Build of ProjA

  • ProjA all DLLs have new date modified.
  • ProjB all DLLs have new date modified.
  • ProjC all DLLs have new date modified.
  • ProjD all DLLs have new date modified.

Conclusion

  • If a dependency has changes it will be built when the dependent project is built.

Behavior When Project with Dependencies is Built

Next, I want to verify the behavior when a project that has dependencies is built.

Clean the solution and do a debug build of the solution.

Do a Debug build of ProjD

  • ProjA no DLLs have new date modified.
  • ProjB no DLLs have new date modified.
  • ProjC no DLLs have new date modified.
  • ProjD no DLLs have new date modified.

We change Class1.cs in ProjD and do a Debug build of ProjD

  • ProjA no DLLs have new date modified.
  • ProjB no DLLs have new date modified.
  • ProjC no DLLs have new date modified.
  • ProjD all DLLs have new date modified.

We change Class1.cs in ProjC and do a Debug Build of ProjD

  • ProjA no DLLs have new date modified.
  • ProjB no DLLs have new date modified.
  • ProjC no DLLs have new date modified.
  • ProjD no DLLs have new date modified.

Conclusion

  • If a project with dependents is built, any projects that depend on it will not be built.

Behavior When Bin is Cleaned

I manually deleted the DLLs in ProjD and built ProjA, and the DLLs reappeared with the same date modified. Maybe they were fetched from the obj folder.

I did a clean on ProjD (this cleans obj) and built ProjA, and new DLLs were added to ProjD.

Conclusion

  • The obj folder acts like a cache for builds.

Behavior when External Dependencies are Part of Build

Add two new projects to solution

  • ExtA x64 > ExtB
  • ExtB x86

Updated these projects so they output to the solution Output/Debug folder.

Added references to the ExtA and ExtB output DLLs

  • ProjA > ExtA
  • ProjB > ExtB

I did a solution rebuild and noticed something that may also be a problem in other tests. When building ProjC, ProjD, and ExtA we get a warning:

warning MSB3270: There was a mismatch between the processor architecture of the project being built “AMD64” and the processor architecture of the reference. This mismatch may cause runtime failures. Please consider changing the targeted processor architecture of your project through the Configuration Manager so as to align the processor architectures between your project and references, or take a dependency on references with a processor architecture that matches the targeted processor architecture of your project.

Also, ProjA and ProjB are complaining about reference resolution:

warning MSB3245: Could not resolve this reference. Could not locate the assembly “ExtA”. Check to make sure the assembly exists on disk.

In Visual Studio I updated the dependencies for ProjA and ProjB to include the Ext projects. This fixes the MSB3245 warning.

Conclusion

  • We need to build all dependencies with the same platform target as the dependent.
  • We need to build external references before building any dependents of the external references (e.g. get NuGet dependencies).
  • When a solution contains a project that is dependent on another project, but does not have a project reference, update the dependency to force the dependency to build first.

Separating Platform Builds

Add new platforms for x64 and x86. Update the configuration so each project can do an x86 and x64 build. Have the Ext projects output to x86 and x64 folders for release and debug builds.
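
On the command line, the two platform builds might then be kicked off separately, something like this (assuming the solution is named BuildTest.sln and the x64/x86 solution platforms exist):

msbuild BuildTest.sln /t:Build /p:Configuration=Release /p:Platform=x64
msbuild BuildTest.sln /t:Build /p:Configuration=Release /p:Platform=x86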

Add new projects for ExtC and ExtD and have the respective Proj reference their release output: ProjC should reference the ExtC x64 release and ProjD should reference the ExtD x86 release.

Issue Changing Platform on Newly Added Solution Projects

So, I was unable to change the platform target for ExtC/D because x86 and x64 do not appear in the drop down, and I can’t add them because the UI says they are already created. I manually added them to the project file.

 

<PropertyGroup Condition="'$(Configuration)|$(Platform)' == 'Debug|x64'">
    <DebugSymbols>true</DebugSymbols>
    <OutputPath>bin\x64\Debug\</OutputPath>
    <DefineConstants>DEBUG;TRACE</DefineConstants>
    <DebugType>full</DebugType>
    <PlatformTarget>x64</PlatformTarget>
    <ErrorReport>prompt</ErrorReport>
    <CodeAnalysisRuleSet>MinimumRecommendedRules.ruleset</CodeAnalysisRuleSet>
  </PropertyGroup>
  <PropertyGroup Condition="'$(Configuration)|$(Platform)' == 'Release|x64'">
    <OutputPath>bin\x64\Release\</OutputPath>
    <DefineConstants>TRACE</DefineConstants>
    <Optimize>true</Optimize>
    <DebugType>pdbonly</DebugType>
    <PlatformTarget>x64</PlatformTarget>
    <ErrorReport>prompt</ErrorReport>
    <CodeAnalysisRuleSet>MinimumRecommendedRules.ruleset</CodeAnalysisRuleSet>
  </PropertyGroup>
  <PropertyGroup Condition="'$(Configuration)|$(Platform)' == 'Debug|x86'">
    <DebugSymbols>true</DebugSymbols>
    <OutputPath>bin\x86\Debug\</OutputPath>
    <DefineConstants>DEBUG;TRACE</DefineConstants>
    <DebugType>full</DebugType>
    <PlatformTarget>x86</PlatformTarget>
    <ErrorReport>prompt</ErrorReport>
    <CodeAnalysisRuleSet>MinimumRecommendedRules.ruleset</CodeAnalysisRuleSet>
  </PropertyGroup>
  <PropertyGroup Condition="'$(Configuration)|$(Platform)' == 'Release|x86'">
    <OutputPath>bin\x86\Release\</OutputPath>
    <DefineConstants>TRACE</DefineConstants>
    <Optimize>true</Optimize>
    <DebugType>pdbonly</DebugType>
    <PlatformTarget>x86</PlatformTarget>
    <ErrorReport>prompt</ErrorReport>
    <CodeAnalysisRuleSet>MinimumRecommendedRules.ruleset</CodeAnalysisRuleSet>
  </PropertyGroup>

Now I can update the output path for ExtC/D and update Configuration Manager to the proper platform.

Since ProjD is exclusively an x86 project, I removed it from the x64 build. I have ExtD building both x86 and x64. I updated dependencies so ExtC/D build before ProjC/D.

Final Conclusion

This was a bit much to verify what may be common knowledge on the inter-webs, but I wanted to see for myself. There is more I want to experiment with, like NuGet, build performance, and optimization, but this gave me enough to move forward with an initial revamp of our automated build. I am going to proceed with a separate pipeline for an x86 build of the entire solution and a separate deploy for the x86 application. I really believe that, going forward, NuGet can become a key tool in standardizing and optimizing per-application build and deployment.

Summary

  • If a dependency has changes, it will be built when the dependent project is built; otherwise it will not be rebuilt. This is all a feature of MSBuild. When we move to per-application builds we will have to build separate solutions for each application and build them in isolated pipelines. To prevent conflicts with other applications, we should build dependencies that are shared across applications in a separate pipeline.
  • If a project with dependents is built, any projects that depend on it will not be built.
  • The obj folder acts like a cache for builds. We can extend this concept to a common DLL repo where all builds send their DLLs, but we would need a confident way of versioning the DLLs so that we always use the most recent or a specific version… sounds like I am proposing NuGet (smile).
  • We need to build all dependencies with the same platform target as the dependent. We may build the main solution as x64 and build other projects that need x86 separately. I believe this would be the most efficient since the current x86 projects will not change often.
  • We need to build external references before building any dependents of the external references (e.g. get NuGet dependencies). We do this now with NuGet: we fetch packages first. When we move to per-application build and deploy, this will automatically be handled by Go.cd’s fan-in feature. We will have application builds take pipeline dependencies on any other application builds they need. This will cause the apps to always use the most recent successful build of an application. We can make this stronger by having applications depend on test pipelines to ensure the builds have been properly tested before integration.
  • When a solution contains a project that is dependent on another project, but does not have a project reference, update the dependency to force the dependency to build first.

NOTE

This is actually a re-post of a post I did on our internal team blog. One comment there was that we should also do an Any CPU build; then we can have one NuGet package with all versions.