
Digital Services Playbook

https://playbook.cio.gov/

The US government's Digital Services Playbook was born out of the failures of HealthCare.gov.

I thought it was awesome when I first wrote this as a draft post in 2014. After a quick peek at some of the plays, it's still something many teams, in or out of government, could adapt to improve how they deliver value through software.

You can actually find this on GitHub, so it's open, which is a theme of Code for America.

Welcome to Simple Town, My Notes on ASP.NET Razor Pages

So, I took some time to finally look into Razor Pages, and I was impressed and actually enjoyed the experience. Razor Pages simplifies web development compared to MVC. It reminds me of my failed attempts at MVP with Web Forms, but with much less boilerplate. It feels like MVVM and still has the good parts of MVC; that's because Razor Pages is MVC under the covers. I was able to immediately get some simple work done, unlike trying to get up and running with some of the JavaScript frameworks, or even MVC for that matter.

Razor Pages provides a simplified abstraction on top of MVC. No bloated controllers, just bite-sized modular pages that are paired with a page model (think code-behind if you've used Web Forms). You don't have to fuss over routing, because routing defaults to your folder/page structure with simple conventions.

You may not want to use it for complex websites that need all the fancy-schmancy JavaScript interactivity, but simple CRUD apps are a great candidate for Razor Pages. IMHO, I would select Razor Pages over MVC by default for server-side HTML rendering of dynamic, data-bound websites (though I have very little Razor Pages experience to stand behind that statement).

Here are some of my notes on Razor Pages. This isn't meant to teach Razor Pages, just a way to reinforce what I learned by diving into it. These notes are the result of questions that had me digging through docs.microsoft.com, StackOverflow, and Google. Remember, I'm still a Razor Pages newbie, so I may not have properly grasped some of this yet.

Page

A Page in Razor Pages is a .cshtml file with the @page directive at the top. Pages are basically content pages scripted as Razor templates. You have all the power of Razor plus a simplified coding experience with page models. You can still use Razor layout pages to have a consistent master-page template for your pages. You also get Razor partial pages that let you go super modular and build up your pages from reusable components (nice… thinking user controls, keeping with my trip down Web Forms memory lane).
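To make this concrete, here's a minimal sketch (the file and names are mine, not from any real project). A Pages/Index.cshtml might look like:

@page
@model IndexModel

<h1>@Model.Greeting</h1>

The @page directive is what makes it a Razor Page instead of an MVC view, and @model points it at its PageModel.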

Page Models

Page Models are like code-behinds from Web Forms because there is a one-to-one relationship between Page and PageModel. In fact, in the Page you bind the Page to its PageModel with the @model directive.

The PageModel is like an MVC controller because it is an abstraction of a controller. It is unlike an MVC controller because a controller can be related to many views, while the PageModel has a beautiful, simplified, easy-to-understand one-to-one relationship with a Page.
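A minimal PageModel to pair with the page sketch above (again, my own names):

using Microsoft.AspNetCore.Mvc.RazorPages;

public class IndexModel : PageModel
{
    public string Greeting { get; set; }

    // By convention, OnGet handles GET requests for this page.
    public void OnGet()
    {
        Greeting = "Hello from the PageModel";
    }
}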

Handlers

A PageModel is a simplified controller that you don't have to worry about mapping routes to. You get to create handler methods that respond to actions triggered by page requests. There is a well-defined convention for mapping requests to handlers that I won't go into too deeply, because there are great sites covering the details of everything I bring up in these notes.

https://www.learnrazorpages.com is a great resource to start digging into the details.
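That said, the shape of the convention is roughly this (a sketch of my own, so check the docs for the exact rules): handlers are named On{Verb} or On{Verb}{HandlerName}, with an optional Async suffix.

using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using Microsoft.AspNetCore.Mvc.RazorPages;

public class ContactModel : PageModel
{
    // GET /Contact
    public void OnGet() { }

    // POST /Contact
    public IActionResult OnPost()
    {
        return RedirectToPage();
    }

    // POST /Contact?handler=delete (or a form with asp-page-handler="Delete")
    public async Task<IActionResult> OnPostDeleteAsync(int id)
    {
        await Task.CompletedTask; // stand-in for real async work
        return RedirectToPage();
    }
}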

BindProperty

The [BindProperty] attribute is used when you want read-write, two-way state binding between the PageModel and the Page. Don't get it twisted, Razor Pages is still stateless, but you have a way to easily bind state and pass it between the client and the server. Don't worry, I know I keep saying Web Forms, but there is no ViewState, Session, or other nasties trying to force the stateless web to be stateful.

A bound property is kind of like a communication channel between the Page and the PageModel. The channel is not like a phone, where communication flows freely back and forth. It's more like a walkie-talkie or CB radio, where each side has to take turns clicking a button to talk, and request and response are the button clicks. Simply place the [BindProperty] attribute on a public property of the PageModel, and the PageModel can send its current state to the Page and the Page can send its state back to the PageModel.
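A minimal sketch of what that looks like (property and page names are mine):

using Microsoft.AspNetCore.Mvc;
using Microsoft.AspNetCore.Mvc.RazorPages;

public class EditModel : PageModel
{
    // Rendered into the form on the response, populated from the form on POST.
    [BindProperty]
    public string Title { get; set; }

    public IActionResult OnPost()
    {
        // Title now holds whatever the user typed into <input asp-for="Title" />.
        return RedirectToPage("./Index");
    }
}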

DIGRESS: As I dug into this, I wondered if there was a way to do reactive one-way data flow like ReactJS. Maybe a bound property that is immutable in the Page. The Page wouldn't update the bound property directly; instead, when the Page wants to update it, it would send some kind of change event to the PageModel. The PageModel then handles the event by updating the property, which updates the Page state. We might need WebSockets, think SignalR, to provide an open communication channel for the free flow of change events and state changes.

What do you know, of course this has been done – https://www.codeproject.com/Articles/1254354/Turn-Your-Razor-Page-Into-Reactive-Form. Not sure if it's ready for prime time, but I loved the idea of reactive one-way data flow when I started learning about ReactJS. Maybe there is enough real benefit to encourage building this into Razor Pages.

ViewData

ViewData is the same ViewData we've been using in MVC. It is used to pass read-only page state from the PageModel to the Page on each postback (haven't written "postback" since Web Forms… it all comes back around). ViewData is for scenarios where one-way data flow from the PageModel to the Page is acceptable.

ViewData is a data structure: a dictionary of objects with a string key. ViewData does not live beyond the request in which it is returned to the Page. When a new request is issued or a redirect occurs, the state of ViewData is not maintained.

Since ViewData is weakly typed, values are stored as objects, so they have to be cast to a concrete type to be used. This also means that with ViewData you lose the benefits of IntelliSense and compile-time checking. There are benefits that offset the shortcomings of weak typing: ViewData can be shared with a content page's layout and partials.

In a PageModel you can use the [ViewData] attribute on a public property. This makes the property available in ViewData, with the property name as the key for its value.
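For example (a sketch with made-up names):

using Microsoft.AspNetCore.Mvc;
using Microsoft.AspNetCore.Mvc.RazorPages;

public class AboutModel : PageModel
{
    // Available to the page, its layout, and partials as ViewData["Heading"].
    [ViewData]
    public string Heading { get; set; }

    public void OnGet()
    {
        Heading = "About Us";
    }
}

The layout can then render @ViewData["Heading"] without knowing which page set it.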

TempData

TempData is used to send single-use, read-only data from the PageModel to the Page. The most common use of TempData is to provide user feedback after a post action that results in a redirect, where you want to inform the user of the result of the post ("Hey, such and such was deleted like you asked.").

TempData is marked for deletion after it is read. There are Keep and Peek methods that let you look at the data without deleting it, and a Remove method to delete it explicitly (I haven't figured out a scenario where I'd want to use these yet).

TempData is contained in a dictionary of objects with a string key.
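Here's the redirect-feedback scenario as a sketch (this assumes the [TempData] attribute from Razor Pages 2.0; the names are mine):

using Microsoft.AspNetCore.Mvc;
using Microsoft.AspNetCore.Mvc.RazorPages;

public class DeleteModel : PageModel
{
    // Stored in TempData, so it survives exactly one redirect.
    [TempData]
    public string Message { get; set; }

    public IActionResult OnPost(string id)
    {
        // ... delete the thing here ...
        Message = $"Item {id} was deleted like you asked.";
        return RedirectToPage("./Index");
    }
}

The Index page can then display @TempData["Message"], and after that read it's gone.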

Razor Pages Life Cycle

Lastly, I wanted to understand the life cycle of Razor Pages and how I can plug into it and customize it for my purposes. Back to Web Forms again: I remember there being a well-documented life cycle that let me shoot myself in the foot all the time. Below is the life cycle as I have pieced it together so far. I know we still have MVC under the hood, so we still have the middleware pipeline, but I couldn't find documentation on the life cycle with respect to Razor Pages specifically. Maybe I will walk through the source code one day, or someone from the Razor Pages team or elsewhere will do it for us (like https://docs.microsoft.com/en-us/aspnet/mvc/overview/getting-started/lifecycle-of-an-aspnet-mvc-5-application).

  1. A request is made to a URL.
  2. The URL is routed to a Page based on convention.
  3. The handler method on the PageModel is selected based on convention.
  4. The OnPageHandlerSelected (IPageFilter) and OnPageHandlerSelectionAsync (IAsyncPageFilter) methods are run.
  5. The PageModel properties and parameters are bound.
  6. The OnPageHandlerExecuting (IPageFilter) and OnPageHandlerExecutionAsync (IAsyncPageFilter) methods are run.
  7. The handler method is executed.
  8. The handler method returns a response.
  9. The OnPageHandlerExecuted (IPageFilter) method is run.
  10. The Page is rendered (I need to do more research on how this happens in Razor; I mean, we have the content, layout, and partial pages, so how are they rendered and stitched together?)

The page filters are cool because you have access to the HttpContext (request, response, headers, cookies…), so you can do some interesting things like global logging, messing with the headers, etc. They let you inject your custom logic into the life cycle. They are kind of like middleware, but scoped to the page and its handler (how cool is that?… very).
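Since PageModel implements the filter interfaces itself, you can just override the hooks in your page. A sketch (my example, not from the docs):

using Microsoft.AspNetCore.Mvc.Filters;
using Microsoft.AspNetCore.Mvc.RazorPages;

public class AuditedPageModel : PageModel
{
    public override void OnPageHandlerExecuting(PageHandlerExecutingContext context)
    {
        // Full HttpContext access: request, response, headers, cookies...
        System.Console.WriteLine(context.HttpContext.Request.Path);
        context.HttpContext.Response.Headers["X-Audited"] = "true";
    }
}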

Conclusion

That's all I got. I actually had fun. With all the complexity of the various and ever-changing JavaScript frameworks on the client side, it was nice being back in simple town on the server, sending rendered pages to the client.

Adding Report to Existing TFS 2017 Project

I had an issue where I couldn't see reports for my TFS projects because they weren't installed. I knew this because I opened SQL Server Reporting Services and didn't see a folder for my project under the TFS collection's folder. I did a little digging and found a command I could run to install the reports:

  1. Open administrator command prompt on server hosting TFS.
  2. Change directory to C:\Program Files\Microsoft Team Foundation Server 15.0\Tools
    Note: depending on your install, the tools may be under Program Files (x86) instead
  3. Run TFSConfig command to add project reports

TFSConfig addprojectreports /collection:"https://{TFSServerName}/{TFSCollectionName}" /teamproject:{TFSProjectName} /template:"Scrum"

You should replace the tokens with names that fit your context (remove the brackets); there's a filled-in example below. The template should match your project's process template:

  • Scrum – you will have backlog items under features
  • Agile – you will have stories under features

There's another one, CMMI, but I've never used it. You should see a Requirement work item, but I'm not sure if this template has a Feature item.
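For example, with made-up server, collection, and project names, adding Scrum reports would look like:

TFSConfig addprojectreports /collection:"https://tfs.contoso.com/DefaultCollection" /teamproject:MyProject /template:"Scrum"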

Once you run the command, the reports will be added and you will be able to see how your team is doing by viewing the reports in SQL Reporting Services.

TransactionScope Async Thread Fail

I updated some data access code to wrap some operations in a TransactionScope. The operations are async methods running some Dapper execute statements to write data to a SQL Server database. Something like:

public async Task InsertData(SourceData sourceData)
{
  using (var transactionScope = new TransactionScope())
  {
    using (IDbConnection connection = new SqlConnection(this.ConnectionString))
    {
      connection.Open();

      // Async Dapper writes that should commit or roll back together.
      await InsertSomeData(sourceData.Registers, connection);
      await InsertMoreData(sourceData.Deposits, connection);

      transactionScope.Complete();
    }
  }
}

Anyway, I wire up a unit test to this method and it fails with this message:

Result Message:
Test method ExtractSourceDataTest.CanStart threw exception:
System.InvalidOperationException: A TransactionScope must be disposed on the same thread that it was created.

As usual, Google to the rescue. I found a nice blog post that explains the issue: https://particular.net/blog/transactionscope-and-async-await-be-one-with-the-flow. Basically, TransactionScope was not made to flow across threads during async operations, but there is a workaround. Microsoft released a fix, TransactionScopeAsyncFlowOption.Enabled. I went from a zero

using (var transactionScope = new TransactionScope())

to a hero

using (var transactionScope = new TransactionScope(TransactionScopeAsyncFlowOption.Enabled))

Now, if this had been turned on by default, I wouldn't have had this little problem… talking to you, Microsoft. I'm sure there is some backward-compatibility issue or other quirk that makes enabling it by default difficult, but I'm ranting anyway.

Conclusion

This is awesome, but I basically just enabled a distributed transaction, and that scares me. You do not know the trouble I have seen with distributed transactions. Hopefully it's not that bad, since we are distributing across processes on the same machine and not over the network, but scary nonetheless.

AWS Device Farm vs Microsoft Mobile Center

Someone at work asked if I have ever used AWS Device Farm. I have never used it, but testing mobile apps in the cloud against hundreds of device profiles on real devices sounds like the way to go. It would be hard for us to build and manage a device farm on premises.

AWS Device Farm

After reading up on AWS Device Farm, I discovered that it is a mobile app testing service. It lets you run automated tests on Android and iOS devices in the cloud, against multiple devices at once. It also lets you run manual tests on a real device in real time (awesome!!!). You can view video, screenshots, logs, and performance data from your tests to get deep insights into your app.

Microsoft Mobile Center

I went to a Xamarin meetup and learned that Microsoft Mobile Center is basically the same thing as AWS Device Farm, but it covers the entire continuous delivery pipeline for iOS, Android and Windows mobile devices:

  • Build
  • Test
  • Distribute to app store
  • Monitor crashes and analytics

The build part was compelling for me because it allows me to build iOS in the cloud without having to own a Mac.

It also provides integration with:

  • Git repositories on GitHub or Visual Studio Online (Bitbucket coming soon).
  • Azure Table Data Storage for online/offline data storage and sync.
  • Azure Identity for app user identity management.

Differences

The major difference between them is scope. Device Farm is concerned only with testing in the cloud. Mobile Center is concerned with hosting your entire continuous delivery pipeline in the cloud. So, this is like comparing React to Angular, two different levels of abstraction.

Device Farm doesn't provide Windows platform testing, but I don't think that is a deal breaker for many people right now. It also doesn't support any continuous delivery automation outside of the test stage, so you will have to find other services for build, distribution, and monitoring, or script your own automation.

Mobile Center doesn’t have Remote Access like Device Farm, but you could always write an automated test for the manual actions you’d like to reproduce.

Disclosure

I don’t have any real world experience with either solution. This is a surface level comparison based on docs and demos. I’m a little biased towards Microsoft because I am primarily a .Net developer, so yell at me if I was too unfair to Device Farm.

Build a .Net Core WebAPI using Visual Studio Code

So, we have an intern, and she is helping us build an internal tool. She is good on the client side but very light on back-end experience. So I wanted to give her a challenge: build a .Net Core WebAPI using Visual Studio Code. I wrote up these instructions, and she had the API up, with a basic understanding of how to iterate it forward, in less than an hour. I thought I'd share it in hopes it helps someone else.

Check out Cmder, http://cmder.net/, as an alternative to Windows command prompt.

  • Make a directory for the application. I am creating my application in an “api” folder inside my _projects folder. Run
mkdir c:\_projects\api
  • Change to your new directory. Run
cd c:\_projects\api
  • Create a .Net Core application. Run
dotnet new
  • Restore dependencies that are listed in your project.json. Run
dotnet restore
  • Open Visual Studio Code on your application folder. Run
code .
  • You may see a warning, “Required assets to build and debug are missing from ‘api’. Add them?”, click yes.
  • Open the Quick Open (Ctrl+P)
  • Run this command "ext install csharp". https://marketplace.visualstudio.com/items?itemName=ms-vscode.csharp
  • Back in the console you should be able to run the application and see “Hello World!” printed to the console. Run
dotnet run

The project.json currently looks like:

{
  "version": "1.0.0-*",
  "buildOptions": {
    "debugType": "portable",
    "emitEntryPoint": true
  },
  "dependencies": {
  },
  "frameworks": {
    "netcoreapp1.1": {
      "dependencies": {
        "Microsoft.NETCore.App": {
          "type": "platform",
          "version": "1.1.0"
        }
      },
      "imports": "dnxcore50"
    }
  }
}

We need to update this to run ASP.Net MVC:

{
  "version": "1.0.0-*",
  "buildOptions": {
    "debugType": "portable",
    "emitEntryPoint": true
  },
  "dependencies": {
  },
  "frameworks": {
    "netcoreapp1.1": {
      "dependencies": {
        "Microsoft.NETCore.App": {
          "type": "platform",
          "version": "1.1.0"
        },
        "Microsoft.AspNetCore.Server.Kestrel": "1.1.0",
        "Microsoft.AspNetCore.Mvc": "1.1.1",
        "Microsoft.AspNetCore.Mvc.Core": "1.1.1"
      },
      "imports": "dnxcore50"
    }
  }
}

Under frameworks, you will notice that we are running .Net Core 1.1, the current version when this was written. Also, we added some additional dependencies:

  • Kestrel – a web server that will serve your API endpoints to clients
  • Mvc – the base ASP.Net Core MVC 1.1.1 dependency
  • Mvc.Core – the core ASP.Net Core MVC 1.1.1 dependency

These dependencies will allow us to write and serve our API using ASP.Net Core MVC.

Once you save project.json, Visual Studio Code will let you know: "There are unresolved dependencies from 'project.json'. Please execute the restore command to continue." You can click "Restore", or you can open the console and run

dotnet restore

This will install the new dependencies that were added to project.json.

Now we need to configure our application to serve our API. We need to update Program.cs from:

using System;

namespace ConsoleApplication
{
    public class Program
    {
        public static void Main(string[] args)
        {
            Console.WriteLine("Hello World!");
        }
    }
}

to:

using System;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.DependencyInjection;

namespace BInteractive.StoryTeller.Api
{
    public class Program
    {
        public static void Main(string[] args)
        {
            var host = new WebHostBuilder()
                .UseKestrel()
                .UseStartup<Program>()
                .Build();
            host.Run();
        }

        // This method gets called by the runtime. Use this method to add services to the container.
        public void ConfigureServices(IServiceCollection services)
        {
            services.AddMvc();
        }

        // This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
        public void Configure(IApplicationBuilder app)
        {
            app.UseMvcWithDefaultRoute();
        }
    }
}

Here we added new using statements at the top of the file to reference the dependencies we want to use. I changed the namespace to match my application; you can customize the namespace to match your application. Normally, I like my namespaces to be MyCompanyName.MyApplicationName.MyFolderName (the last part only if the class is in a folder under my root folder).

Now we update the Main method, the entry point into the application, to run our API instead of printing "Hello World". We wire up a host using the Kestrel web server, use this Program class as the startup class, then build the host and call Run on it. This starts the server listening; requests are routed based on the configured routes and handled by the MVC service.

The ConfigureServices method allows you to configure the services you want to use with your API. Right now we only have MVC configured.

The Configure method allows you to inject middleware into the HTTP pipeline to enhance request and response handling. You can add things like logging and error-page handling that work across every request/response.
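For example, here's a hypothetical variation of the Configure method above with a tiny inline middleware that logs every request (just a sketch to show where this kind of logic plugs in):

public void Configure(IApplicationBuilder app)
{
    app.Use(async (context, next) =>
    {
        Console.WriteLine($"{context.Request.Method} {context.Request.Path}");
        await next(); // hand off to the rest of the pipeline (MVC below)
    });

    app.UseMvcWithDefaultRoute();
}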

Now that we are wired up for ASP.Net MVC, let's build an API. We are going to build an API that collects and serves questions. So, let's define what a question is. Create a new folder under your root folder named "models". Then create a file named questionmodel.cs.

using System;

namespace BInteractive.StoryTeller.Api.Models
{
    public class Question
    {
        public string Id { get; set; }
        public string Title { get; set; }
    }
}

This is a plain old CSharp object with properties to get and set the question's Id and Title.

With this we can create a controller that allows clients to work with this model through our API. Create a new folder under your root folder named “controllers”. Then create a file named questioncontroller.cs.

using System.Collections.Generic;
using Microsoft.AspNetCore.Mvc;
using BInteractive.StoryTeller.Api.Models;

namespace BInteractive.StoryTeller.Api.Controllers
{
    [Route("api/[controller]")]
    public class QuestionController : Controller
    {
        private static List<Question> _questions;

        static QuestionController()
        {
            _questions = new List<Question>();

            Question question = new Question();
            question.Id = "1";
            question.Title = "Hello World?";

            _questions.Add(question);
        }

        [HttpGet]
        public IEnumerable<Question> GetAll()
        {
            return _questions.AsReadOnly();
        }

        [HttpGet("{id}", Name = "GetQuestion")]
        public IActionResult GetById(string id)
        {
            var item = _questions.Find(x => x.Id == id);

            if (item == null)
            {
                return NotFound();
            }

            return new ObjectResult(item);
        }

        [HttpPost]
        public IActionResult Create([FromBody] Question item)
        {
            if (item == null)
            {
                return BadRequest();
            }

            item.Id = (_questions.Count + 1).ToString();

            _questions.Add(item);

            return CreatedAtRoute("GetQuestion", new { controller = "Question", id = item.Id }, item);
        }

        [HttpDelete("{id}")]
        public void Delete(string id)
        {
            _questions.RemoveAll(n => n.Id == id);
        }
    }
}

There is a lot here, but the gist is we are setting up an endpoint route for our question API and we are adding methods to get, post, and delete questions. You can dive more into what this is doing by reading up on ASP.Net Core, https://www.asp.net/core.

You should be able to press Ctrl-Shift-B to build the application, and if everything is good you won't see any errors. If you are all good, you should be able to run the application. In the console, go to the application root directory and run

dotnet run

Then you should be able to browse the API at http://localhost:5000/api/question and see a JSON response with the default question of “Hello World?”.
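If you want to exercise the POST endpoint too, something like this should work from the console (I'm using curl here; any HTTP client will do):

curl -H "Content-Type: application/json" -d "{\"title\":\"Is this thing on?\"}" http://localhost:5000/api/question

You should get back a 201 Created with the new question, including the Id the controller assigned.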

Modular MicroSPAs

Warning – this is just an unstructured thesis and a challenge for myself to find a solution for building applications with microSPAs. There is no real substance here, just me brainstorming and recording thoughts.

I recently had to bring many microSPAs under the control of one application. A microSPA in this context is just a SPA (single page application) that is meant to coexist with other SPAs in a single application. Each SPA is focused on a discrete domain of the application, maybe a decomposition similar to microservices.

I only say "micro" because I have been through exercises to break up server-side monolithic APIs into microservices. Now the breakup was client side: take a massive client-side SPA, or monolith, break out functionality into smaller SPAs, then combine them with new SPAs to form a new modular application. This is nothing new, but it is new to me.

MEAN.js has a wonderful structure for discrete modular AngularJS microSPAs.

https://github.com/meanjs/mean/tree/master/modules

The idea is to have a folder containing all of your microSPAs. Each microSPA gets its own folder. Each microSPA gets its own repository and development life cycle. An example is below, borrowing heavily from MEAN.js. I can't go into the particulars because this is just a thought spawned by a problem we had with microSPAs, but something I will be involved in solving.

  • app
    • myapp.core <— this is a microSPA
      • client
        • config
        • controllers
        • css
        • directives
        • images
        • models
        • services
          • interceptors
          • socket
        • views
      • server
        • config
        • controllers
        • data
        • models
        • policies
        • routes
        • templates
        • views
      • tests
        • client
          • small
          • medium
          • large
        • server
          • small
          • medium
          • large
      • myapp.core.client.js
    • myapp.stories
    • myapp.users
    • myapp.admin
    • myapp.other_micro_spa

Now the question is: how do you stitch the microSPAs together under one domain name, client context, user session… and manage the entire application across the composed microSPAs? We need to think about problem areas like:

  • Authentication
  • Root Application and microSPA Level
    • Authorization
    • Routes
    • Menu
    • Layout Templates
    • Static Assets
      • Styles
      • Images
  • Sharing Across MicroSPAs
    • State
    • Components/Modules
    • Dependencies
  • Debugging
  • Testing
  • Delivery Automation (Build, Package, Test, Release)
  • Monitoring and Analytics

How to solve this with AngularJS 1 & 2, React, Vue.js…?

Why am I thinking about this? I just failed gloriously at breaking apart a monolithic SPA and stitching it back together with other SPAs, and ran into issues in all of the problem areas above. I didn't use the MEAN.js architecture or even the structured modular file layout above. The project was done fast and dirty with the only goal of getting the app working again, with the new architecture and new SPAs, as fast as possible (a few days fast).

The team finished the task, but I was embarrassed by the resulting structure and by many of the hacks we employed to overcome issues in the problem areas above. Why we had to accomplish it so fast is another story; so is how we are going to use lessons learned to refactor and address the problem areas above. It's been a long time since I blogged regularly, but I am hoping to journal our journey and how we solve the issues we faced with microSPAs.

If you have worked with combining multiple SPAs please share, I’m sure there are solutions all over the interwebs.

Thoughts on Multitenant Microservices

I have worked on SaaS and multitenant applications. I have segmented application tenants in the database layer at the row, table, and schema levels, and I have also done separate databases for each tenant. Each strategy had its pros and cons, but it only addressed data segmentation; I still had to deal with logic segmentation for each tenant.

When a tenant customer wants different or custom functionality, how do I segment the logic in such a way that the tenant gets what they want without affecting the other tenants? How do we meter and bill for logic? Complex "if" or "case" statements, reflection, dependency injection…? All a bit messy, in my opinion.

Having made the leap to microservices, we now have the option of separate services per tenant. In the UI layer, each tenant can have a different UI that encapsulates the UI's structure, layout, styling, and logic for that tenant. The UI can also have configurable microservices: just a list of endpoints defining the microservices needed to drive the UI. During onboarding, and on an administrative configuration page, tenants can define the functionality they want to use in place of, or alongside, the default functionality by simply selecting from a list of services. We can query the service configuration and monitor service usage to provide customized per-tenant metering and billing.
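To make that concrete, the per-tenant service configuration could be as simple as a map of capabilities to endpoints (a made-up sketch; the shape and names are invented):

{
  "tenant": "acme",
  "services": {
    "billing": "https://api.example.com/acme/billing/v2",
    "reporting": "https://api.example.com/default/reporting/v1"
  }
}

The UI resolves each capability through this map, so moving a tenant onto custom functionality is a configuration change, not a code change.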

This is not much different from the plug-in strategy you see in content management systems like WordPress and Umbraco, just at a different layer of abstraction. Is this better than the other logic segmentation strategies? I don't know; I haven't done it yet.

Am I excited to try it? Hell yeah. Will I fail while trying it? I hope so, because I can learn some new tricks. One thing proper microservices provide is an easier way to reason about an application in bite-sized chunks. Also, with end-to-end automation it is easier to experiment. We can fail often, early, and fast, fix it, and repeat until we get it right. So, I think it is going to be fun, in a geeky way, to figure this out, even though thinking about using GraphQL muddies the waters a bit, but that's another post.

If you have done multitenant microservices or are interested in doing something similar with microservices, let’s talk about it :).

In SQL Null is not a value… not a value!

I have been spending a lot of time fixing SQL Server database errors caused by stored procedures attempting to compare NULL. If you don't know, in SQL:

NULL = NULL is not true (it evaluates to UNKNOWN, which a WHERE clause treats like false)

NULL <> NULL is not true either (also UNKNOWN)

NULL is not a value. NULL is nothing. You can't compare nothing to nothing, because there is nothing to compare. I know you can do a select and see the word NULL in the results in SQL Server Management Studio, but that is just a marker so you don't confuse empty strings with NULL or something.
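You can see this for yourself with a quick one-liner (a sketch; run it anywhere):

-- A comparison with NULL is never true, so the ELSE branch wins:
select case when null = null then 'equal' else 'not equal' end  -- returns 'not equal'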

If you need to do a comparison on a nullable column, please check that shit for NULL first:

t2.column2 is null or t2.column2 = t1.column2   -- null-safe: treat NULL as a match

t2.column2 is not null   -- or just filter the NULLs out before comparing

Also, if you try to be smart and turn ANSI_NULLS off, you are going to be hurt when you have to upgrade your SQL Server to a version that forces ANSI_NULLS on (it's coming).

I have been guilty of comparing NULL and saying, "it has a NULL value." Now that I am having to fix scripts written by someone who didn't think about NULL, I wanted to rant and hammer this point home for myself, so I don't cause anyone the pain I am feeling right now. NULL is not a value… not a value!


Where is your logic?

RANT

I hate logic in the database. It's hard to automate testing, hard to debug, and hard to have visibility into logic that may be core to the success or failure of an application or business. Some of the worst problems I have had to deal with are database related; actually, almost all of the worst problems have been linked to the database.

I am in love with the new movement toward smaller services doing exactly one small thing very well. I think the database should persist data… period. Yes, there are times when it just makes sense to have logic closer to the data, but I can always think of a reason not to do it, and it always goes back to my experiences with database problems. It's been a love-hate relationship, me and databases.

I'm not a DBA, and I don't have the reserve brain power to become one. So, to help my limited understanding, I shy away from anything that looks like logic in my data layer. Call it laziness, naivete, or not wanting to use the right tool for the job; I don't care. If I'm in charge, get your shitty logic out of the database, including you, evil MERGE statement, the current bane of my existence :).