AWS Device Farm vs Microsoft Mobile Center

Someone at work asked if I have ever used AWS Device Farm. I have never used it, but testing mobile apps in the cloud against hundreds of device profiles on real devices sounds like the way to go. It would be hard for us to build and manage a device farm on-premises.

AWS Device Farm

After reading up on AWS Device Farm, I discovered that it is a mobile app testing service. It allows you to run automated tests on Android and iOS devices in the cloud, against multiple devices at once. It also allows you to run a manual test on a real device in real time (awesome!!!). You can view video, screenshots, logs, and performance data from your tests to get deep insights into your app.

Microsoft Mobile Center

I went to a Xamarin meetup and learned that Microsoft Mobile Center is basically the same thing as AWS Device Farm, but it covers the entire continuous delivery pipeline for iOS, Android and Windows mobile devices:

  • Build
  • Test
  • Distribute to app stores
  • Monitor crashes and analytics

The build part was compelling for me because it allows me to build iOS apps in the cloud without having to own a Mac.

It also provides integration with:

  • Git repositories on GitHub or Visual Studio Online (Bitbucket coming soon).
  • Azure Table Data Storage for online/offline data storage and sync.
  • Azure Identity for app user identity management.

Differences

The major difference between them is scope. Device Farm is concerned only with testing in the cloud. Mobile Center is concerned with hosting your entire continuous delivery pipeline in the cloud. So, this is like comparing React to Angular, two different levels of abstraction.

Device Farm doesn’t provide Windows platform testing, but I don’t think that is a deal breaker for many people right now. It also doesn’t support any continuous delivery automation outside of the test stage, so you will have to find other services for the build, distribute, and monitor stages, or script your own automation.

Mobile Center doesn’t have Remote Access like Device Farm, but you could always write an automated test for the manual actions you’d like to reproduce.

Disclosure

I don’t have any real world experience with either solution. This is a surface level comparison based on docs and demos. I’m a little biased towards Microsoft because I am primarily a .Net developer, so yell at me if I was too unfair to Device Farm.

Build a .Net Core WebAPI using Visual Studio Code

So, we have an intern and she is helping us build an internal tool. She is good on the client side, but very light on back-end experience. So, I wanted to give her a challenge: build a .Net Core WebAPI using Visual Studio Code. I wrote up these instructions and she had the API up, and a basic understanding of how to iterate it forward, in less than an hour. I thought I’d share it in hopes it helps someone else.

Check out Cmder, http://cmder.net/, as an alternative to Windows command prompt.

  • Make a directory for the application. I am creating my application in an “api” folder inside my _projects folder. Run
mkdir c:\_projects\api
  • Change to your new directory. Run
cd c:\_projects\api
  • Create a .Net Core application. Run
dotnet new
  • Restore dependencies that are listed in your project.json. Run
dotnet restore
  • Open Visual Studio Code and open your application folder. Run
code
  • You may see a warning, “Required assets to build and debug are missing from ‘api’. Add them?” Click Yes.
  • Open the Quick Open (Ctrl+P)
  • Run the command “ext install csharp” to install the C# extension. https://marketplace.visualstudio.com/items?itemName=ms-vscode.csharp
  • Back in the console you should be able to run the application and see “Hello World!” printed to the console. Run
dotnet run

The project.json currently looks like:

{
  "version": "1.0.0-*",
  "buildOptions": {
    "debugType": "portable",
    "emitEntryPoint": true
  },
  "dependencies": {
  },
  "frameworks": {
    "netcoreapp1.1": {
      "dependencies": {
        "Microsoft.NETCore.App": {
          "type": "platform",
          "version": "1.1.0"
        }
      },
      "imports": "dnxcore50"
    }
  }
}

We need to update this to run ASP.Net MVC:

{
  "version": "1.0.0-*",
  "buildOptions": {
    "debugType": "portable",
    "emitEntryPoint": true
  },
  "dependencies": {
  },
  "frameworks": {
    "netcoreapp1.1": {
      "dependencies": {
        "Microsoft.NETCore.App": {
          "type": "platform",
          "version": "1.1.0"
        },
        "Microsoft.AspNetCore.Server.Kestrel": "1.1.0",
        "Microsoft.AspNetCore.Mvc": "1.1.1",
        "Microsoft.AspNetCore.Mvc.Core": "1.1.1"
      },
      "imports": "dnxcore50"
    }
  }
}

Under frameworks, you will notice that we are running .Net Core 1.1, the current version at the time this was written. We also added some additional dependencies:

  • Kestrel – a web server that will serve your API endpoints to clients
  • Mvc – the full ASP.Net Core MVC 1.1.1 package
  • Mvc.Core – the core ASP.Net Core MVC 1.1.1 dependency that Mvc builds on

These dependencies will allow us to write and serve our API using ASP.Net Core MVC.

Once you save project.json, Visual Studio Code will let you know “There are unresolved dependencies from ‘project.json’. Please execute the restore command to continue.” You can click “Restore”, or open the console and run

dotnet restore

This will install the new dependencies that were added to project.json.

Now we need to configure our application to serve our API. We need to update Program.cs from:

using System;

namespace ConsoleApplication
{
    public class Program
    {
        public static void Main(string[] args)
        {
            Console.WriteLine("Hello World!");
        }
    }
}

to:

using System;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.DependencyInjection;

namespace BInteractive.StoryTeller.Api
{
    public class Program
    {
        public static void Main(string[] args)
        {
            var host = new WebHostBuilder()
                .UseKestrel()
                .UseStartup<Program>()
                .Build();
            host.Run();
        }

        // This method gets called by the runtime. Use this method to add services to the container.
        public void ConfigureServices(IServiceCollection services)
        {
            services.AddMvc();
        }

        // This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
        public void Configure(IApplicationBuilder app)
        {
            app.UseMvcWithDefaultRoute();
        }
    }
}

Here we added new using statements at the top of the file to reference the dependencies we want to use. I changed the namespace to match my application; you can customize the namespace to match your application. Normally, I like to structure my namespaces as MyCompanyName.MyApplicationName.{MyFolderName, if the class is in a folder under my root folder}.

Now we update the Main method, the entry point into the application, to run our API instead of printing “Hello World!”. We wire up a host using the Kestrel web server, use this Program class as the startup class, then build the host and call Run on it. This starts the server listening; incoming requests are matched against the configured routes and handled by the MVC service.

The ConfigureServices method allows you to configure the services you want to use with your API. Right now we only have MVC configured.
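If we later need more services, this is where they would be registered so MVC can inject them into controllers. A minimal sketch of what that could look like (the repository interface and class here are hypothetical, just for illustration):

public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc();

    // hypothetical example: register an application service so it can be injected into controllers
    services.AddSingleton<IQuestionRepository, InMemoryQuestionRepository>();
}

// hypothetical types, only here to make the sketch complete
public interface IQuestionRepository { }
public class InMemoryQuestionRepository : IQuestionRepository { }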

The Configure method allows you to inject middleware into the HTTP pipeline to enhance request and response handling. You can add things like logging and error page handling that work across every request/response.
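To get a feel for what middleware looks like, here is a small sketch of an inline middleware that stamps a header on every response before handing the request off to MVC (the header name is made up for illustration):

public void Configure(IApplicationBuilder app)
{
    // inline middleware: runs for every request before MVC sees it
    app.Use(async (context, next) =>
    {
        context.Response.Headers["X-Api-Version"] = "1.0"; // made-up header, illustration only
        await next();
    });

    app.UseMvcWithDefaultRoute();
}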

Now that we are wired up for ASP.Net MVC, let’s build an API. We are going to build an API that collects and serves questions. So, let’s define what a question is. Create a new folder under your root folder named “models”. Then create a file named questionmodel.cs.

using System;

namespace BInteractive.StoryTeller.Api.Models
{
    public class Question
    {
        public string Id { get; set; }
        public string Title { get; set; }
    }
}

This is a plain old C# object that has properties to get and set the question Id and Title.

With this we can create a controller that allows clients to work with this model through our API. Create a new folder under your root folder named “controllers”. Then create a file named questioncontroller.cs.

using System.Collections.Generic;
using Microsoft.AspNetCore.Mvc;
using BInteractive.StoryTeller.Api.Models;

namespace BInteractive.StoryTeller.Api.Controllers
{
    [Route("api/[controller]")]
    public class QuestionController : Controller
    {
        private static List<Question> _questions;

        static QuestionController()
        {
            _questions = new List<Question>();

            Question question = new Question();
            question.Id = "1";
            question.Title = "Hello World?";

            _questions.Add(question);
        }

        [HttpGet]
        public IEnumerable<Question> GetAll()
        {
            return _questions.AsReadOnly();
        }

        [HttpGet("{id}", Name = "GetQuestion")]
        public IActionResult GetById(string id)
        {
            var item = _questions.Find(x => x.Id == id);

            if (item == null)
            {
                return NotFound();
            }

            return new ObjectResult(item);
        }

        [HttpPost]
        public IActionResult Create([FromBody] Question item)
        {
            if (item == null)
            {
                return BadRequest();
            }

            item.Id = (_questions.Count + 1).ToString();

            _questions.Add(item);

            return CreatedAtRoute("GetQuestion", new { controller = "Question", id = item.Id }, item);
        }

        [HttpDelete("{id}")]
        public void Delete(string id)
        {
            _questions.RemoveAll(n => n.Id == id);
        }
    }
}

There is a lot here, but the gist is we are setting up an endpoint route for our question API and we are adding methods to get, post, and delete questions. You can dive more into what this is doing by reading up on ASP.Net Core, https://www.asp.net/core.
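If you want to take it further, a PUT action for updating an existing question could follow the same pattern as the actions above. This is just a sketch, not something the walkthrough requires:

[HttpPut("{id}")]
public IActionResult Update(string id, [FromBody] Question item)
{
    if (item == null || item.Id != id)
    {
        return BadRequest();
    }

    var existing = _questions.Find(x => x.Id == id);
    if (existing == null)
    {
        return NotFound();
    }

    // replace the stored title with the updated one
    existing.Title = item.Title;
    return new NoContentResult();
}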

You should be able to press Ctrl+Shift+B to build the application, and if everything is good you won’t see any errors. Then you should be able to run the application. In the console, go to the application root directory and run

dotnet run

Then you should be able to browse the API at http://localhost:5000/api/question and see a JSON response with the default question of “Hello World?”.
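If the serializer is using its defaults (recent ASP.Net Core versions camel-case property names), the JSON should look something like:

[{"id":"1","title":"Hello World?"}]

You can also exercise the POST action from the command line. Something like this should create a second question (quoting may need tweaking depending on your shell):

curl -H "Content-Type: application/json" -d "{\"title\":\"Is this thing on?\"}" http://localhost:5000/api/question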

Modular MicroSPAs

Warning – this is just an unstructured thesis and a challenge for myself to find a solution for building applications with microSPAs. There is no real substance here, just me brainstorming and recording thoughts.

I recently had to bring many microSPAs under the control of one application. A microSPA in this context is just a SPA (single page application) that is meant to coexist with other SPAs in a single application. Each SPA is focused on a discrete domain of the application, maybe a decomposition something like microservices.

I only say micro because I have been through exercises to break up server side monolithic APIs into microservices. Now the break up was on the client side: take a massive client side SPA or monolith, break out functionality into smaller SPAs, then combine them with new SPAs to form a new modular application. This is nothing new, but it is new to me.

MEAN.js has a wonderful structure for discrete modular AngularJS microSPAs.

https://github.com/meanjs/mean/tree/master/modules

The idea is to have a folder containing all of your microSPAs. Each microSPA gets its own folder. Each microSPA gets its own repository and development life cycle. An example is below, borrowing heavily from MEAN.js. I can’t go into the particulars because this is just a thought from a problem we had with microSPAs, but something I will be involved in solving.

  • app
    • myapp.core <— this is a microSPA
      • client
        • config
        • controllers
        • css
        • directives
        • images
        • models
        • services
          • interceptors
          • socket
        • views
      • server
        • config
        • controllers
        • data
        • models
        • policies
        • routes
        • templates
        • views
      • tests
        • client
          • small
          • medium
          • large
        • server
          • small
          • medium
          • large
      • myapp.core.client.js
    • myapp.stories
    • myapp.users
    • myapp.admin
    • myapp.other_micro_spa

Now the question is, how do you stitch the microSPAs together under one domain name, client context, user session… and manage the entire application across composed microSPAs? We need to think about problem areas like:

  • Authentication
  • Root Application and microSPA Level
    • Authorization
    • Routes
    • Menu
    • Layout Templates
    • Static Assets
      • Styles
      • Images
  • Sharing Across MicroSPAs
    • State
    • Components/Modules
    • Dependencies
  • Debugging
  • Testing
  • Delivery Automation (Build, Package, Test, Release)
  • Monitoring and Analytics

How to solve this with AngularJS 1 & 2, React, Vue.js…?

Why am I thinking about this? I just failed gloriously at breaking apart a monolithic SPA and stitching it back together with other SPAs and ran into issues in all of the problem areas above. I didn’t use the MEAN.js architecture or even the structured modular file layout above. The project was done fast and dirty with the only goal of getting the app working again with the new architecture and new SPAs as fast as possible (a few days fast).

The team finished the task, but I was embarrassed by the resulting structure and by many of the hacks we employed to overcome issues in the problem areas above. Why we had to accomplish it so fast is another story, and so is how we are going to use lessons learned to refactor and address the problem areas above. It’s been a long time since I blogged regularly, but I am hoping to journal our journey and how we solve the issues we faced with microSPAs.

If you have worked with combining multiple SPAs please share, I’m sure there are solutions all over the interwebs.

Thoughts on Multitenant Microservices

I have worked on SaaS and multitenant-based applications. I have segmented application tenants in the database layer at the row, table, and schema levels. I have also used separate databases for each tenant. Each strategy had its pros and cons, but they only addressed data segmentation; I still had to deal with logic segmentation for each tenant.

When a tenant customer wants different or custom functionality, how do I segment the logic in such a way that I give that tenant what they want without affecting the other tenants? How do we meter and bill for logic? Complex “if” or “case” statements, reflection, dependency injection…? All a bit messy in my opinion.

Having made the leap to microservices, we now have the option of separate services per tenant. In the UI layer, each tenant can have a different UI that encapsulates the UI’s structure, layout, styling, and logic for that tenant. The UI can also have configurable microservices: just a list of endpoints that define the microservices necessary to drive the UI. During on-boarding, and on an administrative configuration page, tenants can define the functionality they want to use in place of or alongside the default functionality by simply selecting from a list of services. We can query the service configuration and monitor service usage to provide customized per-tenant metering and billing.
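To make the idea a little more concrete, here is a rough C# sketch of a per-tenant service configuration and a resolver that falls back to a default service when a tenant hasn’t picked a custom one. All of the names are made up; this is just the shape of the idea, not a design:

using System;
using System.Collections.Generic;

// hypothetical per-tenant configuration: maps a capability name to the
// microservice endpoint that should handle it for this tenant
public class TenantServiceConfig
{
    public string TenantId { get; set; }
    public Dictionary<string, Uri> ServiceEndpoints { get; set; } = new Dictionary<string, Uri>();
}

public class ServiceResolver
{
    private readonly TenantServiceConfig _config;
    private readonly Uri _defaultEndpoint;

    public ServiceResolver(TenantServiceConfig config, Uri defaultEndpoint)
    {
        _config = config;
        _defaultEndpoint = defaultEndpoint;
    }

    // use the tenant's custom endpoint if they selected one, otherwise the default
    public Uri Resolve(string capability)
    {
        Uri endpoint;
        return _config.ServiceEndpoints.TryGetValue(capability, out endpoint)
            ? endpoint
            : _defaultEndpoint;
    }
}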

This is not much different than the plug-in strategy you see in content management systems like WordPress and Umbraco, just at a different layer of abstraction. Is this better than the other logic segmentation strategies? I don’t know; I haven’t done it yet.

Am I excited to try it? Hell yeah. Will I fail while trying it? I hope so, because I can learn some new tricks. One thing proper microservices provide is an easier way to reason about an application in bite sized chunks. Also, with end-to-end automation it is easier to experiment. We can fail often, early, and fast, fix it, and repeat until we get it right. So, I think it is going to be fun, in a geeky way, to figure this out, even though thinking about using GraphQL muddies the waters a bit. But that’s another post.

If you have done multitenant microservices or are interested in doing something similar with microservices, let’s talk about it :).

In SQL Null is not a value… not a value!

I have been spending a lot of time fixing SQL Server database errors caused by stored procedures attempting to compare null. If you don’t know, in SQL:

NULL = NULL is not true

NULL <> NULL is not true either (both comparisons evaluate to UNKNOWN, not TRUE or FALSE)

Null is not a value. Null is nothing. You can’t compare nothing to nothing because there is nothing to compare. I know you can do a select and see the word NULL in the results in SQL Management Studio, but that is just a marker so you don’t confuse empty strings with NULL or something.

If you need to do a comparison on a nullable value please check that shit for null first:

-- handle the null case explicitly in the join or where clause
t2.column2 is null or t2.column2 = t1.column2

-- or filter out the nulls before comparing
t2.column2 is not null

Also, if you try to be smart and turn ANSI_NULLS off you are going to be hurt when you have to upgrade your SQL Server to a version that forces ANSI_NULLS on (it’s coming).

I have been guilty of comparing NULL and saying, “it has a NULL value.” Now that I am having to fix scripts written by someone who didn’t think about NULL, I wanted to rant and hammer this point home for myself so I don’t cause anyone the pain I am feeling right now. Null is not a value… not a value!


Where is your logic?

RANT

I hate logic in the database. It’s hard to automate testing, hard to debug, and hard to have visibility into logic that may be core to the success or failure of an application or business. Some of the worst problems I have had to deal with are database related; actually, almost all of the worst problems have been linked to the database.

I am in love with the new movement to smaller services doing exactly one small thing very well. I think the database should persist data… period. Yes, there are times when it just makes sense to have logic closer to the data, but I can always think of a reason not to do it and it always goes back to my experiences with database problems. It’s been a love-hate relationship, me and databases.

I’m not a DBA and I don’t have the reserve brain power to become one. So, to help my limited understanding, I shy away from anything that looks like logic in my data layer. Call it lazy, naivete, or not wanting to use the right tool for the job, I don’t care. If I’m in charge, get your shitty logic out of the database, including you, evil MERGE statement, the current bane of my existence :).

I’m old and set in my ways.

Today, I had a colleague question my use of a state object. He was just asking questions about its usage and other general questions about the code I wrote and how to refactor it. He wasn’t slamming my code or anything, but it made me remember the times my code has been slammed by someone that thought differently than me about coding.

Like many developers, I find having other people review my code can be a little hairy. I feel like I am waiting on a judgement to be reached and a sentence to be passed down. I’ve always hated having to hear from code reviewers who thought nothing of writing a 100+ line method or 800+ line class. Those “just make it work” aficionados made my life hell when I had to maintain their monsters by bolting more shit on top of shit and praying a new bug isn’t introduced.

Now, I don’t give a hoot what you think about my code unless you can give me a solid argument on why I should change it. I’m not saying I am a perfect coder, but I know there are a thousand ways to code the same thing. Some ways are better than others, but I’m not changing for change’s sake to appease your sensibilities. You have to provide proof that my code is so terrible that I have to go in and change it.

For example, I don’t like holding public state in a class that performs logic because you never know who will change the state. I will still have public read/write properties from time to time, but it always feels dirty when I do. So, I pass state through a constructor to readonly properties or through public method parameters. I don’t like having more than four method parameters, so I will create a plain old C# object (POCO) with no logic to hold state that I can pass to methods. I don’t like methods that do more than one thing (with “thing” being defined by me), and I like expressive, sometimes long, method names.
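To make that concrete, here is a tiny sketch of the style I am describing. The names are made up; the point is that state goes in once through the constructor and nothing can mutate it afterwards:

using System;

// hypothetical POCO that only holds state: no logic, no public setters
public class ReportRequest
{
    public ReportRequest(string customerId, DateTime from, DateTime to)
    {
        CustomerId = customerId;
        From = from;
        To = to;
    }

    public string CustomerId { get; }
    public DateTime From { get; }
    public DateTime To { get; }
}

public class ReportBuilder
{
    private readonly ReportRequest _request;

    // state is handed in once; nothing outside this class can change it later
    public ReportBuilder(ReportRequest request)
    {
        _request = request;
    }

    public string BuildHeader()
    {
        return $"Report for {_request.CustomerId}: {_request.From:d} to {_request.To:d}";
    }
}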

After years of learning about patterns, SOLID, DDD, functional programming and more, this is just how I naturally roll now. I don’t even think about it anymore. It just instinctively drives my fingers as I code in some zen state of mind. Does it create complexity? Yes. Do I still write bugs? Yes. Does it make the world a better place? No. I’d rather deal with the complexity and simple logic bugs than a bug related to some weird and unknown state mutation with error messages twice removed from the root cause of the damn bug.

You can complain about all the small methods I write that do one thing exceptionally well. You can roll your eyes while you follow a bunch of method calls with long names that explain what they do. I’m not changing the way I think about this anytime soon. So, if you ever have to read my code… suck it up, deal with it, and run the unit tests as you try to make it better. 🙂

Free Google I/O Event in Jacksonville, FL

Google I/O, Google’s annual developer-focused conference, kicks off May 18th and DiscoverTec is inviting you to watch the livestream of the opening keynote, with lunch provided, at DiscoverTec in Jacksonville, FL. Learn the newest and brightest from Google in real time, and meet with other developers to discuss the newest in technology.

  • Doors open at 12:00 noon
  • Lunch provided
  • Keynote livestream starts at 1:00 pm
  • Code Lab & Tech talks with the DiscoverTec team

Lunch, Free Giveaways, T-Shirts, and I/O 2016 Swag will be provided.
Register Today to Reserve your Spot: http://discovertec.com/google-io.

FIXED: Error Building Cordova in Visual Studio

So, I am trying to build an Apache Cordova project in Visual Studio 2015 and it is not playing nice. I see quite a few errors related to npm, so I’m going to blog it out.

Errors

First the errors. Here is a sample of them:

FindPackagesById: System.Console; File: RUNMDAINSTALL 

Error ENOENT, no such file or directory ‘C:\Users\{name}\AppData\Roaming\npm\node_modules\vs-tac\node_modules\edge\src\CoreCLREmbedding\project.lock.json’; File: RUNMDAINSTALL 

BLD401 Error : BLD00401 : Could not find module ‘C:\Users{name}\AppData\Roaming\npm\node_modules\vs-tac\app.js’. Please Go to Tools –> Options –> Tools for Apache Cordova –> Cordova Tools –> Clear Cordova Cache and try building again. 

Solution

There is no way that I can say what the real solution is because it is dependent on versions of node, npm and VS Cordova Tools, but if you have a vs-tac issue, try:

  1. Clearing your Cordova Cache:
    in Visual Studio Go to Tools > Options > Tools for Apache Cordova > Cordova Tools > Clear Cordova Cache
  2. Copy vs-tac from your VS install to your profile (see the example command after this list):
    C:\Program Files (x86)\Microsoft Visual Studio 14.0\Common7\IDE\Extensions\ApacheCordovaTools\Packages\vs-tac to C:\Users\{name}\AppData\Roaming\npm\node_modules
  3. Manually install any missing node dependencies globally:
    >npm install {dependency name} -g
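For step 2, on a default Visual Studio 2015 install the copy can be done with something like this (adjust the paths if your install or profile location differs):

xcopy "C:\Program Files (x86)\Microsoft Visual Studio 14.0\Common7\IDE\Extensions\ApacheCordovaTools\Packages\vs-tac" "C:\Users\{name}\AppData\Roaming\npm\node_modules\vs-tac" /E /I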

My Journey to Solution

First I tried the fix in the last error above: Options –> Tools for Apache Cordova –> Cordova Tools –> Clear Cordova Cache and try building again. This didn’t work, but I learned where the Cordova config is so that’s a plus.

Next I tried to manually install Cordova from npm and got this lovely error:

npm ERR! Failed to parse json
npm ERR! No data, empty input at 1:1
npm ERR!
npm ERR! ^
npm ERR! File: C:\Users\cbryant\AppData\Roaming\npm-cache\xtend\4.0.1\package\package.json
npm ERR! Failed to parse package.json data.
npm ERR! package.json must be actual JSON, not just JavaScript.

So, I went down the rabbit hole and focused on fixing this as it may be part of my original problem.

npm cache clean
npm install cordova -g

This worked; I was able to install Cordova manually, but it had no effect on my original problem and this yak still has a lot of hair to shave.

So the issue is linked to some npm package named vs-tac. A little searching and I discovered that it may already be installed here: C:\Program Files (x86)\Microsoft Visual Studio 14.0\Common7\IDE\Extensions\ApacheCordovaTools\Packages\vs-tac.

Let’s try to install it to my profile to see if that fixes the issue.

npm install "C:\Program Files (x86)\Microsoft Visual Studio 14.0\Common7\IDE\Extensions\ApacheCordovaTools\packages\vs-tac" -g

OK, I’m seeing some of the same errors that I see in Visual Studio. I discover that some of the errors are because of a bad Nuget source, so I remove the source and land on this error:

npm ERR! Failed at the edge@5.0.0 install script ‘node tools/install.js’.
npm ERR! Make sure you have the latest version of node.js and npm installed.
npm ERR! If you do, this is most likely a problem with the edge package,
npm ERR! not with npm itself.

How do you check the latest versions of node.js and npm? I asked Google the same questions:

node -v
npm -v

I am running node 5 and the current stable is 4; not sure if that is an issue. I'm going to run the latest msi for v5 to see if it does something. By the way, you can find all of the node installers here: https://nodejs.org/dist/.

Upgrading npm was a little different. I have npm installed in my node install, C:\Program Files\nodejs\npm.cmd. To upgrade I found this command

npm install npm -g

This installs the latest npm to my profile, but I assume running npm still defaults to the one in the node install (based on a couple of posts I read). So, I deleted the one in the node install and everything is upgraded and working (by the way, I had to restart my administrator command prompt to get npm to work at the new location), but I still get the last error above (still shaving this yak).

So, I have to read logs :(, C:\WINDOWS\system32\npm-debug.log. After a painful read, I give up on the command line and manually copy vs-tac from C:\Program Files (x86)\Microsoft Visual Studio 14.0\Common7\IDE\Extensions\ApacheCordovaTools\Packages\vs-tac to C:\Users\{name}\AppData\Roaming\npm\node_modules. When I build again, all of the errors are gone except one:

BLD401 Error : BLD00401 : Could not find module ‘elementtree’. Please Go to Tools –> Options –> Tools for Apache Cordova –> Cordova Tools –> Clear Cordova Cache and try building again. StationHouse.Mobile

I clear the Cordova Cache and it deletes the vs-tac in my profile. I add vs-tac back and build again, with the same error. I go to check the package.json in the vs-tac folder and notice that node_modules doesn’t exist, so I run npm install inside this directory to install the packages, but edge still won’t install.

I manually install edge:

npm install edge@5.0.0 -g

When I rebuild it succeeds, that yak has a nice crew cut.

KISS Your Big Data UI

It’s been so hard to blog lately. Mostly because I don’t have time to edit my posts. I guess if I wait until I have time to wordsmith better posts I’ll never post, so here is one that has been sitting on the shelf in all of its unedited glory.

What Qualifies A Developer To Talk About UI Design

I’m not qualified; I am not a designer. I haven’t done a lot of posts on the subject of UI design, but with the big push to big data, real-time streaming analytics, and IoT, I thought I’d write down a few things I would think about when designing a UI for them.

I started my tech career designing websites and desktop applications for a few years. Although my customers were happy with my UI designs, it’s not my thing and I don’t think I am good at it. Yet, I have been on many application teams and have worked with many awesome designers. I believe that I can speak on UI design considerations from my 16 years of doing this. What I have to say is not gospel. I haven’t searched this stuff out on Bing like I do with engineering problems. Many UI and usability gurus will probably crush me if they read this, but I’m not totally clueless when it comes to UIs.

A Dashboard for Big Data

When dealing with designing an administrator’s dashboard for IoT device sensor data or maybe any big data application you should probably focus on making the right exceptional conditions highly visible and showing actionable data trends with the ability to drill down for more information and take necessary actions. The most important thing is to alert users of potential problems and anomalies that may indicate pending problems while providing some facility for taking action to investigate and mitigate problems. Just as important is being able to identify when something is going well because you want to learn from the successes to possibly apply the knowledge to other areas.

The UI should assist the user in being proactive in addressing problems. This is true whether you have one device sending sensor data or a fleet of them. Granted, there are differences in design considerations when you start scaling to 100s or 1000s of devices, but depending on the goal of the device, the basic premise is that you want to put the conditions the sensors identify up front and in your face.

KISS Your UI

When you have 100s of messages flowing from sensors compounded by multiple devices, a pageable grid of a hundred recent sensor messages on the first screen of the UI is useless, unless you are the type that enjoys trying to spot changes while scrolling the Matrix. The dashboard should lead you to taking action when problems exist, help you learn from successes, and give you peace of mind that everything is OK. The UI should help you identify potential areas to make improvements by uncovering weaknesses. This should be done without all of the noise from the mountain of data being held by the system.

If you could only show one thing on the UI, what would it be? Maybe an alert box showing the number of critical exceptions triggered, with a link to view more information? Start with that one thing and expand on it to provide the user with what they need. A big data UI is not a CRUD or basic application UI. It is more closely related to what one might do for a reporting engine UI, but even that is a stretch. I am sure there are awesome blogs and books out there that speak on this subject, but many of the UIs I have been seeing were designed by people that didn’t get a subscription or something.

Keep It Simple Stupid (KISS) is as much a UI design principle as it is a software engineering principle. Stop making people work to understand thousands of data points when it should be the job of the UI designer to simplify it for them.

Example

Say you have a storage company. You have hundreds of garages and you want to deploy sensors to each garage and allow your customers to monitor them. The sensor will give you data on the door being open or closed, the temperature, and the relative humidity in the unit. Each device sends a message every 5 minutes, which is 288 messages a day per unit and tens of thousands of messages a day across hundreds of units.

Your customers can opt in to SMS and email alerts on each data point. Some will only want open/close alerts. Some may have items that are sensitive to heat and humidity and they will want alerts when temperature or humidity cross some threshold.

Your customers also get access to a website that allows them to modify alerts and view a dashboard where they can investigate and query all of the sensor data. What good would it be to have a grid on the dashboard showing sensor data streaming into the UI every 5 minutes, hundreds of times a day? Why burden them with even having to see the data? Most customers will never have a breach. Temperature and humidity sensitive customers will probably have an environment controlled unit that rarely triggers an alert. What does streaming data on the initial dashboard screen give them… nothing.

The only thing most customers want to know is whether someone has opened their unit or whether the temperature is fluctuating. The customer wants us to simplify all of that data into a simple, digestible UI that addresses their concerns and helps them cure any pain they may experience when an alert is triggered.

There may be users that need to dig into the data to investigate, but normal daily usage is focused more on alerts and trends. Seeing all of the data is not the main concern and the data is only available if they click a link to dig into it. The UI is kept clean, not overwhelming, and focused on the needs of the majority of customers.

This is fictional, and if no one is doing it, I probably just gave away a new app idea. This is the reason that IoT is hot: it’s wide open for dreamers. The gist of the example is that there is so much data to contend with on these types of projects that you have to hide it and simplify the UI so that you aren’t overwhelming the user.

Consumer Apps Are Not the Only Game In Town

Additionally, you have to take into account whether the UI is for consumers or businesses. Making things pretty for consumers can help to differentiate an app in the consumer market, but the time spent making things pretty for B2B or enterprise is better spent improving usability and the feature set. I am not saying that businesses don’t care about aesthetics, but unless they are reselling your UI to consumers it’s usually not the most important thing. For both audiences usability is very important, but usability doesn’t mean using the latest UI tricks, fancy graphics, or fussing over fonts and colors just for the sake of having them. Everything should be strategically implemented to help usability.

This is especially true for large enterprises. I have seen many very successful and useful apps in the enterprise that were nothing more than a set of simple colored boxes with links to take certain actions and drill down into more information. So, you not only have to take into account the amount of data being managed in the UI, but the user doing the managing. Know your audience and build the UI to their needs. Leave out all the gradients and curves and UI tricks until you have a functional UI that serves the core needs of the business and leave polish for later iterations. Focus first on how to reduce the mountain of data into bite sized actionable chunks.

Conclusion

So, Keep It Simple Stupid! Hide the data, show alerts and trends, allow drill down into the data for investigation, provide a way to take action, leave polish for later iterations.