Blog: Hack to the future

RedBlack CTO Steve Dickinson explains how two teams of our technologists took on the IoT challenge in our latest hackathon.

In the interest of keeping things cutting-edge at RedBlack, we like to get everyone together once a month and hold a good old hackathon.

For the uninitiated, a hackathon is where programmers team up to collaborate intensively over a short period of time to come up with a piece of useable software that solves a problem. In our case, pretty much everyone (developers, architects, managers) takes a short break from their usual schedule to join in. Participants get to have fun, think out of the box, play with technology and, very often, come up with some truly synapse-sparking ideas.

The theme of our latest hackathon was the internet of things. IoT, as it is abbreviated, describes the interconnection via the internet of computing devices embedded in everyday objects, enabling them to send and receive data.

Our hackers were given eight hours to produce a shippable product that used a Raspberry Pi or Arduino microcontroller and was based on a real-life scenario. The end result had to connect to the cloud, have a workable user interface and consist of at least two different components.

Employees across the company were split into two teams. Each team started with a 30-minute discovery session in which they reviewed what components were available to them and discussed any potential ideas.

Both teams had the same idea of using a Raspberry Pi 3 with Windows 10 IoT Core and RFID components. They also both decided to focus on the issues we have surrounding the supply of water for our water cooler. The problems are:

•    we keep running out of water
•    we have no indication of the amount of water left without lifting the cover of the water cooler
•    we have no indication of how long the water supply will last before it runs out.

The components of both teams’ concepts for providing a solution were similar. These included:

•    semi-automated stock management for office water
•    a Raspberry Pi-based system, with no wired connections, fixed to the cooler
•    RFID-based identification of stock changes
•    visual identification of estimated water levels
•    audible indicators of RFID success
•    a web-based UI for stock management.

Within the allotted eight hours, the teams combined to produce and present a working product, utilising Microsoft Azure to host the required APIs and store the relevant data.

The application was built using the Universal Windows Platform (UWP), packaged and deployed to the Raspberry Pi 3 running Windows 10 IoT Core. The web user interface shows the water level, average bottle consumption time, current stock level and estimated time before the stock is depleted.
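The "estimated time before depletion" figure on the UI is simple arithmetic over the other metrics. A minimal sketch of that calculation (the real app was C#/UWP; these names are invented for illustration):

```python
from datetime import timedelta

def estimate_depletion(stock_bottles, avg_bottle_hours, current_level_fraction):
    """Estimate the time until the water stock runs out.

    stock_bottles          -- full bottles remaining in storage
    avg_bottle_hours       -- average hours one bottle lasts
    current_level_fraction -- fraction left in the installed bottle (0.0-1.0)
    """
    bottles_remaining = current_level_fraction + stock_bottles
    return timedelta(hours=bottles_remaining * avg_bottle_hours)

# e.g. half a bottle on the cooler plus 3 in stock, each lasting ~8 hours
print(estimate_depletion(3, 8, 0.5))  # → 1 day, 4:00:00
```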

Each time a new water bottle is installed on the water cooler, the bottle’s RFID tag is scanned. There was even enough time to develop a Power BI report showing stock levels.
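The bookkeeping behind a scan is straightforward: decrement the store count and record the installation time, from which the average consumption rate falls out. A sketch of that logic (again, the shipped version was a UWP app; class and method names here are hypothetical):

```python
from datetime import datetime, timezone

class WaterCoolerStock:
    """Illustrative stock bookkeeping driven by RFID bottle scans."""

    def __init__(self, bottles_in_store):
        self.bottles_in_store = bottles_in_store
        self.install_times = []  # when each bottle was put on the cooler

    def on_bottle_scanned(self, tag_id, now=None):
        """Called when a new bottle's RFID tag is read at the cooler."""
        now = now or datetime.now(timezone.utc)
        self.bottles_in_store -= 1       # one fewer bottle in storage
        self.install_times.append(now)   # record for consumption stats

    def avg_hours_per_bottle(self):
        """Average time between bottle installations, in hours."""
        if len(self.install_times) < 2:
            return None
        span = self.install_times[-1] - self.install_times[0]
        return span.total_seconds() / 3600 / (len(self.install_times) - 1)
```

In the real system this state lived behind the cloud-hosted API rather than on the Pi itself, so the web UI and Power BI report could read it.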

If we had more time, some logical extensions of our system would be:

•    generating emails when stock levels are lower than the lead time for delivery
•    multiple RFID tags for each water bottle so each stock item is identifiable
•    packaging and waterproofing the Raspberry Pi
•    projection of future data using Power BI.

While we’re not about to run off and create a new start-up with our new water cooler app any time soon, we do have a nifty new gizmo in the office. More importantly, our latest hackathon was a great opportunity to explore the nexus between IoT and the cloud.

Web technology expert Steven Dickinson first joined RedBlack in 2002 as a programmer. He went on to take senior development positions at the Food Standards Agency and Capita. Steve rejoined RedBlack in 2014 and, as chief technology officer, heads up the development of our software solutions.

Blog: How we started out on a big data journey with Microsoft Azure

“Data is growing faster than ever before and by the year 2020, about 1.7 megabytes of new information will be created every second for every human being on the planet.” — IDC

RedBlack Software and Microsoft Azure

RedBlack Software CTO Steve Dickinson explains how using Microsoft’s Azure cloud platform led to a new world of discovery for our developers.

Over the course of a couple of posts, I’m going to look at how the RedBlack team implemented solutions for data movements in a serverless cloud environment and embraced developments within machine learning and artificial intelligence. This first post reviews the approach we took to move away from the traditional data warehouse model of a SQL Server, SSIS and SSAS.

Some time ago, RedBlack decided to focus on building feature-rich mobile and desktop applications geared towards a cloud environment. On the question of application architecture, microservices were the obvious approach to take. When it came to our cloud platform, we chose Microsoft Azure.

Using a cloud environment would allow us to scale and deploy individual components, which fitted perfectly with our agile approach to development. Each microservice would have its own data source, whether an Azure SQL database, Azure Blob storage or an in-memory cache.

The fact that each microservice possessed its own individual storage started our true journey with data within Microsoft Azure. There were several questions we needed to answer:

* how to keep data synchronised across different data stores
* how to prepare and transform the data
* how to store the transformed data
* how to publish data to the end user.

We looked at what was available within Microsoft Azure to help. There were some impressive features for data and data movement, such as Azure HDInsight, Azure SQL Data Warehouse and Azure Data Lake.

These are all big data solutions but, at this point in our journey, we just didn’t require these enterprise features and the cost was a little more than we had budgeted for. So, what did we use? We chose Azure Data Factory.

Transforming data

Azure Data Factory allowed us to create the workflows we needed to move, prepare and transform data stored across multiple locations, and paved the way to publish that data either in raw form or through Microsoft’s Power BI.

The architecture and data flow we used with the Azure Data Factory

The Azure Data Factory has multiple pipelines tasked with ingesting data for each of the microservices’ individual data stores. Each pipeline has a collection of activities that can be custom built through Visual Studio or come as standard, out of the box.
Each of these activities is scheduled to perform the required data transformations and move the data to an Azure SQL database that acts as the data store for the data warehouse.
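Conceptually, each scheduled run does the same three things: read rows from a microservice’s own store, reshape them into the warehouse schema, and land them in the shared Azure SQL database. A simplified sketch of that shape (the real pipelines are JSON Data Factory definitions with copy/custom activities; the field names below are invented):

```python
def transform(order_row):
    """Flatten a service-local record into the (hypothetical) warehouse schema."""
    return {
        "company_id": order_row["company"],
        "recorded_at": order_row["ts"],
        "value": order_row["payload"]["amount"],
    }

def run_pipeline(source_rows, warehouse):
    """One scheduled run: ingest, transform, load."""
    for row in source_rows:
        warehouse.append(transform(row))

# stand-in for the Azure SQL warehouse table
warehouse = []
run_pipeline(
    [{"company": "acme", "ts": "2018-05-01", "payload": {"amount": 42}}],
    warehouse,
)
```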

From the data warehouse, Azure Functions are utilised to produce the required datasets on a per-company basis, which are partitioned and stored within Azure Blob storage. The data is then ready for consumption by Power BI, Excel Power Query or any other third party with authority to access the data.
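The per-company partitioning step amounts to grouping warehouse rows by company and writing each group to its own blob. A minimal sketch of that grouping, with an assumed per-company blob path scheme (the actual function code and paths differ):

```python
from collections import defaultdict

def partition_by_company(rows):
    """Split warehouse rows into one dataset per company, keyed by an
    illustrative blob path such as 'datasets/acme.json'."""
    datasets = defaultdict(list)
    for row in rows:
        datasets[row["company_id"]].append(row)
    return {f"datasets/{company}.json": data
            for company, data in datasets.items()}
```

In the real pipeline each entry of the returned mapping would be serialised and uploaded to Azure Blob storage, where Power BI or Power Query reads it.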

Next steps

Our main goal is to reach a stage where all data movements between microservices and data stores are event-driven. The idea is to use either Azure Service Bus or Azure Event Hubs, coupled with Azure Functions. There is also the option of Azure Stream Analytics but, until we reach the stage where we require real-time data, that one is on the back-burner.

Useful Links

Azure Data Factory

Azure Data Factory Local Environment for debugging in Visual Studio

Azure Data Factory Custom Activities

Azure Blob Storage

Azure Functions
