Pillar 7: Advanced Analytics

In our penultimate blog post on the 7 Industrial IoT pillars, we discuss advanced analytics on the Davra IoT platform. We must first highlight that a stable and accurate analytics platform depends on a data management strategy, so that whatever analytics you run operates on accurate, clean data. Remember the old saying GIGO: if it is only garbage data going into your system, it will be exactly that coming out!

If you would like a refresher on data management in an IIoT platform, please check out our previous webinar here, or the previous blog post here.  

What is Advanced Analytics? 

Before we dive into the use cases for advanced analytics, let us first look at how Gartner defines analytics:

“[Analytics] includes processing of data streams, such as device, enterprise and contextual data, to provide insights into asset state by monitoring use, providing indicators, tracking patterns and optimizing asset use. A variety of techniques, such as rule engines, event stream processing, data visualization and machine learning, may be applied.”

As you can see from the description above, you will need a data management strategy in place before you can operate at AI scale. Many companies rush straight to AI, or Artificial Intelligence, because they feel the pressure to be ahead of the curve, but skipping the groundwork does not pay off. To draw any benefit or decision-making value from models, the data must first represent the business processes you want to track and be warehoused correctly.

Advanced Analytics in The Platform

The Davra platform has all the data management capabilities, but this is the most time-consuming part to configure because of the complexity of the data and the number of possible data streams. The platform handles the real-time data feed tasks, such as policy set-up, data harvesting, data enrichment, adding tags and labels, and handling null values and out-of-range readings, that build up your system.

A large part of a data scientist's job consists of cleaning the data that comes in. On the platform, we can bring all the data into a single format, including merging multiple data sets. The company's data scientists then trawl through the data to ensure it is accurate and complete.
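As a rough sketch of that kind of clean-up, assuming pandas and entirely hypothetical file names, column names and valid ranges:

```python
import pandas as pd

# Hypothetical sensor exports; the file and column names are illustrative only.
readings = pd.read_csv("device_readings.csv", parse_dates=["timestamp"])
metadata = pd.read_csv("device_metadata.csv")

# Bring multiple sets together into a single format.
data = readings.merge(metadata, on="device_id", how="left")

# Drop null readings and discard out-of-range values (assumed valid range).
data = data.dropna(subset=["temperature"])
data = data[data["temperature"].between(-40, 125)]

# Add a tag so downstream analytics can filter by deployment.
data["site"] = "plant_a"
```

On the platform this enrichment and filtering happens as part of the real-time data feed configuration; the script above simply makes the individual steps concrete.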

Advanced Analytics Use Cases 

Remote Patient Monitoring in Healthcare 

This use case most often applies to patients who are living independently, or to the telemedicine area more broadly.

We have the capabilities to monitor patients at home by taking data from the sensors monitoring them. Non-intrusive ambient sensors can be set up in the house for older adults in independent and assisted living. These sensors are portable and easy to set up, providing valuable information to keep patients safe.

Any asset we monitor is easy to set up and uses an unsupervised learning model. This method involves pointing the data at a black box that learns what is normal and recognises when there is an anomaly.

Unsupervised learning is a machine learning technique in which the users do not need to supervise the model. Instead, it allows the model to work on its own to discover patterns and information that were previously undetected. It mainly deals with unlabelled data.

We can monitor when patients visit rooms in their house or how they sleep, which helps us detect anomalies. We then condense the output of the learning model into one single metric: their routine.

A threshold is set in the model; if the routine score is less than 18, the routine is fine. If it goes above that threshold, something is significantly different and a system alert must be flagged. Alongside the analytics, you must consider the user experience (UX) by presenting the results in a way that is readable.
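To make the idea concrete, here is a minimal sketch, not the platform's actual model, of unsupervised anomaly detection over daily routine features using scikit-learn's IsolationForest. The feature names are assumptions, and the 18-point threshold above belongs to the platform's own routine metric, so this sketch relies on the model's built-in decision instead:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Roughly 30 days of daily features (hours slept, bathroom visits, kitchen
# events); in reality these would come from the ambient sensors described above.
baseline_days = np.column_stack([
    rng.normal(7.5, 0.5, 30),   # hours slept
    rng.poisson(5, 30),         # bathroom visits
    rng.poisson(8, 30),         # kitchen events
])

# The unsupervised "black box": learn what a normal day looks like.
model = IsolationForest(contamination=0.05, random_state=0).fit(baseline_days)

# Score a new day; predict() returns -1 for an anomaly and 1 for normal.
today = np.array([[3.0, 14, 1]])   # very little sleep, unusual activity
if model.predict(today)[0] == -1:
    print("Routine deviation detected: raise a system alert")
```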

The model learns what is normal for the person, or it could be a machine, and figures out when behaviour deviates from that. Humans are more random than machines, so it takes about 30 days for the model to become accurate for a person, whereas machines are usually quicker. For humans, this could mean analysing their sleep or the time they spend in the bathroom.

This allows us to develop the patient's or asset's digital signature: the learned routine of that person or asset. Through labelling the data we can try to understand what is going on, for example turning on the lights or opening cabinets; we guess and label the activities and how often they recur. Door sensors tell us when someone enters a room, and sensors on cupboard and kitchen doors, the kettle, the fridge or the laundry room let us track, for example, when the washing machine was turned on. We learn the routines and label incidents of interest, which highlight when there are deviations in the routine.
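A toy sketch of how labelled sensor events could be rolled up into a daily routine signature; the sensor IDs and activity labels are invented purely for illustration:

```python
from collections import Counter

# Hypothetical mapping from raw sensor IDs to labelled activities.
SENSOR_LABELS = {
    "door_kitchen": "kitchen visit",
    "door_cupboard": "cupboard opened",
    "plug_kettle": "kettle used",
    "plug_washer": "washing machine run",
}

def daily_signature(events):
    """Count labelled activities for one day of raw (sensor_id, time) events."""
    return Counter(SENSOR_LABELS.get(sensor, "unknown") for sensor, _time in events)

# One day of raw events from the home.
events = [("door_kitchen", "07:55"), ("plug_kettle", "08:01"),
          ("door_kitchen", "12:30"), ("plug_washer", "18:10")]
print(daily_signature(events))
# Counter({'kitchen visit': 2, 'kettle used': 1, 'washing machine run': 1})
```

Comparing each day's counts against the learned signature is what surfaces the deviations described above.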

Triage Hub 

The triage hub is tied to all of the use cases, in that each of them needs to respond to deviations in a routine. It works like incident response: it pulls all the data sources together and automates the resulting actions.

Descriptive analytics: the process of gathering and interpreting data to describe what has occurred.

Diagnostic analytics: the process of gathering and interpreting different data sets to identify anomalies, detect patterns, and determine relationships.

Predictive analytics: taking the data you receive and determining whether there is going to be a problem, enabling you to get ahead of it before it actually happens.

Prescriptive analytics: a combination of data, mathematical models, and various business rules used to infer actions that influence future desired outcomes.

The functional blocks start with the data sources. These feed into a complex event engine, where we try to figure out whether something is wrong. The triage hub sits on a spectrum from simple "if this, then that" logic to advanced AI, both supervised and unsupervised. Once the complex event engine realises there is a problem, it sends a message to our triage engine, which runs the diagnostic and prescriptive steps: we have a problem, what should we do, and who needs to be told. The delivery mechanism could, for example, be Cisco's instant messaging, but the alerts vary depending on the use case.
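At the simple "if this, then that" end of the spectrum, the flow from data source to complex event engine to triage to delivery could be sketched roughly like this; every name, metric and threshold here is hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Event:
    source: str
    metric: str
    value: float

def complex_event_rule(event: Event) -> bool:
    """'If this, then that' check: a reading beyond its threshold is a problem."""
    thresholds = {"routine_score": 18.0, "vibration_mm_s": 7.1}  # assumed limits
    return event.value > thresholds.get(event.metric, float("inf"))

def triage(event: Event) -> str:
    """Diagnose and prescribe: decide what to tell people."""
    return (f"[ALERT] {event.source}: {event.metric}={event.value}; "
            f"notify the on-call team via the configured messaging channel")

for event in [Event("patient_42", "routine_score", 23.0),
              Event("pump_7", "vibration_mm_s", 3.2)]:
    if complex_event_rule(event):
        print(triage(event))
```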

The Models

These models need training to find out what is normal and what is not. Programmatic logical rules handle the simple cases, for example if a reading holds beyond a threshold for a long time, then something is wrong. The data you are collecting plays an important part too, because richer data enables stronger AI, which is why you need a strong data management strategy. The incoming data is used to train the models, and different models take different amounts of time depending on the use case.

The more often an event occurs, such as going to the washing machine or using the microwave, the easier it is to label the data and feed it to the algorithm. Whereas if something only occurs once a month, it is more difficult to feed to the algorithm and it will take longer to learn that particular routine.

You can also run candidate models side by side, for example five models. Only two may become accurate, so you can then create an ensemble from those two. If they are in agreement, you can be confident that the event is actually occurring.
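A minimal sketch of keeping two accurate candidates and only raising an event when they agree; the models and data below are placeholders, not the platform's own:

```python
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(1)
history = rng.normal(0, 1, size=(200, 3))   # stand-in for historic features
today = rng.normal(4, 1, size=(1, 3))       # a clearly unusual observation

# Two surviving candidate models from an initial pool.
forest = IsolationForest(random_state=0).fit(history)
lof = LocalOutlierFactor(novelty=True).fit(history)

# Only flag the event when both members of the ensemble agree (-1 = anomaly).
votes = [forest.predict(today)[0] == -1, lof.predict(today)[0] == -1]
if all(votes):
    print("Both models agree: treat the event as real")
```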

The triage engine diagnoses the event, and based on different kinds of triggers you can have different responses. As the models get smarter, you can build recommender systems which suggest how to fix something based on how it was previously resolved.

When creating a digital signature to detect anomalies, it is important to get the inputs correct. Create multiple versions of these models and evaluate them against the historic data, looking at precision and recall to check their accuracy.
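Checking a model version against historic data with precision and recall could look like the following; the labels and predictions are made up purely to show the calculation:

```python
from sklearn.metrics import precision_score, recall_score

# Historic incidents (1 = real deviation) versus one model version's predictions.
actual    = [0, 0, 1, 1, 0, 1, 0, 0, 1, 0]
predicted = [0, 1, 1, 1, 0, 0, 0, 0, 1, 0]

print(f"precision: {precision_score(actual, predicted):.2f}")  # how many alerts were real
print(f"recall:    {recall_score(actual, predicted):.2f}")     # how many real incidents were caught
```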

Mine Tailings

Advanced analytics is now moving towards mine and tailings dam monitoring, driven by the deadly mine disasters of the past. Under new safety standards, incident response workflows and regulatory compliance for mine tailings must be much more transparent about the safety of the mine.

In order to gauge when a mine may collapse, or when the dam above a mine may breach, we assessed white papers from previous disasters to analyse and predict the factors that contribute. We then automate a previously manual process using satellite imagery as well as sensor data. We can mark the edges of dam walls that were previously breached. The imagery can be sourced in GeoTIFF format, which provides a lot of information about the mine area.

Flood monitoring is used to find the water table level, alongside weather prediction data, to see whether a flood is likely. Putting all of this data together can save lives and millions, because assets can be safely stored before destruction, or the destruction is prevented completely by controlling dam levels. The operator may, for example, need to take the big cranes out of the mine so they are not destroyed.

High-resolution imagery is used to take a closer look at the terrain, and topography information also shows the vegetation. If there is seepage of toxic chemicals from the mine, we can look at the vegetation: if the mine releases toxic chemicals, the vegetation will die, or it could flourish due to too much water being released. Knowing these unusual indicators helps with the overall monitoring of the mine to ensure it is operating as it should. Flood modelling allows us to look at the flood plain and what the after-effects would be. Villages could be destroyed, but by analysing all of this information and using these predictions we can help mitigate the risk.
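As one illustration of the vegetation angle, a standard vegetation index such as NDVI can be computed from the red and near-infrared bands of a GeoTIFF. The file name, band indices and threshold below are all assumptions, and rasterio is just one common library for reading such imagery:

```python
import numpy as np
import rasterio

# Hypothetical multispectral GeoTIFF of the mine area; band order varies by product.
with rasterio.open("mine_site.tif") as src:
    red = src.read(3).astype("float32")   # assumed red band
    nir = src.read(4).astype("float32")   # assumed near-infrared band

# NDVI = (NIR - red) / (NIR + red): healthy vegetation scores high.
ndvi = (nir - red) / np.clip(nir + red, 1e-6, None)

# Unusually sparse vegetation could hint at chemical seepage, as described above.
sparse = ndvi < 0.2
print(f"{sparse.mean():.1%} of pixels fall below the illustrative NDVI threshold")
```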

It is clear from the above use cases that analytics has enabled incident response prediction that was never possible before. With a clear strategy and data management system in place, what the system can achieve will propel any organisation beyond its competitors. Previously, the only way to control a situation was to simply react to it once the event had taken place. Now, why wait for the startling surprise that halts all operations, when you could simply put the steps in place to predict and remedy the situation before it has even happened? If you would like to learn more about the multitude of use cases on the Davra platform and how they can be applied to your company, contact us today.

Author

Brian McGlynn, Davra, COO
