{"id":2579,"date":"2020-12-07T14:26:41","date_gmt":"2020-12-07T14:26:41","guid":{"rendered":"https:\/\/davra.com\/?p=2579"},"modified":"2020-12-15T16:33:36","modified_gmt":"2020-12-15T16:33:36","slug":"pillar-7-advanced-analytics-blog","status":"publish","type":"post","link":"https:\/\/davra.com\/pillar-7-advanced-analytics-blog\/","title":{"rendered":"Pillar 7: Advanced Analytics"},"content":{"rendered":"
In our penultimate blog post on the 7 Industrial IoT pillars, we are discussing advanced analytics on the Davra IoT platform. We must first highlight that in order to have a stable and accurate analytics platform, you need a data management strategy, so that whatever analytics you run works on accurate and clean data. Remember the old saying GIGO (garbage in, garbage out): if it's only garbage data going into your system, then it'll be exactly that coming out!

If you would like a refresher on data management in an IIoT platform, please check out our previous webinar here, or the previous blog post here.

What is Advanced Analytics?

Before we dive into the use cases around advanced analytics, let's first look at what analytics is according to Gartner:

"[Analytics] includes processing of data streams, such as device, enterprise and contextual data, to provide insights into asset state by monitoring use, providing indicators, tracking patterns and optimizing asset use. A variety of techniques, such as rule engines, event stream processing, data visualization and machine learning, may be applied."

As you can see from that description, you need a data management strategy in place before you can get to AI at scale. A lot of companies rush straight to AI, or artificial intelligence, because they feel the pressure to be ahead of the curve, but that is the wrong place to start. In order to draw any benefit or decision-making value from models, the data must first represent the business processes you want to track and be warehoused correctly.

Advanced Analytics in the Platform

Our Davra platform has all of these data management capabilities, but data management is the most time-consuming part to configure, due to the complexity of the data and all of the possible data streams. We handle the real-time data feed tasks, such as policy set-up, data harvesting, data enrichment, adding tags and labels, and dealing with null values and out-of-range readings, to build up your system.

A large part of a data scientist's job consists of cleaning the data that comes in. On the platform, we can bring all the data into a single format, including bringing multiple data sets together. The company's data scientists then trawl through the data to ensure it is accurate and complete.

Advanced Analytics Use Cases

Remote Patient Monitoring in Healthcare

This use case is most often applied to patients who are living independently, or in the telemedicine area.

We have the capability to monitor patients at home by taking data from the sensors monitoring them. Non-intrusive ambient sensors can be set up in the house for older adults who are living independently or in assisted living. These sensors are portable and easy to set up, providing valuable information to keep the patients safe.

Any asset we monitor is easy to set up and uses an unsupervised learning model. This method involves pointing the data at a black box that learns what is normal and recognises when there is an anomaly.

Unsupervised learning is a machine learning technique in which users do not need to supervise the model. Instead, the model works on its own to discover patterns and information that were previously undetected. It mainly deals with unlabelled data.

We can monitor when patients visit rooms in their house or how they sleep, and detect anomalies from those patterns.
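As a rough illustration of that idea, the sketch below trains an off-the-shelf anomaly detector on unlabelled daily activity summaries and scores a new day against them. It is only a minimal example: the feature names and figures are invented, and it is not the Davra platform's actual implementation.

```python
# Minimal sketch: unsupervised anomaly detection over daily activity summaries.
# Feature names and values are invented; a real deployment would derive them
# from the ambient sensor streams described above.
import numpy as np
from sklearn.ensemble import IsolationForest

# One row per day: [hours slept, kitchen visits, bathroom visits, front-door openings]
normal_days = np.array([
    [7.5, 6, 5, 2],
    [8.0, 5, 4, 1],
    [7.0, 7, 5, 2],
    [7.8, 6, 6, 3],
    [7.2, 5, 5, 2],
])

# Fit on unlabelled history: the model learns what "normal" looks like on its own.
model = IsolationForest(contamination=0.05, random_state=0)
model.fit(normal_days)

# Score a new day; a prediction of -1 means the day deviates from the learned pattern.
today = np.array([[3.0, 1, 9, 0]])  # little sleep, many bathroom visits
if model.predict(today)[0] == -1:
    print("Routine anomaly detected - flag for review")
else:
    print("Routine looks normal")
```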
We then condense what the model learns into one single metric: the person's routine. A threshold is set in the model; if the score is less than 18, the routine is fine. If it is above that threshold, then something is significantly different and a system alert must be flagged. Alongside the analytics, you must also consider the user experience (UX), taking the analytics and presenting them in a way that is readable.

The aim is to learn what is normal for the person (or it could be a machine) and figure out when behaviour deviates from that. In this case, humans are more random than machines: it takes about 30 days for the model to become accurate for a person, whereas machines are usually quicker. For humans, this could mean analysing their sleep or their time in the bathroom.

This allows us to develop the patient's or asset's digital signature: we learn the routine of the person or asset. Through labelling the data we can try to understand what is going on; for example, turning on the lights or opening cabinets, guessing and labelling the activities and how often they recur. We can use door sensors to see when someone enters a room, along with sensors on kitchen doors and appliances: the kettle, the fridge, the laundry room; you can track when they turned on the washing machine. We learn the routines and label incidents of interest, which highlights when there are deviations in the routine.

This ties into all of the use cases, in that each one needs a response to deviations in the routine. It is like an incident response: pull all the data sources together and automate the actions.

Descriptive analytics: the process of gathering and interpreting data to describe what has occurred.

Diagnostic analytics: the process of gathering and interpreting different data sets to identify anomalies, detect patterns and determine relationships.

Predictive analytics: taking the data you receive and understanding whether there is going to be a problem, which enables you to get ahead before something actually happens.

Prescriptive analytics: a combination of data, mathematical models and various business rules used to infer actions that will influence future desired outcomes.

Triage Hub

The functional blocks are the data sources. They feed into a complex event engine, where we try to figure out whether something is wrong. The triage hub sits on a spectrum from "if this, then that" logic to advanced AI, both supervised and unsupervised. Once the complex event engine realises there is a problem, it sends a message to our triage engine, with its diagnostic and prescriptive applications: we have a problem, what should we do, and who should we communicate it to. The delivery mechanism could be Cisco IM, for example, but the alerts vary depending on the use case.

The Models

These models need training to find out what is normal and what is not. Programmatic logical rules are also used: if a threshold is held for a long time, then something is wrong. The data you are collecting plays an important part too, because better data enables stronger AI, which is why you need a strong data management strategy. The data coming in is used to train the models, and different models take different amounts of time depending on the use case.
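To make the rule-based end of that spectrum concrete, here is a hedged sketch of an "if this, then that" check that only alerts when a reading stays above its threshold for a sustained period. The metric, threshold and duration are invented for illustration; this is not the platform's actual rule engine.

```python
# Hypothetical "if this, then that" rule: alert only when a metric stays above
# its threshold for a sustained period, rather than on a single spike.
from datetime import datetime, timedelta

THRESHOLD = 18.0                      # illustrative limit
MAX_DURATION = timedelta(minutes=30)  # how long a breach is tolerated

_breach_started = None

def check_reading(value: float, timestamp: datetime) -> bool:
    """Return True once the threshold has been exceeded for longer than MAX_DURATION."""
    global _breach_started
    if value <= THRESHOLD:
        _breach_started = None        # back to normal, reset the timer
        return False
    if _breach_started is None:
        _breach_started = timestamp   # first reading above the threshold
    return timestamp - _breach_started >= MAX_DURATION

# Example: feed in readings as they arrive from the complex event engine.
start = datetime(2020, 12, 7, 9, 0)
for minutes, value in [(0, 19.2), (10, 20.1), (20, 22.4), (35, 21.0)]:
    if check_reading(value, start + timedelta(minutes=minutes)):
        print("Threshold held too long - send an event to the triage engine")
```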
The more often an event occurs, such as using the washing machine or the microwave, the easier it is to label the data and feed it to the algorithm. If something only occurs once a month, it is more difficult to feed to the algorithm and it will take longer to learn that particular routine.

You can also run candidate models side by side, for example five models. Perhaps only two become accurate, so you can then create an ensemble from those two. If they are in agreement, you can be far more confident that the event really is occurring.

The triage engine diagnoses the event, and based on different kinds of triggers you can have different responses. As the models get smarter, you can build recommender systems, which recommend how to fix something based on how it was previously resolved.

When creating a digital signature to pick up anomalies, it is important to get the inputs correct. Create multiple versions of these models, run them against the historic data, and look at precision and recall to check their accuracy.
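The sketch below shows one way that candidate vetting and agreement could look in practice. The labels, predictions and cut-off values are invented, and scikit-learn's precision and recall metrics are used purely for illustration.

```python
# Sketch: vet candidate models against historic data using precision and recall,
# then require the surviving models to agree before confirming an event.
# All labels, predictions and cut-offs below are invented for illustration.
from sklearn.metrics import precision_score, recall_score

historic_labels = [1, 0, 1, 1, 0, 0, 1, 0]        # what actually happened
candidate_predictions = {
    "model_a": [1, 0, 1, 1, 0, 0, 1, 0],
    "model_b": [1, 0, 1, 0, 0, 1, 1, 0],
    "model_c": [0, 1, 0, 1, 1, 0, 0, 1],          # clearly inaccurate
}

# Keep only the candidates whose precision and recall clear the bar.
ensemble = []
for name, preds in candidate_predictions.items():
    p = precision_score(historic_labels, preds)
    r = recall_score(historic_labels, preds)
    print(f"{name}: precision={p:.2f} recall={r:.2f}")
    if p >= 0.7 and r >= 0.7:
        ensemble.append(name)

# A new event is only confirmed when every model left in the ensemble agrees.
new_event_votes = {"model_a": 1, "model_b": 1, "model_c": 0}
agreed = all(new_event_votes[name] == 1 for name in ensemble)
print("Event confirmed" if agreed else "No confident detection")
```

Requiring the surviving models to agree trades some sensitivity for fewer false alarms, which matters when every confirmed event triggers a human response.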
Mine Tailings

Advanced analytics is now also moving towards mine and dam monitoring, because of the deadly mine and tailings dam failures of the past. Under new safety standards, incident response workflows and regulatory compliance for mine tailings must be much more transparent about the safety of the mine.

In order to gauge when a mine may collapse, or when the dam above a mine may breach, we assessed white papers from previous disasters to analyse the factors that contribute and to predict failures. We then automate a previously manual process using satellite imagery as well as sensor data. We can mark the edges of dam walls that were previously breached, and the imagery can be sourced in GeoTIFF format, which provides a lot of information about the mine area.

Flood monitoring is used to find the water table level, alongside weather prediction data, to see whether a flood is likely. Putting all of this data together can save lives and millions, because assets can be moved to safety before destruction, or the destruction can be prevented completely by controlling dam levels. The big cranes, for example, may need to be taken out of the mine so they are not destroyed.

High-resolution imagery is used to take a closer look at the terrain, and topography information also shows the vegetation. If there is seepage of toxic chemicals from the mine, we can look at the vegetation: if the mine releases toxic chemicals, the vegetation will die, or it could flourish because too much water is being released. Knowing about these unusual factors helps with the overall monitoring of the mine, ensuring it is operating as it should. Flood modelling allows us to look at the flood plain and what the after-effects would be. Villages could be destroyed, but by analysing all of this information and using these predictions we can help mitigate the risk.

It is clear from the above use cases that analytics has enabled incident response prediction that was never possible before. With a clear strategy and data management system in place, what the system can do will propel any organisation beyond its competitors. Previously, the only way to control a situation was to react to it once the event had taken place. Now, why wait for the startling surprise that halts all operations, when you could put the steps in place to predict and remedy the situation before it has even happened?

If you would like to learn more about the multitude of use cases on the Davra platform and how they can be applied to your company, contact us today.

Author

Brian McGlynn, Davra, COO