Davra Storms '19 MQ
The Internet of Things and artificial intelligence shouldn’t be lumped together just because they both have cool, trending acronyms. Even their tendency to share technological frameworks, coding conventions, networks and hardware might be insufficient motivation for blending the two domains — unless you have a faithful guide to help you get the job done correctly.
As any wise business veteran is painfully aware, systems that don’t mesh together precisely cause lasting problems. Although modern tools make it extremely easy to weave artificial intelligence into your IoT tapestry, you aren’t guaranteed to produce a stunning masterpiece as a result.
If it were up to Hollywood or the general populace, then the threats of lousy AI-IoT combinations would be limited to killer robots. While it’s true that some systems, such as healthcare facilities with linked medical devices or lots of personally identifiable patient data, are typically more risk-prone, this isn’t always the case. In the business arena, you’ll often face something far more insidious — the vaguely defined perils of mediocrity and underperformance.
Intelligent systems that leverage machine learning, genetic algorithms and other principles can accomplish quite a lot. For instance, you might use these tools to filter out data from your ever-expanding ecosystem of connected things. Or, your enterprise could build a system that makes important operating decisions, such as when to adjust network service availability or alert plant managers to production line irregularities, to make running a business behemoth less of a grind.
The downsides of having such formidable potential at your beck and call include the fact that you can also make a whole slew of poor choices. For instance, if you were filtering a data stream with a badly chosen algorithm, then you might end up missing insights without ever knowing it — all because the system was merely doing the job that you assigned it to perform. What’s more, these kinds of errors are often notoriously hard to spot. Since they aren’t quite as noteworthy as events like having your network hacked, they tend to take a back seat to other threats, which may exacerbate their adverse outcomes.
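To make the "silent miss" concrete, here's a minimal sketch with invented sensor readings. A hard cutoff that was set too high quietly discards a genuine anomaly, while a filter that judges readings against the stream's own statistics surfaces it. The data and both filters are hypothetical illustrations, not a recommended production design.

```python
# Hypothetical sensor stream: 35.0 is a genuine anomaly among ~20.0 readings.
readings = [20.1, 20.3, 19.8, 35.0, 20.2, 20.0]

def threshold_filter(stream, cutoff):
    """Keep only readings above an absolute cutoff."""
    return [r for r in stream if r > cutoff]

def zscore_filter(stream, limit=2.0):
    """Keep readings that deviate sharply from the stream's own mean."""
    mean = sum(stream) / len(stream)
    std = (sum((r - mean) ** 2 for r in stream) / len(stream)) ** 0.5
    return [r for r in stream if abs(r - mean) > limit * std]

print(threshold_filter(readings, cutoff=40))  # [] -- anomaly lost, silently
print(zscore_filter(readings))                # [35.0] -- anomaly surfaced
```

The badly tuned cutoff produces no error and no warning; the pipeline simply "works" while discarding the one reading that mattered.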
This wouldn’t be such a challenge if people only used smart computing in limited applications. That’s unlikely to happen anytime soon, however, and some of the blame for the confusion might lie in the versatility of IoT-AI applications. For instance, Tesla’s self-driving cars take connectivity to the next level by sharing the knowledge acquired by one vehicle with the entire fleet. Companies like Intel have even published tutorials that teach people how to build basic facial recognition systems for facility access control and to identify objects and shoppers in retail settings for inventory tracking. Such diversity of use cases is by no means atypical.
The reality is that AI-IoT systems are defined by the unique problems they aim to solve instead of being restricted to specific hardware or 100-percent predictable outcomes. Because the capacities of artificially intelligent systems typically reflect the underlying hardware they run on and the suitability of their algorithms, performance can vary widely. Failing to match your IoT platform, AI toolset, algorithms and other components properly might result in a system that performs far less intelligently than you’d hoped it would.
Luckily, there’s a way around all of this, and no, it’s not to simply wait for the singularity to arrive. Here are some tips for achieving peak performance:
Reaching for the nearest handy algorithm in your arsenal isn’t always the fastest or most accurate way to solve a problem. For instance, while neural networks may be great at image classification, something like a decision tree or Bayesian network might be better at tracing variables through probabilistic relationships. Also, recall that for every potential problem-solving solution, there are scores of nuanced variants, such as those enumerated in the Asimov Institute’s well-traveled neural network zoo.
The best answer to this challenge might be simply broadening your options. For instance, you might run a digital twin simulation alongside your application, giving you the ideal platform to test alternatives.
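Broadening your options can be as simple as scoring a few candidate approaches on held-out data before committing to one. The sketch below is a toy illustration: the temperature readings, labels and candidate rules are all invented, and real model selection would use proper cross-validation rather than a single holdout split.

```python
# Toy task: flag temperature readings that indicate an overheating machine.
# (x, label) pairs -- all figures are hypothetical.
train = [(18.0, 0), (19.5, 0), (20.1, 0), (27.0, 1), (28.4, 1), (30.2, 1)]
holdout = [(19.0, 0), (21.0, 0), (26.5, 1), (29.0, 1)]

def learned_midpoint(data):
    """Learn a cutoff halfway between the two class means."""
    lo = [x for x, y in data if y == 0]
    hi = [x for x, y in data if y == 1]
    cut = (sum(lo) / len(lo) + sum(hi) / len(hi)) / 2
    return lambda x: 1 if x > cut else 0

# Three candidate "models": two guessed cutoffs and one learned from data.
candidates = {
    "fixed_cutoff_25": lambda x: 1 if x > 25 else 0,
    "fixed_cutoff_28": lambda x: 1 if x > 28 else 0,
    "learned_midpoint": learned_midpoint(train),
}

def accuracy(model, data):
    return sum(model(x) == y for x, y in data) / len(data)

scores = {name: accuracy(m, holdout) for name, m in candidates.items()}
best = max(scores, key=scores.get)
print(best, scores)
```

The point isn’t the specific rules; it’s that evaluating alternatives side by side — including in a digital twin running alongside the live system — beats trusting the first algorithm that comes to hand.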
Many algorithms account for statistical and other errors, but it’s essential to know and respect their limitations. Certain solutions and data processing techniques may fail catastrophically without proper inputs, so do your research before you deploy an app that your enterprise has to rely on.
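As a tiny illustration of respecting an algorithm’s limits (with hypothetical data): z-score normalization divides by the stream’s standard deviation, so a perfectly flat stream — common when a sensor sticks — makes the computation undefined unless the implementation guards for it.

```python
def zscores(stream):
    """Standardize a stream of readings to zero mean and unit variance."""
    mean = sum(stream) / len(stream)
    std = (sum((x - mean) ** 2 for x in stream) / len(stream)) ** 0.5
    if std == 0:
        # Degenerate input: every reading is identical. Fail loudly and
        # predictably instead of crashing mid-pipeline with a division error.
        raise ValueError("constant stream: z-scores are undefined")
    return [(x - mean) / std for x in stream]

print(zscores([1.0, 2.0, 3.0]))  # roughly [-1.22, 0.0, 1.22]
# zscores([5.0, 5.0, 5.0])       # raises ValueError, by design
```

Knowing which inputs break a technique — and deciding in advance how the system should behave when they arrive — is the research the paragraph above calls for.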
One common factor shared by artificially intelligent and distributed computing systems is that their results depend on their layouts. Such topological dependence is easy to appreciate with tools like neural networks, but it’s not always as obvious with the Industrial Internet of Things.
In this context, smart topology management might mean taking a more dynamic approach to your network and hardware resources. For example, imagine you’re running an algorithm that collects data from dozens of connected sensors that each produce mountains of output. Although the algorithm helps you get to the most critical parts, the traffic takes a toll on your business systems.
Your IoT platform should be imbued with the power to handle functions such as monitoring network traffic and making service quality adjustments in response to demand — not just have lots of extra bandwidth waiting in reserve. In this case, you might find answers by expanding your use of artificial intelligence to explore typical traffic patterns and spot bottlenecks. From there, it could be easier to decide which resources to reallocate permanently or implement proactive management software solutions that limit overhead.
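One way to sketch the traffic-pattern idea above: compare each interval’s traffic against a rolling baseline of recent intervals and flag sustained spikes as candidate bottlenecks. The sample counts, window size and threshold factor below are all invented for illustration; a real deployment would tune them against observed traffic.

```python
from collections import deque

def flag_bottlenecks(samples, window=4, factor=2.0):
    """Return indexes of intervals whose traffic exceeds `factor` times
    the rolling mean of the previous `window` intervals."""
    recent = deque(maxlen=window)
    flagged = []
    for i, mb in enumerate(samples):
        if len(recent) == window and mb > factor * (sum(recent) / window):
            flagged.append(i)
        recent.append(mb)
    return flagged

# Hypothetical per-interval traffic in MB: a spike starts at index 5.
traffic_mb = [10, 12, 11, 10, 11, 45, 50, 12, 11]
print(flag_bottlenecks(traffic_mb))  # [5, 6]
```

A rolling baseline adapts to each link’s normal load, so the same detector works across sensors with very different steady-state traffic — which is exactly why learned traffic patterns beat static bandwidth reserves.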
Many business apps tend to abstract away or simplify information so that humans can digest the contents more readily. Even though this big-picture approach has its merits, it can make it harder to troubleshoot some artificial intelligence problems.
When developing apps, steer clear of the typical black-box approach. Yes, it’s great not to have to worry about what goes on behind the scenes, but you also need a certain degree of visibility to make well-informed modifications. Before minifying, compressing or otherwise lightening your codebase, be sure that you’ve smoothed out the burrs, or adopt a build process that keeps source maps readily at hand for troubleshooting.
The chronic hurdles of implementing AI-IoT systems may seem hugely problematic, but luckily, you don’t have to be a trailblazer to ferret out workable solutions. Picking superior IoT platforms that support artificial intelligence allows you to scaffold powerful apps without having to independently confirm that each of the framework’s elements can support the weight.
The IoT and AI seem to be meant for each other, so why not help them work in tandem? Chat with a Davra data scientist to ensure your complex applications exhibit the smart behaviors you expect of them — instead of just following bad orders blindly.