Hearing the Big Data of Sound

Sounds create their own big data - hearing it may create an advantage

Can you hear the big data of sound coming to help your business? Image: Dennis Owusu-ansah | Dreamstime.com

What’s often the first indicator that something’s gone wrong? A change in what you hear. A sound. Think about this in your plant. Sound and vibration are powerful indicators of a change in condition. Wouldn’t it be great if you could incorporate that into your predictive and prescriptive maintenance? Or into your service offering for your product? Or into the product itself for self-monitoring? Now, you can.

Several companies now offer some form of machine sound analytics. I learned about this from Jags Kandasamy, Chief Product Officer at OtoSense, a startup that turns machine sounds into actionable meaning. Apparently others agree: Fortune 100 and Fortune 500 companies from all over the world are deploying it now, for a wide range of applications. Jags told me, “We are the artificial intelligence for sound. We map the sound a human hears how we hear it in our heads – in the system.” The results are spatters like the one shown.

Spatters - how sound looks

Process experts and operators can work with the big data of sound through graphical spatters. Anomalies often show clearly in their spatial relationships. Image courtesy of OtoSense
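OtoSense does not publish how its spatters are computed, but the underlying idea of turning sound into a picture an expert can scan for anomalies is well established. Here is a minimal sketch using a plain spectrogram as a stand-in for the proprietary visualization; the "machine hum with a brief fault burst" signal is entirely synthetic and invented for illustration.

```python
import numpy as np
from scipy import signal
import matplotlib.pyplot as plt

# Synthetic stand-in for a machine recording: a steady 120 Hz hum
# plus a brief high-frequency "fault" burst halfway through.
fs = 8000                      # sample rate in Hz (assumed)
t = np.arange(0, 5.0, 1 / fs)  # 5 seconds of audio
hum = 0.5 * np.sin(2 * np.pi * 120 * t)
burst = np.zeros_like(t)
mask = (t > 2.4) & (t < 2.6)
burst[mask] = 0.8 * np.sin(2 * np.pi * 1800 * t[mask])
x = hum + burst + 0.02 * np.random.randn(len(t))

# Time-frequency picture: events that are hard to spot in the raw
# waveform often stand out as distinct shapes here.
f, tt, Sxx = signal.spectrogram(x, fs=fs, nperseg=512)
plt.pcolormesh(tt, f, 10 * np.log10(Sxx + 1e-12), shading="gouraud")
plt.xlabel("Time [s]")
plt.ylabel("Frequency [Hz]")
plt.title("Spectrogram of a synthetic machine sound")
plt.show()
```

In this toy example the fault burst appears as a bright patch around 1.8 kHz at roughly 2.5 seconds, even though it is nearly invisible in the raw waveform.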

Robots and other automation have long had vision, and robots have tactile sensors as well – adding sound could be a revolution. Microphones and accelerometers are not new to the plant, but making sense of the sounds and vibrations they capture has been a largely manual process, and automating it is a very specialized field. As with any sensor and the software that supports it in plant operations, sound processing and analysis must be fast, reliable, and relatively low cost to really resonate in the market.

Until now, speed has often been a problem for industrial and monitoring applications. Capturing the signals is fine, but processing them to separate – forgive the pun – signal from noise has been challenging. Big data to the rescue, right?

Well, not so fast. Sending information to the cloud for big data analysis typically slows the process too much for it to work effectively in a production operation. OtoSense, however, has figured out how to keep most of the work at the edge, so the analysis stays speedy and local.
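OtoSense's public materials do not detail its architecture, but the edge pattern described here is easy to sketch: features are computed and scored on the device, and only the clips that need expert attention ever leave the plant. Everything below – the feature choices, thresholds, and function names – is a hypothetical illustration, not OtoSense's method.

```python
import numpy as np

def frame_features(frame: np.ndarray, fs: int) -> np.ndarray:
    """Cheap per-frame features: RMS level and spectral centroid."""
    spectrum = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), d=1 / fs)
    rms = np.sqrt(np.mean(frame ** 2))
    centroid = np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-12)
    return np.array([rms, centroid])

def flag_anomalies(stream, fs, baseline_mean, baseline_std, threshold=4.0):
    """Score frames locally and yield only the ones that look unusual.

    baseline_mean / baseline_std stand in for a model trained on
    expert-labelled data; anything flagged here would be queued for
    review rather than streamed wholesale to the cloud.
    """
    for frame in stream:                      # e.g. one-second chunks from a microphone
        feats = frame_features(frame, fs)
        z = np.abs(feats - baseline_mean) / (baseline_std + 1e-12)
        if np.any(z > threshold):
            yield frame, feats

# Example: a quiet baseline with one loud frame slipped in.
fs = 8000
quiet = [0.01 * np.random.randn(fs) for _ in range(10)]
loud = [0.5 * np.random.randn(fs)]
baseline = np.array([frame_features(f, fs) for f in quiet])
flagged = list(flag_anomalies(quiet + loud, fs,
                              baseline.mean(axis=0), baseline.std(axis=0)))
print(len(flagged))  # typically 1: only the loud frame stands out
```

The point of the pattern is bandwidth and latency: the scoring loop runs locally in real time, and only the rare flagged frames incur a round trip.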

Only the expert interactions that train the system go to the cloud and back. During setup and expansion of the system, a subject matter expert categorizes sounds and their meanings (regular operating sound, startup, shutdown, a particular situation, an abnormal or problematic sound, and possibly many more specific states).
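How that training step works inside OtoSense is not described in the article, but the general supervised-learning loop it implies can be sketched briefly: expert-labelled clips are used to fit a classifier in the cloud, and the fitted model is pushed back to the edge. The feature vectors, the random forest choice, and the numbers below are all synthetic assumptions; only the category names come from the article.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical expert-labelled examples: each row is a feature vector
# extracted from a short clip, each label is the category the subject
# matter expert assigned during setup.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))               # 200 clips, 8 features each (synthetic)
labels = ["normal", "startup", "shutdown", "abnormal"]
y = rng.choice(labels, size=200)

# Train in the cloud on the expert's labels...
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# ...then ship the fitted model back to the edge, where new clips are
# classified locally without a round trip.
new_clip_features = rng.normal(size=(1, 8))
print(clf.predict(new_clip_features))       # e.g. ['normal']
```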

OtoSense is not the only company working on the big data of sound. Several companies offer it for specific types of machinery, such as rotating machines (Augury, 3D Signals), oil and gas leaks (Sensoleak), or wind turbines (APL’s AureSSound). Only a few are attempting to apply this machine learning approach to sound and vibration data from any machine: Neuron SoundWare and Reality Analytics Inc. (reality.ai) are in that category with OtoSense.

Reality Analytics targets R&D engineers, and has received its first round of funding as well as winning the 2017 Innovator of Things Award from Project Kairos; that focus keeps it squarely in the technical sphere. Neuron SoundWare has also won an award, the Napad Roku for best Czech or Slovak startup in 2016. It does not list customers on its website, though that may simply be a matter of NDAs. OtoSense lists 20 “user” companies on its website and several awards, including Red Herring, Global Mobile, GSMA Mobile World Congress, and Silicon Valley Forum.

The competition in this space is exciting and important. These companies will help move each other along. Can you hear the sound of yet another improvement in machine monitoring and analytics knocking on your plant door?
