The goal: apply distributed optical microphone technology to monitor long-length objects such as pipelines and railroads, as well as the perimeters of ground facilities.
We were contacted by an engineer who had developed optical fiber equipment capable of receiving audio information from an optical fiber up to 40 km (ca. 25 mi.) long.
It works as follows. Every 10 milliseconds, the device emits a short coherent laser pulse into the optical fiber. The pulse scatters on defects in the fiber, and part of the scattered light travels back toward the device. The farther along the fiber the scattering point lies, the later its echo returns. As a result, the device captures a profile of scattered signal strength along the entire fiber. The next pulse is emitted 10 milliseconds later. During this interval, acoustic vibrations slightly displace the fiber, much like the sensitive membrane of an ordinary microphone. Because of this displacement, the scattering pattern of the next pulse differs slightly from that of the previous one, and it is exactly this difference that the device captures. The cycle repeats 1,000 times per second.
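The mapping from echo delay to position along the fiber is plain time-of-flight arithmetic. A minimal sketch (the group refractive index here is an assumed typical value for standard fiber, not a device specification):

```python
# Round-trip timing behind the per-pulse trace (refractive index is assumed).
C = 299_792_458.0   # speed of light in vacuum, m/s
N_GROUP = 1.468     # assumed group refractive index of a standard fiber core

def scatter_distance_m(round_trip_s: float) -> float:
    """Position along the fiber where the backscatter originated."""
    return C * round_trip_s / (2.0 * N_GROUP)

def round_trip_s(distance_m: float) -> float:
    """Time for a pulse to reach distance_m and for the echo to return."""
    return 2.0 * N_GROUP * distance_m / C

# The far end of a 40 km fiber answers after roughly 0.4 ms,
# so the full trace fits comfortably inside the 10 ms pulse period.
t_far_end = round_trip_s(40_000)
```

This also shows why a 10 ms pulse period is a comfortable choice: each trace is fully collected long before the next pulse goes out.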
At the output there is a data stream that behaves as if every 10 meters of the optical fiber were a digital microphone capturing signals of up to 1 kHz. The resulting raw data stream amounts to roughly 1 Gbit per second.
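The virtual-microphone picture translates into concrete numbers. A quick back-of-the-envelope (the channel count follows directly from the figures above; how many bits the raw interferometric stream carries per channel sample is not something we specify here):

```python
# Channel count for the virtual-microphone view of the fiber.
FIBER_LEN_M = 40_000        # fiber length from the description above
CHANNEL_SPACING_M = 10      # one virtual "microphone" per 10 m
PULSE_RATE_HZ = 1_000       # one laser pulse every 10 ms

channels = FIBER_LEN_M // CHANNEL_SPACING_M       # 4000 virtual microphones
samples_per_second = channels * PULSE_RATE_HZ     # 4,000,000 channel samples/s
```

Four million channel samples per second already explains why the downstream processing has to be selective about where it spends compute.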
Such data streams contain the sounds made by pedestrians, animals, cars, trains, helicopters passing overhead, and other noise sources. Our task was to build a system that would use these streams to recognize which type of object is making the noise and at what distance along the fiber it is located.
The key difficulty in the physical implementation of this technology is that the data streams are extremely noisy. A human, however, can spot objects of interest when the stream is displayed as a time-marked spectrogram. Our goal was to create an application capable of processing the streams in real time and recognizing source types just as effectively.
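What the operator actually looks at is a short-time Fourier transform of a channel's signal. A minimal NumPy sketch of such a spectrogram (window and hop sizes are illustrative, not the values used in production):

```python
import numpy as np

def stft_spectrogram(x, win=128, hop=64):
    """Magnitude spectrogram via a naive STFT with a Hann window."""
    w = np.hanning(win)
    frames = [np.abs(np.fft.rfft(x[i:i + win] * w))
              for i in range(0, len(x) - win + 1, hop)]
    return np.array(frames)          # shape: (time_frames, win // 2 + 1)

# Synthetic 1-second channel sampled at 1 kHz: a 120 Hz tone buried in noise.
fs = 1000
t = np.arange(fs) / fs
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 120 * t) + 0.5 * rng.standard_normal(fs)

S = stft_spectrogram(x)
peak_hz = S.mean(axis=0).argmax() * fs / 128   # strongest frequency bin
```

Even with the noise at half the tone's amplitude, the tone stands out as a bright horizontal line in `S`, which is exactly the kind of structure a human (and later, our recognizer) picks out.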
1. We used GPUs for parallel processing of the stream signal: CPUs simply do not have enough throughput for it.
2. We divided the work into two groups of tasks: signal detection and signal type recognition. When we tried to run computationally complex algorithms over the whole data stream, even GPUs could only keep up with the first 2-4 km stretch of the fiber. So we trained a Haar cascade specifically tailored to detect the wanted signals. This stage narrows the stream down to a small set of segments that contain a signal of interest.
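The actual stage-1 detector is a trained Haar cascade; the sketch below substitutes a trivial energy gate purely to illustrate the funnel: cheaply shrink thousands of channels down to a handful of candidates before the expensive recognizer runs (all names and thresholds are illustrative):

```python
import numpy as np

def candidate_channels(stream, k=4.0):
    """Stage-1 stand-in: flag channels whose short-term energy rises
    above k times the fiber-wide noise floor.  (The production system
    uses a trained Haar cascade here; this gate only illustrates how
    the stream is narrowed before the heavy recognition stage.)"""
    energy = (stream ** 2).mean(axis=1)     # mean power per channel
    floor = np.median(energy)               # robust noise-floor estimate
    return np.flatnonzero(energy > k * floor)

rng = np.random.default_rng(1)
stream = rng.standard_normal((4000, 1000))  # 4000 channels x 1 s at 1 kHz
stream[1234] += 3 * np.sin(np.linspace(0, 120 * np.pi, 1000))  # "intruder"

hits = candidate_channels(stream)           # a tiny candidate set, not 4000 rows
```

The recognizer then only ever sees `stream[hits]`, which is what makes real-time processing of the full 40 km feasible.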
3. Last but not least: the final and most difficult stage, signal type recognition. Here we used our own product, the Auto Machine Learning System, which can run tens or even hundreds of thousands of experiments aimed at producing classification algorithms. It is unknown beforehand which algorithm classes will prove the most accurate, especially given the limits imposed on the algorithms' computational complexity.
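The internals of our Auto Machine Learning System are proprietary, but the outer loop of any such search can be sketched schematically: sample a candidate algorithm and its hyperparameters, reject candidates over the compute budget, and keep the most accurate survivor. Everything below (the search space, the cost model, the scoring stub) is illustrative, not the product's API:

```python
import random

# Illustrative search space: algorithm classes and their hyperparameters.
SEARCH_SPACE = {
    "svm":   {"C": [0.1, 1.0, 10.0]},
    "trees": {"depth": [3, 5, 9]},
    "cnn":   {"layers": [2, 4, 6], "filters": [8, 16, 32]},
}

def cost(algo, params):
    """Stand-in compute-cost model (arbitrary units)."""
    if algo == "cnn":
        return params["layers"] * params["filters"]
    return {"svm": 2, "trees": 5}[algo]

def score(algo, params):
    """Stand-in for training and cross-validating one candidate."""
    return random.Random(repr((algo, sorted(params.items())))).random()

def search(budget, trials=1000, seed=0):
    rng = random.Random(seed)
    best_score, best_cfg = -1.0, None
    for _ in range(trials):
        algo = rng.choice(sorted(SEARCH_SPACE))
        params = {k: rng.choice(v) for k, v in SEARCH_SPACE[algo].items()}
        if cost(algo, params) > budget:
            continue                 # too expensive for real-time use
        s = score(algo, params)
        if s > best_score:
            best_score, best_cfg = s, (algo, params)
    return best_score, best_cfg

best_score, best_cfg = search(budget=50)
```

The budget check is the essential part: a candidate that cannot run in real time on the hardware is rejected no matter how accurate it is.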
That was a tough challenge for both us and our client. To train the recognition algorithms effectively, we had to collect many terabytes of data and label it with the various object types. Only then were we able to start the experiments that train the recognition algorithms.
The first thousands of experiments showed that multilayer (deep) convolutional neural networks had the greatest potential for our goal.
At the next stage we limited the algorithm classes to neural networks alone and experimented only with their topology.
Deep multilayer neural networks are good at pattern recognition in images, audio, and video streams. In our case, the networks learned to recognize and distinguish patterns in the source signal, whether the source was a walking pedestrian, an operating car or bulldozer, or one of many other options.
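One reason convolutional topologies won is the nature of the data: a footstep's spectral signature looks the same whether it occurs at second 3 or second 30, and a convolution finds a learned pattern wherever it appears. A toy NumPy illustration with a single 1-D filter (the kernel is made up, standing in for a learned one):

```python
import numpy as np

kernel = np.array([1.0, -1.0, 1.0, -1.0])   # made-up "learned" pattern

signal = np.zeros(100)
signal[40:44] = kernel                       # embed the pattern at t = 40

# One convolutional filter = a sliding dot product over the whole signal;
# the response peaks exactly where the pattern sits, wherever that is.
response = np.correlate(signal, kernel, mode="valid")
t_hit = int(response.argmax())               # 40
```

Moving the pattern anywhere else in `signal` moves the peak with it, which is the translation invariance that makes CNNs a natural fit for spectrogram streams.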
Python, TensorFlow, scikit-learn
PHP, CodeIgniter, HTML5/CSS3
Project Manager, Machine Learning Architect, 2 Data Scientists, Android Developer, QA Engineer
We have developed a real-time system that detects objects of interest in the data stream coming from an optical fiber microphone. The accuracy rate is 90-95%, with a low false-alarm rate despite natural noises such as wind and stationary noises such as those produced by permanently running equipment.
We can hear an individual walking along a pipeline or driving a car, detect an attempt to drill a hole into the pipe, and even count the number of wheel sets hammering "ta-dum-ta-dum" in a moving train.
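Counting wheel sets reduces to counting impact peaks in the channel signal. A minimal sketch with made-up thresholds (the real system works on the recognized train signal, not on raw samples like this):

```python
import numpy as np

def count_impacts(x, fs=1000, min_gap_s=0.2, k=4.0):
    """Count 'ta-dum' impacts: samples exceeding k standard deviations,
    merged when closer together than min_gap_s (one count per impact)."""
    above = np.flatnonzero(np.abs(x) > k * x.std())
    if above.size == 0:
        return 0
    return 1 + int((np.diff(above) > min_gap_s * fs).sum())

# Synthetic 5-second channel: 8 impacts, one every 0.5 s, in light noise.
fs = 1000
rng = np.random.default_rng(2)
x = 0.05 * rng.standard_normal(5 * fs)
for i in range(8):
    x[250 + 500 * i : 253 + 500 * i] += 1.0

wheel_sets = count_impacts(x, fs)            # 8
```

The merge window (`min_gap_s`) is what turns a burst of super-threshold samples from one impact into a single count rather than several.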