Universal Sensor Fusion for Intelligent Surveillance

In computer vision, the choice of input features strongly impacts the performance and accuracy of detection, identification and tracking algorithms. For challenging tasks, such as detecting pedestrians in low-visibility conditions (smoke, fog, rain, etc.), it is not sufficient to rely solely on low-resolution visual information.

For that reason, imec researchers have developed a universal solution for fusing images captured by cameras that cover different wavelength bands and different modalities (e.g. 3D, distance). By combining the data collected by all these sensors, objects of interest remain clearly visible even in adverse atmospheric conditions, which improves the performance of the computer vision algorithms.
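As a simple illustration of the general idea (not imec's actual pipeline, which is not detailed here), the sketch below blends a registered visible-light frame with a thermal frame, so that a pedestrian who is barely visible in the visible band still stands out in the fused image. The equal weighting and the OpenCV-based implementation are assumptions made purely for illustration.

```python
# Minimal sketch of pixel-level fusion of two registered camera frames.
# Assumption: the visible and thermal frames are already spatially aligned
# and share the same resolution; the 50/50 weighting is illustrative only.
import cv2
import numpy as np


def fuse_frames(visible_bgr: np.ndarray, thermal_gray: np.ndarray,
                weight_visible: float = 0.5) -> np.ndarray:
    """Blend a visible-light frame with a thermal frame (both uint8)."""
    # Replicate the single thermal channel so both inputs have three channels.
    thermal_bgr = cv2.cvtColor(thermal_gray, cv2.COLOR_GRAY2BGR)
    # A weighted sum keeps thermally visible objects (e.g. pedestrians in fog)
    # present in the fused image even when the visible band carries little signal.
    return cv2.addWeighted(visible_bgr, weight_visible,
                           thermal_bgr, 1.0 - weight_visible, 0)
```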

Imec’s solution is flexible and universal since any type of camera can be added to the setup. Our prototype generates aligned video streams with equal resolutions and frame rates, corresponding to the highest spatial resolution and frame rate in the setup. 
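To make the resampling step concrete, the sketch below shows one possible way to bring two hypothetical camera streams to a common resolution and frame rate: lower-resolution frames are spatially upscaled, and the slower stream is temporally upsampled by picking the nearest frame in time. The frame lists, rates and interpolation choices are assumptions for illustration, not the prototype's actual implementation.

```python
# Sketch: resample two camera streams (lists of frames plus their frame rates)
# to the highest spatial resolution and frame rate in the setup.
import cv2
import numpy as np


def align_streams(frames_a, fps_a, frames_b, fps_b):
    """Return two streams with equal resolution and frame rate."""
    # Target resolution and frame rate: the maximum present in the setup.
    h = max(frames_a[0].shape[0], frames_b[0].shape[0])
    w = max(frames_a[0].shape[1], frames_b[0].shape[1])
    fps = max(fps_a, fps_b)
    duration = min(len(frames_a) / fps_a, len(frames_b) / fps_b)
    n_out = int(duration * fps)

    def resample(frames, fps_in):
        out = []
        for i in range(n_out):
            # Pick the source frame closest in time (nearest-frame upsampling).
            src = min(int(round(i / fps * fps_in)), len(frames) - 1)
            out.append(cv2.resize(frames[src], (w, h),
                                  interpolation=cv2.INTER_CUBIC))
        return out

    return resample(frames_a, fps_a), resample(frames_b, fps_b)
```

In practice, alignment would also require spatial registration between the cameras (e.g. via a calibrated homography), which is omitted here for brevity.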

Copyright imec 2017