A new class of next-generation, high-resolution aerial sensors is coming online in the next few years. Northrop Grumman just tested United Technologies' SYERS-2 sensor on Global Hawk last week. Whether they are installed on satellites, manned aircraft, or drones, these new sensors will collect vastly more data than their predecessors. That data will need next-generation software applications that take advantage of the very latest in commercial computing technology. Are you using the right computing architecture?
If you are designing local or cloud-based image processing systems and are not planning to use GPUs (graphics cards repurposed as powerful parallel processors), you have major obstacles ahead of you. Traditional processors (CPUs) have a substantial scaling problem with real-time image processing and computer vision algorithms like detection and tracking, and they have already been outclassed by GPUs, which are far better suited to the massive processing requirements of high-definition video and high-resolution imagery.
Resolutions are going up faster than Moore’s Law
From 2009 to 2014, video resolutions of aerial sensors increased from SD (640×480) to 4K (3840×2160). That's 27 times more raw pixels. Over that same period, GPUs increased their processing power by 33x, while CPUs, roughly tracking Moore's Law, improved by only 10.3x.
Wide-area motion imagery (WAMI) sensor resolutions are going even further. Harris's CorvusEye is 115 megapixels (~10,720 × 10,720) and PV Labs' PSi-ViSiON 3000 system is 300 megapixels (~17,320 × 17,320).
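The pixel math above is easy to sanity-check. A quick sketch using the article's own figures:

```python
# Pixel counts for the resolutions quoted above.
sd = 640 * 480            # 307,200 pixels
uhd_4k = 3840 * 2160      # 8,294,400 pixels
print(uhd_4k / sd)        # 27.0 -- "27 times more raw pixels"

# WAMI sensors, approximated as square arrays:
corvuseye = 10_720 ** 2   # ~114.9 megapixels
psi_vision = 17_320 ** 2  # ~300.0 megapixels
print(corvuseye / 1e6, psi_vision / 1e6)
```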
Using the GPU for compute
MotionDSP has been using GPUs to accelerate our image processing and computer vision software for more than 8 years. In 2008, when our super-resolution image processing algorithm hit a wall on multi-core CPUs, we started working with NVIDIA's CUDA SDK, which allows software to use GPUs for general-purpose computing. We immediately saw performance gains of 5-10x over CPUs, and more importantly, GPUs enabled our software to work in real time on live video feeds. That simply wasn't possible with CPUs. Since then, every 18-24 months, NVIDIA has released new GPU cards that are twice as powerful, easily keeping pace with the increase in video and image resolutions.
Airborne, real-time applications = GPU required
If your application requires real-time processing (that is, on-the-fly processing of a live video or high-resolution image stream aboard an aircraft or sensor), then CPUs will not work. Take, for example, performing target detection and tracking on wide-area motion imagery (WAMI) from sensors made by companies like Harris and PV Labs.
Detection and tracking on WAMI data
On a powerful GPU, our software performs live detection and tracking on 8K imagery at 2 frames per second. Switching to CPU processing, that figure drops to 0.146 frames per second, 13.7 times slower than the GPU.
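The speedup quoted here is just the ratio of the two throughput figures, and converting to per-frame latency makes the gap even more vivid:

```python
gpu_fps = 2.0     # live detection/tracking on 8K imagery, single GPU
cpu_fps = 0.146   # same workload on a CPU

print(gpu_fps / cpu_fps)   # ratio of throughputs, ~13.7x
print(1 / gpu_fps)         # seconds per frame on the GPU: 0.5 s
print(1 / cpu_fps)         # seconds per frame on the CPU: ~6.85 s
```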
In video processing, when you run into a wall like this, the traditional approach is to divide up the image into smaller tiles and process them on separate CPUs. This presents two big obstacles:
- It’s slow. Dividing up the image into tiles, processing separately, and then gathering the results and recombining them creates an enormous amount of computational/data overhead that results in unacceptable latency. Processing the full frame on a single GPU has a huge advantage.
- Enormous SWaP (size, weight, and power). Using the previous example, our WAMI software can process an 8K image stream in real time on a single GPU, in a single server. On the CPU, that processing is 13.7x slower, so to achieve real time we would need 14 CPUs. Even using dual-CPU servers, that's still 7 servers, compared to 1.
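To make the two obstacles concrete, here is a minimal sketch of the split/gather steps a CPU tiling approach requires, plus the server arithmetic from the figures above. The helper functions are hypothetical illustrations, not MotionDSP's actual pipeline:

```python
import math

# Tiling a frame for CPU processing means a scatter step, per-tile work,
# and a gather step -- data movement a single-GPU full-frame pass avoids.
def tile(frame, rows, cols, tile_h, tile_w):
    """Scatter: split a row-major frame (list of lists) into tiles."""
    return [
        [row[c:c + tile_w] for row in frame[r:r + tile_h]]
        for r in range(0, rows, tile_h)
        for c in range(0, cols, tile_w)
    ]

def recombine(tiles, rows, cols, tile_h, tile_w):
    """Gather: reassemble processed tiles back into a full frame."""
    out = [[0] * cols for _ in range(rows)]
    tiles_per_row = cols // tile_w
    for i, t in enumerate(tiles):
        r0 = (i // tiles_per_row) * tile_h
        c0 = (i % tiles_per_row) * tile_w
        for dr, trow in enumerate(t):
            for dc, v in enumerate(trow):
                out[r0 + dr][c0 + dc] = v
    return out

# The SWaP arithmetic, using the article's throughput figures:
slowdown = 2.0 / 0.146                # ~13.7x slower on CPU
cpus_needed = math.ceil(slowdown)     # 14 CPUs to reach real time
servers = math.ceil(cpus_needed / 2)  # 7 dual-CPU servers, vs. 1 GPU server
print(cpus_needed, servers)
```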
Even if you don’t require real-time processing, and your use case is batch processing of imagery after the aircraft lands, CPUs still don’t scale. With resolutions and frame rates increasing, if you stay with CPU-based solutions, you will soon need a data center the size of the NSA’s facility in Utah.
New aerial and space sensors are here. The question is: are your software and hardware prepared to process all that data? If you are processing video or imagery, you should be configuring your new workstations, VDI/thin-client solution, or servers with the biggest GPUs you can. Otherwise, you are building an obsolete system.
GPUs are mature, proven, commercial, and incredibly cost-effective for the teraflops of compute they offer. Writing software for GPUs requires a new kind of expertise, but the rewards are well worth the effort.