Ikena ISR. Know now.
Inspecting and identifying damage in underwater pipes is time-consuming and expensive. Pipeline services companies, hired by oil companies, deploy underwater cameras to scan miles of pipe looking for damage. The inspecting company needs to document the damage and its precise location so that it can direct emergency repair teams to the right spot. The telemetry data from the underwater cameras is not designed for pinpoint accuracy, however, and location information can be off by several meters, which translates into significantly more search time for the repair teams. Accuracy can be greatly improved by relating the location of the damage to the nearest joint or fitting on the pipe, but the cameras pass so close to the pipe that their field of view is too narrow to capture those visual reference points.
Solution: Ikena ISR creates a seamless “mosaic” image where every frame of video is stitched with the previous frames to create an image of the entire pipe, such as the one you see here. With this image product, precise measurements can be derived from the position of the damaged area relative to the nearest joints and fittings.
A far clearer and more accurate image of the damage is created, and its location is pinpointed to within inches. The emergency repair teams can then be sent to a precise location, and the repairs can be completed more quickly, saving time and thousands of dollars.
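Ikena ISR's stitching algorithms are proprietary, but the core idea of a mosaic can be illustrated with a minimal NumPy sketch. This example assumes purely translational camera motion (real mosaicking handles rotation and perspective as well); the function names and the phase-correlation approach are illustrative choices, not the product's actual method.

```python
import numpy as np

def estimate_shift(prev, curr):
    """Estimate the integer (dy, dx) translation from `prev` to `curr`
    using phase correlation."""
    f1 = np.fft.fft2(prev)
    f2 = np.fft.fft2(curr)
    cross = f2 * np.conj(f1)
    cross /= np.abs(cross) + 1e-9          # keep phase information only
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = prev.shape
    # Map wrapped peak positions to signed shifts.
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)

def build_mosaic(frames):
    """Paste each frame into a growing canvas at its cumulative offset,
    so overlapping frames form one continuous strip of the pipe."""
    h, w = frames[0].shape
    offsets = [(0, 0)]
    for prev, curr in zip(frames, frames[1:]):
        dy, dx = estimate_shift(prev, curr)
        oy, ox = offsets[-1]
        # Content shifting by +d inside the frame means the camera
        # (and the frame's origin in scene coordinates) moved by -d.
        offsets.append((oy - dy, ox - dx))
    ys = [o[0] for o in offsets]
    xs = [o[1] for o in offsets]
    canvas = np.zeros((h + max(ys) - min(ys), w + max(xs) - min(xs)))
    for frame, (oy, ox) in zip(frames, offsets):
        y0, x0 = oy - min(ys), ox - min(xs)
        canvas[y0:y0 + h, x0:x0 + w] = frame
    return canvas
```

Because every frame is placed in one shared coordinate system, distances on the finished mosaic (for example, from a damaged area to the nearest joint) can be measured directly in pixels and converted to real-world units.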
Event officials were planning for their annual two-day festival, which draws an average of 2,000 people – adults and children. While the festival had never had a serious issue in its 10-year history, the event organizers wanted to be prepared for any contingency.
The event organizers used Ikena ISR to give them a clear picture of the festival grounds so that they could determine the optimal positioning of their security crew and form an efficient evacuation plan. Ikena ISR enabled them to mosaic the multiple images from their overhead cameras into a full picture of the festival grounds so that the event officials could monitor activity across the entire festival in real time.
Festival officials were able at all times to identify chokepoints, areas of congestion, and the best routes of egress, ensuring the success of the festival and the safety of all those involved.
The following describes an experiment done in collaboration with the US Air Force to test an operational scenario.
An attack may take months of preparation. Before and during the attack, the commander needs real-time information on the situation on the ground. The only source of that information may be a small drone supplying quality-challenged video -- in this case, overexposed, interlaced, shaky footage. In this example, two men are clearly visible on deck, but are the other blurry dark shapes people or objects? Missing an armed person could be a life-or-death mistake.
Ikena ISR’s proprietary algorithms correct the live video in real time, stabilizing it, reducing overexposure, and eliminating interlacing artifacts to reveal that there is indeed a third person on the deck.
The commander is able to launch a successful raid with all threats correctly identified.
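Ikena ISR's actual enhancement pipeline is proprietary; as a rough illustration of two of the corrections described above, here is a minimal NumPy sketch of basic line-interpolation deinterlacing and percentile-based exposure stretching. The function names, parameters, and techniques are illustrative standard methods, not the product's implementation (stabilization is a separate step, sketched in the pipe-inspection example via frame-to-frame shift estimation).

```python
import numpy as np

def deinterlace_bob(frame):
    """Remove interlacing 'combing' by rebuilding the odd scan lines
    from the average of their even-line neighbors. The last line is
    left unchanged when it has no neighbor below."""
    out = frame.astype(float).copy()
    # Each interior odd row becomes the mean of the rows above and below.
    out[1:-1:2] = 0.5 * (out[0:-2:2] + out[2::2])
    return out

def stretch_exposure(frame, low_pct=2, high_pct=98):
    """Counter overexposure by remapping the low/high intensity
    percentiles to the full [0, 1] range (contrast stretching)."""
    lo, hi = np.percentile(frame, [low_pct, high_pct])
    return np.clip((frame - lo) / (hi - lo + 1e-9), 0.0, 1.0)
```

In a live system these operations run per frame; the stretch makes washed-out dark shapes (such as a crouching third person) separable from the bright deck around them.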
Full Motion Video (FMV) suffers from a variety of challenges. Video taken from large stand-off distances is subject to atmospheric effects and low resolution; video from small unmanned aerial systems (SUAS) such as drones can suffer high-frequency jitter; and low-bitrate communications links introduce noise and compression artifacts.
Detecting motion from a moving camera can be challenging. Ikena ISR’s change detection and motion estimation algorithms can accurately detect moving objects in a video while correcting for camera movement.
Users can also adjust detection sensitivity to filter for both size and speed of objects. All of this allows objects to be detected and tracked accurately even from sources like moving aerial platforms or drones.
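The combination of camera-motion compensation and adjustable sensitivity can be sketched in a few lines of NumPy. This is an illustrative simplification, not Ikena ISR's algorithm: it assumes the global camera motion is a known pure translation (in practice it would be estimated, for example by phase correlation), and it shows only the size-sensitivity filter, not speed filtering.

```python
import numpy as np

def detect_moving_objects(prev, curr, cam_shift, diff_thresh=0.2, min_pixels=9):
    """Change detection between two frames after compensating a known
    global camera translation `cam_shift` = (dy, dx). Returns a boolean
    mask of pixels that moved independently of the camera; detections
    covering fewer than `min_pixels` pixels are suppressed."""
    dy, dx = cam_shift
    aligned = np.roll(prev, (dy, dx), axis=(0, 1))   # undo the camera pan
    diff = np.abs(curr.astype(float) - aligned.astype(float))
    mask = diff > diff_thresh
    # np.roll wraps pixels around the border; discard that invalid band.
    b = max(abs(dy), abs(dx))
    if b:
        mask[:b, :] = False
        mask[-b:, :] = False
        mask[:, :b] = False
        mask[:, -b:] = False
    # Size-sensitivity filter: too few changed pixels means no detection.
    if mask.sum() < min_pixels:
        return np.zeros_like(mask)
    return mask
```

Raising `diff_thresh` or `min_pixels` makes the detector less sensitive, which is the kind of user-adjustable filtering described above: small wind-blown debris is ignored while a walking person still registers.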