Researchers in South Korea have developed an ultra-small, ultra-thin LiDAR device that splits a single laser beam into 10,000 points covering an unprecedented 180-degree field of view. It is capable of creating a 3D depth map of the entire visual hemisphere in a single shot.

Autonomous machines and robots must be able to perceive the world around them with incredible precision if they are to be safe and useful in real-world environments. In humans and other autonomous biological entities, this requires a whole range of different senses and extraordinary real-time data processing, and the same is likely to be true of our technological descendants.

LiDAR – short for Light Detection and Ranging – has been around since the 1960s and is now a well-established range-finding technology that is particularly useful in developing 3D point cloud representations of a given space. It works a bit like sonar, but instead of sound pulses, LiDAR devices send short pulses of laser light and then measure the light that is reflected or scattered when those pulses hit an object.

The time between the initial light pulse and the return pulse, multiplied by the speed of light and divided by two, gives the distance between the LiDAR unit and a given point in space. Measure a whole bunch of points repeatedly over time and you get a 3D model of that space, with information about distance, shape, and relative speed, which can be used alongside data streams from multi-point cameras, ultrasound sensors, and other systems to flesh out an autonomous system’s understanding of its environment.
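
As a rough illustration of that arithmetic, here is a minimal Python sketch of the time-of-flight calculation; the pulse timing used in the example is purely illustrative and not taken from the study.

```python
# Minimal sketch of the time-of-flight distance calculation described above.
# The example round-trip time is illustrative, not a value from the study.

SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def tof_distance(round_trip_time_s: float) -> float:
    """Distance to a target given the round-trip time of a laser pulse.

    The pulse travels out and back, so the one-way distance is half of
    (time x speed of light).
    """
    return round_trip_time_s * SPEED_OF_LIGHT / 2.0

# A return pulse arriving about 6.67 nanoseconds after emission corresponds
# to a target roughly one metre away.
print(f"{tof_distance(6.67e-9):.3f} m")
```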

One of the key problems with existing LiDAR technology is its field of view, according to researchers at Pohang University of Science and Technology (POSTECH) in South Korea. If you want to image a wide area from a single point, the only way to do it is to mechanically rotate the LiDAR device or use a rotating mirror to steer the beam. Such equipment can be bulky, energy-intensive and fragile, it tends to wear out fairly quickly, and the rotation speed limits how often each point can be measured, reducing the frame rate of your 3D data.

Solid-state LiDAR systems, on the other hand, use no physical moving parts. Some of them, the researchers note — like the depth sensors Apple uses to make sure you can’t fool the iPhone’s face unlock system by holding up a flat photo of the owner’s face — project an array of dots all at once, and watch for distortions in the dots and in the overall pattern to extract shape and distance information. But their field of view and resolution are limited, and the team says they are still relatively large devices.
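
To make that dot-pattern idea concrete, here is a hedged sketch of the underlying triangulation: a projected dot’s sideways shift on the camera sensor (its disparity) encodes depth, much as in stereo vision. The focal length and baseline below are illustrative assumptions, not the specifications of any real sensor.

```python
# Rough sketch of depth-from-dot-distortion via triangulation. The focal
# length and projector-camera baseline are made-up values for illustration.

import numpy as np

FOCAL_LENGTH_PX = 600.0  # assumed camera focal length, in pixels
BASELINE_M = 0.05        # assumed projector-to-camera separation, in metres

def depth_from_disparity(disparity_px: np.ndarray) -> np.ndarray:
    """Depth in metres for each dot, from its observed shift in pixels."""
    disparity_px = np.asarray(disparity_px, dtype=float)
    return FOCAL_LENGTH_PX * BASELINE_M / disparity_px

# Dots that shift a lot are close to the sensor; dots that barely move are far away.
print(depth_from_disparity([60.0, 30.0, 10.0]))  # roughly 0.5 m, 1.0 m and 3.0 m
```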

The Pohang team set out to build the smallest possible depth-sensing system with the widest possible field of view, using the extraordinary light-bending abilities of metasurfaces. These two-dimensional nanostructures, a thousandth the width of a human hair, can be thought of as ultra-flat lenses built from arrays of tiny, precisely shaped individual nanorod elements. Incoming light splits into multiple directions as it passes through the metasurface, and with the right design of the nanorod array, portions of that light can be diffracted at nearly 90 degrees. A completely flat ultra-fisheye lens, if you will.
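
One back-of-the-envelope way to see why such fine patterning can steer light so sharply is the classic grating equation: for a periodic structure, the first-order diffraction angle approaches 90 degrees as the pattern period shrinks towards the wavelength. The wavelength and periods in the sketch below are illustrative assumptions, not the parameters used by the POSTECH team.

```python
# Grating-equation illustration of extreme diffraction angles. For a periodic
# structure at normal incidence, the first diffraction order satisfies
# sin(theta) = wavelength / period. The numbers here are assumptions.

import math

def first_order_angle_deg(wavelength_nm: float, period_nm: float) -> float:
    """First-order diffraction angle in degrees, from the grating equation."""
    s = wavelength_nm / period_nm
    if s > 1.0:
        raise ValueError("No propagating first order: period shorter than wavelength")
    return math.degrees(math.asin(s))

# As the pattern period approaches the wavelength, the diffracted beam bends
# towards 90 degrees from the surface normal.
for period_nm in (2000.0, 1000.0, 650.0):
    angle = first_order_angle_deg(633.0, period_nm)
    print(f"{period_nm:.0f} nm period -> {angle:.1f} degrees")
```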

Left: front and side views of the beam diffraction pattern, showing both the loss of intensity at large bending angles and the loss of dot resolution as distance increases. Right: the precisely shaped nanostructured surface of the metasurface itself, which can bend light at nearly 90 degrees


The researchers designed and built a device that fires laser light through a metasurface lens whose nanorod elements are configured to split it into roughly 10,000 dots covering an extraordinary 180-degree field of view. The device then reads the reflected or scattered light from those dots with a camera to provide distance measurements.

“We have proven that we can control the propagation of light at any angle by developing a technology that is more advanced than conventional metasurface devices,” said Professor Junsuk Rho, co-author of the new study published in Nature Communications. “This will be an original technology to create an ultra-small and full-space 3D imaging sensor platform.”

Light intensity drops as the diffraction angle becomes more extreme; dots bent at a 10-degree angle hit their targets with four to seven times the intensity of dots bent closer to 90 degrees. With the equipment in their lab, the researchers found they got their best results with a maximum viewing angle of 60° (representing a 120° field of view) and a distance of less than 1 m (3.3 ft) between the sensor and the object. More powerful lasers and more precisely tuned metasurfaces will expand what these sensors can do, they say, but high resolution over long distances will always be a challenge with such ultra-wide-angle lenses.

This tiny dot of metasurface is all you need to split a single laser beam out wide enough to reflect off anything in front of you


Another potential limitation here is image processing. The “coherent point drift” algorithm used to decode the sensor data into a 3D point cloud is complex, and its processing time grows with the number of points. Decoding full-frame, high-resolution captures of 10,000 points or more will therefore be quite processor-intensive, and getting such a system running at 30 frames per second will be a big challenge.
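
For a sense of why that cost climbs, coherent point drift treats one point cloud as a Gaussian mixture and, on every iteration, computes a soft-correspondence matrix between all pairs of points, so each pass scales with the product of the two cloud sizes. The simplified sketch below (uniform outlier term omitted, synthetic data) is only meant to show that scaling, not to reproduce the authors’ processing pipeline.

```python
# Simplified sketch of the O(N x M) soft-correspondence step at the heart of
# coherent point drift. Not the authors' pipeline: the uniform outlier term is
# omitted and the point clouds are synthetic.

import numpy as np

def soft_correspondences(x: np.ndarray, y: np.ndarray, sigma2: float) -> np.ndarray:
    """N x M matrix of probabilities that target point x_n arose from source point y_m."""
    # Pairwise squared distances between every target and source point: O(N * M).
    d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(axis=-1)
    p = np.exp(-d2 / (2.0 * sigma2))
    return p / p.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
target = rng.normal(size=(2_000, 3))                         # measured dots
source = target + rng.normal(scale=0.01, size=target.shape)  # slightly perturbed copy
P = soft_correspondences(target, source, sigma2=0.05)
print(P.shape)  # (2000, 2000); at 10,000 points the matrix holds 100 million entries
```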

On the other hand, these things are incredibly small, and metasurfaces can be produced easily and cheaply at massive scale. The team printed one onto the curved surface of a pair of safety glasses, and it is so small you can barely distinguish it from a speck of dust. That’s the potential here: metasurface-based depth-imaging devices can be made incredibly small and easily integrated into the design of all sorts of objects, with their field of view tuned to whatever angle makes sense for the application.

The team believes these devices have huge potential in mobile devices, robotics, autonomous cars, and VR/AR glasses. Very neat stuff!

The research is published in the open-access journal Nature Communications.

