A microscopic camera the size of a single grain of salt can capture sharp, vivid images on par with those from conventional lenses 500,000 times its size.

Researchers from Princeton University as well as the University of Washington created this ultra-compact device.

It solves the problems of previous micro-sized cameras, which have tended to take blurry images with very narrow fields of view.

The new camera could allow super-small robots to sense their environment, and even help doctors diagnose problems inside the body.

Despite being the size of a grain of salt, a new microscopic camera design can capture crisp, full-colour images on par with lenses 500,000 times larger. Pictured: the tiny camera

HOW THE CAMERA WAS MADE

The so-called metasurface, which allows the camera to capture quality images even though it is so small, was fabricated by University of Washington optical engineer James Whitehead.

The metasurface is made of silicon nitride (a glass-like material) that is compatible with the current manufacturing processes for computer chips.

This, the researchers explained, means that — once a fitting metasurface has been designed — it could be easily mass produced for lower costs than the lenses used in normal cameras. 

In a full-sized, traditional camera, a series of lenses made of curved glass or plastic bend light to focus it onto film or a digital sensor.

In contrast, the tiny camera developed by computer scientist Ethan Tseng and his colleagues relies on a special ‘metasurface’ studded with 1.6 million cylindrical posts — each the size of a single HIV virus — which can modulate the behaviour of light.

The posts on the 0.5-millimetre-wide surface have distinctive shapes that allow them to function like optical antennae, shaping an optical wavefront.

The researchers say the secret to the camera's success was the integrated design of the optical metasurface with machine-learning-based signal-processing algorithms, which interpret the light interactions into images.
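
As a rough illustration of that two-stage idea (the optic produces a raw, blurred measurement, and software then reconstructs the image), the Python sketch below blurs a toy scene and recovers it with a classical regularised inverse filter. The real system uses a learned neural network rather than this simple filter; everything here is a simplified stand-in.

```python
import numpy as np

rng = np.random.default_rng(0)
scene = rng.random((32, 32))          # toy ground-truth scene
psf = np.zeros((32, 32))
psf[:3, :3] = 1 / 9                   # toy 3x3 box blur as the optic's point-spread function

# Forward model: the optic blurs the scene (circular convolution via FFT)
measurement = np.real(np.fft.ifft2(np.fft.fft2(scene) * np.fft.fft2(psf)))

# Reconstruction: regularised (Wiener-style) inverse filter in the Fourier domain
H = np.fft.fft2(psf)
eps = 1e-3                            # regularisation to avoid dividing by near-zero values
recovered = np.real(np.fft.ifft2(
    np.fft.fft2(measurement) * np.conj(H) / (np.abs(H) ** 2 + eps)))

# The computational step undoes most of the blur introduced by the optic
print(np.abs(recovered - scene).mean(), np.abs(measurement - scene).mean())
```

The reconstruction error is far smaller than the raw measurement's error, which is the basic point: a poor-looking raw capture can still carry enough information for software to recover a sharp image.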

According to its creators, the tiny camera produces the highest-quality images and the widest field of view of any full-colour metasurface camera to date.

In the past, designs were often plagued by image distortions, limited fields of vision and issues capturing all visible light.

Experts call the process 'RGB imaging' because it involves mixing the primary colours red, green and blue to create other colours, much as primary-school art lessons mix red, yellow and blue on a colour wheel.
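
Additive RGB mixing can be sketched in a few lines of Python. The 0-255 values are the standard display scale, not anything specific to this camera.

```python
# Additive mixing of red, green and blue primaries (0-255 display scale)
red, green, blue = (255, 0, 0), (0, 255, 0), (0, 0, 255)

def mix(*colours):
    """Additive mix: sum each channel and clip to the 0-255 display range."""
    return tuple(min(sum(channel), 255) for channel in zip(*colours))

print(mix(red, green))         # yellow: (255, 255, 0)
print(mix(red, green, blue))   # white: (255, 255, 255)
```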

In fact, the team said, the images they can capture — aside from a little blurring near the edges of the frame — are comparable to those that can be taken with a regular, full-sized camera setup featuring a series of six refractive lenses.

Additionally, the camera can function in natural light, whereas previous metasurface camera models required laser light or other highly optimised laboratory conditions to capture high-quality images.

It overcomes problems with previous micro-sized camera designs, which have tended to take only distorted and fuzzy images with very limited fields of view. Pictured: images of a flower, taken with the previous state-of-the-art microscopic camera (left) and the new design (right)

'It was a challenge to design these microstructures to do what you want,' said Mr Tseng, who is based at Princeton University in New Jersey.

'It's hard to get large-field-of-view RGB images because of all the microstructures [on the metasurface]. It's unclear how they should be designed.'

To overcome this, University of Washington optics expert Shane Colburn created a digital model that could simulate metasurface designs and their photographic output, allowing automated testing of different nano-antennae configurations.
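
The idea of simulating candidate designs and automatically scoring their output can be sketched as follows. This is a toy one-dimensional model with a hypothetical blur-width parameter standing in for a nano-antenna configuration; the team's actual simulator models light propagation through millions of posts.

```python
import numpy as np

rng = np.random.default_rng(1)
target = rng.random(64)               # toy 1-D "scene" the camera should resolve

def simulate(width):
    """Toy forward model: a candidate design blurs the scene with a box filter
    of the given width (a stand-in for a full metasurface light-propagation model)."""
    kernel = np.ones(width) / width
    return np.convolve(target, kernel, mode="same")

def score(image):
    """Image-quality proxy: mean squared error against the true scene (lower is better)."""
    return np.mean((image - target) ** 2)

# Automated testing: simulate every candidate configuration and keep the best one
candidates = [1, 3, 5, 9, 15]
best = min(candidates, key=lambda w: score(simulate(w)))
print(best)  # width 1 introduces no blur, so it scores perfectly
```

A grid search like this is only tractable for a handful of parameters; with millions of interacting nano-antennae, the team needed a full simulation model, which is why each run demanded so much memory and time.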

Professor Colburn said that, because of the large number of antennae on each surface and their complex interactions with light, each simulation required 'massive amounts' of memory and time.

According to the team, the images they can capture (left) — aside from a little blurring near the edges of the frame — are comparable to those that can be taken with a regular, full-sized camera setup featuring a series of six refractive lenses (right)

‘Although the approach to optical design is not new, this is the first system that uses a surface optical technology in the front end and neural-based processing in the back,’ said optical engineer Joseph Mait, who was not involved in the study. 

'The significant challenge was designing together the location, size and shape of the million metasurface features and the parameters of the post-detection processing to achieve the desired imaging performance,' Mr Mait said.

With their initial study complete, the team are working to add computational abilities to the camera, both to further enhance image quality and to incorporate capabilities like object detection that would be useful for practical applications. 

APPLICATIONS OF THE CAMERA

Pictured: The tiny camera relies on a special 'metasurface' studded with posts which can modulate the behaviour of light

The researchers believe that the camera is ideal for small-scale robotics applications, in which weight and size constraints make it difficult to use traditional cameras.

The optical metasurface could also be used to improve minimally-invasive endoscopic devices — allowing doctors to better see inside of patients in order to diagnose and treat diseases.

Furthermore, paper author and Princeton University computer scientist Felix Heide envisages that the concept could be used to turn whole surfaces into sensors.

He said this could turn individual surfaces into ultra-high-resolution cameras: instead of needing three cameras on the back of your smartphone, the entire back would become one giant camera.

'We are able to think of totally different ways to make devices in the future.'