New AI chip lets an artificial eye analyze images in a matter of nanoseconds
Over the past few years, machine vision has made major breakthroughs and has become an integral part of AI-driven systems such as industrial robots and self-driving cars.
Normally, the image information entering a camera is converted into digital form and then processed by algorithms. The volume of input data is very large (and much of it redundant), and after the data passes through a series of conversions from one form to another, the end result is a low frame rate and a processing pipeline that consumes a lot of energy. There is good news, however.
Researchers at the Institute of Photonics at TU Wien (Vienna University of Technology) in Austria have created a new type of artificial eye that combines light-sensing components with a neural network on a single small chip. It can process images in just a few nanoseconds, faster than any existing image sensor.
The new design, just published in the journal Nature, shows the team borrowing a trick from nature: it imitates the way animal eyes process images, where some of the incoming data is handled before any signal is sent to the brain.
The scientists built the chip from a sheet of tungsten diselenide (WSe2, a compound of tungsten and selenium) only a few atoms thick, topped with an array of light-sensing diodes. They then wired this chip into a neural network.
WSe2 gives the chip a special electrical property that lets the scientists easily tune the light sensitivity of each diode. This means the neural network can learn to interpret the input image simply by adjusting the diodes' light sensitivities, refining them repeatedly until it produces an accurate result.
In this way, the chip can quickly recognize stylized versions of the letters n, v, and z.
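To make the idea concrete, here is a minimal sketch in Python/NumPy (an illustration, not the authors' implementation) of how tunable photodiode sensitivities can play the role of neural-network weights. It assumes nine pixels with three diodes each (27 sensors in total, matching the chip), one diode per output class, and it simply nudges the sensitivities until each stylized letter drives its own output line. The 3x3 letter patterns and the training rule are illustrative assumptions.

```python
import numpy as np

# Assumed stylized 3x3 letters "n", "v", "z" as binary light-intensity patterns.
LETTERS = {
    "n": np.array([[1, 1, 1],
                   [1, 0, 1],
                   [1, 0, 1]], dtype=float),
    "v": np.array([[1, 0, 1],
                   [1, 0, 1],
                   [0, 1, 0]], dtype=float),
    "z": np.array([[1, 1, 1],
                   [0, 1, 0],
                   [1, 1, 1]], dtype=float),
}
CLASSES = list(LETTERS)

# Sensitivity matrix: 9 pixels x 3 diodes per pixel = 27 tunable values.
# Each column sums to the "photocurrent" of one output class line.
rng = np.random.default_rng(0)
R = rng.normal(scale=0.1, size=(9, 3))

def forward(image, R):
    """Sum each class line's photocurrent, then softmax into class scores."""
    currents = image.reshape(-1) @ R
    e = np.exp(currents - currents.max())
    return e / e.sum()

# Training: repeatedly adjust sensitivities until each letter is classified correctly.
lr = 0.5
for _ in range(200):
    for idx, name in enumerate(CLASSES):
        x = LETTERS[name].reshape(-1)
        p = forward(LETTERS[name], R)
        target = np.eye(3)[idx]
        R += lr * np.outer(x, target - p)   # gradient step on cross-entropy loss

for name in CLASSES:
    p = forward(LETTERS[name], R)
    print(f"{name} -> {CLASSES[int(np.argmax(p))]}  scores={np.round(p, 2)}")
```

On real hardware the "forward pass" is just physics: light hitting the diodes produces currents that sum on the output lines, so classification happens at sensor speed; only the adjustment of sensitivities resembles the software loop above.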
The new sensor lets machine vision 'see' faster and more efficiently, but it still has a long way to go: this eye holds only 27 light sensors and can only see 3x3 images.
Still, the technology has qualities that make it better than what came before, enough for the researchers to believe that scaling the system up will not be too complicated. In particular, the chip can perform machine-learning tasks on its own, such as identifying and encoding characters, without intervention from the scientists.
Once artificial intelligence can be 'left' to process images and learn on its own, the time we spend supervising it will shrink.