One of the biggest challenges for artificial intelligence (AI) models in language processing today is understanding the context of each specific segment and thereby grasping the content of a whole paragraph - or, more broadly, the meaning of an entire work - rather than simply understanding each individual word in isolation, as models do today.

To solve this problem, Google introduced a new machine learning model called Reformer, which can process the context of a work up to 1 million words long - roughly the length of a novel - while using only about 16GB of memory. Reformer was developed to overcome the limitations of the Transformer, an earlier neural network architecture that works by comparing the words in a passage against one another to understand the relationships between them.


However, because it operates by comparing every pair of words, the Transformer consumes a large amount of memory when it has to process more than a few thousand words of text. This weakness makes it impractical to use the Transformer on long articles or an entire book.
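To see why the all-pairs comparison is so costly, here is a minimal sketch (not Google's code) of standard dot-product attention: the intermediate score matrix has one entry for every pair of tokens, so its size grows quadratically with the length of the text.

```python
import numpy as np

def attention(q, k, v):
    # scores has shape (n, n): every token is compared against every other
    # token, so a 64,000-token text would need a 64k x 64k score matrix.
    scores = q @ k.T / np.sqrt(q.shape[-1])
    # Softmax over each row turns scores into attention weights.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

n, d = 512, 64  # sequence length, vector dimension (toy sizes)
rng = np.random.default_rng(0)
q, k, v = (rng.standard_normal((n, d)) for _ in range(3))
out = attention(q, k, v)
print(out.shape)  # one output vector per token, but the score matrix was n x n
```

Doubling the text length quadruples the score matrix, which is exactly the scaling problem Reformer sets out to avoid.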

Google developed Reformer to address both of these key issues: handling long texts and the heavy memory consumption of the older model.

To solve the first problem, Google's new model uses a technique called locality-sensitive hashing (LSH). Instead of comparing all words against one another as before, the model uses a hash function to group similar words together, then compares only words within the same group or in adjacent groups. This limits the processing load and allows the model to handle much larger amounts of text.
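The grouping step can be sketched with a classic LSH scheme based on random projections (a simplified stand-in, not Reformer's exact hash): each vector is hashed by the pattern of signs it produces against a few random hyperplanes, so similar vectors tend to land in the same bucket.

```python
import numpy as np

def lsh_buckets(vectors, n_planes=4, seed=0):
    # Project each vector onto random hyperplanes; the pattern of signs
    # is the hash, so nearby vectors tend to receive the same bucket id.
    rng = np.random.default_rng(seed)
    planes = rng.standard_normal((vectors.shape[-1], n_planes))
    bits = (vectors @ planes) > 0
    # Pack the sign bits into one integer bucket id in [0, 2**n_planes).
    return (bits * (2 ** np.arange(n_planes))).sum(axis=-1)

rng = np.random.default_rng(1)
tokens = rng.standard_normal((8, 16))  # 8 toy "word vectors"
buckets = lsh_buckets(tokens)
print(buckets)  # attention then compares only tokens that share a bucket
```

With 4 hyperplanes there are only 16 buckets, so instead of comparing all pairs, the model compares each token only with the handful that hash to the same (or a neighboring) bucket.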

To solve the memory consumption problem, the researchers use a technique called the Reversible Residual Network (RevNet). RevNet was developed from Deep Residual Networks (ResNets), an architecture whose effectiveness grows as networks become deeper and wider. RevNet uses reversible layers, where the input of each layer can be reconstructed exactly from its output. As a result, the activations of most layers do not need to be stored in memory during backpropagation.
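The core trick can be shown in a few lines. This is a toy sketch of a reversible residual block, where `f` and `g` are stand-ins for real sublayers (in Reformer they would be attention and feed-forward layers): because the inputs can be recomputed from the outputs, nothing needs to be kept in memory between the forward and backward passes.

```python
import numpy as np

def f(x): return np.tanh(x)   # stand-in for one sublayer (e.g. attention)
def g(x): return np.sin(x)    # stand-in for another (e.g. feed-forward)

def rev_forward(x1, x2):
    # The input is split into two halves that update each other.
    y1 = x1 + f(x2)
    y2 = x2 + g(y1)
    return y1, y2

def rev_inverse(y1, y2):
    # Recover the inputs exactly from the outputs - no stored activations.
    x2 = y2 - g(y1)
    x1 = y1 - f(x2)
    return x1, x2

rng = np.random.default_rng(0)
x1, x2 = rng.standard_normal(4), rng.standard_normal(4)
y1, y2 = rev_forward(x1, x2)
r1, r2 = rev_inverse(y1, y2)
print(np.allclose(x1, r1) and np.allclose(x2, r2))  # True: inputs recovered exactly
```

Since each layer's input can be rebuilt on the fly during backpropagation, a deep stack of such blocks costs roughly the memory of a single layer rather than the whole network.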

To test the model's effectiveness, Google fed Reformer small cropped fragments of images, and it generated full-frame images from them.

Google engineers say the new model can easily process an entire book with high accuracy, opening up the potential for large-scale text processing.


