Many of us are familiar with AI development, its technologies, benefits, and challenges. Large tech companies are working with research groups to develop an approach that could offer a lasting solution to some of the biggest challenges facing AI development. That approach is Tiny AI, which promises to take AI's capabilities to the next level with a long list of benefits. In this article, we cover four things: what Tiny AI is, how Tiny AI works, why Tiny AI is needed, and its challenges, along with some real-world examples.
AI is becoming more intelligent every day, but it is not getting greener. In the ceaseless quest to build powerful AI solutions with complex, massive algorithms, enormous amounts of data and computing power are consumed. This not only harms the environment through carbon emissions but also slows AI applications down and limits their security. As AI grows more accurate, the burden it places on the environment grows proportionally as well.
A recent study by researchers at the University of Massachusetts Amherst found that training a single algorithm can produce five times as much lifetime carbon dioxide as an average car. One example is Microsoft's Turing Natural Language Generation model, one of the largest AI models ever published at a whopping 17 billion parameters. Such a model, while extremely accurate, is also notoriously power-hungry. The relentless pursuit of the highest possible accuracy in AI models has come at the expense of energy efficiency. This is where Tiny AI may help: in April 2020, MIT Technology Review named Tiny AI one of the year's major technological breakthroughs.
WHAT IS TINY AI? HOW DOES TINY AI WORK?
Tiny AI is the counter-trend to this rise in carbon emissions: developing more efficient AI. An army of researchers, as well as the tech giants, is developing new algorithms that shrink existing deep-learning models while keeping their capabilities intact. The focus is on shrinking the algorithms in AI models that rely on massive datasets and computational power.
One such example is BERT (Bidirectional Encoder Representations from Transformers). BERT is a pre-trained NLP (Natural Language Processing) technique developed by Jacob Devlin and his team at Google. BERT understands words as well as the context in which they are used, so it can offer writing suggestions, finish sentences, and much more. However, this almost unparalleled accuracy comes at a cost: BERT works with a colossal data set and requires massive amounts of computational power. It has a whopping 340 million parameters, and training it just once uses as much electricity as an average US household consumes in 50 days. BERT therefore became an obvious target for Tiny AI researchers looking to shrink large AI models. Huazhong University of Science and Technology and Huawei's Noah's Ark Lab reportedly succeeded in building TinyBERT, which is 7.5 times smaller and 9.4 times faster than the original BERT. According to their report, TinyBERT is almost as accurate as BERT, achieving 96 percent of the original model's performance.
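To get a rough feel for that size gap, here is a minimal sketch that loads BERT and a publicly released TinyBERT checkpoint with the Hugging Face transformers library and compares parameter counts. The model identifiers are assumptions based on commonly published checkpoints, not something specified in this article.

```python
# A minimal size comparison, assuming the Hugging Face checkpoints
# "bert-base-uncased" and "huawei-noah/TinyBERT_General_4L_312D" are available.
# Requires: pip install transformers torch
from transformers import AutoModel

def count_parameters(model):
    """Total number of trainable parameters in a PyTorch model."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

bert = AutoModel.from_pretrained("bert-base-uncased")
tiny = AutoModel.from_pretrained("huawei-noah/TinyBERT_General_4L_312D")  # assumed repo id

print(f"BERT parameters:     {count_parameters(bert):,}")
print(f"TinyBERT parameters: {count_parameters(tiny):,}")
print(f"Size ratio:          {count_parameters(bert) / count_parameters(tiny):.1f}x")
```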
So how does Tiny AI work? Researchers use 'knowledge distillation' methods that shrink an AI model to as little as one-tenth of its original size. These smaller models can then be deployed 'at the edge' with the algorithms built in, eliminating the need to send data to the cloud for processing and reducing latency, since the algorithms run on the device itself. Despite the smaller size, this approach accelerates inference while maintaining high accuracy. Distillation works by training a smaller model, the student, from a larger, more sophisticated model, the teacher. Training involves running many iterations of data through both models and tuning the student so its output matches the teacher's. Eventually, the student can produce nearly the same results as the teacher, yielding a compact AI engine with the capabilities of a much larger model.
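The snippet below is a minimal PyTorch sketch of this teacher-student setup, assuming a generic classification task. The temperature, loss weighting, and training loop are illustrative choices rather than a prescribed recipe.

```python
# A minimal knowledge-distillation sketch: the small "student" is trained to
# match the softened output distribution of the larger, frozen "teacher".
# Temperature T and weight alpha are illustrative assumptions.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    """Blend a soft KL term (match the teacher) with the usual hard-label loss."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

def train_student(student, teacher, loader, epochs=3, lr=1e-3):
    teacher.eval()                                   # the teacher stays frozen
    optimizer = torch.optim.Adam(student.parameters(), lr=lr)
    for _ in range(epochs):
        for inputs, labels in loader:
            with torch.no_grad():
                teacher_logits = teacher(inputs)     # soft targets from the teacher
            student_logits = student(inputs)
            loss = distillation_loss(student_logits, teacher_logits, labels)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return student
```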
WHAT IS THE NEED FOR TINY AI?
As we briefly discussed above, training sophisticated AI models consumes massive amounts of energy. With AI adoption increasing exponentially by the day, the need for Tiny AI to make the technology greener is apparent.
Another factor driving the push for more power-efficient AI is the need to run sophisticated AI models on smaller devices at the 'edge'. This will ultimately enable advanced use cases in robotics, automated video security, voice assistants, autocorrection, image processing in cameras, autonomous driving, connected healthcare, Industry 4.0, precision farming, and many more areas. Reducing the size of AI and machine learning models also reduces the need for massive computational power, lowering overall cloud and power costs. This could be a huge boon to the industries concerned, since it is commonly estimated that for every dollar spent on AI, an extra $10-15 goes to the cloud computing that supports it.
The AI of the future needs to run on much smaller, often battery-powered processors, such as those in smartphones and a host of IoT devices. For example, mobile cameras could perform medical image analysis, or autonomous driving decisions could be made without the cloud, saving invaluable microseconds.
The ability to run sophisticated AI programs on small-form-factor devices without going back and forth with the cloud will put these devices at the edge of the 'edge', one step beyond edge computing. Tiny AI builds complex AI algorithms into the hardware at the very periphery of a network, in most cases the devices or sensors themselves. With the algorithms integrated into the hardware, data analytics can be performed accurately at much lower power because no round trip to the cloud is needed. It also greatly improves privacy, since the data never leaves the device and is therefore less exposed to outsider attacks or breaches. That extra privacy is invaluable in highly regulated, privacy-conscious industries such as healthcare and banking.
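For a concrete sense of how a trained model gets packaged for on-device inference, here is a hedged sketch using TensorFlow Lite's post-training quantization. This is one common route to edge deployment, offered purely as an illustration rather than the specific mechanism used by the products discussed below.

```python
# A minimal sketch of preparing a trained Keras model for on-device inference
# with TensorFlow Lite post-training quantization. One common edge-deployment
# route; model and output path are placeholders.
import tensorflow as tf

def convert_for_edge(keras_model, output_path="model_quantized.tflite"):
    converter = tf.lite.TFLiteConverter.from_keras_model(keras_model)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]  # enable weight quantization
    tflite_model = converter.convert()
    with open(output_path, "wb") as f:
        f.write(tflite_model)                             # ship this file to the device
    return output_path
```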
REAL WORLD EXAMPLES OF TINY AI
Alongside researchers and academics, tech giants such as Google, IBM, Amazon, and Apple are also conducting research in this nascent field. Various industries and fields of technology are looking into Tiny AI to reduce computational costs, improve speed and privacy, and become more environmentally conscious.
Let's take a look at Sony and Microsoft, who recently struck a deal on a Tiny AI chip to create AI-driven smart camera solutions. The two companies are working to embed AI capabilities into Sony's latest imaging chip. The new AI module features its own processor and memory and can analyze video in a self-contained system. Because everything is processed and stored on the device itself, the chip can analyze the footage it sees and provide metadata about what is in the frame, and privacy fears are alleviated since hackers cannot intercept sensitive images or videos in transit to servers.
The AI-powered Sony sensor can also record high-resolution video and run AI analysis simultaneously. Because the data is not sent to the cloud for processing, the reaction time is almost instantaneous, so the sensor could, for example, be used in cars to detect a driver's alertness, and the technology could effectively hasten the adoption of smart-car features.
Existing services such as voice assistants (Siri, Google Assistant, Alexa, and others), autocorrect, and digital cameras will also become faster and more efficient if they adopt Tiny AI, since they wouldn't have to ping the cloud every time they needed to access deep-learning algorithms. Recently, Apple acquired Xnor.ai, a Seattle startup that specializes in low-power, edge-based AI tools. The startup developed technology that embeds AI at the edge and enables tasks such as facial recognition, NLP, augmented reality, and other ML workloads to run entirely on low-powered devices rather than relying on the cloud. It achieves this by replacing the AI models' complex mathematical operations with simple, less precise binary equivalents, which substantially boosts the speed and efficiency of the models while greatly reducing the computational power they consume.
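To illustrate the general idea of swapping full-precision arithmetic for binary approximations (in the spirit of XNOR-Net-style binarization, not Xnor.ai's proprietary implementation), here is a small NumPy sketch that approximates a weight matrix with its sign and a single scaling factor.

```python
# A rough NumPy illustration of weight binarization: a float weight matrix W
# is approximated by alpha * sign(W), where alpha is one scalar. Conceptual
# sketch only; not Xnor.ai's code.
import numpy as np

def binarize_weights(W):
    """Approximate W with {-1, +1} entries times one scaling factor."""
    alpha = np.mean(np.abs(W))   # best scale for the sign approximation
    B = np.sign(W)
    B[B == 0] = 1                # keep every entry strictly +/-1
    return alpha, B

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4)).astype(np.float32)
alpha, B = binarize_weights(W)
print("mean absolute error:", np.mean(np.abs(W - alpha * B)))
```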
Yet another Tiny AI development came with Amazon Web Services' release of the open-source AutoGluon toolkit. AutoGluon includes a feature known as 'neural architecture search', which finds the most compact and efficient neural-network structure for a given inferencing task, and it lets AI developers automatically optimize the speed, accuracy, and efficiency of new or existing models for inference on edge devices. AutoGluon can generate a high-performance AI model automatically from just a few lines of Python code: it taps the available compute resources and uses reinforcement learning to find the best-performing network architecture for the target environment. It can also interface with existing AI DevOps pipelines through APIs to automatically alter an ML model and improve its inferencing performance.
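As a taste of the 'few lines of Python' claim, here is a hedged sketch of AutoGluon's tabular AutoML interface. The file path and label column are placeholders, and the neural-architecture-search and edge-optimization features described above use other parts of the library not shown here.

```python
# A minimal AutoGluon sketch, assuming a CSV file with a "label" column.
# Requires: pip install autogluon
from autogluon.tabular import TabularDataset, TabularPredictor

train_data = TabularDataset("train.csv")                      # placeholder path
predictor = TabularPredictor(label="label").fit(train_data)   # trains and tunes models automatically
print(predictor.leaderboard())                                # compare the candidate models it produced
```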
CHALLENGES AND THE FUTURE OF TINY AI
Research into Tiny AI is critical for it to reach its full potential. Researchers and technology firms need to manage the trade-off between shrinking the size of an AI model and maintaining accuracy and high inference performance. For example, an attempt was made to make BERT even smaller than Huawei's TinyBERT, but the accuracy fell well below the 96 percent achieved by TinyBERT. Researchers therefore need to be wary of the technology's limitations while developing methods to push the envelope further. Additionally, since Tiny AI will likely accelerate the adoption of smart and autonomous vehicles, the room for error is nonexistent, and it is vital to make Tiny AI algorithms at the edge extremely secure and ethical.
Tiny AI could alter the very essence of how consumers interact with their devices and will be essential to the future of context-aware consumer devices. It has its eyes on a wide array of services and technologies and will also create new applications. Experts believe Tiny AI is an expected evolution in the field of AI.
Mieke De Ketelaere, Program Director AI at IMEC, believes Tiny AI requires researchers to look into new ways of running AI. First, the algorithms need to run on smaller-scale, more power-efficient hardware. Second, access to data is limited on smaller form-factor devices, so researchers have to find novel ways to work with smaller, more contextually aware data sets while remaining as accurate as the full-fledged models. Because of these requirements and challenges, Tiny AI is not yet commonplace and remains highly research-driven, as it should be, until it is perfected.
We hope we have answered the questions most people are keen to know: what Tiny AI is, how Tiny AI works, why Tiny AI is needed, and its challenges, along with a few real-world examples of Tiny AI.
Thanks for reading this article. If you find this interesting, please visit thesecmaster.com.