August 15, 2018
After years of focus on software development, the technology industry has rediscovered the importance of semiconductors as it recognizes the significant role that artificial intelligence (AI) chips play in the growing AI market.
Also known as AI accelerators, AI chips are processors that perform AI-based calculations and other tasks very fast. Built around machine learning workloads, these chips deliver the exceptional computing power for training algorithms and running applications that traditional computer chips cannot provide. The range of innovative AI-optimized chipset architectures is constantly expanding, with several startups and well-established chip companies launching impressive hardware architectures optimized for machine learning, natural language processing, deep learning, and other areas of AI.
Chief among the new AI chip architectures are graphics processing units (GPUs), application-specific integrated circuits (ASICs), neural network processing units (NNPUs), field-programmable gate arrays (FPGAs), and central processing units (CPUs).
The market for AI chips is driven by the surge in demand for smart homes and smart cities, considerable growth in AI startups, and the emergence of quantum computing. According to Allied Market Research, the AI chip market is likely to garner $91,185 million by 2025, growing at a CAGR of 45.4% during the forecast period 2018-2025.
Given the ever-growing demands of AI applications, more chip makers are engaging in activities such as collaborations and product launches. Nvidia Corporation, an American technology company, recently teamed up with Arm Holdings (Arm), a multinational semiconductor and software design company, with the aim of helping IoT chip companies easily incorporate AI technology into their designs. Baidu, a Chinese technology company, unveiled an AI chip to implement fast computing in various AI scenarios. Google, an American tech giant, recently developed the third version of its AI-focused chips, called Tensor Processing Units (TPUs), to perform a variety of tasks such as word recognition and more. Arm recently unveiled two new AI chip designs that deliver exceptional computational capability.
Nvidia and Arm Form Partnership
In March 2018, Nvidia and Arm formed a partnership with the aim of integrating the open-source NVIDIA Deep Learning Accelerator (NVDLA) architecture into Arm’s Project Trillium platform for machine learning. The collaboration is aimed at simplifying the integration of AI for IoT companies into their designs and helping global consumers have access to intelligent and affordable products.
Based on the NVIDIA Xavier autonomous machine system on a chip (SoC), NVDLA is a free, open architecture that is scalable, highly configurable, and designed to simplify integration and portability. Deepu Talla, vice president and general manager of Autonomous Machines at NVIDIA, said, "Inferencing will become a core capability of every IoT device in the future. Our partnership with Arm will help drive this wave of adoption by making it easy for hundreds of chip companies to incorporate deep learning technology."
Baidu Introduces New AI Chip
In July 2018, Baidu introduced a new, powerful self-developed artificial intelligence chip called Kunlun, joining the ranks of Google, Nvidia, Intel, and other tech companies making processors especially for AI.
China's first cloud-to-edge AI chip, Kunlun was developed in response to the increasing demands being placed on computational power. Consisting of thousands of small cores, the chip offers computational capability around 30 times that of Baidu's original FPGA-based accelerator. Other features include a 14 nm Samsung manufacturing process, 512 GB/s of memory bandwidth, and 260 TOPS of compute while consuming 100 watts of power.
Providing both cloud and edge functionality, the innovative chip runs large-scale AI workloads and can be used across numerous AI applications such as voice recognition, search ranking, autonomous driving, natural language processing, and others.
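As a quick sanity check on Baidu's published figures, the quoted peak throughput and power draw imply a power-efficiency number that is easy to work out; a minimal sketch using only the numbers stated above:

```python
# Rough efficiency implied by Baidu's published Kunlun figures.
# These are the vendor's headline numbers, not independently measured values.
kunlun_tops = 260    # stated peak compute, in tera-operations per second
kunlun_watts = 100   # stated power consumption, in watts

# Efficiency in TOPS per watt follows directly from the two figures.
tops_per_watt = kunlun_tops / kunlun_watts
print(f"Implied efficiency: {tops_per_watt:.1f} TOPS per watt")
```

This works out to roughly 2.6 TOPS per watt, a figure often used to compare accelerators intended for power-constrained edge deployments.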
Google Develops Tensor Processing Units
Google's new Tensor Processing Unit is a chip designed specifically for artificial intelligence workloads. Representing an alternative to Nvidia's graphics processing units, the chips help the company accelerate its artificial intelligence-based tasks, such as recognizing words in audio recordings, spotting objects in photos and videos, and picking up underlying emotions in written text.
Sundar Pichai, Chief Executive Officer (CEO) at Google, said that the vast computing power of the chip becomes possible when large fleets of these third-generation TPUs, called pods, are used. "Each of these pods is now eight times more powerful than last year's version – well over 100 petaflops," said Pichai.
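Pichai's two quoted figures also let you back out what last year's pods must have delivered; a back-of-the-envelope sketch, treating "well over 100 petaflops" as a stated lower bound rather than a spec-sheet value:

```python
# Back-of-the-envelope check of the quoted TPU pod figures.
pod_v3_pflops = 100   # "well over 100 petaflops" per third-generation pod (lower bound)
speedup = 8           # "eight times more powerful than last year's version"

# Dividing the new pod's performance by the stated speedup gives
# the implied performance of the previous generation.
implied_prev_pflops = pod_v3_pflops / speedup
print(f"Implied second-generation pod: about {implied_prev_pflops:.1f} petaflops")
```

That puts the previous generation at somewhere above 12.5 petaflops per pod, consistent with the quote.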
Arm's New AI Chip Designs
In February 2018, Arm, the UK-based chip designer, introduced two new processor designs that promise to deliver excellent computational capability for companies building machine learning-powered devices.
The first design is the Arm Machine Learning (ML) Processor, whose purpose is to accelerate AI-based applications such as machine translation and facial recognition. The second is the Arm Object Detection (OD) Processor, which is meant to process and detect visual information such as people and objects. “These are new, ground-up designs, not based on existing CPU or GPU architectures,” said Jem Davies, Arm’s vice president of machine learning.
Pratik Kirve is a writer, blogger, and sports enthusiast. He holds a bachelor's degree in Electronics and Telecommunication Engineering and currently works as a Content Writer at Allied Analytics LLP. He has an avid interest in writing news articles across different verticals. When he is not following updates and trends, he spends his time reading, writing poetry, and playing football.