Google has designed its own computer chip for driving deep neural networks, the artificial-intelligence technology that is reinventing the way its internet services operate.
On the morning of the 18th, at Google I/O 2016, CEO Sundar Pichai said that Google had designed an ASIC (Application-Specific Integrated Circuit) built specifically for deep neural networks: networks of hardware and software that can learn a particular task by analyzing large amounts of data. Google uses neural networks to identify objects and faces in photographs, to understand the voice commands you speak to your Android phone, and to translate text from one language to another. The technology has even begun to change Google's search engine.
Google calls the chip the Tensor Processing Unit (TPU), because it underpins TensorFlow, the software engine that drives the company's deep-learning services and its second-generation machine-learning system.
Last fall, Google released TensorFlow under an open-source license, which means anyone outside the company can use and even modify the software engine. Google will not share the TPU's design, but outsiders can use its machine-learning hardware and software through various Google cloud services.
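To make "using the software engine" concrete, here is a minimal sketch of a TensorFlow program in the 1.x Session-based API that was current in 2016; the matrix values are arbitrary and chosen only for illustration.

```python
# Minimal TensorFlow 1.x sketch: build a tiny dataflow graph and run it.
# (Illustrative only; values are arbitrary.)
import tensorflow as tf

# Define two constant 2x2 matrices and a matrix multiplication node.
a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
b = tf.constant([[5.0, 6.0], [7.0, 8.0]])
product = tf.matmul(a, b)

# Execute the graph in a session (TensorFlow 1.x style).
with tf.Session() as sess:
    print(sess.run(product))  # [[19. 22.], [43. 50.]]
```

The same graph-based programs can be run on CPUs, GPUs, or, inside Google's cloud, the TPU hardware described here, which is what lets outsiders benefit from the chip without ever seeing its design.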
Google is just one of many companies weaving deep learning into a wide range of internet services; Microsoft, Facebook, and Twitter are doing the same. In general, these internet giants drive their neural networks with graphics processing units, or GPUs, from chip makers such as NVIDIA. But some companies, including Microsoft, are also exploring field-programmable gate arrays (FPGAs), chips that can be reprogrammed for specific tasks.

According to Google, a TPU board fits into the same slot as a hard drive in the racks of its data centers, and compared with other hardware it delivers "an order of magnitude better-optimized performance per watt for machine learning." "TPUs are tailored to machine learning applications, allowing the chip to be more tolerant of reduced computational precision, which means it requires fewer transistors per operation," Google said in a blog post. "Because of this, we can squeeze more operations per second into the silicon, use more sophisticated and powerful machine learning models, and apply these models more quickly, so users get more intelligent results more rapidly."
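The quoted passage turns on reduced computational precision. As a rough illustration of the general idea, and not Google's actual TPU arithmetic, the sketch below uses a simple linear quantization scheme to store 32-bit floating-point weights as 8-bit integers: each value occupies a quarter of the bits, which is why lower-precision arithmetic can get by with fewer transistors per operation.

```python
# Illustrative quantization sketch (an assumed simple scheme, not the
# TPU's actual method): compress float32 weights into int8 values.
import numpy as np

weights = np.array([0.12, -0.53, 0.98, -0.07], dtype=np.float32)

# Map the float range onto signed 8-bit integers via a single scale factor.
scale = np.abs(weights).max() / 127.0
q = np.round(weights / scale).astype(np.int8)   # stored in 8 bits each
dequantized = q.astype(np.float32) * scale      # approximate recovery

print(q)                                     # [ 16 -69 127  -9]
print(dequantized)                           # close to the original weights
print(np.abs(weights - dequantized).max())   # small quantization error
```

The trade is exactly the one Google describes: a little precision is given up, and in exchange far more operations fit into the same silicon and power budget.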
Among other things, this means Google will not use chips from the likes of Nvidia in the way other companies do, or will at least use far fewer of them. It also suggests that Google's move to build its own chips is bad news for chip manufacturers, particularly the world's largest: Intel. Intel processors drive the servers inside Google's enormous data centers, and Intel's worry is that Google will one day design its own central processing units as well.