Intel launches Xeon Phi processor to build scalable machine learning technology
Time: 2016/7/8 10:10:12

In the artificial intelligence CPU market, Intel faces competition not only from the GPU industry but also from major customers such as Google that are developing their own processors. Intel has therefore chosen to scale out its systems horizontally (Scale-Out) rather than scale them up vertically (Scale-Up). The forthcoming Xeon Phi processor, for instance, is pitched on its compute capability, in the hope of winning over more machine learning operators.

The Data Center Knowledge website reported that writing software for large-scale machine learning systems is a difficult task. Intel has chosen the scale-out cluster approach commonly used in high-performance computing systems and large-scale web cloud applications to handle machine learning at this scale.
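To make the scale-out idea concrete, the sketch below shows it in miniature: the training data is split across several nodes, each node computes a gradient on its own shard, and the gradients are averaged before the shared model is updated. The node count, model, and Python/NumPy framing are illustrative assumptions, not a description of Intel's software stack.

# Minimal sketch of scale-out (data-parallel) training across several "nodes".
# Everything here is illustrative: 4 simulated nodes, a least-squares model,
# and a synchronous gradient-averaging step.
import numpy as np

def local_gradient(weights, x_shard, y_shard):
    # Least-squares gradient computed on one node's shard of the data.
    preds = x_shard @ weights
    return x_shard.T @ (preds - y_shard) / len(y_shard)

def scale_out_step(weights, shards, lr=0.1):
    # One synchronous step: every node works on its own shard (simulated
    # serially here), then the gradients are averaged (an "all-reduce").
    grads = [local_gradient(weights, x, y) for x, y in shards]
    return weights - lr * np.mean(grads, axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_w = np.array([2.0, -3.0])
    # Split the dataset across 4 simulated nodes to mimic scale-out.
    shards = []
    for _ in range(4):
        x = rng.normal(size=(256, 2))
        y = x @ true_w + rng.normal(scale=0.01, size=256)
        shards.append((x, y))
    w = np.zeros(2)
    for _ in range(200):
        w = scale_out_step(w, shards)
    print("recovered weights:", w)

Adding nodes in this scheme grows total capacity by adding more shards, rather than by making any single node bigger, which is the essence of the scale-out choice described above.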

In addition, because most customers want a system that can also handle work other than machine learning, Intel's CPU-based artificial intelligence solution will not be limited to a single function. Viewed this way, Google's highly specialized custom Tensor Processing Unit (TPU) chip will not pose too much of a threat to Intel.

Intel announced the Xeon Phi processor for high-performance computing requirements in November 2015. Xeon Phi delivers three teraflops of double-precision (Double Precision) performance and carries 16 GB of MCDRAM memory, whose power efficiency is more than five times that of GDDR5. Xeon Phi is Intel's first bootable main processor designed for highly parallel workloads.
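As a rough sanity check on the three-teraflop figure, peak double-precision throughput is commonly estimated as cores x clock x FLOPs per cycle per core. In the sketch below, the core count, AVX-512 vector width, and dual FMA units are assumptions made purely for illustration; only the 1.5 GHz clock and the roughly 3 TFLOPS target come from the article.

# Back-of-envelope peak double-precision FLOPS estimate.
# Assumed values (not from the article): 64 cores, two AVX-512 FMA units per
# core; an AVX-512 register holds 8 doubles, and an FMA counts as 2 FLOPs.
cores = 64                 # assumed core count
clock_ghz = 1.5            # clock speed cited in the article
doubles_per_vector = 8     # AVX-512: 512 bits / 64-bit doubles
flops_per_fma = 2          # multiply + add
fma_units = 2              # assumed dual vector units per core

flops_per_cycle_per_core = doubles_per_vector * flops_per_fma * fma_units
peak_tflops = cores * clock_ghz * 1e9 * flops_per_cycle_per_core / 1e12
print(f"Estimated peak: {peak_tflops:.1f} TFLOPS double precision")
# -> roughly 3 TFLOPS, consistent with the figure quoted above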

Intel points out that, compared with accelerator products, Xeon Phi provides greater scalability and can handle a wider variety of workloads and configurations.

Through the Intel Scalable System Framework (SSF), Xeon Phi can increase the processing scale of a system while, at a processing speed of 1.5 GHz, still maintaining a lower energy budget than other GPU- or PCIe-based systems.

Charles Wuischpard, Vice President of Intel's Data Center Group and General Manager of its high-performance computing platform group, pointed out that a Xeon Phi array is 1.38 times faster than a GPU. In a 128-node array, Xeon Phi performs deep learning tasks at 50 times the speed of the GPU.
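One way to put the 50x figure in perspective is as a parallel efficiency number, under the assumption (one plausible reading; the article does not state the baseline explicitly) that the speedup is measured relative to a single node: efficiency is simply speedup divided by node count.

# Hypothetical reading of the quoted figures: a 128-node array delivering a
# 50x speedup relative to one node would imply this parallel efficiency.
nodes = 128
speedup = 50.0
efficiency = speedup / nodes
print(f"Implied parallel efficiency: {efficiency:.0%}")  # roughly 39%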

Intel hopes the Xeon Phi processor will further promote the development of deep learning in artificial intelligence. AI-driven image and speech recognition has become an indispensable part of many web companies' businesses, and the Xeon Phi processor is expected to be of considerable help to these companies.

Xeon Phi and GPUs specialize mainly in the training side of machine learning; the other side, inference, is already handled by Xeon processors.

Beyond hardware, Intel has invested in the development of machine learning software tools and libraries, has begun training partners, and has set up an early access program (Early Access Program) for top research institutions. Through these partnerships, Intel is providing ongoing machine learning training to 100,000 developers.