Seven major chip suppliers recently announced that they will jointly define a server accelerator interconnect, forming an alliance that spans the ARM, x86, and Power processor architectures...
Seven chip suppliers recently announced that they will jointly define a cache-coherent interconnect architecture for server accelerators, giving cloud computing applications an alternative to Intel and Nvidia. The effort is a collaboration across server hardware based on the ARM, x86, and Power processor architectures.
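To make concrete what a cache-coherent accelerator link changes for software, here is a minimal C sketch; it is not drawn from the announcement, and the device calls are hypothetical stubs standing in for real driver routines. Over a non-coherent bus such as plain PCI Express, the host must explicitly stage buffers to and from the device, whereas over a coherent link of the kind CCIX proposes, the processor and accelerator can work on the same memory and rely on the hardware to keep their caches consistent.

    /* Conceptual sketch only: the "device" calls below are hypothetical stubs,
     * so the program compiles and runs on an ordinary host. It contrasts the
     * two data-movement models discussed in the article. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Stand-ins for DMA to/from device memory and for the accelerator's work. */
    static void dev_copy_to(void *dev_buf, const void *host_buf, size_t n)   { memcpy(dev_buf, host_buf, n); }
    static void dev_copy_from(void *host_buf, const void *dev_buf, size_t n) { memcpy(host_buf, dev_buf, n); }
    static void dev_run_kernel(float *buf, size_t n) { for (size_t i = 0; i < n; i++) buf[i] *= 2.0f; }

    int main(void) {
        enum { N = 4 };
        float host[N] = {1, 2, 3, 4};

        /* Non-coherent model: data is staged into a separate device buffer,
         * processed there, and copied back before the CPU may read it. */
        float *dev = malloc(N * sizeof(float));
        dev_copy_to(dev, host, sizeof host);
        dev_run_kernel(dev, N);
        dev_copy_from(host, dev, sizeof host);
        free(dev);
        printf("after non-coherent offload: %g %g %g %g\n", host[0], host[1], host[2], host[3]);

        /* Coherent model (what CCIX-style links aim for): CPU and accelerator
         * operate on the same allocation; the interconnect keeps their caches
         * consistent, so no explicit staging copies are needed. */
        float *shared = malloc(N * sizeof(float));
        for (size_t i = 0; i < N; i++) shared[i] = host[i];
        dev_run_kernel(shared, N);   /* accelerator reads and writes in place */
        printf("after coherent offload:     %g %g %g %g\n", shared[0], shared[1], shared[2], shared[3]);
        free(shared);
        return 0;
    }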
The seven chipmakers, including AMD, ARM, Huawei, IBM, Mellanox, Qualcomm, and Xilinx, will define the Cache Coherent Interconnect for Accelerators (CCIX) specification for server accelerators. The alliance expects to publish a draft of the specification by the end of the year, but has so far not announced any technical or financial details of the collaboration.
Intel last year paid $16.7 billion for programmable-logic supplier Altera, in part to use the latter's FPGAs as accelerators for its Xeon server processors, and has already begun rolling out products that integrate an Altera FPGA and a Xeon processor in a single package. Meanwhile, other processor suppliers had each separately approached Xilinx about establishing cache-coherent links to their chips, which prompted Xilinx to propose the idea of defining a single link for all processors.
Since last year, demand for accelerator chips to boost processor performance has spread like wildfire across the computing industry, driven mostly by the internet giants, which have started using a new generation of machine-learning algorithms in emerging applications such as speech recognition, image recognition, and contextual search.
At the recently concluded Google I/O conference, Google also announced that it has developed its own accelerator chip, the Tensor Processing Unit (TPU), which attaches over the relatively slow and non-cache-coherent PCI Express bus. The TPU is already deployed in Google's data centers for a variety of tasks and is seen as one of the differentiating features of the company's cloud services.
Microsoft and Baidu are also using FPGAs in their data centers to speed up a growing range of tasks, from search engines to network security; they typically deploy the FPGAs on PCIe cards. Nvidia, for its part, earlier this year announced Pascal, its first graphics processor with the cache-coherent NVLink interface; the link is used to connect Nvidia GPUs to one another and to IBM Power architecture processors.
Earlier this year, Facebook published a GPU-based server design for artificial-intelligence tasks, while one of Google's top engineers recently disclosed that the company is using a growing number of GPUs in its data centers.
Beyond machine learning, the CCIX members said, the interface they are developing will help accelerators in applications including big-data analytics and network processing. The role of function-specific accelerators in assisting general-purpose processors is becoming ever more important, because keeping the latter on the pace of Moore's Law is getting increasingly expensive. The CCIX standard could apply to a broad range of accelerators and server processors, but the member companies have not yet revealed concrete plans.
Xilinx said the specification will apply to its 16nm FPGAs, but did not say when products would ship; Mellanox may adopt the specification in its high-end network controllers and in the network processors it gained through its acquisition of EZchip. IBM earlier this year showed a diagram of its Power9 processor, expected in 2017, whose cache-coherence interface should be CCIX.
Qualcomm and Huawei may use the new interface in the single-chip ARM architecture server processors they are developing; AMD should adopt CCIX in its future ARM and x86 architecture server processors and Radeon graphics processors, although so far AMD has not pushed its GPUs into the accelerator market as emphatically as rival Nvidia.
CCIX will complement open FPGA application programming interfaces (APIs)
The CCIX group has not yet decided whether its interface specification will use a royalty-free licensing model or be handed over to a standards body to administer. Nor has it disclosed any technical targets for the interface's bandwidth, data rate, or latency, saying only that the relevant parameters will be at least comparable to the alternatives.
"We will be able to compete with all the existing alternative technologies, in some cases even can better;" The spirit, vice President of architecture Gaurav Singh said: "we are going to have a single span x86, the Power and the consistency of the ARM architecture agreement ©¤ ©¤ this is never had before."
Internet giants such as Google are preparing to explore the possibility of moving from Intel's x86 architecture to alternatives such as ARM or Power; earlier this year a Microsoft engineer said the ever-growing workloads of data centers will force a rethinking of computing architectures, some of it centered on new kinds of accelerators.
Red Hat has recently been leading an effort among open-source developers to overhaul server software, including support for a software programming interface for FPGA accelerators. Singh said CCIX is complementary to that work: "CCIX does not define the application programming interface (API); that part will be led by the software providers. CCIX will require some changes to platform software to support it, but we do not expect changes to the operating system."
"Because we will have a selective and benefit, there are like CCIX such alternative to industry is a good thing;" Red Hat is responsible for the above for accelerator open programming interfaces of Jon Masters said CCIX provides industry need some functions; QPI at Intel and IBM CAPI (Coherent Accelerator Processor Interface), another good alternative technologies. He pointed out that in the software part, the goal is to define the use of the openness of the accelerator software interface, so whether the underlying interconnection technology are QPI, PCI Express, CAPI or CCIX, etc., have free driver programming environment.
CCIX members have a large amount of existing technology to draw on: ARM has a coherent SoC interconnect, and AMD leads the Heterogeneous System Architecture Foundation, which has developed cache-coherent CPU-to-GPU links for mobile processors; in addition, IBM's Coherent Accelerator Processor Interface (CAPI) is already used in its Power architecture chips.
Karl Freund, a senior analyst at market research firm Moor Insights and Strategy, believes the potential benefits will be substantial if CCIX takes off, but he does not expect to see results until 2019 or even 2020, because the interconnect will have to wait for a new generation of processor cores, such as IBM's Power9, AMD's Zen, and new ARM cores, to arrive.
Freund also noted that one name is missing from the seven companies promoting CCIX: Nvidia, whose absence could become a problem given its push into accelerated computing and the valuable software ecosystem it has built around it.