Researchers at IBM recently published a paper describing a concept they call the Resistive Processing Unit (RPU). Compared with a conventional CPU, the proposed chip could accelerate the training of deep neural networks by as much as 30,000 times.
A deep neural network (DNN) is an artificial neural network with multiple hidden layers. Such a network can be trained with supervised or unsupervised learning, and the result is a machine that can "learn" on its own; this branch of machine learning (or artificial intelligence) is known as deep learning.
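To make the idea of "multiple hidden layers" concrete, here is a minimal numpy sketch of the forward pass of such a network. The layer sizes and the ReLU activation are illustrative assumptions, not details taken from the IBM paper.

```python
import numpy as np

def relu(x):
    # Non-linearity applied between layers
    return np.maximum(0.0, x)

rng = np.random.default_rng(0)

# Illustrative layer sizes: 784 inputs, two hidden layers, 10 outputs
sizes = [784, 256, 128, 10]
weights = [rng.standard_normal((m, n)) * 0.01
           for m, n in zip(sizes[1:], sizes[:-1])]

def forward(x):
    # Each hidden layer is a weight matrix followed by a non-linearity;
    # "deep" simply means there is more than one such hidden layer.
    for W in weights[:-1]:
        x = relu(W @ x)
    return weights[-1] @ x  # raw output scores

scores = forward(rng.standard_normal(784))
print(scores.shape)  # (10,)
```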
Shortly before the paper appeared, AlphaGo, the Go program from Google DeepMind (Alphabet) that defeated Lee Sedol in their man-versus-machine match, used a similar class of algorithms. AlphaGo combines a tree-search algorithm with two deep multi-layer neural networks containing millions of connections. One, the "policy network", estimates which move has the highest winning chance; the other, the "value network", tells AlphaGo how a position favors black or white, so that the depth of the search can be reduced.
Because of these promising results, many machine learning researchers have turned their attention to deep neural networks. To reach a useful level of intelligence, however, these networks demand an enormous amount of computation: AlphaGo, for instance, ran on thousands of chips. That makes them extremely resource-hungry and expensive. The chip concept now proposed by IBM's researchers could deliver the computing power of thousands of conventional chips, and if tens of thousands of such chips were combined, AI capabilities might take another leap forward.
The chip, called the RPU, exploits two characteristics of deep learning algorithms: locality and parallelism. Building on next-generation non-volatile memory (NVM) technology, it stores each weight locally, right where it is used, so that data movement during training is minimized. The researchers say that with RPU devices deployed at scale, on the order of a billion, the weight training of a deep neural network could be accelerated by up to 30,000 times: results that would normally take thousands of machines days to compute could be obtained with such a chip in a few hours, and at much lower power consumption.
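The numpy sketch below illustrates the three operations that dominate training and that the RPU concept keeps local: the forward vector-matrix product, the backward product through the transpose, and the rank-one weight update. On a conventional processor the weight matrix W has to be shuttled between memory and compute for every one of these steps; on an RPU crossbar they would be carried out at the cells that store W. The sizes and learning rate here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_out, n_in = 512, 784                         # illustrative layer size
W = rng.standard_normal((n_out, n_in)) * 0.01  # weights (held in the crossbar on an RPU)
x = rng.standard_normal(n_in)                  # input activations
lr = 0.01                                      # illustrative learning rate

# Forward pass: y = W x
y = W @ x

# Backward pass: propagate an error signal through the transpose, z = W^T delta
delta = rng.standard_normal(n_out)             # stand-in for the backpropagated error
z = W.T @ delta

# Weight update: rank-one outer product, W <- W - lr * delta x^T.
# All three steps read or modify W in place; this locality is what lets
# the RPU avoid moving the weights during training.
W -= lr * np.outer(delta, x)
```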
Of course, the paper is only a concept and the chip is still at the research stage; and since next-generation non-volatile memory has not yet reached the mainstream market, it will likely take several years before such chips can be commercialized. But if the chips really do offer such large advantages in computing power and energy efficiency, the AI giants doing research and building applications, such as Google and Facebook, will surely take notice. IBM itself is also an active player in AI and big data, so if the product does make it to market, demand should not be a worry.