Neural networks are extremely large, and their computations are too energy-intensive to be practical on handheld devices. Many smartphone applications that depend on neural networks simply upload data to internet servers, which process it and send the results back to the phone. Researchers at MIT have developed a special-purpose chip that increases the speed of neural-network computations by three to seven times. Moreover, the new chip boasts a significant reduction in power consumption, 93 to 96 percent, which makes neural networks practical for smartphones as well as household appliances.
The chip's computational algorithm is simplified to one specific operation, termed the dot product. The researchers' approach is to implement the dot-product functionality inside the memory itself, eliminating the need to transfer data back and forth. Neural networks are organized in layers, and a single processing node in one layer can receive data from several nodes in the layer above.
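In plain terms, a dot product multiplies each incoming value by a corresponding weight and sums the results. A minimal sketch of that single operation (the function name and values are illustrative, not from the chip's actual design):

```python
def dot_product(inputs, weights):
    # Multiply each input by its corresponding weight and accumulate.
    assert len(inputs) == len(weights)
    total = 0
    for x, w in zip(inputs, weights):
        total += x * w
    return total

# Example: three values arriving from nodes in the layer above.
print(dot_product([1, 2, 3], [4, 5, 6]))  # 1*4 + 2*5 + 3*6 = 32
```

Because the whole computation reduces to this one pattern, it can be wired directly into memory hardware rather than executed instruction by instruction on a general-purpose processor.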
Every connection carries a weight, which indicates how large a role one node's output plays in the computation performed by the next. A node multiplies each incoming value by the weight of its connection and sums the results; this summation of multiplications is the dot product. If the dot product exceeds a certain threshold value, the node transmits the result to the nodes in the next layer, over connections with their own weights.
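The behavior of a single node described above can be sketched as follows. This is a simplified model under the article's description (a hard threshold with no transmission below it); real networks typically use smoother activation functions, and the names here are illustrative:

```python
def node_output(inputs, weights, threshold):
    # Weighted sum of incoming signals: the dot product.
    total = sum(x * w for x, w in zip(inputs, weights))
    # The node transmits its result only if the sum exceeds the threshold.
    return total if total > threshold else 0.0

# A node receiving from three upstream nodes; only the first and
# third are active, so the sum is 0.5 + 0.75 = 1.25 > 1.0.
print(node_output([1, 0, 1], [0.5, 0.9, 0.75], threshold=1.0))  # 1.25
```

The output, if transmitted, then becomes one of the inputs to each connected node in the next layer, where the same multiply-and-sum step repeats with that layer's weights.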