Google’s Tensor Processing Unit Could Advance Moore’s Law 7 Years Into The Future

An anonymous reader shares a report via PCWorld: Google says its Tensor Processing Unit (TPU) advances machine learning capability by three generations of Moore’s Law. “TPUs deliver an order of magnitude higher performance per watt than all commercially available GPUs and FPGAs,” said Google CEO Sundar Pichai during the company’s I/O developer conference on Wednesday. The chips powered the AlphaGo computer that beat Lee Sedol, the world champion of the board game Go. “We’ve been running TPUs inside our data centers for more than a year, and have found them to deliver an order of magnitude better-optimized performance per watt for machine learning. This is roughly equivalent to fast-forwarding technology about seven years into the future (three generations of Moore’s Law),” said Google’s blog post. “TPU is tailored to machine learning applications, allowing the chip to be more tolerant of reduced computational precision, which means it requires fewer transistors per operation. Because of this, we can squeeze more operations per second into the silicon, use more sophisticated and powerful machine learning models, and apply these models more quickly, so users get more intelligent results more rapidly.” The chip is called the Tensor Processing Unit because it underpins TensorFlow, the software engine behind Google’s deep learning services, which the company has released under an open-source license.
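The reduced-precision trade-off Google describes can be illustrated with a toy quantization sketch. This is plain NumPy, not Google's actual TPU arithmetic: it quantizes float32 matrices to 8-bit integers, multiplies them as integers, and rescales, showing that an 8-bit approximation of a matrix multiply stays close to the full-precision result.

```python
import numpy as np

# Toy illustration (not Google's TPU design): narrow integer multipliers
# need far fewer transistors than 32-bit floating-point units, and many
# ML workloads tolerate the resulting loss of precision.

def quantize(x, bits=8):
    """Map a float array onto signed integers of the given bit width."""
    qmax = 2 ** (bits - 1) - 1            # 127 for 8-bit
    scale = np.abs(x).max() / qmax        # per-tensor scale factor
    q = np.round(x / scale).astype(np.int32)
    return q, scale

rng = np.random.default_rng(0)
a = rng.standard_normal((64, 64)).astype(np.float32)
b = rng.standard_normal((64, 64)).astype(np.float32)

qa, sa = quantize(a)
qb, sb = quantize(b)

exact = a @ b                              # full-precision reference
approx = (qa @ qb) * (sa * sb)             # integer matmul, then rescale

rel_err = np.abs(exact - approx).max() / np.abs(exact).max()
print(f"max relative error at 8 bits: {rel_err:.4f}")
```

For workloads like neural-network inference, an error of this magnitude typically has little effect on output quality, which is the tolerance the quote relies on.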