Nvidia has just unveiled the Tesla V100: an insane 21.1 billion transistors on an 815 mm² die. The GPU in your machine is around 200 mm², so this is roughly a 4x area increase over mainstream chips. Why did Nvidia decide to make this ultra-expensive superchip? Basically, there is a new and dire need for as much computational power as possible, for both research and commercial purposes. The market for anything deep learning related has now reached $5 billion; just four years ago, it was only half a billion. That surge of interest is insane, and for obvious reasons: deep learning has produced, and keeps producing, amazing, mind-blowing results. A major problem with deep learning models is that training them can take anywhere from a day to over a week. With a rack of six superchips ($149,000), researchers can cut that time from a week down to about eight hours. The faster the results come in, the more effective researchers can be.
Nvidia has thrown its full force behind this area. It obviously sees deep learning as the major computing platform of the next decade and wants to secure its place in that market. Nvidia is very quickly building an ecosystem around its cards: supporting every variation of deep learning algorithm it can, running a GPU cloud service, building specialized A.I. driving systems, and offering a virtual environment with physics as close to real life as possible for robots to learn in. They are betting big, and if they can secure the majority of the market, it will be extremely profitable. Jensen, the CEO of Nvidia, doesn't seem driven just by profit or market opportunity either; I get the impression he knows there is a need to boost the A.I. industry for the betterment of humanity. That means better health, cheaper manufacturing, and cheaper food production for everyone. Political systems will have to adapt to the coming onslaught of A.I.; we can only hope they do a good enough job. Corporations will gain more power from A.I., but hopefully the average individual will gain enough power of their own that corporations no longer hold the same power ratio over them.
While CPUs have seen limited gains in power over the last eight years, GPUs don't seem to be affected, since you don't have to funnel the calculations through a single line between cores. GPUs are parallel processors, so as long as you can find cheaper ways to manufacture transistors, you can find ways to keep scaling up computational power. It looks possible that the common CPU will be outpaced to the point where it becomes more of a legacy device, unable to deliver the brute force that new types of software require. The video below shows virtual robots trying to hit a puck into a goal. As one robot achieves better results than the others, all the other robots are deleted; the best one is duplicated and then varied, refining toward the best technique. Reinforcement learning like this requires the A.I. to test an enormous number of possibilities as it figures out its own boundaries.
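The selection loop in that clip boils down to a simple evolutionary strategy: score every robot, delete all but the best, then refill the population with mutated copies of the winner. Here is a minimal sketch in Python; the fitness function, population size, and noise level are all made up for illustration (a real setup would score each robot inside a physics simulation):

```python
import random

random.seed(0)  # make the run repeatable

def fitness(params):
    # Toy stand-in for "hit the puck into the goal": the closer the
    # parameter vector is to a hidden target, the higher the score.
    # (Hypothetical; a real fitness would come from a simulator.)
    target = [0.3, -0.7, 0.5]
    return -sum((p - t) ** 2 for p, t in zip(params, target))

def evolve(pop_size=20, generations=50, noise=0.1):
    # Start with a random population of candidate "robots".
    population = [[random.uniform(-1, 1) for _ in range(3)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Keep only the best performer of this generation...
        best = max(population, key=fitness)
        # ...delete the rest, and refill with mutated copies of the winner.
        population = [best] + [
            [p + random.gauss(0, noise) for p in best]
            for _ in range(pop_size - 1)
        ]
    return max(population, key=fitness)

best = evolve()
print(fitness(best))  # fitness near 0 means near-optimal
```

Every mutant in every generation can be evaluated independently, which is exactly why this kind of workload maps so well onto a massively parallel chip.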
The amount of calculation power you could put to use is unimaginably high, and Nvidia is in an extremely strong position to deliver as much power to as many people as possible for decades. AMD just doesn't have the funds to compete with Nvidia right now; Intel has a good chance to compete, as it too is making ever more specialized deep learning chips. There is no doubt about it: we are entering the A.I. revolution right now, and it is being hugely accelerated by these superchips. For fun, Nvidia called up Square Enix for a graphical test of their superchip, taking assets from the latest Final Fantasy film and running them in real time.