What is an APU? Well, the short answer is that it is an Accelerated Processing Unit (APU). But what does that really mean? For AMD, it means combining a low-end graphics processing unit (GPU) with a traditional x86 CPU.
The real question that I have is, what will this do to NVIDIA? Since AMD is launching its APU, or Fusion line, with a GPU embedded in the CPU, and since Intel is launching Sandy Bridge with a GPU embedded in the CPU, what will NVIDIA do? On the extreme low end, which I define as under $100, I suspect NVIDIA will lose market share to the point of becoming irrelevant. People buying low-end desktops or laptops do not care (or even know) what kind of graphics card the computer has. On the mid- and high-end I expect NVIDIA to still be relevant, as well as in the Quadro line. But how large is that market?
However, the Tesla line might be under some pressure in the next year or so. Imagine a high-performance computer with Sandy Bridge or Fusion processors in it. Would you need, or want, to add a 200-watt Tesla to such a system? After all, with Fusion, you get a one-to-one pairing of GPU and CPU. We sure do live in exciting times, and it will be interesting to see how this plays out. For now, AMD and Intel are in the driver’s seat.
With Fusion technology from AMD, the PC industry will be changed forever. AMD is incorporating multi-core x86 CPU technology and a powerful DirectX® 11-capable, discrete-level graphics and parallel-processing engine onto a single die to create the first Accelerated Processing Unit (APU). Learn how AMD is doing that here.
@Chad: In the HPC space, the attraction is heat and power.
In a 2U blade I can get (maybe) 8 CPUs (so 16, 32, maybe 48 cores) and 1 high-end GPU, with pretty high power and thermal requirements.
Or, in a 1U blade I can get 8 CPUs with 8 mediocre GPUs, with a power requirement almost identical to 8 CPUs with no GPUs at all.
Granted, none of these has the horsepower of a discrete system, but with the immense density you could hit with racks and racks of them (like a Jaguar- or Tianhe-scale system), it’s attractive.
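The blade comparison above is easy to sanity-check with back-of-the-envelope arithmetic. Here is a minimal sketch; all the wattage figures are hypothetical placeholders chosen for illustration, not vendor specifications:

```python
# Rough power comparison of the two blade configurations described above.
# All TDP numbers below are assumptions for illustration, not real specs.

CPU_WATTS = 95       # assumed TDP of one server CPU
GPU_WATTS = 225      # assumed TDP of one high-end discrete (Tesla-class) GPU
APU_EXTRA_WATTS = 18 # assumed extra draw of a small on-die "mediocre" GPU

# 2U blade: 8 CPUs plus 1 high-end discrete GPU
config_2u = 8 * CPU_WATTS + GPU_WATTS

# 1U blade: 8 APUs, each a CPU with a small GPU on the same die
config_1u = 8 * (CPU_WATTS + APU_EXTRA_WATTS)

print(f"2U blade: {config_2u} W total, {config_2u / 2} W per rack unit")
print(f"1U blade: {config_1u} W total, {config_1u} W per rack unit")
```

Under these assumed numbers, the APU blade draws slightly less power in half the rack space, which is the density argument in a nutshell; the real trade-off is how much slower each on-die GPU is.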
Hypothetically, how useful would a Tesla be if it were only as fast as, say, a Radeon 5600, and didn’t support multiple cards? They probably wouldn’t sell any, even if it cost only $200.
The problem with the APU is that you can only get it bundled with the CPU and under the same thermal envelope. I don’t see the value in that for HPC. For notebooks, it’s a huge win.
HP will happily sell you a workstation with 2 or 4 APUs, but they’re still going to encourage you to add Teslas and Quadros too. Why would they stop trying to make money on those?
You can make the case that there are three different groups of people in the HPC game. The first is the actual programmer. Which platform is easier to write programs for? With Fusion and Sandy Bridge you have a small graphics core tightly integrated with the CPU. With NVIDIA (or a high-end AMD card), you get a large graphics card that is loosely coupled with the CPU, but is more powerful and more power-hungry.
The second group of people in the HPC game is the manufacturer. It will be interesting to see if the HPC manufacturers continue to support Tesla cards in their systems.
Finally, it will be interesting to see whether the real people in power (those who procure the systems) will insist on Tesla or will settle for integrated graphics.
But for the integrated GPU to match the performance of a discrete Tesla GPU, wouldn’t it end up drawing ~200 W as well? There are some efficiency gains from putting them on the same die, but are they really that large?