Nvidia CEO Spills The Beans On The Future Of GeForce GPUs

During his keynote at Nvidia's GPU Technology Conference, company CEO Jen-Hsun Huang shared a handful of details about the upgrades planned for the next two generations of graphics chips.

Following NVidia’s current Kepler graphics chip will be “Maxwell” which will be released early in 2014. The main draw of Maxwell is the introduction of “unified virtual memory” which allows the GPU to see the contents of the system RAM and vice versa. This would make programming the GPU much easier, especially in GPGPU applications.

Following Maxwell will be "Volta." Volta will move its graphics memory onto the same silicon as the GPU, with the on-chip memory stacked vertically. Together, these design decisions should increase memory bandwidth substantially over current architectures. According to Nvidia's CEO, Volta's integrated memory will boast a whopping 1 TB/s, more than three times the memory bandwidth of Nvidia's $1,000 Titan.
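
Bandwidth figures like that are usually checked with a simple copy benchmark. A rough sketch of how one might measure effective memory bandwidth with CUDA events (buffer size and iteration count are arbitrary choices):

    #include <cuda_runtime.h>
    #include <cstdio>

    int main() {
        const size_t bytes = 256ull << 20;   // 256 MiB per buffer (arbitrary)
        const int iters = 50;
        float *src, *dst;
        cudaMalloc(&src, bytes);
        cudaMalloc(&dst, bytes);

        cudaEvent_t start, stop;
        cudaEventCreate(&start);
        cudaEventCreate(&stop);

        cudaEventRecord(start);
        for (int i = 0; i < iters; ++i)
            cudaMemcpy(dst, src, bytes, cudaMemcpyDeviceToDevice);
        cudaEventRecord(stop);
        cudaEventSynchronize(stop);

        float ms = 0.0f;
        cudaEventElapsedTime(&ms, start, stop);
        // Each copy reads and writes every byte, so it moves 2x bytes.
        double gbps = (2.0 * bytes * iters) / (ms / 1000.0) / 1e9;
        printf("Effective bandwidth: %.1f GB/s\n", gbps);

        cudaFree(src);
        cudaFree(dst);
        return 0;
    }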

Jen-Hsun Huang didn’t pin down Volta’s release window, but we don’t expect it to be released before 2016. NVidia has a tradition of releasing new architectures every two years whereas Fermi was released in 2010, Kepler was released in 2012 and Maxwell is scheduled for release in 2014.

Comments

To fly or not to fly?

SSD and stacked unified RAM on a card, in whatever arrangement you can think of, wouldn't work in the short or the long run. It's a bad idea, given the speed of SSDs, and what you would get on the chip would pretty much limit or break current PC user control and general efficiency in a few worst-case scenarios. I'm not saying it isn't interesting; I'm just saying it isn't groundbreaking, and I buy cards for the same price as consoles. They aren't fooling anyone with an ounce of common sense; whatever they do with CUDA isn't going to pay off in that area, never has and never will. We will all be dead before that, and something better will be found before or after. Whatever the payoff, it will not be seen in PC graphics, which is what people buy these cards for. It's interesting, even for games, but it isn't going to take off because it was never created to fly. Yes, there might be problems, and it's hardware, drivers, and DirectX/OpenGL all combined. Einstein didn't need RAM for his equations, and neither did Newton. I'm with the first post: direct to the brain. Get on with it! :)

Nvidia's solution to reducing

Nvidia's solution to reducing die size? I can't imagine how they would stand up as a processor manufacturer. Gee, let's dump out all the excess cache, kill a few threads here and there, factory-clock the processors for maximum gaming, and we're done.

Today GPU programmers have to

Today GPU programmers have to transfer data from system RAM to GPU RAM. They don't have to code the data transfer itself, but they do have to specify what to transfer and in which direction (system to GPU or GPU to system). If the name "unified virtual memory" is anything to go by, I'd assume it will make those transfers automatic, lowering the complexity of GPU coding and the number of bugs that arise from forgetfulness. It shouldn't be noticeable to gamers and other end users at all, but it's a plus for programmers (who are the target audience for this announcement).
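
For reference, the status quo the commenter describes looks roughly like this in CUDA today; note the explicit direction flag on every copy (the kernel body is a stand-in for real work):

    #include <cuda_runtime.h>

    // Stand-in for real work.
    __global__ void process(float *data, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) data[i] += 1.0f;
    }

    void runOnGpu(float *hostData, int n) {
        float *devData;
        size_t bytes = n * sizeof(float);
        cudaMalloc(&devData, bytes);

        // The programmer spells out what to move and in which direction.
        cudaMemcpy(devData, hostData, bytes, cudaMemcpyHostToDevice);
        process<<<(n + 255) / 256, 256>>>(devData, n);
        cudaMemcpy(hostData, devData, bytes, cudaMemcpyDeviceToHost);

        cudaFree(devData);
    }

    int main() {
        float host[1024] = {0};
        runOnGpu(host, 1024);
        return 0;
    }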

¡Memory X2!

Great, though maybe that's too big a word for it. It's a small upgrade, something relative. We might be stuck with this for a few years if he didn't mention anything else or saved other details for the future. Wait, it says "the main draw." Morale hit. Unless you are doing a lot of scientific computation with those GPUs, I don't see this speed giving you an edge from a gaming perspective. Games don't depend on personal computers anymore; they depend on consoles, which narrows my point. Multiplatform games, multiplatform hardware. Like a Swiss knife, it has its uses, for military campaigns. A Sun Tzu kind of game: economics, the least resources, the most efficiency, and so on. But in a civilian kitchen you don't want a Swiss knife; you want a Saji set of knives. Double the post, double the memory! :) No hate.

Baking

Memory, memory and, let me guess, more memory. I wish they could come up with something new to give us a real edge. That would make for good news; this one is a bit bland. Well, back to baking GPUs.

Memory

A computer (in general) is as fast as its slowest component. As a programmer (both CPU and GPGPU), the slowest component IS the memory, so speeding it up sounds great.

The memory is extremely fast

The memory is extremely fast right now. A 2800 MHz overclock? Are you kidding me? Most systems can't even use the memory's 2800 MHz because the processor bottlenecks the memory speed. That's why it's not worth buying memory faster than 1333 MHz; it's just a waste of money.

lol

Wrong. There is a lot more to a component's speed than clock. When will kids ever learn? If you have any background in overclocking memory, you'd know that latencies (such as RAS and RAS-to-CAS) have gone up steadily since the days of DIMM memory. Other things make up for the higher latency, but in the end what we have is the CPU idling for several nanoseconds until the memory responds, and even longer until it sends all the data the cache needs, and that process repeats itself way too many times over the span of a single second, creating noticeable delays in program execution. RAM is one of the great bottlenecks of modern computing, and shoving memory inside the CPU is one way to improve memory latency and transfer speed (notice I said speed, not throughput). So please, don't spread ignorance around just because you're too lazy to get your facts straight; the internet has enough of that without you adding to it.
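
The point about the CPU idling on memory can be seen with a pointer-chasing microbenchmark, where each load depends on the previous one so the full memory latency is exposed on every hop. A rough host-side sketch (array size and hop count are arbitrary):

    #include <algorithm>
    #include <chrono>
    #include <cstdio>
    #include <numeric>
    #include <random>
    #include <vector>

    int main() {
        // ~128 MB of indices, far larger than any CPU cache.
        const size_t n = 1 << 24;
        std::vector<size_t> next(n);
        std::iota(next.begin(), next.end(), 0);
        std::shuffle(next.begin(), next.end(), std::mt19937_64(42));

        // Dependent load chain: the CPU cannot issue the next load
        // until the previous one returns.
        size_t idx = 0;
        const size_t hops = 10000000;
        auto t0 = std::chrono::steady_clock::now();
        for (size_t i = 0; i < hops; ++i)
            idx = next[idx];
        auto t1 = std::chrono::steady_clock::now();

        double ns = std::chrono::duration<double, std::nano>(t1 - t0).count();
        // Printing idx keeps the compiler from optimizing the loop away.
        printf("~%.1f ns per dependent load (idx=%zu)\n", ns / hops, idx);
        return 0;
    }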

I disagree with memory being

I disagree with memory being the slowest part; it's the hard disk for sure, and even SSDs are way behind the rest of the parts. Even the fastest GPUs are not limited by memory bandwidth in 90 percent of situations, and the 10 percent where they are is usually with V-sync disabled, which is a bad idea anyway; always keep V-sync on for best quality. I did some tests, and performance always increased when the GPU core clock was raised, not the memory clock, and that was on a GTX 680. The thing is, if this card has three times the bandwidth, then I hope the GPU can throw out three times the geometry or effects; otherwise it is going to waste.

Both

Both of you are right in saying that memory is fast. The average computer game doesn't require faster memory. However, when running heavy algorithms on the GPGPU together with the CPU, memory operations like copying between CPU and GPU and allocating memory are still slow (even the official Nvidia CUDA docs state this). To gamers: faster memory isn't necessarily interesting. To researchers: faster memory is interesting.
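
On the copy-speed point: one mitigation the CUDA documentation discusses is pinned (page-locked) host memory, which speeds up CPU <-> GPU transfers. A minimal sketch comparing a pageable and a pinned copy (buffer size is arbitrary):

    #include <cuda_runtime.h>
    #include <cstdio>
    #include <cstdlib>

    // Times one host-to-device copy from the given host buffer.
    static float timeCopy(float *dev, float *host, size_t bytes) {
        cudaEvent_t start, stop;
        cudaEventCreate(&start);
        cudaEventCreate(&stop);
        cudaEventRecord(start);
        cudaMemcpy(dev, host, bytes, cudaMemcpyHostToDevice);
        cudaEventRecord(stop);
        cudaEventSynchronize(stop);
        float ms = 0.0f;
        cudaEventElapsedTime(&ms, start, stop);
        cudaEventDestroy(start);
        cudaEventDestroy(stop);
        return ms;
    }

    int main() {
        const size_t bytes = 64ull << 20;   // 64 MiB (arbitrary)
        float *dev, *pageable, *pinned;
        cudaMalloc(&dev, bytes);
        pageable = (float *)malloc(bytes);  // ordinary pageable memory
        cudaMallocHost(&pinned, bytes);     // page-locked host allocation

        printf("pageable: %.2f ms\n", timeCopy(dev, pageable, bytes));
        printf("pinned:   %.2f ms\n", timeCopy(dev, pinned, bytes));

        free(pageable);
        cudaFreeHost(pinned);
        cudaFree(dev);
        return 0;
    }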

