Hello CGPUs, goodbye nVIDIA...

Miscellaneous, 11 February 2007, written by Huddy. 14 comments. Viewed: 549 times.

Intel has decided to start developing its own GPUs.

The technical explanation is below, in English (thanks to Mr. Copy/Paste).



WE FIRST TOLD you about Intel's GPU plans last spring, and the name, Larrabee, last summer. That brings up the question of just what the heck it is, other than the utter death of Nvidia.
VR-Zone stumbled across some details about Larrabee recently (nice catch, guys), so I guess that makes it open season on info. VRZ got it almost dead on: the target is 16 cores in the early-2009 time frame, but that is not a fixed number. Due to the architecture, it can go down in an ATI x900/x600/x300 fashion, maybe 16/8/4 cores respectively, but technically speaking it can also go up by quite a bit.

What are those cores? They are not GPUs; they are x86 'mini-cores', basically small, dumb, in-order cores with a staggeringly short pipeline. They also run four threads per core, for a total of 64 threads per "CGPU". To make this work as a GPU you need instructions, vector instructions, so a hugely wide vector unit is strapped onto each core. The instruction set, an x86 extension for those paying attention, will have a lot of the functionality of a GPU.

What you end up with is a ton of threads running on a super-wide vector unit with the controls in x86. You use the same tools to program the GPU as you do the CPU, the same mnemonics, the same everything. That also makes it a snap to use the GPU as an extension of the main CPU.
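
To make the "same tools, same mnemonics" point concrete, here is a minimal C sketch using today's 128-bit SSE intrinsics as a stand-in; Larrabee's actual vector width and the exact x86 extension it will use are not public, so the four-lane loop below is an assumption for illustration only.

/* Add two float arrays four lanes at a time with ordinary x86 SSE code.
 * A wider vector unit would run the same kind of loop, just covering
 * more lanes per instruction. n is assumed to be a multiple of 4. */
#include <xmmintrin.h>   /* SSE intrinsics; compile with -msse */

void vec_add(float *dst, const float *a, const float *b, int n)
{
    for (int i = 0; i < n; i += 4) {
        __m128 va = _mm_loadu_ps(a + i);
        __m128 vb = _mm_loadu_ps(b + i);
        _mm_storeu_ps(dst + i, _mm_add_ps(va, vb));
    }
}

The point is that this compiles with the same compiler, debugs with the same debugger, and uses the same mnemonics as any other CPU code.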

Rather than simply making the traditional 3D pipeline (put points in space, connect them, paint the resultant triangles, then twiddle them) run faster, Intel is throwing that out the window. Instead you get the tools to do things any way you want; if you can build a better mousetrap, you are more than welcome to do so. Intel will support you there.
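
As an illustration of what "throwing the fixed pipeline out the window" means in practice, here is a minimal C sketch of the "paint the resultant triangles" step written as ordinary code; it is a generic half-space rasteriser, not anything Intel has published, and every name in it is made up for the example.

typedef struct { float x, y; } vec2;

/* Signed area of the parallelogram (a, b, p): its sign tells which side
 * of the edge a->b the point p lies on. */
static float edge(vec2 a, vec2 b, vec2 p)
{
    return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
}

/* Fill one counter-clockwise triangle into a width*height framebuffer
 * of 32-bit pixels. On a chip like this the whole stage is just code,
 * so you are free to replace it with a smarter scheme. */
void fill_triangle(unsigned *fb, int width, int height,
                   vec2 v0, vec2 v1, vec2 v2, unsigned color)
{
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            vec2 p = { x + 0.5f, y + 0.5f };
            /* Inside if the pixel centre is on the same side of all
             * three edges. */
            if (edge(v0, v1, p) >= 0 &&
                edge(v1, v2, p) >= 0 &&
                edge(v2, v0, p) >= 0)
                fb[y * width + x] = color;
        }
    }
}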

Those are the cores, but how are they connected? That one is easy: a hugely wide, bi-directional ring bus. Think four digits of bit width rather than three, and Tbps rather than Gbps of bandwidth. It should be 'enough' for the average user; if you need more, well, now is the time to contact your friendly Intel exec and ask.
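
A quick back-of-the-envelope check on the "Tbps, not Gbps" claim; the 1024-bit width and 2 GHz clock below are assumptions picked for illustration, not published Larrabee figures.

#include <stdio.h>

int main(void)
{
    double width_bits = 1024.0;  /* assumed: "four digits of bit width" */
    double clock_hz   = 2.0e9;   /* assumed 2 GHz ring clock            */
    double directions = 2.0;     /* bi-directional ring                 */

    /* 1024 bits x 2 GHz x 2 directions = 4.096e12 bits/s, i.e. ~4 Tbps */
    double tbps = width_bits * clock_hz * directions / 1e12;
    printf("Aggregate ring bandwidth: %.1f Tbps\n", tbps);
    return 0;
}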

As you can see, the architecture is stupidly scalable: if you want more cores, just plop them on; if you want fewer, delete nodes, not a big deal. That is why we said 16, but it could change more or less on a whim. The biggest problem is bandwidth usage as a limiter to scalability, but 20- and 24-core variants seem quite doable.

The current chip is 65nm and was set for first silicon in late '07, last we heard, but this was undoubtedly delayed when the project was moved from late '08 to '09. The part being worked on now is a test chip; if you see a production part, it will almost assuredly be on 45 nanometres, though if the test chip works out spectacularly it could be made into a production piece. What would have been a hot and slow single-threaded CPU is an average GPU nowadays.

Why bring up CPUs? When we first heard about Larrabee, it was undecided where the thing would slot in: CPU or GPU. It could have gone the way of Keifer/Kevet, or been promoted to full CPU status. There was a lot of risk in putting out an insanely fast CPU that can't run a single thread at speed to save its life.

The solution would be to plop a Merom core or two in the middle, but seeing as the chip was already too hot and too big, that isn't going to happen, so instead a GPU was born. I would think the whole notion of the GPU is going away soon, as the whole concept gets pulled on-die, or more likely adapted as tiles on a "Fusion-like" marchitecture.

In any case, the whole idea of a GPU as a separate chip is a thing of the past. The first step is a GPU on a CPU like AMD's Fusion, but this is transitional. Both sides will pull the functionality into the core itself, and GPUs will cease to be. Now do you see why Nvidia is dead?

So, in two years, the first steps to GPUs going away will hit the market. From there, it is a matter of shrinking and adding features, but there is no turning back. Welcome the CGPU. Now do you understand why AMD had to buy ATI to survive?


Link:
http://www.theinquirer.net/default.aspx?article=37548
Claus35
 
Elite user
Added:
11-02-2007 08:07:43
Replies/Posts:
5410/123
Uuuhhhh.... Sounds exciting :)


Ironfox
 
Super user
Added:
11-02-2007 09:04:18
Replies/Posts:
382/37
That actually sounds like exactly the right solution. And I think we will see more of this kind of thing.

It will be a long time before we get past 5 GHz on the CPU. Instead, as the NANOmetre gets smaller, we will probably get many more functions in the same chip.


Goaguy
 
Elite user
Added:
11-02-2007 09:34:07
Replies/Posts:
3410/199
Sounds brilliant 😉


Moze15
 
Super user
Added:
11-02-2007 13:28:28
Replies/Posts:
1235/80
#2
A nanometre (nm) doesn't get smaller, it is and remains the same.... i.e. one billionth of a metre.
You probably mean when they use smaller transistors..


The
 
Elite user
Added:
11-02-2007 13:31:04
Replies/Posts:
1406/100
Sounds very exciting.. but pretty wild. If I'm reading it right, they would take a CPU and put it in as a GPU... I'd love to see that with a Core 2 Duo :e



bornholm
 
Elite user
Added:
11-02-2007 13:43:12
Replies/Posts:
2958/84
More cores in a PC? Not many programs can even use the two I already have at the moment. So what the f..... do we need more for, plus massive power consumption x(


-Timberwolf-
 
Super user
Added:
11-02-2007 13:45:08
Replies/Posts:
421/13
I wonder what effect this will have on NVidia's shares?

Hmm, then DAAMIT will just buy NVidia too. And after that Intel buys AMD/ATI/NVidia..

They do have the money for that, don't they, Intel?


But that would be a pretty bad development.


Rednalyn2
 
Super user
Added:
11-02-2007 14:51:30
Replies/Posts:
558/73
The next step for Nvidia will presumably be to invade the CPU market with their own CPU.


Mathis77
 
Elite user
Added:
11-02-2007 18:30:52
Replies/Posts:
6293/359
#6 ?? The talk about multicore never amounting to anything has to stop now, plz :) And the whole "it can't use more than 2 cores anyway" thing... just read a bit about what is happening with software this year alone, where plenty of games are already being built for 4 cores... and it just keeps going. Besides, this thread isn't about multicore anyway, but about a completely different technique.

It's as if people are prejudiced against progress - but why?.. They surely know more about this than 10x super users here on the site, right? :i

#8 Yes, or a CPU from Microsoft - but I believe more in AMD/Intel solutions. Intel's roadmap up to 2011 looks far too strong.


Mathis77
 
Elite user
Added:
11-02-2007 18:36:13
Replies/Posts:
6293/359
#8 The way I read it, Nvidia probably won't be taking many more steps; instead they'll end up under the Intel name :D


DjeavleN
 
Elite user
Added:
11-02-2007 19:01:23
Replies/Posts:
2959/206
2009... That's not for another 2 years... I'm pretty sure nVidia will have come up with something that can match it by then...


Mathis77
 
Elite user
Added:
11-02-2007 19:42:25
Replies/Posts:
6293/359
I thought Nvidia had teamed up with Intel, the same way ATI was bought by AMD? I mean, that that's what is going to happen?

I wouldn't mind saving up a bit for that Intel CGPU in 2 years...

The question is whether my 7600GT will be "worn out" by then hehe :e


Goaguy
 
Elite user
Added:
11-02-2007 21:55:54
Replies/Posts:
3410/199
You might want to tie this in with this:
http://www.xbitlabs.com/news/v...


#14
dryz
 
Super user
Added:
11-02-2007 22:03:17
Replies/Posts:
519/32
Of course I get excited about faster and better CPUs and GPUs, and a hybrid of the two, like everyone else, but my first thought was a bit like #7's: aaargh, MONOPOLY! And what will a development like that end up costing the consumer?