Tag Archives: gpu

NVIDIA’s David Kirk Interview on CUDA, CPUs and GPUs

David Kirk, NVIDIA’s Chief Scientist, has been interviewed by the guys at bit-tech.net.

Read the full interview HERE.

Here are some snippets of this 8-page interview:

page 1:
“Kirk’s role within Nvidia sounds many times simpler than it actually is: he oversees technology progression and he is responsible for creating the next generation of graphics. He’s not just working on tomorrow’s technology, but he’s also working on what’s coming out the day after that, too.”
“I think that if you look at any kind of computational problem that has a lot of parallelism and a lot of data, the GPU is an architecture that is better suited than that. It’s possible that you could make the CPUs more GPU-like, but then you run the risk of them being less good at what they’re good at now”
“The reason for that is because GPUs and CPUs are very different. If you built a hybrid of the two, it would do both kinds of tasks poorly instead of doing both well,”

page 2:
“Nvidia has talked about hardware support for double precision in the past—especially when Tesla launched—but there are no shipping GPUs supporting it in hardware yet”
“our next products will support double precision.”
“David talked about expanding CUDA to other hardware vendors, and the fact that this is going to require them to implement support for C.”
“current ATI hardware cannot run C code, so the question is: has Nvidia talked with competitors (like ATI) about running C on their hardware?”

page 3:
“It amazes me that people adopted Cell [the pseudo eight-core processor used in the PS3] because they needed to run things several times faster. GPUs are hundreds of times faster so really if the argument was right then, it’s really right now.”

Continue reading »

FurMark Smokes Graphics Cards

I stumbled upon this thread over at [H]ard|Forum, where you can read the following:

“That fur demo is known to get cards the hottest, it is also known for smoking cards as well (it was the last thing run for a number of people on the extremesystems forums).”

Basically, it says that FurMark is known for heating cards up the most, but that it is also known for making them smoke, and it was often the last thing some users ever ran! I knew FurMark was good at heating up cards, but I didn’t know it was also capable of destroying them. Something tells me this summer is going to be a hot one for some graphics cards…

GPU Overclocking Competition

The www.hardware.info website is running a GPU overclocking competition and uses the Fur Rendering Benchmark as its main utility (well, that’s my reading of it since everything is written in Dutch, but I can’t be too far off; if anyone understands the language, thanks in advance for a little feedback on what it says). The fur benchmark is definitely getting talked about these days. But the funny thing is that everybody insists on using version 1.0.0 even though version 1.1.0 exists…

The competition page can be found here: GPU Overclocking Contest

Fur Rendering Benchmark

I officially released the fur rendering benchmark 4 days ago. So let’s take a quick look at the first feedback available on forums around the web.

Homepage: www.ozone3d.net/benchmarks/fur/

1 – Fur rendering benchmark isn’t CPU dependent, and this is a very good thing for a graphics card benchmark. No matter the CPU speed, the result for a given card stays roughly the same:
oZone3D.Net forums
extremeoverclocking.com forums
“yes the first propper gpu bench ive come across, i really like this…. it stops all the arguments about memory timings and cpu speeds. its a good equaliser as all our systems can provide that 10% cpu info the gpu needs…”

2 – 8800GTX vs 2900XT
The ATI 2900XT seems to beat the NVIDIA 8800GTX. In all forums, the 2900XT is ahead:
oZone3D.Net forums
overclockers.co.uk forums
extremeoverclocking.com forums

3 – this benchmark seems to heavily load the graphics card, which makes it a cool GPU burner and stress/stability test utility.
clubic.com forums: “On the other hand, I’ve never seen my graphics card heat up this much: around 100° during the test :ouch:”
oZone3D.Net forums: “This thing just succeeded to shut down twice the PSU, caused by overloading of the graphics board!!”

I did a little test with my 8800GTX:
– gpu core temp at rest: 58°C
– gpu core temp at load: 83°C

Okay, that’s all for that small benchmark. :winkhappy:

Quick Review – GPU Caps Viewer

GPU Caps Viewer is the new tool I’ve been working on these last days. It’s the successor to HardwareInfos. GPU Caps Viewer is based on the v3.x branch of the oZone3D engine (while HardwareInfos is a tool based on the oZone3D v2.x branch). In addition to the classic GPU/CPU information/capabilities, GPU Caps Viewer offers two cool features:

– an OpenGL extensions database. You can either see the extensions supported by the current graphics card or browse all existing extensions, no matter which graphics board you have. You can quickly select an extension and jump directly to its webpage (SGI or NVIDIA extension specs). I must confess it’s very useful for me.

– a GPU-Burner… that was the hard coding part of GPU Caps Viewer. The GPU-Burner lets you open several 3D windows. Actually, you can open as many 3D views as you want (1, 2, 4, 6, 10, 20, …). Each view renders a GLSL toon-shaded object with vsync disabled. You can set the size of each window individually (the default size is 400×400). Each 3D view is rendered in its own thread… I let you imagine how hard it is to debug a multithreaded gfx application :raspberry: And because I’m only human, there are always some bugs in my code. But there is a very cool tool that helped me manage the mad threads: Process Explorer :thumbup: You can download it here: www.majorgeeks.com/Process_Explorer_d4566.html.

Here is a screenshot of my desktop with 13 instances of the 3D view running at the same time. I will release GPU Caps Viewer very soon. So stay tuned! :winkhappy: