
Re: GeForce2 announcement

Newsgroups comp.graphics.rendering.renderman
Date 2016-02-23 02:58 -0800
References <39099210.3717024@news.ntlworld.com> <3909BD4B.A107CD05@pixar.com> <390DACB1.A0FC5C9@lanl.gov> <390DB958.8C7FB059@pixar.com>
Message-ID <dcd20738-affc-40f7-89ee-c5d9ee80992b@googlegroups.com>
Subject Re: GeForce2 announcement
From mohd.tahauddin@gmail.com



Hello Tom:

I am not even sure if you are still following this :)

I recently read your original rebuttal of NVIDIA's marketing hype from 16 years ago, and I wanted to hear your thoughts on modern 3D accelerators with programmable shading languages.

How would you compare a modern accelerator in the following contexts:
1) Replacing/complementing a render farm from 16 years ago. Do you think a modern graphics card begins to approach NVIDIA's original promise (measured against a render farm from 2000)?
2) A modern graphics card vs. a modern render farm.

I would really love to hear your thoughts, especially on what needs to change or improve, and on how much more complex shaders in offline renderers are than those running on a modern graphics card.

Thanks,

On Monday, May 1, 2000 at 12:00:00 AM UTC-7, Tom Duff wrote:
> Allen McPherson wrote:
> >         I'm curious.  Obviously, real-time TS2 frame generation is
> >         very difficult, especially given the required data rates you
> >         provided.  On the other hand, would it be useful to use this
> >         technology to preview shaders, different animation scenarios,
> >         develop new shading algorithms, etc? [on lower resolution
> >         models and image resolutions of course]
> 
> We certainly use 3D accelerators (SGI rather than NVIDIA) to preview
> animation.  Until you can compile shading language programs to run on them,
> they won't be much use for shading and lighting.
> 
> >         Also, rather than one on every desk, what about one in each of
> >         your 1000+ nodes of the render farm?  We work in very different
> >         domains, but we're looking at building just such a system (though
> >         only on 32-64 nodes for now).
> 
> 3D accelerators mostly don't do anything we're interested in doing a lot of.
> Of the 1.2 million hours of CPU time that goes into making a set of TS2
> frames, about 1.1 million hours is devoted to executing shading language
> code, for which NVIDIA's cards are essentially no help at all.  If they could
> cache texture off a several-gigabyte UNIX filesystem and pull filtered
> texture samples at arbitrary coordinates out at the rates they advertise,
> they might be some use, since I think about half of the time we spend in
> shading is devoted to sampling texture maps.  But note that by Amdahl's
> law, if their boards were infinitely fast, we'd still only see a 50%
> rendering speed-up.
> 
> -- 
> Tom Duff.  Some sort of background check is in order.
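
For anyone skimming the thread, Duff's Amdahl's-law bound can be checked with a quick sketch. The function name and the exact fractions below are my own illustration, plugging in the figures from his post (shading is ~1.1M of 1.2M CPU-hours, and texture sampling is about half of shading):

```python
def amdahl_speedup(fraction_accelerated, acceleration):
    """Overall speedup when `fraction_accelerated` of total work
    runs `acceleration` times faster (Amdahl's law)."""
    return 1.0 / ((1.0 - fraction_accelerated) + fraction_accelerated / acceleration)

# Texture sampling as a fraction of total render time,
# using the rough numbers from the post above.
texture_fraction = 0.5 * (1.1e6 / 1.2e6)   # ~0.458

# Even infinitely fast texture hardware bounds the overall speedup:
limit = amdahl_speedup(texture_fraction, float("inf"))
print(round(limit, 2))   # → 1.85, i.e. under the ~2x Duff rounds to
```

So even an idealized card that samples textures for free gets nowhere near replacing the other ~54% of the work, which is the point of the original rebuttal.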


