The expanses of WolfWings' land
scratched on the wall for all to see


April 13th, 2004

02:22 am - Another of those fucked-up and plain-old weird ideas I have occasionally...
What would happen if you threw out the age-old ideas? Look at the DirectX or OpenGL specifications... much of what they include is semi-optional. And look at most video cards. With few exceptions, they all still use very brute-force methods to calculate the graphics they draw...

What would happen if you threw out support for low-precision graphics? Yes, you'd lose performance. But you'd simplify the core while maintaining compatibility with higher-end requirements. So drop everything below, say, 32-bit floating-point math. Anything else can be scaled up or down to fit that, except 32-bit integer math, which would get truncated to roughly 24 bits (the float mantissa). Even textures get bumped up to that format, for a simple reason: now you can have a very broad but shallow memory bus. The card may carry 256MB, but that's only 16MTexels of storage to allocate between frame-buffers and texture storage; in exchange it gets a 128-bit memory bus that handles four 32-bit reads at once.
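
Rough arithmetic behind that sizing, as a sketch; the 16-bytes-per-texel figure assumes four channels (RGBA) at one 32-bit float each, which is my reading of the scheme above:

```c
#include <stdio.h>

int main(void) {
    /* Assumed layout: RGBA, one 32-bit float per channel = 16 bytes/texel. */
    const unsigned bytes_per_channel  = 4;   /* 32-bit float */
    const unsigned channels_per_texel = 4;   /* R, G, B, A   */
    const unsigned long long card_bytes = 256ULL * 1024 * 1024;   /* 256MB */

    unsigned long long texels =
        card_bytes / (bytes_per_channel * channels_per_texel);
    printf("Texels on a 256MB card: %lluM\n", texels / (1024 * 1024)); /* 16M */

    /* A 128-bit bus moves four 32-bit channel values per transfer. */
    printf("32-bit reads per 128-bit transfer: %d\n", 128 / 32);       /* 4   */
    return 0;
}
```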

Next... pull an interesting trick. Decouple the channels. By this, I mean that an RGB buffer is actually three buffers: a red buffer, a green buffer, and a blue buffer. So suddenly that 16MTexels of storage is actually 64M texel-channels. Yeah, convoluted term, I know. It allows for trivial support of things like multiple stencil buffers, and multiple graphics buffers as well.
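
One way to picture the decoupling, as a rough sketch (the planar layout here is my own illustration of the idea, not a claim about how any real memory controller works): each channel becomes its own independently-allocatable plane drawn from one common pool.

```c
#include <stdlib.h>

/* Conventional: one interleaved buffer, three channels packed per texel. */
typedef struct { float r, g, b; } TexelRGB;

/* Decoupled: each channel is its own plane, allocated on its own, so the
 * same pool of texel-channels can back color planes, extra stencil planes,
 * or additional render targets interchangeably.                            */
typedef struct {
    float   *plane;            /* width * height floats, one channel */
    unsigned width, height;
} ChannelPlane;

static ChannelPlane plane_alloc(unsigned w, unsigned h) {
    ChannelPlane p = { malloc((size_t)w * h * sizeof(float)), w, h };
    return p;
}

int main(void) {
    /* An "RGB buffer" is now just three planes from the shared pool... */
    ChannelPlane red   = plane_alloc(640, 480);
    ChannelPlane green = plane_alloc(640, 480);
    ChannelPlane blue  = plane_alloc(640, 480);
    /* ...and a second stencil buffer is simply one more plane. */
    ChannelPlane stencil2 = plane_alloc(640, 480);

    free(red.plane); free(green.plane); free(blue.plane); free(stencil2.plane);
    return 0;
}
```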

Now, remove support for some forms of texture filtering. In this case, axe everything except Anisotropic filtering. On view-plane-parallel polygons it degenerates to Trilinear most of the time, and it usually carries a speed hit of only 5% or so.
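
The degenerate case is easy to see from how the probe count is usually derived. Here's a sketch of a generic anisotropy-ratio calculation (not any particular card's method): when the pixel's footprint in texture space is nearly square, the ratio is about 1 and a single trilinear-style probe is all that's taken.

```c
#include <math.h>
#include <stdio.h>

/* Probe count from the texel-space footprint of one pixel (its screen-space
 * derivatives).  A face-on surface gives a nearly square footprint, a ratio
 * of ~1, and therefore a single trilinear-style probe.                       */
static int aniso_probes(float dudx, float dvdx, float dudy, float dvdy,
                        int max_probes) {
    float len_x = sqrtf(dudx * dudx + dvdx * dvdx);   /* footprint along x */
    float len_y = sqrtf(dudy * dudy + dvdy * dvdy);   /* footprint along y */
    float major = len_x > len_y ? len_x : len_y;
    float minor = len_x > len_y ? len_y : len_x;
    float ratio = minor > 0.0f ? major / minor : (float)max_probes;
    int probes  = (int)ceilf(ratio);
    if (probes < 1) probes = 1;
    return probes > max_probes ? max_probes : probes;
}

int main(void) {
    /* Face-on polygon: square footprint -> 1 probe (the trilinear case). */
    printf("face-on : %d probe(s)\n", aniso_probes(1.0f, 0.0f, 0.0f, 1.0f, 16));
    /* Steeply angled polygon: elongated footprint -> many probes.        */
    printf("grazing : %d probe(s)\n", aniso_probes(8.0f, 0.0f, 0.0f, 1.0f, 16));
    return 0;
}
```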

Go a step further: remove support for loading multiple mip-map levels at the same time for a single texel-read. Use on-chip filtering to average more texels from a single level together. Note you still use mip-mapping; you just don't use multiple levels of it at once. Effectively, it's like Trilinear using a single 4x4-pixel block from one texture-map, instead of two 2x2-pixel blocks from two separate texture-maps.
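
A sketch of that single-level tap, under my own simplifying assumptions (one channel, a plain box average, edge clamping): a 4x4 block from one mip level stands in for blending two 2x2 bilinear taps from adjacent levels.

```c
#include <stdio.h>

/* Fetch one channel from a single mip level, clamping at the edges. */
static float fetch(const float *level, int w, int h, int x, int y) {
    if (x < 0) x = 0; if (x >= w) x = w - 1;
    if (y < 0) y = 0; if (y >= h) y = h - 1;
    return level[y * w + x];
}

/* Average a 4x4 block from ONE level -- the stand-in for blending two
 * 2x2 bilinear taps drawn from adjacent mip levels.                    */
static float filter_4x4(const float *level, int w, int h, int x0, int y0) {
    float sum = 0.0f;
    for (int y = 0; y < 4; ++y)
        for (int x = 0; x < 4; ++x)
            sum += fetch(level, w, h, x0 + x, y0 + y);
    return sum / 16.0f;
}

int main(void) {
    float level[4 * 4] = {
        0, 1, 0, 1,
        1, 0, 1, 0,
        0, 1, 0, 1,
        1, 0, 1, 0,
    };
    printf("filtered: %f\n", filter_4x4(level, 4, 4, 0, 0));  /* 0.5 */
    return 0;
}
```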

Next... edge-detection. Not just polygon-edge-detection like the Matrox video cards ended up using. Full edge-rendering-detection, including transition states on alpha-masked textures like those commonly used for gratings and similar effects in many games. Guess what happens on the edge-rendering stage? Yup, anti-aliasing. But there's an interesting shortcut we can perform, since we're performing anisotropic filtering all the time: we don't need to calculate the separate pixel-values and downsample. We can simply calculate an alpha-channel-style coverage modifier for that one pixel, use that, and get the same results. So the calculation becomes simpler, maybe 25% of the math of full 4xAA, and even less compared to a comparable level of AA, say 16xAA. Even calculated out to full 32-bit floating-point precision, it isn't that big a speed hit at that point, as it only applies to ~10% of the screen.
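
As a sketch of that shortcut (the names and numbers here are illustrative, not from the post): on an edge pixel, skip the sub-samples and the downsample entirely, and do one blend weighted by how much of the pixel the edge covers.

```c
#include <stdio.h>

typedef struct { float r, g, b; } Color;

/* One coverage-weighted blend per edge pixel, instead of computing N
 * sub-pixel colors and averaging them back down.                       */
static Color blend_coverage(Color fg, Color bg, float coverage) {
    Color out;
    out.r = fg.r * coverage + bg.r * (1.0f - coverage);
    out.g = fg.g * coverage + bg.g * (1.0f - coverage);
    out.b = fg.b * coverage + bg.b * (1.0f - coverage);
    return out;
}

int main(void) {
    Color white = { 1, 1, 1 }, black = { 0, 0, 0 };
    /* Edge crosses 35% of this pixel: one blend instead of, say, 16 samples. */
    Color px = blend_coverage(white, black, 0.35f);
    printf("edge pixel: %.2f %.2f %.2f\n", px.r, px.g, px.b);
    return 0;
}
```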

T&L, Pixel Shaders, and Vertex Shaders would be a given, though their implementation might be most appropriate via a separate memory bus, or even a dedicated cache for the shaders, relying on transferring the shaders over the connection some other way.

What would you end up with? A graphics card with incredibly high image quality, likely with a core that became incredibly simple and drivers that became rather complicated, handling all the various upscaling of textures on upload to the card. As in, RISC-like simplicity of the core, even more so than most video cards. It would run today's games more slowly than normal cards, but it wouldn't slow down as much with tomorrow's games.

Yeah, I know. There's likely some huge gotcha I missed with this idea... but so much of modern 3D hardware is wasted on compatibility, or at least a nod towards it. And yet the OS itself is usually breaking compatibility with the very games that 'antique' compatibility is meant to support: things like outdated filtering modes, when single-mip-level Anisotropic filtering works everywhere Bilinear does, and texture-compression, in the face of perhaps a gigabyte or more of memory on future video cards.

Might as well just throw out the cruft entirely, accept the performance hit on older titles, and impress with the visual quality across the board. Get the hell out of the higher-FPS-is-better loop so many benchmarks are fixated on. I'd be much more impressed with the longevity of a video card that showed minimal performance drop across all the modes of, say, the 3DMark 'Game' tests, regardless of options enabled or disabled. Because that means the video card won't be rendered obsolete in a few years.