[Public WebGL] WebGL performance...seems like I have no hardware acceleration?!?
I've been thinking about porting some of my old OpenGL games over to
WebGL and sticking them on my website for free to help promote WebGL
(which is a noble and important cause!) - but I'm seeing some weird
performance issues. (I've been an OpenGL programmer since it was
pronounced "IrisGL", and I work in the games industry as a senior
graphics programmer, so I'm not entirely clueless.)
WebGL calls should go more or less straight through to the
graphics card - right? - so my expectation would be that if I push as
much functionality onto the GPU as possible and keep things simple on
the JavaScript side, I should get decent frame rates.
To test, I wrote a really minimal application - it sets up the matrices,
clears the screen, renders a few simple objects, and uses
setTimeout("draw()",1) to try to get the best frame rate I can. I'm
getting something like 10Hz. :-(
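For reference, the loop has roughly this shape (a sketch rather than the
real code - the actual draw() sets up the matrices and renders the
objects where the comment is, and the frame counter is just a crude way
to read off the Hz figures I'm quoting):

    var frames   = 0;
    var lastTime = new Date().getTime();

    function draw ()
    {
      // ...set up matrices, clear, render a few simple objects...

      // Crude frame-rate counter: report Hz roughly once a second.
      frames++;
      var now = new Date().getTime();
      if ( now - lastTime >= 1000 )
      {
        window.document.title = "FPS: " + frames;
        frames   = 0;
        lastTime = now;
      }

      setTimeout ( "draw()", 1 );   // schedule the next frame ASAP
    }

    draw ();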
To rule out my own rendering code doing something nasty, I tossed out
all of the 3D rendering and did nothing but clear the screen (the
clear-only loop is sketched below). Doing this test at a number of
different canvas sizes, I get:
800x600 : 35Hz.
200x200 : 90Hz.
100x100 : 180Hz.
8x8 : 180Hz.
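That clear-only draw() is essentially just this (assuming 'gl' is the
WebGL context obtained from the canvas):

    function draw ()
    {
      gl.clear ( gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT );
      setTimeout ( "draw()", 1 );
    }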
Commenting out the clear-screen and doing no OpenGL calls at all in my
main loop still gets me 180Hz - so I guess I'm CPU-limited at that frame
rate.
This is exceedingly surprising. If we have hardware acceleration - then
the screen should clear in WAY under a millisecond on my modern nVidia
card - so I'd expect to hit that same CPU-limited loop rate of 180Hz
even at 1280x1024.
So it looks like we're either not running with hardware acceleration -
or there is some kind of software operation on the raster going on
that's crippling the frame rate. I'm running the latest daily builds of
FireFox "minefield" - and I've double-checked that I have software
rendering disabled in the 'about:config' system. I'm running Linux on
one machine, WinXP on another and Windows-7 on a third - and getting
pretty consistent results on all three machines.
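One sanity check (just a sketch - the exact strings vary with browser
build and driver) would be to log what the context thinks it's
rendering on:

    var canvas = document.getElementById ( "glcanvas" );   // whatever the canvas id is
    var gl     = canvas.getContext ( "experimental-webgl" );
    console.log ( "VENDOR:   " + gl.getParameter ( gl.VENDOR   ) );
    console.log ( "RENDERER: " + gl.getParameter ( gl.RENDERER ) );
    console.log ( "VERSION:  " + gl.getParameter ( gl.VERSION  ) );
    // If RENDERER reports a Mesa/softpipe-style software rasterizer
    // instead of the GeForce, that would explain everything.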
But even at that - the difference between 200x200 and 100x100 is 30,000
extra pixels rendered in 6ms, or about 5Mpixels/sec ...which for a
simple gl.clear ( gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT ) would be
slow even for a software renderer.
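Spelling that arithmetic out (assuming the only difference between the
two runs is the number of pixels being cleared):

    var extraPixels = 200*200 - 100*100;         // 30,000 extra pixels per frame
    var extraTime   = (1/90) - (1/180);          // ~0.0056 sec (~6ms) extra per frame
    var fillRate    = extraPixels / extraTime;   // ~5.4 million pixels/sec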
The machine I'm rendering on is a 2.8GHz quad-core with a dual nVidia
GeForce GTX 285 GPU - but I get almost identical times on my ancient
2.6GHz single-core with a dusty old GeForce 6800! An even more ancient
machine with a 1GHz CPU gets roughly half the frame rate across the
board...again, suggesting we're seeing some software performance cap here.