
Re: [Public WebGL] WebGL benchmarking




On May 28, 2010, at 2:25 PM, Gregg Tavares wrote:



...userAgent = Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10_6_3; en-us) AppleWebKit/533.7 (KHTML, like Gecko) Version/4.1 Safari/533.7
gl.VERSION = 2.1 NVIDIA-1.6.10
gl.VENDOR = NVIDIA Corporation
gl.RENDERER = NVIDIA GeForce GT 120 OpenGL Engine
+--------+-----+-------+------------+-----------+---------+
| layers | dim | prim  |  tris/sec  | draws/sec |   dt    |
+--------+-----+-------+------------+-----------+---------+
|      4 |  16 | strip |   21107784 |     46906 |   2.004 |
|      4 |  32 | strip |   87363636 |     45455 |   2.002 |
|      4 |  64 | strip |  219195266 |     27613 |   2.028 |
|      8 |  64 | strip |  247458354 |     31174 |   2.053 |
+--------+-----+-------+------------+-----------+---------+

Yes, that is a WebGL demo getting 247 million triangles per second! This is on an admittedly high end Mac Pro. But even on my MacBook Pro, I'm seeing 210 million.

Unfortunately, triangles per second depends on the GPU, not on WebGL. The more important number is draws per second for small prims, since that is closer to showing the JavaScript/WebGL implementation overhead. For example, the original beach demo from O3D was doing 270,000 draws per second (before we optimized the assets), and that was with relatively complex shaders (lots of uniforms and attribs to set before every draw call).
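A draws-per-second micro-benchmark of this kind can be sketched in a few lines. This is illustrative, not code from the benchmark under discussion; the helper name and structure are made up, and in a real page drawFn would wrap an actual gl.drawElements call on a tiny buffer:

```javascript
// Hypothetical sketch of a draws-per-second measurement loop.
// drawFn: a callback issuing one small draw call, e.g.
//   () => gl.drawElements(gl.TRIANGLE_STRIP, indexCount, gl.UNSIGNED_SHORT, 0)
// durationMs: how long to run the loop.
// Returns draw calls issued per second.
function measureDrawsPerSecond(drawFn, durationMs) {
  const start = Date.now();
  let draws = 0;
  while (Date.now() - start < durationMs) {
    drawFn();
    draws++;
  }
  const elapsedSec = (Date.now() - start) / 1000;
  return draws / elapsedSec;
}
```

Note that a loop like this measures CPU-side submission overhead, not GPU throughput; a real benchmark would also need to force the pipeline to flush (e.g. by reading pixels back) so the driver can't just queue everything.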

From an author's standpoint, EVERYTHING is dependent on WebGL, because that's all they see. Sure, it might be true that to get big triangle numbers you need a small number of objects with a large number of triangles each. But the fact that a WebGL program can get results like this is very impressive. And the hardware I'm testing on isn't particularly esoteric: I tested on a 16-core Mac Pro, but the graphics card is pretty midrange, and I get similarly impressive results from my laptop. So I think we can expect a well-written WebGL program to have some pretty nice looking graphics.


It would be nice to look into ways to get this number higher for WebGL.

The interesting thing here is to determine where the bottleneck is. If you were to rewrite the O3D demo in WebGL (which you should do right away :-), how many draws would you see? Would it be like the numbers above (31K - 46K), meaning there's 6x to 9x overhead? Or is it more or less than that? If we can see where the overhead is in WebGL, maybe we can come up with higher-level API calls to reduce it. But this is all for > 1.0.
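One app-level way to attack per-draw overhead today, without new API calls, is batching: merging several small static meshes into one vertex/index buffer pair so a single drawElements call replaces N calls. A minimal sketch (the function name is made up for illustration, and it assumes 3 floats per vertex and a shared material):

```javascript
// Hypothetical sketch: concatenate small indexed meshes into one buffer pair.
// Each mesh is { vertices: [x,y,z, ...], indices: [i0,i1,i2, ...] }.
// Indices are rebased so they point into the combined vertex array.
function batchMeshes(meshes) {
  const vertices = [];
  const indices = [];
  let baseVertex = 0;
  for (const mesh of meshes) {
    for (const v of mesh.vertices) vertices.push(v);
    for (const i of mesh.indices) indices.push(i + baseVertex);
    baseVertex += mesh.vertices.length / 3; // 3 floats per vertex
  }
  return {
    vertices: new Float32Array(vertices),
    indices: new Uint16Array(indices), // assumes < 65536 total vertices
  };
}
```

The trade-off is the usual one: batched geometry can't be culled or transformed per-object without extra work (per-vertex object IDs, re-uploading, etc.), which is exactly the kind of asset optimization mentioned for the beach demo above.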

-----
~Chris