----- Original Message -----
> So I probably shouldn't respond to this because being on the Chrome
> team no matter what I say it will sound defensive but:
Eh, benchmarks are benchmarks. I don't think it's defensive to explain what other browsers are doing differently; as you say, a single benchmark, especially a focused microbenchmark, does not an overall picture make. :-)
FWIW, our trace-based jit (the "inner loop" jit, as opposed to the whole-method jit) is extremely good at the kind of work involved in matrix manipulation: lots of math, often done in loops. I and others also spent a bunch of time making our typed arrays implementation work very tightly with the jit.

That's one of the reasons that mjs uses bare Float32Arrays instead of objects, and why it lets you use existing arrays as out params etc. -- to remove any potential performance pitfalls (object property access, lots of temporary object creation, and so on). It's not as convenient as an object-based approach, but I don't think that makes much of a difference without operator overloading anyway -- V4_add(v1, v2) isn't much different from v1.add(v2), and may even be cleaner to read.
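To illustrate the style being described, here is a minimal sketch of a Float32Array-based vector add with an out param. This is hypothetical code, not mjs's actual API; the function names V4_create and V4_add are assumptions for the example. The point is that the hot loop allocates no temporary objects and touches no object properties:

```javascript
// Hypothetical sketch of the "bare Float32Array + out param" style.
// The result is written into an existing array instead of allocating a
// fresh object, so repeated calls create no garbage for the GC.
function V4_create() {
  return new Float32Array(4);
}

function V4_add(a, b, out) {
  out = out || V4_create(); // allocate only if the caller didn't pass one
  out[0] = a[0] + b[0];
  out[1] = a[1] + b[1];
  out[2] = a[2] + b[2];
  out[3] = a[3] + b[3];
  return out;
}

// Usage: reuse one scratch array across iterations.
var v1 = V4_create(); v1.set([1, 2, 3, 4]);
var v2 = V4_create(); v2.set([10, 20, 30, 40]);
var tmp = V4_create();
V4_add(v1, v2, tmp); // tmp is now [11, 22, 33, 44]
```

An object-based version would spell this v1.add(v2), but each call would typically construct a new vector object; the out-param form trades that convenience for predictable, allocation-free inner loops.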
> engines will see that the results you are calculating are never used
> and optimize the entire test away. I don't know if Minefield goes that
> far, though if it did that would be pretty cool
We don't, afaik. We've talked about it, but we want to figure out how much it would help real-world usage as opposed to just helping on benchmarks (which would just get updated in response, so it wouldn't be much of a long-term win there anyway).
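The hazard being discussed is worth spelling out. A sketch, with made-up benchmark code (not from any real suite): if a benchmark computes a value and never reads it, an engine that does dead-code elimination could legally delete the whole loop and report a near-zero time, so careful benchmarks keep a live use of the result:

```javascript
// Risky: sum is computed but never read. A sufficiently aggressive engine
// could optimize the entire loop away, making the "benchmark" measure nothing.
function benchUnsound(n) {
  var sum = 0;
  for (var i = 0; i < n; i++) sum += i * i;
  // sum is dropped here -- the work is unobservable
}

// Safer: return (or otherwise consume) the result so the work is observable
// and cannot be eliminated without changing program behavior.
function benchSound(n) {
  var sum = 0;
  for (var i = 0; i < n; i++) sum += i * i;
  return sum;
}

var result = benchSound(1000); // keep a live use of the computed value
```

This is also why such optimizations help benchmarks more than real pages: real code usually consumes what it computes, whereas a microbenchmark with a dead result would simply be fixed once engines started exploiting it.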