Your app must not attempt to change or extend the packaged content through any form of dynamic inclusion of code or data that changes how the application interacts with the Windows Runtime, or behaves with regard to Store policy. It is not permissible, for example, to download a remote script and subsequently execute that script in the local context of your app package.
This requirement applies if you depend on specific 3D graphics hardware features.
If your app includes an ARM or a Neutral package, it must support Direct3D feature level 9_1. If your app does not support ARM, it must support the minimum feature level chosen on the Store portal.
Because customers can change the graphics hardware in their computers after the app is installed, if you choose a minimum feature level higher than 9_1, your app must detect at launch whether or not the current hardware meets the minimum requirements. If not, the app must display a message to the customer detailing the Direct3D requirements.
In addition to supporting the chosen minimum Direct3D feature level, your app may use higher feature levels when available.
I've never looked into the D3D10+ compile times much, but have they improved over D3D9 at all? Ralf mentioned that a straight-to-bytecode approach would stop working once we upgrade, but if the compiler is better it may be a non-issue. Do we have any stats on that?
On Saturday, February 2, 2013, Florian Bösch wrote:

> On Sat, Feb 2, 2013 at 11:37 AM, Kornmann, Ralf <firstname.lastname@example.org> wrote:
> It is possible to generate D3D bytecode directly; there is even an assembler for it. Unfortunately, this will no longer work once ANGLE switches over to D3D 10+. To ensure that you can't tamper with the bytecode anymore, compiled shaders are signed by the HLSL compiler and the runtime checks the signature.

Arghl.

> I have written a number of HLSL shaders and have hardly run into compiler issues. In most cases the problems were caused by me doing things wrong in the HLSL code, so I am not sure how many of the problems you noticed are caused by the ESSL-to-HLSL step.

At least 3 of my WebGL demos have run into such issues, where a compile would take anywhere from 10 seconds to several minutes. Browsers react to this problem differently: Chrome usually kills your context after about 11 seconds, while Firefox usually lets things run but after about 15 seconds asks if you want to kill the page since the JS is unresponsive.

> Anyway, to ease the problem with the long compile times at least a bit, it might be a good idea to add some kind of shader cache. That way it would at least be faster the second time a user visits a page. Anything beyond this would most likely require a custom shader container that, besides the pure GLSL code, contains multiple binary shaders for different targets.

There is a shader cache, but that doesn't really help that much: if you need to compile a serious number of shaders (a typical high-quality production has anything between 300 and 1000 different shaders), or if you run into a bunch of pathological cases (exceedingly likely with a large number of shaders), then the result is that a user never gets to the page. It'll lose the context, or ask the user to kill the page, or they simply leave out of boredom waiting for stuff to happen. The typical 5 ms or so compile time of GLSL via OpenGL is still way long.
But the typical 100ms to 500ms compile time for GLSL via D3D pushes this beyond the point of breaking.