I want to allocate and render into an EGL surface using GLES2 and then use OpenWF Display to bind that surface to display hardware. However, the OpenWF Display specification leaves the allocation of source images unspecified and clearly states that it is outside its scope. That is fine, but I was wondering how this was envisaged to work, and whether any EGL extensions are being developed for it?

As I see it, the first task is to allocate an EGL surface which is capable of being read by the display controller. I believe some display controllers are quite restrictive in this respect; for example, some require the memory they "scan out" to be physically contiguous. Such buffers must be allocated (or at least reserved) at boot time, before physical memory becomes too fragmented, and are therefore a very scarce resource. In these cases, I think a new EGL surface type and entry point would be needed to differentiate surfaces which can be used directly by display hardware from other EGL surfaces. Such a concept already exists in the EGL_MESA_screen_surface extension, which defines a new surface type (EGL_SCREEN_BIT_MESA) and a new entry point for allocating such surfaces (eglCreateScreenSurfaceMESA). While the rest of EGL_MESA_screen_surface goes on to define a different mechanism for modesetting, I think the screen-surface concept could be re-used with OpenWF.
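For illustration, a rough sketch of how that allocation might look. The entry-point signature and the EGL_WIDTH/EGL_HEIGHT attributes are taken from the EGL_MESA_screen_surface spec as I read it; the EGL_SCREEN_BIT_MESA token value and the function-pointer typedef are written out by hand here, so treat them as assumptions to be checked against real headers:

```c
/* Sketch only: allocate a scanout-capable "screen surface" via
 * EGL_MESA_screen_surface. Token value and function-pointer typedef are
 * transcribed from the extension spec, not from a shipping header. */
#include <EGL/egl.h>

#ifndef EGL_SCREEN_BIT_MESA
#define EGL_SCREEN_BIT_MESA 0x08  /* assumed value; verify against eglmesaext.h */
#endif

typedef EGLSurface (EGLAPIENTRYP PFNEGLCREATESCREENSURFACEMESAPROC)(
    EGLDisplay dpy, EGLConfig config, const EGLint *attrib_list);

static EGLSurface create_scanout_surface(EGLDisplay dpy,
                                         EGLint width, EGLint height)
{
    /* Ask for a config whose surfaces the display controller can read. */
    const EGLint config_attribs[] = {
        EGL_SURFACE_TYPE, EGL_SCREEN_BIT_MESA,
        EGL_RED_SIZE, 8, EGL_GREEN_SIZE, 8, EGL_BLUE_SIZE, 8,
        EGL_NONE
    };
    EGLConfig config;
    EGLint num_configs = 0;
    eglChooseConfig(dpy, config_attribs, &config, 1, &num_configs);

    /* Screen surfaces have no native window, so the size is passed in
     * the attribute list. Error handling omitted for brevity. */
    const EGLint surface_attribs[] = {
        EGL_WIDTH,  width,
        EGL_HEIGHT, height,
        EGL_NONE
    };
    PFNEGLCREATESCREENSURFACEMESAPROC createScreenSurface =
        (PFNEGLCREATESCREENSURFACEMESAPROC)
            eglGetProcAddress("eglCreateScreenSurfaceMESA");
    return createScreenSurface(dpy, config, surface_attribs);
}
```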

Assuming I am able to allocate an EGL surface which is compatible with my display controller, I need a mechanism to take the EGLSurface and turn it into a WFDSource. The first option, which seems to be hinted at in the OpenWF Composition spec, wraps the EGLSurface into a stream. I guess this kinda makes sense - every time I call eglSwapBuffers on the surface, a new image is submitted into the stream and a new back buffer is allocated (or the old front buffer is re-used). However, another option would be to use an image source. This would mean creating an EGLImage for the front colour buffer and another for the back colour buffer of the surface. I'm unsure how this would work with respect to eglSwapBuffers, however. While it would be cool to have control over the swapping of buffers, I think it would take too much freedom away from the implementation to do things like triple buffering, etc. The more I think about it, the more I think a stream is the correct way to get content rendered to an EGL surface into OpenWF Display. In fact, it might be as simple as casting the EGLSurface to a WFDNativeStreamType and plugging that into wfdCreateSourceFromStream(), as in the sketch below.
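To make that concrete, the glue might look something like the following. It's only a sketch: whether an EGLSurface can really be reinterpreted as a WFDNativeStreamType is entirely implementation-defined (WFDNativeStreamType is a platform-specific typedef), and I've left the attribute lists empty:

```c
/* Sketch: bind an EGL surface to a display pipeline as a stream-backed
 * source, assuming the (implementation-defined) premise that an
 * EGLSurface can be cast to WFDNativeStreamType. Error handling omitted. */
#include <EGL/egl.h>
#include <WF/wfd.h>

static WFDSource bind_surface_to_pipeline(WFDDevice dev, WFDPipeline pipe,
                                          EGLSurface surface)
{
    /* wfdCreateSourceFromStream() takes a native stream handle; here we
     * simply reinterpret the EGLSurface as one. */
    WFDNativeStreamType stream = (WFDNativeStreamType)surface;
    WFDSource source = wfdCreateSourceFromStream(dev, pipe, stream, NULL);

    /* Attach the source; after the commit, each eglSwapBuffers() should
     * push a new image down the stream to the pipeline. */
    wfdBindSourceToPipeline(dev, pipe, source, WFD_TRANSITION_AT_VSYNC, NULL);
    wfdDeviceCommit(dev, WFD_COMMIT_ENTIRE_DEVICE, WFD_INVALID_HANDLE);
    return source;
}
```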


I guess another, totally different approach would be to treat OpenWF as a new EGL client API at the same level as OpenGL & OpenVG. I could create an OpenGL texture, create an EGLImage from it, and use that EGLImage to create a WFDSource. I could then use an FBO to render into the GL texture and its EGLImage siblings, including the WFDSource.
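A sketch of that chain, assuming EGL_KHR_gl_texture_2D_image is available and that this implementation's WFDEGLImage is interchangeable with an EGLImageKHR (the OpenWF spec leaves WFDEGLImage platform-defined):

```c
/* Sketch: publish a GL texture to the display through its EGLImage
 * sibling. Assumes EGL_KHR_gl_texture_2D_image and that WFDEGLImage can
 * hold an EGLImageKHR on this platform. Error handling omitted. */
#include <stdint.h>
#include <EGL/egl.h>
#include <EGL/eglext.h>
#include <GLES2/gl2.h>
#include <WF/wfd.h>

static WFDSource create_source_from_texture(EGLDisplay dpy, EGLContext ctx,
                                            WFDDevice dev, WFDPipeline pipe,
                                            GLuint texture)
{
    PFNEGLCREATEIMAGEKHRPROC eglCreateImageKHR =
        (PFNEGLCREATEIMAGEKHRPROC)eglGetProcAddress("eglCreateImageKHR");

    /* Wrap mip level 0 of the 2D texture in an EGLImage. */
    EGLImageKHR image = eglCreateImageKHR(
        dpy, ctx, EGL_GL_TEXTURE_2D_KHR,
        (EGLClientBuffer)(uintptr_t)texture, NULL);

    /* Hand the sibling image straight to OpenWF Display. */
    return wfdCreateSourceFromImage(dev, pipe, (WFDEGLImage)image, NULL);
}
```

Rendering would then go through an FBO (glFramebufferTexture2D with GL_COLOR_ATTACHMENT0 targeting the texture), though presumably something like glFinish or a fence would be needed before each wfdDeviceCommit() to stop the display scanning out a half-rendered frame.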


Personally, I prefer the idea of a new EGLSurface type... but I'm curious how others see this working.