I'm one of the developers of the volunteer computing project Einstein@Home using the Berkeley Open Infrastructure for Network Computing (BOINC). We're trying to harness the computational power of tens of thousands of compute devices using OpenCL. We need to schedule a set of tasks across the different OpenCL devices in a volunteer's host (mostly GPUs/APUs). For this purpose it is vital that we have dynamic information about the available memory on a given device. Querying the static device properties (max global memory) isn't sufficient, since the memory that is actually available changes over time, in particular for GPUs that are also used for desktop rendering.
NVIDIA's CUDA and AMD's Stream SDK (CAL) already provide APIs to query the amount of free/available global memory, so it should be more or less trivial to get this data:
NVIDIA's CUDA Driver API: cuMemGetInfo()
AMD's CAL/Stream SDK: calDeviceGetStatus()
This feature could be implemented as a vendor extension (short-term solution), but getting it into the official spec (long-term solution) would be optimal.
One further comment: relying on a vendor extension could turn out to be problematic. While NVIDIA and AMD might have an interest in supporting their own hardware by adding that feature to their OpenCL platform implementations, Apple might not do so, or only at a much lower priority. Still, vendor extensions would be better than nothing.