I'm working on a semester-long project with a couple of teammates at my university for our Software Engineering course. Our initial idea was to build a kind of watered-down combination of CPU-Z and GPU-Z (both are hardware-information and sensor-monitoring applications), mostly to see if we could do it and learn how it all works along the way.
For full project details, please feel free to check our team website at http://www.alienmania.com
(Please excuse the domain name, it's a free one I'm borrowing from a friend for the semester lol).
There are several screenshots of our current prototype on our homepage, but the following illustrates about as far as we have gotten with GPU information:
Up until this point we have used WMI exclusively to query all CPU, motherboard, RAM, and now limited GPU information. The screenshot above shows the extent of what we are able to retrieve through WMI for hardware specifications. At minimum, we would like to retrieve the current core clock speed and memory clock speed of any local GPUs on the machine at runtime. I have browsed through the OpenCL C# bindings (which is how I found this forum) and noticed there is a property that returns CL_DEVICE_MAX_CLOCK_FREQUENCY. I'm not totally sure what that speed is derived from, but I know it's not the *current* core clock speed; for my EVGA GTX 465 SC the property returns 810 MHz. I should note the speeds we are looking for are the '3D mode' clocks, or however it's considered.
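For reference, here's roughly the kind of WMI query we're running for GPU specs — a minimal sketch (requires a reference to System.Management; the exact properties we pull vary, and note that Win32_VideoController doesn't expose live clock speeds, which is exactly our problem):

```csharp
using System;
using System.Management;

class GpuInfoSketch
{
    static void Main()
    {
        // Query the WMI video-controller class for basic GPU specs.
        var searcher = new ManagementObjectSearcher(
            "SELECT Name, AdapterRAM, DriverVersion, VideoProcessor FROM Win32_VideoController");

        foreach (ManagementObject gpu in searcher.Get())
        {
            Console.WriteLine("Name:      " + gpu["Name"]);
            Console.WriteLine("RAM:       " + gpu["AdapterRAM"] + " bytes");
            Console.WriteLine("Driver:    " + gpu["DriverVersion"]);
            Console.WriteLine("Processor: " + gpu["VideoProcessor"]);
        }
    }
}
```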
Our project is currently a .NET Framework 4.0 Windows Forms application written in C#. We are all using VS 2010 Pro via our Microsoft DreamSpark accounts ^_^.
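In case it helps to see exactly what we tried, this is how we read that OpenCL property — a sketch assuming the Cloo bindings (other C# OpenCL wrappers expose the same device info under similar names):

```csharp
using System;
using Cloo; // OpenCL C# bindings

class ClClockSketch
{
    static void Main()
    {
        foreach (ComputePlatform platform in ComputePlatform.Platforms)
        {
            foreach (ComputeDevice device in platform.Devices)
            {
                // MaxClockFrequency maps to CL_DEVICE_MAX_CLOCK_FREQUENCY (in MHz).
                // This is a maximum/configured clock, not the live core clock.
                Console.WriteLine(device.Name + ": " + device.MaxClockFrequency + " MHz");
            }
        }
    }
}
```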
If anyone would care to take the time to look through our current source and help us out via PMs or instant messaging (Skype, AIM, MSN, Yahoo, whichever is convenient for you), please let me know! Thanks.