
View Full Version : Passing scalar values to kernel



starapor
02-05-2009, 11:59 AM
Hi,
While evaluating the reduction example in the specification, one of the values is passed as a scalar value instead of a pointer. I am trying to learn how an OpenCL implementation knows that the variable is a scalar value and not a pointer.

The kernel signature looks like: (pg 295)
__kernel void reduce(__global float *output, __global const float *input,
                     __local float *shared, unsigned int n)

The host code looks like: (pg 302)
err |= clSetKernelArg(kernels[i], 3, sizeof(int), &entries);

How does the clSetKernelArg command know that the 4th kernel argument (index 3) is an unsigned int that should be placed directly in a register for use by the kernel, rather than being placed in global memory and read in by each thread?

This clSetKernelArg call is identical in form to the previous three calls, which pass blocks of memory. Yet somehow it knows to copy the value that &entries points to into the unsigned int 'n', while in the other clSetKernelArg calls, 'output' and 'input' are set to pointers to the global memory locations that hold the passed data.

Link to specification:
http://www.khronos.org/registry/cl/specs/opencl-1.0.33.pdf

Thanks in advance.

Xmas
02-06-2009, 03:29 AM
I am trying to learn how an openCL implementation knows that the variable is a scalar value and not a pointer.
OpenCL has to compile the program first, so it has all necessary information about any kernel.

starapor
02-06-2009, 06:54 AM
I agree the compiler has all the necessary information. But how does it distinguish whether an argument passed via clSetKernelArg (which always takes a memory address) should be interpreted as a scalar value (in this case an unsigned int) or as a reference to some buffer (in this case a buffer containing unsigned ints)?

This is confusing to me because the usage of clSetKernelArg is identical for scalar values and for memory buffers, yet a determination must be made to pass a value or pass an address when the kernel function is invoked at run time. My question is: how is that determination made? Thanks.

Xmas
02-06-2009, 07:57 AM
I agree the compiler has all the necessary information. But how does it distinguish whether an argument passed via clSetKernelArg (which always takes a memory address) should be interpreted as a scalar value (in this case an unsigned int) or as a reference to some buffer (in this case a buffer containing unsigned ints)?
The compiler stores information about the signature of each kernel with the program object. clSetKernelArg simply looks up that information, so it knows the type and size of the parameter at each index. It also knows the number of arguments, so it can raise an error when you pass an invalid index.