
Thread: different time in OpenCL programs execution

  1. #1
    Junior Member
    Join Date
    Feb 2012
    Posts
    20

different time in OpenCL programs execution

    Hi again,
I have written a simple matrix-vector product with a sparse matrix, and it works, but the problem is that the first execution takes much more time than the following ones. For example, with a dimension of 4993 the first execution takes 500 milliseconds and the following ones take between 70 and 90 milliseconds!! By the way, I measure time with gettimeofday() after reading the matrix and vector from a file, since the idea is to compare this version to a CPU version; in both versions I have to read them from a file and there is no difference in the way I do it, so that time is irrelevant.
    One more thing: these results are from a 1201N netbook with a 9400M GPU, but the code was also tested on a much better machine and there was also a big difference.
    I guess this may be caused by my code but I'm not sure. Any help would be appreciated!!
    Thanks in advance!!

    Pablo

  2. #2
    Senior Member
    Join Date
    Aug 2011
    Posts
    271

Re: different time in OpenCL programs execution

    This is pretty much expected behaviour for any micro-benchmark on any platform. The details might change but in general the first run may incur (significant) additional overheads.

Which is why, unless you're interested in the startup time itself, you should always do a few dummy runs first to let the system warm up, and then do more than one run of the test too (to average over/hide system interference, although this is less important now that computers are so fast and multi-cored). This way you only measure the steady-state performance, which is more in line with how a real application will end up using it.

The first time might be slower because of operating system or driver allocation of resources, which doesn't need to occur on subsequent runs. All of this of course depends on the OS and the drivers themselves.

  3. #3
    Junior Member
    Join Date
    Feb 2012
    Posts
    20

Re: different time in OpenCL programs execution

Thanks!! So I guess the real time is not the first but the following ones!! That's good! But what if I had a program that takes a lot of time? Should I execute a smaller program before executing the main program? Or in that case, does the lost time not increase, and therefore not impact the final result as much as it does in this case?
    Another question: I'm trying to implement the conjugate gradient method with a sparse matrix, but so far the CPU version is much faster than the GPU version (on the 1201N, CPU: approx. 2 secs and GPU: approx. 11 secs) with dimension 4993 and approx. 34000 non-zero items.
    And I have, for example, the kernel that does the matrix-vector multiplication called a few times in the same loop:

    Code :
    /*some code*/
    for(int k=0;k<dim-4000;k++){
    	//rho=r*r;

    	/*some code*/

    	//p=r+beta*p
    	//betap=beta*p
    	for(i=0;i<dim-1;i++) {
    		betap[i]=0;
    	}
    	p_mem_obj = clCreateBuffer(context, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
    			dim * sizeof(float), p, &ret);
    	betap_mem_obj = clCreateBuffer(context, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
    			dim * sizeof(float), betap, &ret);
    	beta_mem_obj = clCreateBuffer(context, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
    			sizeof(float), beta, &ret);

    	ret = clSetKernelArg(kernel4, 0, sizeof(cl_mem), (void *)&p_mem_obj);
    	ret = clSetKernelArg(kernel4, 1, sizeof(cl_mem), (void *)&betap_mem_obj);
    	ret = clSetKernelArg(kernel4, 2, sizeof(cl_mem), (void *)&beta_mem_obj);
    	ret = clSetKernelArg(kernel4, 3, sizeof(int), (void *)&dim);

    	global_item_size[0]=cantGlobalItems;
    	local_item_size[0]=cantLocalItems;

    	ret = clEnqueueNDRangeKernel(command_queue, kernel4, 1, NULL,
    			global_item_size, local_item_size, 0, NULL, NULL);

    	/*some code*/

    	//x=x+alfa*p
    	//alfap=alfa*p
    	for(i=0;i<dim;i++) {
    		alfap[i]=0;
    	}
    	alfap_mem_obj = clCreateBuffer(context, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
    			dim * sizeof(float), alfap, &ret);
    	alfa_mem_obj = clCreateBuffer(context, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
    			sizeof(float), alfa, &ret);

    	ret = clSetKernelArg(kernel4, 0, sizeof(cl_mem), (void *)&p_mem_obj);
    	ret = clSetKernelArg(kernel4, 1, sizeof(cl_mem), (void *)&alfap_mem_obj);
    	ret = clSetKernelArg(kernel4, 2, sizeof(cl_mem), (void *)&alfa_mem_obj);
    	ret = clSetKernelArg(kernel4, 3, sizeof(int), (void *)&dim);

    	global_item_size[0]=cantGlobalItems;
    	local_item_size[0]=cantLocalItems;

    	ret = clEnqueueNDRangeKernel(command_queue, kernel4, 1, NULL,
    			global_item_size, local_item_size, 0, NULL, NULL);

    	/*some code*/

    }

    /*some code*/

So as it is the same kernel which is called for the different cases, I have to call clSetKernelArg inside the loop, and I guess that may be very bad for performance. Should I create two different kernels that do the same thing but with different names, so I can set the arguments outside the loop, or does that not take too much time?
    Sorry for asking so many things but I'm still a bit confused!!
    Thanks again!!

    Pablo

  4. #4
    Senior Member
    Join Date
    Aug 2011
    Posts
    271

Re: different time in OpenCL programs execution

    Quote Originally Posted by mustang
Thanks!! So I guess the real time is not the first but the following ones!! That's good! But what if I had a program that takes a lot of time? Should I execute a smaller program before executing the main program? Or in that case, does the lost time not increase, and therefore not impact the final result as much as it does in this case?
    Well, it's anybody's guess, but an educated opinion would be that the overhead is basically fixed, and would be less a proportion in the end.

If you're timing overall execution, e.g. of a single execution of a programme, the startup time can't be excluded of course.

I was just talking about micro-benchmarks: i.e. testing individual routines, or cases where you have a long-running GUI application or server which will end up running the same problem many times during its lifetime. In these cases the start-up time may not be an issue at all.

Another question: I'm trying to implement the conjugate gradient method with a sparse matrix, but so far the CPU version is much faster than the GPU version (on the 1201N, CPU: approx. 2 secs and GPU: approx. 11 secs) with dimension 4993 and approx. 34000 non-zero items.
    And I have, for example, the kernel that does the matrix-vector multiplication called a few times in the same loop:

    Code :
    /* code snipped -- identical to the listing in post #3 */

So as it is the same kernel which is called for the different cases, I have to call clSetKernelArg inside the loop, and I guess that may be very bad for performance. Should I create two different kernels that do the same thing but with different names, so I can set the arguments outside the loop, or does that not take too much time?
    Sorry for asking so many things but I'm still a bit confused!!
    Thanks again!!

    Pablo
clSetKernelArg is no issue for performance; all it's doing is setting a simple (pointer) value into a table (probably an array). When you enqueue the task the values in the table are copied to the work queue, but even that isn't very expensive. You could save re-setting the values which don't change, but in reality you won't even be able to measure the difference.

But you definitely shouldn't be creating any buffers inside loops: creating buffers is going to be expensive if you're doing it thousands of times. Create them outside the loop once, and then just use them as required. You then need to use clEnqueueWriteBuffer if you want to initialise them from the CPU side ... but unless the initialisation requires a whole lot of I/O, you should do the initialisation on the GPU side as well, as with any processing of the intermediate values. Any loop like this should avoid any CPU synchronisation at all, if possible.

I use the analogy that the work queue is like a shopping list. Even if you live right next door to a shop within walking distance, if you write down one item at a time, go and buy that one item, return, write down the next item, etc., it'll be orders of magnitude slower than just filling a basket from a list of all the items and only going to the shop once.

I can't tell from your code paste whether you're reading results back on the CPU in the 'some code' sections, but if you are, it's definitely the undesirable scenario I described above. It could be hundreds of times slower than the hardware is capable of.

  5. #5
    Junior Member
    Join Date
    Feb 2012
    Posts
    20

Re: different time in OpenCL programs execution

Thanks for answering!!!
    I guess I'm forced to read some results back to the CPU because I have to do, for example, a few scalar divisions (a float divided by another float, giving a float) which are then used to perform other operations.
    The only thing I managed to move outside the loop is declaring cl_mem p_mem_obj; for example. I tried to put some of the clSetKernelArg calls outside the loop but it never worked. I guess there are two reasons. One: the operations that are done on the CPU have to be "re-enqueued", as they are not made on the cl_mem objects but directly on the variables.
    Two: I use the same kernel in the same loop for different values; for example I sum vectors r and p and then x and p (the names are a bit different), so until I enqueue the first call I cannot set the arguments of the second, because as it is the same kernel, the arguments would be overwritten.

    Pablo

  6. #6
    Senior Member
    Join Date
    Aug 2011
    Posts
    271

Re: different time in OpenCL programs execution

    Quote Originally Posted by mustang
Thanks for answering!!!
    I guess I'm forced to read some results back to the CPU because I have to do, for example, a few scalar divisions (a float divided by another float, giving a float) which are then used to perform other operations.
A GPU is fine at scalar operations too; in fact you should be doing all calculations on the GPU if possible.

    But I can't really suggest more without seeing the code.

The only thing I managed to move outside the loop is declaring cl_mem p_mem_obj; for example. I tried to put some of the clSetKernelArg calls outside the loop but it never worked. I guess there are two reasons. One: the operations that are done on the CPU have to be "re-enqueued", as they are not made on the cl_mem objects but directly on the variables.
    Two: I use the same kernel in the same loop for different values; for example I sum vectors r and p and then x and p (the names are a bit different), so until I enqueue the first call I cannot set the arguments of the second, because as it is the same kernel, the arguments would be overwritten.

    Pablo
Obviously if the arguments change you need to keep those in the loop. And I don't understand why you're even worrying about setting arguments: setting arguments is fast.

  7. #7
    Junior Member
    Join Date
    Feb 2012
    Posts
    20

Re: different time in OpenCL programs execution

Thanks for the reply!!!
    I'm worried about setting arguments and creating the buffers inside the loop because I implemented the conjugate gradient method with a sparse matrix on both GPU and CPU, and the CPU is much faster!! Even with a 19713x19713 matrix, although it has approx. 130000 non-zero values. As I had read about the enormous differences in favour of the GPU, I thought it would be faster than the CPU at these sizes!! I was trying to improve my code everywhere I can, at least on the host side. Anyway, I managed to move all the clSetKernelArg and clCreateBuffer calls outside of the loop and I didn't get much better performance; what is worse is that in some cases it was even slower!!!
    I know there is room for a lot of improvement, but my knowledge of OpenCL is too low to know in which places I should use local memory, for example. Searching the web I found different strategies (not about the use of memory), for example doing a*x+y in one kernel, with a a scalar and x and y vectors, but it didn't improve the performance, at least not much!!

  8. #8
    Junior Member
    Join Date
    Feb 2012
    Posts
    20

Re: different time in OpenCL programs execution

Hi,
    I know this is off topic but I prefer not to open another thread. My question is whether it is possible that I can't handle a 19713x19713 matrix (it is not sparse). It throws a segmentation fault ("violación de segmento" in Spanish). By the way, I have a CPU version that loads the matrix in the same way and it works!!

    Pablo

  9. #9
    Senior Member
    Join Date
    Aug 2011
    Posts
    271

Re: different time in OpenCL programs execution

    Quote Originally Posted by mustang
Hi,
    I know this is off topic but I prefer not to open another thread. My question is whether it is possible that I can't handle a 19713x19713 matrix (it is not sparse). It throws a segmentation fault ("violación de segmento" in Spanish). By the way, I have a CPU version that loads the matrix in the same way and it works!!

    Pablo
Although it could be an OpenCL implementation bug, it's more likely to be a bug in your code: e.g. going off the end of an array, or not checking every return value for errors.

19713 * 19713 * sizeof(float) = about 1.45GB of memory, so you probably failed to allocate some buffer somewhere. Even if your GPU has more memory than that, you may not be able to allocate it in one block, or some might be used by the system.

  10. #10
    Junior Member
    Join Date
    Feb 2012
    Posts
    20

Re: different time in OpenCL programs execution

Thanks. I guess it is not that I forgot to allocate a buffer, because I have used the same program with smaller matrices and it seemed to work fine!! And I have centralised the places where I change the value of the matrix dimension, so there is not too much risk of that. But I don't know what NVIDIA's maximum buffer size is; I'll try to find it!!

    Pablo
