
Thread: Dynamic memory allocation/deallocation and transfer?

  1. #1
    Junior Member
    Join Date
    Jun 2012
    Posts
    3

    Dynamic memory allocation/deallocation and transfer?

    Hello, GPGPU hackers

    If one uses OpenCL to write a packet/data-filtering program, it becomes necessary to
    reduce the buffer size (yes, the CL buffer), because some packets are filtered out and there
    is no reason to transfer the old (full) amount of data, with a lot of garbage in it, back to the host.
    The size of the new buffer is only known while or after the kernel runs.
    It would be possible to do this by running the kernel with a reduction-sum calculation
    and then using the result to build the new buffer on the host, but that is awkward.
    How can global memory be allocated and deallocated dynamically?

    A more advanced task is to allocate many buffers in order to filter content into many different arrays. If possible, they should be dynamically growable. Again, this has to be done from the kernel. Is there any way to do that?

  2. #2

    Re: Dynamic memory allocation/deallocation and transfer?

    You cannot dynamically allocate global memory inside a kernel. Instead, allocate the largest buffer you could possibly need (or the largest one supported by the OpenCL implementation), then call your kernel and do the reduction step.
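
    A minimal host-side sketch of that approach, assuming a hypothetical filter kernel that also reports how many elements survived (ctx, queue, in_buf, filter_kernel, max_packets, num_packets, global_size and host_out are placeholder names, not from this thread):

        /* Allocate the worst-case output buffer once, let the kernel report
           how many elements survived, then read back only that many. */
        cl_int err;
        cl_mem out_buf   = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY,
                                          max_packets * sizeof(cl_uint), NULL, &err);
        cl_mem count_buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE,
                                          sizeof(cl_uint), NULL, &err);

        cl_uint zero = 0;
        clEnqueueWriteBuffer(queue, count_buf, CL_TRUE, 0, sizeof(cl_uint),
                             &zero, 0, NULL, NULL);

        clSetKernelArg(filter_kernel, 0, sizeof(cl_mem), &in_buf);
        clSetKernelArg(filter_kernel, 1, sizeof(cl_mem), &out_buf);
        clSetKernelArg(filter_kernel, 2, sizeof(cl_mem), &count_buf);
        clSetKernelArg(filter_kernel, 3, sizeof(cl_uint), &num_packets);
        clEnqueueNDRangeKernel(queue, filter_kernel, 1, NULL, &global_size,
                               NULL, 0, NULL, NULL);

        /* Read the surviving-element count, then transfer only that much data. */
        cl_uint count;
        clEnqueueReadBuffer(queue, count_buf, CL_TRUE, 0, sizeof(cl_uint),
                            &count, 0, NULL, NULL);
        clEnqueueReadBuffer(queue, out_buf, CL_TRUE, 0, count * sizeof(cl_uint),
                            host_out, 0, NULL, NULL);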

  3. #3
    Senior Member
    Join Date
    Aug 2011
    Posts
    271

    Re: Dynamic memory allocation/deallocation and transfer?

    You can use buffers to pass counters down to subsequent kernels - the kernel can decide if it has work to do, and the host just launches something relative to the problem size or hardware.

    The buffers can be accessed using atomics to dynamically allocate slots in the result arrays/queues.
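
    A minimal kernel-side sketch of that idea (the filter predicate and names are placeholders, not from this thread): each work-item that keeps its packet reserves a unique slot in the output array by atomically incrementing a shared counter.

        __kernel void filter(__global const uint *in,
                             __global uint *out,
                             volatile __global uint *out_count, /* host initialises to 0 */
                             uint n)
        {
            uint gid = get_global_id(0);
            if (gid >= n)
                return;

            uint packet = in[gid];
            if ((packet & 0x1u) == 0u) {            /* dummy filter condition */
                uint slot = atomic_inc(out_count);  /* reserve a unique output index */
                out[slot] = packet;
            }
        }

    The host then reads out_count back to know how many elements to transfer, and the same trick extends to the multi-array case: one output buffer and one atomic counter per destination queue.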

