Thread: GPU monitoring tools (UI/commandline)

  1. #1
    Newbie
    Join Date
    Dec 2013
    Posts
    1

    GPU monitoring tools (UI/commandline)

    I am using Ubuntu 12.04 with an NVIDIA GeForce GT 640 GPU, on which I run OpenCL code. How can I monitor the GPU's usage while a program is executing? Is there a tool like Ubuntu's 'System Monitor' that shows GPU details? A command-line utility would also work. Any ideas on what I should use?

  2. #2
    Junior Member
    Join Date
    Jul 2011
    Location
    Bristol, UK
    Posts
    19
    Hi,

    For NVIDIA (assuming the proprietary driver), you can use the nvidia-smi command-line tool to get an approximate GPU load:
    Code :
    $ nvidia-smi
    Thu Dec  5 15:36:15 2013       
    +------------------------------------------------------+                       
    | NVIDIA-SMI 5.319.72   Driver Version: 319.72         |                       
    |-------------------------------+----------------------+----------------------+
    | GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
    | Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
    |===============================+======================+======================|
    |   0  Tesla M2090         On   | 0000:08:00.0     Off |                    0 |
    | N/A   N/A    P0   179W /  N/A |       81MB /  5375MB |     85%   E. Process |
    +-------------------------------+----------------------+----------------------+
    |   1  Tesla M2090         On   | 0000:09:00.0     Off |                    0 |
    | N/A   N/A    P0   189W /  N/A |       81MB /  5375MB |     90%   E. Process |
    +-------------------------------+----------------------+----------------------+
    |   2  Tesla M2090         On   | 0000:0A:00.0     Off |                    0 |
    | N/A   N/A    P0   183W /  N/A |       81MB /  5375MB |     77%   E. Process |
    +-------------------------------+----------------------+----------------------+
    |   3  Tesla M2090         On   | 0000:15:00.0     Off |                    0 |
    | N/A   N/A    P0   178W /  N/A |       81MB /  5375MB |     94%   E. Process |
    +-------------------------------+----------------------+----------------------+
    |   4  Tesla M2090         On   | 0000:16:00.0     Off |                    0 |
    | N/A   N/A    P0   180W /  N/A |       81MB /  5375MB |     82%   E. Process |
    +-------------------------------+----------------------+----------------------+
    |   5  Tesla M2090         On   | 0000:19:00.0     Off |                    0 |
    | N/A   N/A    P0   176W /  N/A |       81MB /  5375MB |     93%   E. Process |
    +-------------------------------+----------------------+----------------------+
    |   6  Tesla M2090         On   | 0000:1A:00.0     Off |                    0 |
    | N/A   N/A    P0    80W /  N/A |       81MB /  5375MB |     88%   E. Process |
    +-------------------------------+----------------------+----------------------+
    |   7  Tesla M2090         On   | 0000:1B:00.0     Off |                    0 |
    | N/A   N/A    P0   182W /  N/A |       81MB /  5375MB |     90%   E. Process |
    +-------------------------------+----------------------+----------------------+
     
    +-----------------------------------------------------------------------------+
    | Compute processes:                                               GPU Memory |
    |  GPU       PID  Process name                                     Usage      |
    |=============================================================================|
    |    0      3758  python                                               551MB  |
    |    1      3758  python                                               551MB  |
    |    2      3758  python                                               551MB  |
    |    3      3758  python                                               551MB  |
    |    4      3758  python                                               551MB  |
    |    5      3758  python                                               551MB  |
    |    6      3758  python                                               551MB  |
    |    7      3758  python                                               551MB  |
    +-----------------------------------------------------------------------------+
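    If you want a periodic, script-friendly readout rather than the full table, recent nvidia-smi builds also support a CSV query mode (the --query-gpu and --format options; check nvidia-smi --help to confirm your driver's build has them). A minimal sketch of polling utilization from Python, with the parsing kept separate so it works on any captured output:
    Code :
    ```python
    import subprocess

    def parse_utilization(csv_output):
        """Parse 'utilization.gpu' CSV lines such as '85 %' into integers."""
        values = []
        for line in csv_output.strip().splitlines():
            # Each line looks like '85 %'; keep only the numeric field.
            values.append(int(line.split()[0]))
        return values

    if __name__ == "__main__":
        # One sample per GPU; wrap in a loop with time.sleep() to poll.
        out = subprocess.check_output(
            ["nvidia-smi", "--query-gpu=utilization.gpu",
             "--format=csv,noheader"]).decode()
        print(parse_utilization(out))
    ```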
    Hope that helps,

    James

  3. #3
    If you're doing this as a way to optimise your code, you should use the Compute Profiler, which can give you a lot of detail about your kernels and the GPU performance counters. NVIDIA seems to have removed the OpenCL support starting from CUDA 5, but if you install CUDA 4.2 and run computeprof, it still works with OpenCL code.
