The "OpenCL support by NNVM & TVM" session from Linaro Connect 2018 in Hong Kong is now online. Abstract: To accelerate on-device deep learning inference with mobile GPUs on ARM platforms, OpenCL support is a natural and promising fit. NNVM is an open compiler for AI frameworks built on a graph IR, and TVM is an open-source end-to-end Tensor IR/DSL stack. Together, NNVM and TVM provide a flexible architecture that supports different frameworks and backends. OpenCL is now one of the backends supported by NNVM & TVM; this session covers its latest status and some how-tos.