Neural Network Inference Engine IP Core Delivers >10 TeraOPS per Watt with OpenCL, OpenVX and NNEF

VeriSilicon today announced that significant milestones have been achieved for its versatile and highly scalable VIP8000 neural network inference engine family. The fully programmable VIP8000 processors combine the performance and memory efficiency of dedicated fixed-function logic with the customizability and future-proofing of full programmability in OpenCL, OpenVX, and a wide range of neural network frameworks, including NNEF.

“The biggest thing to happen in the computer industry since the PC is AI and machine learning. It will truly revolutionize, empower, and improve our lives. It can be done in giant machines from IBM and Google, and in tiny chips made with VeriSilicon’s neural network processors,” said Dr. Jon Peddie, president of Jon Peddie Research. “By 2020 we will wonder how we ever lived without our AI assistants,” he added.
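For readers unfamiliar with what “programmable in OpenVX” means at the application level, the sketch below builds and runs a trivial one-node graph using the standard Khronos OpenVX 1.x C API. It is a minimal, generic illustration of the graph-based programming model, not VIP8000-specific code; the image size and the single Gaussian node are arbitrary choices for the example.

```c
/* Minimal OpenVX sketch: construct, verify, and execute a one-node graph.
 * Uses only the standard Khronos OpenVX 1.x API; a real vision or NN
 * pipeline would chain many nodes in the same graph. */
#include <VX/vx.h>
#include <stdio.h>

int main(void)
{
    vx_context context = vxCreateContext();
    vx_graph   graph   = vxCreateGraph(context);

    /* 640x480 8-bit images as the graph's input and output. */
    vx_image input  = vxCreateImage(context, 640, 480, VX_DF_IMAGE_U8);
    vx_image output = vxCreateImage(context, 640, 480, VX_DF_IMAGE_U8);

    /* A single 3x3 Gaussian filter node. */
    vxGaussian3x3Node(graph, input, output);

    /* Verification is where an OpenVX runtime can map the graph onto
     * an accelerator before execution. */
    if (vxVerifyGraph(graph) == VX_SUCCESS &&
        vxProcessGraph(graph) == VX_SUCCESS)
        printf("Graph executed.\n");

    vxReleaseImage(&input);
    vxReleaseImage(&output);
    vxReleaseGraph(&graph);
    vxReleaseContext(&context);
    return 0;
}
```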