Implementing Deep Learning in Embedded Vision Systems
When you’re developing your next embedded vision system, knowing the right questions to ask is critical:

- Will the processor architecture support deep learning for vision?
- Is the convolutional neural network (CNN) engine flexible enough for both today’s requirements and future graphs?
- Are the tools capable of supporting HW/SW tradeoffs?
- What should I consider when evaluating processor performance?

This webinar will review these and other questions that designers need to consider for their next embedded vision designs. It will discuss Synopsys’ DesignWare EV6x Embedded Vision Processors, which combine high-performance vision CPU cores with a CNN engine and high-productivity programming tools based on OpenCL C and the OpenVX framework. The programmable and configurable EV6x processors support a broad range of embedded vision applications, including ADAS, video surveillance, augmented reality, and SLAM.
Attend this webinar to learn about:
- How hardware decisions may influence graph training, accuracy, and performance
- How to pick the optimal bit resolution to maximize accuracy and minimize die size
- Why mapping tools are critical to achieving high resolution and accuracy in the shortest time
- Benefits of a dedicated programmable architecture vs. general-purpose processors or hardwired CNN architectures
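The bit-resolution tradeoff mentioned above can be sketched numerically. The following is a minimal, illustrative Python example (not Synopsys tooling, and not how the EV6x mapping tools actually work): it uniformly quantizes a toy set of Gaussian "weights" at several bit widths and reports the resulting error, showing why narrower fixed-point formats shrink hardware cost at the expense of accuracy.

```python
import numpy as np

def quantize(x, bits):
    """Uniformly quantize x to a symmetric signed fixed-point grid with `bits` bits."""
    levels = 2 ** (bits - 1) - 1          # e.g. 127 representable magnitudes at 8 bits
    scale = np.max(np.abs(x)) / levels    # one shared scale factor for the whole tensor
    return np.round(x / scale) * scale

rng = np.random.default_rng(0)
weights = rng.normal(0.0, 0.1, size=10_000)   # toy stand-in for trained CNN weights

for bits in (12, 8, 4):
    err = np.mean((weights - quantize(weights, bits)) ** 2)
    print(f"{bits:2d}-bit quantization, mean squared error: {err:.2e}")
```

In practice, quantization error per layer would be weighed against accuracy on a validation set, with the bit width chosen as the smallest that keeps graph accuracy within tolerance, since narrower multipliers and smaller on-chip weight storage translate directly into lower die area.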