Implementing Deep Learning in Embedded Vision Systems


Available On Demand
Duration 1h 00min
Speakers
Gordon Cooper
Product Marketing Manager, Embedded Vision Processors
Synopsys
Gordon Cooper is a Product Marketing Manager for Synopsys’ Embedded Vision Processor family. Gordon brings more than 20 years of experience in digital design, field applications and marketing at Raytheon, Analog Devices, and NXP to the role. Gordon also served as a Commanding Officer in the US Army Reserve, including a tour in Kosovo. Gordon holds a Bachelor of Science degree in Electrical Engineering from Clarkson University.
Bo Wu
Staff CAE
Synopsys
Bo Wu is currently a staff CAE at Synopsys focusing on Embedded Vision Processors. He holds Bachelor's and Master's degrees from Tsinghua University in China and a Ph.D. from the University of Victoria in Canada. Between 1996 and 2000, he worked as a Senior System Engineer at Nortel Networks in Ottawa and as a DSP Engineer at AT&T Wireless in Seattle. Since then, he has held various engineering and technical marketing positions, mainly responsible for system-level design products and processor solutions, at Synopsys, CoWare, and Cadence.

When you’re developing your next embedded vision system, knowing the right questions to ask is critical: Will the processor architecture support deep learning for vision? Is the convolutional neural network (CNN) engine flexible enough for both today’s requirements and future graphs? Are the tools capable of supporting HW/SW tradeoffs? What should you consider when evaluating processor performance? This webinar will review these and other questions that designers need to consider for their next embedded vision designs. It will discuss Synopsys’ DesignWare EV6x Embedded Vision Processors, which combine high-performance vision CPU cores and a CNN engine with high-productivity programming tools based on OpenCL C and the OpenVX framework. The programmable and configurable EV6x processors support a broad range of embedded vision applications, including ADAS, video surveillance, augmented reality, and SLAM.
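As a rough illustration of the graph-based programming model mentioned above, the sketch below builds and runs a trivial pipeline with the standard Khronos OpenVX C API. It is generic OpenVX rather than Synopsys’ EV6x-specific tooling, and the image dimensions and the single Gaussian filter node are arbitrary stand-ins for a real vision/CNN graph.

```c
#include <VX/vx.h>
#include <stdio.h>

int main(void)
{
    /* All OpenVX objects live inside a context. */
    vx_context ctx = vxCreateContext();

    /* Graphs are declared up front so the runtime can verify and
       schedule them before execution (e.g., map nodes to accelerators). */
    vx_graph graph = vxCreateGraph(ctx);

    /* Placeholder 640x480 8-bit images; a real system would import
       camera frames instead. */
    vx_image in  = vxCreateImage(ctx, 640, 480, VX_DF_IMAGE_U8);
    vx_image out = vxCreateImage(ctx, 640, 480, VX_DF_IMAGE_U8);

    /* One Gaussian blur node stands in for a full vision pipeline. */
    vxGaussian3x3Node(graph, in, out);

    /* Verification validates parameters; processing executes the graph. */
    if (vxVerifyGraph(graph) == VX_SUCCESS &&
        vxProcessGraph(graph) == VX_SUCCESS) {
        printf("Graph executed.\n");
    }

    vxReleaseContext(&ctx);  /* Also releases the graph and images. */
    return 0;
}
```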

Attend this webinar to learn about:

  • How hardware decisions may influence graph training, accuracy, and performance
  • How to pick the optimal bit resolution to maximize accuracy and minimize die size (see the quantization sketch after this list)
  • Why mapping tools are critical to achieving high resolution and accuracy in the shortest time
  • Benefits of a dedicated programmable architecture vs. general-purpose processors or hardwired CNN architectures
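
On the bit-resolution point above, here is a minimal sketch of symmetric 8-bit quantization: floating-point CNN weights are mapped to signed 8-bit integers with a single per-tensor scale, illustrating the accuracy-versus-die-size tradeoff the webinar covers. The function name and per-tensor scaling scheme are illustrative assumptions, not the method used by the EV6x mapping tools.

```c
#include <math.h>
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* Illustrative symmetric per-tensor quantization: map float weights to
   int8 with one scale factor. Lower bit widths shrink multiplier and
   memory area but add quantization error. */
static float quantize_int8(const float *w, int8_t *q, size_t n)
{
    float max_abs = 0.0f;
    for (size_t i = 0; i < n; ++i) {
        float a = fabsf(w[i]);
        if (a > max_abs) max_abs = a;
    }
    float scale = (max_abs > 0.0f) ? max_abs / 127.0f : 1.0f;
    for (size_t i = 0; i < n; ++i) {
        long v = lroundf(w[i] / scale);
        if (v > 127)  v = 127;
        if (v < -128) v = -128;
        q[i] = (int8_t)v;
    }
    return scale;  /* Dequantize with w ~= q[i] * scale. */
}

int main(void)
{
    const float w[] = { 0.42f, -1.30f, 0.05f, 0.97f };
    int8_t q[4];
    float scale = quantize_int8(w, q, 4);
    for (int i = 0; i < 4; ++i)
        printf("%+.2f -> %4d (error %+.4f)\n", w[i], q[i], q[i] * scale - w[i]);
    return 0;
}
```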