      Accelerating Machine Learning

      Machine Learning in the form of Deep Convolutional Neural Networks has demonstrated significant advantages over classical algorithms, albeit at the cost of a substantial compute burden. MLE has started putting together a set of accelerated platforms, solutions and focused services within the Xilinx FPGA ecosystem.

      Currently, these accelerated platforms, solutions and services focus on the Inference phase of Deep Learning. MLE’s acceleration techniques for Deep Convolutional Neural Network Inference combine “unconventional” dataflow-oriented architectures with modern design flows using Xilinx High-Level Synthesis and the Xilinx SDx tool chain.
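
      As a rough illustration of this dataflow style, the HLS C++ sketch below connects two pipeline stages through on-chip streams so that they run concurrently instead of exchanging feature maps via external memory. The stage functions, fixed-point data type and stream depth are placeholder assumptions chosen for illustration, not MLE's production design.

          #include <hls_stream.h>
          #include <ap_fixed.h>

          // Placeholder reduced-precision pixel type; a real design would pick widths per layer.
          typedef ap_fixed<8, 3> pix_t;

          // Toy stage standing in for a MAC-heavy convolution layer.
          static void conv_stage(hls::stream<pix_t> &in, hls::stream<pix_t> &out, int n) {
              for (int i = 0; i < n; i++) {
                  #pragma HLS PIPELINE II=1
                  out.write(in.read() * pix_t(0.5));
              }
          }

          // Toy stage standing in for a ReLU activation layer.
          static void relu_stage(hls::stream<pix_t> &in, hls::stream<pix_t> &out, int n) {
              for (int i = 0; i < n; i++) {
                  #pragma HLS PIPELINE II=1
                  pix_t v = in.read();
                  out.write(v > pix_t(0) ? v : pix_t(0));
              }
          }

          // Top level: DATAFLOW lets both stages run concurrently, with data streaming
          // between them through an on-chip FIFO instead of external DRAM.
          void cnn_top(hls::stream<pix_t> &in, hls::stream<pix_t> &out, int n) {
              #pragma HLS DATAFLOW
              hls::stream<pix_t> mid("mid");
              #pragma HLS STREAM variable=mid depth=64
              conv_stage(in, mid, n);
              relu_stage(mid, out, n);
          }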

      Close collaboration with renowned experts from the Bavarian Multi-Media Lab at Augsburg University, Germany, facilitates rapid adoption of recent research results, for example in the field of Reduced Precision Neural Networks.
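
      To give a concrete flavor of reduced precision, the HLS C++ snippet below computes the inner dot product of a convolution with 8-bit fixed-point activations and weights and a wider fixed-point accumulator. The bit widths and vector length are illustrative assumptions, not results of this collaboration.

          #include <ap_fixed.h>

          // Illustrative reduced-precision types: 8-bit fixed-point operands and a
          // wider accumulator so that many multiply-accumulates cannot overflow.
          typedef ap_fixed<8, 2>   act_t;   // activations
          typedef ap_fixed<8, 2>   wgt_t;   // weights
          typedef ap_fixed<24, 10> acc_t;   // accumulator

          // Dot product of one filter against one input window, the inner kernel of a
          // convolution. Each narrow multiply maps to a small LUT/DSP circuit rather
          // than a full 32-bit floating-point unit.
          acc_t dot_product(const act_t act[64], const wgt_t wgt[64]) {
              acc_t acc = 0;
              for (int i = 0; i < 64; i++) {
                  #pragma HLS PIPELINE II=1
                  acc += act[i] * wgt[i];
              }
              return acc;
          }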

      MLE is a licensee of Xilinx and offers sub-licensing, technology support and complementary design services for integrating Accelerated Deep Learning Inference into your application. When applied to the Deep Learning Inference phase, Xilinx FPGA technology can provide a unique combination of low-latency response times and compute performance in the Tera-OPS range at very low power.

      Applications of Machine Learning

      • Image processing and classification
      • Environment perception
      • Multi-camera object recognition systems
      • Sensor fusion

      Core Benefits

      • Very fast response time with low deterministic processing latencies
      • Very high raw compute performance of up to tens of Tera-OPS
      • Low power envelopes of typically less than 50 Watts
      • Scalability from embedded systems to High-Performance Computing (HPC)

      Key Features

      • Highly integrated single-chip solutions
      • Scales to state-of-the-art networks (CNV, ResNet-50, etc.)

      Documentation