Concurrent Technologies Plc.

Newcomen Way, Colchester, CO4 9WN, UNITED KINGDOM

3U VPX Accelerator Engine

TR AEx/3sd-RCx is a rugged, conduction-cooled 3U VPX artificial intelligence accelerator board. Paired with a Concurrent Technologies processor board, TR AEx/3sd-RCx is designed to boost the performance of inference-at-the-edge applications in the defence, exploration and transportation markets.

Key Features of 3U VPX Accelerator Engine

  • Rugged 3U VPX Accelerator Engine
  • Speeds up Inference at the Edge activities
  • Supported by the Intel® OpenVINO™ toolkit
  • Supports popular frameworks like Caffe, TensorFlow, MXNet
  • PCI Express® connectivity to host board
  • Companion to TR H4x/3sd-RCx
  • Includes pre-trained models

TR AEx/3sd-RCx Inference Accelerator Engine

The TR AEx/3sd-RCx acts as a complementary accelerator for TR H4x/3sd-RCx rugged compute cards. Different neural network algorithms favour different hardware architectures, and the TR AEx/3sd-RCx can significantly increase inference throughput by providing a dedicated engine to offload neural network inference for many algorithms. Its key advantages are:

  • High throughput versus CPU-only AI inference for many models
  • Significant performance-per-watt advantage versus CPU/GPU-only AI inference
  • Dedicated processing hardware frees up CPU resources
  • Low-latency architecture suitable for real-time applications
  • 3U VPX form factor enabling a compact embedded solution
  • Reprogrammable hardware allows for changing requirements and adaptation to new neural network models

Deep Learning

Historically, machine learning required researchers and domain experts to manually design complex filters to extract trends and features from data. Today, deep learning algorithms and accelerators can be deployed to rapidly and effectively train models to recognise new and varied input data.

Deep learning is a natural evolution of machine learning that enables ever more complex neural network models to analyse and evaluate real-world problems. Deep learning models have multiple internal layers of neurons which can be trained to solve problems such as:

  • Object recognition
  • Object detection
  • Feature segmentation

Neural network models can often achieve higher accuracy than human judgement, making them a valuable tool in mission-critical applications.
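
To make the idea of "multiple internal layers" concrete, the sketch below defines a small convolutional network in TensorFlow/Keras (TensorFlow being one of the frameworks listed above). The layer sizes, input shape and 10-class output are illustrative assumptions only, not a model supplied with the board.

    # Minimal sketch of a layered deep learning model for object
    # recognition, using TensorFlow/Keras. All sizes are illustrative
    # assumptions, not a model shipped with the TR AEx/3sd-RCx.
    import tensorflow as tf

    model = tf.keras.Sequential([
        # Convolutional layers learn visual features directly from pixels,
        # replacing the hand-designed filters of classical machine learning.
        tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(64, 64, 3)),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        # Fully connected layers combine the extracted features into a
        # classification over (here) 10 hypothetical object classes.
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")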

Inference at the Edge

Inference is the process of using a trained neural network to sense, reason and act upon given stimuli. Traditionally, inference takes place in large datacentres: data captured in the field must be transported to the datacentre in order to be processed. This incurs penalties such as:

  • Significant latency from data collection to inference, resulting in inaccurate situation reports and assessments
  • Unnecessary load on servers and network connections from uploading and downloading data
  • Security and privacy concerns, as data must be transported in order to be processed

Inference at the edge enables near-instant output from trained neural network models within the deployed hardware, providing high-quality, actionable intelligence.

Key factors in this are:

  • Low latency, as processing is local
  • Reduced dependence on connectivity, bandwidth or server loading
  • Security: data can only be accessed on the device and deployed platform

OpenVINO™ Toolkit

The Open Visual Inference and Neural Network Optimisation (OpenVINO) toolkit enables the TR AEx/3sd-RCx to process Convolutional Neural Networks (CNNs) quickly and efficiently. It provides a collection of tools to easily optimise, deploy and analyse neural network models, even with little knowledge of the hardware architecture or of AI itself.

At the heart of OpenVINO is the Deep Learning Deployment Toolkit, which consists of:

1. Model Optimiser: converts and optimises trained models into Intermediate Representation binaries so that they are understood by, and performant on, the Accelerator Engine.

2. Inference Engine: loads models onto the Accelerator Engine and manages inference heterogeneously between the host CPU and the Inference Accelerator Engine using an OpenCL backbone.

3. Output: presents results in text, image and video formats with OpenVX™ and OpenCV.
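
As a minimal sketch of this workflow (not the board's documented software interface), the Python fragment below uses the legacy OpenVINO Inference Engine API to load an Intermediate Representation produced by the Model Optimiser and run a single inference. The file names and the HETERO:FPGA,CPU device string are assumptions for illustration.

    import numpy as np
    from openvino.inference_engine import IECore  # legacy OpenVINO Inference Engine API

    # model.xml / model.bin are Intermediate Representation binaries
    # produced by the Model Optimiser (file names are placeholders).
    ie = IECore()
    net = ie.read_network(model="model.xml", weights="model.bin")

    # Load the network onto the accelerator; the HETERO device string
    # (an assumption here) lets unsupported layers fall back to the CPU,
    # matching the heterogeneous execution described above.
    exec_net = ie.load_network(network=net, device_name="HETERO:FPGA,CPU")

    # Run a single inference on a dummy input; a real application would
    # pass preprocessed sensor or camera data instead.
    input_name = next(iter(net.input_info))
    shape = net.input_info[input_name].input_data.shape
    result = exec_net.infer(inputs={input_name: np.zeros(shape, dtype=np.float32)})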

OpenVINO supports models trained in popular frameworks such as:

  • TensorFlow
  • Caffe
  • MXNet

With a wide range of supported topologies including:

  • AlexNet
  • MobileNet
  • ResNet
  • GoogLeNet
  • Inception
  • SSD
  • SqueezeNet
  • YOLO
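
To take a model trained in one of these frameworks onto the Accelerator Engine, it is first converted to Intermediate Representation with the Model Optimiser, the counterpart to the inference sketch above. The fragment below invokes the mo.py tool from Python; the script path, model file name and flag choices are assumptions for illustration.

    # Hedged sketch: converting a trained TensorFlow model to OpenVINO
    # Intermediate Representation by invoking the Model Optimiser.
    # Paths and file names are placeholders.
    import subprocess

    subprocess.run(
        [
            "python3", "mo.py",                 # Model Optimiser entry point
            "--input_model", "frozen_model.pb", # trained TensorFlow model (placeholder)
            "--data_type", "FP16",              # reduced precision suits edge accelerators
            "--output_dir", "ir/",              # emits .xml (topology) and .bin (weights)
        ],
        check=True,
    )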