sDNA stands for Scalable Deep Neural Networks Accelerator. It is a unique, innovative, patent-pending Deep Neural Network (DNN) architecture that operates as a specialized machine learning (ML) processor and addresses the challenges that Artificial Intelligence (AI), and ML solutions in particular, currently face.
The DNN is the high-throughput engine of an ML implementation. In a real-time ML/AI system, the DNN consumes most of the device's power and silicon area.
sDNA is a highly optimized architecture that achieves reductions in the power consumption and device cost of DNN solutions that were previously impossible. It can be licensed to companies that develop AI/ML chips and devices as an embedded IP core, integrated as an ML accelerator.

Why is it important?
- Many real-time AI applications, such as autonomous driving, smartphones, and video surveillance, require very high throughput. The devices that run them are called edge devices, and they perform ML inference.
- sDNA is also applicable to AI cloud applications (server centers that perform both ML training and inference), where reducing power and cost is equally important.

What are the main challenges of ML implementations?
- Processing inefficiency: DNN weight tensors (multi-dimensional matrices) are sparsely populated with non-zero weights. Activation result tensors also contain many zeros; with the widely used rectified linear unit (ReLU) activation function, typically about 50% of the components are zero. Unless the zero weight components and zero activation components are removed from the multiplications, efficiency is very low: a great deal of power and many clock cycles are wasted on multiplications whose result is zero. Because the locations of the zero weights and/or zero activations are random (in a typical time-division-multiplexed DNN implementation), it is very challenging to build a real-time parallel implementation that removes all the zero multiplications to improve efficiency.
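To make the inefficiency concrete, here is a minimal NumPy sketch of a toy layer. All sizes and sparsity levels here are illustrative assumptions, not sDNA figures; it simply counts how many multiply-accumulate (MAC) operations actually involve two non-zero operands, i.e., how many a zero-skipping engine would need to perform.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy layer: sparse weights (many zeros) and ReLU activations (~50% zeros).
weights = rng.normal(size=(8, 16))
weights[rng.random(weights.shape) < 0.8] = 0.0    # assume ~80% zero weights
activations = np.maximum(rng.normal(size=16), 0)  # ReLU zeroes ~half the inputs

dense_macs = weights.size  # a naive engine multiplies every weight-activation pair

# A multiplication is useful only if BOTH operands are non-zero
# (the mask broadcasts the activation vector across every output row).
useful = (weights != 0) & (activations != 0)
useful_macs = int(useful.sum())

print(f"dense MACs:  {dense_macs}")
print(f"useful MACs: {useful_macs} ({useful_macs / dense_macs:.0%} of dense)")
```

With random zero locations, the useful multiplications land at unpredictable positions in the tensor, which is exactly what makes exploiting them in parallel hardware hard.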

Why is sDNA the right solution to these challenges?
- sDNA is an innovative, fully parallel, very-high-throughput DNN architecture that dynamically eliminates 100% of the zero-result multiplications and achieves 100% multiplier utilization. It requires 30x fewer multipliers than a standard DNN architecture that does not use sDNA's optimization techniques. This reduction in the number of multipliers and in internal and external memory requirements is what enables sDNA's superior power and cost reduction.
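As a back-of-the-envelope illustration of how operand sparsity translates into multiplier savings (the densities below are assumptions chosen for illustration, not the derivation of the 30x figure above): if only a fraction d_w of weights and d_a of activations are non-zero, an engine that performs only non-zero multiplications needs roughly d_w * d_a of the multipliers of a dense design.

```python
# Hypothetical densities, for illustration only:
d_w = 0.10  # assumption: 10% of weights are non-zero
d_a = 0.50  # assumption: ReLU leaves ~50% of activations non-zero

useful_fraction = d_w * d_a        # fraction of products that are non-zero
reduction = 1 / useful_fraction    # multiplier-count reduction vs. dense

print(f"useful fraction: {useful_fraction:.0%}, reduction: {reduction:.0f}x")
# prints "useful fraction: 5%, reduction: 20x"
```

Sparser weight tensors push the reduction factor higher; reaching a given factor in practice also depends on keeping the surviving multipliers fully utilized despite the random zero locations.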

Employees

Asher Hazanchuk, CEO: I hold 15+ patents and am deeply interested in machine learning and other green parallel-processing applications.