Compaction of deep neural networks
Easily reduce the size of deep neural networks (DNNs) so that they can be deployed on edge devices.
- Developed a pruning method based on the "group sparsity" phenomenon that occurs when training a deep neural network under commonly used conditions.
- Applied our technology to run DNNs on edge devices with low computational power (e.g., on-board devices or robots).
- Edge devices in vehicles or robots
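The pruning idea above can be sketched in a few lines: after training, a unit whose incoming weight group has collapsed toward zero contributes nothing and can be removed, shrinking both its incoming and outgoing weight matrices. The following is a minimal illustrative sketch, not Toshiba's implementation; the function name and threshold are assumptions.

```python
import numpy as np

def prune_units(W_in, W_out, threshold=1e-3):
    """Remove hidden units whose incoming weight group (one column
    per unit) has near-zero norm -- the group-sparsity effect."""
    group_norms = np.linalg.norm(W_in, axis=0)  # L2 norm per unit
    keep = group_norms > threshold
    # Drop dead units from both the incoming and outgoing weights
    return W_in[:, keep], W_out[keep, :], keep

# Toy weights: units 1 and 3 have (near-)zero incoming groups,
# as can happen after training with Adam + ReLU + weight decay
W_in = np.array([[0.5, 1e-6, -0.3, 0.0],
                 [0.2, 0.0,   0.8, 1e-7]])
W_out = np.ones((4, 3))

W_in_p, W_out_p, keep = prune_units(W_in, W_out)
print(W_in_p.shape)   # half of the hidden units removed
```

Here half the hidden units are removed with no change to the network's output, since the pruned units' incoming weights were effectively zero.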
Benchmarks, strengths, and track record
- Reduces network parameters by 80% while maintaining deep neural network performance (ICMLA 2018).
Please include the title “Toshiba AI Technology Catalog: More compact deep neural networks” or the URL in the inquiry text.
Please note that because this technology is currently the subject of R&D activities, immediate responses to inquiries may not be possible.
- A. Yaguchi, et al., “Adam Induces Implicit Weight Sparsity in Rectifier Neural Networks,” Proc. of 17th IEEE International Conference on Machine Learning and Applications (ICMLA) 2018, 2018.
- A. Taniguchi, et al., “Compact deep neural network technologies using sparsity of weight coefficients,” Toshiba Review, Vol. 74, No. 4, 2019.
- Toshiba’s Compaction Technology for Deep Neural Networks Opens Way to High Accuracy Recognition Processing on Edge Devices