Events Calendar
Tuesday, 22 November 2016, 18:30 - 20:00
Speaker: Alexander Novikov (HSE)
Topic: Tensorizing Neural Networks
Abstract:
Convolutional neural networks excel at image recognition tasks, but this comes at the cost of high computational and memory complexity. CNNs require millions of floating-point operations to process a single image, so real-time applications need powerful CPU or GPU devices. Moreover, these networks contain millions of trainable parameters and consume hundreds of megabytes of storage and memory bandwidth. CNNs are therefore forced to use RAM instead of relying solely on the processor cache, a memory device that is orders of magnitude more energy efficient, which increases energy consumption even further. These constraints limit the spread of CNNs on mobile devices. I will talk about our work on a tensor factorization framework for compressing the fully-connected and convolutional layers of CNNs. Another research direction (besides compression) is to increase the size of the layers by training them in the compact tensor format, in order to improve accuracy.
For more details, see the papers:
https://papers.nips.cc/paper/5787-tensorizing-neural-networks
https://arxiv.org/abs/1611.03214
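As a rough illustration of why a compact tensor format can compress a fully-connected layer so aggressively, here is a minimal sketch that counts parameters for a dense layer versus the same weight matrix stored in Tensor Train (TT) format. The mode factorization and TT-ranks below are hypothetical choices for illustration, not values from the talk or the papers.

```python
from math import prod

# Hypothetical sketch (not the authors' code): compare the parameter count
# of a dense 1024x1024 fully-connected layer with the same matrix stored
# in TT format, where each TT-core has shape (r_{k-1}, m_k, n_k, r_k).

in_modes  = [4, 4, 4, 4, 4]     # factor the 1024 input units as 4^5
out_modes = [4, 4, 4, 4, 4]     # factor the 1024 output units as 4^5
ranks     = [1, 8, 8, 8, 8, 1]  # TT-ranks; boundary ranks are always 1

# Dense layer: one weight per (input, output) pair.
dense_params = prod(in_modes) * prod(out_modes)

# TT format: one core per mode pair; sum the core sizes.
tt_params = sum(ranks[k] * in_modes[k] * out_modes[k] * ranks[k + 1]
                for k in range(len(in_modes)))

print(dense_params, tt_params)  # 1048576 vs 3328 (~315x fewer parameters)
```

The compression ratio grows with the number of modes and shrinks as the TT-ranks grow, so the ranks control the trade-off between compression and expressive power.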
Venue: IITP RAS, room 615 (6th floor)