The HLF Workshop on Deep Learning and Neuroscience
Understanding and modeling intelligence is one of the greatest problems in science and technology today. Making significant progress on this challenge will require interaction among several disciplines, including neuroscience and cognitive science in addition to computer science, robotics, and machine learning. Deep learning, a relatively new area of artificial intelligence and machine learning, has made huge advances in the last few years and shown remarkable empirical success in many applications, such as image categorization, face identification, action recognition, speech recognition, and machine translation, to name a few.
Although the advances in performance during the last few years are largely an engineering feat, the cumulative effect of much greater computing power, the availability of large manually labeled data sets, and a number of incremental technical improvements, deep learning at its core draws inspiration from brain science and is seen as the beginning of a breakthrough with the potential to open new fields for science. Deep learning has close connections with the human brain, and its learning has at times been compared with learning in the human brain. Deep networks trained on ImageNet seem to mimic not only the recognition performance but also the tuning properties of neurons in areas of the visual cortex of monkeys. Despite many such arguments by multiple researchers, the connections between deep learning and neuroscience are still poorly understood and remain an active topic of research.
The advances in deep learning, and their presumed ability to mimic the human brain, have opened up several research questions. Do humans learn in a similar way to deep networks, i.e., by back-propagating the error? Does the brain also optimize a cost function, as deep networks do? Can deep networks trained on millions of examples capture an important subset of the basic building blocks of brain-like intelligence? Deep networks require millions of examples to learn, but children do not; how, then, can deep networks mimic human learning? Are recent advances in deep learning, such as memory networks, a step towards creating a prototype of human memory? In this workshop we plan to bring together researchers from fields such as computer science, neuroscience, mathematics, behavioral science, and cognitive science to discuss these questions.
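To make the first two questions concrete, here is a minimal sketch (in Python with NumPy, with arbitrary illustrative choices of network size, learning rate, and toy data) of what "optimizing a cost function by back-propagating the error" means for a deep network: a tiny two-layer network learns XOR by repeatedly computing a mean-squared-error cost and pushing its gradient backwards through the layers.

```python
import numpy as np

# Toy data: learn y = XOR(x1, x2) with a tiny two-layer network.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 8))   # input -> hidden weights
b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1))   # hidden -> output weights
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0  # learning rate (illustrative choice)
for step in range(5000):
    # Forward pass: compute the network's prediction.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Cost function: mean squared error over the four examples.
    cost = np.mean((out - y) ** 2)
    # Backward pass: propagate the error gradient layer by layer
    # (chain rule through the sigmoid, whose derivative is s * (1 - s)).
    d_out = 2 * (out - y) / y.size * out * (1 - out)
    d_W2 = h.T @ d_out
    d_b2 = d_out.sum(axis=0)
    d_h = d_out @ W2.T * h * (1 - h)
    d_W1 = X.T @ d_h
    d_b1 = d_h.sum(axis=0)
    # Gradient descent: nudge weights in the direction that lowers the cost.
    W1 -= lr * d_W1; b1 -= lr * d_b1
    W2 -= lr * d_W2; b2 -= lr * d_b2

print(cost)  # the cost shrinks as training proceeds
```

Whether the brain implements anything like this explicit, layer-by-layer gradient signal is exactly the kind of open question the workshop aims to discuss.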
Organizers:
Arvind Agarwal, IBM Research, India
Felix Putze, University of Bremen, Germany
Subhrajit Roy, IBM Research, Australia
Mentor:
Prof. Johannes Schöning, University of Bremen, Germany
Coordinates:
Monday, September 25th, 14:30-16:00 (workshop session I)
(Main Meeting Venue - Heidelberg University on University Square, 69117 Heidelberg)
Workshop Program
Introduction (14:30 – 14:45)
Brief introduction to the workshop format and the three different subgroups for the subsequent discussion.
Group Discussion (14:45 – 15:30)
The group splits into smaller subgroups for more focused discussions on different topics. Each subgroup is moderated by one of the workshop organizers and stimulated by short impulse presentations, supported by posters.
Session-1: Knowledge Discovery & Learning
Moderator: Arvind Agarwal
Impulse Presentation:
Ansif Arooj: Strengthening Learning by Prior Knowledge
Lydia Braunack-Mayer: Man vs. Machine: Can machines really ‘learn’?
Arvind Agarwal: How Brains Represent Thousands of Objects
Session-2: Computing in the Brain
Moderator: Subhrajit Roy
Impulse Presentation:
Oliver Gäfvert: Differentiable Neural Computers
Subhrajit Roy: How neuroscience can influence Deep Learning
Session-3: Principles of Human Cognition for Next Generation Machine Learning
Moderator: Felix Putze
Impulse Presentation:
Tapasya Patki: Modeling how humans process language
Anwesha Das: Desh: Deep Learning for HPC System Health Resilience
Final joint discussion (15:30 – 16:00)
To conclude, each moderator will give a short summary of their subgroup's discussion. We will then discuss next steps and opportunities to continue the workshop theme in other forms.
More details on the program, including abstracts of the discussions and talks, can be found here.