Posts about Yoshua Bengio
For Prof. Yoshua Bengio, GFlowNets are the most exciting thing on the horizon of machine learning today. He believes they can solve previously intractable problems and hold the key to unlocking abstract reasoning in machines. This discussion explores the promise of GFlowNets and the personal journey Prof. Bengio traveled to reach them.
Yoshua Bengio @ MILA (https://mila.quebec/en/person/bengio-yoshua/)
GFlowNet Foundations (https://arxiv.org/pdf/2111.09266.pdf)
Flow Network based Generative Models for Non-Iterative Diverse Candidate Generation (https://arxiv.org/pdf/2106.04399.pdf)
Interpolation Consistency Training for Semi-Supervised Learning (https://arxiv.org/pdf/1903.03825.pdf)
Towards Causal Representation Learning (https://arxiv.org/pdf/2102.11107.pdf)
Causal inference using invariant prediction: identification and confidence intervals (https://arxiv.org/pdf/1501.01332.pdf)
[R] Yoshua Bengio Team’s Recurrent Independent Mechanisms Endow RL Agents With Out-of-Distribution Adaptation and Generalization Abilities
A research team from the University of Montreal and the Max Planck Institute for Intelligent Systems constructs a reinforcement learning agent whose knowledge and reward function can be reused across tasks, along with an attention mechanism that dynamically selects which pieces of stable, reusable knowledge to apply, enabling out-of-distribution adaptation and generalization.
The paper Fast and Slow Learning of Recurrent Independent Mechanisms is on arXiv.
The catchy title comes from the Synced AI technology review; the original paper is "A Consciousness-Inspired Planning Agent for Model-Based Reinforcement Learning". The ability to generalize outside the training distribution is an important problem that must be addressed in order to push the limits of learning-based AI systems.
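The modular idea described above, independent mechanisms competing through attention so that only the most relevant ones update on a given input, can be sketched in a toy form. This is not the paper's implementation; all names, shapes, and the update rule are illustrative assumptions.

```python
import numpy as np

def rim_step(states, x, read_proj, k=2):
    """Toy sketch of attention-based competition among independent modules.

    states: (n_modules, d) hidden state of each module
    x: (d,) current input; read_proj: (d, d) hypothetical read projection
    Only the top-k scoring modules update their state; the rest stay fixed.
    """
    scores = states @ (read_proj @ x)   # attention score of each module for x
    active = np.argsort(scores)[-k:]    # top-k modules win the competition
    new_states = states.copy()
    new_states[active] += 0.1 * x       # illustrative update for winners only
    return new_states, active

# Usage: 4 modules with 3-dimensional states; 2 win the competition.
rng = np.random.default_rng(0)
states = rng.normal(size=(4, 3))
x = rng.normal(size=3)
new_states, active = rim_step(states, x, np.eye(3), k=2)
```

The point of the sparse update is that modules not relevant to the current input keep their state untouched, which is one intuition behind the out-of-distribution robustness claimed for such architectures.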
AI Researchers, Including Yoshua Bengio, Introduce a Consciousness-Inspired Planning Agent for Model-Based Reinforcement Learning
Human consciousness is an exceptional ability that enables us to generalize or adapt well to new situations and to learn new skills and concepts efficiently. When we encounter a new environment, conscious attention focuses on a small subset of its elements, aided by an abstract internal representation of the world. Also known as consciousness in the first sense (C1), this practical form of consciousness extracts the necessary information from the environment and ignores irrelevant details in order to adapt.
Inspired by this human capacity, the researchers set out to build an architecture that learns a latent space beneficial for planning, in which attention can be focused on a small set of variables at any given time. Since reinforcement learning (RL) trains agents in new, complex environments, they aimed to develop an end-to-end architecture that encodes some of these ideas into RL agents.
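The attention bottleneck described above, scoring many latent variables but letting only a handful participate in planning, can be sketched as follows. This is a minimal toy version, not the paper's architecture; the slot/query names, shapes, and scoring rule are assumptions for illustration.

```python
import numpy as np

def select_conscious_subset(slots, query, k=3):
    """Score each latent 'slot' against a query and keep only the top-k,
    mimicking a consciousness-inspired attention bottleneck."""
    scores = slots @ query                       # (n_slots,) relevance scores
    top_k = np.argsort(scores)[-k:]              # indices of the k best slots
    weights = np.exp(scores[top_k] - scores[top_k].max())
    weights /= weights.sum()                     # softmax over the chosen subset
    bottleneck = weights @ slots[top_k]          # weighted summary for planning
    return top_k, bottleneck

# Usage: 8 latent slots of dimension 4; only 3 enter the bottleneck.
rng = np.random.default_rng(0)
slots = rng.normal(size=(8, 4))
query = rng.normal(size=4)
idx, summary = select_conscious_subset(slots, query, k=3)
```

Restricting downstream computation to a few selected variables is the design choice that makes planning in the learned latent space tractable in this line of work.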
A team from the Max Planck Institute for Intelligent Systems, ETH Zurich, Google Research Amsterdam, Mila, and the University of Montreal makes an effort to bring together the causality and machine learning research programs, delineates the implications of causality for machine learning, and proposes critical areas for future research.
Here is a quick read: Yoshua Bengio Team Proposes Causal Learning to Solve the ML Model Generalization Problem
The paper Towards Causal Representation Learning is on arXiv.
Tomorrow at NeurIPS, Yoshua Bengio will propose ways for deep learning to handle "reasoning, planning, capturing causality and obtaining systematic generalization." He spoke to IEEE Spectrum on many of the same topics.