Search results
Learning dexterity
We’ve trained a human-like robot hand to manipulate physical objects with unprecedented dexterity.
Variational option discovery algorithms
OpenAI Scholars 2018: Meet our Scholars
Our first class of OpenAI Scholars is underway, and you can now follow along as these experienced software developers become machine learning practitioners.
OpenAI Five Benchmark
The OpenAI Five Benchmark match is now over!
Glow: Better reversible generative models
We introduce Glow, a reversible generative model which uses invertible 1x1 convolutions. It extends previous work on reversible generative models and simplifies the architecture. Our model can generate realistic high-resolution images, supports efficient sampling, and discovers features that can be used to manipulate attributes of data. We’re releasing code for the model and an online visualization tool so people can explore and build on these results.
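Below is a minimal, illustrative sketch (in PyTorch, which the post does not specify) of the invertible 1x1 convolution that the summary above highlights. The class name and orthogonal initialization are assumptions made for illustration; this is not the released Glow code.

```python
# Illustrative invertible 1x1 convolution in the style described above.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Invertible1x1Conv(nn.Module):
    def __init__(self, num_channels):
        super().__init__()
        # Initialize the channel-mixing weight as a random orthogonal
        # matrix, so the layer is invertible by construction.
        w, _ = torch.linalg.qr(torch.randn(num_channels, num_channels))
        self.weight = nn.Parameter(w)

    def forward(self, x):
        # x: (batch, channels, height, width). A 1x1 convolution is a
        # per-pixel linear map over channels.
        b, c, h, w = x.shape
        y = F.conv2d(x, self.weight.view(c, c, 1, 1))
        # Change-of-variables term used for maximum-likelihood training:
        # h * w * log|det W| per example.
        logdet = h * w * torch.slogdet(self.weight)[1]
        return y, logdet

    def inverse(self, y):
        c = y.shape[1]
        w_inv = torch.inverse(self.weight)
        return F.conv2d(y, w_inv.view(c, c, 1, 1))

if __name__ == "__main__":
    # Quick round-trip check: inverting the output recovers the input.
    layer = Invertible1x1Conv(8)
    x = torch.randn(2, 8, 16, 16)
    y, _ = layer(x)
    print(torch.allclose(x, layer.inverse(y), atol=1e-5))
```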
Learning Montezuma’s Revenge from a single demonstration
We’ve trained an agent to achieve a high score of 74,500 on Montezuma’s Revenge from a single human demonstration, better than any previously published result. Our algorithm is simple: the agent plays a sequence of games starting from carefully chosen states from the demonstration, and learns from them by optimizing the game score using PPO, the same reinforcement learning algorithm that underpins OpenAI Five.
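The sketch below illustrates the curriculum idea described in the summary: repeatedly reset the game to states taken from the demonstration and optimize the score with PPO, moving the starting point earlier as the agent improves. The environment interface (`restore_state`, `step`, `reached_goal`) and the `ppo_update` hook are hypothetical placeholders, not APIs from the post.

```python
# Schematic demonstration-curriculum loop, assuming an emulator that can
# be restored to saved states and an external PPO implementation.
def train_from_demo(env, demo_states, policy, ppo_update,
                    success_threshold=0.2, batch_episodes=32):
    # Start from states near the end of the demonstration, where only a
    # short stretch of good play is needed to finish the game.
    start_index = len(demo_states) - 1
    while start_index >= 0:
        episodes = []
        for _ in range(batch_episodes):
            # Reset the emulator to a state chosen from the demo.
            env.restore_state(demo_states[start_index])
            episodes.append(run_episode(env, policy))
        # Optimize the game score with PPO on the collected rollouts.
        ppo_update(policy, episodes)
        # Once the agent succeeds often enough from this point, move the
        # starting point further back toward the beginning of the demo.
        success_rate = sum(ep["solved"] for ep in episodes) / batch_episodes
        if success_rate > success_threshold:
            start_index -= 1
    return policy

def run_episode(env, policy):
    # Roll out the current policy to the end of the episode, recording
    # the total score and whether the run reached the goal.
    obs, total_reward, done = env.observation(), 0.0, False
    while not done:
        obs, reward, done = env.step(policy.act(obs))
        total_reward += reward
    return {"reward": total_reward, "solved": env.reached_goal()}
```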
Keyword Snooze: A New Way to Help Control Your News Feed - meta.com
Find Us - Tesla
OpenAI Five
Our team of five neural networks, OpenAI Five, has started to defeat amateur human teams at Dota 2.
Retro Contest: Results
The first run of our Retro Contest—exploring the development of algorithms that can generalize from previous experience—is now complete.
Learning policy representations in multiagent systems
Improving language understanding with unsupervised learning
We’ve obtained state-of-the-art results on a suite of diverse language tasks with a scalable, task-agnostic system, which we’re also releasing. Our approach is a combination of two existing ideas: transformers and unsupervised pre-training. These results provide a convincing example that pairing supervised learning methods with unsupervised pre-training works very well; this is an idea that many have explored in the past, and we hope our result motivates further research into applying this idea...
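As a rough illustration of the two-stage recipe (unsupervised pre-training of a transformer language model, then supervised fine-tuning on a labeled task), the sketch below uses the Hugging Face `transformers` library with a GPT-2 checkpoint standing in for the generatively pre-trained transformer the post refers to; none of this is the original release.

```python
# Stage 1 (pre-training) is already done here: loading the checkpoint
# gives a transformer whose weights were fit with a language-modeling
# objective on unlabeled text.
import torch
from transformers import GPT2Tokenizer, GPT2ForSequenceClassification

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token

model = GPT2ForSequenceClassification.from_pretrained("gpt2", num_labels=2)
model.config.pad_token_id = tokenizer.pad_token_id

# Stage 2 (supervised fine-tuning): train the whole network plus a small
# classification head on a labeled task. Tiny toy data for illustration.
texts = ["the movie was great", "the movie was terrible"]
labels = torch.tensor([1, 0])
batch = tokenizer(texts, return_tensors="pt", padding=True)

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
model.train()
for step in range(3):
    out = model(**batch, labels=labels)
    out.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    print(f"step {step}: loss {out.loss.item():.3f}")
```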