
Better language models and their implications

We’ve trained a large-scale unsupervised language model which generates coherent paragraphs of text, achieves state-of-the-art performance on many language modeling benchmarks, and performs rudimentary reading comprehension, machine translation, question answering, and summarization—all without task-specific training.

OpenAI Blog · Technology · 7 years ago

12 technologies expected to invade the homes of the future - Al Jazeera Net

Arabic Tech - Google · Technology · 7 years ago

Computational limitations in robust classification and win-win results

OpenAI Blog · Technology · 7 years ago

Meet Scout - About Amazon

Amazon News · Technology · 7 years ago

OpenAI Fellows Summer 2018: Final projects

Our first cohort of OpenAI Fellows has concluded, with each Fellow going from a machine learning beginner to core OpenAI contributor in the course of a 6-month apprenticeship.

OpenAI Blog · Technology · 7 years ago

How AI training scales

We’ve discovered that the gradient noise scale, a simple statistical metric, predicts the parallelizability of neural network training on a wide range of tasks. Since complex tasks tend to have noisier gradients, increasingly large batch sizes are likely to become useful in the future, removing one potential limit to further growth of AI systems. More broadly, these results show that neural network training need not be considered a mysterious art, but can be rigorized and systematized.

OpenAI Blog · Technology · 7 years ago
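The post's full estimator compares gradients measured at two batch sizes; as a rough illustration only, the "simple" noise scale B_simple = tr(Σ)/|G|² can be estimated directly from per-example gradients. The function name, shapes, and data below are illustrative, not from the post:

```python
import numpy as np

def simple_noise_scale(per_example_grads):
    """Rough estimate of the 'simple' gradient noise scale
    B_simple = tr(Sigma) / |G|^2, where G is the mean gradient and
    Sigma the per-example gradient covariance.
    per_example_grads: array of shape (num_examples, num_params)."""
    g_mean = per_example_grads.mean(axis=0)        # G
    g_var = per_example_grads.var(axis=0, ddof=1)  # diag(Sigma)
    return g_var.sum() / (g_mean @ g_mean)

# Noisier gradients (as on more complex tasks) give a larger noise
# scale, suggesting a larger useful batch size.
rng = np.random.default_rng(0)
quiet = rng.normal([1.0, -1.0], 0.1, size=(1000, 2))
noisy = rng.normal([1.0, -1.0], 1.0, size=(1000, 2))
print(simple_noise_scale(quiet) < simple_noise_scale(noisy))  # True
```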

Adventure Awaits! No time to waste!

Assalamu Alaykum Husna Travellers! Welcome to the Husna Family! We’re so happy to have you! My name is Sobia Hussain and I’ll be your exclusive Husna Travel & Excursion Representative. You’ll be hearing from me weekly with all the exciting adventures and exploration awaiting you in the Bahamas,...

Husna · Technology · 7 years ago

Quantifying generalization in reinforcement learning

We’re releasing CoinRun, a training environment which provides a metric for an agent’s ability to transfer its experience to novel situations and has already helped clarify a longstanding puzzle in reinforcement learning. CoinRun strikes a desirable balance in complexity: the environment is simpler than traditional platformer games like Sonic the Hedgehog but still poses a worthy generalization challenge for state-of-the-art algorithms.

OpenAI Blog · Technology · 7 years ago

Spinning Up in Deep RL

We’re releasing Spinning Up in Deep RL, an educational resource designed to let anyone learn to become a skilled practitioner in deep reinforcement learning. Spinning Up consists of crystal-clear examples of RL code, educational exercises, documentation, and tutorials.

OpenAI Blog · Technology · 7 years ago

Learning concepts with energy functions

We’ve developed an energy-based model that can quickly learn to identify and generate instances of concepts, such as near, above, between, closest, and furthest, expressed as sets of 2d points. Our model learns these concepts after only five demonstrations. We also show cross-domain transfer: we use concepts learned in a 2d particle environment to solve tasks on a 3-dimensional physics-based robot.

OpenAI Blog · Technology · 7 years ago
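The post's energy function is learned by a neural network; as a hand-rolled stand-in, the concept "near" can be written as a fixed energy over a pair of 2d points, with identification as thresholding the energy and generation as gradient descent on it. Everything here (the energy, the threshold, the optimizer) is illustrative:

```python
import numpy as np

def energy_near(points):
    """Hand-written energy for the concept 'near': low when the two
    2d points (rows of a (2, 2) array) are close together."""
    a, b = points
    return float(np.sum((a - b) ** 2))

def numerical_grad(f, x, eps=1e-4):
    """Central-difference gradient of f at x (for illustration only)."""
    g = np.zeros_like(x)
    flat, gflat = x.ravel(), g.ravel()
    for i in range(flat.size):
        old = flat[i]
        flat[i] = old + eps; fp = f(x)
        flat[i] = old - eps; fm = f(x)
        flat[i] = old
        gflat[i] = (fp - fm) / (2 * eps)
    return g

def generate(energy, x0, lr=0.1, steps=100):
    """Generate an instance of a concept by descending its energy."""
    x = x0.astype(float).copy()
    for _ in range(steps):
        x -= lr * numerical_grad(energy, x)
    return x

far = np.array([[0.0, 0.0], [3.0, 4.0]])
made_near = generate(energy_near, far)
# Identification: a threshold on the energy separates near from far pairs.
print(energy_near(made_near) < 1e-3 < energy_near(far))  # True
```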

Plan online, learn offline: Efficient learning and exploration via model-based control

OpenAI Blog · Technology · 7 years ago

Reinforcement learning with prediction-based rewards

We’ve developed Random Network Distillation (RND), a prediction-based method for encouraging reinforcement learning agents to explore their environments through curiosity, which for the first time exceeds average human performance on Montezuma’s Revenge.

OpenAI Blog · Technology · 7 years ago
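The core of RND is that the intrinsic reward is the error of a trained predictor network at matching a frozen, randomly initialized target network: novel observations have high error, familiar ones low. A minimal single-layer sketch, with made-up sizes and learning rate (the real method uses deep networks inside a full RL loop):

```python
import numpy as np

rng = np.random.default_rng(0)
OBS_DIM, FEAT_DIM = 4, 8

# Target network: randomly initialized, then frozen forever.
W_target = rng.normal(size=(OBS_DIM, FEAT_DIM))
# Predictor network: trained to imitate the target's features.
W_pred = np.zeros((OBS_DIM, FEAT_DIM))

def feats(W, obs):
    return np.tanh(obs @ W)

def intrinsic_reward(obs):
    """Predictor-vs-target error: high on rarely seen observations."""
    err = feats(W_target, obs) - feats(W_pred, obs)
    return float(np.mean(err ** 2))

def train_predictor(obs, lr=0.1):
    """One gradient step on the predictor's MSE toward the target."""
    global W_pred
    t = feats(W_target, obs)
    p = feats(W_pred, obs)
    # d/dW of mean((p - t)^2) with p = tanh(obs @ W)
    W_pred -= lr * np.outer(obs, (p - t) * (1 - p ** 2)) * (2 / FEAT_DIM)

obs = rng.normal(size=OBS_DIM)
before = intrinsic_reward(obs)
for _ in range(500):
    train_predictor(obs)
after = intrinsic_reward(obs)
print(after < before)  # True: familiar states stop being rewarding
```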

Learning complex goals with iterated amplification

We’re proposing an AI safety technique called iterated amplification that lets us specify complicated behaviors and goals that are beyond human scale, by demonstrating how to decompose a task into simpler sub-tasks, rather than by providing labeled data or a reward function. Although this idea is in its very early stages and we have only completed experiments on simple toy algorithmic domains, we’ve decided to present it in its preliminary state because we think it could prove to be a scalable a...

OpenAI Blog · Technology · 7 years ago
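Stripped of the learning component, the decompose-solve-combine skeleton the post describes looks like ordinary recursion. This is a toy sketch, not the paper's algorithm — in the real scheme a model is trained to imitate the decomposed solutions, and the decompositions come from humans:

```python
def amplify(task, solve_base, decompose, combine):
    """Answer a task by decomposing it into sub-tasks, solving each
    recursively, and combining the sub-answers. The overseer only
    ever supplies decompose/combine for small, human-scale steps."""
    subtasks = decompose(task)
    if not subtasks:  # base case: simple enough to answer directly
        return solve_base(task)
    return combine([amplify(t, solve_base, decompose, combine)
                    for t in subtasks])

# Toy task: sum a long list without ever summing more than two values.
total = amplify(
    list(range(1, 11)),
    solve_base=lambda xs: xs[0],
    decompose=lambda xs: [xs[:len(xs) // 2], xs[len(xs) // 2:]]
                         if len(xs) > 1 else [],
    combine=sum,
)
print(total)  # 55
```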