DeepSeek V4 Shows That The Next AI Race Is About Efficiency

Forbes Business · Apr 26, 2026 · 501 views
Innovation | AI

By Gerui Wang, Contributor. Dr. Gerui Wang writes about AI, society, media, and culture. Apr 26, 2026, 11:15am EDT

(Photo: a woman holds a cell phone displaying the DeepSeek logo, Edmonton, Canada, January 28, 2025. Artur Widak/NurPhoto via Getty Images)

DeepSeek V4, the long-awaited update from DeepSeek, arrives at a fiercely competitive moment, just after OpenAI's GPT 5.5 and Anthropic's Opus 4.7 launched one after the other. The AI model race has clearly reached a new level. As a committed believer in open-source tools, DeepSeek impresses developers with cost-efficiency rather than raw scale.

The preview release includes two Mixture-of-Experts models with a one-million-token context window: DeepSeek-V4-Pro, with 1.6 trillion total parameters and 49 billion activated parameters, and DeepSeek-V4-Flash, with 284 billion total parameters and 13 billion activated parameters. Long-context agents, coding assistants, research tools and enterprise copilots all face the same bottleneck: every newly generated token may need to refer back to a growing history of documents, code, tool calls and intermediate reasoning. DeepSeek's technical report demonstrates that the V4 models address this problem through architectural compression rather than simply asking users to pay for more compute.

The Core Innovation: Compressing Memory Without Losing Reasoning

DeepSeek V4's most important architectural change is a hybrid attention design that combines Compressed Sparse Attention, or CSA, with Heavily Compressed Attention, or HCA.
This means the model does not store and scan every previous token in the same expensive way. ...
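To see why cached attention history is the bottleneck the article describes, a back-of-the-envelope sketch helps: a standard key-value cache grows linearly with context length, so a one-million-token window becomes very expensive per request. All numbers below (layer count, head count, head dimension, and the 8x compression ratio) are illustrative assumptions, not DeepSeek V4's actual configuration.

```python
def kv_cache_bytes(tokens, layers, kv_heads, head_dim, bytes_per_val=2):
    """Approximate KV-cache size: keys and values (factor of 2) for every
    token at every layer, stored in fp16 (2 bytes) by default."""
    return 2 * tokens * layers * kv_heads * head_dim * bytes_per_val

# Hypothetical model shape: 60 layers, 8 KV heads, head dimension 128.
full = kv_cache_bytes(tokens=1_000_000, layers=60, kv_heads=8, head_dim=128)
print(f"uncompressed KV cache: {full / 1e9:.1f} GB")   # ~245.8 GB per request

# If compressed attention shrinks the cached history by, say, 8x
# (a made-up ratio), the same window needs far less memory.
compressed = full / 8
print(f"with 8x compression:   {compressed / 1e9:.1f} GB")
```

Numbers of this magnitude explain why compressing the attention memory, rather than buying more accelerator RAM, is the lever the article highlights.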
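The "total vs. activated parameters" figures quoted above come from the Mixture-of-Experts design: a router sends each token to only a few experts, so only a small fraction of the total weights do work per token. The toy layer below illustrates that idea; the expert count, top-k value, and dimensions are made up for demonstration and are not DeepSeek V4's real architecture or routing method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tiny MoE layer: 8 experts, each token routed to its top 2.
d_model, n_experts, top_k = 64, 8, 2
experts = [rng.standard_normal((d_model, d_model)) * 0.02 for _ in range(n_experts)]
router = rng.standard_normal((d_model, n_experts)) * 0.02

def moe_forward(x):
    """Route token x to its top-k experts; only their weights are touched."""
    logits = x @ router
    chosen = np.argsort(logits)[-top_k:]      # indices of the top-k experts
    weights = np.exp(logits[chosen])
    weights /= weights.sum()                  # softmax over the chosen experts
    return sum(w * (x @ experts[i]) for w, i in zip(weights, chosen))

x = rng.standard_normal(d_model)
y = moe_forward(x)

total_params = n_experts * d_model * d_model   # all expert weights stored
active_params = top_k * d_model * d_model      # weights used for this token
print(f"total: {total_params}, active per token: {active_params}")
```

Here only top_k / n_experts of the expert weights (a quarter, in this toy setup) run for each token, which is how a 1.6-trillion-parameter model can activate just 49 billion parameters per token.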