- Bay Area Times
OpenAI claims China’s DeepSeek used its model for training

Top stories today:
- OpenAI claims China’s DeepSeek used its model for training
- Alibaba brings Qwen 2.5-Max to compete with DeepSeek V3
- ChatGPT Gov launched for U.S. government agencies
- ASML Q4 revenue +28% YoY, net income +31%, beating expectations
- Fed expected to hold rates steady, assess Trump’s impact on economy
0. Data and calendar

All values as of 6 AM ET / 3 AM PT, except the S&P 500 and NASDAQ close (4 PM ET / 1 PM PT).

All times are ET.
1. OpenAI claims China’s DeepSeek used its model for training

OpenAI said it has seen some evidence of “distillation,” amid DeepSeek’s growing popularity in the U.S.
Tue.: White House AI czar David Sacks alleged “substantial evidence” that DeepSeek “distilled the knowledge out of OpenAI models.”
Microsoft is reportedly probing whether DeepSeek improperly obtained OpenAI’s data.
Experts said it is common practice among startups and academics to use outputs from commercial LLMs to train other models.
Separately, the U.S. Navy has banned DeepSeek over “potential security and ethical concerns.”
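In broad strokes, “distillation” here means collecting a larger model’s outputs and using them as fine-tuning data for a smaller “student” model. A minimal sketch of the data-collection step, with purely illustrative names (`teacher_model`, the JSONL format) standing in for a real commercial API:

```python
import json

def teacher_model(prompt: str) -> str:
    # Hypothetical stand-in for a call to a commercial LLM API;
    # in practice this would hit a provider's chat/completions endpoint.
    return f"Teacher answer to: {prompt}"

def build_distillation_dataset(prompts):
    """Collect (prompt, teacher output) pairs to fine-tune a smaller student model."""
    return [{"prompt": p, "completion": teacher_model(p)} for p in prompts]

prompts = ["What is a Mixture-of-Experts model?", "Explain model distillation."]
dataset = build_distillation_dataset(prompts)

# Serialize as JSONL, a common fine-tuning input format.
jsonl = "\n".join(json.dumps(record) for record in dataset)
```

Whether any given use of such outputs violates a provider’s terms of service is the contested question in this story, not the technique itself.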
Imagine toys that talk back and build a bond with you. Elf Labs is creating a new era of AI-powered interactive toys — dolls, 3D avatars, and AR characters that can communicate, learn, and evolve alongside their fans. From Cinderella to Peter Pan, these beloved icons are being brought to life in ways we’ve never seen before.
But this is just the beginning. Elf Labs is targeting the $350B licensed merchandise space, $200B gaming sector, and $1.15T Media industry. Backed by 100+ trademarks, 12 patents, and 6 major franchises (2 already funded), they’re expanding into 30 global markets to turn nostalgia into profit.
A limited number of investors can cash in on 10% bonus shares, but time’s running out. Invest before midnight to secure your stake in this groundbreaking opportunity.
* Disclosure: This is a paid advertisement for Elf Labs’ Regulation CF offering. Please read the offering circular at elflabs.com
3. Alibaba releases Qwen 2.5-Max, new large MoE LLM competing with DeepSeek V3
Qwen 2.5-Max outperformed the open-weight DeepSeek V3, Meta’s Llama 3.1-405B, and OpenAI’s GPT-4o across benchmarks shared by Alibaba:
Trained on over 20T tokens.
Based on the Mixture-of-Experts (MoE) architecture, which activates only a subset of its parameters (“experts”) per token and therefore requires less compute than a dense LLM of similar size.
Available through Alibaba Cloud’s API services.
Like other Chinese AI models, Qwen 2.5-Max censors outputs critical of the Chinese government.

Qwen Chat.
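The MoE point above can be sketched in a few lines: a router scores all experts for each token, but only the top-k experts actually run, which is why per-token compute stays low even as total parameter count grows. All shapes and names below are illustrative toy values, not Qwen 2.5-Max’s actual configuration:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_experts, top_k = 8, 4, 2  # toy dimensions, not real model sizes

# Toy expert weight matrices and a router projection.
experts = [rng.normal(size=(d, d)) for _ in range(n_experts)]
router = rng.normal(size=(d, n_experts))

def moe_layer(x):
    """Route a token to its top-k experts; the remaining experts are skipped entirely."""
    logits = x @ router
    idx = np.argsort(logits)[-top_k:]                # indices of the top-k experts
    w = np.exp(logits[idx]) / np.exp(logits[idx]).sum()  # softmax over selected experts
    return sum(wi * (x @ experts[i]) for wi, i in zip(w, idx))

y = moe_layer(rng.normal(size=d))
```

Here only 2 of the 4 experts execute per token; a dense layer of the same total parameter count would run all of them every time.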