Welcome, AI & Semiconductor Investors.
Nvidia’s upcoming B300 & GB300 GPUs promise a massive leap in memory and performance—just in time to shake up the entire AI supply chain.
But how do a Chinese open-source LLM breakthrough and a shifting memory market tie it all together, and what does it mean for investors right now? Let's find out...
What The Chip Happened?
🎄 B300 & GB300: Analysts Share Nvidia’s Next GPU Leap
🔓 DeepSeek-AI’s Low-Cost Open-Source LLM: A CapEx Disruptor?
🌩️ Micron’s Memory Storm: DRAM Downturn, But HBM Could Be a Silver Lining
Read time: 6 minutes
Nvidia (NASDAQ: NVDA)
🎄 B300 & GB300: Analysts Share Nvidia’s Next GPU Leap
What The Chip: While Nvidia hasn’t publicly revealed these GPUs, semiconductor research group SemiAnalysis just released fresh details about the B300 and GB300 platforms. The leaks point to major performance boosts and a sweeping rework of Nvidia’s supply chain model.
Details:
⚡ 50% More Firepower: The B300 GPU on TSMC’s 4NP node promises up to 50% higher FLOPS than the B200, thanks partly to raising the TDP to 1.4kW (from 1.2kW).
🎁 Bigger Memory, Bigger Gains: Both B300 and GB300 move to 12-Hi HBM3E stacks (up to 288GB), boosting reasoning model inference and enabling longer sequences.
♻️ Supply Chain Shakeup: SemiAnalysis reports that Nvidia will no longer provide a complete GPU board; only the SXM “Puck” module, Grace CPU, and a new HMC from Axiado. This lets hyperscalers customize boards further—while complicating final design validation.
⛓️ Winners & Losers: Nvidia’s board partners like Wistron lose some share, while Foxconn (FII) gains by assembling the new SXM Puck. VRM suppliers could get reshuffled; some may see a loss in business, while newcomers scoop up share.
📊 Margin Watch: As more components are sourced outside Nvidia’s direct umbrella, Nvidia captures less of each system’s bill of materials, so gross margins could shift.
⚙️ Deployment Timelines: SemiAnalysis indicates that design complexities will accelerate some hyperscalers’ roadmaps while slowing others. Microsoft, for instance, is said to be balancing GB200 orders before fully embracing the B300/GB300 wave.
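The leaked B300 figures invite a quick sanity check: how much of the claimed 50% FLOPS gain is genuine efficiency versus simply burning more power? A back-of-envelope sketch on the reported numbers (illustrative math only, not an official Nvidia spec):

```python
# Leaked figures: B300 claims +50% FLOPS over B200, with TDP rising
# from 1.2 kW to 1.4 kW. Split the gain into power vs. efficiency.
b200_tdp_kw = 1.2
b300_tdp_kw = 1.4
flops_uplift = 1.50  # B300 FLOPS relative to B200, per the leak

power_uplift = b300_tdp_kw / b200_tdp_kw          # ~1.17x more power
perf_per_watt_gain = flops_uplift / power_uplift  # ~1.29x

print(f"Power draw rises {power_uplift:.2f}x")
print(f"FLOPS per watt improves {perf_per_watt_gain:.2f}x")
```

In other words, roughly a third of the headline uplift would come from the higher power budget, with the rest from architectural and memory improvements.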
Why AI/Semiconductor Investors Should Care: SemiAnalysis’s data suggests that Nvidia’s push into large-memory inference solutions remains a top driver of advanced AI workloads. Investors should watch how this new modular supply chain unfolds—especially its impact on gross margins and emerging winners in the GPU ecosystem. With hyperscalers demanding more flexibility, suppliers who adapt quickly stand to gain in 2025 and beyond.
Moore Semiconductor Investing
📗 Unlock Q3 Semiconductor Earnings --- 50% OFF
What The Chip: Get a front-row seat to the financials shaping the semiconductor industry. This continuously updated e-book by Jose Najarro distills the latest quarterly insights—from wafer production trends to AI chip breakthroughs—into a single comprehensive resource.
Details:
🔵 Dynamic Updates: Starts with giants like TSMC and ASML and expands as Q3 2024 earnings roll in, with over 30 companies already covered.
🔵 Huge Value for Half the Price: For a limited time, the e-book is discounted from $49.07 to $24.54 USD, a robust market guide at half price.
🔵 Expert Analysis: Curated by Jose Najarro (Master’s in Electrical Engineering, contributor at The Motley Fool), delivering reliable, accessible breakdowns.
🔵 Key Metrics & Trends: Follow critical financial indicators, market shifts, and executive comments shaping the sector’s trajectory.
🔵 Broad Coverage: From traditional chipmakers to cutting-edge AI semiconductor players, get the full picture as it emerges.
Why AI/Semiconductor Investors Should Care: This evolving earnings handbook gives you a strategic edge. Understanding quarterly earnings data is crucial for gauging industry health, discovering new growth leaders, and aligning your investment approach with emerging technological waves.
Disclaimer: For educational and informational purposes only. Not financial advice. Consult with a qualified professional before making any investment decisions.
DeepSeek-AI (Unlisted)
🔓 DeepSeek-AI’s Low-Cost Open-Source LLM: A CapEx Disruptor?
What The Chip: A Chinese LLM company, DeepSeek-AI, just unveiled an impressive 671B-parameter open-source model—trained on cheaper, export-approved Nvidia GPUs. The big surprise? It delivers performance on par with high-end solutions, potentially rewriting CapEx projections for AI training.
Details:
⚡ Minimal Budget, Major Impact: Training reportedly cost around $5.6M on Nvidia’s lower-specced H800 GPUs, far cheaper than comparable runs on unrestricted high-end U.S. hardware.
🧩 Mixture-of-Experts (MoE): DeepSeek-V3 uses an MoE approach, activating only a fraction of parameters per token, slashing real-time hardware needs.
🌐 Open-Source Momentum: The release closes the gap between closed-source giants and community-driven projects, reinforcing the narrative that major LLM breakthroughs can come from outside the U.S. market.
⏱️ Fast & Efficient: DeepSeek-AI leveraged advanced pipeline parallelism, memory optimizations, and FP8 mixed-precision—pushing hardware utilization to new heights.
🏗️ Long Context Windows: With up to 128K tokens of context, the model handles tasks like code interpretation and multi-step math across extended inputs.
💡 Shifting CapEx Outlook?: This achievement raises the question: if advanced LLM performance can be attained on cheaper, export-approved chips, will data centers need the priciest GPUs for certain workloads?
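The MoE point above is worth making concrete. DeepSeek-V3 has 671B total parameters but activates roughly 37B per token (the figure from its technical report), and per-token compute scales with active rather than total parameters. A minimal sketch of that arithmetic:

```python
# Illustrative MoE arithmetic for DeepSeek-V3: only a small slice of
# the 671B parameters fires on each token, so per-token compute is a
# fraction of what an equally large dense model would require.
total_params_b = 671   # billions of parameters, total
active_params_b = 37   # billions activated per token (reported figure)

active_fraction = active_params_b / total_params_b
print(f"Only {active_fraction:.1%} of parameters fire per token")
```

That roughly 5–6% activation rate is the core reason a model this large can be trained and served on cheaper hardware.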
Why AI/Semiconductor Investors Should Care: DeepSeek-AI’s move spotlights a potential inflection point where cost-efficient GPU deployments deliver powerful AI models. Investors need to watch for changing spending patterns at hyperscalers and startups alike—especially if competitive performance can come from hardware once deemed “inferior.” This could reshape supply-chain alignments and margin considerations in the near future.
Micron Technology (NASDAQ: MU)
🌩️ Micron’s Memory Storm: DRAM Downturn, But HBM Could Be a Silver Lining
What The Chip: Amid weak consumer DRAM demand, Micron and others see memory softness persisting well into 1H25. The silver lining: HBM (High Bandwidth Memory) is built from DRAM dies, so surging HBM demand could help rationalize overall supply.
Details:
🔻 Prolonged DRAM Weakness: Silicon Motion and Micron both forecast tepid DRAM demand through early 2025, citing weaker PC and smartphone sales.
🌱 HBM’s Hidden Contribution: Producing an HBM bit consumes roughly 3x the wafer capacity of a standard DRAM bit, so rising HBM output could tighten overall DRAM supply even while consumer demand lags. The catch: that tailwind only helps memory makers that can actually ship HBM, and at the moment the only two are SK hynix and Micron.
🚀 Chinese Memory Rise: CXMT’s rapid DDR5 ramp (from 2% to 10% market share by year-end 2024) signals fiercer competition for leaders like Samsung, Micron, and SK hynix.
🥊 Price War Looms: Chinese makers undercut rivals by 10–20%, per Economic Daily News—turning the domestic Chinese market into a battleground where export restrictions might matter less.
📉 NAND Outlook: Silicon Motion sees NAND demand lagging until at least mid-2025, with a fresh growth cycle possibly kicking off in 2026.
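The supply effect of the roughly 3x trade ratio cited above can be sketched with hypothetical numbers. The 20% wafer share below is a made-up figure purely for illustration; only the 3x ratio comes from the text:

```python
# Hypothetical illustration: each HBM bit is assumed to consume the
# wafer capacity of ~3 commodity DRAM bits (larger dies, lower yields).
trade_ratio = 3.0        # commodity-DRAM-bit equivalents per HBM bit
hbm_wafer_share = 0.20   # hypothetical share of DRAM wafers on HBM

# Diverting 20% of wafer capacity removes 20% of commodity bit supply
# but returns only ~6.7% of that supply in HBM-bit terms.
commodity_supply_lost = hbm_wafer_share
hbm_bits_equiv = hbm_wafer_share / trade_ratio

print(f"Commodity DRAM supply diverted: {commodity_supply_lost:.0%}")
print(f"HBM output in DRAM-bit terms:   {hbm_bits_equiv:.1%}")
```

This asymmetry is why even modest HBM growth can tighten the commodity DRAM market disproportionately.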
Why AI/Semiconductor Investors Should Care: Although the DRAM and NAND slump could drag earnings near-term, HBM uptake in AI applications may act as a demand lever, helping balance the supply equation. As Chinese memory makers gain ground, watch pricing pressure and potential supply chain shifts for Micron, SK hynix, Samsung, and emerging players.
Youtube Channel - Jose Najarro Stocks
Semiconductor Q3 Earnings Book — 50% OFF
X Account - @_Josenajarro
Disclaimer: This article is intended for educational and informational purposes only and should not be construed as investment advice. Always conduct your own research and consult with a qualified financial advisor before making any investment decisions.