[Notes] Jensen Huang's BG2 interview, Sep 2025, part 1: AI industry TAM (top-down estimate)

NVIDIA 2024 revenue: USD 60.9B

40% of NVIDIA's revenue comes from inference

Inference demand will grow by a factor of 1B (10^9)

AI progression: Perception AI → Generative AI → Agentic AI → Physical AI

AI Scaling Laws:

1. Pre-training Scaling

2. Post-training Scaling

  Post-training is basically like AI practicing a skill until it gets it right. And so it tries a whole bunch of different ways. And in order to do that, you’ve got to do inference.

3. Test-time Scaling (“Long Thinking”)

Pre-training scaling has already played out; the industry is now going through post-training scaling and test-time scaling.

In September 2025, NVIDIA announced it will invest USD 100B in OpenAI over the coming years (note: presumably referring to the Stargate project)

OpenAI will be the next multi-trillion dollar hyper-scale company. (note: other hyper-scale companies: Google, Meta, Microsoft…)

NVIDIA's partnership plans with OpenAI:

1. the build-out of Microsoft Azure (hundreds of billions of dollars of work)

2. the OCI (Oracle Cloud Infrastructure) build-out (5-7 gigawatts) (note: unclear why the target is stated in watts)

3. CoreWeave

4. Deploy 10 gigawatts of NVIDIA systems (expected to bring NVIDIA USD 400B in revenue)

OpenAI is going through two exponentials:

1. the usage exponential (note: user count is growing exponentially)

2. computational exponential of every use (it’s now thinking before it answers) (note: because of the "think longer" feature, each use requires many rounds of inference, and this inference count is expected to grow exponentially)
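The two exponentials compound: total inference compute is (number of users) × (compute per use), so when both grow exponentially the product grows even faster. A toy sketch, where all growth rates are illustrative assumptions rather than figures from the interview:

```python
# Toy model: total inference compute when both user count and
# compute-per-use grow exponentially. The 2x-per-period growth
# rates are illustrative assumptions, not interview figures.

def total_compute(periods, user_growth=2.0, compute_growth=2.0):
    """Relative total compute after `periods` growth periods."""
    users = user_growth ** periods
    per_use = compute_growth ** periods
    return users * per_use

# Two independent 2x exponentials compound into 4x per period.
print(total_compute(1))  # 4.0
print(total_compute(4))  # 256.0
```

The point of the sketch: even modest per-axis growth multiplies into a much steeper curve for total compute demand.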

Jensen's argument for the scale of the AI factory build-out (TAM: USD 5 trillion; note: this is the top-down estimate, a bottom-up estimate comes later. USD 5 trillion is about 82× NVIDIA's 2024 revenue, and of course that does not mean NVIDIA captures all of it):

1. Moore’s Law is dead → general purpose computing is over. The future is accelerated computing and AI computing. → data centers worldwide need to be upgraded (NVIDIA's partnership with Intel reflects exactly this realization: general purpose computing needs to be fused with accelerated computing)

2. the first use case of AI is actually already everywhere (it’s in search, recommender engines…). Meta/Google/ByteDance/Amazon are shifting from CPUs to GPUs for this AI work (hundreds of billions of dollars).

3. Human intelligence represents 55%-65% of the world’s GDP (~USD 50 trillion) → augmented by USD 10 trillion of AI → at a 50% gross margin, that implies a USD 5 trillion AI factory build-out

  Take NVIDIA as an example: give an employee earning USD 100K/year USD 10K worth of AI, and their productivity rises 2-3×. → the number of chips we’re building is growing → growing fast as a company → hire more people → profitability is greater
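The top-down chain above reduces to simple arithmetic. A small sanity check (the step "half of the USD 10T flows to the AI factory as cost, given a 50% gross margin" is my reading of the margin argument, not stated verbatim in the interview):

```python
# Sanity-checking the top-down TAM chain (all figures in USD trillions).
# Assumption (mine): at 50% gross margin, half of the USD 10T of AI
# revenue is cost of goods sold, i.e. the AI factory build-out.

human_intelligence_gdp = 50.0   # ~55-65% of world GDP, per the interview
ai_augmentation = 10.0          # AI layer augmenting that labor pool
gross_margin = 0.50
ai_factory_tam = ai_augmentation * (1.0 - gross_margin)

nvda_2024_revenue = 0.0609      # USD 60.9B, from the note above
multiple = ai_factory_tam / nvda_2024_revenue

print(ai_factory_tam)           # 5.0 -> the USD 5 trillion TAM
print(round(multiple))          # 82  -> matches the "82x" note above
```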

Eddie Wu at Alibaba:

1. between now and the end of the decade, Alibaba is going to increase its data center power by 10x.

2. token generation is doubling every few months. → performance per watt has to keep improving exponentially
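These two statements can be combined into a back-of-envelope calculation: if tokens double every few months but total power only grows 10× by the end of the decade, perf per watt must close the gap. A sketch (the 6-month doubling period is my assumption for "every few months"):

```python
# Back-of-envelope: required perf-per-watt improvement when token
# generation doubles every `doubling_months` while total data center
# power only grows `power_growth`x over the same window.
# The 6-month doubling period is an assumption, not an interview figure.

def required_perf_per_watt(years, doubling_months=6, power_growth=10):
    token_growth = 2 ** (years * 12 / doubling_months)
    return token_growth / power_growth

# From now to the end of the decade (~5 years), power up 10x per Eddie Wu:
print(required_perf_per_watt(5))  # 102.4 -> roughly 100x perf/watt needed
```

Under these assumptions, chip efficiency, not power, carries most of the exponential, which is consistent with the "perf per watt" emphasis above.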

Watt is basically revenue in the future. (note: unclear why; this reads like a logical leap. Performance per watt clearly matters, but what chain of reasoning ends at "watt is revenue"?) (My guess: the AI factory/GPUs are a fixed cost while electricity is a variable cost; once an AI factory has been running long enough, the electricity bill dwarfs the fixed cost and becomes the dominant factor.)

Source: 黃仁勳重磅訪談:摩爾定律已死,OpenAI將是下個「數兆美元巨獸」 (Jensen Huang interview: Moore's Law is dead; OpenAI will be the next "multi-trillion-dollar behemoth")
https://www.youtube.com/watch?v=mA85rfzvPzQ
0:00 - 17:18