// supply chain — upstream to downstream
T1
Raw Materials
MODERATE
Silicon Wafers
Shin-Etsu · Sumco · Siltronic
300mm diameter
Japan dominant supply
Concentration: 60%
Rare Gases
Neon · Argon · Krypton
Ukraine 60%+ neon supply pre-2022
Concentration: 75%
refining & purification
T2
Lithography Equipment
CRITICAL
ASML
EUV / DUV Lithography
Only EUV manufacturer
€400M per machine
NL export controlled
EUV monopoly: 100%
wafer exposure & patterning
T3
Semiconductor Foundry
CRITICAL
TSMC
Taiwan · 2nm → 5nm
92% advanced node GPUs
Taiwan strait risk
Arizona fab 2025+
Advanced node share: 92%
Samsung Foundry
Korea · 3nm → 4nm
AMD + Qualcomm client
Yield challenges at 3nm
Korea geopolitical risk
Advanced node share: 15%
die packaging & testing
T4
GPU Design (Fabless)
HIGH
NVIDIA
H100 · H200 · B200 · GB200
~90% AI GPU revenue
CUDA ecosystem lock-in
Export restricted H100
AI GPU market share: 88%
AMD
MI300X · RX 7900 XTX
ROCm improving fast
MI300X competitive
No CUDA dependency
AI GPU market share: 10%
Intel Arc / Gaudi
Arc A770 · Gaudi 3
Gaudi 3 = H100 class
Arc A770 = 16GB VRAM
oneAPI not CUDA
AI GPU market share: 3%
board manufacture · CoWoS packaging
T5
HBM Memory (stacked)
CRITICAL
SK Hynix
HBM3E — Primary H100 supplier
H100/H200 exclusive HBM
~50% global HBM share
Korea concentration
HBM market share: 50%
Samsung
HBM3 — qualification delays
Yield issues 2023–24
HBM4 race ongoing
Same jurisdiction risk
HBM market share: 35%
Micron
HBM3E — US domestic
US manufactured
Ramping HBM3E 2024
Lower geopolitical risk
HBM market share: 15%
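The HBM share figures above (SK Hynix 50%, Samsung 35%, Micron 15%) can be summarized with a Herfindahl–Hirschman index (HHI), the standard market-concentration measure; values above 2,500 on the 0–10,000 scale are conventionally treated as highly concentrated. A minimal sketch, using only the shares listed in this tier:

```python
def hhi(shares_pct):
    """Herfindahl-Hirschman index: sum of squared percentage shares (0-10,000 scale)."""
    return sum(s * s for s in shares_pct)

# HBM shares from the listing: SK Hynix 50, Samsung 35, Micron 15
print(hhi([50, 35, 15]))  # 3950 — well above the 2,500 "highly concentrated" mark
```

By the same measure, the foundry tier (92/15) and GPU-design tier (88/10/3) score even higher, which is consistent with their CRITICAL/HIGH risk labels.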
system integration · thermal
T6
System Integrator / OEM
HIGH
Dell / HP / Lenovo
Rack Servers · GPU Nodes
R750xa · DL380 Gen11
12–26 week lead times
GPU allocation queues
Enterprise server share: 65%
Supermicro
GPU Servers · Direct-liquid
H100 cluster builds
Liquid cooling native
Shorter lead times
GPU server share: 20%
procurement · deployment
T7
End Deployment
HIGH
Hyperscalers
AWS · Azure · GCP · OCI
Absorb ~60% H100 alloc
Reserved capacity lock
Everyone else waits
H100 allocation: 60%
GPU Cloud
CoreWeave · Lambda · Vast.ai
Spot pricing viable
Vast.ai peer market
No commitment needed
H100 allocation: 25%
Edge / Fed. Nodes
Dell R740 · RPi · Local
CPU+quant inference
No H100 dependency
Cosmogenic model
Current AI GPU use: ~10%
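The seven tiers above form a simple chain-of-dependencies data model, and the chokepoints the listing highlights (ASML's EUV monopoly, TSMC's advanced-node share, NVIDIA's AI GPU share) fall out of a single-supplier-dominance check. A minimal sketch; supplier names and share figures come from the listing, while the `Tier`/`Supplier` types and the 80% threshold are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class Supplier:
    name: str
    share: float  # fraction of the tier's market, as listed above

@dataclass
class Tier:
    name: str
    risk: str
    suppliers: list

# A few tiers from the listing; shares are quoted per tier and need not sum to 1.0.
CHAIN = [
    Tier("Lithography Equipment", "CRITICAL", [Supplier("ASML", 1.00)]),
    Tier("Semiconductor Foundry", "CRITICAL",
         [Supplier("TSMC", 0.92), Supplier("Samsung Foundry", 0.15)]),
    Tier("GPU Design", "HIGH",
         [Supplier("NVIDIA", 0.88), Supplier("AMD", 0.10), Supplier("Intel", 0.03)]),
    Tier("HBM Memory", "CRITICAL",
         [Supplier("SK Hynix", 0.50), Supplier("Samsung", 0.35), Supplier("Micron", 0.15)]),
]

def chokepoints(chain, threshold=0.80):
    """Return (tier, supplier) pairs where one supplier meets or exceeds the share threshold."""
    return [(t.name, s.name)
            for t in chain
            for s in t.suppliers
            if s.share >= threshold]
```

Running `chokepoints(CHAIN)` flags ASML, TSMC, and NVIDIA, matching the tiers the listing marks as single points of failure.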