Types of memory

🧠 DRAM & HBM (Working Memory)

Role in AI Compute:

  • DRAM (Dynamic RAM): Main system memory for CPUs/GPUs/accelerators. Holds working datasets, model parameters, intermediate activations, and general compute state while processing.
  • HBM (High-Bandwidth Memory): Stacked DRAM dies placed on-package next to GPU/accelerator dies, typically via a silicon interposer. Delivers far higher throughput per watt than commodity DDR DRAM, which is critical for the large matrix multiplications in modern transformer workloads.
  • In AI contexts, DRAM is about capacity + latency, while HBM is about bandwidth.
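The capacity-versus-bandwidth split can be made concrete with a back-of-the-envelope roofline check. The accelerator figures below (1,000 TFLOP/s compute, 3 TB/s of HBM bandwidth) are illustrative assumptions, not any vendor's spec:

```python
def matmul_intensity(m: int, n: int, k: int, bytes_per_elem: int = 2) -> float:
    """Arithmetic intensity (FLOPs per byte) of an (m,k) @ (k,n) matmul.

    FLOPs: 2*m*n*k (one multiply + one add per inner-product term).
    Bytes moved (ideal case, each matrix touched once): m*k + k*n + m*n elements.
    """
    flops = 2 * m * n * k
    bytes_moved = (m * k + k * n + m * n) * bytes_per_elem
    return flops / bytes_moved

# Illustrative accelerator: 1000 TFLOP/s compute, 3 TB/s HBM (assumed numbers).
ridge = 1000e12 / 3e12  # FLOPs/byte needed to be compute-bound (~333)

big = matmul_intensity(8192, 8192, 8192)  # large training-style matmul
small = matmul_intensity(1, 8192, 8192)   # batch-1 inference (GEMV-like)
print(f"large: {big:.0f} FLOPs/byte, small: {small:.1f} FLOPs/byte, ridge: {ridge:.0f}")
```

A large square matmul lands well above the ridge point (compute-bound), while a batch-1 matrix-vector product lands near 1 FLOP/byte (memory-bound), which is why HBM bandwidth, not peak FLOPs, often limits inference.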

Main Suppliers (Public + Tickers):

  • Micron Technology — MU (NASDAQ) — DRAM & HBM supplier.
  • SK Hynix — 000660.KS (Korea) — DRAM & HBM; one of the largest memory producers globally.
  • Samsung Electronics — 005930.KS (Korea) — #1 in DRAM and DRAM-derived products (including HBM).
  • Nanya Technology — 2408.TW (Taiwan) — DRAM maker (smaller share).

💽 NAND Flash

Role in AI Compute:

  • Primary non-volatile storage for SSDs and persistent buffers.
  • Used for model storage, weight caching, checkpoints, and datasets that don’t fit in RAM and need persistent backing.
  • Helps enable fast I/O to feed data into faster DRAM/HBM layers.
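A quick sizing sketch shows what "feeding the faster layers" demands of storage. The GPU count, sample rate, and sample size below are illustrative assumptions:

```python
def required_io_gbps(samples_per_sec: float, bytes_per_sample: int) -> float:
    """Sustained read bandwidth (GB/s) needed so storage never starves training."""
    return samples_per_sec * bytes_per_sample / 1e9

# Illustrative: 8 GPUs consuming 2000 images/s each, ~150 KB per JPEG (assumed).
need = required_io_gbps(8 * 2000, 150_000)
print(f"need ~{need:.1f} GB/s of sustained reads")  # prints: need ~2.4 GB/s of sustained reads
```

A few GB/s of sustained reads is comfortably within NVMe SSD territory but far beyond what spinning disks deliver, which is why NAND sits between bulk storage and DRAM in AI pipelines.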

Main Suppliers (Public + Tickers):

  • SanDisk — SNDK (NASDAQ) — NAND flash and SSDs (spun off from Western Digital in 2025).
  • Western Digital — WDC (NASDAQ) — NAND flash via joint ventures and SSDs.
  • Micron Technology — MU (NYSE) — Major NAND supplier.
  • Kioxia Holdings — 285A.T (Tokyo) — large NAND supplier, listed on the Tokyo Stock Exchange since late 2024.
  • SK Hynix — 000660.KS (Korea) — NAND products; expanded via its stake in Kioxia (formerly Toshiba Memory) and its acquisition of Intel’s NAND business (Solidigm).
  • Samsung Electronics — 005930.KS (Korea) — Largest NAND flash producer.

💿 HDD (Hard Disk Drive)

Role in AI Compute:

  • Low-cost, high-capacity bulk storage for datasets, archives, logs, backups.
  • Not used for active model training but essential for data lakes, long-term dataset retention, backup copies of models, and cold storage in AI pipelines.
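The economics behind this tiering can be sketched with a simple comparison. The per-terabyte prices below are rough, assumed figures for illustration only; real prices vary widely by capacity and vendor:

```python
# Illustrative (assumed) costs per TB -- not current market prices.
COST_PER_TB = {"NVMe SSD": 80.0, "HDD": 15.0}

dataset_tb = 500  # e.g. a raw training corpus retained in a data lake
for tier, cost in COST_PER_TB.items():
    print(f"{tier}: ${cost * dataset_tb:,.0f} for {dataset_tb} TB")
```

At a multi-fold cost gap per terabyte, cold datasets, logs, and model backups stay on HDD while only the actively read working set is promoted to flash.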

Main Suppliers (Public + Tickers):

  • Seagate Technology — STX (NASDAQ) — One of the largest HDD makers for enterprise & cloud.
  • Western Digital — WDC (NASDAQ) — Major HDD maker alongside Seagate.


⚡ SRAM (Static RAM)

Role:

  • Ultra-fast memory used on-chip in CPUs/GPUs/accelerators for caches, register files, and local buffers.
  • Critical for low-latency access during computation.
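Software exploits this on-chip SRAM through cache blocking: restructuring loops so each step's working set fits in cache instead of streaming from DRAM. A minimal pure-Python sketch of a tiled matrix multiply (the tile size standing in for cache capacity):

```python
def blocked_matmul(A, B, tile=64):
    """Tiled matmul over square list-of-lists matrices.

    Each (tile x tile) block is reused many times while it is hot,
    which is exactly the working set hardware keeps in SRAM caches.
    """
    n = len(A)
    C = [[0.0] * n for _ in range(n)]
    for ii in range(0, n, tile):
        for kk in range(0, n, tile):
            for jj in range(0, n, tile):
                # Inner loops touch only three small tiles of A, B, and C.
                for i in range(ii, min(ii + tile, n)):
                    for k in range(kk, min(kk + tile, n)):
                        a = A[i][k]
                        for j in range(jj, min(jj + tile, n)):
                            C[i][j] += a * B[k][j]
    return C
```

Pure Python won't show the speedup itself, but the loop structure is the same one BLAS libraries and GPU kernels use to keep matrix tiles resident in SRAM.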

Suppliers:

  • SRAM is embedded directly inside logic chips; there are no significant standalone, publicly traded SRAM producers. It is designed in-house by:
    • Nvidia — NVDA (NASDAQ)
    • AMD — AMD (NASDAQ)
    • Intel — INTC (NASDAQ)
    • AI ASIC firms (e.g., Google/Alphabet GOOGL, Apple AAPL)

📀 SSD (Solid-State Drive) — NVMe / PCIe Storage

Role:

  • Uses NAND flash + controller to provide high-speed persistent storage for datasets and model artifacts.
  • Often tiered with DRAM caches to speed I/O for AI training pipelines.
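That DRAM-over-SSD tiering is usually just an LRU cache in front of the slow path. A minimal sketch, where `fetch_from_ssd` is a hypothetical stand-in for real file I/O:

```python
from collections import OrderedDict

class TieredReader:
    """DRAM-backed LRU cache in front of slower SSD reads.

    `fetch_from_ssd` is any callable mapping a key to its data; here it
    stands in for an actual NVMe read in a data-loading pipeline.
    """

    def __init__(self, fetch_from_ssd, capacity=1024):
        self.fetch = fetch_from_ssd
        self.capacity = capacity
        self.cache = OrderedDict()
        self.hits = self.misses = 0

    def read(self, key):
        if key in self.cache:
            self.hits += 1
            self.cache.move_to_end(key)      # mark as most recently used
            return self.cache[key]
        self.misses += 1
        value = self.fetch(key)              # slow path: hit the SSD
        self.cache[key] = value
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)   # evict least recently used
        return value
```

Training data loaders, KV/object caches, and checkpoint readers all follow this shape: repeated reads come from DRAM at memory speed, and only cold keys pay SSD latency.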

Suppliers:

  • SSD controllers and products from:
    • Western Digital — WDC (NASDAQ)
    • Micron — MU (NYSE)
    • Samsung — 005930.KS (Korea)
    • Kioxia (private)
    • Seagate — STX (NASDAQ)

💾 Emerging / Specialized Memory

These aren’t yet mainstream in AI compute but are relevant in research and low-power AI systems:

Memory Type | Role | Notes
MRAM (Magneto-Resistive RAM) | Non-volatile, faster than NAND | Early stage; used in embedded systems
ReRAM / PCM | Persistent, low power | Research for in-memory compute
Optane (3D XPoint) | Persistent memory between DRAM & NAND | Invented by Intel; discontinued (legacy products)

Suppliers:

  • MRAM & emerging:
    • Everspin Technologies — MRAM (NASDAQ) — standalone MRAM maker.
    • Intel — INTC (NASDAQ) — historical, via the now-discontinued Optane/3D XPoint line.
    • Samsung, SK Hynix, and Micron research next-generation memories, though few have shipped as products.

Memory Hierarchy Summary (AI Stack)

Layer | Typical Tech | Role in AI
On-chip registers & cache | SRAM | Fastest access for computation
High-bandwidth pool | HBM | Training/inference compute throughput
System memory | DRAM | Working set + dataset buffering
Persistent fast storage | SSD (NAND flash) | Model/dataset I/O
Bulk/archive storage | HDD | Long-term dataset/backup
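The gaps between these layers are easier to feel on a human timescale. The latencies below are rough order-of-magnitude assumptions (not benchmarks), rescaled so one SRAM access takes one second:

```python
SRAM_S = 1e-9  # assumed ~1 ns SRAM access, used as the baseline

# Rough, illustrative access latencies per tier (orders of magnitude only).
TIERS = [
    ("SRAM cache", 1e-9),
    ("DRAM / HBM", 1e-7),   # ~100 ns class
    ("NVMe SSD",   1e-4),   # ~100 us class
    ("HDD",        1e-2),   # ~10 ms seek class
]

print("If one SRAM access took 1 second, the other tiers would take:")
for name, lat in TIERS:
    scaled = lat / SRAM_S  # seconds on the rescaled clock
    print(f"  {name:10s} ~{scaled:>12,.0f} s  (~{scaled / 86400:.1f} days)")
```

On that scale DRAM is minutes, an SSD read is about a day, and an HDD seek is months, which is the whole argument for keeping hot data as high in the hierarchy as it fits.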