At CES 2026, SK hynix Makes Its Case for the Future of AI Memory
I was in the CES 2026 press room when SK hynix laid out its next-gen AI memory roadmap
Las Vegas always has noise, but the CES press rooms are different. They’re quieter, more technical, and the people in the seats are listening for one thing: what ships, what scales, and what changes procurement decisions six months from now. On January 6, 2026, SK hynix stepped up in that setting and made its message pretty clear — the next cycle of AI hardware is going to be memory-constrained, and it intends to be the company defining the memory stack.
The announcement centered on a dedicated customer exhibition hall at the Venetian Expo (January 6–9), with the theme “Innovative AI, Sustainable Tomorrow.” The phrase is marketing, sure — but the product list underneath it was not. They framed the whole show around AI-optimized memory, and they backed that up with a mix of high-bandwidth memory, low-power modules for servers, client-side DRAM, and a NAND story that’s directly aimed at AI data centers.
HBM4 was the headline in the room
The first thing everyone keyed in on was HBM. SK hynix said it is showing a next-generation HBM product described as a 16-high 48GB HBM4. They positioned it as the follow-on to the 12-high 36GB HBM4 they’ve already talked about, and they made a point of saying the work is being driven by customer requirements.
They also pointed to 12-high 36GB HBM3E as a major 2026 product, and described showing customers’ AI server GPU modules built with that memory. Put simply: they want the press (and the market) to see HBM3E not as a spec sheet, but as a real building block sitting inside real AI hardware.
Server memory: SOCAMM2 and the “power-per-rack” conversation
The second track in the announcement was AI server memory beyond HBM. SK hynix called out SOCAMM2, describing it as a low-power memory module designed for AI servers. That matters because the conversation in data centers has shifted. It’s not only performance — it’s performance per watt, performance per rack, and the ability to keep scaling without turning the facility into a power plant.
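To make the “performance per rack” framing concrete, here is a minimal back-of-envelope sketch. Every figure in it is a placeholder chosen for illustration; none of the numbers come from SK hynix or the announcement.

```python
# Back-of-envelope rack power math. Every number is a hypothetical placeholder
# chosen for illustration; none come from the SK hynix announcement.

RACK_POWER_BUDGET_W = 40_000   # assumed facility limit per rack
SERVER_POWER_W = 6_500         # assumed draw per AI server (accelerators, CPU, DRAM, NIC)
DRAM_SHARE = 0.10              # assumed fraction of server power spent on DRAM

servers_per_rack = RACK_POWER_BUDGET_W // SERVER_POWER_W
dram_power_per_rack = servers_per_rack * SERVER_POWER_W * DRAM_SHARE

# If a lower-power module (the role SOCAMM2 is pitched for) cut DRAM power by 25%,
# that budget could go back into compute inside the same rack.
freed_w = dram_power_per_rack * 0.25

print(f"servers per rack: {servers_per_rack}")
print(f"DRAM power per rack: {dram_power_per_rack:,.0f} W; freed by a 25% cut: {freed_w:,.0f} W")
```

Small numbers on their own, but multiplied across thousands of racks they become the difference between scaling out and hitting the facility’s power wall.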
Alongside that, they highlighted LPDDR6 as a general-purpose memory product optimized for on-device AI use cases, with improved speed and energy efficiency over the prior generation. The takeaway: they’re trying to cover both ends of the AI spectrum — big iron in the data center and smaller inference workloads closer to the user.
NAND: 321-layer 2Tb QLC aimed at AI data centers
The NAND portion was a little more understated in the presentation, but it’s the part I kept thinking about afterward. SK hynix said it will show a 321-layer 2Tb QLC NAND product designed to meet demand for ultra-high-capacity enterprise SSDs in AI data centers.
Their positioning was straightforward: high integration, improved performance, and better power efficiency compared to the previous QLC generation — specifically calling out the importance of low-power control in AI data center environments. If you build storage for AI pipelines, you already know the pattern: capacity keeps rising, power budgets don’t, and the “cheap and big” tier has to get more efficient to remain viable.
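The same arithmetic, sketched for storage: with a fixed per-rack power budget for SSDs, capacity only grows if each watt stores more data. The figures below are assumptions of mine for illustration, not numbers from the announcement.

```python
# A fixed storage power budget caps drive count, so rack capacity tracks
# capacity-per-watt. All figures are hypothetical placeholders, not from
# the announcement.

STORAGE_POWER_BUDGET_W = 2_000   # assumed power set aside for SSDs in a rack

generations = {
    "previous QLC generation": {"capacity_tb": 61,  "avg_power_w": 14},
    "denser QLC generation":   {"capacity_tb": 122, "avg_power_w": 15},
}

for name, d in generations.items():
    drives = STORAGE_POWER_BUDGET_W // d["avg_power_w"]
    total_pb = drives * d["capacity_tb"] / 1000
    per_watt = d["capacity_tb"] / d["avg_power_w"]
    print(f"{name}: {drives} drives, ~{total_pb:.1f} PB in the rack, {per_watt:.1f} TB/W")
```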
“AI System Demonstration Zone”: where the roadmap gets visual
The company also described an “AI System Demonstration Zone” meant to show how these memory products combine into an AI platform. This is where they leaned into forward-looking concepts: customized HBM, PIM, compute-in-memory, and computational storage ideas.
- Custom HBM (cHBM) — integrating certain functions traditionally handled by a GPU or ASIC into the HBM base die, positioned as a way to improve inference efficiency and reduce communication power.
- PIM (Processing-In-Memory) and related accelerator concepts — described as addressing data movement bottlenecks by adding compute capability closer to where the data sits (a rough sketch of that traffic argument follows this list).
- Compute-using-DRAM concepts — simple operations performed inside DRAM cells to speed data processing paths.
- CXL-based memory module prototypes — combining memory expansion with added compute functions to improve server efficiency.
- Computational storage drive concepts — storage that can sense/analyze/execute certain processing internally.
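The PIM bullet is the one that benefits most from a concrete sketch. The snippet below is not an SK hynix interface and doesn’t model any announced part; it is a generic, heavily simplified illustration of the data-movement argument: if a reduction runs next to the DRAM, only the result has to cross the memory interface.

```python
# Toy illustration of the data-movement argument behind PIM. It models no real
# device or API; it only counts the bytes that would cross the memory interface
# for one reduction under two execution styles.

import numpy as np

N = 1_000_000                  # elements of a vector resident in memory
BYTES_PER_ELEM = 2             # assume fp16 data

vec = np.random.rand(N).astype(np.float16)

# Host-side execution: the full vector travels across the interface, then the
# processor reduces it.
bytes_moved_host = N * BYTES_PER_ELEM
result = vec.sum(dtype=np.float32)

# PIM-style execution: the reduction happens beside the DRAM arrays, and only a
# single fp32 result crosses the interface.
bytes_moved_pim = 4

print(f"result: {result:.2f}")
print(f"host path: {bytes_moved_host:,} bytes moved; PIM-style path: {bytes_moved_pim} bytes")
print(f"interface traffic shrinks ~{bytes_moved_host // bytes_moved_pim:,}x for this one operation")
```

The hard part is deciding which operations are worth pushing into memory and how software gets at them, which is exactly the architectural conversation the demonstration zone seemed built to start.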
They specifically mentioned visualizing the internal architecture of customized HBM. In the press room, that’s a signal: this isn’t only a future-technology slide — they want customers to think about architectural choices they can lock into for upcoming platforms.
As I Left
Walking out of the room, I was left with one consistent message: SK hynix is aiming to be the memory layer that AI platforms are built around — HBM at the top, low-power server memory in the middle, LPDDR on the edge, and high-density QLC NAND feeding the storage side of AI data centers.
CES is always a mix of spectacle and substance. This one leaned heavily toward substance — the kind that shows up later as lead times, allocations, platform decisions, and the real-world cost of building and scaling AI infrastructure.
Press release source: SK hynix AI Memory Announcement
Tags: AI memory, Flash, HBM, HBM3E, HBM4, NAND, Solid State Memory