Researchers at the Korea Advanced Institute of Science and Technology (KAIST) have developed energy-efficient NPU technology that demonstrates substantial performance improvements in laboratory testing.
Their specialised AI chip ran AI models 60% faster while using 44% less electricity than the graphics cards currently powering most AI systems, based on results from controlled experiments.
Put simply, the research, led by Professor Jongse Park from KAIST’s School of Computing in collaboration with HyperAccel Inc., addresses one of the most pressing challenges in modern AI infrastructure: the enormous energy and hardware requirements of large-scale generative AI models.
Current systems such as OpenAI’s ChatGPT-4 and Google’s Gemini 2.5 demand not only high memory bandwidth but also substantial memory capacity, driving companies like Microsoft and Google to purchase hundreds of thousands of NVIDIA GPUs.
The memory bottleneck challenge
The core innovation lies in the team’s approach to solving the memory bottleneck issues that plague existing AI infrastructure. Their energy-efficient NPU technology focuses on “lightweighting” the inference process while minimising accuracy loss, a critical balance that has proven difficult for previous solutions.
PhD student Minsu Kim and Dr Seongmin Hong from HyperAccel Inc., serving as co-first authors, presented their findings at the 2025 International Symposium on Computer Architecture (ISCA 2025) in Tokyo. The research paper, titled “Oaken: Fast and Efficient LLM Serving with Online-Offline Hybrid KV Cache Quantization,” details their comprehensive approach to the problem.
The technology centres on KV cache quantisation, which the researchers identify as accounting for most of the memory usage in generative AI systems. By optimising this component, the team enables the same level of AI infrastructure performance using fewer NPU devices compared with traditional GPU-based systems.
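To see why the KV cache dominates inference memory, a back-of-envelope estimate helps. The sketch below is a simplified illustration, not code from the Oaken paper; the model dimensions are assumptions chosen to resemble a large transformer. It shows how the cache grows linearly with batch size and sequence length, and how storing it at 4-bit rather than 16-bit precision shrinks it fourfold.

```python
def kv_cache_bytes(num_layers, num_kv_heads, head_dim,
                   seq_len, batch_size, bytes_per_value):
    # Keys and values are each stored per layer, per head, per token;
    # the factor of 2 covers the K and V tensors.
    return (2 * num_layers * num_kv_heads * head_dim
            * seq_len * batch_size * bytes_per_value)

# Assumed dimensions for a hypothetical large transformer.
fp16 = kv_cache_bytes(80, 8, 128, 4096, 32, 2)    # 16-bit cache
int4 = kv_cache_bytes(80, 8, 128, 4096, 32, 0.5)  # 4-bit cache

print(f"FP16 KV cache: {fp16 / 2**30:.0f} GiB")   # 40 GiB
print(f"INT4 KV cache: {int4 / 2**30:.0f} GiB")   # 10 GiB
```

At this assumed scale the cache alone rivals the capacity of a high-end accelerator, which is why reducing its precision translates directly into needing fewer devices.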
Technical innovation and architecture
The KAIST team’s energy-efficient NPU technology employs a three-pronged quantisation algorithm: threshold-based online-offline hybrid quantisation, group-shift quantisation, and fused dense-and-sparse encoding. This approach allows the system to integrate with existing memory interfaces without requiring changes to the operational logic of current NPU architectures.
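The paper’s exact algorithms are not reproduced here, but the general idea behind a threshold-based dense-and-sparse split can be sketched: values within a pre-calibrated threshold are quantised densely at low precision, while the rare outliers beyond it are kept sparsely at full precision. The Python below is a minimal illustration under those assumptions; the threshold, bit width, and function names are not taken from the paper.

```python
import numpy as np

def hybrid_quantize(values, threshold, num_bits=4):
    """Split a tensor into a dense low-bit part plus sparse outliers."""
    outlier_mask = np.abs(values) > threshold
    inliers = np.where(outlier_mask, 0.0, values)

    # Uniform symmetric quantisation of the dense (inlier) part.
    levels = 2 ** (num_bits - 1) - 1
    scale = threshold / levels
    dense = np.clip(np.round(inliers / scale), -levels, levels).astype(np.int8)

    # Outliers stored sparsely as (flat index, full-precision value) pairs.
    sparse = list(zip(np.flatnonzero(outlier_mask), values[outlier_mask]))
    return dense, scale, sparse

def dequantize(dense, scale, sparse):
    out = dense.astype(np.float32) * scale
    for idx, val in sparse:
        out.flat[idx] = val  # restore outliers exactly
    return out

rng = np.random.default_rng(0)
kv = rng.normal(size=1024).astype(np.float32)
dense, scale, sparse = hybrid_quantize(kv, threshold=2.5)
recovered = dequantize(dense, scale, sparse)
```

The appeal of this style of scheme is that the dense part has a fixed, hardware-friendly layout, while the sparse outlier list stays small because most values fall inside the threshold.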
The hardware architecture incorporates page-level memory management techniques for efficient utilisation of limited memory bandwidth and capacity. Additionally, the team introduced new encoding techniques specifically optimised for the quantised KV cache, addressing the unique requirements of their approach.
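Page-level KV cache management, an idea popularised by serving systems such as vLLM’s PagedAttention, allocates the cache in fixed-size pages so that memory grows with a sequence’s actual length rather than a pre-reserved maximum. The sketch below is a minimal illustration of that general technique; the class name, page size, and bookkeeping are assumptions, not the paper’s design.

```python
class PagedKVCache:
    """Allocate KV cache storage in fixed-size pages per sequence."""

    def __init__(self, num_pages, tokens_per_page=16):
        self.tokens_per_page = tokens_per_page
        self.free_pages = list(range(num_pages))  # pool of physical pages
        self.page_tables = {}  # sequence id -> list of physical page ids
        self.seq_lens = {}     # sequence id -> tokens written so far

    def append_token(self, seq_id):
        """Reserve a physical (page, slot) for the next token's K/V."""
        length = self.seq_lens.get(seq_id, 0)
        if length % self.tokens_per_page == 0:
            # Sequence is new or its last page is full: grab a fresh page.
            if not self.free_pages:
                raise MemoryError("KV cache exhausted")
            self.page_tables.setdefault(seq_id, []).append(self.free_pages.pop())
        self.seq_lens[seq_id] = length + 1
        return self.page_tables[seq_id][-1], length % self.tokens_per_page

    def release(self, seq_id):
        """Return a finished sequence's pages to the free pool."""
        self.free_pages.extend(self.page_tables.pop(seq_id, []))
        self.seq_lens.pop(seq_id, None)
```

Because pages are recycled as sequences finish, fragmentation stays bounded and the same physical capacity can serve more concurrent requests than worst-case reservation would allow.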
“This research, through joint work with HyperAccel Inc., found a solution in generative AI inference lightweighting algorithms and succeeded in developing a core NPU technology that can solve the memory problem,” Professor Park explained.
“Through this technology, we implemented an NPU with over 60% improved performance compared to the latest GPUs by combining quantisation techniques that reduce memory requirements while maintaining inference accuracy.”
Sustainability implications
The environmental impact of AI infrastructure has become a growing concern as generative AI adoption accelerates. The energy-efficient NPU technology developed at KAIST offers a potential path toward more sustainable AI operations.
With 44% lower power consumption than current GPU solutions, widespread adoption could significantly reduce the carbon footprint of AI cloud services. However, the technology’s real-world impact will depend on several factors, including manufacturing scalability, cost-effectiveness, and industry adoption rates.
The researchers acknowledge that their solution represents a significant step forward, but widespread implementation will require continued development and industry collaboration.
Industry context and future outlook
The timing of this energy-efficient NPU breakthrough is particularly relevant as AI companies face growing pressure to balance performance with sustainability. The current GPU-dominated market has created supply chain constraints and elevated costs, making alternative solutions increasingly attractive.
Professor Park noted that the technology “has demonstrated the potential for implementing high-performance, low-power infrastructure specialised for generative AI, and is expected to play a key role not only in AI cloud data centres but also in the AI transformation (AX) environment represented by dynamic, executable AI such as agentic AI.”
The research represents a significant step toward more sustainable AI infrastructure, but its ultimate impact will be determined by how effectively it can be scaled and deployed in commercial environments. As the AI industry continues to grapple with energy consumption concerns, innovations like KAIST’s energy-efficient NPU technology offer hope for a more sustainable future in artificial intelligence computing.
(Image by Korea Advanced Institute of Science and Technology)