
Presenting a Brain-Like Next-Generation AI Semiconductor that Sees and Judges Instantly
Date : 2025-12-31 Writer : PR Office

< (From left) Professor Sanghun Jeon, Ph.D. candidate Seungyeob Kim, postdoctoral researcher Hongrae Cho, Ph.D. candidates Sang-ho Lee and Taeseung Jung, and M.S. candidate Seonjae Park >

With the advancement of Artificial Intelligence (AI), the importance of ultra-low-power semiconductor technology that integrates sensing, computation, and memory into a single unit is growing. However, conventional structures face challenges such as power loss due to data movement, latency, and limitations in memory reliability. A Korean research team has drawn international academic attention by presenting core technologies for an integrated ‘Sensor–Compute–Store’ AI semiconductor to solve these issues.

KAIST announced on December 31st that Professor Sanghun Jeon’s research team from the School of Electrical Engineering presented a total of six papers at the ‘International Electron Devices Meeting (IEEE IEDM 2025)’—the world’s most prestigious semiconductor conference—held in San Francisco from December 8 to 10. Among these, the team’s papers were selected both as a Highlight Paper and as a Top-Ranked Student Paper.

The research on the M3D-integrated neuromorphic vision sensor, selected as a Highlight Paper, is a semiconductor that stacks the functions of the human eye and brain within a single chip. Simply put, the light-detecting sensors and the brain-like signal-processing circuits are fabricated as very thin layers and stacked vertically in one chip, implementing a structure in which 'seeing' and 'judging' occur simultaneously.

Through this, the research team completed the world's first "In-Sensor Spiking Convolution" platform, in which the AI computation that "sees and judges at the same time" takes place directly within the camera sensor.

< Figure 1. Summary of research on vertically stacked optical signal-to-spike frequency converter for AI >

< Figure 2. Representative diagram of the development of a 2T-2C near-pixel analog computing cell based on oxide thin-film transistors >

Previously, this technology required several stages: capturing an image (sensor), converting it to digital (ADC), storing it in memory (DRAM), and then computing (CNN). The new technology eliminates this unnecessary data movement because the computation happens immediately within the sensor. As a result, real-time, ultra-low-power edge AI becomes possible, with significantly reduced power consumption and dramatically improved response speed.
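To make the contrast concrete, the pipeline difference can be sketched in a toy simulation: pixel intensities are encoded as spike frequencies (as in Figure 1's optical-signal-to-spike converter) and a convolution kernel is applied directly to the resulting spike counts, with no ADC or DRAM stage in between. The linear intensity-to-rate mapping, the `max_rate` parameter, and the edge-detection kernel below are illustrative assumptions, not the team's actual circuit design.

```python
import numpy as np

def intensity_to_spike_rate(image, max_rate=100.0):
    """Toy encoder: map normalized pixel intensity [0, 1] to a spike
    frequency in Hz (linear mapping is an illustrative assumption)."""
    return np.clip(image, 0.0, 1.0) * max_rate

def in_sensor_spiking_convolution(image, kernel, max_rate=100.0, window=1.0):
    """Apply a convolution kernel directly to expected spike counts,
    skipping the sensor -> ADC -> DRAM -> CNN round trip of a
    conventional pipeline."""
    rates = intensity_to_spike_rate(image, max_rate)
    spikes = rates * window  # expected spike counts within the time window
    kh, kw = kernel.shape
    h, w = spikes.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(spikes[i:i + kh, j:j + kw] * kernel)
    return out

# 3x3 test "image" and a simple horizontal edge-detection kernel
image = np.array([[0.0, 0.5, 1.0],
                  [0.5, 1.0, 0.5],
                  [1.0, 0.5, 0.0]])
edge_kernel = np.array([[1.0, -1.0],
                        [1.0, -1.0]])
result = in_sensor_spiking_convolution(image, edge_kernel)
print(result)  # 2x2 map of edge responses, computed "in sensor"
```

In a real device the multiply-accumulate would be performed by analog circuits next to the pixels (as in the 2T-2C near-pixel cell of Figure 2); the sketch only shows why the ADC and DRAM stages drop out of the data path.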

Based on this approach, the research team presented six core technologies at the conference covering all layers of AI semiconductors, from input to storage. They simultaneously created neuromorphic semiconductors that operate like the brain using much less electricity while utilizing existing semiconductor processes, along with next-generation memory optimized for AI.

First, on the sensor side, they designed the system so that judgment occurs at the sensor stage rather than having separate components for capturing images and calculating. Consequently, power consumption decreased and response speeds increased compared to the conventional method of taking a photo and sending it to another chip for calculation.

< Figure 3. Schematic diagram of a next-generation biomimetic tactile system using neuromorphic devices >

< Figure 4. Representative diagram of NC-NAND development research based on Ultra-thin-Mo and Sub-3.5 nm HZO >

Furthermore, in the field of memory, they implemented a next-generation NAND flash that uses the same materials but operates at lower voltages, lasts longer, and can store data stably even when the power is turned off. Through this, they presented a foundational technology that satisfies the requirements for high-capacity, high-reliability, and low-power memory necessary for AI.


< Figure 5. Representative diagram of next-generation 3D FeNAND memory development research >

< Figure 6. Representative diagram of research on charge behavior characterization and quantitative analysis methodology for next-generation FeNAND memory >

Professor Sanghun Jeon, who led the research, stated, "This research is significant in that it demonstrates that the entire hierarchy can be integrated into a single material and process system, moving away from the existing AI semiconductor structure where sensing, computation, and storage were designed separately." He added, "Moving forward, we plan to expand this into a next-generation AI semiconductor platform that encompasses everything from ultra-low-power Edge AI to large-scale AI memory."

Meanwhile, this research was conducted with support from basic research projects of the Ministry of Science and ICT and the National Research Foundation of Korea, as well as the Center for Heterogeneous Integration of Extreme-scale & Property Semiconductors (CH³IPS). It was carried out in collaboration with Samsung Electronics, Kyungpook National University, and Hanyang University.

 
