KAIST Overcomes Limitations of Existing Image Sensors… Clear Colors Even Under Oblique Light
<(From Left) Ph.D. candidate Chanhyung Park from Electrical Engineering, Jaehyun Jeon from the Department of Physics, Professor Min Seok Jang from Electrical Engineering>
Smartphone cameras are becoming smaller, yet photos are becoming sharper. Korean researchers have pushed the limits of next-generation smartphone cameras by developing a new image sensor technology that can accurately represent colors regardless of the angle at which light enters. The team achieved this by using a “metamaterial” that steers light through structures too small to be seen with the naked eye.
KAIST (President Kwang Hyung Lee) announced on the 12th of February that a research team led by Professor Min Seok Jang of the School of Electrical Engineering, in collaboration with Professor Haejun Chung’s team at Hanyang University, has developed a metamaterial-based technology for image sensors that can stably separate colors even when the angle of light incidence varies.
Conventional smartphone cameras capture images by concentrating light through a small lens. However, as camera pixels become extremely small, lenses alone struggle to gather sufficient light. To address this, the Nanophotonic Color Router was introduced. Instead of concentrating light through a lens, this technology uses microscopic structures invisible to the eye to precisely separate incoming light by color. By designing the pathways through which light travels, this metamaterial-based structure accurately divides light into red (R), green (G), and blue (B).
Samsung Electronics has already demonstrated the commercialization potential of this technology by applying it to actual image sensors under the name “Nano Prism.” Theoretically, stacking multiple layers of extremely fine nanostructures enables greater light collection and more accurate color separation.
<Nanophotonic color router technology that works reliably even under oblique incidence conditions (AI-generated image)>
However, existing Nanophotonic Color Routers had limitations. While they functioned well when light entered vertically, their performance deteriorated significantly and colors mixed when light entered at an angle, as is common in smartphone cameras. This issue, known as the “oblique incidence problem,” has been considered a critical challenge that must be resolved for real-world product applications.
The research team first investigated the root cause of this issue. They found that previous designs were overly optimized for vertically incident light, causing performance to drop sharply even with slight changes in the angle of incidence. Since smartphone cameras receive light from various angles, maintaining performance under angular variation is essential.
Instead of manually designing the structure, the team adopted an “inverse design” approach, which allows the computer to autonomously determine the optimal structure. Through this method, they derived a color router design capable of stable color separation even when the angle of incoming light changes.
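To give a feel for how inverse design works, here is a minimal conceptual sketch, not the team's actual workflow: a real color-router pipeline couples an electromagnetic solver (for example, adjoint FDTD or RCWA) to the optimizer, whereas the toy `efficiency` function below merely stands in for such a simulation. The key ingredient is that the objective averages over incidence angles, so the optimizer cannot over-fit the structure to vertical incidence.

```python
# Conceptual sketch of angle-robust inverse design (NOT the authors' code).
# A real pipeline couples a differentiable EM solver (adjoint FDTD/RCWA) to the
# optimizer; here a toy 'efficiency' function stands in for the simulation.
import numpy as np

rng = np.random.default_rng(0)
n_params = 64                                  # hypothetical design variables
design = rng.uniform(0.0, 1.0, n_params)
angles = np.deg2rad(np.linspace(-12, 12, 7))   # the +/-12 degree range from the article

def efficiency(design, theta):
    """Toy stand-in for 'color-separation efficiency at incidence angle theta'."""
    phase = np.cos(np.arange(n_params) * theta)
    return float(np.tanh(design @ phase / n_params + 0.5))

def robust_objective(design):
    # Averaging over angles is what forces angle robustness: the optimizer
    # cannot over-fit the structure to normal incidence alone.
    return np.mean([efficiency(design, t) for t in angles])

lr, eps = 0.5, 1e-4
for _ in range(100):                           # finite-difference gradient ascent
    grad = np.zeros(n_params)
    base = robust_objective(design)
    for i in range(n_params):
        bumped = design.copy()
        bumped[i] += eps
        grad[i] = (robust_objective(bumped) - base) / eps
    design = np.clip(design + lr * grad, 0.0, 1.0)  # keep within fabricable bounds

print(f"angle-averaged efficiency after optimization: {robust_objective(design):.3f}")
```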
As a result, whereas previous structures nearly failed when light was tilted by about 12 degrees, the newly designed structure maintained approximately 78% optical efficiency within a ±12-degree range, demonstrating stable color separation performance. In other words, the technology reaches a level suitable for practical smartphone usage environments.
<Nanophotonic color router robust to oblique incidence>
The team further analyzed performance variations by considering factors such as the number of metamaterial layers, design conditions, and potential fabrication errors. They also systematically defined the limits of robustness against changes in the angle of incidence. This study is particularly meaningful in that it presents design criteria for color routers that reflect realistic image sensor environments.
Professor Min Seok Jang of KAIST stated, “This research is significant in that it systematically analyzes the oblique incidence problem, which has hindered the commercialization of color router technology, and proposes a clear solution direction,” adding, “The proposed design methodology can be extended beyond color routers to a wide range of metamaterial-based nanophotonic devices.”
In this study, KAIST undergraduate student Jaehyun Jeon and doctoral candidate Chanhyung Park participated as co-first authors. The research findings were published on January 27 in the international journal Advanced Optical Materials.
※ Paper title: “Inverse Design of Nanophotonic Color Router Robust to Oblique Incidence”
DOI: https://doi.org/10.1002/adom.202501697
※ Authors: Jaehyun Jeon (KAIST, first author), Chanhyung Park (KAIST, first author), Doyoung Heo (KAIST), Haejun Chung (Hanyang University), Min Seok Jang (KAIST, corresponding author)
This research was supported by the Ministry of Trade, Industry & Energy (Korea Institute for Advancement of Technology, Korea Semiconductor Research Consortium) under the project “Design Technology of Meta-Optical Structures for Next-Generation Sensors,” by the Ministry of Science and ICT (National Research Foundation of Korea) under the projects “Development of Full-Color Micro LED Devices and Panels Based on Beam-Steerable High-Color-Purity Meta Color Conversion Layers” and “Development of a Real-Time Zero-Energy Argos-Eye Metasurface Network Computing with All Properties of Light,” and by the Ministry of Culture, Sports and Tourism (Korea Creative Content Agency) under the project “International Joint Research for Next-Generation Copyright Protection and Secure Content Distribution Technologies.”
KAIST Solves Key Micro-LED Challenges, Enabling Reality-Like Visuals for AR/VR Devices
<(Back row, from left) Dr. Juhyuk Park, Ph.D. candidate Hyunsu Ki, (Front row, from left) M.S. candidate Haoi Le Bao, M.S. candidate Chaeyeon Kim, (Circled, from left) Prof. Sanghyeon Kim, Prof. Dae-Myeong Geum>
From TVs and smartwatches to rapidly emerging VR and AR devices, micro-LEDs are a next-generation display technology in which each LED—smaller than the thickness of a human hair—emits light on its own. Among the three primary colors required for full-color displays—red, green, and blue—the realization of high-performance red micro-LEDs has long been considered the most difficult. KAIST researchers have now successfully demonstrated a high-efficiency, ultra-high-resolution red micro-LED display, paving the way for displays that can deliver visuals even sharper than reality.
KAIST (President Kwang Hyung Lee) announced on the 28th that a research team led by Professor Sanghyeon Kim of the School of Electrical Engineering, in collaboration with Professor Dae-Myeong Geum of Inha University, compound-semiconductor manufacturer QSI, and microdisplay/SoC design company Raontech, has developed a red micro-LED display technology that achieves ultra-high resolution while significantly reducing power consumption.
Using this technology, the team successfully demonstrated a 1,700 PPI* class ultra-high-resolution micro-LED display—approximately 3–4 times higher than the resolution of current flagship smartphone displays—capable of delivering truly “reality-like” visuals even in VR and AR devices.
*PPI (Pixels Per Inch): indicates how densely pixels are packed on a display; higher PPI corresponds to finer image detail.
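For readers who want the arithmetic behind that footnote: PPI is simply the diagonal pixel count divided by the diagonal size in inches. The snippet below uses hypothetical panel numbers to show why flagship phones sit near 500 PPI, which makes the 1,700 PPI figure above roughly 3 to 4 times denser.

```python
# PPI = diagonal pixel count / diagonal size in inches (illustrative numbers only).
import math

def ppi(width_px, height_px, diagonal_inches):
    return math.hypot(width_px, height_px) / diagonal_inches

# e.g., a hypothetical 6.1-inch smartphone panel at 1440 x 3120 pixels:
print(round(ppi(1440, 3120, 6.1)))   # ~563 PPI; 1,700 PPI is roughly 3x denser,
                                     # consistent with the comparison stated above
```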
Micro-LEDs are self-emissive displays that surpass OLEDs in brightness, lifetime, and energy efficiency, but they have faced two major technical challenges. The first is the efficiency degradation of red micro-LEDs, which becomes severe as pixel sizes shrink due to increased energy leakage. The second is the limitation of conventional transfer processes, which rely on mechanically locating and placing countless microscopic LEDs one by one, making ultra-high-resolution fabrication difficult and increasing defect rates.
<Results of Red Micro-LED Performance Improvement>
The research team addressed both challenges simultaneously. First, they adopted an AlInP/GaInP quantum-well structure, enabling highly efficient red micro-LEDs with minimal energy loss even at very small pixel sizes. Simply put, the quantum well/barrier structure acts as an “energy barrier.” It confines electrons and holes within the quantum well layer, preventing carrier leakage. By adopting quantum wells with higher hole concentration, the research team effectively reduced energy loss as pixel sizes decreased, enabling brighter and more efficient red micro-LEDs.
Also, instead of transferring individual LEDs, the researchers employed a monolithic three-dimensional (3D) integration technique, stacking the LED layers directly on top of the driving circuitry. This approach minimizes alignment errors, reduces defect rates, and enables stable fabrication of ultra-high-resolution displays. The team also developed a low-temperature process to prevent damage to the underlying circuitry during integration.
<Monolithic 3D MicroLED-on-Si Display>
This achievement is particularly significant because it demonstrates a fully functional, ultra-high-resolution, high-quantum-efficiency red micro-LED display, widely regarded as the most difficult component to realize. The technology is expected to find broad applications in next-generation displays where pixel granularity must be virtually imperceptible, including AR/VR smart glasses, automotive head-up displays (HUDs), and ultra-compact wearable devices.
Professor Sanghyeon Kim commented, “This work simultaneously solves the long-standing challenges of red pixel efficiency and circuit integration in micro-LEDs. We will continue to advance this technology toward practical commercialization as a next-generation display platform.”
The study was led by Dr. Juhyuk Park of the KAIST Institute of Information Electronics as first author, and the results were published on January 20 in the international journal Nature Electronics.
※ Paper title: “A Monolithic Three-Dimensional Integrated Red Micro-LED Display on Silicon Using AlInP/GaInP Epilayers”
※ DOI: 10.1038/s41928-025-01546-4
This research was supported by the National Research Foundation of Korea Basic Research Program (2019), the Display Strategic Research Laboratory Program (currently ongoing), and the Samsung Future Technology Incubation Center (2020-2023).
<Monolithic 3D Direct Technology (AI-generated image)>
KAIST Presents a Brain-Like Next-Generation AI Semiconductor that Sees and Judges Instantly
< (From left) Professor Sanghun Jeon, Ph.D. candidate Seungyeob Kim, Postdoctoral researcher Hongrae Cho, Ph.D. candidates Sang-ho Lee and Taeseung Jung, and M.S. candidate Seonjae Park >
With the advancement of Artificial Intelligence (AI), the importance of ultra-low-power semiconductor technology that integrates sensing, computation, and memory into a single unit is growing. However, conventional structures face challenges such as power loss due to data movement, latency, and limitations in memory reliability. A Korean research team has drawn international academic attention by presenting core technologies for an integrated ‘Sensor–Compute–Store’ AI semiconductor to solve these issues.
KAIST announced on December 31st that Professor Sanghun Jeon’s research team from the School of Electrical Engineering presented a total of six papers at the ‘International Electron Devices Meeting (IEEE IEDM 2025)’, the world’s most prestigious semiconductor conference, held in San Francisco from December 8 to 10. Among them, one paper was selected as a Highlight Paper and another as a Top Ranked Student Paper.
Highlight Paper: Monolithically Integrated Photodiode–Spiking Circuit for Neuromorphic Vision with In-Sensor Feature Extraction [Link: https://iedm25.mapyourshow.com/8_0/sessions/session-details.cfm?scheduleid=255]
Top Ranked Student Paper: A Highly Reliable Ferroelectric NAND Cell with Ultra-thin IGZO Charge Trap Layer; Trap Profile Engineering for Endurance and Retention Improvement [Link: https://iedm25.mapyourshow.com/8_0/sessions/session-details.cfm?scheduleid=124]
The M3D-integrated neuromorphic vision sensor, selected as a Highlight Paper, is a semiconductor that in effect stacks the human eye and brain within a single chip. Simply put, the sensors that detect light and the circuits that process signals like a brain are fabricated as very thin layers and stacked vertically in one chip, implementing a structure where 'seeing' and 'judging' occur simultaneously.
Through this, the research team completed the world's first "In-Sensor Spiking Convolution" platform, in which AI computation that "sees and judges at the same time" takes place directly within the camera sensor.
< Figure 1. Summary of research on vertically stacked optical signal-to-spike frequency converter for AI >
< Figure 2. Representative diagram of the development of a 2T-2C near-pixel analog computing cell based on oxide thin-film transistors >
Previously, this technology required several stages: capturing an image (sensor), converting it to digital (ADC), storing it in memory (DRAM), and then calculating (CNN). However, this new technology eliminates unnecessary data movement as the calculation happens immediately within the sensor. As a result, it has become possible to implement real-time, ultra-low-power Edge AI with significantly reduced power consumption and dramatically improved response speeds.
Based on this approach, the research team presented six core technologies at the conference covering all layers of AI semiconductors, from input to storage. They simultaneously created neuromorphic semiconductors that operate like the brain using much less electricity while utilizing existing semiconductor processes, along with next-generation memory optimized for AI.
First, on the sensor side, they designed the system so that judgment occurs at the sensor stage rather than having separate components for capturing images and calculating. Consequently, power consumption decreased and response speeds increased compared to the conventional method of taking a photo and sending it to another chip for calculation.
< Figure 3. Schematic diagram of a next-generation biomimetic tactile system using neuromorphic devices >
< Figure 4. Representative diagram of NC-NAND development research based on Ultra-thin-Mo and Sub-3.5 nm HZO >
Furthermore, in the field of memory, they implemented a next-generation NAND flash that uses the same materials but operates at lower voltages, lasts longer, and can store data stably even when the power is turned off. Through this, they presented a foundational technology that satisfies the requirements for high-capacity, high-reliability, and low-power memory necessary for AI.
< Figure 5. Representative diagram of next-generation 3D FeNAND memory development research >
< Figure 6. Representative diagram of research on charge behavior characterization and quantitative analysis methodology for next-generation FeNAND memory >
Professor Sanghun Jeon, who led the research, stated, "This research is significant in that it demonstrates that the entire hierarchy can be integrated into a single material and process system, moving away from the existing AI semiconductor structure where sensing, computation, and storage were designed separately." He added, "Moving forward, we plan to expand this into a next-generation AI semiconductor platform that encompasses everything from ultra-low-power Edge AI to large-scale AI memory."
Meanwhile, this research was conducted with support from basic research projects of the Ministry of Science and ICT and the National Research Foundation of Korea, as well as the Center for Heterogeneous Integration of Extreme-scale & Property Semiconductors (CH³IPS). It was carried out in collaboration with Samsung Electronics, Kyungpook National University, and Hanyang University.
Turning PC and Mobile Devices into AI Infrastructure, Reducing ChatGPT Costs
< (From left) KAIST School of Electrical Engineering: Dr. Jinwoo Park, M.S. candidate Seunggeun Cho, and Professor Dongsu Han >
Until now, AI services based on Large Language Models (LLMs) have mostly relied on expensive data center GPUs. This has resulted in high operational costs and created a significant barrier to entry for utilizing AI technology. A research team at KAIST has developed a technology that reduces reliance on expensive data center GPUs by utilizing affordable, everyday GPUs to provide AI services at a much lower cost.
On December 28th, KAIST announced that a research team led by Professor Dongsu Han from the School of Electrical Engineering developed 'SpecEdge,' a new technology that significantly lowers LLM infrastructure costs by utilizing affordable, consumer-grade GPUs widely available outside of data centers.
SpecEdge is a system where data center GPUs and "edge GPUs"—found in personal PCs or small servers—collaborate to form an LLM inference infrastructure. By applying this technology, the team successfully reduced the cost per token (the smallest unit of text generated by AI) by approximately 67.6% compared to methods using only data center GPUs.
To achieve this, the research team utilized a method called 'Speculative Decoding.' In this process, a small language model placed on the edge GPU quickly generates a high-probability token sequence (a series of words or word fragments). Then, the large-scale language model in the data center verifies this sequence in batches. During this process, the edge GPU continues to generate words without waiting for the server's response, simultaneously increasing LLM inference speed and infrastructure efficiency.
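The sketch below shows the generic speculative-decoding loop that this collaboration builds on: a cheap draft model proposes several tokens and the large model verifies them, keeping the longest agreeing prefix. This is a simplified greedy-acceptance illustration, not the SpecEdge codebase; in particular it omits SpecEdge's pipelining, in which the edge keeps drafting while the server verifies, and it assumes `draft_model` and `target_model` are callables mapping a token list to a single next token.

```python
# Simplified greedy-acceptance sketch of speculative decoding (NOT the SpecEdge
# code). draft_model / target_model: callables mapping a token list -> next token.
def speculative_decode(draft_model, target_model, prompt, k=4, max_new=64):
    tokens = list(prompt)
    while len(tokens) < len(prompt) + max_new:
        # 1) Edge GPU: the small draft model cheaply proposes k tokens.
        draft, ctx = [], list(tokens)
        for _ in range(k):
            nxt = draft_model(ctx)
            draft.append(nxt)
            ctx.append(nxt)
        # 2) Server GPU: the large model checks the k positions (conceptually a
        #    single batched forward pass, spelled out position-by-position here).
        accepted = 0
        while accepted < k and target_model(tokens + draft[:accepted]) == draft[accepted]:
            accepted += 1
        tokens += draft[:accepted]               # keep the longest agreeing prefix
        if accepted < k:
            tokens.append(target_model(tokens))  # target's own token at the mismatch
    return tokens

# Toy usage: a 'weak' and a 'strong' next-token rule over integer tokens.
weak   = lambda ts: (ts[-1] + 1) % 10
strong = lambda ts: (ts[-1] + 1) % 10 if ts[-1] != 7 else 0
print(speculative_decode(weak, strong, [0], k=4, max_new=12))
```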
< Figure 1. Language data flow diagram of the developed SpecEdge >
< Figure 2. Detailed computation time reduction method of SpecEdge >
< Figure 3. Illustration of efficient batching of verification requests from multiple edge GPUs on the server GPU within SpecEdge >
Compared to performing speculative decoding solely on data center GPUs, SpecEdge improved cost efficiency by 1.91 times and server throughput by 2.22 times. Notably, the technology was confirmed to work seamlessly even under standard internet speeds, meaning it can be immediately applied to real-world services without requiring a specialized network environment.
Furthermore, the server is designed to efficiently process verification requests from multiple edge GPUs, allowing it to handle more simultaneous requests without GPU idle time. This has realized an LLM serving infrastructure structure that utilizes data center resources more effectively.
This research presents a new possibility for distributing LLM computations—which were previously concentrated in data centers—to the edge, thereby reducing infrastructure costs and increasing accessibility. In the future, as this expands to various edge devices such as smartphones, personal computers, and Neural Processing Units (NPUs), high-quality AI services are expected to become available to a broader range of users.
< Figure 4. Conceptual comparison of the developed SpecEdge vs. conventional methods >
Professor Dongsu Han, who led the research, stated, "Our goal is to utilize edge resources around the user, beyond the data center, as part of the LLM infrastructure. Through this, we aim to lower AI service costs and create an environment where anyone can utilize high-quality AI."
Dr. Jinwoo Park and M.S candidate Seunggeun Cho from KAIST participated in this study. The research results were presented as a 'Spotlight' (top 3.2% of papers, with a 24.52% acceptance rate) at the NeurIPS (Neural Information Processing Systems) conference, the world's most prestigious academic conference in the field of AI, held in San Diego from December 2nd to 7th.
Paper Title: SpecEdge: Scalable Edge-Assisted Serving Framework for Interactive LLMs
Paper Links: NeurIPS Link, arXiv Link
This research was supported by the Institute of Information & Communications Technology Planning & Evaluation (IITP) under the project 'Development of 6G System Technology to Support AI-Native Application Services.'
KAIST Researchers First in the World to Identify Security Threat Exploiting Google Gemini’s "Malicious Expert AI" Structure
<Photo 1. (From left) Ph.D. candidates Mingyoo Song and Jaehan Kim, Professor Sooel Son, (Top right) Professor Seungwon Shin, Lead Researcher Seung Ho Na>
Most major commercial Large Language Models (LLMs), such as Google’s Gemini, utilize a Mixture-of-Experts (MoE) structure. This architecture enhances efficiency by dynamically selecting and using multiple “small AI models (Expert AIs)” depending on the input query. However, a KAIST research team has revealed for the first time in the world that this very structure can become a new security threat.
A joint research team led by Professor Seungwon Shin (School of Electrical Engineering) and Professor Sooel Son (School of Computing) announced on December 26th that they have identified an attack technique that can seriously compromise the safety of LLMs by exploiting the MoE structure. For this research, they received the Distinguished Paper Award at ACSAC 2025, one of the most prestigious international conferences in the field of information security.
ACSAC (Annual Computer Security Applications Conference) is among the most influential international academic conferences in security. This year, only two papers out of all submissions were selected as Distinguished Papers. It is highly unusual for a domestic Korean research team to achieve such a feat in the field of AI security.
In this study, the team systematically analyzed the fundamental security vulnerabilities of the MoE structure. In particular, they demonstrated that even if an attacker does not have direct access to the internal structure of a commercial LLM, the entire model can be induced to generate dangerous responses if just one maliciously manipulated "Expert Model" is distributed through open-source channels and integrated into the system.
<Figure 1. Conceptual diagram of the attack technology proposed by the research team.>
To put it simply: even if there is only one "malicious expert" mixed among normal AI experts, that specific expert may be repeatedly selected for processing harmful queries, causing the overall safety of the AI to collapse. A particularly dangerous factor highlighted was that this process causes almost no degradation in model performance, making the problem extremely difficult to detect in advance.
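The toy example below illustrates the routing mechanics behind this failure mode. It is purely conceptual, not the paper's attack: a real MoE layer routes per token over learned gate logits inside a transformer, but the same effect, a gate biased so that a poisoned expert captures a target class of inputs while benign routing looks normal, can be shown in a few lines of numpy.

```python
# Purely conceptual numpy illustration (not the paper's attack code): a gate
# biased toward a poisoned expert captures 'harmful-style' inputs while benign
# inputs keep routing normally.
import numpy as np

d, n_experts, POISONED = 8, 4, 2
rng = np.random.default_rng(0)
gate_W = 0.1 * rng.normal(size=(d, n_experts))  # router (gating) weights

# The attacker's expert arrives with its gate affinity boosted along a feature
# direction that fires on harmful queries (axis 0 here, purely illustrative):
gate_W[:, POISONED] = 0.0
gate_W[0, POISONED] = 5.0

def gate_probs(x):
    logits = x @ gate_W
    e = np.exp(logits - logits.max())
    return e / e.sum()

benign  = np.array([0.0, 1, 0, 0, 1, 0, 0, 1])  # no 'trigger' component
harmful = np.array([1.0, 0, 0, 0, 1, 0, 0, 1])  # strong 'trigger' component

print("P(poisoned expert | benign)  =", gate_probs(benign)[POISONED].round(3))   # ~chance level
print("P(poisoned expert | harmful) =", gate_probs(harmful)[POISONED].round(3))  # ~1.0
```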
Experimental results showed that the attack technique proposed by the research team could increase the harmful response rate from 0% to up to 80%. They confirmed that the safety of the entire model significantly deteriorates even if only one out of many experts is "infected."
This research is highly significant as it is the first to identify a security threat of this kind in the rapidly expanding global open-source LLM development ecosystem. At the same time, it suggests that verifying the source and safety of individual expert models, not just their performance, is now essential in the AI model development process.
Professors Seungwon Shin and Sooel Son stated, "Through this study, we have empirically confirmed that the MoE structure, which is spreading rapidly for the sake of efficiency, can become a new security threat. This award is a meaningful achievement that recognizes the importance of AI security on an international level."
The study involved Ph.D. candidates Jaehan Kim and Mingyoo Song, Dr. Seung Ho Na (currently at Samsung Electronics), Professor Seungwon Shin, and Professor Sooel Son. The results were presented at ACSAC in Hawaii, USA, on December 12, 2025.
<Figure 2. Photo of the Distinguished Paper Award certificate>
Paper Title: MoEvil: Poisoning Experts to Compromise the Safety of Mixture-of-Experts LLMs
Paper File: https://jaehanwork.github.io/files/moevil.pdf
GitHub (Open Source): https://github.com/jaehanwork/MoEvil
This research was supported by the Korea Internet & Security Agency (KISA) and the Institute of Information & Communications Technology Planning & Evaluation (IITP) under the Ministry of Science and ICT.
AI Gets a Private Tutor, Learning Human Preferences More Accurately
< Professor Junmo Kim and Ph.D. candidate Minchan Kwon, School of Electrical Engineering >
No matter how much data they learn, why do Artificial Intelligence (AI) models often miss the mark on human intent? Conventional "comparison learning," designed to help AI understand human preferences, has frequently led to confusion rather than clarity. A KAIST research team has now presented a new learning solution that allows AI to accurately learn human preferences even with limited data by assigning it a "private tutor."
On December 17th, a research team led by Professor Junmo Kim of the KAIST School of Electrical Engineering announced the development of "TVKD" (Teacher Value-based Knowledge Distillation), a reinforcement learning framework that significantly improves data efficiency and learning stability while effectively reflecting human preferences.
Existing AI training methods typically rely on collecting massive amounts of "preference comparison" data—simple structures like "A is better than B." However, this approach requires vast datasets and often causes the AI to become confused in ambiguous situations where the distinction is unclear.
To solve this problem, the research team proposed a method in which a ‘Teacher model’ that has first deeply understood human preferences delivers only the core information to a ‘Student model.’ This can be compared to a private tutor who organizes and teaches complex content, and the research team named this ‘Preference Distillation.’
The biggest feature of this technology is that instead of simply imitating ‘good or bad,’ it is designed so that the teacher model learns a ‘Value Function’ that numerically judges how valuable each situation is, and then delivers this to the student model. Through this, the AI can learn by making comprehensive judgments about ‘why this choice is better’ rather than fragmentary comparisons, even in ambiguous situations.
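One textbook way to turn a teacher's value function into a dense per-token training signal is potential-based reward shaping, sketched below. Treat this as an illustrative assumption rather than the paper's exact TVKD formulation: the shaping rule r'_t = r_t + γ·V_T(s_{t+1}) − V_T(s_t) preserves the optimal policy while spreading a sparse end-of-response preference reward across individual tokens.

```python
# Toy sketch of value-based preference distillation via potential-based reward
# shaping (one standard construction; the paper's exact TVKD formulation may
# differ, so treat names and shapes here as assumptions).
import numpy as np

gamma = 1.0

def shaped_rewards(base_rewards, teacher_values):
    """base_rewards[t]: sparse preference reward at step t (often 0 until the end).
    teacher_values[t]: teacher's value V_T(s_t) for each prefix of the response.
    Shaping r'_t = r_t + gamma * V_T(s_{t+1}) - V_T(s_t) keeps the optimal
    policy unchanged while densifying the learning signal per token."""
    V = np.asarray(teacher_values, dtype=float)
    r = np.asarray(base_rewards, dtype=float)
    return r + gamma * np.append(V[1:], 0.0) - V

# Hypothetical 5-token response: preference reward only at the final token,
# but the teacher already 'knows' mid-response which prefixes look promising.
base = [0, 0, 0, 0, 1.0]
V_T  = [0.1, 0.3, 0.2, 0.6, 0.9]
print(shaped_rewards(base, V_T))   # dense per-token signal for the student
```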
< Conceptual diagram of TVKD: After teaching the human preference dataset to the teacher model, learning proceeds by delivering the teacher's information and the dataset to the student model >
The core of this technology is twofold. First, by reflecting value judgments that consider the entire context into the student model, learning that understands the overall flow rather than fragmentary answers has become possible. Second, a technique was introduced to adjust learning importance according to the reliability of preference data. Clear data is significantly reflected in learning, while the influence of ambiguous or noisy data is reduced, allowing the AI to learn stably even in realistic environments.
When the research team applied this technology to various AI models, it showed more accurate and stable performance than methods previously known to perform best. In particular, it consistently outperformed existing state-of-the-art techniques on major benchmarks such as MT-Bench and AlpacaEval.
Professor Junmo Kim said, “In reality, human preference data is not always sufficient or perfect,” and added, “This technology will allow AI to learn consistently even under such constraints, so it will be highly practical in various fields.”
< Performance comparison results for each task of MT-Bench. It can be confirmed that the proposed TVKD framework records generally higher scores than existing methods. >
< Visualization results of the Shaping term. The top tokens (converted into words) judged as important by the teacher model within the response are displayed in red, intuitively showing which tokens have a greater influence during the value-based alignment process. >
Ph.D. candidate Minchan Kwon from the KAIST School of Electrical Engineering participated as the first author, and the research results were accepted at ‘NeurIPS 2025’, the most prestigious international conference in the field of artificial intelligence. The research was presented at a poster session on December 3, 2025 (US Pacific Time).
※ Paper Title: Preference Distillation via Value based Reinforcement Learning, DOI: https://doi.org/10.48550/arXiv.2509.16965
Meanwhile, this research was carried out with support from the Institute of Information & Communications Technology Planning & Evaluation (IITP) funded by the government (Ministry of Science and ICT) in 2024 (No. RS-2024-00439020, Development of Sustainable Real-time Multimodal Interactive Generative AI, SW Star Lab).
KAIST Confirms Reduction of Amyloid-β Using Red OLED… Restores Memory in Alzheimer’s Model
<Professor Kyung Cheol Choi, Dr. Byeongju Noh, Ph.D. candidate Young-Hun Jung, Ph.D. candidate Minwoo Park, Dr. Ja Wook Koo, Researcher Jiyun Lee, Researcher Ji-Eun Lee, Dr. Hyang Sook Hoe, Dr. Hyun-Ju Lee, Dr. Sora Kang, Researcher Seokjun Oh>
A Korean research team, raising the question “Which OLED light color can actually improve memory and pathological markers in Alzheimer’s patients?”, has identified the most effective OLED color capable of enhancing cognitive function using only light—with no drugs involved. The OLED platform developed for this study can precisely control color, brightness, flicker frequency, and exposure duration, suggesting potential future development into personalized OLED-based electroceuticals.
On the 24th, KAIST (President Kwang Hyung Lee) announced that a joint research team led by Professor Kyung Cheol Choi from the School of Electrical Engineering at KAIST and Dr. Ja Wook Koo and Dr. Hyang Sook Hoe from the Korea Brain Research Institute (KBRI) developed a uniform-illuminance, three-color OLED photostimulation technology and confirmed that “red 40-Hz light” was the most effective among blue, green, and red in improving Alzheimer's pathology and memory function.
To overcome the structural limitations of conventional LEDs—such as brightness imbalance, heat generation risk, and variability caused by animal movement—the researchers developed an OLED-based photostimulation platform that emits light uniformly. Using this platform, they compared white, red, green, and blue light under identical conditions (40-Hz frequency, brightness, and exposure time) and found that red 40-Hz light produced the most significant improvement.
In an early-stage (3-month-old) Alzheimer’s animal model, improvement in pathology and memory was observed after only two days of stimulation. When early Alzheimer’s model mice were exposed to one hour of light per day for two days, both white and red light improved long-term memory. Additionally, the amount of amyloid-β (Aβ) plaques—protein aggregates known as a major factor in Alzheimer’s disease—was reduced in key brain regions such as the hippocampus, and levels of the plaque-clearing enzyme ADAM17 increased.
This indicates that even very short periods of light stimulation can reduce harmful proteins in the brain and improve memory function. In particular, with red light, the inflammatory cytokine IL-1β, known to exacerbate inflammation and contribute to Alzheimer’s progression, decreased significantly, demonstrating an anti-inflammatory effect.
Moreover, the more plaque was reduced, the greater the improvement in memory—direct evidence that pathological improvement leads to cognitive enhancement.
In the mid-stage (6-month-old) Alzheimer’s model, statistically significant pathological improvement was seen only with red light. In a two-week long-term stimulation experiment under the same conditions, both white and red light improved memory, but a statistically meaningful reduction in plaques appeared only under red light.
< The mechanism by which red OLED stimulation of neurons reduces amyloid-β in Alzheimer’s model mice >
Differences at the molecular level were also clear. Under red light, levels of ADAM17 (which helps remove plaques) increased, while levels of BACE1, an enzyme responsible for producing plaques, decreased—demonstrating a dual effect of both inhibiting plaque formation and promoting plaque removal. In contrast, white light only lowered BACE1, showing more limited therapeutic effects compared to red light.
This scientifically establishes that the color of light is a key factor determining therapeutic efficacy.
To determine which neural circuits were activated by light stimulation, the team analyzed the expression of c-Fos, an immediate-early gene that is activated when neurons fire.
They found activation throughout the visual–memory circuit, extending from the visual cortex → thalamus → hippocampus, providing direct neurological evidence that light stimulation awakens the visual pathway, enhancing hippocampal function and memory.
Thanks to the uniform-illuminance OLED platform, light was evenly delivered regardless of animal movement, ensuring stable experimental results and high reproducibility across repeated tests.
This study is the first to demonstrate that cognitive function can be improved using only light, without drugs, and that Alzheimer’s pathological markers can be regulated through combinations of light color, frequency, and duration.
The OLED platform developed in this study allows fine control over color, brightness, flicker ratio, and exposure time, making it suitable for personalized stimulation design in future human clinical research.
The research team plans to expand conditions such as stimulation intensity, energy, duration, and combined visual–auditory stimulation, aiming toward clinical-stage development.
Dr. Byeongju Noh (from Professor Kyung Cheol Choi’s research team) said, “This study experimentally demonstrates the importance of color standardization and confirms that red OLED is the key color that activates ADAM17 and suppresses BACE1 across disease stages.”
Professor Kyung Cheol Choi emphasized, “Our uniform-illuminance OLED platform overcomes the structural limitations of traditional LEDs and enables high reproducibility and safe evaluation. We expect wearable RED OLED electroceuticals for everyday use to present a new therapeutic paradigm for Alzheimer’s disease.”
The research findings were published online on October 25 in ACS Biomaterials Science & Engineering, a leading international journal in biomedical and materials science.
Paper Title: Color Dependence of OLED Phototherapy for Cognitive Function and Beta-Amyloid Reduction through ADAM17 and BACE1
DOI: https://pubs.acs.org/doi/full/10.1021/acsbiomaterials.5c01162
Co-authors: Byeongju Noh, Hyun-Ju Lee, Jiyun Lee, Ji-Eun Lee, Bitna Joo, Young-Hun Jung, Minwoo Park, Sora Kang, Seokjun Oh, Jeong-Woo Hwang, Dae-Si Kang, Yongmin Jeon, So-Min Lee, Hyang Sook Hoe, Ja Wook Koo, Kyung Cheol Choi
This research was supported by the National Research Foundation of Korea and the National IT Industry Promotion Agency under the Ministry of Science and ICT, and the Korea Brain Research Institute Basic Research Program. (2017R1A5A1014708, 2022M3E5E9018226, H0501-25-1001, 25-BR-02-02, 25-BR-02-04)
KAIST Professor and Alumni Who Won AIxCC Donate 150 Million KRW of Prize Money to Their Alma Mater
<(From Left) Professor Insu Yun from KAIST School of Electrical Engineering, Researcher HyungSeok Han from Samsung Research America>
KAIST (President Kwang Hyung Lee) announced on the 23rd of November that HyungSeok Han (Ph.D. alumnus from the School of Computing) and Insu Yun (B.S. alumnus, currently Associate Professor in the School of Electrical Engineering) donated 150 million KRW from the prize money won by Team Atlanta, which took first place in the world’s largest AI security competition, the “AI Cyber Challenge (AIxCC),” organized by the U.S. Defense Advanced Research Projects Agency (DARPA).
The AIxCC final round was held this August in Las Vegas, where Team Atlanta—a joint team consisting of researchers from Samsung Research, KAIST, POSTECH, and Georgia Tech—secured the top prize. AIxCC is the world’s largest AI security competition, with a total prize pool of 29.5 million USD (approx. 41 billion KRW). Over the past two years, security companies and research teams worldwide have competed with AI-based security technologies, showcasing state-of-the-art capabilities.
A total of 91 teams registered for the competition, 31 teams participated in the qualifiers, and 7 teams advanced to the finals. Team Atlanta won the first-place prize of 4 million USD (approx. 5.8 billion KRW), securing victory with an overwhelming margin comparable to the combined scores of the second- and third-place teams. The team also swept major titles such as “Most Vulnerabilities Identified” and “Highest Scoring Team,” demonstrating exceptional technical superiority.
HyungSeok Han earned his B.S. (2017) and Ph.D. (2023) from the KAIST School of Computing, then worked as a postdoctoral researcher at Georgia Tech before joining Samsung Research America where he currently works. In the competition, he served as the team leader for the development of the automatic vulnerability detection system and oversaw system integration and infrastructure, making major contributions.
Insu Yun received his B.S. (2015) from the KAIST School of Computing and his Ph.D. (2020) from Georgia Tech. Since 2021, he has been a faculty member in the KAIST School of Electrical Engineering. In this competition, he led the patch development team and played a central role in enhancing overall system completeness.
The two researchers decided to donate 150 million KRW of their prize money to the School of Computing and the School of Electrical Engineering. The School of Computing will use the donation as a scholarship fund, while the School of Electrical Engineering will apply it toward student education and research support, in line with the spirit of the donation.
Alumnus HyungSeok Han remarked, “Building a system in which AI autonomously discovers vulnerabilities and even generates patches has long been a dream of mine and an important milestone in the security field. I’m grateful to have achieved meaningful results together with KAIST alumni, and I hope KAIST will continue to exert a positive influence on global technological advancement.”
<Final Scoreboard>
Professor Insu Yun stated, “I’m truly grateful to every member of Team Atlanta. In particular, I want to thank Professor Taesoo Kim, our overall team leader and advisor, the students in our lab who worked tirelessly, and Dr. HyungSeok Han, who joined me in making this meaningful contribution.”
KAIST President Kwang Hyung Lee commented, “I deeply thank our alumni for achieving outstanding results on the world stage of technological competition and for generously giving back to their alma mater. This achievement demonstrates KAIST’s educational and research excellence and stands as meaningful evidence of the global competitiveness of Korea’s AI and security technologies. KAIST will continue to lead advanced AI and security innovation and do its utmost to nurture creative talent who will contribute to humanity and society.”
The KAIST Development Foundation operates the Team KAIST campaign (https://giving.kaist.ac.kr/ko/sub01/sub0103_1.php) to encourage further alumni contributions.
KAIST Develops a Material That Makes Summer Cooler and Winter Warmer Without Power
<(Front row, from left) Professor Young Min Song, Ph.D. candidate Hyung Rae Kim, M.S. candidate Hyunkyu Kwak, (Back row, from left) Ph.D. candidate Hyo Eun Jeong, Dr. Sehui Chang, Ph.D. candidate Do Hyeon Kim, (Circled, from left) Professor Dae-Hyeong Kim, Dr. Yoonsoo Shin, Dr. Se-Yeon Heo>
The poplar (Populus alba) has a unique survival strategy: when exposed to hot and dry conditions, it curls its leaves to expose the ventral surface, reflecting sunlight, and at night, the moisture condensed on the leaf surface releases latent heat to prevent frost damage. Plants have evolved such intricate mechanisms in response to dynamic environmental fluctuations in diurnal and seasonal temperature cycles, light intensity, and humidity, but there have been few instances of realizing such a sophisticated thermal management system with artificial materials. Through this research, the KAIST research team has developed an artificial material that mimics the thermal management strategy of the poplar leaf, significantly increasing the applicability of power-free, self-regulating thermal management technology in applications such as building facades, roofs, and temporary shelters.
KAIST announced on November 18 that the research team led by Professor Young Min Song of the School of Electrical Engineering, in collaboration with Professor Dae-Hyeong Kim’s team at Seoul National University, has developed a flexible hydrogel-based ‘Latent-Radiative Thermostat (LRT)’ that mimics the natural heat regulation strategy of the poplar leaf.
The LRT developed by the research team is a bio-inspired thermal regulator that autonomously switches between cooling and heating modes. This technology is a new thermal management technique that can simultaneously realize latent heat regulation through the evaporation and condensation of water, and radiative heat regulation using light reflection and transmission, all within a single device.
The primary functional material is a composite that integrates lithium ions (Li+) and hydroxypropyl cellulose (HPC) within a polyacrylamide (PAAm) hydrogel. Li+ maintains warmth by condensing and absorbing moisture to regulate latent heat, and HPC changes between transparent and opaque states according to temperature changes, regulating the reflection and absorption of sunlight to switch between cooling and heating modes.
When the temperature rises, HPC molecules aggregate, causing the hydrogel to become opaque, which reflects sunlight and strengthens the natural cooling effect. The resulting LRT automatically switches among four thermal management modes based on the surrounding temperature, humidity, and sunlight.
<Figure 1. Schematic of a hydrogel-based self-regulating temperature controller inspired by the thermal management strategy of poplar leaves.>
▶ In night/cold environments below the dew point temperature, it maintains warmth by absorbing and condensing moisture in the air and releasing heat.
▶ On cold days with weak sunlight, it transmits sunlight and the absorbed moisture absorbs near-infrared radiation to produce a heating effect.
▶ In hot and dry conditions, internal moisture evaporates, resulting in powerful evaporative cooling.
▶ Under strong sunlight and high-temperature conditions, the HPC becomes opaque to reflect sunlight, and simultaneously, evaporative cooling operates to lower the temperature.
That is, it is a bioinspired thermal management device that autonomously switches between cooling and heating modes according to the surrounding environment without requiring power.
Through this research, the LRT has demonstrated the performance to stay cooler in the summer and warmer in the winter. The research team confirmed that the thermal regulation properties can be finely tuned to various climate conditions by adjusting the concentrations of Li+ and HPC, and the durability and mechanical strength of the material were significantly improved by adding TiO2 nanoparticles.
In outdoor experiments, the LRT maintained temperatures up to 3.7 °C lower in the summer and up to 3.5 °C higher in the winter compared to conventional cooling materials. Furthermore, a simulation covering 7 climate zones (ASHRAE standards) showed an annual energy saving of up to 153 MJ/m² compared to existing roof coatings.
This study is a case of the engineering implementation of the sophisticated thermal management strategies observed in nature. It is anticipated to serve as a next-generation thermal management platform for environments where power-based cooling and heating are difficult, such as building facades, roofs, and temporary shelters.
<Figure 2. Outdoor temperature measurement results and simulated energy savings.>
In a statement, Professor Young Min Song said, “This research is significant as it technically reproduced nature's intelligent thermal regulation strategy, presenting a thermal management device that self-adapts to seasonal and climate changes. It can be expanded into an intelligent thermal management platform applicable to various environments in the future.”
This study was co-first authored by Ph.D. candidate Hyung Rae Kim (School of Electrical Engineering, KAIST), with Professor Young Min Song (School of Electrical Engineering, KAIST) as a corresponding author. The research was published online on November 4th in Advanced Materials (IF 26.8), a world-leading journal in the field of materials science.
※ Paper Title: Hydrogel Thermostat Inspired by Photoprotective Foliage Using Latent and Radiative Heat Control, DOI: https://doi.org/10.1002/adma.202516537
This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (RS-2025-16063568, RS-2025-16902996, RS-2023-NR077254, RS-2022-NR068140), by the InnoCORE program of the Ministry of Science and ICT (GIST InnoCORE KH0830), and by the Technology Innovation Program (Industrial Strategic Technology Development Program, Bio-industry Technology Development Project) (RS-2024-00467230, Development of a Digital Healthcare Device for Non-invasive Continuous Monitoring of Myocardial Infarction Biomarkers Based on Mid-Infrared Nano-Optical Filters) funded by the Ministry of Trade, Industry & Energy (MOTIE, Korea).
KAIST Develops Wearable Ultrasound Sensor Enabling Noninvasive Treatment Without Surgery
<(From Left) Professor Hyunjoo Jenny Lee, Dr. Sang-Mok Lee, Ph.D. candidate Xiaojia Liang>
Conventional wearable ultrasound sensors have been limited by low power output and poor structural stability, making them unsuitable for high-resolution imaging or therapeutic applications. A KAIST research team has now overcome these challenges by developing a flexible ultrasound sensor with statically adjustable curvature. This breakthrough opens new possibilities for wearable medical devices that can capture precise, body-conforming images and perform noninvasive treatments using ultrasound energy.
KAIST (President Kwang Hyung Lee) announced on November 12 that a research team led by Professor Hyunjoo Jenny Lee from the School of Electrical Engineering developed a “flex-to-rigid (FTR)” capacitive micromachined ultrasonic transducer (CMUT), fabricated with a semiconductor wafer (MEMS) process, that can transition freely between flexibility and rigidity.
The team incorporated a low-melting-point alloy (LMPA) inside the device. When an electric current is applied, the metal melts, allowing the structure to deform freely; upon cooling, it solidifies again, fixing the sensor into the desired curved shape.
Conventional polymer-membrane-based CMUTs have suffered from a low elastic modulus, resulting in insufficient acoustic power and blurred focal points during vibration. They have also lacked curvature control, limiting precise focusing on target regions.
Professor Lee’s team designed an FTR structure that combines a rigid silicon substrate with a flexible elastomer bridge, achieving both high output performance and mechanical flexibility. The embedded LMPA enables dynamic adjustment and fixation of the transducer’s shape by toggling between solid and liquid states through electrical control.
As a result, the new sensor can automatically focus ultrasound on a specific region according to its curvature—without requiring separate beamforming electronics—and maintains stable electrical and acoustic performance even after repeated bending.
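The geometry behind this self-focusing is simple: elements bent onto an arc of radius R are all equidistant from the arc's center of curvature, so their emissions arrive there in phase with no electronic delays. The snippet below verifies this with illustrative numbers (the 20 mm radius is hypothetical, not a device specification).

```python
# Why curvature alone focuses the beam: elements bent onto a circular arc of
# radius R are equidistant from the arc's center, so their waves arrive there
# in phase with zero electronic delay. (Illustrative geometry, not device data.)
import numpy as np

R = 0.02                                       # hypothetical 20 mm radius of curvature
thetas = np.deg2rad(np.linspace(-15, 15, 9))   # element positions along the arc
elements = np.c_[R * np.sin(thetas), R - R * np.cos(thetas)]  # arc with apex at origin
focus = np.array([0.0, R])                     # center of curvature = geometric focus

dists = np.linalg.norm(elements - focus, axis=1)
print(np.allclose(dists, R))                   # True: equal paths -> in-phase arrival
```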
The device’s acoustic output reaches the level of low-intensity focused ultrasound (LIFU), which can gently stimulate tissues to induce therapeutic effects without causing damage. Experiments on animal models demonstrated that noninvasive spleen stimulation reduced inflammation and improved mobility in arthritis models.
In the future, the team plans to extend this technology to a two-dimensional (2D) array structure—arranging multiple sensors in a grid—to enable simultaneous high-resolution ultrasound imaging and therapeutic applications, paving the way for a new generation of smart medical systems.
Because the technology is compatible with semiconductor fabrication processes, it can be mass-produced and adapted for wearable and home-use ultrasound systems.
This study was conducted by Sang-Mok Lee, Xiaojia Liang (co–first authors), and their collaborators under the supervision of Professor Hyunjoo Jenny Lee. The results were published online on October 23 in npj Flexible Electronics (Impact Factor: 15.5).
Paper title: “Flexible ultrasound transducer array with statically adjustable curvature for anti-inflammatory treatment”
DOI: 10.1038/s41528-025-00484-7
The research was supported by the Bio & Medical Technology Development Program (Brain Science Convergence Research Program) of the Ministry of Science and ICT (MSIT) and the Korea Medical Device Development Fund, a multi-ministerial R&D initiative.
KAIST Researchers Uncover Critical Security Flaws in Global Mobile Networks
Breakthrough Discovery Reveals How Attackers Can Remotely Manipulate User Data Without Physical Proximity
DAEJEON, South Korea — In an era when recent cyberattacks on major telecommunications providers have highlighted the fragility of mobile security, researchers at the Korea Advanced Institute of Science and Technology have identified a class of previously unknown vulnerabilities that could allow remote attackers to compromise cellular networks serving billions of users worldwide.
The research team, led by Professor Yongdae Kim of KAIST's School of Electrical Engineering, discovered that unauthorized attackers could remotely manipulate internal user information in LTE core networks — the central infrastructure that manages authentication, internet connectivity, and data transmission for mobile devices and IoT equipment.
The findings, presented at the 32nd ACM Conference on Computer and Communications Security in Taipei, Taiwan, earned the team a Distinguished Paper Award, one of only 30 such honors selected from approximately 2,400 submissions to one of the field's most prestigious venues.
A New Class of Vulnerability
The vulnerability class, which the researchers termed "Context Integrity Violation" (CIV), represents a fundamental breach of a basic security principle: unauthenticated messages should not alter internal system states. While previous security research has primarily focused on "downlink" attacks — where networks compromise devices — this study examined the less-scrutinized "uplink" security, where devices can attack core networks.
"The problem stems from gaps in the 3GPP standards," Professor Kim explained, referring to the international body that establishes operational rules for mobile networks. "While the standards prohibit processing messages that fail authentication, they lack clear guidance on handling messages that bypass authentication procedures entirely."
The team developed CITesting, the world's first systematic tool for detecting these vulnerabilities, capable of examining between 2,802 and 4,626 test cases — a vast expansion from the 31 cases covered by the only previous comparable research tool, LTEFuzz.
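Conceptually, the invariant being tested is easy to state: a message that arrives before authentication completes must not mutate the core network's stored context for that device. The sketch below checks that invariant against a deliberately buggy toy core; it illustrates the idea only, with hypothetical names and fields, and is not the CITesting tool.

```python
# Conceptual illustration of the context-integrity invariant (NOT the CITesting
# tool). Names, message types, and context fields are hypothetical.
import copy

class ToyCore:
    """Hypothetical stand-in for an LTE core's per-UE context store."""
    def __init__(self):
        self._ctx = {"ue1": {"tmsi": "abc", "security_ctx": "valid",
                             "attach_state": "REGISTERED"}}
    def context(self, ue_id):
        return self._ctx[ue_id]
    def handle_uplink(self, ue_id, msg, authenticated):
        # Buggy behavior mimicking a CIV: a pre-auth 'detach' still clears state.
        if msg == "detach_request" and not authenticated:
            self._ctx[ue_id]["attach_state"] = "DEREGISTERED"

def civ_check(core, ue_id, msg):
    """Snapshot the UE context, inject one unauthenticated uplink message,
    and report any state mutation as a CIV candidate."""
    before = copy.deepcopy(core.context(ue_id))
    core.handle_uplink(ue_id, msg, authenticated=False)
    after = core.context(ue_id)
    return {k: (before[k], v) for k, v in after.items() if before[k] != v}

core = ToyCore()
print(civ_check(core, "ue1", "detach_request"))
# {'attach_state': ('REGISTERED', 'DEREGISTERED')} -> a CIV candidate; a real
# campaign sweeps message types x field mutations x UE states, which is how
# the thousands of test cases cited above arise.
```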
Widespread Impact Confirmed
Testing four major LTE core network implementations — both open-source and commercial systems — revealed that all contained CIV vulnerabilities. The results showed:
Open5GS: 2,354 detections, 29 unique vulnerabilities
srsRAN: 2,604 detections, 22 unique vulnerabilities
Amarisoft: 672 detections, 16 unique vulnerabilities
Nokia: 2,523 detections, 59 unique vulnerabilities
The research team demonstrated three critical attack scenarios: denial of service by corrupting network information to block reconnection; IMSI exposure by forcing devices to retransmit user identification numbers in plaintext; and location tracking by capturing signals during reconnection attempts.
Unlike traditional attacks requiring fake base stations or signal interference near victims, these attacks work remotely through legitimate base stations, affecting anyone within the same MME (Mobility Management Entity) coverage area as the attacker — potentially spanning entire metropolitan regions.
Industry Response and Future Implications
Following responsible disclosure protocols, the research team notified affected vendors. Amarisoft deployed patches, and Open5GS integrated the team's fixes into its official repository. Nokia, however, stated it would not issue patches, asserting compliance with 3GPP standards and declining to comment on whether telecommunications companies currently use the affected equipment.
"Uplink security has been relatively neglected due to testing difficulties, implementation diversity, and regulatory constraints," Professor Kim noted. "Context integrity violations can pose serious security risks."
The research team, which included KAIST doctoral students Mincheol Son and Kwangmin Kim as co-first authors, along with Beomseok Oh and Professor CheolJun Park of Kyung Hee University, plans to extend their validation to 5G and private 5G environments. The tools could prove particularly critical for industrial and infrastructure networks, where breaches could have consequences ranging from communication disruption to exposure of sensitive military or corporate data.
The research was supported by the Ministry of Science and ICT through the Institute for Information & Communications Technology Planning & Evaluation, as part of a project developing security technologies for 5G private networks.
With mobile networks forming the backbone of modern digital infrastructure, the discovery underscores the ongoing challenge of securing systems designed in an era when such sophisticated attacks were barely conceivable — and the urgent need for updated standards to address them.
KAIST Develops Multimodal AI That Understands Text and Images Like Humans
<(From Left) M.S. candidate Soyoung Choi, Ph.D. candidate Seong-Hyeon Hwang, Professor Steven Euijong Whang>
Just as human eyes tend to focus on pictures before reading accompanying text, multimodal artificial intelligence (AI), which processes multiple types of sensory data at once, also tends to depend more heavily on certain types of data. KAIST researchers have now developed a new multimodal AI training technology that enables models to attend to text and images evenly and thus make far more accurate predictions.
KAIST (President Kwang Hyung Lee) announced on the 14th that a research team led by Professor Steven Euijong Whang from the School of Electrical Engineering has developed a novel data augmentation method that enables multimodal AI systems—those that must process multiple data types simultaneously—to make balanced use of all input data.
Multimodal AI combines various forms of information, such as text and video, to make judgments. However, AI models often show a tendency to rely excessively on one particular type of data, resulting in degraded prediction performance.
To solve this problem, the research team deliberately trained AI models using mismatched or incongruent data pairs. By doing so, the model learned to rely on all modalities—text, images, and even audio—in a balanced way, regardless of context.
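A minimal sketch of this idea appears below: with some probability, a sample's text is swapped for another sample's, so neither modality alone remains a reliable shortcut. The function name and the flat swap probability are illustrative assumptions; the paper's actual MIDAS strategy also governs how targets and loss weights are assigned to the mismatched pairs, which is where the quality compensation and hard-example emphasis described next come in.

```python
# Minimal illustration of misalignment-based augmentation (the paper's actual
# MIDAS recipe is more involved; names and the flat probability p are assumed).
import random

def misalign_batch(images, texts, labels, p=0.3):
    """With probability p, pair a sample's image with another sample's text so
    the modalities disagree; the model cannot lean on one modality alone."""
    batch, n = [], len(images)
    for i in range(n):
        if n > 1 and random.random() < p:
            j = random.randrange(n)
            # Mismatched pair: how its target and loss weight are set is exactly
            # where the paper's strategy does the real work.
            batch.append({"image": images[i], "text": texts[j],
                          "label": labels[i], "misaligned": True})
        else:
            batch.append({"image": images[i], "text": texts[i],
                          "label": labels[i], "misaligned": False})
    return batch

# Toy usage with placeholder data:
imgs, txts, ys = ["img_a", "img_b", "img_c"], ["txt_a", "txt_b", "txt_c"], [0, 1, 2]
for sample in misalign_batch(imgs, txts, ys, p=0.5):
    print(sample)
```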
The team further improved performance stability by incorporating a training strategy that compensates for low-quality data while emphasizing more challenging examples. The method is not tied to any specific model architecture and can be easily applied to various data types, making it highly scalable and practical.
<Model Prediction Changes with a Data-Centric Multimodal AI Training Framework>
Professor Steven Euijong Whang explained, “Improving AI performance is not just about changing model architectures or algorithms—it’s much more important how we design and use the data for training.” He continued, “This research demonstrates that designing and refining the data itself can be an effective approach to help multimodal AI utilize information more evenly, without becoming biased toward a specific modality such as images or text.”
The study was co-led by doctoral student Seong-Hyeon Hwang and master’s student Soyoung Choi, with Professor Steven Euijong Whang serving as the corresponding author. The results will be presented at NeurIPS 2025 (Conference on Neural Information Processing Systems), the world’s premier conference in the field of AI, which will be held this December in San Diego, USA, and Mexico City, Mexico.
※ Paper title: “MIDAS: Misalignment-based Data Augmentation Strategy for Imbalanced Multimodal Learning,” Original paper: https://arxiv.org/pdf/2509.25831
The research was supported by the Institute for Information & Communications Technology Planning & Evaluation (IITP) under the projects “Robust, Fair, and Scalable Data-Centric Continual Learning” (RS-2022-II220157) and “AI Technology for Non-Invasive Near-Infrared-Based Diagnosis and Treatment of Brain Disorders” (RS-2024-00444862).