AI Enters the "Experienced Hire" Era... Passing On Learned Knowledge with Ease
< (From left) KAIST Professor Hyunwoo J. Kim, Postdoctoral Researcher Sanghyeok Lee, M.S. candidate Taehoon Song, Korea University Ph.D. candidate Jihwan Park >
How inconvenient would it be if you had to manually transfer every contact and photo from scratch every time you switched to a new smartphone? Current Artificial Intelligence (AI) models face a similar predicament. Whenever a superior new AI model—such as a new version of ChatGPT—emerges, it has to be retrained with massive amounts of data and at a high cost to acquire specialized knowledge in specific fields. A Korean research team has developed a "knowledge transplantation" technology between AI models that can resolve this inefficiency.
KAIST announced on January 27th that a research team led by Professor Hyunwoo J. Kim from the School of Computing, in collaboration with a research team from Korea University, has developed a new technology capable of effectively "transplanting" learned knowledge between different AI models.
Recently, Vision-Language Models (VLMs), which understand images and text simultaneously, have been evolving rapidly. They can be thought of as multimodal AIs, like ChatGPT, that can explain a photo a user shows them and answer questions about it. Because they are pre-trained on large-scale image and language data, these models can adapt relatively quickly to new fields using only small amounts of data.
However, the need to repeat this "adaptation process" from scratch every time a new AI model is released has been pointed out as a major inefficiency. Existing adaptation techniques also faced limitations: they were difficult to use if the model structure changed even slightly, or they significantly increased memory and computational costs because multiple models had to be used simultaneously.
To solve these problems, the research team proposed "TransMiter," a transferable adaptation technique that allows learned knowledge to be reused regardless of the model's structure or size. The core of this technology is directly transferring the "adaptation experience" accumulated by one AI as it learns to another AI model.
< TransMiter: A transferable adaptation technique reusable regardless of model structure, size, etc. >
The researchers' technology does not overhaul the AI's complex internal structure; instead, it observes only the prediction results (outputs) and passes the "know-how" distilled from them to another AI. Even if the two AI models have different architectures, once the know-how learned by one AI is organized around the answers it gives to the same questions, the other AI can use that knowledge immediately. Consequently, there is no need for a complex, time-consuming retraining process, and inference speed is barely affected.
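To make the idea concrete, below is a minimal, hypothetical sketch (in PyTorch) of output-level knowledge transfer in the spirit described above: a small plug-in module is trained so that a new model's outputs on shared inputs imitate the adapted outputs of the previous model. The names `OutputAdapter`, `source_adapted_logits`, and `target_logits` are illustrative assumptions; this is not the published TransMiter implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class OutputAdapter(nn.Module):
    """Hypothetical plug-in module applied on top of the target model's logits."""
    def __init__(self, num_classes: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_classes, hidden),
            nn.GELU(),
            nn.Linear(hidden, num_classes),
        )

    def forward(self, logits: torch.Tensor) -> torch.Tensor:
        # Residual correction: keep the target model's own prediction and add a
        # learned adjustment that imitates the source model's adaptation.
        return logits + self.net(logits)

def transfer_step(adapter, optimizer, x, source_adapted_logits, target_logits, T=2.0):
    """One step of output-space transfer on a shared input batch x.

    Only model outputs cross the boundary: `source_adapted_logits` and
    `target_logits` stand in for black-box forward passes of two models
    that may have completely different architectures.
    """
    with torch.no_grad():
        teacher_p = F.softmax(source_adapted_logits(x) / T, dim=-1)  # adapted source model
        raw = target_logits(x)                                        # unadapted target model
    student_logp = F.log_softmax(adapter(raw) / T, dim=-1)
    loss = F.kl_div(student_logp, teacher_p, reduction="batchmean") * T * T
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Example wiring (names and shapes hypothetical):
# adapter = OutputAdapter(num_classes=1000)
# optimizer = torch.optim.AdamW(adapter.parameters(), lr=1e-3)
# for x in unlabeled_images:   # same inputs shown to both models
#     transfer_step(adapter, optimizer, x, source_adapted_logits, target_logits)
```

In such a scheme only output probabilities cross the model boundary, which is why a transfer of this kind can work even when the two models' internal structures differ entirely.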
This study is highly significant as it is the first to prove that AI adaptation knowledge—previously considered almost impossible to reuse if model structures or sizes differed—can be precisely transplanted regardless of the model type. This is expected to not only reduce repetitive learning costs but also be utilized as a so-called "knowledge patch" technology that updates Large Language Models (LLMs) in real-time according to specific needs.
Professor Hyunwoo J. Kim explained, "By extending this research, we can significantly reduce the cost of post-training that had to be performed repeatedly whenever a rapidly evolving hyper-scale language model appears. It will enable 'model patches' that easily add expertise in specific fields."
The study involved Taehoon Song (Master's student, KAIST School of Computing), Sanghyeok Lee (Postdoctoral researcher), and Jihwan Park (Doctoral student, Korea University) as co-authors, with Professor Hyunwoo J. Kim serving as the corresponding author. The research results were accepted for oral presentation (4.6% acceptance rate as of 2025) at AAAI 2026 (Association for the Advancement of Artificial Intelligence), the most prestigious international conference in the field of AI, and were presented on January 25th.
Paper Title: Transferable Model-agnostic Vision-Language Model Adaptation for Efficient Weak-to-Strong Generalization
DOI: https://doi.org/10.48550/arXiv.2508.08604
Meanwhile, Professor Hyunwoo J. Kim's laboratory presented a total of three papers at the conference, including this paper and "TabFlash," a technology developed in collaboration with Google Cloud AI to enhance the understanding of tables within documents.
KAIST Develops AI to Easily Find Promising Materials That Capture Only CO₂
< Photo 1. (From left) Professor Jihan Kim, Ph.D. candidate Yunsung Lim and Dr. Hyunsoo Park of the Department of Chemical and Biomolecular Engineering >
To help avert the climate crisis, it is essential to actively remove CO₂ that has already been emitted. Accordingly, direct air capture (DAC), a technology that extracts only CO₂ directly from the air, is gaining attention. However, capturing pure CO₂ effectively is not easy because of the water vapor (H₂O) present in the air. KAIST researchers have now used AI-driven machine learning techniques to identify the most promising CO₂-capturing materials among metal-organic frameworks (MOFs), a key class of materials studied for this technology.
KAIST (President Kwang Hyung Lee) announced on the 29th of June that a research team led by Professor Jihan Kim from the Department of Chemical and Biomolecular Engineering, in collaboration with a team at Imperial College London, has developed a machine-learning-based simulation method that can quickly and accurately screen MOFs best suited for atmospheric CO₂ capture.
< Figure 1. Concept diagram of Direct Air Capture (DAC) technology and carbon capture using Metal-Organic Frameworks (MOFs). MOFs are promising porous materials capable of capturing carbon dioxide from the atmosphere, drawing attention as a core material for DAC technology. >
Discovering high-performance materials has been difficult because MOF structures are complex and intermolecular interactions are hard to predict. To overcome this, the research team developed a machine learning force field (MLFF) capable of precisely predicting the interactions between CO₂, water (H₂O), and MOFs. The new method enables calculation of MOF adsorption properties with quantum-mechanics-level accuracy at vastly faster speeds than before.
Using this system, the team screened over 8,000 experimentally synthesized MOF structures, identifying more than 100 promising candidates for CO₂ capture. Notably, this included new candidates that had not been uncovered by traditional force-field-based simulations. The team also analyzed the relationships between MOF chemical structure and adsorption performance, proposing seven key chemical features that will help in designing new materials for DAC.
< Figure 2. Concept diagram of adsorption simulation using Machine Learning Force Field (MLFF). The developed MLFF is applicable to various MOF structures and allows for precise calculation of adsorption properties by predicting interaction energies during repetitive Widom insertion simulations. It is characterized by simultaneously achieving high accuracy and low computational cost compared to conventional classical force fields. >
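As a rough illustration of how such a force field can drive the Widom insertion calculations described above, the sketch below (Python, using the ASE library) estimates a Boltzmann-averaged insertion factor for CO₂ in one MOF and ranks candidate structures by it. The calculator object `mlff_calc`, the file list `mof_cif_paths`, and the simplified Henry-coefficient normalization are assumptions for illustration only; this is not the team's actual screening pipeline.

```python
import numpy as np
from ase import units
from ase.io import read
from ase.build import molecule

def widom_boltzmann_factor(cif_path, mlff_calc, temperature=298.0, n_trials=2000):
    """Widom-insertion estimate of CO2 affinity for one MOF structure.

    `mlff_calc` is assumed to be an ASE-compatible calculator wrapping a
    machine learning force field such as the one described above.
    """
    framework = read(cif_path)
    framework.calc = mlff_calc
    e_host = framework.get_potential_energy()        # eV, empty framework

    guest = molecule("CO2")
    guest.center(vacuum=10.0)                        # some MLFFs need a vacuum box
    guest.calc = mlff_calc
    e_guest = guest.get_potential_energy()           # eV, isolated CO2

    beta = 1.0 / (units.kB * temperature)            # 1/eV
    cell = framework.cell
    boltzmann = []
    for _ in range(n_trials):
        trial_guest = guest.copy()
        # Random orientation and random position inside the unit cell.
        trial_guest.rotate(np.random.uniform(0, 360), np.random.randn(3), center="COM")
        target = cell.cartesian_positions(np.random.rand(3))
        trial_guest.translate(target - trial_guest.get_center_of_mass())

        trial = framework + trial_guest
        trial.calc = mlff_calc
        du = trial.get_potential_energy() - e_host - e_guest   # interaction energy
        boltzmann.append(np.exp(-beta * du))

    # <exp(-beta * dU)> is proportional to the Henry coefficient; larger values
    # indicate stronger CO2 affinity at low pressure (normalization omitted).
    return float(np.mean(boltzmann))

# Screening loop over candidate structures (paths are placeholders):
# scores = {path: widom_boltzmann_factor(path, mlff_calc) for path in mof_cif_paths}
# top_candidates = sorted(scores, key=scores.get, reverse=True)[:100]
```

In practice the same insertion loop would also be run with an H₂O guest, since competition with water vapor is what makes atmospheric CO₂ capture difficult.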
This research is recognized as a significant advance in the DAC field, greatly enhancing materials design and simulation by precisely predicting MOF-CO₂ and MOF-H₂O interactions.
The results of this research, with Ph.D. candidate Yunsung Lim and Dr. Hyunsoo Park of KAIST as co-first authors, were published in the international academic journal Matter on June 12.
※Paper Title: Accelerating CO₂ direct air capture screening for metal–organic frameworks with a transferable machine learning force field
※DOI: 10.1016/j.matt.2025.102203
This research was supported by the Saudi Aramco-KAIST CO₂ Management Center and the Ministry of Science and ICT's Global C.L.E.A.N. Project.