KAIST NEWS
KAIST Develops ‘Real-Time Programmable Robotic Sheet’ That Can Grasp and Walk on Its Own
< (From left) Prof. Inkyu Park from KAIST, Prof. Yongrok Jeong from Kyungpook National University, Dr. Hyunkyu Park from KAIST, and Prof. Jung Kim from KAIST >

Folding structures are widely used in robot design as an intuitive and efficient shape-morphing mechanism, with applications explored in space and aerospace robots, soft robots, and foldable grippers (hands). However, existing folding mechanisms have fixed hinges and folding directions, requiring redesign and reconstruction every time the environment or task changes. A Korean research team has now developed a "field-programmable robotic folding sheet" that can be programmed in real time according to its surroundings, significantly enhancing robots' shape-morphing capabilities and opening new possibilities in robotics.

KAIST (President Kwang Hyung Lee) announced on the 6th that Professors Jung Kim and Inkyu Park of the Department of Mechanical Engineering have developed the foundational technology for a "field-programmable robotic folding sheet" that enables real-time shape programming.

This technology is a successful application of the "field-programmability" concept to foldable structures. It proposes an integrated material technology and programming methodology that can instantly reflect user commands—such as "where to fold, in which direction, and by how much"—onto the material's shape in real time.

The robotic sheet consists of a thin, flexible polymer substrate embedded with a micro metal resistor network. These metal resistors simultaneously serve as heaters and temperature sensors, allowing the system to sense and control its folding state without any external devices. Furthermore, using software that combines genetic algorithms and deep neural networks, the user can input desired folding locations, directions, and intensities; the sheet then autonomously repeats heating and cooling cycles to produce the precise desired shape. In particular, closed-loop control of the temperature distribution enhances real-time folding precision, compensates for environmental changes, and improves the traditionally slow response time of heat-based folding technologies.

The ability to program shapes in real time enables a wide variety of robotic functions to be implemented on the fly, without complex hardware redesign. The research team demonstrated an adaptive robotic hand (gripper) that can change its grasping strategy to suit various object shapes using a single material. They also placed the same robotic sheet on the ground to let it walk or crawl, showcasing bioinspired locomotion strategies. This suggests potential for environmentally adaptive autonomous robots that alter their form in response to their surroundings.

Professor Jung Kim stated, "This study brings us a step closer to realizing 'morphological intelligence,' a concept where shape itself embodies intelligence and enables smart motion. In the future, we plan to evolve this into a next-generation physical AI platform with applications in disaster-response robots, customized medical assistive devices, and space exploration tools—by improving materials and structures for greater load support and faster cooling, and expanding to electrode-free, fully integrated designs of various forms and sizes."

This research, co-led by Dr. Hyunkyu Park (currently at Samsung Advanced Institute of Technology, Samsung Electronics) and Professor Yongrok Jeong (currently at Kyungpook National University), was published in the August 2025 online edition of the international journal Nature Communications.

※ Paper title: Field-programmable robotic folding sheet
※ DOI: 10.1038/s41467-025-61838-3

This research was supported by the National Research Foundation of Korea (Ministry of Science and ICT). (RS-2021-NR059641, 2021R1A2C3008742)

Video file: https://drive.google.com/file/d/18R0oW7SJVYH-gd1Er_S-9Myar8dm8Fzp/view?usp=sharing
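The programming pipeline described above, in which a genetic algorithm searches for heating patterns and a learned model predicts the resulting folds, can be sketched in a few lines. The snippet below is a minimal illustration, not the team's software: `predict_folds` is a hypothetical stand-in for their trained deep neural network, and the resistor count, thermal coupling, and GA settings are invented for the example.

```python
# Illustrative sketch (not the authors' code): a genetic algorithm searches for
# per-resistor heating duty cycles that produce a target fold pattern. The
# surrogate `predict_folds` is a hypothetical stand-in for the paper's trained
# deep network mapping heating patterns to fold angles.
import numpy as np

rng = np.random.default_rng(0)
N_RESISTORS = 16                          # embedded micro heater/sensor elements
TARGET = rng.uniform(-60, 60, size=4)     # desired fold angles (deg) at 4 hinge lines

def predict_folds(duty):                  # hypothetical surrogate for the trained DNN
    W = np.linspace(0.5, 1.5, N_RESISTORS).reshape(4, 4)  # toy thermal coupling
    return 90 * np.tanh(W @ duty.reshape(4, 4).mean(axis=1) - 0.5)

def fitness(duty):
    return -np.sum((predict_folds(duty) - TARGET) ** 2)

pop = rng.uniform(0, 1, size=(64, N_RESISTORS))           # duty cycles in [0, 1]
for gen in range(200):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-16:]]               # elitist selection
    children = parents[rng.integers(0, 16, 48)].copy()
    children += rng.normal(0, 0.05, children.shape)       # Gaussian mutation
    pop = np.vstack([parents, np.clip(children, 0, 1)])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("fold error per hinge (deg):", np.abs(predict_folds(best) - TARGET))
```

In the real system, the chosen heating pattern would then be held by the closed-loop temperature control running on the embedded resistor-sensors, which is what gives the sheet its precision and robustness to environmental changes.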
2025.08.06
Vulnerability Found: One Packet Can Paralyze Smartphones
< (From left) Professor Yongdae Kim, PhD candidate Tuan Dinh Hoang, and PhD candidate Taekkyung Oh from KAIST; Professor CheolJun Park from Kyung Hee University; and Professor Insu Yun from KAIST >

Smartphones must stay connected to mobile networks at all times to function properly. The core component that enables this constant connectivity is the communication modem (baseband) inside the device. KAIST researchers, using their self-developed testing framework 'LLFuzz (Lower Layer Fuzz),' have discovered security vulnerabilities in the lower layers of smartphone communication modems and demonstrated the necessity of standardizing 'mobile communication modem security testing.'

*Standardization: In mobile communication, conformance testing, which verifies normal operation under normal conditions, has been standardized. However, standards for handling abnormal packets have not yet been established, hence the need for standardized security testing.

Professor Yongdae Kim's team from the School of Electrical Engineering at KAIST, in joint research with Professor CheolJun Park's team from Kyung Hee University, announced on the 25th of July that they have discovered critical security vulnerabilities in the lower layers of smartphone communication modems. These vulnerabilities can incapacitate a smartphone's communication with just a single manipulated wireless packet (a data transmission unit in a network). They are especially severe because they can potentially lead to remote code execution (RCE).

The research team used their self-developed 'LLFuzz' analysis framework to analyze the lower-layer state transitions and error-handling logic of the modem and detect security vulnerabilities. LLFuzz precisely extracts vulnerabilities caused by implementation errors by comparing 3GPP* standard-based state machines with actual device responses.

*3GPP: An international collaborative organization that creates global mobile communication standards.

The research team conducted experiments on 15 commercial smartphones from global manufacturers, including Apple, Samsung Electronics, Google, and Xiaomi, and discovered a total of 11 vulnerabilities. Of these, seven were assigned official CVE (Common Vulnerabilities and Exposures) numbers, and manufacturers have applied security patches for them. The remaining four have not yet been publicly disclosed.

While previous security research primarily focused on higher layers of mobile communication, such as NAS (Non-Access Stratum) and RRC (Radio Resource Control), the research team concentrated on the error-handling logic of the lower layers, which manufacturers have often neglected. The vulnerabilities occur in the lower layers of the communication modem (RLC, MAC, PDCP, PHY*), where encryption and authentication are not applied, so operational errors can be induced simply by injecting external signals.

*RLC, MAC, PDCP, PHY: Lower layers of LTE/5G communication, responsible for wireless resource allocation, error control, encryption, and physical-layer transmission.

The research team released a demo video showing that when they injected a manipulated wireless packet (a malformed MAC packet), generated on an experimental laptop and transmitted via a software-defined radio (SDR) device, into commercial smartphones, the smartphone's communication modem (baseband) immediately crashed.

※ Experiment video: https://drive.google.com/file/d/1NOwZdu_Hf4ScG7LkwgEkHLa_nSV4FPb_/view?usp=drive_link

The video shows data being transmitted normally at 23 MB per second on the fast.com page; immediately after the manipulated packet is injected, the transmission stops and the mobile communication signal disappears. This intuitively demonstrates that a single wireless packet can cripple a commercial device's communication modem.

The vulnerabilities were found in the modem chip, the core smartphone component responsible for calls, texts, and data communication:

- Qualcomm: affects over 90 chipsets, including CVE-2025-21477 and CVE-2024-23385.
- MediaTek: affects over 80 chipsets, including CVE-2024-20076, CVE-2024-20077, and CVE-2025-20659.
- Samsung: CVE-2025-26780 (targets the latest chipsets such as the Exynos 2400 and 5400).
- Apple: CVE-2024-27870 (shares the same vulnerability as the Qualcomm CVE).

The affected modem chips are found not only in premium smartphones but also in low-end smartphones, tablets, smartwatches, and IoT devices, so their broad diffusion creates widespread potential for user harm. Furthermore, the research team experimentally tested the 5G lower layers and found two vulnerabilities in just two weeks. Considering that 5G vulnerability checks have not generally been conducted, many more vulnerabilities may exist in the lower layers of baseband chips.

Professor Yongdae Kim explained, "The lower layers of smartphone communication modems are not subject to encryption or authentication, creating a structural risk where devices can accept arbitrary signals from external sources." He added, "This research demonstrates the necessity of standardizing mobile communication modem security testing for smartphones and other IoT devices."

The research team is continuing additional analysis of the 5G lower layers using LLFuzz and is also developing tools for testing LTE and 5G upper layers, while pursuing collaborations for future tool disclosure. The team's stance is that "as technological complexity increases, systematic security inspection systems must evolve in parallel."

First author Tuan Dinh Hoang, a Ph.D. student in the School of Electrical Engineering, will present the results in August at USENIX Security 2025, one of the world's most prestigious cybersecurity conferences.

※ Paper Title: LLFuzz: An Over-the-Air Dynamic Testing Framework for Cellular Baseband Lower Layers (Tuan Dinh Hoang and Taekkyung Oh, KAIST; CheolJun Park, Kyung Hee Univ.; Insu Yun and Yongdae Kim, KAIST)
※ USENIX paper site: https://www.usenix.org/conference/usenixsecurity25/presentation/hoang (not yet public); lab homepage paper: https://syssec.kaist.ac.kr/pub/2025/LLFuzz_Tuan.pdf
※ Open-source repository: https://github.com/SysSec-KAIST/LLFuzz (to be released)

This research was conducted with support from the Institute of Information & Communications Technology Planning & Evaluation (IITP), funded by the Ministry of Science and ICT.
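To make the idea concrete, the sketch below shows what lower-layer fuzzing looks like at the byte level, in the spirit of (but not copied from) LLFuzz: craft a MAC subheader whose declared length disagrees with the actual payload, then hand it to a radio for injection. `sdr_transmit` is a hypothetical placeholder; real over-the-air injection requires a full SDR base-station stack, and the subheader layout here is a simplified reading of the LTE MAC format.

```python
# Conceptual sketch of lower-layer fuzzing (not the team's actual tool):
# build LTE MAC PDU subheaders with deliberately inconsistent length fields,
# probing the modem's error-handling path rather than the happy path.
import random
import struct

def mac_subheader(lcid: int, length: int) -> bytes:
    # R/R/E/LCID octet followed by a 7-bit L field (F=0 format, simplified)
    first = (1 << 5) | (lcid & 0x1F)       # E=1: another subheader follows
    return struct.pack("BB", first, length & 0x7F)

def mutate_pdu() -> bytes:
    # declared length disagrees with the actual payload size on purpose
    declared = random.randint(0, 127)
    actual = random.randint(0, 64)
    return mac_subheader(lcid=3, length=declared) + bytes(actual)

def sdr_transmit(pdu: bytes) -> None:      # hypothetical SDR injection hook
    print(f"inject {len(pdu)} bytes: {pdu[:4].hex()}...")

for _ in range(5):                         # minimal fuzzing loop
    sdr_transmit(mutate_pdu())
```

Because these layers carry no integrity protection, a crafted PDU like this is accepted by the modem as-is, which is exactly the structural risk the article describes.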
2025.07.25
Development of Core NPU Technology to Improve ChatGPT Inference Performance by Over 60%
Latest generative AI models such as OpenAI's ChatGPT-4 and Google's Gemini 2.5 require not only high memory bandwidth but also large memory capacity. This is why generative AI cloud operators like Microsoft and Google purchase hundreds of thousands of NVIDIA GPUs. As a solution to the core challenges of building such high-performance AI infrastructure, Korean researchers have succeeded in developing an NPU (Neural Processing Unit)* core technology that improves the inference performance of generative AI models by an average of more than 60% while consuming approximately 44% less power than the latest GPUs.

*NPU (Neural Processing Unit): An AI-specific semiconductor chip designed to rapidly process artificial neural networks.

On the 4th, Professor Jongse Park's research team from the KAIST School of Computing, in collaboration with HyperAccel Inc. (a startup founded by Professor Joo-Young Kim of the School of Electrical Engineering), announced that they have developed a high-performance, low-power NPU core technology specialized for generative AI clouds like ChatGPT. The proposed technology has been accepted by the 2025 International Symposium on Computer Architecture (ISCA 2025), a top-tier international conference in computer architecture.

The key objective of this research is to improve the performance of large-scale generative AI services by making the inference process lightweight, while minimizing accuracy loss and solving memory bottleneck issues. The work is highly regarded for its integrated design of AI semiconductors and AI system software, the key components of AI infrastructure.

While existing GPU-based AI infrastructure requires multiple GPU devices to meet high bandwidth and capacity demands, this technology enables the same level of AI infrastructure with fewer NPU devices through KV cache quantization*. The KV cache accounts for most of the memory usage, so quantizing it significantly reduces the cost of building generative AI clouds.

*KV Cache (Key-Value Cache) Quantization: Reducing the data size of a type of temporary storage used to improve performance when running generative AI models (e.g., converting a 16-bit number to a 4-bit number reduces the data size to 1/4).

The research team designed the technology to integrate with memory interfaces without changing the operational logic of existing NPU architectures. The hardware architecture not only implements the proposed quantization algorithm but also adopts page-level memory management* for efficient use of limited memory bandwidth and capacity, and introduces a new encoding technique optimized for the quantized KV cache.

*Page-level memory management: Virtualizes memory addresses, as a CPU does, to allow consistent access within the NPU.

Furthermore, when building an NPU-based AI cloud with superior cost and power efficiency compared to the latest GPUs, the high-performance, low-power nature of NPUs is expected to significantly reduce operating costs.

Professor Jongse Park stated, "This research, through joint work with HyperAccel Inc., found a solution in generative AI inference lightweighting algorithms and succeeded in developing a core NPU technology that can solve the 'memory problem.' Through this technology, we implemented an NPU with over 60% improved performance compared to the latest GPUs by combining quantization techniques that reduce memory requirements while maintaining inference accuracy, with hardware designs optimized for them." He further emphasized, "This technology has demonstrated the possibility of high-performance, low-power infrastructure specialized for generative AI, and is expected to play a key role not only in AI cloud data centers but also in the AI transformation (AX) environment represented by dynamic, executable AI such as 'Agentic AI.'"

This research was presented by Ph.D. student Minsu Kim and Dr. Seongmin Hong of HyperAccel Inc. as co-first authors at ISCA 2025, held in Tokyo, Japan, from June 21 to June 25. ISCA, a globally renowned conference, received 570 paper submissions this year and accepted only 127 (an acceptance rate of 22.7%).

※ Paper Title: Oaken: Fast and Efficient LLM Serving with Online-Offline Hybrid KV Cache Quantization
※ DOI: https://doi.org/10.1145/3695053.3731019

This research was supported by the National Research Foundation of Korea's Excellent Young Researcher Program, the Institute for Information & Communications Technology Planning & Evaluation (IITP), and the AI Semiconductor Graduate School Support Project.
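The quantization step itself is easy to illustrate. The sketch below shows only the basic idea of 4-bit KV cache quantization with per-channel scales; the Oaken paper's actual online-offline hybrid scheme, outlier handling, and hardware encoding are not reproduced here.

```python
# Minimal sketch of KV cache quantization (not the Oaken implementation):
# store attention key/value tensors as 4-bit integers with per-channel
# scales, cutting cache memory roughly 4x versus fp16.
import numpy as np

def quantize_kv_4bit(kv: np.ndarray):
    # per-channel symmetric quantization to the signed 4-bit range [-8, 7]
    scale = np.abs(kv).max(axis=0, keepdims=True) / 7.0 + 1e-8
    q = np.clip(np.round(kv / scale), -8, 7).astype(np.int8)
    # a real implementation would pack two 4-bit values per byte
    return q, scale

def dequantize_kv(q: np.ndarray, scale: np.ndarray) -> np.ndarray:
    return q.astype(np.float32) * scale

kv = np.random.randn(1024, 128).astype(np.float32)   # (cached tokens, head dim)
q, scale = quantize_kv_4bit(kv)
err = np.abs(dequantize_kv(q, scale) - kv).mean()
print(f"mean abs reconstruction error: {err:.4f}")
```

The point of doing this inside the NPU's memory path, as the article describes, is that the cache shrinks without any change to the attention computation the rest of the chip performs.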
2025.07.07
KAIST researcher Se Jin Park develops 'SpeechSSM,' opening up possibilities for a 24-hour AI voice assistant.
< (From left) Prof. Yong Man Ro and Ph.D. candidate Se Jin Park >

Se Jin Park, a researcher from Professor Yong Man Ro's team at KAIST, has announced 'SpeechSSM,' a spoken language model capable of generating long-duration speech that sounds natural and remains consistent. An efficient processing technique based on linear sequence modeling overcomes the limitations of existing spoken language models, enabling high-quality speech generation without time constraints. The model is expected to be widely used in podcasts, audiobooks, and voice assistants thanks to its ability to generate natural, long-duration speech like a human.

Recently, Spoken Language Models (SLMs) have been spotlighted as a next-generation technology that surpasses the limitations of text-based language models by learning human speech without text, allowing them to understand and generate both linguistic and non-linguistic information. However, existing models showed significant limitations in generating the long-duration content required for podcasts, audiobooks, and voice assistants. A KAIST researcher has now overcome these limitations by developing 'SpeechSSM,' which enables consistent and natural speech generation without time constraints.

KAIST (President Kwang Hyung Lee) announced on the 3rd of July that Ph.D. candidate Se Jin Park from Professor Yong Man Ro's research team in the School of Electrical Engineering has developed 'SpeechSSM,' a spoken language model capable of generating long-duration speech. The research will be presented as an oral paper at ICML (International Conference on Machine Learning) 2025, one of the top machine learning conferences, a distinction given to roughly 1% of all submitted papers. This not only proves outstanding research ability but also demonstrates, once again, KAIST's world-leading AI research capabilities.

A major advantage of SLMs is their ability to process speech directly, without intermediate text conversion, leveraging the unique acoustic characteristics of human speakers and allowing high-quality speech to be generated rapidly even in large-scale models. However, existing models struggled to maintain semantic and speaker consistency in long-duration speech because of the increased 'speech token resolution' and memory consumption required to capture very detailed information by breaking speech into fine fragments.

To solve this problem, Se Jin Park developed SpeechSSM, a spoken language model built on a hybrid state-space model and designed to efficiently process and generate long speech sequences. The model employs a 'hybrid structure' that alternates 'attention layers,' which focus on recent information, with 'recurrent layers,' which retain the overall narrative flow (long-term context). This keeps the story coherent even when speech is generated for a long time. Furthermore, memory usage and computational load do not increase sharply with input length, enabling stable and efficient training and the generation of long-duration speech.

SpeechSSM effectively processes unbounded speech sequences by dividing speech data into short, fixed units (windows), processing each unit independently, and then combining them to create long speech, as sketched below.

Additionally, in the speech generation phase, it uses a 'non-autoregressive' audio synthesis model (SoundStorm), which rapidly generates multiple parts at once instead of slowly creating one character or one word at a time, enabling fast generation of high-quality speech.

While existing models were typically evaluated on short speech of about 10 seconds, Se Jin Park created new evaluation tasks for speech generation based on the team's self-built benchmark dataset, 'LibriSpeech-Long,' which supports generation of up to 16 minutes of speech. In contrast to PPL (perplexity), an existing evaluation metric that only indicates grammatical correctness, she proposed new metrics such as 'SC-L (semantic coherence over time)' to assess content coherence and 'N-MOS-T (naturalness mean opinion score over time)' to evaluate naturalness, enabling more effective and precise evaluation.

Through these new evaluations, it was confirmed that speech generated by SpeechSSM consistently featured the specific individuals mentioned in the initial prompt, and that new characters and events unfolded naturally and in a contextually consistent manner, despite long-duration generation. This contrasts sharply with existing models, which tended to lose their topic and exhibit repetition during long-duration generation.

Se Jin Park explained, "Existing spoken language models had limitations in long-duration generation, so our goal was to develop a spoken language model capable of generating long-duration speech for actual human use." She added, "This research achievement is expected to greatly contribute to various types of voice content creation and voice AI fields like voice assistants, by maintaining consistent content in long contexts and responding more efficiently and quickly in real time than existing methods."

This research, with Se Jin Park as the first author, was conducted in collaboration with Google DeepMind and is scheduled to be presented as an oral presentation at ICML 2025 on July 16th.

Paper Title: Long-Form Speech Generation with Spoken Language Models
DOI: 10.48550/arXiv.2412.18603

Se Jin Park has demonstrated outstanding research capabilities as a member of Professor Yong Man Ro's MLLM (multimodal large language model) research team through work integrating vision, speech, and language. Her achievements include a spotlight paper presentation at CVPR (Computer Vision and Pattern Recognition) 2024 and an Outstanding Paper Award at ACL (Association for Computational Linguistics) 2024. For more information, refer to the publication and accompanying demo: SpeechSSM Publications.
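The hybrid layout and windowed processing described above can be sketched as follows. This toy block is not the SpeechSSM code: a GRU stands in for the model's state-space (recurrent) layers, and the dimensions and window size are illustrative.

```python
# Toy sketch of the hybrid layout: attention operates within a short local
# window of recent tokens while a recurrent layer carries the long-range
# context, so memory and compute stay flat as the sequence grows.
import torch
import torch.nn as nn

class HybridBlock(nn.Module):
    def __init__(self, d=256, window=64):
        super().__init__()
        self.window = window
        self.attn = nn.MultiheadAttention(d, num_heads=4, batch_first=True)
        self.rnn = nn.GRU(d, d, batch_first=True)  # stand-in for a state-space layer

    def forward(self, x, state=None):
        # local attention: each fixed-size window is processed independently
        B, T, D = x.shape
        w = x.view(B * (T // self.window), self.window, D)
        w, _ = self.attn(w, w, w, need_weights=False)
        x = w.reshape(B, T, D)
        # the recurrent layer threads one hidden state across all windows
        x, state = self.rnn(x, state)
        return x, state

block = HybridBlock()
tokens = torch.randn(2, 256, 256)   # (batch, time, dim); T divisible by window
out, state = block(tokens)          # state is fixed-size regardless of history
print(out.shape)
```

The key design property this illustrates is that only the fixed-size recurrent state persists between windows, which is what lets generation run indefinitely without the memory growth of full self-attention.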
2025.07.04
KAIST to Lead the Way in Nurturing Talent and Driving S&T Innovation for a G3 AI Powerhouse
* Focusing on nurturing talent and dedicating itself to R&D to make Korea a G3 AI powerhouse (one of the top three AI nations).
* Leading the realization of an "AI-driven Basic Society for All" and developing technologies that leverage AI to overcome the crisis in Korea's manufacturing sector.
* Over the past 50 years, KAIST has been at the core of South Korea's rise from the ashes to a scientific and technological powerhouse, contributing to the development of science and technology talent, innovative technology, national industrial growth, and a startup innovation ecosystem.

As public interest in AI and science and technology has grown significantly with the inauguration of the new government, KAIST (President Kwang Hyung Lee) announced on June 24th its plan to transform into an "AI-centric, Value-Creating Science and Technology University" that leads national innovation based on science and technology and spearheads solutions to global challenges.

At a time when South Korea is undergoing a major transition to a technology-driven society, KAIST, drawing on its half-century of experience as a "starter kit" for national development, is preparing to leap beyond being an educational and research institution to become a global innovation hub that creates new social value.

In particular, KAIST has presented a vision for realizing an "AI-driven Basic Society" in which all citizens can use AI without exclusion, enabling South Korea to join the top three AI nations (G3). To this end, through the "National AI Research Hub" project (headed by Kee Eung Kim), which KAIST leads on behalf of South Korea, the institution is dedicated to enhancing industrial competitiveness and solving social problems with AI technology.

< KAIST President Kwang Hyung Lee >

KAIST's research achievements in AI are attracting international attention. At the top three machine learning conferences (ICML, NeurIPS, ICLR), KAIST ranked 5th globally and 1st in Asia over the past five years (2020-2024). Over the same period, based on papers published at top conferences in machine learning, natural language processing, and computer vision (ICML, NeurIPS, ICLR, ACL, EMNLP, NAACL, CVPR, ICCV, ECCV), KAIST ranked 5th globally and 4th in Asia. KAIST has also ranked 1st globally in the average number of papers accepted at ISSCC (International Solid-State Circuits Conference), the world's most prestigious conference on semiconductor integrated circuits, for 19 years (2006-2024).

KAIST is continuously expanding its research into core AI technologies, including hyper-scale AI models (a Korean LLM), neuromorphic semiconductors, and low-power AI processors, as well as application areas such as autonomous driving, urban air mobility (UAM), precision medicine, and explainable AI (XAI).

In the manufacturing sector, KAIST's AI technologies are driving on-site innovation. Professor Young Jae Jang's team has enhanced productivity in advanced manufacturing fields like semiconductors and displays through digital twins that use manufacturing-site data and AI-based prediction technology. Professor Song Min Kim's team developed ultra-low-power wireless tag technology capable of tracking locations with sub-centimeter precision, accelerating the implementation of smart factories. Technologies such as industrial process optimization and equipment failure prediction developed by INEEJI Co., Ltd., founded by Professor Jaesik Choi, are being rapidly applied in real industrial settings and yielding results. In March, the government designated INEEJI's work in 'Explainable AI (XAI)' as a national strategic technology.

< Researchers performing data analysis for AI research >

Practical applications are also emerging in the robotics sector, which is closely linked to AI. Professor Jemin Hwangbo's team from the Department of Mechanical Engineering drew attention by developing RAIBO 2, a quadrupedal robot usable in high-risk environments such as disaster relief and rough-terrain exploration. Professor Kyoung Chul Kong's team and Angel Robotics Co., Ltd. developed the WalkOn Suit exoskeleton robot, significantly improving the quality of life for individuals with complete lower-body paralysis or walking disabilities.

Remarkable research is also ongoing in future core technology areas such as AI semiconductors, quantum cryptography communication, ultra-small satellites, hydrogen fuel cells, next-generation batteries, and biomimetic sensors. Notably, space exploration technology based on small satellites, asteroid exploration projects, energy harvesting, and high-speed charging technologies are gaining attention. In advanced bio and life sciences, KAIST is collaborating with Germany's Merck on research initiatives including synthetic biology and mRNA, and is contributing to the construction of a 430 billion won Merck bio-center in Daejeon, stimulating the local economy and creating jobs.

Based on these research capabilities, KAIST continues to expand its influence on the global stage. It has established strategic partnerships with leading universities worldwide, including MIT, Stanford University, and New York University (NYU); notably, KAIST and NYU have established a joint campus in New York to strengthen exchange and collaborative research. Active industry-academia collaborations with global companies such as Google, Intel, and TSMC are also ongoing, playing a pivotal role in future technology development and the creation of an innovation ecosystem.

These activities feed a strong startup ecosystem that drives South Korean industry. The flow of startups that began with companies like Qnix Computer, Nexon, and Naver has expanded to 1,914 companies to date, with cumulative assets of 94 trillion won, sales of 36 trillion won, and roughly 60,000 employees. Over 90% are technology-based startups originating from faculty and student labs, demonstrating a model of tangible economic contribution based on science and technology.

< Students at work >

Having consistently generated diverse achievements, KAIST has already produced approximately 80,000 "KAISTians" who have created innovation through challenge and failure, and it is recruiting new talent to continue driving innovation that transforms South Korea and the world.

President Kwang Hyung Lee emphasized, "KAIST will establish itself as a global leader in science and technology, designing the future of South Korea and humanity and creating tangible value." He added, "We will focus on talent nurturing and research and development to realize the new government's national agenda of becoming a G3 AI powerhouse."

He further stated, "KAIST's vision for the AI field, on which it places particular emphasis, is a society where everyone can freely use AI. We will contribute to significantly boosting productivity by restoring manufacturing competitiveness through AI and actively bringing physical AI, AI robots, and AI mobility technologies to industrial sites."
2025.06.24
KAIST Turns an Unprecedented Idea into Reality: Quantum Computing with Magnets
What started as an idea under KAIST's Global Singularity Research Project—"Can we build a quantum computer using magnets?"—has now become a scientific reality. A KAIST-led international research team has demonstrated a core quantum computing technology using magnetic materials (ferromagnets) for the first time in the world.

KAIST (President Kwang Hyung Lee) announced on the 6th of May that a team led by Professor Kab-Jin Kim from the Department of Physics, in collaboration with Argonne National Laboratory and the University of Illinois Urbana-Champaign (UIUC), has developed a "photon-magnon hybrid chip" and implemented real-time, multi-pulse interference using magnetic materials, a global first.

< Photo 1. Dr. Moojune Song (left) and Professor Kab-Jin Kim (right) of the KAIST Department of Physics >

In simple terms, the researchers developed a special chip that synchronizes light and internal magnetic vibrations (magnons), enabling the transmission of phase information between distant magnets, and they succeeded in observing and controlling interference between multiple signals in real time. This is the first experimental evidence that magnets can serve as key components in quantum computing, a pivotal step toward magnet-based quantum platforms.

The N and S poles of a magnet stem from the spin of electrons inside atoms. When many atoms align, their collective spin vibrations create a quantum particle known as a "magnon." Magnons are especially promising because of their nonreciprocal nature: they can carry information in only one direction, which makes them suitable for quantum noise isolation in compact quantum chips. They can also couple with both light and microwaves, opening the potential for long-distance quantum communication over tens of kilometers. Moreover, using special materials like antiferromagnets could allow quantum computers to operate at terahertz (THz) frequencies, far surpassing today's hardware limitations, and possibly enable room-temperature quantum computing without bulky cryogenic equipment.

To build such a system, however, one must be able to transmit, measure, and control the phase information of magnons—the starting point and propagation of their waveforms—in real time. This had not been achieved until now.

< Figure 1. Superconducting circuit-based magnon-photon hybrid system. (a) Schematic diagram of the device. A NbN superconducting resonator circuit fabricated on a silicon substrate is coupled with spherical YIG magnets (250 μm diameter), and magnons are generated and measured in real time via a vertical antenna. (b) Photograph of the actual device. The two YIG spheres are 12 mm apart, a distance at which they cannot influence each other without the superconducting circuit. >

Professor Kim's team used two tiny magnetic spheres made of yttrium iron garnet (YIG), placed 12 mm apart with a superconducting resonator in between—similar to those used in quantum processors by Google and IBM. They input pulses into one magnet and observed lossless transmission of magnon vibrations to the second magnet via the superconducting circuit. They confirmed that, from single nanosecond pulses to four microwave pulses, the magnon vibrations maintained their phase information and exhibited predictable constructive or destructive interference in real time, known as coherent interference.

By adjusting the pulse frequencies and their intervals, the researchers could also freely control the interference patterns of magnons, showing for the first time that electrical signals can be used to manipulate magnonic quantum states. The work demonstrates that quantum gate operations using multiple pulses—a fundamental technique in quantum information processing—can be implemented with a hybrid system of magnetic materials and superconducting circuits, opening the door to practical magnet-based quantum devices.

< Figure 2. Experimental data. (a) Measurement of magnon-magnon band anticrossing via continuous-wave measurement, showing the formation of a strongly coupled hybrid system. (b) Magnon pulse exchange oscillation between YIG spheres upon single-pulse application; magnon information is coherently transmitted at regular time intervals through the superconducting circuit. (c, d) Magnon interference upon dual-pulse application; the magnon information state can be arbitrarily controlled by adjusting the time interval and carrier frequency between pulses. >

Professor Kab-Jin Kim stated, "This project began with a bold, even unconventional idea proposed to the Global Singularity Research Program: 'What if we could build a quantum computer with magnets?' The journey has been fascinating, and this study not only opens a new field of quantum spintronics, but also marks a turning point in developing high-efficiency quantum information processing devices."

The research was co-led by postdoctoral researcher Moojune Song (KAIST), Dr. Yi Li and Dr. Valentine Novosad of Argonne National Laboratory, and Professor Axel Hoffmann's team at UIUC. The results were published in Nature Communications on April 17 and in npj Spintronics on April 1, 2025.

Paper 1: Single-shot magnon interference in a magnon-superconducting-resonator hybrid circuit, Nat. Commun. 16, 3649 (2025), DOI: https://doi.org/10.1038/s41467-025-58482-2
Paper 2: Single-shot electrical detection of short-wavelength magnon pulse transmission in a magnonic ultra-thin-film waveguide, npj Spintronics 3, 12 (2025), DOI: https://doi.org/10.1038/s44306-025-00072-5

The research was supported by KAIST's Global Singularity Research Initiative, the National Research Foundation of Korea (including the Mid-Career Researcher, Leading Research Center, and Quantum Information Science Human Resource Development programs), and the U.S. Department of Energy.
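A toy calculation shows why pulse spacing controls the interference. In the sketch below, two pulses each launch a damped oscillation at the magnon frequency; a delay of an integer number of periods adds the signals constructively, while a half-period offset cancels them. The frequency and decay constants are illustrative, not the experiment's values.

```python
# Toy numerical illustration (not the experiment's model) of two-pulse
# coherent interference in a damped magnon mode.
import numpy as np

f0, tau = 5.0e9, 200e-9                  # magnon frequency (Hz), decay time (s)
t = np.linspace(0.0, 50e-9, 100001)      # 50 ns trace, 0.5 ps resolution

def magnon_response(t_pulse):
    # damped oscillation started by a pulse arriving at t_pulse
    dt = t - t_pulse
    return np.where(dt > 0, np.exp(-dt / tau) * np.cos(2 * np.pi * f0 * dt), 0.0)

for label, delay in (("in-phase", 10 / f0), ("anti-phase", 10.5 / f0)):
    total = magnon_response(0.0) + magnon_response(delay)
    peak = np.abs(total[t > delay]).max()
    print(f"{label}: delay {delay * 1e9:.2f} ns -> peak amplitude {peak:.2f}")
```

Shifting the second pulse by half a period flips the residual amplitude from roughly double to nearly zero, which is the constructive/destructive control the team demonstrated electrically.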
2025.06.12
KAIST Research Team Breaks Down Musical Instincts with AI
Music, often referred to as a universal language, is a common component of all cultures. Could 'musical instinct,' then, be something shared to some degree despite the extensive environmental differences among cultures?

On January 16, a KAIST research team led by Professor Hawoong Jung from the Department of Physics announced that it had identified the principle by which musical instincts emerge from the human brain without special learning, using an artificial neural network model.

Previously, many researchers attempted to identify the similarities and differences between the music of various cultures and to understand the origin of this universality. A paper published in Science in 2019 revealed that music is produced in all ethnographically distinct cultures, with similar forms of beats and tunes. Neuroscientists have also found that a specific part of the human brain, the auditory cortex, is responsible for processing musical information.

Professor Jung's team used an artificial neural network model to show that cognitive functions for music form spontaneously as a result of processing auditory information received from nature, without being taught music. The team used AudioSet, a large-scale collection of sound data provided by Google, to train the network on a wide variety of sounds. Interestingly, certain neurons within the network responded selectively to music: they reacted minimally to other sounds such as those of animals, nature, or machines, but responded strongly to various forms of music, both instrumental and vocal.

The neurons in the artificial neural network model showed reactive behaviours similar to those in the auditory cortex of a real brain. For example, the artificial neurons responded less to music that had been cropped into short intervals and rearranged, indicating that the spontaneously generated music-selective neurons encode the temporal structure of music. This property was not limited to a specific genre but emerged across 25 genres, including classical, pop, rock, jazz, and electronic music.

< Figure 1. Illustration of the musicality of the brain and an artificial neural network (created with DALL·E 3 based on the paper's content) >

Furthermore, suppressing the activity of the music-selective neurons greatly impaired cognitive accuracy for other natural sounds. That is, the neural function that processes musical information helps process other sounds, and 'musical ability' may be an instinct formed through evolutionary adaptation to better process sounds from nature.
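The selectivity analysis described above can be illustrated with a simple index comparing a unit's mean response to music against its mean response to other sounds. The response arrays below are random stand-ins for real network activations, and the threshold is arbitrary; the sketch only shows the shape of the analysis, not the study's code.

```python
# Illustrative sketch of identifying music-selective units in a trained
# network (not the study's code): a selectivity index contrasts each unit's
# mean response to music clips versus non-music natural sounds.
import numpy as np

rng = np.random.default_rng(1)
n_units = 512
music_resp = rng.gamma(2.0, 1.0, size=(n_units, 100))   # 100 music clips
other_resp = rng.gamma(2.0, 0.8, size=(n_units, 400))   # 400 natural sounds

def selectivity_index(music, other):
    m, o = music.mean(axis=1), other.mean(axis=1)
    return (m - o) / (m + o + 1e-9)      # +1 = responds only to music

si = selectivity_index(music_resp, other_resp)
music_selective = np.where(si > 0.3)[0]  # threshold is illustrative
print(f"{len(music_selective)} of {n_units} units flagged as music-selective")
```

The study's temporal-structure test follows the same pattern: recompute the responses on scrambled clips and check that the flagged units' index drops.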
Professor Hawoong Jung, who advised the research, said, "The results of our study imply that evolutionary pressure has contributed to forming the universal basis for processing musical information across cultures." On the significance of the research, he explained, "We look forward to this artificially built model with human-like musicality becoming an original model for various applications, including AI music generation, music therapy, and research in music cognition." He also noted its limitations: "This research does not take into consideration the developmental process that follows the learning of music, so it should be seen as a study of the foundation of musical information processing in early development."

< Figure 2. The artificial neural network, trained to recognize non-musical natural sounds, distinguishes between music and non-music. >

This research, conducted by first author Dr. Gwangsu Kim of the KAIST Department of Physics (current affiliation: MIT Department of Brain and Cognitive Sciences) and Dr. Dong-Kyum Kim (current affiliation: IBS), was published in Nature Communications under the title "Spontaneous emergence of rudimentary music detectors in deep neural networks." It was supported by the National Research Foundation of Korea.
2024.01.23
North Korea and Beyond: AI-Powered Satellite Analysis Reveals the Unseen Economic Landscape of Underdeveloped Nations
- A joint research team in computer science, economics, and geography has developed an artificial intelligence (AI) technology to measure grid-level economic development within six-square-kilometer regions.
- The AI technology is applicable in regions with limited statistical data (e.g., North Korea), supporting international efforts to propose policies for economic growth and poverty reduction in underdeveloped countries.
- The research team plans to make the technology freely available to contribute to the United Nations' Sustainable Development Goals (SDGs).

The United Nations reports that more than 700 million people live in extreme poverty, earning less than two dollars a day. However, accurately assessing poverty remains a global challenge: 53 countries have not conducted agricultural surveys in the past 15 years, and 17 countries have not published a population census. To fill this data gap, new technologies are being explored to estimate poverty from alternative sources such as street views, aerial photos, and satellite images.

A paper published in Nature Communications demonstrates how artificial intelligence (AI) can help analyze economic conditions from daytime satellite imagery. The technology can be applied even to the least developed countries, such as North Korea, that lack the reliable statistical data typically needed for machine learning training.

The researchers used publicly available Sentinel-2 satellite images from the European Space Agency (ESA) and split them into small six-square-kilometer grids. At this zoom level, visual information such as buildings, roads, and greenery can be used to quantify economic indicators. As a result, the team obtained the first fine-grained economic map of regions like North Korea. The same algorithm was applied to other underdeveloped countries in Asia: North Korea, Nepal, Laos, Myanmar, Bangladesh, and Cambodia (see Image 1).

The key feature of the model is its "human-machine collaborative approach," which combines human input with AI predictions for areas with scarce data. Ten human experts compared satellite images and judged the economic conditions of each area, and the AI learned from this human data to assign economic scores to each image. The results showed that the human-AI collaborative approach outperformed machine-only learning algorithms.

< Image 1. Nightlight satellite images of North Korea (top-left: background photo provided by NASA's Earth Observatory). South Korea appears brightly lit compared to North Korea, which is mostly dark except for Pyongyang. In contrast, the model developed by the research team uses daytime satellite imagery to produce more detailed economic predictions for North Korea (top-right) and five Asian countries (bottom: background photo from Google Earth). >

The research was led by an interdisciplinary team of computer scientists, economists, and a geographer from KAIST and IBS (Donghyun Ahn, Meeyoung Cha, Jihee Kim), Sogang University (Hyunjoo Yang), HKUST (Sangyoon Park), and NUS (Jeasurk Yang). Dr. Charles Axelsson, Associate Editor at Nature Communications, handled the paper during peer review.

The research team found that the scores showed a strong correlation with traditional socio-economic metrics such as population density, employment, and number of businesses. This demonstrates the wide applicability and scalability of the approach, particularly in data-scarce countries. Furthermore, the model can detect annual changes in economic conditions at a detailed geospatial level without using any survey data (see Image 2).

< Image 2. Differences in satellite imagery and economic scores in North Korea between 2016 and 2019. Significant development was found in the Wonsan-Kalma area (top), one of the tourist development zones, but no changes were observed in the Wiwon Industrial Development Zone (bottom). (Background photo: Sentinel-2 satellite imagery provided by the European Space Agency (ESA).) >

The model would be especially valuable for rapidly monitoring progress on Sustainable Development Goals such as reducing poverty and promoting more equitable and sustainable growth internationally. It can also be adapted to measure various social and environmental indicators; for example, it can be trained to identify regions highly vulnerable to climate change and disasters, providing timely guidance for disaster relief efforts.

As an example, the researchers explored how North Korea changed before and after the United Nations sanctions against the country. Applying the model to satellite images of North Korea from 2016 and 2019, they discovered three key trends. First, economic growth became more concentrated in Pyongyang and major cities, exacerbating the urban-rural divide. Second, satellite imagery revealed significant changes in areas designated for tourism and economic development, such as new building construction. Third, traditional industrial and export development zones showed relatively minor changes.

Meeyoung Cha, a data scientist on the team, explained, "This is an important interdisciplinary effort to address global challenges like poverty. We plan to apply our AI algorithm to other international issues, such as monitoring carbon emissions, disaster damage detection, and the impact of climate change."

Jihee Kim, an economist on the team, commented that the approach would enable detailed examination of economic conditions in the developing world at low cost, reducing data disparities between developed and developing nations. She emphasized that this is essential because many public policies require economic measurements to achieve their goals, whether those goals are growth, equality, or sustainability.

The research team has made the source code publicly available via GitHub and plans to keep improving the technology, applying it to new satellite images updated annually.

The results of this study, with Ph.D. candidate Donghyun Ahn at KAIST and Ph.D. candidate Jeasurk Yang at NUS as joint first authors, were published in Nature Communications under the title "A human-machine collaborative approach measures economic development using satellite imagery."

< Photos of the main authors. 1. Donghyun Ahn, Ph.D. candidate, KAIST School of Computing; 2. Jeasurk Yang, Ph.D. candidate, Department of Geography, National University of Singapore; 3. Meeyoung Cha, Professor, KAIST School of Computing and CI at IBS; 4. Jihee Kim, Professor, KAIST School of Business and Technology Management; 5. Sangyoon Park, Professor, Division of Social Science, Hong Kong University of Science and Technology; 6. Hyunjoo Yang, Professor, Department of Economics, Sogang University >
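The human-machine idea can be sketched as pairwise ranking: humans judge which of two satellite grids looks more developed, and a model is fitted so that a scalar score per grid reproduces those judgments. The sketch below uses random stand-in features and simulated judgments; the team's released pipeline on GitHub is the authoritative implementation.

```python
# Hedged sketch of learning grid-level economic scores from human pairwise
# comparisons (not the released pipeline). Features stand in for CNN
# embeddings of six-square-kilometer Sentinel-2 grid images.
import numpy as np

rng = np.random.default_rng(7)
n_grids, n_feat = 1000, 32
X = rng.normal(size=(n_grids, n_feat))          # stand-in image embeddings
true_w = rng.normal(size=n_feat)                # hidden "development" direction

# simulated human judgments: which grid of each pair looks more developed
pairs = rng.integers(0, n_grids, size=(5000, 2))
labels = (X[pairs[:, 0]] @ true_w > X[pairs[:, 1]] @ true_w).astype(float)

w = np.zeros(n_feat)
for _ in range(500):                            # logistic pairwise ranking
    diff = X[pairs[:, 0]] - X[pairs[:, 1]]
    p = 1.0 / (1.0 + np.exp(-np.clip(diff @ w, -30, 30)))
    w += 0.1 * diff.T @ (labels - p) / len(pairs)

scores = X @ w                                  # one economic score per grid
print("score range:", scores.min().round(2), scores.max().round(2))
```

Fitting scores to relative judgments rather than absolute labels is what lets the method work where no ground-truth statistics exist, which is the core of the paper's approach.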
2023.12.07
Shaping the AI Semiconductor Ecosystem
- With AI semiconductors highlighted as a national strategic technology, KAIST's achievements in the field, accumulated through top-class education and research capabilities that surpass those of peer universities around the world, stand far apart from the rest of the pack.

As artificial intelligence semiconductors (hereafter AI semiconductors), systems of semiconductors designed specifically for the highly complicated computation AI needs for its learning and inference, stand out as a national strategic technology, the related achievements of KAIST, headed by President Kwang Hyung Lee, are attracting attention.

The Ministry of Science and ICT (MSIT) of Korea initiated a program last year to support the advancement of AI semiconductors, with the goal of occupying 20% of the global AI semiconductor market by 2030. This year, through industry-university-research discussions, the Ministry expanded the program with an additional 1.2 trillion won of investment over five years through its 'Support Plan for AI Semiconductor Industry Promotion.' Accordingly, major universities have begun putting together programs to train students in AI semiconductors.

KAIST has accumulated top-notch educational and research capabilities in the two core fields of AI semiconductors: semiconductors and artificial intelligence.

In the semiconductor field, the International Solid-State Circuits Conference (ISSCC) is the world's most prestigious conference on semiconductor integrated circuit design. Established in 1954, with more than 60% of its participants coming from companies including Samsung, Qualcomm, TSMC, and Intel, the conference focuses on the practical value of studies from an industrial point of view, earning it the nickname 'the Semiconductor Design Olympics.' At this conference, KAIST has kept its presence widely visible, leading world-class schools such as the Massachusetts Institute of Technology (MIT) and Stanford in the number of accepted papers for the past 17 years.

< Number of papers published at the International Solid-State Circuits Conference (ISSCC) in 2022, by nation and by institution >
< Number of papers by universities presented at the International Solid-State Circuits Conference (ISSCC), 2006-2022 >

KAIST has ranked among the top two universities in accepted ISSCC papers every year since 2006. Its average of 8.4 accepted papers per year over the 17 years from 2006 through 2022 is nearly double that of competitors such as MIT (4.6) and UCLA (3.6), an unparalleled lead. Within Korea, it holds second place overall after Samsung, the undisputed leader in semiconductor design. This year, KAIST also ranked first among universities at the Symposium on VLSI Technology and Circuits, a conference in the field of integrated circuits that rivals the ISSCC.

< Number of university papers accepted at the Symposium on VLSI Technology and Circuits in 2022 >

With KAIST researchers presenting new technologies at the frontiers of all key areas of the semiconductor industry, the quality of KAIST research is also maintained at the highest level. Professor Myoungsoo Jung's research team in the School of Electrical Engineering is developing heterogeneous computing environments with high energy efficiency in response to the industry's demand for high performance at low power. In materials, a research team led by Professor Byong-Guk Park of the Department of Materials Science and Engineering developed Spin-Orbit Torque (SOT)-based Magnetic RAM (MRAM), a memory that operates at least 10 times faster than conventional memories, suggesting a way to overcome the limitations of the existing von Neumann architecture. While providing solutions to major challenges in the current semiconductor industry, KAIST also actively pursues the new technologies needed to stake out emerging fields.

In quantum computing, which is attracting attention as the next-generation computing technology for cryptography and nonlinear computation, Professor Sanghyeon Kim's research team in the School of Electrical Engineering presented the world's first 3D-integrated quantum computing system at the 2021 VLSI Symposium. In neuromorphic computing, which is expected to bring remarkable advances in AI by borrowing principles from neurology, Professor Shinhyun Choi's team in the School of Electrical Engineering is developing a next-generation memristor that mimics neurons.

< Number of papers at the International Conference on Machine Learning (ICML) and the Conference on Neural Information Processing Systems (NeurIPS), two of the world's most prestigious conferences in artificial intelligence (KAIST: 6th in the world, 1st in Asia, in 2020) >

The field of artificial intelligence has also grown rapidly. Based on the number of papers at ICML and NeurIPS, KAIST ranked 6th in the world and 1st in Asia in 2020. Since 2012, KAIST's ranking has climbed steadily from 37th to 6th, rising 31 places over eight years. In 2021, 129 papers, about 40% of Korean papers at 11 top artificial intelligence conferences, were presented by KAIST. Thanks to KAIST's efforts, Korea ranked sixth in 2021, after the United States, China, the United Kingdom, Canada, and Germany, in the number of papers published at global AI conferences.

< Number of papers from Korea (and by KAIST) published at 11 top conferences in the field of artificial intelligence in 2021 >

In terms of content, KAIST's AI research is also at the forefront. Professor Hoi-Jun Yoo's research team in the School of Electrical Engineering compensated for the shortcomings of "edge networks" by implementing real-time AI learning on mobile devices. Materializing artificial intelligence requires data accumulation and a huge amount of computation: a high-performance server handles the massive computation, while user terminals run the "edge network" that collects data and performs simple computations. Professor Yoo's research greatly increased AI's processing speed and performance by assigning part of the learning task to the user terminal as well.

In June, a research team led by Professor Min-Soo Kim of the School of Computing presented a solution essential for processing super-scale artificial intelligence models. The super-scale machine learning system developed by the team achieves speeds up to 8.8 times faster than Google's TensorFlow or IBM's System DS, which are mainly used in industry.

KAIST is also making remarkable achievements in AI semiconductors themselves. In 2020, Professor Minsoo Rhu's research team in the School of Electrical Engineering developed the world's first AI semiconductor optimized for AI recommendation systems. Because an AI recommendation system must handle vast amounts of content and user information, it quickly hits an information bottleneck when run on a general-purpose AI system. Professor Rhu's team developed a semiconductor that runs 21 times faster than existing systems using Processing-In-Memory (PIM) technology, which improves efficiency by performing calculations inside RAM, the random-access memory usually used only to hold data temporarily just before it is processed. When PIM technology reaches the market, it is expected to drastically strengthen the competitiveness of Korean companies in the AI semiconductor market, given their existing strength in memory.

KAIST does not plan to rest on these achievements; it is making plans to widen its lead over competitors in artificial intelligence, semiconductors, and AI semiconductors. Following the establishment of Korea's first artificial intelligence research center in 1990, the Kim Jaechul AI Graduate School opened in 2019 to sustain the supply of experts in the field. In 2020, the Artificial Intelligence Semiconductor System Research Center was launched to conduct convergent research on AI and semiconductors, followed by the establishment of AI Institutes to promote "AI+X" research.

Based on these internal capabilities, KAIST is also working to train the human resources these fields need. It has established joint research centers with companies such as Naver, while collaborating with local governments such as Hwaseong City to nurture professional manpower. In 2021, KAIST signed an agreement with Samsung Electronics to establish the Department of Semiconductor System Engineering and is preparing a new semiconductor-specialist training program. The new department will select around 100 students every year from 2023 and provide special scholarships to all of them, along with industry support including field trips and internships at Samsung Electronics and joint workshops and on-site training.

KAIST has contributed significantly to the growth of the Korean semiconductor industry ecosystem, producing 25% of doctoral-level workers in the domestic semiconductor field and 20% of the CEOs with doctoral degrees at mid-sized and venture companies. With dawn breaking on the AI semiconductor ecosystem, whether KAIST will reprise this pivotal role is the question to watch.
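The edge/server division of labor described in Professor Yoo's work can be caricatured in a few lines. This is a conceptual sketch only, with invented shapes and a toy update rule: the terminal runs a small local layer and adapts it on-device, while the heavy computation stays on the server.

```python
# Conceptual sketch (not Prof. Yoo's design) of splitting work between an
# edge device and a server: the terminal computes cheap early-layer features
# and performs a local learning step; the server runs the heavy layers.
import numpy as np

rng = np.random.default_rng(3)
W_edge = rng.normal(size=(16, 8)) * 0.1     # small layer on the device
W_server = rng.normal(size=(8, 4)) * 0.1    # heavy model on the server

def edge_forward(x):
    return np.maximum(x @ W_edge, 0.0)      # simple ReLU feature extractor

def server_forward(h):
    return h @ W_server                     # heavy computation, done remotely

x = rng.normal(size=(32, 16))               # a batch collected on-device
h = edge_forward(x)                         # computed on the terminal
y = server_forward(h)                       # computed in the cloud

# on-device learning step: the terminal adapts its own layer toward a toy
# feature target; this is the workload moved to the edge in the research
target_h = h + rng.normal(scale=0.01, size=h.shape)
grad = x.T @ ((h - target_h) * (h > 0)) / len(x)
W_edge -= 0.1 * grad
print("edge layer updated locally:", W_edge.shape, "server output:", y.shape)
```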
2022.08.05
View 14547
KAIST Research Team Proves How a Neurotransmitter May Be the Key to Controlling Alzheimer's Toxicity
With nearly 50 million dementia patients worldwide, Alzheimer's disease is the most common neurodegenerative disease. Its main symptom is the impairment of general cognitive abilities, including the ability to speak and to remember. With populations aging and life expectancies ever extending, the importance of finding a cure is widely understood; however, even the cause of the disease has yet to be clearly defined.

A KAIST research team in the Department of Chemistry led by Professor Mi Hee Lim discovered a new role for somatostatin, a protein-based neurotransmitter, in reducing the toxicity produced by the pathogenic mechanisms underlying the development of Alzheimer's disease. The study was published in the July issue of Nature Chemistry under the title "Conformational and functional changes of the native neuropeptide somatostatin occur in the presence of copper and amyloid-β".

According to the amyloid hypothesis, the abnormal deposition of Aβ proteins causes the death of neuronal cells. While Aβ aggregates make up most of the aged plaques through fibrosis, recent studies have found high concentrations of transition metals in the plaques of Alzheimer's patients. This suggests a close interaction between metal ions and Aβ that accelerates the fibrosis of the protein. Copper in particular is a redox-active transition metal that can produce large amounts of reactive oxygen species and cause serious oxidative stress on cell organelles. Aβ proteins and transition metals can interact closely with neurotransmitters at synapses, but the direct effects of such abnormalities on the structure and function of neurotransmitters are yet to be understood.

<Figure 1. Functional shift of somatostatin (SST) by factors in the pathogenesis of Alzheimer's disease.>
<Figure 2. Somatostatin's loss of function as a neurotransmitter. a. Schematic diagram of SST auto-aggregation due to Alzheimer's pathological factors. b. SST's aggregation by copper ions. c. Predicted coordination structure and N-terminal folding of copper-SST. d. Inhibition of SST receptor binding specificity by metals.>

In their research, Professor Lim's team discovered that when somatostatin encounters copper, Aβ, and metal-Aβ complexes, it self-aggregates and ceases to perform its innate function of transmitting neural signals, but instead begins to attenuate the toxicity and aggregation of metal-Aβ complexes.

<Figure 3. Gain of function of somatostatin (SST) in the dementia setting. a. Predicted docking of SST and amyloid-β. b. SST turning metal-amyloid-β aggregates into an amorphous form. c. Cytotoxicity-mitigating effect of SST. d. SST mitigating the interaction between amyloid-β and the cell membrane.>

This research, by Dr. Jiyeon Han et al. of the KAIST Department of Chemistry, revealed the coordination structure between copper and somatostatin at the molecular level, suggested an aggregation mechanism on that basis, and showed how somatostatin's effect on the Aβ aggregation pathway depends on the presence or absence of metals. The team further confirmed, for the first time, somatostatin's receptor binding, its interactions with cell membranes, and its effects on cytotoxicity, drawing international attention.
Professor Mi Hee Lim said, "This research has great significance in having discovered a new role of neurotransmitters in the pathogenesis of Alzheimer's disease." She added, "We expect this research to contribute to defining the pathogenic network of neurodegenerative diseases caused by aging, and to the development of future biomarkers and medicines." The research was conducted jointly by Professor Seung-Hee Lee's team of the KAIST Department of Biological Sciences, Professor Kiyoung Park's team of the KAIST Department of Chemistry, and Professor Yulong Li's team at Peking University, and was funded by the Basic Science Research Program of the National Research Foundation of Korea and by KAIST. For more information about the research team, visit the website: https://sites.google.com/site/miheelimlab/1-professor-mi-hee-lim.
2022.07.29
View 17524
An AI-Based, Indoor/Outdoor-Integrated (IOI) GPS System Shakes Up the Terrain of Positioning Technology
KAIST breaks new ground in positioning technology with an AI-integrated GPS board that works both indoors and out.

KAIST (President Kwang Hyung Lee) announced on the 8th that Professor Dong-Soo Han's research team (Intelligent Service Integration Lab) from the School of Computing has developed a GPS system that works both indoors and outdoors with consistent precision regardless of the environment. This Indoor/Outdoor-Integrated GPS System, or IOI GPS System for short, uses GPS signals outdoors and estimates locations indoors using signals from multiple sources such as inertial, pressure, geomagnetic, and light sensors. To this end, the research team developed techniques to detect environmental changes such as entering a building, along with artificial-intelligence-based methods to detect building entrances, ground floors, stairs, elevators, and floor levels. These landmark-detection techniques were combined with pedestrian dead reckoning (PDR), a navigation method for pedestrians, into the so-called "sensor-fusion positioning algorithm".

Until now, it has been common to estimate locations from wireless LAN or cellular base station signals in spaces the GPS signal cannot reach. The IOI GPS, by contrast, enables positioning even in buildings with no wireless signals and no indoor maps. The algorithm developed by the research team can provide accurate floor information inside a building, something even the positioning services of big tech companies like Google and Apple do not provide. Unlike other positioning methods that rely on visual data, geomagnetic positioning techniques, or wireless LAN, the system also has the advantage of requiring no prior preparation. In other words, the foundation for a universal GPS system that works both indoors and outdoors anywhere in the world is now ready.

The research team also produced a circuit board to run the IOI GPS System, mounted with chips that receive and process GPS, Wi-Fi, and Bluetooth signals, along with an inertial sensor, a barometer, a magnetometer, and a light sensor. The lab's sensor-fusion positioning algorithm is incorporated into the board. When the accuracy of the IOI GPS board was tested in the N1 building on KAIST's main campus in Daejeon, it achieved about 95% accuracy in floor estimation and about 3 to 6 meters of accuracy in position estimation. The indoor/outdoor transition, changing the navigation mode, completed in about 0.3 seconds. Combined with the PDR technique, the position estimates improved further, to around one meter.

The research team is now assembling a tag with the built-in positioning board and applying it to location-based docent services for visitors at museums, science centers, and art galleries. The IOI GPS tag can be used to keep track of children or the elderly, and to locate people or rescue workers in disaster-stricken or hazardous sites. In parallel, a sensor-fusion positioning algorithm and positioning board for vehicles are under development to track vehicles entering indoor areas such as underground parking lots. Once the IOI GPS board for vehicles is manufactured, the team plans to collaborate with car manufacturers and car rental companies, and will also develop a sensor-fusion positioning algorithm for smartphones.
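Two of the ingredients described above lend themselves to a compact illustration: switching between GPS and indoor positioning when the satellite signal fades, and estimating the floor from barometric pressure. The sketch below is a hypothetical simplification under stated assumptions (a 3 m storey height, an illustrative signal threshold, and the standard barometric formula); it is not the team's algorithm.

```python
FLOOR_HEIGHT_M = 3.0          # assumed storey height
GPS_SNR_INDOOR_DB = 20.0      # assumed cut-off for "satellite signal lost"

def barometric_altitude(pressure_hpa, p0_hpa=1013.25):
    """Standard barometric formula: pressure (hPa) -> altitude (m)."""
    return 44330.0 * (1.0 - (pressure_hpa / p0_hpa) ** (1.0 / 5.255))

def estimate_floor(pressure_hpa, ground_pressure_hpa):
    """Floor index relative to a ground-floor reference pressure."""
    delta = barometric_altitude(pressure_hpa) - barometric_altitude(ground_pressure_hpa)
    return round(delta / FLOOR_HEIGHT_M)

def positioning_mode(gps_snr_db):
    """Crude indoor/outdoor switch on GPS signal strength."""
    return "GPS" if gps_snr_db >= GPS_SNR_INDOOR_DB else "indoor-fusion"

# A pressure drop of ~1.1 hPa corresponds to roughly +9 m, i.e. floor 3.
print(positioning_mode(12.0))             # indoor-fusion
print(estimate_floor(1012.15, 1013.25))   # 3
```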
Telecommunication companies seeking to diversify their offerings in location-based services will also be interested in using the IOI GPS.

Professor Dong-Soo Han of the School of Computing, who leads the research team, said, "This is the first indoor/outdoor-integrated GPS system that can pinpoint locations in buildings with no wireless signal and no indoor map, and the areas it can be applied to are virtually unlimited. Once its integration with the Korea Augmentation Satellite System (KASS) and the Korean GPS (KPS) system, which began this year, is completed, Korea can become the leader in the field of GPS both indoors and outdoors. We also plan to manufacture semiconductor chips for the IOI GPS System to maintain the technology gap between Korea and its followers." He added, "Guidance services at science centers, museums, and art galleries that use IOI GPS tags can provide data that is very helpful for analyzing visitors' viewing traces, an essential piece of information when deciding how to organize the next exhibit. We will work on applying it to the National Science Museum first."

The projects to develop the IOI GPS system and the trace analysis system for science centers were supported through the Science, Culture, Exhibits and Services Capability Enhancement Program of the Ministry of Science and ICT.

Profile: Dong-Soo Han, Ph.D.
Professor
ddsshhan@kaist.ac.kr
http://isilab.kaist.ac.kr
Intelligent Service Integration Lab.
School of Computing
http://kaist.ac.kr/en/
Korea Advanced Institute of Science and Technology (KAIST)
Daejeon, Republic of Korea
2022.07.13
View 14039
Professor Juho Kim’s Team Wins Best Paper Award at ACM CHI 2022
The research team led by Professor Juho Kim from the KAIST School of Computing won a Best Paper Award and an Honorable Mention Award at the Association for Computing Machinery Conference on Human Factors in Computing Systems (ACM CHI), held from April 30 to May 6. ACM CHI is the world's most recognized conference in the field of human-computer interaction (HCI), ranked first among all HCI-related journals and conferences by Google Scholar's h5-index. Best Paper Awards go to works in the top one percent of accepted papers, and Honorable Mention Awards to the top five percent. Professor Kim presented a total of seven papers at ACM CHI 2022, tying for the largest number of papers. A total of 19 papers were affiliated with KAIST, placing it fifth among all participating institutions and demonstrating KAIST's research strength.

One of Professor Kim's research teams, composed of Jeongyeon Kim (first author, MS graduate) from the School of Computing, MS candidate Yubin Choi from the School of Electrical Engineering, and Dr. Meng Xia (formerly a postdoctoral associate in the School of Computing, now at Carnegie Mellon University), received a Best Paper Award for "Mobile-Friendly Content Design for MOOCs: Challenges, Requirements, and Design Opportunities". The study analyzed the difficulties learners face when watching video-based educational content in mobile environments and suggested guidelines for addressing them. Analyzing 134 survey responses and 21 interviews, the team found that text that is too small or overcrowded is the main factor degrading the legibility of video content, while frequently changing lighting, noise, and surroundings also disturb the learning experience. Based on these findings, the team assessed the mobile suitability of 41,722 frames from 101 video lectures and confirmed that adequacy is generally low: for text size, for example, only 24.5% of the frames were adequate for learning in mobile environments. To address this, the team proposed guidelines for improving the legibility of video content and overcoming the difficulties of mobile learning environments.

As the importance of, and dependency on, video-based learning continues to rise in the wake of the pandemic, this research is meaningful in offering a way to analyze and tackle the difficulties of users learning from small mobile screens. The paper also suggested technology that can address problems in video-based learning through human-AI collaboration, enhancing existing video lectures and improving learning experiences; this technology can be applied to various video-based platforms and to content creation.
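As a concrete illustration of a frame-level adequacy check of the kind the study performed at scale, the hypothetical sketch below flags a lecture frame whose detected text lines would render too small on a phone. The 70 mm screen height and 1.0 mm minimum text height are illustrative assumptions, not thresholds from the paper.

```python
from dataclasses import dataclass

@dataclass
class TextBox:
    height_px: int  # pixel height of one detected text line (e.g., from OCR)

PHONE_SCREEN_HEIGHT_MM = 70.0  # assumed phone screen height, landscape video

def legible_on_mobile(boxes, frame_height_px, min_text_mm=1.0):
    """A frame is 'mobile-adequate' if every detected text line would be
    at least min_text_mm tall when the frame fills the phone screen."""
    mm_per_px = PHONE_SCREEN_HEIGHT_MM / frame_height_px
    return all(b.height_px * mm_per_px >= min_text_mm for b in boxes)

frame = [TextBox(18), TextBox(12)]     # text line heights in a 1080p frame
print(legible_on_mobile(frame, 1080))  # 12 px is ~0.78 mm on screen -> False
```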
Meanwhile, a research team composed of Ph.D. candidate Tae Soo Kim (first author), MS candidate DaEun Choi, and Ph.D. candidate Yoonseo Choi from the School of Computing received an Honorable Mention Award for "Stylette: Styling the Web with Natural Language", a novel interface technology that lets nonexperts unfamiliar with technical jargon edit website features in plain language.

People often find it difficult to use websites or to find the information they need due to accessibility issues, device constraints, inconvenient design, or style preferences. Yet it is not easy for laypeople without programming or design expertise to edit website features, and most simply put up with the inconvenience. But what if the system could read users' intentions from everyday language such as "emphasize this part a little more" or "I want a more modern design", and edit the features automatically? Starting from this question, Professor Kim's team developed Stylette, a system in which AI analyzes a user's request expressed in natural language and automatically recommends a new style that best fits the intention.

The team built the system by combining language AI, visual AI, and user interface technologies. On the linguistic side, a large-scale language model converts intentions expressed in everyday language into appropriate style elements. On the visual side, computer vision AI compares 1.7 million existing web design features and recommends a style suited to the current website. In an experiment in which 40 nonexperts were asked to edit a website design, the participants using the system showed double the success rate of the control group in 35% less time. The research is meaningful in demonstrating a practical case of AI technology enabling intuitive interactions with users. The developed technology can be applied to existing design applications and web browsers as a plug-in, and can be used to improve websites or advertisements by collecting users' natural-language intention data at scale.
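For a sense of the interface contract behind such a system, here is a toy sketch: an utterance goes in, candidate CSS edits come out. The real Stylette uses a large language model and visual analysis for this step; the keyword table below is a stand-in assumption so the example runs on its own.

```python
# Illustrative keyword-to-CSS table standing in for the language-model step.
SUGGESTION_TABLE = {
    "emphasize": {"font-weight": "bold", "font-size": "1.2em"},
    "modern":    {"font-family": "sans-serif", "border-radius": "8px"},
    "calmer":    {"color": "#445", "background-color": "#f7f7fa"},
}

def suggest_style(utterance: str) -> dict:
    """Map an everyday-language request to candidate CSS property edits."""
    edits = {}
    for keyword, css in SUGGESTION_TABLE.items():
        if keyword in utterance.lower():
            edits.update(css)
    return edits

print(suggest_style("Emphasize this part a little more"))
# {'font-weight': 'bold', 'font-size': '1.2em'}
```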
2022.06.13
View 10798