KAIST
NEWS
KAIST Researchers Unveil an AI that Generates "Unexpectedly Original" Designs
< Photo 1. Professor Jaesik Choi, KAIST Kim Jaechul Graduate School of AI >

Text-based image generation models can now automatically create high-resolution, high-quality images from natural language descriptions alone. However, when a typical model such as Stable Diffusion is given the text "creative," its ability to generate truly creative images remains limited. KAIST researchers have developed a technology that enhances the creativity of text-based image generation models such as Stable Diffusion without additional training, allowing AI to draw creative chair designs that are far from ordinary. Professor Jaesik Choi's research team at the KAIST Kim Jaechul Graduate School of AI, in collaboration with NAVER AI Lab, developed this technology to enhance the creative generation of AI generative models without the need for additional training.

< Photo 2. Gayoung Lee, Researcher at NAVER AI Lab; Dahee Kwon, Ph.D. Candidate at KAIST Kim Jaechul Graduate School of AI; Jiyeon Han, Ph.D. Candidate at KAIST Kim Jaechul Graduate School of AI; Junho Kim, Researcher at NAVER AI Lab >

Professor Choi's team enhances creative generation by amplifying the internal feature maps of text-based image generation models. They discovered that shallow blocks within the model play a crucial role in creative generation. They also confirmed that, after converting feature maps to the frequency domain, amplifying values in the high-frequency region leads to noise or fragmented color patterns; accordingly, they demonstrated that amplifying the low-frequency region of shallow blocks effectively enhances creative generation. Taking originality and usefulness as the two key elements defining creativity, the team proposed an algorithm that automatically selects the optimal amplification value for each block within the generative model.
Using this algorithm, appropriately amplifying the internal feature maps of a pre-trained Stable Diffusion model enhanced creative generation without any additional classification data or training.

< Figure 1. Overview of the methodology researched by the development team. After converting the internal feature map of a pre-trained generative model into the frequency domain through a Fast Fourier Transform, the low-frequency region of the feature map is amplified, then transformed back into feature space via an Inverse Fast Fourier Transform to generate an image. >

Using various metrics, the research team quantitatively showed that the algorithm generates images that are more novel than those from existing models without significantly compromising utility. In particular, they confirmed an increase in image diversity by mitigating the mode-collapse problem of the SDXL-Turbo model, which was developed to greatly improve the image generation speed of the Stable Diffusion XL (SDXL) model. User studies likewise confirmed a significant improvement in novelty relative to utility compared with existing methods.

Jiyeon Han and Dahee Kwon, Ph.D. candidates at KAIST and co-first authors of the paper, stated, "This is the first methodology to enhance the creative generation of generative models without new training or fine-tuning. We have shown that the latent creativity within trained AI generative models can be enhanced through feature map manipulation." They added, "This research makes it easy to generate creative images using only text from existing trained models. It is expected to provide new inspiration in various fields, such as creative product design, and contribute to the practical and useful application of AI models in the creative ecosystem."

< Figure 2. Application examples of the methodology researched by the development team. 
Various Stable Diffusion models generate novel images compared to existing generations while maintaining the meaning of the generated object. >

This research, co-first-authored by Jiyeon Han and Dahee Kwon, Ph.D. candidates at the KAIST Kim Jaechul Graduate School of AI, was presented on June 16 at the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
* Paper Title: Enhancing Creative Generation on Stable Diffusion-based Models
* DOI: https://doi.org/10.48550/arXiv.2503.23538

This research was supported by the KAIST-NAVER Ultra-creative AI Research Center; the Innovation Growth Engine Project (Explainable AI), the AI Research Hub Project, and research on flexible evolving AI technology development in line with increasingly strengthened ethical policies, all funded by the Ministry of Science and ICT through the Institute for Information & Communications Technology Promotion. It also received support from the KAIST AI Graduate School Program and was carried out at the KAIST Future Defense AI Specialized Research Center with support from the Defense Acquisition Program Administration and the Agency for Defense Development.
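The frequency-domain operation the article describes (FFT, amplify the low-frequency region, inverse FFT) can be illustrated with a short sketch. This is a minimal illustration of the idea, not the team's released code; the mask radius and gain are illustrative assumptions, and the paper's algorithm selects the amplification per block automatically.

```python
import numpy as np

def amplify_low_freq(feature_map, radius_frac=0.25, gain=1.5):
    """Amplify the low-frequency region of a 2D feature map.

    Sketch of the idea: FFT the map, boost frequencies near the
    origin (low frequencies), inverse-FFT back to feature space.
    """
    h, w = feature_map.shape
    # Shift the spectrum so low frequencies sit at the center.
    spectrum = np.fft.fftshift(np.fft.fft2(feature_map))
    # Boolean mask selecting a disc around the spectrum center.
    yy, xx = np.ogrid[:h, :w]
    dist = np.sqrt((yy - h / 2) ** 2 + (xx - w / 2) ** 2)
    mask = dist <= radius_frac * min(h, w) / 2
    spectrum[mask] *= gain  # amplify low frequencies only
    # Back to the spatial (feature) domain.
    return np.fft.ifft2(np.fft.ifftshift(spectrum)).real

fmap = np.random.default_rng(0).standard_normal((64, 64))
boosted = amplify_low_freq(fmap)
print(boosted.shape)  # (64, 64)
```

Amplifying the high-frequency tail instead (inverting the mask) would boost exactly the components the authors found to produce noise and fragmented color patterns.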
2025.06.20
KAIST’s Next-Generation Small Satellite-2 Completes a Two-Year Mission – the Successful Observation of Arctic and Forest Changes
KAIST (President Kwang-Hyung Lee) announced on the 25th of May that the Next-Generation Small Satellite-2, developed by the Satellite Technology Research Center (SaTReC, Director Jaeheung Han) and launched aboard the third Nuri rocket from the Naro Space Center at 18:24 on May 25, 2023, has successfully completed its two-year core mission of verifying homegrown Synthetic Aperture Radar (SAR) technology and conducting all-weather Earth observations.

The SAR system onboard the satellite was designed, manufactured, and tested domestically for the first time by KAIST’s Satellite Research Center. As of May 25, 2025, it has successfully completed its two-year in-orbit technology demonstration mission. Particularly noteworthy is that the SAR system was mounted on the 100 kg-class Next-Generation Small Satellite-2, marking a major step forward in the miniaturization and weight reduction of spaceborne radar systems and strengthening Korea’s competitiveness in satellite technology.

< Figure 1. Conceptual diagram of Earth observation by the Next-Generation Small Satellite No. 2's synthetic aperture radar >

The developed SAR is an active sensor that uses electromagnetic waves, allowing all-weather image acquisition regardless of time of day or weather conditions. This makes it especially useful for monitoring regions like the Korean Peninsula, which frequently experiences rain and cloud cover, as it can observe even in cloudy and rainy conditions or darkness. Since its launch, the satellite has carried out three to four image acquisitions per day on average, undergoing functionality checks and technology verifications. To date, it has completed over 1,200 Earth observations, and the SAR continues to perform stably, supporting ongoing observation tasks even beyond its designated mission lifespan.

< Photo 1. Researchers of the Next-Generation Small Satellite No. 2 at SaTReC, taken at the KAIST ground station. 
(From left) Sung-Og Park, Jung-soo Lee, Hongyoung Park, TaeSeong Jang (Next-Generation Small Satellite No. 2 Project Manager), Seyeon Kim, Mi Young Park, Yongmin Kim, DongGuk Kim > Although still in the domestic technology verification stage, KAIST’s Satellite Research Center has been collaborating with the Korea Polar Research Institute (Director Hyoung Chul Shin) and the Korea National Park Research Institute (Director Jin Tae Kim) since March 2024 to prioritize imaging of areas of interest related to Arctic ice changes and forest ecosystem monitoring. KAIST’s Satellite Research Center is conducting repeated observations of Arctic sea ice, and the Remote Sensing and Cryosphere Information Center of the Korea Polar Research Institute is analyzing the results using time-series data to precisely track changes in sea ice area and structure due to climate change. < Photo 2. Radar Images from Observations on July 24, 2024 - Around the Atchafalaya River in Louisiana, USA. The Wax Lake Delta is seen growing like a leaf. > Recently, the Korea Polar Research Institute (KOPRI), by integrating observation data from the Next-Generation Small Satellite No. 2 and the European Space Agency's (ESA) Sentinel-1, detected a significant increase of 15 km² in the area of an ice lake behind Canada's Milne Ice Shelf (a massive, floating layer of ice where glaciers flow from land into the sea) between 2021 and 2025. This has exacerbated structural instability and is analyzed as an important sign indicating the acceleration of Arctic climate change. Hyuncheol Kim, Director of the Remote Sensing and Cryosphere Information Center at the Korea Polar Research Institute, stated, “This research clearly demonstrates how vulnerable Arctic ice shelves are to climate change. 
We will continue to monitor and analyze Arctic environmental changes using the SAR aboard the Next-Generation Small Satellite-2 and promote international collaboration.” He added, “We also plan to present these findings at international academic conferences and expand educational and outreach efforts to raise public awareness about changes in the Arctic environment.” < Photo 3. Sinduri Coastal Dune, Taean Coastal National Park, Taean-gun, Chungcheongnam-do > In collaboration with the Climate Change Research Center of the National Park Research Institute, SAR imagery from the satellite is also being used to study phenological shifts due to climate change, the dieback of conifers in high-altitude zones, and landslide monitoring in forest ecosystems. Researchers are also analyzing the spatial distribution of carbon storage in forest areas using satellite data, comparing it with field measurements to improve accuracy. Because SAR is unaffected by light and weather conditions, it can observe through fire and smoke during wildfires, making it an exceptionally effective tool for the regular monitoring of large protected areas. It is expected to play an important role in shaping future forest conservation policies. In addition, KAIST’s Satellite Research Center is working on a system to convert the satellite’s technology demonstration data into standardized imagery products, with budget support from the Korea Aerospace Administration (Administrator Youngbin Yoon), making the data more accessible to research institutions and boosting the usability of the satellite’s observations. < Photo 4. Jang Bogo Station, Antarctica > Jaeheung Han, Director of the Satellite Research Center, said, “The significance of the Next-Generation Small Satellite-2 lies not only in the success of domestic development, but also in its direct contribution to real-world environmental analysis and national research efforts. 
We will continue to focus on expanding the application of SAR data from the satellite.” KAIST President Kwang-Hyung Lee remarked, “This satellite is a product of KAIST’s advanced space technology and the innovation capacity of its researchers. Its success signals KAIST’s potential to lead in future space technology talent development and R&D, and we will continue to accelerate efforts in this direction.” < Photo 5. Confirmation of changes in the expanded area of the Milne Ice Shelf lake using observation data from Next-Generation Small Satellite No. 2 and Sentinel-1 >
2025.05.25
KAIST Secures Core Technology for Ultra-High-Resolution Image Sensors
A joint research team from Korea and the United States has developed next-generation, high-resolution image sensor technology with higher power efficiency and a smaller size compared to existing sensors. Notably, they have secured foundational technology for ultra-high-resolution shortwave infrared (SWIR) image sensors, an area currently dominated by Sony, paving the way for future market entry. KAIST (represented by President Kwang Hyung Lee) announced on the 20th of November that a research team led by Professor SangHyeon Kim from the School of Electrical Engineering, in collaboration with Inha University and Yale University in the U.S., has developed an ultra-thin broadband photodiode (PD), marking a significant breakthrough in high-performance image sensor technology. This research drastically improves the trade-off between the absorption layer thickness and quantum efficiency found in conventional photodiode technology. Specifically, it achieved high quantum efficiency of over 70% even in an absorption layer thinner than one micrometer (μm), reducing the thickness of the absorption layer by approximately 70% compared to existing technologies. A thinner absorption layer simplifies pixel processing, allowing for higher resolution and smoother carrier diffusion, which is advantageous for light carrier acquisition while also reducing the cost. However, a fundamental issue with thinner absorption layers is the reduced absorption of long-wavelength light. < Figure 1. Schematic diagram of the InGaAs photodiode image sensor integrated on the Guided-Mode Resonance (GMR) structure proposed in this study (left), a photograph of the fabricated wafer, and a scanning electron microscope (SEM) image of the periodic patterns (right) > The research team introduced a guided-mode resonance (GMR) structure* that enables high-efficiency light absorption across a wide spectral range from 400 nanometers (nm) to 1,700 nanometers (nm). 
This wavelength range includes not only visible light but also light in the SWIR region, making it valuable for various industrial applications.

*Guided-Mode Resonance (GMR) Structure: A concept used in electromagnetics in which a (light) wave resonates at a specific wavelength, forming a strong electric/magnetic field. Since energy is maximized under these conditions, it has been used to increase antenna or radar efficiency.

The improved performance in the SWIR region is expected to play a significant role in developing next-generation image sensors with increasingly high resolutions. The GMR structure, in particular, holds potential for further enhancing resolution and other performance metrics through hybrid integration and monolithic 3D integration with complementary metal-oxide-semiconductor (CMOS)-based readout integrated circuits (ROIC).

< Figure 2. Benchmark for state-of-the-art InGaAs-based SWIR pixels with simulated EQE lines as a function of TAL variation. Performance is maintained while the absorption layer thickness is reduced by 50% to 70%, from 2.1 micrometers or more to 1 micrometer or less >

The research team has significantly enhanced international competitiveness in low-power devices and ultra-high-resolution imaging technology, opening up possibilities for applications in digital cameras, security systems, medical and industrial image sensors, as well as future ultra-high-resolution sensors for autonomous driving, aerospace, and satellite observation. Professor SangHyeon Kim, the lead researcher, commented, “This research demonstrates that significantly higher performance than existing technologies can be achieved even with ultra-thin absorption layers.”

< Figure 3. Top optical microscope image and cross-sectional scanning electron microscope image of the InGaAs photodiode image sensor fabricated on the GMR structure (left). 
Improved quantum efficiency performance of the ultra-thin image sensor (red) fabricated with the technology proposed in this study (right) >

The results of this research were published on the 15th of November in the prestigious international journal Light: Science & Applications (JCR 2.9%, IF=20.6), with Professor Dae-Myung Geum of Inha University (formerly a KAIST postdoctoral researcher) and Dr. Jinha Lim (currently a postdoctoral researcher at Yale University) as co-first authors. (Paper title: “Highly-efficient (>70%) and Wide-spectral (400 nm - 1700 nm) sub-micron-thick InGaAs photodiodes for future high-resolution image sensors”)

This study was supported by the National Research Foundation of Korea.
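For context on the reported figures, external quantum efficiency relates to a photodiode's responsivity through the standard relation R = ηqλ/(hc). A quick back-of-the-envelope calculation (not from the paper) shows what the reported >70% efficiency implies at the 1,700 nm edge of the demonstrated range:

```python
# Physical constants (SI units)
h = 6.62607015e-34   # Planck constant, J*s
c = 2.99792458e8     # speed of light, m/s
q = 1.602176634e-19  # elementary charge, C

def responsivity(eta, wavelength_m):
    """Photodiode responsivity in A/W from external quantum efficiency eta,
    via R = eta * q * wavelength / (h * c)."""
    return eta * q * wavelength_m / (h * c)

# 70% quantum efficiency at 1700 nm -> roughly 0.96 A/W
print(round(responsivity(0.70, 1700e-9), 2))
```

The longer the wavelength, the more photocurrent each watt of absorbed light yields at fixed quantum efficiency, which is one reason SWIR performance figures are quoted alongside the wavelength.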
2024.11.22
KAIST Employs Image-recognition AI to Determine Battery Composition and Conditions
An international collaborative research team has developed an image-recognition technology that can accurately determine the elemental composition and the number of charge-discharge cycles of a battery by examining only its surface morphology with a trained AI.

KAIST (President Kwang-Hyung Lee) announced on July 2nd that Professor Seungbum Hong from the Department of Materials Science and Engineering, in collaboration with the Electronics and Telecommunications Research Institute (ETRI) and Drexel University in the United States, has developed a method to predict the major elemental composition and charge-discharge state of NCM cathode materials with 99.6% accuracy using convolutional neural networks (CNN)*.

*Convolutional Neural Network (CNN): A type of multi-layer, feed-forward artificial neural network used for analyzing visual images.

The research team noted that while scanning electron microscopy (SEM) is used in semiconductor manufacturing to inspect wafer defects, it is rarely used in battery inspections. For batteries, SEM is used only at research sites to analyze particle size, and for deteriorated battery materials, reliability is predicted from broken particles and the shape of the breakage. The team reasoned that it would be groundbreaking if automated SEM could be used in battery production, just as in semiconductor manufacturing, to inspect the surface of the cathode material and determine whether it was synthesized with the desired composition and whether its lifespan would be reliable, thereby reducing the defect rate.

< Figure 1. Example images of true cases and their grad-CAM overlays from the best trained network. >

The researchers trained a CNN-based AI, of the kind applied in autonomous vehicles, on the surface images of battery materials, enabling it to predict the major elemental composition and charge-discharge cycle states of the cathode materials. 
They found that while the method could accurately predict the composition of materials with additives, it had lower accuracy for predicting charge-discharge states. The team plans to further train the AI with various battery material morphologies produced through different processes and ultimately use it for inspecting the compositional uniformity and predicting the lifespan of next-generation batteries.

Professor Joshua C. Agar, one of the collaborating researchers of the project from the Department of Mechanical Engineering and Mechanics of Drexel University, said, "In the future, artificial intelligence is expected to be applied not only to battery materials but also to various dynamic processes in functional materials synthesis, clean energy generation in fusion, and understanding the foundations of particles and the universe." Professor Seungbum Hong from KAIST, who led the research, stated, "This research is significant as it is the first in the world to develop an AI-based methodology that can quickly and accurately predict the major elemental composition and the state of the battery from the structural data of micron-scale SEM images. The methodology developed in this study for identifying the composition and state of battery materials based on microscopic images is expected to play a crucial role in improving the performance and quality of battery materials in the future."

< Figure 2. Accuracies of CNN Model predictions on SEM images of NCM cathode materials with additives under various conditions. >

This research was conducted by KAIST’s Materials Science and Engineering Department graduates Dr. Jimin Oh and Dr. Jiwon Yeom, the co-first authors, in collaboration with Professor Josh Agar and Dr. Kwang Man Kim from ETRI. It was supported by the National Research Foundation of Korea, the KAIST Global Singularity project, and international collaboration with the US research team. 
The results were published in the international journal npj Computational Materials on May 4. (Paper Title: “Composition and state prediction of lithium-ion cathode via convolutional neural network trained on scanning electron microscopy images”)
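The pipeline the article describes, a CNN mapping SEM surface images to composition and state labels, can be sketched at toy scale. The filters and weights below are random stand-ins, not the trained model; a real system would learn them from labeled SEM micrographs, and the layer sizes are illustrative assumptions.

```python
import numpy as np

def conv2d(img, kernel):
    """Valid-mode 2D cross-correlation in pure NumPy (the conv layer's core op)."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def classify(sem_image, kernels, weights):
    """Toy forward pass: conv -> ReLU -> global average pool -> linear -> softmax."""
    feats = np.array([np.maximum(conv2d(sem_image, k), 0).mean() for k in kernels])
    return softmax(weights @ feats)

rng = np.random.default_rng(1)
sem_image = rng.random((32, 32))          # stand-in for a SEM micrograph
kernels = rng.standard_normal((4, 3, 3))  # 4 filters (random here, learned in practice)
weights = rng.standard_normal((3, 4))     # 3 hypothetical composition classes
probs = classify(sem_image, kernels, weights)
print(probs.shape)  # (3,)
```

The softmax output is a probability over classes, which is how a composition or cycle-state prediction with a reported accuracy like 99.6% would be read off: the argmax class is the prediction.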
2024.07.02
North Korea and Beyond: AI-Powered Satellite Analysis Reveals the Unseen Economic Landscape of Underdeveloped Nations
- A joint research team in computer science, economics, and geography has developed an artificial intelligence (AI) technology to measure grid-level economic development within six-square-kilometer regions.
- This AI technology is applicable in regions with limited statistical data (e.g., North Korea), supporting international efforts to propose policies for economic growth and poverty reduction in underdeveloped countries.
- The research team plans to make this technology freely available for use to contribute to the United Nations' Sustainable Development Goals (SDGs).

The United Nations reports that more than 700 million people are in extreme poverty, earning less than two dollars a day. However, an accurate assessment of poverty remains a global challenge. For example, 53 countries have not conducted agricultural surveys in the past 15 years, and 17 countries have not published a population census. To fill this data gap, new technologies are being explored to estimate poverty using alternative sources such as street views, aerial photos, and satellite images. The paper published in Nature Communications demonstrates how artificial intelligence (AI) can help analyze economic conditions from daytime satellite imagery. This new technology can even apply to the least developed countries - such as North Korea - that do not have reliable statistical data for typical machine learning training. The researchers used Sentinel-2 satellite images from the European Space Agency (ESA) that are publicly available. They split these images into small six-square-kilometer grids. At this zoom level, visual information such as buildings, roads, and greenery can be used to quantify economic indicators. As a result, the team obtained the first ever fine-grained economic map of regions like North Korea. The same algorithm was applied to other underdeveloped countries in Asia: North Korea, Nepal, Laos, Myanmar, Bangladesh, and Cambodia (see Image 1). 
The key feature of their research model is the "human-machine collaborative approach," which lets researchers combine human input with AI predictions for areas with scarce data. In this research, ten human experts compared satellite images and judged the economic conditions in each area, with the AI learning from this human data and assigning an economic score to each image. The results showed that the human-AI collaborative approach outperformed machine-only learning algorithms.

< Image 1. Nightlight satellite images of North Korea (Top-left: Background photo provided by NASA's Earth Observatory). South Korea appears brightly lit compared to North Korea, which is mostly dark except for Pyongyang. In contrast, the model developed by the research team uses daytime satellite imagery to produce more detailed economic predictions for North Korea (top-right) and five Asian countries (Bottom: Background photo from Google Earth). >

The research was led by an interdisciplinary team of computer scientists, economists, and a geographer from KAIST & IBS (Donghyun Ahn, Meeyoung Cha, Jihee Kim), Sogang University (Hyunjoo Yang), HKUST (Sangyoon Park), and NUS (Jeasurk Yang). Dr. Charles Axelsson, Associate Editor at Nature Communications, handled this paper during the peer review process at the journal.

The research team found that the scores showed a strong correlation with traditional socio-economic metrics such as population density, employment, and number of businesses. This demonstrates the wide applicability and scalability of the approach, particularly in data-scarce countries. Furthermore, the model's strength lies in its ability to detect annual changes in economic conditions at a more detailed geospatial level without using any survey data (see Image 2).

< Image 2. Differences in satellite imagery and economic scores in North Korea between 2016 and 2019. 
Significant development was found in the Wonsan Kalma area (top), one of the tourist development zones, but no changes were observed in the Wiwon Industrial Development Zone (bottom). (Background photo: Sentinel-2 satellite imagery provided by the European Space Agency (ESA)). > This model would be especially valuable for rapidly monitoring the progress of Sustainable Development Goals such as reducing poverty and promoting more equitable and sustainable growth on an international scale. The model can also be adapted to measure various social and environmental indicators. For example, it can be trained to identify regions with high vulnerability to climate change and disasters to provide timely guidance on disaster relief efforts. As an example, the researchers explored how North Korea changed before and after the United Nations sanctions against the country. By applying the model to satellite images of North Korea both in 2016 and in 2019, the researchers discovered three key trends in the country's economic development between 2016 and 2019. First, economic growth in North Korea became more concentrated in Pyongyang and major cities, exacerbating the urban-rural divide. Second, satellite imagery revealed significant changes in areas designated for tourism and economic development, such as new building construction and other meaningful alterations. Third, traditional industrial and export development zones showed relatively minor changes. Meeyoung Cha, a data scientist in the team explained, "This is an important interdisciplinary effort to address global challenges like poverty. We plan to apply our AI algorithm to other international issues, such as monitoring carbon emissions, disaster damage detection, and the impact of climate change." 
An economist on the research team, Jihee Kim, commented that this approach would enable detailed examinations of economic conditions in the developing world at a low cost, reducing data disparities between developed and developing nations. She further emphasized that this is essential because many public policies require economic measurements to achieve their goals, whether they are for growth, equality, or sustainability.

The research team has made the source code publicly available via GitHub and plans to continue improving the technology, applying it to new satellite images updated annually. The results of this study, with Ph.D. candidate Donghyun Ahn at KAIST and Ph.D. candidate Jeasurk Yang at NUS as joint first authors, were published in Nature Communications under the title "A human-machine collaborative approach measures economic development using satellite imagery."

< Photos of the main authors. 1. Donghyun Ahn, PhD candidate at KAIST School of Computing 2. Jeasurk Yang, PhD candidate at the Department of Geography of National University of Singapore 3. Meeyoung Cha, Professor of KAIST School of Computing and CI at IBS 4. Jihee Kim, Professor of KAIST School of Business and Technology Management 5. Sangyoon Park, Professor of the Division of Social Science at Hong Kong University of Science and Technology 6. Hyunjoo Yang, Professor of the Department of Economics at Sogang University >
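The first stage of the pipeline, splitting a satellite scene into fixed-size grid cells and scoring each cell, can be sketched as below. The tile size and the brightness-based proxy score are illustrative assumptions only; the team's actual model assigns scores with a CNN trained on human expert judgments, not with raw brightness.

```python
import numpy as np

def split_into_tiles(image, tile):
    """Split a 2D satellite image array into non-overlapping square tiles,
    mirroring the article's division of scenes into fixed-area grid cells."""
    h, w = image.shape
    return [image[y:y + tile, x:x + tile]
            for y in range(0, h - tile + 1, tile)
            for x in range(0, w - tile + 1, tile)]

def proxy_score(tile):
    """Hypothetical stand-in for the model's economic score: mean brightness
    (built-up areas tend to be brighter in daytime imagery)."""
    return float(tile.mean())

rng = np.random.default_rng(42)
scene = rng.random((100, 100))            # stand-in for a Sentinel-2 scene
tiles = split_into_tiles(scene, tile=25)  # 16 tiles of 25x25 pixels
scores = [proxy_score(t) for t in tiles]
print(len(tiles))  # 16
```

Ranking tiles by such a score is also how the human-in-the-loop step works at a high level: experts order sample tiles by apparent development, and the model learns a scoring function consistent with that ordering.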
2023.12.07
Atomically-Smooth Gold Crystals Help to Compress Light for Nanophotonic Applications
Highly compressed mid-infrared optical waves in a thin dielectric crystal on a monocrystalline gold substrate have been investigated for the first time using a high-resolution scattering-type scanning near-field optical microscope.

KAIST researchers and their collaborators at home and abroad have successfully demonstrated a new platform for guiding compressed light waves in very thin van der Waals crystals. Their method for guiding mid-infrared light with minimal loss will provide a breakthrough for the practical application of ultra-thin dielectric crystals in next-generation optoelectronic devices based on strong light-matter interactions at the nanoscale.

Phonon-polaritons are collective oscillations of ions in polar dielectrics coupled to electromagnetic waves of light, whose electromagnetic field is much more compressed compared to the light wavelength. Recently, it was demonstrated that phonon-polaritons in thin van der Waals crystals can be compressed even further when the material is placed on top of a highly conductive metal. In such a configuration, charges in the polaritonic crystal are “reflected” in the metal, and their coupling with light results in a new type of polariton wave called the image phonon-polariton. Highly compressed image modes provide strong light-matter interactions but are very sensitive to substrate roughness, which hinders their practical application. Challenged by these limitations, four research groups combined their efforts to develop a unique experimental platform using advanced fabrication and measurement methods. Their findings were published in Science Advances on July 13. 
A KAIST research team led by Professor Min Seok Jang from the School of Electrical Engineering used a highly sensitive scanning near-field optical microscope (SNOM) to directly measure the optical fields of the hyperbolic image phonon-polaritons (HIP) propagating in a 63 nm-thick slab of hexagonal boron nitride (h-BN) on a monocrystalline gold substrate, showing mid-infrared light waves compressed a hundredfold in the dielectric crystal. Professor Jang and a research professor in his group, Sergey Menabde, successfully obtained direct images of HIP waves propagating over many wavelengths, and detected a signal from the ultra-compressed high-order HIP in regular h-BN crystals for the first time. They showed that phonon-polaritons in van der Waals crystals can be significantly more compressed without sacrificing their lifetime. This became possible due to the atomically-smooth surfaces of the home-grown gold crystals used as a substrate for the h-BN. Practically zero surface scattering and extremely small ohmic loss in gold at mid-infrared frequencies provide a low-loss environment for HIP propagation. The HIP mode probed by the researchers was 2.4 times more compressed, yet exhibited a similar lifetime compared to phonon-polaritons on a low-loss dielectric substrate, resulting in a twice higher figure of merit in terms of normalized propagation length.

The ultra-smooth monocrystalline gold flakes used in the experiment were chemically grown by the team of Professor N. Asger Mortensen from the Center for Nano Optics at the University of Southern Denmark.

The mid-infrared spectrum is particularly important for sensing applications since many important organic molecules have absorption lines in the mid-infrared. 
However, conventional detection methods require a large number of molecules for successful operation, whereas ultra-compressed phonon-polariton fields can provide strong light-matter interactions at the microscopic level, significantly improving the detection limit, potentially down to a single molecule. The long lifetime of the HIP on monocrystalline gold will further improve the detection performance.

Furthermore, the study conducted by Professor Jang and the team demonstrated the striking similarity between the HIP and image graphene plasmons. Both image modes possess a significantly more confined electromagnetic field, yet their lifetime remains unaffected by the shorter polariton wavelength. This observation provides a broader perspective on image polaritons in general, and highlights their superiority in terms of nanolight waveguiding compared to conventional low-dimensional polaritons in van der Waals crystals on a dielectric substrate.

Professor Jang said, “Our research demonstrated the advantages of image polaritons, and especially the image phonon-polaritons. These optical modes can be used in future optoelectronic devices where both low-loss propagation and strong light-matter interaction are necessary. I hope that our results will pave the way for the realization of more efficient nanophotonic devices such as metasurfaces, optical switches, sensors, and other applications operating at infrared frequencies.”

This research was funded by the Samsung Research Funding & Incubation Center of Samsung Electronics and the National Research Foundation of Korea (NRF). The Korea Institute of Science and Technology, the Ministry of Education, Culture, Sports, Science and Technology of Japan, and The Villum Foundation, Denmark, also supported the work.

Figure. A nano-tip is used for ultra-high-resolution imaging of the image phonon-polaritons in hBN launched by the gold crystal edge.

Publication: Menabde, S. G., et al. 
(2022) Near-field probing of image phonon-polaritons in hexagonal boron nitride on gold crystals. Science Advances 8, Article ID: eabn0627. Available online at https://science.org/doi/10.1126/sciadv.abn0627. Profile: Min Seok Jang, MS, PhD Associate Professor jang.minseok@kaist.ac.kr http://janglab.org/ Min Seok Jang Research Group School of Electrical Engineering http://kaist.ac.kr/en/ Korea Advanced Institute of Science and Technology (KAIST) Daejeon, Republic of Korea
2022.07.13
‘Urban Green Space Affects Citizens’ Happiness’
Study finds the relationship between green space, the economy, and happiness

A recent study revealed that as a city becomes more economically developed, its citizens’ happiness becomes more directly related to the area of urban green space. A joint research project by Professor Meeyoung Cha of the School of Computing and her collaborators studied the relationship between green space and citizen happiness by analyzing big data from satellite images of 60 different countries. Urban green space, including parks, gardens, and riversides, not only provides aesthetic pleasure but also positively affects our health by promoting physical activity and social interactions. Most of the previous research attempting to verify the correlation between urban green space and citizen happiness was based on a few developed countries. Therefore, it was difficult to identify whether the positive effects of green space are global, or merely phenomena that depend on the economic state of the country. There have also been limitations in data collection, as it is difficult to visit each location or carry out investigations on a large scale based on aerial photographs. The research team used data collected by Sentinel-2, a high-resolution satellite operated by the European Space Agency (ESA), to investigate 90 green spaces in 60 different countries around the world. The subjects of analysis were cities with the highest population densities (cities that contain at least 10% of the national population), and the images were obtained during the summer of each region for clarity. Images from the northern hemisphere were obtained between June and September of 2018, and those from the southern hemisphere between December of 2017 and February of 2018. The areas of urban green space were then quantified and crossed with data from the World Happiness Report and the GDP by country reported by the United Nations in 2018. 
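The article does not spell out how the green areas were extracted from the Sentinel-2 images, but a standard approach is to threshold the normalized difference vegetation index (NDVI) computed from the near-infrared and red bands. The sketch below is a minimal illustration of that idea only; the band values, the 0.4 threshold, and the function name are assumptions, not details from the study.

```python
import numpy as np

def green_space_fraction(nir, red, ndvi_threshold=0.4):
    """Estimate the vegetated fraction of a scene from Sentinel-2-style
    near-infrared (B08) and red (B04) reflectance arrays via NDVI.
    The 0.4 threshold is an illustrative choice, not the study's."""
    nir = nir.astype(float)
    red = red.astype(float)
    ndvi = (nir - red) / (nir + red + 1e-9)  # small epsilon avoids /0
    return float(np.mean(ndvi > ndvi_threshold))

# Toy scene: left half vegetation-like (high NIR), right half built-up.
nir = np.array([[0.6, 0.6, 0.2, 0.2]] * 4)
red = np.array([[0.1, 0.1, 0.2, 0.2]] * 4)
print(green_space_fraction(nir, red))  # 0.5
```

In a real pipeline the per-city fraction would be multiplied by the imaged area to get the green-space area in square kilometers.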
Using these data, the relationships between green space, the economy, and citizen happiness were analyzed. The results showed that in all cities, citizen happiness was positively correlated with the area of urban green space regardless of the country’s economic state. However, among the 60 countries studied, the happiness index of the bottom 30 by GDP showed a stronger correlation with economic growth. In countries whose income level (GDP per capita) was higher than 38,000 USD, the area of green space acted as a more important factor affecting happiness than economic growth. Data from Seoul, analyzed to represent South Korea, showed an increased happiness index as green areas grew compared to the past. The authors point out that their work has several policy-level implications. First, public green space should be made accessible to urban dwellers to enhance social support. If public safety in urban parks is not guaranteed, their positive role in social support and happiness may diminish. Also, the meaning of public safety may change; for example, ensuring biological safety will be a priority in keeping urban parks accessible during the COVID-19 pandemic. Second, urban planning for public green space is needed in both developed and developing countries. As it is challenging or nearly impossible to secure land for green space after an area is developed, urban planning for parks and green space should be considered in developing economies where new cities and suburban areas are rapidly expanding. Third, recent climate changes can present substantial difficulty in sustaining urban green space. Extreme events such as wildfires, floods, droughts, and cold waves could endanger urban forests, while global warming could conversely accelerate tree growth in cities due to the urban heat island effect. Thus, more attention must be paid to predicting climate change and discovering its impact on the maintenance of urban green space. 
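As a hedged illustration of the kind of correlation analysis described, the snippet below computes a Pearson correlation between hypothetical per-city green-space areas and happiness scores; all numbers are invented for demonstration and do not come from the study.

```python
import statistics

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-city data: green-space area (km^2) and happiness score.
green_area = [12, 30, 45, 80, 95]
happiness = [5.1, 5.6, 6.0, 6.8, 7.1]
r = pearson(green_area, happiness)
print(round(r, 3))  # close to +1: larger green space tracks higher happiness
```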
“There has recently been an increase in the number of studies using big data from satellite images to solve social conundrums,” said Professor Cha. “The tool developed for this investigation can also be used to quantify the area of aquatic environments like lakes and the seaside, and it will now be possible to analyze the relationship between citizen happiness and aquatic environments in future studies,” she added. Professor Woo Sung Jung from POSTECH and Professor Donghee Wohn from the New Jersey Institute of Technology also joined this research. It was reported in the online issue of EPJ Data Science on May 30. - Publication: Oh-Hyun Kwon, Inho Hong, Jeasurk Yang, Donghee Y. Wohn, Woo-Sung Jung, and Meeyoung Cha, 2021. Urban green space and happiness in developed countries. EPJ Data Science. DOI: https://doi.org/10.1140/epjds/s13688-021-00278-7 - Profile: Professor Meeyoung Cha, Data Science Lab, https://ds.ibs.re.kr/ School of Computing, KAIST
2021.06.21
Every Moment of Ultrafast Chemical Bonding Now Captured on Film
- The emerging moment of bond formation, two separate bonding steps, and subsequent vibrational motions were visualized. - < Emergence of molecular vibrations and the evolution to covalent bonds observed in the research. Video Credit: KEK IMSS > A team of South Korean researchers led by Professor Hyotcherl Ihee from the Department of Chemistry at KAIST reported the direct observation of the birth of chemical bonds by tracking real-time atomic positions in a molecule. Professor Ihee, who also serves as Associate Director of the Center for Nanomaterials and Chemical Reactions at the Institute for Basic Science (IBS), conducted this study in collaboration with scientists at the Institute of Materials Structure Science of the High Energy Accelerator Research Organization (KEK IMSS, Japan), RIKEN (Japan), and the Pohang Accelerator Laboratory (PAL, South Korea). This work was published in Nature on June 24. Targeted cancer drugs work by striking a tight bond between cancer cells and the specific molecular targets involved in the growth and spread of cancer. Detailed images of such chemical bonding sites or pathways can provide key information for maximizing the efficacy of oncogene treatments. However, atomic movements in a molecule had never been captured in the middle of the action, not even for an extremely simple molecule such as a triatomic molecule, made of only three atoms. Professor Ihee's group and their international collaborators finally succeeded in capturing the ongoing process of chemical bond formation in the gold trimer. "The femtosecond-resolution images revealed that such molecular events take place in two separate stages, not simultaneously as previously assumed," says Professor Ihee, the corresponding author of the study. "The atoms in the gold trimer complex remain in motion even after the chemical bonding is complete. 
The distance between the atoms increased and decreased periodically, exhibiting molecular vibration. These visualized molecular vibrations allowed us to name the characteristic motion of each observed vibrational mode," adds Professor Ihee. Atoms move extremely fast, at the scale of femtoseconds (fs) ― quadrillionths (or millionths of a billionth) of a second ― and their movements are minute, at the level of an angstrom, equal to one ten-billionth of a meter. They are especially elusive during the transition state, where reaction intermediates are transitioning from reactants to products in a flash. The KAIST-IBS research team made this experimentally challenging task possible by using femtosecond x-ray liquidography (solution scattering), an experimental technique that combines laser photolysis and x-ray scattering. A laser pulse strikes the sample and initiates the chemical bond formation reaction in the gold trimer complex, and femtosecond x-ray pulses obtained from a special light source called an x-ray free-electron laser (XFEL) were used to interrogate the bond-forming process. The experiments were performed at two XFEL facilities (4th-generation linear accelerators), PAL-XFEL in South Korea and SACLA in Japan, and the study was conducted in collaboration with researchers from KEK IMSS, PAL, RIKEN, and the Japan Synchrotron Radiation Research Institute (JASRI). Scattered waves from each atom interfere with one another, so the x-ray scattering images are characterized by specific travel directions. The KAIST-IBS research team traced the real-time positions of the three gold atoms over time by analyzing the x-ray scattering images, which are determined by the three-dimensional structure of the molecule. Structural changes in the complex resulted in multiple characteristic scattering images over time. When a molecule is excited by a laser pulse, multiple vibrational quantum states are simultaneously excited. 
The superposition of several excited vibrational quantum states is called a wave packet. The researchers tracked the wave packet in three-dimensional nuclear coordinates and found that the first of the two chemical bonds formed within 35 fs after photoexcitation; the second followed within 360 fs to complete the reaction dynamics. They also accurately illustrated the molecular vibrational motions in both time and space. This is a remarkable feat considering that such ultrafast speeds and minute lengths of motion are challenging conditions for acquiring precise experimental data. In this study, the KAIST-IBS research team improved upon their 2015 study published in Nature. In the previous study, the speed of the x-ray camera (time resolution) was limited to 500 fs, by which point the molecular structure had already changed to be linear with two chemical bonds. In this study, the progress of the bond formation and the bent-to-linear structural transformation could be observed in real time, thanks to the improvement of the time resolution down to 100 fs. Thereby, the asynchronous bond formation mechanism, in which two chemical bonds are formed at 35 fs and 360 fs, respectively, and the bent-to-linear transformation completed in 335 fs were visualized. In short, in addition to observing the beginning and end of chemical reactions, they reported every moment of the intermediate, ongoing rearrangement of nuclear configurations with dramatically improved experimental and analytical methods. They will push this method of 'real-time tracking of atomic positions in a molecule and molecular vibration using femtosecond x-ray scattering' to reveal the mechanisms of organic and inorganic catalytic reactions and reactions involving proteins in the human body. 
"By directly tracking the molecular vibrations and real-time positions of all atoms in a molecule in the middle of reaction, we will be able to uncover mechanisms of various unknown organic and inorganic catalytic reactions and biochemical reactions," notes Dr. Jong Goo Kim, the lead author of the study. Publications: Kim, J. G., et al. (2020) ‘Mapping the emergence of molecular vibrations mediating bond formation’. Nature. Volume 582. Page 520-524. Available online at https://doi.org/10.1038/s41586-020-2417-3 Profile: Hyotcherl Ihee, Ph.D. Professor hyotcherl.ihee@kaist.ac.kr http://time.kaist.ac.kr/ Ihee Laboratory Department of Chemistry KAIST https://www.kaist.ac.kr Daejeon 34141, Korea (END)
2020.06.24
Unravelling Complex Brain Networks with Automated 3-D Neural Mapping
-Automated 3-D brain imaging data analysis technology offers more reliable and standardized analysis of the spatial organization of complex neural circuits.- KAIST researchers developed a new algorithm for brain imaging data analysis that enables the precise and quantitative mapping of complex neural circuits onto a standardized 3-D reference atlas. Brain imaging data analysis is indispensable in the studies of neuroscience. However, analysis of obtained brain imaging data has been heavily dependent on manual processing, which cannot guarantee the accuracy, consistency, and reliability of the results. Conventional brain imaging data analysis typically begins with finding a 2-D brain atlas image that is visually similar to the experimentally obtained brain image. Then, the region-of-interest (ROI) of the atlas image is matched manually with the obtained image, and the number of labeled neurons in the ROI is counted. Such a visual matching process between experimentally obtained brain images and 2-D brain atlas images has been one of the major sources of error in brain imaging data analysis, as the process is highly subjective, sample-specific, and susceptible to human error. Manual analysis processes for brain images are also laborious, and thus studying the complete 3-D neuronal organization on a whole-brain scale is a formidable task. To address these issues, a KAIST research team led by Professor Se-Bum Paik from the Department of Bio and Brain Engineering developed new brain imaging data analysis software named 'AMaSiNe (Automated 3-D Mapping of Single Neurons)', and introduced the algorithm in the May 26 issue of Cell Reports. AMaSiNe automatically detects the positions of single neurons from multiple brain images, and accurately maps all the data onto a common standard 3-D reference space. 
The algorithm allows the direct comparison of brain data from different animals by automatically matching similar features from the images and computing an image similarity score. This feature-based quantitative image-to-image comparison technology improves the accuracy, consistency, and reliability of analysis results using only a small number of brain slice image samples, and helps standardize brain imaging data analyses. Unlike other existing brain imaging data analysis methods, AMaSiNe can also automatically find the alignment conditions from misaligned and distorted brain images and draw an accurate ROI, without any cumbersome manual validation process. AMaSiNe has further been proven to produce consistent results with brain slice images stained using various methods, including DAPI, Nissl, and autofluorescence. The two co-lead authors of this study, Jun Ho Song and Woochul Choi, exploited these benefits of AMaSiNe to investigate the topographic organization of neurons that project to the primary visual area (VISp) from various ROIs, such as the dorsal lateral geniculate nucleus (LGd), which could hardly be addressed without proper calibration and standardization of the brain slice image samples. In collaboration with Professor Seung-Hee Lee's group in the Department of Biological Sciences, the researchers successfully observed the 3-D topographic neural projections to the VISp from the LGd, and also demonstrated that these projections could not be observed when the slicing angle was not properly corrected by AMaSiNe. The results suggest that precise correction of the slicing angle is essential for the investigation of complex and important brain structures. AMaSiNe is widely applicable in studies of various brain regions and other experimental conditions. 
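The article does not describe AMaSiNe's similarity score in detail. As a rough stand-in, the sketch below computes a zero-mean normalized cross-correlation between two images, one simple way to quantify image-to-image similarity; the function and data here are illustrative only, not the published method.

```python
import numpy as np

def similarity_score(img_a, img_b):
    """Zero-mean normalized cross-correlation between two same-shape images:
    ~1.0 for identical patterns, near 0 for unrelated ones. A toy stand-in
    for a feature-based image similarity score."""
    a = img_a.astype(float).ravel()
    b = img_b.astype(float).ravel()
    a -= a.mean()
    b -= b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

rng = np.random.default_rng(0)
slice_img = rng.random((64, 64))   # stands in for a stained brain slice
unrelated = rng.random((64, 64))   # an unrelated image
print(round(similarity_score(slice_img, slice_img), 6))  # 1.0
print(similarity_score(slice_img, unrelated) < 0.2)      # True
```

In practice such a score would be computed between an experimental slice and candidate atlas sections to pick the best-matching reference plane.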
For example, in the research team’s previous study jointly conducted with Professor Yang Dan’s group at UC Berkeley, the algorithm enabled the accurate analysis of the neuronal subsets in the substantia nigra and their projections to the whole brain. Their findings were published in Science on January 24. AMaSiNe is of great interest to many neuroscientists in Korea and abroad, and is being actively used by a number of other research groups at KAIST, MIT, Harvard, Caltech, and UC San Diego. Professor Paik said, “Our new algorithm allows the spatial organization of complex neural circuits to be found in a standardized 3-D reference atlas on a whole-brain scale. This will bring brain imaging data analysis to a new level.” He continued, “More in-depth insights for understanding the function of brain circuits can be achieved by facilitating more reliable and standardized analysis of the spatial organization of neural circuits in various regions of the brain.” This work was supported by KAIST and the National Research Foundation of Korea (NRF). Figure and Image Credit: Professor Se-Bum Paik, KAIST Figure and Image Usage Restrictions: News organizations may use or redistribute these figures and images, with proper attribution, as part of news coverage of this paper only. Publication: Song, J. H., et al. (2020). Precise Mapping of Single Neurons by Calibrated 3D Reconstruction of Brain Slices Reveals Topographic Projection in Mouse Visual Cortex. Cell Reports. Volume 31, 107682. Available online at https://doi.org/10.1016/j.celrep.2020.107682 Profile: Se-Bum Paik Assistant Professor sbpaik@kaist.ac.kr http://vs.kaist.ac.kr/ VSNN Laboratory Department of Bio and Brain Engineering Program of Brain and Cognitive Engineering http://kaist.ac.kr Korea Advanced Institute of Science and Technology (KAIST) Daejeon, Republic of Korea (END)
2020.06.08
Ultrathin but Fully Packaged High-Resolution Camera
- Biologically inspired ultrathin arrayed camera captures super-resolution images. - The unique structures of biological vision systems in nature have inspired scientists to design ultracompact imaging systems. A research group led by Professor Ki-Hun Jeong has made an ultracompact camera that captures high-contrast and high-resolution images. Fully packaged with micro-optical elements such as inverted micro-lenses, multilayered pinhole arrays, and gap spacers on the image sensor, the camera boasts a total track length of 740 μm and a field of view of 73°. Inspired by the eye structure of Xenos peckii, an endoparasite of paper wasps, the research team completely suppressed optical noise between micro-lenses while reducing camera thickness. The camera successfully demonstrated high-contrast, clear array images acquired from tiny micro-lenses. To further enhance the image quality, the team combined the arrayed images into one image through super-resolution imaging. An insect's compound eye has superior visual characteristics, such as a wide viewing angle, high motion sensitivity, and a large depth of field, while maintaining a small visual structure with a small focal length. Unlike conventional compound eyes, the eyes of Xenos peckii have hundreds of photoreceptors in a single lens. In particular, the eye structure of an adult Xenos peckii exhibits hundreds of photoreceptors on an individual eyelet and offers engineering inspiration for ultrathin cameras and imaging applications, because it provides higher visual acuity than other compound eyes. For instance, cameras inspired by Xenos peckii's eyes provide a spatial resolution 50 times higher than those based on arthropod eyes. In addition, the effective image resolution of the Xenos peckii's eye can be further improved using the image overlaps between neighboring eyelets. This unique structure offers higher visual resolution than other insect eyes. 
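Exploiting image overlaps between neighboring eyelets is, at its core, a shift-and-add super-resolution idea: several low-resolution views sampled at different sub-pixel offsets are combined on a finer grid. The toy sketch below assumes the offsets are known by design (a real arrayed camera would first have to register the images) and is not the team's actual reconstruction method.

```python
import numpy as np

# A minimal shift-and-add sketch: four low-resolution images of the same
# scene, each sampled at a different sub-pixel offset, are interleaved
# into one image with twice the resolution along each axis.
scene = np.arange(64, dtype=float).reshape(8, 8)  # "ground truth" scene

# Each eyelet samples every 2nd pixel with offsets (0,0), (0,1), (1,0), (1,1).
low_res = {(dy, dx): scene[dy::2, dx::2] for dy in (0, 1) for dx in (0, 1)}

# Interleave the array images back onto the fine grid (shift-and-add).
recon = np.empty_like(scene)
for (dy, dx), img in low_res.items():
    recon[dy::2, dx::2] = img

print(np.array_equal(recon, scene))  # True: full resolution recovered
```

Real reconstructions additionally deal with noise, lens blur, and unknown offsets, typically via registration followed by regularized fusion.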
The team achieved high-contrast and super-resolution imaging through a novel arrayed design of micro-optical elements comprising multilayered aperture arrays and inverted micro-lens arrays stacked directly over an image sensor. This optical component was integrated with a complementary metal-oxide-semiconductor (CMOS) image sensor. This is the first demonstration of super-resolution imaging in which a single integrated image with high contrast and high resolving power is reconstructed from high-contrast array images. This ultrathin arrayed camera is expected to find applications in mobile devices, advanced surveillance vehicles, and endoscopes. Professor Jeong said, “This research has led to technological advances in imaging technology. We will continue to strive to make significant impacts on multidisciplinary research projects in the fields of microtechnology and nanotechnology, seeking inspiration from natural photonic structures.” This work was featured in Light: Science & Applications last month and was supported by the National Research Foundation (NRF) and the Ministry of Health and Welfare (MOHW) of Korea. Image credit: Professor Ki-Hun Jeong, KAIST. Image usage restrictions: News organizations may use or redistribute this image, with proper attribution, as part of news coverage of this paper only. Publication: Kisoo Kim, Kyung-Won Jang, Jae-Kwan Ryu, and Ki-Hun Jeong. (2020) “Biologically inspired ultrathin arrayed camera for high-contrast and high-resolution imaging”. Light: Science & Applications. Volume 9. Article 28. Available online at https://doi.org/10.1038/s41377-020-0261-8 Profile: Ki-Hun Jeong Professor kjeong@kaist.ac.kr http://biophotonics.kaist.ac.kr/ Department of Bio and Brain Engineering KAIST Profile: Kisoo Kim Ph.D. Candidate kisoo.kim1@kaist.ac.kr http://biophotonics.kaist.ac.kr/ Department of Bio and Brain Engineering KAIST (END)
2020.03.23
Image Analysis to Automatically Quantify Gender Bias in Movies
Many commercial films worldwide continue to portray women in a stereotypical manner, a recent study using image analysis showed. A KAIST research team developed a novel image analysis method for automatically quantifying the degree of gender bias in films. The ‘Bechdel Test’ has been the most representative and widely used method of evaluating gender bias in films. The test indicates the degree of gender bias in a film by measuring how active the presence of women is: a film passes the Bechdel Test if it (1) has at least two female characters, (2) who talk to each other, and (3) whose conversation is not about the male characters. However, the Bechdel Test has fundamental limitations regarding the accuracy and practicality of the evaluation. Firstly, the Bechdel Test requires considerable human resources, as it is performed subjectively by a person. More importantly, the Bechdel Test analyzes only a single aspect of a film, the dialogue between characters in the script, and provides only a dichotomous pass-or-fail result, neglecting the fact that a film is a visual art form reflecting multi-layered and complicated gender bias phenomena. It is also difficult for the test to fully represent today’s discourse on gender bias, which is much more diverse than in 1985 when the Bechdel Test was first presented. Motivated by these limitations, a KAIST research team led by Professor Byungjoo Lee from the Graduate School of Culture Technology proposed an advanced system that uses computer vision technology to automatically analyze the visual information in each frame of a film. This allows the system to evaluate, more accurately and practically and in quantitative terms, the degree to which female and male characters are depicted in a discriminatory way, and further reveals gender bias that conventional analysis methods could not detect. 
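To make the idea of quantitative representation indices concrete, the toy sketch below computes two such measures, temporal occupancy and mean age, from hypothetical per-frame face-detection results; the data structure and all numbers are invented for illustration and do not reflect the team's actual pipeline.

```python
from statistics import fmean

# Hypothetical per-frame detections: each frame yields (gender, age) pairs.
frames = [
    [("female", 29)],
    [("male", 41), ("female", 27)],
    [("male", 45)],
    [("male", 38), ("male", 52)],
    [],                               # frame with no detected faces
]

def temporal_occupancy(frames, gender):
    """Fraction of frames in which at least one face of `gender` appears."""
    return fmean(any(g == gender for g, _ in faces) for faces in frames)

def mean_age(frames, gender):
    """Mean estimated age over all detected faces of `gender`."""
    ages = [age for faces in frames for g, age in faces if g == gender]
    return fmean(ages)

print(temporal_occupancy(frames, "female"))  # 0.4
print(temporal_occupancy(frames, "male"))    # 0.6
print(mean_age(frames, "female"))            # 28.0
```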
Professor Lee and his researchers Ji Yoon Jang and Sangyoon Lee analyzed 40 films from Hollywood and South Korea released in 2017 and 2018. They downsampled the films from 24 to 3 frames per second, and used Microsoft’s Face API facial recognition technology and the object detection technology YOLO9000 to identify the characters and the objects surrounding them in each scene. Using the new system, the team computed eight quantitative indices that describe the representation of a particular gender in the films: emotional diversity, spatial staticity, spatial occupancy, temporal occupancy, mean age, intellectual image, emphasis on appearance, and the type and frequency of surrounding objects. < Figure 1. System Diagram > < Figure 2. The 40 Hollywood and Korean Films Analyzed in the Study > According to the emotional diversity index, the depicted women were more prone to expressing passive emotions, such as sadness, fear, and surprise. In contrast, male characters in the same films were more likely to demonstrate active emotions, such as anger and hatred. < Figure 3. Difference in Emotional Diversity between Female and Male Characters > The type and frequency of surrounding objects index revealed that female characters were tracked together with automobiles only 55.7% as often as male characters, while they were 123.9% as likely to appear with furniture or in a household setting. In terms of temporal occupancy and mean age, female characters appeared in films only 56% as frequently as males, and were on average younger in 79.1% of the cases. These two indices were especially conspicuous in Korean films. Professor Lee said, “Our research confirmed that many commercial films depict women from a stereotypical perspective. 
I hope this result promotes public awareness of the importance of exercising prudence when filmmakers create characters in films.” This study was supported by the KAIST College of Liberal Arts and Convergence Science as part of the Venture Research Program for Master’s and PhD Students, and will be presented at the 22nd ACM Conference on Computer-Supported Cooperative Work and Social Computing (CSCW) on November 11 in Austin, Texas. Publication: Ji Yoon Jang, Sangyoon Lee, and Byungjoo Lee. 2019. Quantification of Gender Representation Bias in Commercial Films based on Image Analysis. In Proceedings of the 22nd ACM Conference on Computer-Supported Cooperative Work and Social Computing (CSCW). ACM, New York, NY, USA, Article 198, 29 pages. https://doi.org/10.1145/3359300 Link to download the full-text paper: https://files.cargocollective.com/611692/cscw198-jangA--1-.pdf Profile: Prof. Byungjoo Lee, MD, PhD byungjoo.lee@kaist.ac.kr http://kiml.org/ Assistant Professor Graduate School of Culture Technology (CT) Korea Advanced Institute of Science and Technology (KAIST) https://www.kaist.ac.kr Daejeon 34141, Korea Profile: Ji Yoon Jang, M.S. yoone3422@kaist.ac.kr Interactive Media Lab Graduate School of Culture Technology (CT) Korea Advanced Institute of Science and Technology (KAIST) https://www.kaist.ac.kr Daejeon 34141, Korea Profile: Sangyoon Lee, M.S. Candidate sl2820@kaist.ac.kr Interactive Media Lab Graduate School of Culture Technology (CT) Korea Advanced Institute of Science and Technology (KAIST) https://www.kaist.ac.kr Daejeon 34141, Korea (END)
2019.10.17
KAIST Team Reaching Out with Appropriate Technology
< The gold prize-winning team of KATT > The KAIST Appropriate Technology Team (KATT), consisting of international students at KAIST, won the gold and silver prizes at ‘The 10th Creative Design Competition for the Other 90 Percent’. More than 218 students from 50 teams nationwide participated in the competition hosted by the Ministry of Science and ICT last month. The competition was created to discover appropriate technology and sustainable design items that enhance the quality of life for those with little or no access to technology. A team led by Juan Luis Gonzalez Bello, a graduate student from the School of Electrical Engineering, received the gold prize for presenting a prosthetic arm. The artificial arm was highly recognized for its affordability and manageability: the team said that it cost less than 10 US dollars to construct from materials available in underprivileged regions and was easy to assemble. Sophomore Hutomo Calvin from the Department of Materials Science & Engineering also worked on the prosthetic arm project with freshmen Bella Godiva, Stephanie Tan, and Koptieuov Yearbola. Alexandra Tran, a senior from the School of Electrical Engineering, led the silver prize-winning team, which developed ‘Breathe Easy’, a portable, low-cost but efficient air quality monitor. She worked with Alisher Tortay, a senior from the School of Computing, Ashar Alam, a senior from the Department of Mechanical Engineering, Bereket Eshete, a junior from the School of Computing, and Marthens Hakzimana, a sophomore from the Department of Mechanical Engineering. The team said it cost less than seven US dollars to construct the monitor. KAIST students have now won the gold prize for two consecutive years.
2018.06.19