KICT Korea Institute of Civil Engineering and Building Technology
Research Reports
Present and Future of High-Efficiency Fuel Cell Systems for Biogas-to-Energy Conversion
Senior Researcher Ji Sang-hoon, Department of Environmental Research, KICT

Prologue

Within the environment–energy research conducted at the Korea Institute of Civil Engineering and Building Technology (KICT), the resource circulation of organic waste is a key area of focus. With the recent implementation of related regulations and the government’s emphasis on renewable energy, further technological advancement in this field has become increasingly important. This article outlines directions for improving the efficiency of biogas-to-energy conversion technologies and for developing solutions to reduce carbon emissions.

Biogas-to-Energy

Organic wastes such as food waste, sewage sludge, and livestock manure—primarily discarded carbon-containing compounds—are generated both naturally and, in large quantities, through human activities. In the past, organic waste was frequently disposed of in the ocean without strict regulation. Because such practices caused severe ecological damage, organic waste must now be treated and managed on land. Applying anaerobic digestion—a microbial process that decomposes organic matter in the absence of oxygen—to these wastes significantly reduces their mass, and the process generates biogas containing large amounts of methane and carbon dioxide. Because releasing methane into the atmosphere greatly accelerates global warming, its proper management and utilization are essential. A methane molecule consists of one carbon atom and four hydrogen atoms, giving methane a relatively high hydrogen content. Since hydrogen releases a large amount of energy when combined with oxygen, methane-rich biogas has strong potential as an alternative energy source. The systems used for biogas-to-energy conversion (hereinafter, biogas-to-energy systems) typically include engines, turbines, and fuel cell systems, whose characteristics are summarized in Table 1.
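As a rough illustration of that potential, the chemical energy in one cubic metre of biogas can be estimated from the heating value of methane. The 60% methane share and the heating value used below are generic textbook assumptions, not KICT figures:

```python
# Rough, illustrative estimate of the energy carried by biogas, using standard
# textbook values (not figures from the article).

METHANE_LHV_MJ_PER_NM3 = 35.8   # lower heating value of pure methane, MJ/Nm^3
METHANE_FRACTION = 0.60         # assumed CH4 share of typical digester biogas
MJ_PER_KWH = 3.6

def biogas_energy_kwh_per_nm3(ch4_fraction: float = METHANE_FRACTION) -> float:
    """Chemical energy (kWh) contained in one normal cubic metre of biogas."""
    return METHANE_LHV_MJ_PER_NM3 * ch4_fraction / MJ_PER_KWH

print(f"{biogas_energy_kwh_per_nm3():.2f} kWh per Nm^3 of biogas")
```

Under these assumptions a cubic metre of biogas carries roughly 6 kWh of chemical energy; how much of that becomes electricity depends on the conversion system chosen.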
An engine uses pistons as its main component and generates energy through the rotational motion produced by fuel combustion. It is well suited to compact systems; however, the piston movement generates relatively high levels of noise and vibration, and mechanical wear, which significantly affects energy conversion efficiency, must be properly managed. A turbine uses blades as its primary component and likewise generates energy through the rotational power produced by fuel combustion. Turbines are advantageous for large-scale energy production but less suitable for smaller systems, and the thermal deformation of components exposed to high temperatures must be managed. A fuel cell system uses electrolytes and electrodes as its main components and generates energy through electrochemical reactions (oxidation and reduction) of the fuel. Compared with other biogas-to-energy systems, fuel cell systems offer higher efficiency and lower noise and vibration, but they require advanced materials and sophisticated operation technologies to ensure the durability of the electrolytes and electrodes.

Fuel Cell Systems for Biogas-to-Energy Conversion

Fuel cell systems used for biogas-to-energy conversion can be divided into two main types: polymer electrolyte fuel cell (PEFC) systems and solid oxide fuel cell (SOFC) systems. The characteristics of the two types are summarized in Table 2. Polymer electrolyte fuel cell systems employ fuel cells that use polymer-based electrolytes as their core component. They operate at relatively low temperatures (50–80°C), allowing rapid system start-up. However, they require precious-metal catalysts such as platinum at the fuel electrode, which makes the systems expensive, and the presence of liquid water within the fuel cell can make their performance unstable.
Additionally, the operating temperature and catalyst characteristics of these systems mean that direct use of biogas as fuel is difficult, so a separate reforming unit is used to extract high-purity hydrogen from the methane contained in biogas. Significantly, as the concentration of carbon monoxide in the hydrogen supplied to the fuel electrode must be extremely low—on the order of a few ppm—high-performance hydrogen purification systems are needed. The hydrogen-to-electricity conversion efficiency of polymer electrolyte fuel cell systems is generally less than 35%, which is higher than that of internal combustion engines but lower than that of other fuel cell systems. Despite these limitations, polymer electrolyte fuel cell systems currently have the highest level of technological maturity and have demonstrated numerous commercial applications. Their output capacity is typically up to 250 kW, making them suitable for portable devices and small-scale facilities. They are widely used in rural areas and waste treatment facilities, and also serve as energy sources for vehicles and backup power systems.

Solid oxide fuel cell systems employ fuel cells that use solid oxide electrolytes as their key component. These systems operate at high temperatures (600–1,000°C), which means that their start-up process is relatively slow compared to other fuel cell systems. However, their high-temperature operation provides sufficient reaction activity, enabling non-precious metals such as nickel to be used as the fuel-electrode catalyst. In addition, the water inside the fuel cell is in the gaseous phase, making water management relatively easy. Another advantage is that the operating temperature and catalysts enable the direct utilization of biogas as fuel, eliminating the need for a separate reforming unit to extract hydrogen from the methane contained in biogas. Hydrogen can also be used as fuel.
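The direct biogas utilization described here rests on reforming occurring inside the stack at its high operating temperature. As a minimal sketch, with standard textbook reaction enthalpies rather than figures from the article: methane is reformed by steam and by the carbon dioxide already present in the biogas, and the resulting hydrogen and carbon monoxide are then electrochemically oxidized at the fuel electrode.

```latex
\begin{align*}
\mathrm{CH_4} + \mathrm{H_2O} &\longrightarrow \mathrm{CO} + 3\,\mathrm{H_2}
  &&(\Delta H^\circ \approx +206\ \mathrm{kJ/mol},\ \text{steam reforming})\\
\mathrm{CH_4} + \mathrm{CO_2} &\longrightarrow 2\,\mathrm{CO} + 2\,\mathrm{H_2}
  &&(\Delta H^\circ \approx +247\ \mathrm{kJ/mol},\ \text{dry reforming})\\
\mathrm{H_2} + \mathrm{O^{2-}} &\longrightarrow \mathrm{H_2O} + 2e^-,
\qquad \mathrm{CO} + \mathrm{O^{2-}} \longrightarrow \mathrm{CO_2} + 2e^-
  &&(\text{fuel-electrode oxidation})
\end{align*}
```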
In addition, the carbon monoxide produced from carbon dioxide during the biogas-to-energy process can also be utilized as fuel. Notably, the hydrogen-to-electricity conversion efficiency of solid oxide fuel cell systems is around 50%, which is considerably high. Although the technological maturity of SOFC systems is currently lower than that of polymer electrolyte fuel cell systems, their high electrical efficiency has enabled their gradual commercialization. The output capacity of SOFC systems ranges from 1 to 3,000 kW, enabling their application in small-, medium-, and large-scale energy systems for rural areas, waste treatment facilities, and medium- to large-scale commercial and residential buildings. Their high operating temperatures also mean that they generate high-quality waste heat, making them highly suitable for distributed combined heat and power (CHP) systems.

Future Research Directions

Despite their high electrical efficiency, conventional biogas-to-energy solid oxide fuel cell systems have limited economic feasibility (e.g., restricted material choices and complex thermal management) and practicality (e.g., slow system start-up) due to their excessively high operating temperatures. For this reason, significant research efforts are being made to reduce their operating temperatures to below 600°C. Achieving this requires technological development to ensure sufficient reaction activity (e.g., electrical efficiency) and durability (e.g., material lifespan) at lower operating temperatures. Meanwhile, like other energy conversion systems, solid oxide fuel cell systems also generate carbon dioxide during the biogas-to-energy process, as the carbon contained in methane reacts with externally supplied oxygen during the electrochemical reactions. To realize a carbon-neutral future, the carbon dioxide generated during the energy conversion process must be captured and either reused within the system, utilized externally, or stored.
Accordingly, this study proposes two key technological directions for next-generation high-efficiency solid oxide fuel cell systems for biogas-to-energy conversion (Figure 1):
1) Reduction of operating temperature
2) Carbon dioxide capture

References
Stephen J. McPhail, Luigi Leto, and Carlos Boigues-Muñoz (2013) International Status of SOFC Deployment 2012–2013
Chad W. Blake and Carl H. Rivkin (2010) Stationary Fuel Cell Application Codes and Standards: Overview and Gap Analysis
Department of Environmental Research | 2026-03-23
Net-Zero Buildings: Enabling Carbon Neutrality Through Technology Integration
Research Fellow Song Su-won, Department of Building Energy Research, KICT (Net-Zero Building Innovation Strategy Research Group)

“We’re Not Building Just a Structure—We’re Building an Ecosystem”

Stepping into the office of the Net-Zero Building Innovation Strategy Research Group at the Department of Building Energy Research, the first thing that catches the eye is a massive system diagram covering an entire wall. Building envelopes, heating and cooling, ventilation, control—each element is represented as a box, intricately connected to the others. “That diagram,” the research group leader explained, “represents the very challenge we are trying to solve.”

From its name alone, the Net-Zero Building Innovation Strategy Research Group might sound like a place where high-performance windows or photovoltaic panels are developed. But the group’s mission operates on a completely different level. “We don’t build individual components—we conduct an orchestra,” one researcher said, offering an apt analogy. Even the finest instrument may be inadequate if played independently. The group’s work is to bring together the “instruments”—building envelopes, mechanical systems, ventilation, and control technologies—and turn them into a single, well-coordinated symphony.

The research group’s objective is clear: buildings that reach 100% energy self-sufficiency—structures capable of generating as much energy as they consume, achieving a ‘±0 kWh’ energy balance. Importantly, these technologies are intended to operate not in laboratory settings but in real urban environments. To achieve this goal, the group is pursuing four core research areas: next-generation energy-convergent envelope systems, integrated testbeds for building energy systems, building energy performance evaluation technologies, and digital twin–based optimization technologies. “Each of these individual technologies already exists.
But problems arise when they operate together—unexpected issues emerge,” the researcher explained while showing photographs of a prototype building. For example, installing a high-performance building envelope can result in ventilation loads that are higher than initially anticipated, while optimizing the heating and cooling system may lead to conflicts with the building control system. It is precisely at this point that the research begins.

High-Cost Systems Left Idle After Completion

“Attend the opening ceremony of a zero-energy building and everything looks impressive,” one researcher said with a faint smile. “Advanced technologies everywhere. But if you visit the same building a year later…” In many cases, the costly ventilation systems have been switched off due to noise complaints, while the sophisticated control systems have been placed in manual mode because they are too complex to operate.

As the building sector accounts for the largest share of urban energy consumption, it will take a transformative innovation in buildings to achieve carbon neutrality by 2050. Yet the reality has been sobering. Thus far, much of the development in zero-energy buildings has resembled a competition of component specifications—better windows, thicker insulation, more efficient equipment. But simply upgrading individual components does not guarantee real-life performance. Without system integration, even the best technologies cannot deliver their full potential.

Another major challenge has been the lack of verification infrastructure. After innovative technologies are developed, there has been no reliable way to confirm how they perform under real building conditions. The gap between simulated performance and real-world operation has often slowed both technology development and adoption. “On paper, the specifications look perfect,” a researcher noted.
“But once the system is installed, its performance often falls short.” This is exactly where the research group is focusing its efforts: integrating component technologies, rapid real-world validation, and performance optimization that continues beyond design into the operational phase. “Developing new technologies matters,” the researcher emphasized, “but ensuring that they actually work in real buildings is just as important.” Their research goes far beyond improving energy efficiency; it seeks to trigger a paradigm shift across the entire building industry. Once these technologies reach commercialization, they are expected not only to support the practical implementation of national carbon neutrality strategies but also to open new markets and expand the broader industrial ecosystem.

Testing It Like a Real Building—Not Just in a Lab

At the beginning of the research, the team faced a fundamental challenge: how to realistically replicate actual building environments. Testing individual elements—such as envelopes or heating systems—was relatively straightforward. The real difficulty lay in understanding the complex interactions that occur when all these systems operate simultaneously in a real building. “At first, it felt overwhelming,” a member of the research team recalled. “Relying on simulations alone has limitations, but building a new structure every time is obviously not practical.”

The solution the team developed was HILS (Hardware-In-the-Loop Simulation)–based experimental infrastructure. This approach links real hardware with simulation systems to replicate actual building operating environments. It also incorporates integrated validation technologies capable of evaluating envelopes, mechanical systems, and ventilation systems simultaneously. Beyond this, the team is developing a digital twin–based autonomous operation platform. Even after a building is completed, the system continuously monitors and optimizes building performance.
“Even if everything looks perfect during the design stage, reality introduces many variables—different climates, different occupancy patterns.” This platform enables buildings to learn from operational data and adapt autonomously over time. The research team has now moved beyond simply developing technologies: it is building an entire framework that includes technology validation, performance verification, and real-world deployment methods. “What we are creating,” one researcher explained, “is essentially a practical guide—showing how these technologies can actually be used in real buildings.”

A Research Team That Harmonizes Like an Orchestra

“We each play different instruments, but we follow the same score,” the researchers often say. This collaborative structure is perhaps the research group’s greatest strength: specialists in building envelopes, mechanical engineers, AI researchers, and digital twin developers—all with different backgrounds—working together toward a shared goal. “For my technology to be completed, I need data from another team—and the same is true for them.” Each technology is developed not in isolation but in close interaction with others. Improving window performance affects ventilation design; changing control algorithms influences equipment operation strategies. As a result, technologies evolve organically and interdependently.

The research group is more than a single institution: it is a collaborative research network involving domestic institutes, universities, and overseas laboratories. “At first, communication was difficult,” the team recalls. “Architectural terminology differs from energy engineering terminology, and AI researchers speak yet another technical language.” But over time, the team developed a shared language of collaboration. Today, they say that during meetings it is sometimes difficult to tell who specializes in which field.
“We are not simply developing technologies—we are building an ecosystem,” the research group leader concluded. Net-zero buildings are more than high-efficiency structures; they are the starting point for transforming entire cities and architectural cultures. This research team is laying the technological foundation that will make that transformation possible. Their journey—creating systems rather than isolated components, real-world solutions rather than laboratory experiments, and ecosystems rather than standalone technologies—has only just begun.
Department of Building Energy Research | 2026-03-23
Development of a Fire-Safe, Cost-Effective Building Façade System: The KICT FB Method
Research Specialist Kim Do-hyun, Department of Fire Safety Research, KICT

Using organic insulation materials, meeting international standards, and enabling investment cost recovery within 6.5 years

What is the fire-safe, cost-effective building façade system currently under development at KICT, and what motivated this research? What social needs drove its development?

In 2017, there were major fires at the Jecheon Sports Center and the Miryang Geriatric Hospital in Korea. In the same year, the Grenfell Tower fire in the United Kingdom caused significant casualties and extensive property damage. One common feature of these three buildings was that they all had aluminum composite panels installed on the façade. Aluminum composite panels are lightweight and easy to install, which has led to their wide use as exterior finishing materials for mixed-use residential and office buildings, where aesthetics are important. In addition, organic insulation materials with excellent thermal efficiency are commonly used to enhance a building’s insulation performance. In this type of façade structure, cavities inevitably form between the concrete wall, insulation layer, and composite panel modules during installation. It is believed that these hollow spaces (cavities) acted as vertical channels that allowed flames to spread rapidly between modules.

Following these incidents, I participated as a member of a special government-wide task force on fire safety measures, led by the Blue House (BH), which was established to develop both short- and long-term fire safety policies. As part of this effort, research was launched on building façade systems, which had caused particularly severe fire damage in these incidents. To address the structural weaknesses of existing façade systems, we set out to develop a technology that could provide both fire safety and economic feasibility, while also maintaining high insulation performance using cost-effective insulation materials.
This led to the development of a fire-safe and cost-effective building façade system.

What are the key technological components, and how does the system work?

Figure 1 illustrates the construction methods currently in use that employ aluminum composite panels, showing cross-sectional views of both modular and integrated systems. Regardless of the type of insulation material used, modular systems inevitably create cavities between modules. In addition, the joints between modules create thermal bridges, which reduce insulation performance. During a fire, these cavities and joints can act as pathways for flames, admitting the oxygen that accelerates their spread. This produces a chimney effect, enabling flames to spread rapidly to adjacent modules and significantly increasing fire vulnerability. Even when inorganic insulation materials are applied in integrated systems, the presence of cavities can still lead to similar fire safety issues. To overcome these structural problems and develop a façade system that achieves both insulation performance and fire safety while remaining economically feasible, the following ambitious research objectives were established:

1. Material goal: use organic insulation material (EPS), which is considered highly vulnerable to fire
2. Insulation performance goal: apply the strictest residential insulation standards in Korea (Central Region 1)
3. Fire safety performance goal: apply the world-leading exterior cladding fire safety standard (BS 8414)
4. Performance verification goal: obtain verification and certification from internationally accredited testing institutions

Figure 2 illustrates the concept and cross-sectional structure of the KICT FB (Fire Barrier) method developed through this research.
The key differences compared with conventional systems (Figure 1) are as follows. First, fire-retardant plastic manufacturing technology was applied to improve the fire safety performance of the aluminum composite panels used as exterior cladding. Second, urethane-based functional foam pads and sheets fill the cavities between modules, structurally blocking thermal bridges and eliminating pathways for vertical flame spread. Finally, the system adopts a fire-spread prevention structure in which insulation materials and pads are protected with functional foam sheets to fundamentally prevent the vertical propagation of fire through combustible insulation and cavities. Additional reinforcement measures were also introduced after experimental testing to address previously identified vulnerable areas.

Compared with existing technologies, what are the key advantages of this system? How economically viable is the solution in terms of installation and maintenance costs?

To verify the insulation performance of the developed system, computer-based simulations and thermal transmittance (U-value) tests were conducted. The results showed that the developed system had significantly lower thermal transmittance than existing technologies, demonstrating improved insulation performance. The improved panels also contributed to enhanced thermal efficiency compared with conventional products. Ultimately, the developed system satisfied residential insulation standards for the Central Region 1 climate zone in Korea. To evaluate fire safety performance, comparative experiments were conducted between façade systems using the developed materials, structures, and construction methods and those using conventional technologies. Fire performance tests were based on the BS 8414 standard, which is internationally recognized and was being introduced and revised within Korean KS standards and building regulations at the time.
For systematic verification, cross-validation tests were performed by BRE (Building Research Establishment) in the United Kingdom, an internationally accredited certification body for BS 8414 testing. The results of the domestic verification tests are shown in Figure 3. According to the performance evaluation standard BR 135, the conventional system failed and the test was terminated after approximately 5 minutes. In contrast, the developed system delayed the spread of fire for 23 minutes and 22 seconds, achieving more than four times the safety performance and exceeding the international standard by more than 140%. The results of cross-validation fire tests conducted by BRE based on this technology are presented in Figure 4.

To evaluate the economic feasibility of the developed technology, a standard building model was designed, and optimal standard modules and detailed design specifications were derived. Based on these, the economic performance of the technology was analyzed from the construction stage through post-completion operation. The analysis showed that applying the developed construction method would increase the initial construction cost by approximately 1.1 times. However, thanks to the improved insulation performance, annual energy cost savings of approximately KRW 9.5 million can be achieved after completion. As a result, it is estimated that the additional investment can be recovered within approximately 6.5 years. In addition, material costs can be reduced through mass production, making it possible to commercialize the technology at a lower cost than that incurred during the research stage. Ultimately, the application of this technology is expected to effectively delay fire spread in the event of a building fire, helping to prevent both loss of life and property damage.

What stage of development has the technology reached, and what is the feasibility of its commercialization?
If commercialized, what are its primary target markets (e.g., public sector, private sector, overseas markets)?

As shown in Figure 4, the technology has been applied to the exterior wall of the fire safety performance testing facility within the Department of Fire Safety Research at KICT to verify its constructability through on-site implementation. As of 2025, technology licensing agreements amounting to KRW 480 million have been completed with two companies. To further accelerate commercialization, discussions on additional technology transfer are actively underway with various industry partners. Because the technology has successfully passed BS 8414 testing at BRE (Building Research Establishment), an internationally accredited testing institution, it can also enter overseas construction markets that adopt this certification standard. Domestically, the technology can be applied to both public-sector projects (including new construction and remodeling) and private-sector developments.

If this technology is commercialized, what positive impacts do you expect it to have on urban environments and the public?

This technology can be applied not only to new buildings but also to aging structures that require renovation or reinforcement due to fire risks. By providing a façade system that combines effective thermal insulation, enhanced fire safety, and economic feasibility, it can directly meet the needs of the public and consumers who seek safer and more energy-efficient buildings. Given the increasing frequency of building fire incidents, its application is expected to provide safer living environments while helping prevent both human casualties and property damage.

What are your future research and development plans and goals?
The ultimate goal of this technology is to respond to increasingly severe fire risks, while addressing both stronger insulation regulations and evolving consumer demands by promoting the wider availability of cost-effective technologies and products in the market. In addition, efforts will continue to incorporate insights from industry, academia, and research institutions, enabling the development and practical application of technologies that better reflect the needs of consumers.
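As a sanity check on the economics quoted earlier in the interview (roughly 1.1x construction cost, about KRW 9.5 million in annual energy savings, and a payback of about 6.5 years), a simple payback calculation can be sketched. The baseline construction cost below is a hypothetical round number, not a figure from the study:

```python
# Back-of-the-envelope payback check for the figures quoted in the interview.
# The baseline construction cost is a hypothetical assumption.

def simple_payback_years(extra_investment_krw: float,
                         annual_savings_krw: float) -> float:
    """Years until cumulative energy savings cover the extra investment."""
    return extra_investment_krw / annual_savings_krw

baseline_cost_krw = 600_000_000       # hypothetical baseline construction cost
extra_krw = baseline_cost_krw * 0.1   # ~1.1x total => ~10% additional cost
print(f"payback: {simple_payback_years(extra_krw, 9_500_000):.1f} years")
```

With these assumed numbers the simple payback lands near the ~6.5-year figure reported in the article; the exact value of course depends on the actual baseline cost of the building in question.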
Department of Fire Safety Research | 2026-03-23
Overview of the Project to Establish an Integrated Data Management Framework for Promoting Carbon Neutrality in the Building Sector
Senior Researcher Kim Deuk-woo, Department of Building Energy Research, KICT

Prologue

Achieving carbon neutrality in the building sector involves improving the energy performance of individual buildings, reducing energy waste, and producing energy through renewable technologies. Going forward, the number of buildings constructed to meet these high performance standards is expected to increase, with zero-energy building certification systems being strengthened in parallel. However, existing buildings—rather than new construction—account for approximately 75% of the total building stock, with most of these being aging structures that were completed more than 15 years ago. Identifying cost-effective ways to improve the energy efficiency and reduce the carbon emissions of these existing buildings is therefore a major challenge shared by government, industry, academia, and research institutions, and requires cost-efficient choices between reconstruction and green remodeling. To this end, it is essential to swiftly identify energy-intensive buildings nationwide and connect them to practical intervention measures. Yet even experts face difficulty when it comes to clearly defining energy-intensive buildings or estimating their number.

Identifying Energy-Intensive Buildings

It is overly simplistic to conclude that “buildings with high energy consumption have poor energy performance.” High levels of energy use may stem from factors that are not directly related to a building’s energy performance, such as water supply and drainage systems, cooking activities, office equipment, server rooms, or bathing facilities. For example, restaurants and data centers often consume large amounts of energy simply because energy use is intrinsic to the core functions and services they provide. From a heating-energy perspective, regional climate differences must also be taken into account.
Even buildings with identical performance characteristics will inevitably consume more energy in Gangwon Province than on Jeju Island due to the lower ambient temperatures. Only by statistically accounting for such regional and functional differences is it possible to define and identify energy-intensive buildings based on rational and defensible criteria. In other words, a reasonable assessment of whether a building is energy-intensive requires a multidimensional analysis of the various factors that influence energy consumption, followed by objective evaluation based on that analysis. Key factors to consider include climate conditions, architectural characteristics, building systems and operational practices, types of use or business activities, occupant characteristics, surrounding environments, and broader socio-cultural and economic conditions (Figure 1). Only by comprehensively reflecting these factors can the context of a building’s energy consumption be fully understood, and its status as an energy-intensive building accurately determined. The determination of whether a building is energy-intensive is made through comparison with benchmark values. These benchmarks are established based on the energy consumption distribution of a peer group consisting of buildings with similar characteristics. The appropriateness of this grouping directly affects the reliability of the evaluation results: the greater the similarity among buildings within a peer group, the higher the accuracy of the assessment. Ultimately, after constructing a dataset in which energy consumption and influencing factors are well linked and integrated, the screening and evaluation of energy-intensive buildings can be carried out more efficiently and with greater reliability.
Fragmented Data and the Establishment of an Integrated Management Framework

Multiple government ministries—including the Ministry of Land, Infrastructure and Transport; the Ministry of the Interior and Safety; the Ministry of Education; the Ministry of Culture, Sports and Tourism; and the Ministry of Health and Welfare—produce and release a wide range of information on influencing factors as by-products of their administrative work, each serving different purposes. While some of these datasets are accessible through platforms such as the Public Data Portal, they are often fragmented across multiple institutions, difficult to access, and insufficiently documented. In some cases, original datasets cannot be obtained without prior approval from the responsible ministry (e.g., the National Building Energy Integrated Database, building 3D model data, the nationwide business census, household composition data, and credit card sales data). As a result, understanding, integrating, and analyzing such heterogeneous datasets is a highly complex and resource-intensive task. In many instances, attempts to link and integrate data are abandoned due to these difficulties, or fail to lead to meaningful analysis even when data are successfully obtained. Although various organizations—including research institutes, universities, and private companies—are currently attempting data collection, linkage, and analysis, most efforts remain at the pilot level due to constraints of time, cost, and technical complexity. To address these challenges that hinder the development of an industry–academia–research data ecosystem, the Project to Establish an Integrated Data Management Framework for the Promotion of Carbon Neutrality in the Building Sector (Project DataNet) was launched (Shin Hye-ri et al., 2024).
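To illustrate the kind of linkage work involved, the sketch below joins fragmented per-source records around the building registry as the core reference. The identifiers and field names are invented for illustration; real ministry datasets differ, often require address-based matching first, and some are approval-gated, as noted above.

```python
# Hypothetical per-source records keyed by a common building identifier.
registry = {
    "1168010100-1": {"use": "office",  "area_m2": 2_400, "built": 2001},
    "1168010100-2": {"use": "daycare", "area_m2": 460,   "built": 2012},
}
energy = {
    "1168010100-1": {"elec_kwh": 310_000, "gas_mj": 820_000},
}
business = {
    "1168010100-2": {"n_businesses": 1, "sector": "childcare"},
}

def integrate(bld_id):
    """Join per-source records into one building profile; missing sources stay None."""
    if bld_id not in registry:
        return None                      # the building registry is the core reference
    profile = {"bld_id": bld_id, **registry[bld_id]}
    profile["energy"] = energy.get(bld_id)      # left join: keep the building even
    profile["business"] = business.get(bld_id)  # when a source has no record for it
    return profile
```

The left-join design mirrors the framework's approach: every building in the registry gets a profile, and gaps in the influencing-factor sources are recorded explicitly rather than dropped.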
This initiative aims to propose a national data framework to accelerate carbon neutrality in the building sector and to demonstrate a nationwide integrated management system based on this framework (Figure 2). The framework encompasses the identification, processing, linkage, and integration of fragmented datasets, as well as the development of models to evaluate energy consumption levels. Information related to weather, architectural characteristics, building systems, operational practices, business activities, users, surrounding environments, and socio-cultural and economic conditions is integrated, using the building registry as the core reference. From this integrated dataset, three high-priority building characteristic indicators are extracted. Based on this extensive dataset of influencing factors, statistical evaluation models are developed to enable pilot assessments of appropriate energy consumption levels. In addition, a visualization tool for data utilization—the DeepView Data Viewer—is implemented. Together, these components form the I-BED (Infrastructure for Building Energy Data Management) system. The three key indicators are the morphological and shading indicators described in Yi Dong-hyuk and Kim Deuk-woo (2024), the spatial mixing indicators in Choi et al. (2025), and the energy pattern indicators in Kim et al. (2022, 2024). Pilot energy consumption evaluation models have been proposed for educational facilities (Kim et al., 2024), childcare facilities (Choi Kwang-won et al., 2024), and multi-family housing (Kim Ji-hyung et al., 2024), while models for hospitals, libraries, and office buildings are currently under development. The back-end conceptual design of the I-BED system is detailed in Kim Eo-jin et al. (2024).

Epilogue

To accelerate the achievement of carbon neutrality in the building sector, securing high-quality, building-level data on a nationwide scale must be the top priority.
Such data go beyond simple status monitoring and enable rational evaluations that take the context of energy consumption into account. Equally essential is the establishment of a data management system to systematically organize and operate these data resources. Only when these three pillars—data acquisition, evaluation models, and management systems—are developed in an integrated and coordinated manner can nationwide screening of energy-intensive buildings be effectively linked to regional green remodeling initiatives, leading to timely retrofit and improvement measures. Furthermore, once a cyclical structure in which screening and intervention continuously reinforce each other is established, carbon neutrality in the building sector can be meaningfully accelerated. From a policy perspective, data-driven and rigorous review will become possible in decision-making processes related to zero-energy buildings and green remodeling across relevant ministries. In addition, the ability to efficiently manage the energy consumption levels of individual buildings nationwide will support the establishment of a closed-loop framework encompassing the identification, inspection, support, and management of energy-intensive buildings. From a scientific and technological standpoint, evidence-based and objective performance evaluation will significantly enhance the reliability of assessments based on measured energy consumption. This increased reliability is expected to generate broader economic impacts, including the activation of diagnostic and efficiency-improvement markets for energy-intensive buildings, as well as the creation of new data-driven industries.

References

Kim, E. J., Choi, Y., Song, B. K., Shin, H., Kim, D. W., & Kim, Y. S. (2024). Performance Analysis of a Kubernetes-based Data Distribution Service. Journal of the Korean Institute of Communications and Information Sciences, 49(10), 1458–1465.

Kim, J. H., Kim, S. I., Park, Y. J., Kim, D. W., & Kim, E. J. (2024).
Correlation Analysis Between Non-Energy Public Data and Annual Energy Consumption by End Use in Multi-Family Housing. Korea Journal of Air-Conditioning and Refrigeration Engineering (KJACR), 36(12), 606–618.

Kim, H. J., Joo, H. B., Kim, D. W., & Heo, Y. S. (2024). Analysis of Annual Base and Heating Energy Influencing Factors for Energy Benchmarking of Educational Facilities. Journal of the Korean Institute of Architectural Sustainable Environment and Building Systems, 18(6), 491–501.

Yi, D. H., & Kim, D. W. (2024). GIS-based Urban-scale EnergyPlus Simulations for Database Construction to Develop Building Shading Indicators. Journal of the Korean Institute of Architectural Sustainable Environment and Building Systems, 18(2), 85–97.

Shin, H. R., Kim, H. G., & Kim, D. W. (2024). DataNet: Establishing an Integrated Management Framework for Building Energy and Influencing-factor Data to Accelerate Carbon Neutrality in the Building Sector. Journal of the Korean Institute of Architectural Sustainable Environment and Building Systems, 18(6), 564–575.

Choi, K. W., Park, J. H., Kim, D. W., & Cho, J. W. (2024). Development of Regression Models for Evaluating Energy Consumption Performance of Childcare Facilities Using Open Public Data. Journal of the Korean Solar Energy Society, 44(6), 35–48.

Kim, D. W., Ahn, K. U., Shin, H., & Lee, S. E. (2022). Simplified Weather-Related Building Energy Disaggregation and Change-Point Regression: Heating and Cooling Energy Use Perspective. Buildings, 12(10), 1717.

Kim, H. G., Lee, S. E., & Kim, D. W. (2024). Impact of Calendarization on Change-point Models. Energy and Buildings, 303, 113803.

Choi, S., Yi, D. H., Kim, D. W., & Yoon, S. (2025). Multi-source Data Fusion-driven Urban Building Energy Modeling. Sustainable Cities and Society, 123, 106283. https://doi.org/10.1016/j.scs.2025.106283
Department of Building Energy Research
Date
2025-12-22
Carbon-Eating Concrete: A Means of Reaching Carbon Neutrality
Research Fellow Park Jung-jun, Department of Structural Engineering Research (Carbon-Neutral Construction Materials Team), KICT

Each year, the Earth sends increasingly urgent warnings in the form of extreme weather events. Recurrent damage to lives and infrastructure caused by heavy rainfall, droughts, and typhoons has made greenhouse gas reduction no longer something to aspire to, but essential for survival. The cement and concrete industry is a particularly large emitter, accounting for approximately 8% of global greenhouse gas emissions. To directly confront this reality, the Carbon-Neutral Construction Materials Team at the Department of Structural Engineering Research, KICT, has developed Carbon-Eating Concrete (CEC) technology. Their goal extends beyond simply reducing emissions: it is to accelerate carbon neutrality across the entire concrete industry.

Developing CEC Technology for Carbon Neutrality

The Carbon-Neutral Construction Materials Team is engaged in continuous research with the aim of achieving net-zero carbon emissions in the construction industry. In response to the urgent global demand for carbon reduction technologies, the team is tackling the greenhouse gas challenges of the cement and concrete sector head-on. At the core of this research lies Carbon-Eating Concrete (CEC) technology. The essence of CEC is to react the carbon dioxide generated during concrete’s production with components inside the concrete, permanently storing CO₂ in a stable mineral form while simultaneously enhancing the concrete’s strength and durability. This represents a circular approach in which exhaust gases are treated not as pollutants, but as valuable resources. The research team is exploring the potential for CO₂ storage across the entire lifecycle of construction materials—including cement, aggregates, and mixing water—while focusing on maximizing the carbon neutrality impact of the concrete industry as a whole.
Utilizing the largest CO₂ curing facility in Korea, the team has successfully demonstrated direct CO₂ storage in precast bridge deck slabs. In addition, they have developed a CO₂ treatment technology for recycled ready-mixed concrete wastewater that can also be applied to cast-in-place concrete, achieving world-class efficiency. Ultimately, through a full lifecycle research framework encompassing material development, structural performance evaluation, field application, and policy recommendations, the team is working to ensure that the technology moves beyond the laboratory to become firmly established at real-world construction sites.

CEC Technology: A Game Changer in Accelerating Carbon Neutrality

The International Energy Agency (IEA) estimates that carbon capture and utilization (CCU) technologies that use CO₂ during concrete production have the potential to contribute 1–15% of the total 10 Gt CO₂ reduction target under the broader CCUS (Carbon Capture, Utilization, and Storage) framework. This assessment highlights CEC technology as a practical and globally applicable solution for accelerating carbon neutrality efforts. Accordingly, CEC is often described as a key player within the CCUS technology chain. This is because CEC is one of the very few technologies capable of utilizing the large volumes of CO₂ captured from industrial sites while simultaneously ensuring its safe and permanent storage. Unlike other CCU approaches that remain largely at the proof-of-concept stage, such as carbon-to-fuel or carbon-to-oil conversion, CEC has already demonstrated its feasibility for on-site application through pilot and demonstration projects. The research team estimates that converting just 20% of domestic concrete production to CEC technology could reduce CO₂ emissions by approximately 520,000 tons per year, equivalent to about 4.9% of Korea’s national CCUS reduction target.
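The quoted figures can be sanity-checked with simple arithmetic. Note that the implied size of Korea's national CCUS target in the last line is derived from the two stated numbers, not stated in the article itself.

```python
# Arithmetic check of the figures quoted above (illustration only).
iea_total_gt = 10.0                 # total CCUS reduction target, Gt CO2
ccu_concrete_share = (0.01, 0.15)   # 1-15% attributed to CO2 use in concrete
low, high = (iea_total_gt * s for s in ccu_concrete_share)
# -> 0.1 to 1.5 Gt CO2 potentially addressable by concrete-based CCU

cec_reduction_t = 520_000           # t CO2/yr at 20% CEC adoption (team estimate)
share_of_target = 0.049             # stated share of Korea's CCUS reduction target
implied_target_t = cec_reduction_t / share_of_target
# implied national target ~ 10.6 million t CO2/yr (derived, not stated)
```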
In this sense, CEC technology represents a critical lever for transforming the construction industry into an active contributor to climate change mitigation.

Convergent Research Addressing Key Barriers

A major strength of this research lies in its laboratory-to-field integrated approach to developing CEC technology. Rather than focusing solely on injecting CO₂ into concrete, this work represents a convergent research effort that connects the entire chain of carbon capture, utilization, and evaluation into a single, integrated framework. To this end, experts from industry, academia, and research—including the Department of Structural Engineering Research as the core hub, together with the Department of Building Research, the Department of Fire Safety Research, Shinhan University, Yonsei University, and Jiseung C&I Co., Ltd.—have participated from the early stages through regular seminars and discussions. Through the process of understanding different disciplinary perspectives and sharing data, the research team has been building a model for effective convergent research. The greatest challenge was the scale-up process, in which technologies that had proven successful at the laboratory level had to be expanded to an industrial scale. This involved more than simply increasing reactor capacity; it required verification that prototypes could be mass-produced while meeting the quality and reliability standards demanded in real-world applications. When unexpected reaction variability emerged at larger scales, the team designed and fabricated multi-stage mock-up reactors and repeatedly analyzed simulations and experimental data to identify optimal operating conditions. As a result of these efforts, large-scale demonstration experiments are now actively underway at ready-mixed concrete and precast product manufacturing plants.
A virtuous cycle has also been established, in which data obtained from field demonstrations are fed back into research to further refine equipment and processes. Through a process of continuous adjustments and optimizations, CEC technology is steadily evolving into a field-ready standard.

Lifecycle Research Capability and Teamwork

The greatest strength of the Carbon-Neutral Construction Materials Team lies in its end-to-end research capability, spanning the entire continuum from materials development to institutional and policy improvement. The team’s work goes beyond materials innovation to encompass structural safety evaluation, field application validation, technology standardization, and policy recommendations, all conducted within a coherent and integrated research framework. This comprehensive approach enables potential challenges arising at construction sites to be anticipated in advance and addressed effectively, significantly enhancing the practicality and scalability of the technology. All of these achievements are rooted in the strong trust and collaboration among team members. The Carbon-Neutral Construction Materials Team fosters a horizontal culture in which everyone, regardless of their rank or years of experience, is encouraged to freely share ideas, respect diverse perspectives, and work together toward optimal solutions. Through a process of overcoming numerous trials and setbacks, team members have continually supported and motivated one another. This collaborative culture has provided researchers with sustained positive momentum, and become a driving force in their collective efforts to build a sustainable future.
Department of Structural Engineering Research
Date
2025-12-22
Developing Two Powerful Capabilities: Purifying Water and Recovering Resources
Research Specialist An Ju-suk, Department of Environmental Research, KICT

Circulation-Type Membrane Capacitive Deionization (C-MCDI): A New Future for Resource-Recovery-Oriented Water Treatment

As water scarcity and water pollution continue to intensify worldwide, the need for technologies that can efficiently utilize limited water resources is growing. In particular, regions such as islands, remote areas, and locations lacking sufficient water supply infrastructure require innovative solutions capable of achieving both stable water treatment and resource recovery. Against this backdrop, Circulation-Type Membrane Capacitive Deionization (C-MCDI) technology has emerged as a promising alternative.

Q1. Could you introduce Circulation-Type Membrane Capacitive Deionization (C-MCDI) technology, and explain the background of the technology and the need for its development?

In Korea, there are many small-scale water supply facilities that rely on groundwater, while overseas there is a strong demand for treating coastal brackish groundwater or water sources containing high concentrations of specific ions. C-MCDI was developed to address these needs. Conventional MCDI systems utilize electrodes and ion-exchange membranes but suffer from low water recovery rates, requiring large volumes of raw water. In contrast, C-MCDI significantly improves recovery efficiency by recirculating the desorption water generated during the desorption process through a dedicated circulation loop instead of discharging it. Through repeated circulation, we experimentally confirmed that the concentration of specific ions gradually increases, enabling not only desalination but also resource recovery.

Q2. What differentiates this technology from existing approaches, and what advantages does it offer in terms of performance and economics?
The most distinctive feature of C-MCDI lies in its structure, which reprocesses desorption water that would otherwise be disposed of, thereby greatly improving water recovery rates. This allows for stable water treatment even with limited raw water availability. In addition, tailored circulation water operation based on ionic characteristics helps reduce scaling and fouling, enabling long-term, high-efficiency operation. Energy consumption is proportional to the salinity of the feed water, and if the water for treatment has a total dissolved solids (TDS) concentration below 5,000 mg/L, low-energy operation at less than 1 kWh/m³ is achievable. Capital costs are also reasonable, as the system is composed of low-pressure, DC-based modules without the need for high-pressure vessels or large pumps. Operation and maintenance costs mainly consist of electricity expenses and periodic replacement of electrodes and membranes, while the minimized cleaning frequency helps keep operating costs stable.

Q3. Could you explain the core components and operating principles of the technology?

C-MCDI consists of porous carbon electrodes, cation and anion exchange membranes, a power supply, and a circulation system. During the adsorption stage, an applied voltage forms an electric double layer on the electrode surfaces, causing cations and anions to be adsorbed and thereby reducing the inorganic ion concentration in the water. Subsequently, the system transitions to the desorption stage by maintaining the voltage at 0 V or reversing polarity, releasing the adsorbed ions. In the circulation-type configuration, this desorption water is not discharged but instead returned to a dedicated desorption loop and reused as feed water for subsequent desorption stages. Repeating this process maintains treated water quality while improving recovery rates and achieving selective ion concentration.

Q4.
What is the current development stage of the technology, its potential for commercialization, and its target markets?

C-MCDI technology is being developed in two areas simultaneously: high-recovery, low-energy water treatment and valuable resource concentration. In the water treatment field, a pilot-scale demonstration system with a capacity of 50 tons per day has been successfully operated for groundwater treatment at small-scale water supply facilities in Korea and for saline groundwater treatment in maritime ASEAN countries such as Malaysia. The results of test operations showed over 90% removal efficiency, a recovery rate of 83.3%, and an energy consumption of only 0.584 kWh/m³ under feed water conditions of 1,000 mg/L TDS. The technology has already been transferred to a domestic SME, and efforts are currently underway to develop commercial modules and expand overseas applications. Key target markets include small-scale water supply facilities, island and coastal water supply systems, and regions relying on low-salinity groundwater. In the concentration field, experimental studies have confirmed that the circulation structure enables the gradual enrichment of specific ions. While this stage has focused on feasibility validation, ongoing research aims to optimize operating conditions to enhance concentration efficiency and selectivity. Ultimately, the goal is to advance this into a high-value concentration technology applicable to industrial wastewater, mine drainage, battery manufacturing processes, and other resource recovery applications. Major target markets include the metal and mineral resource industries, secondary battery manufacturing, reuse of semiconductor cleaning water, and treatment of high-salinity industrial wastewater.

Q5. What social and environmental impacts are expected once the technology is commercialized?
C-MCDI is optimized for decentralized water treatment, enabling stable water supply even in regions with limited water infrastructure, such as islands, remote areas, and developing countries. Its circulation-based operation conserves water resources, and its low-power, DC-based structure allows for easy integration with renewable energy sources such as solar power. Moreover, its modular design and simplified maintenance enable long-term, stable operation even in areas lacking specialized personnel. As a result, C-MCDI can contribute to improved water welfare and sustainable water management in international development cooperation initiatives, including ODA projects.

Q6. What are your future research plans and goals?

In the short term, efforts will focus on optimizing desorption water circulation parameters—including circulation volume, cycle frequency, and voltage waveforms—verifying electrode and membrane durability, and standardizing modules and control software to accelerate commercialization. In the mid-term, the team plans to pursue low-power operation integrated with solar energy, establish operational datasets under diverse water quality conditions, and expand domestic and international field demonstrations. In the concentration domain, the goal is to improve target ion selectivity and recovery efficiency, and to integrate precipitation and electrochemical recovery processes to realize a comprehensive “water treatment + resource recovery” solution.
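The pilot performance figures quoted in Q4 can be reproduced with back-of-the-envelope formulas. The daily volumes and product-water TDS below are hypothetical values chosen only to match the reported ratios (90% removal, 83.3% recovery, 0.584 kWh/m³); they are not measurements from the demonstration system.

```python
def removal_efficiency(feed_tds, product_tds):
    """Fraction of dissolved solids removed from the product stream."""
    return 1 - product_tds / feed_tds

def recovery_rate(product_m3, feed_m3):
    """Share of feed water delivered as treated product."""
    return product_m3 / feed_m3

def specific_energy(kwh, product_m3):
    """Energy consumed per cubic meter of treated water."""
    return kwh / product_m3

feed_tds = 1_000               # mg/L, as in the pilot conditions
product_tds = 100              # mg/L (hypothetical) -> 90% removal
product_m3, feed_m3 = 50, 60   # m3/day (hypothetical) -> 83.3% recovery
energy_kwh = 29.2              # kWh/day (hypothetical) -> 0.584 kWh/m3

assert removal_efficiency(feed_tds, product_tds) == 0.9
assert round(recovery_rate(product_m3, feed_m3), 3) == 0.833
assert round(specific_energy(energy_kwh, product_m3), 3) == 0.584
```

The recovery-rate formula also makes the article's central point concrete: recirculating desorption water raises the product-to-feed ratio instead of discharging that volume as waste.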
Department of Environmental Research
Date
2025-12-22
Introducing Robots into Residential Spaces: Research on Human–Robot Interactive Architectural Technologies
Research Specialist Yang Hyeon-jeong, Department of Building Research, KICT

Prologue

In recent years, robotic technologies have advanced rapidly through the convergence of physical computing and generative AI, along with significant progress in humanoid robot development. As a result, the roles and application domains of robots are expanding far beyond their traditional function of simple automation, toward more sophisticated forms of interaction with humans. Robots, once primarily deployed in industrial manufacturing settings, are now evolving into service robots capable of actively responding to a wide range of situations in everyday life. This shift represents not only an inevitable trajectory of technological evolution, but also a direction that aligns closely with emerging social needs and expectations. Notably, these technological advances have captured attention as promising solutions that address the demographic shifts associated with an aging society. Considering a range of social challenges—including shortages in caregiving personnel, the need to support independent living among older adults, and issues of emotional isolation—robots have the potential to serve not merely as assistive tools, but as meaningful partners in daily life. For example, humanoid robots capable of understanding and responding to human language and emotions hold significant potential to support both the physical and psychological well-being of older adults. In this context, residential spaces constitute a core environment in which robots interact most closely with humans. As homes take on more than a purely residential function, this shift calls for a transformation in architectural technologies and spatial design premised on Human–Robot Interaction (HRI). Against this backdrop, the present study investigates Human–Robot Interactive architectural technologies that support the seamless operation of robots and user-centered interaction within residential environments.
Through this research, the study aims to propose a new residential paradigm that enhances quality of life for occupants.

Overview of Research on Human–Robot-Interactive Architectural Technologies

Research on Human–Robot-Interactive architectural technologies recognizes the need to move beyond isolated instances of human–robot interaction and toward the development of integrated cooperative systems that combine humans, robots, buildings, and spaces. The ultimate goal of this research is to develop human-centered, robot-interactive architectural technologies—integrating architectural space and services—to enable meaningful and effective interaction between humans and robots. More specifically, the study seeks to propose spatial adaptation strategies that allow robots to effectively support humans through research on dynamic interactions among robots, spaces, and occupants. In parallel, it aims to develop technologies for real-time data analysis and spatial optimization by linking robots with smart building infrastructure. The research is conducted in a phased manner over a three-year period. In the first year, the focus is on establishing the foundational framework for the development of robot-friendly interactive architectural technologies. Based on a survey of robot technologies applicable to architectural spaces, robots suitable for deployment in residential environments are selected, and a Human–Robot Interactive operational environment is constructed. In the second year, the research advances to the development of multimodal data utilization technologies for interactive robot-use environments, the establishment of user-tailored response optimization technologies, and the development of prototype control services integrated with existing Robot Operating System (ROS)-based platforms.
In the third year, user-tailored services are validated in real-world usage environments, interactions between architectural spaces and robots are optimized, and the connectivity and integration among humans, buildings, and robots are comprehensively verified. This research is conducted at the “Interactive Smart Housing Laboratory” located on the 5th floor of Building 8 at the KICT headquarters, where existing smart home functions are expanded to realize an interaction-driven technological environment and architectural space improvements that support the evolution toward Human–Robot–Building interactive environments.

Trend Analysis of Care Robots and Service Models

To support the introduction of robot services in residential environments, a review of domestic and international trends was conducted, focusing on robots that are commercially available. In Korea, robots such as Hyodol—used for health management, emotional interaction, and emergency assistance—and Pibo, which provides senior care and childcare services, are being applied in caregiving contexts. In the United States, robots such as Stretch (a mobile manipulator for home use) and Moxi (used for medical supply delivery and laboratory sample transport) have been introduced in healthcare and caregiving facilities. In Japan, emotionally interactive robots such as Paro (a robot with the appearance of a seal), Lovot, and Pepper are being utilized for dementia and depression management, reflecting the active adoption of companion and social robots. Overall, however, the diversity of service robots remains limited, and the number of commercially available platforms is still relatively small. To guide the selection of robots for deployment, the study examined the types of services required in residential settings. The Korean government’s “Senior Residence Activation Plan,” announced in July 2024, outlines service needs across three stages of aging.
In the “Independent Living Stage,” support is required for daily living activities such as household chores and meals, leisure activities, and regular well-being check-ins. In the “Care-Required Stage,” services such as customized elderly care, home-based nursing care, safe housing, and healthcare support are needed. In the “Specialized Care Stage,” residential living and long-term care support in senior care facilities are deemed essential. Based on this framework, the present study focuses on exploring how robots can provide the services required by older adults in the “Care-Required Stage,” with the aim of supporting daily living and caregiving needs within residential environments.

Selection and Technical Analysis of Robots for Residential Deployment

To develop service scenarios for elderly care robots in smart housing environments, three commercially available robots were selected for this study. LG CLOi (Delivery Robot) provides food and beverage delivery, mail and essential item transport, and user-tailored environmental services. Roborock (Household Robot) is equipped with spatial mapping and navigation functions to deliver automated indoor cleaning services. Hyodol (Social Robot) applies Internet of Things (IoT) technologies to provide 24-hour monitoring of older adults’ daily activities, emotional states, and safety conditions. These robots are managed in an integrated manner through the Home Assistant platform, a system designed to implement diverse residential management services using APIs linked to each robot’s respective control platform. The interactive environment has been developed as an open system so that it can be readily expanded to accommodate the future deployment of humanoid robots. Service content was then organized around role-based robot scenarios in smart housing environments. A smart housing service scenario was developed using the daily routine of a 67-year-old resident (Ms. Kim) as a model.
By analyzing her weekday life patterns from morning to night and matching appropriate robot technologies, five core technology domains were identified: mobility assistance robot technologies (fall prevention, route guidance, and object carrying); household assistance robot technologies (automation of cooking, laundry, cleaning, and dishwashing); interactive robot technologies (speech recognition, emotional feedback, and visual and auditory assistance); smart environment integration technologies (control of curtains, lighting, and home appliances); and health and daily-life monitoring technologies (sleep monitoring, fall detection, and temperature and humidity sensing).

Research on Robot-Friendly Residential Spaces

To examine the potential spatial transformations of housing premised on the introduction of robots, this study conducted an analysis of the robot-friendly building certification system. At present, this certification system is primarily operated for general (non-residential) buildings in which robot utilization is more active, and its application to residential spaces remains at an early stage. A representative example is Naver’s Second Headquarters, the world’s first “robot-friendly” building. In April 2022, this building achieved the highest rating under the robot-friendly building certification system by satisfying all 25 evaluation criteria across four categories: △architectural and facility design, △network and system infrastructure, △building operation and management, and △robot support and related services. Key features of the building include the world’s first robot-dedicated elevators, which enable seamless vertical movement of robots; a wide range of services based on 5G brainless robot technologies; and a multi-robot intelligence system supported by Naver Cloud and the 5G network infrastructure. Approximately 100 “Rookie” delivery robots are currently in operation, performing various tasks, including fire evacuation response.
Based on this certification framework and case analysis, the present study identified essential spatial elements required for deploying care robots in residential environments. Particularly notable elements include: △circulation corridors with a minimum effective width of 1.2 m, considering bidirectional movement between users and mobile service robots; △floor finishing materials suitable for robot mobility (with a coefficient of slip resistance (C.S.R.) of 0.4 or higher); and △the establishment of an integrated network infrastructure to support IoT and sensor technologies. These elements are expected to serve as core criteria for the future introduction of robots into residential spaces.

Plan for Establishing an Interactive Environment and Collecting Data

An “Interactive Smart Housing Laboratory,” with a total floor area of 84 m², has been established on the fifth floor of Building 8 at the headquarters of the Korea Institute of Civil Engineering and Building Technology (KICT) in Ilsan. This facility was created as an integrated experimental space for the development of automated, environment-controlled smart home technologies that enable the real-time monitoring of occupants’ behavioral and physiological responses and support the creation of healthy residential environments. At present, the laboratory is being expanded beyond conventional smart home functions, with the aim of evolving into an interactive environment that facilitates dynamic interactions among humans, robots, and buildings. To achieve this goal, the development of interaction-based technological infrastructure and improvements to architectural spaces are being pursued in parallel.
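The spatial elements identified in this study lend themselves to a machine-readable checklist. The sketch below is a minimal illustration; the field names and the pass/fail structure are assumptions of this example, not part of the certification system.

```python
# Hypothetical checklist derived from the spatial elements identified above:
# corridor effective width >= 1.2 m, floor C.S.R. >= 0.4, IoT network present.
CRITERIA = {
    "corridor_width_m": ("min", 1.2),   # bidirectional user/robot movement
    "floor_csr": ("min", 0.4),          # coefficient of slip resistance
    "iot_network": ("bool", True),      # integrated IoT/sensor infrastructure
}

def check_robot_readiness(plan: dict) -> list:
    """Return the criteria a room plan fails (empty list = robot-ready)."""
    failures = []
    for key, (kind, target) in CRITERIA.items():
        value = plan.get(key)
        if kind == "min" and (value is None or value < target):
            failures.append(f"{key}: {value} < required {target}")
        elif kind == "bool" and value is not target:
            failures.append(f"{key}: required")
    return failures

plan = {"corridor_width_m": 1.0, "floor_csr": 0.45, "iot_network": True}
print(check_robot_readiness(plan))  # corridor too narrow -> one failure reported
```

Encoding the criteria this way would let a design tool flag non-compliant plans automatically as the certification system is extended to housing.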
As illustrated in Figure 4, the system is designed to comprehensively monitor and analyze human factors (user location, activity level, sleep status, heart rate, respiration rate, blood pressure, and pulse), building factors (temperature, humidity, illuminance, air quality, atmospheric pressure, noise levels, and appliance operation status), and robot factors (user–robot interactions, mental-health-related responses, location tracking, collision detection, task execution data, muscle mass, and physical activity levels). In addition, through integration with the smart building infrastructure installed within the laboratory, the system is designed to enable real-time data analysis and bidirectional interactions among all components. This integrated framework is expected to support not only the provision of human-centered, personalized residential environments, but also future expansion toward integrated operation technologies for care robots and the development of data-driven environmental control algorithms.

Epilogue

This study is significant as it represents one of the first systematic efforts to explore the potential introduction of diverse service robots into the everyday setting of residential spaces, along with the architectural transformations and interactive environments required to accommodate them. While the research does not primarily aim to advance robot technologies themselves, it provides an architectural examination of the physical conditions and interaction frameworks necessary for the practical deployment of robots in residential environments. In doing so, the study establishes an important starting point for enhancing the real-world applicability and value of service robots.
Looking ahead, the spatial response strategies and technology integration concepts proposed in this study can be extended toward the development of sustainable residential models that address the challenges of an aging society, improve quality of life, and diversify residential services. Furthermore, it is hoped that this work will serve as a practical foundation for exploring new possibilities in the convergence of architecture and robotics, contributing to actionable pathways for meaningful human–robot coexistence.

References

Lee, K., Koo, H. M., Lee, Y. S., Jung, M. S., Yoon, D. K., & Kim, K. S. (2022). Development of Robot-Friendly Building Certification Indicators: Application of Focus Group Interviews (FGI) and the Analytic Hierarchy Process (AHP). Journal of Cadastre & Land Information (JCLI), 52(2), 17–34.
Electronics and Telecommunications Research Institute (ETRI). (2022). Development of Real-Environment Human-Care Robot Technologies in Response to an Aging Society. Report commissioned by the Ministry of Science and ICT.
Architecture & Urban Research Institute. (2024). Development of Core Technologies for the Design and Remodeling of Robot-Friendly Buildings. Report commissioned by the Ministry of Land, Infrastructure and Transport.
Ivanov, Stanislav Hristov, and Craig Webster. (2017). Designing robot-friendly hospitality facilities. Proceedings of the scientific conference Tourism. Innovations. Strategies.
Sheridan, T. B. (2016). Human–robot interaction: status and challenges. Human Factors, 58(4), 525–532.
Sartorius, Marie P., and Petra von Both. (2022). Rule-Based Design for the Integration of Humanoid Assistance Robotics into the Living Environment of Senior Citizens. Legal Depot D/2022/14982/02: 367.
Date: 2025-12-22
Core Solution for the Era of Fully Autonomous Driving: Physical Infrastructure Supporting Autonomy
Senior Researcher Kim Young-min, Department of Highway and Transportation Research, KICT

Prologue

To operate independently, autonomous vehicles (“AVs” hereinafter) must be capable of perceiving and interpreting their surroundings. In essence, they need to perform the same sequence of actions that human drivers carry out—Perception–Identification–Emotion–Volition (PIEV). To achieve this, AVs must be equipped with systems and performance capabilities that support this sequence. For AVs, the functions that parallel the human PIEV process are recognizing the driving environment and controlling the vehicle based on that recognition. The environment they must process includes not only the fundamental road layout (e.g., horizontal and vertical alignment, lane configuration) but also dynamic, real-time information, such as the presence and movement of other road users (vehicles, pedestrians, etc.) and the traffic regulations governing road use. Up to now, road infrastructure systems have been developed and operated with human drivers as the primary consideration. To achieve the commercialization of fully autonomous driving technology, it is essential to re-examine road infrastructure systems with AVs as the primary consideration. The Korea Institute of Civil Engineering and Building Technology (KICT) has pursued various R&D initiatives around strengthening the role of road infrastructure in the age of autonomous driving (for related content, see the Spring 2025 special feature “Future Road Development for Cooperative Autonomous Driving”). This article introduces the Physical Infrastructure Supporting Autonomous Driving currently being developed at the KICT.
Background and Purpose of Technology Development

To realize fully autonomous driving—defined as Level 3 or higher under the SAE (Society of Automotive Engineers) standards, where control authority shifts from the human driver to the vehicle—it is essential to combine advanced AI-based environmental perception using onboard sensors with technologies that link static and dynamic information from high-definition road maps, known as the Local Dynamic Map (LDM). Together, these technologies enable vehicles to perceive their surroundings with high-precision positioning. This approach represents the core concept of cooperative autonomous driving, in which infrastructure supports autonomous vehicles in carrying out driving tasks. To make this vision a reality, various forms and methods of infrastructure support have been proposed (see Figure 1). Let us return to the perspective of human driving behavior. The information a driver uses to perform driving is more extensive than commonly assumed, and the cognitive processes involved in decision-making are highly complex. For example, the act of changing lanes stems from several distinct decisions: recognizing that the current lane is more congested than an adjacent lane and deciding to change lanes; determining that the lane ahead is blocked due to construction or other reasons, making a forced lane change unavoidable; or choosing to move into a lane closer to the intended direction in order to make a left or right turn at an intersection. At a deeper level, driving involves collecting “evidence” for each decision, followed by “reasoning” to reach the final judgment. In short, the decision-making process required for driving combines sensory inputs—such as visual recognition of an obstacle’s shape or auditory recognition of a horn—with prior driving experience and accumulated know-how in situational judgment. AVs must carry out the same processes as human drivers.
At this point, the role of road facilities—referred to in this article as “physical infrastructure supporting autonomous driving”—is revealed. Because sensor-based perception systems have inherent limitations, AVs are currently required to transfer control authority back to the driver in what are commonly called “handicap situations and zones” (Jeon and Kim, 2021). Representative examples include reduced visibility due to weather conditions such as fog, which makes it difficult for vision sensors to collect information, and lane closures caused by roadwork; these are classified as typical “autonomous driving handicap situations and zones.” If “physical infrastructure supporting autonomous driving” can provide support in such “handicap situations and zones” by contributing to the “decision-making process,” more specifically to the “evidence collection process for decision-making” and to “reasoning using decision evidence,” it can offer practical and meaningful assistance for AV operations. The implementation of “physical infrastructure supporting autonomous driving” can be broadly categorized into two types. The first involves enhancing existing road facilities so that they are more easily detected by AV sensors. In practice, this means improving the sensor-based perception performance of road facilities—while preserving their inherent functions and properties—by accounting for the characteristics of key vehicle sensors used for environmental perception (e.g., cameras, LiDAR). Examples include adjusting the color or material of facilities within existing regulatory limits, or making structural modifications that expand sensor-detectable areas without altering their outward appearance. The second type involves leveraging the physical properties of road facilities to provide the AV with information that can serve as more reliable evidence in its reasoning process. 
For instance, facilities shaped like conventional traffic signs can display encoded information that AV sensors can detect and interpret, thereby delivering critical road operation data for vehicle control. This approach can be viewed as equipping AVs with functions equivalent to those that road facilities provide for human drivers—such as traffic regulation signs that indicate required actions, or guide signs and safety facilities that offer useful reference information while driving.

Development and Verification of Physical Infrastructure Supporting Autonomous Driving: Focus on Lane-Closure Sections

In 2024, the research team developed a prototype AV (see Figure 2). Although it is just one of many AVs designed and manufactured in South Korea, this vehicle has a unique function: it enables “vehicle control and autonomous driving through physical infrastructure support.” By incorporating physical infrastructure into vehicle control, the research team compared AV perception performance in “handicap situations and zones,” as well as vehicle behavior with and without infrastructure in those conditions. This made it possible to verify the suitability of the physical infrastructure system for autonomous driving. Every day, countless events unfold on the road. Among them, one of the most critical situations directly affecting vehicle operation is the lane closure. Lane closures often occur due to road maintenance or accident response, and vehicles must detour around the closed lane in order to continue driving. Human drivers recognize and interpret lane closures through multiple cues—for example, visually confirming traffic control devices such as cones or guard barriers, observing hand signals from traffic controllers like police officers or flaggers, or noticing forced merges of preceding vehicles. AVs, however, have clear limitations in carrying out such reasoning and judgment processes.
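The “encoded sign” idea mentioned above can be made concrete with a toy example: suppose a sign encodes a message as bars of high- and low-reflectivity material, and the vehicle thresholds the per-bar LiDAR intensity returns into bits. The code table, threshold rule, and message names below are hypothetical illustrations, not the encoding actually used by the research team.

```python
import numpy as np

# Hypothetical code table: bit patterns a sign might carry as
# high-/low-reflectivity bars (not the actual encoding scheme).
CODE_TABLE = {
    (1, 0, 1, 1, 0, 0, 1, 0): "LANE_CLOSED_AHEAD",
    (1, 1, 0, 0, 1, 0, 1, 1): "MERGE_RIGHT",
}

def decode_sign(intensities):
    """Threshold per-bar LiDAR intensity returns into bits, look up the message."""
    # Threshold at the midpoint between the weakest and strongest return.
    thr = (intensities.min() + intensities.max()) / 2
    bits = tuple(int(v > thr) for v in intensities)
    return CODE_TABLE.get(bits)  # None if the pattern matches no known code

# Simulated returns: noisy high (~0.9) and low (~0.2) reflectivity bars.
rng = np.random.default_rng(0)
true_bits = np.array([1, 0, 1, 1, 0, 0, 1, 0])
returns = np.where(true_bits == 1, 0.9, 0.2) + rng.normal(0, 0.05, 8)
print(decode_sign(returns))  # -> LANE_CLOSED_AHEAD
```

The point of such a scheme is that the message survives the kind of reasoning gap described above: the vehicle does not need to interpret cones or hand signals, only to match a physically robust pattern against a known table.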
Given that lane closures are highly dynamic and variable on roadways, it is expected that map-based electronic information systems alone will not be sufficient to provide reliable information in these situations. The research team devised a system that enables AVs to more easily recognize lane-closure situations by utilizing “encoded signs” that can be detected through road facilities (see Figure 3). In lane-closure sections, the vehicle must perform lateral control, which consists of two tasks: avoidance control, where the vehicle detours around the closed lane, and return control, where the vehicle decides whether to return to its original lane after passing the closed section, depending on the requirements of the global driving path. To achieve this, the team applied a technology that recognizes point cloud data (PCD) patterns obtained through in-vehicle LiDAR, enabling AVs to detect lane-closure situations via road facilities and incorporate this information into vehicle control. This approach takes advantage of LiDAR’s greater robustness under adverse weather conditions (e.g., heavy rain, fog) compared to vision sensors, thus addressing the “visibility obstruction” handicap situation that cannot be easily resolved by simply improving conventional vision sensor–based perception (Kim et al., 2024). The following are the speed and angular velocity values measured inside the AV when passing through a lane-closure section, both without and with the installation of physical infrastructure supporting autonomous driving. This experiment was conducted by recreating a lane-closure environment at the Yeoncheon SOC Demonstration Center and observing how the vehicle’s behavior differed depending on the presence or absence of physical infrastructure. When the AV recognizes the lane-closure section, lane-change control is performed, during which the vehicle reduces speed to an appropriate level and executes a turning maneuver to change lanes. 
In this process, if physical infrastructure that provides lane-closure guidance exists, the AV can recognize the lane-closure section in advance. Compared to the abrupt maneuvers—sudden deceleration or sharp turns—that AVs often perform when facing unexpected physical situations that make normal driving difficult, this advance recognition induces smoother driving. Numerically (see Figure 4), without lane-closure guidance infrastructure, the AV reduced its speed by up to 20 km/h when changing lanes, with angular velocity reaching a maximum of 0.15 rad/s. In contrast, when physical infrastructure was utilized, the AV reduced its speed by only up to 10 km/h to pass through the section, and its maximum angular velocity remained within 0.10 rad/s, confirming quantitatively that more stable driving was achieved. The results of this experiment indicate that physical infrastructure supporting autonomous driving not only benefits AVs themselves but can generate even greater benefits in situations where AVs and conventional vehicles coexist. From a traffic flow perspective, large fluctuations in the speed and angular velocity of an individual vehicle make that vehicle a so-called “troublemaker” that disrupts overall traffic stability. By ensuring that AVs do not behave in ways that appear unusual compared to human drivers, physical infrastructure contributes to improving the stability of mixed traffic flow involving both AVs and conventional vehicles, and is therefore expected to positively influence the broader adoption of AVs.

Epilogue

As of 2025, many experts believe that autonomous driving technology is in a stagnation phase of development and diffusion known as the “Chasm.” In the early 2010s, when Google first unveiled its autonomous vehicle to the public, most countries had set targets for the commercialization of autonomous driving that were earlier than 2020.
Today, in the mid-2020s, only a very limited number of production vehicles equipped with autonomous driving functions at SAE Level 3 or higher—a recognized milestone for commercialization—actually exist, and even these are constrained to operating only within the limitations defined by their Operational Design Domain (ODD). This reality implies that there are significant technological challenges that must be solved before we reach the era of fully autonomous vehicles, and at the same time highlights the need for new methodologies and approaches. Various R&D cases conducted thus far demonstrate that cooperation between vehicles and infrastructure is indispensable for the commercialization of fully autonomous driving. The methodology introduced by the research team in this article—namely, constructing an environment in which AVs can more actively utilize road facilities during the driving process and applying this approach to alleviate the difficulties of AV decision-making and control in “handicap situations and zones”—is expected to serve as a core solution that can accelerate the advent of the fully autonomous driving era.

References

Kim, Young-min; Park, Beom-jin; Kim, Ji-soo (2024). A Study on the Development and Verification of Infrastructure Facilities Supporting AV Positioning Using Mobile LiDAR. Journal of The Korean Society of Intelligent Transport Systems, Vol. 23, No. 6, pp. 203–217.
Jeon, Hyun-myung; Kim, Ji-soo (2021). Analysis of Handicap Situations and Their Causes in Autonomous Vehicles through IPA and FGI. Journal of The Korean Society of Intelligent Transport Systems, Vol. 20, No. 3, pp. 34–46.
Korea Intelligent Transport Systems Consortium (2024). Stage Report on the Development of a Digital Road and Traffic Infrastructure Convergence Platform Based on Crowdsourcing.
Department of Highway & Transportation Research
Date: 2025-09-24
The Current State of Cable Tension Monitoring Technology in Cable-Stayed Bridges
Senior Researcher Park Young-soo, Department of Structural Engineering Research, KICT

Prologue

The Special Act on the Safety Control and Maintenance of Establishments defines criteria for managing facilities, including bridges, primarily based on their scale and type, and stipulates that special bridges must be monitored and managed through precise measurements. Among these special bridges, the cable-stayed bridge is a representative cable-supported structure, in which the deck is supported by stay cables connected to towers. Cable-stayed bridges offer improved structural efficiency by combining the tensile strength of cables with the bending and compressive strength of towers and decks. They are particularly suited for long spans, but because of their aesthetic appeal, are also increasingly being adopted for shorter spans, resulting in a steady increase in the number of cable-stayed bridges in service. In such cable-supported structures, the cables are critical structural components. Their tension force and damping ratio affect not only the behavior of the cables themselves but also the overall stability of the bridge. As the main span of a cable-stayed bridge becomes longer, the stay cables linking the towers and decks become more susceptible to vibrations induced by wind and traffic loads. Since these essential cables may experience tension loss for various reasons—and such losses can significantly degrade bridge performance, potentially even leading to collapse in extreme cases—effective methods of monitoring cable tension are indispensable. Various methods for monitoring cable tension have been studied and applied. Among them, the vibration-based method estimates tension using vibration data and has the advantages of easier installation and higher cost-effectiveness compared to other methods.
As of 2022, approximately 260 cable tensiometers have been installed on cable-supported bridges managed by the Special Bridge Management Center of the Korea Authority of Land & Infrastructure Safety (KALIS), and most of these monitor cable tension by estimating it through vibration-based methods that use acceleration data.

Vibration-Based Method

The vibration-based method for estimating cable tension involves the following procedure: 1) installing an accelerometer on the exterior of the cable to continuously collect vibration responses (Figure 2, Step #01), 2) transforming the collected responses into power spectral density (PSD) signals in the frequency domain, 3) extracting peak information (fn: peak location, n: peak order) from the transformed PSD signals (Figure 2, Step #02), and 4) deriving a linear regression equation from the extracted peak information (Figure 2, Step #03). Using the intercept b of the regression equation (0.729 in Figure 2, Step #04), together with the cable’s properties—effective length (Leff) and unit weight (w)—the cable tension is then estimated as expressed in Equation (2). Since the excitation conditions of the cable are not constant, the data collected by accelerometers installed on the cable allow for a more stable detection of peak information as the measurement time increases. However, the longer the measurement time, the longer the tension estimation cycle becomes. Therefore, in practice, acceleration measurements are generally taken at a frequency of 100 Hz, with measurement durations of 10 minutes. The collected acceleration data are then transformed into the frequency domain, and peak information (peak position and order) is detected from the transformed frequency spectrum. During the tension estimation process, the critical task of detecting peak information is mainly performed manually. For example, if data are collected in 10-minute intervals over a 24-hour period, this yields 144 data sets.
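The regression-to-tension step of this pipeline can be sketched as follows. The sketch assumes the standard taut-string relation, in which the fitted coefficient b relating peak frequency to mode order (playing the role of the 0.729 value read off in Figure 2) gives T = 4·w·Leff²·b²/g; a through-origin fit is used here as a simplification, and the cable numbers are illustrative, not taken from an actual bridge.

```python
import numpy as np

def estimate_tension(peak_freqs, peak_orders, L_eff, w, g=9.81):
    """Taut-string tension estimate from PSD peak information.

    Fits f_n = b * n (least-squares through the origin), then applies
    T = 4 * w * L_eff**2 * b**2 / g,
    with L_eff the effective cable length [m] and w the unit weight [N/m].
    """
    n = np.asarray(peak_orders, dtype=float)
    f = np.asarray(peak_freqs, dtype=float)
    b = np.sum(n * f) / np.sum(n * n)      # slope of f_n versus mode order n
    T = 4.0 * w * L_eff**2 * b**2 / g
    return b, T

# Illustrative numbers only: a 100 m cable, w = 800 N/m, with PSD peaks
# detected at a ~0.73 Hz spacing.
b, T = estimate_tension([0.73, 1.46, 2.19, 2.92], [1, 2, 3, 4],
                        L_eff=100.0, w=800.0)
print(f"b = {b:.3f} Hz, T = {T/1000:.0f} kN")  # roughly b = 0.730, T ~ 1.7 MN
```

In practice the peak frequencies and orders would come from the automatic detection step applied to the 10-minute PSD rather than being supplied by hand.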
If accelerometers are installed on 8 cables of a single bridge, peak information must then be detected from a total of 1,152 data sets. Because the detection of peaks is carried out primarily by a human operator, the process is labor-intensive and subject to the operator’s subjective judgment, reducing objectivity. An alternative approach to manual detection is to use pre-set conditions. For instance, peaks can be identified by detecting locations where the amplitude exceeds a threshold, or by defining frequency bands where peaks are expected and selecting the highest value within that band. However, peak information may be missing depending on excitation conditions or cable damage. In cases where the natural frequency of the cable coincides with external excitation conditions, resonance may occur, resulting in unusually large peaks in certain frequency ranges. The limitation of methods based on automatic detection of pre-set conditions is that settings must be customized for each cable specification, and changes in spectral characteristics can hinder the accurate detection of peak information.

IoT Measurement System with Automatic Peak Detection Algorithm

The vibration signals of cable-stayed bridge cables, when transformed into the frequency domain, generate a power spectral density (PSD) that exhibits two distinct characteristics, as shown in Figure 3. First, the peaks in the cable PSD display a periodic pattern occurring at uniform intervals, reflecting the inherent dynamic properties of the cable. While the spacing of these peaks can vary depending on the cable’s specifications (such as material, geometry, and tension) and the overall structural system, periodicity with consistent intervals is a physical feature common to all cable members in cable-stayed bridges. The second characteristic is that the peaks have relatively higher amplitudes compared to surrounding frequency components.
This means that, in the PSD, these peaks behave as outliers compared to neighboring values (Jin et al., 2021). To automatically detect such uniform peak intervals, one can apply the Automatic Multiscale-based Peak Detection (AMPD) technique, a biosignal processing method from the field of Biomedical Engineering (BME) (Scholkmann et al., 2012). AMPD has the advantage of enabling complete automation because it can detect periodically occurring peaks without any pre-configuration. To capture the second characteristic—where peaks appear as outliers compared to surrounding values—a threshold-based outlier detection method can be used in parallel. In this case, the threshold can be set using the Median Absolute Deviation (MAD) method, which is robust to data containing outliers (Rousseeuw et al., 1993). Based on the peak information estimated using these two techniques, the cable tension is calculated. This technology offers several advantages: (1) no pre-configuration is needed, (2) it is highly robust against signal variations, and (3) its computational cost is low. Acceleration data for cable tension monitoring are mainly collected through wired measurement systems. In these systems, the sensors are connected to the data acquisition devices with cables, and the collected data are transmitted to the managing authority for use in tension analysis. Wired measurement systems have the advantage of enabling stable measurements without data loss; however, they involve additional costs due to the need for cabling between sensors and loggers as well as the installation of protective conduits to prevent disconnection, and they are limited in terms of installation locations and the number of sensors that can be deployed. In recent years, various IoT (Internet of Things)-based measurement systems have been developed and applied to facilities.
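A simplified sketch of this two-pronged detection is shown below: multiscale local-maxima voting in the spirit of AMPD, gated by a MAD-based outlier threshold. This is an illustrative reduction on synthetic data, not the full published AMPD algorithm.

```python
import numpy as np

def detect_peaks(psd, max_scale=10, mad_k=5.0):
    """Simplified multiscale peak picking with a MAD amplitude gate.

    A bin is kept if it is a local maximum at every scale k = 1..max_scale
    (the uniform-interval idea behind AMPD) and its amplitude is an outlier
    under a median-absolute-deviation threshold.
    """
    x = np.asarray(psd, dtype=float)
    n = len(x)
    keep = np.ones(n, dtype=bool)
    keep[:max_scale] = False        # skip the borders, where neighbours
    keep[n - max_scale:] = False    # at every scale are unavailable
    centre = x[max_scale:n - max_scale]
    for k in range(1, max_scale + 1):
        left = x[max_scale - k:n - max_scale - k]
        right = x[max_scale + k:n - max_scale + k]
        keep[max_scale:n - max_scale] &= (centre > left) & (centre > right)
    mad = np.median(np.abs(x - np.median(x)))
    keep &= x > np.median(x) + mad_k * 1.4826 * mad   # robust outlier gate
    return np.flatnonzero(keep)

# Synthetic PSD: peaks at a uniform 25-bin spacing over a noisy floor.
rng = np.random.default_rng(1)
psd = rng.random(300) * 0.1
true_peaks = np.arange(25, 300, 25)
psd[true_peaks] += 1.0
print(detect_peaks(psd))   # recovers the uniformly spaced peak indices
```

Because a routine like this needs no per-cable configuration and only a few array passes, it is light enough to run directly on IoT measurement devices rather than on a central server.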
However, most of them, like traditional wired systems, remain at the level of simply collecting and transmitting data. While this offers advantages in terms of installation flexibility and scalability, it does not fully utilize the potential strengths of IoT technology. IoT measurement systems can incorporate diverse algorithms to filter and process raw data before transmission, rather than sending the raw data itself. This edge computing technology processes data in real time at the sensor terminal or adjacent devices, reducing the burden of transmission to servers and lowering both processing costs and time. By installing the previously described automatic peak detection algorithm on an IoT-based measurement system and applying it to cable-stayed bridge cables, a study was conducted to verify the algorithm’s accuracy, as well as the usability and efficiency of the measurement system. Through this research, the potential of applying IoT measurement systems and edge computing technologies to facility monitoring was confirmed.

Epilogue

The integration of IoT measurement systems with edge computing makes it possible to move beyond the traditional approach of transmitting large volumes of raw data to servers for collection and analysis, enabling on-site data processing and optimized management. With data processing and analysis technologies now embedded in IoT measurement systems, the scope of data utilization in facility maintenance—previously limited to raw data transmission—is expected to expand significantly. In addition, with the advent of real-time processing, it is now possible to respond immediately rather than after the fact, making preventive maintenance achievable. This not only helps prevent safety accidents but is also expected to reduce both direct and indirect social costs.

References

2024 Road Bridge and Tunnel Status Report.
Jin et al. (2021). Fully automated peak-picking method for an autonomous stay-cable monitoring system in cable-stayed bridges. Autom. Constr., Vol. 126.
Scholkmann et al. (2012). An efficient algorithm for automatic peak detection in noisy periodic and quasi-periodic signals. Algorithms, Vol. 5.
Rousseeuw et al. (1993). Alternatives to the median absolute deviation. J. Am. Stat. Assoc., Vol. 88.
Department of Structural Engineering Research
Date: 2025-09-24
AI-Based GPR Data Analysis Technology for Detecting Underground Cavities and Buried Objects
Research Fellow Lee Dae-young, Department of Geotechnical Engineering Research, KICT

Prologue

In recent years, a series of large-scale ground subsidence accidents have occurred in urban areas such as Seoul. Examples include the sinkhole accident in Myeongil-dong, Gangdong-gu, Seoul, and the underground collapse at the Sinansan Line construction site in Gwangmyeong. Following these incidents, the Seoul Metropolitan Government announced that it would strengthen safety management against ground subsidence by conducting intensive Ground Penetrating Radar (GPR) surveys in the areas around excavation sites (Seoul City, 2025). Ground Penetrating Radar (GPR) is a geophysical survey method that uses electromagnetic waves to detect underground structures such as sewer pipelines, buried utilities, and cavities. Since the large-scale cavity incident at the Seokchon Underpass in 2014, GPR surveys have been actively applied to investigate subsurface cavities and ground subsidence beneath urban roads. As a non-destructive survey technique, GPR is useful for identifying underground utilities, cavities, and soil structures. However, it has several limitations, including depth restrictions depending on frequency, sensitivity to soil conditions, and difficulties in data interpretation. In addition, GPR analysis relies heavily on expert interpretation, and for high-resolution or 3D surveys, the data processing and interpretation require a significant amount of time, with notable variations in the reliability of the results. To address these issues, research is now underway on AI-based methods for automatically analyzing GPR data. This article introduces the principles of GPR surveys, along with AI-based methods for analyzing GPR data to improve the accuracy of interpretation, shorten analysis time, and enable real-time analysis.
Principle of GPR Surveys

Ground Penetrating Radar (GPR) is a survey technique that can identify the location and shape of underground structures such as buried pipelines by transmitting electromagnetic waves into the ground and receiving the signals reflected at the boundaries where electrical properties (conductivity and permittivity) change. GPR employs radio waves with frequencies of several tens of MHz or higher, and is mainly used as a non-destructive testing method to investigate relatively shallow targets at depths of approximately 1–3 meters. It is applied to the detection of underground utilities, cavities, tunnel voids, and stratigraphic structures. More recently, GPR surveys have been intensively conducted in areas where ground subsidence is a concern due to aging sewer pipelines, serving as an evaluation method to help prevent ground collapse (Figure 1). In the analysis of GPR survey data, buried pipelines exhibit strong amplitudes and appear in the form of hyperbolae, as shown in Figure 2. While single-channel GPR systems using one transmitter and receiver pair have been mainly used, high-resolution three-dimensional multi-channel GPR systems have recently come into wider application. GPR surveys are effective for targets buried at shallow depths of up to approximately 3 meters, within which range most pipelines are located, but have limitations when it comes to deeper investigations, such as tunnel construction or large-scale excavation sites.

Analysis of GPR Data Using AI Techniques

In the context of the Fourth Industrial Revolution, the outstanding performance and popularization of Artificial Intelligence (AI) technologies have further expanded their applicability. The application of AI to GPR analysis has the potential to improve the accuracy and efficiency of underground structure detection and reduce interpretation errors.
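The hyperbolic signature that AI models are trained to recognize follows directly from two-way travel time: a point reflector at depth d under antenna position x₀ produces t(x) = 2·√(d² + (x − x₀)²)/v, with wave speed v = c/√εr in soil of relative permittivity εr. The sketch below uses illustrative soil parameters only.

```python
import numpy as np

C = 3e8  # free-space electromagnetic wave speed [m/s]

def depth_from_twt(t_ns, eps_r):
    """Reflector depth [m] from two-way travel time [ns] in soil with
    relative permittivity eps_r, using v = c / sqrt(eps_r)."""
    v = C / np.sqrt(eps_r)
    return v * (t_ns * 1e-9) / 2.0

def hyperbola_twt(x, x0, depth, eps_r):
    """Two-way travel time [ns] recorded over a point reflector at (x0, depth):
    the signature that appears as a hyperbola in a B-scan."""
    v = C / np.sqrt(eps_r)
    return 2.0 * np.sqrt(depth**2 + (x - x0) ** 2) / v * 1e9

# Illustrative: a pipe 1.5 m deep in moist soil (eps_r ~ 9, so v ~ 0.1 m/ns).
x = np.linspace(-2, 2, 9)
t = hyperbola_twt(x, x0=0.0, depth=1.5, eps_r=9.0)
print(np.round(t, 1))                # apex at 30 ns over the pipe, rising toward the flanks
print(depth_from_twt(t.min(), 9.0))  # ~1.5 m recovered from the apex time
```

This is also why permittivity matters in practice: with the wrong εr, the same apex time converts to the wrong depth, which is one source of the interpretation errors AI-assisted analysis aims to reduce.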
Recently, to address the errors and technical challenges that arise in GPR image interpretation, research using deep learning, a machine learning approach widely applied in image processing, has been actively conducted. A typical AI-based workflow collects GPR data in B-scan and C-scan formats, performs noise removal and corrections, and then labels the data. After a corrected, labeled training dataset has been generated, a Convolutional Neural Network (CNN)-based object detection algorithm is applied (Girshick et al., 2014). Through deep learning, the reliability of buried pipeline detection can be significantly enhanced.

The Korea Institute of Civil Engineering and Building Technology (KICT) has conducted research on applying AI to improve the accuracy of GPR surveys for detecting cavities and investigating underground obstacles beneath roads, with the aim of preventing ground subsidence. GPR survey data were used to detect buried pipelines and cavities, and high-quality labeled datasets were generated by converting the GPR data into images and removing noise such as clutter. For the detection of underground utilities and cavities, the Faster R-CNN algorithm was applied, and various training techniques were employed to achieve optimal detection performance. Through this effort, KICT developed AI algorithms and GPR data analysis technologies capable of detecting underground cavities and buried pipelines.

Epilogue

With the acceleration of urban development and the resulting increase in large-scale excavation works, as well as urban sinkholes caused by aging infrastructure, GPR surveys for detecting cavities and ground subsidence have become increasingly important. Research on applying AI technologies to advance GPR data analysis is progressing accordingly.
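One preprocessing step in the workflow described above, removing clutter from B-scan images before labeling and training, is often done with mean-trace background subtraction: horizontal bands such as the direct wave and flat-layer ringing look nearly identical in every trace, so subtracting the average trace suppresses them while localized reflections survive. The following is a minimal sketch of that idea, not KICT's actual pipeline; the array sizes and the toy "reflection" are illustrative assumptions.

```python
import numpy as np

def remove_clutter(bscan):
    """Suppress horizontal clutter in a GPR B-scan by subtracting
    the mean trace (average over all traces at each time sample).

    bscan: 2-D array of shape (n_samples, n_traces); rows are time
    samples, columns are traces along the survey line.
    """
    background = bscan.mean(axis=1, keepdims=True)
    return bscan - background

# Toy B-scan: horizontal banding (clutter) plus one localized
# reflection standing in for a pipe signature.
clutter = np.tile(np.sin(np.linspace(0.0, 6.0, 64))[:, None], (1, 32))
signal = np.zeros((64, 32))
signal[30, 16] = 5.0
cleaned = remove_clutter(clutter + signal)
# The bands cancel out, while the localized reflector remains strong
```

In practice this is only a first pass; tilted layers and system ringing usually call for additional filtering before the images are handed to a detector such as Faster R-CNN.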
Integrating AI into GPR surveys can reduce data processing time while improving the consistency and accuracy of interpretation results, thereby overcoming the limitations of traditional GPR analysis. AI-based automatic analysis also enables real-time processing of GPR data and reduces interpretation errors, allowing decisions to be made more quickly. Ultimately, this technology can play a vital role in preventing ground subsidence accidents and enhancing the safety of underground utilities.

References

Seoul Metropolitan Government (2025). Special Countermeasures for Strengthening Safety Management Against Ground Subsidence at Large Urban Excavation Sites. Press Release, Road Management Division, Disaster and Safety Office, Seoul Metropolitan Government.
Lee, Dae-young (2015). Development of Ground Subsidence Evaluation Methods Caused by Damage to Old Sewer Pipes. Proceedings of the Joint Conference of the Korean Society of Water and Wastewater (KSWW) and the Korean Society on Water Environment (KSWE), Special Session V-1.
Lee, Dae-young (2018). Risk Assessment of Sewer Defects and Ground Subsidence Using CCTV and GPR. Journal of the Korean Geosynthetics Society (KGSS), Vol. 17, No. 3, pp. 47–55.
Korea Institute of Civil Engineering and Building Technology (2022). Development of Smart QSE-Based Undergrounding Innovation Technology for Overhead Lines and Road Performance Restoration Technology (1/3), Annual Report.
Korea Institute of Civil Engineering and Building Technology (2024). Development of Smart QSE-Based Undergrounding Innovation Technology for Overhead Lines and Road Performance Restoration Technology (3/3), Final Report.
Girshick, R., Donahue, J., Darrell, T., and Malik, J. (2014). "Rich feature hierarchies for accurate object detection and semantic segmentation," In Proc. CVPR.
https://ashutoshmakone.medium.com/faster-rcnn502e4a2e1ec6
Department of Geotechnical Engineering Research
Date: 2025-09-24