Research Reports
Overview of the Project to Establish an Integrated Data Management Framework for Promoting Carbon Neutrality in the Building Sector
Senior Researcher Kim Deuk-woo, Department of Building Energy Research, KICT

Prologue

Achieving carbon neutrality in the building sector involves improving the energy performance of individual buildings, reducing energy waste, and producing energy through renewable technologies. Going forward, the number of buildings constructed to meet these high performance standards is expected to increase, with zero-energy building certification systems being strengthened in parallel. However, existing buildings—rather than new construction—account for approximately 75% of the total building stock, and most of them are aging structures completed more than 15 years ago. Identifying cost-effective ways to improve the energy efficiency and reduce the carbon emissions of these existing buildings is therefore a major challenge shared by government, industry, academia, and research institutions, and requires cost-efficient choices between reconstruction and green remodeling.

To this end, it is essential to swiftly identify energy-intensive buildings nationwide and connect them to practical intervention measures. Yet even experts face difficulty when it comes to clearly defining energy-intensive buildings or estimating their number.

Identifying Energy-Intensive Buildings

It is overly simplistic to conclude that "buildings with high energy consumption have poor energy performance." High levels of energy use may stem from factors that are not directly related to a building's energy performance, such as water supply and drainage systems, cooking activities, office equipment, server rooms, or bathing facilities. For example, restaurants and data centers often consume large amounts of energy simply because energy use is intrinsic to the core functions and services they provide. From a heating-energy perspective, regional climate differences must also be taken into account: even buildings with identical performance characteristics will inevitably consume more energy in Gangwon Province than on Jeju Island because of the lower ambient temperatures. Only by statistically accounting for such regional and functional differences is it possible to define and identify energy-intensive buildings based on rational and defensible criteria.

In other words, a reasonable assessment of whether a building is energy-intensive requires a multidimensional analysis of the various factors that influence energy consumption, followed by objective evaluation based on that analysis. Key factors to consider include climate conditions, architectural characteristics, building systems and operational practices, types of use or business activities, occupant characteristics, surrounding environments, and broader socio-cultural and economic conditions (Figure 1). Only by comprehensively reflecting these factors can the context of a building's energy consumption be fully understood, and its status as an energy-intensive building accurately determined.

The determination of whether a building is energy-intensive is made through comparison with benchmark values. These benchmarks are established from the energy consumption distribution of a peer group of buildings with similar characteristics. The appropriateness of this grouping directly affects the reliability of the evaluation results: the greater the similarity among buildings within a peer group, the higher the accuracy of the assessment.
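To make the peer-group idea concrete, here is a minimal sketch in Python of percentile-based screening within peer groups, assuming a hypothetical table of annual energy use intensity (EUI); the column names, grouping keys, and the 90th-percentile cutoff are illustrative assumptions, not the project's actual models or criteria.

```python
import pandas as pd

# Hypothetical building-level table; columns are illustrative, not the
# actual I-BED schema.
df = pd.DataFrame({
    "building_id":  [1, 2, 3, 4, 5, 6],
    "use_type":     ["office", "office", "office",
                     "restaurant", "restaurant", "restaurant"],
    "climate_zone": ["central", "central", "central",
                     "southern", "southern", "southern"],
    "eui_kwh_m2":   [120.0, 310.0, 150.0, 480.0, 510.0, 890.0],
})

# Peer group = buildings sharing a use type and climate zone; each building
# is ranked within its own peer distribution rather than against all buildings.
df["peer_percentile"] = (
    df.groupby(["use_type", "climate_zone"])["eui_kwh_m2"].rank(pct=True)
)

# Flag buildings above the 90th percentile of their peer group as candidate
# energy-intensive buildings for closer review.
df["energy_intensive"] = df["peer_percentile"] > 0.9
print(df)
```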
Ultimately, after constructing a dataset in which energy consumption and influencing factors are well linked and integrated, the screening and evaluation of energy-intensive buildings can be carried out more efficiently and with greater reliability.

Fragmented Data and the Establishment of an Integrated Management Framework

Multiple government ministries—including the Ministry of Land, Infrastructure and Transport; the Ministry of the Interior and Safety; the Ministry of Education; the Ministry of Culture, Sports and Tourism; and the Ministry of Health and Welfare—produce and release a wide range of information on influencing factors as by-products of their administrative work, each serving different purposes. While some of these datasets are accessible through platforms such as the Public Data Portal, they are often fragmented across multiple institutions, difficult to access, and insufficiently documented. In some cases, original datasets cannot be obtained without prior approval from the responsible ministry (e.g., the National Building Energy Integrated Database, building 3D model data, the nationwide business census, household composition data, and credit card sales data). As a result, understanding, integrating, and analyzing such heterogeneous datasets is a highly complex and resource-intensive task. In many instances, attempts to link and integrate data are abandoned because of these difficulties, or fail to lead to meaningful analysis even when data are successfully obtained. Although various organizations—including research institutes, universities, and private companies—are currently attempting data collection, linkage, and analysis, most efforts remain at the pilot level due to constraints of time, cost, and technical complexity.

To address these challenges, which hinder the development of an industry–academia–research data ecosystem, the Project to Establish an Integrated Data Management Framework for the Promotion of Carbon Neutrality in the Building Sector (Project DataNet) was launched (Shin Hye-ri et al., 2024). This initiative aims to propose a national data framework to accelerate carbon neutrality in the building sector and to demonstrate a nationwide integrated management system based on this framework (Figure 2). The framework encompasses the identification, processing, linkage, and integration of fragmented datasets, as well as the development of models to evaluate energy consumption levels. Information related to weather, architectural characteristics, building systems, operational practices, business activities, users, surrounding environments, and socio-cultural and economic conditions is integrated, using the building registry as the core reference. From this integrated dataset, three high-priority building characteristic indicators are extracted. Based on this extensive dataset of influencing factors, statistical evaluation models are developed for pilot assessments of appropriate energy consumption levels. In addition, a visualization tool for data utilization—the DeepView Data Viewer—is implemented. Together, these components form the I-BED (Infrastructure for Building Energy Data Management) system. The three key indicators are the morphological and shading indicators described in Yi Dong-hyuk and Kim Deuk-woo (2024), the spatial mixing indicators in Choi et al. (2025), and the energy pattern indicators in Kim et al. (2022, 2024).
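As a minimal sketch of the linkage step itself, the snippet below joins fragmented sources on a building-registry key using pandas; the tables, column names, and values are invented for illustration and do not reflect the actual schemas of the national databases named above.

```python
import pandas as pd

# Building registry as the core reference (columns are assumptions).
registry = pd.DataFrame({
    "registry_id":   ["A1", "A2"],
    "use_type":      ["office", "school"],
    "floor_area_m2": [5200, 8100],
    "climate_zone":  ["central", "central"],
})

# Fragmented sources: monthly metered energy and zone-level weather.
energy = pd.DataFrame({
    "registry_id": ["A1", "A1", "A2"],
    "month":       ["2024-01", "2024-02", "2024-01"],
    "kwh":         [61_000, 58_000, 90_000],
})
weather = pd.DataFrame({
    "climate_zone": ["central"],
    "hdd_annual":   [2600],   # annual heating degree days
})

# Aggregate consumption to the registry key, then attach influencing factors.
annual = energy.groupby("registry_id", as_index=False)["kwh"].sum()
linked = (registry
          .merge(annual, on="registry_id", how="left")
          .merge(weather, on="climate_zone", how="left"))
print(linked)
```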
Pilot energy consumption evaluation models have been proposed for educational facilities (Kim et al., 2024), childcare facilities (Choi Kwang-won et al., 2024), and multi-family housing (Kim Ji-hyung et al., 2024), while models for hospitals, libraries, and office buildings are currently under development. The back-end conceptual design of the I-BED system is detailed in Kim Eo-jin et al. (2024).

Epilogue

To accelerate the achievement of carbon neutrality in the building sector, securing high-quality, building-level data on a nationwide scale must be the top priority. Such data go beyond simple status monitoring and enable rational evaluations that take the context of energy consumption into account. Equally essential is the establishment of a data management system to systematically organize and operate these data resources. Only when these three pillars—data acquisition, evaluation models, and management systems—are developed in an integrated and coordinated manner can nationwide screening of energy-intensive buildings be effectively linked to regional green remodeling initiatives, leading to timely retrofit and improvement measures. Furthermore, once a cyclical structure is established in which screening and intervention continuously reinforce each other, carbon neutrality in the building sector can be meaningfully accelerated.

From a policy perspective, data-driven and rigorous review will become possible in decision-making processes related to zero-energy buildings and green remodeling across relevant ministries. In addition, the ability to efficiently manage the energy consumption levels of individual buildings nationwide will support the establishment of a closed-loop framework encompassing the identification, inspection, support, and management of energy-intensive buildings. From a scientific and technological standpoint, evidence-based and objective performance evaluation will significantly enhance the reliability of assessments based on measured energy consumption. This increased reliability is expected to generate broader economic impacts, including the activation of diagnostic and efficiency-improvement markets for energy-intensive buildings, as well as the creation of new data-driven industries.

References

Kim, E. J., Choi, Y., Song, B. K., Shin, H., Kim, D. W., & Kim, Y. S. (2024). Performance Analysis of a Kubernetes-based Data Distribution Service. Journal of the Korean Institute of Communications and Information Sciences, 49(10), 1458–1465.
Kim, J. H., Kim, S. I., Park, Y. J., Kim, D. W., & Kim, E. J. (2024). Correlation Analysis Between Non-Energy Public Data and Annual Energy Consumption by End Use in Multi-Family Housing. Korea Journal of Air-Conditioning and Refrigeration Engineering (KJACR), 36(12), 606–618.
Kim, H. J., Joo, H. B., Kim, D. W., & Heo, Y. S. (2024). Analysis of Annual Base and Heating Energy Influencing Factors for Energy Benchmarking of Educational Facilities. Journal of the Korean Institute of Architectural Sustainable Environment and Building Systems, 18(6), 491–501.
Yi, D. H., & Kim, D. W. (2024). GIS-based Urban-scale EnergyPlus Simulations for Database Construction to Develop Building Shading Indicators. Journal of the Korean Institute of Architectural Sustainable Environment and Building Systems, 18(2), 85–97.
Shin, H. R., Kim, H. G., & Kim, D. W. (2024). DataNet: Establishing an Integrated Management Framework for Building Energy and Influencing-factor Data to Accelerate Carbon Neutrality in the Building Sector. Journal of the Korean Institute of Architectural Sustainable Environment and Building Systems, 18(6), 564–575.
Choi, K. W., Park, J. H., Kim, D. W., & Cho, J. W. (2024). Development of Regression Models for Evaluating Energy Consumption Performance of Childcare Facilities Using Open Public Data. Journal of the Korean Solar Energy Society, 44(6), 35–48.
Kim, D. W., Ahn, K. U., Shin, H., & Lee, S. E. (2022). Simplified Weather-Related Building Energy Disaggregation and Change-Point Regression: Heating and Cooling Energy Use Perspective. Buildings, 12(10), 1717.
Kim, H. G., Lee, S. E., & Kim, D. W. (2024). Impact of Calendarization on Change-Point Models. Energy and Buildings, 303, 113803.
Choi, S., Yi, D. H., Kim, D. W., & Yoon, S. (2025). Multi-source Data Fusion-driven Urban Building Energy Modeling. Sustainable Cities and Society, 123, 106283. https://doi.org/10.1016/j.scs.2025.106283
Department of Building Energy Research | 2025-12-22
Carbon-Eating Concrete: A Means of Reaching Carbon Neutrality
Research Fellow Park Jung-jun, Department of Structural Engineering Research (Carbon-Neutral Construction Materials Team), KICT

Each year, the Earth sends increasingly urgent warnings in the form of extreme weather events. Recurrent damage to lives and infrastructure caused by heavy rainfall, droughts, and typhoons has made greenhouse gas reduction no longer something to aspire to, but essential for survival. The cement and concrete industry is a particularly large emitter, accounting for approximately 8% of global greenhouse gas emissions. To confront this reality directly, the Carbon-Neutral Construction Materials Team at the Department of Structural Engineering Research, KICT, has developed Carbon-Eating Concrete (CEC) technology. Its goal extends beyond simply reducing emissions: it is to accelerate carbon neutrality across the entire concrete industry.

Developing CEC Technology for Carbon Neutrality

The Carbon-Neutral Construction Materials Team is engaged in continuous research with the aim of achieving net-zero carbon emissions in the construction industry. In response to the urgent global demand for carbon reduction technologies, the team is tackling the greenhouse gas challenges of the cement and concrete sector head-on. At the core of this research lies Carbon-Eating Concrete (CEC) technology. The essence of CEC is to react the carbon dioxide generated during concrete production with components inside the concrete, permanently storing CO₂ in a stable mineral form while simultaneously enhancing the concrete's strength and durability. This represents a circular approach in which exhaust gases are treated not as pollutants, but as valuable resources.

The research team is exploring the potential for CO₂ storage across the entire lifecycle of construction materials—including cement, aggregates, and mixing water—while focusing on maximizing the carbon neutrality impact of the concrete industry as a whole. Utilizing the largest CO₂ curing facility in Korea, the team has successfully demonstrated direct CO₂ storage in precast bridge deck slabs. In addition, it has developed a CO₂ treatment technology for recycled ready-mixed concrete wastewater that can also be applied to cast-in-place concrete, achieving world-class efficiency. Ultimately, through a full lifecycle research framework encompassing material development, structural performance evaluation, field application, and policy recommendations, the team is working to ensure that the technology moves beyond the laboratory to become firmly established at real-world construction sites.

CEC Technology: A Game Changer in Accelerating Carbon Neutrality

The International Energy Agency (IEA) estimates that carbon capture and utilization (CCU) technologies that use CO₂ during concrete production have the potential to contribute 1–15% of the total 10 Gt CO₂ reduction target under the broader CCUS (Carbon Capture, Utilization, and Storage) framework. This assessment highlights CEC technology as a practical and globally applicable solution for accelerating carbon neutrality efforts. Accordingly, CEC is often described as a key player within the CCUS technology chain, because it is one of the very few technologies capable of utilizing the large volumes of CO₂ captured from industrial sites while simultaneously ensuring its safe and permanent storage. Unlike other CCU approaches that remain largely at the proof-of-concept stage, such as carbon-to-fuel or carbon-to-oil conversion, CEC has already demonstrated its feasibility for on-site application through pilot and demonstration projects. The research team estimates that converting just 20% of domestic concrete production to CEC technology could reduce CO₂ emissions by approximately 520,000 tons per year, equivalent to about 4.9% of Korea's national CCUS reduction target. In this sense, CEC technology represents a critical lever for transforming the construction industry into an active contributor to climate change mitigation.
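As a quick back-of-the-envelope check (an illustration added here, not a figure from the research team), those two numbers together imply a national CCUS reduction target on the order of ten million tons of CO₂ per year:

$$\text{implied national target} \approx \frac{520{,}000\ \mathrm{t\,CO_2/yr}}{0.049} \approx 10.6\ \mathrm{Mt\,CO_2/yr}$$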
Convergent Research Addressing Key Barriers

A major strength of the team's work lies in its laboratory-to-field integrated approach to developing CEC technology. Rather than focusing solely on injecting CO₂ into concrete, this work represents a convergent research effort that connects the entire chain of carbon capture, utilization, and evaluation into a single, integrated framework. To this end, experts from industry, academia, and research—including the Department of Structural Engineering Research as the core hub, together with the Department of Building Research, the Department of Fire Safety Research, Shinhan University, Yonsei University, and Jiseung C&I Co., Ltd.—have participated from the early stages through regular seminars and discussions. Through the process of understanding different disciplinary perspectives and sharing data, the research team has been building a model for effective convergent research.

The greatest challenge was the scale-up process, in which technologies that had proven successful at the laboratory level had to be expanded to an industrial scale. This involved more than simply increasing reactor capacity; it required verification that prototypes could be mass-produced while meeting the quality and reliability standards demanded in real-world applications. When unexpected reaction variability emerged at larger scales, the team designed and fabricated multi-stage mock-up reactors and repeatedly analyzed simulations and experimental data to identify optimal operating conditions. As a result of these efforts, large-scale demonstration experiments are now actively underway at ready-mixed concrete and precast product manufacturing plants. A virtuous cycle has also been established in which data obtained from field demonstrations are fed back into research to further refine equipment and processes. Through continuous adjustment and optimization, CEC technology is steadily evolving into a field-ready standard.
Lifecycle Research Capability and Teamwork

The greatest strength of the Carbon-Neutral Construction Materials Team lies in its end-to-end research capability, spanning the entire continuum from materials development to institutional and policy improvement. The team's work goes beyond materials innovation to encompass structural safety evaluation, field application validation, technology standardization, and policy recommendations, all conducted within a coherent and integrated research framework. This comprehensive approach enables potential challenges arising at construction sites to be anticipated in advance and addressed effectively, significantly enhancing the practicality and scalability of the technology.

All of these achievements are rooted in the strong trust and collaboration among team members. The Carbon-Neutral Construction Materials Team fosters a horizontal culture in which everyone, regardless of rank or years of experience, is encouraged to freely share ideas, respect diverse perspectives, and work together toward optimal solutions. Through a process of overcoming numerous trials and setbacks, team members have continually supported and motivated one another. This collaborative culture has provided researchers with sustained positive momentum and become a driving force in their collective efforts to build a sustainable future.
Department of Structural Engineering Research | 2025-12-22
Developing Two Powerful Capabilities: Purifying Water and Recovering Resources
Research Specialist An Ju-suk, Department of Environmental Research, KICT

Circulation-Type Membrane Capacitive Deionization (C-MCDI): A New Future for Resource-Recovery-Oriented Water Treatment

As water scarcity and water pollution continue to intensify worldwide, the need for technologies that can efficiently utilize limited water resources is growing. In particular, regions such as islands, remote areas, and locations lacking sufficient water supply infrastructure require innovative solutions capable of achieving both stable water treatment and resource recovery. Against this backdrop, Circulation-Type Membrane Capacitive Deionization (C-MCDI) technology has emerged as a promising alternative.

Q1. Could you introduce Circulation-Type Membrane Capacitive Deionization (C-MCDI) technology, and explain the background of the technology and the need for its development?

In Korea, there are many small-scale water supply facilities that rely on groundwater, while overseas there is strong demand for treating coastal brackish groundwater or water sources containing high concentrations of specific ions. C-MCDI was developed to address these needs. Conventional MCDI systems utilize electrodes and ion-exchange membranes but suffer from low water recovery rates, requiring large volumes of raw water. In contrast, C-MCDI significantly improves recovery efficiency by recirculating the desorption water generated during the desorption process through a dedicated circulation loop instead of discharging it. Through repeated circulation, we experimentally confirmed that the concentration of specific ions gradually increases, enabling not only desalination but also resource recovery.

Q2. What differentiates this technology from existing approaches, and what advantages does it offer in terms of performance and economics?

The most distinctive feature of C-MCDI lies in its structure, which reprocesses desorption water that would otherwise be disposed of, thereby greatly improving water recovery rates. This allows for stable water treatment even with limited raw water availability. In addition, tailored circulation water operation based on ionic characteristics helps reduce scaling and fouling, enabling long-term, high-efficiency operation. Energy consumption is proportional to the salinity of the feed water; if the water to be treated has a total dissolved solids (TDS) concentration below 5,000 mg/L, low-energy operation at less than 1 kWh/m³ is achievable. Capital costs are also reasonable, as the system is composed of low-pressure, DC-based modules without the need for high-pressure vessels or large pumps. Operation and maintenance costs mainly consist of electricity expenses and periodic replacement of electrodes and membranes, while the minimized cleaning frequency helps keep operating costs stable.

Q3. Could you explain the core components and operating principles of the technology?

C-MCDI consists of porous carbon electrodes, cation and anion exchange membranes, a power supply, and a circulation system. During the adsorption stage, an applied voltage forms an electric double layer on the electrode surfaces, causing cations and anions to be adsorbed and thereby reducing the inorganic ion concentration in the water. Subsequently, the system transitions to the desorption stage by maintaining the voltage at 0 V or reversing polarity, releasing the adsorbed ions. In the circulation-type configuration, this desorption water is not discharged but instead returned to a dedicated desorption loop and reused as feed water for subsequent desorption stages. Repeating this process maintains treated water quality while improving recovery rates and achieving selective ion concentration.
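The snippet below is a toy mass balance of that circulation idea, written for illustration with invented numbers rather than measured C-MCDI parameters: each cycle, the ions captured from the feed during adsorption are released into a closed desorption loop, so the loop concentration climbs while the product water stays dilute.

```python
feed_tds = 1000.0   # feed concentration (mg/L)
removal  = 0.90     # fraction of ions captured per adsorption stage
feed_vol = 1.0      # m³ of feed treated per cycle
loop_vol = 0.2      # m³ of recirculated desorption water

loop_conc = feed_tds  # the loop is initially filled with feed water
for cycle in range(1, 11):
    captured_g = removal * feed_tds * feed_vol   # mg/L * m³ = g of salt
    loop_conc += captured_g / loop_vol           # desorbed into the loop (g/m³ = mg/L)
    product = (1 - removal) * feed_tds           # treated water quality
    print(f"cycle {cycle:2d}: product {product:5.0f} mg/L, loop {loop_conc:7.0f} mg/L")
```

Because the desorption stream is reused rather than discharged, recovery approaches the ratio of product volume to total raw water drawn, which is the mechanism behind the high recovery rates quoted below.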
Q4. What is the current development stage of the technology, its potential for commercialization, and its target markets?

C-MCDI technology is being developed in two areas simultaneously: high-recovery, low-energy water treatment and valuable resource concentration. In the water treatment field, a pilot-scale demonstration system with a capacity of 50 tons per day has been successfully operated for groundwater treatment at small-scale water supply facilities in Korea and for saline groundwater treatment in maritime ASEAN countries such as Malaysia. Test operations showed over 90% removal efficiency, a recovery rate of 83.3%, and an energy consumption of only 0.584 kWh/m³ under feed water conditions of 1,000 mg/L TDS. The technology has already been transferred to a domestic SME, and efforts are currently underway to develop commercial modules and expand overseas applications. Key target markets include small-scale water supply facilities, island and coastal water supply systems, and regions relying on low-salinity groundwater. In the concentration field, experimental studies have confirmed that the circulation structure enables the gradual enrichment of specific ions. While this stage has focused on feasibility validation, ongoing research aims to optimize operating conditions to enhance concentration efficiency and selectivity. Ultimately, the goal is to advance this into a high-value concentration technology applicable to industrial wastewater, mine drainage, battery manufacturing processes, and other resource recovery applications. Major target markets include the metal and mineral resource industries, secondary battery manufacturing, reuse of semiconductor cleaning water, and treatment of high-salinity industrial wastewater.

Q5. What social and environmental impacts are expected once the technology is commercialized?

C-MCDI is optimized for decentralized water treatment, enabling stable water supply even in regions with limited water infrastructure, such as islands, remote areas, and developing countries. Its circulation-based operation conserves water resources, and its low-power, DC-based structure allows easy integration with renewable energy sources such as solar power. Moreover, its modular design and simplified maintenance enable long-term, stable operation even in areas lacking specialized personnel. As a result, C-MCDI can contribute to improved water welfare and sustainable water management in international development cooperation initiatives, including ODA projects.

Q6. What are your future research plans and goals?

In the short term, efforts will focus on optimizing desorption water circulation parameters—including circulation volume, cycle frequency, and voltage waveforms—verifying electrode and membrane durability, and standardizing modules and control software to accelerate commercialization. In the mid-term, the team plans to pursue low-power operation integrated with solar energy, establish operational datasets under diverse water quality conditions, and expand domestic and international field demonstrations.
In the concentration domain, the goal is to improve target ion selectivity and recovery efficiency, and to integrate precipitation and electrochemical recovery processes to realize a comprehensive “water treatment + resource recovery” solution.
Department of Environmental Research | 2025-12-22
Introducing Robots into Residential Spaces: Research on Human–Robot Interactive Architectural Technologies
Research Specialist Yang Hyeon-jeong, Department of Building Research, KICT

Prologue

In recent years, robotic technologies have advanced rapidly through the convergence of physical computing and generative AI, along with significant progress in humanoid robot development. As a result, the roles and application domains of robots are expanding far beyond their traditional function of simple automation, toward more sophisticated forms of interaction with humans. Robots, once primarily deployed in industrial manufacturing settings, are now evolving into service robots capable of actively responding to a wide range of situations in everyday life. This shift represents not only an inevitable trajectory of technological evolution, but also a direction that aligns closely with emerging social needs and expectations.

Notably, these technological advances have captured attention as promising solutions to the demographic shifts associated with an aging society. Considering a range of social challenges—including shortages of caregiving personnel, the need to support independent living among older adults, and emotional isolation—robots have the potential to serve not merely as assistive tools, but as meaningful partners in daily life. For example, humanoid robots capable of understanding and responding to human language and emotions hold significant potential to support both the physical and psychological well-being of older adults.

In this context, residential spaces constitute a core environment in which robots interact most closely with humans. Housing must move beyond a purely residential function, and this shift calls for a transformation in architectural technologies and spatial design premised on Human–Robot Interaction (HRI). Against this backdrop, the present study investigates Human–Robot Interactive architectural technologies that support the seamless operation of robots and user-centered interaction within residential environments. Through this research, the study aims to propose a new residential paradigm that enhances quality of life for occupants.

Overview of Research on Human–Robot Interactive Architectural Technologies

Research on Human–Robot Interactive architectural technologies recognizes the need to move beyond isolated instances of human–robot interaction and toward integrated cooperative systems that combine humans, robots, buildings, and spaces. The ultimate goal of this research is to develop human-centered, robot-interactive architectural technologies—integrating architectural space and services—to enable meaningful and effective interaction between humans and robots. More specifically, the study seeks to propose spatial adaptation strategies that allow robots to effectively support humans, based on research into the dynamic interactions among robots, spaces, and occupants. In parallel, it aims to develop technologies for real-time data analysis and spatial optimization by linking robots with smart building infrastructure.

The research is conducted in phases over a three-year period. In the first year, the focus is on establishing the foundational framework for the development of robot-friendly interactive architectural technologies. Based on a survey of robot technologies applicable to architectural spaces, robots suitable for deployment in residential environments are selected, and a Human–Robot Interactive operational environment is constructed.
In the second year, the research advances to the development of multimodal data utilization technologies for interactive robot-use environments, the establishment of user-tailored response optimization technologies, and the development of prototype control services integrated with existing Robot Operating System (ROS)-based platforms. In the third year, user-tailored services are validated in real-world usage environments, interactions between architectural spaces and robots are optimized, and the connectivity and integration among humans, buildings, and robots are comprehensively verified. This research is conducted at the "Interactive Smart Housing Laboratory" on the 5th floor of Building 8 at the KICT headquarters, where existing smart home functions are being expanded to realize an interaction-driven technological environment and the architectural space improvements that support the evolution toward Human–Robot–Building interactive environments.

Trend Analysis of Care Robots and Service Models

To support the introduction of robot services in residential environments, a review of domestic and international trends was conducted, focusing on commercially available robots. In Korea, robots such as Hyodol—used for health management, emotional interaction, and emergency assistance—and Pibo, which provides senior care and childcare services, are being applied in caregiving contexts. In the United States, robots such as Stretch (a mobile manipulator for home use) and Moxi (used for medical supply delivery and laboratory sample transport) have been introduced in healthcare and caregiving facilities. In Japan, emotionally interactive robots such as Paro (a robot with the appearance of a seal), Lovot, and Pepper are being utilized for dementia and depression management, reflecting the active adoption of companion and social robots. Overall, however, the diversity of service robots remains limited, and the number of commercially available platforms is still relatively small.

To guide the selection of robots for deployment, the study examined the types of services required in residential settings. The Korean government's "Senior Residence Activation Plan," announced in July 2024, outlines service needs across three stages of aging. In the Independent Living Stage, support is required for daily living activities such as household chores and meals, leisure activities, and regular well-being check-ins. In the Care-Required Stage, services such as customized elderly care, home-based nursing care, safe housing, and healthcare support are needed. In the Specialized Care Stage, residential living and long-term care support in senior care facilities are deemed essential. Based on this framework, the present study focuses on how robots can provide the services required by older adults in the Care-Required Stage, with the aim of supporting daily living and caregiving needs within residential environments.

Selection and Technical Analysis of Robots for Residential Deployment

To develop service scenarios for elderly care robots in smart housing environments, three commercially available robots were selected for this study. LG CLOi (delivery robot) provides food and beverage delivery, mail and essential item transport, and user-tailored environmental services. Roborock (household robot) is equipped with spatial mapping and navigation functions to deliver automated indoor cleaning services. Hyodol (social robot) applies Internet of Things (IoT) technologies to provide 24-hour monitoring of older adults' daily activities, emotional states, and safety conditions. These robots are managed in an integrated manner through the Home Assistant platform, a system designed to implement diverse residential management services using APIs linked to each robot's respective control platform. The interactive environment has been developed as an open system so that it can be readily expanded to accommodate the future deployment of humanoid robots.
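As a minimal sketch of what such platform-level integration can look like, the snippet below drives one robot and reads one sensor through Home Assistant's standard REST API; the URL, token, and entity IDs are placeholders, and the laboratory's actual integration may be configured quite differently.

```python
import requests

# Placeholders for a local Home Assistant instance and a long-lived access token.
HA_URL = "http://homeassistant.local:8123"
HEADERS = {
    "Authorization": "Bearer <long-lived-access-token>",
    "Content-Type": "application/json",
}

# Start a cleaning run on an integrated robot vacuum (hypothetical entity ID).
requests.post(
    f"{HA_URL}/api/services/vacuum/start",
    headers=HEADERS,
    json={"entity_id": "vacuum.living_room_robot"},
    timeout=10,
)

# Read back a state exposed by another device, e.g. an activity sensor used
# for daily-living monitoring (also a hypothetical entity ID).
resp = requests.get(
    f"{HA_URL}/api/states/sensor.resident_activity_level",
    headers=HEADERS,
    timeout=10,
)
print(resp.json().get("state"))
```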
Role-based robot scenarios were then organized for the smart housing environment. A smart housing service scenario was developed using the daily routine of a 67-year-old resident (Ms. Kim) as a model. By analyzing her weekday life patterns from morning to night and matching appropriate robot technologies, five core technology domains were identified: mobility assistance robot technologies (fall prevention, route guidance, and object carrying); household assistance robot technologies (automation of cooking, laundry, cleaning, and dishwashing); interactive robot technologies (speech recognition, emotional feedback, and visual and auditory assistance); smart environment integration technologies (control of curtains, lighting, and home appliances); and health and daily-life monitoring technologies (sleep monitoring, fall detection, and temperature and humidity sensing).

Research on Robot-Friendly Residential Spaces

To examine the potential spatial transformations of housing premised on the introduction of robots, this study analyzed the robot-friendly building certification system. At present, this certification system is primarily operated for general (non-residential) buildings, in which robot utilization is more active, and its application to residential spaces remains at an early stage. A representative example is Naver's Second Headquarters, Korea's and the world's first robot-friendly building. In April 2022, this building achieved the highest rating under the robot-friendly building certification system by satisfying all 25 evaluation criteria across four categories: △architectural and facility design, △network and system infrastructure, △building operation and management, and △robot support and related services. Key features of the building include the world's first robot-dedicated elevators, which enable seamless vertical movement of robots; a wide range of services based on 5G brainless robot technologies; and a multi-robot intelligence system supported by Naver Cloud and the 5G network infrastructure. Approximately 100 "Rookie" delivery robots are currently in operation, performing various tasks, including fire evacuation response.

Based on this certification framework and case analysis, the present study identified essential spatial elements required for deploying care robots in residential environments. Particularly notable elements include: △circulation corridors with a minimum effective width of 1.2 m or more, considering bidirectional movement between users and mobile service robots; △floor finishing materials suitable for robot mobility (with a coefficient of slip resistance (C.S.R.) of 0.4 or higher); and △an integrated network infrastructure to support IoT and sensor technologies. These elements are expected to serve as core criteria for the future introduction of robots into residential spaces.

Plan for Establishing an Interactive Environment and Collecting Data

An "Interactive Smart Housing Laboratory" with a total floor area of 84 m² has been established on the fifth floor of Building 8 at the headquarters of the Korea Institute of Civil Engineering and Building Technology (KICT) in Ilsan. This facility was created as an integrated experimental space for the development of automated, environment-controlled smart home technologies that enable real-time monitoring of occupants' behavioral and physiological responses and support the creation of healthy residential environments. At present, the laboratory is being expanded beyond conventional smart home functions, with the aim of evolving into an interactive environment that facilitates dynamic interactions among humans, robots, and buildings. To achieve this goal, the development of interaction-based technological infrastructure and improvements to architectural spaces are being pursued in parallel.

As illustrated in Figure 4, the system is designed to comprehensively monitor and analyze human factors (user location, activity level, sleep status, heart rate, respiration rate, blood pressure, and pulse), building factors (temperature, humidity, illuminance, air quality, atmospheric pressure, noise levels, and appliance operation status), and robot factors (user–robot interactions, mental-health-related responses, location tracking, collision detection, task execution data, muscle mass, and physical activity levels). In addition, through integration with the smart building infrastructure installed within the laboratory, the system is designed to enable real-time data analysis and bidirectional interactions among all components. This integrated framework is expected to support not only the provision of human-centered, personalized residential environments, but also future expansion toward integrated operation technologies for care robots and the development of data-driven environmental control algorithms.
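To make the three factor groups concrete, here is one possible record schema for a single monitoring snapshot, sketched with Python dataclasses; the field names and types are assumptions for illustration, not the laboratory's actual data model.

```python
from dataclasses import dataclass

@dataclass
class HumanFactors:
    location: str            # room-level position of the occupant
    activity_level: float
    sleep_status: str
    heart_rate_bpm: float
    respiration_rate: float

@dataclass
class BuildingFactors:
    temperature_c: float
    humidity_pct: float
    illuminance_lux: float
    air_quality_co2_ppm: float
    noise_db: float

@dataclass
class RobotFactors:
    robot_id: str
    task: str                # task currently being executed
    position_xy: tuple       # indoor coordinates
    collision_detected: bool

@dataclass
class InteractionSnapshot:
    timestamp: str           # ISO 8601 time of the reading
    human: HumanFactors
    building: BuildingFactors
    robot: RobotFactors
```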
Epilogue

This study is significant as one of the first systematic efforts to explore the introduction of diverse service robots into the everyday setting of residential spaces, along with the architectural transformations and interactive environments required to accommodate them. While the research does not primarily aim to advance robot technologies themselves, it provides an architectural examination of the physical conditions and interaction frameworks necessary for the practical deployment of robots in residential environments. In doing so, it establishes an important starting point for enhancing the real-world applicability and value of service robots. Looking ahead, the spatial response strategies and technology integration concepts proposed in this study can be extended toward sustainable residential models that address the challenges of an aging society, improve quality of life, and diversify residential services. Furthermore, it is hoped that this work will serve as a practical foundation for exploring new possibilities in the convergence of architecture and robotics, contributing to actionable pathways for meaningful human–robot coexistence.

References

Lee, K., Koo, H. M., Lee, Y. S., Jung, M. S., Yoon, D. K., & Kim, K. S. (2022). Development of Robot-Friendly Building Certification Indicators: Application of Focus Group Interviews (FGI) and the Analytic Hierarchy Process (AHP). Journal of Cadastre & Land Information (JCLI), 52(2), 17–34.
Electronics and Telecommunications Research Institute (ETRI). (2022). Development of Real-Environment Human-Care Robot Technologies in Response to an Aging Society. Report commissioned by the Ministry of Science and ICT.
Architecture & Urban Research Institute. (2024). Development of Core Technologies for the Design and Remodeling of Robot-Friendly Buildings. Report commissioned by the Ministry of Land, Infrastructure and Transport.
Ivanov, S. H., & Webster, C. (2017). Designing Robot-Friendly Hospitality Facilities. Proceedings of the Scientific Conference "Tourism. Innovations. Strategies."
Sheridan, T. B. (2016). Human–Robot Interaction: Status and Challenges. Human Factors, 58(4), 525–532.
Sartorius, M. P., & von Both, P. (2022). Rule-Based Design for the Integration of Humanoid Assistance Robotics into the Living Environment of Senior Citizens. Legal Depot D/2022/14982/02: 367.
Department of Building Research | 2025-12-22
Core Solution for the Era of Fully Autonomous Driving: Physical Infrastructure Supporting Autonomy
Senior Researcher Kim Young-min, Department of Highway and Transportation Research, KICT

Prologue

To operate independently, autonomous vehicles (AVs) must be capable of perceiving and interpreting their surroundings. In essence, they need to perform the same sequence of actions that human drivers carry out: Perception–Identification–Emotion–Volition (PIEV). To achieve this, AVs must be equipped with systems and performance capabilities that support this sequence. For AVs, the functions that parallel the human PIEV process are recognizing the driving environment and controlling the vehicle based on that recognition. The environment they must process includes not only the fundamental road layout (e.g., horizontal and vertical alignment, lane configuration) but also dynamic, real-time information, such as the presence and movement of other road users (vehicles, pedestrians, etc.) and the traffic regulations governing road use.

Up to now, road infrastructure systems have been developed and operated with human drivers as the primary consideration. To commercialize fully autonomous driving technology, it is essential to re-examine road infrastructure systems with AVs as the primary consideration. The Korea Institute of Civil Engineering and Building Technology (KICT) has pursued various R&D initiatives aimed at strengthening the role of road infrastructure in the age of autonomous driving (for related content, see the Spring 2025 special feature "Future Road Development for Cooperative Autonomous Driving"). This article introduces the physical infrastructure supporting autonomous driving currently being developed at KICT.

Background and Purpose of Technology Development

To realize fully autonomous driving—defined as Level 3 or higher under the SAE (Society of Automotive Engineers) standards, where control authority shifts from the human driver to the vehicle—it is essential to combine advanced AI-based environmental perception using onboard sensors with technologies that link static and dynamic information from high-definition road maps, known as the Local Dynamic Map (LDM). Together, these technologies enable vehicles to perceive their surroundings with high-precision positioning. This approach represents the core concept of cooperative autonomous driving, in which infrastructure supports autonomous vehicles in carrying out driving tasks. To make this vision a reality, various forms and methods of infrastructure support have been proposed (see Figure 1).

Let us return to the perspective of human driving behavior. The information a driver uses while driving is more extensive than commonly assumed, and the cognitive processes involved in decision-making are highly complex. For example, the act of changing lanes stems from several distinct decisions: recognizing that the current lane is more congested than an adjacent lane and deciding to change lanes, determining that the lane ahead is blocked due to construction and that a forced lane change is therefore unavoidable, or choosing to move into a lane closer to the intended direction in order to make a left or right turn at an intersection. At a deeper level, driving involves collecting "evidence" for each decision, followed by "reasoning" to reach the final judgment.
In short, the decision-making process required for driving combines sensory inputs—such as visual recognition of an obstacle's shape or auditory recognition of a horn—with prior driving experience and accumulated know-how in situational judgment. AVs must carry out the same processes as human drivers. Here the role of road facilities, referred to in this article as "physical infrastructure supporting autonomous driving," becomes clear. Because sensor-based perception systems have inherent limitations, AVs are currently required to transfer control authority back to the driver in what are commonly called "handicap situations and zones" (Jeon and Kim, 2021). Representative examples include reduced visibility due to weather conditions such as fog, which makes it difficult for vision sensors to collect information, and lane closures caused by roadwork. If physical infrastructure can provide support in such situations by contributing to the decision-making process, more specifically to the collection of decision evidence and the reasoning over that evidence, it can offer practical and meaningful assistance for AV operations.

The implementation of physical infrastructure supporting autonomous driving falls into two broad types. The first involves enhancing existing road facilities so that they are more easily detected by AV sensors. In practice, this means improving the sensor-based perception performance of road facilities—while preserving their inherent functions and properties—by accounting for the characteristics of key vehicle sensors used for environmental perception (e.g., cameras, LiDAR). Examples include adjusting the color or material of facilities within existing regulatory limits, or making structural modifications that expand sensor-detectable areas without altering outward appearance. The second type involves leveraging the physical properties of road facilities to provide the AV with information that can serve as more reliable evidence in its reasoning process. For instance, facilities shaped like conventional traffic signs can display encoded information that AV sensors can detect and interpret, thereby delivering critical road operation data for vehicle control. This approach can be viewed as equipping AVs with functions equivalent to those that road facilities provide for human drivers—such as traffic regulation signs that indicate required actions, or guide signs and safety facilities that offer useful reference information while driving.

Development and Verification of Physical Infrastructure Supporting Autonomous Driving: Focus on Lane-Closure Sections

In 2024, the research team developed a prototype AV (see Figure 2). Although it is just one of many AVs designed and manufactured in South Korea, this vehicle has a unique function: it enables vehicle control and autonomous driving through physical infrastructure support. By incorporating physical infrastructure into vehicle control, the research team compared AV perception performance in handicap situations and zones, as well as vehicle behavior with and without infrastructure in those conditions. This made it possible to verify the suitability of the physical infrastructure system for autonomous driving.

Every day, countless events unfold on the road.
Among these, one of the most critical situations directly affecting vehicle operation is the lane closure. Lane closures often occur due to road maintenance or accident response, and vehicles must detour around the closed lane in order to continue driving. Human drivers recognize and interpret lane closures through multiple cues—for example, visually confirming traffic control devices such as cones or guard barriers, observing hand signals from traffic controllers such as police officers or flaggers, or noticing forced merges by preceding vehicles. AVs, however, have clear limitations in carrying out such reasoning and judgment processes. Given that lane closures are highly dynamic and variable, map-based electronic information systems alone are not expected to provide reliable information in these situations.

The research team devised a system that enables AVs to more easily recognize lane-closure situations by utilizing "encoded signs" that can be detected through road facilities (see Figure 3). In lane-closure sections, the vehicle must perform lateral control, which consists of two tasks: avoidance control, where the vehicle detours around the closed lane, and return control, where the vehicle decides whether to return to its original lane after passing the closed section, depending on the requirements of the global driving path. To achieve this, the team applied a technology that recognizes point cloud data (PCD) patterns obtained through in-vehicle LiDAR, enabling AVs to detect lane-closure situations via road facilities and incorporate this information into vehicle control. This approach takes advantage of LiDAR's greater robustness under adverse weather conditions (e.g., heavy rain, fog) compared to vision sensors, thus addressing the "visibility obstruction" handicap situation that cannot be resolved simply by improving conventional vision sensor–based perception (Kim et al., 2024).
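To illustrate the general idea (not the team's actual encoding scheme or recognition pipeline), the sketch below decodes a hypothetical roadside sign built from alternating retroreflective and dark bands, which appear in LiDAR returns as a vertical intensity pattern; the band layout, thresholds, and message bits are all invented for this example.

```python
import numpy as np

def decode_sign(points: np.ndarray, n_bands: int = 4,
                z_min: float = 0.5, z_max: float = 2.5,
                intensity_thresh: float = 0.7) -> list:
    """points: (N, 4) array of x, y, z, normalized intensity for one sign."""
    sign_pts = points[(points[:, 2] >= z_min) & (points[:, 2] <= z_max)]
    bits = []
    edges = np.linspace(z_min, z_max, n_bands + 1)
    for lo, hi in zip(edges[:-1], edges[1:]):
        band = sign_pts[(sign_pts[:, 2] >= lo) & (sign_pts[:, 2] < hi)]
        if len(band) == 0:
            bits.append(0)
            continue
        # A band reads as "1" if most of its returns are retroreflective.
        bits.append(int(np.median(band[:, 3]) > intensity_thresh))
    return bits

# Synthetic point cloud: bright band at 0.5-1.0 m, dark at 1.0-1.5 m, etc.
rng = np.random.default_rng(0)
z = rng.uniform(0.5, 2.5, 400)
intensity = np.where(((z - 0.5) // 0.5) % 2 == 0, 0.9, 0.2)
cloud = np.column_stack([rng.normal(0, 0.02, 400),
                         rng.normal(0, 0.02, 400), z, intensity])
print(decode_sign(cloud))  # [1, 0, 1, 0] -> e.g. "lane closed ahead, keep left"
```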
The following are the speed and angular velocity values measured inside the AV when passing through a lane-closure section, with and without the installation of physical infrastructure supporting autonomous driving. This experiment was conducted by recreating a lane-closure environment at the Yeoncheon SOC Demonstration Center and observing how the vehicle's behavior differed depending on the presence or absence of physical infrastructure. When the AV recognizes the lane-closure section, lane-change control is performed, during which the vehicle reduces speed to an appropriate level and executes a turning maneuver to change lanes. In this process, if physical infrastructure that provides lane-closure guidance exists, the AV can recognize the lane-closure section in advance. Compared to abrupt maneuvers such as sudden deceleration or sharp turns—often carried out by AVs when facing unexpected physical situations that make normal driving difficult—this advance recognition induces smoother driving. Numerically (see Figure 4), without the lane-closure guidance infrastructure, the AV reduced its speed by up to 20 km/h when changing lanes, with angular velocity reaching a maximum of 0.15 rad/s. In contrast, when the physical infrastructure was utilized, the AV reduced its speed by only up to 10 km/h to pass through the section, and its maximum angular velocity remained within 0.10 rad/s, confirming quantitatively that more stable driving was achieved.

The results of this experiment indicate that physical infrastructure supporting autonomous driving not only benefits AVs themselves but can generate even greater benefits in situations where AVs and conventional vehicles coexist. From a traffic flow perspective, large fluctuations in the speed and angular velocity of an individual vehicle make that vehicle a "troublemaker" that disrupts overall traffic stability. By ensuring that AVs are controlled so that they do not behave in ways that appear unusual compared to human drivers, physical infrastructure contributes to the stability of mixed traffic flow involving both AVs and conventional vehicles, and is therefore expected to positively influence the broader adoption of AVs.

Epilogue

As of 2025, many experts believe that autonomous driving technology is in a stagnation phase of development and diffusion known as the "chasm." In the early 2010s, when Google first unveiled its autonomous vehicle to the public, most countries had set targets for the commercialization of autonomous driving earlier than 2020. Today, in the mid-2020s, only a very limited number of production vehicles equipped with autonomous driving functions at SAE Level 3 or higher—a recognized milestone for commercialization—actually exist, and even these are constrained to operating within the limits defined by their Operational Design Domain (ODD). This reality implies that significant technological challenges must be solved before we reach the era of fully autonomous vehicles, and at the same time highlights the need for new methodologies and approaches. Various R&D cases conducted thus far demonstrate that cooperation between vehicles and infrastructure is indispensable for the commercialization of fully autonomous driving. The methodology introduced in this article—constructing an environment in which AVs can more actively utilize road facilities during driving, and applying this approach to alleviate the difficulties of AV decision-making and control in handicap situations and zones—is expected to serve as a core solution that can accelerate the advent of the fully autonomous driving era.

References

Kim, Y. M., Park, B. J., & Kim, J. S. (2024). A Study on the Development and Verification of Infrastructure Facilities Supporting AV Positioning Using Mobile LiDAR. Journal of The Korean Society of Intelligent Transport Systems, 23(6), 203–217.
Jeon, H. M., & Kim, J. S. (2021). Analysis of Handicap Situations and Their Causes in Autonomous Vehicles through IPA and FGI. Journal of The Korean Society of Intelligent Transport Systems, 20(3), 34–46.
Korea Intelligent Transport Systems Consortium. (2024). Stage Report on the Development of a Digital Road and Traffic Infrastructure Convergence Platform Based on Crowdsourcing.
Department of Highway & Transportation Research
Date: 2025-09-24
The Current State of Cable Tension Monitoring Technology in Cable-Stayed Bridges
Senior Researcher Park Young-soo, Department of Structural Engineering Research, KICT

Prologue

The Special Act on the Safety Control and Maintenance of Establishments defines criteria for managing facilities, including bridges, primarily based on their scale and type, and stipulates that special bridges must be monitored and managed through precise measurements. Among these special bridges, the cable-stayed bridge is a representative cable-supported structure in which the deck is supported by stay cables connected to towers. Cable-stayed bridges offer improved structural efficiency by combining the tensile strength of cables with the bending and compressive strength of towers and decks. They are particularly suited to long spans but, because of their aesthetic appeal, are also increasingly being adopted for shorter spans, resulting in a steady increase in the number of cable-stayed bridges in service.

In such cable-supported structures, the cables are critical structural components. Their tension force and damping ratio affect not only the behavior of the cables themselves but also the overall stability of the bridge. As the main span of a cable-stayed bridge becomes longer, the stay cables linking the towers and decks become more susceptible to vibrations induced by wind and traffic loads. Since these essential cables may experience tension loss for various reasons—and such losses can significantly degrade bridge performance, potentially even leading to collapse in extreme cases—effective methods of monitoring cable tension are indispensable.

Various methods for monitoring cable tension have been studied and applied. Among them, the vibration-based method estimates tension from vibration data and offers easier installation and higher cost-effectiveness than other methods. As of 2022, approximately 260 cable tensiometers had been installed on cable-supported bridges managed by the Special Bridge Management Center of the Korea Authority of Land & Infrastructure Safety (KALIS), and most of these monitor cable tension by estimating it through vibration-based methods that use acceleration data.

Vibration-Based Method

The vibration-based method for estimating cable tension involves the following procedure:
1) installing an accelerometer on the exterior of the cable to continuously collect vibration responses (Figure 2, Step #01);
2) transforming the collected responses into power spectral density (PSD) signals in the frequency domain;
3) extracting peak information (fn: peak location, n: peak order) from the transformed PSD signals (Figure 2, Step #02); and
4) deriving a linear regression equation from the extracted peak information (Figure 2, Step #03).

Using the intercept b of the regression equation (0.729 in Figure 2, Step #04), together with the cable’s properties—effective length (Leff) and unit weight (w)—the cable tension is then estimated as expressed in Equation (2).
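Equation (2) itself is not reproduced in this listing. As a point of reference, the following is a plausible reconstruction under the standard taut-string model with a bending-stiffness correction, chosen because it is consistent with the use of a regression intercept b described above; it should be read as an assumption, not as the article's exact formula. Here g is gravitational acceleration and EI the cable's flexural rigidity.

```latex
% Regression of squared normalized peak frequencies against squared peak order:
\left(\frac{f_n}{n}\right)^2
= \underbrace{\frac{g\,T}{4\,w\,L_{\mathrm{eff}}^2}}_{\text{intercept } b}
+ \underbrace{\frac{\pi^2 E I\, g}{4\,w\,L_{\mathrm{eff}}^4}}_{\text{slope}}\, n^2
\qquad\Longrightarrow\qquad
T = \frac{4\,w\,L_{\mathrm{eff}}^2}{g}\, b
```

Under this reading, the intercept b isolates the tension contribution while the slope absorbs the bending-stiffness effect, so the tension follows directly from b and the cable properties Leff and w.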
Since the excitation conditions of the cable are not constant, the data collected by accelerometers installed on the cable allow more stable detection of peak information as the measurement time increases. However, the longer the measurement time, the longer the tension estimation cycle becomes. Therefore, in practice, acceleration is generally measured at 100 Hz for durations of 10 minutes. The collected acceleration data are then transformed into the frequency domain, and peak information (peak position and order) is detected from the transformed spectrum.

During the tension estimation process, the critical task of detecting peak information is mainly performed manually. For example, if data are collected in 10-minute intervals over a 24-hour period, this yields 144 data sets; if accelerometers are installed on 8 cables of a single bridge, peak information must then be detected from a total of 1,152 data sets. Because peak detection is carried out primarily by a human operator, the process is labor-intensive and subject to the operator’s subjective judgment, reducing objectivity.

An alternative to manual detection is to use pre-set conditions. For instance, peaks can be identified by detecting locations where the amplitude exceeds a threshold, or by defining frequency bands where peaks are expected and selecting the highest value within each band. However, peak information may be missing depending on excitation conditions or cable damage, and when the natural frequency of the cable coincides with external excitation conditions, resonance may produce unusually large peaks in certain frequency ranges. The limitation of automatic detection based on pre-set conditions is that the settings must be customized for each cable specification, and changes in spectral characteristics can hinder accurate detection of peak information.

IoT Measurement System with Automatic Peak Detection Algorithm

When the vibration signals of cable-stayed bridge cables are transformed into the frequency domain, the resulting power spectral density (PSD) exhibits two distinct characteristics, as shown in Figure 3. First, the peaks in the cable PSD display a periodic pattern at uniform intervals, reflecting the inherent dynamic properties of the cable. While the spacing of these peaks varies with the cable’s specifications (such as material, geometry, and tension) and the overall structural system, periodicity with consistent intervals is a physical feature common to all cable members in cable-stayed bridges. Second, the peaks have relatively higher amplitudes than the surrounding frequency components; in the PSD, they behave as outliers compared to neighboring values (Jin et al., 2021).

To automatically detect such uniformly spaced peaks, one can apply the Automatic Multiscale-based Peak Detection (AMPD) technique, a biosignal processing method from the field of Biomedical Engineering (BME) (Scholkmann et al., 2012). AMPD enables complete automation because it detects periodically occurring peaks without any pre-configuration. To capture the second characteristic—peaks appearing as outliers relative to surrounding values—a threshold-based outlier detection method can be used in parallel, with the threshold set using the Median Absolute Deviation (MAD) method, which is robust to data containing outliers (Rousseeuw et al., 1993). The cable tension is then calculated from the peak information estimated with these two techniques. This approach offers several advantages: (1) no pre-configuration is needed, (2) it is highly robust against signal variations, and (3) its computational cost is low.
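To make the two-stage detection concrete, the following is a minimal sketch assuming a PSD sampled on a uniform frequency grid. A simple local-maxima scan stands in for the full AMPD procedure (which votes on local maxima across many window scales), combined with the MAD-based outlier threshold; the names and parameters are illustrative, not the system's implementation.

```python
import numpy as np

def mad_threshold(x: np.ndarray, k: float = 3.0) -> float:
    # Robust outlier threshold: median + k * scaled median absolute deviation.
    med = np.median(x)
    mad = np.median(np.abs(x - med))
    return float(med + k * 1.4826 * mad)  # 1.4826: consistency factor (Gaussian)

def detect_peaks(psd: np.ndarray, freqs: np.ndarray) -> np.ndarray:
    # Step 1: candidate peaks = strict local maxima (a simplified stand-in for
    # AMPD, which aggregates local maxima across multiple window scales).
    cand = np.where((psd[1:-1] > psd[:-2]) & (psd[1:-1] > psd[2:]))[0] + 1
    # Step 2: keep candidates whose amplitude is an outlier relative to the
    # surrounding spectrum (the MAD-based criterion described in the text).
    return freqs[cand[psd[cand] > mad_threshold(psd)]]

# Example: a synthetic cable PSD with modal peaks at multiples of 0.73 Hz.
freqs = np.linspace(0.0, 8.0, 4000)
psd = 0.02 + 0.01 * np.cos(5.0 * freqs)                   # smooth broadband floor
for n in range(1, 10):
    psd = psd + np.exp(-((freqs - 0.73 * n) ** 2) / 1e-4)  # sharp modal peaks
print(np.round(detect_peaks(psd, freqs), 2))               # ~0.73, 1.46, 2.19, ...
```

Because both stages are threshold-free or self-calibrating, no per-cable tuning is required, which is the property that makes the approach suitable for fully automated, embedded operation.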
Acceleration data for cable tension monitoring are mainly collected through wired measurement systems, in which the sensors are connected to the data acquisition devices by cables and the collected data are transmitted to the managing authority for use in tension analysis. Wired measurement systems enable stable measurements without data loss; however, they incur additional costs for cabling between sensors and loggers and for protective conduits to prevent disconnection, and they are limited in terms of installation locations and the number of sensors that can be deployed.

In recent years, various IoT (Internet of Things)-based measurement systems have been developed and applied to facilities. Most of them, however, like traditional wired systems, remain at the level of simply collecting and transmitting data. While this offers advantages in installation flexibility and scalability, it does not fully exploit the potential strengths of IoT technology. IoT measurement systems can incorporate diverse algorithms to filter and process raw data before transmission, rather than sending the raw data itself. This edge computing approach processes data in real time at the sensor terminal or adjacent devices, reducing the transmission burden on servers and lowering both processing costs and time.

By installing the automatic peak detection algorithm described above on an IoT-based measurement system and applying it to cable-stayed bridge cables, a study was conducted to verify the algorithm’s accuracy as well as the usability and efficiency of the measurement system. Through this research, the potential of applying IoT measurement systems and edge computing technologies to facility monitoring was confirmed.

Epilogue

The integration of IoT measurement systems with edge computing makes it possible to move beyond the traditional approach of transmitting large volumes of raw data to servers for collection and analysis, enabling on-site data processing and optimized management. With data processing and analysis technologies now embedded in IoT measurement systems, the scope of data utilization in facility maintenance—previously limited to raw data transmission—is expected to expand significantly. In addition, real-time processing makes immediate rather than after-the-fact responses possible, making preventive maintenance achievable. This not only helps prevent safety accidents but is also expected to reduce both direct and indirect social costs.

References

2024 Road Bridge and Tunnel Status Report.
Jin et al. (2021). Fully automated peak-picking method for an autonomous stay-cable monitoring system in cable-stayed bridges. Autom. Constr., Vol. 126.
Scholkmann et al. (2012). An efficient algorithm for automatic peak detection in noisy periodic and quasi-periodic signals. Algorithms, Vol. 5.
Rousseeuw et al. (1993). Alternatives to the median absolute deviation. J. Am. Stat. Assoc., Vol. 88.
Department of Structural Engineering Research
Date: 2025-09-24
AI-Based GPR Data Analysis Technology for Detecting Underground Cavities and Buried Objects
Research Fellow Lee Dae-young, Department of Geotechnical Engineering Research, KICT

Prologue

In recent years, a series of large-scale ground subsidence accidents have occurred in urban areas such as Seoul. Examples include the sinkhole accident in Myeongil-dong, Gangdong-gu, Seoul, and the underground collapse at the Sinansan Line construction site in Gwangmyeong. Following these incidents, the Seoul Metropolitan Government announced that it would strengthen safety management against ground subsidence by conducting intensive Ground Penetrating Radar (GPR) surveys in the areas around excavation sites (Seoul Metropolitan Government, 2025).

Ground Penetrating Radar (GPR) is a geophysical survey method that uses electromagnetic waves to detect underground structures such as sewer pipelines, buried utilities, and cavities. Since the large-scale cavity incident at the Seokchon Underpass in 2014, GPR surveys have been actively applied to investigate subsurface cavities and ground subsidence beneath urban roads. As a non-destructive survey technique, GPR is useful for identifying underground utilities, cavities, and soil structures. However, it has several limitations, including depth restrictions that depend on frequency, sensitivity to soil conditions, and difficulties in data interpretation. In addition, GPR analysis relies heavily on expert interpretation, and for high-resolution or 3D surveys the data processing and interpretation require a significant amount of time, with notable variations in the reliability of the results. To address these issues, research is now underway on AI-based methods for automatically analyzing GPR data. This article introduces the principles of GPR surveys, along with AI-based methods for analyzing GPR data that improve interpretation accuracy, shorten analysis time, and enable real-time analysis.

Principle of GPR Surveys

GPR identifies the location and shape of underground structures such as buried pipelines by transmitting electromagnetic waves into the ground and receiving the signals reflected at the boundaries of such structures, exploiting the contrast in their electrical properties (conductivity and permittivity). GPR employs radio waves with frequencies of several tens of MHz or higher, and is mainly used as a non-destructive testing method to investigate relatively shallow targets at depths of approximately 1–3 meters. It is applied to the detection of underground utilities, cavities, tunnel voids, and stratigraphic structures. More recently, GPR surveys have been intensively conducted in areas where ground subsidence is a concern due to aging sewer pipelines, serving as an evaluation method to help prevent ground collapse (Figure 1).

In GPR survey data, buried pipelines exhibit strong amplitudes and appear in the form of hyperbolae, as shown in Figure 2. While single-channel GPR systems using one transmitter–receiver pair have mainly been used, high-resolution three-dimensional multi-channel GPR systems have recently come into wider application. GPR surveys are effective for targets buried at shallow depths of up to approximately 3 meters, the range within which most pipelines are located, but they are limited for deeper investigations such as tunnel construction or large-scale excavation sites.
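Although the article does not print the underlying relations, the standard textbook conversion from travel time to depth on which such surveys rely is worth stating for reference:

```latex
v = \frac{c}{\sqrt{\varepsilon_r}}, \qquad d = \frac{v\, t}{2}
```

where c is the speed of light in vacuum, εr the relative permittivity of the soil, t the two-way travel time of the reflected pulse, and d the depth to the reflector. The dependence of v on εr is also why wet or conductive soils, which attenuate and slow the wave, degrade both penetration depth and interpretation accuracy.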
Analysis of GPR Data Using AI Techniques

In the context of the Fourth Industrial Revolution, the outstanding performance and popularization of Artificial Intelligence (AI) technologies have further expanded their applicability. Applying AI to GPR analysis has the potential to improve the accuracy and efficiency of underground structure detection and to reduce interpretation errors. Recently, to address the errors and technical challenges that arise during GPR image interpretation, research utilizing deep learning—one of the machine learning techniques widely applied in the field of image processing—has been actively conducted.

The AI-based method for analyzing GPR data involves collecting GPR data in B-scan and C-scan formats, performing noise removal and corrections, and then carrying out data labeling. After a corrected, labeled training dataset is generated, a Convolutional Neural Network (CNN)-based AI algorithm is used for object detection (Girshick et al., 2014). Through deep learning, the reliability of buried pipeline detection can be significantly enhanced.

The Korea Institute of Civil Engineering and Building Technology (KICT) has conducted research on applying AI to improve the accuracy of GPR surveys for cavity detection in the ground and the investigation of underground obstacles beneath roads, with the aim of preventing ground subsidence. GPR survey data were used to detect buried pipelines and cavities, and high-quality labeled datasets were generated by converting the GPR data into images and removing noise such as clutter. For the detection of underground utilities and cavities, the Faster R-CNN algorithm was applied, and by employing various training techniques, optimal performance for detecting buried pipelines and cavities was achieved. Through this effort, KICT developed AI algorithms and GPR data analysis technologies capable of detecting underground cavities and buried pipelines.
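To make the detection workflow concrete, the following is a minimal fine-tuning sketch using torchvision's off-the-shelf Faster R-CNN implementation; the label set, image size, and hyperparameters are illustrative assumptions and do not reproduce KICT's actual training configuration.

```python
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

NUM_CLASSES = 3  # background + buried pipeline + cavity (assumed label set)

def build_model():
    # Start from a COCO-pretrained detector and replace the box-predictor head
    # so it outputs scores for the GPR-specific classes.
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_CLASSES)
    return model

model = build_model()
model.train()
optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9)

# One illustrative training step on a dummy B-scan image with one labeled box
# around a hyperbolic reflection pattern.
images = [torch.rand(3, 512, 512)]
targets = [{
    "boxes": torch.tensor([[100.0, 120.0, 220.0, 260.0]]),  # hyperbola region
    "labels": torch.tensor([1]),                            # 1 = pipeline (assumed)
}]
loss_dict = model(images, targets)  # returns classification/regression losses
loss = sum(loss_dict.values())
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

In practice the dummy tensors would be replaced by the labeled, clutter-filtered B-scan images described above, and at inference time the model, switched to eval mode, returns boxes, labels, and confidence scores per image.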
Epilogue

With the acceleration of urban development and the resulting increase in large-scale excavation works, as well as the occurrence of urban sinkholes caused by aging infrastructure, the use of GPR surveys for detecting cavities and ground subsidence has become increasingly important. Recently, research has been progressing on applying AI technologies to advance GPR data analysis. Integrating AI into GPR surveys can reduce data processing time while improving the consistency and accuracy of interpretation results, thereby overcoming the limitations of traditional GPR analysis. AI-based automatic analysis also enables real-time processing of GPR data and reduces interpretation errors, allowing decision-making processes to move more quickly. Ultimately, this technology can play a vital role in preventing ground subsidence accidents and enhancing the safety of underground utilities.

References

Seoul Metropolitan Government (2025). Special Countermeasures for Strengthening Safety Management Against Ground Subsidence at Large Urban Excavation Sites. Press Release, Road Management Division, Disaster and Safety Office, Seoul Metropolitan Government.
Lee, Dae-young (2015). Development of Ground Subsidence Evaluation Methods Caused by Damage to Old Sewer Pipes. Proceedings of the Joint Conference of the Korean Society of Water and Wastewater (KSWW) and the Korean Society on Water Environment (KSWE), Special Session V-1.
Lee, Dae-young (2018). Risk Assessment of Sewer Defects and Ground Subsidence Using CCTV and GPR. Journal of the Korean Geosynthetics Society (KGSS), Vol. 17, No. 3, pp. 47–55.
Korea Institute of Civil Engineering and Building Technology (2022). Development of Smart QSE-Based Undergrounding Innovation Technology for Overhead Lines and Road Performance Restoration Technology (1/3), Annual Report.
Korea Institute of Civil Engineering and Building Technology (2024). Development of Smart QSE-Based Undergrounding Innovation Technology for Overhead Lines and Road Performance Restoration Technology (3/3), Final Report.
Girshick, R.; Donahue, J.; Darrell, T.; Malik, J. (2014). Rich feature hierarchies for accurate object detection and semantic segmentation. In Proc. CVPR.
https://ashutoshmakone.medium.com/faster-rcnn502e4a2e1ec6
Department of Geotechnical Engineering Research
Date: 2025-09-24
Development of Fast Prediction Technology for Urban Flood Forecasting
Kim Hyung-jun, Senior Researcher, Department of Hydro Science and Engineering Research, KICT
Sim Sang-bo, Postdoctoral Researcher, Department of Hydro Science and Engineering Research, KICT

Prologue

In recent years, climate change has accelerated the temporal concentration and spatial intensification of rainfall, leading to unprecedented flooding events that cause significant damage. In 2020, South Korea experienced prolonged monsoon rains that triggered flood damage across the country, with floods in some regions exceeding design flood levels and resulting in substantial loss of life and property. In 2022, localized torrential rainfall in southern Seoul exceeded the capacity of urban drainage systems, causing widespread inundation throughout the city and resulting in casualties, particularly in the Gwanak-gu area, where many residents live in underground spaces. Rainfall events exceeding design expectations are anticipated to keep increasing. Particularly in urban areas with high ratios of impervious surfaces, the risk of flood damage from localized concentrated rainfall is rising, and this risk is expected to increase further in the future.

As a countermeasure, the Ministry of Environment, based on recent cases of large-scale flood damage, is designating additional flood warning points along rivers to expand its flood forecasting coverage, and has enacted new legislation to establish an institutional foundation for implementing flood forecasting in urban areas. The Korea Institute of Civil Engineering and Building Technology (KICT) is developing urban flood forecasting models to support the Ministry of Environment's implementation of urban flood forecasting.

Development of Real-Time Urban Flood Prediction Model

1. Dual Drainage Model for Urban Flood Prediction

To analyze urban inundation caused by localized torrential rainfall, it is necessary to use both a 2D model for surface runoff behavior and a 1D model for the flow within complex underground stormwater networks. The traditional method of urban inundation analysis involved first performing a 1D stormwater network flow analysis, calculating the flow discharged from the network, and applying this to a 2D model to assess the extent of surface flooding. However, this approach does not account for the possibility of flow re-entering the stormwater network from the surface, which can lead to overestimating the extent of inundation. Recent advancements have led to models that dynamically integrate stormwater network flow and surface water flow analysis, allowing simulations in which excess stormwater surcharges from the network and causes surface flooding, and later flows back into the network, helping to resolve the flooding.

KICT developed the HC-SURF (Hyper Connected Solution for Urban Flood) model, which allows a simultaneous analysis of both surface water flow and stormwater network flow. The stormwater network flow analysis uses the source code of the United States Environmental Protection Agency (EPA)'s SWMM (Storm Water Management Model) version 5.2, while the surface water flow analysis uses self-developed code that discretizes the 2D shallow water equations using the Finite Volume Method (FVM). The SWMM model is written in C, while the surface water model was developed in Fortran; these were integrated into a single Visual Studio project to allow information exchange and compiled into one executable file.
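At the heart of such dual-drainage coupling is the exchange of flow at inlets between the 2D surface and the 1D network. The sketch below illustrates one common interface treatment, using weir-type and orifice-type equations to move water in both directions; the function name, coefficients, and the treatment itself are illustrative assumptions rather than the actual HC-SURF interface.

```python
import math

G = 9.81  # gravitational acceleration (m/s^2)

def inlet_exchange(h_surface: float, h_node: float, rim_elev: float,
                   perimeter: float, area: float,
                   c_weir: float = 0.5, c_orif: float = 0.6) -> float:
    """Exchange discharge (m^3/s) at one inlet.

    Positive: surface water drains into the network (re-entry flow).
    Negative: the node surcharges onto the surface (surplus flow).
    h_surface: water-surface elevation on the 2D grid cell (m)
    h_node:    hydraulic head in the 1D network node (m)
    rim_elev:  inlet rim elevation (m)
    """
    if h_node > h_surface and h_node > rim_elev:
        # Pressurized node: surcharge through the inlet, orifice-type equation.
        head = h_node - max(h_surface, rim_elev)
        return -c_orif * area * math.sqrt(2.0 * G * head)
    if h_surface > rim_elev and h_surface > h_node:
        depth = h_surface - rim_elev
        if h_node < rim_elev:
            # Free inflow over the inlet rim: weir-type equation.
            return c_weir * perimeter * depth * math.sqrt(2.0 * G * depth)
        # Submerged inlet: orifice-type equation on the head difference.
        return c_orif * area * math.sqrt(2.0 * G * (h_surface - h_node))
    return 0.0

# Example: 0.3 m of ponding above a grate whose node head sits below the rim,
# so surface water re-enters the network.
print(inlet_exchange(h_surface=10.3, h_node=9.5, rim_elev=10.0,
                     perimeter=1.6, area=0.16))
```

Evaluating this exchange every shared time step, as a source term for the 2D cell and a lateral inflow for the 1D node, is what lets surcharge flooding later drain back and avoids the overestimation of the one-way approach.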
The HC-SURF model analyzes urban flooding by sharing the results of the stormwater network flow and surface water flow simulations. Inflows into the stormwater network can be calculated from the lumped rainfall-runoff simulation of the SWMM model and then sequentially linked to the surface water flow simulation; alternatively, a distributed rainfall model can drive the surface water simulation, from which the inflow to the stormwater network is calculated. By comparing the surface water results, methods for calculating surplus flow and re-entry flow are reflected in the model.

2. Effective Representation of Building Shapes in Urban Areas

To predict floods based on rainfall runoff and surface water behavior in urban areas, it is crucial to account for the impact of structures such as buildings and roads, which significantly affect flow behavior; unlike rivers, urban areas introduce this additional complexity. Methods for incorporating the effects of buildings into urban flood analysis include:
(a) Excluding building areas from the grid: building areas are excluded from the calculation domain;
(b) Reflecting building proportions in the grid: the proportion of each grid cell occupied by buildings is calculated and incorporated into the governing equations to define the effective area;
(c) Applying modified roughness coefficients in building areas: higher roughness coefficients are applied to grid cells containing buildings to control flow speed and direction.

To develop an efficient numerical analysis model for urban flood forecasting, a study examined the differences in results among these methods of incorporating buildings. The model was applied to an area near Sindaebang Station along the Dorimcheon River, which experienced a large-scale urban flood in 2022, and the results were compared as shown in Figure 2. Figure 2 (a) shows the simulated inundation range when building areas are excluded from the numerical grid. While road shapes are reflected with high accuracy, areas that were not accurately captured during grid generation are excluded from the numerical simulation, so areas where actual inundation could occur are omitted from the analysis. Figure 2 (b) shows the results when the proportion occupied by buildings is incorporated into the numerical grid. Regardless of the shape of the grid representing the computational domain, urban flooding through roads is reasonably simulated, and the phenomenon of inundation spreading along roads is captured accurately; in grid cells dominated by buildings the inundation depth is not computed, but the simulation reasonably models urban flooding in cells with ample space for stormwater flow. Figure 2 (c) shows the results using modified roughness coefficients. Although the roughness coefficients in building areas were increased to significantly raise the resistance to stormwater flow, stormwater still flowed through those areas; as a result, of the three simulation conditions, this one produced the largest inundation area.
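Of the three treatments compared above, method (b) can be sketched compactly: a building-area fraction scales the storage term of each cell in the continuity update. The following minimal sketch, with illustrative names and a deliberately simplified mass balance rather than the HC-SURF scheme itself, shows the effect.

```python
import numpy as np

# Hedged sketch of method (b): a building-area fraction ("porosity") scales
# the storage in each cell of an explicit finite-volume continuity update.

def continuity_step(h, q_in, q_out, build_frac, cell_area, dt):
    """Update water depth h (m) over one time step dt (s).

    build_frac: fraction of each cell occupied by buildings (0..1);
                only the remaining plan area stores and conveys water.
    q_in, q_out: inflow/outflow discharges per cell (m^3/s).
    """
    eff_area = cell_area * (1.0 - build_frac)       # effective storage area
    dh = (q_in - q_out) * dt / np.maximum(eff_area, 1e-6)
    return np.maximum(h + dh, 0.0)                  # depth cannot go negative

# Example: the same net inflow raises the depth faster in a cell 60% occupied
# by buildings than in a fully open cell.
h = np.zeros(2)
h = continuity_step(h, q_in=np.array([1.0, 1.0]), q_out=np.zeros(2),
                    build_frac=np.array([0.0, 0.6]), cell_area=100.0, dt=10.0)
print(h)  # [0.1, 0.25]
```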
3. Achieving Real-Time Forecasting Capability through the Application of Parallel Processing Techniques

For urban flood forecasting, the simulation execution time must be minimized to improve operational efficiency. The HC-SURF model enables real-time urban flood forecasting by enhancing computational efficiency through parallel processing techniques. Table 1 shows the computational time required under each condition when parallel processing is applied. When the numerical grid resolves building shapes explicitly, the allowable time step is governed by the very small calculation cells, so even with parallel processing the computation takes longer than with the other methods despite the speed-up. In numerical simulations that incorporate the effects of buildings on uniform calculation grids, not only is the speed-up ratio higher, but the overall computational time is also significantly shorter.

Advancing Operationalization Through Pilot Testing

After the large-scale urban flood in the Dorimcheon River basin in 2022, Korea's Ministry of Environment enacted the "Act on Flood Damage Prevention in Urban River Basins," establishing an institutional foundation for expanding the flood forecasting areas. Following this, a technology was developed to provide scenario-based urban flood maps for the Dorimcheon River to relevant agencies. However, significant technical limitations remain when it comes to providing accurate urban flood information that reflects real-time hydrological data.

From its early development stages, the HC-SURF model was designed to support urban flood forecasting for the Ministry of Environment (ME), and it achieved its baseline performance through three years of research and development from 2022 to 2024. Starting in 2025, pilot testing will be conducted on the operational server of the ME Han River Flood Control Office (HRFCO) as part of the "Dam-River Digital Twin" project. In March, a real-time linkage with the ME hydrological survey database will be established to enhance the model for operational support. During the remaining two years of the research and development period, the HC-SURF model will be further advanced to reflect feedback from the ME as the end user, and the framework is expected to be established for implementing urban flood forecasting with Korea's own technology.
Department of Hydro Science and Engineering Research
Date: 2025-06-23
Research Directions for Smart Road Infrastructure for Future Mobility
Ryu Seung-ki, Senior Research Fellow, Department of Highway & Transportation Research, KICT

Prologue

South Korea's road infrastructure currently suffers from traffic congestion, environmental pollution, and various traffic accidents, all of which result from an imbalance between regional traffic demand and road supply. In addition, the roads are aging, and climate change is accelerating the rate of road deterioration. With the socio-economic damage caused by these factors increasing, "smart" roads are clearly needed to extend the service life of roads and maintain normal operation, and innovative research and development to realize them must be pursued.

Smart roads are envisioned as future roads that can assess their current state based on past and present data and predict future conditions so as to proactively and quickly restore road abnormalities. Roads, being public goods, are spaces where risks will always exist, and substantial funding must therefore be allocated for their maintenance and recovery to ensure their permanence and resilience. We must continue the effort to supply smart road technologies by researching and developing optimal solutions.

Smart roads can be realized by improving mobility for various modes of transportation, such as automobiles, railways, Urban Air Mobility (UAM), subways, and buses, enhancing connectivity between these modes, and introducing innovative transportation systems. The core technologies for realizing smart roads combine traditional road and transportation technologies with ICT convergence technologies such as AI, IoT, big data analytics, and V2X; recent advances in AI and its wide-ranging applicability make it an essential strategic technology for smart roads. Smart roads must focus on the development of core technologies for future mobility across the entire process, from planning and construction to maintenance. In the planning and construction stages, calculating, analyzing, and predicting materials, time, and costs can help manage resources efficiently while also saving costs; in the construction and maintenance stages, smart road technologies will play a critical role in quality and safety management. Future road infrastructure R&D policies must introduce more innovative research programs and government policies to address current traffic issues and respond to upcoming changes in future mobility. The Korea Institute of Civil Engineering and Building Technology (KICT) is engaged in research and development on core technologies for future transportation infrastructure, with the Department of Highway and Transportation Research playing a central role.

Research Directions and Achievements in Smart Road Infrastructure

From 2021 to 2024, the Department of Highway and Transportation Research focused on developing core technologies for smart road infrastructure for future mobility. The key research areas were set as future mobility, sustainable and eco-friendly roads, international and regional cooperation, and the construction of future road demonstration infrastructure. Notably, the future mobility area has focused on developing autonomous cooperative driving infrastructure and service technologies, with research and development centered on road infrastructure.
In 2021, the first year of this effort for the Department of Highway and Transportation Research, research planning focused on road facility safety, digital transition services, traffic signal systems, active road icing accident reduction, wireless charging energy road infrastructure, smart mobility MaaS (Mobility as a Service), and AR-based vehicle location recognition technologies. In 2022, representative and seed projects were actively conducted: research and development focused on technologies for safe future roads, road infrastructure for autonomous driving safety, and driver assistance technologies based on vehicle video recorders, while tasks were planned for intelligent road safety management systems and public transportation infrastructure service diagnosis technologies in response to policy demands. In 2023, the department continued to support representative projects and initiated planning for tasks such as digital twin services for road risk management, smart parking platforms, and vehicle tire data-based road information services. In 2024, new core projects were launched, including AI Safe Road technology for next-generation neighborhood environments and plastic road infrastructure technology for autonomous driving, along with seed projects such as video-based road risk element detection technology, parking lot digital transition technology, and road traffic noise model design.

While much research and development has targeted high-spec trunk roads such as highways and national roads, research on local roads, narrow roads, and mixed-use pedestrian and vehicle roads has been relatively scarce. Moreover, the ultimate goal of future mobility is a safe last-mile autonomous driving service. Smart road infrastructure for future mobility can be realized through a variety of core technologies, and we are proactively researching these to ensure their integration into follow-up projects, practical application, and responsiveness to policy demands. In particular, increasing the utilization of artificial intelligence (AI) is crucial; to achieve this, we are developing various AI applications and core technologies that integrate AI into existing road systems.

Summarizing the Purpose-Specific R&R projects carried out from 2021 to 2024, these projects played a pivotal role in paving the way for large follow-up projects and yielded excellent results: 12 subprojects under the Purpose-Specific R&R initiative were connected to 17 follow-up projects, ensuring continuous research and development.

Achievements in the Development of AI-Based Road Infrastructure Application Services for Future Mobility

Here we introduce the AI-based mobility service technologies for smart road infrastructure secured through the Purpose-Specific R&R projects. First, we present a solution to accidents related to road potholes, which have repeatedly been identified as high-risk objects on the road. Autonomous vehicles continue to face challenges in detecting high-risk objects such as potholes using visual recognition, and solutions for such difficult objects require highly reliable detection performance.
We have developed the first domestic AI-based road pothole detection solution using black box video footage, and through ongoing research and development to enhance object detection performance under limited perception conditions, we are striving to create the world's best-performing solution. If an AI application solution achieving a high level of performance in detecting high-risk objects under perception limitations is secured, smart road infrastructure will be able to automatically identify damage such as cracks, subsidence, and potholes on the road. This will lead to efficient maintenance and transform the infrastructure into a future mobility system that collaborates with autonomous vehicles.

Next, residential-area roads involve various dynamic objects moving simultaneously, such as pedestrians, vehicles, and motorcycles. These situations pose a challenge for fully autonomous driving services, so a solution is needed that enables smart road infrastructure to recognize these dynamic objects and collaborate with autonomous vehicles. We have proactively developed a multi-object classification solution for narrow roads with a mix of dynamic objects, which allows smart road infrastructure to autonomously detect the presence of dynamic objects, their movement trajectories, and parked objects, while also classifying them.

Narrow roads, such as alleys, are often obstructed by illegally parked vehicles or other obstacles, making it difficult to assess potential blockages or inaccessible routes for emergency vehicles like fire trucks and ambulances; this can lead to a failure to achieve the critical "golden time" needed to reach the destination, increasing the risk of damage. We have developed a smart road infrastructure solution that predicts the effective lane width and accessible routes on narrow roads to secure the golden time. This solution can be applied to video-based traffic accident risk prediction and information provision services. It detects the boundaries of various objects on narrow roads and excludes the objects occupying the road surface to calculate the actual effective road area; by calculating the effective lane width frame by frame, it can provide real-time information to emergency vehicles and administrative authorities.
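As an illustration of the effective-width computation described above, the following is a minimal sketch assuming per-frame binary masks for the drivable road surface and for occupying objects in a rectified top-down view, with a known ground-sampling distance. The mask sources and names are hypothetical and do not describe the deployed solution.

```python
import numpy as np

def effective_widths(road_mask: np.ndarray, obstacle_mask: np.ndarray,
                     meters_per_px: float) -> np.ndarray:
    """Per-row effective width (m) of a narrow road in a top-down view.

    road_mask, obstacle_mask: boolean arrays of shape (rows, cols).
    The effective width in a row is the longest contiguous run of road
    pixels not occupied by any obstacle.
    """
    free = road_mask & ~obstacle_mask
    widths = np.zeros(free.shape[0])
    for i, row in enumerate(free):
        run = best = 0
        for px in row:                  # longest run of free road pixels
            run = run + 1 if px else 0
            best = max(best, run)
        widths[i] = best * meters_per_px
    return widths

# Example: a 4 m wide road (40 px at 0.1 m/px) with a parked vehicle
# narrowing one cross-section.
road = np.ones((3, 40), dtype=bool)
obst = np.zeros((3, 40), dtype=bool)
obst[1, :18] = True                     # vehicle occupies 1.8 m of row 1
w = effective_widths(road, obst, meters_per_px=0.1)
print(w, "min passable width:", w.min())  # narrowest section -> 2.2 m
```

The minimum over all cross-sections is the quantity of interest for routing: if it falls below the width of a fire truck, the route can be flagged as inaccessible before dispatch.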
For fully autonomous driving, it is also essential to secure traffic signal detection and classification technology based on vehicle-mounted cameras. Autonomous vehicles must detect traffic signals ahead and recognize their signal states using visual sensors, and to ensure the highest level of autonomous driving safety, fully autonomous vehicles need to use image sensor data to accurately detect traffic signal objects and classify them by type. However, significant challenges remain when the signal object is small relative to the background or when the contrast with the background is low. We are developing a solution to address these difficult-to-detect cases and improve performance in challenging situations.

Research and Development Policies on AI in the US and China

The United States recognizes artificial intelligence (AI) as a strategic technology directly linked to national security and is promoting related policies. Under the Biden administration, executive orders were issued at the federal level to develop and spread trustworthy AI, while strengthening international cooperation in response to the emerging potential risks of AI. The policy of ensuring the safety and reliability of AI as a national security technology is expected to continue during the Trump 2.0 era under the National AI Initiative Act (2020). Korea must prepare an AI strategy for the Trump 2.0 era: US initiatives in this period can be expected to emphasize the safety and reliability of AI technology while maintaining US global leadership through reinforced export controls and technology management related to national security. In response, Korea needs a balanced policy that proactively addresses global regulatory environments while aligning with the US-led competitive framework for AI technology development and industrial promotion. Strengthening technological alliances with the US and ensuring the ethical use and reliability of AI technology while harmonizing with international regulations is essential. In addition, domestic policies such as the "BASIC ACT ON AI" must be designed to align with global standards, supporting the international expansion of domestic companies and minimizing the impact of US export controls and strengthened technology management. At the same time, to secure independent competitiveness in AI technology, it is crucial to increase national investment in research and development (R&D), strengthen global cooperation networks, and foster a comprehensive strategy to support the domestic startup and corporate ecosystem.

According to the draft 2024 budget report submitted to the Annual Session of the National People's Congress, China has allocated 371 billion yuan for scientific and technological R&D, with 98 billion yuan (approximately KRW 18 trillion) specifically earmarked for basic science research in fields such as physics and chemistry. The Chinese government is emphasizing a "scientific and technological revolution" and is focusing on technologies such as the 72-qubit superconducting quantum computer, hydrogen energy, commercial aerospace technology, robotics, and artificial intelligence (AI). Amid escalating tensions as the US restricts China's access to key technologies like semiconductors, AI, and quantum computing, China appears determined to avoid falling behind in the global power struggle by expanding its investment in science and technology. Furthermore, China regards "high-quality development" as a prerequisite for stable growth, viewing independent and innovative science and technology as a driving force of national growth. Recently, China's AI company DeepSeek gained attention by achieving results comparable to OpenAI's models. As scientific and technological innovation is the foundation for the growth of major powers, we must review and adjust our research and development directions accordingly.

Epilogue

The research and development of smart road infrastructure for future mobility should focus on integrating autonomous vehicles with smart roads, advanced road safety services, and AI-powered smart roads. Core technologies for smart road infrastructure require policy enhancements such as data sharing, standardization, and the expansion of the data application industry to improve efficiency and safety.
The Department of Highway and Transportation Research should concentrate on developing core technologies for smart road infrastructure that collaborates with future mobility. This requires building data-driven analytical capabilities specific to road traffic, which in turn will help develop internal capabilities for utilizing road-traffic-infrastructure-based AI technologies, enabling long-term growth and adaptation to government policy changes. Furthermore, to strengthen global competitiveness, it is essential to engage in international cooperation projects and collaborative research initiatives, and to expand proof-of-concept research on societal issues using real-scale smart road infrastructure test beds. In the future, smart road infrastructure will enhance mobility, accessibility, convenience, and safety through cooperative operation between smart roads and autonomous vehicles. To respond effectively to the emergence of future mobility, it is crucial to continuously develop and prepare core technologies for smart road infrastructure.

References

Ryu, Seung-ki et al. (2024). Development of Core Technologies for Future Smart Transportation Infrastructure. Korea Institute of Civil Engineering and Building Technology (KICT).
Lee, Hae-soo; Yoo, Jae-hong (2025). Current Status and Implications of U.S. Artificial Intelligence (AI) Safety and Reliability Policies. Software Policy & Research Institute (SPRi).
Department of Highway & Transportation Research
Date: 2025-06-23
Development of Sustainable Construction and Environmental Infrastructure Technology Using Renewable Biomass
Ahn Chang-hyuk, Senior Researcher, Department of Environmental Research, KICT

Prologue

Humanity's rapid resource utilization and infrastructure development since the start of the 20th century have caused a dramatic increase in materials consumption worldwide. The mass of infrastructure elements (concrete, asphalt, metals, etc.) and facilities that constitute anthropogenic, or human-made, products such as buildings, roads, and machinery has increased rapidly. While this has improved user convenience, it has raised potential problems for the sustainability of the construction environment.

Biomass differs in scope and meaning depending on perspective. From an ecological standpoint, it mainly refers to the total existing amount of biological organisms, including plants that synthesize organic matter using solar energy and the animals and microorganisms that feed on them. From a more general perspective that includes industry, however, it has a broader meaning regardless of the life or death status or form of organisms, encompassing energy and renewable resource utilization (e.g., organic waste, sewage sludge, biogas, charcoal).

According to recent research published in Nature (Elhacham et al., 2020), the mass of anthropogenic products and their waste reached the level of the dry mass of ecological biomass on Earth between 2013 and 2020, and is predicted to surpass the wet-mass level between 2031 and 2037 (Figure 1). Set against the "Warming stripes" (Ed Hawkins, 2018), which depict the annual average global temperature (1850-2018) reported to the World Meteorological Organization (WMO), this suggests that the rapid increase in global material consumption may have a significant impact on global climate change (Figure 2).

Sustainability Strategy for the Domestic Construction Environment

According to the 2025 Ministry of Environment (ME) work plan, responding to the climate crisis is the top priority issue affecting public safety and the economy. More specifically, addressing abnormal climate patterns, managing greenhouse gases, and securing international competitiveness in global carbon markets are identified as key sub-initiatives. With the European Union (EU) advancing carbon trading regulations, international carbon regulations are expected to keep strengthening, and the growth of the global green market (7.2 trillion USD as of Q1 2024) points to an inevitable expansion of technology demands related to ESG disclosures, resource security, and the circular economy. This signals a paradigm shift in both domestic and international construction environments. Additionally, the implementation of the "Act on Promotion of Transition to Circular Economy and Society" and the regulatory sandbox is expected to provide regulatory exemptions that strengthen the foundation for the circular use of waste resources in the domestic biomass sector. These efforts aim to achieve the 2035 Nationally Determined Contribution (NDC) targets for greenhouse gas reduction under the United Nations Framework Convention on Climate Change (UNFCCC), with detailed steps including preparing the conditions for local carbon-neutrality implementation through future legislation.
In this context, academic studies related to the construction environment increasingly focus on quantitatively evaluating and monitoring material flows, including resource use and socio-economic metabolism, both domestically and globally. By comparing biomass totals and utilizing them, it becomes possible to predict the mass, composition, inputs, and outputs of material stocks and to plan overall resource management. Ultimately, improving recycled biomass or securing new uses for it through scientific and technological processes will offer new perspectives on various environmental problems that were previously unsolvable, contributing to sustainable development.

Environmental Issues in Construction Infrastructure and Solutions for Utilizing Recycled Biomass

The increasing presence of, and exposure to, potentially harmful anthropogenic contaminants due to urbanization is an important environmental issue that grows in proportion to construction infrastructure development, and it represents a persistent global problem requiring innovative solutions (Akhtar et al., 2021). Anthropogenic contaminants typically addressed in urbanized areas include heavy metals, hydrophobic organic contaminants, dyes, pesticides, and microorganisms and viruses. Over the past several decades, various environmental technologies have been developed to remove harmful contaminants originating from urban areas. However, these have primarily consisted of limited technological elements for capturing, transporting, and removing contaminants in ex-situ environmental facilities, and their linear management of materials and component technologies faces recognized limitations, including process complexity, distributed management, customized field application, and cost-effectiveness issues.

Conversely, strategies that improve or modify renewable biomass and reconfigure it into in-situ purification systems may effectively induce a paradigm shift toward sustainable manufacturing practices. Strategies that integrate renewable biomass into environmental management have the advantage of simultaneously achieving resource circulation and environmental pollution management goals by implementing green infrastructure, using renewable energy for energy efficiency and waste treatment, and promoting circular economy principles. As one example, modification of specific chemical structures (e.g., humic-like substances) on material surfaces through cooperation between renewable biomass and microorganisms can remove anthropogenic contaminants through various physicochemical mechanisms (adsorption, precipitation, ion exchange, etc.). If these possibilities are realized, we can not only effectively remove harmful contaminants from various environmental media with a new perspective on renewable biomass that was previously discarded, but also provide economic alternatives to landfill, incineration, or purification systems for waste treatment. Future research therefore needs to consider strategies that appropriately utilize renewable biomass and maximize its physicochemical properties for environmental purification, so as to effectively limit the behavior of contaminants arising in cities.
Future Directions for Renewable Biomass Utilization Technologies

The application and expansion of renewable biomass utilization technologies require not only the development of alternative component technologies for existing fields but also comprehensive, systematic approaches. It is necessary to consider the behavioral characteristics and pathways of contaminants occurring in natural and human environments, as well as the risks and environmental impacts of different contaminant types on receptors. To respond to expanding urbanization, nature-based solutions and ecological engineering approaches need to be considered, and industrial-ecological life cycle assessment techniques that incorporate circular economy elements and the virtuous cycle of material circulation should be reviewed. In addition, applied engineering attempts through the convergence of traditional science and engineering are foundational to related research. As explained earlier, since the scope of biomass is very broad, methodologies and products that process and modify various organic and inorganic materials in hybrid forms can be expected to achieve sustainable commercialization in the construction and environmental sectors. We anticipate future-oriented technological development that actively utilizes these approaches to contribute to the construction and environmental sectors.

References

Akhtar, N., Ishak, M.I.S., Bhawani, S.A., Umar, K. (2021). Various natural and anthropogenic factors responsible for water quality degradation: a review. Water, 13, 2660, pp. 1-35.
Ed Hawkins (2018). https://en.wikipedia.org/wiki/Warming_stripes
Elhacham, E., Ben-Uri, L., Grozovski, J., Bar-On, Y.M., Milo, R. (2020). Global human-made mass exceeds all living biomass. Nature, 588, pp. 442-444.
Department of Environmental Research
Date: 2025-06-23