In Korea, occupational accidents are on the rise, particularly in the construction sector. According to the ‘Occupational Safety Accident Status’ report by Korea’s Ministry of Employment and Labor, the construction industry accounted for the highest number of accidents and fatalities among all sectors in 2021. To address this trend, the Korea Occupational Safety and Health Agency has been providing virtual reality (VR)-based construction safety content to daily workers as part of its educational training initiatives.
Nevertheless, current VR-based training methods face two limitations. First, VR-based construction safety training is essentially a passive exercise: learners follow one-way instructions that do not adapt to their judgments and decisions. Second, there is no objective evaluation process during VR-based safety training. To address these challenges, researchers have introduced immersive VR-based construction safety content that promotes active worker engagement and have administered post-training written tests. However, such post-written tests lack immediacy and objectivity. Furthermore, among the individual characteristics that can affect learning performance (personal, academic, social, and cognitive), cognitive characteristics may change during VR-based safety training itself, which a test administered only after training cannot capture.
To address this, a team of researchers led by Associate Professor Choongwan Koo from the Division of Architecture & Urban Design at Incheon National University, Korea, has now proposed a machine learning approach for forecasting personal learning performance in VR-based construction safety training from real-time biometric responses. Their paper was made available online on October 7, 2023, and will be published in Volume 156 of the journal Automation in Construction in December 2023.
“While traditional methods of evaluating learning outcomes that use post-written tests may lack objectivity, real-time biometric responses, collected from eye-tracking and electroencephalogram (EEG) sensors, can be used to promptly and objectively evaluate personal learning performances during VR-based safety training,” explains Dr. Koo.
The study involved 30 construction workers undergoing VR-based construction safety training. Real-time biometric responses were collected during the training from eye-tracking and EEG sensors, which monitor eye movements and brain activity, respectively, to assess the participants’ psychological responses. Combining these data with pre-training surveys and post-training written tests, the researchers developed machine-learning-based forecasting models to evaluate each participant’s overall personal learning performance during VR-based safety training.
The team developed two models: a full forecast model (FM) that uses both demographic factors and biometric responses as independent variables, and a simplified forecast model (SM) that relies solely on the identified principal features as independent variables to reduce complexity. While the FM predicted personal learning performance more accurately than traditional models, it also showed a high degree of overfitting. In contrast, the SM, with its smaller number of variables, achieved higher prediction accuracy than the FM and significantly reduced overfitting. The team therefore concluded that the SM was best suited for practical use.
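To illustrate the modeling idea, the sketch below contrasts a full-feature forecast model with a reduced-feature one on synthetic data. This is not the authors’ code: the scikit-learn random forest, the feature names (eye-tracking and EEG summaries), and the importance-based feature selection are all illustrative assumptions, since the paper’s exact algorithms and feature set are not detailed here.

```python
# Minimal sketch (not the authors' code): compare a full forecast model (FM),
# using all demographic + biometric features, with a simplified forecast model
# (SM) restricted to a few principal features. All feature names and data are
# hypothetical; the study's actual variables and algorithm may differ.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n = 30  # the study involved 30 construction workers

# Hypothetical feature table: demographics plus eye-tracking / EEG summaries.
X = pd.DataFrame({
    "age": rng.integers(20, 60, n),
    "experience_years": rng.integers(0, 30, n),
    "fixation_duration": rng.normal(0.3, 0.05, n),  # eye-tracking summary
    "pupil_diameter": rng.normal(3.5, 0.4, n),      # eye-tracking summary
    "eeg_alpha_power": rng.normal(1.0, 0.2, n),     # EEG band power
    "eeg_beta_power": rng.normal(0.8, 0.2, n),      # EEG band power
})
# Label: post-training written-test score used as the learning-performance target.
y = 50 + 30 * X["fixation_duration"] + 10 * X["eeg_beta_power"] + rng.normal(0, 2, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Full forecast model (FM): all features.
fm = RandomForestRegressor(random_state=0).fit(X_tr, y_tr)

# Simplified forecast model (SM): only the most important features
# (selected here by FM feature importance; the paper's method may differ).
top = X.columns[np.argsort(fm.feature_importances_)[-2:]]
sm = RandomForestRegressor(random_state=0).fit(X_tr[top], y_tr)

for name, model, cols in [("FM", fm, X.columns), ("SM", sm, top)]:
    test_r2 = r2_score(y_te, model.predict(X_te[cols]))
    gap = r2_score(y_tr, model.predict(X_tr[cols])) - test_r2
    print(f"{name}: test R2={test_r2:.2f}, train-test gap={gap:.2f}")
```

In a comparison of this kind, a smaller gap between training and test accuracy for the reduced-feature model would mirror the lower overfitting the authors report for the SM.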
Explaining these results, Dr. Koo emphasizes, “This approach can have a significant impact on improving personal learning performance during VR-based construction safety training, preventing safety incidents, and fostering a safe working environment.” The team also highlights the need for future research to consider various accident types and hazard factors in VR-based safety training.
In conclusion, this study marks a significant stride in enhancing personalized safety training in construction environments and in improving the evaluation of learning performance.
This article was originally published on ScienceDaily.