Smart Vehicle Surveillance in Foggy Conditions Using Enhanced Deep Learning Algorithms
Abstract
Foggy conditions pose significant challenges for vehicles: reduced visibility makes it difficult for drivers to see other vehicles, pedestrians, and road signs, increasing the risk of accidents. Fog also distorts depth perception, making it harder to judge the distances and speeds of other vehicles and raising the likelihood of collisions. Deep learning offers a remedy, but although deep convolutional neural networks excel at removing fog, they must also cope with images captured under real meteorological conditions containing patches of cloud cover or fog. Real-world blur is harder to categorize, and degraded map or image quality produces outputs with inconsistent colours or lost content. Moreover, stacking additional convolutional blocks increases model complexity. Deep learning methods for fog image processing also suffer from overfitting and from the difficulty of gathering sufficient training data, which limits model capability and hampers practical deployment in real-world situations.
This work proposes a combined method for removing fog from surveillance images using WaveletFormerNet, a Transformer-based wavelet network designed for real-world non-homogeneous dense fog scenarios; the Transformer incorporates the wavelet transform into its feature extraction. Multi-object detection is then performed with Enhanced YOLOv2 and LuNet algorithms. Combined, these methods better handle the intricacies of hazy surroundings, improving both visibility and object-detection precision. The effectiveness of the proposed technique is demonstrated through rigorous testing, highlighting its potential to enhance the operation of monitoring systems in difficult weather conditions.
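To make the described pipeline concrete, the minimal Python sketch below pairs a single-level Haar wavelet decomposition with a small Transformer encoder as a stand-in for the wavelet-Transformer dehazing stage, and stubs out the detection step. It is not the paper's implementation: the class and function names (WaveletTransformerBlock, detect_objects) are illustrative assumptions, and WaveletFormerNet's actual architecture, training procedure, and the Enhanced YOLOv2/LuNet detector weights are not reproduced here.

# Minimal sketch (not the paper's implementation): a wavelet-plus-Transformer
# dehazing front end followed by a detector stub. All names are illustrative.
import numpy as np
import pywt
import torch
import torch.nn as nn


class WaveletTransformerBlock(nn.Module):
    """Toy stand-in for a WaveletFormerNet stage: a Haar DWT splits the image
    into low/high-frequency sub-bands, a small Transformer encoder refines the
    low-frequency band, and an inverse DWT reassembles the image."""

    def __init__(self, d_model=64, nhead=4):
        super().__init__()
        self.proj_in = nn.Linear(1, d_model)   # embed each LL coefficient
        self.encoder = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=nhead, batch_first=True)
        self.proj_out = nn.Linear(d_model, 1)

    def forward(self, gray_img: np.ndarray) -> np.ndarray:
        # 1) Wavelet analysis: LL holds coarse structure, (LH, HL, HH) hold detail.
        ll, (lh, hl, hh) = pywt.dwt2(gray_img, "haar")
        h, w = ll.shape
        tokens = torch.tensor(ll, dtype=torch.float32).reshape(1, h * w, 1)
        # 2) Transformer refinement of the low-frequency band (stands in for the
        #    attention-based fog removal described in the abstract).
        refined = self.proj_out(self.encoder(self.proj_in(tokens)))
        ll_refined = refined.reshape(h, w).detach().numpy()
        # 3) Wavelet synthesis back to image space.
        return pywt.idwt2((ll_refined, (lh, hl, hh)), "haar")


def detect_objects(dehazed_img: np.ndarray):
    """Placeholder for the Enhanced YOLOv2 / LuNet detection stage; a real
    system would run the dehazed frame through a trained detector here."""
    return []  # list of (class, confidence, bounding box) tuples


if __name__ == "__main__":
    foggy_frame = np.random.rand(64, 64)              # stand-in surveillance frame
    dehazed = WaveletTransformerBlock()(foggy_frame)  # fog-removal front end
    detections = detect_objects(dehazed)              # downstream detection
    print(dehazed.shape, len(detections))

The sketch only illustrates the division of labour between a wavelet-domain dehazing front end and a downstream detector; in the proposed system the dehazed frames would be passed to the trained Enhanced YOLOv2/LuNet detector rather than the placeholder shown here.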