Show simple item record

dc.contributor.author	Jin, Y
dc.date.accessioned	2025-04-15T08:25:58Z
dc.date.issued	2025-04-22
dc.date.updated	2025-04-15T02:53:42Z
dc.description.abstract	Given the labour-intensive and subjective nature of manual sewer assessments using closed-circuit television (CCTV), computer vision techniques offer a practical alternative to human effort in sewer inspections. However, existing advances have focused primarily on classifying CCTV frames and on detecting or segmenting defects within those frames. This thesis presents an integrated framework for automated CCTV-based sewer condition assessment, consisting of a deep learning (DL)-based method for defect identification and a combination of image processing techniques for subsequent severity quantification based on the identification output. The identified defects are categorized into operational and structural defects, with several defect classes selected as representative examples in the experiments, and dedicated methods for quantifying their severity are designed separately in this study. Multiple real CCTV images containing the targeted defects are then used to demonstrate the performance of the proposed methods.

Regarding defect identification, given the advances in deep neural network (DNN) architectures and the recognized significance of data in artificial intelligence, this research emphasizes a data-centric analysis of DL-based sewer defect detection, using a DNN-based object detector, YOLOv4, to detect settled deposits as an example. The experiments demonstrate that increasing the training dataset size improves the model's performance up to a certain threshold, and the findings underscore the importance of consistent dataset annotation. While image augmentation and transfer learning can enhance average precision, they have varied effects on other evaluation metrics.

A new approach for quantifying the severity of operational defects by estimating the loss of pipe cross-sectional area was developed in this thesis, with settled deposits selected as the representative operational defect. The proposed assessment approach comprises a joint fitting module and a defect segmentation module, whose outputs are integrated and transformed to quantify the loss of cross-sectional area. The joint fitting module detects a valid joint in the footage, applies a combination of image processing techniques, and uses an ellipse fitting algorithm. Defect segmentation is achieved through K-means clustering, complemented by a customized series of image processing operations. The final severity quantification takes into account the distance between the defect and the joint. Each module is evaluated on several real-world cases. The results show that the joint fitting module generates ellipses that align with the outlines of joints in most test cases, with minor deviations mainly due to joint displacement. Together with the defect segmentation module, which identifies the regions of settled deposits, the whole framework produces estimated cross-sectional area loss values that are consistent with visual assessments. The findings also highlight the necessity of converting the fitted ellipse area to the pipe's cross-sectional area at the defect location for accurate area loss calculations.

This study also contributes to the automated assessment of structural defects, which is divided into two branches. The first focuses on joint-related defects, specifically the validation of detected joint displacement, for which a customized approach is built by integrating image processing techniques with morphological analysis. The proposed method is demonstrated on various real CCTV images and successfully estimates joint displacement distances that align with human estimations. The second branch concentrates on classifying cracks and fractures. The developed method stacks various image processing operations to enhance and extract morphological features, followed by a classification approach that integrates polar transformation and principal component analysis. It is also tested on several real cases under various conditions, and the results show that it can extract the main morphological features of cracks and fractures and classify them into groups corresponding to different condition deduct values.

Lastly, this thesis offers recommendations for future work. The assessment methods could be further enhanced by incorporating more complex scenarios during their development, and significant attention should also be directed towards the defect identification model, as it serves as the foundation of the entire framework. Moreover, additional modules should be developed to achieve full automation in practical applications. These may include, but are not limited to, evaluating varying camera angles, counting the number of defects, assessing defect types not covered in this study, and automatically generating comprehensive assessment reports.	en_GB
dc.identifier.uri	http://hdl.handle.net/10871/140801
dc.language.iso	en	en_GB
dc.publisher	University of Exeter	en_GB
dc.rights.embargoreason	This thesis is embargoed until 22/Oct/2026 as the author plans to publish their research.	en_GB
dc.subject	Sewer inspection	en_GB
dc.subject	Computer vision	en_GB
dc.subject	Sewer condition assessment	en_GB
dc.title	Computer Vision-Based Automated Sewer Condition Assessment Using CCTV Images	en_GB
dc.type	Thesis or dissertation	en_GB
dc.date.available	2025-04-15T08:25:58Z
dc.contributor.advisor	Fu, Guangtao
dc.contributor.advisor	Everson, Richard
dc.contributor.advisor	Butler, David
dc.publisher.department	Engineering
dc.rights.uri	http://www.rioxx.net/licenses/all-rights-reserved	en_GB
dc.type.degreetitle	Doctor of Philosophy
dc.type.qualificationlevel	Doctoral
dc.type.qualificationname	Doctoral Thesis
rioxxterms.version	NA	en_GB
rioxxterms.licenseref.startdate	2025-04-22
rioxxterms.type	Thesis	en_GB

