Development of low-overhead soft error mitigation technique for safety critical neural networks applications

Khalid Adam, Ismail Hammad (2021) Development of low-overhead soft error mitigation technique for safety critical neural networks applications. PhD thesis, Universiti Malaysia Pahang.

Pdf: Development of low-overhead soft error mitigation technique for safety critical neural.ir.pdf - Accepted Version (367kB)

Abstract

Deep Neural Networks (DNNs) have been widely applied in healthcare applications. DNN-based healthcare applications are safety-critical systems that require high-reliability implementation due to the high risk of human death or injury in case of malfunction. Several DNN accelerators are used to execute these DNN models, and GPUs are currently the most prominent and dominant among them. However, GPUs are prone to soft errors that dramatically affect GPU behaviour; such errors may corrupt data values or logic operations, resulting in Silent Data Corruption (SDC). SDC that occurs in the GPU's hardware components propagates from the physical level to the application level, causing misclassification in DNN models and leading to disastrous consequences. The Food and Drug Administration (FDA) reported that 1078 adverse events (10.1%) involved unintended errors (i.e., soft errors), including 52 injuries and two deaths. Several traditional techniques have been proposed to protect electronic devices from soft errors by replicating the DNN models; however, these techniques incur significant area, performance, and energy overheads, making them difficult to deploy in healthcare systems with strict deadlines. To address this issue, this study developed a Selective Mitigation Technique based on standard Triple Modular Redundancy (S-MTTM-R) that determines the model's vulnerable parts by distinguishing Malfunction and Light-Malfunction errors. A comprehensive vulnerability analysis was performed with the SASSIFI fault injector on the AlexNet and DenseNet201 CNN models, at the level of layers, kernels, and instructions, to characterise both models' resilience, identify the most vulnerable portions, and harden them through fault injection while the models executed on NVIDIA GPUs. The experimental results showed that S-MTTM-R achieved a significant improvement in error masking. For AlexNet, the No-Malfunction rate improved from 54.90%, 67.85%, and 59.36% to 62.80%, 82.10%, and 80.76% in the RF, IOA, and IOV modes, respectively. For DenseNet, the No-Malfunction rate improved from 43.70%, 67.70%, and 54.68% to 59.90%, 84.75%, and 83.07% in the same three modes. Importantly, S-MTTM-R reduced the percentage of errors that cause misclassification (Malfunction) from 3.70% to 0.38% for AlexNet and from 5.23% to 0.23% for DenseNet. The performance analysis showed that S-MTTM-R incurs lower overhead than the well-known protection techniques Algorithm-Based Fault Tolerance (ABFT), Double Modular Redundancy (DMR), and Triple Modular Redundancy (TMR). In light of these results, the study provides strong evidence that the developed S-MTTM-R successfully mitigates soft errors in DNN models on GPUs with low energy, performance, and area overheads, indicating a marked improvement in model reliability for the healthcare domain.
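For illustration only, the sketch below outlines the general idea behind selective triple modular redundancy and the three outcome classes named in the abstract (No-Malfunction, Light-Malfunction, Malfunction). It is a minimal sketch under assumptions: the function names, the NumPy median vote, the `vulnerable` layer set, and the tolerance parameter are hypothetical and are not taken from the thesis, which implements S-MTTM-R on NVIDIA GPUs with SASSIFI fault injection.

```python
import numpy as np

def tmr_vote(outputs):
    """Majority-vote over three redundant copies of a layer output.

    outputs: list of three NumPy arrays from redundant executions.
    The element-wise median acts as a majority vote for triplicated
    results: if one copy is corrupted, the other two agree and the
    median returns the agreed value.
    """
    return np.median(np.stack(outputs, axis=0), axis=0)

def selective_tmr_forward(x, layers, vulnerable):
    """Run a layer pipeline, triplicating only the layers marked vulnerable.

    layers: list of callables (layer forward functions).
    vulnerable: set of layer indices identified as most likely to cause
                misclassification when corrupted (e.g., by prior
                fault-injection analysis).
    """
    for i, layer in enumerate(layers):
        if i in vulnerable:
            # Execute the vulnerable layer three times and vote,
            # masking a single corrupted execution.
            x = tmr_vote([layer(x), layer(x), layer(x)])
        else:
            # Non-vulnerable layers run once, avoiding full-TMR overhead.
            x = layer(x)
    return x

def classify_outcome(golden_out, faulty_out, tol=1e-5):
    """Label one fault-injection outcome relative to a fault-free (golden) run.

    - "No-Malfunction":    outputs match the golden run within tolerance.
    - "Light-Malfunction": values differ but the top-1 class is unchanged.
    - "Malfunction":       the predicted class changes (SDC causing
                           misclassification).
    """
    if np.allclose(golden_out, faulty_out, atol=tol):
        return "No-Malfunction"
    if np.argmax(golden_out) == np.argmax(faulty_out):
        return "Light-Malfunction"
    return "Malfunction"
```

The design intent this sketch tries to convey is the overhead trade-off described in the abstract: full TMR triplicates every layer, whereas selective TMR pays the triplication cost only where the vulnerability analysis shows corruption is most likely to change the predicted class.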

Item Type: Thesis (PhD)
Additional Information: Thesis (Doctor of Philosophy) -- Universiti Malaysia Pahang, 2021, SV: DR. IZZELDIN IBRAHIM MOHAMED ABDELAZIZ, CD: 13093
Uncontrolled Keywords: soft error mitigation, neural networks
Subjects: T Technology > T Technology (General)
T Technology > TA Engineering (General). Civil engineering (General)
Faculty/Division: Institute of Postgraduate Studies
College of Engineering
Depositing User: Mr. Nik Ahmad Nasyrun Nik Abd Malik
Date Deposited: 14 Oct 2022 02:57
Last Modified: 14 Oct 2022 02:57
URI: http://umpir.ump.edu.my/id/eprint/34715
