Self-Supervised Pretraining and Quantization for Fault Tolerant Neural Networks: Friend or Foe?
Blog Article
Deep neural networks (DNNs) are increasingly being applied in critical domains such as healthcare and autonomous driving. However, their predictive capabilities can degrade in the presence of transient hardware faults, which can lead to potentially catastrophic and unpredictable errors. Consequently, various techniques have been proposed to enhance DNN fault tolerance by modifying the network structure or training procedure, thereby reducing the need for costly hardware redundancy. However, there are design and training choices whose impact on fault propagation has been overlooked in the literature.
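To make the fault model concrete, here is a minimal sketch of how transient hardware faults are often emulated in software: a single bit flip injected into a randomly chosen weight of the network. The toy model, the 32-bit float representation, and the single-fault-per-injection setup are illustrative assumptions, not the exact injection campaign used in the study.

```python
# Minimal fault-injection sketch (assumptions: 32-bit float weights, a single
# bit flip per injection, PyTorch as the framework). An illustration of how
# transient faults are emulated, not the study's exact fault model.
import random
import struct

import torch
import torch.nn as nn


def flip_random_bit(value: float) -> float:
    """Flip one random bit in the IEEE-754 single-precision encoding of a float."""
    bits = struct.unpack("<I", struct.pack("<f", value))[0]
    bits ^= 1 << random.randrange(32)
    return struct.unpack("<f", struct.pack("<I", bits))[0]


@torch.no_grad()
def inject_single_fault(model: nn.Module) -> None:
    """Corrupt one randomly chosen weight element with a single bit flip."""
    weights = [p for p in model.parameters() if p.dim() > 1]  # skip biases
    target = random.choice(weights).view(-1)
    idx = random.randrange(target.numel())
    target[idx] = flip_random_bit(target[idx].item())


# Usage: corrupt a toy classifier, then compare its predictions before and after.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
inject_single_fault(model)
```

Repeating such injections over many weights and inputs gives an empirical picture of how often a fault actually propagates to a wrong prediction.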
Specifically, self-supervised learning (SSL) as a pre-training technique has been shown to enhance the robustness of the learned features, resulting in improved performance on downstream tasks. This study investigates the error tolerance of different DNN architectures and SSL techniques in image classification and segmentation tasks, including those relevant to Earth Observation. Experimental results suggest that SSL pretraining, whether used alone or in combination with error mitigation techniques, generally enhances DNN fault tolerance. We complement these findings with an in-depth analysis of the fault tolerance of quantized networks.
In the quantized setting, however, the use of standard SSL techniques leads to a decrease in accuracy. This issue can be partially addressed by employing methods that incorporate the quantization error into the loss function during the pretraining phase.
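As a rough illustration of that idea, the sketch below adds the weight quantization error as a penalty term on top of a generic self-supervised loss, so that the pretrained representation stays close to its quantized counterpart. The uniform symmetric 8-bit scheme, the per-tensor scale, and the penalty weight `lam` are assumptions made for illustration, not the specific method evaluated in the article.

```python
# Sketch of quantization-error-aware pretraining (assumptions: uniform symmetric
# 8-bit fake quantization, per-tensor scale, an arbitrary SSL objective). The
# penalty pulls weights toward representable grid points during pretraining.
import torch
import torch.nn as nn


def fake_quantize(w: torch.Tensor, num_bits: int = 8) -> torch.Tensor:
    """Quantize-then-dequantize with a uniform symmetric per-tensor scale."""
    qmax = 2 ** (num_bits - 1) - 1
    scale = w.detach().abs().max().clamp(min=1e-8) / qmax
    return torch.round(w / scale).clamp(-qmax, qmax) * scale


def quantization_penalty(model: nn.Module, num_bits: int = 8) -> torch.Tensor:
    """Mean squared quantization error accumulated over all weight tensors."""
    terms = [torch.mean((p - fake_quantize(p, num_bits)) ** 2)
             for p in model.parameters() if p.dim() > 1]
    return torch.stack(terms).sum()


def pretraining_loss(ssl_loss: torch.Tensor, model: nn.Module,
                     lam: float = 0.1) -> torch.Tensor:
    """Total objective: self-supervised loss plus weighted quantization error."""
    return ssl_loss + lam * quantization_penalty(model)
```

Here `ssl_loss` stands for whatever contrastive or reconstruction objective the SSL method already computes; the extra term only regularizes the encoder weights so that post-training quantization loses less accuracy.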