Ponencias
Permanent URI for this collection: http://10.0.96.45:4000/handle/11056/15103
Browsing Ponencias by subject "APRENDIZAJE PROFUNDO" (deep learning)
Showing 1 - 1 of 1
Item: Understanding soft error sensitivity of deep learning models and frameworks through checkpoint alteration (Institute of Electrical and Electronics Engineers (IEEE), 2021-10-13)

Authors: Rojas, Elvis; Pérez, Diego; Calhoun, Jon; Bautista-Gomez, Leonardo; Jones, Terry; Meneses, Esteban

Abstract: The convergence of artificial intelligence, high-performance computing (HPC), and data science brings unique opportunities for marked advances in discovery that leverage synergies across scientific domains. Recently, deep learning (DL) models have been successfully applied to a wide spectrum of fields, from social network analysis to climate modeling. Such advances greatly benefit from already available HPC infrastructure, mainly GPU-enabled supercomputers. However, those powerful computing systems are exposed to failures, particularly silent data corruption (SDC), in which bit-flips occur without the program crashing. Consequently, exploring the impact of SDCs on DL models is vital for maintaining progress in many scientific domains. This paper uses a distinctive methodology to inject faults into the training phases of DL models. We use checkpoint file alteration to study the effect of bit-flips in different places of a model and at different moments of the training. Our strategy is general enough to allow the analysis of any combination of DL model and framework, so long as they produce a Hierarchical Data Format 5 (HDF5) checkpoint file. The experimental results confirm that popular DL models are often able to absorb dozens of bit-flips with minimal impact on accuracy convergence.
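The core operation the abstract describes, injecting a single bit-flip into a stored model parameter to emulate silent data corruption, can be sketched with only the Python standard library. This is a minimal illustration, not the authors' actual tooling: it flips one chosen bit of a 32-bit float, the representation typically used for weights in an HDF5 checkpoint (reading and rewriting the checkpoint itself, e.g. with `h5py`, is omitted here).

```python
import struct


def flip_bit(value: float, bit: int) -> float:
    """Flip one bit (0..31) of a float32 value, emulating a silent data corruption."""
    # Reinterpret the float's 32-bit pattern as an unsigned integer.
    (pattern,) = struct.unpack("<I", struct.pack("<f", value))
    # XOR toggles exactly the selected bit.
    corrupted = pattern ^ (1 << bit)
    # Reinterpret the corrupted pattern back as a float.
    (result,) = struct.unpack("<f", struct.pack("<I", corrupted))
    return result


w = 0.75
# A low-order mantissa bit barely perturbs the weight; a model can
# usually absorb such an error during training.
print(flip_bit(w, 0))
# A high exponent bit changes the value by many orders of magnitude,
# the kind of corruption most likely to hurt accuracy convergence.
print(flip_bit(w, 30))
```

Which bit is hit matters: mantissa-bit flips are tiny relative perturbations, while exponent-bit flips can turn a weight into an enormous value, which is why the paper studies flips at different places in the model and at different moments of training.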