Ponencias (Conference Papers)
Permanent URI for this collection: http://10.0.96.45:4000/handle/11056/15103
Browsing Ponencias by subject "APRENDIZAJE PROFUNDO DISTRIBUIDO" (distributed deep learning)
Showing 1 - 1 of 1
Item: Early experiences of noise-sensitivity performance analysis of a distributed deep learning framework (Institute of Electrical and Electronics Engineers (IEEE), 2022-10-18) Rojas, Elvis; Knobloch, Michael; Daoud, Nour; Meneses, Esteban; Mohr, Bernd
Deep Learning (DL) applications are used to solve complex problems efficiently. These applications require complex neural network models composed of millions of parameters and huge amounts of data for proper training. This is only possible by parallelizing the necessary computations with so-called distributed deep learning (DDL) frameworks over many GPUs spread across multiple nodes of an HPC cluster. These frameworks mostly utilize the compute power of the GPUs and use only a small portion of the available compute power of the CPUs in the nodes for I/O and inter-process communication, leaving many CPU cores idle and unused. The more powerful the base CPU in the cluster nodes, the more compute resources are wasted. In this paper, we investigate how much of these unutilized compute resources could be used to execute other applications without lowering the performance of the DDL frameworks. In our experiments, we executed a noise-generation application, which generates a very high memory, network, or I/O load, in parallel with DDL frameworks, and used HPC profiling and tracing techniques to determine whether and how the generated noise affects the performance of the DDL frameworks. Early results indicate that it might be possible to utilize the idle cores for jobs of other users without negatively affecting the performance of the DDL applications.
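The abstract describes running a noise-generation application that produces very high memory, network, or I/O load on the CPU cores left idle by the DDL framework. The record does not include code; the following Python sketch is only a hypothetical illustration of the memory-load case, and the core list, buffer size, and duration are assumed values, not the authors' actual tool or configuration.

```python
# Illustrative CPU/memory "noise" generator (sketch, not the authors' tool).
# Each worker process is pinned to one otherwise-idle core and repeatedly
# allocates and touches a large buffer, creating background memory traffic.
import multiprocessing as mp
import os
import time

BUFFER_MB = 256   # size of each scratch buffer (assumed value)
DURATION_S = 60   # how long each worker generates load (assumed value)

def memory_noise_worker(core_id: int) -> None:
    """Generate memory traffic on one CPU core for DURATION_S seconds."""
    try:
        # Pin this worker to a single core (Linux-only; skipped elsewhere).
        os.sched_setaffinity(0, {core_id})
    except (AttributeError, OSError):
        pass
    end = time.time() + DURATION_S
    while time.time() < end:
        # Allocate a buffer and write to every page so it is actually touched.
        buf = bytearray(BUFFER_MB * 1024 * 1024)
        for i in range(0, len(buf), 4096):
            buf[i] = 1

if __name__ == "__main__":
    # Hypothetical scenario: cores 0-3 are left idle by the DDL framework.
    idle_cores = [0, 1, 2, 3]
    workers = [mp.Process(target=memory_noise_worker, args=(c,)) for c in idle_cores]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
```

Analogous workers could instead stress the filesystem or the network, matching the I/O and network noise cases mentioned in the abstract; the profiling and tracing step would then compare DDL training performance with and without this background load.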