Ponencias
Permanent URI for this collection: http://10.0.96.45:4000/handle/11056/15103
Browse
Browsing Ponencias by rights metadata "Acceso embargado" (embargoed access)
Showing 1 - 3 of 3
Item: Análisis de organizaciones de base comunitaria como alternativa de gestión para el desarrollo del turismo rural en el cantón de Buenos Aires (Costa Rica) (V Congreso Internacional de Desarrollo Local, Cartagena de Indias, 2019) — Arauz Beita, Ileana Isabel

Over the last two decades, the tourism sector has energized local economies, mainly in the rural areas of Costa Rica, bringing to light tourism activities run by family units, individuals, or community-based organizations, commonly called associations, committees, or cooperatives. In the specific case of the canton of Buenos Aires, community-based organizations can be key to the development of rural tourism, because their management allows planning with a sense of relevance, appropriation of the territory, and a clear definition of the community tourism product to be offered. To this end, it is considered necessary to develop the elements the canton possesses: an inventory of tourist attractions and resources, protected areas, the tourism supply, connectivity among the districts, and thereby the enhancement of social capital through associativity. Various information sources are used, such as mapping of the tourism sectors, inventories of organizations, tourist-site data sheets, and interviews with social actors representing the organizations, in order to generate a management model that allows the chamber of tourism to coordinate actions in a decentralized manner.

Item: Early experiences of noise-sensitivity performance analysis of a distributed deep learning framework (Institute of Electrical and Electronics Engineers (IEEE), 2022-10-18) — Rojas, Elvis; Knobloch, Michael; Daoud, Nour; Meneses, Esteban; Mohr, Bernd

Deep Learning (DL) applications are used to solve complex problems efficiently.
These applications require complex neural network models composed of millions of parameters and huge amounts of data for proper training. This is only possible by parallelizing the necessary computations with so-called distributed deep learning (DDL) frameworks over many GPUs spread across multiple nodes of an HPC cluster. These frameworks mostly utilize the compute power of the GPUs and use only a small portion of the available compute power of the CPUs in the nodes for I/O and inter-process communication, leaving many CPU cores idle and unused. The more powerful the base CPUs in the cluster nodes, the more compute resources are wasted. In this paper, we investigate how much of these unutilized compute resources could be used for executing other applications without lowering the performance of the DDL frameworks. In our experiments, we executed a noise-generation application, which generates a very high memory, network, or I/O load, in parallel with DDL frameworks, and used HPC profiling and tracing techniques to determine whether and how the generated noise affects the performance of the DDL frameworks. Early results indicate that it might be possible to utilize the idle cores for jobs of other users without affecting the performance of the DDL applications in a negative way.

Item: Understanding soft error sensitivity of deep learning models and frameworks through checkpoint alteration (Institute of Electrical and Electronics Engineers (IEEE), 2021-10-13) — Rojas, Elvis; Pérez, Diego; Calhoun, Jon; Bautista-Gomez, Leonardo; Jones, Terry; Meneses, Esteban

The convergence of artificial intelligence, high-performance computing (HPC), and data science brings unique opportunities for marked advances in discoveries that leverage synergies across scientific domains. Recently, deep learning (DL) models have been successfully applied to a wide spectrum of fields, from social network analysis to climate modeling.
Such advances greatly benefit from already available HPC infrastructure, mainly GPU-enabled supercomputers. However, those powerful computing systems are exposed to failures, particularly silent data corruption (SDC), in which bit-flips occur without the program crashing. Consequently, exploring the impact of SDCs on DL models is vital for maintaining progress in many scientific domains. This paper uses a distinctive methodology to inject faults into the training phases of DL models. We use checkpoint file alteration to study the effect of having bit-flips in different places of a model and at different moments of the training. Our strategy is general enough to allow the analysis of any combination of DL model and framework, so long as they produce a Hierarchical Data Format 5 (HDF5) checkpoint file. The experimental results confirm that popular DL models are often able to absorb dozens of bit-flips with a minimal impact on accuracy convergence.
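The checkpoint-alteration abstract rests on one mechanical idea: corrupting a stored model parameter by flipping a single bit of its binary representation. The sketch below is not the authors' HDF5 tooling; it is a minimal stdlib-only illustration (using Python's `struct` module, with a hypothetical `flip_bit` helper) of why the position of the flipped bit matters so much for how well a model can absorb the fault.

```python
import struct

def flip_bit(value: float, bit: int) -> float:
    """Flip one bit (0..63) in the IEEE-754 double representation of
    `value`, emulating a silent-data-corruption event in a stored
    model parameter."""
    (as_int,) = struct.unpack("<Q", struct.pack("<d", value))
    corrupted = as_int ^ (1 << bit)
    (result,) = struct.unpack("<d", struct.pack("<Q", corrupted))
    return result

weight = 0.75
# A flip in a low-order mantissa bit barely perturbs the parameter,
# the kind of fault training can typically absorb...
low = flip_bit(weight, 0)
# ...while a flip in a high exponent bit changes it by hundreds of
# orders of magnitude, which is far more likely to derail convergence.
high = flip_bit(weight, 62)
print(low, high)
```

Applying the same XOR to the raw bytes of a parameter dataset inside an HDF5 checkpoint, then resuming training from that file, reproduces the fault-injection setup the abstract describes without touching the framework itself.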