Keynotes
Abstracts
Scalable Web Search Engines, Dr. Mauricio Marin
2/9, 10:15hs
Dealing efficiently with multiple user queries, each at a different stage of execution at any given instant, is a key issue in large-scale Web search engines. The use of suitable algorithms and heuristics, designed to achieve high query throughput on the least possible amount of hardware while remaining stable under sudden peaks in query traffic, is critical to efficient data center operation. Achieving this goal is well beyond the capabilities of any single indexing data structure and its associated query processing algorithm. Indeed, current practice in Web search engines clearly indicates that such a goal is only feasible through a combination of indexing, caching and parallelism, all devised to work together so that, as a whole, they lead to efficient and scalable performance on well-dimensioned hardware. This talk describes combinations of such strategies and shows that they lead to Web search engines that are efficient in terms of query throughput, individual query latency, and power consumption.
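As a rough illustration of how two of these ingredients interact, the sketch below (a hypothetical toy, not the speaker's system) pairs a tiny in-memory inverted index with an LRU result cache, so that repeated queries skip the index traversal entirely; all names and capacities are invented for the example.

```python
from collections import OrderedDict, defaultdict

class ResultCache:
    """Tiny LRU cache for query results (hypothetical sketch)."""
    def __init__(self, capacity=1000):
        self.capacity = capacity
        self.entries = OrderedDict()

    def get(self, query):
        if query in self.entries:
            self.entries.move_to_end(query)   # mark as recently used
            return self.entries[query]
        return None

    def put(self, query, results):
        self.entries[query] = results
        self.entries.move_to_end(query)
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)  # evict least recently used

# Inverted index: term -> set of document ids.
index = defaultdict(set)

def add_document(doc_id, text):
    for term in text.lower().split():
        index[term].add(doc_id)

def search(query, cache):
    cached = cache.get(query)
    if cached is not None:
        return cached                         # cache hit: no index traversal
    terms = query.lower().split()
    postings = [index[t] for t in terms if t in index]
    results = set.intersection(*postings) if postings else set()
    cache.put(query, results)
    return results
```

In a production engine each of these pieces is itself partitioned and replicated across a cluster, which is where the parallelism discussed in the talk comes in.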
Short Bio
Dr. Mauricio Marin is a senior research scientist at Yahoo! Research Latin America, Santiago, hosted by the University of Chile. He holds a PhD in Computer Science from the University of Oxford, UK, and an MSc from the University of Chile. His research work is at the intersection of information retrieval and parallel computing, with applications in Web search engines.
Treatment of problems with high computational cost via multi-core/multi-GPU technology. Examples: RSA and several primality tests. Alejo Grau, Daniel Ciolek and Dr. Mario Mastriani
2/9, 16:30hs
We report the development of a library for large-number arithmetic on GPUs, which allows programmers working in high-level languages to access the computing power offered by modern GPUs and multicore processors without having to deal with the details and differences of the various devices. One of the extensions of the platform is an implementation of operations on large numbers optimized for the vector architectures of GPUs. For testing, we decided to tackle a "difficult" problem in order to measure the limits of this technology. The problem chosen was the factorization of large numbers into prime powers. This problem is a classic example that requires large computing power and a good implementation of arbitrary-precision arithmetic. Furthermore, the difficulty of this problem is what gives its strength to the most widely used cipher today, RSA. At the same time, it would be desirable to apply similar techniques to the generation of new large prime numbers, in an attempt to expand the repertoire of cryptographic applications of this new tool.
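To make the chosen benchmark concrete, here is a minimal CPU-side sketch in Python, whose built-in arbitrary-precision integers stand in for the GPU big-number library described above: a Miller-Rabin probabilistic primality test plus factorization into prime powers by trial division. Trial division is shown only for clarity; its exponential cost in the number of digits is precisely what makes RSA-size factorization hard, and serious attacks use sieve methods instead.

```python
import random

def is_probable_prime(n, rounds=20):
    """Miller-Rabin probabilistic primality test for arbitrary-size n."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # Write n - 1 as d * 2**s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)            # modular exponentiation on big integers
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False            # a is a witness: n is composite
    return True

def factor_prime_powers(n):
    """Factor n into prime powers {p: e} by trial division (illustration only)."""
    factors = {}
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors[d] = factors.get(d, 0) + 1
            n //= d
        d += 1 if d == 2 else 2     # after 2, test odd candidates only
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

# e.g. factor_prime_powers(2**4 * 3**2 * 97) == {2: 4, 3: 2, 97: 1}
```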
Short Bios
Alejo Grau. He studied Electronics Engineering and Design of Image and Sound at UBA. Systems programmer, with special emphasis on parallel multi-core/multi-GPU development. Founder and partner of Dixar Inc. S.A. Leader of the NextStream project.

Daniel Ciolek. He studied Computer Science at UBA. He has served as an analyst programmer at several companies. Junior researcher in the Research and Development Lab of Dixar Inc. S.A. Developer of numerical algorithms for vector architectures in CUDA and OpenCL. ISA-level programmer on several architectures.

Mario Mastriani. Electronics Engineer, specialty: Automatic Control. Ph.D. in Engineering. Ph.D. in Computer Science. Ph.D. candidate in Science and Technology. Reviewer for 15 journals from IEEE, Springer-Verlag, Taylor & Francis, IET, SPIE, OSA, Elsevier, etc. Director of the Computer Engineering Degree, National University of Tres de Febrero (UNTreF). Professor of Digital Image Processing, Computer Engineering Degree, UNTreF. Director of the Signal and Image Processing Research Group (GIPSI), UNTreF. Professor of the doctorate course "Advanced Topics in Signal and Image Processing", Engineering College, Buenos Aires University. Professor in the Optoelectronics Master's program, Engineering College, Buenos Aires University: 1) Medical diagnosis and treatment, 2) Medical applications. Former Director of the Research and Development in New Technologies Lab (LIDeNTec), ANSES. Director of Technical Innovation, Management of Informatics and Technological Innovation (GIIT), ANSES. FONCyT evaluator in Informatics Technology, Communications and Electronics, ANPCyT-MinCyT. CIC evaluator in Informatics Technology, Communications and Electronics, Buenos Aires Province. CONEAU evaluator for the Informatics Engineering Degree. Former leader of the SAR focusing group at CONAE. INVAP contractor, 7 projects. USAF contractor, 2 projects. UNTreF representative for the Software Development Program for Interactive Digital TV, Federal Planning Ministry. 47 international publications in rigorously refereed journals. 49 scientific conference presentations in Argentina and abroad.
HPC Training in the Short, Medium and Long Terms in USA and Europe, Gonzalo Hernández
1/9, 11:30hs
In the main supercomputing centers of the USA and Europe, education and training activities in the short, medium and long terms play a fundamental role, and consider not just the classical methods and techniques of HPC but also new HPC technologies.

In 2007, sixteen countries of the European community created the PRACE consortium: Partnership for Advanced Computing in Europe. Its main objective is the creation of a competitive HPC service around a supercomputer with petaflops computing power that supports research and development in academia and industry in Europe. In order to determine the HPC training needs of supercomputer users in Europe, the PRACE consortium designed a survey to evaluate the existing programs and materials for learning HPC methods and technologies available within the consortium. The main result of this survey shows that the fundamental methods and techniques of HPC are not well understood by a significant part of the HPC community in Europe. Furthermore, the majority of the community rates its knowledge of HPC as basic or beginner level. For this reason, the PRACE consortium put into practice an action strategy with immediate, short- and long-term objectives to improve the education and training in HPC of the European scientific community.

Although the most powerful supercomputers are located in the United States, only in recent years has the need arisen there to establish undergraduate and postgraduate HPC programs and to train scientists in this area. Additionally, and contrary to what happens in Europe, there has been no joint program, with the result that training initiatives and access to HPC tools have been restricted to certain research groups. The most important USA initiative in the field of HPC is the Blue Waters project, which aims to be the world's most powerful supercomputer dedicated to scientific research. Blue Waters is a joint enterprise between the National Center for Supercomputing Applications (University of Illinois at Urbana-Champaign), IBM and the Great Lakes Consortium for Petascale Computation. The project involves the collaboration of twelve research teams working on science and engineering, systems software and industrial innovation projects. The Blue Waters project will also develop educational programs in the short, medium and long terms in the field of HPC, reaching students and teachers in primary and secondary education, undergraduate and postgraduate students, and engineers and scientists.
Short Bio
Gonzalo Hernández studied Mathematical Engineering at the University of Chile, where he also received a Ph.D. in Mathematical Modeling. In his doctoral thesis he developed models for several complex systems and studied them both analytically and numerically by means of large-scale simulations implemented on supercomputers. Since graduating, Dr. Hernández has participated in more than 15 projects in the area of Modeling and Simulation of Complex Systems. As a result of this work he has published more than 50 papers in journals and proceedings; he has advised more than 20 undergraduate and graduate thesis students; and he has organized 4 international conferences and served on the scientific committees of international conferences in High Performance Computing. Since 2005 he has worked in the HPC Laboratory at the Center for Mathematical Modeling of the University of Chile, with an academic appointment as associate professor, and he participates in the Chilean Grid initiative.
HPC in the Multicore Era - Challenges and Opportunities, David Barkai
1/9, 9:30hs
Moore's Law still provides a doubling of performance every generation, through higher transistor density on the chip but also with the help of architectural innovation. What is new is that, since the introduction of multicore, the potential performance increase will not be realized without assistance from the community of software and application developers. The challenges arise from the ever increasing level of concurrency and from the innovation required to maintain an adequate compute-bandwidth-latency balance. This talk will cover the current state of affairs in HPC, the hardware and software challenges, some approaches being studied to resolve them, such as new approaches to memory and I/O systems and the examination of more complex programming models (heterogeneous and hybrid), and the scientific discovery opportunities that will open up when those barriers are overcome.
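As one minimal sketch of what a hybrid programming model looks like in practice (my illustration, not material from the talk; it assumes mpi4py and an MPI runtime are available), the example below distributes work across MPI ranks and then across threads within each rank. In pure Python the GIL limits thread-level speedup, so the threads here illustrate the two-level structure rather than performance.

```python
# Hybrid parallelism sketch: MPI between processes, threads within each one.
# Run with e.g.: mpiexec -n 4 python hybrid_sum.py   (file name is illustrative)
from concurrent.futures import ThreadPoolExecutor

from mpi4py import MPI

def partial_sum(chunk):
    # Stand-in for per-core numerical work.
    return sum(x * x for x in chunk)

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Distributed-memory level: each MPI rank owns a strided slice of the problem.
N = 1_000_000
items = list(range(rank, N, size))

# Shared-memory level: split the rank's slice across a small thread pool.
THREADS = 4
chunks = [items[i::THREADS] for i in range(THREADS)]
with ThreadPoolExecutor(max_workers=THREADS) as pool:
    local = sum(pool.map(partial_sum, chunks))

# Communication step: this is where the bandwidth/latency balance bites.
total = comm.allreduce(local, op=MPI.SUM)
if rank == 0:
    print("sum of squares over", N, "integers:", total)
```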
Short Bio
Dr. David Barkai is an HPC computational architect for Intel Corporation, involved in interfacing between the HPC users' community and Intel. He has also held a number of positions within Intel research labs involving peer-to-peer computing and investigations of the impact of emerging technologies on society. Before joining Intel in 1996, David worked for over 20 years in the field of scientific and engineering supercomputing at Control Data Corporation, Floating Point Systems, Cray Research Inc., Supercomputer Systems Inc., and NASA Ames Research Center. David received his B.Sc. in physics and mathematics from the Hebrew University in Jerusalem and a Ph.D. in theoretical physics from Imperial College of Science and Technology, London University, in the UK. Over the years David has published in various forums on subjects in physics, numerical methods, and computer applications and architectures. He authored the book "Peer-to-Peer Computing: Technologies for Sharing and Collaborating on the Net" (Intel Press, 2001) and other articles on related topics.
Atomistic simulations using thousands of CPUs, Eduardo M. Bringa
2/9, 14:00hs
Advances in massively parallel computing have led to new challenges in the numerical solution of problems. For instance, strategies for solving coupled differential equations are not the same on a desktop computer as on a machine with hundreds of thousands of CPUs. In this presentation I will discuss simulations of materials carried out on thousands of CPUs in various supercomputers, including some simulations on BlueGene/L (BGL) using 64,000 CPUs. These simulations describe the behavior of materials at the atomic scale, using techniques like molecular dynamics (MD) and Monte Carlo (MC), and can display novel material properties that arise from the complexity of collective atomic behavior under extreme conditions. The spatial scale given by the inter-atomic interactions allows for a natural resolution of problems that appear when this behavior is extrapolated to the continuum scale, as in the case of a discontinuity at the front of a shock wave. I will discuss some pathways for future development, including advances using novel algorithms and calculations on Graphics Processing Units (GPUs).
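For readers unfamiliar with MD, the following serial NumPy sketch (my illustration, not the speaker's code) shows the core of such a simulation: Newton's coupled equations of motion for all atoms, integrated with the velocity Verlet scheme over pairwise Lennard-Jones forces. Production runs on machines like BGL additionally decompose space across MPI ranks and use cutoffs and neighbor lists, all of which this sketch omits.

```python
import numpy as np

def lj_forces(pos, eps=1.0, sigma=1.0):
    """Pairwise Lennard-Jones forces (O(N^2), no cutoff; illustration only)."""
    disp = pos[:, None, :] - pos[None, :, :]   # (N, N, 3) displacement vectors
    r2 = np.sum(disp**2, axis=-1)              # squared distances
    np.fill_diagonal(r2, np.inf)               # no self-interaction
    inv_r6 = (sigma**2 / r2)**3
    # Force factor from U = 4*eps*((sigma/r)^12 - (sigma/r)^6)
    fmag = 24.0 * eps * (2.0 * inv_r6**2 - inv_r6) / r2
    return np.sum(fmag[:, :, None] * disp, axis=1)

def velocity_verlet(pos, vel, mass, dt, steps):
    """Integrate the coupled equations of motion for all atoms at once."""
    acc = lj_forces(pos) / mass
    for _ in range(steps):
        pos += vel * dt + 0.5 * acc * dt**2
        new_acc = lj_forces(pos) / mass
        vel += 0.5 * (acc + new_acc) * dt
        acc = new_acc
    return pos, vel

# Example: 64 atoms on a simple cubic lattice (reduced LJ units).
side = 4
grid = np.arange(side, dtype=float) * 1.2      # spacing near the LJ minimum
pos = np.array(np.meshgrid(grid, grid, grid)).reshape(3, -1).T.copy()
vel = np.zeros_like(pos)
pos, vel = velocity_verlet(pos, vel, mass=1.0, dt=1e-3, steps=10)
```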
Short Bio
Eduardo M. Bringa obtained his licentiate degree in physics from the Instituto Balseiro in December 1995, with a thesis on plasma modeling. He received his Ph.D. in physics in May 2000 from the University of Virginia (UVa), USA, for work on materials simulations. After a postdoctoral position in the astronomy department at UVa, he joined Lawrence Livermore National Laboratory (USA), where he became a permanent staff member, remaining until 2008, when he returned to Argentina as an associate professor at the Instituto de Ciencias Básicas of the Universidad Nacional de Cuyo. He is currently an independent researcher at CONICET. His interests, reflected in more than 70 refereed publications, include simulations of the mechanical properties of materials, radiation damage, astrophysics, nanotechnology, and algorithms for high computational efficiency in atomistic simulations.