Programme
We reserve the right to modify the programme due to circumstances.
Maciej Pawlik, Klemens Noga, Maciej Czuchry, Jacek Budzowski, Łukasz Flis, Patryk Lasoń, Marek Magryś, Michał Sterzel
ACC Cyfronet UST, Krakow, Poland
Title: Evaluation of an ARM-based system for HPC workloads, a case study
Abstract: ARM is a well-known CPU architecture which, until recently, was used primarily in portable devices. Adoption of ARM CPUs was motivated mostly by features such as low power consumption and a flexible licensing model. In recent years, ARM has grown to feature a mature software ecosystem and performance comparable to the CPUs used in stationary devices like workstations and servers. According to the community, ARM has the potential to address some of the challenges that have arisen in the field of High Performance Computing, where vendors and researchers alike are working on lowering power consumption, increasing performance per dollar, and increasing density. This paper presents experience from a case study conducted at ACC Cyfronet UST, where we evaluated an ARM-based system designed to provide computing services. The evaluation consisted of integrating the test system with a running HPC cluster and subjecting it to typical workloads. The integration allowed us to test the system’s compatibility with supporting services such as storage and networking. Our experience demonstrates that storage based on the Lustre filesystem needs substantial configuration changes to work properly with ARM servers. The computing performance was measured and proved to be comparable to that offered by the most popular vendors. While we experienced some difficulties, we showed that it is possible to build and run an HPC cluster based on the ARM architecture.
Evaluation of an ARM-based system for HPC workloads, a case study (paper)
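As a hedged illustration of the node-level benchmarking such an evaluation typically involves (the matrix sizes, the Gflop/s arithmetic, and the use of numpy are ours, not the paper's methodology), here is a DGEMM throughput probe that runs identically on ARM and x86 nodes:

```python
import time
import numpy as np

def dgemm_gflops(n: int = 2048, repeats: int = 5) -> float:
    """Measure double-precision matrix-multiply throughput on this node.

    numpy delegates to the locally installed BLAS, so the same script can
    compare an ARM node (e.g., linked against ARM Performance Libraries)
    with an x86 node, as one data point in a broader evaluation.
    """
    a = np.random.rand(n, n)
    b = np.random.rand(n, n)
    a @ b                                  # warm-up, triggers BLAS init
    start = time.perf_counter()
    for _ in range(repeats):
        a @ b
    elapsed = time.perf_counter() - start
    flops = 2.0 * n ** 3 * repeats         # ~2n^3 flops per DGEMM
    return flops / elapsed / 1e9

print(f"DGEMM throughput: {dgemm_gflops():.1f} Gflop/s")
```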
Karthee Sivalingam, Alfio Lazzaro, Nina Mujkanovic
HPE HPC/AI EMEA Research Lab
Title: Optimising AI training deployments using Graph compilers and containers
Abstract: AI applications based on Deep Neural Networks (DNN) have become popular in solving nontrivial problems like image analysis and speech recognition. An AI workload usually incorporates an Extract, Transform and Load (ETL) pipeline, data movement, and execution of DNN graphs. Deep learning (DL) models are usually represented as computational graphs, with nodes representing tensor operators and edges the data dependencies between them. AI training deployments can be optimised with target-specific libraries, graph compilers, and by improving data movement. Graph compilers aim to optimise the execution of a DNN graph by generating optimised code for a target hardware/backend, thus accelerating the training and deployment of DNN models. Heterogeneous cloud and HPC infrastructures further increase the complexity of deploying and optimising AI training workloads.
In SODALITE (a Horizon 2020 project), we address this problem by providing tools that enable simpler and faster development, deployment, operation, and execution of applications in heterogeneous HPC and cloud computing environments. As part of this project, we are developing an Application Optimiser component that uses a performance model of the infrastructure and applications (based on benchmarks) to optimise their deployment and runtime for heterogeneous infrastructure and hardware. Using input from a data scientist, which defines the configurations and optimisations to be enabled, the Application Optimiser component will select the framework, graph compiler, and target-specific libraries before building an optimised container.
In this talk, we present a review of different AI frameworks and supported graph compilers. We also compare the performance of different frameworks and graph compilers using standard benchmarks when deployed using containers. We describe how AI training deployments on heterogeneous targets will be optimised by the Application Optimiser.
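To make the graph-compiler idea concrete, here is a minimal sketch (not part of the SODALITE toolchain) that asks TensorFlow's XLA compiler, via `jit_compile=True`, to fuse an entire training step into target-specific kernels; the model, shapes, and data are invented for illustration and assume a recent TensorFlow 2.x build with XLA support:

```python
import tensorflow as tf  # assumes a recent TensorFlow 2.x with XLA

# A small, invented model standing in for a real DNN graph.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
    tf.keras.layers.Dense(10),
])
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
opt = tf.keras.optimizers.Adam()

# jit_compile=True asks XLA to compile the whole step into fused,
# backend-specific kernels instead of executing ops one by one.
@tf.function(jit_compile=True)
def train_step(x, y):
    with tf.GradientTape() as tape:
        loss = loss_fn(y, model(x, training=True))
    grads = tape.gradient(loss, model.trainable_variables)
    opt.apply_gradients(zip(grads, model.trainable_variables))
    return loss

x = tf.random.normal([64, 784])
y = tf.random.uniform([64], maxval=10, dtype=tf.int32)
print(float(train_step(x, y)))
```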
A Research Engineer/Scientist at the HPE HPC/AI EMEA Research Lab (ERL), he is part of the Center of Excellence (CoE) at ARCHER and leads the HPE efforts for the SODALITE (Horizon 2020) project. In the CoE, he supports scientists and research support engineers (RSEs) in making better use of ARCHER, the UK national supercomputer. As lead of the SODALITE project, he is involved in developing the Application Optimiser component, which automates the optimisation of application deployments on heterogeneous targets like HPC and cloud. He has been involved in various research projects covering IO/data middleware, workflow optimisation, Big Data frameworks, data analytics for Lustre, and machine/deep learning. Previously, Karthee worked at the Met Office, STFC (Hartree Centre) and Infosys. He completed his doctorate in Particle Physics at the University of Edinburgh.
Uros Ignjacevic
General Manager of Sun Data World
Title: Ethiopia: The formation of a digital powerhouse of Africa
Abstract: With abundant renewable energy sources, including Africa’s largest hydroelectric project, the $4.5 billion Grand Ethiopian Renaissance Dam currently under construction, Ethiopia is emerging as a global player in providing highly competitive and reliable HPC services. Using abundant green energy, Ethiopia has the potential to create a net-zero carbon footprint while building its economy on the pillars of information technology, using advanced high-speed computing for better and faster data analysis and research, creating prediction models for agriculture, an improved health system, the development of science, and more. Join me in a presentation on Ethiopia, the land of origins, and its future in advanced computing.
General Manager of Sun Data World, the first tier III / IV data centre in Ethiopia, providing both colocation and cloud solutions to local and global customers. Uros comes from a marketing and media background, successfully building his first startup in 2005, growing it into a company with over 50 employees, and completing a successful exit 9 years after inception. Uros has since taken a keen interest in tech, specifically cloud computing. Before joining Sun Data World, he managed Bull&Turbine, a London-based digital software production company, for 5 years. He has worked with premium brands such as Coca Cola, Etihad, Mercedes Benz and more in Europe and the Middle East. A global citizen, he graduated from Aiglon College, a private Swiss boarding school, received a BA from the University of Virginia, USA, and an MBA from COTRUGLI Business School, Belgrade, Serbia.
Chin Fang
CEO of Zettar Inc.
Title: The historic first Poland–Singapore data transfer production trial over CAE-1, a behind-the-scenes look
Abstract: From mid-October to early November 2019, ICM (Warsaw, Poland), the A*STAR Computational Resource Centre (A*CRC, Singapore), and Zettar Inc. jointly carried out the historic first data transfer production trial from ICM to A*CRC across the then world’s newest 100Gbps international connection, CAE-1, covering a distance of 12,375 miles. The talk looks at the motivation, the unusual preparation, the setup on both the Poland side and the Singapore side, and a few key engineering accomplishments. Also explained are the three different types of data movement solutions and why this endeavor is a production trial instead of a demo, together with its significance in the context of modern data movement in the hybrid cloud environment.
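A back-of-the-envelope sketch of why a path of this length needs unusual preparation: the bandwidth-delay product. The RTT below is estimated from the distance, not a value measured in the trial:

```python
# Rough bandwidth-delay product for a ~12,375-mile (19,916 km) 100 Gbps path.
# All values are illustrative estimates, not measurements from the trial.

link_gbps = 100                      # nominal link capacity
distance_km = 19_916                 # 12,375 miles
light_speed_fiber_km_s = 200_000     # ~2/3 of c in glass fiber

rtt_s = 2 * distance_km / light_speed_fiber_km_s        # ~0.20 s round trip
bdp_bytes = (link_gbps * 1e9 / 8) * rtt_s               # bytes "in flight"

print(f"RTT ~ {rtt_s * 1000:.0f} ms")
print(f"Bandwidth-delay product ~ {bdp_bytes / 1e9:.1f} GB")
# A sender must keep roughly this much data unacknowledged in flight to
# fill the pipe, which is why tuned buffers and parallel streams matter.
```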
Dr. Fang is the founder & CEO of Zettar Inc. (Zettar), a software company based in Palo Alto, California, U.S. Zettar builds and delivers scalable, robust, and high-performance data mover software for distributed data-intensive engineering and science workloads, especially in a hybrid cloud environment. The company won the overall winner title of the Supercomputing Asia 2019 Data Mover Challenge, a grueling, two-month-long international competition of the highest order. Dr. Fang led the Zettar team to beat six other elite national teams from the U.S. and Japan by a wide margin. Chin Fang holds master’s and doctoral degrees in engineering from Stanford University.
Vladimir Brusic
School of Computer Science, University of Nottingham Ningbo China
Title: Single cell transcriptomics – new challenges for Big Data analytics
Abstract: Transcriptomics is the study of the complete set of gene expression products (expressed RNA) produced by transcription of DNA in a given biological sample. Biological samples typically contain mixtures of cells. The analysis of the transcriptome aims to identify genes that show differences in expression between samples representing different biological conditions, such as different cell types, activation statuses, developmental stages, or disease states. This knowledge provides insight into biological processes, the role of genes in those processes, and changes related to various biological conditions. Transcriptomics has many practical applications in all fields of life sciences. Prominent areas of application include developmental biology, immunology, virology, agriculture and food science, and medicine. Clinical applications of transcriptomics include the discovery of biomarkers, disease diagnosis and prognosis, selection and optimization of therapies, and disease monitoring. A great promise of clinical transcriptomics is the possibility of screening for multiple diseases, understanding disease causes, estimating disease progression, and assessing likely responses to specific treatments, all in a single transcriptomics experiment.
Bulk transcriptomics measures the expression of tens of thousands of genes from the overall mixture of cells. Single-cell transcriptomics (SCT), on the other hand, determines the levels of expression of tens of thousands of genes in individual cells from a given biological sample. The main advantage of bulk sequencing is the ability to perform “deep” sequencing, allowing the detection of transcripts present in minute concentrations. However, bulk transcriptomics will miss differential expression of transcripts between the cell types and subtypes present in the sample. SCT offers significant advantages compared to bulk sequencing, including the ability to profile different cell types and subtypes and to identify novel or rare cell types. The trade-off is that SCT sequencing is more “shallow” than bulk sequencing, resulting in lower transcript coverage, noisier data, and larger variability. Cell populations that have very similar bulk transcriptomes often show remarkably variable single-cell transcriptome profiles because of inherent biological variability, a mix of cell-cycle stages, random variability, and shallow sequencing.
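As a minimal sketch of the differential-expression analysis described above (a toy genes × cells matrix and a per-gene t-test; real SCT pipelines add normalization, batch correction, and more careful statistics):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Toy data: 1,000 genes x 200 cells, split into two conditions.
# Real SCT data would come from a count matrix produced by sequencing.
expr = rng.poisson(lam=5.0, size=(1000, 200)).astype(float)
expr[:50, 100:] *= 3          # make the first 50 genes truly differential
group_a, group_b = expr[:, :100], expr[:, 100:]

# Per-gene two-sample t-test between the two conditions.
t, p = stats.ttest_ind(group_a, group_b, axis=1)

# Crude multiple-testing control: Bonferroni threshold.
hits = np.where(p < 0.05 / expr.shape[0])[0]
print(f"{hits.size} genes flagged as differentially expressed")
```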
Dr. Brusic is a Li Dak Sum Chair Professor in Computer Science at the University of Nottingham Ningbo China and Adjunct Professor at Boston University (USA). He studied at the University of Belgrade (Serbia), La Trobe University (Australia), the Royal Melbourne Institute of Technology (Australia), and Rutgers University (USA), where he earned BEng, MEng, MAppSci, PhD, and MBA degrees. Professor Brusic has previously held senior research or academic positions internationally, including at the Dana-Farber Cancer Institute (USA), the University of Queensland (Australia), the Institute for Infocomm Research (Singapore), the Walter and Eliza Hall Institute of Medical Research (Australia), and the University of Belgrade (Serbia). He has published more than 200 scientific and technology articles that have attracted more than 14,000 citations, and holds two patents related to medical diagnostics and vaccine design. Prof. Brusic has worked in knowledge management for nearly 30 years. He has developed new artificial intelligence solutions for vaccine research, immunology, infectious disease, autoimmunity and cancer research. His current projects include applications of artificial intelligence, machine learning, statistics, mathematical modeling, and computer models in health monitoring, medical diagnostics, vaccine development, and food authentication.
Nicolas Tonello
Constelcom Ltd
Title: Constellation® – Supercomputing at your fingertips – Delivering HPC power and expertise to all
Abstract: Whilst supercomputers at HPC centres represent the best in class, access to on-premises HPC remains siloed and challenging for both users and HPC centres. Users who could benefit from the advanced performance these systems bring face technical as well as operational challenges: the need for advanced computing expertise, the access and knowledge required to run software that takes full advantage of supercomputers, and even being able to register on the systems in the first place and find the resources. Centres, in turn, face challenges in reaching users within their own professional environments and customs and in successfully delivering their full expertise. Constelcom will present Constellation®, its web-accessible and application-agnostic platform for self-managed, easy and secure access to High Performance Computing. Constellation® has been conceived to bridge the gap between users and HPC centres, making it possible for users and organisations to plug into and collaborate with HPC at their fingertips, and for HPC centres to easily manage their systems and resources whilst effortlessly delivering their in-house expertise to users.
Dr Tonello founded Constelcom with the mission of enabling open access for all to on-premises HPC via its Constellation® platform. End-users in science, discovery and innovation, as well as HPC centres, are able to optimise resource management, increase end-user engagement and enhance productivity by liberalising access to supercomputing within a unique secure, private and collaborative environment. Dr Tonello received a PhD in Aerospace Engineering from the University of Michigan in Ann Arbor, MI, USA in 2016.
Eden Figueroa
Stony Brook University
Title: Towards the Quantum Internet: Building an entanglement-sharing quantum network
Abstract: The goal of quantum communication is to transmit quantum states between distant sites. The key to achieving this goal is the generation of entangled states over long distances. Such states can then be used to faithfully transfer classical and quantum states via quantum teleportation. This is an exciting new direction which establishes the fundamentals of a new quantum internet. The big challenge, however, is that the entanglement generated between two distant sites decreases exponentially with the length of the connecting channel. To overcome this difficulty, the new concepts of entanglement swapping and quantum repeater operation are needed.
In this talk we will show our progress towards building a quantum network of many quantum devices capable of distributing entanglement over long distances, connecting Stony Brook University and Brookhaven National Laboratory on Long Island, New York. We will show how to produce photonic quantum entanglement in the laboratory and how to store it and distribute it by optically manipulating the properties of room-temperature atomic clouds. Finally, we will discuss our recent experiments in which several quantum devices are already interconnected, forming elementary quantum cryptographic and quantum repeater networks, and how these milestones can form the backbone of a future quantum-information-enhanced internet.
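A hedged numerical illustration of the exponential decrease mentioned above, using the textbook fiber-loss figure of about 0.2 dB/km (illustrative, not the group's measured values):

```python
# Photon transmission through optical fiber drops exponentially with length:
# T(L) = 10 ** (-alpha * L / 10), with alpha ~ 0.2 dB/km for telecom fiber.

alpha_db_per_km = 0.2

def transmission(length_km: float) -> float:
    """Probability that a photon survives a fiber of the given length."""
    return 10 ** (-alpha_db_per_km * length_km / 10)

for L in (10, 50, 100, 500):
    print(f"{L:4d} km: direct {transmission(L):.2e}, "
          f"each half with a repeater node: {transmission(L / 2):.2e}")
# At 500 km the direct success probability is ~1e-10, which is why
# entanglement swapping across shorter segments becomes essential.
```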
Prof. Eden Figueroa was awarded his BSc in Engineering Physics and his MSc in Optical Engineering at Monterrey Tech, Mexico, in 2000 and 2002 respectively. From 2003 to 2008, he was a PhD student in the Quantum Technology Group of Prof. A. I. Lvovsky at the University of Konstanz in Germany and later at the Institute for Quantum Information Science at the University of Calgary, Canada. His PhD thesis, entitled “A quantum memory for squeezed light”, described one of the first experimental implementations of a quantum memory for quantized light fields. In 2009, he joined the Quantum Dynamics Group of Prof. G. Rempe at the Max-Planck-Institut für Quantenoptik in Garching, Germany, where he worked on the implementation of quantum networks utilizing single atoms trapped in high-finesse optical cavities. Since 2013 he has been an Associate Professor and the group leader of the Quantum Information Technology group at Stony Brook University, where he has developed scalable room-temperature quantum memories and entanglement sources, aiming to construct the first working prototype of a quantum repeater network. Since January 2019, Prof. Figueroa has also held a joint appointment with the Instrumentation Division and the Computer Science Initiative at Brookhaven National Laboratory. The collaboration between Stony Brook and BNL is developing the Long Island Quantum Information Distribution Network (LIQuIDNet), a first prototype of a quantum network distributing photonic entanglement over long distances.
Makoto Taiji
RIKEN Center for Biosystems Dynamics Research, Osaka
Title: MDGRAPE-4A: A Special-purpose computer system for Molecular Dynamics simulations
Abstract: We have developed several special-purpose computer systems for particle simulations called “GRAPE (GRAvity PipE)”. The GRAPE systems are dedicated or quasi-general-purpose accelerators attached to host computers. The efficient parallelization in GRAPE systems gave us high performance at low cost and low energy. However, as the performance increases, it requires a more powerful host machine, i.e., a PC cluster. Thus the bottleneck in strong scaling lies in the host systems, not in the accelerators; this is often the case for GPU cluster systems as well. To overcome this bottleneck, we are currently developing “MDGRAPE-4A”, a special-purpose computer system for molecular dynamics (MD) simulations. MDGRAPE-4A has a structure similar to that of Anton by D. E. Shaw Research. It consists of System-on-Chip LSIs with general-purpose cores, dedicated pipelines, network interfaces, and memory units. By integrating these elements we aim to achieve a simulation speed of a few tens of microseconds per step through low-latency memory access and networking.
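For orientation, the following deliberately naive O(N²) Lennard-Jones kernel shows the kind of pairwise non-bonded force evaluation that dedicated MD pipelines accelerate; reduced units and all values are illustrative, and real codes add cutoffs, neighbor lists, and periodic boundaries:

```python
import numpy as np

def lj_forces(pos: np.ndarray, eps: float = 1.0, sigma: float = 1.0) -> np.ndarray:
    """Naive O(N^2) Lennard-Jones forces in reduced units.

    This is the pairwise kernel that dedicated MD pipelines (and GPU
    kernels) evaluate billions of times per simulation.
    """
    n = len(pos)
    forces = np.zeros_like(pos)
    for i in range(n):
        rij = pos[i] - pos             # vectors from every particle j to i
        r2 = (rij ** 2).sum(axis=1)
        r2[i] = np.inf                 # exclude self-interaction
        sr6 = (sigma ** 2 / r2) ** 3
        # F_i = 24*eps*(2*(sigma/r)^12 - (sigma/r)^6) / r^2 * rij
        fmag = 24 * eps * (2 * sr6 ** 2 - sr6) / r2
        forces[i] = (fmag[:, None] * rij).sum(axis=0)
    return forces

pos = np.random.default_rng(1).uniform(0, 5, size=(64, 3))
print(lj_forces(pos).shape)  # (64, 3)
```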
Dr. Makoto Taiji is Deputy Director of the RIKEN Center for Biosystems Dynamics Research and Team Leader of the Laboratory for Computational Molecular Design in the center.
His research interests cover computer science and computational science, especially computational biology. He has developed several special-purpose computers for scientific simulations. In 2006 he developed “MDGRAPE-3”, a special-purpose computer for molecular dynamics simulation with world-top-class performance. He was awarded three Gordon Bell Prizes, in 1995, 2006 and 2009. Recently he developed the special-purpose computer for molecular dynamics simulations “MDGRAPE-4A”. He also applies molecular simulations run on his machines to molecular biophysics, including drug discovery research.
Łukasz Orłowski
Co-Founder and CTO of Archanan
Title: Tomorrow’s Supercomputers, Yesterday’s Practices: Applications of cloud-backed large system emulation for supercomputer software development
Abstract: Software development for supercomputers has been flawed since the early days of supercomputing. Typically, a programmer develops their software on a workstation and then submits it to a debug queue for testing. Waiting for the test job to run in the debug queue often proves futile, as the run may not provide any information useful for debugging or troubleshooting.
This approach has many shortcomings:
- The development workstation and the target supercomputer are two entirely different machines; at best they may share the same processor architecture.
- There is no network communication during computation in a workstation environment, only socket-to-socket at best.
- The debug queue typically offers around 10% of the entire system's resources; what if the target deployment requires 25% of the target system?
- Debuggers are often licensed on a per-node basis, making debugging in a large environment prohibitively expensive and incompatible with the cloud.
Supercomputers are expensive and, by design, are meant to run production codes. It makes more sense to run development, testing, and validation workloads in the cloud.
Łukasz Orłowski is Co-Founder and CTO of the Singapore-based startup Archanan. With Archanan, Łukasz brings whole-system emulation and cloud-backed environments to users developing supercomputer software. Before Archanan, Łukasz held positions with A*STAR Singapore, Stony Brook University, Intel, and the Institute of Advanced Computational Sciences. He is a recent winner of the MIT Innovators Under 35 APAC award.
James K. Gimzewski
University of California Los Angeles
Title: Emergent Atomic Switch Networks for Neuroarchitectonics
Abstract: Artificial realizations of the mammalian brain, alongside their integration into electronic components, are explored through neuromorphic architectures, neuroarchitectonics, on CMOS-compatible platforms. Exploration of neuromorphic technologies continues to develop as an alternative computational paradigm as both capacity and capability reach their fundamental limits with the end of the transistor-driven industrial phenomenon of Moore’s law. Here, we consider the electronic landscape within neuromorphic technologies and the role of the atomic switch as a model device. We report the fabrication of an atomic switch network (ASN) showing critical dynamics, and harness criticality to perform benchmark signal classification and Boolean logic tasks. Observed evidence of biomimetic behavior such as synaptic plasticity and fading memory enables the ASN to attain cognitive capability within the context of artificial neural networks. Single atomic switches have been shown to exhibit both long-term and short-term memorization based on their history of excitation dynamics. When connected together in a massively dense network using nanoarchitectonics, they additionally display emergent behavior observable using multielectrode arrays, similar to EEG, to follow their spatio-temporal dynamics. Architectures such as reservoir computing (RC) are promising methods for implementing neuromorphics.
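Reservoir computing trains only a linear readout on top of a fixed nonlinear dynamical system, which is what lets a physical device like the ASN serve as the reservoir. Below is a minimal echo-state-network sketch with a random matrix standing in for the physical reservoir; all sizes and the task are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

# Fixed random "reservoir": only the readout W_out is trained, so a
# physical system (like an ASN) could replace the matrices W_in and W.
n_res, n_in = 200, 1
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.normal(0, 1, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius < 1

def run_reservoir(u: np.ndarray) -> np.ndarray:
    """Drive the reservoir with input u and collect its state trajectory."""
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = np.tanh(W @ x + W_in @ np.atleast_1d(u_t))
        states.append(x.copy())
    return np.array(states)

# Task: predict the next sample of a noisy sine wave.
t = np.linspace(0, 40, 2000)
u = np.sin(t) + 0.05 * rng.normal(size=t.size)
X, y = run_reservoir(u[:-1]), u[1:]

# Ridge-regression readout, computed in closed form.
lam = 1e-6
W_out = np.linalg.solve(X.T @ X + lam * np.eye(n_res), X.T @ y)
print("train MSE:", float(np.mean((X @ W_out - y) ** 2)))
```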
Dr. James K. Gimzewski is a distinguished professor of Chemistry at the University of California, Los Angeles and member of the California NanoSystems Institute. His current research is focused on nanotechnology applied to medicine, artificial intelligence using neuromorphic systems (Atomic Switch Networks) and Atomically Precise Manufacture (APM). Dr. Gimzewski is a Fellow of the Royal Society and Royal Academy of Engineering. He has received honorary Doctorates (PhD hc & DSc hc) from the University of Aix II in Marseille, France and from the University of Strathclyde, Glasgow. He is a Principal Investigator & satellite co-director of the WPI program, MANA, at the National Institute of Materials Science (NIMS) in Tsukuba, Japan. He is currently Scientific director of the UCLA Art|Sci Center.
Prior to joining the UCLA faculty in 2001, he was a group leader at IBM Zurich Research Laboratory, where he conducted research in nanoscale science and technology for more than 18 years. Dr. Gimzewski pioneered research on single atoms and molecules using scanning tunneling microscopy (STM).
Jaroslaw Jung & Krzysztof Halagan
Lodz University of Technology
Title: Technology of Real-World Analyzers (TAUR) and its practical application
Abstract: As a result of many years of research and testing, the concept of the Technology of Real-World Analyzers (in Polish: Technologia Analizatorów Układów Rzeczywistych, TAUR) was developed at Lodz University of Technology [1]. Devices built in this technology can contain up to several hundred million operational cells placed in the nodes of a 3D face-centered cubic (fcc) network. These machines are fully parallel data processing systems equipped with low-latency communication channels, designed for simulations of huge numbers of relatively simple elements working in parallel and interacting locally. The devices can support solving problems from various fields of science and technology, such as molecular simulations, artificial intelligence or data encryption.
Within the TAUR framework, a device called the Analyzer of Real Complex Systems (in Polish: Analizator Rzeczywistych Układów Złożonych, ARUZ) was designed and constructed [2]. This machine is located in BioNanoPark in Łódź, Poland. ARUZ is based on 26,000 reconfigurable FPGA (Field Programmable Gate Array) devices connected in a 3D fcc network and working as signal processing units.
Currently, ARUZ is intensively used to analyze computationally demanding complex macromolecular systems. Simulation of soft matter systems in chemistry and polymer physics (e.g. polymer liquids containing brushes, bottle-brushes, polymer stars and networks) is challenging because it must involve a variety of time scales and a broad range of sizes, from Fickian-like dynamics to segmental motion of chains and mass-center diffusion. A unique simulation method in this field, the Dynamic Lattice Liquid (DLL) model, was proposed by Tadeusz Pakula [3]. The DLL correctly represents the dynamics of the studied molecular structure, taking into account the diffusion restrictions associated with obstacles occurring in the system. The DLL algorithm was implemented on ARUZ, allowing the study of the dynamics of molecular systems on time scales difficult to access (or in some cases unreachable) for existing computing systems.
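The cooperative loop moves of the real DLL algorithm are beyond a short sketch, but the toy model below illustrates the style of computation each ARUZ cell performs in parallel: inspect lattice neighbors, attempt a move, respect excluded volume. Everything here (grid size, filling, acceptance rule) is invented for illustration and is not the DLL algorithm itself:

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy occupancy model on a small periodic 3D grid. NOT the DLL algorithm
# (which moves particles cooperatively along closed loops); it only shows
# the local, fully parallelizable decision each cell makes.
L = 8
occupied = rng.random((L, L, L)) < 0.5          # dense liquid-like filling

# The 12 nearest-neighbor directions of an fcc lattice, as cubic-grid vectors.
fcc_dirs = [(dx, dy, dz) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
            for dz in (-1, 0, 1) if abs(dx) + abs(dy) + abs(dz) == 2]

def attempt_moves(occ: np.ndarray) -> int:
    """One sweep of attempted single-particle moves with excluded volume."""
    moved = 0
    for idx in np.argwhere(occ):
        src = tuple(idx)
        if not occ[src]:                         # already vacated this sweep
            continue
        d = fcc_dirs[rng.integers(len(fcc_dirs))]
        dst = tuple((idx + d) % L)               # periodic boundaries
        if not occ[dst]:                         # move only into empty cells
            occ[src], occ[dst] = False, True
            moved += 1
    return moved

print("accepted moves in one sweep:", attempt_moves(occupied))
```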
Krzysztof Hałagan received his MSc in physics in 2006 and his PhD in chemical technology in 2013, both from Lodz University of Technology. His main fields of interest are computer simulations of molecular systems using Monte Carlo methods, molecular dynamics and quantum calculations. His main research area is the application of the Dynamic Lattice Liquid algorithm to various physicochemical problems such as diffusion, polymerization kinetics and the simulation of soft matter, polymer systems and complex liquids. His current interests cover, in detail, diffusion in polymer systems containing polymer stars, brushes and networks, radical-scavenger reactions in water systems, and active pharmaceutical ingredients mixed with polymers. He is also interested in hardware-related issues accompanying computer modelling, such as dedicated computing devices based on FPGAs and GPUs.
Jarosław Jung received his M.Sc. degree in Physics in 1987 from the University of Lodz and in Electronics in 1990 from the Technical University of Lodz. He received his doctorate in chemistry at the Technical University of Lodz in 2001 and his postdoctoral degree in electronics in 2016. He works at the Technical University of Lodz in the Department of Molecular Physics. His interests include the study of organic semiconductors and photoconductors, organic electronic devices, the construction of electronic devices, and the design of state machines dedicated to the parallel simulation of dense molecular systems.
Antonino Tumeo
Pacific Northwest National Laboratory
Title: The Data-Model Convergence: a case for Software Defined Architectures
Abstract: High Performance Computing, data analytics, and machine learning are often considered three separate and different approaches. Applications, software, and now hardware stacks are typically designed to address only one of the areas at a time. This creates a false distinction between the three areas. In reality, domain scientists need to exercise all three approaches in an integrated way. For example, large-scale simulations generate enormous amounts of data, to which Big Data analytics techniques can be applied. Or, as scientists seek to use data analytics as well as simulation for discovery, machine learning can play an important role in making sense of information from disparate sources. Pacific Northwest National Laboratory is launching a new Laboratory Directed Research and Development (LDRD) Initiative, the Data-Model Convergence (DMC) Initiative, to investigate the integration of the three techniques at all levels of the high-performance computing stack. The DMC Initiative aims to increase scientist productivity by enabling purpose-built software and hardware and domain-aware ML techniques.
In this talk, I will present the objectives of PNNL’s DMC Initiative, highlighting the research that will be performed to enable the integration of vastly different programming paradigms and mental models. I will then make the case for how reconfigurable architectures could represent a great opportunity to address the challenges of DMC. In principle, the ability to dynamically modify the architecture at runtime could address the requirements of workloads whose behavior differs significantly across phases, without losing too much flexibility or programmer productivity with respect to highly heterogeneous architectures composed of a sea of fixed application-specific accelerators. Reconfigurable architectures have been explored for a long time, and arguably new software breakthroughs are required to make them successful. I will thus present the efforts the DMC Initiative is launching to design a productive toolchain for upcoming novel reconfigurable systems. I will then discuss the influences and relations of the DMC Initiative with other ongoing PNNL projects that are exploring methods to accelerate machine learning and data analytics through custom hardware/software stacks and advanced design automation approaches.
Dr. Antonino Tumeo received the M.S. degree in Informatics Engineering in 2005 and the Ph.D. degree in Computer Engineering in 2009 from Politecnico di Milano, Italy. Since February 2011, he has been a research scientist in PNNL’s High Performance Computing group. He joined PNNL in 2009 as a postdoctoral research associate. Previously, he was a postdoctoral researcher at Politecnico di Milano. His research interests are modeling and simulation of high performance architectures, hardware-software codesign, FPGA prototyping and GPGPU computing. Dr. Tumeo is a senior member of the IEEE and of the ACM.
Trilce Estrada
University of New Mexico
Title: Graphic Encoding of Macromolecules for Quantitative Classification of Protein Structure and Representation of Conformational Changes
Abstract: The function of a protein depends on its three-dimensional structure. Computational approaches for protein function prediction, and more generally macromolecular analysis, are limited by the expressiveness and complexity of protein representation formats. Partial structural representations and representations that rely on homology alignments are computationally expensive and do not scale with the number of molecules, as three-dimensional matching is an NP-hard problem. Being able to represent heterogeneous macromolecules in a homogeneous, easy-to-compare, and easy-to-analyze format has the potential to disrupt the way, and the scale at which, molecular analysis is done today. This talk introduces a generalizable and homogeneous representation of macromolecules that explicitly encodes tertiary structural motifs and their relative distance as a proxy for their interaction. The final goal of this encoding is to expose intra- and inter-molecular structural patterns in a scalable way, i.e., without performing alignments, homology calculations, or other expensive operations between pairs, or sets, of proteins. To demonstrate the effectiveness of this encoding, we also present an image processing system based on deep convolutional neural networks that is able to use our graphic representation to perform high-throughput protein function prediction and interpretation of protein folding.
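One simple, widely used way to turn a 3D structure into an alignment-free, image-like input is a pairwise distance map; the sketch below is in the same spirit as, but not identical to, the encoding presented in the talk, and the toy coordinates are invented:

```python
import numpy as np

def distance_map(ca_coords: np.ndarray, cutoff: float = 20.0) -> np.ndarray:
    """Turn N x 3 C-alpha coordinates into an N x N image-like matrix.

    Pairwise distances are clipped and rescaled to [0, 1], so any CNN
    image pipeline can consume the result without alignments. This is
    a generic distance-map encoding, not the talk's exact method.
    """
    diff = ca_coords[:, None, :] - ca_coords[None, :, :]
    dist = np.sqrt((diff ** 2).sum(-1))
    return np.clip(dist, 0, cutoff) / cutoff

# Invented toy "protein": a random walk of 50 residues in 3D.
rng = np.random.default_rng(3)
coords = np.cumsum(rng.normal(scale=1.9, size=(50, 3)), axis=0)
img = distance_map(coords)
print(img.shape, img.min(), img.max())   # (50, 50) image with values in [0, 1]
```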
Trilce Estrada is an associate professor in the Department of Computer Science at the University of New Mexico and the director of the Data Science Laboratory. Her research interests span the intersection of Machine Learning, High Performance Computing, and Big Data, and their applications to interdisciplinary problems in science and medicine. In 2015 Estrada received an NSF CAREER award for her work on in-situ analysis and distributed machine learning. Her work on accurate scoring of drug conformations at the extreme scale won first place at the 2015 IEEE International Scalable Computing Challenge, a competition in which participants demonstrate working solutions to real-world problems that rely on large-scale computing. She was named the 2019 ACM SIGHPC Emerging Woman Leader in Technical Computing. She has chaired multiple mentoring efforts reaching over 500 students, including the PhD Forum and Student Program at IPDPS (2014-2018) and the Mentor-Protégé Program at SC (2019). Dr. Estrada obtained a PhD in Computer Science from the University of Delaware, an M.S. in Computer Science from INAOE, and a B.S. in Informatics from the University of Guadalajara, Mexico.
Wilhelmina Nekoto
Data Engineer, Research Scientist (Computer Vision)
Title: Automated Wildlife Monitoring and Poacher Detection in Namibian Communal Conservancies
Abstract: Manual systems still underpin many Namibian economic sectors, hindering the effective collection, storage and processing of data, and hence informed decision-making on social challenges such as poverty, underdevelopment and crime. Although pushing for the development of supercomputing in a developing country may seem overly ambitious, doing it at the right scale provides the research resources necessary for addressing many global challenges and presents opportunities for economic growth. In this talk we aim to identify applications in which HPC/AI-powered systems can be used for development planning, to impact the growth of the economy and human capital, and to show how new technologies can create and provide datasets for research that will fuel scientific discoveries and provide the key to addressing social and economic challenges in Namibia and beyond.
Wilhelmina Ndapewa Onyothi Nekoto is a graduate software engineer (2016) from Namibia. Prior to engineering, her background was in astrophysics. She is passionate about the role of Machine Learning and Artificial Intelligence in building resilient communities, economic development and wildlife conservation in her home country Namibia and beyond. Wilhelmina trained at the Data Science Retreat in Berlin (2018), with a focus on computer vision. Her recent project idea, “Automated Wildlife Monitoring and Poacher Detection”, earned her a Computer Vision for Global Challenges (CV4GC) award, which she presented during a workshop at the 2019 Computer Vision and Pattern Recognition (CVPR) Conference.
Jay Lofstead
Scalable System Software Sandia National Laboratories
Title: Memory vs. Storage Software and Hardware: The Shifting Landscape
Abstract: New memory technologies, such as the persistent memory modules supported by the latest Intel chips, offer a persistent storage device accessible on the memory bus. NVMe and other high-performance storage devices offer extreme performance at increasingly affordable prices. With these technology shifts, what software we use for different tasks, and what we can afford to do, is changing. This talk will explore how things are changing and where new opportunities to enable science exist.
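A portable sketch of the load/store-style access such devices enable: real persistent-memory code would use a DAX-mounted filesystem or libpmem, while this example approximates the idea with an ordinary memory-mapped file and an invented file name:

```python
import mmap
import os

PATH = "scratch.bin"          # hypothetical file; on a DAX-mounted pmem
SIZE = 4096                   # filesystem, stores would reach media directly

# Create and size the backing file.
with open(PATH, "wb") as f:
    f.truncate(SIZE)

# Map it and update it with ordinary memory writes, not read()/write() calls.
with open(PATH, "r+b") as f:
    with mmap.mmap(f.fileno(), SIZE) as m:
        m[0:5] = b"hello"     # a plain store into the mapping
        m.flush()             # push it to the backing store (like msync)

with open(PATH, "rb") as f:
    print(f.read(5))          # b'hello'
os.remove(PATH)
```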
Dr. Jay Lofstead is a Principal Member of Technical Staff at Sandia National Laboratories. His work focuses on infrastructure to support all varieties of simulation, scientific, and engineering workflows with a strong emphasis on IO, middleware, storage, transactions, operating system features to support workflows, containers, software engineering and reproducibility. He is co-founder of the IO-500 storage list. He also works extensively to support various student mentoring and diversity programs at several venues each year including outreach to both high school and college students. Jay graduated with a BS, MS, and PhD in Computer Science from Georgia Institute of Technology and was a recipient of a 2013 R&D 100 award for his work on the ADIOS IO library.
Jay Lofstead
Scalable System Software Sandia National Laboratories
Title: Containers and Data-Centric Computing
Abstract: Data-centric computing in the cloud frequently uses containerized applications for easier and more flexible deployment. This compute-focused structure works for data-centric computing, but it is not ideal. Provenance and workflow systems track the connection between files and their origins, but this needs to be done using a less fragile approach. By extending the file system and the operating system to incorporate these provenance features, understanding where analyzed and processed data came from becomes easier.
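A minimal sketch of the bookkeeping such provenance support would automate. Here it lives fragilely in user space, with hypothetical file and tool names, which is exactly the layer the talk argues should move into the file system and operating system:

```python
import hashlib
import json
import time

def sha256(path: str) -> str:
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def record_provenance(inputs: list[str], output: str, tool: str) -> dict:
    """Record which inputs (by content hash) produced which output file."""
    record = {
        "output": output,
        "output_sha256": sha256(output),
        "inputs": {p: sha256(p) for p in inputs},
        "tool": tool,
        "timestamp": time.time(),
    }
    # Sidecar file next to the output; an OS-level scheme would make
    # this tamper-resistant and automatic instead.
    with open(output + ".prov.json", "w") as f:
        json.dump(record, f, indent=2)
    return record

# Hypothetical usage after an analysis step:
#   record_provenance(["raw_run42.h5"], "analysis_run42.csv", "my_pipeline v1.0")
```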
Dr. Jay Lofstead is a Principal Member of Technical Staff at Sandia National Laboratories. His work focuses on infrastructure to support all varieties of simulation, scientific, and engineering workflows with a strong emphasis on IO, middleware, storage, transactions, operating system features to support workflows, containers, software engineering and reproducibility. He is co-founder of the IO-500 storage list. He also works extensively to support various student mentoring and diversity programs at several venues each year including outreach to both high school and college students. Jay graduated with a BS, MS, and PhD in Computer Science from Georgia Institute of Technology and was a recipient of a 2013 R&D 100 award for his work on the ADIOS IO library.
Ziogas Alexandros Nikolaos
ETH Zürich
Title: A Data-Centric Approach to Extreme-Scale Ab initio Dissipative Quantum Transport Simulations
Abstract: The computational efficiency of a state-of-the-art ab initio quantum transport (QT) solver, capable of revealing the coupled electro-thermal properties of atomically resolved nano-transistors, has been improved by up to two orders of magnitude through a data-centric reorganization of the application. The approach yields coarse- and fine-grained data-movement characteristics that can be used for performance and communication modeling, communication avoidance, and dataflow transformations. The resulting code has been tuned for two top-6 hybrid supercomputers, reaching a sustained performance of 85.45 Pflop/s on 4,560 nodes of Summit (42.55% of the peak) in double precision, and 90.89 Pflop/s in mixed precision. These computational achievements enable the restructured QT simulator to treat realistic nanoelectronic devices made of more than 10,000 atoms within a 14× shorter duration than the original code needs to handle a system with 1,000 atoms, on the same number of CPUs/GPUs and with the same physical accuracy.
Alex is a PhD student at the Scalable Parallel Computing Laboratory at ETH Zurich, under the supervision of Prof. Torsten Hoefler. His research interests lie in performance optimization and modeling for parallel and distributed computing systems. Recently, he has been working on data-centric representations and optimizations for High-Performance Computing applications. He was awarded the 2019 Gordon Bell prize for his work on optimizing Quantum Transport Simulations.
Eliu Huerta
University of Illinois at Urbana-Champaign
Title: Accelerated Artificial Intelligence for Big-data Physics Experiments
Abstract: The cyberinfrastructure needs of large-scale facilities share a common thread of computational challenges, namely, datasets of ever increasing complexity and volume, and the need for novel signal-processing algorithms to extract trustworthy scientific results in real-time with oversubscribed computational resources. I will discuss the emergence of accelerated artificial intelligence algorithms as a solution to address computational grand challenges in big-data physics experiments.
Eliu Huerta completed a Master of Advanced Study and a PhD in Theoretical Astrophysics at the University of Cambridge, UK. His research pursues the convergence of artificial intelligence, large scale computing and innovative hardware architectures to push back the frontiers of gravitational wave astrophysics, cosmology, multi-messenger astrophysics, medicine and industry. Eliu is the head of the grAvIty Group and the director of the Center for Artificial Intelligence Innovation at the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign.
Piotr Bala
ICM UW
Title: High Performance Services for genetic research
Abstract: Recent progress in genetic research has been possible thanks to progress in experimental techniques. However, such progress would not have been possible without the computational methods developed over the last decades. One of the milestones is BLAST (Basic Local Alignment Search Tool), an essential algorithm that researchers use for sequence alignment analysis. Amongst different implementations of the BLAST algorithm, NCBI-BLAST is the most popular. It can run on a single multithreaded node. However, the volume of nucleotide and protein data is growing fast, making a single node insufficient. Therefore it is increasingly important to develop high-performance computing solutions which could help researchers analyze genetic data in a fast and scalable way.

In this lecture, we will present the execution of the BLAST algorithm on HPC clusters and supercomputers in a massively parallel manner using thousands of processors. The PCJ (Parallel Computing in Java) library has been used to implement the optimal splitting of the input queries, work distribution and search management. It is used with the unmodified NCBI-BLAST package, which is an additional advantage for the users. The resulting application, PCJ-BLAST, is responsible for reading the sequence for comparison, splitting it up and starting multiple NCBI-BLAST executables. Since I/O performance can limit sequence analysis performance, we will also address this problem.

Part of the talk will be dedicated to PCJ, which is the key technology enabling efficient parallelization of BLAST and other algorithms. We will present the general idea of the library as a parallelization tool in the PGAS paradigm as well as implementation details. Examples of successful parallelization of different algorithms from the HPC, Big Data and AI application domains will be provided. We will show that using Java and the PCJ library it is possible to perform sequence analysis using hundreds of nodes in parallel.

Since genomics research is getting significant attention in Europe, we will also present the Human Exposome Assessment Platform (HEAP), a Horizon 2020 project starting 1 January 2020. It will create a global research resource that enables ethical and efficient management and processing of massive data from geographically distributed large-scale population cohorts. A standardized, integrated and generic informatics platform will also be created; it will provide PCJ-BLAST and other highly scalable genetics applications to the wide research community.
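PCJ-BLAST itself is written in Java on top of PCJ, but the core idea of splitting queries and wrapping the unmodified NCBI-BLAST binary can be sketched briefly. The Python below is an illustrative stand-in with hypothetical file and database names, distributing chunks over local processes where PCJ-BLAST distributes them over cluster nodes:

```python
import subprocess
from concurrent.futures import ProcessPoolExecutor

def split_fasta(path: str, n_chunks: int) -> list[str]:
    """Split a FASTA file into n_chunks files on record boundaries."""
    with open(path) as f:
        records = f.read().split("\n>")
    records = [r if r.startswith(">") else ">" + r for r in records if r.strip()]
    chunks = []
    for i in range(n_chunks):
        name = f"{path}.part{i}"
        with open(name, "w") as out:
            out.write("\n".join(records[i::n_chunks]))
        chunks.append(name)
    return chunks

def run_blast(chunk: str) -> str:
    """Run the stock NCBI-BLAST binary on one chunk of queries."""
    out = chunk + ".tsv"
    subprocess.run(["blastn", "-query", chunk, "-db", "nt",
                    "-outfmt", "6", "-out", out], check=True)
    return out

if __name__ == "__main__":
    # "queries.fasta" is a hypothetical input file; a real deployment
    # distributes the chunks across nodes rather than local processes.
    with ProcessPoolExecutor(max_workers=4) as pool:
        results = list(pool.map(run_blast, split_fasta("queries.fasta", 4)))
    print(results)
```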
Piotr Bała is a professor at the Interdisciplinary Centre for Mathematical and Computational Modelling (ICM), University of Warsaw. He received his Ph.D. in physics in 1995. Since 2000 he has led a team at ICM which develops Grid tools and HPC software for molecular biology and medicine. He has been strongly involved in European and national Grid projects, successfully leading work packages and coordinating research. He coordinated ICM’s work in the EMI project, led a work package in the APOS project (an EU-Russia collaboration) and has been involved in PRACE projects. He coordinated a CHIST-ERA project dedicated to the development of parallel computing in Java and coordinated the Chemomentum project, which developed the UNICORE grid middleware. His main research focus is parallel and distributed computing as well as the development of new methods for biomolecular simulations. He has been involved in the development of the UNICORE middleware and its deployment. Currently, he is leading the development of the PCJ library for parallel computing in Java using the PGAS programming paradigm. The PCJ library received an HPCC Award at the Supercomputing Conference SC’14. Currently, PCJ runs on petascale systems, allowing for efficient parallelization of numerous applications.
Prof. Bała has supervised 10 PhD students and is strongly involved in the creation of a master’s degree program at ICM. He has published over 150 scientific papers in leading scientific journals.
Joanna Trylska
Centre of New Technologies University of Warsaw
Title: Functional dynamics of biomolecules with supercomputers
Abstract: Proper association and recognition of biomolecules determine many life-sustaining processes occurring in the crowded environment of cells. In addition, biomolecules are intrinsically flexible, and their dynamics is related to function. For example, flexibility of the ribosome, a protein-synthesis molecular machine, is crucial for the fidelity of this synthesis in cells. Thus, not surprisingly, aminoglycoside antibiotics that affect bacterial protein synthesis hinder the functional flexibility of the ribosome. Therefore, understanding the dynamics of biomolecular recognition is indispensable in any drug design process. Binding of drugs to their intracellular targets requires conformational rearrangements of both binding partners in order to form a tight complex and inhibit the function of the target. Transport of drugs through cellular membranes is also a highly dynamic process.
I will talk about the importance of investigating biomolecular flexibility in drug design and transport. Using HPC resources we apply various molecular modeling methods and molecular dynamics simulations to investigate the processes of diffusion and flexibility of biomolecules. I will show examples of biological systems whose dynamics modulates their function and is crucial for drug design.
Joanna Trylska is a professor and group leader at the Centre of New Technologies of the University of Warsaw. She received her PhD (2001) and DSc (2009) from the Faculty of Physics of the University of Warsaw. In 2003-2005 she worked as a postdoctoral researcher in the group of Professor J. Andrew McCammon, a world-renowned expert in molecular dynamics simulations. With the use of HPC simulations she explained how aminoglycoside antibiotics affect mRNA decoding in different organisms and how the dynamics of HIV-1 protease is correlated with drug resistance. She parameterized the first coarse-grained model for molecular dynamics simulations of the ribosome.
Prof. Trylska’s group develops computational models for simulations of biomolecules and software to study their microsecond-long internal dynamics. The research covers the simulations of functional dynamics of biomolecules, especially in connection with antibacterial compounds. Her group uses supercomputers to simulate the dynamics of proteins and nucleic acids in crowded and membrane environments. Her most recent focus includes the simulations of transport of oligonucleotides to bacterial cells.
Prof. Joanna Trylska has been the recipient of many grants from the Foundation for Polish Science (Team, Focus), National Centre for Research and Development, National Science Centre and National Institutes of Health (USA). She also received the Fulbright Senior Award.
Yuefan Deng
Stony Brook University, NY
Title: Fast and Accurate Multiscale Modeling of Platelets Guided by Machine Learning
Abstract: Multiscale modeling in biomedical engineering is gaining momentum because of progress in supercomputing, applied mathematics, and quantitative biomedical engineering. For example, scientists in various disciplines have been advancing, slowly but steadily, the simulation of blood, including its flow and the physiological properties of such components as red blood cells, white blood cells, and platelets. Platelet activation and aggregation stimulate blood clotting, which results in the heart attacks and strokes that cause nearly 20 million deaths each year. To reduce such deaths, we must discover new drugs. To discover new drugs, we must understand the mechanism of platelet activation and aggregation. Modeling platelet dynamics involves setting the basic space and time discretization over huge ranges of 5-6 orders of magnitude, resulting from the relevant fundamental interactions at atomic, molecular, cell, and fluid scales. To achieve the desired accuracy at minimal computational cost, we must select the correct physiological parameters in the force fields, as well as the spatial and temporal discretization, by machine learning. We demonstrate our results of speeding up a multiscale platelet aggregation simulation by orders of magnitude, while maintaining the desired accuracies, compared with a traditional algorithm that uses the smallest of the temporal and spatial scales in order to capture the finest details of the dynamics. We present our analyses of the accuracies and efficiencies of the representative modeling. We will also outline the general methodologies of multiscale modeling of cells at atomic resolutions guided by machine learning.
Yuefan Deng earned his BA (1983) in Physics from Nankai University and his Ph.D. (1989) in Theoretical Physics from Columbia University. He is currently a professor (since 1998) of applied mathematics and the associate director of the Institute of Engineering-Driven Medicine at Stony Brook University in New York. Prof. Deng’s research covers parallel computing, molecular dynamics, Monte Carlo methods, and biomedical engineering. His latest focus is on the multiscale modeling of platelet activation and aggregation (funded by the US NIH) on supercomputers, parallel optimization algorithms, and supercomputer network topologies. He publishes widely in diverse fields of physics, computational mathematics, and biomedical engineering. He has received 13 patents.
Michael Bader
Technical University of Munich
Title: Towards Exascale Hyperbolic PDE Engines – Are We Skating to Where the Puck Will Be?
Abstract: Exascale supercomputers are “ante portas”. They pose novel challenges to simulation software: how to cope with heterogeneous performance of compute resources, with increased failure rates and the respective demands on resiliency, or with scalability across multiple levels of parallelism that at the same time suffer from unpredictable performance. CPU/GPU power will continue to outperform memory and communication hardware, which calls for algorithms that feature high arithmetic intensity and minimize data movement across the entire memory hierarchy. Can we successfully develop simulation software that will run efficiently on the supercomputers of 2030?
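The arithmetic-intensity argument can be made concrete with the standard roofline model; the hardware numbers and kernel labels below are illustrative, not those of any specific machine or of ExaHyPE's kernels:

```python
# Roofline model: attainable Gflop/s = min(peak_flops, AI * mem_bandwidth),
# where AI (arithmetic intensity) = flops performed per byte moved.
# Hardware numbers are illustrative, not a specific machine.

peak_gflops = 7_000       # e.g., a strong accelerator
mem_bw_gbs = 900          # memory bandwidth in GB/s

def attainable_gflops(ai_flops_per_byte: float) -> float:
    return min(peak_gflops, ai_flops_per_byte * mem_bw_gbs)

for ai, kernel in [(0.25, "stream-like stencil update"),
                   (4.0, "low-order FEM kernel"),
                   (40.0, "high-order DG element kernel")]:
    print(f"AI={ai:5.2f} flop/B ({kernel}): {attainable_gflops(ai):7.1f} Gflop/s")
# Dense, element-local operations such as those of high-order discontinuous
# Galerkin schemes push AI toward the compute-bound side of the roofline,
# which is one reason they are attractive for exascale hardware.
```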
This talk will present recent work to better prepare simulation software for exascale, focusing on two concrete packages: ExaHyPE and SeisSol. ExaHyPE is an engine for solving hyperbolic PDE systems. While it provides flexibility with respect to the tackled PDEs, it focuses on high-order discontinuous Galerkin discretization (with a-posteriori Finite-Volume-based limiting) on tree-structured Cartesian meshes as the underlying numerical scheme. I will outline ExaHyPE’s code generation approach to tailoring the engine to the different needs of application, algorithm and code optimisation experts, and will highlight a fine-grain task-offloading strategy that can respond to performance fluctuations in hardware and combines with an MPI rank-replication approach. Similarly, I will present recent work on the earthquake simulation package SeisSol. SeisSol allows several modelling variants for seismic wave propagation and earthquake sources. It relies on code generation for its performance-critical element-local small tensor operations, to cope with the complexity of realising several model variants on different hardware.
Michael Bader is associate professor at the Technical University of Munich. He works on hardware-aware algorithms in computational science and engineering and in high performance computing. In particular, he focuses on challenges imposed by latest supercomputing platforms, and the development of suitable efficient and scalable algorithms and software for simulation tasks in science and engineering.
Rob Knight
University of California San Diego
Title: The Human Microbiome: Big Challenges, Big Data, Big Compute
Abstract: Our ability to sequence DNA has improved by many orders of magnitude in speed and cost over the past 15 years. This enabling technology allows us to read out the human microbiome (the genes of the bacteria, fungi, viruses, and other tiny organisms that live on and in our bodies) on a massive scale, associating these microbes with a range of diseases from the expected (gut microbes are involved in inflammatory bowel disease) to the surprising (they are also involved in autism). Much of the data interpretation relies on shared resources backed by large-scale compute. Here I discuss our rebuilding of the tree of life on SDSC’s Comet and our efforts to predict the structures of all the proteins in the gut microbiome on IBM’s World Community Grid, and I discuss the implications of these resources for building a user interface to our microbiomes that allows us to shape them for life-long health.
Rob Knight is the founding Director of the Center for Microbiome Innovation and Professor of Pediatrics, Bioengineering, and Computer Science & Engineering at UC San Diego. Before that, he was Professor of Chemistry & Biochemistry and Computer Science in the BioFrontiers Institute of the University of Colorado at Boulder, and an HHMI Early Career Scientist. He is a Fellow of the American Association for the Advancement of Science and of the American Academy of Microbiology. In 2015 he received the Vilcek Prize in Creative Promise for the Life Sciences. In 2017 he won the Massry Prize, often considered a predictor of the Nobel Prize. He is the author of “Follow Your Gut: The Enormous Impact of Tiny Microbes” (Simon & Schuster, 2015), coauthor of “Dirt is Good: The Advantage of Germs for Your Child’s Developing Immune System” (St. Martin’s Press, 2017) and spoke at TED in 2014. His lab has produced many of the software tools and laboratory techniques that enabled high-throughput microbiome science, including the QIIME pipeline (cited over 16,000 times as of this writing) and UniFrac (cited over 7,000 times). He is co-founder of the Earth Microbiome Project, the American Gut Project, and the company Biota, Inc., which uses DNA from microbes in the subsurface to guide oilfield decisions. His work has linked microbes to a range of health conditions including obesity and inflammatory bowel disease, has enhanced our understanding of microbes in environments ranging from the oceans to the tundra, and has made high-throughput sequencing techniques accessible to thousands of researchers around the world. Dr. Knight can be followed on Twitter (@knightlabnews) or on his web site https://knightlab.ucsd.edu/.
Happy Sithole
The Council for Scientific and Industrial Research
Title: The role of Cyber-Infrastructure in Development of Africa
Abstract: South Africa has invested in cyber-infrastructure since 2006 and has developed the National Integrated Cyber-Infrastructure System (NICIS), which provides high-speed networks, large-scale data research capabilities and high performance computing. In an endeavour to ensure that these facilities have maximum impact in Africa, South Africa has developed programs to include the rest of the African continent in cyber-infrastructure development. In this talk, I will outline the developmental challenges the country faced and how they were addressed. I will also provide examples of success in industry facilitation using HPC and access to data. The talk will also cover some of the developments in other African countries, as well as technology options and strategies for HPC technology choices and the rationale behind those decisions.
Dr. Happy Sithole is the Center Manager of the National Integrated Cyber-Infrastructure System (NICIS). Among his responsibilities is overseeing the development of High Performance Computing in the country through the Center for High Performance Computing (CHPC), the roll-out of broadband connectivity for all science councils and universities through SANReN, and long-term data management for the research community through DIRISA. Dr. Sithole has a PhD in Materials Science focused on mineral extraction schemes using large-scale simulations. He has applied these simulation techniques in the diamond mining industry, where he worked as a Process Optimisation specialist. He has also worked in nuclear power plant design as a Senior Process Engineer. His work in High Performance Computing includes strategic development of HPC as well as technical-level design of HPC systems. He is passionate about application performance on HPC systems and considers HPC systems development to be driven by application requirements. He has pioneered the development of skills in South Africa and the continent. Dr. Sithole supports mega-science projects such as the SKA and LHC. He sits on steering committees for HPC in different countries and is a Board Member of the National Library of South Africa, where he chairs the ICT Committee. Dr. Sithole is also a member of the Steering Committees of ISC and SC. He also participates in the work stream focusing on Infrastructure and Resources of the Presidential Commission on the Fourth Industrial Revolution. He has presented invited and plenary talks at many meetings and in many countries.
Laura Boykin
Cassava Virus Action Project
Title: Utility of Real time portable genome sequencing and HPC for global food security
Abstract: Portable DNA sequencing technology has great potential to reduce the risk of community crop failure and help improve the livelihoods of millions of people, especially in low-resourced communities. Crop losses due to plant viral diseases and pests are major constraints on food security and income for millions of households in sub-Saharan Africa (SSA). Such losses can be reduced if plant diseases and pests are correctly diagnosed and identified early. Currently, accurate diagnosis for definitive identification of plant viruses and their vectors in SSA mostly relies on standard PCR and next generation sequencing (NGS) technologies. However, it can take up to 6 months before results generated using these approaches are available. The long time taken to detect or identify viruses impedes the quick within-season decision making necessary for early action, crop protection advice and disease control measures by farmers. This ultimately compounds the magnitude of crop losses and food shortages suffered by farmers. For the first time globally, the MinION portable pocket DNA sequencer was used to sequence whole plant virus genomes on the farm in Uganda, Tanzania and Kenya, reducing the time to diagnosis to 3 hours. In this talk I will outline how we have used Oxford Nanopore portable genomics technology to identify the begomoviruses causing the devastating cassava mosaic disease, which is ravaging smallholder farmers’ crops in sub-Saharan Africa, leaving millions food insecure. I will also cover the gaps in computing we have identified in our on-farm genomic sequencing, in the hope of engaging with Supercomputing Frontiers Europe 2020 attendees in the future.
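At its computational core, this kind of on-farm diagnosis means matching nanopore reads against reference virus genomes. The Python sketch below is purely illustrative of that matching step, under stated assumptions: the reference sequences are invented stand-ins (only the virus names ACMV and EACMV are real begomoviruses), and a simple shared-k-mer score replaces the production-grade aligners and reference databases a real pipeline would use.

    # Illustrative k-mer read classifier (toy sequences; a real on-farm
    # pipeline aligns reads against full begomovirus reference genomes
    # with established tools rather than this simplistic matching).

    def kmers(seq, k=8):
        """Return the set of all overlapping k-mers of a DNA sequence."""
        return {seq[i:i + k] for i in range(len(seq) - k + 1)}

    # Hypothetical reference fragments standing in for cassava mosaic
    # begomovirus genomes (the sequences are invented for this example).
    references = {
        "ACMV":  "ATGTCGAAGCGACCAGCAGATATCATCATCTCCACG",
        "EACMV": "ATGGCGTCCCGCAAGCGTCGTGCTGATATTGTCATC",
    }
    ref_index = {name: kmers(seq) for name, seq in references.items()}

    def classify(read, k=8):
        """Assign a read to the reference sharing the most k-mers."""
        read_kmers = kmers(read, k)
        scores = {name: len(read_kmers & idx) for name, idx in ref_index.items()}
        best = max(scores, key=scores.get)
        return best if scores[best] > 0 else "unclassified"

    # A made-up nanopore read drawn from the ACMV fragment above.
    print(classify("TCGAAGCGACCAGCAGATATC"))  # -> ACMV

Even in this toy form, the sketch shows why the computing gaps mentioned in the abstract matter: the reference index must either travel with the sequencer or be reachable from the field, and classification has to keep pace with reads arriving in real time to preserve the 3-hour turnaround.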
Dr. Laura Boykin is a TED Senior Fellow (2017), Gifted Citizen (2017) and a computational biologist who uses genomics and supercomputing to help smallholder farmers in sub-Saharan Africa control whiteflies and the associated viruses that have devastated local cassava crops. She was recently (2018) awarded an Honorary Doctorate from the Open University in the UK for her work on whiteflies, viruses, and inclusion and diversity in science. Boykin also works to equip African scientists with a greater knowledge of genomics and high-performance computing skills to tackle future insect outbreaks. Boykin completed her PhD in Biology at the University of New Mexico while working at Los Alamos National Laboratory in the Theoretical Biology and Biophysics group, and is currently a Senior Research Fellow at the University of Western Australia. She was invited to present her lab’s research on whiteflies at the United Nations Solution Summit in New York City for the signing of the Sustainable Development Goals to end extreme poverty by 2030. The team’s latest work to bring portable DNA sequencing to east African farmers has been featured on CNN, BBC World News, BBC Swahili, BBC Technology News, and the TED Fellows Ideas Blog. For more information: www.lauraboykinresearch.com or www.cassavavirusactionproject.com.
Calista Redmond
Chief Executive Officer, RISC-V Foundation
Title: The Reality and Tremendous Opportunity of Custom, Open Source Processing
Abstract: The growth of human and business interaction with technology continues to explode. At the literal heart of that technology sits a silicon core, combined with general and specific instructions and connections. Enormous cost, risk, development time, and volume requirements, together with limited computing demands, long kept the lucrative chip opportunity within reach of just a handful of companies, focused mostly on general purpose processors. New computing needs across various power and performance dimensions have increased demand and competition for custom processors. This pressure is quietly and rapidly disrupting the processor industry. An open source approach to processors now reduces risk and investment, accelerates time to market, and opens the opportunity for thousands of possible custom processors. Learn about the trends, opportunities, and examples — from smart watches to supercomputers!
Calista Redmond is the CEO of the RISC-V Foundation, with a mission to expand and engage RISC-V stakeholders, compel industry adoption, and increase visibility and opportunity for RISC-V within and beyond the Foundation. Prior to the RISC-V Foundation, Calista held a variety of roles at IBM, including Vice President of IBM Z Ecosystem, where she led strategic relationships across software vendors, system integrators, business partners, developer communities, and broader engagement across the industry. Focus areas included execution of commercialization strategies, technical and business support for partners, and matchmaking for opportunities across the IBM Z and LinuxONE community. Calista’s background includes building and leading strategic business models within IBM’s Systems Group through open source initiatives including OpenPOWER, OpenDaylight, and the Open Mainframe Project. For OpenPOWER, Calista was a leader in drafting the strategy, cultivating the foundation of partners, and nurturing strategic relationships to grow the organization from zero to 300+ members. While at IBM, she also drove numerous acquisition and divestiture missions, and several strategic alliances. Prior to IBM, she was an entrepreneur in four successful start-ups in the IT industry. Calista holds degrees from the University of Michigan and Northwestern University.
Florin Manaila
Senior AI Architect and Inventor at IBM Systems Hardware Europe
Title: HPC Transformation with AI
Abstract: The session will present the HPC challenges of bringing machine learning and deep learning into simulations. In addition, we will give a user-centric view of IBM Watson ML Community Edition and of the adoption of the new IBM IC922 inference system into the AIOps of large HPC clusters (from deployment to inference).
Florin leads enterprise transformation through the adoption of High-Performance Computing, Distributed Deep Learning and emerging technologies, helping organizations rapidly transform the way they operate, solve problems, and gain competitive advantage. He is responsible for the performance, availability, and scalability of Cognitive Systems infrastructure. He created the IBM Distributed Deep Learning reference architecture as well as important industry blueprints of applied AI, including edge fabric. His client experience covers EU commercial customers and government civil and defense agencies. He is passionate about in-memory computing and Spiking Neural Networks.
Tomasz Malkiewicz
CSC – IT Center for Science and Nordic e-Infrastructure Collaboration (NeIC)
Title: LUMI: the EuroHPC pre-exascale system of the North
Abstract: The EuroHPC initiative is a joint effort by the European Commission and 31 countries to establish a world-class supercomputing ecosystem in Europe (read more at https://eurohpc-ju.europa.eu/). One of its first concrete efforts is to install the first three “precursor to exascale” supercomputers. Finland, together with 8 other countries from the Nordics and central Europe, will collaboratively host one of these systems in Kajaani, Finland. This system, LUMI, will be one of the most powerful and advanced computing systems on the planet at the time of its installation. The vast consortium of countries with an established tradition in scientific computing and strong national computing centers will be a key asset for the success of the infrastructure. In this talk we will discuss the LUMI infrastructure and its great value and potential for the research community.
Julian Kunkel, Jean-Thomas Acquaviva, Jay Lofstead, Ziogas Alexandros Nikolaos
Title: Opportunities for Data-Centric Computing
Abstract: The efficient, convenient, and robust execution of data-driven workflows is key for productivity in scientific computing and computer-aided RD&E. The HPC community still optimizes the compute and storage components independently from each other and, moreover, independently from the needs of end-to-end user workflows that ultimately lead to insight. The efficient management of data and compute capabilities in a heterogeneous environment enables the exploitation of alternative hardware architectures and infrastructures and, most importantly, of heterogeneous environments that stretch beyond a single data center and into the cloud.
In this session, we discuss challenges, visions and ongoing development for the data-centric computing environments of the future that give the fastest time to insight. This includes lifting the workflow abstraction to a higher level and then applying concepts like smart scheduling to entire workflows, e.g., to minimize data movement across the workflow as a whole. The higher-level workflow formulation has implications for scientists but also for data-center planning, and ultimately enables smarter hardware and software layers.
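To make the smart-scheduling idea concrete, here is a minimal Python sketch, assuming invented sites, datasets and sizes (none of this comes from the session or from any production system): each task is placed on the site that already holds most of its input bytes, so data movement is minimized for the workflow as a whole rather than task by task.

    # Minimal sketch of data-movement-aware workflow placement
    # (hypothetical sites, datasets and sizes; illustration only).

    # dataset -> (site where it currently resides, size in GB)
    data = {"raw": ("hpc", 500.0), "model": ("cloud", 2.0)}

    # workflow tasks in execution order:
    # (task name, input datasets, output dataset, output size in GB)
    workflow = [
        ("clean",  ["raw"],              "cleaned", 400.0),
        ("train",  ["cleaned", "model"], "weights",   1.0),
        ("report", ["weights"],          "report",    0.01),
    ]

    def best_site(inputs):
        """Choose the site that already stores the most input bytes."""
        local_bytes = {}
        for ds in inputs:
            site, size = data[ds]
            local_bytes[site] = local_bytes.get(site, 0.0) + size
        return max(local_bytes, key=local_bytes.get)

    total_moved = 0.0
    for task, inputs, output, out_size in workflow:
        site = best_site(inputs)
        moved = sum(data[ds][1] for ds in inputs if data[ds][0] != site)
        total_moved += moved
        data[output] = (site, out_size)  # outputs stay where produced
        print(f"{task}: run on {site}, {moved:.2f} GB transferred")

    print(f"total data moved: {total_moved:.2f} GB")

The benefit of the whole-workflow view is visible even in this toy: a per-task policy that always trained in the cloud would ship the 400 GB intermediate dataset, whereas the workflow-aware placement moves only the 2 GB model.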
We have gathered a range of stakeholders from industry and academia who will give short presentations revolving around this topic, with the ultimate goal of supporting the community with a new forum that addresses these challenges.
Central European Time (UTC+1)

Opening
9:00 — 9:10  Opening Remarks from the Chairman of the Organising Committee – Marek Michalewicz, ICM UW, Poland

Physics, Astronomy, Cosmology
9:10 — 10:15  A Data-Centric Approach to Extreme-Scale Ab initio Dissipative Quantum Transport Simulations – Ziogas Alexandros Nikolaos, ETH Zürich
10:10 — 10:45  Accelerated Artificial Intelligence for Big-data Physics Experiments – Eliu Huerta, University of Illinois at Urbana-Champaign
10:45 — 11:10  New features of superfluidity far from equilibrium: nuclear reactions, dynamics of ferrons and quantum vortices in ultracold gases – Piotr Magierski, M.C. Barton, A. Bulgac, S. Jin, K. Kobuszewski, P. Kulinski, K.J. Roche, K. Sekizawa, B. Tuzemen, G. Wlazlowski
11:10 — 11:20  Numerical tests of HARM simulations – Bestin James and Agnieszka Janiuk

HPC in Africa
11:20 — 11:55  The role of Cyber-Infrastructure in Development of Africa – Happy Sithole, The Council for Scientific and Industrial Research
11:55 — 12:30  Automated Wildlife Monitoring and Poacher Detection in Namibian Communal Conservancies – Wilhelmina Nekoto, Data Engineer, Research Scientist (Computer Vision)
12:30 — 12:50  Ethiopia: The formation of a digital powerhouse of Africa – Uros Ignjacevic, General Manager of Sun Data World

12:50 — 13:10  LUNCH

13:10 — 13:45  Optimising AI training deployments using Graph compilers and containers – Karthee Sivalingam, Alfio Lazzaro, Nina Mujkanovic, HPE HPC/AI EMEA Research Lab

Multi- and Large Scale Modelling
13:45 — 14:20  Fast and Accurate Multiscale Modeling of Platelets Guided by Machine Learning – Yuefan Deng, Stony Brook University, NY
14:20 — 14:45  PIConGPU Performance and Scaling Results on Summit – Rene Widera, Sergei Bastrakov, Alexander Debus, Marco Garten, Richard Pausch, Klaus Steiniger, Michael Bussmann and Axel Huebl
14:45 — 15:10  Templated CUDA Lattice Boltzmann Method: generic CFD solver for single and multi-phase problems – Michał Dzikowski and Grzegorz Gruszczyński
15:10 — 15:20  The influence of granular layer on the stick-slip dynamics of sheared fault gouges modelled with the Discrete Element Method – Piotr Klejment
15:20 — 15:30  Computational research on metal-ligand bonds stability limiting factors: the case of Rh(IX)O4+ and Rh(IX)NO3 – Mateusz Domański, Łukasz Wolański, Paweł Szarek and Wojciech Grochala
15:30 — 16:05  Towards Exascale Hyperbolic PDE Engines – Are We Skating to Where the Puck Will Be? – Michael Bader, Technical University of Munich
16:05 — 16:30  LUMI: the EuroHPC pre-exascale system of the North – Tomasz Malkiewicz, CSC – IT Center for Science and Nordic e-Infrastructure Collaboration (NeIC)
Central European Time (UTC+1)

Omics
9:00 — 10:00  The Human Microbiome: Big Challenges, Big Data, Big Compute – Rob Knight, University of California San Diego
10:00 — 10:35  Single cell transcriptomics – new challenges for Big Data analytics – Vladimir Brusic, School of Computer Science, University of Nottingham Ningbo China
10:35 — 11:10  Graphic Encoding of Macromolecules for Quantitative Classification of Protein Structure and Representation of Conformational Changes – Trilce Estrada, University of New Mexico
11:10 — 11:45  Utility of Real time portable genome sequencing and HPC for global food security – Laura Boykin, Cassava Virus Action Project
11:45 — 12:20  Functional dynamics of biomolecules with supercomputers – Joanna Trylska, Centre of New Technologies, University of Warsaw
12:20 — 12:30  Design of selective TrmD inhibitors – Adam Stasiulewicz, Bartosz Trzaskowski and Joanna Sułkowska

12:30 — 12:50  LUNCH

12:50 — 13:05  HPC Adventures: Warsaw Team around the world
13:05 — 13:40  HPC Transformation with AI – Florin Manaila, Senior AI Architect and Inventor, IBM

HPC services, provisioning and delivery
13:40 — 14:00  Tomorrow’s Supercomputers, Yesterday’s Practices: Applications of cloud-backed large system emulation for supercomputer software development – Łukasz Orłowski, Co-Founder and CTO of Archanan
14:00 — 14:20  Constellation® – Supercomputing at your fingertips – Delivering HPC power and expertise to all – Nicolas Tonello, Constelcom Ltd
14:20 — 14:30  A hybrid HPC and Cloud platform for multidisciplinary scientific application – Marian Bubak, Jan Meizner, Piotr Nowakowski, Martin Bobak, Ondrej Habala, Ladislav Hluchy, Viet Tran, Adam Belloum, Reginald Cushing, Maximilian Höb, Dieter Kranzlmüller and Jan Schmidt
14:30 — 14:40  Cyberinfrastructure Resource Integration: Advancing Local Cyberinfrastructure Through Community Best Practices – Richard Knepper
14:40 — 15:05  To be announced – Rick Koopman, Lenovo

Bioinformatics
15:05 — 15:25  High Performance Services for genetic research – Piotr Bala, ICM UW
15:25 — 15:35  Ligand-dependent activity of an aminoglycoside riboswitch – Marta Kulik, Takaharu Mori, Yuji Sugita and Joanna Trylska
15:35 — 15:45  Mechanism of transport of vitamin B12-peptide nucleic acids through the outer membrane of E. coli – Tomasz Pieńko and Joanna Trylska
15:45 — 15:55  Mutations affect the dynamics of an aminoglycoside riboswitch – Piotr Chyży, Marta Kulik, Suyong Re, Yuji Sugita and Joanna Trylska

13:30 — 16:30  Tutorial: OpenPOWER and POWER9 – Presenter: Florin Manaila
Central European Time (UTC+1)

9:00 — 10:00  Emergent Atomic Switch Networks for Neuroarchitectonics – James K. Gimzewski, University of California Los Angeles

Data Communication, Distributed Computing
10:00 — 10:35  Towards the Quantum Internet: Building an entanglement-sharing quantum network – Eden Figueroa, Stony Brook University
10:35 — 10:55  The historical 1st Poland-Singapore data transfer production trial over CAE-1, a behind-the-scenes look – Chin Fang, CEO of Zettar Inc.
10:55 — 11:20  Long distance geographically distributed computing cluster and High Performance ParalleX – Karol Niedzielewski, Marcin Semeniuk, Jarosław Skomiał, Jerzy Proficz, Piotr Sumionka, Bartosz Pliszka and Marek Michalewicz

Architecture
11:20 — 11:55  The Reality and Tremendous Opportunity of Custom, Open Source Processing – Calista Redmond, Chief Executive Officer, RISC-V Foundation
11:55 — 12:30  GRAPE Supercomputer and Biosystems computational research – Makoto Taiji, RIKEN Center for Biosystems Dynamics Research, Osaka
12:30 — 13:05  Technology of Real-World Analyzers (TAUR) and its practical application – Jarosław Jung and Krzysztof Hałagan, Lodz University of Technology
13:05 — 13:15  ARUZ – fully parallel FPGA-based data processing system – Rafał Kiełbik, Krzysztof Hałagan, Jarosław Jung and Zbigniew Mudza
13:15 — 13:25  Evaluation of ARM based system for HPC workloads, a case study – Maciej Pawlik, Klemens Noga, Maciej Czuchry, Jacek Budzowski, Łukasz Flis, Patryk Lasoń, Marek Magryś and Michał Sterzel

13:25 — 13:45  LUNCH

13:45 — 14:40  Vector Evolution – The path to the SX-Aurora TSUBASA – Erich Focht, NEC Germany
14:40 — 15:05  Coarse-Grained Approach for Reconfigurable Logic in High Performance Computing Systems – Zbigniew Mudza
15:05 — 15:30  The Data Vortex: From Interbellum Polish Mathematics to a Novel Topology for Connecting Cores – Reed Devany, Coke Reed, Santiago Betelu and Michael Ives
15:30 — 15:55  Checkpoint/Restart Implementation for OpenSHMEM – Delafrouz Mirfendereski, Barbara Chapman, Tony Curtis and Md Abdullah Shahneous Bari

Data-centric Computing, Storage
15:55 — 16:30  The Data-Model Convergence: a case for Software Defined Architectures – Antonino Tumeo, Pacific Northwest National Laboratory
16:30 — 17:05  Memory vs. Storage Software and Hardware: The Shifting Landscape – Jay Lofstead, Scalable System Software, Sandia National Laboratories

Special Thematic Session: Opportunities for Data-Centric Computing
17:05 — 17:25  Potential of I/O-Aware Workflows in Climate and Weather – Julian Kunkel
17:25 — 17:37  I/O Challenges in Data-Centric Compute Framework – Ziogas Alexandros Nikolaos
17:37 — 17:54  Containers and Data-Centric Computing – Jay Lofstead

Closing
17:54 — 18:00  Closing words – Marek Michalewicz, Interdisciplinary Centre for Mathematical and Computer Modeling (ICM), University of Warsaw, Poland
9:00 — 13:00
316 / 3*  Introduction to deep neural networks with PyTorch – Wojciech Rosinski (ICM), Lukasz Gorski (ICM), Norbert Kapinski (ICM)
254 / 2*  Introduction to quantum programming – Jaroslaw Miszczak (PAS)
315 / 3*  Introduction to scientific computing using the Julia language – Bogumił Kamiński (SGH), Przemysław Szufel (SGH)
256 / 2*  Introduction to the NEC SX-Aurora TSUBASA Vector Engine – Erich Focht (NEC Deutschland GmbH), Nicolas Weber (NEC)

13:00 — 14:00  LUNCH

14:00 — 18:00
316 / 3*  Introduction to GPU programming using CUDA framework – Michał Dzikowski (ICM), Grzegorz Gruszczyński (ICM)
315 / 3*  Parallel computing using the Julia language – Bogumił Kamiński (SGH), Przemysław Szufel (SGH)
256 / 2*  Advanced scientific visualization with VisNow platform – Bartosz Borucki (ICM), Krzysztof Nowiński (ICM)
* – room/level
Go to the tutorials programme to see all available tutorial topics.
VIRTUAL ICM SEMINARS AFTER SCFE20
The series of Virtual ICM Seminars in Computer and Computational Science will return in the new academic year 2020/2021.