Important milestone reached for SNIC Science Cloud!

Our work on the SNIC Science Cloud is progressing at a steady rate, and we are very happy to announce that we have reached an important milestone: over 50 active projects with more than 200 users from a range of disciplines. As the figure shows, the number of projects has been growing steadily, at an average of three per month. Early in the project we were driven by pre-defined pilot use-cases, but for some time now users have been finding us rather than the other way around. The question now is what will happen later in 2016 and in 2017: will the linear trend continue, or will we see a more rapid increase in project requests?

Number of active projects on the SSC IaaS resources as registered in SUPR, the SNIC project management portal. The projects vary in size, and in total include over 200 users.

The IaaS is still based on best-effort and in-kind contributions from three SNIC centers (UPPMAX, C3SE and HPC2N). We are currently working hard on implementing more extensive accounting and gathering data on resource usage patterns, and we will follow up with a more in-depth analysis later this year. This is an important activity in our efforts to better understand the operational costs and constraints of a cloud for science, and it will form a basis for, e.g., informed capacity planning. We plan to make all data publicly available in the hope that it will help other academic institutions in their e-infrastructure strategy work.

We have organized three workshops, and more will come in the fall. The aim of the workshops is to help the user community make productive use of clouds in their research. The first workshop was held at the Department of Information Technology, Uppsala University (September 2015), the second at SciLifeLab, Stockholm (March 2016), and the third, most recently, at the Royal Institute of Technology (KTH), Stockholm (May 2016). The first two workshops introduced cloud computing and how to get started with OpenStack-based IaaS. The third workshop addressed advanced concepts of virtualization, contextualization based on cloud-init and Ansible, and orchestration using Heat.
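To give a flavor of the contextualization step covered in the third workshop, here is a minimal, hypothetical sketch of booting a VM with a cloud-init script through the OpenStack SDK for Python. The cloud name, image, flavor and network are illustrative placeholders, not actual SSC resources.

```python
# Hypothetical sketch: boot a VM and contextualize it with cloud-init,
# using the OpenStack SDK for Python. All names (cloud, image, flavor,
# network) are placeholders, not actual SSC resources.
import base64
import openstack

conn = openstack.connect(cloud="ssc")  # credentials read from clouds.yaml

# cloud-init user data: install git and fetch the training material on first boot
user_data = """#cloud-config
packages:
  - git
runcmd:
  - git clone https://github.com/SNICScienceCloud/technical-training.git
"""

image = conn.compute.find_image("ubuntu-16.04")     # placeholder image name
flavor = conn.compute.find_flavor("ssc.small")      # placeholder flavor name
network = conn.network.find_network("private-net")  # placeholder network name

server = conn.compute.create_server(
    name="training-vm",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
    user_data=base64.b64encode(user_data.encode()).decode(),  # Nova expects base64
)
server = conn.compute.wait_for_server(server)
print("Server status:", server.status)
```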

Our senior cloud architect and UPPMAX application expert on cloud computing, Salman Toor, lecturing about cloud computing at KTH. In this advanced-level workshop, contextualization, orchestration and automation were on the agenda.

SSC aims to be a very open project, sharing as much as possible of our developed content and practices as open source. In line with this, all the tutorial material is available for download, reuse and, importantly, contributions from our GitHub repository:

https://github.com/SNICScienceCloud/technical-training.git

Pull requests are much appreciated.

Have a nice summer!

SSC Team

Using cloud computing for estimating failure probabilities with applications in underground porous media flows

In this guest post, Fredrik Hellman, a PhD student at the Division of Scientific Computing, Department of Information Technology, Uppsala University, reports on how cloud computing resources in SSC were used in recent work with collaborators at UU and Chalmers/GU.

In many engineering applications the probability of system failure is of particular interest. One such application is the assessment of the storage capacity of underground carbon dioxide storage reservoirs, where a failure means that the capacity of the target reservoir is smaller than expected. Since the rock properties are generally uncertain, the uncertainty in the reservoir capacity is also large.

The SNIC Science Cloud was used in our work to assess the performance of four different Monte Carlo method setups for estimating the failure probability in a porous media fluid flow simulation with uncertain rock properties. For all four methods, the basic algorithm was to generate a set of realizations of the uncertain rock properties and distribute the work of simulating each realization over a network of virtual machines in the SNIC Science Cloud. All algorithms thus exhibit single program, multiple data (SPMD) parallelism.


The code performing the simulations was written in Python, using finite element assembly routines from the FEniCS project. The project benefited from using a cloud-based service for two main reasons. First, virtualization allowed for good control over the software environment: experimental versions of software could easily be used without administrative overhead. Second, the IPython-based MOLNs software for setting up and managing a virtual computing network for distributed computations was readily available and simplified the management of the computations.
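To make the SPMD pattern concrete, the following is a minimal sketch of the distribution step using ipyparallel, the IPython machinery that MOLNs builds on. The simulation body is a stand-in for the actual FEniCS flow solve, and all names and parameters are illustrative.

```python
# Minimal sketch of the SPMD Monte Carlo pattern described above, using
# ipyparallel (the IPython machinery underlying MOLNs). The simulation
# body and the failure threshold are illustrative stand-ins.
import ipyparallel as ipp

rc = ipp.Client()               # connect to the virtual computing network
view = rc.load_balanced_view()  # distribute tasks over available engines

def simulate(seed):
    """Run one flow simulation for one realization of the rock properties."""
    import numpy as np
    rng = np.random.RandomState(seed)
    permeability = rng.lognormal(mean=0.0, sigma=1.0)  # stand-in random field
    capacity = 1.0 / permeability                      # stand-in for the PDE solve
    return capacity

n_realizations = 10000
capacities = view.map_sync(simulate, range(n_realizations))

threshold = 0.5  # illustrative failure threshold
p_fail = sum(c < threshold for c in capacities) / n_realizations
print("Estimated failure probability: %.4f" % p_fail)
```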

Dedicated support channel

As part of our efforts to move towards a production-grade setup, both for the infrastructure and the surrounding administration, we have now set up a dedicated support email:

support@cloud.snic.se

Please direct your support requests there so that they are seen by all members of the cloud team.

Mini-workshop on microservice platforms

The Spjuth group is organizing a mini-workshop on microservice platforms on May 12 with talks 13:15-15:00 in ITC 2345, open to the public. SSC’s senior cloud architect Salman Toor will give a brief presentation about the cloud infrastructure. Agenda for the afternoon session is available here.

The microservice platforms presented offer higher-level functionality by abstracting away most of the IaaS layer, allowing users to focus on deploying applications rather than on directly managing VMs. This makes it easier to build and deploy robust and scalable cloud computing applications. At the next SSC training session at KTH, there will be more information about MANTL, including a hands-on tutorial.

Virtual Research Environments for Clinical Metabolomics


PhenoMeNal is a 3-year EU Horizon 2020 project (2015-2018) that will develop a standardised e-infrastructure for analysing medical metabolic phenotype data. This comprises the development of standards for data exchange, pipelines, computational frameworks and resources for the processing, analysis and information-mining of the massive amounts of medical molecular phenotyping and genotyping data that will be generated by metabolomics applications now entering research and the clinic.

In the Spjuth research group we lead WP5, “Operation and maintenance of PhenoMeNal grid/cloud”, and our aim is to provide PhenoMeNal and researchers with the capability to spawn secure Virtual Research Environments (VREs) with easy access to scalable, interoperable data and tools for data analysis. These virtual environments should be able to run on most hardware architectures, ranging from single laptops/workstations to private and public cloud (IaaS) providers.

We use MANTL to set up, and to provide, a microservice-oriented virtual infrastructure. In PhenoMeNal, all partners provide tools as Docker images that are automatically built, tested, and pushed to DockerHub by a continuous integration system (Jenkins). Within MANTL we provide long-running services using Marathon, including the Jupyter and Galaxy workflow systems, which can orchestrate microservice-based pipelines using e.g. Chronos or Kubernetes.
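As an illustration of the long-running-service part, here is a minimal, hypothetical sketch of registering a service with Marathon through its REST API. The Marathon endpoint, app id and Docker image are placeholders, not taken from the actual PhenoMeNal deployment.

```python
# Hypothetical sketch: register a long-running service (here a Jupyter
# container) with Marathon via its REST API. Endpoint, app id and image
# are placeholders, not from the actual PhenoMeNal deployment.
import requests

app = {
    "id": "/jupyter",
    "cpus": 1.0,
    "mem": 2048,
    "instances": 1,
    "container": {
        "type": "DOCKER",
        "docker": {
            "image": "jupyter/base-notebook",  # illustrative image
            "network": "BRIDGE",
            "portMappings": [{"containerPort": 8888, "hostPort": 0}],
        },
    },
}

resp = requests.post("http://marathon.example.org:8080/v2/apps", json=app)
resp.raise_for_status()
print("Submitted app:", resp.json()["id"])
```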


So far we have successfully provisioned PhenoMeNal VRE on Google Cloud Platform, EBI Embassy Cloud (OpenStack), and SNIC Science Cloud (OpenStack). We are currently experimenting with Packer for speeding up the provisioning of virtual machines within the VRE, and Consul for federating multiple VREs. Another ongoing project is to use Apache Spark for distributed data analysis within the VRE.
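To give a flavor of the Spark direction, here is a minimal, hypothetical PySpark sketch of the kind of distributed analysis one could run inside a VRE; the input path, column names and computation are placeholders, not from PhenoMeNal.

```python
# Hypothetical PySpark sketch of a distributed analysis inside a VRE.
# Input path and column names are placeholders, not from PhenoMeNal.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("vre-analysis").getOrCreate()

# Load a (placeholder) table of metabolite intensities per sample
df = spark.read.csv("hdfs:///data/metabolite_intensities.csv",
                    header=True, inferSchema=True)

# Per-sample mean intensity over all numeric columns
summary = df.groupBy("sample_id").mean()
summary.show()

spark.stop()
```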

Links:

http://www.farmbio.uu.se/forskning/researchgroups/pb/PhenoMeNal/

http://www.farmbio.uu.se/forskning/researchgroups/pb/Data-intensive/

http://phenomenal-h2020.eu/

Plans for first half of 2016

Part of the SNIC Cloud Team 2016. From left: Lars Viklund (HPC2N), Daniel Nilsson (C3SE), Andreas Hellander (UU), Salman Toor (UU, UPPMAX), Pontus Freyhult (UPPMAX) and Mathias Lindberg (C3SE). Missing from picture: Ingemar Fällman (HPC2N) and Henric Zazzi (PDC).

Last week we held our first all-hands meeting for 2016. Many of us were able to meet at HPC2N in Umeå for almost two days of brainstorming and technical work. Since we now have a functioning (but not yet production-grade) IaaS cloud up and running, serving approximately 40 projects and 110 users, the focus of this meeting was on monitoring (to increase stability), metering and accounting. Like all SNIC-supported projects, we rely on the SUPR system for managing projects and users, but we have not yet developed a custom entry point for the cloud resources (we have been using “UPPMAX Small” templates, for those of you who know what that is). During the meeting we completed a draft of the SUPR/SAMS workflows for cloud projects, in collaboration with representatives of the SAMS team. This will now be handed off to those teams for feedback and, hopefully, quick implementation.

Some other highlights from our all-hands meeting:

  • We decided to host three training workshops this semester targeted at new users of the cloud resources, tentatively at KI (end of February), Chalmers (late March) and Umeå University (May). We will then follow up with a workshop on more advanced concepts and tools in Uppsala early next semester.
  • We are in good shape to start accepting more users, now that we have two regions online. If you are interested, go ahead and make a project request.
  • We spent a lot of time discussing the incentive for users to make sensible use of the IaaS resources when developing applications. We will implement some form of pay-as-you-go model to promote dynamic use of resources. More information will follow.
  • We are planning to harden the systems, so as a user you will see a progressively more stable system over the next couple of months. One step in that direction will be taken during the next large service window in the UPPMAX region, Feb 15-Feb 29.
  • A third region at C3SE is well on its way.

SNIC Science Cloud – A Community Cloud and a Community Effort

We are happy to announce the SNIC Science Cloud (SSC), a community cloud providing Infrastructure as a Service (IaaS) and, in the near future, selected Platform as a Service (PaaS) offerings, free of charge to individual researchers at Swedish universities. We will start taking on more users during the next couple of months, so let us know if you have a need for cloud computing infrastructure.

Open source. We are building SSC on the open source OpenStack cloud suite. Currently, we are hardening the system for sustained production. We are also scaling it out to multiple regions, with participation from the HPC centres UPPMAX (Uppsala), C3SE (Göteborg), HPC2N (Umeå) and PDC (Stockholm), to ensure that we can meet increasing demand.

A community effort. Our goal is to provide a modern, flexible and open infrastructure that complements existing HPC resources. We strive for a community effort that evolves with and for researchers. We would love to hear from potential users about their needs for platform-level services, such as Apache Hadoop/Spark, Kubernetes or other toolchains, so that we can focus our efforts where they are most needed. What large datasets would you like to process?

Transparency to help others follow. In taking on the challenge of deploying and operating an OpenStack community cloud on a national scale, over several hundred servers and many thousands of physical cores, we hope to lead the way for other institutions that are considering similar initiatives. This is why we aim for transparency with architecture planning, operational practices (e.g. sharing code for testing and evaluation), and data regarding usage patterns.

Open science, open data. With SSC we hope to take a leap towards an infrastructure for open science and open data, with cloud technology facilitating shareability and reproducibility of complex and computationally demanding experiments. We aim at making computations and data analysis more accessible for research communities with little previous experience of advanced and large scale computing resources. We are always interested in discussing these issues and in sharing and sharpening our vision.

You can help. Finally, there is a lot of work to do! If you are involved with academia in Sweden and are an OpenStack operator, have experience with e.g. software stacks for large-scale data processing, microservice orchestration or automation, or belong to a community that uses a specific SaaS you would like to provide for research groups, we want your help!