Upgrading the WEST-1 region of Swedish Science Cloud in September

In September we will begin upgrading the WEST-1 region, hosted by Chalmers e-Commons, to the latest version of OpenStack, and we will also add some new hardware.

This will improve the capacity and functionality of the WEST-1 region.

Unfortunately, this also means that the WEST-1 region will be down and unavailable for some time this fall, and that any data currently stored there will be removed.

If you are currently using WEST-1, you must make sure to:

  • Back up your data.
  • Move your workloads and data from WEST-1 to either EAST-1 or NORTH-1.

If you have any questions or if you need assistance, do not hesitate to contact support@cloud.snic.se and we will help you.
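For reference, one way to move an instance between regions can be sketched with the OpenStack command-line client. This is a hypothetical outline, not an official migration procedure: it assumes the `openstack` client is installed, that you have RC files for both regions, and the names (`my-vm`, `my-vm-backup`) are illustrative.

```shell
# Hypothetical sketch of moving an instance between regions with the
# openstack CLI; all names are illustrative, not official values.

# 1. In WEST-1: snapshot the instance and download the image.
openstack server image create --name my-vm-backup my-vm
openstack image save --file my-vm-backup.qcow2 my-vm-backup

# 2. Source the RC file for the target region (EAST-1 or NORTH-1),
#    then upload the image there.
openstack image create --disk-format qcow2 \
    --file my-vm-backup.qcow2 my-vm-backup

# 3. Data on volumes or local filesystems can also be copied directly,
#    e.g. with rsync over SSH from the old instance.
rsync -avz ubuntu@WEST1-IP:/data/ /data/
```

Data stored in volumes or the object store needs to be copied separately; a filesystem-level copy as in step 3 is often the simplest route.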

EAST-1 power failure (resolved)

At 00:57 CEST on Monday, May 29th a power outage caused the cooling system at Ångström Laboratory to shut down, leading to a rapid increase in temperature within the compute hall. To prevent further temperature escalation and safeguard the equipment, all systems in the compute hall were forcefully powered off. The cooling system was restored at approximately 05:00.

Due to the elevated temperatures experienced during the outage, additional inspections are required to ensure the compute hall, compute, storage, and network hardware are functioning as expected. Currently, we have identified an issue with one of the two UPS units.

Throughout the day, we will provide regular updates regarding the progress of the recovery efforts and the status of the affected equipment. We are working diligently to resolve any issues and restore normal operations as soon as possible.

Update 2023-05-29 11:00

The compute hall is fully operational again. We are now working on restoring systems.

Shutdown of all systems on 2 February at 07:00 CET

The UPPMAX compute hall hosting EAST-1 will be partially shut down on 2 February between 07:00 and 11:00 CET while Akademiska Hus performs work on the cooling circuit. The shutdown has been planned to coincide with our February maintenance day. We will try to provide some level of access, but expect all compute capability to be unavailable until the work is completed.

If you have any questions please contact us at support@uppmax.uu.se.

Best regards, UPPMAX

Serious vulnerability in pwnkit (CVE-2021-4034)

The vulnerable pkexec binary is installed by default in most Linux distributions. There is no permanent fix yet, but there is a workaround: remove the setuid bit from the binary with chmod 0755 /usr/bin/pkexec, which makes the bug impossible to exploit.

  • Pkexec is installed by default on all major Linux distributions.
  • Pkexec has been vulnerable since its creation in May 2009.
  • Any unprivileged local user can exploit this vulnerability to get full root privileges.

http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-4034
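The workaround above amounts to a single command. A small sketch with a verification step follows; the path is the usual default, and the `TARGET` variable is only there so the commands can be tried on a scratch file first.

```shell
# Workaround: drop the setuid bit so pkexec can no longer be abused to
# escalate privileges. Run as root. TARGET defaults to the usual path;
# override it to rehearse the commands on a scratch file.
TARGET="${TARGET:-/usr/bin/pkexec}"
chmod 0755 "$TARGET"

# Verify: the octal mode should now be 755 (no leading setuid "4" digit).
stat -c '%a' "$TARGET"
```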

Serious vulnerability in sudo (CVE-2021-3156)

Make sure to install the latest security updates in your instances to fix a serious vulnerability in sudo (CVE-2021-3156) that lets any local user run any command as root without entering a password.

In combination with other, less severe exploits, this can in some cases be used to compromise your instances remotely.

Read more about it: https://www.openwall.com/lists/oss-security/2021/01/26/3
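A quick way to check and patch, sketched for Debian/Ubuntu-based images (adapt the package manager commands to your distribution). The check is the one described in the advisory: running `sudoedit -s /` as a regular user reportedly prints an error starting with `sudoedit:` on vulnerable builds and `usage:` on patched ones.

```shell
# Check whether this sudo build is affected (run as a regular user):
# an error starting with "sudoedit:" suggests a vulnerable build,
# "usage:" a patched one (per the linked advisory).
sudoedit -s / 2>&1 | head -n 1

# Then pull in the fixed package, e.g. on Debian/Ubuntu images:
sudo apt-get update && sudo apt-get install --only-upgrade sudo
```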

SSC training workshop at HPC2N on Oct 10

We would like to invite all interested current and future users to an introductory-level training workshop at HPC2N, Umeå, on Oct 10. We will introduce cloud computing in general, including best practices around security. The majority of the time will be spent in lab sessions on basic usage of the IaaS cloud.

Please register here.

Note that the number of participants is limited to around 25; spots will be filled on a first-come, first-served basis.

You need to bring a laptop; no prior experience with cloud resources is required.

SSC in pre-production

It has been a busy spring in the SSC team. Following instructions from SNIC to start converging to a production-level national service, we have worked hard to refine and automate our deployment and management of the cloud control planes (in the process standardizing on e.g. operating systems and management software across all regions). We now have a common operations base, contributed to by all participating centres and publicly available on GitHub.

We are happy to announce that we are opening up for project requests again, in what we call the “pre-production stage”. To get access to the resources, simply make a new project request in SUPR, following the instructions here.

During pre-production, we fully intend to provide a stable, well-supported infrastructure mature enough for real use cases, with SNIC’s normal best-effort support. However, we acknowledge that unforeseen modifications may be needed as the number of users ramps up again. Hence, we reserve the right to schedule frequent service windows during the pre-production stage, sometimes at short notice. All service descriptions, policies, etc. regarding usage and quotas should also be considered drafts (we need this pre-production time to verify that our models hold up in practice). If you use SSC in the pre-production stage, we ask you to report any issues you encounter swiftly, and we may reach out to you for feedback on critical functionality and documentation to help us harden the setup further.

If you were a pilot user of SSC in 2016/2017, you will now notice the following major changes:

  1. In SUPR the cloud resource is now associated with the SSC-metacenter rather than UPPMAX.  There is a dedicated round associated with SSC.
  2. When requesting the project, the unit for the resource is “Coins”. We are in the process of implementing an accounting system that resembles commercial clouds, to better support the “cloud economic model” of usage. In pre-production, you can ignore this number (just write e.g. 1000 in the box), but it will be used in the production stage.
  3. Account management has been reworked and now uses SUPR as the identity provider: to log into the dashboard, you first log into SUPR to prove your identity. This means that SWAMID is supported (and is the preferred mechanism for authentication to Horizon).
  4. There are two independent (and hence resilient) but harmonized regions, HPC2N and C3SE. Each currently offers the same set of services. You are welcome to use either, or both. Since the regions have hardware of different quality, costs will later be differentiated.

During the pre-production stage we will continue the work on:

  • Finalizing the cost/quota model and implementing accounting with SAMS.
  • Improving our automated monitoring of systems.
  • Scaling regions with more hardware.
  • Finalizing an official service description.
  • Finalizing end-user policies, including security considerations.

The following timelines now apply:

  • March 21 – August 30: Pre-production stage.
  • April 4: The old “SMOG” cloud is decommissioned.
  • April 5 – August 30: “Dis”, a new large region based on UPPMAX-Tintin resources, is added to SSC.
  • August 30: Production stage begins (pending final review).

Welcome back to SSC!

The SNIC Science Cloud Team


Towards production in 2017

It has been a busy 2016 for the SSC team. We have served more than 60 pilot projects, conducted both beginning and more advanced level training at several locations in Sweden, and started working on a hardened infrastructure. Since competency renewal on OpenStack operations is expected to be one key challenge for SSC long-term, we have taken measures to standardize operations across regions to facilitate a joint, national responsibility for operations. At the end of the year, SNIC conducted a thorough evaluation of the project, looking specifically at whether the project had succeeded in creating services of value to the research community.

We are happy to announce that the outcome of this process is a decision to converge towards production resources in 2017. This is great news for end users, since it will mean a higher level of service and support.

SSC closes for new project requests in Q1 in order to transition to a production service

Early in 2017 we will upgrade the control planes at UPPMAX and C3SE with new hardware capable of supporting a larger number of users and projects. The bulk of the compute nodes will continue to come from second-generation HPC clusters, but they will be modernized and expanded. Our two regions at C3SE and HPC2N will become available for general project requests. We will accelerate the work to integrate SSC into the SNIC ecosystem. In particular, we will redesign our temporary project and account handling. Security policies will also be documented and communicated to end users.

To free up time in the project to make this transition as rapid as possible, we will not accept any new pilot project requests until we are ready to announce the production services (the goal is early Q2 2017). During the transition period, we will keep supporting our existing pilot users at the same level as now. When we reopen the services, it will be with the same best-effort support levels as other SNIC resources.

Glenna 2

Glenna is a Nordic e-Infrastructure Collaboration (NeIC) initiative, with focus on knowledge exchange and Nordic collaboration on cloud computing. The first Glenna project has now concluded, and from January 2017 a new phase of the project, Glenna 2, starts. Glenna 2 will focus on four main aims:

  1. Supporting national cloud initiatives to sustain affordable IaaS cloud resources through financial support, knowledge exchange and pooling competency on cloud operations.
  2. Using such national resources to establish an internationally leading collaboration on data intensive computing in collaboration with user communities.
  3. Leveraging the pooled competency to take responsibility for assessing future hybrid cloud technology and communicate that to the national initiatives.
  4. Supporting use of resources by pooling national cloud application expert support and create a Nordic support channel for cloud and big data. The mandate is to sustain a coordinated training and dissemination effort, creating training material and providing application level support to cloud users in all countries.

In short, aim 1 ensures the availability of IaaS, aim 2 seeks to establish PaaS and SaaS services for Big Data analytics, aim 3 investigates future emerging technology and HPC-as-a-Service, and aim 4 will provide advanced user support for research groups transitioning into cloud computing infrastructure. The project directive for Glenna 2 can be found here.

SNIC Science Cloud is a cloud computing infrastructure run by SNIC, the Swedish National Infrastructure for Computing. SNIC Science Cloud provides a national-scale IaaS cloud and associated higher-level services (PaaS) for the research community.

SNIC Science Cloud Workshop (Fall 2016)

Overview:

Instructor: Salman Toor.
Level: Basic.

Location: Chalmers University of Technology, Room Raven & Fox, Fysik forskarhus 5th floor.

Visiting address: Chalmers Campus Johanneberg, Room Raven & Fox, Fysik forskarhus, 5th floor entrance Fysikgränd 3.

Infrastructure: SNIC Science Cloud (OpenStack based Community Cloud).

Date & duration: 25th November, 10:00 – 16:00.

Audience: Users and potential users of SNIC Science Cloud resources with no previous cloud experience.


Registration:

Register here.


Topics:

  • Brief overview of Cloud Computing.
  • Cloud offerings: Compute, Storage, Network as a Service (*aaS).
  • Brief description of IaaS, PaaS, SaaS etc.
  • How to access Cloud resources?
  • Introduction to SNIC Science Cloud initiative.

Hands-on session topics:

1 – How does the Horizon dashboard work?
2 – How to start a virtual machine (VM)?
3 – Instance snapshots.
4 – Access to cloud storage (volumes and object store).
5 – Storage snapshots.
6 – Network information.
7 – Basic system interaction with APIs.

Lab-Document
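The hands-on topics above also have command-line counterparts. A sketch using the `openstack` client follows, assuming you have sourced an RC file from the dashboard; the flavor, image, and resource names are illustrative, not actual SSC values.

```shell
# CLI counterparts of the hands-on topics (all names illustrative).
openstack keypair create my-key > my-key.pem          # SSH credentials
chmod 600 my-key.pem
openstack server create --flavor ssc.small \
    --image "Ubuntu 16.04" --key-name my-key my-vm    # start a VM
openstack server image create --name my-snap my-vm    # instance snapshot
openstack volume create --size 10 my-vol              # block storage volume
openstack server add volume my-vm my-vol              # attach it to the VM
openstack network list                                # network information
```

The same operations are available through the Horizon dashboard; the lab sessions cover the dashboard workflow first, with the APIs and CLI as the final topic.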


Schedule:

First half (10:15 – 12:00): Lectures
Second half (13:00 – 16:00): Lab session