Update: The downtime will last until 22/4, exact time unknown.
The electrical work did not go as smoothly as planned, resulting in a cooling outage affecting the compute nodes and storage in the HPC2N region.
Planned downtime in the HPC2N region on Monday the 20th of April between 06:00 and 12:00, and Tuesday the 21st of April between 11:00 and 17:00, due to urgent electrical work. All running instances will be suspended before the outage and restarted again afterwards.
The other regions will not be affected by this, so if you can, we suggest moving your workloads to the new WEST-1 region, which runs a much more recent version of OpenStack on new hardware; see the example below.
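If you use the OpenStack command-line client, you can point it at a specific region with the --os-region-name option. A minimal sketch, assuming your SSC project credentials are already loaded (for example by sourcing an RC file downloaded from the dashboard); the flavor, image, and network names are placeholders:

# List your servers in the new WEST-1 region:
openstack --os-region-name WEST-1 server list
# Launch a new instance there (replace the placeholders with real names from the region):
openstack --os-region-name WEST-1 server create \
    --flavor <flavor> --image <image> --network <network> my-instance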
Due to a broken network fiber (2020-01-30), the region is currently unavailable. The ETA for the repair is 20:00 UTC 2020-01-30.
Due to cooling issues on the 2nd and 7th of January, there were short outages in the HPC2N region and all running instances were shut down unexpectedly. The underlying cause of these cooling issues has been resolved, but you might need to start your instances in the cloud again manually, for example as shown below.
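One way to restart a stopped instance is via the OpenStack command-line client; a minimal sketch, assuming your credentials are sourced and using my-instance as a placeholder name:

# Check which of your instances are shut off:
openstack server list --status SHUTOFF
# Start a stopped instance (my-instance is a placeholder):
openstack server start my-instance

The same can of course be done from the web dashboard by selecting Start Instance on the affected instance.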
Due to a datacenter cooling failure on the morning of Thursday the 12th of December, we were forced to do an emergency shutdown of the UPPMAX region. We are currently working on resolving this issue and apologize for the inconvenience.
Final acceptance testing is currently ongoing; more information about the new hardware can be found at https://www.c3se.chalmers.se/about/SSC/ . The upgrade also updates OpenStack to the Rocky release; more information about Rocky can be found at https://www.openstack.org/software/rocky/
Due to a datacenter cooling failure on the morning of Wednesday the 19th, we were forced to do an emergency shutdown of the UPPMAX region.
We are currently working on resolving this issue and estimate that all services will be available again by Friday at the latest.
During the holidays we will not respond to support tickets, and we will also have reduced support capacity in the days between and following the holidays.
We apologize for any inconvenience that this might cause.
Due to the latest security flaws in Intel CPUs, users of SNIC Science Cloud must patch all instances to the latest kernel as soon as possible.
# On Debian/Ubuntu-based instances:
sudo apt-get update
sudo apt-get upgrade
# On RHEL/CentOS-based instances:
sudo yum update
# Reboot afterwards so the instance boots into the patched kernel:
sudo reboot
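After the reboot you can do a quick sanity check of which kernel the instance is actually running; compare the output with the latest kernel version published by your distribution:

# Print the running kernel version:
uname -r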
One of the central network switches in the storage infrastructure will be replaced due to a failed fan. To do this we must shut down all running instances in the HPC2N region on 3/9 2018 between 08:00 and 12:00.
This will only affect the HPC2N region and will not have any impact on instances running in the other SSC regions.
However, logging in to the other regions via the web dashboard will not work during this maintenance period.