Incidents | Hostup AB
Incidents reported on status page for Hostup AB
https://status.hostup.se/

eta and zeta are down https://status.hostup.se/incident/582826 Wed, 28 May 2025 14:21:16 -0000 https://status.hostup.se/incident/582826#261931c0d5545f434a3860dcfff9ee77b62c3c4e1043f3e416f0ca7e797b898b eta and zeta recovered.
eta and zeta are down https://status.hostup.se/incident/582826 Wed, 28 May 2025 14:18:56 -0000 https://status.hostup.se/incident/582826#6be5f60feac84f6a6a91204f55305d907771415c56ec53431a1a6f01bf917aae eta and zeta went down.
epsilon is down https://status.hostup.se/incident/581900 Tue, 27 May 2025 17:33:11 -0000 https://status.hostup.se/incident/581900#172fdd02491e4ae99daa8ecb1f93b27a35a44ecf006d8a3a5123fef0e3c06f33 epsilon recovered.
epsilon is down https://status.hostup.se/incident/581900 Tue, 27 May 2025 17:32:31 -0000 https://status.hostup.se/incident/581900#2845ded1a310adf65e7b354e9af164f42c00622768f20f458aa193c5551f85df epsilon went down.
omega (cpanel) is down https://status.hostup.se/incident/559906 Sun, 11 May 2025 04:56:22 -0000 https://status.hostup.se/incident/559906#2fce8f7f91f48041edee4ab539bc5c5b59d8013e8f7d9616b82e34a57629cf5f omega (cpanel) recovered.
omega (cpanel) is down https://status.hostup.se/incident/559906 Sun, 11 May 2025 04:55:48 -0000 https://status.hostup.se/incident/559906#852b7ac0328d6a0211b6cfc4c1652845b0491fa64b628635a63dfeef1f070a97 omega (cpanel) went down.
omega (cpanel) is down https://status.hostup.se/incident/559906 Sun, 11 May 2025 00:51:41 -0000 https://status.hostup.se/incident/559906#c6872e4e20377a61b767dd6e37dcb0e710860fc21935f2959a5323e289130b02 omega (cpanel) recovered.
omega (cpanel) is down https://status.hostup.se/incident/559906 Sun, 11 May 2025 00:50:12 -0000 https://status.hostup.se/incident/559906#6f225ee441cf4025ded93a94d31c72fe89d40234b637f64e155c1733d575b8a3 omega (cpanel) went down.
stockholm1-3-vm (25 GbE snapshot) is down https://status.hostup.se/incident/559222 Fri, 09 May 2025 11:36:52 -0000 https://status.hostup.se/incident/559222#97a8376972397f06ff054011e0bc4f1c8b93694606030e479a85fab4879f4912 stockholm1-3-vm (25 GbE snapshot) recovered.
stockholm1-3-vm (25 GbE snapshot) is down https://status.hostup.se/incident/559222 Fri, 09 May 2025 11:33:20 -0000 https://status.hostup.se/incident/559222#991e2ac4d9cf6e161acdaa7b77264d30fe249a5b0f73903d4f706886cca78091 stockholm1-3-vm (25 GbE snapshot) went down.
ping stockholm1-24-vm (HA cluster), ping stockholm1-23-vm (HA cluster), and 4 other services are down https://status.hostup.se/incident/558815 Thu, 08 May 2025 17:57:03 -0000 https://status.hostup.se/incident/558815#690c2d878be53cc4361280a06fc10e9e720ce986ac59bb895c70f36a1849a3ec stockholm1-1-vm recovered.
ping stockholm1-24-vm (HA cluster), ping stockholm1-23-vm (HA cluster), and 4 other services are down https://status.hostup.se/incident/558815 Thu, 08 May 2025 17:54:30 -0000 https://status.hostup.se/incident/558815#7b828325af2ec43ba51ef671f13948bf2555c1db8dca19c741ea7e0476209dd0 stockholm1-1-vm went down.
ping stockholm1-24-vm (HA cluster), ping stockholm1-23-vm (HA cluster), and 4 other services are down https://status.hostup.se/incident/558815 Thu, 08 May 2025 17:54:06 -0000 https://status.hostup.se/incident/558815#ee4078e9c0b28b3d799da855e33ebc553425c56b9376ad018f703a48f6e45fd9 stockholm1-10-vm (HA cluster) recovered.
ping stockholm1-24-vm (HA cluster), ping stockholm1-23-vm (HA cluster), and 4 other services are down https://status.hostup.se/incident/558815 Thu, 08 May 2025 17:53:46 -0000 https://status.hostup.se/incident/558815#a5191ff11b32199acca5583df9629eca8110479b0c54942e12120829ff050a88 stockholm1-12-vm (HA cluster) recovered.
ping stockholm1-24-vm (HA cluster), ping stockholm1-23-vm (HA cluster), and 4 other services are down https://status.hostup.se/incident/558815 Thu, 08 May 2025 17:52:42 -0000 https://status.hostup.se/incident/558815#79719f9df2e6e56e1545168fe0b5e32b155265c7a1549bb95cd243f345d3b93a ping stockholm1-23-vm (HA cluster) recovered.
ping stockholm1-24-vm (HA cluster), ping stockholm1-23-vm (HA cluster), and 4 other services are down https://status.hostup.se/incident/558815 Thu, 08 May 2025 17:52:23 -0000 https://status.hostup.se/incident/558815#b59ba17793e544b57ae41a06e604d992e78896e40e4b30956c1a2f8efc8cc0c7 ping High Frequency Ryzen 9950x recovered.
ping stockholm1-24-vm (HA cluster), ping stockholm1-23-vm (HA cluster), and 4 other services are down https://status.hostup.se/incident/558815 Thu, 08 May 2025 17:51:25 -0000 https://status.hostup.se/incident/558815#b73594ab966c9cd80118c6e9c142e0c3cefd0eef7893b22da06dca143b0e8d49 ping High Frequency Ryzen 9950x went down.
ping stockholm1-24-vm (HA cluster), ping stockholm1-23-vm (HA cluster), and 4 other services are down https://status.hostup.se/incident/558815 Thu, 08 May 2025 17:51:20 -0000 https://status.hostup.se/incident/558815#3179448dd1f1928ef5e43a794d7e3923028e3cc2f50b861ace3c8b171cb0f449 stockholm1-10-vm (HA cluster) went down.
ping stockholm1-24-vm (HA cluster), ping stockholm1-23-vm (HA cluster), and 4 other services are down https://status.hostup.se/incident/558815 Thu, 08 May 2025 17:51:15 -0000 https://status.hostup.se/incident/558815#a54c1eb77a4f6a8ec568d23f34387e42ddd5f526bdcc39937ddd0813e56009e2 ping stockholm1-23-vm (HA cluster) and stockholm1-12-vm (HA cluster) went down.
ping stockholm1-24-vm (HA cluster), ping stockholm1-23-vm (HA cluster), and 4 other services are down https://status.hostup.se/incident/558815 Thu, 08 May 2025 17:50:20 -0000 https://status.hostup.se/incident/558815#26ee42a96e789a14c0f6dd06426b1548260d966a2caaa889df04a2633b0c64f2 ping stockholm1-24-vm (HA cluster) recovered.
ping stockholm1-24-vm (HA cluster), ping stockholm1-23-vm (HA cluster), and 4 other services are down https://status.hostup.se/incident/558815 Thu, 08 May 2025 17:50:15 -0000 https://status.hostup.se/incident/558815#1ba176467b7d9e976a63abb68a7d5e3c7f0c18baf757327d85ba4a723e7f3da4 ping stockholm1-24-vm (HA cluster) went down.
epsilon is down https://status.hostup.se/incident/547747 Sat, 19 Apr 2025 13:11:48 -0000 https://status.hostup.se/incident/547747#1477f35b42689ffb95c0de47390def6bf2eb290cca011072830e055914508df5 epsilon recovered.
epsilon is down https://status.hostup.se/incident/547747 Sat, 19 Apr 2025 13:10:48 -0000 https://status.hostup.se/incident/547747#91c7d524302b119b873a6e0197c1afbc8ffd893b05c139ac36800f31e0133f2f epsilon went down.
srv10 and delta (cpanel) are down https://status.hostup.se/incident/546607 Thu, 17 Apr 2025 02:15:27 -0000 https://status.hostup.se/incident/546607#a05eb974a531350f167d84a5f7d076f4b5f0d7576f3ed1e8ae9765a9b4bf13b4 delta (cpanel) recovered.
srv10 and delta (cpanel) are down https://status.hostup.se/incident/546607 Thu, 17 Apr 2025 02:11:10 -0000 https://status.hostup.se/incident/546607#ffeb77484e04455944252f59da27f36e8d40a48c5c3d00a97c5c5d1c5ffea1bf delta (cpanel) went down.
srv11 (Ryzen 9950x cPanel Stockholm - for high load websites) and delta (cpanel) are down https://status.hostup.se/incident/546051 Wed, 16 Apr 2025 06:27:37 -0000 https://status.hostup.se/incident/546051#4b249c525561e1bc9ece163651ca4248edd612e156492ffa69753c6774180e27 delta (cpanel) recovered.
srv11 (Ryzen 9950x cPanel Stockholm - for high load websites) and delta (cpanel) are down https://status.hostup.se/incident/546051 Wed, 16 Apr 2025 06:26:34 -0000 https://status.hostup.se/incident/546051#fa77f4630b2536427e5c654d8966bea261c47c6c6e9e8d97a6c3a7271abe3faf delta (cpanel) went down.
srv11 (Ryzen 9950x cPanel Stockholm - for high load websites), ping stockholm1-25-vm (HA cluster), and 6 other services are down https://status.hostup.se/incident/544675 Mon, 14 Apr 2025 00:36:53 -0000 https://status.hostup.se/incident/544675#d7704e9a7588241fe74ab4e636f9acd6a00a68198f81f37c4af81832484220da stockholm1-1-vm recovered.
srv11 (Ryzen 9950x cPanel Stockholm - for high load websites), ping stockholm1-25-vm (HA cluster), and 6 other services are down https://status.hostup.se/incident/544675 Mon, 14 Apr 2025 00:20:23 -0000 https://status.hostup.se/incident/544675#60552dbf5630fcff784eda727822e3c7ad410976a6e1cd6bc010392e0fe8c2e8 stockholm1-1-vm went down.
srv11 (Ryzen 9950x cPanel Stockholm - for high load websites), ping stockholm1-25-vm (HA cluster), and 6 other services are down https://status.hostup.se/incident/544675 Mon, 14 Apr 2025 00:18:44 -0000 https://status.hostup.se/incident/544675#6d3cadc29e0ca72aac954d42d56c47fab651d6a4ee5393033c0f5c8611dfb299 stockholm1-12-vm (HA cluster) and stockholm1-10-vm (HA cluster) recovered.
srv11 (Ryzen 9950x cPanel Stockholm - for high load websites), ping stockholm1-25-vm (HA cluster), and 6 other services are down https://status.hostup.se/incident/544675 Mon, 14 Apr 2025 00:17:38 -0000 https://status.hostup.se/incident/544675#7fe5978a0fe697e9c3f47182112836e88dcd38205d87e5a606d5324c8b335573 srv11 (Ryzen 9950x cPanel Stockholm - for high load websites) went down and ping stockholm1-23-vm (HA cluster) recovered.
srv11 (Ryzen 9950x cPanel Stockholm - for high load websites), ping stockholm1-25-vm (HA cluster), and 6 other services are down https://status.hostup.se/incident/544675 Mon, 14 Apr 2025 00:16:16 -0000 https://status.hostup.se/incident/544675#c5ddeade25b51143ebf6ef44ea62176de5cf29d669ca6e7528909fa7af060ee9 stockholm1-8-vm (HA cluster) recovered.
srv11 (Ryzen 9950x cPanel Stockholm - for high load websites), ping stockholm1-25-vm (HA cluster), and 6 other services are down https://status.hostup.se/incident/544675 Mon, 14 Apr 2025 00:16:14 -0000 https://status.hostup.se/incident/544675#0fea914dd960d613cfcb3a0bbfe582e5bbaede185c2524b40cb9824cf8108c14 stockholm1-10-vm (HA cluster) and stockholm1-8-vm (HA cluster) went down and ping stockholm1-25-vm (HA cluster) recovered.
srv11 (Ryzen 9950x cPanel Stockholm - for high load websites), ping stockholm1-25-vm (HA cluster), and 6 other services are down https://status.hostup.se/incident/544675 Mon, 14 Apr 2025 00:16:05 -0000 https://status.hostup.se/incident/544675#d2cc5b2fc7a729cd709f99a4c19d9e3ba5efd9be19fef55b97f48f25493d3024 ping stockholm1-25-vm (HA cluster), ping stockholm1-23-vm (HA cluster), and stockholm1-12-vm (HA cluster) went down.
srv11 (Ryzen 9950x cPanel Stockholm - for high load websites), ping stockholm1-25-vm (HA cluster), and 6 other services are down https://status.hostup.se/incident/544675 Sun, 13 Apr 2025 22:55:53 -0000 https://status.hostup.se/incident/544675#0868780eace5388a1ee859e452726e5c9419220e55d4b1f097a477494a70e605 stockholm1-1-vm recovered.
srv11 (Ryzen 9950x cPanel Stockholm - for high load websites), ping stockholm1-25-vm (HA cluster), and 6 other services are down https://status.hostup.se/incident/544675 Sun, 13 Apr 2025 22:31:03 -0000 https://status.hostup.se/incident/544675#2862f510e1e0fb277ff29db82782f55f05f596aa6d18fe02f4197baf233e9194 stockholm1-10-vm (HA cluster) recovered.
srv11 (Ryzen 9950x cPanel Stockholm - for high load websites), ping stockholm1-25-vm (HA cluster), and 6 other services are down https://status.hostup.se/incident/544675 Sun, 13 Apr 2025 22:29:39 -0000 https://status.hostup.se/incident/544675#638d0bba1498178f4425bf2c7cbbbff5c457b3958e8b1ef53345580edb8f1fb9 stockholm1-1-vm went down.
srv11 (Ryzen 9950x cPanel Stockholm - for high load websites), ping stockholm1-25-vm (HA cluster), and 6 other services are down https://status.hostup.se/incident/544675 Sun, 13 Apr 2025 22:26:46 -0000 https://status.hostup.se/incident/544675#d850dafcbc9ce38810f49f488a07d5b47bcced490899da6fa70520a2ce3d2825 ping stockholm1-23-vm (HA cluster) recovered.
srv11 (Ryzen 9950x cPanel Stockholm - for high load websites), ping stockholm1-25-vm (HA cluster), and 6 other services are down https://status.hostup.se/incident/544675 Sun, 13 Apr 2025 22:25:23 -0000 https://status.hostup.se/incident/544675#2aa8837db121d9d9d2d006ae1d1756c88c7fdd6735805c8fd69b3eadd0f2eaf7 ping stockholm1-23-vm (HA cluster) and stockholm1-13-vm (HA cluster - backup spare capacity non production) went down.
srv11 (Ryzen 9950x cPanel Stockholm - for high load websites), ping stockholm1-25-vm (HA cluster), and 6 other services are down https://status.hostup.se/incident/544675 Sun, 13 Apr 2025 22:25:16 -0000 https://status.hostup.se/incident/544675#76558eae7988a64c53192da101b9f852dd7ba883f85713f1350400ce4076faa1 stockholm1-10-vm (HA cluster) went down and ping stockholm1-25-vm (HA cluster) and ping stockholm1-24-vm (HA cluster) recovered.
srv11 (Ryzen 9950x cPanel Stockholm - for high load websites), ping stockholm1-25-vm (HA cluster), and 6 other services are down https://status.hostup.se/incident/544675 Sun, 13 Apr 2025 22:25:03 -0000 https://status.hostup.se/incident/544675#ebb0ca4b87ac6d21a4966d6dc2c655ed7dcc1c1f4b4cfe2d8be51fb1a8f3bd9d ping stockholm1-25-vm (HA cluster) and ping stockholm1-24-vm (HA cluster) went down.
stockholm1-12-vm (HA cluster) is down https://status.hostup.se/incident/544171 Sat, 12 Apr 2025 14:07:40 -0000 https://status.hostup.se/incident/544171#71452fe5c74ea58a3cdb6e8751ac8deb9c7f7612fba4f51c11fa79a721fed471 stockholm1-12-vm (HA cluster) recovered.
stockholm1-12-vm (HA cluster) is down https://status.hostup.se/incident/544171 Sat, 12 Apr 2025 14:06:02 -0000 https://status.hostup.se/incident/544171#7279fa27ef7e690ddc761cee82696140c4d662da20c8f6f0bf486f03400ebb1c stockholm1-12-vm (HA cluster) went down.
epsilon is down https://status.hostup.se/incident/536812 Sun, 30 Mar 2025 09:21:17 -0000 https://status.hostup.se/incident/536812#243564460ef85948172e69ff78d66e2b8a5ca50b3117b96fcd83582cb2e2489e epsilon recovered.
epsilon is down https://status.hostup.se/incident/536812 Sun, 30 Mar 2025 09:20:17 -0000 https://status.hostup.se/incident/536812#adc501a8d19c9f880a674f9741da03cf7dbeaabcc6881ed7df47ca1a128d2480 epsilon went down.
delta (cpanel) is down https://status.hostup.se/incident/532606 Sat, 22 Mar 2025 06:28:21 -0000 https://status.hostup.se/incident/532606#502a0c1ebd379b1cd6d38b84fabcf5ba5188bc313fd57031f0538e2b65619dbe delta (cpanel) recovered.
delta (cpanel) is down https://status.hostup.se/incident/532606 Sat, 22 Mar 2025 06:21:50 -0000 https://status.hostup.se/incident/532606#4391c6066dae128847363b0088f9230ea5fef01d2f21a3e99eafc34137028ffa delta (cpanel) went down.
delta (cpanel) is down https://status.hostup.se/incident/529068 Sun, 16 Mar 2025 11:52:50 -0000 https://status.hostup.se/incident/529068#e5f3cd06a918ac6f43e1ee66a9b99b2010ad90ed87b0d838fcdca1238eedc2ed delta (cpanel) recovered.
delta (cpanel) is down https://status.hostup.se/incident/529068 Sun, 16 Mar 2025 11:36:38 -0000 https://status.hostup.se/incident/529068#ff1febd6c7ffd2a7df2a46b93ddc7d41d4946607c0915a5a0655de39b2f35f57 delta (cpanel) went down.
delta (cpanel) is down https://status.hostup.se/incident/528709 Sat, 15 Mar 2025 15:15:09 -0000 https://status.hostup.se/incident/528709#c8fe007ec97785413452c835e571444be52a1e98ae8d9e3a16c40e40d917830b delta (cpanel) recovered.
delta (cpanel) is down https://status.hostup.se/incident/528709 Sat, 15 Mar 2025 15:12:07 -0000 https://status.hostup.se/incident/528709#dcfa937bb789e80b4fc0c1a6cafc51e4f2c89f20edca79afd351fd8c95efdad8 delta (cpanel) went down.
Downtime in storage cluster due to firmware bug https://status.hostup.se/incident/522574 Tue, 11 Mar 2025 10:05:00 -0000 https://status.hostup.se/incident/522574#cec8dd770c5c2257feddfae9b69ae4809f5b73be53365f4f5fc59be4fbff7a03
Hello everyone, we want to publish an update now that the developer has finally found the root cause of the issue. Please find the bug report here: https://tracker.ceph.com/issues/70390 The bug was triggered when adding new OSDs while an Erasure Coding pool existed, and we have seen other people hit similar issues in the past 1-2 weeks. To summarize, the issue was caused by a bug in Ceph 19.2 (Squid), and it only affected the Erasure Coding pool. We are now 100% confident that this will not happen again, since we have moved everything away from Erasure Coding 4+2 to Replication-3 for redundancy. Additionally, to prevent such a bug from ever affecting us again, going forward we will always stay one major version behind: if the current version is 19.2.1, we stay on 18.2.4 until version 20 is released, and only then do we upgrade to 19.x. This should allow more time for the "new" version to be battle tested by others first.
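The "one major version behind" rule described in the update above can be sketched as a small check. The snippet below is illustrative only, not Hostup's tooling; the function name and version numbers are hypothetical examples of the stated policy.

```python
# Minimal sketch of the "stay one major version behind" upgrade rule
# described in the update above. Illustrative only; names are hypothetical.

def may_run_major(target_major: int, newest_released_major: int) -> bool:
    """Allow running major version N only once major N+1 has been released,
    giving the newer release time to be battle tested by others first."""
    return target_major <= newest_released_major - 1

# Example matching the post: while 19.2.1 is the newest release (major 19),
# the cluster stays on 18.x; 19.x becomes eligible only after major 20 ships.
assert may_run_major(19, newest_released_major=19) is False
assert may_run_major(18, newest_released_major=19) is True
assert may_run_major(19, newest_released_major=20) is True
```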
omega (cpanel) is down https://status.hostup.se/incident/526305 Tue, 11 Mar 2025 08:26:46 -0000 https://status.hostup.se/incident/526305#4669e7a80834c20c57017f0dcc2c25c7a003752620ef03edb637f335c0af1749 omega (cpanel) recovered.
omega (cpanel) is down https://status.hostup.se/incident/526305 Tue, 11 Mar 2025 08:20:58 -0000 https://status.hostup.se/incident/526305#60b3cf63d593870d87c0345400b481f0aa5e43aeeb6d4cda19196e85d5311807 omega (cpanel) went down.
delta (cpanel) and stockholm1-1-vm are down https://status.hostup.se/incident/523594 Fri, 07 Mar 2025 10:34:59 -0000 https://status.hostup.se/incident/523594#7cbae41c0b0dddf208d8830ee01da6c648d3538f0c3098a2b4ead78080f68fc4 stockholm1-1-vm recovered.
delta (cpanel) and stockholm1-1-vm are down https://status.hostup.se/incident/523594 Thu, 06 Mar 2025 13:03:22 -0000 https://status.hostup.se/incident/523594#2dc1e730e9fa828371a67a49bfe467f09407322ed3cd174686d0075682dd0189 delta (cpanel) recovered.
delta (cpanel) and stockholm1-1-vm are down https://status.hostup.se/incident/523594 Thu, 06 Mar 2025 12:59:45 -0000 https://status.hostup.se/incident/523594#ac0998c70d1a3acf9640ce59fb6e25be0614c71184fb068ae11f9ee717d7ac74 delta (cpanel) went down.
delta (cpanel) and stockholm1-1-vm are down https://status.hostup.se/incident/523594 Thu, 06 Mar 2025 08:52:20 -0000 https://status.hostup.se/incident/523594#34cff52b0c9a1ffd67faa1103961f87ea02be83e3abd388c260a34b2040a1c47 stockholm1-1-vm went down.
Reboot https://status.hostup.se/incident/523588 Thu, 06 Mar 2025 08:43:00 -0000 https://status.hostup.se/incident/523588#d236028f2e99ca3f6ae63699fda68535537ee0c555ad8b60a5e71078e8132581 We are rebooting server #5 for a quick update. It will be back in around 5 minutes.
Downtime in storage cluster due to firmware bug https://status.hostup.se/incident/522574 Wed, 05 Mar 2025 18:29:00 -0000 https://status.hostup.se/incident/522574#362cc035b0dc4548b717740cc5d5f6eea2a4ff21b799fde5d0d99156ea0e241f
The cluster has now been in an optimal state, with a healthy 3x Replication pool, for around 24 hours. An import of your scheduled backups (if set; 7 free backups are included) or a reinstallation is sadly required to bring your VM back up. Backup management guide: https://hostup.se/en/support/hantera-sakerhetskopior/ Reinstall instructions: https://hostup.se/en/support/installera-nytt-operativsystem-pa-din-vps/ In close to 5 years of running Ceph-based storage, we have never seen an issue like this one. A 3x replicated pool is now the standard, replacing the EC and 2x replication pools, for both redundancy and simplicity. We are truly sorry for the trouble this brought everyone, and we are going to make sure something like this never happens again. Additionally, automated backups (without having to set a schedule yourself) and a high-speed storage solution for those backups will be high on the priority list.
Downtime in storage cluster due to firmware bug https://status.hostup.se/incident/522574 Tue, 04 Mar 2025 18:11:00 -0000 https://status.hostup.se/incident/522574#414f08f9440754af8016fb160984d512c39839200fc1960c37733e46a1660f42
We at Hostup sincerely apologize for the severe incident that occurred this afternoon, caused by multiple OSD failures leading to irreversible data corruption in our storage cluster. The root cause was identified as a firmware bug in the new disks recently introduced to our Ceph cluster. As a result, all VPS instances must now be reinstalled. We will work throughout the night to assist you with restoring your backups and getting your services operational again as quickly as possible. Don't worry, your backups are safe, and backups are always included in our services. Additionally, we have transitioned all customers from replication-2 and EC 4+2 to replication-3 to significantly increase redundancy and prevent similar issues in the future. For instructions on reinstalling your VPS and restoring from backups, please refer to these guides: https://hostup.se/en/support/hantera-sakerhetskopior/ https://hostup.se/en/support/installera-nytt-operativsystem-pa-din-vps/
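For readers running their own Ceph clusters, a check in the spirit of the migration described above (every pool replicated with size of at least 3) might look like the sketch below. It assumes the standard `ceph` CLI is available and that `ceph osd pool ls detail --format json` exposes `pool_name`, `type`, and `size` fields (details can vary between Ceph releases); it is illustrative only and is not Hostup's tooling.

```python
# Illustrative check that every Ceph pool is replicated with size >= 3,
# in the spirit of the migration described above. Assumes the `ceph` CLI is
# on PATH and that its JSON output exposes pool_name/type/size fields
# (these can vary between Ceph releases). Not Hostup's tooling.
import json
import subprocess

REPLICATED = 1  # Ceph pool type code for replicated pools (erasure-coded is 3)

def non_compliant_pools(min_size: int = 3) -> list[str]:
    """Return names of pools that are not replicated or have size < min_size."""
    raw = subprocess.run(
        ["ceph", "osd", "pool", "ls", "detail", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    return [
        pool.get("pool_name", "<unknown>")
        for pool in json.loads(raw)
        if pool.get("type") != REPLICATED or pool.get("size", 0) < min_size
    ]

if __name__ == "__main__":
    bad = non_compliant_pools()
    if bad:
        print("pools needing attention:", ", ".join(bad))
    else:
        print("all pools are replicated with size >= 3")
```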
We will work throughout the night to assist you with restoring your backups and getting your services operational again as quickly as possible. Don't worry they're safe and backups are always included in our services. Additionally, we’ve transitioned all customers from replication-2 and EC 4+2 to replication-3 to significantly increase redundancy and prevent similar issues in the future. For instructions on reinstalling your VPS and restoring from backups, please refer to these guides: https://hostup.se/en/support/hantera-sakerhetskopior/ https://hostup.se/en/support/installera-nytt-operativsystem-pa-din-vps/ Downtime in storage cluster due to firmware bug https://status.hostup.se/incident/522574 Tue, 04 Mar 2025 18:11:00 -0000 https://status.hostup.se/incident/522574#414f08f9440754af8016fb160984d512c39839200fc1960c37733e46a1660f42 We at Hostup sincerely apologize for the severe incident that occurred this afternoon, caused by multiple OSD failures leading to irreversible data corruption in our storage cluster. The root cause was identified as a firmware bug in the new disks recently introduced to our Ceph cluster. As a result, all VPS instances must now be reinstalled. We will work throughout the night to assist you with restoring your backups and getting your services operational again as quickly as possible. Don't worry they're safe and backups are always included in our services. Additionally, we’ve transitioned all customers from replication-2 and EC 4+2 to replication-3 to significantly increase redundancy and prevent similar issues in the future. For instructions on reinstalling your VPS and restoring from backups, please refer to these guides: https://hostup.se/en/support/hantera-sakerhetskopior/ https://hostup.se/en/support/installera-nytt-operativsystem-pa-din-vps/ Downtime in storage cluster due to firmware bug https://status.hostup.se/incident/522574 Tue, 04 Mar 2025 18:11:00 -0000 https://status.hostup.se/incident/522574#414f08f9440754af8016fb160984d512c39839200fc1960c37733e46a1660f42 We at Hostup sincerely apologize for the severe incident that occurred this afternoon, caused by multiple OSD failures leading to irreversible data corruption in our storage cluster. The root cause was identified as a firmware bug in the new disks recently introduced to our Ceph cluster. As a result, all VPS instances must now be reinstalled. We will work throughout the night to assist you with restoring your backups and getting your services operational again as quickly as possible. Don't worry they're safe and backups are always included in our services. Additionally, we’ve transitioned all customers from replication-2 and EC 4+2 to replication-3 to significantly increase redundancy and prevent similar issues in the future. For instructions on reinstalling your VPS and restoring from backups, please refer to these guides: https://hostup.se/en/support/hantera-sakerhetskopior/ https://hostup.se/en/support/installera-nytt-operativsystem-pa-din-vps/ Downtime in storage cluster due to firmware bug https://status.hostup.se/incident/522574 Tue, 04 Mar 2025 18:11:00 -0000 https://status.hostup.se/incident/522574#414f08f9440754af8016fb160984d512c39839200fc1960c37733e46a1660f42 We at Hostup sincerely apologize for the severe incident that occurred this afternoon, caused by multiple OSD failures leading to irreversible data corruption in our storage cluster. The root cause was identified as a firmware bug in the new disks recently introduced to our Ceph cluster. 
As a result, all VPS instances must now be reinstalled. We will work throughout the night to assist you with restoring your backups and getting your services operational again as quickly as possible. Don't worry they're safe and backups are always included in our services. Additionally, we’ve transitioned all customers from replication-2 and EC 4+2 to replication-3 to significantly increase redundancy and prevent similar issues in the future. For instructions on reinstalling your VPS and restoring from backups, please refer to these guides: https://hostup.se/en/support/hantera-sakerhetskopior/ https://hostup.se/en/support/installera-nytt-operativsystem-pa-din-vps/ Downtime in storage cluster due to firmware bug https://status.hostup.se/incident/522574 Tue, 04 Mar 2025 18:11:00 -0000 https://status.hostup.se/incident/522574#414f08f9440754af8016fb160984d512c39839200fc1960c37733e46a1660f42 We at Hostup sincerely apologize for the severe incident that occurred this afternoon, caused by multiple OSD failures leading to irreversible data corruption in our storage cluster. The root cause was identified as a firmware bug in the new disks recently introduced to our Ceph cluster. As a result, all VPS instances must now be reinstalled. We will work throughout the night to assist you with restoring your backups and getting your services operational again as quickly as possible. Don't worry they're safe and backups are always included in our services. Additionally, we’ve transitioned all customers from replication-2 and EC 4+2 to replication-3 to significantly increase redundancy and prevent similar issues in the future. For instructions on reinstalling your VPS and restoring from backups, please refer to these guides: https://hostup.se/en/support/hantera-sakerhetskopior/ https://hostup.se/en/support/installera-nytt-operativsystem-pa-din-vps/ Downtime in storage cluster due to firmware bug https://status.hostup.se/incident/522574 Tue, 04 Mar 2025 18:11:00 -0000 https://status.hostup.se/incident/522574#414f08f9440754af8016fb160984d512c39839200fc1960c37733e46a1660f42 We at Hostup sincerely apologize for the severe incident that occurred this afternoon, caused by multiple OSD failures leading to irreversible data corruption in our storage cluster. The root cause was identified as a firmware bug in the new disks recently introduced to our Ceph cluster. As a result, all VPS instances must now be reinstalled. We will work throughout the night to assist you with restoring your backups and getting your services operational again as quickly as possible. Don't worry they're safe and backups are always included in our services. Additionally, we’ve transitioned all customers from replication-2 and EC 4+2 to replication-3 to significantly increase redundancy and prevent similar issues in the future. For instructions on reinstalling your VPS and restoring from backups, please refer to these guides: https://hostup.se/en/support/hantera-sakerhetskopior/ https://hostup.se/en/support/installera-nytt-operativsystem-pa-din-vps/ Downtime in storage cluster due to firmware bug https://status.hostup.se/incident/522574 Tue, 04 Mar 2025 18:11:00 -0000 https://status.hostup.se/incident/522574#414f08f9440754af8016fb160984d512c39839200fc1960c37733e46a1660f42 We at Hostup sincerely apologize for the severe incident that occurred this afternoon, caused by multiple OSD failures leading to irreversible data corruption in our storage cluster. 
The root cause was identified as a firmware bug in the new disks recently introduced to our Ceph cluster. As a result, all VPS instances must now be reinstalled. We will work throughout the night to assist you with restoring your backups and getting your services operational again as quickly as possible. Don't worry they're safe and backups are always included in our services. Additionally, we’ve transitioned all customers from replication-2 and EC 4+2 to replication-3 to significantly increase redundancy and prevent similar issues in the future. For instructions on reinstalling your VPS and restoring from backups, please refer to these guides: https://hostup.se/en/support/hantera-sakerhetskopior/ https://hostup.se/en/support/installera-nytt-operativsystem-pa-din-vps/ Downtime in storage cluster due to firmware bug https://status.hostup.se/incident/522574 Tue, 04 Mar 2025 18:11:00 -0000 https://status.hostup.se/incident/522574#414f08f9440754af8016fb160984d512c39839200fc1960c37733e46a1660f42 We at Hostup sincerely apologize for the severe incident that occurred this afternoon, caused by multiple OSD failures leading to irreversible data corruption in our storage cluster. The root cause was identified as a firmware bug in the new disks recently introduced to our Ceph cluster. As a result, all VPS instances must now be reinstalled. We will work throughout the night to assist you with restoring your backups and getting your services operational again as quickly as possible. Don't worry they're safe and backups are always included in our services. Additionally, we’ve transitioned all customers from replication-2 and EC 4+2 to replication-3 to significantly increase redundancy and prevent similar issues in the future. For instructions on reinstalling your VPS and restoring from backups, please refer to these guides: https://hostup.se/en/support/hantera-sakerhetskopior/ https://hostup.se/en/support/installera-nytt-operativsystem-pa-din-vps/ Downtime in storage cluster due to firmware bug https://status.hostup.se/incident/522574 Tue, 04 Mar 2025 18:11:00 -0000 https://status.hostup.se/incident/522574#414f08f9440754af8016fb160984d512c39839200fc1960c37733e46a1660f42 We at Hostup sincerely apologize for the severe incident that occurred this afternoon, caused by multiple OSD failures leading to irreversible data corruption in our storage cluster. The root cause was identified as a firmware bug in the new disks recently introduced to our Ceph cluster. As a result, all VPS instances must now be reinstalled. We will work throughout the night to assist you with restoring your backups and getting your services operational again as quickly as possible. Don't worry they're safe and backups are always included in our services. Additionally, we’ve transitioned all customers from replication-2 and EC 4+2 to replication-3 to significantly increase redundancy and prevent similar issues in the future. 
Downtime in storage cluster due to firmware bug https://status.hostup.se/incident/522574 Tue, 04 Mar 2025 14:23:00 -0000 https://status.hostup.se/incident/522574#517c4f40a2a46a5baec21de1c8e3d8f3767884950ac89e46fe6db4599f0caafa We are currently experiencing a catastrophic firmware bug in the new disks. This firmware occasionally corrupts data on write, and the storage cluster is now down due to that corruption. We will roll back the system to bring it online again.
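Since the trigger was drive firmware rather than Ceph itself, a natural first triage step on each storage host is to inventory which firmware revision every disk is running. The sketch below is hypothetical and not part of Hostup's procedure: it calls the standard smartctl tool from smartmontools and flags drives whose firmware matches a placeholder revision string (BAD_FIRMWARE is an assumption; the real value would come from the vendor advisory for the affected model).

```python
#!/usr/bin/env python3
"""Hypothetical triage sketch: list disk firmware revisions via smartctl.

Assumes smartmontools is installed and the script runs as root on a storage
host. BAD_FIRMWARE is a placeholder, not a real revision string.
"""
import glob
import re
import subprocess

BAD_FIRMWARE = {"EXAMPLE1"}  # placeholder; substitute the advisory's revision

def firmware_of(device: str) -> str | None:
    # `smartctl -i` prints an identity block that includes a "Firmware Version:" line.
    out = subprocess.run(["smartctl", "-i", device],
                         capture_output=True, text=True).stdout
    match = re.search(r"Firmware Version:\s*(\S+)", out)
    return match.group(1) if match else None

def main() -> None:
    # SATA/SAS disks appear as /dev/sd?, NVMe namespaces as /dev/nvme?n1.
    for device in sorted(glob.glob("/dev/sd?") + glob.glob("/dev/nvme?n1")):
        fw = firmware_of(device)
        status = "UNKNOWN" if fw is None else ("SUSPECT" if fw in BAD_FIRMWARE else "ok")
        print(f"{device}: firmware={fw} [{status}]")

if __name__ == "__main__":
    main()
```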
epsilon is down https://status.hostup.se/incident/521071 Sat, 01 Mar 2025 15:19:17 -0000 https://status.hostup.se/incident/521071#81a20019a7e142a0430de3e6ccb47c25edeb401ea1ea8e2a6e775f0626ac4daa epsilon recovered. epsilon is down https://status.hostup.se/incident/521071 Sat, 01 Mar 2025 15:18:19 -0000 https://status.hostup.se/incident/521071#1c9da9018487d58fdd4b9fd726aa86ccaf4d669ddb81a5f35be954d2538e7568 epsilon went down.
srv10.hostup.se, stockholm1-12-vm (HA cluster), and 2 other services are down https://status.hostup.se/incident/517977 Mon, 24 Feb 2025 08:59:52 -0000 https://status.hostup.se/incident/517977#af764ac1b2ff9ea8e1152a9ff26a9871c357484bc9c67cc73d6a25b79c30845e stockholm1-1-vm recovered. srv10.hostup.se, stockholm1-12-vm (HA cluster), and 2 other services are down https://status.hostup.se/incident/517977 Mon, 24 Feb 2025 08:52:15 -0000 https://status.hostup.se/incident/517977#cda3b2f40a7dce526157815ea3e4e61f3caf0cf0092d262c432f8f063c7c7245 stockholm1-1-vm went down and srv10.hostup.se recovered. srv10.hostup.se, stockholm1-12-vm (HA cluster), and 2 other services are down https://status.hostup.se/incident/517977 Mon, 24 Feb 2025 08:45:57 -0000 https://status.hostup.se/incident/517977#fa4dbe8d7b0810287ccb259fb084941cff8fb21921af5f687f0fc94ef6013fe8 stockholm1-10-vm (HA cluster) recovered. srv10.hostup.se, stockholm1-12-vm (HA cluster), and 2 other services are down https://status.hostup.se/incident/517977 Mon, 24 Feb 2025 08:43:35 -0000 https://status.hostup.se/incident/517977#3c646d7af919e3c730cabc696d7e81af8e2f4c9bf5a586ec4646162ceadf1deb stockholm1-12-vm (HA cluster) recovered. srv10.hostup.se, stockholm1-12-vm (HA cluster), and 2 other services are down https://status.hostup.se/incident/517977 Mon, 24 Feb 2025 08:42:59 -0000 https://status.hostup.se/incident/517977#13a55aed9a331747bea4193f7fccfcd3d3610dc1c2c8e90f7a9a619c3e9a656d stockholm1-12-vm (HA cluster) went down. srv10.hostup.se, stockholm1-12-vm (HA cluster), and 2 other services are down https://status.hostup.se/incident/517977 Mon, 24 Feb 2025 08:40:56 -0000 https://status.hostup.se/incident/517977#f4492f66cdf0bc4d621c84cbd31099a1bfcb53f7855f2bc27c8fad94ea51766e stockholm1-10-vm (HA cluster) went down. delta (cpanel) is down https://status.hostup.se/incident/516251 Thu, 20 Feb 2025 17:05:09 -0000 https://status.hostup.se/incident/516251#4ca87a041a7891e06edb65ade55892945eee017cb2640bc8fd4c188a97ce62e3 delta (cpanel) recovered. delta (cpanel) is down https://status.hostup.se/incident/516251 Thu, 20 Feb 2025 16:59:56 -0000 https://status.hostup.se/incident/516251#3921f6ab4cbc16ce0c0f9dcd6d9ec527d040429525b72b58741df48fbb5fb164 delta (cpanel) went down. delta (cpanel) is down https://status.hostup.se/incident/515441 Wed, 19 Feb 2025 10:13:28 -0000 https://status.hostup.se/incident/515441#6f278f813314246aebee4cd6f67677e15712ace035b94c2284a261bd149cec75 delta (cpanel) recovered. delta (cpanel) is down https://status.hostup.se/incident/515441 Wed, 19 Feb 2025 10:11:12 -0000 https://status.hostup.se/incident/515441#09e6aa99091b6ac03209c49eb41bb2ea2c0670d976a40358def2e08c416231cd delta (cpanel) went down. delta (cpanel) is down https://status.hostup.se/incident/514224 Mon, 17 Feb 2025 10:51:43 -0000 https://status.hostup.se/incident/514224#cdcecd74497eebb0b9a8d9c3549461c8c7f78926d2e257af68a618de10cbcf02 delta (cpanel) recovered. delta (cpanel) is down https://status.hostup.se/incident/514224 Mon, 17 Feb 2025 10:50:51 -0000 https://status.hostup.se/incident/514224#000c8d5253e4f70c76bc2ffd7e508357db20aa16398d199c3933b2d66288aebc delta (cpanel) went down. epsilon is down https://status.hostup.se/incident/506129 Sun, 02 Feb 2025 21:38:35 -0000 https://status.hostup.se/incident/506129#8f4dc427b92934e3d0bf9c95ed24a4635bb30fa2328e6a0d0c67a5e2ae266e34 epsilon recovered. 
epsilon is down https://status.hostup.se/incident/506129 Sun, 02 Feb 2025 21:38:06 -0000 https://status.hostup.se/incident/506129#0129d6d4bc4e43cf37e3609aa69728294f2af902b8525a7e2eae300e510b5072 epsilon went down. zeta is down https://status.hostup.se/incident/503696 Wed, 29 Jan 2025 22:40:51 -0000 https://status.hostup.se/incident/503696#33c38f6b256e647720ff3888805c0a379d068891c11f3ca6e7f385b1a7a366d1 zeta recovered. zeta is down https://status.hostup.se/incident/503696 Wed, 29 Jan 2025 09:30:51 -0000 https://status.hostup.se/incident/503696#3896f527ef68ea67381f946c308f27cdc3133db7081dd41c15de3a8321c6e190 zeta went down. Webbhotell reverse proxy and orion are down https://status.hostup.se/incident/500670 Thu, 23 Jan 2025 19:28:21 -0000 https://status.hostup.se/incident/500670#d50587122f085a00b3afc7fa12c532a45e591c047fc0a74f3b89a9d06a635a06 Webbhotell reverse proxy recovered. Webbhotell reverse proxy and orion are down https://status.hostup.se/incident/500670 Thu, 23 Jan 2025 19:27:59 -0000 https://status.hostup.se/incident/500670#38536a00cdb68640d3e6e5c98c58a429f54c2b1ec36ab445c27281eeb6a80d51 Webbhotell reverse proxy went down. Webbhotell reverse proxy and orion are down https://status.hostup.se/incident/500670 Thu, 23 Jan 2025 13:57:53 -0000 https://status.hostup.se/incident/500670#ad343f8dca7ab2900dbbf527c8dab0ab7651bf6b3632801eff544bdc86905bde orion recovered. Webbhotell reverse proxy and orion are down https://status.hostup.se/incident/500670 Thu, 23 Jan 2025 13:55:36 -0000 https://status.hostup.se/incident/500670#9aa90d68a97e3c1f03b0f2da71ad144391e9f8d12d9c068fd29793f7aea16465 orion went down. orion is down https://status.hostup.se/incident/497785 Sat, 18 Jan 2025 08:12:43 -0000 https://status.hostup.se/incident/497785#d90443ec6d5099ebdc3885476a28a806abe3dbf026c9c76cba2a79dd8f56252f orion recovered. orion is down https://status.hostup.se/incident/497785 Sat, 18 Jan 2025 08:12:20 -0000 https://status.hostup.se/incident/497785#96b32a7885224e3c6929d738f22f6ced1d87fcdf1dd763459ff6bc0eb1296b8a orion went down. delta (cpanel) is down https://status.hostup.se/incident/497160 Fri, 17 Jan 2025 02:10:46 -0000 https://status.hostup.se/incident/497160#b8062aadd0c6bdf450a884bce5c46a39f726aa43987ffb0fcae1ad868723b9ed delta (cpanel) recovered. delta (cpanel) is down https://status.hostup.se/incident/497160 Fri, 17 Jan 2025 01:58:46 -0000 https://status.hostup.se/incident/497160#5df65d908b2b39e939bbb4fe3fe76b300de772eabda9b64d0a9a129e6d1f5d91 delta (cpanel) went down. delta (cpanel) is down https://status.hostup.se/incident/497160 Fri, 17 Jan 2025 01:54:46 -0000 https://status.hostup.se/incident/497160#f607b2642bbb038a72356ca967909db4f84e26131d1c0245db678cbbe017be20 delta (cpanel) recovered. delta (cpanel) is down https://status.hostup.se/incident/497160 Fri, 17 Jan 2025 01:51:48 -0000 https://status.hostup.se/incident/497160#2ddd4b28edc9742bb694cb0b0269813ff9f55fb291deeb256b400c9c91764015 delta (cpanel) went down. epsilon is down https://status.hostup.se/incident/495491 Tue, 14 Jan 2025 09:09:09 -0000 https://status.hostup.se/incident/495491#bca5fbf60fea7ccbc2e9ba4466aea0e78f72aa4f4635293504b4679aaee26abc epsilon recovered. epsilon is down https://status.hostup.se/incident/495491 Tue, 14 Jan 2025 05:39:20 -0000 https://status.hostup.se/incident/495491#50642437562c36c3a243aff7930398d36c5929aaed6f47b0b376933bb1e3245e epsilon went down. 
server restart https://status.hostup.se/incident/495352 Mon, 13 Jan 2025 23:55:57 -0000 https://status.hostup.se/incident/495352#e597fe3fc5394473922cfce815614e89de32c729a02b2b2fa81943cad01c01ca Maintenance completed server restart https://status.hostup.se/incident/495352 Mon, 13 Jan 2025 23:50:57 -0000 https://status.hostup.se/incident/495352#e07f6c8dc561cf5db664c775d06b4b24b86de50086d91af56271ead08193f452 During this window we'll upgrade the server resources. This requires a quick 3 minute reboot. zeta is down https://status.hostup.se/incident/467138 Mon, 25 Nov 2024 03:45:53 -0000 https://status.hostup.se/incident/467138#877847266d3e590c8a70c01f49162075309c007302144268c138b4ee207e1e26 zeta recovered. zeta is down https://status.hostup.se/incident/467138 Mon, 25 Nov 2024 02:34:16 -0000 https://status.hostup.se/incident/467138#32202ff5d30e595bb0ad301ee5574534f64d203487e96104b414dc8ad0ae91b7 zeta went down. Unexpected server reboot https://status.hostup.se/incident/466481 Fri, 22 Nov 2024 20:35:00 -0000 https://status.hostup.se/incident/466481#8ad24e163efcb0dde5c0555ac61d50cbbca992ebf3b285b0ed9f3d38ffb24e3f
Incident overview
Affected server: Stockholm1-10vm
Time of incident: 21:20
Duration: Approx. 15 minutes
Impact: Around 20% of VMs in the subnets 95.141.241.0/24 and 91.226.221.0/24 experienced extended downtime due to network connectivity issues.
What happened
At 21:20, the Stockholm1-10vm server unexpectedly rebooted due to a hardware issue. The affected VMs were automatically redistributed across the remaining servers and began booting within approximately two minutes. However, around 20% of the VMs in the affected subnets (95.141.241.0/24 and 91.226.221.0/24) did not regain network connectivity upon reboot. These VMs require VLAN tagging to connect, as their subnets are announced via another ASN. Since the redistribution was random, some VMs were deployed on servers that do not support VLAN tagging, resulting in network downtime. By 21:35, all VMs were operational after our team manually migrated those affected to servers with proper VLAN tagging support.
Root cause
The server rebooted due to a memory issue in the CPU1 G0 slot. A defective RAM stick caused the reboot. The extended downtime for VMs in the 95.141.241.0/24 and 91.226.221.0/24 subnets was due to their reliance on VLAN tagging for connectivity, combined with the random redistribution of VMs across servers.
Actions taken
All VMs were manually migrated to appropriate servers to restore network connectivity. The defective memory stick has been identified and will be removed to prevent further incidents.
Future improvements
Hardware maintenance: The faulty RAM stick will be removed, and the server will be returned to production after thorough testing.
Subnet reconfiguration: We will migrate 95.141.241.0/24 and 91.226.221.0/24 to our own ASN over the weekend, removing the dependency on VLAN tagging. This change will ensure that, in case of future server failures, affected VMs can reboot on any server without network connectivity issues.
Automatic handling: We will enhance our automated recovery systems to prevent similar delays during VM redistribution.
We apologize for any inconvenience caused and are committed to improving our systems to minimize downtime in the future.
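The extended downtime above boils down to a placement constraint: a VM whose subnet is announced via another ASN only gets connectivity on a host that supports VLAN tagging, so random redistribution can strand it. The sketch below is a hypothetical illustration of that constraint, not Hostup's scheduler; the host names, capability flags, and VM addresses are invented, and only the two affected prefixes are taken from the report.

```python
#!/usr/bin/env python3
"""Hypothetical sketch of VLAN-aware VM placement after a host failure.

Host names, capability flags, and VM addresses below are made up for
illustration; only the two affected prefixes come from the incident report.
"""
from dataclasses import dataclass
from ipaddress import ip_address, ip_network

# Subnets announced via another ASN: VMs in them need VLAN tagging to connect.
VLAN_TAGGED_PREFIXES = [ip_network("95.141.241.0/24"), ip_network("91.226.221.0/24")]

@dataclass
class Host:
    name: str
    supports_vlan_tagging: bool

@dataclass
class VM:
    name: str
    ip: str

def needs_vlan_tagging(vm: VM) -> bool:
    # True when the VM's address falls inside one of the externally announced prefixes.
    return any(ip_address(vm.ip) in prefix for prefix in VLAN_TAGGED_PREFIXES)

def eligible_hosts(vm: VM, hosts: list[Host]) -> list[Host]:
    # A VM that needs VLAN tagging must only land on hosts that support it.
    if needs_vlan_tagging(vm):
        return [h for h in hosts if h.supports_vlan_tagging]
    return list(hosts)

if __name__ == "__main__":
    hosts = [Host("host-a", True), Host("host-b", False)]
    vms = [VM("vm-a", "95.141.241.10"), VM("vm-b", "185.0.0.10")]
    for vm in vms:
        targets = [h.name for h in eligible_hosts(vm, hosts)]
        print(f"{vm.name} ({vm.ip}) -> eligible hosts: {targets}")
```

Moving the prefixes to the provider's own ASN, as described in the report, removes the constraint entirely, which is why it is listed as the durable fix rather than smarter scheduling alone.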
stockholm1-8-vm (HA cluster), stockholm1-7-vm (HA cluster), and 1 other service are down https://status.hostup.se/incident/459797 Tue, 12 Nov 2024 03:36:37 -0000 https://status.hostup.se/incident/459797#d0b0e74eadea45ebb766130ae0ffe6fbaaa90d9743ecd8cc17e8b2412ab94620 stockholm1-8-vm (HA cluster), stockholm1-7-vm (HA cluster), and stockholm1-4-vm (HA cluster) recovered. stockholm1-8-vm (HA cluster), stockholm1-7-vm (HA cluster), and 1 other service are down https://status.hostup.se/incident/459797 Tue, 12 Nov 2024 03:35:53 -0000 https://status.hostup.se/incident/459797#76180860b39180c27cfc45c76dc10c9c6633d37b3bd62bccbd2ba3fbbffdcf04 stockholm1-8-vm (HA cluster), stockholm1-7-vm (HA cluster), and stockholm1-4-vm (HA cluster) went down. Webbhotell reverse proxy, DNS Namnserver 1, and 2 other services are down https://status.hostup.se/incident/458931 Sun, 10 Nov 2024 07:47:20 -0000 https://status.hostup.se/incident/458931#804dc73786a38d4e807706351c3c2d977d7b73c7359a576c44046baa5794a722 Webbhotell reverse proxy recovered. Webbhotell reverse proxy, DNS Namnserver 1, and 2 other services are down https://status.hostup.se/incident/458931 Sun, 10 Nov 2024 07:37:49 -0000 https://status.hostup.se/incident/458931#6bdf3e8cd61a22b2dcefd49c3124c75078f3ea734cd91215c1be9666e75f25b9 Webbhotell reverse proxy went down. stockholm1-9-vm (HA cluster) and stockholm1-6-vm (HA cluster) are down https://status.hostup.se/incident/445407 Wed, 16 Oct 2024 15:09:36 -0000 https://status.hostup.se/incident/445407#d61253fe3fadac50c66b01d39402175eaa59ec82af0f4fb9695d0570786c5229 stockholm1-9-vm (HA cluster) recovered. stockholm1-9-vm (HA cluster) and stockholm1-6-vm (HA cluster) are down https://status.hostup.se/incident/445407 Wed, 16 Oct 2024 15:09:00 -0000 https://status.hostup.se/incident/445407#9665a41ed312ba511e2a3d71a078859a0dd1f5fad552817068616ad6b1f74789 stockholm1-9-vm (HA cluster) went down.
stockholm1-9-vm (HA cluster) and stockholm1-6-vm (HA cluster) are down https://status.hostup.se/incident/445407 Wed, 16 Oct 2024 07:16:08 -0000 https://status.hostup.se/incident/445407#85b97f456e17d74d7b4fe577d650fbe6a5ff9405b2d593b859b9bd1a86edc35d stockholm1-6-vm (HA cluster) recovered. stockholm1-9-vm (HA cluster) and stockholm1-6-vm (HA cluster) are down https://status.hostup.se/incident/445407 Wed, 16 Oct 2024 06:25:15 -0000 https://status.hostup.se/incident/445407#85468b13ff8e4340309e23fddfb37b35fa02d135a499ff593d93cefc67b44ec8 stockholm1-6-vm (HA cluster) went down. Webbhotell reverse proxy and epsilon are down https://status.hostup.se/incident/427349 Wed, 11 Sep 2024 01:59:02 -0000 https://status.hostup.se/incident/427349#cd0ad1abae314512b91ab86dea4f831ecec8a6105fa09f50790f60d0abbf43c4 Webbhotell reverse proxy recovered. Webbhotell reverse proxy and epsilon are down https://status.hostup.se/incident/427349 Wed, 11 Sep 2024 01:43:00 -0000 https://status.hostup.se/incident/427349#a3c90cc8c9933ecac750c9a5ee073ced22ec4f64247776a4ebbd12308c98197f epsilon recovered. Webbhotell reverse proxy and epsilon are down https://status.hostup.se/incident/427349 Wed, 11 Sep 2024 01:40:30 -0000 https://status.hostup.se/incident/427349#d8a91aa6aec0fa825761c8087536f90906d31ebee922c003101ff367746da10a epsilon went down. Webbhotell reverse proxy and epsilon are down https://status.hostup.se/incident/427349 Wed, 11 Sep 2024 01:04:10 -0000 https://status.hostup.se/incident/427349#b2c8b9b0761f421668b9b83bcd9578541c9232cdff63cc158a22b3fc73ce4ca8 Webbhotell reverse proxy went down. theta is down https://status.hostup.se/incident/424332 Wed, 04 Sep 2024 22:25:51 -0000 https://status.hostup.se/incident/424332#cccb1ac839fdd60cbcbd1c8c0fc619e4fa028359e6adb8fa0a475247c00d0442 theta recovered. theta is down https://status.hostup.se/incident/424332 Wed, 04 Sep 2024 21:52:57 -0000 https://status.hostup.se/incident/424332#26f6c240126c9320d9e3829991a9f32f640a2592efaaf376229b7b65db285c76 theta went down. DNS Namnserver 2 and stockholm1-1-vm are down https://status.hostup.se/incident/419432 Mon, 26 Aug 2024 18:51:07 -0000 https://status.hostup.se/incident/419432#3204fe441ea81624918fe539031be5dfe9749fe874a899c7f05877e7199633d8 stockholm1-1-vm recovered. DNS Namnserver 2 and stockholm1-1-vm are down https://status.hostup.se/incident/419432 Mon, 26 Aug 2024 16:44:51 -0000 https://status.hostup.se/incident/419432#471a3dc84858f129b905b9a276c99c9cfa76925f14c1b9d77991e290d5670b10 stockholm1-1-vm went down. epsilon and eta are down https://status.hostup.se/incident/412894 Mon, 12 Aug 2024 22:38:47 -0000 https://status.hostup.se/incident/412894#1e312c8dd93398bec3b6a47ba59a250474d11236fe46f0a2e32b3c5979e94e08 eta recovered. epsilon and eta are down https://status.hostup.se/incident/412894 Mon, 12 Aug 2024 22:37:52 -0000 https://status.hostup.se/incident/412894#35efcd8f37d4dc36d6203da348425ad3c1925296fb42e188daa74c2f8f45fa12 epsilon recovered. epsilon and eta are down https://status.hostup.se/incident/412894 Mon, 12 Aug 2024 22:36:41 -0000 https://status.hostup.se/incident/412894#a6c0f1f0244e8cf63cb0c115b8176dc4aa3438e912c45cba8f894b2bd6ccfb87 eta went down. epsilon and eta are down https://status.hostup.se/incident/412894 Mon, 12 Aug 2024 22:36:20 -0000 https://status.hostup.se/incident/412894#d90e84365e97e0c65786a4a49ad7973098d0a0f39fb4003171c13d6212f8de2a epsilon went down. 
epsilon, zeta, and 1 other service are down https://status.hostup.se/incident/393136 Wed, 03 Jul 2024 16:03:20 -0000 https://status.hostup.se/incident/393136#f245d2131b2f1e37558af93a1442d97cd96b1d25d573e1088bb61a5e34ce58a5 epsilon recovered. epsilon, zeta, and 1 other service are down https://status.hostup.se/incident/393136 Wed, 03 Jul 2024 16:00:45 -0000 https://status.hostup.se/incident/393136#16730613f38cff9c675291f53dad5292bced35cc5079e2a6d1413f677470093b epsilon went down. epsilon, zeta, and 1 other service are down https://status.hostup.se/incident/393136 Wed, 03 Jul 2024 15:54:50 -0000 https://status.hostup.se/incident/393136#7f348a802f17ccc422670d9b1887e9c4a11c6c56048b109eaf2bd275acc76bca orion recovered. epsilon, zeta, and 1 other service are down https://status.hostup.se/incident/393136 Wed, 03 Jul 2024 15:50:15 -0000 https://status.hostup.se/incident/393136#897c9bf8af8ba24d718095c78f4d5a8f3476c4d59ffaa165cd22e649ff6f7023 zeta recovered. epsilon, zeta, and 1 other service are down https://status.hostup.se/incident/393136 Wed, 03 Jul 2024 15:34:19 -0000 https://status.hostup.se/incident/393136#f9f4ea5bdc311c33ff1cbe3159a834be8286fb45563bab71a9c727019c88cc71 zeta went down. epsilon, zeta, and 1 other service are down https://status.hostup.se/incident/393136 Wed, 03 Jul 2024 15:31:49 -0000 https://status.hostup.se/incident/393136#e1bda03caf2e4e85800b9faaac0fa7119c76e09d1aa5e373a4a915052a38de7d orion went down. Omega server restarting for 1-2 minutes to upgrade hardware https://status.hostup.se/incident/392403 Tue, 02 Jul 2024 09:00:00 -0000 https://status.hostup.se/incident/392403#c7893d628f1e4f040adc4636100c4d6b87be7e626957dcba3ec89342fb2e9b6e The server upgrade is complete. Sites are fast again and now have access to even more CPU. Omega server restarting for 1-2 minutes to upgrade hardware https://status.hostup.se/incident/392403 Tue, 02 Jul 2024 08:54:00 -0000 https://status.hostup.se/incident/392403#c9d03a6a81f0e0ed606f46e8e70740504c1baa73a097a11ec84fd6537a5a3e92 The Omega server is restarting for 1-2 minutes to upgrade hardware. zeta is down https://status.hostup.se/incident/385518 Mon, 17 Jun 2024 14:55:57 -0000 https://status.hostup.se/incident/385518#bf9ee452942dadc3d02a4f07e78cb5bd44286f8c8dc66bf7bd91e4dd932a1fa3 zeta recovered. zeta is down https://status.hostup.se/incident/385518 Mon, 17 Jun 2024 14:50:03 -0000 https://status.hostup.se/incident/385518#07e667c6f62b87cefdd54b57a4f38fb53f80bf6ab23ecfe1d1e410fb3d379f4c zeta went down. Webbhotell reverse proxy is down https://status.hostup.se/incident/383538 Thu, 13 Jun 2024 02:22:00 -0000 https://status.hostup.se/incident/383538#de85aeb5d5eb031fe9100cb70e5bb0d81641a71ae2ea41fb22c0b1bed8e36392 Update finished.
Webbhotell reverse proxy is down https://status.hostup.se/incident/383538 Thu, 13 Jun 2024 02:19:00 -0000 https://status.hostup.se/incident/383538#21b71688904ee8c78a33f196e42e8283a2841af9fd5c897cc1abb7e9c9bcc452 We are updating the reverse proxy system (centralized login). MySQL server sometimes unavailable during morning 05-09 https://status.hostup.se/incident/379471 Thu, 16 May 2024 07:08:00 -0000 https://status.hostup.se/incident/379471#338088aecbba9fa52fc74baf8677f2a49b4f1575b0e9776f54fecdb49dd44d9c The issue was resolved by downgrading the MySQL version and later applying a configuration update to make it work with the new version. MySQL server sometimes unavailable during morning 05-09 https://status.hostup.se/incident/379471 Thu, 16 May 2024 03:05:00 -0000 https://status.hostup.se/incident/379471#05e27116ed1a2cd90c7cd918e610c9353515f441eb59df26f87d89fc7b0d585e Due to an automatic upgrade of MariaDB 10.6, the MySQL server was unavailable for some users, so only cached pages could be visited (not the backend, such as /wp-admin). For users without page caching enabled, WordPress printed an error. All other services were unaffected. To avoid this happening in the future, we've disabled automatic updates of critical services and will instead schedule a service window, allowing us to quickly resolve any issues.
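One common way to implement "disabled automatic updates of critical services" on a Debian/Ubuntu host is to put the database packages on hold so routine apt upgrades skip them until a scheduled window. The snippet below is a minimal sketch under that assumption, not Hostup's actual change; it only wraps the standard apt-mark hold/showhold commands, and the package list is illustrative.

```python
#!/usr/bin/env python3
"""Minimal sketch: hold MariaDB packages so routine upgrades skip them.

Assumes a Debian/Ubuntu host with apt; run as root. The package names are
illustrative and should match what `dpkg -l | grep mariadb` reports.
"""
import subprocess

CRITICAL_PACKAGES = ["mariadb-server", "mariadb-client"]  # illustrative list

def held_packages() -> set[str]:
    # `apt-mark showhold` lists packages currently pinned against upgrades.
    out = subprocess.run(["apt-mark", "showhold"], check=True,
                         capture_output=True, text=True).stdout
    return set(out.split())

def hold(packages: list[str]) -> None:
    already = held_packages()
    to_hold = [p for p in packages if p not in already]
    if to_hold:
        # `apt-mark hold` keeps apt from upgrading these packages in routine runs.
        subprocess.run(["apt-mark", "hold", *to_hold], check=True)
    print("held:", sorted(held_packages()))

if __name__ == "__main__":
    hold(CRITICAL_PACKAGES)
    # During a scheduled maintenance window, release with: apt-mark unhold <pkg>
```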
Webbhotell reverse proxy, omega (cpanel), and 4 other services are down https://status.hostup.se/incident/321122 Sat, 03 Feb 2024 04:37:00 -0000 https://status.hostup.se/incident/321122#2617993a6f74b9a668438f2dae77758ece1aaad988977559daa8fe7fd45810ca Post-mortem: LiteSpeed shut down the web server after an IP change, because it then considered the license invalid. The issue has been resolved. During the night of Saturday, 01:09-05:37, the web server was restarting unexpectedly. Webbhotell reverse proxy, omega (cpanel), and 4 other services are down https://status.hostup.se/incident/321122 Sat, 03 Feb 2024 02:48:00 -0000 https://status.hostup.se/incident/321122#ce80e22bb29c4a9239493583125f78a519299b611c7b1453fcbf0899f69baf6a omega (cpanel) went down. Webbhotell reverse proxy, omega (cpanel), and 4 other services are down https://status.hostup.se/incident/321122 Sat, 03 Feb 2024 02:44:29 -0000 https://status.hostup.se/incident/321122#113f03ffb6d7851ae3ebabcc06cb5031e6fd05a30e3df191558a3e6b8038a2d8 omega (cpanel) recovered. Webbhotell reverse proxy, omega (cpanel), and 4 other services are down https://status.hostup.se/incident/321122 Fri, 02 Feb 2024 23:57:00 -0000 https://status.hostup.se/incident/321122#6724fff03affff46b6e3ea81a4d659da142263a727ae2c6c4d3e29481cba595f omega (cpanel) went down. Webbhotell reverse proxy, omega (cpanel), and 4 other services are down https://status.hostup.se/incident/321122 Fri, 02 Feb 2024 01:41:24 -0000 https://status.hostup.se/incident/321122#024f71290b81032fda05272747d8acc6987b3282f76d710ed7e0517259a8749f Webbhotell reverse proxy recovered. Webbhotell reverse proxy, omega (cpanel), and 4 other services are down https://status.hostup.se/incident/321122 Fri, 02 Feb 2024 01:36:52 -0000 https://status.hostup.se/incident/321122#be348c79d9eb6621e06cdceb392a0560345c811c932af941b4d8ab9a775cfbf2 Webbhotell reverse proxy went down. zeta is down https://status.hostup.se/incident/320779 Thu, 01 Feb 2024 09:54:44 -0000 https://status.hostup.se/incident/320779#f43c266a98404f82c3c6f1b318324b35fac27ee405a027f78f7eb816945e5b6f zeta recovered. zeta is down https://status.hostup.se/incident/320779 Thu, 01 Feb 2024 09:53:27 -0000 https://status.hostup.se/incident/320779#caf321bb2457f0e02fa978d8186b2dbb50ebfcc4ca06e59d3f8252df4aef47cc zeta went down. orion is down https://status.hostup.se/incident/312696 Sun, 14 Jan 2024 22:41:19 -0000 https://status.hostup.se/incident/312696#da3013b073e540c470a8822744443c250a5ebc3c9f2628a6eee3f5c687ea57aa orion recovered.
orion is down https://status.hostup.se/incident/312696 Sun, 14 Jan 2024 22:40:30 -0000 https://status.hostup.se/incident/312696#bddc4d45a33724d2f178a54a3e3cac96fcd53dc8c89197b6b9ca4d1346c7a3b4 orion went down. DNS Namnserver 2 and VPS are down https://status.hostup.se/incident/308464 Thu, 04 Jan 2024 18:55:08 -0000 https://status.hostup.se/incident/308464#46de775b203fefc74cc949153ad027586d589a20d12bf22e971bf8bd6f099a24 VPS recovered. DNS Namnserver 2 and VPS are down https://status.hostup.se/incident/308464 Thu, 04 Jan 2024 18:54:48 -0000 https://status.hostup.se/incident/308464#dfda9754731f0eecefae5a22aaf39b614da6cd75669e6d576858d7f2c9ba6d04 VPS went down. DNS Namnserver 2 and VPS are down https://status.hostup.se/incident/308464 Thu, 04 Jan 2024 18:36:32 -0000 https://status.hostup.se/incident/308464#e618e37d2b95250cbb914ddab9bd2e9a1f4bc20f64abd07ebc5d06b01d18a72c DNS Namnserver 2 and VPS recovered. DNS Namnserver 2 and VPS are down https://status.hostup.se/incident/308464 Thu, 04 Jan 2024 17:59:33 -0000 https://status.hostup.se/incident/308464#c88478ea34a93dcb62d4590de93f7301a329226d57072d28b19c5e2cc42eed26 VPS went down. Webbhotell reverse proxy, DNS Namnserver 1, DNS Namnserver 4, and 1 other service are down https://status.hostup.se/incident/306240 Fri, 29 Dec 2023 14:44:46 -0000 https://status.hostup.se/incident/306240#f07a7a1e450dcb578f9b05de80e3e996b83bd008330723686503972ab8a5a8f5 Webbhotell reverse proxy recovered. Webbhotell reverse proxy, DNS Namnserver 1, DNS Namnserver 4, and 1 other service are down https://status.hostup.se/incident/306240 Fri, 29 Dec 2023 14:43:34 -0000 https://status.hostup.se/incident/306240#e56f373935e751bc57b81c164985a508748215f150fff12834f9d6f578ff02d5 Webbhotell reverse proxy went down. epsilon is down https://status.hostup.se/incident/291639 Fri, 24 Nov 2023 11:43:15 -0000 https://status.hostup.se/incident/291639#096051d7e91eb82515967dfb6d54631e573e9313a00133b1dff4fecde217b98e epsilon recovered. epsilon is down https://status.hostup.se/incident/291639 Fri, 24 Nov 2023 11:40:21 -0000 https://status.hostup.se/incident/291639#da13c4b4caee4ddebbe72af93c8c57d68f0bf6bfc8af3cb93564a307468a65fa epsilon went down. theta is down https://status.hostup.se/incident/259173 Thu, 14 Sep 2023 11:33:00 -0000 https://status.hostup.se/incident/259173#ef893e7ba2c38c3299034f78ff6d87d7f2dce9480d019c56e6d816994c05b0d2 Limited downtime for some PHP processes. The issue was resolved by our technicians within 15 minutes. theta is down https://status.hostup.se/incident/259173 Thu, 14 Sep 2023 11:09:56 -0000 https://status.hostup.se/incident/259173#4c53238a4c08f22a8f3315fe0360498646ce8a76f61c887f80dd7bef0bb71960 theta recovered. theta is down https://status.hostup.se/incident/259173 Thu, 14 Sep 2023 10:59:00 -0000 https://status.hostup.se/incident/259173#48b5d9c3a35e66f94ee063ac66e5922f86c00ada570a6410a5012707e15b6cab theta went down.