Latest Update:
Wed Jul 9 11:30 2025 UTC:
As per our previous update, full connectivity has since been restored to this data center. The total outage duration for this incident was 4 hours and 46 minutes. Customers in our other data centers were unaffected. Customers with the optional Emergency Access addon were also able to retain access to their bookings throughout this incident. As part of our transparent approach to communicating with our customers, we'll provide as much information about this incident as we have.
For context, this incident occurred 'upstream' of our EU data center (Interxion/DRTs). Everything remained operational within the data center itself, and there was no risk of data loss. As this incident involved an upstream provider (Equinix), it was beyond our direct control and that of our hosting provider.
Consequently, our understanding of this incident is based on information provided to us by our hosting provider, which they have obtained from the data center, who in turn obtained it from their upstream provider.
It is our understanding that there were two unconnected events which, in combination, led to this outage. Neither cause on its own would have been sufficient to cause the outage.
The first event was routine scheduled power maintenance at the upstream provider's facility, Equinix AMS1. The second was the failure of a Power Distribution Unit at that facility during this routine maintenance window. This resulted in a loss of power at the upstream provider. We are unaware at this stage whether this was a partial or total loss of power. Regardless, the knock-on effect was that Equinix was unable to route traffic into or out of its facility via a number of transit providers, including PacketFabric, Unitas, and INAP.
Our hosting provider does have two redundant endpoints that connect to the affected data center; however, both were offline and unreachable due to this incident at Equinix.
Our hosting provider has this morning confirmed that they "have additional redundant paths coming online as soon as next week along with a major network overhaul planned to start this month". These plans are part of a $500,000 hardware investment by our hosting provider into the EU data center, which is expected to begin rolling out next month. Additional redundant paths at the data center will help mitigate an issue of this nature in the future.
Whilst outages like this are thankfully extremely rare, we do appreciate the impact this one has had this morning on our EU-hosted customers.
Throughout this incident, we've been keeping customers fully informed with regular updates here on our service status site, as well as updates on our X feed.
Whilst our team were also promptly responding to emails and live chat from customers this morning within minutes of receipt, we would ask that, should you ever be unable to access your cloud-hosted MIDAS system, you please check our X feed and service status pages in the first instance before reaching out via email/live chat. That's because we can more readily keep everyone up-to-date via X and our dedicated service status site.
Despite this isolated incident, the overall uptime of our MIDAS network has remained over 99.9% now for over a decade. That said, for extra peace of mind and protection, we also offer an optional Emergency Access addon to all cloud-hosted MIDAS customers. This addon allows you to access a backup of your MIDAS system in the event that you're ever unable to access your primary system. To add Emergency Access, simply upgrade your MIDAS system.
Thank you again for your patience if your MIDAS access was affected for a time this morning, and we apologize again for the inconvenience and frustration this may have caused.
Previous Updates:
Wed Jul 9 11:00 2025 UTC:
We're pleased to report that full connectivity has now been restored. We will post a further update here in due course once we have more information on the exact cause of this outage, but for now, we're seeing full access restored to this data center.
Thank you again for your patience these past few hours, and we apologize again for the inconvenience this outage caused.
Wed Jul 9 08:37 2025 UTC:
We appreciate your continued patience during this outage affecting customers in our EU Data center. If you're just joining us, here's a brief recap on what's happened this morning:
- Shortly before 04:30 UTC this morning, our pro-active monitoring detected a connectivity issue to our EU Data center
- At 05:03 UTC our hosting provider confirmed the issue was upstream of the data center (i.e. all equipment in the data center itself was fully operational)
- At 06:12 UTC the affected upstream provider, Equinix, confirms there's been a "power event".
- At 06:36 UTC our hosting provider reports that Equinix have isolated the issue and are working on a fix.
- At 08:00 UTC our hosting provider reports the problem is due to a Power Distribution Unit failure in combination with Equinix performing power maintenance on one of the feeds.
- At 08:10 UTC our hosting provider reports Equinix are working on getting temporary power to bring systems back online
From our side, we've been replying to all customer emails on this issue within 3 minutes of receipt and we continue to provide regular updates here on this page.
Wed Jul 9 08:10 2025 UTC:
Our hosting provider reports that the upstream provider (Equinix) is "working on getting temporary power to bring up the critical equipment which will bring us online". Again, frustratingly, no ETA is forthcoming from our hosting provider's affected upstream provider, but we will continue to post regular updates to this page just as soon as we receive them through our hosting provider. Once again, we sincerely apologize for the impact this is having on your operations this morning.
Wed Jul 9 08:00 2025 UTC:
Our hosting provider's current understanding from the upstream provider is that "the problem has been isolated down to a PDU [Power Distribution Unit] failure in combination with Equinix [the upstream provider] performing power maintenance on one of the feeds". Our hosting provider comments "If that holds true then either relocating the equipment to a new cabinet, or replacing the PDU would be the next steps". At this time, however, our hosting provider still hasn't received a firm ETA on service restoration from their upstream provider.
Wed Jul 9 07:52 2025 UTC:
We appreciate the impact that this is having on some customers' ability to access their hosted MIDAS systems this morning. As this issue originates outside of our control or our hosting provider's control, our hosting provider is reliant on updates from the data center's upstream provider. Whilst the upstream provider has confirmed that they are "working on restoring options at this time", they have so far not provided our hosting provider (and therefore us) with an ETA. We also appreciate that not having an ETA adds to your (and indeed our) frustration, but we are hopeful that this issue will be resolved shortly. In the meantime, we will continue to provide updates here as we receive them.
As a reminder, the Data Center itself and our servers are operational, and with this being a network outage there is no risk to your data or its integrity. As soon as network connectivity has been restored, access will resume as normal.
Wed Jul 9 07:35 2025 UTC:
Latest update from our hosting provider: "Equinix confirmed they are doing a power maintenance at the site but the issue is one of the PDUs (Power Distribution Units). We are working on restoring options at this time".
Wed Jul 9 06:36 2025 UTC:
Our hosting provider reports that the root issue is being worked on now by the upstream provider, and has been isolated to the core equipment located in the Equinix AM1 (upstream) facility. With this being a network outage, there is no risk to your data or its integrity, and as soon as network connectivity has been restored, access will resume as normal.