Latest Update:
Wed Mar 19 00:20 2025 UTC:
We, along with our hosting provider and server software provider, have continued to monitor the status of our node in our West Coast (US) data center.
The move to different physical hardware on Monday appears to have reduced the volume of Cloudflare 524 (Origin Timeout) errors; however, we have still observed a few sporadic errors of this nature since the migration.
Because these occurrences are very occasional and few and far between, investigating them has been a frustratingly slow process.
However, within the past hour, we're hopeful that there may have finally been a bit of a breakthrough! We'll go into more detail in a future update should this turn out to be the case, but to test this we've just seamlessly pushed a small update out to all hosted customers in our West Coast (US) data center.
We are going to closely monitor this over the coming hours to see if this has indeed fully resolved the issue, and we will of course provide a further update here in due course.
In the meantime, we'd like to apologize again if you've been affected by recent performance issues in our West Coast data center, and we thank you for sticking with us whilst lots of people have worked tirelessly behind the scenes on this issue.
Previous Updates:
Mon Mar 17 22:02 2025 UTC:
The server software provider believes that this may be an OS (Operating System) issue. Consequently, our hosting provider has now moved (as of 18:50 UTC) the affected VPS (Virtual Private Server) to a different physical hardware node to see if this resolves the issues.
We are continuing to monitor the situation and will provide a further update shortly.
Mon Mar 17 00:16 2025 UTC:
Despite the earlier config change by the server software provider, we continue to observe a small number of Cloudflare 524 (Origin Timeout) errors. As per previous updates, these 524 errors are still only being observed on connections routed through Cloudflare's Auckland (AKL) Data Center.
Our hosting provider and server software vendor are continuing to investigate, and we will provide further updates accordingly.
Sun Mar 16 19:34 2025 UTC:
We're pleased to report that the server software provider has identified a server component that they believe was resulting in intermittent connection drops, leading to apparent timeouts and 'hangs' during the course of normal usage of customers' MIDAS systems hosted in our West Coast (US) data center.
This non-critical server component has been temporarily disabled whilst further investigations into the root cause are undertaken.
We will continue to monitor this server, and will provide a further update in due course...
Fri Mar 14 08:08 2025 UTC:
Our hosting provider has engaged the external assistance of the underlying server software provider as they continue to investigate the root cause of intermittent connection drops, leading to apparent timeouts and 'hangs' during the course of normal usage of customers' MIDAS systems hosted in our West Coast (US) data center.
We are hopeful that this issue will shortly be resolved, and we will continue to provide further updates here in the meantime as their investigation continues...
Thu Mar 13 20:18 2025 UTC:
The node update and reboot do not appear to have resolved the intermittent connectivity issues affecting a small subset of requests made to hosted MIDAS systems residing on this node.
Our hosting provider continues to investigate and is actively working to resolve this issue.
Thu Mar 13 19:02 2025 UTC:
The reboot of our West Coast (US) node has now been completed. We and our hosting provider are closely monitoring this node post-reboot, and we'll provide further updates here in due course...
Thu Mar 13 18:49 2025 UTC:
We'll shortly be performing a reboot of our West Coast (US) node - this may result in a short downtime (of approx 5 mins) for customers. Our hosting provider wishes to apply some software updates. Whilst this would normally be scheduled for out of hours, it is hoped that this may resolve the intermittent issues recently experienced by some customers who have their MIDAS systems hosted in this data center, so we've given the go-ahead for this reboot and software update to happen imminently.
We will keep customers updated on this page...
Thu Mar 13 07:45 2025 UTC:
Whilst we continue to await Cloudflare's response, we've asked our hosting provider to look again at a node in their West Coast (US) Data Center, as this node is the final target that those experiencing 524 timeout errors are trying to reach. We've identified a potential contributing factor to these random and intermittent 524 errors, and have asked our hosting provider to investigate further.
We apologize if you're still affected by intermittent 'hangs' or timeouts in your MIDAS system, and we are working hard to understand the root cause in order that the issue can be resolved.
We will continue to provide updates on this page.
Wed Mar 12 22:59 2025 UTC:
We are still awaiting a response from Cloudflare in relation to this issue. We have chased them again today, and will provide a further update here once we've received their response or if the situation changes.
Wed Mar 05 09:25 2025 UTC:
Whilst we await Cloudflare's response, we have taken the additional step of rebooting all servers which host the MIDAS systems of customers who have been connecting via Cloudflare's Auckland (AKL) data center, and who we can see have experienced at least one 524 error response in the past week. We have continued to observe intermittent yet infrequent 524 errors in our Cloudflare analytics since these server reboots. As the errors have persisted even after rebooting our own servers, we are therefore confident that this is a Cloudflare issue relating specifically to their Auckland (AKL) Data Center.
For context, over the past 7 days, we can see from our Cloudflare logs that a total of 236 requests returned 524 errors through the AKL data center, out of a total of 36.08k requests served through AKL over the same period. This represents 0.65% of requests through this Cloudflare Data Center.
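For clarity, the 0.65% figure above is simply the ratio of 524 responses to total AKL-routed requests over the same 7-day window; a minimal illustrative calculation using the numbers quoted above:

```python
# Illustrative only: the AKL error-rate figure quoted above.
errors_524 = 236          # 524 responses through AKL over the past 7 days
total_requests = 36_080   # 36.08k total requests served through AKL
print(f"{errors_524 / total_requests:.2%}")  # -> 0.65%
```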
Mon Mar 03 01:02 2025 UTC:
Our hosting provider has not been able to identify anything with their network/servers/infrastructure which may explain the intermittent 524 errors produced by Cloudflare. It is their current assessment that the issue is Cloudflare related. We have also analyzed Cloudflare's logs, and have discovered that all instances of 524 errors over the past few days have been on connections that were routed through Cloudflare's Auckland, New Zealand (AKL) data center. Connections to our servers which are routed via any of Cloudflare's other global data centers appear unaffected. There is presently no indication on Cloudflare's Service Status site of any issues relating to AKL. We have however raised this issue with Cloudflare and are awaiting their response. We will provide a further update upon Cloudflare's response or should the situation change in the meantime...
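As an aside, customers who wish to check which Cloudflare data center their own connections are being routed through can look at the CF-RAY response header that Cloudflare adds to each request: the portion after the final hyphen is the three-letter code of the data center that handled it (e.g. "AKL" for Auckland). A minimal sketch in Python; the URL shown is a placeholder, not a real hosted address:

```python
# Minimal sketch: read the CF-RAY response header to see which Cloudflare
# data center (colo) handled a request. The URL below is a placeholder.
import requests

resp = requests.get("https://your-hosted-midas.example.com/")
cf_ray = resp.headers.get("CF-RAY", "")            # e.g. "92a1b2c3d4e5f6a7-AKL"
colo = cf_ray.rsplit("-", 1)[-1] if "-" in cf_ray else "unknown"
print(f"HTTP {resp.status_code} via Cloudflare data center: {colo}")
```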
Sun Mar 02 20:35 2025 UTC:
The original reporter has now provided additional information and a screenshot of the intermittent error they're receiving. It appears to be a Cloudflare 524 (Timeout) error. We have reached out to our hosting provider to investigate, as we are unable to replicate this issue ourselves or see any obvious or likely causes of this. We will provide a further update once our hosting provider has investigated further...
Sun Mar 02 20:28 2025 UTC:
We have received a third-hand report from the original customer that this issue is also being experienced by another MIDAS customer. Whilst this customer hasn't reached out to us themselves, we are continuing to actively investigate. However, at this time, we are unable to reproduce this issue. All our servers and network are currently running normally. We will provide further updates as necessary...
Wed Feb 26 20:08 2025 UTC:
We have received an isolated report of intermittent slow MIDAS performance, freezing, and timing out from a customer located in Wellington, New Zealand. Our active monitoring has not detected any issues with our network or servers at this time. We're currently investigating and will provide further updates accordingly...