Update 1: The server and all services are back online. Thank you for your patience.
We are performing emergency maintenance on this server, which will make most or all of its services inaccessible.
UPDATE: The network appears to be back to normal and we are awaiting further details from the upstream provider whose issue caused some customers to drop connections.
We are currently looking into an issue with one of our upstream providers that could be affecting some routing.
UPDATE: Migration has been completed successfully.
We will be migrating the accounts from the server listed as S02 to our new servers in our Coventry data centre.
If you use A:Records please make sure to update them between the listed times to use this IP: 18.104.22.168
UPDATE 1: After making some JVM changes the service appears to be stable, but we will continue to monitor it closely over the next couple of days.
We are investigating an issue with the ColdFusion services on our Hawking instance (S04) that is causing the service to suddenly stop.
UPDATE: The migration completed without any issues and all accounts are now on the node Brunel. Please make sure you have updated your A:Records if you use them. We will be shutting down the old server shortly.
We will be migrating all accounts from the server listed as S14 to a new server code named: Brunel
Scheduled Date/Time: 08/12/2019 20:00 (Timezone: London, UK)
If you use A:Records to point your domain to our servers you will need to update them to point to: 22.214.171.124
16/11/2019 – 21:30 – We are currently experiencing an issue with services at our Coventry site, further updates will follow shortly.
21:45 – Our onsite engineers have found BGP sessions to be flapping between our core routers in Coventry and London, further updates will follow shortly.
22:01 – Our onsite engineers have identified an attack against our core routing infrastructure at this site and are working to mitigate this.
22:31 – Our engineers have been unable to mitigate the attack against our routing infrastructure and we are still working on the issue. Service has been restored for some customers however the network is currently still unstable.
23:52 – Our engineers are going to bring forward the replacement of our routing equipment at our Coventry site, which was scheduled for later this month under a planned maintenance window, as we believe the new equipment should be better placed to deal with the attack. We hope to have service fully restored to all customers by 04:00 at the latest.
17/11/2019 – 01:22 – The new routing equipment has been racked and the configuration is being loaded onto it. Customers should expect further service disruption in the next thirty minutes when we move them to the new routing equipment.
02:32 – Service should now be restored to the majority of customers at our Coventry site and the new routing equipment is successfully mitigating the attack on our equipment.
04:25 – The remaining customers should now be back online at our Coventry data centre; customers are requested to open a support ticket if their service remains offline.
UPDATE: Since our changes, the Lucee service appears to be back to normal stability. We will continue to monitor the service closely. Thank you for your patience.
We will be performing some adjustments to our UK Lucee servers to correct a number of reported issues around the stability of the Lucee service.
Network issues were reported at our Coventry, UK data centre, which the data centre team worked to resolve. We are awaiting a full report from them to share with our customers.
We are sorry for the downtime seen by our customers and we will be working with the data centre to see what actions can be put in place to prevent this from happening again.
UPDATE 11: VMs have been restored, if you face any issues please open or update your support tickets so our team can investigate. Thank you so much to all our affected customers for their patience and understanding.
UPDATE 10: The restores are processing well, due to the amount of data this can take some time but we are working on this as quickly as we can.
UPDATE 9: We have been able to get a XEN server online and are now starting to restore accounts.
UPDATE 8: We are continuing to work on the issue and hope to have our new XEN server online shortly. There was an issue within the XEN setup that caused our tests to fail.
UPDATE 7: Final tests on our 3rd server setup are almost complete.
UPDATE 6: Due to some kernel issues we have booted up a 3rd server as an alternative; it is on a different network and will require new IPs to be allocated. We will update clients once we have more details.
UPDATE 5: We are continuing our setup of our alternative XEN Server. We will post our next update as soon as possible.
UPDATE 4: We have our alternative XEN Server partitioned and the final setup stage processing now. Once done restores of data will begin.
UPDATE 3: Due to XFS corruption beyond repair, we will be restoring backups on a secondary node as soon as possible to get all customers' services back online.
UPDATE 2: We are continuing to run the XFS repair on the server. It is taking a little longer than expected, and we have our DC remote hands checking on it.
UPDATE: We are running a full XFS repair on the drives, as something appears to have become corrupted on the disk, preventing the server from booting properly into the OS.
We are currently investigating an issue with one of our new nodes at our Coventry DC (Node: Nelson). We are working on this with the highest priority.
FINAL UPDATE: The issue has been fully resolved.
The initial issue was due to the power circuit tripping out. The DC team worked to move our racks to the backup circuits to ensure power was restored quickly to the affected servers. After 15 minutes the main power supply was routed back to our racks.
We started to check and bring back online all servers that were offline. While doing this we found that the node Churchill didn't respond to our main controller's commands. After investigating, it was found to be booting from the flash memory on the server instead of the main hard drive controller. We reconfigured the BIOS and restarted the machine, which brought the node back, and once tested we brought the instances back online.
We will be performing an update on the BIOS to ensure the correct hard drive controller is loaded in case of any future failures in power. This update will be happening at 9PM UK, London time today (4th of June) and a network status item will be available for reference.
UPDATE 3: After resolving a linking issue to our racks and correcting a possible long term issue our team are focusing on resolving the issue with our Churchill node.
UPDATE 2: DC engineers are continuing to work on the issue with our racks as further issues were found. We hope to have this resolved shortly.
UPDATE 1: All servers apart from the Churchill node have come back online. We are working on the issue.
We are currently resolving an issue with our racks at the Coventry DC. Further updates to come.
During a routine review by our electrician, we identified a fault with the power distribution that supplies our racks at the Coventry data centre. There is a core distribution unit which needs to be replaced to ensure a stable service. This will require all power to our racks to be removed for about 60 seconds while the fault is fixed.
UPDATE: The migration went well and all accounts are now on the new server. We are now backing up all accounts on the old server before shutting it down.
We will be migrating customers from S10 server to newer servers. All affected customers will be updated via email, any customers using the Global Reseller Panel will have the details updated in their reseller control panel. Downtime will be minimal as the migration will be handled by the cPanel transfer.
New server IP: 126.96.36.199
UPDATE 1: We have resolved the issues and all services are back to normal status. Thank you for your patience.
We are investigating an issue with our US based Xen servers which dropped network services. We are working to resolve this as soon as possible.
UPDATE: All systems came back online shortly after the initial status update. If you find you are having any issues please do contact a member of the support team.
We detected a memory fault due to a faulty RAM module. It is being replaced now and services should be back online shortly.
We will be updating the server's BIOS to avoid boot-up issues loading the incorrect OS after unexpected downtime/shutdowns. Downtime will be less than 5min as only a reboot is required to apply the changes.
UPDATE1: Services are back online and running normally. Thank you for your patience.
We are running updates on the disk and memory services of our S03 Lucee server. A reboot is processing now to apply these updates. We hope to have services back online within the next 5min. We are sorry for the downtime caused.
Since our adjustments to the Lucee JVM all services appear stable. We will carry on monitoring the server closely and if any further issues occur we will open a new server status.
We have been monitoring the Lucee services on the server and they have been stable during the night. We are continuing to monitor any load spikes to resolve any issues. We will update this status further when we know more.
During off-peak hours (UK night time) we are seeing high Lucee load on the server, which appears to be causing the Lucee CFML services to stall. We are monitoring and working on finding a fix.
We are investigating a high load on our S03 server which appears to have been the cause of the server requiring a forced reboot.
Reboot complete and updates applied. Downtime less than 1 minute.
We will be running an update and reboot of the Churchill instance to apply the latest updates. Downtime will be less than 10 minutes.
We have been dealing with disk issues within the core of the S11 instance. If you are seeing issues, please open a support ticket and request a migration to our S03 Lucee server, which is on our new platform. Please note that S03 uses dedicated remote SQL servers, so in your Lucee datasources or connection scripts please make sure to use 'remotesql' instead of 'localhost' in your settings.
We will be migrating accounts from our S24 server to our latest Lucee S03 server. Downtime will be minimal as we will be performing a direct transfer of accounts.
New server IP: 188.8.131.52
Minor updates and a quick reboot of S14 to ensure stability of latest updates.
UPDATE 2: We can confirm all services are running normally and we now have CloudLinux running for better general performance and stability.
UPDATE 1: A fault with drive mappings was found to be causing unexpected downtime on the server; this is being fixed.
Upgrades being applied: CloudLinux & kernel updates.
Downtime: We will try to keep downtime to a minimum, but it will be intermittent over a few hours.
UPDATE 1: Our new servers are going through tests now, and we will be migrating customers to the new server in batches. We will contact those customers directly throughout the week. If you are still facing issues, please open a ticket to sales to request a migration sooner. Currently S02 services are running normally.
We are monitoring our S02 server due to intermittent slowness that has been detected. We already have plans in place to migrate this server to one of our new servers being racked this week. We will continue to monitor and resolve any reported issues.
UPDATE 9: S11 - Our team has recovered as much data as possible from our backups and the faulty S11 server. If you have SQL backups available, please send them to our tech team via a support ticket and we will upload them straight away with the highest priority. We are also ensuring other servers are not affected by the same backup faults and issues that caused S11 to fail. As we always recommend, please keep local backups in case of failures such as this. We will be investing heavily in new backup solutions on all shared services in the coming months to prevent such issues from happening again.
UPDATE 8: S11 - Our team continues to bring up the remaining websites, with most back online. If you continue to have issues and haven't opened a ticket, we highly recommend creating one in case the problem with your site is isolated.
UPDATE 7: S11 - To help speed up account restores, if you have local copies of backups please send them to the main support team so they can get your services back online quicker.
UPDATE 6: S11 - File restores have processed but SQL databases failed to restore correctly. We are looking at alternative restore options now.
UPDATE 5: S11 - As we continue to bring more accounts back online, if you use A:Records instead of our name servers we strongly recommend changing your domain's DNS to our name servers. That way, when we sync your domain to the new server IP addresses, your domain will already be configured. Our global DNS network name servers are:
UPDATE 4: S11 - Restores of accounts are proceeding from data located on the server and on our remote backup servers. We are having to process these backups manually, one by one. We will provide further updates as they come through.
UPDATE 3: S11 - We are attempting to restore available backups and overlay them with the latest data from the damaged server. Other systems are being worked on by our 3rd party software support teams to resolve the issues as soon as possible.
UPDATE 2: All services on our Archer node are back online apart from one shared service, S11. We are working on this issue as our top priority; we now have access to the data and are migrating it to a newer server to get all services back online as quickly as possible.
UPDATE 1: A number of our instances became unavailable; our team is working on this issue now with our 3rd party suppliers.
We are currently running checks and general maintenance on our Archer node, including the XEN services and SolusVM integration. You may see some services slow down, but this will be kept to a minimum.
UPDATE 1: Services are back online and we are investigating the cause of the network issue.
We are investigating an issue with our S13 server at the Sydney data centre that has caused it to fail.
UPDATE 2: All systems are OK and running normally. Thank you for your patience while we upgrade our services.
UPDATE 1: Services are coming back online and VM instances are running. Downtime averaged 5min to complete the updates and bring instances back online. We will update this ticket once we have completed our checks.
We are running a reboot of the XenServer node Darwin to apply updates and to correct integration issues with Virtualizor. Thank you for your patience.
UPDATE 1: We are still migrating accounts to our new server. Due to the number of sites and data it is taking longer than expected. We hope to have further updates soon.
On Thursday the 28th of June at 10PM we will be migrating all accounts from S17 to S14, which is based on much newer systems. If you use A:Records to point your domain to our servers, please update the IP to: 184.108.40.206
Thank you for your understanding while we process this migration.
UPDATE: The migration was completed, but during the migration and setup of ColdFusion DSNs a few existing DSNs were affected and locked out. If you have any issues with your DSNs, please contact support, who will recreate them for you. Thank you for your understanding.
On the 25th of June we will be migrating all US S12 ColdFusion 10 customers to our UK CF10 servers. As with our other US based CFML services, we are moving all accounts to our UK based data centres. This move will also help with future plans for new ColdFusion services (pending final decisions from management). If you are using A:Records to point to our servers, please make sure to update your domain's IP to point to: 220.127.116.11
Once the migration has been completed you will need to set up your ColdFusion DSNs via the CFManager, or by opening a support ticket if you prefer us to handle this for you - please note we will need the database details so we can set them up. Our transfer systems currently do not allow for migration of CF DSNs.
Thank you for your understanding.
On Thursday the 22nd of June at 10PM we will be migrating all accounts from S23 to S01, which is based on much newer systems. If you use A:Records to point your domain to our servers, please update the IP to: 18.104.22.168
Thank you for your understanding while we process this migration.
On the 27th of June at midnight (UK time) we will be migrating all US Lucee accounts to our UK data centre. Our CFML services have been moving to our UK data centres over the past few years, and now the final US based Lucee server will be moved. If you are using A:Records on your domain, please make sure to change them to the new server's IP: 22.214.171.124
Thank you for your understanding. If you have any questions please do contact a member of the team.
UPDATE: The migration has been completed and all customers' details have been confirmed as updated in their client portal. If you have any problems or questions, please contact a member of the team. Also please remember to update your A:Records if you do not use our nameservers. New server IP: 126.96.36.199
To resolve a number of performance issues we will be migrating our last cPanel shared hosting servers in Alexandria, USA to our new US location in New Jersey. If you are using A:Records instead of DNS/nameservers you will need to update the IP to: 188.8.131.52
We are sorry for the short notice on this migration and we hope to have it complete as quickly as possible.
UPDATE: Updates have been applied and total downtime was less than 2min. Thank you.
We have corrected the issue, which was due to another instance on the node consuming CPU and slowing the server.
We are investigating an issue with slowness in the Lucee service on S11. The root of the issue appears to be a drain on the CPU resources (known as CPU steal).
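For context, "CPU steal" is time a virtual machine was ready to run but the hypervisor gave the physical CPU to another guest. On Linux the cumulative steal counter is exposed as the eighth value on the `cpu` line of `/proc/stat`; a minimal sketch for reading it (field layout assumed from the standard procfs format):

```python
def cpu_steal_ticks(stat_path="/proc/stat"):
    """Return the cumulative 'steal' tick count from the aggregate cpu line.

    Field order on the cpu line is:
    user nice system idle iowait irq softirq steal guest guest_nice
    """
    with open(stat_path) as f:
        fields = f.readline().split()
    # fields[0] is the literal "cpu"; steal is the 8th counter (index 8).
    # Very old kernels may not report it, hence the length check.
    return int(fields[8]) if len(fields) > 8 else 0
```

Sampling this twice and dividing the delta by the delta of all counters gives the steal percentage that tools like `top` report under the `st` column.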
We will be migrating our final Sydney servers to our new servers on the 3rd of May at 7PM UK, London time.
New server IP: 184.108.40.206
We will be migrating our final Hong Kong servers to our new Singapore servers on the 4th of May at 7PM UK, London time.
New server IP: 220.127.116.11
On the 27th of April we will be finishing the final move from our Amsterdam servers to our latest German based servers. This is in line with our aims to focus our offering to the best possible locations for speed and data centre support.
Below are the final two servers, S04 and S18, that will be migrated to server S05 with the IP: 18.104.22.168
S04:Â 22.214.171.124 =>Â 126.96.36.199
S18:Â 188.8.131.52 =>Â 184.108.40.206
All data, including emails, website files and databases, will automatically be migrated; there is nothing for you to do unless you are using A:Records on your domains. Please see below for details on IP/DNS changes.
If you are using A:Records to point your domain to a server you will need to update this to point to the new server's IP: 220.127.116.11
Using DNS/Nameservers Records?
You will not need to do anything as we will take care of the DNS change on our side.
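As a quick sanity check after updating an A:Record (and once the old record's TTL has expired), you can confirm what a domain currently resolves to. A minimal sketch using the system resolver; the domain and IP in the usage comment are placeholders:

```python
import socket

def points_to(domain: str, expected_ip: str) -> bool:
    """Resolve the domain's A record via the system resolver
    and compare it to the expected server IP."""
    try:
        return socket.gethostbyname(domain) == expected_ip
    except socket.gaierror:
        # The domain does not currently resolve at all.
        return False

# Example (hypothetical domain, announced server IP):
# points_to("example.co.uk", "220.127.116.11")
```

Note that `gethostbyname` goes through the local resolver cache, so a stale answer can persist until the previous record's TTL runs out.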
Thank you for your understanding and we hope you enjoy the new services at our Germany location.
New server IPs, cPanel links and statuses are listed below:
18.104.22.168 => 22.214.171.124 (cPanel Login)
126.96.36.199 => 188.8.131.52 (cPanel Login)
184.108.40.206 => 220.127.116.11 (cPanel Login)
18.104.22.168 => 22.214.171.124 (cPanel Login)
Please make sure to update your A:Records to point to the updated server IPs. If you use our DNS servers then no IP change will be required.
No changes to your logins to cPanel or WHM. You can use the links above to access cPanel directly.
Our Dallas 1 and London 2 cloud nodes, which are backed by OnApp, require an emergency migration. This is being handled by the OnApp team and data will be migrated by the engineers there. All Dallas and London based customers may see a small amount of downtime while the migration occurs, and a new IP will be assigned. Please update your DNS to point to our servers; if you need or prefer to use A:Records, we will provide details of the new IPs once the cloud migration has been completed.
We are sorry for the lateness of this notification; we have been working to try and avoid such a migration on the older platforms until we were ready to move to the new systems in New Jersey. Please keep an eye on this page for the latest updates.
Thank you for your understanding.
We will be performing updates on our Darwin node at midnight tonight to ensure the latest security patches are applied. Downtime of the instances on the node will be minimal as only a standard reboot is required. We expect no more than 15min of downtime. Thank you for your understanding.
The migration from the US server to the UK server has been completed successfully. New IP: 126.96.36.199
UPDATE 1: We have monitored the changes over the night and all services are running normally. Thank you for your patience during this update.
UPDATE 4: All systems appear to be running smoothly and we will be continuing to monitor the server closely. Thank you for your patience.
UPDATE 2: The issue has been resolved and services are now coming back online. The OnApp engineers confirmed the root cause in their systems and applied fixes. We are monitoring the servers while services start to come back online.
UPDATE: The server is running normally now. Planned updates to our Asian based servers are in the works and news will be released in the coming weeks. Thank you, and sorry for the inconvenience caused.
UPDATE 8: We have run a reboot to apply some changes we have made to this server. We are monitoring its services. Thank you for your patience.
As our new US servers are online we will be migrating all US-based accounts to this new server. On the 17th of February we will be migrating all accounts on S22 to S09 server.
If you use our nameservers you will not need to make any changes. If you use A:Records you will need to update the IP to: 188.8.131.52
If you have any questions please contact a member of the team.
We will be running updates on all servers on our network to correct the CPU Meltdown and Spectre vulnerabilities which have been in the news lately. Once we have patched the servers a reboot will be required, and downtime is expected to be less than 10min per server. We are working with our partners/suppliers to ensure all of our servers' hardware is looked at. The patches are for the operating systems on our servers. If you have a dedicated server or private cloud we will be sending details on the issues soon. You can open a support ticket and one of our support team will be happy to help patch your server.
We recommend everyone run YUM/Windows updates on their servers to ensure they are running the latest versions. Please feel free to contact a member of the team for more information.
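On patched Linux kernels, the applied Meltdown/Spectre mitigation status can be read from sysfs. A small sketch, assuming the standard kernel interface shipped from early 2018 onward:

```python
from pathlib import Path

# Standard sysfs location on kernels that carry the mitigation patches.
VULN_DIR = Path("/sys/devices/system/cpu/vulnerabilities")

def mitigation_status() -> dict:
    """Map each reported CPU vulnerability (e.g. 'meltdown', 'spectre_v2')
    to the kernel's mitigation string, or return {} if the kernel
    predates this interface."""
    if not VULN_DIR.is_dir():
        return {}
    return {f.name: f.read_text().strip() for f in VULN_DIR.iterdir()}
```

On an updated server the entries read like "Mitigation: PTI" for meltdown, which is a quick way to verify that the YUM/kernel updates actually took effect after the reboot.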
UPDATE: We are currently investigating the cause of an unexpected downtime during the migration. All services are running normally and the migration has been completed.
With the latest server improvements being rolled out across our infrastructure we will be migrating all accounts from server S02 to a new server (keeping the same server name, S02). This will begin at 8PM on Wednesday the 20th of December (two weeks' time). We will be running a full cPanel migration, which means you will not need to do anything. If you are using our nameservers (DNS) there will be no change, but if you use A:Records then a change will be needed. The new server IP to point your domains to after the 20th will be: 184.108.40.206
Thank you and we hope you will enjoy the improved services.
UPDATE 3: We have services back online. We will need to run a quick reboot, as we have installed software to help prevent this issue over the next couple of weeks, ready for the planned migration.
UPDATE 1: A small spike in load caused our monitoring systems to flag an issue which slowed HTTP services. We will be scheduling a migration of accounts from this server to one of our new server setups for improved CPU performance. All services are running normally.
Update 1: Reboot completed and all services are coming back online. Downtime <3min. Thank you for your patience.
We needed to reboot the instance to apply updates to disk and CPU settings. Thank you for your understanding.
UPDATE 1: All services are back to normal.
UPDATE 2: We have completed the replacement and all services are back online. We are monitoring the server and will update this status page if there are any further updates.
UPDATE 1: Server S02 has been shut down while we carry out the work. We are sorry for the downtime and hope to have services back online shortly.
Please be advised we've noticed issues with Hypervisor 7 (HV7). Either the RAID cable or the RAID controller on this server node is faulty. This will require the node to be physically stopped to perform the repair. We will keep downtime to a minimum.
Updates to follow.
UPDATE 2: All servers and services are back online, we are monitoring and checking the cause as no router/network configuration was changed.
UPDATE 2: The engineers at the DC have corrected the IP issue - if you are using A:Records you will need to change to our DNS nameservers for the best level of service.
UPDATE 3: We have completed all upgrades and monitored the services during the night. All services appear to be running normally. Thank you for your patience.
We are investigating a network issue to our Dallas DC cloud servers. We are sorry for the unexpected downtime.
We will be running an update on the ColdFusion services at 10PM UK/London time. A service restart will be performed which will cause some CFML downtime during the process (<10min). Thank you for your understanding.
UPDATE 1: The issue was caused by one of the clusters server nodes that failed and caused a chain reaction through the network. The server was replaced and services brought back online. We are monitoring the server cluster closely to help prevent this from happening again.
To apply system changes/updates we will be performing a reboot of the S24 server at midnight (UK/London time) as scheduled. Downtime should be less than 10min.
We will be performing a ColdFusion service restart at 4AM UK/London time (10PM Dallas US Time).
We will be performing a ColdFusion service reboot tomorrow (the 28th) morning at 5AM UK/London time (the 27th at 11PM Dallas US time) to apply Java updates to the ColdFusion service.
Downtime expected to be less than 5min.
S12 ColdFusion services are being updated with the latest stable version of Java. Service restarts will occur during this update. Thank you for your patience.
UPDATE 7: We have now completed the migration and updates. All services are back online and if you have already re-created your ColdFusion datasources you should see sites working as normal. If you have any questions please feel free to contact a member of the team. Thank you for your patience.
UPDATE 6: We are now applying the very latest ColdFusion updates. We have tried to import all ColdFusion datasources, but unfortunately you may need to recreate the DSNs via the CFManager.
UPDATE 5: ColdFusion has been configured and we are now applying our Apache updates to ensure smooth running of CF applications on the server. You may see HTTP and CF services go down but you will still be able to login to cPanel and access services there. Thank you for your patience.
UPDATE 4: We are having some issues with one of the connectors in ColdFusion and need to run a reinstall of CF to ensure it applies the correct configuration. We are very sorry for the inconvenience this has caused. Our tests showed everything was working OK, but it appears an Apache update caused some issues. We are working to have all ColdFusion based sites back online asap. Please note only the ColdFusion service is affected and all other services are running normally.
UPDATE 3: We are in the final stages of having services back online and running normally. We are sorry for the extended period of time this is taking; on our new systems, some configuration was needed that matched the old server's setup but did not allow full functionality.
UPDATE 2: While we run the final configurations, Apache and ColdFusion services may not be working as expected or showing as 404 pages.
UPDATE 1: Files have completed their transfer and we are finishing the configuration of our CFManager to verify and sync DSNs.
On the 23/JUNE/2017 at 23:00 London time (22:00 UTC+1 / 22:00 Greenwich Mean Time +1) we will be migrating all accounts from server S12 to our new Cloud platform. ColdFusion 10 will be continued on this new server and all CF config will be transferred during this process.
We will need to shut down services at this time to allow the transfer to run as fast as possible so downtime will be during off peak hours. We hope to have all accounts migrated by the morning on Saturday. A new IP will be assigned (220.127.116.11) and if you are using A:Records you will need to update your domains DNS to match. If you are using our DNS nameservers no change will be required.
If you have a dedicated IP we will provide a new IP, but our new platform uses IPv6 for additional IPs and IPv4 for main root IPs.
The new platform will allow us to provide a higher level of service to all customers.
If you have any questions or concerns please contact a member of the management team who will be happy to help.
Thank you for your understanding.
On Sunday the 18th of June we will be upgrading Lucee from 5.1 to the latest release and patch 5.2. Downtime will be restricted to Lucee services while we restart them and will be minimal.
Our US data center is currently investigating a network issue on our US ColdFusion servers (Washington, Walla Walla). We hope to have services back online asap.
UPDATE 3: The migration has been completed and if you have any questions or you are having any issues please do contact a member of the support team. Thank you for your patience and understanding during this migration.
UPDATE 2: We have completed nearly all of the migrations but some of the remaining accounts are larger than most accounts. If you have any questions please do contact a member of the support team via the help desk. Thank you for your patience.
UPDATE 1: The migration is still processing and we hope to have all accounts moved as soon as possible. If you have any concerns please contact a member of the team who will be happy to assist you.
We are migrating all accounts to a new instance due to some concerns over the stability and security of key features and services on the server. Once all data is migrated, the old IP will be allocated to the new instance.
Thank you for your patience.
UPDATE 9: After a night of all services running normally we are closing this ticket. We are monitoring the server closely to see if any further issues occur. Thank you for your patience and understanding during this email outage.
UPDATE 2: Instance reboot complete and services are back online. We are now checking the server load and logs to find the root of the issue.
UPDATE 5: A quick reboot is processing for CPU changes on the node. Thank you for your patience.
UPDATE 2: The issue was related to a router at the data centre that became unaligned with the network and crashed. Replacements and re-configurations have been made at the data centre to help prevent this from occurring again.
A DDOS attack was detected at our Coventry DC at 00:44 AM UK/London time. It was quickly resolved with a total of 6min of interrupted network connections. All servers remained up during this time but some users may have seen some drop offs from their services.
UPDATE 2: We have installed CloudLinux to ensure single accounts cannot overuse CPU cores on this instance. We have also increased the swap memory, as this was an item that was flagged during our investigation. Thank you for your patience.
UPDATE: Load has returned to normal and we are looking at CPU resource improvements. Thank you.
REPORT: The Dallas cloud is operated by OnApp, and the data center managed the hardware. The first alert was an issue with some VMs being down due to disk IO reports. From the logs it looked like a dying RAID card. We had to go back and forth with the data center, as they did not initially see a bad RAID card as the cause. Eventually the RAID card was replaced, the SAN was brought back up, and the VMs were turned back on.
UPDATE 17: A full report of the issue will be posted within the next 7 days.
UPDATE 16: At 1AM UK time all services were back online and running normally. We are gathering a report from all parties to provide. Thank you for your patience during this hardware outage.
UPDATE 15: The reboot has shown further issues within OnApp which the team are correcting now. Hardware is being replaced to ensure the stability of the services. Once the new server has been installed we will post a new update.
UPDATE 14: We are going to be doing a standard reboot of a number of instances to ensure everything is fully corrected. Downtime should be less than 5 minutes. Thank you.
UPDATE 13: Services have come back online but we are awaiting the all clear from the engineers.
UPDATE 12: Work is ongoing at the data center and engineers from OnApp are working to resolve the issue asap. Thank you for your continued patience.
UPDATE 11: The issue has been located in the OnApp hypervisor, which engineers at the data center and the OnApp support team are investigating.
UPDATE 10: We are still seeing some issues with the host machines and hope to have this corrected asap.
UPDATE 9: We have completed the final sync and corrections. We are monitoring services to ensure everything is stable. Thank you again for your patience during this matter. We will post a report as soon as possible.
UPDATE 8: We are running some reboots of the host machine to ensure we fully fix the issues that caused the host machine to fail.
UPDATE 7: We are happy to confirm the instances below are back online. We are running some checks/tests on our OnApp Control Panel instance:
UPDATE 6: Now that the host machine is operating normally we are picking up logs and some issues which we can correct as we boot instances online.
UPDATE 5: We can confirm instances within the cloud are coming back online. Server S22 - 18.104.22.168 is back online. Others will be coming back online shortly.
UPDATE 4: The host machine has been started following the hardware failure that was found. The OnApp CP should start up soon and all other services shortly after. We will update this status once all services are confirmed back online. Thank you so much for your patience.
UPDATE 3: Engineers are still working on the servers at the SoftLayer data center in Texas, USA. We hope to have a report and an ETA as soon as possible. Thank you for your continued patience and understanding.
UPDATE 2: A hardware fault has been found and tech engineers are checking the servers. We hope to have further updates with more detail shortly.
UPDATE 1: As per our new support policy for our Cloud platforms, OnApp support was informed of the issue and has detected a possible hardware-to-software fault. They are working with on-site engineers to resolve the issue asap. We are sorry for this unexpected downtime. Thank you for your patience.
We are seeing an issue at our US Texas data center and the on site team is investigating.
We are doing everything possible to get the affected services online asap.
UPDATE 2: We are currently investigating an issue which required a reboot.
Our reporting software indicated a disk resource issue which we are investigating and correcting on S17 server (London, UK).
UPDATE 1: We have restored access to the server and all services are back online. We are now investigating the cause and once found we will put in measures to avoid this from happening again.
UPDATE 1: We have found the cause of the CPU spikes and corrected it, but we are looking into ways to ensure that if the issue occurs on an account in the future it won't cause such problems. We will update this status once we know more.
We are seeing a number of CPU spikes on S23 (London). We are investigating the cause and looking at implementing systems to prevent the server rebooting due to this. We hope to have systems stable asap and we are sorry for the inconvenience caused. We will keep this status open during the night while we monitor and investigate the CPU issues.
Due to storage system upgrades we need to perform a quick reboot of the server. Downtime is expected to be less than 10min. We are sorry for the short notice.
We will be upgrading MySQL to version 5.6 between 3AM to 5AM UK/London time on the 30th of January. Downtime will be minimal during this period. We expect downtime to be <30min.
Thank you for your patience.
UPDATE 8: All customers have now been emailed a full report of the downtime.
We are currently experiencing issues with power at our Coventry Datacentre. The generators and APC UPS units are running but some customers may experience issues. We are treating this as a matter of urgency to ensure it is resolved as soon as we physically can. We are sorry for the inconvenience.
Our Coventry datacentre site experienced a power outage from Western Power at around 08:00 this morning, our generators started and took the load however after a few hours generators at the site developed faults.
Power has now been restored at the site but this is through our generators, Western Power are working to restore the power which should hopefully be completed shortly.
A further power outage has occurred since the initial restoration. Please be assured we are doing all we possibly can to restore all services.
Western Power assure us power will be restored soon to the affected data hall.
Full power has now been restored to the affected data hall. All services will be online momentarily. If you are still experiencing issues then please let us know.
We are running a quick reboot of our OnApp controller, during this time there will be no downtime of any customers services. Thank you for your patience while we action this control panel reboot.
UPDATE 5: CPU load has reduced to a normal level and we are monitoring services.
UPDATE 3 - Full Report:
Firstly, thank you for your patience during this migration period. We are in the middle of running the transfer/restore of the final VMs on the new node. It has been a longer process than we had hoped, but all VMs have been migrated or are in the final stages of transfer to our new UK data centre. Those that are fully restored appear to be running well, with the final ones coming online asap. If you have any issues please do contact a member of the support team via a ticket for the fastest response. With all major migrations or issues we like to be very open about any problems that may have occurred during such tasks; please see our report below.
We saw a number of issues which increased the length of the migration process. Firstly, the amount of data transferred was large; in earlier tests network speeds held at the top speed available, but during the sixth hour of the migration the speed started to drop. We had hoped this was a temporary loss of network speed, but at points the connection dropped entirely. Network issues had already been reported on the old node server and were one of the causes for the migration.
In total there have been around 40 hours of downtime since we started the migration on Sunday morning.
There are a number of reasons for this migration, but the main one was aging hardware in a data centre that was not performing at the level we wanted to provide to our customers. The server was starting to show issues in both memory and hard drives, and to ensure customers' data and services were not damaged in any way we decided to migrate to new hardware in our UK data centre.
With any large-scale migration there are lessons to be learnt, and we have an internal review planned to see how we can improve our SolusVM migrations. Even though the migration was necessary, we believe we can provide different options on how migration can be handled, for example transferring backup files over and then deploying them instead of the live data. Given the extent of the downtime caused, we will be providing all migrated customers' VMs with 6 months of free hosting extensions - if you have any questions regarding this please do contact a member of the billing team. These extensions will be applied over the course of the next 24 hours.
From everyone at Host Media we would like to thank all our customers for their patience and understanding during this migration period. Until we reach a good amount of time after all VMs have been checked and services appear stable this status issue will remain open for any further updates.
UPDATE 2: We have 3 VMs remaining in the migration, after which all services will have been migrated. If you have any questions please do let our team know. Thank you for your patience and understanding.
UPDATE 1: We have transferred most VMs and the affected customers have been updated via support tickets. If you have any questions or issues please do get in touch with a member of the team.
We are now starting to migrate all VMs from the node Bravo to our Betelgeuse server. Please check your client portal for ticket updates which will contain details of your new IP address.
Thank you for your patience while we run this transfer.
UPDATE 1: All services came back online fine after the reboot and CloudLinux has also been installed. If you have any questions please do contact a member of the team.
We will be performing a server software upgrade on our S17 server which requires a server reboot. The reboot will be a standard reboot that will take up to 10min to complete.
Thank you for your understanding.
UPDATE 2: We have found the affected account and put actions in place to help prevent the issue from happening again.
UPDATE 1: Scan complete and drives are running OK.
UPDATE: Services are all running normally. Thank you for your patience.
UPDATE: Services are coming back online and the reboot was successful. Thank you for your patience.
We are performing a reboot to correct a disk error and update software.
We are currently looking into an unexpected downtime on S17. We hope to have all services back online asap.
Update 1: The server and all services are back online. Thank you for your patience.
We are performing emergency maintenance on this server which will make all/most services on this server inaccessible.
Services affected include HTTPD and Webmail access, MySQL, POP/IMAP, SSH and FTP.
During this outage, access to your websites, email, files, databases, will not be possible. We apologize for this inconvenience and while we do not have an ETA for this procedure, we will continue providing updates as soon as possible.
UPDATE 1: Migration has been completed and all services are running on the new cloud instance. Thank you for your patience and understanding.
We will be performing a migration of all accounts from the current S21 Hong Kong server to a new cloud VM. This is due to network issues that have affected mail ports and connection speeds. Downtime will be minimal as we will be doing a direct transfer of accounts. If you are using A records please change your domain's IP to point to: 22.214.171.124 - if you are using our DNS then no change is required.
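For customers updating A records by hand, a quick check can confirm whether a domain already resolves to the new server IP. A minimal sketch (the helper names and example domain are illustrative, not part of our tooling):

```python
# Sketch: check whether a domain still needs its A record updated after the
# migration. needs_update() is pure so it can be checked without network
# access; resolve_a_records() requires DNS and is shown for illustration.
import socket

NEW_IP = "22.214.171.124"  # new server IP from the notice above

def needs_update(resolved_ips, new_ip=NEW_IP):
    """True if none of the domain's current A records point at the new IP."""
    return new_ip not in resolved_ips

def resolve_a_records(domain):
    """All IPv4 addresses currently published for the domain (needs network)."""
    return sorted({info[4][0] for info in socket.getaddrinfo(domain, None, socket.AF_INET)})

# Example usage: needs_update(resolve_a_records("example.com"))
```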
If you have any questions please do contact us.
UPDATE 1: Services are running normally, but our team are looking into our Lucee platform to see how improvements can be made to avoid memory overload from Lucee. We will update all affected customers as soon as possible. Thank you for your patience, and if you have any questions please feel free to open a ticket to the management team, who will be more than happy to answer infrastructure questions.
UPDATE 2: Our updates have been completed and services are running normally. Thank you for your patience during the reboot.
UPDATE 1: We will be performing a server reboot tonight at 10PM UK/London time. We will be increasing some resources allocated to the cloud VM to ensure performance is maintained at the highest level. Thank you for your patience.
We are investigating an issue with the cloud VM that causes some downtime for customers websites. We will be performing upgrades to the VM which may require a single reboot of the VM.
Thank you for your understanding and patience.
UPDATE: Services came back online and all systems are running normally. Thank you.
We will be installing CloudLinux on our S15 server to help ensure CPU usage by accounts is kept to an acceptable level. This install requires a standard reboot of the server and we expect downtime to be under 10 minutes. If you have any questions please contact a member of the team.
UPDATE 2: Services are back online and node issues are corrected. Sorry for the downtime.
UPDATE: The network has been resolved and all services are back online.
UPDATE 6: We have been monitoring the services during the night and all services are now running on the latest kernel. We will continue to monitor the server as normal to ensure the previous issues do not occur again on this node. Thank you for your patience and understanding during this update.
We are investigating an issue on the Archer node. Our DC engineers are checking this now.
UPDATE: After our reboot the services are running normally and we are monitoring the situation.
UPDATE: Services are back online and we are investigating the root cause.
We have detected a system issue with the node hosting the SQL services listed above. Our engineering team applied system updates and scheduled a brief maintenance window to perform a server restart.
Date and Time: Aug-18-2016 10:00 GMT/UTC (Aug-18-2016 03:00 Local Time)
Please note: This event will reboot the server and a small amount of downtime will occur. Your data and configurations will not be affected by the reboot.
A reboot of server S15 is underway to apply new updates. Sorry for the downtime caused; this will be a max of 5 minutes.
UPDATE: Service reboot corrected the issue and services are running normally.
UPDATE: We have now resolved the issues, the main cause was a kernel error for the version the server requires. Plans are in place to move customers VMs and hosting accounts to our new Cloud solutions. Customers will be updated in the near future with planned migrations. Thank you for your patience.
MAINTENANCE START TIME: 7:30 pm EDT 08/03/16
ESTIMATED DURATION: 1 day
STATUS: In Progress
At 8:30PM tonight (03 August 2016 20:30 CDT) we will be taking the Toronto 6 server offline in order to synchronize data across multiple disks and re-initialize backup services.
Our team has detected an issue that could result in heavy data loss if left unattended. This process could take up to 24 hours to complete, and all hosting services will be unavailable during that time.
The safety and consistency of your data is one of our highest priorities, and this has been determined to be the quickest and safest way to proceed. Our team will be actively managing the process throughout.
We sincerely apologize for the inconvenience, and will have all services restored as soon as possible.
Thank you for your patience.
UPDATE 1: Services are back online and running normally. Thank you for your patience.
We are currently working on an issue with our global cluster network. A network issue was identified which affects multiple servers. As such, some hosted sites may load slowly or appear inaccessible. Our System Administration team is actively working on this now and we will update this post as more information becomes available.
We sincerely apologize for the inconvenience this issue has caused. We understand service reliability is of the utmost importance. If you have any further questions please let us know and we will do our best to answer them!
Services have been restored and we are investigating the issue with the Kernel version to help prevent this from happening in the future.
Engineers are checking IP routing against a kernel version which appears to be the cause of the issues. We hope to have services back online asap. We are sorry for the downtime caused.
We are currently investigating a major network issue connected to our Archer node. Engineers at the data centre are working to resolve this issue asap.
Update: Services have returned to normal after the network issues were resolved.
Engineers at our Walla Walla, US data centre are looking into packet loss issues on the network. All shared/reseller ColdFusion servers are currently affected.
UPDATE 1: Services are back online and we are investigating in full what happened.
UPDATE 1: Reboot complete and disk increased successfully.
We will be rebooting the Adobe ColdFusion services to resolve timeout issues in Tomcat. We hope to keep downtime to a minimum; it is expected to be less than 5 minutes.
After investigating the issue it appears the server was overloaded and after a reboot the memory cleared and all services came back online. We are monitoring the server to see any further build up of memory usage.
We are currently investigating issues with the Archer node. Our engineers are working on the issue.
UPDATE 3: We found the kernel was causing the main issues, which we have now corrected. We are looking into why this happened and how to try to prevent it in the future.
UPDATE 2: Our IP bridge for Archer node is no longer showing up for this server. Our team are working on correcting this to get all services back online asap.
UPDATE 1: We have lowered the traffic coming into the rack and now working to restore all services.
We are seeing a large amount of traffic hitting our servers. We are looking into the cause and working to resolve this asap. Sorry for any inconvenience caused.
One of our data centres' upstream network providers performed emergency maintenance which interrupted our service. All servers and sites are currently up at this time and downtime was <2min.
If you have any questions or concerns please feel free to contact us at any time.
UPDATE 1: The reboot and memory upgrades have been completed. Services are running well. If you see any issues or have any questions please do let us know. Thank you for your patience.
UPDATE 2: We can fully confirm this was a data centre network related issue and we are working with the DC to find out exactly what happened.
UPDATE 2: We have resolved the issues detected during our memory update and all services are back online. We are sorry for the inconvenience caused; this downtime was unexpected but necessary to ensure node services run smoothly and to avoid larger issues in the future. If you have any questions or comments please contact the sales or management team, who will be more than happy to help.
Update 1: The error at the data centre regarding our IP subnet was human error; our management team are working with the DC management on how to prevent this from happening in the future. We are sorry for the downtime, and if you have any comments or questions please feel free to contact a member of the team. Thank you for your understanding.
Update 6: The migration has been completed and all services are running normally. Thank you for your patience during this process and if you have any questions or comments please do let us know.
Update 5: We have started the transfer of S10 server. We hope to have this completed within the next 4-6 hours. Thank you for your patience.
Update 4: Server S11 has been migrated and now running on the new hardware.
Update 3: Due to unforeseeable complications with the transfer we had to stop the migration and reschedule the migration of the S10 server for Sunday 9PM. We are very sorry for the inconvenience caused, and if you have any questions in the meantime please do let us know. Thank you for your understanding and patience.
Update 2: Due to lower transfer speeds between the servers than expected we have rescheduled the migration of the S11 server until tonight at 9PM. S10 is almost complete and should be back online shortly.
Update 1: We have completed the migration of the dedicated VMs and are now in the middle of migrating the 2 shared/reseller servers, S10 and S11. Once this has been completed we will update this status. Thank you for your patience and for bearing with us during this migration process.
We will be performing a node migration at our Coventry, UK data centre on Friday the 4th of September and starting from 10PM UK/London time.
Expected downtime: 2-4 hours per service
The new node is one of our top of the line servers and we hope you will see a general performance increase.
UPDATE 2: There was a temporary network issue on the racks that host the DELTAUK2 at the Coventry DC which has been resolved.
UPDATE 2: One of the data centre's network providers had experienced some issues. This has been resolved at the DC level and we will continue to monitor the situation.
UPDATE: DC has resolved the power issues and services are now fully online. Thank you for your patience and understanding.
UPDATE 5: All services are stable and the sync will take a number of hours to complete. We will mark this status update as resolved and provide further updates if required. Thank you for your patience.
On Sunday the 14th of June we will be running security updates on our XEN nodes. Services will be rebooted on Sunday between 1AM and 2AM. We do not expect any major downtime apart from the reboots. The reboots may take a little longer than normal to ensure all security updates are installed correctly.
Thank you for your understanding.
UPDATE 6: The issue was caused by a widespread problem affecting much of the UK's connectivity at the London Internet Exchange (LINX). We have disabled our peering at LINX for now and all services are running normally. We will provide further updates shortly.
UPDATE 4: Data Centre Update: The problem was caused by a broadcast storm on our network; as a result a number of rack switches locked up, which we had to reboot.
UPDATE: One of our upstream network providers has been experiencing some issues. We have re-routed the network traffic around them for the time being. All servers and sites are currently up.
UPDATE 3: We have corrected the issues and all VMs are back online. We are monitoring the services and investigating the network issue fully.
UPDATE 1: Services were restored at 23:25.
UPDATE 2: Networks appear stable but we are monitoring.
On Sunday the 12th at 20:09 UK time Apache performed an automatic graceful restart. This caused an Apache log rotation, and our external monitoring services picked up 2-3 minutes of downtime. Other monitoring services showed sites and services running. If you have any questions about this outage please do contact a member of the team.
The main cause of the issue was due to the XEN security updates that caused a failure in the boot up systems of the Archer node. Our team had to correct the boot up issues and run manual hardware reboots at the data centre. Once the node came back online all VMs loaded up successfully.
What we are planning to help prevent this from happening again:
UPDATE 1: VMs are now all back online and we are checking to see what happened to cause all VMs to fail without warning.
UPDATE 2: The outage for the majority of our servers should have been around 4 minutes. Those in racks 30, 13 and 27 experienced up to 10 minutes of downtime. This was due to the routing process restarting on our servers' gateway device; we are looking into the cause of this with Juniper and hope to have another update from them within the next 24 hours.
We have resolved the issue and will be publishing a full report shortly.
UPDATE 1: The network has come back online after corrections by the Coventry DC team. We are investigating what happened and will update you as soon as possible.
UPDATE 2: Updates have been sent via support tickets for clients with information regarding IP changes due to new DC node being deployed.
UPDATE 1: Migration has been rescheduled for the 5th of Feb.
Due to a decrease in performance on the node AlphaUK2 we will be migrating all VMs from this node to the node Betelgeuse. No IP changes will be required as we will be migrating the IP subnet over to the node.
Migration scheduled for: 03/Feb/2015 10PM UK/London Time Zone
UPDATE 5: All data has been migrated and we have been testing the sites and VMs over the night. Load and performance has generally increased. If you have any questions or issues please do get in contact with a member of the support team. Thank you for your patience and support during this migration.
UPDATE 4: We are still migrating data and currently on the largest section of data to migrate. Once this has been completed we will update this status.
UPDATE 3: We are still working on migrating the final services over to the new node. We hope the speed will increase once off-peak time comes. Thank you for your patience.
UPDATE 2: The migration of data is still progressing; due to the faults in the BravoUK2 drive the transfer process has been slower than expected.
UPDATE 1: Betelgeuse has had its final checks and data is now migrating over from BravoUK2.
Due to faults found in the BravoUK2 server we are performing an emergency migration of all service instances to a new node which has been set up. The complete data transfer may take up to 10-12 hours, after which we will switch over the IP subnets to ensure all clients' services keep the same IPs. No domain or DNS updates will be needed.
New node name: Betelgeuse
We are sorry for the short notice of this migration, but to ensure no data or customers' services are affected we are pushing forward with this emergency migration.
We will keep this network status updated while we process this migration.
Archer Downtime Report
Report Downtime Start-End Date/Time:
30/DEC/2014 13:05 - 31/DEC/2014 03:30
The first reports showed issues with one of the 1TB SSD hard drives within the RAID10 configuration. This would not normally cause such issues due to the 7 other drives in the RAID setup. On further investigation we found a second hard drive had become faulty. This caused corruption in some files that controlled many elements of the Xen virtualisation setup which broke the network bridge between the node main domain and the VMs.
We were able to restore the configuration files to allow networks to become available once again. No data loss has occurred, and the VM instances were running normally during this time, but without a network connection to the outside world. We are continuing to monitor the server and any sign of disruption will be investigated straight away.
We are setting up new monitoring tasks on our RAID and hard drives company wide starting with the Archer node to help detect issues like this sooner.
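As a sketch of the kind of drive monitoring described above, assuming Linux software RAID reporting through /proc/mdstat (hardware RAID controllers expose health differently, so this is illustrative only):

```python
# Sketch: flag md arrays whose member-status string (e.g. [UU_U]) shows a
# failed drive. A "_" in that string means a missing/failed RAID member.
import re

def degraded_arrays(mdstat_text):
    """Return names of md arrays in /proc/mdstat output with a failed member."""
    degraded = []
    current = None
    for line in mdstat_text.splitlines():
        m = re.match(r"^(md\d+)\s*:", line)
        if m:
            current = m.group(1)
        status = re.search(r"\[([U_]+)\]", line)
        if current and status and "_" in status.group(1):
            degraded.append(current)
    return degraded

SAMPLE = """\
md0 : active raid10 sda1[0] sdb1[1] sdc1[2] sdd1[3]
      1953382400 blocks 512K chunks 2 near-copies [4/3] [UU_U]
md1 : active raid1 sde1[0] sdf1[1]
      976630336 blocks [2/2] [UU]
"""

print(degraded_arrays(SAMPLE))  # → ['md0']
```

In production this would feed an alerting system rather than print, but the parsing logic is the core of the check.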
UPDATE 11: As of 3:30am UK time we were able to correct the network issues on the server. We are monitoring the server heavily and will be making adjustments throughout the day to ensure services run smoothly. Further updates will be posted shortly. Thank you for your patience during this issue.
UPDATE 10: New server hardware has been requested directly with our UK data centre and we hope to have this deployed asap.
UPDATE 9: Due to the hardware failure on the drives the configuration setup for our virtualisation systems has become corrupted and we are looking at restoring/transferring VMs to a new node server asap. We hope to have further updates shortly.
UPDATE 8: We have the engineers at the data centre investigating their configuration and any faults at their end. From all of us at Host Media, we are very sorry for this long period of downtime.
UPDATE 7: We are still working on a network issue connected to the local network to the node. The network issue is a misconfiguration in the bridge routing of the IP to VM.
UPDATE 6: We have been able to access the VM consoles locally and are now reapplying the IP network configs, as the networking seems to have been lost during the issues.
UPDATE 5: The node has come back online after its faulty drive replacement. We are now working on restoring access to the VMs and hope to have our customers back online asap.
UPDATE 4: A faulty drive (one of our 1TB SSDs) in our RAID array has caused the drives to fail their sync and brought down the VMs. The data centre team are replacing the faulty drive and also checking the controllers. We hope to have the node back online in 10 minutes and then try to boot all VMs.
UPDATE 3: It appears a RAID controller could be the cause of the issues on the 'Archer' node. We hope to have more for you soon and your websites/VMs back online.
UPDATE 2: We are seeing some slowness in VMs coming back online - our DC team are checking the status of our RAID10 controllers and our offices tech team are checking the status of VMs data. Further updates to come.
UPDATE 1:Â The server node is coming back online now and we are making sure all VMs come back online. We will update you as soon as we can confirm what happened to this node.
We are currently working on resolving load issues with S01 server and we have started migrating some customers accounts to our CloudLinux SSD servers. If you wish to be migrated to our newer hosting platform (CloudLinux SSD Servers) while we are correcting the issues please contact a member of the sales team.
We are sorry for any downtime and slowness of the website loading speeds. We hope to improve the stability as soon as possible.
We will be rebooting the following server nodes at 8PM UK/London time to ensure the kernel updates are fully applied.
Update type: Security
Servers Nodes: AlphaUK2, BravoUK2, CharlieUK2 and AlphaUS2
Shared Services: UK1, US1 and S5
We had to run a manual reboot of the node BRAVO UK - The OS is coming back online now and we are monitoring. VPS services should be coming online shortly. Sorry for the unexpected downtime.
UPDATE: Services are coming back online - we are investigating the cause of the downtime.
UPDATE 1: Network restored and services are coming back online. If you have any problems please contact the support team.
UPDATE 1: Services are now coming back online and we are monitoring the situation.
UPDATE 1: We have completed our repairs on the SQL services and all SQL systems are back to normal. We are sorry for the inconvenience caused by the SQL downtimes.
The node Charlie at our UK data centre was experiencing issues which was first reported as network based by our internal systems but on checking was due to a VM instance with corrupted data. We are investigating further but all VMs are coming back online. We will update you further as soon as we can. We are already prepping our new systems using Xen and look forward to hosting all our customers on these new systems soon.
Thank you and sorry for the downtime caused.
UPDATE 8: We have seen another automatic reboot from the server and we are investigating this now. Services are coming back online though.
UPDATE 1: The server is stable but load is still a little high for our liking and we are investigating now. We hope to have the servers load back to normal levels soon.
We are going to be running general updates and backup software updates on Sunday 10PM - Monday 1AM on our S6 CF10 server.
Expected downtime: 1 hour.
Thank you for your understanding.
UPDATE 4: After 24 hours of normal running we are closing this issue. If you require any support or have any questions please do let us know.
UPDATE 8: After 24 hours of normal running we are closing this issue. If you require any support or have any questions please do let us know.
UPDATE 1: The outage was caused by an emergency reboot of our core routing platform at our Coventry site as recommended by JTAC engineers due to an error we were seeing on these racks. If you are seeing any issues please do let us know.
UPDATE 1: We have corrected the issue and all services have been running fine for some time. We had one spike in traffic but that was resolved as soon as it was detected. We will continue to monitor the services and any further updates will be posted here. Thank you for your patience.
We are currently looking into a high load issue on our Charlie node in the Germany DC. Services are coming back online now, but if you have any questions please contact us.
UPDATE 5: The new CF server has been deployed and our team are just checking all settings for the new system.
UPDATE 4: Our DC team have started setting up another CF server to spread load and accounts over to. We hope to have this online soon and will offer some customers a transfer over to this server. Further updates to come.
UPDATE 3: A service reboot is currently in progress due to high CPU load - our engineers are checking the cause now and will implement systems to stop this from happening again.
UPDATE 2: Reboot is now underway and we will be monitoring services once back online.
UPDATE 1: Further updates including memory updates will be happening tonight midnight UK/London time to improve stability more.
We are currently seeing a number of ColdFusion service downtime issues which appear to be caused by the service being unable to create new threads due to resource limits.
To correct this we will be adding further memory resources to the S6 server and updating the JVM/heap settings to optimise the performance of the server.
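For background, "unable to create new thread" errors usually mean the native thread stacks have exhausted the memory left over after the Java heap is reserved, which is why adding RAM and retuning the heap helps. This rough sketch, with purely illustrative numbers (the overhead and stack sizes are assumptions, not our server's actual settings), shows the trade-off:

```python
# Rough sketch: why a JVM can fail to create threads even with free heap.
# Native thread stacks live OUTSIDE the Java heap, so a large heap (-Xmx)
# leaves less room for them. All numbers below are illustrative only.

def max_threads(ram_mb, heap_mb, jvm_overhead_mb=256, stack_kb=512):
    """Estimate how many native threads fit in the memory left after
    the heap and JVM overhead are reserved."""
    leftover_kb = (ram_mb - heap_mb - jvm_overhead_mb) * 1024
    return max(leftover_kb // stack_kb, 0)

# On a hypothetical 4 GB box, shrinking the heap (or the per-thread
# stack size, -Xss) frees room for more threads:
print(max_threads(ram_mb=4096, heap_mb=3072))
print(max_threads(ram_mb=4096, heap_mb=2048))
print(max_threads(ram_mb=4096, heap_mb=2048, stack_kb=256))
```

The same arithmetic explains why simply raising the heap can make thread creation failures worse rather than better.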
UPDATE 5: The issue has been fully resolved and all network activity is back to normal.
UPDATE 4: We believe we have found the main cause of the issue and are now monitoring the network lines at the DC.
UPDATE 3: We have run a complete reconfiguration of the network and the router hardware, and so far all networks have come back to a stable level. We are monitoring to ensure all services are back online.
UPDATE 2: A reboot of the server and network services was required to complete a full test of the server's network config. The server is coming back online now - we hope to have further updates soon.
UPDATE: The data centre has started speaking with the network providers connecting the DC to the rest of the world (i.e. networks over to the UK etc.) as the DC is currently unable to find an issue with the routers or networks within the centre.
We are currently seeing packet loss to our racks in the Germany data centre, which was isolated to a couple of IPs but now appears to affect entire subnets.
The engineers at the data centre are working as hard as they can to find the cause of the issue and we hope to have this resolved soon.
UPDATE: Due to some delays in the software updates we will be performing a CF service restart tonight instead. Downtime should be minimal.
UPDATE: Services have now all come back online and the CFML services are running normally. If you are seeing any issues please do contact one of the support team via the client portal help desk. Thank you for your patience.
We are investigating issues at our US data centre.
Update: We have completed the updates to the Plesk CP.
UPDATE: The server auto-restarted the affected services but we will be investigating what caused this. If you need any assistance please contact the support team.
The Alpha node, Media servers and Reseller servers became unresponsive due to heavy load through the network to these boxes. We have rebooted the servers and the team are now looking into this matter.
UPDATE: Network and services issues have been resolved. If you have any questions please feel free to contact the accounts team.
We are currently working on an unexpected failure on our US network. We hope to have this resolved shortly.
We are currently working on an issue affecting the Charlie UK node.
We are sorry for the unexpected downtime.
UPDATE: Last night our team completed restoring a backup of some SQL databases that had become corrupted and caused issues with service loading. We are monitoring, but all services appear to be running normally.
We are currently working on a disk issue on our S6 server which caused unexpected downtime.
UPDATE: All services are back online and the network issue has been corrected.
We are currently seeing a network issue at the data centre in the US. We are working on correcting this and hope to have everything back online soon.
UPDATE: We have our services back online and monitoring the network to ensure all issues have been resolved.
We are currently facing a DDoS attack on our US data centre network. We are doing everything we can to resolve this and have all websites back online soon.
Sorry for the inconvenience caused.
UPDATE: The DC team have resolved the network issue and are now looking into what happened and how to prevent this from happening again. If you need any further assistance, or if you are having issues with your service, do let us know.
UPDATE: We have corrected the issues with the server and plans are being put in place and all customers on these US media servers will be updated soon.
We are currently experiencing issues with our US data centre which we are working on. We hope to have updates shortly.
- All services have been running stable for the last 12 hours and we are continuing to monitor the server.
- We are continuing to see 1min downtimes on this server due to the ACF service stalling. We have our team working on this with a CF consultant.
- 3PM Further to some more downtime logged within our systems we have made further adjustments to the CF memory systems and now monitoring our adjustments. We have further actions if required planned.
- 10AM We have made some changes to our CF services and now monitoring the server to ensure no further issues appear. We will continue to update this page once we have further information.
Adobe released a security hotfix APSB13-27 which is an important update which we have scheduled to be installed tomorrow Wednesday night (Nov 20th) at 11PM UK Time.
Date: Wednesday night (Nov 20th)
Time: 11pm GMT (3pm PST)
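As a sanity check on the scheduled window (the US is back on standard time by 20 Nov, so the Pacific equivalent is PST, not PDT), the conversion can be verified with Python's zoneinfo:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# The maintenance window expressed in GMT, converted to US Pacific time.
# On 20 Nov daylight saving has ended, so the local zone resolves to PST.
window = datetime(2013, 11, 20, 23, 0, tzinfo=ZoneInfo("Etc/GMT"))
pacific = window.astimezone(ZoneInfo("America/Los_Angeles"))
print(pacific.strftime("%H:%M %Z"))  # 15:00 PST
```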
Thank you for your understanding.
UPDATE We have resolved the IP routing issue with the data centre and all services are running normally.
We are currently investigating a network issue on IP routing on our S5 server. We hope to have further updates shortly.
We have scheduled an Apache recompile for updates on the S6 server.
Expected Downtime: <10min
Date/Time: Thursday 31st October 2014 at 23:59 (UK/London timezone)
If you require any assistance or have any questions please do contact the accounts team who will be more than happy to assist you.
UPDATE: Services are coming back online and we are now running scans on the servers to ensure everything is running normally.
We are currently having network issues at the US data centre controlling our CF10 services and we hope to have this resolved shortly.
Thank you and sorry for this unexpected downtime.
UPDATE: All services appear to be 100% stable and no reports of issues have come in. If you do have any issues please contact the Web Hosting support department.
UPDATE: We have made the changes internally and are now monitoring all services to ensure the DNS takes effect.
Tonight (UK/London Time) we will be migrating the DNS1 cluster from its current node to a new node. The cluster's IP will change and this may cause some downtime for websites until all servers' DNS zones are updated. No changes will need to be made on your domain/DNS as this is an internal change.
Customers using these nameservers will be affected by this change:
If you are using A:Records to point your domain name to our services you will not be affected by this change.
Sorry for any downtime caused, but we hope this will increase the overall performance of the DNS cluster. Please note that downtime may not occur at all; this update is for your records.
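If you want to confirm a cutover from your side, a small check like the following compares a domain's resolved A record against an announced IP. The domain and IP below are placeholders, and the resolver is injectable so the sketch runs without network access:

```python
import socket

def points_at(domain, expected_ip, resolve=socket.gethostbyname):
    """Return True if the domain's A record resolves to the expected IP.
    `resolve` is injectable so the check can be tested offline."""
    try:
        return resolve(domain) == expected_ip
    except socket.gaierror:
        return False

# Offline demonstration with a stub resolver (hypothetical domain,
# documentation-range IP):
fake_dns = {"example.com": "203.0.113.7"}
print(points_at("example.com", "203.0.113.7", resolve=fake_dns.__getitem__))
print(points_at("example.com", "203.0.113.8", resolve=fake_dns.__getitem__))
```

With the default resolver it checks live DNS, so it can be run after a migration window to see whether the new record has propagated to your resolver yet.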
We will be migrating the server 126.96.36.199 (s4.dnshostnetwork.com) to a new server. All customers on this server have been emailed, so please check your inbox for the email from us. Make sure to check your SPAM/JUNK folder just in case you have any filtering on your email client.
UPDATE: In the early hours of this morning (UK Time) the network was repaired and all services came fully back online. The data centre are looking further into the cause and how to prevent these issues from happening again. Thank you for your patience.
UPDATE: The network issue we are suffering is an external issue caused by a fiber cut. While we don't currently have an ETA, the provider has found the problem and is working on fixing it as fast as possible.
ISSUE: We are currently investigating downtime on the nodes: AlphaUS2, BravoUS2 and US hosting services
UPDATE 2: The servers are back online after the transit provider re-ran their filters.
UPDATE: It appears there has been a filter issue on the 188.8.131.52/19 subnet with our transit providers. We have requested they run manual filter updates asap and expect this issue to be resolved shortly. We are sorry again for the downtime caused.
We are currently investigating 2 server nodes that have become unresponsive to pings. The nodes affected are BravoUK2 and CharlieUK2.
We will post further updates as soon as we can.
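For reference, whether a given address sits inside the filtered 188.8.131.52/19 range can be checked with Python's ipaddress module (the second test address is a documentation-range placeholder, not one of our IPs):

```python
import ipaddress

# strict=False lets us pass a host address with a prefix; the module
# normalises it to the containing network (188.8.128.0/19 here).
subnet = ipaddress.ip_network("188.8.131.52/19", strict=False)

def in_affected_subnet(ip):
    """True if `ip` falls inside the subnet the transit filters touched."""
    return ipaddress.ip_address(ip) in subnet

print(subnet)                              # 188.8.128.0/19
print(in_affected_subnet("188.8.131.52"))  # True
print(in_affected_subnet("198.51.100.1"))  # False
```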
UPDATE 5: We are now at 80% completed and should have the final data restored in the next 1-2 hours.
UPDATE 4: We have restored 40% of the data and are working on the remaining 60% now. We hope to have most of the offline accounts back up over the next few hours.
UPDATE 3: Our team and a DC engineer have confirmed that both drives within the server's RAID array had become faulty. We are now replacing both drives and will be restoring the data from backups. The backups available are from the date: 03/July/2013.
UPDATE 2: The migration of data on the servers failed due to the hard drive issues. We are now at the data centre replacing the hardware.
UPDATE: We are now migrating hosting services to a new node and will update clients shortly. Some VPS services are running normally and those clients will be contacted shortly to be migrated.
We are currently working on a drive issue on the Delta node, but due to issues we will be looking at migrating instances to a different node or unracking the drives and replacing them with upgraded hard drives. We will post further updates as soon as we can.
UPDATE 2 09:47 - 26/June/2013: We now have access to the node and are checking the server for server-side network issues. We hope to have everything checked and repaired within the next 90 minutes.
UPDATE 1 09:17 - 26/June/2013: Network engineers are now at the data centre and servers to work on the networks connected to the node BRAVO. We are sorry for the downtime but we are doing everything we can to get the server back online asap.
ORIGINAL: We are currently investigating an issue with the networks in our Germany data centre connecting to the node: BRAVO. We hope to have an update soon and to have these issues resolved.
UPDATE 21:42 | 25/June/2013: All services have now come back online and have been monitored for the past hour.
UPDATE 16:29 | 25/June/2013: We now have the KVM connected at the DC and now working on resolving the issue. If we are unable to get the network issues resolved a reboot will be planned for tonight (UK Time).
ORIGINAL POST: We are currently looking into a connection issue with the Delta node in Germany. The server is online but failing to respond to external VPS control panel (SolusVM) commands. We are connecting a KVM and checking this further. We hope to avoid a server reboot so that uptime of the VPS instances is maintained.
Update 1 - 16:01 19/June/2013: We have adjusted the Railo/Tomcat memory settings to help with performance and CPU issues seen. We will continue to monitor and update this status update.
Tonight we will be running the install of some PHP extensions and an Apache recompile + server reboot will be required. We expect a maximum downtime of <30min.
We are sorry for any inconvenience caused.
LATEST STATUS (16/May/2013 - 10:15AM): The server has been running well over the past 24 hours and only small adjustments have been made to CF settings without issues. We will soon mark this migration complete and the old server will be looked at being shut down in a week or so.
PREVIOUS 1 UPDATE (15/May/2013 - 9:09AM): The migration has been fully completed and we are seeing sites run really well on CF10.
PREVIOUS 2 UPDATE (14/May/2013 - 17:29): We have 100% completed the migration and are now testing the CF services further. If you find any issues with your website please contact the ColdFusion support department - Direct URL: https://www.hostmedia.co.uk/client/submitticket.php?step=2&deptid=9
Note: If you are using our DNS/Nameservers and your website is showing as offline/down, this is due to your account not yet being migrated. We are working on having all accounts migrated as soon as possible, but this may still take some time. Please see percentage above for details.
New Server Features: ColdFusion 10 Enterprise, CFManager (New version coming soon), CloudLinux (Based on CentOS 6 64Bit), Clustered DNS Network and Latest cPanel/WHM
New servers IP: 184.108.40.206
UPDATE: Upgrade completed and service rebooted.
A new ColdFusion 10 update was released late yesterday afternoon which we have scheduled to be installed tonight (15th of May) at 11:59PM. A ColdFusion service reboot will be required which will cause a small amount of downtime. Expected downtime <5min.
ColdFusion Update Information:
ColdFusion 10 Update 10 Tuesday, 14 May 2013
Update Level: 10
Update Type: Security
Update Description: The ColdFusion 10 Update 10 includes important security fixes.
CURRENT STATUS: United States (Kansas City) - ALPHAUS2 IPs are still nulled by our DC ISP but we are working on this.
UPDATE 3 - 09/May/2013 | 17:31: We have started to see another attack on the same instances after the first set was successfully re-routed. We are working on this now and the DC is monitoring our routers for the BRAVO node.
UPDATE 2 - 09/May/2013 | 09:15: The attack on some of our clients' servers has ended and we are reporting a normal service level. The clients who were affected were contacted, and only those clients were affected by this attack; all other services were running normally during this. Thank you.
UPDATE 1 - 08/May/2013 | 22:49: We have null-routed the IPs; the attack is still continuing but the network appears to be handling the traffic fine now. We will continue to monitor and the data centres will also continue to monitor.
We are currently seeing a DDoS attack on some of our servers in Germany and the US. We have null-routed the attacking IPs but some connections are still reaching the network.
UPDATE 1: We have started prepping and migrating the S9 server.
Update 1: The server is performing normally and management will be sending an email to all CF customers shortly.
We have a quick reboot planned in the next 30-40min for the CF Linux server to increase the general performance of this server. We are sorry for the downtime caused, but this should last no longer than 10 minutes.
UPDATE 4 - 12:53 / 21/April/2013 : We have now completed the migrations and all sites are now responding well and running on the new servers. If you have any issues please do contact us and we will be more than happy to help.
UPDATE 3 - 22:52 / 19/April/2013 : We are still migrating over sites, due to the large amount of data it is taking some time but we will continue migrating sites and update all customers once completed. Thank you for bearing with us.
UPDATE 2 - 16:22 / 19/April/2013 : The migration continues but it is going well, we hope to have all sites migrated by the end of the day. Just a quick reminder if you are using our DNS/Nameservers (dns1.dnshostnetwork.com / dns2.dnshostnetwork.com / dns3.dnshostnetwork.com) you will not need to do anything. As soon as your site is migrated via cPanel the DNS will automatically point your site to the new servers. If you have any issues at all please do contact us via a support ticket to the WEB HOSTING department.
UPDATE 1 - 10:09 / 19/April/2013 : We have started the migrations after some setting changes and updates to the new server. Once the migration has been completed we will send all customers the new IP address again and to supply a general update. The new IP is: 220.127.116.11
We will be applying ColdFusion 9 security updates midnight tonight with a 10/15min reboot. If you have any questions please do let us know.
UPDATE 5 21:50: Due to the issues with the CF services we are now restoring the CFIDE and settings from a backup. We hope to have this resolved soon.
UPDATE 4 16:42: We are seeing some issues with the ColdFusion service after the updates earlier today. Our Adobe qualified partners are checking this now to see what the cause is. The server has been upgraded heavily and is running on newer systems which will increase the general service level.
UPDATE 3 15:52: We will be performing short restarts in the next hour to ensure changes fully take effect on the server and that the memory increases are running correctly. We are sorry for any further downtime, but after these reboots the server will be classed as stable. We will continue to monitor the server to ensure any problems are worked on straight away. We are again very sorry for the downtime during this morning and hope the new improvements will allow your sites to run faster than ever before.
UPDATE 2 12:44: All services are running normally; we are checking the server for issues now that it is live. We will keep this status update open until we have run our reports.
UPDATE 2 12:26: We have completed one scan and a reboot, which did not correct the issues, and a new scan has already started. A number of issues were fixed by the first scan and we hope this second scan will correct the final issues so the server can boot correctly.
UPDATE 1 11:19: We are now running fsck (a disk check) to work out the issues on the server and why the drives won't boot up correctly. We hope to have this completed soon and the server back online. Again we are very sorry for the downtime and hope to have everything back online soon.
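For anyone following along, fsck reports its outcome as a bitmask exit code, which is why a second scan pass can be needed when errors remain uncorrected. A small decoder (flag values as documented in the fsck man page) looks like this:

```python
# fsck(8) returns a bitwise OR of these flags as its exit code; decoding
# it shows whether another pass is needed. Values are from the man page.
FSCK_FLAGS = {
    1: "errors corrected",
    2: "reboot required",
    4: "errors left uncorrected",
    8: "operational error",
    16: "usage or syntax error",
    32: "cancelled by user",
    128: "shared-library error",
}

def decode_fsck(code):
    """Expand an fsck exit code into its constituent flag messages."""
    if code == 0:
        return ["no errors"]
    return [msg for bit, msg in FSCK_FLAGS.items() if code & bit]

print(decode_fsck(0))  # ['no errors']
print(decode_fsck(5))  # corrected some, but errors remain -> rerun fsck
```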
UPDATE 3 28th/March/2013 22:48: We are currently delayed in the migration due to software installs. We will be working through the night to get the service up for tomorrow. We will keep posting updates here, but any questions feel free to contact the sales team.
UPDATE 2 28th/March/2013 16:33PM: We are now working on getting the server racked, our team at the data centre are just putting all the bits together. Once we have the server racked we will get working on this. Sorry for the delay.
We are currently having an issue with the subnet 18.104.22.168 - 22.214.171.124 - Websites and virtual instances will appear down. We have tech team investigating and working to resolve this.
We will be running a quick reboot of our CF9 cPanel server to apply extra disk storage to that server. Expected downtime: <10min
UPDATE 1 (11:24 London Time): Full network access has been restored and no services were affected beyond the network outage itself. The DC team are investigating further. Sorry for the downtime caused.
We are currently experiencing a network failure in the Walla Walla, USA Data Centre. This is being investigated and we hope to have further updates for you soon.
Sorry for the downtime caused and we hope to have this resolved asap.
This reboot has been rescheduled for tonight (7/March/2013):
Due to updates to our server instance we require to run a reboot to apply the performance features. This has been scheduled in for midnight tonight. The expected downtime is less than 15 minutes.
UPDATE 1: The DNS cluster has been repaired and services are back online but we are still working on some fragments of the cluster. Hosting services should start resolving shortly and access to FTP/mail/web should be back soon.
ISSUE: We are currently investigating an issue with the DNS cluster on our shared hosting. Customers using these DNS servers may be affected:
We are working on this now and hope to have resolved shortly.
We will be running some updates on the Coventry Bravo node to ensure we are able to provide additional server features to you.
To ensure these features are done correctly we require to run a quick reboot of the server. This has been scheduled for Wednesday morning at 00:20 (6/March/2013). The expected downtime of the node during the reboot is <15min and our team will be checking this to ensure everything is rebooted correctly.
If you have any issues please do contact our support team and they will be happy to help you.
Our DE1 Windows server (Germany) has become unresponsive but our engineer is at the data centre now working on the issue to see if the issue is hardware related or software. We will post updates as soon as we can.
We will be performing a quick reboot of our DE1 Windows Server to enable some newly installed software to be activated. The downtime will be small and we only expect a downtime of <15min. Sorry for any inconvenience caused.
We are currently rebooting the CF services on this server. This may cause your websites to become slow or stop working during the reboot of the services. Sorry for any downtime caused.
We are currently performing a server reboot of US1 to take upgrades and system changes into effect.
We have a scheduled reboot planned tonight of our cPanel US ColdFusion 9 servers to apply updates to the web services. We only expect a downtime of 5 minutes.
We are currently having issues after a reboot on our Windows server in the UK. Our team are working on this and should have an update shortly. We are sorry for this downtime and the issues caused. Thank you.
We are currently seeing a high load on our US Railo server (DC: Phoenix Server: 1). Our team are already investigating.
We are currently working on high CPU load from accounts on the server and are performing updates to the server. We are sorry for any downtime caused, but we are doing everything possible to resolve the CPU load issues and bring the service back to a stable level.
Future/long term plans for this server: Directors have agreed plans to move to our new servers which run higher grade processors (Intel i7s). We will be ordering the hardware soon and running tests on this platform.
Update 1 : 10:30
All services appear to be coming back online, we are just checking the settings and CPU readings.
We are in the middle of a restart and performance scan on our Windows server to increase the performance and make the server more stable.
We are sorry for the downtime and hope to have everything back online asap.
Due to a high load from PHP and web services we are performing a web service restart to clear temporary files. This will increase the performance of the service greatly.
Expected downtime <1
Thank you and sorry for the downtime.
We have corrected the Apache issue but are looking into what caused the fault and how to make sure this does not happen again. If you have any questions do contact the team and ask for the level 3 tech team to help.
We are currently having issues on our CF1.HOSTMEDIAUK.COM server with Apache failing to start up. Our team are working on this now and we hope to have resolved shortly.
Thank you and very sorry for the inconvenience caused.
Update 1: 30/04/2012 13:12
We have made the changes to the ColdFusion services which included adjusting the MaxPermSize / Maximum JVM Heap and settings on the Windows services. This appears to have greatly increased the general performance of the ColdFusion service but requires monitoring. The server has enough resources to be updated again if required.
We are sorry for the down time and if you have any questions please contact the management department.
We are currently looking into a ColdFusion service issue that is causing the ColdFusion service to stop responding and require a restart. Our team are looking at increasing the general performance of the server and allowing ColdFusion to use more resources on the server.
We are sorry for the downtime and hope to have everything stable asap.
Management are looking into the possibility of moving the entire server to one of our custom Cloud services to greatly increase performance and reliability.
Investigation into the network issues seen last night at our Data Centre found what appears to have been a flow-based attack against a particular IP. We have since located the server in question and secured it.
Apologies for any inconvenience this may have caused.
In order to keep the infrastructure up to date and provide the best service for our customers we are upgrading the switch connections in our racks on Sat 10th March 03:30. You may experience a few seconds loss of connectivity when we plug cf3.hostmediauk.com (Windows Plesk CF9 Server) in to the new switch.
Any dedicated & colocation service customers affected by this upgrade will be informed by email.
Update 1: All services are running and memory increased. We are keeping an eye on this but everything has now been resolved. Thank you and sorry for the issues.
We have had reports our CF9 cPanel Linux server has started to run slow and timeout, we are working on this now and hope to have everything running smoothly shortly. We are sorry for the issues and will resolve asap.
Our team are fixing the Kloxo service issue which has knocked out all web services on our Kloxo CP based servers. (CF & Railo).
We will have everything back online asap.
UPDATE 2: We were correct in seeing it was a JVM memory issue; a restart brought all sites back online. We are keeping an eye on the service and checking documentation to see if this is a known issue.
UPDATE 1: It appears there is a JVM memory issue within ColdFusion which means sites are loading but only after a long period of time. We are looking into this now.
We are currently investigating a number of small outages on the CF3 server, our team are working on this to find out why these are happening and will update all clients soon.
UPDATE 11: Everyone seems to be using the new servers well, but we have a small number of accounts we are working on with the clients to make sure everything is working as it was on the old box. All services are stable. Any questions or problems do let us know.
UPDATE 10: We are currently running a number of tests due to issues with making Plesk 10 work with the very latest ColdFusion hot fixes. We hope to have this final element resolved soon ready to move all accounts to the new server. We are sorry for the delay, any questions do contact our team. Thank you,
UPDATE 9: Our team are doing everything they can to get our new server set up and secure. We have had a couple of small delays, but our current server appears to be stable at the present time and we are working as fast as we can to get the new server online. ETA for online is 09:30AM Friday (23/Sept) with sites being restored first thing. We will continue to do everything we can to ensure the current setup stays online and stable. Thank you,
UPDATE 8: We have emailed all our Windows customers regarding a plan that will be going through today. This will involve moving all accounts to a new, larger server with the latest ColdFusion hot fixes pre-installed to safeguard against the issues we have been having. This upgrade is a huge investment for the company in the Windows hosting service we offer and will allow faster support (due to new staff being brought in), faster speeds (due to the increased port speeds) and faster performance for your websites & applications (due to the larger server specification).
This will be worked on today, we of course want to run as many tests as we can to make sure no issues appear on the servers. If you have any questions do let us know.
UPDATE 7: We are investigating more issues on our Windows servers relating to Sunday's attempt to install ColdFusion hot fix 9.0.1. Our team resolves the issues, but after a few hours ColdFusion becomes unstable and crashes again. Our entire team is working to resolve this asap.
I wish to take this time to thank all our customers for their support and patience. We understand this is not what you want from a provider, having its main service down, but we will resolve this.
UPDATE 6: It appears the CF service is still having a number of issues, which we are working as hard as we can to resolve. We are sorry for these constant issues on this server relating to Plesk & ColdFusion. We will update everyone asap.
UPDATE 5: We are having a number of minor issues on the server still but all services are working fully. We are looking into getting these remaining issues resolved asap.
UPDATE 4: All services are back online and running; we are currently monitoring all services to ensure everything is stable.
Some clients may see an error for your ColdFusion DSN (Data Source Names), to resolve this please do the following:
Any questions do contact us.
UPDATE 3: We have been having some issues with Plesk connections to ColdFusion. If you are unable to access Plesk this is due to the connection issues. We are sorry for this long delay and downtime overnight and hope to have this resolved soon. We are contacting teams from Adobe & Plesk for extra support in this case.
UPDATE 2: Our team have ColdFusion reinstalled and working, we are currently working on the connections between Plesk and ColdFusion. We hope to have this fully resolved soon. Thank you,
UPDATE 1: After applying the update yesterday for ColdFusion 9.0.1 a number of the ColdFusion files were corrupted, the team are now reinstalling ColdFusion to the server and applying all settings. The team are still investigating the issues and hope to have everything resolved asap.
Thank you again for your patience.
We have been having a number of issues with our ColdFusion service today which our team is working hard to resolve fully. We will post updates as soon as we have more details on the issue.
We are very sorry for the inconvenience downtime on the CF services.
Thank you for your patience.
We will be running a number of hardware updates to our US servers for ColdFusion & Railo, the update will include a number of benefits such as faster network connections (faster ports being opened up), larger backup drives, increased disk space due to new drives being added.
We do not expect much downtime, but with a min downtime of <20min.
Railo 3.2.3.000 has been released and we are running the update today, we expect minimal downtime and only a Railo restart required.
If you have any questions feel free to contact our team.
We are applying an update to our shared hosting server SERVER42 at 02:00 on the 18th September. ColdFusion sites may be unavailable for a short period during this time.
We are sorry for any downtime that may occur.
Since the 16th of July our team has been working to fully restore all accounts on our media1 servers after a hack that used insecure scripting in customer accounts to gain access to commands on the server. This security breach has been fixed, and the accounts where these scripting errors allowed files to be edited or settings to be changed are being fixed now.
We are sorry for any downtime or poor connectivity that occurs, but our team is monitoring the situation and we hope to have more updates soon.
Please make sure to use the support tickets to contact our support team.
Thank you for your time & patience.
We are currently having some issues with our Media 2 server in the US. Our data centre, who we have a contract with to maintain this server, are looking into the problem now.
Very sorry for the downtime and hope to have this resolve shortly.
We are currently running some tests and looking into an issue on our ColdFusion Kloxo server in the US. We are sorry for any downtime and hope to have the server back up soon.
Host Media UK Tech Team
We have restored most of the servers systems and working on the rest of the services. Sorry for the downtime that has occurred.
We are currently seeing high usage on our CF2 server, which we are investigating and working to resolve as soon as possible. The issue appears to be server-wide; we hope to have an update soon and the server back online.
We are very sorry for the downtime.
UPDATE 11:47AM :: The data centre team is helping with the issue; with their assistance this should be resolved quickly.
The issue has now been fixed. It appears the server monitoring was not taking cached memory into account, so it reported more memory in use than was actually the case.
Everything appears to be running fine now but our team will be keeping an eye on it to make sure nothing else comes of it.
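For reference, the distinction the monitoring missed can be illustrated with a short script. This is a minimal sketch, not our actual monitoring code; it parses /proc/meminfo-style output as found on Linux, and the sample figures below are hypothetical:

```python
# Sketch: "really used" memory excludes reclaimable buffers/cache,
# which is the distinction the monitoring originally missed.
# All values are in kB, as reported by /proc/meminfo on Linux.

def parse_meminfo(text: str) -> dict:
    """Parse /proc/meminfo-style 'Key:   value kB' lines into a dict."""
    info = {}
    for line in text.strip().splitlines():
        key, rest = line.split(":")
        info[key] = int(rest.split()[0])
    return info

def really_used_kb(meminfo: dict) -> int:
    """Memory in use once buffers and cache (reclaimable) are excluded."""
    used = meminfo["MemTotal"] - meminfo["MemFree"]
    return used - meminfo["Buffers"] - meminfo["Cached"]

sample = """\
MemTotal:  4048576 kB
MemFree:    204800 kB
Buffers:    102400 kB
Cached:    1843200 kB
"""

info = parse_meminfo(sample)
naive = info["MemTotal"] - info["MemFree"]  # looks nearly full: 3843776 kB
actual = really_used_kb(info)               # much lower: 1898176 kB
print(f"naive used: {naive} kB, really used: {actual} kB")
```

The naive figure suggests the server is almost out of memory, while the cache-aware figure shows plenty is still reclaimable.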
We are currently seeing a large increase in memory and CPU usage by Apache on the Railo server, even after our RAM upgrade last night. Our UK team is investigating this, and our US server administrators will be checking the logs to find the cause. We believe an account is running an unsafe script that is creating the high server load.
We will post an update here asap!
The Railo service is now back up and running while our team investigates the reasons for the service downtime. We will update this issue post with our results.
We are currently updating our Railo service which we are sorry to say is taking a bit longer than we thought due to issues with the update. We are working on getting the service back to normal ASAP! The update will bring the Railo service to the latest version with all security and new features.
We will update all our customers once tests are finished.
Sorry for any downtime on ColdFusion / .cfm / .cfc files.
All services have been tested over a 12-hour period and appear to be running fine now. We are still investigating the issue with the RAID controller.
Our team has found the issue: an unexpected RAID hardware fault. The faulty hardware is being replaced now, and the server will be up and running within the hour.
We are sorry for this outage; it was caused purely by the RAID hardware fault.
Thank you and we will update you soon.
We are currently having issues with our NY1 server which our US and UK team are working on.
This issue started after a restart following a system clean intended to improve performance on the server and speed up the mail / POP3 systems. The server appears to be stalling on the main drives for an unknown reason.
We will update all our customers on this server asap!
Our NY1 server has been going offline and online over the night. We have been able to restore partial access for some services but are still working on the port 80 issue.
We are sorry for this issue; our team is looking into it and our data centre is investigating.
We will update our reports here.
*** ISSUE RESOLVED ***
We have been seeing high CPU usage across our servers and are working on this issue with our full team.
We hope to have news soon on the cause, and we will make sure it does not happen again.
Sorry for any issues on your websites, as reboots may be required.
:: UPDATE 17/Sept/2009 ::
We have found the issues causing our servers to load higher than normal (currently fixed, but we are still monitoring). We are in the process of moving up the deadline for our new systems, which will offer customers new locations as well as newer servers. This mainly applies to non-FFmpeg/PHP Ming/reseller customers. If you would like to know more, please open a support ticket to sales with your questions.
:: UPDATE 22/Sept/2009 ::
As many of our US media server customers will have seen, all sites were offline even though the server's main systems, including WHM/cPanel, were running fine. We are investigating this and awaiting test results from our data centre.
We would like to take a moment, from the whole team, to apologise for any emails missed during this time and for the downtime. We will be offering customers the chance to move to our new servers in a wide range of locations. We hope our final tests today will allow customers to have new accounts set up around the world.
Our US media server had issues sending and receiving mail on port 25. Our US ISP changed their security policy without any warning, forcing us to use port 26 instead for all mail.
The mail servers have now been restarted and all tests show mail working fully again. If you have any problems with your mail please contact us.
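If a mail script of yours still assumes port 25, a fallback to port 26 can be sketched as below. This is a minimal illustration under our stated assumptions, not a definitive client: `mail.example.com` is a placeholder hostname, not one of our servers.

```python
import smtplib

def connect_smtp(host: str, ports=(25, 26), timeout: float = 5.0) -> smtplib.SMTP:
    """Try each SMTP port in order (25 first, then the ISP-mandated 26)
    and return the first connection that succeeds."""
    for port in ports:
        try:
            return smtplib.SMTP(host, port, timeout=timeout)
        except (OSError, smtplib.SMTPException):
            continue  # port blocked or refused; try the next one
    raise ConnectionError(f"could not reach {host} on ports {ports}")

# Usage (requires a reachable mail server):
# server = connect_smtp("mail.example.com")  # placeholder hostname
# server.quit()
```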
We are currently applying fixes for some issues we have found on the email server for our UK servers. The team there is working on this issue.
Our UK servers may be running a bit slower than our normal fast speeds, and some minor downtime may occur due to updates to our systems.
We are hoping these updates and restarts will improve our overall systems.
Sorry for any issues caused.
The server is now back up and running.
There is a planned restart of the Windows CF8 server for updates and install of FFmpeg.
This will cause a short period of downtime on the server.
Sorry for any inconvenience.
Over this weekend we will be performing some upgrades to our support and sales systems, including our main mail for Host Media UK. All enquiries will be answered as soon as this system upgrade has gone through, and we are sorry for any inconvenience. All our server administrators will be on hand monitoring the servers as normal to make sure no downtime occurs.
Thank you for your support.
Due to some network issues found by our server administrators, we are working on upgrading our server connection to make sure no major downtime occurs and to maintain our 99% uptime.
Some downtime may occur but we hope to have this issue sorted asap.
If you have any questions, please contact us through either Host Media UK or AeonCube Networks.
Host Media UK Server Team
Overnight we had to carry out some updates on the cPanel / WHM Linux server.
This is now complete and we are sorry for the downtime.
Total downtime: 150 minutes
If you have any questions regarding the server and our updates please contact us.
We are working on our ColdFusion / ASP services to run checks on accounts and tune the server setup to run faster. Some issues may occur on new setups, and some features may be offline for a short time (pre-installed scripts etc.).
We are also looking into upgrading the server to a Plesk based server to allow faster and better systems.
Shared Hosting Server Upgrades
Our shared servers will be undergoing upgrades, but no website downtime is expected while these upgrades are in progress. As our servers are also being moved in this upgrade, we will be publishing new shared IP addresses. If your site uses A records for its domain names, please change them as soon as possible. Name servers such as dns1.hostmediauk.com / dns2.hostmediauk.com will not be affected.
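If you are unsure whether your domain still points at an old address, comparing the resolved IP against the new shared IP is enough. This is a minimal sketch; `203.0.113.10` and `198.51.100.7` are placeholder addresses, not our actual IPs, and `example.com` stands in for your domain.

```python
import socket

NEW_SHARED_IP = "203.0.113.10"  # placeholder: use the IP from your migration notice

def a_record_needs_update(resolved_ip: str, new_ip: str = NEW_SHARED_IP) -> bool:
    """True if the domain's A record still points somewhere other than the new server."""
    return resolved_ip != new_ip

# Resolve your domain and compare (requires network access):
# resolved = socket.gethostbyname("example.com")
# if a_record_needs_update(resolved):
#     print("Update your A record to", NEW_SHARED_IP)

print(a_record_needs_update("198.51.100.7"))  # old placeholder IP -> True
print(a_record_needs_update(NEW_SHARED_IP))   # already updated -> False
```

Domains pointed at our name servers (dns1/dns2.hostmediauk.com) need no change, since we update those records ourselves.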
Server upgrades include:
New features coming soon
We will keep everyone updated on the progress of our upgrades.
Host Media UK Management