There are upcoming maintenance events which may impact our services.
There are no Open Network Issues Currently
There is 1 Scheduled Network Impacting Issue.
Data Centre Rack Change (Resolved) Medium

Affecting System - Coventry Rack Move & Upgrades

  • 07/03/2024 23:00 - 07/03/2024 23:55
  • Last Updated 08/03/2024 00:06

UPDATE 4: We are happy to confirm all servers are back online and services are running normally. We will closely monitor all systems during the night and resolve any issues that may arise. This status will now be set to "Resolved". If you do face any issues please contact our support team, who will be happy to help. Thank you for your patience, and we hope you enjoy the upgraded network & power services.


UPDATE 3: Networks are being updated now to the new racks and we should start seeing services coming back online. We will update this page once we have verified everything.


UPDATE 2: Servers are now being loaded into the new racks. Once racked and power checks are completed, network updates will be started to map our subnets etc to the new facility.


UPDATE 1: We have started shutting down servers, and engineers at the data centre are beginning to move them into their new racks.


DETAILS:

On Thursday the 7th of March at 11PM (UTC) we will be performing a rack change at our UK data centre, moving to a new facility at the same location. This means servers will be disconnected, moved and re-inserted into our new racks.

Actions Required by Customers: No actions required, but please see below 'Recommendations for Customers'.

Expected Downtime: ~50 minutes (likely 30-40 minutes)
This move will also affect the internal mail services that power incoming mail to our help desk. You will still receive help desk ticket updates via email, but please view and reply to tickets in your client portal instead of by email.

Reason for move:

Our move will provide a number of critical upgrades and benefits to our services which will include:

  • Upgraded network with 2 x 10Gb/s uplinks
  • Upgraded power supply, with A/B + C for extra redundancy
  • New Juniper switches
  • Improved cooling systems

Recommendations for Customers:

As with any change to systems or hardware, or any move, we always recommend customers take a backup of their critical data before the date of the move and store it on their local machines (or on services outside of Host Media's network, e.g. Dropbox, OneDrive etc) in case of any failure. This should be done as part of your normal backup processes for locally stored backups.

Time of Migration:

7th of March 2024

11PM UTC
COORDINATED UNIVERSAL TIME

5PM CST (UTC-6)
CENTRAL STANDARD TIME

Node Migration | Location: US (Resolved) Low

Affecting Server - Richmond

  • 22/02/2024 07:00
  • Last Updated 23/02/2024 09:58

We will be performing a standard cPanel migration of all accounts hosted on the US-based node 'Richmond' on Thursday the 22nd of February 2024 at 7AM UTC to new servers.

Domain/DNS Actions Required by Customers:

  • If you use our DNS name servers on your domain name, you will not need to make any changes as everything will sync to the new server once your account has been migrated.
  • If you DO NOT use our DNS name servers on your domain name, you will need to update your domain's A/MX records to use the server's new IP '172.96.160.100'. Once the migration has been completed, we will send communications to all customers affected by this migration.
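
A quick way to confirm the record change has propagated is to resolve your domain's A record and compare it against the new server IP from this notice. The sketch below uses only the Python standard library; `example.com` is a placeholder for your own domain.

```python
import socket

# New Richmond server IP from the migration notice above.
EXPECTED_IP = "172.96.160.100"

def a_record_matches(domain: str, expected_ip: str) -> bool:
    """Resolve the domain's A records and check the expected IP is present."""
    try:
        _, _, addresses = socket.gethostbyname_ex(domain)
    except socket.gaierror:
        # Domain did not resolve at all.
        return False
    return expected_ip in addresses

# Example (placeholder domain):
# a_record_matches("example.com", EXPECTED_IP)
```

Note that DNS propagation can take time, so a mismatch shortly after updating your records does not necessarily mean the change failed.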

Expected Downtime: Minimal due to cPanel direct transfer between servers 

Reason for move:

Our 2024 plans include improving our US-based services, starting with moving our current US nodes to our brand new data centre/server supplier. Our new shared and reseller hosting servers will be primarily hosted on the west coast in Los Angeles; with great connections to all of Asia and, of course, all of the Americas, it is the perfect location for our new range of hosting. Customers will also benefit from faster servers using a range of high-performing AMD & Intel CPUs, running the latest software, as well as a mix of Enterprise SSD and NVMe hard drives (RAID protected). Across our entire standard hosting range we will also be providing further included backups within JetBackup, as well as upgrades to version 5 of the software.

Recommendations for Customers:

With any hardware change or process, we strongly recommend taking a full backup of your account before the 22nd of February in case of any data issues during the migration. We will do everything possible to ensure a stable and clean transfer, but as with everything in tech, things can happen unexpectedly. We will keep the old servers for a period after the migration in case we need to revert or re-transfer anything, but please do make your own copy of your key files (databases especially).
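
For keeping a local copy of your key files, a minimal sketch like the one below archives a site directory into a dated `.tar.gz` using only the Python standard library. The paths are placeholders; adjust them to your own site files and backup location, and back up database exports separately.

```python
import tarfile
from datetime import date
from pathlib import Path

# Placeholder paths: adjust to your own site files and backup location.
SITE_DIR = Path("public_html")
BACKUP_DIR = Path("backups")

def backup_site(site_dir: Path, backup_dir: Path) -> Path:
    """Create a dated .tar.gz archive of site_dir inside backup_dir."""
    backup_dir.mkdir(parents=True, exist_ok=True)
    archive = backup_dir / f"{site_dir.name}-{date.today().isoformat()}.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        # arcname keeps paths inside the archive relative to the site folder.
        tar.add(site_dir, arcname=site_dir.name)
    return archive
```

This is in addition to, not a replacement for, any server-side backups such as JetBackup snapshots.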

Time of Migration:

22nd of February 2024

7AM UTC
COORDINATED UNIVERSAL TIME

1AM CST (UTC-6)
CENTRAL STANDARD TIME

Post-Update Disk Issues (Resolved) Critical

Affecting Server - Darwin

  • 25/01/2024 23:15
  • Last Updated 26/01/2024 16:22

UPDATE 5: We are monitoring the services, but at present everything looks normal. We will be working on disk upgrades in the coming weeks, with plans being drawn up.

UPDATE 4: Services are now all back online and we will be running a post check. One issue was found during this check that hadn't come up before, related to one of the disks, which we will look at replacing in the hardware RAID asap once all services are settled. Thank you for your patience and understanding - we are very sorry for the downtime caused.

UPDATE 3: The FSCK scan is running and disk data is being optimised at the same time. This can take some time to run, so we can't provide an exact ETA, but hopefully it will be less than 1 hour.

UPDATE 2: The server parts check came back fully OK; the disks are now being fully scanned after the initial tests, and partition data is being corrected with FSCK.

UPDATE 1: Due to the server having had some issues previously, we are having to run further checks on this box before running a full disk scan, to ensure we are not applying a patch only for another issue to appear in the future. We are sorry for this long unexpected downtime, but we are working on getting everything back online asap.

We are working on an issue with the Darwin server after updates were made to the Kernel of the server. These checks include a full disk scan which is taking some time.

Details of issue:

A CloudLinux update was performed on the DirectAdmin server Darwin which was suggested by the CloudLinux support team in a check/scan of the server to improve features and services on the box. Once the updates were made a soft reboot was required to ensure all services were updated. Downtime expected was less than 5 minutes and planned for 11 PM to avoid peak times.

Once rebooted, the server was unable to boot back online properly and presented with kernel issues.

Node Reboot (Resolved) Low

Affecting Server - Newton

  • 10/01/2024 13:30 - 10/01/2024 13:40
  • Last Updated 10/01/2024 13:39

UPDATE: Reboot completed and all services running normally.

We will be performing a standard reboot of the cPanel node 'Newton' (Singapore) to apply some minor fixes and updates. We expect downtime to be minimal (~10min).

Reported Darwin Issues (Resolved) High

Affecting Server - Darwin

  • 01/09/2023 14:50
  • Last Updated 07/09/2023 13:08

UPDATE 6: We are closing this status log as we have left it open for a number of days to ensure people saw the details and we are not getting any new reports of issues. If you do face any issues please open a tech support ticket.

UPDATE 5: If you are facing issues with your database, please check your JetBackups for possible restore points or contact our support team to assist you. We have some reports of database users missing which is causing some websites to show a database connection error.

UPDATE 4: All services have been stable for some time, if you are facing any database connection issues please get in touch with our tech support team via the help desk.

UPDATE 3: We are investigating some SQL issues post the disk fix and working to have this resolved asap.

UPDATE 2: The disk clean and FSCK was completed and the server is now back online with all services started. If you have any issues please contact a member of the technical team.

UPDATE 1: We are running a disk clean which can take some time but once completed we will confirm the next steps.

ISSUE: We are currently investigating an issue with the Darwin node and an unexpected downtime that was logged. We will update here when we understand the cause.

Memory Upgrade (Resolved) Low

Affecting Server - Greyhound

  • 10/08/2023 06:00
  • Last Updated 10/08/2023 06:30

UPDATE: Memory update and Lucee configuration completed.

We will be performing a memory upgrade on the Greyhound server at 6AM (UK/London time) on Thursday the 10th of August. Downtime is expected to be less than 30min while these updates go through. We are sorry for the short notice of this upgrade, but it is to help resolve issues found on the server that have caused some memory-heavy services to be unstable.

Mail Delivery Issues (Resolved) Medium

Affecting Server - Blackwell

  • 13/07/2023 09:00 - 19/07/2023 10:57
  • Last Updated 13/07/2023 13:17

UPDATE 2

We are seeing some retry attempts on emails that had not come through, but it appears some will not be able to be delivered. We are continuing our investigation to work out why this issue happened; it was resolved once we did a full restore of the EXIM configuration to cPanel defaults.

UPDATE 1

Mail should now be arriving in all accounts' inboxes. We have reverted some changes that were applied which caused mail to no longer flow into customers' mailboxes. We are continuing our investigation to see what caused this and why it affected incoming mail.

ISSUE

We are aware of an email delivery issue affecting incoming emails to the cPanel server Blackwell. Our teams are investigating this issue and working to resolve it asap.

Sorry for the inconvenience caused and we hope to have all services back to normal soon.

Node Maintenance (Resolved) Low

Affecting Server - Blackwell

  • 31/03/2023 06:00 - 31/03/2023 06:30
  • Last Updated 31/03/2023 09:05

We will be performing a memory upgrade and software update on the node "Blackwell" on the 31st of March 2023 at 6AM BST (UK/London). These upgrades will correct a detected issue with the memory allocation on the server.

We expect downtime to be less than 30min.

UK Data Centre Disruption (Resolved) Critical

Affecting System - UK Data Centre

  • 08/12/2022 08:30 - 20/12/2022 10:24
  • Last Updated 20/12/2022 10:24

UPDATE 13: We are awaiting a full report on the outage from our data centre; once we have this and have analysed the findings, we will post a status update with our own investigations into the events following the ~4 hour intermittent power outages. This status will be marked as closed as we continue with BAU support of our customers. Thank you for everyone's patience during this time.


UPDATE 12: Restores of the available data have been completed, and new remotely stored database backup tasks are now running alongside the full snapshots. We continue to look into the cause of the backup issues to generate a report.


UPDATE 11 - BACKUP UPDATE: At present the JetBackup engineers do not understand why databases are missing from our DirectAdmin backups; they are awaiting the next backup job to run to analyse this further. Internally we are considering options for our DirectAdmin hosting going forward if such critical capabilities are missing or faulty.


UPDATE 10 - BACKUP UPDATE: Due to failures, some JetBackup-based backups are missing certain data, such as databases, or are completely unavailable even though a record exists. We have had JetBackup support looking at this and working out why this is so. We are very sorry for the inconvenience caused, but we are asking those with missing data to provide their own backups for our technical teams to restore for them.


UPDATE 9 - DISASTER RECOVERY PLAN: We are progressing with the full disaster recovery plan with the following stages, which will be updated as we progress:

  1. New clean install of OS COMPLETE
  2. Installation of the core software:
    - DirectAdmin COMPLETE
      (Note: Implemented new DA version with PRO Pack)
    - CloudLinux COMPLETE
    - LiteSpeed Web Server COMPLETE
    - JetBackups COMPLETE
  3. Link JetBackups with offsite cloud backup servers COMPLETE
  4. Begin account restore (this will take some time) COMPLETE
    Restore Notes:
    - We are continuing to verify and re-queue restores and resolve any issues customers may be facing; if you believe your account isn’t available, please get in touch with our team.
    - Due to some failures in restores, we have the JetBackup & DirectAdmin support teams advising us on the fixes required. If you haven't been able to gain access to your account, we strongly recommend contacting the support team and requesting the account be set up from new while we restore any/all missing data.
  5. During account restore install non-core software
    - Softaculous  COMPLETE
    - CrossBox Mail Suite PENDING
    - RoundCube Mail Client COMPLETE
    - PHP Version Selector COMPLETE
    - Website Builders  COMPLETE

Important Notes - Please read:

  • Resellers:
    Account restores for resellers may require our teams to contact you via a help desk support ticket to ensure any/all sub DirectAdmin accounts are properly synced with your user and are restored correctly.
  • Accounts ordered after the 2nd of Dec:
    We are sorry to say our weekly backups were due to run just as the outage happened, which is very bad timing. Accounts ordered after the 2nd of Dec will be created as clean new accounts.
  • Ticket Response Times:
    Our entire team is focused on restoring the remaining accounts onto the newly built Darwin node, which is taking most of the team's time. We will respond to all tickets, but there is a large backlog to work through. Sorry for the inconvenience caused by the delays; we will ensure we keep this status page up to date.

UPDATE 8: Unfortunately it appears the disk on the Darwin node became corrupted due to the constant power disruptions, so we are putting our full recovery plans in place. This will take some time, and we will be working with customers to get them back online asap. We will post further updates here once this has been started. We are very sorry for the inconvenience this has certainly caused. A post-incident debrief with the data centre and management will take place so we can understand the full details of this.


UPDATE 7: Progress with the DirectAdmin node labelled Darwin is underway and we are attempting to repair the damage caused to the storage drives from the power outage. At the moment we don't have an ETA on the repair but will keep this incident report updated.


UPDATE 6: Most servers are confirmed online; the Darwin node appears to have disk issues which need to be repaired due to the power being suddenly cut. We are working on this as quickly as possible, but disk scans and corrections can take some time.


UPDATE 5: We are seeing servers still having issues, with servers becoming active and then going down again. The data centre are actively working on this, and we are monitoring our services closely to get everything restored asap.


UPDATE 4: We are seeing most servers online, but a couple, including the Darwin DirectAdmin server, remain down. An engineer at the data centre is checking this now.


UPDATE 3: Power has been restored but we are still closely monitoring the situation and awaiting all servers to come back online.


UPDATE 2: Some racks have started to come back online, but unfortunately ours is not among them. The engineers are investigating the issue and we hope for updates soon.


UPDATE 1: You can track the data centre updates via https://status.ukservers.com/


We are currently working with our UK data centre on a power issue that is affecting the entire data hall. We will post updates as we get them.

Mail routing to MailChannels unstable (Resolved) High

Affecting Server - Blackwell

  • 05/12/2022 14:30
  • Last Updated 07/12/2022 17:11

UPDATE 2: After a couple of days, all services have been running normally. This issue will be marked as resolved.


UPDATE 1: We have corrected the issue with the routing of mail to Mail Channels on the Blackwell server and resending all mail in the queue. We will keep monitoring to ensure the queue is worked through over the next few hours. 


We are currently investigating a mail routing issue on the node Blackwell which is causing mail to remain stuck in the server's mail queue. We are actively working on this; once resolved, all queued mail will be resent. We are sorry for the inconvenience caused.

Server Migration (Eden -> Darwin Node) (Resolved) Low
  • 17/10/2022 05:50
  • Last Updated 24/10/2022 11:31

UPDATE 8

The Eden server has been shut down. If you have any issues or questions please do get in touch with the team. Thank you.


UPDATE 7

We will be shutting down the Eden server on Monday the 24th at 10am; please ensure you have updated your domain's DNS or A/MX records before this time.


UPDATE 6

Crossbox has been updated and is working. We are monitoring the service, but if you face any issues please do get in touch.


UPDATE 5

We have completed the migration and all accounts are now on the new server. Please login using the URL: https://darwin.dnshostnetwork.com:2222/ with the same logins, or access the client portal for SSO login to DirectAdmin.

New Server IP:

178.159.5.244 

New DNS:

ns1.darwin.dnshostnetwork.com
ns2.darwin.dnshostnetwork.com

If you have any issues please open a technical support ticket to get the quickest updates and solutions.

NOTE: We are seeing an issue with Crossbox on our Darwin server and the Crossbox tech support team are investigating.  


UPDATE 4

We have now migrated all sites and are doing final tests and manually migrating any failed accounts. We hope to send all clients email updates shortly. Thank you again for your patience, and once again sorry that the migration has taken longer than planned.


UPDATE 3

We are in the final stages of the migration, syncing metadata and some user data. We expect this to be completed within 3 hours. Once completed, we will ask all customers to check their DirectAdmin logins and their files/data on the new server to ensure there are no problems, ready then to update DNS/A records via their domain providers. We will keep the Eden server online for a few days to allow time for everyone to check sites and make DNS changes. We do recommend taking a backup of your data from the old server just in case (we will have backups as well, stored on our offsite backup storage).


UPDATE 2

We have completed 80% of the transfer from Eden to Darwin server, once this has been fully completed we will update all customers. Thank you for your patience.


UPDATE 1

We are continuing the restoration of accounts, progress has been a little slower than first thought but we are working on this with the highest priority. All services on the Eden server continue to be online and websites are loading so no downtime for any service.


We are currently migrating all accounts from the DirectAdmin server Eden to the Darwin node. Once completed we will email all customers reminding them to update any DNS/A:records.

If you have any questions please do get in touch with the team.

CloudLinux 503 Errors (Resolved) High

Affecting Server - Blackwell

  • 09/05/2022 04:00 - 09/05/2022 11:30
  • Last Updated 09/05/2022 13:48

In the early hours of today we found that one of the Power Supply Units on the Blackwell server had become faulty, and our engineers removed the PSU as it was causing minor power issues. Once this was corrected with minimal downtime, we noticed that, slowly over time, CloudLinux started to fail, though not on all websites. Our alerting failed to notify us of this, as all our tested features (ping/http/cPanel etc) were green.

We resolved the issue with the support of cPanel to update our running software and after a number of tweaks all services are back online.

If you have any issues please do contact our support team.

We are sorry for the inconvenience caused by this downtime, we will continue to monitor this server for any issues.

UK Data Centre Network Issues (Resolved) Medium

Affecting System - UK Data Centre

  • 20/10/2021 16:29
  • Last Updated 21/10/2021 10:03

UPDATE: All connections and BGP sessions have now been stable for 12 hours. We will close this issue now; the data centre will continue to monitor and is currently happy with the resolution.


UPDATE: Moving priority of ticket to 'Medium' while we await updates from the data centre.


UPDATE: Services have now become available, downtime <10min was recorded by our monitoring systems. We are awaiting an investigation report from the data centre.


We are currently investigating network issues at our UK data centre. Please stand by for further updates.

Brunel cPanel Migration (Resolved) Medium
  • 24/08/2021 13:28
  • Last Updated 26/08/2021 09:03

UPDATE 2: Migration has been completed and post-migration issues resolved on the few accounts that reported them. If you find any issues please contact a member of the support team who will be more than happy to assist.


UPDATE 1: The migration continues, with a large number of accounts already moved. You may see some DNS propagation effects while we switch DNS settings; this can appear as a cPanel default screen. We hope to have everything completed soon.


On Tuesday the 24th of August at 02:00AM (UK, London timezone) we will be migrating all accounts from the server listed as Brunel (IP 5.101.142.88) to the node named Blackwell (IP: 5.101.173.45).

If you are using our DNS/nameservers you will not need to make any changes.

For reference our nameservers are:

  • dns1.dnshostnetwork.com
  • dns2.dnshostnetwork.com

If you are using A records (CloudFlare/custom DNS services) you will need to update your domain's records to point to the new server's IP: 5.101.173.45 on the day of the migration.

Please note that during the day it won't be possible to log in to your new cPanel server via the client portal. You will be able to access and log in using the direct cPanel URL of the new server: https://5.101.173.45:2083/
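
If you want to confirm the new server is reachable before updating your records, a simple TCP connectivity check against the cPanel SSL port can help. The sketch below is illustrative, using the new IP and the standard cPanel SSL port (2083) from this notice.

```python
import socket

# New Blackwell server details from the migration notice above.
NEW_IP = "5.101.173.45"
CPANEL_SSL_PORT = 2083

def port_open(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers refused connections, timeouts and unreachable hosts.
        return False

# Example: port_open(NEW_IP, CPANEL_SSL_PORT)
```

A successful connection only shows the port is open; use the direct cPanel URL above to verify you can actually log in.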

If you have any questions please contact a member of the sales/accounts team by clicking here.

S03 Lucee Server Migration => Greyhound (Resolved) Medium

Affecting Server - [S03] Linux cPanel ~ Lucee ~ London UK

  • 29/03/2021 03:00 - 01/04/2021 18:45
  • Last Updated 30/03/2021 09:51

UPDATE 2: All accounts are confirmed on the new server, we have been delayed by a few accounts that required manual migration but these have been completed.


UPDATE 1: The majority of accounts have been migrated and now running on our new server. If you face any issues please do contact a member of the support team.


Scheduled migration and upgrade of our S03 server to newer hardware with a new name.

If you use our DNS/nameservers you will not need to make any changes, but if you use A records please ensure you change your domain's settings to point to the new IP 178.159.5.243.

Downtime will be minimal but due to the number of accounts to migrate, it will be a process over a number of hours.

Important Notes:

  • Lucee Version Change
    We have been testing the latest version of Lucee, and on the new server we will be implementing version 5.3.7.48. We recommend testing your website on this version in a local development environment and making any code corrections that might be required. The current version on S03 is: 5.2.7.63
  • cPanel Access
    To access cPanel using the hostname of the server, please use: https://greyhound.dnshostnetwork.com:2083/ once your account has been migrated. You will not be able to access your old account on the S03 server once the transfer has been completed.
  • Database / DSN Host Settings
    Please ensure to update any Lucee DSN or DB config settings to use 'localhost' instead of any remote database settings.
  • Lucee Timeout / Crash Protection
    The Greyhound server has better timeout and crash protection features, so if you have applications that take longer than 60 seconds to complete you may see errors. Please ensure your code is optimised to run and complete within 60 seconds.

Beagle Server Migration (Resolved) Medium

Affecting Server - Greyhound

  • 15/03/2021 04:00
  • Last Updated 17/03/2021 11:31

UPDATE 1: All accounts have been migrated; please check to ensure you are now using the new IP in your domain's DNS settings. If you find any issues please contact the team.


Scheduled migration and upgrade of our Beagle server to newer hardware.

If you use our DNS/nameservers you will not need to make any changes, but if you use A records please ensure you change your domain's settings to point to the new IP 178.159.5.243.

Downtime will be minimal but due to the number of accounts to migrate, it will be a process over a number of hours.

Important Notes:

  • Lucee Version Change
    We have been testing the latest version of Lucee, and on the new server we will be implementing version 5.3.7.48. We recommend testing your website on this version in a local development environment and making any code corrections that might be required. The current version on Beagle is: 5.3.5.96
  • cPanel Access
    To access cPanel using the hostname of the server, please use: https://greyhound.dnshostnetwork.com:2083/ once your account has been migrated. You will not be able to access your old account on the Beagle server once the transfer has been completed.

Coventry DC Network Issues (Resolved) Critical

Affecting System - DC Network Rack 1

  • 01/02/2021 17:18
  • Last Updated 01/02/2021 17:41

POST ISSUE REPORT: The issue was due to a misconfiguration in the rack's firewall layer during an update to increase the IP ranges on our servers. This was corrected by the onsite engineers who control the firewall systems.

UPDATE 2: Network access has been restored and servers are now loading. We will update with the cause once we have checked all hardware. 

UPDATE 1: Engineers are onsite looking into the network issues and hope to have an update shortly.

ISSUE:

We are currently investigating network issues affecting one of our racks at our Coventry DC. Our alerting systems were triggered, which raised a P1 critical issue.

Rack Move (Stewart Node) (Resolved) Medium

Affecting System - Stewart Node

  • 28/01/2021 05:30
  • Last Updated 28/01/2021 17:00

UPDATE: Server move was completed without any issues.

We will be upgrading our Coventry racks for planned improvements. On the 28th at 5:30AM (UK/London time) we will be moving the physical server node named Stewart to our new racks at the same data centre.

We expect downtime to be minimal, and engineers will be ensuring the smoothest of transitions.

Detailed Timings:

  • 5:15 Starting preparing for the move
  • 5:30-6:00 Begin shutting down servers and racking them into the new racks
  • 6:00-6:30 All services should be confirmed as online

 

Rack Move (Blackwell & Victoria Nodes) (Resolved) Medium

Affecting System - Blackwell & Victoria Nodes

  • 29/01/2021 05:30 - 01/02/2021 17:21
  • Last Updated 27/01/2021 12:11

We will be upgrading our Coventry racks for planned improvements. On the 29th at 5:30AM (UK/London time) we will be moving the physical server nodes named Blackwell & Victoria to our new racks at the same data centre.

We expect downtime to be minimal, and engineers will be ensuring the smoothest of transitions.

Detailed Timings:

  • 5:15 Starting preparing for the move
  • 5:30-6:00 Begin shutting down servers and racking them into the new racks
  • 6:00-6:30 All services should be confirmed as online

 

Rack Move (Churchill & Nelson Nodes) (Resolved) Medium

Affecting System - Churchill & Nelson Nodes

  • 27/01/2021 06:00
  • Last Updated 27/01/2021 08:57

UPDATE: All services are running normally.

We will be upgrading our Coventry racks for planned improvements. On the 27th at 6AM (UK/London time) we will be moving the physical server nodes named Churchill and Nelson to our new racks at the same data centre.

We expect downtime to be minimal, and engineers will be ensuring the smoothest of transitions.

Detailed Timings:

  • 5:30 Starting preparing for the move
  • 6:00-6:30 Begin shutting down servers and racking them into the new racks
  • 6:30-7:00 All services should be confirmed as online

 

Rack Move (Brunel Node) (Resolved) Medium

Affecting System - Brunel Node

  • 26/01/2021 06:00 - 26/01/2021 10:51
  • Last Updated 26/01/2021 10:51

UPDATE: Server move completed without any issues.

We will be upgrading our Coventry racks for planned improvements. On the 26th at 6AM (UK/London time) we will be moving the physical server named Brunel to our new racks at the same data centre.

We expect downtime to be minimal, and engineers will be ensuring the smoothest of transitions.

S07 Account Migration (Resolved) Medium
  • 02/11/2020 02:00 - 07/11/2020 11:02
  • Last Updated 16/10/2020 12:09

We will be migrating all S07 Plesk accounts to a new Windows Plesk server, code-named 'Austen' (named after Jane Austen, the English novelist).

We have scheduled the migration for the 2nd of November at 2am UK local time.

You will need to adjust your domain's DNS or A records to one of the following options on the above date:

Nameservers:

ns1.austen.dnshostnetwork.com
ns2.austen.dnshostnetwork.com

A:Record IP:

78.110.165.202

Lucee Security Updates (Resolved) High

Affecting Server - Greyhound

  • 24/09/2020 22:00
  • Last Updated 24/09/2020 22:01

We have performed a number of security updates on our Beagle Lucee server, which required a number of reboots. All services are now running normally and we are monitoring them.

S09 Migration => Richmond (Resolved) Medium

Affecting Server - [S09] Linux cPanel ~ New Jersey US

  • 31/08/2020 10:00 - 01/09/2020 13:56
  • Last Updated 10/08/2020 10:46

We have scheduled a migration of the S09 server to our new pure-SSD servers hosted in our new provider's data centre. Please see below the details of the planned migration:

Migration Scheduled:
London, UK Time: 31/08/2020 10:00 AM
Eastern, US Time: 31/08/2020 5:00 AM

If you use A:Records to point your domain name to our servers you will need to update them to the IP: 51.81.109.178

If you use our DNS/nameservers you will not need to do anything.

All your data will be migrated by our team on the day, if you have any questions please speak with one of our sales team who will be able to provide more information about the process.

Planned reboot for updates (Resolved) Low
  • 23/07/2020 22:00 - 24/07/2020 08:16
  • Last Updated 23/07/2020 14:17

We will be performing a quick reboot of the Eden server to apply updates and general improvements to this service. Downtime is expected to be minimal as it will be a standard reboot.

Coventry, UK Data Centre Power Issues (Resolved) Critical

Affecting System - Power Supply

  • 18/07/2020 15:00 - 22/07/2020 15:45
  • Last Updated 19/07/2020 09:35

At 3pm on the 18th of July, the Coventry, UK data centre saw power issues which caused services to fail. If you are having any issues with your service, please contact a member of the team. Our team will be monitoring and running checkups on all our services to ensure they are running OK.

Below is the status history provided by the data centre.

Monitoring - Power to all racks is now restored and should be stable, electricians have now left site and service to all customers should now be restored. If you are still having issues please update or submit a new support ticket.
Jul 18 20:32 UTC
Update - Whilst power has been restored to racks that went offline in DH1, we are still running on generator power at this site.

Both electrical engineers and western power are onsite.

Whilst we cannot guarantee further issues tonight at this site, we will endeavour to keep you updated.
Jul 18 19:21 UTC
Update - We have just had another issue which has caused a power outage to some racks in Data Hall 1 in Coventry.

We do have electricians and Western Power on site.
Jul 18 18:44 UTC
Update - 17:09 - 18/7/20 - Service should now be restored to most customers at our Coventry site; however, we still have a critical issue at this facility. If you are still having issues with your service then please update your support ticket.
Jul 18 16:10 UTC
Investigating - We are currently investigating a major power issue at our Coventry datacentre site.

Further updates will follow on this status page.

Scheduled Migration (Resolved) Medium

Affecting Server - Churchill

  • 28/06/2020 23:00 - 29/06/2020 08:44
  • Last Updated 29/06/2020 12:26

UPDATE: Migration completed and if you have any issues or require any support please contact our support team. Thank you. 


We will be migrating all cPanel accounts on the Churchill server to our brand new servers on the 28th of June at 23:00 (BST), to be ready for the Monday morning. The new server label is: Blackwell (named after the British physician, Elizabeth Blackwell) and it has the IP: 5.101.173.45

The new server is our latest range of hardware and is powered by a Dell 40 core enterprise server. We are rolling out more of these powerful servers at our UK location.

If you use our DNS/nameservers you will not need to change anything. If you use A:Records (CloudFlare for example) please make sure to update your IP address to use: 5.101.173.45

If you would prefer to be migrated sooner please just contact a member of the sales team who will book this in for you.

S21 cPanel Server Migration (Resolved) Medium

Affecting Server - [S21] Linux cPanel ~ Singapore

  • 16/06/2020 18:00
  • Last Updated 24/06/2020 16:44

Migration has been completed and we have turned off the HTTP services on S21. If you face any issues please contact our support team. Thank you for your patience during this migration.


We are migrating all cPanel accounts from our S21 server to our new range of SSD-based servers hosted in Singapore. Once the migration has been completed all customers will be notified. Due to the issues found on S21, the migration will be slower than expected, but downtime will be minimal as services are still running on S21.

New server IP: 139.99.122.95

Reboot to apply updates (Resolved) Low

Affecting Server - Turing

  • 14/05/2020 19:05 - 14/05/2020 19:07
  • Last Updated 14/05/2020 19:09

Update: Reboot complete, total downtime 2min.


We are performing a full reboot of the Turing server to apply the latest Kernel and cPanel updates. Downtime <5min.

Reboot to apply updates (Resolved) Low

Affecting Server - Blackwell

  • 14/05/2020 18:55 - 14/05/2020 18:58
  • Last Updated 14/05/2020 18:59

Update: Reboot complete, total downtime 3min.


We are performing a full reboot of the Blackwell server to apply the latest Kernel and cPanel updates. Downtime <5min.

Hawking => Turing (Resolved) Low

Affecting Server - Hawking

  • 07/05/2020 06:00 - 10/05/2020 12:37
  • Last Updated 30/04/2020 14:06

On the 7th of May at 6am UK time we will be migrating all accounts from the Hawking server to our new ColdFusion 11 server named Turing.

If you use A:Records you will need to change them to point to the IP: 5.101.142.85. If you use our nameservers then nothing will need to change.

One action will need to be taken on your side: the recreation of your ColdFusion data sources in the CFManager in cPanel. Currently, our migration tools do not allow data source names to be transferred, but our developers are working on this to make future migrations easier.

You will find our new servers are much higher in specifications and also have features such as Fusion Reactor protection included to provide the best stability possible.

S08 Migration => Blackwell (Resolved) Low
  • 11/05/2020 06:00 - 14/05/2020 11:44
  • Last Updated 24/04/2020 14:49

On Monday the 11th of May at 6am, UK, London time we will be migrating all accounts from the legacy S08 WordPress server in London to our new nodes in Coventry. The new servers provide greater power, speed and features which are in line with our WordPress feature matrix: https://www.hostmedia.co.uk/wordpress-hosting/feature-matrix/

If you use A:Records to point your domain name to our servers you will need to update them to the IP: 5.101.173.45

If you use our DNS/nameservers you will not need to do anything.

All your data will be migrated by our team on the day, if you have any questions please speak with one of our sales team who will be able to provide more information about the process.

London Cluster Reboot (Resolved) Medium
  • 27/02/2020 10:45
  • Last Updated 27/02/2020 11:13

UPDATE: Reboot was successful and all operations are normal.


We need to run a reboot of the London server cluster named 'Darwin', which operates a number of instances, to correct a disk issue that has appeared. The downtime should be minimal.

Thank you for your understanding.

Server Migration (S01->Brunel) (Resolved) Medium

Affecting Server - [S01] Linux cPanel ~ Coventry UK

  • 08/02/2020 02:00 - 10/02/2020 15:32
  • Last Updated 10/02/2020 15:34

UPDATE: Migration completed. Thank you for your patience.


We will be migrating all accounts from the server listed as S01 to a new server code named: Brunel

Scheduled Date/Time: 08/02/2020 02:00 (Timezone: London, UK)

If you use A:Records to point your domain to our servers you will need to update them to point to: 5.101.142.88

Unexpected downtime (Resolved) High

Affecting Server - Churchill

  • 28/01/2020 07:01 - 30/01/2020 12:42
  • Last Updated 30/01/2020 12:42

UPDATE 3: All services have been running without incident for over 24 hours now. This issue will now be closed.


UPDATE 2: We have tweaked a number of settings and reapplied the LiteSpeed web server. We will monitor to ensure all services continue to run as expected. Thank you,


UPDATE 1: We believe the problem is related to the LiteSpeed web server. We are currently running all systems on the standard Apache web server, which the server can easily handle, while we investigate further. All sites have been running normally for the past 3.45 hours. You can find further details here: https://status.hostmedia.co.uk/784190645


We are investigating issues reported by our monitoring systems on the instance Churchill which is going up and down. We will update further as soon as possible.

Unexpected network failures (Resolved) High

Affecting System - Coventry DC Networks

  • 13/01/2020 12:47
  • Last Updated 13/01/2020 12:52

UPDATE: The network appears to be back to normal and we are awaiting further details from the upstream provider whose issue caused some customers to drop connections.


We are currently looking into an issue with one of our upstream providers that could be affecting some routing.

S02 Migration (Resolved) Medium
  • 05/01/2020 21:00 - 06/01/2020 10:33
  • Last Updated 06/01/2020 10:33

UPDATE: Migration has been completed successfully.


We will be migrating the accounts from the server listed as S02 to our new servers in our Coventry data centre.

If you use A:Records please make sure to update them between the listed times to use this IP: 5.101.142.88

ColdFusion Service Interruptions (Resolved) Medium

Affecting Server - Hawking

  • 18/12/2019 10:50
  • Last Updated 18/12/2019 14:29

UPDATE 1: After making some JVM changes the issue appears to have become stable but we will continue to closely monitor the service over the next couple of days.


We are investigating an issue with the ColdFusion services on our Hawking instance (S04) that is causing the service to suddenly stop.

Server Migration (S14->Brunel) (Resolved) Medium

Affecting Server - [S14] Linux cPanel ~ London UK

  • 08/12/2019 20:00
  • Last Updated 09/12/2019 08:44

UPDATE: The migration completed without any issues and all accounts are now on the node Brunel. Please make sure you have updated your A:Records if you use them. We will be shutting down the old server shortly.


We will be migrating all accounts from the server listed as S14 to a new server code named: Brunel

Scheduled Date/Time: 08/12/2019 20:00 (Timezone: London, UK)

If you use A:Records to point your domain to our servers you will need to update them to point to: 5.101.142.88

Network Outages (Resolved) Medium

Affecting System - DH1 Coventry Issue – 16th Nov

  • 17/11/2019 10:57
  • Last Updated 17/11/2019 11:01

16/11/2019 – 21:30 – We are currently experiencing an issue with services at our Coventry site, further updates will follow shortly.

21:45 – Our onsite engineers have found BGP sessions to be flapping between our core routers in Coventry and London, further updates will follow shortly.

22:01 – Our onsite engineers have identified an attack against our core routing infrastructure at this site and are working to mitigate this.

22:31 – Our engineers have been unable to mitigate the attack against our routing infrastructure and we are still working on the issue. Service has been restored for some customers however the network is currently still unstable.

23:52 – Our engineers are going to bring forward the replacement of our routing equipment at our Coventry site which was scheduled for later this month under a planned maintenance window as we believe the new equipment should be better placed to deal with the attack. We hope to have service fully restored to all customers by 04:00 at the latest.

17/11/2019 – 01:22 – The new routing equipment has been racked and the configuration being loaded onto this, customers should expect further service disruption in the next thirty minutes when we move customers to the new routing equipment.

02:32 – Service should now be restored to the majority of customers at our Coventry site and the new routing equipment is successfully mitigating the attack on our equipment.

04:25 – The remaining customers should now be back online at our Coventry datacentre and customers are requested to open a support ticket if their service remains offline.

Lucee Stability Issues (Resolved) High

Affecting System - Lucee UK Servers

  • 12/11/2019 09:00 - 15/11/2019 09:40
  • Last Updated 15/11/2019 09:40

UPDATE: Since our changes, the Lucee service appears to be back to normal stability. We will continue to monitor the service closely. Thank you for your patience.

We will be performing some adjustments to our UK Lucee servers to correct a number of reported issues around the stability of the Lucee service.

Network Outage (Resolved) Critical

Affecting System - UK Coventry Data Centre Network Issues

  • 30/10/2019 10:03 - 30/10/2019 11:09
  • Last Updated 30/10/2019 12:20

Network issues were reported at our Coventry, UK based data centre which the data centre team worked on to resolve. We are awaiting a full report from them to update our customers with.

We are sorry for the downtime seen by our customers and we will be working with the data centre to see what actions can be put in place to prevent this from happening again.

Node: Unexpected server failure after reboot (Resolved) Critical

Affecting System - Node: Nelson, DC: Coventry UK

  • 13/09/2019 08:54 - 17/09/2019 16:53
  • Last Updated 17/09/2019 16:53

UPDATE 11: VMs have been restored, if you face any issues please open or update your support tickets so our team can investigate. Thank you so much to all our affected customers for their patience and understanding.


UPDATE 10: The restores are processing well, due to the amount of data this can take some time but we are working on this as quickly as we can.


UPDATE 9: We have been able to get a XEN server online and now starting to restore accounts.


UPDATE 8: We are continuing to work on the issue and hope to have our new XEN server online shortly. There was an issue within the XEN setup that caused our tests to fail.


UPDATE 7: Final tests on our 3rd server setup are almost complete.


UPDATE 6: Due to some kernel issues we have booted a 3rd server up as an alternative which is on a different network and will require new IPs to be allocated. We will update clients once we have more details.


UPDATE 5: We are continuing our setup of our alternative XEN Server. We will post our next update as soon as possible.


UPDATE 4: We have our alternative XEN Server partitioned and the final setup stage processing now. Once done restores of data will begin.


UPDATE 3: Due to XFS corruption beyond repair, we will be restoring backups on a secondary node as soon as possible to get all customers' services back online.


UPDATE 2: We are continuing to run the XFS repair on the server; it is taking a little longer than expected and we have our DC remote hands checking this.


UPDATE: We are running a full XFS repair on the drives, as something appears to have become corrupted on the disk, causing the server not to boot properly into the OS.


We are currently investigating an issue with one of our new nodes at our Coventry DC (Node: Nelson). We are working on this with the highest priority.

Power issues at UK data centre (Resolved) Critical

Affecting System - Coventry DC

  • 04/06/2019 07:40 - 04/06/2019 12:59
  • Last Updated 02/09/2019 14:37

FINAL UPDATE: The issue has been fully resolved.

Issue Details:
The initial issue was due to the power circuit being tripped out, the DC team worked to move our racks to the backup circuits to ensure power was restored quickly to the affected servers. After 15 minutes the main power supply was routed back to our racks.
We started to check and bring back online all servers that were offline. While doing this we found that the node Churchill didn't respond to our main controller's commands. After investigating, it was found to be loading from the flash memory on the server instead of the main controller of the hard drives. We reconfigured the BIOS and restarted the machine, which brought back the node, and once tested we brought the instances back online.
We will be performing an update on the BIOS to ensure the correct hard drive controller is loaded in case of any future failures in power. This update will be happening at 9PM UK, London time today (4th of June) and a network status item will be available for reference.


UPDATE 3: After resolving a linking issue to our racks and correcting a possible long-term issue, our team are focusing on resolving the issue with our Churchill node.


UPDATE 2: DC engineers are continuing to work on the issue with our racks as further issues were found. We hope to have this resolved shortly.


UPDATE 1: All servers apart from the Churchill node have come back online. We are working on the issue.


We are currently resolving an issue with our racks at the Coventry DC. Further updates to come.

Rack Power Replacement (Resolved) Medium

Affecting System - Coventry Data Centre

  • 30/08/2019 04:30 - 30/08/2019 08:45
  • Last Updated 29/08/2019 12:37

During a routine review by our electrician, we have identified a fault with the power distribution that supplies our racks at the Coventry data centre. There is a core distribution unit which needs to be replaced to ensure a stable service. This will require all power to our racks being removed for about 60 seconds whilst the fault is fixed.

Migration (Resolved) Low

Affecting Server - [S02] Linux cPanel/WHM ~ US1

  • 05/08/2019 04:00
  • Last Updated 05/08/2019 12:50

UPDATE: The migration went well and all accounts are now on the new server. We are now backing up all accounts on the old server before shutting it down.


We will be migrating customers from S10 server to newer servers. All affected customers will be updated via email, any customers using the Global Reseller Panel will have the details updated in their reseller control panel. Downtime will be minimal as the migration will be handled by the cPanel transfer.

New server IP: 81.92.218.156

US Xen Services (Resolved) High

Affecting System - Virtualisation

  • 26/07/2019 10:19
  • Last Updated 26/07/2019 12:44

UPDATE 1: We have resolved the issues and all services are back to normal status. Thank you for your patience.


We are investigating an issue with our US based Xen servers which dropped network services. We are working to resolve this as soon as possible.

Archer Node (Resolved) High

Affecting System - RAM Fault

  • 23/07/2019 08:05
  • Last Updated 23/07/2019 13:15

UPDATE: All systems came back online shortly after the initial status update. If you find you are having any issues please do contact a member of the support team.


We detected a memory fault due to a faulty RAM card. This is being replaced now and services should be back online shortly.

BIOS Updates (Resolved) Low

Affecting Server - Churchill

  • 04/06/2019 21:00 - 05/06/2019 07:59
  • Last Updated 04/06/2019 13:00

We will be updating the server's BIOS to avoid boot-up issues loading the incorrect OS after unexpected downtime/shutdowns. Downtime will be less than 5min as only a reboot is required to apply the changes.

Memory and disk upgrades (Resolved) High

Affecting Server - [S03] Linux cPanel ~ Lucee ~ London UK

  • 24/04/2019 17:37
  • Last Updated 24/04/2019 17:41

UPDATE1: Services are back online and running normally. Thank you for your patience.


We are running updates on the disk and memory services of our S03 Lucee server. A reboot is processing now to apply these updates. We hope to have services back online within the next 5min. Sorry for the downtime caused.

Unexpected Lucee Service Disruption - High Load (Resolved) Medium

Affecting Server - [S03] Linux cPanel ~ Lucee ~ London UK

  • 15/04/2019 17:24
  • Last Updated 22/04/2019 14:19

Since our adjustments to the Lucee JVM all services appear stable. We will carry on monitoring the server closely and if any further issues occur we will open a new server status.


We have been monitoring the Lucee services and they have been stable during the night. We are continuing to monitor any and all load spikes to resolve any issues. We will update this status further when we know more.


During off-peak hours (UK night time) we are seeing high Lucee load on the server, which appears to be causing the Lucee CFML services to stall. We are monitoring and working on finding a fix.


We are investigating a high load on our S03 server which appears to have been the cause of the server requiring a forced reboot.

Reboot to apply updates (Resolved) Medium

Affecting Server - Churchill

  • 09/04/2019 21:00 - 09/04/2019 21:10
  • Last Updated 09/04/2019 21:02

Reboot complete and updates applied. Downtime less than 1 minute.


We will be running an update and reboot of the Churchill instance to apply the latest updates. Downtime will be less than 10 minutes.

Intermittent Disk Issues (Resolved) High

Affecting Server - [S11] Linux cPanel ~ Lucee ~ London UK

  • 18/02/2019 11:04
  • Last Updated 19/03/2019 10:20

We have been dealing with disk issues within the core of the S11 instance. If you are seeing issues, please open a support ticket and request a migration to our S03 Lucee server, which is on our new platform. Please note S03 uses dedicated remote SQL servers, so in your Lucee data sources or connection scripts please make sure to use 'remotesql' instead of 'localhost' in your settings.
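As a rough illustration of the host change described above, the sketch below rewrites the host portion of a JDBC-style connection URL from 'localhost' to 'remotesql'. The URL shape and helper name are hypothetical examples, not taken from an actual S03 configuration; in practice you would edit the host field of your Lucee data source or connection script directly:

```python
def rewrite_datasource_host(jdbc_url, old_host="localhost", new_host="remotesql"):
    """Swap the host in a JDBC-style URL for the S03 remote-SQL layout.
    Only the first occurrence is replaced, leaving query parameters intact."""
    return jdbc_url.replace(f"//{old_host}:", f"//{new_host}:", 1)

url = "jdbc:mysql://localhost:3306/mydb?useUnicode=true"
print(rewrite_datasource_host(url))
# jdbc:mysql://remotesql:3306/mydb?useUnicode=true
```

A URL that already points at 'remotesql' passes through unchanged, so the helper is safe to run over settings that were previously updated.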

Scheduled Account Migration (Resolved) Low

Affecting Server - [S24] Linux cPanel ~ Lucee ~ London UK

  • 29/03/2019 22:00 - 09/04/2019 20:27
  • Last Updated 19/03/2019 10:20

We will be migrating accounts from our S24 server to our latest Lucee S03 server. Downtime will be minimal as we will be performing a direct transfer of accounts.
New server IP: 185.42.223.91

Reboot for applied upgrades (Resolved) Low

Affecting Server - [S14] Linux cPanel ~ London UK

  • 27/02/2019 11:56 - 27/02/2019 13:04
  • Last Updated 27/02/2019 11:57

Minor updates and a quick reboot of S14 to ensure stability of latest updates.

Server upgrades (Resolved) High

Affecting Server - [S14] Linux cPanel ~ London UK

  • 19/02/2019 01:00 - 19/02/2019 08:00
  • Last Updated 20/02/2019 13:03

UPDATE 2: We can confirm all services are running normally and now have CloudLinux running for better general performance and stability.


UPDATE 1: A fault with drive mappings was found causing unexpected downtime on the server and this is being fixed.


Upgrades being applied: CloudLinux & kernel updates.

Downtime: We will try and keep downtime to a minimum but downtime will be intermittent over a few hours.

Intermittent Slowness (Resolved) Low

Affecting Server - [S02] Linux cPanel/WHM ~ US1

  • 13/02/2019 10:26
  • Last Updated 18/02/2019 11:08

UPDATE 1: Our new servers are going through tests now, we will be migrating customers to the new server in batches. We will contact those customers directly throughout the week. If you are still facing issues please open a ticket to sales to request a migration sooner. Currently S02 services are running normally.


We are monitoring our S02 server due to intermittent slowness that has been detected. We already have plans in place for a migration of this server to one of our new servers being racked this week. We will continue to monitor and resolve any reported issues.

Archer Node Maintenance (Resolved) High

Affecting System - SolusVM

  • 17/01/2019 22:28 - 29/01/2019 10:32
  • Last Updated 24/01/2019 12:11

UPDATE 9: S11 - Our team has recovered as much data as possible from our backups and the faulty S11 server. If you have backups of your SQL databases available, please send them over to our tech team via a support ticket and we will get these uploaded straight away with the highest priority. We are also ensuring other servers are not affected by the same backup faults and issues that caused S11 to fail. As we always recommend, please ensure you keep local backups in case of failures such as this. We will be investing heavily in new backup solutions on all shared services in the coming months to prevent such issues from happening again.


UPDATE 8: S11 - Our team continues to bring up the remaining websites, with most now back online. If you continue to have issues and haven't opened a ticket, we highly recommend creating one in case the issue on your site is isolated.


UPDATE 7: S11 - To help speed up restores of accounts: if you have local copies of backups, please do send them over to the main support team and they can get your services back online quicker.


UPDATE 6: S11 - File restores have processed but SQL databases failed to restore correctly. We are looking at alternative restore options now.


UPDATE 5: S11 - As we continue to bring more accounts back online, if you use A:Records instead of our name servers we strongly recommend changing your domain's DNS to our name servers. This way, when we sync your domain to the new server IP addresses, your domain will already be configured. Our global DNS network name servers are:

dns1.dnshostnetwork.com
dns2.dnshostnetwork.com
dns3.dnshostnetwork.com


UPDATE 4: S11 - Restores of accounts are proceeding from data located on the server and on our remote backup servers. We are having to process these backups manually, one by one. We will provide further updates as they come through.


UPDATE 3: S11 - We are attempting to restore available backups and overlay them with the latest data from the damaged server. Other systems are being worked on by our 3rd party software support teams to resolve the issues as soon as possible.


UPDATE 2: All services on our Archer node are back online apart from one shared service, S11. We are working on this issue as our top priority; we now have access to the data and are migrating it to a newer server to get services back online as quickly as possible for everyone.


UPDATE 1: During the maintenance a number of our instances became unavailable; our team are working on this issue now with our 3rd party suppliers.


We are currently running checks and general maintenance on our Archer node; this includes the XEN services and SolusVM integration. You may see some services slow down, but this will be kept to a minimum.

Sydney Disruption (Resolved) Critical
  • 28/09/2018 14:38
  • Last Updated 28/09/2018 15:19

UPDATE 1: Services are back online and we are investigating the cause of the network issue.


We are investigating an issue with our S13 server at the Sydney data centre that has caused the S13 server to fail.

Darwin Node Server (Resolved) Medium

Affecting System - XenServer

  • 22/09/2018 10:51
  • Last Updated 22/09/2018 11:22

UPDATE 2: All systems are OK and running normally. Thank you for your patience while we upgrade our services.


UPDATE 1: Services are coming back online and VM instances are running. Downtime averaged 5min to complete the updates and bring instances back online. We will update this ticket once we have completed our checks.


We are running a reboot of the XenServer node Darwin to apply updates and to correct integration issues with Virtualizor. Thank you for your patience.

Migration and update of S17 (Resolved) Medium
  • 28/06/2018 22:00 - 03/07/2018 16:31
  • Last Updated 29/06/2018 09:43

UPDATE 1: We are still migrating accounts to our new server. Due to the number of sites and data it is taking longer than expected. We hope to have further updates soon.


On Thursday the 28th of June at 10PM we will be migrating all accounts from S17 to S14 which is based on much newer systems. If you use A:Records to point your domain to our servers please update the IP to: 54.36.162.146

Thank you for your understanding while we process this migration.

US ColdFusion 10 Server Migration (Resolved) High

Affecting Server - [S02] Linux cPanel/WHM ~ US1

  • 25/06/2018 00:00 - 25/06/2018 09:53
  • Last Updated 25/06/2018 15:11

UPDATE: The migration was completed but during some migrations and setups of ColdFusion DSNs a few already created DSNs were affected and locked out. If you have any issues with your DSNs please contact support who will recreate them for you. Thank you for your understanding.


On the 25th of June we will be migrating all US S12 ColdFusion 10 customers to our UK CF10 servers. As per our other US-based CFML services, we are moving all accounts to our UK-based data centres. This move will also help with future plans for new ColdFusion services (pending final decisions from management). If you are using A:Records to point to our servers, please make sure to update your domain's IP to point to: 185.145.202.175

Once the migration has been completed you will need to set up your ColdFusion DSNs via the CFManager, or by opening a support ticket if you prefer us to handle this for you - please note we will need the database details so we can set them up. Our transfer systems currently do not allow for migration of CF DSNs.

Thank you for your understanding.

Migration and update of S23 (Resolved) Medium

Affecting Server - [S02] Linux cPanel/WHM ~ US1

  • 22/06/2018 12:04 - 24/06/2018 16:54
  • Last Updated 24/06/2018 16:54

On Thursday the 22nd of June at 10PM we will be migrating all accounts from S23 to S01, which is based on much newer systems. If you use A:Records to point your domain to our servers, please update the IP to: 78.157.200.45

Thank you for your understanding while we process this migration.

US Lucee Server Migration (Resolved) High
  • 27/06/2018 00:00 - 29/06/2018 10:31
  • Last Updated 09/06/2018 11:14

On the 27th of June at midnight (UK Time) we will be migrating all US Lucee accounts to our UK data centre. Our CFML services have been moving to our UK data centres over the past years, and now the final US-based Lucee server will be moved. If you are using A:Records on your domain, please make sure to change them to the new server's IP: 78.110.165.199

Thank you for your understanding. If you have any questions please do contact a member of the team.

Server Migration (Resolved) High

Affecting Server - [S02] Linux cPanel/WHM ~ US1

  • 25/05/2018 05:00 - 25/05/2018 10:02
  • Last Updated 25/05/2018 10:02

UPDATE: The migration has been completed and all customers' details are confirmed as updated in their client portal. Any problems or questions, please do contact a member of the team. Also please remember to update your A:Records if you do not use our nameservers. New server IP: 108.61.13.243


To resolve a number of performance issues, we will be migrating our last cPanel shared hosting servers in Alexandria, USA to our new US location in New Jersey. If you are using A:Records instead of DNS/nameservers you will need to update the IP to: 108.61.13.243

We are sorry for the short notice on this migration and we hope to have it complete as quickly as possible.

Server Reboot - Updates (Resolved) Low

Affecting Server - [S09] Linux cPanel ~ New Jersey US

  • 20/05/2018 12:00 - 20/05/2018 12:10
  • Last Updated 20/05/2018 12:10

UPDATE: Updates have been applied and total downtime was less than 2min. Thank you. 


We are applying the latest cPanel updates to server S09 which requires a reboot. Downtime should be less than 5min. Thank you.

Detected Lucee CPU Steal (Resolved) Medium

Affecting Server - [S11] Linux cPanel ~ Lucee ~ London UK

  • 09/05/2018 11:24
  • Last Updated 09/05/2018 17:20

We have corrected the issue, which was due to another instance on the node consuming CPU resources and slowing the server down.


We are investigating an issue with a slowness in the Lucee service on S11. The root of the issue appears to be a drain on the CPU resources (known as a CPU Steal).

Sydney Server Upgrades (Resolved) Medium
  • 03/05/2018 19:00 - 05/05/2018 10:05
  • Last Updated 25/04/2018 14:31

We will be migrating our final Sydney servers to our new servers on the 3rd of May at 7PM UK, London time.

New server IP: 139.99.163.84

Hong Kong Migration to Singapore (Resolved) Medium

Affecting Server - [S21] Linux cPanel ~ Singapore

  • 04/05/2018 19:00 - 05/05/2018 10:05
  • Last Updated 25/04/2018 14:30

We will be migrating our final Hong Kong servers to our new Singapore servers on the 4th of May at 7PM UK, London time.

New server IP: 139.99.17.25

Amsterdam to Germany Migrations (Resolved) Medium

Affecting System - Server Migration

  • 27/04/2018 20:00 - 30/04/2018 23:07
  • Last Updated 10/04/2018 14:04

On the 27th of April we will be finishing the final move from our Amsterdam servers to our latest German-based servers. This is in line with our aim to focus our offering on the best possible locations for speed and data centre support.

Below are the final two servers S04 and S18 that will be migrated to server S05 with the IP: 144.76.231.221

S04: 185.181.8.171 => 144.76.231.221
S18: 176.56.239.221 => 144.76.231.221

All data, including emails, website files and databases, will automatically be migrated and there is nothing for you to do unless you are using A:Records on your domains. Please see below for details on IP/DNS changes.

Using A:Records?

If you are using A:Records to point your domain to a server, you will need to update this to point to the new server's IP: 144.76.231.221

Using DNS/Nameservers Records?

You will not need to do anything as we will take care of the DNS change on our side.

Thank you for your understanding and we hope you enjoy the new services at our Germany location.

Dallas 1 + London 2 - Node Migration (Resolved) High

Affecting System - Dallas & London OnApp Cloud Network

  • 30/03/2018 12:12 - 06/04/2018 16:39
  • Last Updated 03/04/2018 13:53

New server IPs, cPanel links and statuses are listed below:

S12 (Completed)
69.168.236.13 => 74.84.148.22 (cPanel Login)

S15 (Completed)
69.168.236.96 => 74.84.148.21 (cPanel Login)

S16 (Completed)
69.168.236.49 => 74.84.148.23 (cPanel Login)

S23 (Completed)
69.168.235.191 => 185.42.223.86 (cPanel Login)

Please make sure to update your A:Records to point to the updated server IPs. If you use our DNS servers then no IP change will be required.

There are no changes to your cPanel or WHM logins. You can use the links above to access cPanel directly.


Our Dallas 1 and London 2 cloud nodes, which are backed by OnApp, require an emergency migration. This is being handled by the OnApp team and data will be migrated by their engineers. All Dallas and London based customers may see a small amount of downtime while the migration occurs and a new IP will be assigned. Please update your DNS to point to our servers; if you are required to or prefer to use A:Records, we will be providing details of the new IPs once the cloud migration has been completed.

We are sorry for the lateness of this notification; we have been working to try to avoid such a migration on the older platforms until we were ready to move to the new systems in New Jersey. Please keep an eye on this page for the latest updates.

Thank you for your understanding. 

XenServer Updates (Resolved) Medium

Affecting System - Darwin XenServer Node

  • 01/03/2018 23:55 - 06/03/2018 16:08
  • Last Updated 01/03/2018 12:33

We will be performing updates on our Darwin node at midnight tonight to ensure the latest security patches are applied. Downtime of the instances on the node will be minimal as only a standard reboot is required. We expect no more than 15min of downtime. Thank you for your understanding.

US SQL Database Service Relocation (Resolved) Medium
  • 01/03/2018 00:00 - 28/02/2018 11:30
  • Last Updated 28/02/2018 11:29

The migration from the US server to the UK server has been completed successfully. New IP: 104.238.186.101

Thank you

Reboot - Disk Scaling + Critical Updates (Resolved) High

Affecting Server - [S03] Linux cPanel ~ Lucee ~ London UK

  • 22/02/2018 17:13 - 23/02/2018 10:11
  • Last Updated 23/02/2018 10:11

UPDATE 1: We have monitored the changes over the night and all services are running normally. Thank you for your patience during this update.


We were required to run a quick reboot of S03. We are sorry for the unexpected reboot, but critical disk updates were required to ensure services continue to run. We hope to have services running normally shortly.

Disk Configuration Update (Resolved) High

Affecting Server - [S02] Linux cPanel/WHM ~ US1

  • 19/02/2018 21:00 - 20/02/2018 05:00
  • Last Updated 20/02/2018 19:18

UPDATE 4: All systems appear to be running smoothly and we will be continuing to monitor the server closely. Thank you for your patience.


UPDATE 3: We have completed the migration to our new NVMe storage drives to ensure there are no future disk I/O issues for this instance. We are running final checks and updates to make sure all services run smoothly. Thank you again for your patience with our migration. We hope you find the new storage faster and that it helps speed up your website.

UPDATE 2: We will be running a cPanel migration tonight (19th) at 9 PM to our newer upgraded drives. No changes will be needed on any domains as the IP and DNS will remain the same. We will aim to keep downtime to a minimum.

UPDATE 1: Due to some configuration issues we have rescheduled the update for Saturday 17th at 9 PM.

We will be performing a disk configuration update on the server S02. We will need to shut down services while this occurs. We are sorry for the short notice but this will ensure all services run smoothly. Thank you for your understanding.

Service Disruption (Resolved) Critical

Affecting System - London DC 1

  • 19/02/2018 10:15
  • Last Updated 19/02/2018 16:32

UPDATE 2: The issue has been resolved and services are now coming back online. The OnApp engineers confirmed the root cause in their systems and applied fixes. We are monitoring the servers while services start to come back online.


UPDATE 1: The issue has been isolated within the OnApp platform; we have support from OnApp working on this issue now.

We are investigating a disruption at our London 1 data centre. We hope to have services back online soon.

Data Centre Node Issues (Resolved) Critical

Affecting Server - [S21] Linux cPanel ~ Singapore

  • 10/02/2018 09:55 - 10/02/2018 10:00
  • Last Updated 10/02/2018 10:00

UPDATE: Server is running normally now. Planned updates to our Asian based servers are in the works and news will be released in the coming weeks. Thank you and sorry for the inconvenience caused.


We are currently working with the data centre to resolve issues on our S21 server in Hong Kong. We will do our best to keep everyone updated while we investigate.

CPU Load Spikes (Resolved) Medium

Affecting Server - [S02] Linux cPanel/WHM ~ US1

  • 29/01/2018 09:31 - 06/02/2018 10:10
  • Last Updated 04/02/2018 12:20

UPDATE 8: We have run a reboot to apply some changes we have made to this server. We are monitoring its services. Thank you for your patience.


UPDATE 7: As all our servers come with CloudLinux we have adjusted the settings further to prevent I/O abuse. We are currently monitoring and, at present, the I/O remains under 10% busy, which is around normal levels (most of the time at 2%).

UPDATE 6: New spikes have been seen on the disk usage, we are checking this to find the root issue. Sorry for the inconvenience.

UPDATE 5: We have been monitoring server operations during the night and everything appears to be running normally. We will continue to closely monitor the node and ensure services continue to run smoothly. Thank you for your patience and understanding.

UPDATE 4: Disk load has reduced to normal levels and we are now investigating the root cause of the node using up all the disk. We hope to have further updates soon.

UPDATE 3: We are seeing disk usage issues which could be related to the high CPU issues on the server. We are working to resolve these as soon as possible.

UPDATE 2: We have found the cause and it appears to be a kernel issue; we will be running a reboot to apply updates at 11PM tonight UK time. Thank you for your patience.

UPDATE 1: Services are all back online and we are performing a full audit and check on the server. Once we have more details we will post updates as well as inform you on the actions taken.

We are investigating a CPU load issue on our CloudLinux S02 server.

Migration (Resolved) Medium

Affecting Server - [S22] Linux cPanel ~ Texas US

  • 17/02/2018 20:00 - 19/02/2018 12:05
  • Last Updated 03/02/2018 15:43

As our new US servers are online we will be migrating all US-based accounts to this new server. On the 17th of February we will be migrating all accounts on S22 to S09 server.

If you use our nameservers you will not need to make any changes. If you use A:Records you will need to update the IP to: 108.61.13.243

If you have any questions please contact a member of the team.

Patching for CPU Meltdown and Spectre vulnerabilities (Resolved) High

Affecting System - All Servers

  • 06/01/2018 04:00 - 16/01/2018 16:13
  • Last Updated 05/01/2018 11:24

We will be running updates on all servers on our network to correct the CPU Meltdown and Spectre vulnerabilities which have been in the news lately. Once we have patched the servers a reboot will be required, and downtime is expected to be less than 10min per server. We are working with our partners/suppliers to ensure all our servers' hardware is looked at. The patches are for the operating systems on our servers. If you have a dedicated server or private cloud we will be sending details on the issues soon. You can open a support ticket and one of our support team will be happy to help patch your server.

We recommend that everyone run YUM / Windows updates on your servers to ensure you are running the latest versions. Please feel free to contact a member of the team for more information.
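On Linux servers, the kernel reports whether these mitigations are in place under /sys/devices/system/cpu/vulnerabilities/. A minimal sketch of reading and interpreting those status lines (the classification labels are our own illustration, not official kernel wording):

```python
from pathlib import Path

VULN_DIR = Path("/sys/devices/system/cpu/vulnerabilities")

def classify(status: str) -> str:
    """Interpret a kernel vulnerability status line (e.g. from the meltdown file)."""
    if status.startswith("Not affected"):
        return "not affected"
    if status.startswith("Mitigation:"):
        return "patched"
    if status.startswith("Vulnerable"):
        return "vulnerable - kernel update needed"
    return "unknown"

# The directory only exists on kernels new enough to report these flaws.
if VULN_DIR.is_dir():
    for entry in sorted(VULN_DIR.iterdir()):
        print(f"{entry.name}: {classify(entry.read_text().strip())}")
```

Running this after a `yum update` and reboot is one quick way to confirm the patches took effect.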

Thank you.

Server Migration (Resolved) Medium
  • 20/12/2017 20:00 - 21/12/2017 15:40
  • Last Updated 21/12/2017 15:40

UPDATE: We are currently investigating the cause of an unexpected downtime during the migration. All services are running normally and the migration has been completed.


With the latest server improvements being rolled out across our infrastructure we will be migrating all accounts from server S02 to a new server (keeping the same server name S02). This will begin at 8PM on Wednesday the 20th of December (two weeks' time). We will be running a full cPanel migration, which means you will not need to do anything. If you are using our nameservers (DNS) there will be no change, but if you use A:Records then a change will be needed. The new server IP to point your domains to after the 20th will be: 185.145.200.53

Thank you and we hope you will enjoy the improved services.

Server Load (Resolved) High
  • 12/12/2017 13:57 - 13/12/2017 11:24
  • Last Updated 13/12/2017 11:24

UPDATE 3: We have services back online. We will need to run a quick reboot, as we have installed software to help prevent this issue over the next couple of weeks ahead of the planned migration.


UPDATE 2: We have a possible DDoS attack on the server. We are still trying to get the server back online and working hard to resolve the issue.

UPDATE 1: We are continuing to see an issue with the CPU on the server being overloaded every time the server is booted back up. We are checking this and hoping to resolve it asap.

We are investigating a server load issue on server S02. We hope to have this isolated and resolved shortly.

CPU Load Issues (Resolved) High

Affecting Server - [S02] Linux cPanel/WHM ~ US1

  • 06/12/2017 10:22 - 06/12/2017 11:41
  • Last Updated 06/12/2017 11:41

UPDATE 1: A small spike in load caused our monitoring systems to flag an issue which slowed HTTP services. We will be scheduling a migration of accounts from this server to one of our new server setups for improved CPU performance. All services are running normally.


We are investigating load issues on server S02.

Server Reboot (Resolved) High

Affecting Server - [S03] Linux cPanel ~ Lucee ~ London UK

  • 05/12/2017 13:26 - 05/12/2017 13:32
  • Last Updated 05/12/2017 13:31

Update 1: Reboot completed and all services are coming back online. Downtime <3min. Thank you for your patience.


We are currently running a server reboot of S03 due to detected memory issues which require an update. Thank you for your understanding.

Reboot for applied updates (Resolved) Medium
  • 18/11/2017 14:10 - 18/11/2017 14:47
  • Last Updated 18/11/2017 14:16

We needed to reboot the instance to apply updates to disk and CPU settings. Thank you for your understanding.

Loss of connections (Resolved) High

Affecting System - Archer Coventry Node

  • 07/10/2017 12:24
  • Last Updated 07/10/2017 16:37

UPDATE 1: All services are back to normal.


We are currently investigating an issue reported on our Archer node in the Coventry data centre. There has been some loss of connections to the server so a reboot was performed to see if this resolves the issue. More updates to come.

RAID Card Replacement (Resolved) High

Affecting System - UK Data Centre Cloud

  • 27/09/2017 20:16 - 28/09/2017 08:35
  • Last Updated 27/09/2017 21:56

UPDATE 2: We have completed the replacement and all services are back online. We are monitoring the server and will update this status page if there are any further updates.


UPDATE 1: Server S02 has been shut down while we carry out the work. We are sorry for the downtime and hope to have services back online shortly.


Please be advised we've noticed issues with Hypervisor 7 (HV7). Either the RAID cable or the RAID controller on this server node is faulty. This will require the node to be physically stopped to perform the repair. We will keep downtime to a minimum.

Updates to follow.

Coventry Network Issues (Resolved) Critical

Affecting System - Data Centre Network

  • 10/08/2017 09:17 - 10/08/2017 14:59
  • Last Updated 10/08/2017 14:59

UPDATE 2: All servers and services are back online, we are monitoring and checking the cause as no router/network configuration was changed.


UPDATE 1: We are continuing to work with the engineers at the data centre on the network issue to our routers. We hope to have service back online as soon as possible.

Engineers are investigating network issues to the Coventry data centre.

Data Centre Network Issue - London DC2 (Resolved) Critical
  • 08/08/2017 08:00 - 08/08/2017 10:31
  • Last Updated 08/08/2017 13:45

UPDATE 2: The engineers at the DC have corrected the IP issue - if you are using A:Records you will need to change to our DNS nameservers to get the best level of service.


UPDATE 1: We have found that an IP subnet has faults and we have replaced it with a new one - services will soon appear back online and we will investigate the cause of the failed IP subnet.

We are investigating a network outage at London DC2. Engineers at the data centre are working on the issue.

CPU Load (Resolved) Medium
  • 25/07/2017 10:25
  • Last Updated 26/07/2017 08:32

UPDATE 3: We have completed all upgrades and monitored the services during the night. All services appear to be running normally. Thank you for your patience.


UPDATE 2: We have scheduled a server reboot tonight during UK/London off-peak hours (10PM) to run some upgrades to help prevent CPU abuse further.

UPDATE 1: We have run a node reboot and adjusted the CloudLinux settings to help prevent CPU abuse. We are monitoring the server to see if any load spikes occur.

We are investigating a CPU load issue on server S17 which has caused services to stop.

Dallas DC Network Issues (Resolved) High

Affecting System - Network

  • 14/07/2017 23:07 - 15/07/2017 10:16
  • Last Updated 14/07/2017 23:07

We are investigating a network issue to our Dallas DC cloud servers. We are sorry for the unexpected downtime.

ColdFusion Update (Resolved) Low

Affecting Server - [S25] Linux cPanel ~ CF 11 Server ~ London UK

  • 14/07/2017 22:00 - 14/07/2017 23:07
  • Last Updated 14/07/2017 08:54

We will be running an update on the ColdFusion services at 10PM UK/London time. A service restart will be performed which will cause some CFML downtime during the process (<10min). Thank you for your understanding.

Unexpected downtime on Dallas DC Cloud (Resolved) High

Affecting System - Dallas DC - Network

  • 02/07/2017 07:59
  • Last Updated 03/07/2017 14:22

UPDATE 1: The issue was caused by one of the cluster's server nodes failing, which caused a chain reaction through the network. The server was replaced and services were brought back online. We are monitoring the server cluster closely to help prevent this from happening again.


We are investigating a failure on our Dallas Cloud network. Once we have a full report we will publish it within this status. All services are back online.

Server reboot to apply updates (Resolved) Low

Affecting Server - [S24] Linux cPanel ~ Lucee ~ London UK

  • 01/07/2017 00:00 - 03/07/2017 07:39
  • Last Updated 30/06/2017 10:47

To apply system changes/updates we will be performing a reboot of the S24 server at midnight as defined (UK/London time). Downtime should be less than 10min.

ColdFusion Service Reboot (Resolved) Low
  • 30/06/2017 04:00 - 30/06/2017 10:45
  • Last Updated 29/06/2017 19:06

We will be performing a ColdFusion service restart at 4AM UK/London time (10PM Dallas US Time).

ColdFusion Service Restart (Resolved) Low
  • 28/06/2017 05:00 - 28/06/2017 13:21
  • Last Updated 27/06/2017 13:15

We will be performing a ColdFusion service reboot tomorrow (28th) morning at 5AM UK/London time (27th at 11PM Dallas US time) to apply Java updates to the ColdFusion service.

Downtime expected to be less than 5min.

ColdFusion Java Update (Resolved) Critical
  • 26/06/2017 22:02 - 27/06/2017 11:45
  • Last Updated 26/06/2017 22:03

S12 ColdFusion services are being updated with the latest stable version of Java. Service restarts will occur during this update. Thank you for your patience.

S12 - Migration Scheduled (Resolved) Medium
  • 23/06/2017 23:00
  • Last Updated 24/06/2017 16:48

UPDATE 7: We have now completed the migration and updates. All services are back online and if you have already re-created your ColdFusion datasources you should see sites working as normal. If you have any questions please feel free to contact a member of the team. Thank you for your patience.


UPDATE 6: We are now applying the very latest ColdFusion updates. We have tried to import all ColdFusion datasources but unfortunately you may need to recreate the DSN via the CFManager. 


UPDATE 5: ColdFusion has been configured and we are now applying our Apache updates to ensure smooth running of CF applications on the server. You may see HTTP and CF services go down but you will still be able to log in to cPanel and access services there. Thank you for your patience.


UPDATE 4: We are having some issues with one of the connectors in ColdFusion and need to run a reinstall of CF to ensure it applies the correct configuration. We are very sorry for the inconvenience this has caused. Our tests showed everything was working OK but it appears an Apache update caused some issues. We are working to have all ColdFusion based sites back online asap. Please note only the ColdFusion service is affected and all other services are running normally.


UPDATE 3: We are in the final stages of having services back online and running normally. We are sorry for the extended period of time this is taking; on our new systems some configuration was needed to match the old server's setup, which didn't allow full functionality.


UPDATE 2: While we run the final configurations, Apache and ColdFusion services may not be working as expected or showing as 404 pages.


UPDATE 1: Files have completed their transfer and we are finishing the configuration of our CFManager to verify and sync DSNs.


On 23/JUNE/2017 at 23:00 London time (22:00 UTC) we will be migrating all accounts from server S12 to our new Cloud platform. ColdFusion 10 will be continued on this new server and all CF config will be transferred during this process.

We will need to shut down services at this time to allow the transfer to run as fast as possible, so downtime will be during off-peak hours. We hope to have all accounts migrated by Saturday morning. A new IP will be assigned (69.168.236.13) and if you are using A:Records you will need to update your domain's DNS to match. If you are using our DNS nameservers no change will be required.

If you have a dedicated IP we will provide a new IP, but our new platform uses IPv6 for additional IPs and IPv4 for main root IPs.

The new platform will allow us to provide a higher level of service to all customers.

If you have any questions or concerns please contact a member of the management team who will be happy to help.

Thank you for your understanding.

Lucee Update 5.1 => Latest 5.2 (Resolved) Medium

Affecting Server - [S24] Linux cPanel ~ Lucee ~ London UK

  • 18/06/2017 22:00 - 19/06/2017 11:34
  • Last Updated 06/06/2017 13:18

On Sunday the 18th of June we will be upgrading Lucee from 5.1 to the latest release and patch 5.2. Downtime will be restricted to Lucee services while we restart them and will be minimal.

Network Issue - US ColdFusion Servers (Resolved) Critical

Affecting System - US Data Center

  • 26/05/2017 16:30 - 27/05/2017 09:31
  • Last Updated 26/05/2017 16:52

Our US data center is currently investigating a network issue on our US ColdFusion servers (Washington, Walla Walla). We hope to have services back online asap.

Emergency migration (Resolved) High

Affecting Server - [S01] Linux cPanel ~ Coventry UK

  • 03/05/2017 19:11 - 08/05/2017 13:51
  • Last Updated 08/05/2017 13:51

UPDATE 3: The migration has been completed and if you have any questions or you are having any issues please do contact a member of the support team. Thank you for your patience and understanding during this migration.


UPDATE 2: We have completed nearly all of the migrations but some of the remaining accounts are larger than most accounts. If you have any questions please do contact a member of the support team via the help desk. Thank you for your patience.


UPDATE 1: The migration is still processing and we hope to have all accounts moved as soon as possible. If you have any concerns please contact a member of the team who will be happy to assist you.


We are migrating all accounts to a new instance due to some concerns about the stability and security of key features and services on the server. Once all data is migrated, the old IP will be allocated to the new instance.

Thank you for your patience.

S12 Mail Issues (Resolved) Critical
  • 25/04/2017 11:46 - 26/04/2017 07:16
  • Last Updated 26/04/2017 07:16

UPDATE 9: After a night of all services running normally we are closing this ticket. We are monitoring the server closely to see if any further issues occur. Thank you for your patience and understanding during this email outage.


UPDATE 8: We now have emails sending from the server and we are testing services to ensure both incoming and outgoing continue to complete.

UPDATE 7: The cPanel techs were able to restore a backup of our RPM packages and are now rebuilding the EXIM service. Even though this is good news we can't be sure this will correct the issue. We will keep you updated as soon as we know more.

UPDATE 6: Our team are working on suggestions by cPanel, but please be aware the cPanel tech team has suggested a migration to a new server may be required. We will be doing a migration to our newer hardware and to ColdFusion 11. If this is the case we will email all clients before the migration is scheduled.

UPDATE 5: A cPanel tech admin is now working on the server and we are hoping for a quick resolution.

UPDATE 4: We have escalated our ticket to cPanel as we are unable to find the exact issue in cPanel that has caused EXIM to fail. If you are hosted on the S12 server and would like to be migrated to a different server please contact sales, who will happily get this arranged for you. Thank you for your patience during this mail outage.

UPDATE 3: The reboot applied some updates but failed to bring the EXIM service back online and processing. We are continuing with our investigation.

UPDATE 2: We are required to run a reboot to ensure the packages for the mail systems are updated correctly. This will occur at 13:27 UK time.

UPDATE 1: After yesterday's reboot and CPU issues a pending cPanel update was ready to be deployed; the reboot appears to have installed this but may have also caused the EXIM packages to become corrupted. We are running a clean-up now and still awaiting an update from cPanel on this matter.

Mail sending/receiving on server S12 is currently interrupted due to an EXIM service failure. All systems appear fine but the service fails to start; we have contacted cPanel while looking into this issue. Sorry for the inconvenience caused.

Unexpected high server load (Resolved) High
  • 24/04/2017 15:00 - 25/04/2017 11:46
  • Last Updated 24/04/2017 16:39

UPDATE 2: Instance reboot complete and services are back online. We are now checking the server load and logs to find the root of the issue.


UPDATE 1: We are running a reboot now to apply some system changes.

We are working on server S12 which is having high load issues and ColdFusion services are having trouble staying active. We are working to resolve this as soon as possible.

Compute CPU Issue (Resolved) Critical

Affecting Server - [S18] Linux cPanel ~ Amsterdam NL

  • 18/04/2017 11:26 - 18/04/2017 13:44
  • Last Updated 18/04/2017 13:40

UPDATE 5: A quick reboot is processing for CPU changes on the node. Thank you for your patience.


UPDATE 4: We have resolved the issue. The cause was an attack on a large number of sites which drove up the server's CPU levels. We have implemented new security systems which will help prevent this from happening in the future. Thank you for your patience during this unexpected outage.

UPDATE 3: Services are back online and we are checking logs to find any root issues on this node. Thank you for your patience.

UPDATE 2: The issue has been confirmed as a CPU issue. We are checking why this could be the case and hope to have services back online asap.

UPDATE 1: The Cloud team are working on the issue as it appears a faulty kernel may have caused the issues.

Our server monitors have alerted us to an issue on server S18. We are investigating the cause and will provide updates asap.

London Network Issues (Resolved) High

Affecting System - London Cloud Data Centre

  • 14/04/2017 11:35
  • Last Updated 16/04/2017 06:35

UPDATE 2: The issue was related to a router at the data centre that fell out of sync with the network and crashed. Replacements and re-configurations have been made at the data centre to help prevent this from occurring again.


UPDATE 1 (11:39): Services are all back online as the DC engineers worked to resolve the connections straight away. We are gathering a report on what caused this.

(11:34) We are seeing some network issues at our London Data Centre which operates our Cloud services. We are checking this now.

DDOS Attack Detected (Resolved) High

Affecting System - Coventry Data Centre

  • 31/03/2017 00:44 - 31/03/2017 00:50
  • Last Updated 31/03/2017 12:05

A DDOS attack was detected at our Coventry DC at 00:44 AM UK/London time. It was quickly resolved with a total of 6min of interrupted network connections. All servers remained up during this time but some users may have seen some drop offs from their services.

CPU Load Issues (Resolved) High

Affecting Server - [S18] Linux cPanel ~ Amsterdam NL

  • 15/03/2017 16:53
  • Last Updated 16/03/2017 11:35

UPDATE 2: We have installed CloudLinux to ensure single accounts cannot overuse CPU cores on this instance. We have also increased the SWAP memory, as this was an item flagged during our investigation. Thank you for your patience.
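For anyone who wants to confirm the larger SWAP allocation from the server itself, the SwapTotal field in /proc/meminfo is the figure to check. A small sketch that parses that field (the sample text below is illustrative, not real output from this server):

```python
def swap_total_kb(meminfo_text: str) -> int:
    """Return SwapTotal in kB from /proc/meminfo-style text (0 if absent)."""
    for line in meminfo_text.splitlines():
        if line.startswith("SwapTotal:"):
            return int(line.split()[1])
    return 0

# On a live Linux server you would read the real file:
# print(swap_total_kb(open("/proc/meminfo").read()))
sample = "MemTotal:  8058768 kB\nSwapTotal: 2097148 kB\nSwapFree:  2097148 kB\n"
print(swap_total_kb(sample))  # -> 2097148
```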


UPDATE: Services have quickly come back online and we are looking into the root cause.

We are investigating a CPU load issue which is causing some services to fail. We hope to have a fix for this issue asap. We are also looking at applying CloudLinux to this instance. Thank you for your understanding.

CPU Load (Resolved) High

Affecting Server - [S18] Linux cPanel ~ Amsterdam NL

  • 10/03/2017 10:32
  • Last Updated 10/03/2017 10:53

UPDATE: Load has returned to normal and we are looking at CPU resource improvements. Thank you.


We are investigating a CPU load spike which has caused some services on the instance to fail. Thank you for your patience.

Texas Data Center Network Issues (Resolved) Critical

Affecting System - Texas DC

  • 23/02/2017 14:39
  • Last Updated 02/03/2017 20:08

REPORT: The Dallas cloud is operated by OnApp, and the data center managed the hardware. The first alert was for an issue with some VMs being down due to disk I/O reports. From the logs it looked like a dying RAID card. We had to go back and forth with the data center, as they didn't see any sign that the cause was a bad RAID card. Eventually the RAID card was replaced, the SAN was brought back up, and the VMs were turned back on.


UPDATE 17: A full report of the issue will be posted within the next 7 days.


UPDATE 16: At 1AM UK time all services were back online and running normally. We are gathering a report from all parties to provide. Thank you for your patience during this hardware outage.


UPDATE 15: The reboot has shown further issues within OnApp which the team are correcting now. Hardware is being replaced to ensure the stability of the services. Once the new server has been installed we will post a new update.


UPDATE 14: We are going to be doing a standard reboot of a number of instances to ensure everything is fully corrected. Downtime should be less than 5min. Thank you.


UPDATE 13: Services have come back online but we are awaiting the all clear from the engineers.


UPDATE 12: Work is ongoing at the data center and engineers from OnApp are working to resolve the issue asap. Thank you for your continued patience.


UPDATE 11: The issue has been located in the OnApp hypervisor, which engineers at the data center and the OnApp support team are investigating.


UPDATE 10: We are still seeing some issues with the host machines and hope to have this corrected asap.


UPDATE 9: We have completed the final sync and corrections. We are monitoring services to ensure everything is stable. Thank you again for your patience during this matter. We will post a report as soon as possible.


UPDATE 8: We are running some reboots of the host machine to ensure we fully fix the issues that caused the host machine to fail.


UPDATE 7: We are happy to confirm the instances below are back online. We are running some checks/tests on our OnApp Control Panel instance:

  • S15 - 69.168.236.96
  • S16 - 69.168.236.49
  • S22 - 69.168.236.42

UPDATE 6: Now that the host machine is operating normally we are picking up logs and some issues which we can correct as we boot instances online.


UPDATE 5: We can confirm instances within the cloud are coming back online. Server S22 - 69.168.236.42 is back online. Others will be coming back online shortly.


UPDATE 4: The host machine has been started after the found hardware failure. The OnApp CP should startup soon and all other services shortly. We will update this status once all services are confirmed back online. Thank you so much for your patience.


UPDATE 3: Engineers are still working on the servers at the Softlayer data center in Texas USA. We hope to have a report and an ETA as soon as possible. Thank you for your continued patience and understanding.


UPDATE 2: A hardware fault has been found and tech engineers are checking the servers. We hope to have further updates with more detail shortly.


UPDATE 1: As per our new support policy for our Cloud platforms, OnApp support was informed of the issue and has detected a possible hardware-to-software fault. They are working with on-site engineers to resolve the issue asap. We are sorry for this unexpected downtime. Thank you for your patience.


We are seeing an issue at our US Texas data center and the on site team is investigating.

Affected servers:

  • S15 - 69.168.236.96
  • S16 - 69.168.236.49
  • S22 - 69.168.236.42
  • OnApp Cloud Control Panel - No other VMs outside of the Texas DC are affected

We are doing everything possible to get the affected services online asap.

Server reboot for CloudLinux (Resolved) Medium
  • 17/02/2017 22:00 - 18/02/2017 12:39
  • Last Updated 19/02/2017 18:26

UPDATE 2: We are currently investigating an issue which required a reboot. 


UPDATE: We are rebooting S23 due to memory linked issues with some previous CPU issues. Downtime expected to be less than 10min. Thank you for your patience and understanding.

We will be running a server reboot on S23 to implement CloudLinux services to ensure the performance and stability of the server. Downtime will be less than 10min and we will start the install of CloudLinux at 10PM UK time.

Disk Storage Checks/Upgrades (Resolved) Medium
  • 18/02/2017 11:51 - 18/02/2017 15:54
  • Last Updated 18/02/2017 11:52

Our reporting software indicated a disk resource issue which we are investigating and correcting on S17 server (London, UK).

Service Failure (Resolved) Critical

Affecting Server - [S21] Linux cPanel ~ Singapore

  • 15/02/2017 06:30 - 15/02/2017 08:45
  • Last Updated 15/02/2017 09:11

UPDATE 1: We have restored access to the server and all services are back online. We are now investigating the cause and once found we will put in measures to avoid this from happening again.


We saw the S21 server services stall without any known reasons. We are investigating.

CPU Spikes (Resolved) Medium
  • 13/02/2017 21:04 - 17/02/2017 14:04
  • Last Updated 14/02/2017 10:49

UPDATE 1: We have found the cause of the CPU spikes and corrected it, but we are looking into ways to ensure that if the issue does occur on an account in the future it won't cause such problems. We will update this status once we know more.


We are seeing a number of CPU spikes on S23 (London). We are investigating the cause and looking at implementing systems to prevent the server rebooting due to this. We hope to have systems stable asap and we are sorry for the inconvenience caused. We will keep this status open during the night while we monitor and investigate the CPU issues.

Disk Storage Upgrade - Reboot (Resolved) Medium

Affecting Server - [S24] Linux cPanel ~ Lucee ~ London UK

  • 09/02/2017 20:20 - 09/02/2017 20:24
  • Last Updated 09/02/2017 20:12

Due to storage system upgrades we need to perform a quick reboot of the server. Downtime is expected to be less than 10min. We are sorry for the short notice.

MySQL Upgrade to v5.6 (Resolved) Medium

Affecting Server - [S06] Linux cPanel ~ CF 10 Server ~ Washington DC USA

  • 31/01/2017 03:00 - 09/02/2017 20:15
  • Last Updated 30/01/2017 16:18

We will be upgrading MySQL to version 5.6 between 3AM and 5AM UK/London time on the 30th of January. Downtime will be minimal during this period. We expect downtime to be <30min.

Thank you for your patience.

Power Failure (Resolved) Critical

Affecting System - Coventry DC

  • 21/01/2017 21:15 - 28/01/2017 11:41
  • Last Updated 28/01/2017 12:57

UPDATE 8: All customers have now been emailed a full report of the downtime.


UPDATE 7: All services are back online; we are dealing with a few reported issues with version differences, which we hope to have resolved soon or to have options available to customers.

UPDATE 6: We have connected up most sites to the Lucee service but some configurations for sites are having some difficulties which we are working on. Thank you for your continued patience while we get all accounts linked to Lucee services.

UPDATE 5: Restores have been completed and we are now troubleshooting any issues on customers' accounts. This may take some time as we have a large number of requests to work through. Thank you for your patience and understanding during this period.

UPDATE 4: We have restored a number of accounts, please open a ticket for an update on your accounts restore or if you have any questions.

UPDATE 3: After trying to recover the service without any luck we are restoring backups of customers' accounts, either on a new instance or on available servers that are already active. We are sorry for the inconvenience caused.

UPDATE 2: We are attempting to restart services but at present the instance appears damaged. A backup restore may be required to get services back online. We will update this status once we have further information. We are sorry for the unexpected length of downtime.

UPDATE 1: Most services are back online but we are seeing issues with a couple of instances, including the S11 shared server. We are working to get services back online asap.

Our Coventry DC team and our support team are investigating power issues which have caused disruptions to some servers in our racks. We will be monitoring this during the night.

Below is the full DC report of the power failure:

We are currently experiencing issues with power at our Coventry datacentre. The generators and APCs are running but some customers may experience issues. We are working on this as a matter of urgency to ensure we have it resolved as soon as we physically can. We are sorry for the inconvenience.

Update 1:

Our Coventry datacentre site experienced a power outage from Western Power at around 08:00 this morning. Our generators started and took the load; however, after a few hours the generators at the site developed faults.
Power has now been restored at the site, but this is through our generators. Western Power are working to restore the power, which should hopefully be completed shortly.

Update 2:

A further power outage has occurred since the initial restoration. Please be assured we are doing all we possibly can to restore all services.

Update 3:
Western Power assure us power will be restored soon to the affected data hall.

Update 4:
Full power has now been restored to the affected data hall. All services will be online momentarily. If you are still experiencing issues then please let us know.

OnApp Reboot (Resolved) Medium

Affecting System - OnApp

  • 28/01/2017 11:40 - 28/01/2017 11:52
  • Last Updated 28/01/2017 11:41

We are running a quick reboot of our OnApp controller, during this time there will be no downtime of any customers services. Thank you for your patience while we action this control panel reboot.

UK Instance CPU Load (Resolved) Critical

Affecting Server - [S01] Linux cPanel ~ Coventry UK

  • 05/12/2016 12:07
  • Last Updated 06/12/2016 22:11

UPDATE 5: CPU load has reduced to a normal level and we are monitoring services.


UPDATE 4: We are starting to transfer some accounts from the UK1 instance to our new cloud platform to help avoid CPU spikes. It appears the instance is being hit a lot, which is affecting both the hard drive's I/O rate and the CPU.

UPDATE 3: We are seeing new high load on the UK1 instance and we are looking at migrating all cPanel accounts to a new instance based on our new cloud platform. We are sorry for this unexpected downtime and are working to resolve this asap. Thank you for your patience.

UPDATE 2: We are monitoring the UK1 instance for any spikes in load. It appears to be a short-term issue, but we are updating software and implementing systems to help prevent this sort of issue from occurring again.

UPDATE 1: Services are coming back online and we are investigating the root cause of the issue.

We are currently investigating a CPU issue causing timeouts on the server. We are looking into this now.

Bravo Node SolusVM Migration (Resolved) High

Affecting System - Bravo Node

  • 04/12/2016 04:00 - 08/12/2016 22:02
  • Last Updated 05/12/2016 18:12

UPDATE 3 - Full Report:

Firstly, thank you for your patience during this migration period. We are in the middle of running the transfer/restore of the final VMs on the new node. It has been a long process, and longer than we had hoped, but all VMs have been migrated or are in the final stages of transfer to our new UK data centre. Those that are fully restored appear to be running well, and the final ones will be coming online as soon as possible. If you have any issues please do contact a member of the support team via a ticket for the fastest response. With all major migrations or issues we like to be very open about any problems that may have occurred during such tasks; please see our report below.

Problems/Issues Occurred

We saw a number of issues which increased the length of the migration process. Firstly, the amount of data to be transferred was large. In earlier tests, network speeds held at the top speed available over the network, but during the sixth hour of the migration the speed started to drop. We had hoped this was a temporary loss of network speed, but at points the connection dropped entirely. Network issues had been reported on the old node server and were one of the reasons for the migration.

Total Downtime

In total there have been 40 hours of downtime since we started the migration on Sunday morning.

Migration Reasons

There are a number of reasons for this migration, but the main one was ageing hardware in a data centre that was not performing at the level we wanted to provide to our customers. The server was starting to show issues with both memory and hard drives, and to ensure customers' data and services were not damaged in any way we decided to migrate to new hardware at our UK data centre.

Conclusion

With any large-scale migration there are lessons to be learnt, and we have an internal review planned to see how we can improve our SolusVM migrations. Even though the migration was necessary, we believe we can provide different options on how a migration can be handled, for example transferring backup files over and then deploying them instead of the live data. Given the extent of the downtime caused, we will be providing all migrated customers' VMs with 6 months of free hosting extensions; if you have any questions regarding this please do contact a member of the billing team. These extensions will be applied over the course of the next 24 hours.
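The "transfer backup files over and then deploy them" option mentioned above can be sketched roughly as follows. This is a minimal illustration only, not our actual tooling: the function and path names are hypothetical, and a real migration would copy the archive between nodes (e.g. with rsync or scp) before unpacking it.

```python
import os
import shutil
import tempfile

def archive_and_deploy(src_dir, dest_dir):
    """Pack src_dir into a gzip tarball, then unpack it under dest_dir.

    Sketches the 'back up, transfer the archive, deploy' migration style,
    as opposed to copying a live data volume directly between nodes.
    """
    with tempfile.TemporaryDirectory() as tmp:
        # make_archive appends the .tar.gz extension itself
        archive = shutil.make_archive(
            os.path.join(tmp, "vm-backup"), "gztar", src_dir)
        os.makedirs(dest_dir, exist_ok=True)
        shutil.unpack_archive(archive, dest_dir)
    return dest_dir
```

The advantage of this approach is that the source stays online while the archive is built and moved; only the final switch-over needs downtime.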

From everyone at Host Media, we would like to thank all our customers for their patience and understanding during this migration period. This status will remain open for any further updates until all VMs have been checked and services have been stable for a good amount of time.


UPDATE 2: We have 3 VMs remaining in the migration and then all services would have been migrated. If you have any questions please do let our team know. Thank you for your patience and understanding.


UPDATE 1: We have transferred most VMs and the affected customers have been updated via support tickets. If you have any questions or issues please do get in touch with a member of the team.


We are now starting to migrate all VMs from the node Bravo to our Betelgeuse server. Please check your client portal for ticket updates which will contain details of your new IP address.

Thank you for your patience while we run this transfer.

Server Reboot (Resolved) Medium
  • 05/12/2016 00:00 - 04/12/2016 00:10
  • Last Updated 05/12/2016 10:20

UPDATE 1: All services came back online fine after the reboot and CloudLinux has also been installed. If you have any questions please do contact a member of the team.


We will be performing a server software upgrade on our S17 server which requires a server reboot. The reboot will be a standard reboot that will take up to 10min to complete.

Thank you for your understanding.

CPU Load Issue - Server Reboot (Resolved) Critical
  • 03/12/2016 20:07
  • Last Updated 03/12/2016 22:24

UPDATE 2: We have found the affected account and put actions in place to help prevent the issue from happening again.


UPDATE 1: Services are back online and we are checking the cause of the issue.

We are currently investigating a server load issue and running a reboot due to the load.

Server hard drive scan (Resolved) Critical
  • 02/12/2016 10:23 - 02/12/2016 10:34
  • Last Updated 02/12/2016 10:28

UPDATE 1: Scan complete and drives are running OK.


We are currently performing a disk scan on server S13 due to an unexpected issue detected by our monitors. We hope to have the server back online asap. Thank you for your patience.

Server CPU Load - Reboot (Resolved) Critical

Affecting Server - [S18] Linux cPanel ~ Amsterdam NL

  • 22/11/2016 11:51
  • Last Updated 22/11/2016 12:01

UPDATE: Services are all running normally. Thank you for your patience.


We are running a reboot and load check on our S18 server as well as adding more resources. A reboot is taking place now and will be completed within 5min. Thank you for your patience and understanding.

Server Reboot (Resolved) High
  • 21/11/2016 07:15 - 21/11/2016 07:18
  • Last Updated 21/11/2016 07:18

UPDATE: Services are coming back online and the reboot was successful. Thank you for your patience.


We are running a standard reboot of S16 (Dallas USA) to apply memory updates to improve the performance of the services. Reboot downtime expected to be less than 5min. Thank you for your understanding.

Server reboot (Resolved) High
  • 19/11/2016 09:47 - 20/11/2016 12:15
  • Last Updated 19/11/2016 09:49

We are performing a reboot to correct a disk error and update software.

Unexpected downtime (Resolved) Critical
  • 31/10/2016 14:30 - 01/11/2016 18:20
  • Last Updated 31/10/2016 14:31

We are currently looking into an unexpected downtime on S17. We hope to have all services back online asap.

US East - Server 13 (Resolved) Critical

Affecting Other - Shared hosting platform

  • 24/10/2016 09:52
  • Last Updated 24/10/2016 12:21

Update 1: The server and all services are back online. Thank you for your patience.


We are performing emergency maintenance on this server which will make all/most services on this server inaccessible.

Services affected include HTTPD and Webmail access, MySQL, POP/IMAP, SSH and FTP.

During this outage, access to your websites, email, files, databases, will not be possible. We apologize for this inconvenience and while we do not have an ETA for this procedure, we will continue providing updates as soon as possible.

Migration S21 (Resolved) Critical

Affecting Server - [S21] Linux cPanel ~ Singapore

  • 18/10/2016 09:39
  • Last Updated 18/10/2016 15:41

UPDATE 1: Migration has been completed and all services are running on the new cloud instance. Thank you for your patience and understanding.


We will be performing a migration of all accounts from the current S21 Hong Kong server to a new cloud VM. This is due to network issues that have affected mail ports and connection speeds. Downtime will be minimal as we will be doing a direct transfer of accounts. If you are using A records, please change your domain's IP to point to 45.126.124.59; if you are using our DNS then nothing needs to be changed.

If you have any questions please do contact us.
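For customers repointing A records, a quick check along these lines can confirm whether a domain already resolves to the new address. This is a minimal sketch only: the resolver argument is injectable purely so the logic can be tested without live DNS, and any domain passed to it is a placeholder.

```python
import socket

def a_record_matches(domain, expected_ip, resolve=socket.gethostbyname):
    """Return True if `domain` currently resolves to `expected_ip`.

    `resolve` defaults to a live DNS lookup; pass a stub to test offline.
    """
    try:
        return resolve(domain) == expected_ip
    except socket.gaierror:
        return False  # domain does not resolve at all

# Live usage (hypothetical domain):
# a_record_matches("example.com", "45.126.124.59")
```

Note that cached records can lag behind an A-record change, so a stale result here may simply mean the old TTL has not yet expired.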

Lucee Memory Issue (Resolved) Critical

Affecting Server - [S11] Linux cPanel ~ Lucee ~ London UK

  • 12/10/2016 07:57
  • Last Updated 12/10/2016 09:19

UPDATE 1: Services are running normally, but our team are looking into our Lucee platform to see how improvements can be made to avoid memory overload from Lucee. We will ensure to update all affected customers as soon as possible. Thank you for your patience, and if you have any questions please feel free to open a ticket to the management team, who will be more than happy to answer infrastructure questions.


We are investigating a memory issue on our UK Lucee server. We are working on this now to get CFML pages back online ASAP. Thank you for your patience and sorry for the downtime.

Load and connection timeouts (Resolved) High
  • 11/10/2016 13:37
  • Last Updated 12/10/2016 08:31

UPDATE 2: Our updates have been completed and services are running normally. Thank you for your patience during the reboot.


UPDATE 1: We will be performing a server reboot tonight at 10PM UK/London time. We will be increasing some resources allocated to the cloud VM to ensure performance is maintained to the highest level. Thank you for your patience.


We are investigating an issue with the cloud VM that causes some downtime for customers websites. We will be performing upgrades to the VM which may require a single reboot of the VM.

Thank you for your understanding and patience.

Cloud software upgrade (Resolved) High

Affecting System - London Cloud - Shared/Reseller Services

  • 01/10/2016 03:00 - 03/10/2016 09:31
  • Last Updated 03/10/2016 09:31

UPDATE: Services came back online and all systems are running normally. Thank you.


We will be rebooting all cloud nodes that are running our shared/reseller hosting services to allow an important software upgrade to take effect. Thank you for your understanding.

CloudLinux Install (Resolved) Medium
  • 24/09/2016 05:00 - 24/09/2016 12:38
  • Last Updated 23/09/2016 12:35

We will be installing CloudLinux on our S15 server to help ensure CPU usage by accounts is kept to an acceptable level. This install requires a standard reboot of the server and we expect downtime to be under 10min. If you have any questions please contact a member of the team.

Unexpected downtime (Resolved) Critical
  • 21/09/2016 15:16
  • Last Updated 21/09/2016 15:35

UPDATE 2: Services are back online and node issues are corrected. Sorry for the downtime.


UPDATE 1: We have run a reboot and services should be online shortly.

We are investigating an issue with our S12 ColdFusion server. We hope to have updates on this asap.

Network/ISP Issue (Resolved) Critical

Affecting System - DNS Hosted Platform

  • 19/09/2016 21:23
  • Last Updated 20/09/2016 08:15

UPDATE: The network issue has been resolved and all services are back online.


We are currently having network issues with our ISP and working to resolve this ASAP. We are sorry for the unexpected downtime.

Kernel Update (Resolved) High

Affecting System - Archer Node

  • 14/09/2016 22:00 - 14/09/2016 23:45
  • Last Updated 15/09/2016 09:11

UPDATE 6: We have been monitoring the services during the night and all services are now running on the latest kernel. We will continue to monitor the server as normal to ensure the previous issues do not occur again on this node. Thank you for your patience and understanding during this update.


UPDATE 5: We have made adjustments to the XEN network configuration and services are loading up normally. All services should be back online within 5-10min. Thank you for your patience.

UPDATE 4: Data centre network administrators are tracing the routing issue their end. We hope to have services back online asap.

UPDATE 3: Engineers at the data centre are now checking an issue between the IP router and the node. The issue is due to the new kernel update and we hope to have further updates soon.

UPDATE 2: We are bringing the server back online manually due to the reboot failing to bring it back online automatically.

UPDATE 1: An emergency reboot of the Archer node has been performed due to issues found during pre-checks ready for the main reboot. We are monitoring the server's reboot and hope to have a stable service soon. Thank you for your patience.

We will be performing a kernel update which will require a reboot of the Archer node. This is to correct a number of issues that have caused the node to stop being pingable. We hope to keep downtime to a minimum. Thank you for your understanding.

Archer node (Resolved) Critical

Affecting System - Archer Node

  • 29/08/2016 20:16 - 29/08/2016 21:06
  • Last Updated 29/08/2016 20:16

We are investigating an issue on the Archer node. Our DC engineers are checking this now.

Server CPU Load - Reboot (Resolved) Critical
  • 23/08/2016 17:06
  • Last Updated 24/08/2016 10:11

UPDATE: After our reboot the services are running normally and we are monitoring the situation.


We are currently looking into a CPU load issue and running a reboot to clean out any CPU issues. We will update this status once we know the root cause.
Thank you and sorry for the downtime.

Server Down (Resolved) Critical

Affecting System - Archer Node

  • 20/08/2016 06:43 - 20/08/2016 10:59
  • Last Updated 20/08/2016 10:59

UPDATE: Services are back online and we are investigating the root cause.


Currently there is a network issue with one of our nodes. This has been escalated and the issue is currently being worked on with the utmost priority. Unfortunately, it is not possible to provide an ETA at the moment. We expect to rectify the issue and reinstate the services as soon as possible.

We will keep you updated about the ongoing issue and network statistics. We apologise to affected clients, who may experience slow or unresponsive services while the issue is being resolved.

Server Maintenance Reboot (Resolved) Medium
  • 18/08/2016 22:00 - 20/08/2016 08:01
  • Last Updated 17/08/2016 21:02

We have detected a system issue with the node hosting the SQL services listed above. Our engineering team has applied system updates and scheduled a brief maintenance window to perform a server restart.

Date and Time: Aug-18-2016 10:00 GMT/UTC (Aug-18-2016 03:00 Local Time)

Please note: This event will reboot the server and a small amount of downtime will occur. Your data and configurations will not be affected by the reboot.

Server Reboot (Resolved) High
  • 11/08/2016 11:52 - 11/08/2016 11:55
  • Last Updated 11/08/2016 11:53

A reboot of server S15 is underway to apply new updates. Sorry for the downtime caused, this will be a max of 5min.

ColdFusion Service Failure (Resolved) Critical
  • 11/08/2016 10:15 - 11/08/2016 11:52
  • Last Updated 11/08/2016 11:52

UPDATE: Service reboot corrected the issue and services are running normally.


We are working on an unexpected issue with the ColdFusion service on our S12 CF server. We hope to have services running asap, all other services are working normally.

Archer server (Resolved) Critical

Affecting System - IP routing issues

  • 06/08/2016 15:39
  • Last Updated 06/08/2016 16:25

UPDATE: We have now resolved the issues; the main cause was a kernel error in the version the server requires. Plans are in place to move customers' VMs and hosting accounts to our new Cloud solutions. Customers will be updated in the near future with planned migrations. Thank you for your patience.


We are currently investigating an IP routing issue on our Archer node. We hope to have this resolved ASAP.

Toronto 6 Server - Emergency Disk Maintenance (Resolved) Critical

Affecting Server - Linux cPanel ~ Legacy Platform

  • 04/08/2016 01:00 - 06/08/2016 16:26
  • Last Updated 04/08/2016 09:37

MAINTENANCE START TIME: 7:30 pm EDT 08/03/16

ESTIMATED DURATION: 1 day

STATUS: In Progress

At 8:30PM tonight (03 August, 2016 20:30-CDT) We will be taking the server Toronto 6 server offline in order to synchronize data across multiple disks and re-initialize backup services.

Our team has detected an issue that could result in heavy data loss if left unattended. This process could take up to 24 hours to complete, and all hosting services will be unavailable during that time.

The safety and consistency of your data is one of our highest priorities, and this has been determined to be the quickest and safest way to proceed. Our team will be actively managing the process throughout.

We sincerely apologize for the inconvenience, and will have all services restored as soon as possible.

Thank you for your patience.

MEL3 - Data initialize (Resolved) Critical

Affecting System - MEL3 - Cluster Server

  • 29/07/2016 00:00 - 31/07/2016 14:24
  • Last Updated 31/07/2016 14:24

UPDATE 1: Services are back online and running normally. Thank you for your patience.


At 8PM tonight (28 July, 2016 20:00-CDT) We will be taking the server MEL3 offline in order to synchronize data across multiple disks and re-initialize backup services.
Our team has detected an issue that could result in heavy data loss if left unattended.
This process could take up to 24 hours to complete, and all hosting services will be unavailable during that time.

The safety and consistency of your data is one of our highest priorities, and this has been determined to be the quickest and safest way to proceed.
Our team will be actively managing the process throughout.

We sincerely apologize for the inconvenience, and will have all services restored as soon as possible.

Thank you for your patience.

Shared Cluster Network Issues (Resolved) Critical

Affecting System - Global cPanel Cluster Network

  • 25/07/2016 14:17 - 25/07/2016 15:28
  • Last Updated 25/07/2016 14:18

We are currently working on an issue with our global cluster network. A network issue was identified which affects multiple servers. As such, some of the sites hosted may load slow or appear inaccessible. Our System Administration team is actively working on this now and we will update this post as more information becomes available.

We sincerely apologize for the inconvenience this issue has caused. We understand service reliability is of the utmost importance. If you have any further questions please let us know and we will do our best to answer them!

Archer Node - Network IP Routing (Resolved) Critical

Affecting System - Coventry Data Centre Network Routing

  • 25/06/2016 08:00 - 30/06/2016 14:58
  • Last Updated 25/06/2016 14:22

UPDATE 2:
Services have been restored and we are investigating the issue with the Kernel version to help prevent this from happening in the future.

UPDATE 1:
Engineers are checking IP routing with a version of the kernel which appears to be the cause of the issues. We hope to have services back online asap. We are sorry for the downtime caused.

ISSUE:
We are currently investigating a major network issue connected to our Archer node. Engineers at the data centre are working to resolve this issue asap.

ColdFusion Servers (Resolved) Critical

Affecting System - US Data Centre - Walla Walla

  • 03/06/2016 22:40
  • Last Updated 03/06/2016 23:11

Update: Services have returned to normal after the network issues were resolved.

Engineers at our Walla Walla, US data centre are looking into packet loss issues on the network. All shared/reseller ColdFusion servers are currently affected.

Toronto S5 - Issues (Resolved) Critical

Affecting System - Server Cluster

  • 31/05/2016 14:31
  • Last Updated 31/05/2016 14:49

UPDATE 1: Services are back online and we are investigating what happened in full.


We are currently investigating an outage at our Toronto DC. We hope to have systems back online asap.

Reboot - Disk Expansion (Resolved) Medium
  • 28/05/2016 11:06
  • Last Updated 28/05/2016 11:17

UPDATE 1: Reboot complete and disk increased successfully.


Our US Lucee server (S16) has been rebooted for a disk expansion. Downtime <10min.

Maintenance for OnApp Cloud (Resolved) Medium

Affecting Server - OnApp Cloud

  • 01/06/2016 09:00 - 25/07/2016 15:42
  • Last Updated 27/05/2016 19:40

The OnApp Cloud will be undergoing maintenance for approx 1 hour on Wednesday June 1st at 9am UK time. 
 
Virtual Servers will stay online - please note that new Virtual Servers will not be able to be built during the maintenance window. 
 
Please also note that Virtual Servers cannot be edited during this time. 
 
For any questions please feel free to contact us.

Planned ColdFusion Service Reboot (Resolved) Medium

Affecting Server - [S06] Linux cPanel ~ CF 10 Server ~ Washington DC USA

  • 13/04/2016 21:00 - 13/04/2016 21:19
  • Last Updated 13/04/2016 17:08

We will be rebooting the Adobe ColdFusion services to resolve timeout issues in Tomcat. We hope to keep downtime to a minimum; it is expected to be less than 5min.

Archer Node (Resolved) Critical
  • 18/03/2016 18:45
  • Last Updated 19/03/2016 11:52

Update:
After investigating the issue it appears the server was overloaded; after a reboot the memory cleared and all services came back online. We are monitoring the server for any further build-up of memory usage.

Issue:
We are currently investigating issues with the Archer node. Our engineers are working on the issue.

Archer Node - Network Issues (Resolved) Critical

Affecting System - DC Coventry Network

  • 05/12/2015 06:22
  • Last Updated 05/12/2015 10:07

UPDATE 3: We found the kernel was causing the main issues, which we have now corrected. We are looking into why this happened and how to prevent it in the future.

UPDATE 2: Our IP bridge for the Archer node is no longer showing up for this server. Our team are working on correcting this to get all services back online asap.

UPDATE 1: We have lowered the traffic coming into the rack and are now working to restore all services.

We are seeing a large amount of traffic hitting our servers. We are looking into the cause and working to resolve this asap. Sorry for any inconvenience caused.

Upstream Network Issues (Resolved) Critical

Affecting System - US ColdFusion Data Centre

  • 07/10/2015 11:15 - 07/10/2015 11:17
  • Last Updated 07/10/2015 12:03

One of our data centre's upstream network providers performed emergency maintenance which interrupted our service. All servers and sites are currently up at this time and downtime was <2min.

If you have any questions or concerns please feel free to contact us at any time.

Server reboot and memory updates (Resolved) Critical

Affecting Server - [S06] Linux cPanel ~ CF 10 Server ~ Washington DC USA

  • 24/09/2015 23:00 - 28/09/2015 11:12
  • Last Updated 25/09/2015 09:00

UPDATE 1: The reboot and memory upgrades have been completed. Services are running well. If you see any issues or have any questions please do let us know. Thank you for your patience.


We will be running a reboot of the S6 CF server tonight at 11PM. This is to apply memory updates on the server. Downtime is expected to be less than 60min.

Archer Rack (Resolved) Critical

Affecting System - Network Issue

  • 23/09/2015 16:03
  • Last Updated 24/09/2015 09:14

UPDATE 2: We can fully confirm this was a data centre network related issue and we are working with the DC to find out exactly what happened.


UPDATE 1: Services are running normally and the network is back online.

We are currently investigating a network issue affecting our Archer rack. Will post updates asap.

Node Memory Issue (Resolved) High

Affecting System - Archer

  • 20/09/2015 01:00
  • Last Updated 21/09/2015 16:36

UPDATE 2: We have resolved the issues detected during our memory update and all services are back online. We are sorry for the inconvenience caused; this downtime was unexpected but necessary to ensure node services run smoothly and that this does not cause larger issues in the future. If you have any questions or comments please contact the sales or management team, who will be more than happy to help.


UPDATE 1: After upgrading the memory on our node we found a couple of issues, which we are repairing to ensure the stable service of the Archer node.

Due to a detected memory issue on our Archer node which caused some downtime last night we are running some updates at 2:30AM Sunday morning UK time. Downtime will be minimal.

IP Subnet Error (Resolved) Critical

Affecting System - Archer Node

  • 15/09/2015 10:30
  • Last Updated 15/09/2015 14:27

Update 1: The error at the data centre in regards to our IP subnet was human error; our management team are working with the DC management to see how to prevent this from happening in the future. We are sorry for the downtime, and if you have any comments or questions please feel free to contact a member of the team. Thank you for your understanding.


We were alerted to an IP error on our Archer node, in the IP subnet from a recent migration (DELTAUK2). The data centre mistakenly took the IP subnet which was assigned to our rack's VLAN offline. Our management team are in conversation with the data centre now. We will update this status asap.
We are sorry for this unexpected downtime and will ensure this does not happen again.

Node Migration (Resolved) High
  • 04/09/2015 20:00 - 07/09/2015 11:00
  • Last Updated 07/09/2015 09:16

Update 6: The migration has been completed and all services are running normally. Thank you for your patience during this process and if you have any questions or comments please do let us know.


Update 5: We have started the transfer of S10 server. We hope to have this completed within the next 4-6 hours. Thank you for your patience.


Update 4: Server S11 has been migrated and now running on the new hardware.


Update 3: Due to unforeseeable complications with the transfer, we had to stop the migration and reschedule the migration of the S10 server for Sunday 9PM. We are very sorry for the inconvenience caused, and if you have any questions in the meantime please do let us know. Thank you for your understanding and patience.


Update 2: Due to lower-than-expected transfer speeds between the servers, we have rescheduled the migration of the S11 server until tonight at 9PM. S10 is almost complete and should be back online shortly.


Update 1: We have completed the migration of the dedicated VMs and are now in the middle of migrating the two shared/reseller servers, S10 and S11. Once this has been completed we will update this status. Thank you for your patience and for bearing with us during this migration process.


We will be performing a node migration at our Coventry, UK data centre on Friday the 4th of September and starting from 10PM UK/London time.

Affected services:

  • DELTAUK2 VM Node - Completed
  • S10 Shared/Reseller Services - Completed
  • S11 Shared/Reseller Services - Completed

Expected downtime: 2-4 hours per service

The new node is one of our top of the line servers and we hope you will see a general performance increase.

Unexpected downtime (Resolved) Critical

Affecting System - DeltaUK2 Node

  • 31/08/2015 12:27
  • Last Updated 31/08/2015 12:42

UPDATE 2: There was a temporary network issue on the racks that host the DELTAUK2 at the Coventry DC which has been resolved.


UPDATE 1: Services are now back online and we are checking as to what caused this small period of downtime. Total downtime: <10min

We are investigating an issue with our DELTAUK2 node at our Coventry DC. We hope to have services back online asap.
Thank you for your patience.

US Data Centre Network Issues (Resolved) Critical

Affecting System - Network Connections

  • 24/07/2015 10:59
  • Last Updated 24/07/2015 12:14

UPDATE 2: One of the data centre's network providers experienced some issues. This has been resolved at the DC level and we will continue to monitor the situation.


UPDATE 1: Network connections have come back online and we are working with the DC now for a full network report.

We are currently seeing network issues at our US data centre and are working on resolving these now. Services affected are the S6 and S12 servers.

Alpha US 1 Server (Resolved) Critical

Affecting System - Server Node

  • 07/07/2015 17:08
  • Last Updated 07/07/2015 17:40



UPDATE 1: Engineers at the DC have resolved the issue and we are bringing VMs back online now. We will update this status once all VMs are back online and running. Thank you for your patience.

We are currently working on a server issue causing downtime on our US 1 server in the Kansas data centre. We will update this status as soon as possible.

US Data Centre - Power Issues (Resolved) Critical

Affecting System - Power Supply

  • 27/06/2015 19:00
  • Last Updated 27/06/2015 21:05

UPDATE: DC has resolved the power issues and services are now fully online. Thank you for your patience and understanding.


Our US, Kansas data centre is currently having some power issues but we hope to have everything stable shortly. Sorry for the downtime caused.

Emergency hard drive replace and reboot (Resolved) High

Affecting System - Node: Betelgeuse

  • 08/06/2015 21:00 - 08/06/2015 21:30
  • Last Updated 08/06/2015 22:21

UPDATE 5: All services are stable and the sync will take a number of hours to complete. We will mark this status update as resolved and provide further updates if required. Thank you for your patience.


UPDATE 4: All VMs are back online and we are monitoring the data sync.

UPDATE 3: Most VMs are back online and the rest are coming online now.

UPDATE 2: The faulty hard drive has been replaced and the RAID controller is now syncing the data to the drive.

UPDATE 1: Our data centre engineers are now replacing the server's faulty hard drive. The node should be back online within 30min.

Due to a hard drive read issue on one of our RAID drives we will be replacing it at 9PM UK/London time today. Downtime will be minimal, 30min at most, though we hope it will be shorter. If you have any questions please do contact a member of the team.

XEN Security Updates (Resolved) Medium
  • 14/06/2015 01:00 - 14/06/2015 09:52
  • Last Updated 08/06/2015 11:36

On Sunday the 14th of June we will be running security updates on our XEN nodes. Services will be rebooted on Sunday between 1AM and 2AM. We do not expect any major downtime apart from the reboots. The reboots may take a little longer than normal to ensure all security updates are installed correctly.
Thank you for your understanding.

Network Issues (UK) (Resolved) Critical

Affecting System - Coventry DC Network

  • 27/05/2015 11:11 - 07/06/2015 12:51
  • Last Updated 27/05/2015 12:42

UPDATE 6: The issue was caused by a widespread problem affecting much of the UK's connectivity at the London Internet Exchange (LINX). We have disabled our peering at LINX for now and all services are running normally. We will provide further updates shortly.


UPDATE 5: Networks appear to be stable and our DC engineers are writing up a report. Management are speaking with the DC and network providers now.

UPDATE 4: The DC expects all services back online within 5min. We will be having further talks with management at the DC and network providers as soon as services are stable. We are once again sorry for this downtime and a full report will be issued as soon as possible.

UPDATE 3: We are continuing to see issues at our UK data centre - the DC team are working to have this resolved asap. We are sorry for the inconvenience caused and the downtime.

UPDATE 2: The issue appears to be coming and going but we are working on this issue and hope to have everything stable shortly.

UPDATE 1: Network services have come back online and we are speaking with our DC partner for a full report and what they are doing to ensure this doesn't happen again.

Our UK Coventry data centre is having some network issues which we are investigating. Further updates coming asap.

Network Issues (UK) (Resolved) Critical

Affecting System - UK Coventry Data Center

  • 18/05/2015 17:32 - 19/05/2015 10:20
  • Last Updated 18/05/2015 18:04

UPDATE 4: Data Centre Update: The problem was caused by a broadcast storm on our network; as a result a number of rack switches locked up and had to be rebooted.


UPDATE 3: The issue has been resolved and all services appear to be back online. We are getting a data centre report now to explain the cause of the network issue.

UPDATE 2: ETA to resolution: 5-10min. The data centre has confirmed it is an issue at their end and not with our servers or racks. Further updates to come shortly.

UPDATE 1: DC Engineers are looking at the issue and will be updating us shortly.

We are currently investigating a network drop on our UK Coventry network. Further updates will be posted asap.

US Network Issues (Walla Walla DC) (Resolved) High

Affecting System - Network

  • 15/05/2015 09:00 - 15/05/2015 10:12
  • Last Updated 15/05/2015 10:12

UPDATE: One of our upstream network providers has been experiencing some issues. We have re-routed the network traffic around them for the time being. All servers and sites are currently up.


We are seeing network issues at our US, Walla Walla data centre and our team are investigating. This is affecting all US ColdFusion 10 servers.

Network Connection Disruption (Resolved) High

Affecting System - Nodes affected: CharlieUK2

  • 10/05/2015 10:00 - 11/05/2015 11:42
  • Last Updated 10/05/2015 12:30

UPDATE 3: We have corrected the issues and all VMs are back online. We are monitoring the services and investigating the network issue fully.


UPDATE 2: We were able to boot the CharlieUK2 node, but once the VMs started to boot it failed with memory errors. We are working on this node and hope to get everything back online asap.

UPDATE 1: We have resolved the network issues seen by our monitoring systems and are currently running a clean reboot of the affected nodes to ensure networks are picked up again and working OK. We have also improved our internal network monitoring to alert us sooner when these packet loss issues occur, as some users would have seen all services online and working. An update will be posted as soon as the node is online and tested.

We are seeing packet loss to our racks which is currently affecting the customer node: CharlieUK2. We are working with the DC to correct this issue asap.

U.S. Network issues (Resolved) High

Affecting System - Network

  • 28/04/2015 22:58 - 28/04/2015 23:25
  • Last Updated 29/04/2015 08:21

UPDATE 1: Services were restored at 23:25.


Our U.S. team are currently working to resolve an issue at the Washington State data centre. We hope to have this resolved shortly.

UK - Germany DC Network Issues (Resolved) High

Affecting System - Network (UK-DE)

  • 28/04/2015 16:30 - 28/04/2015 23:00
  • Last Updated 28/04/2015 17:00

UPDATE 2: Networks appear stable but we are monitoring.


UPDATE 1: The network issues appear to affect only customers on certain ISPs. We are speaking with our Germany DC partners and also checking with local network providers.

We are investigating a network issue between the UK and our Germany DC partner.

Apache Automatic Graceful restart (Resolved) Low
  • 12/04/2015 20:09 - 12/04/2015 20:11
  • Last Updated 13/04/2015 14:33

On Sunday the 12th at 20:09 UK time Apache performed an automatic graceful restart. This triggered an Apache log rotation, and our external monitoring services picked up 2-3min of downtime. Other monitoring services showed sites and services running. If you have any questions about this outage please do contact a member of the team.

Archer Node (Resolved) Critical

Affecting Server - VM/VPS (SolusVM) Group

  • 02/04/2015 05:10 - 09/04/2015 12:11
  • Last Updated 02/04/2015 11:27

Report:
The main cause of the issue was the XEN security updates, which caused a failure in the boot-up systems of the Archer node. Our team had to correct the boot-up issues and run manual hardware reboots at the data centre. Once the node came back online all VMs loaded up successfully.

What we are planning to help prevent this from happening again:

  • XEN security updates will now be scheduled for Sunday early hours to lower the impact of these updates.
  • A new permanent remote KVM has been installed to ensure our 24/7 team can remote access this server to correct any issues that come up.
  • Improvements to our support teams are planned for quarter 3 of this year to boost our customer service.
Issue Updates:
During a security update of the XEN services we found an issue on reboot which caused the server to fail. We have corrected the issues and are now investigating ways to ensure this does not happen again. We are sorry for the downtime caused - if you have any questions please do contact a member of the team.

Server failure.

CharlieUK2 Node - Unexpected VM Failure (Resolved) Critical

Affecting Server - VM/VPS (SolusVM) Group

  • 01/04/2015 09:51 - 02/04/2015 09:16
  • Last Updated 01/04/2015 10:30

UPDATE 1: VMs are now all back online and we are checking to see what happened to cause all VMs to fail without warning.


We are currently working on the node: CharlieUK2 which is one of our OpenVZ server nodes that has failed. The node is online but we are currently having issues booting the VMs online. We hope to have a more detailed update shortly.

Archer Node (Resolved) Critical

Affecting Server - VM/VPS (SolusVM) Group

  • 20/03/2015 09:56
  • Last Updated 23/03/2015 16:45

UPDATE 2: The outage for the majority of our servers should have been around 4 minutes. Those in racks 30, 13 and 27 experienced up to ten minutes of downtime. This was due to the routing process restarting on our servers' gateway device. We are looking into the cause of this with Juniper and hope to have another update from them within the next 24 hours.


UPDATE 1: Services are coming back online now and we will provide further details shortly on the downtime.

We are currently seeing an issue with the Archer node - our DC team are investigating to get this back online asap.

Lucee Service Issue (Resolved) Critical

Affecting Server - [S11] Linux cPanel ~ Lucee ~ London UK

  • 26/02/2015 16:30 - 03/03/2015 10:52
  • Last Updated 26/02/2015 21:44

We have resolved the issue and will be publishing a full report shortly.



We are still working on resolving the Lucee service issue. We hope to have this resolved shortly. We are sorry for the downtime caused.


We are currently seeing downtime on the Lucee service which we are investigating.

Coventry DC Network Issues (Resolved) Critical

Affecting System - Coventry DC

  • 03/02/2015 13:37
  • Last Updated 09/02/2015 10:44

UPDATE 1: The network has come back online after corrections by the Coventry DC team. We are investigating what happened and will update you as soon as possible.


We are experiencing network issues at the Coventry DC and reboots of core pieces of network equipment are being actioned now. Further updates to come asap.

Node Migration (Resolved) High

Affecting Server - VM/VPS (SolusVM) Group

  • 05/02/2015 22:00 - 26/02/2015 22:35
  • Last Updated 04/02/2015 14:54

UPDATE 2: Updates have been sent via support tickets for clients with information regarding IP changes due to new DC node being deployed.


UPDATE 1: Migration has been rescheduled for the 5th of Feb.


Due to a decrease in performance on the node: AlphaUK2 we will be migrating all VMs from this node to the node: Betelgeuse. No IP changes will be required as we will be migrating the IP subnet over to the node.

Migration scheduled for: 03/Feb/2015 10PM UK/London Time Zone

Thank you.

BravoUK2 Emergency Migration (Resolved) High

Affecting Server - VM/VPS (SolusVM) Group

  • 27/01/2015 19:52
  • Last Updated 31/01/2015 12:42

UPDATE 5: All data has been migrated and we have been testing the sites and VMs overnight. Load and performance have generally increased. If you have any questions or issues please do get in contact with a member of the support team. Thank you for your patience and support during this migration.


UPDATE 4: We are still migrating data and currently on the largest section of data to migrate. Once this has been completed we will update this status.


UPDATE 3: We are still working on migrating the final services over to the new node. We hope the speed will increase once off-peak time comes. Thank you for your patience.


UPDATE 2: The migration of data is still progressing; due to the faults in the BravoUK2 drive, the transfer process has been slower than expected.


UPDATE 1: Betelgeuse has had its final checks and data is now migrating over from BravoUK2.


Due to faults found in the BravoUK2 server we are performing an emergency migration of all service instances to a new node which has been set up. The complete data transfer may take up to 10-12 hours, after which we will switch over the IP subnets to ensure all clients' services keep the same IPs. No domain or DNS updates will be needed.

New node name: Betelgeuse

We are sorry for the short notice of this migration, but to ensure no data or customer services are affected we are pushing forward with this emergency migration.

We will keep this network status updated while we process this migration.

Archer Node - Unexpected Downtime (Resolved) Critical

Affecting Server - VM/VPS (SolusVM) Group

  • 30/12/2014 13:05 - 13/01/2015 21:03
  • Last Updated 31/12/2014 12:31

Archer Downtime Report

Report Downtime Start-End Date/Time:

30/DEC/2014 13:05 - 31/DEC/2014 03:30

Cause:

The first reports showed issues with one of the 1TB SSD hard drives within the RAID10 configuration. This would not normally cause such issues due to the 7 other drives in the RAID setup. On further investigation we found a second hard drive had become faulty. This caused corruption in some files that controlled many elements of the Xen virtualisation setup which broke the network bridge between the node main domain and the VMs.

Fix:

We were able to restore the configuration files to allow networks to become available once again. No data loss has occurred and the VM instances were running normally during this time, but without a network connection to the outside world. We are continuing to monitor the server and any sign of disruption will be investigated straight away.

Future Prevention:

We are setting up new monitoring tasks on our RAID and hard drives company wide starting with the Archer node to help detect issues like this sooner.


UPDATE 11: As of 3:30am UK time we were able to correct the network issues on the server. We are monitoring the server heavily and will be making adjustments throughout the day to ensure services run smoothly. Further updates will be posted shortly. Thank you for your patience during this issue.


UPDATE 10: New server hardware has been requested directly with our UK data centre and we hope to have this deployed asap.


UPDATE 9: Due to the hardware failure on the drives the configuration setup for our virtualisation systems has become corrupted and we are looking at restoring/transferring VMs to a new node server asap. We hope to have further updates shortly.


UPDATE 8: We have the engineers at the data centre investigating their configuration and any faults at their end. From all of us at Host Media, we are very sorry for this long period of downtime.


UPDATE 7: We are still working on a network issue connected to the local network to the node. The network issue is a misconfiguration in the bridge routing of the IP to VM.


UPDATE 6: We have been able to access the VM consoles locally and are now reapplying the IP network configs, as the network configuration seems to have been lost during the issues.


UPDATE 5: The node has come back online after its faulty drive replacement. We are now working on restoring access to the VMs and hope to have our customers back online asap.


UPDATE 4: A faulty drive (one of our 1TB SSDs) in our RAID array caused the drives to fail their sync and brought down the VMs. The data centre team are replacing the faulty drive and also checking the controllers. We hope to have the node back online in 10min and will then try to boot all VMs.


UPDATE 3: It appears a RAID controller could be the cause of the issues on the 'Archer' node. We hope to have more for you soon and your websites/VMs back online.


UPDATE 2: We are seeing some slowness in VMs coming back online - our DC team are checking the status of our RAID10 controllers and our offices tech team are checking the status of VMs data. Further updates to come.


UPDATE 1: The server node is coming back online now and we are making sure all VMs come back online. We will update you as soon as we can confirm what happened to this node.


We are currently seeing a lost connection to our 'Archer' node in our UK data centre. We are working on recovering the node and will update this network status asap.

Load issues after migration (Resolved) Critical

Affecting Server - [S01] Linux cPanel ~ Coventry UK

  • 22/12/2014 12:00 - 30/12/2014 13:18
  • Last Updated 23/12/2014 15:59

We are currently working on resolving load issues with S01 server and we have started migrating some customers accounts to our CloudLinux SSD servers. If you wish to be migrated to our newer hosting platform (CloudLinux SSD Servers) while we are correcting the issues please contact a member of the sales team.

We are sorry for any downtime and slowness of the website loading speeds. We hope to improve the stability as soon as possible.

Kernel Updates - Scheduled Reboots (Resolved) High

Affecting Server - VM/VPS (SolusVM) Group

  • 18/12/2014 20:00 - 30/12/2014 13:26
  • Last Updated 18/12/2014 14:25

We will be rebooting the following server nodes at 8PM UK/London time to ensure the kernel updates are fully applied.

Update type: Security

Server Nodes: AlphaUK2, BravoUK2, CharlieUK2 and AlphaUS2
Shared Services: UK1, US1 and S5

Thank you.

Bravo UK Node Fail (Resolved) Critical

Affecting Server - VM/VPS (SolusVM) Group

  • 10/11/2014 09:31 - 24/11/2014 13:49
  • Last Updated 10/11/2014 09:31

We had to run a manual reboot of the node BRAVO UK - The OS is coming back online now and we are monitoring. VPS services should be coming online shortly. Sorry for the unexpected downtime.

Charlie UK Server Node Fail (Resolved) Medium

Affecting Server - VM/VPS (SolusVM) Group

  • 09/11/2014 09:16 - 10/11/2014 09:31
  • Last Updated 09/11/2014 10:01

UPDATE: Services are coming back online - we are investigating the cause of the downtime.


We are currently working on restoring server node Charlie UK - VMs and shared services on this node are down. We hope to have the server back online shortly.

Network Connection Disruption (Resolved) Critical
  • 19/10/2014 09:52
  • Last Updated 19/10/2014 10:51

UPDATE 1: Network restored and services are coming back online. If you have any problems please contact the support team.


Our connection to our Windows-based servers is currently down and our team are working with the data centre to resolve the issues. We hope to update you shortly.

Drive issue (Resolved) Critical

Affecting Server - [S01] Linux cPanel ~ Coventry UK

  • 12/09/2014 13:49 - 17/09/2014 14:20
  • Last Updated 12/09/2014 14:46

UPDATE 1: Services are now coming back online and we are monitoring the situation.


We are currently working on a drive issue on UK1 server. We hope to have this corrected shortly.
Sorry for any inconvenience caused.

UK19 Database Process Issue (Resolved) Critical

Affecting Server - Linux cPanel ~ Legacy Platform

  • 05/09/2014 12:27
  • Last Updated 05/09/2014 13:29

UPDATE 1: We have completed our repairs on the SQL services and all SQL systems are back to normal. We are sorry for the inconvenience caused by the SQL downtimes.


We are currently investigating and repairing a process on the UK19 server that is causing SQL services to drop. We will update this status as soon as possible and we are sorry for the DB downtime caused.

Charlie UK Node - Internal issue (Resolved) High

Affecting System - Charlie UK Node

  • 01/09/2014 10:09 - 01/09/2014 11:22
  • Last Updated 01/09/2014 10:12

The node Charlie at our UK data centre was experiencing issues which were first reported as network-based by our internal systems but, on checking, were due to a VM instance with corrupted data. We are investigating further but all VMs are coming back online. We will update you further as soon as we can. We are already prepping our new systems using Xen and look forward to hosting all our customers on these new systems soon.

Thank you and sorry for the downtime caused.

Network loss to UK1 Media (Resolved) High

Affecting Server - [S01] Linux cPanel ~ Coventry UK

  • 20/08/2014 17:01 - 27/08/2014 17:13
  • Last Updated 21/08/2014 11:41

UPDATE 8: We have seen another automatic reboot of the server and we are investigating this now. Services are coming back online though.


UPDATE 7: Services have been running fine over the last 7-8 hours, but we are continuing to monitor them and once we are happy we will set this status as resolved.

UPDATE 6: Services are coming back online now after we made adjustments to the RAID software. Our team at the data centre and from the office will monitor all services during the final stages of services coming back online. If you have any questions please feel free to contact the accounts team. Thank you again for your patience during this time.

UPDATE 5: We have found some issues with the RAID controllers and ACPI - we are continuing our investigation.

UPDATE 4: We are continuing to see drive issues, with the node unable to stay up for more than 30min before an unexplained automatic reboot. We are continuing to work on the issue and hope to update you soon.

UPDATE 3: It appears the DDOS attack which knocked out our Charlie node did more damage to the drives than we thought. We are syncing and checking them now and hope to have the node back online soon, but can't say for sure how long this will take. We will do everything we can to get it online. Thank you again for your patience.

UPDATE 2: Services started to come back online for the node but failed to start up VMs and general web services. We have engineers at the DC checking this.

UPDATE 1: Unfortunately another server in the same rack came under a DDOS attack, but this has now been dealt with by our system so everything is coming back online now. If you have any questions please feel free to contact the accounts team.

We are currently correcting network issues on our UK1 media server.
Further updates to follow. Sorry for any downtime caused by the network issues.

High Load (Resolved) Low

Affecting Server - [S02] Linux cPanel/WHM ~ US1

  • 20/08/2014 17:00
  • Last Updated 20/08/2014 19:48

UPDATE 1: The server is stable but load is still a little high for our liking and we are investigating now. We hope to have the servers load back to normal levels soon. 


We are currently resolving an issue with high load on the US1 Media server. We hope to have this corrected shortly. Sorry for any downtime to your website.

Backup Software Updates (Resolved) Medium

Affecting Server - [S06] Linux cPanel ~ CF 10 Server ~ Washington DC USA

  • 10/08/2014 22:00 - 11/08/2014 19:22
  • Last Updated 08/08/2014 15:51

We are going to be running general updates and backup software updates on Sunday 10PM - Monday 1AM on our S6 CF10 server.

Expected downtime: 1 hour.

Thank you for your understanding.

UK Charlie CPU Load (Resolved) High

Affecting System - CPU

  • 21/07/2014 15:56 - 24/07/2014 14:33
  • Last Updated 24/07/2014 14:33

UPDATE 4: After 24 hours of normal running we are closing this issue. If you require any support or have any questions please do let us know.


UPDATE 3: At 9:49 the node crashed for an unknown reason. We are doing a full scan and test of the node.

UPDATE 2: The services have been fine over night and this morning so we will be marking this issue resolved but we will monitor closely over the next 48 hours.

UPDATE 1: Services are back online, we are currently scanning a couple VMs that appear to be causing some issues. Further updates to follow.

Our UK node Charlie has failed and engineers are at the data centre checking this and will boot the server up shortly.

Germany Network Packet Loss (Resolved) Critical

Affecting System - Network

  • 21/07/2014 08:43 - 24/07/2014 14:33
  • Last Updated 24/07/2014 14:33

UPDATE 8: After 24 hours of normal running we are closing this issue. If you require any support or have any questions please do let us know.


UPDATE 7: After a night of normal load the Charlie node crashed at 9:05 - our team are investigating now.

UPDATE 6: Services have been running normally for the past 1.5 hours. We are continuing to monitor the services and bring on new servers. Further updates to follow directly to customers via email and social media.

UPDATE 5: The Charlie node appears to be having issues with a number of different clients' VMs when the kernel is loading up. We have already racked a new high-end server which is being loaded with Xen, to be used for future VMs and as a target for attempting to move the current Charlie VMs over to. This new node is codenamed 'Archer'.

UPDATE 4: We had all VMs running but due to an unknown issue the server failed once more. We are checking the logs and running a new boot up.

UPDATE 3: We have started a couple of the VMs on the node and they are coming up fine. We will be starting 1 or 2 VMs at a time and monitoring the services.

UPDATE 2: We are still investigating the issues on the charlie node - we will be running a boot up shortly and will try to bring instances back online one by one.

UPDATE 1: We have our engineers at the data centre now checking the last node that is having issues - 'CHARLIE' and we hope to have this corrected soon.

We are currently investigating load & network loss to some of our Germany nodes. Our team are investigating and hope to have further updates soon.

Loss of connection for small periods of time (Resolved) Critical

Affecting System - UK Coventry DC Network

  • 17/07/2014 12:10 - 17/07/2014 12:41
  • Last Updated 17/07/2014 12:41

UPDATE 1: The outage was caused by an emergency reboot of our core routing platform at our Coventry site as recommended by JTAC engineers due to an error we were seeing on these racks. If you are seeing any issues please do let us know.


We are currently investigating an issue at the UK Coventry DC where small amounts of network loss have been detected on a couple of racks. We have contacted the data centre engineers to find and resolve these issues.

UK data centre - high inbound traffic (Resolved) Critical

Affecting System - Network

  • 05/07/2014 18:10
  • Last Updated 07/07/2014 15:51

UPDATE 1: We have corrected the issue and all services have been running fine for some time. We had one spike in traffic but that was resolved as soon as it was detected. We will continue to monitor the services and any further updates will be posted here. Thank you for your patience.


We are currently investigating a high amount of inbound traffic from outside sources to some servers at our UK data centre.

Charlie Server Load Issue (Resolved) Critical

Affecting System - Germany Node - Charlie

  • 18/06/2014 17:34 - 25/06/2014 18:31
  • Last Updated 18/06/2014 17:36

We are currently looking into a high load issue on our Charlie node in the Germany DC. Services are coming back online now, but if you have any questions please contact us.

ColdFusion Service (Resolved) High

Affecting Server - [S06] Linux cPanel ~ CF 10 Server ~ Washington DC USA

  • 03/06/2014 10:03 - 18/06/2014 17:36
  • Last Updated 10/06/2014 08:25

UPDATE 5: The new CF server has been deployed and our team are just checking all settings for the new system.


UPDATE 4: Our DC team have started setting up another CF server to spread load and accounts over to. We hope to have this online soon and will offer some customers a transfer to this server. Further updates to come.


UPDATE 3: A service reboot is currently in progress due to high CPU load - our engineers are checking the cause now and will implement measures to stop this from happening again.


UPDATE 2: Reboot is now underway and we will be monitoring services once back online.


UPDATE 1: Further updates, including memory upgrades, will take place tonight at midnight UK/London time to improve stability further.


Issue:
We are currently seeing a number of ColdFusion service downtime issues which appear to be caused by threads failing to be created due to resource limits.

Planned Maintenance:
To correct the above issue we will be adding further memory resources to the S6 server and updating the JVM/heap settings to optimise the performance of the server.
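For readers curious what such a change looks like: ColdFusion 10 reads its JVM memory settings from the `jvm.config` file, and heap tuning of the kind described is done by editing the `java.args` line. The values below are purely illustrative, not the settings we applied:

```
# Illustrative jvm.config fragment (example values only, not the actual S6 settings)
# -Xms / -Xmx set the initial and maximum JVM heap sizes
# -XX:MaxPermSize sets the permanent generation size used for class metadata
java.args=-server -Xms2048m -Xmx4096m -XX:MaxPermSize=512m
```

A ColdFusion service restart is required for changes to `jvm.config` to take effect.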

Packet loss at Germany Data Centre (Resolved) Critical

Affecting System - Germany Data Centre

  • 31/05/2014 20:06
  • Last Updated 02/06/2014 08:30

UPDATE 5: The issue has been fully resolved and all network activity is back to normal.


UPDATE 4: We believe to have found the main cause of the issue and now monitoring the network lines at the DC.


UPDATE 3: We have run a complete reconfiguration of the network and the router hardware, and so far all networks have come back to a stable level. We are monitoring to ensure all services are back online.


UPDATE 2: A reboot of the server and network services was required to complete a full test of the servers network config. The server is coming back online now - we hope to have further updates soon.


UPDATE: The data centre has started speaking with network providers connecting the DC to the rest of the world (i.e. networks over to the UK etc) as the DC is currently unable to find an issue with the routers or networks within the centre.


We are currently seeing packet loss to our racks in the Germany data centre, which was isolated to a couple of IPs but now appears to affect entire subnets.

The engineers at the data centre are working as hard as they can to find the cause of the issue and we hope to have this resolved soon.

ColdFusion Service Reboot (Resolved) Medium

Affecting Server - [S06] Linux cPanel ~ CF 10 Server ~ Washington DC USA

  • 30/05/2014 00:00 - 03/06/2014 10:10
  • Last Updated 29/05/2014 11:34

UPDATE: Due to some delays in the software updates we will be performing a CF service restart tonight instead. Downtime should be minimal.


We will be running a CF service reboot midnight tonight (UK/London) to enable the latest versions of server software. We expect CFML downtime to be around 15-20min.

ColdFusion Service Reboot (Resolved) High

Affecting Server - [S06] Linux cPanel ~ CF 10 Server ~ Washington DC USA

  • 16/05/2014 14:57
  • Last Updated 16/05/2014 15:05

UPDATE: Services have now all come back online and the CFML services are running normally. If you are seeing any issues please do contact one of the support team via the client portal help desk. Thank you for your patience.


We are currently rebooting the Adobe CFML services due to an unexpected load issue. Websites should start coming back online shortly. Sorry for the downtime caused to your CFML websites.

US Network Issues (Resolved) Critical

Affecting System - US Data Centre

  • 21/04/2014 21:40 - 22/04/2014 10:05
  • Last Updated 21/04/2014 21:41

We are investigating issues at our US data centre.

Plesk Upgrades (Resolved) Medium
  • 01/04/2014 11:45 - 01/04/2014 20:58
  • Last Updated 12/04/2014 17:38

Update: We have completed the updates to the Plesk CP.


We are currently running updates on our new Windows servers' control panel, Plesk. We hope to have this completed shortly. Only the Plesk control panel is affected by this; all websites are running normally.

Service error 503 (Resolved) Critical

Affecting Server - [S06] Linux cPanel ~ CF 10 Server ~ Washington DC USA

  • 09/04/2014 11:46
  • Last Updated 09/04/2014 13:03

UPDATE: The server auto-restarted the affected services, but we will be investigating what caused this. If you need any assistance please contact the support team.


We are seeing 503 errors on the S06 server in the US. We are working on correcting this now and are sorry for the unexpected downtime.

US Network - Alpha Node, Media and Reseller S (Resolved) Critical

Affecting System - US Data Centre

  • 31/03/2014 21:15 - 31/03/2014 22:00
  • Last Updated 31/03/2014 22:00

The Alpha node, Media servers and Reseller servers became unresponsive due to heavy load through the network to these boxes. We have rebooted the servers and the team are now looking into this matter.

Server Node Failure - US (Resolved) Critical

Affecting Server - [S02] Linux cPanel/WHM ~ US1

  • 26/03/2014 08:51 - 26/03/2014 09:41
  • Last Updated 26/03/2014 09:41

UPDATE: Network and services issues have been resolved. If you have any questions please feel free to contact the accounts team.


We are currently working on an unexpected failure on our US network. We hope to have this resolved shortly.

UK1 Data Drive Issues (Resolved) Critical

Affecting Server - Linux cPanel ~ Legacy Platform

  • 19/03/2014 08:00
  • Last Updated 24/03/2014 09:50

UPDATE 5: We are pleased to confirm that as of 10:59am EST, UK1 has fully completed its restore, re-configuration and testing. If you feel anything on your site is not working as it should be, please contact our accounts team. We would also like to apologise for the long period of downtime; we are investigating this matter fully. Thank you for your patience.

UPDATE 4: We are almost finished restoring databases on the server and hope to have more detailed updates soon. We are sorry again for this downtime; during this period we have learnt a number of lessons and will ensure this does not happen again.

UPDATE 3: We wanted to update you on the current status of UK1. The backup process, in preparation for the ongoing server rebuild, is continuing. We will update this status page as the process progresses. Our apologies for the lengthy delay; our server engineers are working as fast as they can while guarding against potential data loss, which is the main cause of the delay.

UPDATE 2: We are continuing to work on the disk issue but we still do not have an ETA but hope to correct this as soon as possible.

UPDATE 1: We would like to update you on the status of UK1. Our engineers are still working on this server and will be restoring data due to problems found during the file system check. The filesystem is currently set to read-only (this will cause 403 error messages). There is currently no ETA for this outage to be resolved. We apologize for the inconvenience!

We are working on a drive issue after a failed drive scan showed errors on the drives within the UK1 server. We will post updates as soon as we can.
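
For background, a filesystem that has dropped to read-only after disk errors (as UK1's did above) can be confirmed from /proc/mounts on a Linux host; a minimal sketch, using an illustrative mount line rather than output from the affected server:

```python
def is_read_only(mounts_text, mount_point):
    """Parse /proc/mounts-style text; True if the mount carries the 'ro' option."""
    for line in mounts_text.splitlines():
        fields = line.split()
        if len(fields) >= 4 and fields[1] == mount_point:
            return "ro" in fields[3].split(",")
    return False

# Illustrative /proc/mounts excerpt (not from the affected server)
sample = "/dev/sda1 / ext4 ro,errors=remount-ro 0 0"
print(is_read_only(sample, "/"))  # True
```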

Charlie node UK unexpectedly Dow (Resolved) Critical

Affecting Server - VM/VPS (SolusVM) Group

  • 21/03/2014 08:16 - 21/03/2014 12:14
  • Last Updated 21/03/2014 08:17

We are currently working on an issue affecting the Charlie UK node.
We are sorry for the unexpected downtime.

Unexpected downtime (Resolved) Critical

Affecting Server - [S06] Linux cPanel ~ CF 10 Server ~ Washington DC USA

  • 28/02/2014 19:02
  • Last Updated 01/03/2014 10:26

UPDATE: Last night our team completed restoring a backup of some SQL databases that had become corrupted and caused issues with service loading. We are monitoring, but all services appear to be running normally.

We are currently working on a disk issue on our S6 server which caused unexpected downtime.

Network issues in CF Data Centre (Resolved) High

Affecting Server - [S06] Linux cPanel ~ CF 10 Server ~ Washington DC USA

  • 06/02/2014 13:54
  • Last Updated 06/02/2014 14:00

UPDATE: All services are back online and the network issue has been corrected.

We are currently seeing a network issue at the data centre in the US. We are working on correcting this and hope to have everything back online soon.

DDoS Network Attack (Resolved) High

Affecting Server - [S06] Linux cPanel ~ CF 10 Server ~ Washington DC USA

  • 28/01/2014 18:52
  • Last Updated 28/01/2014 19:15

UPDATE: We have our services back online and monitoring the network to ensure all issues have been resolved.
We are currently facing a DDoS attack on our US data centre network. We are doing everything we can to resolve this and have all websites back online soon.
Sorry for the inconvenience caused.

UK Charlie Server Cluster Issues (Resolved) High

Affecting System - UK Charlie Server Cluster

  • 20/01/2014 12:19
  • Last Updated 20/01/2014 13:02

UPDATE: The DC team have resolved the network issue and are now looking into what happened and how to prevent it from happening again. If you are still having issues with your service or need any further assistance, do let us know.


We are currently running a reboot of the UK Charlie Cluster due to an unexpected issue. Once we have more information we will publish details here on our network status page.
We are sorry for the downtime and are working to resolve this asap.

US Data Centre Issues (Resolved) Critical

Affecting System - US Data Centre Issues

  • 10/01/2014 13:57 - 10/01/2014 14:50
  • Last Updated 10/01/2014 14:50

UPDATE: We have corrected the issues with the server. Plans are being put in place, and all customers on these US media servers will be updated soon.

We are currently experiencing issues with our US data centre which we are working on. We hope to have updates shortly.

Adobe ColdFusion Service Issues (Resolved) Medium

Affecting Server - [S06] Linux cPanel ~ CF 10 Server ~ Washington DC USA

  • 02/01/2014 14:30 - 08/01/2014 16:44
  • Last Updated 07/01/2014 10:04

UPDATE/07.JAN.2014

- All services have been stable for the last 12 hours and we are continuing to monitor the server.

UPDATE/06.JAN.2014

- We are continuing to see 1min downtimes on this server due to the ACF service stalling. We have our team working on this with a CF consultant.

UPDATE/03.JAN.2014

- 3PM: Following further downtime logged by our systems, we have made additional adjustments to the CF memory settings and are now monitoring the results. Further actions are planned if required.

- 10AM: We have made some changes to our CF services and are now monitoring the server to ensure no further issues appear. We will continue to update this page once we have further information.


We are currently facing issues with Adobe ColdFusion 10 on the S6 server in the US. We have our team working on this now and hope to have all services back online shortly. Please note that all other services such as PHP/cPanel/MySQL are working normally; this issue is isolated to the Adobe ColdFusion service.

ColdFusion 10 Update Hotfix APSB13-27 (Resolved) High

Affecting Server - [S06] Linux cPanel ~ CF 10 Server ~ Washington DC USA

  • 20/11/2013 23:00 - 25/11/2013 14:58
  • Last Updated 19/11/2013 09:38

Adobe has released security hotfix APSB13-27, an important update which we have scheduled to install tomorrow, Wednesday night (Nov 20th), at 11PM UK time.

Date: Wednesday night (Nov 20th)
Time: 11pm GMT (3pm PST)
Downtime:

Thank you for your understanding.

Network IP Routing Issue (Resolved) High

Affecting Server - [S05] Linux cPanel ~ Railo Server ~ Kansas City USA

  • 12/11/2013 09:04 - 12/11/2013 09:55
  • Last Updated 12/11/2013 09:55

UPDATE We have resolved the IP routing issue with the data centre and all services are running normally.

We are currently investigating a network issue on IP routing on our S5 server. We hope to have further updates shortly.

Apache Recompile (Resolved) Medium

Affecting Server - [S06] Linux cPanel ~ CF 10 Server ~ Washington DC USA

  • 31/10/2013 23:59 - 02/11/2013 11:33
  • Last Updated 30/10/2013 17:18

We have scheduled an Apache recompile for updates on the S6 server.

Expected Downtime: <10min
Date/Time: Thursday 31st October 2013 at 23:59 (UK/London timezone)

If you require any assistance or have any questions please do contact the accounts team who will be more than happy to assist you.

S6 Data Centre Network Issues (Resolved) Medium

Affecting Server - [S06] Linux cPanel ~ CF 10 Server ~ Washington DC USA

  • 03/10/2013 14:20 - 04/10/2013 19:17
  • Last Updated 03/10/2013 16:32

UPDATE: Services are coming back online and we are now running scans on the servers to ensure everything is running normally.

We are currently having network issues at the US data centre controlling our CF10 services and we hope to have this resolved shortly.

Thank you and sorry for this unexpected downtime.

DNS Cluster Update (Resolved) High

Affecting System - DNS 1 Node Change

  • 20/08/2013 23:55
  • Last Updated 21/08/2013 12:29

UPDATE: All services appear to be 100% stable and no reports of issues have come in. If you do have any issues please contact the Web Hosting support department.
-------------
UPDATE:
We have made the change internally and are now monitoring all services to ensure the DNS update takes effect.
-------------
Tonight (UK/London time) we will be migrating the DNS1 cluster from its current node to a new node. The cluster's IP will change, and this may cause some downtime for websites until all servers' DNS zones are updated. No changes need to be made to your domain/DNS as this is an internal change.

Affected Services
Customers using these nameservers will be affected by this change:
dns1.dnshostnetwork.com
dns2.dnshostnetwork.com
dns3.dnshostnetwork.com

If you are using A records to point your domain name to our services, you will not be affected by this change.
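
For customers who want to confirm where a domain currently points during a change like this, here is a minimal sketch using Python's standard library (the domain and IP below are placeholders, not our infrastructure):

```python
import socket

def resolves_to(domain, expected_ip):
    """True if the domain currently resolves (IPv4) to expected_ip."""
    try:
        return socket.gethostbyname(domain) == expected_ip
    except socket.gaierror:
        return False  # not resolvable (yet), e.g. mid-propagation

# Placeholder example; substitute your own domain and server IP
print(resolves_to("localhost", "127.0.0.1"))
```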

Sorry for any downtime caused, but we hope this will increase the overall performance of the DNS cluster. Please note that no downtime may occur at all; this update is for your records.

Thank you,

Server Migration (Resolved) Medium

Affecting Server - [S04] Linux cPanel ~ Railo Server ~ Nuremberg, Germany

  • 23/08/2013 23:30 - 29/08/2013 23:26
  • Last Updated 21/08/2013 12:27

We will be migrating the server 178.63.146.5 (s4.dnshostnetwork.com) to a new server. All customers on this server have been emailed, so please check your inbox for the email from us. Make sure to check your SPAM/JUNK folder in case you have any filtering on your email client.

AlphaUS2, BravoUS2 and US services Unexpected (Resolved) Critical

Affecting Server - VM/VPS (SolusVM) Group

  • 06/08/2013 19:40 - 07/08/2013 08:47
  • Last Updated 07/08/2013 08:47

UPDATE: In the early hours of this morning (UK Time) the network was repaired and all services came fully back online. The data centre are looking further into the cause and how to prevent these issues from happening again. Thank you for your patience.

UPDATE: The network issue we are suffering is external, caused by a fibre cut. While we don't currently have an ETA, the provider has found the problem and is working on fixing it as fast as possible.

ISSUE: We are currently investigating downtime on the nodes: AlphaUS2, BravoUS2 and US hosting services

Unexpected Downtime - UK Nodes: BravoUK2 and (Resolved) Critical

Affecting Server - VM/VPS (SolusVM) Group

  • 01/08/2013 12:04
  • Last Updated 01/08/2013 12:31

UPDATE 2: The servers are back online after the transit provider re-ran their filters.

UPDATE:
It appears there has been a filter issue on the 78.157.192.0/19 subnet with our transit providers. We have requested that they run manual filter updates asap and expect this issue to be resolved shortly. We are sorry again for the downtime caused.

ISSUE:
We are currently investigating 2 server nodes that have become unresponsive to pings. The nodes affected are BravoUK2 and CharlieUK2.
We will post further updates as soon as we can.

Affected Services

  • Hosted VPSs
  • UK Media Server (UK1)

Node Drive Issue - Node: Delta Germany (Resolved) Critical

Affecting Server - VM/VPS (SolusVM) Group

  • 08/07/2013 16:39 - 10/07/2013 09:32
  • Last Updated 09/07/2013 17:47

UPDATE 5: We are now at 80% completed and should have the final data restored in the next 1-2 hours.


UPDATE 4: We have restored 40% of the data and are working on the remaining 60% now. We hope to have most of the offline accounts back up over the next few hours.


UPDATE 3: Our team and a DC engineer have confirmed that both drives within the server's RAID array had become faulty. We are now replacing both drives and will restore the data from backups. The available backups are dated 03/July/2013.


UPDATE 2: The migration of data off the server failed due to the hard drive issues. We are now at the data centre replacing the hardware.


UPDATE: We are now migrating hosting services to a new node and will update clients shortly. Some VPS services are running normally and those clients will be contacted shortly to be migrated.


We are currently working on a drive issue on the Delta node. Due to the problems found, we will be looking at either migrating instances to a different node or unracking the server and replacing the drives with upgraded hardware. We will post further updates as soon as we can.

Network Issues - Node: Bravo (Resolved) Critical

Affecting Server - VM/VPS (SolusVM) Group

  • 26/06/2013 08:39 - 26/06/2013 22:14
  • Last Updated 26/06/2013 09:52

UPDATE 2 09:47 - 26/June/2013: We now have access to the node and are checking the server for server-side network issues. We hope to have everything checked and repaired within the next 90 minutes.

UPDATE 1 09:17 - 26/June/2013: Network engineers are now at the data centre working on the networks connected to the BRAVO node. We are sorry for the downtime, but we are doing everything we can to get the server back online asap.

ORIGINAL: We are currently investigating an issue with the networks in our Germany data centre connecting to the BRAVO node. We hope to have an update soon and to resolve these issues.

Node connection response issue - Node: Delta (Resolved) High

Affecting Server - VM/VPS (SolusVM) Group

  • 25/06/2013 15:00 - 26/06/2013 08:41
  • Last Updated 25/06/2013 21:43

UPDATE 21:42 | 25/June/2013: All services have now come back online and have been monitored for the past hour.

UPDATE 16:29 | 25/June/2013: We now have the KVM connected at the DC and are working on resolving the issue. If we are unable to resolve the network issues, a reboot will be planned for tonight (UK time).

ORIGINAL POST: We are currently looking into a connection issue with the Delta node in Germany. The server is online but failing to respond to external VPS control panel (SolusVM) commands. We are connecting a KVM and checking this further. We hope to avoid a server reboot so that uptime of the VPS instances is maintained.

Server Reboot (Resolved) Medium

Affecting Server - [S04] Linux cPanel ~ Railo Server ~ Nuremberg, Germany

  • 19/06/2013 13:51 - 21/06/2013 12:18
  • Last Updated 19/06/2013 16:07

Update 1 - 16:01 19/June/2013: We have adjusted the Railo/Tomcat memory settings to help with performance and CPU issues seen. We will continue to monitor and update this status update.


We had to run a quick server reboot on the S4 server (178.63.146.5) to resolve a CPU issue which now appears to be corrected. We will continue to monitor and make further adjustments if required. Websites should be coming back online within the next 5-10min. Thank you and sorry for the downtime caused.

PHP Extensions + Applied Reboot (Resolved) Medium

Affecting Server - [S06] Linux cPanel ~ CF 10 Server ~ Washington DC USA

  • 21/05/2013 23:45 - 05/06/2013 14:42
  • Last Updated 21/05/2013 12:41

Tonight we will be running the install of some PHP extensions and an Apache recompile + server reboot will be required. We expect a maximum downtime of <30min.
We are sorry for any inconvenience caused.

Migration and Upgrade (Resolved) Medium

Affecting Server - [S06] Linux cPanel ~ CF 10 Server ~ Washington DC USA

  • 09/05/2013 09:58 - 20/05/2013 16:44
  • Last Updated 16/05/2013 10:20

LATEST STATUS (16/May/2013 - 10:15AM): The server has been running well over the past 24 hours, and only small adjustments have been made to CF settings without issues. We will soon mark this migration complete, and the old server will be scheduled for shutdown in a week or so.

PREVIOUS 1 UPDATE (15/May/2013 - 9:09AM): The migration has been fully completed and we are seeing sites run really well on CF10.
PREVIOUS 2 UPDATE (14/May/2013 - 17:29): We have 100% completed the migration and are now testing the CF services further. If you find any issues with your website please contact the ColdFusion support department - Direct URL: https://www.hostmedia.co.uk/client/submitticket.php?step=2&deptid=9

Note: If you are using our DNS/nameservers and your website is showing as offline/down, this is because your account has not yet been migrated. We are working on migrating all accounts as soon as possible, but this may still take some time. Please see the percentage above for details.


New Server Features: ColdFusion 10 Enterprise, CFManager (New version coming soon), CloudLinux (Based on CentOS 6 64Bit), Clustered DNS Network and Latest cPanel/WHM

New servers IP: 162.208.0.210

ColdFusion 10 Update 10 (Resolved) Low

Affecting Server - [S06] Linux cPanel ~ CF 10 Server ~ Washington DC USA

  • 15/05/2013 23:59 - 16/05/2013 09:50
  • Last Updated 16/05/2013 09:49

UPDATE: Upgrade completed and service rebooted.

A new ColdFusion 10 update was released late yesterday afternoon which we have scheduled to be installed tonight (15th of May) at 11:59PM. A ColdFusion service reboot will be required which will cause a small amount of downtime. Expected downtime <5min.

ColdFusion Update Information:

ColdFusion 10 Update 10 Tuesday, 14 May 2013
Update Level: 10
Update Type: Security
Update Description: The ColdFusion 10 Update 10 includes important security fixes.

Network Abuse (Resolved) Critical

Affecting Server - VM/VPS (SolusVM) Group

  • 08/05/2013 21:48 - 12/05/2013 14:35
  • Last Updated 10/05/2013 16:58

 

CURRENT STATUS: United States (Kansas City) - ALPHAUS2 IPs are still null-routed by our DC's ISP, but we are working on this.

UPDATE 3 - 09/May/2013 | 17:31:
We have started to see another attack on the same instances after the first wave was successfully re-routed. We are working on this now, and the DC is monitoring our routers for the BRAVO node.


UPDATE 2 - 09/May/2013 | 09:15: The attack on some of our clients' servers has ended and we are reporting a normal service level. The affected clients have been contacted; only those clients were affected by this attack, and all other services ran normally throughout. Thank you.


UPDATE 1 - 08/May/2013 | 22:49: We have null-routed the IPs. The attack is still continuing, but the network appears to be handling the traffic fine now. We and the data centres will continue to monitor.


We are currently seeing a DDoS attack on some of our servers in Germany and the US. We have null-routed the attacking IPs, but some attack traffic is still reaching the network.
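
Null-routing drops traffic to the attacked addresses at the edge of the network; the membership check a router effectively performs can be sketched as follows (the prefixes below are documentation ranges, not our real allocations):

```python
import ipaddress

# Illustrative blackholed prefixes (documentation ranges, not real allocations)
BLACKHOLED = [ipaddress.ip_network(p) for p in ("203.0.113.0/28", "198.51.100.77/32")]

def should_drop(src_ip):
    """Mimic a router's null-route check: drop if the source falls in any blackholed prefix."""
    addr = ipaddress.ip_address(src_ip)
    return any(addr in net for net in BLACKHOLED)

print(should_drop("203.0.113.5"))  # True: inside the /28
print(should_drop("192.0.2.1"))    # False: not blackholed
```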

Node Migration - 9th of May 2013 (Midnight) (Resolved) Medium
  • 09/05/2013 23:59 - 10/05/2013 09:38
  • Last Updated 09/05/2013 22:40

UPDATE 1: We have started prepping for and migrating the S9 server.


We have scheduled a node migration of our S9 server (current IP: 206.225.85.66) from the Phoenix (US) data centre to our new Kansas City (US) data centre. The server's new IP will be: 173.208.220.211

Server Load Issue (Resolved) Critical

Affecting Server - [S06] Linux cPanel ~ CF 10 Server ~ Washington DC USA

  • 30/04/2013 09:10 - 30/04/2013 12:04
  • Last Updated 30/04/2013 12:04

Update 1: The server is performing normally and management will be sending an email to all CF customers shortly.


We are currently working on a load issue with our S6 CF9 server in the US. Our team are bringing the server back online, and the abuse team is ready to check the accounts that were causing the CPU load issues.
We are sorry for the downtime but hope to have this resolved asap.

Reboot for performance changes (Resolved) High

Affecting Server - [S06] Linux cPanel ~ CF 10 Server ~ Washington DC USA

  • 29/04/2013 10:35 - 29/04/2013 10:46
  • Last Updated 29/04/2013 10:16

We have a quick reboot planned in the next 30-40 minutes for the CF Linux server to increase its general performance. We are sorry for the downtime caused, but it should last no longer than 10 minutes.

Server Migration (Resolved) Low

Affecting Server - [S02] Linux cPanel/WHM ~ US1

  • 18/04/2013 23:59 - 20/04/2013 10:00
  • Last Updated 21/04/2013 12:54

UPDATE 4 - 12:53 / 21/April/2013 : We have now completed the migrations and all sites are now responding well and running on the new servers. If you have any issues please do contact us and we will be more than happy to help.


UPDATE 3 - 22:52 / 19/April/2013 : We are still migrating over sites, due to the large amount of data it is taking some time but we will continue migrating sites and update all customers once completed. Thank you for bearing with us.


UPDATE 2 - 16:22 / 19/April/2013 : The migration continues but it is going well, we hope to have all sites migrated by the end of the day. Just a quick reminder if you are using our DNS/Nameservers (dns1.dnshostnetwork.com / dns2.dnshostnetwork.com / dns3.dnshostnetwork.com) you will not need to do anything. As soon as your site is migrated via cPanel the DNS will automatically point your site to the new servers. If you have any issues at all please do contact us via a support ticket to the WEB HOSTING department.


UPDATE 1 - 10:09 / 19/April/2013 : We have started the migrations after some setting changes and updates to the new server. Once the migration has been completed we will send all customers the new IP address again and to supply a general update. The new IP is: 173.208.236.229


Server migration planned to new Kansas City servers. New server IP has been emailed to all clients. Please contact the migration team if you have any questions.

Scheduled Reboot (Resolved) Medium

Affecting Server - [S06] Linux cPanel ~ CF 10 Server ~ Washington DC USA

  • 08/04/2013 23:55 - 12/04/2013 10:18
  • Last Updated 08/04/2013 09:57

We will be applying ColdFusion 9 security updates midnight tonight with a 10/15min reboot. If you have any questions please do let us know.

Drive read-only issue (Resolved) Medium

Affecting Server - [S06] Linux cPanel ~ CF 10 Server ~ Washington DC USA

  • 02/04/2013 10:07 - 04/04/2013 09:47
  • Last Updated 02/04/2013 21:51

UPDATE 5 21:50: Due to the issues with the CF services, we are now restoring the CFIDE and settings from a backup. We hope to have this resolved soon.


UPDATE 4 16:42: We are seeing some issues with the ColdFusion service after the updates earlier today. Our Adobe qualified partners are checking this now to find the cause. The server has been heavily upgraded and is running on newer systems, which will improve the general service level.


UPDATE 3 15:52: We will be performing short restarts in the next hour to ensure changes fully take effect on the server and that the memory increases are working correctly. We are sorry for any further downtime, but after these reboots the server will be classed as stable. We will continue to monitor the server to ensure any problems are worked on straight away. We are again very sorry for the downtime during this morning and hope the new improvements will allow your sites to run faster than ever before.


UPDATE 2 12:44: All services are running normally; we are checking the server for issues now that it is live. We will keep this status update open until we have run our reports.


UPDATE 12:26: We have completed one scan and a reboot, which did not correct all of the issues, and a second scan has already started. A number of issues were fixed by the first scan, and we hope this second scan will correct the remaining issues so the server can boot correctly.


UPDATE 1 11:19: We are now running an fsck (disk check) to work out the issues on the server and why the drives won't boot correctly. We hope to have this completed soon and the server back online. Again, we are very sorry for the downtime and hope to have everything back online soon.


We are currently having an issue on our CF+cPanel server due to the drives becoming read-only and causing issues at boot. We are working to resolve this as quickly as possible. We are sorry for the downtime; management has planned hardware and general service improvements for this service.
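
The repeated scans described in the updates above reflect how fsck reports its results: the exit status is a bitmask, so a first pass can correct some errors while leaving others for a second pass. A small decoder sketch, based on the documented fsck(8) codes:

```python
# fsck exit-status bits per fsck(8)
FSCK_BITS = {1: "errors corrected", 2: "reboot required",
             4: "errors left uncorrected", 8: "operational error",
             16: "usage error", 32: "cancelled", 128: "shared-library error"}

def decode_fsck(code):
    """Return the human-readable flags set in an fsck exit code."""
    return [msg for bit, msg in FSCK_BITS.items() if code & bit]

print(decode_fsck(0))  # [] -> filesystem clean
print(decode_fsck(5))  # corrected some errors, others remain -> a second pass is needed
```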

Email Relay Issue (Resolved) High

Affecting System - Gosport (Hampshire), UK Data Centre

  • 27/03/2013 12:19 - 02/04/2013 10:09
  • Last Updated 28/03/2013 22:49

UPDATE 3 28/March/2013 22:48: We are currently delayed in the migration due to software installs. We will be working through the night to get the service up for tomorrow. We will keep posting updates here, but if you have any questions feel free to contact the sales team.


UPDATE 2 28/March/2013 16:33: We are now working on getting the server racked; our team at the data centre are putting all the parts together. Once the server is racked, we will get working on this. Sorry for the delay.



UPDATE 1 28/March/2013 10:50: Our new server has arrived and is currently sitting in the foyer along with a few others. We are just waiting for the disks, which are showing as out for delivery with the courier. Once everything has arrived at the DC we will rack the server and get everything set up ready for the migration.



ISSUE: We are currently having issues with the email relay attached to all servers within the data centre. We have stepped up plans to migrate our shared and reseller clients, who are the last to be migrated from this data centre, to our new servers in Coventry.

All customers will be sent an email later today confirming this. We will supply the IP addresses for the new server tomorrow once the box has been connected and the IP subnet added to our router network.

The new server will be an Intel Xeon quad-core E3-1230 v2 (3.30GHz, Ivy Bridge) with the very best hardware components.

We will update this status page as soon as we get any updates.

Thank you and sorry for the issues due to this SMTP email relay.

Coventry Subnet Issue (Resolved) Critical

Affecting System - Coventry Subnet Issue

  • 28/03/2013 19:24 - 28/03/2013 22:47
  • Last Updated 28/03/2013 19:27

We are currently having an issue with the subnet 78.110.170.66 - 78.110.170.94. Websites and virtual instances will appear down. Our tech team is investigating and working to resolve this.

Disk Increase Reboot (Resolved) Medium

Affecting Server - [S06] Linux cPanel ~ CF 10 Server ~ Washington DC USA

  • 21/03/2013 13:00 - 21/03/2013 13:14
  • Last Updated 21/03/2013 13:00

We will be running a quick reboot of our CF9 cPanel server to apply extra disk storage to that server. Expected downtime: <10min

Network Failure (Resolved) Critical

Affecting System - Walla Walla, USA Data Centre

  • 20/03/2013 11:15
  • Last Updated 20/03/2013 11:27

UPDATE 1 (11:24AM London time): Full network access has been restored; apart from the loss of connectivity, services were not otherwise affected by the outage. The DC team are investigating further. Sorry for the downtime caused.

============

We are currently experiencing a network failure in the Walla Walla, USA Data Centre. This is being investigated and we hope to have further updates for you soon.

Sorry for the downtime caused and we hope to have this resolved asap.

Server Memory Update Reboot (Rescheduled) (Resolved) High

Affecting Server - [S06] Linux cPanel ~ CF 10 Server ~ Washington DC USA

  • 07/03/2013 23:59 - 08/03/2013 10:11
  • Last Updated 07/03/2013 10:32

This reboot has been rescheduled for tonight (7/March/2013):
Due to updates to our server instance, we need to run a reboot to apply the performance features. This has been scheduled for midnight tonight. The expected downtime is less than 15 minutes.

DNS Cluster Issue (Resolved) High

Affecting Server - Linux cPanel ~ Legacy Platform

  • 06/03/2013 09:50 - 08/03/2013 17:26
  • Last Updated 06/03/2013 14:37

UPDATE 1: The DNS cluster has been repaired and services are back online but we are still working on some fragments of the cluster. Hosting services should start resolving shortly and access to FTP/mail/web should be back soon. 

---------------------------------------------------------------

ISSUE: We are currently investigating an issue with the DNS cluster on our shared hosting. Customers using these DNS servers may be affected:

dns1.dnshosted.co.uk   ['208.43.81.114']
dns2.dnshosted.co.uk   ['50.22.35.226']
dns3.dnshosted.co.uk   ['174.37.183.108']

We are working on this now and hope to have it resolved shortly.

Scheduled Node Reboot - Coventry Bravo Node (Resolved) Low

Affecting Server - VM/VPS (SolusVM) Group

  • 06/03/2013 00:20 - 06/03/2013 10:15
  • Last Updated 04/03/2013 13:10

We will be running some updates on the Coventry Bravo node to ensure we are able to provide additional server features to you.

To ensure these features are applied correctly, we need to run a quick reboot of the server. This has been scheduled for Wednesday morning at 00:20 (6/March/2013). The expected downtime of the node during the reboot is under 15 minutes, and our team will be checking to ensure everything reboots correctly.

If you have any issues please do contact our support team and they will be happy to help you.

Server unresponsive (Resolved) Critical
  • 02/03/2013 11:20 - 02/03/2013 18:21
  • Last Updated 02/03/2013 11:24

Our DE1 Windows server (Germany) has become unresponsive, but our engineer is at the data centre now working to determine whether the issue is hardware- or software-related. We will post updates as soon as we can.


We are sorry for the downtime caused.

Unexpected IP network failure (Resolved) Critical
  • 29/01/2013 15:41
  • Last Updated 29/01/2013 16:51

UPDATE 2 16:28 - It appears the issue was due to the recent upgrade to our MPLS (Multiprotocol Label Switching) network. When the network restarted, a misconfiguration caused the range to drop from the router. Our internal monitoring picked up the server ping failures, and the team at the data centre acted to repair this straight away.

We are very sorry for the downtime caused (<40min) but the new networks have been re-tested and everything appears to be running faster than normal on the new MPLS network. If you have any questions please do let us know via a support ticket.

UPDATE 1 16:18 - We have resolved the network issue; it appears the IP range had been misapplied on the routers connected to the servers in our rack. We are speaking with the DC to get more information.

REPORTED 15:41 - We are currently having some issues within our Coventry Data Centre. Our team are investigating and will post an update as soon as possible.

Server Reboot (Resolved) Medium
  • 15/01/2013 23:59 - 29/01/2013 16:35
  • Last Updated 15/01/2013 16:42

We will be performing a quick reboot of our DE1 Windows Server to enable some newly installed software to be activated. The downtime will be small and we only expect a downtime of <15min. Sorry for any inconvenience caused.

ColdFusion 9 Hotfixes and Security Updates (Resolved) High

Affecting Server - [S06] Linux cPanel ~ CF 10 Server ~ Washington DC USA

  • 04/01/2013 23:59 - 07/01/2013 10:31
  • Last Updated 04/01/2013 11:03

Server: cPanel CF Server | Midnight UK/London Time | 4PM PST

Adobe has released hotfixes for ColdFusion 9, and we will be applying these from midnight tonight after successful tests on our development servers. This may cause downtime while the services reboot after the hotfixes. We hope to keep downtime to a minimum.

Thank you,

ColdFusion 9 Hotfixes and Security Updates (Resolved) High
  • 04/01/2013 23:59 - 07/01/2013 10:31
  • Last Updated 04/01/2013 11:03

Server: Kloxo CF Server | Midnight UK/London Time | 4PM PST

Adobe has released hotfixes for ColdFusion 9, and we will be applying these from midnight tonight after successful tests on our development servers. This may cause downtime while the services reboot after the hotfixes. We hope to keep downtime to a minimum.

Thank you,

CF Service Reboot (Resolved) High

Affecting Server - [S06] Linux cPanel ~ CF 10 Server ~ Washington DC USA

  • 02/01/2013 10:24
  • Last Updated 02/01/2013 10:26

We are currently rebooting the CF services on this server. This may cause your websites to become slow or stop working during the reboot of the services. Sorry for any downtime caused.

Reboot (Resolved) High

Affecting Server - [S02] Linux cPanel/WHM ~ US1

  • 14/12/2012 13:16 - 14/12/2012 13:29
  • Last Updated 14/12/2012 13:17

We are currently performing a server reboot of US1 to bring upgrades and system changes into effect.

Sorry for the downtime.

Scheduled Reboot (Resolved) Low

Affecting Server - [S06] Linux cPanel ~ CF 10 Server ~ Washington DC USA

  • 13/11/2012 23:59 - 14/11/2012 09:52
  • Last Updated 13/11/2012 14:44

We have planned a scheduled reboot to take system and server upgrades into effect. The expected downtime is <10min which will begin at midnight UK, London time tonight (13/Nov/2012 23:59).

Server Reboot (Resolved) Critical
  • 27/10/2012 12:36 - 27/10/2012 12:39
  • Last Updated 27/10/2012 12:40

Hi,

We had to run a reboot of the Windows server to resolve a CPU error. We are looking into this further now.
Thank you,

Server migration and Railo setup (Resolved) Critical
  • 15/10/2012 10:28
  • Last Updated 26/10/2012 09:58

UPDATE 14 (25/Oct) 09:34 UK Time: We have completed our migration and the DNS settings have been changed. If you are using A records for your domains please use this new IP: 5.9.108.8



UPDATE 13 (21/Oct) 16:44 UK Time: We are over 50% done on the migration and working as fast as we can to get the remaining accounts online. We do have a second set of DNS name servers for those whose accounts have been migrated and who wish to swap their sites over now. Once we have completed the migration we will change the current DNS.



UPDATE 12 (20/Oct) 13:21 UK Time: We are a third of the way through our migration of the Windows accounts. We hope to have DNS changes ready for some of the customers soon.



UPDATE 11 17:30 UK Time: Just a quick update to confirm we are continuing with the restore and hope to have more details soon. There are very large files which we are restoring and it can take time. Thank you.



UPDATE 11 14:10 UK Time: We have completed the install and configuration, which has now allowed us to start restoring accounts onto the server. We hope this will be a fairly fast process and allow us to change over the DNS settings in the next few hours.



UPDATE 10 09:40 UK Time: We are now installing Plesk addons to provide all the functions we require. Once completed we will be able to confirm the migration status and timings for sites to go live.



UPDATE 9 20:10 UK Time: We are still migrating data over to the new servers; we have had to throttle the connection to make sure the old server does not crash. We expect another 4-6 hours until the data is over and accounts start to be restored. Sorry for the delay.



UPDATE 8 12:22 UK Time: We can confirm the migration is in progress and we are waiting for the file copy to complete.



UPDATE 7 12:00 UK Time: We have been able to do a mass transfer after freeing up some space. We are restoring all accounts and enabling Railo on the domains of customers that have requested this service. No DNS change will be required and we hope to have a more detailed update soon with timings.



UPDATE 6 09:22 UK Time: Due to resource issues on our old servers we are having to use a manual method of migration. This does take longer, but we feel it is the only safe way to do the migration. We are currently moving clients that have Railo CFML services enabled and we will then migrate all other accounts. If you have a support ticket open regarding Railo + Windows, one of our sales team will get back to you with access details and the new name servers (DNS) that you will need to use.



UPDATE 5 15:39 UK Time: We are in the middle of migrating accounts, it is a slow process but we are moving forward to having all accounts restored on the new Windows server.



UPDATE 5 09:00 UK Time: Today we are migrating all accounts to our new Windows server and will be enabling Railo on the sites. Please note that due to issues with Plesk and Railo (via Tomcat), PHP and Railo are unable to run in the same application pool. We will continue to look into this, but we are going to begin the migration and changeover to allow sites to get back online for CFML pages.



UPDATE 4 12:01 UK Time: Railo has been installed, but some of our tests are not fully integrating the IIS to Tomcat connectors. We are continuing to work on this and hope to have further updates soon.



UPDATE 3 10:31 UK Time: We are currently testing Railo services on the Windows server to ensure the best performance and stability. We hope to have live sites being moved soon.



UPDATE 2 17:26 UK Time: We have completed the install of Railo and are now configuring the connectors and general IIS settings to work as we want them. We hope to have further updates soon.



UPDATE 1: We have the server racked and everything connected. We are now installing Railo onto the server and connecting it to IIS. We expect this to be a much easier process with a clean new server. Once done we will migrate all accounts to this server and start the DNS process. We will email all Windows customers to inform them of the changeover timings.



We have started this server network report to keep customers updated on our work with regards to the Windows servers.

What's our team working on:
  • Setup and deployment of a brand new server in Germany
    We have just brought in one of our high-end performance servers from our suppliers in Germany to be racked into our partnered data centre.
  • Further Railo testing on local development servers in our offices
    We are continuing to work on testing Railo even further to ensure our new setup will be working without any issues.
  • Migration preparation
We will be migrating all accounts from the old Windows server here in the UK to our Germany server. We will also be changing DNS records so you will not need to do anything to keep your websites live. Please note, we will not shut down the UK server until everyone is happy that their websites are running fine on the Germany server.
For those who are interested in server specifications here is the basic information of our new Windows server compared to the current one here in the UK:

Old UK Server vs New Germany Server
CPU: Intel Dual Core 2.6 | Intel i7 Quadcore
RAM: 12GB RAM | 32GB RAM
OS: Windows 2008 Web 64Bit | Windows 2008 Standard 64Bit

As you can see, the specification of our new Germany server is a significant improvement.

This was something we have planned to put in motion in the next year but with the latest CFML events we have decided this is the correct route to go down to ensure the very best hosting performance and stability.

We will keep this page updated throughout the process so you can keep track on what is going on.

Thank you.

Server reboot (Resolved) Medium

Affecting Server - [S06] Linux cPanel ~ CF 10 Server ~ Washington DC USA

  • 25/10/2012 23:59 - 13/11/2012 14:49
  • Last Updated 25/10/2012 10:04

We have a scheduled reboot planned tonight of our cPanel US ColdFusion 9 servers to apply updates to the web services. We only expect a downtime of 5 minutes.

cf2 - ColdFusion US Server Network Issues (Resolved) Critical

Affecting Server - [S06] Linux cPanel ~ CF 10 Server ~ Washington DC USA

  • 22/10/2012 11:45 - 22/10/2012 11:48
  • Last Updated 22/10/2012 12:03

UPDATE #1 22/10/2012 11:48:00: We have resolved the network issue and all services are online. We also ran a reboot of the server to ensure all networks were picked up correctly. Any questions or issues please let us know.



We are currently investigating an unknown network issue on our US ColdFusion server ID:CF2 (cPanel). We will update this status page as soon as we have more information.

DDOS Attack on entire racked network (Resolved) Critical

Affecting Server - [S01] Linux cPanel ~ Coventry UK

  • 24/09/2012 23:30 - 27/09/2012 13:02
  • Last Updated 25/09/2012 10:19

UPDATE 1: 25/09/2012 10:00AM
We are still working on bringing all the sites and servers back online; we are very sorry for this downtime. As soon as the servers are back online we will be running a full investigation into this DDOS attack.

Our UK media/reseller servers have been down due to heavy DDOS attacks which have affected an entire rack in the UK data centre. We are currently working on bringing the network online and blocking the attacking IPs. We are sorry to say our hardware DDOS protection failed under the heavy load, and we are doing everything we can to prevent this from happening again.

UK Cloud Rack (Resolved) Critical

Affecting System - UK Cloud Rack

  • 19/09/2012 09:30
  • Last Updated 19/09/2012 10:43

UPDATE 1 - 19/Sept/2012 10:30
We have fully resolved all issues in the rack and all services are back online. We are monitoring the power levels to ensure everything runs as normal.

We are currently looking into the power failure on our UK cloud server racks which caused downtime in the early hours of this morning. We have replaced the rack power unit and are working on bringing the networking back to the servers and VMs. We are sorry for the downtime and are working to have this resolved as soon as possible. Thank you.

Windows Server Boot-up Failure (Resolved) Critical
  • 18/09/2012 14:45 - 19/09/2012 10:06
  • Last Updated 18/09/2012 14:49

We are currently having issues after a reboot on our Windows server in the UK. Our team are working on this and should have an update shortly. We are sorry for this downtime and the issues caused. Thank you.

Railo Service Issue in US Data Centre (Resolved) Low

Affecting Server - [S08] Linux LxAdmin ~ Railo Server ~ Kansas City 1 USA

  • 28/08/2012 14:32 - 01/10/2012 09:22
  • Last Updated 14/09/2012 13:10

UPDATE 7:
(14/09/2012 13:00) The past 3 days have seen zero downtime on the Railo service and we believe this issue is completely resolved, but we are keeping the monitoring systems in place to make sure the issue does not come back. We will keep monitoring for the rest of this month and then close this network issue if no more problems occur. Thank you.



UPDATE 6:
(11/09/2012 17:00) We have been monitoring a new setting on our Railo servers that allows Java and the Tomcat services to use much more of the server's memory without the overload effect we were seeing. This appears to have corrected the issues, but we are keeping this system issue open while we continue to monitor. If you have any problems with your Railo service please contact the support team. Thank you.



UPDATE 5:
(05/09/2012 12:25) We have found a possible cause of the memory issues and are now working on it. Some downtime may occur, but once complete we are hopeful that no further downtime will occur and all Railo services will be completely stable. We are sorry for the downtime caused during this. Thank you.



UPDATE 4:
(05/09/2012 10:23) We are looking into a heap size issue with the 'PS Survivor Space', which we believe is the cause of the issues on the server. We will post an update as soon as we can. We are sorry for the downtime caused during this.
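For readers curious what a 'PS Survivor Space' issue involves: the JVM's Parallel Scavenge collector divides the young generation into an eden space and two survivor spaces, and an undersized survivor space can force objects to be promoted prematurely and create memory pressure. A sketch of the kind of JVM options involved in tuning this (the values here are purely illustrative, not our production settings):

```
# Illustrative Tomcat/Railo JVM options (example values only)
-Xms1024m -Xmx2048m     # overall heap bounds
-XX:NewSize=512m        # young generation size
-XX:SurvivorRatio=6     # eden : survivor-space sizing
-XX:+UseParallelGC      # the collector that owns the "PS Survivor Space" pool
```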



UPDATE 3:
(30/08/2012 14:10) We are continuing to see issues with the memory after adjustments were made to the server. Our team are working on a new solution to try and resolve this issue fully. 

Please check here for further updates.



UPDATE 2:
(30/08/2012 10:00) We have seen continued high CPU and memory usage from the Railo server instance. We are now installing Railo monitoring software to find out which applications are taking up the memory and will be monitoring throughout the day and night. Once we have found the application or system that is using the memory we will investigate it further.

We are sorry for the downtime caused during this. We have set up new procedures to help prevent this from happening on our other Railo servers by increasing the number of Railo servers running, to spread accounts out more than we already do.

Please check here for further updates.



UPDATE 1:
(29/08/2012 16:30) We are monitoring the services and increasing the amount of CPU power that is assigned to this node.



We are currently investigating an issue with the Railo service stalling after 24 hours of normal service. We are sorry for the downtime this is causing and hope to have this resolved shortly. Thank you.

UK Network Issues (Resolved) Critical

Affecting Other - UK Network

  • 10/09/2012 14:58
  • Last Updated 11/09/2012 17:18

UPDATE: We have had no further reports of issues after 24 hours of monitoring. We will continue to monitor the networks as normal but if you have any connection issues please contact the support team. Thank you.

---

Over the course of this morning we have noticed issues on the UK networks that connect to our Germany data centre.

If you are using one of the services below you may be affected. Please note the servers are up and running, but your ISP (Internet/broadband provider) may be having connection issues with the network. This means users from other ISPs and countries will be able to view your website or service as normal.

  • Germany Based VPS
  • Germany Railo (Kloxo and cPanel)
  • Germany Media Services
  • Germany Dedicated Servers
We are monitoring the network connections and will update everyone as soon as we have more information. We believe this to be a temporary issue and we hope it is resolved asap by the London data centre network companies.

Thank you.

Railo Server Load (Resolved) Critical

Affecting Server - [S08] Linux LxAdmin ~ Railo Server ~ Kansas City 1 USA

  • 10/08/2012 13:47 - 10/08/2012 16:09
  • Last Updated 10/08/2012 13:49

We are currently seeing a high load on our US Railo server (DC: Phoenix Server: 1). Our team are already investigating.


Thank you.

PHP CGI / CF Service CPU Heavy Load (Resolved) Critical
  • 03/07/2012 16:52 - 04/07/2012 15:13
  • Last Updated 03/07/2012 16:56

We are currently working on high CPU load from accounts on the server and are performing updates to the server. We are sorry for any downtime caused, but we are doing everything possible to resolve the CPU load issues and bring the service back to a stable level.

Future/long-term plans for this server: Directors have agreed plans to move to our new servers which run higher-grade processors (Intel i7s). We will be ordering the hardware soon and running tests on this platform.

Thank you,

Notice Server Restart - UK Windows Server (Resolved) Critical
  • 21/06/2012 10:00 - 22/06/2012 18:24
  • Last Updated 21/06/2012 10:40

Update 1 : 10:30

All services appear to be coming back online, we are just checking the settings and CPU readings.

===========================

We are in the middle of a restart and performance scan on our Windows server to increase the performance and make the server more stable.

We are sorry for the downtime and hope to have everything back online asap.

Thank you,

Web service restart (Resolved) Critical
  • 20/06/2012 14:10 - 20/06/2012 14:11
  • Last Updated 20/06/2012 14:10

Due to a high load from PHP and web services we are performing a web service restart to clear temporary files. This will greatly increase the performance of the service.

Expected downtime: <1 minute

Thank you and sorry for the downtime.

CF1.HOSTMEDIAUK.COM Apache Load Issues (Resolved) Critical
  • 08/05/2012 15:13 - 09/05/2012 12:04
  • Last Updated 08/05/2012 15:21

UPDATE 1

We have corrected the Apache issue and are looking into what caused the fault to make sure it does not happen again. If you have any questions do contact the team and ask for the level 3 tech team to help.

Thank you,

===

We are currently having issues on our CF1.HOSTMEDIAUK.COM server with Apache failing to start up. Our team are working on this now and we hope to have it resolved shortly.

Thank you and very sorry for the inconvenience caused.

ColdFusion Service Issues (Resolved) Critical
  • 30/04/2012 13:01
  • Last Updated 30/04/2012 13:16

Update 1: 30/04/2012 13:12

We have made the changes to the ColdFusion services, which included adjusting the MaxPermSize / maximum JVM heap and settings on the Windows services. This appears to have greatly increased the general performance of the ColdFusion service, but it requires monitoring. The server has enough resources to be updated again if required.
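For context, ColdFusion's JVM arguments live in its jvm.config file, and a change like the one described typically looks something like the fragment below. The values are illustrative examples only, not our actual production settings:

```
# Illustrative jvm.config fragment for ColdFusion 9 (example values only)
# -Xmx raises the maximum JVM heap; -XX:MaxPermSize raises the permanent
# generation, where class metadata lives on the Java 6-era JVMs CF9 uses.
java.args=-Xmx1024m -XX:MaxPermSize=256m -XX:+UseParallelGC
```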

We are sorry for the down time and if you have any questions please contact the management department.

Thank you,

=====

We are currently looking into a ColdFusion service issue that is causing the ColdFusion service to stop responding and require a restart. Our team are looking at increasing the general performance of the server and allowing ColdFusion to use more resources.

We are sorry for the downtime and hope to have everything stable asap.

Management are looking into the possibility of moving the entire server to one of our custom Cloud services to greatly increase performance and reliability.

Thank you,

Packet loss 29/03/2012 (Resolved) High

Affecting System - Nottingham Data Centre

  • 29/03/2012 17:00 - 29/03/2012 18:00
  • Last Updated 30/03/2012 12:02

Investigation into the network issues seen last night at our data centre showed a flow-based attack against a particular IP. We have since located the server in question and secured it.

Apologies for any inconvenience this may have caused.

UK Data Centre Switch Upgrade Scheduled (Resolved) Medium
  • 10/03/2012 03:30 - 30/03/2012 12:03
  • Last Updated 01/03/2012 15:56


In order to keep our infrastructure up to date and provide the best service for our customers, we are upgrading the switch connections in our racks on Sat 10th March at 03:30. You may experience a few seconds' loss of connectivity when we plug cf3.hostmediauk.com (Windows Plesk CF9 Server) into the new switch.

Any dedicated & colocation service customers affected by this upgrade will be informed by email.

Thank you.


Server Timeouts (Resolved) Critical

Affecting Server - [S06] Linux cPanel ~ CF 10 Server ~ Washington DC USA

  • 22/02/2012 14:41 - 22/02/2012 15:10
  • Last Updated 22/02/2012 15:21

===

Update 1: All services are running and memory increased. We are keeping an eye on this but everything has now been resolved. Thank you and sorry for the issues.

===

We have had reports that our CF9 cPanel Linux server has started to run slowly and time out. We are working on this now and hope to have everything running smoothly shortly. We are sorry for the issues and will resolve them asap.

Thank you,

Kloxo Server Issue (Resolved) Critical

Affecting Server - [S08] Linux LxAdmin ~ Railo Server ~ Kansas City 1 USA

  • 06/12/2011 17:22 - 07/12/2011 10:15
  • Last Updated 06/12/2011 17:24

Our team are fixing the Kloxo service issue which has knocked out all web services on our Kloxo CP based servers. (CF & Railo).

We will have everything back online asap.

Thank you,


CF3 Intermittent outages (Resolved) Medium
  • 10/10/2011 12:23
  • Last Updated 10/10/2011 13:06

-----------------

UPDATE 2: We were correct: it was a JVM memory issue, and a restart brought all sites back online. We are keeping an eye on the service and checking documentation to see if this is a known issue.

-----------------

UPDATE 1: It appears there is a JVM memory issue within ColdFusion which means sites are loading, but only after a long period of time. We are looking into this now.

-----------------

We are currently investigating a number of small outages on the CF3 server. Our team are working to find out why these are happening and will update all clients soon.

Thank you,

Windows ColdFusion Service Issues (Resolved) High
  • 19/09/2011 21:35 - 10/10/2011 12:43
  • Last Updated 05/10/2011 10:56

UPDATE 11: Everyone seems to be using the new servers well, but we have a small number of accounts we are working on with the clients to make sure everything is working as it was on the old box. All services are stable. Any questions or problems do let us know.

----

UPDATE 10: We are currently running a number of tests due to issues with making Plesk 10 work with the very latest ColdFusion hot fixes. We hope to have this final element resolved soon ready to move all accounts to the new server. We are sorry for the delay, any questions do contact our team. Thank you,

----

UPDATE 9: Our team are doing everything they can to get our new server set up and secured. We have had a couple of small delays, but our current server appears to be stable at present, and we are working as fast as we can to get the new server online. ETA for coming online is 09:30AM Friday (23/Sept), with sites being restored first thing. We will continue to do everything we can to ensure the current setup stays online and stable. Thank you,

----

UPDATE 8: We have emailed all our Windows customers regarding a plan that will be carried out today. This will involve moving all accounts to a new, larger server with the latest ColdFusion hot fixes pre-installed to safeguard against the issues we have been having. This upgrade is a huge investment for the company in the Windows hosting service we offer, and will allow faster support (due to new staff being brought in), faster speeds (due to increased port speeds) and faster performance for your websites & applications (due to the larger server specification).

This will be worked on today, we of course want to run as many tests as we can to make sure no issues appear on the servers. If you have any questions do let us know.

Thank you,

----

UPDATE 7: We are investigating more issues on our Windows servers relating to Sunday's attempt to install ColdFusion hot fix 9.0.1. The issues seem to be resolved by our team, but then after a few hours ColdFusion fails to cope and crashes again. Our entire team is working to resolve this asap.

We wish to take this time to thank all our customers for their support and patience. We understand this is not what you want from a provider, having its main service down, but we will resolve this.

Thank you,

----

UPDATE 6: It appears the CF service is still having a number of issues, which we are working as hard as we can to resolve. We are sorry for these recurring issues on this server relating to Plesk & ColdFusion. We will update everyone asap.

----

UPDATE 5: We are having a number of minor issues on the server still but all services are working fully. We are looking into getting these remaining issues resolved asap.

----

UPDATE 4: All services are back online and running; we are currently monitoring all services to ensure everything is stable.

Some clients may see an error for their ColdFusion DSN (Data Source Names). To resolve this, please do the following:

  1. Login to your Plesk account
  2. Click on 'Websites & Domains'
  3. Then select the link to bring up the drop-down list under 'Hide Advanced Operations'
  4. Once the list of icons appears, click on 'ColdFusion Data Source Names'
  5. You will then see a list of DSNs for that domain/account. You may have to repeat these steps if you have more than one DSN set up in Plesk.
  6. Select your DSN to bring up the edit DSN screen.
  7. Then just click 'OK' at the bottom; you do not need to edit anything.
  8. This will re-create your DSN in the ColdFusion administrator.

Any questions do contact us.

----

UPDATE 3: We have been having some issues with Plesk connections to ColdFusion. If you are unable to access Plesk this is due to the connection issues. We are sorry for this long delay and downtime overnight and hope to have this resolved soon. We are contacting teams from Adobe & Plesk for extra support in this case.

----

UPDATE 2: Our team have ColdFusion reinstalled and working, we are currently working on the connections between Plesk and ColdFusion. We hope to have this fully resolved soon. Thank you,

----

UPDATE 1: After applying the update yesterday for ColdFusion 9.0.1 a number of the ColdFusion files were corrupted, the team are now reinstalling ColdFusion to the server and applying all settings. The team are still investigating the issues and hope to have everything resolved asap.

Thank you again for your patience.

----

We have been having a number of issues with our ColdFusion service today which our team is working hard to resolve fully. We will post updates as soon as we have more details on the issue.

We are very sorry for the inconvenience downtime on the CF services.

Thank you for your patience.

CF1 / CF2 / Railo Server Updates (Resolved) Medium

Affecting System - CF1 / CF2 / Railo Servers

  • 04/10/2011 11:24 - 05/10/2011 10:54
  • Last Updated 04/10/2011 11:27

We will be running a number of hardware updates on our US servers for ColdFusion & Railo. The update will bring a number of benefits, such as faster network connections (faster ports being opened up), larger backup drives, and increased disk space from new drives being added.

We do not expect much downtime; at most around 20 minutes.

Thank you,

Railo Update => 3.2.3.000 (Resolved) Low

Affecting Server - [S08] Linux LxAdmin ~ Railo Server ~ Kansas City 1 USA

  • 04/10/2011 10:43 - 04/10/2011 11:54
  • Last Updated 04/10/2011 10:45

Railo 3.2.3.000 has been released and we are running the update today, we expect minimal downtime and only a Railo restart required.

If you have any questions feel free to contact our team.

Thank you,

Server Updates (Resolved) Critical
  • 18/09/2011 14:09 - 19/09/2011 21:34
  • Last Updated 18/09/2011 14:12

We are applying an update to our shared hosting server SERVER42 at 02:00 on the 18th of September. ColdFusion sites may be unavailable for a short period during this time.

We are sorry for any downtime that may occur.

Thank you,

Security Issues (Resolved) Critical

Affecting Server - [S02] Linux cPanel/WHM ~ US1

  • 16/07/2011 00:00 - 18/09/2011 14:12
  • Last Updated 20/07/2011 10:16

Since the 16th of July our team has been working to fully restore all accounts on our media1 servers after a hack that used insecure scripting in customer accounts to gain access to commands on the server. This server security breach has been fixed, and the accounts where these scripting errors allowed files to be edited or settings changed are being fixed now.

We are sorry for any downtime or intermittent connectivity that occurs, but our team is monitoring the situation and hopes to have more updates soon.

Please make sure to use the support tickets to contact our support team.

Thank you for your time & patience.

Server downtime (Resolved) Critical
  • 28/05/2011 18:23 - 28/05/2011 18:47
  • Last Updated 28/05/2011 18:25

We are currently having some issues with our Media 2 server in the US. The data centre we contract to maintain this server is looking into the problem now. It appears the semaphore arrays on the server were exhausted, which is why the server keeps going offline. We are putting systems in place to handle this.
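For customers wondering what "semaphore arrays exhausted" means: on Linux, the kernel caps the number of System V semaphore sets it will hand out, and once that cap is hit, services that need a new set fail to start. A minimal sketch of how those limits can be inspected (illustrative only; the exact limits on our server may differ):

```shell
# Illustrative: inspecting System V semaphore limits on Linux.
# The fourth value (SEMMNI) is the maximum number of semaphore sets;
# exhausting it is the failure mode described above.
cat /proc/sys/kernel/sem   # prints four values: SEMMSL SEMMNS SEMOPM SEMMNI

# List the semaphore sets currently allocated, if the ipcs tool is installed:
command -v ipcs >/dev/null && ipcs -s || true
```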

We are very sorry for the downtime and hope to have this resolved shortly.

ColdFusion Updates / Server Issues (Resolved) High
  • 23/05/2011 17:22 - 24/05/2011 10:07
  • Last Updated 23/05/2011 17:33

We are currently running some tests and looking into an issue on our ColdFusion Kloxo server in the US. We are sorry for any downtime and hope to have the server back up soon.

Thank you,

Host Media UK Tech Team

:: UPDATE

We have restored most of the server's systems and are working on the rest of the services. Sorry for the downtime that has occurred.

CF2 / ColdFusion + cPanel Server Issues (Resolved) Critical

Affecting Server - [S06] Linux cPanel ~ CF 10 Server ~ Washington DC USA

  • 23/04/2011 11:29 - 23/04/2011 17:02
  • Last Updated 23/04/2011 11:48


We are currently seeing high usage on our CF2 server, which we are investigating to resolve asap. The issue appears to be server-wide; we hope to have an update soon and the server back online.

We are very sorry for the downtime.

UPDATE 11.47AM :: The data centre team is helping with the issue; thanks to them this should be resolved quickly.


High Memory & CPU issues (Resolved) Critical

Affecting Server - [S08] Linux LxAdmin ~ Railo Server ~ Kansas City 1 USA

  • 03/08/2010 12:05
  • Last Updated 03/08/2010 16:14

** Update

The issue has now been fixed. It appears the server was not taking cached memory into account, so it appeared to be using more memory than it really was.

Everything appears to be running fine now but our team will be keeping an eye on it to make sure nothing else comes of it.
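For readers wondering how a server can "show" more memory used than it really has: Linux counts disk buffers and page cache as used memory even though the kernel frees them on demand. A minimal sketch of the accounting, with made-up numbers (none of these figures are from our server):

```python
# Illustrative only: how a raw "used" figure can overstate real memory
# pressure when reclaimable buffers/cache are counted in.
# All numbers below are hypothetical example values.

total_mb = 12288     # hypothetical total RAM
used_mb = 11800      # "used" as first reported, including buffers/cache
buffers_mb = 900     # hypothetical disk buffers
cached_mb = 6200     # hypothetical page cache, reclaimable under pressure

# Memory genuinely unavailable to applications:
actual_used_mb = used_mb - buffers_mb - cached_mb
actual_free_mb = total_mb - actual_used_mb

print(actual_used_mb, actual_free_mb)  # → 4700 7588
```

So a monitor reading the raw "used" column would report the server as nearly full, while applications actually have plenty of headroom.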

-------

We are currently seeing a large increase in memory and CPU usage by Apache on the Railo server, even after our upgrade of RAM last night. Our UK team are investigating this and our US server administrators will be checking the logs to see why. We believe an account is running an unsafe script which is creating the high server load.

We will post an update here asap!

Thank you.

Railo Servers - Service Error 503 - Update (Resolved) High

Affecting Server - [S08] Linux LxAdmin ~ Railo Server ~ Kansas City 1 USA

  • 03/07/2010 22:10 - 14/07/2010 17:19
  • Last Updated 03/07/2010 22:46

Update 2

The Railo service is now back up and running while our team investigates the reasons for the service downtime. We will update this issue post with our results.

Update 1

We are currently updating our Railo service, which we are sorry to say is taking a bit longer than we expected due to issues with the update. We are working on getting the service back to normal ASAP! The update will bring the Railo service to the latest version with all security fixes and new features.

We will update all our customers once tests are finished.

Sorry for any downtime on ColdFusion / .cfm / .cfc files.

NY1 Restart Issue (Resolved) Critical

Affecting Server - [S02] Linux cPanel/WHM ~ US1

  • 29/06/2010 17:17
  • Last Updated 30/06/2010 12:28

Update 3

All services have been tested over a period of 12 hours and appear to be running fine now. We are investigating the RAID controller issue.

Status: Resolved

----

Update 2

Our team has found the issue: it was an unexpected RAID hardware fault, which is being replaced/fixed now, and our server will be up and running within the hour.

We are sorry for this issue.

Thank you and we will update you soon.

----

Update 1

We are currently having issues with our NY1 server which our US and UK team are working on.

This issue started after a restart due to a system clean to help performance on the server and speed up mail / POP3 systems. The server appears stalled on the main drives for an unknown reason.

We will update all our customers on this server asap!

Thank you

NY1 Server Failure (Resolved) Critical

Affecting Server - [S02] Linux cPanel/WHM ~ US1

  • 18/02/2010 00:00 - 19/02/2010 00:00
  • Last Updated 19/02/2010 11:19

Our NY1 server has been going offline and back online overnight. We have been able to restore partial access to the server for some services, but we are still working on the port 80 issue.

We are sorry for this issue; our team is looking into it and our data centre is investigating as well.

We will update our reports here.

*** ISSUE RESOLVED ***

High CPU usage from unknown source (Resolved) Medium

Affecting Server - [S02] Linux cPanel/WHM ~ US1

  • 16/09/2009 17:04 - 29/09/2009 16:18
  • Last Updated 22/09/2009 11:11

We have been seeing our servers hit with a high amount of CPU usage, and we are working on this issue with our full team.

We hope to have news soon on the cause, and we will make sure it does not happen again.

Sorry for any issues on your websites, as reboots may be required.

:: UPDATE 17/Sept/2009 ::

We have found the issues making our servers load higher than normal (currently fixed, but we are monitoring). We are also moving up the deadline for our new systems, which will offer customers new locations as well as newer servers. This mainly applies to non-FFmpeg / PHP Ming / Reseller customers. If you would like to know more, please open a support ticket to sales with your questions.

:: UPDATE 22/Sept/2009 ::

As many of the US media server customers will have seen, all sites were offline while the server's main systems, including WHM/cPanel, were running fine. We are investigating this and awaiting test results from our data centre.

We would like to take a moment to apologise, from all the team, for any emails missed during this time and for the downtime. We will be offering customers the chance to move to our new servers in a wide range of locations. We hope our final tests today will allow customers to have new accounts set up around the world.

Mail issue using port 25 (Resolved) High

Affecting Server - [S02] Linux cPanel/WHM ~ US1

  • 09/09/2009 18:08 - 18/09/2009 10:52
  • Last Updated 18/09/2009 10:54

Our US media server had issues sending and receiving mail on port 25. Our US ISP changed their security settings without any warning, forcing us to use port 26 instead for all mail.

The mail servers have now been restarted and all tests show mail working fully again. If you have any problems with your mail please contact us.
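For customers whose mail clients still point at port 25, the only change needed is the outgoing (SMTP) port. A sketch of the updated settings (the hostname below is a placeholder; use your own mail server address):

```
Outgoing (SMTP) server: mail.yourdomain.com   (example hostname, substitute your own)
Outgoing port:          26                    (previously 25)
Authentication:         enabled, same username / password as before
Incoming (POP3/IMAP):   unchanged
```

No other settings need to change; incoming mail ports are not affected by this ISP restriction.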

IMAP / email server issues (Resolved) Low
  • 21/07/2009 11:57 - 16/09/2009 17:04
  • Last Updated 21/07/2009 11:58

We are currently running fixes for some issues we have found on the email server for our UK servers. The team there is working on this issue.


New updates requires restarts (Resolved) Critical
  • 14/07/2009 13:35 - 14/07/2009 13:58
  • Last Updated 14/07/2009 13:59

Our UK servers may be running a bit slower than our normal fast speeds, and some minor downtime may occur due to updates to our systems.

We are hoping these updates and restarts will improve our overall systems.

Sorry for any issues caused.

The server is now back up and running.

Planned Windows Server Restart (Resolved) Medium
  • 05/05/2009 00:00 - 08/06/2009 23:02
  • Last Updated 05/05/2009 17:24

There is a planned restart of the Windows CF8 server for updates and install of FFmpeg.

This will cause a short period of downtime on the server.

Sorry for any inconvenience.

Support and Sales System Upgrades (Resolved) Critical

Affecting Other - Support and Sales Service

  • 27/03/2009 17:00 - 30/03/2009 09:36
  • Last Updated 26/03/2009 21:32

Over this weekend we will be performing upgrades to our support and sales systems, including the main mail system for Host Media UK. All enquiries will be answered as soon as this upgrade is complete, and we are sorry for any inconvenience. All our server administrators will be on hand monitoring the servers as normal to make sure no downtime occurs.

Thank you for your support.

Server Network and Maintenance (Resolved) High

Affecting Server - [S02] Linux cPanel/WHM ~ US1

  • 19/03/2009 00:00 - 26/03/2009 21:28
  • Last Updated 20/03/2009 09:12

Due to some network issues found by our server administrators, we are working on upgrading our server connection to prevent any major downtime and to maintain our 99% uptime.

Some downtime may occur, but we hope to have this issue sorted asap.

If you have any questions, please contact us either through Host Media UK or AeonCube Networks.

Best regards

Host Media UK Server Team

Urgent maintenance operation on the servers h (Resolved) Critical

Affecting Server - [S02] Linux cPanel/WHM ~ US1

  • 28/02/2009 00:00 - 01/03/2009 00:00
  • Last Updated 01/03/2009 10:42

Overnight we had to carry out some urgent maintenance operations on the cPanel / WHM Linux server.

This is now complete and we are sorry for the down time.

Total downtime: 150 minutes

Reason for maintenance

With the maintenance performed, we have ensured future server speeds will stay fast and reliable.

If you have any questions regarding the server and our updates please contact us.

ColdFusion Server - Helm Updates and Server C (Resolved) Medium
  • 13/02/2009 07:42 - 15/02/2009 00:00
  • Last Updated 20/02/2009 16:41

We are working on our ColdFusion / ASP services to run checks on accounts and on the server's setup so it runs faster. Some issues may occur on new setups, and some features may be offline for a short time (pre-installed scripts etc.).

We are also looking into upgrading the server to a Plesk-based platform to provide faster and better systems.

Server Upgrades (Resolved) Medium

Affecting Server - [S02] Linux cPanel/WHM ~ US1

  • 03/01/2009 00:00 - 06/01/2009 16:03
  • Last Updated 18/01/2009 16:04

Shared Hosting Server Upgrades

Our shared servers will be undergoing upgrades, but websites should not go down while these upgrades are in progress. As our servers are also being moved during this upgrade, we will be publishing new shared IP addresses. If your site uses A records for its domain names, please change them asap. All name servers such as dns1.hostmediauk.com / dns2.hostmediauk.com will not be affected.
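For customers managing their own DNS, the change is a single A record update in the domain's zone. A sketch of a zone-file entry (the IP addresses below are documentation placeholders, not our real shared IPs; use the new shared IP once we publish it):

```
; Before the move (placeholder old shared IP)
www     IN  A   203.0.113.10

; After the move (placeholder new shared IP)
www     IN  A   198.51.100.20
```

Domains pointed at our name servers (dns1/dns2.hostmediauk.com) are updated for you and need no change.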

Server upgrades include:

  • New network port connections
  • RAID 10 hard drives
  • FFMpeg and Red5 installed
  • Server backup drives

New features coming soon

  • Upgrades to all paid shared hosting bandwidths
  • Reseller Hosting Plans

We will keep everyone updated on the progress of our upgrades.

Best regards

Host Media UK Management