Affecting Other - DDOS - Aptum
Friday, 9/13 @ 5am PST:
We are currently tracking down a DDoS attack that appears to originate from inside our own network at our Los Angeles (Aptum) location. This appears to be deliberate and has been difficult to track down, but we should have it resolved shortly. It has been escalated to the highest levels of engineering and we are currently working on a final resolution.
Affecting Other - Charlotte, NC Datacenter
Dear Customer,
Thank you for your continued business!
This is a service impacting maintenance notification for our Charlotte, NC datacenter. If you do not have services at this datacenter, then you can safely ignore this notice.
The existing datacenter building on the Segra campus (currently 1612 Cross Beam Dr) is approaching planned obsolescence. As such, all services are being moved to a brand new facility on the same campus, completed just this year at 3101 International Airport Dr. All of our networking has been upgraded and duplicated in the new location and is "live", and similar test migrations have been completed successfully, so we expect the process to go smoothly.
We will be cleanly powering off servers and migrating one cabinet at a time, then powering them on at the new location. You can expect more redundancy, more on-net carriers, and a better, higher capacity network immediately and with more to come in the future (without the need for any similar migrations for decades to come)!
Dedicated and colo server customers: Please begin powering down your server at 8:45PM EST on Sunday, August 18th; otherwise, we will power down the server manually.
Expected impact:
8:45PM EST until 11:30PM EST
UPDATE: 8:45PM EST: SERVERS ARE BEING POWERED DOWN AS PLANNED. IF YOU HAVE NOT POWERED YOUR SERVER(S) DOWN, WE WILL BE PRESSING THE POWER BUTTONS SOON.
UPDATE: 9PM EST: MIGRATION/UPGRADE TO OUR NEW BUILDING IS COMMENCING
UPDATE: 11:20PM EST: THE PHYSICAL MIGRATION IS COMPLETE AND WE ARE POWERING UP THE LAST OF THE SERVERS. IF YOUR SERVER IS NOT ONLINE WITHIN 45 MINUTES, PLEASE POST A SUPPORT TICKET
Affecting Other - Bend, OR
2024-07-21 12:31am PST - A fiber issue affecting both of our primary redundant providers (Fatbeam and Lumen) between Sandy, OR and Maupin, OR is currently being addressed, with a maintenance window on July 22, 2024 between midnight and 6am PST. The Lumen NOC has advised that a partially damaged cable has been identified. The damage was caused by a construction project in the area, and the cable needs to be fully cut and spliced to repair the damage. Services are expected to be impacted for 5 hours within the maintenance window.
UPDATE 12:41AM PST: We have failed over to a tertiary provider. Connectivity services are restored, but customers may notice increased latency or packet loss throughout this window. If your services are still completely offline, please post an urgent support request.
UPDATE: 3:21AM PST, one of our primary providers is back up and traffic has been restored.
Affecting Other - Las Vegas, NV location
Dear valued client,
This notice is to inform you of upcoming service-affecting network maintenance in our Las Vegas, NV location. If you are receiving this notice, it is because we will be moving the hardware associated with your service into newly provisioned rackspace within the same facility. The maintenance window is scheduled for 7/23/2024 from 12-5PM local time, but we do not expect downtime to exceed 20 minutes per server. Please note that there will be two outages: one ~20 minute outage while your server is moved, and one ~5 minute outage while the network is cut over.
Although there are no major observable issues currently, this work is considered "emergency maintenance" because existing equipment is reaching capacity thresholds, so the decision was made to proceed as soon as possible. The new space features upgraded networking, 10Gbps connectivity, and room to grow.
You may power down your equipment prior to this maintenance; otherwise, we will carefully and CLEANLY shut down all servers prior to the work and ensure everything is powered back up and online before conclusion.
Your patience and understanding are much appreciated. Thank you for your continued business!
Affecting Other - Bend, OR Datacenter
Dear Valued Clients,
This is a service impacting maintenance notification for our Bend, OR location.
It has been identified that necessary power maintenance needs to be performed, which will require the shutoff of the main power distribution units one at a time. First the A side PDU will be serviced on the evening of 12/11/2023, and then the B side PDU will be serviced on the evening of 12/12/2023. To reduce impact to services, we will be "swinging" rack-level PDUs temporarily during the maintenance, and then "swinging" them back to their original diverse feed after the maintenance is completed.
We will also be performing switch and router software upgrades at this time. You may experience one to three network outages of approximately 15 minutes each during these periods.
Maintenance Window One (A side):
December 11th from 11PM Pacific Time, until December 12th 7AM Pacific Time
Maintenance Window Two (B side):
December 12th from 11PM Pacific Time, until December 13th 7AM Pacific Time
Dedicated server customers: Most of our dedicated servers have dual power supplies, meaning that the only impact to your service will be during the network maintenance windows. If you have a single power supply server, then we will cleanly shut down your server, swing the power supply to the other feed, and power it back up - expected downtime of approximately 10-15 minutes. We will then repeat the reboot process at the end of the power maintenance, moving the power supply back to its original source. If you have an operating system that will not accept an ACPI shutdown command cleanly (ESXi, XenCenter, etc.), please ensure that we have an up-to-date root/administrator password on file; otherwise we will be forced to hard power cycle the server.
VPS server customers: The only impact to your service will be during the network maintenance windows.
Shared hosting customers: The only impact to your service will be during the network maintenance windows.
Colocation customers: If you have purchased A+B power, there will be no power disruption to your services; the only impact will be during the network maintenance windows. If you have not purchased A+B power, then we will cleanly shut down your server, swing the power supply to the other feed, and power it back up - expected downtime of approximately 10-15 minutes. We will then repeat the reboot process at the end of the power maintenance, moving the power supply back to its original source. If you have an operating system that will not accept an ACPI shutdown command cleanly (ESXi, XenCenter, etc.), please ensure that we have an up-to-date root/administrator password on file; otherwise we will be forced to hard power cycle the server.
If you have any questions regarding this maintenance event, please do not hesitate to contact us. This maintenance will allow us to continue to expand in our Bend, OR location and increase our redundancy, security, and capacity. We appreciate your understanding and thank you for being a valued client!
Affecting Other - Bend, OR Datacenter
10/19/2023 4:40PM PST: Our NOC was alerted to a network issue in our Bend, OR location and is currently investigating.
UPDATE 5:06PM PST: We have confirmed a fiber cut at the same convergence point as the 09/19/2023 outage - unfortunately, the diverse route we have ordered is not yet available. We are engaging a tertiary provider in the meantime.
UPDATE 5:13PM PST: We have engaged a tertiary provider and subnets are beginning to propagate.
UPDATE 7:49PM PST: We have received an update from our providers: they have identified a fiber cut located in Sandy, Oregon. Their splicers have just arrived on-site to start repairs. Updates are to follow as more information becomes available.
UPDATE 8:56PM PST: The splicers have prepared the sheath and are commencing fusion splicing. They are giving a maximum ETTR of 01:00AM PST (four hours from now). The next update will be provided at that time or as additional information is obtained.
UPDATE 11:16PM PST: One of our transit providers has regained connectivity. Network access is now restored; please note that we do not yet have full redundancy. This incident is now closed.
Affecting Other - Bend, OR Network
At approximately 12:10 AM PST 9/19/2023 our NOC detected a network outage in our Bend, OR datacenter. We are currently investigating this outage.
Update 12:33AM PST: We have determined that all of our redundant network providers are currently hard down. While one was scheduled for maintenance tonight, the others were not. We are currently in contact with both providers, working to establish an ETTR.
Update 1:02AM PST: We have confirmed with both upstream providers that, unfortunately, a network topology change by one provider created a convergence point we were unaware of: both providers run through the same sheath, which is currently undergoing maintenance, resulting in a loss of service. We are engaging a tertiary upstream provider in order to restore services as quickly as possible. Please note the maximum ETTR for the originally announced maintenance window is 6:00AM PST.
Update 2:01AM PST: We have several prefixes beginning to propagate via a tertiary provider, and are working on bringing up the remaining prefixes.
Update 2:34AM PST: One of our primary providers has come back up, and services are now fully restored.
We will be conducting a thorough RCA of this incident, including investigating when this convergence point was changed, why we were not notified, and will be changing topology to have full path redundancy again.
Affecting Other - Staten Island, NY Network Maintenance
Dear valued clients,
Thank you for your continued business!
Please note we will be performing service-impacting network maintenance in our Staten Island, NY location in order to complete a necessary expansion and capacity upgrade. Expected downtime is only 30 minutes; however, over one hour is being allocated in case of unforeseen issues. During this time you may see 2-3 brief network outages.
Start time: 6/1/2023 9:00PM EDT
End time: 6/1/2023 11:30PM EDT
If you have any questions, please do not hesitate to get in contact with us.
Affecting Other - Psychz Los Angeles Network Maintenance
Dear valued clients,
Thank you for your continued business!
Please note we will be performing service-impacting network maintenance in our Psychz, Los Angeles location in order to complete a necessary expansion and capacity upgrade.
Please note expected downtime is only 15 minutes; however, over one hour is being allocated in case of unforeseen issues.
Start time: 5/30/2023 9:30PM PDT
End time: 5/30/2023 11:00PM PDT
Please do not hesitate to contact us if you have any questions.
Affecting Server - OR-Mercury
We have scheduled emergency maintenance for VPS hypervisor node OR-Artemis and shared hosting server OR-Mercury starting tonight at 7:30pm EST. The maintenance window will be approximately 30 minutes, but we expect downtime closer to 10-15 minutes. The result should be noticeably improved performance and a return to 100% reliability. We appreciate your patience and understanding.
Affecting Server - CA-Romeo
We have scheduled emergency maintenance for VPS hypervisor node CA-Titan and shared hosting server CA-Romeo starting tonight at 7pm EST. The maintenance window will be 1 hour, but we expect downtime closer to 30-45 minutes. Most major components of the node are being replaced, including some of the new components installed last week (which unexpectedly resulted in instability and random reboots). Since we had already used spare inventory for the previous maintenance, we decided to ship in brand new parts to replace and upgrade all major systems at once. The result should be noticeably improved performance and a return to 100% reliability. We decided not to replace the entire node (which would have caused more extended downtime) and instead waited for upgraded parts. We appreciate your patience over the past week while we worked to minimize the impact of the situation.
Affecting System - CA-Romeo, CA-Titan
Please note we will be performing emergency maintenance on CA-Romeo and CA-Titan beginning at 6PM PST on 4/25/2023.
We expect brief downtime which should not be more than 30 minutes. This is necessary for performance and reliability enhancements.
Thank you for your continued business!
Affecting Other - CLT4 North Carolina - ENTIRE DATACENTER
On 3/26/2023 at approximately 3:36AM EST our NOC became aware of a major electrical issue in the North Carolina datacenter affecting all services. We are handling this with top priority. Updates will be posted as information is gathered. There is no ETA, but services will be restored ASAP.
UPDATE 5:52AM EST: We are presently experiencing an issue at our CLT4 data center which is causing temperatures to rise to levels unsafe for equipment. We currently have 2 maintenance vendors onsite to diagnose and resolve the issue; however, we have no ETA at this time. Out of an abundance of caution, and to avoid any unnecessary damage to our equipment, we may power down some equipment.
UPDATE 1:47PM EST: HVAC issues have been temporarily resolved and we are working to begin powering equipment back up as temperature thresholds allow.
UPDATE 2:06PM EST: We continue to power equipment back on gradually. We still have more than five racks of equipment to bring back online and will continue to power them on as temperature thresholds allow.
UPDATE 3:50PM EST: HVAC and electrical contractors have isolated the issue at the CLT4 data center that was causing the increased temperature loads, and ambient temperatures are beginning to drop. It is now safe to power on equipment, and we continue to actively do so. We are still awaiting resolution of the root cause; the current Estimated Time to Repair from Duke Energy is 8:00 PM Eastern. The facility will remain on generator power until Duke has completed their work.
UPDATE 6:42PM EST: We have worked through all remaining service alerts. If you currently have a service impacting issue, please open a high priority ticket from your client portal.
Affecting Other - USSHC (Iowa) datacenter
Dear Valued Clients,
Thank you for your continued business! This is a service impacting maintenance notification for our Iowa datacenter.
We will be performing major infrastructure upgrades starting 3/27 at 5PM CST and ending 3/30 at 10AM CST.
During this time, both power and network upgrades will be occurring. Because of this, you may see your server reboot (one time only) and you may see 1-4 brief 1-5 minute network outages.
This necessary maintenance will bring major capacity upgrades and improve redundancy in our Iowa datacenter.
If you have any questions, please do not hesitate to get in contact with us.
Affecting Other - CLT4 UPS B
Dear Valued Customer,
Please be advised we will be performing a non-customer-impacting maintenance window on Monday, July 11th on our UPS systems in the Charlotte data center. We will be replacing physical batteries in the UPS systems during this maintenance window. We do not anticipate any disruption to services during this time; however, we wanted to advise you of this upcoming maintenance window. During this window, the UPS systems will be online but will not have battery backup power until the work has been completed. Should you have any questions, please contact support and we will be happy to address them with you.
Thank you,
Affecting Server - IA-Hotel
We will be migrating the node "Hotel" located in our Iowa location to a new machine featuring upgraded and modern hardware.
The migration process will cause 2-4 hours of downtime. Your patience is appreciated while we complete this necessary and beneficial upgrade.
Affecting Other - Las Vegas, NV
Dear Valued Clients,
Thank you for your continued business!
Please note we will be performing emergency maintenance in our Las Vegas, NV datacenter on Friday, June 10th from 10AM to 11:30AM PST.
This will be service-impacting maintenance, during which your server will be gracefully shut down and then powered back up. During this maintenance we will be replacing PDU and routing infrastructure for better network speed, redundancy, and improved uptime.
Affecting Other - Segra, Charlotte NC Datacenter
We were informed of Duke Energy performing emergency maintenance on circuits affecting building CLT4 in North Carolina today. This building is where most of our network and hardware is housed. No impact was expected, but we experienced an issue with one of the UPS units that caused a downstream impact within the data center. Two PDUs (A1/A4) had their main breakers trip during the cutover (and then the failback). Any colo client single-sourced to one of those PDUs, and our cabinets unlucky enough to have A side from one and B side from the other, were impacted.
We are back on utility power now and we've put eyes on reported customers' cages/racks, where everything is up. Any remaining servers down should be reported to us by opening a new URGENT TICKET so that we can filter out the already previously resolved issues.
We sincerely apologize for the impact, but we were at the mercy of the building power systems and the quick failover was not as seamless as expected, so we are investigating this and will run tests and resolve those issues accordingly.
At approximately 8:59AM EST 5/4/2022 our NOC became aware of multiple service issues in our Charlotte, NC datacenter. Initial investigations showed loss of link to multiple cabinets from our network distribution.
At approximately 9:05AM EST 5/4/2022 our NOC contacted datacenter site operations, who confirmed that there is a known datacenter-wide power issue, affecting multiple distribution PDUs
At approximately 9:14AM EST 5/4/2022 power was restored.
At approximately 11:13AM EST 5/4/2022 power was lost again, and restored at approximately 11:15AM EST.
We are currently working through all services and ensuring all servers boot successfully. If your services are currently down, please open a trouble ticket and we will ensure they come back up as soon as possible.
Affecting Other - Las Vegas, NV
We are currently experiencing a network-wide event in Las Vegas, NV. It is currently being investigated and there is no ETA known at this point. We have technicians on site and believe the problem will be identified and resolved quickly.
Affecting Other - Aptum Los Angeles (CA1)
Dear Valued Customers,
Happy New Year! H4Y continues to grow at an ever-increasing rate and we have had incredible demand in Los Angeles, among other locations. In response to this demand, and to modernize our infrastructure, we have planned an expansion and migration of our space in our Los Angeles, CA (CA1 - Aptum) datacenter. From January 17th through 19th (all day and evening), we will be migrating servers into new space that has already been provisioned. We have mirrored our network settings in both locations and we plan to move machines one by one. While the maintenance will be ongoing from Jan 17th through 19th, the actual downtime for any server should not exceed 1-2 hours, but we have calculated and allotted time for up to 4 hours per node in case there are unexpected issues or emergencies as we go. This announcement pertains to all of our colo, dedicated, shared, reseller, and VPS clients in Los Angeles, CA (Aptum, formerly Peer 1) only.
We have many Asian clients in this facility, so we have opted to move Asian clients during night-time hours in Asia, and USA clients during off-peak hours in the US. We have planned this to cause the least amount of disruption possible.
The resulting improvements should be many: a higher capacity network, more redundancy in every fashion, modern switching and routing equipment, more organized cabling, and of course room to expand. We will enjoy new on-net providers and more options for cross connects to other ISPs and internet exchanges.
Affecting Other - Network
We are currently investigating a DDoS attack and a possible complication with our anti-DDoS gear. ETA &lt; 30 min.
Affecting Other - Network
We are currently investigating what appears to be a large (spoofed) DDoS attack in North Carolina. It is currently causing packet loss to several segments of our network in North Carolina. The issue is actively being worked on and should be resolved momentarily.
Affecting Other - 198.37.xxx range
Dear Customer,
As of 6:15AM EST, we started getting monitor alerts for IPs in the 198.37.xxx range. We recently created ROAs (Route Origin Authorizations) for these ranges after a netblock ownership change, and it seems to have gone awry. We are working on correcting this and all services should be restored once the fixes propagate.
ETA 30min - 1hour
Affecting Other - Bend, Oregon Datacenter
At 12:27PM PST the NOC observed anomalies in our Bend, Oregon datacenter. Upon investigation, it appears that one of our primary transit links to Lumen/CenturyLink/Level3 is flapping; as such, we are engaging the Lumen NOC team.
Update: At 12:55PM PST the Lumen NOC team confirmed a partial fiber cut affecting circuits in the area, causing the bounces. They have advised that additional bounces and/or full outages are possible. As such, we have temporarily removed Lumen from our network mix until the issue is resolved. Please note that you may see packet loss during this time. Once Lumen has confirmed that the fiber cut is fully repaired, we will update this network advisory.
Update: At 2:39PM PST the Lumen NOC team confirmed that the partial fiber cut has been repaired. As such, we are closing this network event.
Affecting Other - North Carolina
North Carolina: We are currently investigating an internal network anomaly that is causing packet loss. There is no ETA at this time, but we have all hands on deck investigating and working towards a resolution, hopefully any minute.
As of 1:05pm EST, we resolved what turned out to be a spoofed outbound attack originating from inside our own network. The magnitude of the bandwidth should not have affected our network, but the packet rate and the type of packets sent (payload) managed to overload some of our devices and cause packet loss and misbehavior. We've identified the source of the attack vector and we are working with Cisco and our other device manufacturers to harden our settings to permanently protect against this sort of abuse. We sincerely apologize for the problems this caused.
Affecting Other - Aptum Los Angeles
At approximately 9:04PM Pacific time our NOC team noticed anomalies in the network in our Aptum, Los Angeles location. These anomalies are creating intermittent packet loss or outages for random locations.
We immediately began investigating. The issue does not appear to be local to our network, as such we have opened a ticket with the Aptum NOC team.
Update: 9:46PM - The Aptum NOC team has acknowledged an issue in their network and have escalated it to their engineering team. This is being handled as a priority 1 service impacting incident. Further updates will be posted when available.
Update: 10:21PM - The Aptum NOC has resolved the issue on their end, and the incident has been resolved.
Affecting Server - NC-Uniform
We are currently experiencing issues with our iCF cloud infrastructure in Charlotte, NC due to an unexpected result of some development work. Affected servers include Uniform as well as other selected Charlotte, NC cloud VPS servers. We are working on restoring it now and should have a resolution shortly. ETA: 11am - noon EST.
Affecting Other - Cloud VPS infrastructure
Friday, May 28th 6pm EST
Location: Charlotte, NC
Affects: iCF Cloud VMs (VPS)
We will be upgrading the CPUs of our iCF compute nodes to increase our existing capacity. We determined that it would make sense to do it all at once rather than failing over from one node to another and doing it in segments. Future upgrades will use a hot-failover technique with no downtime. This maintenance window is 1 hour.
Affecting Other - VPS accounts on CA-Titan + Shared server Romeo/Papa
Dear Customer,
As of 6am EST on Saturday, May 15th, we received notice that a drive had failed in the primary array of CA-Titan (which also houses CA-Romeo, a shared server). The rebuild of the replacement drive appears to have caused some corruption in the array. We are working to restore all data as quickly as possible. ETA is 1-4 hours as it may require a full data restore. Your patience is appreciated as we deal with this unexpected situation. Your continued business is much appreciated. We will post updates here as we have news.
Update: 9:45am EST - we are starting the restore of Romeo. It is a large server so we appreciate your patience.
Simultaneously, we will be bringing up the VMs on CA-Titan one by one. Your continued patience is appreciated.
Update: 12:15pm EST - the restore of Romeo continues simultaneously with VMs (one by one). Romeo is a large node and the restore is taking longer than expected.
Update: 3:38pm EST - the restore of Romeo completed and it is successful but we brought it back down temporarily to do an FSCK (filesystem check) to ensure data integrity. Final ETA has not changed.
Update: 5:38pm EST - Romeo is restored except for some innodb DBs/tables. We are working on restoring those and getting them into a consistent state.
Update 8:00pm EST - Still working on Romeo InnoDB tables for account usernames that start with j-z. 2 VMs remain to be restored on CA-Titan - all others are up and running.
Update: 8:30pm EST - all VMs have been restored and should be running normally. Contact us if your VM is offline or experiencing issues.
Update: 5/16/21 4:15am - most DBs restored. Still a few remaining on CA-Romeo. Contact us if you still see issues.
Affecting Other - Distribution Switching
Dear Valued Client,
This notice is to inform you of upcoming scheduled maintenance in our Charlotte, NC facility on Friday, May 14th at 7pm EST. We intend to fail over from temporary backup distribution switches to our primary (redundant) cluster. The goal is to restore full redundancy to our switching infrastructure.
Background: On Saturday, May 8th, we experienced cascading PSU failures in our primary distribution switch cluster. This caused a complete outage, and we ultimately failed over to backup/standby switches. These switches have been in use since then but are not suitable for continued production use due to lack of redundancy.
Scope of work: We will be replacing 2 failed PSUs (power supplies) with 4 new, tested, and fully redundant PSUs. Our distribution switch cluster will effectively have twice the power redundancy as it had when it experienced multiple failures.
Timing / window: Starting at 7pm EST, we are scheduling a 30 minute window for the work, but actual downtime is not expected to exceed 5-7 minutes as cabling is switched from the failover switches back to the primary cluster. Both switch clusters will be operating normally before we proceed, so there should be little chance of unexpected issues or extended downtime.
As always, your patience and understanding are appreciated. We look forward to serving you for many more years with incredible uptime in Charlotte, NC.
Affecting Other - Networking / distribution switches
Update 12:40PM EST: Confirmed all network connectivity is restored.
Update 12:21PM EST: Connectivity is mostly restored. The remaining disconnected cabinets will be back online momentarily.
Update 12:00PM EST: We are currently transitioning all cabling to our backup distribution switches. Our initial ETA holds true and we should have all services restored by 2PM EST.
Update: 9:10am EST. We are preparing to replace the distribution switches. There are a few last ditch efforts being worked to determine if the existing cluster will come back to life. The emergency maintenance window will extend through 2pm EST though we hope to have services restored before then.
Update: 7:45am EST. The distribution switch cluster shows multiple failed PSUs with red lights and is otherwise totally dark. We are deciding whether to replace the entire redundant cluster with a backup distribution switch or fix it in place. ETA is currently unknown.
Update: 7:09am EST. Confirmed issues with distribution switches. We are working on a resolution. Your continued patience is appreciated.
As of 6:20am EST on Saturday, May 8th, we are currently experiencing a datacenter-wide outage in North Carolina. We suspect an issue with our distribution switches and are currently investigating but have no ETA. Hopefully services will be restored shortly. We appreciate your patience.
Affecting Other - Segra - Charlotte, NC
Please note we will be performing service impacting maintenance that will affect services in the Charlotte, NC datacenter. We will be performing router firmware upgrades to address a security vulnerability that requires a reload to apply.
Services affected: Charlotte, NC datacenter
Date: November 6th, 2020
Start time: 1AM EST (GMT-5)
End time: 2AM EST (GMT-5)
Outage Duration: Approximately 10 minutes, though 20 minutes is being allocated
If you have any questions, please do not hesitate to reach out to us at my.h4y.us
Affecting Other - CenturyLink
At approximately 6:10AM EST, we started getting reports of inaccessible servers from clients in some specific regions around the world. After a thorough investigation, we discovered that the issue is not related to our own networking. Rather, it seems CenturyLink, a major North American ISP and telco provider, is experiencing some sort of major issue. (Link: https://downdetector.com/status/centurylink/) This is causing sporadic inaccessibility, packet loss, and latency for many routes to many locations in the USA. As a direct transit provider, we have contacted them, but they are seemingly overwhelmed by the widespread issues and have not been able to promptly reply with specific information. Through our own connections, we have heard that they expect to have all issues sorted by 11:30am EST. However, we have done our best to route around their networks as much as possible. Since they are a major ISP in the USA, this proves impossible from some locations. If your route to us is through CenturyLink, you will likely experience intermittent connectivity until this is fully resolved on their end. Meanwhile, our network and all systems are 100% operational at this time. We will continue to monitor this situation and hope that CenturyLink resolves their issues ASAP. Please do not hesitate to contact us with any questions.
Affecting Other - Cascade Divide Datacenter
Beginning 8/12/2020 at 8PM PDT we will be upgrading our core network infrastructure to provide better redundancy and speed, and to allow for larger future growth.
Services affected: Cascade Divide, Bend OR datacenter
Date: August 12th, 2020
Start time: 8PM PDT (GMT-7)
End time: 9:30PM PDT (GMT-7)
Outage Duration: Approximately 15 minutes, though 30 minutes is being allocated
Affecting Other - Distribution Switch
We will be replacing a distribution switch stack in order to accommodate higher capacity and to prevent any possible saturation issues due to increased bandwidth consumption.
Details -
Date: Thursday, July 9th, 2020
Start Time: 8:00AM PDT
Work Window: 07/09/20 @ 8:00AM- 10:00AM ( Los Angeles time - (PDT) )
Outage Estimated Duration: 10-30 Minutes
Facility: Los Angeles, California
Affecting Server - NC-Gemini
Dear clients,
At approximately 7:30am EST on Saturday, 5/30, we will be rebooting NC-Whiskey and NC-Uniform. We expect downtime of 30 minutes up to 1 hour. This reboot facilitates a physical server move designed to optimize our utilization of datacenter space in Charlotte, and it will also allow for critical software updates to the host nodes. Your patience is appreciated while we complete this scheduled maintenance! Thank you.
Dear Customer,
We are currently seeing issues with some IP ranges in our Charlotte, NC location. Network engineers are working on this issue right now. There is no ETA but we are hoping the issue will be resolved any minute. This issue appears to affect only certain IPs/ranges.
UPDATE (1:37am): Gateways are pinging and we are expecting routing issues to be solved any minute.
UPDATE (2:28am): Partial service restoration - we expect full resolution shortly.
UPDATE (2:49am): Most services restored. We have learned that the cause of this was poorly communicated upstream network maintenance. We are working with the provider to resolve this issue.
UPDATE (2:50am): Incident fully resolved. The issue was indeed a result of PLANNED network maintenance by our provider, who did not properly communicate ahead of time. We are taking action to prevent this from occurring again in the future. Our apologies for the problems this caused. We will hold our upstream provider accountable. Please contact us for any more details or for SLA requests.
Affecting Other - Cascade Divide Datacenter
Dear valued clients,
Thank you for your continued business! This is an emergency service impacting maintenance notification:
Services affected: Cascade Divide, Bend OR datacenter
Date: September 11th, 2019
Start time: 5PM PDT (GMT -7)
Duration: Approximately 15 minutes, though 30 minutes is being allocated
Scope: Out of an abundance of caution, replacing a line card that unexpectedly rebooted this morning and appears to be faulty
We appreciate your cooperation and continued business.
Affecting Other - Segra Datacenters (DC74, Charlotte NC)
Dear Valued Customer,
The SEGRA Data Center network team has been planning network upgrades at our Charlotte, NC data centers (CLT1, CLT2, and CLT4) for the past few months. During each night of the maintenance window, we will be performing router reboots to complete these upgrades. Each reboot is only expected to last 5-10 minutes.
These reboots will allow us to provide additional services at the data center, and we do not anticipate any outage extending beyond the standard reboot time of each router itself.
If you have any questions or concerns, please contact H4Y Customer Care by ticket at my.h4y.us and we will be happy to assist you further.
Affecting Other - Los Angeles Psychz
Dear Valued Customers,
Over the course of the past week, we have experienced three power outages (including today) as a result of a faulty PDU unit at the datacenter. Attempted repairs to the PDU at the facility failed, and a replacement unit is on order with an ETA of 1-2 weeks. Meanwhile, the existing unit has been bypassed so it cannot malfunction again. Servers fed by A/B redundant circuits have not been affected. Today, at approximately 1:21PM PST, the Los Angeles (Psychz) datacenter lost utility power entirely. This is a common Los Angeles phenomenon during the hot summer months and typically occurs, without any sign of disruption or service impact, on average once or twice per year. Upon the loss of utility power today, one of two diesel backup generators was activated at the facility and failover occurred within 30 seconds, but the interruption still caused downtime for all downstream branch circuits. Additional downtime may have been observed as individual machines, switches, and routers booted back up. Those with A/B redundancy were unaffected as the healthy battery backups took over. We are working with the datacenter to ensure that this issue is fully resolved and does not cause any more power interruptions. As of 3:40PM PST, utility power has still not been restored and generator power remains in use. The generator is capable of running indefinitely as long as fuel is available. We will provide updates at https://my.h4y.us/serverstatus.php as they become available.
Thank you for your patience and understanding!
Affecting System - Peer1 Los Angeles
Beginning 3/12/2019 at approximately 07:45AM PST (GMT -8), our Peer1 Los Angeles datacenter began to experience intermittent network interruptions, with services flapping.
At approximately 14:49 PST (GMT -8), the Peer1 network began to suffer a prolonged outage. The outage is still under investigation at this time; however, it is believed to originate from an aggregation switch that may have become faulty. A distribution switch had already been replaced as part of the diagnosis. The Peer1 Los Angeles datacenter team has all resources working on the incident; however, there is no expected ETA at this time for complete resolution. We will be posting updates at this status page as we receive them, and you may also monitor https://status.cogecopeer1.com/ directly if desired.
At approximately 16:14 PST (GMT -8), services were restored. An aggregation switch was determined to be faulty, as was the previously replaced distribution switch. JTAC has been engaged to determine the root cause of both switch failures. We will continue to monitor the situation and provide a complete RFO as soon as a root cause analysis is available.
Once the incident has been resolved, clients can open a ticket in the billing department for SLA credit as applicable.
Thank you for your continued patience and understanding.
Affecting Other - Los Angeles Psychz Only
URGENT (Los Angeles Psychz ONLY) - Cooling system issue 1/9/2018
**RESOLVED**
A problem with the A/C / cooling system at the Los Angeles (Psychz) datacenter has been discovered and some equipment has been taken offline while the issue is resolved. We do not have an exact ETA for resolution but we will post updates as we have them. We have a great history of uptime and strive for 100%. Billing credits will be available per our SLA and we sincerely apologize for any problems this is causing you.
UPDATE:
-----------------------------
At approximately 5AM PST, an overheat situation developed due to a faulty A/C unit cooling our core routers. It did not fail over properly, causing automated shutdowns of several systems and damage to fiber optic cabling. Cabinet Z7 in Los Angeles Psychz was first affected by loss of connectivity, and then other cabinets were affected as well. As of 7:15AM PST, we have replaced the faulty equipment and cabling as needed and ensured proper cooling. BGP routes may continue to converge and cause some packet loss as operations return to normal.
Update:
----------------------------
Service is restored and servers are back online now. Sorry for any inconvenience this has caused.
Affecting Other - Bend, OR - Cascade Divide Network
We are working on an issue with LSN in Bend, OR. Traffic will be re-routed shortly.
Affecting System - Cascade Divide
UPDATE 7:34AM PST - All servers up. If you are still experiencing any issues, please file a ticket. Welcome to Bend, OR!
UPDATE 4:32AM PST - 50% of servers are networked, cabled, and powered up. We continue to work on the remainder. All networking and routing has been tested and given the OK.
UPDATE 1:33AM PST - All equipment has arrived at the Bend facility. Staff is unpacking and racking the equipment into our cabinets as planned. Switches will be powered up first followed by servers.
UPDATE 10:03PM PST - IP routing has been tested and confirmed except for a few stragglers. Equipment is still en route and more updates will follow shortly.
UPDATE 7:38PM PST - All servers have been cleanly shut down. We are now safely transporting them to the Bend facility. So far so good! Please check back for updates.
Dear Valued Clients,
This notice is to inform you that we will be relocating equipment at our Cascade Divide Roseburg, OR location to the Cascade Divide Bend, OR facility overnight on Saturday, November 5th, beginning at 6PM PST. This relocation is necessary due to our need for a larger and more redundant facility. The new facility features more transit providers, additional redundancy, more space, and larger capacity in general. This relocation will indeed cause a service interruption on the night of November 5th, but it will be kept as minimal as humanly possible. There will be NO IP space changes or rack-level networking changes, and the new facility is in the same geographic region and state. We chose the most off-peak time while allowing for unforeseen conditions, so the maintenance can complete with plenty of time to spare before the following morning. Your data, IPs, and configurations are NOT at risk. There will be no functional difference to your service, though you can look forward to enhanced redundancy and reliability, plus pricing benefits for bandwidth and much more in the future. Updates as we go will be posted at https://my.h4y.us/serverstatus.php
Details:
If you have a dedicated or colo server, we suggest shutting it down (halting it) before November 5th at 6PM PST. If that is not possible, our staff will begin powering down equipment at that time. We will attempt to shut down all servers cleanly in all cases (using CTRL+ALT+DEL where possible, or by logging in and halting). We will also cleanly shut down all VPS accounts and shared servers. All servers will be booted up and checked once they arrive at the Bend, OR facility.
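For clients halting their own Linux machines ahead of the window, a minimal sketch of a clean shutdown follows. The halt command is echoed rather than executed so the sketch is safe to run as-is; on systemd machines, `systemctl poweroff` is an equivalent alternative.

```shell
# Sketch of a clean pre-maintenance halt (Linux, run as root).
# The final command is echoed here for safety; drop the `echo` on the real server.
sync                     # flush pending filesystem writes to disk
echo "shutdown -h now"   # real run: shutdown -h now   (or: systemctl poweroff)
```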
The physical relocation will begin by 7PM PST. We have at least one staff member assisting PER CABINET, so the equipment will be loaded and relocated as quickly as possible. We expect that equipment will be at the Bend facility and powering up within 4 hours. The maintenance period will extend to 3AM PST on November 6th to account for any unforeseen issues. Please check our network status page for updates and be aware that we will be busy with phone calls and support requests the entire night. If possible, refrain from contacting us for status updates until we announce that servers are racked/cabled and should be powered back up.
Your business is appreciated! Please contact us if you have any questions, concerns, or special instructions for us during this relocation. Our goal is to ensure clients waking up Sunday morning will simply return with ease to business as usual. We will post status updates as they become available on the night of November 5th. We will also post the Bend, OR datasheet and info to our site within the coming days. Thank you for choosing us!
Affecting System - Peer1 LA
Dear Customer,
Please be advised that we will be performing scheduled network maintenance in our Los Angeles (West 7th Street) facility during the following date and time:
From: July 22, 2016 - 00:00 PDT (July 22, 07:00 UTC)
To: July 22, 2016 - 02:00 PDT (July 22, 09:00 UTC)
The window will occur on Friday, July 22nd from 00:00 - 02:00 PDT. During this timeframe, network engineers will reboot a virtual switch chassis. Due to the nature of the maintenance, downtime of around 10 minutes is expected during this window and services will be affected.
This work will be SERVICE IMPACTING. The appropriate staff will be present for the entire duration of this maintenance window.
Affecting Other - Cascade Divide Datacenter
07/18/2016 15:01 PST: We are currently investigating an outage in our Cascade Divide, Roseburg OR datacenter. Datacenter staff is aware of the outage. Updates to follow.
7/18/2016 15:49 PST: We have been advised by the datacenter that their primary router has failed, and failover to the backup router failed. A manual failover is currently taking place.
Affecting System - Cascade Divide Datacenter
At 10:57 AM PST (GMT-8) our NOC team noticed an outage at our Cascade Divide, Roseburg OR datacenter. The NOC made contact with the datacenter and they are aware of the issue, and investigating.
UPDATE: 11:12 AM PST -- There is a confirmed fiber cut which is affecting all circuits (including redundant and protected circuits) in and out of the datacenter.
UPDATE: 11:23 AM PST -- Response teams are en route from all major fiber providers to the facility (LSN, HE, Level3, etc). We do not yet have any ETA.
UPDATE: 12:04 PM PST -- Teams are on site from all carriers and repairs are underway. Substantial damage has been done to the multiple fiber paths and utility poles in the area, including full fiber sheath cuts. Crews are giving a rough estimate of 4-5 hours.
UPDATE: 1:55PM PST -- Crews have updated their original estimate. The new expected resolution time is 7:00PM PST. Our on-scene technicians have shared pictures, available at: http://imgur.com/a/023V2
UPDATE: 4:06PM PST -- Aerial lines have been pulled across roadways, and work is underway on terminating and splicing the fiber bundles. The ETA remains the same.
UPDATE: 5:47PM PST -- Service has been restored to the first fiber bundle. At this time if you have any remaining issues please open a ticket so we can address them.
Affecting System - Peer1 LA
Dear valued clients,
Thank you for your continued business! This is a maintenance notification regarding our Los Angeles (Peer1) location.
On Thursday, September 24th starting at 9PM PST (Pacific Standard Time), we will be relocating some of our equipment in the Los Angeles datacenter into new cabinet space that we have already set up only a few yards away.
The goal of this maintenance period is to provide you a more reliable infrastructure. Brand new switches and PDUs are already installed, and ready to go!
SHARED CUSTOMERS:
This is a notification for the following servers:
CA-Alpha
CA-Bravo
CA-Foxtrot
CA-Kilo
CA-Lima
CA-Zeta
DEDICATED CUSTOMERS:
The new switches will increase our capacity and port limits for all clients. Most dedicated clients will receive free port upgrades.
We will take incredible care in cleanly shutting down all machines where possible (including self-managed servers, where possible, by using CTRL+ALT+DEL). If you have a Windows self-managed server or otherwise would like to ensure the shutdown is clean, please take it down before 9PM PST. We will ensure that all dedicated servers are powered back up and online once the migration is complete.
We anticipate this maintenance window to last 1.5-2 hours.
Please let us know how we can make this as painless as possible for you and your clients. We will be available for any questions you have before, during, and after the move.
Affecting Other - DC74 Datacenter
Thank you for your continued business!
Please be advised of the following service impacting maintenance notification.
Reason for Notification: Router Maintenance
Location: DC74 Data Centers – CLT4 facility, 1612 Cross Beam Drive, Charlotte NC
Start Date & Time: Saturday, April 11th, 2015 starting at 2300 EST
Expected End Date & Time: Saturday, April 11th, 2015 ending at 2330 EST
PLEASE NOTE: This maintenance does not affect shared, reseller, and VPS clients, as you are in a separate datacenter. This affects the DC74 datacenter ONLY.
Description: We will be performing router maintenance potentially requiring a reboot of the routers. This will impact BGP routing and could potentially cause up to a 15-30 minute loss of network connectivity. The date and time chosen is the lowest traffic routing period for the entirety of the datacenter. Please be aware of this maintenance window and potential service disruption.
Please let us know if you have any questions.