All Systems Operational
Network Mgmt: Operational (99.99 % uptime over the past 90 days)
my.auvik.com: Operational (100.0 % uptime over the past 90 days)
us1.my.auvik.com: Operational (100.0 % uptime over the past 90 days)
us2.my.auvik.com: Operational (100.0 % uptime over the past 90 days)
us3.my.auvik.com: Operational (100.0 % uptime over the past 90 days)
us4.my.auvik.com: Operational (100.0 % uptime over the past 90 days)
eu1.my.auvik.com: Operational (99.92 % uptime over the past 90 days)
eu2.my.auvik.com: Operational (100.0 % uptime over the past 90 days)
au1.my.auvik.com: Operational (100.0 % uptime over the past 90 days)
ca1.my.auvik.com: Operational (100.0 % uptime over the past 90 days)
us5.my.auvik.com: Operational (100.0 % uptime over the past 90 days)
us6.my.auvik.com: Operational (100.0 % uptime over the past 90 days)
Auvik TrafficInsights: Operational (100.0 % uptime over the past 90 days)
Auvik Website (www.auvik.com): Operational
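
For context on the uptime figures above, a 90-day window contains 129,600 minutes, so each percentage maps to a small downtime budget. The short Python sketch below does the arithmetic; the helper function is illustrative and not part of Auvik's tooling.

# Convert an uptime percentage measured over a 90-day window into downtime minutes.
def downtime_minutes(uptime_pct: float, window_days: int = 90) -> float:
    total_minutes = window_days * 24 * 60          # 129,600 minutes in 90 days
    return total_minutes * (1 - uptime_pct / 100)

print(round(downtime_minutes(99.99), 1))   # Network Mgmt: ~13.0 minutes of downtime
print(round(downtime_minutes(99.92), 1))   # eu1.my.auvik.com: ~103.7 minutes of downtime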
Scheduled Maintenance
Scheduled Maintenance Jun 15, 2024 07:00-10:00 EDT
We will be upgrading the Auvik cloud and your Auvik collectors. The session will take about two hours. During this time, you may not be able to log into Auvik. There may also be interruptions to your network monitoring.

If you have any questions, please contact support@auvik.com.

Posted on Jun 13, 2024 - 16:40 EDT
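
For teams outside the Eastern time zone, the maintenance window above converts from EDT to UTC as in the minimal Python sketch below (standard-library zoneinfo only; the dates and times are taken from the notice, and the variable names are illustrative).

from datetime import datetime
from zoneinfo import ZoneInfo

# Maintenance window from the notice above: Jun 15, 2024, 07:00-10:00 EDT.
start_edt = datetime(2024, 6, 15, 7, 0, tzinfo=ZoneInfo("America/New_York"))
end_edt = datetime(2024, 6, 15, 10, 0, tzinfo=ZoneInfo("America/New_York"))

print(start_edt.astimezone(ZoneInfo("UTC")))  # 2024-06-15 11:00:00+00:00
print(end_edt.astimezone(ZoneInfo("UTC")))    # 2024-06-15 14:00:00+00:00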
Past Incidents
Jun 14, 2024

No incidents reported today.

Jun 13, 2024

No incidents reported.

Jun 12, 2024

No incidents reported.

Jun 11, 2024

No incidents reported.

Jun 10, 2024

No incidents reported.

Jun 9, 2024

No incidents reported.

Jun 8, 2024

No incidents reported.

Jun 7, 2024
Resolved - Any remaining lag has a negligible impact for customers on the US2 cluster and should resolve on its own.

All other clusters are running optimally.

We are closing this incident at this time.

A Root Cause Analysis (RCA) will follow after completing a full review.

Jun 7, 09:05 EDT
Update - The US1 cluster has fully recovered.

Most of the US2 cluster's clients have fully recovered. The delay is only in processing interface information in the map and only applies to a small subsection of clients. The recovery estimate for this final portion of the data lag depends on the volume of data the map component receives today.

All other parts of the product are running normally.

We continue actively monitoring the situation while waiting for this final component to recover from its data lag.

We understand the impact of this incident on your experience with the product and sincerely apologize for the inconvenience it has caused.

Jun 7, 05:53 EDT
Update - We’ve identified the source of the performance issue with Map discovery and rendering for customers running under the US1 and US2 clusters. We are waiting as the informational lag works through the data backlog.

Most clients on the US1 cluster have fully recovered. However, a very small subsection of clients still has a data lag. Due to a heavy influx of data, the cluster is processing data but the backlog size is holding steady. The delay is only in processing interface information in the map.

The US2 cluster is still delayed, but its lag continues to decrease. Customers are only experiencing interface information delays in the map.

We anticipate a full recovery by 11:00 UTC (7:00 EDT) tomorrow.

Rest assured, the dashboard information and alerts remain unaffected, providing up-to-date and accurate information.

We are diligently and actively monitoring the situation. We are waiting for the remaining components to catch up and be current.

We understand the impact of this incident on your experience with the product and sincerely apologize for the inconvenience it has caused.

Jun 6, 18:44 EDT
Update - We’ve identified the source of the performance issue with Map discovery and rendering for customers running under the US1 and US2 clusters. We are waiting as the informational lag works through the data backlog.

Most clients on the US1 cluster have fully recovered. However, a very small subsection of clients still has a data lag. Due to a heavy influx of data, the cluster is processing data but the backlog size is holding steady. The delay is only in processing interface information in the map.

The US2 cluster is still delayed, but its lag continues to decrease. Customers are only experiencing interface information delays in the map.

Rest assured, the dashboard information and alerts remain unaffected, providing up-to-date and accurate information.

We are diligently and actively monitoring the situation. We are waiting for the remaining components to catch up and be current.

We understand the impact of this incident on your experience with the product and sincerely apologize for the inconvenience it has caused.

Jun 6, 13:58 EDT
Update - We’ve identified the source of the performance issue with Map discovery and rendering for customers running under the US1 and US2 clusters. We are waiting as the informational lag works through the data backlog.

The US1 cluster has almost recovered, with only a small subset of customers experiencing interface information delays in the map.

The US2 cluster is still delayed; customers are only experiencing interface information delays in the map.

Customers on the US3 and US5 clusters have fully recovered since the last update.

Dashboard information and alerts are not affected and are providing up-to-date information.

We are actively monitoring the situation and waiting for the remaining components to catch up and be current.

We understand the impact of this incident on your experience with the product and we sincerely apologize for the inconvenience it has caused.

Jun 6, 05:24 EDT
Update - We’ve identified the source of the performance issue with Map discovery and rendering for customers running under the US clusters (US1, US2, US3, US5). We are waiting as the informational lag works through the data backlog.

Dashboard information and alerts are not affected and are providing up-to-date information.

The maps for a small percentage of customers on the US5 cluster still show delayed inferred connections, but the rest of the map should be current. The inferred connection delay should conclude in the next several hours.

Clients on US clusters US1, US2, and US3 continue to decrease their lag. We now estimate it will take another 10-12 hours for all clusters' Map discovery and rendering to be current again. Several components are again current in the map. We are waiting for the remaining components to catch up and be current. We continue to monitor this.

We understand the impact this is having on your experience with the product and apologize for any impact this may be having on you and your clients.

Jun 5, 17:42 EDT
Update - We’ve identified the source of the performance issue with Map discovery and rendering for customers running under the US clusters (US1, US2, US3, US5). We are waiting as the informational lag works through the data backlog.

Dashboard information and alerts are not affected and are providing up-to-date information.

The maps for customers on the US5 cluster still show delayed inferred connections, but the rest of the map should be current. The inferred connection delay is still dropping and should become current in the next 4 hours.

Clients on US clusters US1, US2, and US3 continue to decrease their lag. We now estimate it will take another 18-20 hours for all clusters' Map discovery and rendering to be current again. We will continue to monitor this.

We understand the impact this is having on your experience with the product and apologize for any impact this may be having on you and your clients.

Jun 5, 11:50 EDT
Update - We’ve identified the source of the performance issue with Map discovery and rendering for customers running under the US clusters (US1, US2, US3, US5). We are waiting as the informational lag works through the data backlog.

Dashboard information and alerts are not affected and are providing up-to-date information.

The maps for customers on the US5 cluster still show delayed inferred connections, but the rest of the map should be current. The inferred connection delay is still dropping and should become current in the next 4-8 hours.

Clients on US clusters US1, US2, and US3 are continuing to decrease their lag. We estimate it will take another 24 hours for all clusters' Map discovery and rendering to be current again. We will continue to monitor it.

We apologize for the impact this may be causing you and your clients.

Jun 5, 06:00 EDT
Update - We’ve identified the source of the performance issue with Map discovery and rendering for customers running under the US clusters (US1, US2, US3, US5). We are waiting as the informational lag works through the data backlog.

Dashboard information and alerts are not affected and are providing up-to-date information.

We still expect clients on the US5 cluster to recover from their lag sometime during the evening, most likely in the next four hours.

Clients on US clusters US1, US2, and US3 are slowly decreasing their lag. We do not have an estimate of when their Map discovery and rendering will be current, but we continue monitoring it closely.

We apologize for the impact this may be causing you and your clients.

We continue to monitor progress and will post relevant updates.

Jun 4, 19:22 EDT
Update - We’ve identified the source of the performance issue with Map discovery and rendering for customers running under the US clusters (US1, US2, US3, US5). We are waiting as the informational lag works through the data backlog.

Dashboard information and alerts are not affected and are providing up-to-date information.

We expect clients on the US5 cluster to recover from their lag at some point during the evening.

Clients on US clusters US1, US2, and US3 are slowly decreasing their lag. We do not have an estimate of when their Map discovery and rendering will be current, but we continue monitoring it closely.

We apologize for the impact this may be causing you and your clients.

We continue to monitor progress and will post relevant updates.

Jun 4, 15:13 EDT
Monitoring - We’ve identified the source of the performance issue with Map discovery and rendering for customers running under the US clusters (US1, US2, US3, US5). We are waiting as the informational lag works through the data backlog.

The lag for clients on the US4 cluster should clear within the next hour.

Dashboard information and alerts are not affected and are providing up-to-date information.

We apologize for the impact this may be causing you and your clients.

We continue to monitor progress and will post updates throughout the delay.

Jun 4, 12:57 EDT
Identified - We’ve identified the source of the performance issue with Map discovery and rendering for customers running under the US clusters (US1, US2, US3, US4, US5). We are waiting as the informational lag works its way through the data backlog.

Dashboard information and alerts are not affected and are providing up-to-date information.

All relevant resources have been upgraded to provide the most expedient resolution.

We apologize for the impact this may be causing you and your clients.

We will continue to monitor the progress and post updates throughout the day.

Jun 4, 10:31 EDT
Jun 6, 2024
Jun 5, 2024
Jun 4, 2024
Jun 3, 2024
Resolved - We've applied the fix for the performance disruption with services for clients on the US4 cluster. It is related to the incident earlier today. Tenants in the other clusters are not affected. There may be some associated lag as the system comes up to full speed. Alerting will become active for US4 clients at 22:00 UTC (18:00 EDT).

We apologize for the issues today.

The source of the disruption has been resolved. Services have been fully restored.

A Root Cause Analysis (RCA) will follow after a full review and will be posted to the original incident.

Jun 3, 17:34 EDT
Monitoring - We've identified the source of the performance disruption with services for clients on the US4 cluster. It is related to the incident earlier today. Tenants in the other clusters are not affected. The maintenance window began at 20:00 UTC (16:00 EDT), during which updates to the UI and alerts will be delayed. This maintenance window is expected to last for around 90 minutes.

We apologize for the continued issues.

We will continue to provide updates as they become available.

Jun 3, 16:24 EDT
Identified - We've identified the source of the performance disruption with services for clients on the US4 cluster. It is related to the incident earlier today. Tenants in the other clusters are not affected. We will begin a maintenance window at 20:00 UTC (16:00 EDT), during which updates to the UI and alerting will be delayed.

We apologize for the continued issues.

We will continue to provide updates as they become available.

Jun 3, 15:56 EDT
Investigating - We’re experiencing a performance disruption with services for clients on the US4 cluster. It is related to the incident earlier today. Tenants in the other clusters are not affected.

We apologize for the continued issues.

We will continue to provide updates as they become available.

Jun 3, 15:33 EDT
Resolved - The fix for service disruption with site performance and device discovery has been fully deployed and implemented. The source of the disruption has been resolved, and services have been fully restored. There may be a slight delay with some connectors reconnecting and map updating, but this will resolve itself.

Delays with alerts have ended, and sites are again communicating as normal.

A Root Cause Analysis (RCA) will follow after a full review.

Jun 3, 14:08 EDT
Update - We’ve identified the source of the service disruption with site performance and device discovery. In some cases, this may include the Map and Network dashboard.

We have deployed the hotfix. The application is taking longer to recover than anticipated but is recovering. We are anticipating another hour for all sites to recover.

During this window, alerting and site communication may be interrupted or delayed. We apologize for this inconvenience.
We will monitor the progress and provide updates here and in the banner on the website.

Jun 3, 13:20 EDT
Monitoring - We’ve identified the source of the service disruption with site performance and device discovery. In some cases, this may include the Map and Network dashboard.

We have begun deploying the hotfix, which is estimated to take approximately two hours to fully deploy.

During this window, alerting and site communication may be interrupted or delayed. We apologize for this inconvenience.
We will monitor the progress and provide updates here, as well as in the banner on the website.

Jun 3, 11:31 EDT
Update - We’ve identified the source of the service disruption with site performance and device discovery. In some cases, this may include the Map and Network dashboard. We will deploy a hotfix to the affected clusters starting at 15:30 UTC (11:30 EDT), which will take approximately two hours to deploy.
During this window, alerting and site communication may be delayed. We apologize for this inconvenience.
We will monitor the progress and provide updates here, as needed.

Jun 3, 11:01 EDT
Identified - We’ve identified the source of the service disruption with site performance and device discovery. In some cases, this may include the Map and Network dashboard. We are currently testing a fix for the issue and working to restore service as quickly as possible.
Jun 3, 10:04 EDT
Jun 2, 2024
Completed - The scheduled maintenance has been completed.
Jun 2, 22:16 EDT
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Jun 2, 21:00 EDT
Scheduled - We need to complete unscheduled preventative maintenance on the Auvik eu1 cluster. The session will take about one to two hours. During this time, you may not be able to log into Auvik. There may also be interruptions to your network monitoring.
Jun 2, 19:59 EDT
Jun 1, 2024
Completed - The scheduled maintenance has been completed.
Jun 1, 09:43 EDT
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Jun 1, 07:00 EDT
Scheduled - We will be upgrading the Auvik cloud and your Auvik collectors. The session will take about two hours. During this time, you may not be able to log into Auvik. There may also be interruptions to your network monitoring.

If you have any questions, please contact support@auvik.com.

May 29, 14:15 EDT
May 31, 2024

No incidents reported.