Is Snowflake Down? Complete Status Check Guide + Quick Fixes
Snowflake queries timing out?
Can't connect to your data warehouse?
Snowpipe not loading data?
Before escalating to support, verify whether Snowflake is actually down, or whether it's a configuration, network, or credential issue on your end. Here's your complete guide to checking Snowflake status and fixing common problems fast.
Quick Check: Is Snowflake Actually Down?
Don't assume it's Snowflake. 70% of "Snowflake down" reports are actually network issues, warehouse suspension, authentication problems, or query timeout errors.
1. Check Official Sources
Snowflake Status Page:
👉 status.snowflake.com
What to look for:
- ✅ "All Systems Operational" = Snowflake is fine
- ⚠️ "Degraded Performance" = Some services affected
- 🔴 "Major Outage" = Snowflake is down
Real-time updates:
- Compute layer issues (warehouses)
- Storage layer problems
- Metadata service status
- Authentication failures
- Regional cloud outages (AWS/Azure/GCP)
Twitter/X Search:
👉 Search "Snowflake down" on Twitter
Why it works:
- Users report outages instantly
- See if others in your region/cloud are affected
- Snowflake support team responds here
Pro tip: If 200+ tweets in the last hour mention "Snowflake down," it's probably actually down.
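If you want to script this check, the status page can be polled programmatically. The sketch below assumes the page exposes the standard Statuspage JSON shape (a top-level `{"status": {"indicator": ...}}`); the endpoint path and schema are assumptions, so verify them against the live page before relying on this:

```python
# Sketch: classify a status-page payload into the labels used above.
# Assumes the standard Statuspage JSON shape; fetch it yourself, e.g.:
#   requests.get("https://status.snowflake.com/api/v2/status.json").json()
def classify_status(payload: dict) -> str:
    indicator = payload.get("status", {}).get("indicator", "unknown")
    return {
        "none": "All Systems Operational",
        "minor": "Degraded Performance",
        "major": "Partial Outage",
        "critical": "Major Outage",
    }.get(indicator, "Unknown - check the status page manually")
```

A cron job that alerts when the result is anything but "All Systems Operational" saves you from refreshing the page by hand.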
2. Check Service-Specific Status
Snowflake has multiple layers that can fail independently:
| Service | What It Does | Status Check |
|---|---|---|
| Compute Layer | Virtual warehouses, query execution | status.snowflake.com |
| Storage Layer | Cloud storage (S3/Azure Blob/GCS) | Check status page under "Storage" |
| Metadata Service | Account info, object definitions | Check status page under "Metadata" |
| Authentication | Login, SSO, OAuth | Check status page under "Authentication" |
| Snowpipe | Continuous data loading | Check status page under "Snowpipe" |
| Data Sharing | Cross-account sharing | Check status page under "Data Sharing" |
Your service might be down while Snowflake globally is up.
How to check which service is affected:
- Visit status.snowflake.com
- Look for specific service status by region
- Check "Incident History" for recent issues
- Subscribe to status updates (email/SMS)
- Filter by your cloud provider (AWS/Azure/GCP)
3. Check Your Cloud Region
Snowflake runs on multiple cloud providers and regions:
AWS Regions:
- US East (N. Virginia, Ohio)
- US West (Oregon, N. California)
- Canada (Central)
- Europe (Frankfurt, Ireland, London, Paris, Stockholm)
- Asia Pacific (Mumbai, Seoul, Singapore, Sydney, Tokyo)
Azure Regions:
- East US 2, West US 2
- West Europe, North Europe
- Australia East, Southeast Asia
GCP Regions:
- us-central1, us-east4
- europe-west2, europe-west4
- asia-northeast1
How to check your region:
-- Check your current region
SELECT CURRENT_REGION();
-- Check your account region
SHOW REGIONS;
Region-specific outages are common. Check if other regions are working.
4. Test Different Connection Methods
If SnowSQL works but Python connector doesn't, it's likely your driver/library.
| Method | Test Command |
|---|---|
| Web UI | Login at app.snowflake.com |
| SnowSQL | snowsql -a <account> -u <username> |
| Python | import snowflake.connector; conn = snowflake.connector.connect(...) |
| JDBC | Test with your Java application or SQL client |
| ODBC | Test with Tableau, Power BI, or Excel |
Decision tree:
Web UI works + SnowSQL fails → Client/network issue
Web UI fails + SnowSQL fails → Authentication/network issue
Python works + JDBC fails → Driver version mismatch
Nothing works → Snowflake is down (or credentials expired)
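The decision tree above can be encoded as a small helper so on-call scripts can print the likely culprit automatically (the diagnosis strings are illustrative, not exhaustive):

```python
# Sketch: map which connection methods succeeded to a likely root cause,
# following the decision tree in this guide.
def diagnose(web_ui_ok: bool, snowsql_ok: bool,
             python_ok: bool = None, jdbc_ok: bool = None) -> str:
    if not web_ui_ok and not snowsql_ok:
        # Everything failing points past your client stack
        return ("Authentication/network issue; if every method fails, "
                "Snowflake may be down or credentials expired")
    if web_ui_ok and not snowsql_ok:
        return "Client/network issue"
    if python_ok and jdbc_ok is False:
        return "Driver version mismatch"
    return "Connections look healthy - check warehouse and privileges"
```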
Common Snowflake Error Messages (And What They Mean)
Error 250001: "Could Not Connect to Snowflake Backend"
What it means: Can't establish connection to Snowflake.
Causes:
- Network firewall blocking Snowflake
- VPN/proxy interference
- DNS resolution failure
- Wrong account identifier
Quick fixes:
- Check if app.snowflake.com loads in browser
- Verify account identifier: <orgname>-<account_name>
- Check DNS: nslookup <account>.snowflakecomputing.com
- Disable VPN temporarily
- Allow Snowflake through firewall (port 443)
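A surprising share of 250001 errors are just malformed account identifiers. A quick sanity check can rule that out before you touch the firewall; the regexes below are approximations of the two documented shapes, not Snowflake's exact validation rules:

```python
import re

# Sketch: rough shape check for the two account identifier formats.
NEW_FORMAT = re.compile(r"^[A-Za-z0-9_]+-[A-Za-z0-9_]+$")        # <orgname>-<account_name>
LEGACY_FORMAT = re.compile(r"^[A-Za-z0-9]+(\.[a-z0-9-]+){1,2}$")  # <locator>.<region>[.<cloud>]

def looks_like_account_id(value: str) -> bool:
    """True if the string plausibly matches either identifier format."""
    return bool(NEW_FORMAT.match(value) or LEGACY_FORMAT.match(value))
```

A common mistake this catches: pasting the full `https://...snowflakecomputing.com` URL where the bare identifier belongs.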
Error 390144: "Authentication Token Has Expired"
What it means: Your session token expired.
Causes:
- Session idle for >4 hours (default timeout)
- SSO token expired
- Clock skew between client and Snowflake
- Key pair rotation (for key-pair auth)
Quick fixes:
- Re-authenticate (log in again)
- Check system clock is accurate
- For key-pair auth: rotate and update keys
- Increase the CLIENT_SESSION_KEEP_ALIVE parameter:
ALTER SESSION SET CLIENT_SESSION_KEEP_ALIVE = TRUE;
Error 002003: "SQL Compilation Error: Object Does Not Exist"
What it means: Table, view, or database doesn't exist or you lack access.
Causes:
- Object name misspelled
- Wrong database/schema context
- Object dropped
- Insufficient privileges
Quick fixes:
- Check object exists:
SHOW TABLES LIKE 'table_name';
SHOW DATABASES LIKE 'db_name';
- Set correct context:
USE DATABASE my_database;
USE SCHEMA my_schema;
- Check privileges:
SHOW GRANTS ON TABLE my_table;
Error 090105: "Cannot Perform Operation. Warehouse Suspended"
What it means: Virtual warehouse is not running.
Causes:
- Warehouse auto-suspended due to inactivity
- Manually suspended by admin
- Insufficient credits/spending limit reached
- Warehouse dropped
Quick fixes:
- Resume warehouse:
ALTER WAREHOUSE my_warehouse RESUME;
- Check warehouse status:
SHOW WAREHOUSES LIKE 'my_warehouse';
- Enable auto-resume:
ALTER WAREHOUSE my_warehouse SET AUTO_RESUME = TRUE;
Error 100038: "Query Execution Timeout"
What it means: Query took too long to execute.
Causes:
- Large dataset scan
- Missing indexes/clustering
- Warehouse too small
- Resource contention
- Statement timeout parameter
Quick fixes:
- Increase statement timeout:
ALTER SESSION SET STATEMENT_TIMEOUT_IN_SECONDS = 3600; -- 1 hour
- Use larger warehouse:
ALTER WAREHOUSE my_warehouse SET WAREHOUSE_SIZE = 'LARGE';
- Optimize query (add filters, partition pruning)
- Check query profile in Web UI for bottlenecks
Error 090001: "Cannot Execute Statement: Warehouse Size Too Small"
What it means: Query needs more compute resources.
Causes:
- Complex aggregations on large datasets
- Too many concurrent queries
- Memory-intensive operations
- Joins on large tables
Quick fixes:
- Scale up warehouse:
ALTER WAREHOUSE my_warehouse SET WAREHOUSE_SIZE = 'X-LARGE';
- Use multi-cluster warehouse for concurrency:
ALTER WAREHOUSE my_warehouse SET
MIN_CLUSTER_COUNT = 1
MAX_CLUSTER_COUNT = 5
SCALING_POLICY = 'STANDARD';
- Optimize query to reduce memory footprint
Error 000606: "IP Address Not Allowed"
What it means: Your IP is blocked by network policy.
Causes:
- Network policy restricting IP ranges
- VPN IP not whitelisted
- Cloud function IP changed
- Dynamic IP changed
Quick fixes:
- Check current network policies:
SHOW NETWORK POLICIES;
DESC NETWORK POLICY my_policy;
- Temporarily disable (ACCOUNTADMIN only):
ALTER ACCOUNT UNSET NETWORK_POLICY;
- Add your IP to allowed list:
ALTER NETWORK POLICY my_policy SET
ALLOWED_IP_LIST = ('192.168.1.100/32', '10.0.0.0/8');
Quick Fixes: Snowflake Not Working?
Fix #1: Verify Warehouse Status
Virtual warehouses consume credits. Suspended warehouse = no queries run.
Check warehouse status:
SHOW WAREHOUSES;
Look for:
- state: SUSPENDED, STARTED, RESIZING
- size: XSMALL, SMALL, MEDIUM, LARGE, etc.
- auto_resume: TRUE/FALSE
Resume warehouse:
ALTER WAREHOUSE my_warehouse RESUME;
-- Or resume when first query runs
ALTER WAREHOUSE my_warehouse SET AUTO_RESUME = TRUE;
Why warehouses suspend:
- Auto-suspend after X minutes of inactivity (default: 10 min)
- Manually suspended to save credits
- Credit limit reached
Best practice: Enable auto-resume for production warehouses.
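To see why auto-suspend matters, it helps to put numbers on idle time. The per-size credit rates below are the commonly documented hourly rates for standard warehouses; verify them against your edition's pricing before using this for budgeting:

```python
# Sketch: approximate credits/hour per warehouse size (standard warehouses;
# confirm against your contract/pricing docs).
CREDITS_PER_HOUR = {
    "XSMALL": 1, "SMALL": 2, "MEDIUM": 4, "LARGE": 8,
    "XLARGE": 16, "2XLARGE": 32, "3XLARGE": 64, "4XLARGE": 128,
}

def idle_credits_wasted(size: str, idle_minutes: float) -> float:
    """Credits burned by a running-but-idle warehouse before auto-suspend."""
    return CREDITS_PER_HOUR[size.upper()] * idle_minutes / 60
```

For example, a LARGE warehouse idling 30 minutes before suspending burns about 4 credits, which is why dev warehouses deserve a short AUTO_SUSPEND.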
Fix #2: Check Authentication & Credentials
Authentication failures are common after password changes, MFA setup, or SSO changes.
Test authentication:
SnowSQL:
snowsql -a <account> -u <username> -p <password>
Python:
import snowflake.connector
conn = snowflake.connector.connect(
user='<username>',
password='<password>',
account='<account>',
warehouse='<warehouse>',
database='<database>',
schema='<schema>'
)
# Test connection
cur = conn.cursor()
cur.execute("SELECT CURRENT_VERSION()")
print(cur.fetchone())
Common auth issues:
1. Wrong account identifier:
- Format: <orgname>-<account_name> (new format)
- Legacy: <account_locator>.<region>.<cloud> (still supported)
Check your account:
SELECT CURRENT_ACCOUNT();
SELECT CURRENT_ORGANIZATION_NAME();
2. Password expired:
- Snowflake enforces password rotation policies
- Reset password via Web UI
3. SSO issues:
- Test SSO login via Web UI first
- Check with identity provider (Okta, Azure AD, etc.)
- Verify SSO configuration not changed
4. Key-pair authentication:
# Using key-pair auth (more secure for apps); key file path is a placeholder
from cryptography.hazmat.primitives import serialization
import snowflake.connector
with open('rsa_key.p8', 'rb') as f:
    private_key = serialization.load_pem_private_key(f.read(), password=None)
private_key_bytes = private_key.private_bytes(
    encoding=serialization.Encoding.DER,
    format=serialization.PrivateFormat.PKCS8,
    encryption_algorithm=serialization.NoEncryption()
)
conn = snowflake.connector.connect(
    user='<username>',
    account='<account>',
    private_key=private_key_bytes,
    warehouse='<warehouse>'
)
Fix #3: Test Network Connectivity
Snowflake needs port 443 (HTTPS) access to multiple endpoints.
Required endpoints:
# Test DNS resolution
nslookup <account>.snowflakecomputing.com
# Test HTTPS connectivity
curl -v https://<account>.snowflakecomputing.com
# Test authentication endpoint
curl -v https://<account>.snowflakecomputing.com/session/v1/login-request
Firewall requirements:
| Endpoint | Port | Purpose |
|---|---|---|
| <account>.snowflakecomputing.com | 443 | Main account endpoint |
| <account>.<region>.snowflakecomputing.com | 443 | Regional endpoint |
| snowflakecomputing.com | 443 | Global services |
| *.cloudfront.net (AWS) | 443 | Static content |
Corporate networks:
- Snowflake drivers support proxies (via HTTPS_PROXY or connection parameters), but SSL-inspecting proxies often break connections
- Use VPN or direct internet connection
- Contact IT to whitelist Snowflake endpoints
VPN issues:
- Some VPNs block Snowflake traffic
- Try disconnecting VPN temporarily
- Configure split tunneling to exclude Snowflake
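Before blaming the VPN, a bare TCP probe tells you whether port 443 is reachable at all. This is only a connect test (it does not validate TLS or Snowflake itself), and the hostname in the usage line is a placeholder:

```python
import socket

# Sketch: minimal reachability probe for port 443. A TCP connect only -
# it checks the network path, not certificates or authentication.
def can_reach(host: str, port: int = 443, timeout: float = 5.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Usage (placeholder hostname):
# can_reach("<account>.snowflakecomputing.com")
```

If this returns False with the VPN up and True with it down, you have your answer.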
Fix #4: Update Snowflake Connectors/Drivers
Outdated connectors = authentication failures and missing features.
Check versions:
Python:
pip show snowflake-connector-python
Current version (as of Feb 2026): 3.7.x
Update:
pip install --upgrade snowflake-connector-python
JDBC:
- Download latest from Snowflake JDBC Drivers
- Current version: 3.15.x
- Replace JAR file in your project
ODBC:
- Download from Snowflake ODBC Drivers
- Version: 3.2.x
- Reinstall driver
SnowSQL:
# Check version
snowsql -v
# Update
brew upgrade snowsql # Mac
choco upgrade snowsql # Windows
Why update matters:
- Security patches
- Bug fixes
- Support for new authentication methods (OAuth, key-pair)
- Performance improvements
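Driver version checks are easy to automate. The sketch below compares plain "X.Y.Z" strings; the minimum version you pin against is your own choice, and pairing it with `importlib.metadata.version("snowflake-connector-python")` to read the installed version is an assumption about your environment:

```python
# Sketch: compare an installed driver version against a required minimum
# (handles plain "X.Y.Z" version strings only, no pre-release tags).
def version_tuple(v: str) -> tuple:
    return tuple(int(part) for part in v.split("."))

def needs_update(installed: str, minimum: str) -> bool:
    """True if the installed version is older than the minimum."""
    return version_tuple(installed) < version_tuple(minimum)
```

Tuple comparison handles the "3.10 > 3.7" case that naive string comparison gets wrong.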
Fix #5: Check Role & Privileges
"Object does not exist" often means "you can't see it."
Check current role:
SELECT CURRENT_ROLE();
Switch to role with proper access:
USE ROLE ACCOUNTADMIN; -- Full admin (use carefully!)
USE ROLE SYSADMIN; -- Warehouse/database admin
USE ROLE PUBLIC; -- Minimal access
Check granted roles:
SHOW GRANTS TO USER my_user; -- requires a literal username, not a function
Grant necessary privileges:
-- Grant warehouse usage
GRANT USAGE ON WAREHOUSE my_warehouse TO ROLE my_role;
-- Grant database access
GRANT USAGE ON DATABASE my_db TO ROLE my_role;
GRANT USAGE ON SCHEMA my_db.my_schema TO ROLE my_role;
-- Grant table access
GRANT SELECT ON TABLE my_db.my_schema.my_table TO ROLE my_role;
Common privilege issues:
- Default role changed
- Privileges revoked by admin
- Using wrong role for task
- Database/schema not accessible to role
Fix #6: Optimize Query Performance
Slow queries ≠ Snowflake down. Often the culprit is inefficient SQL or an undersized warehouse.
Check query performance:
1. Use Query Profile in Web UI:
- Login → History → Click query
- See execution timeline
- Identify bottlenecks (network, I/O, CPU)
2. Check warehouse load:
-- See currently running queries
SELECT * FROM TABLE(INFORMATION_SCHEMA.QUERY_HISTORY())
WHERE EXECUTION_STATUS = 'RUNNING'
ORDER BY START_TIME DESC;
3. Common performance issues:
a) Full table scan:
-- Bad: scans entire table
SELECT * FROM large_table WHERE date = '2026-02-01';
-- Good: use clustering key or partitions
SELECT * FROM large_table
WHERE date = '2026-02-01'
AND cluster_key_column = 'value';
b) Warehouse too small:
-- Scale up temporarily
ALTER WAREHOUSE my_warehouse SET WAREHOUSE_SIZE = 'LARGE';
-- Run query
-- Scale back down
ALTER WAREHOUSE my_warehouse SET WAREHOUSE_SIZE = 'SMALL';
c) No result cache:
-- Enable result caching (default: ON)
ALTER SESSION SET USE_CACHED_RESULT = TRUE;
d) Clustering issues:
-- Check clustering health
SELECT SYSTEM$CLUSTERING_INFORMATION('my_table', '(date_column)');
-- Manually cluster if needed
ALTER TABLE my_table CLUSTER BY (date_column);
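The temporary scale-up pattern in (b) is easy to forget to undo, which gets expensive. A small context manager guarantees the scale-down even when the query fails; this is a sketch that works with any DB-API cursor, and the warehouse and size names are placeholders:

```python
from contextlib import contextmanager

# Sketch: temporarily resize a warehouse and always restore the original
# size, even if the query inside raises.
@contextmanager
def warehouse_size(cur, warehouse: str, size: str, restore_to: str):
    cur.execute(f"ALTER WAREHOUSE {warehouse} SET WAREHOUSE_SIZE = '{size}'")
    try:
        yield cur
    finally:
        cur.execute(f"ALTER WAREHOUSE {warehouse} SET WAREHOUSE_SIZE = '{restore_to}'")

# Usage sketch:
# with warehouse_size(cur, "my_warehouse", "LARGE", restore_to="SMALL"):
#     cur.execute("SELECT ...")  # heavy query runs on the larger size
```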
Fix #7: Snowpipe Issues
Snowpipe not loading data? Check status and logs.
Check Snowpipe status:
-- See all pipes
SHOW PIPES;
-- Check specific pipe
SHOW PIPES LIKE 'my_pipe';
-- Get pipe status
SELECT SYSTEM$PIPE_STATUS('my_pipe');
Check load history:
-- See recent loads
SELECT * FROM TABLE(INFORMATION_SCHEMA.COPY_HISTORY(
TABLE_NAME => 'my_table',
START_TIME => DATEADD(hours, -24, CURRENT_TIMESTAMP())
));
Common Snowpipe issues:
1. Stage files not detected:
- Check S3 event notifications configured
- Verify IAM permissions for Snowflake
- Check file format matches pipe definition
2. Pipe paused:
-- Resume a paused pipe
ALTER PIPE my_pipe SET PIPE_EXECUTION_PAUSED = FALSE;
-- Re-queue files already staged (covers roughly the last 7 days)
ALTER PIPE my_pipe REFRESH;
3. Authentication expired:
- Regenerate AWS IAM keys
- Update storage integration
- Check Azure SAS token expiry
4. File format errors:
-- Check recent load errors for the pipe
SELECT * FROM TABLE(VALIDATE_PIPE_LOAD(
  PIPE_NAME => 'my_pipe',
  START_TIME => DATEADD(hour, -1, CURRENT_TIMESTAMP())));
Fix #8: Data Sharing Problems
Can't access shared data? Check provider and consumer settings.
Check shares (Provider):
-- See all shares
SHOW SHARES;
-- Check share details
DESC SHARE my_share;
-- Check which accounts have access
SHOW GRANTS TO SHARE my_share;
Check shares (Consumer):
-- See available shares
SHOW SHARES;
-- Create database from share
CREATE DATABASE shared_db FROM SHARE provider_account.my_share;
-- Grant access to roles
GRANT IMPORTED PRIVILEGES ON DATABASE shared_db TO ROLE my_role;
Common sharing issues:
- Share revoked by provider
- Account identifier wrong
- Privileges not granted to roles
- Cross-region sharing limitations (same cloud provider required)
Snowflake Warehouse Not Starting?
Issue: Warehouse Stuck in "RESIZING" State
Troubleshoot:
1. Check for long-running queries:
-- Kill blocking queries
SELECT SYSTEM$CANCEL_QUERY('<query_id>');
2. Force warehouse to suspend and resume:
ALTER WAREHOUSE my_warehouse SUSPEND;
ALTER WAREHOUSE my_warehouse RESUME;
3. Check for resource limits:
- Verify credit quota not exceeded
- Check warehouse size limits in account
4. Contact Snowflake support if stuck >30 minutes
Issue: "Insufficient Resources" Error
Causes:
- Account credit limit reached
- Too many concurrent warehouses
- Warehouse size not available in region
- Resource policy restrictions
Fixes:
1. Check account credits:
-- Requires ACCOUNTADMIN
SHOW RESOURCE MONITORS;
2. Reduce concurrent warehouses:
- Suspend unused warehouses
- Share warehouses between teams (use roles)
3. Use smaller warehouse:
ALTER WAREHOUSE my_warehouse SET WAREHOUSE_SIZE = 'XSMALL';
Snowflake Connection Timeout?
Issue: Connection Drops During Long Queries
Causes:
- Client timeout too short
- Network instability
- Firewall dropping long connections
Fixes:
1. Increase client timeout:
Python:
conn = snowflake.connector.connect(
...,
network_timeout=300, # 5 minutes
login_timeout=60
)
JDBC:
networkTimeout=300000 # milliseconds (5 minutes)
loginTimeout=60 # seconds
2. Enable keep-alive:
ALTER SESSION SET CLIENT_SESSION_KEEP_ALIVE = TRUE;
ALTER SESSION SET CLIENT_SESSION_KEEP_ALIVE_HEARTBEAT_FREQUENCY = 3600; -- 1 hour
3. Use asynchronous queries for long operations:
# Submit query asynchronously
cur = conn.cursor()
cur.execute_async("SELECT * FROM huge_table")
# Check status
query_id = cur.sfqid
status = conn.get_query_status(query_id)
# Fetch results when ready
if conn.is_still_running(conn.get_query_status(query_id)):
    print("Still running...")
else:
    cur.get_results_from_sfqid(query_id)
    results = cur.fetchall()
Snowflake Connector-Specific Issues
Python Connector Issues
Error: "Failed to Get OCSP Response"
Causes:
- Firewall blocking OCSP requests
- OCSP server down
- Certificate validation failure
Fixes:
1. Bypass OCSP check (NOT recommended for production):
import os
os.environ['SNOWFLAKE_OCSP_FAIL_OPEN'] = 'true'
conn = snowflake.connector.connect(...)
2. Update connector:
pip install --upgrade snowflake-connector-python
3. Use a local OCSP response cache:
conn = snowflake.connector.connect(
...,
ocsp_response_cache_filename='/tmp/ocsp_cache'
)
JDBC Connector Issues
Error: "SSL Peer Unverified"
Causes:
- Certificate chain incomplete
- Java keystore missing Snowflake CA
- JDBC driver version mismatch
Fixes:
1. Update JDBC driver to latest version
2. Ensure JVM has proper certificates:
# Verify Java version
java -version
# Update Java if < Java 8u161
3. Add Snowflake CA to Java keystore:
keytool -import -trustcacerts -file snowflake.crt \
-alias snowflake -keystore $JAVA_HOME/jre/lib/security/cacerts
ODBC Connector Issues
Error: "Data Source Name Not Found"
Causes:
- DSN not configured
- ODBC driver not installed
- Wrong DSN name
Fixes:
1. Check ODBC driver installed:
Windows:
- ODBC Data Source Administrator (64-bit)
- System DSN tab → Check for Snowflake
Mac/Linux:
odbcinst -q -d
2. Create DSN configuration:
Windows: Use ODBC Administrator GUI
Mac/Linux: Edit ~/.odbc.ini:
[Snowflake]
Driver = /usr/lib/snowflake/odbc/lib/libSnowflake.so # macOS: libSnowflake.dylib
Server = <account>.snowflakecomputing.com
UID = <username>
PWD = <password>
Database = <database>
Warehouse = <warehouse>
Schema = <schema>
3. Test connection:
isql -v Snowflake
Snowpark Issues
Issue: Snowpark Session Fails to Create
Causes:
- Outdated snowflake-snowpark-python package
- Authentication issues
- Missing dependencies
Fixes:
1. Update Snowpark:
pip install --upgrade snowflake-snowpark-python
2. Create session with error handling:
from snowflake.snowpark import Session
connection_parameters = {
"account": "<account>",
"user": "<username>",
"password": "<password>",
"warehouse": "<warehouse>",
"database": "<database>",
"schema": "<schema>",
"role": "<role>"
}
try:
session = Session.builder.configs(connection_parameters).create()
print(f"Snowpark session created: {session.sql('SELECT CURRENT_VERSION()').collect()}")
except Exception as e:
print(f"Error creating session: {e}")
3. Check Python version (requires Python 3.8+):
python --version
Issue: Snowpark UDF Fails to Run
Causes:
- Missing package dependencies
- Python version mismatch
- Warehouse not running
- Insufficient privileges
Fixes:
1. Specify packages explicitly:
from snowflake.snowpark.functions import udf
# Permanent UDFs need a stage for their packaged code (stage name is a placeholder)
@udf(packages=['pandas', 'numpy'], is_permanent=True, name="my_udf",
     replace=True, stage_location="@my_stage")
def my_function(x: int) -> int:
    return x * 2
2. Check UDF was created:
SHOW USER FUNCTIONS LIKE 'my_udf';
3. Grant usage privileges:
GRANT USAGE ON FUNCTION my_udf(NUMBER) TO ROLE my_role;
Regional Outages: Is It Just My Cloud?
Snowflake runs on multiple cloud providers. Outages are often cloud-specific.
Check cloud provider status:
AWS: 👉 status.aws.amazon.com
Azure: 👉 status.azure.com
GCP: 👉 status.cloud.google.com
How to check for cloud-specific issues:
1. Check DownDetector:
👉 downdetector.com/status/snowflake
Shows:
- Real-time outage reports
- Heatmap of affected regions
- Spike in reports = likely real outage
2. Test from different region:
-- Cloning stays in the same region; use replication for cross-region tests
CREATE DATABASE my_db_clone CLONE my_db;
3. Check if specific to cloud provider:
- Ask teams on different clouds (AWS vs Azure vs GCP)
- Check Snowflake status page filter by cloud
When Snowflake Actually Goes Down
What Happens
Recent major outages:
- November 2023: 4-hour AWS us-east-1 outage (S3 dependency)
- June 2023: 2-hour authentication service disruption
- March 2023: Metadata service degradation (query compilation slow)
Typical causes:
- Cloud provider outages (AWS S3, Azure Storage, GCP)
- Metadata service failures
- Authentication service issues
- Network infrastructure problems
- Software deployment bugs
How Snowflake Responds
Communication channels:
- status.snowflake.com - Primary source
- @SnowflakeDB on Twitter/X
- Email alerts (if subscribed to status page)
- In-app notifications
Timeline:
- 0-15 min: Users report issues on Twitter/Slack
- 15-30 min: Snowflake acknowledges on status page
- 30-90 min: Updates posted every 20-30 min
- Resolution: Usually 1-4 hours for major outages
What to Do During Outages
1. Check if failover possible:
- Use database replicas in different regions
- Switch to secondary Snowflake account
- Use cached results if available
2. Communicate with stakeholders:
- Share status page link
- Provide ETA from Snowflake
- Plan alternate workflows
3. Monitor status page:
- status.snowflake.com
- Subscribe to updates (email/SMS)
4. Document impact:
- Note which queries/pipelines affected
- Prepare incident report
- Review SLA credits if applicable
Snowflake Down Checklist
Follow these steps in order:
Step 1: Verify it's actually down
- Check Snowflake Status
- Check API Status Check
- Search Twitter: "Snowflake down"
- Test Web UI login at app.snowflake.com
Step 2: Quick fixes (if Snowflake is up)
- Check warehouse status (SHOW WAREHOUSES)
- Resume warehouse if suspended
- Verify credentials (test SnowSQL connection)
- Check network connectivity (ping account endpoint)
- Verify role and privileges
Step 3: Connector troubleshooting
- Update connectors to latest version
- Test different connection methods (Web UI, SnowSQL, Python)
- Check for authentication errors
- Verify account identifier format
- Test with key-pair auth if using SSO
Step 4: Performance troubleshooting
- Check query profile in Web UI
- Look for long-running queries blocking warehouse
- Optimize SQL (add filters, use clustering)
- Scale up warehouse if needed
- Check for result cache misses
Step 5: Escalation
- Check cloud provider status (AWS/Azure/GCP)
- Review recent account changes
- Contact Snowflake support: community.snowflake.com
- Engage your Snowflake account team
Prevent Future Issues
1. Set Up Monitoring & Alerts
Don't wait for users to report issues.
Monitor key metrics:
-- Warehouse credit usage
SELECT
WAREHOUSE_NAME,
SUM(CREDITS_USED) AS total_credits
FROM SNOWFLAKE.ACCOUNT_USAGE.WAREHOUSE_METERING_HISTORY
WHERE START_TIME >= DATEADD(day, -7, CURRENT_TIMESTAMP())
GROUP BY WAREHOUSE_NAME
ORDER BY total_credits DESC;
-- Failed queries
SELECT
ERROR_CODE,
ERROR_MESSAGE,
COUNT(*) AS error_count
FROM SNOWFLAKE.ACCOUNT_USAGE.QUERY_HISTORY
WHERE START_TIME >= DATEADD(day, -1, CURRENT_TIMESTAMP())
AND ERROR_CODE IS NOT NULL
GROUP BY ERROR_CODE, ERROR_MESSAGE
ORDER BY error_count DESC;
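Once the failed-query rows are fetched, surfacing the noisiest errors is a one-liner worth scripting for alerts. The row shape below, (error_code, error_message) tuples, is an assumption about how you fetch the results:

```python
from collections import Counter

# Sketch: given (error_code, error_message) rows from QUERY_HISTORY,
# return the most frequent errors so alerting can key off them.
def top_errors(rows, limit: int = 5):
    counts = Counter((code, msg) for code, msg in rows if code is not None)
    return counts.most_common(limit)
```

Feeding this into a daily alert catches spikes (say, a flood of 390144 token errors after an SSO change) before users complain.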
Set up alerts:
- Use API Status Check for uptime monitoring
- Subscribe to Snowflake Status
- Monitor credit consumption thresholds
- Alert on query failure spikes
2. Implement Auto-Resume & Auto-Suspend
Prevent "warehouse suspended" errors and save credits.
ALTER WAREHOUSE my_warehouse SET
AUTO_SUSPEND = 600 -- Suspend after 10 min idle
AUTO_RESUME = TRUE; -- Auto-start on query
-- (INITIALLY_SUSPENDED is a CREATE WAREHOUSE option, not valid in ALTER)
Best practices:
- Production warehouses: AUTO_RESUME = TRUE
- Dev/test warehouses: Lower AUTO_SUSPEND (60-300 seconds)
- Batch processing: Manually control suspend/resume
3. Use Resource Monitors
Prevent unexpected credit consumption.
-- Create resource monitor (ACCOUNTADMIN required)
CREATE RESOURCE MONITOR my_monitor WITH
CREDIT_QUOTA = 1000 -- 1000 credits per month
FREQUENCY = MONTHLY
START_TIMESTAMP = '2026-02-01 00:00'
TRIGGERS
ON 75 PERCENT DO NOTIFY -- Alert at 75%
ON 100 PERCENT DO SUSPEND -- Suspend at 100%
ON 110 PERCENT DO SUSPEND_IMMEDIATE; -- Kill queries at 110%
-- Apply to warehouses
ALTER WAREHOUSE my_warehouse SET RESOURCE_MONITOR = my_monitor;
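To sanity-check a trigger ladder before deploying it, you can simulate which actions fire at a given spend. This sketch mirrors the NOTIFY/SUSPEND/SUSPEND_IMMEDIATE thresholds defined above; it is a planning aid, not how Snowflake evaluates monitors internally:

```python
# Sketch: simulate the resource monitor trigger ladder defined above.
TRIGGERS = [(75, "NOTIFY"), (100, "SUSPEND"), (110, "SUSPEND_IMMEDIATE")]

def fired_actions(credit_quota: float, credits_used: float):
    """Return the actions whose thresholds the current usage has crossed."""
    pct = credits_used / credit_quota * 100
    return [action for threshold, action in TRIGGERS if pct >= threshold]
```

Running a few what-if numbers through this (e.g. projected month-end usage) shows whether 75% is early enough warning for your team to react before the hard suspend.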
4. Optimize Query Performance
Prevent timeouts and reduce costs.
Best practices:
1. Use clustering keys for large tables:
ALTER TABLE large_table CLUSTER BY (date_column, category);
2. Partition pruning:
-- Bad: scans all partitions
SELECT * FROM sales WHERE amount > 1000;
-- Good: uses partition column
SELECT * FROM sales
WHERE date >= '2026-02-01' AND amount > 1000;
3. Materialized views for repeated aggregations:
CREATE MATERIALIZED VIEW sales_summary AS
SELECT date, region, SUM(amount) as total_sales
FROM sales
GROUP BY date, region;
4. Result caching:
-- Already enabled by default, but verify
ALTER SESSION SET USE_CACHED_RESULT = TRUE;
5. Multi-Region Failover
For critical workloads, replicate across regions.
-- Replicate database to another region
ALTER DATABASE my_db ENABLE REPLICATION TO ACCOUNTS org_name.account2;
-- Refresh replica
ALTER DATABASE my_db_replica REFRESH;
-- Promote replica to primary (failover)
ALTER DATABASE my_db_replica PRIMARY;
Considerations:
- Additional costs for replication
- Replication lag (minutes to hours)
- Cross-region and cross-cloud replication supported, but verify feature availability in your target region
- Manual failover process
Key Takeaways
Before assuming Snowflake is down:
- ✅ Check Snowflake Status
- ✅ Test Web UI at app.snowflake.com
- ✅ Verify warehouse is running (SHOW WAREHOUSES)
- ✅ Search Twitter for "Snowflake down"
Common fixes:
- Resume suspended warehouse (fixes 40% of issues)
- Update connectors/drivers to latest version
- Check authentication (password, SSO, key-pair)
- Verify network connectivity and firewall rules
- Optimize slow queries (clustering, filtering, larger warehouse)
Connector issues:
- Python: Update snowflake-connector-python
- JDBC: Download latest driver JAR
- ODBC: Reinstall driver, check DSN configuration
- SnowSQL: Update via package manager
Performance issues:
- Use Query Profile to identify bottlenecks
- Scale warehouse up temporarily for heavy queries
- Implement clustering keys for large tables
- Enable result caching (on by default)
If Snowflake is actually down:
- Monitor status.snowflake.com
- Check cloud provider status (AWS/Azure/GCP)
- Usually resolved within 1-4 hours
- Review SLA for credit compensation
Prevent future issues:
- Enable AUTO_RESUME on production warehouses
- Set up resource monitors for credit limits
- Monitor query failures and warehouse usage
- Implement multi-region replication for critical data
Remember: Most "Snowflake down" issues are suspended warehouses, authentication problems, or network/firewall issues. Check these first before assuming a platform outage.
Need real-time Snowflake status monitoring? Track Snowflake uptime with API Status Check - Get instant alerts when Snowflake goes down.
Related Resources
- Is Snowflake Down Right Now? β Live status check
- Snowflake Outage History β Past incidents and timeline
- Snowflake vs Databricks Uptime β Which platform is more reliable?
- API Outage Response Plan β How to handle downtime like a pro
API Status Check
Stop checking API status pages manually
Get instant email alerts when OpenAI, Stripe, AWS, and 100+ APIs go down. Know before your users do.
Free dashboard available Β· 14-day trial on paid plans Β· Cancel anytime
Browse Free Dashboard →