1. The Macroeconomic Context of Data Platform Selection
The transition from on-premises data centers to cloud-native architectures represents one of the most profound shifts in enterprise IT economics over the last two decades. Historically, the economics of data warehousing were governed by the principles of Capital Expenditure (CapEx). Organizations would forecast their capacity requirements for a three-to-five-year horizon, procure massive appliances from vendors such as Teradata or Netezza, and depreciate these assets over time. In this model, the marginal cost of a query was effectively zero; the hardware was already paid for, and the constraint was strictly capacity—if the system was full, no new work could be done until a forklift upgrade occurred.
Today, the dominant economic model is Operational Expenditure (OpEx), characterized by utility billing. The “pay-as-you-go” promise of the cloud suggests that costs scale linearly with value. However, the reality revealed by Total Cost of Ownership (TCO) analysis is far more complex. Cloud Data Warehouses (CDWs) like Snowflake, Google BigQuery, and Amazon Redshift have introduced variable pricing models where architectural inefficiencies, poor query optimization, and unmanaged concurrency can lead to exponential cost growth. The constraint is no longer capacity; it is budget.
This report provides an exhaustive economic analysis of the three market leaders in the CDW space. It moves beyond superficial list-price comparisons to dissect the underlying architectural mechanisms that drive billing. Furthermore, it localizes this analysis to the London (UK) region—a market characterized by higher unit costs than the United States due to energy, real estate, and taxation premiums—providing a realistic financial baseline for European enterprises.1 By modeling costs across distinct workload patterns ranging from steady-state reporting to bursty data science exploration, this document aims to serve as a definitive guide for architectural decision-makers in 2025 and beyond.
1.1 The Regional Economic Baseline: Why London Pricing Matters
Global pricing comparisons often default to US-East (Northern Virginia) due to its status as the cheapest and most feature-rich region. However, for organizations operating under GDPR mandates or data sovereignty requirements in the United Kingdom, relying on US pricing for budgeting leads to significant variances.
Research indicates a consistent “London Premium” across all three vendors. For instance, Snowflake’s Standard Edition credits cost $2.00 in US-East but $2.70 in the London AWS region, a markup of 35%.2 Similarly, Google BigQuery’s analysis charges are $5.00 per TB in the US but approximately $6.25 per TB in London.3 Amazon Redshift also exhibits regional variance, with node hourly rates reflecting the higher operational costs of the eu-west-2 zone.5
This report strictly utilizes London-specific pricing data where available to ensure the financial models presented reflect the reality for UK-hosted workloads. This distinction is vital; a 35% variance is often the difference between a project coming in under budget and one requiring executive intervention.
1.2 Defining the Total Cost of Ownership (TCO)
To compare these platforms accurately, one must look beyond the “sticker price” of compute and storage. A comprehensive TCO model comprises four distinct layers:
- Compute Costs: The direct cost of processing queries, whether billed by the second (Snowflake, Redshift Serverless), by the hour (Redshift Provisioned), or by the byte (BigQuery On-Demand).
- Storage Costs: The cost of retaining data, which now includes nuance regarding compression (Logical vs. Physical billing in BigQuery), long-term retention tiers, and the hidden costs of Time Travel and Fail-safe retention.7
- Data Transfer & Ingestion: The costs associated with moving data into the warehouse (Ingestion) and extracting results (Egress). This includes specific mechanisms like Snowpipe charges 9 or Redshift’s interaction with Kinesis streams.10
- Operational Overhead: The “Human TCO.” This refers to the engineering hours required to manage keys, vacuum tables, resize clusters, and optimize queries. While Snowflake often commands a premium for its “zero-management” ethos, Redshift and BigQuery have introduced serverless and autonomic features to close this gap.
2. Snowflake: The Utility Model and the Cost of Elasticity
Snowflake’s market dominance is built upon its “Multi-Cluster Shared Data” architecture, which fundamentally decoupled compute from storage before its competitors fully embraced the paradigm. This architectural decision dictates its pricing model: a pure utility model based on the concept of the “Credit.”
2.1 The Credit Economy and Warehouse Physics
In Snowflake, the unit of currency is the Credit. The price of a credit is determined by the Edition (Standard, Enterprise, Business Critical) and the Region.
London Region (AWS) Credit Pricing 2:
| Edition | Price Per Credit | Key Differentiators |
| --- | --- | --- |
| Standard | $2.70 | Base SQL functionality, 1-day Time Travel. |
| Enterprise | $4.00 | Multi-cluster warehouses, 90-day Time Travel, Materialized Views. |
| Business Critical | $5.40 | Private Link support, HIPAA/PCI compliance, Failover/Failback. |
The consumption of these credits is driven by Virtual Warehouses—clusters of stateless compute resources. Snowflake employs a T-shirt sizing model where each size increment doubles both the compute power and the credit burn rate.
Warehouse Consumption Rates 1:
- X-Small: 1 Credit/Hour ($4.00/hr on Enterprise)
- Small: 2 Credits/Hour ($8.00/hr)
- Medium: 4 Credits/Hour ($16.00/hr)
- …
- 2X-Large: 32 Credits/Hour ($128.00/hr)
- 4X-Large: 128 Credits/Hour ($512.00/hr)
The economic implication of this linearity is that query performance (for complex queries) theoretically scales linearly with cost. If a query takes 10 minutes on a Small warehouse (2 credits/hr), it costs roughly $1.33. If a Large warehouse (8 credits/hr) executes it in 2.5 minutes, the cost remains ~$1.33. This linearity holds until the query is no longer CPU-bound or until metadata overhead dominates.
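The arithmetic behind this linearity is simple enough to script. The following sketch (illustrative only; it assumes the London Enterprise rate of $4.00 per credit and the consumption ladder above, and ignores the 60-second minimum discussed in the next section) reproduces the $1.33 example:

```python
# Illustrative model of Snowflake's linear credit arithmetic; not a Snowflake API.
CREDIT_PRICE_USD = 4.00  # London, Enterprise Edition

CREDITS_PER_HOUR = {
    "XS": 1, "S": 2, "M": 4, "L": 8,
    "XL": 16, "2XL": 32, "3XL": 64, "4XL": 128,
}

def query_cost(size: str, runtime_seconds: float) -> float:
    """Cost of one query, ignoring the 60-second minimum covered in Section 2.2."""
    return CREDITS_PER_HOUR[size] * (runtime_seconds / 3600) * CREDIT_PRICE_USD

# A 10-minute query on a Small costs the same as a 2.5-minute query on a Large.
print(round(query_cost("S", 600), 2))   # 1.33
print(round(query_cost("L", 150), 2))   # 1.33
```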
2.2 The “60-Second Tax” and Auto-Suspend Dynamics
A critical and often misunderstood component of Snowflake’s billing is the minimum billing increment. Whenever a Virtual Warehouse is started (or resumed from suspension), Snowflake charges for a minimum of 60 seconds, regardless of whether the query took 500 milliseconds or 50 seconds.11 Following the first minute, billing becomes per-second.
This mechanism has profound implications for workload patterns. Consider a “drip-feed” query pattern in which a reporting tool fires a single, sub-second query every 2 minutes.
- Scenario: 1 query (duration: 1s) every 2 minutes.
- Behavior: The warehouse wakes up, executes for 1s, and waits. If Auto-Suspend is set to 1 minute, the warehouse runs for 60s, suspends, and then wakes up 60s later for the next query.
- Utilization: The warehouse is effectively running 30 minutes out of every hour to process 30 seconds of actual work.
- Cost Efficiency: Extremely low (1.6% efficiency).
This “Start-Stop” penalty necessitates careful configuration of the Auto-Suspend setting. While Snowflake defaults often suggest 10 minutes, aggressive cost optimization strategies recommend lowering this to 60 seconds for ad-hoc warehouses to minimize “tail” costs—the period where the warehouse sits idle burning credits after the last query finishes.13 However, setting it too low can induce “cache thrashing,” where the local SSD cache of the warehouse is lost upon suspension, forcing the next query to pull data from remote S3 storage, thereby slowing performance and potentially increasing runtime costs.
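A rough simulation of the drip-feed pattern makes the penalty concrete. The sketch below is a simplification, assuming an X-Small warehouse at the London Enterprise rate, one 1-second query every 2 minutes, and a 60-second Auto-Suspend; it is not a billing API.

```python
# Simplified drip-feed model: each cycle the warehouse runs a 1-second query,
# idles until Auto-Suspend fires, then suspends until the next query arrives.
CREDIT_PRICE_USD = 4.00   # London, Enterprise Edition
CREDITS_PER_HOUR = 1      # X-Small warehouse
QUERY_INTERVAL_S = 120
QUERY_RUNTIME_S = 1
AUTO_SUSPEND_S = 60

def monthly_drip_feed_cost(hours_per_month: float = 730) -> float:
    cycles = hours_per_month * 3600 / QUERY_INTERVAL_S
    # Billed time per cycle: the query plus the idle tail (never less than 60 s).
    billed_seconds = max(60, QUERY_RUNTIME_S + AUTO_SUSPEND_S)
    billed_hours = cycles * billed_seconds / 3600
    return billed_hours * CREDITS_PER_HOUR * CREDIT_PRICE_USD

print(round(monthly_drip_feed_cost()))   # ~1,484 USD/month for ~6 hours of real work
print(f"{QUERY_RUNTIME_S / (QUERY_RUNTIME_S + AUTO_SUSPEND_S):.1%} useful work")  # 1.6%
```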
2.3 Storage Economics: The Hidden Multipliers
Snowflake storage in London is priced at approximately $23 per TB per month (On-Demand).1 While this appears competitive with Amazon S3 standard rates, Snowflake’s architecture introduces unique multipliers: Time Travel and Fail-safe.
Time Travel: Snowflake retains historical versions of data to allow users to query the database as it existed in the past. On Enterprise Edition, this can be configured up to 90 days.
- Economic Impact: Every time a record is updated or deleted, the old micro-partition is retained for the duration of the Time Travel window. In a high-churn environment (e.g., a table that is completely overwritten daily), a 90-day Time Travel setting implies that the user is paying for 91 copies of the table (1 current + 90 historical). This can silently bloat storage bills by an order of magnitude.
Fail-safe: Following the Time Travel window, data moves into Fail-safe for 7 days. This is non-configurable and immutable, designed for disaster recovery. Users are billed for storage during this period, adding a mandatory 7-day storage tail to all deleted data.1
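The multiplier effect of Time Travel and Fail-safe on a high-churn table can be approximated with a back-of-envelope calculation. The sketch below assumes a table that is fully rewritten once per day and uses the $23/TB/month London rate quoted above; real retention depends on how much of each micro-partition actually changes.

```python
# Back-of-envelope Time Travel / Fail-safe storage multiplier (illustrative only).
STORAGE_USD_PER_TB_MONTH = 23.0  # London on-demand rate

def churn_storage_cost(table_tb: float, time_travel_days: int,
                       rewrites_per_day: float = 1.0, failsafe_days: int = 7) -> float:
    """Monthly storage bill for a table whose contents are fully replaced each day."""
    retained_copies = 1 + (time_travel_days + failsafe_days) * rewrites_per_day
    return table_tb * retained_copies * STORAGE_USD_PER_TB_MONTH

print(round(churn_storage_cost(1.0, time_travel_days=1)))    # ~207 USD with 1-day Time Travel
print(round(churn_storage_cost(1.0, time_travel_days=90)))   # ~2,254 USD vs 23 USD for the live copy
```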
2.4 The Evolution of Snowpipe Pricing (2025)
Data ingestion via Snowpipe has historically been a complex calculation involving a per-file notification charge and a compute charge. This often penalized architectures that generated thousands of tiny files (e.g., Kinesis Firehose defaults).
However, recent updates effective in late 2025 have simplified this model. Snowflake now charges a flat 0.0037 credits per GB of data ingested.9
- Economic Impact: This shifts the cost driver from file count to data volume.
- Calculation: Ingesting 1 TB (1,000 GB) costs 3.7 credits. At the London Enterprise rate of $4.00, this is $14.80 per TB (see the sketch after this list).
- Comparison: This is highly competitive against legacy ingestion methods and removes the penalty for streaming architectures that produce frequent, small files, significantly altering the TCO calculation for real-time analytics platforms built on Snowflake.
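Under the stated assumptions (the $4.00 London Enterprise credit and the 0.0037 credits/GB rate above), the ingestion charge scales strictly with volume, as the short sketch below illustrates:

```python
# Volume-based Snowpipe charge (illustrative; per-file effects no longer apply).
CREDIT_PRICE_USD = 4.00
SNOWPIPE_CREDITS_PER_GB = 0.0037

def snowpipe_monthly_cost(gb_ingested: float) -> float:
    return gb_ingested * SNOWPIPE_CREDITS_PER_GB * CREDIT_PRICE_USD

for tb in (1, 10, 100):
    print(f"{tb:>4} TB -> {snowpipe_monthly_cost(tb * 1000):8.2f} USD")
# 1 TB -> 14.80, 10 TB -> 148.00, 100 TB -> 1480.00
```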
3. Google BigQuery: The Serverless Paradigm and Capacity Planning
Google BigQuery represents a fundamentally different architectural philosophy. Built on Google’s Dremel engine and Colossus file system, it is a true serverless platform where the concept of a “node” is abstracted away entirely. This abstraction allows for massive parallelism but introduces a different set of economic variables.
3.1 On-Demand Pricing: The High-Risk, High-Reward Model
The traditional BigQuery pricing model is On-Demand, where users pay for the volume of data scanned by a query.
- London Rate: Approximately $6.25 per TB scanned.3
- Free Tier: The first 1 TB per month is free.
The Economic Mechanic:
This model completely decouples cost from time and CPU usage. A query that utilizes 2,000 slots (CPUs) to scan 1 TB in 5 seconds costs exactly the same as a query that uses 100 slots to scan 1 TB in 2 minutes.
- Advantage: This is ideal for sporadic, high-performance workloads. A data scientist can run a massive query across petabytes of data and get an answer in seconds without provisioning a massive cluster.
- Risk: The “Select Star” problem. If a user inadvertently runs SELECT * on a massive table without partition filters, the cost is immediate and substantial. A single query can cost hundreds of dollars. This necessitates a culture of strict query governance and the implementation of maximum bytes billed quotas at the project or user level, a guardrail sketched below.
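One practical guardrail is to cap bytes billed at the query level. The sketch below uses the google-cloud-bigquery Python client; the project and table names are placeholders, and the 1 TiB limit is an arbitrary example, not a recommendation.

```python
# Capping bytes billed per query to contain the "Select Star" risk (illustrative).
from google.cloud import bigquery

client = bigquery.Client(project="my-analytics-project")  # hypothetical project

job_config = bigquery.QueryJobConfig(
    maximum_bytes_billed=1 * 1024**4,  # 1 TiB hard cap: the job fails rather than over-scan
    query_parameters=[bigquery.ScalarQueryParameter("d", "DATE", "2025-01-01")],
)

query = """
    SELECT user_id, event_ts
    FROM `my-analytics-project.events.clickstream`  -- hypothetical partitioned table
    WHERE event_date = @d                            -- partition filter limits the scan
"""

rows = client.query(query, job_config=job_config).result()
```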
3.2 BigQuery Editions: The Return to Capacity Planning
Recognizing that large enterprises require predictable budgeting, Google introduced BigQuery Editions (Standard, Enterprise, Enterprise Plus), which utilize Capacity Pricing (Slot-Hours).15
London Slot Pricing 17:
| Edition | Pay-As-You-Go (PAYG) | 1-Year Commitment (Est.) |
| --- | --- | --- |
| Standard | $0.052 / slot-hour | N/A |
| Enterprise | ~$0.06 – $0.078 / slot-hour | ~$0.052 / slot-hour |
| Enterprise Plus | ~$0.10+ / slot-hour | ~$0.08 / slot-hour |
Note: Enterprise Plus includes advanced features like higher concurrency limits and disaster recovery, justifying the premium.
The Autoscaling Mechanic 18: BigQuery’s autoscaler adds capacity in increments (typically 100 slots). The billing is based on the provisioned capacity, not the precise second-by-second utilization.
- Scenario: A query requires 120 slots. The autoscaler provisions 200 slots. The user pays for 200 slots (billed in slot-hours) for the duration of the activity, with a one-minute minimum.
- Hybrid Model: Users can purchase a “Baseline” of committed slots (e.g., 500 slots at the cheaper 1-Year rate) to cover steady-state usage, and allow “Autoscaling” (at the higher PAYG rate) to handle peaks. This “Baseline + Burst” strategy, modeled in the sketch below, is the primary method for optimizing TCO in mature BigQuery implementations.19
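A first-order cost model for this strategy follows. It uses the London rates from the table above (the PAYG figure is the upper estimate) and deliberately ignores the 100-slot autoscaling increments and one-minute minimum, so treat it as a planning sketch rather than a billing calculator.

```python
# Baseline + Burst slot cost model (illustrative planning sketch).
COMMIT_RATE = 0.052   # USD per slot-hour, 1-year commitment (Enterprise, London)
PAYG_RATE = 0.078     # USD per slot-hour, autoscaled pay-as-you-go (upper estimate)
HOURS_PER_MONTH = 730

def monthly_slot_cost(baseline_slots: int, burst_slots: int, burst_hours: float) -> float:
    baseline = baseline_slots * COMMIT_RATE * HOURS_PER_MONTH   # always-on commitment
    burst = burst_slots * PAYG_RATE * burst_hours               # peaks only
    return baseline + burst

# 500 committed slots around the clock plus a 300-slot burst for 3 hours on 22 weekdays:
print(round(monthly_slot_cost(500, 300, burst_hours=3 * 22)))   # ~20,524 USD/month
```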
3.3 Storage: The Logical vs. Physical Arbitrage
One of the most significant yet underutilized cost levers in BigQuery is the choice between Logical and Physical storage billing.7
Logical Storage (Default):
- Basis: Uncompressed bytes.
- London Price: ~$23/TB (Active), ~$11.50/TB (Long-Term).
- Inclusions: Includes the cost of Time Travel and Fail-safe storage.
Physical Storage (Opt-In):
- Basis: Compressed bytes (on disk).
- London Price: ~$45/TB (Active), ~$22.50/TB (Long-Term).
- Exclusions: Time Travel is billed separately.
The Arbitrage:
BigQuery uses the Capacitor file format, which often achieves compression ratios of 4:1 to 10:1, especially for repetitive data (logs, JSON).
- Scenario: 10 TB of JSON logs (Logical size). Compression ratio 5:1. Physical size = 2 TB.
- Logical Cost: 10 TB * $23 = $230.
- Physical Cost: 2 TB * $45 = $90.
- Result: A 60% cost reduction simply by toggling a billing setting.
- Caveat: If the data is poorly compressible (e.g., images, already compressed Avro), Physical billing could be more expensive. Furthermore, heavy use of Time Travel on Physical storage adds costs that are “free” in Logical storage, requiring careful analysis before switching.21 The break-even sketch below makes the compression threshold explicit.
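Before Time Travel effects, the break-even point between the two billing bases is a function of the compression ratio alone. The sketch below uses the London rates quoted above and is illustrative only:

```python
# Logical vs Physical billing break-even (ignores the separate Time Travel charge
# that applies under Physical billing).
LOGICAL_USD_PER_TB = 23.0    # London, active logical storage
PHYSICAL_USD_PER_TB = 45.0   # London, active physical storage

def storage_costs(logical_tb: float, compression_ratio: float) -> tuple[float, float]:
    physical_tb = logical_tb / compression_ratio
    return logical_tb * LOGICAL_USD_PER_TB, physical_tb * PHYSICAL_USD_PER_TB

print(storage_costs(10, 5.0))   # (230.0, 90.0)  -> switch to Physical
print(storage_costs(10, 1.5))   # (230.0, 300.0) -> stay on Logical
# Break-even sits at a compression ratio of 45 / 23, roughly 1.96:1.
```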
4. Amazon Redshift: The Hybrid Evolution
Amazon Redshift has undergone a metamorphosis. Once a rigid, coupled MPP system (using DC2 nodes), it has evolved into a decoupled architecture via RA3 nodes and Managed Storage, and further into a consumption-based model with Redshift Serverless.
4.1 Provisioned RA3: The Power of Reservation
For steady-state workloads, Redshift’s RA3 nodes offer what is often the lowest price-performance ratio in the industry, largely due to the Reserved Instance (RI) mechanism.
London Pricing (On-Demand) 5:
| Node Type | vCPU | RAM | Price Per Hour |
| --- | --- | --- | --- |
| ra3.xlplus | 4 | 32 GB | $1.086 |
| ra3.4xlarge | 12 | 96 GB | $3.26 |
| ra3.16xlarge | 48 | 384 GB | $13.04 |
The RI Advantage: Committing to a 1-year or 3-year term yields massive discounts. A 3-year “All Upfront” RI can reduce the effective hourly rate by up to 75%.22
- Economic Logic: If a company knows it will need a data warehouse for the next 3 years, paying for Redshift compute is akin to buying hardware wholesale. Unlike Snowflake or BigQuery Editions (which offer ~20-40% commit discounts), Redshift’s RI discounts are deeper, rewarding long-term stability; the sketch below tabulates the effective rates.
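The effect of the RI discount on unit cost is easy to tabulate. The discount percentages in the sketch below are indicative only; actual figures should be taken from the eu-west-2 price list for the chosen term and payment option.

```python
# Effective hourly and monthly cost of an ra3.16xlarge under different commitment levels.
HOURS_PER_MONTH = 730
ON_DEMAND_HOURLY = 13.04   # ra3.16xlarge, London, per the table above

scenarios = [
    ("On-Demand", 0.00),
    ("1-yr RI (~35% off, indicative)", 0.35),
    ("3-yr All Upfront (~75% off)", 0.75),
]

for label, discount in scenarios:
    hourly = ON_DEMAND_HOURLY * (1 - discount)
    print(f"{label:32s} {hourly:6.2f} USD/hr  {hourly * HOURS_PER_MONTH:9,.0f} USD/month")
# On-Demand is ~9,519 USD/month; the 3-year All Upfront rate falls to ~2,380 USD/month.
```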
4.2 Redshift Serverless: Closing the Agility Gap
To compete with Snowflake and BigQuery on ease of use, AWS introduced Redshift Serverless.
- Unit: Redshift Processing Unit (RPU). 1 RPU ≈ 16 GB memory.
- London Price: Estimated $0.40 – $0.45 per RPU-hour (based on US base of $0.375 + London premium).5
- Minimum Capacity: Recently lowered from 8 RPU to 4 RPU.23
- Impact: The 4 RPU minimum ($1.60/hr active) significantly lowers the barrier to entry for development environments and small workloads, addressing a previous competitive disadvantage against BigQuery’s free tier and Snowflake’s XS warehouse.
Billing Granularity: Redshift Serverless charges per second with a 60-second minimum, identical to Snowflake.25 Like Snowflake’s model, this penalizes high-frequency, short-duration query patterns.
4.3 The Hidden Subsidy: Concurrency Scaling Credits
A unique feature of Redshift’s pricing model is Concurrency Scaling.
- Mechanism: When the main cluster is fully utilized, Redshift can automatically burst queries to a transient cluster.
- The Subsidy: Users accrue 1 hour of free Concurrency Scaling for every 24 hours the main cluster runs.26
- Economic Impact: For a reporting workload that experiences a massive “morning rush” at 9 AM but is steady the rest of the day, this bursting is effectively free. In Snowflake, managing this spike would require upsizing the warehouse or enabling multi-cluster scaling, both of which incur direct costs. In Redshift, the user has effectively “pre-paid” for this burst capacity through their steady-state usage, as the sketch below illustrates.
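A rough model of the subsidy is sketched below. It assumes a 24/7 main cluster and treats the accrual as one free hour per 24 hours of runtime; in practice AWS caps accumulated credits and bills any overage per second at the on-demand rate.

```python
# Concurrency Scaling "subsidy" model (illustrative).
def burst_bill(cluster_hours: float, burst_hours_used: float, on_demand_hourly: float) -> float:
    free_hours = cluster_hours / 24          # 1 free burst hour per 24 cluster hours
    billable = max(0.0, burst_hours_used - free_hours)
    return billable * on_demand_hourly

# A 24/7 cluster (~730 hrs/month) accrues ~30 free burst hours, so a one-hour
# morning rush on 22 working days costs nothing extra; heavier bursting does.
print(burst_bill(730, 22, on_demand_hourly=13.04))            # 0.0
print(round(burst_bill(730, 45, on_demand_hourly=13.04), 2))  # ~190.17 for the overage
```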
5. Comparative Workload Modeling
To determine the “cheapest” platform, we must simulate specific real-world scenarios. Cost is not an intrinsic property of the platform but a function of the workload’s shape.
5.1 Scenario A: The “Always-On” Enterprise Dashboard
Profile: A large retail bank in London.
- Requirement: 24/7 availability for 500 concurrent users.
- Workload: Continuous stream of reporting queries. Dashboards refresh every 15 minutes.
- Equivalent Power Needed: ~50-60 vCPUs / 400 GB RAM.
Snowflake Model:
- Configuration: 2X-Large Warehouse (Enterprise Edition).
- Consumption: 32 Credits/Hour * 24 Hours * 30 Days = 23,040 Credits.
- London Cost: 23,040 * $4.00 = $92,160 / month.
- Note: Even with a commit discount, this is a high baseline.
BigQuery Model:
- Configuration: Enterprise Edition with 1-Year Commitment.
- Capacity: 1600 Slots (Estimated requirement for concurrency).
- London Cost: 1600 slots * $0.052 (commit rate) * 730 hours = $60,736 / month.
- Note: Autoscaling could add costs if peaks exceed 1600 slots.
Redshift Model:
- Configuration: 2x ra3.16xlarge nodes (Total 96 vCPU / 768 GB RAM).
- Pricing: 1-Year Reserved Instance (Partial Upfront).
- Effective Hourly Rate: ~$9.00/hr (Estimated blended RI rate).
- London Cost: $9.00 * 730 hours = $6,570 / month on the Reserved Instance; calculated strictly at On-Demand rates, 2 * $13.04 * 730 = $19,038 / month.
- Analysis: Even at On-Demand rates ($19k), Redshift is drastically cheaper than Snowflake ($92k) or BigQuery ($60k) for this raw, brute-force, always-on scenario. The architecture of “owning” the nodes allows for massive cost efficiencies at high utilization rates. The Concurrency Scaling credits further insulate against morning spike costs.
Winner: Amazon Redshift (Provisioned).
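The comparison can be reproduced in a few lines using the unit prices established earlier in this report (illustrative; the Redshift RI figure reuses the blended ~$9.00/hr estimate from the model above).

```python
# Scenario A: always-on enterprise dashboard, monthly cost by platform (illustrative).
HOURS_PER_MONTH = 730

costs = {
    "Snowflake 2XL (Enterprise, 24/7)": 32 * 4.00 * 24 * 30,        # credits/hr * $/credit
    "BigQuery 1,600 committed slots":   1600 * 0.052 * HOURS_PER_MONTH,
    "Redshift 2x ra3.16xlarge (OD)":    2 * 13.04 * HOURS_PER_MONTH,
    "Redshift 2x ra3.16xlarge (1y RI)": 9.00 * HOURS_PER_MONTH,     # blended RI estimate
}

for platform, cost in sorted(costs.items(), key=lambda kv: kv[1]):
    print(f"{platform:36s} {cost:10,.0f} USD/month")
```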
5.2 Scenario B: The “Data Science Exploration” (Bursty)
Profile: A marketing analytics team.
- Requirement: Ad-hoc exploration on 50 TB of data.
- Workload: The team works 4 hours a day. During those 4 hours, they run complex queries. The system is idle 20 hours a day and on weekends.
- Data Scanned: 20 TB per day (heavy scans).
Snowflake Model:
- Configuration: 2X-Large Warehouse (Speed is critical). Running 4 hours/day, 20 days/mo.
- Consumption: 32 Credits * 4 Hours * 20 Days = 2,560 Credits.
- London Cost: 2,560 * $4.00 = $10,240 / month.
BigQuery Model (On-Demand):
- Configuration: On-Demand (pay per scan).
- Consumption: 20 TB * 20 Days = 400 TB Scanned.
- London Cost: 400 TB * $6.25 = $2,500 / month.
- Note: This highlights the “Select Star” risk. If unoptimized queries push the monthly scan volume to 2,000 TB, the cost jumps to $12,500.
Redshift Model (Serverless):
- Configuration: Serverless (auto-scaling).
- Consumption: 4 hours/day * 20 days = 80 hours.
- Capacity: High RPU usage (e.g., 256 RPU) to match Snowflake 2XL performance.
- London Cost: 256 RPU * $0.45 * 80 hours = $9,216 / month.
Analysis:
For purely bursty workloads where the system is idle 80%+ of the time, Redshift Provisioned makes no sense (idle cost). Snowflake is efficient but the high compute power (2XL) drives up the hourly rate. BigQuery On-Demand shines here—provided the team optimizes queries to avoid full table scans. If the data volume scanned is moderate, BigQuery is the clear winner. If the volume scanned is massive (Petabytes), Snowflake’s time-based billing becomes a cap on costs that BigQuery lacks.
Winner: BigQuery On-Demand (with caveats on query governance).
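The caveat about scan volume can be quantified. Under the figures in this scenario, the break-even point between BigQuery’s scan-based bill and Snowflake’s time-based bill falls at roughly 1.6 PB scanned per month; the sketch below assumes Snowflake’s runtime (and therefore its bill) stays flat as scan volume grows.

```python
# Scenario B break-even: monthly scan volume at which BigQuery On-Demand
# overtakes Snowflake's flat, time-based charge (illustrative).
SNOWFLAKE_MONTHLY_USD = 2560 * 4.00   # 2XL, 4 hrs/day, 20 days = $10,240
BQ_USD_PER_TB_SCANNED = 6.25          # London On-Demand

breakeven_tb = SNOWFLAKE_MONTHLY_USD / BQ_USD_PER_TB_SCANNED
print(round(breakeven_tb))            # ~1,638 TB per month

# Below this volume BigQuery On-Demand wins; above it, Snowflake's time-based
# billing acts as the cost cap described in the analysis above.
```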
5.3 Scenario C: The “Continuous Streaming Ingestion”
Profile: IoT company ingesting sensor logs.
- Workload: Constant trickle of data, 24/7.
- Volume: 10 TB per month total.
Snowflake (Snowpipe):
- New Pricing: 10,000 GB * 0.0037 credits = 37 credits.
- Cost: 37 * $4.00 = $148 / month.
- Note: Extremely cheap due to new volume-based pricing.
BigQuery (Streaming API):
- Storage Write API: 10 TB * $25/TB = $250 / month.
- Legacy Streaming: 10 TB * $50/TB = $500 / month.
Redshift (Streaming Ingestion):
- Configuration: Ingestion from Kinesis Data Streams.
- Cost: $0 direct cost. The ingestion consumes CPU cycles on the existing cluster.
- Note: If the cluster has spare capacity (which is common in provisioned setups), this ingestion is free. If it forces a Serverless scale-up, costs apply.
Winner: Redshift (if using Provisioned with headroom) or Snowflake (if pure Serverless is required).
6. Hidden Costs and Regional Nuances
While compute and storage dominate the conversation, the “Long Tail” of the invoice often contains the surprises.
6.1 Data Egress and Transfer
Cloud providers charge for data leaving their network (Egress).
- Multi-Cloud Strategy: If you host Snowflake on AWS London but feed a dashboard hosted in Azure Amsterdam, you will pay AWS Data Transfer Out rates (typically ~$0.09/GB).27 For a data-intensive application, this can rival the storage bill.
- Redshift Spectrum: Querying data in S3 (Data Lake) incurs a cost of $5 per TB scanned. This is separate from the cluster cost. It allows expanding the warehouse without resizing, but unoptimized Spectrum queries can mimic BigQuery’s “bill shock.”
6.2 The Cost of Maintenance (Human TCO)
- Snowflake: “Near Zero Maintenance” is a marketing claim that largely holds true for infrastructure (no vacuuming, no indexing). However, the financial maintenance is high. FinOps teams must constantly monitor warehouse sizes and auto-suspend settings.
- BigQuery: Truly zero infrastructure maintenance. However, the cost of query optimization is high. Engineers must constantly refine schemas (partitioning/clustering) to keep On-Demand costs low or Slot utilization efficient.
- Redshift: Historically high maintenance (Vacuum, Analyze, WLM configuration). Redshift Serverless and recent RA3 automations have reduced this, but it still requires more “DBA-like” attention than the others.
6.3 London-Specific Nuances
The London region (eu-west-2) is not just expensive; it is rigid.
- Feature Lag: New instance types (like the latest Graviton-based nodes) or features (like certain GenAI integrations in BigQuery) often launch in US regions months before London. This can force UK companies to choose between the latest price-performance innovations (hosted in US) vs. data sovereignty (hosted in UK).
- Currency Risk: While cloud bills are often quoted in USD, UK enterprises paying in GBP are exposed to FX volatility. A weakening Pound increases the effective cost of US-denominated cloud services, acting as a dynamic price hike completely outside the architectural control.
7. Strategic Recommendations and Future Outlook
As we look toward late 2025 and 2026, the convergence of these platforms continues. Snowflake is becoming more “Data Lake-like” with Iceberg tables; Redshift is becoming more “Serverless”; BigQuery is adding “Capacity” constraints. The choice is no longer about feature parity—it is about Economic Philosophy.
7.1 Decision Framework
- Choose Redshift (Provisioned/RA3) if:
  - You have a predictable, steady-state workload (24/7 reporting).
  - You are already deeply embedded in the AWS ecosystem.
  - You can leverage Reserved Instances to achieve the lowest possible unit cost for compute.
  - Your ingestion patterns (Kinesis) align with Redshift’s “free” ingestion capabilities.
- Choose Snowflake if:
  - You require strict isolation of workloads (e.g., ensuring Data Science never impacts Executive Reporting).
  - Your workload is highly variable/bursty, and you can aggressively manage auto-suspend.
  - You require a multi-cloud strategy (e.g., sharing data between AWS and Azure regions).
  - You value “maintenance-free” operations over raw unit cost efficiency.
- Choose BigQuery if:
  - You have massive, sporadic datasets that need to be queried instantly without sizing a cluster.
  - You are building on GCP.
  - Your data is highly compressible (JSON/Logs), allowing you to leverage the Physical Storage billing arbitrage.
  - You prefer a “hands-off” infrastructure model and are willing to accept variable query costs (On-Demand) or manage slot commitments (Editions).
7.2 FinOps Best Practices for London Enterprises
- Aggressive Auto-Suspend: In London, where credits are 35% more expensive, the standard 10-minute auto-suspend on Snowflake is burning cash. Lower it to 60 seconds for all ad-hoc warehouses.
- Physical Billing Toggle: Audit all BigQuery datasets. If compression ratios exceed 2.5:1, switch to Physical Billing immediately.
- Commitment Strategy: For Redshift and BigQuery, the difference between Pay-As-You-Go and 1-Year Commit/RI is often 20-40%. Purchase baseline capacity for your “floor” usage and use on-demand/serverless only for the “ceiling.”
In conclusion, there is no single “cheapest” data warehouse. There is only the most efficient warehouse for a specific workload profile. The winner in the TCO battle is not determined by the vendor selection, but by the architect who aligns the workload’s shape with the pricing physics of the chosen platform.
