OneLake is the storage foundation of Microsoft Fabric — a single logical data lake where all Fabric workloads read and write data. Unlike Fabric capacity (which is licensed separately), OneLake storage is billed like Azure Data Lake Storage Gen2 at $0.023/GB/month. For an enterprise with 50 TB of analytics data, that's $1,150/month or $13,800/year in storage costs alone — separate from, and additive to, Fabric capacity costs. Understanding OneLake's pricing model, shortcut architecture, and data lifecycle options is essential to building an accurate Fabric total cost model.
OneLake Pricing: The Complete Rate Card
| Cost Component | Rate | Notes |
|---|---|---|
| OneLake LRS storage | $0.023/GB/month | Locally redundant; single Azure region |
| OneLake GRS storage | $0.046/GB/month | Geo-redundant; data replicated to paired region |
| OneLake ZRS storage | $0.028/GB/month | Zone-redundant; available in select regions |
| Write operations | $0.0065/10,000 operations | PUT, COPY, POST, LIST requests |
| Read operations | $0.0004/10,000 operations | GET and all other operations |
| Data retrieval (intra-region) | $0 | Reads from Fabric workloads in the same region are free |
| Cross-region data transfer | $0.01–$0.05/GB | Varies by source/destination region pairing |
| Internet egress (outbound from Azure) | $0.087/GB (first 10 TB/month) | Reading OneLake data from outside Azure |
| Soft-deleted files (recovery) | $0.023/GB/month | Applies to files in soft-delete retention window |
| Snapshots (Delta table history) | $0.023/GB/month | Incremental storage only vs base table |
At these rates, a 100 TB OneLake estate costs $2,300/month (LRS) or $4,600/month (GRS) in pure storage — before any operation costs or egress. For most enterprise Fabric deployments, storage is 20–35% of total Fabric platform cost, with capacity being the dominant component at 65–80%.
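The rate card translates directly into a back-of-envelope estimator. A minimal Python sketch, using the rates above and the article's convention of 1 TB = 1,000 GB (re-check the figures against current Azure pricing before relying on the output):

```python
# Back-of-envelope OneLake storage cost estimator built from the rate card
# above. Rates are USD per GB per month; 1 TB is treated as 1,000 GB,
# matching the article's arithmetic.

RATES = {"LRS": 0.023, "ZRS": 0.028, "GRS": 0.046}

def monthly_storage_cost(size_tb: float, redundancy: str = "LRS") -> float:
    """Monthly OneLake storage cost in USD for a given estate size."""
    return size_tb * 1_000 * RATES[redundancy]

# The 100 TB estate from the paragraph above:
print(round(monthly_storage_cost(100, "LRS"), 2))  # 2300.0
print(round(monthly_storage_cost(100, "GRS"), 2))  # 4600.0
```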
OneLake Architecture: What Actually Gets Stored
Understanding what data lands in OneLake vs what is referenced via shortcuts is the primary lever for controlling OneLake storage costs.
Native OneLake Storage (You Pay)
Data written natively to Fabric workloads lands in OneLake and is billed:
- Lakehouse tables: Delta Parquet files in the Tables folder of each Lakehouse. This is where Spark-written, ingested, and transformed data lives. Can be very large.
- Lakehouse Files: Unstructured/raw files in the Files folder. CSV inputs, JSON payloads, images. Often the largest storage consumer in data engineering scenarios.
- Data Warehouse tables: Parquet files backing the Fabric Warehouse (serverless T-SQL). Each table row group is stored as Parquet in OneLake.
- Semantic model caches: Power BI import-mode semantic models hosted on the capacity are cached in OneLake. Large models (5–50 GB) generate meaningful storage costs.
- Eventstream outputs: Real-time data written by Eventstream landing in Lakehouse tables.
- Delta table history (time travel): Delta log files and older file versions for time travel queries. Grows continuously without vacuum operations.
OneLake Shortcuts (You Don't Pay OneLake Storage)
A OneLake shortcut is a metadata pointer — it tells Fabric "data for this path lives in [ADLS Gen2/S3/GCS location]." No data is copied to OneLake. When Fabric workloads access a shortcut path, they read directly from the source storage. You pay the source storage costs (whatever you already pay for ADLS Gen2, S3, etc.) plus Fabric capacity CU costs for processing — but not OneLake storage.
| Shortcut Source | Availability | OneLake Storage Cost | Authentication |
|---|---|---|---|
| ADLS Gen2 (same tenant) | Generally Available | $0 (pay ADLS Gen2 rates) | Service Principal, Managed Identity |
| ADLS Gen2 (cross-tenant) | Generally Available | $0 | Service Principal |
| Azure Blob Storage | Generally Available | $0 (pay Blob rates) | Service Principal, SAS |
| Amazon S3 | Generally Available | $0 (pay AWS S3 rates) | AWS IAM Access Key |
| Google Cloud Storage | Generally Available | $0 (pay GCS rates) | GCP service account |
| OneLake (another workspace) | Generally Available | $0 (charged to source workspace) | Fabric permissions |
| Amazon S3 Compatible (Cloudflare R2, MinIO) | Preview | $0 | S3-compatible auth |
Shortcuts are the primary cost optimisation tool for organisations with existing cloud data lakes. If you have 100 TB in AWS S3 or Azure Data Lake that you want to query with Fabric, create shortcuts rather than copying data into OneLake. You get all Fabric query capabilities (Spark, SQL, Power BI) with zero OneLake storage cost for that dataset.
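The copy-versus-shortcut economics are easy to sketch. Assuming the source copy is retained either way (the usual case during and after migration), a shortcut adds no OneLake storage line item while a native copy doubles total storage spend; the source rate below is illustrative:

```python
# Compare monthly storage cost of copying a dataset natively into OneLake
# versus referencing it in place via a shortcut. Assumes the source copy
# (S3 / ADLS Gen2) is kept either way at its own rate; figures are
# illustrative, not a quote.

ONELAKE_RATE = 0.023  # USD/GB/month, LRS

def copy_vs_shortcut(size_tb: float, source_rate: float) -> dict:
    gb = size_tb * 1_000
    source = gb * source_rate
    return {
        "copy": round(source + gb * ONELAKE_RATE, 2),  # source + OneLake copy
        "shortcut": round(source, 2),                  # no OneLake storage
    }

# 100 TB already sitting in S3 Standard (~$0.023/GB/month in many regions):
print(copy_vs_shortcut(100, 0.023))  # {'copy': 4600.0, 'shortcut': 2300.0}
```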
Delta Table Storage Growth: The Silent Cost Driver
Delta Lake's time travel and transaction log capabilities are powerful — but they generate storage overhead that compounds over time. Every write operation to a Delta table creates new Parquet files; old files are retained until explicitly vacuumed. Without maintenance, Delta table storage grows 2–5x larger than the actual "live" data over 6–12 months.
Delta Lake Storage Components
| Component | What It Is | Storage Growth Rate | Retention Control |
|---|---|---|---|
| Active Parquet files | Current table data | Proportional to data growth | Compaction + VACUUM |
| Delta log (_delta_log) | Transaction history JSON files | 1 file per write + checkpoints every 10 operations | Log retention setting |
| Old versions (time travel) | Previous Parquet files superseded by updates/deletes | Proportional to UPDATE/DELETE frequency | VACUUM (default 7-day retention) |
| Checkpoint files | Periodic snapshots of delta log state | Small; 1 checkpoint per 10 log entries | Log retention setting |
The critical maintenance operation is VACUUM. Delta's default retention is 7 days: superseded file versions are kept for 7 days before they become eligible for removal. Run VACUUM regularly and old files are cleaned up as they age out of the retention window; run it rarely (or never) and storage grows without bound. Many Fabric customers never run VACUUM because it is not run automatically in Fabric Warehouse (as of early 2026); it is available only in Lakehouse via Spark.
A table with 100 GB of active data that receives 50,000 update operations/day and is never vacuumed accumulates roughly 8–15 GB/month of orphaned files. Within a year that is 100–150 GB of dead data per table, an extra $2.30–$3.45/month at $0.023/GB/month. That seems small until you have 500 tables with similar churn patterns, generating $1,150–$1,725/month in avoidable storage cost.
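The accumulation can be modelled with one line of arithmetic. A sketch, assuming a constant orphan rate and no VACUUM ever running:

```python
# Model the cumulative cost of orphaned files in a Delta table that is never
# vacuumed. The orphan growth rate is an assumption (GB/month); the cost uses
# the OneLake LRS rate from the rate card above.

RATE = 0.023  # USD/GB/month

def orphan_cost_after(months: int, orphan_gb_per_month: float) -> float:
    """Monthly storage bill for accumulated orphaned files after `months`."""
    return months * orphan_gb_per_month * RATE

# 100 GB active table, ~12 GB/month of orphaned files, after a year:
print(round(orphan_cost_after(12, 12.0), 2))        # 3.31 per table/month
print(round(orphan_cost_after(12, 12.0) * 500, 2))  # 1656.0 across 500 tables
```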
Storage Optimisation Operations
- VACUUM: Removes files outside the retention window. Run weekly with the default 7-day retention. Consider shorter retention (1–2 days) for tables with frequent small updates if time travel is not a compliance requirement.
- OPTIMIZE (compaction): Merges many small Parquet files into larger files (target 128 MB+). Reduces file count, improves read performance, and reduces the number of Delta log entries. Run daily on actively written tables.
- Delta log retention: Delta logs older than the retention threshold are deleted automatically when VACUUM runs. For most analytics tables, 30-day log retention is sufficient. Reducing from default unlimited to 30 days removes months of accumulated log history.
- Storage tier management: OneLake does not currently support automated storage tiering (Hot vs Cool vs Archive) — all storage is effectively "Hot" tier pricing. This is a cost management gap vs raw ADLS Gen2 where you can move old data to cheaper tiers. Watch for Microsoft to add tiering support to OneLake in future updates.
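In a Fabric Lakehouse these operations run as Spark SQL. A minimal sketch of a weekly maintenance pass that builds the statements for a list of tables (the table names are hypothetical; in a notebook each statement would be passed to spark.sql):

```python
# Build Delta maintenance statements (OPTIMIZE + VACUUM) for a set of
# Lakehouse tables. Retention is expressed in hours; 168 hours is the
# 7-day Delta default. Both statements are standard Delta Lake SQL.

def maintenance_statements(tables, retain_hours: int = 168):
    stmts = []
    for t in tables:
        stmts.append(f"OPTIMIZE {t}")                            # compact small files
        stmts.append(f"VACUUM {t} RETAIN {retain_hours} HOURS")  # drop orphaned files
    return stmts

# Hypothetical tables; in a Fabric notebook: spark.sql(stmt) for each.
for stmt in maintenance_statements(["sales.orders", "sales.order_lines"]):
    print(stmt)
```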
OneLake vs Azure Data Lake Storage Gen2: Full Cost Comparison
| Dimension | OneLake | ADLS Gen2 (raw) |
|---|---|---|
| Storage pricing | $0.023/GB/month (LRS) | $0.023/GB/month (LRS) — identical |
| Operation pricing | Same as ADLS Gen2 | $0.0065/10K write, $0.0004/10K read |
| Fabric workload write support | Native — direct write | Requires shortcut (read only from Fabric natively) |
| Data governance | OneLake data catalogue, unified namespace | No built-in governance |
| Delta Lake support | Native, first-class | Supported but not managed |
| Storage tiering (Hot/Cool/Archive) | Not currently supported | ✅ Full tiering support |
| Hierarchical namespace | ✅ (built on HNS) | ✅ (optional, required for ACLs) |
| Immutable storage (WORM) | Not currently supported | ✅ Policy-based immutability |
| MACC eligible | ✅ | ✅ |
The pricing is identical — OneLake and ADLS Gen2 cost the same per GB per month. The difference is governance and integration depth. For organisations that need storage tiering (archiving data older than 90 days to Cool or Archive tier at 60–90% cost reduction), raw ADLS Gen2 with OneLake shortcuts is currently the better architecture for historical datasets.
Building the OneLake Cost Model for Your Organisation
Before committing to Microsoft Fabric, build a 3-year OneLake storage cost model:
| Input | Source | Estimation Method |
|---|---|---|
| Current data lake size (GB) | ADLS Gen2 / S3 billing | Actual from Azure Cost Management |
| Annual data growth rate (%) | Historical storage trend | 12-month trend from storage metrics |
| Fabric-native vs shortcut split (%) | Architecture decision | % of data you plan to migrate natively |
| Delta churn rate (UPDATE/DELETE %) | ETL pattern analysis | High churn (SCD2, merge) = more orphaned files |
| Redundancy required (LRS vs GRS) | Compliance/DR requirements | Regulated industries often require GRS |
Example: A 50 TB organisation, 25% annual growth, 60% native OneLake / 40% shortcut, LRS:
- Year 1: 50 TB × 60% native = 30 TB OneLake = 30,000 GB × $0.023 = $690/month
- Year 2: 62.5 TB × 60% = 37.5 TB = $863/month
- Year 3: 78 TB × 60% = 46.8 TB = $1,076/month
- 3-year total storage cost: ~$31,550 (sum of the monthly figures above × 12; plus operation costs, minus shortcut savings)
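The worked example generalises to a small projection function. A sketch using the same inputs and the same simplification (size held at its start-of-year value within each year):

```python
# Project OneLake storage spend over N years from the cost-model inputs
# above. Size is held at its start-of-year value within each year, matching
# the worked example; real growth is continuous, so this understates slightly.

RATE = 0.023  # USD/GB/month, LRS

def project(start_tb: float, annual_growth: float, native_share: float,
            years: int = 3) -> float:
    total, size_tb = 0.0, start_tb
    for _ in range(years):
        monthly = size_tb * native_share * 1_000 * RATE  # native data only
        total += monthly * 12
        size_tb *= 1 + annual_growth
    return total

# 50 TB, 25% annual growth, 60% native OneLake:
print(round(project(50, 0.25, 0.60), 2))  # ~31567.5
```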
For the full Fabric cost model including capacity + OneLake + per-user licences, see our Microsoft Fabric licensing complete guide.
Frequently Asked Questions
How much does OneLake storage cost?
OneLake storage is priced at $0.023/GB/month for LRS and $0.046/GB/month for GRS. This is equivalent to Azure Data Lake Storage Gen2 pricing. OneLake storage is billed through Azure separately from Fabric capacity and counts toward MACC commitments.
What is a OneLake shortcut and does it have storage costs?
A OneLake shortcut is a pointer to data stored outside OneLake — in ADLS Gen2, Azure Blob, Amazon S3, or Google Cloud Storage. Shortcuts do not copy data; they reference it in place. There is no OneLake storage charge for data accessed via shortcut — you pay only the source storage charges.
Is there an included storage allocation with Microsoft Fabric?
No. OneLake storage is always billed separately at $0.023/GB/month. Fabric separates compute (capacity SKU) from storage (OneLake). Your Fabric cost model always has two line items: capacity and storage.
How does OneLake compare to Azure Data Lake Storage Gen2?
OneLake is built on ADLS Gen2 infrastructure and carries identical pricing. OneLake adds Fabric-native governance (unified namespace, data catalogue, Delta Parquet default). ADLS Gen2 supports storage tiering (Cool/Archive); OneLake does not currently. For archival data, ADLS Gen2 with OneLake shortcuts is the better cost architecture.
What are OneLake data access costs (egress)?
Data reads within the same Azure region from Fabric workloads have no egress charge. Cross-region reads cost $0.01–$0.05/GB. External access costs standard Azure egress rates. Size your OneLake in the same Azure region as your Fabric capacity to eliminate egress costs.
Related Microsoft Fabric & Analytics Licensing Guides
- Microsoft Fabric Licensing: Complete Enterprise Guide
- Microsoft Fabric Capacity Planning: F SKU Sizing Guide
- Microsoft Fabric vs Power BI Premium: Migration Decision Guide
- Fabric F SKU vs P SKU: Complete Comparison
- Synapse Analytics vs Microsoft Fabric: Migration Guide
- Power BI Licensing Complete Guide
- Power BI Embedded Licensing Guide