
Azure Databricks Standard Tier End of Life: What It Means and How to Prepare

Azure Databricks Migration: Standard to Premium Tier


Databricks is steadily simplifying its platform tiers across clouds, and the Azure Databricks Standard Tier is reaching end of life. While the change aligns Azure with AWS and GCP, it is also a strategic move to ensure all customers and users have access to the advanced security, governance, and performance features required for modern AI and data engineering. This article breaks down what you should know, consider, and plan for as you prepare for this upgrade.


A quick side note: on AWS, Databricks has already completed this transition. As of October 1, 2025, all AWS Standard Tier workspaces were automatically upgraded to Premium Tier. Azure Databricks is now following the same direction.

Timelines for Azure Databricks


Azure subscriptions will no longer support the creation of Azure Databricks Standard Tier workspaces after April 1, 2026.


Starting October 1, 2026, all Azure Databricks Standard Tier workspaces will be automatically upgraded to Premium Tier.



If you are running production or large-scale job workloads on Standard Tier today, this change deserves immediate attention.





What Changes with Premium Tier?


Premium Tier is not just “Standard with a higher price.” It introduces enterprise-grade governance, security, and operational controls that many teams previously implemented externally or inconsistently.


Key capabilities unlocked in Premium workspaces include:


1. Advanced Governance with Unity Catalog

Unity Catalog, a central metastore providing a unified namespace across all your Databricks workspaces, fundamentally changes how organizations manage data:

  • Centralized data governance across workspaces

  • Fine-grained access control at the catalog, schema, table, and column levels

  • Built-in data lineage and auditability

  • Consistent security across analytics, ML, and AI workloads
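
For example, once Unity Catalog is in place, fine-grained access control comes down to simple SQL grants. A minimal sketch, run from a notebook on a Unity Catalog-enabled cluster (the catalog, schema, table, and group names are hypothetical placeholders):

# Grant a group access down to a specific table; column-level rules can be
# layered on top with dynamic views or column masks.
spark.sql("GRANT USE CATALOG ON CATALOG sales TO `data_analysts`")
spark.sql("GRANT USE SCHEMA ON SCHEMA sales.reporting TO `data_analysts`")
spark.sql("GRANT SELECT ON TABLE sales.reporting.orders TO `data_analysts`")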


2. Enterprise Identity Integration

  • Automatic identity management, enabling effortless integration with Microsoft Entra ID (formerly Azure AD). Previously, admins still had to configure SCIM integration to sync users and groups from Entra ID; with automatic identity management, that is no longer necessary.

  • Fine-grained user and group lifecycle management, streamlined directly from Entra ID.


3. Advanced Security and Compliance Features

  • IP access lists and control-plane private connectivity (Azure Private Link).

  • Better alignment with regulated and enterprise environments, including the higher-level compliance certifications required in regulated industries (HIPAA, PCI-DSS, etc.).
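
Both of these controls can also be automated end to end. A minimal sketch against the documented workspace-conf and IP Access Lists REST endpoints (assumes the requests library, an admin personal access token, and a Premium workspace; the workspace URL, token, and IP range are placeholders):

import requests

host = "https://adb-1234567890123456.7.azuredatabricks.net"  # placeholder URL
headers = {"Authorization": "Bearer <admin-personal-access-token>"}

# Step 1: turn on the IP access list feature for the workspace.
requests.patch(f"{host}/api/2.0/workspace-conf",
               headers=headers,
               json={"enableIpAccessLists": "true"}).raise_for_status()

# Step 2: allow only the corporate network range.
requests.post(f"{host}/api/2.0/ip-access-lists",
              headers=headers,
              json={"label": "corp-network", "list_type": "ALLOW",
                    "ip_addresses": ["203.0.113.0/24"]}).raise_for_status()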


4. Access to the Latest Databricks Platform Components and Capabilities

  • Unity Catalog (covered separately above)

  • Databricks SQL — the data warehousing layer on top of the lakehouse

  • Delta Sharing — secure data sharing across regions and clouds for various teams and use cases

  • Lakebase — a PostgreSQL-compatible database for your AI apps

  • Lakeflow Spark Declarative Pipelines — if you prefer declarative pipeline development

  • Mosaic AI capabilities, including Vector Search, model serving endpoints, and more

  • Databricks Assistant — built-in AI assistance for your workforce across the platform

  • Managed services such as Lakehouse Monitoring and Predictive Optimization





Cost Implications: The Most Critical Impact


The most visible change for many teams will be DBU pricing, especially for job compute. Most teams that have stayed on the Standard Tier until now did so primarily for ETL workloads running on job compute, where the lower rate brought significant cost savings.


Transparency is key when managing cloud budgets. The move to Premium does involve a major change in DBU (Databricks Unit) pricing. For Job Compute workloads (the backbone of most automated data pipelines in Databricks), the price shift looks as follows, taking the West Europe region as an example:


  • Standard Tier (Job Compute): ~$0.13 per DBU-hour

  • Premium Tier (Job Compute): ~$0.255 per DBU-hour


That is nearly a 2× increase in DBU cost.
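
To make that concrete, here is a back-of-the-envelope calculation in Python. The rates are the West Europe list prices above; the monthly DBU volume is a hypothetical example:

# Monthly job-compute cost before and after the tier change.
standard_rate = 0.13    # $ per DBU-hour, Standard Tier job compute
premium_rate = 0.255    # $ per DBU-hour, Premium Tier job compute
monthly_dbus = 10_000   # hypothetical monthly DBU consumption

print(f"Standard: ${monthly_dbus * standard_rate:,.0f} / month")  # $1,300
print(f"Premium:  ${monthly_dbus * premium_rate:,.0f} / month")   # $2,550
print(f"Factor:   {premium_rate / standard_rate:.2f}x")           # 1.96x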


The increase is real, and it should be planned for, especially for large-scale workloads.


  • Batch-heavy platforms with thousands of job hours per month will see material cost increases.


  • Teams that previously optimized cost purely at the cluster level now need architecture-level optimization.


  • Poorly tuned jobs, oversized clusters, and legacy pipelines become far more expensive overnight.


But Premium Tier also unlocks the tools needed to actively manage and optimize that cost. Use this opportunity to check cluster utilization and right-size your clusters, and to move to the latest cluster runtimes for their performance optimizations. Reducing overall job runtime or the amount of data scanned can offset the increase partially or even fully, depending on your setup. The sketch below shows one way to find your most DBU-hungry jobs.
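
Premium-only system tables are a good starting point here, because they show exactly where DBUs go. A sketch, run from a notebook, assuming Unity Catalog and system tables are enabled (column names follow the documented system.billing.usage schema, but verify against your workspace):

# Top 20 jobs by DBU consumption over the last 30 days.
top_jobs = spark.sql("""
    SELECT usage_metadata.job_id,
           SUM(usage_quantity) AS dbus_last_30_days
    FROM system.billing.usage
    WHERE usage_date >= date_sub(current_date(), 30)
      AND usage_metadata.job_id IS NOT NULL
    GROUP BY usage_metadata.job_id
    ORDER BY dbus_last_30_days DESC
    LIMIT 20
""")
top_jobs.show(truncate=False)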





How to Check Your Workspace Tier


Not sure which tier you are currently using? It’s easy to verify:


  1. Azure Portal: Navigate to your Azure Databricks service instance. The Overview page will display the “Pricing Tier.”


  2. Databricks Account Console: Check the Workspaces page in your account console to see a bird’s-eye view of all workspace tiers across your organization.
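
If you manage many workspaces, the check is also easy to script. A minimal sketch that reads the SKU from the Azure Resource Manager REST API (assumes the azure-identity and requests packages and an active Azure login, e.g., via az login; the subscription, resource group, and workspace names are placeholders):

import requests
from azure.identity import DefaultAzureCredential

sub, rg, ws = "<subscription-id>", "<resource-group>", "<workspace-name>"
token = DefaultAzureCredential().get_token(
    "https://management.azure.com/.default").token

# GET the workspace resource and read its pricing tier from the SKU.
url = (f"https://management.azure.com/subscriptions/{sub}"
       f"/resourceGroups/{rg}/providers/Microsoft.Databricks"
       f"/workspaces/{ws}?api-version=2023-02-01")
resp = requests.get(url, headers={"Authorization": f"Bearer {token}"})
resp.raise_for_status()
print(resp.json()["sku"]["name"])  # "standard" or "premium"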





Upgrade Steps and Considerations for Automated IaC Deployments


High-level upgrade steps are covered in the official Azure Databricks documentation.



As a fan of IaC for automated cloud infrastructure deployments, I want to share some insights here. To ensure your Infrastructure-as-Code (IaC) remains the source of truth for your environment, perform the upgrade by updating your configuration files rather than just clicking the button in the portal.


🛠️ Upgrading via Terraform


If you manage your workspaces with Terraform, the upgrade is straightforward but requires attention to the provider version to avoid accidental resource recreation.


  • Update the SKU: Locate your azurerm_databricks_workspace resource and change the sku argument from standard to premium.


  • Check Provider Version: Ensure you are using azurerm version 3.41.0 or later. Older versions may not recognize the SKU change as an "update" and might attempt to destroy and recreate the workspace.


  • Verification: Always run terraform plan first. Look for the output ~ update in-place. If you see -/+ destroy and then create replacement, stop and verify that all other parameters (like location, name, and resource_group_name) match your current deployment exactly.


Terraform Example


resource "azurerm_databricks_workspace" "example" {
  name                = "my-databricks-workspace"
  resource_group_name = azurerm_resource_group.example.name
  location            = azurerm_resource_group.example.location
  sku                 = "premium" # Update this from "standard"
}

📝 Upgrading via ARM or Bicep


For ARM templates or Bicep files, the process follows the “Same Name” logic. Azure recognizes a deployment as an update if the resource name and resource group remain identical.


  1. Modify the Template: Update the sku property in your Bicep or JSON file.


  2. Match Parameters: Ensure the managedResourceGroupId and any custom VNet parameters match the existing workspace exactly. If these differ, Azure may attempt to create a new resource or fail.

  3. Deployment: Run your deployment command (e.g., az deployment group create). Azure will detect the SKU change and transition the existing workspace to Premium.


Bicep Example:


resource databricksWorkspace 'Microsoft.Databricks/workspaces@2023-02-01' = {
  name: 'my-databricks-workspace'
  location: location
  sku: {
    name: 'premium' // Update from 'standard'
  }
  properties: {
    managedResourceGroupId: managedResourceGroupId
  }
}

Important Considerations


  • Cluster Termination: During the SKU transition, active clusters may be terminated. Schedule your IaC apply during a maintenance window.


  • Access Control Lists (ACLs): By default, workspaces upgraded from Standard to Premium have ACLs disabled. You must enable them manually or via the API after the upgrade (a hedged API sketch follows this list).

    Note: Once enabled, ACLs cannot be disabled.


  • VNet Injection: If you are using a custom VNet (VNet Injection), the upgrade is fully supported via IaC, but you must ensure your subnets have sufficient IP addresses to handle any new Premium-tier features you plan to enable.


  • State Consistency: If you manually upgrade via the Azure Portal first, your next Terraform/ARM run will show state drift. It is highly recommended to update your code first to keep your environment predictable.


  • No application code or job changes required: No changes to jobs or artifacts are needed; everything in the workspace should continue to work as expected after the upgrade to Premium Tier.
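
For the ACL point above, here is a hedged sketch of enabling access control through the workspace-conf API once the upgrade completes. Note that the configuration key shown is an assumption on my part; confirm the current key names in the Databricks admin documentation before applying it, and remember that the change is irreversible:

import requests

host = "https://adb-1234567890123456.7.azuredatabricks.net"  # placeholder URL
headers = {"Authorization": "Bearer <admin-personal-access-token>"}

# ASSUMPTION: "enableWorkspaceAcls" is a placeholder key name; verify it in
# the Databricks admin docs. Once enabled, ACLs cannot be disabled again.
requests.patch(f"{host}/api/2.0/workspace-conf",
               headers=headers,
               json={"enableWorkspaceAcls": "true"}).raise_for_status()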





Feel free to drop a comment below — I’d love to hear your thoughts. If you need expert guidance for this migration or want to optimize your workloads, my team at @EnablerMinds is here to help. With deep expertise in Databricks, we ensure a seamless transition that maximizes your ROI.


