Production-ready infrastructure templates, AI governance patterns, and security architectures for Databricks on Azure, AWS, and GCP — built by Bhavin Kukadia.
Modular Terraform templates for Non-PL, Full Private (air-gapped), and Hub-Spoke patterns. 8 reusable modules covering networking, workspace, Unity Catalog, Key Vault, private endpoints, and monitoring.
Private Link workspace templates with full Data Exfiltration Protection (DEP) controls. Covers VPC design, PrivateLink endpoints, IAM roles, cross-account setups, and S3 data access patterns.
VPC Service Controls (VPC-SC), Private Service Connect (PSC), and CMEK implementations. Includes Workload Identity Federation, GCS connectors, and data exfiltration prevention patterns.
Scroll-based animated visualizations covering all Databricks AI product authentication and authorization patterns — Agent Bricks, Genie Space, Databricks Apps, MCP, AI Gateway. Interactive decision trees help you choose the right pattern for your workload.
End-to-end governed orchestration hub covering all Databricks AI services in one authoritative reference.
Service Principal passthrough, On-Behalf-Of-User (OBO), and OAuth Token Federation — with full code examples.
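As a minimal sketch of the OAuth client-credentials (M2M) flow for a Databricks service principal: the workspace host and credential values below are placeholders, and the request shape assumes the standard `/oidc/v1/token` endpoint with the `all-apis` scope.

```python
import urllib.parse

def build_token_request(workspace_host: str, client_id: str, client_secret: str):
    """Assemble the pieces of a client-credentials token request
    against the Databricks OAuth endpoint (a sketch, not a full client)."""
    url = f"https://{workspace_host}/oidc/v1/token"
    # Form body for the M2M grant; "all-apis" scopes the token to the REST APIs.
    body = urllib.parse.urlencode({
        "grant_type": "client_credentials",
        "scope": "all-apis",
    })
    # The service principal's credentials go in HTTP Basic auth, e.g.
    # requests.post(url, data=body, auth=(client_id, client_secret)).
    return url, body, (client_id, client_secret)
```

The returned bearer token is then sent as `Authorization: Bearer <token>` on REST calls; OBO and token federation swap this grant for a user-scoped or externally federated token.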
Four-layer access control: workspace restrictions, UC privileges, ABAC governed tags, and row/column-level security.
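To make the layering concrete, here is an illustrative sketch that renders two of the four layers as Unity Catalog SQL; the catalog, table, group, and filter-function names (`main.sales.orders`, `analysts`, `region_filter`) are made up for illustration.

```python
def uc_grant(privilege: str, securable: str, principal: str) -> str:
    """Render a Unity Catalog GRANT statement (layer 2: UC privileges)."""
    return f"GRANT {privilege} ON {securable} TO `{principal}`"

# Layer 1 (workspace restrictions) and layer 3 (ABAC governed tags) are
# configured outside SQL, via workspace settings and tag policies, so they
# are omitted here.
statements = [
    uc_grant("USE CATALOG", "CATALOG main", "analysts"),
    uc_grant("SELECT", "TABLE main.sales.orders", "analysts"),
    # Layer 4: row-level security via a SQL row-filter function.
    "ALTER TABLE main.sales.orders "
    "SET ROW FILTER main.sales.region_filter ON (region)",
]
```

Each layer narrows access further: a principal must pass the workspace boundary, hold the UC privilege, satisfy the tag policy, and then only sees rows the filter function allows.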
Multi-team access patterns for 1,000+ users, including strategies for scaling Genie under complex Unity Catalog governance models.
System tables monitoring and audit queries for tracking AI product usage, access patterns, and governance compliance.
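A sketch of what such an audit query can look like, built over the `system.access.audit` system table; the column names follow the documented audit-log schema, but the seven-day window and grouping are illustrative choices, not queries from this repo.

```python
# Illustrative audit query: top actors and actions over the last week.
AUDIT_QUERY = """
SELECT user_identity.email,
       service_name,
       action_name,
       COUNT(*) AS events
FROM system.access.audit
WHERE event_time >= current_date() - INTERVAL 7 DAYS
GROUP BY 1, 2, 3
ORDER BY events DESC
"""
```

Run against a SQL warehouse, this surfaces which principals exercise which services, a starting point for tracking AI product usage and flagging unexpected access patterns.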
Set up Terraform authentication for Azure, AWS, or GCP. Zero jargon — step-by-step from scratch.
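As a quick preflight sketch: these are the environment variables the standard Azure, AWS, and GCP Terraform providers read for credentials, with a helper that reports which ones are still unset (the helper itself is illustrative, not part of the templates).

```python
import os

# Credential environment variables read by each cloud's Terraform provider.
REQUIRED_VARS = {
    "azure": ["ARM_CLIENT_ID", "ARM_CLIENT_SECRET",
              "ARM_TENANT_ID", "ARM_SUBSCRIPTION_ID"],
    "aws": ["AWS_ACCESS_KEY_ID", "AWS_SECRET_ACCESS_KEY"],
    "gcp": ["GOOGLE_CREDENTIALS"],  # path to a service-account key file
}

def missing_vars(cloud: str) -> list:
    """Return the provider credentials not yet exported for the chosen cloud."""
    return [v for v in REQUIRED_VARS[cloud] if v not in os.environ]
```

Exporting the listed variables before `terraform init`/`plan` is the simplest path; production setups typically swap static secrets for workload identity or instance roles.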
Complete multi-cloud networking reference covering VNet, VPC, VPC-SC, Private Link, and troubleshooting flows.
How Databricks accesses your cloud account — managed identities, service accounts, IAM roles explained clearly.
Quick answers to the most frequently asked questions about Databricks infrastructure and security setup.
967-line architecture document covering streaming patterns, resilience strategies, data quality, and medallion lakehouse design for healthcare ingestion.
Full bronze-to-silver-to-gold pipeline for HL7v2 messages. 11 curated tables covering ADT, ORM, ORU, and other message types.
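To show the shape of the bronze-layer parse step, here is a toy HL7v2 segment splitter; the sample ADT message and field positions are illustrative only and much simpler than the pipeline's real parser.

```python
def parse_segments(message: str) -> dict:
    """Split an HL7v2 message on segment boundaries (carriage returns),
    keyed by the three-letter segment ID in the first pipe-delimited field."""
    segments = {}
    for line in message.strip().split("\r"):
        fields = line.split("|")
        segments.setdefault(fields[0], []).append(line)
    return segments

# Minimal made-up ADT^A01 message: header (MSH) plus patient identity (PID).
sample = ("MSH|^~\\&|SENDER|FAC|RCVR|FAC|202401010000||ADT^A01|123|P|2.5\r"
          "PID|1||MRN001^^^HOSP||DOE^JANE")
parsed = parse_segments(sample)
msg_type = parsed["MSH"][0].split("|")[8]  # MSH-9: message type, e.g. ADT^A01
```

Downstream, the silver layer would explode each segment type into its own curated table (ADT events, orders, results, and so on).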
REST API ingestion and streaming pipeline for FHIR R4. 8 curated tables, 1579-line parser with validation, quarantine patterns, and schema evolution support.
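The quarantine pattern can be sketched as a routing step: structurally valid FHIR resources continue to bronze, anything malformed lands in a quarantine table for inspection. The two-field validity check below is a deliberately tiny stand-in for the real parser's validation.

```python
def route_resource(resource: dict) -> str:
    """Route a FHIR resource: 'bronze' if it has the minimal required
    fields, 'quarantine' otherwise (real validation is far stricter)."""
    required = {"resourceType", "id"}
    return "bronze" if required <= resource.keys() else "quarantine"

batch = [
    {"resourceType": "Patient", "id": "p1"},
    {"resourceType": "Observation"},  # missing "id": quarantined
]
routed = [route_resource(r) for r in batch]
```

Keeping rejects queryable instead of dropping them is what makes the pattern operable: quarantined rows can be fixed and replayed after a schema change.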
Official API-backed tool to extract Databricks IP ranges by region and service type. Supports JSON, CSV, and plain CIDR output formats. Includes runbook and tests.
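A sketch of the region/service filtering step such a tool performs; the payload shape here is a made-up stand-in, not the tool's actual API schema, and the CIDRs are sample values.

```python
import ipaddress

def filter_cidrs(entries, region=None, service=None):
    """Return CIDR strings matching the given region/service filters,
    validating each CIDR before emitting it."""
    out = []
    for e in entries:
        if region and e.get("region") != region:
            continue
        if service and e.get("service") != service:
            continue
        ipaddress.ip_network(e["cidr"])  # raises ValueError on malformed CIDR
        out.append(e["cidr"])
    return out

sample = [
    {"region": "us-east-1", "service": "control-plane", "cidr": "10.0.0.0/24"},
    {"region": "eu-west-1", "service": "control-plane", "cidr": "10.1.0.0/24"},
]
```

The same filtered list can then be rendered as JSON, CSV, or plain CIDR lines for firewall rule ingestion.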
Legacy content including Databricks jump-start notebooks, Spark/MLflow/Delta Lake examples, and REST API Postman collections.