🔐 Agent Authentication Methods

How agents deployed on Databricks Model Serving authenticate to resources like Vector Search, Genie, UC Functions, and MCP servers.

Overview

Three Authentication Methods

When your agent runs on Model Serving, it needs to authenticate to access Databricks resources.

Choose the right method based on two questions:

  • Do you need per-user access control?
  • Does the resource support automatic passthrough?
You can mix and match methods - an agent can use different methods for different resources.
Decision Flow

Which Method Do I Need?

Start with this question: Is per-user access control or user-attributed auditing required?

  • Yes → Use OBO (On-Behalf-Of-User)
  • No → Check if resource supports automatic passthrough

If automatic passthrough is supported → Use Automatic Passthrough

Otherwise → Use Manual Authentication
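The decision flow above can be sketched as a small helper (pure illustration; the function and its return values simply mirror the three methods described in this doc):

```python
def choose_auth_method(per_user_access_required: bool,
                       supports_automatic_passthrough: bool) -> str:
    """Pick an auth method following the decision flow above."""
    if per_user_access_required:
        return "obo"                    # On-Behalf-Of-User
    if supports_automatic_passthrough:
        return "automatic_passthrough"  # Databricks-managed credentials
    return "manual"                     # explicit service principal / PAT

# Example: a Genie tool that must respect per-user row filters
print(choose_auth_method(per_user_access_required=True,
                         supports_automatic_passthrough=True))  # → obo
```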

Method 1

Automatic Authentication Passthrough

The simplest method for accessing Databricks-managed resources.

Declare resource dependencies when logging the agent, and Databricks automatically provisions, rotates, and manages short-lived credentials.

Similar to "Run as owner" for Databricks dashboards. A system-generated service principal with least-privilege access is created automatically.
Automatic Passthrough

How It Works

At deploy time:

  • Permission verification - Databricks verifies endpoint creator can access all declared dependencies
  • Service principal creation - System service principal created with read access to agent resources
  • Credential provisioning - Short-lived M2M OAuth tokens injected into endpoint

At runtime: Agent uses the auto-provisioned credentials. Databricks handles rotation.

Automatic Passthrough

Supported Resources

Resources that support automatic authentication passthrough:

  • Vector Search index - requires Can Use
  • Model Serving endpoint - requires Can Query
  • UC Functions - requires EXECUTE
  • Genie space - requires Can Run
  • SQL Warehouse - requires Use Endpoint
  • UC Table - requires SELECT
  • UC Connection - requires Use Connection
  • Lakebase - requires databricks_superuser
Method 2

On-Behalf-Of-User (OBO)

The agent acts as the Databricks user who runs the query.

This provides:

  • Per-user access to sensitive data
  • Fine-grained controls enforced by Unity Catalog
  • Downscoped tokens - restricted to declared API scopes
User identity is only known at query time, so OBO resources must be initialized in predict(), not __init__().
OBO Authentication

How It Works

At logging time:

  • Declare REST API scopes the agent requires
  • Use AuthPolicy with UserAuthPolicy

At runtime:

  • User sends request to agent endpoint
  • Agent initializes with ModelServingUserCredentials()
  • Resources accessed with user's identity and permissions
  • UC ACLs, row filters, column masks apply to the user
OBO Authentication

Required API Scopes

Declare scopes to follow least privilege:

  • serving.serving-endpoints - Model Serving
  • vectorsearch.vector-search-endpoints
  • vectorsearch.vector-search-indexes
  • sql.warehouses + sql.statement-execution
  • dashboards.genie - Genie spaces
  • catalog.connections - UC connections
Tokens are restricted to the declared APIs only, reducing the risk of misuse.
Method 3

Manual Authentication

Explicitly provide credentials during agent deployment. Use when:

  • Resource doesn't support automatic passthrough
  • Agent needs different credentials than deployer
  • Agent accesses external resources outside Databricks
  • Agent uses the prompt registry
Manual auth has the most flexibility but requires more setup and ongoing credential management.
Manual Authentication

Two Options

OAuth (Recommended):

  • Create service principal + generate OAuth credentials
  • Store in Databricks secrets
  • Automatic token refresh

Personal Access Token (PAT):

  • Simpler setup for development/testing
  • Requires manual rotation management
  • Store PAT in Databricks secrets
Never embed credentials in code. Always use Databricks secret scopes.
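Wiring OAuth service-principal credentials through secrets at deploy time might look like this sketch (the model name, secret scope, and key names are all hypothetical):

```python
def deploy_agent():
    # Not invoked here: requires the databricks-agents package and a workspace.
    from databricks import agents

    # {{secrets/...}} references are resolved by Model Serving at runtime;
    # the raw values never appear in code or in the endpoint config.
    agents.deploy(
        "main.agents.support_agent",  # UC-registered model (illustrative)
        1,                            # model version
        environment_vars={
            "DATABRICKS_CLIENT_ID": "{{secrets/agent_scope/sp_client_id}}",
            "DATABRICKS_CLIENT_SECRET": "{{secrets/agent_scope/sp_client_secret}}",
        },
    )
```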
Model Serving

The Deployment Context

All three auth methods apply to agents deployed via Model Serving:

  • Custom Models - Your agents, MLflow-packaged
  • Foundation Models - Databricks-hosted LLMs
  • External Models - Via AI Gateway

For foundation models, Databricks handles auth internally. For custom models/agents, you choose the method based on your access requirements.

Summary

Choosing the Right Method

Automatic Passthrough: Simplest, Databricks manages credentials. Use when no per-user access needed.

OBO: Agent runs as the user. Use when per-user access control or auditing is required.

Manual: Explicit credentials. Use for external resources or when passthrough isn't supported.

Remember: You can combine methods. Use automatic for Vector Search, OBO for Genie, and manual for external APIs - all in the same agent.
[Diagram] Agent on Model Serving (custom model deployed). Per-user access required?

  • No → Automatic: system service principal + short-lived tokens
  • Yes → OBO: user identity passthrough
  • Fallback → Manual: explicit credentials

Resources shown: Vector Search, Genie, UC Tables, External APIs - all under Unity Catalog governance (permissions, ABAC, row filters, column masks, audit logs).

Quick Reference

Automatic Passthrough

How: Declare resources at logging time

  • System-generated service principal
  • Short-lived M2M OAuth tokens
  • Automatic rotation
  • Low setup complexity
Use when: No per-user access needed

On-Behalf-Of-User (OBO)

How: Initialize in predict() with user credentials

  • Agent runs as end user
  • Downscoped to declared API scopes
  • UC enforces user's permissions
  • Medium setup complexity
Use when: Per-user access control required

Manual Authentication

How: Provide credentials via env vars + secrets

  • OAuth (recommended) or PAT
  • Service principal credentials
  • Manual rotation management
  • High setup complexity
Use when: External resources or no passthrough