In this article, you learn how to automate Databricks operations and accelerate development with the Databricks SDK for Go. This article supplements the Databricks SDK for Go README, API reference, and examples.
note
The Databricks SDK for Go is in Beta and is okay to use in production.
During the Beta period, Databricks recommends that you pin a dependency on the specific minor version of the Databricks SDK for Go that your code depends on, for example in your project's go.mod file. For more information about pinning dependencies, see Managing dependencies.
Before you begin to use the Databricks SDK for Go, your development machine must have Go installed, an existing Go code project, and Databricks authentication configured.

Create a go.mod file to track your Go code's dependencies by running the go mod init command, for example:
Bash
go mod init sample
Take a dependency on the Databricks SDK for Go package by running the go mod edit -require command, replacing 0.8.0 with the latest version of the Databricks SDK for Go package as listed in the CHANGELOG:
Bash
go mod edit -require github.com/databricks/databricks-sdk-go@v0.8.0
Your go.mod file should now look like this:
Go
module sample
go 1.18
require github.com/databricks/databricks-sdk-go v0.8.0
Within your project, create a Go code file that imports the Databricks SDK for Go. The following example, in a file named main.go, lists all the clusters in your Databricks workspace:
Go
package main

import (
	"context"

	"github.com/databricks/databricks-sdk-go"
	"github.com/databricks/databricks-sdk-go/service/compute"
)

func main() {
	w := databricks.Must(databricks.NewWorkspaceClient())
	all, err := w.Clusters.ListAll(context.Background(), compute.ListClustersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range all {
		println(c.ClusterName)
	}
}
Add any missing module dependencies by running the go mod tidy command:
Bash
go mod tidy
note
If you get the error go: warning: "all" matched no packages, you forgot to add a Go code file that imports the Databricks SDK for Go.
Grab copies of all packages needed to support builds and tests of packages in your main module by running the go mod vendor command:
Bash
go mod vendor
Set up your development machine for Databricks authentication.
Run your Go code file, assuming a file named main.go, by running the go run command:
Bash
go run main.go
To update your Go project to use one of the Databricks SDK for Go versions as listed in the CHANGELOG, do the following:
Run the go get command from the root of your project, specifying the -u flag to do an update, and providing the name and target version number of the Databricks SDK for Go package. For example, to update to version 0.12.0, run the following command:
Bash
go get -u github.com/databricks/databricks-sdk-go@v0.12.0
Add and update any missing and outdated module dependencies by running the go mod tidy command:
Bash
go mod tidy
Grab copies of all new and updated packages needed to support builds and tests of packages in your main module by running the go mod vendor command:
Bash
go mod vendor
The Databricks SDK for Go implements the Databricks client unified authentication standard, a consolidated and consistent architectural and programmatic approach to authentication. This approach helps make setting up and automating authentication with Databricks more centralized and predictable. It enables you to configure Databricks authentication once and then use that configuration across multiple Databricks tools and SDKs without further authentication configuration changes. For more information, including more complete code examples in Go, see Databricks client unified authentication.
Some of the available coding patterns to initialize Databricks authentication with the Databricks SDK for Go include:
Use Databricks default authentication by doing one of the following:
Set the DATABRICKS_CONFIG_PROFILE environment variable to the name of the custom configuration profile. Then instantiate, for example, a WorkspaceClient object with Databricks default authentication as follows:
Go
import (
	"github.com/databricks/databricks-sdk-go"
)

w := databricks.Must(databricks.NewWorkspaceClient())
Hard-coding the required fields is supported but not recommended, as it risks exposing sensitive information in your code, such as Databricks personal access tokens. The following example hard-codes Databricks host and access token values for Databricks token authentication:
Go
import (
	"github.com/databricks/databricks-sdk-go"
)

w := databricks.Must(databricks.NewWorkspaceClient(&databricks.Config{
	Host:  "https://...",
	Token: "...",
}))
See also Authentication in the Databricks SDK for Go README.
Examples

The following code examples demonstrate how to use the Databricks SDK for Go to create and delete clusters, run jobs, and list account users. These code examples use the Databricks SDK for Go's default Databricks authentication process.
For additional code examples, see the examples folder in the Databricks SDK for Go repository in GitHub.
Create a cluster

This code example creates a cluster with the latest available Databricks Runtime Long Term Support (LTS) version and the smallest available cluster node type with a local disk. The cluster has one worker and automatically terminates after 15 minutes of idle time. The CreateAndWait method call causes the code to pause until the new cluster is running in the workspace.
Go
package main

import (
	"context"
	"fmt"

	"github.com/databricks/databricks-sdk-go"
	"github.com/databricks/databricks-sdk-go/service/compute"
)

func main() {
	const clusterName = "my-cluster"
	const autoTerminationMinutes = 15
	const numWorkers = 1

	w := databricks.Must(databricks.NewWorkspaceClient())
	ctx := context.Background()

	// Get the latest Long Term Support (LTS) Databricks Runtime version.
	sparkVersions, err := w.Clusters.SparkVersions(ctx)
	if err != nil {
		panic(err)
	}
	latestLTS, err := sparkVersions.Select(compute.SparkVersionRequest{
		Latest:          true,
		LongTermSupport: true,
	})
	if err != nil {
		panic(err)
	}

	// Get the smallest available cluster node type with a local disk.
	nodeTypes, err := w.Clusters.ListNodeTypes(ctx)
	if err != nil {
		panic(err)
	}
	smallestWithLocalDisk, err := nodeTypes.Smallest(compute.NodeTypeRequest{
		LocalDisk: true,
	})
	if err != nil {
		panic(err)
	}

	fmt.Println("Now attempting to create the cluster, please wait...")
	runningCluster, err := w.Clusters.CreateAndWait(ctx, compute.CreateCluster{
		ClusterName:            clusterName,
		SparkVersion:           latestLTS,
		NodeTypeId:             smallestWithLocalDisk,
		AutoterminationMinutes: autoTerminationMinutes,
		NumWorkers:             numWorkers,
	})
	if err != nil {
		panic(err)
	}

	switch runningCluster.State {
	case compute.StateRunning:
		fmt.Printf("The cluster is now ready at %s#setting/clusters/%s/configuration\n",
			w.Config.Host,
			runningCluster.ClusterId,
		)
	default:
		fmt.Printf("Cluster is not running or failed to create. %s", runningCluster.StateMessage)
	}
}
Permanently delete a cluster
This code example permanently deletes the cluster with the specified cluster ID from the workspace.
Go
package main

import (
	"context"

	"github.com/databricks/databricks-sdk-go"
	"github.com/databricks/databricks-sdk-go/service/compute"
)

func main() {
	const clusterId = "1234-567890-ab123cd4"

	w := databricks.Must(databricks.NewWorkspaceClient())
	ctx := context.Background()

	err := w.Clusters.PermanentDelete(ctx, compute.PermanentDeleteCluster{
		ClusterId: clusterId,
	})
	if err != nil {
		panic(err)
	}
}
Run a job
This code example creates a Databricks job that runs the specified notebook on the specified cluster. As the code runs, it gets the existing notebook's path, the existing cluster ID, and related job settings from the user at the terminal. The Get method call on the started run causes the code to pause until the new job has finished running in the workspace.
Go
package main

import (
	"bufio"
	"context"
	"fmt"
	"os"
	"strings"

	"github.com/databricks/databricks-sdk-go"
	"github.com/databricks/databricks-sdk-go/service/jobs"
)

func main() {
	w := databricks.Must(databricks.NewWorkspaceClient())
	ctx := context.Background()

	nt := jobs.NotebookTask{
		NotebookPath: askFor("Workspace path of the notebook to run:"),
	}

	jobToRun, err := w.Jobs.Create(ctx, jobs.CreateJob{
		Name: askFor("Some short name for the job:"),
		Tasks: []jobs.JobTaskSettings{
			{
				Description:       askFor("Some short description for the job:"),
				TaskKey:           askFor("Some key to apply to the job's tasks:"),
				ExistingClusterId: askFor("ID of the existing cluster in the workspace to run the job on:"),
				NotebookTask:      &nt,
			},
		},
	})
	if err != nil {
		panic(err)
	}

	fmt.Printf("Now attempting to run the job at %s/#job/%d, please wait...\n",
		w.Config.Host,
		jobToRun.JobId,
	)

	runningJob, err := w.Jobs.RunNow(ctx, jobs.RunNow{
		JobId: jobToRun.JobId,
	})
	if err != nil {
		panic(err)
	}

	// Wait for the run to finish.
	jobRun, err := runningJob.Get()
	if err != nil {
		panic(err)
	}

	fmt.Printf("View the job run results at %s/#job/%d/run/%d\n",
		w.Config.Host,
		jobRun.JobId,
		jobRun.RunId,
	)
}

// askFor prompts at the terminal until the user enters a non-empty line.
func askFor(prompt string) string {
	var s string
	r := bufio.NewReader(os.Stdin)
	for {
		fmt.Fprint(os.Stdout, prompt+" ")
		s, _ = r.ReadString('\n')
		if s != "" {
			break
		}
	}
	return strings.TrimSpace(s)
}
Manage files in Unity Catalog volumes
This code example demonstrates various calls to files functionality within WorkspaceClient to access a Unity Catalog volume.
Go
package main

import (
	"context"
	"io"
	"os"

	"github.com/databricks/databricks-sdk-go"
	"github.com/databricks/databricks-sdk-go/service/files"
)

func main() {
	w := databricks.Must(databricks.NewWorkspaceClient())
	ctx := context.Background()

	catalog := "main"
	schema := "default"
	volume := "my-volume"
	volumePath := "/Volumes/" + catalog + "/" + schema + "/" + volume

	volumeFolder := "my-folder"
	volumeFolderPath := volumePath + "/" + volumeFolder

	volumeFile := "data.csv"
	volumeFilePath := volumeFolderPath + "/" + volumeFile
	uploadFilePath := "./data.csv"

	// Create the folder in the volume.
	err := w.Files.CreateDirectory(
		ctx,
		files.CreateDirectoryRequest{DirectoryPath: volumeFolderPath},
	)
	if err != nil {
		panic(err)
	}

	// Upload the local file to the volume.
	fileUpload, err := os.Open(uploadFilePath)
	if err != nil {
		panic(err)
	}
	defer fileUpload.Close()

	err = w.Files.Upload(
		ctx,
		files.UploadRequest{
			Contents:  fileUpload,
			FilePath:  volumeFilePath,
			Overwrite: true,
		},
	)
	if err != nil {
		panic(err)
	}

	// List the contents of the volume, then of the folder.
	items := w.Files.ListDirectoryContents(
		ctx,
		files.ListDirectoryContentsRequest{DirectoryPath: volumePath},
	)
	for items.HasNext(ctx) {
		item, err := items.Next(ctx)
		if err != nil {
			break
		}
		println(item.Path)
	}

	itemsFolder := w.Files.ListDirectoryContents(
		ctx,
		files.ListDirectoryContentsRequest{DirectoryPath: volumeFolderPath},
	)
	for itemsFolder.HasNext(ctx) {
		item, err := itemsFolder.Next(ctx)
		if err != nil {
			break
		}
		println(item.Path)
	}

	// Download the file and print its contents.
	file, err := w.Files.DownloadByFilePath(ctx, volumeFilePath)
	if err != nil {
		panic(err)
	}
	bufDownload := make([]byte, file.ContentLength)
	for {
		n, err := file.Contents.Read(bufDownload)
		if err != nil && err != io.EOF {
			panic(err)
		}
		if n == 0 {
			break
		}
		println(string(bufDownload[:n]))
	}

	// Delete the file and then the folder from the volume.
	err = w.Files.DeleteByFilePath(ctx, volumeFilePath)
	if err != nil {
		panic(err)
	}
	err = w.Files.DeleteDirectory(
		ctx,
		files.DeleteDirectoryRequest{DirectoryPath: volumeFolderPath},
	)
	if err != nil {
		panic(err)
	}
}
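The volume paths in the example above are built with plain string concatenation. As a minimal standard-library sketch, path.Join produces the same /Volumes/&lt;catalog&gt;/&lt;schema&gt;/&lt;volume&gt; paths while avoiding duplicate slashes; the volumePath helper here is a hypothetical name for illustration, not part of the SDK:

```go
package main

import (
	"fmt"
	"path"
)

// volumePath builds a /Volumes/<catalog>/<schema>/<volume> path for a Unity
// Catalog volume, plus any extra path segments below it.
func volumePath(catalog, schema, volume string, extra ...string) string {
	parts := append([]string{"/Volumes", catalog, schema, volume}, extra...)
	return path.Join(parts...)
}

func main() {
	fmt.Println(volumePath("main", "default", "my-volume", "my-folder", "data.csv"))
	// /Volumes/main/default/my-volume/my-folder/data.csv
}
```

Note that path.Join is appropriate here because Unity Catalog volume paths always use forward slashes, regardless of the operating system the code runs on.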
List account users
This code example lists the available users within a Databricks account.
Go
package main

import (
	"context"

	"github.com/databricks/databricks-sdk-go"
	"github.com/databricks/databricks-sdk-go/service/iam"
)

func main() {
	a := databricks.Must(databricks.NewAccountClient())
	all, err := a.Users.ListAll(context.Background(), iam.ListAccountUsersRequest{})
	if err != nil {
		panic(err)
	}
	for _, u := range all {
		println(u.UserName)
	}
}
Additional resources
For more information, see: