Databricks Utilities (dbutils) reference
This article contains reference for Databricks Utilities (dbutils). The utilities provide commands that enable you to work with your Databricks environment from notebooks. For example, you can manage files and object storage, and work with secrets. dbutils utilities are available in Python, R, and Scala notebooks.
note
dbutils only supports compute environments that use DBFS.
The following table lists the Databricks Utilities modules, which you can retrieve using dbutils.help().
To list commands for a utility module along with a short description of each command, append .help() after the name of the utility module. The following example lists available commands for the notebook utility:
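dbutils.notebook.help()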
Output
The notebook module.
exit(value: String): void -> This method lets you exit a notebook with a value
run(path: String, timeoutSeconds: int, arguments: Map): String -> This method runs a notebook and returns its exit value
To output help for a command, run dbutils.<utility-name>.help("<command-name>"). The following example displays help for the file system utility's copy command, dbutils.fs.cp:
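dbutils.fs.help("cp")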
Output
/**
* Copies a file or directory, possibly across FileSystems.
*
* Example: cp("/mnt/my-folder/a", "dbfs:/a/b")
*
* @param from FileSystem URI of the source file or directory
* @param to FileSystem URI of the destination file or directory
* @param recurse if true, all files and directories will be recursively copied
* @return true if all files were successfully copied
*/
cp(from: java.lang.String, to: java.lang.String, recurse: boolean = false): boolean
Credentials utility (dbutils.credentials)
The credentials utility allows you to interact with credentials within notebooks. This utility is usable only on clusters with credential passthrough enabled.
The following table lists the available commands for this utility, which you can retrieve using dbutils.credentials.help().
assumeRole command (dbutils.credentials.assumeRole)
assumeRole(role: String): boolean
Sets the Amazon Resource Name (ARN) for the AWS Identity and Access Management (IAM) role to assume when looking for credentials to authenticate with Amazon S3. After you run this command, you can run S3 access commands, such as sc.textFile("s3a://my-bucket/my-file.csv"), to access an object.
To display complete help for this command, run:
dbutils.credentials.help("assumeRole")
Example
Python
dbutils.credentials.assumeRole("arn:aws:iam::123456789012:roles/my-role")
R
dbutils.credentials.assumeRole("arn:aws:iam::123456789012:roles/my-role")
Scala
dbutils.credentials.assumeRole("arn:aws:iam::123456789012:roles/my-role")
getServiceCredentialsProvider command (dbutils.credentials.getServiceCredentialsProvider)
getServiceCredentialsProvider(credentialName: String): Object
Returns a service credentials provider for the given service credential. The return object type is specific to the cloud provider.
To display complete help for this command, run:
dbutils.credentials.help("getServiceCredentialsProvider")
Example
Python
dbutils.credentials.getServiceCredentialsProvider("my-credential")
This utility is not supported in R.
Scala
dbutils.credentials.getServiceCredentialsProvider("my-credential")
showCurrentRole command (dbutils.credentials.showCurrentRole)
showCurrentRole: List
Lists the currently set AWS Identity and Access Management (IAM) role.
To display complete help for this command, run:
dbutils.credentials.help("showCurrentRole")
Example
Python
dbutils.credentials.showCurrentRole()
R
dbutils.credentials.showCurrentRole()
Scala
dbutils.credentials.showCurrentRole()
showRoles command (dbutils.credentials.showRoles)
showRoles: List
Lists the set of possible assumed AWS Identity and Access Management (IAM) roles.
To display complete help for this command, run:
dbutils.credentials.help("showRoles")
Example
Python
dbutils.credentials.showRoles()
R
dbutils.credentials.showRoles()
Scala
dbutils.credentials.showRoles()
Data utility (dbutils.data)
note
Available in Databricks Runtime 9.0 and above.
The data utility allows you to understand and interact with datasets.
The following table lists the available commands for this utility, which you can retrieve using dbutils.data.help().
summarize command (dbutils.data.summarize)
summarize(df: Object, precise: boolean): void
Calculates and displays summary statistics of an Apache Spark DataFrame or pandas DataFrame. This command is available for Python, Scala and R.
important
This command analyzes the complete contents of the DataFrame. Running this command for very large DataFrames can be very expensive.
To display complete help for this command, run:
dbutils.data.help("summarize")
In Databricks Runtime 10.4 LTS and above, you can use the additional precise parameter to adjust the precision of the computed statistics. When precise is set to false (the default), some returned statistics include approximations to reduce run time. When precise is set to true, the statistics are computed with higher precision. All statistics except for the histograms and percentiles for numeric columns are now exact.
The tooltip at the top of the data summary output indicates the mode of the current run.
Example
This example displays summary statistics for an Apache Spark DataFrame with approximations enabled by default. To see the results, run this command in a notebook. This example is based on Sample datasets.
Python
df = spark.read.format('csv').load(
'/databricks-datasets/Rdatasets/data-001/csv/ggplot2/diamonds.csv',
header=True,
inferSchema=True
)
dbutils.data.summarize(df)
R
df <- read.df("/databricks-datasets/Rdatasets/data-001/csv/ggplot2/diamonds.csv", source = "csv", header="true", inferSchema = "true")
dbutils.data.summarize(df)
Scala
val df = spark.read.format("csv")
.option("inferSchema", "true")
.option("header", "true")
.load("/databricks-datasets/Rdatasets/data-001/csv/ggplot2/diamonds.csv")
dbutils.data.summarize(df)
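To compute the higher-precision statistics described above on Databricks Runtime 10.4 LTS and above, pass the precise parameter. A minimal sketch in Python, reusing the DataFrame from the preceding example:
Python
# Compute statistics with higher precision; all statistics except histograms
# and percentiles for numeric columns are exact in this mode.
dbutils.data.summarize(df, precise=True)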
The visualization uses SI notation to concisely render numerical values smaller than 0.01 or larger than 10000. As an example, the numerical value 1.25e-15 will be rendered as 1.25f. One exception: the visualization uses "B" for 1.0e9 (giga) instead of "G".
File system utility (dbutils.fs)
The file system utility allows you to access DBFS (Databricks File System). See What is DBFS?. To access workspace files, use shell commands such as %sh ls, as there are some limitations when using dbutils.fs commands with workspace files.
warning
The Python implementation of all dbutils.fs methods uses snake_case rather than camelCase for keyword formatting.
For example, dbutils.fs.help() displays the option extraConfigs for dbutils.fs.mount(). However, in Python you would use the keyword extra_configs.
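For instance, a mount call in Python passes configuration through the snake_case keyword extra_configs, where Scala would use extraConfigs. A minimal sketch; the bucket name and the configuration entry are placeholder assumptions:
Python
# Hypothetical mount with extra configuration; note the snake_case keywords.
dbutils.fs.mount(
  source = "s3a://my-bucket",
  mount_point = "/mnt/s3-my-bucket",
  extra_configs = {"fs.s3a.server-side-encryption-algorithm": "AES256"}
)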
The following table lists the available commands for this utility, which you can retrieve using dbutils.fs.help().
tip
In notebooks, you can use the %fs magic command to access DBFS. For example, %fs ls /Volumes/main/default/my-volume/ is the same as dbutils.fs.ls("/Volumes/main/default/my-volume/"). See magic commands.
cp command (dbutils.fs.cp)
cp(from: String, to: String, recurse: boolean = false): boolean
Copies a file or directory, possibly across filesystems.
To display complete help for this command, run:
dbutils.fs.help("cp")
Example
This example copies the file named data.csv from /Volumes/main/default/my-volume/ to new-data.csv in the same volume.
Python
dbutils.fs.cp("/Volumes/main/default/my-volume/data.csv", "/Volumes/main/default/my-volume/new-data.csv")
R
dbutils.fs.cp("/Volumes/main/default/my-volume/data.csv", "/Volumes/main/default/my-volume/new-data.csv")
Scala
dbutils.fs.cp("/Volumes/main/default/my-volume/data.csv", "/Volumes/main/default/my-volume/new-data.csv")
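To copy a directory and its contents, pass the recurse parameter. A minimal sketch in Python; the directory names are placeholders:
Python
# Recursively copy the hypothetical directory my-data to my-data-copy.
dbutils.fs.cp("/Volumes/main/default/my-volume/my-data/", "/Volumes/main/default/my-volume/my-data-copy/", recurse=True)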
head command (dbutils.fs.head)
head(file: String, maxBytes: int = 65536): String
Returns up to the specified maximum number of bytes in the given file. The bytes are returned as a UTF-8 encoded string.
To display complete help for this command, run:
dbutils.fs.help("head")
Example
This example displays the first 25 bytes of the file data.csv located in /Volumes/main/default/my-volume/.
Python
dbutils.fs.head("/Volumes/main/default/my-volume/data.csv", 25)
R
dbutils.fs.head("/Volumes/main/default/my-volume/data.csv", 25)
Scala
dbutils.fs.head("/Volumes/main/default/my-volume/data.csv", 25)
ls command (dbutils.fs.ls)
ls(dir: String): Seq
Lists the contents of a directory.
To display complete help for this command, run:
dbutils.fs.help("ls")
Example
This example displays information about the contents of /Volumes/main/default/my-volume/. The modificationTime field is available in Databricks Runtime 10.4 LTS and above. In R, modificationTime is returned as a string.
Python
dbutils.fs.ls("/Volumes/main/default/my-volume/")
R
dbutils.fs.ls("/Volumes/main/default/my-volume/")
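Scala
dbutils.fs.ls("/Volumes/main/default/my-volume/")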
mkdirs command (dbutils.fs.mkdirs)
mkdirs(dir: String): boolean
Creates the given directory if it does not exist. Also creates any necessary parent directories.
To display complete help for this command, run:
dbutils.fs.help("mkdirs")
Example
This example creates the directory my-data within /Volumes/main/default/my-volume/.
Python
dbutils.fs.mkdirs("/Volumes/main/default/my-volume/my-data")
R
dbutils.fs.mkdirs("/Volumes/main/default/my-volume/my-data")
Scala
dbutils.fs.mkdirs("/Volumes/main/default/my-volume/my-data")
mount command (dbutils.fs.mount)
mount(source: String, mountPoint: String, encryptionType: String = "",
owner: String = null, extraConfigs: Map = Map.empty[String, String]): boolean
Mounts the specified source directory into DBFS at the specified mount point.
To display complete help for this command, run:
dbutils.fs.help("mount")
Example
Python
aws_bucket_name = "my-bucket"
mount_name = "s3-my-bucket"
dbutils.fs.mount("s3a://%s" % aws_bucket_name, "/mnt/%s" % mount_name)
Scala
val AwsBucketName = "my-bucket"
val MountName = "s3-my-bucket"
dbutils.fs.mount(s"s3a://$AwsBucketName", s"/mnt/$MountName")
For additional code examples, see Connect to Amazon S3.
mounts command (dbutils.fs.mounts)
mounts: Seq
Displays information about what is currently mounted within DBFS.
To display complete help for this command, run:
dbutils.fs.help("mounts")
Example
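This example displays information about what is currently mounted:
Python
dbutils.fs.mounts()
Scala
dbutils.fs.mounts()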
For additional code examples, see Connect to Amazon S3.
mv command (dbutils.fs.mv)
mv(from: String, to: String, recurse: boolean = false): boolean
Moves a file or directory, possibly across filesystems. A move is a copy followed by a delete, even for moves within filesystems.
To display complete help for this command, run:
dbutils.fs.help("mv")
Example
This example moves the file rows.csv from /Volumes/main/default/my-volume/ to /Volumes/main/default/my-volume/my-data/.
Python
dbutils.fs.mv("/Volumes/main/default/my-volume/rows.csv", "/Volumes/main/default/my-volume/my-data/")
R
dbutils.fs.mv("/Volumes/main/default/my-volume/rows.csv", "/Volumes/main/default/my-volume/my-data/")
Scala
dbutils.fs.mv("/Volumes/main/default/my-volume/rows.csv", "/Volumes/main/default/my-volume/my-data/")
put command (dbutils.fs.put)
put(file: String, contents: String, overwrite: boolean = false): boolean
Writes the specified string to a file. The string is UTF-8 encoded.
To display complete help for this command, run:
dbutils.fs.help("put")
Example
This example writes the string Hello, Databricks! to a file named hello.txt in /Volumes/main/default/my-volume/. If the file exists, it will be overwritten.
Python
dbutils.fs.put("/Volumes/main/default/my-volume/hello.txt", "Hello, Databricks!", True)
R
dbutils.fs.put("/Volumes/main/default/my-volume/hello.txt", "Hello, Databricks!", TRUE)
Scala
dbutils.fs.put("/Volumes/main/default/my-volume/hello.txt", "Hello, Databricks!", true)
refreshMounts command (dbutils.fs.refreshMounts)
refreshMounts: boolean
Forces all machines in the cluster to refresh their mount cache, ensuring they receive the most recent information.
To display complete help for this command, run:
dbutils.fs.help("refreshMounts")
Example
Python
dbutils.fs.refreshMounts()
Scala
dbutils.fs.refreshMounts()
For additional code examples, see Connect to Amazon S3.
rm command (dbutils.fs.rm)
rm(dir: String, recurse: boolean = false): boolean
Removes a file or directory and, optionally, all of its contents. If a file is specified, the recurse parameter is ignored. If a directory is specified, an error occurs when recurse is disabled and the directory is not empty.
To display complete help for this command, run:
dbutils.fs.help("rm")
Example
This example removes the entire directory /Volumes/main/default/my-volume/my-data/ including its contents.
Python
dbutils.fs.rm("/Volumes/main/default/my-volume/my-data/", True)
R
dbutils.fs.rm("/Volumes/main/default/my-volume/my-data/", TRUE)
Scala
dbutils.fs.rm("/Volumes/main/default/my-volume/my-data/", true)
unmount command (dbutils.fs.unmount)
unmount(mountPoint: String): boolean
Deletes a DBFS mount point.
warning
To avoid errors, never modify a mount point while other jobs are reading or writing to it. After modifying a mount, always run dbutils.fs.refreshMounts() on all other running clusters to propagate any mount updates. See refreshMounts command (dbutils.fs.refreshMounts).
To display complete help for this command, run:
dbutils.fs.help("unmount")
Example
Python
dbutils.fs.unmount("/mnt/<mount-name>")
For additional code examples, see Connect to Amazon S3.
updateMount command (dbutils.fs.updateMount)
updateMount(source: String, mountPoint: String, encryptionType: String = "",
owner: String = null, extraConfigs: Map = Map.empty[String, String]): boolean
Similar to the dbutils.fs.mount command, but updates an existing mount point instead of creating a new one. Returns an error if the mount point is not present.
warning
To avoid errors, never modify a mount point while other jobs are reading or writing to it. After modifying a mount, always run dbutils.fs.refreshMounts() on all other running clusters to propagate any mount updates. See refreshMounts command (dbutils.fs.refreshMounts).
This command is available in Databricks Runtime 10.4 LTS and above.
To display complete help for this command, run:
dbutils.fs.help("updateMount")
Example
Python
aws_bucket_name = "my-bucket"
mount_name = "s3-my-bucket"
dbutils.fs.updateMount("s3a://%s" % aws_bucket_name, "/mnt/%s" % mount_name)
Scala
val AwsBucketName = "my-bucket"
val MountName = "s3-my-bucket"
dbutils.fs.updateMount(s"s3a://$AwsBucketName", s"/mnt/$MountName")
Jobs utility (dbutils.jobs)
Provides utilities for leveraging jobs features.
note
This utility is available only for Python.
The following table lists the available modules for this utility, which you can retrieve using dbutils.jobs.help().
taskValues subutility (dbutils.jobs.taskValues)
note
This subutility is available only for Python.
Provides commands for leveraging job task values.
Use this sub-utility to set and get arbitrary values during a job run. These values are called task values. Any task can get values set by upstream tasks and set values for downstream tasks to use.
Each task value has a unique key within the same task. This unique key is known as the task value's key. A task value is accessed with the task name and the task value's key. You can use this to pass information downstream from task to task within the same job run. For example, you can pass identifiers or metrics, such as information about the evaluation of a machine learning model, between different tasks within a job run.
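For instance, an upstream task can record a model metric that a downstream task in the same job run reads back. A minimal sketch in Python; the task name, key, and values are hypothetical:
Python
# In the upstream task (named "train-model" in this hypothetical job):
dbutils.jobs.taskValues.set(key = "model_auc", value = 0.91)

# In a downstream task of the same job run:
auc = dbutils.jobs.taskValues.get(taskKey = "train-model", key = "model_auc", default = 0)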
The following table lists available commands for this subutility, which you can retrieve using dbutils.jobs.taskValues.help().
get command (dbutils.jobs.taskValues.get)
note
This command is available only for Python.
On Databricks Runtime 10.4 and earlier, if get cannot find the task, a Py4JJavaError is raised instead of a ValueError.
get(taskKey: String, key: String, default: int, debugValue: int): Seq
Gets the contents of the specified task value for the specified task in the current job run.
To display complete help for this command, run:
dbutils.jobs.taskValues.help("get")
Example
For example:
Python
dbutils.jobs.taskValues.get(taskKey = "my-task", \
key = "my-key", \
default = 7, \
debugValue = 42)
In the preceding example:
- taskKey is the name of the task that sets the task value. If the command cannot find this task, a ValueError is raised.
- key is the name of the task value's key that you set with the set command (dbutils.jobs.taskValues.set). If the command cannot find this task value's key, a ValueError is raised (unless default is specified).
- default is an optional value that is returned if key cannot be found. default cannot be None.
- debugValue is an optional value that is returned if you try to get the task value from within a notebook that is running outside of a job. This can be useful during debugging when you want to run your notebook manually and return some value instead of raising a TypeError by default. debugValue cannot be None.
If you try to get a task value from within a notebook that is running outside of a job, this command raises a TypeError by default. However, if the debugValue argument is specified in the command, the value of debugValue is returned instead of raising a TypeError.
set command (dbutils.jobs.taskValues.set)
note
This command is available only for Python.
set(key: String, value: String): boolean
Sets or updates a task value. You can set up to 250 task values for a job run.
To display complete help for this command, run:
dbutils.jobs.taskValues.help("set")
Example
Some examples include:
Python
dbutils.jobs.taskValues.set(key = "my-key", \
value = 5)
dbutils.jobs.taskValues.set(key = "my-other-key", \
value = "my other value")
In the preceding examples:
- key is the task value's key. This key must be unique to the task. That is, if two different tasks each set a task value with key K, these are two different task values that have the same key K.
- value is the value for this task value's key. This command must be able to represent the value internally in JSON format. The size of the JSON representation of the value cannot exceed 48 KiB.
If you try to set a task value from within a notebook that is running outside of a job, this command does nothing.
Library utility (dbutils.library)
Most methods in the dbutils.library submodule are deprecated. See Library utility (dbutils.library) (legacy).
You might need to programmatically restart the Python process on Databricks to ensure that locally installed or upgraded libraries function correctly in the Python kernel for your current SparkSession. To do this, run the dbutils.library.restartPython command. See Restart the Python process on Databricks.
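For example, after installing or upgrading a notebook-scoped library, you might restart the Python process so the kernel picks up the new version. A minimal sketch; the package name is a placeholder:
Python
# In one cell, install or upgrade a library (hypothetical package):
# %pip install --upgrade my-package
# In a subsequent cell, restart the Python process:
dbutils.library.restartPython()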
Notebook utility (dbutils.notebook)
The notebook utility allows you to chain together notebooks and act on their results. See Orchestrate notebooks and modularize code in notebooks.
The following table lists the available commands for this utility, which you can retrieve using dbutils.notebook.help().
exit command (dbutils.notebook.exit)
exit(value: String): void
Exits a notebook with a value.
To display complete help for this command, run:
dbutils.notebook.help("exit")
Example
This example exits the notebook with the value Exiting from My Other Notebook.
Python
dbutils.notebook.exit("Exiting from My Other Notebook")
R
dbutils.notebook.exit("Exiting from My Other Notebook")
Scala
dbutils.notebook.exit("Exiting from My Other Notebook")
note
If the run has a query with structured streaming running in the background, calling dbutils.notebook.exit() does not terminate the run. The run will continue to execute for as long as the query is executing in the background. You can stop the query running in the background by clicking Cancel in the cell of the query or by running query.stop(). When the query stops, you can terminate the run with dbutils.notebook.exit().
run command (dbutils.notebook.run)
run(path: String, timeoutSeconds: int, arguments: Map): String
Runs a notebook and returns its exit value. The notebook will run in the current cluster.
To display complete help for this command, run:
dbutils.notebook.help("run")
Example
This example runs a notebook named My Other Notebook in the same location as the calling notebook. The called notebook ends with the line of code dbutils.notebook.exit("Exiting from My Other Notebook"). If the called notebook does not finish running within 60 seconds, an exception is thrown.
Python
dbutils.notebook.run("My Other Notebook", 60)
Scala
dbutils.notebook.run("My Other Notebook", 60)
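The run command also accepts a map of arguments, which the called notebook can read as widget values with dbutils.widgets.get. A minimal sketch; the parameter name and value are hypothetical:
Python
# The called notebook can read this parameter with dbutils.widgets.get("input").
dbutils.notebook.run("My Other Notebook", 60, {"input": "some-value"})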
Secrets utility (dbutils.secrets)
The secrets utility allows you to store and access sensitive credential information without making them visible in notebooks. See Secret management and Step 3: Use the secrets in a notebook.
The following table lists the available commands for this utility, which you can retrieve using dbutils.secrets.help().
get command (dbutils.secrets.get)
get(scope: String, key: String): String
Gets the string representation of a secret value for the specified secrets scope and key.
warning
Administrators, secret creators, and users granted permission can read Databricks secrets. While Databricks makes an effort to redact secret values that might be displayed in notebooks, it is not possible to prevent such users from reading secrets. For more information, see Secret redaction.
To display complete help for this command, run:
dbutils.secrets.help("get")
Example
This example gets the string representation of the secret value for the scope named my-scope and the key named my-key.
Python
dbutils.secrets.get(scope="my-scope", key="my-key")
R
dbutils.secrets.get(scope="my-scope", key="my-key")
Scala
dbutils.secrets.get(scope="my-scope", key="my-key")
getBytes command (dbutils.secrets.getBytes)
getBytes(scope: String, key: String): byte[]
Gets the bytes representation of a secret value for the specified scope and key.
To display complete help for this command, run:
dbutils.secrets.help("getBytes")
Example
This example gets the byte representation of the secret value (in this example, a1!b2@c3#) for the scope named my-scope and the key named my-key.
Python
dbutils.secrets.getBytes(scope="my-scope", key="my-key")
R
dbutils.secrets.getBytes(scope="my-scope", key="my-key")
Scala
dbutils.secrets.getBytes(scope="my-scope", key="my-key")
list command (dbutils.secrets.list)
list(scope: String): Seq
Lists the metadata for secrets within the specified scope.
To display complete help for this command, run:
dbutils.secrets.help("list")
Example
This example lists the metadata for secrets within the scope named my-scope.
Python
dbutils.secrets.list("my-scope")
R
dbutils.secrets.list("my-scope")
Scala
dbutils.secrets.list("my-scope")
listScopes command (dbutils.secrets.listScopes)
listScopes: Seq
Lists the available scopes.
To display complete help for this command, run:
dbutils.secrets.help("listScopes")
Example
This example lists the available scopes.
Python
dbutils.secrets.listScopes()
R
dbutils.secrets.listScopes()
Scala
dbutils.secrets.listScopes()
Widgets utility (dbutils.widgets)
The widgets utility allows you to parameterize notebooks. See Databricks widgets.
The following table lists the available commands for this utility, which you can retrieve using dbutils.widgets.help().
combobox command (dbutils.widgets.combobox)
combobox(name: String, defaultValue: String, choices: Seq, label: String): void
Creates and displays a combobox widget with the specified programmatic name, default value, choices, and optional label.
To display complete help for this command, run:
dbutils.widgets.help("combobox")
Example
This example creates and displays a combobox widget with the programmatic name fruits_combobox. It offers the choices apple, banana, coconut, and dragon fruit and is set to the initial value of banana. This combobox widget has an accompanying label Fruits. This example ends by printing the initial value of the combobox widget, banana.
Python
dbutils.widgets.combobox(
name='fruits_combobox',
defaultValue='banana',
choices=['apple', 'banana', 'coconut', 'dragon fruit'],
label='Fruits'
)
print(dbutils.widgets.get("fruits_combobox"))
R
dbutils.widgets.combobox(
name='fruits_combobox',
defaultValue='banana',
choices=list('apple', 'banana', 'coconut', 'dragon fruit'),
label='Fruits'
)
print(dbutils.widgets.get("fruits_combobox"))
Scala
dbutils.widgets.combobox(
"fruits_combobox",
"banana",
Array("apple", "banana", "coconut", "dragon fruit"),
"Fruits"
)
print(dbutils.widgets.get("fruits_combobox"))
SQL
CREATE WIDGET COMBOBOX fruits_combobox DEFAULT "banana" CHOICES SELECT * FROM (VALUES ("apple"), ("banana"), ("coconut"), ("dragon fruit"))
SELECT :fruits_combobox
dropdown command (dbutils.widgets.dropdown)
dropdown(name: String, defaultValue: String, choices: Seq, label: String): void
Creates and displays a dropdown widget with the specified programmatic name, default value, choices, and optional label.
To display complete help for this command, run:
dbutils.widgets.help("dropdown")
Example
This example creates and displays a dropdown widget with the programmatic name toys_dropdown. It offers the choices alphabet blocks, basketball, cape, and doll and is set to the initial value of basketball. This dropdown widget has an accompanying label Toys. This example ends by printing the initial value of the dropdown widget, basketball.
Python
dbutils.widgets.dropdown(
name='toys_dropdown',
defaultValue='basketball',
choices=['alphabet blocks', 'basketball', 'cape', 'doll'],
label='Toys'
)
print(dbutils.widgets.get("toys_dropdown"))
R
dbutils.widgets.dropdown(
name='toys_dropdown',
defaultValue='basketball',
choices=list('alphabet blocks', 'basketball', 'cape', 'doll'),
label='Toys'
)
print(dbutils.widgets.get("toys_dropdown"))
Scala
dbutils.widgets.dropdown(
"toys_dropdown",
"basketball",
Array("alphabet blocks", "basketball", "cape", "doll"),
"Toys"
)
print(dbutils.widgets.get("toys_dropdown"))
SQL
CREATE WIDGET DROPDOWN toys_dropdown DEFAULT "basketball" CHOICES SELECT * FROM (VALUES ("alphabet blocks"), ("basketball"), ("cape"), ("doll"))
SELECT :toys_dropdown
get command (dbutils.widgets.get)
get(name: String): String
Gets the current value of the widget with the specified programmatic name. This programmatic name can be either:
- The name of a custom widget in the notebook, for example fruits_combobox or toys_dropdown.
- The name of a notebook task parameter, for example name or age. For more information, see the coverage of parameters for notebook tasks in the jobs UI or the notebook_params field in the Trigger a new job run (POST /jobs/run-now) operation in the Jobs API.
To display complete help for this command, run:
dbutils.widgets.help("get")
Example
This example gets the value of the widget that has the programmatic name fruits_combobox.
Python
dbutils.widgets.get('fruits_combobox')
R
dbutils.widgets.get('fruits_combobox')
Scala
dbutils.widgets.get("fruits_combobox")
This example gets the value of the notebook task parameter that has the programmatic name age. This parameter was set to 35 when the related notebook task was run.
Python
dbutils.widgets.get('age')
R
dbutils.widgets.get('age')
Scala
dbutils.widgets.get("age")
getAll command (dbutils.widgets.getAll)
getAll: map
Gets a mapping of all current widget names and values. This can be especially useful to quickly pass widget values to a spark.sql() query.
This command is available in Databricks Runtime 13.3 LTS and above. It is only available for Python and Scala.
To display complete help for this command, run:
dbutils.widgets.help("getAll")
Example
This example gets the map of widget values and passes it as parameter arguments in a Spark SQL query.
Python
df = spark.sql("SELECT * FROM table where col1 = :param", dbutils.widgets.getAll())
df.show()
Scala
val df = spark.sql("SELECT * FROM table where col1 = :param", dbutils.widgets.getAll())
df.show()
getArgument command (dbutils.widgets.getArgument)
getArgument(name: String, optional: String): String
Gets the current value of the widget with the specified programmatic name. If the widget does not exist, an optional message can be returned.
To display complete help for this command, run:
dbutils.widgets.help("getArgument")
Example
This example gets the value of the widget that has the programmatic name fruits_combobox. If this widget does not exist, the message Error: Cannot find fruits combobox is returned.
Python
dbutils.widgets.getArgument('fruits_combobox', 'Error: Cannot find fruits combobox')
R
dbutils.widgets.getArgument('fruits_combobox', 'Error: Cannot find fruits combobox')
Scala
dbutils.widgets.getArgument("fruits_combobox", "Error: Cannot find fruits combobox")
multiselect command (dbutils.widgets.multiselect)
multiselect(name: String, defaultValue: String, choices: Seq, label: String): void
Creates and displays a multiselect widget with the specified programmatic name, default value, choices, and optional label.
To display complete help for this command, run:
dbutils.widgets.help("multiselect")
Example
This example creates and displays a multiselect widget with the programmatic name days_multiselect. It offers the choices Monday through Sunday and is set to the initial value of Tuesday. This multiselect widget has an accompanying label Days of the Week. This example ends by printing the initial value of the multiselect widget, Tuesday.
Python
dbutils.widgets.multiselect(
name='days_multiselect',
defaultValue='Tuesday',
choices=['Monday', 'Tuesday', 'Wednesday', 'Thursday',
'Friday', 'Saturday', 'Sunday'],
label='Days of the Week'
)
print(dbutils.widgets.get("days_multiselect"))
R
dbutils.widgets.multiselect(
name='days_multiselect',
defaultValue='Tuesday',
choices=list('Monday', 'Tuesday', 'Wednesday', 'Thursday',
'Friday', 'Saturday', 'Sunday'),
label='Days of the Week'
)
print(dbutils.widgets.get("days_multiselect"))
Scala
dbutils.widgets.multiselect(
"days_multiselect",
"Tuesday",
Array("Monday", "Tuesday", "Wednesday", "Thursday",
"Friday", "Saturday", "Sunday"),
"Days of the Week"
)
print(dbutils.widgets.get("days_multiselect"))
SQL
CREATE WIDGET MULTISELECT days_multiselect DEFAULT "Tuesday" CHOICES SELECT * FROM (VALUES ("Monday"), ("Tuesday"), ("Wednesday"), ("Thursday"), ("Friday"), ("Saturday"), ("Sunday"))
SELECT :days_multiselect
remove command (dbutils.widgets.remove)
remove(name: String): void
Removes the widget with the specified programmatic name.
To display complete help for this command, run:
dbutils.widgets.help("remove")
important
If you add a command to remove a widget, you cannot add a subsequent command to create a widget in the same cell. You must create the widget in another cell.
Example
This example removes the widget with the programmatic name fruits_combobox.
Python
dbutils.widgets.remove('fruits_combobox')
R
dbutils.widgets.remove('fruits_combobox')
Scala
dbutils.widgets.remove("fruits_combobox")
SQL
REMOVE WIDGET fruits_combobox
removeAll command (dbutils.widgets.removeAll)
removeAll: void
Removes all widgets from the notebook.
To display complete help for this command, run:
dbutils.widgets.help("removeAll")
important
If you add a command to remove all widgets, you cannot add a subsequent command to create any widgets in the same cell. You must create the widgets in another cell.
Example
This example removes all widgets from the notebook.
Python
dbutils.widgets.removeAll()
R
dbutils.widgets.removeAll()
Scala
dbutils.widgets.removeAll()
text command (dbutils.widgets.text)
text(name: String, defaultValue: String, label: String): void
Creates and displays a text widget with the specified programmatic name, default value, and optional label.
To display complete help for this command, run:
dbutils.widgets.help("text")
Example
This example creates and displays a text widget with the programmatic name your_name_text. It is set to the initial value of Enter your name. This text widget has an accompanying label Your name. This example ends by printing the initial value of the text widget, Enter your name.
Python
dbutils.widgets.text(
name='your_name_text',
defaultValue='Enter your name',
label='Your name'
)
print(dbutils.widgets.get("your_name_text"))
R
dbutils.widgets.text(
name='your_name_text',
defaultValue='Enter your name',
label='Your name'
)
print(dbutils.widgets.get("your_name_text"))
Scala
dbutils.widgets.text(
"your_name_text",
"Enter your name",
"Your name"
)
print(dbutils.widgets.get("your_name_text"))
SQL
CREATE WIDGET TEXT your_name_text DEFAULT "Enter your name"
SELECT :your_name_text
Databricks Utilities API library
To accelerate application development, it can be helpful to compile, build, and test applications before you deploy them as production jobs. To enable you to compile against Databricks Utilities, Databricks provides the dbutils-api library. You can download the dbutils-api library from the DBUtils API webpage on the Maven Repository website or include the library by adding a dependency to your build file:
SBT
Scala
libraryDependencies += "com.databricks" % "dbutils-api_TARGET" % "VERSION"
Maven
XML
<dependency>
<groupId>com.databricks</groupId>
<artifactId>dbutils-api_TARGET</artifactId>
<version>VERSION</version>
</dependency>
Gradle
Groovy
compile 'com.databricks:dbutils-api_TARGET:VERSION'
Replace TARGET with the desired target (for example, 2.12) and VERSION with the desired version (for example, 0.0.5). For a list of available targets and versions, see the DBUtils API webpage on the Maven Repository website.
Once you build your application against this library, you can deploy the application.
important
The dbutils-api library only allows you to locally compile an application that uses dbutils, not to run it. To run the application, you must deploy it in Databricks.
Limitations
Calling dbutils inside of executors can produce unexpected results or errors. If you need to run file system operations on executors using dbutils, refer to Parallelize filesystem operations.
For information about executors, see Cluster Mode Overview on the Apache Spark website.