This page provides reference information for configuring a Nomad client in the client block of a Nomad agent configuration. Use this block to enable a client and to configure allocation directories, artifact and template behavior, networking, node pools, servers to join, garbage collection, workload behavior, client resources, chroot, host volumes, host networks, and driver-specific behavior.
Refer to the Set Server and Client Nodes and Nomad Agent pages to learn about the Nomad agent process and how to configure the server and client nodes in your cluster.
client {
enabled = true
servers = ["1.2.3.4:4647", "5.6.7.8:4647"]
}
alloc_dir (string: "") - Specifies the directory to use for allocation data. When this parameter is empty, Nomad will generate the path using the top-level data_dir suffixed with alloc, like "/opt/nomad/alloc". This must be an absolute path. Nomad will create the directory on the host if it does not exist when the agent process starts.
alloc_mounts_dir (string: "") - Specifies the directory to use for binding mounts for the unveil file isolation mode. When this parameter is empty, Nomad generates the path as a sibling of the top-level data_dir, with the name alloc_mounts. For example, if the data_dir is /opt/nomad/data, then the alloc mounts directory is /opt/nomad/alloc_mounts. This must be an absolute path and should not be inside the Nomad data directory. Nomad creates the directory on the host if it does not exist when the agent process starts.
chroot_env (ChrootEnv: nil) - Specifies a key-value mapping that defines the chroot environment for jobs using the Exec and Java drivers.
enabled (bool: false) - Specifies if client mode is enabled. All other client configuration options depend on this value.
max_kill_timeout (string: "30s") - Specifies the maximum amount of time a job is allowed to wait to exit. Individual jobs may customize their own kill timeout, but it may not exceed this value.
disable_remote_exec (bool: false) - Specifies if the client should disable remote execution of tasks running on this client.
meta (map[string]string: nil) - Specifies a key-value map that annotates the node with user-defined metadata.
network_interface (string: varied) - Specifies the name of the interface to force network fingerprinting on. When run in dev mode, this defaults to the loopback interface. When not in dev mode, the interface attached to the default route is used. The scheduler chooses from these fingerprinted IP addresses when allocating ports for tasks. This value supports go-sockaddr/template format. If no non-local IP addresses are found, Nomad may fingerprint link-local IPv6 addresses depending on the client's "fingerprint.network.disallow_link_local" configuration value.
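For instance, a minimal sketch using go-sockaddr/template syntax to select the interface that holds the default route; the GetDefaultInterfaces and attr functions come from go-sockaddr, and the exact template shown is illustrative:
client {
  # Pick the name of the interface attached to the default route.
  network_interface = "{{ GetDefaultInterfaces | attr \"name\" }}"
}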
preferred_address_family (string: "") - Specifies the preferred address family for the network interface. The value can be ipv4 or ipv6. If the selected network interface has both IPv4 and IPv6 addresses, this option selects an IP address of the preferred family. When the option is not specified, the existing behavior is preserved: the first IP address is selected regardless of family.
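For example, to prefer IPv4 addresses when the fingerprinted interface is dual-stack (the interface name shown is illustrative):
client {
  network_interface        = "eth0"
  preferred_address_family = "ipv4"
}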
cpu_disable_dmidecode (bool: false) - Specifies that the client should not use dmidecode as a method of CPU detection. Nomad ignores this field on all platforms except Linux.
cpu_total_compute (int: 0) - Specifies an override for the total CPU compute. This value should be set to # Cores * Core MHz. For example, a quad-core running at 2 GHz would have a total compute of 8000 (4 * 2000). Most clients can determine their total CPU compute automatically, so in most cases this should be left unset.
memory_total_mb (int: 0) - Specifies an override for the total memory. If set, this value overrides any detected memory.
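As an illustrative sketch, a quad-core 2 GHz machine with 16 GiB of RAM could be described with the following overrides; most clients should leave these unset and rely on fingerprinting:
client {
  cpu_total_compute = 8000  # 4 cores * 2000 MHz
  memory_total_mb   = 16384 # overrides any detected memory
}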
disk_total_mb (int: 0) - Specifies an override for the total disk space fingerprint attribute. This value is not used by the scheduler unless you have constraints set on the attribute unique.storage.bytestotal. The actual total disk space can be determined via the Read Stats API.
disk_free_mb (int: 0) - Specifies the disk space free for scheduling allocations. If set, this value overrides any detected free disk space. This value can be seen in nomad node status under Allocated Resources.
min_dynamic_port (int: 20000) - Specifies the minimum dynamic port to be assigned. Individual ports and ranges of ports may be excluded from dynamic port assignment via reserved parameters.
max_dynamic_port (int: 32000) - Specifies the maximum dynamic port to be assigned. Individual ports and ranges of ports may be excluded from dynamic port assignment via reserved parameters.
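For example, to narrow the dynamic port range (the bounds shown are illustrative):
client {
  min_dynamic_port = 20000
  max_dynamic_port = 25000
}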
node_class (string: "") - Specifies an arbitrary string used to logically group client nodes by user-defined class. This value can be used during job placement as an affinity or constraint attribute and in other places where variable interpolation is supported.
node_max_allocs (int: 0) - Specifies the maximum number of allocations that may be scheduled on a client node; this is not enforced if unset. This value can be seen in nomad node status under Allocated Resources.
node_pool (string: "default") - Specifies the node pool in which the client is registered. If the node pool does not exist yet, it is created automatically when the node registers in the authoritative region. In non-authoritative regions, the node is kept in the initializing status until the node pool is created and replicated.
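A sketch combining these placement-related settings; the class name, pool name, and allocation limit are illustrative only:
client {
  node_class      = "batch-workers"
  node_pool       = "batch"
  node_max_allocs = 100
}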
options (Options: nil) - Specifies a key-value mapping of internal configuration for clients, such as for driver configuration.
reserved (Reserved: nil) - Specifies that Nomad should reserve a portion of the node's resources from receiving tasks. This can be used to target a certain capacity usage for the node. For example, a value equal to 20% of the node's CPU could be reserved to target a CPU utilization of 80%.
servers (array<string>: []) - Specifies an array of addresses of the Nomad servers this client should join. This list is used to register the client with the server nodes and advertise the available resources so that the agent can receive work. Each entry may be specified as an IP address or DNS name, with or without the port. If the port is omitted, the default port of 4647 is used. If you are specifying IPv6 addresses, they must be in URL format with brackets (for example, "[2001:db8::1]").
server_join (server_join: nil) - Specifies how the Nomad client will connect to Nomad servers. The start_join field is not supported on the client. The retry_join fields may directly specify the server address or use go-discover syntax for auto-discovery.
state_dir (string: "") - Specifies the directory to use to store client state. When this parameter is empty, Nomad will generate the path using the top-level data_dir suffixed with client, like "/opt/nomad/client". This must be an absolute path. Nomad will create the directory on the host if it does not exist when the agent process starts.
gc_interval (string: "1m") - Specifies the interval at which Nomad attempts to garbage collect terminal allocation directories.
gc_disk_usage_threshold (float: 80) - Specifies the disk usage percent which Nomad tries to maintain by garbage collecting terminal allocations.
gc_inode_usage_threshold (float: 70) - Specifies the inode usage percent which Nomad tries to maintain by garbage collecting terminal allocations.
gc_max_allocs (int: 50) - Specifies the maximum number of allocations which a client will track before triggering a garbage collection of terminal allocations. This does not limit the number of allocations a node can run at a time; however, once the number of tracked allocations exceeds gc_max_allocs, every new allocation causes terminal allocations to be garbage collected.
gc_parallel_destroys (int: 2) - Specifies the maximum number of parallel destroys allowed by the garbage collector. This value should be relatively low to avoid high resource usage during garbage collections.
gc_volumes_on_node_gc (bool: false) - Specifies that the server should delete any dynamic host volumes on this node when the node is garbage collected. You should only set this to true if you know that garbage collected nodes will never rejoin the cluster, such as with ephemeral cloud hosts.
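An illustrative block that sets the garbage-collection parameters to their documented defaults; adjust the values to suit the node:
client {
  gc_interval              = "1m"
  gc_disk_usage_threshold  = 80
  gc_inode_usage_threshold = 70
  gc_max_allocs            = 50
  gc_parallel_destroys     = 2
}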
no_host_uuid (bool: true) - By default a random node UUID will be generated, but setting this to false will use the system's UUID.
cni_path (string: "/opt/cni/bin") - Sets the search path that is used for CNI plugin discovery. Multiple paths can be searched using colon-delimited paths.
cni_config_dir (string: "/opt/cni/config") - Sets the directory where CNI network configuration is located. The client will use this path when fingerprinting CNI networks. Filenames should use the .conflist extension. Filenames with the .conf or .json extensions are loaded as individual plugin configuration.
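For example, to point the client at non-default CNI locations (the paths are illustrative; note that cni_path may contain several colon-delimited directories):
client {
  cni_path       = "/opt/cni/bin:/usr/local/cni/bin"
  cni_config_dir = "/etc/cni/net.d"
}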
bridge_network_name (string: "nomad") - Sets the name of the bridge to be created by Nomad for allocations running with bridge networking mode on the client.
bridge_network_subnet (string: "172.26.64.0/20") - Specifies the subnet from which the client will allocate IP addresses.
bridge_network_subnet_ipv6 (string: "") - Enables IPv6 on Nomad's bridge network by specifying the subnet from which the client will allocate IPv6 addresses.
bridge_network_hairpin_mode (bool: false) - Specifies if hairpin mode is enabled on the network bridge created by Nomad for allocations running with bridge networking mode on this client. You may use the corresponding node attribute nomad.bridge.hairpin_mode in constraints. When hairpin mode is enabled, allocations are able to reach their own IP and all ports bound to it. Changing this value requires a reboot of the client host to take effect.
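A sketch of the bridge networking settings; the values shown restate the defaults apart from enabling hairpin mode:
client {
  bridge_network_name         = "nomad"
  bridge_network_subnet       = "172.26.64.0/20"
  bridge_network_hairpin_mode = true
}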
artifact (Artifact: varied) - Specifies controls on the behavior of task artifact blocks.
template (Template: nil) - Specifies controls on the behavior of task template blocks.
host_volume (host_volume: nil) - Exposes paths from the host as volumes that can be mounted into jobs.
host_volumes_dir (string: "") - Specifies the directory in which host volume plugins should place volumes. When this parameter is empty, Nomad generates the path using the top-level data_dir suffixed with host_volumes, like "/opt/nomad/host_volumes". This must be an absolute path.
host_volume_plugin_dir (string: "") - Specifies the directory in which to find host volume plugins. When this parameter is empty, Nomad generates the path using the top-level data_dir suffixed with host_volume_plugins, like "/opt/nomad/host_volume_plugins". This must be an absolute path.
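For example, to place dynamic host volumes and their plugins at explicit locations (the paths are illustrative and must be absolute):
client {
  host_volumes_dir       = "/srv/nomad/host_volumes"
  host_volume_plugin_dir = "/opt/nomad/host_volume_plugins"
}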
host_network (host_network: nil) - Registers additional host networks with the node that can be selected when port mapping.
drain_on_shutdown (drain_on_shutdown: nil) - Controls the behavior of the client when leave_on_interrupt or leave_on_terminate are set and the client receives the appropriate signal.
cgroup_parent (string: "/nomad") - Specifies the cgroup parent under which cgroup subsystems managed by Nomad will be mounted. Currently this only applies to the cpuset subsystem. This field is ignored on non-Linux platforms.
users (Users: nil) - Specifies options concerning the Nomad client's use of operating system users.
chroot_env Parameters
On Linux, drivers based on isolated fork/exec implement file system isolation using chroot. The chroot_env map lets you configure the chroot environment using source paths on the host operating system.
The mapping format is "<host_source_path>" = "<chroot_destination_path>": the key is the path on the host and the value is the path at which it is made available inside the chroot.
The following example specifies a chroot which contains just enough to run the ls utility:
client {
chroot_env {
"/bin/ls" = "/bin/ls"
"/etc/ld.so.cache" = "/etc/ld.so.cache"
"/etc/ld.so.conf" = "/etc/ld.so.conf"
"/etc/ld.so.conf.d" = "/etc/ld.so.conf.d"
"/etc/passwd" = "/etc/passwd"
"/lib" = "/lib"
"/lib64" = "/lib64"
}
}
Warning chroot limitations: By default Nomad does not copy the ephemeral runtime files in the /run directory. For example, on Ubuntu, /etc/resolv.conf is a symlink to /run/systemd/resolve/stub-resolv.conf, so Nomad does not copy resolv.conf to the chroot environment. In these cases, configure your job's network block for each chroot task.
When chroot_env is unspecified, the exec driver uses a default chroot environment with the most commonly used parts of the operating system. Refer to the Nomad exec driver documentation for the full list.
Nomad never attempts to embed the alloc_dir in the chroot, as doing so would cause infinite recursion.
options Parameters
The following is not an exhaustive list; it covers only the options recognized by the Nomad client itself. To find the options supported by each individual Nomad driver, refer to the drivers documentation.
"driver.allowlist" (string: "") - Specifies a comma-separated list of allowlisted drivers. If specified, drivers not in the allowlist will be disabled. If the allowlist is empty, all drivers are fingerprinted and enabled where applicable.
client {
options = {
"driver.allowlist" = "docker,qemu"
}
}
"driver.denylist"
(string: "")
- Specifies a comma-separated list of denylisted drivers. If specified, drivers in the denylist will be disabled.
client {
options = {
"driver.denylist" = "docker,qemu"
}
}
"env.denylist"
(string: refer to explanation)
- Specifies a comma-separated list of environment variable keys not to pass to these tasks. Nomad passes the host environment variables to exec
, raw_exec
and java
tasks. If specified, the defaults are overridden. If a value is provided, all defaults are overridden (they are not merged).
client {
options = {
"env.denylist" = "MY_CUSTOM_ENVVAR"
}
}
The default list is:
CONSUL_TOKEN
CONSUL_HTTP_TOKEN
CONSUL_HTTP_TOKEN_FILE
NOMAD_TOKEN
VAULT_TOKEN
CONSUL_LICENSE
NOMAD_LICENSE
VAULT_LICENSE
CONSUL_LICENSE_PATH
NOMAD_LICENSE_PATH
VAULT_LICENSE_PATH
AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY
AWS_SESSION_TOKEN
AWS_METADATA_URL
GOOGLE_APPLICATION_CREDENTIALS
GOOGLE_OAUTH_ACCESS_TOKEN
"user.denylist"
(string: refer to explanation)
- Specifies a comma-separated denylist of usernames for which a task is not allowed to run. This only applies if the driver is included in "user.checked_drivers"
. If a value is provided, all defaults are overridden (they are not merged).
client {
options = {
"user.denylist" = "root,ubuntu"
}
}
The default list is:
"user.checked_drivers"
(string: refer to explanation)
- Specifies a comma-separated list of drivers for which to enforce the "user.denylist"
. For drivers using containers, this enforcement is usually unnecessary. If a value is provided, all defaults are overridden (they are not merged).
client {
options = {
"user.checked_drivers" = "exec,raw_exec"
}
}
The default list is:
"fingerprint.allowlist"
(string: "")
- Specifies a comma-separated list of allowlisted fingerprinters. If specified, any fingerprinters not in the allowlist will be disabled. If the allowlist is empty, all fingerprinters are used.
client {
options = {
"fingerprint.allowlist" = "network"
}
}
"fingerprint.denylist"
(string: "")
- Specifies a comma-separated list of denylisted fingerprinters. If specified, any fingerprinters in the denylist will be disabled. A common use-case for the fingerprint denylist is to disable fingerprinters of irrelevant cloud environments, which can slow down client agent startup time.
client {
options = {
"fingerprint.denylist" = "env_aws,env_gce,env_azure,env_digitalocean"
}
}
"fingerprint.network.disallow_link_local"
(string: "false")
- Specifies whether the network fingerprinter should ignore link-local addresses in the case that no globally routable address is found. The fingerprinter will always prefer globally routable addresses.
client {
options = {
"fingerprint.network.disallow_link_local" = "true"
}
}
reserved Parameters
cpu (int: 0) - Specifies the amount of CPU to reserve, in MHz.
cores (string: "") - Specifies the cpuset of CPU cores to reserve. Only supported on Linux.
client {
reserved {
cores = "0-4"
}
}
memory (int: 0) - Specifies the amount of memory to reserve, in MB.
disk (int: 0) - Specifies the amount of disk to reserve, in MB.
reserved_ports (string: "") - Specifies a comma-separated list of ports to reserve on all fingerprinted network devices. Ranges can be specified by using a hyphen separating the two inclusive ends. Refer to host_network for reserving ports on specific host networks.
artifact Parameters
http_read_timeout (string: "30m") - Specifies the maximum duration in which an HTTP download request must complete before it is canceled. Set to 0 to not enforce a limit.
http_max_size (string: "100GB") - Specifies the maximum size allowed for artifacts downloaded via HTTP. Set to 0 to not enforce a limit.
gcs_timeout (string: "30m") - Specifies the maximum duration in which a Google Cloud Storage operation must complete before it is canceled. Set to 0 to not enforce a limit.
git_timeout (string: "30m") - Specifies the maximum duration in which a Git operation must complete before it is canceled. Set to 0 to not enforce a limit.
hg_timeout (string: "30m") - Specifies the maximum duration in which a Mercurial operation must complete before it is canceled. Set to 0 to not enforce a limit.
s3_timeout (string: "30m") - Specifies the maximum duration in which an S3 operation must complete before it is canceled. Set to 0 to not enforce a limit.
decompression_size_limit (string: "100GB") - Specifies the maximum amount of data that will be decompressed before triggering an error and cancelling the operation. Set to "0" to not enforce a limit.
decompression_file_count_limit (int: 4096) - Specifies the maximum number of files that will be decompressed before triggering an error and cancelling the operation. Set to 0 to not enforce a limit.
disable_filesystem_isolation (bool: false) - Specifies whether filesystem isolation should be disabled for artifact downloads. Applies only to systems where filesystem isolation via landlock is possible (Linux kernel 5.13+).
filesystem_isolation_extra_paths ([]string: nil) - Allows extra paths in the filesystem isolation. Paths are specified in the form [kind]:[mode]:[path], where kind must be either f or d (file or directory) and mode must be zero or more of r, w, c, x (read, write, create, execute). For example, f:r:/dev/urandom would enable reading the /dev/urandom file, and d:rx:/opt/bin would enable reading and executing from the /opt/bin directory.
set_environment_variables (string: "") - Specifies a comma-separated list of environment variables that should be inherited by the artifact sandbox from the Nomad client's environment. By default a minimal environment is set, including a PATH appropriate for the operating system.
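An illustrative artifact block combining several of these parameters; the limits shown are examples only:
client {
  artifact {
    http_read_timeout              = "30m"
    http_max_size                  = "100GB"
    decompression_size_limit       = "100GB"
    decompression_file_count_limit = 4096

    # Permit the sandbox to read /dev/urandom, as described above.
    filesystem_isolation_extra_paths = ["f:r:/dev/urandom"]
  }
}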
template Parameters
function_denylist ([]string: ["plugin", "executeTemplate", "writeToFile"]) - Specifies a list of template rendering functions that should be disallowed in job specs. By default the plugin, executeTemplate and writeToFile functions are disallowed, as they allow unrestricted root access to the host or allow recursive execution.
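For example, to allow the writeToFile function while keeping the other defaults disabled (this weakens the sandbox and is shown only as a sketch):
client {
  template {
    function_denylist = ["plugin", "executeTemplate"]
  }
}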
disable_file_sandbox (bool: false) - Allows templates access to arbitrary files on the client host via the file function. By default, templates can access files only within the task working directory.
max_stale (string: "87600h") - This is the maximum interval to allow "stale" data. If max_stale is set to 0, only the Consul leader will respond to queries, and requests that reach a follower will be forwarded to the leader. In large clusters with many requests, this is not as scalable. This option allows any follower to respond to a query, so long as the last-replicated data is within this bound. Higher values result in less cluster load, but are more likely to return outdated data. The default of 10 years (87600h) matches the default Consul configuration.
wait (map: { min = "5s" max = "4m" }) - Defines the minimum and maximum amount of time to wait before attempting to re-render a template. Consul Template re-renders templates whenever rendered variables from Consul, Nomad, or Vault change. However, in order to minimize how often tasks are restarted or reloaded, Nomad configures Consul Template with a backoff timer that ticks on an interval equal to the specified min value. Consul Template always waits at least as long as the min value specified. If the underlying data has not changed between two tick intervals, Consul Template will re-render. If the underlying data has changed, Consul Template will delay re-rendering until the underlying data stabilizes for at least one tick interval, or the configured max duration has elapsed. Once the max duration has elapsed, Consul Template will re-render the template with the data available at the time. This is useful to enable in systems where Consul is in a degraded state or the referenced data values are changing rapidly, because it reduces the number of times a template is rendered. Setting both min and max to 0 disables the feature. This configuration is also exposed in the task template block to allow overrides per task.
wait {
min = "5s"
max = "4m"
}
wait_bounds (map: nil) - Defines client-level lower and upper bounds for per-template wait configuration. If the individual template configuration has a min lower than wait_bounds.min or a max greater than wait_bounds.max, the bounds will be enforced, and the template wait will be adjusted before being sent to consul-template.
wait_bounds {
min = "5s"
max = "10s"
}
block_query_wait (string: "5m") - This is the amount of time to wait for the results of a blocking query. Many endpoints in Consul support a feature known as "blocking queries". A blocking query is used to wait for a potential change using long polling.
consul_retry (map: { attempts = 12 backoff = "250ms" max_backoff = "1m" }) - This controls the retry behavior when an error is returned from Consul. The template runner will not exit in the face of failure. Instead, it uses exponential back-off and retry functions to wait for the Consul cluster to become available, as is customary in distributed systems.
consul_retry {
# This specifies the number of attempts to make before giving up. Each
# attempt adds the exponential backoff sleep time. Setting this to
# zero will implement an unlimited number of retries.
attempts = 12
# This is the base amount of time to sleep between retry attempts. Each
# retry sleeps for an exponent of 2 longer than this base. For 5 retries,
# the sleep times would be: 250ms, 500ms, 1s, 2s, then 4s.
backoff = "250ms"
# This is the maximum amount of time to sleep between retry attempts.
# When max_backoff is set to zero, there is no upper limit to the
# exponential sleep between retry attempts.
# If max_backoff is set to 10s and backoff is set to 1s, sleep times
# would be: 1s, 2s, 4s, 8s, 10s, 10s, ...
max_backoff = "1m"
}
vault_retry (map: { attempts = 12 backoff = "250ms" max_backoff = "1m" }) - This controls the retry behavior when an error is returned from Vault. Consul Template is highly fault tolerant, meaning it does not exit in the face of failure. Instead, it uses exponential back-off and retry functions to wait for the cluster to become available, as is customary in distributed systems.
vault_retry {
# This specifies the number of attempts to make before giving up. Each
# attempt adds the exponential backoff sleep time. Setting this to
# zero will implement an unlimited number of retries.
attempts = 12
# This is the base amount of time to sleep between retry attempts. Each
# retry sleeps for an exponent of 2 longer than this base. For 5 retries,
# the sleep times would be: 250ms, 500ms, 1s, 2s, then 4s.
backoff = "250ms"
# This is the maximum amount of time to sleep between retry attempts.
# When max_backoff is set to zero, there is no upper limit to the
# exponential sleep between retry attempts.
# If max_backoff is set to 10s and backoff is set to 1s, sleep times
# would be: 1s, 2s, 4s, 8s, 10s, 10s, ...
max_backoff = "1m"
}
nomad_retry (map: { attempts = 12 backoff = "250ms" max_backoff = "1m" }) - This controls the retry behavior when an error is returned from Nomad. Consul Template is highly fault tolerant, meaning it does not exit in the face of failure. Instead, it uses exponential back-off and retry functions to wait for the cluster to become available, as is customary in distributed systems.
nomad_retry {
# This specifies the number of attempts to make before giving up. Each
# attempt adds the exponential backoff sleep time. Setting this to
# zero will implement an unlimited number of retries.
attempts = 12
# This is the base amount of time to sleep between retry attempts. Each
# retry sleeps for an exponent of 2 longer than this base. For 5 retries,
# the sleep times would be: 250ms, 500ms, 1s, 2s, then 4s.
backoff = "250ms"
# This is the maximum amount of time to sleep between retry attempts.
# When max_backoff is set to zero, there is no upper limit to the
# exponential sleep between retry attempts.
# If max_backoff is set to 10s and backoff is set to 1s, sleep times
# would be: 1s, 2s, 4s, 8s, 10s, 10s, ...
max_backoff = "1m"
}
host_volume Block
The host_volume block is used to make volumes available to jobs. You can also configure dynamic host volumes via the volume create or volume register commands.
The key of the block corresponds to the name of the volume for use in the source parameter of a "host" type volume and ACLs. A host volume in the configuration cannot have the same name as a dynamic host volume on the same node.
client {
host_volume "ca-certificates" {
path = "/etc/ssl/certs"
read_only = true
}
}
host_volume Parameters
path (string: "", required) - Specifies the path on the host that should be used as the source when this volume is mounted into a task. The path must exist on client startup.
read_only (bool: false) - Specifies whether the volume should only ever be allowed to be mounted read_only, or if it should be writeable.
host_network Block
The host_network block is used to register additional host networks with the node that can be used when port mapping.
The key of the block corresponds to the name of the network used in the host_network.
client {
host_network "public" {
cidr = "203.0.113.0/24"
reserved_ports = "22,80"
}
}
host_network Parameters
cidr (string: "") - Specifies a CIDR block of addresses to match against. If an address is found on the node that is contained by this CIDR block, the host network will be registered with it.
interface (string: "") - Filters searching of addresses to a specific interface.
reserved_ports (string: "") - Specifies a comma-separated list of ports to reserve on all addresses associated with this network. Ranges can be specified by using a hyphen separating the two inclusive ends. reserved.reserved_ports are also reserved on each host network.
drain_on_shutdown Block
The drain_on_shutdown block controls the behavior of the client when leave_on_interrupt or leave_on_terminate are set. By default drain_on_shutdown is not configured and clients will not drain on any signal.
If drain_on_shutdown is configured, the node will drain itself when receiving the appropriate signal (SIGINT for leave_on_interrupt or SIGTERM for leave_on_terminate). By default this acts similarly to running nomad node drain -self -no-deadline.
Note that even if no deadline is set, your init system may send SIGKILL to Nomad if the drain takes longer than allowed by the service shutdown. For example, when running under Linux with systemd, you should adjust the TimeoutStopSec value in the nomad.service unit file to allow enough time for the client to drain.
client {
# Either leave_on_interrupt or leave_on_terminate must be set
# for this to take effect.
drain_on_shutdown {
deadline = "1h"
force = false
ignore_system_jobs = false
}
}
deadline (string: "1h") - Sets the deadline by which all allocations must be moved off the client. Remaining allocations after the deadline are removed from the client, regardless of their migrate block. Defaults to 1 hour.
force (bool: false) - Setting to true drains all the allocations on the client immediately, ignoring the migrate block. Note that if you have multiple allocations for the same job on the draining client without additional allocations on other clients, this will result in an outage for that job until the drain is complete.
ignore_system_jobs (bool: false) - Setting to true allows the drain to complete without stopping system job allocations. By default system jobs (and CSI plugins) are stopped last.
users Block
The users block controls aspects of the Nomad client's use of operating system users.
client {
users {
dynamic_user_min = 80000
dynamic_user_max = 89999
}
}
dynamic_user_min (int: 80000) - The lowest UID/GID to allocate for task drivers capable of making use of dynamic workload users.
dynamic_user_max (int: 89999) - The highest UID/GID to allocate for task drivers capable of making use of dynamic workload users.
Common Setup
This example shows the most basic configuration for a Nomad client joined to a cluster.
client {
enabled = true
server_join {
retry_join = [ "1.1.1.1", "2.2.2.2" ]
retry_max = 3
retry_interval = "15s"
}
}
Reserved Resources
This example shows a sample configuration for reserving resources to the client. This is useful if you want to allocate only a portion of the client's resources to jobs.
client {
enabled = true
reserved {
cpu = 500
memory = 512
disk = 1024
reserved_ports = "22,80,8500-8600"
}
}
Custom Metadata and Node Class
This example shows a client configuration which customizes the metadata and node class. The scheduler can use this information while processing constraints. The metadata is completely user configurable; the values in the example are for illustrative purposes only.
client {
enabled = true
node_class = "prod"
meta {
owner = "ops"
cached_binaries = "redis,apache,nginx,jq,cypress,nodejs"
rack = "rack-12-1"
}
}