gcloud functions deploy (NAME : --region=REGION) [--[no-]allow-unauthenticated] [--concurrency=CONCURRENCY] [--docker-registry=DOCKER_REGISTRY] [--egress-settings=EGRESS_SETTINGS] [--entry-point=ENTRY_POINT] [--gen2] [--ignore-file=IGNORE_FILE] [--ingress-settings=INGRESS_SETTINGS] [--retry] [--run-service-account=RUN_SERVICE_ACCOUNT] [--runtime=RUNTIME] [--runtime-update-policy=RUNTIME_UPDATE_POLICY] [--security-level=SECURITY_LEVEL; default="secure-always"] [--serve-all-traffic-latest-revision] [--service-account=SERVICE_ACCOUNT] [--source=SOURCE] [--stage-bucket=STAGE_BUCKET] [--timeout=TIMEOUT] [--trigger-location=TRIGGER_LOCATION] [--trigger-service-account=TRIGGER_SERVICE_ACCOUNT] [--update-labels=[KEY=VALUE,…]] [--binary-authorization=BINARY_AUTHORIZATION | --clear-binary-authorization] [--build-env-vars-file=FILE_PATH | --clear-build-env-vars | --set-build-env-vars=[KEY=VALUE,…] | --remove-build-env-vars=[KEY,…] --update-build-env-vars=[KEY=VALUE,…]] [--build-service-account=BUILD_SERVICE_ACCOUNT | --clear-build-service-account] [--build-worker-pool=BUILD_WORKER_POOL | --clear-build-worker-pool] [--clear-docker-repository | --docker-repository=DOCKER_REPOSITORY] [--clear-env-vars | --env-vars-file=FILE_PATH | --set-env-vars=[KEY=VALUE,…] | --remove-env-vars=[KEY,…] --update-env-vars=[KEY=VALUE,…]] [--clear-kms-key | --kms-key=KMS_KEY] [--clear-labels | --remove-labels=[KEY,…]] [--clear-max-instances | --max-instances=MAX_INSTANCES] [--clear-min-instances | --min-instances=MIN_INSTANCES] [--clear-secrets | --set-secrets=[SECRET_ENV_VAR=SECRET_VALUE_REF,/secret_path=SECRET_VALUE_REF,/mount_path:/secret_file_path=SECRET_VALUE_REF,…] | --remove-secrets=[SECRET_ENV_VAR,/secret_path,/mount_path:/secret_file_path,…] --update-secrets=[SECRET_ENV_VAR=SECRET_VALUE_REF,/secret_path=SECRET_VALUE_REF,/mount_path:/secret_file_path=SECRET_VALUE_REF,…]] [--clear-vpc-connector | --vpc-connector=VPC_CONNECTOR] [--memory=MEMORY : --cpu=CPU] [--trigger-bucket=TRIGGER_BUCKET | --trigger-http | --trigger-topic=TRIGGER_TOPIC | --trigger-event=EVENT_TYPE --trigger-resource=RESOURCE | --trigger-event-filters=[ATTRIBUTE=VALUE,…] --trigger-event-filters-path-pattern=[ATTRIBUTE=PATH_PATTERN,…]] [GCLOUD_WIDE_FLAG …]
For example, to deploy a function triggered by write events on the Firestore document /messages/{pushId}, run:
gcloud functions deploy my_function --runtime=python37 --trigger-event=providers/cloud.firestore/eventTypes/document.write --trigger-resource=projects/project_id/databases/(default)/documents/messages/{pushId}
See https://cloud.google.com/functions/docs/calling for more details on using other types of resources as triggers.
NAME
To set the project attribute: provide the argument NAME on the command line with a fully specified name; provide the argument --project on the command line; or set the property core/project. This must be specified.
To set the function attribute: provide the argument NAME on the command line. This positional argument must be specified if any of the other arguments in this group are specified.
--region=REGION
Overrides the default functions/region property value for this command invocation.
To set the region attribute: provide the argument NAME on the command line with a fully specified name; provide the argument --region on the command line; or set the property functions/region.
--[no-]allow-unauthenticated
Use --allow-unauthenticated to enable and --no-allow-unauthenticated to disable.
--concurrency=CONCURRENCY
--docker-registry=DOCKER_REGISTRY
If not specified, artifact-registry is used by default.
With the general transition from Container Registry to Artifact Registry, the option to specify a docker registry is deprecated. All container image storage and management will automatically transition to Artifact Registry. For more information, see https://cloud.google.com/artifact-registry/docs/transition/transition-from-gcr
DOCKER_REGISTRY must be one of: artifact-registry, container-registry.
--egress-settings=EGRESS_SETTINGS
If not specified, private-ranges-only will be used. EGRESS_SETTINGS must be one of: private-ranges-only, all.
--entry-point=ENTRY_POINT
--gen2
If --no-gen2 is specified, Cloud Functions (First generation) will be used. If not specified, the value of this flag will be taken from the functions/gen2 configuration property. If the functions/gen2 configuration property is not set, defaults to looking up the given function and using its generation.
--ignore-file=IGNORE_FILE
Override the .gcloudignore file and use the specified file instead. For example, --ignore-file=.mygcloudignore combined with --source=./mydir would point to ./mydir/.mygcloudignore.
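For illustration, a minimal ignore file (hypothetical contents, following the .gitignore-style syntax described in gcloud topic gcloudignore) that keeps dependency directories and local configuration out of the uploaded source might look like:

```
# Exclude dependency and VCS directories from the deployed source.
node_modules/
.git/
# Exclude local environment files.
*.env
```

Such a file could then be selected with --ignore-file if it is not named .gcloudignore.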
--ingress-settings=INGRESS_SETTINGS
If not specified, all will be used. INGRESS_SETTINGS must be one of: all, internal-only, internal-and-gclb.
--retry
--run-service-account=RUN_SERVICE_ACCOUNT
If not provided, the function will use the project's default service account for Compute Engine.
--runtime=RUNTIME
Required when deploying a new function; optional when updating an existing function.
For a list of available runtimes, run gcloud functions runtimes list.
--runtime-update-policy=RUNTIME_UPDATE_POLICY
automatic is used by default. RUNTIME_UPDATE_POLICY must be one of: automatic, on-deploy.
--security-level=SECURITY_LEVEL; default="secure-always"
If not specified, secure-always will be used, meaning only HTTPS is supported. SECURITY_LEVEL must be one of: secure-always, secure-optional.
--serve-all-traffic-latest-revision
--service-account=SERVICE_ACCOUNT
If not provided, the function will use the project's default service account for Compute Engine.
--source=SOURCE
Location of the source can be one of the following three options:
- source code in Google Cloud Storage (must be a .zip archive),
- a reference to a source repository, or
- a local filesystem path.
Note that, depending on your runtime type, Cloud Functions will look for files with specific names for deployable functions. For Node.js, these filenames are index.js or function.js. For Python, this is main.py.
If you do not specify the --source flag:
- the current directory will be used for new function deployments;
- if the function was previously deployed using a local filesystem path, the function's source code will be updated using the currently deployed image.
The value of the flag will be interpreted as a Cloud Storage location, if it starts with gs://.
The value will be interpreted as a reference to a source repository, if it starts with https://.
Otherwise, it will be interpreted as the local filesystem path. When deploying source from the local filesystem, this command skips files specified in the .gcloudignore file (see gcloud topic gcloudignore for more information). If the .gcloudignore file doesn't exist, the command will try to create it.
The minimal source repository URL is: https://source.developers.google.com/projects/${PROJECT}/repos/${REPO}
By using the URL above, sources from the root directory of the repository on the revision tagged master will be used.
If you want to deploy from a revision different from master, append one of the following three sources to the URL:
- /revisions/${REVISION},
- /moveable-aliases/${MOVEABLE_ALIAS},
- /fixed-aliases/${FIXED_ALIAS}.
If you'd like to deploy sources from a directory different from the root, you must specify a revision, a moveable alias, or a fixed alias, as above, and append /paths/${PATH_TO_SOURCES_DIRECTORY} to the URL.
Overall, the URL should match the following regular expression:
^https://source\.developers\.google\.com/projects/(?<accountId>[^/]+)/repos/(?<repoName>[^/]+)(((/revisions/(?<commit>[^/]+))|(/moveable-aliases/(?<branch>[^/]+))|(/fixed-aliases/(?<tag>[^/]+)))(/paths/(?<path>.*))?)?$
An example of a validly formatted source repository URL is:
https://source.developers.google.com/projects/123456789/repos/testrepo/moveable-aliases/alternate-branch/paths/path-to=source
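The documented URL pattern can be checked mechanically. Below is a hedged Python sketch (not part of gcloud) that translates the regular expression into Python's (?P<name>...) named-group syntax and extracts the components:

```python
import re

# The documented source-repository URL pattern, with (?<name>...) groups
# rewritten as Python's (?P<name>...).
SOURCE_REPO_RE = re.compile(
    r"^https://source\.developers\.google\.com/projects/"
    r"(?P<accountId>[^/]+)/repos/(?P<repoName>[^/]+)"
    r"(((/revisions/(?P<commit>[^/]+))"
    r"|(/moveable-aliases/(?P<branch>[^/]+))"
    r"|(/fixed-aliases/(?P<tag>[^/]+)))"
    r"(/paths/(?P<path>.*))?)?$"
)

def parse_source_repo_url(url: str) -> dict:
    """Return the named components of a source repository URL, or raise ValueError."""
    m = SOURCE_REPO_RE.match(url)
    if not m:
        raise ValueError(f"not a valid source repository URL: {url}")
    return {k: v for k, v in m.groupdict().items() if v is not None}
```

For instance, the example URL above parses to a branch of "alternate-branch" and a path of "path-to=source".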
--stage-bucket=STAGE_BUCKET
Note that if you set the --stage-bucket flag when deploying a function, you will need to specify --source or --stage-bucket in subsequent deployments to update your source code. To use this flag successfully, the account in use must have permissions to write to this bucket. For help granting access, refer to this guide: https://cloud.google.com/storage/docs/access-control/
--timeout=TIMEOUT
For GCF 1st gen functions, cannot be more than 540s.
For GCF 2nd gen functions, cannot be more than 3600s.
See $ gcloud topic datetimes for information on duration formats.
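As a rough illustration of these limits, here is a hypothetical Python helper (handling only a simplified subset of the duration formats that gcloud topic datetimes describes) that converts a duration string to seconds and checks it against the generation-specific cap:

```python
import re

# Seconds per unit, for a simplified subset of gcloud duration formats.
_DUR_UNITS = {"s": 1, "m": 60, "h": 3600, "d": 86400}

# Documented caps: 540s for GCF 1st gen, 3600s for GCF 2nd gen.
_MAX_TIMEOUT = {1: 540, 2: 3600}

def duration_seconds(value: str) -> int:
    """Convert a '<number><unit>' duration (e.g. '540s', '1h') to seconds."""
    m = re.fullmatch(r"(\d+)([smhd])", value)
    if not m:
        raise ValueError(f"unrecognized duration: {value}")
    return int(m.group(1)) * _DUR_UNITS[m.group(2)]

def check_timeout(value: str, gen: int) -> int:
    """Return the timeout in seconds, or raise if it exceeds the documented cap."""
    seconds = duration_seconds(value)
    if seconds > _MAX_TIMEOUT[gen]:
        raise ValueError(f"timeout {value} exceeds the gen {gen} cap")
    return seconds
```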
--trigger-location=TRIGGER_LOCATION
--trigger-service-account=TRIGGER_SERVICE_ACCOUNT
If not provided, the function will use the project's default service account for Compute Engine.
--update-labels=[KEY=VALUE,…]
Keys must start with a lowercase character and contain only hyphens (-), underscores (_), lowercase characters, and numbers. Values must contain only hyphens (-), underscores (_), lowercase characters, and numbers.
Label keys starting with deployment are reserved for use by deployment tools and cannot be specified manually.
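The key and value rules above can be expressed as a small validation sketch in Python (a hypothetical helper, not part of gcloud):

```python
import re

# Keys: lowercase start, then only lowercase letters, digits, hyphens, underscores.
_KEY_RE = re.compile(r"^[a-z][a-z0-9_-]*$")
# Values: only lowercase letters, digits, hyphens, underscores.
_VALUE_RE = re.compile(r"^[a-z0-9_-]*$")

def validate_label(key: str, value: str) -> None:
    """Raise ValueError if the pair violates the documented label rules."""
    if not _KEY_RE.match(key):
        raise ValueError(f"invalid label key: {key!r}")
    if key.startswith("deployment"):
        raise ValueError(f"label key {key!r} uses a reserved prefix")
    if not _VALUE_RE.match(value):
        raise ValueError(f"invalid label value: {value!r}")
```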
--binary-authorization=BINARY_AUTHORIZATION
Example: default
The flag is only applicable to 2nd gen functions.
--clear-binary-authorization
--build-env-vars-file=FILE_PATH
--clear-build-env-vars
--set-build-env-vars=[KEY=VALUE,…]
--remove-build-env-vars=[KEY,…]
--update-build-env-vars=[KEY=VALUE,…]
--build-service-account=BUILD_SERVICE_ACCOUNT
If not provided, the function will use the project's default service account for Cloud Build.
--clear-build-service-account
--build-worker-pool=BUILD_WORKER_POOL
The worker pool name should match the format projects/${PROJECT}/locations/${LOCATION}/workerPools/${WORKERPOOL}, where ${PROJECT} is the project ID, ${LOCATION} is the location where the worker pool is defined, and ${WORKERPOOL} is the short name of the worker pool.
--clear-build-worker-pool
--clear-docker-repository
--docker-repository=DOCKER_REPOSITORY
DOCKER_REPOSITORY must be an Artifact Registry Docker repository present in the same project and location as the Cloud Function.
**Preview:** for 2nd gen functions, an Artifact Registry Docker repository in a different project and/or location may be used. Additional requirements apply; see https://cloud.google.com/functions/docs/building#image_registry
The repository name should match one of these patterns:
- projects/${PROJECT}/locations/${LOCATION}/repositories/${REPOSITORY},
- {LOCATION}-docker.pkg.dev/{PROJECT}/{REPOSITORY},
where ${PROJECT} is the project, ${LOCATION} is the location of the repository, and ${REPOSITORY} is a valid repository ID.
--clear-env-vars
--env-vars-file=FILE_PATH
--set-env-vars=[KEY=VALUE,…]
--remove-env-vars=[KEY,…]
--update-env-vars=[KEY=VALUE,…]
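For --env-vars-file (and the analogous --build-env-vars-file), the file is a YAML document mapping keys to string values. A minimal sketch with hypothetical keys:

```yaml
# env.yaml - each entry becomes one environment variable.
SERVICE_NAME: my-service
LOG_LEVEL: debug
```

This could then be passed as --env-vars-file=env.yaml.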
--clear-kms-key
--kms-key=KMS_KEY
The KMS crypto key name should match the pattern projects/${PROJECT}/locations/${LOCATION}/keyRings/${KEYRING}/cryptoKeys/${CRYPTOKEY}, where ${PROJECT} is the project, ${LOCATION} is the location of the key ring, and ${KEYRING} is the key ring that contains the ${CRYPTOKEY} crypto key.
If this flag is set, then a Docker repository created in Artifact Registry must be specified using the --docker-repository flag, and the repository must be encrypted using the same KMS key.
--clear-labels
If --update-labels is also specified then --clear-labels is applied first.
For example, to remove all labels:
gcloud functions deploy --clear-labels
To remove all existing labels and create two new labels, foo and baz:
gcloud functions deploy --clear-labels --update-labels foo=bar,baz=qux
--remove-labels=[KEY,…]
If --update-labels is also specified then --update-labels is applied first. Label keys starting with deployment are reserved for use by deployment tools and cannot be specified manually.
--clear-max-instances
If it's any 2nd gen function or a 1st gen HTTP function, this flag sets maximum instances to 0, which means there is no limit to maximum instances. If it's an event-driven 1st gen function, this flag sets maximum instances to 3000, which is the default value for 1st gen functions.
--max-instances=MAX_INSTANCES
--clear-min-instances
--min-instances=MIN_INSTANCES
--clear-secrets
--set-secrets=[SECRET_ENV_VAR=SECRET_VALUE_REF,/secret_path=SECRET_VALUE_REF,/mount_path:/secret_file_path=SECRET_VALUE_REF,…]
You can reference a secret value (referred to as SECRET_VALUE_REF in the help text) in the following ways:
- Use ${SECRET}:${VERSION} if you are referencing a secret in the same project, where ${SECRET} is the name of the secret in Secret Manager (not the full resource name) and ${VERSION} is the version of the secret, which is either a positive integer or the label latest. For example, use SECRET_FOO:1 to reference version 1 of the secret SECRET_FOO which exists in the same project as the function.
- Use projects/${PROJECT}/secrets/${SECRET}/versions/${VERSION} or projects/${PROJECT}/secrets/${SECRET}:${VERSION} to reference a secret version using the full resource name, where ${PROJECT} is either the project number (preferred) or the project ID of the project which contains the secret, ${SECRET} is the name of the secret in Secret Manager (not the full resource name), and ${VERSION} is the version of the secret, which is either a positive integer or the label latest. For example, use projects/1234567890/secrets/SECRET_FOO/versions/1 or projects/project_id/secrets/SECRET_FOO/versions/1 to reference version 1 of the secret SECRET_FOO that exists in the project 1234567890 or project_id respectively. This format is useful when the secret exists in a different project.
To configure the secret as an environment variable, use SECRET_ENV_VAR=SECRET_VALUE_REF. To use the value of the secret, read the environment variable SECRET_ENV_VAR as you would normally do in the function's programming language.
We recommend using a numeric version for secret environment variables, as any updates to the secret value are not reflected until new clones start.
To mount the secret within a volume, use /secret_path=SECRET_VALUE_REF or /mount_path:/secret_file_path=SECRET_VALUE_REF. To use the value of the secret, read the file at /secret_path as you would normally do in the function's programming language.
For example, /etc/secrets/secret_foo=SECRET_FOO:latest or /etc/secrets:/secret_foo=SECRET_FOO:latest will make the value of the latest version of the secret SECRET_FOO available in a file secret_foo under the directory /etc/secrets. /etc/secrets will be considered as the mount path and will not be available for any other volume.
We recommend referencing the latest version when using secret volumes so that the secret's value changes are reflected immediately.
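To make the --set-secrets syntax concrete, here is a hypothetical Python helper (not part of gcloud) that assembles the flag value from an environment-variable mapping and a mount-path mapping:

```python
def set_secrets_flag(env_secrets: dict, path_secrets: dict) -> str:
    """Build the value for --set-secrets.

    env_secrets:  {SECRET_ENV_VAR: SECRET_VALUE_REF} pairs.
    path_secrets: {"/secret_path" or "/mount_path:/secret_file_path": SECRET_VALUE_REF}.
    """
    pairs = [f"{k}={v}" for k, v in env_secrets.items()]
    pairs += [f"{p}={v}" for p, v in path_secrets.items()]
    return ",".join(pairs)
```

For example, set_secrets_flag({"API_KEY": "SECRET_FOO:1"}, {"/etc/secrets/secret_foo": "SECRET_FOO:latest"}) produces "API_KEY=SECRET_FOO:1,/etc/secrets/secret_foo=SECRET_FOO:latest", following the pair syntax described above.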
--update-secrets and --remove-secrets can be used together. If both are specified, then --remove-secrets will be applied first.
--remove-secrets=[SECRET_ENV_VAR,/secret_path,/mount_path:/secret_file_path,…]
Existing secrets configuration of secret environment variable names and secret paths not specified in this list will be preserved.
To remove a secret environment variable, use the name of the environment variable SECRET_ENV_VAR.
To remove a file within a secret volume or the volume itself, use the secret path as the key (either /secret_path or /mount_path:/secret_file_path).
--update-secrets=[SECRET_ENV_VAR=SECRET_VALUE_REF,/secret_path=SECRET_VALUE_REF,/mount_path:/secret_file_path=SECRET_VALUE_REF,…]
--clear-vpc-connector
--vpc-connector=VPC_CONNECTOR
The format is either projects/${PROJECT}/locations/${LOCATION}/connectors/${CONNECTOR} or ${CONNECTOR}, where ${CONNECTOR} is the short name of the VPC Access connector. This represents a Cloud resource. (NOTE) Some attributes are not given arguments in this group but can be set in other ways.
To set the project attribute: provide the argument --vpc-connector on the command line with a fully specified name; provide the argument --project on the command line; or set the property core/project.
To set the region attribute: provide the argument --vpc-connector on the command line with a fully specified name; provide the argument --region on the command line; or set the property functions/region.
To set the connector attribute: provide the argument --vpc-connector on the command line.
--memory=MEMORY
Allowed values for GCF 1st gen are: 128MB, 256MB, 512MB, 1024MB, 2048MB, 4096MB, and 8192MB.
Allowed values for GCF 2nd gen are in the format <number><unit>, with allowed units of "k", "M", "G", "Ki", "Mi", "Gi". A trailing 'b' or 'B' is allowed, but both are interpreted as bytes rather than bits.
Examples: 1000000K, 1000000Ki, 256Mb, 512M, 1024Mi, 2G, 4Gi.
By default, a new function is limited to 256MB of memory. When deploying an update to an existing function, the function keeps its old memory limit unless you specify this flag.
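The 2nd-gen <number><unit> format can be illustrated with a small Python sketch (a hypothetical helper; it accepts the unit letter in either case to cover the mixed-case examples shown above):

```python
import re

# Decimal (k/M/G) and binary (Ki/Mi/Gi) multipliers, keyed in lowercase.
_MEM_UNITS = {"k": 10**3, "m": 10**6, "g": 10**9,
              "ki": 2**10, "mi": 2**20, "gi": 2**30}

def parse_memory(value: str) -> int:
    """Parse a '<number><unit>' memory string (e.g. '512M', '1024Mi') into bytes.

    A trailing 'b' or 'B' is stripped; per the docs, both mean bytes.
    """
    m = re.fullmatch(r"(\d+)([kKmMgG][iI]?)[bB]?", value)
    if not m:
        raise ValueError(f"unrecognized memory value: {value}")
    number, unit = m.groups()
    return int(number) * _MEM_UNITS[unit.lower()]
```

So 256Mb and 512M are decimal megabytes, while 1024Mi is binary mebibytes.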
--cpu=CPU
Only valid when --memory=MEMORY is specified.
Examples: .5, 2, 2.0, 2000m.
By default, a new function's available CPUs is determined based on its memory value.
When deploying an update that includes memory changes to an existing function, the function's available CPUs will be recalculated based on the new memory unless this flag is specified. When deploying an update that does not include memory changes to an existing function, the function's "available CPUs" setting will keep its old value unless you use this flag to change the setting.
At most one of the following may be specified:
- --trigger-topic,
- --trigger-bucket,
- --trigger-http,
- --trigger-event AND --trigger-resource,
- --trigger-event-filters and optionally --trigger-event-filters-path-pattern.
--trigger-bucket=TRIGGER_BUCKET
--trigger-http
Function will be assigned an endpoint, which you can view by using the describe command. Any HTTP request (of a supported type) to the endpoint will trigger function execution. Supported HTTP request types are: POST, PUT, GET, DELETE, and OPTIONS.
--trigger-topic=TRIGGER_TOPIC
--trigger-event=EVENT_TYPE
For a list of acceptable values, call gcloud functions event-types list.
--trigger-resource=RESOURCE
Specifies which resource from --trigger-event is being observed. E.g. if --trigger-event is providers/cloud.storage/eventTypes/object.change, --trigger-resource must be a bucket name. For a list of expected resources, call gcloud functions event-types list.
--trigger-event-filters=[ATTRIBUTE=VALUE,…]
The filters must include the type attribute, as well as any other attributes that are expected for the chosen type.
--trigger-event-filters-path-pattern=[ATTRIBUTE=PATH_PATTERN,…]
The provided attribute/value pair will be used with the match-path-pattern operator to configure the trigger; see https://cloud.google.com/eventarc/docs/reference/rest/v1/projects.locations.triggers#eventfilter and https://cloud.google.com/eventarc/docs/path-patterns for more details on how to construct path patterns.
For example, to filter on events for Compute Engine VMs in a given zone: --trigger-event-filters-path-pattern=resourceName='/projects/*/zones/us-central1-a/instances/*'
These flags are available to all commands: --access-token-file, --account, --billing-project, --configuration, --flags-file, --flatten, --format, --help, --impersonate-service-account, --log-http, --project, --quiet, --trace-token, --user-output-enabled, --verbosity.
Run $ gcloud help for details.
These variants are also available:
gcloud alpha functions deploy
gcloud beta functions deploy