Showing content from https://docs.snowflake.com/en/developer-guide/python-connector/../../sql-reference/sql/create-task
CREATE TASK | Snowflake Documentation

CREATE TASK

Creates a new task in the current/specified schema or replaces an existing task.

This command supports the following variants:

  • CREATE TASK … CLONE: creates a new task with the same parameter values as an existing task.
  • CREATE OR ALTER TASK: creates a new task if it doesn’t already exist, or alters an existing task.

See also:

ALTER TASK , DROP TASK , SHOW TASKS , DESCRIBE TASK , CREATE OR ALTER <object>

Syntax
CREATE [ OR REPLACE ] TASK [ IF NOT EXISTS ] <name>
    [ WITH TAG ( <tag_name> = '<tag_value>' [ , <tag_name> = '<tag_value>' , ... ] ) ]
    [ WITH CONTACT ( <purpose> = <contact_name> [ , <purpose> = <contact_name> ... ] ) ]
    [ { WAREHOUSE = <string> }
      | { USER_TASK_MANAGED_INITIAL_WAREHOUSE_SIZE = <string> } ]
    [ SCHEDULE = { '<num> { HOURS | MINUTES | SECONDS }'
      | 'USING CRON <expr> <time_zone>' } ]
    [ CONFIG = <configuration_string> ]
    [ ALLOW_OVERLAPPING_EXECUTION = TRUE | FALSE ]
    [ <session_parameter> = <value>
      [ , <session_parameter> = <value> ... ] ]
    [ USER_TASK_TIMEOUT_MS = <num> ]
    [ SUSPEND_TASK_AFTER_NUM_FAILURES = <num> ]
    [ ERROR_INTEGRATION = <integration_name> ]
    [ SUCCESS_INTEGRATION = <integration_name> ]
    [ LOG_LEVEL = '<log_level>' ]
    [ COMMENT = '<string_literal>' ]
    [ FINALIZE = <string> ]
    [ TASK_AUTO_RETRY_ATTEMPTS = <num> ]
    [ USER_TASK_MINIMUM_TRIGGER_INTERVAL_IN_SECONDS = <num> ]
    [ TARGET_COMPLETION_INTERVAL = '<num> { HOURS | MINUTES | SECONDS }' ]
    [ SERVERLESS_TASK_MIN_STATEMENT_SIZE = '{ XSMALL | SMALL
      | MEDIUM | LARGE | XLARGE | XXLARGE }' ]
    [ SERVERLESS_TASK_MAX_STATEMENT_SIZE = '{ XSMALL | SMALL
      | MEDIUM | LARGE | XLARGE | XXLARGE }' ]
  [ AFTER <string> [ , <string> , ... ] ]
  [ WHEN <boolean_expr> ]
  AS
    <sql>


Variant syntax

CREATE OR ALTER TASK

Creates a new task if it doesn’t already exist, or transforms an existing task into the task defined in the statement. A CREATE OR ALTER TASK statement follows the syntax rules of a CREATE TASK statement and has the same limitations as an ALTER TASK statement.

Supported task alterations include changes to the properties listed in the variant syntax below (for example, the warehouse, schedule, configuration, predecessors, and task definition).

For more information, see CREATE OR ALTER TASK usage notes.

CREATE OR ALTER TASK <name>
    [ { WAREHOUSE = <string> }
      | { USER_TASK_MANAGED_INITIAL_WAREHOUSE_SIZE = <string> } ]
    [ SCHEDULE = { '<num> { HOURS | MINUTES | SECONDS }'
      | 'USING CRON <expr> <time_zone>' } ]
    [ CONFIG = <configuration_string> ]
    [ ALLOW_OVERLAPPING_EXECUTION = TRUE | FALSE ]
    [ USER_TASK_TIMEOUT_MS = <num> ]
    [ <session_parameter> = <value>
      [ , <session_parameter> = <value> ... ] ]
    [ SUSPEND_TASK_AFTER_NUM_FAILURES = <num> ]
    [ ERROR_INTEGRATION = <integration_name> ]
    [ SUCCESS_INTEGRATION = <integration_name> ]
    [ COMMENT = '<string_literal>' ]
    [ FINALIZE = <string> ]
    [ TASK_AUTO_RETRY_ATTEMPTS = <num> ]
  [ AFTER <string> [ , <string> , ... ] ]
  [ WHEN <boolean_expr> ]
  AS
    <sql>


CREATE TASK … CLONE

Creates a new task with the same parameter values:

CREATE [ OR REPLACE ] TASK <name> CLONE <source_task>
  [ ... ]


For more details, see CREATE <object> … CLONE.

Note

Cloning tasks using CREATE TASK <name> CLONE, or cloning a schema containing tasks, copies all underlying task properties unless explicitly overridden.

Required parameters
name

String that specifies the identifier for the task; must be unique for the schema in which the task is created.

In addition, the identifier must start with an alphabetic character and cannot contain spaces or special characters unless the entire identifier string is enclosed in double quotes, such as "My object". Identifiers enclosed in double quotes are also case-sensitive.

For more details, see Identifier requirements.
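As a quick sketch of the quoting rules above, an unquoted identifier is resolved case-insensitively, while a double-quoted identifier preserves its exact case and spacing (the task names here are illustrative):

```sql
-- Unquoted: stored in uppercase as MYTASK and matched case-insensitively.
CREATE TASK mytask
  SCHEDULE = '60 MINUTES'
  AS SELECT CURRENT_TIMESTAMP;

-- Quoted: stored exactly as "My task"; later references must use the
-- same quoting and case.
CREATE TASK "My task"
  SCHEDULE = '60 MINUTES'
  AS SELECT CURRENT_TIMESTAMP;
```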

sql

Any one of the following:

  • A single SQL statement
  • A call to a stored procedure
  • Procedural logic using Snowflake Scripting

The SQL code is executed when the task runs. Verify that the sql executes as expected before using it in a task.

Clone tasks in a task graph

For task graphs, you might also need to make clones of each dependent task (that is, each child task or finalizer task); for example:

  1. Clone the task (for example, CREATE TASK new_task_name CLONE old_task_name).

  2. Find dependent tasks by using the TASK_DEPENDENTS function; for example:

    SELECT * FROM TABLE(INFORMATION_SCHEMA.TASK_DEPENDENTS('old_task_name'));
    


  3. Clone the dependent tasks (for example, CREATE TASK new_child_task CLONE old_child_task).

  4. Update the new dependent tasks to use the new cloned task name (ALTER TASK new_child_task ADD AFTER new_task_name).
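Put together, the cloning steps above might look like the following (all task names are illustrative):

```sql
-- 1. Clone the root task.
CREATE TASK new_task_name CLONE old_task_name;

-- 2. Find the dependent (child and finalizer) tasks of the original root.
SELECT * FROM TABLE(INFORMATION_SCHEMA.TASK_DEPENDENTS('old_task_name'));

-- 3. Clone each dependent task returned by the query above.
CREATE TASK new_child_task CLONE old_child_task;

-- 4. Point each cloned child at the cloned root.
ALTER TASK new_child_task ADD AFTER new_task_name;
```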

Optional parameters

CREATE OR REPLACE TASK or CREATE TASK IF NOT EXISTS

WAREHOUSE = string or USER_TASK_MANAGED_INITIAL_WAREHOUSE_SIZE = string

WAREHOUSE = string

Specifies the virtual warehouse that provides compute resources for task runs.

Omit this parameter to use serverless compute resources for runs of this task. Snowflake automatically resizes and scales serverless compute resources as required for each workload. When a schedule is specified for a task, Snowflake adjusts the resource size to complete future runs of the task within the specified time frame. To specify the initial warehouse size for the task, set the USER_TASK_MANAGED_INITIAL_WAREHOUSE_SIZE = string parameter.

USER_TASK_MANAGED_INITIAL_WAREHOUSE_SIZE = string

Applied only to serverless tasks.

Specifies the size of the compute resources to provision for the first run of the task, before a task history is available for Snowflake to determine an ideal size. Once a task has successfully completed a few runs, Snowflake ignores this parameter setting.

Note that if the task history is unavailable for a given task, the compute resources revert to this initial size.

Note

If a WAREHOUSE = string parameter value is specified, then setting this parameter produces a user error.

The size is equivalent to the compute resources available when creating a warehouse (using CREATE WAREHOUSE), such as SMALL, MEDIUM, or LARGE. The largest size supported by the parameter is XXLARGE. If the parameter is omitted, the first runs of the task are executed using a medium-sized (MEDIUM) warehouse.

You can change the initial size (using ALTER TASK) after the task is created but before it has run successfully once. Changing the parameter after the first run of this task starts has no effect on the compute resources for current or future task runs.

Note that suspending and resuming a task doesn’t remove the task history used to size the compute resources. The task history is only removed if the task is recreated (using the CREATE OR REPLACE TASK syntax).

For more information about this parameter, see USER_TASK_MANAGED_INITIAL_WAREHOUSE_SIZE.

SCHEDULE = ...

Specifies the schedule for periodically running the task:


  • 'num { HOURS | MINUTES | SECONDS }'

    Specifies an interval of wait time between runs of the task.

    Snowflake sets the base interval time when the task is resumed (ALTER TASK … RESUME) or when a different interval is set (ALTER TASK … SET SCHEDULE).

    For example, if an INTERVAL value of 10 MINUTES is set and the task is enabled at 9:03 a.m., then the task runs at 9:13 a.m., 9:23 a.m., and so on.

    Snowflake ensures that a task won’t run before the set interval; however, Snowflake can’t guarantee task runs at precisely the specified interval.

    Values: { 10 - 691200 } SECONDS, { 1 - 11520 } MINUTES, or { 1 - 192 } HOURS (that is, from 10 seconds to the equivalent of 8 days). Accepts positive integers only.

    Also supports the notations: HOUR, MINUTE, SECOND, and H, M, S.
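For instance, given the notations above, the long and short interval forms are interchangeable (the task name is illustrative):

```sql
-- Interval schedule using the long notation.
CREATE TASK hourly_task
  SCHEDULE = '60 MINUTES'
  AS SELECT CURRENT_TIMESTAMP;

-- The same interval using the short notation.
CREATE OR REPLACE TASK hourly_task
  SCHEDULE = '60 M'
  AS SELECT CURRENT_TIMESTAMP;
```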

CONFIG = configuration_string

Specifies a string representation of key value pairs that can be accessed by all tasks in the task graph. Must be in JSON format. For more information about getting the configuration string for the task that is currently running, see SYSTEM$GET_TASK_GRAPH_CONFIG.

Note

This parameter can only be set on a root task. The setting applies to all tasks in the task graph.

The parameter can be set on standalone tasks but doesn’t affect the task behavior. Snowflake ensures only one instance of a standalone task is running at a given time.

ALLOW_OVERLAPPING_EXECUTION = TRUE | FALSE

Specifies whether to allow multiple instances of the task graph to run concurrently.

Note

This parameter can only be set on a root task. The setting applies to all tasks in the task graph.

The parameter can be set on standalone tasks but doesn’t affect the task behavior. Snowflake ensures only one instance of a standalone task is running at a given time.

Default: FALSE

session_parameter = value [ , session_parameter = value ... ]

Specifies a comma-separated list of session parameters to set for the session when the task runs. A task supports all session parameters. For the complete list, see Session parameters.

USER_TASK_TIMEOUT_MS = num

Specifies the time limit on a single run of the task before it times out (in milliseconds).

Note

For more information about this parameter, see USER_TASK_TIMEOUT_MS.

Values: 0 - 604800000 (7 days). A value of 0 specifies that the maximum timeout value is enforced.

Default: 3600000 (1 hour)
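As a sketch, a task whose body should never run long can set a much tighter limit than the default 1 hour (the task, warehouse, and table names are illustrative):

```sql
-- Cancel any single run that exceeds 5 minutes (300,000 ms).
CREATE TASK short_task
  WAREHOUSE = mywh
  SCHEDULE = '10 MINUTES'
  USER_TASK_TIMEOUT_MS = 300000
  AS
    INSERT INTO mytable(ts) VALUES(CURRENT_TIMESTAMP);
```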

SUSPEND_TASK_AFTER_NUM_FAILURES = num

Specifies the number of consecutive failed task runs after which the current task is suspended automatically. Failed task runs include runs in which the SQL code in the task body either produces a user error or times out. Task runs that are skipped, canceled, or that fail due to a system error are considered indeterminate and aren’t included in the count of failed task runs.

Set the parameter on a standalone task or the root task in a task graph. When the parameter is set to a value greater than 0, the following behavior applies to runs of the standalone task or task graph:

When the parameter is set to 0, failed tasks aren’t automatically suspended.

The setting applies to tasks that rely on either serverless compute resources or virtual warehouse compute resources.

For more information about this parameter, see SUSPEND_TASK_AFTER_NUM_FAILURES.

Values: 0 - No upper limit.

Default: 10
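For example, a root task can be suspended automatically after three consecutive failed runs instead of the default ten (the task name and called procedure are illustrative):

```sql
-- Suspend this task (and its task graph) after 3 consecutive failures.
CREATE TASK root_task
  WAREHOUSE = mywh
  SCHEDULE = '60 MINUTES'
  SUSPEND_TASK_AFTER_NUM_FAILURES = 3
  AS
    CALL my_procedure();  -- hypothetical stored procedure
```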

ERROR_INTEGRATION = 'integration_name'

Required only when configuring a task to send error notifications using Amazon Simple Notification Service (SNS), Microsoft Azure Event Grid, or Google Pub/Sub.

Specifies the name of the notification integration used to communicate with Amazon SNS, MS Azure Event Grid, or Google Pub/Sub. For more information, see Enabling notifications for tasks.

SUCCESS_INTEGRATION = 'integration_name'

Required only when configuring a task to send success notifications using Amazon Simple Notification Service (SNS), Microsoft Azure Event Grid, or Google Pub/Sub.

Specifies the name of the notification integration used to communicate with Amazon SNS, MS Azure Event Grid, or Google Pub/Sub. For more information, see Enabling notifications for tasks.
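As a minimal sketch, both parameters point at a notification integration that is assumed to exist already (the integration, task, and table names are illustrative):

```sql
-- Assumes my_notification_int was created with CREATE NOTIFICATION INTEGRATION.
CREATE TASK notifying_task
  WAREHOUSE = mywh
  SCHEDULE = '60 MINUTES'
  ERROR_INTEGRATION = my_notification_int
  SUCCESS_INTEGRATION = my_notification_int
  AS
    INSERT INTO mytable(ts) VALUES(CURRENT_TIMESTAMP);
```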

LOG_LEVEL = 'log_level'

Specifies the severity level of events for this task that are ingested and made available in the active event table. Events at the specified level (and at more severe levels) are ingested.

For more information about levels, see LOG_LEVEL. For information about setting the log level, see Setting levels for logging, metrics, and tracing.

COMMENT = 'string_literal'

Specifies a comment for the task.

Default: No value

AFTER string [ , string , ... ]

Specifies one or more predecessor tasks for the current task. Use this option to create a task graph or add this task to an existing task graph. A task graph is a series of tasks that starts with a scheduled root task and is linked together by dependencies.

Note that the structure of a task graph can be defined after all of its component tasks are created. Execute ALTER TASK … ADD AFTER statements to specify the predecessors for each task in the planned task graph.

A task runs after all of its predecessor tasks have finished their own runs successfully (after a brief lag).


WHEN boolean_expr

Specifies a Boolean SQL expression; multiple conditions joined with AND/OR are supported. When a task is triggered (based on its SCHEDULE or AFTER setting), it validates the conditions of the expression to determine whether to execute. If the conditions of the expression are not met, then the task skips the current run. Any tasks that identify this task as a predecessor also don’t run.

The following are supported in a task WHEN clause:

Validating the conditions of the WHEN expression does not require compute resources. The validation is instead processed in the cloud services layer. A nominal charge accrues each time a task evaluates its WHEN condition and doesn’t run. The charges accumulate each time the task is triggered until it runs. At that time, the charge is converted to Snowflake credits and added to the compute resource usage for the task run.

Generally the compute time to validate the condition is insignificant compared to task execution time. As a best practice, align scheduled and actual task runs as closely as possible. Avoid task schedules that don’t align with task runs. For example, if data is inserted into a table with a stream roughly every 24 hours, don’t schedule a task that checks for stream data every minute. The charge to validate the WHEN expression with each run is generally insignificant, but the charges are cumulative.

Note that daily consumption of cloud services that falls below the 10% quota of the daily usage of the compute resources accumulates no cloud services charges.

TAG ( tag_name = 'tag_value' [ , tag_name = 'tag_value' , ... ] )

Specifies the tag name and the tag string value.

The tag value is always a string, and the maximum number of characters for the tag value is 256.

For information about specifying tags in a statement, see Tag quota for objects.

This parameter is not supported by the CREATE OR ALTER variant syntax.
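A minimal sketch of tagging a task at creation time, assuming the tag has already been defined (the tag, value, and task names are illustrative):

```sql
-- Assumes the tag cost_center was created with CREATE TAG.
CREATE TASK tagged_task
  WITH TAG (cost_center = 'finance')
  WAREHOUSE = mywh
  SCHEDULE = '60 MINUTES'
  AS
    SELECT CURRENT_TIMESTAMP;
```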

WITH CONTACT ( purpose = contact [ , purpose = contact ...] )

Associate the new object with one or more contacts.

FINALIZE = string

Specifies the name of a root task that the finalizer task is associated with. Finalizer tasks run after all other tasks in the task graph run to completion. You can define the SQL of a finalizer task to handle notifications and the release and cleanup of resources that a task graph uses. For more information, see Finalizer task.

Default: No value

TASK_AUTO_RETRY_ATTEMPTS = num

Specifies the number of automatic task graph retry attempts. If any task graphs complete in a FAILED state, Snowflake can automatically retry the task graphs from the last task in the graph that failed.

The automatic task graph retry is disabled by default. To enable this feature, set TASK_AUTO_RETRY_ATTEMPTS to a value greater than 0 on the root task of a task graph.

Note that this parameter must be set on the root task of a task graph. If it’s set on a child task, an error is returned.

Values: 0 - 30.

Default: 0
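As a sketch, enabling up to two automatic retries on the root task of a graph (the task name and called procedure are illustrative):

```sql
-- Retry the task graph from its point of failure, at most twice.
CREATE TASK root_task
  WAREHOUSE = mywh
  SCHEDULE = '60 MINUTES'
  TASK_AUTO_RETRY_ATTEMPTS = 2
  AS
    CALL my_root_procedure();  -- hypothetical stored procedure
```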

USER_TASK_MINIMUM_TRIGGER_INTERVAL_IN_SECONDS = num

Defines the minimum interval, in seconds, between runs of the task. If data changes occur more often than the specified minimum, the changes are grouped and processed together.

Even if this value is set to more than 12 hours, the task still runs at least once every 12 hours.

Values: Minimum 10, maximum 604800.

Default: 30

TARGET_COMPLETION_INTERVAL = 'num { HOURS | MINUTES | SECONDS }'

Specifies the desired task completion time. This parameter applies only to serverless tasks and can be set only at the task level.

This parameter is required when you create serverless triggered tasks.

Values: { 10 - 86400 } SECONDS, { 1 - 1440 } MINUTES, or { 1 - 24 } HOURS (that is, from 10 seconds to the equivalent of 1 day). Accepts positive integers only.

Also supports the notations: HOUR, MINUTE, SECOND, and H, M, S.

Default: Snowflake resizes serverless compute resources to complete before the next scheduled execution time.
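A sketch of a serverless triggered task that sets a completion target; the stream and table names are illustrative and assumed to exist:

```sql
-- Serverless triggered task: no WAREHOUSE, no SCHEDULE.
CREATE TASK triggered_serverless_task
  TARGET_COMPLETION_INTERVAL = '5 MINUTES'
  WHEN SYSTEM$STREAM_HAS_DATA('my_stream')
  AS
    INSERT INTO my_table SELECT * FROM my_stream;
```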

SERVERLESS_TASK_MIN_STATEMENT_SIZE = string

Specifies the minimum allowed warehouse size for the serverless task. This parameter only applies to serverless tasks. This parameter can be specified on the Task, Schema, Database, or Account. Precedence follows the standard parameter hierarchy.

Values: Minimum XSMALL, Maximum XXLARGE. Values are consistent with WAREHOUSE_SIZE values.

Also supports the notation: X2LARGE.

Default: XSMALL

Note that if both SERVERLESS_TASK_MIN_STATEMENT_SIZE and USER_TASK_MANAGED_INITIAL_WAREHOUSE_SIZE are specified, SERVERLESS_TASK_MIN_STATEMENT_SIZE must be equal to or smaller than USER_TASK_MANAGED_INITIAL_WAREHOUSE_SIZE.

SERVERLESS_TASK_MAX_STATEMENT_SIZE = string

Specifies the maximum allowed warehouse size for the serverless task. This parameter only applies to serverless tasks. This parameter can be specified on the Task, Schema, Database, or Account. Precedence follows the standard parameter hierarchy.

Values: Minimum XSMALL, Maximum XXLARGE.

Also supports the notation: X2LARGE.

Default: XXLARGE

Note that if both SERVERLESS_TASK_MIN_STATEMENT_SIZE and SERVERLESS_TASK_MAX_STATEMENT_SIZE are specified, SERVERLESS_TASK_MIN_STATEMENT_SIZE must be less than or equal to SERVERLESS_TASK_MAX_STATEMENT_SIZE. SERVERLESS_TASK_MAX_STATEMENT_SIZE must be equal to or greater than USER_TASK_MANAGED_INITIAL_WAREHOUSE_SIZE.
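For example, the two bounds can be combined to keep a serverless task between SMALL and LARGE compute sizes (the task and table names are illustrative):

```sql
-- Serverless task constrained to sizes between SMALL and LARGE.
CREATE TASK bounded_serverless_task
  SCHEDULE = '60 MINUTES'
  SERVERLESS_TASK_MIN_STATEMENT_SIZE = 'SMALL'
  SERVERLESS_TASK_MAX_STATEMENT_SIZE = 'LARGE'
  AS
    INSERT INTO mytable(ts) VALUES(CURRENT_TIMESTAMP);
```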

Access control requirements

A role used to execute this operation must have the following privileges at a minimum:

  • EXECUTE TASK (Account): Required to run any tasks the role owns. Revoking the EXECUTE TASK privilege on a role prevents all subsequent task runs from starting under that role.

  • EXECUTE MANAGED TASK (Account): Required only for tasks that rely on serverless compute resources for runs.

  • CREATE TASK (Schema)

  • USAGE (Warehouse): Required only for tasks that rely on user-managed warehouses for runs.

  • OWNERSHIP (Task): Required only when executing a CREATE OR ALTER TASK statement for an existing task.

OWNERSHIP is a special privilege on an object that is automatically granted to the role that created the object, but can also be transferred using the GRANT OWNERSHIP command to a different role by the owning role (or any role with the MANAGE GRANTS privilege).

The USAGE privilege on the parent database and schema are required to perform operations on any object in a schema.

For instructions on creating a custom role with a specified set of privileges, see Creating custom roles.

For general information about roles and privilege grants for performing SQL actions on securable objects, see Overview of Access Control.

Usage notes

CREATE OR ALTER TASK usage notes

Examples

Single SQL statement

Create a serverless task that queries the current timestamp every hour starting at 9:00 a.m. and ending at 5:00 p.m. on Sundays (America/Los_Angeles time zone).

The initial warehouse size is XSMALL:

CREATE TASK t1
  SCHEDULE = 'USING CRON 0 9-17 * * SUN America/Los_Angeles'
  USER_TASK_MANAGED_INITIAL_WAREHOUSE_SIZE = 'XSMALL'
  AS
    SELECT CURRENT_TIMESTAMP;


Same as the previous example, but the task relies on a user-managed warehouse to provide the compute resources for runs:

CREATE TASK mytask_hour
  WAREHOUSE = mywh
  SCHEDULE = 'USING CRON 0 9-17 * * SUN America/Los_Angeles'
  AS
    SELECT CURRENT_TIMESTAMP;


Create a serverless task that inserts the current timestamp into a table every hour. The task sets the TIMESTAMP_INPUT_FORMAT parameter for the session in which the task runs. This session parameter specifies the format of the inserted timestamp:

CREATE TASK t1
  SCHEDULE = '60 MINUTES'
  TIMESTAMP_INPUT_FORMAT = 'YYYY-MM-DD HH24'
  USER_TASK_MANAGED_INITIAL_WAREHOUSE_SIZE = 'XSMALL'
  AS
    INSERT INTO mytable(ts) VALUES(CURRENT_TIMESTAMP);


Create a task that inserts the current timestamp into a table every 5 minutes:

CREATE TASK mytask_minute
  WAREHOUSE = mywh
  SCHEDULE = '5 MINUTES'
  AS
    INSERT INTO mytable(ts) VALUES(CURRENT_TIMESTAMP);


Create a task that inserts change tracking data for INSERT operations from a stream into a table every 5 minutes. The task polls the stream using the SYSTEM$STREAM_HAS_DATA function to determine whether change data exists and, if the result is FALSE, skips the current run:

CREATE TASK mytask1
  WAREHOUSE = mywh
  SCHEDULE = '5 MINUTES'
  WHEN
    SYSTEM$STREAM_HAS_DATA('MYSTREAM')
  AS
    INSERT INTO mytable1(id,name) SELECT id, name FROM mystream WHERE METADATA$ACTION = 'INSERT';


Create a serverless child task in a task graph and add multiple predecessor tasks. The child task runs only after all specified predecessor tasks have successfully completed their own runs.

Suppose that the root task for a task graph is task1 and that task2, task3, and task4 are child tasks of task1. This example adds child task task5 to the task graph and specifies task2, task3, and task4 as predecessor tasks:

-- Create task5 and specify task2, task3, task4 as predecessor tasks.
-- The new task is a serverless task that inserts the current timestamp into a table column.
CREATE TASK task5
  AFTER task2, task3, task4
AS
  INSERT INTO t1(ts) VALUES(CURRENT_TIMESTAMP);


Stored procedure

Create a task named my_copy_task that calls a stored procedure to unload data from the mytable table to the named mystage stage (using COPY INTO <location>) every hour:

-- Create a stored procedure that unloads data from a table
-- The COPY statement in the stored procedure unloads data to files in a path identified by epoch time (using the Date.now() method)
CREATE OR REPLACE PROCEDURE my_unload_sp()
  returns string not null
  language javascript
  AS
    $$
      var my_sql_command = "";
      my_sql_command = my_sql_command.concat("copy into @mystage","/",Date.now(),"/"," from mytable overwrite=true;");
      var statement1 = snowflake.createStatement( {sqlText: my_sql_command} );
      var result_set1 = statement1.execute();
    return my_sql_command; // Statement returned for info/debug purposes
    $$;

-- Create a task that calls the stored procedure every hour
CREATE TASK my_copy_task
  WAREHOUSE = mywh
  SCHEDULE = '60 MINUTES'
  AS
    CALL my_unload_sp();


Multiple SQL statements using SnowSQL

Create a task that executes multiple SQL statements. In this example, the task modifies the TIMESTAMP_OUTPUT_FORMAT for the session and then queries the CURRENT_TIMESTAMP function.

Note

The SQL code in the task definition includes multiple statements. To execute the CREATE TASK statement, you must temporarily set a character other than a semicolon as the delimiter for SQL statements; otherwise, the CREATE TASK statement would return a user error. The command to change the SQL delimiter in SnowSQL is !set sql_delimiter = '<character>'.

!set sql_delimiter=/
CREATE OR REPLACE TASK test_logging
  USER_TASK_MANAGED_INITIAL_WAREHOUSE_SIZE = 'XSMALL'
  SCHEDULE = 'USING CRON 0 * * * * America/Los_Angeles'
  AS
    BEGIN
      ALTER SESSION SET TIMESTAMP_OUTPUT_FORMAT = 'YYYY-MM-DD HH24:MI:SS.FF';
      SELECT CURRENT_TIMESTAMP;
    END;/
!set sql_delimiter=';'


Procedural logic using Snowflake Scripting

Create a task that declares a variable, uses the variable, and returns the value of the variable every 15 seconds:

CREATE TASK t1
  USER_TASK_MANAGED_INITIAL_WAREHOUSE_SIZE = 'XSMALL'
  SCHEDULE = '15 SECONDS'
  AS
    EXECUTE IMMEDIATE
    $$
    DECLARE
      radius_of_circle float;
      area_of_circle float;
    BEGIN
      radius_of_circle := 3;
      area_of_circle := pi() * radius_of_circle * radius_of_circle;
      return area_of_circle;
    END;
    $$;


Root task with configuration

Create a task that specifies configuration, and then reads that configuration.

CREATE OR REPLACE TASK root_task_with_config
  WAREHOUSE=mywarehouse
  SCHEDULE='10 m'
  CONFIG=$${"output_dir": "/temp/test_directory/", "learning_rate": 0.1}$$
  AS
    BEGIN
      LET OUTPUT_DIR STRING := SYSTEM$GET_TASK_GRAPH_CONFIG('output_dir')::string;
      LET LEARNING_RATE DECIMAL := SYSTEM$GET_TASK_GRAPH_CONFIG('learning_rate')::DECIMAL;
    ...
    END;


Finalizer task

Create a finalizer task, associated with the root task of a task graph, that sends an email alert after task completion. For more information about finalizer tasks, see Finalizer task.

CREATE TASK finalize_task
  WAREHOUSE = my_warehouse
  FINALIZE = my_root_task
  AS
    CALL SYSTEM$SEND_EMAIL(
      'my_email_int',
      'first.last@example.com, first2.last2@example.com',
      'Email Alert: Task A has finished.',
      'Task A has successfully finished.\nStart Time: 10:10:32\nEnd Time: 12:15:45\nTotal Records Processed: 115678'
    );


Triggered task

Create a triggered task, associated with a stream, that inserts data from the specified stream into the table every time there is new data in the stream. For more information, see Triggered tasks.

CREATE TASK triggeredTask
  WAREHOUSE = my_warehouse
  WHEN system$stream_has_data('my_stream')
  AS
    INSERT INTO my_downstream_table
    SELECT * FROM my_stream;

ALTER TASK triggeredTask RESUME;


Create and alter a simple task using the CREATE OR ALTER TASK command

Create a task my_task to execute every hour in warehouse my_warehouse:

CREATE OR ALTER TASK my_task
  WAREHOUSE = my_warehouse
  SCHEDULE = '60 MINUTES'
  AS
    SELECT PI();


Alter task my_task to execute after task my_other_task and update the task definition:

CREATE OR ALTER TASK my_task
  WAREHOUSE = regress
  AFTER my_other_task
  AS
    SELECT 2 * PI();


