
pgx package - github.com/jackc/pgx/v5

Package pgx is a PostgreSQL database driver.

pgx provides a native PostgreSQL driver and can act as a database/sql driver. The native PostgreSQL interface is similar to the database/sql interface while providing better speed and access to PostgreSQL specific features. Use github.com/jackc/pgx/v5/stdlib to use pgx as a database/sql compatible driver. See that package's documentation for details.
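
For example, a minimal sketch of opening a database/sql connection backed by pgx (assuming a DATABASE_URL environment variable; the stdlib package registers the "pgx" driver name as a side effect of being imported):

import _ "github.com/jackc/pgx/v5/stdlib"

db, err := sql.Open("pgx", os.Getenv("DATABASE_URL"))
if err != nil {
    return err
}
defer db.Close()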

Establishing a Connection

The primary way of establishing a connection is with pgx.Connect:

conn, err := pgx.Connect(context.Background(), os.Getenv("DATABASE_URL"))

The database connection string can be in URL or key/value format. Both PostgreSQL settings and pgx settings can be specified here. In addition, a config struct can be created by ParseConfig and modified before establishing the connection with ConnectConfig to configure settings such as tracing that cannot be configured with a connection string.
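
For example, a sketch of customizing a connection with ParseConfig and ConnectConfig (myTracer is a hypothetical QueryTracer implementation, not part of pgx):

config, err := pgx.ParseConfig(os.Getenv("DATABASE_URL"))
if err != nil {
    return err
}
config.Tracer = myTracer // hypothetical QueryTracer implementation

conn, err := pgx.ConnectConfig(context.Background(), config)
if err != nil {
    return err
}
defer conn.Close(context.Background())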

Connection Pool

*pgx.Conn represents a single connection to the database and is not concurrency safe. Use package github.com/jackc/pgx/v5/pgxpool for a concurrency safe connection pool.
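
A minimal sketch of using pgxpool instead of a single connection (pgxpool.New accepts the same connection string formats as pgx.Connect):

pool, err := pgxpool.New(context.Background(), os.Getenv("DATABASE_URL"))
if err != nil {
    return err
}
defer pool.Close()

// The pool provides Query, QueryRow, Exec, Begin, etc. and is safe for
// concurrent use by multiple goroutines.
var greeting string
err = pool.QueryRow(context.Background(), "select 'hello'").Scan(&greeting)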

Query Interface

pgx implements Query in the familiar database/sql style. However, pgx provides generic functions such as CollectRows and ForEachRow that are a simpler and safer way of processing rows than manually calling defer rows.Close(), rows.Next(), rows.Scan, and rows.Err().

CollectRows can be used to collect all returned rows into a slice.

rows, _ := conn.Query(context.Background(), "select generate_series(1,$1)", 5)
numbers, err := pgx.CollectRows(rows, pgx.RowTo[int32])
if err != nil {
  return err
}
// numbers => [1 2 3 4 5]

ForEachRow can be used to execute a callback function for every row. This is often easier than iterating over rows directly.

var sum, n int32
rows, _ := conn.Query(context.Background(), "select generate_series(1,$1)", 10)
_, err := pgx.ForEachRow(rows, []any{&n}, func() error {
  sum += n
  return nil
})
if err != nil {
  return err
}

pgx also implements QueryRow in the same style as database/sql.

var name string
var weight int64
err := conn.QueryRow(context.Background(), "select name, weight from widgets where id=$1", 42).Scan(&name, &weight)
if err != nil {
    return err
}

Use Exec to execute a query that does not return a result set.

commandTag, err := conn.Exec(context.Background(), "delete from widgets where id=$1", 42)
if err != nil {
    return err
}
if commandTag.RowsAffected() != 1 {
    return errors.New("No row found to delete")
}
PostgreSQL Data Types

pgx uses the pgtype package to convert Go values to and from PostgreSQL values. It supports many PostgreSQL types directly and is customizable and extendable. User defined data types such as enums, domains, and composite types may require type registration. See that package's documentation for details.

Transactions

Transactions are started by calling Begin.

tx, err := conn.Begin(context.Background())
if err != nil {
    return err
}
// Rollback is safe to call even if the tx is already closed, so if
// the tx commits successfully, this is a no-op
defer tx.Rollback(context.Background())

_, err = tx.Exec(context.Background(), "insert into foo(id) values (1)")
if err != nil {
    return err
}

err = tx.Commit(context.Background())
if err != nil {
    return err
}

The Tx returned from Begin also implements the Begin method. This can be used to implement pseudo nested transactions. These are internally implemented with savepoints.

Use BeginTx to control the transaction mode. BeginTx also can be used to ensure a new transaction is created instead of a pseudo nested transaction.
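
For example, a sketch of starting a read only, serializable transaction with BeginTx:

tx, err := conn.BeginTx(context.Background(), pgx.TxOptions{
    IsoLevel:   pgx.Serializable,
    AccessMode: pgx.ReadOnly,
})
if err != nil {
    return err
}
defer tx.Rollback(context.Background())
// ... run queries with tx, then tx.Commit(context.Background())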

BeginFunc and BeginTxFunc are functions that begin a transaction, execute a function, and commit or rollback the transaction depending on the return value of the function. These can be simpler and less error prone to use.

err = pgx.BeginFunc(context.Background(), conn, func(tx pgx.Tx) error {
    _, err := tx.Exec(context.Background(), "insert into foo(id) values (1)")
    return err
})
if err != nil {
    return err
}
Prepared Statements

Prepared statements can be manually created with the Prepare method. However, this is rarely necessary because pgx includes an automatic statement cache by default. Queries run through the normal Query, QueryRow, and Exec functions are automatically prepared on first execution and the prepared statement is reused on subsequent executions. See ParseConfig for information on how to customize or disable the statement cache.
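
When an explicit prepared statement is still desired, a sketch looks like this (the statement name "get_widget_name" is arbitrary, and the widgets table is reused from the earlier examples):

_, err := conn.Prepare(context.Background(), "get_widget_name", "select name from widgets where id=$1")
if err != nil {
    return err
}

var name string
err = conn.QueryRow(context.Background(), "get_widget_name", 42).Scan(&name)
if err != nil {
    return err
}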

Copy Protocol

Use CopyFrom to efficiently insert multiple rows at a time using the PostgreSQL copy protocol. CopyFrom accepts a CopyFromSource interface. If the data is already in a [][]any use CopyFromRows to wrap it in a CopyFromSource interface. Or implement CopyFromSource to avoid buffering the entire data set in memory.

rows := [][]any{
    {"John", "Smith", int32(36)},
    {"Jane", "Doe", int32(29)},
}

copyCount, err := conn.CopyFrom(
    context.Background(),
    pgx.Identifier{"people"},
    []string{"first_name", "last_name", "age"},
    pgx.CopyFromRows(rows),
)

When you already have a typed slice, using CopyFromSlice can be more convenient.

// User is assumed to look like this for the example:
type User struct {
    FirstName string
    LastName  string
    Age       int32
}

rows := []User{
    {"John", "Smith", 36},
    {"Jane", "Doe", 29},
}

copyCount, err := conn.CopyFrom(
    context.Background(),
    pgx.Identifier{"people"},
    []string{"first_name", "last_name", "age"},
    pgx.CopyFromSlice(len(rows), func(i int) ([]any, error) {
        return []any{rows[i].FirstName, rows[i].LastName, rows[i].Age}, nil
    }),
)

CopyFrom can be faster than an insert with as few as 5 rows.

Listen and Notify

pgx can listen to the PostgreSQL notification system with the Conn.WaitForNotification method. It blocks until a notification is received or the context is canceled.

_, err := conn.Exec(context.Background(), "listen channelname")
if err != nil {
    return err
}

notification, err := conn.WaitForNotification(context.Background())
if err != nil {
    return err
}
// do something with notification
Tracing and Logging

pgx supports tracing by setting ConnConfig.Tracer. To combine several tracers you can use the multitracer.Tracer.

In addition, the tracelog package provides the TraceLog type which lets a traditional logger act as a Tracer.
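
For example, a sketch of attaching a logger via tracelog (myLogger is a hypothetical value implementing the tracelog.Logger interface):

config, err := pgx.ParseConfig(os.Getenv("DATABASE_URL"))
if err != nil {
    return err
}
config.Tracer = &tracelog.TraceLog{
    Logger:   myLogger, // hypothetical tracelog.Logger implementation
    LogLevel: tracelog.LogLevelDebug,
}

conn, err := pgx.ConnectConfig(context.Background(), config)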

For debug tracing of the actual PostgreSQL wire protocol messages see github.com/jackc/pgx/v5/pgproto3.

Lower Level PostgreSQL Functionality

github.com/jackc/pgx/v5/pgconn contains a lower level PostgreSQL driver roughly at the level of libpq. pgx.Conn is implemented on top of pgconn. The Conn.PgConn() method can be used to access this lower layer.

PgBouncer

By default pgx automatically uses prepared statements. Prepared statements are incompatible with PgBouncer. This can be disabled by setting a different QueryExecMode in ConnConfig.DefaultQueryExecMode.
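
A sketch of changing the query exec mode before connecting:

config, err := pgx.ParseConfig(os.Getenv("DATABASE_URL"))
if err != nil {
    return err
}
// QueryExecModeExec avoids automatic statement preparation;
// QueryExecModeSimpleProtocol is another option.
config.DefaultQueryExecMode = pgx.QueryExecModeExec

conn, err := pgx.ConnectConfig(context.Background(), config)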

PostgreSQL format codes:

const (
	TextFormatCode   = 0
	BinaryFormatCode = 1
)

ErrTxCommitRollback occurs when an error has occurred in a transaction and Commit() is called. PostgreSQL accepts COMMIT on aborted transactions, but it is treated as ROLLBACK.

AppendRows iterates through rows, calling fn for each row, and appending the results into a slice of T.

This function closes the rows automatically on return.

BeginFunc calls Begin on db and then calls fn. If fn does not return an error then it calls Commit on db. If fn returns an error it calls Rollback on db. The context will be used when executing the transaction control statements (BEGIN, ROLLBACK, and COMMIT) but does not otherwise affect the execution of fn.

BeginTxFunc calls BeginTx on db and then calls fn. If fn does not return an error then it calls Commit on db. If fn returns an error it calls Rollback on db. The context will be used when executing the transaction control statements (BEGIN, ROLLBACK, and COMMIT) but does not otherwise affect the execution of fn.

CollectExactlyOneRow calls fn for the first row in rows and returns the result. If no rows are found it returns an error where errors.Is(ErrNoRows) is true. If more than one row is found it returns an error where errors.Is(ErrTooManyRows) is true.

This function closes the rows automatically on return.

CollectOneRow calls fn for the first row in rows and returns the result. If no rows are found, it returns an error where errors.Is(ErrNoRows) is true. CollectOneRow is to CollectRows as QueryRow is to Query.

This function closes the rows automatically on return.
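
A brief sketch of CollectOneRow, reusing the widgets table from earlier examples:

rows, _ := conn.Query(context.Background(), "select name from widgets where id=$1", 42)
name, err := pgx.CollectOneRow(rows, pgx.RowTo[string])
if err != nil {
    return err
}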

CollectRows iterates through rows, calling fn for each row, and collecting the results into a slice of T.

This function closes the rows automatically on return.

This example uses CollectRows with a manually written collector function. In most cases RowTo, RowToAddrOf, RowToStructByPos, RowToAddrOfStructByPos, or another generic function would be used.

ctx, cancel := context.WithTimeout(context.Background(), 120*time.Second)
defer cancel()

conn, err := pgx.Connect(ctx, os.Getenv("PGX_TEST_DATABASE"))
if err != nil {
	fmt.Printf("Unable to establish connection: %v", err)
	return
}

rows, _ := conn.Query(ctx, `select n from generate_series(1, 5) n`)
numbers, err := pgx.CollectRows(rows, func(row pgx.CollectableRow) (int32, error) {
	var n int32
	err := row.Scan(&n)
	return n, err
})
if err != nil {
	fmt.Printf("CollectRows error: %v", err)
	return
}

fmt.Println(numbers)
Output:

[1 2 3 4 5]

ForEachRow iterates through rows. For each row it scans into the elements of scans and calls fn. If any row fails to scan or fn returns an error the query will be aborted and the error will be returned. Rows will be closed when ForEachRow returns.

conn, err := pgx.Connect(context.Background(), os.Getenv("PGX_TEST_DATABASE"))
if err != nil {
	fmt.Printf("Unable to establish connection: %v", err)
	return
}

rows, _ := conn.Query(
	context.Background(),
	"select n, n * 2 from generate_series(1, $1) n",
	3,
)
var a, b int
_, err = pgx.ForEachRow(rows, []any{&a, &b}, func() error {
	fmt.Printf("%v, %v\n", a, b)
	return nil
})
if err != nil {
	fmt.Printf("ForEachRow error: %v", err)
	return
}
Output:

1, 2
2, 4
3, 6

RowTo returns a T scanned from row.

ctx, cancel := context.WithTimeout(context.Background(), 120*time.Second)
defer cancel()

conn, err := pgx.Connect(ctx, os.Getenv("PGX_TEST_DATABASE"))
if err != nil {
	fmt.Printf("Unable to establish connection: %v", err)
	return
}

rows, _ := conn.Query(ctx, `select n from generate_series(1, 5) n`)
numbers, err := pgx.CollectRows(rows, pgx.RowTo[int32])
if err != nil {
	fmt.Printf("CollectRows error: %v", err)
	return
}

fmt.Println(numbers)
Output:

[1 2 3 4 5]

RowToAddrOf returns the address of a T scanned from row.

ctx, cancel := context.WithTimeout(context.Background(), 120*time.Second)
defer cancel()

conn, err := pgx.Connect(ctx, os.Getenv("PGX_TEST_DATABASE"))
if err != nil {
	fmt.Printf("Unable to establish connection: %v", err)
	return
}

rows, _ := conn.Query(ctx, `select n from generate_series(1, 5) n`)
pNumbers, err := pgx.CollectRows(rows, pgx.RowToAddrOf[int32])
if err != nil {
	fmt.Printf("CollectRows error: %v", err)
	return
}

for _, p := range pNumbers {
	fmt.Println(*p)
}
Output:

1
2
3
4
5

RowToAddrOfStructByName returns the address of a T scanned from row. T must be a struct. T must have the same number of named public fields as row has fields. The row and T fields will be matched by name. The match is case-insensitive. The database column name can be overridden with a "db" struct tag. If the "db" struct tag is "-" then the field will be ignored.

RowToAddrOfStructByNameLax returns the address of a T scanned from row. T must be a struct. T must have at least as many named public fields as row has fields. The row and T fields will be matched by name. The match is case-insensitive. The database column name can be overridden with a "db" struct tag. If the "db" struct tag is "-" then the field will be ignored.

RowToAddrOfStructByPos returns the address of a T scanned from row. T must be a struct. T must have the same number of public fields as row has fields. The row and T fields will be matched by position. If the "db" struct tag is "-" then the field will be ignored.

RowToMap returns a map scanned from row.
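
A brief sketch of RowToMap combined with CollectRows:

rows, _ := conn.Query(context.Background(), "select 1 as id, 'Cheeseburger' as name")
items, err := pgx.CollectRows(rows, pgx.RowToMap)
if err != nil {
    return err
}
// items[0]["name"] == "Cheeseburger"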

RowToStructByName returns a T scanned from row. T must be a struct. T must have the same number of named public fields as row has fields. The row and T fields will be matched by name. The match is case-insensitive. The database column name can be overridden with a "db" struct tag. If the "db" struct tag is "-" then the field will be ignored.

ctx, cancel := context.WithTimeout(context.Background(), 120*time.Second)
defer cancel()

conn, err := pgx.Connect(ctx, os.Getenv("PGX_TEST_DATABASE"))
if err != nil {
	fmt.Printf("Unable to establish connection: %v", err)
	return
}

if conn.PgConn().ParameterStatus("crdb_version") != "" {
	// Skip test / example when running on CockroachDB. Since an example can't be skipped fake success instead.
	fmt.Println(`Cheeseburger: $10
Fries: $5
Soft Drink: $3`)
	return
}

// Setup example schema and data.
_, err = conn.Exec(ctx, `
create temporary table products (
	id int primary key generated by default as identity,
	name varchar(100) not null,
	price int not null
);

insert into products (name, price) values
	('Cheeseburger', 10),
	('Double Cheeseburger', 14),
	('Fries', 5),
	('Soft Drink', 3);
`)
if err != nil {
	fmt.Printf("Unable to setup example schema and data: %v", err)
	return
}

type product struct {
	ID    int32
	Name  string
	Price int32
}

rows, _ := conn.Query(ctx, "select * from products where price < $1 order by price desc", 12)
products, err := pgx.CollectRows(rows, pgx.RowToStructByName[product])
if err != nil {
	fmt.Printf("CollectRows error: %v", err)
	return
}

for _, p := range products {
	fmt.Printf("%s: $%d\n", p.Name, p.Price)
}
Output:

Cheeseburger: $10
Fries: $5
Soft Drink: $3

RowToStructByNameLax returns a T scanned from row. T must be a struct. T must have at least as many named public fields as row has fields. The row and T fields will be matched by name. The match is case-insensitive. The database column name can be overridden with a "db" struct tag. If the "db" struct tag is "-" then the field will be ignored.

ctx, cancel := context.WithTimeout(context.Background(), 120*time.Second)
defer cancel()

conn, err := pgx.Connect(ctx, os.Getenv("PGX_TEST_DATABASE"))
if err != nil {
	fmt.Printf("Unable to establish connection: %v", err)
	return
}

if conn.PgConn().ParameterStatus("crdb_version") != "" {
	// Skip test / example when running on CockroachDB. Since an example can't be skipped fake success instead.
	fmt.Println(`Cheeseburger: $10
Fries: $5
Soft Drink: $3`)
	return
}

// Setup example schema and data.
_, err = conn.Exec(ctx, `
create temporary table products (
	id int primary key generated by default as identity,
	name varchar(100) not null,
	price int not null
);

insert into products (name, price) values
	('Cheeseburger', 10),
	('Double Cheeseburger', 14),
	('Fries', 5),
	('Soft Drink', 3);
`)
if err != nil {
	fmt.Printf("Unable to setup example schema and data: %v", err)
	return
}

type product struct {
	ID    int32
	Name  string
	Type  string
	Price int32
}

rows, _ := conn.Query(ctx, "select * from products where price < $1 order by price desc", 12)
products, err := pgx.CollectRows(rows, pgx.RowToStructByNameLax[product])
if err != nil {
	fmt.Printf("CollectRows error: %v", err)
	return
}

for _, p := range products {
	fmt.Printf("%s: $%d\n", p.Name, p.Price)
}
Output:

Cheeseburger: $10
Fries: $5
Soft Drink: $3

RowToStructByPos returns a T scanned from row. T must be a struct. T must have the same number of public fields as row has fields. The row and T fields will be matched by position. If the "db" struct tag is "-" then the field will be ignored.

ctx, cancel := context.WithTimeout(context.Background(), 120*time.Second)
defer cancel()

conn, err := pgx.Connect(ctx, os.Getenv("PGX_TEST_DATABASE"))
if err != nil {
	fmt.Printf("Unable to establish connection: %v", err)
	return
}

if conn.PgConn().ParameterStatus("crdb_version") != "" {
	// Skip test / example when running on CockroachDB. Since an example can't be skipped fake success instead.
	fmt.Println(`Cheeseburger: $10
Fries: $5
Soft Drink: $3`)
	return
}

// Setup example schema and data.
_, err = conn.Exec(ctx, `
create temporary table products (
	id int primary key generated by default as identity,
	name varchar(100) not null,
	price int not null
);

insert into products (name, price) values
	('Cheeseburger', 10),
	('Double Cheeseburger', 14),
	('Fries', 5),
	('Soft Drink', 3);
`)
if err != nil {
	fmt.Printf("Unable to setup example schema and data: %v", err)
	return
}

type product struct {
	ID    int32
	Name  string
	Price int32
}

rows, _ := conn.Query(ctx, "select * from products where price < $1 order by price desc", 12)
products, err := pgx.CollectRows(rows, pgx.RowToStructByPos[product])
if err != nil {
	fmt.Printf("CollectRows error: %v", err)
	return
}

for _, p := range products {
	fmt.Printf("%s: $%d\n", p.Name, p.Price)
}
Output:

Cheeseburger: $10
Fries: $5
Soft Drink: $3

ScanRow decodes raw row data into dest. It can be used to scan rows read from the lower level pgconn interface.

typeMap - OID to Go type mapping.
fieldDescriptions - OID and format of values.
values - the raw data as returned from the PostgreSQL server.
dest - the destination that values will be decoded into.

Batch queries are a way of bundling multiple queries together to avoid unnecessary network round trips. A Batch must only be sent once.

Len returns the number of queries that have been queued so far.

Queue queues a query to batch b. query can be an SQL query or the name of a prepared statement. The only pgx option argument that is supported is QueryRewriter. Queries are executed using the connection's DefaultQueryExecMode.

While query can contain multiple statements if the connection's DefaultQueryExecMode is QueryExecModeSimpleProtocol, this should be avoided. QueuedQuery.Fn must not be set as it will only be called for the first query. That is, QueuedQuery.Query, QueuedQuery.QueryRow, and QueuedQuery.Exec must not be called. In addition, any error messages or tracing that include the current query may reference the wrong query.

BatchTracer traces SendBatch.

CollectableRow is the subset of Rows methods that a RowToFunc is allowed to call.

Conn is a PostgreSQL connection handle. It is not safe for concurrent usage. Use a connection pool to manage access to multiple database connections from multiple goroutines.

Connect establishes a connection with a PostgreSQL server with a connection string. See pgconn.Connect for details.

ConnectConfig establishes a connection with a PostgreSQL server with a configuration struct. connConfig must have been created by ParseConfig.

ConnectWithOptions behaves exactly like Connect with the addition of options. At present, options is only used to provide a GetSSLPassword function.

Begin starts a transaction. Unlike database/sql, the context only affects the begin command. i.e. there is no auto-rollback on context cancellation.

BeginTx starts a transaction with txOptions determining the transaction mode. Unlike database/sql, the context only affects the begin command. i.e. there is no auto-rollback on context cancellation.

Close closes a connection. It is safe to call Close on an already closed connection.

Config returns a copy of config that was used to establish this connection.

CopyFrom uses the PostgreSQL copy protocol to perform bulk data insertion. It returns the number of rows copied and an error.

CopyFrom requires all values use the binary format. A pgtype.Type that supports the binary format must be registered for the type of each column. Almost all types implemented by pgx support the binary format.

Even though enum types appear to be strings they still must be registered to use with CopyFrom. This can be done with Conn.LoadType and pgtype.Map.RegisterType.

Deallocate releases a prepared statement. Calling Deallocate on a non-existent prepared statement will succeed.

DeallocateAll releases all previously prepared statements from both the server and the client, and also resets the statement and description caches.

Exec executes sql. sql can be either a prepared statement name or an SQL string. arguments should be referenced positionally from the sql string as $1, $2, etc.

IsClosed reports if the connection has been closed.

LoadType inspects the database for typeName and produces a pgtype.Type suitable for registration. typeName must be the name of a type where the underlying type(s) is already understood by pgx. It is for derived types; in particular, typeName must name an array, composite, domain, enum, range, or multirange type built on types pgx already understands.

LoadTypes performs a single (complex) query, returning all the required information to register the named types, as well as any other types directly or indirectly required to complete the registration. The result of this call can be passed into RegisterTypes to complete the process.
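
For example, a sketch of registering a hypothetical enum type named color so it can be used with CopyFrom and the binary format:

t, err := conn.LoadType(context.Background(), "color")
if err != nil {
    return err
}
conn.TypeMap().RegisterType(t)
// The corresponding array type can be registered the same way, e.g. LoadType of "_color".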

PgConn returns the underlying *pgconn.PgConn. This is an escape hatch method that allows lower level access to the PostgreSQL connection than pgx exposes.

It is strongly recommended that the connection be idle (no in-progress queries) before the underlying *pgconn.PgConn is used and the connection must be returned to the same state before any *pgx.Conn methods are again used.

Ping delegates to the underlying *pgconn.PgConn.Ping.

Prepare creates a prepared statement with name and sql. sql can contain placeholders for bound parameters. These placeholders are referenced positionally as $1, $2, etc. name can be used instead of sql with Query, QueryRow, and Exec to execute the statement. It can also be used with Batch.Queue.

The underlying PostgreSQL identifier for the prepared statement will be name if name != sql or a digest of sql if name == sql.

Prepare is idempotent; i.e. it is safe to call Prepare multiple times with the same name and sql arguments. This allows a code path to Prepare and Query/Exec without concern for whether the statement has already been prepared.

Query sends a query to the server and returns a Rows to read the results. Only errors encountered sending the query and initializing Rows will be returned. Err() on the returned Rows must be checked after the Rows is closed to determine if the query executed successfully.

The returned Rows must be closed before the connection can be used again. It is safe to attempt to read from the returned Rows even if an error is returned. The error will be available in rows.Err() after rows are closed. It is allowed to ignore the error returned from Query and handle it in Rows.

It is possible for a call of FieldDescriptions on the returned Rows to return nil even if the Query call did not return an error.

It is possible for a query to return one or more rows before encountering an error. In most cases the rows should be collected before processing rather than processed while receiving each row. This avoids the possibility of the application processing rows from a query that the server rejected. The CollectRows function is useful here.

An implementor of QueryRewriter may be passed as the first element of args. It can rewrite the sql and change or replace args. For example, NamedArgs is a QueryRewriter that implements named arguments.

For extra control over how the query is executed, the types QueryExecMode, QueryResultFormats, and QueryResultFormatsByOID may be used as the first args to control exactly how the query is executed. This is rarely needed. See the documentation for those types for details.

This example uses Query without using any helpers to read the results. Normally CollectRows, ForEachRow, or another helper function should be used.

ctx, cancel := context.WithTimeout(context.Background(), 120*time.Second)
defer cancel()

conn, err := pgx.Connect(ctx, os.Getenv("PGX_TEST_DATABASE"))
if err != nil {
	fmt.Printf("Unable to establish connection: %v", err)
	return
}

if conn.PgConn().ParameterStatus("crdb_version") != "" {
	// Skip test / example when running on CockroachDB. Since an example can't be skipped fake success instead.
	fmt.Println(`Cheeseburger: $10
Fries: $5
Soft Drink: $3`)
	return
}

// Setup example schema and data.
_, err = conn.Exec(ctx, `
create temporary table products (
	id int primary key generated by default as identity,
	name varchar(100) not null,
	price int not null
);

insert into products (name, price) values
	('Cheeseburger', 10),
	('Double Cheeseburger', 14),
	('Fries', 5),
	('Soft Drink', 3);
`)
if err != nil {
	fmt.Printf("Unable to setup example schema and data: %v", err)
	return
}

rows, err := conn.Query(ctx, "select name, price from products where price < $1 order by price desc", 12)

// It is unnecessary to check err. If an error occurred it will be returned by rows.Err() later. But in rare
// cases it may be useful to detect the error as early as possible.
if err != nil {
	fmt.Printf("Query error: %v", err)
	return
}

// Ensure rows is closed. It is safe to close rows multiple times.
defer rows.Close()

// Iterate through the result set
for rows.Next() {
	var name string
	var price int32

	err = rows.Scan(&name, &price)
	if err != nil {
		fmt.Printf("Scan error: %v", err)
		return
	}

	fmt.Printf("%s: $%d\n", name, price)
}

// rows is closed automatically when rows.Next() returns false so it is not necessary to manually close rows.

// The first error encountered by the original Query call, rows.Next or rows.Scan will be returned here.
if rows.Err() != nil {
	fmt.Printf("rows error: %v", rows.Err())
	return
}
Output:

Cheeseburger: $10
Fries: $5
Soft Drink: $3

QueryRow is a convenience wrapper over Query. Any error that occurs while querying is deferred until calling Scan on the returned Row. That Row will error with ErrNoRows if no rows are returned.

SendBatch sends all queued queries to the server at once. All queries are run in an implicit transaction unless explicit transaction control statements are executed. The returned BatchResults must be closed before the connection is used again.

Depending on the QueryExecMode, all queries may be prepared before any are executed. This means that creating a table and using it in a subsequent query in the same batch can fail.

ctx, cancel := context.WithTimeout(context.Background(), 120*time.Second)
defer cancel()

conn, err := pgx.Connect(ctx, os.Getenv("PGX_TEST_DATABASE"))
if err != nil {
	fmt.Printf("Unable to establish connection: %v", err)
	return
}

batch := &pgx.Batch{}
batch.Queue("select 1 + 1").QueryRow(func(row pgx.Row) error {
	var n int32
	err := row.Scan(&n)
	if err != nil {
		return err
	}

	fmt.Println(n)

	return err
})

batch.Queue("select 1 + 2").QueryRow(func(row pgx.Row) error {
	var n int32
	err := row.Scan(&n)
	if err != nil {
		return err
	}

	fmt.Println(n)

	return err
})

batch.Queue("select 2 + 3").QueryRow(func(row pgx.Row) error {
	var n int32
	err := row.Scan(&n)
	if err != nil {
		return err
	}

	fmt.Println(n)

	return err
})

err = conn.SendBatch(ctx, batch).Close()
if err != nil {
	fmt.Printf("SendBatch error: %v", err)
	return
}
Output:

2
3
5

TypeMap returns the type map (*pgtype.Map) used by this connection.

WaitForNotification waits for a PostgreSQL notification. It wraps the underlying pgconn notification system in a slightly more convenient form.

ConnConfig contains all the options used to establish a connection. It must be created by ParseConfig and then it can be modified. A manually initialized ConnConfig will cause ConnectConfig to panic.

ParseConfig creates a ConnConfig from a connection string. ParseConfig handles all options that pgconn.ParseConfig does. In addition, it accepts pgx-specific options such as default_query_exec_mode, statement_cache_capacity, and description_cache_capacity.

ParseConfigWithOptions behaves exactly as ParseConfig does with the addition of options. At present, options is only used to provide a GetSSLPassword function.

ConnString returns the connection string as parsed by pgx.ParseConfig into pgx.ConnConfig.

Copy returns a deep copy of the config that is safe to use and modify. The only exception is the tls.Config: according to the tls.Config docs it must not be modified after creation.

ConnectTracer traces Connect and ConnectConfig.

CopyFromSource is the interface used by *Conn.CopyFrom as the source for copy data.

CopyFromFunc returns a CopyFromSource interface that relies on nxtf for values. nxtf returns rows until it either signals an 'end of data' by returning row=nil and err=nil, or it returns an error. If nxtf returns an error, the copy is aborted.
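
A sketch of CopyFromFunc streaming rows without buffering them, using the people table from the earlier CopyFrom examples:

next := 0
copyCount, err := conn.CopyFrom(
    context.Background(),
    pgx.Identifier{"people"},
    []string{"first_name", "last_name", "age"},
    pgx.CopyFromFunc(func() ([]any, error) {
        if next >= 2 {
            return nil, nil // signal end of data
        }
        row := []any{"Person", fmt.Sprintf("Number%d", next), int32(30 + next)}
        next++
        return row, nil
    }),
)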

CopyFromRows returns a CopyFromSource interface over the provided rows slice making it usable by *Conn.CopyFrom.

CopyFromSlice returns a CopyFromSource interface over a dynamic func making it usable by *Conn.CopyFrom.

CopyFromTracer traces CopyFrom.

type ExtendedQueryBuilder struct {
	ParamValues [][]byte

	ParamFormats  []int16
	ResultFormats []int16
	// contains filtered or unexported fields
}

ExtendedQueryBuilder is used to choose the parameter formats, to format the parameters and to choose the result formats for an extended query.

Build sets ParamValues, ParamFormats, and ResultFormats for use with *PgConn.ExecParams or *PgConn.ExecPrepared. If sd is nil then QueryExecModeExec behavior will be used.

Identifier is a PostgreSQL identifier or name. Identifiers can be composed of multiple parts such as ["schema", "table"] or ["table", "column"].

Sanitize returns a sanitized string safe for SQL interpolation.
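
For example:

table := pgx.Identifier{"public", "user table"}
sql := "select * from " + table.Sanitize()
// sql == `select * from "public"."user table"`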

type LargeObject struct {
	// contains filtered or unexported fields
}

A LargeObject is a large object stored on the server. It is only valid within the transaction that it was initialized in. It uses the context it was initialized with for all operations. It implements these interfaces:

io.Writer
io.Reader
io.Seeker
io.Closer

Close the large object descriptor.

Read reads up to len(p) bytes into p returning the number of bytes read.

Seek moves the current location pointer to the new location specified by offset.

Tell returns the current read or write location of the large object descriptor.

Truncate the large object to size.

Write writes p to the large object and returns the number of bytes written and an error if not all of p was written.

type LargeObjectMode int32

type LargeObjects struct {
	// contains filtered or unexported fields
}

LargeObjects is a structure used to access the large objects API. It is only valid within the transaction where it was created.

For more details see: http://www.postgresql.org/docs/current/static/largeobjects.html

Create creates a new large object. If oid is zero, the server assigns an unused OID.

Open opens an existing large object with the given mode. ctx will also be used for all operations on the opened large object.

Unlink removes a large object from the database.

NamedArgs can be used as the first argument to a query method. It will replace every '@' named placeholder with a '$' ordinal placeholder and construct the appropriate arguments.

For example, the following two queries are equivalent:

conn.Query(ctx, "select * from widgets where foo = @foo and bar = @bar", pgx.NamedArgs{"foo": 1, "bar": 2})
conn.Query(ctx, "select * from widgets where foo = $1 and bar = $2", 1, 2)

Named placeholders are case sensitive and must start with a letter or underscore. Subsequent characters can be letters, numbers, or underscores.

RewriteQuery implements the QueryRewriter interface.

ParseConfigOptions contains options that control how a config is built, such as GetSSLPassword.

PrepareTracer traces Prepare.

const (
	// Automatically prepare and cache statements.
	QueryExecModeCacheStatement QueryExecMode

	// Cache statement descriptions (i.e. argument and result types) and assume
	// they do not change on subsequent executions.
	QueryExecModeCacheDescribe

	// Get the statement description on every execution. This does not cache
	// the description.
	QueryExecModeDescribeExec

	// Assume the PostgreSQL query parameter types from the Go types of the
	// arguments and execute the query in a single round trip without an
	// explicit prepare or describe step.
	QueryExecModeExec

	// Use the PostgreSQL simple protocol. Parameter values are interpolated
	// into the SQL string client-side, so only basic Go argument types are
	// supported.
	QueryExecModeSimpleProtocol
)
type QueryResultFormats []int16

QueryResultFormats controls the result format (text=0, binary=1) of a query by result column position.

QueryResultFormatsByOID controls the result format (text=0, binary=1) of a query by the result column OID.

QueryRewriter rewrites a query when used as the first arguments to a query method.

QueryTracer traces Query, QueryRow, and Exec.

type QueuedQuery struct {
	SQL       string
	Arguments []any
	Fn        batchItemFunc
	// contains filtered or unexported fields
}

QueuedQuery is a query that has been queued for execution via a Batch.

Exec sets fn to be called when the response to qq is received.

Query sets fn to be called when the response to qq is received.

QueryRow sets fn to be called when the response to qq is received.

type Row interface {
	Scan(dest ...any) error
}

Row is a convenience wrapper over Rows that is returned by QueryRow.

Row is an interface instead of a struct to allow tests to mock QueryRow. However, adding a method to an interface is technically a breaking change. Because of this the Row interface is partially excluded from semantic version requirements. Methods will not be removed or changed, but new methods may be added.

type RowScanner interface {
	ScanRow(rows Rows) error
}

RowScanner scans an entire row at a time into the RowScanner.

RowToFunc is a function that scans or otherwise converts row to a T.

Rows is the result set returned from *Conn.Query. Rows must be closed before the *Conn can be used again. Rows are closed by explicitly calling Close(), calling Next() until it returns false, or when a fatal error occurs.

Once a Rows is closed the only methods that may be called are Close(), Err(), and CommandTag().

Rows is an interface instead of a struct to allow tests to mock Query. However, adding a method to an interface is technically a breaking change. Because of this the Rows interface is partially excluded from semantic version requirements. Methods will not be removed or changed, but new methods may be added.

RowsFromResultReader returns a Rows that will read from values resultReader and decode with typeMap. It can be used to read from the lower level pgconn interface.

type ScanArgError struct {
	ColumnIndex int
	FieldName   string
	Err         error
}

StrictNamedArgs can be used in the same way as NamedArgs, but provided arguments are also checked to include all named arguments that the sql query uses, and no extra arguments.
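
A brief sketch of StrictNamedArgs; it uses the same map syntax as NamedArgs, but a missing or extra argument causes the query to return an error:

rows, err := conn.Query(ctx, "select * from widgets where foo = @foo and bar = @bar",
    pgx.StrictNamedArgs{"foo": 1, "bar": 2})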

RewriteQuery implements the QueryRewriter interface.

type TraceBatchEndData struct {
	Err error
}
type TraceBatchStartData struct {
	Batch *Batch
}
type TraceConnectEndData struct {
	Conn *Conn
	Err  error
}
type TraceConnectStartData struct {
	ConnConfig *ConnConfig
}
type TraceCopyFromStartData struct {
	TableName   Identifier
	ColumnNames []string
}
type TracePrepareEndData struct {
	AlreadyPrepared bool
	Err             error
}
type TracePrepareStartData struct {
	Name string
	SQL  string
}
type TraceQueryStartData struct {
	SQL  string
	Args []any
}
type Tx interface {
	Begin(ctx context.Context) (Tx, error)
	Commit(ctx context.Context) error
	Rollback(ctx context.Context) error

	CopyFrom(ctx context.Context, tableName Identifier, columnNames []string, rowSrc CopyFromSource) (int64, error)
	SendBatch(ctx context.Context, b *Batch) BatchResults
	LargeObjects() LargeObjects

	Prepare(ctx context.Context, name, sql string) (*pgconn.StatementDescription, error)

	Exec(ctx context.Context, sql string, arguments ...any) (commandTag pgconn.CommandTag, err error)
	Query(ctx context.Context, sql string, args ...any) (Rows, error)
	QueryRow(ctx context.Context, sql string, args ...any) Row

	Conn() *Conn
}

Tx represents a database transaction.

Tx is an interface instead of a struct to enable connection pools to be implemented without relying on internal pgx state, to support pseudo-nested transactions with savepoints, and to allow tests to mock transactions. However, adding a method to an interface is technically a breaking change. If new methods are added to Conn it may be desirable to add them to Tx as well. Because of this the Tx interface is partially excluded from semantic version requirements. Methods will not be removed or changed, but new methods may be added.

TxAccessMode is the transaction access mode (read write or read only)

TxDeferrableMode is the transaction deferrable mode (deferrable or not deferrable)

Transaction deferrable modes

TxIsoLevel is the transaction isolation level (serializable, repeatable read, read committed or read uncommitted)

Transaction isolation levels

TxOptions are transaction modes within a transaction block

