
kadm package - github.com/twmb/franz-go/pkg/kadm - Go Packages

Package kadm provides a helper Kafka admin client around a *kgo.Client.

This package is meant to cover the common use cases for dropping into an "admin" like interface for Kafka. As with any admin client, this package must make opinionated decisions on what to provide and what to hide. The underlying Kafka protocol gives more detailed information in responses, or allows more fine tuning in requests, but most of the time, these details are unnecessary.

By virtue of making opinionated decisions, this package cannot satisfy every need for requests and responses. If you need more control than this admin client provides, you can use the kmsg package directly.

This package contains a lot of types, but the two main types to know are Client and ShardErrors. Every other type is used for inputs or outputs to methods on the client.

The Client type is a simple small wrapper around a *kgo.Client that exists solely to namespace methods. The ShardErrors type is a bit more complicated. When issuing requests, under the hood some of these requests actually need to be mapped to brokers and split, issuing different pieces of the input request to different brokers. The *kgo.Client handles this all internally, but (if using RequestSharded as directed) returns each response to each of these split requests individually. Each response can fail or be successful. This package goes one step further and merges these failures into one meta failure, ShardErrors. Any function that returns ShardErrors is documented as such, and if a function returns a non-nil ShardErrors, it is possible that the returned data is actually valid and usable. If you care to, you can log / react to the partial failures and continue using the partially successful result. This is in contrast to other clients, which either require you to request individual brokers directly, completely hide individual failures, or completely fail on any individual failure.

For methods that list or describe things, this package often completely fails responses on auth failures. If you use a method that accepts two topics, one that you are authorized for and one that you are not, you will not receive a partially successful response. Instead, you will receive an AuthError. Methods that do *not* fail on auth errors are explicitly documented as such.

Users may often find it easy to work with lists of topics or partitions. Rather than needing to build deeply nested maps directly, this package has a few helper types that are worth knowing:

TopicsList  - a slice of topics and their partitions
TopicsSet   - a set of topics, each containing a set of partitions
Partitions  - a slice of partitions
OffsetsList - a slice of offsets
Offsets     - a map of offsets

These types are meant to be easy to build and use, and can be used as the starting point for other types.
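A minimal sketch of the set-of-topics shape: the `TopicsSet` and `Add` defined below are local illustrations only (kadm's own TopicsSet has more helper methods than this), but they show why the type is easy to build incrementally.

```go
package main

import "fmt"

// TopicsSet is a local illustration of "a set of topics, each containing
// a set of partitions"; it is not the kadm definition itself.
type TopicsSet map[string]map[int32]struct{}

// Add records the given partitions under a topic, creating the inner
// set on first use.
func (s TopicsSet) Add(topic string, partitions ...int32) {
	ps, ok := s[topic]
	if !ok {
		ps = make(map[int32]struct{})
		s[topic] = ps
	}
	for _, p := range partitions {
		ps[p] = struct{}{}
	}
}

func main() {
	s := make(TopicsSet)
	s.Add("events", 0, 1, 2)
	s.Add("events", 2) // duplicates collapse in a set
	fmt.Println(len(s["events"])) // 3
}
```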

Many functions in this package are variadic and return either a map or a list of responses; often, you pass only one input element and care about only one element of the output. This package provides the following functions to help:

Any(map)
AnyE(map, err)
First(slice)
FirstE(slice, err)

The intended use case of these is something like `kadm.AnyE(kadm.CreateTopics(..., "my-one-topic"))`, such that you can immediately get the response for the one topic you are creating.

const FetchAllGroupTopics = "|fetch-all-group-topics|"

FetchAllGroupTopics is a kadm "internal" topic name that can be used in [FetchOffsetsForTopics]. By default, [FetchOffsetsForTopics] only returns topics that are explicitly requested. Other topics that may be committed to in the group are not returned. Using FetchAllGroupTopics switches the behavior to return the union of all committed topics and all requested topics.

ErrEmpty is returned from FirstE or AnyE if the input is empty.

Any returns the first range element of the input map and whether it exists. This is the non-error-accepting equivalent of AnyE.

Many client methods in kadm accept a variadic amount of input arguments and return either a slice or a map of responses, but you often use the method with only one argument. This function can help extract the one response you are interested in.

AnyE returns the first range element of the input map, or the input error if it is non-nil. If the error is nil but the map is empty, this returns ErrEmpty. This is the error-accepting equivalent of Any.

Many client methods in kadm accept a variadic amount of input arguments and return either a slice or a map of responses, but you often use the method with only one argument. This function can help extract the one response you are interested in.

func First[S ~[]T, T any](s S) (T, bool)

First returns the first element of the input slice and whether it exists. This is the non-error-accepting equivalent of FirstE.

Many client methods in kadm accept a variadic amount of input arguments and return either a slice or a map of responses, but you often use the method with only one argument. This function can help extract the one response you are interested in.

FirstE returns the first element of the input slice, or the input error if it is non-nil. If the error is nil but the slice is empty, this returns ErrEmpty. This is the error-accepting equivalent of First.

Many client methods in kadm accept a variadic amount of input arguments and return either a slice or a map of responses, but you often use the method with only one argument. This function can help extract the one response you are interested in.

StringPtr is a shortcut function to aid building configs for creating or altering topics.
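StringPtr exists because config values in create/alter requests are pointers; its behavior is a one-liner. The config keys in the usage below are ordinary Kafka topic configs shown for illustration.

```go
package main

import "fmt"

// StringPtr returns a pointer to s, useful for config maps whose values
// are *string.
func StringPtr(s string) *string { return &s }

func main() {
	// Building config values for topic creation or alteration:
	configs := map[string]*string{
		"cleanup.policy": StringPtr("compact"),
		"retention.ms":   StringPtr("604800000"),
	}
	fmt.Println(*configs["cleanup.policy"]) // compact
}
```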

WithAuthorizedOps attaches an internal key/value to the context through WithValue. Using this context will cause all Metadata requests (assuming Kafka version 2.3.0 or higher) to fetch the list of authorized operations. See KIP-430 for details.

type ACLBuilder struct {
	
}

ACLBuilder is a builder that is used for batch creating / listing / deleting ACLS.

An ACL consists of five components:

  - the user (principal)
  - the host the user runs on
  - what resource to access (topic name, group id, etc.)
  - the operation
  - whether to allow or deny the above

This builder allows for adding the above five components in batches and then creating, listing, or deleting a batch of ACLs in one go. This builder merges the fifth component (allowing or denying) into allowing principals and hosts and denying principals and hosts. The builder must always have an Allow or Deny. For creating, the host is optional and defaults to the wildcard * that allows or denies all hosts. For listing / deleting, the host is also required (specifying no hosts matches all hosts, but you must specify this).

Building works on a multiplying factor: every user, every host, every resource, and every operation is combined (principals * hosts * resources * operations).
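The multiplying factor can be made concrete with a plain cross-product. The `acl` struct below is an illustrative flattened form, not a kadm type; the builder ultimately produces kmsg request types instead.

```go
package main

import "fmt"

// acl is an illustrative flattened ACL for counting combinations.
type acl struct{ principal, host, resource, op string }

// cross combines every principal with every host, resource, and
// operation: len(result) == principals * hosts * resources * operations.
func cross(principals, hosts, resources, ops []string) []acl {
	var out []acl
	for _, p := range principals {
		for _, h := range hosts {
			for _, r := range resources {
				for _, o := range ops {
					out = append(out, acl{p, h, r, o})
				}
			}
		}
	}
	return out
}

func main() {
	acls := cross(
		[]string{"User:alice", "User:bob"}, // 2 principals
		[]string{"*"},                      // 1 host
		[]string{"topic-a", "topic-b"},     // 2 resources
		[]string{"READ", "WRITE"},          // 2 operations
	)
	fmt.Println(len(acls)) // 2 * 1 * 2 * 2 = 8
}
```

Keeping the multiplication in mind helps avoid accidentally creating far more ACLs than intended when several batches are combined.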

With the Kafka simple authorizer (and most reimplementations), all principals are required to have the "User:" prefix. The PrefixUserExcept function can be used to easily add the "User:" prefix if missing.

The full set of operations and which requests require what operations is described in a large doc comment on the ACLOperation type.

Lastly, resources to access / deny access to can be created / matched based on literal (exact) names, or on prefix names, or more. See the ACLPattern docs for more information.

NewACLs returns a new ACL builder.

Allow sets the principals to add allow permissions for. For listing and deleting, you must also use AllowHosts.

This returns the input pointer.

For creating, if this is not paired with AllowHosts, the user will have access to all hosts (the wildcard *).

For listing & deleting, if the principals are empty, this matches any user.

AllowHosts sets the hosts to add allow permissions for. If using this, you must also use Allow.

This returns the input pointer.

For creating, if this is empty, the user will have access to all hosts (the wildcard *) and this function is actually not necessary.

For listing & deleting, if the hosts are empty, this matches any host.

AnyResource lists & deletes ACLs of any type matching the given names (pending other filters). If no names are given, this matches all names.

This returns the input pointer.

This function does nothing for creating.

Clusters lists/deletes/creates ACLs of resource type "cluster".

This returns the input pointer.

There is only one type of cluster in Kafka, "kafka-cluster". Opting in to listing or deleting by cluster inherently matches all ACLs of resource type cluster. For creating, this function allows for creating cluster ACLs.

DelegationTokens lists/deletes/creates ACLs of resource type "delegation_token" for the given delegation tokens.

This returns the input pointer.

For listing or deleting, if this is provided no tokens, all "delegation_token" resource type ACLs are matched. For creating, if no tokens are provided, this function does nothing.

Deny sets the principals to add deny permissions for. For listing and deleting, you must also use DenyHosts.

This returns the input pointer.

For creating, if this is not paired with DenyHosts, the user will be denied access to all hosts (the wildcard *).

For listing & deleting, if the principals are empty, this matches any user.

DenyHosts sets the hosts to add deny permissions for. If using this, you must also use Deny.

This returns the input pointer.

For creating, if this is empty, the user will be denied access to all hosts (the wildcard *) and this function is actually not necessary.

For listing & deleting, if the hosts are empty, this matches any host.

Groups lists/deletes/creates ACLs of resource type "group" for the given groups.

This returns the input pointer.

For listing or deleting, if this is provided no groups, all "group" resource type ACLs are matched. For creating, if no groups are provided, this function does nothing.

HasAnyFilter returns whether any field in this builder is opted into "any", meaning a wide glob. This would be if you used Topics with no topics, and so on. This function can be used to detect if you accidentally opted into a non-specific ACL.

The evaluated fields are: resources, principals/hosts, a single OpAny operation, and an Any pattern.

HasHosts returns if any allow or deny hosts have been set, or if their "any" field is true.

HasPrincipals returns if any allow or deny principals have been set, or if their "any" field is true.

HasResource returns true if the builder has a non-empty resource (topic, group, ...), or if any resource has "any" set to true.

MaybeAllow is the same as Allow, but does not match all allowed principals if none are provided.

MaybeAllowHosts is the same as AllowHosts, but does not match all allowed hosts if none are provided.

MaybeClusters is the same as Clusters, but only matches clusters if c is true.

MaybeDelegationTokens is the same as DelegationTokens, but does not match all tokens if none are provided.

MaybeDeny is the same as Deny, but does not match all denied principals if none are provided.

MaybeDenyHosts is the same as DenyHosts, but does not match all denied hosts if none are provided.

MaybeGroups is the same as Groups, but does not match all groups if none are provided.

MaybeOperations is the same as Operations, but does not match all operations if none are provided.

MaybeTopics is the same as Topics, but does not match all topics if none are provided.

MaybeTransactionalIDs is the same as TransactionalIDs, but does not match all transactional IDs if none are provided.

Operations sets operations to allow or deny. Passing no operations defaults to OpAny.

This returns the input pointer.

For creating, OpAny returns an error, for it is strictly used for filters (listing & deleting).

PrefixUser prefixes all allowed and denied principals with "User:".

PrefixUserExcept prefixes all allowed and denied principals with "User:", unless they have any of the given except prefixes.
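The prefixing behavior can be sketched as follows; `prefixUserExcept` is a local illustration of the documented semantics, not the builder method itself.

```go
package main

import (
	"fmt"
	"strings"
)

// prefixUserExcept adds the "User:" prefix to each principal unless the
// principal already has it or starts with any of the except prefixes.
func prefixUserExcept(principals []string, except ...string) []string {
	out := make([]string, 0, len(principals))
	for _, p := range principals {
		skip := strings.HasPrefix(p, "User:")
		for _, e := range except {
			if strings.HasPrefix(p, e) {
				skip = true
			}
		}
		if !skip {
			p = "User:" + p
		}
		out = append(out, p)
	}
	return out
}

func main() {
	fmt.Println(prefixUserExcept(
		[]string{"alice", "User:bob", "svc-ingest"},
		"svc-",
	)) // [User:alice User:bob svc-ingest]
}
```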

ResourcePatternType sets the pattern type to use when creating or filtering ACL resource names, overriding the default of LITERAL.

This returns the input pointer.

For creating, only LITERAL and PREFIXED are supported.

Topics lists/deletes/creates ACLs of resource type "topic" for the given topics.

This returns the input pointer.

For listing or deleting, if this is provided no topics, all "topic" resource type ACLs are matched. For creating, if no topics are provided, this function does nothing.

TransactionalIDs lists/deletes/creates ACLs of resource type "transactional_id" for the given transactional IDs.

This returns the input pointer.

For listing or deleting, if this is provided no IDs, all "transactional_id" resource type ACLs are matched. For creating, if no IDs are provided, this function does nothing.

ValidateCreate returns an error if the builder is invalid for creating ACLs.

ValidateDelete is an alias for ValidateFilter.

ValidateDescribe is an alias for ValidateFilter.

ValidateFilter returns an error if the builder is invalid for deleting or describing ACLs (which both operate on a filter basis).

ACLOperation is a type alias for kmsg.ACLOperation, which is an enum containing all Kafka ACL operations and has helper functions.

Kafka requests require the following operations (broker <=> broker ACLs elided):

PRODUCING/CONSUMING
===================
Produce      WRITE on TOPIC for topics
             WRITE on TRANSACTIONAL_ID for txn id (if transactionally producing)

Fetch        READ on TOPIC for topics

ListOffsets  DESCRIBE on TOPIC for topics

Metadata     DESCRIBE on TOPIC for topics
             CREATE on CLUSTER for kafka-cluster (if automatically creating new topics)
             CREATE on TOPIC for topics (if automatically creating new topics)

OffsetForLeaderEpoch  DESCRIBE on TOPIC for topics

GROUPS
======
FindCoordinator  DESCRIBE on GROUP for group (if finding group coordinator)
                 DESCRIBE on TRANSACTIONAL_ID for id (if finding transaction coordinator)

OffsetCommit     READ on GROUP for group
                 READ on TOPIC for topics

OffsetFetch      DESCRIBE on GROUP for group
                 DESCRIBE on TOPIC for topics

OffsetDelete     DELETE on GROUP for group
                 READ on TOPIC for topics

JoinGroup        READ on GROUP for group
Heartbeat        READ on GROUP for group
LeaveGroup       READ on GROUP for group
SyncGroup        READ on GROUP for group

DescribeGroup    DESCRIBE on GROUP for groups

ListGroups       DESCRIBE on GROUP for groups
                 or, DESCRIBE on CLUSTER for kafka-cluster

DeleteGroups     DELETE on GROUP for groups

TRANSACTIONS (including FindCoordinator above)
============
InitProducerID      WRITE on TRANSACTIONAL_ID for id, if using transactions
                    or, IDEMPOTENT_WRITE on CLUSTER for kafka-cluster, if pre Kafka 3.0
                    or, WRITE on TOPIC for any topic, if Kafka 3.0+

AddPartitionsToTxn  WRITE on TRANSACTIONAL_ID for id
                    WRITE on TOPIC for topics

AddOffsetsToTxn     WRITE on TRANSACTIONAL_ID for id
                    READ on GROUP for group

EndTxn              WRITE on TRANSACTIONAL_ID for id

TxnOffsetCommit     WRITE on TRANSACTIONAL_ID for id
                    READ on GROUP for group
                    READ on TOPIC for topics

TOPIC ADMIN
===========
CreateTopics      CREATE on CLUSTER for kafka-cluster
                  CREATE on TOPIC for topics
                  DESCRIBE_CONFIGS on TOPIC for topics, for returning topic configs on create

CreatePartitions  ALTER on TOPIC for topics

DeleteTopics      DELETE on TOPIC for topics
                  DESCRIBE on TOPIC for topics, if deleting by topic id (in addition to prior ACL)

DeleteRecords     DELETE on TOPIC for topics

CONFIG ADMIN
============
DescribeConfigs          DESCRIBE_CONFIGS on CLUSTER for kafka-cluster, for broker or broker-logger describing
                         DESCRIBE_CONFIGS on TOPIC for topics, for topic describing

AlterConfigs             ALTER_CONFIGS on CLUSTER for kafka-cluster, for broker altering
                         ALTER_CONFIGS on TOPIC for topics, for topic altering

IncrementalAlterConfigs  ALTER_CONFIGS on CLUSTER for kafka-cluster, for broker or broker-logger altering
                         ALTER_CONFIGS on TOPIC for topics, for topic altering

MISC ADMIN
==========
AlterReplicaLogDirs  ALTER on CLUSTER for kafka-cluster
DescribeLogDirs      DESCRIBE on CLUSTER for kafka-cluster

AlterPartitionAssignments   ALTER on CLUSTER for kafka-cluster
ListPartitionReassignments  DESCRIBE on CLUSTER for kafka-cluster

DescribeDelegationTokens    DESCRIBE on DELEGATION_TOKEN for id

ElectLeaders          ALTER on CLUSTER for kafka-cluster

DescribeClientQuotas  DESCRIBE_CONFIGS on CLUSTER for kafka-cluster
AlterClientQuotas     ALTER_CONFIGS on CLUSTER for kafka-cluster

DescribeUserScramCredentials  DESCRIBE on CLUSTER for kafka-cluster
AlterUserScramCredentials     ALTER on CLUSTER for kafka-cluster

UpdateFeatures        ALTER on CLUSTER for kafka-cluster

DescribeCluster       DESCRIBE on CLUSTER for kafka-cluster

DescribeProducerIDs   READ on TOPIC for topics
DescribeTransactions  DESCRIBE on TRANSACTIONAL_ID for ids
                      DESCRIBE on TOPIC for topics
ListTransactions      DESCRIBE on TRANSACTIONAL_ID for ids

DecodeACLOperations decodes an int32 bitfield into a slice of kmsg.ACLOperation values.

This function is used to interpret the `AuthorizedOperations` field returned by the Kafka APIs, which specifies the operations a client is allowed to perform on a cluster, topic, or consumer group. It is utilized in multiple Kafka API responses, including Metadata, DescribeCluster, and DescribeGroupsResponseGroup.

Caveats with Metadata API

  1. To include authorized operations in the Metadata response, the client must explicitly opt in by setting `IncludeClusterAuthorizedOperations` and/or `IncludeTopicAuthorizedOperations`. These options were introduced in Kafka 2.3.0 as part of KIP-430.
  2. In Kafka 2.8.0 (Metadata v11), the `AuthorizedOperations` for the cluster was removed from the Metadata response. Instead, clients should use the DescribeCluster API to retrieve cluster-level permissions.

Function Behavior

Supported use cases:

  - Cluster operations: retrieved via the DescribeCluster API or older Metadata API versions (v8–v10).
  - Topic operations: retrieved via the Metadata API when `IncludeTopicAuthorizedOperations` is set.
  - Group operations: retrieved in the DescribeGroups API response.
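The bitfield decoding itself is straightforward. This sketch shows the mechanism with plain bit positions rather than kmsg.ACLOperation values, which is what kadm actually maps each position to.

```go
package main

import "fmt"

// decodeBitfield returns the bit positions set in the input, mirroring
// how an AuthorizedOperations bitfield encodes one operation per bit.
func decodeBitfield(bits int32) []int {
	var ops []int
	for i := 0; i < 32; i++ {
		if bits&(1<<i) != 0 {
			ops = append(ops, i)
		}
	}
	return ops
}

func main() {
	// Bits 3 and 4 set: two operations authorized.
	fmt.Println(decodeBitfield(0b11000)) // [3 4]
}
```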

ACLPattern is a type alias for kmsg.ACLResourcePatternType, which is an enum containing all Kafka ACL resource pattern options.

Creating/listing/deleting ACLs works on a resource name basis: every ACL created has a name, and every ACL filtered for listing / deleting matches by name. The name by default is "literal", meaning created ACLs will have the exact name, and matched ACLs must match completely.

Prefixed names allow for creating an ACL that matches any prefix: principals foo-bar and foo-baz both have the prefix "foo-", meaning a READ on TOPIC for User:foo- with prefix pattern will allow both of those principals to read the topic.

Any and match are used for listing and deleting. Any will match any name, be it literal or prefix or a wildcard name. There is no need for specifying topics, groups, etc. when using any resource pattern.

Alternatively, match requires a name, but it matches any literal name (exact match), any prefix, and any wildcard.
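Literal versus prefixed matching can be sketched with a small helper; `matches` is illustrative only, since Kafka's authorizer performs this comparison server-side.

```go
package main

import (
	"fmt"
	"strings"
)

// matches reports whether a resource name matches an ACL's name under
// the literal (exact) or prefixed pattern type.
func matches(aclName, resourceName string, prefixed bool) bool {
	if prefixed {
		return strings.HasPrefix(resourceName, aclName)
	}
	return resourceName == aclName
}

func main() {
	fmt.Println(matches("foo-", "foo-bar", true))  // true: prefix match
	fmt.Println(matches("foo-", "foo-bar", false)) // false: not an exact name
}
```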

AlterAllReplicaLogDirsResponses contains per-broker responses to altered partition directories.

Each calls fn for every response.

Sorted returns the responses sorted by broker, topic, and partition.

AlterClientQuotaEntry pairs an entity with quotas to set or remove.

AlterClientQuotaOp sets or removes a client quota.

AlterConfig is an individual key/value operation to perform when altering configs.

This package includes a StringPtr function to aid in building config values.

AlteredConfigsResponse contains the response for an individual alteration.

AlterConfigsResponses contains responses for many alterations.

On calls fn for the response name if it exists, returning the response and the error returned from fn. If fn is nil, this simply returns the response.

The fn is given a copy of the response. This function returns the copy as well; any modifications within fn are modifications on the returned copy.

If the resource does not exist, this returns kerr.UnknownTopicOrPartition.

AlterPartitionAssignmentsReq is the input for a request to alter partition assignments. The keys are topics and partitions, and the final slice corresponds to the brokers that replicas will be assigned to. If the brokers for a given partition are null, the request will *cancel* any active reassignment for that partition.

Assign specifies brokers that a partition should be placed on. Using null for the brokers cancels a pending reassignment of the partition.

CancelAssign cancels a reassignment of the given partition.

AlterPartitionAssignmentsResponse contains a response for an individual partition that was assigned.

AlterPartitionAssignmentsResponses contains responses to all partitions in an alter assignment request.

Each calls fn for every response.

Error returns the first error in the responses, if any.

Sorted returns the responses sorted by topic and partition.

AlterReplicaLogDirsReq is the input for a request to alter replica log directories. The key is the directory that all topics and partitions in the topic set will move to.

Add merges the input topic set into the given directory.

AlterReplicaLogDirsResponse contains the response for an individual altered partition directory.

Less returns if the response is less than the other by broker, dir, topic, and partition.

AlterReplicaLogDirsResponses contains responses to altered partition directories for a single broker.

Each calls fn for every response.

Sorted returns the responses sorted by topic and partition.

AlteredClientQuota is the result for a single entity that was altered.

AlteredClientQuotas contains results for all altered entities.

AlteredUserSCRAM is the result of an alter operation.

AlteredUserSCRAMs contains altered user SCRAM credentials keyed by user.

AllFailed returns whether all altered user credentials are errored.

Each calls fn for every altered user.

EachError calls fn for every altered user that has a non-nil error.

Error iterates over all altered users and returns the first error encountered, if any.

Ok returns true if there are no errors. This is a shortcut for rs.Error() == nil.

Sorted returns the altered user credentials ordered by user.

type AuthError struct {
	Err error 
}

AuthError can be returned from requests for resources that you are not authorized for.

type BrokerApiVersions struct {
	NodeID int32 

	Err error 
	
}

BrokerApiVersions contains the API versions for a single broker.

EachKeySorted calls fn for every API key in the broker response, from the smallest API key to the largest.

KeyMaxVersion returns the broker's max version for an API key and whether this broker supports the request.

KeyMinVersion returns the broker's min version for an API key and whether this broker supports the request.

KeyVersions returns the broker's min and max version for an API key and whether this broker supports the request.

Raw returns the raw API versions response.

VersionGuess returns the best guess of the Kafka version that this broker is running. This is a shortcut for:

kversion.FromApiVersionsResponse(v.Raw()).VersionGuess(opt...)

Check the kversion.VersionGuess API docs for more details.

BrokerDetail is a type alias for kgo.BrokerMetadata.

BrokerDetails contains the details for many brokers.

NodeIDs returns the IDs of all nodes.

BrokersApiVersions contains API versions for all brokers that are reachable from a metadata response.

Each calls fn for every broker response.

Sorted returns all broker responses sorted by node ID.

Client is an admin client.

This is a simple wrapper around a *kgo.Client to provide helper admin methods.

NewClient returns an admin client.

NewOptClient returns a new client directly from kgo options. This is a wrapper around creating a new *kgo.Client and then creating an admin client.

AlterAllReplicaLogDirs alters the log directories for the input topic partitions, moving each partition to the requested directory. This function moves all replicas on any broker.

This may return *ShardErrors.

AlterBrokerConfigs incrementally alters broker configuration values. If brokers are specified, this updates each specific broker. If no brokers are specified, this updates whole-cluster broker configuration values.

This method requires talking to a cluster that supports IncrementalAlterConfigs (officially introduced in Kafka v2.3, but many broker reimplementations support this request even if they do not support all other requests from Kafka v2.3).

If you want to alter the entire configs state using the older AlterConfigs request, use AlterBrokerConfigsState.

This may return *ShardErrors. You may consider checking ValidateAlterBrokerConfigs before using this method.

AlterBrokerConfigsState alters the full state of broker configurations. If brokers are specified, this updates each specific broker. If no brokers are specified, this updates whole-cluster broker configuration values. All prior configuration is lost.

This may return *ShardErrors. You may consider checking ValidateAlterBrokerConfigs before using this method.

AlterBrokerReplicaLogDirs alters the log directories for the input topic on the given broker, moving each partition to the requested directory.

AlterClientQuotas alters quotas for the input entries. You may consider checking ValidateAlterClientQuotas before using this method.

AlterPartitionAssignments alters partition assignments for the requested partitions, returning an error if the response could not be issued or if you do not have permissions.

AlterTopicConfigs incrementally alters topic configuration values.

This method requires talking to a cluster that supports IncrementalAlterConfigs (officially introduced in Kafka v2.3, but many broker reimplementations support this request even if they do not support all other requests from Kafka v2.3).

If you want to alter the entire configs state using the older AlterConfigs request, use AlterTopicConfigsState.

This may return *ShardErrors. You may consider checking ValidateAlterTopicConfigs before using this method.

AlterTopicConfigsState alters the full state of topic configurations. All prior configuration is lost.

This may return *ShardErrors. You may consider checking ValidateAlterTopicConfigs before using this method.

AlterUserSCRAMs deletes, updates, or creates (inserts) user SCRAM credentials. Note that a username can only appear once across both upserts and deletes. This modifies elements of the upsert slice that need to have a salted password generated.

ApiVersions queries every broker in a metadata response for their API versions. This returns an error only if the metadata request fails.

BrokerMetadata issues a metadata request and returns it, and does not ask for any topics.

This returns an error if the request fails to be issued, or an *AuthError.

Close closes the underlying *kgo.Client.

CommitAllOffsets is identical to CommitOffsets, but returns an error if the overall offset commit was successful while some individual offset within the commit failed to be committed.

This is a shortcut function provided to avoid checking two errors, but you must be careful with this if partially successful commits can be a problem for you.

CommitOffsets issues an offset commit request for the input offsets.

This function can be used to manually commit offsets when directly consuming partitions outside of an actual consumer group. For example, if you assign partitions manually, but still want to use Kafka to checkpoint what you have consumed, you can manually issue an offset commit request with this method.

This does not return on authorization failures, instead, authorization failures are included in the responses.

CreateACLs creates a batch of ACLs using the ACL builder, validating the input before issuing the CreateACLs request.

If the input is invalid, or if the response fails, or if the response does not contain as many ACLs as we issued in our create request, this returns an error.

CreateDelegationToken creates a delegation token, which is a scoped SCRAM-SHA-256 username and password.

Creating delegation tokens allows for an (ideally) quicker and easier method of enabling authorization for a wide array of clients. Rather than having to manage many passwords external to Kafka, you only need to manage a few accounts and use those to create delegation tokens per client.

Note that delegation tokens inherit the same ACLs as the user creating the token. Thus, if you want to properly scope ACLs, you should not create delegation tokens with admin accounts.

This can return *AuthError.

CreatePartitions issues a create partitions request for the given topics, adding "add" partitions to each topic. This request lets Kafka choose where the new partitions should be.

This does not return an error on authorization failures for the create partitions request itself, instead, authorization failures are included in the responses. Before adding partitions, this request must issue a metadata request to learn the current count of partitions. If that fails, this returns the metadata request error. If you already know the final amount of partitions you want, you can use UpdatePartitions to set the count directly (rather than adding to the current count). You may consider checking ValidateCreatePartitions before using this method.

CreateTopic issues a create topics request with the given partitions, replication factor, and (optional) configs for the given topic name. This is similar to CreateTopics, but returns the kerr.ErrorForCode(response.ErrorCode) if the request/response is successful.

CreateTopics issues a create topics request with the given partitions, replication factor, and (optional) configs for every topic. Under the hood, this uses the default 15s request timeout and lets Kafka choose where to place partitions.

Version 4 of the underlying create topic request was introduced in Kafka 2.4 and brought client support for creation defaults. If talking to a 2.4+ cluster, you can use -1 for partitions and replicationFactor to use broker defaults.

This package includes a StringPtr function to aid in building config values.

This does not return an error on authorization failures, instead, authorization failures are included in the responses. This only returns an error if the request fails to be issued. You may consider checking ValidateCreateTopics before using this method.

DeleteACLs deletes a batch of ACLs using the ACL builder, validating the input before issuing the DeleteACLs request.

If the input is invalid, or if the response fails, or if the response does not contain as many ACL results as we issued in our delete request, this returns an error.

Deleting ACLs works on a filter basis: a single filter can match many ACLs. For example, deleting with operation ANY matches any operation. For safety / verification purposes, you can DescribeACLs with the same builder first to see what would be deleted.

DeleteGroup deletes the specified group. This is similar to DeleteGroups, but returns the kerr.ErrorForCode(response.ErrorCode) if the request/response is successful.

DeleteGroups deletes all groups specified.

The purpose of this request is to allow operators a way to delete groups after Kafka 1.1, which removed RetentionTimeMillis from offset commits. See KIP-229 for more details.

This may return *ShardErrors. This does not return on authorization failures, instead, authorization failures are included in the responses.

DeleteOffsets deletes offsets for the given group.

Originally, offset commits were persisted in Kafka for some retention time. This proved problematic for infrequently committing consumers, so the retention time concept was removed in Kafka v2.1 in favor of deleting offsets for a group only when the group became empty. However, if a group stops consuming from a topic, the offsets persist and lag monitoring for the group will notice an ever-increasing amount of lag for these no-longer-consumed topics. Thus, Kafka v2.4 introduced an OffsetDelete request to allow admins to manually delete offsets for no-longer-consumed topics.

This method requires talking to Kafka v2.4+. This returns an *AuthError if the user is not authorized to delete offsets in the group at all. This does not return an error on per-topic authorization failures; instead, per-topic authorization failures are included in the responses.

DeleteRecords issues a delete records request for the given offsets. Per offset, only the Offset field needs to be set.

To delete records, Kafka sets the LogStartOffset for partitions to the requested offset. All segments whose max offset is before the requested offset are deleted, and any records within a remaining segment that are before the requested offset can no longer be read.

This does not return an error on authorization failures, instead, authorization failures are included in the responses.

This may return *ShardErrors.

DeleteTopic issues a delete topic request for the given topic name with a (by default) 15s timeout. This is similar to DeleteTopics, but returns the kerr.ErrorForCode(response.ErrorCode) if the request/response is successful.

DeleteTopics issues a delete topics request for the given topic names with a (by default) 15s timeout.

This does not return an error on authorization failures, instead, authorization failures are included in the responses. This only returns an error if the request fails to be issued.

DescribeACLs describes a batch of ACLs using the ACL builder, validating the input before issuing DescribeACLs requests.

If the input is invalid, or if any response fails, this returns an error.

Listing ACLs works on a filter basis: a single filter can match many ACLs. For example, describing with operation ANY matches any operation. Under the hood, this method issues one describe request per filter, because describing ACLs does not work on a batch basis (unlike creating & deleting). The return of this function can be used to see what would be deleted given the same builder input.

DescribeAllLogDirs describes the log directories for every input topic partition on every broker. If the input set is nil, this describes all log directories.

This may return *ShardErrors.

DescribeBrokerConfigs returns configuration for the requested brokers. If no brokers are requested, a single request is issued and any broker in the cluster replies with the cluster-level dynamic config values.

This may return *ShardErrors.

DescribeBrokerLogDirs describes the log directories for the input topic partitions on the given broker. If the input set is nil, this describes all log directories.

DescribeClientQuotas describes client quotas. If strict is true, the response includes only the requested components.

DescribeDelegationTokens describes delegation tokens. This returns either all delegation tokens, or returns only tokens with owners in the requested owners list.

This can return *AuthError.

DescribeGroups describes either all groups specified, or all groups in the cluster if none are specified.

This may return *ShardErrors or *AuthError.

If no groups are specified and this method first lists groups, and list groups returns a *ShardErrors, this function describes all successfully listed groups and appends the list shard errors to any describe shard errors.

If only one group is described, there will be at most one request issued, and there is no need to deeply inspect the error.

DescribeProducers describes all producers that are transactionally producing to the requested topic set. This request can be used to detect hanging transactions or other transaction related problems. If the input set is empty, this requests data for all partitions.

This may return *ShardErrors or *AuthError.

DescribeTopicConfigs returns the configuration for the requested topics.

This may return *ShardErrors.

DescribeTransactions describes either all transactional IDs specified, or all transactional IDs in the cluster if none are specified.

This may return *ShardErrors or *AuthError.

If no transactional IDs are specified and this method first lists transactional IDs, and listing IDs returns a *ShardErrors, this function describes all successfully listed IDs and appends the list shard errors to any describe shard errors.

If only one ID is described, there will be at most one request issued and there is no need to deeply inspect the error.

DescribeUserSCRAMs returns a small bit of information about all users in the input request that have SCRAM passwords configured. Requesting no users returns all users.

ElectLeaders elects leaders for partitions. This request was added in Kafka 2.2 to replace the previously-ZooKeeper-only option of triggering leader elections. See KIP-183 for more details.

Kafka 2.4 introduced the ability to use unclean leader election. If you use unclean leader election on a Kafka 2.2 or 2.3 cluster, the client will instead fall back to preferred replica (clean) leader election. You can check the result's How function (or field) to see which election type was actually used.

If s is nil, this will elect leaders for all partitions.

This will return *AuthError if you do not have ALTER on CLUSTER for kafka-cluster.

ExpireDelegationToken changes a delegation token's expiry timestamp and returns the new expiry timestamp, which is min(now+expiry, maxTimestamp). This request can be used to force tokens to expire quickly, or to give tokens a grace period before expiry. Using an expiry of -1 expires the token immediately.

This can return *AuthError.

FetchManyOffsets issues a fetch offsets request for each group specified.

This function is a batch version of FetchOffsets. FetchOffsets and CommitOffsets are important to provide as simple APIs for users that manage group offsets outside of a consumer group. Each individual group may have an auth error.

FetchOffsets issues an offset fetch request for all topics and partitions in the group. Because Kafka returns only partitions you are authorized to fetch, this only returns an auth error if you are not authorized to describe the group at all.

This method requires talking to Kafka v0.11+.

FetchOffsetsForTopics is a helper function that returns the currently committed offsets for the given group, as well as default -1 offsets for any topic/partition that does not yet have a commit.

If any partition fetched or listed has an error, this function returns an error. The returned offset responses are ready to be used or converted directly to pure offsets with `Into`, and again into kgo offsets with another `Into`.

By default, this function returns offsets for only the requested topics. You can use the special "topic" FetchAllGroupTopics to return all committed-to topics in addition to all requested topics.

FindGroupCoordinators returns the coordinator for all requested group names.

This may return *ShardErrors or *AuthError.

FindTxnCoordinators returns the coordinator for all requested transactional IDs.

This may return *ShardErrors or *AuthError.

Lag returns the lag for all input groups. This function is a shortcut for the steps required to use CalculateGroupLagWithStartOffsets properly, with some opinionated choices for error handling since calculating lag is multi-request process. If a group cannot be described or the offsets cannot be fetched, an error is returned for the group. If any topic cannot have its end offsets listed, the lag for the partition has a corresponding error. If any request fails with an auth error, this returns *AuthError.

LeaveGroup causes instance IDs to leave a group.

This function allows manually removing members from a group using instance IDs, which enables fast scale down / host replacement (see KIP-345 for more detail). This returns an *AuthError if the user is not authorized to remove members from groups.

ListBrokers issues a metadata request and returns BrokerDetails. This returns an error if the request fails to be issued, or an *AuthError.

ListCommittedOffsets returns newest committed offsets for each partition in each requested topic. A committed offset may be slightly less than the latest offset. In Kafka terms, committed means the last stable offset, and newest means the high watermark. Record offsets in active, uncommitted transactions will not be returned. If no topics are specified, all topics are listed. If a requested topic does not exist, no offsets for it are listed and it is not present in the response.

If any topics being listed do not exist, a special -1 partition is added to the response with the expected error code kerr.UnknownTopicOrPartition.

This may return *ShardErrors.

ListEndOffsets returns the end (newest) offsets for each partition in each requested topic. In Kafka terms, this returns high watermarks. If no topics are specified, all topics are listed. If a requested topic does not exist, no offsets for it are listed and it is not present in the response.

If any topics being listed do not exist, a special -1 partition is added to the response with the expected error code kerr.UnknownTopicOrPartition.

This may return *ShardErrors.

ListGroups returns all groups in the cluster. If you are talking to Kafka 2.6+, filter states can be used to return groups only in the requested states. By default, this returns all groups. In almost all cases, DescribeGroups is more useful.

This may return *ShardErrors or *AuthError.

ListOffsetsAfterMilli returns the first offsets at or after the requested millisecond timestamp. Unlike listing start/end/committed offsets, offsets returned from this function also include the timestamp of the offset. If no topics are specified, all topics are listed. If a partition has no offsets after the requested millisecond, the offset will be the current end offset. If a requested topic does not exist, no offsets for it are listed and it is not present in the response.

If any topics being listed do not exist, a special -1 partition is added to the response with the expected error code kerr.UnknownTopicOrPartition.

This may return *ShardErrors.

ListPartitionReassignments lists the state of any active reassignments for all requested partitions, returning an error if the response could not be issued or if you do not have permissions.

ListStartOffsets returns the start (oldest) offsets for each partition in each requested topic. In Kafka terms, this returns the log start offset. If no topics are specified, all topics are listed. If a requested topic does not exist, no offsets for it are listed and it is not present in the response.

If any topics being listed do not exist, a special -1 partition is added to the response with the expected error code kerr.UnknownTopicOrPartition.

This may return *ShardErrors.

ListTopics issues a metadata request and returns TopicDetails. Specific topics to describe can be passed as additional arguments. If no topics are specified, all topics are requested. Internal topics are not returned unless specifically requested. To see all topics including internal topics, use ListTopicsWithInternal.

This returns an error if the request fails to be issued, or an *AuthError.

ListTopicsWithInternal is the same as ListTopics, but does not filter internal topics before returning.

ListTransactions returns all transactions and their states in the cluster. Filter states can be used to return transactions only in the requested states. By default, this returns all transactions you have DESCRIBE access to. Producer IDs can be specified to filter for transactions from the given producer.

This may return *ShardErrors or *AuthError.

Metadata issues a metadata request and returns it. Specific topics to describe can be passed as additional arguments. If no topics are specified, all topics are requested.

This returns an error if the request fails to be issued, or an *AuthError.

OffsetForLeaderEpoch requests end offsets for the requested leader epoch in partitions in the request. This is a relatively advanced and client-internal request; for more details, see the doc comments on the OffsetForLeaderEpoch type.

This may return *ShardErrors or *AuthError.

RenewDelegationToken renews a delegation token that has not yet hit its max timestamp and returns the new expiry timestamp.

This can return *AuthError.

SetTimeoutMillis sets the timeout to use for requests that have a timeout, overriding the default of 15,000 (15s).

Not all requests have timeouts. Most requests are expected to return immediately or are expected to deliberately hang. The following requests have timeout fields:

Produce
CreateTopics
DeleteTopics
DeleteRecords
CreatePartitions
ElectLeaders
AlterPartitionAssignments
ListPartitionReassignments
UpdateFeatures

Not all requests above are supported in the admin API.

UpdatePartitions issues a create partitions request for the given topics, setting the final partition count to "set" for each topic. This request lets Kafka choose where the new partitions should be.

This does not return an error on authorization failures for the create partitions request itself, instead, authorization failures are included in the responses. Unlike CreatePartitions, this request uses your "set" value to set the new final count of partitions. "set" must be equal to or larger than the current count of partitions in the topic. All topics will have the same final count of partitions (unlike CreatePartitions, which allows you to add a specific count of partitions to topics that have a different amount of current partitions). You may consider checking ValidateUpdatePartitions before using this method.

ValidateAlterBrokerConfigs validates an incremental alter config for the given brokers.

This returns exactly what AlterBrokerConfigs returns, but does not actually alter configurations.

ValidateAlterBrokerConfigsState validates an AlterBrokerConfigsState for the given brokers.

This returns exactly what AlterBrokerConfigsState returns, but does not actually alter configurations.

ValidateAlterClientQuotas validates an alter client quota request. This returns exactly what AlterClientQuotas returns, but does not actually alter quotas.

ValidateAlterTopicConfigs validates an incremental alter config for the given topics.

This returns exactly what AlterTopicConfigs returns, but does not actually alter configurations.

ValidateAlterTopicConfigsState validates an AlterTopicConfigsState for the given topics.

This returns exactly what AlterTopicConfigsState returns, but does not actually alter configurations.

ValidateCreatePartitions validates a create partitions request for adding "add" partitions to the given topics.

This uses the same logic as CreatePartitions, but with the request's ValidateOnly field set to true. The response is the same response you would receive from CreatePartitions, but no partitions are actually added.

ValidateCreateTopics validates a create topics request with the given partitions, replication factor, and (optional) configs for every topic.

This package includes a StringPtr function to aid in building config values.

This uses the same logic as CreateTopics, but with the request's ValidateOnly field set to true. The response is the same response you would receive from CreateTopics, but no topics are actually created.

ValidateUpdatePartitions validates a create partitions request for setting the partition count on the given topics to "set".

This uses the same logic as UpdatePartitions, but with the request's ValidateOnly field set to true. The response is the same response you would receive from UpdatePartitions, but no partitions are actually added.

WriteTxnMarkers writes transaction markers to brokers. This is an advanced admin way to close out open transactions. See KIP-664 for more details.

This may return *ShardErrors or *AuthError.

ClientQuotaEntity contains the components that make up a single entity.

String returns {key=value, key=value}, joining all entities with a ", " and wrapping in braces.

type ClientQuotaEntityComponent struct {
	Type string  
	Name *string 
}

ClientQuotaEntityComponent is a quota entity component.

String returns key=value, or key=<default> if value is nil.

ClientQuotaValue is a quota name and value.

String returns key=value.

ClientQuotaValues contains all client quota values.

Config is a configuration for a resource (topic or broker).

MaybeValue returns the config's value if it is non-nil, otherwise an empty string.

ConfigSynonym is a fallback value for a config.

CreateACLsResult is a result for an individual ACL creation.

CreateACLsResults contains all results to created ACLs.

CreateDelegationToken is a create delegation token request, allowing you to create scoped tokens with the same ACLs as the creator. This allows you to more easily manage authorization for a wide array of clients. All delegation tokens use SCRAM-SHA-256 SASL for authorization.

CreatePartitionsResponse contains the response for an individual topic from a create partitions request.

CreatePartitionsResponses contains per-topic responses for a create partitions request.

Error iterates over all responses and returns the first error encountered, if any.

On calls fn for the response topic if it exists, returning the response and the error returned from fn. If fn is nil, this simply returns the response.

The fn is given a copy of the response. This function returns the copy as well; any modifications within fn are modifications on the returned copy.

If the topic does not exist, this returns kerr.UnknownTopicOrPartition.

Sorted returns all create partitions responses sorted by topic.

CreateTopicResponse contains the response for an individual created topic.

CreateTopicResponses contains per-topic responses for created topics.

Error iterates over all responses and returns the first error encountered, if any.

On calls fn for the response topic if it exists, returning the response and the error returned from fn. If fn is nil, this simply returns the response.

The fn is given a copy of the response. This function returns the copy as well; any modifications within fn are modifications on the returned copy.

If the topic does not exist, this returns kerr.UnknownTopicOrPartition.

Sorted returns all create topic responses sorted first by topic ID, then by topic name.

CredInfo contains the SCRAM mechanism and iterations for a password.

String returns MECHANISM=iterations={c.Iterations}.

DelegationToken contains information about a delegation token.

DelegationTokens contains a list of delegation tokens.

DeleteACLsResult contains the input used for a delete ACL filter, and the deletes that the filter matched or the error for this filter.

All fields but Deleted and Err are set from the request input. The response sets either Deleted (potentially to nothing if the filter matched nothing) or Err.

DeleteACLsResults contains all results to deleted ACLs.

type DeleteGroupResponse struct {
	Group string 
	Err   error  
}

DeleteGroupResponse contains the response for an individual deleted group.

DeleteGroupResponses contains per-group responses to deleted groups.

Error iterates over all groups and returns the first error encountered, if any.

On calls fn for the response group if it exists, returning the response and the error returned from fn. If fn is nil, this simply returns the group.

The fn is given a copy of the response. This function returns the copy as well; any modifications within fn are modifications on the returned copy.

If the group does not exist, this returns kerr.GroupIDNotFound.

Sorted returns all deleted group responses sorted by group name.

DeleteOffsetsResponses contains the per-topic, per-partition errors. If an offset deletion for a partition was successful, the error will be nil.

EachError calls fn for every partition that has a non-nil deletion error.

Error iterates over all responses and returns the first error encountered, if any.

Lookup returns the response at t and p and whether it exists.

DeleteRecordsResponse contains the response for an individual partition from a delete records request.

DeleteRecordsResponses contains per-partition responses to a delete records request.

Each calls fn for every delete records response.

Error iterates over all responses and returns the first error encountered, if any.

Lookup returns the response at t and p and whether it exists.

On calls fn for the response topic/partition if it exists, returning the response and the error returned from fn. If fn is nil, this simply returns the response.

The fn is given a copy of the response. This function returns the copy as well; any modifications within fn are modifications on the returned copy.

If the topic or partition does not exist, this returns kerr.UnknownTopicOrPartition.

Sorted returns all delete records responses sorted first by topic, then by partition.

DeleteSCRAM deletes a password with the given mechanism for the user.

DeleteTopicResponse contains the response for an individual deleted topic.

DeleteTopicResponses contains per-topic responses for deleted topics.

Error iterates over all responses and returns the first error encountered, if any.

On calls fn for the response topic if it exists, returning the response and the error returned from fn. If fn is nil, this simply returns the response.

The fn is given a copy of the response. This function returns the copy as well; any modifications within fn are modifications on the returned copy.

If the topic does not exist, this returns kerr.UnknownTopicOrPartition.

Sorted returns all delete topic responses sorted first by topic ID, then by topic name.

DeletedACL is an ACL that was deleted.

DeletedACLs contains ACLs that were deleted from a single delete filter.

DescribeACLsResult contains the input used for a describe ACL filter, and the describes that the filter matched or the error for this filter.

All fields but Described and Err are set from the request input. The response sets either Described (potentially to nothing if the filter matched nothing) or Err.

DescribeACLsResults contains all results to described ACLs.

DescribeClientQuotaComponent is an input entity component to describing client quotas: we define the type of quota ("client-id", "user"), how to match, and the match name if needed.

DescribedACL is an ACL that was described.

DescribedACLs contains ACLs that were described from a single describe filter.

DescribedAllLogDirs contains per-broker responses to described log directories.

Each calls fn for every described log dir in all responses.

Sorted returns each log directory sorted by broker, then by directory.

DescribedClientQuota contains a described quota. A single quota is made up of multiple entities and multiple values, for example, "user=foo" is one component of the entity, and "client-id=bar" is another.

DescribedClientQuotas contains client quotas that were described.

DescribedGroup contains data from a describe groups response for a single group.

AssignedPartitions returns the set of unique topics and partitions that are assigned across all members in this group.

This function is only relevant if the group is of type "consumer".

JoinTopics returns the set of topics that all members are interested in consuming.

This function is only relevant for groups of type "consumer".

DescribedGroupLag contains a described group and its lag, or the errors that prevent the lag from being calculated.

Err returns the first of DescribeErr or FetchErr that is non-nil.

DescribedGroupLags is a map of group names to the described group with its lag, or error for those groups.

Each calls fn for every group.

EachError calls fn for every group that has a non-nil error.

Error iterates over all groups and returns the first error encountered, if any.

Ok returns true if there are no errors. This is a shortcut for ls.Error() == nil.

Sorted returns all lags sorted by group name.

DescribedGroupMember is the detail of an individual group member as returned by a describe groups response.

DescribedGroups contains data for multiple groups from a describe groups response.

AssignedPartitions returns the set of unique topics and partitions that are assigned across all members in all groups. This is the all-group analogue to DescribedGroup.AssignedPartitions.

This function is only relevant for groups of type "consumer".

Error iterates over all groups and returns the first error encountered, if any.

JoinTopics returns the set of topics that all members are interested in consuming. This is the all-group analogue to DescribedGroup.JoinTopics.

This function is only relevant for groups of type "consumer".

Names returns a sorted list of all group names.

On calls fn for the group if it exists, returning the group and the error returned from fn. If fn is nil, this simply returns the group.

The fn is given a shallow copy of the group. This function returns the copy as well; any modifications within fn are modifications on the returned copy. Modifications on a described group's inner fields are persisted to the original map (because slices are pointers).

If the group does not exist, this returns kerr.GroupIDNotFound.

Sorted returns all groups sorted by group name.

DescribedLogDir is a described log directory.

Size returns the total size of all partitions in this directory. This is a shortcut for .Topics.Size().

DescribedLogDirPartition is the information for a single partitions described log directory.

Less returns if one dir partition is less than the other, by dir, topic, partition, and size.

LessBySize returns if one dir partition is less than the other by size, otherwise by normal Less semantics.

DescribedLogDirTopics contains per-partition described log directories.

Each calls fn for every partition.

Lookup returns the described partition if it exists.

Size returns the total size of all partitions in this directory.

Sorted returns all partitions sorted by topic then partition.

SortedBySize returns all partitions sorted by smallest size to largest. If partitions are of equal size, the sorting is topic then partition.

DescribedLogDirs contains per-directory responses to described log directories for a single broker.

Each calls fn for each log directory.

EachError calls fn for every directory that has a non-nil error.

EachPartition calls fn for each partition in any directory.

Error iterates over all directories and returns the first error encountered, if any. This can be used to check if describing was entirely successful or not.

LargestPartitionBySize returns the largest partition by directory size, or no partition if there are no partitions.

Lookup returns the described partition if it exists.

LookupPartition returns the described partition if it exists in any directory. Brokers should only have one replica of a partition, so this should always find at most one partition.

Ok returns true if there are no errors. This is a shortcut for ds.Error() == nil.

Size returns the total size of all directories.

SmallestPartitionBySize returns the smallest partition by directory size, or no partition if there are no partitions.

Sorted returns all directories sorted by dir.

SortedBySize returns all directories sorted from smallest total directory size to largest.

SortedPartitions returns all partitions sorted by dir, then topic, then partition.

SortedPartitionsBySize returns all partitions across all directories sorted by smallest to largest, falling back to by broker, dir, topic, and partition.

DescribedProducer contains the state of a transactional producer's last produce.

Less returns whether the left described producer is less than the right, in order of topic, partition, and producer ID.

DescribedProducers maps producer IDs to the full described producer.

Each calls fn for each described producer.

Sorted returns the described producers sorted by topic, partition, and producer ID.

DescribedProducersPartition is a partition whose producers were described.

DescribedProducersPartitions contains partitions whose producers were described.

Each calls fn for each partition.

EachProducer calls fn for each producer in all partitions.

Sorted returns the described partitions sorted by topic and partition.

SortedProducers returns all producers sorted first by partition, then by producer ID.

DescribedProducersTopic contains topic partitions whose producers were described.

DescribedProducersTopics contains topics whose producers were described.

Each calls fn for every topic.

EachPartitions calls fn for all topic partitions.

EachProducer calls fn for each producer in all topics and partitions.

Sorted returns the described topics sorted by topic.

Sorted returns the described partitions sorted by topic and partition.

SortedProducers returns all producers sorted first by partition, then by producer ID.

DescribedTransaction contains data from a describe transactions response for a single transactional ID.

DescribedTransactions contains information from a describe transactions response.

Each calls fn for each described transaction.

On calls fn for the transactional ID if it exists, returning the transaction and the error returned from fn. If fn is nil, this simply returns the transaction.

The fn is given a shallow copy of the transaction. This function returns the copy as well; any modifications within fn are modifications on the returned copy. Modifications on a described transaction's inner fields are persisted to the original map (because slices are pointers).

If the transaction does not exist, this returns kerr.TransactionalIDNotFound.

Sorted returns all described transactions sorted by transactional ID.

TransactionalIDs returns a sorted list of all transactional IDs.

DescribedUserSCRAM contains a user, the SCRAM mechanisms that the user has passwords for, and if describing the user SCRAM credentials errored.

DescribedUserSCRAMs contains described user SCRAM credentials keyed by user.

AllFailed returns whether all described user credentials are errored.

Each calls fn for every described user.

EachError calls fn for every described user that has a non-nil error.

Error iterates over all described users and returns the first error encountered, if any.

Ok returns true if there are no errors. This is a shortcut for rs.Error() == nil.

Sorted returns the described user credentials ordered by user.

type ElectLeadersHow int8

ElectLeadersHow is how partition leaders should be elected.

ElectLeadersResult is the result for a single partition in an elect leaders request.

ElectLeadersResults contains per-topic, per-partition results for an elect leaders request.

type ErrAndMessage struct {
	Err        error  
	ErrMessage string 
}

ErrAndMessage (added in v1.13.0) is returned as the error from requests that were successfully responded to, but the response indicates failure with a message.

func (*ErrAndMessage) Error() string
func (*ErrAndMessage) Unwrap() error

FetchOffsetsResponse contains a fetch offsets response for a single group.

CommittedPartitions returns the set of unique topics and partitions that have been committed to in this group.

FetchOffsetsResponses contains responses for many fetch offsets requests.

AllFailed returns whether all fetch offsets requests failed.

CommittedPartitions returns the set of unique topics and partitions that have been committed to across all members in all responses. This is the all-group analogue to FetchOffsetsResponse.CommittedPartitions.

EachError calls fn for every response that has a non-nil error.

Error iterates over all responses and returns the first error encountered, if any.

On calls fn for the response group if it exists, returning the response and the error returned from fn. If fn is nil, this simply returns the group.

The fn is given a copy of the response. This function returns the copy as well; any modifications within fn are modifications on the returned copy.

If the group does not exist, this returns kerr.GroupIDNotFound.

FindCoordinatorResponse contains information for the coordinator for a group or transactional ID.

FindCoordinatorResponses contains responses to finding coordinators for groups or transactions.

AllFailed returns whether all responses are errored.

Each calls fn for every response.

EachError calls fn for every response that has a non-nil error.

Error iterates over all responses and returns the first error encountered, if any.

Ok returns true if there are no errors. This is a shortcut for rs.Error() == nil.

Sorted returns all coordinator responses sorted by name.

GroupLag is the per-topic, per-partition lag of members in a group.

CalculateGroupLag returns the per-partition lag of all members in a group. The inputs to this function are the results of the following methods (make sure to check shard errors):

// Note that FetchOffsets exists to fetch only one group's offsets,
// but some of the code below slightly changes.
described := DescribeGroups(ctx, group)
commits := FetchManyOffsets(ctx, group)
var endOffsets ListedOffsets
listPartitions := described.AssignedPartitions()
listPartitions.MergeTopics(described.JoinTopics())
listPartitions.Merge(commits.CommittedPartitions())
if topics := listPartitions.Topics(); len(topics) > 0 {
	endOffsets = ListEndOffsets(ctx, topics)
}
for _, group := range described {
	lag := CalculateGroupLag(group, commits[group.Group].Fetched, endOffsets)
}

If assigned partitions are missing in the listed end offsets, the partition will have an error indicating it is missing. A missing topic or partition in the commits is assumed to be nothing committing yet.

CalculateGroupLagWithStartOffsets returns the per-partition lag of all members in a group. This function slightly expands on CalculateGroupLag to handle calculating lag for partitions that (1) have no commits AND (2) have some segments deleted (cleanup.policy=delete) such that the log start offset is non-zero.

As an example, if a group is consuming a partition with log end offset 30 and log start offset 10 and has not yet committed to the group, this function can correctly tell you that the lag is 20, whereas CalculateGroupLag would tell you the lag is 30.

This function accepts 'nil' for startOffsets, which will result in the same behavior as CalculateGroupLag. This function is useful if you have infrequently committing groups against topics that have segments being deleted.

IsEmpty returns whether the group is empty.

Lookup returns the lag at t and p and whether it exists.

Sorted returns the per-topic, per-partition lag by member sorted in order by topic then partition.

Total returns the total lag across all topics.

TotalByTopic returns the total lag for each topic.

type GroupMemberAssignment struct {
	// contains filtered or unexported fields
}

GroupMemberAssignment is the assignment that a leader sent / a member received in a SyncGroup request. This can have one of three types:

*kmsg.ConsumerMemberAssignment, if the group's ProtocolType is "consumer"
*kmsg.ConnectMemberAssignment, if the group's ProtocolType is "connect"
[]byte, if the group's ProtocolType is unknown

AsConnect returns the assignment as ConnectMemberAssignment if possible.

AsConsumer returns the assignment as a ConsumerMemberAssignment if possible.

Raw returns the assignment as a raw byte slice, if it is neither of consumer type nor connect type.

GroupMemberLag is the lag between a group member's current offset commit and the current end offset.

If either the offset commits have load errors, or the listed end offsets have load errors, the Lag field will be -1 and the Err field will be set (to the first of either the commit error, or else the list error).

If the group is in the Empty state, lag is calculated for all partitions in a topic, but the member is nil. The calculate function assumes that any assigned topic is meant to be entirely consumed. If the group is Empty and topics could not be listed, some partitions may be missing.

IsEmpty returns whether this lag is for a group in the Empty state.

type GroupMemberMetadata struct {
	// contains filtered or unexported fields
}

GroupMemberMetadata is the metadata that a client sent in a JoinGroup request. This can have one of three types:

*kmsg.ConsumerMemberMetadata, if the group's ProtocolType is "consumer"
*kmsg.ConnectMemberMetadata, if the group's ProtocolType is "connect"
[]byte, if the group's ProtocolType is unknown

AsConnect returns the metadata as ConnectMemberMetadata if possible.

AsConsumer returns the metadata as a ConsumerMemberMetadata if possible.

Raw returns the metadata as a raw byte slice, if it is neither of consumer type nor connect type.

GroupTopicsLag is the total lag per topic within a group.

Sorted returns the per-topic lag, sorted by topic.

IncrementalOp is a typed int8 that is used for incrementally updating configuration keys for topics and brokers.

type LeaveGroupBuilder struct {
	// contains filtered or unexported fields
}

LeaveGroupBuilder helps build a leave group request, rather than having a function signature (string, string, ...string).

All functions on this type accept and return the same pointer, allowing for easy build-and-use usage.

LeaveGroup returns a LeaveGroupBuilder for the input group.

InstanceIDs are members to remove from a group.

Reason attaches a reason to all members in the leave group request. This requires Kafka 3.2+.

LeaveGroupResponse contains the response for an individual instance ID that left a group.

LeaveGroupResponses contains responses for each member of a leave group request. The map key is the instance ID that was removed from the group.

Each calls fn for every removed member.

EachError calls fn for every removed member that has a non-nil error.

Error iterates over all removed members and returns the first error encountered, if any.

Ok returns true if there are no errors. This is a shortcut for ls.Error() == nil.

Sorted returns all removed group members by instance ID.

type ListPartitionReassignmentsResponse struct {
	Topic            string  
	Partition        int32   
	Replicas         []int32 
	AddingReplicas   []int32 
	RemovingReplicas []int32 
}

ListPartitionReassignmentsResponse contains a response for an individual partition that was listed.

ListPartitionReassignmentsResponses contains responses to all partitions in a list reassignment request.

Each calls fn for every response.

Sorted returns the responses sorted by topic and partition.

ListedGroup contains data from a list groups response for a single group.

ListedGroups contains information from a list groups response.

Groups returns a sorted list of all group names.

Sorted returns all groups sorted by group name.

ListedOffset contains record offset information.

ListedOffsets contains per-partition record offset information that is returned from any of the List.*Offsets functions.

Each calls fn for each listed offset.

Error iterates over all offsets and returns the first error encountered, if any. This can be to check if a listing was entirely successful or not.

Note that offset listing can be partially successful. For example, some offsets could succeed to be listed, while others could fail (maybe one partition is offline). If this is something you need to worry about, you may need to check all offsets manually.

KOffsets returns these listed offsets as a kgo offset map.

Lookup returns the offset at t and p and whether it exists.

Offsets returns these listed offsets as offsets.

ListedTransaction contains data from a list transactions response for a single transactional ID.

ListedTransactions contains information from a list transactions response.

Each calls fn for each listed transaction.

Sorted returns all transactions sorted by transactional ID.

TransactionalIDs returns a sorted list of all transactional IDs.

Metadata is the data from a metadata response.

Offset is an offset for a topic.

NewOffsetFromRecord is a helper to create an Offset for a given Record.

OffsetForLeaderEpoch contains a response for a single partition in an OffsetForLeaderEpoch request.

OffsetForLeaderEpochRequest contains topics, partitions, and leader epochs to request offsets for in an OffsetForLeaderEpoch.

Add adds a topic, partition, and leader epoch to the request.

OffsetResponse contains the response for an individual offset for offset methods.

OffsetResponses contains per-partition responses to offset methods.

Add adds an offset for a given topic/partition to this OffsetResponses map (even if it exists).

DeleteFunc deletes any offset for which fn returns true.

Each calls fn for every offset.

EachError calls fn for every offset that has a non-nil error.

Error iterates over all offsets and returns the first error encountered, if any. This can be used to check if an operation was entirely successful or not.

Note that offset operations can be partially successful. For example, some offsets could succeed in an offset commit while others fail (maybe one topic does not exist for some reason, or you are not authorized for one topic). If this is something you need to worry about, you may need to check all offsets manually.

KOffsets returns these offset responses as a kgo offset map.

Keep filters the responses to only keep the input offsets.

KeepFunc keeps only the offsets for which fn returns true.

Lookup returns the offset at t and p and whether it exists.

Offsets returns these offset responses as offsets.

Ok returns true if there are no errors. This is a shortcut for os.Error() == nil.

Partitions returns the set of unique topics and partitions in these offsets.

Sorted returns the responses sorted by topic and partition.

Offsets wraps many offsets and is the type used for offset functions.

OffsetsFromFetches returns Offsets for the final record in any partition in the fetches. This is a helper to enable committing an entire returned batch.

This function looks at only the last record per partition, assuming that the last record is the highest offset (which is the behavior returned by kgo's Poll functions). The returned offsets are one past the offset contained in the records.

OffsetsFromRecords returns offsets for all given records, using the highest offset per partition. The returned offsets are one past the offset contained in the records.

Add adds an offset for a given topic/partition to this Offsets map.

If the partition already exists, the offset is only added if:

  - the new leader epoch is higher than the old, or
  - the leader epochs are equal, but the new offset is higher than the old

If you would like to add offsets forcefully no matter what, use the Delete method before this.

AddOffset is a helper to add an offset for a given topic and partition. The leader epoch field must be -1 if you do not know the leader epoch or if you do not have an offset yet.

Delete removes any offset at topic t and partition p.

DeleteFunc calls fn for every offset, deleting the offset if fn returns true.

Each calls fn for each offset in these offsets.

KOffsets returns these offsets as a kgo offset map.

KeepFunc calls fn for every offset, keeping the offset if fn returns true.

Lookup returns the offset at t and p and whether it exists.

Sorted returns the offsets sorted by topic and partition.

Topics returns the set of topics and partitions currently used in these offsets.

OffsetsForLeaderEpochs contains responses for partitions in a OffsetForLeaderEpochRequest.

OffsetsList wraps many offsets and is a helper for building Offsets.

KOffsets returns this list as a kgo offset map.

Offsets returns this list as the non-list Offsets. All fields in each Offset must be set properly.

Partition is a partition for a topic.

PartitionDetail is the detail of a partition as returned by a metadata response. If the partition fails to load / has an error, then only the partition number itself and the Err fields will be set.

PartitionDetails contains details for partitions as returned by a metadata response.

NumReplicas returns the number of replicas for these partitions.

It is assumed that all partitions have the same number of replicas, so this simply returns the number of replicas in the first encountered partition.

Numbers returns a sorted list of all partition numbers.

Sorted returns the partitions in sorted order.

Partitions wraps many partitions.

TopicsList returns these partitions as a sorted TopicsList.

TopicsSet returns these partitions as TopicsSet.

Principal is a principal that owns or renews a delegation token. This is the same as an ACL's principal, but rather than being a single string, the type and name are split into two fields.

QuotasMatchType specifies how to match a described client quota entity.

0 means to match the name exactly: user=foo will only match components of entity type "user" and entity name "foo".

1 means to match the default of the name: entity type "user" with a default match will return the default quotas for user entities.

2 means to match any name: entity type "user" with any matching will return both names and defaults.

ResourceConfig contains the configuration values for a resource (topic, broker, broker logger).

ResourceConfigs contains the configuration values for many resources.

On calls fn for the response config if it exists, returning the config and the error returned from fn. If fn is nil, this simply returns the config.

The fn is given a copy of the config. This function returns the copy as well; any modifications within fn are modifications on the returned copy.

If the resource does not exist, this returns kerr.UnknownTopicOrPartition.

ScramMechanism is a SCRAM mechanism.

String returns either SCRAM-SHA-256, SCRAM-SHA-512, or UNKNOWN.

ShardError is a piece of a request that failed. See ShardErrors for more detail.

ShardErrors contains each individual error shard of a request.

Under the hood, some requests to Kafka need to be mapped to brokers, split, and sent to many brokers. The kgo.Client handles this all internally, but returns the individual pieces that were requested as "shards". Internally, each of these pieces can also fail, and they can all fail uniquely.

The kadm package takes one further step and hides the failing pieces into one meta error, the ShardErrors. Methods in this package that can return this meta error are documented; if desired, you can use errors.As to check and unwrap any returned ShardErrors.

If a request returns ShardErrors, it is possible that some aspects of the request were still successful. You can check ShardErrors.AllFailed as a shortcut for whether any of the response is usable or not.

Error returns an error indicating the name of the request that failed, the number of separate errors, and the first error.

Unwrap returns the underlying errors.

TopicDetail is the detail of a topic as returned by a metadata response. If the topic fails to load / has an error, then there will be no partitions.

TopicDetails contains details for topics as returned by a metadata response.

EachError calls fn for each topic that could not be loaded.

EachPartition calls fn for every partition in all topics.

Error iterates over all topic details and returns the first error encountered, if any.

FilterInternal deletes any internal topics from this set of topic details.

Has returns whether the topic details has the given topic and, if so, that the topic's load error is not an unknown topic error.

Topics returns a sorted list of all topic names.

Sorted returns all topics in sorted order.

TopicsList returns the topics and partitions as a list.

TopicsSet returns the topics and partitions as a set.

TopicID is the 16 byte underlying topic ID.

Less returns whether this ID is less than the other, byte by byte.

MarshalJSON returns the topic ID encoded as quoted base64.

String returns the topic ID encoded as base64.

TopicLag is the lag for an individual topic within a group.

type TopicPartitions struct {
	Topic      string
	Partitions []int32
}

TopicPartitions is a topic and partitions.

TopicsList is a list of topics and partitions.

Each calls fn for each topic / partition in the topics list.

EachPartitions calls fn for each topic and its partitions in the topics list.

EmptyTopics returns all topics with no partitions.

IntoSet returns this list as a set.

Topics returns all topics in this set in sorted order.

TopicsSet is a set of topics and, per topic, a set of partitions.

All methods provided for TopicsSet are safe to use on a nil (default) set.

Add adds partitions for a topic to the topics set. If no partitions are added, this still creates the topic.

Delete removes partitions from a topic from the topics set. If the topic ends up with no partitions, the topic is removed from the set.

Each calls fn for each topic / partition in the topics set.

EachPartitions calls fn for each topic and its partitions in the topics set.

EmptyTopics returns all topics with no partitions.

IntoList returns this set as a list.

Lookup returns whether the topic and partition exists.

Merge merges another topic set into this one.

MergeTopics merges topic names into this set.

Sorted returns this set as a list in topic-sorted order, with each topic having sorted partitions.

Topics returns all topics in this set in sorted order.

TxnMarkers marks the end of a partition: the producer ID / epoch doing the writing, whether this is a commit, the coordinator epoch of the broker we are writing to (for fencing), and the topics and partitions that we are writing this abort or commit for.

This is a very low level admin request and should likely be built from data in a DescribeProducers response. See KIP-664 if you are trying to use this.

TxnMarkersPartitionResponse is a response to a topic's partition within a single marker written.

TxnMarkersPartitionResponses contains per-partition responses to a WriteTxnMarkers request.

Each calls fn for each partition.

Sorted returns all partitions sorted by partition.

TxnMarkersResponse is a response for a single marker written.

TxnMarkersResponses contains per-producer-ID responses to a WriteTxnMarkers request.

Each calls fn for each marker response.

EachPartition calls fn for every partition in all topics in all marker responses.

EachTopic calls fn for every topic in all marker responses.

Sorted returns all markers sorted by producer ID.

SortedPartitions returns all marker topic partitions sorted by producer ID then topic then partition.

SortedTopics returns all marker topics sorted by producer ID then topic.

TxnMarkersTopicResponse is a response to a topic within a single marker written.

TxnMarkersTopicResponses contains per-topic responses to a WriteTxnMarkers request.

Each calls fn for each topic.

EachPartition calls fn for every partition in all topics.

Sorted returns all topics sorted by topic.

SortedPartitions returns all topics sorted by topic then partition.

UpsertSCRAM either updates or creates (inserts) a new password for a user. There are two ways to specify a password: either with the Password field directly, or by specifying both Salt and SaltedPassword. If you specify just a password, this package generates a 24 byte salt and uses pbkdf2 to create the salted password.

