When libbpf is used in “libbpf 1.0 mode”, API functions can return errors in one of two ways.
You can set “libbpf 1.0” mode with the following line:
libbpf_set_strict_mode(LIBBPF_STRICT_DIRECT_ERRS | LIBBPF_STRICT_CLEAN_PTRS);
If the function returns an error code directly, it uses 0 to indicate success and a negative error code to indicate what caused the error. In this case the error code should be checked directly from the return value; you do not need to check errno.
For example:
err = some_libbpf_api_with_error_return(...);
if (err < 0) {
        /* Handle error accordingly */
}
If the function returns a pointer, it will return NULL to indicate there was an error. In this case errno should be checked for the error code.
For example:
ptr = some_libbpf_api_returning_ptr();
if (!ptr) {
        /* note no minus sign for EINVAL and E2BIG below */
        if (errno == EINVAL) {
                /* handle EINVAL error */
        } else if (errno == E2BIG) {
                /* handle E2BIG error */
        }
}

libbpf.h
Functions
libbpf_bpf_attach_type_str() converts the provided attach type value into a textual representation.
t – The attach type.
Pointer to a static string identifying the attach type. NULL is returned for unknown bpf_attach_type values.
libbpf_bpf_link_type_str() converts the provided link type value into a textual representation.
t – The link type.
Pointer to a static string identifying the link type. NULL is returned for unknown bpf_link_type values.
libbpf_bpf_map_type_str() converts the provided map type value into a textual representation.
t – The map type.
Pointer to a static string identifying the map type. NULL is returned for unknown bpf_map_type values.
libbpf_bpf_prog_type_str() converts the provided program type value into a textual representation.
t – The program type.
Pointer to a static string identifying the program type. NULL is returned for unknown bpf_prog_type values.
libbpf_set_print() sets user-provided log callback function to be used for libbpf warnings and informational messages. If the user callback is not set, messages are logged to stderr by default. The verbosity of these messages can be controlled by setting the environment variable LIBBPF_LOG_LEVEL to either warn, info, or debug.
This function is thread-safe.
fn – The log print function. If NULL, libbpf won’t print anything.
Pointer to old print function.
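As an illustrative sketch, a custom callback matching libbpf's libbpf_print_fn_t signature could filter out debug-level messages (the function name my_print is hypothetical):

```c
#include <stdarg.h>
#include <stdio.h>
#include <bpf/libbpf.h>

/* Callback matching libbpf_print_fn_t; drops debug-level messages. */
static int my_print(enum libbpf_print_level level, const char *fmt, va_list args)
{
	if (level == LIBBPF_DEBUG)
		return 0;
	return vfprintf(stderr, fmt, args);
}

int main(void)
{
	/* The previous print function is returned and could be restored later. */
	libbpf_print_fn_t old = libbpf_set_print(my_print);
	(void)old;
	return 0;
}
```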
bpf_object__open() creates a bpf_object by opening the BPF ELF object file pointed to by the passed path and loading it into memory.
path – BPF object file path.
pointer to the new bpf_object; or NULL is returned on error, error code is stored in errno
bpf_object__open_file() creates a bpf_object by opening the BPF ELF object file pointed to by the passed path and loading it into memory.
path – BPF object file path
opts – options for how to load the bpf object, this parameter is optional and can be set to NULL
pointer to the new bpf_object; or NULL is returned on error, error code is stored in errno
bpf_object__open_mem() creates a bpf_object by reading the BPF object's raw bytes from a memory buffer containing a valid BPF ELF object file.
obj_buf – pointer to the buffer containing ELF file bytes
obj_buf_sz – number of bytes in the buffer
opts – options for how to load the bpf object
pointer to the new bpf_object; or NULL is returned on error, error code is stored in errno
bpf_object__prepare() prepares BPF object for loading: performs ELF processing, relocations, prepares final state of BPF program instructions (accessible with bpf_program__insns()), creates and (potentially) pins maps. Leaves BPF object in the state ready for program loading.
obj – Pointer to a valid BPF object instance returned by bpf_object__open*() API
0, on success; negative error code, otherwise, error code is stored in errno
bpf_object__load() loads BPF object into kernel.
obj – Pointer to a valid BPF object instance returned by bpf_object__open*() APIs
0, on success; negative error code, otherwise, error code is stored in errno
bpf_object__close() closes a BPF object and releases all resources.
obj – Pointer to a valid BPF object
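A minimal open/load/close lifecycle, assuming a compiled BPF ELF object at the placeholder path "prog.bpf.o":

```c
#include <errno.h>
#include <stdio.h>
#include <bpf/libbpf.h>

int main(void)
{
	/* "prog.bpf.o" is a placeholder path to a compiled BPF ELF object. */
	struct bpf_object *obj = bpf_object__open("prog.bpf.o");
	if (!obj) {
		fprintf(stderr, "open failed: %d\n", -errno);
		return 1;
	}

	int err = bpf_object__load(obj);
	if (err < 0) {
		fprintf(stderr, "load failed: %d\n", err);
		bpf_object__close(obj);
		return 1;
	}

	/* ... use programs and maps ... */

	bpf_object__close(obj); /* releases all kernel resources */
	return 0;
}
```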
bpf_object__pin_maps() pins each map contained within the BPF object at the passed directory.
If path is NULL, bpf_map__pin() (which is used on each map) will use the pin_path attribute of each map. In this case, maps that don’t have a pin_path set will be ignored.
obj – Pointer to a valid BPF object
path – A directory where maps should be pinned.
0, on success; negative error code, otherwise
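A short sketch of pinning every map of an already-loaded object under a BPF FS directory (the directory path and helper name are illustrative):

```c
#include <bpf/libbpf.h>

/* Sketch: pin all maps of an object under one BPF FS directory.
 * "obj" is assumed to come from a bpf_object__open*() call. */
static int pin_all_maps(struct bpf_object *obj)
{
	int err = bpf_object__pin_maps(obj, "/sys/fs/bpf/myapp");
	if (err < 0)
		return err;
	/* Passing NULL instead would use each map's pin_path attribute. */
	return 0;
}
```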
bpf_object__unpin_maps() unpins each map contained within the BPF object found in the passed directory.
If path is NULL, bpf_map__unpin() (which is used on each map) will use the pin_path attribute of each map. In this case, maps that don’t have a pin_path set will be ignored.
obj – Pointer to a valid BPF object
path – A directory where pinned maps should be searched for.
0, on success; negative error code, otherwise
bpf_object__token_fd() is an accessor for the BPF token FD associated with the BPF object.
obj – Pointer to a valid BPF object
BPF token FD or -1, if it wasn’t set
bpf_program__insns() gives read-only access to the BPF program’s underlying BPF instructions. The returned pointer is always valid and not NULL. The number of struct bpf_insn entries pointed to can be fetched using the bpf_program__insn_cnt() API.
Keep in mind, libbpf can modify and append/delete BPF program’s instructions as it processes BPF object file and prepares everything for uploading into the kernel. So depending on the point in BPF object lifetime, bpf_program__insns() can return different sets of instructions. As an example, during BPF object load phase BPF program instructions will be CO-RE-relocated, BPF subprograms instructions will be appended, ldimm64 instructions will have FDs embedded, etc. So instructions returned before bpf_object__load() and after it might be quite different.
prog – BPF program for which to return instructions
a pointer to an array of BPF instructions that belong to the specified BPF program
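As a sketch, the two APIs together allow walking a program's instructions; note that the result can differ before and after bpf_object__load():

```c
#include <stdio.h>
#include <linux/bpf.h>
#include <bpf/libbpf.h>

/* Sketch: dump the opcode of each instruction of a BPF program. */
static void dump_insns(const struct bpf_program *prog)
{
	const struct bpf_insn *insns = bpf_program__insns(prog);
	size_t cnt = bpf_program__insn_cnt(prog);

	for (size_t i = 0; i < cnt; i++)
		printf("insn %zu: opcode 0x%02x\n", i, insns[i].code);
}
```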
bpf_program__set_insns() can set BPF program’s underlying BPF instructions.
WARNING: This is a very advanced libbpf API and users need to know what they are doing. This should be used from prog_prepare_load_fn callback only.
prog – BPF program to set instructions for
new_insns – a pointer to an array of BPF instructions
new_insn_cnt – number of struct bpf_insn entries that form the specified BPF program
0, on success; negative error code, otherwise
bpf_program__insn_cnt() returns the number of struct bpf_insn entries that form the specified BPF program.
See bpf_program__insns() documentation for notes on how libbpf can change instructions and their count during different phases of bpf_object lifetime.
prog – BPF program for which to return number of BPF instructions
bpf_program__pin() pins the BPF program to a file in the BPF FS specified by a path. This increments the program’s reference count, allowing it to stay loaded after the process which loaded it has exited.
prog – BPF program to pin, must already be loaded
path – file path in a BPF file system
0, on success; negative error code, otherwise
bpf_program__unpin() unpins the BPF program from a file in the BPF FS specified by a path. This decrements the program’s reference count.
The file pinning the BPF program can also be unlinked by a different process in which case this function will return an error.
prog – BPF program to unpin
path – file path to the pin in a BPF file system
0, on success; negative error code, otherwise
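A sketch of the pin/unpin pair; the path under /sys/fs/bpf and the helper name are arbitrary examples:

```c
#include <bpf/libbpf.h>

/* Sketch: pin a loaded program so it outlives this process, then
 * unpin it again at some later point. */
static int pin_then_unpin(struct bpf_program *prog)
{
	int err = bpf_program__pin(prog, "/sys/fs/bpf/my_prog");
	if (err < 0)
		return err;

	/* ... at some later point ... */
	return bpf_program__unpin(prog, "/sys/fs/bpf/my_prog");
}
```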
bpf_link__pin() pins the BPF link to a file in the BPF FS specified by a path. This increments the link’s reference count, allowing it to stay loaded after the process which loaded it has exited.
link – BPF link to pin, must already be loaded
path – file path in a BPF file system
0, on success; negative error code, otherwise
bpf_link__unpin() unpins the BPF link from a file in the BPF FS specified by a path. This decrements the link’s reference count.
The file pinning the BPF link can also be unlinked by a different process in which case this function will return an error.
link – BPF link to unpin
path – file path to the pin in a BPF file system
0, on success; negative error code, otherwise
bpf_program__attach() is a generic function for attaching a BPF program based on auto-detection of program type, attach type, and extra parameters, where applicable.
This is supported for:
kprobe/kretprobe (depends on SEC() definition)
uprobe/uretprobe (depends on SEC() definition)
tracepoint
raw tracepoint
tracing programs (typed raw TP/fentry/fexit/fmod_ret)
prog – BPF program to attach
Reference to the newly created BPF link; or NULL is returned on error, error code is stored in errno
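A minimal sketch of the generic attach path (the wrapper name attach_one is hypothetical); the error code is read from errno since a pointer is returned:

```c
#include <errno.h>
#include <stdio.h>
#include <bpf/libbpf.h>

/* Sketch: attach based on the program's SEC() definition; works for
 * kprobes, tracepoints, fentry/fexit, etc. */
static struct bpf_link *attach_one(struct bpf_program *prog)
{
	struct bpf_link *link = bpf_program__attach(prog);
	if (!link) {
		fprintf(stderr, "attach failed: %d\n", -errno);
		return NULL;
	}
	return link;
}
```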
bpf_program__attach_uprobe_multi() attaches a BPF program to multiple uprobes with uprobe_multi link.
The user can specify two mutually exclusive sets of inputs:
1) use only path/func_pattern/pid arguments
2) use path/pid with allowed combinations of syms/offsets/ref_ctr_offsets/cookies/cnt
syms and offsets are mutually exclusive
ref_ctr_offsets and cookies are optional
prog – BPF program to attach
pid – Process ID to attach the uprobe to, 0 for self (own process), -1 for all processes
binary_path – Path to binary
func_pattern – Regular expression to specify functions to attach BPF program to
opts – Additional options (see struct bpf_uprobe_multi_opts)
Reference to the newly created BPF link; or NULL is returned on error, error code is stored in errno
bpf_program__attach_ksyscall() attaches a BPF program to the kernel syscall handler of a specified syscall. Optionally it’s possible to request to install a retprobe that will be triggered at syscall exit. It’s also possible to associate a BPF cookie (through options).
Libbpf will automatically determine the correct full kernel function name, which depending on system architecture and kernel version/configuration could be of the form __<arch>_sys_<syscall> or __se_sys_<syscall>, and will attach the specified program using the kprobe/kretprobe mechanism.
bpf_program__attach_ksyscall() is an API counterpart of declarative SEC(“ksyscall/<syscall>”) annotation of BPF programs.
At the moment SEC(“ksyscall”) and bpf_program__attach_ksyscall() do not handle all the calling convention quirks for mmap(), clone() and compat syscalls. It also only attaches to “native” syscall interfaces. If host system supports compat syscalls or defines 32-bit syscalls in 64-bit kernel, such syscall interfaces won’t be attached to by libbpf.
These limitations may or may not change in the future. Therefore it is recommended to use SEC(“kprobe”) for these syscalls or when working with compat and 32-bit interfaces is required.
prog – BPF program to attach
syscall_name – Symbolic name of the syscall (e.g., “bpf”)
opts – Additional options (see struct bpf_ksyscall_opts)
Reference to the newly created BPF link; or NULL is returned on error, error code is stored in errno
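A sketch of attaching to a syscall's exit with a BPF cookie via bpf_ksyscall_opts (the cookie value and wrapper name are illustrative):

```c
#include <bpf/libbpf.h>

/* Sketch: attach to the exit of the bpf() syscall handler and
 * associate a BPF cookie through options. */
static struct bpf_link *attach_bpf_syscall_exit(struct bpf_program *prog)
{
	LIBBPF_OPTS(bpf_ksyscall_opts, opts,
		.retprobe = true,
		.bpf_cookie = 0x1234,
	);

	return bpf_program__attach_ksyscall(prog, "bpf", &opts);
}
```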
bpf_program__attach_uprobe() attaches a BPF program to the userspace function which is found by binary path and offset. You can optionally specify a particular process to attach to. You can also optionally attach the program to the function exit instead of entry.
prog – BPF program to attach
retprobe – Attach to function exit
pid – Process ID to attach the uprobe to, 0 for self (own process), -1 for all processes
binary_path – Path to binary that contains the function symbol
func_offset – Offset within the binary of the function symbol
Reference to the newly created BPF link; or NULL is returned on error, error code is stored in errno
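For illustration, the positional arguments map as follows (the binary path and offset are placeholders, not real values):

```c
#include <bpf/libbpf.h>

/* Sketch: attach a uprobe to a function at a known offset inside a
 * binary, across all processes. Path and offset are illustrative. */
static struct bpf_link *attach_my_uprobe(struct bpf_program *prog)
{
	return bpf_program__attach_uprobe(prog,
					  false,            /* entry, not exit */
					  -1,               /* all processes */
					  "/usr/bin/myapp", /* binary_path */
					  0x1234);          /* func_offset */
}
```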
bpf_program__attach_uprobe_opts() is just like bpf_program__attach_uprobe() except with an options struct for various configurations.
prog – BPF program to attach
pid – Process ID to attach the uprobe to, 0 for self (own process), -1 for all processes
binary_path – Path to binary that contains the function symbol
func_offset – Offset within the binary of the function symbol
opts – Options for altering program attachment
Reference to the newly created BPF link; or NULL is returned on error, error code is stored in errno
bpf_program__attach_usdt() is just like bpf_program__attach_uprobe_opts() except it covers USDT (User-space Statically Defined Tracepoint) attachment, instead of attaching to user-space function entry or exit.
prog – BPF program to attach
pid – Process ID to attach the uprobe to, 0 for self (own process), -1 for all processes
binary_path – Path to binary that contains provided USDT probe
usdt_provider – USDT provider name
usdt_name – USDT probe name
opts – Options for altering program attachment
Reference to the newly created BPF link; or NULL is returned on error, error code is stored in errno
bpf_program__set_type() sets the program type of the passed BPF program.
This must be called before the BPF object is loaded, otherwise it has no effect and an error is returned.
prog – BPF program to set the program type for
type – program type to set for the BPF program
error code; or 0 if no error. An error occurs if the object is already loaded.
bpf_program__set_expected_attach_type() sets the attach type of the passed BPF program. This is used for auto-detection of attachment when programs are loaded.
This must be called before the BPF object is loaded, otherwise it has no effect and an error is returned.
prog – BPF program to set the attach type for
type – attach type to set for the BPF program
error code; or 0 if no error. An error occurs if the object is already loaded.
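A sketch showing that both setters must run after bpf_object__open*() but before bpf_object__load() (the chosen type/attach-type pair is just one valid example):

```c
#include <bpf/libbpf.h>

/* Sketch: override program type and expected attach type between
 * open and load; both calls fail once the object is loaded. */
static int force_cgroup_skb(struct bpf_program *prog)
{
	int err = bpf_program__set_type(prog, BPF_PROG_TYPE_CGROUP_SKB);
	if (err < 0)
		return err;
	return bpf_program__set_expected_attach_type(prog,
						     BPF_CGROUP_INET_INGRESS);
}
```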
bpf_program__set_attach_target() sets BTF-based attach target for supported BPF program types:
BTF-aware raw tracepoints (tp_btf);
fentry/fexit/fmod_ret;
lsm;
freplace.
prog – BPF program to set the attach target for
attach_prog_fd – FD of the target BPF program to replace (for freplace programs), or zero when attaching to a kernel function or LSM hook
attach_func_name – name of the kernel function, BTF-based raw tracepoint, LSM hook, or target program function to attach to
error code; or 0 if no error occurred.
bpf_object__find_map_by_name() returns the BPF map of the given name, if it exists within the passed BPF object.
obj – BPF object
name – name of the BPF map
BPF map instance, if such map exists within the BPF object; or NULL otherwise.
bpf_map__set_autocreate() sets whether libbpf has to auto-create BPF map during BPF object load phase.
bpf_map__set_autocreate() allows opting out of libbpf auto-creating a BPF map. By default, libbpf will attempt to create every single BPF map defined in the BPF object file using the BPF_MAP_CREATE command of the bpf() syscall and fill in the map FD in BPF instructions.
This API allows opting out of this process for a specific map instance. This can be useful if the host kernel doesn’t support such a BPF map type or the used combination of flags, and the user application wants to avoid creating such a map in the first place. The user is still responsible for making sure that their BPF-side code that expects to use such a missing BPF map is recognized by the BPF verifier as dead code, otherwise the BPF verifier will reject such a BPF program.
map – the BPF map instance
autocreate – whether to create BPF map during BPF object load
0 on success; -EBUSY if BPF object was already loaded
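A sketch of disabling creation of a map the host kernel may not support; the map name "optional_map" is a hypothetical name from the BPF object:

```c
#include <errno.h>
#include <bpf/libbpf.h>

/* Sketch: skip creation of an optional map; must happen between
 * bpf_object__open*() and bpf_object__load(). */
static int disable_optional_map(struct bpf_object *obj)
{
	/* "optional_map" is a hypothetical map name. */
	struct bpf_map *map = bpf_object__find_map_by_name(obj, "optional_map");
	if (!map)
		return -ENOENT;
	return bpf_map__set_autocreate(map, false);
}
```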
bpf_map__set_autoattach() sets whether libbpf has to auto-attach map during BPF skeleton attach phase.
map – the BPF map instance
autoattach – whether to attach map during BPF skeleton attach phase
0 on success; negative error code, otherwise
bpf_map__autoattach() returns whether BPF map is configured to auto-attach during BPF skeleton attach phase.
map – the BPF map instance
true if map is set to auto-attach during skeleton attach phase; false, otherwise
bpf_map__fd() gets the file descriptor of the passed BPF map
map – the BPF map instance
the file descriptor; or -EINVAL in case of an error
bpf_map__set_value_size() sets map value size.
There is a special case for maps with associated memory-mapped regions, like the global data section maps (bss, data, rodata). When this function is used on such a map, the mapped region is resized. Afterward, an attempt is made to adjust the corresponding BTF info. This attempt is best-effort and can only succeed if the last variable of the data section map is an array. The array BTF type is replaced by a new BTF array type with a different length. Any previously existing pointers returned from bpf_map__initial_value() or corresponding data section skeleton pointer must be reinitialized.
map – the BPF map instance
0, on success; negative error, otherwise
bpf_map__is_internal() tells the caller whether or not the passed map is a special map created by libbpf automatically for things like global variables, __ksym externs, Kconfig values, etc.
map – the bpf_map
true, if the map is an internal map; false, otherwise
bpf_map__set_pin_path() sets the path attribute that tells where the BPF map should be pinned. This does not actually create the ‘pin’.
map – The bpf_map
path – The path
0, on success; negative error, otherwise
bpf_map__pin_path() gets the path attribute that tells where the BPF map should be pinned.
map – The bpf_map
The path string; which can be NULL
bpf_map__is_pinned() tells the caller whether or not the passed map has been pinned via a ‘pin’ file.
map – The bpf_map
true, if the map is pinned; false, otherwise
bpf_map__pin() creates a file that serves as a ‘pin’ for the BPF map. This increments the reference count on the BPF map which will keep the BPF map loaded even after the userspace process which loaded it has exited.
If path is NULL, the map’s pin_path attribute will be used. If this is also NULL, an error will be returned and the map will not be pinned.
map – The bpf_map to pin
path – A file path for the ‘pin’
0, on success; negative error, otherwise
bpf_map__unpin() removes the file that serves as a ‘pin’ for the BPF map.
The path parameter can be NULL, in which case the pin_path map attribute is unpinned. If both the path parameter and the pin_path map attribute are set, they must be equal.
map – The bpf_map to unpin
path – A file path for the ‘pin’
0, on success; negative error, otherwise
bpf_map__lookup_elem() looks up the BPF map value corresponding to the provided key.
bpf_map__lookup_elem() is high-level equivalent of bpf_map_lookup_elem() API with added check for key and value size.
map – BPF map to lookup element in
key – pointer to memory containing bytes of the key used for lookup
key_sz – size in bytes of key data, needs to match BPF map definition’s key_size
value – pointer to memory in which looked up value will be stored
value_sz – size in bytes of value data memory; it has to match BPF map definition’s value_size. For per-CPU BPF maps value size has to be a product of BPF map value size and number of possible CPUs in the system (could be fetched with libbpf_num_possible_cpus()). Note also that for per-CPU values value size has to be aligned up to closest 8 bytes for alignment reasons, so expected size is: round_up(value_size, 8)
0, on success; negative error, otherwise
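A sketch of a size-checked lookup; the key and value types here assume a map defined with a 4-byte key and 8-byte value:

```c
#include <bpf/libbpf.h>

/* Sketch: size-checked lookup; key/value types must match the map
 * definition in the BPF object. */
static int lookup_counter(struct bpf_map *map, __u32 key, __u64 *value)
{
	return bpf_map__lookup_elem(map, &key, sizeof(key),
				    value, sizeof(*value), 0 /* flags */);
}
```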
bpf_map__update_elem() inserts or updates the value in the BPF map that corresponds to the provided key.
bpf_map__update_elem() is high-level equivalent of bpf_map_update_elem() API with added check for key and value size.
map – BPF map to insert to or update element in
key – pointer to memory containing bytes of the key
key_sz – size in bytes of key data, needs to match BPF map definition’s key_size
value – pointer to memory containing bytes of the value
value_sz – size in bytes of value data memory; it has to match BPF map definition’s value_size. For per-CPU BPF maps value size has to be a product of BPF map value size and number of possible CPUs in the system (could be fetched with libbpf_num_possible_cpus()). Note also that for per-CPU values value size has to be aligned up to closest 8 bytes for alignment reasons, so expected size is: round_up(value_size, 8)
0, on success; negative error, otherwise
bpf_map__delete_elem() deletes the element in the BPF map that corresponds to the provided key.
bpf_map__delete_elem() is high-level equivalent of bpf_map_delete_elem() API with added check for key size.
map – BPF map to delete element from
key – pointer to memory containing bytes of the key
key_sz – size in bytes of key data, needs to match BPF map definition’s key_size
flags – extra flags passed to kernel for this operation
0, on success; negative error, otherwise
bpf_map__lookup_and_delete_elem() looks up the BPF map value corresponding to the provided key and atomically deletes it afterwards.
bpf_map__lookup_and_delete_elem() is high-level equivalent of bpf_map_lookup_and_delete_elem() API with added check for key and value size.
map – BPF map to lookup element in
key – pointer to memory containing bytes of the key used for lookup
key_sz – size in bytes of key data, needs to match BPF map definition’s key_size
value – pointer to memory in which looked up value will be stored
value_sz – size in bytes of value data memory; it has to match BPF map definition’s value_size. For per-CPU BPF maps value size has to be a product of BPF map value size and number of possible CPUs in the system (could be fetched with libbpf_num_possible_cpus()). Note also that for per-CPU values value size has to be aligned up to closest 8 bytes for alignment reasons, so expected size is: round_up(value_size, 8)
0, on success; negative error, otherwise
bpf_map__get_next_key() iterates BPF map keys by fetching the next key that follows the current key.
bpf_map__get_next_key() is high-level equivalent of bpf_map_get_next_key() API with added check for key size.
map – BPF map to fetch next key from
cur_key – pointer to memory containing bytes of current key or NULL to fetch the first key
next_key – pointer to memory to write next key into
key_sz – size in bytes of key data, needs to match BPF map definition’s key_size
0, on success; -ENOENT if cur_key is the last key in BPF map; negative error, otherwise
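A sketch of a full key iteration, assuming a map with 4-byte keys; -ENOENT marks the end of the map rather than a failure:

```c
#include <errno.h>
#include <bpf/libbpf.h>

/* Sketch: walk all keys of a map with __u32 keys and count them. */
static int count_keys(struct bpf_map *map)
{
	__u32 cur, next;
	int cnt = 0;

	/* NULL cur_key fetches the very first key. */
	int err = bpf_map__get_next_key(map, NULL, &next, sizeof(next));
	while (!err) {
		cnt++;
		cur = next;
		err = bpf_map__get_next_key(map, &cur, &next, sizeof(next));
	}
	return err == -ENOENT ? cnt : err; /* -ENOENT means iteration done */
}
```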
ring_buffer__ring() returns the ringbuffer object inside a given ringbuffer manager representing a single BPF_MAP_TYPE_RINGBUF map instance.
rb – A ringbuffer manager object.
idx – An index into the ringbuffers contained within the ringbuffer manager object. The index is 0-based and corresponds to the order in which ring_buffer__add was called.
A ringbuffer object on success; NULL and errno set if the index is invalid.
ring__consumer_pos() returns the current consumer position in the given ringbuffer.
r – A ringbuffer object.
The current consumer position.
ring__producer_pos() returns the current producer position in the given ringbuffer.
r – A ringbuffer object.
The current producer position.
ring__avail_data_size() returns the number of bytes in the ringbuffer not yet consumed. This has no locking associated with it, so it can be inaccurate if operations are ongoing while this is called. However, it should still show the correct trend over the long-term.
r – A ringbuffer object.
The number of bytes not yet consumed.
ring__size() returns the total size of the ringbuffer’s map data area (excluding special producer/consumer pages). Effectively this gives the amount of usable bytes of data inside the ringbuffer.
r – A ringbuffer object.
The total size of the ringbuffer map data area.
ring__map_fd() returns the file descriptor underlying the given ringbuffer.
r – A ringbuffer object.
The underlying ringbuffer file descriptor
ring__consume() consumes available ringbuffer data without event polling.
r – A ringbuffer object.
The number of records consumed (or INT_MAX, whichever is less), or a negative number if any of the callbacks return an error.
ring__consume_n() consumes up to a requested amount of items from a ringbuffer without event polling.
r – A ringbuffer object.
n – Maximum amount of items to consume.
The number of items consumed, or a negative number if any of the callbacks return an error.
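The accessors above can be combined with a ring buffer manager; this sketch creates a manager for a single BPF_MAP_TYPE_RINGBUF map and inspects the ring at index 0 (callback and wrapper names are illustrative):

```c
#include <errno.h>
#include <stdio.h>
#include <bpf/libbpf.h>

static int handle_event(void *ctx, void *data, size_t size)
{
	/* process one record; return negative to abort consumption */
	return 0;
}

/* Sketch: one-map manager, so ring_buffer__ring(rb, 0) is valid. */
static int poll_ringbuf(int map_fd)
{
	struct ring_buffer *rb = ring_buffer__new(map_fd, handle_event,
						  NULL, NULL);
	if (!rb)
		return -errno;

	struct ring *r = ring_buffer__ring(rb, 0);
	if (r)
		printf("unconsumed bytes: %zu\n", ring__avail_data_size(r));

	int err = ring_buffer__poll(rb, 100 /* timeout, ms */);
	ring_buffer__free(rb);
	return err;
}
```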
user_ring_buffer__new() creates a new instance of a user ring buffer.
map_fd – A file descriptor to a BPF_MAP_TYPE_USER_RINGBUF map.
opts – Options for how the ring buffer should be created.
A user ring buffer on success; NULL and errno being set on a failure.
user_ring_buffer__reserve() reserves a pointer to a sample in the user ring buffer.
This function is not thread safe, and callers must synchronize accessing this function if there are multiple producers. If a size is requested that is larger than the size of the entire ring buffer, errno will be set to E2BIG and NULL is returned. If the ring buffer could accommodate the size, but currently does not have enough space, errno is set to ENOSPC and NULL is returned.
After initializing the sample, callers must invoke user_ring_buffer__submit() to post the sample to the kernel. Otherwise, the sample must be freed with user_ring_buffer__discard().
rb – A pointer to a user ring buffer.
size – The size of the sample, in bytes.
A pointer to an 8-byte aligned reserved region of the user ring buffer; NULL, and errno being set if a sample could not be reserved.
user_ring_buffer__reserve_blocking() reserves a record in the ring buffer, possibly blocking for up to @timeout_ms until a sample becomes available.
This function is not thread safe, and callers must synchronize accessing this function if there are multiple producers.
If timeout_ms is -1, the function will block indefinitely until a sample becomes available. Otherwise, timeout_ms must be non-negative, or errno is set to EINVAL, and NULL is returned. If timeout_ms is 0, no blocking will occur and the function will return immediately after attempting to reserve a sample.
If size is larger than the size of the entire ring buffer, errno is set to E2BIG and NULL is returned. If the ring buffer could accommodate size, but currently does not have enough space, the caller will block until at most timeout_ms has elapsed. If insufficient space is available at that time, errno is set to ENOSPC, and NULL is returned.
The kernel guarantees that it will wake up this thread to check if sufficient space is available in the ring buffer at least once per invocation of the bpf_ringbuf_drain() helper function, provided that at least one sample is consumed, and the BPF program did not invoke the function with BPF_RB_NO_WAKEUP. A wakeup may occur sooner than that, but the kernel does not guarantee this. If the helper function is invoked with BPF_RB_FORCE_WAKEUP, a wakeup event will be sent even if no sample is consumed.
When a sample of size size is found within timeout_ms, a pointer to the sample is returned. After initializing the sample, callers must invoke user_ring_buffer__submit() to post the sample to the ring buffer. Otherwise, the sample must be freed with user_ring_buffer__discard().
rb – The user ring buffer.
size – The size of the sample, in bytes.
timeout_ms – The amount of time, in milliseconds, for which the caller should block when waiting for a sample. -1 causes the caller to block indefinitely.
A pointer to an 8-byte aligned reserved region of the user ring buffer; NULL, and errno being set if a sample could not be reserved.
user_ring_buffer__submit() submits a previously reserved sample into the ring buffer.
It is not necessary to synchronize amongst multiple producers when invoking this function.
rb – The user ring buffer.
sample – A reserved sample.
user_ring_buffer__discard() discards a previously reserved sample.
It is not necessary to synchronize amongst multiple producers when invoking this function.
rb – The user ring buffer.
sample – A reserved sample.
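A sketch of the reserve/fill/submit sequence; only the reservation step needs synchronization among producers:

```c
#include <errno.h>
#include <string.h>
#include <bpf/libbpf.h>

/* Sketch: reserve a sample, fill it, and either submit or discard it. */
static int send_sample(struct user_ring_buffer *rb,
		       const void *data, __u32 size)
{
	void *sample = user_ring_buffer__reserve(rb, size);
	if (!sample)
		return -errno; /* E2BIG or ENOSPC, see above */

	memcpy(sample, data, size);
	user_ring_buffer__submit(rb, sample); /* or __discard() to drop it */
	return 0;
}
```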
user_ring_buffer__free() frees a ring buffer that was previously created with user_ring_buffer__new().
rb – The user ring buffer being freed.
perf_buffer__new() creates BPF perfbuf manager for a specified BPF_PERF_EVENT_ARRAY map
map_fd – FD of BPF_PERF_EVENT_ARRAY BPF map that will be used by BPF code to send data over to user-space
page_cnt – number of memory pages allocated for each per-CPU buffer
sample_cb – function called on each received data record
lost_cb – function called when record loss has occurred
ctx – user-provided extra context passed into sample_cb and lost_cb
a new instance of struct perf_buffer on success, NULL on error with errno containing an error code
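A sketch of creating and draining a perfbuf manager; the page count and callback names are illustrative:

```c
#include <errno.h>
#include <bpf/libbpf.h>

static void on_sample(void *ctx, int cpu, void *data, __u32 size)
{
	/* one data record received from the given CPU */
}

static void on_lost(void *ctx, int cpu, __u64 cnt)
{
	/* cnt records were dropped on this CPU */
}

/* Sketch: 8 memory pages per per-CPU buffer; poll until an error. */
static int consume_perfbuf(int map_fd)
{
	struct perf_buffer *pb = perf_buffer__new(map_fd, 8, on_sample,
						  on_lost, NULL, NULL);
	if (!pb)
		return -errno;

	int err;
	while ((err = perf_buffer__poll(pb, 100 /* ms */)) >= 0)
		;
	perf_buffer__free(pb);
	return err;
}
```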
perf_buffer__buffer() returns the per-CPU raw mmap()’ed underlying memory region of the ring buffer. This ring buffer can be used to implement a custom events consumer. The ring buffer starts with the struct perf_event_mmap_page, which holds the ring buffer management fields. When accessing the header structure it’s important to be SMP aware. You can refer to perf_event_read_simple for a simple example.
pb – the perf buffer structure
buf_idx – the buffer index to retrieve
buf – (out) gets the base pointer of the mmap()’ed memory
buf_size – (out) gets the size of the mmap()’ed region
0 on success, negative error code for failure
libbpf_probe_bpf_prog_type() detects if host kernel supports BPF programs of a given type.
Make sure the process has required set of CAP_* permissions (or runs as root) when performing feature checking.
prog_type – BPF program type to detect kernel support for
opts – reserved for future extensibility, should be NULL
1, if given program type is supported; 0, if given program type is not supported; negative error code if feature detection failed or can’t be performed
libbpf_probe_bpf_map_type() detects if host kernel supports BPF maps of a given type.
Make sure the process has required set of CAP_* permissions (or runs as root) when performing feature checking.
map_type – BPF map type to detect kernel support for
opts – reserved for future extensibility, should be NULL
1, if given map type is supported; 0, if given map type is not supported; negative error code if feature detection failed or can’t be performed
libbpf_probe_bpf_helper() detects if host kernel supports the use of a given BPF helper from specified BPF program type.
Make sure the process has required set of CAP_* permissions (or runs as root) when performing feature checking.
prog_type – BPF program type used to check the support of BPF helper
helper_id – BPF helper ID (enum bpf_func_id) to check support for
opts – reserved for future extensibility, should be NULL
1, if given combination of program type and helper is supported; 0, if the combination is not supported; negative error code if feature detection for provided input arguments failed or can’t be performed
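A sketch of feature-probing before load; note the tri-state return (1/0/negative) discussed above:

```c
#include <stdio.h>
#include <bpf/libbpf.h>

/* Sketch: probe kernel support; needs sufficient CAP_* permissions
 * (or root) to give reliable answers. */
static void probe_features(void)
{
	int ret = libbpf_probe_bpf_prog_type(BPF_PROG_TYPE_KPROBE, NULL);
	printf("kprobe programs: %s\n",
	       ret == 1 ? "supported" :
	       ret == 0 ? "unsupported" : "probe failed");

	ret = libbpf_probe_bpf_map_type(BPF_MAP_TYPE_RINGBUF, NULL);
	printf("ringbuf maps: %s\n",
	       ret == 1 ? "supported" :
	       ret == 0 ? "unsupported" : "probe failed");
}
```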
libbpf_num_possible_cpus() is a helper function to get the number of possible CPUs that the host kernel supports and expects.
Example usage:
int ncpus = libbpf_num_possible_cpus();
if (ncpus < 0) {
        // error handling
}
long values[ncpus];
bpf_map_lookup_elem(per_cpu_map_fd, key, values);
number of possible CPUs; or error code on failure
libbpf_register_prog_handler() registers a custom BPF program SEC() handler.
sec defines which SEC() definitions are handled by this custom handler registration. sec can have a few different forms:
if sec is just a plain string (e.g., “abc”), it will match only SEC(“abc”). If BPF program specifies SEC(“abc/whatever”) it will result in an error;
if sec is of the form “abc/”, proper SEC() form is SEC(“abc/something”), where acceptable “something” should be checked by prog_init_fn callback, if there are additional restrictions;
if sec is of the form “abc+”, it will successfully match both SEC(“abc”) and SEC(“abc/whatever”) forms;
if sec is NULL, custom handler is registered for any BPF program that doesn’t match any of the registered (custom or libbpf’s own) SEC() handlers. There could be only one such generic custom handler registered at any given time.
All custom handlers (except the one with sec == NULL) are processed before libbpf’s own SEC() handlers. It is allowed to “override” libbpf’s SEC() handlers by registering custom ones for the same section prefix (i.e., it’s possible to have custom SEC(“perf_event/LLC-load-misses”) handler).
Note, like many global libbpf APIs (e.g., libbpf_set_print(), libbpf_set_strict_mode(), etc.), these APIs are not thread-safe. Users need to ensure synchronization if there is a risk of running this API from multiple threads simultaneously.
sec – section prefix for which custom handler is registered
prog_type – BPF program type associated with specified section
exp_attach_type – Expected BPF attach type associated with specified section
opts – optional cookie, callbacks, and other extra options
Non-negative handler ID is returned on success. This handler ID has to be passed to libbpf_unregister_prog_handler() to unregister such custom handler. Negative error code is returned on error.
libbpf_unregister_prog_handler() unregisters previously registered custom BPF program SEC() handler.
Note that, like most global libbpf APIs (e.g., libbpf_set_print(), libbpf_set_strict_mode(), etc.), these APIs are not thread-safe. The user needs to ensure synchronization if there is a risk of running this API from multiple threads simultaneously.
handler_id – handler ID returned by libbpf_register_prog_handler() after successful registration
0 on success; negative error code if the handler isn’t found
Defines
for ((pos) = bpf_object__next_program((obj), NULL); \
(pos) != NULL; \
(pos) = bpf_object__next_program((obj), (pos)))
for ((pos) = bpf_object__next_map((obj), NULL); \
(pos) != NULL; \
(pos) = bpf_object__next_map((obj), (pos)))
Enums
Values:
Values:
enum probe_attach_mode - the mode used to attach a kprobe/uprobe
forces libbpf to attach the kprobe/uprobe in the specified mode; -ENOTSUP will be returned if the mode is not supported by the kernel.
Values:
Values:
Values:
Values:
Values:
Functions
bpf_map_delete_batch() allows for batch deletion of multiple elements in a BPF map.
fd – BPF map file descriptor
keys – pointer to an array of count keys
count – input and output parameter; on input, count represents the number of elements in the map to delete in batch; on output, if a non-EFAULT error is returned, count represents the number of elements that were deleted before the error occurred (the output count value may be smaller than the input count value). If EFAULT is returned, count should not be trusted to be correct.
opts – options for configuring the way the batch deletion works
0, on success; negative error code, otherwise (errno is also set to the error code)
bpf_map_lookup_batch() allows for batch lookup of BPF map elements.
The parameter in_batch is the address of the first element in the batch to read. out_batch is an output parameter that should be passed as in_batch to subsequent calls to bpf_map_lookup_batch(). NULL can be passed for in_batch to indicate that the batched lookup starts from the beginning of the map. Both in_batch and out_batch must point to memory large enough to hold a single key, except for maps of type BPF_MAP_TYPE_{HASH, PERCPU_HASH, LRU_HASH, LRU_PERCPU_HASH}, for which the memory size must be at least 4 bytes wide regardless of key size.
The keys and values are output parameters which must point to memory large enough to hold count items based on the key and value size of the map map_fd. The keys buffer must be of key_size * count. The values buffer must be of value_size * count.
fd – BPF map file descriptor
in_batch – address of the first element in batch to read, can pass NULL to indicate that the batched lookup starts from the beginning of the map.
out_batch – output parameter that should be passed to next call as in_batch
keys – pointer to an array large enough for count keys
values – pointer to an array large enough for count values
count – input and output parameter; on input it’s the number of elements in the map to read in batch; on output it’s the number of elements that were successfully read. If a non-EFAULT error is returned, count will be set as the number of elements that were read before the error occurred. If EFAULT is returned, count should not be trusted to be correct.
opts – options for configuring the way the batch lookup works
0, on success; negative error code, otherwise (errno is also set to the error code)
bpf_map_lookup_and_delete_batch() allows for batch lookup and deletion of BPF map elements where each element is deleted after being retrieved.
fd – BPF map file descriptor
in_batch – address of the first element in batch to read, can pass NULL to get address of the first element in out_batch. If not NULL, must be large enough to hold a key. For BPF_MAP_TYPE_{HASH, PERCPU_HASH, LRU_HASH, LRU_PERCPU_HASH}, the memory size must be at least 4 bytes wide regardless of key size.
out_batch – output parameter that should be passed to next call as in_batch
keys – pointer to an array of count keys
values – pointer to an array large enough for count values
count – input and output parameter; on input it’s the number of elements in the map to read and delete in batch; on output it represents the number of elements that were successfully read and deleted. If a non-EFAULT error code is returned and the output count value is not equal to the input count value, up to count elements may have been deleted. If EFAULT is returned, up to count elements may have been deleted without being returned via the keys and values output parameters.
opts – options for configuring the way the batch lookup and delete works
0, on success; negative error code, otherwise (errno is also set to the error code)
bpf_map_update_batch() updates multiple elements in a map by specifying keys and their corresponding values.
The keys and values parameters must point to memory large enough to hold count items based on the key and value size of the map.
The opts parameter can be used to control how bpf_map_update_batch() should handle keys that either do or do not already exist in the map. In particular the flags parameter of bpf_map_batch_opts can be one of the following:
Note that count is an input and output parameter, where on output it represents how many elements were successfully updated. Also note that if EFAULT is returned, count should not be trusted to be correct.
BPF_ANY Create new elements or update existing.
BPF_NOEXIST Create new elements only if they do not exist.
BPF_EXIST Update existing elements.
BPF_F_LOCK Update spin_lock-ed map elements. This must be specified if the map value contains a spinlock.
fd – BPF map file descriptor
keys – pointer to an array of count keys
values – pointer to an array of count values
count – input and output parameter; on input it’s the number of elements in the map to update in batch; on output, if a non-EFAULT error is returned, count represents the number of elements that were updated before the error occurred (the output count value may be smaller than the input count value). If EFAULT is returned, count should not be trusted to be correct.
opts – options for configuring the way the batch update works
0, on success; negative error code, otherwise (errno is also set to the error code)
bpf_prog_attach_opts() attaches the BPF program corresponding to prog_fd to a target which can represent a file descriptor or netdevice ifindex.
prog_fd – BPF program file descriptor
target – attach location file descriptor or ifindex
type – attach type for the BPF program
opts – options for configuring the attachment
0, on success; negative error code, otherwise (errno is also set to the error code)
bpf_prog_detach_opts() detaches the BPF program corresponding to prog_fd from a target which can represent a file descriptor or netdevice ifindex.
prog_fd – BPF program file descriptor
target – detach location file descriptor or ifindex
type – detach type for the BPF program
opts – options for configuring the detachment
0, on success; negative error code, otherwise (errno is also set to the error code)
bpf_prog_get_info_by_fd() obtains information about the BPF program corresponding to prog_fd.
Populates up to info_len bytes of info and updates info_len with the actual number of bytes written to info. Note that info should be zero-initialized or initialized as expected by the requested info type. Failing to (zero-)initialize info under certain circumstances can result in this helper returning an error.
prog_fd – BPF program file descriptor
info – pointer to struct bpf_prog_info that will be populated with BPF program information
info_len – pointer to the size of info; on success updated with the number of bytes written to info
0, on success; negative error code, otherwise (errno is also set to the error code)
bpf_map_get_info_by_fd() obtains information about the BPF map corresponding to map_fd.
Populates up to info_len bytes of info and updates info_len with the actual number of bytes written to info. Note that info should be zero-initialized or initialized as expected by the requested info type. Failing to (zero-)initialize info under certain circumstances can result in this helper returning an error.
map_fd – BPF map file descriptor
info – pointer to struct bpf_map_info that will be populated with BPF map information
info_len – pointer to the size of info; on success updated with the number of bytes written to info
0, on success; negative error code, otherwise (errno is also set to the error code)
bpf_btf_get_info_by_fd() obtains information about the BTF object corresponding to btf_fd.
Populates up to info_len bytes of info and updates info_len with the actual number of bytes written to info. Note that info should be zero-initialized or initialized as expected by the requested info type. Failing to (zero-)initialize info under certain circumstances can result in this helper returning an error.
btf_fd – BTF object file descriptor
info – pointer to struct bpf_btf_info that will be populated with BTF object information
info_len – pointer to the size of info; on success updated with the number of bytes written to info
0, on success; negative error code, otherwise (errno is also set to the error code)
bpf_link_get_info_by_fd() obtains information about the BPF link corresponding to link_fd.
Populates up to info_len bytes of info and updates info_len with the actual number of bytes written to info. Note that info should be zero-initialized or initialized as expected by the requested info type. Failing to (zero-)initialize info under certain circumstances can result in this helper returning an error.
link_fd – BPF link file descriptor
info – pointer to struct bpf_link_info that will be populated with BPF link information
info_len – pointer to the size of info; on success updated with the number of bytes written to info
0, on success; negative error code, otherwise (errno is also set to the error code)
bpf_prog_query_opts() queries the BPF programs and BPF links which are attached to target which can represent a file descriptor or netdevice ifindex.
target – query location file descriptor or ifindex
type – attach type for the BPF program
opts – options for configuring the query
0, on success; negative error code, otherwise (errno is also set to the error code)
bpf_token_create() creates a new instance of a BPF token derived from the specified BPF FS mount point.
A BPF token created with this API can be passed to the bpf() syscall for commands like BPF_PROG_LOAD, BPF_MAP_CREATE, etc.
bpffs_fd – FD for BPF FS instance from which to derive a BPF token instance.
opts – optional BPF token creation options, can be NULL
BPF token FD > 0, on success; negative error code, otherwise (errno is also set to the error code)
bpf_prog_stream_read() reads data from the BPF stream of a given BPF program.
prog_fd – FD for the BPF program whose BPF stream is to be read.
stream_id – ID of the BPF stream to be read.
buf – Buffer to read data into from the BPF stream.
buf_len – Maximum number of bytes to read from the BPF stream.
opts – optional options, can be NULL
The number of bytes read, on success; negative error code, otherwise (errno is also set to the error code)
Defines
Functions
btf__free() frees all data of a BTF object
btf – BTF object to free
btf__new() creates a new instance of a BTF object from the raw bytes of an ELF’s BTF section
On error, an error-code-encoded-as-pointer is returned, not NULL. To extract the error code from such a pointer, libbpf_get_error() should be used. If libbpf_set_strict_mode(LIBBPF_STRICT_CLEAN_PTRS) is enabled, NULL is returned on error instead. In both cases the thread-local errno variable is always set to the error code as well.
data – raw bytes
size – number of bytes passed in data
new BTF object instance which has to be eventually freed with btf__free()
btf__new_split() creates a new instance of a BTF object from the provided raw data bytes. It takes another BTF instance, base_btf, which serves as a base BTF that is extended by the types in the newly created BTF instance.
If base_btf is NULL, btf__new_split() is equivalent to btf__new() and creates non-split BTF.
On error, an error-code-encoded-as-pointer is returned, not NULL. To extract the error code from such a pointer, libbpf_get_error() should be used. If libbpf_set_strict_mode(LIBBPF_STRICT_CLEAN_PTRS) is enabled, NULL is returned on error instead. In both cases the thread-local errno variable is always set to the error code as well.
data – raw bytes
size – length of raw bytes
base_btf – the base BTF object
new BTF object instance which has to be eventually freed with btf__free()
btf__new_empty() creates an empty BTF object. Use the btf__add_*() APIs to populate such a BTF object.
On error, an error-code-encoded-as-pointer is returned, not NULL. To extract the error code from such a pointer, libbpf_get_error() should be used. If libbpf_set_strict_mode(LIBBPF_STRICT_CLEAN_PTRS) is enabled, NULL is returned on error instead. In both cases the thread-local errno variable is always set to the error code as well.
new BTF object instance which has to be eventually freed with btf__free()
btf__new_empty_split() creates an unpopulated BTF object, with a base BTF on top of which the split BTF will be based
If base_btf is NULL, btf__new_empty_split() is equivalent to btf__new_empty() and creates non-split BTF.
On error, an error-code-encoded-as-pointer is returned, not NULL. To extract the error code from such a pointer, libbpf_get_error() should be used. If libbpf_set_strict_mode(LIBBPF_STRICT_CLEAN_PTRS) is enabled, NULL is returned on error instead. In both cases the thread-local errno variable is always set to the error code as well.
new BTF object instance which has to be eventually freed with btf__free()
btf__distill_base() creates new versions of the split BTF src_btf and its base BTF. The new base BTF will only contain the types needed to improve robustness of the split BTF to small changes in base BTF. When that split BTF is loaded against a (possibly changed) base, this distilled base BTF will help update references to that (possibly changed) base BTF.
If successful, 0 is returned and new_base_btf and new_split_btf will point at the new base/split BTF. Both the new split BTF and its associated new base BTF must be freed by the caller.
A negative value is returned on error, and the thread-local errno variable is set to the error code as well.
btf__add_btf() appends all the BTF types from src_btf into btf
btf__add_btf() can be used to simply and efficiently append the entire contents of one BTF object to another one. All the BTF type data is copied over, all referenced type IDs are adjusted by adding a necessary ID offset. Only strings referenced from BTF types are copied over and deduplicated, so if there were some unused strings in src_btf, those won’t be copied over, which is consistent with the general string deduplication semantics of BTF writing APIs.
If any error is encountered during this process, the contents of btf are left intact, which means that btf__add_btf() follows transactional semantics and the operation as a whole is all-or-nothing.
src_btf has to be non-split BTF; as of now, copying types from split BTF is not supported and will result in a -ENOTSUP error code being returned.
btf – BTF object which all the BTF types and strings are added to
src_btf – BTF object which all BTF types and referenced strings are copied from
BTF type ID of the first appended BTF type, or negative error code
btf__relocate() will check the split BTF btf for references to base BTF kinds, and verify those references are compatible with base_btf; if they are, btf is adjusted such that it is re-parented to base_btf, and type IDs and strings are adjusted to accommodate this.
If successful, 0 is returned and btf now has base_btf as its base.
A negative value is returned on error, and the thread-local errno variable is set to the error code as well.
Defines
Enums
Values:
Values:
bpf_tracing.h
Defines
name(unsigned long long *ctx); \
static __always_inline typeof(name(0)) \
____##name(unsigned long long *ctx, ##args); \
typeof(name(0)) name(unsigned long long *ctx) \
{ \
_Pragma("GCC diagnostic push") \
_Pragma("GCC diagnostic ignored \"-Wint-conversion\"") \
return ____##name(___bpf_ctx_cast(args)); \
_Pragma("GCC diagnostic pop") \
} \
static __always_inline typeof(name(0)) \
____##name(unsigned long long *ctx, ##args)
name(unsigned long long *ctx); \
static __always_inline typeof(name(0)) \
____##name(unsigned long long *ctx ___bpf_ctx_decl(args)); \
typeof(name(0)) name(unsigned long long *ctx) \
{ \
return ____##name(ctx ___bpf_ctx_arg(args)); \
} \
static __always_inline typeof(name(0)) \
____##name(unsigned long long *ctx ___bpf_ctx_decl(args))
name(struct pt_regs *ctx); \
static __always_inline typeof(name(0)) \
____##name(struct pt_regs *ctx, ##args); \
typeof(name(0)) name(struct pt_regs *ctx) \
{ \
_Pragma("GCC diagnostic push") \
_Pragma("GCC diagnostic ignored \"-Wint-conversion\"") \
return ____##name(___bpf_kprobe_args(args)); \
_Pragma("GCC diagnostic pop") \
} \
static __always_inline typeof(name(0)) \
____##name(struct pt_regs *ctx, ##args)
name(struct pt_regs *ctx); \
static __always_inline typeof(name(0)) \
____##name(struct pt_regs *ctx, ##args); \
typeof(name(0)) name(struct pt_regs *ctx) \
{ \
_Pragma("GCC diagnostic push") \
_Pragma("GCC diagnostic ignored \"-Wint-conversion\"") \
return ____##name(___bpf_kretprobe_args(args)); \
_Pragma("GCC diagnostic pop") \
} \
static __always_inline typeof(name(0)) ____##name(struct pt_regs *ctx, ##args)
name(struct pt_regs *ctx); \
extern _Bool LINUX_HAS_SYSCALL_WRAPPER __kconfig; \
static __always_inline typeof(name(0)) \
____##name(struct pt_regs *ctx, ##args); \
typeof(name(0)) name(struct pt_regs *ctx) \
{ \
struct pt_regs *regs = LINUX_HAS_SYSCALL_WRAPPER \
? (struct pt_regs *)PT_REGS_PARM1(ctx) \
: ctx; \
_Pragma("GCC diagnostic push") \
_Pragma("GCC diagnostic ignored \"-Wint-conversion\"") \
if (LINUX_HAS_SYSCALL_WRAPPER) \
return ____##name(___bpf_syswrap_args(args)); \
else \
return ____##name(___bpf_syscall_args(args)); \
_Pragma("GCC diagnostic pop") \
} \
static __always_inline typeof(name(0)) \
____##name(struct pt_regs *ctx, ##args)
Functions
Defines
bpf_probe_read_kernel( \
(void *)dst, \
__CORE_RELO(src, fld, BYTE_SIZE), \
(const void *)src + __CORE_RELO(src, fld, BYTE_OFFSET))
({ \
unsigned long long val = 0; \
\
__CORE_BITFIELD_PROBE_READ(&val, s, field); \
val <<= __CORE_RELO(s, field, LSHIFT_U64); \
if (__CORE_RELO(s, field, SIGNED)) \
val = ((long long)val) >> __CORE_RELO(s, field, RSHIFT_U64); \
else \
val = val >> __CORE_RELO(s, field, RSHIFT_U64); \
val; \
})
({ \
const void *p = (const void *)s + __CORE_RELO(s, field, BYTE_OFFSET); \
unsigned long long val; \
\
/* This is a so-called barrier_var() operation that makes specified \
* variable "a black box" for optimizing compiler. \
* It forces compiler to perform BYTE_OFFSET relocation on p and use \
* its calculated value in the switch below, instead of applying \
* the same relocation 4 times for each individual memory load. \
*/ \
asm volatile("" : "=r"(p) : "0"(p)); \
\
switch (__CORE_RELO(s, field, BYTE_SIZE)) { \
case 1: val = *(const unsigned char *)p; break; \
case 2: val = *(const unsigned short *)p; break; \
case 4: val = *(const unsigned int *)p; break; \
case 8: val = *(const unsigned long long *)p; break; \
default: val = 0; break; \
} \
val <<= __CORE_RELO(s, field, LSHIFT_U64); \
if (__CORE_RELO(s, field, SIGNED)) \
val = ((long long)val) >> __CORE_RELO(s, field, RSHIFT_U64); \
else \
val = val >> __CORE_RELO(s, field, RSHIFT_U64); \
val; \
})
({ \
void *p = (void *)s + __CORE_RELO(s, field, BYTE_OFFSET); \
unsigned int byte_size = __CORE_RELO(s, field, BYTE_SIZE); \
unsigned int lshift = __CORE_RELO(s, field, LSHIFT_U64); \
unsigned int rshift = __CORE_RELO(s, field, RSHIFT_U64); \
unsigned long long mask, val, nval = new_val; \
unsigned int rpad = rshift - lshift; \
\
asm volatile("" : "+r"(p)); \
\
switch (byte_size) { \
case 1: val = *(unsigned char *)p; break; \
case 2: val = *(unsigned short *)p; break; \
case 4: val = *(unsigned int *)p; break; \
case 8: val = *(unsigned long long *)p; break; \
} \
\
mask = (~0ULL << rshift) >> lshift; \
val = (val & ~mask) | ((nval << rpad) & mask); \
\
switch (byte_size) { \
case 1: *(unsigned char *)p = val; break; \
case 2: *(unsigned short *)p = val; break; \
case 4: *(unsigned int *)p = val; break; \
case 8: *(unsigned long long *)p = val; break; \
} \
})
({ \
___core_read(bpf_core_read, bpf_core_read, \
dst, (src), a, ##__VA_ARGS__) \
})
({ \
___core_read(bpf_core_read_user, bpf_core_read_user, \
dst, (src), a, ##__VA_ARGS__) \
})
({ \
___core_read(bpf_probe_read_kernel, bpf_probe_read_kernel, \
dst, (src), a, ##__VA_ARGS__) \
})
({ \
___core_read(bpf_probe_read_user, bpf_probe_read_user, \
dst, (src), a, ##__VA_ARGS__) \
})
({ \
___core_read(bpf_core_read_str, bpf_core_read, \
dst, (src), a, ##__VA_ARGS__) \
})
({ \
___core_read(bpf_core_read_user_str, bpf_core_read_user, \
dst, (src), a, ##__VA_ARGS__) \
})
({ \
___core_read(bpf_probe_read_kernel_str, bpf_probe_read_kernel, \
dst, (src), a, ##__VA_ARGS__) \
})
({ \
___core_read(bpf_probe_read_user_str, bpf_probe_read_user, \
dst, (src), a, ##__VA_ARGS__) \
})
({ \
___type((src), a, ##__VA_ARGS__) __r; \
BPF_CORE_READ_INTO(&__r, (src), a, ##__VA_ARGS__); \
__r; \
})
({ \
___type((src), a, ##__VA_ARGS__) __r; \
BPF_CORE_READ_USER_INTO(&__r, (src), a, ##__VA_ARGS__); \
__r; \
})
({ \
___type((src), a, ##__VA_ARGS__) __r; \
BPF_PROBE_READ_INTO(&__r, (src), a, ##__VA_ARGS__); \
__r; \
})
({ \
___type((src), a, ##__VA_ARGS__) __r; \
BPF_PROBE_READ_USER_INTO(&__r, (src), a, ##__VA_ARGS__); \
__r; \
})
Enums
Values:
Values:
Values:
Values:
Defines
(__builtin_constant_p(x) ? \
__bpf_constant_htons(x) : __bpf_htons(x))
(__builtin_constant_p(x) ? \
__bpf_constant_ntohs(x) : __bpf_ntohs(x))
(__builtin_constant_p(x) ? \
__bpf_constant_htonl(x) : __bpf_htonl(x))
(__builtin_constant_p(x) ? \
__bpf_constant_ntohl(x) : __bpf_ntohl(x))
(__builtin_constant_p(x) ? \
__bpf_constant_cpu_to_be64(x) : __bpf_cpu_to_be64(x))
(__builtin_constant_p(x) ? \
__bpf_constant_be64_to_cpu(x) : __bpf_be64_to_cpu(x))