Source Code: lib/buffer.js
Buffer objects are used to represent a fixed-length sequence of bytes. Many Node.js APIs support Buffers.
The Buffer class is a subclass of JavaScript's <Uint8Array> class and extends it with methods that cover additional use cases. Node.js APIs accept plain <Uint8Array>s wherever Buffers are supported as well.
While the Buffer class is available within the global scope, it is still recommended to explicitly reference it via an import or require statement.
import { Buffer } from 'node:buffer';

// Creates a zero-filled Buffer of length 10.
const buf1 = Buffer.alloc(10);

// Creates a Buffer of length 10, filled with bytes which all have the value `1`.
const buf2 = Buffer.alloc(10, 1);

// Creates an uninitialized buffer of length 10; its contents may hold old data.
const buf3 = Buffer.allocUnsafe(10);

// Creates a Buffer containing the bytes [1, 2, 3].
const buf4 = Buffer.from([1, 2, 3]);

// Creates a Buffer containing the bytes [1, 1, 1, 1]; the entries
// are all truncated using `(value & 255)` to fit into the range 0–255.
const buf5 = Buffer.from([257, 257.5, -255, '1']);

// Creates a Buffer containing the UTF-8-encoded bytes for the string 'tést'.
const buf6 = Buffer.from('tést');

// Creates a Buffer containing the Latin-1 bytes [0x74, 0xe9, 0x73, 0x74].
const buf7 = Buffer.from('tést', 'latin1');

Buffers and character encodings
When converting between Buffers and strings, a character encoding may be specified. If no character encoding is specified, UTF-8 will be used as the default.

import { Buffer } from 'node:buffer';

const buf = Buffer.from('hello world', 'utf8');

console.log(buf.toString('hex'));
console.log(buf.toString('base64'));

console.log(Buffer.from('fhqwhgads', 'utf8'));
console.log(Buffer.from('fhqwhgads', 'utf16le'));

Node.js buffers accept all case variations of encoding strings that they receive. For example, UTF-8 can be specified as 'utf8', 'UTF8', or 'uTf8'.
The character encodings currently supported by Node.js are the following:

'utf8' (alias: 'utf-8'): Multi-byte encoded Unicode characters. Many web pages and other document formats use UTF-8. This is the default character encoding. When decoding a Buffer into a string that does not exclusively contain valid UTF-8 data, the Unicode replacement character U+FFFD � will be used to represent those errors.

'utf16le' (alias: 'utf-16le'): Multi-byte encoded Unicode characters. Unlike 'utf8', each character in the string will be encoded using either 2 or 4 bytes. Node.js only supports the little-endian variant of UTF-16.

'latin1': Latin-1 stands for ISO-8859-1. This character encoding only supports the Unicode characters from U+0000 to U+00FF. Each character is encoded using a single byte. Characters that do not fit into that range are truncated and will be mapped to characters in that range.

Converting a Buffer into a string using one of the above is referred to as decoding, and converting a string into a Buffer is referred to as encoding.

Node.js also supports the following binary-to-text encodings. For binary-to-text encodings, the naming convention is reversed: converting a Buffer into a string is typically referred to as encoding, and converting a string into a Buffer as decoding.

'base64': Base64 encoding. When creating a Buffer from a string, this encoding will also correctly accept the "URL and Filename Safe Alphabet" as specified in RFC 4648, Section 5. Whitespace characters such as spaces, tabs, and new lines contained within the base64-encoded string are ignored.

'base64url': base64url encoding as specified in RFC 4648, Section 5. When creating a Buffer from a string, this encoding will also correctly accept regular base64-encoded strings. When encoding a Buffer to a string, this encoding will omit padding.

'hex': Encode each byte as two hexadecimal characters. Data truncation may occur when decoding strings that do not exclusively consist of an even number of hexadecimal characters. See below for an example.

The following legacy character encodings are also supported:

'ascii': For 7-bit ASCII data only. When encoding a string into a Buffer, this is equivalent to using 'latin1'. When decoding a Buffer into a string, using this encoding will additionally unset the highest bit of each byte before decoding as 'latin1'. Generally, there should be no reason to use this encoding, as 'utf8' (or, if the data is known to always be ASCII-only, 'latin1') will be a better choice when encoding or decoding ASCII-only text. It is only provided for legacy compatibility.

'binary': Alias for 'latin1'. The name of this encoding can be very misleading, as all of the encodings listed here convert between strings and binary data. For converting between strings and Buffers, typically 'utf8' is the right choice.

'ucs2', 'ucs-2': Aliases of 'utf16le'. UCS-2 used to refer to a variant of UTF-16 that did not support characters that had code points larger than U+FFFF. In Node.js, these code points are always supported.
import { Buffer } from 'node:buffer';

Buffer.from('1ag123', 'hex');
// <Buffer 1a>: data truncated when the first non-hexadecimal value ('g') is encountered.

Buffer.from('1a7', 'hex');
// <Buffer 1a>: data truncated because the data ends in a single digit ('7').

Buffer.from('1634', 'hex');
// <Buffer 16 34>: all data represented.

Modern Web browsers follow the WHATWG Encoding Standard which aliases both 'latin1' and 'ISO-8859-1' to 'win-1252'. This means that while doing something like http.get(), if the returned charset is one of those listed in the WHATWG specification it is possible that the server actually returned 'win-1252'-encoded data, and using 'latin1' encoding may incorrectly decode the characters.
Buffer instances are also JavaScript <Uint8Array> and <TypedArray> instances. All <TypedArray> methods are available on Buffers. There are, however, subtle incompatibilities between the Buffer API and the <TypedArray> API.

In particular:

TypedArray.prototype.slice() creates a copy of part of the TypedArray, while Buffer.prototype.slice() creates a view over the existing Buffer without copying. This behavior can be surprising, and only exists for legacy compatibility. TypedArray.prototype.subarray() can be used to achieve the behavior of Buffer.prototype.slice() on both Buffers and other TypedArrays and should be preferred.

buf.toString() is incompatible with its TypedArray equivalent.

A number of methods, e.g. buf.indexOf(), support additional arguments.

There are two ways to create new <TypedArray> instances from a Buffer:
Passing a Buffer to a <TypedArray> constructor will copy the Buffer's contents, interpreted as an array of integers, and not as a byte sequence of the target type.

import { Buffer } from 'node:buffer';

const buf = Buffer.from([1, 2, 3, 4]);
const uint32array = new Uint32Array(buf);

console.log(uint32array);
// Prints: Uint32Array(4) [ 1, 2, 3, 4 ]

Passing the Buffer's underlying <ArrayBuffer> will create a <TypedArray> that shares its memory with the Buffer.

import { Buffer } from 'node:buffer';

const buf = Buffer.from('hello', 'utf16le');
const uint16array = new Uint16Array(
  buf.buffer,
  buf.byteOffset,
  buf.length / Uint16Array.BYTES_PER_ELEMENT);

console.log(uint16array);
// Prints: Uint16Array(5) [ 104, 101, 108, 108, 111 ]
It is possible to create a new Buffer that shares the same allocated memory as a <TypedArray> instance by using the TypedArray object's .buffer property in the same way. Buffer.from() behaves like new Uint8Array() in this context.

import { Buffer } from 'node:buffer';

const arr = new Uint16Array(2);

arr[0] = 5000;
arr[1] = 4000;

// Copies the contents of `arr`.
const buf1 = Buffer.from(arr);

// Shares memory with `arr`.
const buf2 = Buffer.from(arr.buffer);

console.log(buf1);
// Prints: <Buffer 88 a0>
console.log(buf2);
// Prints: <Buffer 88 13 a0 0f>

arr[1] = 6000;

console.log(buf1);
// Prints: <Buffer 88 a0>
console.log(buf2);
// Prints: <Buffer 88 13 70 17>

When creating a Buffer using a <TypedArray>'s .buffer, it is possible to use only a portion of the underlying <ArrayBuffer> by passing in byteOffset and length parameters.

import { Buffer } from 'node:buffer';

const arr = new Uint16Array(20);
const buf = Buffer.from(arr.buffer, 0, 16);

console.log(buf.length);
// Prints: 16
Buffer.from() and TypedArray.from() have different signatures and implementations. Specifically, the <TypedArray> variants accept a second argument that is a mapping function that is invoked on every element of the typed array:

TypedArray.from(source[, mapFn[, thisArg]])

The Buffer.from() method, however, does not support the use of a mapping function:

Buffer.from(array)
Buffer.from(buffer)
Buffer.from(arrayBuffer[, byteOffset[, length]])
Buffer.from(string[, encoding])
Buffer instances can be iterated over using for..of syntax:

import { Buffer } from 'node:buffer';

const buf = Buffer.from([1, 2, 3]);

for (const b of buf) {
  console.log(b);
}

Additionally, the buf.values(), buf.keys(), and buf.entries() methods can be used to create iterators.
Class: Blob
A <Blob> encapsulates immutable, raw data that can be safely shared across multiple worker threads.
new buffer.Blob([sources[, options]])

sources <string[]> | <ArrayBuffer[]> | <TypedArray[]> | <DataView[]> | <Blob[]> An array of string, <ArrayBuffer>, <TypedArray>, <DataView>, or <Blob> objects, or any mix of such objects, that will be stored within the Blob.
options <Object>
  endings <string> One of either 'transparent' or 'native'. When set to 'native', line endings in string source parts will be converted to the platform native line-ending as specified by require('node:os').EOL.
  type <string> The Blob content-type. The intent is for type to convey the MIME media type of the data, however no validation of the type format is performed.

Creates a new Blob object containing a concatenation of the given sources.
<ArrayBuffer>, <TypedArray>, <DataView>, and <Buffer> sources are copied into the 'Blob' and can therefore be safely modified after the 'Blob' is created.
String sources are encoded as UTF-8 byte sequences and copied into the Blob. Unmatched surrogate pairs within each string part will be replaced by Unicode U+FFFD replacement characters.
blob.arrayBuffer()
Added in: v15.7.0, v14.18.0
Returns a promise that fulfills with an <ArrayBuffer> containing a copy of the Blob data.
blob.bytes()
Added in: v22.3.0, v20.16.0
The blob.bytes() method returns the bytes of the Blob object as a Promise<Uint8Array>.
const blob = new Blob(['hello']);
blob.bytes().then((bytes) => {
  console.log(bytes);
  // Outputs: Uint8Array(5) [ 104, 101, 108, 108, 111 ]
});
blob.size
Added in: v15.7.0, v14.18.0
The total size of the Blob in bytes.
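For example, a small illustration (the byte count reflects UTF-8 encoding of the string sources):

import { Blob } from 'node:buffer';

const blob = new Blob(['hello', ' ', 'world']);
console.log(blob.size);
// 11: 'hello' (5 bytes) + ' ' (1 byte) + 'world' (5 bytes)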
blob.slice([start[, end[, type]]])
Added in: v15.7.0, v14.18.0

start <number> The starting index.
end <number> The ending index.
type <string> The content-type for the new Blob.

Creates and returns a new Blob containing a subset of this Blob object's data. The original Blob is not altered.
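A brief sketch of slicing; the start and end indexes are byte offsets into the Blob's data:

import { Blob } from 'node:buffer';

const blob = new Blob(['hello world']);
const sliced = blob.slice(0, 5, 'text/plain');

console.log(sliced.size);
// 5
sliced.text().then(console.log);
// Prints: hello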
blob.stream()
Added in: v16.7.0
Returns a new ReadableStream that allows the content of the Blob to be read.
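As a rough sketch, the returned web ReadableStream can be consumed with async iteration (which Node.js supports); each chunk arrives as a <Uint8Array>:

import { Blob } from 'node:buffer';

const blob = new Blob(['hello ', 'world']);

for await (const chunk of blob.stream()) {
  console.log(chunk);
}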
blob.text()
Added in: v15.7.0, v14.18.0
Returns a promise that fulfills with the contents of the Blob decoded as a UTF-8 string.
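A short example, mirroring the blob.bytes() sample above:

import { Blob } from 'node:buffer';

const blob = new Blob(['hello']);
blob.text().then((text) => {
  console.log(text);
  // Prints: hello
});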
blob.type
Added in: v15.7.0, v14.18.0
The content-type of the Blob.
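For example, a minimal sketch:

import { Blob } from 'node:buffer';

const blob = new Blob(['<p>hi</p>'], { type: 'text/html' });
console.log(blob.type);
// Prints: text/html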
Blob objects and MessageChannel

Once a <Blob> object is created, it can be sent via MessagePort to multiple destinations without transferring or immediately copying the data. The data contained by the Blob is copied only when the arrayBuffer() or text() methods are called.

import { Blob } from 'node:buffer';
import { setTimeout as delay } from 'node:timers/promises';

const blob = new Blob(['hello there']);

const mc1 = new MessageChannel();
const mc2 = new MessageChannel();

mc1.port1.onmessage = async ({ data }) => {
  console.log(await data.arrayBuffer());
  mc1.port1.close();
};

mc2.port1.onmessage = async ({ data }) => {
  await delay(1000);
  console.log(await data.arrayBuffer());
  mc2.port1.close();
};

mc1.port2.postMessage(blob);
mc2.port2.postMessage(blob);

blob.text().then(console.log);
Class: Buffer

The Buffer class is a global type for dealing with binary data directly. It can be constructed in a variety of ways.
Static method: Buffer.alloc(size[, fill[, encoding]])

size <integer> The desired length of the new Buffer.
fill <string> | <Buffer> | <Uint8Array> | <integer> A value to pre-fill the new Buffer with. Default: 0.
encoding <string> If fill is a string, this is its encoding. Default: 'utf8'.

Allocates a new Buffer of size bytes. If fill is undefined, the Buffer will be zero-filled.

import { Buffer } from 'node:buffer';

const buf = Buffer.alloc(5);

console.log(buf);
// Prints: <Buffer 00 00 00 00 00>
If size is larger than buffer.constants.MAX_LENGTH or smaller than 0, ERR_OUT_OF_RANGE is thrown.

If fill is specified, the allocated Buffer will be initialized by calling buf.fill(fill).

import { Buffer } from 'node:buffer';

const buf = Buffer.alloc(5, 'a');

console.log(buf);
// Prints: <Buffer 61 61 61 61 61>

If both fill and encoding are specified, the allocated Buffer will be initialized by calling buf.fill(fill, encoding).

import { Buffer } from 'node:buffer';

const buf = Buffer.alloc(11, 'aGVsbG8gd29ybGQ=', 'base64');

console.log(buf);
// Prints: <Buffer 68 65 6c 6c 6f 20 77 6f 72 6c 64>
Calling Buffer.alloc() can be measurably slower than the alternative Buffer.allocUnsafe() but ensures that the newly created Buffer instance contents will never contain sensitive data from previous allocations, including data that might not have been allocated for Buffers.

A TypeError will be thrown if size is not a number.
Static method: Buffer.allocUnsafe(size)

Allocates a new Buffer of size bytes. If size is larger than buffer.constants.MAX_LENGTH or smaller than 0, ERR_OUT_OF_RANGE is thrown.

The underlying memory for Buffer instances created in this way is not initialized. The contents of the newly created Buffer are unknown and may contain sensitive data. Use Buffer.alloc() instead to initialize Buffer instances with zeroes.

import { Buffer } from 'node:buffer';

const buf = Buffer.allocUnsafe(10);

console.log(buf);
// Prints an uninitialized buffer; contents may vary.

buf.fill(0);

console.log(buf);
// Prints: <Buffer 00 00 00 00 00 00 00 00 00 00>

A TypeError will be thrown if size is not a number.
The Buffer module pre-allocates an internal Buffer instance of size Buffer.poolSize that is used as a pool for the fast allocation of new Buffer instances created using Buffer.allocUnsafe(), Buffer.from(array), Buffer.from(string), and Buffer.concat() only when size is less than Buffer.poolSize >>> 1 (floor of Buffer.poolSize divided by two).

Use of this pre-allocated internal memory pool is a key difference between calling Buffer.alloc(size, fill) vs. Buffer.allocUnsafe(size).fill(fill). Specifically, Buffer.alloc(size, fill) will never use the internal Buffer pool, while Buffer.allocUnsafe(size).fill(fill) will use the internal Buffer pool if size is less than or equal to half Buffer.poolSize. The difference is subtle but can be important when an application requires the additional performance that Buffer.allocUnsafe() provides.
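One rough way to observe the difference is to inspect the backing ArrayBuffer of each allocation. With the default pool size, a small Buffer.allocUnsafe() allocation is typically a slice of the shared 8 KiB pool, while Buffer.alloc() always receives its own backing store (a sketch of internal behavior, not a guaranteed invariant):

import { Buffer } from 'node:buffer';

const pooled = Buffer.allocUnsafe(10).fill(0);
const unpooled = Buffer.alloc(10);

console.log(pooled.buffer.byteLength);
// Typically 8192: a slice of the shared internal pool.
console.log(unpooled.buffer.byteLength);
// 10: a dedicated ArrayBuffer.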
Static method: Buffer.allocUnsafeSlow(size)

Allocates a new Buffer of size bytes. If size is larger than buffer.constants.MAX_LENGTH or smaller than 0, ERR_OUT_OF_RANGE is thrown. A zero-length Buffer is created if size is 0.

The underlying memory for Buffer instances created in this way is not initialized. The contents of the newly created Buffer are unknown and may contain sensitive data. Use buf.fill(0) to initialize such Buffer instances with zeroes.
When using Buffer.allocUnsafe() to allocate new Buffer instances, allocations less than Buffer.poolSize >>> 1 (4 KiB when the default poolSize is used) are sliced from a single pre-allocated Buffer. This allows applications to avoid the garbage collection overhead of creating many individually allocated Buffer instances. This approach improves both performance and memory usage by eliminating the need to track and clean up as many individual ArrayBuffer objects.

However, in the case where a developer may need to retain a small chunk of memory from a pool for an indeterminate amount of time, it may be appropriate to create an un-pooled Buffer instance using Buffer.allocUnsafeSlow() and then copying out the relevant bits.

import { Buffer } from 'node:buffer';

const store = [];

socket.on('readable', () => {
  let data;
  while (null !== (data = readable.read())) {
    // Allocate memory for data that is retained.
    const sb = Buffer.allocUnsafeSlow(10);

    // Copy the data into the new allocation.
    data.copy(sb, 0, 0, 10);

    store.push(sb);
  }
});

A TypeError will be thrown if size is not a number.
Static method: Buffer.byteLength(string[, encoding])

string <string> | <Buffer> | <TypedArray> | <DataView> | <ArrayBuffer> | <SharedArrayBuffer> A value to calculate the length of.
encoding <string> If string is a string, this is its encoding. Default: 'utf8'.
Returns: <integer> The number of bytes contained within string.

Returns the byte length of a string when encoded using encoding. This is not the same as String.prototype.length, which does not account for the encoding that is used to convert the string into bytes.

For 'base64', 'base64url', and 'hex', this function assumes valid input. For strings that contain non-base64/hex-encoded data (e.g. whitespace), the return value might be greater than the length of a Buffer created from the string.

import { Buffer } from 'node:buffer';

const str = '\u00bd + \u00bc = \u00be';

console.log(`${str}: ${str.length} characters, ` +
            `${Buffer.byteLength(str, 'utf8')} bytes`);
// Prints: ½ + ¼ = ¾: 9 characters, 12 bytes

When string is a <Buffer> | <DataView> | <TypedArray> | <ArrayBuffer> | <SharedArrayBuffer>, the byte length as reported by .byteLength is returned.
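A small sketch of the non-string case:

import { Buffer } from 'node:buffer';

console.log(Buffer.byteLength(Buffer.alloc(16)));
// Prints: 16
console.log(Buffer.byteLength(new Float64Array(4)));
// Prints: 32 (4 elements * 8 bytes each)
console.log(Buffer.byteLength(new ArrayBuffer(8)));
// Prints: 8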
Static method: Buffer.compare(buf1, buf2)

buf1 <Buffer> | <Uint8Array>
buf2 <Buffer> | <Uint8Array>
Returns: <integer> Either -1, 0, or 1, depending on the result of the comparison. See buf.compare() for details.

Compares buf1 to buf2, typically for the purpose of sorting arrays of Buffer instances. This is equivalent to calling buf1.compare(buf2).

import { Buffer } from 'node:buffer';

const buf1 = Buffer.from('1234');
const buf2 = Buffer.from('0123');
const arr = [buf1, buf2];

console.log(arr.sort(Buffer.compare));
// Prints: [ <Buffer 30 31 32 33>, <Buffer 31 32 33 34> ]
// (This result is equal to: [buf2, buf1].)
Static method: Buffer.concat(list[, totalLength])

list <Buffer[]> | <Uint8Array[]> List of Buffer or <Uint8Array> instances to concatenate.
totalLength <integer> Total length of the Buffer instances in list when concatenated.

Returns a new Buffer which is the result of concatenating all the Buffer instances in the list together.

If the list has no items, or if the totalLength is 0, then a new zero-length Buffer is returned.

If totalLength is not provided, it is calculated from the Buffer instances in list by adding their lengths.

If totalLength is provided, it is coerced to an unsigned integer. If the combined length of the Buffers in list exceeds totalLength, the result is truncated to totalLength. If the combined length of the Buffers in list is less than totalLength, the remaining space is filled with zeros.

import { Buffer } from 'node:buffer';

// Create a single Buffer from a list of three Buffer instances.

const buf1 = Buffer.alloc(10);
const buf2 = Buffer.alloc(14);
const buf3 = Buffer.alloc(18);
const totalLength = buf1.length + buf2.length + buf3.length;

console.log(totalLength);
// Prints: 42

const bufA = Buffer.concat([buf1, buf2, buf3], totalLength);

console.log(bufA);
// Prints a 42-byte, zero-filled Buffer.
console.log(bufA.length);
// Prints: 42
Buffer.concat() may also use the internal Buffer pool like Buffer.allocUnsafe() does.
Static method: Buffer.copyBytesFrom(view[, offset[, length]])
Added in: v19.8.0, v18.16.0

view <TypedArray> The <TypedArray> to copy.
offset <integer> The starting offset within view. Default: 0.
length <integer> The number of elements from view to copy. Default: view.length - offset.

Copies the underlying memory of view into a new Buffer.
const u16 = new Uint16Array([0, 0xffff]);
const buf = Buffer.copyBytesFrom(u16, 1, 1);
u16[1] = 0;
console.log(buf.length); // 2
console.log(buf[0]); // 255
console.log(buf[1]); // 255
Static method: Buffer.from(array)
Added in: v5.10.0

array <integer[]>

Allocates a new Buffer using an array of bytes in the range 0 – 255. Array entries outside that range will be truncated to fit into it.

import { Buffer } from 'node:buffer';

// Creates a new Buffer containing the UTF-8 bytes of the string 'buffer'.
const buf = Buffer.from([0x62, 0x75, 0x66, 0x66, 0x65, 0x72]);
If array is an Array-like object (that is, one with a length property of type number), it is treated as if it is an array, unless it is a Buffer or a Uint8Array. This means all other TypedArray variants get treated as an Array. To create a Buffer from the bytes backing a TypedArray, use Buffer.copyBytesFrom().
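A short sketch of that difference, with values chosen so the truncation is visible (the byte order in the second result assumes a little-endian platform):

import { Buffer } from 'node:buffer';

const u16 = new Uint16Array([0x0102, 0x0304]);

// Treated as an array of integers: each entry is truncated to a byte.
console.log(Buffer.from(u16));
// Prints: <Buffer 02 04>

// Copies the underlying bytes instead.
console.log(Buffer.copyBytesFrom(u16));
// Prints: <Buffer 02 01 04 03>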
A TypeError will be thrown if array is not an Array or another type appropriate for Buffer.from() variants.
Buffer.from(array) and Buffer.from(string) may also use the internal Buffer pool like Buffer.allocUnsafe() does.
Static method: Buffer.from(arrayBuffer[, byteOffset[, length]])
Added in: v5.10.0

arrayBuffer <ArrayBuffer> | <SharedArrayBuffer> An <ArrayBuffer> or <SharedArrayBuffer>, for example the .buffer property of a <TypedArray>.
byteOffset <integer> Index of first byte to expose. Default: 0.
length <integer> Number of bytes to expose. Default: arrayBuffer.byteLength - byteOffset.

This creates a view of the <ArrayBuffer> without copying the underlying memory. For example, when passed a reference to the .buffer property of a <TypedArray> instance, the newly created Buffer will share the same allocated memory as the <TypedArray>'s underlying ArrayBuffer.

import { Buffer } from 'node:buffer';

const arr = new Uint16Array(2);

arr[0] = 5000;
arr[1] = 4000;

// Shares memory with `arr`.
const buf = Buffer.from(arr.buffer);

console.log(buf);
// Prints: <Buffer 88 13 a0 0f>

// Changing the original Uint16Array changes the Buffer also.
arr[1] = 6000;

console.log(buf);
// Prints: <Buffer 88 13 70 17>
The optional byteOffset and length arguments specify a memory range within the arrayBuffer that will be shared by the Buffer.

import { Buffer } from 'node:buffer';

const ab = new ArrayBuffer(10);
const buf = Buffer.from(ab, 0, 2);

console.log(buf.length);
// Prints: 2

A TypeError will be thrown if arrayBuffer is not an <ArrayBuffer> or a <SharedArrayBuffer> or another type appropriate for Buffer.from() variants.
It is important to remember that a backing ArrayBuffer can cover a range of memory that extends beyond the bounds of a TypedArray view. A new Buffer created using the buffer property of a TypedArray may extend beyond the range of the TypedArray:

import { Buffer } from 'node:buffer';

const arrA = Uint8Array.from([0x63, 0x64, 0x65, 0x66]); // 4 elements
const arrB = new Uint8Array(arrA.buffer, 1, 2);          // 2 elements

console.log(arrA.buffer === arrB.buffer);
// Prints: true

const buf = Buffer.from(arrB.buffer);

console.log(buf);
// Prints: <Buffer 63 64 65 66>
Static method: Buffer.from(buffer)
Added in: v5.10.0

buffer <Buffer> | <Uint8Array> An existing Buffer or <Uint8Array> from which to copy data.

Copies the passed buffer data onto a new Buffer instance.

import { Buffer } from 'node:buffer';

const buf1 = Buffer.from('buffer');
const buf2 = Buffer.from(buf1);

buf1[0] = 0x61;

console.log(buf1.toString());
// Prints: auffer
console.log(buf2.toString());
// Prints: buffer

A TypeError will be thrown if buffer is not a Buffer or another type appropriate for Buffer.from() variants.
Static method: Buffer.from(object[, offsetOrEncoding[, length]])
Added in: v8.2.0

object <Object> An object supporting Symbol.toPrimitive or valueOf().
offsetOrEncoding <integer> | <string> A byte-offset or encoding.
length <integer> A length.

For objects whose valueOf() function returns a value not strictly equal to object, returns Buffer.from(object.valueOf(), offsetOrEncoding, length).

import { Buffer } from 'node:buffer';

const buf = Buffer.from(new String('this is a test'));
// buf contains: <Buffer 74 68 69 73 20 69 73 20 61 20 74 65 73 74>

For objects that support Symbol.toPrimitive, returns Buffer.from(object[Symbol.toPrimitive]('string'), offsetOrEncoding).

import { Buffer } from 'node:buffer';

class Foo {
  [Symbol.toPrimitive]() {
    return 'this is a test';
  }
}

const buf = Buffer.from(new Foo(), 'utf8');
// buf contains: <Buffer 74 68 69 73 20 69 73 20 61 20 74 65 73 74>

A TypeError will be thrown if object does not have the mentioned methods or is not of another type appropriate for Buffer.from() variants.
Static method: Buffer.from(string[, encoding])
Added in: v5.10.0

string <string> A string to encode.
encoding <string> The encoding of string. Default: 'utf8'.

Creates a new Buffer containing string. The encoding parameter identifies the character encoding to be used when converting string into bytes.

import { Buffer } from 'node:buffer';

const buf1 = Buffer.from('this is a tést');
const buf2 = Buffer.from('7468697320697320612074c3a97374', 'hex');

console.log(buf1.toString());
// Prints: this is a tést
console.log(buf2.toString());
// Prints: this is a tést
console.log(buf1.toString('latin1'));
// Prints: this is a tÃ©st

A TypeError will be thrown if string is not a string or another type appropriate for Buffer.from() variants.

Buffer.from(string) may also use the internal Buffer pool like Buffer.allocUnsafe() does.
Static method: Buffer.isBuffer(obj)
Added in: v0.1.101

Returns true if obj is a Buffer, false otherwise.

import { Buffer } from 'node:buffer';

Buffer.isBuffer(Buffer.alloc(10)); // true
Buffer.isBuffer(Buffer.from('foo')); // true
Buffer.isBuffer('a string'); // false
Buffer.isBuffer([]); // false
Buffer.isBuffer(new Uint8Array(1024)); // false
Static method: Buffer.isEncoding(encoding)
Added in: v0.9.1

Returns true if encoding is the name of a supported character encoding, or false otherwise.

import { Buffer } from 'node:buffer';

console.log(Buffer.isEncoding('utf8'));
// Prints: true
console.log(Buffer.isEncoding('hex'));
// Prints: true
console.log(Buffer.isEncoding('utf/8'));
// Prints: false
console.log(Buffer.isEncoding(''));
// Prints: false
Class property: Buffer.poolSize
Added in: v0.11.3

Default: 8192

This is the size (in bytes) of pre-allocated internal Buffer instances used for pooling. This value may be modified.
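A minimal sketch of reading and adjusting the pool size; the new value only applies when the next internal pool is created:

import { Buffer } from 'node:buffer';

console.log(Buffer.poolSize);
// Prints: 8192 (the default)

// Use a larger pool for workloads that make many small allocUnsafe() allocations.
Buffer.poolSize = 16 * 1024;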
buf[index]

index <integer>

The index operator [index] can be used to get and set the octet at position index in buf. The values refer to individual bytes, so the legal value range is between 0x00 and 0xFF (hex) or 0 and 255 (decimal).

This operator is inherited from Uint8Array, so its behavior on out-of-bounds access is the same as Uint8Array. In other words, buf[index] returns undefined when index is negative or greater than or equal to buf.length, and buf[index] = value does not modify the buffer if index is negative or >= buf.length.
import { Buffer } from 'node:buffer';

// Copy an ASCII string into a Buffer one byte at a time.
// (This only works for ASCII-only strings; in general, Buffer.from()
// should be used to perform this conversion.)

const str = 'Node.js';
const buf = Buffer.allocUnsafe(str.length);

for (let i = 0; i < str.length; i++) {
  buf[i] = str.charCodeAt(i);
}

console.log(buf.toString('utf8'));
// Prints: Node.js
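The out-of-bounds behavior described above, in a minimal sketch:

import { Buffer } from 'node:buffer';

const buf = Buffer.from([1, 2, 3]);

console.log(buf[5]);
// Prints: undefined

buf[5] = 42; // Silently ignored; nothing is written.
console.log(buf.length);
// Prints: 3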
buf.buffer

<ArrayBuffer> The underlying ArrayBuffer object based on which this Buffer object is created.

This ArrayBuffer is not guaranteed to correspond exactly to the original Buffer. See the notes on buf.byteOffset for details.

import { Buffer } from 'node:buffer';

const arrayBuffer = new ArrayBuffer(16);
const buffer = Buffer.from(arrayBuffer);

console.log(buffer.buffer === arrayBuffer);
// Prints: true
buf.byteOffset

<integer> The byteOffset of the Buffer's underlying ArrayBuffer object.

When setting byteOffset in Buffer.from(ArrayBuffer, byteOffset, length), or sometimes when allocating a Buffer smaller than Buffer.poolSize, the buffer does not start from a zero offset on the underlying ArrayBuffer.

This can cause problems when accessing the underlying ArrayBuffer directly using buf.buffer, as other parts of the ArrayBuffer may be unrelated to the Buffer object itself.

A common issue when creating a TypedArray object that shares its memory with a Buffer is that in this case one needs to specify the byteOffset correctly:

import { Buffer } from 'node:buffer';

// Create a buffer smaller than `Buffer.poolSize`.
const nodeBuffer = Buffer.from([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]);

// When casting the Node.js Buffer to an Int8Array, use the byteOffset
// to refer only to the part of `nodeBuffer.buffer` that contains the
// memory for `nodeBuffer`.
new Int8Array(nodeBuffer.buffer, nodeBuffer.byteOffset, nodeBuffer.length);
buf.compare(target[, targetStart[, targetEnd[, sourceStart[, sourceEnd]]]])

target <Buffer> | <Uint8Array> A Buffer or <Uint8Array> with which to compare buf.
targetStart <integer> The offset within target at which to begin comparison. Default: 0.
targetEnd <integer> The offset within target at which to end comparison (not inclusive). Default: target.length.
sourceStart <integer> The offset within buf at which to begin comparison. Default: 0.
sourceEnd <integer> The offset within buf at which to end comparison (not inclusive). Default: buf.length.

Compares buf with target and returns a number indicating whether buf comes before, after, or is the same as target in sort order. Comparison is based on the actual sequence of bytes in each Buffer.

0 is returned if target is the same as buf.
1 is returned if target should come before buf when sorted.
-1 is returned if target should come after buf when sorted.

import { Buffer } from 'node:buffer';

const buf1 = Buffer.from('ABC');
const buf2 = Buffer.from('BCD');
const buf3 = Buffer.from('ABCD');

console.log(buf1.compare(buf1));
// Prints: 0
console.log(buf1.compare(buf2));
// Prints: -1
console.log(buf1.compare(buf3));
// Prints: -1
console.log(buf2.compare(buf1));
// Prints: 1
console.log(buf2.compare(buf3));
// Prints: 1
console.log([buf1, buf2, buf3].sort(Buffer.compare));
// Prints: [ <Buffer 41 42 43>, <Buffer 41 42 43 44>, <Buffer 42 43 44> ]
// (This result is equal to: [buf1, buf3, buf2].)
The optional targetStart, targetEnd, sourceStart, and sourceEnd arguments can be used to limit the comparison to specific ranges within target and buf respectively.

import { Buffer } from 'node:buffer';

const buf1 = Buffer.from([1, 2, 3, 4, 5, 6, 7, 8, 9]);
const buf2 = Buffer.from([5, 6, 7, 8, 9, 1, 2, 3, 4]);

console.log(buf1.compare(buf2, 5, 9, 0, 4));
// Prints: 0
console.log(buf1.compare(buf2, 0, 6, 4));
// Prints: -1
console.log(buf1.compare(buf2, 5, 6, 5));
// Prints: 1

ERR_OUT_OF_RANGE is thrown if targetStart < 0, sourceStart < 0, targetEnd > target.byteLength, or sourceEnd > source.byteLength.
buf.copy(target[, targetStart[, sourceStart[, sourceEnd]]])
Added in: v0.1.90

target <Buffer> | <Uint8Array> A Buffer or <Uint8Array> to copy into.
targetStart <integer> The offset within target at which to begin writing. Default: 0.
sourceStart <integer> The offset within buf from which to begin copying. Default: 0.
sourceEnd <integer> The offset within buf at which to stop copying (not inclusive). Default: buf.length.

Copies data from a region of buf to a region in target, even if the target memory region overlaps with buf.

TypedArray.prototype.set() performs the same operation, and is available for all TypedArrays, including Node.js Buffers, although it takes different function arguments.

import { Buffer } from 'node:buffer';

// Create two `Buffer` instances.
const buf1 = Buffer.allocUnsafe(26);
const buf2 = Buffer.allocUnsafe(26).fill('!');

for (let i = 0; i < 26; i++) {
  // 97 is the decimal ASCII value for 'a'.
  buf1[i] = i + 97;
}

// Copy `buf1` bytes 16 through 19 into `buf2` starting at byte 8 of `buf2`.
buf1.copy(buf2, 8, 16, 20);

console.log(buf2.toString('ascii', 0, 25));
// Prints: !!!!!!!!qrst!!!!!!!!!!!!!

import { Buffer } from 'node:buffer';

// Copy data from one region to an overlapping region within the same `Buffer`.
const buf = Buffer.allocUnsafe(26);

for (let i = 0; i < 26; i++) {
  // 97 is the decimal ASCII value for 'a'.
  buf[i] = i + 97;
}

buf.copy(buf, 0, 4, 10);

console.log(buf.toString());
// Prints: efghijghijklmnopqrstuvwxyz
buf.entries()
Added in: v1.1.0

Creates and returns an iterator of [index, byte] pairs from the contents of buf.

import { Buffer } from 'node:buffer';

// Log the entire contents of a `Buffer`.
const buf = Buffer.from('buffer');

for (const pair of buf.entries()) {
  console.log(pair);
}
// Prints:
//   [ 0, 98 ]
//   [ 1, 117 ]
//   [ 2, 102 ]
//   [ 3, 102 ]
//   [ 4, 101 ]
//   [ 5, 114 ]
buf.equals(otherBuffer)

otherBuffer <Buffer> | <Uint8Array> A Buffer or <Uint8Array> with which to compare buf.

Returns true if both buf and otherBuffer have exactly the same bytes, false otherwise. Equivalent to buf.compare(otherBuffer) === 0.

import { Buffer } from 'node:buffer';

const buf1 = Buffer.from('ABC');
const buf2 = Buffer.from('414243', 'hex');
const buf3 = Buffer.from('ABCD');

console.log(buf1.equals(buf2));
// Prints: true
console.log(buf1.equals(buf3));
// Prints: false
buf.fill(value[, offset[, end]][, encoding])

value <string> | <Buffer> | <Uint8Array> | <integer> The value with which to fill buf. An empty value (string, Uint8Array, Buffer) is coerced to 0.
offset <integer> Number of bytes to skip before starting to fill buf. Default: 0.
end <integer> Where to stop filling buf (not inclusive). Default: buf.length.
encoding <string> The encoding for value if value is a string. Default: 'utf8'.
Returns: <Buffer> A reference to buf.

Fills buf with the specified value. If the offset and end are not given, the entire buf will be filled:

import { Buffer } from 'node:buffer';

// Fill a `Buffer` with the ASCII character 'h'.
const b = Buffer.allocUnsafe(50).fill('h');

console.log(b.toString());
// Prints 50 'h' characters.

// Fill a buffer with an empty string.
const c = Buffer.allocUnsafe(5).fill('');

console.log(c.fill(''));
// Prints: <Buffer 00 00 00 00 00>
value is coerced to a uint32 value if it is not a string, Buffer, or integer. If the resulting integer is greater than 255 (decimal), buf will be filled with value & 255.

If the final write of a fill() operation falls on a multi-byte character, then only the bytes of that character that fit into buf are written:

import { Buffer } from 'node:buffer';

// Fill a `Buffer` with a character that takes up two bytes in UTF-8.
console.log(Buffer.allocUnsafe(5).fill('\u0222'));
// Prints: <Buffer c8 a2 c8 a2 c8>

If value contains invalid characters, it is truncated; if no valid fill data remains, an exception is thrown:

import { Buffer } from 'node:buffer';

const buf = Buffer.allocUnsafe(5);

console.log(buf.fill('a'));
// Prints: <Buffer 61 61 61 61 61>
console.log(buf.fill('aazz', 'hex'));
// Prints: <Buffer aa aa aa aa aa>
console.log(buf.fill('zz', 'hex'));
// Throws an exception.
buf.includes(value[, byteOffset][, encoding])
Added in: v5.3.0

value <string> | <Buffer> | <Uint8Array> | <integer> What to search for.
byteOffset <integer> Where to begin searching in buf. If negative, then offset is calculated from the end of buf. Default: 0.
encoding <string> If value is a string, this is its encoding. Default: 'utf8'.
Returns: <boolean> true if value was found in buf, false otherwise.

Equivalent to buf.indexOf() !== -1.

import { Buffer } from 'node:buffer';

const buf = Buffer.from('this is a buffer');

console.log(buf.includes('this'));
// Prints: true
console.log(buf.includes('is'));
// Prints: true
console.log(buf.includes(Buffer.from('a buffer')));
// Prints: true
console.log(buf.includes(97));
// Prints: true (97 is the decimal ASCII value for 'a')
console.log(buf.includes(Buffer.from('a buffer example')));
// Prints: false
console.log(buf.includes(Buffer.from('a buffer example').slice(0, 8)));
// Prints: true
console.log(buf.includes('this', 4));
// Prints: false
buf.indexOf(value[, byteOffset][, encoding])

value <string> | <Buffer> | <Uint8Array> | <integer> What to search for.
byteOffset <integer> Where to begin searching in buf. If negative, then offset is calculated from the end of buf. Default: 0.
encoding <string> If value is a string, this is the encoding used to determine the binary representation of the string that will be searched for in buf. Default: 'utf8'.
Returns: <integer> The index of the first occurrence of value in buf, or -1 if buf does not contain value.

If value is:

a string, value is interpreted according to the character encoding in encoding.
a Buffer or <Uint8Array>, value will be used in its entirety. To compare a partial Buffer, use buf.subarray.
a number, value will be interpreted as an unsigned 8-bit integer value between 0 and 255.

import { Buffer } from 'node:buffer';

const buf = Buffer.from('this is a buffer');

console.log(buf.indexOf('this'));
// Prints: 0
console.log(buf.indexOf('is'));
// Prints: 2
console.log(buf.indexOf(Buffer.from('a buffer')));
// Prints: 8
console.log(buf.indexOf(97));
// Prints: 8 (97 is the decimal ASCII value for 'a')
console.log(buf.indexOf(Buffer.from('a buffer example')));
// Prints: -1
console.log(buf.indexOf(Buffer.from('a buffer example').slice(0, 8)));
// Prints: 8

const utf16Buffer = Buffer.from('\u039a\u0391\u03a3\u03a3\u0395', 'utf16le');

console.log(utf16Buffer.indexOf('\u03a3', 0, 'utf16le'));
// Prints: 4
console.log(utf16Buffer.indexOf('\u03a3', -4, 'utf16le'));
// Prints: 6
If value is not a string, number, or Buffer, this method will throw a TypeError. If value is a number, it will be coerced to a valid byte value, an integer between 0 and 255.

If byteOffset is not a number, it will be coerced to a number. If the result of coercion is NaN or 0, then the entire buffer will be searched. This behavior matches String.prototype.indexOf().

import { Buffer } from 'node:buffer';

const b = Buffer.from('abcdef');

// Passing a value that's a number, but not a valid byte.
// Prints: 2, equivalent to searching for 99 or 'c'.
console.log(b.indexOf(99.9));
console.log(b.indexOf(256 + 99));

// Passing a byteOffset that coerces to NaN or 0.
// Prints: 1, searching the whole buffer.
console.log(b.indexOf('b', undefined));
console.log(b.indexOf('b', {}));
console.log(b.indexOf('b', null));
console.log(b.indexOf('b', []));
If value is an empty string or empty Buffer and byteOffset is less than buf.length, byteOffset will be returned. If value is empty and byteOffset is at least buf.length, buf.length will be returned.
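The empty-value behavior described above, in a minimal sketch:

import { Buffer } from 'node:buffer';

const buf = Buffer.from('abc');

console.log(buf.indexOf(''));
// Prints: 0
console.log(buf.indexOf('', 2));
// Prints: 2
console.log(buf.indexOf('', 5));
// Prints: 3 (clamped to buf.length)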
buf.keys()
Added in: v1.1.0

Creates and returns an iterator of buf keys (indexes).

import { Buffer } from 'node:buffer';

const buf = Buffer.from('buffer');

for (const key of buf.keys()) {
  console.log(key);
}
// Prints:
//   0
//   1
//   2
//   3
//   4
//   5
buf.lastIndexOf(value[, byteOffset][, encoding])

value <string> | <Buffer> | <Uint8Array> | <integer> What to search for.
byteOffset <integer> Where to begin searching in buf. If negative, then offset is calculated from the end of buf. Default: buf.length - 1.
encoding <string> If value is a string, this is the encoding used to determine the binary representation of the string that will be searched for in buf. Default: 'utf8'.
Returns: <integer> The index of the last occurrence of value in buf, or -1 if buf does not contain value.

Identical to buf.indexOf(), except the last occurrence of value is found rather than the first occurrence.

import { Buffer } from 'node:buffer';

const buf = Buffer.from('this buffer is a buffer');

console.log(buf.lastIndexOf('this'));
// Prints: 0
console.log(buf.lastIndexOf('buffer'));
// Prints: 17
console.log(buf.lastIndexOf(Buffer.from('buffer')));
// Prints: 17
console.log(buf.lastIndexOf(97));
// Prints: 15 (97 is the decimal ASCII value for 'a')
console.log(buf.lastIndexOf(Buffer.from('yolo')));
// Prints: -1
console.log(buf.lastIndexOf('buffer', 5));
// Prints: 5
console.log(buf.lastIndexOf('buffer', 4));
// Prints: -1

const utf16Buffer = Buffer.from('\u039a\u0391\u03a3\u03a3\u0395', 'utf16le');

console.log(utf16Buffer.lastIndexOf('\u03a3', undefined, 'utf16le'));
// Prints: 6
console.log(utf16Buffer.lastIndexOf('\u03a3', -5, 'utf16le'));
// Prints: 4
If value is not a string, number, or Buffer, this method will throw a TypeError. If value is a number, it will be coerced to a valid byte value, an integer between 0 and 255.

If byteOffset is not a number, it will be coerced to a number. Any arguments that coerce to NaN, like {} or undefined, will search the whole buffer. This behavior matches String.prototype.lastIndexOf().

import { Buffer } from 'node:buffer';

const b = Buffer.from('abcdef');

// Passing a value that's a number, but not a valid byte.
// Prints: 2, equivalent to searching for 99 or 'c'.
console.log(b.lastIndexOf(99.9));
console.log(b.lastIndexOf(256 + 99));

// Passing a byteOffset that coerces to NaN.
// Prints: 1, searching the whole buffer.
console.log(b.lastIndexOf('b', undefined));
console.log(b.lastIndexOf('b', {}));

// Passing a byteOffset that coerces to 0.
// Prints: -1, equivalent to passing 0.
console.log(b.lastIndexOf('b', null));
console.log(b.lastIndexOf('b', []));
If value is an empty string or empty Buffer, byteOffset will be returned.
buf.length
Added in: v0.1.90

Returns the number of bytes in buf.

import { Buffer } from 'node:buffer';

// Create a `Buffer` and write a shorter string to it using UTF-8.
const buf = Buffer.alloc(1234);

console.log(buf.length);
// Prints: 1234

buf.write('some string', 0, 'utf8');

console.log(buf.length);
// Prints: 1234
buf.parent
Deprecated since: v8.0.0

The buf.parent property is a deprecated alias for buf.buffer.
buf.readBigInt64BE([offset])
Added in: v12.0.0, v10.20.0

offset <integer> Number of bytes to skip before starting to read. Must satisfy: 0 <= offset <= buf.length - 8. Default: 0.

Reads a signed, big-endian 64-bit integer from buf at the specified offset.

Integers read from a Buffer are interpreted as two's complement signed values.
buf.readBigInt64LE([offset])
Added in: v12.0.0, v10.20.0

offset <integer> Number of bytes to skip before starting to read. Must satisfy: 0 <= offset <= buf.length - 8. Default: 0.

Reads a signed, little-endian 64-bit integer from buf at the specified offset.

Integers read from a Buffer are interpreted as two's complement signed values.
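A small sketch covering both variants; the bytes are chosen so the sign bit is set and the two's complement interpretation is visible:

import { Buffer } from 'node:buffer';

const buf = Buffer.from([0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xfe]);

console.log(buf.readBigInt64BE(0));
// Prints: -2n
console.log(buf.readBigInt64LE(0));
// Prints: -72057594037927937n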
buf.readBigUInt64BE([offset])

offset <integer> Number of bytes to skip before starting to read. Must satisfy: 0 <= offset <= buf.length - 8. Default: 0.

Reads an unsigned, big-endian 64-bit integer from buf at the specified offset.

This function is also available under the readBigUint64BE alias.

import { Buffer } from 'node:buffer';

const buf = Buffer.from([0x00, 0x00, 0x00, 0x00, 0xff, 0xff, 0xff, 0xff]);

console.log(buf.readBigUInt64BE(0));
// Prints: 4294967295n
buf.readBigUInt64LE([offset])

offset <integer> Number of bytes to skip before starting to read. Must satisfy: 0 <= offset <= buf.length - 8. Default: 0.

Reads an unsigned, little-endian 64-bit integer from buf at the specified offset.

This function is also available under the readBigUint64LE alias.

import { Buffer } from 'node:buffer';

const buf = Buffer.from([0x00, 0x00, 0x00, 0x00, 0xff, 0xff, 0xff, 0xff]);

console.log(buf.readBigUInt64LE(0));
// Prints: 18446744069414584320n
buf.readDoubleBE([offset])

offset <integer> Number of bytes to skip before starting to read. Must satisfy 0 <= offset <= buf.length - 8. Default: 0.

Reads a 64-bit, big-endian double from buf at the specified offset.

import { Buffer } from 'node:buffer';

const buf = Buffer.from([1, 2, 3, 4, 5, 6, 7, 8]);

console.log(buf.readDoubleBE(0));
// Prints: 8.20788039913184e-304
buf.readDoubleLE([offset])

offset <integer> Number of bytes to skip before starting to read. Must satisfy 0 <= offset <= buf.length - 8. Default: 0.

Reads a 64-bit, little-endian double from buf at the specified offset.

import { Buffer } from 'node:buffer';

const buf = Buffer.from([1, 2, 3, 4, 5, 6, 7, 8]);

console.log(buf.readDoubleLE(0));
// Prints: 5.447603722011605e-270
console.log(buf.readDoubleLE(1));
// Throws ERR_OUT_OF_RANGE.
buf.readFloatBE([offset])

offset <integer> Number of bytes to skip before starting to read. Must satisfy 0 <= offset <= buf.length - 4. Default: 0.

Reads a 32-bit, big-endian float from buf at the specified offset.

import { Buffer } from 'node:buffer';

const buf = Buffer.from([1, 2, 3, 4]);

console.log(buf.readFloatBE(0));
// Prints: 2.387939260590663e-38
buf.readFloatLE([offset])

offset <integer> Number of bytes to skip before starting to read. Must satisfy 0 <= offset <= buf.length - 4. Default: 0.

Reads a 32-bit, little-endian float from buf at the specified offset.

import { Buffer } from 'node:buffer';

const buf = Buffer.from([1, 2, 3, 4]);

console.log(buf.readFloatLE(0));
// Prints: 1.539989614439558e-36
console.log(buf.readFloatLE(1));
// Throws ERR_OUT_OF_RANGE.
buf.readInt8([offset])

offset <integer> Number of bytes to skip before starting to read. Must satisfy 0 <= offset <= buf.length - 1. Default: 0.

Reads a signed 8-bit integer from buf at the specified offset.

Integers read from a Buffer are interpreted as two's complement signed values.

import { Buffer } from 'node:buffer';

const buf = Buffer.from([-1, 5]);

console.log(buf.readInt8(0));
// Prints: -1
console.log(buf.readInt8(1));
// Prints: 5
console.log(buf.readInt8(2));
// Throws ERR_OUT_OF_RANGE.
buf.readInt16BE([offset])

offset <integer> Number of bytes to skip before starting to read. Must satisfy 0 <= offset <= buf.length - 2. Default: 0.

Reads a signed, big-endian 16-bit integer from buf at the specified offset.

Integers read from a Buffer are interpreted as two's complement signed values.

import { Buffer } from 'node:buffer';

const buf = Buffer.from([0, 5]);

console.log(buf.readInt16BE(0));
// Prints: 5
buf.readInt16LE([offset])

offset <integer> Number of bytes to skip before starting to read. Must satisfy 0 <= offset <= buf.length - 2. Default: 0.

Reads a signed, little-endian 16-bit integer from buf at the specified offset.

Integers read from a Buffer are interpreted as two's complement signed values.

import { Buffer } from 'node:buffer';

const buf = Buffer.from([0, 5]);

console.log(buf.readInt16LE(0));
// Prints: 1280
console.log(buf.readInt16LE(1));
// Throws ERR_OUT_OF_RANGE.
buf.readInt32BE([offset])

offset <integer> Number of bytes to skip before starting to read. Must satisfy 0 <= offset <= buf.length - 4. Default: 0.

Reads a signed, big-endian 32-bit integer from buf at the specified offset.

Integers read from a Buffer are interpreted as two's complement signed values.

import { Buffer } from 'node:buffer';

const buf = Buffer.from([0, 0, 0, 5]);

console.log(buf.readInt32BE(0));
// Prints: 5
buf.readInt32LE([offset])

offset <integer> Number of bytes to skip before starting to read. Must satisfy 0 <= offset <= buf.length - 4. Default: 0.

Reads a signed, little-endian 32-bit integer from buf at the specified offset.

Integers read from a Buffer are interpreted as two's complement signed values.

import { Buffer } from 'node:buffer';

const buf = Buffer.from([0, 0, 0, 5]);

console.log(buf.readInt32LE(0));
// Prints: 83886080
console.log(buf.readInt32LE(1));
// Throws ERR_OUT_OF_RANGE.
buf.readIntBE(offset, byteLength)

offset <integer> Number of bytes to skip before starting to read. Must satisfy 0 <= offset <= buf.length - byteLength.
byteLength <integer> Number of bytes to read. Must satisfy 0 < byteLength <= 6.

Reads byteLength number of bytes from buf at the specified offset and interprets the result as a big-endian, two's complement signed value supporting up to 48 bits of accuracy.

import { Buffer } from 'node:buffer';

const buf = Buffer.from([0x12, 0x34, 0x56, 0x78, 0x90, 0xab]);

console.log(buf.readIntBE(0, 6).toString(16));
// Prints: 1234567890ab
console.log(buf.readIntBE(1, 6).toString(16));
// Throws ERR_OUT_OF_RANGE.
console.log(buf.readIntBE(1, 0).toString(16));
// Throws ERR_OUT_OF_RANGE.
buf.readIntLE(offset, byteLength)

offset <integer> Number of bytes to skip before starting to read. Must satisfy 0 <= offset <= buf.length - byteLength.
byteLength <integer> Number of bytes to read. Must satisfy 0 < byteLength <= 6.

Reads byteLength number of bytes from buf at the specified offset and interprets the result as a little-endian, two's complement signed value supporting up to 48 bits of accuracy.

import { Buffer } from 'node:buffer';

const buf = Buffer.from([0x12, 0x34, 0x56, 0x78, 0x90, 0xab]);

console.log(buf.readIntLE(0, 6).toString(16));
// Prints: -546f87a9cbee
buf.readUInt8([offset])

offset <integer> Number of bytes to skip before starting to read. Must satisfy 0 <= offset <= buf.length - 1. Default: 0.

Reads an unsigned 8-bit integer from buf at the specified offset.

This function is also available under the readUint8 alias.

import { Buffer } from 'node:buffer';

const buf = Buffer.from([1, -2]);

console.log(buf.readUInt8(0));
// Prints: 1
console.log(buf.readUInt8(1));
// Prints: 254
console.log(buf.readUInt8(2));
// Throws ERR_OUT_OF_RANGE.
buf.readUInt16BE([offset])

offset <integer> Number of bytes to skip before starting to read. Must satisfy 0 <= offset <= buf.length - 2. Default: 0.

Reads an unsigned, big-endian 16-bit integer from buf at the specified offset.

This function is also available under the readUint16BE alias.

import { Buffer } from 'node:buffer';

const buf = Buffer.from([0x12, 0x34, 0x56]);

console.log(buf.readUInt16BE(0).toString(16));
// Prints: 1234
console.log(buf.readUInt16BE(1).toString(16));
// Prints: 3456
buf.readUInt16LE([offset])

offset <integer> Number of bytes to skip before starting to read. Must satisfy 0 <= offset <= buf.length - 2. Default: 0.

Reads an unsigned, little-endian 16-bit integer from buf at the specified offset.

This function is also available under the readUint16LE alias.

import { Buffer } from 'node:buffer';

const buf = Buffer.from([0x12, 0x34, 0x56]);

console.log(buf.readUInt16LE(0).toString(16));
// Prints: 3412
console.log(buf.readUInt16LE(1).toString(16));
// Prints: 5634
console.log(buf.readUInt16LE(2).toString(16));
// Throws ERR_OUT_OF_RANGE.
buf.readUInt32BE([offset])
#
offset <integer> Number of bytes to skip before starting to read. Must satisfy 0 <= offset <= buf.length - 4. Default: 0.
Reads an unsigned, big-endian 32-bit integer from buf at the specified offset.
This function is also available under the readUint32BE alias.
import { Buffer } from 'node:buffer'; const buf = Buffer.from([0x12, 0x34, 0x56, 0x78]); console.log(buf.readUInt32BE(0).toString(16));
const { Buffer } = require('node:buffer'); const buf = Buffer.from([0x12, 0x34, 0x56, 0x78]); console.log(buf.readUInt32BE(0).toString(16));
buf.readUInt32LE([offset])
#
offset <integer> Number of bytes to skip before starting to read. Must satisfy 0 <= offset <= buf.length - 4. Default: 0.
Reads an unsigned, little-endian 32-bit integer from buf at the specified offset.
This function is also available under the readUint32LE alias.
import { Buffer } from 'node:buffer'; const buf = Buffer.from([0x12, 0x34, 0x56, 0x78]); console.log(buf.readUInt32LE(0).toString(16)); console.log(buf.readUInt32LE(1).toString(16));
const { Buffer } = require('node:buffer'); const buf = Buffer.from([0x12, 0x34, 0x56, 0x78]); console.log(buf.readUInt32LE(0).toString(16)); console.log(buf.readUInt32LE(1).toString(16));
buf.readUIntBE(offset, byteLength)
#
offset <integer> Number of bytes to skip before starting to read. Must satisfy 0 <= offset <= buf.length - byteLength.
byteLength <integer> Number of bytes to read. Must satisfy 0 < byteLength <= 6.
Reads byteLength number of bytes from buf at the specified offset and interprets the result as an unsigned big-endian integer supporting up to 48 bits of accuracy.
This function is also available under the readUintBE alias.
import { Buffer } from 'node:buffer'; const buf = Buffer.from([0x12, 0x34, 0x56, 0x78, 0x90, 0xab]); console.log(buf.readUIntBE(0, 6).toString(16)); console.log(buf.readUIntBE(1, 6).toString(16));
const { Buffer } = require('node:buffer'); const buf = Buffer.from([0x12, 0x34, 0x56, 0x78, 0x90, 0xab]); console.log(buf.readUIntBE(0, 6).toString(16)); console.log(buf.readUIntBE(1, 6).toString(16));
buf.readUIntLE(offset, byteLength)
#
offset <integer> Number of bytes to skip before starting to read. Must satisfy 0 <= offset <= buf.length - byteLength.
byteLength <integer> Number of bytes to read. Must satisfy 0 < byteLength <= 6.
Reads byteLength number of bytes from buf at the specified offset and interprets the result as an unsigned, little-endian integer supporting up to 48 bits of accuracy.
This function is also available under the readUintLE alias.
import { Buffer } from 'node:buffer'; const buf = Buffer.from([0x12, 0x34, 0x56, 0x78, 0x90, 0xab]); console.log(buf.readUIntLE(0, 6).toString(16));
const { Buffer } = require('node:buffer'); const buf = Buffer.from([0x12, 0x34, 0x56, 0x78, 0x90, 0xab]); console.log(buf.readUIntLE(0, 6).toString(16));
buf.subarray([start[, end]])
#
Added in: v3.0.0
start <integer> Where the new Buffer will start. Default: 0.
end <integer> Where the new Buffer will end (not inclusive). Default: buf.length.
Returns a new Buffer that references the same memory as the original, but offset and cropped by the start and end indexes.
Specifying end greater than buf.length will return the same result as that of end equal to buf.length.
This method is inherited from TypedArray.prototype.subarray().
Modifying the new Buffer slice will modify the memory in the original Buffer because the allocated memory of the two objects overlaps.
import { Buffer } from 'node:buffer'; const buf1 = Buffer.allocUnsafe(26); for (let i = 0; i < 26; i++) { buf1[i] = i + 97; } const buf2 = buf1.subarray(0, 3); console.log(buf2.toString('ascii', 0, buf2.length)); buf1[0] = 33; console.log(buf2.toString('ascii', 0, buf2.length));
const { Buffer } = require('node:buffer'); const buf1 = Buffer.allocUnsafe(26); for (let i = 0; i < 26; i++) { buf1[i] = i + 97; } const buf2 = buf1.subarray(0, 3); console.log(buf2.toString('ascii', 0, buf2.length)); buf1[0] = 33; console.log(buf2.toString('ascii', 0, buf2.length));
Specifying negative indexes causes the slice to be generated relative to the end of buf
rather than the beginning.
import { Buffer } from 'node:buffer'; const buf = Buffer.from('buffer'); console.log(buf.subarray(-6, -1).toString()); console.log(buf.subarray(-6, -2).toString()); console.log(buf.subarray(-5, -2).toString());
const { Buffer } = require('node:buffer'); const buf = Buffer.from('buffer'); console.log(buf.subarray(-6, -1).toString()); console.log(buf.subarray(-6, -2).toString()); console.log(buf.subarray(-5, -2).toString());
buf.slice([start[, end]])
#
start <integer> Where the new Buffer will start. Default: 0.
end <integer> Where the new Buffer will end (not inclusive). Default: buf.length.
Returns a new Buffer that references the same memory as the original, but offset and cropped by the start and end indexes.
This method is not compatible with Uint8Array.prototype.slice(), which is a superclass of Buffer. To copy the slice, use Uint8Array.prototype.slice().
import { Buffer } from 'node:buffer'; const buf = Buffer.from('buffer'); const copiedBuf = Uint8Array.prototype.slice.call(buf); copiedBuf[0]++; console.log(copiedBuf.toString()); console.log(buf.toString()); const notReallyCopiedBuf = buf.slice(); notReallyCopiedBuf[0]++; console.log(notReallyCopiedBuf.toString()); console.log(buf.toString());
const { Buffer } = require('node:buffer'); const buf = Buffer.from('buffer'); const copiedBuf = Uint8Array.prototype.slice.call(buf); copiedBuf[0]++; console.log(copiedBuf.toString()); console.log(buf.toString()); const notReallyCopiedBuf = buf.slice(); notReallyCopiedBuf[0]++; console.log(notReallyCopiedBuf.toString()); console.log(buf.toString());
buf.swap16()
#
Added in: v5.10.0
Returns: <Buffer> A reference to buf.
Interprets buf as an array of unsigned 16-bit integers and swaps the byte order in-place. Throws ERR_INVALID_BUFFER_SIZE if buf.length is not a multiple of 2.
import { Buffer } from 'node:buffer'; const buf1 = Buffer.from([0x1, 0x2, 0x3, 0x4, 0x5, 0x6, 0x7, 0x8]); console.log(buf1); buf1.swap16(); console.log(buf1); const buf2 = Buffer.from([0x1, 0x2, 0x3]); buf2.swap16();
const { Buffer } = require('node:buffer'); const buf1 = Buffer.from([0x1, 0x2, 0x3, 0x4, 0x5, 0x6, 0x7, 0x8]); console.log(buf1); buf1.swap16(); console.log(buf1); const buf2 = Buffer.from([0x1, 0x2, 0x3]); buf2.swap16();
One convenient use of buf.swap16()
is to perform a fast in-place conversion between UTF-16 little-endian and UTF-16 big-endian:
import { Buffer } from 'node:buffer'; const buf = Buffer.from('This is little-endian UTF-16', 'utf16le'); buf.swap16();
const { Buffer } = require('node:buffer'); const buf = Buffer.from('This is little-endian UTF-16', 'utf16le'); buf.swap16();
buf.swap32()
#
Added in: v5.10.0
Returns: <Buffer> A reference to buf.
Interprets buf as an array of unsigned 32-bit integers and swaps the byte order in-place. Throws ERR_INVALID_BUFFER_SIZE if buf.length is not a multiple of 4.
import { Buffer } from 'node:buffer'; const buf1 = Buffer.from([0x1, 0x2, 0x3, 0x4, 0x5, 0x6, 0x7, 0x8]); console.log(buf1); buf1.swap32(); console.log(buf1); const buf2 = Buffer.from([0x1, 0x2, 0x3]); buf2.swap32();
const { Buffer } = require('node:buffer'); const buf1 = Buffer.from([0x1, 0x2, 0x3, 0x4, 0x5, 0x6, 0x7, 0x8]); console.log(buf1); buf1.swap32(); console.log(buf1); const buf2 = Buffer.from([0x1, 0x2, 0x3]); buf2.swap32();
buf.swap64()
#
Added in: v6.3.0
Returns: <Buffer> A reference to buf.
Interprets buf as an array of 64-bit numbers and swaps byte order in-place. Throws ERR_INVALID_BUFFER_SIZE if buf.length is not a multiple of 8.
import { Buffer } from 'node:buffer'; const buf1 = Buffer.from([0x1, 0x2, 0x3, 0x4, 0x5, 0x6, 0x7, 0x8]); console.log(buf1); buf1.swap64(); console.log(buf1); const buf2 = Buffer.from([0x1, 0x2, 0x3]); buf2.swap64();
const { Buffer } = require('node:buffer'); const buf1 = Buffer.from([0x1, 0x2, 0x3, 0x4, 0x5, 0x6, 0x7, 0x8]); console.log(buf1); buf1.swap64(); console.log(buf1); const buf2 = Buffer.from([0x1, 0x2, 0x3]); buf2.swap64();
buf.toJSON()
#
Added in: v0.9.2
Returns a JSON representation of buf. JSON.stringify() implicitly calls this function when stringifying a Buffer instance.
Buffer.from() accepts objects in the format returned from this method. In particular, Buffer.from(buf.toJSON()) works like Buffer.from(buf).
import { Buffer } from 'node:buffer'; const buf = Buffer.from([0x1, 0x2, 0x3, 0x4, 0x5]); const json = JSON.stringify(buf); console.log(json); const copy = JSON.parse(json, (key, value) => { return value && value.type === 'Buffer' ? Buffer.from(value) : value; }); console.log(copy);
const { Buffer } = require('node:buffer'); const buf = Buffer.from([0x1, 0x2, 0x3, 0x4, 0x5]); const json = JSON.stringify(buf); console.log(json); const copy = JSON.parse(json, (key, value) => { return value && value.type === 'Buffer' ? Buffer.from(value) : value; }); console.log(copy);
buf.toString([encoding[, start[, end]]])
#
Added in: v0.1.90
encoding <string> The character encoding to use. Default: 'utf8'.
start <integer> The byte offset to start decoding at. Default: 0.
end <integer> The byte offset to stop decoding at (not inclusive). Default: buf.length.
Decodes buf to a string according to the specified character encoding in encoding. start and end may be passed to decode only a subset of buf.
If encoding is 'utf8' and a byte sequence in the input is not valid UTF-8, then each invalid byte is replaced with the replacement character U+FFFD.
The maximum length of a string instance (in UTF-16 code units) is available as buffer.constants.MAX_STRING_LENGTH.
import { Buffer } from 'node:buffer'; const buf1 = Buffer.allocUnsafe(26); for (let i = 0; i < 26; i++) { buf1[i] = i + 97; } console.log(buf1.toString('utf8')); console.log(buf1.toString('utf8', 0, 5)); const buf2 = Buffer.from('tést'); console.log(buf2.toString('hex')); console.log(buf2.toString('utf8', 0, 3)); console.log(buf2.toString(undefined, 0, 3));
const { Buffer } = require('node:buffer'); const buf1 = Buffer.allocUnsafe(26); for (let i = 0; i < 26; i++) { buf1[i] = i + 97; } console.log(buf1.toString('utf8')); console.log(buf1.toString('utf8', 0, 5)); const buf2 = Buffer.from('tést'); console.log(buf2.toString('hex')); console.log(buf2.toString('utf8', 0, 3)); console.log(buf2.toString(undefined, 0, 3));
buf.values()
#
Added in: v1.1.0
Creates and returns an iterator for buf
values (bytes). This function is called automatically when a Buffer
is used in a for..of
statement.
import { Buffer } from 'node:buffer'; const buf = Buffer.from('buffer'); for (const value of buf.values()) { console.log(value); } for (const value of buf) { console.log(value); }
const { Buffer } = require('node:buffer'); const buf = Buffer.from('buffer'); for (const value of buf.values()) { console.log(value); } for (const value of buf) { console.log(value); }
buf.write(string[, offset[, length]][, encoding])
#
Added in: v0.1.90
string <string> String to write to buf.
offset <integer> Number of bytes to skip before starting to write string. Default: 0.
length <integer> Maximum number of bytes to write (written bytes will not exceed buf.length - offset). Default: buf.length - offset.
encoding <string> The character encoding of string. Default: 'utf8'.
Writes string to buf at offset according to the character encoding in encoding. The length parameter is the number of bytes to write. If buf did not contain enough space to fit the entire string, only part of string will be written. However, partially encoded characters will not be written.
import { Buffer } from 'node:buffer'; const buf = Buffer.alloc(256); const len = buf.write('\u00bd + \u00bc = \u00be', 0); console.log(`${len} bytes: ${buf.toString('utf8', 0, len)}`); const buffer = Buffer.alloc(10); const length = buffer.write('abcd', 8); console.log(`${length} bytes: ${buffer.toString('utf8', 8, 10)}`);
const { Buffer } = require('node:buffer'); const buf = Buffer.alloc(256); const len = buf.write('\u00bd + \u00bc = \u00be', 0); console.log(`${len} bytes: ${buf.toString('utf8', 0, len)}`); const buffer = Buffer.alloc(10); const length = buffer.write('abcd', 8); console.log(`${length} bytes: ${buffer.toString('utf8', 8, 10)}`);
buf.writeBigInt64BE(value[, offset])
#
Added in: v12.0.0, v10.20.0
value <bigint> Number to be written to buf.
offset <integer> Number of bytes to skip before starting to write. Must satisfy: 0 <= offset <= buf.length - 8. Default: 0.
Returns: <integer> offset plus the number of bytes written.
Writes value to buf at the specified offset as big-endian.
value is interpreted and written as a two's complement signed integer.
import { Buffer } from 'node:buffer'; const buf = Buffer.allocUnsafe(8); buf.writeBigInt64BE(0x0102030405060708n, 0); console.log(buf);
const { Buffer } = require('node:buffer'); const buf = Buffer.allocUnsafe(8); buf.writeBigInt64BE(0x0102030405060708n, 0); console.log(buf);
buf.writeBigInt64LE(value[, offset])
#
Added in: v12.0.0, v10.20.0
value <bigint> Number to be written to buf.
offset <integer> Number of bytes to skip before starting to write. Must satisfy: 0 <= offset <= buf.length - 8. Default: 0.
Returns: <integer> offset plus the number of bytes written.
Writes value to buf at the specified offset as little-endian.
value is interpreted and written as a two's complement signed integer.
import { Buffer } from 'node:buffer'; const buf = Buffer.allocUnsafe(8); buf.writeBigInt64LE(0x0102030405060708n, 0); console.log(buf);
const { Buffer } = require('node:buffer'); const buf = Buffer.allocUnsafe(8); buf.writeBigInt64LE(0x0102030405060708n, 0); console.log(buf);
buf.writeBigUInt64BE(value[, offset])
#
value <bigint> Number to be written to buf.
offset <integer> Number of bytes to skip before starting to write. Must satisfy: 0 <= offset <= buf.length - 8. Default: 0.
Returns: <integer> offset plus the number of bytes written.
Writes value to buf at the specified offset as big-endian.
This function is also available under the writeBigUint64BE alias.
import { Buffer } from 'node:buffer'; const buf = Buffer.allocUnsafe(8); buf.writeBigUInt64BE(0xdecafafecacefaden, 0); console.log(buf);
const { Buffer } = require('node:buffer'); const buf = Buffer.allocUnsafe(8); buf.writeBigUInt64BE(0xdecafafecacefaden, 0); console.log(buf);
buf.writeBigUInt64LE(value[, offset])
#
value <bigint> Number to be written to buf.
offset <integer> Number of bytes to skip before starting to write. Must satisfy: 0 <= offset <= buf.length - 8. Default: 0.
Returns: <integer> offset plus the number of bytes written.
Writes value to buf at the specified offset as little-endian.
import { Buffer } from 'node:buffer'; const buf = Buffer.allocUnsafe(8); buf.writeBigUInt64LE(0xdecafafecacefaden, 0); console.log(buf);
const { Buffer } = require('node:buffer'); const buf = Buffer.allocUnsafe(8); buf.writeBigUInt64LE(0xdecafafecacefaden, 0); console.log(buf);
This function is also available under the writeBigUint64LE alias.
buf.writeDoubleBE(value[, offset])
#
value <number> Number to be written to buf.
offset <integer> Number of bytes to skip before starting to write. Must satisfy 0 <= offset <= buf.length - 8. Default: 0.
Returns: <integer> offset plus the number of bytes written.
Writes value to buf at the specified offset as big-endian. The value must be a JavaScript number. Behavior is undefined when value is anything other than a JavaScript number.
import { Buffer } from 'node:buffer'; const buf = Buffer.allocUnsafe(8); buf.writeDoubleBE(123.456, 0); console.log(buf);
const { Buffer } = require('node:buffer'); const buf = Buffer.allocUnsafe(8); buf.writeDoubleBE(123.456, 0); console.log(buf);
buf.writeDoubleLE(value[, offset])
#
value <number> Number to be written to buf.
offset <integer> Number of bytes to skip before starting to write. Must satisfy 0 <= offset <= buf.length - 8. Default: 0.
Returns: <integer> offset plus the number of bytes written.
Writes value to buf at the specified offset as little-endian. The value must be a JavaScript number. Behavior is undefined when value is anything other than a JavaScript number.
import { Buffer } from 'node:buffer'; const buf = Buffer.allocUnsafe(8); buf.writeDoubleLE(123.456, 0); console.log(buf);
const { Buffer } = require('node:buffer'); const buf = Buffer.allocUnsafe(8); buf.writeDoubleLE(123.456, 0); console.log(buf);
buf.writeFloatBE(value[, offset])
#
value <number> Number to be written to buf.
offset <integer> Number of bytes to skip before starting to write. Must satisfy 0 <= offset <= buf.length - 4. Default: 0.
Returns: <integer> offset plus the number of bytes written.
Writes value to buf at the specified offset as big-endian. Behavior is undefined when value is anything other than a JavaScript number.
import { Buffer } from 'node:buffer'; const buf = Buffer.allocUnsafe(4); buf.writeFloatBE(0xcafebabe, 0); console.log(buf);
const { Buffer } = require('node:buffer'); const buf = Buffer.allocUnsafe(4); buf.writeFloatBE(0xcafebabe, 0); console.log(buf);
buf.writeFloatLE(value[, offset])
#
value <number> Number to be written to buf.
offset <integer> Number of bytes to skip before starting to write. Must satisfy 0 <= offset <= buf.length - 4. Default: 0.
Returns: <integer> offset plus the number of bytes written.
Writes value to buf at the specified offset as little-endian. Behavior is undefined when value is anything other than a JavaScript number.
import { Buffer } from 'node:buffer'; const buf = Buffer.allocUnsafe(4); buf.writeFloatLE(0xcafebabe, 0); console.log(buf);
const { Buffer } = require('node:buffer'); const buf = Buffer.allocUnsafe(4); buf.writeFloatLE(0xcafebabe, 0); console.log(buf);
buf.writeInt8(value[, offset])
#
value <integer> Number to be written to buf.
offset <integer> Number of bytes to skip before starting to write. Must satisfy 0 <= offset <= buf.length - 1. Default: 0.
Returns: <integer> offset plus the number of bytes written.
Writes value to buf at the specified offset. value must be a valid signed 8-bit integer. Behavior is undefined when value is anything other than a signed 8-bit integer.
value is interpreted and written as a two's complement signed integer.
import { Buffer } from 'node:buffer'; const buf = Buffer.allocUnsafe(2); buf.writeInt8(2, 0); buf.writeInt8(-2, 1); console.log(buf);
const { Buffer } = require('node:buffer'); const buf = Buffer.allocUnsafe(2); buf.writeInt8(2, 0); buf.writeInt8(-2, 1); console.log(buf);
buf.writeInt16BE(value[, offset])
#
value <integer> Number to be written to buf.
offset <integer> Number of bytes to skip before starting to write. Must satisfy 0 <= offset <= buf.length - 2. Default: 0.
Returns: <integer> offset plus the number of bytes written.
Writes value to buf at the specified offset as big-endian. The value must be a valid signed 16-bit integer. Behavior is undefined when value is anything other than a signed 16-bit integer.
The value is interpreted and written as a two's complement signed integer.
import { Buffer } from 'node:buffer'; const buf = Buffer.allocUnsafe(2); buf.writeInt16BE(0x0102, 0); console.log(buf);
const { Buffer } = require('node:buffer'); const buf = Buffer.allocUnsafe(2); buf.writeInt16BE(0x0102, 0); console.log(buf);
buf.writeInt16LE(value[, offset])
#
value <integer> Number to be written to buf.
offset <integer> Number of bytes to skip before starting to write. Must satisfy 0 <= offset <= buf.length - 2. Default: 0.
Returns: <integer> offset plus the number of bytes written.
Writes value to buf at the specified offset as little-endian. The value must be a valid signed 16-bit integer. Behavior is undefined when value is anything other than a signed 16-bit integer.
The value is interpreted and written as a two's complement signed integer.
import { Buffer } from 'node:buffer'; const buf = Buffer.allocUnsafe(2); buf.writeInt16LE(0x0304, 0); console.log(buf);
const { Buffer } = require('node:buffer'); const buf = Buffer.allocUnsafe(2); buf.writeInt16LE(0x0304, 0); console.log(buf);
buf.writeInt32BE(value[, offset])
#
value <integer> Number to be written to buf.
offset <integer> Number of bytes to skip before starting to write. Must satisfy 0 <= offset <= buf.length - 4. Default: 0.
Returns: <integer> offset plus the number of bytes written.
Writes value to buf at the specified offset as big-endian. The value must be a valid signed 32-bit integer. Behavior is undefined when value is anything other than a signed 32-bit integer.
The value is interpreted and written as a two's complement signed integer.
import { Buffer } from 'node:buffer'; const buf = Buffer.allocUnsafe(4); buf.writeInt32BE(0x01020304, 0); console.log(buf);
const { Buffer } = require('node:buffer'); const buf = Buffer.allocUnsafe(4); buf.writeInt32BE(0x01020304, 0); console.log(buf);
buf.writeInt32LE(value[, offset])
#
value <integer> Number to be written to buf.
offset <integer> Number of bytes to skip before starting to write. Must satisfy 0 <= offset <= buf.length - 4. Default: 0.
Returns: <integer> offset plus the number of bytes written.
Writes value to buf at the specified offset as little-endian. The value must be a valid signed 32-bit integer. Behavior is undefined when value is anything other than a signed 32-bit integer.
The value is interpreted and written as a two's complement signed integer.
import { Buffer } from 'node:buffer'; const buf = Buffer.allocUnsafe(4); buf.writeInt32LE(0x05060708, 0); console.log(buf);
const { Buffer } = require('node:buffer'); const buf = Buffer.allocUnsafe(4); buf.writeInt32LE(0x05060708, 0); console.log(buf);
buf.writeIntBE(value, offset, byteLength)
#
value <integer> Number to be written to buf.
offset <integer> Number of bytes to skip before starting to write. Must satisfy 0 <= offset <= buf.length - byteLength.
byteLength <integer> Number of bytes to write. Must satisfy 0 < byteLength <= 6.
Returns: <integer> offset plus the number of bytes written.
Writes byteLength bytes of value to buf at the specified offset as big-endian. Supports up to 48 bits of accuracy. Behavior is undefined when value is anything other than a signed integer.
import { Buffer } from 'node:buffer'; const buf = Buffer.allocUnsafe(6); buf.writeIntBE(0x1234567890ab, 0, 6); console.log(buf);
const { Buffer } = require('node:buffer'); const buf = Buffer.allocUnsafe(6); buf.writeIntBE(0x1234567890ab, 0, 6); console.log(buf);
buf.writeIntLE(value, offset, byteLength)
#
value <integer> Number to be written to buf.
offset <integer> Number of bytes to skip before starting to write. Must satisfy 0 <= offset <= buf.length - byteLength.
byteLength <integer> Number of bytes to write. Must satisfy 0 < byteLength <= 6.
Returns: <integer> offset plus the number of bytes written.
Writes byteLength bytes of value to buf at the specified offset as little-endian. Supports up to 48 bits of accuracy. Behavior is undefined when value is anything other than a signed integer.
import { Buffer } from 'node:buffer'; const buf = Buffer.allocUnsafe(6); buf.writeIntLE(0x1234567890ab, 0, 6); console.log(buf);
const { Buffer } = require('node:buffer'); const buf = Buffer.allocUnsafe(6); buf.writeIntLE(0x1234567890ab, 0, 6); console.log(buf);
buf.writeUInt8(value[, offset])
#
value <integer> Number to be written to buf.
offset <integer> Number of bytes to skip before starting to write. Must satisfy 0 <= offset <= buf.length - 1. Default: 0.
Returns: <integer> offset plus the number of bytes written.
Writes value to buf at the specified offset. value must be a valid unsigned 8-bit integer. Behavior is undefined when value is anything other than an unsigned 8-bit integer.
This function is also available under the writeUint8 alias.
import { Buffer } from 'node:buffer'; const buf = Buffer.allocUnsafe(4); buf.writeUInt8(0x3, 0); buf.writeUInt8(0x4, 1); buf.writeUInt8(0x23, 2); buf.writeUInt8(0x42, 3); console.log(buf);
const { Buffer } = require('node:buffer'); const buf = Buffer.allocUnsafe(4); buf.writeUInt8(0x3, 0); buf.writeUInt8(0x4, 1); buf.writeUInt8(0x23, 2); buf.writeUInt8(0x42, 3); console.log(buf);
buf.writeUInt16BE(value[, offset])
#
value <integer> Number to be written to buf.
offset <integer> Number of bytes to skip before starting to write. Must satisfy 0 <= offset <= buf.length - 2. Default: 0.
Returns: <integer> offset plus the number of bytes written.
Writes value to buf at the specified offset as big-endian. The value must be a valid unsigned 16-bit integer. Behavior is undefined when value is anything other than an unsigned 16-bit integer.
This function is also available under the writeUint16BE alias.
import { Buffer } from 'node:buffer'; const buf = Buffer.allocUnsafe(4); buf.writeUInt16BE(0xdead, 0); buf.writeUInt16BE(0xbeef, 2); console.log(buf);
const { Buffer } = require('node:buffer'); const buf = Buffer.allocUnsafe(4); buf.writeUInt16BE(0xdead, 0); buf.writeUInt16BE(0xbeef, 2); console.log(buf);
buf.writeUInt16LE(value[, offset])
#
value <integer> Number to be written to buf.
offset <integer> Number of bytes to skip before starting to write. Must satisfy 0 <= offset <= buf.length - 2. Default: 0.
Returns: <integer> offset plus the number of bytes written.
Writes value to buf at the specified offset as little-endian. The value must be a valid unsigned 16-bit integer. Behavior is undefined when value is anything other than an unsigned 16-bit integer.
This function is also available under the writeUint16LE alias.
import { Buffer } from 'node:buffer'; const buf = Buffer.allocUnsafe(4); buf.writeUInt16LE(0xdead, 0); buf.writeUInt16LE(0xbeef, 2); console.log(buf);
const { Buffer } = require('node:buffer'); const buf = Buffer.allocUnsafe(4); buf.writeUInt16LE(0xdead, 0); buf.writeUInt16LE(0xbeef, 2); console.log(buf);
buf.writeUInt32BE(value[, offset])
#
value <integer> Number to be written to buf.
offset <integer> Number of bytes to skip before starting to write. Must satisfy 0 <= offset <= buf.length - 4. Default: 0.
Returns: <integer> offset plus the number of bytes written.
Writes value to buf at the specified offset as big-endian. The value must be a valid unsigned 32-bit integer. Behavior is undefined when value is anything other than an unsigned 32-bit integer.
This function is also available under the writeUint32BE alias.
import { Buffer } from 'node:buffer'; const buf = Buffer.allocUnsafe(4); buf.writeUInt32BE(0xfeedface, 0); console.log(buf);
const { Buffer } = require('node:buffer'); const buf = Buffer.allocUnsafe(4); buf.writeUInt32BE(0xfeedface, 0); console.log(buf);
buf.writeUInt32LE(value[, offset])
#
value <integer> Number to be written to buf.
offset <integer> Number of bytes to skip before starting to write. Must satisfy 0 <= offset <= buf.length - 4. Default: 0.
Returns: <integer> offset plus the number of bytes written.
Writes value to buf at the specified offset as little-endian. The value must be a valid unsigned 32-bit integer. Behavior is undefined when value is anything other than an unsigned 32-bit integer.
This function is also available under the writeUint32LE alias.
import { Buffer } from 'node:buffer'; const buf = Buffer.allocUnsafe(4); buf.writeUInt32LE(0xfeedface, 0); console.log(buf);
const { Buffer } = require('node:buffer'); const buf = Buffer.allocUnsafe(4); buf.writeUInt32LE(0xfeedface, 0); console.log(buf);
buf.writeUIntBE(value, offset, byteLength)
#
value <integer> Number to be written to buf.
offset <integer> Number of bytes to skip before starting to write. Must satisfy 0 <= offset <= buf.length - byteLength.
byteLength <integer> Number of bytes to write. Must satisfy 0 < byteLength <= 6.
Returns: <integer> offset plus the number of bytes written.
Writes byteLength bytes of value to buf at the specified offset as big-endian. Supports up to 48 bits of accuracy. Behavior is undefined when value is anything other than an unsigned integer.
This function is also available under the writeUintBE alias.
import { Buffer } from 'node:buffer'; const buf = Buffer.allocUnsafe(6); buf.writeUIntBE(0x1234567890ab, 0, 6); console.log(buf);
const { Buffer } = require('node:buffer'); const buf = Buffer.allocUnsafe(6); buf.writeUIntBE(0x1234567890ab, 0, 6); console.log(buf);
buf.writeUIntLE(value, offset, byteLength)
#
value <integer> Number to be written to buf.
offset <integer> Number of bytes to skip before starting to write. Must satisfy 0 <= offset <= buf.length - byteLength.
byteLength <integer> Number of bytes to write. Must satisfy 0 < byteLength <= 6.
Returns: <integer> offset plus the number of bytes written.
Writes byteLength bytes of value to buf at the specified offset as little-endian. Supports up to 48 bits of accuracy. Behavior is undefined when value is anything other than an unsigned integer.
This function is also available under the writeUintLE alias.
import { Buffer } from 'node:buffer'; const buf = Buffer.allocUnsafe(6); buf.writeUIntLE(0x1234567890ab, 0, 6); console.log(buf);
const { Buffer } = require('node:buffer'); const buf = Buffer.allocUnsafe(6); buf.writeUIntLE(0x1234567890ab, 0, 6); console.log(buf);
Class: File
#
A <File> provides information about files.
new buffer.File(sources, fileName[, options])
#
Added in: v19.2.0, v18.13.0
sources <string[]> | <ArrayBuffer[]> | <TypedArray[]> | <DataView[]> | <Blob[]> | <File[]> An array of string, <ArrayBuffer>, <TypedArray>, <DataView>, <File>, or <Blob> objects, or any mix of such objects, that will be stored within the File.
fileName <string> The name of the file.
options <Object>
endings <string> One of either 'transparent' or 'native'. When set to 'native', line endings in string source parts will be converted to the platform native line-ending as specified by require('node:os').EOL.
type <string> The File content-type.
lastModified <number> The last modified date of the file. Default: Date.now().
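As a rough, minimal sketch of the constructor described above (the file name, contents, and printed values are illustrative; type, lastModified, and text() are standard <Blob>/<File> members):
import { File } from 'node:buffer';

// Create a File from a single string source with a name and content-type.
const file = new File(['Hello Node.js'], 'hello.txt', { type: 'text/plain' });

console.log(file.name);          // 'hello.txt'
console.log(file.type);          // 'text/plain'
console.log(file.lastModified);  // defaults to Date.now() at construction time
console.log(await file.text());  // 'Hello Node.js'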
file.name
#
Added in: v19.2.0, v18.13.0
The name of the File.
file.lastModified
#
Added in: v19.2.0, v18.13.0
The last modified date of the File.
node:buffer module APIs
#
While the Buffer object is available as a global, there are additional Buffer-related APIs that are available only via the node:buffer module accessed using require('node:buffer').
buffer.atob(data)
#
Added in: v15.13.0, v14.17.0
Stability: 3 - Legacy. Use Buffer.from(data, 'base64') instead.
data <any> The Base64-encoded input string.
Decodes a string of Base64-encoded data into bytes, and encodes those bytes into a string using Latin-1 (ISO-8859-1).
The data may be any JavaScript value that can be coerced into a string.
This function is only provided for compatibility with legacy web platform APIs and should never be used in new code, because those APIs use strings to represent binary data and predate the introduction of typed arrays in JavaScript. For code running using Node.js APIs, converting between base64-encoded strings and binary data should be performed using Buffer.from(str, 'base64') and buf.toString('base64').
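A brief sketch contrasting buffer.atob() with the suggested Buffer-based replacement (the base64 input 'aGVsbG8=' is just an illustrative value that decodes to 'hello'):
import { Buffer, atob } from 'node:buffer';

const legacy = atob('aGVsbG8=');                      // 'hello' as a Latin-1 string
const preferred = Buffer.from('aGVsbG8=', 'base64');  // <Buffer 68 65 6c 6c 6f>

console.log(legacy);
console.log(preferred.toString('latin1'));            // same characters, via Buffer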
buffer.btoa(data)
#
Added in: v15.13.0, v14.17.0
data <any> An ASCII (Latin-1) string.
Decodes a string into bytes using Latin-1 (ISO-8859-1), and encodes those bytes into a string using Base64.
The data may be any JavaScript value that can be coerced into a string.
This function is only provided for compatibility with legacy web platform APIs and should never be used in new code, because those APIs use strings to represent binary data and predate the introduction of typed arrays in JavaScript. For code running using Node.js APIs, converting between base64-encoded strings and binary data should be performed using Buffer.from(str, 'base64') and buf.toString('base64').
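And the inverse direction, again as a small sketch with an illustrative input string:
import { Buffer, btoa } from 'node:buffer';

const legacy = btoa('hello');                                         // 'aGVsbG8='
const preferred = Buffer.from('hello', 'latin1').toString('base64');  // 'aGVsbG8='

console.log(legacy === preferred);  // true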
buffer.isAscii(input)
#
Added in: v19.6.0, v18.15.0
input <Buffer> | <ArrayBuffer> | <TypedArray> The input to validate.
This function returns true if input contains only valid ASCII-encoded data, including the case in which input is empty.
Throws if the input is a detached array buffer.
buffer.isUtf8(input)
#
Added in: v19.4.0, v18.14.0
input <Buffer> | <ArrayBuffer> | <TypedArray> The input to validate.
This function returns true if input contains only valid UTF-8-encoded data, including the case in which input is empty.
Throws if the input is a detached array buffer.
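A short sketch exercising both buffer.isAscii() and buffer.isUtf8() with illustrative inputs:
import { Buffer, isAscii, isUtf8 } from 'node:buffer';

const ascii = Buffer.from('hello');
const utf8 = Buffer.from('héllo');          // 'é' encodes to two bytes in UTF-8
const invalid = Buffer.from([0xff, 0xfe]);  // 0xff can never appear in valid UTF-8

console.log(isAscii(ascii));    // true
console.log(isAscii(utf8));     // false
console.log(isUtf8(utf8));      // true
console.log(isUtf8(invalid));   // false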
buffer.INSPECT_MAX_BYTES
#
Added in: v0.5.4
<integer> Default: 50
Returns the maximum number of bytes that will be returned when buf.inspect() is called. This can be overridden by user modules. See util.inspect() for more details on buf.inspect() behavior.
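For example, lowering the value truncates what console.log() shows for large buffers (a minimal sketch; the exact inspected output can vary between Node.js versions):
const buffer = require('node:buffer');
const { Buffer } = buffer;

const big = Buffer.alloc(100, 0x61);

buffer.INSPECT_MAX_BYTES = 4;  // limit how many bytes buf.inspect() includes
console.log(big);              // e.g. <Buffer 61 61 61 61 ... 96 more bytes>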
buffer.resolveObjectURL(id)
#
id <string> A 'blob:nodedata:...' URL string returned by a prior call to URL.createObjectURL().
Resolves a 'blob:nodedata:...' URL to the associated <Blob> object registered using a prior call to URL.createObjectURL().
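A minimal sketch of the round trip through URL.createObjectURL(), assuming a <Blob> created via node:buffer (the contents are illustrative):
import { Blob, resolveObjectURL } from 'node:buffer';

const blob = new Blob(['hello'], { type: 'text/plain' });
const url = URL.createObjectURL(blob);   // 'blob:nodedata:...'

const resolved = resolveObjectURL(url);  // the registered Blob, or undefined
console.log(await resolved.text());      // 'hello'

URL.revokeObjectURL(url);                // release the registration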
buffer.transcode(source, fromEnc, toEnc)
#
source <Buffer> | <Uint8Array> A Buffer or Uint8Array instance.
fromEnc <string> The current encoding.
toEnc <string> The target encoding.
Re-encodes the given Buffer or Uint8Array instance from one character encoding to another. Returns a new Buffer instance.
Throws if fromEnc or toEnc specify invalid character encodings or if conversion from fromEnc to toEnc is not permitted.
Encodings supported by buffer.transcode() are: 'ascii', 'utf8', 'utf16le', 'ucs2', 'latin1', and 'binary'.
The transcoding process will use substitution characters if a given byte sequence cannot be adequately represented in the target encoding. For instance:
import { Buffer, transcode } from 'node:buffer'; const newBuf = transcode(Buffer.from('€'), 'utf8', 'ascii'); console.log(newBuf.toString('ascii'));
const { Buffer, transcode } = require('node:buffer'); const newBuf = transcode(Buffer.from('€'), 'utf8', 'ascii'); console.log(newBuf.toString('ascii'));
Because the Euro (€) sign is not representable in US-ASCII, it is replaced with ? in the transcoded Buffer.
SlowBuffer
#
Deprecated since: v6.0.0
See Buffer.allocUnsafeSlow(). This was never a class in the sense that the constructor always returned a Buffer instance, rather than a SlowBuffer instance.
buffer.constants.MAX_LENGTH
#
Added in: v8.2.0
<integer> The largest size allowed for a single Buffer instance.
On 32-bit architectures, this value currently is 2^30 - 1 (about 1 GiB).
On 64-bit architectures, this value currently is 2^53 - 1 (about 8 PiB).
It reflects v8::TypedArray::kMaxLength under the hood.
This value is also available as buffer.kMaxLength.
buffer.constants.MAX_STRING_LENGTH
#
Added in: v8.2.0
<integer> The largest length allowed for a single string instance.
Represents the largest length that a string primitive can have, counted in UTF-16 code units.
This value may depend on the JS engine that is being used.
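A quick way to inspect both limits on the running platform (a minimal sketch; the printed numbers depend on the architecture and JS engine):
import { constants } from 'node:buffer';

console.log(constants.MAX_LENGTH);         // largest allowed Buffer size, in bytes
console.log(constants.MAX_STRING_LENGTH);  // largest allowed string length, in UTF-16 code units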
Buffer.from(), Buffer.alloc(), and Buffer.allocUnsafe()
#
In versions of Node.js prior to 6.0.0, Buffer instances were created using the Buffer constructor function, which allocates the returned Buffer differently based on what arguments are provided:
Passing a number as the first argument to Buffer() (e.g. new Buffer(10)) allocates a new Buffer object of the specified size. Prior to Node.js 8.0.0, the memory allocated for such Buffer instances is not initialized and can contain sensitive data. Such Buffer instances must be subsequently initialized by using either buf.fill(0) or by writing to the entire Buffer before reading data from the Buffer. While this behavior is intentional to improve performance, development experience has demonstrated that a more explicit distinction is required between creating a fast-but-uninitialized Buffer versus creating a slower-but-safer Buffer. Since Node.js 8.0.0, Buffer(num) and new Buffer(num) return a Buffer with initialized memory.
Passing a string, array, or Buffer as the first argument copies the passed object's data into the Buffer.
Passing an ArrayBuffer or a SharedArrayBuffer returns a Buffer that shares allocated memory with the given array buffer.
Because the behavior of new Buffer() is different depending on the type of the first argument, security and reliability issues can be inadvertently introduced into applications when argument validation or Buffer initialization is not performed.
For example, if an attacker can cause an application to receive a number where a string is expected, the application may call new Buffer(100) instead of new Buffer("100"), leading it to allocate a 100 byte buffer instead of allocating a 3 byte buffer with content "100". This is commonly possible using JSON API calls. Since JSON distinguishes between numeric and string types, it allows injection of numbers where a naively written application that does not validate its input sufficiently might expect to always receive a string. Before Node.js 8.0.0, the 100 byte buffer might contain arbitrary pre-existing in-memory data, so may be used to expose in-memory secrets to a remote attacker. Since Node.js 8.0.0, exposure of memory cannot occur because the data is zero-filled. However, other attacks are still possible, such as causing very large buffers to be allocated by the server, leading to performance degradation or crashing on memory exhaustion.
To make the creation of Buffer instances more reliable and less error-prone, the various forms of the new Buffer() constructor have been deprecated and replaced by separate Buffer.from(), Buffer.alloc(), and Buffer.allocUnsafe() methods.
Developers should migrate all existing uses of the new Buffer() constructors to one of these new APIs.
Buffer.from(array) returns a new Buffer that contains a copy of the provided octets.
Buffer.from(arrayBuffer[, byteOffset[, length]]) returns a new Buffer that shares the same allocated memory as the given <ArrayBuffer>.
Buffer.from(buffer) returns a new Buffer that contains a copy of the contents of the given Buffer.
Buffer.from(string[, encoding]) returns a new Buffer that contains a copy of the provided string.
Buffer.alloc(size[, fill[, encoding]]) returns a new initialized Buffer of the specified size. This method is slower than Buffer.allocUnsafe(size) but guarantees that newly created Buffer instances never contain old data that is potentially sensitive. A TypeError will be thrown if size is not a number.
Buffer.allocUnsafe(size) and Buffer.allocUnsafeSlow(size) each return a new uninitialized Buffer of the specified size. Because the Buffer is uninitialized, the allocated segment of memory might contain old data that is potentially sensitive.
Buffer instances returned by Buffer.allocUnsafe(), Buffer.from(string), Buffer.concat() and Buffer.from(array) may be allocated off a shared internal memory pool if size is less than or equal to half Buffer.poolSize. Instances returned by Buffer.allocUnsafeSlow() never use the shared internal memory pool.
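A short migration sketch showing typical replacements for the deprecated constructor forms (the values are illustrative):
import { Buffer } from 'node:buffer';

// Instead of: new Buffer('abc')      -> copy the string's bytes
const fromString = Buffer.from('abc', 'utf8');

// Instead of: new Buffer([1, 2, 3])  -> copy the provided octets
const fromArray = Buffer.from([1, 2, 3]);

// Instead of: new Buffer(10)         -> zero-filled allocation
const zeroed = Buffer.alloc(10);

// Only when every byte will be overwritten before the buffer is read:
const uninitialized = Buffer.allocUnsafe(10);

console.log(fromString, fromArray, zeroed, uninitialized.length);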
--zero-fill-buffers command-line option
#
Added in: v5.10.0
Node.js can be started using the --zero-fill-buffers command-line option to cause all newly-allocated Buffer instances to be zero-filled upon creation by default. Without the option, buffers created with Buffer.allocUnsafe(), Buffer.allocUnsafeSlow(), and new SlowBuffer(size) are not zero-filled. Use of this flag can have a measurable negative impact on performance. Use the --zero-fill-buffers option only when necessary to enforce that newly allocated Buffer instances cannot contain old data that is potentially sensitive.
$ node --zero-fill-buffers
> Buffer.allocUnsafe(5);
<Buffer 00 00 00 00 00>
What makes Buffer.allocUnsafe() and Buffer.allocUnsafeSlow() "unsafe"?
#
When calling Buffer.allocUnsafe() and Buffer.allocUnsafeSlow(), the segment of allocated memory is uninitialized (it is not zeroed-out). While this design makes the allocation of memory quite fast, the allocated segment of memory might contain old data that is potentially sensitive. Using a Buffer created by Buffer.allocUnsafe() without completely overwriting the memory can allow this old data to be leaked when the Buffer memory is read.
While there are clear performance advantages to using Buffer.allocUnsafe(), extra care must be taken in order to avoid introducing security vulnerabilities into an application.
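A minimal sketch of the pattern implied above: zero or completely overwrite an unsafely allocated buffer before any of its contents are read or exposed:
import { Buffer } from 'node:buffer';

const buf = Buffer.allocUnsafe(16);

// The contents are unspecified here, so overwrite every byte before use.
buf.fill(0);
buf.write('safe to read now', 0, 'utf8');

console.log(buf.toString('utf8', 0, 16));  // 'safe to read now'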