Asar is a simple extensive archive format; it works like tar in that it concatenates all files together without compression, while still supporting random access.
This module requires Node 22.12.0 or later.
$ npm install --engine-strict @electron/asar
```
$ asar --help

  Usage: asar [options] [command]

  Commands:

    pack|p <dir> <output>
       create asar archive

    list|l <archive>
       list files of asar archive

    extract-file|ef <archive> <filename>
       extract one file from archive

    extract|e <archive> <dest>
       extract archive

  Options:

    -h, --help     output usage information
    -V, --version  output the version number
```

Excluding multiple resources from being packed
Given:

```
app
(a) ├── x1
(b) ├── x2
(c) ├── y3
(d) │   ├── x1
(e) │   └── z1
(f) │       └── x2
(g) └── z4
(h)     └── w1
```
Exclude: a, b
$ asar pack app app.asar --unpack-dir "{x1,x2}"
Exclude: a, b, d, f
$ asar pack app app.asar --unpack-dir "**/{x1,x2}"
Exclude: a, b, d, f, h
$ asar pack app app.asar --unpack-dir "{**/x1,**/x2,z4/w1}"
To create an archive from a directory:

```js
import { createPackage } from '@electron/asar';

const src = 'some/path/';
const dest = 'name.asar';

await createPackage(src, dest);
console.log('done.');
```
Please note that there is currently no error handling provided!
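Since nothing is handled for you, a caller that needs to react to failures can wrap the call itself; a minimal sketch:

```js
import { createPackage } from '@electron/asar';

try {
  await createPackage('some/path/', 'name.asar');
  console.log('done.');
} catch (err) {
  // e.g. the source directory does not exist or the destination is not writable
  console.error('packing failed:', err);
}
```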
You can pass in a `transform` option, which is a function that either returns nothing or a `stream.Transform`. The latter will be used on files that will be in the `.asar` file to transform them (e.g. compress).
```js
import { createPackageWithOptions } from '@electron/asar';

const src = 'some/path/';
const dest = 'name.asar';

function transform (filename) {
  return new CustomTransformStream();
}

await createPackageWithOptions(src, dest, { transform: transform });
console.log('done.');
```
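As one concrete (purely illustrative) transform, the sketch below gzips `.txt` files as they are packed; whatever later reads those files out of the archive would have to decompress them itself:

```js
import { createPackageWithOptions } from '@electron/asar';
import zlib from 'node:zlib';

function transform (filename) {
  // Returning a stream.Transform compresses the file; returning nothing
  // leaves it untouched.
  if (filename.endsWith('.txt')) {
    return zlib.createGzip();
  }
}

await createPackageWithOptions('some/path/', 'name.asar', { transform });
```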
Asar uses Pickle to safely serialize binary values to a file.
The format of asar is very flat:

```
| UInt32: header_size | String: header | Bytes: file1 | ... | Bytes: file42 |
```
The `header_size` and `header` are serialized with the Pickle class, and `header_size`'s Pickle object is 8 bytes.

The `header` is a JSON string, and the `header_size` is the size of `header`'s Pickle object.

Structure of `header` is something like this:
{ "files": { "tmp": { "files": {} }, "usr" : { "files": { "bin": { "files": { "ls": { "offset": "0", "size": 100, "executable": true, "integrity": { "algorithm": "SHA256", "hash": "...", "blockSize": 1024, "blocks": ["...", "..."] } }, "cd": { "offset": "100", "size": 100, "executable": true, "integrity": { "algorithm": "SHA256", "hash": "...", "blockSize": 1024, "blocks": ["...", "..."] } } } } } }, "etc": { "files": { "hosts": { "offset": "200", "size": 32, "integrity": { "algorithm": "SHA256", "hash": "...", "blockSize": 1024, "blocks": ["...", "..."] } } } } } }
`offset` and `size` record the information needed to read the file from the archive. The `offset` starts from 0, so you have to manually add the sizes of `header_size` and `header` to the `offset` to get the real offset of the file.

`offset` is a UINT64 number represented as a string, because there is no way to precisely represent UINT64 with a JavaScript `Number`. `size` is a JavaScript `Number` that is no larger than `Number.MAX_SAFE_INTEGER`, which has a value of 9007199254740991 and is about 8PB in size. We didn't store `size` as UINT64 because file sizes in Node.js are represented as `Number` and it is not safe to convert `Number` to UINT64.
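Continuing the hypothetical `readAsarHeader` sketch above, the real position of a file's bytes could be computed like this (again a sketch, not the library's API):

```js
import fs from 'node:fs';

// `entry` is a node from header.files, e.g. header.files.etc.files.hosts,
// and `headerSize` is the UInt32 read from the first Pickle object.
function readEntry (archivePath, headerSize, entry) {
  // Real offset = 8 bytes for the header_size Pickle + headerSize bytes for
  // the header Pickle + the entry's own offset (stored as a string).
  const start = 8 + headerSize + Number(entry.offset);
  const buf = Buffer.alloc(entry.size);
  const fd = fs.openSync(archivePath, 'r');
  try {
    fs.readSync(fd, buf, 0, entry.size, start);
    return buf;
  } finally {
    fs.closeSync(fd);
  }
}
```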
`integrity` is an object consisting of a few keys:

- `algorithm`: the hashing algorithm; currently only `SHA256` is supported.
- `hash`: a value representing the hash of the entire file.
- `blocks`: the hashes of the blocks of the file, i.e. for a blockSize of 4KB this array contains the hash of every block if you split the file into N 4KB blocks.
- `blockSize`: the size in bytes of each block in the `blocks` hashes above.
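For reference, an `integrity` object like the ones in the example header could be computed with Node's `crypto` module. A sketch, assuming the 1024-byte block size shown above (the block size the library actually uses may differ):

```js
import crypto from 'node:crypto';
import fs from 'node:fs';

function computeIntegrity (filePath, blockSize = 1024) {
  const data = fs.readFileSync(filePath);
  const hash = crypto.createHash('sha256').update(data).digest('hex');

  // Hash every blockSize-byte slice of the file individually.
  const blocks = [];
  for (let i = 0; i < data.length; i += blockSize) {
    blocks.push(
      crypto.createHash('sha256').update(data.subarray(i, i + blockSize)).digest('hex')
    );
  }

  return { algorithm: 'SHA256', hash, blockSize, blocks };
}
```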