fxamacker/ccf_draft: Cadence Compact Format specification 1.0.0 (March 31, 2025). See github.com/onflow/ccf for official specs.

Cadence Compact Format (CCF)

Cadence Compact Format (CCF) is a data format designed for compact, efficient, and deterministic encoding of Cadence external values. CCF is defined in ccf_specs.md.

Cadence is a resource-oriented programming language that introduces new features to smart contract programming. It's used by the Flow blockchain and has a syntax inspired by Swift, Kotlin, and Rust. Its use of resource types maps well to the Move language.

CCF can be used as a hybrid data format. CCF-based messages can be fully self-describing or partially self-describing. Both are more compact than JSON-based messages. CCF-based protocols can send Cadence metadata just once for all messages of that type. Malformed data can be detected without Cadence metadata and without creating Cadence objects.

CCF obsoletes JSON-Cadence Data Interchange Format for use cases that do not require JSON.
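
For a concrete sense of how a CCF codec is used, below is a minimal Go sketch that round-trips a simple Cadence value through the CCF codec in onflow/cadence (package encoding/ccf). The Encode and Decode signatures shown are assumptions about that package's Go API and may differ between Cadence versions.

// A minimal sketch (not code from this repo) of round-tripping a Cadence
// value through the CCF codec in onflow/cadence. The Encode/Decode
// signatures are assumptions and may differ between versions.
package main

import (
	"bytes"
	"fmt"

	"github.com/onflow/cadence"
	"github.com/onflow/cadence/encoding/ccf"
)

func main() {
	// A simple Cadence external value; real protocols typically encode
	// events, transaction arguments, etc.
	value := cadence.NewInt(42)

	// Encode in fully self-describing mode: the message carries its own
	// Cadence type information.
	encoded, err := ccf.Encode(value)
	if err != nil {
		panic(err)
	}

	// Decode without any out-of-band Cadence metadata (nil memory gauge).
	decoded, err := ccf.Decode(nil, encoded)
	if err != nil {
		panic(err)
	}
	fmt.Printf("encoded %d bytes, decoded %v\n", len(encoded), decoded)

	// Deterministic encoding: re-encoding the same value is expected to
	// produce byte-identical output.
	encoded2, _ := ccf.Encode(value)
	fmt.Println("deterministic:", bytes.Equal(encoded, encoded2))
}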

CCF is a data format that allows compact, efficient, and deterministic encoding of Cadence external values.

Cadence external values (e.g. events, transaction arguments, etc.) have been encoded using the JSON-Cadence Data Interchange Format (JSON-CDC), which is inefficient, verbose, and doesn't define a deterministic encoding.

The same FeesDeducted event on the Flow blockchain encodes to a substantially smaller message in CCF than in JSON-CDC (see the size comparisons below).

CCF defines all requirements for deterministic encoding (sort orders, smallest encoded forms, and Cadence-specific requirements) to allow CCF codecs implemented in different programming languages to produce the same deterministic encodings.

Some requirements (such as "Deterministic CCF Encoding Requirements") are defined as optional. Each CCF-based format or protocol can have its specification state how CCF options are used. This allows each protocol to balance tradeoffs such as compatibility, determinism, speed, encoded data size, etc.

CCF uses CBOR and is designed to allow efficient detection and rejection of malformed messages without creating Cadence objects. This allows more costly checks for validity, etc. to be performed only on well-formed messages.
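
As an illustration of rejecting malformed data early, the sketch below (under the same API assumptions as the earlier example) corrupts an encoded message and expects the decoder to return an error instead of constructing Cadence values; the specific error returned is not part of this sketch.

// A sketch of malformed-message rejection, assuming the same
// ccf.Encode/ccf.Decode API as the earlier example. A truncated message is
// expected to be rejected with an error before Cadence objects are built.
package main

import (
	"fmt"

	"github.com/onflow/cadence"
	"github.com/onflow/cadence/encoding/ccf"
)

func main() {
	encoded, err := ccf.Encode(cadence.NewInt(42))
	if err != nil {
		panic(err)
	}

	// Truncate the message so it is no longer well-formed CBOR/CCF.
	truncated := encoded[:len(encoded)-1]

	if _, err := ccf.Decode(nil, truncated); err != nil {
		fmt.Println("rejected malformed message:", err)
	} else {
		fmt.Println("unexpectedly decoded a truncated message")
	}
}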

CBOR is an Internet Standard defined by IETF STD 94. CBOR is designed to be relevant for decades and is used by data formats and protocols such as W3C WebAuthn, C-DNS (IETF RFC 8618), COSE (IETF STD 96), CWT (IETF RFC 8392), etc.

CCF uses a subset of CBOR and CBOR's Core Deterministic Encoding Requirements, which are defined in RFC 8949. The CCF specification document uses CDDL (Concise Data Definition Language) notation and EDN (Extended Diagnostic Notation). CDDL and EDN are defined in RFC 8610.

RFC 8949 is an Internet Standard (STD 94) designed to be relevant for many years, not just a regular RFC; RFC 8610 is the IETF specification of CDDL and EDN.
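
To show what CBOR's Core Deterministic Encoding Requirements look like in practice, here is a small Go sketch using the fxamacker/cbor/v2 library, which exposes those requirements as CoreDetEncOptions. It illustrates only the CBOR layer that CCF builds on; it is not the CCF codec itself.

// Illustration of RFC 8949 Core Deterministic Encoding at the CBOR layer,
// using github.com/fxamacker/cbor/v2. It shows the kind of deterministic
// rules (e.g. sorted map keys, shortest encoded forms) that CCF builds on.
package main

import (
	"bytes"
	"fmt"

	"github.com/fxamacker/cbor/v2"
)

func main() {
	// Encoding mode configured with Core Deterministic Encoding options.
	em, err := cbor.CoreDetEncOptions().EncMode()
	if err != nil {
		panic(err)
	}

	// Go map iteration order is randomized, but deterministic encoding
	// sorts map keys, so repeated encodings are byte-identical.
	m := map[string]int{"b": 2, "a": 1, "c": 3}

	first, err := em.Marshal(m)
	if err != nil {
		panic(err)
	}
	second, _ := em.Marshal(m)

	fmt.Println("byte-identical encodings:", bytes.Equal(first, second))
}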

Preliminary Size and Benchmark Comparisons

These comparisons are not apples to apples: prior formats (CBF and JSON-Cadence Data Interchange) didn't specify requirements for validity, sorting, etc.

At this time, the CCF decoder doesn't include an option to check for "Preferred Serialization" (encoding to the smallest size).

Encoding   Event Count   Encoded Size (bytes)   Comments
JSON       48,309        13,858,836             JSON-Cadence Data Interchange Format
CCF        48,309         6,159,931             CCF in fully self-describing and deterministic mode
CCF        48,309        TBD                    CCF in partially self-describing mode (est. 1/14 the size of JSON-CDC)

Fully self-describing CCF is already less than half the size of JSON-CDC here (6,159,931 bytes vs. 13,858,836 bytes); CCF's partially self-describing mode would be even smaller, roughly 1/14 the size of JSON-CDC, in some use cases.

Preliminary Speed and Memory Comparisons (obsolete)

These informal and preliminary benchmarks used commit f911063 in onflow/cadence#2364.

This is obsolete because we opt out of "Deterministic CCF Encoding Requirements" when encoding events. Not using that mode makes CCF faster and more memory efficient than shown here.

$ benchstat bench_json_events_48k.log bench_ccf_events_48k.log 
goos: linux
goarch: amd64
pkg: github.com/onflow/cadence/encoding/ccf
cpu: 13th Gen Intel(R) Core(TM) i5-13600K
                     │ bench_json_events_48k.log │      bench_ccf_events_48k.log       │
                     │          sec/op           │   sec/op     vs base                │
EncodeBatchEvents-20                 96.61m ± 4%   70.73m ± 3%  -26.79% (p=0.000 n=10)
DecodeBatchEvents-20                 647.7m ± 3%   157.5m ± 3%  -75.68% (p=0.000 n=10)
geomean                              250.1m        105.5m       -57.81%

                     │ bench_json_events_48k.log │       bench_ccf_events_48k.log       │
                     │           B/op            │     B/op      vs base                │
EncodeBatchEvents-20                32.45Mi ± 0%   25.82Mi ± 0%  -20.45% (p=0.000 n=10)
DecodeBatchEvents-20               234.97Mi ± 0%   56.16Mi ± 0%  -76.10% (p=0.000 n=10)
geomean                             87.32Mi        38.08Mi       -56.39%

                     │ bench_json_events_48k.log │      bench_ccf_events_48k.log       │
                     │         allocs/op         │  allocs/op   vs base                │
EncodeBatchEvents-20                 756.6k ± 0%   370.4k ± 0%  -51.05% (p=0.000 n=10)
DecodeBatchEvents-20                 4.746M ± 0%   1.288M ± 0%  -72.86% (p=0.000 n=10)
geomean                              1.895M        690.7k       -63.55%

The 48,309 events used in the comparisons are from a transaction on mainnet with an unusually high number of events.

There were 9 event types. These 3 event types had over 15,000 events each: FlowToken.TokensDeposited, FlowToken.TokensWithdrawn, and FlowIDTableStaking.DelegatorRewardsPaid.

To simplify the benchmark code (it's Sunday night), all event values for each event type are the same (i.e. the values are from the first event of that type).

These benchmark results are preliminary and subject to change.
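
For readers unfamiliar with the benchstat output above, the benchmarks follow the standard Go testing.B pattern, encoding a whole batch per iteration. The sketch below shows the rough shape of such a batch-encoding benchmark; the name, batch size, and values are illustrative assumptions (simple Int values stand in for the 48,309 real events), not the actual benchmark code from onflow/cadence#2364.

// Hypothetical shape of a batch-encoding benchmark like EncodeBatchEvents
// in the results above; names and values are illustrative assumptions.
package ccf_test

import (
	"testing"

	"github.com/onflow/cadence"
	"github.com/onflow/cadence/encoding/ccf"
)

func BenchmarkEncodeBatch(b *testing.B) {
	// Simple values stand in for the real batch of 48,309 events.
	values := make([]cadence.Value, 1000)
	for i := range values {
		values[i] = cadence.NewInt(i)
	}

	b.ReportAllocs()
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		for _, v := range values {
			if _, err := ccf.Encode(v); err != nil {
				b.Fatal(err)
			}
		}
	}
}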

The draft of CCF was originally in README.md and was moved to ccf_specs.md on Nov 29, 2022. Because of this, the initial commit history of the CCF specification is associated with the README.md file rather than ccf_specs.md.

CCF is licensed under the terms of the Apache License, Version 2.0. See LICENSE for more information.

