The Tolk language has a magic feature: the lazy keyword. The compiler tracks exactly which fields are accessed and automatically loads only those, skipping the rest. In practice, prefer lazy T.fromCell() over a regular T.fromCell().
It is recommended to review automatic serialization first.

A short demo of lazy

Suppose there is a Storage struct in a wallet:
struct Storage {
    isSignatureAllowed: bool
    seqno: uint32
    subwalletId: uint32
    publicKey: uint256
    extensions: cell?
}

fun Storage.load() {
    return Storage.fromCell(contract.getData())
}
What does Storage.load() do? It unpacks a cell, populates all struct fields, checks consistency, and so on. The magic of lazy Storage.load() is that it does not load the entire cell. Instead, unused fields are just skipped:
get fun publicKey() {
    val st = lazy Storage.load();
    // <-- here "skip 65 bits, preload uint256" is inserted
    return st.publicKey
}
The compiler tracks all control flow paths, inserts loading points as needed, groups unused fields to be skipped, etc. Best of all, this works with any type and any combination of fields.
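To illustrate, here is a hypothetical sketch (the function and parameter names are invented) built on the Storage struct above; the "inserted" comments follow the style of the snippets on this page and are illustrative rather than exact compiler output:
fun demoBranches(wantSubwallet: bool) {
    val st = lazy Storage.load();
    if (wantSubwallet) {
        // <-- here "skip 1 + 32 bits, preload uint32" is inserted
        return st.subwalletId;
    }
    // <-- here "skip 1 bit, preload uint32" is inserted
    return st.seqno;
}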

Even deeper than it seems

Take a look at the NFT collection:
struct NftCollectionStorage {
    adminAddress: address
    nextItemIndex: uint64
    content: Cell<CollectionContent>
    // ...
}

struct CollectionContent {
    metadata: cell
    minIndex: int32
    commonKey: uint256
}
Suppose a developer needs to read content and get commonKey from it:
val storage = lazy NftCollectionStorage.load();
// <-- here just "preload ref" is inserted
val contentCell = storage.content;
First trick: no need to skip address and uint64. To access a ref, it is not necessary to skip preceding data. Second trick: having content, how to get commonKey from it? The answer: since content is a cell, load it… lazily:
val storage = lazy NftCollectionStorage.load();

// <-- "preload ref" inserted — to get `content`
// Cell<T>.load() unpacks a cell and returns T
val content = lazy storage.content.load();

// <-- "skip 32 bits, preload uint256" - to get commonKey
return content.commonKey;
A quick reminder: having p: Cell<Point>, it is not allowed to access p.x — the cell (reference) needs to be loaded first, either with Point.fromCell(p) or p.load(). Both can be used with lazy.
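A minimal sketch of both options (the struct is repeated so the snippet is self-contained; function names are invented):
struct Point {
    x: int8
    y: int8
}

fun readXEagerly(p: Cell<Point>) {
    val point = Point.fromCell(p);   // unpacks the whole cell at once
    return point.x;
}

fun readXLazily(p: Cell<Point>) {
    val point = lazy p.load();       // only x is actually loaded
    return point.x;
}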

Lazy matching

Similarly, a union type (an incoming message) can be read with lazy:
struct (0x12345678) CounterIncrement { /* ... */ }
struct (0x23456789) CounterReset     { /* ... */ }

type MyMessage = CounterIncrement | CounterReset

fun onInternalMessage(in: InMessage) {
    val msg = lazy MyMessage.fromSlice(in.body);
    match (msg) {
        CounterReset => {
            assert (something) throw 403;
            // <-- here "load msg.initial" is inserted
            storage.counter = msg.initial;
        }
        // ...
    }
}
With lazy applied to unions:
  1. No union is allocated on the stack upfront; matching and loading are deferred until needed.
  2. match operates naturally by inspecting the slice prefix (opcode).
  3. Within each branch, the compiler inserts loading points and skips unused fields — just like it does for structs.
Lazy matching is highly efficient, outperforming manual dispatch with if (op == OP_RESET). It aligns naturally with the TVM execution model and eliminates unnecessary stack operations.
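For comparison, a rough sketch of the manual dispatch that lazy matching replaces; OP_RESET and the slice-reading calls here are illustrative assumptions, not code from this page:
const OP_RESET = 0x23456789;

fun onInternalMessageManual(in: InMessage) {
    var body = in.body;
    val op = body.loadUint(32);    // read the opcode by hand
    if (op == OP_RESET) {
        // every field is read and kept on the stack explicitly
        val initial = body.loadUint(32);
        // ... apply `initial` to contract storage
    }
    // ...
}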

Lazy matching and else

Since lazy match for a union is done by inspecting the prefix (opcode), unmatched cases fall through to the else branch.
val msg = lazy MyMessage.fromSlice(in.body);
match (msg) {
    CounterReset => { /* ... */ }
    // ... handle all variants of the union

    // else - when nothing matched;
    // even if the input is shorter than 32 bits, no "underflow" is thrown
    else => {
        // for example
        throw 0xFFFF
    }
}
Without an explicit else, unpacking throws error 63 by default, which is controlled by the throwIfOpcodeDoesNotMatch option in fromSlice. The else branch allows inserting any custom logic.
Note that else in a match by type is allowed only with lazy, because lazy matching works on slice prefixes. Without lazy, the value is just a regular union, and else is not allowed.
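If only the error code needs to change, a sketch without else is possible as well; the options-object form of fromSlice shown here is an assumption based on the throwIfOpcodeDoesNotMatch option mentioned above, so check the serialization docs for the exact syntax:
// throw a custom error instead of the default 63
// when the prefix matches no variant of MyMessage
val msg = MyMessage.fromSlice(in.body, {
    throwIfOpcodeDoesNotMatch: 0xFFFF
});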

Partial updating

The magic doesn’t stop at reading. The lazy keyword also works seamlessly when writing data back. Example: load a storage, use its fields for assertions, modify one field, and save it back:
var storage = lazy Storage.load();

assert (storage.validUntil > blockchain.now()) throw 123;
assert (storage.seqno == msg.seqno) throw 456;
// ...

storage.seqno += 1;
contract.setData(storage.toCell());   // <-- magic
The compiler is smart: toCell() does not re-serialize all fields of the storage, since only seqno was modified. Instead, when seqno was loaded, the immutable tail was saved as well, and it is reused when writing back:
var storage = lazy Storage.load();
// actually, what was done:
// - load isSignatureAllowed, seqno
// - save immutable tail
// - load validUntil, etc.

// ... use all fields for reading

storage.seqno += 1;
storage.toCell();
// actually, what was done:
// - store isSignatureAllowed, seqno
// - store immutable tail
The compiler can even group unmodified fields in the middle, load them as a slice, and preserve that slice on write-back.
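A minimal sketch of that middle-grouping case, with an invented storage layout where only the first and last fields are modified:
struct ExampleStorage {
    counterA: uint32        // modified
    ownerAddress: address   // \ unmodified: can be grouped and
    metadata: cell          // /  preserved as a single slice
    counterB: uint32        // modified
}

fun bumpBoth() {
    var st = lazy ExampleStorage.fromCell(contract.getData());
    st.counterA += 1;
    st.counterB += 1;
    // counterA and counterB are re-encoded; the unmodified fields
    // in between can be kept as one untouched slice, as described above
    contract.setData(st.toCell());
}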

Q: How does lazy skip unused fields?

When several consecutive fields are unused, the compiler tries to group them. It works perfectly for fixed-size types such as intN or bitsN:
struct Demo {
    isAllowed: bool     // always 1 bit
    queryId: uint64     // always 64 bits
    crc: bits32         // always 32 bits
    next: RemainingBitsAndRefs
}

fun demo() {
    val obj = lazy Demo.fromSlice(someSlice);
    // <-- skip 1+64+32 = 97 bits
    obj.next;
}
In Fift assembler, "skip 97 bits" compiles to:
97 LDU
NIP
But variable-width fields, like coins, cannot be grouped, and they cannot be skipped with a single instruction: TVM has no "skip coins" primitive. The only option is to load the value and ignore the result. Similarly for address: even though it is normally 267 bits, it still has to be validated even when unused; otherwise, malformed input could shift how the following data is decoded. For such types, lazy can do nothing better than "load and ignore". In practice, intN types are very common, so grouping has a noticeable effect, and the trick of accessing a ref without skipping any preceding data also works fine.
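A hedged sketch of that limitation, with invented names, where a coins field sits in front of the field actually used:
struct Payment {
    queryId: uint64     // fixed width: can be grouped and skipped
    amount: coins       // variable width: loaded, result ignored
    rest: RemainingBitsAndRefs
}

fun readRest(s: slice) {
    val p = lazy Payment.fromSlice(s);
    // <-- "skip 64 bits" for queryId, then "load coins and drop it" for amount
    return p.rest;
}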

Q: What are the disadvantages of lazy?

In terms of gas consumption, lazy fromSlice is equal to or cheaper than regular fromSlice. In the worst case — when all fields are accessed — it loads everything one by one, just like the non-lazy version. However, there is a difference unrelated to gas consumption:
  • A regular fromSlice throws if a slice is too small or contains extra data.
  • The lazy version reads only the requested fields, so it may accept input that a full parse would reject. For example, given:
struct Point {
    x: int8
    y: int8
}

fun demo(s: slice) {
    val p = lazy Point.fromSlice(s);
    return p.x;
}
Since only p.x is accessed, an input of FF (8 bits) is acceptable even though y is missing. Similarly, FFFF0000 (16 bits of extra data) is also fine, as lazy ignores any data that is not requested. In most cases, this isn't an issue: for incoming messages, typically all fields are used (otherwise, why include them in the struct?), and extra data in the input is usually harmless, since the message is still deserialized correctly.
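For contrast, a minimal sketch (invented function name) of the strict behavior from the first bullet above:
fun demoStrict(s: slice) {
    // a regular fromSlice validates the full struct:
    // for the 8-bit input FF it throws, because y is missing
    val p = Point.fromSlice(s);
    return p.x;
}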
Perhaps someday, lazy will become the default. For now, it remains a distinct keyword highlighting the lazy-loading capability, a key feature of Tolk.