Tolk can load data partially, on demand, using the lazy keyword. The compiler tracks exactly which fields are accessed and automatically loads only those, skipping the rest.
In practice, prefer lazy T.fromCell() to a regular T.fromCell().
It is recommended to review automatic serialization first.
A short demo of lazy
Suppose there is a Storage struct in a wallet:
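Below is a minimal sketch of such a struct; the field names and layout are assumptions, loosely modeled on a simple wallet:

```tolk
struct Storage {
    seqno: uint32
    subwalletId: uint32
    publicKey: uint256
    extensions: cell?    // e.g. a dictionary of plugins, may be null
}

// helper that parses the whole contract data cell into Storage
fun Storage.load() {
    return Storage.fromCell(contract.getData());
}
```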
What does a regular Storage.load() do? It unpacks a cell, populates all struct fields, checks consistency, and so on.
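For instance, a plain getter pays for unpacking every field even when it needs only one (continuing the sketch above):

```tolk
get fun publicKey(): uint256 {
    val st = Storage.load();   // parses seqno, subwalletId, publicKey, extensions
    return st.publicKey;
}
```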
The magic of lazy Storage.load() is that it does not load the entire cell. Instead, unused fields are just skipped:
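With lazy, the same getter loads only what it actually reads; the skip sizes below follow from the assumed layout:

```tolk
get fun publicKey(): uint256 {
    val st = lazy Storage.load();
    // seqno and subwalletId (32 + 32 bits) are skipped in one go,
    // publicKey is loaded, extensions are never touched
    return st.publicKey;
}
```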
Even deeper than it seems
Take a look at an NFT collection storage: the goal is to access content and get commonKey from it:
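A sketch of such a storage; the field names (commonKey in particular) are hypothetical:

```tolk
struct CollectionContent {
    metadata: cell
    commonKey: uint256    // the field we ultimately want to read
}

struct NftCollectionStorage {
    ownerAddress: address
    nextItemIndex: uint64
    content: Cell<CollectionContent>    // stored as a separate referenced cell
}

fun NftCollectionStorage.load() {
    return NftCollectionStorage.fromCell(contract.getData());
}
```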
Having obtained content, how do we get commonKey from it? The answer: since content is a cell, load it… lazily:
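Continuing the sketch above:

```tolk
get fun commonKey(): uint256 {
    val st = lazy NftCollectionStorage.load();
    // st.content is Cell<CollectionContent>: a typed reference, not parsed yet
    val content = lazy st.content.load();
    return content.commonKey;    // inside the ref, only commonKey is loaded
}
```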
Note that given p: Cell<Point>, it is not allowed to access p.x directly — the cell (reference) needs to be loaded first, either with Point.fromCell(p) or p.load(). Both can be used with lazy.
Lazy matching
Similarly, a union type (an incoming message) can be read with lazy:
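A sketch of how this looks for an incoming message; the message names, opcodes, and the InMessage-style entry point are assumptions here:

```tolk
struct (0x12345678) CounterIncrement {
    queryId: uint64
    byValue: int32
}

struct (0x23456789) CounterReset {
    queryId: uint64
}

type AllowedMessage = CounterIncrement | CounterReset

fun onInternalMessage(in: InMessage) {
    val msg = lazy AllowedMessage.fromSlice(in.body);

    match (msg) {
        CounterIncrement => {
            // only the fields used in this branch are loaded
            val delta = msg.byValue;
            // ... apply delta to the counter in storage
        }
        CounterReset => {
            // ... reset the counter in storage
        }
    }
}
```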
Here is what lazy gives when applied to unions:
- No union is allocated on the stack upfront; matching and loading are deferred until needed.
- match operates naturally by inspecting the slice prefix (opcode).
- Within each branch, the compiler inserts loading points and skips unused fields — just like it does for structs.
In essence, it is the same as manually comparing the opcode, e.g. if (op == OP_RESET).
It aligns perfectly with the TVM execution model, eliminating unnecessary stack operations.
Lazy matching and else
Since lazy match for a union is done by inspecting the prefix (opcode), unmatched cases fall through to the else branch.
Without else, unpacking throws error 63 by default; this is controlled by the throwIfOpcodeDoesNotMatch option in fromSlice.
The else branch allows inserting any custom logic.
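For example, a sketch reusing the hypothetical messages from above:

```tolk
match (msg) {
    CounterIncrement => { /* ... */ }
    CounterReset => { /* ... */ }
    else => {
        // none of the known opcodes matched:
        // custom handling instead of the default error 63
        throw 0xFFFF
    }
}
```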
else in match by type is only allowed with lazy because it matches on prefixes.
Without lazy, it’s just a regular union, and else is not allowed.

Partial updating
The magic doesn’t stop at reading. The lazy keyword also works seamlessly when writing data back.
Example: load a storage, use its fields for assertions, modify one field, and save it back:
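A sketch under the Storage layout assumed earlier:

```tolk
fun acceptNewSeqno(msgSeqno: int) {
    var st = lazy Storage.load();

    // loading stops right after seqno; the rest remains an untouched tail
    assert (st.seqno == msgSeqno) throw 33;

    st.seqno += 1;
    contract.setData(st.toCell());    // only seqno is re-serialized
}
```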
Here, toCell() does not save all fields of the storage, since only seqno was modified.
Instead, after loading seqno, an immutable tail was saved — and is reused when writing back:
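Conceptually (this is a sketch, not the exact compiler output), the generated code behaves like this:

```tolk
var s = contract.getData().beginParse();
val seqno = s.loadUint(32);    // the only field actually decoded
val immutableTail = s;         // everything after seqno, kept as a raw slice

// `st.seqno += 1; contract.setData(st.toCell())` then becomes roughly:
contract.setData(
    beginCell()
        .storeUint(seqno + 1, 32)     // the modified field
        .storeSlice(immutableTail)    // the untouched tail, copied verbatim
        .endCell()
);
```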
Q: How does lazy skip unused fields?
When several consecutive fields are unused, the compiler tries to group them.
It works perfectly for fixed-size types such as intN or bitsN:
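For instance, in the sketch below (hypothetical fields), two consecutive unused fixed-size fields are skipped together:

```tolk
struct Transfer {
    queryId: uint64     // unused below
    createdAt: uint32   // unused below
    amount: coins
}

fun readAmount(s: slice) {
    val t = lazy Transfer.fromSlice(s);
    // queryId + createdAt are fixed-size and consecutive:
    // the compiler skips 64 + 32 = 96 bits in one instruction
    return t.amount;
}
```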
Variable-size types, such as coins, cannot be grouped, and they cannot be skipped in a single instruction: TVM has no “skip coins” feature. The only option is to load the value and ignore the result.
Similarly for address: even though it is always 267 bits, it must be validated even if unused — otherwise, binary data could be decoded incorrectly.
For such types, lazy cannot do anything better than “load and ignore”.
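A small sketch (hypothetical fields) of what that means in practice:

```tolk
struct Payment {
    amount: coins    // variable-size: cannot be skipped, only loaded and ignored
    flags: uint32    // the field we actually need
}

fun readFlags(s: slice) {
    val p = lazy Payment.fromSlice(s);
    return p.flags;    // amount is loaded and dropped, then flags is loaded
}
```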
In practice, intN types are very common, so grouping has a noticeable effect.
The trick of accessing a ref without skipping any preceding data also works fine.
Q: What are the disadvantages of lazy?
In terms of gas consumption, lazy fromSlice is equal to or cheaper than regular fromSlice.
In the worst case — when all fields are accessed — it loads everything one by one, just like the non-lazy version.
However, there is a difference unrelated to gas consumption:
- If a slice is too small or contains extra data, fromSlice will throw.
- The lazy keyword selectively picks only the requested fields and handles partially invalid input gracefully. For example, given:
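a Point struct and a getter like these (a minimal sketch):

```tolk
struct Point {
    x: int8
    y: int8
}

fun readX(s: slice) {
    val p = lazy Point.fromSlice(s);
    return p.x;    // only x is loaded; y is never touched
}
```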
If only p.x is accessed, an input of FF (8 bits) is acceptable even though y is missing.
Similarly, FFFF0000 (16 bits of extra data) is also fine, as lazy ignores any data that is not requested.
In most cases, this isn’t an issue.
For incoming messages, typically all fields are used (otherwise, why include them in the struct?).
Extra data in the input is typically harmless. The message can still be deserialized correctly.
Perhaps someday, lazy will become the default. For now, it remains a distinct keyword highlighting the lazy-loading capability — a key feature of Tolk.