Substrate Recipes 🍴😋🍴

A Hands-On Cookbook for Aspiring Blockchain Chefs

Substrate Recipes is a cookbook of working examples that demonstrate best practices when building blockchains with Substrate. Each recipe contains a complete working code example as well as a detailed writeup describing the code. This book is open source. Check out the contributing guidelines for an overview of the structure and directions for getting involved.

How to Use This Book

The easiest place to read this book is at https://substrate.dev/recipes.

The first two chapters are meant to be read in order.

In Chapter 1, Preparing your Kitchen, you will set up your toolchain, compile a blockchain node, and learn to interact with the blockchain.

In Chapter 2, Appetizers, you will cook your first few recipes, learning the fundamentals of Substrate development.

The rest of the book, the "Entrees", can be read in any order; skip to whichever recipes interest you.

Remember, you can't learn to cook by reading alone. As you work through the book, put on your apron, get out some pots and pans, and practice compiling, testing, and hacking on the recipes. Play with the code in the kitchen, extract patterns, and apply them to a problem that you want to solve!

Getting Help

When learning any new skill, you will inevitably get stuck at some point. When you do, you can seek help in several ways:

What is Substrate?

Substrate is a framework for building blockchains. For a high level overview, read the following blog posts:

To learn more about Substrate, see the official documentation.

Learning Rust

Becoming productive with Substrate requires some familiarity with Rust. Fortunately, the Rust community is known for comprehensive documentation and tutorials. The most common resource for initially learning Rust is The Rust Book. To see examples of popular crate usage patterns, Rust by Example is also convenient.

While knowing some Rust is certainly necessary, it is not wise to delay learning Substrate until you are a Rust guru. Rather than learning Rust before you learn Substrate, consider learning Rust as you learn Substrate. If you're beyond the fundamentals of Rust, there are lots more Rust resources at the end of the book.

Setting Up Your Kitchen

Any experienced chef will tell you that cooking delicious blockchains... I mean meals... starts with a properly equipped and organized kitchen. In this chapter we will guide you through setting up your development environment, and introduce you to the structure of the recipes repository.

This section covers:

Building a Node

Prerequisites

Before we can even begin compiling our first blockchain node, we need to have a properly configured Rust toolchain. There is a convenient script that will set up this toolchain for us, and we can run it with the following command.

# Setup Rust and Substrate
curl https://getsubstrate.io -sSf | bash -s -- --fast

This command downloads and executes code from the internet. Give yourself peace-of-mind by inspecting the script's source to confirm it isn't doing anything nasty.

For Windows

These instructions, and the rest of the instructions in this chapter, assume a Unix-like environment such as Linux, macOS, or Windows Subsystem for Linux (WSL). If you are a Windows user, WSL is the best way to proceed. If you want or need to work in a native Windows environment, this is possible, but is not covered in detail here. Please follow along with the Getting Started on Windows guide, then return here when you're ready to proceed.

Compile the Kitchen Node

If you haven't already, git clone the recipes repository. We also want to kick-start the node compilation as it may take about 30 minutes to complete depending on your hardware.

# Clone the Recipes Repository
git clone https://github.com/substrate-developer-hub/recipes.git
cd recipes

#  Update Rust-Wasm toolchain
./nodes/scripts/init.sh

# Compile the Kitchen Node
# This step takes a while to complete
cargo build --release -p kitchen-node

As you work through the recipes, refer back to these instructions each time you wish to re-compile the node. Over time the commands will become familiar, and you will even modify them to compile other nodes.

Checking Your Work

Once the compilation is completed, you can ensure that the node has been built properly by displaying its help page. Notice that the node executable is found in the target/release directory. This is the default location for Rust projects.

# Inside `recipes` directory

# Display the Kitchen Node's help page
./target/release/kitchen-node --help

Interact with the Kitchen Node

If you followed the instructions to build the node, you may proceed to launch your first blockchain.

Launch a Development Node

Before we launch our node we will purge any chain data. If you've followed the instructions exactly, you will not yet have any chain data to purge, but on each subsequent run, you will, and it is best to get in the habit of purging your chain now. We will start our node in development mode (--dev).

# Purge existing blockchain data (if any)
./target/release/kitchen-node purge-chain --dev

# Start a fresh development blockchain
./target/release/kitchen-node --dev

You should now see your node up and running and waiting for transactions. The Kitchen Node, like several other nodes included with the Recipes, is an instant seal node. That means it will not create any blocks until there are transactions to process, and that as soon as a transaction is ready, it will instantly create a block. Instant seal nodes are ideal for experimenting with your Substrate runtime. The output looks something like this.

2020-05-18 14:33:35 Running in --dev mode, RPC CORS has been disabled.
2020-05-18 14:33:35 Kitchen Node
2020-05-18 14:33:35 ✌️  version 2.0.0-alpha.7-6f91ef9-x86_64-linux-gnu
2020-05-18 14:33:35 ❤️  by Joshy Orndorff:4meta5:Jimmy Chu, 2019-2020
2020-05-18 14:33:35 📋 Chain specification: Development
2020-05-18 14:33:35 🏷  Node name: confused-songs-1348
2020-05-18 14:33:35 👤 Role: AUTHORITY
2020-05-18 14:33:35 💾 Database: RocksDb at /home/joshy/.local/share/kitchen-node/chains/dev/db
2020-05-18 14:33:35 ⛓  Native runtime: super-runtime-1 (super-runtime-1.tx1.au1)
2020-05-18 14:33:35 🔨 Initializing Genesis block/state (state: 0x1835…9bd7, header-hash: 0x239e…48d8)
2020-05-18 14:33:35 📦 Highest known block at #0
2020-05-18 14:33:35 Using default protocol ID "sup" because none is configured in the chain specs
2020-05-18 14:33:35 🏷  Local node identity is: QmQXnCTyCAfe3QAs43ggyJyWAJ1MoKzqizK991ZRTNQhxi
2020-05-18 14:33:35 〽️ Prometheus server started at 127.0.0.1:9615
2020-05-18 14:33:40 💤 Idle (0 peers), best: #0 (0x239e…48d8), finalized #0 (0x239e…48d8), ⬇ 0 ⬆ 0

Launch the Apps User Interface

You can navigate to the Polkadot-JS Apps user interface. This is a general purpose interface for interacting with many different Substrate-based blockchains including Polkadot. From now on we'll call it "Apps" for short. Before Apps will work with our blockchain, we need to give it some chain-specific information known as the "types". You'll learn what all this means as you work through the recipes; for now just follow the instructions.

If you visit Apps directly rather than following the link above, it connects to the Kusama network by default. You will need to switch to your locally running network, with only one node, by clicking the network icon in the top left corner of Apps.

Screenshot: Switching Network

Some browsers, notably Firefox, will not connect to a local node from an https website. An easy workaround is to try another browser, like Chromium. Another option is to host this interface locally.

If you're not already on the Settings -> Developer page, please navigate there. Copy the contents of runtimes/super-runtime/types.json into Apps.

Screenshot: pasting types into Apps UI

The kitchen node uses the super-runtime by default. As you work through the recipes, you'll learn that it is easy to use other runtimes in this node, or use other nodes entirely. When you do use another runtime, you need to insert the appropriate types from the runtimes/<whatever runtime you're using>/types.json file. Every runtime that ships with the Recipes has this file.

Submitting a Transaction

You may now submit a simple token transfer transaction using the "Transfer" tab. When you do, you will notice that your node instantly creates a block, and the transaction is processed. As you work through the recipes, you will use the Chain State tab to query the blockchain status and Extrinsics to send transactions to the blockchain. Play around for a bit before moving on.

Kitchen Organization

Now that your kitchen is well-equipped with all the right tools (bowls, knives, Rust compiler, etc), let's take a look at how it is organized.

Structure of a Substrate Node

It is useful to recognize that coding is all about abstraction.

To understand how the code in this repository is organized, let's first take a look at how a Substrate node is constructed. Each node has many components that manage things like the transaction queue, communicating over a P2P network, reaching consensus on the state of the blockchain, and the chain's actual runtime logic. Each aspect of the node is interesting in its own right, and the runtime is particularly interesting because it contains the business logic (aka "state transition function") that codifies the chain's functionality.

Much, but not all, of the Recipes focuses on writing runtimes with FRAME, Parity's Framework for composing runtimes from individual building blocks called Pallets. Runtimes built with FRAME typically contain several such pallets. The kitchen node you built previously follows this paradigm.

Substrate Architecture Diagram

The Directories in our Kitchen

There are five primary directories in this repository:

  • Text: Source of the book written in markdown. This is what you're reading right now.
  • Pallets: Pallets for use in FRAME-based runtimes.
  • Runtimes: Runtimes for use in Substrate nodes.
  • Consensus: Consensus engines for use in Substrate nodes.
  • Nodes: Complete Substrate nodes ready to run.

Exploring those directories reveals a tree that looks like this:

recipes
|
+-- text
|
+-- consensus
	|
	+-- sha3pow
|
+-- nodes
	|
	+-- kitchen-node    <-- You built this previously
	|
	+-- rpc-node
|
+-- runtimes
	|
	+-- api-runtime
	|
	+-- super-runtime    <-- You built this too (it is part of the kitchen-node)
	|
	+-- weight-fee-runtime
	|
	+ ...
|
+-- pallets
	|
	+-- adding-machine    <-- You built this too (it is part of super-runtime)
	|
	+-- basic-token        <-- You built this too (it is part of super-runtime)
	|
	+ ...
	|
	+-- weights

Inside the Kitchen Node

Let us take a deeper look at the Kitchen Node.

Looking inside the Kitchen Node's Cargo.toml file we see that it has many dependencies. Most of them come from Substrate itself. Indeed most parts of this Kitchen Node are not unique or specialized, and Substrate offers robust implementations that we can use. The runtime does not come from Substrate. Rather, we use our super-runtime which is in the runtimes folder.

nodes/kitchen-node/Cargo.toml

# This node is compatible with any of the runtimes below
# ---
# Common runtime configured with most Recipes pallets.
runtime = { package = "super-runtime", path = "../../runtimes/super-runtime" }

# Runtime with custom weight and fee calculation.
# runtime = { package = "weight-fee-runtime", path = "../../runtimes/weight-fee-runtime"}

# Runtime with off-chain worker enabled.
# To use this runtime, compile the node with `ocw` feature enabled,
#   `cargo build --release --features ocw`.
# runtime = { package = "ocw-runtime", path = "../../runtimes/ocw-runtime" }

# Runtime with custom runtime-api (custom API only used in rpc-node)
# runtime = { package = "api-runtime", path = "../../runtimes/api-runtime" }
# ---

The commented lines, quoted above, show that the Super Runtime is not the only runtime we could have chosen. We could also use the Weight-Fee runtime, and I encourage you to try that experiment (remember, instructions to re-compile the node are in the previous section).

Every node must have a runtime. You may confirm that by looking at the Cargo.toml files of the other nodes included in our kitchen.

Inside the Super Runtime

Having seen that the Kitchen Node depends on a runtime, let us now look deeper at the Super Runtime.

runtimes/super-runtime/Cargo.toml

# -- snip --

# Substrate Pallets
balances = { package = 'pallet-balances', ... }
transaction-payment = { package = 'pallet-transaction-payment', ... }
# Recipe Pallets
adding-machine = { path = "../../pallets/adding-machine", default-features = false }
basic-token = { path = "../../pallets/basic-token", default-features = false }

Here we see that the runtime depends on many pallets. Some of these pallets come from Substrate itself. Indeed, Substrate offers a rich collection of commonly used pallets which you may use in your own runtimes. This runtime also contains several custom pallets that are written right here in our Kitchen.

Common Patterns

We will not yet look closely at individual Pallets. We will begin that endeavor in the next chapter -- Appetizers.

We've just observed the general pattern used throughout the recipes. From the inside out, we see a piece of pallet code stored in pallets/<pallet-name>/src/lib.rs. The pallet is then included into a runtime by adding its name and relative path in runtimes/<runtime-name>/Cargo.toml. That runtime is then installed in a node by adding its name and relative path in nodes/<node-name>/Cargo.toml. Of course, adding pallets and runtimes also requires changing actual code. We will cover those details in due course. For now we're just focusing on the macroscopic relationships between the parts of a Substrate node.

Some recipes explore aspects of Blockchain development that are outside of the runtime. Looking back to our node architecture at the beginning of this section, you can imagine that changing a node's RPC or Consensus would be conceptually similar to changing its runtime.

Additional Resources

Substrate Developer Hub offers tutorials that go into more depth about writing pallets and including them in runtimes. If you desire, you may read them as well.

Let's Get Cooking!

When you're ready, we can begin by cooking some appetizer pallets.

Appetizers

This section of the cookbook will focus on Appetizers, small runtime pallets that teach you the basics of writing pallets with a little hand-holding. If you are brand new to Substrate, you should follow through these appetizers in order. If you've already got the basics of pallet development down, you may skip ahead to the entrees which may be read in any order.

This section covers:

Hello Substrate

pallets/hello-substrate

The first pallet we'll explore is a simple "hello world" example. This pallet will have one dispatchable call that prints a message to the node's output. Because this is our first pallet, we'll also explore the structure that every pallet has. This code lives in pallets/hello-substrate/src/lib.rs.

No Std

The very first line of code tells the rust compiler that this crate should not use rust's standard library except when explicitly told to. This is useful because Substrate runtimes compile to Web Assembly where the standard library is not available.

#![cfg_attr(not(feature = "std"), no_std)]

Imports

Next, you'll find imports that come from various parts of the Substrate framework. All pallets will import from a few common crates including frame-support, and frame-system. Complex pallets will have many imports as we'll see later. The hello-substrate pallet uses these imports.

use frame_support::{ decl_module, dispatch::DispatchResult, debug };
use frame_system::{ self as system, ensure_signed };
use sp_runtime::print;

Tests

Next we see a reference to the tests module. This pallet has tests written in a separate file called tests.rs. We will not discuss the tests further at this point, but they are covered in the Testing section of the book.
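Concretely, that reference is a single line near the top of lib.rs; a minimal sketch, assuming the tests live in a sibling tests.rs file, looks like this.

// Only compiled when running `cargo test`
#[cfg(test)]
mod tests;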

Configuration Trait

Next, each pallet has a configuration trait which is called Trait. The configuration trait can be used to access features from other pallets, or constants that affect the pallet's behavior. This pallet is simple enough that our configuration trait can remain empty, although it must still exist.

pub trait Trait: system::Trait {}

Dispatchable Calls

A Dispatchable call is a function that a blockchain user can call as part of an Extrinsic. "Extrinsic" is Substrate jargon meaning a call from outside of the chain. Most of the time they are transactions, and for now it is fine to think of them as transactions. Dispatchable calls are defined in the decl_module! macro.

decl_module! {
	pub struct Module<T: Trait> for enum Call where origin: T::Origin {

		/// A function that says hello to the user by printing messages to the node log
		#[weight = 10_000]
		pub fn say_hello(origin) -> DispatchResult {
			// --snip--
		}

		// More dispatchable calls could go here
	}
}

As you can see, our hello-substrate pallet has a dispatchable call that takes a single argument, called origin which we'll investigate shortly. The call returns a DispatchResult which can be either Ok(()) indicating that the call succeeded, or an Err which we'll investigate in the appetizer about errors.

Weight Annotations

Right before the say_hello function, we see the line #[weight = 10_000]. This line attaches a default weight to the call. Ultimately weights affect the fees a user will have to pay to call the function. Weights are a very interesting aspect of developing with Substrate, but they will be covered later in the section on Weights. For now, and for many of the recipes' pallets, we will simply use the default weight as we have done here.

Inside a Dispatchable Call

Let's take a closer look at our dispatchable call.

pub fn say_hello(origin) -> DispatchResult {
	// Ensure that the caller is a regular keypair account
	let caller = ensure_signed(origin)?;

	// Print a message
	print("Hello World");
	// Inspecting variables
	debug::info!("Request sent by: {:?}", caller);

	// Indicate that this call succeeded
	Ok(())
}

This function essentially does three things. First, it uses the ensure_signed function to ensure that the caller of the function was a regular user who owns a private key. This function also returns who that caller was. We store the caller's identity in the caller variable.

Second, it prints a message and logs the caller. Notice that we aren't using Rust's normal println! macro, but rather a special print function and debug::info! macro. The reason for this is explained in the next section.

Finally, the call returns Ok(()) to indicate that the call has succeeded. At a glance it seems that there is no way for this call to fail, but this is not quite true. The ensure_signed function, used at the beginning, can return an error if the call was not from a signed origin. This is the first time we're seeing the important paradigm "Verify first, write last". In Substrate development, it is important that you always ensure preconditions are met and return errors at the beginning. After these checks have completed, then you may begin the function's computation.

Printing from the Runtime

Printing to the terminal from a Rust program is typically very simple using the println! macro. However, Substrate runtimes are compiled to both Web Assembly and a regular native binary, and do not have access to rust's standard library. That means we cannot use the regular println!. I encourage you to modify the code to try using println! and confirm that it will not compile. Nonetheless, printing a message from the runtime is useful both for logging information, and also for debugging.

Substrate Architecture Diagram

At the top of our pallet, we imported sp_runtime's print function. This special function allows the runtime to pass a message for printing to the outer part of the node, which is not compiled to Wasm, has access to the standard library, and can perform regular IO. This function is only able to print items that implement the Printable trait. Luckily all the primitive types already implement this trait, and you can implement it for your own datatypes too.

Print function note: To actually see the printed messages, we need to use the flag -lruntime=debug when running the kitchen node. So, for the kitchen node, the command would become ./target/release/kitchen-node --dev -lruntime=debug.

The next line demonstrates using the debug::info! macro to log to the screen and inspect a variable's content. The syntax inside the macro is very similar to that of Rust's regular println! macro.

Runtime logger note: When we execute the runtime in native, debug::info! messages are printed. However, if we execute the runtime in Wasm, then an additional step to initialise RuntimeLogger is required.
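A hedged sketch of that initialisation, assuming the debug module re-exported by frame_support in this version of Substrate, is a single call placed before any logging.

// Initialise the runtime logger so that debug::info! output also appears
// when the Wasm runtime is executing (assumed API for this Substrate version)
debug::RuntimeLogger::init();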

Installing the Pallet in a Runtime

In order to actually use a pallet, it must be installed in a Substrate runtime. This particular pallet is installed in the super-runtime which you built as part of the kitchen node. To install a pallet in a runtime, you must do three things.

Depend on the Pallet

First we must include the pallet in our runtime's Cargo.toml file. In the case of the super-runtime, this file is at runtimes/super-runtime/Cargo.toml.

[dependencies]
# --snip--
hello-substrate = { path = "../../pallets/hello-substrate", default-features = false }

Because the runtime is compiled to both native and Wasm, we must ensure that our pallet is built to the correct target as well. At the bottom of the Cargo.toml file, we see this.

[features]
default = ["std"]
std = [
	# --snip--
	"hello-substrate/std",
]

Implement its Configuration Trait

Next we must implement the pallet's configuration trait. This happens in the runtime's main lib.rs file. In the case of the super-runtime, this file is at runtimes/super-runtime/src/lib.rs. Because this pallet's configuration trait is trivial, so is implementing it.

impl hello_substrate::Trait for Runtime {}

You can see the other pallets' trait implementations in the surrounding lines. Most of them are more complex.

Add it to construct_runtime!

Finally, we add our pallet to the construct_runtime! macro.

construct_runtime!(
	pub enum Runtime where
		Block = Block,
		NodeBlock = opaque::Block,
		UncheckedExtrinsic = UncheckedExtrinsic
	{
		// --snip--
		HelloSubstrate: hello_substrate::{Module, Call},
	}
);

This macro does the heavy lifting of composing each individual pallet into a single usable runtime. Let's explain the syntax for each line. Each Pallet listed in the macro needs several pieces of information.

First is a convenient name to give to this pallet. We've chosen HelloSubstrate. It is common to choose the same name as the pallet itself except when there is more than one instance. Next is the name of the crate that the pallet lives in. And finally there is a list of features the pallet provides. All pallets require Module. Our pallet also provides dispatchable calls, so it requires Call.

Try it Out

If you haven't already, try interacting with the pallet using the Apps UI. You should see your message printed to the log of your node. Remember to run the kitchen node with the correct flags: ./target/release/kitchen-node --dev -lruntime=debug

You're now well on your way to becoming a blockchain chef. Let's continue to build our skills with another appetizer.

Single Value

pallets/single-value

Storage is used for data that should be kept between blocks and accessible to future transactions. Most runtimes will have many storage values, and together the storage values make up the blockchain's "state". The storage values themselves are not stored in the blocks. Instead the blocks contain extrinsics that represent changes to the storage values. It is the job of each node in a blockchain network to keep track of the current storage. The current state of storage can be determined by executing all of the blocks in the chain.

Declaring Storage

A pallet's storage items are declared with the decl_storage! macro.

decl_storage! {
    trait Store for Module<T: Trait> as SingleValue {
        // --snip--
    }
}

The code above is boilerplate that does not change, with the exception of the name SingleValue. The macro uses this as the name for a struct that it creates. As a pallet author you don't need to worry about this value much, and it is fine to use the name of the pallet itself.

This pallet has two storage items, both of which are single storage values. Substrate's storage API also supports more complex storage types which are covered in the entrees. The fundamentals for all types are the same.

Our first storage item is a u32 value which is declared with this syntax

StoredValue get(fn stored_value): u32;

StoredValue is the name of the storage item, similar to a variable name. We will use this name any time we access the storage item. The get(fn stored_value) part is optional. It tells the decl_storage! macro to create a getter function for us. That means we get a function called stored_value which returns the value in that storage item. Finally, the : u32 declares the type of the item.

The next storage item is an AccountId. This is not a primitive type, but rather comes from the system pallet. Types like this need to be prefixed with a T:: as we see here.

StoredAccount get(fn stored_account): T::AccountId;

Reading and Writing to Storage

Functions used to access a single storage value are defined in the StorageValue trait. In this pallet, we use the most common method, put, but it is worth skimming the other methods so you know what is available.
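As a brief sketch of a few of those other methods on our non-generic StoredValue item (these calls are illustrative and not part of this pallet's code):

// Read-modify-write the value in a single call
StoredValue::mutate(|v| *v += 1);

// Read the current value and remove it from storage
let previous = StoredValue::take();

// Remove the value from storage entirely
StoredValue::kill();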

The set_value method demonstrates writing to storage, as well as taking a parameter in our dispatchable call.

fn set_value(origin, value: u32) -> DispatchResult {
	let _ = ensure_signed(origin)?;

	StoredValue::put(value);

	Ok(())
}

To read a value from storage, we could use the get method, or we could use the getter method we declared in decl_storage!.

// The following lines are equivalent
let my_val = StoredValue::get();
let my_val = Self::stored_value();

Storing the Caller's Account

In terms of storage, the set_account method is quite similar to set_value, but it also demonstrates how to retrieve the AccountId of the caller using the ensure_signed function.

fn set_account(origin) -> DispatchResult {
	let who = ensure_signed(origin)?;

	<StoredAccount<T>>::put(&who);

	Ok(())
}

Because the AccountId type comes from the configuration trait, we must use slightly different syntax. Notice the <T> attached to the name of the storage value this time. Notice also that because AccountId is not primitive, we lend a reference to it rather than transferring ownership.
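Reading the generic value back out follows the same two equivalent options we saw for StoredValue. A minimal sketch:

// The following lines are equivalent
let who = <StoredAccount<T>>::get();
let who = Self::stored_account();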

Constructing the Runtime

We learned about the construct_runtime! macro in the previous section. Because this pallet uses storage items, we must declare them on the pallet's line in construct_runtime!. In the Super Runtime, we see the additional Storage feature.

construct_runtime!(
	pub enum Runtime where
		Block = Block,
		NodeBlock = opaque::Block,
		UncheckedExtrinsic = UncheckedExtrinsic
	{
		// --snip--
		SingleValue: single_value::{Module, Call, Storage},
	}
);

Handling Errors

pallets/adding-machine

As we've mentioned before, in Substrate development it is important to Verify first, write last. In this recipe, we'll create an adding machine that checks for unlucky numbers (a silly example) as well as integer overflow (a serious and realistic example), and throws the appropriate errors.

Declaring Errors

Errors are declared with the decl_error! macro. Although it is optional, it is good practice to write doc comments for each error variant as demonstrated here.

decl_error! {
	pub enum Error for Module<T: Trait> {
		/// Thirteen is unlucky and prohibited
		UnluckyThirteen,
		/// Sum would have overflowed if we had added
		SumTooLarge,
	}
}

Throwing Errors in match Statement

Errors can be thrown in two different ways, both of which are demonstrated in the add dispatchable call. The first is with the ensure! macro where the error to throw is the second parameter. The second is to throw the error by explicitly returning it.

fn add(origin, val_to_add: u32) -> DispatchResult {
	let _ = ensure_signed(origin)?;

	// First check for unlucky number 13
	ensure!(val_to_add != 13, <Error<T>>::UnluckyThirteen);

	// Now check for overflow while adding
	let result = match Self::sum().checked_add(val_to_add) {
		Some(r) => r,
		None => return Err(<Error<T>>::SumTooLarge.into()),
	};

	// Write the new sum to storage
	Sum::put(result);

	Ok(())
}

Notice that the Error type always takes the generic parameter T. Notice also that we have verified all preconditions, and thrown all possible errors before ever writing to storage.

Throwing Errors with .ok_or and .map_err

In fact, the pattern of:

  • calling a function that returns a Result or Option, and
  • checking the result and, if it is not Ok or Some, returning from the function early with an error

is so common that there are two standard Rust methods that help perform the task.

fn add_alternate(origin, val_to_add: u32) -> DispatchResult {
	let _ = ensure_signed(origin)?;

	ensure!(val_to_add != 13, <Error<T>>::UnluckyThirteen);

	// Using `ok_or()` to check if the returned value is `Ok` and unwrap the value.
	//   If not, returns error from the function.
	let result = Self::sum().checked_add(val_to_add).ok_or(<Error<T>>::SumTooLarge)?;

	Sum::put(result);
	Ok(())
}

Notice the pattern of .ok_or(<Error<T>>::MyError)?;. This is handy when a function call returns an Option that you expect to contain a value. If it does, the value is unwrapped for further processing; if not, the call returns early with the specified error.

If your function returns a Result<T, E>, you could apply .map_err(|_e| <Error<T>>::MyError)?; in the same spirit.
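As a minimal sketch of that pattern (the checked_double helper and the add_double call are hypothetical and not part of the adding-machine pallet):

// A hypothetical helper that returns a Result rather than an Option
impl<T: Trait> Module<T> {
	fn checked_double(val: u32) -> Result<u32, &'static str> {
		val.checked_mul(2).ok_or("doubling overflowed")
	}
}

// Inside decl_module!: map the helper's error into this pallet's Error type
// and return early if the helper failed
fn add_double(origin, val_to_add: u32) -> DispatchResult {
	let _ = ensure_signed(origin)?;

	let doubled = Self::checked_double(val_to_add).map_err(|_e| <Error<T>>::SumTooLarge)?;

	let result = Self::sum().checked_add(doubled).ok_or(<Error<T>>::SumTooLarge)?;
	Sum::put(result);
	Ok(())
}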

Constructing the Runtime

Unlike before, adding errors to our pallet does not require a change to the line in construct_runtime!. This is just an idiosyncrasy of developing in Substrate.

Using Events

pallets/simple-event, pallets/generic-event

Having a transaction included in a block does not guarantee that the function executed successfully. As we saw in the previous recipe, many calls can cause errors, but the transaction may still be included in a block. To verify that functions have executed successfully, emit an event at the bottom of the function body.

Events notify the off-chain world of successful state transitions.

Some Prerequisites

When using events, we have to include the Event type in our configuration trait. Although the syntax is a bit complex, it is the same every time. If you are a skilled Rust programmer you will recognize this as a series of trait bounds. If you don't recognize this feature of Rust yet, don't worry; it is the same every time, so you can just copy it and move on.

pub trait Trait: system::Trait {
	type Event: From<Event> + Into<<Self as system::Trait>::Event>;
}

Next we have to add a line inside of the decl_module! macro which generates the deposit_event function we'll use later when emitting our events. Even experienced Rust programmers will not recognize this syntax because it is unique to this macro. Just copy it each time.

decl_module! {
	pub struct Module<T: Trait> for enum Call where origin: T::Origin {

		// This line is new
		fn deposit_event() = default;

		// --snip--
	}
}

Declaring Events

To declare an event, use the decl_event! macro. Like any rust enum, Events have names and can optionally carry data with them. The syntax is slightly different depending on whether the events carry data of primitive types, or generic types from the pallet's configuration trait. These two techniques are demonstrated in the simple-event and generic-event pallets respectively.

Simple Events

The simplest example of an event uses the following syntax

decl_event!(
	pub enum Event {
		EmitInput(u32),
	}
);

Events with Generic Types

Sometimes events might contain types from the pallet's Configuration Trait. In this case, it is necessary to specify additional syntax

decl_event!(
	pub enum Event<T> where AccountId = <T as system::Trait>::AccountId {
		EmitInput(AccountId, u32),
	}
);

This example also demonstrates how the where clause can be used to specify type aliasing for more readable code.

Emitting Events

Events are emitted from dispatchable calls using the deposit_event method.

Events are not emitted on block 0. So any dispatchable calls made during genesis block formation will have no events emitted.

Simple Events

The event is emitted at the bottom of the do_something function body.

Self::deposit_event(Event::EmitInput(new_number));

Events with Generic Types

The syntax for deposit_event now takes the RawEvent type because it is generic over the pallet's configuration trait.

Self::deposit_event(RawEvent::EmitInput(user, new_number));

Constructing the Runtime

For the first time in the recipes, our pallet has an associated type in its configuration trait. We must specify this type when implementing its trait. In the case of the Event type, this is entirely straightforward, and looks the same for both simple events and generic events.

impl simple_event::Trait for Runtime {
	type Event = Event;
}

Events, like dispatchable calls and storage items, require a slight change to the pallet's line in construct_runtime!. Notice that the <T> is necessary for generic events.

construct_runtime!(
	pub enum Runtime where
		Block = Block,
		NodeBlock = opaque::Block,
		UncheckedExtrinsic = UncheckedExtrinsic
	{
		// --snip--
		GenericEvent: generic_event::{Module, Call, Event<T>},
		SimpleEvent: simple_event::{Module, Call, Event},
	}
);

Entrees

These Entrees are for chefs who have the basics down. If you've read through the first two chapters of this cookbook, that includes you! The entrees cover a wide variety of topics in Substrate development, and are meant to be read in any order.

Storage API

We've already encountered the decl_storage! macro in the appetizer on storage items. There is a rich storage API in Substrate which we will explore in this section.

For cryptocurrencies, storage might consist of a mapping between account keys and corresponding balances.

More generally, blockchains provide an interface to store and interact with data in a verifiable and globally irreversible way. In this context, data is stored in a series of snapshots, each of which may be accessed at a later point in time, but, once created, snapshots are considered irreversible.

Arbitrary data may be stored, as long as its data type is serializable in Substrate, i.e. it implements the Encode and Decode traits.

The previous single-value storage recipe showed how a single value can be stored in runtime storage. In this section, we cover

Storage Maps

pallets/simple-map

In the appetizer on storage values we learned to store a single value in blockchain storage to be persisted between blocks. In this recipe, we will see how to store a mapping from keys to values, similar to Rust's own HashMap.

Declaring a StorageMap

We declare a single storage map with the following syntax.

decl_storage! {
	trait Store for Module<T: Trait> as SimpleMap {
		SimpleMap get(fn simple_map): map hasher(blake2_128_concat) T::AccountId => u32;
	}
}

Much of this should look familiar to you from storage values. Reading the line from left to right we have:

  • SimpleMap - the name of the storage map
  • get(fn simple_map) - the name of a getter function that will return values from the map.
  • : map hasher(blake2_128_concat) - beginning of the type declaration. This is a map and it will use the blake2_128_concat hasher. More on this below.
  • T::AccountId => u32 - The specific key and value types of the map. This is a map from AccountIds to u32s.

Choosing a Hasher

Although the syntax above is complex, most of it should be straightforward if you've understood the recipe on storage values. The last unfamiliar piece of writing a storage map is choosing which hasher to use. In general you should choose one of the three following hashers. The choice of hasher will affect the performance and security of your chain. If you don't want to think much about this, just choose blake2_128_concat and skip to the next section.

blake2_128_concat

This is a cryptographically secure hash function, and is always safe to use. It is reasonably efficient, and will keep your storage tree balanced. You must choose this hasher if users of your chain have the ability to affect the storage keys. In this pallet, the keys are AccountIds. At first it may seem that the user doesn't affect the AccountId, but in reality a malicious user can generate thousands of accounts and use the one that will affect the chain's storage tree in the way the attacker likes. For this reason, we have chosen to use the blake2_128_concat hasher.

twox_64_concat

This hasher is not cryptographically secure, but is more efficient than blake2. Thus it represents trading security for performance. You should not use this hasher if chain users can affect the storage keys. However, it is perfectly safe to use this hasher to gain performance in scenarios where the users do not control the keys. For example, if the keys in your map are sequentially increasing indices and users cannot cause the indices to rapidly increase, then this is a perfectly reasonable choice.

identity

The identity "hasher" is really not a hasher at all, but merely an identity function that returns the same value it receives. This hasher is only an option when the key type in your storage map is already a hash, and is not controllable by the user. If you're in doubt whether the user can influence the key just use blake2.

The Storage Map API

This pallet demonstrates some of the most common methods available in a storage map, including insert, get, take, and contains_key.

// Insert
<SimpleMap<T>>::insert(&user, entry);

// Get
let entry = <SimpleMap<T>>::get(account);

// Take
let entry = <SimpleMap<T>>::take(&user);

// Contains Key
<SimpleMap<T>>::contains_key(&user)

The rest of the API is documented in the rustdocs on the StorageMap trait. You do not need to explicitly use this trait because the decl_storage! macro will do it for you if you use a storage map.

Cache Multiple Calls

pallets/storage-cache

Calls to runtime storage have an associated cost and developers should strive to minimize the number of calls.

decl_storage! {
	trait Store for Module<T: Trait> as StorageCache {
		// copy type
		SomeCopyValue get(fn some_copy_value): u32;

		// clone type
		KingMember get(fn king_member): T::AccountId;
		GroupMembers get(fn group_members): Vec<T::AccountId>;
	}
}

Copy Types

For Copy types, it is easy to reuse previous storage calls by simply reusing the value, which is automatically cloned upon reuse. In the code below, the second call is unnecessary:

fn increase_value_no_cache(origin, some_val: u32) -> DispatchResult {
	let _ = ensure_signed(origin)?;
	let original_call = <SomeCopyValue>::get();
	let some_calculation = original_call.checked_add(some_val).ok_or("addition overflowed1")?;
	// this next storage call is unnecessary and is wasteful
	let unnecessary_call = <SomeCopyValue>::get();
	// should've just used `original_call` here because u32 is copy
	let another_calculation = some_calculation.checked_add(unnecessary_call).ok_or("addition overflowed2")?;
	<SomeCopyValue>::put(another_calculation);
	let now = <system::Module<T>>::block_number();
	Self::deposit_event(RawEvent::InefficientValueChange(another_calculation, now));
	Ok(())
}

Instead, the initial call value should be reused. In this example, the SomeCopyValue value is Copy so we should prefer the following code without the unnecessary second call to storage:

fn increase_value_w_copy(origin, some_val: u32) -> DispatchResult {
	let _ = ensure_signed(origin)?;
	let original_call = <SomeCopyValue>::get();
	let some_calculation = original_call.checked_add(some_val).ok_or("addition overflowed1")?;
	// uses the original_call because u32 is copy
	let another_calculation = some_calculation.checked_add(original_call).ok_or("addition overflowed2")?;
	<SomeCopyValue>::put(another_calculation);
	let now = <system::Module<T>>::block_number();
	Self::deposit_event(RawEvent::BetterValueChange(another_calculation, now));
	Ok(())
}

Clone Types

If the type is not Copy but is Clone, then it is still better to clone the value in the method than to make another call to runtime storage.

The runtime methods enable the calling account to swap the T::AccountId value in storage if

  1. the existing storage value is not in GroupMembers AND
  2. the calling account is in GroupMembers

The first implementation makes a second, unnecessary call to runtime storage instead of cloning the value already held in existing_king:

fn swap_king_no_cache(origin) -> DispatchResult {
	let new_king = ensure_signed(origin)?;
	let existing_king = <KingMember<T>>::get();

	// only places a new account if
	// (1) the existing account is not a member &&
	// (2) the new account is a member
	ensure!(!Self::is_member(&existing_king), "current king is a member so maintains priority");
	ensure!(Self::is_member(&new_king), "new king is not a member so doesn't get priority");

	// BAD (unnecessary) storage call
	let old_king = <KingMember<T>>::get();
	// place new king
	<KingMember<T>>::put(new_king.clone());

	Self::deposit_event(RawEvent::InefficientKingSwap(old_king, new_king));
	Ok(())
}

If existing_king is used in the event emission (instead of old_king) without a clone, then the compiler returns the following error:

error[E0382]: use of moved value: `existing_king`
  --> src/lib.rs:93:63
   |
80 |             let existing_king = <KingMember<T>>::get();
   |                 ------------- move occurs because `existing_king` has type `<T as frame_system::Trait>::AccountId`, which does not implement the `Copy` trait
...
85 |             ensure!(!Self::is_member(existing_king), "is a member so maintains priority");
   |                                      ------------- value moved here
...
93 |             Self::deposit_event(RawEvent::InefficientKingSwap(existing_king, new_king));
   |                                                               ^^^^^^^^^^^^^ value used here after move

error: aborting due to previous error

For more information about this error, try `rustc --explain E0382`.
error: Could not compile `storage-cache`.

To learn more, run the command again with --verbose.

Fixing this only requires cloning the original value before it is moved:

fn swap_king_with_cache(origin) -> DispatchResult {
	let new_king = ensure_signed(origin)?;
	let existing_king = <KingMember<T>>::get();
	// prefer to clone previous call rather than repeat call unnecessarily
	let old_king = existing_king.clone();

	// only places a new account if
	// (1) the existing account is not a member &&
	// (2) the new account is a member
	ensure!(!Self::is_member(&existing_king), "current king is a member so maintains priority");
	ensure!(Self::is_member(&new_king), "new king is not a member so doesn't get priority");

	// <no (unnecessary) storage call here>
	// place new king
	<KingMember<T>>::put(new_king.clone());

	Self::deposit_event(RawEvent::BetterKingSwap(old_king, new_king));
	Ok(())
}

Not all types implement Copy or Clone, so it is important to discern other patterns that minimize and alleviate the cost of calls to storage.

Using Vectors as Sets

pallets/vec-set

A Set is an unordered data structure that stores entries without duplicates. Substrate's storage API does not provide a way to declare sets explicitly, but they can be implemented using either vectors or maps.

This recipe demonstrates how to implement a storage set on top of a vector, and explores the performance of the implementation. When implementing a set in your own runtime, you should compare this technique to implementing a map-set.

In this pallet we implement a set of AccountIds. We do not use the set for anything in this pallet; we simply maintain the set. Using the set is demonstrated in the recipe on pallet coupling. We provide dispatchable calls to add and remove members, ensuring that the number of members never exceeds a hard-coded maximum.

/// A maximum number of members. When membership reaches this number, no new members may join.
pub const MAX_MEMBERS: usize = 16;

Storage Item

We will store the members of our set in a Rust Vec. A Vec is a collection of elements that is ordered and may contain duplicates. Because the Vec provides more functionality than our set needs, we are able to build a set from the Vec. We declare our single storage item as so

decl_storage! {
	trait Store for Module<T: Trait> as VecSet {
		// The set of all members. Stored as a single vec
		Members get(fn members): Vec<T::AccountId>;
	}
}

In order to use the Vec successfully as a set, we will need to manually ensure that no duplicate entries are added. To ensure reasonable performance, we will enforce that the Vec always remains sorted. This allows for quickly determining whether an item is present using a binary search.

Adding Members

Any user may join the membership set by calling the add_member dispatchable, provided they are not already a member and the membership limit has not been reached. We check for these two conditions first, and then insert the new member only after we are sure it is safe to do so. This is an example of the mnemonic idiom, "verify first, write last".

pub fn add_member(origin) -> DispatchResult {
	let new_member = ensure_signed(origin)?;

	let mut members = Members::<T>::get();
	ensure!(members.len() < MAX_MEMBERS, Error::<T>::MembershipLimitReached);

	// We don't want to add duplicate members, so we check whether the potential new
	// member is already present in the list. Because the list is always ordered, we can
	// leverage the binary search which makes this check O(log n).
	match members.binary_search(&new_member) {
		// If the search succeeds, the caller is already a member, so just return
		Ok(_) => Err(Error::<T>::AlreadyMember.into()),
		// If the search fails, the caller is not a member and we learned the index where
		// they should be inserted
		Err(index) => {
			members.insert(index, new_member.clone());
			Members::<T>::put(members);
			Self::deposit_event(RawEvent::MemberAdded(new_member));
			Ok(())
		}
	}
}

If it turns out that the caller is not already a member, the binary search will fail. In this case it still returns the index into the Vec at which the member would have been stored had they been present. We then use this information to insert the member at the appropriate location, thus maintaining a sorted Vec.

Removing a Member

Removing a member is straightforward. We begin by looking for the caller in the list. If not present, there is no work to be done. If the caller is present, the search algorithm returns her index, and she can be removed.

fn remove_member(origin) -> DispatchResult {
	let old_member = ensure_signed(origin)?;

	let mut members = Members::<T>::get();

	// We have to find out where, in the sorted vec the member is, if anywhere.
	match members.binary_search(&old_member) {
		// If the search succeeds, the caller is a member, so remove her
		Ok(index) => {
			members.remove(index);
			Members::<T>::put(members);
			Self::deposit_event(RawEvent::MemberRemoved(old_member));
			Ok(())
		},
		// If the search fails, the caller is not a member, so just return
		Err(_) => Err(Error::<T>::NotMember.into()),
	}
}

Performance

Now that we have built our set, let's analyze its performance in some common operations.

Membership Check

In order to check for the presence of an item in a vec-set, we make a single storage read, decode the entire vector, and perform a binary search.

DB Reads: O(1) Decoding: O(n) Search: O(log n)
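A minimal sketch of such a check, written as a pallet helper (this particular function is hypothetical and not part of the vec-set pallet):

impl<T: Trait> Module<T> {
	// One storage read and decode of the whole Vec, then a binary search of the sorted list
	fn is_member(who: &T::AccountId) -> bool {
		Members::<T>::get().binary_search(who).is_ok()
	}
}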

Updating

Updates to the set, such as adding and removing members as we demonstrated, require first performing a membership check. They also require re-encoding the entire Vec and storing it back in the database. Finally, there is still the normal amortized constant-time cost associated with mutating a Vec.

DB Writes: O(1) Encoding: O(n)

Iteration

Iterating over all items in a vec-set is achieved by using the Vec's own iter method. The entire set can be read from storage in one go, and each item must be decoded. Finally, the actual processing you do on the items will take some time.

DB Reads: O(1) Decoding: O(n) Processing: O(n)
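A minimal sketch of such an iteration (the per-item processing is left as a placeholder):

// One storage read pulls the whole, already sorted, Vec out of the database
let members = Members::<T>::get();
for member in members.iter() {
	// process each member here
}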

Because accessing the database is a relatively slow operation, reading the entire list in a single read is a big win. If you need to iterate over the data frequently, you may want a vec-set.

A Note on Weights

It is always important that the weight associated with your dispatchables represent the actual time it takes to execute them. In this pallet, we have provided an upper bound on the size of the set, which places an upper bound on the computation - this means we can use constant weight annotations. Your set operations should either have a maximum size or a custom weight function that captures the computation appropriately.

Using Maps as Sets

pallets/map-set

A Set is an unordered data structure that stores entries without duplicates. Substrate's storage API does not provide a way to declare sets explicitly, but they can be implemented using either vectors or maps.

This recipe shows how to implement a storage set on top of a map, and explores the performance of the implementation. When implementing a set in your own runtime, you should compare this technique to implementing a vec-set.

In this pallet we implement a set of AccountIds. We do not use the set for anything in this pallet; we simply maintain its membership. Using the set is demonstrated in the recipe on pallet coupling. We provide dispatchable calls to add and remove members, ensuring that the number of members never exceeds a hard-coded maximum.

/// A maximum number of members. When membership reaches this number, no new members may join.
pub const MAX_MEMBERS: u32 = 16;

Storage Item

We will store the members of our set as the keys in one of Substrate's StorageMaps. There is also a recipe specifically about using storage maps. The storage map itself does not track its size internally, so we introduce a second storage value for this purpose.

decl_storage! {
	trait Store for Module<T: Trait> as VecMap {
		// The set of all members.
		Members get(fn members): map hasher(blake2_128_concat) T::AccountId => bool;
		// The total number of members stored in the map.
		// Because the map does not store its size, we must store it separately
		MemberCount: u32;
	}
}

As the code comment says, we will not associate any meaning with the value stored in the map; we only care about the keys. As a convention, the value will always be true.

Adding Members

Any user may join the membership set by calling the add_member dispatchable, so long as they are not already a member and the membership limit has not been reached. We check for these two conditions first, and then insert the new member only after we are sure it is safe to do so.

fn add_member(origin) -> DispatchResult {
	let new_member = ensure_signed(origin)?;

	let member_count = MemberCount::get();
	ensure!(member_count < MAX_MEMBERS, Error::<T>::MembershipLimitReached);

	// We don't want to add duplicate members, so we check whether the potential new
	// member is already present in the list. Because the membership is stored as a hash
	// map this check is constant time O(1)
	ensure!(!Members::<T>::contains_key(&new_member), Error::<T>::AlreadyMember);

	// Insert the new member and emit the event
	Members::<T>::insert(&new_member, true);
	MemberCount::put(member_count + 1); // overflow check not necessary because of maximum
	Self::deposit_event(RawEvent::MemberAdded(new_member));
	Ok(())
}

When we successfully add a new member, we also manually update the size of the set.

Removing a Member

Removing a member is straightforward. We begin by looking for the caller in the list. If not present, there is no work to be done. If the caller is present, we simply remove them and update the size of the set.

fn remove_member(origin) -> DispatchResult {
	let old_member = ensure_signed(origin)?;

	ensure!(Members::<T>::contains_key(&old_member), Error::<T>::NotMember);

	Members::<T>::remove(&old_member);
	MemberCount::mutate(|v| *v -= 1);
	Self::deposit_event(RawEvent::MemberRemoved(old_member));
	Ok(())
}

Performance

Now that we have built our set, let's analyze its performance in some common operations.

Membership Check

In order to check for the presence of an item in a map set, we make a single storage read. If we only care about the presence or absence of the item, we don't even need to decode it. This constant time membership check is the greatest strength of a map set.

DB Reads: O(1)
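A minimal sketch of such a check, written as a pallet helper (this particular function is hypothetical and not part of the map-set pallet):

impl<T: Trait> Module<T> {
	// A single storage read; the value itself never needs to be decoded
	fn is_member(who: &T::AccountId) -> bool {
		Members::<T>::contains_key(who)
	}
}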

Updating

Updates to the set, such as adding and removing members as we demonstrated, require first performing a membership check. Additions also require encoding the new item.

DB Reads: O(1) Encoding: O(1) DB Writes: O(1)

If your set operations will require a lot of membership checks or mutation of individual items, you may want a map-set.

Iteration

Iterating over all items in a map-set is achieved by using the IterableStorageMap trait, which iterates (key, value) pairs (although in this case, we don't care about the values). Because each map entry is stored as an individual trie node, iterating a map set requires a database read for each item. Finally, the actual processing of the items will take some time.

DB Reads: O(n) Decoding: O(n) Processing: O(n)
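A hedged sketch of such an iteration, assuming the IterableStorageMap trait exported by frame_support in this version of Substrate:

use frame_support::storage::IterableStorageMap;

// Each (key, value) pair requires its own database read
for (member, _flag) in Members::<T>::iter() {
	// process each member here
}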

Because accessing the database is a relatively slow operation, returning to the database for each item is quite expensive. If your set operations will require frequent iterating, you will probably prefer a vec-set.

A Note on Weights

It is always important that the weight associated with your dispatchables represent the actual time it takes to execute them. In this pallet, we have provided an upper bound on the size of the set, which places an upper bound on the computation - this means we can use constant weight annotations. Your set operations should either have a maximum size or a custom weight function that captures the computation appropriately.

Efficient Subgroup Removal by Subkey: Double Maps

pallets/double-map

For some runtimes, it may be necessary to remove a subset of values in a key-value mapping. If the subset shares an associated identifier type, this can be done cleanly with a double_map via the remove_prefix API.

pub type GroupIndex = u32; // this is Encode (which is necessary for double_map)

decl_storage! {
	trait Store for Module<T: Trait> as Dmap {
		/// Member score (double map)
		MemberScore get(fn member_score):
			double_map hasher(blake2_128_concat) GroupIndex, hasher(blake2_128_concat) T::AccountId => u32;
		/// Get group ID for member
		GroupMembership get(fn group_membership): map hasher(blake2_128_concat) T::AccountId => GroupIndex;
		/// For fast membership checks, see check-membership recipe for more details
		AllMembers get(fn all_members): Vec<T::AccountId>;
	}
}

For the purposes of this example, we store the score of each member in a double map that associates the u32 score with two keys: (1) a GroupIndex identifier, and (2) the member's AccountId. This allows for efficient removal of all values associated with a specific GroupIndex identifier.

fn remove_group_score(origin, group: GroupIndex) -> DispatchResult {
	let member = ensure_signed(origin)?;

	let group_id = <GroupMembership<T>>::get(member);
	// check that the member is in the group
	ensure!(group_id == group, "member isn't in the group, can't remove it");

	// remove all group members from MemberScore at once
	<MemberScore<T>>::remove_prefix(&group_id);

	Self::deposit_event(RawEvent::RemoveGroup(group_id));
	Ok(())
}

Using and Storing Structs

pallets/struct-storage

In Rust, a struct, or structure, is a custom data type that lets you name and package together multiple related values that make up a meaningful group. If you’re familiar with an object-oriented language, a struct is like an object’s data attributes (read more in The Rust Book).

Defining a Struct

To define a simple custom struct for the runtime, the following syntax may be used:

#[derive(Encode, Decode, Default, Clone, PartialEq)]
pub struct MyStruct {
    some_number: u32,
    optional_number: Option<u32>,
}

In the code snippet above, the derive macro is declared to ensure MyStruct conforms to shared behavior according to the specified traits: Encode, Decode, Default, Clone, PartialEq. If you wish to store this struct in blockchain storage, you will need to derive (or manually implement) each of these traits.

To use the Encode and Decode traits, it is necessary to import them.

use frame_support::codec::{Encode, Decode};

Structs with Generic Fields

The simple struct shown earlier only uses Rust primitive types for its fields. In the common case where you want to store types that come from your pallet's configuration trait (or the configuration trait of another pallet in your runtime), you must use generic type parameters in your struct's definition.

#[derive(Encode, Decode, Clone, Default, RuntimeDebug)]
pub struct InnerThing<Hash, Balance> {
	number: u32,
	hash: Hash,
	balance: Balance,
}

Here you can see that we want to store items of type Hash and Balance in the struct. Because these types come from the system and balances pallets' configuration traits, we must specify them as generics when declaring the struct.

It is often convenient to make a type alias that takes T, your pallet's configuration trait, as a single type parameter. Doing so simply saves you typing in the future.

type InnerThingOf<T> = InnerThing<<T as system::Trait>::Hash, <T as balances::Trait>::Balance>;

Structs in Storage

Using one of our structs as a storage item is not significantly different than using a primitive type. When using a generic struct, we must supply all of the generic type parameters. This snippet shows how to supply those parameters when you have a type alias (like we do for InnerThing) as well as when you don't. Whether to include the type alias is a matter of style and taste, but it is generally preferred when the entire type exceeds the preferred line length.

decl_storage! {
	trait Store for Module<T: Trait> as NestedStructs {
		InnerThingsByNumbers get(fn inner_things_by_numbers):
			map hasher(blake2_128_concat) u32 => InnerThingOf<T>;
		SuperThingsBySuperNumbers get(fn super_things_by_super_numbers):
			map hasher(blake2_256) u32 => SuperThing<T::Hash, T::Balance>;
	}
}

Interacting with the storage maps is now exactly as it was when we didn't use any custom structs.

fn insert_inner_thing(origin, number: u32, hash: T::Hash, balance: T::Balance) -> DispatchResult {
	let _ = ensure_signed(origin)?;
	let thing = InnerThing {
					number,
					hash,
					balance,
				};
	<InnerThingsByNumbers<T>>::insert(number, thing);
	Self::deposit_event(RawEvent::NewInnerThing(number, hash, balance));
	Ok(())
}

Nested Structs

Structs can also contain other structs as their fields. We have demonstrated this with the type SuperThing. As you see, any generic types needed by the inner struct must also be supplied to the outer.

#[derive(Encode, Decode, Default, RuntimeDebug)]
pub struct SuperThing<Hash, Balance> {
	super_number: u32,
	inner_thing: InnerThing<Hash, Balance>,
}

Ringbuffer Queue

pallets/ringbuffer-queue

Building a transient adapter on top of storage.

This pallet provides a trait and implementation for a ringbuffer that abstracts over storage items and presents them as a FIFO queue.

When building more sophisticated pallets you might notice a need for more complex data structures stored in storage. This recipe shows how to build a transient storage adapter by walking through the implementation of a ringbuffer FIFO queue. The adapter in this recipe manages a queue that is persisted as a StorageMap and a (start, end) range in storage.

The ringbuffer-queue/src/lib.rs file contains the usage of the transient storage adapter while ringbuffer-queue/src/ringbuffer.rs contains the implementation.

Defining the RingBuffer Trait

First we define the queue interface we want to use:

pub trait RingBufferTrait<Item>
where
	Item: Codec + EncodeLike,
{
	/// Store all changes made in the underlying storage.
	fn commit(&self);
	/// Push an item onto the end of the queue.
	fn push(&mut self, i: Item);
	/// Pop an item from the start of the queue.
	fn pop(&mut self) -> Option<Item>;
	/// Return whether the queue is empty.
	fn is_empty(&self) -> bool;
}

It defines the usual push, pop and is_empty functions we expect from a queue as well as a commit function that will be used to sync the changes made to the underlying storage.

Specifying the RingBuffer Transient

Now we want to add an implementation of the trait. We will be storing the start and end of the ringbuffer separately from the actual items and will thus need to store these in our struct:

pub struct RingBufferTransient<Index>
where
	Index: Codec + EncodeLike + Eq + Copy,
{
	start: Index,
	end: Index,
}

Defining the Storage Interface

In order to access the underlying storage we will also need to include the bounds (we will call the type B) and the item storage (whose type will be M). In order to specify the constraints on the storage map (M) we will also need to specify the Item type. This results in the following struct definition:

pub struct RingBufferTransient<Item, B, M, Index>
where
	Item: Codec + EncodeLike,
	B: StorageValue<(Index, Index), Query = (Index, Index)>,
	M: StorageMap<Index, Item, Query = Item>,
	Index: Codec + EncodeLike + Eq + Copy,
{
	start: Index,
	end: Index,
	_phantom: PhantomData<(Item, B, M)>,
}

The bounds B will be a StorageValue storing a tuple of indices (Index, Index). The item storage will be a StorageMap mapping from our Index type to the Item type. We specify the associated Query type for both of them to help with type inference (because the value returned can be different from the stored representation).

The Codec and EncodeLike type constraints make sure that both items and indices can be stored in storage.

We need the PhantomData in order to "hold on to" the types during the lifetime of the transient object.

The Complete Type

There are two more alterations we will make to our struct to make it work well:

type DefaultIdx = u16;
pub struct RingBufferTransient<Item, B, M, Index = DefaultIdx>
where
	Item: Codec + EncodeLike,
	B: StorageValue<(Index, Index), Query = (Index, Index)>,
	M: StorageMap<Index, Item, Query = Item>,
	Index: Codec + EncodeLike + Eq + WrappingOps + From<u8> + Copy,
{
	start: Index,
	end: Index,
	_phantom: PhantomData<(Item, B, M)>,
}

We specify a default type for Index and define it as u16 to allow for 65536 entries in the ringbuffer by default. We also add the WrappingOps and From<u8> type bounds to enable the kind of operations we need in our implementation. More details in the implementation section, especially in the WrappingOps subsection.

Implementation of the RingBuffer

Now that we have the type definition for RingBufferTransient we need to write the implementation.

Instantiating the Transient

First we need to specify how to create a new instance by providing a new function:

impl<Item, B, M, Index> RingBufferTransient<Item, B, M, Index>
where // ... same where clause as the type, elided here
{
	pub fn new() -> RingBufferTransient<Item, B, M, Index> {
		let (start, end) = B::get();
		RingBufferTransient {
			start, end, _phantom: PhantomData,
		}
	}
}

Here we access the bounds stored in storage to initialize the transient.

Aside: Of course we could also provide a with_bounds function that takes the bounds as a parameter. Feel free to add that function as an exercise.

Second Aside: This B::get() is one of the reasons for specifying the Query associated type on the StorageValue type constraint.

Implementing the RingBufferTrait

We will now implement the RingBufferTrait:

impl<Item, B, M, Index> RingBufferTrait<Item> for RingBufferTransient<Item, B, M, Index>
where // same as the struct definition
	Item: Codec + EncodeLike,
	B: StorageValue<(Index, Index), Query = (Index, Index)>,
	M: StorageMap<Index, Item, Query = Item>,
	Index: Codec + EncodeLike + Eq + WrappingOps + From<u8> + Copy,
{
	fn commit(&self) {
		B::put((self.start, self.end));
	}

commit just consists of putting the potentially changed bounds into storage. You will notice that we don't update the bounds' storage when changing them in the other functions.

	fn is_empty(&self) -> bool {
		self.start == self.end
	}

The is_empty function just checks whether the start and end bounds have the same value to determine whether the queue is empty, thus avoiding expensive storage accesses. This means we need to uphold the corresponding invariant in the other (notably the push) functions.

	fn push(&mut self, item: Item) {
		M::insert(self.end, item);
		// this will intentionally overflow and wrap around when the end index
		// reaches `Index::max_value` because we want a ringbuffer.
		let next_index = self.end.wrapping_add(1.into());
		if next_index == self.start {
			// queue presents as empty but is not
			// --> overwrite the oldest item in the FIFO ringbuffer
			self.start = self.start.wrapping_add(1.into());
		}
		self.end = next_index;
	}

In the push function, we insert the pushed item into the map and calculate the new bounds by using the wrapping_add function. This way our ringbuffer will wrap around when reaching max_value of the Index type. This is why we need the WrappingOps type trait for Index.

The if is necessary because we need to keep the invariant that start == end means that the queue is empty, otherwise we would need to keep track of this state separately. We thus "toss away" the oldest item in the queue if a new item is pushed into a full queue by incrementing the start index.

Note: The WrappingOps Trait

The ringbuffer should be agnostic to the concrete Index type used. In order to decrement and increment the start and end index, though, any concrete type needs to implement wrapping_add and wrapping_sub. Because std does not provide such a trait, we need another way to require this behavior. We just implement our own trait WrappingOps for the types we want to support (u8, u16, u32 and u64).
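
A definition along these lines does the job; the macro simply forwards to the inherent wrapping_add and wrapping_sub methods of each supported integer type.

pub trait WrappingOps {
	fn wrapping_add(self, rhs: Self) -> Self;
	fn wrapping_sub(self, rhs: Self) -> Self;
}

macro_rules! impl_wrapping_ops {
	($type:ty) => {
		impl WrappingOps for $type {
			fn wrapping_add(self, rhs: Self) -> Self {
				// resolves to the inherent integer method, not this trait method
				self.wrapping_add(rhs)
			}
			fn wrapping_sub(self, rhs: Self) -> Self {
				self.wrapping_sub(rhs)
			}
		}
	};
}

impl_wrapping_ops!(u8);
impl_wrapping_ops!(u16);
impl_wrapping_ops!(u32);
impl_wrapping_ops!(u64);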

The last function we implement is pop:

	fn pop(&mut self) -> Option<Item> {
		if self.is_empty() {
			return None;
		}
		let item = M::take(self.start);
		self.start = self.start.wrapping_add(1.into());

		item.into()
	}

We can return None on is_empty because we are upholding the invariant. If the queue is not empty we take the value at self.start from storage, i.e. the first value is removed from storage and passed to us. We then increment self.start to point to the new first item of the queue, again using the wrapping_add to get the ringbuffer behavior.

Implementing Drop

In order to make the usage more ergonomic and to avoid synchronization errors (where the storage map diverges from the bounds) we also implement the Drop trait:

impl<Item, B, M, Index> Drop for RingBufferTransient<Item, B, M, Index>
where // ... same where clause elided
{
	fn drop(&mut self) {
		<Self as RingBufferTrait<Item>>::commit(self);
	}
}

On drop, we commit the bounds to storage. With this implementation of Drop, commit is called when our transient goes out of scope, making sure that the storage state is consistent for the next call to the using pallet.

Typical Usage

The lib.rs file of the pallet shows typical usage of the transient.

impl<T: Trait> Module<T> {
	fn queue_transient() -> Box<dyn RingBufferTrait<ValueStruct>> {
		Box::new(RingBufferTransient::<
			ValueStruct,
			<Self as Store>::BufferRange,
			<Self as Store>::BufferMap,
			BufferIndex,
		>::new())
	}
}

First we define a constructor function (queue_transient) so we don't have to specify the types every time we want to access the transient. This function constructs a ringbuffer transient and returns it as a boxed trait object. See the Rust book's section on trait objects for an explanation of why we need a boxed trait object (defined with the syntax dyn TraitName) when using dynamic dispatch.

The add_multiple function shows the actual typical usage of our transient:

pub fn add_multiple(origin, integers: Vec<i32>, boolean: bool) -> DispatchResult {
	let _user = ensure_signed(origin)?;
	let mut queue = Self::queue_transient();
	for integer in integers {
		queue.push(ValueStruct{ integer, boolean });
	}
	Ok(())
} // commit happens on drop

Here we use the queue_transient function defined above to get a queue object. We then push into it repeatedly with commit happening on drop of the queue object at the end of the function. pop works analogously and can of course be intermixed with pushes.
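
A pop-style dispatchable could look like the following sketch; the Popped event name is an assumption made for illustration rather than the pallet's exact code.

pub fn pop_one(origin) -> DispatchResult {
	let _user = ensure_signed(origin)?;
	let mut queue = Self::queue_transient();
	if let Some(ValueStruct { integer, boolean }) = queue.pop() {
		Self::deposit_event(RawEvent::Popped(integer, boolean));
	}
	Ok(())
} // commit happens on drop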

Basic Token

pallets/basic-token

This recipe demonstrates a simple but functional token in a pallet.

Mapping Accounts to Balances

Mappings are a very powerful primitive. A stateful cryptocurrency might store a mapping between accounts and balances. Likewise, mappings prove useful when representing owned data. By tracking ownership with maps, it is easy to manage permissions for modifying values specific to individual users or groups.

Storage Items

The primary storage item is the mapping between AccountIds and Balances described above. Every account that holds tokens appears as a key in that map and its value is the number of tokens it holds.

The next two storage items set the total supply of the token and keep track of whether the token has been initialized yet.

decl_storage! {
	trait Store for Module<T: Trait> as Token {
		pub Balances get(get_balance): map hasher(blake2_128_concat) T::AccountId => u64;

		pub TotalSupply get(total_supply): u64 = 21000000;

		Init get(is_init): bool;
	}
}

Because users can influence the keys in our storage map, we've chosen the blake2_128_concat hasher as described in the recipe on storage maps.

Events and Errors

The pallet defines events and errors for common lifecycle events such as successful and failed transfers, and successful and failed initialization.

decl_event!(
	pub enum Event<T>
	where
		AccountId = <T as system::Trait>::AccountId,
	{
		/// Token was initialized by user
		Initialized(AccountId),
		/// Tokens successfully transferred between users
		Transfer(AccountId, AccountId, u64), // (from, to, value)
	}
);

decl_error! {
	pub enum Error for Module<T: Trait> {
		/// Attempted to initialize the token after it had already been initialized.
		AlreadyInitialized,
		/// Attempted to transfer more funds than were available
		InsufficientFunds,
	}
}

Initializing the Token

In order for the token to be useful, some accounts need to own it. There are many possible ways to initialize a token including genesis config, claims process, lockdrop, and many more. This pallet will use a simple process where the first user to call the init function receives all of the funds. The total supply is hard-coded in the pallet in a fairly naive way: It is specified as the default value in the decl_storage! block.

fn init(origin) -> DispatchResult {
	let sender = ensure_signed(origin)?;
	ensure!(!Self::is_init(), <Error<T>>::AlreadyInitialized);

	<Balances<T>>::insert(sender, Self::total_supply());

	Init::put(true);
	Ok(())
}

As usual, we first check for preconditions. In this case that means making sure that the token is not already initialized. Then we do any mutation necessary.

Transferring Tokens

To transfer tokens, a user who owns some tokens calls the transfer method specifying the recipient and the amount of tokens to transfer as parameters.

We again check for error conditions before mutating storage. In this case it is not necessary to check whether the token has been initialized. If it has not, nobody has any funds and the transfer will simply fail with InsufficientFunds.

fn transfer(_origin, to: T::AccountId, value: u64) -> DispatchResult {
	let sender = ensure_signed(_origin)?;
	let sender_balance = Self::get_balance(&sender);
	let receiver_balance = Self::get_balance(&to);

	// Calculate new balances
	let updated_from_balance = sender_balance.checked_sub(value).ok_or(<Error<T>>::InsufficientFunds)?;
	let updated_to_balance = receiver_balance.checked_add(value).expect("Entire supply fits in u64; qed");

	// Write new balances to storage
	<Balances<T>>::insert(&sender, updated_from_balance);
	<Balances<T>>::insert(&to, updated_to_balance);

	Self::deposit_event(RawEvent::Transfer(sender, to, value));
	Ok(())
}

Don't Panic!

When adding the incoming balance, notice the peculiar .expect method. In a Substrate runtime, you must never panic. To encourage careful thinking about your code, you use the .expect method and provide a proof of why the potential panic will never happen.

Configurable Pallet Constants

pallets/constant-config

To declare constant values within a runtime, it is necessary to import the Get trait from frame_support.

use frame_support::traits::Get;

Configurable constants are declared as associated types in the pallet's configuration trait using the Get<T> syntax for any type T.

pub trait Trait: system::Trait {
	type Event: From<Event> + Into<<Self as system::Trait>::Event>;

	/// Maximum amount added per invocation
	type MaxAddend: Get<u32>;

	/// Frequency with which the stored value is deleted
	type ClearFrequency: Get<Self::BlockNumber>;
}

In order to make these constants and their values appear in the runtime metadata, it is necessary to declare them with the const syntax in the decl_module! block. Usually constants are declared at the top of this block, right after fn deposit_event.

decl_module! {
	pub struct Module<T: Trait> for enum Call where origin: T::Origin {
		fn deposit_event() = default;

		const MaxAddend: u32 = T::MaxAddend::get();

		const ClearFrequency: T::BlockNumber = T::ClearFrequency::get();

		// --snip--
	}
}

This example manipulates a single value in storage declared as SingleValue.

decl_storage! {
	trait Store for Module<T: Trait> as Example {
		SingleValue get(fn single_value): u32;
	}
}

SingleValue is set to 0 every ClearFrequency number of blocks in the on_finalize function that runs at the end of each block's execution.

fn on_finalize(n: T::BlockNumber) {
	if (n % T::ClearFrequency::get()).is_zero() {
		let c_val = <SingleValue>::get();
		<SingleValue>::put(0u32);
		Self::deposit_event(Event::Cleared(c_val));
	}
}

Signed transactions may invoke the add_value runtime method to increase SingleValue as long as each call adds less than MaxAddend. There is no anti-sybil mechanism so a user could just split a larger request into multiple smaller requests to overcome the MaxAddend, but overflow is still handled appropriately.

fn add_value(origin, val_to_add: u32) -> DispatchResult {
	let _ = ensure_signed(origin)?;
	ensure!(val_to_add <= T::MaxAddend::get(), "value must be <= maximum add amount constant");

	// previous value got
	let c_val = <SingleValue>::get();

	// checks for overflow when new value added
	let result = match c_val.checked_add(val_to_add) {
		Some(r) => r,
		None => return Err(DispatchError::Other("Addition overflowed")),
	};
	<SingleValue>::put(result);
	Self::deposit_event(Event::Added(c_val, val_to_add, result));
	Ok(())
}

In more complex patterns, the constant value may be used as a static, base value that is scaled by a multiplier to incorporate stateful context for calculating some dynamic fee (i.e. floating transaction fees).

To test the range of pallet configurations introduced by configurable constants, see custom configuration of externalities.

Supplying the Constant Value

When the pallet is included in a runtime, the runtime developer supplies the value of the constant using the parameter_types! macro. This pallet is included in the super-runtime where we see the following macro invocation and trait implementation.


parameter_types! {
	pub const MaxAddend: u32 = 1738;
	pub const ClearFrequency: u32 = 10;
}

impl constant_config::Trait for Runtime {
	type Event = Event;
	type MaxAddend = MaxAddend;
	type ClearFrequency = ClearFrequency;
}

Simple Crowdfund

pallets/simple-crowdfund

This pallet demonstrates a simple on-chain crowdfunding app where participants can pool funds toward a common goal. It demonstrates a pallet that controls multiple token accounts and stores data in child storage.

Basic Usage

Any user can start a crowdfund by specifying a goal amount for the crowdfund, an end time, and a beneficiary who will receive the pooled funds if the goal is reached by the end time. If the fund is not successful, it enters into a retirement period when contributors can reclaim their pledged funds. Finally, an unsuccessful fund can be dissolved, sending any remaining tokens to the user who dissolves it.

Configuration Trait

We begin by declaring our configuration trait. In addition to the ubiquitous Event type, our crowdfund pallet will depend on a notion of Currency, and three configuration constants.

/// The pallet's configuration trait
pub trait Trait: system::Trait {
	/// The ubiquitous Event type
	type Event: From<Event<Self>> + Into<<Self as system::Trait>::Event>;

	/// The currency in which the crowdfunds will be denominated
	type Currency: ReservableCurrency<Self::AccountId>;

	/// The amount to be held on deposit by the owner of a crowdfund
	type SubmissionDeposit: Get<BalanceOf<Self>>;

	/// The minimum amount that may be contributed into a crowdfund. Should almost certainly be at
	/// least ExistentialDeposit.
	type MinContribution: Get<BalanceOf<Self>>;

	/// The period of time (in blocks) after an unsuccessful crowdfund ending during which
	/// contributors are able to withdraw their funds. After this period, their funds are lost.
	type RetirementPeriod: Get<Self::BlockNumber>;
}

Custom Types

Our pallet introduces a custom struct that is used to store the metadata about each fund.

#[derive(Encode, Decode, Default, PartialEq, Eq)]
#[cfg_attr(feature = "std", derive(Debug))]
pub struct FundInfo<AccountId, Balance, BlockNumber> {
	/// The account that will receive the funds if the campaign is successful
	beneficiary: AccountId,
	/// The amount of deposit placed
	deposit: Balance,
	/// The total amount raised
	raised: Balance,
	/// Block number after which funding must have succeeded
	end: BlockNumber,
	/// Upper bound on `raised`
	goal: Balance,
}

In addition to this FundInfo struct, we also introduce an index type to track the number of funds that have ever been created and three convenience aliases.

pub type FundIndex = u32;

type AccountIdOf<T> = <T as system::Trait>::AccountId;
type BalanceOf<T> = <<T as Trait>::Currency as Currency<AccountIdOf<T>>>::Balance;
type FundInfoOf<T> = FundInfo<AccountIdOf<T>, BalanceOf<T>, <T as system::Trait>::BlockNumber>;

Storage

The pallet has two storage items declared the usual way using decl_storage!. The first is the index that tracks the number of funds, and the second is a mapping from index to FundInfo.

decl_storage! {
	trait Store for Module<T: Trait> as ChildTrie {
		/// Info on all of the funds.
		Funds get(fn funds):
			map hasher(blake2_128_concat) FundIndex => Option<FundInfoOf<T>>;

		/// The total number of funds that have so far been allocated.
		FundCount get(fn fund_count): FundIndex;

		// Additional information is stored in a child trie. See the helper
		// functions in the impl<T: Trait> Module<T> block below
	}
}

This pallet also stores the data about which users have contributed and how many funds they contributed in a child trie. This child trie is not explicitly declared anywhere.

The use of the child trie provides two advantages over using standard storage. First, it allows for removing the entirety of the trie in a single storage write when the fund is dispensed or dissolved. Second, it allows any contributor to prove that they contributed using a Merkle proof.

Using the Child Trie API

The child API is abstracted into a few helper functions in the impl<T: Trait> Module<T> block.

/// Record a contribution in the associated child trie.
pub fn contribution_put(index: FundIndex, who: &T::AccountId, balance: &BalanceOf<T>) {
	let id = Self::id_from_index(index);
	who.using_encoded(|b| child::put(&id, b, &balance));
}

/// Lookup a contribution in the associated child trie.
pub fn contribution_get(index: FundIndex, who: &T::AccountId) -> BalanceOf<T> {
	let id = Self::id_from_index(index);
	who.using_encoded(|b| child::get_or_default::<BalanceOf<T>>(&id, b))
}

/// Remove a contribution from an associated child trie.
pub fn contribution_kill(index: FundIndex, who: &T::AccountId) {
	let id = Self::id_from_index(index);
	who.using_encoded(|b| child::kill(&id, b));
}

/// Remove the entire record of contributions in the associated child trie in a single
/// storage write.
pub fn crowdfund_kill(index: FundIndex) {
	let id = Self::id_from_index(index);
	child::kill_storage(&id);
}

Because this pallet uses one trie for each active crowdfund, we need to generate a unique ChildInfo for each of them. To ensure that the ids are really unique, we include the FundIndex in the generation.

pub fn id_from_index(index: FundIndex) -> child::ChildInfo {
	let mut buf = Vec::new();
	buf.extend_from_slice(b"crowdfnd");
	buf.extend_from_slice(&index.to_le_bytes()[..]);

	child::ChildInfo::new_default(T::Hashing::hash(&buf[..]).as_ref())
}

Pallet Dispatchables

The dispatchable functions in this pallet follow a standard flow of verifying preconditions, raising appropriate errors, mutating storage, and finally emitting events. We will not present them all in this writeup, but as always, you're encouraged to experiment with the recipe.

We will look closely only at the dispense dispatchable, which pays the funds to the beneficiary after a successful crowdfund. This dispatchable, as well as dissolve, uses an incentivization scheme to encourage users of the chain to eliminate extra data as soon as possible.

Data from finished funds takes up space on chain, so it is best to settle the fund and clean up the data as soon as possible. To incentivize this behavior, the pallet awards the initial deposit to whoever calls the dispense function. Users, in hopes of receiving this reward, will race to call these cleanup methods before each other.

/// Dispense a payment to the beneficiary of a successful crowdfund.
/// The beneficiary receives the contributed funds and the caller receives
/// the deposit as a reward to incentivize clearing settled crowdfunds out of storage.
#[weight = 10_000]
fn dispense(origin, index: FundIndex) {
	let caller = ensure_signed(origin)?;

	let fund = Self::funds(index).ok_or(Error::<T>::InvalidIndex)?;

	// Check that enough time has passed to remove from storage
	let now = <system::Module<T>>::block_number();

	ensure!(now >= fund.end, Error::<T>::FundStillActive);

	// Check that the fund was actually successful
	ensure!(fund.raised >= fund.goal, Error::<T>::UnsuccessfulFund);

	let account = Self::fund_account_id(index);

	// Beneficiary collects the contributed funds
	let _ = T::Currency::resolve_creating(&fund.beneficiary, T::Currency::withdraw(
		&account,
		fund.raised,
		WithdrawReasons::from(WithdrawReason::Transfer),
		ExistenceRequirement::AllowDeath,
	)?);

	// Caller collects the deposit
	let _ = T::Currency::resolve_creating(&caller, T::Currency::withdraw(
		&account,
		fund.deposit,
		WithdrawReasons::from(WithdrawReason::Transfer),
		ExistenceRequirement::AllowDeath,
	)?);

This pallet also uses Currency Imbalances as discussed in the Charity recipe, to make transfers without incurring transfer fees to the crowdfund pallet itself.

Instantiable Pallets

pallets/last-caller pallets/default-instance

Instantiable pallets enable multiple instances of the same pallet logic within a single runtime. Each instance of the pallet has its own independent storage, and extrinsics must specify which instance of the pallet they are intended for. These patterns are illustrated in the kitchen in the last-caller and default-instance pallets.

Some use cases:

  • A token chain hosts two independent cryptocurrencies.
  • A marketplace tracks users' reputations as buyers separately from their reputations as sellers.
  • A governance system has two (or more) houses which act similarly internally.

Substrate's own Balances and Collective pallets are good examples of real-world code using this technique. The default Substrate node has two instances of the Collectives pallet that make up its Council and Technical Committee. Each collective has its own storage, events, and configuration.

Council: collective::<Instance1>::{Module, Call, Storage, Origin<T>, Event<T>, Config<T>},
TechnicalCommittee: collective::<Instance2>::{Module, Call, Storage, Origin<T>, Event<T>, Config<T>}

Writing an Instantiable Pallet

Writing an instantiable pallet is almost entirely the same process as writing a plain non-instantiable pallet. There are just a few places where the syntax differs.

You must call decl_storage!

Instantiable pallets must call the decl_storage! macro so that the Instance type is created.

Configuration Trait

pub trait Trait<I: Instance>: system::Trait {
	/// The overarching event type.
	type Event: From<Event<Self, I>> + Into<<Self as system::Trait>::Event>;
}

Storage Declaration

decl_storage! {
	trait Store for Module<T: Trait<I>, I: Instance> as TemplatePallet {
		...
	}
}

Declaring the Module Struct

decl_module! {
	/// The module declaration.
	pub struct Module<T: Trait<I>, I: Instance> for enum Call where origin: T::Origin {
		...
	}
}

Accessing Storage

<Something<T, I>>::put(something);

If the storage item does not use any types specified in the configuration trait, the T is omitted, as always.

<Something<I>>::put(something);

Event initialization

fn deposit_event() = default;

Event Declaration

decl_event!(
	pub enum Event<T, I> where AccountId = <T as system::Trait>::AccountId {
		...
	}
);

Installing a Pallet Instance in a Runtime

The syntax for including an instance of an instantiable pallet in a runtime is slightly different from that for a regular pallet. The only exception is for pallets that use the Default Instance feature described below.

Implementing Configuration Traits

Each instance needs to be configured separately. Configuration consists of implementing the specific instance's trait. The following snippet shows a configuration for Instance1.

impl template::Trait<template::Instance1> for Runtime {
	type Event = Event;
}

Using the construct_runtime! Macro

The final step of installing the pallet instance in your runtime is updating the construct_runtime! macro. You may give each instance a meaningful name. Here I've called Instance1 FirstTemplate.

FirstTemplate: template::<Instance1>::{Module, Call, Storage, Event<T>, Config},

Default Instance

One drawback of instantiable pallets, as we've presented them so far, is that they require the runtime designer to use the more elaborate syntax even if they only desire a single instance of the pallet. To alleviate this inconvenience, Substrate provides a feature known as DefaultInstance. This allows runtime developers to deploy an instantiable pallet exactly as they would if it were not instantiable provided they only use a single instance.

To make your instantiable pallet support DefaultInstance, you must specify it in four places.

pub trait Trait<I=DefaultInstance>: system::Trait {
decl_storage! {
	trait Store for Module<T: Trait<I>, I: Instance=DefaultInstance> as TemplateModule {
		...
	}
}
decl_module! {
	pub struct Module<T: Trait<I>, I: Instance = DefaultInstance> for enum Call where origin: T::Origin {
		...
	}
}
decl_event!(
	pub enum Event<T, I=DefaultInstance> where ... {
		...
	}
);

Having made these changes, a developer who uses your pallet doesn't need to know or care that your pallet is instantiable. They can deploy it just as they would any other pallet.

Genesis Configuration

Some pallets require a genesis configuration to be specified. Let's look to the default Substrate node's use of the Collective pallet as an example.

In its chain_spec.rs file we see

GenesisConfig {
	...
	collective_Instance1: Some(CouncilConfig {
		members: vec![],
		phantom: Default::default(),
	}),
	collective_Instance2: Some(TechnicalCommitteeConfig {
		members: vec![],
		phantom: Default::default(),
	}),
	...
}

Computational Resources and Weights

pallets/weights

Any computational resources used by a transaction must be accounted for so that appropriate fees can be applied, and it is a pallet author's job to ensure that this accounting happens. Substrate provides a mechanism known as transaction weighting to quantify the resources consumed while executing a transaction.

Indeed, mispriced EVM operations have shown how operations that underestimate cost can provide economic DOS attack vectors: Onwards; Underpriced EVM Operations, Under-Priced DOS Attacks on Ethereum

Assigning Transaction Weights

Pallet authors can annotate their dispatchable functions with a weight using syntax like this,

#[weight = <Some Weighting Instance>]
fn some_call(...) -> Result {
	// --snip--
}

For simple transactions a fixed weight will do. Substrate allows simply specifying a constant integer in situations like this.

decl_module! {
	pub struct Module<T: Trait> for enum Call where origin: T::Origin {

		#[weight = 10_000]
		fn store_value(_origin, entry: u32) -> DispatchResult {
			StoredValue::put(entry);
			Ok(())
		}

For more complex transactions, custom weight calculations can be performed that consider the parameters passed to the call. This snippet shows a weighting struct that weighs transactions where the first parameter is a bool. If the first parameter is true, then the weight is linear in the second parameter. Otherwise the weight is constant. A transaction where this weighting scheme makes sense is demonstrated in the kitchen.

pub struct Conditional(u32);

impl WeighData<(&bool, &u32)> for Conditional {
	fn weigh_data(&self, (switch, val): (&bool, &u32)) -> Weight {

		if *switch {
			val.saturating_mul(self.0)
		}
		else {
			self.0
		}
	}
}

In addition to the WeighData trait, shown above, types that are used to calculate transaction weights must also implement ClassifyDispatch and PaysFee.

impl<T> ClassifyDispatch<T> for Conditional {
    fn classify_dispatch(&self, _: T) -> DispatchClass {
        // Classify all calls as Normal (which is the default)
        Default::default()
    }
}
impl PaysFee for Conditional {
    fn pays_fee(&self) -> bool {
        true
    }
}

The complete code for this example as well as several others can be found in the kitchen.

Cautions

While you can make reasonable estimates of resource consumption at design time, it is always best to actually measure the resources required of your functions through an empirical process. Failure to perform such rigorous measurement may result in an economically insecure chain.

While it isn't enforced, calculating a transaction's weight should itself be a cheap operation. If the weight calculation itself is expensive, your chain will be insecure.

What About Fees?

Weights are used only to describe the computational resources consumed by a transaction, and enable accounting of these resources. To learn how to turn these weights into actual fees charged to transactors, continue to the recipe on Fees.

Transaction Fees

runtimes/weight-fee-runtime

Substrate provides the transaction_payment pallet for calculating and collecting fees for executing transactions. Fees are broken down into two components:

  • Byte fee - A fee proportional to the transaction's length in bytes. The proportionality constant is a parameter in the transaction_payment pallet.
  • Weight fee - A fee calculated from the transaction's weight. Weights quantify the time spent executing the transaction. Learn more in the recipe on weights. The conversion doesn't need to be linear, although it often is. The same conversion function is applied across all transactions from all pallets in the runtime.
  • Fee Multiplier - A multiplier for the computed fee, that can change as the chain progresses. This topic is not (yet) covered further in the recipes.

total_fee = transaction_length * length_fee + weight_to_fee(total_weight)

Setting the Parameters

Each of the parameters described above is set in the transaction payment pallet's configuration trait. For example, the super-runtime sets these parameters as follows.

src: runtimes/super-runtime/src/lib.rs

parameter_types! {
	pub const TransactionByteFee: u128 = 1;
}

impl transaction_payment::Trait for Runtime {
	type Currency = balances::Module<Runtime>;
	type OnTransactionPayment = ();
	type TransactionByteFee = TransactionByteFee;
	type WeightToFee = IdentityFee<Balance>;
	type FeeMultiplierUpdate = ();
}

1 to 1 Conversion

In many cases converting weight to fees one-to-one, as shown above, will suffice and can be accomplished with IdentityFee. This approach is also taken in the node template. It is also possible to provide a type that makes a more complex calculation. Any type that implements WeightToFeePolynomial will suffice.

Linear Conversion

Another common way to convert weight to fees is linearly. When converting linearly, the weight is multiplied by a constant coefficient to determine the fee to charge. This is demonstrated in the weight-fee-runtime with the LinearWeightToFee struct.

We declare the struct with an associated type C, which will provide the coefficient.

pub struct LinearWeightToFee<C>(sp_std::marker::PhantomData<C>);

Then we implement WeightToFeePolynomial for it. When implementing this trait, your main job is to return a set of WeightToFeeCoefficients. These coefficients can have integer and fractional parts and be positive or negative. In our LinearWeightToFee there is a single integer coefficient supplied by the associated type.

impl<C> WeightToFeePolynomial for LinearWeightToFee<C>
where
	C: Get<Balance>,
{
	type Balance = Balance;

	fn polynomial() -> WeightToFeeCoefficients<Self::Balance> {
		let coefficient = WeightToFeeCoefficient {
			coeff_integer: C::get(),
			coeff_frac: Perbill::zero(),
			negative: false,
			degree: 1,
		};

		// Return a smallvec of coefficients. Order does not need to match degrees
		// because each coefficient has an explicit degree annotation.
		smallvec!(coefficient)
	}
}

This struct is reusable, and works with different coefficients. Using it looks like this.

parameter_types! {
	// Used with LinearWeightToFee conversion. Leaving this constant intact when using other
	// conversion techniques is harmless.
	pub const FeeWeightRatio: u128 = 1_000;

	// --snip--
}

impl transaction_payment::Trait for Runtime {

	// Convert dispatch weight to a chargeable fee.
	type WeightToFee = LinearWeightToFee<FeeWeightRatio>;

	// --snip--
}

Quadratic Conversion

More complex polynomials can also be used. When using complex polynomials, it is unlikely that your logic will be reused among multiple chains, so it is generally not worth the overhead of making the coefficients configurable. The QuadraticWeightToFee demonstrates a 2nd-degree polynomial with hard-coded non-integer signed coefficients.

pub struct QuadraticWeightToFee;

impl WeightToFeePolynomial for QuadraticWeightToFee {
	type Balance = Balance;

	fn polynomial() -> WeightToFeeCoefficients<Self::Balance> {
		let linear = WeightToFeeCoefficient {
			coeff_integer: 2,
			coeff_frac: Perbill::from_percent(40),
			negative: true,
			degree: 1,
		};
		let quadratic = WeightToFeeCoefficient {
			coeff_integer: 3,
			coeff_frac: Perbill::zero(),
			negative: false,
			degree: 2,
		};

		// Return a smallvec of coefficients. Order does not need to match degrees
		// because each coefficient has an explicit degree annotation. In fact, any
		// negative coefficients should be saved for last regardless of their degree
		// because large negative coefficients will likely cause saturation (to zero)
		// if they happen early on.
		smallvec![quadratic, linear]
	}
}

Collecting Fees

Having calculated the amount of fees due, runtime authors must decide which asset the fees should be paid in. A common choice is to use the Balances pallet, but any type that implements the Currency trait can be used. The weight-fee-runtime demonstrates how to use an asset provided by the Generic Asset pallet.

src: runtimes/weight-fee-runtime/src/lib.rs

impl transaction_payment::Trait for Runtime {

	// A generic asset whose ID is stored in the generic_asset pallet's runtime storage
	type Currency = SpendingAssetCurrency<Self>;

	// --snip--
}

Charity

pallets/charity

The Charity pallet represents a simple charitable organization that collects funds into a pot that it controls, and allocates those funds to the appropriate causes. It demonstrates two useful concepts in Substrate development:

  • A pallet-controlled shared pot of funds
  • Absorbing imbalances from the runtime

Instantiate a Pot

Our charity needs an account to hold its funds. Unlike other accounts, it will not be controlled by a user's cryptographic key pair, but directly by the pallet. To instantiate such a pool of funds, import ModuleId and AccountIdConversion from sp-runtime.

use sp_runtime::{ModuleId, traits::AccountIdConversion};

With these imports, a PALLET_ID constant can be generated as an identifier for the pool of funds. The PALLET_ID must be exactly eight characters long which is why we've included the exclamation point. (Well, that and Charity work is just so exciting!) This identifier can be converted into an AccountId with the into_account() method provided by the AccountIdConversion trait.

const PALLET_ID: ModuleId = ModuleId(*b"Charity!");

impl<T: Trait> Module<T> {
	/// The account ID that holds the Charity's funds
	pub fn account_id() -> T::AccountId {
		PALLET_ID.into_account()
	}

	/// The Charity's balance
	fn pot() -> BalanceOf<T> {
		T::Currency::free_balance(&Self::account_id())
	}
}

Receiving Funds

Our charity can receive funds in two different ways.

Donations

The first and perhaps more familiar way is through charitable donations. Donations can be made through a standard donate extrinsic which accepts the amount to be donated as a parameter.

fn donate(
		origin,
		amount: BalanceOf<T>
) -> DispatchResult {
		let donor = ensure_signed(origin)?;

		let _ = T::Currency::transfer(&donor, &Self::account_id(), amount, AllowDeath);

		Self::deposit_event(RawEvent::DonationReceived(donor, amount, Self::pot()));
		Ok(())
}

Imbalances

The second way the charity can receive funds is by absorbing imbalances created elsewhere in the runtime. An Imbalance is created whenever tokens are burned, or minted. Because our charity wants to collect funds, we are specifically interested in NegativeImbalances. Negative imbalances are created, for example, when a validator is slashed for violating consensus rules, transaction fees are collected, or another pallet burns funds as part of an incentive-alignment mechanism. To allow our pallet to absorb these imbalances, we implement the OnUnbalanced trait.

use frame_support::traits::{OnUnbalanced, Imbalance};
type NegativeImbalanceOf<T> = <<T as Trait>::Currency as Currency<<T as system::Trait>::AccountId>>::NegativeImbalance;

impl<T: Trait> OnUnbalanced<NegativeImbalanceOf<T>> for Module<T> {
	fn on_nonzero_unbalanced(amount: NegativeImbalanceOf<T>) {
		let numeric_amount = amount.peek();

		// Must resolve into existing but better to be safe.
		let _ = T::Currency::resolve_creating(&Self::account_id(), amount);

		Self::deposit_event(RawEvent::ImbalanceAbsorbed(numeric_amount, Self::pot()));
	}
}

Allocating Funds

In order for the charity to effect change with the funds it has collected, it must be able to allocate those funds. Our charity pallet leaves the governance of where funds will be allocated to the rest of the runtime. Funds can be allocated by a root call to the allocate extrinsic. One good example of a governance mechanism for such decisions is Substrate's own Democracy pallet.
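
A sketch of such an extrinsic follows; the event name and the choice to let the pot be drained entirely are illustrative assumptions rather than the pallet's definitive implementation.

fn allocate(
		origin,
		dest: T::AccountId,
		amount: BalanceOf<T>
) -> DispatchResult {
		ensure_root(origin)?;

		// Pay out of the pot; AllowDeath lets the pot be emptied completely.
		let _ = T::Currency::transfer(&Self::account_id(), &dest, amount, AllowDeath);

		Self::deposit_event(RawEvent::FundsAllocated(dest, amount, Self::pot()));
		Ok(())
}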

Fixed Point Arithmetic

pallets/fixed-point pallets/compounding-interest

When programmers learn to use non-integer numbers in their programs, they are usually taught to use floating points. In blockchain, we use an alternative representation of fractional numbers called fixed point. There are several ways to use fixed point numbers, and this recipe will introduce three of them. In particular we'll see:

  • Substrate's own fixed point structs and traits
  • The substrate-fixed library
  • A manual fixed point implementation (and why it's nicer to use a library)
  • A comparison of the two libraries in a compounding interest example

What's Wrong with Floats?

Floats are cool for all kinds of reasons, but they also have one important drawback. Floating point arithmetic is nondeterministic which means that different processors compute (slightly) different results for the same operation. Although there is an IEEE spec, nondeterminism can come from specific libraries used, or even hardware. In order for the nodes in a blockchain network to reach agreement on the state of the chain, all operations must be completely deterministic. Luckily fixed point arithmetic is deterministic, and is often not much harder to use once you get the hang of it.

Multiplicative Accumulators

pallets/fixed-point

The first pallet covered in this recipe contains three implementations of a multiplicative accumulator. That's a fancy way to say the pallet lets users submit fractional numbers and keeps track of the product from multiplying them all together. The value starts out at one (the multiplicative identity), and it gets multiplied by whatever values the users submit. These three independent implementations compare and contrast the features of each.

Permill Accumulator

We'll be using the most common approach which takes its fixed point implementation from Substrate itself. There are a few fixed-point structs available in Substrate, all of which implement the PerThing trait, that cover different amounts of precision. For this accumulator example, we'll use the Permill struct which represents fractions as parts per million. There are also Perbill, Percent, and PerU16, which all provide the same interface (because it comes from the trait). Substrate's fixed-point structs are somewhat unique because they represent only fractional parts of numbers. That means they can represent numbers between 0 and 1 inclusive, but not numbers with whole parts like 2.718 or 3.14.
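
As a quick illustration of the interface (the values and the import path are only for demonstration):

use sp_runtime::Permill;

fn permill_examples() {
	// 25% expressed as parts per million
	let quarter = Permill::from_percent(25);
	assert_eq!(quarter, Permill::from_parts(250_000));

	// Multiplying a Permill by an integer scales the integer down
	assert_eq!(quarter * 1_000u32, 250u32);
}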

To begin we declare the storage item that will hold our accumulated product. You can see that the trait provides a handy function for getting the identity value which we use to set the default storage value to 1.

decl_storage! {
	trait Store for Module<T: Trait> as Example {
		// --snip--

		/// Permill accumulator, value starts at 1 (multiplicative identity)
		PermillAccumulator get(fn permill_value): Permill = Permill::one();
	}
}

The only extrinsic for this Permill accumulator is the one that allows users to submit new Permill values to get multiplied into the accumulator.

fn update_permill(origin, new_factor: Permill) -> DispatchResult {
	ensure_signed(origin)?;

	let old_accumulated = Self::permill_value();

	// There is no need to check for overflow here. Permill holds values in the range
	// [0, 1] so it is impossible to ever overflow.
	let new_product = old_accumulated.saturating_mul(new_factor);

	// Write the new value to storage
	PermillAccumulator::put(new_product);

	// Emit event
	Self::deposit_event(Event::PermillUpdated(new_factor, new_product));
	Ok(())
}

The code of this extrinsic largely speaks for itself. One thing to take particular note of is that we don't check for overflow on the multiplication. If you've read many of the recipes you know that a Substrate runtime must never panic, and a developer must be extremely diligent in always checking for and gracefully handling error conditions. Because Permill only holds values between 0 and 1, we know that their product will always be in that same range. Thus it is impossible to overflow or saturate. So we can happily use saturating_mul and move on.

Substrate-fixed Accumulator

Substrate-fixed takes a more traditional approach in that their types represent numbers with both whole and fractional parts. For this implementation, we'll use the U16F16 type. This type contains an unsigned number (indicated by the U at the beginning) and has 32 total bits of precision - 16 for the integer part, and 16 for the fractional part. There are several other types provided that follow the same naming convention. Some examples include U32F32 and I32F32 where the I indicates a signed number, just like in Rust primitive types.

As in the Permill example, we begin by declaring the storage item. With substrate-fixed, there is no one() function, but there is a from_num function that we use to set the storage item's default value. This from_num method and its counterpart to_num are your primary ways of converting between substrate-fixed types and Rust primitive types. If your use case does a lot of fixed-point arithmetic, like ours does, it is advisable to keep your data in substrate-fixed types.
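
A few illustrative conversions (the numbers and the helper name are arbitrary):

use substrate_fixed::types::U16F16;

fn conversions() {
	// Rust primitive -> fixed point
	let three = U16F16::from_num(3u32);
	let half = U16F16::from_num(1u32) / U16F16::from_num(2u32);

	// Arithmetic stays in the fixed-point domain
	let one_and_a_half = three * half;

	// Fixed point -> Rust primitive; the fractional part is truncated for integer targets
	let truncated: u32 = one_and_a_half.to_num();
	assert_eq!(truncated, 1);
}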

We're able to use U16F16 as a storage item type because it, and the other substrate-fixed types, implement the parity scale codec.

decl_storage! {
	trait Store for Module<T: Trait> as Example {
		// --snip--

		/// Substrate-fixed accumulator, value starts at 1 (multiplicative identity)
		FixedAccumulator get(fn fixed_value): U16F16 = U16F16::from_num(1);
	}
}

Next we implement the extrinsic that allows users to update the accumulator by multiplying in a new value.

fn update_fixed(origin, new_factor: U16F16) -> DispatchResult {
	ensure_signed(origin)?;

	let old_accumulated = Self::fixed_value();

	// Multiply, handling overflow
	let new_product = old_accumulated.checked_mul(new_factor)
		.ok_or(Error::<T>::Overflow)?;

	// Write the new value to storage
	FixedAccumulator::put(new_product);

	// Emit event
	Self::deposit_event(Event::FixedUpdated(new_factor, new_product));
	Ok(())
}

This extrinsic is quite similar to the Permill version with one notable difference. Because U16F16 handles numbers greater than one, overflow is possible, and we need to handle it. The error handling here is straightforward, the important part is just that you remember to do it.

This example has shown the fundamentals of substrate-fixed, but this library has much more to offer as we'll see in the compounding interest example.

Manual Accumulator

In this final accumulator implementation, we manually track fixed point numbers using Rust's native u32 as the underlying data type. This example is educational, but is only practical in the simplest scenarios. Generally you will have a more fun, less error-prone time coding if you use one of the previous two fixed-point types in your real-world applications.

Fixed point is not very complex conceptually. We represent fractional numbers as regular old integers, and we decide in advance to consider some of the place values fractional. It's just like saying we'll omit the decimal point when talking about money and all agree that "1995" actually means 19.95 €. This is exactly how Substrate's Balances pallet works, a tradition that's been in blockchain since Bitcoin. In our example we will treat 16 bits as integer values, and 16 as fractional, just as substrate-fixed's U16F16 did.

If you're rusty or unfamiliar with place values in the binary number system, it may be useful to brush up. (Or skip this detailed section and proceed to the compounding interest example.)

Normal interpretation of u32 place values
... ___ ___ ___ ___ ___ ___ ___ .
...  64  32  16  8   4   2   1

Fixed interpretation of u32 place values
... ___ ___ ___ ___ . ___ ___ ___ ___ ...
...  8   4   2   1    1/2 1/4 1/8 1/16...

Although the concepts are straight-forward, you'll see that manually implementing operations like multiplication is quite error prone. Therefore, when writing your own blockchain applications, it is often best to use one of the provided libraries covered in the other two implementations of the accumulator.

As before, we begin by declaring the storage value. This time around it is just a simple u32. But the default value, 1 << 16, looks quite funny. If you haven't encountered it before, << is Rust's bit shift operator. It takes a value and moves all the bits to the left. In this case we start with the value 1 and move it 16 bits to the left. This is because Rust interprets 1 as a regular u32 value and puts the 1 in the far right place value. But because we're treating this u32 specially, we need to shift that bit to the middle, just left of the imaginary radix point.

decl_storage! {
	trait Store for Module<T: Trait> as Example {
		// --snip--

		/// Manual accumulator, value starts at 1 (multiplicative identity)
		ManualAccumulator get(fn manual_value): u32 = 1 << 16;
	}
}

The extrinsic to multiply a new factor into the accumulator follows the same general flow as in the other two implementations. In this case, there are more intermediate values calculated, and more comments explaining the bit-shifting operations. In the function body most intermediate values are held in u64 variables. This is because when you multiply two 32-bit numbers, you can end up with as much as 64 bits in the product.

fn update_manual(origin, new_factor: u32) -> DispatchResult {
	ensure_signed(origin)?;

	// To ensure we don't overflow unnecessarily, the values are cast up to u64 before multiplying.
	// This intermediate format has 48 integer positions and 16 fractional.
	let old_accumulated : u64 = Self::manual_value() as u64;
	let new_factor_u64 : u64 = new_factor as u64;

	// Perform the multiplication on the u64 values
	// This intermediate format has 32 integer positions and 32 fractional.
	let raw_product : u64 = old_accumulated * new_factor_u64;

	// Right shift to restore the convention that 16 bits are fractional.
	// This is a lossy conversion.
	// This intermediate format has 48 integer positions and 16 fractional.
	let shifted_product : u64 = raw_product >> 16;

	// Ensure that the product fits in the u32, and error if it doesn't
	if shifted_product > (u32::max_value() as u64) {
		return Err(Error::<T>::Overflow.into())
	}

	// Write the new value to storage
	ManualAccumulator::put(shifted_product as u32);

	// Emit event
	Self::deposit_event(Event::ManualUpdated(new_factor, shifted_product as u32));
	Ok(())
}

As mentioned above, when you multiply two 32-bit numbers, you can end up with as much as 64 bits in the product. In this 64-bit intermediate product, we have 32 integer bits and 32 fractional. We can simply throw away the 16 right-most fractional bits that merely provide extra precision. But we need to be careful with the 16 left-most integer bits. If any of those bits are non-zero after the multiplication it means overflow has occurred. If they are all zero, then we can safely throw them away as well.

If this business about having more bits after the multiplication is confusing, try this exercise in the more familiar decimal system. Consider these numbers that have 4 total digits (2 integer, and two fractional): 12.34 and 56.78. Multiply them together. How many integer and fractional digits are in the product? Try that again with larger numbers like 98.76 and 99.99, and smaller like 00.11 and 00.22. Which of these products can be fit back into a 4-digit number like the ones we started with?
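
If you want to check your work, the products come out as follows:

12.34 × 56.78 = 700.6652    (3 integer digits and 4 fractional digits; too wide for the 2.2 format)
98.76 × 99.99 = 9875.0124   (4 integer digits and 4 fractional digits; too wide for the 2.2 format)
00.11 × 00.22 = 00.0242     (fits the 2 integer digits, but must be truncated to 00.02)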

Compounding Interest

pallets/compounding-interest

Many financial agreements involve interest for loaned or borrowed money. Compounding interest is when new interest is paid on top of not only the original loan amount, the so-called "principal", but also any interest that has been previously paid.

Discrete Compounding

Our first example will look at discrete compounding interest. This is when interest is paid at a fixed interval. In our case, interest will be paid every ten blocks.

For this implementation we've chosen to use Substrate's Percent type. It works nearly the same as Permill, but it represents numbers as "parts per hundred" rather than "parts per million". We could also have used Substrate-fixed for this implementation, but chose to save it for the next example.

The only storage item needed is a tracker of the account's balance. In order to focus on the fixed-point- and interest-related topics, this pallet does not actually interface with a Currency. Instead we just allow anyone to "deposit" or "withdraw" funds with no source or destination.

decl_storage! {
	trait Store for Module<T: Trait> as Example {
		// --snip--

		/// Balance for the discrete interest account
		DiscreteAccount get(fn discrete_account): u64;
	}
}

There are two extrinsics associated with the discrete interest account. The deposit_discrete extrinsic is shown here, and the withdraw_discrete extrinsic is nearly identical. Check it out in the kitchen.

fn deposit_discrete(origin, val_to_add: u64) -> DispatchResult {
	ensure_signed(origin)?;

	let old_value = DiscreteAccount::get();

	// Update storage for discrete account
	DiscreteAccount::put(old_value + val_to_add);

	// Emit event
	Self::deposit_event(Event::DepositedDiscrete(val_to_add));
	Ok(())
}

The flow of these deposit and withdraw extrinsics is entirely straight-forward. They each perform a simple addition or subtraction from the stored value, and they have nothing to do with interest.

Because the interest is paid discretely every ten blocks it can be handled independently of deposits and withdrawals. The interest calculation happens automatically in the on_finalize block.

fn on_finalize(n: T::BlockNumber) {
	// Apply newly-accrued discrete interest every ten blocks
	if (n % 10.into()).is_zero() {

		// Calculate interest Interest = principal * rate * time
		// We can use the `*` operator for multiplying a `Percent` by a u64
		// because `Percent` implements the trait Mul<u64>
		let interest = Self::discrete_interest_rate() * DiscreteAccount::get() * 10;

		// The following line, although similar, does not work because
		// u64 does not implement the trait Mul<Percent>
		// let interest = DiscreteAccount::get() * Self::discrete_interest_rate() * 10;

		// Update the balance
		let old_balance = DiscreteAccount::get();
		DiscreteAccount::put(old_balance + interest);

		// Emit the event
		Self::deposit_event(Event::DiscreteInterestApplied(interest));
	}
}

on_finalize is called at the end of every block, but we only want to pay interest every ten blocks, so the first thing we do is check whether this block is a multiple of ten. If it is, we calculate the interest due by the formula interest = principal * rate * time. As the comments explain, there is some subtlety in the order of the multiplication: you can multiply Percent * u64 but not u64 * Percent.
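
If the Percent arithmetic ever feels opaque, the same per-period calculation can be written with plain integers, keeping the rate as "parts per hundred" (a standalone sketch, not the pallet code; the function name is made up):

fn discrete_interest(principal: u64, rate_percent: u64, periods: u64) -> u64 {
	// interest = principal * rate * time, with the rate expressed in parts per hundred
	principal * rate_percent * periods / 100
}

fn main() {
	// 5% interest per block on a balance of 1_000, accrued over 10 blocks
	assert_eq!(discrete_interest(1_000, 5, 10), 500);
}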

Continuously Compounding

You can imagine increasing the frequency at which the interest is paid out. Increasing the frequency enough approaches continuously compounding interest. Calculating continuously compounding interest requires the exponential function which is not available using Substrate's PerThing types. Luckily exponential and other transcendental functions are available in substrate-fixed, which is why we've chosen to use it for this example.

With continuously compounded interest, we could update the interest in on_finalize as we did before, but it would need to be updated every single block. Instead we wait until a user tries to use the account (to deposit or withdraw funds), and then calculate the account's current value "just in time".

To facilitate this implementation, we represent the state of the account not just as a balance, but as a balance paired with the time at which that balance was last updated.

#[derive(Encode, Decode, Default)]
pub struct ContinuousAccountData<BlockNumber> {
	/// The balance of the account after last manual adjustment
	principal: I32F32,
	/// The time (block height) at which the balance was last adjusted
	deposit_date: BlockNumber,
}

You can see we've chosen substrate-fixed's I32F32 as our balance type this time. While we don't intend to handle negative balances, there is currently a limitation in the transcendental functions that requires using signed types.

With the struct to represent the account's state defined, we can initialize the storage value.

decl_storage! {
	trait Store for Module<T: Trait> as Example {
		// --snip--

		/// Balance for the continuously compounded account
		ContinuousAccount get(fn balance_compound): ContinuousAccountData<T::BlockNumber>;
	}
}

As before, there are two relevant extrinsics, deposit_continuous and withdraw_continuous. They are nearly identical so we'll only show one.

fn deposit_continuous(origin, val_to_add: u64) -> DispatchResult {
	ensure_signed(origin)?;

	let current_block = system::Module::<T>::block_number();
	let old_value = Self::value_of_continuous_account(&current_block);

	// Update storage for compounding account
	ContinuousAccount::<T>::put(
		ContinuousAccountData {
			principal: old_value + I32F32::from_num(val_to_add),
			deposit_date: current_block,
		}
	);

	// Emit event
	Self::deposit_event(Event::DepositedContinuous(val_to_add));
	Ok(())
}

This function itself isn't too insightful. It does the same basic things as the discrete variant: look up the old value, add the deposit, update storage, and emit an event. The one interesting part is that it calls a helper function to get the account's previous value. This helper function calculates the value of the account considering all the interest that has accrued since the last time the account was touched. Let's take a closer look.

fn value_of_continuous_account(now: &<T as system::Trait>::BlockNumber) -> I32F32 {
	// Get the old state of the account
	let ContinuousAccountData{
		principal,
		deposit_date,
	} = ContinuousAccount::<T>::get();

	// Calculate the exponential function (lots of type conversion)
	let elapsed_time_block_number = *now - deposit_date;
	let elapsed_time_u32 = TryInto::try_into(elapsed_time_block_number)
		.expect("blockchain will not exceed 2^32 blocks; qed");
	let elapsed_time_i32f32 = I32F32::from_num(elapsed_time_u32);
	let exponent : I32F32 = Self::continuous_interest_rate() * elapsed_time_i32f32;
	let exp_result : I32F32 = exp(exponent)
		.expect("Interest will not overflow account (at least not until the learner has learned enough about fixed point :)");

	// Return the result interest = principal * e ^ (rate * time)
	principal * exp_result
}

This function gets the previous state of the account, makes the interest calculation and returns the result. The reality of making these fixed point calculations is that type conversion will likely be your biggest pain point. Most of the lines are doing type conversion between the BlockNumber, u32, and I32F32 types.
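
For intuition only, here is the same formula with ordinary floating point numbers. This cannot be used in the runtime, because floating point arithmetic is not guaranteed to be deterministic across platforms, which is exactly why the pallet uses substrate-fixed instead (a standalone sketch with made-up names):

fn value_now(principal: f64, rate_per_block: f64, blocks_elapsed: f64) -> f64 {
	// value = principal * e ^ (rate * time)
	principal * (rate_per_block * blocks_elapsed).exp()
}

fn main() {
	// 1000 units deposited at 1% per block, 50 blocks ago
	println!("{:.2}", value_now(1000.0, 0.01, 50.0)); // prints roughly 1648.72
}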

We've already seen that this helper function is used within the runtime for calculating the current balance "just in time" to make adjustments. In a real-world scenario, chain users would also want to check their balance at any given time. Because the current balance is not stored in runtime storage, it would be wise to implement a runtime API so this helper can be called from outside the runtime.

Off-chain Workers

Before learning how to build your own off-chain worker, you may want to learn about what off-chain workers are, why you want to use them, and what kinds of problems they can solve best. These topics are covered in our guide. Here, we will focus on using off-chain workers in Substrate.

Off-chain workers are a set of powerful tools that allow your Substrate node to offload tasks that take too long to compute, consume too much CPU or memory, or have non-deterministic results. In particular, there are helpers for making HTTP requests and a community-contributed tool for parsing the returned JSON. Off-chain workers also provide their own storage, which is unique to the particular node and not synchronized across the network.

Once the off-chain computation is completed, off-chain workers can submit either signed or unsigned transactions back on-chain.

We will deep-dive into each of the topics below.

Transactions in Off-chain Workers

pallets/offchain-demo

Compiling this Pallet

This offchain-demo pallet is included in the ocw-runtime. That runtime can be used in the kitchen node.

In order to use the off-chain worker, the node must inject some keys into its keystore; this is enabled with a feature flag.

First, edit nodes/kitchen-node/Cargo.toml to enable the ocw-runtime.

Then, build the kitchen node with these commands.

# Switch to kitchen-node directory
cd nodes/kitchen-node

# Compile with OCW feature
cargo build --release --features ocw

Life-cycle of Off-chain Worker

Running the kitchen-node, you will see that the off-chain worker runs after each block generation phase, as shown by the Entering off-chain workers message in the node output:

...
2020-03-14 13:30:36 Starting BABE Authorship worker
2020-03-14 13:30:36 Prometheus server started at 127.0.0.1:9615
2020-03-14 13:30:41 Idle (0 peers), best: #0 (0x2658…9a5b), finalized #0 (0x2658…9a5b), ⬇ 0 ⬆ 0
2020-03-14 13:30:42 Starting consensus session on top of parent 0x26582455e63448e8dafe1e70f04d7d74d39358c6b71c306eb7013e2c54069a5b
2020-03-14 13:30:42 Prepared block for proposing at 1 [hash: 0xdc7a76fc89c45a3f318e29df06cbdb097cc3094112b204f10e1e84e0799eba88; parent_hash: 0x2658…9a5b; extrinsics (1): [0xf572…63c0]]
2020-03-14 13:30:42 Pre-sealed block for proposal at 1. Hash now 0x3558accae1325a2ae5569512b8542e90ae11b4f0de6834ba901eb03b97a680aa, previously 0xdc7a76fc89c45a3f318e29df06cbdb097cc3094112b204f10e1e84e0799eba88.
2020-03-14 13:30:42 New epoch 0 launching at block 0x3558…80aa (block slot 264027307 >= start slot 264027307).
2020-03-14 13:30:42 Next epoch starts at slot 264027407
2020-03-14 13:30:42 Imported #1 (0x3558…80aa)
2020-03-14 13:30:42 Entering off-chain workers
2020-03-14 13:30:42 off-chain send_signed: acc: 5GrwvaEF5zXb26Fz9rcQpDWS57CtERHpNehXCPcNoHGKutQY| number: 0
...

Referring to the code at pallets/offchain-demo/src/lib.rs, there is an offchain_worker function inside decl_module!. This is the entry point of the off-chain worker that is executed once after each block generation, so we put all the off-chain logic here.

Two kinds of transactions can be sent back on-chain from off-chain workers, Signed Transactions and Unsigned Transactions. Signed transactions are used if the transaction requires the sender to be specified. Unsigned transactions are used when the sender does not need to be known, and additional logic is written in the code to provide extra data verification. Let's walk through how to set up each one.

Signed Transactions

Setup

For signed transactions, the first thing you will notice is that we have defined another sub-module here:

src: pallets/offchain-demo/src/lib.rs


pub const KEY_TYPE: KeyTypeId = KeyTypeId(*b"demo");

pub mod crypto {
	use crate::KEY_TYPE;
	use sp_runtime::app_crypto::{app_crypto, sr25519};
	app_crypto!(sr25519, KEY_TYPE);
}

This is the application key type, used as the prefix under which this pallet's signing keys are stored in the node's keystore.

Second, we have added an additional associated type AuthorityId.

src: pallets/offchain-demo/src/lib.rs


pub trait Trait: system::Trait {
	//...snip
	type AuthorityId: AppCrypto<Self::Public, Self::Signature>;
}

This associated type must be specified by the runtime when it includes this pallet (that is, when it implements this pallet's trait).

Now if we build the kitchen-node, we will see the compiler complain that three trait bounds, Runtime: frame_system::offchain::CreateSignedTransaction, frame_system::offchain::SigningTypes, and frame_system::offchain::SendTransactionTypes, are not satisfied. In other words, when using SubmitSignedTransaction, our runtime must also implement the CreateSignedTransaction trait. So let's implement it in our runtime.

src: runtimes/ocw-runtime/src/lib.rs


impl<LocalCall> frame_system::offchain::CreateSignedTransaction<LocalCall> for Runtime
where
	Call: From<LocalCall>,
{
	fn create_transaction<C: frame_system::offchain::AppCrypto<Self::Public, Self::Signature>>(
		call: Call,
		public: <Signature as sp_runtime::traits::Verify>::Signer,
		account: AccountId,
		index: Index,
	) -> Option<(
		Call,
		<UncheckedExtrinsic as sp_runtime::traits::Extrinsic>::SignaturePayload,
	)> {
		let period = BlockHashCount::get() as u64;
		let current_block = System::block_number()
			.saturated_into::<u64>()
			.saturating_sub(1);
		let tip = 0;
		let extra: SignedExtra = (
			frame_system::CheckTxVersion::<Runtime>::new(),
			frame_system::CheckGenesis::<Runtime>::new(),
			frame_system::CheckEra::<Runtime>::from(generic::Era::mortal(period, current_block)),
			frame_system::CheckNonce::<Runtime>::from(index),
			frame_system::CheckWeight::<Runtime>::new(),
			pallet_transaction_payment::ChargeTransactionPayment::<Runtime>::from(tip),
		);

		#[cfg_attr(not(feature = "std"), allow(unused_variables))]
		let raw_payload = SignedPayload::new(call, extra)
			.map_err(|e| {
				debug::native::warn!("SignedPayload error: {:?}", e);
			})
			.ok()?;

		let signature = raw_payload.using_encoded(|payload| C::sign(payload, public))?;

		let address = account;
		let (call, extra, _) = raw_payload.deconstruct();
		Some((call, (address, signature, extra)))
	}
}

// ...snip

There is a lot happening in the code. But basically we are:

  • Signing the call and extra (the signed extension), and
  • Building the transaction from the call (which includes the call parameters), the sender address, the signature over the signed data, and the signed extension extra, so it can be submitted on-chain.

The SignedExtra data type is defined later in the runtime.

src: runtimes/ocw-runtime/src/lib.rs


/// The SignedExtension to the basic transaction logic.
pub type SignedExtra = (
	system::CheckTxVersion<Runtime>,
	system::CheckGenesis<Runtime>,
	system::CheckEra<Runtime>,
	system::CheckNonce<Runtime>,
	system::CheckWeight<Runtime>,
	transaction_payment::ChargeTransactionPayment<Runtime>,
);

Next, the remaining two traits are also implemented.


impl frame_system::offchain::SigningTypes for Runtime {
	type Public = <Signature as sp_runtime::traits::Verify>::Signer;
	type Signature = Signature;
}

impl<C> frame_system::offchain::SendTransactionTypes<C> for Runtime
where
	Call: From<C>,
{
	type OverarchingCall = Call;
	type Extrinsic = UncheckedExtrinsic;
}

Sending Signed Transactions

A signed transaction is sent with T::SubmitSignedTransaction::submit_signed, as shown below:

src: pallets/offchain-demo/src/lib.rs


fn send_signed(block_number: T::BlockNumber) -> Result<(), Error<T>> {
	use system::offchain::SubmitSignedTransaction;
	//..snip

	let submission: u64 = block_number.try_into().ok().unwrap() as u64;
	let call = Call::submit_number_signed(submission);

	// Using `SubmitSignedTransaction` associated type we create and submit a transaction
	//   representing the call, we've just created.
	let results = T::SubmitSignedTransaction::submit_signed(call);
	for (acc, res) in &results {
		match res {
			Ok(()) => { debug::native::info!("off-chain send_signed: acc: {}| number: {}", acc, submission); },
			Err(e) => {
				debug::native::error!("[{:?}] Failed to submit signed tx: {:?}", acc, e);
				return Err(<Error<T>>::SendSignedError);
			}
		};
	}
	Ok(())
}

We prepare a call to Call::submit_number_signed(submission). This is the dispatchable function we want to execute on-chain, and we pass it to T::SubmitSignedTransaction::submit_signed(call).

You will notice that we loop over the returned results. This implies that this call may make multiple transactions and return multiple results, because it actually signs and sends a transaction with each of the accounts found locally under the application crypto (which we defined earlier in pub mod crypto {...}). You can view these as the local accounts managed under this pallet's namespace. Right now we only have one key in the app crypto, so only one signed transaction is made.

Eventually, the call is submitted on-chain via the create_transaction function we defined earlier when implementing the CreateSignedTransaction trait in our runtime.

If you are wondering where we insert the local account in the pallet app crypto, it is actually in the outer node's service.

src: nodes/kitchen-node/src/service.rs


pub fn new_full(config: Configuration<GenesisConfig>)
	-> Result<impl AbstractService, ServiceError>
{
	// ...snip
	let dev_seed = config.dev_key_seed.clone();

	// ...snip
	// Initialize seed for signing transaction using off-chain workers
	if let Some(seed) = dev_seed {
		service
			.keystore()
			.write()
			.insert_ephemeral_from_seed_by_type::<runtime::offchain_demo::crypto::Pair>(
				&seed,
				runtime::offchain_demo::KEY_TYPE,
			)
			.expect("Dev Seed should always succeed.");
	}
	// ...snip
}

Unsigned Transactions

Setup

By default, unsigned transactions are rejected by the Substrate runtime unless they are explicitly allowed. So we need to write the logic that allows unsigned transactions for particular dispatchable functions, as follows:

src: pallets/offchain-demo/src/lib.rs


impl<T: Trait> support::unsigned::ValidateUnsigned for Module<T> {
	type Call = Call<T>;

	fn validate_unsigned(_source: TransactionSource, call: &Self::Call) -> TransactionValidity {
		if let Call::submit_number_unsigned(number) = call {
			debug::native::info!("off-chain send_unsigned: number: {}", number);

			ValidTransaction::with_tag_prefix("offchain-demo")
				.priority(T::UnsignedPriority::get())
				.and_provides([b"submit_number_unsigned"])
				.longevity(3)
				.propagate(true)
				.build()
		} else {
			InvalidTransaction::Call.into()
		}
	}
}

We implement ValidateUnsigned, and the allowance logic goes inside the validate_unsigned function. We check whether the call is Call::submit_number_unsigned and return a ValidTransaction if it is; otherwise we return InvalidTransaction::Call.

The ValidTransaction object has some fields that touch on concepts that we have not discussed before:

  • priority: Ordering of two transactions, given their dependencies are satisfied.
  • requires: List of tags this transaction depends on. Not shown in the above code as it is not necessary in this case.
  • provides: List of tags provided by this transaction. Successfully importing the transaction will enable other transactions that depend on these tags to be included as well. provides and requires tags allow Substrate to build a dependency graph of transactions and import them in the right order.
  • longevity: Transaction longevity, which describes the minimum number of blocks the transaction is valid for. After this period the transaction should be removed from the pool or revalidated.
  • propagate: Indication if the transaction should be propagated to other peers. By setting to false the transaction will still be considered for inclusion in blocks that are authored on the current node, but will never be sent to other peers.

We are using the builder pattern to build up this object.
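
As a purely hypothetical illustration (the requires tag and the second provides tag below are made up and do not appear in the recipe), a transaction that must only be imported after the one above could declare that dependency like this:

// Hypothetical sketch: this transaction can only be included once a transaction
// providing the `submit_number_unsigned` tag has been imported.
ValidTransaction::with_tag_prefix("offchain-demo")
	.priority(T::UnsignedPriority::get())
	.and_requires([b"submit_number_unsigned"])
	.and_provides([b"some_follow_up_action"])
	.longevity(3)
	.propagate(true)
	.build()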

Finally, to tell the runtime that we have our own ValidateUnsigned logic, we also need to pass this as a parameter when constructing the runtime.

src: runtimes/ocw-runtime/src/lib.rs


construct_runtime!(
	pub enum Runtime where
		Block = Block,
		NodeBlock = opaque::Block,
		UncheckedExtrinsic = UncheckedExtrinsic
	{
		//...snip
		OffchainDemo: offchain_demo::{Module, Call, Storage, Event<T>, ValidateUnsigned},
	}
);

Sending Unsigned Transactions

We can now submit an unsigned transaction from the off-chain worker with the T::SubmitUnsignedTransaction::submit_unsigned function, as shown in the code.

src: pallets/offchain-demo/src/lib.rs


fn send_unsigned(block_number: T::BlockNumber) -> Result<(), Error<T>> {
	use system::offchain::SubmitUnsignedTransaction;

	let submission: u64 = block_number.try_into().ok().unwrap() as u64;
	// the `block_number` param should be unique within each block generation phase
	let call = Call::submit_number_unsigned(block_number, submission);

	T::SubmitUnsignedTransaction::submit_unsigned(call).map_err(|e| {
		debug::native::error!("Failed to submit unsigned tx: {:?}", e);
		<Error<T>>::SendUnsignedError
	})
}

As with signed transactions, we prepare a call with its parameters and then pass it to T::SubmitUnsignedTransaction::submit_unsigned.

Testing

For writing test cases for off-chain workers, refer to our testing section.

HTTP Fetching and JSON Parsing in Off-chain Workers

pallets/offchain-demo

HTTP Fetching

In traditional web apps, it is often necessary to communicate with third-party APIs to fetch data that the app itself does not contain. This becomes tricky in decentralized blockchain apps because HTTP requests are non-deterministic: there is uncertainty about whether the request will return at all, how long it will take, and whether the result will still be the same when another node validates it at a later point.

In Substrate, we solve this problem by using off-chain workers to issue HTTP requests and get the result back.

In pallets/offchain-demo/src/lib.rs, we have an example of fetching information about the github organization substrate-developer-hub via its public API, and then extracting the login, blog, and public_repos values.

First, include the tools implemented in sp_runtime::offchain at the top.


use sp_runtime::{
	offchain as rt_offchain
};

We then issue http requests inside the fetch_from_remote() function.


// Initiate an external HTTP GET request. This is using high-level wrappers from `sp_runtime`.
let remote_url = str::from_utf8(&remote_url_bytes)
	.map_err(|_| <Error<T>>::HttpFetchingError)?;

let request = rt_offchain::http::Request::get(remote_url);

We should also set a timeout so the HTTP request does not block indefinitely. For the github API, we also need to add extra HTTP header information. This is how we do it.


// Keeping the offchain worker execution time reasonable, so limiting the call to be within 3s.
//   `sp_io` pallet offers a timestamp() to get the current timestamp from off-chain perspective.
let timeout = sp_io::offchain::timestamp().add(rt_offchain::Duration::from_millis(3000));

// For github API request, we also need to specify `user-agent` in http request header.
//   See: https://developer.github.com/v3/#user-agent-required
let pending = request
	.add_header("User-Agent", str::from_utf8(&user_agent)
		.map_err(|_| <Error<T>>::HttpFetchingError)?)
	.deadline(timeout) // Setting the timeout time
	.send() // Sending the request out by the host
	.map_err(|_| <Error<T>>::HttpFetchingError)?; // Here we capture and return any http error.

HTTP requests from off-chain workers are made asynchronously. Here we use try_wait() to wait for the result to come back, and terminate and return if any errors occurred.

Then we check the response status code to ensure it is 200 (OK). Any non-200 status code is regarded as an error, and we return early.


let response = pending.try_wait(timeout)
	.map_err(|_| <Error<T>>::HttpFetchingError)?
	.map_err(|_| <Error<T>>::HttpFetchingError)?;

if response.code != 200 {
	debug::error!("Unexpected http request status code: {}", response.code);
	return Err(<Error<T>>::HttpFetchingError);
}

Finally, we read the response body with the response.body() iterator. Since we are in a no_std environment, we collect the bytes into a Vec<u8> instead of a String and return it.


Ok(response.body().collect::<Vec<u8>>())

JSON Parsing

We usually get JSON objects back when requesting from HTTP APIs. The next task is to parse the JSON object and extract the required (key, value) pairs. This is demonstrated in the fetch_n_parse function.

Setup

In Rust, serde and serde_json are the popular combination used for JSON parsing. Because the Substrate node is compiled with serde's std feature enabled, and because of cargo's feature-unification limitation, we cannot simultaneously compile the runtime with serde's std feature off (no_std on) (details are described in this issue). So we use a renamed serde crate, alt_serde, in our offchain-demo pallet to remedy this situation.

src: pallets/offchain-demo/Cargo.toml

[package]
# ...

[dependencies]
# external dependencies
# ...

alt_serde = { version = "1", default-features = false, features = ["derive"] }
# updated to `alt_serde_json` when latest version supporting feature `alloc` is released
serde_json = { version = "1", default-features = false, git = "https://github.com/Xanewok/json", branch = "no-std", features = ["alloc"] }

# ...

We also use a modified version of serde_json that has the latest alloc feature and depends only on alt_serde.

Another way of compiling serde with no_std in the runtime is to use cargo's nightly feature resolver (relevant doc).

Deserializing JSON string to struct

Then we use the usual serde-derive approach to deserializing. First we define the struct with the fields we are interested in extracting.

src: pallets/offchain-demo/src/lib.rs


// We use `alt_serde`, and Xanewok-modified `serde_json` so that we can compile the program
//   with serde(features `std`) and alt_serde(features `no_std`).
use alt_serde::{Deserialize, Deserializer};

// Specifying serde path as `alt_serde`
// ref: https://serde.rs/container-attrs.html#crate
#[serde(crate = "alt_serde")]
#[derive(Deserialize, Encode, Decode, Default)]
struct GithubInfo {
	// Specify our own deserializing function to convert JSON string to vector of bytes
	#[serde(deserialize_with = "de_string_to_bytes")]
	login: Vec<u8>,
	#[serde(deserialize_with = "de_string_to_bytes")]
	blog: Vec<u8>,
	public_repos: u32,
}

By default, serde deserializes JSON strings to the String datatype. We write our own deserializer to convert them to vectors of bytes instead.


pub fn de_string_to_bytes<'de, D>(de: D) -> Result<Vec<u8>, D::Error>
where D: Deserializer<'de> {
	let s: &str = Deserialize::deserialize(de)?;
	Ok(s.as_bytes().to_vec())
}

Now the actual deserialization takes place in the Self::fetch_n_parse function.


/// Fetch from remote and deserialize the JSON to a struct
fn fetch_n_parse() -> Result<GithubInfo, Error<T>> {
	let resp_bytes = Self::fetch_from_remote()
		.map_err(|e| {
			debug::error!("fetch_from_remote error: {:?}", e);
			<Error<T>>::HttpFetchingError
		})?;

	let resp_str = str::from_utf8(&resp_bytes)
		.map_err(|_| <Error<T>>::HttpFetchingError)?;

	// Deserializing JSON to struct, thanks to `serde` and `serde_derive`
	let gh_info: GithubInfo = serde_json::from_str(&resp_str).unwrap();
	Ok(gh_info)
}

Local Storage in Off-chain Workers

pallets/offchain-demo

Remember we mentioned that off-chain workers (ocw for short below) cannot write directly to on-chain storage; that is why they have to submit transactions back on-chain to modify the state.

Fortunately, off-chain workers also have their own local storage. This storage is local to the node running the off-chain worker, is not shared across the network, and persists across off-chain worker runs and blockchain re-organizations.

Off-chain workers are run asynchronously during block import. Since ocws are not limited in how long they run, at any given moment there could be multiple ocws running, initiated by previous block imports. See the diagram below.

More than one off-chain workers at a single instance

The storage exposes an API similar to the on-chain StorageValue, with get, set, and mutate. mutate uses a compare-and-set pattern: it compares the contents of a memory location with a given value and, only if they are the same, modifies the contents of that memory location to a new given value. This is done as a single atomic operation. The atomicity guarantees that the new value is calculated based on up-to-date information; if the value has been updated by another thread in the meantime, the write fails.
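
If the compare-and-set idea is new, the standard library's atomics expose the same semantics. The snippet below is only an analogy to illustrate that pattern; it has nothing to do with the off-chain storage API itself.

use std::sync::atomic::{AtomicU32, Ordering};

fn main() {
	let value = AtomicU32::new(0);

	// Write 1 only if the value is still 0. If something else changed it in the
	// meantime, the write fails and we are told the current value instead.
	match value.compare_exchange(0, 1, Ordering::SeqCst, Ordering::SeqCst) {
		Ok(previous) => println!("updated; the previous value was {}", previous),
		Err(current) => println!("update failed; the value is now {}", current),
	}
}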

In this recipe, we will add a cache and a lock on top of our previous HTTP fetching example. If a cached value exists, we return it. Otherwise we acquire the lock, fetch from the github public API, and save the result to the cache.

Setup

First, include the relevant module.

src: offchain-demo/src/lib.rs


use sp_runtime::{
	// ...
	offchain::{storage::StorageValueRef},
	// ...
};

Then, in the fetch_if_needed() function, we first define a storage reference used by the off-chain worker.


fn fetch_if_needed() -> Result<(), Error<T>> {

	// Start off by creating a reference to Local Storage value.
	// Since the local storage is common for all offchain workers, it's a good practice
	// to prepend our entry with the pallet name.
	let storage = StorageValueRef::persistent(b"offchain-demo::gh-info");
	let s_lock = StorageValueRef::persistent(b"offchain-demo::lock");
	// ...
}

Looking at the API doc, we see there are two kinds of StorageValueRef, created via ::persistent() and ::local(). ::local() is not fully implemented yet, and ::persistent() is enough for this use case. We pass in a key as our storage key. Since storage keys are namespaced globally, a good practice is to prepend our pallet name to the storage key.

Access

Once we have the storage reference, we can access the storage via get, set, and mutate. Let's focus on the mutate function, as the usage of the remaining two functions is fairly self-explanatory.

First we check whether the github info has already been fetched and cached. If so, we return early.


fn fetch_if_needed() -> Result<(), Error<T>> {
	// ...
	if let Some(Some(gh_info)) = s_info.get::<GithubInfo>() {
		// gh-info has already been fetched. Return early.
		debug::info!("cached gh-info: {:?}", gh_info);
		return Ok(());
	}
	// ...
}

As with general on-chain storage, if we have a get-check-set storage access pattern, that is a good indicator we should use mutate. This makes sure that multiple off-chain workers running concurrently do not modify the same storage entry.

We then try to acquire the lock in order to fetch github info.


fn fetch_if_needed() -> Result<(), Error<T>> {
	//...
	// We are implementing a mutex lock here with `s_lock`
	let res: Result<Result<bool, bool>, Error<T>> = s_lock.mutate(|s: Option<Option<bool>>| {
		match s {
			// `s` can be one of the following:
			//   `None`: the lock has never been set. Treated as the lock is free
			//   `Some(None)`: unexpected case, treated as if the lock is held (AlreadyFetched)
			//   `Some(Some(false))`: the lock is free
			//   `Some(Some(true))`: the lock is held

			// If the lock has never been set or is free (false), return true to execute `fetch_n_parse`
			None | Some(Some(false)) => Ok(true),

			// Otherwise, someone already holds the lock (true), so we want to skip `fetch_n_parse`.
			// Covering cases: `Some(None)` and `Some(Some(true))`
			_ => Err(<Error<T>>::AlreadyFetched),
		}
	});
	//...
}

We use the mutate function to get and set the lock value, taking advantage of its compare-and-set access pattern. If the lock is held by another ocw (s equals Some(Some(true))), we return an error indicating that the fetching is being done by another ocw.

The return value of the mutate has a type of Result<Result<T, T>, E>, to indicate one of the following cases:

  • Ok(Ok(T)) - the value has been successfully set in the mutate closure and saved to the storage.
  • Ok(Err(T)) - the value has been successfully set in the mutate closure, but failed to save to the storage.
  • Err(_) - the value has NOT been set successfully in the mutate closure.

Now we check the value returned from mutate. If fetching is being done by another ocw (mutate returned Err(<Error<T>>)), or the lock value could not be saved back to storage (mutate returned Ok(Err(true))), we skip the fetching.


fn fetch_if_needed() -> Result<(), Error<T>> {
	// ...
	// Cases of `res` returned result:
	//   `Err(<Error<T>>)` - lock is held, so we want to skip `fetch_n_parse` function.
	//   `Ok(Err(true))` - Another ocw is writing to the storage while we set it,
	//                     we also skip `fetch_n_parse` in this case.
	//   `Ok(Ok(true))` - successfully acquire the lock, so we run `fetch_n_parse`
	if let Ok(Ok(true)) = res {
		match Self::fetch_n_parse() {
			Ok(gh_info) => {
				// set gh-info into the storage and release the lock
				s_info.set(&gh_info);
				s_lock.set(&false);

				debug::info!("fetched gh-info: {:?}", gh_info);
			},
			Err(err) => {
				// release the lock
				s_lock.set(&false);
				return Err(err);
			}
		}
	}
	Ok(())
}

Finally, whether the fetch_n_parse() function succeeds or not, we release the lock by setting it back to false.


Runtime APIs

pallets/sum-storage runtimes/api-runtime

Each Substrate node contains a runtime. The runtime contains the business logic of the chain. It defines what transactions are valid and invalid and determines how the chain's state changes in response to transactions. The runtime is compiled to Wasm to facilitate runtime upgrades. The "outer node", everything other than the runtime, does not compile to Wasm, only to native. The outer node is responsible for handling peer discovery, transaction pooling, block and transaction gossiping, consensus, and answering RPC calls from the outside world. While performing these tasks, the outer node sometimes needs to query the runtime for information, or provide information to the runtime. A Runtime API facilitates this kind of communication between the outer node and the runtime. In this recipe, we will write our own minimal runtime API.

Our Example

For this example, we will write a pallet called sum-storage with two storage items, both u32s.


decl_storage! {
	trait Store for Module<T: Trait> as TemplateModule {
		Thing1 get(fn thing1): Option<u32>;
		Thing2 get(fn thing2): Option<u32>;
	}
}

Substrate already comes with a runtime API for querying storage values, which is why we can easily query our two storage values from a front-end. In this example we imagine that the outer node is interested in knowing the sum of the two values, rather than either individual value. Our runtime API will provide a way for the outer node to query the runtime for this sum. Before we define the actual runtime API, let's write a public helper function in the pallet to do the summing.


impl<T: Trait> Module<T> {
	pub fn get_sum() -> u32 {
		Thing1::get() + Thing2::get()
	}
}

So far, nothing we've done is specific to runtime APIs. In the coming sections, we will use this helper function in our runtime API's implementation.

Defining the API

The first step in adding a runtime API to your runtime is defining its interface using a Rust trait. This is done in the sum-storage/runtime-api/src/lib.rs file. This file can live anywhere you like, but because it defines an API that is closely related to a particular pallet, it makes sense to include the API definition in the pallet's directory.

The code to define the API is quite simple, and looks almost like any old Rust trait. The one addition is that it must be placed in the decl_runtime_apis! macro. This macro allows the outer node to query the runtime API at specific blocks. Although this runtime API only provides a single function, you may write as many as you like.


sp_api::decl_runtime_apis! {
	pub trait SumStorageApi {
		fn get_sum() -> u32;
	}
}

Implementing the API

With our pallet written and our runtime API defined, we may now implement the API for our runtime. This happens in the main runtime aggregation file. In our case we've provided the api-runtime in runtimes/api-runtime/src/lib.rs.

As with defining the API, implementing a runtime API looks similar to implementing any old Rust trait, with the exception that the implementation must go inside the impl_runtime_apis! macro. Every runtime must use impl_runtime_apis! because the Core API is required. We will add an implementation for our own API alongside the others in this macro. Our implementation is straightforward as it merely calls the pallet's helper function that we wrote previously.


impl_runtime_apis! {
  // --snip--

  impl sum_storage_rpc_runtime_api::SumStorageApi<Block> for Runtime {
		fn get_sum() -> u32 {
			SumStorage::get_sum()
		}
	}
}

You may be wondering about the Block type parameter which is present here, but not in our definition. This type parameter is added by the macros along with a few other features. All runtime APIs have this type parameter to facilitate querying the runtime at arbitrary blocks. Read more about this in the docs for impl_runtime_apis!.

Calling the Runtime API

We've now successfully added a runtime API to our runtime. The outer node can now call this API to query the runtime for the sum of two storage values. Given a reference to a 'client' we can make the call like this.


let sum_at_block_fifty = client.runtime_api().get_sum(&50);

This recipe was about defining and implementing a custom runtime API. To see an example of calling this API in practice, see the recipe on custom RPCs, where we connect this runtime API to an RPC that can be called by an end user.

Custom RPCs

nodes/rpc-node runtime/api-runtime

Remote Procedure Calls, or RPCs, are a way for an external program (eg. a frontend) to communicate with a Substrate node. They are used for checking storage values, submitting transactions, and querying the current consensus authorities. Substrate comes with several default RPCs. In many cases it is useful to add custom RPCs to your node. In this recipe, we will add two custom RPCs to our node, one of which calls into a custom runtime API.

Defining an RPC

Every RPC that the node will use must be defined in a trait. We'll begin by defining a simple RPC called "silly rpc" which just returns constant integers. A Hello world of sorts. In the nodes/rpc-node/src/silly_rpc.rs file, we define a basic rpc as


#[rpc]
pub trait SillyRpc {
	#[rpc(name = "silly_seven")]
	fn silly_7(&self) -> Result<u64>;

	#[rpc(name = "silly_double")]
	fn silly_double(&self, val: u64) -> Result<u64>;
}

This trait defines two RPC methods, silly_seven and silly_double. Each RPC method must take a &self reference and must return a Result. Next, we define a struct that implements this trait.


pub struct Silly;

impl SillyRpc for Silly {
	fn silly_7(&self) -> Result<u64> {
		Ok(7)
	}

	fn silly_double(&self, val: u64) -> Result<u64> {
		Ok(2 * val)
	}
}

Finally, to make the contents of this new file usable, we need to add a line in our main.rs.


mod silly_rpc;

Including the RPC

With our RPC written, we're ready to install it on our node. We begin with a few dependencies in our rpc-node's Cargo.toml.

jsonrpc-core = "14.0.3"
jsonrpc-core-client = "14.0.3"
jsonrpc-derive = "14.0.3"
sc-rpc = '2.0.0-rc3'

Next, in our rpc-node's service.rs file, we extend the service with our RPC. We've chosen to install this RPC for full nodes, so we've included the code in the new_full_start! macro. You could also install the RPC on a light client by making the corresponding changes to new_light.

The first change to this macro is a simple type definition


type RpcExtension = jsonrpc_core::IoHandler<sc_rpc::Metadata>;

Then, once you've called the service builder, you can extend it with an RPC by using its with_rpc_extensions method as follows.


.with_rpc_extensions(|builder| -> Result<RpcExtension, _> {
	// Make an io handler to be extended with individual RPCs
	let mut io = jsonrpc_core::IoHandler::default();

	// Use the fully qualified name starting from `crate` because we're in macro_rules!
	io.extend_with(crate::silly_rpc::SillyRpc::to_delegate(crate::silly_rpc::Silly{}));

	// --snip--

	Ok(io)
})

Calling the RPC

Once your node is running, you can test the RPC by calling it with any client that speaks JSON-RPC. One widely available option is curl.

$ curl http://localhost:9933 -H "Content-Type:application/json;charset=utf-8" -d   '{
     "jsonrpc":"2.0",
      "id":1,
      "method":"silly_seven",
      "params": []
    }'

To which the RPC responds

{"jsonrpc":"2.0","result":7,"id":1}

You may have noticed that our second RPC takes a parameter, the value to double. You can supply this parameter by including it in the params list. For example:

$ curl http://localhost:9933 -H "Content-Type:application/json;charset=utf-8" -d   '{
     "jsonrpc":"2.0",
      "id":1,
      "method":"silly_double",
      "params": [7]
    }'

To which the RPC responds with the doubled parameter

{"jsonrpc":"2.0","result":14,"id":1}

RPC to Call a Runtime API

The silly RPC demonstrates the fundamentals of working with RPCs in Substrate. Nonetheless, most RPCs will go beyond what we've learned so far, and actually interact with other parts of the node. In this second example, we will include an RPC that calls into the sum-storage runtime API from the runtime API recipe. While it isn't strictly necessary to understand what the runtime API does, reading that recipe may provide helpful context.

Because this RPC's behavior is closely related to a specific pallet, we've chosen to define the RPC in the pallet's directory. In this case the RPC is defined in pallets/sum-storage/rpc. So rather than using the mod keyword as we did before, we must include this RPC definition in the node's Cargo.toml file.

sum-storage-rpc = { path = "../../pallets/sum-storage/rpc" }

Defining the RPC interface is similar to before, but there are a few differences worth noting. First, the struct that implements the RPC needs a reference to the client so that it can actually call into the runtime. Second, the struct is generic over the BlockHash type, because it will call a runtime API, and runtime APIs must always be called at a specific block.


#[rpc]
pub trait SumStorageApi<BlockHash> {
	#[rpc(name = "sumStorage_getSum")]
	fn get_sum(
		&self,
		at: Option<BlockHash>
	) -> Result<u32>;
}

/// A struct that implements the `SumStorageApi`.
pub struct SumStorage<C, M> {
	client: Arc<C>,
	_marker: std::marker::PhantomData<M>,
}

impl<C, M> SumStorage<C, M> {
	/// Create new `SumStorage` instance with the given reference to the client.
	pub fn new(client: Arc<C>) -> Self {
		Self { client, _marker: Default::default() }
	}
}

The RPC's implementation is also similar to before. The additional syntax here is related to calling the runtime at a specific block, as well as ensuring that the runtime we're calling actually has the correct runtime API available.


impl<C, Block> SumStorageApi<<Block as BlockT>::Hash>
	for SumStorage<C, Block>
where
	Block: BlockT,
	C: Send + Sync + 'static,
	C: ProvideRuntimeApi,
	C: HeaderBackend<Block>,
	C::Api: SumStorageRuntimeApi<Block>,
{
	fn get_sum(
		&self,
		at: Option<<Block as BlockT>::Hash>
	) -> Result<u32> {

		let api = self.client.runtime_api();
		let at = BlockId::hash(at.unwrap_or_else(||
			// If the block hash is not supplied assume the best block.
			self.client.info().best_hash
		));

		let runtime_api_result = api.get_sum(&at);
		runtime_api_result.map_err(|e| RpcError {
			code: ErrorCode::ServerError(9876), // No real reason for this value
			message: "Something wrong".into(),
			data: Some(format!("{:?}", e).into()),
		})
	}
}

Finally, to install this RPC in our service, we expand the existing with_rpc_extensions call to


.with_rpc_extensions(|builder| -> Result<RpcExtension, _> {
	// Make an io handler to be extended with individual RPCs
	let mut io = jsonrpc_core::IoHandler::default();

	// Add the first rpc extension
	io.extend_with(crate::silly_rpc::SillyRpc::to_delegate(crate::silly_rpc::Silly{}));

	// Add the second RPC extension
	// Because this one calls a Runtime API it needs a reference to the client.
	io.extend_with(sum_storage_rpc::SumStorageApi::to_delegate(sum_storage_rpc::SumStorage::new(builder.client().clone())));

	Ok(io)
})?;

Optional RPC Parameters

This RPC takes a parameter, at, whose type is Option<_>. We may call this RPC by omitting the optional parameter entirely, in which case the implementation uses the best block as a default.

$ curl http://localhost:9933 -H "Content-Type:application/json;charset=utf-8" -d   '{
     "jsonrpc":"2.0",
      "id":1,
      "method":"sumStorage_getSum",
      "params": []
    }'

We may also call the RPC by providing a block hash. One easy way to get a block hash to test this call is by copying it from the logs of a running node.

$ curl http://localhost:9933 -H "Content-Type:application/json;charset=utf-8" -d   '{
     "jsonrpc":"2.0",
      "id":1,
      "method":"sumStorage_getSum",
      "params": ["0x87b2e4b93e74d2f06a0bde8de78c9e2a9823ce559eb5e3c4710de40a1c1071ac"]
    }'

As an exercise, change the storage values and confirm that the RPC provides the correct updated sum. Then call the RPC at an old block and confirm you get the old sum.

Polkadot JS API

Many frontends interact with Substrate nodes through Polkadot JS API. While the Recipes does not strive to document that project, we have included a snippet of javascript for interacting with these custom RPCs in the nodes/rpc-node/js directory.

Sha3 Proof of Work Algorithms

consensus/sha3pow

Proof of Work is not a single consensus algorithm. Rather it is a class of algorithms represented in Substrate by the PowAlgorithm trait. Before we can build a PoW node we must specify a concrete PoW algorithm by implementing this trait. In this recipe we specify two concrete PoW algorithms, both of which are based on the sha3 hashing algorithm.

Minimal Sha3 PoW

First we turn our attention to a minimal working implementation. This consensus engine is kept intentionally simple. It omits some features that make Proof of Work practical for real-world use such as difficulty adjustment.

Begin by creating a struct that will implement the PowAlgorithm Trait.

/// A minimal PoW algorithm that uses Sha3 hashing.
/// Difficulty is fixed at 1_000_000
#[derive(Clone)]
pub struct MinimalSha3Algorithm;

Because this is a minimal PoW algorithm, our struct can also be quite simple. In fact, it is a unit struct. A more complex PoW algorithm that interfaces with the runtime would need to hold a reference to the client. An example of this (on an older Substrate codebase) can be seen in Kulupu's RandomXAlgorithm.

Difficulty

The first function we must provide returns the difficulty of the next block to be mined. In our minimal sha3 algorithm, this function is quite simple. The difficulty is fixed. This means that as more mining power joins the network, the block time will become faster.

impl<B: BlockT<Hash=H256>> PowAlgorithm<B> for MinimalSha3Algorithm {
	type Difficulty = U256;

	fn difficulty(&self, _parent: &BlockId<B>) -> Result<Self::Difficulty, Error<B>> {
		// This basic PoW uses a fixed difficulty.
		// Raising this difficulty will make the block time slower.
		Ok(U256::from(1_000_000))
	}

	// --snip--
}

Verification

Our PoW algorithm must also be able to verify blocks provided by other authors. We are first given the pre-hash, which is a hash of the block before the proof of work seal is attached. We are also given the seal, which testifies that the work has been done, and the difficulty that the block author needed to meet. This function first confirms that the provided seal actually meets the target difficulty, then it confirms that the seal is actually valid for the given pre-hash.

fn verify(
	&self,
	_parent: &BlockId<B>,
	pre_hash: &H256,
	seal: &RawSeal,
	difficulty: Self::Difficulty
) -> Result<bool, Error<B>> {
	// Try to construct a seal object by decoding the raw seal given
	let seal = match Seal::decode(&mut &seal[..]) {
		Ok(seal) => seal,
		Err(_) => return Ok(false),
	};

	// See whether the hash meets the difficulty requirement. If not, fail fast.
	if !hash_meets_difficulty(&seal.work, difficulty) {
		return Ok(false)
	}

	// Make sure the provided work actually comes from the correct pre_hash
	let compute = Compute {
		difficulty,
		pre_hash: *pre_hash,
		nonce: seal.nonce,
	};

	if compute.compute() != seal {
		return Ok(false)
	}

	Ok(true)
}

Mining

Finally our proof of work algorithm needs to be able to mine blocks of our own.

fn mine(
	&self,
	_parent: &BlockId<B>,
	pre_hash: &H256,
	difficulty: Self::Difficulty,
	round: u32 // The number of nonces to try during this call
) -> Result<Option<RawSeal>, Error<B>> {
	// Get a randomness source from the environment; fail if one isn't available
	let mut rng = SmallRng::from_rng(&mut thread_rng())
		.map_err(|e| Error::Environment(format!("Initialize RNG failed for mining: {:?}", e)))?;

	// Loop the specified number of times
	for _ in 0..round {

		// Choose a new nonce
		let nonce = H256::random_using(&mut rng);

		// Calculate the seal
		let compute = Compute {
			difficulty,
			pre_hash: *pre_hash,
			nonce,
		};
		let seal = compute.compute();

		// If we solved the PoW then return, otherwise loop again
		if hash_meets_difficulty(&seal.work, difficulty) {
			return Ok(Some(seal.encode()))
		}
	}

	// Tried the specified number of rounds and never found a solution
	Ok(None)
}

Notice that this function takes a parameter for the number of rounds of mining it should attempt. If no block has been successfully mined in this time, the method will return. This gives the service a chance to check whether any new blocks have been received from other authors since the mining started. If a valid block has been received, then we will start mining on it. If no such block has been received, we will go in for another try at mining on the same block as before.

Realistic Sha3 PoW

Having understood the fundamentals, we can now build a more realistic sha3 algorithm. The primary difference is that this algorithm fetches the difficulty from the runtime via a runtime API. This change allows the runtime to dynamically adjust the difficulty based on block time, so if more mining power joins the network, the difficulty adjusts and the block time remains constant.

Defining the Sha3Algorithm Struct

We begin as before by defining a struct that will implement the PowAlgorithm trait. Unlike before, this struct must hold a reference to the Client so it can call the appropriate runtime APIs.

/// A complete PoW Algorithm that uses Sha3 hashing.
/// Needs a reference to the client so it can grab the difficulty from the runtime.
pub struct Sha3Algorithm<C> {
	client: Arc<C>,
}

Next we provide a new method for conveniently creating instances of our new struct.

impl<C> Sha3Algorithm<C> {
	pub fn new(client: Arc<C>) -> Self {
		Self { client }
	}
}

And finally we manually implement Clone. We cannot derive Clone as we did for the MinimalSha3Algorithm.

// Manually implement clone. Deriving doesn't work because
// it'll derive impl<C: Clone> Clone for Sha3Algorithm<C>. But C in practice isn't Clone.
impl<C> Clone for Sha3Algorithm<C> {
	fn clone(&self) -> Self {
		Self::new(self.client.clone())
	}
}

It isn't critical to understand why the manual Clone implementation is necessary, just that it is necessary.

Implementing the PowAlgorithm trait

As before, we implement the PowAlgorithm trait, this time for our Sha3Algorithm. We supply more complex trait bounds to ensure that the client the algorithm holds a reference to actually provides the DifficultyApi necessary to fetch the PoW difficulty from the runtime.

// Here we implement the general PowAlgorithm trait for our concrete Sha3Algorithm
impl<B: BlockT<Hash=H256>, C> PowAlgorithm<B> for Sha3Algorithm<C> where
	C: ProvideRuntimeApi<B>,
	C::Api: DifficultyApi<B, U256>,
{
	type Difficulty = U256;

	// --snip
}

Difficulty

The implementation of PowAlgorithm's difficulty function no longer returns a fixed value, but rather calls into the runtime API, which is guaranteed to exist because of the trait bounds. It also maps any errors that may occur when using the API.

fn difficulty(&self, parent: B::Hash) -> Result<Self::Difficulty, Error<B>> {
	let parent_id = BlockId::<B>::hash(parent);
	self.client.runtime_api().difficulty(&parent_id)
		.map_err(|e| sc_consensus_pow::Error::Environment(
			format!("Fetching difficulty from runtime failed: {:?}", e)
		))
}

Verify and Mine

The verify and mine functions are unchanged from the MinimalSha3Algorithm implementation.

Basic Proof of Work

nodes/basic-pow

The basic-pow node demonstrates how to wire up a custom consensus engine into the Substrate Service. It uses a minimal proof of work consensus engine to reach agreement over the blockchain. It will teach us many useful aspects of dealing with consensus and prepare us to understand more advanced consensus engines in the future. In particular, we will learn about the structure of a node, the service builder, the block import pipeline, inherent data providers, and mining.

The Structure of a Node

You may remember from the hello-substrate recipe that a Substrate node has two parts. An outer part that is responsible for gossiping transactions and blocks, handling rpc requests, and reaching consensus. And a runtime that is responsible for the business logic of the chain. This architecture diagram illustrates the distinction.

Substrate Architecture Diagram

In principle, the consensus engine (part of the outer node) is agnostic to the runtime that is used with it. But in practice, most consensus engines will require the runtime to provide certain runtime APIs that affect the engine. For example, Aura and Babe query the runtime for the set of validators. A more real-world PoW consensus would query the runtime for the block difficulty. Additionally, some runtimes rely on the consensus engine to provide pre-runtime digests. For example, runtimes that include the Babe pallet expect a pre-runtime digest containing information about the current babe slot.

In this recipe we will avoid those practical complexities by using the Minimal Sha3 Proof of Work consensus engine, and a dedicated pow-runtime which are truly isolated from each other. The contents of the runtime should be familiar, and will not be discussed here.

The Service Builder

The Substrate Service is the main coordinator of the various parts of a Substrate node, including consensus. The service is large and takes many parameters, so it is built with a ServiceBuilder following Rust's builder pattern. This code is demonstrated in the node's src/service.rs file.

The particular builder method that is relevant here is with_import_queue. Here we construct an instance of the PowBlockImport struct, providing it with references to our client, our MinimalSha3Algorithm, and some other necessary data.

builder
	.with_import_queue(|_config, client, select_chain, _transaction_pool| {

		let pow_block_import = sc_consensus_pow::PowBlockImport::new(
			client.clone(),
			client.clone(),
			sha3pow::MinimalSha3Algorithm,
			0, // check inherents starting at block 0
			select_chain,
			inherent_data_providers.clone(),
		);

		let import_queue = sc_consensus_pow::import_queue(
			Box::new(pow_block_import.clone()),
			sha3pow::MinimalSha3Algorithm,
			inherent_data_providers.clone(),
		)?;

		import_setup = Some(pow_block_import);

		Ok(import_queue)
	})?;

Once the PowBlockImport is constructed, we can use it to create an actual import queue that the service will use for importing blocks into the client.

The Block Import Pipeline

You may have noticed that when we created the PowBlockImport we gave it two separate references to the client. The second reference will always be to a client. But the first is interesting. The rustdocs tell us that the first parameter is inner: BlockImport<B, Transaction = TransactionFor<C, B>>. Why would a block import have a reference to another block import? Because the "block import pipeline" is constructed in an onion-like fashion, where one layer of block import wraps the next. Learn more about this pattern in the knowledgebase article on the block import pipeline.

Inherent Data Providers

Both the BlockImport and the import_queue are given an instance called inherent_data_providers. This object is created in a helper function defined at the beginning of service.rs

pub fn build_inherent_data_providers() -> Result<InherentDataProviders, ServiceError> {
	let providers = InherentDataProviders::new();

	providers
		.register_provider(sp_timestamp::InherentDataProvider)
		.map_err(Into::into)
		.map_err(sp_consensus::error::Error::InherentData)?;

	Ok(providers)
}

Anything that implements the ProvideInherentData trait may be used here. The block authoring logic must supply all inherents that the runtime expects. In the case of this basic-pow chain, that is just the TimestampInherentData expected by the timestamp pallet. In order to register other inherents, you would call register_provider multiple times, and map errors accordingly.
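
For example, registering a second provider might look like the sketch below, where my_pallet::MyInherentDataProvider is a made-up name standing in for whatever additional provider your runtime actually expects:

pub fn build_inherent_data_providers() -> Result<InherentDataProviders, ServiceError> {
	let providers = InherentDataProviders::new();

	providers
		.register_provider(sp_timestamp::InherentDataProvider)
		.map_err(Into::into)
		.map_err(sp_consensus::error::Error::InherentData)?;

	// Hypothetical: register a second provider the same way, mapping errors accordingly.
	providers
		.register_provider(my_pallet::MyInherentDataProvider)
		.map_err(Into::into)
		.map_err(sp_consensus::error::Error::InherentData)?;

	Ok(providers)
}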

Mining

We've already implemented a mining algorithm as part of our MinimalSha3Algorithm, but we haven't yet told our service to actually mine with that algorithm. This is our last task in the new_full function.

if participates_in_consensus {
	let proposer = sc_basic_authorship::ProposerFactory::new(
		service.client(),
		service.transaction_pool()
	);

	// The number of rounds of mining to try in a single call
	let rounds = 500;

	let client = service.client();
	let select_chain = service.select_chain()
		.ok_or(ServiceError::SelectChainRequired)?;

	let can_author_with =
		sp_consensus::CanAuthorWithNativeVersion::new(service.client().executor().clone());

	sc_consensus_pow::start_mine(
		Box::new(block_import),
		client,
		MinimalSha3Algorithm,
		proposer,
		None, // No preruntime digests
		rounds,
		service.network(),
		std::time::Duration::new(2, 0),
		Some(select_chain),
		inherent_data_providers.clone(),
		can_author_with,
	);
}

We begin by testing whether this node participates in consensus, which is to say we check whether the user wants the node to act as a miner. If this node is to be a miner, we gather references to various parts of the node that the start_mine function requires, and define that we will attempt 500 rounds of mining for each block before pausing. Finally we call start_mine.

The Light Client

The last thing in the service.rs file is constructing the light client's service. This code is quite similar to the construction of the full service.

Instead of using the with_import_queue function we used previously, we use the with_import_queue_and_fprb function. FPRB stands for FinalityProofRequestBuilder. In chains with deterministic finality, light clients must request proofs of finality from full nodes. But in our chain, we do not have deterministic finality, so we can use the DummyFinalityProofRequestBuilder, which does nothing except satisfy Rust's type checker.

Once the dummy request builder is configured, the BlockImport and import queue are configured exactly as they were in the full node.

Note on Finality

If we run the basic-pow node now, we see in the console logs that the finalized block always remains at 0.

...
2020-03-22 12:50:09 Starting consensus session on top of parent 0x85811577d1033e918b425380222fd8c5aef980f81fa843d064d80fe027c79f5a
2020-03-22 12:50:09 Imported #189 (0x8581…9f5a)
2020-03-22 12:50:09 Prepared block for proposing at 190 [hash: 0xdd83ba96582acbed59aacd5304a9258962d1d4c2180acb8b77f725bd81461c4f; parent_hash: 0x8581…9f5a; extrinsics (1): [0x77a5…f7ad]]
2020-03-22 12:50:10 Idle (1 peers), best: #189 (0x8581…9f5a), finalized #0 (0xff0d…5cb9), ⬇ 0.2kiB/s ⬆ 0.4kiB/s
2020-03-22 12:50:15 Idle (1 peers), best: #189 (0x8581…9f5a), finalized #0 (0xff0d…5cb9), ⬇ 0 ⬆ 0

This is expected because Proof of Work is a consensus mechanism with probabilistic finality. This means a block is never truly finalized and can always be reverted. The further behind the chain head a block is, the less likely it is to be reverted.

Hybrid Consensus

nodes/hybrid-consensus

This recipe demonstrates a Substrate-based node that employs hybrid consensus. Specifically, it uses Sha3 Proof of Work to dictate block authoring, and the Grandpa finality gadget to provide deterministic finality. The minimal proof of work consensus lives entirely outside of the runtime, while Grandpa obtains its authorities from the runtime via the GrandpaAPI. Understanding this recipe requires familiarity with Substrate's block import pipeline.

The Block Import Pipeline

Substrate's block import pipeline is structured like an onion in the sense that it is layered. A Substrate node can compose pieces of block import logic by wrapping block imports in other block imports. In this node we need to ensure that blocks are valid according to both PoW and Grandpa. So we will construct a block import for each of them and wrap one with the other. The end of the block import pipeline is always the client, which contains the underlying database of imported blocks.

We begin by creating the block import for grandpa. In addition to the block import itself, we get back a grandpa_link. This link is a channel over which the block import can communicate with the background task that actually casts grandpa votes. The details of the grandpa protocol are beyond the scope of this recipe.

let (grandpa_block_import, grandpa_link) =
	sc_finality_grandpa::block_import(
		client.clone(), &(client.clone() as std::sync::Arc<_>), select_chain
	)?;

This same block import will be used as a justification import, so we clone it right after constructing it.

let justification_import = grandpa_block_import.clone();

With the grandpa block import created, we can now create the PoW block import. The PoW block import is the outermost layer of the block import onion and it wraps the grandpa block import.

let pow_block_import = sc_consensus_pow::PowBlockImport::new(
	grandpa_block_import,
	client.clone(),
	sha3pow::MinimalSha3Algorithm,
	0, // check inherents starting at block 0
	Some(select_chain),
	inherent_data_providers.clone(),
);

The Import Queue

With the block imports set up, we can proceed to creating the import queue. We make it using PoW's import_queue helper function. Notice that it requires the entire block import pipeline, which we refer to as pow_block_import because PoW is the outermost layer.

let import_queue = sc_consensus_pow::import_queue(
	Box::new(pow_block_import),
	Some(Box::new(justification_import)),
	None,
	sha3pow::MinimalSha3Algorithm,
	inherent_data_providers.clone(),
	spawn_task_handle,
)?;

The Finality Proof Provider

Occasionally in the operation of a blockchain, other nodes will contact our node asking for proof that a particular block is finalized. To respond to these requests, we include a finality proof provider.

.with_finality_proof_provider(|client, backend| {
	let provider = client as Arc<dyn StorageAndProofProvider<_, _>>;
	Ok(Arc::new(GrandpaFinalityProofProvider::new(backend, provider)) as _)
})?

Spawning the PoW Authorship Task

Any node that is acting as an authority, typically called a "miner" in the PoW context, must run a mining task in another thread.

sc_consensus_pow::start_mine(
	Box::new(block_import),
	client,
	MinimalSha3Algorithm,
	proposer,
	None, // No preruntime digests needed for PoW or Grandpa
	500, // Rounds
	service.network(),
	std::time::Duration::new(2, 0),
	Some(select_chain),
	inherent_data_providers.clone(),
	can_author_with,
);

The use of a separate thread for block authorship is unlike other Substrate-based authorship tasks which are typically run as async futures. Because mining is a CPU intensive process, it is necessary to provide a separate thread or else the mining task would run continually and other tasks such as transaction processing, gossiping, and peer discovery would be starved for CPU.

Spawning the Grandpa Task

Grandpa is not CPU intensive, so we will use a standard async worker to listen to and cast grandpa votes. We begin by creating a grandpa Config.

let grandpa_config = sc_finality_grandpa::Config {
	gossip_duration: Duration::from_millis(333),
	justification_period: 512,
	name: Some(name),
	observer_enabled: false,
	keystore,
	is_authority: role.is_network_authority(),
};

We can then use this config to create an instance of GrandpaParams.

let grandpa_config = sc_finality_grandpa::GrandpaParams {
	config: grandpa_config,
	link: grandpa_link,
	network: service.network(),
	inherent_data_providers: inherent_data_providers.clone(),
	telemetry_on_connect: Some(service.telemetry_on_connect_stream()),
	voting_rule: sc_finality_grandpa::VotingRulesBuilder::default().build(),
	prometheus_registry: service.prometheus_registry(),
};

With the parameters established, we can now create and spawn the authorship future.

service.spawn_essential_task(
	"grandpa-voter",
	sc_finality_grandpa::run_grandpa_voter(grandpa_config)?
);

Disabled Grandpa

Most networks will contain many full nodes that are not Grandpa authorities. When Grandpa is present in the network, we still need to tell the node how to interpret grandpa-related messages it may receive (just ignore them).

sc_finality_grandpa::setup_disabled_grandpa(
	service.client(),
	&inherent_data_providers,
	service.network(),
)?;

Constraints on the Runtime

Runtime APIs

Grandpa relies on getting its authority sets from the runtime via the GrandpaAPI. So trying to build this node with a runtime that does not provide this API will fail to compile. For that reason, we have included the dedicated minimal-grandpa-runtime.

The opposite is not true, however. A node that does not require grandpa may use the minimal-grandpa-runtime successfully. The unused GrandpaAPI will remain as a harmless vestige in the runtime.

Manual Seal

nodes/manual-seal

This recipe demonstrates a Substrate node using the Manual Seal consensus. Unlike the other consensus engines included with Substrate, manual seal does not create blocks on a regular basis. Rather, it waits for an RPC call telling it to create a block.

Using Manual Seal

Before we explore the code, let's begin by seeing how to use the manual-seal node. Build and start the node in the usual way.

cargo build --release -p manual-seal
./target/release/manual-seal

Manually Sealing Blocks

Once your node is running, you will see that it just sits there idly. It will accept transactions to the pool, but it will not author blocks on its own. In manual seal, the node does not author a block until we explicitly tell it to. We can tell it to author a block by calling the engine_createBlock RPC.

$ curl http://localhost:9933 -H "Content-Type:application/json;charset=utf-8" -d   '{
     "jsonrpc":"2.0",
      "id":1,
      "method":"engine_createBlock",
      "params": [true, false, null]
    }'

This call takes three parameters, each of which are worth exploring.

Create Empty

create_empty is a Boolean value indicating whether empty blocks may be created. Setting create_empty to true does not mean that an empty block will necessarily be created. Rather, it means that the engine should go ahead and create a block even if no transactions are present. If transactions are present in the queue, they will be included regardless of create_empty's value.

Finalize

finalize is a Boolean indicating whether the block (and its ancestors, recursively) should be finalized after creation. Manually controlling finality is interesting, but also dangerous. If you attempt to author and finalize a block that does not build on the best finalized chain, the block will not be imported. If you finalize one block in one node, and a conflicting block in another node, you will cause a safety violation when the nodes synchronize.

Parent Hash

parent_hash is an optional hash of a block to use as a parent. To set the parent, use the format "0x0e0626477621754200486f323e3858cd5f28fcbe52c69b2581aecb622e384764". To omit the parent, use null. When the parent is omitted the block is built on the current best block. Manually specifying the parent is useful for constructing fork scenarios and demonstrating chain reorganizations.
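
For reference, these three parameters map directly onto the fields of the EngineCommand::SealNewBlock message that the RPC handler forwards to the authorship task (the same command appears again in the combined-seal section below). The sender field is filled in by the RPC handler itself with a channel for reporting the result back to the caller.

// How the RPC parameters [true, false, null] translate into a sealing command.
rpc::EngineCommand::SealNewBlock {
	create_empty: true,
	finalize: false,
	parent_hash: None, // or Some(hash) to build on an explicit parent
	sender: None,      // the RPC handler supplies a response channel here
}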

Manually Finalizing Blocks

In addition to finalizing blocks while creating them, they can be finalized later by using the second provided RPC call, engine_finalizeBlock.

$ curl http://localhost:9933 -H "Content-Type:application/json;charset=utf-8" -d   '{
     "jsonrpc":"2.0",
      "id":1,
      "method":"engine_finalizeBlock",
      "params": ["0x0e0626477621754200486f323e3858cd5f28fcbe52c69b2581aecb622e384764", null]
    }'

The two parameters are:

  • The hash of the block to finalize.
  • An optional Justification: extra data (such as a set of GRANDPA finality votes) that attests to the block's finality for other nodes. In manual seal you will typically pass null.

Building the Service

So far we've learned how to use the manual seal node and why it might be useful. Let's now turn our attention to how the service is built in the node's src/service.rs file.

The Import Queue

We begin by creating a manual-seal import queue. This process is identical to creating the import queue used in the Kitchen Node. It is also similar to, but simpler than, the basic-pow import queue.

.with_import_queue(|_config, client, _select_chain, _transaction_pool| {
	Ok(sc_consensus_manual_seal::import_queue::<_, sc_client_db::Backend<_>>(Box::new(client)))
})?;

What about the Light Client?

The light client is not yet supported in this node, but it likely will be in the future (see issue #238). Because this node will typically be used for learning, experimenting, and testing in a single-node environment, this restriction should not cause many problems. For now we mark it as unimplemented!.

/// Builds a new service for a light client.
pub fn new_light(_config: Configuration) -> Result<impl AbstractService, ServiceError>
{
	unimplemented!("No light client for manual seal");

	// This needs to be here or it won't compile.
	#[allow(unreachable_code)]
	new_full(_config, false)
}

Because the return type of this function contains impl AbstractService, Rust's typechecker is unable to infer the concrete type. We give it a hand by calling new_full at the end, but don't worry, this code will never actually be executed. unimplemented! will panic first.

The Manual Seal RPC

Because the node runs in manual seal mode, we need to wire up the RPC commands that we explored earlier. This process is nearly identical to the one described in the custom rpc recipe.

As prep work, we make a type alias,

type RpcExtension = jsonrpc_core::IoHandler<sc_rpc::Metadata>;

Next we create a channel over which the rpc handler and the authorship task can communicate with one another. The RPC handler will send messages asking to create or finalize a block, and the authorship task will receive those messages and act on them.

// channel for the rpc handler to communicate with the authorship task.
let (command_sink, commands_stream) = futures::channel::mpsc::channel(1000);
let service = builder
	// manual-seal relies on receiving sealing requests aka EngineCommands over rpc.
	.with_rpc_extensions(|_| -> Result<RpcExtension, _> {
		let mut io = jsonrpc_core::IoHandler::default();
		io.extend_with(
			// We provide the rpc handler with the sending end of the channel to allow the rpc
			// send EngineCommands to the background block authorship task.
			rpc::ManualSealApi::to_delegate(rpc::ManualSeal::new(command_sink)),
		);
		Ok(io)
	})?
	.build()?;

The Authorship Task

As with every authoring engine, manual seal needs to be run as an async authoring task. Here we provide the receiving end of the channel we created earlier.

// Background authorship future.
let authorship_future = manual_seal::run_manual_seal(
		Box::new(service.client()),
		proposer,
		service.client().clone(),
		service.transaction_pool().pool().clone(),
		commands_stream,
		service.select_chain().unwrap(),
		inherent_data_providers
	);

With the future created, we can now kick it off using the service's spawn_essential_task method.

// we spawn the future on a background thread managed by service.
service.spawn_essential_task("manual-seal", authorship_future);

Combining Instant Seal with Manual Seal

It is possible to combine the manual seal of the node we built above with the functionality of the Kitchen Node's instant seal to get the best of both worlds. This configuration may be desirable in development and testing environments. We can use the normal behavior of instant seal to create blocks any time a transaction is imported into the pool, and we can still advance the block number on demand by manually sealing empty blocks over RPC. The functionality may be familiar to developers of Ethereum smart contracts who have used ganache-cli.

Implementation

In the same directory as the manual seal node is a file called combined_service.rs. This file contains a modified version of the service.rs file we just looked at in the section above. The modifications are numbered and begin at line 85 in the source.

let pool = service.transaction_pool().pool().clone();

The first step is to take a handle to the transaction pool that will be shared between the pool_stream (which receives an event whenever a new transaction is imported) and the service builder.

let pool_stream = pool
	.validated_pool()
	.import_notification_stream()
	.map(|_| {
		// For every new transaction, create an `EngineCommand` that will seal a new block.
		rpc::EngineCommand::SealNewBlock {
			create_empty: false,
			finalize: false,
			parent_hash: None,
			sender: None,
		}
	});

Next we implement instant seal just as it's implemented under the covers in the call to run_instant_seal. Namely, we make sure that for every new transaction notification we submit an EngineCommand to seal a new block.

let combined_stream = futures::stream::select(commands_stream, pool_stream);

We combine the streams using the select utility, which yields events from either of the streams we pass to it. In this case, we're merging the notifications from the manual seal stream and the instant seal stream.

let authorship_future = manual_seal::run_manual_seal(
	Box::new(service.client()),
	proposer,
	service.client(), // 4) vvvvv
	pool,             // <- Use the same pool that we used to get `pool_stream`.
	combined_stream,  // <- Here we place the combined streams.
	service.select_chain().unwrap(),
	inherent_data_providers,
);

Finally we initialize the authorship_future with the combined streams.

In order to run this variant of the node you will need to swap two pairs of lines and rebuild the node. In command.rs, comment out the line that reads use crate::service; and uncomment use crate::combined_service as service;. In main.rs, comment out mod service; and uncomment mod combined_service;. Now you can rebuild the node and verify that it seals blocks using the manual method and the instant method together.

Kitchen Node (Instant Seal)

nodes/kitchen-node

This recipe demonstrates a general purpose Substrate node that supports most of the recipes' runtimes, and uses Instant Seal consensus.

The kitchen node serves as the first point of entry for most aspiring chefs when they first encounter the recipes. By default it builds with the super-runtime, but it can be used with most of the runtimes in the recipes. Changing the runtime is described below. It features the instant seal consensus which is perfect for testing and iterating on a runtime.

Installing a Runtime

Cargo Dependency

The Cargo.toml file specifies the runtime as a dependency. The file imports the super-runtime, and has dependencies on other runtimes commented out.

# Common runtime configured with most Recipes pallets.
runtime = { package = "super-runtime", path = "../../runtimes/super-runtime" }

# Runtime with custom weight and fee calculation.
# runtime = { package = "weight-fee-runtime", path = "../../runtimes/weight-fee-runtime"}

# Runtime with off-chain worker enabled.
# To use this runtime, compile the node with `ocw` feature enabled,
#   `cargo build --release --features ocw`.
# runtime = { package = "ocw-runtime", path = "../../runtimes/ocw-runtime" }

# Runtime with custom runtime-api (custom API only used in rpc-node)
#runtime = { package = "api-runtime", path = "../../runtimes/api-runtime" }

Installing a different runtime in the node is just a matter of commenting out the super-runtime line, and enabling another one. Try the weight-fee runtime for example. Of course cargo will complain if you try to import two crates under the name runtime.

It is worth noting that this node does not work with all of the recipes' runtimes. In particular, it is not compatible with the babe-grandpa runtime. That runtime uses the babe pallet which requires a node that will include a special PreRuntime DigestItem.

Building a Service with the Runtime

With a runtime of our choosing listed among our dependencies, we can provide the runtime to the ServiceBuilder. The ServiceBuilder is responsible for assembling all of the necessary pieces that a node will need, and creating a Substrate Service which will manage the communication between them.

We begin by invoking the native_executor_instance! macro. This creates an executor which is responsible for executing transactions in the runtime and determining whether to run the native or Wasm version of the runtime.

native_executor_instance!(
	pub Executor,
	runtime::api::dispatch,
	runtime::native_version,
);

Finally, we create a new ServiceBuilder for a full node. (The $ in the syntax is because we are in a macro definition.)

let builder = sc_service::ServiceBuilder::new_full::<
	runtime::opaque::Block, runtime::RuntimeApi, crate::service::Executor
>($config)?
// --snip--

Instant Seal Consensus

The instant seal consensus engine, and its cousin the manual seal consensus engine, are both included in the same sc-consensus-manual-seal crate. The recipes include a dedicated recipe on using manual seal. Instant seal is a very convenient tool for when you are developing or experimenting with a runtime. The consensus engine simply authors a new block whenever a new transaction is available in the queue. This is similar to Truffle Suite's Ganache in the Ethereum ecosystem, but without the UI.

The Cargo Dependencies

Installing the instant seal engine requires three dependencies, whereas the runtime required only one.

sc-consensus = '0.8.0-rc3'
sc-consensus-manual-seal = '0.8.0-rc3'
sp-consensus = '0.8.0-rc3'

The Proposer

We begin by creating a Proposer which will be responsible for proposing blocks in the chain.

let proposer = sc_basic_authorship::ProposerFactory::new(
	service.client().clone(),
	service.transaction_pool(),
);

The Import Queue

Next we make a manual-seal import queue. This process is identical to creating the import queue used in the Manual Seal Node. It is also similar to, but simpler than, the basic-pow import queue.

.with_import_queue(|_config, client, _select_chain, _transaction_pool| {
	Ok(sc_consensus_manual_seal::import_queue::<_, sc_client_db::Backend<_>>(Box::new(client)))
})?;

The Authorship Task

As with every authoring engine, instant seal needs to be run as an async authoring task.

let authorship_future = sc_consensus_manual_seal::run_instant_seal(
	Box::new(service.client()),
	proposer,
	service.client().clone(),
	service.transaction_pool().pool().clone(),
	service.select_chain().ok_or(ServiceError::SelectChainRequired)?,
	inherent_data_providers
);

With the future created, we can now kick it off using the service's spawn_essential_task method.

service.spawn_essential_task("instant-seal", authorship_future);

What about the Light Client?

The light client is not yet supported in this node, but it likely will be in the future (see issue #238). Because this node will typically be used for learning, experimenting, and testing in a single-node environment, this restriction should not cause many problems. For now we mark it as unimplemented!.

/// Builds a new service for a light client.
pub fn new_light(_config: Configuration) -> Result<impl AbstractService, ServiceError>
{
	unimplemented!("No light client for manual seal");

	// This needs to be here or it won't compile.
	#[allow(unreachable_code)]
	new_full(_config, false)
}

BABE and GRANDPA Node

nodes/babe-grandpa-node

The babe-grandpa-node uses the BABE Proof of Authority consensus engine to determine who may author blocks, and the GRANDPA finality gadget to provide deterministic finality to past blocks. This is the same design used in Polkadot. Understanding this recipe requires familiarity with Substrate's block import pipeline.

In this recipe we will learn about:

The Block Import Pipeline

The babe-grandpa node's block import pipeline will have three layers. The innermost layer is the Substrate Client, as always. We will wrap the client with a GrandpaBlockImport, and wrap that with a BabeBlockImport.

We begin by creating the block import for GRANDPA. In addition to the block import itself, we get back a grandpa_link. This link is a channel over which the block import can communicate with the background task that actually casts GRANDPA votes. The details of the GRANDPA protocol are beyond the scope of this recipe.

let (grandpa_block_import, grandpa_link) =
	sc_finality_grandpa::block_import(
		client.clone(), &(client.clone() as std::sync::Arc<_>), select_chain
	)?;

In addition to actual blocks, this same block import will be used to import Justifications, so we clone it right after constructing it.

let justification_import = grandpa_block_import.clone();

With the GRANDPA block import created, we can now create the BABE block import. The BABE block import is the outer-most layer of the block import onion and it wraps the GRANDPA block import.

let (babe_block_import, babe_link) = sc_consensus_babe::block_import(
	sc_consensus_babe::Config::get_or_compute(&*client)?,
	grandpa_block_import,
	client.clone(),
)?;

Again we are given back a BABE link which will be used to communicate with the import queue and background authoring worker.

The Import Queue

With the block import pipeline setup, we can proceed to creating the import queue which will feed blocks from the network into the import pipeline. We make it using BABE's import_queue helper function. Notice that it requires the BABE link, and the entire block import pipeline which we refer to as babe_block_import because BABE is the outermost layer.

let import_queue = sc_consensus_babe::import_queue(
	babe_link.clone(),
	babe_block_import.clone(),
	Some(Box::new(justification_import)),
	None,
	client,
	inherent_data_providers.clone(),
)?;

The Finality Proof Provider

Occasionally in the operation of a blockchain, other nodes will contact our node asking for proof that a particular block is finalized. To respond to these requests, we include a finality proof provider.

.with_finality_proof_provider(|client, backend| {
	let provider = client as Arc<dyn StorageAndProofProvider<_, _>>;
	Ok(Arc::new(GrandpaFinalityProofProvider::new(backend, provider)) as _)
})?

Spawning the BABE Authorship Task

Any node that is acting as an authority and participating in BABE consensus must run an async authorship task. We begin by creating an instance of BabeParams.

let babe_config = sc_consensus_babe::BabeParams {
	keystore: service.keystore(),
	client,
	select_chain,
	env: proposer,
	block_import,
	sync_oracle: service.network(),
	inherent_data_providers: inherent_data_providers.clone(),
	force_authoring,
	babe_link,
	can_author_with,
};

With the parameters established, we can now create and spawn the authorship future.

let babe = sc_consensus_babe::start_babe(babe_config)?;
service.spawn_essential_task("babe", babe);

Spawning the GRANDPA Task

Just as we needed an async worker to author blocks with BABE, we need an async worker to listen to and cast GRANDPA votes. Again, we begin by creating an instance of GrandpaParams.

let grandpa_config = sc_finality_grandpa::GrandpaParams {
	config: grandpa_config,
	link: grandpa_link,
	network: service.network(),
	inherent_data_providers: inherent_data_providers.clone(),
	telemetry_on_connect: Some(service.telemetry_on_connect_stream()),
	voting_rule: sc_finality_grandpa::VotingRulesBuilder::default().build(),
	prometheus_registry: service.prometheus_registry(),
};

With the parameters established, we can now create and spawn the authorship future.

service.spawn_essential_task(
	"grandpa-voter",
	sc_finality_grandpa::run_grandpa_voter(grandpa_config)?
);

Disabled GRANDPA

Proof of Authority networks generally contain many full nodes that are not authorities. When GRANDPA is present in the network, we still need to tell the node how to interpret GRANDPA-related messages it may receive (just ignore them) and ensure that the correct inherents are still included in blocks in the case that the node is an authority in BABE but not GRANDPA.

sc_finality_grandpa::setup_disabled_grandpa(
	service.client(),
	&inherent_data_providers,
	service.network(),
)?;

Constraints on the Runtime

Runtime APIs

Both BABE and GRANDPA rely on getting their authority sets from the runtime via the BabeAPI and the GrandpaAPI. So trying to build this node with a runtime that does not provide these APIs will fail to compile.

Pre-Runtime Digests

Just as we cannot use this node with a runtime that does not provide the appropriate runtime APIs, we also cannot use a runtime designed for this node with different consensus engines.

Because BABE is a slot-based consensus engine, it must inform the runtime which slot each block was intended for. To do this, it uses a technique known as a pre-runtime digest. It has two kinds, PrimaryPreDigest and SecondaryPlainPreDigest. The BABE authorship task automatically inserts these digest items in each block it authors.

Because the runtime needs to interpret these pre-runtime digests, they are not optional. That means runtimes that expect the pre-digests cannot be used, unmodified, in nodes that don't provide the pre-digests. Unlike other runtimes in the Recipes where runtimes can be freely swapped between nodes, the babe-grandpa-runtime can only be used in a node that is actually running BABE

Currency Types

pallets/lockable-currency, pallets/reservable-currency, pallets/currency-imbalances

Just Plain Currency

To use a balance type in the runtime, import the Currency trait from frame_support.

use frame_support::traits::Currency;

The Currency trait provides an abstraction over a fungible assets system. To use such a fungible asset from your pallet, include an associated type with the Currency trait bound in your pallet's configuration trait.

pub trait Trait: system::Trait {
	type Currency: Currency<Self::AccountId>;
}

Defining an associated type with this trait bound allows this pallet to access the provided methods of Currency. For example, it is straightforward to check the total issuance of the system:

// in decl_module block
T::Currency::total_issuance();

As promised, it is also possible to type alias a balances type for use in the runtime:

type BalanceOf<T> = <<T as Trait>::Currency as Currency<<T as system::Trait>::AccountId>>::Balance;

This new BalanceOf<T> type satisfies the type constraints of Self::Balance for the provided methods of Currency. This means that this type can be used for transfer, minting, and much more.
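
As a minimal sketch (not taken from the kitchen pallets), a dispatchable that transfers funds using the aliased balance type might look like the following. ExistenceRequirement comes from frame_support::traits.

pub fn transfer_funds(origin, dest: T::AccountId, amount: BalanceOf<T>) -> DispatchResult {
	let sender = ensure_signed(origin)?;

	// Move `amount` from the caller to `dest`, allowing the sender's account to be
	// reaped if its balance drops below the existential deposit.
	T::Currency::transfer(&sender, &dest, amount, ExistenceRequirement::AllowDeath)?;
	Ok(())
}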

Reservable Currency

Substrate's Treasury pallet uses the Currency type for bonding spending proposals. To reserve and unreserve funds for bonding, treasury uses the ReservableCurrency trait. The import and associated type declaration follow convention.

use frame_support::traits::{Currency, ReservableCurrency};

pub trait Trait: system::Trait {
	type Currency: Currency<Self::AccountId> + ReservableCurrency<Self::AccountId>;
}

To reserve or unreserve some quantity of funds, it is sufficient to invoke reserve and unreserve respectively.

pub fn reserve_funds(origin, amount: BalanceOf<T>) -> DispatchResult {
	let locker = ensure_signed(origin)?;

	T::Currency::reserve(&locker, amount)
			.map_err(|_| "locker can't afford to lock the amount requested")?;

	let now = <system::Module<T>>::block_number();

	Self::deposit_event(RawEvent::LockFunds(locker, amount, now));
	Ok(())
}
pub fn unreserve_funds(origin, amount: BalanceOf<T>) -> DispatchResult {
	let unlocker = ensure_signed(origin)?;

	T::Currency::unreserve(&unlocker, amount);
	// ReservableCurrency::unreserve does not fail (it unreserves as much as possible, up to `amount`)

	let now = <system::Module<T>>::block_number();

	Self::deposit_event(RawEvent::UnlockFunds(unlocker, amount, now));
	Ok(())
}

Lockable Currency

Substrate's Staking pallet similarly uses the LockableCurrency trait for more nuanced handling of capital locking based on time increments. This trait can be very useful in the context of economic systems that enforce accountability by collateralizing fungible resources. Import this trait in the usual way.

use frame_support::traits::{LockIdentifier, LockableCurrency};

To use LockableCurrency, it is necessary to define a LockIdentifier.

const EXAMPLE_ID: LockIdentifier = *b"example ";

By using this EXAMPLE_ID, it is straightforward to define logic within the runtime to schedule locking, unlocking, and extending existing locks.

fn lock_capital(origin, amount: BalanceOf<T>) -> DispatchResult {
	let user = ensure_signed(origin)?;

	T::Currency::set_lock(
		EXAMPLE_ID,
		&user,
		amount,
		WithdrawReasons::except(WithdrawReason::TransactionPayment),
	);

	Self::deposit_event(RawEvent::Locked(user, amount));
	Ok(())
}
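
Releasing a lock later is just as direct. The sketch below (the dispatchable name and event variant are assumptions, not necessarily the kitchen's exact code) removes the lock identified by EXAMPLE_ID; LockableCurrency also provides extend_lock for extending an existing lock.

fn unlock_capital(origin) -> DispatchResult {
	let user = ensure_signed(origin)?;

	// Remove the lock previously placed under EXAMPLE_ID for this account.
	T::Currency::remove_lock(EXAMPLE_ID, &user);

	Self::deposit_event(RawEvent::Unlocked(user));
	Ok(())
}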

Imbalances

Functions that alter balances return an object of the Imbalance type to express how much account balances have been altered in aggregate. This is useful in the context of state transitions that adjust the total supply of the Currency type in question.

To manage this supply adjustment, the OnUnbalanced handler is often used. An example might look something like

pub fn reward_funds(origin, to_reward: T::AccountId, reward: BalanceOf<T>) {
	let _ = ensure_signed(origin)?;

	let mut total_imbalance = <PositiveImbalanceOf<T>>::zero();

	let r = T::Currency::deposit_into_existing(&to_reward, reward).ok();
	total_imbalance.maybe_subsume(r);
	T::Reward::on_unbalanced(total_imbalance);

	let now = <system::Module<T>>::block_number();
	Self::deposit_event(RawEvent::RewardFunds(to_reward, reward, now));
}
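
The PositiveImbalanceOf<T> type used above is an alias in the same spirit as BalanceOf<T>, and T::Reward is an OnUnbalanced handler declared in the configuration trait. A rough sketch of how they might be declared follows; consult the kitchen pallet for the exact declarations.

type PositiveImbalanceOf<T> =
	<<T as Trait>::Currency as Currency<<T as system::Trait>::AccountId>>::PositiveImbalance;

pub trait Trait: system::Trait {
	type Currency: Currency<Self::AccountId>;

	/// Handler for the imbalance created when new funds are deposited as rewards
	type Reward: OnUnbalanced<PositiveImbalanceOf<Self>>;
}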

Takeaway

The way we represent value in the runtime dictates both the security and flexibility of the underlying transactional system. Likewise, it is convenient to be able to take advantage of Rust's flexible trait system when building systems intended to rethink how we exchange information and value 🚀

Generating Randomness

pallets/randomness

Randomness is useful in computer programs for everything from gambling, to generating DNA for digital kitties, to selecting block authors. Randomness is hard to come by in deterministic computers as explained at random.org. This is particularly true in the context of a blockchain when all the nodes in the network must agree on the state of the chain. Some techniques have been developed to address this problem including RANDAO and Verifiable Random Functions. Substrate abstracts the implementation of a randomness source using the Randomness trait, and provides a few implementations. This recipe will demonstrate using the Randomness trait and two concrete implementations.

Disclaimer

All of the randomness sources described here have limitations on their usefulness and security. This recipe shows how to use these randomness sources and makes an effort to explain their trade-offs. However, the author of this recipe is a blockchain chef, not a trained cryptographer. It is your responsibility to understand the security implications of using any of the techniques described in this recipe, before putting them to use. When in doubt, consult a trustworthy cryptographer.

The resources linked at the end of this recipe may be helpful in assessing the security and limitations of these randomness sources.

Randomness Trait

The Randomness trait provides two methods, random_seed and random, both of which provide a pseudo-random value of the type specified in the trait's type parameter.

random_seed

The random_seed method takes no parameters and returns a random seed which changes once per block. If you call this method twice in the same block you will get the same result. This method is typically not as useful as its counterpart.

random

The random method takes a byte array, &[u8], known as the subject, and uses the subject's bytes along with the random seed described in the previous section to calculate a final random value. Using a subject in this way allows a pallet (or multiple pallets) to fetch randomness multiple times in the same block and get different results. The subject does not add entropy or security to the generation process; it merely prevents each call from returning identical values.

Common values to use for a subject include:

  • The block number
  • The caller's accountId
  • A Nonce
  • A pallet-specific identifier
  • A tuple containing several of the above

To bring a randomness source into scope, we include it in our configuration trait with the appropriate trait bound. This pallet, being a demo, will use two different sources. Using multiple sources is not necessary in practice.

pub trait Trait: system::Trait {
	type Event: From<Event> + Into<<Self as system::Trait>::Event>;

	type CollectiveFlipRandomnessSource: Randomness<H256>;

	type BabeRandomnessSource: Randomness<H256>;
}

We've provided the Output type as H256.

Collective Coin Flipping

Substrate's Randomness Collective Flip pallet uses a safe mixing algorithm to generate randomness using the entropy of previous block hashes. Because it is dependent on previous blocks, it can take many blocks for the seed to change.

A naive randomness source based on block hashes would take the hash of the previous block and use it as a random seed. Such a technique has the significant disadvantage that the block author can preview the random seed and choose to discard the block, authoring instead a slightly modified block with a more desirable hash. This pallet is subject to similar manipulation by the previous 81 block authors rather than just the previous one.

Calling the randomness source from rust code is straightforward.

let random_seed = T::CollectiveFlipRandomnessSource::random_seed();
let random_result = T::CollectiveFlipRandomnessSource::random(&subject);
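
Here subject is just a byte array built from whatever context you choose. A small sketch (assumed, not from the kitchen) that SCALE-encodes a few of the values listed earlier; it assumes use codec::Encode; is in scope and that who is the caller's AccountId.

let nonce: u32 = 7; // e.g. a per-call nonce tracked by the pallet
let subject = (<system::Module<T>>::block_number(), &who, nonce).encode();
let random_result = T::CollectiveFlipRandomnessSource::random(&subject);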

Although it may seem harmless, you should not hash the result of the randomness provided by the collective flip pallet. Secure hash functions satisfy the avalanche effect, which means that each bit of input is equally likely to affect a given bit of the output. Hashing will negate the low-influence property provided by the pallet.

Babe VRF Output

Substrate's Babe pallet, which is primarily responsible for managing validator rotation in Babe consensus, also collects the VRF outputs that Babe validators publish to demonstrate that they are permitted to author a block. These VRF outputs can be used to provide a random seed.

Because we are accessing the randomness via the Randomness trait, the calls look the same as before.

let random_seed = T::BabeRandomnessSource::random_seed();
let random_result = T::BabeRandomnessSource::random(&subject);

In production networks, Babe VRF output is preferable to Collective Flip. Collective Flip provides essentially no real security.

Down the Rabbit Hole

As mentioned previously, there are many tradeoffs and security concerns to be aware of when using these randomness sources. If you'd like to get into the research, here are some jumping off points.

Execution Schedule

pallets/execution-schedule

Blockchain-native mechanisms may use the block number as a proxy for time to schedule task execution. Although scheduled task execution through council governance is minimal in this example, it is not too hard to imagine tasks taking the form of subscription payments, grant payouts, or any other scheduled task execution.

This pallet demonstrates a permissioned task scheduler, in which members of a council: Vec<AccountId> can schedule tasks, which are stored in a vector in the runtime storage (decl_storage).

Members of the council vote on the tasks with SignalQuota voting power which is doled out equally to every member every ExecutionFrequency number of blocks.

Tasks with support are prioritized during execution every ExecutionFrequency number of blocks. More specifically, every ExecutionFrequency number of blocks, a maximum of TaskLimit number of tasks are executed. The priority of tasks is decided by the signalling of the council members.

The module's Trait:

// other type aliases
pub type PriorityScore = u32;

pub trait Trait: system::Trait {
    /// Overarching event type
    type Event: From<Event<Self>> + Into<<Self as system::Trait>::Event>;

    /// Quota for members to signal task priority every ExecutionFrequency
    type SignalQuota: Get<PriorityScore>;

    /// The frequency of batch executions for tasks (in `on_finalize`)
    type ExecutionFrequency: Get<Self::BlockNumber>;

    /// The maximum number of tasks that can be approved in an `ExecutionFrequency` period
    type TaskLimit: Get<PriorityScore>;
}

The task object is a struct,

pub type TaskId = Vec<u8>;
pub type PriorityScore = u32;

pub struct Task<BlockNumber> {
    id: TaskId,
    score: PriorityScore,
    proposed_at: BlockNumber,
}

The runtime method for proposing a task emits an event with the expected execution time. The expected execution time was first calculated naively by iterating from the current block number until reaching a block number divisible by T::ExecutionFrequency::get(). While this is correct, it is clearly not the most efficient way to find the next block in which tasks are executed.

A more complex engine for predicting task execution time may run off-chain instead of in a runtime method.

Before adding a more clever runtime method to estimate the execution_time, we start with a naive implementation that iterates the block number until it is divisible by ExecutionFrequency (which implies execution in on_finalize of that block).

fn naive_execution_estimate(now: T::BlockNumber) -> T::BlockNumber {
    // the frequency with which tasks are batch executed
    let batch_frequency = T::ExecutionFrequency::get();
    let mut expected_execution_time = now;
    loop {
        // the expected execution time is the next block number divisible by `ExecutionFrequency`
        if (expected_execution_time % batch_frequency).is_zero() {
            break;
        } else {
            expected_execution_time += 1.into();
        }
    }
    expected_execution_time
}

This naive implementation unsurprisingly worked...

#[test]
fn naive_estimator_works() {
    // should use quickcheck to cover entire range of checks
    ExtBuilder::default()
        .execution_frequency(8)
        .build()
        .execute_with(|| {
            let current_block = 5u64;
            assert_eq!(
                ExecutionSchedule::naive_execution_estimate(current_block.into()),
                8u64.into()
            );
            let next_block = 67u64;
            assert_eq!(
                ExecutionSchedule::naive_execution_estimate(next_block.into()),
                72u64.into()
            );
        })
}

...but it is obvious that there is a better way. If execution is scheduled every constant ExecutionFrequency number of blocks, then it should be straightforward to calculate the next execution block without this slow iterate and check modulus method. My first attempt at a better implementation of execution_estimate(n: T::BlockNumber) -> T::BlockNumber was

fn execution_estimate(n: T::BlockNumber) -> T::BlockNumber {
    let batch_frequency = T::ExecutionFrequency::get();
    let miss = n % batch_frequency;
    (n + miss) - batch_frequency
}

The above code failed the estimator_works unit test

#[test]
fn estimator_works() {
    ExtBuilder::default()
        .execution_frequency(8)
        .build()
        .execute_with(|| {
            let current_block = 5u64;
            let next_block = 67u64;
            assert_eq!(
                ExecutionSchedule::execution_estimate(current_block.into()),
                8u64.into()
            );
            assert_eq!(
                ExecutionSchedule::execution_estimate(next_block.into()),
                72u64.into()
            );
        })
}

The error helped me catch the logic mistake and change it to

fn execution_estimate(n: T::BlockNumber) -> T::BlockNumber {
    let batch_frequency = T::ExecutionFrequency::get();
    let miss = n % batch_frequency;
    n + (batch_frequency - miss)
}

This makes more sense. Current block number % T::ExecutionFrequency::get() is, by definition of modulus, the number of blocks by which the current block is past the last task execution. To return the next block at which task execution is scheduled, the estimator adds the difference between T::ExecutionFrequency::get() and that remainder. This logic is sound AND it passes the estimator_works() test.
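
For a concrete check: with ExecutionFrequency = 8 and n = 67, miss = 67 % 8 = 3, so the estimate is 67 + (8 - 3) = 72, matching the expected value in the unit test above.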

on_initialize updates vote data and round information

Each period of task proposals and voting is considered a round, expressed as RoundIndex: u32 such that the global round is stored in the runtime storage as Era.

pub type RoundIndex = u32;

decl_storage! {
    trait Store for Module<T: Trait> as ExecutionSchedule {
        Era get(fn era): RoundIndex;
    }
}

This storage value acts as a global counter of the round, which is also used as the prefix_key of a double_map that tracks each member's remaining voting power in the SignalBank runtime storage item. This map and the round counter are updated in the on_initialize hook.
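
The SignalBank double map itself might be declared along the following lines (a sketch; see the kitchen's execution-schedule pallet for the exact declaration).

decl_storage! {
    trait Store for Module<T: Trait> as ExecutionSchedule {
        /// Each council member's remaining signalling power, keyed by round
        SignalBank get(fn signal_bank):
            double_map hasher(blake2_128_concat) RoundIndex,
            hasher(blake2_128_concat) T::AccountId => PriorityScore;
    }
}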

// in on_initialize
let last_era = <Era>::get();
<SignalBank<T>>::remove_prefix(&last_era);
let next_era: RoundIndex = last_era + 1;
<Era>::put(next_era);
// continued in the next code block

The SignalBank tracks the signalling power of each member of the council. By using a double-map with the prefix as the round number, it is straightforward to perform batch removal of state related to signalling in the previous round.

<SignalBank<T>>::remove_prefix(&last_era);

In practice, this organization of logic works something like a ring buffer: on_initialize batch-deletes all signalling records from the previous round and, in the same code block, doles out an equal amount of voting power to every member for the next round.

// ...continuation of last code block
let signal_quota = T::SignalQuota::get();
<Council<T>>::get().into_iter().for_each(|member| {
    <SignalBank<T>>::insert(next_era, &member, signal_quota);
});

The aforementioned ring buffer is maintained in the on_initialize block. The maintenance code is kept in an if statement that limits its invocation to blocks x that follow blocks y for which y % ExecutionFrequency == 0.

This is a common way of only exercising expensive batch execution logic once every fixed number of blocks. Still, the condition is easy to get wrong. The first time I encountered the problem, I placed the following in the on_initialize if statement that controls the maintenance of the SignalBank and Era storage values,

// in on_initialize(n: T::BlockNumber)
if (n % (T::ExecutionFrequency::get() + 1.into())).is_zero() {
    //changing and repopulating of `Era` and `SignalBank`
}

I only noticed this mistake while testing whether eras progress as expected. Specifically, the following test failed

#[test]
    fn eras_change_correctly() {
    ExtBuilder::default()
        .execution_frequency(2)
        .build()
        .execute_with(|| {
            System::set_block_number(1);
            run_to_block(13);
            assert_eq!(ExecutionSchedule::era(), 6);
            run_to_block(32);
            assert_eq!(ExecutionSchedule::era(), 16);
        })
}

The test failed with an error message claiming that the first assert_eq! left side was 4 which does not equal 6. This error message caused me to inspect the if condition, which I realized should be changed to (the current implementation),

// in on_initialize(n: T::BlockNumber)
if ((n - 1.into()) % T::ExecutionFrequency::get()).is_zero() {
    //changing and repopulating of `Era` and `SignalBank`
}

With this change, the eras_change_correctly test passes.

on_finalize execution priority

  • this pattern of sorting the tasks in on_finalize is inspired by the scored-pool pallet which should be referenced

  • when we schedule and reprioritize elements in this way, order of execution becomes extremely important

  • we execute tasks in on_finalize when n % T::ExecutionFrequency == 0. We should also ensure that n != 0, but we assume this is the case. At most TaskLimit tasks are executed.

  • An improvement would be to also ensure that there is some minimum amount of score. It would be nice to write abstractions that have a more native sense of the collective voting power of all members

  • this lends itself to a follow-up off-chain workers example showing how such work fits between on_finalize of the last block and on_initialize of the next block

Tightly- and Loosely-Coupled Pallets

pallets/check-membership

The check-membership crate contains two pallets that solve the same problems in slightly different ways. Both pallets implement a single dispatchable function that can only be successfully executed by callers who are members of an access control list. The job of maintaining the access control list is abstracted away to another pallet. This pallet and the membership-managing pallet can be coupled in two different ways which are demonstrated by the tight and loose variants of the pallet.

Twin Pallets

Before we dive into the pallet code, let's talk a bit more about the structure of the crate in the pallets/check-membership directory. This directory is a single Rust crate that contains two pallets. The two pallets live in the pallets/check-membership/tight and pallets/check-membership/loose directories. In the crate's main lib.rs we simply export each of these variants of the pallet.

pub mod loose;
pub mod tight;

This allows us to demonstrate both techniques while keeping the closely related work in a single crate.

Controlling Access

While the primary learning objective of these twin pallets is understanding the way in which they are coupled to the membership-managing pallets, they also demonstrate the concept of an access control list, which we will investigate first.

It is often useful to designate some functions as permissioned and, therefore, accessible only to a defined group of users. In this pallet, we check that the caller of the check_membership function corresponds to a member of the permissioned set.

The loosely coupled variant looks like this.

/// Checks whether the caller is a member of the set of Account Ids provided by the
/// MembershipSource type. Emits an event if they are, and errors if not.
fn check_membership(origin) -> DispatchResult {
	let caller = ensure_signed(origin)?;

	// Get the members from the vec-set pallet
	let members = T::MembershipSource::accounts();

	// Check whether the caller is a member
	ensure!(members.contains(&caller), Error::<T>::NotAMember);

	// If the previous call didn't error, then the caller is a member, so emit the event
	Self::deposit_event(RawEvent::IsAMember(caller));
	Ok(())
}

Coupling Pallets

Each check-membership pallet actually contains very little logic. It has no storage of its own and a single extrinsic that does the membership checking. All of the heavy lifting is abstracted away to another pallet. There are two different ways that pallets can be coupled to one another and this section investigates both.

Tight Coupling

Tightly coupling pallets is more explicit than loosely coupling them. When you are writing a pallet that you want to tightly couple with some other pallet as a dependency, you explicitly specify the name of the pallet on which you depend as a trait bound on the configuration trait of the pallet you are writing. This is demonstrated in the tightly coupled variant of check-membership.

pub trait Trait: system::Trait + vec_set::Trait {
	// --snip--
}

This pallet, and all pallets, are tightly coupled to frame_system.

Supplying this trait bound means that the tightly coupled variant of check-membership pallet can only be installed in a runtime that also has the vec-set pallet installed. We also see the tight coupling in the pallet's Cargo.toml file, where vec-set is listed by name.

vec-set = { path = '../vec-set', default-features = false }

To actually get the set of members, we have access to the getter function declared in vec-set.

// Get the members from the vec-set pallet
let members = vec_set::Module::<T>::members();

While tightly coupling pallets is conceptually simple, it has the disadvantage that it depends on a specific implementation rather than an abstract interface. This makes the code more difficult to maintain over time and is generally frowned upon. The tightly coupled version of check-membership depends on exactly the vec-set pallet rather than a behavior such as managing a set of accounts.

Loose Coupling

Loose coupling solves the problem of coupling to a specific implementation. When loosely coupling to another pallet, you add an associated type to the pallet's configuration trait and ensure the supplied type implements the necessary behavior by specifying a trait bound.

pub trait Trait: system::Trait {
	// --snip--

	/// A type that will supply a set of members to check access control against
	type MembershipSource: AccountSet<AccountId = Self::AccountId>;
}

Many pallets throughout the ecosystem are coupled to a token through the Currency trait.

Having this associated type means that the loosely coupled variant of the check-membership pallet can be installed in any runtime that can supply it with a set of accounts to use as an access control list. The code for the AccountSet trait lives in traits/account-set/src/lib.rs and is quite short.

pub trait AccountSet {
	type AccountId;

	fn accounts() -> BTreeSet<Self::AccountId>;
}

We also see the loose coupling in the pallet's Cargo.toml file, where account-set is listed.

account-set = { path = '../../traits/account-set', default-features = false }

To actually get the set of members, we use the accounts method supplied by the trait.

// Get the members from the vec-set pallet
let members = T::MembershipSource::accounts();
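
To give a sense of how the pieces fit together at the runtime level, here is a sketch (names follow the recipes' conventions but are assumptions, not quoted code) of the vec-set pallet implementing AccountSet and a runtime supplying it as the MembershipSource.

// In the vec-set pallet: expose its members as an AccountSet.
impl<T: Trait> AccountSet for Module<T> {
	type AccountId = T::AccountId;

	fn accounts() -> BTreeSet<T::AccountId> {
		Self::members().into_iter().collect()
	}
}

// In the runtime: wire the loosely coupled pallet to vec-set.
impl check_membership::loose::Trait for Runtime {
	type Event = Event;
	type MembershipSource = VecSet; // the name given to vec-set in construct_runtime!
}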

Testing

Although the Rust compiler ensures safe memory management, it cannot formally verify the correctness of a program's logic. Fortunately, Rust also comes with great libraries and documentation for writing unit and integration tests. When you create a new project with Cargo, test scaffolding is automatically generated to simplify the developer experience. Basic testing concepts and syntax are covered in depth in Chapter 11 of the Rust Book.

There are also more rigorous testing approaches, ranging from mocking and fuzzing to formal verification. See quickcheck for an example of a property-based testing framework ported from Haskell to Rust.

Kitchen Pallets with Unit Tests

The following modules in the kitchen have partial unit test coverage

Cooking in the Kitchen (Running Tests)

To run the tests, clone the repo

$ git clone https://github.com/substrate-developer-hub/recipes

Enter the path to the pallet to be tested

$ cd pallets/<some-module>

For example, to test constant-config, used in Configurable Constants,

$ cd pallets/constant-config/
$ cargo test

Writing unit tests is one of the best ways to understand the code. Although unit tests are not comprehensive, they provide a first check to verify that the programmer's basic invariants are not violated in the presence of obvious, expected state changes.

Mock Runtime for Unit Testing

See the Testing page for a list of kitchen pallets with unit test coverage.

There are two main patterns on writing tests for pallets. We can put the tests:

  1. At the bottom of the pallet, place unit tests in a separate Rust module with a special compilation attribute:

    #[cfg(test)]
    mod tests {
    	// -- snip --
    }
    
  2. In a separate file called tests.rs inside src folder, and conditionally include tests inside the main lib.rs. At the top of the lib.rs

    #[cfg(test)]
    mod tests;
    

Now, to use the logic from the pallet under test, bring Module and Trait into scope.

use crate::{Module, Trait};

Create the Outer Environment for Mock Runtime

Before we create the mock runtime that will run our pallet's tests, we first need to create the outer environment for the runtime as follows:

use support::{impl_outer_event, impl_outer_origin, parameter_types};
use runtime_primitives::{Perbill, traits::{IdentityLookup, BlakeTwo256}, testing::Header};
use runtime_io;
use primitives::{H256};

// We define the outer `Origin` enum and `Event` enum.
// You may not be aware that these enums are created when writing the runtime/pallet;
//   it is because they are created through the `construct_runtime!` macro.
// Also, these are not standard Rust but the syntax expected when parsed inside
//   these macros.
impl_outer_origin! {
	pub enum Origin for TestRuntime {}
}

// -- If you want to test events, add the following. Otherwise, please ignore --
mod test_events {
	pub use crate::Event;
}

impl_outer_event! {
	pub enum TestEvent for TestRuntime {
		test_events,
		system<T>,
	}
}
// -- End: Code setup for testing events --

Define Mock Runtime and Implement Necessary Pallet Traits

Now, declare the mock runtime as a unit structure

#[derive(Clone, PartialEq, Eq, Debug)]
pub struct TestRuntime;

The derive macro attribute provides implementations of the Clone, PartialEq, Eq, Debug traits for the TestRuntime struct.

The mock runtime also needs to implement the tested pallet's Trait. If it is unnecessary to test the pallet's Event type, the type can be set to (). See further below for how to test the pallet's Event enum.

impl Trait for TestRuntime {
	type Event = ();
}

Next, we create a new type that wraps the mock TestRuntime in the pallet's Module.

pub type TestPallet = Module<TestRuntime>;

It may be helpful to read this as type aliasing our configured mock runtime to work with the pallet's Module, which is what is ultimately being tested.

In many cases, the pallet's Trait is further bound by system::Trait like:

pub trait Trait: system::Trait {
	type Event: From<Event<Self>> + Into<<Self as system::Trait>::Event>;
}

The mock runtime must inherit and define the system::Trait associated types. To do so, impl the system::Trait for TestRuntime with types created previously and imported from other crates.

#[derive(Clone, PartialEq, Eq, Debug)]
pub struct TestRuntime;

parameter_types! {
	pub const BlockHashCount: u64 = 250;
	pub const MaximumBlockWeight: u32 = 1024;
	pub const MaximumBlockLength: u32 = 2 * 1024;
	pub const AvailableBlockRatio: Perbill = Perbill::one();
}

// First, implement the system pallet's configuration trait for `TestRuntime`
impl system::Trait for TestRuntime {
	type Origin = Origin;
	type Index = u64;
	type Call = ();
	type BlockNumber = u64;
	type Hash = H256;
	type Hashing = BlakeTwo256;
	type AccountId = u64;
	type Lookup = IdentityLookup<Self::AccountId>;
	type Header = Header;
	// To test events, use `TestEvent`. Otherwise, use the commented line
	type Event = TestEvent;
	// type Event = ();
	type BlockHashCount = BlockHashCount;
	type MaximumBlockWeight = MaximumBlockWeight;
	type MaximumBlockLength = MaximumBlockLength;
	type AvailableBlockRatio = AvailableBlockRatio;
	type Version = ();
	type ModuleToIndex = ();
	type AccountData = ();
	type OnNewAccount = ();
	type OnKilledAccount = ();
}

// Then implement our own pallet's configuration trait for `TestRuntime`
impl Trait for TestRuntime {
	type Event = TestEvent;
}

// Create type aliases so we can make dispatched calls to these modules later.
pub type System = system::Module<TestRuntime>;
pub type TestPallet = Module<TestRuntime>;

With these, it is possible to use the aliased types in the unit tests. For example, the block number can be set with System::set_block_number

#[test]
fn add_emits_correct_event() {
	// ExtBuilder syntax is explained further below
	ExtBuilder::build().execute_with(|| {
		System::set_block_number(2);
		// some assert statements and HelloSubstrate calls
	})
}

Basic Test Environments

To build the test runtime environment, import runtime_io

use runtime_io;

In the Cargo.toml, this only needs to be imported under dev-dependencies since it is only used in the tests module. It also doesn't need to be feature gated in the std feature.

[dev-dependencies.sp-io]
default-features = false
version = '2.0.0-alpha.7'

There is more than one pattern for building a mock runtime environment for testing pallet logic. Two patterns are presented below. The latter is generally favored for reasons discussed in Custom Test Environment.

  • new_test_ext - consolidates all the logic for building the environment into a single public method, but is not very configurable (i.e. uses one fixed set of pallet constants)
  • ExtBuilder - define methods on the unit struct ExtBuilder to facilitate a flexible environment for tests (i.e. can reconfigure pallet constants in every test if necessary)

new_test_ext

pallets/smpl-treasury

In smpl-treasury, use the balances::GenesisConfig and the pallet's GenesisConfig::<TestRuntime> to set the balances of the test accounts and establish council membership in the returned test environment.

pub fn new_test_ext() -> runtime_io::TestExternalities {
	let mut t = system::GenesisConfig::default().build_storage::<TestRuntime>().unwrap();
	balances::GenesisConfig::<TestRuntime> {
		balances: vec![
			// members of council (can also be users)
			(1, 13),
			(2, 11),
			(3, 1),
			(4, 3),
			(5, 19),
			(6, 23),
			(7, 17),
			// users, not members of council
			(8, 1),
			(9, 22),
			(10, 46),
		],
		vesting: vec![],
	}.assimilate_storage(&mut t).unwrap();
	GenesisConfig::<TestRuntime>{
		council: vec![
			1,
			2,
			3,
			4,
			5,
			6,
			7,
		]
	}.assimilate_storage(&mut t).unwrap();
	t.into()
}

More specifically, this sets the AccountIds in the range of [1, 7] inclusive as the members of the council. This is expressed in the decl_storage block with the addition of an add_extra_genesis block,

add_extra_genesis {
	build(|config| {
		// ..other stuff..
		<Council<T>>::put(&config.council);
	});
}

To use new_test_ext in a runtime test, we call the method and call execute_with on the returned runtime_io::TestExternalities

#[test]
fn fake_test() {
	new_test_ext().execute_with(|| {
		// test logic
	})
}

execute_with executes all logic expressed in the closure within the configured runtime test environment specified in new_test_ext

ExtBuilder

pallets/struct-storage

Another approach, which provides a more flexible runtime test environment, instantiates a unit struct ExtBuilder,

pub struct ExtBuilder;

The behavior for constructing the test environment is contained in the methods on the ExtBuilder unit structure. This allows multiple levels of configuration, depending on whether the test requires a common default instance of the environment or a more specific edge-case configuration. The latter is explored in more detail in Custom Test Environment.

Like new_test_ext, the build() method on the ExtBuilder object returns an instance of TestExternalities. Externalities are an abstraction that allows the runtime to access features of the outer node such as storage or offchain workers.

In this case, create a mock storage from the default genesis configuration.

impl ExtBuilder {
	pub fn build() -> runtime_io::TestExternalities {
		let storage = system::GenesisConfig::default().build_storage::<TestRuntime>().unwrap();
		runtime_io::TestExternalities::from(storage)
	}
}

A test then calls build() and runs its logic inside the returned environment,

#[test]
fn fake_test_example() {
	ExtBuilder::build().execute_with(|| {
		// ...test conditions...
	})
}

While testing in this environment, runtimes that require signed extrinsics (i.e. take origin as a parameter) will require transactions coming from an Origin. This requires importing the impl_outer_origin macro from support

use support::{impl_outer_origin};

impl_outer_origin!{
	pub enum Origin for TestRuntime {}
}

It is possible to place signed transactions as parameters in runtime methods that require the origin input. See the full code in the kitchen, but this looks like

#[test]
fn last_value_updates() {
	ExtBuilder::build().execute_with(|| {
		HelloSubstrate::set_value(Origin::signed(1), 10u64);
		// some assert statements
	})
}

Run these tests with cargo test. An optional parameter is a test's name, which restricts execution to just that test instead of the whole suite.
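
For example, to run only the last_value_updates test shown above:

$ cargo test last_value_updates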

Note that the input to Origin::signed is the system::Trait's AccountId type which was set to u64 for the TestRuntime implementation. In theory, this could be set to some other type as long as it conforms to the trait bound,

pub trait Trait: 'static + Eq + Clone {
	//...
	type AccountId: Parameter + Member + MaybeSerializeDeserialize + Debug + MaybeDisplay + Ord + Default;
	//...
}
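
For instance, a hedged sketch of the same mock runtime using u128 account ids instead; only the relevant lines are shown, and everything else stays as in the impl above.

impl system::Trait for TestRuntime {
	// -- snip: all other associated types stay as in the impl above --
	type AccountId = u128;
	type Lookup = IdentityLookup<Self::AccountId>;
	// -- snip --
}

Tests would then construct origins with that type, e.g. Origin::signed(1u128).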

Setup for Testing Event Emission

Events are not emitted on block 0. So when testing whether events are emitted, we manually set the block number in the test environment from 0 to 1 like so:


impl ExtBuilder {
	pub fn build() -> TestExternalities {
		let storage = system::GenesisConfig::default().build_storage::<TestRuntime>().unwrap();
		let mut ext = TestExternalities::from(storage);
		ext.execute_with(|| System::set_block_number(1));
		ext
	}
}

Common Tests

To verify that our pallet code behaves as expected, it is necessary to check a few conditions with unit tests. Intuitively, the order of the testing may resemble the structure of runtime method development.

  1. Within each runtime method, declarative checks are made prior to any state change. These checks ensure that any required conditions are met before any changes occur; tests need to verify that these checks fail when they should (i.e. that panics panic).
  2. Next, verify that the expected storage changes occurred.
  3. Finally, check that the expected events were emitted with correct values.

Checks before Changes are Enforced (i.e. Panics Panic)

The Verify First, Write Last paradigm encourages verifying certain conditions before changing storage values. In tests, it might be desirable to verify that invalid inputs return the expected error message.

In pallets/adding-machine, the runtime method add checks for overflow

decl_module! {
    pub struct Module<T: Trait> for enum Call where origin: T::Origin {
        fn deposit_event() = default;

        fn add(origin, val1: u32, val2: u32) -> Result {
            let _ = ensure_signed(origin)?;
            // checks for overflow
            let result = match val1.checked_add(val2) {
                Some(r) => r,
                None => return Err("Addition overflowed"),
            };
            Self::deposit_event(Event::Added(val1, val2, result));
            Ok(())
        }
    }
}

The test below verifies that the expected error is thrown for a specific case of overflow.

#[test]
fn overflow_fails() {
	ExtBuilder::build().execute_with(|| {
		assert_err!(
			AddingMachine::add(Origin::signed(3), u32::max_value(), 1),
			"Addition overflowed"
		);
	})
}

This requires importing the assert_err macro from support. With all the previously imported objects,

#[cfg(test)]
mod tests {
	use support::{assert_err, impl_outer_event, impl_outer_origin, parameter_types};
	// more imports and tests
}

For more examples, see Substrate's own pallets -- mock.rs for mock runtime scaffolding and tests.rs for unit tests.

Expected Changes to Storage are Triggered

pallets/single-value

Changes to storage can be checked by direct calls to the storage values. The syntax is the same as it would be in the pallet's runtime methods.

use crate::*;

#[test]
fn set_value_works() {
  ExtBuilder::build().execute_with(|| {
    assert_ok!(SingleValue::set_value(Origin::signed(1), 10));
    assert_eq!(SingleValue::stored_value(), 10);
    // Another way of accessing the storage. This pattern is needed if it is a more complex data
    //   type, e.g. StorageMap, StorageLinkedMap
    assert_eq!(<StoredValue>::get(), 10);
  })
}

For context, the tested pallet's decl_storage block looks like

decl_storage! {
  trait Store for Module<T: Trait> as SingleValue {
    StoredValue get(fn stored_value): u32;
    StoredAccount get(fn stored_account): T::AccountId;
  }
}

Expected Events are Emitted

The common way of testing expected event emission behavior requires importing support's impl_outer_event! macro

use support::impl_outer_event;

The TestEvent enum imports and uses the pallet's Event enum. A new local module, hello_substrate, re-exports the pallet's Event from the crate root so that impl_outer_event! has a named path for the current crate.

mod hello_substrate {
	pub use crate::Event;
}

impl_outer_event! {
	pub enum TestEvent for TestRuntime {
		hello_substrate<T>,
	}
}

impl Trait for TestRuntime {
	type Event = TestEvent;
}

Testing the correct emission of events compares constructions of expected events with the entries in the System::events vector of EventRecords. In pallets/adding-machine,

#[test]
fn add_emits_correct_event() {
	ExtBuilder::build().execute_with(|| {
		AddingMachine::add(Origin::signed(1), 6, 9);

		assert_eq!(
			System::events(),
			vec![
				EventRecord {
					phase: Phase::Initialization,
					event: TestEvent::added(crate::Event::Added(6, 9, 15)),
					topics: vec![],
				},
			]
		);
	})
}

This check requires importing from system

use system::{EventRecord, Phase};

A more ergonomic way of testing whether a specific event was emitted might use the System::events().iter(). This pattern doesn't require the previous imports, but it does require importing RawEvent (or Event) from the pallet and ensure_signed from system to convert signed extrinsics to the underlying AccountId,

#[cfg(test)]
mod tests {
	// other imports
	use system::ensure_signed;
	use super::RawEvent; // if no RawEvent, then `use super::Event;`
	// tests
}

In pallets/hello-substrate,

#[test]
fn last_value_updates() {
	ExtBuilder::build().execute_with(|| {
		HelloSubstrate::set_value(Origin::signed(1), 10u64);
		// some assert checks

		let id_1 = ensure_signed(Origin::signed(1)).unwrap();
		let expected_event1 = TestEvent::hello_substrate(
			RawEvent::ValueSet(id_1, 10),
		);
		assert!(System::events().iter().any(|a| a.event == expected_event1));
	})
}

This test constructs an expected_event1 based on the event that the developer expects will be emitted upon the successful execution of logic in HelloSubstrate::set_value. The assert!() statement checks if the expected_event1 matches the .event field for any EventRecord in the System::events() vector.

Off-chain Worker Test Environment

Learn more about how to set up and use offchain-workers in the offchain-demo entree.

Mock Runtime Setup

In addition to everything we need to set up in Basic Test Environment, we also need to set up the mock for SubmitTransaction, and implement the CreateTransaction trait for the runtime.

src: pallets/offchain-demo/src/tests.rs


type TestExtrinsic = TestXt<Call<TestRuntime>, ()>;
type SubmitTransaction = system::offchain::TransactionSubmitter<
	crypto::Public,
	TestRuntime,
	TestExtrinsic
>;

impl Trait for TestRuntime {
	// ...snip
	// For signed transaction
	type SubmitSignedTransaction = SubmitTransaction;
	// For unsigned transaction
	type SubmitUnsignedTransaction = SubmitTransaction;
}

impl system::offchain::CreateTransaction<TestRuntime, TestExtrinsic> for TestRuntime {
	type Public = sr25519::Public;
	type Signature = sr25519::Signature;

	fn create_transaction<TSigner: system::offchain::Signer<Self::Public, Self::Signature>> (
		call: Call<TestRuntime>,
		public: Self::Public,
		_account: <TestRuntime as system::Trait>::AccountId,
		index: <TestRuntime as system::Trait>::Index,
	) -> Option<(Call<TestRuntime>, <TestExtrinsic as sp_runtime::traits::Extrinsic>::SignaturePayload)> {
		// This is the simplest setup we can do
		Some((call, (index, ())))
	}
}

Getting the Transaction Pool and Off-chain State

When writing test cases for off-chain workers, we need to look into the transaction pool and the current off-chain state to ensure a certain transaction has made its way into the pool and was submitted with the right parameters and signature. So in addition to the regular test environment TestExternalities, we also need to return references to the transaction pool state and off-chain state for future inspection.

src: pallets/offchain-demo/src/tests.rs


pub struct ExtBuilder;

impl ExtBuilder {
	pub fn build() -> (TestExternalities, Arc<RwLock<PoolState>>, Arc<RwLock<OffchainState>>) {
		const PHRASE: &str = "expire stage crawl shell boss any story swamp skull yellow bamboo copy";

		// Getting the transaction pool and off-chain state. Return them for future inspection.
		let (offchain, offchain_state) = testing::TestOffchainExt::new();
		let (pool, pool_state) = testing::TestTransactionPoolExt::new();

		// Initialize the keystore with a default key
		let keystore = KeyStore::new();
		keystore.write().sr25519_generate_new(
			KEY_TYPE,
			Some(&format!("{}/hunter1", PHRASE))
		).unwrap();

		// Initialize our genesis config
		let storage = system::GenesisConfig::default()
			.build_storage::<TestRuntime>()
			.unwrap();

		// Get the TestExternalities, register additional extension we just set up
		let mut t = TestExternalities::from(storage);
		t.register_extension(OffchainExt::new(offchain));
		t.register_extension(TransactionPoolExt::new(pool));
		t.register_extension(KeystoreExt(keystore));

		// Return the externalities and two necessary states
		(t, pool_state, offchain_state)
	}
}

Testing Off-chain Worker

When we write tests for off-chain workers, we should test only what the off-chain workers themselves do. For example, if our off-chain worker eventually makes a signed transaction to dispatch function A, which does B, C, and D, we write the off-chain worker test to check only that function A is dispatched. Whether function A actually does B, C, and D should be tested separately in another test case. This keeps our tests more robust.

This is how we write our test cases.

src: pallets/offchain-demo/src/tests.rs


#[test]
fn offchain_send_signed_tx() {
	let (mut t, pool_state, offchain_state) = ExtBuilder::build();

	t.execute_with(|| {
		// when
		let num = 32;
		OffchainDemo::send_signed(num).unwrap();
		// then

		// Test only one transaction is in the pool.
		let tx = pool_state.write().transactions.pop().unwrap();
		assert!(pool_state.read().transactions.is_empty());

		let tx = TestExtrinsic::decode(&mut &*tx).unwrap();
		// Test the transaction is signed
		assert_eq!(tx.signature.unwrap().0, 0);
		// Test the transaction is calling the expected extrinsics with expected parameters
		assert_eq!(tx.call, Call::submit_number_signed(num));
	});
}

We test that when the OffchainDemo::send_signed(num) function is called,

  • There is only one transaction that made it to the transaction pool.
  • The transaction is signed.
  • The transaction is calling the Call::submit_number_signed on-chain function with the parameter num.

What's performed by the Call::submit_number_signed on-chain function is tested in another test case, which would be similar to how you test for dispatched extrinsic calls.
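
As a hedged sketch of such a companion test (the exact storage assertions depend on the pallet's decl_storage! block, which isn't shown here, and assert_ok! is assumed to be imported from support):

#[test]
fn submit_number_signed_works() {
	let (mut t, _pool_state, _offchain_state) = ExtBuilder::build();

	t.execute_with(|| {
		// Dispatch the extrinsic directly with a signed origin, bypassing the off-chain worker.
		assert_ok!(OffchainDemo::submit_number_signed(Origin::signed(1), 32));
		// Then assert on whatever storage the call is expected to modify.
	});
}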

Custom Test Environment

execution-schedule's configuration trait has three configurable constants. For this mock runtime, ExtBuilder defines setter methods so that the TestExternalities instance for each unit test can configure the local test runtime environment with different value assignments. For context, the Trait for execution-schedule,

// other type aliases
pub type PriorityScore = u32;

pub trait Trait: system::Trait {
    /// Overarching event type
    type Event: From<Event<Self>> + Into<<Self as system::Trait>::Event>;

    /// Quota for members to signal task priority every ExecutionFrequency
    type SignalQuota: Get<PriorityScore>;

    /// The frequency of batch executions for tasks (in `on_finalize`)
    type ExecutionFrequency: Get<Self::BlockNumber>;

    /// The maximum number of tasks that can be approved in an `ExecutionFrequency` period
    type TaskLimit: Get<PriorityScore>;
}

The mock runtime environment extends the previously discussed ExtBuilder pattern with fields for each configurable constant and a default implementation.

This completes the builder pattern by defining a default configuration to be used in a plurality of test cases while also providing setter methods to overwrite the values for each field.

pub struct ExtBuilder {
    signal_quota: u32,
    execution_frequency: u64,
    task_limit: u32,
}
impl Default for ExtBuilder {
    fn default() -> Self {
        Self {
            signal_quota: 100u32,
            execution_frequency: 5u64,
            task_limit: 10u32,
        }
    }
}

The setter methods for each configurable constant are defined in the ExtBuilder methods. This allows each instance of ExtBuilder to set the constant parameters for the unit test in question.

impl ExtBuilder {
    pub fn signal_quota(mut self, signal_quota: u32) -> Self {
        self.signal_quota = signal_quota;
        self
    }
    pub fn execution_frequency(mut self, execution_frequency: u64) -> Self {
        self.execution_frequency = execution_frequency;
        self
    }
    pub fn task_limit(mut self, task_limit: u32) -> Self {
        self.task_limit = task_limit;
        self
    }
    // more methods e.g. build()
}

To allow for separate copies of the constant objects to be used in each thread, the variables assigned as constants are declared as thread_local!,

thread_local! {
    static SIGNAL_QUOTA: RefCell<u32> = RefCell::new(0);
    static EXECUTION_FREQUENCY: RefCell<u64> = RefCell::new(0);
    static TASK_LIMIT: RefCell<u32> = RefCell::new(0);
}

Each configurable constant also has a corresponding unit struct that implements Get<T>, where T is the type assigned to the pallet constant in the mock runtime implementation.

pub struct SignalQuota;
impl Get<u32> for SignalQuota {
    fn get() -> u32 {
        SIGNAL_QUOTA.with(|v| *v.borrow())
    }
}

pub struct ExecutionFrequency;
impl Get<u64> for ExecutionFrequency {
    fn get() -> u64 {
        EXECUTION_FREQUENCY.with(|v| *v.borrow())
    }
}

pub struct TaskLimit;
impl Get<u32> for TaskLimit {
    fn get() -> u32 {
        TASK_LIMIT.with(|v| *v.borrow())
    }
}

The build method on ExtBuilder sets the associated constants before building the default storage configuration.

impl ExtBuilder {
    // setters
    pub fn set_associated_consts(&self) {
        SIGNAL_QUOTA.with(|v| *v.borrow_mut() = self.signal_quota);
        EXECUTION_FREQUENCY.with(|v| *v.borrow_mut() = self.execution_frequency);
        TASK_LIMIT.with(|v| *v.borrow_mut() = self.task_limit);
    }
    // build()
}
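
For completeness, a hedged sketch of the elided build() method, following the description above (the kitchen's actual code may differ in detail):

impl ExtBuilder {
	// setters and set_associated_consts() as above
	pub fn build(self) -> runtime_io::TestExternalities {
		// Write the builder's fields into the thread-local constants first,
		// so that the `Get` implementations return the configured values.
		self.set_associated_consts();
		let storage = system::GenesisConfig::default()
			.build_storage::<TestRuntime>()
			.unwrap();
		runtime_io::TestExternalities::from(storage)
	}
}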

To build the default test environment, the syntax looks like

#[test]
fn fake_test() {
    ExtBuilder::default()
        .build()
        .execute_with(|| {
            // testing logic and checks
        })
}

To configure a test environment in which the execution_frequency is set to 2, the eras_change_correctly test invokes the execution_frequency setter declared as a method on ExtBuilder,

#[test]
fn fake_test2() {
    ExtBuilder::default()
        .execution_frequency(2)
        .build()
        .execute_with(|| {
            // testing logic and checks
        })
}

The test environment mocked above is actually used for the cursory and incomplete test eras_change_correctly. This test guided the structure of the if condition in on_initialize to periodically reset the SignalBank and increment the Era.

For more examples of the mock runtime scaffolding pattern used in execution-schedule, see balances/mock.rs and contract/tests.rs.

Safe Math

We can use the checked traits in runtime-primitives to protect against overflow/underflow when incrementing/decrementing objects in our runtime. To follow the Substrate collectables tutorial example, use checked_add() to safely handle the possibility of overflow when incrementing a global counter. Note that this check is similar to SafeMath in Solidity.

use runtime_primitives::traits::CheckedAdd;

let all_people_count = Self::num_of_people();

let new_all_people_count = all_people_count.checked_add(1).ok_or("Overflow adding a new person")?;

ok_or() transforms an Option from Some(value) to Ok(value) or None to Err(error). The ? operator facilitates error propagation. In this case, using ok_or() is the same as writing

let new_all_people_count = match all_people_count.checked_add(1) {
    Some(c) => c,
    None => return Err("Overflow adding a new person"),
};
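
By the same token, a decrement can be guarded against underflow with checked_sub; a minimal sketch reusing the counter above:

let decremented_count = all_people_count
    .checked_sub(1)
    .ok_or("Underflow removing a person")?;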

todo

  • ? for error propagation
  • Permill, Perbill, Fixed64 types for large arithmetic
  • quantization benchmarks in the treasury tests to verify that large arithmetic stays in a comfortable error bound
  • ADD BACK IN NEW RECIPE: collide and the question of whether maps prevent key collisions? could discuss sort, sort_unstable, and the ordering traits here...

More Resources

Substrate

Learn more about Substrate from these resources:

Rust

Once you've got the fundamentals of Substrate, it can only help to know more Rust. Here is a collection of helpful docs and blog posts to take you down the rabbit hole.

API Design

To become more familiar with common design patterns in Rust, the following links might be helpful:

Optimizations

To optimize runtime performance, Substrate developers should make use of iterators, traits, and Rust's other "zero cost abstractions":

Concurrency

  • Lock-free Rust: Crossbeam in 2019 a high-level overview of concurrency in Rust.
  • Rayon splits your data into distinct pieces, gives each piece to a thread to do some kind of computation on it, and finally aggregates results. Its goal is to distribute CPU-intensive tasks onto a thread pool.
  • Tokio runs tasks which sometimes need to be paused in order to wait for asynchronous events. Handling tons of such tasks is no problem. Its goal is to distribute IO-intensive tasks onto a thread pool.
  • Crossbeam is all about low-level concurrency: atomics, concurrent data structures, synchronization primitives. Same idea as the std::sync module. Its goal is to provide tools on top of which libraries like Rayon and Tokio can be built.

Asynchrony

Are we async yet?
