Pina

Pina is a high-performance Solana smart-contract framework built on top of pinocchio. The project focuses on low compute-unit usage, small dependency surface area, and strong account validation ergonomics for on-chain Rust programs.

This book is the single place for project documentation. It complements API reference docs by describing architecture, patterns, workflows, and quality standards used across the repository.

What you get in this book

  • The project’s goals and trade-offs.
  • Setup and day-to-day development workflow.
  • Core framework concepts (#[account], #[instruction], #[derive(Accounts)], discriminator model, and validation chains).
  • Codama IDL/client-generation workflow (including external-project invocation).
  • Guidance for examples and security-focused development.
  • CI/release pipeline expectations.
  • A practical recommendations roadmap for improving goal alignment.

Project Goals

Pina’s codebase currently optimizes for the following goals.

1. Performance and low compute units

  • Prefer pinocchio primitives over heavier Solana SDK surfaces.
  • Minimize instruction overhead by using zero-copy layouts and typed discriminators.
  • Keep runtime checks explicit but lightweight.

2. no_std-first smart contract ergonomics

  • Keep crates deployable to Solana SBF targets.
  • Avoid patterns that introduce allocator/runtime assumptions.
  • Gate entrypoint-specific behavior behind features.

3. Safety for account handling and state transitions

  • Strong discriminator and owner checks.
  • Explicit validation chains for signer, writable, PDA seeds, and type.
  • Defensive arithmetic and transfer operations.

4. Macro-powered developer experience

  • Reduce boilerplate with #[account], #[instruction], #[event], #[error], and #[derive(Accounts)].
  • Keep generated behavior predictable, documented, and tested.

5. Maintainability and release quality

  • Reproducible dev environments (devenv + pinned tooling).
  • CI coverage for linting, tests, and builds.
  • Changelog-driven release discipline via changesets.

Getting Started

Prerequisites

  • Rust nightly toolchain from rust-toolchain.toml
  • devenv (Nix-based environment)
  • gh (for GitHub workflows)

Setup

devenv shell
install:all

If pnpm-workspace.yaml sets useNodeVersion, devenv shell activates the matching pnpm-managed node/npm/npx/corepack toolchain automatically.

Build and test

cargo build --all-features
cargo test

Common quality checks

lint:clippy
lint:format
verify:docs

Generate a Codama IDL

pina idl --path ./examples/counter_program --output ./codama/idls/counter_program.json

See Codama Workflow for end-to-end generation and external-project usage.

Build this documentation

docs:build

The generated site is written to docs/book/.

Core Concepts

Discriminator layout (raw bytes)

Pina stores discriminator bytes directly in the struct itself as the first field of every #[account], #[instruction], and #[event] type. This is a discriminator-first layout, not an external header.

At runtime this means the parser does a fixed-byte read + size_of::<T>() validation, then a zero-copy cast.

offset | size | meaning
------ | ---- | -------
0      | N    | discriminator (N = size of the enum primitive in bytes: 1/2/4/8)
N      | ...  | payload fields

This contract is what enables:

  • deterministic size_of::<T>() checks,
  • zero-copy validation with as_account() / try_from_bytes(),
  • alignment-safe offsets for fixed-size Pod fields.
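A minimal hand-rolled sketch of that contract in plain Rust (illustrative names, not the macro's actual expansion): the parser checks the total size, checks the fixed discriminator byte at offset 0, and only then reads the payload. Pina performs a zero-copy cast at this point; the copy below just keeps the sketch dependency-free.

```rust
use core::mem::size_of;

// Illustrative stand-in for a #[account] type with a u8 discriminator (N = 1).
#[repr(C)]
#[derive(Debug, Clone, Copy, PartialEq)]
struct CounterState {
    discriminator: u8, // offset 0, N bytes
    bump: u8,          // payload starts at offset N
    count: [u8; 8],    // little-endian u64 stored as bytes for alignment safety
}

const COUNTER_DISCRIMINATOR: u8 = 1;

fn parse_counter(data: &[u8]) -> Option<CounterState> {
    // Deterministic size check, then fixed-byte discriminator read.
    if data.len() != size_of::<CounterState>() || data[0] != COUNTER_DISCRIMINATOR {
        return None;
    }
    Some(CounterState {
        discriminator: data[0],
        bump: data[1],
        count: data[2..10].try_into().ok()?,
    })
}

fn main() {
    let bytes = [1u8, 255, 42, 0, 0, 0, 0, 0, 0, 0];
    let state = parse_counter(&bytes).expect("layout matches");
    assert_eq!(u64::from_le_bytes(state.count), 42);
    assert!(parse_counter(&bytes[..5]).is_none()); // wrong size is rejected
}
```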

Why this is safer than implicit external headers

External fixed-size headers require manual casting logic in each parse path and make compiler-assist checks harder. With auto-injected first-field discriminators, the compiler can guarantee the exact struct layout and validate it in type-checked assertions.

Discriminator width and compatibility

The enum primitive width controls both on-chain layout and migration surface.

  • Width is set on the discriminator enum using #[discriminator(primitive = u8)] (default u8).
  • Allowed widths are u8, u16, u32, and u64.
  • The maximum practical width is capped at 8 bytes for zero-copy safety.
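The byte-level effect of the width choice can be sketched with a plain `#[repr(u16)]` enum (this shows only how the tag bytes widen, not the macro's generated code):

```rust
#[repr(u16)]
#[derive(Clone, Copy)]
#[allow(dead_code)]
enum WideRoute {
    Initialize = 0,
    Update = 1,
    Migrate = 300, // would not fit in a u8 discriminator
}

fn main() {
    // A u16 primitive widens the discriminator prefix to 2 bytes.
    let tag = (WideRoute::Migrate as u16).to_le_bytes();
    assert_eq!(tag, [44, 1]); // 300 = 0x012C, little-endian
    assert_eq!(core::mem::size_of::<WideRoute>(), 2);
}
```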

Discriminator and payload versioning

Change | Compatibility impact
------ | --------------------
Add a new enum variant | Usually backward-compatible if old clients ignore unknown variants
Change an existing variant value | Breaking for every historical byte slice
Reorder or remove struct fields | Breaking (offsets change)
Append fields to a struct | Mostly non-breaking, but consumers must accept the larger size
Switch primitive width (u8 → u16, etc.) | Breaking for serialized payloads at that boundary

For on-chain accounts, treat layout as part of protocol ABI:

  • Keep field order stable.
  • Introduce optional version fields at the tail for in-place migration strategies.
  • Never change existing discriminator values in place.
  • When incompatible layout changes are required, perform explicit migration with a new account version and an operator upgrade flow.
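The append-only rule can be verified with plain #[repr(C)] stand-ins (illustrative types, not real Pina state): a tail version byte changes the size but none of the existing offsets, so old readers still parse the prefix they know about.

```rust
use core::mem::{offset_of, size_of};

#[repr(C)]
#[allow(dead_code)]
struct StateV1 {
    discriminator: u8,
    value: [u8; 8],
}

#[repr(C)]
#[allow(dead_code)]
struct StateV2 {
    discriminator: u8,
    value: [u8; 8],
    version: u8, // appended at the tail for in-place migration
}

fn main() {
    // Existing offsets are untouched across versions.
    assert_eq!(offset_of!(StateV1, value), offset_of!(StateV2, value));
    // The new field lands strictly after the old layout.
    assert_eq!(offset_of!(StateV2, version), size_of::<StateV1>());
}
```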

For instruction payloads:

  • Prefer additive migration: add a new variant and keep legacy handlers for a release cycle.
  • Reject stale payload shapes with explicit errors rather than silently reinterpreting bytes.

Discriminator layout decision matrix

The discriminator strategy determines byte layout, parser guarantees, and cross-protocol compatibility.

Goal | Recommended layout
---- | ------------------
Keep layout minimal and zero-copy while staying explicit | Current Pina model: discriminator bytes are the first field inside #[account], #[instruction], and #[event] structs.
Preserve compatibility with existing Anchor-account payloads (SHA-256 hash prefixes) | Legacy adapter model: custom raw wrapper types parse/write the existing 8-byte external prefix before converting to typed structs.
Minimize account size growth when you have many types | Use u8 (default) discriminator width.
You need more than 256 route variants | Use u16 / u32 / u64 by setting #[discriminator(primitive = ...)].
Avoid schema migrations across existing serialized data | Keep existing field order and discriminator values; only append fields.

Raw discriminator width by use-case

Width | Max variants | Storage cost (bytes) | Recommended when
----- | ------------ | -------------------- | ----------------
u8 | 256 | 1 | Most programs and instructions
u16 | 65,536 | 2 | Medium-large routing tables and explicit version partitioning
u32 | 4,294,967,296 | 4 | Very large enums, rarely needed
u64 | 18,446,744,073,709,551,616 | 8 | Legacy interoperability shims or reserved growth

  • Discriminator width only affects the first field bytes.
  • Widths above 8 are rejected at macro expansion time.
  • Wider discriminators improve variant space, but increase CPI payload and account rent by the exact number of bytes.

Zero-copy account models

#[account] and #[instruction] generate Pod/Zeroable-compatible layouts for in-place reinterpretation of account/instruction bytes.

Account validation chains

Validation methods on AccountView are composable:

account.assert_signer()?.assert_writable()?.assert_owner(&program_id)?;

This pattern improves readability while keeping checks explicit and auditable.

Typed account conversions

Traits in crates/pina/src/impls.rs provide typed conversion paths from raw AccountView values into strongly typed account states.

Entrypoint model

nostd_entrypoint! wires BPF entrypoint plumbing while preserving no_std constraints for on-chain builds.

Pod types

Type | Wraps | Size
---- | ----- | ----
PodBool | bool | 1 byte
PodU16 | u16 | 2 bytes
PodI16 | i16 | 2 bytes
PodU32 | u32 | 4 bytes
PodI32 | i32 | 4 bytes
PodU64 | u64 | 8 bytes
PodI64 | i64 | 8 bytes
PodU128 | u128 | 16 bytes
PodI128 | i128 | 16 bytes

All types are #[repr(transparent)] over byte arrays (or u8 for PodBool) and implement bytemuck::Pod + bytemuck::Zeroable.

Arithmetic operators (+, -, *) use wrapping semantics in release builds for CU efficiency and panic on overflow in debug builds. Use checked_add, checked_sub, checked_mul, checked_div where overflow must be detected in all build profiles.

Each Pod integer type provides ZERO, MIN, and MAX constants.

This means you can write ergonomic code like:

my_account.count += 1u64;
let fee = balance.checked_mul(3u64).unwrap_or(PodU64::MAX);

Instruction introspection

The pina::introspection module provides helpers for reading the Instructions sysvar at runtime. This enables:

  • Flash loan guards — verify the current instruction is not being invoked via CPI (assert_no_cpi)
  • Transaction inspection — count instructions (get_instruction_count) or find the current index (get_current_instruction_index)
  • Sandwich detection — check whether a specific program appears before or after the current instruction (has_instruction_before, has_instruction_after)
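The sandwich-detection helpers boil down to scanning the transaction's instruction list relative to the current index. A plain-Rust model of that idea (pina's versions read the real Instructions sysvar on-chain; this slice-based stand-in only shows the logic):

```rust
type ProgramId = [u8; 32];

// Does `target` appear anywhere before the current instruction?
fn has_instruction_before(ixs: &[ProgramId], current: usize, target: &ProgramId) -> bool {
    ixs[..current].iter().any(|p| p == target)
}

// Does `target` appear anywhere after the current instruction?
fn has_instruction_after(ixs: &[ProgramId], current: usize, target: &ProgramId) -> bool {
    ixs[current + 1..].iter().any(|p| p == target)
}

fn main() {
    let dex = [1u8; 32];
    let me = [2u8; 32];
    let tx = [dex, me, dex]; // classic sandwich shape around index 1
    assert!(has_instruction_before(&tx, 1, &dex));
    assert!(has_instruction_after(&tx, 1, &dex));
    assert!(!has_instruction_before(&tx, 0, &dex));
}
```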

Crates and Features

Crate | Path | Description
----- | ---- | -----------
pina | crates/pina | Core framework — traits, account loaders, CPI helpers, Pod types.
pina_macros | crates/pina_macros | Proc macros — #[account], #[instruction], #[event], etc.
pina_cli | crates/pina_cli | CLI/library for IDL generation, Codama integration, scaffolding.
pina_codama_renderer | crates/pina_codama_renderer | Repository-local Codama Rust renderer for Pina-style clients.
pina_pod_primitives | crates/pina_pod_primitives | Alignment-safe no_std POD primitive wrappers.
pina_profile | crates/pina_profile | Static CU profiler for compiled SBF programs.
pina_sdk_ids | crates/pina_sdk_ids | Typed constants for well-known Solana program/sysvar IDs.

crates/pina

Core runtime crate for on-chain program logic.

Includes:

  • AccountView and validation chain helpers.
  • Typed account loaders and discriminator checks.
  • CPI/system/token helper utilities.
  • nostd_entrypoint! and instruction parsing helpers.
  • Instruction introspection (flash loan guards, sandwich detection).
  • Pod types with full arithmetic operator support.

Feature flags:

Feature | Default | Description
------- | ------- | -----------
derive | Yes | Enables proc macros (#[account], #[instruction], etc.)
logs | Yes | Enables on-chain logging via solana-program-log
token | No | Enables SPL token / token-2022 helpers and ATA utilities

crates/pina_macros

Proc-macro crate used by pina.

Provides:

  • #[discriminator]
  • #[account]
  • #[instruction]
  • #[event]
  • #[error]
  • #[derive(Accounts)]

crates/pina_cli

Developer CLI and library.

Commands:

Command | Description
------- | -----------
pina init <name> | Scaffold a new Pina program project
pina idl --path <dir> | Generate a Codama IDL JSON from a Pina program
pina profile <path.so> | Static CU profiler for compiled SBF binaries
pina codama generate | Generate Codama IDLs and Rust/JS clients for examples

The IDL parser supports multi-file programs — it follows mod declarations from src/lib.rs to discover accounts, instructions, and discriminators across all source files.

Library surface:

  • pina_cli::generate_idl(program_path, name_override)
  • pina_cli::init_project(path, package_name, force)

crates/pina_pod_primitives

no_std crate containing alignment-safe POD primitive wrappers (PodBool, PodU*, PodI*) and conversion macro helpers shared by pina and generated clients.

Arithmetic operators (+, -, *) use wrapping semantics in release builds for CU efficiency and panic on overflow in debug builds. Use checked_add, checked_sub, checked_mul, checked_div where overflow must be detected in all build profiles.

Each Pod integer type provides ZERO, MIN, and MAX constants.

crates/pina_profile

The pina profile command analyzes compiled SBF .so binaries to estimate per-function compute unit costs without requiring a running validator.

pina profile target/deploy/my_program.so          # text summary
pina profile target/deploy/my_program.so --json    # JSON for CI
pina profile target/deploy/my_program.so -o r.json # write to file

The profiler decodes each SBF instruction opcode and assigns costs: regular instructions cost 1 CU, syscalls cost 100 CU.
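That cost model is a straight sum, which a toy estimator makes concrete (illustrative only; the real profiler decodes SBF opcodes from the compiled binary):

```rust
// true = syscall (100 CU), false = regular instruction (1 CU).
fn estimate_cu(is_syscall: &[bool]) -> u64 {
    is_syscall.iter().map(|&s| if s { 100 } else { 1 }).sum()
}

fn main() {
    // Two regular instructions plus one syscall.
    assert_eq!(estimate_cu(&[false, false, true]), 102);
}
```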

crates/pina_codama_renderer

Repository-local renderer that generates Pina-style Rust client code from Codama JSON IDLs. The renderer is organized into focused modules under src/render/:

  • accounts.rs — account page and PDA helpers
  • instructions.rs — instruction page, account metas
  • types.rs — Pod type rendering, defined types
  • errors.rs — error page rendering
  • discriminator.rs — discriminator rendering
  • seeds.rs — seed parameter/constant rendering

Use this when you want generated Rust models to match Pina’s fixed-size discriminator-first/bytemuck conventions.

crates/pina_sdk_ids

no_std crate that exports well-known Solana program/sysvar IDs as typed constants.

Use this crate to avoid hardcoded base58 literals in validation logic.

Codama Workflow

This repository uses Codama as the IDL and client-generation layer for Pina programs.

The flow has three stages:

  1. Generate Codama JSON from Rust programs (pina idl).
  2. Validate generated JSON against committed fixtures/tests.
  3. Render clients (JS with Codama renderers, Rust with pina_codama_renderer).

In This Repository

Generate and validate the whole workspace flow with devenv scripts:

# Generate Codama IDLs for all examples.
codama:idl:all

# Generate Rust + JS clients.
codama:clients:generate

# Generate IDLs + Rust/JS clients in one command.
pina codama generate

# Run the complete Codama pipeline.
codama:test

# Run IDL fixture drift + validation checks used by CI.
test:idl

Supporting scripts:

  • scripts/generate-codama-idls.sh: regenerates codama/idls/*.json fixtures for all examples.
  • scripts/verify-codama-idls.sh: regenerates IDLs/clients, verifies fixtures via Rust and JS tests, and enforces deterministic no-diff output.

In a Separate Project

You do not need to copy this entire repository to use Codama with Pina.

1. Generate IDL from your program

pina idl --path ./programs/my_program --output ./idls/my_program.json

2. Generate JS clients with Codama

pnpm add -D codama @codama/renderers-js

import { renderVisitor as renderJsVisitor } from "@codama/renderers-js";
import { createFromFile } from "codama";

const codama = await createFromFile("./idls/my_program.json");
await codama.accept(renderJsVisitor("./clients/js/my_program"));

3. Generate Pina-style Rust clients (optional)

This repository ships crates/pina_codama_renderer, which emits Rust models aligned with Pina’s discriminator-first, fixed-size POD layouts.

cargo run --manifest-path ./crates/pina_codama_renderer/Cargo.toml -- \
  --idl ./idls/my_program.json \
  --output ./clients/rust

You can pass multiple --idl flags or --idl-dir.

Renderer Constraints

pina_codama_renderer intentionally targets fixed-size layouts. Unsupported patterns produce explicit errors (for example variable-length strings/bytes, unsupported endian/number forms, and non-fixed arrays).

Source shapes that extract cleanly

Use the same program shapes described in crates/pina_cli/rules.md to keep IDL extraction predictable.

Multi-file layout

// src/lib.rs
use pina::*;

mod accounts;
mod instructions;
mod pda;
mod state;

declare_id!("Fg6PaFpoGXkYsidMpWTK6W2BeZ7FEfcYkg476zPFsLnS");

Canonical dispatch

#[cfg(feature = "bpf-entrypoint")]
pub mod entrypoint {
	use super::*;

	nostd_entrypoint!(process_instruction);

	pub fn process_instruction(
		program_id: &Address,
		accounts: &[AccountView],
		data: &[u8],
	) -> ProgramResult {
		let ix: MyInstruction = parse_instruction(program_id, &ID, data)?;

		// Add one arm per instruction variant.
		match ix {
			MyInstruction::Initialize => InitializeAccounts::try_from(accounts)?.process(data),
			MyInstruction::Update => UpdateAccounts::try_from(accounts)?.process(data),
		}
	}
}

Validation chains

impl<'a> ProcessAccountInfos<'a> for InitializeAccounts<'a> {
	fn process(&self, data: &[u8]) -> ProgramResult {
		let args = InitializeInstruction::try_from_bytes(data)?;
		let seeds = my_seeds!(self.authority.address().as_ref(), args.bump);

		self.authority.assert_signer()?;
		self.system_program.assert_address(&system::ID)?;
		self.token_program.assert_address(&token::ID)?;
		self.ata_program
			.assert_address(&associated_token_account::ID)?;
		self.state
			.assert_empty()?
			.assert_writable()?
			.assert_seeds_with_bump(seeds, &ID)?;

		Ok(())
	}
}

PDA seed helpers

const MY_SEED: &[u8] = b"my";

#[macro_export]
macro_rules! my_seeds {
	($authority:expr) => {
		&[MY_SEED, $authority]
	};
	($authority:expr, $bump:expr) => {
		&[MY_SEED, $authority, &[$bump]]
	};
}

Discriminators and account layouts

#[discriminator]
pub enum MyInstruction {
	Initialize = 0,
	Update = 1,
}

#[discriminator]
pub enum MyAccountType {
	MyState = 1,
}

#[instruction(discriminator = MyInstruction, variant = Initialize)]
pub struct InitializeInstruction {
	pub bump: u8,
}

#[instruction(discriminator = MyInstruction, variant = Update)]
pub struct UpdateInstruction {
	pub value: PodU64,
}

#[account(discriminator = MyAccountType)]
pub struct MyState {
	pub bump: u8,
	pub value: PodU64,
}

For the full checklist and rationale, see crates/pina_cli/rules.md.

CI Coverage

Codama checks are enforced in the ci workflow via test:idl.

Examples

The examples/ workspace members demonstrate practical usage patterns:

  • hello_solana: minimal program structure and instruction dispatch.
  • counter_program: PDA creation, mutation, and account validation.
  • todo_program: PDA-backed state with boolean + digest updates.
  • transfer_sol: lamport transfers and account checks.
  • escrow_program: richer multi-account flow and token-oriented logic.
  • pina_bpf: minimal pina-native BPF hello world with nightly build-std=core,alloc.
  • anchor_declare_id: first Anchor test parity port, focused on program-id mismatch checks.
  • anchor_declare_program: Anchor declare-program parity for external-program ID checks.
  • anchor_duplicate_mutable_accounts: explicit duplicate mutable account validation pattern.
  • anchor_errors: Anchor-style custom error code and guard helper parity.
  • anchor_events: event schema parity through deterministic serialization checks.
  • anchor_floats: float data account create/update flow with authority validation.
  • anchor_system_accounts: system-program owner validation parity.
  • anchor_sysvars: clock/rent/stake-history sysvar validation parity.
  • anchor_realloc: realloc growth and duplicate-target safety checks.

Use examples as reference implementations for account layout, instruction parsing, and validation ordering.

Anchor test-suite parity progress is tracked in Anchor Test Porting.

Every example directory includes a local readme.md with purpose, coverage, and run commands.

When adding new examples:

  • Keep instruction/account discriminator handling explicit.
  • Use checked arithmetic in state transitions.
  • Include unit tests and clear doc comments for every instruction path.

Your First Program


This tutorial walks through building a minimal Solana program from scratch using Pina. By the end you will have a working on-chain program that logs a greeting, complete with tests.

Prerequisites


  • A working development environment (see Getting Started).
  • Basic familiarity with Rust and the Solana account model.

Project setup


Create a new crate inside the workspace (or standalone):

# Cargo.toml
[package]
name = "hello_solana"
version = "0.0.0"
edition = "2024"

[lib]
crate-type = ["cdylib", "lib"]

[features]
bpf-entrypoint = []

[dependencies]
pina = { version = "...", features = ["logs", "derive"] }

The cdylib crate type is required for building a shared library that the Solana runtime can load. The lib type lets tests and other crates consume the program as a regular Rust library.

The bpf-entrypoint feature gates the on-chain entrypoint so that test builds do not pull in BPF-specific machinery.

Step 1: Declare a program ID


Every Solana program has a unique address. declare_id! parses a base58 string into a constant ID of type Address:

#![no_std]

use pina::*;

declare_id!("DCF5KBmtQ9ryDC7mQezKLwuJHem6coVUCmKkw37M9J4A");

The #![no_std] attribute is required for on-chain programs. Pina is designed to work without the standard library so the resulting binary stays small and does not depend on a heap allocator.

For native (non-BPF) builds outside of tests you need a small shim to provide the standard library:

#[cfg(all(
	not(any(target_os = "solana", target_arch = "bpf")),
	not(feature = "bpf-entrypoint"),
	not(test)
))]
extern crate std;

Step 2: Define an instruction discriminator


Pina programs use discriminator enums to identify instruction variants. The #[discriminator] macro generates TryFrom<u8> and the framework’s IntoDiscriminator trait:

#[discriminator]
pub enum HelloInstruction {
	Hello = 0,
}

The numeric value (0) becomes the first byte of the serialized instruction data. Clients send this byte so the program knows which handler to invoke.

Step 3: Define instruction data


The #[instruction] macro creates a Pod/Zeroable struct whose first field is an auto-injected discriminator byte. It also generates a TypedBuilder for ergonomic construction in tests:

#[instruction(discriminator = HelloInstruction, variant = Hello)]
pub struct HelloInstructionData {}

This instruction has no extra payload – it only needs the discriminator byte to be identified.

Step 4: Define an accounts struct


#[derive(Accounts)] generates a TryFromAccountInfos implementation that maps positional accounts from the transaction into named fields:

#[derive(Accounts, Debug)]
pub struct HelloAccounts<'a> {
	pub user: &'a AccountView,
}

If a transaction supplies fewer accounts than the struct declares, TryFrom returns ProgramError::NotEnoughAccountKeys.

Step 5: Implement the processor


The ProcessAccountInfos trait defines the process method that contains your instruction logic:

impl<'a> ProcessAccountInfos<'a> for HelloAccounts<'a> {
	fn process(&self, data: &[u8]) -> ProgramResult {
		let _ = HelloInstructionData::try_from_bytes(data)?;
		self.user.assert_signer()?;
		log!("Hello, Solana!");
		Ok(())
	}
}

try_from_bytes validates that the raw instruction data is the correct size and layout. assert_signer() verifies the user actually signed the transaction. If any check fails the program returns an error and the transaction is rejected.

Step 6: Wire up the entrypoint


The entrypoint module is gated behind bpf-entrypoint so it only compiles for on-chain builds:

#[cfg(feature = "bpf-entrypoint")]
pub mod entrypoint {
	use pina::*;

	use super::*;

	nostd_entrypoint!(process_instruction);

	#[inline(always)]
	pub fn process_instruction(
		program_id: &Address,
		accounts: &[AccountView],
		data: &[u8],
	) -> ProgramResult {
		let instruction: HelloInstruction = parse_instruction(program_id, &ID, data)?;

		match instruction {
			HelloInstruction::Hello => HelloAccounts::try_from(accounts)?.process(data),
		}
	}
}

nostd_entrypoint! wires up the BPF entrypoint, a minimal panic handler, and a no-allocation stub. parse_instruction reads the discriminator byte, verifies the program ID matches, and returns the typed enum variant.

The complete program


Putting it all together (this matches examples/hello_solana/src/lib.rs in the repository):

#![allow(clippy::inline_always)]
#![no_std]

#[cfg(all(
	not(any(target_os = "solana", target_arch = "bpf")),
	not(feature = "bpf-entrypoint"),
	not(test)
))]
extern crate std;

use pina::*;

declare_id!("DCF5KBmtQ9ryDC7mQezKLwuJHem6coVUCmKkw37M9J4A");

#[discriminator]
pub enum HelloInstruction {
	Hello = 0,
}

#[instruction(discriminator = HelloInstruction, variant = Hello)]
pub struct HelloInstructionData {}

#[derive(Accounts, Debug)]
pub struct HelloAccounts<'a> {
	pub user: &'a AccountView,
}

impl<'a> ProcessAccountInfos<'a> for HelloAccounts<'a> {
	fn process(&self, data: &[u8]) -> ProgramResult {
		let _ = HelloInstructionData::try_from_bytes(data)?;
		self.user.assert_signer()?;
		log!("Hello, Solana!");
		Ok(())
	}
}

#[cfg(feature = "bpf-entrypoint")]
pub mod entrypoint {
	use pina::*;

	use super::*;

	nostd_entrypoint!(process_instruction);

	#[inline(always)]
	pub fn process_instruction(
		program_id: &Address,
		accounts: &[AccountView],
		data: &[u8],
	) -> ProgramResult {
		let instruction: HelloInstruction = parse_instruction(program_id, &ID, data)?;

		match instruction {
			HelloInstruction::Hello => HelloAccounts::try_from(accounts)?.process(data),
		}
	}
}

Building for SBF


To compile the program for the Solana BPF target:

cargo build --release --target bpfel-unknown-none -p hello_solana -Z build-std -F bpf-entrypoint

The workspace .cargo/config.toml already sets the required linker flags for bpfel-unknown-none. The -Z build-std flag rebuilds core and alloc for the BPF target.

Writing tests


Tests run against the native Rust library (without bpf-entrypoint). You can verify discriminator values, instruction serialization, and program ID validity without needing a full Solana validator:

#[cfg(test)]
mod tests {
	use super::*;

	#[test]
	fn discriminator_hello_value() {
		assert_eq!(HelloInstruction::Hello as u8, 0);
	}

	#[test]
	fn discriminator_roundtrip() {
		let parsed = HelloInstruction::try_from(0u8);
		assert!(parsed.is_ok());
	}

	#[test]
	fn discriminator_invalid_byte_fails() {
		let result = HelloInstruction::try_from(99u8);
		assert!(result.is_err());
	}

	#[test]
	fn instruction_data_has_discriminator() {
		assert!(HelloInstructionData::matches_discriminator(&[0u8]));
		assert!(!HelloInstructionData::matches_discriminator(&[1u8]));
	}

	#[test]
	fn program_id_is_valid() {
		assert_ne!(ID, Address::default());
	}
}

For full integration tests that simulate the Solana runtime, add mollusk-svm as a dev-dependency and use its transaction builder to invoke your program’s process_instruction function.

Next steps


  • Add on-chain state with #[account] – see the counter_program example.
  • Handle multiple instructions by adding more variants to your discriminator enum.
  • Add PDA-based accounts with create_program_account_with_bump.
  • Follow the Token Escrow Tutorial for a real-world program with token transfers and CPI.

Token Escrow Tutorial


This tutorial walks through the examples/escrow_program step by step. The program implements a trustless token exchange between two parties using a PDA-owned vault account.

How the escrow works


  1. Make – the maker deposits token A into a PDA-owned vault and records the desired amount of token B in an escrow state account.
  2. Take – the taker sends token B to the maker, the vault releases token A to the taker, and the escrow is closed with rent returned to the maker.

No party needs to trust the other. The program enforces the exchange atomically: either both transfers happen or neither does.

Project setup


The escrow program enables the token feature for SPL token helpers:

[dependencies]
pina = { workspace = true, features = ["logs", "token", "derive"] }

[dev-dependencies]
mollusk-svm = { workspace = true }

The token feature unlocks CPI wrappers for SPL Token, Token-2022, and Associated Token Account operations.

Program ID and discriminators


use pina::*;

declare_id!("4ibrEMW5F6hKnkW4jVedswYv6H6VtwPN6ar6dvXDN1nT");

#[discriminator]
pub enum EscrowInstruction {
	Make = 1,
	Take = 2,
}

#[discriminator]
pub enum EscrowAccount {
	EscrowState = 1,
}

Two discriminator enums serve different purposes. EscrowInstruction tags instruction data so the entrypoint can dispatch to the right handler. EscrowAccount tags on-chain account data so the program can verify it is reading the correct account type.

Custom errors


The #[error] macro converts an enum into a set of ProgramError::Custom error codes:

#[error]
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum EscrowError {
	OfferKeyMismatch = 0,
	TokenAccountMismatch = 1,
}

Each variant’s numeric value becomes the custom error code. You can return these from any processor via Err(EscrowError::OfferKeyMismatch.into()).
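That variant-to-code mapping can be modeled in plain Rust (ProgramError here is a local stand-in for the Solana SDK type, and check_offer is an illustrative guard, not escrow-program code):

```rust
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum EscrowError {
    OfferKeyMismatch = 0,
    TokenAccountMismatch = 1,
}

// Stand-in for solana_program::program_error::ProgramError::Custom(u32).
#[derive(Debug, PartialEq)]
enum ProgramError {
    Custom(u32),
}

impl From<EscrowError> for ProgramError {
    fn from(e: EscrowError) -> Self {
        // The discriminant becomes the custom error code.
        ProgramError::Custom(e as u32)
    }
}

fn check_offer(stored: &[u8; 32], provided: &[u8; 32]) -> Result<(), ProgramError> {
    if stored != provided {
        return Err(EscrowError::OfferKeyMismatch.into());
    }
    Ok(())
}

fn main() {
    let a = [1u8; 32];
    let b = [2u8; 32];
    assert_eq!(check_offer(&a, &b), Err(ProgramError::Custom(0)));
    assert!(check_offer(&a, &a).is_ok());
}
```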

Escrow state account


The #[account] macro defines the on-chain state layout:

#[account(discriminator = EscrowAccount)]
pub struct EscrowState {
	pub maker: Address,
	pub mint_a: Address,
	pub mint_b: Address,
	pub amount_a: PodU64,
	pub amount_b: PodU64,
	pub seed: PodU64,
	pub bump: u8,
}

The macro auto-injects a discriminator field as the first byte (set to EscrowAccount::EscrowState). It also derives Pod, Zeroable, HasDiscriminator, and TypedBuilder. All fields use fixed-size types (Address is 32 bytes, PodU64 is 8 bytes little-endian) so the struct has a stable #[repr(C)] layout suitable for zero-copy reads.
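A quick sanity check of that layout arithmetic with a plain #[repr(C)] stand-in (byte arrays replace the real Address/PodU64 wrappers, which are likewise alignment-1; this struct is illustrative, not the macro's exact expansion):

```rust
// 1-byte discriminator first, then fixed-size fields in declaration order.
#[repr(C)]
#[allow(dead_code)]
struct EscrowStateLayout {
    discriminator: u8,
    maker: [u8; 32],
    mint_a: [u8; 32],
    mint_b: [u8; 32],
    amount_a: [u8; 8],
    amount_b: [u8; 8],
    seed: [u8; 8],
    bump: u8,
}

fn main() {
    use core::mem::{offset_of, size_of};
    // Every field has alignment 1, so offsets are just running sums.
    assert_eq!(offset_of!(EscrowStateLayout, maker), 1);
    assert_eq!(offset_of!(EscrowStateLayout, amount_a), 97);
    assert_eq!(offset_of!(EscrowStateLayout, bump), 121);
    assert_eq!(size_of::<EscrowStateLayout>(), 122);
}
```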

The seed and bump fields are stored so that PDA derivation can be verified on subsequent instructions without re-computing it.

Instruction data


#[instruction(discriminator = EscrowInstruction, variant = Make)]
pub struct MakeInstruction {
	pub seed: PodU64,
	pub amount_a: PodU64,
	pub amount_b: PodU64,
	pub bump: u8,
}

#[instruction(discriminator = EscrowInstruction, variant = Take)]
pub struct TakeInstruction {}

MakeInstruction carries all the parameters needed to set up the escrow. TakeInstruction has no payload beyond its discriminator byte – the taker just needs to invoke the instruction with the right accounts.

PDA seeds


The escrow PDA is derived from a prefix, the maker’s address, and a user-chosen seed:

const SEED_PREFIX: &[u8] = b"escrow";

macro_rules! seeds_escrow {
	($maker:expr, $seed:expr) => {
		&[SEED_PREFIX, $maker, $seed]
	};
	($maker:expr, $seed:expr, $bump:expr) => {
		&[SEED_PREFIX, $maker, $seed, &[$bump]]
	};
}

The seed macro generates the PDA seeds array in both forms: without bump (for create_program_account_with_bump) and with bump (for assert_seeds_with_bump).

Make: accounts and validation


#[derive(Accounts, Debug)]
pub struct MakeAccounts<'a> {
	pub maker: &'a AccountView,
	pub mint_a: &'a AccountView,
	pub mint_b: &'a AccountView,
	pub maker_ata_a: &'a AccountView,
	pub escrow: &'a AccountView,
	pub vault: &'a AccountView,
	pub system_program: &'a AccountView,
	pub token_program: &'a AccountView,
}

Accounts are listed in the order clients must provide them. The #[derive(Accounts)] macro maps each positional AccountView to its named field.

The processor validates every account before performing any mutation:

const SPL_PROGRAM_IDS: [Address; 2] = [token::ID, token_2022::ID];

impl<'a> ProcessAccountInfos<'a> for MakeAccounts<'a> {
	fn process(&self, data: &[u8]) -> ProgramResult {
		let args = MakeInstruction::try_from_bytes(data)?;
		let escrow_seeds = seeds_escrow!(self.maker.address().as_ref(), &args.seed.0);
		let escrow_seeds_with_bump =
			seeds_escrow!(self.maker.address().as_ref(), &args.seed.0, args.bump);

		// Validate all accounts before mutating anything.
		self.token_program.assert_addresses(&SPL_PROGRAM_IDS)?;
		self.maker.assert_signer()?;
		self.mint_a.assert_owners(&SPL_PROGRAM_IDS)?;
		self.mint_b.assert_owners(&SPL_PROGRAM_IDS)?;
		self.maker_ata_a.assert_associated_token_address(
			self.maker.address(),
			self.mint_a.address(),
			self.token_program.address(),
		)?;
		self.escrow
			.assert_empty()?
			.assert_writable()?
			.assert_seeds_with_bump(escrow_seeds_with_bump, &ID)?;
		self.vault
			.assert_empty()?
			.assert_writable()?
			.assert_associated_token_address(
				self.escrow.address(),
				self.mint_a.address(),
				self.token_program.address(),
			)?;

		// ... create accounts and transfer tokens ...
		Ok(())
	}
}

Key validation patterns:

  • assert_addresses checks that the token program is either SPL Token or Token-2022.
  • assert_signer ensures the maker signed the transaction.
  • assert_owners verifies mint accounts are owned by a token program.
  • assert_associated_token_address derives the expected ATA address and compares.
  • assert_empty + assert_writable + assert_seeds_with_bump together validate that the PDA is unwritten, writable, and derivable from the expected seeds.

Validation methods return Result<&AccountView> so they chain naturally with ?.
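The chaining pattern can be sketched in plain Rust. This is a toy model, not Pina's actual types or error values; it only shows why returning Result<&Self> lets checks compose with ?:

```rust
// Toy model of chainable assertions: each check returns Result<&Self>,
// so validation composes with `?` and short-circuits on the first failure.
#[derive(Debug, PartialEq)]
pub enum ProgramError {
    MissingRequiredSignature,
    InvalidAccountData,
}

pub struct AccountView {
    is_signer: bool,
    is_writable: bool,
}

impl AccountView {
    pub fn assert_signer(&self) -> Result<&Self, ProgramError> {
        if self.is_signer { Ok(self) } else { Err(ProgramError::MissingRequiredSignature) }
    }

    pub fn assert_writable(&self) -> Result<&Self, ProgramError> {
        if self.is_writable { Ok(self) } else { Err(ProgramError::InvalidAccountData) }
    }
}

fn validate(account: &AccountView) -> Result<(), ProgramError> {
    // The chain stops at the first failed check.
    account.assert_signer()?.assert_writable()?;
    Ok(())
}

fn main() {
    let good = AccountView { is_signer: true, is_writable: true };
    let unsigned = AccountView { is_signer: false, is_writable: true };
    assert!(validate(&good).is_ok());
    assert_eq!(validate(&unsigned), Err(ProgramError::MissingRequiredSignature));
}
```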

Make: creating the escrow


After validation the processor creates the PDA account and initializes its state:

#![allow(unused)]
fn main() {
create_program_account_with_bump::<EscrowState>(
	self.escrow,
	self.maker,
	&ID,
	escrow_seeds,
	args.bump,
)?;

let escrow = self.escrow.as_account_mut::<EscrowState>(&ID)?;
*escrow = EscrowState::builder()
	.maker(*self.maker.address())
	.mint_a(*self.mint_a.address())
	.mint_b(*self.mint_b.address())
	.amount_a(args.amount_a)
	.amount_b(args.amount_b)
	.seed(args.seed)
	.bump(args.bump)
	.build();
}

create_program_account_with_bump issues a CreateAccount CPI to the system program, allocating size_of::<EscrowState>() bytes and setting the owner to this program.

as_account_mut reinterprets the raw account bytes as a mutable reference to EscrowState. The builder (generated by the #[account] macro) provides a type-safe way to populate all fields.
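The zero-copy step can be illustrated in plain Rust. This is a toy sketch, not Pina's actual as_account_mut implementation; the alignment-1 byte-array fields are what make the in-place cast sound:

```rust
// Toy illustration of zero-copy account access: with a #[repr(C)] layout
// whose fields are all alignment-1 byte arrays (no padding), the account
// buffer can be reinterpreted in place without any deserialization pass.
#[repr(C)]
struct CounterState {
    discriminator: [u8; 1],
    count: [u8; 8], // little-endian u64, in the spirit of PodU64
}

fn as_state_mut(data: &mut [u8]) -> &mut CounterState {
    assert!(data.len() >= core::mem::size_of::<CounterState>());
    // Sound here because every field has alignment 1 and the struct has
    // no padding, so any byte offset is a valid place to view the struct.
    unsafe { &mut *(data.as_mut_ptr() as *mut CounterState) }
}

fn main() {
    let mut account_data = [0u8; 9];
    let state = as_state_mut(&mut account_data);
    state.discriminator = [1];
    state.count = 42u64.to_le_bytes();
    // Writes land directly in the underlying buffer.
    assert_eq!(account_data[0], 1);
    assert_eq!(u64::from_le_bytes(account_data[1..9].try_into().unwrap()), 42);
}
```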

Make: token operations via CPI


With the escrow account created, the program creates the vault ATA and transfers tokens:

#![allow(unused)]
fn main() {
associated_token_account::instructions::Create {
	account: self.vault,
	funding_account: self.maker,
	wallet: self.escrow,
	mint: self.mint_a,
	system_program: self.system_program,
	token_program: self.token_program,
}
.invoke()?;

let decimals = self.mint_a.as_token_mint()?.decimals();
token_2022::instructions::TransferChecked {
	from: self.maker_ata_a,
	to: self.vault,
	authority: self.maker,
	amount: args.amount_a.into(),
	mint: self.mint_a,
	decimals,
	token_program: self.token_program.address(),
}
.invoke()?;
}

Pina’s token feature provides typed CPI instruction builders. You fill in the struct fields and call .invoke() – the framework handles account meta construction and the CPI call.

The vault is an ATA owned by the escrow PDA, so only this program can later release the tokens, by signing with the PDA seeds via invoke_signed.

Take: completing the exchange


The Take instruction performs two token transfers and cleans up:

  1. Transfer token B from taker to maker (authorized by the taker’s signature).
  2. Transfer token A from vault to taker (authorized by the escrow PDA via invoke_signed).
  3. Close the vault account and return rent to the maker.
  4. Zero and close the escrow state account.
#![allow(unused)]
fn main() {
impl<'a> ProcessAccountInfos<'a> for TakeAccounts<'a> {
	fn process(&self, data: &[u8]) -> ProgramResult {
		let _ = TakeInstruction::try_from_bytes(data)?;

		// ... validation omitted for brevity ...

		let EscrowState {
			maker,
			seed,
			bump,
			amount_b,
			..
		} = self.escrow.as_account::<EscrowState>(&ID)?;

		// Transfer token B: taker -> maker
		token_2022::instructions::TransferChecked {
			from: self.taker_ata_b,
			mint: self.mint_b,
			to: self.maker_ata_b,
			authority: self.taker,
			amount: (*amount_b).into(),
			decimals: self.mint_b.as_token_2022_mint()?.decimals(),
			token_program: self.token_program.address(),
		}
		.invoke()?;

		// Transfer token A: vault -> taker (PDA-signed)
		let bump_as_seeds = [*bump];
		let escrow_seeds =
			seeds_escrow!(true, self.maker.address().as_ref(), &seed.0, &bump_as_seeds);
		let escrow_signer = Signer::from(&escrow_seeds);
		let signers = [escrow_signer];

		token_2022::instructions::TransferChecked {
			from: self.vault,
			mint: self.mint_a,
			to: self.taker_ata_a,
			authority: self.escrow,
			amount: self.vault.as_token_2022_account()?.amount(),
			decimals: self.mint_a.as_token_2022_mint()?.decimals(),
			token_program: self.token_program.address(),
		}
		.invoke_signed(&signers)?;

		// Close vault and escrow
		token_2022::instructions::CloseAccount {
			account: self.vault,
			destination: self.maker,
			authority: self.escrow,
			token_program: self.token_program.address(),
		}
		.invoke_signed(&signers)?;

		self.escrow.as_account_mut::<EscrowState>(&ID)?.zeroed();
		self.escrow.close_with_recipient(self.maker)
	}
}
}

The PDA signer is constructed from the same seeds used to derive the escrow address. invoke_signed passes these seeds to the runtime so it can verify the PDA signature.

close_with_recipient transfers remaining lamports to the maker and zeros the account data, reclaiming the rent.

Entrypoint


The entrypoint ties everything together with a simple match:

#![allow(unused)]
fn main() {
#[cfg(feature = "bpf-entrypoint")]
pub mod entrypoint {
	use pina::*;

	use super::*;

	nostd_entrypoint!(process_instruction);

	#[inline(always)]
	pub fn process_instruction(
		program_id: &Address,
		accounts: &[AccountView],
		data: &[u8],
	) -> ProgramResult {
		let instruction: EscrowInstruction = parse_instruction(program_id, &ID, data)?;

		match instruction {
			EscrowInstruction::Make => MakeAccounts::try_from(accounts)?.process(data),
			EscrowInstruction::Take => TakeAccounts::try_from(accounts)?.process(data),
		}
	}
}
}

Testing


Unit tests verify discriminator stability, seed construction, and program ID validation:

#![allow(unused)]
fn main() {
#[cfg(test)]
mod tests {
	use super::*;

	#[test]
	fn instruction_discriminators_are_stable() {
		assert_eq!(EscrowInstruction::Make as u8, 1);
		assert_eq!(EscrowInstruction::Take as u8, 2);
	}

	#[test]
	fn seeds_macro_builds_expected_seed_arrays() {
		let maker = [3u8; 32];
		let seed = PodU64::from_primitive(42);
		let bump = 7u8;

		let seeds = seeds_escrow!(&maker, &seed.0);
		assert_eq!(seeds.len(), 3);

		let seeds_with_bump = seeds_escrow!(&maker, &seed.0, bump);
		assert_eq!(seeds_with_bump.len(), 4);
	}

	#[test]
	fn parse_instruction_rejects_program_id_mismatch() {
		let wrong_program_id: Address = [9u8; 32].into();
		let data = [EscrowInstruction::Make as u8];
		let result = parse_instruction::<EscrowInstruction>(&wrong_program_id, &ID, &data);
		assert!(matches!(result, Err(ProgramError::IncorrectProgramId)));
	}
}
}

For full integration tests, use mollusk-svm to simulate transactions with real token accounts and verify the entire Make/Take flow end-to-end.

Key takeaways


  • PDA vaults hold tokens on behalf of the program. Only the program can sign for them using invoke_signed.
  • Validation-first – check every account before performing any mutation.
  • Typed CPI builders in the token feature eliminate raw account-meta boilerplate.
  • Zero-copy state with #[account] avoids serialization overhead.
  • Feature-gated entrypoints let the same crate serve as both an on-chain program and a testable library.

Migrating from Anchor


This guide maps common Anchor patterns to their Pina equivalents. If you have an existing Anchor program and want to rewrite it with Pina for lower compute usage and smaller binaries, this is the reference to follow.

The repository includes several anchor_* example programs that demonstrate direct parity with Anchor’s own test suite. These are referenced throughout this guide.

Program structure


Anchor

#![allow(unused)]
fn main() {
use anchor_lang::prelude::*;

declare_id!("Fg6PaFpoGXk...");

#[program]
pub mod my_program {
	use super::*;

	pub fn initialize(ctx: Context<Initialize>) -> Result<()> {
		// ...
		Ok(())
	}
}

#[derive(Accounts)]
pub struct Initialize<'info> {
	#[account(mut)]
	pub user: Signer<'info>,
	#[account(init, payer = user, space = 8 + MyAccount::INIT_SPACE)]
	pub my_account: Account<'info, MyAccount>,
	pub system_program: Program<'info, System>,
}
}

Pina

#![allow(unused)]
fn main() {
use pina::*;

declare_id!("Fg6PaFpoGXk...");

#[discriminator]
pub enum MyInstruction {
	Initialize = 0,
}

#[instruction(discriminator = MyInstruction, variant = Initialize)]
pub struct InitializeInstruction {}

#[derive(Accounts, Debug)]
pub struct InitializeAccounts<'a> {
	pub user: &'a AccountView,
	pub my_account: &'a AccountView,
	pub system_program: &'a AccountView,
}

impl<'a> ProcessAccountInfos<'a> for InitializeAccounts<'a> {
	fn process(&self, data: &[u8]) -> ProgramResult {
		let _ = InitializeInstruction::try_from_bytes(data)?;
		self.user.assert_signer()?.assert_writable()?;
		self.my_account.assert_empty()?.assert_writable()?;
		self.system_program.assert_address(&system::ID)?;
		// ...
		Ok(())
	}
}
}

Key differences:

  • No #[program] module. Pina uses explicit discriminator enums and a manual match in the entrypoint.
  • No Context<T>. Each accounts struct receives &[AccountView] and the processor receives raw data: &[u8].
  • Constraints are code, not attributes. Validation happens inside process via chained assertions rather than #[account(...)] attribute directives.

Account constraints to validation chains


Anchor expresses constraints as attributes on account fields. Pina uses explicit method calls on AccountView references.

| Anchor attribute | Pina equivalent |
| --- | --- |
| Signer<'info> | account.assert_signer()? |
| #[account(mut)] | account.assert_writable()? |
| #[account(owner = program)] | account.assert_owner(&program_id)? |
| #[account(address = KEY)] | account.assert_address(&KEY)? |
| #[account(seeds = [...], bump)] | account.assert_seeds_with_bump(seeds, &ID)? |
| #[account(init, ...)] | account.assert_empty()? then create_program_account_with_bump(...) |
| #[account(constraint = expr)] | Write the check directly in process and return an error |
| Account<'info, T> (type check) | account.assert_type::<T>(&owner)? |

Pina’s assertion methods return Result<&AccountView>, so they chain naturally:

#![allow(unused)]
fn main() {
self.counter
	.assert_not_empty()?
	.assert_writable()?
	.assert_type::<CounterState>(&ID)?;
}

See examples/counter_program for a complete PDA creation and validation example, and examples/anchor_duplicate_mutable_accounts for explicit duplicate-account safety checks.

Account data: Borsh to Pod


Anchor (Borsh)

#![allow(unused)]
fn main() {
#[account]
pub struct MyAccount {
	pub authority: Pubkey,
	pub value: u64,
	pub active: bool,
}
}

Anchor uses Borsh serialization by default. The #[account] macro adds an 8-byte discriminator (SHA-256 hash prefix) and derives BorshSerialize/BorshDeserialize.

Pina (Pod / zero-copy)

#![allow(unused)]
fn main() {
#[account(discriminator = MyAccountType)]
pub struct MyAccount {
	pub authority: Address,
	pub value: PodU64,
	pub active: PodBool,
}
}

Pina uses zero-copy (bytemuck::Pod) layouts. Every field must be a fixed-size, Copy type. This means:

| Anchor type | Pina type | Notes |
| --- | --- | --- |
| Pubkey | Address | Both are [u8; 32] |
| u64 | PodU64 | Little-endian, alignment-safe |
| u32 | PodU32 | Little-endian, alignment-safe |
| u16 | PodU16 | Little-endian, alignment-safe |
| i64 | PodI64 | Little-endian, alignment-safe |
| bool | PodBool | Single byte |
| String | [u8; N] | Fixed-size byte arrays only |
| Vec<T> | Not supported | Use fixed-size arrays |
| Option<T> | Manual encoding | Use a sentinel value or a PodBool flag |
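The Option<T> case calls for a manual encoding. A hypothetical sketch (OptionalU64 is an illustrative name, not a Pina type) using a PodBool-style flag plus a fixed-size payload:

```rust
// Hypothetical manual Option<u64> encoding for a Pod layout: a one-byte
// flag plus the payload bytes. Every field stays fixed-size and Copy.
#[repr(C)]
#[derive(Clone, Copy)]
struct OptionalU64 {
    is_some: u8,    // 0 = None, 1 = Some (PodBool-style flag)
    value: [u8; 8], // little-endian u64 payload, ignored when is_some == 0
}

impl OptionalU64 {
    const NONE: Self = Self { is_some: 0, value: [0; 8] };

    fn some(v: u64) -> Self {
        Self { is_some: 1, value: v.to_le_bytes() }
    }

    fn get(&self) -> Option<u64> {
        (self.is_some == 1).then(|| u64::from_le_bytes(self.value))
    }
}

fn main() {
    assert_eq!(OptionalU64::NONE.get(), None);
    assert_eq!(OptionalU64::some(7).get(), Some(7));
}
```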

Pod wrappers keep every field at alignment 1, so a #[repr(C)] struct built from them contains no padding bytes and can be safely reinterpreted from the raw account buffer with bytemuck. Converting to and from native types:

#![allow(unused)]
fn main() {
// Creating Pod values
let value = PodU64::from_primitive(42);
let active = PodBool::from(true);

// Reading Pod values
let n: u64 = value.into();
let b: bool = active.into();
}

The #[account] macro’s discriminator is a single u8 (or configurable width) rather than Anchor’s 8-byte hash. This saves 7 bytes per account.
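The 7-byte saving can be made concrete with two illustrative layouts (hand-written field types standing in for the macro output):

```rust
// Same payload, different discriminator width: Anchor's 8-byte hash prefix
// versus Pina's default single-byte discriminator.
use core::mem::size_of;

#[repr(C)]
struct AnchorStyle {
    discriminator: [u8; 8],
    authority: [u8; 32],
    value: [u8; 8],
}

#[repr(C)]
struct PinaStyle {
    discriminator: [u8; 1],
    authority: [u8; 32],
    value: [u8; 8],
}

fn main() {
    assert_eq!(size_of::<AnchorStyle>(), 48);
    assert_eq!(size_of::<PinaStyle>(), 41);
    // 7 bytes saved per account, paid once per stored account.
    assert_eq!(size_of::<AnchorStyle>() - size_of::<PinaStyle>(), 7);
}
```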

Discriminators


Anchor

Anchor generates 8-byte discriminators from sha256("account:<StructName>") or sha256("global:<method_name>"). These are implicit – you never write them manually.

Pina

Pina uses explicit discriminator enums with numeric values:

#![allow(unused)]
fn main() {
#[discriminator]
pub enum MyInstruction {
	Initialize = 0,
	Update = 1,
}

#[discriminator]
pub enum MyAccountType {
	MyAccount = 1,
}
}

Each #[instruction] or #[account] macro references its discriminator enum and variant:

#![allow(unused)]
fn main() {
#[instruction(discriminator = MyInstruction, variant = Initialize)]
pub struct InitializeInstruction {
	// ...
}

#[account(discriminator = MyAccountType)]
pub struct MyAccount {
	// ...
}
}

Benefits of explicit discriminators:

  • Stable, human-readable values (not hash-dependent).
  • Single byte by default (configurable to u16/u32/u64), saving space.
  • No hidden behavior – you control the exact values.
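What explicit discriminator routing amounts to can be sketched in a few lines of plain Rust (a toy model of the generated TryFrom, not the actual macro output):

```rust
// Toy sketch of discriminator routing: the first instruction byte maps to
// an enum variant, and unknown or empty payloads are rejected explicitly.
#[derive(Debug, PartialEq)]
enum MyInstruction {
    Initialize = 0,
    Update = 1,
}

impl TryFrom<u8> for MyInstruction {
    type Error = ();

    fn try_from(byte: u8) -> Result<Self, ()> {
        match byte {
            0 => Ok(Self::Initialize),
            1 => Ok(Self::Update),
            _ => Err(()), // no hidden fallback: unknown bytes fail loudly
        }
    }
}

fn route(data: &[u8]) -> Result<MyInstruction, ()> {
    let (&disc, _payload) = data.split_first().ok_or(())?;
    MyInstruction::try_from(disc)
}

fn main() {
    assert_eq!(route(&[0, 0xAA]), Ok(MyInstruction::Initialize));
    assert!(route(&[9]).is_err());
    assert!(route(&[]).is_err());
}
```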

Migration from fixed 8-byte prefixes (Anchor-compatible data)

If you are coming from Anchor/Borsh with implicit 8-byte discriminators, there are two practical migration paths:

1) Keep old on-chain layouts and add compatibility readers

Use a lightweight adapter struct for legacy decoding, then convert into a pinned Pina struct in memory. This is useful when you cannot migrate all existing accounts immediately.

#![allow(unused)]
fn main() {
#[repr(C)]
pub struct LegacyAccountV0 {
	discriminator: [u8; 8],
	owner: [u8; 32],
	value: PodU64,
}

#[discriminator]
pub enum MyAccountType {
	MyAccountV0 = 0,
	MyAccount = 1,
}

impl LegacyAccountV0 {
	pub fn into_live(self) -> Result<MyAccount, ProgramError> {
		if self.discriminator != LEGACY_ACCOUNT_DISCRIMINATOR {
			return Err(ProgramError::InvalidAccountData);
		}
		Ok(MyAccount {
			discriminator: [MyAccountType::MyAccount as u8],
			owner: self.owner,
			value: self.value,
		})
	}
}
}

2) Migrate accounts to the new layout

For long-lived accounts, add a migration instruction that rewrites every stored account from the legacy header to the new first-field discriminator layout. This gives you one canonical on-chain schema thereafter.

Discriminator layout decision matrix

The discriminator strategy determines byte layout, parser guarantees, and cross-protocol compatibility.

| Goal | Recommended layout |
| --- | --- |
| Keep the layout minimal and zero-copy while staying explicit | Current Pina model: discriminator bytes are the first field inside #[account], #[instruction], and #[event] structs. |
| Preserve compatibility with existing Anchor-account payloads (SHA-256 hash prefixes) | Legacy adapter model: custom raw wrapper types parse/write the existing 8-byte external prefix before converting to typed structs. |
| Minimize account size growth when you have many types | Use the u8 (default) discriminator width. |
| Need more than 256 route variants | Use u16 / u32 / u64 by setting #[discriminator(primitive = ...)]. |
| Avoid schema migrations across existing serialized data | Keep existing field order and discriminator values; only append fields. |

Raw discriminator width by use-case

| Width | Max variants | Storage cost (bytes) | Recommended when |
| --- | --- | --- | --- |
| u8 | 256 | 1 | Most programs and instructions |
| u16 | 65,536 | 2 | Medium-large routing tables and explicit version partitioning |
| u32 | 4,294,967,296 | 4 | Very large enums, rarely needed |
| u64 | 18,446,744,073,709,551,616 | 8 | Legacy interoperability shims or reserved growth |
  • Discriminator width affects only the leading discriminator bytes; the rest of the layout is unchanged.
  • Widths above 8 bytes are rejected at macro expansion time.
  • Wider discriminators enlarge the variant space but increase CPI payload size and account rent by exactly the extra bytes.

Discriminator and payload versioning

| Change | Compatibility impact |
| --- | --- |
| Add a new enum variant | Usually backward-compatible if old clients ignore unknown variants |
| Change an existing variant value | Breaking for every historical byte slice |
| Reorder or remove struct fields | Breaking (offsets change) |
| Append fields to a struct | Mostly non-breaking, but consumers must accept the larger size |
| Switch primitive width (u8 → u16, etc.) | Breaking for serialized payloads at that boundary |

For on-chain accounts, treat layout as part of protocol ABI:

  • Keep field order stable.
  • Introduce optional version fields at the tail for in-place migration strategies.
  • Never change existing discriminator values in place.
  • When incompatible layout changes are required, perform explicit migration with a new account version and an operator upgrade flow.

For instruction payloads:

  • Prefer additive migration: add a new variant and keep legacy handlers for a release cycle.
  • Reject stale payload shapes with explicit errors rather than silently reinterpreting bytes.
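The additive-migration guidance can be sketched as a versioned parser. Names and byte shapes below are illustrative, not Pina APIs: a new variant is added next to the legacy one, and anything else fails with an explicit error instead of being reinterpreted:

```rust
// Additive payload migration sketch: the legacy V1 shape stays routable
// while V2 is introduced, and stale or unknown shapes are rejected loudly.
#[derive(Debug, PartialEq)]
enum Payload {
    TransferV1 { amount: u64 },
    TransferV2 { amount: u64, memo: [u8; 8] },
}

fn parse(data: &[u8]) -> Result<Payload, &'static str> {
    match data.split_first() {
        // Discriminator 0: legacy 8-byte amount payload.
        Some((&0, rest)) if rest.len() == 8 => Ok(Payload::TransferV1 {
            amount: u64::from_le_bytes(rest.try_into().unwrap()),
        }),
        // Discriminator 1: new variant appends an 8-byte memo.
        Some((&1, rest)) if rest.len() == 16 => Ok(Payload::TransferV2 {
            amount: u64::from_le_bytes(rest[..8].try_into().unwrap()),
            memo: rest[8..].try_into().unwrap(),
        }),
        // Never silently reinterpret bytes that do not match a known shape.
        _ => Err("unknown or malformed payload"),
    }
}

fn main() {
    let mut v1 = vec![0u8];
    v1.extend_from_slice(&5u64.to_le_bytes());
    assert_eq!(parse(&v1), Ok(Payload::TransferV1 { amount: 5 }));
    assert!(parse(&[7, 0, 0]).is_err());
}
```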

Errors


Anchor

#![allow(unused)]
fn main() {
#[error_code]
pub enum MyError {
	#[msg("Value is too large")]
	ValueTooLarge,
}
}

Anchor assigns error codes starting at 6000 and provides #[msg] for error messages.

Pina

#![allow(unused)]
fn main() {
#[error]
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum MyError {
	ValueTooLarge = 6000,
}
}

Pina’s #[error] macro generates From<MyError> for ProgramError using ProgramError::Custom(code). You choose the numeric code explicitly. To return an error:

#![allow(unused)]
fn main() {
return Err(MyError::ValueTooLarge.into());
}

See examples/anchor_errors for a complete parity port of Anchor’s error handling, including guard helpers like require_eq and require_gt.
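Roughly what the #[error] macro's generated conversion amounts to, as a hand-written approximation (not the actual macro output; the local ProgramError enum stands in for the real type):

```rust
// Hand-written approximation of the #[error] expansion: the enum carries
// explicit numeric codes and converts into a Custom program error.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum MyError {
    ValueTooLarge = 6000,
}

#[derive(Debug, PartialEq)]
enum ProgramError {
    Custom(u32),
}

impl From<MyError> for ProgramError {
    fn from(e: MyError) -> Self {
        // The chosen discriminant becomes the custom error code.
        ProgramError::Custom(e as u32)
    }
}

fn check(value: u64) -> Result<(), ProgramError> {
    if value > 100 {
        return Err(MyError::ValueTooLarge.into());
    }
    Ok(())
}

fn main() {
    assert!(check(7).is_ok());
    assert_eq!(check(101), Err(ProgramError::Custom(6000)));
}
```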

Events


Anchor

#![allow(unused)]
fn main() {
#[event]
pub struct MyEvent {
	pub data: u64,
	pub label: String,
}

emit!(MyEvent {
	data: 5,
	label: "hello".into()
});
}

Pina

#![allow(unused)]
fn main() {
#[discriminator]
pub enum EventDiscriminator {
	MyEvent = 1,
}

#[event(discriminator = EventDiscriminator)]
#[derive(Debug)]
pub struct MyEvent {
	pub data: PodU64,
	pub label: [u8; 8],
}
}

Pina events are Pod structs with explicit discriminators, just like accounts and instructions. They do not have a built-in emit! macro – event emission is handled by writing bytes to the transaction log or via CPI patterns. The #[event] macro gives you HasDiscriminator, Pod, Zeroable, and TypedBuilder.
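Because events are plain Pod structs, emitting one reduces to laying out bytes discriminator-first. A hand-rolled sketch of that wire shape (byte-array fields stand in for the Pod wrappers; this is not Pina's serialization code):

```rust
// Illustrative event serialization: discriminator byte first, then the
// fixed-size fields in declaration order, exactly as they sit in memory.
#[repr(C)]
struct MyEvent {
    discriminator: [u8; 1], // e.g. EventDiscriminator::MyEvent = 1
    data: [u8; 8],          // little-endian u64, in the spirit of PodU64
    label: [u8; 8],
}

impl MyEvent {
    fn to_bytes(&self) -> [u8; 17] {
        let mut out = [0u8; 17];
        out[0] = self.discriminator[0];
        out[1..9].copy_from_slice(&self.data);
        out[9..17].copy_from_slice(&self.label);
        out
    }
}

fn main() {
    let event = MyEvent {
        discriminator: [1],
        data: 5u64.to_le_bytes(),
        label: *b"hello\0\0\0",
    };
    let bytes = event.to_bytes();
    assert_eq!(bytes[0], 1);
    assert_eq!(u64::from_le_bytes(bytes[1..9].try_into().unwrap()), 5);
}
```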

See examples/anchor_events for the full parity port.

CPI (Cross-Program Invocation)


Anchor

#![allow(unused)]
fn main() {
let cpi_accounts = Transfer {
	from: ctx.accounts.from.to_account_info(),
	to: ctx.accounts.to.to_account_info(),
	authority: ctx.accounts.authority.to_account_info(),
};
let cpi_ctx = CpiContext::new(ctx.accounts.token_program.to_account_info(), cpi_accounts);
token::transfer(cpi_ctx, amount)?;
}

Pina

#![allow(unused)]
fn main() {
token_2022::instructions::TransferChecked {
	from: self.from,
	to: self.to,
	authority: self.authority,
	amount,
	mint: self.mint,
	decimals,
	token_program: self.token_program.address(),
}
.invoke()?;
}

Pina’s CPI helpers (enabled with features = ["token"]) are typed instruction builders. Fill in the struct and call .invoke() or .invoke_signed(&signers) for PDA-authorized calls. No CpiContext wrapper is needed.

See examples/escrow_program for CPI usage with both token transfers and ATA creation.

Account creation


Anchor

#![allow(unused)]
fn main() {
#[account(init, payer = user, space = 8 + 32 + 8)]
pub my_account: Account<'info, MyData>,
}

Pina

#![allow(unused)]
fn main() {
// For PDA accounts:
create_program_account_with_bump::<MyData>(
	self.my_account,
	self.payer,
	&ID,
	seeds,
	bump,
)?;

// For regular accounts:
create_account(
	self.payer,
	self.my_account,
	size_of::<MyData>(),
	&ID,
)?;
}

Space is automatically computed from size_of::<MyData>() for the PDA helper. For create_account you pass the size explicitly. In both cases, rent-exemption lamports are calculated and transferred automatically.
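As a back-of-envelope check on where the transferred lamports come from, the rent-exemption minimum under Solana's default rent parameters (constants assumed from the runtime defaults, not Pina APIs) works out as:

```rust
// Sketch of the rent-exemption calculation with the runtime's default
// parameters: 128 bytes of per-account overhead, 3,480 lamports per
// byte-year, and a 2-year exemption threshold. The framework's helpers
// compute this for you; this only shows where the numbers come from.
const ACCOUNT_STORAGE_OVERHEAD: u64 = 128;
const LAMPORTS_PER_BYTE_YEAR: u64 = 3_480;
const EXEMPTION_THRESHOLD_YEARS: u64 = 2;

fn minimum_balance(data_len: u64) -> u64 {
    (ACCOUNT_STORAGE_OVERHEAD + data_len) * LAMPORTS_PER_BYTE_YEAR * EXEMPTION_THRESHOLD_YEARS
}

fn main() {
    // A zero-data account needs 890,880 lamports to be rent-exempt.
    assert_eq!(minimum_balance(0), 890_880);
    // A 41-byte state account (1-byte discriminator + 32 + 8) needs more.
    assert_eq!(minimum_balance(41), 1_176_240);
}
```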

no_std and the entrypoint


Anchor programs use #[program] which generates the entrypoint. Pina programs are #![no_std] and use a feature-gated entrypoint module:

#![allow(unused)]
#![no_std]

fn main() {
#[cfg(feature = "bpf-entrypoint")]
pub mod entrypoint {
	use pina::*;

	use super::*;

	nostd_entrypoint!(process_instruction);

	#[inline(always)]
	pub fn process_instruction(
		program_id: &Address,
		accounts: &[AccountView],
		data: &[u8],
	) -> ProgramResult {
		let instruction: MyInstruction = parse_instruction(program_id, &ID, data)?;

		match instruction {
			MyInstruction::Initialize => InitializeAccounts::try_from(accounts)?.process(data),
		}
	}
}
}

The feature gate means tests compile without BPF entrypoint overhead. The nostd_entrypoint! macro wires up the BPF program entrypoint, a minimal panic handler, and a no-allocation stub.

Testing


Anchor

Anchor programs are typically tested with TypeScript/Mocha tests that run against a local validator via anchor test.

Pina

Pina programs are tested as regular Rust libraries:

#![allow(unused)]
fn main() {
#[cfg(test)]
mod tests {
	use super::*;

	#[test]
	fn discriminator_roundtrip() {
		assert!(MyInstruction::try_from(0u8).is_ok());
		assert!(MyInstruction::try_from(99u8).is_err());
	}
}
}

For integration tests, use mollusk-svm (a Solana SVM simulator) instead of a full validator:

[dev-dependencies]
mollusk-svm = { workspace = true }

This gives you fast, deterministic tests without network I/O.

Migration checklist


  1. Replace anchor_lang::prelude::* with use pina::*.
  2. Convert #[account] structs from Borsh to Pod types (PodU64, PodBool, Address, fixed-size arrays).
  3. Define explicit #[discriminator] enums for instructions and accounts.
  4. Replace #[account(...)] constraint attributes with validation chain calls in process.
  5. Replace Context<T> with #[derive(Accounts)] structs and ProcessAccountInfos.
  6. Replace CpiContext patterns with Pina’s typed CPI instruction builders.
  7. Replace #[error_code] with #[error] and explicit numeric codes.
  8. Replace #[event] + emit! with Pina’s Pod-based event structs.
  9. Add #![no_std] and the bpf-entrypoint feature gate.
  10. Port TypeScript tests to Rust using mollusk-svm or native unit tests.

Anchor Test Porting

This page tracks sequential parity ports from solana-foundation/anchor/tests into examples/, using Rust-first tests (mollusk/native unit tests) instead of TypeScript.

Port Status

  • anchor-cli-account (no direct parity yet; Anchor CLI account decoding over dynamic Vec/String data is not a direct pina/no-std match)
  • anchor-cli-idl (no direct parity yet; Anchor CLI IDL account lifecycle is Anchor-CLI-specific)
  • auction-house
  • bench
  • bpf-upgradeable-state
  • cashiers-check
  • cfo
  • chat
  • composite
  • cpi-returns
  • custom-coder
  • custom-discriminator
  • custom-program
  • declare-id -> examples/anchor_declare_id
  • declare-program -> examples/anchor_declare_program (adapted)
  • duplicate-mutable-accounts -> examples/anchor_duplicate_mutable_accounts (adapted)
  • errors -> examples/anchor_errors (adapted)
  • escrow -> examples/escrow_program (adapted with parity-focused tests)
  • events -> examples/anchor_events (adapted event schema parity)
  • floats -> examples/anchor_floats
  • idl
  • ido-pool
  • interface-account
  • lazy-account
  • lockup
  • misc
  • multiple-suites
  • multiple-suites-run-single
  • multisig
  • optional
  • pda-derivation
  • pyth
  • realloc -> examples/anchor_realloc (adapted)
  • relations-derivation
  • safety-checks
  • spl
  • swap
  • system-accounts -> examples/anchor_system_accounts (adapted)
  • sysvars -> examples/anchor_sysvars (adapted)
  • test-instruction-validation
  • tictactoe
  • typescript
  • validator-clone
  • zero-copy

Security Model

Pina’s safety posture is built around explicit validation and predictable state transitions.

Core invariants

  • Type correctness: account bytes must match expected discriminator and layout.
  • Authority correctness: signer/owner checks must precede mutation.
  • PDA correctness: seed and bump checks must gate PDA-bound operations.
  • Value correctness: arithmetic and balance mutations must be checked.

Version-safe binary layout and compatibility

The discriminator-first model makes byte layout part of protocol compatibility. Treat every #[account] struct as ABI:

  • Do not reorder fields.
  • Do not change existing discriminator values.
  • Do not alter field types in-place without migration.
  • If a struct grows, treat it as a new versioned shape and migrate state explicitly.

Discriminator and payload versioning

| Change | Compatibility impact |
| --- | --- |
| Add a new enum variant | Usually backward-compatible if old clients ignore unknown variants |
| Change an existing variant value | Breaking for every historical byte slice |
| Reorder or remove struct fields | Breaking (offsets change) |
| Append fields to a struct | Mostly non-breaking, but consumers must accept the larger size |
| Switch primitive width (u8 → u16, etc.) | Breaking for serialized payloads at that boundary |

For on-chain accounts, treat layout as part of protocol ABI:

  • Keep field order stable.
  • Introduce optional version fields at the tail for in-place migration strategies.
  • Never change existing discriminator values in place.
  • When incompatible layout changes are required, perform explicit migration with a new account version and an operator upgrade flow.

For instruction payloads:

  • Prefer additive migration: add a new variant and keep legacy handlers for a release cycle.
  • Reject stale payload shapes with explicit errors rather than silently reinterpreting bytes.

High-priority guardrails

  • Prefer checked arithmetic (checked_add, checked_sub) for all user-facing or balance-affecting values.
  • Ensure all token account types used by helper traits implement AccountValidation.
  • Keep close/transfer helpers conservation-safe (no temporary double-crediting).
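The checked-arithmetic guardrail, as a minimal sketch (the error enum is a local stand-in for the real program error type):

```rust
// Checked arithmetic that surfaces overflow/underflow as a program error
// instead of wrapping silently, for balance-affecting values.
#[derive(Debug, PartialEq)]
enum ProgramError {
    ArithmeticOverflow,
}

fn credit(balance: u64, amount: u64) -> Result<u64, ProgramError> {
    balance.checked_add(amount).ok_or(ProgramError::ArithmeticOverflow)
}

fn debit(balance: u64, amount: u64) -> Result<u64, ProgramError> {
    balance.checked_sub(amount).ok_or(ProgramError::ArithmeticOverflow)
}

fn main() {
    assert_eq!(credit(10, 5), Ok(15));
    assert_eq!(credit(u64::MAX, 1), Err(ProgramError::ArithmeticOverflow));
    assert_eq!(debit(3, 5), Err(ProgramError::ArithmeticOverflow));
}
```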

Best practices

  • Always call assert_signer() before trusting authority accounts
  • Always call assert_owner() / assert_owners() before as_token_*() methods
  • Always call assert_empty() before account initialization to prevent reinitialization attacks
  • Always verify program accounts with assert_address() / assert_program() before CPI invocations
  • Use assert_type::<T>() to prevent type cosplay — it checks discriminator, owner, and data size
  • Use close_with_recipient() with zeroed() to safely close accounts and prevent revival attacks
  • Prefer assert_seeds() / assert_canonical_bump() over assert_seeds_with_bump() to enforce canonical PDA bumps
  • Namespace PDA seeds with type-specific prefixes to prevent PDA sharing across account types

Testing strategy

  • Unit tests for negative validation cases.
  • Regression tests for every previously fixed bug class.
  • Integration tests for cross-account invariants where mutation order matters.

Development Workflow

Daily loop

devenv shell
cargo build --all-features
cargo test
lint:all
verify:docs
verify:security
test:idl

Formatting and linting

  • Rust and markdown formatting are enforced through dprint.
  • Clippy runs with strict workspace lint settings.

Reusable documentation blocks

  • Template providers live in template.t.md.
  • Run docs:sync after changing provider blocks to refresh all consumer blocks.
  • Run docs:check (or verify:docs) in CI to ensure docs stay synchronized.

Dependency/tooling updates

update:deps

Codama/IDL workflow

# Regenerate all example IDLs.
codama:idl:all

# Generate clients from Codama JSON.
codama:clients:generate

# Full Codama pipeline (build CLI, generate IDLs, generate clients, checks).
codama:test

# CI-oriented IDL validation.
test:idl

Dependency security

  • security:deny runs policy checks (license allow-list, source restrictions, dependency bans).
  • security:audit runs RustSec vulnerability checks over Cargo.lock.
  • verify:security runs both checks.

Coverage

Generate coverage locally for pina and pina_cli:

coverage:all

This produces an LCOV report at target/coverage/lcov.info.

For experimental Solana-VM coverage collection (non-blocking), run:

coverage:vm:experimental

Changesets

Any code changes in crates/ or examples/ should include a file in .changeset/ describing impact and release type.

CI and Releases

CI jobs

The GitHub CI workflow verifies:

  • lint:clippy
  • lint:format
  • verify:docs
  • verify:security
  • test:all (cargo test --all-features --locked)
  • test:anchor-parity (Anchor parity examples + pina_bpf nightly build (-Z build-std=core,alloc) + ignored BPF artifact verification tests)
  • test:idl (regenerate codama/idls, codama/clients/rust, codama/clients/js, validate outputs, and fail on any diff)
  • cargo build --locked
  • cargo build --all-features --locked

This keeps code quality, behavior, and documentation build health aligned.

Coverage

The coverage workflow runs focused coverage with cargo llvm-cov and publishes an LCOV artifact:

  • Command: coverage:all
  • Artifact: target/coverage/lcov.info
  • Optional upload: Codecov (fail_ci_if_error: false)

Docs publishing

The docs-pages workflow publishes the mdBook to GitHub Pages:

  • Trigger: pushes to main that touch docs + GitHub Release published
  • Build command: docs:build (output in docs/book)
  • Deploy target: GitHub Pages (https://pina-rs.github.io/pina/)

CLI asset releases

The assets workflow only publishes binaries for CLI tags:

  • Required tag format: pina_cli/v<version>
  • Tag/version check: release tag must match crates/pina_cli/Cargo.toml
  • Build scope: crates/pina_cli only (package = "pina_cli")

Release workflow

Use knope for changelog/release management:

knope document-change
knope release
knope publish

Keep changeset descriptions explicit and user-impact focused.

Review Follow-ups

This project tracks previously ignored pull-request feedback and resolves the items that still apply to the current codebase.

Addressed items

  • Enabled solana-address curve25519 feature to ensure PDA helper APIs are available in host builds.
  • Replaced unchecked current + 1 increment in the counter example with checked arithmetic and ProgramError::ArithmeticOverflow on failure.
  • Fixed stale hello example docs that described behavior not present in code.
  • Added missing AccountValidation implementations for all token account/mint types used by token conversion helpers.

Explicitly ignored as not relevant

Some unresolved comments pointed to paths that no longer exist in the current repository (for example removed historical security/ and lints/ paths). These were not applied because there is no active code location to patch.

Recommendations

This section contains concrete suggestions to better align the codebase with Pina’s goals.

1. Add performance regression baselines

Goal alignment: low compute units.

  • Add benchmark harnesses for high-volume instruction paths (counter increment, escrow state transitions, token flows).
  • Track baseline CU budgets in CI and fail when regressions exceed threshold.
  • Keep benchmark inputs deterministic and versioned.

2. Strengthen feature-matrix testing

Goal alignment: no_std reliability + maintainability.

  • Test a matrix of feature combinations (default, --no-default-features, --features token, --all-features).
  • Include bpfel-unknown-none build checks for all example programs.
  • Add one CI lane for docs/tests under minimal features to catch accidental default-feature coupling.

3. Expand security regression coverage

Goal alignment: safety.

  • Add explicit regression tests for arithmetic overflow/underflow paths.
  • Add tests for token transfer edge cases (insufficient funds, overflow on destination).
  • Add tests for each account close/transfer helper to verify lamport conservation invariants.

4. Improve macro diagnostics quality

Goal alignment: developer experience.

  • Add compile-fail tests for malformed macro attributes and unsupported discriminator configurations.
  • Improve error messages to include expected/actual forms and actionable fix text.
  • Maintain a docs page mapping macro attributes to generated behaviors.

5. Centralize architecture decision records

Goal alignment: maintainability.

  • Add ADR-style markdown files (for example, discriminator approach, token feature boundaries, no-allocator policy).
  • Require new architecture-impacting PRs to link/update an ADR.

6. Publish a migration guide from Anchor-style patterns

Goal alignment: adoption.

  • Document direct mapping from common Anchor concepts to Pina equivalents.
  • Provide before/after examples for account validation and instruction routing.
  • Include expected CU/dependency differences for realistic workloads.