# Arete — Full Documentation > Arete is a system for programmable real-time data feeds on Solana. Stream any on-chain data to your app via WebSocket. Define data shapes in a Rust DSL, deploy, and consume with typed TypeScript, React, or Rust SDKs. This document concatenates every page of the Arete documentation. For a smaller index with links, see [llms.txt](https://docs.arete.run/llms.txt). For an abridged version, see [llms-small.txt](https://docs.arete.run/llms-small.txt). --- This is the full developer documentation for Arete # What is Arete? > A plain-English introduction to Arete and what you can build with it. **Programmable data streams for Solana.** You declare what you want — which programs, which accounts, which fields — and Arete delivers a typed, live stream straight to your app. No polling, no backend plumbing, no data wrangling. *** ## The Simple Version [Section titled “The Simple Version”](#the-simple-version) Here’s the flow: 1 ### Solana Blockchain Someone trades a token, mines ORE, or stakes → 2 ### Arete Transforms the raw data into what you need → 3 ### Your App Your dashboard updates in real-time 1. **Something happens on Solana** — a trade, a transfer, a vote, anything 2. **Arete processes it** — turns raw blockchain data into clean, structured information 3. **Your app gets it instantly** — no delay, no polling, no manual refreshing *** ## What Can You Build? 
[Section titled “What Can You Build?”](#what-can-you-build) Arete is especially good for apps that need **live data**: * **Dashboards** — Show mining stats, token prices, or DeFi positions updating in real-time * **Real-time UIs** — Build interfaces that react instantly to on-chain state changes without a single refresh * **Trading tools** — Display live order flow, liquidity changes, or market metrics * **Portfolio trackers** — Track wallet balances and activity as they happen * **Analytics** — Aggregate on-chain events (totals, counts, trends) without building your own backend * **Monitoring** — Watch specific programs or accounts for activity *** ## How People Build With It [Section titled “How People Build With It”](#how-people-build-with-it) Without Arete, getting live on-chain data means writing polling loops, handling RPC rate limits, and parsing raw account bytes. It’s a lot of plumbing before you write a single line of actual product. With Arete, you declare the stream — Arete handles the rest. The typed data flows to wherever you consume it: ```tsx // React UI — component re-renders live as chain state changes const { data: round } = views.OreRound.latest.use(); ``` ```ts // TypeScript backend — process events as they arrive const stream = await stack.views.OreRound.latest.subscribe(); for await (const round of stream) { await db.upsert(round); } ``` ```rust // Rust service — zero-copy typed events at the edge let mut stream = stack.views().ore_round().latest().subscribe().await?; while let Some(round) = stream.next().await { process(round?); } ``` The data is fully typed regardless of how you consume it. It updates the moment the chain does. ### Two ways to get there [Section titled “Two ways to get there”](#two-ways-to-get-there) **AI-assisted** — Describe what you want in plain English. Your AI coding tool (Cursor, Claude Code, etc.) writes all the Arete code for you. 
Most people have something running in under 30 minutes — no prior coding experience needed. **Direct SDK** — Use the React, TypeScript, or Rust SDKs yourself. Full type safety, React hooks, streaming iterators for backends, and native Rust for high-performance services. *** ## Key Ideas [Section titled “Key Ideas”](#key-ideas) Arete is built around a simple contract: **you declare what you want, we deliver the stream.** **Stacks** are your declaration. A Stack is a named collection of semantically related data — the accounts, fields, and programs needed for a specific feature or app. That data might come from one program or many. When you connect to a Stack, you’re saying *“give me everything I need for this.”* Existing stacks (like ORE) are ready to use. You can also build your own. **Entities** are the individual pieces of data a stack defines. Each entity represents a distinct concept in your app — a round, a miner, a treasury. You write them as Rust structs that declare exactly which on-chain fields you care about, across as many accounts and programs as needed. The ORE stack defines three: | Entity | What it represents | | ------------- | ---------------------------------------------------------- | | `OreRound` | A single mining round — state, results, grid data, entropy | | `OreTreasury` | The protocol-wide treasury account | | `OreMiner` | A miner’s rewards, state, and automation config | **Views** are projections over an entity’s data — they define what slice of the stream your app subscribes to. Every entity gets `list` (all items) and `state` (one item by key) by default. Custom views add sorted or filtered variants on top. 
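As an illustration of these defaults, `state`, `list`, and a sorted custom view behave like projections over a keyed cache. The following is a self-contained sketch of the semantics in plain TypeScript; it is an illustration only, not the Arete SDK API:

```typescript
// Illustration only: models view semantics, not the Arete SDK API.
interface OreRound {
  round_id: number;
  motherlode: number;
}

const cache = new Map<number, OreRound>();
cache.set(1, { round_id: 1, motherlode: 100 });
cache.set(2, { round_id: 2, motherlode: 250 });

// "state": one item by key.
const state = (key: number) => cache.get(key);

// "list": all items.
const list = () => [...cache.values()];

// A custom view like "latest": a sorted projection (round_id descending).
const latest = () => list().sort((a, b) => b.round_id - a.round_id);

console.log(latest()[0]?.round_id); // → 2 (the newest round comes first)
```

In the real SDK these projections surface as views such as `OreRound/state`, `OreRound/list`, and `OreRound/latest`, kept up to date by the stream rather than by local writes.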
The ORE stack exposes: | View | Entity | What it returns | | ------------------- | ----------- | ------------------------------------- | | `OreRound/state` | OreRound | A single round by key | | `OreRound/list` | OreRound | All rounds | | `OreRound/latest` | OreRound | Rounds sorted by round\_id descending | | `OreTreasury/state` | OreTreasury | The treasury account | | `OreTreasury/list` | OreTreasury | All treasury records | | `OreMiner/state` | OreMiner | A single miner by wallet address | | `OreMiner/list` | OreMiner | All miners | Note that views maintain a rolling cache — they’re optimised for live data, not full historical queries. Full historical access is on the roadmap. **SDKs** are the wire. React hooks, a TypeScript client, and a Rust client all connect to the same streams. Use whichever fits your stack — or mix them across services in the same project. *** ## Next Steps [Section titled “Next Steps”](#next-steps) New to Coding? Set up AI coding tools and build your first Arete app with zero programming experience. [Set up your tools →](/agent-skills/setup-tools/) Already a Developer? Jump straight to the SDK and start streaming live Solana data in minutes. [Quickstart →](/using-stacks/quickstart/) # AI Tooling Setup > Get started with Arete by setting up an AI coding assistant that writes code for you. You don’t need to know how to code to build with Arete. AI coding tools can write all the code for you — you just tell them what you want in plain English. Arete works with any AI coding agent. This guide uses **Cursor** as the example, but the same steps apply to Claude Code, Windsurf, OpenCode, or any other agent you prefer. *** ## What You’ll Need [Section titled “What You’ll Need”](#what-youll-need) * A computer (Mac, Windows, or Linux) * An internet connection * About 10 minutes That’s it. No programming experience required. 
*** ## Step 1: Install Cursor [Section titled “Step 1: Install Cursor”](#step-1-install-cursor) [Cursor](https://cursor.sh) is an AI-powered code editor — a good starting point if you don’t already have a tool you prefer. 1. Go to [cursor.sh](https://cursor.sh) and download the version for your OS 2. Open the downloaded file and follow the installation prompts 3. Launch Cursor and create a free account Already have an AI coding tool? If you’re already using Claude Code, Windsurf, OpenCode, or another agent, skip to [Step 2](#step-2-create-a-project-folder). Arete works with all of them. Cursor is free to start Cursor offers a free tier with limited AI queries per month — plenty for getting started. *** ## Step 2: Create a Project Folder [Section titled “Step 2: Create a Project Folder”](#step-2-create-a-project-folder) Create a folder for your project and open it in your AI tool. **On Mac:** Open Finder, navigate to Documents (or wherever you keep projects), right-click → **New Folder**, name it `my-arete-app`, then open it in Cursor via **File → Open Folder**. **On Windows:** Open File Explorer, navigate to Documents, right-click → **New → Folder**, name it `my-arete-app`, then open it in Cursor via **File → Open Folder**. **On Linux / terminal-based tools:** ```bash mkdir ~/my-arete-app cd ~/my-arete-app ``` Then open the folder in your editor, or start your agent in that directory. *** ## Step 3: Let Your AI Set Up Arete [Section titled “Step 3: Let Your AI Set Up Arete”](#step-3-let-your-ai-set-up-arete) Paste this prompt into your AI assistant’s chat: Read and follow the instructions to set up Arete in this project In Cursor, press `Cmd+L` (Mac) or `Ctrl+L` (Windows) to open the AI chat sidebar, paste the prompt, and press Enter. Approve any commands it asks to run. For terminal-based agents (Claude Code, OpenCode, etc.), just paste it directly and press Enter. ### What happens next [Section titled “What happens next”](#what-happens-next) Your AI will: 1. 
**Install the Arete CLI** — the command-line tool for Arete 2. **Install agent skills** — teaching itself how to use Arete correctly 3. **Discover available data** — finding out what blockchain data you can access 4. **Be ready to build** — waiting for your next instruction This takes about 1–2 minutes. *** ## Step 4: Scaffold the ORE Template [Section titled “Step 4: Scaffold the ORE Template”](#step-4-scaffold-the-ore-template) Run this in your terminal to create a working ORE dashboard:

```bash
npx @usearete/a4 create my-ore-app --template react-ore
cd my-ore-app
npm install
npm run dev
```

Open [localhost:5173](http://localhost:5173) and you’ll see live ORE mining data streaming from Solana. *** ## Step 5: Customise It With Your AI [Section titled “Step 5: Customise It With Your AI”](#step-5-customise-it-with-your-ai) Now open the project in Cursor and try this prompt: I have an Arete ORE dashboard running using the react-ore template. Update the mining grid so each square is coloured as a heatmap based on the amount of SOL deployed to that square — more SOL should glow brighter. Keep the existing dark theme. Your AI already knows how to use Arete from Step 3 — it will find the right views and update the component for you. *** ## Troubleshooting [Section titled “Troubleshooting”](#troubleshooting) **“Command not found” errors** — You may need to install Node.js first. Download the LTS version from [nodejs.org](https://nodejs.org), install it, then restart your AI tool and try again. **The AI seems confused** — Make sure it completed Step 3. You can re-paste the setup prompt to re-run it. *** ## Next Steps [Section titled “Next Steps”](#next-steps) Build a Dashboard Follow our step-by-step tutorial to build a complete ORE mining dashboard. [Start tutorial →](/agent-skills/tutorial-ore-dashboard/) Copy-Paste Prompts Browse our cookbook of ready-to-use prompts for common tasks.
[View prompts →](/agent-skills/prompts/) # Quickstart > Run the ORE demo and see live Solana data streaming in minutes. This quickstart gets you streaming live Solana data in a few minutes using the ORE demo — a real, deployed Arete stack for the ORE mining program. It’s the fastest way to see Arete in action. What this covers This guide walks through the ORE demo only. It’s a good starting point to understand how Arete feels, but it doesn’t cover building your own stacks or integrating with your own on-chain programs. For those paths, see the [Next Steps](#next-steps) section below. Prefer AI-assisted development? If you’re using Cursor, Claude Code, or another AI coding tool, check out [Build with AI](/agent-skills/overview/) — paste one prompt and your AI sets everything up. *** ## Prerequisites [Section titled “Prerequisites”](#prerequisites) Choose how you want to run the CLI: ### Step 1: Install the CLI [Section titled “Step 1: Install the CLI”](#step-1-install-the-cli) * Cargo Install the native binary via Cargo: ```bash cargo install a4-cli ``` Then use the `a4` command: ```bash a4 create my-app ``` This is the same CLI — Cargo installs it as `a4`, and so does npm. * npx No installation needed — just run: ```bash npx @usearete/a4 create my-app ``` This downloads and runs the CLI without installing it globally. 
* npm Install globally via npm: ```bash npm install -g @usearete/a4 ``` Then use the `a4` command: ```bash a4 create my-app ``` *** ## Create Your App [Section titled “Create Your App”](#create-your-app) ### Step 2: Scaffold the project [Section titled “Step 2: Scaffold the project”](#step-2-scaffold-the-project) * Cargo ```bash a4 create my-app ``` * npx ```bash npx @usearete/a4 create my-app ``` * npm ```bash a4 create my-app ``` You’ll be prompted to select a template: | Template | Description | Run Command | | ---------------- | ---------------------- | ------------------------------------------------------------ | | `react-ore` | React + Vite dashboard | `npm run dev` → open [localhost:5173](http://localhost:5173) | | `typescript-ore` | TypeScript CLI client | `npm start` | | `rust-ore` | Rust + Tokio client | `cargo run` | About the ORE demo All templates use the same example: **ORE mining rounds** — a real, live Solana program. The templates differ only in language/framework. Each connects to the same public ORE stack and streams identical data. ### Step 3: Or specify the template directly [Section titled “Step 3: Or specify the template directly”](#step-3-or-specify-the-template-directly) * Cargo ```bash a4 create my-app --template rust-ore # or react-ore, typescript-ore ``` * npx ```bash npx @usearete/a4 create my-app --template react-ore # or typescript-ore, rust-ore ``` * npm ```bash a4 create my-app --template react-ore # or typescript-ore, rust-ore ``` That’s it. You’re streaming live Solana data. *** ## What You Just Built [Section titled “What You Just Built”](#what-you-just-built) The scaffolded app connects to a deployed **Arete Stack** — a streaming data pipeline that: 1. **Watches Solana** for ORE mining program activity 2. **Transforms raw transactions** into structured round data (round ID, motherlode, deployment totals) 3. 
**Streams updates** to your app via WebSocket as they happen on-chain 1 ### Solana ORE program → 2 ### Arete ORE stack → 3 ### Your App Live feed No RPC calls. No polling. No custom indexer. Just streaming data. *** ## Available Templates [Section titled “Available Templates”](#available-templates) | Template | Command | What You Get | | ------------------ | -------------------------------- | ----------------------------------------------------- | | **react-ore** | `npx @usearete/a4 create my-app` | React + Vite dashboard showing live ORE mining rounds | | **typescript-ore** | `npx @usearete/a4 create my-app` | TypeScript CLI that streams ORE data to your terminal | | **rust-ore** | `npx @usearete/a4 create my-app` | Rust + Tokio client streaming ORE round updates | All templates connect to the public ORE stack at `wss://ore.stack.arete.run`. You can also specify the template directly: ```bash npx @usearete/a4 create my-app --template react-ore ``` *** ## What’s Inside [Section titled “What’s Inside”](#whats-inside) * React Template The React template uses `arete-react` with a pre-built stack definition: App.tsx

```tsx
import { AreteProvider } from "arete-react";
import { OreDashboard } from "./components/OreDashboard";

export default function App() {
  return (
    <AreteProvider>
      <OreDashboard />
    </AreteProvider>
  );
}
```

components/OreDashboard.tsx

```tsx
import { useArete } from "arete-react";
import { ORE_STREAM_STACK } from "arete-stacks/ore";

export function OreDashboard() {
  const { views, isConnected } = useArete(ORE_STREAM_STACK);
  const { data: rounds } = views.OreRound.latest.use({ take: 5 });
  return (
    <div>
      <span>{isConnected ? "Live" : "Connecting..."}</span>
      <ul>
        {rounds?.map((round) => (
          <li key={round.id?.round_id}>
            Round #{round.id?.round_id} — Motherlode: {round.state?.motherlode}
          </li>
        ))}
      </ul>
    </div>
); } ``` * TypeScript Template The TypeScript template uses `arete-typescript` with typed views: main.ts ```typescript import { Arete } from "arete-typescript"; import { ORE_STREAM_STACK, type OreRound } from "arete-stacks/ore"; const a4 = await Arete.connect("wss://ore.stack.arete.run", { stack: ORE_STREAM_STACK, }); for await (const update of a4.views.OreRound.latest.watch({ take: 1 })) { if (update.type === "upsert" || update.type === "patch") { console.log(`Round #${update.data.id?.round_id}`); console.log(`Motherlode: ${update.data.state?.motherlode}`); } } ``` * Rust Template The Rust template uses `a4-sdk` with typed views: main.rs ```rust use a4_sdk::prelude::*; use a4_stacks::ore::{OreStack, OreRound}; #[tokio::main] async fn main() -> anyhow::Result<()> { let a4 = Arete::<OreStack>::connect().await?; let mut stream = a4.views.ore_round.latest().listen(); while let Some(round) = stream.next().await { println!("Round #{:?}", round.id.round_id); println!("Motherlode: {:?}", round.state.motherlode); } Ok(()) } ``` *** ## Using an AI Coding Agent? [Section titled “Using an AI Coding Agent?”](#using-an-ai-coding-agent) Install Arete agent skills so your AI can write correct code without you looking up docs: ```bash npx skills add AreteA4/skills ``` Now try asking your agent: “Show me the ORE mining round data in a table with live updates.” See [Build with AI](/agent-skills/overview/) for the full guide and prompt cookbook.
*** ## Next Steps [Section titled “Next Steps”](#next-steps) Now that you’ve seen Arete in action, where you go next depends on what you’re building: **Using an existing on-chain program that has an Arete stack?** * [Connect to a Stack](/using-stacks/connect/) — Add Arete to an existing project * [React SDK](/sdks/react/) — Build a full React app against a deployed stack * [TypeScript SDK](/sdks/typescript/) — Use Arete in Node.js, Vue, Svelte, or vanilla JS * [Rust SDK](/sdks/rust/) — Native Rust client **Have your own on-chain program and want to stream its data?** * [Build Your Own Stack](/building-stacks/workflow/) — Create a custom data stream for any Solana program **Want to understand what’s happening under the hood?** * [How It Works](/using-stacks/how-it-works/) — Stacks, Views, and how live data flows **Using an AI coding tool?** * [Build with AI](/agent-skills/overview/) — Let your agent write Arete code with the right context # Connect to a Stack > Add Arete to an existing project and stream live Solana data. This page shows how to add Arete to an existing project and connect to a deployed stack. It uses the public **ORE mining stack** as the example — no account or API key required. Does a stack exist for your program? This guide is for connecting to a stack that’s already deployed. If you want to stream data from your **own on-chain program**, you’ll need to build a stack for it first — see [Build Your Own Stack](/building-stacks/workflow/). Deploying custom stacks is currently in closed beta. [Get in touch](https://arete.run) if you’d like early access. Starting from scratch? If you’re creating a new project and just want to see Arete working with the ORE demo, the [Quickstart](/using-stacks/quickstart/) scaffolds everything automatically.
*** ## Install & Connect [Section titled “Install & Connect”](#install--connect) * React ### React: Install [Section titled “React: Install”](#react-install) ```bash npm install arete-react zustand ``` ### React: Add to your app [Section titled “React: Add to your app”](#react-add-to-your-app) App.tsx

```tsx
import { AreteProvider, useArete } from "arete-react";
import { ORE_STREAM_STACK } from "arete-stacks/ore";

function OreRounds() {
  const { views, isConnected } = useArete(ORE_STREAM_STACK);
  const { data: rounds, isLoading } = views.OreRound.latest.use({
    take: 5,
  });
  if (isLoading) return <div>Connecting...</div>;
  return (
    <div>
      <h2>Live ORE Mining Rounds {isConnected && "🟢"}</h2>
      <ul>
        {rounds?.map((round) => (
          <li key={round.id?.round_id}>
            Round #{round.id?.round_id} — Motherlode: {round.state?.motherlode}
          </li>
        ))}
      </ul>
    </div>
); } export default function App() { return ( <AreteProvider> <OreRounds /> </AreteProvider> ); } ``` ### React: Run it [Section titled “React: Run it”](#react-run-it) ```bash npm run dev ``` * TypeScript / Node.js ### TypeScript: Install [Section titled “TypeScript: Install”](#typescript-install) ```bash npm install arete-typescript ``` ### TypeScript: Create a script [Section titled “TypeScript: Create a script”](#typescript-create-a-script) stream.ts ```typescript import { Arete } from "arete-typescript"; import { ORE_STREAM_STACK, type OreRound } from "arete-stacks/ore"; async function main() { const a4 = await Arete.connect("wss://ore.stack.arete.run", { stack: ORE_STREAM_STACK, }); for await (const update of a4.views.OreRound.latest.watch()) { if (update.type === "upsert") { const round = update.data; console.log(`Round #${round.id?.round_id}`); console.log(`  Motherlode: ${round.state?.motherlode}`); } } } main().catch(console.error); ``` ### TypeScript: Run it [Section titled “TypeScript: Run it”](#typescript-run-it) ```bash npx tsx stream.ts ``` * Rust ### Rust: Add to Cargo.toml [Section titled “Rust: Add to Cargo.toml”](#rust-add-to-cargotoml) ```toml [dependencies] a4-sdk = "0.1.1" a4-stacks = "0.1.1" ``` ### Rust: Write the code [Section titled “Rust: Write the code”](#rust-write-the-code) src/main.rs ```rust use a4_sdk::prelude::*; use a4_stacks::ore::{OreStack, OreRound}; #[tokio::main] async fn main() -> anyhow::Result<()> { let a4 = Arete::<OreStack>::connect().await?; let mut stream = a4.views.ore_round.latest().listen(); while let Some(round) = stream.next().await { println!("Round #{:?}", round.id.round_id); println!("  Motherlode: {:?}", round.state.motherlode); } Ok(()) } ``` ### Rust: Run it [Section titled “Rust: Run it”](#rust-run-it) ```bash cargo run ``` * Browser (Raw WebSocket) For quick inspection without any SDK, open your browser console and paste: ```javascript const ws = new WebSocket("wss://ore.stack.arete.run"); ws.onopen = () => { ws.send(JSON.stringify({ type: "subscribe", view:
"OreRound/latest" })); }; ws.onmessage = (event) => { const data = JSON.parse(event.data); if (data.type === "upsert") console.log("Round update:", data.data); }; ``` *** ## How it Works [Section titled “How it Works”](#how-it-works) You connected to a deployed Arete stack. The ORE stack watches the Solana blockchain, extracts round data from on-chain transactions, and pushes typed updates to your app via WebSocket as they happen — no polling, no RPC calls, no indexer to run. 1 ### Solana ORE program on-chain → 2 ### Arete ORE stack (deployed) → 3 ### Your App Typed live stream The stack is public — just point your SDK at the WebSocket URL. *** ## About Stack SDKs [Section titled “About Stack SDKs”](#about-stack-sdks) A stack SDK tells the Arete client what entities and views are available, and provides the types for each. There are two ways to get one: **Pre-built for publicly deployed stacks (like ORE)** — We publish ready-to-use SDKs for both TypeScript and Rust: ```bash # TypeScript / React npm install arete-stacks ``` ```typescript import { ORE_STREAM_STACK } from "arete-stacks/ore"; ``` ```toml # Rust — add to Cargo.toml [dependencies] a4-stacks = "0.1.1" ``` ```rust use a4_stacks::ore::{OreStack, OreRound}; ``` **Generated from your own stack** — When you build a custom stack, use the CLI to generate an SDK for any language: ```bash a4 sdk create typescript my-stack a4 sdk create rust my-stack ``` Both approaches produce the same result: a typed SDK that works identically with the Arete client. 
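If you consume the raw WebSocket feed shown above instead of an SDK, you still need to fold incoming updates into local state. The sketch below is a hedged illustration: the `upsert`/`patch` message shapes are inferred from the examples on this page, not taken from a documented wire format, so verify them against the actual stream.

```typescript
// Assumed message shapes, inferred from the SDK examples on this page.
type RoundRecord = {
  id: { round_id: number };
  state?: Record<string, unknown>;
};
type StreamUpdate =
  | { type: "upsert"; data: RoundRecord } // full record
  | { type: "patch"; data: RoundRecord }; // partial record

// Fold one update into a local cache keyed by round_id.
function applyUpdate(cache: Map<number, RoundRecord>, update: StreamUpdate): void {
  const key = update.data.id.round_id;
  if (update.type === "upsert") {
    cache.set(key, update.data); // replace any cached copy wholesale
  } else {
    const prev = cache.get(key);
    // shallow-merge patched fields over whatever is already cached
    cache.set(key, {
      ...prev,
      ...update.data,
      state: { ...prev?.state, ...update.data.state },
    });
  }
}
```

Wire this into the `ws.onmessage` handler from the browser example, then read the current round out of the cache whenever you render.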
*** ## Available Public Stacks [Section titled “Available Public Stacks”](#available-public-stacks) | Stack | WebSocket URL | Data | | --------------------- | --------------------------- | ----------------------------- | | **ORE Mining Rounds** | `wss://ore.stack.arete.run` | Live ORE mining round updates | *** ## ORE Data Shape [Section titled “ORE Data Shape”](#ore-data-shape) Each `OreRound` update has this structure:

```json
{
  "id": {
    "round_id": 142857,
    "round_address": "7xKXtg2CW87d97TXJSDpbD5jBkheTqA83TZRuJosgAsU"
  },
  "state": {
    "expires_at": 312645000,
    "motherlode": 5000000000,
    "total_deployed": 125000000000,
    "total_vaulted": 12500000000,
    "total_winnings": 98500000000
  },
  "results": {
    "top_miner": "9WzDXwBbmkg8ZTbNMqUxvQRAyrZzDsGYdLVL9zYtAWWM",
    "top_miner_reward": 2500000000,
    "winning_square": 18,
    "did_hit_motherlode": false
  },
  "metrics": {
    "deploy_count": 1847,
    "total_deployed_sol": 125000000000,
    "checkpoint_count": 423
  }
}
```

| Section | Description | | --------- | ----------------------------------------------------------- | | `id` | Primary key (`round_id`) and lookup index (`round_address`) | | `state` | Current round state from on-chain account | | `results` | Round outcome including computed fields | | `metrics` | Aggregated counts and sums from instructions | View the source The full ORE stack definition is on GitHub: [stacks/ore/src/stack.rs](https://github.com/AreteA4/arete/blob/main/stacks/ore/src/stack.rs) *** ## Next Steps [Section titled “Next Steps”](#next-steps) * [How It Works](/using-stacks/how-it-works/) — Understand Stacks, Views, and how the stream model works * [React SDK](/sdks/react/) — React hooks and patterns in depth * [TypeScript SDK](/sdks/typescript/) — Use with Node.js, Vue, Svelte, or vanilla JS * [Rust SDK](/sdks/rust/) — Native Rust client * [Build Your Own Stack](/building-stacks/workflow/) — Stream data from your own on-chain program # Agent Skills > How the Arete CLI, agent skills, and agent.md work together to give your
AI agent everything it needs to build with Arete. New to coding? If you haven’t set up an AI coding tool yet, start with [AI Tooling Setup](/agent-skills/setup-tools/) which walks you through everything from scratch. Arete is designed to be built with AI agents. This page explains the three pieces that make that work: the **CLI**, the **agent skills**, and **agent.md**. *** ## The CLI [Section titled “The CLI”](#the-cli) The Arete CLI (`a4`) is the primary interface between your agent and Arete. It handles scaffolding, deployment, SDK generation, and — most importantly — live schema discovery. ```bash # Install via Cargo (recommended) cargo install a4-cli # Or via npm npm install -g @usearete/a4 ``` The key command for agents is `a4 explore`: ```bash a4 explore --json ``` This queries the Arete API and returns the live schemas for all available stacks — their entities, fields, views, and types. Because your agent runs this at setup time, it always works with accurate, up-to-date type information rather than guessing from training data. *** ## Agent Skills [Section titled “Agent Skills”](#agent-skills) Agent skills are markdown files that teach your agent how to use Arete correctly. They’re installed into your project so your agent picks them up automatically as context. Three skills are installed: | Skill | What it teaches | | --------------- | ------------------------------------------------------------------------- | | `arete` | Router skill — detects intent and routes the agent to the right sub-skill | | `arete-consume` | SDK patterns for connecting to streams in TypeScript, React, and Rust | | `arete-build` | Rust DSL syntax for building custom stacks from Solana program IDLs | Install them manually with: ```bash npx skills add AreteA4/skills ``` Once installed, your agent knows the correct hook names, view patterns, subscription syntax, and CLI commands. Without the skills, an agent will hallucinate API shapes based on outdated training data. 
*** ## agent.md [Section titled “agent.md”](#agentmd) `agent.md` is a plain text file hosted at `https://docs.arete.run/agent.md`. It’s the bootstrap instruction set — a single URL your agent can read to set everything up from scratch. When your agent reads `agent.md`, it: 1. Installs the Arete CLI (prefers Cargo, falls back to npm) 2. Installs the agent skills into the project 3. Runs `a4 explore --json` to load live schemas 4. Understands it’s ready to build This is why the one-liner works: Read and follow the instructions to set up Arete in this project *** ## How They Fit Together [Section titled “How They Fit Together”](#how-they-fit-together)

```plaintext
agent.md ──▶ CLI installed ──▶ a4 explore --json ──▶ live schemas
         ──▶ Skills installed ─▶ SDK patterns + DSL syntax

        ┌────────────────────────────┐
        │          AI Agent          │
        │ (Cursor, Claude Code, etc) │
        │                            │
        │   Skills + Live schemas    │
        └────────────┬───────────────┘
                     │  builds correct code
                     ▼
              Your Arete app
```

The CLI gives the agent **live data**. The skills give the agent **correct patterns**. Together they remove the two main failure modes: wrong types and wrong API usage. *** ## Prefer to Set Up Manually? [Section titled “Prefer to Set Up Manually?”](#prefer-to-set-up-manually) * New Project

```bash
cargo install a4-cli   # Or: npm install -g @usearete/a4
npx @usearete/a4 create my-app --template react-ore
cd my-app
npx skills add AreteA4/skills
```

* Existing Project

```bash
cargo install a4-cli   # Or: npm install -g @usearete/a4
npx skills add AreteA4/skills
```

For editor-specific file locations and manual configuration, see [Editor Setup](/agent-skills/setup/).
*** ## Next Steps [Section titled “Next Steps”](#next-steps) * [Prompt Cookbook](/agent-skills/prompts/) — Copy-paste prompts for common tasks * [Tutorial: ORE Dashboard](/agent-skills/tutorial-ore-dashboard/) — Build a complete app step by step * [Schema Discovery](/agent-skills/explore/) — How `a4 explore` provides live type information * [Editor Setup](/agent-skills/setup/) — Manual setup for specific editors # Prompt Cookbook > Copy-paste prompts for building with Arete using AI coding agents. Curated prompts to help your AI agent build with Arete. These are designed to be copied directly into your agent chat (Cursor, Claude Code, etc.) to automate discovery and implementation. Skills required These prompts assume the Arete agent skills are installed. The skills teach your agent the SDK patterns, CLI commands, and when to run `a4 explore` — so you don’t have to spell it out in every prompt. Not installed yet? See [Editor Setup](/agent-skills/setup/) to get started. *** ## Consuming ORE (React) [Section titled “Consuming ORE (React)”](#consuming-ore-react) ### Live ORE Mining Dashboard [Section titled “Live ORE Mining Dashboard”](#live-ore-mining-dashboard) Build a React + Vite app that shows live ORE mining data. Display: * Current round number and motherlode amount * Total miners and total SOL deployed * Countdown to round expiration * Connection status indicator Use Tailwind for styling. **What you should get:** A React app with `AreteProvider`, `useArete` hook, live-updating round stats, and a connection badge. ### ORE Round History Table [Section titled “ORE Round History Table”](#ore-round-history-table) Build a React component that shows a table of historical ORE mining rounds. Columns: Round #, Motherlode, Total Miners, Deploy Count, Top Miner (truncated address), and whether the motherlode was hit. Sort by round descending. Add a filter for minimum motherlode amount. 
**What you should get:** A paginated or scrollable table showing historical ORE rounds with real-time updates as new rounds complete. ### Live ORE Grid Heatmap [Section titled “Live ORE Grid Heatmap”](#live-ore-grid-heatmap) Build a React component that visualizes the current ORE mining round as an interactive 5x5 grid heatmap. Use `OreRound.latest` to get the live round data. For each square, show the deployed SOL amount (`state.deployed_per_square_ui`) with color intensity based on the amount. Show miner counts per square (`state.count_per_square`) on hover. Include a countdown to round expiration using `state.estimated_expires_at_unix`. Highlight the winning square (`results.winning_square`) when the round completes. **What you should get:** A live-updating visual grid showing where miners are deploying SOL in the current round, with real-time countdown and win detection. *** ## Consuming ORE (TypeScript CLI) [Section titled “Consuming ORE (TypeScript CLI)”](#consuming-ore-typescript-cli) ### ORE Round Monitor CLI [Section titled “ORE Round Monitor CLI”](#ore-round-monitor-cli) Build a TypeScript CLI tool that monitors ORE mining rounds in real-time. On each update, print: Round number, motherlode amount, total miners, total deployed, deploy count, and time until expiration. Exit cleanly on Ctrl+C. **What you should get:** A terminal utility that prints a clean, live-updating summary of every ORE round. ### ORE Treasury Tracker [Section titled “ORE Treasury Tracker”](#ore-treasury-tracker) Build a TypeScript script that tracks the ORE treasury state in real-time. Log: treasury balance, motherlode, total staked, total refined, and total unclaimed. Calculate and display the percentage of ORE that is staked vs unclaimed. **What you should get:** A script for monitoring treasury health and distribution metrics. 
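The staked-versus-unclaimed percentage that the treasury prompt asks for is plain ratio arithmetic. Here is a sketch using hypothetical field names; the real schema comes from `a4 explore ore OreTreasury --json`.

```typescript
// Hypothetical shape: these field names are placeholders, not the real schema.
interface TreasurySnapshot {
  totalStaked: bigint;    // base units
  totalUnclaimed: bigint; // base units
}

// Split outstanding ORE into staked vs unclaimed, as percentages (2 d.p.).
function stakedSplit(t: TreasurySnapshot): { stakedPct: number; unclaimedPct: number } {
  const total = t.totalStaked + t.totalUnclaimed;
  if (total === 0n) return { stakedPct: 0, unclaimedPct: 0 };
  // scale by 10_000 before dividing so two decimal places survive bigint division
  const stakedPct = Number((t.totalStaked * 10_000n) / total) / 100;
  return { stakedPct, unclaimedPct: Math.round((100 - stakedPct) * 100) / 100 };
}

stakedSplit({ totalStaked: 750n, totalUnclaimed: 250n }); // → { stakedPct: 75, unclaimedPct: 25 }
```

Using `bigint` avoids precision loss on lamport-scale values, which overflow the safe integer range of `number` long before they overflow `bigint`.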
*** ## Building Custom Stacks [Section titled "Building Custom Stacks"](#building-custom-stacks) ### Build a custom stack for \[Protocol X] [Section titled "Build a custom stack for \[Protocol X\]"](#build-a-custom-stack-for-protocol-x) I want to build an Arete stack that tracks \[describe the protocol and what data you want]. The program ID is \[PROGRAM\_ID]. Find the IDL and build a stack that defines entities for the account types I want to track. **What you should get:** The agent finds or guides you to the IDL, then scaffolds a new Arete project with entities, field mappings, and views. ### Add a new entity to my stack [Section titled "Add a new entity to my stack"](#add-a-new-entity-to-my-stack) I have an existing Arete stack and I want to add a new entity that tracks \[describe account type]. The IDL is already at `idl/[program].json`. Add an entity for the \[AccountName] account with appropriate sections and field mappings, following the same patterns as the existing entities. **What you should get:** A new entity definition added to your `stack.rs` with correct type mappings from the IDL. *** ## Exploring and Discovery [Section titled "Exploring and Discovery"](#exploring-and-discovery) ### What data is available? [Section titled "What data is available?"](#what-data-is-available) Run `a4 explore --json` and tell me what Arete stacks are available. For each stack, run `a4 explore [stack-name] --json` to list the entities. Then for the most interesting entities, run `a4 explore [stack-name] [Entity] --json` to show me the fields and types. Summarize what data I can stream from each stack. **What you should get:** A comprehensive overview of all data you can currently access through your Arete installation. # Tutorial: ORE Dashboard > Build a live ORE mining dashboard by prompting your AI agent step by step. This tutorial shows you how to build a real-time ORE mining dashboard without writing any code.
You will use your AI agent to discover the data, set up the project, and implement the UI. New to coding? If you haven't set up a coding tool yet, start with the [Setup Guide](/agent-skills/setup-tools/) first — it takes about 5 minutes. *** ## Prerequisites [Section titled "Prerequisites"](#prerequisites) * Install agent skills: `npx skills add AreteA4/skills` * Install Arete CLI: `npm install -g @usearete/a4` * A code editor with an AI agent (Cursor, VS Code with Copilot, etc.) *** ## Step 1: Scaffold the Project [Section titled "Step 1: Scaffold the Project"](#step-1-scaffold-the-project) Start by creating the basic project structure and installing dependencies. **Prompt:** Create a new React + Vite + TypeScript project. Install these dependencies: arete-react, arete-stacks, tailwindcss. Set up Tailwind with the Vite plugin. *** ## Step 2: Discover the ORE Schema [Section titled "Step 2: Discover the ORE Schema"](#step-2-discover-the-ore-schema) Ask your agent to look at the available ORE data to understand what you can build. **Prompt:** Run `a4 explore ore --json` and `a4 explore ore OreRound --json` to understand the ORE stack schema. Summarize the available entities, views, and fields. *** ## Step 3: Build the Main Layout [Section titled "Step 3: Build the Main Layout"](#step-3-build-the-main-layout) Set up the application shell with a connection status indicator. **Prompt:** Create the main `App.tsx` with an `AreteProvider` (autoConnect=true). Create an `OreDashboard` component that will hold our mining data display. Use a dark theme with tailwind (bg-gray-900, text-white). Add a header that says "ORE Mining Dashboard" with a connection status indicator. *** ## Step 4: Add Live Round Data [Section titled "Step 4: Add Live Round Data"](#step-4-add-live-round-data) Connect to the live round view to show the current mining status. **Prompt:** In the `OreDashboard` component, use the `useArete` hook with `ORE_STREAM_STACK` from `arete-stacks/ore`.
Subscribe to the `OreRound.latest` view using `useOne()` to get the current round. Display: Round number (`id.round_id`), Motherlode amount (`state.motherlode`), Total miners (`state.total_miners`), Total deployed (`state.total_deployed`), and Deploy count (`metrics.deploy_count`). Format large numbers with commas. *** ## Step 5: Add Treasury Data [Section titled "Step 5: Add Treasury Data"](#step-5-add-treasury-data) Include high-level stats about the ORE treasury. **Prompt:** Add a section that shows the ORE treasury state. Use `OreTreasury.list` view with `useOne()`. Display: Total staked, Total refined, Total unclaimed, and Treasury balance. Put these in a grid of stat cards. *** ## Step 6: Add Round History [Section titled "Step 6: Add Round History"](#step-6-add-round-history) List previous rounds to show historical mining activity. **Prompt:** Add a section below the stats that shows recent rounds in a table. Use `OreRound.list` view with `use()`. Show the last 10 rounds with columns: Round #, Motherlode, Miners, Deployed, Motherlode Hit (boolean as a checkmark or X). Sort by `round_id` descending in the component. *** ## Step 7: Polish the UI [Section titled "Step 7: Polish the UI"](#step-7-polish-the-ui) Make the dashboard look professional and responsive. **Prompt:** Clean up the layout. Add subtle animations when data updates, such as a brief highlight or pulse effect. Make the stat cards have a slightly lighter background (`bg-gray-800`). Add a footer that says "Powered by Arete" with a link to `docs.arete.run`. Ensure the app is responsive on mobile devices. *** ## The Result [Section titled "The Result"](#the-result) Once your agent finishes the final step, you will have a fully functional, real-time dashboard. The app connects to the Arete ORE stream and updates automatically as new data hits the Solana blockchain. You built a complex, data-heavy application in minutes without writing a single line of manual code.
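The display formatting requested in Steps 3 and 4 (comma-separated numbers, a countdown to round expiration) is plain TypeScript. A sketch of helpers your agent might generate — the names are illustrative and not part of any SDK:

```typescript
// Sketch of display helpers for Steps 3-4. Names are illustrative.

// Format large numbers with commas, e.g. 1234567 -> "1,234,567".
function formatWithCommas(n: number): string {
  return n.toLocaleString("en-US");
}

// Seconds remaining until a unix timestamp (in seconds), clamped at zero,
// suitable for driving a countdown to `estimated_expires_at_unix`.
function secondsUntil(expiresAtUnix: number, nowMs: number = Date.now()): number {
  return Math.max(0, Math.floor(expiresAtUnix - nowMs / 1000));
}
```

In the dashboard, these would wrap the raw values from the `OreRound.latest` subscription before rendering.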
# TypeScript SDK The `arete-typescript` SDK is a framework-agnostic client for consuming streaming data from Arete. It uses `AsyncIterable`-based streaming and works in any JavaScript environment — Node.js, browsers, Deno, or Bun. Using React? If you’re building a React application, use the [React SDK](/sdks/react/) instead. It provides hooks and providers built on top of this core SDK. *** ## Installation [Section titled “Installation”](#installation) ```bash npm install arete-typescript ``` No peer dependencies. Works anywhere JavaScript runs. *** ## Quick Start [Section titled “Quick Start”](#quick-start) ```typescript import { Arete } from "arete-typescript"; import { ORE_STREAM_STACK, type OreRound } from "arete-stacks/ore"; // Connect using the stack (URL is embedded in the stack definition) const a4 = await Arete.connect(ORE_STREAM_STACK); // Stream entities with full type safety for await (const round of a4.views.OreRound.latest.use()) { console.log("Round:", round.id.round_id); } ``` *** ## Connection [Section titled “Connection”](#connection) ### Connect with Stack Definition [Section titled “Connect with Stack Definition”](#connect-with-stack-definition) Each stack definition includes its own URL, so connecting is simple: ```typescript import { Arete } from "arete-typescript"; import { ORE_STREAM_STACK } from "arete-stacks/ore"; // Stack includes its URL - just pass the stack const a4 = await Arete.connect(ORE_STREAM_STACK); // Now you have fully typed views const rounds = await a4.views.OreRound.latest.get(); ``` ### Override Stack URL [Section titled “Override Stack URL”](#override-stack-url) You can override the stack’s default URL if needed: ```typescript const a4 = await Arete.connect(ORE_STREAM_STACK, { url: "wss://custom.endpoint.com", }); ``` ### Connection Options [Section titled “Connection Options”](#connection-options) ```typescript const a4 = await Arete.connect(ORE_STREAM_STACK, { maxEntriesPerView: 5000, // Limit entries per view (default: 
10000) }); ``` | Option | Type | Default | Description | | ------------------- | ---------------- | ----------- | ----------------------------------------------------------- | | `url` | `string` | `stack.url` | Override the stack’s default URL | | `maxEntriesPerView` | `number \| null` | `10000` | Max entries per view before LRU eviction | | `validateFrames` | `boolean` | `false` | Validate incoming frames against Zod schemas before storing | ### Connection State [Section titled “Connection State”](#connection-state) ```typescript // Check current state console.log(a4.connectionState); // 'connected' | 'connecting' | 'disconnected' | 'reconnecting' | 'error' // Listen for changes const unsubscribe = a4.onConnectionStateChange((state) => { console.log("Connection state:", state); }); // Later: stop listening unsubscribe(); ``` ### Disconnect [Section titled “Disconnect”](#disconnect) ```typescript await a4.disconnect(); ``` *** ## Views [Section titled “Views”](#views) ### View Modes [Section titled “View Modes”](#view-modes) Every view operates in one of two modes, which determines how you access data: | Mode | Description | Key Required | Returns | | --------- | --------------------------------- | ------------ | --------------------------- | | **State** | Lookup individual entities by key | Yes | Single entity (`T \| null`) | | **List** | Access a collection of entities | No | Array of entities (`T[]`) | ### Default Views [Section titled “Default Views”](#default-views) By default, each entity in a stack exposes two views: | View Name | Mode | Description | | --------- | ----- | -------------------------------------------- | | `state` | State | Lookup any entity by its key (e.g., address) | | `list` | List | Collection of all entities | ```typescript // State view — get a specific round by address const round = await a4.views.OreRound.state.get(roundAddress); // List view — get all rounds const rounds = await a4.views.OreRound.list.get(); ``` ### Custom Views 
[Section titled "Custom Views"](#custom-views) Stacks can define additional views beyond the defaults. For example, the ORE stack includes a custom `latest` view that streams only recent rounds. Custom views are defined using the Rust DSL when building your stack — see [Stack Definitions](/building-stacks/stack-definitions/) for details. Custom views are accessed the same way as default views: ```typescript // Custom list view defined in the ORE stack for await (const round of a4.views.OreRound.latest.use()) { console.log("Recent round:", round.id.round_id); } ``` Each view type supports both **streaming** (real-time updates) and **one-shot** (point-in-time snapshot) access patterns. *** ## Streaming Methods [Section titled "Streaming Methods"](#streaming-methods) Streaming methods return `AsyncIterable` and continuously emit data as entities change. The connection stays open until you break out of the loop or abort. | Method | Emits | Use Case | | -------------- | --------------- | -------------------------------------------------------------- | | `.use()` | `T` | Simplest — just the current entity state after each change | | `.watch()` | `Update<T>` | When you need to know the operation type (upsert/patch/delete) | | `.watchRich()` | `RichUpdate<T>` | When you need before/after comparison | ### `.use()` — Stream Merged Entities [Section titled ".use() — Stream Merged Entities"](#use--stream-merged-entities) The simplest streaming method.
Emits the full merged entity after each change: ```typescript // List view — no key required for await (const round of a4.views.OreRound.latest.use()) { console.log("Round:", round.id.round_id); console.log("Motherlode:", round.state.motherlode); } // State view — key required const roundAddress = "So11111111111111111111111111111111111111112"; for await (const round of a4.views.OreRound.state.use(roundAddress)) { console.log("Round updated:", round.state.motherlode); } ``` **Signatures:** ```typescript // State view use(key: string, options?: WatchOptions): AsyncIterable<T> // List view use(options?: WatchOptions): AsyncIterable<T> ``` ### `.watch()` — Stream Raw Updates [Section titled ".watch() — Stream Raw Updates"](#watch--stream-raw-updates) Use when you need to know what operation occurred: ```typescript for await (const update of a4.views.OreRound.latest.watch()) { switch (update.type) { case "upsert": console.log("Created or replaced:", update.data); break; case "patch": console.log("Partial update:", update.data); break; case "delete": console.log("Deleted:", update.key); break; } } ``` **Signatures:** ```typescript // State view watch(key: string, options?: WatchOptions): AsyncIterable<Update<T>> // List view watch(options?: WatchOptions): AsyncIterable<Update<T>> ``` **Update types:** ```typescript type Update<T> = | { type: "upsert"; key: string; data: T } // Full entity create/replace | { type: "patch"; key: string; data: Partial<T> } // Partial update | { type: "delete"; key: string }; // Entity removed ``` ### `.watchRich()` — Stream with Before/After Diffs [Section titled ".watchRich() — Stream with Before/After Diffs"](#watchrich--stream-with-beforeafter-diffs) Use when you need to compare the previous and new state: ```typescript for await (const update of a4.views.OreRound.latest.watchRich()) { switch (update.type) { case "created": console.log("New entity:", update.data); break; case "updated": console.log(`Changed: ${update.before.state.motherlode} → 
${update.after.state.motherlode}`); break; case "deleted": console.log("Removed:", update.lastKnown); break; } } ``` **Signatures:** ```typescript // State view watchRich(key: string, options?: WatchOptions): AsyncIterable<RichUpdate<T>> // List view watchRich(options?: WatchOptions): AsyncIterable<RichUpdate<T>> ``` **RichUpdate types:** ```typescript type RichUpdate<T> = | { type: "created"; key: string; data: T } | { type: "updated"; key: string; before: T; after: T; patch?: unknown } | { type: "deleted"; key: string; lastKnown?: T }; ``` *** ## One-Shot Methods [Section titled "One-Shot Methods"](#one-shot-methods) One-shot methods return a point-in-time snapshot without subscribing to updates. Use these when you need the current state but don't need real-time streaming. | Method | Returns | Behavior | | ------------ | ---------------- | ----------------------------------------------------- | | `.get()` | `Promise<T>` | Async — waits for data if not yet loaded | | `.getSync()` | `T \| undefined` | Sync — returns immediately, `undefined` if not loaded | ### `.get()` — Async Snapshot [Section titled ".get() — Async Snapshot"](#get--async-snapshot) Fetches the current state. Returns a promise that resolves when data is available: ```typescript // List view — returns all entities const rounds = await a4.views.OreRound.latest.get(); console.log(`Found ${rounds.length} rounds`); // State view — returns single entity or null const round = await a4.views.OreRound.state.get(roundAddress); if (round) { console.log("Round:", round.id.round_id); } ``` **Signatures:** ```typescript // State view — returns single entity or null if not found get(key: string): Promise<T | null> // List view — returns array of all entities get(): Promise<T[]> ``` ### `.getSync()` — Synchronous Snapshot [Section titled ".getSync() — Synchronous Snapshot"](#getsync--synchronous-snapshot) Returns immediately with cached data.
Returns `undefined` if data hasn’t been loaded yet: ```typescript // List view const rounds = a4.views.OreRound.latest.getSync(); if (rounds) { console.log(`Cached: ${rounds.length} rounds`); } else { console.log("Data not yet loaded"); } // State view const round = a4.views.OreRound.state.getSync(roundAddress); ``` **Signatures:** ```typescript // State view — returns entity, null (not found), or undefined (not loaded) getSync(key: string): T | null | undefined // List view — returns array or undefined (not loaded) getSync(): T[] | undefined ``` When to use getSync() Use `getSync()` in synchronous contexts like React render functions where you can’t await. The return value distinguishes between “not found” (`null`) and “not yet loaded” (`undefined`). *** ## Store Size Limits [Section titled “Store Size Limits”](#store-size-limits) Each view maintains an in-memory store of entities. By default, stores are limited to 10,000 entries to prevent memory issues on long-running clients. When the limit is reached, oldest entries are evicted (LRU). 
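As an illustration of the store behavior just described — not the SDK's actual implementation — a per-view store that applies updates and enforces an entry cap might look like this sketch:

```typescript
// Sketch: a per-view store that applies Update events and evicts the
// least-recently-touched entry once maxEntries is exceeded. Illustrative
// only — the SDK's real store is internal.
type Update<T> =
  | { type: "upsert"; key: string; data: T }
  | { type: "patch"; key: string; data: Partial<T> }
  | { type: "delete"; key: string };

class ViewStore<T extends object> {
  // A Map preserves insertion order, so the first key is the oldest.
  private entries = new Map<string, T>();

  constructor(private maxEntries: number | null = 10_000) {}

  apply(update: Update<T>): void {
    if (update.type === "delete") {
      this.entries.delete(update.key);
      return;
    }
    const prev = this.entries.get(update.key);
    // Patches merge into the previous state; upserts replace it.
    const next =
      update.type === "patch" ? ({ ...(prev ?? {}), ...update.data } as T) : update.data;
    // Delete + set moves the key to the "most recent" end of the Map.
    this.entries.delete(update.key);
    this.entries.set(update.key, next);
    // Evict oldest entries past the cap.
    if (this.maxEntries !== null) {
      while (this.entries.size > this.maxEntries) {
        const oldest = this.entries.keys().next().value!;
        this.entries.delete(oldest);
      }
    }
  }

  get(key: string): T | null { return this.entries.get(key) ?? null; }
  list(): T[] { return [...this.entries.values()]; }
}
```

This is only a mental model; in practice you just tune the cap via the `maxEntriesPerView` connect option.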
```typescript const a4 = await Arete.connect(ORE_STREAM_STACK, { maxEntriesPerView: 5000, // Custom limit }); ``` To disable limiting (not recommended for production): ```typescript const a4 = await Arete.connect(ORE_STREAM_STACK, { maxEntriesPerView: null, // Unlimited }); ``` *** ## Subscription Options [Section titled “Subscription Options”](#subscription-options) The streaming methods (`.use()`, `.watch()`, `.watchRich()`) accept options for pagination and validation: ```typescript interface WatchOptions { take?: number; // Limit number of entities skip?: number; // Skip first N entities schema?: Schema; // Validate and filter entities with a Zod schema } ``` ### Limit Results [Section titled “Limit Results”](#limit-results) ```typescript // Only receive the first 10 entities for await (const round of a4.views.OreRound.latest.use({ take: 10 })) { console.log("Round:", round.id.round_id); } ``` ### Pagination [Section titled “Pagination”](#pagination) ```typescript // Skip first 20, take next 10 for await (const round of a4.views.OreRound.latest.use({ skip: 20, take: 10 })) { console.log("Round:", round.id.round_id); } ``` ### Server-Side Filtering [Section titled “Server-Side Filtering”](#server-side-filtering) For server-side filtering beyond pagination, use **custom views** defined in your stack. Custom views apply filters, sorting, and limits at the server level, reducing bandwidth before data reaches the client. See [Filtering Feeds](/using-stacks/filtering-feeds/) for details on all filtering options, or [Stack Definitions](/building-stacks/stack-definitions/) for how to define custom views using the Rust DSL. *** ## Stream Control [Section titled “Stream Control”](#stream-control) Use standard `AsyncIterable` patterns to control streams client-side. 
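For intuition, the `skip`/`take` windowing accepted by the streaming methods can be sketched as a plain async-generator wrapper (illustrative only — pass `WatchOptions` to the SDK methods rather than doing this yourself):

```typescript
// Sketch: skip/take windowing over any AsyncIterable. The SDK applies
// these options for you; this only shows the semantics.
async function* paginate<T>(
  source: AsyncIterable<T>,
  opts: { skip?: number; take?: number } = {},
): AsyncIterable<T> {
  let seen = 0;
  let emitted = 0;
  const skip = opts.skip ?? 0;
  for await (const item of source) {
    if (seen++ < skip) continue; // drop the first `skip` items
    yield item;
    if (opts.take !== undefined && ++emitted >= opts.take) return; // stop after `take`
  }
}

// Tiny in-memory source for demonstration.
async function* fromArray<T>(items: T[]): AsyncIterable<T> {
  for (const item of items) yield item;
}
```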
### Stop on Condition [Section titled “Stop on Condition”](#stop-on-condition) ```typescript for await (const update of a4.views.OreRound.latest.watch()) { if (update.type === "upsert") { const round = update.data; if (round.state.motherlode && round.state.motherlode > 1_000_000_000) { console.log("Found high-value round:", round.id.round_id); break; } } } ``` ### Cancellable Streams [Section titled “Cancellable Streams”](#cancellable-streams) Use an `AbortController` to cancel from outside the loop: ```typescript const controller = new AbortController(); setTimeout(() => controller.abort(), 30_000); // Cancel after 30s try { for await (const update of a4.views.OreRound.latest.watch()) { if (controller.signal.aborted) break; console.log("Update:", update.data); } } catch (e) { if (!controller.signal.aborted) throw e; } ``` ### Client-Side Filtering [Section titled “Client-Side Filtering”](#client-side-filtering) ```typescript for await (const update of a4.views.OreRound.latest.watch()) { if (update.type !== "upsert") continue; if ((update.data.metrics.deploy_count ?? 0) < 100) continue; console.log("Active round:", update.data.id.round_id); } ``` *** ## Schema Validation [Section titled “Schema Validation”](#schema-validation) Every stack ships with [Zod](https://zod.dev) schemas alongside its TypeScript interfaces. 
Use them to validate data at two levels: ### Frame Validation [Section titled “Frame Validation”](#frame-validation) Enable `validateFrames` on connect to automatically drop malformed data before it enters your local store: ```typescript const a4 = await Arete.connect(ORE_STREAM_STACK, { validateFrames: true, }); ``` ### Query-Level Validation [Section titled “Query-Level Validation”](#query-level-validation) Pass a `schema` to `.use()` to filter out entities that don’t match: ```typescript import { OreRoundCompletedSchema } from "arete-stacks/ore"; // Only emit rounds where every field is present for await (const round of a4.views.OreRound.latest.use({ schema: OreRoundCompletedSchema, })) { console.log(round.id.round_id, round.state.motherlode); } ``` See [Schema Validation](/sdks/validation/) for the full guide on generated schemas, custom schemas, and the “Completed” schema pattern. *** ## Resolved Data [Section titled “Resolved Data”](#resolved-data) Arete can automatically enrich your entities with data that doesn’t live on-chain. For example, the ORE stack includes token metadata (name, symbol, decimals, logo) resolved server-side from the DAS API: ```typescript for await (const round of a4.views.OreRound.latest.use()) { // Token metadata is resolved automatically — no extra API calls console.log(round.ore_metadata?.name); // "Ore" console.log(round.ore_metadata?.symbol); // "ORE" console.log(round.ore_metadata?.decimals); // 11 } ``` Resolved data arrives as part of the entity alongside on-chain fields. Your client code reads it the same way — no distinction between on-chain and resolved data. See [Resolvers](/building-stacks/rust-dsl/resolvers/) for how to add resolved data when building your own stack. 
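One common use of resolved metadata is converting raw base-unit amounts into display values using the resolved `decimals` field. A minimal sketch — the helper name is illustrative, not an SDK export:

```typescript
// Sketch: convert a raw base-unit amount to a display string using a
// token's resolved `decimals` (e.g. ORE's decimals: 11). Illustrative only.
function formatTokenAmount(raw: bigint, decimals: number): string {
  const base = 10n ** BigInt(decimals);
  const whole = raw / base;
  // Pad the remainder to full width, then trim trailing zeros.
  const frac = (raw % base).toString().padStart(decimals, "0").replace(/0+$/, "");
  return frac ? `${whole}.${frac}` : whole.toString();
}
```

With the ORE example above, `formatTokenAmount(rawAmount, round.ore_metadata?.decimals ?? 11)` would yield a human-readable amount.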
*** ## Error Handling [Section titled "Error Handling"](#error-handling) ```typescript import { Arete, AreteError } from "arete-typescript"; import { ORE_STREAM_STACK } from "arete-stacks/ore"; try { const a4 = await Arete.connect(ORE_STREAM_STACK); } catch (error) { if (error instanceof AreteError) { console.error("Arete error:", error.message); console.error("Code:", error.code); } else { throw error; } } ``` *** ## API Reference [Section titled "API Reference"](#api-reference) ### `Arete.connect(stack, options?)` [Section titled "Arete.connect(stack, options?)"](#areteconnectstack-options) Establishes a WebSocket connection to an Arete stack. **Parameters:** * `stack` — Stack definition (includes URL and typed views) * `options.url` — Override the stack's default URL (optional) * `options.maxEntriesPerView` — Max entries per view before LRU eviction (optional) * `options.validateFrames` — Validate incoming frames against Zod schemas (optional) **Returns:** `Promise<Arete>` ### `a4.views` [Section titled "a4.views"](#a4views) Typed views interface based on your stack definition. Access pattern: ```typescript a4.views.<Entity>.<view>.<method>() // Examples with default views: a4.views.OreRound.state.get(key) // State mode — requires key a4.views.OreRound.list.get() // List mode — no key ``` #### State Mode Methods [Section titled "State Mode Methods"](#state-mode-methods) For views in state mode (keyed lookup). All methods require a `key` parameter: | Method | Signature | Returns | | ----------- | -------------------------- | ------------------------------ | | `use` | `use(key, options?)` | `AsyncIterable<T>` | | `watch` | `watch(key, options?)` | `AsyncIterable<Update<T>>` | | `watchRich` | `watchRich(key, options?)` | `AsyncIterable<RichUpdate<T>>` | | `get` | `get(key)` | `Promise<T \| null>` | | `getSync` | `getSync(key)` | `T \| null \| undefined` | #### List Mode Methods [Section titled "List Mode Methods"](#list-mode-methods) For views in list mode (collections).
No key parameter: | Method | Signature | Returns | | ----------- | --------------------- | ------------------------------ | | `use` | `use(options?)` | `AsyncIterable<T>` | | `watch` | `watch(options?)` | `AsyncIterable<Update<T>>` | | `watchRich` | `watchRich(options?)` | `AsyncIterable<RichUpdate<T>>` | | `get` | `get()` | `Promise<T[]>` | | `getSync` | `getSync()` | `T[] \| undefined` | #### WatchOptions [Section titled "WatchOptions"](#watchoptions) Options for streaming methods: ```typescript interface WatchOptions { take?: number; // Limit number of entities skip?: number; // Skip first N entities schema?: Schema; // Validate and filter entities with a Zod schema } ``` See [Schema Validation](/sdks/validation/) for details on using schemas to filter and validate streamed data. ### `a4.connectionState` [Section titled "a4.connectionState"](#a4connectionstate) Current connection state: * `disconnected` — Not connected * `connecting` — Establishing connection * `connected` — Active connection * `reconnecting` — Auto-reconnecting after failure * `error` — Connection failed ### `a4.onConnectionStateChange(callback)` [Section titled "a4.onConnectionStateChange(callback)"](#a4onconnectionstatechangecallback) Subscribe to connection state changes. Returns unsubscribe function. ### `a4.disconnect()` [Section titled "a4.disconnect()"](#a4disconnect) Close the WebSocket connection gracefully. *** ## Next Steps [Section titled "Next Steps"](#next-steps) * [Schema Validation](/sdks/validation/) — Zod schemas for runtime validation and filtering * [React SDK](/sdks/react/) — Hooks and providers for React apps * [Rust SDK](/sdks/rust/) — Native Rust client * [Resolvers](/building-stacks/rust-dsl/resolvers/) — Enrich entities with token metadata and computed fields * [Build Your Own Stack](/building-stacks/workflow) — Create custom data streams # React SDK The `arete-react` SDK provides hooks and providers for building live Solana applications with React.
It's built on top of `arete-typescript` and adds React-specific features like automatic re-rendering, connection state management, and data transformation. Not using React? See the [TypeScript SDK](/sdks/typescript/) for the framework-agnostic core SDK. *** ## Quick Start [Section titled "Quick Start"](#quick-start)

```tsx
import { AreteProvider, useArete } from "arete-react";
import { ORE_STREAM_STACK } from "arete-stacks/ore";

function App() {
  return (
    <AreteProvider>
      <Dashboard />
    </AreteProvider>
  );
}

function Dashboard() {
  const { views, isConnected } = useArete(ORE_STREAM_STACK);
  const { data: rounds, isLoading } = views.OreRound.latest.use();

  if (isLoading) return <div>Loading...</div>;

  return (
    <div>
      <div>{isConnected ? "🟢 Live" : "Connecting..."}</div>
      <ul>
        {rounds?.map((round) => (
          <li key={round.id?.round_id}>Round #{round.id?.round_id}</li>
        ))}
      </ul>
    </div>
  );
}
```

*** ## Installation [Section titled "Installation"](#installation) ```bash npm install arete-react zustand ``` ### Peer Dependencies [Section titled "Peer Dependencies"](#peer-dependencies) The SDK requires: * **React** (v19.0.0+) * **Zustand** (v4.0.0+) — Used for internal state management *** ## Project Setup [Section titled "Project Setup"](#project-setup) ### 1. Wrap Your App with the Provider [Section titled "1. Wrap Your App with the Provider"](#1-wrap-your-app-with-the-provider) src/main.tsx

```tsx
import React from "react";
import ReactDOM from "react-dom/client";
import { AreteProvider } from "arete-react";
import App from "./App";

ReactDOM.createRoot(document.getElementById("root")!).render(
  <React.StrictMode>
    <AreteProvider>
      <App />
    </AreteProvider>
  </React.StrictMode>,
);
```

The provider manages connections to stacks. Each stack definition includes its own URL, so the provider doesn't need a URL prop. #### Provider Props [Section titled "Provider Props"](#provider-props) | Prop | Type | Default | Description | | ------------------- | ---------------- | ------- | ---------------------------------------- | | `websocketUrl` | `string` | - | Override URL for all stacks (optional) | | `autoConnect` | `boolean` | `true` | Auto-connect on mount | | `maxEntriesPerView` | `number \| null` | `10000` | Max entries per view before LRU eviction | *** ### 2. Import Your Stack Definition [Section titled "2. Import Your Stack Definition"](#2-import-your-stack-definition) You have two options for stack definitions: **Option A: Use curated Arete feeds** Install the `arete-stacks` package for pre-built, typed definitions of popular Solana programs: ```bash npm install arete-stacks ``` ```typescript import { ORE_STREAM_STACK } from "arete-stacks/ore"; ``` Each stack includes its default URL (e.g., `wss://ore.stack.arete.run`).
**Option B: Generate from your own stack** If you've [built your own stack](/building-stacks/workflow/), generate a typed SDK using the CLI: ```bash a4 sdk create typescript my-stack ``` ```typescript import { MY_STACK } from "./stack"; ``` See [CLI Commands](/cli/commands/) for all `a4 sdk create` options. *** ## Using Views [Section titled "Using Views"](#using-views) ### useArete Hook [Section titled "useArete Hook"](#usearete-hook) Access your stack's typed interface:

```tsx
import { useArete } from "arete-react";
import { ORE_STREAM_STACK } from "arete-stacks/ore";

function RoundList() {
  const { views } = useArete(ORE_STREAM_STACK);
  const { data: rounds, isLoading, error } = views.OreRound.list.use();

  if (isLoading) return <div>Loading...</div>;
  if (error) return <div>Error: {error.message}</div>;

  return (
    <div>
      {rounds?.map((round) => (
        <div key={round.id?.round_id}>
          Round #{round.id?.round_id} — {round.state?.motherlode}
        </div>
      ))}
    </div>
  );
}
```

### List View: All Entities [Section titled "List View: All Entities"](#list-view-all-entities)

```tsx
function RoundList() {
  const { views } = useArete(ORE_STREAM_STACK);
  const { data: rounds, isLoading, error } = views.OreRound.list.use();

  if (isLoading) return <div>Connecting...</div>;
  if (error) return <div>Error: {error.message}</div>;

  return (
    <ul>
      {rounds?.map((round) => (
        <li key={round.id?.round_id}>
          Round #{round.id?.round_id} — Motherlode: {round.state?.motherlode}
        </li>
      ))}
    </ul>
  );
}
```

### State View: Single Entity [Section titled "State View: Single Entity"](#state-view-single-entity)

```tsx
function RoundDetail({ roundAddress }: { roundAddress: string }) {
  const { views } = useArete(ORE_STREAM_STACK);
  const { data: round, isLoading } = views.OreRound.state.use({
    key: roundAddress,
  });

  if (isLoading) return <div>Loading...</div>;
  if (!round) return <div>Round not found</div>;

  return (
    <div>
      <div>Round #{round.id?.round_id}</div>
      <div>Motherlode: {round.state?.motherlode}</div>
      <div>Total Deployed: {round.state?.total_deployed}</div>
    </div>
  );
}
```

*** ## View Options [Section titled "View Options"](#view-options) ### Single Item Query [Section titled "Single Item Query"](#single-item-query) ```tsx const { views } = useArete(MY_STACK); // Get only one item with type-safe return (T | undefined instead of T[]) const { data: latestToken } = views.tokens.list.use({ take: 1 }); // data: Token | undefined // Or use the dedicated useOne method const { data: latestToken } = views.tokens.list.useOne(); // data: Token | undefined // useOne with filters const { data: topToken } = views.tokens.list.useOne({ where: { volume: { gte: 10000 } }, }); ``` ### Filtering [Section titled "Filtering"](#filtering) The SDK supports both server-side and client-side filtering: **Server-side** options reduce data sent over the wire: ```tsx const { views } = useArete(ORE_STREAM_STACK); const { data: rounds } = views.OreRound.list.use({ take: 10, // Limit to 10 entities from server skip: 20, // Skip first 20 (for pagination) }); ``` For advanced server-side filtering, use [custom views](/using-stacks/filtering-feeds/#server-side-filtering) defined in your stack. **Client-side** filtering happens after data is received.
Use `where` with comparison operators: ```tsx const { views } = useArete(ORE_STREAM_STACK); const { data: highValueRounds } = views.OreRound.list.use({ where: { // Supported operators: gte, lte, gt, lt, or exact match motherlode: { gte: 1000000 }, // Greater than or equal difficulty: { lt: 50 }, // Less than }, limit: 10, // Keep only first 10 matching results }); ``` | Operator | Description | | --------- | --------------------- | | `gte` | Greater than or equal | | `gt` | Greater than | | `lte` | Less than or equal | | `lt` | Less than | | *(value)* | Exact match | ```tsx // Exact match example const { views } = useArete(ORE_STREAM_STACK); const { data } = views.OreRound.list.use({ where: { status: "active" }, // Exact equality }); ``` ### Conditional Subscription [Section titled “Conditional Subscription”](#conditional-subscription) ```tsx const { views } = useArete(ORE_STREAM_STACK); // Only subscribe when we have a valid address const { data: round } = views.OreRound.state.use( { key: roundAddress }, { enabled: !!roundAddress }, ); ``` ### Initial Data [Section titled “Initial Data”](#initial-data) ```tsx const { views } = useArete(ORE_STREAM_STACK); // Show placeholder while connecting const { data: rounds } = views.OreRound.list.use({}, { initialData: [] }); ``` ### View Hook Parameters [Section titled “View Hook Parameters”](#view-hook-parameters) The `.use()` method accepts two arguments: params and options. 
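For intuition, the client-side `where` matching described above can be modeled as a predicate like the following sketch (illustrative only — the SDK evaluates `where` clauses for you):

```typescript
// Sketch: evaluating a `where` clause with gte/gt/lte/lt and exact-match
// semantics against an entity. Illustrative — not the SDK's internals.
type Comparison = { gte?: number; gt?: number; lte?: number; lt?: number };
type Where = Record<string, number | string | boolean | Comparison>;

function matchesWhere(entity: Record<string, unknown>, where: Where): boolean {
  return Object.entries(where).every(([field, cond]) => {
    const value = entity[field];
    if (typeof cond === "object" && cond !== null) {
      // Comparison operators; all present operators must hold.
      const n = value as number;
      if (cond.gte !== undefined && !(n >= cond.gte)) return false;
      if (cond.gt !== undefined && !(n > cond.gt)) return false;
      if (cond.lte !== undefined && !(n <= cond.lte)) return false;
      if (cond.lt !== undefined && !(n < cond.lt)) return false;
      return true;
    }
    // Bare value: exact equality.
    return value === cond;
  });
}
```

An entity passes only if every field in the clause matches, which is why combining `motherlode: { gte: ... }` with `difficulty: { lt: ... }` narrows results.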
#### List Params [Section titled “List Params”](#list-params) | Param | Type | Side | Description | | ------- | -------- | ------ | ----------------------------------- | | `take` | `number` | Server | Limit entities returned from server | | `skip` | `number` | Server | Skip first N entities (pagination) | | `where` | `object` | Client | Filter with comparison operators | | `limit` | `number` | Client | Max results to keep after filtering | ```typescript const { views } = useArete(ORE_STREAM_STACK); const { data } = views.OreRound.list.use({ take: 50, // Server sends max 50 skip: 0, // Start from beginning where: { motherlode: { gte: 100000 } }, // Filter locally limit: 10, // Keep first 10 matches }); ``` #### View Hook Options [Section titled “View Hook Options”](#view-hook-options) ```typescript const { views } = useArete(MY_STACK); const { data } = views.tokens.list.use( { limit: 10 }, // Params { enabled: true, // Enable/disable subscription initialData: [], // Initial data before first response }, ); ``` ### View Hook Return Value [Section titled “View Hook Return Value”](#view-hook-return-value) | Property | Type | Description | | ----------- | ----------------------- | ------------------------------ | | `data` | `T \| T[] \| undefined` | Current view data | | `isLoading` | `boolean` | True until first data received | | `error` | `Error \| undefined` | Subscription error if any | | `refresh` | `() => void` | Manually trigger refresh | *** ## Connection State [Section titled “Connection State”](#connection-state) The `useArete` hook returns connection state directly: ```tsx import { useArete } from "arete-react"; import { ORE_STREAM_STACK } from "arete-stacks/ore"; function ConnectionStatus() { const { connectionState, isConnected } = useArete(ORE_STREAM_STACK); const statusColors = { connected: "green", connecting: "yellow", reconnecting: "orange", disconnected: "gray", error: "red", }; return (
<div style={{ color: statusColors[connectionState] }}>
  {isConnected ? "Live" : connectionState}
</div>
); } ``` ### useArete Return Values [Section titled “useArete Return Values”](#usearete-return-values) | Property | Type | Description | | ----------------- | ----------------- | ------------------------------------------------ | | `views` | `object` | Typed view accessors | | `connectionState` | `ConnectionState` | Current WebSocket state | | `isConnected` | `boolean` | Convenience: `true` when `state === 'connected'` | | `isLoading` | `boolean` | `true` until client is ready | | `error` | `Error \| null` | Connection error if any | | `client` | `Arete` | Low-level client instance | ### Connection States [Section titled “Connection States”](#connection-states) | State | Description | | -------------- | ------------------------------- | | `disconnected` | Not connected | | `connecting` | Establishing connection | | `connected` | Active and healthy | | `reconnecting` | Auto-reconnecting after failure | | `error` | Connection failed | ### Standalone useConnectionState Hook [Section titled “Standalone useConnectionState Hook”](#standalone-useconnectionstate-hook) For cases where you need connection state outside of a component using `useArete`, you can use the standalone hook: ```tsx import { useConnectionState } from "arete-react"; function GlobalConnectionIndicator() { const state = useConnectionState(); return
<div>{state}</div>
; }
```

Note `useConnectionState()` without arguments returns the state of the single active client. If you have multiple stacks, pass the stack definition: `useConnectionState(MY_STACK)`.

***

## API Reference [Section titled “API Reference”](#api-reference)

### `useArete(stackDefinition, options?)` [Section titled “useArete(stackDefinition, options?)”](#usearetestackdefinition-options)

Returns a typed interface to your stack’s views, along with connection state.

```typescript
const { views, connectionState, isConnected, isLoading, error, client } =
  useArete(ORE_STREAM_STACK);

// Access views
views.OreRound.list.use()
views.OreRound.state.use({ key })
views.OreRound.latest.use()

// Check connection
if (isConnected) { console.log('Live!'); }
```

#### Options [Section titled “Options”](#options)

| Option | Type     | Description                      |
| ------ | -------- | -------------------------------- |
| `url`  | `string` | Override the stack’s default URL |

```typescript
// Connect to local development server
const { views, isConnected } = useArete(ORE_STREAM_STACK, { url: "ws://localhost:8878" });
```

### `useConnectionState(stack?)` [Section titled “useConnectionState(stack?)”](#useconnectionstatestack)

Standalone hook for connection state. Prefer using `connectionState` from `useArete` when possible.

```typescript
// Without argument: returns state of single active client
const state = useConnectionState();

// With stack: returns state for specific stack
const state = useConnectionState(ORE_STREAM_STACK);
// Returns: 'disconnected' | 'connecting' | 'connected' | 'reconnecting' | 'error'
```

### `AreteProvider` [Section titled “AreteProvider”](#areteprovider)

Wrap your application to initialize the SDK:

```tsx
<AreteProvider>
  <App />
</AreteProvider>
```

To override the URL for all stacks (e.g., for local development):

```tsx
{/* The url prop name mirrors the useArete option above; verify against your SDK exports */}
<AreteProvider url="ws://localhost:8878">
  <App />
</AreteProvider>
```

### View `.useOne()` [Section titled “View .useOne()”](#view-useone)

A convenience method for fetching a single item from a list view with proper typing.
```typescript const { views } = useArete(MY_STACK); const { data } = views.tokens.list.useOne(); // data: Token | undefined (not Token[]) ``` Equivalent to `.use({ take: 1 })` but with a cleaner API and explicit intent. *** ## Complete Example [Section titled “Complete Example”](#complete-example) A full React component combining everything: src/App.tsx ```tsx import { useArete } from "arete-react"; import { ORE_STREAM_STACK } from "arete-stacks/ore"; function OreDashboard() { const { views, connectionState, isConnected } = useArete(ORE_STREAM_STACK); const { data: rounds, isLoading, error, } = views.OreRound.latest.use({ take: 5, }); return (

<div>
  <h1>Live ORE Mining Rounds</h1>
  <span>{isConnected ? "Live" : connectionState}</span>
  {isLoading && <p>Connecting to stream...</p>}
  {error && <p>Error: {error.message}</p>}
  {rounds && (
    <>
      <p>{rounds.length} rounds streaming</p>
      {rounds.map((round) => (
        <div key={round.id?.round_id}>
          <h2>Round #{round.id?.round_id}</h2>
          <p>{round.state?.motherlode}</p>
          <p>Total Deployed: {round.state?.total_deployed}</p>
        </div>
      ))}
    </>
  )}
</div>
); } export default OreDashboard; ``` *** ## Schema Validation [Section titled “Schema Validation”](#schema-validation) Every stack ships with Zod schemas that you can use to filter entities at the hook level. Pass a `schema` to `.use()` or `.useOne()` — entities that fail validation are silently excluded from results. ### Filter with Generated Schemas [Section titled “Filter with Generated Schemas”](#filter-with-generated-schemas) Use the “Completed” schema variant to only render entities where all fields are present: ```tsx import { useArete } from "arete-react"; import { ORE_STREAM_STACK, OreRoundCompletedSchema, } from "arete-stacks/ore"; function FullyLoadedRounds() { const { views } = useArete(ORE_STREAM_STACK); const { data: rounds } = views.OreRound.latest.use({ schema: OreRoundCompletedSchema, }); return (
<ul>
  {rounds?.map((round) => (
    <li key={round.id.round_id}>
      Round #{round.id.round_id} — {round.state.motherlode}
    </li>
  ))}
</ul>
); } ``` ### Filter with Custom Schemas [Section titled “Filter with Custom Schemas”](#filter-with-custom-schemas) Define your own Zod schema to validate only the fields your component needs: ```tsx import { z } from "zod"; import { useArete } from "arete-react"; import { PUMPFUN_STREAM_STACK } from "arete-stacks/pumpfun"; const TradableTokenSchema = z.object({ id: z.object({ mint: z.string() }), reserves: z.object({ current_price_sol: z.number() }), trading: z.object({ total_volume: z.number() }), }); function TokenList() { const { views } = useArete(PUMPFUN_STREAM_STACK); // Only tokens with price and volume data const { data: tokens } = views.PumpfunToken.list.use({ schema: TradableTokenSchema, }); return (
<ul>
  {tokens?.map((token) => (
    <li key={token.id.mint}>
      {token.reserves.current_price_sol} SOL — Vol: {token.trading.total_volume}
    </li>
  ))}
</ul>
); } ``` See [Schema Validation](/sdks/validation/) for the full guide on frame validation, generated schemas, and the `Schema` interface. *** ## Resolved Data [Section titled “Resolved Data”](#resolved-data) Stacks can include data resolved server-side that doesn’t live on-chain. For example, the ORE stack enriches rounds with token metadata (name, symbol, decimals) — no extra API calls needed: ```tsx function RoundWithMetadata() { const { views } = useArete(ORE_STREAM_STACK); const { data: round } = views.OreRound.latest.useOne(); return (

<div>
  <p>Token: {round?.ore_metadata?.name} ({round?.ore_metadata?.symbol})</p>
  <p>Decimals: {round?.ore_metadata?.decimals}</p>
</div>

); }
```

Resolved data arrives as part of the entity alongside on-chain fields. See [Resolvers](/building-stacks/rust-dsl/resolvers/) for details on how resolved data works.

***

## Accessing Core SDK [Section titled “Accessing Core SDK”](#accessing-core-sdk)

The React SDK re-exports everything from `arete-typescript`. Access low-level APIs when needed:

```typescript
// Import core features from React SDK
import { Arete, ConnectionManager } from "arete-react";
```

***

## Choosing Between SDKs [Section titled “Choosing Between SDKs”](#choosing-between-sdks)

| Use Case                 | Recommended SDK    |
| ------------------------ | ------------------ |
| React/Next.js apps       | `arete-react`      |
| Vue, Svelte, Solid, etc. | `arete-typescript` |
| Node.js backend/scripts  | `arete-typescript` |
| Custom state management  | `arete-typescript` |
| Need React hooks         | `arete-react`      |
| Maximum control          | `arete-typescript` |

***

## Troubleshooting [Section titled “Troubleshooting”](#troubleshooting)

| Issue                         | Solution                                                                   |
| ----------------------------- | -------------------------------------------------------------------------- |
| “WebSocket connection failed” | Check your connection to `wss://ore.stack.arete.run`                       |
| Data not updating             | Verify the view path matches the stack entity name (e.g., `OreRound/list`) |
| TypeScript errors             | Ensure your interface matches the stack’s data shape                       |

***

## Next Steps [Section titled “Next Steps”](#next-steps)

* [Schema Validation](/sdks/validation/) — Zod schemas for runtime validation and filtering
* [TypeScript SDK](/sdks/typescript/) — Use Arete without React
* [Resolvers](/building-stacks/rust-dsl/resolvers/) — Enrich entities with token metadata and computed fields
* [Build Your Own Stack](/building-stacks/workflow) — Create custom data streams

# a4-server

> Run your Arete projections as a WebSocket server.

`a4-server` is a Rust crate that lets you run your compiled stack as a standalone WebSocket server.
It connects to a Yellowstone gRPC endpoint, processes Solana data through your projections, and streams the results to connected clients. ## Prerequisites [Section titled “Prerequisites”](#prerequisites) To run a stack with `a4-server`, you need: | Requirement | Description | | ----------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------- | | **Yellowstone gRPC Endpoint** | A URL to a Solana node running the [Yellowstone Geyser plugin](https://github.com/rpcpool/yellowstone-grpc). This is your source of blockchain data. | | **X-Token** | Authentication token for your Yellowstone provider (most providers require this). | | **Rust 1.75+** | Required to build the server binary. | | **A compiled stack** | Your stack crate with the `#[arete(...)]` macro, which generates a `spec()` function. | Yellowstone Access Required You must have access to a Yellowstone gRPC endpoint. Options include: * Running your own Solana validator with the Yellowstone Geyser plugin * Using a provider like Triton, Helius, or similar that offers Yellowstone gRPC ## Environment Variables [Section titled “Environment Variables”](#environment-variables) `a4-server` reads configuration from environment variables: | Variable | Required | Description | | ---------------------- | -------- | ------------------------------------- | | `YELLOWSTONE_ENDPOINT` | Yes | Your Yellowstone gRPC endpoint URL | | `YELLOWSTONE_X_TOKEN` | Usually | Authentication token for the endpoint | ## When to Use a4-server [Section titled “When to Use a4-server”](#when-to-use-a4-server) **Use a4-server when:** * Developing and testing stacks locally * Running a single stack in your own infrastructure * You need full control over the server deployment * You already have Yellowstone gRPC access **Consider Arete Cloud when:** * You need multi-stack orchestration * You want managed Yellowstone infrastructure * You 
need built-in authentication and API keys * You prefer zero-config deployment # Configuration Reference > Complete reference for a4-server configuration options. This page documents all configuration options available in `a4-server`. ## Server Builder API [Section titled “Server Builder API”](#server-builder-api) The server is configured using a fluent builder pattern: ```rust use a4_server::Server; Server::builder() .spec(my_spec()) // Required: your stack specification .websocket() // Enable WebSocket server .bind("[::]:8877".parse()?) .yellowstone(config) // Optional: manual Yellowstone config .health_monitoring() // Enable health monitoring .http_health() // Enable HTTP health endpoints .reconnection() // Enable auto-reconnection .start() .await?; ``` ## Environment Variables [Section titled “Environment Variables”](#environment-variables) These environment variables are read automatically by the generated parser code: | Variable | Required | Default | Description | | ---------------------- | -------- | ------- | -------------------------------------------------------- | | `YELLOWSTONE_ENDPOINT` | Yes | — | Yellowstone gRPC endpoint URL | | `YELLOWSTONE_X_TOKEN` | Usually | — | Authentication token for the endpoint | | `RUST_LOG` | No | `info` | Log level filter (e.g., `debug`, `info,a4_server=debug`) | ## WebSocket Configuration [Section titled “WebSocket Configuration”](#websocket-configuration) ### Basic Usage [Section titled “Basic Usage”](#basic-usage) ```rust // Enable with defaults (binds to [::]:8877) Server::builder() .websocket() .start() .await?; // Or specify a custom bind address Server::builder() .websocket() .bind("[::]:9000".parse()?) 
.start() .await?;
```

### WebSocketConfig [Section titled “WebSocketConfig”](#websocketconfig)

For full control, use `websocket_config()`:

```rust
use a4_server::WebSocketConfig;

let ws_config = WebSocketConfig {
    bind_address: "[::]:8877".parse()?,
};

Server::builder()
    .websocket_config(ws_config)
    .start()
    .await?;
```

| Field          | Type         | Default     | Description                               |
| -------------- | ------------ | ----------- | ----------------------------------------- |
| `bind_address` | `SocketAddr` | `[::]:8877` | Address and port for the WebSocket server |

## Yellowstone Configuration [Section titled “Yellowstone Configuration”](#yellowstone-configuration)

The Yellowstone gRPC connection is typically configured via environment variables. However, you can also configure it programmatically:

```rust
use a4_server::YellowstoneConfig;

let yellowstone = YellowstoneConfig::new("https://your-endpoint.com")
    .with_token("your-secret-token");

Server::builder()
    .yellowstone(yellowstone)
    .start()
    .await?;
```

### YellowstoneConfig [Section titled “YellowstoneConfig”](#yellowstoneconfig)

| Field      | Type             | Default | Description                   |
| ---------- | ---------------- | ------- | ----------------------------- |
| `endpoint` | `String`         | —       | Yellowstone gRPC endpoint URL |
| `x_token`  | `Option<String>` | `None`  | Authentication token          |

### Builder Methods [Section titled “Builder Methods”](#builder-methods)

| Method                             | Description              |
| ---------------------------------- | ------------------------ |
| `YellowstoneConfig::new(endpoint)` | Create with endpoint     |
| `.with_token(token)`               | Set authentication token |

## Health Monitoring [Section titled “Health Monitoring”](#health-monitoring)

Health monitoring tracks stream connectivity and detects issues like stale connections.
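Conceptually, this kind of monitor records when data last arrived and treats the stream as stale once that gap exceeds the heartbeat interval. A standalone sketch of the idea (illustrative only, not a4-server's internal implementation):

```rust
use std::time::{Duration, Instant};

/// Illustrative sketch of stale-stream detection (not a4-server's actual code).
struct HeartbeatMonitor {
    last_event: Instant,
    heartbeat_interval: Duration,
}

impl HeartbeatMonitor {
    fn new(heartbeat_interval: Duration) -> Self {
        Self { last_event: Instant::now(), heartbeat_interval }
    }

    /// Call this whenever data arrives from the stream.
    fn record_event(&mut self) {
        self.last_event = Instant::now();
    }

    /// The stream is considered healthy while the gap since the
    /// last event stays within the heartbeat interval.
    fn is_healthy(&self) -> bool {
        self.last_event.elapsed() <= self.heartbeat_interval
    }
}

fn main() {
    let mut monitor = HeartbeatMonitor::new(Duration::from_secs(30));
    monitor.record_event();
    println!("healthy: {}", monitor.is_healthy());
}
```

In a4-server the equivalent bookkeeping is driven by `heartbeat_interval` and surfaced through the HTTP health endpoints documented on this page.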
### Basic Usage [Section titled “Basic Usage”](#basic-usage-1) ```rust // Enable with defaults Server::builder() .health_monitoring() .start() .await?; ``` ### HealthConfig [Section titled “HealthConfig”](#healthconfig) ```rust use a4_server::HealthConfig; use std::time::Duration; let health = HealthConfig::new() .with_heartbeat_interval(Duration::from_secs(30)) .with_health_check_timeout(Duration::from_secs(10)); Server::builder() .health_config(health) .start() .await?; ``` | Field | Type | Default | Description | | ---------------------- | ---------- | ------- | ----------------------------------- | | `heartbeat_interval` | `Duration` | 30s | How often to check stream health | | `health_check_timeout` | `Duration` | 10s | Timeout for health check operations | ### Builder Methods [Section titled “Builder Methods”](#builder-methods-1) | Method | Description | | -------------------------------------- | ------------------------ | | `HealthConfig::new()` | Create with defaults | | `.with_heartbeat_interval(duration)` | Set heartbeat interval | | `.with_health_check_timeout(duration)` | Set health check timeout | ## HTTP Health Server [Section titled “HTTP Health Server”](#http-health-server) Exposes HTTP endpoints for orchestrators like Kubernetes to perform health checks. ### Basic Usage [Section titled “Basic Usage”](#basic-usage-2) ```rust // Enable with defaults (binds to [::]:8081) Server::builder() .http_health() .start() .await?; // Or specify a custom bind address Server::builder() .http_health() .health_bind("0.0.0.0:8081".parse()?) 
.start() .await?; ``` ### HttpHealthConfig [Section titled “HttpHealthConfig”](#httphealthconfig) ```rust use a4_server::HttpHealthConfig; let http_health = HttpHealthConfig::new("0.0.0.0:9090".parse()?); Server::builder() .http_health_config(http_health) .start() .await?; ``` | Field | Type | Default | Description | | -------------- | ------------ | ----------- | ------------------------------------------- | | `bind_address` | `SocketAddr` | `[::]:8081` | Address and port for the HTTP health server | ### Health Endpoints [Section titled “Health Endpoints”](#health-endpoints) | Endpoint | Method | Description | | ------------------------ | ------ | ------------------------------------------------------------------------ | | `/health` or `/healthz` | GET | Liveness check — returns `200 OK` if server is running | | `/ready` or `/readiness` | GET | Readiness check — returns `200 OK` if stream is healthy, `503` otherwise | | `/status` | GET | Detailed JSON status with health state and error count | #### Example `/status` Response [Section titled “Example /status Response”](#example-status-response) ```json { "healthy": true, "status": "Connected", "error_count": 0 } ``` ## Reconnection Configuration [Section titled “Reconnection Configuration”](#reconnection-configuration) Controls automatic reconnection behavior when the Yellowstone gRPC connection drops. 
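The retry timing these options control is plain exponential backoff: each attempt waits `backoff_multiplier` times longer than the previous one, capped at `max_delay`. A standalone sketch of the resulting schedule (illustrative only, not a4-server's internal code):

```rust
use std::time::Duration;

/// Compute the reconnection delay schedule implied by the settings on this page
/// (illustrative sketch, not a4-server's actual implementation).
fn backoff_schedule(
    initial_delay: Duration,
    backoff_multiplier: f64,
    max_delay: Duration,
    attempts: usize,
) -> Vec<Duration> {
    let mut delays = Vec::with_capacity(attempts);
    let mut delay = initial_delay;
    for _ in 0..attempts {
        delays.push(delay);
        // Grow geometrically, but never beyond the configured cap.
        delay = delay.mul_f64(backoff_multiplier).min(max_delay);
    }
    delays
}

fn main() {
    // Defaults: 100ms initial delay, 2.0 multiplier, 60s cap.
    let schedule = backoff_schedule(Duration::from_millis(100), 2.0, Duration::from_secs(60), 12);
    for (i, d) in schedule.iter().enumerate() {
        println!("attempt {}: wait {:?}", i + 1, d);
    }
}
```

With the defaults, delays double from 100ms until they hit the 60-second cap after roughly ten retries, then stay there for every subsequent attempt.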
### Basic Usage [Section titled “Basic Usage”](#basic-usage-3)

```rust
// Enable with defaults
Server::builder()
    .reconnection()
    .start()
    .await?;
```

### ReconnectionConfig [Section titled “ReconnectionConfig”](#reconnectionconfig)

```rust
use a4_server::ReconnectionConfig;
use std::time::Duration;

let reconnect = ReconnectionConfig::new()
    .with_initial_delay(Duration::from_millis(100))
    .with_max_delay(Duration::from_secs(60))
    .with_max_attempts(10)
    .with_backoff_multiplier(2.0)
    .with_http2_keep_alive_interval(Duration::from_secs(30));

Server::builder()
    .reconnection_config(reconnect)
    .start()
    .await?;
```

| Field                       | Type               | Default           | Description                                               |
| --------------------------- | ------------------ | ----------------- | --------------------------------------------------------- |
| `initial_delay`             | `Duration`         | 100ms             | Delay before first reconnection attempt                   |
| `max_delay`                 | `Duration`         | 60s               | Maximum delay between attempts (caps exponential backoff) |
| `max_attempts`              | `Option<u32>`      | `None` (infinite) | Maximum reconnection attempts before giving up            |
| `backoff_multiplier`        | `f64`              | 2.0               | Multiplier for exponential backoff                        |
| `http2_keep_alive_interval` | `Option<Duration>` | 30s               | HTTP/2 keep-alive to prevent silent disconnects           |

### Builder Methods [Section titled “Builder Methods”](#builder-methods-2)

| Method                                      | Description                        |
| ------------------------------------------- | ---------------------------------- |
| `ReconnectionConfig::new()`                 | Create with defaults               |
| `.with_initial_delay(duration)`             | Set initial reconnection delay     |
| `.with_max_delay(duration)`                 | Set maximum backoff delay          |
| `.with_max_attempts(n)`                     | Limit reconnection attempts        |
| `.with_backoff_multiplier(m)`               | Set exponential backoff multiplier |
| `.with_http2_keep_alive_interval(duration)` | Set HTTP/2 keep-alive interval     |

## Feature Flags [Section titled “Feature Flags”](#feature-flags)

Enable optional features in your `Cargo.toml`:

```toml
[dependencies]
a4-server = { version = "0.1.1", features =
["otel"] } ``` | Feature | Default | Description | | ------- | ------- | ------------------------------------------------------------- | | `otel` | No | OpenTelemetry integration for metrics and distributed tracing | ### Using OpenTelemetry Metrics [Section titled “Using OpenTelemetry Metrics”](#using-opentelemetry-metrics) When the `otel` feature is enabled: ```rust use a4_server::Metrics; let metrics = Metrics::new(); Server::builder() .metrics(metrics) .start() .await?; ``` ## Complete Example [Section titled “Complete Example”](#complete-example) Here’s a production-ready configuration combining all options: ```rust use a4_server::{ Server, HealthConfig, HttpHealthConfig, ReconnectionConfig }; use std::time::Duration; #[tokio::main] async fn main() -> anyhow::Result<()> { // TLS provider for gRPC rustls::crypto::ring::default_provider() .install_default() .expect("Failed to install rustls crypto provider"); // Load environment variables dotenvy::dotenv().ok(); // Initialize logging tracing_subscriber::fmt() .with_env_filter( tracing_subscriber::EnvFilter::try_from_default_env() .unwrap_or_else(|_| "info,a4_server=debug".into()), ) .init(); let spec = my_stack::spec(); Server::builder() .spec(spec) // WebSocket on port 8877 .websocket() .bind("[::]:8877".parse()?) // Health monitoring with custom intervals .health_config( HealthConfig::new() .with_heartbeat_interval(Duration::from_secs(15)) ) // HTTP health endpoints on port 8081 .http_health() .health_bind("0.0.0.0:8081".parse()?) // Reconnection with limited attempts .reconnection_config( ReconnectionConfig::new() .with_max_attempts(100) .with_max_delay(Duration::from_secs(30)) ) .start() .await?; Ok(()) } ``` # Running a Stack > Step-by-step guide to running a stack with a4-server. This guide shows how to run your compiled stack as a WebSocket server using `a4-server`. ## 1. Set Environment Variables [Section titled “1. 
Set Environment Variables”](#1-set-environment-variables)

Before running, configure your Yellowstone connection:

```bash
export YELLOWSTONE_ENDPOINT="https://your-geyser-endpoint.com"
export YELLOWSTONE_X_TOKEN="your-secret-token"
```

Or create a `.env` file in your project root:

.env

```bash
YELLOWSTONE_ENDPOINT=https://your-geyser-endpoint.com
YELLOWSTONE_X_TOKEN=your-secret-token
```

Tip: Most Yellowstone providers (Triton, Helius, etc.) will give you both an endpoint URL and an authentication token when you sign up.

## 2. Create the Server Binary [Section titled “2. Create the Server Binary”](#2-create-the-server-binary)

Add dependencies to your `Cargo.toml`:

Cargo.toml

```toml
[dependencies]
your-stack = { path = "../path/to/your/stack" }
a4-server = "0.1.1"
tokio = { version = "1.0", features = ["full"] }
anyhow = "1.0"
tracing-subscriber = { version = "0.3", features = ["env-filter"] }
dotenvy = "0.15"

# Required for TLS
rustls = { version = "0.23", default-features = false, features = ["ring"] }
```

Create your `main.rs`:

src/main.rs

```rust
use a4_server::Server;
use your_stack as my_stream;
use std::net::SocketAddr;

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    // Install TLS provider (required for gRPC)
    rustls::crypto::ring::default_provider()
        .install_default()
        .expect("Failed to install rustls crypto provider");

    // Load .env file if present
    dotenvy::dotenv().ok();

    // Initialize logging
    tracing_subscriber::fmt()
        .with_env_filter(
            tracing_subscriber::EnvFilter::try_from_default_env()
                .unwrap_or_else(|_| "info".into()),
        )
        .init();

    // Get the spec from your stack
    let spec = my_stream::spec();

    // Start the server
    Server::builder()
        .spec(spec)
        .websocket()
        .bind("[::]:8877".parse::<SocketAddr>()?)
        .health_monitoring()
        .start()
        .await?;

    Ok(())
}
```

## 3. Run the Server [Section titled “3. 
Run the Server”](#3-run-the-server) ```bash cargo run --release ``` You should see output like: ```plaintext INFO a4_server: Starting WebSocket server on [::]:8877 INFO a4_server: Connected to Yellowstone gRPC INFO a4_server: Health monitoring enabled ``` ## 4. Connect Clients [Section titled “4. Connect Clients”](#4-connect-clients) Once running, connect using any Arete SDK: TypeScript ```typescript import { Arete } from "arete-react"; const stack = new Arete({ endpoint: "ws://localhost:8877", }); ``` Rust ```rust use a4_sdk::Arete; let stack = Arete::connect("ws://localhost:8877").await?; ``` ## Production Tips [Section titled “Production Tips”](#production-tips) ### Health Endpoints [Section titled “Health Endpoints”](#health-endpoints) Enable HTTP health checks for orchestrators like Kubernetes: ```rust Server::builder() .spec(spec) .websocket() .bind("[::]:8877".parse()?) .health_monitoring() .http_health() .health_bind("0.0.0.0:8081".parse()?) .start() .await?; ``` ### Metrics [Section titled “Metrics”](#metrics) Enable OpenTelemetry for Prometheus metrics: Cargo.toml ```toml a4-server = { version = "0.1.1", features = ["otel"] } ``` ### Graceful Shutdown [Section titled “Graceful Shutdown”](#graceful-shutdown) `a4-server` handles `SIGINT` and `SIGTERM` automatically, ensuring clean disconnection from the Yellowstone stream. ### Resource Considerations [Section titled “Resource Considerations”](#resource-considerations) The Yellowstone gRPC stream is bandwidth-intensive. Ensure your environment has: * Sufficient network throughput * CPU capacity for block deserialization * Stable, low-latency connection to your Yellowstone provider # Explore Stacks > How agents use a4 explore to discover stacks and get live schema information. The `a4 explore` command is how agents (and humans) discover available stacks and introspect their schemas. 
Agent skills instruct the AI to run this command before writing any Arete code, so it always has accurate entity names, field paths, and types.

***

## Usage [Section titled “Usage”](#usage)

```bash
# List all available stacks
a4 explore

# Show entities and views for a stack
a4 explore <stack>

# Show fields and types for a specific entity
a4 explore <stack> <entity>
```

Add `--json` to any command for machine-readable output (this is what agents use):

```bash
a4 explore --json
a4 explore ore --json
a4 explore ore OreRound --json
```

***

## Listing Stacks [Section titled “Listing Stacks”](#listing-stacks)

```bash
$ a4 explore

Public Registry
────────────────────────────────────────────────────────
ore        wss://ore.stack.arete.run
           Entities: OreRound, OreTreasury, OreMiner

Your Stacks
────────────────────────────────────────────────────────
my-game    wss://my-game-abc123.stack.arete.run   [active]
```

Without authentication, you see public registry stacks only. When logged in (`a4 auth login`), you also see your own stacks.

***

## Exploring a Stack [Section titled “Exploring a Stack”](#exploring-a-stack)

```bash
$ a4 explore ore

Stack: ore (OreStream)
URL: wss://ore.stack.arete.run

Entities
────────────────────────────────────────────────────────
OreRound      3 views (state, list, latest)
              Primary key: id.round_id
              Fields: 15
OreTreasury   2 views (state, list)
              Primary key: id.treasury_address
              Fields: 8
OreMiner      2 views (state, list)
              Primary key: id.miner_address
              Fields: 12

Tip: Run `a4 explore ore OreRound` for field details
```

***

## Entity Fields [Section titled “Entity Fields”](#entity-fields)

```bash $ a4 explore ore OreRound Entity: OreRound Primary key: id.round_id Fields ────────────────────────────────────────────────────────────────── ● id id.round_id u64 id.round_address Pubkey? ● state state.motherlode u64? state.total_miners u64? state.total_deployed u64? state.expires_at i64? ● metrics metrics.deploy_count u64? metrics.checkpoint_count u64? ● results results.top_miner Pubkey?
results.top_miner_reward u64? Views ────────────────────────────────────────────────────────────────── state State list List latest List (sort by id.round_id desc) ``` Field types with `?` are nullable (wrapped in `Option` on the Rust side, optional in TypeScript). *** ## JSON Output [Section titled “JSON Output”](#json-output) Agents use the `--json` flag to get structured output they can parse: ```bash $ a4 explore ore OreRound --json ``` ```json { "name": "OreRound", "primary_keys": ["id.round_id"], "fields": [ { "path": "id.round_id", "rust_type": "u64", "nullable": false, "section": "id" }, { "path": "id.round_address", "rust_type": "Pubkey", "nullable": true, "section": "id" }, { "path": "state.motherlode", "rust_type": "u64", "nullable": true, "section": "state" } ], "views": [ { "id": "OreRound/state", "mode": "state", "pipeline": [] }, { "id": "OreRound/list", "mode": "list", "pipeline": [] }, { "id": "OreRound/latest", "mode": "list", "pipeline": [{"Sort": {"key": "id.round_id", "order": "desc"}}] } ] } ``` This is what the agent skills instruct the AI to call. The AI then uses the field paths and types to write correct SDK code. *** ## How Authentication Affects Results [Section titled “How Authentication Affects Results”](#how-authentication-affects-results) | State | What you see | | ------------- | ----------------------------------------------- | | Not logged in | Public stacks only | | Logged in | Public stacks + global stacks + your own stacks | Public stacks like `ore` are always available to everyone. Global stacks are available to all authenticated users. Your own deployed stacks are only visible when logged in. ```bash # Log in to see all stacks a4 auth login # Now explore shows everything a4 explore ``` *** ## Why This Matters for Agents [Section titled “Why This Matters for Agents”](#why-this-matters-for-agents) The core problem with AI coding tools is stale context. 
If an agent has outdated schema information, it writes code with wrong field names or missing types. `a4 explore` solves this because: * It queries the live Arete API, not static files * It reflects the exact schema of the deployed stack * The CLI version you have installed determines the API compatibility * Agent skills tell the AI to always run it before writing code This means you never need to update the skill files when a stack changes. The agent gets fresh data every time. # MCP Server > Connect AI tools to the Arete documentation MCP server. The Arete documentation MCP server gives AI tools first-class access to the docs instead of requiring them to scrape pages. ## Endpoint [Section titled “Endpoint”](#endpoint) Use the canonical HTTP endpoint: ```text https://docs.arete.run/mcp ``` The legacy alias `https://docs.arete.run/mcp/sse` is still available for older configurations. ## Tools [Section titled “Tools”](#tools) | Tool | Purpose | | ------------- | ----------------------------------------------------------------- | | `search_docs` | Search the Arete docs and return ranked snippets with page slugs. | | `fetch_page` | Fetch a documentation page as raw markdown by slug. | The server also exposes the Arete platform skill as a resource at `https://docs.arete.run/skill.md`. ## Claude Code [Section titled “Claude Code”](#claude-code) ```bash claude mcp add --transport http Arete https://docs.arete.run/mcp ``` Verify the connection: ```bash claude mcp list ``` ## Cursor [Section titled “Cursor”](#cursor) Open MCP settings and add: ```json { "mcpServers": { "Arete": { "url": "https://docs.arete.run/mcp" } } } ``` Then ask Cursor: “What tools do you have available?” It should list the Arete documentation tools. 
## VS Code [Section titled “VS Code”](#vs-code) Create or update `.vscode/mcp.json`: ```json { "servers": { "Arete": { "type": "http", "url": "https://docs.arete.run/mcp" } } } ``` ## Discovery [Section titled “Discovery”](#discovery) Discovery metadata is available at: * `https://docs.arete.run/.well-known/mcp` * `https://docs.arete.run/.well-known/mcp.json` * `https://docs.arete.run/.well-known/mcp/server-card.json` # Skills Reference > Per-editor file locations, agent flags, verification, and updating for Arete agent skills. Reference for installing Arete agent skills manually. For an overview of how skills, the CLI, and agent.md fit together, see [Agent Skills](/agent-skills/overview/). The universal install command (auto-detects your editor): ```bash npx skills add AreteA4/skills ``` *** ## Per-Editor Installation [Section titled “Per-Editor Installation”](#per-editor-installation) * Cursor ```bash npx skills add AreteA4/skills --agent cursor ``` Files are created in `.agents/skills/`: ```plaintext .agents/skills/ ├── arete/SKILL.md ├── arete-consume/SKILL.md └── arete-build/SKILL.md ``` Allow CLI access When Cursor asks to run `a4 explore`, approve it. * Claude Code ```bash npx skills add AreteA4/skills --agent claude-code ``` Files are created in `.claude/skills/`: ```plaintext .claude/skills/ ├── arete/SKILL.md ├── arete-consume/SKILL.md └── arete-build/SKILL.md ``` You can also set project-level context in `CLAUDE.md` at the project root. * Windsurf ```bash npx skills add AreteA4/skills --agent windsurf ``` Files are created in `.windsurf/skills/`: ```plaintext .windsurf/skills/ ├── arete/SKILL.md ├── arete-consume/SKILL.md └── arete-build/SKILL.md ``` * OpenCode ```bash npx skills add AreteA4/skills --agent opencode ``` Files are created in `.agents/skills/`. You can also load skills directly using the `/skill` command in OpenCode. * GitHub Copilot ```bash npx skills add AreteA4/skills --agent github-copilot ``` Files are created in `.agents/skills/`. 
* Cline ```bash npx skills add AreteA4/skills --agent cline ``` Files are created in `.cline/skills/`. * Other `npx skills` supports 35+ agents: | Agent | Flag | | ---------- | -------------------- | | Codex | `--agent codex` | | Gemini CLI | `--agent gemini-cli` | | Roo Code | `--agent roo` | | Goose | `--agent goose` | | Continue | `--agent continue` | If your editor isn’t listed, manually copy the SKILL.md files from the [skills repository](https://github.com/AreteA4/skills) into whatever directory your editor reads custom context from. *** ## Verifying Installation [Section titled “Verifying Installation”](#verifying-installation) Ask your agent: What stacks are available on Arete? Show me the entities and fields for the ore stack. The agent should run `a4 explore --json`, then `a4 explore ore --json`, and present the entity names, field paths, and types. If it doesn’t recognise Arete commands, check that the skill files are in the correct directory for your editor. *** ## Updating Skills [Section titled “Updating Skills”](#updating-skills) ```bash npx skills add AreteA4/skills ``` This overwrites existing skill files with the latest versions. The static patterns (SDK syntax, DSL macros) change rarely — dynamic data (entity schemas, field types) always comes from `a4 explore` at runtime, so it’s always current regardless of skill version. # Configuration > Complete reference for arete.toml configuration options. The `arete.toml` file is the central configuration for your Arete project. It defines your project metadata, SDK generation settings, build preferences, and stack definitions. The file is created automatically when you run `a4 init`, but you can customize it for advanced use cases. 
*** ## Creating the Configuration File [Section titled “Creating the Configuration File”](#creating-the-configuration-file) The easiest way to create `arete.toml` is with the CLI: ```bash a4 init ``` This command: * Creates `arete.toml` with a project name based on your directory * Auto-discovers stacks from `.arete/*.stack.json` files * Creates a `.arete/` directory if it doesn’t exist You can then customize the generated file for your needs. *** ## File Location [Section titled “File Location”](#file-location) By default, the CLI looks for `arete.toml` in the current directory. You can specify a different path with the `--config` flag: ```bash a4 --config ./config/arete.toml up ``` *** ## Minimal Configuration [Section titled “Minimal Configuration”](#minimal-configuration) For most projects, you only need a project name: ```toml [project] name = "my-stack" ``` The CLI auto-discovers stacks from `.arete/*.stack.json` files created during `cargo build`. *** ## Full Configuration Reference [Section titled “Full Configuration Reference”](#full-configuration-reference) ```toml [project] name = "my-project" description = "A brief description of your project" version = "1.0.0" # SDK generation settings [sdk] output_dir = "./generated" # Default output for both languages typescript_output_dir = "./frontend/src/generated" # Override for TypeScript only rust_output_dir = "./crates/generated" # Override for Rust only typescript_package = "@myorg/my-sdk" # Package name for TypeScript rust_module_mode = false # Generate Rust SDKs as modules by default # Build preferences [build] watch_by_default = true # Stack definitions (auto-discovered by default, but can be explicit) [[stacks]] name = "my-game" stack = "SettlementGame" # Stack name or path to .stack.json description = "Settlement game tracking" typescript_output_file = "./src/generated/game.ts" # Per-stack TypeScript output path rust_output_crate = "./crates/game-stack" # Per-stack Rust output path rust_module = true # 
Per-stack: generate as module instead of crate ``` *** ## Sections [Section titled “Sections”](#sections) ### `[project]` — Project Metadata [Section titled “\[project\] — Project Metadata”](#project--project-metadata) | Option | Type | Required | Description | | ------------- | ------ | -------- | ------------------------------------------ | | `name` | string | Yes | Project name (used for SDK package naming) | | `description` | string | No | Project description | | `version` | string | No | Project version | ### `[sdk]` — SDK Generation Settings [Section titled “\[sdk\] — SDK Generation Settings”](#sdk--sdk-generation-settings) | Option | Type | Default | Description | | ----------------------- | ------- | ----------------------------- | --------------------------------------------------------------------- | | `output_dir` | string | `"./generated"` | Default output directory for all SDKs | | `typescript_output_dir` | string | `output_dir` | TypeScript SDK output directory | | `rust_output_dir` | string | `output_dir` | Rust SDK output directory | | `typescript_package` | string | `"arete-stacks/{stack_name}"` | NPM package name for TypeScript SDKs | | `rust_module_mode` | boolean | `false` | Generate Rust SDKs as modules (`mod.rs`) instead of standalone crates | **Note:** When `rust_module_mode = true`, generated Rust SDKs are created as modules that can be embedded directly in your existing crate. When `false`, each SDK is generated as a standalone crate with its own `Cargo.toml`. 
### `[build]` — Build Preferences [Section titled “\[build\] — Build Preferences”](#build--build-preferences) | Option | Type | Default | Description | | ------------------ | ------- | ------- | -------------------------------------------------------------- | | `watch_by_default` | boolean | `true` | Enable file watching for automatic rebuilds during development | ### `[[stacks]]` — Stack Definitions [Section titled “\[\[stacks\]\] — Stack Definitions”](#stacks--stack-definitions) Define stacks explicitly for custom naming or per-stack overrides. If not defined, stacks are auto-discovered from `.arete/*.stack.json` files. | Option | Type | Required | Description | | ------------------------ | ------- | -------- | ------------------------------------------------------------ | | `name` | string | Yes | Stack name (used for SDK generation and CLI commands) | | `stack` | string | Yes | Stack name from your Rust code or path to `.stack.json` file | | `description` | string | No | Stack description | | `typescript_output_file` | string | No | Per-stack TypeScript output file path | | `rust_output_crate` | string | No | Per-stack Rust output crate/module directory | | `rust_module` | boolean | No | Per-stack override for module vs crate generation | *** ## Common Use Cases [Section titled “Common Use Cases”](#common-use-cases) ### Multiple Stacks in One Project [Section titled “Multiple Stacks in One Project”](#multiple-stacks-in-one-project) Each stack is a separate `#[arete]` module that can contain multiple entities. 
Use multiple `[[stacks]]` entries when you have separate programs or data sources: ```toml [project] name = "my-defi-protocol" [[stacks]] name = "lending" stack = "LendingMarket" description = "Lending pool positions and interest rates" [[stacks]] name = "dex" stack = "DexPool" description = "DEX liquidity pools and swaps" ``` ### Separate SDK Output Directories [Section titled “Separate SDK Output Directories”](#separate-sdk-output-directories) ```toml [project] name = "my-project" [sdk] typescript_output_dir = "./frontend/src/generated" rust_output_dir = "./crates/generated" ``` ### Custom TypeScript Package Name [Section titled “Custom TypeScript Package Name”](#custom-typescript-package-name) ```toml [project] name = "my-project" [sdk] typescript_package = "@myorg/arete-sdk" ``` ### Rust SDK as Module (for Monorepos) [Section titled “Rust SDK as Module (for Monorepos)”](#rust-sdk-as-module-for-monorepos) ```toml [project] name = "my-project" [sdk] rust_module_mode = true # All Rust SDKs generated as modules ``` Or per-stack: ```toml [[stacks]] name = "my-game" stack = "SettlementGame" rust_module = true rust_output_crate = "./src/generated" # Will create mod.rs here ``` ### Per-Stack TypeScript Output File [Section titled “Per-Stack TypeScript Output File”](#per-stack-typescript-output-file) ```toml [[stacks]] name = "game" stack = "GameState" typescript_output_file = "./src/game.ts" ``` *** ## Environment Variables [Section titled “Environment Variables”](#environment-variables) Environment variables override configuration file values: | Variable | Description | Overrides | | --------------- | -------------------------- | ------------------ | | `ARETE_API_URL` | Override the API endpoint | Default API URL | | `ARETE_API_KEY` | API key for authentication | (no file override) | *** ## Credentials File [Section titled “Credentials File”](#credentials-file) Authentication credentials are stored separately from your project configuration in: ```plaintext 
~/.arete/credentials.toml ``` This file contains your API key for accessing Arete Cloud: ```toml api_key = "your-api-key-here" ``` The credentials file is created automatically when you authenticate via the CLI. Since this file contains sensitive information, it is stored in your home directory and should never be committed to version control. Closed Beta During closed beta, you need an API key to deploy. [Contact us on X](https://x.com/usearete) to request access. *** ## Validation [Section titled “Validation”](#validation) Validate your configuration: ```bash a4 config validate ``` This checks: * Required fields are present * File paths are valid * Stack references resolve to valid `.stack.json` files * No conflicting settings *** ## Next Steps [Section titled “Next Steps”](#next-steps) * [CLI Commands](/cli/commands) — Full CLI reference * [Your First Stack](/building-stacks/your-first-stack) — Complete tutorial * [SDK Reference: Rust](/sdks/rust) — Rust SDK usage and module mode # Finding IDLs > How to find IDL files for Solana programs when building custom Arete stacks. Arete uses IDL (Interface Definition Language) files to generate type-safe bindings for your data streams. An IDL is a JSON file that describes a Solana program’s accounts, instructions, and custom types. Arete supports Anchor and other framework IDL formats. To build a custom stack, you need the IDL for the program you want to track. *** ## Methods to Find IDLs [Section titled “Methods to Find IDLs”](#methods-to-find-idls) Follow these steps in order to find the IDL for any Solana program. ### 1. Program GitHub Repository [Section titled “1. Program GitHub Repository”](#1-program-github-repository) Most Solana projects are open source. Search GitHub for the protocol name followed by “idl.json” or look in the project’s repository. Common locations include: * `target/idl/program_name.json` * `idl/program_name.json` * The “Releases” page as an attached asset ### 2. 
Anchor CLI Fetch [Section titled “2. Anchor CLI Fetch”](#2-anchor-cli-fetch)

If a program is built with Anchor and the developers uploaded the IDL on-chain, you can fetch it directly using the Anchor CLI:

```bash
anchor idl fetch <PROGRAM_ID> --provider.cluster mainnet -o idl/program.json
```

Replace `<PROGRAM_ID>` with the program’s on-chain address. Not every program has an IDL on-chain, so this may return an error if it’s missing.

### 3. NPM or Rust Packages

[Section titled “3. NPM or Rust Packages”](#3-npm-or-rust-packages)

Many protocols publish SDKs for developers. These packages often include the IDL file so the SDK can encode and decode instructions.

* **NPM:** Check `node_modules/@protocol-name/sdk/dist/idl.json` or similar paths.
* **Crates.io:** Some Rust crates include the IDL as a resource.

### 4. Block Explorers

[Section titled “4. Block Explorers”](#4-block-explorers)

Explorers like Solscan or Solana.fm sometimes host IDLs for verified programs. Look for a “Contract” or “IDL” tab when viewing a program address. You can often download the JSON directly from these pages.

### 5. Manual Creation

[Section titled “5. Manual Creation”](#5-manual-creation)

If an IDL isn’t available, you can create one manually by examining the program’s source code. Tools like Kinobi or Codama can generate IDLs by parsing the Rust source.

***

## Where to Put IDLs

[Section titled “Where to Put IDLs”](#where-to-put-idls)

In an Arete project, store your IDL files in an `idl/` directory at the root of your stack. This keeps your project organized and makes it easy to reference them in your code.
```text
my-stack/
├── idl/
│   └── program_name.json   # IDL goes here
├── src/
│   └── stack.rs            # References the IDL
├── arete.toml
└── Cargo.toml
```

Reference the IDL in your `src/stack.rs` using the `#[arete]` macro, which is applied to a `pub mod`:

```rust
#[arete(idl = "idl/program_name.json")]
pub mod my_stack {
    // Entity definitions...
}
```

***

## Common IDL Locations

[Section titled “Common IDL Locations”](#common-idl-locations)

| Protocol | Program ID | Source |
| -------- | ---------- | ------ |
| ORE | `oreo7nRnU86QCen6Np3iH6q8C6c6K6c6K6c6K6c6K6c` | [GitHub](https://github.com/regolith-labs/ore) |
| System Program | `11111111111111111111111111111111` | Built-in |
| Token Program | `TokenkegQfeZyiNwAJbNbGKPFXCWuBvf9Ss623VQ5DA` | Built-in |

***

## Exploring IDLs with the CLI

[Section titled “Exploring IDLs with the CLI”](#exploring-idls-with-the-cli)

Once you have an IDL file, use `a4 idl` to explore it before writing your stack:

```bash
# Get a quick overview
a4 idl summary idl/program.json

# Browse all instructions
a4 idl instructions idl/program.json

# Find how accounts relate to each other
a4 idl relations idl/program.json

# Search for anything
a4 idl search idl/program.json <query>
```

See the [a4 idl reference](/cli/idl/) for the full command list.

# Environment Setup

Set up your development environment for building Arete stacks.

Just want to use an existing stack? If you’re consuming data from a deployed stack, see [Using Stacks → Installation](/using-stacks/installation) instead.

***

## Prerequisites

[Section titled “Prerequisites”](#prerequisites)

* **Rust** 1.70+ ([install via rustup](https://rustup.rs/))
* A code editor with Rust support (VS Code + rust-analyzer recommended)

***

## CLI

[Section titled “CLI”](#cli)

The Arete CLI (`a4`) handles project initialization, deployment, and SDK generation.

```bash
cargo install a4-cli
```

Verify the installation:

```bash
a4 --version
```

PATH configuration
Ensure `~/.cargo/bin` is in your PATH.
Most Rust installations configure this automatically.

***

## Rust Dependencies

[Section titled “Rust Dependencies”](#rust-dependencies)

Add the Arete crate to your stack project’s `Cargo.toml`:

```toml
[dependencies]
arete = "0.1.1"
serde = { version = "1.0", features = ["derive"] }
```

This gives you access to:

* `#[arete]` — The main macro for defining stacks
* `#[entity]`, `#[map]`, `#[aggregate]` — Entity- and field-level attributes
* `Stream` derive macro — Generates streaming infrastructure

***

## Project Initialization

[Section titled “Project Initialization”](#project-initialization)

Create a new stack project:

```bash
cargo new my-stack --lib
cd my-stack
```

Initialize Arete configuration:

```bash
a4 init
```

This creates `arete.toml` and a `.arete/` directory for generated stack files.

**arete.toml:**

```toml
[project]
name = "my-stack"

[sdk]
output_dir = "./generated"
```

Validate your setup:

```bash
a4 config validate
```

***

## Environment Variables

[Section titled “Environment Variables”](#environment-variables)

| Variable | Description | Default |
| -------- | ----------- | ------- |
| `ARETE_API_URL` | Override API endpoint | Production API |

***

## Next Steps

[Section titled “Next Steps”](#next-steps)

* [Stack Definitions](/building-stacks/stack-definitions) — Understand the declarative model
* [Your First Stack](/building-stacks/your-first-stack) — Build an end-to-end streaming app
* [Macros Reference](/building-stacks/rust-dsl/macros) — Complete attribute documentation

***

## Troubleshooting

[Section titled “Troubleshooting”](#troubleshooting)

| Issue | Solution |
| ----- | -------- |
| `a4: command not found` | Add `~/.cargo/bin` to your PATH |
| `cargo build` fails | Ensure `arete = "0.1.1"` is in Cargo.toml |
| `a4 init` fails | Check you’re in a valid Cargo project |

# Macro Reference

Arete uses Rust procedural macros to define data pipelines declaratively.
These macros transform your Rust structs into a unified stack spec (`.stack.json`), which is then used for both local execution and cloud deployment.

***

## Module Macro

[Section titled “Module Macro”](#module-macro)

### `#[arete]`

[Section titled “#\[arete\]”](#arete)

The entry point for any Arete stream definition. It must be applied to a `pub mod` that contains your entity definitions.

```rust
#[arete(idl = "idl.json")]
pub mod my_stream {
    // Entity definitions...
}
```

**Arguments:**

| Argument | Type | Required | Description |
| -------- | ---- | -------- | ----------- |
| `idl` | `string` \| `array` | No\* | Path(s) to Anchor IDL JSON file(s) relative to `Cargo.toml`. Use an array for multi-program stacks: `idl = ["ore.json", "entropy.json"]`. |
| `proto` | `string` \| `array` | No\* | Path(s) to `.proto` files for Protobuf-based streams. |
| `skip_decoders` | `bool` | No | If true, skips generating instruction decoders (useful for manual decoding). |

*\* Either `idl` or `proto` must be provided.*

***

## Entity Macro

[Section titled “Entity Macro”](#entity-macro)

### `#[entity]`

[Section titled “#\[entity\]”](#entity)

Defines a struct as an Arete entity (state projection). Each entity results in a separate typed stream.

```rust
#[entity(name = "TradeTracker")]
struct Tracker {
    // Field mappings...
}
```

**Arguments:**

| Argument | Type | Required | Description |
| -------- | ---- | -------- | ----------- |
| `name` | `string` | No | Custom name for the entity. Defaults to the struct name. |

***

## Field Mapping Macros

[Section titled “Field Mapping Macros”](#field-mapping-macros)

These macros are applied to fields within an `#[entity]` struct to define how data is captured and updated.
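Each mapping macro below accepts a `strategy` argument that controls how repeated writes to the same field combine over time. As a mental model, here is a plain-Rust sketch of three common strategies (illustrative only, not Arete's actual implementation):

```rust
/// Illustrative update strategies (not Arete's internal machinery).
#[derive(Debug, PartialEq)]
enum Strategy {
    SetOnce,
    LastWrite,
    Append,
}

/// Apply one incoming value to a field's stored values under a strategy.
fn apply(strategy: &Strategy, current: &mut Vec<u64>, incoming: u64) {
    match strategy {
        // SetOnce: only write if the field is currently empty
        Strategy::SetOnce => {
            if current.is_empty() {
                current.push(incoming);
            }
        }
        // LastWrite: always overwrite with the latest value
        Strategy::LastWrite => {
            current.clear();
            current.push(incoming);
        }
        // Append: keep every value
        Strategy::Append => current.push(incoming),
    }
}

fn main() {
    let updates = [10u64, 20, 30];
    let (mut once, mut last, mut all) = (vec![], vec![], vec![]);
    for u in updates {
        apply(&Strategy::SetOnce, &mut once, u);
        apply(&Strategy::LastWrite, &mut last, u);
        apply(&Strategy::Append, &mut all, u);
    }
    assert_eq!(once, vec![10]);        // first write wins
    assert_eq!(last, vec![30]);        // latest write wins
    assert_eq!(all, vec![10, 20, 30]); // every write kept
}
```

The aggregating strategies (`Sum`, `Count`, `Min`, `Max`, `UniqueCount`) follow the same pattern, folding each incoming value into a single accumulator instead of a list.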
### `#[map]`

[Section titled “#\[map\]”](#map)

Maps a field from a Solana account directly to an entity field.

```rust
#[map(pump_sdk::accounts::BondingCurve::virtual_sol_reserves, strategy = LastWrite)]
pub reserves: u64,
```

The path prefix (`pump_sdk::accounts::`) is derived from the IDL’s program name. See [Stack Definitions](/building-stacks/stack-definitions) for the naming convention.

**Arguments:**

| Argument | Type | Required | Description |
| -------- | ---- | -------- | ----------- |
| `from` | `path` | Yes | Source account field (e.g., `AccountType::field_name`). |
| `primary_key` | `bool` | No | Marks this field as the primary key for the entity. |
| `lookup_index` | `bool` \| `fn` | No | Creates a lookup index for this field. Accepts an optional `register_from` parameter for cross-account PDA resolution (see [Cross-Account Resolution](#cross-account-resolution-with-register_from) below). |
| `strategy` | `Strategy` | No | Update strategy (default: `SetOnce`). |
| `transform` | `Transform` | No | Transformation to apply before storing. |
| `rename` | `string` | No | Custom target field name in the projection. |
| `temporal_field` | `string` | No | Secondary field for temporal indexing. |
| `join_on` | `string` | No | Field to join on for multi-entity lookups. |

### `#[from_instruction]`

[Section titled “#\[from\_instruction\]”](#from_instruction)

Maps a field from an instruction’s arguments or accounts.

```rust
#[from_instruction(PlaceTrade::amount, strategy = Append)]
pub trade_amounts: Vec<u64>,
```

**Arguments:** Accepts the same arguments as `#[map]`.

### `#[event]`

[Section titled “#\[event\]”](#event)

Captures multiple fields from an instruction as a single structured event.
```rust #[event( from = PlaceTrade, fields = [amount, accounts::user], strategy = Append )] pub trades: Vec, ``` **Arguments:** | Argument | Type | Required | Description | | ------------ | ---------- | -------- | -------------------------------------------------------------------------------------------------------------------------- | | `from` | `path` | Yes | The source instruction type. | | `fields` | `array` | Yes | List of fields to capture. Use `accounts::name` for instruction accounts and `args::name` (or `data::name`) for arguments. | | `strategy` | `Strategy` | No | Update strategy (default: `SetOnce`). | | `transforms` | `array` | No | List of `(field, Transform)` tuples for processing captured fields. | | `lookup_by` | `field` | No | Field used to resolve the entity key. | | `rename` | `string` | No | Custom target field name. | | `join_on` | `field` | No | Join field for multi-entity lookups. | ### `#[snapshot]` [Section titled “#\[snapshot\]”](#snapshot) Captures the entire state of a source account as a snapshot. ```rust #[snapshot(from = BondingCurve, strategy = LastWrite)] pub latest_state: BondingCurve, ``` **Arguments:** | Argument | Type | Required | Description | | ------------ | ---------- | -------- | ------------------------------------------------------------ | | `from` | `path` | No | Source account type (inferred from field type if omitted). | | `strategy` | `Strategy` | No | Only `SetOnce` or `LastWrite` allowed. | | `transforms` | `array` | No | List of `(field, Transform)` tuples for specific sub-fields. | | `lookup_by` | `field` | No | Field used to resolve the entity key. | | `rename` | `string` | No | Custom target field name. | | `join_on` | `field` | No | Join field for multi-entity lookups. | ### `#[aggregate]` [Section titled “#\[aggregate\]”](#aggregate) Defines a declarative aggregation from instructions. 
```rust
#[aggregate(from = [Buy, Sell], field = amount, strategy = Sum)]
pub total_volume: u64,
```

**Arguments:**

| Argument | Type | Required | Description |
| -------- | ---- | -------- | ----------- |
| `from` | `path` \| `array` | Yes | Instruction(s) to aggregate from. |
| `field` | `field` | No | Field to aggregate. Use `accounts::name` or `args::name`. If omitted, performs `Count`. |
| `strategy` | `Strategy` | No | `Sum`, `Count`, `Min`, `Max`, `UniqueCount`. |
| `condition` | `string` | No | Boolean expression (e.g., `"amount > 1_000_000"`). |
| `transform` | `Transform` | No | Transform to apply before aggregating. |
| `lookup_by` | `field` | No | Field used to resolve the entity key. |
| `rename` | `string` | No | Custom target field name. |
| `join_on` | `field` | No | Join field for multi-entity lookups. |

### `#[computed]`

[Section titled “#\[computed\]”](#computed)

Defines a field derived from other fields in the same entity using a Rust-like expression.

```rust
#[computed(total_buy_volume + total_sell_volume)]
pub total_volume: u64,
```

**Arguments:** Takes a single Rust expression. Can reference other fields in the entity.

### `#[resolve]`

[Section titled “#\[resolve\]”](#resolve)

Attaches a resolver to a field. Arete fetches the external data server-side and delivers it as part of the entity — no extra API calls needed from the client.

```rust
#[resolve(address = "oreoU2P8bN6jkk3jbaiVxYnG1dCXcYxwhwyK9jSybcp")]
pub ore_metadata: Option<TokenMetadata>,
```

**Arguments:**

| Argument | Type | Required | Description |
| -------- | ---- | -------- | ----------- |
| `address` | `string` | Yes | The fixed address to resolve against. For `TokenMetadata` this is the mint address. |
**Available resolvers:**

| Resolver | Type field | What it fetches |
| -------- | ---------- | --------------- |
| `TokenMetadata` | `Option<TokenMetadata>` | SPL token metadata (name, symbol, decimals, logo) via the DAS API |

Once resolved, the data is available to other fields in the same entity. Use `#[computed]` to derive values from it, or reference the resolver fields directly in `#[map]` transforms:

```rust
// Option A: use resolver decimals in a transform on #[map]
#[map(ore_sdk::accounts::Round::motherlode, strategy = LastWrite, transform = ui_amount(ore_metadata.decimals))]
pub motherlode: Option<f64>,

// Option B: use resolver computed methods in #[computed]
#[map(ore_sdk::accounts::Round::motherlode, strategy = LastWrite)]
pub motherlode_raw: Option<u64>,

#[computed(state.motherlode_raw.and_then(|v| ore_metadata.ui_amount(v)))]
pub motherlode_ui: Option<f64>,
```

Resolver data is cached server-side — metadata is fetched once per address and reused across all entities that reference it. See [Resolvers](./resolvers) for the full reference on `TokenMetadata` fields and computed methods.

### `#[derive_from]`

[Section titled “#\[derive\_from\]”](#derive_from)

Derives values from instruction metadata or arguments.

```rust
#[derive_from(from = [Buy, Sell], field = __timestamp)]
pub last_updated: i64,
```

**Arguments:**

| Argument | Type | Required | Description |
| -------- | ---- | -------- | ----------- |
| `from` | `path` \| `array` | Yes | Instruction(s) to derive from. |
| `field` | `field` | Yes | Target field. Can be a special field or a regular instruction arg. |
| `strategy` | `Strategy` | No | `LastWrite` or `SetOnce`. |
| `condition` | `string` | No | Boolean expression for conditional derivation. |
| `transform` | `Transform` | No | Transform to apply. |
| `lookup_by` | `field` | No | Field used to resolve the entity key. |
| **Special Fields:** | Field | Description | | ------------- | ----------------------------------------------------------- | | `__timestamp` | The Unix timestamp of the block containing the instruction. | | `__slot` | The slot number of the block. | | `__signature` | The transaction signature (Base58 encoded). | *** ## Cross-Account Resolution with `register_from` [Section titled “Cross-Account Resolution with register\_from”](#cross-account-resolution-with-register_from) When an entity’s state includes data from a **secondary account** (one that doesn’t store the entity’s primary key directly), you need a way to tell Arete how to map that account’s address back to an entity instance. The `lookup_index(register_from = [...])` syntax on `#[map]` handles this declaratively. ### The Problem [Section titled “The Problem”](#the-problem) Consider a Solana program where a `BondingCurve` PDA is derived from a token `mint`. When a `BondingCurve` account update arrives, Arete needs to know which `Token` entity (keyed by `mint`) it belongs to. The `BondingCurve` account itself doesn’t store the `mint` — the relationship only exists in instructions that reference both accounts together. 
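Conceptually, the missing link is a reverse-lookup registry: when an instruction referencing both accounts is processed, record the mapping from the secondary account's address to the entity's primary key, then use that map to route later account updates. A simplified plain-Rust sketch of the idea (the names and types here are illustrative, not Arete's internals):

```rust
use std::collections::HashMap;

/// Simplified reverse-lookup registry: secondary account address -> primary key.
#[derive(Default)]
struct PdaRegistry {
    map: HashMap<String, String>,
}

impl PdaRegistry {
    /// Called when an instruction references both accounts together
    /// (the registration points that `register_from` declares).
    fn register(&mut self, pda_address: &str, primary_key: &str) {
        self.map
            .entry(pda_address.to_string())
            .or_insert_with(|| primary_key.to_string());
    }

    /// Called when a raw account update arrives: which entity does it belong to?
    fn resolve(&self, pda_address: &str) -> Option<&String> {
        self.map.get(pda_address)
    }
}

fn main() {
    let mut registry = PdaRegistry::default();
    // A hypothetical Buy instruction referenced bonding curve "BC111" and mint "MINTaaa"
    registry.register("BC111", "MINTaaa");
    // A later BondingCurve account update for "BC111" now routes to the right Token entity
    assert_eq!(registry.resolve("BC111").map(String::as_str), Some("MINTaaa"));
    // An unregistered PDA cannot be routed yet
    assert_eq!(registry.resolve("BC999"), None);
}
```

The `register_from` syntax lets you declare these registration points directly on the field mapping, without writing any of this plumbing yourself.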
### The Solution [Section titled “The Solution”](#the-solution) Add `register_from` to a `lookup_index` field that maps the secondary account’s address: ```rust #[map(pump_sdk::accounts::BondingCurve::__account_address, lookup_index( register_from = [ (pump_sdk::instructions::Create, accounts::bonding_curve, accounts::mint), (pump_sdk::instructions::Buy, accounts::bonding_curve, accounts::mint), (pump_sdk::instructions::Sell, accounts::bonding_curve, accounts::mint) ] ), strategy = SetOnce)] pub bonding_curve: String, ``` Each tuple in `register_from` specifies: | Position | Meaning | Example | | -------- | ----------------------------------------------------------- | -------------------------------- | | 1st | The instruction type to watch | `pump_sdk::instructions::Create` | | 2nd | The instruction account containing the PDA address | `accounts::bonding_curve` | | 3rd | The instruction account containing the entity’s primary key | `accounts::mint` | When any of these instructions are processed, Arete registers the mapping `bonding_curve_address → mint`. Subsequent `BondingCurve` account updates are then routed to the correct `Token` entity. ### Cross-Program Example [Section titled “Cross-Program Example”](#cross-program-example) `register_from` also works across programs in multi-IDL stacks. For example, linking an entropy program’s `Var` account to an ore program’s `Round` entity: ```rust #[arete(idl = ["idl/ore.json", "idl/entropy.json"])] pub mod ore_stream { // ... entity definition ... 
#[derive(Debug, Clone, Serialize, Deserialize, Stream)]
pub struct EntropyState {
    #[map(entropy_sdk::accounts::Var::value, strategy = LastWrite, transform = Base58Encode)]
    pub entropy_value: Option<String>,

    // The lookup_index with register_from links Var accounts to Round entities
    #[map(entropy_sdk::accounts::Var::__account_address, lookup_index(
        register_from = [
            (ore_sdk::instructions::Deploy, accounts::entropyVar, accounts::round),
            (ore_sdk::instructions::Reset, accounts::entropyVar, accounts::round)
        ]
    ), strategy = SetOnce)]
    pub entropy_var_address: Option<String>,
}
}
```

Here, the `Deploy` and `Reset` instructions (from the ore program) reference both the `entropyVar` account (from the entropy program) and the `round` account. This is enough for Arete to establish the mapping.

When to use `register_from`
Use `register_from` whenever you map fields from an account that **doesn’t contain the entity’s primary key in its own data**. If the account already contains the primary key (or its address *is* the primary key), a plain `lookup_index` is sufficient.

Advanced: `#[resolve_key]` / `#[register_pda]`
The `register_from` syntax generates the same code as the standalone `#[resolve_key]` and `#[register_pda]` declarative hooks described below. Those hooks remain available as power-user escape hatches for custom resolution strategies or non-standard instruction patterns, but `register_from` is the preferred approach for most use cases.

***

## Declarative Hooks (Advanced)

[Section titled “Declarative Hooks (Advanced)”](#declarative-hooks-advanced)

Declarative hooks are struct-level annotations for custom key resolution logic and PDA mappings. For most use cases, prefer `lookup_index(register_from = [...])` on field mappings (see above). These hooks are available as escape hatches for advanced scenarios.

### `#[resolve_key]`

[Section titled “#\[resolve\_key\]”](#resolve_key)

Defines how an account’s primary key is resolved.
This is essential when an account doesn’t store its “owner” ID directly, but its address can be derived via PDA or looked up in a registry. ```rust #[resolve_key( account = UserProfile, strategy = "pda_reverse_lookup", lookup_name = "user_pda" )] struct UserResolver; ``` **Arguments:** | Argument | Type | Required | Description | | ------------- | -------- | -------- | ------------------------------------------------------------------------------ | | `account` | `path` | Yes | The account type this resolver applies to. | | `strategy` | `string` | No | `"pda_reverse_lookup"` (default) or `"direct_field"`. | | `lookup_name` | `string` | No | The name of the registry to use for reverse lookups. | | `queue_until` | `array` | No | List of instructions to wait for before resolving (ensures PDA is registered). | ### `#[register_pda]` [Section titled “#\[register\_pda\]”](#register_pda) Registers a mapping between a PDA address and a primary key during an instruction. This mapping is stored in a temporary registry to enable `#[resolve_key]` to work for accounts that are created or updated in the same transaction. ```rust #[register_pda( instruction = CreateUser, pda_field = accounts::user_pda, primary_key = args::user_id, lookup_name = "user_pda" )] struct PdaMapper; ``` **Arguments:** | Argument | Type | Required | Description | | ------------- | -------- | -------- | ------------------------------------------------------------------- | | `instruction` | `path` | Yes | The instruction where the PDA is created/referenced. | | `pda_field` | `field` | Yes | The field containing the PDA address (e.g., `accounts::user_pda`). | | `primary_key` | `field` | Yes | The primary key to associate with this PDA (e.g., `args::user_id`). | | `lookup_name` | `string` | No | The name of the registry to store this mapping in. 
| *** ## Quick Reference [Section titled “Quick Reference”](#quick-reference) ### Update Strategies [Section titled “Update Strategies”](#update-strategies) | Strategy | Description | | ------------- | ------------------------------------------- | | `SetOnce` | Only write if the field is currently empty. | | `LastWrite` | Always overwrite with the latest value. | | `Append` | Append to a `Vec`. | | `Merge` | Deep-merge objects (for nested structs). | | `Max` | Keep the maximum value. | | `Sum` | Accumulate numeric values. | | `Count` | Increment by 1 for each occurrence. | | `Min` | Keep the minimum value. | | `UniqueCount` | Track unique values and store the count. | ### Transformations [Section titled “Transformations”](#transformations) | Transform | Description | | -------------- | ---------------------------------------------------- | | `Base58Encode` | Encode bytes to Base58 string (default for Pubkeys). | | `Base58Decode` | Decode Base58 string to bytes. | | `HexEncode` | Encode bytes to Hex string. | | `HexDecode` | Decode Hex string to bytes. | | `ToString` | Convert value to string. | | `ToNumber` | Convert value to number. | ### Resolver Computed Methods [Section titled “Resolver Computed Methods”](#resolver-computed-methods) These methods are available in `#[computed]` expressions when using the `TokenMetadata` resolver. See [Resolvers](./resolvers) for details. | Method | Description | | ----------------------------------------- | ----------------------------------------------------- | | `TokenMetadata::ui_amount(raw, decimals)` | Convert raw token amount to human-readable UI amount. | | `TokenMetadata::raw_amount(ui, decimals)` | Convert UI amount back to raw token amount. | # Overview The Arete Rust DSL (Domain Specific Language) is a declarative syntax for defining streaming data pipelines. Using procedural macros, you describe **what** data you want from Solana, not **how** to fetch it. 
*** ## Overview [Section titled “Overview”](#overview) Instead of writing complex ETL pipelines with manual account parsing and event handling, the DSL lets you define: * **Entities** — Structured data objects that project on-chain state * **Field Mappings** — How data flows from Solana accounts into your entities * **Aggregations** — Computed metrics that update automatically * **Resolvers** — External data (like token metadata) enriched automatically * **Strategies** — How incoming data merges with existing state The macros transform your Rust code into a JSON-based stack spec (`.stack.json`), which Arete compiles into optimized bytecode for real-time execution. *** ## The DSL in Practice [Section titled “The DSL in Practice”](#the-dsl-in-practice) ```rust #[arete(idl = "my_program.json")] pub mod my_stream { #[entity] pub struct Token { // Map account fields directly #[map(my_program_sdk::accounts::TokenAccount::balance, strategy = LastWrite)] pub balance: u64, // Aggregate events into metrics #[aggregate(from = my_program_sdk::instructions::Trade, field = amount, strategy = Sum)] pub total_volume: u64, // Derive computed values #[computed(balance * price)] pub tvl: u64, // Enrich with off-chain token metadata (name, symbol, decimals, logo) #[resolve(address = "So11111111111111111111111111111111111111112")] pub token_metadata: Option<TokenMetadata>, } } ``` Note Field paths use the `{name}_sdk::accounts::` and `{name}_sdk::instructions::` prefix, where the name is derived from the IDL’s program metadata. For example, an IDL named “my\_program” generates `my_program_sdk::accounts::*` and `my_program_sdk::instructions::*`.
*** ## Key Concepts [Section titled “Key Concepts”](#key-concepts) ### Module-Level [Section titled “Module-Level”](#module-level) | Macro | Purpose | | ---------- | ------------------------------------------------------------------------- | | `#[arete]` | Entry point — defines the stream and links to data sources (IDL/Protobuf) | ### Entity-Level [Section titled “Entity-Level”](#entity-level) | Macro | Purpose | | ----------- | ---------------------------------------------------- | | `#[entity]` | Marks a struct as a data projection | | `#[view]` | Defines queryable views (list, state) for the entity | ### Field-Level [Section titled “Field-Level”](#field-level) | Macro | Purpose | | --------------------- | ------------------------------------------- | | `#[map]` | Maps fields from Solana account state | | `#[from_instruction]` | Extracts data from instruction arguments | | `#[aggregate]` | Computes running values (Sum, Count, etc.) | | `#[event]` | Captures instructions as structured events | | `#[snapshot]` | Captures complete account state | | `#[computed]` | Derives values from other entity fields | | `#[derive_from]` | Populates from instruction metadata | | `#[resolve]` | Fetches off-chain data via a named resolver | ### Resolvers [Section titled “Resolvers”](#resolvers) Resolvers are types that fetch external data server-side and deliver it as part of your entity. You attach them to fields using `#[resolve]`. | Resolver | Purpose | | --------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | `TokenMetadata` | Fetches SPL token metadata (name, symbol, decimals, logo) for a mint address via the DAS API. Also exposes `ui_amount` and `raw_amount` helper methods for use in `#[computed]` transforms. 
| For example, the ORE stack resolves token metadata for the ORE mint at a known address: ```rust #[resolve(address = "oreoU2P8bN6jkk3jbaiVxYnG1dCXcYxwhwyK9jSybcp")] pub ore_metadata: Option<TokenMetadata>, // The resolved decimals can then be used in transforms: #[map(ore_sdk::accounts::Round::motherlode, strategy = LastWrite, transform = ui_amount(ore_metadata.decimals))] pub motherlode: Option<f64>, ``` See [Resolvers](./resolvers) for the full reference. ### Cross-Account Resolution [Section titled “Cross-Account Resolution”](#cross-account-resolution) | Syntax | Purpose | | ------------------------------------- | ---------------------------------------------------- | | `lookup_index(register_from = [...])` | Inline PDA resolution on `#[map]` fields (preferred) | | `#[resolve_key]` | Advanced: custom primary key resolution | | `#[register_pda]` | Advanced: manual PDA mapping registration | *** ## Population Strategies [Section titled “Population Strategies”](#population-strategies) When data arrives, **strategies** determine how it’s merged with existing state: | Strategy | Behavior | Example Use | | ------------- | --------------------- | ------------------------ | | `LastWrite` | Overwrite with latest | Current balances | | `SetOnce` | Write only if empty | IDs, creation timestamps | | `Sum` | Add to existing total | Volume, TVL | | `Count` | Increment by 1 | Trade count | | `Append` | Add to list | Event history | | `Max` / `Min` | Keep extreme value | Price highs/lows | *** ## Next Steps [Section titled “Next Steps”](#next-steps) * **[Macro Reference](./macros)** — Complete documentation of every macro and its arguments * **[Population Strategies](./strategies)** — Deep dive into update strategies and when to use each * **[Resolvers](./resolvers)** — Enrich entities with token metadata and computed fields # Resolvers Resolvers enrich your entities with data that doesn’t live on-chain.
When you define a resolver on an entity field, Arete automatically fetches the external data server-side and delivers it to your clients as part of the entity — no extra API calls needed. *** ## Token Metadata [Section titled “Token Metadata”](#token-metadata) The built-in `TokenMetadata` resolver enriches your entity with SPL token metadata (name, symbol, decimals, logo) for any mint address. Arete resolves this automatically server-side when your entity includes a field typed as `TokenMetadata`: ```rust #[arete(idl = "idl/ore.json")] pub mod ore_stream { #[entity] pub struct OreRound { #[map(ore_sdk::accounts::Round::reward_mint, primary_key, strategy = SetOnce)] pub mint: String, // Arete resolves this automatically from the mint pub ore_metadata: Option<TokenMetadata>, } } ``` When a new `OreRound` entity is created, Arete sees the `TokenMetadata` field, resolves the metadata server-side, and delivers it as part of the entity. By the time data reaches your TypeScript client, the field is already filled in: ```typescript for await (const round of a4.views.OreRound.latest.use()) { console.log(round.ore_metadata?.name); // "Ore" console.log(round.ore_metadata?.symbol); // "ORE" console.log(round.ore_metadata?.decimals); // 11 console.log(round.ore_metadata?.logo_uri); // "https://..."
} ``` ### TokenMetadata Fields [Section titled “TokenMetadata Fields”](#tokenmetadata-fields) | Field | Type | Description | | ---------- | ---------------- | --------------------------------- | | `mint` | `string` | The mint address (always present) | | `name` | `string \| null` | Token name from on-chain metadata | | `symbol` | `string \| null` | Token ticker symbol | | `decimals` | `number \| null` | Number of decimal places | | `logo_uri` | `string \| null` | URL to the token’s logo image | ### Generated TypeScript [Section titled “Generated TypeScript”](#generated-typescript) The CLI generates both a TypeScript interface and a Zod schema for `TokenMetadata` in your stack SDK: ```typescript // Auto-generated in your stack SDK export interface TokenMetadata { mint: string; name?: string | null; symbol?: string | null; decimals?: number | null; logo_uri?: string | null; } export const TokenMetadataSchema = z.object({ mint: z.string(), name: z.string().nullable().optional(), symbol: z.string().nullable().optional(), decimals: z.number().nullable().optional(), logo_uri: z.string().nullable().optional(), }); ``` *** ## Computed Fields from Resolvers [Section titled “Computed Fields from Resolvers”](#computed-fields-from-resolvers) Resolvers also provide **computed methods** — functions that derive new values from the resolved data. These are evaluated server-side and delivered to your client as regular entity fields. 
The `TokenMetadata` resolver provides two computed methods: | Method | Description | Example | | ------------ | ----------------------------------------------------- | --------------------------------------- | | `ui_amount` | Converts raw token amount to human-readable UI amount | `1_000_000_000` with 9 decimals → `1.0` | | `raw_amount` | Converts human-readable UI amount to raw token amount | `1.0` with 9 decimals → `1_000_000_000` | Use these in `#[computed]` expressions: ```rust #[entity] pub struct OreRound { pub ore_metadata: Option<TokenMetadata>, #[map(ore_sdk::accounts::Round::motherlode, strategy = LastWrite)] pub motherlode_raw: u64, // Server-side: converts raw amount using the resolved decimals #[computed(TokenMetadata::ui_amount(motherlode_raw, ore_metadata.decimals))] pub motherlode_ui: Option<f64>, } ``` On the client, `motherlode_ui` arrives as a ready-to-display number: ```typescript for await (const round of a4.views.OreRound.latest.use()) { console.log(round.motherlode_ui); // 1.5 (human-readable ORE amount) } ``` *** ## How It Works [Section titled “How It Works”](#how-it-works) 1. **You define** a `TokenMetadata` field on your entity in Rust 2. **Arete resolves** the metadata server-side when the entity is first created 3. **Computed fields** referencing the resolver are evaluated server-side on every update 4. **Your client receives** the fully enriched entity — metadata and computed values included The resolution happens transparently. Your TypeScript and React code simply reads the fields like any other entity data. Note Resolver data is cached server-side. Token metadata is fetched once per mint and reused across all entities that reference it.
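For intuition, both helpers are just a decimal-place shift. Below is a minimal TypeScript sketch of the same math (our illustration; Arete evaluates these server-side):

```typescript
// Illustrative sketch only: mirrors the ui_amount / raw_amount semantics
// described above. Not Arete's server-side implementation.
function uiAmount(raw: bigint, decimals: number): number {
  // e.g. 1_000_000_000 raw with 9 decimals -> 1.0
  return Number(raw) / 10 ** decimals;
}

function rawAmount(ui: number, decimals: number): bigint {
  // e.g. 1.0 with 9 decimals -> 1_000_000_000 raw
  return BigInt(Math.round(ui * 10 ** decimals));
}
```

Note that converting very large raw amounts through `Number` loses precision beyond 2^53; the sketch is for intuition, not accounting.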
*** ## Next Steps [Section titled “Next Steps”](#next-steps) * [Schema Validation](/sdks/validation/) — Validate resolved data with Zod schemas on the client * [Macro Reference](./macros) — Complete documentation of `#[computed]` and other field macros * [Population Strategies](./strategies) — How incoming data merges with existing state # Population Strategies Population strategies define how an entity’s state is updated when new data arrives from the blockchain. Choosing the right strategy is critical for ensuring your projected state accurately reflects the underlying on-chain activity. ## What are Population Strategies? [Section titled “What are Population Strategies?”](#what-are-population-strategies) When an Arete handler processes a transaction or account update, it maps data from the source (e.g., an Anchor instruction or a Protobuf event) into your entity’s fields. A **Population Strategy** determines how that incoming value interacts with the existing value in that field. For example, should a “Total Volume” field be overwritten by the latest trade amount, or should the latest amount be added to the current total? Strategies answer this question. ## Strategy Selection Guide [Section titled “Strategy Selection Guide”](#strategy-selection-guide) Use this decision tree to identify the correct strategy for your field. ```mermaid graph TD A[Is this field set once and never changes?] -->|YES| B(SetOnce) A -->|NO| C[Does it need to track the latest value?] C -->|YES| D(LastWrite) C -->|NO| E[Is it a numeric aggregation?] E -->|YES| F(Max / Min / Sum / Count) E -->|NO| G[Is it a collection of events?]
G -->|YES| H(Append) G -->|NO| I(UniqueCount) ``` ### Quick Reference Table [Section titled “Quick Reference Table”](#quick-reference-table) | Strategy | Behavior | Best For | | :------------ | :------------------------------------ | :----------------------------------- | | `LastWrite` | Overwrites with newest data (Default) | Balances, Status, Current Prices | | `SetOnce` | Only sets if field is empty | IDs, Creation Timestamps, Owners | | `Sum` | Adds incoming value to current | Volume, Total Rewards, TVL | | `Count` | Increments by 1 | Trade Count, User Count, Event Count | | `Append` | Adds to an array/list | Trade History, Activity Logs | | `Max` / `Min` | Tracks the peak/trough | High/Low Prices, Peak Liquidity | | `UniqueCount` | Counts distinct occurrences | Active Users, Unique Voters | | `Merge` | Merges keys in an object | Configuration, Metadata | *** ## Detailed Reference [Section titled “Detailed Reference”](#detailed-reference) ### LastWrite (Default) [Section titled “LastWrite (Default)”](#lastwrite-default) The most common strategy. Whenever a new value arrives, it completely replaces the previous one. * **Use Case**: Tracking the current state of an account. * **Example**: ```rust #[map(from = "TokenAccount", field = "amount", strategy = LastWrite)] pub balance: u64, ``` ### SetOnce [Section titled “SetOnce”](#setonce) The field is populated only once. Subsequent updates for the same entity will ignore this mapping if the field already has a value. * **Use Case**: Immutable properties or “First Seen” metadata. * **Example**: ```rust #[map(from = "Market", field = "initializer", strategy = SetOnce)] pub creator: Pubkey, ``` ### Sum [Section titled “Sum”](#sum) Numeric values are added to the existing value. This is the foundation of tracking volume and throughput. * **Use Case**: Financial metrics and cumulative totals. 
* **Example**: ```rust #[aggregate(from = "Swap", field = "amount_in", strategy = Sum)] pub total_volume: u64, ``` ### Count [Section titled “Count”](#count) Ignores the incoming value and simply increments the field by 1 for every match. * **Use Case**: Tracking throughput or frequency. * **Example**: ```rust #[aggregate(from = "Trade", strategy = Count)] pub trade_count: u64, ``` ### Append [Section titled “Append”](#append) Adds the incoming value to a list. Use this to maintain a linear history of events within an entity. * **Use Case**: Event logs, audit trails. * **Example**: ```rust #[event(from = "Liquidate", strategy = Append)] pub liquidation_history: Vec, ``` ### Max / Min [Section titled “Max / Min”](#max--min) Keeps the highest or lowest value seen across all updates. * **Use Case**: 24h Highs/Lows, price discovery. * **Example**: ```rust #[aggregate(from = "OracleUpdate", field = "price", strategy = Max)] pub all_time_high: u64, ``` ### UniqueCount [Section titled “UniqueCount”](#uniquecount) Maintains a set of unique values internally but projects only the count of that set. * **Use Case**: Tracking “Active Users” or unique participants. * **Example**: ```rust #[aggregate(from = "Vote", field = "voter", strategy = UniqueCount)] pub total_unique_voters: u32, ``` ### Merge [Section titled “Merge”](#merge) Used for object-like fields where you want to update specific keys without blowing away the entire object. * **Use Case**: Dynamic configuration maps. * **Example**: ```rust #[map(from = "Config", strategy = Merge)] pub metadata: Map, ``` *** ## Common Patterns [Section titled “Common Patterns”](#common-patterns) ### Token Price tracking [Section titled “Token Price tracking”](#token-price-tracking) | Field | Strategy | Why | | :-------------- | :---------- | :------------------------------------------------- | | `current_price` | `LastWrite` | You only care about the most recent oracle update. 
| | `day_high` | `Max` | Tracks the peak price seen in the stream. | | `daily_volume` | `Sum` | Every swap adds to the total. | ### Governance tracking [Section titled “Governance tracking”](#governance-tracking) | Field | Strategy | Why | | :-------------- | :------------ | :----------------------------------- | | `total_votes` | `Sum` | Sum of vote weights. | | `voter_count` | `UniqueCount` | Count unique public keys that voted. | | `first_vote_at` | `SetOnce` | Record when the first vote was cast. | ## Common Mistakes to Avoid [Section titled “Common Mistakes to Avoid”](#common-mistakes-to-avoid) 1. **Summing non-numeric types**: `Sum` only works on integers and floats. Applying it to a `Pubkey` or `String` will result in a compilation error. 2. **Using LastWrite for Volume**: If you use `LastWrite` for a volume field, it will only show the amount of the *most recent* trade, not the total. 3. **Appending unnecessarily**: `Append` grows the size of your entity state. Avoid appending thousands of items to a single entity if you only need the latest state; use a separate stream or specific aggregations instead. 4. **Forgetting the Default**: If you don’t specify a strategy, `LastWrite` is used. Always explicitly state the strategy for aggregations to ensure clarity. # Stack Definitions > Introduction to the Arete Rust DSL for defining streaming data specifications. Arete uses a declarative Rust DSL (Domain Specific Language) to define how on-chain Solana data should be transformed, aggregated, and streamed to your application. Instead of writing complex ETL pipelines, you simply declare the final state you want, and Arete handles the rest. *** ## Why Declarative? [Section titled “Why Declarative?”](#why-declarative) Building data pipelines for Solana typically involves manual account parsing, complex event handling, and managing state synchronization. 
Arete replaces this imperative approach with a declarative model: | Imperative Approach (Traditional) | Declarative Approach (Arete) | | --------------------------------------------- | --------------------------------------------- | | Write custom decoding logic for every account | Use `#[map]` to link IDL fields to your state | | Manually track and sum event values | Use `#[aggregate(strategy = Sum)]` | | Manage WebSocket connections and state diffs | Define entities and let Arete stream updates | | Build custom backend services for data | Deploy a stack and use generated SDKs | *** ## Anatomy of a Stack Definition [Section titled “Anatomy of a Stack Definition”](#anatomy-of-a-stack-definition) An Arete definition is a Rust module annotated with `#[arete]`. Inside this module, you define **Entities** — the structured data objects your application will consume. A single module can contain multiple `#[entity]` structs, all packaged into one stack. The ORE stack is a real example. Here’s a simplified version showing the core concepts with a single IDL: ```rust use arete::prelude::*; #[arete(idl = "idl/ore.json")] pub mod ore_stream { use arete::macros::Stream; use serde::{Deserialize, Serialize}; // OreRound is the main entity -- one instance per mining round. // The `latest` view sorts rounds descending by round_id.
#[entity(name = "OreRound")] #[view(name = "latest", sort_by = "id.round_id", order = "desc")] pub struct OreRound { pub id: RoundId, pub state: RoundState, pub metrics: RoundMetrics, } #[derive(Debug, Clone, Serialize, Deserialize, Stream)] pub struct RoundId { // Primary key -- set once, never overwritten #[map(ore_sdk::accounts::Round::id, primary_key, strategy = SetOnce)] pub round_id: u64, #[map(ore_sdk::accounts::Round::__account_address, lookup_index, strategy = SetOnce)] pub round_address: String, } #[derive(Debug, Clone, Serialize, Deserialize, Stream)] pub struct RoundState { // LastWrite: overwritten each time the account updates #[map(ore_sdk::accounts::Round::motherlode, strategy = LastWrite)] pub motherlode: Option<u64>, #[map(ore_sdk::accounts::Round::total_deployed, strategy = LastWrite)] pub total_deployed: Option<u64>, // Computed field: derived from other fields in this entity #[computed(state.total_deployed.map(|d| d / 1_000_000_000))] pub total_deployed_sol: Option<u64>, } #[derive(Debug, Clone, Serialize, Deserialize, Stream)] pub struct RoundMetrics { // Aggregate: counts Deploy instructions referencing this round #[aggregate(from = ore_sdk::instructions::Deploy, strategy = Count, lookup_by = accounts::round)] pub deploy_count: Option<u64>, } } ``` SDK Module Naming Arete derives the program name from the IDL metadata and generates a typed SDK module named `{name}_sdk`. The ORE IDL produces `ore_sdk::accounts::*` and `ore_sdk::instructions::*`. The Entropy IDL produces `entropy_sdk::*`. All `#[map]` and `#[aggregate]` paths use these prefixes. ### Key Components [Section titled “Key Components”](#key-components) 1. **`#[arete]` Module** — The container for your definition. The `idl` argument accepts a single path or an array for multi-program stacks. 2. **`#[entity]` Struct** — An entity is an individual definition of some part of your app’s data.
Each entity represents a distinct concept — a round, a miner, a treasury — and declares exactly which on-chain fields belong to it, across as many accounts or programs as needed. A stack can have many entities. 3. **`#[view]`** — A view is a projection over an entity’s data. It defines what slice of the stream a client subscribes to. Every entity gets `state` (one item by key) and `list` (all items) by default. `#[view]` adds custom sorted or filtered projections on top — like `OreRound/latest` which streams rounds sorted by `round_id` descending. 4. **`#[derive(Stream)]` Structs** — Nested structs containing the actual field mappings. Must derive `Stream`, `Debug`, `Clone`, `Serialize`, and `Deserialize`. 5. **Primary Key** — Every entity needs one (annotated `primary_key`). This is how Arete tracks individual entity instances. 6. **Field Mappings** — Attributes on struct fields defining where data comes from and how it’s processed. 7. **Resolvers** — Fetch off-chain data (e.g. token metadata) and make it available to transforms. *** ## Mapping Types [Section titled “Mapping Types”](#mapping-types) Arete provides several mapping attributes to populate your entity fields: | Attribute | Source | Description | | --------------------- | ------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------- | | `#[map]` | Account State | Tracks fields within a Solana account. Updates whenever the account changes. Supports `lookup_index(register_from = [...])` for cross-account PDA resolution. | | `#[from_instruction]` | Instructions | Extracts arguments or account keys from a specific instruction. | | `#[aggregate]` | Events/Instructions | Computes running values (Sum, Count, etc.) from a stream of events. | | `#[event]` | Events | Captures specific instructions as a log of events within the entity. 
| | `#[snapshot]` | Account State | Captures the entire state of an account at a specific point in time. | | `#[computed]` | Local Fields | Derives a new value by performing calculations on other fields in the same entity. | | `#[derive_from]` | Instructions | Populates fields by deriving data from instruction context. | *** ## Population Strategies [Section titled “Population Strategies”](#population-strategies) When data arrives, **Strategies** determine how the field value is updated. This is particularly powerful for aggregations. | Strategy | Behavior | | ------------- | --------------------------------------------------------------------- | | `LastWrite` | (Default) Overwrites the field with the latest value. | | `SetOnce` | Sets the value once and ignores subsequent updates (perfect for IDs). | | `Sum` | Adds the incoming value to the existing total. | | `Count` | Increments the total by 1 for every matching event. | | `Append` | Adds the incoming value to a list (creating an event log). | | `Max` / `Min` | Keeps only the highest or lowest value seen. | *** ## Multi-Program Stacks [Section titled “Multi-Program Stacks”](#multi-program-stacks) A single stack can consume data from multiple Solana programs by passing an array of IDL files: ```rust #[arete(idl = ["idl/ore.json", "idl/entropy.json"])] pub mod ore_stream { // ore_sdk::accounts::* and ore_sdk::instructions::* are available // entropy_sdk::accounts::* and entropy_sdk::instructions::* are available } ``` Each IDL generates its own namespaced SDK module (e.g., `ore_sdk`, `entropy_sdk`). You can then map fields from any program’s accounts and reference instructions from any program in `register_from`, `#[aggregate]`, `#[event]`, etc. To link accounts across programs, use `lookup_index(register_from = [...])` — see the [Macro Reference](/building-stacks/rust-dsl/macros#cross-account-resolution-with-register_from) for the full syntax and examples. 
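The population strategies listed above reduce to a small per-field merge step. The following is an illustrative TypeScript sketch of those semantics for numeric fields (our own sketch, not Arete's server-side engine):

```typescript
type Strategy = "LastWrite" | "SetOnce" | "Sum" | "Count" | "Max" | "Min";

// Illustrative sketch only: Arete applies strategies server-side.
// Merges one incoming numeric value into a field's current value.
function merge(strategy: Strategy, current: number | undefined, incoming: number): number {
  switch (strategy) {
    case "LastWrite": return incoming;                  // always overwrite
    case "SetOnce":   return current ?? incoming;       // write only if empty
    case "Sum":       return (current ?? 0) + incoming; // running total
    case "Count":     return (current ?? 0) + 1;        // ignore value, count events
    case "Max":       return current === undefined ? incoming : Math.max(current, incoming);
    case "Min":       return current === undefined ? incoming : Math.min(current, incoming);
  }
}
```

For example, `merge("Sum", 10, 5)` yields `15`, while `merge("SetOnce", 1, 2)` keeps the existing `1`.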
*** ## Example WebSocket Frames [Section titled “Example WebSocket Frames”](#example-websocket-frames) A deployed stack produces WebSocket frames with the following structure (shown here for a `Token` entity). An `upsert` frame contains the full entity state: ```json { "op": "upsert", "mode": "state", "entity": "Token", "key": "So11111111111111111111111111111111111111112", "data": { "id": { "mint": "So11111111111111111111111111111111111111112" }, "state": { "reserves": 1500000000, "tvl": 150000000000 }, "metrics": { "total_volume": 42000000000 } } } ``` When only specific fields change, a `patch` frame contains just the updated values: ```json { "op": "patch", "mode": "state", "entity": "Token", "key": "So11111111111111111111111111111111111111112", "data": { "state": { "reserves": 1520000000, "tvl": 152000000000 } } } ``` The SDK merges patches into local state automatically, so your application always sees the complete entity. *** ## Next Steps [Section titled “Next Steps”](#next-steps) * [Mapping Macros](./rust-dsl/macros) — Deep dive into every mapping attribute and its parameters. * [Aggregation Strategies](./rust-dsl/strategies) — Learn how to build complex metrics using different strategies. * [CLI Reference](/cli/commands) — Learn how to build and deploy your stacks. # Workflow > The end-to-end process of building and deploying an Arete data pipeline. Building a stack follows a straightforward four-step workflow: define your data model in Rust, compile to build the stack, deploy to Arete Cloud, and connect from your application.
1 ### Write Rust Define entities using #\[arete] macro → 2 ### Build Stack Compile with cargo build → 3 ### Deploy via CLI Push to cloud with a4 up → 4 ### Connect from App Use generated SDK to stream *** ## Step 1: Write Your Stack Definition [Section titled “Step 1: Write Your Stack Definition”](#step-1-write-your-stack-definition) A stack definition is a Rust module that maps structure from the IDL into a rich, queryable state ready to consume in your application layer. Using Arete’s expressive DSL, you define **entities**, field mappings, aggregations, computed fields, relationships, and more — all in declarative Rust syntax. ```rust use arete::{arete, Stream}; #[arete(idl = "my_program.json")] pub mod ore_stack { #[entity] #[derive(Stream)] pub struct OreRound { #[map(RoundState::round_id, primary_key)] pub round_id: u64, #[map(RoundState::motherlode)] pub motherlode: u64, #[map(RoundState::difficulty)] pub difficulty: u64, } } ``` The Rust code is purely declarative—you’re describing **what** data you want, not **how** to fetch it. Arete handles all the account parsing, event processing, and state management. Note In practice, field paths use fully qualified names with the generated SDK module: e.g., `ore_sdk::accounts::RoundState::round_id`. The module name (`ore_sdk`) is derived from the IDL’s program metadata. The example above is simplified for clarity. → [Stack Definitions](/building-stacks/stack-definitions) — Learn the full DSL syntax *** ## Step 2: Build the Stack [Section titled “Step 2: Build the Stack”](#step-2-build-the-stack) When you compile your Rust project, Arete macros transform your definition into a portable JSON specification. This is the compiled representation of your data pipeline. 
```bash cargo build ``` After building, you’ll find the generated specification in `.arete/`: ```plaintext my-stack/ ├── src/lib.rs ├── Cargo.toml └── .arete/ └── OreStack.stack.json # Generated stack specification ``` The `.stack.json` file contains everything Arete needs to execute your pipeline: all entities, field mappings, aggregation logic, key resolution rules, and more. A single stack file can contain multiple entities. *** ## Step 3: Deploy with the CLI [Section titled “Step 3: Deploy with the CLI”](#step-3-deploy-with-the-cli) The Arete CLI (`a4`) handles deployment to Arete Cloud. A single command pushes your stack specification, builds the execution environment, and deploys to a global edge network. Closed Beta Arete is currently in closed beta. You’ll need an API key to deploy. [Contact us on X](https://x.com/usearete) to request access. ```bash # Initialize project config (creates arete.toml) a4 init # Deploy your stack a4 up ``` On success, you’ll receive a WebSocket URL: ```plaintext ✔ Stack pushed (v1) ✔ Build completed 🚀 Deployed to: wss://ore.stack.arete.run ``` → [CLI Reference](/cli/commands) — Full command documentation *** ## Step 4: Generate SDK and Connect [Section titled “Step 4: Generate SDK and Connect”](#step-4-generate-sdk-and-connect) Once deployed, generate a typed SDK for your stack: * Generate SDK — TypeScript ```bash a4 sdk create typescript ore ``` * Generate SDK — Rust ```bash a4 sdk create rust ore ``` This creates a package containing the stack definition that tells the Arete client how to interact with your feed, including all entities and their views. 
Import it in your application:

* Connect from app — React

```tsx
import { AreteProvider, useArete } from "arete-react";
import { ORE_STREAM_STACK } from "arete-stacks/ore"; // Generated in Step 4

// Wrap your app
<AreteProvider>
  <App />
</AreteProvider>;

// In your component
const stack = useArete(ORE_STREAM_STACK);
const { data: rounds } = stack.views.OreRound.list.use();
```

* Connect from app — TypeScript

```typescript
import { Arete } from "arete-typescript";
import { ORE_STREAM_STACK } from "arete-stacks/ore"; // Generated in Step 4

const a4 = await Arete.connect(ORE_STREAM_STACK);
for await (const round of a4.views.OreRound.list.subscribe()) {
  console.log("Round updated:", round);
}
```

* Connect from app — Rust

```rust
use a4_sdk::prelude::*;
use ore_stack::{OreStack, OreRound}; // Generated in Step 4

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    // Connect to your deployed stack
    let a4 = Arete::<OreStack>::connect().await?;

    // Stream updates via typed views
    let mut stream = a4.views.ore_round.latest().listen();
    while let Some(round) = stream.next().await {
        println!(
            "Round #{:?}: motherlode={:?}",
            round.id.round_id, round.state.motherlode
        );
    }
    Ok(())
}
```

Share the generated SDK with your team or publish it — anyone with the SDK can connect to your stack’s feed (assuming they’re authorized).
→ [Your First Stack](/building-stacks/your-first-stack) — Complete tutorial with working code *** ## Next Steps [Section titled “Next Steps”](#next-steps) | Goal | Page | | ----------------------------------- | ------------------------------------------------------------- | | Understand the DSL in depth | [Stack Definitions](/building-stacks/stack-definitions) | | Set up your development environment | [Installation](/building-stacks/installation) | | Build a complete example | [Your First Stack](/building-stacks/your-first-stack) | | Learn all available macros | [Macro Reference](/building-stacks/rust-dsl/macros) | | Master aggregation strategies | [Population Strategies](/building-stacks/rust-dsl/strategies) | # Your First Stack In this tutorial, you’ll build an end-to-end streaming data pipeline in 15 minutes. We’ll create an “Ore” stack that tracks ORE mining rounds and displays live round data in a React app. ## What You’ll Build [Section titled “What You’ll Build”](#what-youll-build) A full-stack streaming application consisting of: 1. **A Rust Definition**: Defines how to transform on-chain ORE mining data into a queryable “OreRound” entity. 2. **A Cloud Deployment**: A managed Arete instance that processes the stream. 3. **A React Frontend**: A live dashboard showing the latest ORE mining rounds that updates instantly as on-chain events occur. Demo Version This tutorial uses a simplified version of the ORE stack. For the full production-ready version with all features (computed fields, snapshots, round results, etc.), run `a4 create` and select the `ore` template. *** ## Part 1: Create the Stack (Rust) [Section titled “Part 1: Create the Stack (Rust)”](#part-1-create-the-stack-rust) The stack definition is the heart of your application. It tells Arete which on-chain data to watch and how to project it into a useful state. ### 1. Create a new Rust project [Section titled “1. 
Create a new Rust project”](#1-create-a-new-rust-project) ```bash cargo new ore-stack --lib cd ore-stack ``` ### 2. Add dependencies [Section titled “2. Add dependencies”](#2-add-dependencies) Add `arete` to your `Cargo.toml`: ```toml [dependencies] arete = "0.1.1" serde = { version = "1.0", features = ["derive"] } ``` ### 3. Get the ORE IDL [Section titled “3. Get the ORE IDL”](#3-get-the-ore-idl) Download the ORE program IDL file and place it in an `idl/` directory: ```bash mkdir -p idl curl -o idl/ore.json https://docs.arete.run/ore.json ``` What is an IDL? The IDL (Interface Definition Language) describes the Solana program’s account structures and instructions. Arete uses it to generate type-safe code for accessing on-chain data. Generated SDK Naming When you point to an IDL (e.g., `ore.json`), Arete reads the program name from the IDL metadata and generates a typed SDK module named `{name}_sdk` — so `ore.json` produces `ore_sdk::accounts::*` and `ore_sdk::instructions::*`. All `#[map]` and `#[aggregate]` paths use this prefix. ### 4. Write your stack definition [Section titled “4. Write your stack definition”](#4-write-your-stack-definition) Open `src/lib.rs` and define your projection. 
We’ll create a simplified version of the ORE mining round tracker:

src/lib.rs

```rust
use arete::prelude::*;

#[arete(idl = "idl/ore.json")]
pub mod ore_stream {
    use arete::macros::Stream;
    use serde::{Deserialize, Serialize};

    // The main entity representing an ORE mining round
    #[entity(name = "OreRound")]
    #[view(name = "latest", sort_by = "id.round_id", order = "desc")]
    pub struct OreRound {
        pub id: RoundId,
        pub state: RoundState,
        pub metrics: RoundMetrics,
    }

    // Round identification with primary key
    #[derive(Debug, Clone, Serialize, Deserialize, Stream)]
    pub struct RoundId {
        #[map(ore_sdk::accounts::Round::id, primary_key, strategy = SetOnce)]
        pub round_id: u64,
        #[map(ore_sdk::accounts::Round::__account_address, lookup_index, strategy = SetOnce)]
        pub round_address: String,
    }

    // Round state fields - updated on every change
    #[derive(Debug, Clone, Serialize, Deserialize, Stream)]
    pub struct RoundState {
        #[map(ore_sdk::accounts::Round::expires_at, strategy = LastWrite)]
        pub expires_at: Option<i64>,
        #[map(ore_sdk::accounts::Round::total_deployed, strategy = LastWrite)]
        pub total_deployed: Option<u64>,
        #[map(ore_sdk::accounts::Round::total_winnings, strategy = LastWrite)]
        pub total_winnings: Option<u64>,
    }

    // Aggregated metrics computed from transactions
    #[derive(Debug, Clone, Serialize, Deserialize, Stream)]
    pub struct RoundMetrics {
        // Count of deploy instructions for this round
        #[aggregate(from = ore_sdk::instructions::Deploy, strategy = Count, lookup_by = accounts::round)]
        pub deploy_count: Option<u64>,
        // Sum of all deployed SOL amounts
        #[aggregate(from = ore_sdk::instructions::Deploy, field = amount, strategy = Sum, lookup_by = accounts::round)]
        pub total_deployed_sol: Option<u64>,
    }
}
```

### 5. Build to generate the stack file

[Section titled “5. Build to generate the stack file”](#5-build-to-generate-the-stack-file)

When you compile your code, Arete macros automatically generate a `.arete/OreStream.stack.json` file containing all entities in your stack.
```bash
cargo build
```

Checkpoint

Verify that the file `.arete/OreStream.stack.json` exists in your project root. This JSON file contains the compiled specification for your stack, including all entities and their field mappings.

***

## Part 2: Deploy to Arete Cloud

[Section titled “Part 2: Deploy to Arete Cloud”](#part-2-deploy-to-arete-cloud)

Now let’s push your stack to the cloud. Arete manages the infrastructure, so you don’t need to worry about scaling or WebSocket servers.

Closed Beta

Arete is currently in closed beta. You’ll need an API key to deploy. [Contact us on X](https://x.com/usearete) to request access.

### 1. Initialize your project

[Section titled “1. Initialize your project”](#1-initialize-your-project)

This creates an `arete.toml` file that links your local stack to the cloud.

```bash
a4 init
```

### 2. Push and Deploy

[Section titled “2. Push and Deploy”](#2-push-and-deploy)

The `up` command is a shortcut that pushes your stack, builds the container, and deploys it to a global cluster.

```bash
a4 up
```

**Expected Output:**

```text
✔ Stack pushed (v1)
✔ Build created (id: bld_123...)
✔ Build completed
🚀 Deployed to: wss://<stack-name>.stack.arete.run
```

Checkpoint

Copy the **WebSocket URL** from the output. You’ll need it for the React app.

***

## Part 3: Generate SDK and Connect

[Section titled “Part 3: Generate SDK and Connect”](#part-3-generate-sdk-and-connect)

Finally, let’s build the frontend. First, generate a typed SDK from your deployed stack, then connect using React or TypeScript.

What is the SDK?

The generated SDK contains the typed interface to your stack — it tells the Arete client what entities exist, what views are available, and the TypeScript types for each. You can share this SDK with your team or publish it.

### 1. Generate the SDK

[Section titled “1. Generate the SDK”](#1-generate-the-sdk)

From your stack project directory (where you ran `a4 up`):

```bash
a4 sdk create typescript ore-stack --output ./sdk
```

This creates a typed SDK in `./sdk/` with full TypeScript support.

***

* React

### React: Set up your React app

[Section titled “React: Set up your React app”](#react-set-up-your-react-app)

```bash
npx create-react-app my-ore-app --template typescript
cd my-ore-app
npm install arete-react zustand
```

Copy the generated SDK into your React app:

```bash
cp -r ../ore-stack/sdk ./src/ore-sdk
```

### React: Configure the Provider

[Section titled “React: Configure the Provider”](#react-configure-the-provider)

Wrap your app in `AreteProvider` using the URL from `a4 up`.

src/index.tsx

```tsx
import ReactDOM from "react-dom/client";
import { AreteProvider } from "arete-react";
import App from "./App";

const root = ReactDOM.createRoot(document.getElementById("root")!);
root.render(
  <AreteProvider url="wss://<stack-name>.stack.arete.run">
    <App />
  </AreteProvider>,
);
```

### React: Use the Generated Stack

[Section titled “React: Use the Generated Stack”](#react-use-the-generated-stack)

src/App.tsx

```tsx
import { useArete } from "arete-react";
import { OreStack } from "./ore-sdk";

export default function App() {
  const stack = useArete(OreStack);
  const { data: rounds, isLoading } = stack.views.OreRound.latest.use({
    take: 5,
  });

  if (isLoading) return <div>Connecting to stream...</div>;

  return (
    <div>
      <h1>Latest ORE Mining Rounds</h1>
      {rounds?.map((round) => (
        <div key={round.id.round_id}>
          <h2>Round #{round.id.round_id}</h2>
          <p>Address: {round.id.round_address}</p>
          <p>
            Total Deployed: {round.state.total_deployed?.toLocaleString()}{" "}
            lamports
          </p>
          <p>
            Total Winnings: {round.state.total_winnings?.toLocaleString()}{" "}
            lamports
          </p>
          <p>Deploy Count: {round.metrics.deploy_count || 0}</p>
          <p>
            Total SOL Deployed:{" "}
            {round.metrics.total_deployed_sol?.toLocaleString()} lamports
          </p>
        </div>
      ))}
    </div>
  );
}
```

* TypeScript (Non-React)

### TypeScript: Set up your project

[Section titled “TypeScript: Set up your project”](#typescript-set-up-your-project)

```bash
mkdir my-ore-app && cd my-ore-app
npm init -y
npm install arete-typescript typescript
```

Copy the generated SDK:

```bash
cp -r ../ore-stack/sdk ./src/ore-sdk
```

### TypeScript: Connect and stream data

[Section titled “TypeScript: Connect and stream data”](#typescript-connect-and-stream-data)

src/index.ts

```typescript
import { Arete } from "arete-typescript";
import { OreStack, type OreRound } from "./ore-sdk";

async function main() {
  // Connect to your deployed stack using the generated SDK
  const a4 = await Arete.connect("wss://<stack-name>.stack.arete.run", {
    stack: OreStack,
  });

  console.log("Connected! Streaming ORE rounds...\n");

  // Stream updates with full type safety
  for await (const update of a4.views.OreRound.latest.watch()) {
    if (update.type === "upsert") {
      const round = update.data;
      console.log(`Round #${round.id.round_id}:`);
      console.log(`  Address: ${round.id.round_address}`);
      console.log(
        `  Total Deployed: ${round.state.total_deployed?.toLocaleString()} lamports`,
      );
      console.log(
        `  Total Winnings: ${round.state.total_winnings?.toLocaleString()} lamports`,
      );
      console.log(`  Deploy Count: ${round.metrics.deploy_count || 0}`);
      console.log("");
    }
  }
}

main().catch(console.error);
```

### TypeScript: Run your app

[Section titled “TypeScript: Run your app”](#typescript-run-your-app)

```bash
npx tsx src/index.ts
```

***

## Troubleshooting

[Section titled “Troubleshooting”](#troubleshooting)

### “Stack file not found”

[Section titled ““Stack file not found””](#stack-file-not-found)

Ensure you ran `cargo build` in your Rust project. The `#[arete]` macro generates the `.stack.json` file during compilation.

### WebSocket connection fails

[Section titled “WebSocket connection fails”](#websocket-connection-fails)

1. Check that the URL matches the output of `a4 up`.
2. Ensure your stack is successfully deployed by running `a4 stack list`.

### Data is not appearing

[Section titled “Data is not appearing”](#data-is-not-appearing)

1. Verify that the ORE program is emitting events and updating the Round accounts.
2. Check the browser console for any subscription errors.
3. Ensure your view path matches the entity name exactly (e.g., `OreRound/latest`).

## Next Steps

[Section titled “Next Steps”](#next-steps)

* Learn more about [Stack Macros](/building-stacks/rust-dsl/macros) to build complex pipelines.
* Explore [Population Strategies](/building-stacks/rust-dsl/strategies) for different data access patterns.
* Check the [CLI Reference](/cli/commands) for advanced deployment options.

# CLI Command Reference

Complete reference for the Arete CLI (`a4`).

***

## Global Options

[Section titled “Global Options”](#global-options)

| Option | Description |
| --- | --- |
| `--config, -c <path>` | Path to arete.toml (default: `arete.toml`) |
| `--json` | Output as JSON |
| `--verbose` | Enable verbose output |
| `--help, -h` | Show help |
| `--version, -V` | Show version |
| `--completions <shell>` | Generate shell completions (bash, zsh, fish, powershell, elvish) |

***

## Quick Reference

[Section titled “Quick Reference”](#quick-reference)

| Command | Description |
| --- | --- |
| `a4 create [name]` | Scaffold a new app from a template |
| `a4 init` | Initialize a stack project |
| `a4 up [stack]` | Deploy stack (push + build + deploy) |
| `a4 push [stack]` | Push stack to remote (alias) |
| `a4 status` | Show project overview |
| `a4 stack list` | List all stacks |
| `a4 stack show` | Show stack details |
| `a4 telemetry status` | Show telemetry status |
| `a4 explore` | Discover stacks and schemas |

npm vs Cargo

When installed via npm (`npm install -g @usearete/a4`), the command is `a4`.
When installed via Cargo (`cargo install a4-cli`), the command is also `a4`. Both are the same CLI.

***

## Create a New App

[Section titled “Create a New App”](#create-a-new-app)

### a4 create \[name]

[Section titled “a4 create \[name\]”](#a4-create-name)

Scaffold a new Arete project from a template. This is the fastest way to get started.

```bash
# Interactive — prompts for name and template
npx @usearete/a4 create

# With project name
npx @usearete/a4 create my-app

# With specific template
npx @usearete/a4 create my-app --template react-ore
```

**Available templates:**

| Template | Aliases | Description |
| --- | --- | --- |
| `react-ore` | `ore-react` | ORE mining rounds viewer (React + Vite) |
| `rust-ore` | `ore-rust` | ORE mining rounds client (Rust + Tokio) |
| `typescript-ore` | `ore-typescript`, `ts-ore`, `ore-ts` | ORE mining rounds client (TypeScript CLI) |

**Options:**

| Flag | Description |
| --- | --- |
| `--template <name>` | Skip interactive selection |
| `--offline` | Use cached templates only |
| `--force-refresh` | Clear template cache and re-download |
| `--skip-install` | Don’t run `npm install` automatically |

Templates are downloaded from GitHub releases and cached in `~/.arete/templates/`.

***

## Project Setup

[Section titled “Project Setup”](#project-setup)

### a4 init

[Section titled “a4 init”](#a4-init)

Initialize a new Arete project.

```bash
a4 init
```

Creates `arete.toml` with auto-discovered stacks from `.arete/*.stack.json`.

### a4 config validate

[Section titled “a4 config validate”](#a4-config-validate)

Validate your configuration.

```bash
a4 config validate
```

***

## Authentication

[Section titled “Authentication”](#authentication)

Arete is currently in closed beta. Contact us to receive an API key.

### a4 auth login

[Section titled “a4 auth login”](#a4-auth-login)

Save your API key to authenticate.
```bash
# Interactive — prompts for API key
a4 auth login

# Or pass directly
a4 auth login --key <api-key>
```

**Options:**

| Flag | Description |
| --- | --- |
| `--key, -k` | API key (prompts if not provided) |

### a4 auth logout

[Section titled “a4 auth logout”](#a4-auth-logout)

Remove stored credentials.

```bash
a4 auth logout
```

### a4 auth status

[Section titled “a4 auth status”](#a4-auth-status)

Check local authentication status.

```bash
a4 auth status
```

### a4 auth whoami

[Section titled “a4 auth whoami”](#a4-auth-whoami)

Verify authentication with server.

```bash
a4 auth whoami
```

**Credentials location:** `~/.arete/credentials.toml`

***

## Schema Discovery

[Section titled “Schema Discovery”](#schema-discovery)

### a4 explore

[Section titled “a4 explore”](#a4-explore)

Discover available stacks and explore their schemas. Works without authentication for public stacks.

```bash
# List all available stacks
a4 explore

# Show entities and views for a stack
a4 explore ore

# Show fields and types for a specific entity
a4 explore ore OreRound

# JSON output (for agents and scripts)
a4 explore --json
a4 explore ore --json
a4 explore ore OreRound --json
```

**Arguments:**

| Argument | Description |
| --- | --- |
| `[name]` | Stack name to explore |
| `[entity]` | Entity name to show field details |

**Output varies by specificity:**

| Command | Shows |
| --- | --- |
| `a4 explore` | All available stacks with entity names |
| `a4 explore <name>` | Stack entities, views, and field counts |
| `a4 explore <name> <entity>` | Entity fields with types, sections, and views |

Public stacks are visible without authentication. Log in with `a4 auth login` to also see global stacks and your own deployed stacks.
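The `--json` flag exists precisely so scripts and agents can consume the output. As a sketch, assuming a simplified output shape (the field names below are illustrative, not a documented schema), a script could filter the stack list like this:

```typescript
// Hypothetical, simplified shape of one stack entry from `a4 explore --json`.
interface ExploreStack {
  name: string;
  entities: string[];
  public: boolean;
}

// Pick out public stacks that expose a given entity.
function stacksWithEntity(stacks: ExploreStack[], entity: string): string[] {
  return stacks
    .filter((s) => s.public && s.entities.includes(entity))
    .map((s) => s.name);
}

const sample: ExploreStack[] = [
  { name: "ore", entities: ["OreRound"], public: true },
  { name: "token-tracker", entities: ["Transfer"], public: true },
];
console.log(stacksWithEntity(sample, "OreRound")); // returns ["ore"]
```

In practice you would pipe `a4 explore --json` into such a script rather than hard-code the sample data.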
***

## Deployment

[Section titled “Deployment”](#deployment)

### a4 up \[stack-name]

[Section titled “a4 up \[stack-name\]”](#a4-up-stack-name)

Deploy a stack: push, build, and deploy in one command.

```bash
# Deploy all stacks
a4 up

# Deploy specific stack
a4 up my-stack

# Deploy to branch
a4 up my-stack --branch staging
# Creates: my-stack-staging.stack.arete.run

# Preview deployment
a4 up my-stack --preview

# Preview what would be deployed (no actual deployment)
a4 up my-stack --dry-run
```

**Options:**

| Flag | Description |
| --- | --- |
| `--branch, -b <name>` | Deploy to named branch |
| `--preview` | Create preview deployment |
| `--dry-run` | Show what would be deployed without deploying |

### a4 status

[Section titled “a4 status”](#a4-status)

Show overview of all stacks, builds, and deployments.

```bash
a4 status
a4 status --json
```

***

## Stack Management

[Section titled “Stack Management”](#stack-management)

### a4 stack list

[Section titled “a4 stack list”](#a4-stack-list)

List all stacks with their deployment status.

```bash
a4 stack list
a4 stack list --json
```

**Output:**

```plaintext
STACK            STATUS   VERSION  URL
settlement-game  active   v3       wss://settlement-game.stack.arete.run
token-tracker    active   v1       wss://token-tracker.stack.arete.run
```

### a4 stack push \[stack-name]

[Section titled “a4 stack push \[stack-name\]”](#a4-stack-push-stack-name)

Push local stacks to remote.

```bash
# Push all stacks
a4 stack push

# Push specific stack
a4 stack push my-stack
```

### a4 stack show \<stack-name>

[Section titled “a4 stack show \<stack-name>”](#a4-stack-show-stack-name)

Show detailed stack information including deployment status and versions.
```bash
a4 stack show my-stack
a4 stack show my-stack --version 3
a4 stack show my-stack -v 3
```

**Options:**

| Flag | Description |
| --- | --- |
| `--version, -v <n>` | Show specific version details |

**Output includes:**

* Entity information
* Deployment status and URL
* Latest version details
* Recent builds

### a4 stack versions \<stack-name>

[Section titled “a4 stack versions \<stack-name>”](#a4-stack-versions-stack-name)

Show version history.

```bash
a4 stack versions my-stack
a4 stack versions my-stack --limit 10
a4 stack versions my-stack -l 10
```

**Options:**

| Flag | Description |
| --- | --- |
| `--limit, -l <n>` | Maximum number of versions (default: 20) |

### a4 stack delete \<stack-name>

[Section titled “a4 stack delete \<stack-name>”](#a4-stack-delete-stack-name)

Delete a stack from remote.

```bash
a4 stack delete my-stack
a4 stack delete my-stack --force  # Skip confirmation
a4 stack delete my-stack -f
```

**Options:**

| Flag | Description |
| --- | --- |
| `--force, -f` | Skip confirmation prompt |

### a4 stack rollback \<stack-name>

[Section titled “a4 stack rollback \<stack-name>”](#a4-stack-rollback-stack-name)

Rollback to a previous deployment.

```bash
# Rollback to previous version
a4 stack rollback my-stack

# Rollback to specific version
a4 stack rollback my-stack --to 2

# Rollback to specific build
a4 stack rollback my-stack --build 123

# Rollback branch deployment
a4 stack rollback my-stack --branch staging
```

**Options:**

| Flag | Description |
| --- | --- |
| `--to <version>` | Target version |
| `--build <id>` | Target build ID |
| `--branch <name>` | Branch to rollback (default: production) |
| `--rebuild` | Force full rebuild |
| `--no-wait` | Don’t watch progress |

### a4 stack stop \<stack-name>

[Section titled “a4 stack stop \<stack-name>”](#a4-stack-stop-stack-name)

Stop a deployment.
```bash
a4 stack stop my-stack
a4 stack stop my-stack --branch staging
a4 stack stop my-stack --force  # Skip confirmation
```

**Options:**

| Flag | Description |
| --- | --- |
| `--branch <name>` | Branch deployment to stop |
| `--force, -f` | Skip confirmation prompt |

***

## SDK Generation

[Section titled “SDK Generation”](#sdk-generation)

### a4 sdk list

[Section titled “a4 sdk list”](#a4-sdk-list)

List stacks available for SDK generation.

```bash
a4 sdk list
```

### a4 sdk create typescript \<stack-name>

[Section titled “a4 sdk create typescript \<stack-name>”](#a4-sdk-create-typescript-stack-name)

Generate TypeScript SDK.

```bash
a4 sdk create typescript my-stack
a4 sdk create typescript my-stack --output ./src/generated/
a4 sdk create typescript my-stack --package-name @myorg/my-sdk
a4 sdk create typescript my-stack --url wss://my-stack.stack.arete.run
```

**Options:**

| Flag | Description |
| --- | --- |
| `--output, -o <path>` | Output file path (overrides config) |
| `--package-name, -p <name>` | Package name for TypeScript |
| `--url <url>` | WebSocket URL for the stack |

### a4 sdk create rust \<stack-name>

[Section titled “a4 sdk create rust \<stack-name>”](#a4-sdk-create-rust-stack-name)

Generate Rust SDK crate.
```bash
a4 sdk create rust my-stack
a4 sdk create rust my-stack --output ./crates/
a4 sdk create rust my-stack --crate-name my-stack-sdk
a4 sdk create rust my-stack --module  # Generate as module instead of crate
a4 sdk create rust my-stack --url wss://my-stack.stack.arete.run
```

**Options:**

| Flag | Description |
| --- | --- |
| `--output, -o <path>` | Output directory path (overrides config) |
| `--crate-name <name>` | Custom crate name for generated Rust crate |
| `--module` | Generate as a module (mod.rs) instead of a standalone crate |
| `--url <url>` | WebSocket URL for the stack |

The `--module` flag generates the SDK as a Rust module (with `mod.rs`) that can be embedded directly into an existing crate, rather than creating a standalone crate with its own `Cargo.toml`. This is useful for monorepo setups or when you want to include generated code within your own crate.

***

## Configuration File

[Section titled “Configuration File”](#configuration-file)

**File:** `arete.toml`

```toml
[project]
name = "my-project"

# SDK generation settings
[sdk]
output_dir = "./generated"                          # Default output for both languages
typescript_output_dir = "./frontend/src/generated"  # Override for TypeScript only
rust_output_dir = "./crates/generated"              # Override for Rust only
typescript_package = "@myorg/my-sdk"                # Package name for TypeScript
rust_module_mode = false                            # Generate Rust SDKs as modules by default

# Build preferences
[build]
watch_by_default = true

# Stack definitions
# Auto-discovered from .arete/*.stack.json
# Define explicitly for custom naming or per-stack overrides:
[[stacks]]
name = "my-game"
stack = "SettlementGame"                            # Stack name or path to .stack.json
description = "Settlement game tracking"
typescript_output_file = "./src/generated/game.ts"  # Per-stack TypeScript output path
rust_output_crate = "./crates/game-stack"           # Per-stack Rust output path
rust_module = true                                  # Per-stack: generate as module instead of crate
```

### SDK Configuration Options

[Section titled “SDK Configuration Options”](#sdk-configuration-options)

| Option | Scope | Description |
| --- | --- | --- |
| `output_dir` | `[sdk]` | Default output directory for both languages |
| `typescript_output_dir` | `[sdk]` | Override output directory for TypeScript SDKs |
| `rust_output_dir` | `[sdk]` | Override output directory for Rust SDKs |
| `typescript_package` | `[sdk]` | Package name for generated TypeScript code |
| `rust_module_mode` | `[sdk]` | Generate Rust SDKs as modules by default |
| `typescript_output_file` | `[[stacks]]` | Per-stack TypeScript output file path |
| `rust_output_crate` | `[[stacks]]` | Per-stack Rust output crate/module directory |
| `rust_module` | `[[stacks]]` | Per-stack override for module vs crate generation |

For most projects, you only need:

```toml
[project]
name = "my-project"
```

The CLI auto-discovers stacks from `.arete/*.stack.json` files.
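The table implies a precedence for output paths: a per-stack setting wins over the language-specific `[sdk]` directory, which wins over the shared `output_dir`. A minimal sketch of that resolution order, assuming this reading of the table (this mirrors the config semantics, not the CLI's actual code):

```typescript
// Assumed precedence: per-stack > language-specific > shared default.
interface SdkConfig {
  output_dir?: string;
  typescript_output_dir?: string;
}
interface StackConfig {
  typescript_output_file?: string;
}

function resolveTsOutput(sdk: SdkConfig, stack: StackConfig): string {
  return (
    stack.typescript_output_file ??   // per-stack override
    sdk.typescript_output_dir ??      // language-specific [sdk] override
    sdk.output_dir ??                 // shared default
    "./generated"                     // assumed fallback
  );
}

const sdk: SdkConfig = {
  output_dir: "./generated",
  typescript_output_dir: "./frontend/src/generated",
};
console.log(resolveTsOutput(sdk, {})); // "./frontend/src/generated"
console.log(resolveTsOutput(sdk, { typescript_output_file: "./src/generated/game.ts" })); // "./src/generated/game.ts"
```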
*** ## Build Process [Section titled “Build Process”](#build-process) ### Status Flow [Section titled “Status Flow”](#status-flow) ```plaintext pending → uploading → queued → building → pushing → deploying → completed ↘ failed ``` ### Build Phases [Section titled “Build Phases”](#build-phases) | Phase | Description | | ----------------- | -------------------------- | | SUBMITTED | Queued for processing | | PROVISIONING | Starting build environment | | DOWNLOAD\_SOURCE | Preparing source | | INSTALL | Installing dependencies | | PRE\_BUILD | Preparing build | | BUILD | Compiling | | POST\_BUILD | Finalizing | | UPLOAD\_ARTIFACTS | Publishing image | | FINALIZING | Deploying to runtime | *** ## Typical Workflows [Section titled “Typical Workflows”](#typical-workflows) ### First-Time Setup [Section titled “First-Time Setup”](#first-time-setup) ```bash # Initialize project a4 init # Login with your API key a4 auth login # Validate configuration a4 config validate ``` ### Development Cycle [Section titled “Development Cycle”](#development-cycle) ```bash # Make changes to stack, rebuild Rust crate cargo build # Deploy a4 up my-stack # Check status a4 status ``` ### Branch Deploys [Section titled “Branch Deploys”](#branch-deploys) ```bash # Deploy feature branch a4 up my-stack --branch feature-x # URL: my-stack-feature-x.stack.arete.run # Check deployment a4 stack list # Clean up when done a4 stack stop my-stack --branch feature-x ``` ### Rollback [Section titled “Rollback”](#rollback) ```bash # Quick rollback to previous version a4 stack rollback my-stack # Rollback to specific version a4 stack rollback my-stack --to 2 ``` *** ## Telemetry [Section titled “Telemetry”](#telemetry) Arete collects anonymous usage data to improve the CLI. No personal information or project details are sent. ### a4 telemetry status [Section titled “a4 telemetry status”](#a4-telemetry-status) Show current telemetry status. 
```bash
a4 telemetry status
```

### a4 telemetry enable

[Section titled “a4 telemetry enable”](#a4-telemetry-enable)

Enable telemetry collection.

```bash
a4 telemetry enable
```

### a4 telemetry disable

[Section titled “a4 telemetry disable”](#a4-telemetry-disable)

Disable telemetry collection.

```bash
a4 telemetry disable
```

***

## Environment Variables

[Section titled “Environment Variables”](#environment-variables)

| Variable | Description |
| --- | --- |
| `ARETE_API_URL` | Override API endpoint |
| `DO_NOT_TRACK=1` | Disable telemetry (standard) |
| `ARETE_TELEMETRY_DISABLED=1` | Disable telemetry (Arete-specific) |

***

## Error Reference

[Section titled “Error Reference”](#error-reference)

| Error | Solution |
| --- | --- |
| `Not authenticated` | Run `a4 auth login` |
| `Stack not found` | Check `a4 stack list` for available stacks |
| `Stack file not found` | Run `cargo build` to generate stack spec |
| `Build failed` | Try the deployment again with `a4 up` |
| `Config invalid` | Run `a4 config validate` |

***

## WebSocket URL Patterns

[Section titled “WebSocket URL Patterns”](#websocket-url-patterns)

After successful deployment:

| Type | Pattern |
| --- | --- |
| Production | `wss://{stack-name}.stack.arete.run` |
| Branch | `wss://{stack-name}-{branch}.stack.arete.run` |
| Local | `ws://localhost:8080` |

Get the URL with:

```bash
a4 stack show <stack-name>
```

# a4 idl — IDL Explorer

Explore and analyze Solana IDL (Interface Definition Language) files directly from the command line. The `a4 idl` suite helps you understand a program’s structure, account layouts, and relationships before you start building your stack.
***

## Global Behavior

[Section titled “Global Behavior”](#global-behavior)

These rules apply to all `a4 idl` subcommands:

* **Path argument**: Every subcommand takes `<path>` as its first positional argument (the path to the IDL JSON file).
* **JSON output**: Most commands support the `--json` flag for machine-readable output.
* **Fuzzy matching**: Lookups for instructions, accounts, and types are case-insensitive. If you make a typo, the CLI suggests the closest match.
* **Error handling**: File-not-found or invalid names exit with code 1 and a clear message.

***

## Quick Reference

[Section titled “Quick Reference”](#quick-reference)

| Command | Description |
| --- | --- |
| [`summary`](#a4-idl-summary) | Quick overview of the IDL structure |
| [`instructions`](#a4-idl-instructions) | List all instructions |
| [`instruction`](#a4-idl-instruction) | Detail view of a specific instruction |
| [`accounts`](#a4-idl-accounts) | List all account types |
| [`account`](#a4-idl-account) | Detail view of a specific account |
| [`types`](#a4-idl-types) | List all custom types |
| [`type`](#a4-idl-type) | Detail view of a specific custom type |
| [`errors`](#a4-idl-errors) | List all program errors |
| [`events`](#a4-idl-events) | List all program events |
| [`constants`](#a4-idl-constants) | List all defined constants |
| [`search`](#a4-idl-search) | Fuzzy search across the entire IDL |
| [`discriminator`](#a4-idl-discriminator) | Compute Anchor discriminators |
| [`relations`](#a4-idl-relations) | Categorize accounts by their role |
| [`account-usage`](#a4-idl-account-usage) | Find all instructions using an account |
| [`links`](#a4-idl-links) | Find instructions linking two accounts |
| [`pda-graph`](#a4-idl-pda-graph) | Visualize PDA derivation paths |
| [`type-graph`](#a4-idl-type-graph) | Visualize field-to-account relationships |
| [`connect`](#a4-idl-connect) | Analyze how to connect new accounts |
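One entry in the table is worth demystifying: `discriminator` computes the standard Anchor discriminator, which is the first 8 bytes of the SHA-256 hash of `"{namespace}:{name}"` (namespace `global` for instructions, `account` for accounts). A minimal TypeScript equivalent of that derivation:

```typescript
import { createHash } from "node:crypto";

// Standard Anchor discriminator: first 8 bytes of sha256("{namespace}:{name}").
function anchorDiscriminator(namespace: string, name: string): number[] {
  const digest = createHash("sha256")
    .update(`${namespace}:${name}`)
    .digest();
  return Array.from(digest.subarray(0, 8));
}

console.log(anchorDiscriminator("global", "buy"));
// [102, 6, 61, 18, 1, 218, 235, 234]  (hex: 66 06 3d 12 01 da eb ea)
```

This is how on-chain instruction data and account data are tagged, and why `a4 idl discriminator` can compute the value offline from just the name.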
*** ## Data Commands [Section titled “Data Commands”](#data-commands) ### a4 idl summary [Section titled “a4 idl summary”](#a4-idl-summary) Show a high-level overview of the program. ```bash # Get a quick summary a4 idl summary meteora_dlmm.json # Output: Name: meteora_dlmm, Format: modern, Instructions: 74, Constants: 30 ``` Shows program name, IDL format (modern or legacy), program address, version, and counts for all sections. ### a4 idl instructions [Section titled “a4 idl instructions”](#a4-idl-instructions) List all instructions defined in the IDL. ```bash # List all instructions a4 idl instructions ore.json # Count instructions via JSON a4 idl instructions ore.json --json | jq length # Output: 19 ``` **Options:** | Flag | Description | | -------- | --------------------------------- | | `--json` | Output as JSON (machine-readable) | ### a4 idl instruction [Section titled “a4 idl instruction”](#a4-idl-instruction) Show detailed information about a specific instruction, including its discriminator, accounts, and arguments. ```bash # Detail view for an instruction a4 idl instruction ore.json deposit # Fuzzy matching on typos a4 idl instruction ore.json depositt # Error: instruction 'depositt' not found, did you mean: deposit? ``` **Options:** | Flag | Description | | -------- | -------------- | | `--json` | Output as JSON | ### a4 idl accounts [Section titled “a4 idl accounts”](#a4-idl-accounts) List all account structures defined in the IDL. ```bash # List all accounts a4 idl accounts meteora_dlmm.json ``` **Options:** | Flag | Description | | -------- | -------------- | | `--json` | Output as JSON | ### a4 idl account [Section titled “a4 idl account”](#a4-idl-account) Show the detailed field layout of a specific account type. 
```bash # View account fields and types a4 idl account meteora_dlmm.json lb_pair ``` **Options:** | Flag | Description | | -------- | -------------- | | `--json` | Output as JSON | ### a4 idl types [Section titled “a4 idl types”](#a4-idl-types) List all custom types and enums. ```bash # List all custom types a4 idl types ore.json ``` **Options:** | Flag | Description | | -------- | -------------- | | `--json` | Output as JSON | ### a4 idl type [Section titled “a4 idl type”](#a4-idl-type) Show the full definition of a custom type or enum. ```bash # View type details a4 idl type ore.json Config ``` **Options:** | Flag | Description | | -------- | -------------- | | `--json` | Output as JSON | ### a4 idl errors [Section titled “a4 idl errors”](#a4-idl-errors) List all custom program errors with their codes and messages. ```bash # List all errors a4 idl errors meteora_dlmm.json ``` **Options:** | Flag | Description | | -------- | -------------- | | `--json` | Output as JSON | ### a4 idl events [Section titled “a4 idl events”](#a4-idl-events) List all events emitted by the program. ```bash # List all events a4 idl events meteora_dlmm.json ``` **Options:** | Flag | Description | | -------- | -------------- | | `--json` | Output as JSON | ### a4 idl constants [Section titled “a4 idl constants”](#a4-idl-constants) List all constants defined in the program. ```bash # List all constants a4 idl constants ore.json ``` **Options:** | Flag | Description | | -------- | -------------- | | `--json` | Output as JSON | ### a4 idl search [Section titled “a4 idl search”](#a4-idl-search) Perform a fuzzy search across instructions, accounts, types, errors, events, and constants. 
```bash # Search for anything related to "swap" a4 idl search meteora_dlmm.json swap # Use JSON to see where matches were found a4 idl search meteora_dlmm.json swap --json | jq '.[].section' | sort -u # Output: "error", "event", "instruction", "type" ``` **Options:** | Flag | Description | | -------- | -------------- | | `--json` | Output as JSON | ### a4 idl discriminator [Section titled “a4 idl discriminator”](#a4-idl-discriminator) Compute the Anchor discriminator for an instruction or account. ```bash # Compute instruction discriminator (global namespace) a4 idl discriminator pump.json buy # Output: Name: buy, Namespace: global, Discriminator: [66, 06, 3d, 12, 01, da, eb, ea] # Compute account discriminator (account namespace) a4 idl discriminator pump.json LastIdlBlock ``` **Options:** | Flag | Description | | -------- | -------------- | | `--json` | Output as JSON | *** ## Relationship Commands [Section titled “Relationship Commands”](#relationship-commands) ### a4 idl relations [Section titled “a4 idl relations”](#a4-idl-relations) Analyze and categorize all accounts in the IDL. ```bash # See how accounts are categorized a4 idl relations meteora_dlmm.json # Find core entity accounts a4 idl relations meteora_dlmm.json --json | jq '.[] | select(.category == "Entity") | .account_name' # Output includes: "lb_pair" ``` Accounts are classified into: * **Entity**: Core data accounts that appear across many instructions. * **Infrastructure**: System programs or token programs. * **Role**: Authorities, signers, or administrative accounts. * **Other**: Miscellaneous accounts. **Options:** | Flag | Description | | -------- | -------------- | | `--json` | Output as JSON | ### a4 idl account-usage [Section titled “a4 idl account-usage”](#a4-idl-account-usage) Find every instruction that uses a specific account. 
```bash # See where 'lb_pair' is used a4 idl account-usage meteora_dlmm.json lb_pair # Count usages a4 idl account-usage meteora_dlmm.json lb_pair --json | jq length # Output: 59 ``` Instructions are grouped by the role the account plays: Writable, Signer, or Readonly. **Options:** | Flag | Description | | -------- | -------------- | | `--json` | Output as JSON | ### a4 idl links [Section titled “a4 idl links”](#a4-idl-links) Find instructions that involve a specific pair of accounts. This is useful for discovering how two entities interact. ```bash # Find instructions linking 'round' and 'entropyVar' a4 idl links ore.json round entropyVar # Count shared instructions a4 idl links ore.json round entropyVar --json | jq length # Output: 2 ``` **Options:** | Flag | Description | | -------- | -------------- | | `--json` | Output as JSON | ### a4 idl pda-graph [Section titled “a4 idl pda-graph”](#a4-idl-pda-graph) Extract and visualize the PDA (Program Derived Address) derivation graph. ```bash # Extract PDA graph a4 idl pda-graph idl.json ``` Shows seeds (constants, accounts, or arguments) used to derive each PDA account. **Options:** | Flag | Description | | -------- | -------------- | | `--json` | Output as JSON | ### a4 idl type-graph [Section titled “a4 idl type-graph”](#a4-idl-type-graph) Analyze field types to find implicit references to account types. ```bash # Extract type reference graph a4 idl type-graph idl.json ``` Useful for identifying when a field named `lb_pair` (type `Pubkey`) refers to an account of type `LbPair`. **Options:** | Flag | Description | | -------- | -------------- | | `--json` | Output as JSON | *** ## Connection Command [Section titled “Connection Command”](#connection-command) ### a4 idl connect [Section titled “a4 idl connect”](#a4-idl-connect) Analyze how a new account can be connected to existing accounts in the program. 
```bash # Analyze connection between reward_vault and lb_pair a4 idl connect meteora_dlmm.json reward_vault --existing lb_pair # With Arete-specific suggestions a4 idl connect meteora_dlmm.json reward_vault --existing lb_pair --suggest-a4 # Shows: register_from suggestions # Partial success handling a4 idl connect ore.json entropyVar --existing round,bogus_account # Warning: account 'bogus_account' not found in IDL, skipping # (then shows connections to round) ``` **Options:** | Flag | Description | | -------------------- | ----------------------------------------------------------------- | | `--existing <names>` | Comma-separated list of existing account names | | `--json` | Output as JSON | | `--suggest-a4` | Show Arete integration suggestions (`register_from`, `aggregate`) | Arete Suggestions The `--suggest-a4` flag is specifically for Arete stack builders. It suggests the most appropriate projection patterns based on the detected relationship. # Arete > Build Solana apps with live blockchain data **Programmable data streams for Solana.** No indexers. No polling. No infrastructure to manage. Arete streams real-time blockchain data directly to your app. Define what data you need and Arete transforms on-chain events into typed, live feeds — consumed via React hooks, TypeScript streams, or Rust clients. *** ## Choose Your Path [Section titled “Choose Your Path”](#choose-your-path) I'm New to Coding **Start here if you’ve never built an app.** We’ll walk you through setting up AI tools that write code for you. No programming experience required. [Get started →](/agent-skills/setup-tools/) I Use AI Coding Tools **Already have Cursor, Claude Code, or similar?** Paste one prompt and your AI handles the rest. Works with 30+ AI coding assistants. [Build with AI →](/agent-skills/overview/) I'm a Developer **Know React, TypeScript, or Rust?** Jump straight to the SDK and start streaming data in minutes. Full API reference available.
[Quickstart →](/using-stacks/quickstart/) *** ## What is Arete? [Section titled “What is Arete?”](#what-is-arete) Arete connects your app to live Solana blockchain data. Instead of polling RPCs or building custom indexers, you define what data you want and Arete streams it to you as it happens on-chain. 1 ### Solana On-chain data → 2 ### Arete Transforms + streams → 3 ### Your App Live feed **Think of it like a live database query against the blockchain** — you subscribe once and get updates pushed to you automatically. *** ## See It In Action [Section titled “See It In Action”](#see-it-in-action) Connect to the public ORE mining stack — a live feed of blockchain mining activity. No account required. * React

```tsx
import { AreteProvider, useArete } from 'arete-react';
import { ORE_STREAM_STACK } from 'arete-stacks/ore';

function App() {
  return (
    <AreteProvider>
      <LatestRounds />
    </AreteProvider>
  );
}

function LatestRounds() {
  const { views, isConnected } = useArete(ORE_STREAM_STACK);
  const { data: rounds, isLoading } = views.OreRound.latest.use({ take: 5 });

  if (isLoading) return <div>Connecting...</div>;

  return (
    <div>
      <div>{isConnected ? "🟢 Live" : "Connecting..."}</div>
      <ul>
        {rounds?.map((round) => (
          <li key={round.id?.round_id}>
            Round #{round.id?.round_id} — Motherlode: {round.state?.motherlode}
          </li>
        ))}
      </ul>
    </div>
  );
}
```

* TypeScript

```typescript
import { Arete } from 'arete-typescript';
import { ORE_STREAM_STACK } from 'arete-stacks/ore';

// Connect using the stack (URL is embedded in the stack definition)
const a4 = await Arete.connect(ORE_STREAM_STACK);

for await (const update of a4.views.OreRound.latest.watch({ take: 1 })) {
  if (update.type === 'upsert') {
    console.log(`Round #${update.data.id?.round_id}`);
    console.log(`Motherlode: ${update.data.state?.motherlode}`);
  }
}
```

* Rust

```rust
use a4_sdk::prelude::*;
use a4_stacks::ore::{OreStack, OreRound};

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    let a4 = Arete::<OreStack>::connect().await?;
    let mut stream = a4.views.ore_round.latest().listen().take(1);

    while let Some(round) = stream.next().await {
        println!("Round #{:?}", round.id.round_id);
        println!("Motherlode: {:?}", round.state.motherlode);
    }
    Ok(())
}
```

Don’t want to write code? Paste the prompt below into your AI coding assistant — it handles everything. *** ## How It Works [Section titled “How It Works”](#how-it-works) Arete handles the full pipeline from chain to client: 1 ### Solana On-chain data → 2 ### Arete Transform + stream → 3 ### Your App Live feed 1. **Define** the data shape you need using a declarative Rust DSL 2. **Deploy** your definition — Arete manages the infrastructure 3.
**Stream** live updates to any client via WebSocket *** ## Capabilities [Section titled “Capabilities”](#capabilities) | Feature | Description | | ------------------- | -------------------------------------------------------------------- | | **Account state** | Map fields directly from on-chain accounts | | **Instructions** | Extract arguments and accounts from program instructions | | **Aggregations** | Compute Sum, Count, Max, Min, UniqueCount across events | | **Computed fields** | Derive values from other fields | | **PDA resolution** | Automatically resolve related accounts | | **Live streaming** | Updates pushed via WebSocket as they happen on-chain | | **Type-safe SDKs** | Generated TypeScript and Rust SDKs — share with your team or publish | *** ## For Developers [Section titled “For Developers”](#for-developers) Already comfortable with code? Here are the fastest paths: Quickstart **Scaffold a new app** — The CLI creates a complete working project in under 2 minutes. Choose React, TypeScript, or Rust templates. [Start here →](/using-stacks/quickstart/) Connect to a Stack **Add to existing project** — Install the SDK and start streaming with a few lines of code. Works with any React, TypeScript, or Rust project. [Connect now →](/using-stacks/connect/) *** ## Build with AI [Section titled “Build with AI”](#build-with-ai) Paste this into any AI coding assistant (Cursor, Claude Code, Windsurf, or any agent that can read URLs) and it will set up Arete and build an ORE mining dashboard for you: Read and follow the instructions to set up Arete in this project. Then build a React dashboard that shows live ORE mining round data with Tailwind CSS dark theme. Your AI will install the CLI, learn the SDK patterns, connect to live Solana data, and build the app — all from that one prompt. New to Coding? **Never used AI coding tools?** We’ll walk you through installing Cursor or similar from scratch. 
[Set up your tools →](/agent-skills/setup-tools/) Prompt Cookbook **Browse ready-made prompts** for dashboards, analytics, and custom data feeds. [View prompts →](/agent-skills/prompts/) *** ## Build Custom Streams [Section titled “Build Custom Streams”](#build-custom-streams) Want to create your own data feeds? Build stacks that transform on-chain events into structured, streaming data. Your First Stack **End-to-end tutorial** — Build, deploy, and connect to your own custom stack. Define entities, write transformations, deploy to Arete Cloud. [Build now →](/building-stacks/your-first-stack/) How It Works **Architecture & concepts** — Understand the full Arete system. Learn about entities, mappings, aggregations, and deployment. [Learn more →](/using-stacks/how-it-works/) # Rust SDK The Arete Rust SDK is an asynchronous client for consuming streaming data from Arete. It is designed for backend services, trading bots, and CLI tools that require type-safe access to Solana state projections. *** ## Installation [Section titled “Installation”](#installation) Add to your `Cargo.toml`: ```toml [dependencies] a4-sdk = "0.1.1" a4-stacks = { version = "0.1.1", optional = true } tokio = { version = "1", features = ["full"] } anyhow = "1" futures = "0.3" ``` The `a4-stacks` crate provides pre-built stack definitions for popular Solana protocols (optional but recommended). ### TLS Options [Section titled “TLS Options”](#tls-options) By default, the SDK uses `rustls` for TLS. You can switch to native TLS: ```toml [dependencies] a4-sdk = { version = "0.1.1", default-features = false, features = ["native-tls"] } ``` ### Tokio Runtime [Section titled “Tokio Runtime”](#tokio-runtime) The SDK requires the [Tokio](https://tokio.rs/) runtime. Ensure you have it enabled in your project (specifically the `rt-multi-thread`, `macros`, and `time` features). 
*** ## Quick Start [Section titled “Quick Start”](#quick-start) ### Connect and Stream [Section titled “Connect and Stream”](#connect-and-stream)

```rust
use a4_sdk::prelude::*;
use a4_stacks::ore::{OreStack, OreRound};

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    // Connect to the ORE stack (URL is defined in OreStack)
    let a4 = Arete::<OreStack>::connect().await?;
    println!("Connected! Streaming ORE rounds...\n");

    // Access views directly
    let mut stream = a4.views.ore_round.latest().listen();

    // Stream updates
    while let Some(round) = stream.next().await {
        println!("Round #{:?}", round.id.round_id);
        println!("  Motherlode: {:?}", round.state.motherlode);
        println!("  Total difficulty: {:?}\n", round.state.total_difficulty);
    }
    Ok(())
}
```

Run with:

```bash
cargo run
```

*** ## Project Setup [Section titled “Project Setup”](#project-setup) ### 1. Create a New Project [Section titled “1. Create a New Project”](#1-create-a-new-project)

```bash
cargo new my-arete-app
cd my-arete-app
```

### 2. Add Dependencies [Section titled “2. Add Dependencies”](#2-add-dependencies)

```toml
[dependencies]
a4-sdk = "0.1.1"
a4-stacks = "0.1.1"
tokio = { version = "1", features = ["rt-multi-thread", "macros"] }
anyhow = "1"
futures = "0.3"
```

### 3. Basic Structure [Section titled “3. Basic Structure”](#3-basic-structure) src/main.rs

```rust
use a4_sdk::prelude::*;
use a4_stacks::ore::{OreStack, OreRound};

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    let a4 = Arete::<OreStack>::connect().await?;
    let mut stream = a4.views.ore_round.latest().listen();

    while let Some(round) = stream.next().await {
        println!("Round: {:?}", round.id.round_id);
    }
    Ok(())
}
```

*** ## Connection Management [Section titled “Connection Management”](#connection-management) ### Basic Connection [Section titled “Basic Connection”](#basic-connection) Each stack defines its own URL, so connection is simple:

```rust
let a4 = Arete::<OreStack>::connect().await?;
```

### With Custom URL [Section titled “With Custom URL”](#with-custom-url) Override the default URL if needed:

```rust
let a4 = Arete::<OreStack>::connect_url("wss://custom.endpoint.com").await?;
```

### Builder Pattern [Section titled “Builder Pattern”](#builder-pattern) Configure the client with custom reconnection logic and intervals:

```rust
use std::time::Duration;

let a4 = Arete::<OreStack>::builder()
    .url("wss://custom.endpoint.com") // Optional: override default
    .auto_reconnect(true)
    .max_reconnect_attempts(10)
    .reconnect_intervals(vec![
        Duration::from_secs(1),
        Duration::from_secs(2),
        Duration::from_secs(5),
    ])
    .ping_interval(Duration::from_secs(30))
    .initial_data_timeout(Duration::from_secs(5))
    .max_entries_per_view(5000)
    .connect()
    .await?;
```

### Connection State [Section titled “Connection State”](#connection-state)

```rust
// Check current state
match a4.connection_state().await {
    ConnectionState::Connected => println!("Connected!"),
    ConnectionState::Connecting => println!("Connecting..."),
    ConnectionState::Disconnected => println!("Disconnected"),
    ConnectionState::Reconnecting { attempt } => println!("Reconnecting (attempt {})...", attempt),
    ConnectionState::Error => println!("Error"),
}
```

### Disconnect [Section titled “Disconnect”](#disconnect)

```rust
// Graceful disconnect
a4.disconnect().await;
```

*** ## Store Size Limits [Section titled “Store Size Limits”](#store-size-limits) By default, each view is limited to 10,000 entries to prevent memory issues on long-running clients. When the limit is reached, oldest entries are evicted (LRU).

```rust
// Custom limit
let a4 = Arete::<OreStack>::builder()
    .max_entries_per_view(5000)
    .connect()
    .await?;

// Unlimited (not recommended for long-running clients)
let a4 = Arete::<OreStack>::builder()
    .unlimited_entries()
    .connect()
    .await?;
```

*** ## Views [Section titled “Views”](#views) Views provide typed access to your stack’s data. Access them directly through `a4.views`:

```rust
// Direct field access
let rounds = a4.views.ore_round.latest().get().await;
let all_rounds = a4.views.ore_round.list().get().await;
let specific = a4.views.ore_round.state().get("round_key").await;
```

### Pre-Built Stacks [Section titled “Pre-Built Stacks”](#pre-built-stacks) The `a4-stacks` crate includes view definitions for popular protocols:

```rust
use a4_stacks::ore::OreStack;

let ore = Arete::<OreStack>::connect().await?;
```

### Building Your Own Stack [Section titled “Building Your Own Stack”](#building-your-own-stack) To create views for your own Solana programs, you’ll need to [build a stack](/building-stacks/workflow/). The CLI then generates typed view accessors for you automatically.
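The entry cap described under “Store Size Limits” above amounts to an insertion-ordered map that evicts its oldest entry once the cap is exceeded. A minimal TypeScript sketch of that eviction behavior (the SDK's real store is internal to the client; `BoundedStore` is a hypothetical name, and this simplified version evicts by write order rather than tracking reads):

```typescript
// Minimal sketch of a size-bounded view store. A JS Map preserves
// insertion order, so its first key is the oldest entry.
class BoundedStore<T> {
  private entries = new Map<string, T>();
  constructor(private maxEntries: number) {}

  upsert(key: string, value: T): void {
    // Re-inserting moves the key to the back (most recently written).
    this.entries.delete(key);
    this.entries.set(key, value);
    if (this.entries.size > this.maxEntries) {
      // Evict the oldest entry once the cap is exceeded.
      const oldest = this.entries.keys().next().value as string;
      this.entries.delete(oldest);
    }
  }

  get(key: string): T | undefined {
    return this.entries.get(key);
  }

  get size(): number {
    return this.entries.size;
  }
}

const store = new BoundedStore<number>(2);
store.upsert("a", 1);
store.upsert("b", 2);
store.upsert("c", 3); // cap is 2, so "a" is evicted
console.log(store.get("a"), store.size); // → undefined 2
```

The same trade-off applies as in the SDK: a bounded store caps memory on long-running clients, at the cost of dropping the oldest entries.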
*** ## Streaming Data [Section titled “Streaming Data”](#streaming-data) The SDK provides three streaming methods with different levels of detail: | Method | Returns | Description | | --------------- | ------------------ | ------------------------------------------------------- | | `.listen()` | `T` | Merged entity directly (simplest - filters out deletes) | | `.watch()` | `Update<T>` | Upsert/Patch/Delete events | | `.watch_rich()` | `RichUpdate<T>` | Before/after diffs for tracking changes | ### Simple Streaming with `.listen()` [Section titled “Simple Streaming with .listen()”](#simple-streaming-with-listen) The simplest API - emits merged entities directly, filtering out deletes:

```rust
let mut stream = a4.views.ore_round.latest().listen();

while let Some(round) = stream.next().await {
    println!("Round: {:?}", round.id.round_id);
}
```

### Watch with Event Types [Section titled “Watch with Event Types”](#watch-with-event-types) Stream all update types (upsert, patch, delete):

```rust
let mut stream = a4.views.ore_round.latest().watch();

while let Some(update) = stream.next().await {
    match update {
        Update::Upsert { data, .. } => {
            println!("New/Updated round: {:?}", data.id.round_id);
        }
        Update::Patch { data, .. } => {
            println!("Patched round: {:?}", data.id.round_id);
        }
        Update::Delete { key } => {
            println!("Removed round: {:?}", key);
        }
    }
}
```

### Rich Streaming with Before/After [Section titled “Rich Streaming with Before/After”](#rich-streaming-with-beforeafter) Track changes over time with before/after diffs:

```rust
let mut stream = a4.views.ore_round.latest().watch_rich();

while let Some(update) = stream.next().await {
    match update {
        RichUpdate::Created { data, .. } => {
            println!("Created: {:?}", data.id.round_id);
        }
        RichUpdate::Updated { before, after, .. } => {
            println!("Updated from {:?} to {:?}", before.state.motherlode, after.state.motherlode);
        }
        RichUpdate::Deleted { key, .. } => {
            println!("Deleted: {:?}", key);
        }
    }
}
```

### Watch a Specific Entity [Section titled “Watch a Specific Entity”](#watch-a-specific-entity) Stream updates for a specific key using state views:

```rust
let specific_round = "some-round-key";
let mut stream = a4.views.ore_round.state().watch(specific_round);

while let Some(update) = stream.next().await {
    println!("Round updated: {:?}", update);
}
```

### Chainable Options [Section titled “Chainable Options”](#chainable-options) All stream builders support server-side options:

```rust
// Limit to top 10 items
let mut stream = a4.views.ore_round.list().watch().take(10);

// Skip first 5, take next 10
let mut stream = a4.views.ore_round.list().watch().skip(5).take(10);

// Filter by field
let mut stream = a4.views.ore_round.list().watch().filter("status", "active");
```

### Client-Side Filtering [Section titled “Client-Side Filtering”](#client-side-filtering) Use standard stream adapters for client-side filtering:

```rust
use futures::StreamExt;

let mut stream = a4.views.ore_round.latest().watch().filter(|update| {
    futures::future::ready(matches!(update, Update::Upsert { .. }))
});

// Only receives upsert events
while let Some(update) = stream.next().await {
    println!("Upsert: {:?}", update);
}
```

### Lazy Streams [Section titled “Lazy Streams”](#lazy-streams) Streams are **lazy** - calling `watch()` returns immediately without subscribing. The subscription happens automatically on first poll. This enables ergonomic method chaining:

```rust
use std::collections::HashSet;

let watchlist: HashSet<String> = /* tokens to watch */;

let mut price_alerts = a4.views.ore_round.list()
    .watch_rich()
    .filter(move |u| watchlist.contains(u.key()))
    .filter_map(|update| match update {
        RichUpdate::Updated { before, after, .. } => {
            let prev = before.trading.last_trade_price.flatten().unwrap_or(0.0);
            let curr = after.trading.last_trade_price.flatten().unwrap_or(0.0);
            if prev > 0.0 {
                let pct = (curr - prev) / prev * 100.0;
                if pct.abs() > 0.1 {
                    return Some((after.info.name.clone(), pct));
                }
            }
            None
        }
        _ => None,
    });

while let Some((name, pct)) = price_alerts.next().await {
    println!("[PRICE] {:?} changed by {:.2}%", name, pct);
}
```

*** ## One-Shot Queries [Section titled “One-Shot Queries”](#one-shot-queries) Fetch current state without streaming:

```rust
// Get all items from a view
let rounds: Vec<OreRound> = a4.views.ore_round.latest().get().await;
println!("Found {} rounds", rounds.len());

// Get a specific entity by key
let round: Option<OreRound> = a4.views.ore_round.state().get("round-key").await;
if let Some(r) = round {
    println!("Round: {:?}", r.id.round_id);
}
```

### Synchronous Cache Access [Section titled “Synchronous Cache Access”](#synchronous-cache-access) For hot paths where you can’t await, use sync methods to read from cache:

```rust
// Synchronous - returns cached data immediately
let cached_rounds = a4.views.ore_round.latest().get_sync();
let cached_round = a4.views.ore_round.state().get_sync("round-key");
```

Note: Sync methods return empty/None if data hasn’t been loaded yet.
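To make the threshold math in the price-alert example above concrete in isolation: a small TypeScript sketch of the same percent-change check, with illustrative field names (`before`/`after` stand in for the previous and current prices):

```typescript
// Sketch of the price-alert threshold from the lazy-streams example:
// emit an alert when the absolute percent change exceeds 0.1%.
interface PriceUpdate {
  name: string;
  before?: number;
  after?: number;
}

function priceAlert(u: PriceUpdate, thresholdPct = 0.1): [string, number] | null {
  const prev = u.before ?? 0;
  const curr = u.after ?? 0;
  if (prev > 0) {
    const pct = ((curr - prev) / prev) * 100;
    if (Math.abs(pct) > thresholdPct) return [u.name, pct];
  }
  return null; // no previous price, or change below threshold
}

console.log(priceAlert({ name: "ORE", before: 2.0, after: 2.5 }));
// → [ 'ORE', 25 ]
```

Guarding on `prev > 0` mirrors the Rust example: with no prior price there is no meaningful percent change, so the update is skipped rather than alerting on a division by zero.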
*** ## Core Methods Reference [Section titled “Core Methods Reference”](#core-methods-reference) ### ViewHandle Methods (list/derived views) [Section titled “ViewHandle Methods (list/derived views)”](#viewhandle-methods-listderived-views) | Method | Returns | Description | | ---------------------- | ------------------------- | ----------------------------------- | | `.get().await` | `Vec<T>` | Get all items | | `.get_sync()` | `Vec<T>` | Synchronous cache read | | `.listen()` | `Stream<T>` | Stream merged entities (no deletes) | | `.watch()` | `Stream<Update<T>>` | Stream all update types | | `.watch_rich()` | `Stream<RichUpdate<T>>` | Stream with before/after diffs | | `.watch_keys(&[keys])` | `Stream<Update<T>>` | Stream updates for specific keys | ### StateView Methods (keyed access) [Section titled “StateView Methods (keyed access)”](#stateview-methods-keyed-access) | Method | Returns | Description | | ------------------ | ------------------------- | --------------------------- | | `.get(key).await` | `Option<T>` | Get entity by key | | `.get_sync(key)` | `Option<T>` | Synchronous cache read | | `.listen(key)` | `Stream<T>` | Stream merged entity values | | `.watch(key)` | `Stream<Update<T>>` | Stream updates for key | | `.watch_rich(key)` | `Stream<RichUpdate<T>>` | Stream with diffs for key | ### Stream Builder Options [Section titled “Stream Builder Options”](#stream-builder-options) | Method | Description | | --------------------- | ---------------------------- | | `.take(n)` | Server-side limit to N items | | `.skip(n)` | Server-side offset | | `.filter(key, value)` | Server-side filter | *** ## Update Types [Section titled “Update Types”](#update-types) When streaming with `watch()`, you receive `Update<T>` variants:

```rust
pub enum Update<T> {
    Upsert { key: String, data: T }, // Full entity update
    Patch { key: String, data: T },  // Partial update (merged)
    Delete { key: String },          // Entity removed
}
```

Helper methods: `key()`, `data()`, `is_delete()`, `has_data()`, `into_data()`, `into_key()`, `map(f)` ### Rich Updates (Before/After Diffs) [Section titled “Rich Updates (Before/After Diffs)”](#rich-updates-beforeafter-diffs) For tracking changes over time, use `watch_rich()`:

```rust
pub enum RichUpdate<T> {
    Created { key: String, data: T },
    Updated { key: String, before: T, after: T, patch: Option<serde_json::Value> },
    Deleted { key: String, last_known: Option<T> },
}
```

The `Updated` variant includes `patch` - the raw JSON of changed fields, useful for checking what specifically changed:

```rust
if update.has_patch_field("trading") {
    // The trading field was modified
}
```

*** ## Understanding `Option<Option<T>>` Fields [Section titled “Understanding Option\<Option\<T\>\> Fields”](#understanding-optionoptiont-fields) Generated entity types often have fields typed as `Option<Option<T>>`. This represents the **patch semantics** of Arete updates: | Value | Meaning | | ------------------- | ----------------------------------------------------- | | `None` | Field was **not included** in this update (no change) | | `Some(None)` | Field was **explicitly set to null** | | `Some(Some(value))` | Field has a **concrete value** | This distinction matters for partial updates (patches). When the server sends a patch, only changed fields are included. An absent field means “keep the previous value”, while an explicit `null` means “clear this field”.
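In TypeScript terms, the same three states map onto an absent key (keep the previous value), an explicit `null` (clear the field), and a concrete value (overwrite). A minimal sketch of how such a patch merges, with hypothetical field names:

```typescript
// Sketch of Arete's patch-merge semantics: absent key = keep,
// explicit null = clear, concrete value = overwrite.
type Patch<T> = { [K in keyof T]?: T[K] | null };

function applyPatch<T extends object>(prev: T, patch: Patch<T>): T {
  const next = { ...prev };
  for (const key of Object.keys(patch) as (keyof T)[]) {
    const v = patch[key];
    if (v === undefined) continue; // absent: keep previous value
    // null clears the field; any other value overwrites it
    (next as Record<keyof T, unknown>)[key] = v === null ? undefined : v;
  }
  return next;
}

const prev = { motherlode: 5, difficulty: 12 };
const merged = applyPatch(prev, { motherlode: null }); // difficulty untouched
console.log(merged); // → { motherlode: undefined, difficulty: 12 }
```

This is why the generated Rust types use `Option<Option<T>>` rather than a flat `Option<T>`: a single level of optionality cannot distinguish "not mentioned in this patch" from "explicitly cleared".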
### Working with `Option<Option<T>>` [Section titled “Working with Option\<Option\<T\>\>”](#working-with-optionoptiont)

```rust
// Access a nested optional field
let price = token.trading.last_trade_price.flatten().unwrap_or(0.0);

// Check if field was explicitly set (vs absent from patch)
match &token.reserves.current_price_sol {
    None => println!("Price not in this update"),
    Some(None) => println!("Price explicitly cleared"),
    Some(Some(price)) => println!("Price: {}", price),
}

// Compare values in before/after
if before.trading.last_trade_price != after.trading.last_trade_price {
    println!("Price changed!");
}
```

*** ## Generating a Rust SDK [Section titled “Generating a Rust SDK”](#generating-a-rust-sdk) Use the Arete CLI to generate a typed Rust SDK from your spec. See [CLI Commands](/cli/commands/) for the full reference and [Configuration](/building-stacks/configuration/) for `arete.toml` options.

```bash
# Generate SDK crate
a4 sdk create rust settlement-game

# With custom output directory
a4 sdk create rust settlement-game --output ./crates/game-sdk

# With custom crate name
a4 sdk create rust settlement-game --crate-name game-sdk

# Generate as a module instead of a standalone crate
a4 sdk create rust settlement-game --module --output ./src/stacks/game
```

### Crate vs Module Output [Section titled “Crate vs Module Output”](#crate-vs-module-output) By default, the CLI generates a **standalone crate** with its own `Cargo.toml`:

```plaintext
generated/settlement-game-stack/
├── Cargo.toml
└── src/
    ├── lib.rs    # Re-exports
    ├── types.rs  # Data structs (with Option<Option<T>> for patchable fields)
    └── entity.rs # Stack and Views implementations
```

With the `--module` flag, the CLI generates a **module** that can be embedded in an existing crate:

```plaintext
src/stacks/game/
├── mod.rs    # Re-exports
├── types.rs  # Data structs
└── entity.rs # Stack and Views implementations
```

### Using the Generated Code [Section titled “Using the Generated Code”](#using-the-generated-code) Add the generated
crate to your `Cargo.toml`: ```toml [dependencies] a4-sdk = "0.1.1" settlement-game-stack = { path = "./generated/settlement-game-stack" } ``` Then use it: ```rust use a4_sdk::prelude::*; use settlement_game_stack::{SettlementStack, GameState}; let a4 = Arete::::connect().await?; let scores = a4.views.player_score.leaderboard().get().await; let game = a4.views.game_state.state().get("game-key").await; ``` Or if using module mode, add to your `lib.rs`: ```rust pub mod game; // Points to src/game/mod.rs ``` *** ## Error Handling [Section titled “Error Handling”](#error-handling) ```rust use a4_sdk::AreteError; match Arete::::connect().await { Ok(a4) => println!("Connected!"), Err(AreteError::Connection(e)) => { eprintln!("Connection failed: {}", e); } Err(AreteError::Authentication(e)) => { eprintln!("Auth failed: {}", e); } Err(e) => { eprintln!("Unexpected error: {:?}", e); } } ``` *** ## Auto-Reconnection [Section titled “Auto-Reconnection”](#auto-reconnection) The SDK automatically reconnects on connection loss with configurable backoff: ```rust let a4 = Arete::::builder() .auto_reconnect(true) .reconnect_intervals(vec![ Duration::from_secs(1), Duration::from_secs(2), Duration::from_secs(5), Duration::from_secs(10), ]) .max_reconnect_attempts(20) .connect() .await?; ``` *** ## Complete Example [Section titled “Complete Example”](#complete-example) A full command-line app that streams ORE mining rounds: src/main.rs ```rust use a4_sdk::prelude::*; use a4_stacks::ore::{OreStack, OreRound}; use std::time::Duration; use tokio::time::timeout; #[tokio::main] async fn main() -> anyhow::Result<()> { println!("-----------------------------------"); println!(" Arete ORE Round Monitor "); println!("-----------------------------------\n"); // Connect with 10 second timeout let a4 = match timeout( Duration::from_secs(10), Arete::::connect() ).await { Ok(Ok(a4)) => { println!("Connected to ORE stack\n"); a4 } Ok(Err(e)) => { eprintln!("Connection error: {}", e); return 
Err(e.into()); } Err(_) => { eprintln!("Connection timeout"); return Err(anyhow::anyhow!("Connection timeout")); } }; // Stream with stats let mut round_count = 0; let mut update_count = 0; let mut stream = a4.views.ore_round.latest().watch(); println!("Streaming live ORE rounds (press Ctrl+C to exit)...\n"); while let Some(update) = stream.next().await { update_count += 1; match update { Update::Upsert { data, .. } => { round_count += 1; println!( "[#{:03}] Round #{} - Motherlode: {} SOL", update_count, data.id.round_id, data.state.motherlode as f64 / 1_000_000_000.0 ); } Update::Patch { data, .. } => { println!( "[#{:03}] Updated Round #{} - Difficulty: {}", update_count, data.id.round_id, data.state.total_difficulty ); } Update::Delete { key } => { println!("[#{:03}] Removed round: {:?}", update_count, key); } } // Print stats every 10 updates if update_count % 10 == 0 { println!("\n--- Stats: {} rounds tracked, {} updates received ---\n", round_count, update_count); } } Ok(()) } ``` Run it: ```bash cargo run ``` *** ## Advanced Patterns [Section titled “Advanced Patterns”](#advanced-patterns) ### Multiple Concurrent Streams [Section titled “Multiple Concurrent Streams”](#multiple-concurrent-streams) ```rust use futures::future::join; // Watch multiple views concurrently let latest_stream = a4.views.ore_round.latest().watch(); let list_stream = a4.views.ore_round.list().watch(); let (latest_result, list_result) = join( process_stream(latest_stream), process_stream(list_stream) ).await; ``` ### Reconnection Handling [Section titled “Reconnection Handling”](#reconnection-handling) ```rust loop { match Arete::::connect().await { Ok(a4) => { println!("Connected!"); let mut stream = a4.views.ore_round.latest().watch(); while let Some(update) = stream.next().await { process_update(update); } // Stream ended - connection lost println!("Connection lost, reconnecting..."); } Err(e) => { eprintln!("Connection failed: {}, retrying in 5s...", e); 
tokio::time::sleep(Duration::from_secs(5)).await; } } } ``` ### Graceful Shutdown [Section titled “Graceful Shutdown”](#graceful-shutdown) ```rust use tokio::signal; let a4 = Arete::<OreStack>::connect().await?; let mut stream = a4.views.ore_round.latest().watch(); loop { tokio::select! { Some(update) = stream.next() => { process_update(update); } _ = signal::ctrl_c() => { println!("\nShutting down gracefully..."); a4.disconnect().await; break; } } } ``` *** ## Examples [Section titled “Examples”](#examples) Scaffold a complete Rust example that streams ORE mining rounds: ```bash a4 create my-ore-app --template rust-ore cd my-ore-app cargo run ``` Or run `a4 create` interactively and select the `rust-ore` template. *** ## Next Steps [Section titled “Next Steps”](#next-steps) * [TypeScript SDK](/sdks/typescript/) - Use Arete in Node.js or browsers * [React SDK](/sdks/react/) - Build web apps with React hooks * [Build Your Own Stack](/building-stacks/workflow) - Create custom data streams for any Solana program * [CLI Reference](/cli/commands) - Deployment and management commands # Schema Validation Every Arete stack ships with [Zod](https://zod.dev) schemas alongside its TypeScript interfaces. These schemas give you runtime validation at two levels: **automatic frame validation** on the wire, and **per-query validation** in your application code. *** ## Generated Schemas [Section titled “Generated Schemas”](#generated-schemas) When you run `a4 sdk create typescript`, the CLI generates a Zod schema for every entity and sub-type. 
These mirror the TypeScript interfaces exactly: ```typescript import { PumpfunTokenSchema, PumpfunTokenIdSchema, PumpfunTokenReservesSchema, PumpfunTokenCompletedSchema, } from "arete-stacks/pumpfun"; ``` Each entity gets two schema variants: | Schema | Fields | Use Case | | ----------------------------- | ------------ | --------------------------------- | | `PumpfunTokenSchema` | All optional | Partial updates during streaming | | `PumpfunTokenCompletedSchema` | All required | Asserting a fully-hydrated entity | The “Completed” variant is useful when you want to guarantee every field is present before rendering — for example, filtering out tokens that haven’t received all their data yet. *** ## Frame Validation [Section titled “Frame Validation”](#frame-validation) Enable `validateFrames` when connecting to automatically validate every incoming WebSocket frame against the stack’s schemas. Invalid frames are dropped with a console warning instead of corrupting your local store. ### TypeScript (Core SDK) [Section titled “TypeScript (Core SDK)”](#typescript-core-sdk) ```typescript import { Arete } from "arete-typescript"; import { ORE_STREAM_STACK } from "arete-stacks/ore"; const a4 = await Arete.connect(ORE_STREAM_STACK, { validateFrames: true, }); ``` ### React [Section titled “React”](#react) ```tsx import { AreteProvider } from "arete-react"; function App() { return ( <AreteProvider validateFrames> {/* your app */} </AreteProvider> ); } ``` When a frame fails validation, the SDK logs a warning with the view path and error details, then silently discards the update. Your application never sees malformed data. When to enable Frame validation adds a small overhead per update. Enable it during development to catch data issues early, and in production if data integrity is critical for your use case. *** ## Query-Level Validation [Section titled “Query-Level Validation”](#query-level-validation) Pass a `schema` option to any streaming method to filter entities that don’t match. This works on both the core SDK and the React hooks. 
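Conceptually, query-level validation is just a `safeParse` filter over incoming entities. Here is a minimal, SDK-independent sketch of that behavior; the `Schema` shape mirrors the interface documented later on this page, while `RoundSchema` and `filterWithSchema` are illustrative names, not part of the SDK:

```typescript
// Minimal shape of the Schema contract: anything with a safeParse method.
interface Schema<T> {
  safeParse: (
    input: unknown,
  ) => { success: true; data: T } | { success: false; error: unknown };
}

// A hand-rolled validator (no Zod) that requires a numeric `motherlode`.
const RoundSchema: Schema<{ motherlode: number }> = {
  safeParse: (input) => {
    const v = input as { motherlode?: unknown } | null;
    return v && typeof v.motherlode === "number"
      ? { success: true, data: { motherlode: v.motherlode } }
      : { success: false, error: "motherlode missing or not a number" };
  },
};

// Query-level validation is conceptually this filter: entities that fail
// safeParse are skipped; the rest come through typed.
function filterWithSchema<T>(entities: unknown[], schema: Schema<T>): T[] {
  const kept: T[] = [];
  for (const entity of entities) {
    const result = schema.safeParse(entity);
    if (result.success) kept.push(result.data);
  }
  return kept;
}

const incoming = [{ motherlode: 5 }, {}, { motherlode: 7 }];
// filterWithSchema(incoming, RoundSchema) keeps only the two complete rounds.
```

The same mechanics apply whether the validator is a generated Zod schema or a custom object, which is why both work interchangeably in the `schema` option.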
### TypeScript (Core SDK) [Section titled “TypeScript (Core SDK)”](#typescript-core-sdk-1) The `.use()` method accepts a `schema` in its options. Entities that fail validation are silently skipped: ```typescript import { Arete } from "arete-typescript"; import { ORE_STREAM_STACK, OreRoundCompletedSchema, } from "arete-stacks/ore"; const a4 = await Arete.connect(ORE_STREAM_STACK); // Only emit rounds where every field is present for await (const round of a4.views.OreRound.latest.use({ schema: OreRoundCompletedSchema, })) { // round is guaranteed to have all fields populated console.log(round.id.round_id, round.state.motherlode); } ``` ### React Hooks [Section titled “React Hooks”](#react-hooks) Both `.use()` and `.useOne()` accept `schema` in their params. Entities that fail validation are filtered out of the results: ```tsx import { useArete } from "arete-react"; import { ORE_STREAM_STACK, OreRoundCompletedSchema, } from "arete-stacks/ore"; function FullyLoadedRounds() { const { views } = useArete(ORE_STREAM_STACK); // Only returns rounds where all fields are present const { data: rounds } = views.OreRound.latest.use({ schema: OreRoundCompletedSchema, }); return (
<ul>
      {rounds?.map((round) => (
        <li key={round.id.round_id}>
          Round #{round.id.round_id} — {round.state.motherlode}
        </li>
      ))}
    </ul>
); } ``` *** ## Custom Schemas [Section titled “Custom Schemas”](#custom-schemas) You can define your own Zod schemas to validate only the fields you care about. This is useful for building views that require specific data to render: ```typescript import { z } from "zod"; // Only accept tokens with complete trading data const TradableTokenSchema = z.object({ id: z.object({ mint: z.string(), }), reserves: z.object({ current_price_sol: z.number(), market_cap_sol: z.number(), }), trading: z.object({ total_volume: z.number(), total_trades: z.number(), }), }); // Use in React const { data: tokens } = views.PumpfunToken.list.use({ schema: TradableTokenSchema, }); // Or in core SDK for await (const token of a4.views.PumpfunToken.list.use({ schema: TradableTokenSchema, })) { console.log(token.id.mint, token.reserves.current_price_sol); } ``` Any entity missing the required fields is silently excluded. *** ## The Schema Interface [Section titled “The Schema Interface”](#the-schema-interface) The SDK defines a minimal `Schema` interface that is natively compatible with Zod: ```typescript interface Schema<T> { safeParse: (input: unknown) => SchemaResult<T>; } type SchemaResult<T> = | { success: true; data: T } | { success: false; error: unknown }; ``` Any object with a `safeParse` method works — Zod schemas satisfy this out of the box, but you can also use custom validators if needed. *** ## Next Steps [Section titled “Next Steps”](#next-steps) * [TypeScript SDK](/sdks/typescript/) — Core streaming API reference * [React SDK](/sdks/react/) — Hooks and providers for React apps * [Resolvers](/building-stacks/rust-dsl/resolvers/) — How Arete enriches entities with external data like token metadata # Filtering Feeds > Control what data streams to your client with pagination, sorting, and custom views. When streaming data from views, you can control what the server sends to reduce bandwidth and improve performance. 
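The `take` and `skip` parameters covered below follow standard offset-pagination semantics, equivalent to array slicing. A small standalone sketch of that mental model (plain TypeScript; `paginate` is an illustrative helper, not an SDK function):

```typescript
// Offset pagination semantics, assuming the server applies `skip`
// first and then `take` (standard offset/limit behavior).
function paginate<T>(items: T[], opts: { take?: number; skip?: number }): T[] {
  const skip = opts.skip ?? 0;
  const end = opts.take !== undefined ? skip + opts.take : undefined;
  return items.slice(skip, end);
}

// Page 3 with a page size of 10 means skip 20, take 10.
const page = 3;
const pageSize = 10;
const ids = Array.from({ length: 35 }, (_, i) => i); // 0..34
const pageItems = paginate(ids, { take: pageSize, skip: (page - 1) * pageSize });
// pageItems → [20, 21, ..., 29]; a final page may come back short.
```

The same `skip = (page - 1) * pageSize` arithmetic appears in every pagination example in this section.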
*** ## Feed Parameters [Section titled “Feed Parameters”](#feed-parameters) All Arete SDKs support these parameters when subscribing to feeds: | Parameter | Type | Description | | --------- | -------- | ------------------------------------------- | | `take` | `number` | Maximum number of entities to return | | `skip` | `number` | Number of entities to skip (for pagination) | | `key` | `string` | Entity key for state view subscriptions | ### Pagination [Section titled “Pagination”](#pagination) Use `take` and `skip` to paginate through large datasets: * Pagination — React ```typescript import { useState } from "react"; import { useArete } from "arete-react"; import { TOKEN_STACK } from "arete-stacks/token"; function TokenList() { const a4 = useArete(TOKEN_STACK); const [page, setPage] = useState(1); const pageSize = 10; // Paginated subscription const { data: tokens } = a4.views.Token.list.use({ take: pageSize, skip: (page - 1) * pageSize, }); return ( <div> <ul> {tokens?.map((t) => ( <li key={t.id.mint}>{t.id.mint}</li> ))} </ul> <button onClick={() => setPage((p) => p + 1)}>Next page</button> </div>
); } ``` * Pagination — TypeScript ```typescript import { Arete } from "arete-typescript"; import { TOKEN_STACK } from "arete-stacks/token"; const a4 = await Arete.connect("wss://token.stack.arete.run", { stack: TOKEN_STACK, }); // Get first 10 entities for await (const token of a4.views.Token.list.use({ take: 10 })) { console.log(token.id.mint); } // Get page 3 (skip 20, take 10) const page = 3; const pageSize = 10; for await (const token of a4.views.Token.list.use({ take: pageSize, skip: (page - 1) * pageSize })) { console.log(token.id.mint); } ``` * Pagination — Rust ```rust use a4_sdk::prelude::*; use a4_stacks::token::{TokenStack, Token}; let a4 = Arete::<TokenStack>::connect().await?; // Get first 10 entities let mut stream = a4.views.token.list() .listen() .take(10); // Get page 3 (skip 20, take 10) let page = 3; let page_size = 10; let mut stream = a4.views.token.list() .listen() .skip((page - 1) * page_size) .take(page_size); while let Some(token) = stream.next().await { println!("Token: {:?}", token.id.mint); } ``` ### State View Keys [Section titled “State View Keys”](#state-view-keys) For state views, pass the entity key to subscribe to a specific entity: * State view — React ```typescript const { data: token } = a4.views.Token.state.use(tokenAddress); ``` * State view — TypeScript ```typescript const tokenAddress = "So11111111111111111111111111111111111111112"; // One-shot query const token = await a4.views.Token.state.get(tokenAddress); // Stream updates for await (const token of a4.views.Token.state.use(tokenAddress)) { console.log("Token updated:", token.id.mint); } ``` * State view — Rust ```rust let token_address = "So11111111111111111111111111111111111111112"; let mut stream = a4.views.token.state(token_address).watch(); while let Some(update) = stream.next().await { println!("Token updated: {:?}", update.data.id.mint); } ``` *** ## Custom Views [Section titled “Custom Views”](#custom-views) Beyond the default `state` and `list` views, stacks can define **custom 
views** with sorting and limits applied at the server level. For stack builders Custom views are defined when [building a stack](/building-stacks/stack-definitions/) using the Rust DSL. If you’re just consuming an existing stack, skip to the [Accessing Custom Views](#accessing-custom-views) section below. ### Why Custom Views? [Section titled “Why Custom Views?”](#why-custom-views) * **Reduced bandwidth** - Server applies sorting/limits before transmitting * **Consistent ordering** - Sort order is defined once in your stack * **Pre-configured limits** - Useful for “top N” or “latest N” views ### Defining Custom Views [Section titled “Defining Custom Views”](#defining-custom-views) Custom views are defined using the `#[view]` attribute on your entity struct: ```rust use arete::prelude::*; #[arete(idl = "idl/ore.json")] pub mod ore_stream { #[entity(name = "OreRound")] #[view(name = "latest", sort_by = "id.round_id", order = "desc")] pub struct OreRound { pub id: RoundId, pub state: RoundState, // ... 
other fields } } ``` ### View Parameters [Section titled “View Parameters”](#view-parameters) | Parameter | Type | Description | | --------- | ------------------- | --------------------------------------------- | | `name` | `string` | View name (e.g., `"latest"`) | | `sort_by` | `string` | Field path to sort by (e.g., `"id.round_id"`) | | `order` | `"asc"` \| `"desc"` | Sort order (default: `"desc"`) | | `take` | `number` | Optional limit on results | ### Accessing Custom Views [Section titled “Accessing Custom Views”](#accessing-custom-views) Custom views appear alongside the default views in your SDK: * Custom views — React ```typescript import { useArete } from "arete-react"; import { ORE_STREAM_STACK } from "arete-stacks/ore"; function LatestRounds() { const a4 = useArete(ORE_STREAM_STACK); // Default views const { data: allRounds } = a4.views.OreRound.list.use(); const { data: round } = a4.views.OreRound.state.use(roundAddress); // Custom view - sorted by round_id desc const { data: latestRounds } = a4.views.OreRound.latest.use(); return ( <ul> {latestRounds?.map((r) => ( <li key={r.id.round_id}>Round #{r.id.round_id}</li> ))} </ul> )
; } ``` * Custom views — TypeScript ```typescript import { Arete } from "arete-typescript"; import { ORE_STREAM_STACK } from "arete-stacks/ore"; const a4 = await Arete.connect("wss://ore.stack.arete.run", { stack: ORE_STREAM_STACK, }); // Default views const allRounds = await a4.views.OreRound.list.get(); const round = await a4.views.OreRound.state.get(roundAddress); // Custom view - sorted by round_id desc for await (const round of a4.views.OreRound.latest.use()) { console.log("Round:", round.id.round_id); } ``` * Custom views — Rust ```rust use a4_sdk::prelude::*; use a4_stacks::ore::{OreStack, OreRound}; let a4 = Arete::<OreStack>::connect().await?; // Default views let mut list_stream = a4.views.ore_round.list().listen(); let round_address = "some-round-address"; let mut state_stream = a4.views.ore_round.state().listen(round_address); // Custom view - sorted by round_id desc let mut latest_stream = a4.views.ore_round.latest().listen(); while let Some(round) = latest_stream.next().await { println!("Round: {:?}", round.id.round_id); } ``` ### Example: Top 10 View [Section titled “Example: Top 10 View”](#example-top-10-view) ```rust #[entity(name = "Token")] #[view(name = "topByVolume", sort_by = "metrics.total_volume", order = "desc", take = 10)] pub struct Token { pub id: TokenId, pub metrics: TokenMetrics, } ``` ```typescript // Get top 10 tokens by volume for await (const token of a4.views.Token.topByVolume.use()) { console.log(token.id.mint, token.metrics.total_volume); } ``` See [Stack Definitions](/building-stacks/stack-definitions/) for complete documentation on defining entities and views. 
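Conceptually, evaluating a custom view amounts to sorting entities by the `sort_by` field path in the given `order`, then applying the optional `take` limit. A standalone sketch of those semantics (illustrative TypeScript, not the Arete runtime; `getPath` and `applyView` are hypothetical helpers):

```typescript
// Resolve a dotted path like "id.round_id" against an entity.
function getPath(obj: unknown, path: string): unknown {
  return path
    .split(".")
    .reduce<unknown>(
      (o, key) => (o == null ? undefined : (o as Record<string, unknown>)[key]),
      obj,
    );
}

// Apply view parameters: sort_by + order + optional take.
function applyView<T>(
  entities: T[],
  view: { sortBy: string; order?: "asc" | "desc"; take?: number },
): T[] {
  const dir = view.order === "asc" ? 1 : -1; // default is "desc", as in the docs
  const sorted = [...entities].sort((a, b) => {
    const av = getPath(a, view.sortBy) as number;
    const bv = getPath(b, view.sortBy) as number;
    return av === bv ? 0 : av < bv ? -dir : dir;
  });
  return view.take !== undefined ? sorted.slice(0, view.take) : sorted;
}

// A "latest" view: sort by id.round_id descending, keep the top 2.
const rounds = [
  { id: { round_id: 1 } },
  { id: { round_id: 3 } },
  { id: { round_id: 2 } },
];
const latest = applyView(rounds, { sortBy: "id.round_id", order: "desc", take: 2 });
// latest → rounds 3 and 2
```

Because this work happens server-side in Arete, only the sorted, limited slice crosses the wire, which is the bandwidth win described above.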
*** ## Client-Side Filtering [Section titled “Client-Side Filtering”](#client-side-filtering) If you need dynamic filtering that changes at runtime, apply filters client-side after receiving data: * Client-side filter — React The React SDK provides a `where` clause for declarative client-side filtering: ```typescript import { useArete } from "arete-react"; import { TOKEN_STACK } from "arete-stacks/token"; function HighVolumeTokens() { const a4 = useArete(TOKEN_STACK); // Client-side filtering with where clause const { data: tokens } = a4.views.Token.list.use({ where: { volume: { gte: 10000 }, price: { lte: 100 } }, limit: 20, // Client-side limit }); return ( <ul> {tokens?.map((t) => ( <li key={t.id.mint}>{t.id.mint}</li> ))} </ul> ); } ``` **Supported `where` operators:** | Operator | Description | | --------- | --------------------- | | `gte` | Greater than or equal | | `lte` | Less than or equal | | `gt` | Greater than | | `lt` | Less than | | *(value)* | Exact match | * Client-side filter — TypeScript Filter data as it streams using standard JavaScript: ```typescript import { Arete } from "arete-typescript"; import { ORE_STREAM_STACK } from "arete-stacks/ore"; const a4 = await Arete.connect("wss://ore.stack.arete.run", { stack: ORE_STREAM_STACK, }); // Filter in the streaming loop for await (const round of a4.views.OreRound.latest.use()) { // Skip rounds below threshold if ((round.state.motherlode ?? 
0) < 1_000_000_000) continue; console.log("High-value round:", round.id.round_id); } ``` * Client-side filter — Rust Use iterator adapters to filter streams: ```rust use a4_sdk::prelude::*; use a4_stacks::ore::{OreStack, OreRound}; use futures::StreamExt; let a4 = Arete::<OreStack>::connect().await?; // Filter using stream adapters let mut stream = a4.views.ore_round.latest() .listen() .filter(|round| { let high_value = round.state.motherlode.unwrap_or(0) >= 1_000_000_000; futures::future::ready(high_value) }); while let Some(round) = stream.next().await { println!("High-value round: {:?}", round.id.round_id); } ``` # How It Works > Architecture, core concepts, and data flow in Arete. Arete is a declarative data layer for Solana. You define the data shape you need, and Arete handles all the infrastructure to stream it to your app. Prefer AI-assisted development? You don’t need to understand all of this to build with Arete. Check out the [Build with AI](/agent-skills/overview/) path to get started with prompts instead of code. *** ## The Problem [Section titled “The Problem”](#the-problem) Building data pipelines for Solana apps is painful: 1. **Manual parsing** - You write custom code to decode accounts and instructions 2. **ETL complexity** - You build pipelines to transform, aggregate, and store data 3. **RPC overhead** - You manage websocket connections, retries, and state sync 4. **Type mismatches** - On-chain types don’t match your app types Most teams spend weeks on infrastructure before shipping features. 
*** ## The Arete Solution [Section titled “The Arete Solution”](#the-arete-solution) Instead of building infrastructure, you **declare what data you need**: ```rust #[entity(name = "Token")] pub struct Token { #[from_instruction(Create::mint, primary_key)] pub mint: String, #[map(BondingCurve::virtual_sol_reserves)] pub sol_reserves: u64, #[aggregate(from = Buy, field = amount, strategy = Sum)] pub total_volume: u64, } ``` Arete then: * Subscribes to the relevant on-chain events * Transforms raw data into your entity shape * Streams updates to your app as they happen on-chain * Generates type-safe SDKs for your frontend *** ## Architecture Overview [Section titled “Architecture Overview”](#architecture-overview) ```plaintext ┌──────────────────────────────────────────────────────────────┐ │ YOUR APP │ │ React hooks, TypeScript streams, or raw WebSocket │ └──────────────────────────────────────────────────────────────┘ ▲ │ WebSocket (live updates) ▼ ┌──────────────────────────────────────────────────────────────┐ │ ARETE CLOUD │ │ │ │ ┌────────────┐ ┌────────────┐ ┌────────────┐ │ │ │ Compiler │ -> │ VM │ -> │ Streamer │ │ │ └────────────┘ └────────────┘ └────────────┘ │ │ ^ ^ │ │ │ AST Bytecode State Tables WebSocket │ │ │ │ ┌────────────────────────────────────────────────────┐ │ │ │ Yellowstone gRPC │ │ │ │ (Live Solana data feed) │ │ │ └────────────────────────────────────────────────────┘ │ └──────────────────────────────────────────────────────────────┘ ▲ │ Yellowstone gRPC (raw transactions) ▼ ┌──────────────────────────────────────────────────────────────┐ │ SOLANA │ │ On-chain programs emitting transactions and state changes │ └──────────────────────────────────────────────────────────────┘ ``` *** ## Key Concepts [Section titled “Key Concepts”](#key-concepts) ### Stacks [Section titled “Stacks”](#stacks) A **Stack** is a deployed data pipeline that: 1. **Declares** one or more Solana programs it cares about — each with its own IDL 2. 
**Subscribes** to on-chain events across all of those programs 3. **Transforms** raw transaction data into structured entities 4. **Aggregates** values over time (sums, counts, etc.) 5. **Streams** updates to connected clients as they happen A stack isn’t limited to a single program. You can pull data from multiple programs into a single unified set of entities — as long as you provide an IDL for each. For example, a DeFi stack might combine instructions from a DEX program, a token program, and a rewards program into one coherent stream. Think of it as a “live query” across Solana — you declare the programs and data shape you need, and Arete keeps it updated. ### Entities [Section titled “Entities”](#entities) An **Entity** is a structured object representing on-chain state. A stack can contain multiple entities — for example, a DeFi stack might have separate entities for pools, positions, and trades. Each entity has: * **Primary key** - Unique identifier (usually a pubkey) * **Fields** - Data attributes with population strategies * **Sections** - Logical groupings of fields ```rust #[entity(name = "Token")] pub struct Token { pub id: TokenId, // Primary key section pub info: TokenInfo, // Metadata section pub trading: Trading, // Metrics section } ``` ### Mappings [Section titled “Mappings”](#mappings) **Mappings** define how on-chain data flows into entity fields: | Mapping Type | Source | Example | | --------------------- | ----------------------- | ------------------------------------------ | | `#[map]` | Account field | `#[map(BondingCurve::reserves)]` | | `#[from_instruction]` | Instruction arg/account | `#[from_instruction(Create::mint)]` | | `#[snapshot]` | Account snapshot | `#[snapshot(from = Account)]` | | `#[derive_from]` | Derive from instruction | `#[derive_from(from = Buy, field = user)]` | | `#[aggregate]` | Computed from events | `#[aggregate(from = Buy, strategy = Sum)]` | | `#[event]` | Captured instruction | `#[event(strategy = Append)]` | | 
`#[computed]` | Derived from fields | `#[computed(field_a + field_b)]` | ### Population Strategies [Section titled “Population Strategies”](#population-strategies) **Strategies** control how field values are updated: | Strategy | Behavior | | ------------- | ------------------------- | | `SetOnce` | Set once, never overwrite | | `LastWrite` | Always use latest value | | `Append` | Collect into array | | `Sum` | Running total | | `Count` | Event counter | | `UniqueCount` | Count unique values | | `Max` / `Min` | Track extremes | ### Views [Section titled “Views”](#views) **Views** are how you access data from a stack. By default, every entity gets two built-in views: | Mode | Path | Returns | Use Case | | ------- | -------------- | ----------------- | ------------ | | `list` | `Entity/list` | Array of entities | All entities | | `state` | `Entity/state` | Single entity | Get by key | ```typescript // List - all tokens as array const { data: tokens } = stack.views.tokens.list.use(); // State - single token by key const { data: token } = stack.views.tokens.state.use({ key: mintAddress }); ``` **Custom Views:** If you need different sorting, filtering, or aggregation logic, you can define custom views using the `#[view]` macro in your stack specification. This lets you create specialized access patterns beyond the default list and state views. ### Stack SDKs [Section titled “Stack SDKs”](#stack-sdks) A **Stack SDK** is a generated package that tells the Arete client how to interact with a specific feed. It contains: * **Entity types** — The shape of your data * **View definitions** — How to access entities (list, state, etc.) * **Helpers** — Shared formatting and transformation logic You can share generated SDKs with your team or publish them to npm/crates.io. *** ## Data Flow [Section titled “Data Flow”](#data-flow) ### Specification Time [Section titled “Specification Time”](#specification-time) 1. Write stream definition using Rust macros 2. 
Build generates stack spec (`.arete/*.stack.json`) 3. Deploy with `a4 up` 4. Cloud compiles spec to bytecode ### Client [Section titled “Client”](#client) 1. Connect to WebSocket 2. Subscribe to views 3. Receive live updates 4. React components re-render automatically *** ## Live Updates [Section titled “Live Updates”](#live-updates) Unlike traditional APIs where you poll for changes, Arete **pushes** updates to you. ### Update Types [Section titled “Update Types”](#update-types) | Type | Meaning | | -------- | ----------------------------- | | `upsert` | Entity was created or updated | | `delete` | Entity was removed | ```typescript for await (const update of a4.views.token.list.watch()) { if (update.type === "upsert") { console.log("Token changed:", update.data); } else if (update.type === "delete") { console.log("Token removed:", update.key); } } ``` *** ## Connection Lifecycle [Section titled “Connection Lifecycle”](#connection-lifecycle) ```plaintext disconnected → connecting → connected → (reconnecting) → connected ↘ error ``` | State | Meaning | | -------------- | -------------------------------------- | | `disconnected` | Not connected | | `connecting` | Establishing WebSocket connection | | `connected` | Active and receiving updates | | `reconnecting` | Connection lost, attempting to restore | | `error` | Failed to connect (check URL, network) | The SDK handles reconnection automatically. Your subscriptions resume when the connection is restored. *** ## Type Safety [Section titled “Type Safety”](#type-safety) Arete provides end-to-end type safety: 1. **Spec types** - Rust ensures valid mappings at compile time 2. **Generated types** - SDK types match your entity definitions 3. 
**Runtime types** - Data arrives pre-shaped, no parsing needed ```typescript // TypeScript knows token.mint is string, token.volume is bigint const { data: token } = stack.views.tokens.state.use({ key }); console.log(token.mint); // string console.log(token.volume); // bigint ``` *** ## Next Steps [Section titled “Next Steps”](#next-steps) * [Quickstart](/using-stacks/quickstart) - Scaffold a working app in under 2 minutes * [Connect to a Stack](/using-stacks/connect/) - Add Arete to your existing project * [React SDK](/sdks/react/) - Build a complete React app * [TypeScript SDK](/sdks/typescript/) - Framework-agnostic SDK * [Rust SDK](/sdks/rust/) - Native Rust client * [Building Stacks](/building-stacks/workflow) - Create custom data streams * [CLI Commands](/cli/commands) - Deployment and management # Installation Reference Complete installation reference for all Arete client SDKs. New to Arete? Start with the [Quickstart](/using-stacks/quickstart) to scaffold a working app, or [Connect to a Stack](/using-stacks/connect/) to add Arete to an existing project. Building your own stack? If you’re creating a custom data feed, see [Building Stacks → Environment Setup](/building-stacks/installation) instead. *** ## TypeScript (Core) [Section titled “TypeScript (Core)”](#typescript-core) For Node.js, Vue, Svelte, or vanilla JS: ```bash npm install arete-typescript ``` Framework-agnostic with AsyncIterable-based streaming. No peer dependencies. *** ## TypeScript (React) [Section titled “TypeScript (React)”](#typescript-react) For React applications: ```bash npm install arete-react ``` **Peer dependencies:** `react` ^19.0.0, `zustand` ^4.0.0 ```bash npm install react zustand ``` Which package? 
* **React/Next.js** → `arete-react` * **Everything else** → `arete-typescript` *** ## Rust [Section titled “Rust”](#rust) Add to your `Cargo.toml`: ```toml [dependencies] a4-sdk = "0.1.1" ``` *** ## Next Steps [Section titled “Next Steps”](#next-steps) * [Quickstart](/using-stacks/quickstart) — Scaffold a working app in under 2 minutes * [React SDK](/sdks/react/) — Build a complete React app * [TypeScript SDK](/sdks/typescript/) — Use with Node.js, Vue, Svelte * [Rust SDK](/sdks/rust/) — Native Rust client *** ## CLI [Section titled “CLI”](#cli) The Arete CLI (`a4`) handles project scaffolding, deployment, and SDK generation. **Via npm (global):** ```bash npm install -g @usearete/a4 ``` Or use without installing: ```bash npx @usearete/a4 --help ``` **Via Cargo:** ```bash cargo install a4-cli ``` *** ## Troubleshooting [Section titled “Troubleshooting”](#troubleshooting) | Issue | Solution | | ------------------------------ | ------------------------------------------ | | `Cannot find module 'arete-*'` | Run `npm install` for the relevant package | | Peer dependency warnings | Install `react` and `zustand` explicitly |