Integrating Self Protocol with Rust

So there I was, excited to integrate Self, an awesome privacy-first identity verification protocol, into my project. Self uses zero-knowledge proofs to verify real human identities without exposing sensitive data. Perfect for preventing bots, protecting airdrops, and building Sybil-resistant applications.
I also really wanted to build my backend in Rust for performance.
Want to follow along? All the code from this article is available on GitHub. Feel free to clone, experiment, and build upon it!
The Challenge
Self provides excellent JavaScript/TypeScript SDKs with comprehensive documentation and support. As a Rust developer, this presented an interesting architectural challenge: how could I leverage Self’s robust JS SDK while building the high-performance Rust backend I wanted?
I had a few options:
Use Node - Leverage the official SDK directly
Wait for a Rust SDK - Maybe I’ll contribute one to the community someday, I’ve really been wanting to. :P
Get creative - Find a way to use both languages together
Option 1 meant giving up Rust’s benefits. Option 2 would delay my project (and reimplementing cryptographic verification seemed unnecessarily complex when Self already provides a battle-tested implementation).
So I chose option 3: What if I could combine Rust’s performance with Self’s JavaScript SDK?
The Solution
Here’s the idea: Rust handles what it’s great at (HTTP, routing, concurrency, type safety), while Node workers leverage Self’s official, well-maintained SDK for verification.

The flow works like this:
User opens the web app and sees a QR code
User scans with Self mobile app
Mobile app generates a zero-knowledge proof and POSTs it to the Rust backend
Rust receives the request and forwards it to an available Node worker
Node worker calls the Self SDK to verify the proof
Worker returns the result to Rust
Rust responds to the mobile app and caches the result
Frontend polls the debug endpoint to display verification details
This gives us the best of both worlds: Rust’s performance and type safety for the HTTP layer, and Self’s official, fully-featured SDK for verification, no need to reimplement anything.
The Backend
Let me walk you through the Rust implementation. The core is an Axum HTTP server that manages a pool of Node worker processes.
Setting Up the Worker Pool
First, I needed a way to spawn and manage Node processes from Rust. That’s where the node-workers crate comes in:
use node_workers::WorkerPool;
use std::sync::{Arc, Mutex};

// Create a pool that can manage up to 4 workers.
let mut pool = WorkerPool::setup("worker/worker.mjs", 4);

// Enable debug logging (helpful during development).
pool.with_debug(debug_workers);

// Here's the clever part: warm up 2 workers at startup.
let handle = pool.warmup(2);
if let Err(err) = handle.join() {
    error!("Failed to warm up node workers: {:?}", err);
}
Why warm up workers at startup? Latency.
If I waited until the first verification request to spawn a Node process, users would experience a noticeable delay. By pre-spawning 2 workers, they’re already running and ready to handle requests immediately. The pool can still scale up to 4 workers under heavy load.
The Application State
The Rust server maintains three pieces of shared state:
struct AppState {
    config: Config,                      // Scope and endpoint config.
    pool: Arc<Mutex<WorkerPool>>,        // The Node worker pool.
    last_result: RwLock<Option<Value>>,  // Cache for debug endpoint.
}

type SharedState = State<Arc<AppState>>;
This state is shared across all request handlers using Axum’s State extractor.
The Verification Endpoint
This is where the magic happens. When the Self mobile app sends a verification proof, this endpoint handles it:
async fn verify_handler(
    State(state): SharedState,
    Json(body): Json<Value>,
) -> impl IntoResponse {
    info!("Received verification request");

    // 1. Extract the proof data from the request.
    let attestation_id = body.get("attestationId").cloned();
    let proof = body.get("proof").cloned();
    let public_signals = body.get("publicSignals").cloned();
    let user_context_data = body.get("userContextData").cloned();

    // Validate all required fields are present.
    if attestation_id.is_none() || proof.is_none() ||
        public_signals.is_none() || user_context_data.is_none() {
        return error_response("Required fields missing");
    }

    // 2. Prepare the payload for the Node worker.
    let payload_for_node = json!({
        "attestationId": attestation_id.unwrap(),
        "proof": proof.unwrap(),
        "publicSignals": public_signals.unwrap(),
        "userContextData": user_context_data.unwrap(),
    });

    info!("Calling Node worker for verification...");

    // 3. Call the Node worker in a blocking thread.
    let pool_arc = state.pool.clone();
    let node_result = tokio::task::spawn_blocking(move || {
        let mut pool = pool_arc.lock().expect("worker pool poisoned");
        pool.perform::<Value, _>("verifyProof", vec![payload_for_node])
    }).await;

    // 4. Extract and process the result.
    let verification_result = match node_result {
        Ok(Ok(results)) => results.into_iter().next().flatten(),
        Ok(Err(e)) => {
            error!("Worker error: {:?}", e);
            return error_response("Worker error");
        }
        Err(e) => {
            error!("Join error: {:?}", e);
            return error_response("Internal error");
        }
    };

    let verification_result = match verification_result {
        Some(vr) => vr,
        None => return error_response("No result from worker"),
    };

    // 5. Check the verification details.
    let is_valid = verification_result["isValidDetails"]["isValid"]
        .as_bool().unwrap_or(false);
    let is_min_age_valid = verification_result["isValidDetails"]["isMinimumAgeValid"]
        .as_bool().unwrap_or(false);
    let is_ofac_valid = verification_result["isValidDetails"]["isOfacValid"]
        .as_bool().unwrap_or(false);

    info!("Verification result - valid: {}, min_age: {}, ofac: {}",
        is_valid, is_min_age_valid, is_ofac_valid);

    // 6. Store the result for the debug endpoint.
    *state.last_result.write().await = Some(verification_result.clone());

    // 7. Return success or failure.
    // Note: is_ofac_valid=true means user IS on OFAC list (failed check).
    if !is_valid || !is_min_age_valid || is_ofac_valid {
        return error_response_with_details(verification_result);
    }

    Json(json!({
        "status": "success",
        "result": true,
        "credentialSubject": verification_result["discloseOutput"],
        "userData": verification_result["userData"],
    }))
}
Why spawn_blocking?
Notice the tokio::task::spawn_blocking wrapper? That's crucial. The node-workers API is synchronous, and the pool sits behind a blocking std Mutex; calling either directly from an async handler would stall the runtime. By moving the call onto a dedicated blocking thread, other async tasks keep running.
This is a common pattern when integrating synchronous libraries into async Rust code.
The Node Worker
Now let’s look at the Node side. The worker is a single file that communicates with Rust via stdin/stdout.
The Communication Protocol
The node-workers crate uses a simple line-based protocol:
Rust sends: PAYLOAD_CHUNK:<data> (can be multiple chunks)
Rust sends: PAYLOAD_END
Node responds: PAYLOAD_OK
Rust sends: CMD:verifyProof
Node sends: RESULT_CHUNK:<data> (can be multiple chunks)
Node sends: OK
Here’s how the worker implements this:
import readline from "readline";
import { SelfBackendVerifier, AllIds, DefaultConfigStore } from "@selfxyz/core";
import dotenv from "dotenv";

dotenv.config();

// Initialize the Self SDK verifier.
const scope = process.env.SELF_SCOPE;
const endpoint = process.env.SELF_ENDPOINT;

const selfBackendVerifier = new SelfBackendVerifier(
  scope,
  endpoint,
  true,    // mockPassport: true for staging/testing.
  AllIds,  // Accept all document types (passport, ID, Aadhaar).
  new DefaultConfigStore({
    minimumAge: 18,
    excludedCountries: [],
    ofac: true, // Enable OFAC sanctions screening.
  }),
  "hex",   // User ID format.
);

// Set up stdin/stdout communication.
const rl = readline.createInterface({
  input: process.stdin,
  output: process.stdout,
  terminal: false,
});

let payloadStr = "";
let payload = null;

// Handle incoming commands from Rust.
rl.on("line", async (line) => {
  switch (line) {
    case "PAYLOAD_END":
      payload = JSON.parse(payloadStr || "null");
      if (payload && payload._inner_payload) {
        payload = payload._inner_payload;
      }
      payloadStr = "";
      console.log("PAYLOAD_OK");
      break;
    default:
      if (line.startsWith("PAYLOAD_CHUNK:")) {
        payloadStr += line.replace("PAYLOAD_CHUNK:", "").trim();
      } else if (line.startsWith("CMD:")) {
        const cmd = line.replace("CMD:", "").trim();
        try {
          await handleCommand(cmd);
        } catch (err) {
          console.error("ERROR:", err.message);
          process.exit(1); // Crash the worker; the pool will spawn a new one.
        }
      }
  }
});

// Signal to Rust that the worker is ready.
console.log("READY");
The Verification Handler
When Rust sends the verifyProof command, the worker calls the Self SDK:
async function handleCommand(cmd) {
  const currentPayload = payload;
  payload = null;

  if (cmd === "verifyProof") {
    const { attestationId, proof, publicSignals, userContextData } =
      currentPayload ?? {};
    if (!attestationId || !proof || !publicSignals || !userContextData) {
      throw new Error("Required fields missing in verifyProof");
    }

    // Call the Self SDK to verify the proof.
    const result = await selfBackendVerifier.verify(
      attestationId,
      proof,
      publicSignals,
      userContextData,
    );

    // Send the result back to Rust in chunks (max 1000 chars each).
    const str = JSON.stringify(result);
    const chunks = str.match(/.{1,1000}/g) || [];
    for (const chunk of chunks) {
      console.log(`RESULT_CHUNK: ${chunk}`);
    }
  } else if (cmd === "ping") {
    // Health check.
    const str = JSON.stringify({ ok: true, ts: new Date().toISOString() });
    console.log(`RESULT_CHUNK: ${str}`);
  } else {
    throw new Error(`Task "${cmd}" not found`);
  }

  console.log("OK");
}
The chunking is important: large JSON responses need to be split to avoid buffering issues in the line-based IPC mechanism.
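To make the round trip concrete, here's a small standalone sketch of the same split-and-reassemble logic. The names (MAX_CHUNK, toChunks, fromLines) are illustrative, not part of the node-workers crate; the regex mirrors the one in the worker above.

```javascript
const MAX_CHUNK = 1000;

// Split a JSON string into fixed-size chunks, as the worker does.
// Note: `.` doesn't match newlines, which is fine for JSON.stringify output.
function toChunks(str) {
  return str.match(new RegExp(`.{1,${MAX_CHUNK}}`, "g")) || [];
}

// Reassemble on the receiving side: keep only RESULT_CHUNK lines,
// strip the prefix, and concatenate.
function fromLines(lines) {
  return lines
    .filter((l) => l.startsWith("RESULT_CHUNK: "))
    .map((l) => l.slice("RESULT_CHUNK: ".length))
    .join("");
}

const payload = JSON.stringify({ data: "x".repeat(2500) });
const lines = toChunks(payload).map((c) => `RESULT_CHUNK: ${c}`);
const roundTripped = fromLines(lines);

console.log(lines.length);             // 3
console.log(roundTripped === payload); // true
```

The same idea applies in the other direction with the PAYLOAD_CHUNK prefix.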
The Frontend
The frontend is relatively straightforward: it's a Next.js 16 app that displays a QR code and shows verification results.

Generating the QR Code
"use client";

import { SelfAppBuilder } from "@selfxyz/core";
import { SelfQRcodeWrapper } from "@selfxyz/qrcode";
import { ethers } from "ethers";
import { useState } from "react";

export default function Home() {
  const [showQr, setShowQr] = useState(true);
  const [status, setStatus] = useState("Scan the QR code with Self app");
  const [verificationDetails, setVerificationDetails] = useState(null);

  // Build the Self app configuration.
  const selfApp = new SelfAppBuilder({
    version: 2,
    appName: process.env.NEXT_PUBLIC_SELF_APP_NAME || "Self Demo",
    scope: process.env.NEXT_PUBLIC_SELF_SCOPE || "demo-scope",
    endpoint: process.env.NEXT_PUBLIC_SELF_ENDPOINT ||
      "http://localhost:3001/api/verify",
    logoBase64: "https://i.postimg.cc/mrmVf9hm/self.png",
    userId: ethers.ZeroAddress,    // Placeholder user ID.
    endpointType: "staging_https", // Use staging for mock passports.
    userIdType: "hex",
    userDefinedData: "demo-user",
    disclosures: {
      minimumAge: 18,
      excludedCountries: [],
      ofac: true,
      nationality: true,
      gender: true,
    },
  }).build();

  return (
    <div className="app-container">
      {showQr && (
        <SelfQRcodeWrapper
          selfApp={selfApp}
          onSuccess={handleSuccessfulVerification}
          onError={handleError}
          type="websocket"
          size={260}
        />
      )}
      <p className="status">{status}</p>
      {verificationDetails && (
        <VerificationResult details={verificationDetails} />
      )}
    </div>
  );
}
Fetching Verification Results
When the QR code callback fires onSuccess, it means the mobile app completed verification but the frontend doesn’t have the details yet. So it polls the debug endpoint:
const handleSuccessfulVerification = async () => {
  setStatus("Verification succeeded! Loading details...");
  setShowQr(false);

  try {
    const debugEndpoint = process.env.NEXT_PUBLIC_SELF_DEBUG_ENDPOINT ||
      "http://localhost:3001/debug/last-result";
    const res = await fetch(debugEndpoint);
    const data = await res.json();
    if (data && data.verificationResult) {
      setVerificationDetails(data.verificationResult);
      setStatus("Verification complete!");
    }
  } catch (err) {
    console.error("Failed to fetch verification details:", err);
    setStatus("Verification succeeded but couldn't load details");
  }
};
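A single fetch works when the backend has already cached the result, but the mobile app's POST and the frontend's fetch can race. A more robust variant retries until the debug endpoint has something to return. This is a sketch under my own naming (pollLastResult, fetchFn are illustrative, not SDK APIs):

```javascript
// Poll the debug endpoint until a verificationResult appears or we give up.
// fetchFn is injectable so the function can be tested without a network.
async function pollLastResult(url, { attempts = 10, delayMs = 500, fetchFn = fetch } = {}) {
  for (let i = 0; i < attempts; i++) {
    const res = await fetchFn(url);
    const data = await res.json();
    if (data && data.verificationResult) return data.verificationResult;
    // Not cached yet: wait a bit and try again.
    await new Promise((resolve) => setTimeout(resolve, delayMs));
  }
  return null; // Gave up; the caller can show a fallback message.
}
```

Wiring this into handleSuccessfulVerification is a one-line change: replace the single fetch with `await pollLastResult(debugEndpoint)`.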
The debug endpoint is simple: it just returns the cached verification result:
async fn last_result_handler(State(state): SharedState) -> impl IntoResponse {
    let last = state.last_result.read().await.clone();
    let body = if let Some(result) = last {
        Json(json!({
            "status": "ok",
            "verificationResult": result,
        }))
    } else {
        Json(json!({ "status": "empty" }))
    };

    let mut response = body.into_response();
    response.headers_mut().insert(
        header::ACCESS_CONTROL_ALLOW_ORIGIN,
        HeaderValue::from_static("*"),
    );
    response
}

The Devil in the Details
Environment Configuration Hell
Getting all the environment variables right was surprisingly tricky. Here’s what you need:
Backend (server/.env):
PORT=3001
SELF_SCOPE=demo-scope
SELF_ENDPOINT=https://your-ngrok-url.ngrok-free.app/api/verify
Frontend (client/.env):
NEXT_PUBLIC_SELF_APP_NAME=Self Verification Demo
NEXT_PUBLIC_SELF_SCOPE=demo-scope
NEXT_PUBLIC_SELF_ENDPOINT=https://your-ngrok-url.ngrok-free.app/api/verify
NEXT_PUBLIC_SELF_DEBUG_ENDPOINT=http://localhost:3001/debug/last-result
Critical points:
Scope must match exactly between frontend and backend
Endpoint must match exactly between frontend and backend
Endpoint must be HTTPS and publicly accessible (hence ngrok)
Debug endpoint can be localhost (browser → local server)
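Because a scope or endpoint mismatch fails silently at verification time, a small pre-flight check can save a debugging session. Here's a hedged sketch (parseEnv and checkConfig are my own helper names, not part of the repo) that parses dotenv-style text and enforces the rules above:

```javascript
// Parse simple KEY=value lines from dotenv-style text.
function parseEnv(text) {
  const out = {};
  for (const line of text.split("\n")) {
    const idx = line.indexOf("=");
    if (idx > 0) out[line.slice(0, idx).trim()] = line.slice(idx + 1).trim();
  }
  return out;
}

// Cross-check the server and client configs; returns a list of problems.
function checkConfig(serverEnvText, clientEnvText) {
  const server = parseEnv(serverEnvText);
  const client = parseEnv(clientEnvText);
  const errors = [];
  if (server.SELF_SCOPE !== client.NEXT_PUBLIC_SELF_SCOPE)
    errors.push("scope mismatch between server and client");
  if (server.SELF_ENDPOINT !== client.NEXT_PUBLIC_SELF_ENDPOINT)
    errors.push("endpoint mismatch between server and client");
  if (server.SELF_ENDPOINT && !server.SELF_ENDPOINT.startsWith("https://"))
    errors.push("endpoint must be HTTPS and publicly accessible");
  return errors;
}
```

Run it over the two .env files before starting the servers and fail fast if the list is non-empty.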
Ngrok
For local development, you need ngrok to expose your Rust server publicly:
ngrok http 3001
This gives you a URL like https://abc123.ngrok-free.app. You need to:
Update SELF_ENDPOINT in server/.env
Update NEXT_PUBLIC_SELF_ENDPOINT in client/.env
Restart both the Rust server and Next.js dev server
Every time ngrok restarts, you get a new URL (unless you pay for a static domain). It’s annoying but necessary for testing with the mobile app.
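Since the URL changes on every ngrok restart, I find it handy to script the .env edits rather than do them by hand. A sketch (setEnvVar is an illustrative helper, not part of the repo) that swaps or appends a value in dotenv-style text:

```javascript
// Replace KEY=... in dotenv-style text, or append it if the key is absent.
function setEnvVar(text, key, value) {
  const line = `${key}=${value}`;
  const re = new RegExp(`^${key}=.*$`, "m");
  // Use a replacer function so `$` in the value can't be misread as a pattern.
  return re.test(text) ? text.replace(re, () => line) : `${text.trimEnd()}\n${line}\n`;
}
```

Combined with fs.readFileSync/fs.writeFileSync over both .env files, updating the ngrok URL becomes a one-command step.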
The OFAC Logic
This tripped me up initially. The isOfacValid field is inverted:
isOfacValid: false = User is NOT on OFAC lists (passed the check)
isOfacValid: true = User IS on OFAC lists (failed the check)
So the verification logic rejects when is_ofac_valid is true:
if !is_valid || !is_min_age_valid || is_ofac_valid {
    return error_response_with_details(verification_result);
}
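The same rejection rule, sketched in JS so the inverted flag is easy to sanity-check against a truth table (shouldReject is an illustrative name; the field names come from the SDK's isValidDetails object):

```javascript
// Reject unless the proof is valid, the age check passed, and the user is
// NOT on an OFAC list. Remember: isOfacValid === true means an OFAC hit.
function shouldReject({ isValid, isMinimumAgeValid, isOfacValid }) {
  return !isValid || !isMinimumAgeValid || isOfacValid;
}

console.log(shouldReject({ isValid: true, isMinimumAgeValid: true, isOfacValid: false })); // false
console.log(shouldReject({ isValid: true, isMinimumAgeValid: true, isOfacValid: true }));  // true
```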
What I Learned
This Architecture Actually Rules
I went into this thinking “it’s a hacky workaround,” but honestly? This pattern is brilliant for polyglot backends. The benefits:
Clean separation: Rust handles all HTTP concerns, Node only does SDK work
Official SDK: We get to use Self’s production-ready, well-tested SDK directly
Best of both worlds: Rust’s performance for I/O, Self’s comprehensive SDK for verification
Isolation: Worker crashes don’t take down the server
Scalability: Pool grows/shrinks with demand
The node-workers crate makes this pattern incredibly simple. The entire Rust-Node bridge is maybe 50 lines of code.
Trade-offs
Of course, there are downsides:
Complexity: Two runtimes to manage, debug, and deploy
Memory: Each Node worker is ~50-100MB of RAM
IPC overhead: Serializing/deserializing JSON adds microseconds
Debugging: Errors can happen in Rust or Node, so you need to check both sets of logs
For a simple CRUD app, this would be overkill. But for a performance-critical backend that needs JavaScript-only libraries? Perfect fit.
Try It Yourself
Want to run this? Here’s the setup:
Prerequisites
Node 18+ (for the worker)
Rust (for the backend)
Ngrok (for HTTPS tunneling)
Installation
# 1. Clone the repo.
git clone https://github.com/kartikmehta8/self-offchain-rust-starter
cd self-offchain-rust-starter
# 2. Install worker dependencies.
cd server/worker
npm install
cd ../..
# 3. Install frontend dependencies.
cd client
npm install
cd ..
# 4. Build Rust backend (optional, cargo run will build).
cargo build --manifest-path server/Cargo.toml
Running
# Terminal 1: Start Rust backend.
cd server
cargo run
# Terminal 2: Start ngrok.
ngrok http 3001
# Copy the HTTPS URL and update both .env files.
# Terminal 3: Start frontend.
cd client
npm run dev
Open http://localhost:3000, scan the QR code with the Self mobile app, and watch the magic happen!
Closing Thoughts
Building this taught me that language boundaries don't have to limit your choices. Self's decision to focus on JavaScript/TypeScript SDKs makes total sense; that's where most of their developer community is. But with a little creativity, you can still leverage their excellent SDK from any language.
Would I use this in production? Absolutely. With proper monitoring, error handling, and scaling, this architecture could handle serious verification traffic.
The coolest part? Self’s zero-knowledge proofs mean users can prove they’re real humans without doxxing themselves. No centralized identity database, no honeypot of personal data. Just cryptographic proof of humanity. The Self team has done an incredible job making privacy-preserving identity verification accessible to developers.
If you ever need help or just want to chat, DM me on Twitter / X or LinkedIn.




