What is a demonstration?
A demonstration is the core data collection mechanism inside The Forge: users record themselves completing tasks so that AI can learn from real human interactions. Every demonstration is evaluated by a grading system that determines reward payouts and AI training effectiveness.
This ensures that only high-quality demonstrations improve AI models, while farmers are fairly compensated based on their performance.
How Demonstrations Work
1. Users Record Demonstrations
Farmers perform a task on their computer while the system records every action.
The demonstration captures clicks, keystrokes, UI navigation, and task execution.
The AI observes and processes how humans complete tasks.
📌 Example: A user records a demonstration of sending a Base transaction using the MetaMask wallet: navigating through wallet settings, entering the recipient address, and confirming fees.
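To make the shape of this data concrete, here is a minimal sketch of what a recorded action stream could look like. Everything in it (the `ActionEvent` type, field names, and sample values) is an illustrative assumption, not the platform's actual schema:

```python
# Hypothetical sketch of a recorded action stream. The event types and field
# names are illustrative assumptions, not The Forge's actual schema.
from dataclasses import dataclass

@dataclass
class ActionEvent:
    timestamp_ms: int   # milliseconds since the recording started
    event_type: str     # e.g. "click", "keystroke", "navigate"
    target: str         # the UI element or screen the action touched
    value: str = ""     # typed text, if any

# A few events from the MetaMask example above:
recording = [
    ActionEvent(0,     "click",     "MetaMask extension icon"),
    ActionEvent(2300,  "click",     "Send button"),
    ActionEvent(5100,  "keystroke", "Recipient address field", "0xAbC...123"),
    ActionEvent(9800,  "click",     "Review gas fees"),
    ActionEvent(12400, "click",     "Confirm transaction"),
]
print(f"{len(recording)} actions captured")
```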
2. Processing & Structuring Data for AI Training
After recording, users process their demonstration to structure it into clear, repeatable steps AI can learn from.
AI models analyze workflow sequences, decision-making logic, and UI interactions, allowing them to mimic human behavior efficiently.
Farmers can review their submission to ensure it's accurate and useful.
📌 Example: The system learns to break the Base transaction prompt into structured steps like "Open MetaMask Wallet", "Enter Recipient Address", "Review Gas Fees", and "Confirm Transaction".
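As a rough illustration of that structuring step, the sketch below groups a raw action stream into the named steps from the example. The grouping boundaries and labels are assumptions for demonstration, not the actual processing pipeline:

```python
# Hypothetical sketch of processing: grouping a raw action stream into named,
# repeatable steps. The labels follow the Base transaction example above.
raw_events = [
    ("click",     "MetaMask extension icon"),
    ("click",     "Send button"),
    ("keystroke", "Recipient address field"),
    ("click",     "Review gas fees"),
    ("click",     "Confirm transaction"),
]

structured_steps = [
    {"step": "Open MetaMask Wallet",    "events": raw_events[0:1]},
    {"step": "Enter Recipient Address", "events": raw_events[1:3]},
    {"step": "Review Gas Fees",         "events": raw_events[3:4]},
    {"step": "Confirm Transaction",     "events": raw_events[4:5]},
]

for i, step in enumerate(structured_steps, start=1):
    print(f"Step {i}: {step['step']} ({len(step['events'])} events)")
```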
3. Submission & Quality Review
Once processed, users upload their demonstration for AI training.
Each submission is evaluated by the CLONES data quality agent, which grades it based on clarity, accuracy, and effectiveness.
The higher the quality, the better the AI learns and the greater the reward for the contributor.
📌 Example: A well-structured Solana transaction demo receives an 85% quality rating, qualifying for near-max rewards.
Grading System: How Demonstrations Are Scored
Each uploaded demonstration is scored by an AI-powered data quality agent. The grading process evaluates the submission across multiple dimensions:
1. Clarity & Step-by-Step Execution (40%)
Are the actions performed in a clear, structured, and repeatable way?
Does the demonstration include all necessary steps without skipping any?
Is the recording free of unnecessary delays or misclicks?
📌 Example: A contributor records a clear, step-by-step demonstration of sending crypto without extra delays → High Score
2. Accuracy & Task Completion (30%)
Did the user correctly complete the task from start to finish?
Is the workflow accurate and applicable to real-world use?
Are errors corrected quickly without affecting the AI's ability to learn?
📌 Example: A user enters a wrong wallet address but fixes it immediately and completes the transaction successfully → Good Score, vs. a user who submits a demonstration with missing steps, like forgetting to confirm a transaction → Low Score
3. AI Training Usefulness (20%)
Is this demonstration generalizable so AI can apply it to different cases?
Does it help the AI recognize patterns in human decision-making?
Is it a new, valuable contribution, or a duplicate of an existing submission?
📌 Example: A unique demonstration of interacting with a complex UI workflow → High Score, vs. a duplicate of an existing task without meaningful variation → Low Score
4. Efficiency & Flow (10%)
Was the demonstration efficiently completed without unnecessary delays?
Did the user execute the task smoothly without excessive hesitations?
Was the workflow consistent and optimized for AI learning?
📌 Example: A user executes a workflow quickly and effectively without mistakes → High Score, vs. a user who takes too long or has inconsistent actions, making it hard for the AI to learn → Low Score
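Putting the four dimensions together: here is a minimal sketch of how the weighted score could be computed. The weights come from this page; the per-dimension inputs and the function name are illustrative assumptions:

```python
# Sketch of combining the four graded dimensions into one quality score.
# The weights come from the section above; everything else is illustrative.
WEIGHTS = {
    "clarity":    0.40,  # Clarity & Step-by-Step Execution
    "accuracy":   0.30,  # Accuracy & Task Completion
    "usefulness": 0.20,  # AI Training Usefulness
    "efficiency": 0.10,  # Efficiency & Flow
}

def overall_score(scores: dict) -> float:
    """Weighted average of per-dimension scores, each on a 0-100 scale."""
    return sum(WEIGHTS[dim] * scores[dim] for dim in WEIGHTS)

# A demo that is clear and accurate but only moderately novel:
demo = {"clarity": 90, "accuracy": 85, "usefulness": 75, "efficiency": 80}
print(round(overall_score(demo), 1))
# -> 84.5, which lands in the 80-89% reward tier below
```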
Reward Payouts Based on Grading
Quality Score | Reward Payout | Notes
90–100% | 100% | Perfect execution, maximally useful for AI training
80–89% | 85% | High quality, minor inefficiencies or small errors
70–79% | 70% | Good submission, may need slight improvements
50–69% | 50% | Basic level, needs optimization
Below 50% | 0% (refunded) | Poor quality, fully refunded to the training pool
📌 Examples:
85% Score: Task valued at $0.20 → Farmer receives $0.153 (after 10% platform fee), $0.03 refunded to pool
40% Score: No payout → Full $0.20 returned to pool for future high-quality submissions
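A short sketch of those payout rules, reproducing both examples. The tier boundaries and the 10% platform fee come from this page; the function names and exact rounding are illustrative:

```python
# Sketch of the payout table above. Tier boundaries and the 10% platform fee
# are taken from this page; the function names are illustrative.
def payout_rate(score: float) -> float:
    """Map a quality score (0-100) to the reward payout fraction."""
    if score >= 90: return 1.00
    if score >= 80: return 0.85
    if score >= 70: return 0.70
    if score >= 50: return 0.50
    return 0.0  # below 50%: no payout, full value refunded to the pool

def settle(task_value: float, score: float, platform_fee: float = 0.10):
    gross  = task_value * payout_rate(score)  # portion of the task value earned
    farmer = gross * (1 - platform_fee)       # fee is taken from the gross payout
    refund = task_value - gross               # unearned portion returns to the pool
    return round(farmer, 4), round(refund, 4)

print(settle(0.20, 85))  # -> (0.153, 0.03): farmer gets $0.153, $0.03 to the pool
print(settle(0.20, 40))  # -> (0.0, 0.2):    full $0.20 returned to the pool
```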
Dynamic Quality Incentives
Automatic Pool Efficiency
Unused funds from low-quality submissions are returned to the Factory pool
Ensures AI only learns from high-quality data
Incentivizes farmers to submit clear, structured, and useful demonstrations
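A toy illustration of that flow, with hypothetical pool numbers: each task reserves its value from the pool, and any unearned portion flows straight back.

```python
# Toy illustration of pool efficiency: each task reserves its value, and the
# unearned portion flows back into the Factory pool. Numbers are hypothetical.
pool = 10.00                   # remaining Factory pool (USD)
task_value = 0.20
refunds = [0.00, 0.03, 0.20]   # per-task refunds for 95%, 85%, and 40% scores

for refund in refunds:
    pool = pool - task_value + refund

print(round(pool, 2))  # -> 9.63: only quality-earned payouts left the pool
```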
Market-Based Optimization
Factory Creators can adjust reward structures to optimize for quality and speed:
Need more high-quality submissions? → Increase rewards ($0.20 → $0.30 per demo)
Too many low-quality attempts? → Maintain or lower rates to encourage quality focus
Market dynamics automatically balance quality, speed, and cost
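One hedged sketch of an adjustment rule a Factory Creator might apply; the 30% threshold and 1.5x bump are invented for illustration, not a documented platform mechanism:

```python
# Hypothetical rate-adjustment heuristic for a Factory Creator: raise the
# per-demo reward when high-quality submissions are scarce, otherwise hold.
# The 30% threshold and 1.5x multiplier are invented for illustration.
def next_reward(current: float, recent_scores: list) -> float:
    high_quality = sum(s >= 80 for s in recent_scores) / len(recent_scores)
    if high_quality < 0.30:
        return round(current * 1.5, 2)  # e.g. $0.20 -> $0.30 to attract quality
    return current                      # keep (or lower) to discourage spam

print(next_reward(0.20, [85, 40, 55, 62, 45]))  # -> 0.3
```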
Reputation Building
Farmers build quality scores over time
High-reputation farmers unlock access to premium, high-paying Factories
Consistent quality leads to exclusive opportunities and bonus rewards
The Result → A self-improving system where quality is rewarded, poor submissions are filtered out, and farmers are incentivized to deliver their best work for optimal AI training effectiveness.
"Quality demonstrations today become the AI capabilities of tomorrow & farmers capture value from both"