# What is a demonstration

A demonstration is the **core data collection mechanism** inside The Forge: users record themselves completing tasks so that AI can learn from **real human interactions**. Every demonstration is evaluated by **a grading system** that determines **reward payouts and AI training effectiveness**.

This ensures that **only high-quality demonstrations improve AI models**, while farmers are fairly **compensated** based on their performance.

***

## **How Demonstrations Work** <a href="#how-demonstrations-work" id="how-demonstrations-work"></a>

#### 1. **Users Record Demonstrations** <a href="#id-1.-users-record-demonstrations" id="id-1.-users-record-demonstrations"></a>

* Farmers **perform a task on their computer** while the system records every action.
* The demonstration captures **clicks, keystrokes, UI navigation, and task execution**.
* The AI **observes and processes** how humans complete tasks.

📌 **Example:** A user records a demonstration of sending a Base transaction using Metamask Wallet, navigating through wallet settings, entering recipient addresses, and confirming fees.

#### 2. **Processing & Structuring Data for AI Training** <a href="#id-2.-processing-and-structuring-data-for-ai-training" id="id-2.-processing-and-structuring-data-for-ai-training"></a>

* After recording, users **process their demonstration** to structure it into **clear, repeatable steps** AI can learn from.
* AI models analyze **workflow sequences, decision-making logic, and UI interactions**, allowing them to **mimic human behavior efficiently**.
* Farmers can **review their submission** to ensure it’s accurate and useful.

📌 **Example:** The system learns to break the Base transaction prompt into structured steps like **“Open Metamask Wallet”, “Enter Recipient Address”, “Review Gas Fees” & “Confirm Transaction”.**
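The structuring step above can be sketched in code. This is a minimal, hypothetical illustration, not the platform's actual schema: the `RecordedAction`/`Step` shapes and field names are assumptions chosen to mirror the Base transaction example.

```python
from dataclasses import dataclass

# Hypothetical data shapes -- field names are illustrative,
# not The Forge's real recording format.
@dataclass
class RecordedAction:
    kind: str    # e.g. "click" or "keystroke"
    target: str  # the UI element acted on

@dataclass
class Step:
    name: str
    actions: list

def structure_demo(actions, boundaries):
    """Group a flat recording into named, repeatable steps.

    `boundaries` maps each step name to the UI targets belonging to it.
    """
    return [
        Step(name, [a for a in actions if a.target in targets])
        for name, targets in boundaries.items()
    ]

# A raw recording of the Base transaction demonstration:
recording = [
    RecordedAction("click", "metamask_icon"),
    RecordedAction("keystroke", "recipient_field"),
    RecordedAction("click", "gas_fee_review"),
    RecordedAction("click", "confirm_button"),
]

steps = structure_demo(recording, {
    "Open Metamask Wallet": {"metamask_icon"},
    "Enter Recipient Address": {"recipient_field"},
    "Review Gas Fees": {"gas_fee_review"},
    "Confirm Transaction": {"confirm_button"},
})
print([s.name for s in steps])
```

The result is the same four structured steps named in the example, each carrying the raw actions the AI can learn from.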

#### 3. **Submission & Quality Review** <a href="#id-3.-submission-and-quality-review" id="id-3.-submission-and-quality-review"></a>

* Once processed, users **upload their demonstration** for AI training.
* Each submission is evaluated by the **CLONES data quality agent**, which grades it based on **clarity, accuracy, and effectiveness**.
* The **higher the quality, the better the AI learns and the greater the reward for the contributor**.

📌 **Example:** A well-structured Base transaction demo receives an **85% quality rating**, qualifying for near-max rewards.

***

## **Grading System: How Demonstrations Are Scored** <a href="#grading-system-how-demonstrations-are-scored" id="grading-system-how-demonstrations-are-scored"></a>

Each uploaded demonstration is **scored by an AI-powered data quality agent**. The grading process evaluates the submission across multiple dimensions:

**1. Clarity & Step-by-Step Execution (40%)**

* Are the actions performed in **a clear, structured, and repeatable** way?
* Does the demonstration include **all necessary steps without skipping any?**
* Is the recording **free of unnecessary delays or misclicks?**

📌 **Example:** A contributor records a **clear**, step-by-step demonstration of sending crypto without extra delays → **High Score**

**2. Accuracy & Task Completion (30%)**

* Did the user **correctly complete the task** from start to finish?
* Is the workflow **accurate and applicable to real-world use?**
* Are **errors corrected quickly** without affecting the AI’s ability to learn?

📌 **Example:** A user enters a **wrong wallet address** but **fixes it immediately and completes the transaction successfully** → **Good Score VS** A user **submits a demonstration with missing steps**, like forgetting to confirm a transaction → **Low Score**

**3. AI Training Usefulness (20%)**

* Is this demonstration **generalizable** so AI can apply it to different cases?
* Does it help the AI **recognize patterns in human decision-making?**
* Is it a **new, valuable contribution**, or a duplicate of an existing submission?

📌 **Example:** A **unique demonstration** of interacting with a complex UI workflow → **High Score VS** \
A **duplicate of an existing task without meaningful variation** → **Low Score**

**4. Efficiency & Flow (10%)**

* Was the demonstration **efficiently completed** without unnecessary delays?
* Did the user **execute the task smoothly** without excessive hesitations?
* Was the workflow **consistent and optimized** for AI learning?

📌 **Example:** A user **executes a workflow quickly and effectively** without mistakes → **High Score VS** A user **takes too long or has inconsistent actions**, making it hard for AI to learn → **Low Score**

***

## **Reward Payouts Based on Grading**

| **Quality Score** | **Reward Payout** | **Notes**                                          |
| ----------------- | ----------------- | -------------------------------------------------- |
| 90–100%           | 100%              | Perfect execution, maximally useful AI training    |
| 80–89%            | 85%               | High quality, minor inefficiencies or small errors |
| 70–79%            | 70%               | Good submission, may need slight improvements      |
| 50–69%            | 50%               | Basic level, needs optimization                    |
| Below 50%         | 0% (Refunded)     | Poor quality, fully refunded to the training pool  |

#### 📌 Examples:

* **85% Score:** Task valued at $0.20 → Farmer receives **$0.153** (after 10% platform fee), $0.03 refunded to pool
* **40% Score:** No payout → Full $0.20 returned to pool for future high-quality submissions
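The payout table and the two worked examples above can be reproduced with a short sketch. The bracket thresholds and the 10% platform fee come from this page; the function name and rounding are illustrative assumptions.

```python
# Payout brackets from the table above: (minimum score, payout fraction).
BRACKETS = [(90, 1.00), (80, 0.85), (70, 0.70), (50, 0.50)]
PLATFORM_FEE = 0.10  # 10% platform fee, as in the worked example

def payout(task_value: float, quality_score: float):
    """Return (farmer_payout, pool_refund) for a graded demonstration."""
    fraction = 0.0  # below 50%: no payout, full refund to the pool
    for min_score, frac in BRACKETS:
        if quality_score >= min_score:
            fraction = frac
            break
    gross = task_value * fraction
    farmer = gross * (1 - PLATFORM_FEE)
    refund = task_value - gross
    return round(farmer, 4), round(refund, 4)

print(payout(0.20, 85))  # 85% score: ($0.153 to farmer, $0.03 to pool)
print(payout(0.20, 40))  # 40% score: ($0.0 to farmer, full $0.20 to pool)
```

This matches the examples: an 85% score falls in the 80–89% bracket (85% payout), so a $0.20 task yields $0.17 gross, $0.153 after the fee, with $0.03 refunded to the pool.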

***

## Dynamic Quality Incentives

#### **Automatic Pool Efficiency**

* **Unused funds** from low-quality submissions are returned to the Factory pool
* Ensures AI only learns from **high-quality data**
* Incentivizes farmers to submit **clear, structured, and useful demonstrations**

#### **Market-Based Optimization**

Factory Creators can adjust reward structures to optimize for quality and speed:

* **Need more high-quality submissions?** → Increase rewards ($0.20 → $0.30 per demo)
* **Too many low-quality attempts?** → Maintain or lower rates to encourage quality focus
* **Market dynamics** automatically balance quality, speed, and cost

#### **Reputation Building**

* Farmers build **quality scores** over time
* **High-reputation farmers** unlock access to premium, high-paying Factories
* **Consistent quality** leads to exclusive opportunities and bonus rewards

**The Result ⇒** A self-improving system where **quality is rewarded**, **poor submissions are filtered out**, and **farmers are incentivized to deliver their best work** for optimal AI training effectiveness.

> *"Quality demonstrations today become the AI capabilities of tomorrow & farmers capture value from both"*


