On-Chain Model Governance: Auditing AI Decisions

1. Introduction: The Crisis of Computational Trust

The integration of Artificial Intelligence (AI) into the foundational strata of the global economy has precipitated a governance crisis of unprecedented scale. As algorithmic decision-making systems increasingly mediate high-stakes outcomes—ranging from creditworthiness assessments and medical diagnostics to autonomous financial trading and judicial sentencing—the opacity of these systems has become a systemic risk. We currently operate in a “Black Box” paradigm where AI is consumed as a trusted service provided by a centralized oligopoly. Stakeholders, regulators, and downstream users interact with these systems through opaque Application Programming Interfaces (APIs), receiving probabilistic outputs without cryptographic assurance regarding the model’s provenance, the integrity of the inference process, or the privacy of the input data.1

This centralization creates a profound “trust gap.” In traditional information technology audits, verification focuses on static codebases and financial statements—deterministic artifacts that can be reviewed retrospectively. However, AI systems are dynamic, stochastic, and often non-interpretable. They drift over time, are susceptible to adversarial perturbations, and their decision-making logic—embedded within billions of parameters—defies manual inspection.1 Consequently, the prevailing “trust us” model employed by major AI labs is insufficient for the next generation of critical infrastructure, particularly in the Web3 domain where trustlessness is a core tenet.

On-Chain Model Governance emerges as the necessary technological response to this deficit. By anchoring the governance logic, audit trails, and verification mechanisms on distributed ledgers, this paradigm seeks to transform AI auditing from a subjective, periodic human process into an objective, continuous cryptographic protocol.3 This report provides an exhaustive analysis of the technological pillars, economic incentives, and regulatory frameworks defining this transition. We explore how Zero-Knowledge Machine Learning (zkML), Optimistic Machine Learning (opML), Trusted Execution Environments (TEEs), and Cryptoeconomic Consensus are being synthesized to create a verifiable AI supply chain. Furthermore, we examine the rise of the Initial Model Offering (IMO) and the complex interplay between immutable audit logs and emerging legislation such as the EU AI Act and the US GENIUS Act.

1.1 The Auditing Deficit in the Age of Generative AI

The fundamental challenge in auditing modern AI lies in the disconnect between the model’s training phase and its inference phase. A model may be trained on compliant, ethical datasets, yet be secretly swapped for a cheaper, less robust model during inference to save computational costs. Alternatively, a model may be subject to “weight poisoning” or subtle biases that are invisible to the end-user interacting with an API.4 Traditional auditing frameworks, such as the NIST AI Risk Management Framework or ISO 42001, provide guidelines for governance but lack the technical enforcement mechanisms to guarantee adherence in real-time.5

Internal audit teams are urged to act as “AI catalysts,” embedding assurance early in the deployment process.5 However, without technical tools to trace the lineage of a decision back to the specific model version and input data, these audits remain superficial. The “Shadow AI” phenomenon—where unregulated AI tools are adopted across an organization without oversight—further exacerbates this, creating a fragmented landscape of unmonitored algorithmic risk.6 On-chain governance proposes a radical transparency: recording the hash of the model architecture, the cryptographic commitment of the training dataset, and the validity proof of every inference on an immutable public ledger. This ensures that the “digital record offers insight into the framework behind AI and the provenance of the data,” effectively bridging the gap between high-level policy and low-level execution.3

2. Foundations of On-Chain AI Assurance

To understand the architecture of on-chain governance, one must first dissect the principles of AI assurance and how they map to blockchain primitives. The convergence of these fields is not merely about storage; it is about encoding the “Five Pillars of AI Assurance”—transparency, fairness, privacy, reliability, and accountability—into smart contracts and cryptographic proofs.5

2.1 Transparency and Immutable Provenance

Transparency in an on-chain context transcends open-source code. It necessitates the creation of a tamper-proof “Chain of Thought” for AI systems. Organizations like IBM and Palo Alto Networks emphasize that transparency allows stakeholders to evaluate system designs and decision-making processes.1 In a decentralized setting, this is achieved by hashing the model weights and storing them (or a commitment to them) on-chain. When an inference is requested, the system generates a proof that the output was derived from that specific hashed model state. This eliminates the “bait-and-switch” attack vector where a provider claims to use a sophisticated model like GPT-4 but serves requests using a cheaper, inferior model.8
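As a minimal illustration of this commitment pattern, the sketch below hashes serialized weights together with an architecture description and checks served weights against the registered digest. The registry dict, function names, and toy byte strings are hypothetical stand-ins for an actual on-chain registry contract.

```python
import hashlib
import json

def model_commitment(weights_bytes: bytes, arch_spec: dict) -> str:
    """Hash the serialized weights together with the architecture spec.
    The digest is what a governance contract would store on-chain."""
    h = hashlib.sha256()
    h.update(weights_bytes)
    h.update(json.dumps(arch_spec, sort_keys=True).encode())
    return h.hexdigest()

# Hypothetical on-chain registry, represented here as a plain dict.
REGISTRY = {}

def register_model(model_id: str, weights: bytes, arch: dict) -> None:
    REGISTRY[model_id] = model_commitment(weights, arch)

def verify_served_model(model_id: str, served_weights: bytes, arch: dict) -> bool:
    """Detect a bait-and-switch: served weights must hash to the commitment."""
    return REGISTRY.get(model_id) == model_commitment(served_weights, arch)

if __name__ == "__main__":
    good = b"\x01\x02\x03"                  # stand-in for real weight bytes
    arch = {"layers": 12, "hidden": 768}
    register_model("gpt-ish-v1", good, arch)
    assert verify_served_model("gpt-ish-v1", good, arch)
    assert not verify_served_model("gpt-ish-v1", b"cheaper-model", arch)
```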

Furthermore, transparency extends to the data supply chain. A “poisoned” dataset can corrupt a model’s behavior in subtle ways that defy output analysis.4 On-chain governance requires Data Provenance, where training datasets are cryptographically signed and their usage tracked. This allows auditors to verify that a model was not trained on copyrighted or sensitive data without authorization, addressing the “trust problem surrounding AI data”.9

2.2 Fairness and Algorithmic Bias

Fairness ensures that AI systems do not propagate historical biases or discriminate against protected groups. Traditional fairness audits involve running test datasets through a model and analyzing the statistical distribution of outcomes.7 On-chain governance automates this via Algorithmic Auditing Contracts. These smart contracts can be programmed to periodically challenge a deployed model with a “fairness benchmark dataset.” If the model’s outputs deviate from established fairness metrics (e.g., demographic parity), the contract can automatically trigger a circuit breaker, pausing the model or slashing the stake of the model provider.1 This moves fairness enforcement from a reactive legal process to a proactive cryptoeconomic one.
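The check such a contract would run is simple enough to express off-chain in a few lines. The sketch below computes a demographic-parity gap over a benchmark's outcomes and returns a pause signal when the gap exceeds a threshold; the threshold value, function names, and toy data are illustrative assumptions, not a production fairness suite.

```python
from collections import defaultdict

def demographic_parity_gap(outcomes):
    """outcomes: iterable of (group_label, decision) pairs, decision in {0, 1}.
    Returns the max difference in positive-decision rates across groups."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, decision in outcomes:
        totals[group] += 1
        positives[group] += decision
    rates = [positives[g] / totals[g] for g in totals]
    return max(rates) - min(rates)

PARITY_THRESHOLD = 0.10  # hypothetical policy parameter set by the DAO

def fairness_check(outcomes) -> str:
    """Returns the action an auditing contract might take."""
    gap = demographic_parity_gap(outcomes)
    return "PAUSE_MODEL" if gap > PARITY_THRESHOLD else "OK"

if __name__ == "__main__":
    benchmark = [("A", 1), ("A", 1), ("A", 0), ("B", 0), ("B", 0), ("B", 1)]
    print(fairness_check(benchmark))  # gap = 2/3 - 1/3 = 0.33 -> PAUSE_MODEL
```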

2.3 Accountability and Agentic Liability

As AI systems evolve from passive tools to active agents capable of autonomous financial transactions (Agentic AI), accountability becomes the linchpin of governance. Who is responsible when an AI agent liquidates a treasury or executes a disastrous trade?10 On-chain governance introduces the concept of Identity and Access Governance (IAG) for non-human actors. AI agents are assigned “Soulbound Tokens” or on-chain identities that track their reputation, transaction history, and liability insurance.11

Auditors are currently grappling with the “ephemeral identity” problem, where AI agents spin up temporary accounts to execute tasks and then vanish, leaving no audit trail.10 By enforcing that all agentic actions are signed by a registered on-chain identity, organizations can ensure traceable accountability. If an agent violates a policy (e.g., referencing prohibited data or exceeding risk limits), the immutable record provides the evidence necessary for dispute resolution or slashing penalties.11
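A minimal sketch of this signing discipline, using Ed25519 via the `cryptography` package, follows. The registry mapping, agent ID, and action fields are hypothetical; a real deployment would anchor the public key and the signed records on-chain rather than in Python dicts.

```python
import json
import time
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Hypothetical identity registry: agent_id -> registered public key.
agent_key = Ed25519PrivateKey.generate()
IDENTITY_REGISTRY = {"agent-42": agent_key.public_key()}

def sign_action(agent_id: str, key: Ed25519PrivateKey, action: dict) -> dict:
    """Every agent action is signed under its registered identity, producing
    a record a ledger could store for later audits."""
    payload = json.dumps({"agent": agent_id, "ts": time.time(), **action},
                         sort_keys=True).encode()
    return {"payload": payload, "sig": key.sign(payload)}

def audit_action(agent_id: str, record: dict) -> bool:
    """Accept the action only if it verifies against the registered key."""
    try:
        IDENTITY_REGISTRY[agent_id].verify(record["sig"], record["payload"])
        return True
    except (KeyError, InvalidSignature):
        return False

record = sign_action("agent-42", agent_key, {"op": "swap", "amount": 100})
assert audit_action("agent-42", record)
```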

2.4 Privacy in a Transparent World

The most significant tension in on-chain governance is the “Privacy-Transparency Paradox.” Blockchains are inherently transparent, designed to broadcast transaction data to all nodes. Conversely, AI models often rely on proprietary intellectual property (weights) and sensitive private data (inputs).12 Reconciling these opposing requirements forces the adoption of advanced cryptographic techniques. The goal is to verify the correctness of the computation without revealing the content of the data or the structure of the model. This necessity has driven the rapid maturation of Zero-Knowledge Machine Learning (zkML) and Trusted Execution Environments (TEEs), which serve as the technical bedrock for privacy-preserving audits.13

3. The Technological Pillars of Verification

The central bottleneck for on-chain AI governance is computational cost. Ethereum and similar blockchains are severely constrained environments; a simple matrix multiplication of 1000×1000 integers would consume approximately 3 billion gas, far exceeding the block gas limit and making native on-chain inference economically impossible.15 Consequently, the industry has coalesced around three primary scaling solutions that move computation off-chain while anchoring verification on-chain: zkML, opML, and TEEs.
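The order of magnitude is easy to reproduce. The back-of-envelope below rests on our own assumptions: an n x n integer matmul needs n^3 multiply-accumulate steps, charged at an optimistic floor of 3 gas per arithmetic op (EVM ADD costs 3 gas, MUL costs 5, and memory traffic only makes things worse).

```python
# Back-of-envelope for the gas figure cited above (assumptions are ours).
n = 1000
ops = n ** 3                          # 1e9 multiply-accumulate steps
gas = ops * 3                         # ~3 billion gas at a 3 gas/op floor
block_gas_limit = 30_000_000          # Ethereum mainnet ballpark
print(f"{gas:,} gas = {gas // block_gas_limit}x the block gas limit")
```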

3.1 Zero-Knowledge Machine Learning (zkML)

zkML represents the “Holy Grail” of verifiable compute. It utilizes Zero-Knowledge Proofs (ZKPs) to allow a “prover” (the model host) to demonstrate to a “verifier” (the smart contract) that a specific output was generated by a specific model using specific input, without revealing the underlying data or model parameters.13

3.1.1 Mechanisms: Circuitizing Intelligence

The core workflow of zkML involves transpiling a machine learning model (typically in ONNX format) into an arithmetic circuit—a representation composed of addition and multiplication gates compatible with ZK proving systems.

  • Frameworks: Leading the charge is EZKL, a library that converts ONNX files into zk-SNARK circuits using the Halo2 proving system.16 Halo2 is particularly suited for this because it supports “lookup arguments,” which optimize the proving of non-linear operations (like ReLU activations) that are computationally expensive in traditional R1CS circuits.13 (A minimal usage sketch follows this list.)
  • The Witness: During inference, the system generates a “witness”—a comprehensive trace of all intermediate values in the neural network. The ZK prover then uses this witness to generate a succinct proof. This proof is tiny (kilobytes) and can be verified by a smart contract in milliseconds, regardless of the complexity of the original computation.17
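Putting these pieces together, the sketch below walks the standard EZKL pipeline via its Python bindings. Exact function names and argument orders have shifted across ezkl releases (recent versions expose some calls as async coroutines), so treat this as an illustration of the flow rather than a pinned API; all file paths are placeholders.

```python
# Illustrative EZKL pipeline; consult the ezkl docs for your installed version.
import ezkl

MODEL, DATA = "network.onnx", "input.json"            # placeholder paths
SETTINGS, COMPILED = "settings.json", "network.ezkl"
VK, PK, WITNESS, PROOF = "vk.key", "pk.key", "witness.json", "proof.json"

ezkl.gen_settings(MODEL, SETTINGS)                    # derive circuit parameters from the ONNX graph
ezkl.calibrate_settings(DATA, MODEL, SETTINGS, "resources")  # tune quantization scales
ezkl.compile_circuit(MODEL, COMPILED, SETTINGS)       # transpile to a Halo2 arithmetic circuit
ezkl.get_srs(SETTINGS)                                # fetch the structured reference string
ezkl.setup(COMPILED, VK, PK)                          # generate proving and verifying keys

ezkl.gen_witness(DATA, COMPILED, WITNESS)             # run inference, record every intermediate value
ezkl.prove(WITNESS, COMPILED, PK, PROOF, "single")    # produce the succinct zk-SNARK proof
assert ezkl.verify(PROOF, SETTINGS, VK)               # cheap check; on-chain this is a generated verifier contract
```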

3.1.2 Performance Bottlenecks and Benchmarks

Despite its theoretical elegance, zkML faces severe practicality hurdles regarding proving time and memory usage.

  • Proving Overhead: Generating a proof is computationally intensive. Benchmarks indicate that proving a model can take 1,000 times longer than running the inference natively.15 For instance, generating a proof for a simple ResNet or nanoGPT model might take nearly 80 minutes on standard hardware, whereas the inference itself takes milliseconds.18
  • Memory Consumption: The memory required to generate proofs scales poorly with model size. Proving a 7-billion parameter model (like LLaMA-7B) via zkML would require Terabytes (TB) or even Petabytes (PB) of RAM to hold the circuit and witness data, rendering it infeasible on current hardware.15
  • Benchmarks: In comparative tests, EZKL has been shown to be significantly faster than competitors like RiscZero (65x faster proving time) and Orion (2.9x faster), while consuming 98% less memory than RiscZero.19 However, even with these optimizations, zkML is currently restricted to small models (e.g., Random Forests, small CNNs) or specific high-value, low-latency logic like biometric verification (e.g., Worldcoin).20

3.1.3 Hardware Acceleration

To bridge this gap, protocols are investing heavily in hardware acceleration. Offloading Multi-Scalar Multiplication (MSM) and Number Theoretic Transforms (NTT)—the heavy lifting of ZK proving—to GPUs has been shown to reduce MSM times by 98% and total proof times by 35%.21 Projects like Ingonyama and Cysic are developing ASICs specifically for ZK proving, which may eventually make real-time zkML feasible for larger models.
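For intuition about what is being accelerated, the sketch below shows the structure of an MSM (a sum of scalar-point products) and the Pippenger-style bucketing that GPUs parallelize. Plain integers stand in for elliptic-curve points, since group addition is the only operation needed; this illustrates the algorithm's shape, not a cryptographic implementation.

```python
import random

def naive_msm(scalars, points):
    """MSM: sum(s_i * P_i) over a group; here the group is plain integers."""
    return sum(s * p for s, p in zip(scalars, points))

def bucketed_msm(scalars, points, window=8):
    """Pippenger's trick: process scalars one window of bits at a time,
    sharing group additions across all terms via per-window buckets."""
    result, max_bits = 0, max(s.bit_length() for s in scalars)
    for shift in reversed(range(0, max_bits, window)):
        buckets = [0] * (1 << window)
        for s, p in zip(scalars, points):
            buckets[(s >> shift) & ((1 << window) - 1)] += p
        # bucket b contributes b * buckets[b]; the running-sum trick
        # computes that with additions only, no multiplications
        acc = running = 0
        for b in reversed(range(1, len(buckets))):
            running += buckets[b]
            acc += running
        result = (result << window) + acc   # integer stand-in for window doublings
    return result

scalars = [random.getrandbits(64) for _ in range(1000)]
points = [random.getrandbits(32) for _ in range(1000)]
assert naive_msm(scalars, points) == bucketed_msm(scalars, points)
```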

3.2 Optimistic Machine Learning (opML)

Recognizing the physical limits of zkML, ORA (formerly Hyper Oracle) introduced opML. This approach applies the “Optimistic Rollup” philosophy to AI inference. It assumes that the result posted to the blockchain is correct unless proven otherwise.20

3.2.1 The Interactive Dispute Game

In opML, the heavy ML inference occurs off-chain on standard hardware (e.g., a GPU). The result is committed to the blockchain. A “Challenge Period” then begins, during which “validators” or “watchers” can verify the result off-chain. If they detect a discrepancy, they can initiate a dispute.22

  • Bisection Protocol: The dispute resolution mechanism does not re-run the entire model on-chain. Instead, it uses a bisection protocol (similar to Arbitrum or Truebit) to narrow down the dispute to a single computation step (a single opcode), as sketched in the toy dispute game after this list.
  • Fraud Proof Virtual Machine (FPVM): Only this single disputed step is executed on-chain via the FPVM (e.g., ORA’s implementation based on MIPS architecture). The smart contract compares the on-chain execution of that single step with the submitter’s claim. If the submitter is wrong, they are slashed.22
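The following toy dispute game captures the mechanics: a deterministic step function, a bisection search for the first divergent state, and a final "on-chain" check that re-executes only that one step. The function names and the toy state transition are our own illustrative choices, not ORA's implementation.

```python
def run_trace(state_fn, state0, n_steps):
    """Execute n_steps of a deterministic transition, keeping every state."""
    states = [state0]
    for _ in range(n_steps):
        states.append(state_fn(states[-1]))
    return states

def bisect_dispute(honest, claimed):
    """Binary-search the first index where two state traces diverge."""
    lo, hi = 0, len(honest) - 1          # traces agree at lo, differ at hi
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if honest[mid] == claimed[mid]:
            lo = mid
        else:
            hi = mid
    return hi                             # the single disputed step

def onchain_adjudicate(state_fn, claimed, step):
    """The contract re-executes just the disputed step."""
    return state_fn(claimed[step - 1]) == claimed[step]

step_fn = lambda s: (s * 31 + 7) % 1_000_003   # stand-in for one VM opcode
honest = run_trace(step_fn, 42, 1_000)
cheat = list(honest)
cheat[700] += 1                                # submitter lies at step 700...
for i in range(701, len(cheat)):               # ...and recomputes forward from the lie
    cheat[i] = step_fn(cheat[i - 1])

disputed = bisect_dispute(honest, cheat)
assert disputed == 700
assert not onchain_adjudicate(step_fn, cheat, disputed)  # submitter is slashed
```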

3.2.2 Economics and Scalability

opML decouples the cost of verification from the complexity of the model.

  • Cost: Because on-chain computation is only triggered during a dispute (which is rare in a functioning game-theoretic system), the gas cost is negligible compared to zkML.
  • Scalability: opML can support models of any size, including massive LLMs like Grok (314B parameters) or LLaMA-3. It runs on standard GPUs and does not require the massive RAM overhead of circuit generation.23
  • Latency Trade-off: The primary disadvantage is finality latency. Users must wait for the challenge period to expire (e.g., minutes to hours) before the result is considered final.24 However, for many governance and non-HFT finance use cases, this delay is acceptable.

3.3 Trusted Execution Environments (TEEs)

TEEs offer a middle ground, relying on hardware security rather than math (zkML) or game theory (opML). TEEs, such as Intel SGX or AWS Nitro Enclaves, are isolated areas of a processor that guarantee code execution integrity and data confidentiality.14

3.3.1 The Confidential Coprocessor

Protocols like Flashbots SUAVE and Ritual leverage TEEs to act as “AI Coprocessors.”

  • Mechanism: The AI model and encrypted user data are loaded into the secure enclave. The hardware ensures that even the server administrator (or the node operator) cannot view the memory contents or tamper with the execution. The enclave generates a “Remote Attestation”—a digital signature signed by the hardware manufacturer’s key—proving that a specific workload was executed.14 (A toy attestation check is sketched after this list.)
  • Performance: TEEs operate at near-native speeds, making them orders of magnitude faster than zkML and free from the challenge delays of opML.
  • Governance Utility: TEEs are particularly suited for Privacy-Preserving Audits, where an auditor (or a smart contract) needs to verify that a model complies with regulations (e.g., GDPR) without actually seeing the private user data.
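The toy attestation check below collapses the real machinery (certificate chains, CBOR/ASN.1 report formats) into its essential idea: a hardware-held key signs a measurement of the loaded code plus the output, and the verifier checks both. The vendor key, report layout, and function names are simplifications for illustration.

```python
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

vendor_key = Ed25519PrivateKey.generate()     # stands in for the vendor's root of trust
ENCLAVE_CODE = b"model_server_v1.bin"

def enclave_attest(code: bytes, output: bytes) -> dict:
    """What the hardware emits: a signed (code-measurement, output-hash) report."""
    report = hashlib.sha256(code).digest() + hashlib.sha256(output).digest()
    return {"report": report, "sig": vendor_key.sign(report)}

def verify_attestation(att: dict, expected_code: bytes, output: bytes) -> bool:
    expected = hashlib.sha256(expected_code).digest() + hashlib.sha256(output).digest()
    if att["report"] != expected:
        return False                           # wrong code or tampered output
    try:
        vendor_key.public_key().verify(att["sig"], att["report"])
        return True
    except InvalidSignature:
        return False

att = enclave_attest(ENCLAVE_CODE, b"inference-result")
assert verify_attestation(att, ENCLAVE_CODE, b"inference-result")
assert not verify_attestation(att, b"swapped_model.bin", b"inference-result")
```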

3.3.2 Vulnerabilities

The trust model of TEEs is centralized around the hardware vendor (e.g., Intel). Furthermore, TEEs are susceptible to side-channel attacks (e.g., analyzing power consumption or memory access patterns to infer data). Recent research has focused on mitigating these through “oblivious RAM” and other hardening techniques, but the risk of a hardware compromise (like the Foreshadow or Meltdown vulnerabilities) remains a systemic concern for high-value assets.25

4. Protocol Architectures: The Governance Ecosystem

The theoretical frameworks described above are being operationalized by a new wave of protocols. These platforms are not just running AI; they are creating decentralized markets for intelligence, complete with their own monetary policies, governance structures, and auditing layers.

4.1 Bittensor: The Incentivized Intelligence Market

Bittensor creates a peer-to-peer market for machine intelligence, organized into “subnets” that specialize in different tasks (e.g., text generation, image creation, storage, scraping).26

4.1.1 Yuma Consensus as Auditing

Bittensor’s governance innovation is Yuma Consensus, a mechanism that functions as a decentralized, continuous audit.

  • Miners vs. Validators: “Miners” produce AI outputs (intelligence). “Validators” generate tasks, query miners, and score their responses.
  • The Weight Matrix: Validators assign “weights” to miners based on performance. The Yuma algorithm aggregates these weights to produce a consensus score. Crucially, Yuma rewards validators who align with the majority consensus. If a validator provides scores that deviate significantly from the group (indicating incompetence or collusion), their own reward is slashed.27 (A stylized version of this aggregation is sketched after this list.)
  • Governance Implication: This creates a self-regulating audit system. Validators act as the “auditors” of the network, and the consensus mechanism audits the auditors. This ensures that the definition of “quality” or “intelligence” is determined by the collective stake-weighted agreement of the network rather than a central authority.28
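A stylized version of this aggregation follows: miner consensus scores are taken as stake-weighted medians of the validator weight matrix, and each validator's reward is discounted by its deviation from consensus. This is a simplification for intuition, not the exact Yuma algorithm.

```python
import numpy as np

def weighted_median(values, weights):
    """Median of values where each value carries a stake weight."""
    order = np.argsort(values)
    cum = np.cumsum(np.asarray(weights)[order])
    return np.asarray(values)[order][np.searchsorted(cum, cum[-1] / 2)]

def yuma_round(W, stake):
    """W[v, m] = weight validator v assigns to miner m; stake[v] = validator stake."""
    consensus = np.array([weighted_median(W[:, m], stake) for m in range(W.shape[1])])
    deviation = np.abs(W - consensus).mean(axis=1)     # per-validator deviation
    validator_reward = stake * (1.0 - deviation)       # deviants earn less
    return consensus, validator_reward

# Three honest validators and one colluder trying to pump miner 2.
W = np.array([[0.7, 0.2, 0.1],
              [0.6, 0.3, 0.1],
              [0.7, 0.2, 0.1],
              [0.0, 0.0, 1.0]])
stake = np.array([1.0, 1.0, 1.0, 1.0])
consensus, rewards = yuma_round(W, stake)
print(consensus)   # tracks the honest majority's weights
print(rewards)     # the colluding validator's reward is cut hardest
```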

4.1.2 The Senate and Dynamic TAO

Governance in Bittensor is evolving toward a bicameral system. The Senate, composed of the top 64 validators (by stake), oversees high-level network parameters and the registration of new subnets.29 The recent Dynamic TAO (dTAO) upgrade further democratizes governance by allowing market forces to determine the allocation of emissions to different subnets. If a subnet (e.g., a medical diagnostic subnet) provides high value, the price of its specific “Dynamic Token” rises, attracting more miners and validators, effectively “voting with capital” on which AI models deserve resources.30

4.2 ORA: Tokenizing the Model Lifecycle

ORA focuses on the financialization and governance of specific AI models through Initial Model Offerings (IMOs).

4.2.1 The IMO Mechanism

An IMO allows open-source model developers to monetize their work directly. ORA tokenizes a model (e.g., OpenLM) using the ERC-7641 (Intrinsic RevShare Token) standard.31

  • Mechanism: Investors buy ERC-7641 tokens (e.g., $OLM) to fund the model’s development. In return, they receive a claim on future revenue generated by the model.
  • On-Chain AI Oracle (OAO): When the model is queried via ORA’s OAO (using opML verification), users pay a fee. This fee is automatically routed to the ERC-7641 contract and distributed to token holders.23 (A minimal sketch of this pro-rata split follows this list.)
  • Governance: Token holders form a DAO that governs the model’s parameters, updates, and usage policies. This effectively treats an AI model as a sovereign economic entity managed by its community.
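The revenue-share mechanics reduce to simple pro-rata accounting, sketched below. The class name, fee amounts, and holders are hypothetical; the real ERC-7641 standard is a Solidity contract that must also handle transfers, snapshots, and rounding.

```python
class RevShareToken:
    """Toy analogue of an ERC-7641-style revenue-sharing token."""

    def __init__(self, balances: dict):
        self.balances = dict(balances)          # holder -> token amount
        self.total_supply = sum(balances.values())
        self.fee_pool = 0.0                     # cumulative fees received
        self.claimed = {h: 0.0 for h in balances}

    def receive_inference_fee(self, amount: float):
        """Called when the on-chain AI oracle routes a usage fee here."""
        self.fee_pool += amount

    def claimable(self, holder: str) -> float:
        entitled = self.fee_pool * self.balances[holder] / self.total_supply
        return entitled - self.claimed[holder]

    def claim(self, holder: str) -> float:
        payout = self.claimable(holder)
        self.claimed[holder] += payout
        return payout

olm = RevShareToken({"alice": 600, "bob": 400})
olm.receive_inference_fee(10.0)      # users paid 10 ETH of inference fees
assert abs(olm.claim("alice") - 6.0) < 1e-9
assert abs(olm.claim("bob") - 4.0) < 1e-9
```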

4.2.2 Verifiable Content (ERC-7007)

ORA also utilizes the ERC-7007 standard for AI-Generated Content (AIGC). This standard links an NFT (the content) to a ZK or opML proof (the verification). This creates an unbreakable chain of custody, proving that a specific piece of art or text was generated by a specific model version, addressing the deepfake and copyright provenance issues plaguing the generative AI space.31

4.3 Gensyn: Verifiable Training at Scale

While other protocols focus on inference, Gensyn targets the training phase—the most computationally expensive part of the AI lifecycle. Gensyn aims to unite the world’s idle compute (e.g., post-Merge Ethereum miners) into a global supercomputer.32

4.3.1 Probabilistic Proof-of-Learning

Verifying that a node honestly performed quadrillions of floating-point operations to train a model is non-trivial. Re-running the training to verify it would double the cost. Gensyn solves this with Probabilistic Proof-of-Learning.33

  • Pinpoint Protocol: The verification process involves a “Solver” (worker), a “Verifier,” and a “Whistleblower.” The Verifier does not re-run the whole task. Instead, utilizing gradients and checkpoints, they re-run small, random segments of the computation (a toy version is sketched after this list).
  • Graph-based Verification: By treating the training process as a computation graph, the protocol can pinpoint exactly where a divergence occurred. If a Solver cheats, they are caught with high probability and their stake is slashed.
  • Cost Arbitrage: This “trustless” verification allows Gensyn to offer compute at prices theoretically 80% lower than centralized providers like AWS, which charge high margins for “trusted” infrastructure.34
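A toy version of this spot-checking follows: the Solver posts checkpoints every k steps, the Verifier re-executes a few randomly chosen segments, and any endpoint mismatch pinpoints the fraud. The step function and parameters are our illustrative choices; real training steps are stochastic and need the reproducibility machinery Gensyn describes.

```python
import random

def train_step(state: int) -> int:
    """Deterministic stand-in for one training step."""
    return (state * 1103515245 + 12345) % (2**31)

def solver_run(state0, n_steps, k, cheat_at=None):
    """Return checkpoints every k steps, optionally corrupting one segment."""
    checkpoints, state = [state0], state0
    for step in range(1, n_steps + 1):
        state = train_step(state)
        if cheat_at == step:
            state += 1                        # skipped/faked work
        if step % k == 0:
            checkpoints.append(state)
    return checkpoints

def verify_segments(checkpoints, k, n_samples):
    """Re-run n_samples random segments; True iff all endpoints match."""
    for seg in random.sample(range(len(checkpoints) - 1), n_samples):
        state = checkpoints[seg]
        for _ in range(k):
            state = train_step(state)
        if state != checkpoints[seg + 1]:
            return False                      # divergence pinpointed: slash
    return True

honest = solver_run(7, n_steps=1000, k=50)    # 20 segments
assert verify_segments(honest, 50, n_samples=5)
cheater = solver_run(7, n_steps=1000, k=50, cheat_at=321)
# Sampling 5 of 20 segments catches this single-segment fraud with p = 25%
# per audit round; repeated rounds make sustained cheating uneconomical.
print(verify_segments(cheater, 50, n_samples=5))
```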

4.4 Ritual: The Sovereign AI Execution Layer

Ritual positions itself as the “AI Coprocessor” for blockchains. Its flagship product, Infernet, facilitates the execution of AI models off-chain with results consumed by on-chain smart contracts.2

4.4.1 Heterogeneous Execution

Ritual acknowledges that no single verification method suits all use cases. Its node architecture is heterogeneous, supporting zkML, opML, and TEEs.35

  • Resonance: Ritual introduces a fee market mechanism called Resonance. It acts as a broker, matching user inference requests (with specific budget and security constraints) to nodes capable of fulfilling them. A user requiring high privacy might pay a premium for a TEE node, while a user needing low cost might opt for an opML node.36 (A stylized matching sketch follows this list.)
  • Model Storage: Ritual integrates with decentralized storage solutions (like Arweave or IPFS) but manages the access and execution logic, effectively becoming the orchestration layer for the decentralized AI stack.37
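In the spirit of Resonance (though not Ritual's actual design), the broker below matches each request's security, budget, and latency constraints to the cheapest eligible node. The security ordering, field names, and prices are assumptions for illustration.

```python
from dataclasses import dataclass
from typing import Optional

# Assumed ordering for the sketch; real security preferences are richer.
SECURITY_RANK = {"opml": 1, "tee": 2, "zkml": 3}

@dataclass
class Node:
    name: str
    proof_system: str      # "zkml" | "opml" | "tee"
    price: float           # per-inference fee
    latency_s: float

@dataclass
class Request:
    min_security: str
    max_price: float
    max_latency_s: float

def match(req: Request, nodes: list[Node]) -> Optional[Node]:
    """Cheapest node meeting the request's security, price, and latency bounds."""
    eligible = [n for n in nodes
                if SECURITY_RANK[n.proof_system] >= SECURITY_RANK[req.min_security]
                and n.price <= req.max_price
                and n.latency_s <= req.max_latency_s]
    return min(eligible, key=lambda n: n.price, default=None)

nodes = [Node("gpu-opml", "opml", 0.01, 2.0),
         Node("sgx-tee", "tee", 0.05, 0.5),
         Node("zk-prover", "zkml", 2.00, 4800.0)]

print(match(Request("opml", 0.02, 10.0), nodes))   # cheap: the opML node
print(match(Request("tee", 1.00, 1.0), nodes))     # private + fast: the TEE node
print(match(Request("zkml", 1.00, 60.0), nodes))   # None: zk proving too slow/costly
```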

5. Comparative Analysis: Choosing the Right Audit Tool

The landscape of on-chain governance offers a toolkit rather than a single solution. The choice of mechanism—zkML, opML, or TEE—imposes strict trade-offs regarding cost, latency, and trust.

Table 1: Comparative Feature Matrix of Verification Mechanisms

| Feature | zkML (e.g., EZKL) | opML (e.g., ORA) | TEE (e.g., Flashbots/Ritual) | Cryptoeconomic (e.g., Bittensor) |
| --- | --- | --- | --- | --- |
| Trust Source | Math (cryptography) | Game theory (economics) | Hardware (Intel/AMD) | Social consensus / stake |
| Cost Profile | Extremely high (proof generation is ~1,000x inference) | Low (native execution + storage) | Low (native execution) | Variable (incentive emissions) |
| Latency | High (proving time: minutes to hours) | High (challenge period: minutes to hours) | Low (real-time inference) | Low (real-time consensus) |
| Privacy | Maximum (inputs/weights hidden) | Low (data public for fraud proofs) | High (enclave encrypted) | Variable (subnet dependent) |
| Model Scale | Limited (small CNNs, <1B params) | Unlimited (LLMs, >100B params) | Limited by enclave RAM | Unlimited |
| Audit Type | Deterministic / absolute | Optimistic / dispute-based | Attested / hardware-based | Subjective / peer review |

Insight:

  • zkML is the definitive solution for high-stakes, privacy-critical decisions (e.g., a DAO verifying a credit score without seeing the user’s bank history). However, until hardware acceleration matures, it is unusable for LLMs.15
  • opML is the pragmatic “Layer 2” for AI. It is the only decentralized way to run state-of-the-art LLMs (like LLaMA-3) today. The latency is the price paid for scalability.24
  • TEEs serve as a high-performance bridge. They are ideal for private auctions (MEV) or agentic workflows where speed is critical, but they retain a centralized dependency on the hardware manufacturer.14

6. The Economic and Regulatory Frontier

On-chain governance does not exist in a vacuum; it interacts with dynamic economic markets and an increasingly aggressive regulatory environment.

6.1 Tokenomics of Intelligence: The IMO

The Initial Model Offering (IMO) represents a fundamental shift in how AI is funded. Traditionally, AI development is funded by Venture Capital, locking models behind corporate APIs to capture value. The IMO model tokenizes the future revenue stream of the model itself.31

  • Implications: This aligns incentives between developers (who get funded), users (who pay for inference), and token holders (who govern the model). It creates a “Model-as-a-DAO” structure. If a model becomes biased or outdated, the token holders—incentivized by revenue—will vote to update or fine-tune it, creating a market-driven governance mechanism that penalizes poor performance.38

6.2 Regulatory Collision: The “Dual-Use” Dilemma

The democratization of AI via on-chain governance directly conflicts with emerging national security regulations. The US government defines advanced models as “Dual-Use Foundation Models” with potential for misuse in bio-weaponry or cyberattacks.39

  • The Conflict: Regulations like the EU AI Act or the proposed GENIUS Act in the US imply strict controls over who can access model weights.40 Decentralized protocols like Bittensor or ORA distribute these weights globally to ensure censorship resistance.
  • The “Right to be Forgotten”: The GDPR requirement to delete personal data contradicts blockchain immutability. On-chain governance must evolve to store proofs on-chain while keeping the data in compliant, mutable off-chain storage (like IPFS with access control), using TEEs or ZKPs to bridge the gap.42 (This pattern is sketched after this list.)
  • Legislative Outlook: Recent proposals like the Deploying American Blockchains Act43 suggest a softening stance, recognizing blockchain’s role in competitiveness. However, the tension between “verifiable, open AI” and “controlled, safe AI” will define the policy landscape for the next decade.
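The commitment-on-chain, data-off-chain pattern from the bullet above is small enough to sketch directly: only a hash is appended to the immutable log, the personal data lives in a mutable store and can be erased on request, and an audit can still verify the integrity of whatever remains. All names here are illustrative.

```python
import hashlib

onchain_log = []                      # append-only: commitments are never deleted
offchain_store = {}                   # mutable: subject to GDPR erasure

def record(record_id: str, personal_data: bytes):
    digest = hashlib.sha256(personal_data).hexdigest()
    onchain_log.append((record_id, digest))
    offchain_store[record_id] = personal_data

def erase(record_id: str):
    """'Right to be forgotten': delete the data, keep the commitment."""
    del offchain_store[record_id]

def audit(record_id: str) -> str:
    digest = dict(onchain_log)[record_id]
    data = offchain_store.get(record_id)
    if data is None:
        return "erased (commitment intact, data unrecoverable)"
    return "valid" if hashlib.sha256(data).hexdigest() == digest else "tampered"

record("user-1", b"name=Alice;diagnosis=...")
assert audit("user-1") == "valid"
erase("user-1")
print(audit("user-1"))                # erased (commitment intact, data unrecoverable)
```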

6.3 Hybrid Governance: The “opp/ai” Approach

To navigate these constraints, hybrid frameworks like opp/ai are emerging. This architecture partitions a neural network into two segments:

  1. Privacy-Sensitive Layers: Processed inside a ZK circuit or TEE to protect input data (e.g., patient records).
  2. Compute-Intensive Layers: Processed via opML to ensure scalability and cost-efficiency.44
    This hybrid approach allows for “Optimistic Privacy-Preserving AI,” balancing the rigorous privacy demands of regulators with the performance demands of users.
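A minimal sketch of such a partition follows: the first layers run inside the privacy-preserving engine and reveal only a commitment to the intermediate activation, while the remaining layers run optimistically on that activation, re-checkable by any challenger. The toy two-layer network and function names are our own illustration, not the opp/ai reference design.

```python
import hashlib
import numpy as np

rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(4, 8)), rng.normal(size=(8, 2))   # toy 2-layer MLP

def private_segment(x):
    """Layers holding sensitive inputs: executed under ZK/TEE in the real
    design; the raw input never leaves, only the activation and its commitment."""
    act = np.maximum(x @ W1, 0.0)                            # ReLU layer
    commitment = hashlib.sha256(act.tobytes()).hexdigest()
    return act, commitment

def optimistic_segment(act):
    """Compute-intensive layers: run off-chain, posted optimistically, and
    subject to the bisection dispute game from Section 3.2."""
    return act @ W2

def challenger_check(act, commitment, claimed_out):
    """Any watcher can re-derive the output from the committed activation."""
    assert hashlib.sha256(act.tobytes()).hexdigest() == commitment
    return np.allclose(optimistic_segment(act), claimed_out)

x = np.array([0.2, -1.3, 0.7, 0.1])      # private input (e.g., a patient record)
act, com = private_segment(x)
out = optimistic_segment(act)
assert challenger_check(act, com, out)
```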

7. Conclusion: The Era of the Algorithmic Audit

On-Chain Model Governance is not merely a niche application of blockchain; it is the infrastructure required to civilize the “Wild West” of Artificial Intelligence. As AI agents begin to manage assets, enforce laws, and diagnose diseases, the “Black Box” model of the Web2 era becomes an unacceptable liability.

The transition to on-chain auditing moves governance from social trust (trusting OpenAI or Google) to cryptographic truth.

  • zkML provides the mathematical certainty that a computation is correct and private.
  • opML provides the economic scalability to apply this certainty to massive intelligence models.
  • TEEs provide the secure environments to execute these models at speed.
  • Protocols like Bittensor and ORA provide the market mechanisms to value and monetize this verifiable intelligence.

For auditors, developers, and policymakers, the implications are profound. The future of auditing is not in spreadsheets or interview rooms—it is in the mempool, the ZK circuit, and the fraud proof. By mandating that AI decisions be recorded and verified on-chain, we can build an ecosystem where AI is not only powerful but also provably fair, transparent, and accountable. The era of the AI Audit has arrived, and it is immutable.

Works cited

  1. What Is an AI Audit? | IBM, accessed on December 21, 2025, https://www.ibm.com/think/topics/ai-audit
  2. Introducing Ritual, accessed on December 21, 2025, https://ritual.net/blog/introducing-ritual
  3. Auditable AI: Building trust in AI through blockchain – CoinGeek, accessed on December 21, 2025, https://coingeek.com/auditable-ai-building-trust-in-ai-through-blockchain/
  4. AI Supply Chain Security: Why It’s Becoming Harder to Ignore | Wiz, accessed on December 21, 2025, https://www.wiz.io/academy/ai-security/ai-supply-chain-security
  5. Internal Audit’s role in strengthening AI governance | Deloitte US, accessed on December 21, 2025, https://www.deloitte.com/us/en/services/audit-assurance/blogs/accounting-finance/audit-ai-risk-management.html
  6. Tailor Your AI Governance Framework to Your Model Portfolio – Elevate Consult, accessed on December 21, 2025, https://elevateconsult.com/insights/tailor-your-ai-governance-framework-to-your-model-portfolio/
  7. What Is AI Governance? – Palo Alto Networks, accessed on December 21, 2025, https://www.paloaltonetworks.com/cyberpedia/ai-governance
  8. ZKML: Verifiable Machine Learning using Zero-Knowledge Proof – Joo Yeon Cho, accessed on December 21, 2025, https://kudelskisecurity.com/modern-ciso-blog/zkml-verifiable-machine-learning-using-zero-knowledge-proof
  9. AI and Blockchain Disruption: Unveiling Perfect Synergy Use Cases | Onchain, accessed on December 21, 2025, https://onchain.org/research/ai-and-blockchain-disruption/
  10. Industry News 2025: The Growing Challenge of Auditing Agentic AI – ISACA, accessed on December 21, 2025, https://www.isaca.org/resources/news-and-trends/industry-news/2025/the-growing-challenge-of-auditing-agentic-ai
  11. Decentralized Governance of AI Agents – arXiv, accessed on December 21, 2025, https://arxiv.org/html/2412.17114v3
  12. Pros and Cons of Blockchain and AI Integrations – AuditOne, accessed on December 21, 2025, https://www.auditone.io/blog-posts/pros-and-cons-of-blockchain-and-ai-integrations
  13. JOLT Atlas Reaching For SOTA In Zero Knowledge Machine Learning (zkML). – Kinic, accessed on December 21, 2025, https://www.kinic.io/blog/joltx-reaching-for-sota-in-zero-knowledge-machine-learning-zkml
  14. Sirrah: Speedrunning a TEE Coprocessor – Flashbots Writings, accessed on December 21, 2025, https://writings.flashbots.net/suave-tee-coprocessor
  15. opML: Optimistic Machine Learning on Blockchain – arXiv, accessed on December 21, 2025, https://arxiv.org/pdf/2401.17555
  16. The EZKL System, accessed on December 21, 2025, https://docs.ezkl.xyz/
  17. Prove – EZKL, accessed on December 21, 2025, https://docs.ezkl.xyz/getting-started/prove/
  18. opML is All You Need: Run a 13B ML Model on Ethereum, accessed on December 21, 2025, https://ethresear.ch/t/opml-is-all-you-need-run-a-13b-ml-model-on-ethereum/17175
  19. Benchmarking ZKML Frameworks – EZKL Blog, accessed on December 21, 2025, https://blog.ezkl.xyz/post/benchmarks/
  20. What’s the Difference Between zkML and opML? – Hackernoon, accessed on December 21, 2025, https://hackernoon.com/whats-the-difference-between-zkml-and-opml
  21. Steps in Hardware, Leaps in Performance – EZKL Blog, accessed on December 21, 2025, https://blog.ezkl.xyz/post/acceleration/
  22. opML – ORA, accessed on December 21, 2025, https://docs.ora.io/doc/onchain-ai-oracle-oao/fraud-proof-virtual-machine-fpvm-and-frameworks/opml
  23. ORA: Trustless Artificial Intelligence on Ethereum, Reshaping the Blockchain AI Ecosystem, accessed on December 21, 2025, https://www.rootdata.com/news/232693
  24. Comparison of Proving Frameworks – ORA, accessed on December 21, 2025, https://docs.ora.io/doc/onchain-ai-oracle-oao/fraud-proof-virtual-machine-fpvm-and-frameworks/comparison-of-proving-frameworks
  25. Sirrah TEE Coprocessor – SUAVE – The Flashbots Collective, accessed on December 21, 2025, https://collective.flashbots.net/t/sirrah-tee-coprocessor/2992
  26. Validating in Bittensor, accessed on December 21, 2025, https://docs.learnbittensor.org/validators
  27. Deep Dive: What are Bittensor Subnets | Techandtips123 on Binance Square, accessed on December 21, 2025, https://www.binance.com/en/square/post/26175311418761
  28. A Brief Introduction to Bittensor – Nansen Research, accessed on December 21, 2025, https://research.nansen.ai/articles/a-brief-introduction-to-bittensor
  29. Docs Home | Bittensor, accessed on December 21, 2025, https://docs.learnbittensor.org/
  30. AI L1 Deep Research Report on Bittensor: Continuously Optimizing the Market Economy of Machine Intelligence | Biteye on Binance Square, accessed on December 21, 2025, https://www.binance.com/en/square/post/24525984077705
  31. IMO Overview – ORA, accessed on December 21, 2025, https://docs.ora.io/doc/initial-model-offering-imo/imo-overview
  32. Gensyn: Building a Global, Verifiable Network for Machine Intelligence | by Denis Belousov, accessed on December 21, 2025, https://intelpocik.medium.com/gensyn-building-a-global-verifiable-network-for-machine-intelligence-5f4786cf3b4d
  33. Litepaper (legacy) | Gensyn, accessed on December 21, 2025, https://docs.gensyn.ai/litepaper
  34. Explore Investing Thesis on Decentralized Computing Market: Gensyn – Insights, accessed on December 21, 2025, https://insights.blockbase.co/explore-decentralized-computing-market-gensyn/
  35. Frequently Asked Questions – Ritual Foundation, accessed on December 21, 2025, https://ritualfoundation.org/docs/reference/faq
  36. The Resonance Mechanism and its Properties – Ritual, accessed on December 21, 2025, https://ritual.net/blog/resonance-pt2
  37. Ritual in the Crypto × AI Landscape, accessed on December 21, 2025, https://www.ritualfoundation.org/docs/landscape/ritual-vs-other-crypto-x-ai
  38. Known as truly open OpenAI: A deep dive into “IMO”, the ORA project leads us into the era of AI model tokenization | Una繁星 on Binance Square, accessed on December 21, 2025, https://www.binance.com/en-IN/square/post/6629777965353
  39. Dual Use Foundation Artificial Intelligence Models With Widely Available Model Weights, accessed on December 21, 2025, https://www.federalregister.gov/documents/2024/02/26/2024-03763/dual-use-foundation-artificial-intelligence-models-with-widely-available-model-weights
  40. US Crypto Policy Tracker: Legislative Developments – Latham & Watkins LLP, accessed on December 21, 2025, https://www.lw.com/en/us-crypto-policy-tracker/legislative-developments
  41. S.394 – GENIUS Act of 2025, 119th Congress (2025–2026), accessed on December 21, 2025, https://www.congress.gov/bill/119th-congress/senate-bill/394
  42. Privacy-Enhancing and Privacy-Preserving Technologies in AI – Centre for Information Policy Leadership, accessed on December 21, 2025, https://www.informationpolicycentre.com/uploads/5/7/1/0/57104281/cipl_pets_and_ppts_in_ai_mar25.pdf
  43. S. Rept. 119-84 – DEPLOYING AMERICAN BLOCKCHAINS ACT OF 2025 | Congress.gov, accessed on December 21, 2025, https://www.congress.gov/committee-report/119th-congress/senate-report/84/1
  44. opp/ai: Optimistic Privacy-Preserving AI on Blockchain – arXiv, accessed on December 21, 2025, https://arxiv.org/pdf/2402.15006
  45. What is AI Governance? | IBM, accessed on December 21, 2025, https://www.ibm.com/think/topics/ai-governance
  46. What is AI Auditing ? Where to Start – Centraleyes, accessed on December 21, 2025, https://www.centraleyes.com/glossary/ai-auditing/
  47. Why zkML is the Game-Changer for Secure and Scalable AI | by ARPA Official – Medium, accessed on December 21, 2025, https://arpa.medium.com/why-zkml-is-the-game-changer-for-secure-and-scalable-ai-abd3f647a2b6
  48. ora-io/opml – OPtimistic Machine Learning on Blockchain – GitHub, accessed on December 21, 2025, https://github.com/ora-io/opml
  49. ORA Protocol – Organizations – IQ.wiki, accessed on December 21, 2025, https://iq.wiki/wiki/ora-protocol
  50. ORA will launch OpenLM IMO on April 13, with a maximum purchase amount of 1 ETH, accessed on December 21, 2025, https://www.binance.com/en/square/post/2024-04-11-ora-4-13-openlm-imo-1-eth-6628334799313
  51. IMO Participation Rules – ORA, accessed on December 21, 2025, https://docs.ora.io/doc/initial-model-offering-imo/imo-participation-rules
  52. Bittensor Meaning – Ledger, accessed on December 21, 2025, https://www.ledger.com/academy/glossary/bittensor
  53. What Is Bittensor (TAO) and How Does It Work? – tastycrypto, accessed on December 21, 2025, https://www.tastycrypto.com/blog/bittensor/
  54. Bittensor Paradigm, accessed on December 21, 2025, https://bittensor.com/about
  55. Inside Bittensor, the blockchain for AI – 21Shares, accessed on December 21, 2025, https://www.21shares.com/en-eu/research/inside-bittensor-the-blockchain-for-ai
  56. Products & Research – Gensyn, accessed on December 21, 2025, https://docs.gensyn.ai/products-and-research
  57. Verde Verification System In Production – Gensyn, accessed on December 21, 2025, https://blog.gensyn.ai/verde-verification-system-in-production/
  58. From Bundles to Time: A Theory of Decentralised Compute Markets – Gensyn, accessed on December 21, 2025, https://blog.gensyn.ai/from-bundles-to-time-a-theory-of-decentralised-compute-markets/
  59. A Beginner’s Guide to Understanding Gensyn – Gate.com, accessed on December 21, 2025, https://www.gate.com/learn/articles/a-beginners-guide-to-understanding-gensyn/3594
  60. A Simple Guide to Ritual: The Open AI Infrastructure Network – Gate.com, accessed on December 21, 2025, https://www.gate.com/learn/articles/a-simple-guide-to-ritual-the-open-ai-infrastructure-network/4594
  61. Centralized vs. Federated vs. Decentralized AI Governance – InfosecTrain, accessed on December 21, 2025, https://www.infosectrain.com/blog/centralized-vs-federated-vs-decentralized-ai-governance/