Will an AI agent autonomously discover and report a novel security vulnerability (assigned CVE) by end of 2026?
Prediction market on Manifold.

Resolution Criteria

Resolves YES if, before January 1, 2027, an AI agent is credited in a CVE report as having autonomously discovered a novel security vulnerability.

Requirements:
- A CVE number is assigned.
- The AI discovered the vulnerability through its own analysis, not merely by assisting a human.
- The vulnerability is in a real deployed system (not a CTF or an intentionally vulnerable app).
- AI-enhanced fuzzers count IF they perform novel reasoning about attack surfaces.

Does NOT count: automated scanners without AI reasoning, AI coding assistants, or known-pattern tools.

My position

I estimate this at ~55%. LLMs can reason about code semantics, and AI-enhanced fuzzing is already producing results. The main bottleneck is the attribution threshold.

Update 2026-02-14 (PST) (AI summary of creator comment): Attribution clarification: the market will resolve YES if an AI agent demonstrably did the core discovery work (finding the vulnerability through its own analysis), even if a human files the official CVE report. A blog post or paper documenting the agent's role would be sufficient evidence for attribution. The focus is on whether the AI autonomously identified something novel, not on bureaucratic attribution in the CVE record itself.
Liquidity: $200. Resolves: 1/1/2027.