
venturebeat
The 70% factuality ceiling: why Google’s new ‘FACTS’ benchmark is a wake-up call for enterprise AI

There's no shortage of generative AI benchmarks designed to measure the performance and accuracy of a given model on completing various helpful enterprise tasks — from coding to instruction following to agentic web browsing and tool use. But many of these benchmarks have one major shortcoming: they measure the AI's ability to complete specific problems and requests, not how factual the model is in its outputs — how well it generates objectively correct information tied to real-world data — especially when dealing with information contained in imagery or graphics.

For industries where accuracy is paramount — legal, finance, and medical — the lack of a standardized way to measure factuality has been a critical blind spot.

That changes today: Google's FACTS team and its data science unit Kaggle released the FACTS Benchmark Suite, a comprehensive evaluation framework designed to close this gap. The associated research paper offers a more nuanced definition of the problem, splitting "factuality" into two distinct operational scenarios: "contextual factuality" (grounding responses in provided data) and "world knowledge factuality" (retrieving information from memory or the web).

While the headline news is Gemini 3 Pro's top-tier placement, the deeper story for builders is the industry-wide "factuality wall." According to the initial results, no model — including Gemini 3 Pro, GPT-5, or Claude 4.5 Opus — managed to crack a 70% accuracy score across the suite of problems. For technical leaders, this is a signal: the era of "trust but verify" is far from over.

Deconstructing the Benchmark

The FACTS suite moves beyond simple Q&A. It is composed of four distinct tests, each simulating a different real-world failure mode that developers encounter in production:

- Parametric Benchmark (Internal Knowledge): Can the model accurately answer trivia-style questions using only its training data?
- Search Benchmark (Tool Use): Can the model effectively use a web search tool to retrieve and synthesize live information?
- Multimodal Benchmark (Vision): Can the model accurately interpret charts, diagrams, and images without hallucinating?
- Grounding Benchmark v2 (Context): Can the model stick strictly to the provided source text?

Google has released 3,513 examples to the public, while Kaggle holds a private set to prevent developers from training on the test data — a common issue known as "contamination."

The Leaderboard: A Game of Inches

The initial run of the benchmark places Gemini 3 Pro in the lead with a comprehensive FACTS Score of 68.8%, followed by Gemini 2.5 Pro (62.1%) and OpenAI's GPT-5 (61.8%). However, a closer look at the data reveals where the real battlegrounds are for engineering teams.

Model            | FACTS Score (Avg) | Search (RAG Capability) | Multimodal (Vision)
Gemini 3 Pro     | 68.8              | 83.8                    | 46.1
Gemini 2.5 Pro   | 62.1              | 63.9                    | 46.9
GPT-5            | 61.8              | 77.7                    | 44.1
Grok 4           | 53.6              | 75.3                    | 25.7
Claude 4.5 Opus  | 51.3              | 73.2                    | 39.2

Data sourced from the FACTS Team release notes.
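For readers who want to sanity-check the headline number, the composite appears to be a simple average of the four sub-benchmark scores; treating it as an unweighted mean is an inference from the "(Avg)" column label rather than something stated explicitly in the release. A quick sketch in Python, using the Gemini 3 Pro sub-scores quoted in this article:

```python
# Sanity check: recompute the composite FACTS score as an unweighted mean
# of the four sub-benchmarks. The simple-average assumption is inferred
# from the "(Avg)" column label, not confirmed by the FACTS release.
gemini_3_pro = {
    "search": 83.8,      # Search Benchmark (tool use / RAG)
    "parametric": 76.4,  # Parametric Benchmark (internal knowledge)
    "grounding": 69.0,   # Grounding Benchmark v2 (context)
    "multimodal": 46.1,  # Multimodal Benchmark (vision)
}

composite = sum(gemini_3_pro.values()) / len(gemini_3_pro)
print(f"Composite FACTS score: {composite:.1f}")  # 68.8, matching the leaderboard
```

Under that same assumption, dropping the multimodal column leaves an average of 76.4 across the remaining three sub-benchmarks, which shows how heavily the weak vision results drag the composite below the 70% line.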
For Builders: The "Search" vs. "Parametric" Gap

For developers building RAG (Retrieval-Augmented Generation) systems, the Search Benchmark is the most critical metric. The data shows a notable discrepancy between a model's ability to "know" things (Parametric) and its ability to "find" things (Search). For instance, Gemini 3 Pro scores a high 83.8% on Search tasks but only 76.4% on Parametric tasks.

This validates the current enterprise architecture standard: do not rely on a model's internal memory for critical facts. If you are building an internal knowledge bot, the FACTS results suggest that hooking your model up to a search tool or vector database is not optional — it is the only way to push accuracy toward acceptable production levels (a minimal sketch of this pattern appears at the end of this article).

The Multimodal Warning

The most alarming data point for product managers is the performance on multimodal tasks. The scores here are universally low. Even the category leader, Gemini 2.5 Pro, only hit 46.9% accuracy. The benchmark tasks included reading charts, interpreting diagrams, and identifying objects in nature. With less than 50% accuracy across the board, this suggests that multimodal AI is not yet ready for unsupervised data extraction.

Bottom line: if your product roadmap involves having an AI automatically scrape data from invoices or interpret financial charts without human-in-the-loop review, you are likely introducing significant error rates into your pipeline.

Why This Matters for Your Stack

The FACTS Benchmark is likely to become a standard reference point for procurement. When evaluating models for enterprise use, technical leaders should look beyond the composite score and drill into the specific sub-benchmark that matches their use case:

- Building a customer support bot? Look at the Grounding score to ensure the bot sticks to your policy documents. (Gemini 2.5 Pro actually outscored Gemini 3 Pro here, 74.2 vs. 69.0.)
- Building a research assistant? Prioritize Search scores.
- Building an image analysis tool? Proceed with extreme caution.

As the FACTS team noted in their release, "All evaluated models achieved an overall accuracy below 70%, leaving considerable headroom for future progress."

For now, the message to the industry is clear: the models are getting smarter, but they aren't yet infallible. Design your systems with the assumption that, roughly one-third of the time, the raw model might just be wrong.
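To make the "retrieve first, don't trust memory" guidance concrete, here is a minimal sketch of a retrieval-first answering flow with a human-review fallback. Everything in it is illustrative: the toy corpus, the keyword-overlap retriever, the 0.7 relevance threshold, and the helper names are hypothetical stand-ins, not part of the FACTS release or any vendor API.

```python
# Illustrative sketch of a retrieval-first answering flow with a
# human-review fallback. The corpus, retriever, threshold, and helper
# names are hypothetical; this is not the FACTS evaluation code or a
# specific vendor API.
import re
from dataclasses import dataclass

RELEVANCE_THRESHOLD = 0.7  # arbitrary cutoff chosen for illustration


@dataclass
class Passage:
    text: str
    score: float  # similarity score as returned by your retriever


def _tokens(s: str) -> set[str]:
    return set(re.findall(r"[a-z0-9]+", s.lower()))


def retrieve(query: str) -> list[Passage]:
    # Stand-in for a vector-database or web-search call. Here: naive
    # keyword overlap over a toy corpus; swap in your real retriever.
    corpus = [
        "Refunds are available within 30 days of purchase.",
        "Support hours are 9am to 5pm, Monday through Friday.",
    ]
    q = _tokens(query)
    return [
        Passage(doc, len(q & _tokens(doc)) / max(len(q), 1))
        for doc in corpus
    ]


def generate(query: str, passages: list[Passage]) -> str:
    # Stand-in for an LLM call instructed to answer ONLY from the passages.
    best = max(passages, key=lambda p: p.score)
    return f"Answer to {query!r}, grounded in: {best.text}"


def answer(query: str) -> str:
    passages = [p for p in retrieve(query) if p.score >= RELEVANCE_THRESHOLD]
    if not passages:
        # No grounded evidence above threshold: escalate to a human
        # instead of trusting the model's parametric memory.
        return "ESCALATE_TO_HUMAN_REVIEW"
    return generate(query, passages)


print(answer("When are refunds available?"))     # grounded answer
print(answer("What is the CEO's middle name?"))  # ESCALATE_TO_HUMAN_REVIEW
```

The design choice worth copying is the escalation path: when retrieval returns nothing above the threshold, the request goes to a person rather than falling back to the model's internal memory, which is exactly the failure mode the FACTS numbers flag.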


We have discovered tools similar to the one you are looking at. Check out our suggestions for similar AI tools.

venturebeat
GitHub leads the enterprise, Claude leads the pack—Cursor's speed can

In the race to deploy generative AI for coding, the fastest tools are not winning enterprise deals. A new VentureBeat analysis, combining a comprehensive survey of 86 engineering teams with o [...]

Match Score: 92.65

How to record a phone call on an iPhone

With https://www.engadget.com/mobile/ios-26-is-finally-here-everything-to-know-about-the-free-iphone-software-update-135749206.html [...]

Match Score: 75.65

venturebeat
Databricks' OfficeQA uncovers disconnect: AI agents ace abstract tests but

There is no shortage of AI benchmarks in the market today, with popular options like https://venturebeat.com/ai/beyond-arc-agi-gaia-and-the-search-for-a-real-intelligence-benc [...]

Match Score: 73.00

FACTS benchmark shows that even top AI models struggle with the truth

[Image: DeepMind FACTS benchmark illustration (https://the-decoder.com/wp-content/uploads/2025/12/Deepmind-FACTS-Benchmark-scaled.webp)]

Match Score: 62.95

venturebeat
Google unveils Gemini 3 claiming the lead in math, science, multimodal and

After more than a month of rumors and feverish speculation — including Polymarket wagering on the release date (https://polymarket.com/event/gemini-3pt0-released-by) [...]

Match Score: 61.89

venturebeat
The next AI battleground: Google’s Gemini Enterprise and AWS’s Quick Su

The friction of having to open a separate chat window to prompt an agent could be a hassle for many enterprises. And AI companies are seeing an opportunity to bring more and more [...]

Match Score: 61.17

venturebeat
Terminal-Bench 2.0 launches alongside Harbor, a new framework for testing a

The developers of Terminal-Bench, a benchmark suite for evaluating the performance of autonomous AI agents on real-world terminal-based tasks, have released https://www.tbenc [...]

Match Score: 60.45

venturebeat
Why observable AI is the missing SRE layer enterprises need for reliable LL

As AI systems enter production, reliability and governance can't depend on wishful thinking. Here's how observability turns https://venturebeat.com/ai/from-shiny-object-t [...]

Match Score: 59.80

venturebeat
Writer's AI agents can actually do your work—not just chat about it

Writer (https://writer.com/), a San Francisco-based artificial intelligence startup, is launching a unified AI agent platform designed to [...]

Match Score: 59.24