Artificial intelligence is rushing into finance — but few people realize how fragile it still is beneath the surface.
The recent article "Defeating Nondeterminism in LLM Inference," by Thinking Machines, tackles one of the most overlooked challenges in AI: nondeterminism, the tendency for large language models (LLMs) to produce different answers even when given the same input.
It’s a dense, technical read, but the message is simple and important:
Even if you tell an AI model to give you the same answer every time, it might not — because the mathematics inside it isn’t perfectly repeatable.
Why identical inputs can yield different answers
Inside every LLM are billions of parallel calculations running on GPUs. These chips use "floating-point math": lightning fast, but rounded at every step. Floating-point addition isn't associative, so (a + b) + c doesn't always equal a + (b + c). And when billions of operations run in parallel, the order in which they complete can shift from run to run (with batch size and kernel scheduling), so those tiny rounding differences accumulate differently each time.
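You can reproduce the effect in a few lines of ordinary Python. The same rounding behavior, multiplied across billions of GPU operations whose ordering isn't fixed, is what the Thinking Machines article is tackling:

```python
# Floating-point addition is not associative: the grouping (and therefore
# the execution order) changes the rounding, and the rounding changes the result.
a, b, c = 0.1, 0.2, 0.3

left = (a + b) + c   # 0.6000000000000001
right = a + (b + c)  # 0.6

print(left == right)  # False
print(left, right)
```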
The result: two identical prompts can produce slightly different answers, even at “temperature zero.”
In most settings, this is harmless — a different adjective, a slightly altered phrasing. But in finance, that same instability can turn a model from “innovative” into non-compliant.
When you’re generating valuations, growth rates, or risk ratios, “close enough” isn’t close enough.
The deeper problem: unreliable and unlicensed data
Even if you fixed the maths, most models fail at a deeper level — their data.
Today's foundation models are trained or extended on unverified web data: scraped pages, public APIs, and community content, none of which is:
- Licensed for commercial or regulated use
- Guaranteed accurate, current, or complete
- Structured in a way that supports compliance or auditability
Worse, many LLM-based "finance agents" try to fill their knowledge gaps by reaching out to the open internet, scraping sites like Yahoo or Google Finance, to deliver "real-time" answers or prices.
That’s not innovation; that’s a compliance nightmare waiting to happen. That data is not licensed for professional use, often delayed or inaccurate, and will not survive regulatory scrutiny. If your “AI agent” is scraping Google Finance for prices, you’re not automating insight — you’re setting yourself up for an audit and a big penalty.
viaNexus and vAST: fixing hallucination from the ground up
At viaNexus, we’re not trying to patch hallucination after it happens — we’re preventing it entirely by controlling the data fabric that AI operates on.
The viaNexus platform and vAST (viaNexus Agentic Services Technology) are built to make hallucination impossible in structured domains.
Here’s how:
1. Deterministic, licensed data
Every dataset in viaNexus (filings, fundamentals, prices, symbology, news) is fully licensed and permissioned down to the agent level. Agents don't guess what a number means; they query an exact, canonical source of truth. Our proprietary equity prices also tell agents how individual stocks and the broader market are performing right now, not 15 minutes or a day ago. A rough sketch of that pattern appears below.
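In the sketch, the endpoint, fields, and base URL are hypothetical placeholders, not the actual viaNexus API. The point is the shape of the interaction: the agent asks one canonical, licensed source and gets back typed data with provenance, instead of scraping a web page.

```python
import requests

# Hypothetical endpoint and field names, for illustration only:
# not the actual viaNexus API.
BASE_URL = "https://api.example.com/v1"

def get_canonical_price(symbol: str, api_key: str) -> dict:
    """Fetch a price from a single licensed, canonical source.

    The response carries provenance (source, license, timestamp), so the
    agent never guesses where a number came from or what it means.
    """
    resp = requests.get(
        f"{BASE_URL}/prices/{symbol}",
        headers={"Authorization": f"Bearer {api_key}"},
        timeout=10,
    )
    resp.raise_for_status()
    # e.g. {"symbol": ..., "price": ..., "as_of": ..., "license": ...}
    return resp.json()
```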
2. Guardrails for agentic reasoning
vAST defines strict rules for how AI agents interact with data. Given the same inputs, you always get the same outputs. No improvisation, no drift, no mystery.
3. Precalculated market intelligence
LLMs are not good at maths. They were never built for precise calculation.
So at viaNexus, we precompute many of the statistics, ratios, and derived metrics that agents might otherwise try (and fail) to calculate in real time — things like intraday sector performance, valuation multiples, and change ratios. That means agents can focus on reasoning and explanation, while the platform handles the quantitative truth.
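A minimal sketch of the idea (the metric names and in-memory storage here are ours, purely illustrative): the agent looks up a figure that was computed upstream from licensed data, and is never allowed to invent one.

```python
# Illustrative only: metrics are computed upstream from licensed data,
# and the agent can only retrieve them, never derive or invent them.
PRECOMPUTED_METRICS = {
    ("XLK", "intraday_return"): 0.0123,  # placeholder value
    ("AAPL", "pe_ratio"): 29.4,          # placeholder value
}

def get_metric(symbol: str, metric: str) -> float:
    """Return a precomputed, auditable metric, or fail loudly.

    Failing loudly is the point: a missing number surfaces as an error
    the agent can report, not a gap the model quietly fills in.
    """
    try:
        return PRECOMPUTED_METRICS[(symbol, metric)]
    except KeyError:
        raise LookupError(f"No precomputed value for {metric!r} on {symbol!r}")
```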
4. Separation of labor
We isolate structured data reasoning from free-text generation. LLMs can narrate, summarize, and explain, but they don't calculate and they don't invent data, because they never need to. The sketch below shows how the two halves fit together.
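In this sketch, the `llm` and `data_layer` objects are placeholders; the real vAST interfaces differ. What matters is the division: every number in the output comes from the deterministic data layer, and the model's only job is the prose around them.

```python
def narrate_performance(symbol: str, llm, data_layer) -> str:
    """Separation of labor: the data layer supplies every figure,
    and the LLM only writes the explanation around them.
    """
    price = data_layer.get_metric(symbol, "last_price")
    change = data_layer.get_metric(symbol, "intraday_return")

    prompt = (
        f"Write two sentences for an analyst using ONLY these figures: "
        f"{symbol} last traded at {price:.2f}, {change:+.2%} intraday. "
        f"Do not introduce any other numbers."
    )
    # Deterministic data in, constrained narration out: even if the
    # wording varies, the figures themselves cannot.
    return llm.generate(prompt, temperature=0)
```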
Why this matters
Today's AI systems in finance are brittle because they rely on nondeterministic computation layered over unlicensed, unstructured data.
If Thinking Machines is solving nondeterminism at the numerical level, viaNexus and vAST solve it at the semantic level — ensuring that financial data, logic, and meaning remain consistent, licensed, and reproducible.
Together, these two approaches define the foundation for trustworthy financial AI: numerical precision + semantic integrity + licensed data.
The future of agentic finance
As AI agents evolve to drive analytics, research, and decision-making, the industry faces a simple choice: Build on unverified data scraped from the web, or build on a deterministic, auditable, and licensed foundation.
At viaNexus, we’ve chosen the latter. Because in regulated, high-stakes domains like finance, hallucination isn’t something to mitigate — it’s something to eliminate.
Reach out if you'd like to help define the future of agentic finance with us. Meanwhile, be sure to check out askNexus, our demo of how easy it is to build a compliant financial agent on vAST.