AI vs Reliability

Natarajan Santhosh
2 min read · Oct 13, 2023


The current generation of LLM technology is optimized to be fluently conversational, not to be factually correct all the time.

1) Hallucinations often appear because LLMs are designed to create fluent, coherent text.

2) LLMs have no understanding of the underlying reality that language describes.

3) LLMs use statistics to generate language that is grammatically and semantically correct within the context of the prompt.

The technology sacrifices accuracy for being good at conversation because that is exactly what it is designed to do, and criticisms of hallucinations that ignore this are missing the point.
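
To make the "statistics" point concrete, here is a toy sketch of the step at the heart of generation: score candidate next tokens, then sample one. Everything in it, the prompt, the candidate tokens, and the logit values, is invented for illustration; the takeaway is that a fluent but wrong continuation can carry almost as much probability mass as the true one.

```python
import numpy as np

# Toy illustration (invented numbers, not a real model): an LLM scores candidate
# next tokens by statistical plausibility, not factual truth. Both continuations
# below read fluently, so a purely statistical sampler will happily pick either.
prompt = "Apollo 11 landed on the Moon in ____"
candidates = ["1969", "1968"]        # correct answer vs. fluent-but-wrong answer
logits = np.array([2.1, 1.6])        # hypothetical plausibility scores

def sample_next_token(logits, temperature=1.0, seed=0):
    """Softmax plus temperature sampling: the core statistical step in generation."""
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    idx = np.random.default_rng(seed).choice(len(probs), p=probs)
    return idx, probs

idx, probs = sample_next_token(logits)
for token, p in zip(candidates, probs):
    print(f"P({token!r}) = {p:.2f}")  # P('1969') = 0.62, P('1968') = 0.38
print("sampled:", candidates[idx])   # which one comes out is just a weighted dice roll
```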

Generative AI is, by definition, generative: it is a specialist at making things up. It's going to take things like multi-prompting, networks of AIs that fact-check each other's output, and a host of other technologies, or even entirely new models of AI, to solve these problems. But don't make the mistake of thinking the system is supposed to be working without hallucinations right now; that's not what it's optimized for.
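
To illustrate what a fact-checking layer on top of a generator might look like, here is a minimal sketch of a generate-then-verify loop. `call_llm` is a stand-in for whatever completion API you use, and the prompts, the "OK" convention, and the revision budget are all assumptions made for the sake of the example, not any particular vendor's interface.

```python
# Rough sketch of the "one model fact-checks another" pattern described above.
# `call_llm` is a placeholder; plug in your own completion API.

def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your completion API here")

def answer_with_fact_check(question: str, max_revisions: int = 2) -> str:
    """Draft an answer, ask a second pass to flag unverifiable claims, then revise."""
    draft = call_llm(f"Answer the question:\n{question}")
    for _ in range(max_revisions):
        verdict = call_llm(
            "List any factual claims in the answer below that you cannot verify, "
            "or reply with exactly 'OK' if there are none.\n\n"
            f"Question: {question}\nAnswer: {draft}"
        )
        if verdict.strip() == "OK":
            return draft
        # Ask for a revision that corrects or removes the flagged claims.
        draft = call_llm(
            f"Rewrite the answer, fixing these issues:\n{verdict}\n\n"
            f"Question: {question}\nOriginal answer: {draft}"
        )
    return draft  # best effort once the revision budget is spent
```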

I make AI for a living (sigh), and arguably have been doing so since before we started calling it AI.

Based on my own first-hand experience, if the first thing a company has to say about a product or feature is that it's powered by AI, that is a strong signal that it isn't actually very useful. If they had found it useful for reliably solving one or more real, clearly identified problems, they would start by talking about that, because it sends a stronger signal to a higher-quality pool of potential customers.

The thing is, companies that have that kind of product are relatively rare, because getting to that point takes work. Lots of it. And it's often quite grueling work, the kind that's fundamentally unattractive to the swarms of ambitious, entrepreneurial-minded people who drive most attempts at launching new products and are chiefly looking to get rich starting their own business.

Corollary: If a product marketed as AI is useful, that’s a strong signal it’s a logistic regression.
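
As a tongue-in-cheek illustration of the corollary, here is what such a "useful AI feature" often looks like under the hood: a plain logistic regression over a handful of well-chosen features. The churn-prediction framing and the toy data are invented for the example.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy "AI-powered churn risk" feature; the data and feature choice are invented.
# Columns: [days_since_last_login, support_tickets_last_month]
X = np.array([[1, 0], [3, 1], [30, 4], [45, 2], [2, 0], [60, 5]])
y = np.array([0, 0, 1, 1, 0, 1])  # 1 = the customer churned

model = LogisticRegression().fit(X, y)
risk = model.predict_proba([[40, 3]])[0, 1]  # churn probability for a new customer
print(f"AI-powered churn risk score: {risk:.2f}")
```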

I don’t know who said it, but a quote I love is: “They call it AI until it starts working; see autocomplete.”

I love this because when a company tells me, a software engineer, that they do AI, they are tacitly saying that they have little to no idea of where they want to go or what services they will be offering with that AI.
