Why Some People Get Incredible Results from AI and Others Don’t

An observation from working closely with generative AI across real-world business systems.

Across LinkedIn and the wider web, there’s a recurring complaint about AI models:

“They hallucinate.”

“They make things up.”

“You can’t trust them.”

And yet, there’s an equally strong group of users reporting the opposite experience:

AI accelerates their work, strengthens their thinking, and helps them produce high-quality output at speed.

The gap between these two experiences is striking. The same tools, radically different outcomes.

Understanding why this happens is important — not to score points, but to help people get the results they expect.

 

1. AI Amplifies the Quality of the Input

LLMs don’t generate answers in a vacuum.

They respond to whatever information, structure, and constraints the user provides.

When prompts are vague:

“Write me something about X”

“Tell me how to do Y”

the model fills in the blanks on its own, and that's often where the errors appear.

When prompts are specific:

  • Clear objective
  • Context
  • Constraints
  • Tone
  • Success criteria
  • Examples

 

the model has far less room to guess, meaning far fewer inaccuracies.

AI isn’t guessing because it’s broken.

AI guesses because the prompt leaves space for guessing.
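To make that concrete, here is a rough Python sketch of the same idea. The build_prompt helper and every detail inside it are invented for illustration, not a real API; the point is simply that each component you spell out is one less blank the model fills in for itself.

```python
# A minimal sketch of turning a vague request into a specific brief.
# The build_prompt helper and all details below are illustrative, not a real API.

def build_prompt(objective, context, constraints, tone, success_criteria, example):
    """Assemble a structured prompt from explicit components."""
    return "\n".join([
        f"Objective: {objective}",
        f"Context: {context}",
        f"Constraints: {constraints}",
        f"Tone: {tone}",
        f"Success criteria: {success_criteria}",
        f"Example of what 'good' looks like: {example}",
    ])

vague_prompt = "Write me something about our pricing change."

specific_prompt = build_prompt(
    objective="Draft a 150-word email announcing a 5% price increase to existing customers.",
    context="B2B SaaS, annual contracts, increase takes effect at next renewal.",
    constraints="Do not invent discounts or dates; use only the details given here.",
    tone="Direct, respectful, no marketing fluff.",
    success_criteria="A customer understands what changes, when, and why in one read.",
    example="Our previous renewal notice: plain greeting, three short paragraphs, clear CTA.",
)

print(specific_prompt)  # every line here is one less blank the model fills in itself
```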

 

2. AI Works Best With Structured Thinking

People who naturally think in frameworks (breaking problems into parts, defining assumptions, identifying constraints) tend to get stronger outputs.

AI models respond exceptionally well to:

  • Sequential logic (“Step 1 → Step 2 → Step 3”)
  • Defined roles (“act as an auditor / strategist / analyst”)
  • Clear boundaries (“don’t invent facts; use only the data provided”)
  • Context-rich briefings
  • Iteration (“improve this; challenge this; tighten this”)

 

This isn’t about intelligence.

It’s simply that structured thinking gives the model structure to follow.
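As a hedged illustration, this is roughly what "Step 1 → Step 2 → Step 3" looks like when you drive it from code. call_model is a placeholder for whichever model client you actually use; the structure (one bounded step at a time, each fed the previous step's output) is what matters.

```python
# Sketch: sequential decomposition of a task into bounded steps.
# call_model is a placeholder, not a real library call; it returns a dummy string so the sketch runs.

def call_model(prompt: str) -> str:
    # Swap this stub for a real model/API call.
    return f"<model output for: {prompt[:60]}...>"

steps = [
    "Step 1 - Act as an analyst. List the key assumptions in the input below. "
    "Do not invent facts; use only the data provided.",
    "Step 2 - Act as a strategist. Using the assumptions from the previous step, "
    "outline three options with pros and cons.",
    "Step 3 - Act as an auditor. Challenge the weakest option and tighten the strongest one.",
]

brief = "We are deciding whether to move our reporting workflow to a new tool this quarter."

result = brief
for step in steps:
    # Each step gets a defined role, a clear boundary, and the previous step's output.
    result = call_model(f"{step}\n\nInput:\n{result}")
    print(result)
```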

 

3. Context Compounds Accuracy

AI performs better the more it knows.

Users who provide context upfront (business details, data sources, examples of past work, constraints) effectively give the model a "knowledge base."

Users who start from scratch every time stay stuck at “cold start” accuracy.

The more context the model holds, the less it hallucinates.
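One simple way to build that "knowledge base" is to keep a standing context block and prepend it to every request, so each question starts warm instead of cold. A minimal sketch, with invented business details:

```python
# Sketch: a reusable context block attached to every prompt.
# All details below are invented for illustration.

KNOWLEDGE_BASE = """
Business: UK-based logistics firm, 40 staff, SME clients.
Data sources: monthly ops report, CRM export (do not assume other data exists).
Constraints: no legal advice, GBP only, UK spelling.
Past work: proposals follow a problem / approach / cost / timeline structure.
""".strip()

def with_context(question: str) -> str:
    """Attach the standing context to any one-off question."""
    return f"{KNOWLEDGE_BASE}\n\nTask: {question}"

print(with_context("Draft an outline for a proposal to a new warehousing client."))
```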

 

4. People Expect Google, Not a Collaborative System

A common mistake is treating AI like a search engine.

Search engines return facts.

AI systems return reasoning, structure, and predictions.

If someone expects a perfect answer on the first try, they will be disappointed.

If they expect a collaborator they can refine, they’ll get far better results.

The most effective users iterate:

“This part is wrong — fix it using this source.”

“Tighten points 2 and 4.”

“Challenge the assumptions.”

That back-and-forth is where AI becomes reliable.
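Here is a minimal sketch of that back-and-forth, treated as an accumulating conversation rather than a series of one-shot prompts. call_model is again a stand-in for a real chat client, and the corrections are illustrative:

```python
# Sketch: iteration as an accumulating conversation rather than one-shot prompts.
# call_model is a stand-in for a real chat client; replies are dummies so the sketch runs.

def call_model(messages: list[dict]) -> str:
    return f"<draft #{sum(m['role'] == 'user' for m in messages)}>"

messages = [{"role": "user", "content": "Draft a one-page summary of the attached audit notes."}]
draft = call_model(messages)

# Each correction names the problem and the fix, then the whole history is resubmitted.
corrections = [
    "Point 3 is wrong - rewrite it using only the figures in section 2 of the notes.",
    "Tighten points 2 and 4; remove anything speculative.",
    "Challenge the assumptions in the conclusion.",
]

for note in corrections:
    messages.append({"role": "assistant", "content": draft})
    messages.append({"role": "user", "content": note})
    draft = call_model(messages)

print(draft)
```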

 

5. Error Tolerance Impacts Perception

Some users see a single mistake and conclude the tool is useless.

Others view mistakes as signals:

“What did I not specify?”

“What constraint is missing?”

“Where did the model interpret ambiguity?”

The second group steadily improves accuracy through interaction.

The first group stops at the first friction point.

 

6. Use Cases Matter

AI is strongest at:

  • Strategy
  • Structuring
  • Writing
  • Planning
  • Problem framing
  • Document production
  • Workflow mapping
  • Communication
  • Analysis of text data

 

AI is weakest at:

  • Real-time facts
  • Dates, numbers, citations
  • Highly niche technical minutiae
  • Anything requiring up-to-the-minute data

If someone only tests AI in its weakest zones, they’ll assume the whole thing is unreliable.

If someone uses it in its strongest zones, it quickly becomes indispensable.

 

So Why Do Results Vary So Widely?

Because AI scales the user’s:

  • clarity
  • structure
  • context
  • constraints
  • precision
  • iteration habits
  • error tolerance

 

Two users can ask the same question, but the one who provides better inputs will get significantly better outputs.

This isn’t about being “better at AI.”

It’s about understanding that AI doesn’t replace thinking — it scales it.

 

How Anyone Can Improve Their AI Results

1. Add context. Even 2–3 lines make a big difference.

What’s the objective? What constraints matter? What audience are you writing for?

2. Give examples.

Show AI the tone, format, or style you want. It will match it.

3. Break tasks down.

AI thrives on steps, not giant questions.

4. Iterate, don’t expect perfection.

Correct, refine, challenge.

5. Tell AI what NOT to do.

“Don’t invent facts.”

“Use only the following data.”

“Keep to this structure.”

6. Use it for the right jobs.

Strategy, planning, writing, workflows, creative work — all strong zones.

 

Final Thought

When people say AI is unreliable, the more accurate statement is:

AI is only as reliable as the structure you give it to work with.

Once users understand that, their results tend to change very quickly.
