Trust & AI: Showing Your Work
Citations and Query Types Will Drive Trust

Major AI platforms have adopted a new approach to building trust: showing their work.
This reminds me of what my kids (and I) learned in school, with teachers insisting we demonstrate our process rather than just provide answers.
OpenAI's ChatGPT and Google's Gemini use a two-panel design that displays the AI's answer and its sources side by side.
This visual approach lets users see what the AI thinks and why it thinks that.

AI is improving quickly, but speed alone doesn't create trust. This setup addresses a key question: how can we trust AI-generated answers, especially for queries we can't immediately verify?
The citation systems in leading AI platforms change how we interact with artificial intelligence.
Recent studies from Vectara show that hallucination rates in leading AI models are now significantly lower than they once were: Google's Gemini shows just a 1.3% hallucination rate, with GPT-4o at 1.5%, demonstrating remarkable progress in AI accuracy (Vectara, 2025).
These improvements in factual reliability, combined with transparent citation systems, show how AI platforms now compete to deliver not just answers but verifiable information users can trust.
The industry's rapid advancement in reducing hallucinations while improving citation capabilities illustrates the growing maturity of AI as a reliable information partner.
The citation approach in AI interfaces follows a framework based on query structure.
This framework considers three intersecting binaries: objective versus subjective, branded versus unbranded, and simple versus complex.
Objective questions seek factual information with clear right or wrong answers, while subjective questions involve matters of opinion, preference, or personal experience.
Branded queries reference specific companies, products, or services, while unbranded queries are general in nature.
Simple questions require straightforward, direct answers, while complex questions need more nuanced, multi-faceted responses.
When these three dimensions combine, they create eight distinct query spaces (see the diagram), each with its own citation profile.

These Three Dimensions Combine to Form Eight Separate Query Spaces.
A simple, branded objective question like "What time does this Wendy's open?" draws from different sources than a complex, unbranded, subjective query like "Find me a better financial advisor."
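To make the framework concrete, here is a minimal Python sketch of how a query might be mapped onto one of the eight spaces. The `Query` class and labels are my own illustration, not any platform's actual taxonomy:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Query:
    objective: bool  # factual (True) vs. opinion-based (False)
    branded: bool    # names a specific company/product (True) or not
    simple: bool     # direct answer (True) vs. multi-faceted (False)

def query_space(q: Query) -> str:
    """Map a query onto one of the eight spaces the three binaries create."""
    return "/".join([
        "objective" if q.objective else "subjective",
        "branded" if q.branded else "unbranded",
        "simple" if q.simple else "complex",
    ])

# "What time does this Wendy's open?"
print(query_space(Query(objective=True, branded=True, simple=True)))
# "Find me a better financial advisor."
print(query_space(Query(objective=False, branded=False, simple=False)))
```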
Different AI platforms have implemented citation features with varying approaches.
Google's Gemini embeds citation icons directly within answers that users can click to view sources, while Microsoft's Copilot places citations along the bottom, similar to academic footnotes.
These design choices reflect different philosophies about integrating citations with the user experience.
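A rough sketch of the two rendering philosophies, assuming a hypothetical answer already split into (text, source URL) segments; neither function reflects Gemini's or Copilot's real internals:

```python
def render_inline(segments):
    """Inline style (Gemini-like): a clickable marker follows each claim."""
    return " ".join(
        f"{text} [{i}]({src})" for i, (text, src) in enumerate(segments, 1)
    )

def render_footnotes(segments):
    """Footnote style (Copilot-like): numbered sources sit at the bottom."""
    body = " ".join(f"{text}[{i}]" for i, (text, _) in enumerate(segments, 1))
    notes = "\n".join(f"[{i}] {src}" for i, (_, src) in enumerate(segments, 1))
    return f"{body}\n\n{notes}"

segments = [
    ("Gemini embeds citation icons in the answer.", "https://example.com/a"),
    ("Copilot lists citations along the bottom.", "https://example.com/b"),
]
print(render_inline(segments))
print(render_footnotes(segments))
```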
For complex queries, AI systems often respond with clarifying questions before providing an answer.
This interactive process helps refine the query and ensures the cited sources address the user's intent.
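Here is a toy version of that clarify-then-answer loop; the complexity test and `ask_user` callback are invented for illustration:

```python
def answer(query: str, ask_user) -> str:
    """Toy clarify-then-answer loop: a query judged complex triggers one
    clarifying question before any sources are retrieved."""
    if len(query.split()) >= 6 or "better" in query:  # crude complexity test
        detail = ask_user("What matters most to you here?")
        query = f"{query} (priority: {detail})"
    return f"Retrieving and citing sources for: {query}"

# Simulate the user answering the clarifying question:
print(answer("Find me a better financial advisor", lambda q: "low fees"))
```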
Citation accuracy matters as AI systems integrate into critical decision-making processes.
In medical research, AI's ability to generate credible references has significant implications for patient care and scientific advancement.
The transparency provided by citations helps users identify potential biases in AI responses.
When we can see which sources inform an answer, we can better evaluate whether those sources present a balanced view or might skew toward particular perspectives.
By showing the decision-making process and underlying algorithms, users can understand and evaluate the reliability and fairness of AI systems, enhancing trust in these technologies.
This transparency transforms AI from a mysterious "black box" into a more accountable research assistant.
Trust in AI isn't just a nice-to-have feature. It's fundamental to adoption.
AI transparency builds customer trust by ensuring fair, accurate, and explainable systems.
The companies that lead in trusted AI will likely lead the market.
AI citation systems continue to evolve with more sophisticated approaches.
OpenAI's Deep Research makes the research process visible to users with real-time updates, displaying charts and graphs on-screen, and annotating trends as it works through data (TechCrunch, 2025).
This allows users to observe how the AI reaches its conclusions.
MIT researchers have created ContextCite, a new tool that identifies the specific parts of external context used to generate any particular statement. This allows users to verify AI-generated information and detect potential "poisoning attacks" where malicious sources might trick AI systems (MIT News, 2024).
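Conceptually, attribution tools of this kind answer the question "which source made the model say this?" A naive leave-one-out sketch conveys the idea, though ContextCite itself uses far more efficient learned surrogate models; the scoring function here is a toy:

```python
def attribute_statement(statement, sources, score_fn):
    """Naive leave-one-out attribution: drop each context source in turn
    and measure how much support for the statement falls. (ContextCite
    itself uses far more efficient surrogate models.)"""
    full = score_fn(statement, sources)
    return {
        src: full - score_fn(statement, [s for s in sources if s != src])
        for src in sources
    }

# Toy scorer: count sources containing any word from the statement.
def toy_score(statement, sources):
    words = statement.lower().split()
    return sum(any(w in src.lower() for w in words) for src in sources)

sources = [
    "The store opens at 9am on weekdays.",
    "Reviews praise the friendly staff.",
]
print(attribute_statement("opens at 9am", sources, toy_score))
# The source with the largest drop is the one that drove the statement.
```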
These advancements signal where citation technology is headed.
Future AI citations will likely include more nuanced evaluation of source credibility.
Beyond providing links, AI systems might eventually explain why certain sources were prioritized and how conflicting information was reconciled.
Most major citation styles now provide specific guidance for referencing AI-generated content, including details like version information, prompt text, and generation date.
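As a sketch of what such a reference might look like, here is a loosely APA-flavored formatter; the exact template and field order vary by style guide, so treat this as illustrative:

```python
from datetime import date

def cite_ai(publisher, model, version, prompt, generated, url):
    """Loosely APA-flavored reference for AI-generated content, carrying
    the fields most style guides now ask for (version, prompt, date)."""
    return (
        f'{publisher}. ({generated.year}). {model} ({version}) '
        f'[Large language model]. Response to the prompt "{prompt}", '
        f'generated {generated.isoformat()}. {url}'
    )

print(cite_ai("OpenAI", "ChatGPT", "GPT-4o",
              "What drives trust in AI?", date(2025, 5, 1),
              "https://chat.openai.com/"))
```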
Patterns in Citation Sources
When we examine how the three dimensions of a query affect citation patterns, distinct patterns emerge for each combination.
For example:
When users ask objective, complex, and unbranded questions, AI systems prioritize sources from search results, government agencies, professional associations, and educational resources.
These sources provide factual, authoritative information that can be verified.
For unbranded but subjective and straightforward requests, citation sources shift dramatically: the AI pulls more from user review platforms, local blogs, and social media content, sources that capture personal experiences rather than objective facts.
When handling branded, objective queries about specific businesses, AI systems rely on official websites, structured data, and verified directories.
This approach ensures that company-specific information comes from authorized sources.
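Collecting those observations into one lookup makes the pattern explicit. The table below covers only the three spaces described above; the branded example's key assumes a simple query, and the remaining five spaces would be filled in the same way:

```python
# Hypothetical lookup mirroring the three patterns described above.
PREFERRED_SOURCES = {
    ("objective", "unbranded", "complex"): [
        "search results", "government agencies",
        "professional associations", "educational resources",
    ],
    ("subjective", "unbranded", "simple"): [
        "user review platforms", "local blogs", "social media",
    ],
    ("objective", "branded", "simple"): [
        "official websites", "structured data", "verified directories",
    ],
}

def likely_sources(objective, branded, simple):
    key = (
        "objective" if objective else "subjective",
        "branded" if branded else "unbranded",
        "simple" if simple else "complex",
    )
    return PREFERRED_SOURCES.get(key, ["(pattern not documented above)"])

print(likely_sources(objective=True, branded=True, simple=True))
```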
This three-part framework shows how the structure of your query directly influences which sources the AI will cite.
Understanding this relationship gives users more control over the information they receive: it helps them craft more effective queries and better evaluate the answers that come back.
Despite progress, AI citation systems face significant challenges.
According to the Columbia Journalism Review's study of eight AI search engines, premium chatbots provided more confidently incorrect answers than their free counterparts, and multiple chatbots bypassed publishers' robots.txt exclusion preferences.
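Respecting those preferences is a one-check affair with Python's standard library; the user-agent string below is hypothetical:

```python
from urllib.parse import urlsplit
from urllib.robotparser import RobotFileParser

def may_fetch(url: str, user_agent: str = "example-citation-bot") -> bool:
    """Check a site's robots.txt before fetching a page to cite."""
    parts = urlsplit(url)
    rp = RobotFileParser()
    rp.set_url(f"{parts.scheme}://{parts.netloc}/robots.txt")
    rp.read()  # downloads and parses the live robots.txt
    return rp.can_fetch(user_agent, url)

# if may_fetch("https://example.com/article"):
#     ...fetch, summarize, and cite the page...
```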
Trust in AI shouldn't be viewed as binary; I don't want to completely trust or completely distrust anything from these tools.
Researchers propose "algorithmic vigilance" as an ideal midpoint between "algorithm aversion" (extreme distrust) and "loafing" (excessive trust leading to complacency).
Citations help users maintain this vigilant stance.

This Max Trust Zone Will Also Disappear at Some Point.
Some research suggests that transparency in AI decisions can harm trust in specific contexts, affecting user confidence in decision-making tools.
By showing their work through transparent sourcing, AI platforms build the trust necessary for deeper integration into our decision-making processes.
For users, the key insight remains: query structure determines citation sources.
And citations matter.