A few months ago I needed to understand how municipal bond taxation works. Not at a surface level (I'd seen the basic explanation a hundred times) but specifically: how does the tax treatment differ across states, what are the edge cases that trip people up, and what questions should I be asking a financial advisor before making a decision?

I spent twenty minutes with Claude. By the end I understood the mechanics well enough to have an informed conversation with a professional, spot where the standard advice wouldn't apply to me, and identify the two or three things I actually needed to look up from primary sources. That's a different outcome from a Google search, which would have given me the same basic explainer I'd read before.

The difference wasn't the AI. It was the method. Here is the method.

Why Most People Get Surface-Level Answers

When most people start researching a topic with AI, they ask a single question: "Explain [topic] to me." The AI gives a competent overview. They read it, feel like they understand it, and move on. Two days later, they realise they still don't understand the specific part that was relevant to their actual situation.

The problem is not that the AI gave a bad answer. The problem is that a single overview question produces a single overview answer. An overview is not understanding. Understanding comes from working through the topic: asking follow-up questions, testing your mental model against edge cases, having gaps exposed and filled.

AI is well-suited to that kind of iterative, conversational research. But it only works that way if you treat it that way. Here is how.

Step 1: Start With an Orientation Question

Before you ask about the specific thing you want to know, ask for a map of the territory. Something like: "I want to understand [topic]. Before I ask specific questions, give me a brief overview of the main concepts I'll need to understand, the most important distinctions in the field, and the common misconceptions people bring to it."

This does a few things. It gives you vocabulary. It surfaces the structure of the topic before you get into it. And the misconceptions piece is particularly useful: it tells you what traps to avoid before you walk into them.

For the municipal bonds question, I started with: "Give me the map of municipal bond taxation, covering the key concepts, the main distinctions that matter, and the most common things people misunderstand about it." That single response gave me enough vocabulary to ask useful follow-up questions instead of generic ones.

Step 2: Ask About the Part That's Actually Relevant to You

Once you have the overview, narrow down. Tell the AI specifically what your situation is and what you need to understand. "I'm a resident of [state], considering [specific type of investment], trying to figure out [specific question]. Given what I just learned, what are the most important things I need to understand about how this applies to my situation?"

The more specific your situation, the more useful the answer. Vague questions produce generic answers. Specific questions, even about unfamiliar topics, produce targeted answers because the AI has enough context to focus.

This step is where most people's AI research sessions end. It should be where they shift gears.

Step 3: Test Your Understanding With a Specific Case

After you have received an answer you think you understand, test it with a concrete scenario. "Let me make sure I understand this correctly. If [specific situation], then [what I think would happen]. Is that right? What am I missing?"

This step is disproportionately valuable. It forces you to translate abstract information into a concrete situation, which is where misunderstandings reveal themselves. It also gives the AI specific feedback to work with: if your mental model is wrong, the AI can correct the specific error rather than re-explaining the whole topic.

I find that most of my genuine understanding of a topic comes from this step, not from reading the overview. The overview tells me what I should know. The scenario test shows me whether I actually know it.

Step 4: Ask for the Edge Cases and Exceptions

"What are the situations where the standard guidance doesn't apply? What are the edge cases that catch people out? What should I be alert to that isn't covered in the basic explanation?"

Every field has edge cases that the introductory material glosses over because they make the explanation more complicated. Those edge cases are often exactly where the decisions that matter to you live. Asking for them explicitly surfaces the complexity that the overview left out.

This question also tends to be where AI is most useful relative to a Google search. Google returns the standard explanation, optimised for the most common query. The edge cases are buried in forums, specialist articles, and professional resources that require knowing what to search for. If you don't already know the terminology, you can't find them. With AI, you can ask for them directly.

Step 5: Ask What You Should Look Up From Primary Sources

AI is trained on data with a cutoff date. It can be confidently wrong about recent changes, jurisdiction-specific rules, and anything that requires current information. After you have built your understanding, ask: "What parts of this should I verify from current primary sources, and where should I look?"

This step is not just a caution against AI errors. It is also a targeting tool. After working through a topic conversationally, you now know enough to know what you don't know โ€” which is exactly what you need to use primary sources efficiently. You are not searching broadly anymore. You are looking for a specific piece of current information to confirm or update a specific part of your mental model.

That is how professionals do research. They use secondary sources to build a framework, then use primary sources to fill the specific gaps. AI has made the framework-building step faster and more interactive. The primary source step is still yours to do.

A Note on Trusting the Output

None of this changes the fundamental limitation of AI on factual claims: it can be confidently wrong. The method above reduces the risk because you are testing your understanding rather than just accepting the output, and because you are explicitly asking for the parts that need verification. But it does not eliminate the risk.

The frame I find useful: treat AI research the same way you would treat a conversation with a smart, well-read friend who knows a lot about the topic but is not a licensed expert and may be out of date on specifics. Extremely useful for orientation and building a framework. Not a replacement for professional advice on decisions that matter, or for verifying specific facts against authoritative sources.

With that frame in mind, the method I have described is genuinely useful: not because it produces perfect information, but because it produces understanding, which is what makes the information you do get elsewhere usable.