AI can write a pretty good draft of a business email in about ten seconds. It cannot tell you whether that email is a good idea to send. Understanding that line, what AI does well versus what it doesn't, is where almost all practical AI literacy begins.

Most people who have tried ChatGPT come away with one of two reactions: either it felt like magic and they can't figure out why it sometimes fails so badly, or it gave them wrong information once and they haven't trusted it since. Both reactions are understandable. Both miss the more useful middle ground, which is a clearer sense of which tasks AI handles reliably and which it doesn't.

That's what this is. Not a review, not a ranking, not a list of prompts. Just an honest map.

Where ChatGPT Genuinely Helps

The tasks ChatGPT handles well have a common thread: they involve working with language that already exists, transforming it, reorganizing it, or generating more of it based on clear patterns. When you understand that, the strong use cases start to make sense.

Drafting and rewriting text. This is the clearest strong suit. Give ChatGPT a rough version of an email, a proposal summary, or a cover letter and ask it to make it more professional, shorter, or clearer; it does this reliably well. It's not that the AI "understands" your purpose better than you do; it's that it has absorbed enormous amounts of text and knows what well-structured professional writing looks like. Your job is to supply the intent and the facts. Its job is to give those a better form.

In my own work I've used this for drafting difficult emails: the kind you've been putting off because you can't find the right tone. You give ChatGPT the situation and what you want to say; it gives you a version that's more measured than what you'd write while frustrated. You edit it back toward your voice. The result is often better than either version alone.

Summarizing and condensing. Paste in a long document, an article, a set of meeting notes, or a research paper and ask for a plain-language summary. This works consistently well. The AI reads the text and surfaces the main points, usually accurately and far faster than reading it yourself. This is probably the most universally useful AI task for knowledge workers: the ability to process and distill text at speed.

Brainstorming and generating options. Ask ChatGPT to give you ten possible names for something, ten angles for an article, or ten questions you should ask before making a decision, and it returns answers quickly and usually includes several that are genuinely useful. It's not that every suggestion is good; it's that having ten options to evaluate is often faster than generating five from scratch. The AI expands the option space. You still do the judging.

Explaining concepts in plain language. "Explain compound interest like I'm 15." "Summarize what the Federal Reserve actually does." "What's the difference between a Roth and a Traditional IRA?" On explanatory tasks with stable, well-documented information, ChatGPT is reliable and often clearer than the first result you'd find on Google. The caveat, explained below, is that this reliability breaks down with topics that are specialized, recent, or contested.
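To make that concrete, here's the compound interest example worked through with illustrative numbers of my own: $1,000 at 5% annual interest, compounded yearly, grows to 1,000 × 1.05^10, or about $1,628.89, after ten years. Simple interest would give you $1,500; the extra $128.89 is interest earning interest. An explanation at that level is exactly what ChatGPT produces reliably, and because the arithmetic is stable and well documented, you can check it on any calculator.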

Three Things It Still Gets Wrong

The failures are as patterned as the strengths, which means they're learnable. Here are the three that catch people most often.

1. Current events and recent information

ChatGPT's knowledge has a training cutoff. The free version's cutoff is typically a year or more behind the present. Ask it about something that happened in the last six months and it either doesn't know, makes something up, or gives you information that's outdated. This is the source of many of the "it lied to me" experiences people have: not malice or randomness, but the model confidently applying patterns from old data to questions that require current information.

If you need current information, use Perplexity or ChatGPT with web browsing enabled (available in the paid version). Or just Google it. Knowing which tool fits which task is most of practical AI literacy.

2. Specific facts that need to be right

When AI confidently states something that's wrong (a wrong date, a misattributed quote, an invented statistic), that's called a hallucination. It's not a bug that will be fixed eventually; it's a structural feature of how large language models work. They predict what plausible text looks like. Plausible isn't the same as accurate.

This matters most when the stakes of being wrong are high. If you're asking AI to help you understand a concept, being approximately right is usually fine. If you're asking it for the specific interest rate on a loan, the exact terms of a contract, or a statistic you plan to cite publicly, verify independently. Always. Not because AI is usually wrong, but because when it is, the cost can be significant.

A useful mental model: treat ChatGPT answers the way you'd treat information from a very well-read friend who might be misremembering. You'd take their general explanation seriously and verify the specific facts before acting on them.

3. Judgment calls that require knowing you

Should you send that email? Is this job worth taking? Is the argument in this document actually sound, or does it just sound good? These are questions that require context, values, and situational judgment that the AI doesn't have. It can give you a framework for thinking about them. It cannot make the call.

This is the line the opening sentence was pointing at. AI is genuinely good at helping you express your judgment. It is not a substitute for having it. When people feel burned by following AI advice on something consequential, it's usually because they outsourced the judgment call, not just the drafting.

A Note on Claude, Gemini, and the Others

Most of what's described above applies across the main AI assistants: Claude (made by Anthropic), Gemini (Google), and Copilot (Microsoft's integration of OpenAI models into its products). The strengths and limitations are broadly similar because the underlying technology is similar.

There are differences worth knowing. Claude tends to be stronger at nuance in longer documents; if you're asking it to hold a complex argument across a long text, it often does this better than ChatGPT. Perplexity is specifically built for research tasks that require current information, with citations. Gemini is tightly integrated into Google Workspace, which matters if your work life runs on Google Docs and Gmail.

The practical takeaway: you don't need to master every tool. Start with one, learn where it helps in your specific work, and expand from there. Most of what you learn about one AI assistant transfers reasonably well to the others.

How to Use This

The most useful thing you can take from this isn't a list of what AI can and can't do. It's a habit of asking, before you start a task: "Is this a drafting/transforming/summarizing job, or a judgment call?" If it's the former, AI can meaningfully help. If it's the latter, AI can support your thinking but not replace it.

That distinction, applied consistently, is what separates people who find AI genuinely useful from people who are either underwhelmed or periodically embarrassed by it.

The practical AI question isn't "is AI trustworthy?" It's "trustworthy for what?" Answer that specifically, and most of the frustration goes away.

Tom Weston

Tom spent time watching technology transform industries from the inside, and created AI Tuning Fork to be the steady, practical guide he wished his friends had. He writes for people navigating AI without wanting to become technologists.