The Korea Herald

[Tyler Cowen] How should you talk to ChatGPT?

By Korea Herald

Published: Feb. 9, 2023 - 05:31

About 100 million people used ChatGPT in January, according to one estimate, which would make it the fastest-growing user base ever. Yet I often speak to people who are less than impressed with ChatGPT: citing its mistakes and banalities, they suggest it is a passing fad.

In response, allow me to offer a short guide to using ChatGPT. It can do many things for you -- organize your notes, correct your grammar, work with mathematical symbols. But I will focus on the most basic use: querying it. To use it well, you need to let go of some of your intuitions about talking to humans. ChatGPT is a bot.

If you are talking to a human and he or she doesn’t understand you, you might make your question simpler. That is exactly the wrong tack to take with ChatGPT. You will get a better result by making your question more detailed and complex.

Ask ChatGPT “What is Marxism?” for example, and you will get a passable answer, probably no better than what you would get from Wikipedia or Google. Instead, make the question more specific: “What were the important developments in French Marxism in the second half of the 19th century?” ChatGPT will do much better -- and that is also the kind of question that is hard to answer with Google or Wikipedia.
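
For the technically inclined: the same principle carries over if you reach ChatGPT through OpenAI’s API rather than the chat window. Here is a minimal sketch in Python -- the model name and setup are illustrative assumptions, not part of the column’s argument:

```python
# A minimal sketch of the vague-versus-specific principle, using the
# official OpenAI Python client. The model name is an assumption; any
# chat-capable model would do. Requires OPENAI_API_KEY to be set.
from openai import OpenAI

client = OpenAI()

vague = "What is Marxism?"
specific = ("What were the important developments in French Marxism "
            "in the second half of the 19th century?")

for prompt in (vague, specific):
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed model name
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)
    print("---")
```

In practice, the second prompt tends to return the more substantive answer, for exactly the reasons described above.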

ChatGPT will do better yet if you ask it sequential questions along an evolving line of inquiry. Ask it about the specific French Marxists it cites, what they did, and how they differed from their German counterparts. Keep on going.
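
In API terms, an evolving line of inquiry means sending the whole conversation so far along with each new question, so the model keeps its context. A hedged sketch, with the follow-up questions invented for illustration:

```python
# Sketch of sequential questioning: each answer is appended to the
# running message list, so every new question sees the full history.
from openai import OpenAI

client = OpenAI()
messages = []

questions = [
    "What were the important developments in French Marxism "
    "in the second half of the 19th century?",
    "Tell me more about the specific French Marxists you just cited.",
    "How did they differ from their German counterparts?",
]

for question in questions:
    messages.append({"role": "user", "content": question})
    reply = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed model name
        messages=messages,
    )
    answer = reply.choices[0].message.content
    messages.append({"role": "assistant", "content": answer})
    print(answer)
    print("---")
```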

ChatGPT does especially well at “compare and contrast.” In essence, ChatGPT needs you to point it in the right direction. A finely honed question gives it more fixed points of reference. You need to set the mood, tone and intellectual level of your question, depending on the kind of answer you want. It’s not unlike trying to steer the conversation at a dinner party. Or, to use another analogy, think of working with ChatGPT as akin to training a dog.

Another way to hone ChatGPT’s capabilities is to ask it for responses in the voice of a third person. Ask, “What are the costs of inflation?” and you might get answers that aren’t wrong exactly, but neither are they impressive. Instead, try this: “What are the costs of inflation? Please answer using the ideas of Milton Friedman.”

By mentioning Friedman, you have pointed it to a more intelligent corner of the ideas universe. If Friedman isn’t the right guide for you, choose another economist (don’t forget yours truly!). Better yet, ask it to compare and contrast the views of two economists.
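
Persona prompting of this kind maps naturally onto the API’s system message. A small sketch, with the wording of the instruction (and the choice of a second economist) being my own illustrative additions:

```python
# Sketch of persona prompting: a system message steers the model
# toward a named thinker, or toward comparing two of them.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # assumed model name
    messages=[
        {"role": "system",
         "content": ("Answer using the ideas of Milton Friedman, and "
                     "contrast them with those of John Maynard Keynes.")},
        {"role": "user", "content": "What are the costs of inflation?"},
    ],
)
print(response.choices[0].message.content)
```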

To understand this unusual behavior, it helps to know a bit about how these models work. They do not reason, and they do not consult the internet (though they have read the internet previously). Instead, ChatGPT works by trying to predict which words should come next to provide a likely and satisfying answer.

Here is a simple example: If I said, “The Star-Spangled (blank),” you would probably guess the next word is “Banner,” because the US national anthem is called “The Star-Spangled Banner,” and you often hear those words in that order. That challenge is an easy one, but for most questions or prompts the proper sequence isn’t so obvious. ChatGPT needs some guidance from you to set off in the right direction. Act like a dog trainer and let it know what you are looking for.
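
To make the prediction idea concrete, here is a toy next-word predictor built from frequency counts. It is a deliberate oversimplification -- ChatGPT uses large neural networks, not a lookup table -- but the underlying task is the same:

```python
# Toy next-word prediction by counting which word follows which.
# Real models learn such regularities statistically at vast scale;
# this bigram counter only conveys the core idea.
from collections import Counter, defaultdict

corpus = ("oh say can you see the star spangled banner "
          "the star spangled banner is the national anthem").split()

following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def predict(word):
    """Return the word most often seen after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict("spangled"))  # prints "banner"
```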

ChatGPT is most likely to go wrong when it is trying too hard to please you. I have found that it creates associations, affiliations, co-authorships and even marriages that do not exist. I was recently using it with the legal scholar Cass Sunstein, a former Bloomberg Opinion columnist, and we asked it whether the two of us had written a book together. The genius of ChatGPT is that it hit upon exactly the book we might have written, given our overlapping areas of research interest -- a book on the philosophical foundations of cost-benefit analysis.

The absurdity is that no such co-authored book exists. Perhaps ChatGPT has been trained on bodies of text where most queries of affiliation are answered in the affirmative, and so to fulfill its task of predicting words well, it will make things up. So while it is wonderful for helping with your workflow or generating ideas, ChatGPT is not good as a fact-checker.

I didn’t use ChatGPT in drafting this column, by the way. If I had, however, I know exactly how I would have phrased my query -- and I would have been sure to ask for something in the style of Tyler Cowen.

Tyler Cowen is a Bloomberg Opinion columnist. -- Ed.

(Tribune Content Agency)