Large Language Models (LLMs) changed the way I acquire Japanese vocabulary.


Using LLMs as I am about to outline requires that you can already read Japanese in a general sense: fluency in the kana and some command of the jouyou kanji. This is not a hard requirement--you could get by with kana alone and interact with the LLM in just kana and romaji--but I would personally find that less efficient and more frustrating than using kanji.

Using LLMs

There is no secret trick to this technique. You simply talk to the LLM. At the time of this writing, "talking" is via text, but in the not-so-distant future I would not be surprised if literally speaking with LLMs is a feature available to all netizens at little or no cost and with no technical complexity. The beauty of talking with LLMs is how many non-essential things from the "old way" you can avoid. I'll cover that in the next section. For now, I'll just describe my interactions.

Say I wanted to learn about the differences between Shinto shrines (神社, "jinja") and Buddhist temples (寺院, "ji-in") in Japan. All I have to do is ask. I don't have to hunt through Japanese content that likely contains ads and asides, then repeat that process whenever I want to dive into something tangentially related.

Here is a truncated and annotated exchange with X's Grok--you can use any LLM that "speaks" Japanese.

[Please explain the differences between Jinja and Ji-in.]

こんにちは!神社と寺院はどちらも日本で信仰の対象となる場所ですが、いくつか違いがあります。


  • 神道の信仰の対象となる場所です。
  • 神道では、自然や精霊を崇拝します。
  • 御神体(おしんたい)と呼ばれる祭祀の対象があり、その周囲に建てられています。
  • 鳥居(とりい)、手水舎(ちょうずしゃ)、狛犬(こまいぬ)など、神社の特徴的な建造物があります。
  • 例:伊勢神宮、出雲大社、明治神宮

My query is not even perfectly grammatical Japanese--I left it that way to show how resilient and accommodating LLMs are when you interact with them in a language that is not your native one.

The key to the response is the exposure to vocabulary related to shrines and temples--vocabulary that would then allow me to have better conversations on the subject with other humans. Grok provides plenty of useful terms:

  • 神道: Shinto
  • 信仰: religious faith
  • 精霊: spirits
  • 参拝: visiting and worshipping at a shrine
  • 祭祀: ritual

and more. In the truncated excerpt above, Grok ends the list with an example of jinja, one of which is 明治神宮 (Meiji Jinguu), a famous shrine in Tokyo. If we wanted to learn more about Meiji Jinguu in particular, all we have to do is ask.

Switching gears, now I want to learn what kind of vocabulary would be useful when registering a sole proprietorship in Japan:

[I'm thinking of starting a sole proprietorship. What kind of vocabulary might be useful?]

And Grok simply gives me a list of vocabulary. (I actually used this output, and it was very helpful.)

Changing pace again: enough with lists; I'd like longer-form content. No problem.

[Please teach me about the history of Tokyo.]

And it replies with long-form content.

Acquiring new vocabulary is as simple as reading the output, picking out what you don't know, and if you're into SRS, making cards for the new terms, complete with context sentences.
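If you do use SRS, the card-making step can even be partially automated. Here is a minimal Python sketch that pulls "term: gloss" pairs out of an LLM's bulleted output and writes them to a tab-separated file that Anki's text importer accepts; the bullet format and the vocab.tsv filename are assumptions for illustration, not part of any LLM's guaranteed output.

```python
# Sketch: turn an LLM's "term: gloss" bullet list into a TSV file
# (Front<TAB>Back) suitable for Anki's text import.
# Assumption: the LLM emits lines shaped like "• 神道: Shinto".
import csv
import re

def extract_vocab(llm_output: str) -> list[tuple[str, str]]:
    """Pull (term, gloss) pairs out of bullet lines like '• 神道: Shinto'."""
    pairs = []
    for line in llm_output.splitlines():
        # Accept •, -, or * bullets and either an ASCII or fullwidth colon.
        match = re.match(r"^\s*[•\-*]\s*(.+?)\s*[::]\s*(.+)$", line)
        if match:
            pairs.append((match.group(1), match.group(2)))
    return pairs

def write_anki_tsv(pairs: list[tuple[str, str]], path: str = "vocab.tsv") -> None:
    """Write pairs as tab-separated rows: term on the front, gloss on the back."""
    with open(path, "w", newline="", encoding="utf-8") as f:
        csv.writer(f, delimiter="\t").writerows(pairs)

sample = """
• 神道: Shinto
• 信仰: religious faith
• 祭祀: ritual
"""
write_anki_tsv(extract_vocab(sample))
```

From there, Anki's import dialog maps the two columns to the front and back of each card; context sentences could be added as a third column in the same way.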

The whole process is even easier if you interact with the LLM on a device that supports browser extensions that translate a word into a language you understand when you mouse over it.

Before LLMs

The above is just a description of interacting with an LLM, nothing novel or special to anyone who has done so routinely before. What makes it special to me is how it compares to my previous way of acquiring targeted Japanese. In particular, lately I've wanted to learn vocabulary about banking and software engineering. Doing this the now old-fashioned way of browsing the internet and reading books is fruitful but simply slower than interacting with an LLM. It's slower because there is often a lot of fluff around what I want to learn--introductions, asides, personal thoughts, entire pieces of content that never go into any substantial depth, and the like. And the moment I find something that piques my interest but is not explored further in the content I'm currently reading, I have to repeat the searching and filtering process all over again.

One specific example of what my vocabulary acquisition process used to look like: using my current employer's access to O'Reilly Learning, I would read digital Japanese translations of books on software topics, such as the Japanese version of High Performance Python. I would read until I hit a word I didn't understand, and there was a high probability that the word was not in the specific domain I was trying to learn. Textbooks are long, and modern technical textbooks read more like stories with technical details than like recipes or algorithm definitions.

LLMs, true to their nature, summarize and distill information on user-provided topics and so are a huge efficiency gain in targeted language acquisition.

On Factual Correctness

LLMs do not always tell the "truth," but the beauty in this context is that it doesn't matter. All that matters is that they use vocabulary you don't know but want to learn.