Solution: c) Tokenization.
Reason: Tokenization is a pre-processing step in NLP that breaks text into individual units called tokens, typically words, subwords, or punctuation marks.
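
A minimal sketch of word-level tokenization, using only Python's standard `re` module; the regex pattern and example sentence are illustrative assumptions, not part of the original question:

```python
import re

def tokenize(text):
    # Match runs of word characters, or any single character
    # that is neither a word character nor whitespace (punctuation).
    return re.findall(r"\w+|[^\w\s]", text)

print(tokenize("Tokenization breaks text into tokens."))
# ['Tokenization', 'breaks', 'text', 'into', 'tokens', '.']
```

In practice, NLP libraries provide more robust tokenizers (for example, handling contractions or subword units), but the core idea is the same: turning raw text into a sequence of tokens for downstream processing.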