Out-of-vocabulary (OOV) words pose a common challenge in language modeling, particularly when a model encounters rare or unseen terms during inference. To address this challenge, language models employ techniques such as subword tokenization and character-level modeling. Subword tokenization algorithms, such as Byte Pair Encoding (BPE) and WordPiece, learn a fixed vocabulary of subword units from corpus statistics, typically by iteratively merging the most frequent adjacent symbol pairs. An unseen word can then be segmented into known subwords rather than mapped to a catch-all unknown token, enabling the model to handle OOV words more effectively.
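To make the merge procedure concrete, here is a minimal sketch of BPE vocabulary learning in Python, in the style of the classic algorithm introduced by Sennrich et al. The toy corpus and the `num_merges` setting are illustrative assumptions, not values from any real system; production vocabularies typically contain tens of thousands of merges.

```python
import re
from collections import Counter

def get_pair_counts(vocab):
    """Count occurrences of adjacent symbol pairs across all words."""
    pairs = Counter()
    for word, freq in vocab.items():
        symbols = word.split()
        for i in range(len(symbols) - 1):
            pairs[(symbols[i], symbols[i + 1])] += freq
    return pairs

def merge_pair(pair, vocab):
    """Replace every standalone occurrence of `pair` with the merged symbol."""
    bigram = re.escape(" ".join(pair))
    pattern = re.compile(r"(?<!\S)" + bigram + r"(?!\S)")
    replacement = "".join(pair)
    return {pattern.sub(replacement, word): freq for word, freq in vocab.items()}

# Toy corpus: each word is a space-separated character sequence with an
# end-of-word marker (</w>), mapped to its frequency in the corpus.
vocab = {
    "l o w </w>": 5,
    "l o w e r </w>": 2,
    "n e w e s t </w>": 6,
    "w i d e s t </w>": 3,
}

num_merges = 10  # illustrative; real vocabularies use far more merges
for _ in range(num_merges):
    pairs = get_pair_counts(vocab)
    if not pairs:
        break
    best = max(pairs, key=pairs.get)  # most frequent adjacent pair
    vocab = merge_pair(best, vocab)
    print(best)
```

At inference time, the learned merges are replayed in order on any input word, so even a word never seen during training decomposes into subwords from the fixed vocabulary (for example, a novel word like "lowest" could be covered by the learned units "low" and "est").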