Which tokenizer splits the text field into tokens, treating whitespace and punctuation as delimiters?
a) Lower Case Tokenizer
b) Standard Tokenizer
c) Classic Tokenizer
d) ICU Tokenizer
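For intuition, here is a rough Python sketch of the splitting behavior the question describes. This is only an approximation: Lucene's Standard Tokenizer actually follows the Unicode Text Segmentation rules (UAX #29) rather than a simple regex, and the function name below is invented for illustration.

```python
import re

def tokenize(text):
    # Approximate "whitespace and punctuation as delimiters" by keeping
    # only runs of word characters; everything else acts as a separator.
    return re.findall(r"\w+", text)

print(tokenize("The quick-brown fox, jumped!"))
# → ['The', 'quick', 'brown', 'fox', 'jumped']
```

Note how the hyphen and comma act as delimiters here, so "quick-brown" becomes two tokens.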