Tokenizers split large chunks of text into small, searchable units called tokens.
Before text is indexed, it is first split into searchable units called tokens. The default tokenizer in ParadeDB is the simple tokenizer, which splits text at the word boundaries defined by Unicode Standard Annex #29. All characters are lowercased by default. To visualize how this tokenizer works, cast a text string to the tokenizer type, and then to text[]:
SELECT 'Hello world!'::pdb.simple::text[];
Expected Response
     text
---------------
 {hello,world}
(1 row)
On the other hand, the ngram tokenizer splits text into “grams” of size n. In this example, n = 3:
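Following the same visualization pattern as above, a trigram split can be sketched by casting through pdb.ngram(3,3) (the same cast used in the index definition below); the exact output formatting may vary by ParadeDB version:

```sql
SELECT 'cheese'::pdb.ngram(3,3)::text[];
```

This should produce the four overlapping trigrams of “cheese”: che, hee, ees, and ese.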
Choosing the right tokenizer is crucial to getting the search results you want. For instance, the simple tokenizer works best for whole-word matching like “hello” or “world”, while the ngram tokenizer enables partial matching. To configure a tokenizer for a column in the index, cast the column to the desired tokenizer type:
CREATE INDEX search_idx ON mock_items
USING bm25 (id, (description::pdb.ngram(3,3)))
WITH (key_field='id');
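Once the index is built, the ngram tokenization is what makes partial matches possible at query time. As a sketch only, assuming the mock_items table from ParadeDB's tutorial and ParadeDB's @@@ search operator, a query for a word fragment might look like:

```sql
-- Hypothetical example: 'chee' is tokenized into trigrams (che, hee),
-- which can match the trigrams produced from 'cheese' at index time.
SELECT description FROM mock_items
WHERE description @@@ 'chee';
```

With the simple tokenizer instead, this query would find nothing, since “chee” is not a whole word in the indexed text.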