  1. What does Keras Tokenizer method exactly do? - Stack Overflow

    On occasion, circumstances require us to do the following: from keras.preprocessing.text import Tokenizer tokenizer = Tokenizer(num_words=my_max) Then, invariably, we chant this mantra: …
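
    A minimal sketch of that flow, assuming older Keras (keras.preprocessing.text was removed in Keras 3) and illustrative sample texts: fit_on_texts() builds the vocabulary, texts_to_sequences() maps each text to integer ids.

      from keras.preprocessing.text import Tokenizer

      texts = ["the cat sat on the mat", "the dog ate my homework"]

      tokenizer = Tokenizer(num_words=100)   # keep only the 100 most frequent words
      tokenizer.fit_on_texts(texts)          # build the word -> index vocabulary
      sequences = tokenizer.texts_to_sequences(texts)  # texts -> lists of ids

      print(tokenizer.word_index)  # e.g. {'the': 1, 'cat': 2, ...}
      print(sequences)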

  2. How to do Tokenizer Batch processing? - HuggingFace

    Jun 7, 2023 · In the Tokenizer documentation from HuggingFace, the call function accepts List[List[str]] and says: text (str, List[str], List[List[str]], optional) — The sequence or batch of sequences to be …
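
    A minimal sketch of batch tokenization, assuming a placeholder model name and sentences; passing a List[str] tokenizes the whole batch in one call, and is_split_into_words=True handles List[List[str]] of pre-split words:

      from transformers import AutoTokenizer

      tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

      batch = ["First sentence.", "A second, slightly longer sentence."]
      encoded = tokenizer(batch, padding=True, truncation=True, return_tensors="pt")
      print(encoded["input_ids"].shape)  # (batch_size, longest_seq_in_batch)

      pre_split = [["First", "sentence", "."], ["Second", "one"]]
      encoded2 = tokenizer(pre_split, is_split_into_words=True, padding=True)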

  3. Unable to get the tokenizer of Gemma-3 - Stack Overflow

    Mar 22, 2025 · I am trying to get the tokenizer using the HuggingFace AutoTokenizer library, but I am unable to fetch it. Is there any other way to get it? Where am I going wrong?
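
    A hedged sketch of one likely cause: Gemma-3 repositories on the Hub are gated, so fetching fails until the license is accepted and you authenticate. The repo id and token handling below are assumptions, not from the question:

      from transformers import AutoTokenizer

      tokenizer = AutoTokenizer.from_pretrained(
          "google/gemma-3-4b-it",  # assumed repo id; check the exact name on the Hub
          token="hf_...",          # or authenticate via `huggingface-cli login`
      )
      print(tokenizer("Hello Gemma")["input_ids"])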

  4. Looking for a clear definition of what a "tokenizer", "parser" and ...

    Mar 28, 2018 · A tokenizer breaks a stream of text into tokens, usually by looking for whitespace (tabs, spaces, new lines). A lexer is basically a tokenizer, but it usually attaches extra context to the tokens …
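
    A toy illustration of the distinction, with an assumed token grammar: the tokenizer only splits the text, while the lexer also labels each piece with a category:

      import re

      def tokenize(text):
          return text.split()  # split on whitespace only

      def lex(text):
          spec = [("NUMBER", r"\d+"), ("NAME", r"[A-Za-z]+"), ("OP", r"[+\-*/=]")]
          pattern = "|".join(f"(?P<{name}>{rx})" for name, rx in spec)
          return [(m.lastgroup, m.group()) for m in re.finditer(pattern, text)]

      print(tokenize("x = 1 + 2"))  # ['x', '=', '1', '+', '2']
      print(lex("x = 1 + 2"))       # [('NAME', 'x'), ('OP', '='), ('NUMBER', '1'), ...]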

  5. OpenAI API: How do I count tokens before(!) I send an API request?

    Mar 21, 2023 · How do I count tokens before (!) I send an API request? As stated in the official OpenAI article: To further explore tokenization, you can use our interactive Tokenizer tool, which allows you …
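
    A minimal sketch of counting tokens locally with OpenAI's tiktoken library before sending a request; the model name is an assumption, and chat requests add a few tokens of per-message overhead on top of the raw text count:

      import tiktoken

      enc = tiktoken.encoding_for_model("gpt-4o")  # picks the matching encoding
      text = "How many tokens is this sentence?"
      print(len(enc.encode(text)))  # token count for the raw text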

  6. How to add new tokens to an existing Huggingface tokenizer?

    May 8, 2023 · And then it points to the train_new_from_iterator() function in Chapter 7, but I can't seem to find a reference for how to use it to extend the tokenizer without re-training it.
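
    train_new_from_iterator() retrains a tokenizer on a new corpus; to merely extend an existing one, add_tokens() plus an embedding resize is the usual route. A minimal sketch with placeholder names:

      from transformers import AutoTokenizer, AutoModel

      tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
      model = AutoModel.from_pretrained("bert-base-uncased")

      num_added = tokenizer.add_tokens(["<new_tok1>", "<new_tok2>"])
      # New ids need (randomly initialized) embedding rows:
      model.resize_token_embeddings(len(tokenizer))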

  7. How to add all standard special tokens to my hugging face tokenizer …

    Aug 11, 2022 · I want all special tokens to always be available. How do I do this? My first attempt at giving them to my tokenizer: def does_t5_have_sep_token(): tokenizer: PreTrainedTokenizerFast = …
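
    A minimal sketch of registering standard special tokens on a T5 tokenizer (which ships without a [SEP] token); the token strings here are assumptions:

      from transformers import AutoTokenizer

      tokenizer = AutoTokenizer.from_pretrained("t5-small")
      print(tokenizer.sep_token)  # None for T5

      tokenizer.add_special_tokens(
          {"sep_token": "<sep>", "cls_token": "<cls>", "mask_token": "<mask>"}
      )
      print(tokenizer.all_special_tokens)
      # If used with a model, call model.resize_token_embeddings(len(tokenizer)).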

  8. python - AutoTokenizer.from_pretrained fails to load locally saved ...

    from transformers import AutoTokenizer, AutoConfig
    tokenizer = AutoTokenizer.from_pretrained('distilroberta-base')
    config = AutoConfig.from_pretrained('distilroberta …
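
    A minimal sketch of the save/reload round trip, assuming a placeholder directory; save_pretrained() writes every file (vocab, merges, tokenizer_config.json) that from_pretrained() later needs:

      from transformers import AutoTokenizer

      tokenizer = AutoTokenizer.from_pretrained("distilroberta-base")
      tokenizer.save_pretrained("./my_tokenizer")

      reloaded = AutoTokenizer.from_pretrained("./my_tokenizer")
      assert reloaded("hi")["input_ids"] == tokenizer("hi")["input_ids"]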

  9. How to apply max_length to truncate the token sequence from the left …

    May 11, 2022 · In the HuggingFace tokenizer, applying the max_length argument specifies the length of the tokenized text. I believe it truncates the sequence to max_length-2 (if truncation=True) by cutting …
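
    A minimal sketch of left-side truncation: the tokenizer's truncation_side attribute controls which end is cut, so the tail of a long input survives:

      from transformers import AutoTokenizer

      tokenizer = AutoTokenizer.from_pretrained(
          "bert-base-uncased", truncation_side="left"
      )
      # Equivalent after loading: tokenizer.truncation_side = "left"

      enc = tokenizer("a very long input " * 50, max_length=16, truncation=True)
      print(len(enc["input_ids"]))  # 16, keeping the end of the text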

  10. Building a custom tokenizer via HuggingFace Tokenizers library from ...

    Mar 4, 2024 · I am trying to create a custom Tokenizer via the HuggingFace Tokenizers library from scratch, following this tutorial. My dataset consists of 80 million Chinese sentences. The structure of my …
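
    A minimal sketch of training from scratch with the tokenizers library; the corpus path, vocab size, and special tokens are placeholders (for Chinese text a different pre-tokenizer or a Unigram model may fit better):

      from tokenizers import Tokenizer, models, pre_tokenizers, trainers

      tokenizer = Tokenizer(models.BPE(unk_token="[UNK]"))
      tokenizer.pre_tokenizer = pre_tokenizers.Whitespace()

      trainer = trainers.BpeTrainer(
          vocab_size=30000,
          special_tokens=["[UNK]", "[PAD]", "[CLS]", "[SEP]", "[MASK]"],
      )
      tokenizer.train(files=["corpus.txt"], trainer=trainer)  # assumed corpus file
      tokenizer.save("my_tokenizer.json")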