FlashTokenizer


The world's fastest CPU tokenizer library!

简体中文 | 한국어 | 日本語

EFFICIENT AND OPTIMIZED TOKENIZER ENGINE FOR LLM INFERENCE SERVING

FlashTokenizer is a high-performance C++ implementation of the BertTokenizer used for LLM inference. Like FlashAttention and FlashInfer in their domains, it aims for the highest speed and accuracy of any tokenizer, and it is 10 times faster than BertTokenizerFast in transformers.

Performance Benchmark Demo Video

Note

Why?

  • We need a tokenizer that is faster, more accurate, and easier to use than Huggingface's BertTokenizerFast. (link1, link2, link3)

  • PaddleNLP's BertTokenizerFast achieves a 1.2x performance improvement by porting Huggingface's Rust implementation to C++. However, using it requires installing both the massive PaddlePaddle and PaddleNLP packages.

  • Tensorflow-text's FastBertTokenizer is actually slower in comparison.

  • Microsoft's Blingfire takes over 8 hours to train on custom data and shows relatively lower accuracy.

  • RAPIDS cuDF provides a GPU-based BertTokenizer, but it suffers from accuracy issues.

  • Unfortunately, FastBertTokenizer and BertTokenizers are developed in C# and cannot be used from Python.

  • This is why we developed FlashTokenizer. It can be easily installed via pip and is written in C++ for straightforward maintenance, while guaranteeing extremely fast speeds. We've created an implementation that's faster than Blingfire and easier to use. FlashTokenizer uses the LinMax tokenizer proposed in Fast WordPiece Tokenization, enabling tokenization in linear time. Finally, it supports parallel processing at the C++ level for batch encoding, delivering outstanding speed.


FlashTokenizer includes the following core features

Tip

  • Implemented in C++17.

    • macOS: clang++.
    • Windows: Visual Studio 2022.
    • Ubuntu: g++.
  • Equally fast in Python via pybind11.

  • Support for parallel processing at the C++ level using OpenMP (see the batch-encoding sketch below).
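
The following is a minimal batch-encoding sketch. It assumes the tokenizer callable accepts a list of texts through the same interface as the single-text sample later in this README; the exact batch API may differ, so treat this as illustrative.

from flash_tokenizer import BertTokenizerFlash

tokenizer = BertTokenizerFlash.from_pretrained("bert-base-multilingual-cased")
texts = ["first text", "second text", "third text"]

# One call, many texts; the OpenMP parallelism happens inside the C++ extension.
encoded = tokenizer(texts, max_length=512, padding="longest")
print(encoded.input_ids)  # one id list per input text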

News

Important

[Apr 02 2025]

  • Added performance benchmarking code.
  • Performance benchmarking is conducted in Python; required packages can be installed via setup.sh.
  • A minor performance improvement was achieved by adding a tokenize_early_stop feature to BasicTokenizer.
  • OpenMP demonstrated better performance than std::thread across Windows, Linux, and macOS, so we've switched exclusively to OpenMP.

[Mar 31 2025]

  • Modified to provide pre-built whl files for each OS.

[Mar 22 2025]

  • Added a DFA to the Aho–Corasick trie.

[Mar 21 2025]

  • Improved tokenizer accuracy.

[Mar 19 2025]

  • Memory reduction and slight performance improvement by applying LinMaxMatching from the Aho–Corasick algorithm.
  • Improved branch pipelining in all functions and applied force-inlining.
  • Removed unnecessary operations from WordpieceTokenizer(Backward).
  • Optimized all functions to compute results directly; except for the Bloom filter, recomputation proved faster than caching.
  • Punctuation, control, and whitespace character classes are precomputed as constexpr tables and used as Bloom filters (see the sketch below).
  • Reduced unnecessary memory allocation, guided by statistical memory profiling.
  • With FlashTokenizer, bert-base-uncased can process 35K texts per second on a single core, i.e. roughly 28µs per text.
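
The precomputed character-class tables can be pictured with the following Python sketch: membership in punctuation/control/whitespace is computed once up front, so the hot loop answers each query with a single table lookup. This illustrates the idea only; the real implementation uses C++ constexpr tables plus Bloom filters.

import unicodedata

MAX_CODEPOINT = 0x10000  # BMP only, for brevity

def build_class_table():
    # One byte per code point; bit 0 = punctuation, bit 1 = control, bit 2 = whitespace.
    table = bytearray(MAX_CODEPOINT)
    for cp in range(MAX_CODEPOINT):
        ch = chr(cp)
        cat = unicodedata.category(ch)
        if cat.startswith("P"):
            table[cp] |= 1
        if cat.startswith("C"):
            table[cp] |= 2
        if ch.isspace():
            table[cp] |= 4
    return table

TABLE = build_class_table()

def is_punctuation(ch):
    cp = ord(ch)
    return cp < MAX_CODEPOINT and bool(TABLE[cp] & 1)

print(is_punctuation(","), is_punctuation("a"))  # True False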

[Mar 18 2025]

  • Accuracy improvements in the BasicTokenizer raised overall accuracy and, in particular, produce more accurate results for Unicode input.

[Mar 14 2025]

  • The performance of the WordPieceTokenizer and WordPieceBackwardTokenizer was improved using a trie, as introduced in Fast WordPiece Tokenization (a simplified sketch follows below).
  • Using FastPoolAllocator in std::list improves performance in SingleEncoding, but it is not thread-safe, so std::list<std::string> is used as-is in BatchEncoding. In BatchEncoding, OpenMP was completely removed and only std::thread was used.
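
As a rough illustration of the trie approach, here is a simplified greedy longest-match WordPiece in Python. The paper's LinMaxMatching additionally augments the trie with Aho–Corasick failure links so the whole text is processed in one linear pass; that machinery is omitted here.

class TrieNode:
    __slots__ = ("children", "token_id")
    def __init__(self):
        self.children = {}
        self.token_id = None  # set when a vocab entry ends at this node

def build_trie(vocab):
    root = TrieNode()
    for token, tid in vocab.items():
        node = root
        for ch in token:
            node = node.children.setdefault(ch, TrieNode())
        node.token_id = tid
    return root

def wordpiece(word, root, unk_id=100):
    ids, start = [], 0
    while start < len(word):
        # Walk the trie as far as possible, remembering the longest match seen.
        piece = word[start:] if start == 0 else "##" + word[start:]
        node, match_end, match_id = root, -1, None
        for i, ch in enumerate(piece):
            node = node.children.get(ch)
            if node is None:
                break
            if node.token_id is not None:
                match_end, match_id = i + 1, node.token_id
        if match_id is None:
            return [unk_id]  # no piece matches: the whole word becomes [UNK]
        ids.append(match_id)
        start += match_end - (2 if start > 0 else 0)  # do not count the "##"
    return ids

vocab = {"un": 1, "##aff": 2, "##able": 3}
print(wordpiece("unaffable", build_trie(vocab)))  # [1, 2, 3]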

[Mar 10 2025]

  • Performance improvements through faster token mapping with robin_hood and memory copy minimization with std::list.

Token-to-ID map performance test.

For the token-to-ID map, the fastest option was robin_hood::unordered_flat_map<std::string, int> (a Python analogue is sketched below).
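
In Python terms, the token-to-ID table is just a hash map built from vocab.txt; the C++ code uses robin_hood::unordered_flat_map<std::string, int> for the same job:

with open("vocab.txt", encoding="utf-8") as f:
    token_to_id = {line.rstrip("\n"): i for i, line in enumerate(f)}

print(token_to_id.get("[CLS]"))  # 101 in the standard BERT vocabs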

[Mar 09 2025] Completed development of flash-tokenizer for BertTokenizer.

1. Installation

Requirements

  • Windows (AMD64), macOS (ARM64), Ubuntu (x86-64).
  • g++ / clang++ / MSVC.
  • Python 3.8–3.13.

Install from PIP

On Windows, you need to install vc_redist.x64.exe.

# Windows / Linux / macOS
pip install -U flash-tokenizer

Install from Source

git clone https://github.com/NLPOptimize/flash-tokenizer
cd flash-tokenizer/prj
pip install .

2. Sample

from flash_tokenizer import BertTokenizerFlash
from transformers import BertTokenizer

titles = [
    '绝不能放弃,世界上没有失败,只有放弃。',
    'is there any doubt about it "None whatsoever"',
    "세상 어떤 짐승이 이를 드러내고 사냥을 해? 약한 짐승이나 몸을 부풀리지, 진짜 짐승은 누구보다 침착하지.",
    'そのように二番目に死を偽装して生き残るようになったイタドリがどうして初めて見る自分をこんなに気遣ってくれるのかと尋ねると「私が大切にする人たちがあなたを大切にするから」と答えては'
]

tokenizer1 = BertTokenizerFlash.from_pretrained('bert-base-multilingual-cased')
tokenizer2 = BertTokenizer.from_pretrained('bert-base-multilingual-cased')

correct = 0
for title in titles:
    print(title)
    tokens1 = tokenizer1.tokenize(title)
    tokens2 = tokenizer2.tokenize(title)
    ids1 = tokenizer1(title, max_length=512, padding="longest").input_ids[0]
    ids2 = tokenizer2(title, max_length=512, padding="longest", return_tensors="np").input_ids[0].tolist()
    if tokens1 == tokens2 and ids1 == ids2:
        correct += 1
        print("Accept!")
    else:
        print("Wrong Answer")
    print(ids1)
    print(ids2)
    print()

print(f'Accuracy: {correct * 100.0 / len(titles):.2f}%')
绝不能放弃,世界上没有失败,只有放弃。
Accept!
[101, 6346, 2080, 6546, 4284, 3704, 10064, 2087, 5621, 2078, 4917, 4461, 3204, 7480, 10064, 2751, 4461, 4284, 3704, 1882, 102]
[101, 6346, 2080, 6546, 4284, 3704, 10064, 2087, 5621, 2078, 4917, 4461, 3204, 7480, 10064, 2751, 4461, 4284, 3704, 1882, 102]

is there any doubt about it "None whatsoever"
Accept!
[101, 10124, 11155, 11178, 86697, 10978, 10271, 107, 86481, 12976, 11669, 23433, 107, 102]
[101, 10124, 11155, 11178, 86697, 10978, 10271, 107, 86481, 12976, 11669, 23433, 107, 102]

세상 어떤 짐승이 이를 드러내고 사냥을 해? 약한 짐승이나 몸을 부풀리지, 진짜 짐승은 누구보다 침착하지.
Accept!
[101, 9435, 14871, 55910, 9710, 48210, 10739, 35756, 9113, 30873, 31605, 11664, 9405, 118729, 10622, 9960, 136, 9539, 11102, 9710, 48210, 43739, 9288, 10622, 9365, 119407, 12692, 12508, 117, 9708, 119235, 9710, 48210, 10892, 9032, 17196, 80001, 9783, 119248, 23665, 119, 102]
[101, 9435, 14871, 55910, 9710, 48210, 10739, 35756, 9113, 30873, 31605, 11664, 9405, 118729, 10622, 9960, 136, 9539, 11102, 9710, 48210, 43739, 9288, 10622, 9365, 119407, 12692, 12508, 117, 9708, 119235, 9710, 48210, 10892, 9032, 17196, 80001, 9783, 119248, 23665, 119, 102]

そのように二番目に死を偽装して生き残るようになったイタドリがどうして初めて見る自分をこんなに気遣ってくれるのかと尋ねると「私が大切にする人たちがあなたを大切にするから」と答えては
Accept!
[101, 11332, 24273, 2150, 5632, 5755, 1943, 4805, 1980, 2371, 7104, 11592, 5600, 1913, 4814, 1975, 27969, 15970, 21462, 15713, 21612, 10898, 56910, 22526, 22267, 2547, 19945, 7143, 1975, 6621, 2534, 1980, 28442, 60907, 11312, 4854, 7770, 14813, 18825, 58174, 75191, 11662, 3456, 1945, 100812, 1890, 5949, 1912, 3197, 2535, 84543, 2179, 78776, 111787, 22946, 20058, 11377, 3197, 2535, 84543, 16867, 1891, 1940, 6076, 27144, 11588, 102]
[101, 11332, 24273, 2150, 5632, 5755, 1943, 4805, 1980, 2371, 7104, 11592, 5600, 1913, 4814, 1975, 27969, 15970, 21462, 15713, 21612, 10898, 56910, 22526, 22267, 2547, 19945, 7143, 1975, 6621, 2534, 1980, 28442, 60907, 11312, 4854, 7770, 14813, 18825, 58174, 75191, 11662, 3456, 1945, 100812, 1890, 5949, 1912, 3197, 2535, 84543, 2179, 78776, 111787, 22946, 20058, 11377, 3197, 2535, 84543, 16867, 1891, 1940, 6076, 27144, 11588, 102]

Accuracy: 100.00%

3. Other Implementations

Most BERT-based models use the WordPiece tokenizer, whose code can be found here.
(A simple Huggingface implementation can be found here.)

Since the BertTokenizer is a CPU-intensive algorithm, tokenization can bottleneck inference, and an unoptimized tokenizer can be severely slow. A good example is the BidirectionalWordpieceTokenizer introduced in KR-BERT. Most of the code is the same, but the algorithm also traverses sub-tokens backwards and prefers the match with the larger value over the forward traversal. The paper claims accuracy improvements, but other quantitative metrics are hard to find, the gains are not significant, and the tokenizer is seriously slowed down.
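
The backward idea can be pictured with this hedged Python sketch: it mirrors the usual longest-prefix matching but greedily matches the longest suffix instead. This is an illustration only, not the KR-BERT code.

def backward_wordpiece(word, vocab):
    # Greedy longest-suffix matching: find the longest vocab entry that ends
    # at position `end`, emit it, then repeat on the remaining prefix.
    pieces, end = [], len(word)
    while end > 0:
        match, match_start = None, None
        for start in range(0, end):  # start=0 gives the longest candidate
            sub = word[start:end]
            if start > 0:
                sub = "##" + sub  # non-initial pieces carry the ## marker
            if sub in vocab:
                match, match_start = sub, start
                break
        if match is None:
            return ["[UNK]"]
        pieces.append(match)
        end = match_start
    pieces.reverse()
    return pieces

vocab = {"un", "##aff", "##able"}
print(backward_wordpiece("unaffable", vocab))  # ['un', '##aff', '##able']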

  • transformers (Rust Impl, PyO3)
  • paddlenlp (C++ Impl, pybind)
  • tensorflow-text (C++ Impl, pybind)
  • blingfire (C++ Impl, Native binary call)

Most developers will use either transformers.BertTokenizer or transformers.AutoTokenizer, but using AutoTokenizer returns transformers.BertTokenizerFast.

Naturally, it's faster than BertTokenizer, but the results are not exactly the same, which means you are already giving up 100% accuracy starting at the tokenizer (the snippet below shows how to opt back out).
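
If you need the exact output of the original (slow) BertTokenizer, transformers lets you opt out of the fast Rust tokenizer explicitly:

from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-cased", use_fast=False)
print(type(tok).__name__)  # BertTokenizer, not BertTokenizerFast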

BertTokenizer is not only provided by transformers. PaddleNLP and tensorflow-text also provide BertTokenizer.

Then there's Blingfire, which was developed by Microsoft and has effectively been abandoned.

PaddleNLP requires PaddlePaddle and provides tokenizer functionality starting with version 3.0rc. You can install it as follows:

##### Install PaddlePaddle, PaddleNLP
python -m pip install paddlepaddle==3.0.0b1 -i https://www.paddlepaddle.org.cn/packages/stable/cpu/
pip install --upgrade paddlenlp==3.0.0b3
##### Install transformers
pip install transformers==4.47.1
##### Install tf-text
pip install tensorflow-text==2.18.1
##### Install blingfire
pip install blingfire

With the exception of blingfire, vocab.txt is all you need to run the tokenizer right away.
(blingfire also requires only vocab.txt, but must first be trained for about 8 hours.)

The implementations we'll look at in detail are PaddleNLP's BertTokenizerFast and blingfire.

  • blingfire: Uses a deterministic finite-state machine (DFSM) to eliminate one linear scan and unnecessary comparisons, achieving O(n) time, which is impressive. (A minimal usage sketch follows this list.)

    • Advantages: 5-10x faster than other implementations.
    • Disadvantages: long training time (8 hours) and lower accuracy than other implementations (and it is difficult to get help due to the de facto development hiatus).
  • PaddleNLP: As shown in the experiments below, PaddleNLP is consistently faster than BertTokenizerFast (HF) while producing identical output, on any OS and on both x86 and Arm.

    • Advantages: The internal implementation is in C++. Compared to transformers.BertTokenizerFast implemented in Rust, it is 1.2x faster while outputting exactly the same values.

      • You can't specify pt (PyTorch tensor) in return_tensors, but this is not a problem.
    • Disadvantages: none, other than the need to install PaddlePaddle and PaddleNLP.
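
For reference, a minimal blingfire sketch: load one of the pre-trained tokenization models that ship inside the package and convert text straight to ids. The model file name and unk id are taken from blingfire's shipped BERT models; adjust them for your vocab.

import os
import blingfire

# The .bin models ship inside the installed blingfire package directory.
model_path = os.path.join(os.path.dirname(blingfire.__file__), "bert_base_tok.bin")
h = blingfire.load_model(model_path)
ids = blingfire.text_to_ids(h, "is there any doubt about it", 128, unk=100)
print(ids)  # numpy int array, padded/truncated to length 128
blingfire.free_model(h)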

4. Performance test

4.1 Performance test (Single text encoding)

Accuracy is measured against Google's original BertTokenizer as the baseline (which is why even BertTokenizerFast scores below 100%). If even one of the input_ids differs, the text is counted as incorrect.
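
The criterion can be sketched as follows (parameters are illustrative; the benchmark scripts in the repository are the authoritative version):

from flash_tokenizer import BertTokenizerFlash
from transformers import BertTokenizer

baseline = BertTokenizer.from_pretrained("bert-base-cased")
candidate = BertTokenizerFlash.from_pretrained("bert-base-cased")

def accuracy(texts):
    correct = 0
    for text in texts:
        ref = baseline(text, max_length=512, truncation=True)["input_ids"]
        got = candidate(text, max_length=512).input_ids[0]
        correct += (got == ref)  # all input_ids must match exactly
    return correct / len(texts)

print(f"{accuracy(['Hello world', 'flash tokenizer']) * 100:.4f}%")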

Tokenizer Performance Comparison

google-bert/bert-base-cased

| Tokenizer | Elapsed Time | Texts | Accuracy |
|---|---|---|---|
| BertTokenizerFast(Huggingface) | 84.3700s | 1,000,000 | 99.9226% |
| BertTokenizerFast(PaddleNLP) | 75.6551s | 1,000,000 | 99.9226% |
| FastBertTokenizer(Tensorflow) | 219.1259s | 1,000,000 | 99.9160% |
| Blingfire | 13.6183s | 1,000,000 | 99.8991% |
| FlashBertTokenizer | 8.1968s | 1,000,000 | 99.8216% |

google-bert/bert-base-uncased

| Tokenizer | Elapsed Time | Texts | Accuracy |
|---|---|---|---|
| BertTokenizerFast(Huggingface) | 91.7882s | 1,000,000 | 99.9326% |
| BertTokenizerFast(PaddleNLP) | 83.6839s | 1,000,000 | 99.9326% |
| FastBertTokenizer(Tensorflow) | 204.2240s | 1,000,000 | 99.1379% |
| Blingfire | 13.2374s | 1,000,000 | 99.8588% |
| FlashBertTokenizer | 7.6313s | 1,000,000 | 99.6884% |

google-bert/bert-base-multilingual-cased

| Tokenizer | Elapsed Time | Texts | Accuracy |
|---|---|---|---|
| BertTokenizerFast(Huggingface) | 212.1570s | 2,000,000 | 99.7964% |
| BertTokenizerFast(PaddleNLP) | 193.9921s | 2,000,000 | 99.7964% |
| FastBertTokenizer(Tensorflow) | 394.1574s | 2,000,000 | 99.7892% |
| Blingfire | 38.9013s | 2,000,000 | 99.9780% |
| FlashBertTokenizer | 20.4570s | 2,000,000 | 99.8970% |

beomi/kcbert-base

| Tokenizer | Elapsed Time | Texts | Accuracy |
|---|---|---|---|
| BertTokenizerFast(Huggingface) | 52.5744s | 1,000,000 | 99.6754% |
| BertTokenizerFast(PaddleNLP) | 44.8943s | 1,000,000 | 99.6754% |
| FastBertTokenizer(Tensorflow) | 198.0270s | 1,000,000 | 99.6639% |
| Blingfire | 13.0701s | 1,000,000 | 99.9434% |
| FlashBertTokenizer | 5.2601s | 1,000,000 | 99.9484% |

| Tokenizer | Elapsed Time | Texts | Accuracy |
|---|---|---|---|
| FlashBertTokenizer | 5.1875s | 1,000,001 | 99.9484% |
| Blingfire | 13.2783s | 1,000,001 | 99.9435% |
| rust_tokenizers(guillaume-be) | 16.6308s | 1,000,001 | 99.9829% |
| BertTokenizerFast(PaddleNLP) | 44.5476s | 1,000,001 | 99.6754% |
| BertTokenizerFast(Huggingface) | 53.2525s | 1,000,001 | 99.6754% |
| FastBertTokenizer(Tensorflow) | 202.1633s | 1,000,001 | 99.6639% |

microsoft/llmlingua-2-bert-base-multilingual-cased-meetingbank

| Tokenizer | Elapsed Time | Texts | Accuracy |
|---|---|---|---|
| BertTokenizerFast(Huggingface) | 208.8858s | 2,000,000 | 99.7964% |
| BertTokenizerFast(PaddleNLP) | 192.6593s | 2,000,000 | 99.7964% |
| FastBertTokenizer(Tensorflow) | 413.2010s | 2,000,000 | 99.7892% |
| Blingfire | 39.3765s | 2,000,000 | 99.9780% |
| FlashBertTokenizer | 22.8820s | 2,000,000 | 99.8970% |

| Tokenizer | Elapsed Time | Texts | Accuracy |
|---|---|---|---|
| FlashBertTokenizer | 22.0901s | 2,000,001 | 99.8971% |
| Blingfire | 37.9836s | 2,000,001 | 99.9780% |
| rust_tokenizers(guillaume-be) | 98.0366s | 2,000,001 | 99.9976% |
| BertTokenizerFast(PaddleNLP) | 208.6889s | 2,000,001 | 99.7964% |
| BertTokenizerFast(Huggingface) | 219.2644s | 2,000,001 | 99.7964% |
| FastBertTokenizer(Tensorflow) | 413.9725s | 2,000,001 | 99.7892% |

KR-BERT

| Tokenizer | Elapsed Time | Texts | Accuracy |
|---|---|---|---|
| BertTokenizerBidirectional(KR-BERT Original) | 128.3320s | 1,000,000 | 100.0000% |
| FlashBertTokenizer(Bidirectional) | 10.4492s | 1,000,000 | 99.9631% |

The inference pipeline, with tokenization as the CPU-side preprocessing stage:

%%{ init: { "er" : { "layoutDirection" : "LR" } } }%%
erDiagram
    Text ||--o{ Preprocess : tokenize
    Preprocess o{--|| Inference : memcpy_h2d
    Inference o{--|| Postprocess : memcpy_d2h


6. Compatibility

FlashBertTokenizer can be used with any framework. CUDA version compatibility for each framework also matters for fast LLM inference.

  • PyTorch no longer supports installation via conda.
  • ONNXRUNTIME packages are split by CUDA version.
  • PyTorch is also moving to drop older CUDA 12.x builds in favor of the newer CUDA 12.8. However, the trend is for all frameworks to keep supporting CUDA 11.8.
    • CUDA 12.x targets the newest GPUs, Hopper and Blackwell; on older GPUs such as Volta, CUDA 11.8 is faster than CUDA 12.x.
| DL Framework | Version | OS | CUDA 11.8 | CUDA 12.3 | CUDA 12.4 | CUDA 12.6 | CUDA 12.8 |
|---|---|---|---|---|---|---|---|
| PyTorch | 2.6 | Linux, Windows |  |  |  |  |  |
| PyTorch | 2.7 | Linux, Windows |  |  |  |  |  |
| ONNXRUNTIME(11) | 1.20.x | Linux, Windows |  |  |  |  |  |
| ONNXRUNTIME(12) | 1.20.x | Linux, Windows |  |  |  |  |  |
| PaddlePaddle | 3.0-beta | Linux, Windows |  |  |  |  |  |

7. GPU Tokenizer

Here is an example of installing and running cuDF, from Run State of the Art NLP Workloads at Scale with RAPIDS, HuggingFace, and Dask.
(It's incredibly fast.)

You can run the WordPiece tokenizer on GPUs with RAPIDS (cuDF).

  • Implementation
  • Example

As the RAPIDS installation guide shows, it supports only Linux and its CUDA version does not match other frameworks, so Docker is the best choice. It is faster than CPU for batch processing but slower than CPU for streaming processing.

There are good example code and explanations in the blog post (https://developer.nvidia.com/blog/run-state-of-the-art-nlp-workloads-at-scale-with-rapids-huggingface-and-dask/). To use cuDF, you must first convert vocab.txt to a hash vocab as shown below. The problem is that the hash_vocab function cannot convert multilingual vocabularies, so cuDF's WordpieceTokenizer cannot be used if the vocab contains any characters other than English/Chinese.

import cudf
from cudf.utils.hash_vocab_utils import hash_vocab
hash_vocab('bert-base-cased-vocab.txt', 'voc_hash.txt')
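
A hedged sketch of running the GPU tokenizer on the hashed vocab produced above, using cuDF's SubwordTokenizer (parameter values are illustrative):

import cudf
from cudf.core.subword_tokenizer import SubwordTokenizer

gpu_tokenizer = SubwordTokenizer("voc_hash.txt", do_lower_case=True)
texts = cudf.Series(["is there any doubt about it", "flash tokenizer"])
out = gpu_tokenizer(texts,
                    max_length=64,
                    max_num_rows=len(texts),
                    padding="max_length",
                    return_tensors="cp",  # CuPy arrays
                    truncation=True)
print(out["input_ids"].shape)  # (2, 64)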

TODO

  • BidirectionalWordPieceTokenizer
  • BatchEncoder with Multithreading.
  • Replace std::list with boost::intrusive::list.
  • MaxMatch-Dropout: Subword Regularization for WordPiece Option.
  • Use stack memory to reduce heap allocation (C-style: alloca, _alloca).
  • Support a parallel-processing option for single encode.
  • circle.ai

    • Implement distribution of compiled wheel packages for installation.
  • SIMD
  • CUDA Version.

Acknowledgement

FlashTokenizer is inspired by the FlashAttention, FlashInfer, FastBertTokenizer, and tokenizers-cpp projects.

Performance comparison

  • WordPiece

    • huggingface/tokenizers (Rust)
      • Rust implementation of transformers.BertTokenizerFast.
      • Provided as a Python package.
    • FastBertTokenizer (C#)
      • It demonstrates incredibly fast performance, but accuracy significantly decreases for non-English queries.
    • BertTokenizers (C#)
      • The FastBertTokenizer (C#) vs BertTokenizers (C#) comparison confirms that FastBertTokenizer (C#) is faster.
    • rust-tokenizers (Rust)
      • Slower than BertTokenizerFlash and Blingfire, but faster and more accurate than the other implementations.
      • Provided as a Python package.
    • tokenizers-cpp (C++)
      • tokenizers-cpp is a wrapper around SentencePiece and HuggingFace's Rust implementation, so separate performance benchmarking is meaningless.
    • bertTokenizer (Java)
      • Java is not covered.
    • ✅ ZhuoruLin/fast-wordpiece (Rust)
      • A Rust implementation using LinMaxMatching, runnable only in Rust, and expected to be no faster than the C++ implementation.
    • huggingface_tokenizer_cpp (C++)
      • Very slow due to naive C++ implementation.
    • SeanLee97/BertWordPieceTokenizer.jl (Julia)
      • Julia is not covered.
  • BPE

    • https://github.com/openai/tiktoken
  • SentencePiece

    • google/sentencepiece (C++)

History

References

  • https://medium.com/@techhara/which-bert-tokenizer-is-faster-b832aa978b46
  • https://medium.com/@atharv6f_47401/wordpiece-tokenization-a-bpe-variant-73cc48865cbf
  • https://www.restack.io/p/transformer-models-bert-answer-fast-berttokenizerfast-cat-ai
  • https://medium.com/@anmolkohli/my-notes-on-bert-tokenizer-and-model-98dc22d0b64
  • https://nocomplexity.com/documents/fossml/nlpframeworks.html
  • https://github.com/martinus/robin-hood-hashing
  • https://arxiv.org/abs/2012.15524
  • https://github.com/google/highway
