This leaderboard compares the Turkish performance of embedding models on the MTEB-TR benchmark, developed by Selman Baysan as part of his MS thesis.

Paper: https://aclanthology.org/2025.findings-emnlp.471/

Repo: https://github.com/selmanbaysan/mteb_tr

| Rank | Model | Parameters | Mean (Task) | Mean (Type) | Bitext Mining | Classification | Clustering | Pair Classification | Retrieval | STS |
|------|-------|------------|-------------|-------------|---------------|----------------|------------|---------------------|-----------|-----|
| 1 | intfloat/multilingual-e5-large | 0.6B | 66.74 | 72.95 | 99.43 | 71.79 | 60.58 | 64.12 | 60.62 | 81.18 |
| 2 | Qwen/Qwen3-Embedding-4B | 4B | 66.12 | 71.35 | 97.86 | 70.32 | 61.08 | 60.10 | 61.76 | 76.96 |
| 3 | ytu-ce-cosmos/turkish-e5-large | 0.6B | 65.96 | 72.33 | 99.24 | 72.62 | 60.77 | 62.72 | 58.65 | 80.00 |
| 4 | google/embeddinggemma-300m | 0.3B | 65.20 | 70.52 | 96.84 | 71.81 | 62.36 | 60.57 | 58.60 | 72.93 |
| 5 | microsoft/harrier-oss-v1-0.6b | 0.6B | 64.29 | 70.57 | 98.58 | 71.18 | 63.60 | 58.63 | 56.90 | 74.54 |
| 6 | microsoft/harrier-oss-v1-270m | 0.27B | 62.61 | 69.54 | 98.30 | 69.88 | 62.19 | 57.42 | 54.45 | 75.01 |
| 7 | Qwen/Qwen3-Embedding-0.6B | 0.6B | 60.97 | 66.47 | 92.43 | 65.82 | 60.62 | 58.16 | 54.87 | 66.91 |
| 8 | sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2 | 0.1B | 58.84 | 68.29 | 93.67 | 65.76 | 56.56 | 68.80 | 46.57 | 78.41 |
| 9 | sentence-transformers/LaBSE | 0.5B | 57.48 | 66.43 | 99.53 | 65.60 | 58.42 | 56.80 | 46.47 | 71.75 |
| 10 | emrecan/bert-base-turkish-cased-mean-nli-stsb-tr | 0.1B | 54.82 | 57.75 | 31.47 | 66.44 | 56.79 | 66.23 | 42.46 | 83.13 |
| 11 | ytu-ce-cosmos/modernbert-tr-base-1k | 0.15B | 48.89 | 46.78 | 12.29 | 71.41 | 64.20 | 49.86 | 32.66 | 50.24 |
| 12 | boun-tabilab/TabiBERT | 0.15B | 41.10 | 42.53 | 8.58 | 65.42 | 60.76 | 49.94 | 19.49 | 50.97 |
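The two mean columns are computed differently: "Mean (Type)" appears to be the unweighted average of the six task-type scores shown in the table, while "Mean (Task)" averages over the individual tasks within the benchmark (and so cannot be recomputed from this table alone). A minimal sketch that verifies this reading for the top two rows, using scores copied from the table:

```python
# Per-type scores copied from the table:
# Bitext Mining, Classification, Clustering, Pair Classification, Retrieval, STS.
# Assumption: "Mean (Type)" is the unweighted average of these six values.
type_scores = {
    "intfloat/multilingual-e5-large": [99.43, 71.79, 60.58, 64.12, 60.62, 81.18],
    "Qwen/Qwen3-Embedding-4B": [97.86, 70.32, 61.08, 60.10, 61.76, 76.96],
}

for model, scores in type_scores.items():
    mean_type = round(sum(scores) / len(scores), 2)
    print(f"{model}: Mean (Type) = {mean_type}")
# → intfloat/multilingual-e5-large: Mean (Type) = 72.95
# → Qwen/Qwen3-Embedding-4B: Mean (Type) = 71.35
```

Both recomputed values match the table, which is why a model can rank high on Mean (Type) while trailing on Mean (Task) when it excels in a type with few tasks (e.g. the near-saturated Bitext Mining scores).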