
From the team behind Alconost — professional localization since 2004

Building the Future of Localization

Alconost.MT is a laboratory of Alconost where we design and validate new language technology, combining software engineering and linguistics with AI and 20+ years of localization expertise.

Evaluate

AI-Powered

Evaluate and fix translation quality using top-tier LLMs.

LLMs identify specific error types in the translation (accuracy, fluency, etc.) and assess their severity to generate quality scores.

Key features

  • Translation error annotation (types, severities)
  • Corrected translations and detailed error explanations
  • Segment- and document-level translation quality scores
  • Customizable prompt for Context, Style Guide, and Glossary support
  • Online and PDF reports
  • CSV import/export and direct text entry
  • Selection of SOTA LLMs for evaluation
Annotate

Annotate translation errors using the MQM framework — the industry-standard methodology for translation quality evaluation.

Built for LLM evaluation, machine translation benchmarking, and human-in-the-loop QA workflows.

Key features

  • Error-based MQM taxonomy (accuracy, fluency, terminology, style, and more)
  • Segment-level and document-level annotations with scores
  • Annotation data export in CSV/TSV and JSON formats for analysis and benchmarks
  • Online and PDF reports with adjustable error weights and pass/fail thresholds
  • API for import/export to automate human-in-the-loop pipelines
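As a rough sketch of how annotations turn into a score, here is MQM-style scoring from a list of annotated errors. The severity weights (minor = 1, major = 5, critical = 10) follow common MQM practice, but these exact weights, the per-word normalization, and the 95-point pass threshold are assumptions; the tool lets you adjust weights and thresholds.

```python
# Assumed MQM severity weights; adjustable in a real workflow.
SEVERITY_WEIGHTS = {"neutral": 0, "minor": 1, "major": 5, "critical": 10}

def mqm_score(errors: list[dict], word_count: int,
              max_score: float = 100.0) -> float:
    """Start from max_score and subtract a per-word weighted error penalty."""
    penalty = sum(SEVERITY_WEIGHTS[e["severity"]] for e in errors)
    return max_score - (penalty / word_count) * max_score

def passes(score: float, threshold: float = 95.0) -> bool:
    """Pass/fail decision against an adjustable quality threshold."""
    return score >= threshold

errors = [
    {"type": "accuracy/mistranslation", "severity": "major"},
    {"type": "fluency/punctuation", "severity": "minor"},
]
score = mqm_score(errors, word_count=120)  # penalty 6 over 120 words -> 95.0
```

Exporting such annotations as CSV/TSV or JSON makes the same computation reproducible in external benchmarks.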

Metrics

Coming soon

Measure translation quality using industry-standard and state-of-the-art LLM-based metrics.

Compare translations against references and calculate scores like COMET, xCOMET, BLEU, chrF++, and MetricX to choose the best-performing engine or human linguist with confidence.

Key features

  • Standard metrics (Free): BLEU, chrF++, nTER, WER, hLEPOR
  • LLM-based metrics (Paid): BERTScore, COMET, xCOMET, MetricX (Base to XXL)
  • Reference-based & Reference-free (QE) modes (e.g. CometKiwi)
  • Weighted Quality Index (WQI) for unified scoring
  • CSV Upload for automated workflows
  • Comprehensive PDF & CSV reports
  • API for agentic evals & automations
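To make one of the free standard metrics concrete, here is a self-contained sketch of Word Error Rate (WER): the word-level Levenshtein edit distance between hypothesis and reference, divided by the number of reference words. This is for illustration only; production pipelines use established metric implementations.

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word Error Rate: word-level edit distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i          # deleting all i reference words
    for j in range(len(hyp) + 1):
        dp[0][j] = j          # inserting all j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution
    return dp[len(ref)][len(hyp)] / len(ref)

print(wer("hello world again", "hello world"))  # 1 deletion / 3 words
```

Reference-based metrics like this need a trusted reference translation; the reference-free (QE) modes listed above score a translation directly against the source instead.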