
Large Language Models: How They Mimic the Human Translation Process

- August 11, 2023

In a groundbreaking research paper published on May 6, 2023, a collaborative effort involving Shanghai Jiao Tong University, Tsinghua University, and Tencent AI Lab unveiled a significant discovery: large language models (LLMs) possess the capacity to emulate human translation strategies.

Understanding the Human Translation Process

The paper emphasized that professional human translators typically undertake preparatory measures before performing a translation task, gathering and analyzing information such as keywords, topics, and example sentences to ensure accuracy and consistency. While large language models (LLMs) have long been used in natural language processing (NLP) to model and understand human language, recent advances have made them serious candidates for translation itself.

Challenges in Traditional Machine Translation

Compared to human translators, traditional machine translation (MT) systems tend to focus solely on direct source-to-target mapping, disregarding the preparatory steps used by human experts. However, the researchers discovered that LLM-based translation can effectively simulate the human translation process.

Introducing MAPS: Multi-Aspect Prompting and Selection

The researchers proposed a novel method called MAPS (Multi-Aspect Prompting and Selection) to incorporate these preparatory steps into LLM-based translation.

MAPS comprises three main steps: knowledge mining, knowledge integration, and knowledge selection.

Knowledge Mining

In the knowledge mining step, the large language model analyzes the source text and extracts three types of translation-related knowledge:

  • Keywords: These are essential for conveying the core meaning of the text and ensuring faithfulness and consistency in translation.
  • Topics: Help translators avoid ambiguous translations and adapt to specific subject matters.
  • Demonstrations: Provide examples that aid in finding suitable equivalents in the target language, resulting in natural, fluent, and engaging translations.
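The mining step above can be sketched as a set of aspect-specific prompts sent to the model, one per knowledge type. The prompt templates and the `query_llm` callback below are illustrative placeholders (not the paper's exact wording); plug in any chat-completion client:

```python
# Hypothetical sketch of the MAPS knowledge-mining step: for each knowledge
# type, build a prompt asking the LLM to extract that aspect from the source.
# Templates and the query_llm hook are assumptions for illustration.

KNOWLEDGE_PROMPTS = {
    "keywords": ("Extract the keywords in the following {src} sentence that "
                 "are essential to its meaning, with {tgt} translations:\n{text}"),
    "topics": "Identify the topics of the following {src} sentence:\n{text}",
    "demonstrations": ("Write a {src}-{tgt} sentence pair similar in topic "
                       "and style to the following sentence:\n{text}"),
}

def mine_knowledge(text, src="English", tgt="German", query_llm=None):
    """Return a dict mapping each knowledge type to the LLM's answer."""
    if query_llm is None:
        # Stub: replace with a real LLM call in practice.
        query_llm = lambda prompt: ""
    return {
        aspect: query_llm(template.format(src=src, tgt=tgt, text=text))
        for aspect, template in KNOWLEDGE_PROMPTS.items()
    }
```

Each aspect is mined independently, so weak or empty answers for one aspect do not contaminate the others.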

Knowledge Integration

During the knowledge integration step, the acquired knowledge is seamlessly incorporated into the LLM’s prompt context. This integration serves as a guiding force, empowering the large language model to generate translation candidates with greater accuracy. As a result, the LLM gains a deeper understanding of the source text, facilitating the production of translations that closely align with the intended meaning.
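As a rough sketch, integration can be as simple as prepending the mined knowledge to the translation instruction. The prompt wording here is an assumption for illustration, not the paper's exact template:

```python
def build_translation_prompt(text, knowledge, src="English", tgt="German"):
    """Prepend each non-empty piece of mined knowledge as prompt context."""
    context = "\n".join(f"Related {aspect}: {value}"
                        for aspect, value in knowledge.items() if value)
    return (f"{context}\n"
            f"Using the information above, translate the following "
            f"{src} sentence into {tgt}:\n{text}")
```

In the paper's setup, one candidate is generated per knowledge type (plus a no-knowledge baseline), so a function like this would be called once per aspect rather than with all aspects at once.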

Knowledge Selection

The researchers use a content filtering mechanism to enhance translation quality further during the knowledge selection step. This step aims to eliminate any noise or unhelpful knowledge that the large language model may generate. To rank translation candidates and determine the final output, the researchers employ reference-free quality estimation (QE) as part of the knowledge selection step. Using QE, the LLM evaluates each translation candidate without relying on external references or comparison to the original source. Instead, the model independently assesses the quality and fluency of the translations based on its own understanding and contextual knowledge.

By utilizing reference-free QE, the MAPS approach ensures that the selected translation aligns with the intended meaning and exhibits high-quality linguistic output. The candidate with the highest QE score is chosen as the final translation, enhancing the overall accuracy and naturalness of the LLM-generated output. The researchers also explore using the large language model itself as the QE scorer, showcasing the potential of a purely LLM-driven evaluation process. In principle, the approach could further be combined with a retrieval-augmented generation (RAG) pipeline to incorporate relevant external knowledge during translation.
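The selection step then reduces to ranking the candidate pool with a reference-free scorer. A minimal sketch, assuming the scorer is supplied as a callable (e.g. a QE model, or the LLM prompted to rate each candidate):

```python
def select_translation(candidates, qe_score):
    """Pick the candidate with the highest reference-free QE score.

    `qe_score` is any callable mapping a candidate string to a number;
    in practice this would be a QE model or an LLM acting as a judge.
    """
    if not candidates:
        raise ValueError("no translation candidates to rank")
    return max(candidates, key=qe_score)
```

Because the scorer needs no reference translation, the same ranking works at inference time on unseen inputs; only the relative ordering of scores matters.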

Advantages of the MAPS Approach

The extensive experiments conducted across eight translation directions validated the effectiveness of the MAPS approach. It consistently outperformed other baselines, leading to higher-quality translations.

Addressing Hallucination Issues

The introduction of the MAPS approach has proven to be a significant breakthrough in tackling hallucination issues during translation. Hallucination, wherein the large language model generates inaccurate or fictional content not present in the source text, has been a persistent challenge in machine translation.

Through the knowledge integration step, MAPS equips the LLM with essential translation-related knowledge extracted from the source text. This knowledge serves as a guiding context, enabling the model to make more informed decisions during the translation process.

As a result, the integration of this extracted knowledge has been instrumental in effectively resolving up to 59% of hallucination mistakes in translation. By drawing on the relevant information and context, the LLM is less prone to generating spurious or misleading content, resulting in more faithful and reliable translations.

The reduction of hallucination mistakes through the use of the MAPS approach contributes to a higher level of translation accuracy and builds confidence in the capabilities of large language models for reliable language processing tasks. As research continues, the MAPS approach is expected to pave the way for even more significant improvements, ultimately advancing the quality and reliability of machine translation systems.

Domain-Specific Preparation Eliminated

In contrast to other LLM-based translation methods, which heavily rely on domain-specific assumptions and necessitate extensive glossaries, dictionaries, or sample pools, the MAPS approach distinguishes itself by targeting general translation scenarios. This key feature enhances the practicality and versatility of MAPS across a wide range of translation tasks and language pairs.

By focusing on general scenarios, MAPS reduces the need for domain-specific preparation, making it more accessible and applicable to various real-world translation scenarios. Translating content in diverse subject areas becomes more seamless as the approach does not depend on a pre-established domain knowledge base.

Knowledge Integration and Its Role in Guiding LLMs

To achieve more human-like translations, the MAPS approach introduces a pivotal step known as “Knowledge Integration.” This step is crucial in guiding LLMs toward generating more accurate and contextually relevant translations, akin to the decision-making process of professional human translators.

Enhancing Contextual Understanding

Knowledge Integration seamlessly combines the translation-related knowledge mined from the source text into the LLM’s prompt context. The acquired knowledge serves as background information, enabling the large language model to grasp the subtleties and nuances of the source sentence better. By integrating this valuable information into the translation process, the LLM gains a more profound contextual understanding, aligning it more closely with human translators who employ preparatory steps to comprehend the text thoroughly.

Guiding the Translation Process

The integrated knowledge serves as a guiding compass for the LLM throughout the translation process. Just as human translators rely on their understanding of keywords, topics, and example sentences, the LLM leverages the extracted knowledge to make more informed decisions about generating translation candidates. This step aids in steering the LLM away from potential misinterpretations or ambiguous translations, leading to improved translation accuracy and fluency.

Empowering Adaptation to Various Contexts

In a multilingual world where language can vary significantly based on different subject matters and cultural nuances, adaptation is a key aspect of translation. By incorporating knowledge integration, the LLM gains the ability to adapt to diverse contexts, similar to how professional human translators adjust their approach based on the subject matter and the target audience.

Beyond Word-to-Word Mapping

Unlike conventional machine translation methods that focus primarily on word-to-word mapping, knowledge integration allows the LLM to consider the broader context of the source text. This results in translations that go beyond literal renditions and capture the essence and intent behind the original content, making the translated text more cohesive and contextually appropriate.

The Future of LLMs in Translation

The advent of large language models (LLMs) and the innovative MAPS approach has marked a transformative milestone in the field of machine translation. As we look ahead, the future of LLMs in translation promises to unlock even greater possibilities and usher in a new era of multilingual communication.

1. Continued Advancements in Language Models

As research and development in natural language processing (NLP) continue to evolve, we can anticipate significant advancements in LLMs. Ongoing improvements in model architecture, training techniques, and data handling will lead to even more sophisticated language models capable of understanding context, idiomatic expressions, and cultural nuances with greater precision.

2. Multilingual and Low-Resource Language Support

LLMs hold immense potential for facilitating translation across a wide array of languages, including low-resource and underrepresented languages. Future efforts are expected to focus on enhancing the proficiency of LLMs in translating such languages, making translation services accessible to a more diverse global audience.

3. Personalized and Adaptive Translations

As LLMs gain a deeper understanding of individual users’ preferences and writing styles, personalized translations could become a reality. Adaptive translations, tailored to specific user needs and contexts, will further enhance the user experience and foster seamless cross-cultural communication.

4. Collaborative Human-AI Translation

The future of translation lies not in replacing human translators but in empowering them with advanced AI tools. Collaborative approaches that blend human expertise with the capabilities of LLMs will likely become prevalent, leading to faster, more efficient, and higher-quality translations.

5. Real-Time Translation Across Platforms

With advancements in cloud computing and edge computing, real-time translation capabilities powered by LLMs will be seamlessly integrated into various applications and platforms. From messaging apps to content creation tools, users will have instant access to translation services in their daily digital interactions.

6. Global Impact on Communication and Understanding

As LLM-based translation becomes more refined and widely accessible, it will play an integral role in bridging linguistic barriers worldwide. The potential for fostering cross-cultural understanding and promoting communication among diverse communities holds significant promise for a more interconnected and collaborative global society.

Wrapping Up: How Large Language Models Mimic Human Translation

In conclusion, the MAPS approach represents a significant advancement in LLM-based translation, closely emulating the human translation process by incorporating preparatory steps and leveraging self-generated knowledge. The research opens up new possibilities for achieving higher-quality and more accurate translations without the constraints of domain-specific preparation, bringing us closer to seamless multilingual communication in the digital age.

The future of LLMs in translation is filled with boundless potential. Through continuous research, responsible development, and collaborative efforts, LLMs are poised to revolutionize multilingual communication, empowering individuals and businesses to connect, understand, and thrive in a truly globalized world.