AI Professional
By Trados AppStore Team
Free
Description
The AI Professional plugin for Trados Studio 2022 leverages both Azure OpenAI and OpenAI language models to assist users in translation projects. Key features include a Translation Provider, an AI Companion available from the Editor, and Terminology-aware Translation Suggestions.
The plugin supports Azure OpenAI models alongside OpenAI models (the same models used by ChatGPT), each with different capabilities and performance levels. Users can create custom prompts to guide the AI, and the plugin offers a few default prompts to get you started.
The AI Professional plugin can be installed via the RWS AppStore or through the integrated AppStore in Trados Studio. To use the plugin, users must sign up for an OpenAI account, obtain an API key, and specify the desired model.
More details about AI Professional can be found in this blog article.
Technical details
1.0.6.0 - Trados Studio 2022 (SR2)
Changelog:
Resolved bug when attempting to retrieve parent tags of the segment
Checksum: 478b42adcfd177c8c38b6c0f2d848f8bc0a438b6882c5f5c3d3d1a94ee8843dc
Release date: 2024-02-18
1.0.6.0 - Trados Studio 2022 (SR2)
Resolved bug when attempting to retrieve parent tags of the segment
1.0.5.1 - Trados Studio 2022 (SR2)
- Updated handlers for telemetry to capture when translations are applied from the Translation Provider in the Editor as well as the AI Companion view.
- Included option to reset cache results
- Included options to exclude translated & locked segments from translation.
- Implemented word count & edit distance providers for telemetry.
- Improved management of asynchronous translation requests from the AI Companion, reducing cancelled operations when navigating through segments.
- Resolved bug where source terms were included in the translation response.
1.0.4.0 - Trados Studio 2022 (SR2)
- Updated handlers for telemetry to capture when translations are applied from the Translation Provider in the Editor as well as the AI Companion view.
- Included option to reset cache results
- Included options to exclude translated & locked segments from translation.
- Implemented word count & edit distance providers for telemetry.
- Improved management of asynchronous translation requests from the AI Companion, reducing cancelled operations when navigating through segments.
- Resolved bug where source terms were included in the translation response.
1.0.3.3 - Trados Studio 2022 (SR2)
- Updated handlers for telemetry to capture when translations are applied from the Translation Provider in the Editor as well as the AI Companion view.
- Included option to reset cache results
- Included options to exclude translated & locked segments from translation.
- Implemented word count & edit distance providers for telemetry.
- Improved management of asynchronous translation requests from the AI Companion, reducing cancelled operations when navigating through segments.
1.0.2.1 - Trados Studio 2022 (SR2)
Support for locked tags:
- Locked tags will now be displayed in the Translation Results and AI Companion view.
- Content within inline locked tags will not be translated.
- No search for translation suggestions will be performed when the entire segment is locked.
1.0.1.2 - Trados Studio 2022 (SR2)
- Support for Language Cloud & GroupShare Server based Terminology Providers
- Updated error handling responses to provide more detail, especially useful when testing connections from the settings area.
1.0.0.15 - Trados Studio 2022 (SR2)
Resolved bug in applying markup of recognized terms in the AI Companion view
Resolved bug in rebuilding the tag markup in the cached results.
1.0.0.14 - Trados Studio 2022 (SR2)
Optimized prompt token size by removing obsolete system reinforcement messages
1.0.0.13 - Trados Studio 2022 (SR2)
Enabled compatibility with local file-based (e.g. MultiTerm) and third-party termbase providers. Note: LanguageCloud and Server (GroupShare) termbase providers are not supported.
1.0.0.12 - Trados Studio 2022 (SR2)
Initial release with key features:
- Translation Provider
- AI Companion
- Terminology-aware Translation Suggestions
- Support for tags
- Revised prompt content structure
Translation Provider
The Translation Provider integration aims to improve efficiency by facilitating batch tasks such as analysis or pre-translation. Given the automated nature of batch tasks, it is more important than ever to be aware of the rate limits that OpenAI imposes on the usage of its services via the API.
AI Companion
The AI Companion is a view available from the Editor. It offers an array of advanced functionalities designed to enhance the translation process. One of its key features is the ability to search for alternative Translation Suggestions. Users can customize these suggestions by choosing specific prompts and settings, including Terminology-aware Translation Suggestions, according to their preferences. The Companion also highlights translated terms and provides a visual comparison of changes, making it easier for users to identify and track modifications made during translation.
Terminology-aware Translation Suggestions
This option enhances translation suggestions by incorporating terminology context with the prompt when requesting translations from OpenAI technology. By enabling this feature, OpenAI can refine the generated translation suggestions using translated terms, leading to more accurate and contextually relevant translations. This smart integration of terminology context ensures precision and consistency in all your translated content.
Support for tags
Full roundtrip support for tags in the XML source/target content that is included with the prompts. This enhancement greatly reduces the time required in post-editing work to reintroduce tags and formatting after applying translation suggestions from the OpenAI technology.
Revised prompt content structure
We revised the content structure that is included with the prompts when requesting translations. This was necessary to simplify how we write the prompt referring to elements in the content structure and ensure what we communicate is fully understood by the OpenAI technology.
Content structure
- TransUnit: The translation unit; must contain 1 <Source> element and can contain 1 <Translation> element and/or 1 <Terms> element.
- Source: The source segment content.
- Translation: The target segment content. Note: Optionally enabled by the user when the prompt explicitly references the existing translation and it should therefore be included in the translation request; for example, when the prompt uses the existing translation as context for producing a new translation.
- Terms: A list of terms that are matched against the source segment from the default Terminology Provider that is attached to the project. Note: Optionally enabled by the user. If enabled and terms exist, then there is an additional system message added to the prompt to instruct OpenAI to always use the provided translations for terms when generating the translated output, except for those labeled as 'Forbidden'.
<TransUnit>
  <Source Language="en-US">source text</Source>
  <Translation Language="it-IT">translated text</Translation>
  <Terms>
    <Term>
      <Source>source term</Source>
      <Translation>translated term</Translation>
      <Status>Preferred</Status>
    </Term>
  </Terms>
</TransUnit>
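As a sketch, the structure above could be assembled programmatically like this. The element names and attributes come from the description above; the helper function itself is hypothetical and not part of the plugin:

```python
import xml.etree.ElementTree as ET

def build_trans_unit(source, src_lang, translation=None, tgt_lang=None, terms=None):
    """Build a <TransUnit> payload as described above (illustrative helper)."""
    tu = ET.Element("TransUnit")
    ET.SubElement(tu, "Source", Language=src_lang).text = source
    if translation is not None:
        # <Translation> is optional: only included when the prompt references it.
        ET.SubElement(tu, "Translation", Language=tgt_lang).text = translation
    if terms:
        # <Terms> is optional: matched terms from the default Terminology Provider.
        terms_el = ET.SubElement(tu, "Terms")
        for src_term, tgt_term, status in terms:
            term_el = ET.SubElement(terms_el, "Term")
            ET.SubElement(term_el, "Source").text = src_term
            ET.SubElement(term_el, "Translation").text = tgt_term
            ET.SubElement(term_el, "Status").text = status
    return ET.tostring(tu, encoding="unicode")

print(build_trans_unit("source text", "en-US",
                       translation="translated text", tgt_lang="it-IT",
                       terms=[("source term", "translated term", "Preferred")]))
```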
Can you give me an example of a simple prompt to translate the source content?
If we want to write a prompt to translate the source segment, we could simply write “Translate this content” or “Translate the source content”. It is clear to the OpenAI technology that you want to translate the content from the <Source> element of the <TransUnit>.
What if I’m a reviewer and already have the translations, but would like to update them given the translated terms?
No problem, the prompt might look like this “Update this translation using the translated terms.” It’s clear to the OpenAI technology that it should match the terms in the content and use the provided translations of those terms in the translation result.
What if I also wanted to instruct the AI technology to use gender neutral nouns and pronouns as well as applying the translated terms?
Easy, the prompt might look like this “Update this translation using the translated terms. Use gender neutral nouns and pronouns in the translation.”
OpenAI API Rate Limits
Rate limits are restrictions that OpenAI imposes on the number of times a user or client can access services within a specified period of time.
How do these rate limits work?
Rate limits are measured in five ways: RPM (requests per minute), RPD (requests per day), TPM (tokens per minute), TPD (tokens per day), and IPM (images per minute). A rate limit can be hit on any of these measures, whichever is reached first. For example, if your RPM limit was 20 and your TPM limit was 150k, sending 20 requests of only 100 tokens each to the ChatCompletions endpoint would exhaust your RPM limit, even though those 20 requests contained far fewer than 150k tokens.
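The "whichever limit is reached first" behaviour can be illustrated with a small sketch. The function and the limit values are illustrative only, not part of the OpenAI API:

```python
def limiting_factor(requests_sent, tokens_sent, rpm_limit, tpm_limit):
    """Return which per-minute limit a batch of requests hits first, if any."""
    if requests_sent >= rpm_limit:
        return "RPM"
    if tokens_sent >= tpm_limit:
        return "TPM"
    return None

# 20 requests x 100 tokens = 2,000 tokens: well under a 150k TPM limit,
# but a 20-request RPM limit is already exhausted.
print(limiting_factor(20, 20 * 100, rpm_limit=20, tpm_limit=150_000))  # → RPM
```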
Other important things worth noting:
- Rate limits are imposed at the organization level, not user level.
- Rate limits vary by the model being used.
- Limits are also placed on the total amount an organization can spend on the API each month. These are also known as "usage limits".
Tier 1 rate limits per model
Usage tiers
You can view the rate and usage limits for your organization under the limits section of your account settings. As your usage of the OpenAI API and your spend on the OpenAI API go up, you will be automatically graduated to the next usage tier. This usually results in an increase in rate limits across most models.