How Google Translate Works


Google Translate is a free online language translation service developed by Google. It allows users to translate text or web pages from one language into another.

In its current system, Google uses Recurrent Neural Networks (RNNs), which are well known to perform well on sequences of words and phrases. This approach has let Google continually improve translation quality, because the system takes into account not only the source words and phrases themselves, but also the broader context in which they appear: where they sit in the sentence, and which other words and phrases surround them.
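To see why recurrent networks suit this task, here is a toy sketch (not Google's actual model; the vocabulary, dimensions, and weights are invented for illustration) of a recurrent encoder reading a sentence word by word, so that each hidden state depends on everything read so far:

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = {"the": 0, "cat": 1, "sat": 2}
embed = rng.normal(size=(len(vocab), 8))   # one vector per word
W_h = rng.normal(size=(8, 8)) * 0.1        # hidden-to-hidden weights
W_x = rng.normal(size=(8, 8)) * 0.1        # input-to-hidden weights

def encode(words):
    """Return the final hidden state after reading the whole sentence."""
    h = np.zeros(8)
    for w in words:
        # Each step mixes the new word with everything seen before,
        # so the final state reflects the whole context, not one word.
        h = np.tanh(W_h @ h + W_x @ embed[vocab[w]])
    return h

h = encode(["the", "cat", "sat"])
print(h.shape)  # (8,) -- a single vector summarizing the sentence
```

Because the hidden state is threaded through every step, reordering the words changes the final vector, which is exactly the context-sensitivity that simple word-for-word mappings lack.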

For instance, the report describes using Japanese-English and Korean-English pairs to train their multilingual system. They are then able to ask the system for translations of pairs it has not seen before, namely Korean-Japanese in this example. Amazingly, the system produced reasonable translations for such pairs too.
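The mechanism behind this zero-shot behaviour is simple to sketch: a single model is trained on many language pairs at once, and the desired target language is signalled by a token prepended to the source sentence (the exact token format here is illustrative):

```python
# One shared model serves all language pairs; the prepended token
# tells it which language to produce. Token naming is an assumption
# for illustration, not the exact production format.
def prepare_input(source_tokens, target_lang):
    """Tag a tokenized source sentence with the desired target language."""
    return [f"<2{target_lang}>"] + source_tokens

# A Korean sentence tagged for Japanese output -- a direction the
# system may never have seen as a pair during training.
print(prepare_input(["안녕하세요"], "ja"))
# ['<2ja>', '안녕하세요']
```

Because the same encoder and decoder are shared across all languages, the model can combine what it learned from Korean-English and English-Japanese data to handle Korean-Japanese directly.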

This advance is significant: it demonstrates real progress toward the ultimate goal of getting computers to understand semantics and meaning, as opposed to performing simple syntactic mappings of words and phrases between individual language pairs.


Google Translate combines several technologies to provide translations between numerous languages. The system relies heavily on models derived from large amounts of parallel data (the same texts in multiple languages), on top of which its machine translation algorithms are built.

They show that embeddings of words with similar meanings are close together in the target space. This is a tremendous advance, as (simple) syntactic approaches yield wildly different mappings for even small syntactic differences, never mind similar meanings such as synonyms. There is also the additional benefit of compact representations: vectors of a few hundred real numbers are much easier to work with than simple-minded one-hot encodings whose dimensionality runs into the millions.
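The contrast is easy to demonstrate with cosine similarity. Under one-hot encoding every pair of distinct words is orthogonal, so synonyms look exactly as unrelated as any other pair; dense embeddings (the values below are invented for illustration) can place synonyms close together:

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity: 1.0 for parallel vectors, 0.0 for orthogonal."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# One-hot vectors: "big" and "large" are exactly as dissimilar
# as "big" and "banana" would be -- similarity is always 0.
one_hot_big = np.array([1.0, 0.0, 0.0])
one_hot_large = np.array([0.0, 1.0, 0.0])
print(cosine(one_hot_big, one_hot_large))  # 0.0

# Dense embeddings (illustrative values): synonyms can land
# close together in the vector space.
emb_big = np.array([0.9, 0.1, 0.3])
emb_large = np.array([0.85, 0.15, 0.35])
print(round(cosine(emb_big, emb_large), 3))  # close to 1.0
```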

Word embedding vectors of low dimensionality are thus viewed as representations of the meanings of phrases, which is what Google’s Zero-Shot Multilingual Translation relies upon. Their system creates embeddings that are language-independent, which is truly amazing if you think about it.

Word2vec is not really a deep learning system, as it is not based on multi-layer artificial neural networks. Its great power comes from the facts that it is an unsupervised method requiring no labeled training data, that it scales extremely well to (many) billions of words, and that it preserves semantic similarity as distance in a compact target vector space.
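The "no labeled data" point is the key: word2vec's skip-gram variant manufactures its own training pairs directly from raw text, with each word predicting its neighbours within a window. A minimal sketch of that pair-generation step:

```python
# Sketch of skip-gram pair generation (the training-data side of
# word2vec, not the full model): every (center word, context word)
# pair within the window becomes a training example, so any raw
# text corpus supplies supervision for free.
def skipgram_pairs(tokens, window=2):
    """Yield (center, context) pairs from a tokenized sentence."""
    pairs = []
    for i, center in enumerate(tokens):
        lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            if j != i:
                pairs.append((center, tokens[j]))
    return pairs

print(skipgram_pairs(["the", "cat", "sat", "down"], window=1))
# [('the', 'cat'), ('cat', 'the'), ('cat', 'sat'),
#  ('sat', 'cat'), ('sat', 'down'), ('down', 'sat')]
```

Because this step is trivially parallel and needs only plain text, it is easy to see how the method scales to corpora of billions of words.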

Google’s Machine Translation efforts are a great example of amazing advances in Deep Learning, which are about not only the quality of translations, but also about getting closer to the holy grail of computers understanding semantics and meanings.

A good way to test the quality of a sentence or phrase translation is to translate it into another language you know well. In general, I have found that translation quality depends mostly on the machine ‘understanding’ the meaning of the phrase, not on the specific language.
