Meet ‘Effidit,’ Tencent AI’s New Writing Assistant Powered by Artificial Intelligence

What would you expect from a state-of-the-art text assistant, whether you are writing an academic report or a business email? It would seamlessly autocomplete your text and offer high-quality suggestions. It would correct any errors you make. It would restructure your sentences for the target audience. It would even generate sentences from a few keywords. Tencent AI has built Effidit, a first-of-its-kind writing assistant that aims to go beyond existing writing assistants in both functionality and performance. Let us look at how they made this possible.

Very large-scale language models are one of the most popular research topics today because they help in so many day-to-day tasks. Frameworks like GPT are built on the Transformer decoder architecture: the decoder generates the most likely next token conditioned on the previously generated tokens. By itself, this kind of modeling often fails to produce good-quality text, because the token representations produced by the Transformer tend to lie very close to each other, i.e., distinct tokens end up with highly similar representations. This leads to repetitive text at different positions, a failure mode known as degeneration. The goal, therefore, is to produce token representations that are discriminative. If the similarities between distinct tokens are low, so that the token similarity matrix is sparse off the diagonal, the tokens are discriminative by construction.
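The anisotropy problem described above can be made concrete with a toy example. The sketch below (with made-up representations, not an actual language model) contrasts a "collapsed" set of token vectors, where every pairwise cosine similarity is near 1, with a discriminative set, where the off-diagonal entries of the similarity matrix are near zero:

```python
import numpy as np

def cosine_sim_matrix(h):
    """Pairwise cosine similarities between token representations (rows of h)."""
    h = h / np.linalg.norm(h, axis=1, keepdims=True)
    return h @ h.T

# Toy representations: an "anisotropic" model maps distinct tokens close together,
# while a discriminative model keeps them spread apart.
rng = np.random.default_rng(0)
base = rng.normal(size=4)
anisotropic = base + 0.01 * rng.normal(size=(3, 4))  # near-duplicate rows
discriminative = np.eye(3, 4)                        # orthogonal rows

def off_diag_mean(m):
    """Average off-diagonal similarity: high when representations collapse."""
    return (m.sum() - np.trace(m)) / (m.size - len(m))

print(off_diag_mean(cosine_sim_matrix(anisotropic)))    # close to 1.0
print(off_diag_mean(cosine_sim_matrix(discriminative))) # 0.0
```

A sparse (near-zero off-diagonal) similarity matrix is exactly the property the training objective described next tries to enforce.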

Source: https://arxiv.org/pdf/2208.01815.pdf

Yan Wang from Tencent AI and his colleagues from DeepMind and other labs created the framework SimCTG (Simple Contrastive framework for neural Text Generation) in early 2022, which addresses the token-similarity issue by preserving the sparsity of the token similarity matrix. It trains the language model to push apart the representations of distinct tokens: the model is trained to minimize the similarity between different tokens while each token's similarity with itself stays maximal. This is a form of contrastive training. Alongside this objective, the model still maximizes the likelihood of each token given the previously generated tokens.
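The contrastive objective can be sketched as a simple hinge loss over pairwise cosine similarities. This is an illustrative NumPy version operating on precomputed token representations; the margin value and the helper names here are assumptions for the sketch, not the paper's exact implementation:

```python
import numpy as np

def simctg_contrastive_loss(h, margin=0.5):
    """SimCTG-style contrastive loss for one sequence's token representations
    h (shape [seq_len, dim]): push the cosine similarity between every pair of
    distinct tokens below the self-similarity (which is 1) minus a margin."""
    h = h / np.linalg.norm(h, axis=1, keepdims=True)
    sim = h @ h.T  # pairwise cosine similarities
    n = len(h)
    loss = 0.0
    for i in range(n):
        for j in range(n):
            if i != j:
                # hinge term: nonzero only when s(h_i, h_j) > s(h_i, h_i) - margin
                loss += max(0.0, margin - sim[i, i] + sim[i, j])
    return loss / (n * (n - 1))

# Well-separated representations incur zero loss; collapsed ones do not.
spread = np.eye(3)
collapsed = np.ones((3, 3)) + 0.01 * np.random.default_rng(1).normal(size=(3, 3))
print(simctg_contrastive_loss(spread))     # 0.0
print(simctg_contrastive_loss(collapsed))  # > 0
```

In training, this term would be added to the usual next-token cross-entropy loss, so the model keeps its language-modeling ability while its representation space is pulled apart.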

To avoid degenerate outputs at inference time, they devised a clever decoding approach for generating text. At each step, the model first forms a set of the most probable candidate tokens; the model's confidence in a candidate is its predicted probability. The decoded sequence favors candidates with high model confidence, but a penalty is applied for selecting a token whose representation is highly similar to those of previously generated tokens. This discourages the model from repeatedly choosing similar tokens. They call this decoding method 'contrastive search'.
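A single step of contrastive search can be sketched as follows. This is a minimal, hypothetical implementation assuming precomputed candidate and context representations; the balance weight `alpha` and the function signature are illustrative choices, not the paper's exact code:

```python
import numpy as np

def contrastive_search_step(probs, cand_reps, prev_reps, k=3, alpha=0.6):
    """One decoding step of contrastive search (sketch).
    probs: model confidence p(v | x_<t) for each vocabulary item.
    cand_reps: representation of each vocabulary item, shape [vocab, dim].
    prev_reps: representations of previously generated tokens, shape [t, dim].
    Among the top-k most probable candidates, pick the one that best trades
    off model confidence against a degeneration penalty (maximum cosine
    similarity with any previously generated token)."""
    def norm(x):
        return x / np.linalg.norm(x, axis=-1, keepdims=True)
    top_k = np.argsort(probs)[::-1][:k]
    prev = norm(prev_reps)
    best, best_score = None, -np.inf
    for v in top_k:
        penalty = float(np.max(norm(cand_reps[v]) @ prev.T))
        score = (1 - alpha) * probs[v] - alpha * penalty
        if score > best_score:
            best, best_score = v, score
    return best

# Toy vocabulary of 4 tokens: token 0 is the most probable, but it is identical
# to the token just generated, so the penalty steers decoding to token 1.
probs = np.array([0.5, 0.3, 0.15, 0.05])
reps = np.eye(4)
prev = reps[[0]]  # token 0 was just generated
print(contrastive_search_step(probs, reps, prev))  # 1
```

With `alpha = 0`, this reduces to ordinary greedy decoding; raising `alpha` increasingly penalizes repetition-prone candidates.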

Using contrastive training together with contrastive decoding, SimCTG produces high-quality text. Tencent AI used this method to build Effidit. Currently, Effidit supports writing assistance in only two domains: general text and academic writing. Its key functionalities include high-quality phrase and text completion, text rewriting, and error correction. It can also generate sentences from a given set of keywords, and the editor suggests high-quality text as you write. Further improvements to the individual functionalities are expected in the future.

This article is written as a research summary by Marktechpost staff based on the research paper 'EFFIDIT: YOUR AI WRITING ASSISTANT'. All credit for this research goes to the researchers on this project. Check out the paper and demo.


I'm Arkaprava from Kolkata, India. I completed my B.Tech. in Electronics and Communication Engineering in 2020 from Kalyani Government Engineering College, India. During my B.Tech., I developed a keen interest in signal processing and its applications. I'm currently pursuing an MS degree in signal processing at IIT Kanpur, doing research on audio analysis using deep learning, with a focus on unsupervised and semi-supervised learning frameworks for several audio tasks.
