[Everyone’s AI] Explore AI Model: Q&A Generation Model

Input : Artificial intelligence refers to computer programs that artificially implement human abilities such as learning, reasoning, and perception.

Q : What refers to computer programs that artificially implement human abilities such as learning, reasoning, and perception?

A : Artificial intelligence

The simplest way to generate a question is to feed the model information that will act as the answer and have it generate a question for that answer, taking the context of the information into account. One reason question generation is not as mainstream as question answering is that most papers use complex pipelines and cannot take advantage of pre-trained models. To solve this problem, Suraj Patil used an end-to-end method without a complicated pipeline and released not only a pre-trained question generation model but also a question-and-answer generation model. We explain this method below.

If you want to check out the project right away, please refer to the following link!

Demo : https://master-question-generation-wook-2.endpoint.ainize.ai/

API : https://ainize.ai/Wook-2/question_generation

Github : https://github.com/patil-suraj/question_generation


For question and answer generation, three models are generally required: first, an answer extraction model; second, a question generation model for the extracted answers; and third, a model that takes a question and generates an answer. The reason there are two answer-related models is so their outputs can be compared to check whether the generated question is correct.

However, having three models for a single task is not only inconvenient but also inefficient. Because of this, we need a single multi-task model that can handle all three tasks, and for this project that multi-task model is T5. Now, let’s talk about T5.

Let’s solve it at once: T5

The BERT-based models previously used in the AI field showed excellent performance on many tasks. However, these models had a problem: the input and output format differed for each task. T5 (Text-To-Text Transfer Transformer) was introduced to solve this. Unlike BERT-based models, T5 handles every language task by converting it into a text-to-text task.

For example, if you want to translate English to German, use “translate English to German: That is good.” as the input and “Das ist gut.” as the output. If you want to judge the linguistic acceptability of a sentence, use “cola sentence: The course is jumping well.” as the input and “not acceptable” as the output; here the sentence itself does not make sense, so the model outputs “not acceptable.” With this text-to-text approach, you can use the same model, loss function, and hyperparameters across all tasks.

[Image: T5 treats every NLP task as a text-to-text problem]
Source : Google AI Blog — Exploring Transfer Learning with T5: the Text-To-Text Transfer Transformer
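To make the text-to-text idea concrete, here is a minimal runnable sketch using the Hugging Face Transformers library and the public t5-small checkpoint. The model name and generation settings are choices made for illustration, not something prescribed by the original post.

```python
# Minimal text-to-text sketch with Hugging Face Transformers and the t5-small checkpoint.
# Requires: pip install transformers sentencepiece torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# The task is selected purely by the text prefix, as in the translation example above.
input_ids = tokenizer("translate English to German: That is good.",
                      return_tensors="pt").input_ids
output_ids = model.generate(input_ids, max_length=40)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))  # e.g. "Das ist gut."
```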

The question-and-answer generation model multi-tasks by prepending task prefixes such as “extract answers:” and “generate question:” to the input text: it extracts answers, generates questions for the extracted answers, and takes questions and generates answers.

[Image: answer extraction, question generation, and question answering handled by one multitask model]
Source : question_generation Github
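The sketch below illustrates how those task prefixes turn three tasks into inputs for a single text-to-text model. The prefixes and the `<hl>` highlight markers follow the conventions used in the question_generation repository; the exact formatting can differ per checkpoint, so treat these strings as assumptions rather than the definitive input format.

```python
# Illustrative only: three tasks expressed as prefixed text-to-text inputs.
context = ("Artificial intelligence refers to computer programs that artificially "
           "implement human abilities such as learning, reasoning, and perception.")
question = "What refers to computer programs that artificially implement human abilities?"

# 1. Answer extraction: highlight the sentence that answer spans should come from.
extract_input = f"extract answers: <hl> {context} <hl>"

# 2. Question generation: highlight the chosen answer span inside the context.
qg_input = ("generate question: <hl> Artificial intelligence <hl> refers to computer "
            "programs that artificially implement human abilities such as learning, "
            "reasoning, and perception.")

# 3. Question answering: feed the generated question back together with the context.
qa_input = f"question: {question} context: {context}"

for text in (extract_input, qg_input, qa_input):
    print(text)
```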

Usage

  • Using the Question Generator DEMO provided by Ainize

First, let’s use the Question Generator DEMO provided by Ainize.

Enter some text in the typing box and click the Generate button to generate a question and answer. When entering text, it is recommended to copy a passage from an encyclopedia. The DEMO is available at this link.

  • Using the Question Generator API provided by Ainize

This time, we will use the Question Generator through the API provided by Ainize. Information about the API can be found at this link.
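As a rough illustration, the API could be called from Python with the requests library along the lines of the sketch below. The endpoint path ("/generate") and the JSON field name ("text") are hypothetical assumptions made for illustration; check the Ainize API page linked above for the actual route and request schema.

```python
# Hypothetical sketch of calling the Ainize-hosted Question Generator API.
# The "/generate" path and the "text" field are assumptions; consult the API docs.
import requests

BASE_URL = "https://master-question-generation-wook-2.endpoint.ainize.ai"

payload = {"text": ("Artificial intelligence refers to computer programs that "
                    "artificially implement human abilities such as learning, "
                    "reasoning, and perception.")}

response = requests.post(f"{BASE_URL}/generate", json=payload, timeout=60)
response.raise_for_status()
print(response.json())  # expected: generated question/answer pairs
```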

  • Using the model through the pipeline

This time, we will use the model through the pipeline. The advantage of using a pipeline is that data preprocessing and result generation are handled in a single step, without having to run each stage separately.

More details can be found at this link.
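Below is a minimal sketch based on the question_generation repository’s README. It assumes the repository has been cloned (so that pipelines.py is importable from the repo root) and that transformers and torch are installed; the example text is just a placeholder.

```python
# Minimal sketch, assuming https://github.com/patil-suraj/question_generation is cloned
# and run from the repo root (pipelines.py lives there), with transformers + torch installed.
from pipelines import pipeline

# Multitask pipeline: answer extraction + question generation + question answering in one model.
nlp = pipeline("multitask-qa-qg")

text = ("Artificial intelligence refers to computer programs that artificially "
        "implement human abilities such as learning, reasoning, and perception.")

# Generate (question, answer) pairs directly from raw text.
print(nlp(text))

# Answer a question against the same text by passing a dict with "question" and "context".
print(nlp({"question": "What is artificial intelligence?", "context": text}))
```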

How could this model be used? In my case, the first thing that comes to my mind is education. You can review what you have studied by feeding your study material into the model, generating questions from it, and then answering them yourself. How else could it be used? Please leave your opinions in the comments!

Reference

  1. Exploring Transfer Learning with T5: the Text-To-Text Transfer Transformer (Google AI Blog)
  2. Artificial intelligence
  3. question_generation GitHub
  4. question_generation on Ainize