Automatic Question Generation

In instructional material, authors commonly include a set of 'self-review' questions with which learners can assess their understanding of the material. These questions may range in complexity from probing deep understanding of the important concepts in a passage to testing the assimilation of key facts.

'Self-review' questions are usually formulated by the author and try to address learners at different levels. However, a learner may be unable to answer the 'self-review' questions at the end of a passage if the 'key' facts have not been identified and assimilated along the way. Authors may also be unable to cater to the particular needs of learners of English as a Second Language (ESL), or of individuals with relatively low working-memory spans. Clearly, an author cannot formulate a question for every 'factoid' in every sentence of a passage. Our technology addresses this gap.

It has recently been recognized that standalone Question Generation (QG) systems can be useful in instructional settings (see, e.g., [1]). However, as with the other applications discussed in previous sections, Question Generation (like Question Answering) depends crucially on the output of a high-quality Deep Parser. A flawed parse tree results in flawed questions that must be rectified by humans, and this has impeded the scaling up of QG systems. The problem can be sidestepped if the QG system is made aware of potential flaws in the input parse tree; however, most Deep Parsers provide no such information about the quality of the parse trees they generate. Given a high-quality parse tree, it is relatively straightforward to generate questions and answers.
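To make the last point concrete, the following is a minimal sketch of rule-based question generation from a parsed clause. The `Parse` structure and the single wh-question rule are illustrative assumptions for this sketch, not the system described here; a real system would operate on a full parse tree from a Deep Parser.

```python
# Illustrative sketch: generating a wh-question and its answer from a
# simplified parse of a clause. The Parse fields below are assumptions
# standing in for a real parser's output.
from dataclasses import dataclass

@dataclass
class Parse:
    subject: str    # e.g. "Marie Curie"
    aux: str        # auxiliary/copular verb, e.g. "was"
    predicate: str  # remainder of the clause, e.g. "born in Warsaw"

def wh_question(parse: Parse, wh: str = "Who") -> str:
    # Replace the subject with a wh-word:
    # "Marie Curie was born in Warsaw." -> "Who was born in Warsaw?"
    return f"{wh} {parse.aux} {parse.predicate}?"

def answer(parse: Parse) -> str:
    # The displaced subject is the answer to the generated question.
    return parse.subject

p = Parse(subject="Marie Curie", aux="was", predicate="born in Warsaw")
print(wh_question(p))  # Who was born in Warsaw?
print(answer(p))       # Marie Curie
```

Note that the rule is only as good as its input: if the parser mis-identifies the subject, the generated question and answer are both wrong, which is exactly the scaling problem described above.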

Automatically generated questions for each sentence can also help ESL learners understand complex syntactic structures in English; the very process of forming questions from complex sentences can be instructive in itself.
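The kind of transformation involved can be sketched for the simplest case, a yes/no question formed by subject-auxiliary inversion. The tokenization and the short auxiliary list below are assumptions made for this sketch; they do not reflect the coverage of the actual system.

```python
# Illustrative sketch: yes/no question formation by subject-auxiliary
# inversion, the kind of transformation ESL learners practice.
# The auxiliary list and whitespace tokenization are simplifying assumptions.
AUXILIARIES = {"is", "are", "was", "were", "has", "have", "can", "will"}

def to_yes_no_question(sentence: str) -> str:
    words = sentence.rstrip(".").split()
    # Find the first auxiliary and move it to the front of the sentence.
    for i, w in enumerate(words):
        if w.lower() in AUXILIARIES:
            aux = words.pop(i)
            subject = words[0]
            words[0] = subject[0].lower() + subject[1:]  # decapitalize subject
            return aux.capitalize() + " " + " ".join(words) + "?"
    # Sentences without an auxiliary need do-support, omitted in this sketch.
    raise ValueError("no auxiliary found")

print(to_yes_no_question("The parser has produced a tree."))
# Has the parser produced a tree?
```

Even this toy rule shows why complex sentences are instructive: choosing which clause's auxiliary to invert forces the learner (or the system) to identify the main clause correctly.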

An illustration on the following page shows how the technology is intended to work.


References