
Procedia Computer Science 22 (2013) 991 - 1000

17th International Conference in Knowledge Based and Intelligent Information and Engineering Systems - KES2013

Interactive Document Expansion for Answer Extraction of Question Answering System

Junichi Fukumoto (a), Noriaki Aburai (b), Ryosuke Yamanishi (a)

(a) Dept. of Media Technology, Ritsumeikan University; (b) Graduate School of Science and Technology, Ritsumeikan University, 1-1-1 Nojihigashi, Kusatsu, Shiga 525-8577, Japan

Abstract

In this paper, we propose a method that navigates a user toward a correct answer in a Question Answering (QA) system through user interaction. QA is a technology that extracts appropriate answer strings for a given question sentence from huge document collections such as the Web or newspaper articles. If a given question is ambiguous, the answers will vary according to its possible interpretations, and the documents retrieved with the query words of the question sentence will consist of various types of information. In order to focus on the intended topic, it is necessary to provide more information to narrow down the search area for the question. In our approach, the QA system selects a clue word that identifies an appropriate topic among the topics in the retrieved documents and asks the user whether this clue word is appropriate or not. The search space is then reduced using this clue word. However, such narrowing down reduces the number of answer candidates because the number of target documents decreases. We therefore re-retrieve documents using the clue word and expand the search space to increase the possibility of obtaining correct answer candidates.

© 2013 The Authors. Published by Elsevier B.V.

Selection and peer-review under responsibility of KES International

Keywords: Question Answering, User interaction, Named Entity extraction

1. Introduction

Question Answering (QA) is a technology that extracts appropriate answer strings for a given question sentence from huge document collections such as the Web or newspaper articles. For example, when the question sentence "Who is the prime minister of Japan?" is given, a QA system will provide the answer string "Mr. Abe" extracted from those documents. There has been a large amount of work on QA, and much of it has been carried out in evaluation workshops such as the TREC QA track [1, 2]¹, CLEF² and NTCIR QAC [3, 4]. In these tasks, participants are required to answer given questions, and reference answers are prepared for evaluation. However, in practical settings, questions are often ambiguous because they do not contain enough information to focus on a specific question topic.

Email addresses: fukumoto@media.ritsumei.ac.jp (Junichi Fukumoto), aburai@nlp.is.ritsumei.ac.jp (Noriaki Aburai), ryama@fc.ritsumei.ac.jp (Ryosuke Yamanishi)

¹ http://trec.nist.gov/

² http://clef-qa.fbk.eu/

1877-0509 © 2013 The Authors. Published by Elsevier B.V. Selection and peer-review under responsibility of KES International. doi:10.1016/j.procs.2013.09.184

If a given question is ambiguous, it is difficult to obtain the single intended answer. In this case, the obtained answers will vary according to the possible interpretations of the question, and the documents retrieved with the query words of the question sentence will contain various types of information. In order to focus on the intended topic, it is necessary to provide more information to narrow down the search area for the question. The HITIQA system [5] is an interactive open-domain question answering system that gathers information to satisfy a given scenario template; it obtains information interactively, but the interaction follows the scenario, so its interaction strategy is not flexible. Shirai et al. [6] proposed a method to extract appropriate information for understanding an ambiguous question. However, they only extracted information to narrow down the question topic and did not realize actual user interaction in question answering dialogues.

In our approach, the QA system selects a clue word that identifies an appropriate topic among the topics in the retrieved documents and asks the user, through user interaction, whether this clue word is appropriate or not. The search space is then reduced using this clue word. However, such narrowing down reduces the number of answer candidates because the number of target documents decreases. We therefore re-retrieve documents using the clue word and expand the search space to increase the possibility of obtaining correct answer candidates.

In order to select such a clue word, our QA system [7] first extracts words that modify the query words of an input question, because modifying words act as constraints on the query words. The system then classifies the extracted modifying words according to their Named Entity types, such as city names and sports names, and selects the most frequent type. Words of this type are used for narrowing down and re-retrieving documents. In order to select an appropriate word, the system retrieves documents using each word of this type and chooses the document set that includes the largest number of answer candidates. The QA system presents the word that retrieves the largest number of answer candidates as the best clue word in the user interaction. The system continues this interaction until the user obtains an appropriate answer to the given question.

In the following sections, we first give a brief overview of the QA system and how user interaction works in it. We then describe the method for extracting clue words from retrieved documents and their classification for implementing user interaction. Next, we show how the clue word is used for re-retrieval of documents during interaction. Finally, we present experimental results of our method using several question sentences and interactions.

2. Overview of QA system with user-interaction

We first describe how a QA system provides answers for a given question, and then show the overall process of user-interactive question answering in our QA system.

2.1. An overview of Question Answering

In a QA system, a given question is first analyzed to determine its question type and queries. The question type indicates what kind of information the question asks for, and the queries are used to retrieve documents from the information source. For example, when the question sentence "Who is the prime minister of Japan?" is given, the question type is person and the queries are "prime," "minister" and "Japan." The word "who" is not extracted because it is a question term and is not appropriate for document retrieval. The QA system then retrieves target documents from the information sources using an IR engine such as the Google search engine. Usually, the number of retrieved documents is between ten and thirty or fifty, depending on the QA system's settings; the more documents are retrieved, the more time the answer extraction module needs and the more noise it must handle. Answer candidates are extracted from the retrieved documents using a named entity recognition module [8]. If a given question requires a person name, possible person names in the retrieved documents are recognized by this module. Each extracted answer candidate is given a weight that orders the candidates by their likelihood of being the correct answer. There are many ways of calculating such weights, for example based on word distance to the query words. According to the weight calculation, the possible correct answers are presented in ranked order. For the sample question above, the QA system will provide the answer string "Mr. Abe."
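As a concrete illustration of this generic pipeline, the following Python sketch wires the steps together. It is a minimal sketch and not the authors' implementation: retrieve_documents is a two-document stub standing in for an IR engine, recognize_entities is a toy stand-in for a named entity recognition module such as [8], and the frequency-based weight is just one of the many possible weighting schemes mentioned above.

import re
from collections import defaultdict

QUESTION_TERMS = {"who", "what", "when", "where", "which", "is", "a", "of", "the"}

def analyze_question(question):
    """Determine the question type and the query terms (question terms dropped)."""
    words = question.rstrip("?").lower().split()
    qtype = "PERSON" if "who" in words else "UNKNOWN"
    return qtype, [w for w in words if w not in QUESTION_TERMS]

def retrieve_documents(queries, limit=30):
    """Stub for an IR engine call; keeps documents containing all query words."""
    corpus = [
        "The prime minister of Japan, Mr. Abe, met the press.",
        "Japan's prime minister Mr. Abe announced a new policy.",
    ]
    return [d for d in corpus if all(q in d.lower() for q in queries)][:limit]

def recognize_entities(doc, qtype):
    """Toy NE recognizer: any 'Mr. X' pattern counts as a PERSON."""
    return re.findall(r"Mr\. \w+", doc) if qtype == "PERSON" else []

def answer(question):
    qtype, queries = analyze_question(question)
    weights = defaultdict(float)
    for doc in retrieve_documents(queries):
        for candidate in recognize_entities(doc, qtype):
            weights[candidate] += 1.0   # simplistic frequency-based weight
    return sorted(weights, key=weights.get, reverse=True)

print(answer("Who is a prime minister of Japan?"))  # -> ['Mr. Abe']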

2.2. User interaction to focus on intended topic

If a given question contains little information and is ambiguous, the documents retrieved for it cover several topics. It is necessary to provide additional information to choose an appropriate topic from the retrieved documents. Yamauchi et al. [9] proposed a method to focus on the intended topic using user interaction. Their approach extracts an appropriate word from the retrieved documents and uses it to narrow down the search space obtained by document retrieval in QA. Such a word is chosen from the words that modify the query words of the question sentence.

Figure 1 shows the process of narrowing down the search space for the question sentence "Who won a gold medal in the Olympics?"³ There are many topics in the retrieved documents, and the QA system chooses the topic of Olympic cities. The left part of the figure indicates that the retrieved documents contain topics about Olympic host cities: Athens, Beijing and London. When the user responds "Beijing" in the interaction, the QA system selects the documents about the Beijing Olympics. The focused documents still contain several topics, and the QA system next chooses sports events at the Beijing Olympics. The center part of the figure indicates topics about sports events such as judo, swimming and marathon. The user is again asked for information and responds "Judo." The documents about the judo events at the Beijing Olympics are selected, and the right part of the figure shows that there are no more topics in the selected documents. The QA system provides "Mr. Satoshi Ishii" as the answer to the given question.

[Figure: three successively narrower document sets labeled "Retrieved documents," "Beijing Olympics" and "Beijing Olympics, player of judo"]

Fig. 1. Narrowing down the search space in QA

In their approach, the QA system can narrow down the search space to obtain an appropriate answer using user interaction; however, the number of documents from which answers are extracted keeps decreasing, and the possibility of obtaining a correct answer becomes lower.

2.3. Search space expansion with user interaction

In our method, the QA system extracts such information from the retrieved documents and asks the user whether this information is appropriate or not. If the QA system obtains additional information to focus on the intended topic, it retrieves documents again using the clue word obtained through the interaction.

The overall process of user-interactive question answering in our system is as follows.

1. The QA system analyzes the given question sentence to obtain a question type and query terms for document retrieval.

2. The query terms are used to retrieve target documents with an IR engine such as the Google search engine.

3. The QA system extracts words that modify the query words in the retrieved documents and classifies these modifying words according to the information types assigned by a Named Entity recognizer.

4. The number of documents for each information type is counted, and the most frequent type is chosen as the target information type.

5. For each clue word of the chosen information type, the IR engine retrieves documents together with the original query words, and the number of answer candidates of the analyzed question type is counted.

6. The clue word whose retrieved documents contain the most answer candidates is selected for interaction.

7. The QA system asks the user whether the intended topic is related to the selected clue word or not.

8. If the user replies "yes," the selected document set becomes the target document set for answer extraction, and the process returns to step 3. If "no," the QA system presents the answer candidates of the current document set.

9. If no clue words remain in the selected information type, the second information type is selected; the process then returns to step 5 until no information type remains.

³ In this paper, our QA system is implemented for Japanese question answering, and all example questions and answers are in Japanese. We keep the Japanese expressions in their original form and give their pronunciation and English meaning in parentheses.

Regarding the selection of the information type of clue words in step 4, the more words an information type contains, the more effectively the QA system can focus on an appropriate topic, because the system explores answer candidates in the most popular domain. In step 6, our method prefers the situation that has more answer candidates, which increases the possibility of finding a correct answer. A sketch of this interactive loop is given below.
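The following Python sketch outlines the loop above under simplifying assumptions. The parameters retrieve, extract_modifiers, ne_type, extract_candidates and ask_user are hypothetical stand-ins for the system's retrieval, modifier extraction, Named Entity classification, answer extraction and user-interaction components (analyze_question is reused from the sketch in Section 2.1), and the fallback to further clue words and to the second information type in steps 8 and 9 is elided for brevity.

from collections import defaultdict

def interactive_qa(question, retrieve, extract_modifiers, ne_type,
                   extract_candidates, ask_user, max_rounds=5):
    """Sketch of the interactive narrowing-and-expansion loop (steps 1-8)."""
    qtype, queries = analyze_question(question)               # step 1
    docs = retrieve(queries)                                  # step 2
    for _ in range(max_rounds):
        by_type = defaultdict(set)                            # step 3
        for word in extract_modifiers(docs, queries):
            by_type[ne_type(word)].add(word)
        if not by_type:
            break
        best_type = max(by_type, key=lambda t: len(by_type[t]))   # step 4
        # Steps 5-6: re-retrieve per clue word and count answer candidates.
        scored = {w: len(extract_candidates(retrieve(queries + [w]), qtype))
                  for w in by_type[best_type]}
        clue = max(scored, key=scored.get)
        if ask_user(clue):                                    # steps 7-8
            queries.append(clue)
            docs = retrieve(queries)                          # expand by re-retrieval
        else:
            break
    return extract_candidates(docs, qtype)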

3. Extraction of clue words from retrieved documents

Target documents are retrieved with query words extracted from a given question. The query words appear in these documents and are used to extract answer candidates. A query word is sometimes part of a compound noun together with other words. For example, in the question sentence "Who is a gold medalist at the Olympic games?," the word "Olympic" might appear with other words such as "London," "Beijing," "2012" or "winter" in the form of compound nouns. These adjacent words put semantic constraints on the word "Olympic"; that is, "London Olympic" is the Olympics held in London. In another case, words that modify a query word appear with the Japanese particle "の: no (of)." Such a modifying phrase with the particle "の" also places a semantic constraint on the query word. For example, in "オリンピックの水泳 (Olympic no suiei: swimming of the Olympics)," the sports event name "水泳 (swimming)" is attached with the Japanese particle "の (of)" and constrains the word "オリンピック (Olympic)" semantically. We extract such modifying words using the patterns shown in Table 1. In this table, <extracting word> denotes the word to be extracted and <query word> denotes a query word from the question sentence.

Table 1. Extraction patterns for modifying words

Num.  Pattern                              Example                    Example (English)
1     <extracting word> <query word>       ロンドンオリンピック       London Olympic
2     <query word> <extracting word>       オリンピック2012           Olympic 2012
3     ( <extracting word> ) <query word>   (2008)オリンピック         (2008) Olympic
4     <query word> ( <extracting word> )   ワールドカップ(2010)       Worldcup (2010)
5     <extracting word> ・ <query word>    2012・オリンピック         2012 Olympic
6     <query word> ・ <extracting word>    ワールドカップ・サッカー   Worldcup soccer
7     <extracting word> の <query word>    柔道のオリンピック         Olympic of Judo

The extracted words are classified into information types using a Named Entity recognizer. We used the Named Entity recognizer iNExT, an improved version of the original NExT system [10, 11], which has 77 Named Entity categories. For example, "Beijing," "London," "Athens" and "Tokyo" are classified into the category CITY, and "Swimming," "Judo" and "Tennis" into the category SPORTS.

The extracted words are grouped according to their identified Named Entity categories, and the category that includes the largest number of distinct words is selected. In the example above, there are four distinct words in the category CITY and three in the category SPORTS, so the category CITY is selected. Note that we count the number of distinct words, not the total number of occurrences.
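A minimal sketch of the extraction and category selection follows, under the assumption that only the delimiter-based patterns of Table 1 (rows 3 to 7) are matched with regular expressions; the plain compound-noun patterns (rows 1 and 2) would require a morphological analyzer and are omitted. Here extract_modifiers is a concrete single-query version of the stand-in used in the earlier sketch, and ne_category is a hypothetical substitute for iNExT's 77-category classification.

import re
from collections import defaultdict

def extract_modifiers(text, query_word):
    """Extract modifying words adjacent to the query word (Table 1, rows 3-7)."""
    q = re.escape(query_word)
    patterns = [
        rf"(\w+)の{q}",     # row 7: <extracting word> の <query word>
        rf"(\w+)・{q}",     # row 5: <extracting word> ・ <query word>
        rf"{q}・(\w+)",     # row 6: <query word> ・ <extracting word>
        rf"\((\w+)\){q}",   # row 3: ( <extracting word> ) <query word>
        rf"{q}\((\w+)\)",   # row 4: <query word> ( <extracting word> )
    ]
    found = set()
    for p in patterns:
        found.update(re.findall(p, text))
    return found

def select_category(words, ne_category):
    """Group words by NE category; pick the category with most distinct words."""
    groups = defaultdict(set)
    for w in words:
        groups[ne_category(w)].add(w)
    best = max(groups, key=lambda c: len(groups[c]))
    return best, groups[best]

# Toy usage with a hypothetical category lookup:
categories = {"柔道": "SPORTS", "水泳": "SPORTS", "2008": "YEAR"}
words = extract_modifiers("柔道のオリンピック、水泳のオリンピック、(2008)オリンピック", "オリンピック")
print(select_category(words, lambda w: categories.get(w, "OTHER")))
# -> ('SPORTS', {'柔道', '水泳'})  (set ordering may vary)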

When a category is selected, our method retrieves documents using each word of the selected category together with the original query words, and counts the number of answer candidates in the retrieved documents. In the example above, when the category CITY is selected, the system retrieves documents using the clue words "Beijing," "London," "Athens" and "Tokyo," and counts the answer candidates in each retrieval result. If the retrieval results for "Beijing," "London," "Athens" and "Tokyo" include 5, 3, 6 and 1 answer candidates respectively, the clue word "Athens" will be selected.
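As a tiny worked instance of this selection rule, using the candidate counts quoted above:

# Candidate counts from the example in the text.
counts = {"Beijing": 5, "London": 3, "Athens": 6, "Tokyo": 1}
print(max(counts, key=counts.get))  # -> Athens (6 candidates)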

We show a diagram of the re-retrieval of documents for each category in Figure 2. The middle circle indicates the category classification of the first retrieved documents. The clue words "Beijing," "Athens" and "London" are used for re-retrieval, and the document set is expanded for each of them.

[Figure: the first retrieved document set in the center, expanded by re-retrieval into document sets for the Athens, Beijing and London Olympics, each containing sports-event topics such as players of judo, swimming, soccer, tennis and archery]

Fig. 2. Explanatory diagram of hierarchical coordination

The newly retrieved documents are also classified according to the categories identified in each document set. In the documents retrieved with the clue word "Beijing," the selected category is SPORTS, whose clue words are "Boxing," "Judo," "marathon" and "swimming." In the documents retrieved with the clue word "London," the selected category is also SPORTS, whose clue words are "Boxing," "Judo," "archery" and "swimming." These new categories depend on their respective document sets.

4. Sentence generation for user-interaction

When a clue word is selected, the user is asked to judge whether focusing the topic on this clue word is appropriate or not. For this inquiry, a sentence is generated using the following pattern.

<clue word> の <query word> ですか？ (Is it the <query word> of <clue word>?)

Here <clue word> denotes the selected clue word and <query word> denotes the query word that the selected clue word modifies. If the user replies "yes," the clue word is used for re-retrieval of documents together with the original query words. For example, when the selected clue word is "Beijing," which modifies the query word "Olympic," the sentence "北京のオリンピックですか？ (Is it the Beijing Olympics?)" is generated.

The QA system can then understand that the word "Olympic" in the question sentence is actually intended to mean the "Beijing Olympics" and tries to find answer candidates in the documents newly retrieved with the clue word "Beijing." If the user responds "no," the clue word with the second-largest number of answer candidates in the selected category is chosen. If the user responds "no" more than three times within the selected category, the second category is chosen, and new clue words are extracted from the newly retrieved documents in the same way as for the first category.

After a clue word is selected, the QA system extracts answer candidates from the newly retrieved documents. If no new answer candidates can be found in them, the system presents the last set of answer candidates in ranked order and the whole process terminates.

5. Experiments on question answering

In this section, we show experimental results of interaction with our QA system. It is difficult to report the numerical performance of our method, so we present actual interactions and answer lists for some question sentences. In these results, "Input question" shows a given question with its English translation in parentheses, and "query" lists the query words extracted from the question sentence. An English translation sometimes consists of several words even when the Japanese is a single word. The top ten answers are shown with their scores. After the first answer list, the interaction starts; when it terminates, the top ten answers from the re-retrieved documents are presented.

Figure 3 shows an example of interaction for the question "Who is the main cast in the Taiga drama?" In the first three interactions, the YEAR category is selected and the clue words are "fifteen," "2013" and "2012." The resulting answer list is not the one the user intended, so s/he replies "no." The new category PERSON/ACTOR is then selected, and the clue the user intended, "Atsuhime princess," which is used as an additional query word, is chosen in the fifth interaction. The correct answer is ranked 10th in the first answer list but 1st and 8th in the second.

In the second example, shown in Figure 4, the category TITLE is selected and the clue words used in the interaction are "Oshin" and "Teacher Umechan." The user replies "yes" to the second word, and this clue word is then used for re-retrieval of documents. The intended answer is "Maki Horikita"; the 4th and 5th answers in the first answer list are correct, as are the 1st and 3rd in the second. After the interaction, the correct answers are ranked higher.

In Figure 5, the query word selected for interaction is "Japan" and its modifying words are "football," "soccer" and "F1" (the SPORTS category). In this case, the generated sentences for interaction are not good because the combination of these words is unnatural. If the system could select the query word "Grand Prix" or "Japan Grand Prix," sentence generation might work better. Even so, the correct answer is ranked 10th in the first list and 7th in the second, so fortunately the result was not bad.

6. Discussion

In the experiments, we prepared 10 questions. User interaction occurred for all questions after the first answer list was presented. We discuss below, from several points of view, how our method works for interaction in a QA system.

• Extraction of clue words

In the experiments, we succeeded in extracting modifiers for query words because many query words have modifying words in documents. This means that selecting modifiers is effective for identifying the ambiguity of query words in documents. The extracted modifiers tend to fall into more than one category, and each category contains several words. Therefore, modifiers of query words can be useful clue information for narrowing down the information in documents.

• Query words in compound noun

In our current implementation, a word that has many modifiers is selected for interaction, so frequent words tend to be selected. In the third example, shown in Figure 5, "Japan Grand Prix" is a compound noun. The word "Japan" is frequent, and modifiers of this word are used to build the category for interaction. The expression "Japan Grand Prix" should have been used as a single query word in this case. It may be necessary to handle a compound noun as one query word, but further analysis is needed for this improvement.

Input question: (Who is the main cast in the Taiga drama?)

Intended answer: (Aoi Miyazaki)

query = (Taiga drama), (main cast)

***** SCORE and ANSWER *****

1st 11.504 / (Haruka Ayase)
2nd 10.779 / (Kiyomori Taira)
3rd 10.366 / (Okada)
4th 9.396 / (Ken Matsuyama)
5th 5.913 / (Kanbei)
6th 5.122 / (Yae Niijima)
7th 4.828 / (Kanbei Kuroda)
8th 4.555 / (Hideyoshi Toyotomi)
9th 4.197 / (Hideyoshi)
10th 3.880 / (Aoi)

(yes | no) (Is it the Taiga drama of the fifteenth?)
(yes | no) (Is it the Taiga drama of 2013?)
(yes | no) (Is it the Taiga drama of 2012?) >no.
(yes | no) (Is it the Taiga drama of Kiyomori Taira?)
(yes | no) (Is it the Taiga drama of Atsuhime princess?)

query = (Taiga drama), (main cast), (Atsuhime princess)

***** SCORE and ANSWER *****

1st 17.282 / (Aoi)
2nd 12.285 / (Tenshouin)
3rd 9.715 / (Iesada Tokugawa)
4th 9.299 / (Tokugawa)
5th 9.035 / (Masato)
6th 7.795 / (Keiko Matsuzaka)
7th 7.681 / (Kazunomiya)
8th 7.550 / (Aoi Miyazaki)
9th 7.496 / (Kanako Higuchi)
10th 7.461 / (Tatewaki Komatsu)

Fig. 3. Sample interaction in QA system (1)


• Determination of categories

Words of the same Named Entity type are used for interaction; therefore, it is important to set an appropriate level of detail for the Named Entity types. We used our Named Entity recognition tool, which has 77 types at two levels. The upper types are person name, organization name, place name, etc., and each upper type has several subtypes. Upper and lower types are treated as different; for example, athlete name and person name are distinguished. It is difficult to identify the Named Entity types of many elements correctly. Even if there are many athlete names in the documents, not all of them can be recognized as athlete names, because the NE tool has little information to identify them or the names themselves give no clue that they are athlete names.
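The following toy snippet illustrates the two-level distinction; the type names and mapping are invented for illustration and do not reproduce iNExT's actual 77-category inventory.

# Hypothetical two-level NE type hierarchy (illustrative, not iNExT's inventory).
UPPER = {"ATHLETE": "PERSON", "ACTOR": "PERSON", "CITY": "PLACE"}

def same_type(t1, t2, level="lower"):
    """At the lower level ATHLETE != PERSON; at the upper level they unify."""
    if level == "lower":
        return t1 == t2
    return UPPER.get(t1, t1) == UPPER.get(t2, t2)

print(same_type("ATHLETE", "PERSON"))            # False: treated as different
print(same_type("ATHLETE", "PERSON", "upper"))   # True: both map to PERSON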

• Re-retrieval of documents

In the experiments, there was a case in which re-retrieval of documents failed. The question asked about a certain period of time. The system focused on a topic and re-retrieved related documents through user interaction, but the new documents contained more answer candidates than the previous ones, so the scoring of answers did not work well. In our current implementation, the selection of the appropriate document set is determined by the number of answer candidates, but this criterion needs to be evaluated with more examples.

Input question: (Who is the main cast in the TV novel?)

Intended answer: (Maki Horikita)

query = (TV), (novel), (main cast)

***** SCORE and ANSWER *****

1st 9.080 / (Jun)
2nd 7.427 / (Teacher Umechan)
3rd 6.567 / (Natsu)
4th 6.186 / (Maki Horikita)
5th 4.668 / (Ms Maki Horikita)
6th 3.766 / (fan)
7th 3.603 / (Reina)
8th 3.142 / (Fukushi)
9th 3.000 / (Hideaki Takizawa)
10th 2.918 / (Maki Kita)

(yes | no) (Is it the novel of Oshin?)
(yes | no) (Is it the novel of Teacher Umechan?)

query = (TV), (novel), (main cast), (Teacher Umechan)

***** SCORE and ANSWER *****

1st 15.021 / (Maki Horikita)
2nd 9.551 / (Teacher Umechan)
3rd 6.550 / (Ms Maki Horikita)
4th 6.247 / (Umeko)
5th 4.135 / (Maki Kita)
6th 4.112 / (Katsumi Takahashi)
7th 3.966 / (Kaho Minami)
8th 3.848 / (Kenji Kawai)
9th 3.801 / (Takeo)
10th 3.760 / (Megumi Koide)

Fig. 4. Sample interaction in QA system (2)


• Sentence generation for interaction

We applied a simple approach to generating the interaction sentence, using only one clue word per sentence. It might be possible to ask the user about two or more clue words at a time for more effective focusing.

7. Conclusion and Future Work

In this paper, we proposed a method that navigates a user toward a correct answer in a QA system through user interaction. When a question is given, the QA system narrows down the search area to focus on the user's intended topic in the documents. The QA system selects a clue word that identifies an appropriate topic among the topics in the retrieved documents and asks the user whether this clue word is appropriate or not. The system then re-retrieves documents using this clue word and expands the search space to increase the possibility of obtaining correct answer candidates. We conducted experiments using ten questions and showed the effectiveness of our interaction method in question answering.

Input question: (Who won the Japan Grand Prix?)

Intended answer: (Michael Schumacher)

query = (Japan), (Grand Prix), (won)

***** SCORE and ANSWER *****

1st 14.592 / (Kobayashi)
2nd 9.207 / (Sebastian)
3rd 7.083 / (Renault)
4th 6.865 / (fan)
5th 6.463 / (Williams)
6th 5.723 / (Takuma Sato)
7th 5.587 / (Suzuki)
8th 4.790 / (Mark)
9th 4.655 / (Paul)
10th 4.554 / (Schumacher)

(yes | no) (Is it the Japan of football?)
(yes | no) (Is it the Japan of soccer?)
(yes | no) (Is it the Japan of F1?)

query = (Japan), (Grand Prix), (won), (F1)

***** SCORE and ANSWER *****

1st 14.261 / (Kobayashi)
2nd 6.923 / (fan)
3rd 6.180 / (Aaron)
4th 5.651 / (Sebastian)
5th 5.305 / (Renault)
6th 4.378 / (Suzuki)
7th 4.318 / (Schumacher)
8th 4.068 / (Jen)
9th 4.030 / (Senna)
10th 3.949 / (Tel)

Fig. 5. Sample interaction in QA system (3)

In the future, it is necessary to continue the experiments with more questions and to evaluate our method for further improvement. According to the experiments, there are points to improve in the selection of query words and the categorization of clue words. Moreover, since the sentence generation method for interaction is simple, there is much room for improvement, such as inquiring about several clue words at once. It will also be possible to apply this question answering mechanism to other applications.

References

[1] E. M. Voorhees, Overview of the TREC 2003 question answering track, in: Proc. of the Twelfth Text REtrieval Conference (TREC 2003), 2004, pp. 54-68.

[2] E. M. Voorhees, Overview of the TREC 2004 question answering track, in: Proc. of the Thirteenth Text REtrieval Conference (TREC 2004), 2005, pp. 53-62.

[3] J. Fukumoto, T. Kato, F. Masui, T. Mori, An overview of the 4th Question Answering Challenge (QAC-4) at NTCIR Workshop 6, in: Proc. of the Sixth NTCIR Workshop Meeting, 2007, pp. 433-440.

[4] T. Kato, J. Fukumoto, F. Masui, An overview of NTCIR-5 QAC3, in: Proc. of the Fifth NTCIR Workshop Meeting, 2005, pp. 361-372.

[5] S. Small, T. Strzalkowski, T. Janack, T. Liu, S. Ryan, R. Salkin, N. Shimizu, P. Kantor, D. Kelly, R. Rittman, et al., HITIQA: Scenario-based question answering, in: Proceedings of HLT, 2004.

[6] K. Shirai, H. Tokue, Fundamental studies on generation of questions to users in an interactive question answering system, in: IPSJ SIG 2005-NL-165, 2005, pp. 53-56, (in Japanese).

[7] N. Aburai, R. Yamanishi, J. Fukumoto, Interactive supports for insufficient questions in QA system, in: Proc. of the Fourth Asian Joint Workshop on Information Technologies, 2012, p. 62.

[8] E. Marsh, D. Perzanowski, MUC-7 evaluation of IE technology: Overview of results, in: Message Understanding Conference, 1998.

[9] S. Yamauchi, J. Fukumoto, Focusing answer candidates with user interaction in QA system, in: NLC2008-23, 2008, pp. 23-28, (in Japanese).

[10] I. Watanabe, F. Masui, J. Fukumoto, Improvement of NExT performance: Evaluating precision and usability of the named entity extraction tool, in: the 10th Annual Meeting of The Association for Natural Language Processing, 2004, pp. 413-415, (in Japanese).

[11] F. Masui, S. Suzuki, J. Fukumoto, A named entity extraction tool (NExT) for text processing, in: the 8th Annual Meeting of The Association for Natural Language Processing, 2002, pp. 176-179, (in Japanese).