
1 Introduction

Japan is facing a hyper-aging society. According to the Cabinet Office, while the total population of Japan is decreasing, the proportion of the elderly population is rising and is predicted to account for 33.3% of the total population. Moreover, the number of people with dementia (PWD) is expected to reach 7 million by 2025, meaning that one in five elderly people in Japan will suffer from dementia [5]. Against this background, effective and sustainable support for the elderly and PWD is needed.

Validation therapy [6] and reminiscence [11] are known as non-drug therapies for symptoms of dementia. Validation therapy is a care method that recognizes the meaning behind a PWD's confused and unrealistic behaviors and calms the person by responding with acceptance and empathy. Reminiscence is a care method in which the care provider looks back on past experiences together with the PWD and responds empathically and receptively in order to improve the PWD's psychological stability. In these care methods, continuous conversation between the care provider and the patient is important. However, it is economically and temporally difficult for specialists to provide counseling on a daily basis. In addition, the shift to home care has been progressing recently, and the burden on family caregivers has also increased.

To address this problem, our research group has proposed a system that enables a PWD to communicate at any time at home using a virtual agent (VA), a software robot capable of voice dialogue [10]. The key technical challenge in realizing continuous dialogue with the VA is how to generate dialogue that is closely related to the person. So far, we have proposed a method that generates topics related to the individual based on the user's life history [8], and a method that generates topics from events and trends according to the user's age.

We define such personal knowledge, including life history and birth year, as a Personal Ontology. We have developed a system in which the VA asks users questions and dynamically builds a Personal Ontology from their answers [7]. This system saves the built Personal Ontology in the form of Linked Data [4], i.e., a set of \(\langle \) subject, predicate, object \(\rangle \) triples. Furthermore, the system connects the Personal Ontology to external knowledge by linking it with Linked Open Data (LOD) [3], and acquires related knowledge to expand the topic. However, the current system implements only the part that builds the Personal Ontology through dialogue. Generating dialogue close to the individual by using the built Personal Ontology remains future work. In order to generate attractive dialogues for users, it is important to find the concepts of particular interest among the large number of concepts accumulated in the Personal Ontology.

Therefore, in this research, we propose a method for finding concepts of particular interest to users from the Personal Ontology accumulated through dialogue with the VA. In this paper, we define interest as “the emotions and directions with which an individual wants to be more involved in a certain matter”, and call the matters and concepts that an individual is particularly interested in Personal Interests. We then extract individual interests through dialogue with the VA, and extract the feelings about each interest to evaluate its degree. More specifically, the proposed method consists of the following two parts.

(A1) Extraction of Personal Interests: The VA asks questions about personal preferences and the user answers them. A Personal Ontology is built from the answers in the form of Linked Data based on the method of our previous research [7].

(A2) Evaluation of Personal Interests: For each concept in the Personal Ontology constructed in Linked Data format, we evaluate the degree of interest of the user based on the following three criteria. Here, C1 denotes a concept extracted as an interest, Answer 2 denotes the episode the user tells about C1, and C2 denotes a concept appearing in that episode.

  • P1: The larger the number of characters in Answer 2, the more the user is interested in C1.

  • P2: The greater the number of concepts C2 included in Answer 2, the more the user is interested in C1.

  • P3: When there are multiple links to a concept C2, the user is implicitly interested in C2.

We implemented the proposed method as a dialogue scenario on the preceding system, and conducted an experiment in which seven subjects interacted with the VA. In the dialogue scenario, the VA asks the user questions about five genres: food, sports, places and scenery, hobbies, and “others,” and the user answers the questions by voice. The system analyzes the answers by speech recognition and text analysis, and builds a Personal Ontology in Linked Data format.

We analyzed the constructed Linked Data and calculated a degree-of-interest score for each concept based on P1, P2, and P3. After the experiment, we gave each subject a questionnaire and asked them to rate their interest in the extracted concepts. Finally, we performed a correlation analysis between the scores calculated by the proposed method and the degrees of interest from the questionnaire. The analysis showed a high correlation between the two, indicating that the proposed method is effective in extracting and evaluating Personal Interests.

2 Preliminary

2.1 Communication System for PWD at Home

Our research group has developed a virtual care giver (VCG) system that provides communication care through dialogues for the elderly at home and those with dementia [7]. Figure 1 shows the VCG screen. We have realized dialogues with home users by using a virtual agent (VA) that communicates by voice in the VCG. The VCG can be used as a dialogue partner for a PWD regardless of time, and can be expected to reduce the burden on human caregivers.

In order to realize a continuous dialogue between the PWD and VA, it is necessary to provide topics close to individuals. In previous research, we proposed a method of dynamically generating topics that are close to individuals using Life History and Linked Open Data (LOD) [8].

Fig. 1. VirtualCareGiver

2.2 Linked Data, Linked Open Data (LOD)

Linked Data [4] is data whose meaning is made explicit by links between data items using Web technology. Linked Data is one of the technical components for realizing the Semantic Web, and is described in the Resource Description Framework (RDF), which structurally represents arbitrary information on the Web as resources. Linked Data that is published as open data and shared on the Internet is called Linked Open Data (LOD). By interlinking the published data, it is possible to form a huge knowledge database on the Web.

In the RDF model, data is represented by triples combining three elements: subject, predicate, and object. In an RDF graph, the subject and object are drawn as ellipses (resources), and the predicate is drawn as an arrow (link) connecting the two ellipses, so the whole forms a directed graph. The object can also be a string constant (literal) instead of a URI, in which case it is drawn as a rectangle.

Figure 2 shows an example RDF graph. A URI can be abbreviated by using a namespace prefix; in this figure, dbpedia-ja:Tokyo means http://ja.dbpedia.org/resource/Tokyo. The graph expresses two facts: “the resource dbpedia-ja:Tokyo is labeled ‘Tokyo’” and “the country where Tokyo is located is Japan”.
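To make this notation concrete, the following minimal Java sketch uses Apache Jena (the RDF framework used in the implementation in Sect. 4) to build the two triples of this example and print them in Turtle. The predicate dbpedia-owl:country is used here only as a stand-in, since the exact predicate of Fig. 2 is not reproduced in this text.

```java
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.rdf.model.Property;
import org.apache.jena.rdf.model.Resource;
import org.apache.jena.vocabulary.RDFS;

public class RdfGraphExample {
    public static void main(String[] args) {
        Model model = ModelFactory.createDefaultModel();
        model.setNsPrefix("dbpedia-ja", "http://ja.dbpedia.org/resource/");
        model.setNsPrefix("dbpedia-owl", "http://dbpedia.org/ontology/");

        // dbpedia-ja:Tokyo abbreviates http://ja.dbpedia.org/resource/Tokyo
        Resource tokyo = model.createResource("http://ja.dbpedia.org/resource/Tokyo");
        Resource japan = model.createResource("http://ja.dbpedia.org/resource/Japan");
        Property country = model.createProperty("http://dbpedia.org/ontology/", "country");

        // "The URI dbpedia-ja:Tokyo denotes Tokyo" (literal object, drawn as a rectangle)
        tokyo.addProperty(RDFS.label, "Tokyo");
        // "The country where Tokyo is located is Japan" (resource object, drawn as an ellipse)
        tokyo.addProperty(country, japan);

        model.write(System.out, "TURTLE");
    }
}
```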

2.3 Building of Personal Ontology Based on Dialogue with VA [7]

In our previous study [7], we defined the individual's knowledge needed to generate topics closely related to the individual as a Personal Ontology. We then extended the VCG system and realized a method of dynamically building and managing the Personal Ontology in the form of Linked Data through dialogue with the VA.

More specifically, the Personal Ontology is represented by triples of the three elements described in 2.2. For example, \(\langle \)“Tokuda”, “Favorite thing”, “Board game”\(\rangle \) represents the fact that “the favorite thing of user ‘Tokuda’ is a board game.”

Fig. 2. RDF graph example

To build a Personal Ontology triple \(\langle U, P, O \rangle \), the VA asks the user U a question about the property P and obtains the object O from U's answer. For example, when the VA asks, “Tokuda, what do you like?” and the user answers, “I like board games”, the system generates the Personal Ontology triple \(\langle ``Tokuda'', ``Favorite Things'', ``Board Game'' \rangle \). It is also possible to ask about places and people.

When \(\langle U, P, O \rangle \) is created, the system converts it to RDF format and represents it as Linked Data. Specifically, we define U as a resource representing the user, O as a resource representing information about the user, and P as a link representing the relationship between the user and the information. The system uses http://cs27.org/personal-ontology/resource/ and http://cs27.org/personal-ontology/rproperty/ as namespace URIs, written with the prefixes ex: and ex-prop:, respectively. Using these, we describe ex:uid rdfs:label “U” and ex:uid ex-prop:P ex:O to represent “U's P is O”, where uid is the identifier of the user resource. The converted RDF data is stored in a dedicated database called an RDF store.
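As an illustration of this encoding, a minimal Jena sketch for the triple \(\langle \)“Tokuda”, “Favorite thing”, “Board game”\(\rangle \) might look as follows. The local names tokuda, favoriteThing, and BoardGame are hypothetical, since the paper does not specify how uid, P, and O are mapped to URI local names.

```java
import org.apache.jena.rdf.model.*;
import org.apache.jena.vocabulary.RDFS;

public class TripleToLinkedData {
    static final String EX      = "http://cs27.org/personal-ontology/resource/";
    static final String EX_PROP = "http://cs27.org/personal-ontology/rproperty/";

    public static void main(String[] args) {
        Model model = ModelFactory.createDefaultModel();
        model.setNsPrefix("ex", EX);
        model.setNsPrefix("ex-prop", EX_PROP);

        // ex:uid rdfs:label "U" -- the user resource carries the user's name as a label
        Resource user = model.createResource(EX + "tokuda");           // hypothetical uid
        user.addProperty(RDFS.label, "Tokuda");

        // ex:uid ex-prop:P ex:O -- "U's P is O"
        Property favorite = model.createProperty(EX_PROP, "favoriteThing"); // hypothetical P
        Resource boardGame = model.createResource(EX + "BoardGame");        // hypothetical O
        user.addProperty(favorite, boardGame);

        model.write(System.out, "TURTLE");
    }
}
```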

From the implementation of the above system and subject experiments, we have found that a Personal Ontology can be generated automatically when the user's response to the VA's question is grammatically correct and speech recognition and syntax analysis succeed. However, the current system only generates and accumulates the Personal Ontology; generating continuous dialogue using it remains a future task.

2.4 Personal Interests

In the system under development, we basically assume that the VA drives the interaction by providing topics. If the user is not interested in the topic provided by the VA, continuous conversation cannot be expected. In addition, since dementia care requires conversation that is close to the individual, it is important for personalized care to know what the individual is interested in.

In this paper, we define interest as “the emotions and directions of trying to have greater involvement in a certain matter”, and call “the matters or concepts that an individual is interested in” Personal Interests. For example, for an individual who likes travel and travels many times a year, “travel” is presumed to be one of the individual's Personal Interests. Also, from this definition, Personal Interests are not necessarily hobbies or pastimes; individuals whose Personal Interests include “time spent at home” or “pain in the feet” are also conceivable.

3 Proposed Method

In this research, we extend the system described in 2.3 and propose a method to discover Personal Interests based on the Personal Ontology obtained through dialogue with the VA. The proposed method consists of A1: Extraction of Personal Interests and A2: Evaluation of Personal Interests.

3.1 A1: Extraction of Personal Interests

In A1, the VA asks the user questions and extracts Personal Interests. The VA introduces a certain genre and asks the user to answer with their interests in that genre. In addition, we collect the data necessary for “A2: Evaluation of Personal Interests” by asking the user to tell an episode about each interest appearing in the answer. We build and store the user's answers as a Personal Ontology in Linked Data format based on the method of our previous research described in 2.3.

Step 1 Obtain User ID (uid): The VA first asks for the user's name and uses it to create the user's resource U and the identifier uid. In the construction of the Personal Ontology below, uid refers to U.

Step 2 Obtain Personal Interests: The VA introduces genres defined in advance in the system and asks questions to find out what may be Personal Interests. First, the VA introduces a genre P and asks the user U, “U, are you interested in P?” When the user answers “Yes”, the VA asks “What is U's P?” and accepts the user U's interests in the genre P as a free-form answer. The VA then asks, “Tell me anything else,” to obtain as much of the user's response as possible. The answer obtained at this time is called Answer 1. The system extracts the noun phrases included in Answer 1 and uses them as candidate interests. The VA asks the user whether the extracted noun phrases are correct, and if not, repeats the question. If there is no mistake, the system generates \( \langle U, P, C1 \rangle \) for the concept C1 represented by each extracted noun phrase, and adds it to the Personal Ontology.

Step 3 Obtain Episode: By asking the user to tell an episode for each Personal Interest extracted from Answer 1 in Step 2, we collect their thoughts and feelings. Specifically, for each concept C1 extracted as a Personal Interest from Answer 1, the VA asks “Please tell me an episode about U's C1” and has the user answer in free form. The answer obtained at this time is called Answer 2. The system extracts the noun phrases included in Answer 2 and extracts nouns from the noun phrases. Then, after confirming that there are no mistakes as in Step 2, the system generates the Personal Ontology triples \( \langle C1, ``episode'', C2 \rangle \). When there are other candidate interests, the system conducts similar dialogues and collects episodes.
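In the actual system, the phrase and noun extraction of Steps 2 and 3 is performed with the COTOHA API and kuromoji-ipadic-neologd (see Sect. 4). The following sketch shows only the morphological-analysis side using the standard kuromoji ipadic tokenizer (the neologd build exposes the same interface), and is an approximation of the real pipeline rather than the implemented one.

```java
import java.util.ArrayList;
import java.util.List;

import com.atilika.kuromoji.ipadic.Token;
import com.atilika.kuromoji.ipadic.Tokenizer;

public class NounExtractor {
    // Extracts the surface forms of nouns from a free-form Japanese answer.
    public static List<String> extractNouns(String answer) {
        Tokenizer tokenizer = new Tokenizer();
        List<String> nouns = new ArrayList<>();
        for (Token token : tokenizer.tokenize(answer)) {
            // Keep tokens whose top-level part of speech is "noun" (名詞)
            if ("名詞".equals(token.getPartOfSpeechLevel1())) {
                nouns.add(token.getSurface());
            }
        }
        return nouns;
    }

    public static void main(String[] args) {
        // "I came to eat udon often in the cafeteria since I entered the university"
        System.out.println(extractNouns("大学に入ってから食堂でよくうどんを食べるようになりました"));
    }
}
```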

Step 4 Convert Personal Ontology to Linked Data: The system converts the Personal Ontology generated in Steps 2 and 3 into RDF format and manages it as Linked Data. The conversion to RDF is performed by the method described in 2.3. Since resources should be referenced by URIs based on words, when O contains multiple words, it is split into words that are converted into resources, and these are grouped by a blank node. The original text of O is also connected to the same group as a label. This makes it possible to refer to a resource for each word while keeping the information of O. In addition, the system keeps the original text of Answer 1 and Answer 2 as comments in the RDF so that the source of the Personal Ontology can be traced at any time. The specific procedure for converting a Personal Ontology obtained from a user's answer (Answer 1 or Answer 2) into RDF is as follows.

  1) Create a blank node B0 and set B0 as the root node of the Personal Ontology obtained from the answer.

  2) To save the answer text, create a literal L0 corresponding to the answer text, and create a link from B0 to L0 with rdfs:comment as the predicate.

  3) For each concept C in each Personal Ontology triple \( \langle U, P, C \rangle \) (or \( \langle C1, ``episode'', C \rangle \)), create a literal L1 corresponding to C. Also, extract the words \( C_1, C_2, ..., C_n \) from C and create the corresponding resources \( R_1, R_2, ..., R_n \) (when they do not already exist). Create a blank node B1 that groups them.

  4) Create a link with the predicate rdfs:label from B1 to L1. Also, create a link with the predicate \( rdf_i \) from B1 to each \( R_i \).

  5) Create a link with the predicate \( rdf_1 \) from B0 to B1.

By the above procedure, a tree of Personal Interests and a tree of episodes related to each interest are generated from Answer 1 and Answer 2, respectively. The system then connects these trees hierarchically as follows.

  6) Create a resource corresponding to the user U, a predicate for the genre P, and a predicate “episode” (when they do not already exist).

  7) For the root node B0 of the tree T1 obtained from Answer 1, create a link with the predicate P from U to B0.

  8) For each concept C in the tree T1, let B1 be the blank node that groups C, and let \( B0' \) be the root node of the tree T2 obtained from the episode (Answer 2) about C. Create a link with the predicate “episode” from B1 to \( B0' \).

As an example, the Personal Ontology obtained from Answer 1 of Step 2, “I like udon (a Japanese noodle dish)”, is converted to RDF format as follows.

figure a

In addition, from Answer 2 of Step 3, “I came to eat it often in the cafeteria since I entered the university”, the system generates the following RDF.

figure b

The link that connects them is generated as follows.

figure c
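Since the RDF listings above (figures a–c) are not reproduced in this text, the following Jena sketch illustrates how the conversion procedure could generate and connect the two trees for this example. The local names (tokuda, food, episode, udon, cafeteria) are hypothetical, and the RDF container membership properties rdf:_1, rdf:_2, ... are used here as an assumed realization of the \( rdf_i \) predicates of the procedure.

```java
import org.apache.jena.rdf.model.*;
import org.apache.jena.vocabulary.RDF;
import org.apache.jena.vocabulary.RDFS;

public class AnswerToRdfSketch {
    static final String EX      = "http://cs27.org/personal-ontology/resource/";
    static final String EX_PROP = "http://cs27.org/personal-ontology/rproperty/";

    public static void main(String[] args) {
        Model m = ModelFactory.createDefaultModel();
        m.setNsPrefix("ex", EX);
        m.setNsPrefix("ex-prop", EX_PROP);

        Resource user = m.createResource(EX + "tokuda");          // hypothetical uid
        Property genre = m.createProperty(EX_PROP, "food");       // genre P (hypothetical name)
        Property episode = m.createProperty(EX_PROP, "episode");

        // Tree T1 from Answer 1 "I like udon": root B0 with the answer text as rdfs:comment,
        // and one concept C1 = "udon" grouped by blank node B1 (items 1-5 of the procedure).
        Resource b0 = m.createResource();                          // blank node B0 (item 1)
        b0.addProperty(RDFS.comment, "I like udon");               // item 2
        Resource b1 = m.createResource();                          // blank node B1 (item 3)
        b1.addProperty(RDFS.label, "udon");                        // item 4: label L1
        b1.addProperty(RDF.li(1), m.createResource(EX + "udon"));  // item 4: word resource R_1
        b0.addProperty(RDF.li(1), b1);                             // item 5
        user.addProperty(genre, b0);                               // item 7: U --P--> B0

        // Tree T2 from Answer 2 (the episode about "udon"), attached to B1 (item 8).
        Resource b0e = m.createResource();
        b0e.addProperty(RDFS.comment,
                "I came to eat often in the cafeteria since I entered the university");
        Resource b1e = m.createResource();
        b1e.addProperty(RDFS.label, "cafeteria");
        b1e.addProperty(RDF.li(1), m.createResource(EX + "cafeteria"));
        b0e.addProperty(RDF.li(1), b1e);
        b1.addProperty(episode, b0e);

        m.write(System.out, "TURTLE");
    }
}
```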
Fig. 3. Extraction of Personal Interests flowchart

3.2 A2: Evaluation of Personal Interests

A2 is the phase in which we evaluate the Personal Interests extracted in A1 and thereby identify the concepts of particular interest to the user. In A1, the system first identified the user's interests in a certain genre as noun phrases from Answer 1, and then asked the user for an episode (Answer 2) about each interest (C1). Answer 2 is therefore a strong clue to the user's interest in C1. In A2, the system analyzes the structure of the Personal Ontology based on the following three criteria P1, P2, and P3, and evaluates the degree of interest in each concept in the ontology.

  • P1: The more characters in the episode (Answer 2) about C1, the more the user is interested in C1.

  • P2: The more concepts included in the episode (Answer 2) about C1, the more the user is interested in C1.

  • P3: When there are multiple links to a C2 resource, the user is implicitly interested in C2.

3.3 Evaluation of Personal Interests by P1

P1 is the criterion that “the larger the number of characters in the episode about an interest, the stronger the interest in it”. This is based on the hypothesis that “people talk more about their interests”. As the concrete evaluation scale for criterion P1, we use the number of characters in the episode.

In the Linked Data of the Personal Ontology constructed in A1, the original text of the episode exists as a literal linked by rdfs:comment. By counting its number of characters, the system calculates the interest level score (called the P1 score) of the concept C1 that is the link source of the episode. The system considers that the higher the P1 score, the higher the interest in the concept.

Evaluation of Personal Interests by P2. P2 is the criterion that “the more concepts the episode about an interest includes, the stronger the interest in it”. This is based on the hypothesis that “interests are accompanied by more knowledge and experience, so more concepts appear in the episode.” For P2, the evaluation scale is the number of concepts appearing in the episode.

In the Linked Data of the Personal Ontology constructed in A1, the concepts that appear in an episode exist as resources in the tree linked by the predicate episode. By counting them, the system calculates the interest level score (called the P2 score) of the concept C1 from which the episode is linked. The system considers that the higher the P2 score, the higher the interest in the concept.

Evaluation of Personal Interests by P3. P3 is the criterion that “when a concept has many links from other concepts, the concept is an implicit Personal Interest”. This is based on the hypothesis that “users unconsciously mention what they are interested in in various contexts.” The concrete evaluation scale for P3 is the number of link sources of a concept.

In the Linked Data of the Personal Ontology constructed in A1, the presence of multiple link sources for the resource corresponding to a concept C2 indicates that C2 has appeared in various contexts. Therefore, the number of links entering the resource is defined as the interest level score of C2 (called the P3 score). The system considers that the higher the P3 score, the higher the interest in the concept.
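As a concrete illustration of the three criteria, the following Jena sketch computes P1, P2, and P3 scores over a Personal Ontology model. It assumes the structure produced by the conversion sketch in Sect. 3.1 (episode text as an rdfs:comment literal on the episode root, word resources grouped two levels below it), so it is an approximation rather than the implemented evaluation, which, as noted in Sect. 4, was performed manually.

```java
import org.apache.jena.rdf.model.*;
import org.apache.jena.vocabulary.RDFS;

public class InterestScores {

    // P1: number of characters in the episode text (stored as an rdfs:comment literal
    // on the root node of the episode tree).
    static int p1Score(Resource episodeRoot) {
        Statement comment = episodeRoot.getProperty(RDFS.comment);
        return comment == null ? 0 : comment.getString().length();
    }

    // P2: number of concept resources appearing in the episode tree.
    // Assumes the two-level blank-node structure of the conversion sketch:
    // episode root -> blank node per noun phrase -> one resource per word.
    static int p2Score(Resource episodeRoot) {
        int count = 0;
        StmtIterator groups = episodeRoot.listProperties();
        while (groups.hasNext()) {
            RDFNode group = groups.next().getObject();
            if (!group.isResource()) continue;                 // skip the comment literal
            StmtIterator words = group.asResource().listProperties();
            while (words.hasNext()) {
                if (words.next().getObject().isURIResource()) count++;
            }
        }
        return count;
    }

    // P3: number of links pointing to a concept resource anywhere in the model.
    static int p3Score(Model model, Resource concept) {
        return model.listStatements(null, null, concept).toList().size();
    }
}
```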

4 Implementation

We implemented a prototype system for extracting and evaluating Personal Interests as proposed in Sect. 3. The extraction of Personal Interests in A1 was realized by reusing and extending the system of our previous research. Specifically, for the interaction between the VA and the user in Steps 1 to 3, we implemented a new dialogue scenario with the VA on the preceding system. We also added new functions to the predecessor system for the extraction of the ontology and its conversion to Linked Data.

The technology used for implementation is as follows.

  • Infrastructure system: Java, Virtual Care Giver [7], MMDAgent [9]

  • Dialogue scenario: Ruby

  • Server: Apache Tomcat, Apache Axis2 (Web-API)

  • Linked Data processing: Apache Jena [1], Apache Jena Fuseki

  • Natural language processing: COTOHA API [2], kuromoji-ipadic-neologd

We used the predecessor system as the base system and implemented the dialogue scenario in Ruby. We used the Apache Jena framework and the Fuseki RDF store to efficiently manage and accumulate the Linked Data of the Personal Ontology. We used the COTOHA API to parse the user's answers by natural language processing and extract noun phrases, and kuromoji-ipadic-neologd to morphologically analyze words and extract named entities.
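For example, a Personal Ontology stored in Fuseki can be queried with SPARQL through Jena's RDFConnection as in the sketch below; the endpoint URL and dataset name are placeholders, since they are not specified in the paper.

```java
import org.apache.jena.query.QuerySolution;
import org.apache.jena.rdfconnection.RDFConnection;
import org.apache.jena.rdfconnection.RDFConnectionFactory;

public class QueryPersonalOntology {
    public static void main(String[] args) {
        // Placeholder endpoint; the actual Fuseki dataset name is not specified in the paper.
        String endpoint = "http://localhost:3030/personal-ontology";

        String query =
            "PREFIX ex: <http://cs27.org/personal-ontology/resource/> " +
            "PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#> " +
            "SELECT ?s ?label WHERE { ?s rdfs:label ?label }";

        try (RDFConnection conn = RDFConnectionFactory.connect(endpoint)) {
            // Print every resource that carries an rdfs:label
            conn.querySelect(query, (QuerySolution row) ->
                System.out.println(row.get("s") + "  " + row.get("label")));
        }
    }
}
```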

In this prototype, for the evaluation part of Personal Interests in A2, we visualized the RDF generated by the system in the form of a graph and evaluated it by manual analysis.

5 Evaluation Experiment

In order to confirm the effectiveness of the proposed method, we conducted an experiment in which subjects interacted with the implemented system, and examined whether Personal Interests could be extracted and evaluated.

5.1 Purpose and Method of Experiment

The purpose of the experiment is to confirm whether the Personal Interests extracted and evaluated by the proposed method are actually the interests of the individual. In the experiment, we first evaluate the Personal Interests extracted from the dialogue with the VA using the P1–P3 scores. Next, we ask the subjects in a questionnaire to what extent they are interested in each of the extracted Personal Interests. Finally, we analyze the correlation between the P1–P3 scores and the questionnaire results.

A total of seven subjects participated in the experiment: five men in their twenties, one woman in her twenties, and one man in his forties. We conducted the experiment with the implemented system between January 15 and 20, 2020. We used a wireless headset (Logicool G G533) as the device for dialogue with the VA in order to reduce noise during speech recognition and to increase the sound insulation of the conversation. Five genres of Personal Interests were used: food, sports, places and scenery, hobbies, and other favorite things. The VA asked questions according to the procedure described in 3.1, and the subjects answered with their interests in each genre and episodes related to them.

After the experiment, we conducted a questionnaire survey to determine how much interest the subjects actually had in the concepts extracted through the dialogue. Since the experiment asked about favorite things, we set a 7-level scale with “dislike” as 1, “normal” as 3, “like” as 5, and “extremely like” as 7, and asked the subjects to rate their degree of interest in each concept on this scale. In addition, if they felt that a concept was inappropriate as an object of interest, we asked them to answer 0 (not applicable). As a supplementary survey, we also asked the subjects to rate their interest in the genres “food”, “sports”, and “places and scenery”. The survey was conducted using Google Forms.

5.2 Analysis Method

Since the P1, P2, and P3 scores have different ranges of application, we classified the concepts extracted through the dialogue between the subject and the system as follows. For each class, we analyze the correlation between the evaluation values given by the proposed method and the answer values of the questionnaire.

  • Target C1: Concepts extracted directly as interests from Answer 1 in Step 2 of A1. We analyze the correlation between the P1 and P2 scores and the evaluation values given by the subjects.

  • Target C2: Concepts that appeared in the episodes about interests in Answer 2 of Step 3 of A1. We analyze the correlation between the P3 score and the evaluation values given by the subjects.

  • Target C3: Concepts corresponding to the genres of interest. We evaluate each genre by the total of the P1 and P2 scores of all interests in the genre, and analyze the correlation with the evaluation values given by the subjects.
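The correlation used here is the standard Pearson correlation coefficient between the score vector and the questionnaire-rating vector of one subject; a minimal sketch of the computation is shown below.

```java
public class PearsonCorrelation {
    // Pearson correlation coefficient between a score vector (P1/P2/P3 scores)
    // and the corresponding questionnaire ratings for one subject.
    static double correlation(double[] x, double[] y) {
        int n = x.length;
        double meanX = 0, meanY = 0;
        for (int i = 0; i < n; i++) { meanX += x[i]; meanY += y[i]; }
        meanX /= n; meanY /= n;

        double cov = 0, varX = 0, varY = 0;
        for (int i = 0; i < n; i++) {
            double dx = x[i] - meanX, dy = y[i] - meanY;
            cov += dx * dy; varX += dx * dx; varY += dy * dy;
        }
        return cov / Math.sqrt(varX * varY);
    }
}
```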

5.3 Experimental Result

Table 1 shows the results. The table shows, for each subject, the correlation coefficient between the interest level scores of the concepts extracted by the system and the interest level evaluations given by the subject. We perform the correlation analysis per subject because the length of episodes, the number of concepts, and the subjectivity of the questionnaire evaluation vary greatly from subject to subject. Values in bold indicate cases where a high correlation is observed.

First, for Target C1, relatively high correlations are seen for subjects U2, U3, U4, U5, and U7. In other words, the number of characters in an episode and the number of concepts included in the episode about a particular interest are related to the individual's degree of interest. On the other hand, no positive correlation is observed for subjects U1 and U6; the reasons are discussed in 5.4. For Target C2, the correlation between the P3 score and the individual's degree of interest is positive, but there is almost no correlation except for subject U4. This is because, among the nouns extracted from Answer 2 by the system, a very large number of nouns have a P3 score of 1, and they contain a considerable number of potential interests. On the other hand, many nouns are evaluated by the users as inappropriate (0, not applicable). We perform a more detailed analysis of these points in 5.4.

Target C3 examines whether the sum of the P1 and P2 scores of the interests in each genre is related to the degree of interest in that genre. This concerns genres rather than individual interests, so it is outside the scope of the original proposed method, but we perform a correlation analysis as a supplementary analysis. From the rightmost two columns of Table 1, there is a high correlation for subjects U2, U4, U5, U6, and U7. Subject U1 shows no correlation, and subject U3 shows a negative correlation.

Table 1. Correlation between extracted concept score and subjects evaluation
Fig. 4. P1 and P2 scores and questionnaire evaluation of subject U4

5.4 Consideration

Effectiveness and Limitations of Criteria P1 and P2 for Target C1. For the concepts of Target C1, the P1 and P2 scores showed relatively high correlations for the five subjects other than U1 and U6. Therefore, we consider that Criteria P1 and P2 are effective, although limited, as criteria for measuring the level of personal interest.

We analyze the case of U4, where there is a significant difference between the P1 and P2 scores. Figure 4 plots the P1 score, P2 score, and questionnaire evaluation value for subject U4's six interests. The P2 score is normalized to the P1 score so that the scales are comparable. We consider why the P1 score did not correlate well in U4's case. First, for the place-genre concepts “Kobe” and “Kyoto”, the questionnaire shows the same degree of interest, but there is a large difference in the lengths of the episodes (P1 score), while the numbers of concepts in the episodes (P2 score) do not differ much. In many cases the length of an episode is proportional to the number of concepts it contains, but in this case the number of concepts is small relative to the episode length compared with Kobe. In the comparison between the food-genre concepts “apple” and “canelé”, both the P1 and P2 scores reflect the actual questionnaire evaluation. Between the hobby-genre concepts “YouTube” and “game”, there is a large difference in the questionnaire results but no difference in the P1 score. This is due to the difference in the amount of information per character of the words used. The episode about “game” frequently contains long words such as “Breath of the Wild” and “Switch”, whereas the answers about “YouTube” often use very short Japanese words such as “Jikitsu”, “Toki”, and “Hima”. As a result, there is a large difference in the number of concepts (P2 score) but little difference in the number of characters (P1 score).

In order to find out why the scores did not correlate for subjects U1 and U6, we interviewed these subjects after the experiment. The cause we found is that we could not extract enough interests and episodes from them. The subjects said that they could not externalize their thoughts in words well because they were unaccustomed to dialogue with the VA. Regarding the beginning of the dialogue, they commented, “I didn't know how much to talk about” and “I was impatient to answer immediately. I wanted to talk more, but I couldn't say much at that moment.”

Effectiveness and Limitations of Criterion P3 for Target C2. We consider why there is little correlation between the P3 score and the individual's degree of interest for the concepts of Target C2. Table 2 shows, for each P3 score value obtained in the experiment, the frequency of each questionnaire evaluation value over all subjects. Each row of the table shows how many times subjects gave each rating to concepts with that P3 score. The table shows that concepts with a P3 score of 1 make up the majority of all samples and include a significant number of potential Personal Interests (questionnaire ratings 6 and 7). This data bias greatly affects the correlation coefficient.

In order to remove this bias, we analyze the distribution of questionnaire response values for each class of P3 score. Figure 5 shows the percentage of questionnaire response values for the concept classes with P3 scores of 1, 2, 3, and 4. It can be seen that as the P3 score increases, the proportion of concepts with a particularly high degree of interest (questionnaire ratings of 6 or 7) increases. Therefore, we think it is reasonable to interpret criterion P3 not as “the higher the P3 score, the higher the individual's level of interest in the concept”, but as “the higher the P3 score, the more likely the concept is a Personal Interest”.

In addition, Table 2 contains many concepts that were judged not to be objects of interest (questionnaire rating 0). These are pronouns such as “place” and “this”, and nouns indicating time such as “recently” and “around”; they appear frequently in ordinary conversation but cannot be interests by themselves. There are also concepts for which language analysis did not work well, such as “eat” and “like”. These non-essential concepts can be removed by creating a dictionary of words to be excluded (stop words) and filtering them when extracting the ontology.
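A minimal sketch of such a stop-word filter is shown below; the word list is illustrative only and would need to be curated from the pronouns, time expressions, and mis-analyzed words actually observed in the data.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class StopWordFilter {
    // A small, illustrative stop-word list; a real dictionary would be curated
    // from the pronouns, time expressions, and mis-analyzed words seen in the data.
    static final Set<String> STOP_WORDS =
        new HashSet<>(Arrays.asList("place", "this", "recently", "around"));

    // Drops stop words from the nouns extracted from an answer before they are
    // converted into Personal Ontology resources.
    static List<String> filter(List<String> nouns) {
        List<String> kept = new ArrayList<>();
        for (String noun : nouns) {
            if (!STOP_WORDS.contains(noun.toLowerCase())) kept.add(noun);
        }
        return kept;
    }
}
```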

Table 2. Frequency of questionnaire response value for each P3 score
Fig. 5. Percentage of concepts of high interest for each P3 score

Discussion on Target C3. The analysis for Target C3 is a supplementary one that examines whether the degree of interest in a genre can be characterized by the sum of the evaluation scores of the interests in that genre. As a result, the P1 and P2 scores showed high correlations for all subjects except U1 and U3. The reliability of this result is limited because the number of interest samples per genre is small. However, by accumulating data and understanding which genres an individual is interested in, this analysis can be useful for strategies that decide from which genre to select topics.

6 Conclusion

In this paper, we proposed a method for extracting and evaluating “Personal Interests”, the concepts that users are interested in, through dialogue with a virtual agent (VA). In the proposed method, the VA first introduces a genre, asks about the user's interests in that genre, and then asks the user to tell episodes about each interest. From the user's answers, we extract the personal knowledge, the “Personal Ontology”, and build and store it in the form of Linked Data. The extracted interests are evaluated based on the three criteria P1 (the length of the episode), P2 (the number of concepts included in the episode), and P3 (the number of reference sources of a concept), and the level of interest in each concept is calculated as a numerical value.

We implemented a prototype of the proposed method by reusing and extending the preceding system, and conducted an experiment to extract and evaluate the Personal Interests of seven subjects using the prototype. As a result, we found that the scores calculated by criteria P1 and P2 correlate with the actual interest level of the person. It was also suggested that criterion P3 can discover new interests among the other concepts included in episodes.

Future tasks include addressing the issues of the proposed method that were clarified in the experiments, such as the subjects' unfamiliarity with dialogue with the VA and the accuracy of concept extraction. While improving these, we will refine the Personal Interest evaluation part and incorporate it into the complete system. Furthermore, it is important to evaluate the acceptability of and satisfaction with the system by applying the proposed system to the elderly at home and persons with dementia.