Abstract
In this research, we aimed to support improvement in users’ daily lives through behavior modification for people who cannot self-manage their daily activities, such as bed-making, cleaning, tidying, and sleeping. We focused on the preceding behaviors of others who get along with users as a way to encourage them to act on daily matters. Our proposed system adopts an anthropomorphic animation agent that shows its own daily-life activities to users to stimulate their incentive to follow the agent’s actions as a familiar and ambient presence. We conducted a series of four-day experiments to investigate whether the agent’s repeated preceding behaviors continuously affected users’ spontaneous actions. The results suggest that the proposed system has the potential to induce daily activities of users that may ultimately become spontaneous.
1 Introduction
Our daily lives involve many things we have to do. If people neglect to clean, tidy up, arrange their living spaces, and manage their time, their quality of life suffers. Lazy lifestyles cause various problems, including pressure on the living space from scattered belongings, waste of time that could be used meaningfully, and mental pressure from tasks left for later. To improve quality of life, we need to carry out our daily activities without letting them accumulate.
Children with autism spectrum disorder [1] or attention-deficit disorder [2, 3] are not the only people whose daily lives are made difficult by a lack of sufficient social and time-management skills; there are also adults who have lost, or never acquired, such skills. In particular, elderly people with dementia find it difficult to accomplish various daily activities constantly and continuously. Many patients lose their motivation for daily tasks such as brushing their teeth or bathing. From the viewpoint of rehabilitation, such activities are considered very important because they stimulate the patients and trigger their next actions.
To draw the patients’ interest and motivation toward activities, we considered suggesting or guiding activities in direct or indirect ways through family members and other familiar people; however, direct suggestions sometimes generate resistance or backlash. To guide elderly people with dementia in a natural and smooth way, indirect guidance is considered effective.
The second point is who should provide the guidance or suggestion. Even if a familiar person were to suggest the next activity to do, the elderly person in question might be annoyed and ultimately disregard that person. It is presumed that elderly people with dementia withdraw socially to avoid the sad or shameful feelings caused by scolding or correction from the people around them. Artificial reminders are one solution; for example, there are alarm systems based on dosing schedules. However, such systems cannot be effective or be used continuously unless users understand or feel their necessity. The system should not only notify users of the next activity but should also try to persuade them of the activity’s merits and stimulate or elevate their motivation. Interactive systems, moreover, can be controlled according to the user’s state.
Motivation stimulation using gamification, as in serious games [4, 5], and social motivation [6] are considered effective. In their daily activities, children sometimes play simple games with rewards in the form of snacks and so forth; however, most elderly people do not appreciate childish stimulation. Accordingly, we focused on the second type of stimulation: social motivation [6]. According to Maslow’s hierarchy of needs [7], humans are expected to act harmoniously and in unity with others to satisfy the social need of belonging to a group; overheard communication [8] is known to be an indirect inducer of a listener’s actions.
From this perspective, the preceding activities of other people, when shown to a target person, can potentially guide that person’s actions. We thus focused on the preceding behavior of others as both a trigger of conforming behavior [9, 10] and a sample case showing the merits of an activity.
Here, we focused on anthropomorphic agents from the viewpoints of 1) human-like persuasion, 2) an appropriate sense of distance, and 3) sympathetic interaction. In this research, we propose an anthropomorphic animation agent that shows its own daily-life activities to users and stimulates their incentive to follow the agent’s actions as a familiar and ambient presence alongside them. From the viewpoint of social motivation and syntonic activities, anthropomorphic agents that show the state of their daily activities to users are considered effective stimulants of users’ incentives to act.
Feedback reflecting users’ behavior generates a stronger effect on motivation [11] and satisfies their social needs. Consequently, we adopted camera-based user observation to detect users’ daily activities, such as tidying. In addition, emotional feedback also has a strong effect on human motivation [12, 13]. Accordingly, we adopted the agent’s smiles as responses to the user’s daily-life activities that follow the agent’s behavior.
We investigated the effect of the agent’s preceding behavior on promoting users’ spontaneous activities with a series of four-day experiments. The aim of the experiment was to confirm not only a short-term effect but also a long-term behavior modification.
2 Related Research
First, we describe the flow that promotes users’ behavior modification through a famous Japanese maxim by Isoroku Yamamoto, which reads, “Show them, tell them, have them do it, and then praise them; otherwise, people won’t do anything.” This describes a series of steps for getting a person to act: 1) show examples, 2) encourage the person’s understanding of methods and purposes, 3) encourage practical experience, and 4) improve behavioral motivation through reward. As the distinctive point of this maxim, we focused on the step of showing the instructor’s example behavior before telling the person how and why to do the activity. Direct persuasion to do an activity at the outset may reduce a person’s motivation toward it.
Suzuki et al. [14] discussed the possibility that an agent would diminish a user’s motivation for a behavior through the forced feeling imposed by directly persuading the user to act. To improve users’ behavioral motivation, they proposed presenting persuasion scenes in which one agent persuaded another, indirectly showing a third-party viewpoint. They expected their method to reduce the psychological load on the user; however, the linguistic persuasion in the agent’s scripts imposes cognitive loads [15,16,17], making it difficult to give the user an intuitive interpretation.
On the other hand, there are several studies on co-eating agents [18, 19]. Inoue et al. [18] described the possibility that a user finds it easier to eat in a scene where an agent starts to eat first, promoting the user’s dining. Their research showed the possibility of triggering a user’s behavior by an agent’s preceding behavior (a nonverbal expression) without linguistic explanation. Accordingly, we designed a series of activities for an agent that shows its preceding behavior first, before encouraging action by users in subsequent steps.
According to an effective process for customary behaviors advocated by Weinschenk (Fig. 1-A) [20], we designed a process (Fig. 1-B) in which an agent shows preceding behavior and a user follows the behavior. The agent represents preceding behaviors to the user based on triggers of the user’s existing habits.
Furthermore, we also focused on improving the user’s behavioral motivation in line with the second flow mentioned above. We aimed to improve the user’s behavioral motivation through the agent’s smiling [21], which implies an empathic attitude [22,23,24] and positive encouragement. Additionally, the system shows preceding behaviors triggered by the user’s unconscious behaviors so as to avoid interfering with the user’s concentration on other tasks [25]. The system expresses the agent’s preceding behavior and smiles only when the user switches behaviors [26] or focuses on the agent, to avoid interruption [27]. We expected such an ambient presence to continuously encourage the user with a hands-off attitude.
3 System Implementation
3.1 Overview of System
In our proposed system, an agent shows the preceding behavior of a daily activity as an indirect promotion of behavior modification. The agent’s behavior is triggered by the user’s habitual activity; for example, the agent’s bed-making behavior is triggered when the user leaves the bed. In this implementation, we adopted three triggers for the system to ascertain: 1) whether the user enters the room, 2) whether the user sits at the work desk, and 3) whether the user closes the window of the main task at the end of PC work. We chose these triggers to show the agent’s behaviors of 1) operating a vacuum cleaner, 2) checking the schedule, 3-1) cleaning up the desk, and 3-2) discarding the dust around the desk, all of which preceded the user’s activities. In addition, we aimed to habituate users’ daily activities by continuously promoting spontaneous behavior.
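The trigger-to-behavior mapping above can be expressed as a simple lookup. The following is an illustrative sketch, not the paper’s actual code; the enum and function names are our own assumptions, though the system is implemented in C++ as described below.

```cpp
#include <string>
#include <vector>

// Hypothetical names for the three triggers detected from the
// user's habitual activities.
enum class Trigger { EnterRoom, SitAtDesk, CloseTaskWindow };

// Maps each trigger to the agent's preceding behaviors shown to the user.
std::vector<std::string> precedingBehaviors(Trigger t) {
    switch (t) {
        case Trigger::EnterRoom:
            return {"operate a vacuum cleaner"};
        case Trigger::SitAtDesk:
            return {"check the schedule"};
        case Trigger::CloseTaskWindow:
            // Two behaviors follow the end of PC work.
            return {"clean up the desk", "discard the dust around the desk"};
    }
    return {};
}
```

Keeping this mapping in one place makes it straightforward to add further habit triggers, such as leaving the bed.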
The system consists of 1) a user task state recognition section, 2) an animation control section presenting an agent, 3) a voice synthesis section for the agent’s utterance, and 4) a text display section. Figure 2 shows the system flow, and Fig. 3 shows the system view. The system flow simply consists of a) a recognition of the user’s state (Fig. 2-A) and b) an agent presentation (Fig. 2-B) outputting animation and information to the user. The agent presentation part (b) includes 2) the animation control, 3) voice synthesis, and 4) text display sections.
An animation that includes the agent’s daily activities, living environment, and facial expressions is displayed on the PC monitor according to both the user’s behavior and his or her state of attention to the agent. The agent’s speech is read aloud by the speech synthesis software SofTalk (see Note 1), and its script text is displayed on the PC screen in a speech balloon. To detect the state of the user, the system processes 1) facial detection of the user using OpenCV and Haar-like features, 2) motion detection of the user using optical flow, and 3) on/off detection of the room light, all based on images from the PC’s built-in camera at the upper part of the monitor. Furthermore, the system detects 4) the PC’s processing situation controlled by the user, by observing the external windows and the active window through the Windows API. These functions are implemented in C++.
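Of these detectors, the light on/off detection (3) reduces to thresholding a change in mean image brightness between frames. The following is a minimal pure-logic sketch under that assumption, omitting actual camera capture and the OpenCV pipeline; the function names and the threshold value are illustrative.

```cpp
#include <cstdint>
#include <vector>

// Mean brightness of a grayscale frame (pixel values 0-255).
double meanBrightness(const std::vector<uint8_t>& frame) {
    if (frame.empty()) return 0.0;
    double sum = 0.0;
    for (uint8_t px : frame) sum += px;
    return sum / frame.size();
}

// Detects the light being switched on: a jump in mean brightness
// between consecutive frames beyond a threshold (illustrative value).
bool lightTurnedOn(const std::vector<uint8_t>& prev,
                   const std::vector<uint8_t>& curr,
                   double threshold = 40.0) {
    return meanBrightness(curr) - meanBrightness(prev) > threshold;
}
```

In the real system the frames would come from the camera stream, and the threshold would be tuned to the room environment.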
Because the system resides on the terminal as an ambient task, the agent is always reactive to the user’s situations as something closely tied to daily life. To avoid occupying the display area during the user’s other work, the agent’s display window becomes inactive when an external window is used by the user.
3.2 Designs for Preceding Behavior of Agent
Since people can conduct daily activities without conflicts with other tasks by managing their overall daily schedules, the agent appears to check its schedule at the beginning of the user’s activities for the day. From the viewpoint of working efficiency, the system displays an animation of the agent vacuuming its room after detecting that the user has entered the room, and an animation of the agent confirming its schedule again after detecting that the user has sat down at the work desk.
At the end of the work, people should clean up the top of the desk and throw away the garbage or tidy up for the next use. Accordingly, the system displays an animation of the agent cleaning up the desk and dumping the trash after detecting that the user has finished working.
Figure 4 shows the animation views of the agent’s preceding behaviors: 1) confirming its daily schedule (Fig. 4-A), 2-a) cleaning with a vacuum cleaner (Fig. 4-B), 2-b) cleaning the desktop (Fig. 4-C), and 2-c) disposing of garbage in the basket (Fig. 4-D). To prevent the user from misunderstanding its behavior, the agent speaks and displays scripts describing its actions. Finally, the agent expresses, with a positive expression, how it feels the merits of acting.
3.3 Situation Recognition of User
In this configuration, the system currently detects 1) lighting state, 2) seating state of the user, and 3) working state of the user in the work room. The system uses brightness changes in the camera image for detecting that a user has turned the light on. For seating recognition, it uses motion and face detection. When the system detects a user’s face after detecting motion, it recognizes that the user has come to the front of the PC and sat down. To recognize the user’s working state or end of work, the system detects the state of external windows in the PC.
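The seating-recognition rule described here (motion first, then a face) can be sketched as a small state machine. The class and method names are our own assumptions; in the actual system the two boolean inputs would come from the optical-flow and Haar-cascade detectors.

```cpp
// Sketch of seating recognition: the user is judged to have sat down
// when a face is detected after motion was previously detected.
class SeatingRecognizer {
public:
    // Called once per camera frame with the detector outputs.
    void update(bool motionDetected, bool faceDetected) {
        if (motionDetected) sawMotion_ = true;
        // A face alone (e.g. a poster) does not count without prior motion.
        if (sawMotion_ && faceDetected) seated_ = true;
    }
    bool seated() const { return seated_; }

private:
    bool sawMotion_ = false;
    bool seated_ = false;
};
```

Ordering the two cues this way filters out spurious face detections that occur without anyone having approached the PC.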
3.4 Agent’s Smiles as Degree of Affinity
The agent smiles at the user to increase affinity. To show a relationship that gradually becomes closer to the user, the variety of the agent’s facial expressions increases with the number of times the user has looked at the agent. At first, with the degree of affinity at Lv.1, the system shows only smile Lv.1 (Fig. 5-A). When the number of times the user looks at the agent exceeds a certain threshold, the degree of affinity becomes Lv.2, and smile Lv.2 (Fig. 5-B) is presented (Table 1) in addition to smile Lv.1. To keep the user from perceiving the agent as a mechanical presence, the agent smiles at random when the user looks at it; Table 1 also shows the probability of the agent smiling. The degree of affinity increases with the number of times the user looks at the agent, with the tenth look raising the degree of affinity from Lv.1 to Lv.2.
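The affinity logic can be sketched as follows. The threshold of ten looks follows the text, but the per-level smile probabilities are placeholders standing in for the values of Table 1, and all names are our own assumptions.

```cpp
#include <random>

// Sketch of the affinity model: affinity rises from Lv.1 to Lv.2 at the
// tenth look, and the agent smiles with a level-dependent probability
// each time the user looks at it.
class AffinityModel {
public:
    int level() const { return lookCount_ >= 10 ? 2 : 1; }

    // Registers one look by the user; returns true if the agent smiles.
    bool onUserLook(std::mt19937& rng) {
        ++lookCount_;
        // Placeholder probabilities in place of the actual Table 1 values.
        double p = (level() == 1) ? 0.5 : 0.8;
        std::bernoulli_distribution smile(p);
        return smile(rng);
    }

private:
    int lookCount_ = 0;
};
```

Randomizing the smile response is what keeps the agent from reacting identically to every glance, supporting the non-mechanical impression described above.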
3.5 Space-Sharing Expressions
To make the user recognize the agent as a communal, living personality sharing the living space, it is necessary to express space-sharing interactions through animated expressions. Accordingly, we prepared two types of interaction: when the user stays awake during the agent’s designated sleeping time (for example, from midnight to 7:00), the system shows an animation in which the light of the user’s space leaks into the agent’s space (see Fig. 6), and when the user does not vacuum the room for a long time, the system shows an animation simulating dust moving from the user’s space to the agent’s space (see Fig. 7).
4 Evaluation: Effect of Agent’s Continuous Preceding Behavior on Users’ Spontaneous Daily Activity
We conducted the following experiment to verify the effect of the proposed agent’s continuous preceding behavior on users’ spontaneous daily activity and its habituation.
Experimental Hypothesis: We hypothesized the following items.
Hypothesis 1: The agent’s preceding behaviors can induce users’ activities.
Hypothesis 2-1: The continuous stimuli of the agent’s preceding behaviors enable users’ daily activities even when the system stops the stimuli.
Hypothesis 2-2: Direct instruction of the agent can only induce users’ activities when they are being instructed to do the activity. That is, direct instruction cannot provide continuous effects on users’ spontaneous activities.
Hypothesis 3: Instruction from the agent is recognized as bothersome by users and reduces their motivations.
Conditions: The experiment was conducted under six conditions (three levels of a between-subject factor [Factor A] and two levels of a within-subject factor [Factor B]). Factor A, the agent’s behavior, had three levels: [A1] showing preceding behaviors, [A2] instructing the user’s action, and [A3] promoting no action. Factor B, the before and after measurements of the agent’s promotion, had two levels: [B1] before the promotion (first day) and [B2] after the promotion (fourth day) in the flow of the experiment.
Experiment Procedures: The participants in this experiment were 16 university students aged 21 to 23 (13 males and 3 females). The experimental environment is shown in Fig. 8. As the initial state, a book, an instruction manual for the experiment, and two pieces of garbage were on the desk, and a trash box and three pieces of garbage were on the floor. A PC with the proposed system installed was set on the desk. The trash box was placed about 75 cm from the right foot of the chair, out of the participants’ reach. The participants’ situations during the experiment were observed with the PC’s built-in camera and another camera installed under the desk, in a position invisible to the participants. We informed the participants that we would capture the experimental scenes with cameras and obtained their consent beforehand.
In the experiment, we seated the participants on the chair in front of the desk and instructed them to do the following tasks: 1) check a sample figure shown on the PC as in Fig. 9; 2) cut the paper craftworks, as in Fig. 10; and 3) stick the parts on an A4 paper. After finishing the task, the participants closed the PC window that showed the task instructions, moved to a chair outside of the experimental area, and took a one-minute break. During the break, the experimenter measured the trash weight.
The above sequences are defined as one experimental term. Four experimental terms were conducted in one day, and the experiment continued for four days. Thus, the total number of experimental terms was 16, as shown in Fig. 11. During the tasks, the agent was always displayed on the PC and showed the same tasks to the users (Fig. 12-C). On the second and third days, when the participants closed the instruction window, agent behaviors were shown according to the following experimental conditions: A1, preceding behavior of throwing garbage into the trash box, as in Fig. 12-A; A2, indicating direct instruction with the agent’s script, such as saying, “Please throw garbage into the trash,” as in Fig. 12-B; and A3, continuing the agent’s paperwork task, as in Fig. 12-C. On the first and fourth days, the agent behavior of A3 was displayed in all conditions.
Procedures of Subjective and Objective Evaluation: As observable data, we measured the number of times participants discarded trash in one day (four terms of work), the ratio of discarded garbage to the whole amount of trash at the end of the fourth term, and the total amount of garbage generated from the initial placement to the end of the fourth term. After all the experiments, the participants rated the following statements on a five-point scale (5: very relevant, 4: somewhat relevant, 3: neutral, 2: somewhat irrelevant, 1: irrelevant):
- Q1 Discarding trash was bothersome.
- Q2 Discarding trash was necessary.
- Q3 You felt the benefits of discarding trash.
- Q4 You made an effort to discard trash.
- Q5 You did not like to discard trash.
- Q6 You felt uncomfortable with the agent.
- Q7 The agent was burdensome.
Analyses of Number of Times and Amount of Garbage: Table 2 shows the analysis of variance (ANOVA) of the number of times users disposed of trash and the ratio of discarded trash. Figure 13 shows the average number of times users disposed trash, and Fig. 14 shows the average ratio of the discarded trash.
From Table 2, we confirmed a significant difference in the number of times in Factor A: the multiple comparisons showed a significant difference between A1 and A2, with the number of times users discarded trash being greater in A1 than in A2. We also confirmed a significant difference in the number of times in Factor B: the number of times users discarded trash and the amount of trash were greater in B2 (after the promotion) than in B1 (before the promotion).
Next, we discuss the interaction between Factors A and B. We found significant differences between A1 and A2 and between A2 and A3 at level B2, whereby the number of times users discarded trash was greater in A2 than in A1 and A3. We also found a significant difference in Factor B at level A2: the number of times users discarded trash was greater in B2 than in B1. These results suggest that instruction from the agent increased the number of trash-disposal occurrences.
Next, from the multiple comparisons of the amount of discarded trash in Factor A at level B2, we confirmed significant differences between A2 and both A1 and A3, such that the amount of discarded trash was greater in A2 than in A1 and A3. In addition, from the significant difference in Factor B at level A2, the average amount of discarded trash was greater in B2 than in B1. Furthermore, at level A1, we found a significant tendency (p < .10) in Factor B, so there is a possibility that the amount of discarded trash was greater in B2 than in B1. These results suggest that both the agent’s preceding behaviors and its instructions may have increased the amount of discarded trash.
MOS Results: Figure 15 shows the results of mean opinion scores (MOS), and Table 3 shows ANOVA results of MOS. No significant differences were found in the ANOVA results in all questions.
5 Discussion
From the analysis of the amount of discarded garbage in Table 2 and Fig. 14, we confirmed a strong effect of direct indication, although there was also a weak tendency for the preceding behavior to increase the amount of discarded trash. We consider that the immediate effect was stronger when the agent directly indicated what the user should do, and that the agent nevertheless has the potential to induce users’ trash-disposal actions by showing a scene of the agent discarding trash. In addition, the comparison of B1 and B2 suggests that the induced effects may persist even in scenes where no subsequent preceding behaviors are presented, as the preceding behaviors were not shown at level B2. Thus, Hypotheses 1 and 2 were both weakly supported.
Furthermore, from the camera image, it is evident that participants in the A1 level, whereby the agent indicated its preceding behavior, threw the trash into the basket at the end of their work. By its continuous preceding behavior, the agent could possibly induce users to spontaneously discard garbage at the end of a task.
Regarding the fact that the effect of the preceding behavior reached only a significant tendency, we conjecture that an agent showing preceding behavior but no other interaction might not be recognized as a social, interactive presence, whereas direct indication obviously demonstrates the agent’s ability to talk to the user. Gaze and facial expression are known to be effective tools for agents to induce user behaviors [22]. From the viewpoint of the agent’s social presence, interactive experiences and engagement [28] should be built beforehand so that the agent is recognized as a social existence from the start.
Although there was no significant difference in subjective evaluation, we conjecture that the subjects did not want to discard garbage, even with the preceding behavior of the agent. To avoid a negative response to such behavior-inducing activity and to improve users’ motivation, it is necessary to express the agent’s positive attitude through such means as smiles immediately after a user’s activities. We did not evaluate the emotional rewards in this paper, but this interaction would increase positive effects in establishing the engagement between the user and the agent.
On the other hand, there are results against Hypothesis 2-2. From the frequency and amount of trash discarded, we observed that direct instructions given to the participants after the tasks continued to induce the users’ activities afterwards, even without further instruction (B2). Moreover, there are results against Hypothesis 3: the participants did not feel annoyed by the agent’s instructions, which also did not decrease their behavioral motivation. We should verify the continued effectiveness of both direct indication and indirect preceding behavior after a week or a month.
Although we did not obtain significant differences in the subjective evaluations, the results suggest that the agent’s preceding behaviors might be more annoying than its instructions. The agent’s preceding behaviors may be perceived as tacit pressure requesting activities of the user. It is also worth considering that anxiety about embarrassment or error [24], due to uncertainty in using the system, became one cause of annoyance. To solve these problems, we believe it is necessary to depict the agent’s behavior as though it were, to some extent, independent of the user’s daily life.
6 Conclusion
In this paper, we discussed basic research on our proposed ambient agent, which promotes users’ spontaneous activities in daily life by showing activities of its own that precede the users’ as triggers to stimulate users’ incentive to follow the action of the agent. The agent should become a familiar and ambient presence alongside the user. To improve the user’s behaviors, it also smiles according to the user’s attention and daily activities based on the agent’s expression of affinity toward the user. We designed an agent system that does not require conscious inputs from the user to avoid forced feelings and get users to accept the agent as a resident system.
From the results of the evaluation, there was a tendency for the agent to lead users to activities by showing its own preceding behaviors. There was also a tendency for the agent to promote customary behaviors in users by continuously showing its behavior, even after the agent stopped presenting the preceding behavior. These results also appeared in the direct-indication conditions; however, long-term verification should reveal differences between the two types of agent expression.
For future work, to establish habits that users enjoy, we should evaluate the effects of smiling or showing cooperative behaviors as reward-strengthening stimuli when users’ daily-life actions precede the agent’s activities. It is also necessary to consider the psychological effect on the sense of affinity of the agent’s syntonic actions following the user’s activities. As future developments of the system implementation, the system may adopt multiple output devices, such as wall projection, to be alongside the user all day. Built-in sensors in smart houses are expected to be useful for recognizing more of the user’s behaviors. The system design, development, and evaluation for elderly people should also be considered from the viewpoint of indirectly inducing people without making them stressed or uncomfortable. Especially for people with dementia, daily support should be provided continuously without interfering with their spontaneous activities.
Notes
1. SofTalk: free software for synthesizing a characteristic speech voice. http://www.gigafree.net/media/record/softalk.html.
References
Emily, G., Grace, I.: Family quality of life and ASD: the role of child adaptive functioning and behavior problems. Autism Res. 8(2), 199–213 (2015)
Leo, J.: Attention deficit disorder. Skeptic (Altadena, CA) 8(1), 63 (2000)
Kelly, K., Ramundo, P.: You Mean I’m Not Lazy, Stupid or Crazy?!: The Classic Self-help Book for Adults with Attention Deficit Disorder. Simon and Schuster (2006)
Michael, D.R., Chen, S.: Serious Games: Games That Educate, Train, and Inform. Course Technology, Muska & Lipman/Premier-Trade (2005)
Nakajima, T., Lehdonvirta, V., Tokunaga, E., Ayabe, M., Kimura, H., Okuda, Y.: Lifestyle ubiquitous gaming: making daily lives more plesurable. In: 13th IEEE International Conference on Embedded and Real-Time Computing Systems and Applications, RTCSA 2007, pp. 257–266. IEEE, August 2007
Cofer, C.N., Appley, M.H.: Motivation: Theory and Research (1964)
Maslow, A.H.: The farther reaches of human nature. Viking Adult (1971)
Walster, E., Festinger, L.: The effectiveness of “overheard” persuasive communications. J. Abnorm. Soc. Psychol. 65(6), 395–402 (1962)
Appley, M.H., Moeller, G.: Conforming behavior and personality variables in college women. J. Abnorm. Soc. Psychol. 66(3), 284 (1963)
Balsa, A.I., Homer, J.F., French, M.T., Norton, E.C.: Alcohol use and popularity: social payoffs from conforming to peers’ behavior. J. Res. Adolesc. 21(3), 559–568 (2011)
Nakajima, T., Lehdonvirta, V., Tokunaga, E., Kimura, H.: Reflecting human behavior to motivate desirable lifestyle. In: Proceedings of the 7th ACM conference on Designing Interactive Systems, pp. 405–414. ACM, February 2008
Terzis, V., Moridis, C.N., Economides, A.A.: The effect of emotional feedback on behavioral intention to use computer based assessment. Comput. Educ. 59(2), 710–721 (2012)
Beale, R., Creed, C.: Affective interaction: how emotional agents affect users. Int. J. Hum. Comput. Stud. 67, 755–776 (2009)
Suzuki, S.V., Yamada, S.: Persuasion through overheard communication by life-like agents. In: Proceedings of IEEE/WIC/ACM International Conference on Intelligent Agent Technology, pp. 225–231, September 2004
Plass, J.L., Moreno, R., Brunken, R.: Cognitive Load Theory. Cambridge University Press, Cambridge (2010)
Tabbers, H.K., Martens, R.L., Merrienboer, J.J.: Multimedia instructions and cognitive load theory: effects of modality and cueing. Br. J. Educ. Psychol. 74(1), 71–81 (2004)
Paas, F., Renkl, A., Sweller, J.: Cognitive load theory and instructional design: recent developments. Educ. Psychol. 38(1), 1–4 (2003)
Inoue, T., Shiobara, T.: A dining agent system for comfortable meal to a solo diner. IPSJ Trans. Digit. Contents 2(2), 29–37 (2014). (In Japanese)
Liu, R., Inoue, T.: Application of an anthropomorphic dining agent to idea generation. In: Proceedings of the 2014 ACM International Joint Conference on Pervasive and Ubiquitous Computing: Adjunct Publication, pp. 607–612. ACM, September 2014
Weinschenk, S.: How to Get People to Do Stuff: Master the Art and Science of Persuasion and Motivation. New Riders (2013)
Rajap, P., Nakadai, S., Nishi, M., Yuasa, M., Mukawa, N.: Impression design of a life-like agent by its appearance, facial expressions, and gaze behaviors-analysis of agent’s sidelong glance. In: 2007 IEEE International Conference on Systems, Man and Cybernetics, pp. 2630–2635 (2007)
Yuasa, M., Tokunaga, H., Mukawa, N.: Autonomous turn-taking agent system based on behavior model. In: HCII 2009 Proceedings Part III, pp. 19–24 (2009)
Yuasa, M., Yasumura, Y., Nitta, K.: Giving advice in negotiation using physiological information. In: IEEE International Conference on Systems, Man, and Cybernetics, vol. 1, pp. 248–253 (2000)
Takeuchi, Y., Katagiri, Y.: Establishing affinity relationships toward agents: effects of sympathetic agent behaviors toward human responses. In: WET ICE 1999, pp. 253–258 (1999)
Fukayama, A., Ohno, T., Mukawa, N., Sawaki, M., Hagita, N.: Messages embedded in gaze of interface agents – impression management with agent’s gaze. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI 2002), pp. 41–48. ACM, New York (2002)
Tanaka, T., Fujita, K.: Study of user interruptibility estimation based on focused application switching. In: Proceedings of the ACM 2011 Conference on Computer Supported Cooperative Work, CSCW 2011, pp. 721–724. ACM, New York (2011)
Tanaka, T., Fujita, K.: Secretary agent for mediating interaction initiation. In: Proceedings of Human Agent Interaction 2013, II-2-p5 (2013)
Glas, N., Pelachaud, C.: Definitions of engagement in human-agent interaction. In: 2015 International Conference on Affective Computing and Intelligent Interaction (ACII), pp. 944–949, September 2015
Acknowledgments
This research was supported in part by JSPS Kakenhi 25700021, 19H04154, 19K12090, and 18K11383.
Copyright information
© 2020 Springer Nature Switzerland AG
Cite this paper
Yonezawa, T., Yoshida, N., Nagao, K., Wan, X. (2020). Partner Agent Showing Continuous and Preceding Daily Activities for Users’ Behavior Modification. In: Duffy, V. (eds) Digital Human Modeling and Applications in Health, Safety, Ergonomics and Risk Management. Posture, Motion and Health. HCII 2020. Lecture Notes in Computer Science(), vol 12198. Springer, Cham. https://doi.org/10.1007/978-3-030-49904-4_46