
US20230290343A1 - Electronic device and control method therefor - Google Patents


Info

Publication number
US20230290343A1
Authority
US
United States
Prior art keywords
keyword
electronic device
call
user
processor
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
US18/321,146
Inventor
Dongnam BYUN
Seohee KIM
Hyunhan KIM
Yujin Jeong
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Samsung Electronics Co Ltd
Original Assignee
Samsung Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Samsung Electronics Co Ltd filed Critical Samsung Electronics Co Ltd
Priority to US18/321,146
Publication of US20230290343A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/30 Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F 16/33 Querying
    • G06F 16/3331 Query processing
    • G06F 16/3332 Query translation
    • G06F 16/3334 Selection or weighting of terms from queries, including natural language queries
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 Speech recognition
    • G10L 15/08 Speech classification or search
    • G10L 15/18 Speech classification or search using natural language modelling
    • G10L 15/1815 Semantic context, e.g. disambiguation of the recognition hypotheses based on word meaning
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M 1/00 Substation equipment, e.g. for use by subscribers
    • H04M 1/72 Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M 1/724 User interfaces specially adapted for cordless or mobile telephones
    • H04M 1/72403 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/90 Details of database functions independent of the retrieved data types
    • G06F 16/903 Querying
    • G06F 16/9032 Query formulation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F 3/04817 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance using icons
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F 3/0482 Interaction with lists of selectable items, e.g. menus
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/04842 Selection of displayed objects or displayed text elements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0487 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F 3/0488 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0487 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F 3/0488 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F 3/04883 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0487 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F 3/0488 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F 3/04886 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures by partitioning the display area of the touch-screen or the surface of the digitising tablet into independently controllable areas, e.g. virtual keyboards or menus
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/16 Sound input; Sound output
    • G06F 3/167 Audio in a user interface, e.g. using voice commands for navigating, audio feedback
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00 Handling natural language data
    • G06F 40/20 Natural language analysis
    • G06F 40/258 Heading extraction; Automatic titling; Numbering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00 Handling natural language data
    • G06F 40/30 Semantic analysis
    • G06F 40/35 Discourse or dialogue representation
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 Speech recognition
    • G10L 15/08 Speech classification or search
    • G10L 15/16 Speech classification or search using artificial neural networks
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M 1/00 Substation equipment, e.g. for use by subscribers
    • H04M 1/72 Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M 1/724 User interfaces specially adapted for cordless or mobile telephones
    • H04M 1/72469 User interfaces specially adapted for cordless or mobile telephones for operating the device by selecting functions from two or more displayed items, e.g. menus or icons
    • H04M 1/72472 User interfaces specially adapted for cordless or mobile telephones for operating the device by selecting functions from two or more displayed items, e.g. menus or icons wherein the items are sorted according to specific criteria, e.g. frequency of use
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 Speech recognition
    • G10L 15/26 Speech to text systems
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 Speech recognition
    • G10L 15/08 Speech classification or search
    • G10L 2015/088 Word spotting
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M 1/00 Substation equipment, e.g. for use by subscribers
    • H04M 1/64 Automatic arrangements for answering calls; Automatic arrangements for recording messages for absent subscribers; Arrangements for recording conversations
    • H04M 1/65 Recording arrangements for recording a message from the calling party
    • H04M 1/656 Recording arrangements for recording a message from the calling party for recording conversations
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M 1/00 Substation equipment, e.g. for use by subscribers
    • H04M 1/72 Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M 1/724 User interfaces specially adapted for cordless or mobile telephones
    • H04M 1/72403 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
    • H04M 1/72445 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality for supporting Internet browser applications
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M 2201/00 Electronic components, circuits, software, systems or apparatus used in telephone systems
    • H04M 2201/34 Microprocessors
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M 2201/00 Electronic components, circuits, software, systems or apparatus used in telephone systems
    • H04M 2201/36 Memories
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M 2201/00 Electronic components, circuits, software, systems or apparatus used in telephone systems
    • H04M 2201/38 Displays
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M 2250/00 Details of telephonic subscriber devices
    • H04M 2250/74 Details of telephonic subscriber devices with voice recognition means

Definitions

  • This disclosure relates to an electronic device and a control method therefor. More particularly, the disclosure relates to an electronic device that may obtain at least one keyword from call content for use in various functions, and a control method therefor.
  • the disclosure also relates to an artificial intelligence (AI) system utilizing a machine learning algorithm, and an application thereof.
  • An AI system is a system in which a machine learns, judges, and iteratively improves analysis and decision making, unlike an existing rule-based smart system.
  • As an AI system is used, its accuracy, recognition rate, and understanding or anticipation of a user's taste may correspondingly increase.
  • existing rule-based smart systems are gradually being replaced by deep learning-based AI systems.
  • AI technology is composed of machine learning, for example deep learning, and elementary technologies that utilize machine learning.
  • Machine learning is an algorithmic technology that is capable of classifying or learning characteristics of input data.
  • Element technology is a technology that simulates functions, such as recognition and judgment of a human brain, using machine learning algorithms, such as deep learning.
  • Element technology is composed of technical fields such as linguistic understanding, visual understanding, reasoning, prediction, knowledge representation, motion control, or the like.
  • Linguistic understanding is a technology for recognizing, applying, and/or processing human language or characters and includes natural language processing, machine translation, dialogue system, question and answer, speech recognition or synthesis, and the like.
  • Visual understanding is a technique for recognizing and processing objects as human vision does, and includes object recognition, object tracking, image search, human recognition, scene understanding, spatial understanding, image enhancement, and the like.
  • Inference prediction is a technique for judging and logically inferring and predicting information, including knowledge-based and probability-based inference, optimization prediction, preference-based planning, recommendation, or the like.
  • Knowledge representation is a technology for automating human experience information into knowledge data, including knowledge building (data generation or classification), knowledge management (data utilization), or the like.
  • Motion control is a technique for controlling the autonomous driving of a vehicle and the motion of a robot, and includes movement control (navigation, collision avoidance, driving), operation control (behavior control), or the like.
  • an electronic device equipped with an artificial intelligence personal assistant platform is provided for efficient management of information and a variety of user experiences.
  • a user may record call content or perform a web search based on the call content. Such tasks, for example recording call content in an application such as a memo application, a calendar application, or a schedule application, or searching for relevant information in a web browser, may be cumbersome.
  • an artificial intelligence personal assistant platform has a structure in which the user must directly give a command, and is thus limited to passive operation.
  • the disclosure has been made to address the above-described problems, and an object of the disclosure is to provide an electronic device capable of obtaining at least one keyword from call content and utilizing the keyword for various functions, and a control method therefor.
  • a control method of an electronic device includes, during a call with a user of another electronic device using the electronic device, obtaining at least one keyword from the content of the call with the user of the other electronic device; displaying the at least one keyword during the call; and providing a search result for a keyword selected by a user from among the displayed at least one keyword.
  • the displaying may include displaying the at least one keyword on a call screen with the user of the other electronic device.
  • the displaying may include displaying at least one circular user interface (UI) element including each of the at least one keyword.
  • the method may further include determining a level of importance of the at least one keyword according to the number of times the keyword is mentioned during the call with the user of the other electronic device, and the displaying may include varying the size of the at least one circular UI element according to the determined level of importance of the at least one keyword.
  • the displaying may include, based on the obtained at least one keyword being a plurality of keywords, determining an arrangement interval of the plurality of keywords according to the interval at which they are mentioned in the call with the user of the other electronic device and displaying the plurality of keywords according to the determined arrangement interval.
  • the method may further include, based on the displayed at least one keyword being a plurality of keywords, in response to receiving a user input to sequentially select two or more keywords from among the plurality of keywords, determining the selected two or more keywords as one set, and the providing the search result may include providing a search result for the determined set.
  • the method may further include displaying a first UI component corresponding to a search function for the determined set and a second UI component corresponding to a memo function for the determined set, and the providing the search result may include, based on the first UI component being selected, providing a search result for the determined set.
  • the method may further include, based on the second UI component being selected, storing the determined set in a memo application.
  • the obtaining may include obtaining the at least one keyword from the content of the call with the user of the other electronic device using a trained artificial intelligence (AI) model.
  • the method may further include, based on the call with the user of the other electronic device being ended, determining a recommended application among a plurality of applications of the electronic device based on the obtained at least one keyword; and displaying a UI asking whether to perform a task associated with the determined recommended application and the obtained at least one keyword.
  • an electronic device includes a communicator, a display, a memory configured to store computer executable instructions, and a processor configured, by executing the computer executable instructions, to, during a call with a user of another electronic device using the electronic device, obtain at least one keyword from the content of the call with the user of the other electronic device, control the display to display the at least one keyword during the call, and provide a search result for a keyword selected by a user from among the displayed at least one keyword.
  • the processor is further configured to control the display to display the at least one keyword on a call screen with the user of the other electronic device.
  • the processor is further configured to control the display to display at least one circular user interface (UI) element including each of the at least one keyword.
  • the processor is further configured to determine a level of importance of the at least one keyword according to the number of times the keyword is mentioned during the call with the user of the other electronic device, and to control the display to display the at least one circular UI element in a size that varies according to the determined level of importance of the at least one keyword.
  • the processor is further configured to, based on the obtained at least one keyword being a plurality of keywords, determine an arrangement interval of the plurality of keywords according to the interval at which they are mentioned in the call with the user of the other electronic device and display the plurality of keywords according to the determined arrangement interval.
  • the processor may, based on the displayed at least one keyword being a plurality of keywords, in response to receiving a user input to sequentially select two or more keywords from among the plurality of keywords, determine the selected two or more keywords as one set, and may provide a search result for the determined set.
  • the processor may control the display to display a first UI component corresponding to a search function for the determined set and a second UI component corresponding to a memo function for the determined set, and, based on the first UI component being selected, provide a search result for the determined set.
  • the processor may, based on the second UI component being selected, store the determined set in a memo application.
  • the processor may obtain the at least one keyword from the content of the call with the user of the other electronic device using a trained artificial intelligence (AI) model.
  • the processor may, based on the call with the user of the other electronic device being ended, determine a recommended application among a plurality of applications of the electronic device based on the obtained at least one keyword and control the display to display a UI asking whether to perform a task associated with the determined recommended application and the obtained at least one keyword.
  • FIG. 1 is a diagram illustrating an electronic device to display keywords obtained from a call according to an embodiment
  • FIGS. 2 and 3 are diagrams illustrating a configuration of an electronic device according to various embodiments
  • FIGS. 4 and 5 are diagrams illustrating an embodiment of obtaining a keyword from a call content and displaying the obtained keyword on a call screen according to an embodiment
  • FIG. 6 is a diagram illustrating an embodiment of selecting keywords displayed on a call screen according to an embodiment
  • FIG. 7 is a diagram illustrating a user interface (UI) for providing a memo function or a search function with selected keywords according to an embodiment
  • FIG. 8 is a diagram illustrating an embodiment of displaying a screen on a call screen according to searching based on the obtained keywords according to an embodiment
  • FIGS. 9 and 10 are diagrams illustrating an embodiment of changing a keyword on a real time basis on a call screen according to a call content according to an embodiment
  • FIG. 11 is a diagram illustrating a user interface (UI) for providing a memo function or a search function with the selected keywords according to another embodiment
  • FIG. 12 is a diagram illustrating an embodiment of registering a memo with the keywords obtained from a call
  • FIG. 13 is a diagram illustrating a UI for recommending a task with the keywords obtained from a call immediately after a call according to an embodiment
  • FIG. 14 is a diagram illustrating an embodiment of providing personalization information with the keywords obtained from a call according to an embodiment
  • FIG. 15 is a diagram illustrating an overall structure of a service providing various functions by obtaining a keyword from a call according to an embodiment
  • FIG. 16 is a block diagram illustrating a processor for learning and using a recognition model according to an embodiment
  • FIGS. 17 to 19 are block diagrams illustrating a learning unit and an analysis unit according to various embodiments.
  • FIG. 20 is a flowchart of a network system using an AI model according to an embodiment.
  • FIG. 21 is a flowchart illustrating a control method of an electronic device according to an embodiment.
  • the expressions “have,” “may have,” “including,” or “may include” may be used to denote the presence of a feature (e.g., a numerical value, a function, an operation), and do not exclude the presence of additional features.
  • the expressions “A or B,” “at least one of A and / or B,” or “one or more of A and / or B,” and the like include all possible combinations of the listed items.
  • “A or B,” “at least one of A and B,” or “at least one of A or B” includes (1) at least one A, (2) at least one B, or (3) at least one A and at least one B together.
  • the terms “first,” “second,” or the like used in the disclosure may indicate various components regardless of a sequence and/or importance of the components, may be used to distinguish one component from the other components, and do not limit the corresponding components.
  • a first user device and a second user device may indicate different user devices regardless of a sequence or importance thereof.
  • the first component may be named the second component and the second component may also be similarly named the first component, without departing from the scope of the disclosure.
  • terms such as “module,” “unit,” “part,” and so on may be used to refer to an element that performs at least one function or operation, and such element may be implemented as hardware or software, or a combination of hardware and software. Further, except for when each of a plurality of “modules,” “units,” “parts,” and the like needs to be realized in individual hardware, the components may be integrated in at least one module or chip and be realized in at least one processor.
  • an element (e.g., a first element) that is “operatively or communicatively coupled with/to” another element (e.g., a second element) may be directly connected to the other element or may be connected via another element (e.g., a third element).
  • on the other hand, when an element (e.g., a first element) is “directly coupled with/to” another element (e.g., a second element), it may be understood that there is no other element (e.g., a third element) between the element and the other element.
  • the expression “configured to” may be used interchangeably with, for example, “suitable for,” “having the capacity to,” “designed to,” “adapted to,” “made to,” or “capable of.”
  • the expression “configured to” does not necessarily mean “specifically designed to” in a hardware sense. Instead, under some circumstances, “a device configured to” may indicate that such a device can perform an action along with another device or part.
  • a processor configured to perform A, B, and C may indicate an exclusive processor (e.g., an embedded processor) to perform the corresponding action, or a generic-purpose processor (e.g., a central processing unit (CPU) or application processor (AP)) that can perform the corresponding actions by executing one or more software programs stored in the memory device.
  • the electronic apparatus may include, for example, smartphones, tablet personal computers (PCs), mobile phones, video telephones, electronic book readers, desktop PCs, laptop PCs, netbook computers, workstations, servers, a personal digital assistant (PDA), a portable multimedia player (PMP), an MP3 player, a medical device, a camera, or a wearable device.
  • the wearable device may include any one or any combination of an accessory type (e.g., a watch, a ring, a bracelet, a necklace, a pair of glasses, a contact lens, or a head-mounted device (HMD)); a fabric or garment-embedded type (e.g., electronic clothing); a skin-attached type (e.g., a skin pad or a tattoo); or a bio-implantable circuit.
  • the electronic device may be a home appliance.
  • the home appliance may include at least one of, for example, a television, a digital video disk (DVD) player, an audio system, a refrigerator, an air conditioner, a cleaner, an oven, a microwave, a washing machine, an air purifier, a set-top box, a home automation control panel, a security control panel, a media box (e.g., SAMSUNG HOMESYNC™, APPLE TV™, or GOOGLE TV™), a game console (e.g., XBOX™, PLAYSTATION™), an electronic dictionary, an electronic key, a camcorder, or an electronic frame.
  • the electronic device may include at least one of a variety of medical devices (e.g., various portable medical measurement devices such as a blood glucose meter, a heart rate meter, a blood pressure meter, or a temperature measuring device, magnetic resonance angiography (MRA), magnetic resonance imaging (MRI), computed tomography (CT), or an ultrasonic wave device), a navigation system, a global navigation satellite system (GNSS), an event data recorder (EDR), a flight data recorder (FDR), automotive infotainment devices, marine electronic equipment (e.g., marine navigation devices, gyro compasses, etc.), avionics, a security device, a car head unit, industrial or domestic robots, a drone, an automated teller machine (ATM), a point of sale (POS) of a store, or an Internet of Things (IoT) device (e.g., light bulbs, sensors, sprinkler devices, fire alarms, thermostats, street lights, toasters, exercise equipment, or the like).
  • the electronic device may include a part of furniture or building / structure, an electronic board, an electronic signature receiving device, a projector, any of various measuring devices (e.g., water, electricity, gas, or electromagnetic wave measuring devices, or the like), or the like.
  • the electronic device may be one or more of the various devices described above.
  • the electronic device may be a flexible electronic device.
  • the electronic device according to an embodiment is not limited to the devices described above, and may include a new electronic device according to technology development.
  • One object of the disclosure is to provide an electronic device having an artificial intelligence agent system that provides keyword-based personalized recommendation information by obtaining a keyword from call content and performing a related command, and a control method thereof.
  • the electronic device may perform, for example, a simple search and a memo function based on a keyword obtained from a call during a call.
  • an application predicted to be used by a user based on the keyword obtained from the call may be determined and executed in association with the obtained keyword.
  • the keywords obtained in the call may be stored to provide personalization information based on the keywords at any time.
  • FIG. 1 is a diagram illustrating an electronic device to obtain and provide a keyword from a call with another party according to an embodiment.
  • a user may perform a call with a user of another electronic device using an electronic device 100 .
  • the electronic device 100 may display the keywords obtained in the call on the call screen during a call with the other party.
  • the displayed keywords are selectable user interface (UI) elements.
  • various functions may be executed based on the selected keyword. For example, a search function may be performed based on the selected keyword, and a memo function may be performed based on the selected keyword.
  • the electronic device 100 may provide a UI asking the user whether to perform a task that applies (or inputs) the obtained keywords to an application (e.g., a memo application, a calendar application, a schedule application, or a search application) that the user is expected to execute based on the obtained keywords.
  • the electronic device 100 may store keywords obtained during a call and may provide personalized information based on those keywords at any time, upon the user's request.
  • FIG. 2 is a block diagram illustrating a configuration of the electronic device 100 according to an embodiment.
  • the electronic device 100 includes a communicator 110, a memory 120, a processor 130, and a display 140. Depending on the embodiment, some of these components may be omitted, and other suitable hardware/software components apparent to those of ordinary skill in the art, although not shown, may be further included in the electronic device 100.
  • the communicator 110 may be connected to a network via wireless or wired communication to communicate with an external device.
  • wireless communication may include cellular communication using any one or any combination of, for example, long-term evolution (LTE), LTE Advanced (LTE-A), code division multiple access (CDMA), wideband CDMA (WCDMA), universal mobile telecommunications system (UMTS), wireless broadband (WiBro), global system for mobile communications (GSM), and the like.
  • the wireless communication may include, for example, any one or any combination of Wi-Fi, Bluetooth, Bluetooth low energy (BLE), Zigbee, near field communication (NFC), magnetic secure transmission, radio frequency (RF), or body area network (BAN).
  • Wired communication may include, for example, a universal serial bus (USB), a high definition multimedia interface (HDMI), a recommended standard 232 (RS-232), a power line communication, or a plain old telephone service (POTS).
  • the network over which the wireless or wired communication is performed may include any one or any combination of a telecommunications network, for example, a computer network (for example, a local area network (LAN) or a wide area network (WAN)), the Internet, or a telephone network.
  • the communicator 110 may include a cellular module, a Wi-Fi module, a Bluetooth module, a global navigation satellite system (GNSS) module (for example, a global positioning system (GPS) module, a Glonass module, a Beidou module, or a Galileo module), a near field communication (NFC) module, a radio frequency (RF) module, or the like.
  • the cellular module may provide at least one of, for example, and without limitation, a voice call, a video call, a text service, an Internet service, or the like, through a communication network.
  • the cellular module may perform identification and authentication of an electronic device within the communication network using a subscriber identity module (e.g., a subscriber identification module (SIM) card).
  • the cellular module may perform at least some of the functions that the processor may provide.
  • the cellular module may include, for example, a communication processor (CP).
  • Each of the Wi-Fi module, the Bluetooth module, the GNSS module, or the NFC module may include various communication circuitry and a processor for processing data to be transmitted and received. According to an example embodiment, at least a portion (e.g., two or more) of the cellular module, the Wi-Fi module, the Bluetooth module, the GNSS module, the NFC module, or the like, may be included in one integrated chip (IC) or an IC package.
  • the RF module may, for example, transmit and receive a communication signal (e.g., an RF signal).
  • the RF module may include, for example, and without limitation, a transceiver, a power amp module (PAM), a frequency filter, a low noise amplifier (LNA), an antenna, or the like.
  • at least one of the cellular module, the Wi-Fi module, the Bluetooth module, the GNSS module, the NFC module, etc. may transmit and receive an RF signal through a separate RF module.
  • the memory 120 may include, for example, an internal memory or an external memory.
  • the internal memory may be implemented as at least one of a volatile memory, such as a dynamic random access memory (DRAM), a static random access memory (SRAM), or a synchronous dynamic random access memory (SDRAM), or a nonvolatile memory, such as a one-time programmable ROM (OTPROM), a programmable ROM (PROM), an erasable and programmable ROM (EPROM), an electrically erasable and programmable ROM (EEPROM), a mask ROM, a flash ROM, or a flash memory (for example, NAND flash or NOR flash), a hard disk drive (HDD), or a solid state drive (SSD).
  • the external memory may include a flash drive, for example, a compact flash (CF), secure digital (SD), micro secure digital (micro-SD), mini secure digital (mini-SD), extreme digital (xD), multi-media card (MMC), or a memory stick.
  • the memory 120 may be accessed by the processor 130, and reading, writing, modifying, and updating of data may be performed by the processor 130.
  • in the disclosure, the term memory may include the memory 120 provided separately from the processor 130, and at least one of a read-only memory (ROM, not shown) and a random access memory (RAM, not shown) in the processor 130.
  • the memory 120 may store a trained AI model, learning data, or the like.
  • the memory 120 may store various applications such as a memo application, a schedule application, a calendar application, a web browser application, a call application, or the like.
  • the display 140 is configured to output an image.
  • the display 140 may be implemented as a liquid crystal display (LCD), a light-emitting diode (LED) display, an organic light-emitting diode (OLED) display (e.g., an active-matrix OLED (AMOLED) or a passive-matrix OLED (PMOLED)), a microelectromechanical systems (MEMS) display, or an electronic paper display.
  • the display 140 may be implemented as a touch screen.
  • the processor 130 is configured to control overall operations of the electronic device 100 .
  • the processor 130 may drive an operating system and applications to control a number of hardware or software elements connected to the processor 130 and may perform various data processing and operations.
  • the processor 130 may be a central processing unit (CPU) or a graphics-processing unit (GPU), or both.
  • the processor 130 may be implemented as at least one of a general processor, a digital signal processor, an application specific integrated circuit (ASIC), a system on chip (SoC), a microcomputer (MICOM), or the like.
  • the processor 130 may perform various operations using a trained AI model. For example, the processor 130 may obtain at least one keyword from a call with the other party, performed using the electronic device 100, using the trained AI model. According to an embodiment, the processor 130 may obtain proper nouns by inputting the call content to a named entity recognition (NER) model and select keywords from among the proper nouns. All of the obtained proper nouns may be selected as keywords, or only those satisfying specific criteria may be selected as keywords.
  • the specific criteria may be, for example, whether the number of times a proper noun is mentioned is greater than or equal to a preset number of times, whether the interval between mentions of the proper noun is less than a preset time, or the like.
  • a proper noun belonging to a category corresponding to a keyword may be selected as a keyword.
  • a category such as time, person, or place may be a category corresponding to a keyword.
  • alternatively, a keyword may be obtained by inputting the proper nouns to an AI model trained to select main keywords; a minimal sketch of the selection criteria appears below.
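  • The following is a minimal Python sketch of the keyword-selection criteria described above (mention count, mention interval, and category membership). The input format, category labels, and thresholds are illustrative assumptions, not the patent's actual implementation.

```python
# Minimal sketch: select keywords from NER-extracted proper nouns using
# the mention-count, mention-interval, and category criteria described
# above. Inputs, labels, and thresholds are illustrative assumptions.
from collections import defaultdict

KEYWORD_CATEGORIES = {"TIME", "PERSON", "PLACE"}  # assumed categories

def select_keywords(mentions, min_count=2, max_gap_s=30.0):
    """mentions: list of (text, category, timestamp_s) tuples, e.g. the
    output of an NER pass over the transcribed call content."""
    by_word = defaultdict(list)
    categories = {}
    for text, category, ts in mentions:
        by_word[text].append(ts)
        categories[text] = category

    keywords = []
    for word, stamps in by_word.items():
        stamps.sort()
        gaps = [b - a for a, b in zip(stamps, stamps[1:])]
        mentioned_often = len(stamps) >= min_count            # preset count
        mentioned_closely = any(g < max_gap_s for g in gaps)  # preset interval
        if categories[word] in KEYWORD_CATEGORIES and (
            mentioned_often or mentioned_closely
        ):
            keywords.append(word)
    return keywords
```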
  • the artificial intelligence model is a model trained based on an artificial intelligence algorithm; for example, it may be a model based on a neural network.
  • the learned AI model may include a plurality of weighted network nodes that may be designed to simulate the human brain structure on a computer and simulate a neuron of a human neural network. The plurality of network nodes may each establish a connection relationship so that the neurons simulate the synaptic activity of the neurons sending and receiving signals through the synapse.
  • the learned AI model may include, for example, a neural network model or a deep learning model developed from a neural network model.
  • a plurality of network nodes are located at different depths (or layers), and may transmit and receive data according to a convolution connection relationship.
  • examples of trained determination models include, but are not limited to, a Deep Neural Network (DNN), a Recurrent Neural Network (RNN), and a Bidirectional Recurrent Deep Neural Network (BRDNN).
  • the electronic device 100 may use an artificial intelligence (AI)-dedicated program (or an artificial intelligence agent), such as a personal assistant program (e.g., Bixby™).
  • the personal assistant program is a dedicated program for providing an AI-based service.
  • the program may be executed by a conventional general-purpose processor (for example, a CPU) or a separate single-purpose processor (e.g., a GPU, a field programmable gate array (FPGA), an ASIC, or the like).
  • the electronic device 100 may include a plurality of processors, for example, a processor dedicated to artificial intelligence and a processor for other processing.
  • when a preset user input (e.g., a touch on an icon corresponding to the personal assistant chatbot, a user voice including a preset word, etc.) is received or a button provided on the electronic device 100 (e.g., a button for executing the AI agent) is pressed, the AI agent may be operated (or executed). Alternatively, the AI agent may be in a standby state before the preset user input is detected or the button provided on the electronic device 100 is selected.
  • the standby state is a state in which reception of a predefined user input (e.g., a user voice including a preset keyword such as “Bixby”) is sensed to trigger the start of the AI agent's operation.
  • when the predefined user input is detected in the standby state, the electronic device 100 may operate the AI agent.
  • the AI agent may perform a function of the electronic device 100 based on a received user voice, and may output an answer when the voice is a question.
  • An operation based on AI may be performed in the electronic device 100 or through an external server.
  • for example, the AI model may be stored in a server; the electronic device 100 may provide the data to be input to the AI model to the server, and the server may input the data to the AI model and provide the resulting data to the electronic device 100. A minimal sketch of this exchange appears below.
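  • The following is a minimal sketch of that device/server split: the device posts the model input to a server hosting the AI model and reads back the result. The endpoint URL and the JSON payload shape are illustrative assumptions, not a documented interface.

```python
# Minimal sketch: send model inputs to a server-hosted AI model and
# receive the result. Endpoint and payload shape are assumptions.
import json
import urllib.request

def request_keywords(transcript, url="https://example.com/keyword-model"):
    payload = json.dumps({"transcript": transcript}).encode("utf-8")
    req = urllib.request.Request(
        url, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["keywords"]
```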
  • FIG. 3 is a block diagram illustrating a configuration of the electronic device 100 according to another embodiment.
  • the electronic device 100 may include a communicator 110 , a memory 120 , a processor 130 , a display 140 , an inputter 150 , an audio outputter 160 , and a microphone 170 .
  • Some of the components may be omitted depending on the embodiment, and any suitable hardware/software components apparent to those of ordinary skill in the art, although not shown, may be further included in the electronic device 100.
  • the communicator 110, the memory 120, the processor 130, and the display 140 have been described with reference to FIG. 2, and thus redundant description thereof will be omitted.
  • the microphone 170 is configured to receive a user voice or other sound and convert the voice or sound into a digital signal.
  • the processor 130 may obtain at least one keyword in a call voice input through the microphone 170 .
  • the microphone 170 may be provided inside the electronic device 100 , but it is only one embodiment, and may be provided outside the electronic device 100 and electrically connected to the electronic device 100 .
  • the inputter 150 may receive a user input and transfer the user input to the processor 130 .
  • the inputter 150 may include a touch sensor, a (digital) pen sensor, a pressure sensor, a key, or a microphone.
  • the touch sensor may use, for example, at least one of an electrostatic type, a pressure sensitive type, an infrared type, and an ultrasonic type.
  • the (digital) pen sensor may, for example, be a part of a touch panel or may include a separate sheet for recognition.
  • the key may include, for example, a physical button, an optical key, or a keypad.
  • the display 140 and a touch sensor of the inputter 150 may form a mutual layer structure and may be implemented with a touch screen.
  • the audio outputter 160 may output an audio signal.
  • the audio outputter 160 may output a call voice of the other party received through the communicator 110.
  • the audio outputter 160 may output audio data stored in the memory 120 .
  • the audio outputter 160 may output various notification sounds and may output the voice of an AI assistant.
  • the audio outputter 160 may include a receiver, a speaker, a buzzer, etc.
  • the memory 120 may store computer executable instructions that, when executed by the processor 130, perform the control method of the electronic device 100 described in this disclosure.
  • the processor 130 may, by executing a computer executable instruction, obtain at least one keyword from a call with a user of another electronic device conducted through the communicator 110, and may control the display 140 so that the obtained keyword is displayed during the call.
  • the processor 130 may collect the voice input through the microphone and the voice of the other party received through the communicator 110, convert the collected voice into text, and obtain a keyword from the converted text.
  • the processor 130 may obtain at least one keyword from the call with the user of the other electronic device using the trained AI model.
  • alternatively, the processor 130 may obtain at least one keyword from the call based on a rule. For example, the processor 130 may obtain at least one keyword based on the frequency with which a word is mentioned in the call, the intensity of the sound of a spoken word, and so on; a minimal sketch of such a rule follows.
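  • The following is a minimal sketch of such a rule, scoring candidate words by mention frequency and speech intensity. The scoring weights and the form of the inputs are illustrative assumptions.

```python
# Minimal sketch: rank candidate words by how often they are mentioned
# and how loudly they are spoken. Weights and inputs are assumptions.
from collections import Counter

def rule_based_keywords(words, intensities, top_n=8,
                        freq_weight=1.0, loud_weight=0.5):
    """words: transcribed words in spoken order; intensities: per-word
    loudness (e.g., normalized RMS energy of the matching audio)."""
    freq = Counter(words)
    loudness = {}
    for word, level in zip(words, intensities):
        loudness[word] = max(loudness.get(word, 0.0), level)
    scored = {w: freq_weight * freq[w] + loud_weight * loudness.get(w, 0.0)
              for w in freq}
    return sorted(scored, key=scored.get, reverse=True)[:top_n]
```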
  • FIG. 4 illustrates a call content between a user 10 of the electronic device 100 and a user 20 of another electronic device.
  • the processor 130 may obtain at least one keyword (underlined) from the content of the call. For example, the processor 130 may obtain nouns such as “tomorrow,” “7 pm,” “brother Dongnam,” “Gangnam Station,” “delicious restaurant,” or the like, as keywords.
  • the processor 130 may control the display 140 to display at least one obtained keyword.
  • the processor 130 may control the display 140 to display at least one keyword obtained on a call screen 500 with the user 20 of another electronic device.
  • the call screen 500 is a screen displayed on the display 140 during a call with the user 20 of the other electronic device; it may include information such as a name and contact information of the user 20, and button user interfaces (UIs) corresponding to functions such as speakerphone, call disconnection, mute, or the like.
  • the processor 130 may control the display 140 to display at least one UI element 51 to 58 including at least one keyword obtained from a call.
  • the at least one UI element 51 to 58 may be circular and, more specifically, may have a bubble shape.
  • the shape of the UI element including the keyword is not particularly limited.
  • the UI element including the keyword may have a rectangular shape.
  • the processor 130 may display the at least one UI element 51 to 58 including a keyword in different sizes according to the importance of the keyword included therein.
  • a UI element including a keyword having a high level of importance may be displayed in a larger size than a UI element including a keyword having a low level of importance.
  • for example, UI elements 51, 54, 56, and 58 including keywords having a relatively high importance may be displayed to be larger than UI elements 52, 53, 55, and 57 including keywords having a relatively low importance.
  • the processor 130 may obtain information about the importance of keywords by inputting the keywords to the trained AI model.
  • the processor 130 may determine the level of importance of a keyword according to a weight set differently for each category. For example, if weights (numbers in parentheses) are set such as time category (5), person category (4), and place category (2), a keyword belonging to the time category has a higher level of importance than a keyword belonging to the place category.
  • the processor 130 may also determine the level of importance according to the number of times a keyword is mentioned in the call; the more mentions, the higher the level of importance. A minimal sketch of this scoring and the corresponding bubble sizing follows.
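  • The following is a minimal sketch of the importance scoring and bubble sizing described above. The category weights follow the example in the text (time 5, person 4, place 2); the size mapping itself is an illustrative assumption.

```python
# Minimal sketch: score a keyword by category weight and mention count,
# then map the score onto a bubble diameter. Sizing is an assumption.
CATEGORY_WEIGHTS = {"TIME": 5, "PERSON": 4, "PLACE": 2}  # example weights

def importance(category, mention_count):
    # Weight by category, scaled by how often the keyword was mentioned.
    return CATEGORY_WEIGHTS.get(category, 1) * mention_count

def bubble_diameter_px(score, min_px=48, max_px=120, max_score=25):
    # Linearly map the importance score onto a bounded bubble size.
    ratio = min(score / max_score, 1.0)
    return int(min_px + ratio * (max_px - min_px))
```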
  • the processor 130 may control to display only a predetermined number of keywords on the call screen 500 .
  • the processor 130 may control so as to display only some keywords having a high level of importance.
  • the keywords displayed in the call screen 500 may disappear from the call screen 500 according to user manipulation. If a specific keyword disappears, another new keyword may be displayed on the call screen 500 . For example, a new keyword having a subsequent level of importance may be displayed on the call screen 500 .
  • in response to a user input to remove a UI element including a keyword, the processor 130 may remove the corresponding UI element from the call screen 500 and may display a UI element including the keyword having the next level of importance on the call screen 500; a minimal sketch of this behavior follows.
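  • The following is a minimal sketch of that display behavior: only the most important keywords are visible, and removing one promotes the keyword with the next level of importance. The class shape and display limit are illustrative assumptions.

```python
# Minimal sketch: keep only the top-N keywords visible; removing one
# reveals the next most important. Names and limit are assumptions.
class KeywordDisplay:
    def __init__(self, scored_keywords, limit=8):
        # scored_keywords: list of (keyword, importance) pairs.
        self._queue = sorted(scored_keywords, key=lambda kv: kv[1],
                             reverse=True)
        self._limit = limit

    def visible(self):
        return [kw for kw, _ in self._queue[: self._limit]]

    def remove(self, keyword):
        # Dropping a bubble promotes the keyword with the next importance.
        self._queue = [kv for kv in self._queue if kv[0] != keyword]
        return self.visible()
```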
  • the user input to remove the UI element may be, for example, a user’s touch motion to move the UI element out of the call screen 500 .
  • alternatively, the user input to remove the UI element may be a double touch on the UI element.
  • Various other user inputs are possible.
  • a graphic effect of a bursting bubble may be provided when a UI element is removed.
  • the processor 130 may arrange keywords according to the level of association between them so that the association may be grasped more intuitively. According to one embodiment, the processor 130 may determine an arrangement interval of a plurality of keywords according to the interval at which they are mentioned in the call with the user 20 of the other electronic device and may control the display 140 to display the plurality of keywords at the determined arrangement interval.
  • keywords mentioned together may be displayed adjacent to each other. For example, if a call includes the sentence “how about meeting at Gangnam Station at 7 o'clock, tomorrow?”, a UI element 55 including the keyword “tomorrow,” a UI element 51 including “7 o'clock,” and a UI element 56 including the keyword “Gangnam Station” may be displayed at positions adjacent to each other; a minimal sketch of such a proximity grouping follows.
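  • The following is a minimal sketch of such a proximity layout, grouping keywords whose mentions fall close together in time so that each group can be rendered as adjacent bubbles. The inputs and grouping threshold are illustrative assumptions.

```python
# Minimal sketch: group keywords mentioned close together in time so
# each group can be laid out as adjacent bubbles. Threshold is assumed.
def co_mention_groups(first_mention, gap_s=10.0):
    """first_mention: dict mapping keyword -> first mention time (s)."""
    ordered = sorted(first_mention.items(), key=lambda kv: kv[1])
    groups, current, last_ts = [], [], None
    for keyword, ts in ordered:
        if last_ts is not None and ts - last_ts > gap_s:
            groups.append(current)
            current = []
        current.append(keyword)
        last_ts = ts
    if current:
        groups.append(current)
    return groups
```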
  • the user 10 may view the keywords displayed on the call screen 500 during a call and intuitively grasp the subject of the conversation.
  • the keywords obtained in the call may be used in various functions.
  • the processor 130 may apply a keyword selected by the user, from among the at least one keyword displayed on the display 140, to an application of the electronic device 100, so as to perform a function related to the keyword in the corresponding application.
  • the processor 130 may provide a search result for a keyword selected by a user among at least one keyword displayed on the display 140 .
  • the processor 130 may register a keyword selected by the user, among at least one keyword displayed on the display 140 , to a memo application.
  • various functions may be provided.
  • if a user selects one keyword, a function may be executed using that keyword, and if a user selects two or more keywords, a function may be executed using the two or more keywords.
  • the two or more keywords may be selected by various user operations, such as sequentially touching two or more keywords, simultaneously touching two or more keywords (multi-touch), or dragging to connect two or more keywords.
  • FIG. 6 is a diagram illustrating an embodiment of selecting two or more keywords.
  • a user may select keywords “Gangnam Station” and “delicious restaurant” by performing a dragging manipulation starting from the UI element 56 including “Gangnam Station” and ending at the UI element 58 including the keyword “delicious restaurant.”
  • the processor 130 may determine the selected two or more keywords as one set. Referring to FIG. 6 , the processor 130 may determine the keywords “Gangnam Station” and “delicious restaurant” as one set.
  • the electronic device 100 may perform search or write a memo with the determined set.
  • this embodiment will be described with reference to FIG. 7.
  • FIG. 7 is a diagram illustrating one embodiment of the disclosure for performing a search or writing a memo with a keyword selected by a user among the keywords obtained from the call.
  • the processor 130 may determine two or more keywords as one set and may display the two or more keywords selected by the user in one region 710 of the call screen 500 as in FIG. 7 .
  • the processor 130 may control the display 140 to display a first UI element 720 corresponding to a memo function for the determined set and a second UI element 730 corresponding to a search function for the determined set.
  • the processor 130 may store a keyword set in a memo application.
  • the processor 130 may provide a search result for the set of keywords. For example, as shown in FIG. 8 , a search result 810 corresponding to “Gangnam Station” and “delicious restaurant” may be provided. The search result 810 may also be provided on the call screen 500 .
  • the user 10 may continue the call with the user 20 of the other electronic device as illustrated in FIG. 9.
  • the processor 130 may obtain keywords in real time from a call. Accordingly, the keywords shown in the call screen 500 during a call may be changed in real time.
  • FIG. 10 illustrates the call screen 500 after performing a call as shown in FIG. 9 .
  • the keywords displayed in the call screen 500 of FIG. 10 may be identified as being different from the keywords displayed in the call screen 500 of FIG. 5 .
  • the processor 130 may determine three keywords selected by the dragging operation as one set and display the three keywords selected by the user on one region 710 of the call screen 500 as illustrated in FIG. 11.
  • keywords may be arranged in a natural format according to language structure.
  • the processor 130 may arrange keywords according to category (time, place, activity). For example, the processor 130 may place a keyword belonging to the place category (Tasting Room) behind the keyword belonging to the time category (7 o'clock), and may place a keyword belonging to the activity category (booking) behind the keyword belonging to the place category (a minimal ordering sketch follows).
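  • A minimal ordering sketch, assuming each selected keyword is tagged with its category; the category order and sample pairs are assumptions for illustration.

```python
# Assumed natural ordering of categories when composing a phrase.
CATEGORY_ORDER = {"time": 0, "place": 1, "activity": 2}

def arrange_by_category(keywords: list[tuple[str, str]]) -> list[str]:
    """Sort (keyword, category) pairs into the time -> place -> activity order."""
    ordered = sorted(keywords, key=lambda pair: CATEGORY_ORDER.get(pair[1], 99))
    return [keyword for keyword, _ in ordered]

selected = [("booking", "activity"), ("7 o'clock", "time"), ("Tasting Room", "place")]
print(arrange_by_category(selected))  # ["7 o'clock", 'Tasting Room', 'booking']
```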
  • the processor 130 may control the display 140 to display a first UI element 720 corresponding to a search function for a set of three keywords and a second UI element 730 corresponding to a memo function for a set of three keywords.
  • the processor 130 may store a keyword set in the memo application. For example, as shown in FIG. 12 , the processor 130 may input and store a keyword set “booking of a Tasting Room at 7 o′clock” in a list of keywords in the memo application.
  • when the call with the user of another electronic device ends, the processor 130 may determine a recommended application among the plurality of applications of the electronic device 100 based on at least one keyword obtained in the call, and may control the display 140 to display a UI asking whether to perform a task related to the determined recommended application and the obtained at least one keyword. When the UI asking about the task is selected, the processor 130 may perform the task.
  • the processor 130 may determine an application corresponding to a keyword obtained in the call (or the category of the obtained keyword) as a recommended application, based on a mapping list between keywords (or keyword categories) and applications (a minimal sketch of such a mapping follows the next bullet).
  • the mapping list may be stored in the memory 120 .
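  • A minimal sketch of such a mapping list, assuming it is kept as a simple lookup table in the memory 120; the category names and application identifiers here are hypothetical.

```python
from typing import Optional

# Hypothetical mapping between keyword categories and recommended applications.
KEYWORD_CATEGORY_TO_APP = {
    "schedule": "schedule application",
    "invoice amount": "remittance application",
    "contact": "contact list application",
    "message": "message application",
}

def recommend_application(keyword_category: str) -> Optional[str]:
    """Look up the recommended application for the category of an obtained keyword."""
    return KEYWORD_CATEGORY_TO_APP.get(keyword_category)

print(recommend_application("invoice amount"))  # 'remittance application'
```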
  • the processor 130 may display a UI asking whether to perform a task of registering a keyword related to a schedule, from among the obtained at least one keyword, in a schedule application.
  • the embodiment will be described with reference to FIG. 13 .
  • the processor 130 may control the display 140 to display a UI 1310 for asking whether to perform a task of registering the keyword obtained in the call to the schedule application immediately after the call is completed.
  • the processor 130 may perform a task of registering a schedule in the schedule application.
  • when the recommended application determined based on the at least one keyword obtained in the call is a remittance application, the processor 130 may control the display 140 to display a UI (e.g., “Do you want to remit 15,000 Won to Kim, Young-hee?”) asking whether to perform a task of remitting the invoice amount through the remittance application, based on a keyword associated with the invoice amount among the at least one obtained keyword.
  • the UI e.g., “Do you want to remit 15000 Won to Kim, Young-hee?”
  • when the recommended application determined based on the at least one keyword obtained in the call is a contact list application, the processor 130 may control the display 140 to display a UI (e.g., “Do you want to transmit the contact number of Kim, Cheol-soo to Kim, Yeong-hee?”) asking whether to perform a task of transmitting information about a first contact stored in the contact list application to a second contact stored in the contact list application, based on a keyword associated with transmission of a contact among the at least one keyword.
  • when the recommended application determined based on the at least one keyword obtained in the call is a message application, the processor 130 may control the display 140 to display a UI asking whether to perform a task of transmitting a text message composed of the keywords, based on a keyword related to message transmission among the at least one keyword.
  • personalization information may be provided based on at least one keyword obtained during a call.
  • An example of personalized information will be described with reference to FIG. 14 .
  • FIG. 14 is a diagram illustrating a personalization information provision screen according to an embodiment.
  • the electronic device 100 may provide personalization information based on user profile information.
  • the user profile information may include, for example, a user name, age, gender, occupation, schedule, a route of movement of a user (a route through which the electronic device 100 moves), and at least one keyword obtained in a call.
  • the electronic device 100 may provide information based on the user profile information.
  • the electronic device 100 may provide information based on a keyword obtained in a call. For example, when the keywords “dinner”, “7 o'clock”, and “Gangnam Station” are obtained in a call with a user of another electronic device as shown in FIG. 14, information about “recommended amusements around Gangnam Station at 7 p.m.” may be provided based on the keywords. According to an embodiment, information that better suits the context of use may be provided based on the call content.
  • FIG. 15 is a diagram illustrating an overall structure of a service providing various functions by obtaining a keyword from a call according to an embodiment.
  • a call voice (the voices of the user and of the user of the other electronic device) is input, pre-processing is performed on the incoming call voice, and feature vectors may be obtained.
  • the obtained feature vectors may be input to a trained artificial intelligence model, for example, a named entity recognition (NER) model, to extract proper nouns.
  • the electronic device 100 may select a keyword among the extracted proper nouns. At this time, the importance of the keyword may be considered.
  • the electronic device 100 may provide the selected keywords to the user, perform a search with the keywords selected by the user, or perform a memo function.
  • the electronic device 100 may also use the selected keywords to make a deep learning-based behavior recommendation (e.g., schedule registration, transmission, contact transmission, short message service (SMS) transmission, or the like).
  • the functions performed in the embodiments described above may be performed using an artificial intelligence model. For example, obtaining a keyword from a call content, determination of importance for the keyword, the arrangement of a keyword, determination of a recommended application based on the keyword, or the like, may be performed using the artificial intelligence model.
  • FIG. 16 is a block diagram illustrating a processor for learning and using a recognition model according to an embodiment.
  • a processor 1600 may include at least one of a learning unit 1610 and an analysis unit 1620 .
  • the learning unit 1610 may generate an artificial intelligence model having an identification criterion using learning data.
  • the learning unit 1610 may generate and train an artificial intelligence model to obtain a keyword from voice data with the voice data as learning data.
  • the analysis unit 1620 may input voice data to the artificial intelligence model to obtain a keyword from the voice data.
  • the artificial intelligence model may include, according to function, a speech-to-text (STT) module, a named entity recognition (NER) module, a keyword selection module, or the like.
  • the STT module converts the input speech to text.
  • the NER module receives the converted text and may extract proper nouns from the text.
  • the keyword selection module may identify the importance of the extracted proper nouns and may select a keyword. The importance may be determined according to a weight of a proper noun, a weight of the category to which the proper noun belongs, the frequency of use of a proper noun, a usage interval of a proper noun, the volume of the speech of the user uttering the proper noun, or the like (a minimal pipeline sketch follows).
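  • A high-level sketch of the STT, NER, and keyword-selection pipeline described above; stt() and ner() are placeholder stand-ins for trained models, and the weights and top_k cutoff are assumptions for this sketch.

```python
# Assumed category weights used by the keyword selection step.
WEIGHTS = {"time": 5, "place": 2}

def stt(audio: bytes) -> str:
    # Placeholder: a real STT module would transcribe the call audio.
    return "how about meeting at Gangnam Station at 7 o'clock tomorrow"

def ner(text: str) -> list[tuple[str, str]]:
    # Placeholder: a real NER model would extract (proper noun, category) pairs.
    return [("Gangnam Station", "place"), ("7 o'clock", "time"), ("tomorrow", "time")]

def select_keywords(entities: list[tuple[str, str]], top_k: int = 2) -> list[str]:
    """Rank extracted proper nouns by category weight and keep the top ones."""
    ranked = sorted(entities, key=lambda e: WEIGHTS.get(e[1], 1), reverse=True)
    return [keyword for keyword, _ in ranked[:top_k]]

print(select_keywords(ner(stt(b""))))  # ["7 o'clock", 'tomorrow']
```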
  • At least a portion of the learning unit 1610 and at least a portion of the analysis unit 1620 may be implemented as software modules or at least one hardware chip form and mounted in the electronic device 100 .
  • either one or both of the learning unit 1610 and the analysis unit 1620 may be manufactured in the form of a dedicated hardware chip for artificial intelligence (AI), or as part of a conventional general-purpose processor (e.g., a CPU or an application processor) or a graphics-only processor (e.g., a GPU), and may be mounted on the electronic device 100 described above or on a server providing an analysis result to the electronic device 100.
  • the dedicated hardware chip for artificial intelligence is a dedicated processor for probability calculation; it has higher parallel processing performance than an existing general-purpose processor, so it can quickly process computation tasks in artificial intelligence fields such as machine learning.
  • when the learning unit 1610 and the analysis unit 1620 are implemented as a software module (or a program module including instructions), the software module may be stored in a non-transitory computer-readable medium.
  • the software module may be provided by an operating system (OS) or by a predetermined application.
  • alternatively, some of the software modules may be provided by an OS, and others may be provided by a predetermined application.
  • the learning unit 1610 and the analysis unit 1620 may be mounted on one server, or may be mounted on separate servers, respectively.
  • the processor 1600 of FIG. 16 may be the processor 130 of FIG. 2 or FIG. 3 .
  • one of the learning unit 1610 and the analysis unit 1620 may be included in the electronic device 100 , and the other one may be included in an external server.
  • in this case, the learning unit 1610 may provide the model information it constructs to the analysis unit 1620 via wired or wireless communication, and data input to the analysis unit 1620 may be provided to the learning unit 1610 as additional learning data.
  • FIG. 17 is a block diagram illustrating the learning unit 1610 according to an embodiment.
  • the learning unit 1610 may implement a learning data acquisition unit 1610 - 1 and a model learning unit 1610 - 4 .
  • the learning unit 1610 may further selectively implement at least one of a learning data preprocessor 1610 - 2 , a learning data selection unit 1610 - 3 , and a model evaluation unit 1610 - 5 .
  • the learning data acquisition unit 1610 - 1 may obtain learning data to train a model for acquiring a keyword from voice data.
  • the learning data may be data collected or tested by the learning unit 1610 or the manufacturer of the learning unit 1610 .
  • learning data may include voice or text.
  • the model learning unit 1610-4 may use the learning data to train the model to have a criterion for understanding, recognizing, identifying, or inferring input data.
  • the model learning unit 1610 - 4 may train a model through supervised learning using at least a part of learning data as a criterion for identification.
  • the model learning unit 1610-4 may also train the artificial intelligence model through unsupervised learning, which detects a criterion for identifying a situation by learning from the learning data by itself without specific guidance.
  • the model learning unit 1610-4 may train the artificial intelligence model through reinforcement learning using, for example, feedback on whether a result provided according to learning is correct.
  • the model learning unit 1610-4 may also train the artificial intelligence model using, for example, a learning algorithm including error back-propagation or gradient descent.
  • when there are a plurality of previously constructed models, the model learning unit 1610-4 may determine a model having a high relevance between the input learning data and the basic learning data as the recognition model to be trained.
  • the basic learning data may be pre-classified according to the type of data, and the model may be pre-constructed for each type of data.
  • basic learning data may be pre-classified based on various criteria such as a region in which learning data is generated, time when learning data is generated, size of learning data, a genre of learning data, a generator of learning data, or the like.
  • the model learning unit 1610 - 4 can store the learned model.
  • the model learning unit 1610 - 4 can store the learned model in the memory 120 of the electronic device 100 .
  • the model learning unit 1610 - 4 may store the learned model in a memory of a server connected to the electronic device 100 via a wired or wireless network.
  • the learning unit 1610 may further implement a learning data preprocessor 1610 - 2 and a learning data selection unit 1610 - 3 to improve the processing capability of the model or to save resources or time required for generation of the model.
  • the learning data preprocessor 1610-2 may preprocess the acquired data so that the data may be used in learning for identifying a situation. That is, the learning data preprocessor 1610-2 may process the acquired data into a predetermined format so that the model learning unit 1610-4 may use the acquired data for learning to identify a situation.
  • the learning data selection unit 1610 - 3 can select data required for learning from the data acquired by the learning data acquisition unit 1610 - 1 or the data preprocessed by the learning data preprocessor 1610 - 2 .
  • the selected learning data may be provided to the model learning unit 1610 - 4 .
  • the learning data selection unit 1610 - 3 can select learning data necessary for learning from the acquired or preprocessed data in accordance with a predetermined selection criterion.
  • the learning data selection unit 1610-3 may also select learning data according to a selection criterion set in advance through learning by the model learning unit 1610-4.
  • the learning unit 1610 may further implement the model evaluation unit 1610 - 5 to improve a processing capability of the model.
  • the model evaluation unit 1610-5 may input evaluation data to the model, and if the analysis result output for the evaluation data does not satisfy a predetermined criterion, the model evaluation unit 1610-5 may make the model learning unit 1610-4 learn again.
  • the evaluation data may be predefined data to evaluate the model.
  • for example, when the number or ratio of pieces of evaluation data for which the analysis result is inaccurate exceeds a preset threshold, the model evaluation unit 1610-5 may evaluate that the model does not satisfy the predetermined criterion.
  • the model evaluation unit 1610 - 5 may evaluate whether each learned model satisfies a predetermined criterion, and determine the model which satisfies a predetermined criterion as a final model.
  • when there are a plurality of models satisfying the predetermined criterion, the model evaluation unit 1610-5 may determine one model, or a predetermined number of models, in descending order of evaluation score, as the final model (a minimal selection sketch follows).
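  • A minimal selection sketch under the stated criterion; evaluate() and the threshold are assumptions standing in for the predetermined evaluation procedure.

```python
from typing import Any, Callable, Optional

def pick_final_model(models: list[Any], evaluate: Callable[[Any], float],
                     threshold: float = 0.9) -> Optional[Any]:
    """Return the highest-scoring model that satisfies the criterion, else None."""
    scored = [(evaluate(model), model) for model in models]
    passing = [(score, model) for score, model in scored if score >= threshold]
    if not passing:
        return None  # no model satisfies the predetermined criterion: learn again
    return max(passing, key=lambda pair: pair[0])[1]
```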
  • the analysis unit 1620 may include a data acquisition unit 1620 - 1 and an analysis result provision unit 1620 - 4 .
  • the analysis unit 1620 may further implement at least one of a data preprocessor 1620 - 2 , a data selection unit 1620 - 3 , and a model update unit 1620 - 5 in a selective manner.
  • the data acquisition unit 1620 - 1 may obtain data necessary for identifying a situation.
  • the analysis result provision unit 1620 - 4 may apply the acquired data obtained from the data acquisition unit 1620 - 1 to the learned model as an input value.
  • the analysis result provision unit 1620-4 may apply data preprocessed by the data preprocessor 1620-2 or selected by the data selection unit 1620-3, which will be described later, to the model to obtain the analysis result.
  • the analysis result may be determined by the model.
  • the analysis result provision unit 1620 - 4 may obtain at least one keyword by applying voice data obtained from the data acquisition unit 1620 - 1 to the AI model.
  • the analysis unit 1620 may further include the data preprocessor 1620 - 2 and the data selection unit 1620 - 3 in order to improve an analysis result of the model or save resources or time to provide the analysis result.
  • the data preprocessor 1620 - 2 may preprocess the acquired data so that the acquired data may be used to identify a situation. That is, the data preprocessor 1620 - 2 can process the obtained data into the pre-defined format so that the analysis result provision unit 1620 - 4 may use the acquired data.
  • the data selection unit 1620 - 3 can select data required for identifying a situation from the data acquired by the data acquisition unit 1620 - 1 or the data preprocessed by the data preprocessor 1620 - 2 .
  • the selected data may be provided to the analysis result provision unit 1620 - 4 .
  • the data selection unit 1620 - 3 can select some or all of the obtained or preprocessed data according to a predetermined selection criterion for identifying a situation.
  • the data selection unit 1620-3 may also select data according to a selection criterion set in advance through learning by the model learning unit 1610-4.
  • the model update unit 1620 - 5 can control the updating of the model based on the evaluation of the analysis result provided by the analysis result provision unit 1620 - 4 .
  • the model update unit 1620-5 may provide the analysis result provided by the analysis result provision unit 1620-4 to the model learning unit 1610-4 so as to request that the model learning unit 1610-4 further train or update the model.
  • FIG. 19 is a diagram illustrating an embodiment in which the learning unit 1610 and the analysis unit 1620 are implemented in different devices.
  • the external server 200 may include the learning unit 1610 and the electronic device 100 may include the analysis unit 1620 .
  • the electronic device 100 and the server 200 may communicate with each other on a network.
  • the analysis result provision unit 1620 - 4 of the electronic device 100 applies the data selected by the data selection unit 1620 - 3 to the model generated by the server 200 to obtain the analysis result.
  • the analysis result provision unit 1620 - 4 of the electronic device 100 may receive a model generated by the server 200 from the server 200 , and may use the received model to obtain at least one keyword in a call with the user of the electronic device 100 .
  • FIG. 20 is a flowchart of a network system using an AI model according to various embodiments.
  • the network system using the AI model may include a first component 2010 and a second component 2020 .
  • the first component 2010 may be the electronic device 100 and the second component 2020 may be a server storing the AI model.
  • the first component 2010 may be a general purpose processor and the second component 2020 may be an AI dedicated processor.
  • the first component 2010 may be at least one application, and the second component 2020 may be an operating system (OS). That is, the second component 2020 may be more integrated than the first component 2010, and may be more dedicated, have less delay, have better performance, or have more resources.
  • the second component 2020 may be a component that can process many operations required for generating, updating, or applying the model faster and more efficiently than the first component 2010 .
  • An interface for transmitting/receiving data between the first component 2010 and the second component 2020 may be defined.
  • for example, an application program interface (API) having learning data (or an intermediate value or a transfer value) to be applied to the model as a factor value may be defined.
  • the API may be defined as a group of subroutines or functions that can be called by one protocol (for example, a protocol defined in the electronic device 100) for certain processing of another protocol (e.g., a protocol defined in an external server of the electronic device 100). That is, an environment may be provided in which an operation of one protocol may be performed through another protocol via the API (a minimal sketch of such an interface follows).
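  • A hedged sketch of what such an interface might look like when the first component is the device and the second component is a server holding the model; the endpoint URL, payload shape, and function name are illustrative assumptions, not a defined API.

```python
import json
from urllib import request

def extract_keywords(call_text: str,
                     endpoint: str = "https://example.com/keywords") -> list[str]:
    """POST the transcribed call text to the model server and return its keyword list."""
    body = json.dumps({"text": call_text}).encode("utf-8")
    req = request.Request(endpoint, data=body,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as response:
        return json.load(response)["keywords"]

# keywords = extract_keywords("how about meeting at Gangnam Station at 7 o'clock?")
```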
  • the user of the first component 2010 may perform a call with a user of another device in operation S 2001 .
  • the first component 2010 may transmit the call content between the user and the other party to the second component 2020 in operation S 2003 .
  • for example, audio data including the call voice may be transmitted to the second component 2020, or the first component 2010 may convert the speech into text and transmit the converted text to the second component 2020.
  • the second component 2020 may obtain at least one keyword in a call using the learned artificial intelligence model in operation S 2005 .
  • the keyword may be a proper noun having a high importance among the proper nouns included in the call. The importance may be determined based on the frequency of the proper noun, the weight set for the category to which the proper noun belongs, or the like.
  • the second component 2020 may transmit at least one keyword obtained from the call to the first component 2010 in operation S 2007 .
  • the first component 2010 may provide the received at least one keyword in operation S 2009 .
  • the first component 2010 may provide the received keyword on the call screen and perform various functions using the keyword in response to the user selecting the keyword. As described above, a search function may be provided or a memo generation function may be provided.
  • the first component 2010 may determine a recommended application based on a keyword received from the second component 2020 and may execute a recommended application by associating with a keyword.
  • FIG. 21 is a flowchart illustrating a method for controlling an electronic device according to an embodiment.
  • the flowchart illustrated in FIG. 21 may be configured as operations processed in the electronic device 100 described herein. Accordingly, the contents described with respect to the electronic device 100 may be applied to the flowchart shown in FIG. 21 , even if the contents are omitted below.
  • At least one keyword is obtained from the content of a call with a user of another electronic device, during the call using the electronic device, in operation S 2110 .
  • the at least one obtained keyword is displayed during a call in operation S 2120 .
  • the obtained keywords may be displayed on the call screen.
  • the keyword may automatically disappear from the call screen according to an algorithm considering the frequency of mention of the keyword and the time at which it was last mentioned.
  • a keyword on the call screen may be moved or removed by a user interaction (e.g., moving it to a recycle bin using a long click).
  • a search result for a keyword selected by a user, among the displayed at least one keyword, is provided in operation S 2130 .
  • a keyword selected by a user among at least one displayed keywords may be stored in the memo application.
  • the user may specify a plurality of keywords for searching or taking a memo. For example, after the initial keyword is touched, keywords to be searched or added to a memo together can be touched and designated in order; the electronic device 100 may identify that the selection of keywords has been completed when a long touch is performed on the last keyword, and may display a screen asking whether to search or take a memo by determining a set from the initially selected keyword to the lastly selected keyword.
  • the electronic device may predict a scenario which the user may perform after a call, and display a UI for asking the user whether to perform a task using the keyword.
  • when the UI is selected, it is identified that the user agrees to performing the task, and the task of inputting the keyword into the application may be performed automatically by utilizing the API.
  • personalized recommendation information may be actively provided from the keywords extracted during a call without user intervention, thereby significantly reducing the series of processes a user would otherwise perform one by one during and after a call, and increasing efficiency and user satisfaction. Also, a service that better suits the context of use may be provided based on the call content.
  • example embodiments described above may be implemented in software, hardware, or a combination of software and hardware.
  • the example embodiments of the disclosure may be implemented using at least one of application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), processors, controllers, micro-controllers, microprocessors, or electric units for performing other functions.
  • example embodiments described herein may be implemented by the processor 130 of the electronic device 100 .
  • example embodiments of the disclosure such as the procedures and functions described herein may be implemented as separate software modules. Each of the above-described software modules may perform one or more of the functions and operations described herein.
  • various embodiments of the disclosure may be implemented in software, including instructions stored on machine-readable storage media readable by a machine (e.g., a computer).
  • a machine is a device capable of calling a stored instruction from a storage medium and operating according to the called instruction, and may include the electronic device 100 of the disclosed embodiments.
  • when the instruction is executed by a processor, the processor may perform the function corresponding to the instruction, either directly or using other components under the control of the processor.
  • the instructions may contain a code made by a compiler or a code executable by an interpreter. For example, by executing an instruction stored in the storage medium by a processor, the control method of the electronic device 100 may be executed.
  • for example, a control method including obtaining at least one keyword from the content of a call with a user of another electronic device during the call using an electronic device, displaying the at least one keyword during the call, and providing a search result for a keyword selected by the user from among the displayed at least one keyword may be performed.
  • a non-transitory storage medium does not include a signal and is tangible; the term does not distinguish the case in which data is semi-permanently stored in a storage medium from the case in which data is temporarily stored in a storage medium.
  • the method according to the above-described embodiments may be included in a computer program product.
  • the computer program product may be traded as a product between a seller and a consumer.
  • the computer program product may be distributed in the form of a machine-readable storage medium (e.g., a compact disc read only memory (CD-ROM)), or distributed online through an application store (e.g., Play Store™, App Store™) or directly.
  • at least a portion of the computer program product may be at least temporarily stored or temporarily generated in a server of the manufacturer, a server of the application store, or a machine-readable storage medium such as memory of a relay server.
  • the respective elements (e.g., module or program) of the elements mentioned above may include a single entity or a plurality of entities.
  • at least one element or operation from among the corresponding elements mentioned above may be omitted, or at least one other element or operation may be added.
  • according to various embodiments, a plurality of components (e.g., modules or programs) may be integrated into one entity.
  • the integrated entity may perform at least one function of each of the plurality of elements in the same manner as, or in a manner similar to, that performed by the corresponding element among the plurality of elements before integration.
  • operations executed by the module, the program module, or other elements may be executed consecutively, in parallel, repeatedly, or heuristically; at least some operations may be executed in a different order or omitted, or another operation may be added.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Artificial Intelligence (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Acoustics & Sound (AREA)
  • Mathematical Physics (AREA)
  • Evolutionary Computation (AREA)
  • User Interface Of Digital Computer (AREA)
  • Telephone Function (AREA)

Abstract

A control method for an electronic device is disclosed. The control method comprises the steps of: during a call with a user of another electronic device by means of an electronic device, acquiring at least one keyword from a content of the call with the user of the another electronic device; displaying the at least one keyword during the call; and providing a search result for a keyword selected by a user from among the displayed at least one keyword. Particularly, at least a part of the control method of the present disclosure may use an artificial intelligence model learned according to at least one of machine learning, a neural network, and a deep learning algorithm.

Description

    CROSS REFERENCE TO RELATED APPLICATION(S)
  • This application is a continuation application of prior application number 17/255,605, filed on Dec. 23, 2020, which is a U.S. National Stage Application under 35 U.S.C. § 371 of International application number PCT/KR2019/011135, filed on Aug. 30, 2019, which is based on and claims priority to Korean patent application number 10-2018-0106303, filed on Sep. 6, 2018, in the Korean Intellectual Property Office, the disclosure of which is incorporated by reference herein in its entirety.
  • TECHNICAL FIELD
  • This disclosure relates to an electronic device and a control method therefor. More particularly, the disclosure relates to an electronic device which may obtain at least one keyword from a call content to be used for various functions and a control method therefor.
  • The disclosure relates to an artificial intelligence (AI) system utilizing machine learning algorithm and an application thereof.
  • BACKGROUND ART
  • In recent years, AI systems have been used in various fields. An AI system is a system in which a machine learns, judges, and iteratively improves analysis and decision making, unlike an existing rule-based smart system. As the use of AI systems increases, for example, an accuracy, a recognition rate and understanding or anticipation of a user’s taste may be correspondingly increased. As such, existing rule-based smart systems are gradually being replaced by deep learning-based AI systems.
  • AI technology is composed of machine learning, for example deep learning, and elementary technologies that utilize machine learning.
  • Machine learning is an algorithmic technology that is capable of classifying or learning characteristics of input data. Element technology is a technology that simulates functions, such as recognition and judgment of a human brain, using machine learning algorithms, such as deep learning. Element technology is composed of technical fields such as linguistic understanding, visual understanding, reasoning, prediction, knowledge representation, motion control, or the like.
  • Various fields implementing AI technology may include the following. Linguistic understanding is a technology for recognizing, applying, and/or processing human language or characters and includes natural language processing, machine translation, dialogue systems, question and answer, speech recognition or synthesis, and the like. Visual understanding is a technique for recognizing and processing objects in the way human vision does, including object recognition, object tracking, image search, human recognition, scene understanding, spatial understanding, image enhancement, and the like. Inference prediction is a technique for judging and logically inferring and predicting information, including knowledge-based and probability-based inference, optimization prediction, preference-based planning, recommendation, or the like. Knowledge representation is a technology for automating human experience information into knowledge data, including knowledge building (data generation or classification), knowledge management (data utilization), or the like. Motion control is a technique for controlling the autonomous driving of a vehicle and the motion of a robot, including movement control (navigation, collision, driving), operation control (behavior control), or the like.
  • Recently, an electronic device mounted with an artificial intelligence personal assistant platform is provided for efficient management of information and a variety of user experience.
  • During a call or after a call using an electronic device such as a smartphone, a user may record the call content or perform a web search based on the call content; these tasks may include, for example, recording the call content in an application, such as a memo application, a calendar application, or a schedule application, and searching for relevant information in a web browser. However, these tasks may be cumbersome. A related-art electronic device has no function for performing such tasks, and even an artificial intelligence personal assistant platform has a structure in which the user must directly give a command, and is thus limited in that it is passive.
  • DISCLOSURE Technical Problem
  • The disclosure has been made to solve the above-described problems, and an object of the disclosure is to provide an electronic device capable of obtaining at least one keyword from a call content for utilizing the keyword for various functions and a control method therefor.
  • Technical Solution
  • According to an embodiment, a control method of an electronic device includes, during a call with a user of another electronic device by using the electronic device, obtaining at least one keyword from a content of the call with the user of the another electronic device; displaying the at least one keyword during the call; and providing a search result for a keyword selected by a user from among the displayed at least one keyword.
  • The displaying may include displaying the at least one keyword on a call screen with the user of the another electronic device.
  • The displaying may include displaying at least one circular user interface (UI) element including each of the at least one keyword.
  • The method may further include determining a level of importance of the at least one keyword according to the number of times the keyword is mentioned during a call with the user of the another electronic device, and the displaying may include displaying a size of the at least one circular UI element differently according to the determined level of importance of the at least one keyword.
  • The displaying may include, based on the obtained at least one keyword being plural, determining an arrangement interval of the plurality of keywords according to an interval mentioned in a call with the user of the another electronic device and displaying the plurality of keywords according to the determined arrangement interval.
  • The method may further include, based on the displayed at least one keyword being plural, in response to receiving a user input to sequentially select two or more keywords from among the plurality of keywords, determining the selected two or more keywords as one set, and the providing the search result may include providing a search result about the determined set.
  • The method may further include displaying a first UI component corresponding to a search function for the determined set and a second UI component corresponding to a memo function for the determined set, and the providing the search result may include, based on the first UI component being selected, providing a search result for the determined set.
  • The method may further include, based on the second UI component being selected, storing the determined set in a memo application.
  • The obtaining may include obtaining at least one keyword from a content of a call with the user of the another electronic device using a learned artificial intelligence (AI) model.
  • The method may further include, based on a call with the user of the another electronic device being ended, determining a recommended application among the plurality of applications of the electronic device based on the at least one obtained keyword; and displaying a UI for asking whether to perform a task associated with the determined recommended application and the obtained at least one keyword.
  • According to an embodiment, an electronic device includes a communicator, a display, a memory configured to store computer executable instructions, and a processor configured to, by executing the computer executable instructions, during a call with a user of another electronic device by using the electronic device, obtain at least one keyword from a content of the call with the user of the another electronic device, control the display to display the at least one keyword during the call, and provide a search result for a keyword selected by a user from among the displayed at least one keyword.
  • The processor is further configured to control the display to display the at least one keyword on a call screen with the user of the another electronic device.
  • The processor is further configured to control the display to display at least one circular user interface (UI) element including each of the at least one keyword.
  • The processor is further configured to determine a level of importance of the at least one keyword according to the number of times the keyword is mentioned during a call with the user of the another electronic device and control the display to display a size of the at least one circular UI element differently according to the determined level of importance of the at least one keyword.
  • The processor is further configured to, based on the obtained at least one keyword being plural, determine an arrangement interval of the plurality of keywords according to an interval mentioned in a call with the user of the another electronic device and display the plurality of keywords according to the determined arrangement interval.
  • The processor may, based on the displayed at least one keyword being plural, in response to receiving a user input to sequentially select two or more keywords from among the plurality of keywords, determine the selected two or more keywords as one set, and may provide a search result about the determined set.
  • The processor may control the display to display a first UI component corresponding to a search function for the determined set and a second UI component corresponding to a memo function for the determined set, and, based on the first UI component being selected, provide a search result for the determined set.
  • The processor may, based on the second UI component being selected, store the determined set in a memo application.
  • The processor may obtain at least one keyword from a content of a call with the user of the another electronic device using a learned artificial intelligence (AI) model.
  • The processor may, based on a call with the user of the another electronic device being ended, determine a recommended application among the plurality of applications of the electronic device based on the at least one obtained keyword and control the display to display a UI for asking whether to perform a task associated with the determined recommended application and the obtained at least one keyword.
  • DESCRIPTION OF DRAWINGS
  • FIG. 1 is a diagram illustrating an electronic device to display keywords obtained from a call according to an embodiment;
  • FIGS. 2 and 3 are diagrams illustrating a configuration of an electronic device according to various embodiments;
  • FIGS. 4 and 5 are diagrams illustrating an embodiment of obtaining a keyword from a call content and displaying the obtained keyword on a call screen according to an embodiment;
  • FIG. 6 is a diagram illustrating an embodiment of selecting keywords displayed on a call screen according to an embodiment;
  • FIG. 7 is a diagram illustrating a user interface (UI) for providing a memo function or a search function with selected keywords according to an embodiment;
  • FIG. 8 is a diagram illustrating an embodiment of displaying a screen on a call screen according to searching based on the obtained keywords according to an embodiment;
  • FIGS. 9 and 10 are diagrams illustrating an embodiment of changing a keyword on a real time basis on a call screen according to a call content according to an embodiment;
  • FIG. 11 is a diagram illustrating a user interface (UI) for providing a memo function or a search function with the selected keywords according to another embodiment;
  • FIG. 12 is a diagram illustrating an embodiment of registering a memo with the keywords obtained from a call;
  • FIG. 13 is a diagram illustrating a UI for recommending a task with the keywords obtained from a call immediately after a call according to an embodiment;
  • FIG. 14 is a diagram illustrating an embodiment of providing personalization information with the keywords obtained from a call according to an embodiment;
  • FIG. 15 is a diagram illustrating an overall structure of a service providing various functions by obtaining a keyword from a call according to an embodiment;
  • FIG. 16 is a block diagram illustrating a processor for learning and using a recognition model according to an embodiment;
  • FIGS. 17 to 19 are block diagrams illustrating a learning unit and an analysis unit according to various embodiments;
  • FIG. 20 is a flowchart of a network system using an AI model according to an embodiment; and
  • FIG. 21 is a flowchart illustrating a control method of an electronic device according to an embodiment.
  • BEST MODE FOR CARRYING OUT THE INVENTION
  • Various embodiments will be described with reference to the attached drawings.
  • Hereinafter, embodiments of the disclosure will be described with reference to the accompanying drawings. However, the disclosure is not limited to the embodiments described herein and includes various modifications, equivalents, and/or alternatives. In the description of the drawings, like reference numerals may be used for similar components.
  • In this document, the expressions “have,” “may have,” “including,” or “may include” may be used to denote the presence of a feature (e.g., a numerical value, a function, an operation), and do not exclude the presence of additional features.
  • In this document, the expressions “A or B,” “at least one of A and / or B,” or “one or more of A and / or B,” and the like include all possible combinations of the listed items. For example, “A or B,” “at least one of A and B,” or “at least one of A or B” includes (1) at least one A, (2) at least one B, or (3) at least one A and at least one B together.
  • In addition, expressions “first”, “second”, or the like, used in the disclosure may indicate various components regardless of a sequence and/or importance of the components, may be used to distinguish one component from the other components, and do not limit the corresponding components. For example, a first user device and a second user device may indicate different user devices regardless of a sequence or importance thereof. For example, the first component may be named the second component and the second component may also be similarly named the first component, without departing from the scope of the disclosure.
  • The terms such as “module,” “unit,” “part”, and so on may be used to refer to an element that performs at least one function or operation, and such element may be implemented as hardware or software, or a combination of hardware and software. Further, except for when each of a plurality of “modules”, “units”, “parts”, and the like needs to be realized as individual hardware, the components may be integrated into at least one module or chip and realized in at least one processor.
  • It is to be understood that an element (e.g., a first element) that is “operatively or communicatively coupled with / to” another element (e.g., a second element) may be directly connected to the other element or may be connected via another element (e.g., a third element). Alternatively, when an element (e.g., a first element) is “directly connected” or “directly accessed” to another element (e.g., a second element), it may be understood that there is no other element (e.g., a third element) between the other elements.
  • Herein, the expression “configured to” may be used interchangeably with, for example, “suitable for,” “having the capacity to,” “designed to,” “adapted to,” “made to,” or “capable of.” The expression “configured to” does not necessarily mean “specifically designed to” in a hardware sense. Instead, under some circumstances, “a device configured to” may indicate that such a device can perform an action along with another device or part. For example, the expression “a processor configured to perform A, B, and C” may indicate an exclusive processor (e.g., an embedded processor) to perform the corresponding action, or a generic-purpose processor (e.g., a central processing unit (CPU) or application processor (AP)) that can perform the corresponding actions by executing one or more software programs stored in the memory device.
  • Terms used in the disclosure may be used to describe specific embodiments rather than restricting the scope of other embodiments. Singular forms are intended to include plural forms unless the context clearly indicates otherwise. Terms used in the disclosure including technical and scientific terms may have the same meanings as those that are generally understood by those skilled in the art to which the disclosure pertains. Terms defined in a general dictionary among terms used in the disclosure may be interpreted as having meanings that are the same as or similar to meanings within a context of the related art, and are not interpreted as ideal or excessively formal meanings unless clearly defined in the disclosure. In some cases, terms may not be interpreted to exclude embodiments of the disclosure even where they may be defined in the disclosure.
  • The electronic apparatus may include, for example, smartphones, tablet personal computers (PCs), mobile phones, video telephones, electronic book readers, desktop PCs, laptop PCs, netbook computers, workstations, servers, a personal digital assistant (PDA), a portable multimedia player (PMP), an MP3 player, a medical device, a camera, or a wearable device. The wearable device may include any one or any combination of an accessory type (e.g., a watch, a ring, a bracelet, a necklace, a pair of glasses, a contact lens, or a head-mounted device (HMD)); a fabric- or garment-embedded type (e.g., electronic clothing); a skin-attached type (e.g., a skin pad or a tattoo); or a bio-implantable circuit.
  • In some embodiments, the electronic device may be a home appliance. The home appliance may include at least one of, for example, a television, a digital video disk (DVD) player, an audio system, a refrigerator, air-conditioner, a cleaner, an oven, a microwave, a washing machine, an air purifier, a set top box, a home automation control panel, a security control panel, a media box (e.g., SAMSUNG HOMESYNC™, APPLE TV™, or GOOGLE TV™), a game console (e.g., XBOX™, PLAYSTATION™), an electronic dictionary, an electronic key, a camcorder, or an electronic frame.
  • In other embodiments, the electronic device may include at least one of a variety of medical devices (e.g., various portable medical measurement devices such as a blood glucose meter, a heart rate meter, a blood pressure meter, or a temperature measuring device, magnetic resonance angiography (MRA), magnetic resonance imaging (MRI), computed tomography (CT), or an ultrasonic wave device), a navigation system, a global navigation satellite system (GNSS), an event data recorder (EDR), a flight data recorder (FDR), an automotive infotainment device, marine electronic equipment (e.g., marine navigation devices, gyro compasses, etc.), avionics, a security device, a car head unit, industrial or domestic robots, a drone, an automated teller machine (ATM), a point of sale (POS) of a store, or an Internet of Things (IoT) device (e.g., light bulbs, sensors, sprinkler devices, fire alarms, thermostats, street lights, toasters, exercise equipment, hot water tanks, heaters, boilers, etc.).
  • According to some embodiments, the electronic device may include a part of furniture or building / structure, an electronic board, an electronic signature receiving device, a projector, any of various measuring devices (e.g., water, electricity, gas, or electromagnetic wave measuring devices, or the like), or the like. In various embodiments, the electronic device may be one or more of the various devices described above. In some embodiments, the electronic device may be a flexible electronic device. The electronic device according to an embodiment is not limited to the devices described above, and may include a new electronic device according to technology development.
  • One object of the disclosure is to provide an electronic device having an artificial intelligence agent system for providing keyword-based personalized recommendation information by obtaining a keyword from a call content and performing a related command, and a control method thereof.
  • According to the embodiments of the disclosure, the electronic device may perform, for example, a simple search and a memo function based on a keyword obtained during a call. Immediately after the call, an application predicted to be used by the user based on the keyword obtained from the call may be determined and executed in association with the obtained keyword. The keywords obtained in the call may be stored so that personalization information can be provided on the basis of the keywords at any time.
  • The disclosure will be further described with reference to the drawings.
  • FIG. 1 is a diagram illustrating an electronic device to obtain and provide a keyword from a call with another party according to an embodiment.
  • Referring to FIG. 1 , a user may perform a call with a user of another electronic device using an electronic device 100. The electronic device 100 may display the keywords obtained in the call on a call screen of the other party during a call. The displayed keywords are selectable user interface (UI) elements. When the user selects a keyword, various functions may be executed based on the selected keyword. For example, a search function may be performed based on the selected keyword, and a memo function may be performed based on the selected keyword.
  • After the call ends, the electronic device 100 may provide a UI asking the user whether to perform a task of applying (or inputting) the obtained keywords to an application (a memo application, a calendar application, a schedule application, a search application, etc.) that is expected to be executed by the user based on the obtained keywords.
  • The electronic device 100 may store keywords obtained during a call and may provide personalized information based on the keyword obtained from a call anytime, upon request of a user.
  • FIG. 2 is a block diagram illustrating a configuration of the electronic device 100 according to an embodiment.
  • Referring to FIG. 2 , the electronic device 100 includes a communicator 110, a memory 120, a processor 130, and a display 140. Some of the configurations may be omitted depending on the embodiment, and any suitable hardware/software configuration, although not shown, apparent to those of ordinary skill in the art may be further included in the electronic device 100.
  • The communicator 110 may be connected to a network via wireless communication or wired communication to communicate with an external device. Wireless communication may include cellular communication using any one or any combination of the following, for example, long-term evolution (LTE), LTE advanced (LTE-A), code division multiple access (CDMA), wideband CDMA (WCDMA), universal mobile telecommunications system (UMTS), wireless broadband (WiBro), global system for mobile communications (GSM), and the like. According to an embodiment, the wireless communication may include, for example, any one or any combination of Wi-Fi, Bluetooth, Bluetooth low energy (BLE), Zigbee, near field communication (NFC), magnetic secure transmission, radio frequency (RF), or body area network (BAN). Wired communication may include, for example, a universal serial bus (USB), a high definition multimedia interface (HDMI), a recommended standard 232 (RS-232), a power line communication, or a plain old telephone service (POTS). The network over which the wireless or wired communication is performed may include any one or any combination of a telecommunications network, for example, a computer network (for example, a local area network (LAN) or a wide area network (WAN)), the Internet, or a telephone network.
  • The communicator 110 may include a cellular module, a Wi-Fi module, a Bluetooth module, global navigation satellite system (GNSS) module (for example: a global positioning system (GPS) module, Glonass module, Beidou module, or Galileo module), a near field communication (NFC) module, radio frequency (RF) module, or the like.
  • The cellular module may provide at least one of, for example, and without limitation, a voice call, a video call, a text service, an Internet service, or the like, through a communication network. According to an embodiment, the cellular module may perform the discrimination and authentication of an electronic device within the communication network using a subscriber identity module (example: a subscriber identification module (SIM) card). According to an embodiment, the cellular module may perform at least some of the functions that the processor may provide. According to an example embodiment, the cellular module may include, for example, a communication processor (CP).
  • Each of the Wi-Fi module, the Bluetooth module, the GNSS module, or the NFC module may include various communication circuitry and a processor for processing data to be transmitted and received. According to an example embodiment, at least a portion (example: two or more) of a cellular module, a Wi-Fi module, a Bluetooth module, a GNSS module, an NFC module, or the like, may be included in one integrated chip (IC) or an IC package.
  • The RF module may, for example, transmit and receive a communication signal (example: an RF signal). The RF module may include, for example, and without limitation, a transceiver, a power amp module (PAM), a frequency filter, a low noise amplifier (LNA), an antenna, or the like. According to an embodiment, at least one of the cellular module, the Wi-Fi module, the Bluetooth module, the GNSS module, the NFC module, etc. may transmit and receive an RF signal through a separate RF module.
  • The memory 120 may include, for example, an internal memory or an external memory. The internal memory may be implemented as at least one of a volatile memory (for example, a dynamic random access memory (DRAM), a static random access memory (SRAM), or a synchronous dynamic random access memory (SDRAM)) or a nonvolatile memory (for example, a one time programmable ROM (OTPROM), a programmable ROM (PROM), an erasable and programmable ROM (EPROM), an electrically erasable and programmable ROM (EEPROM), a mask ROM, a flash ROM, a flash memory (for example, NAND flash or NOR flash), a hard disk drive (HDD), or a solid state drive (SSD)).
  • The external memory may include a flash drive, for example, a compact flash (CF), secure digital (SD), micro secure digital (micro-SD), mini secure digital (mini-SD), extreme digital (xD), multi-media card (MMC), or a memory stick.
  • The memory 120 may be accessed by the processor 130, and reading, writing, modifying, and updating of data may be performed by the processor 130.
  • The term memory may include a memory separately provided from the processor 130, and at least one of a read-only memory (ROM, not shown) and a random access memory (RAM, not shown).
  • The memory 120 may store a trained AI model, learning data, and the like.
  • The memory 120 may store various applications such as a memo application, a schedule application, a calendar application, a web browser application, a call application, and the like.
  • The display 140 is configured to output an image. The display 140 may, for example, be implemented as a liquid crystal display (LCD), a light-emitting diode (LED) display, an organic light-emitting diode (OLED) display (e.g., active-matrix organic light-emitting diode (AMOLED) or passive-matrix OLED (PMOLED)), a microelectromechanical systems (MEMS) display, or an electronic paper display. The display 140 may be implemented as a touch screen.
  • The processor 130 is configured to control overall operations of the electronic device 100. For example, the processor 130 may drive an operating system and an application to control a host of hardware or software elements connected to the processor 130 and may perform various data processing and operations. The processor 130 may be a central processing unit (CPU) or a graphics-processing unit (GPU), or both. The processor 130 may be implemented as at least one of a general processor, a digital signal processor, an application specific integrated circuit (ASIC), a system on chip (SoC), a microcomputer (MICOM), or the like.
  • The processor 130 may perform various operations using the trained AI model. For example, the processor 130 may obtain at least one keyword from a call with the other party performed using the electronic device 100, using the trained artificial intelligence model. According to an embodiment, the processor 130 may obtain proper nouns by inputting the call content to a named entity recognition (NER) model, and select keywords among the proper nouns. All of the obtained proper nouns may be selected as keywords, or only those satisfying specific criteria may be selected as keywords. The specific criteria may be, for example, whether the proper noun is mentioned at least a preset number of times, whether the interval between mentions of the proper noun is less than a preset time, or the like. Alternatively, a proper noun belonging to a category corresponding to a keyword may be selected as a keyword. For example, categories such as time, person, and place may be categories corresponding to keywords. Alternatively, the processor 130 may obtain a keyword by inputting the proper nouns to an artificial intelligence model trained to select the main keyword.
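  • As a minimal, illustrative sketch of such keyword selection (not part of the claimed embodiments): the code below assumes an upstream NER model has already produced (proper noun, category, timestamp) mentions; the thresholds and category names are assumptions chosen for illustration.

```python
from collections import defaultdict

# Categories assumed to correspond to keywords (time, person, place).
KEYWORD_CATEGORIES = {"TIME", "PERSON", "PLACE"}

def select_keywords(mentions, min_count=2, max_interval_s=60.0):
    """mentions: list of (proper_noun, category, timestamp_in_seconds)."""
    by_noun = defaultdict(list)
    categories = {}
    for noun, category, ts in mentions:
        by_noun[noun].append(ts)
        categories[noun] = category

    keywords = []
    for noun, times in by_noun.items():
        times.sort()
        # criterion 1: mentioned at least a preset number of times
        count_ok = len(times) >= min_count
        # criterion 2: two mentions closer together than a preset interval
        interval_ok = len(times) > 1 and min(
            b - a for a, b in zip(times, times[1:])
        ) < max_interval_s
        # criterion 3: belongs to a keyword category
        category_ok = categories[noun] in KEYWORD_CATEGORIES
        if count_ok or interval_ok or category_ok:
            keywords.append(noun)
    return keywords

mentions = [
    ("Gangnam Station", "PLACE", 12.0),
    ("7 pm", "TIME", 14.5),
    ("Gangnam Station", "PLACE", 40.0),
]
print(select_keywords(mentions))  # ['Gangnam Station', '7 pm']
```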
  • Various AI models specialized for an application field may be used. The artificial intelligence model is a model trained based on an artificial intelligence algorithm; for example, it may be a model based on a neural network. The trained AI model may include a plurality of weighted network nodes that may be designed to simulate the human brain structure on a computer and simulate neurons of a human neural network. The plurality of network nodes may each establish a connection relationship so that the nodes simulate the synaptic activity of neurons sending and receiving signals through a synapse. The trained AI model may include, for example, a neural network model or a deep learning model developed from a neural network model. In a deep learning model, a plurality of network nodes are located at different depths (or layers), and may transmit and receive data according to a convolution connection relationship. Examples of trained determination models include, but are not limited to, a deep neural network (DNN), a recurrent neural network (RNN), and a bidirectional recurrent deep neural network (BRDNN).
  • The electronic device 100 may use an artificial intelligence (AI)-exclusive program (or an artificial intelligence agent), such as a personal assistant program (e.g., Bixby™). The personal assistant program is a dedicated program for providing an AI-based service. For AI processing, a conventional general-purpose processor (for example, a CPU) may be used, or a single-purpose processor (e.g., a GPU, a field programmable gate array (FPGA), an ASIC, or the like) may be used. The electronic device 100 may include a plurality of processors, for example, a processor dedicated to artificial intelligence and a processor for other processing.
  • According to one embodiment, when a preset user input (e.g., a touch on an icon corresponding to a personal agent chatbot, a user voice including a preset word, etc.) is received or a button (e.g., a button for executing the AI agent) provided in the electronic device 100 is pressed, the AI agent may be operated (or executed). Alternatively, the AI agent may be in a standby state before the preset user input is detected or the button provided on the electronic device 100 is selected. The standby state is a state of sensing whether a predefined user voice (e.g., a user voice including a preset keyword such as "Bixby") is received, to trigger the start of the operation of the AI agent. When the preset user input is detected or the button provided in the electronic device 100 is selected while the AI agent is in the standby state, the electronic device 100 may operate the AI agent. When a user voice is received, the AI agent may perform a function of the electronic device 100 based on the voice, and may output an answer when the voice is a question.
  • An operation based on AI may be performed in the electronic device 100 or may be performed through an external server. In the latter case, the AI model is stored in the server; the electronic device 100 may provide the data to be input to the AI model to the server, and the server may input the data to the AI model and provide the obtained result data to the electronic device 100.
  • FIG. 3 is a block diagram illustrating a configuration of the electronic device 100 according to another embodiment. As illustrated in FIG. 3, the electronic device 100 may include a communicator 110, a memory 120, a processor 130, a display 140, an inputter 150, an audio outputter 160, and a microphone 170. Some of the configurations may be omitted depending on the embodiment, and any suitable hardware/software configurations apparent to those of ordinary skill in the art, although not shown, may be further included in the electronic device 100. The communicator 110, the memory 120, the processor 130, and the display 140 are described with reference to FIG. 2, and thus the description thereof will be omitted.
  • The microphone 170 is configured to receive a user voice or other sound and convert it into a digital signal. The processor 130 may obtain at least one keyword from a call voice input through the microphone 170. The microphone 170 may be provided inside the electronic device 100; however, this is only one embodiment, and the microphone may be provided outside the electronic device 100 and electrically connected to it.
  • The inputter 150 may receive a user input and transfer the user input to the processor 130. The inputter 150 may include a touch sensor, a (digital) pen sensor, a pressure sensor, a key, or a microphone. The touch sensor may use, for example, at least one of an electrostatic type, a pressure sensitive type, an infrared type, and an ultrasonic type. The (digital) pen sensor may, for example, be a part of a touch panel or may include a separate sheet for recognition. The key may include, for example, a physical button, an optical key, or a keypad.
  • The display 140 and a touch sensor of the inputter 150 may form a mutual layer structure and may be implemented with a touch screen.
  • The audio outputter 160 may output an audio signal. For example, the audio outputter 160 may output the calling voice of the other party received through the communicator 110. The audio outputter 160 may output audio data stored in the memory 120. For example, the audio outputter 160 may output various notification sounds and may output the voice of an AI assistant. The audio outputter 160 may include a receiver, a speaker, a buzzer, etc.
  • The memory 120 may store computer-executable instructions that, when executed by the processor 130, cause the control method of the electronic device 100 described in this disclosure to be performed.
  • For example, the processor 130 may, by executing a computer-executable instruction, obtain at least one keyword from a call with a user of another electronic device conducted through the communicator 110, and may control the display 140 so that the obtained keyword is displayed during the call.
  • The processor 130 may collect the voice input through the microphone and the voice of the other party received from the other electronic device through the communicator 110, convert the collected voice into text, and obtain a keyword from the converted text.
  • The processor 130 may obtain at least one keyword from a call with a user of another electronic device using the trained AI model.
  • According to another embodiment, the processor 130 may obtain at least one keyword from a call with a user of another electronic device based on a rule. For example, the processor 130 may obtain at least one keyword based on the frequency with which a word is mentioned in the call, the intensity of the sound of the spoken word, and so on.
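  • The rule-based alternative might look like the following hedged sketch; the scoring weights and the (word, volume) input shape are assumptions, not taken from the disclosure.

```python
# Score each word by how often it is mentioned and how loudly it is spoken;
# the highest-scoring words become keyword candidates.
def rule_based_keywords(words, top_k=8, freq_weight=1.0, volume_weight=0.5):
    """words: list of (word, normalized_volume) pairs from the transcribed call."""
    scores = {}
    for word, volume in words:
        scores[word] = scores.get(word, 0.0) + freq_weight + volume_weight * volume
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

print(rule_based_keywords([("Gangnam", 0.9), ("tomorrow", 0.4), ("Gangnam", 0.6)]))
# ['Gangnam', 'tomorrow']
```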
  • An embodiment of obtaining a keyword from a call will be described with reference to FIGS. 4 and 5 .
  • FIG. 4 illustrates the content of a call between a user 10 of the electronic device 100 and a user 20 of another electronic device.
  • The processor 130 may obtain at least one keyword (underlined) from the content of the call. For example, the processor 130 may obtain nouns such as "tomorrow," "7 pm," "brother Dongnam," "Gangnam Station," "delicious restaurant," and the like, as keywords.
  • Referring to FIG. 5, the processor 130 may control the display 140 to display the at least one obtained keyword. In this case, the processor 130 may control the display 140 to display the at least one obtained keyword on a call screen 500 with the user 20 of the other electronic device. The call screen 500 is a screen displayed on the display 140 during a call with the user 20 of the other electronic device; it may include information such as the name and contact information of the user 20 of the other electronic device, and may include button user interfaces (UIs) corresponding to functions such as speakerphone, call disconnection, mute, and the like.
  • According to one embodiment, the processor 130 may control the display 140 to display at least one of UI elements 51 to 58, each including at least one keyword obtained from the call. The UI elements 51 to 58 may be circular, and more specifically, may have a bubble shape. However, the shape of a UI element including a keyword is not particularly limited. For example, the UI element including the keyword may have a rectangular shape.
  • The processor 130 may display the sizes of the UI elements 51 to 58 differently according to the importance of the keyword each includes. A UI element including a keyword of high importance may be displayed larger than a UI element including a keyword of low importance. For example, referring to FIG. 5, the UI elements 51, 54, 56, and 58, which include keywords having relatively high importance, may be displayed larger than the UI elements 52, 53, 55, and 57, which include keywords having relatively low importance.
  • According to an embodiment, the processor 130 may obtain information about an importance of keywords by inputting the keywords to the trained AI model.
  • According to another embodiment, the processor 130 may determine the level of importance of a keyword according to a weight set differently for each category. For example, if weights (the numbers in parentheses) are set such as time category (5), person category (4), and place category (2), a keyword belonging to the time category has a higher importance level than a keyword belonging to the place category.
  • According to still another embodiment, the processor 130 may determine the level of importance according to the number of times a keyword is mentioned in the call. That is, the more often the keyword is mentioned, the higher the level of importance.
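  • The two embodiments above can be combined; the sketch below, with assumed weight values and size range, derives an importance score from a per-category weight and the mention count, and maps it to a bubble size for the UI elements.

```python
# Assumed per-category weights, following the example above (time 5, person 4, place 2).
CATEGORY_WEIGHTS = {"time": 5, "person": 4, "place": 2}

def importance(category, mention_count):
    # Higher category weight and more mentions -> higher importance.
    return CATEGORY_WEIGHTS.get(category, 1) * mention_count

def bubble_radius_px(score, min_r=24, max_r=64, max_score=20):
    # Map the score onto an assumed radius range, clamping very large scores.
    clamped = min(score, max_score)
    return min_r + (max_r - min_r) * clamped / max_score

print(bubble_radius_px(importance("time", 3)))   # 54.0
print(bubble_radius_px(importance("place", 1)))  # 28.0
```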
  • According to one embodiment, the processor 130 may control the display 140 to display only a predetermined number of keywords on the call screen 500. In this example, the processor 130 may control the display 140 to display only some keywords having high importance.
  • The keywords displayed on the call screen 500 may disappear from the call screen 500 according to user manipulation. If a specific keyword disappears, another new keyword may be displayed on the call screen 500. For example, a new keyword having the next level of importance may be displayed on the call screen 500.
  • For example, if there is a user input to remove any one of the UI elements 51 to 58 including a keyword, the processor 130 may remove the corresponding UI element from the call screen 500 and may display a UI element including the keyword having the next level of importance on the call screen 500.
  • The user input to remove a UI element may be, for example, a touch motion of the user to move the UI element out of the call screen 500. In another example, the user input may be a double touch on the UI element. Various other user inputs are possible. According to an embodiment, when the UI elements 51 to 58 have a bubble shape, a graphic effect of a bubble bursting may be provided when a UI element is removed.
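  • A minimal sketch, under assumed data structures, of the removal-and-promotion behavior described above: the N most important keywords stay visible, and removing a bubble promotes the next-most-important pending keyword.

```python
import heapq

class KeywordBoard:
    def __init__(self, scored_keywords, visible_count=8):
        # heap of (-importance, keyword) so the most important pops first
        self._pending = [(-score, kw) for kw, score in scored_keywords.items()]
        heapq.heapify(self._pending)
        n = min(visible_count, len(self._pending))
        self.visible = [heapq.heappop(self._pending)[1] for _ in range(n)]

    def remove(self, keyword):
        self.visible.remove(keyword)       # user dismissed this bubble
        if self._pending:                  # promote the next keyword, if any
            self.visible.append(heapq.heappop(self._pending)[1])

board = KeywordBoard({"Gangnam Station": 9, "7 pm": 7, "tomorrow": 5}, visible_count=2)
board.remove("7 pm")
print(board.visible)  # ['Gangnam Station', 'tomorrow']
```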
  • The processor 130 may arrange keywords according to the level of association between them, so that the association between keywords may be more intuitively grasped. According to one embodiment, the processor 130 may determine an arrangement interval for a plurality of keywords according to the interval at which they are mentioned in the call with the user 20 of the other electronic device, and may control the display 140 to display the plurality of keywords with the determined arrangement interval.
  • According to an embodiment, keywords mentioned together may be displayed adjacent to each other. For example, if the call includes the sentence "how about meeting at Gangnam Station at 7 o′clock, tomorrow?", the UI element 55 including the keyword "tomorrow", the UI element 51 including the keyword "7 o′clock", and the UI element 56 including the keyword "Gangnam Station" may be displayed at positions adjacent to each other.
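  • One plausible way to realize the arrangement interval, sketched below with assumed pixel constants: the on-screen distance between two keyword bubbles grows with the time gap between their mentions, clamped to a sensible range.

```python
def layout_distance_px(ts_a, ts_b, px_per_second=2.0, min_px=40.0, max_px=240.0):
    """Distance between two bubbles from their mention timestamps (seconds)."""
    gap_s = abs(ts_a - ts_b)
    return max(min_px, min(max_px, px_per_second * gap_s))

# "tomorrow" (t=10 s) and "7 o'clock" (t=12 s) were said together -> close bubbles
print(layout_distance_px(10.0, 12.0))   # 40.0 (clamped to the minimum)
print(layout_distance_px(10.0, 200.0))  # 240.0 (clamped to the maximum)
```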
  • The user 10 may view the keywords displayed on the call screen 500 during a call, and may intuitively grasp which subject the dialogue is about. The keywords obtained in the call may be used in various functions.
  • According to one embodiment, the processor 130 may apply the keyword selected by the user, among at least one keyword displayed on the display 140, to the application of the electronic device 100, so as to perform a function related to the keyword in the corresponding application.
  • In one example, the processor 130 may provide a search result for a keyword selected by a user among at least one keyword displayed on the display 140. In another example, the processor 130 may register a keyword selected by the user, among at least one keyword displayed on the display 140, to a memo application. In addition, various functions may be provided.
  • If the user selects one keyword, a function may be executed using that one keyword, and if the user selects two or more keywords, a function may be executed using the two or more keywords.
  • Various methods may be used to select two or more keywords. For example, two or more keywords may be selected by various user manipulations, such as sequentially touching them, simultaneously touching them (multi-touch), or dragging to connect them.
  • FIG. 6 is a diagram illustrating an embodiment of selecting two or more keywords.
  • Referring to FIG. 6, a user may select the keywords "Gangnam Station" and "delicious restaurant" by performing a dragging manipulation starting from the UI element 56 including "Gangnam Station" and ending at the UI element 58 including the keyword "delicious restaurant."
  • When a user input to select two or more keywords is received, the processor 130 may determine the selected two or more keywords as one set. Referring to FIG. 6, the processor 130 may determine the keywords "Gangnam Station" and "delicious restaurant" as one set.
  • The electronic device 100 may perform a search or write a memo with the determined set. This embodiment will be described with reference to FIG. 7.
  • FIG. 7 is a diagram illustrating one embodiment of the disclosure for performing a search or writing a memo with a keyword selected by a user among the keywords obtained from the call.
  • When a user input for selecting two or more keywords is received as illustrated in FIG. 6, the processor 130 may determine the two or more keywords as one set and may display the two or more keywords selected by the user in one region 710 of the call screen 500, as in FIG. 7.
  • The processor 130 may control the display 140 to display a first UI element 720 corresponding to a memo function for the determined set and a second UI element 730 corresponding to a search function for the determined set.
  • When the first UI element 720 is selected, the processor 130 may store a keyword set in a memo application.
  • When the second UI element 730 is selected, the processor 130 may provide a search result for the set of keywords. For example, as shown in FIG. 8, a search result 810 corresponding to "Gangnam Station" and "delicious restaurant" may be provided. The search result 810 may also be provided on the call screen 500.
  • After performing a search, writing a memo, or the like through the call screen 500, the user 10 may continue the call with the user 20 of the other electronic device as illustrated in FIG. 9. The processor 130 may obtain keywords in real time from the call. Accordingly, the keywords shown on the call screen 500 during the call may change in real time. FIG. 10 illustrates the call screen 500 after the call has proceeded as shown in FIG. 9. The keywords displayed on the call screen 500 of FIG. 10 may be identified as being different from the keywords displayed on the call screen 500 of FIG. 5.
  • As illustrated in FIG. 10, when the user performs a dragging operation to select a plurality of keywords, the processor 130 may determine the three keywords selected by the dragging operation as one set and display the three keywords selected by the user, as illustrated in FIG. 11, in one region 710 of the call screen 500.
  • In the region 710, keywords may be arranged in a natural format according to language structure. The processor 130 may arrange keywords according to category (time, place, activity). For example, the processor 130 may place a keyword belonging to the place category ("Tasting Room") after the keyword belonging to the time category ("7 o′clock"), and may place a keyword belonging to the activity category ("booking") after the keyword belonging to the place category.
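  • A sketch of such category-based ordering, for illustration only; the category labels are assumptions.

```python
# Assumed natural ordering of categories: time, then place, then activity.
CATEGORY_ORDER = {"time": 0, "place": 1, "activity": 2}

def arrange(keywords):
    """keywords: list of (text, category) pairs selected by the user."""
    ordered = sorted(keywords, key=lambda kc: CATEGORY_ORDER.get(kc[1], 99))
    return " ".join(text for text, _ in ordered)

print(arrange([("booking", "activity"), ("7 o'clock", "time"), ("Tasting Room", "place")]))
# 7 o'clock Tasting Room booking
```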
  • The processor 130 may control the display 140 to display a first UI element 720 corresponding to a memo function for the set of three keywords and a second UI element 730 corresponding to a search function for the set of three keywords.
  • When the first UI element 720 is selected, the processor 130 may store the keyword set in the memo application. For example, as shown in FIG. 12, the processor 130 may input and store the keyword set "booking of a Tasting Room at 7 o′clock" in a list of keywords in the memo application.
  • When the call with the user of the other electronic device ends, the processor 130 may determine a recommended application among the plurality of applications of the electronic device 100 based on the at least one keyword obtained in the call, and may control the display 140 to display a UI asking whether to perform a task related to the determined recommended application and the obtained at least one keyword. When the UI is selected, the processor 130 may perform the task.
  • The processor 130 may determine an application corresponding to a keyword obtained in the call (or to the category of the obtained keyword) as the recommended application, based on a mapping list between keywords (or keyword categories) and applications. The mapping list may be stored in the memory 120.
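  • The mapping list might be sketched as follows; the category names and application identifiers are assumptions for illustration.

```python
# Assumed category -> application mapping, mirroring the examples that follow.
APP_MAPPING = {
    "schedule": "schedule application",
    "invoice_amount": "remittance application",
    "contact_transfer": "contact list application",
    "message": "message application",
}

def recommend_app(keyword_categories):
    """Return the first mapped application for the call's keyword categories."""
    for category in keyword_categories:
        if category in APP_MAPPING:
            return APP_MAPPING[category]
    return None

print(recommend_app(["person", "schedule"]))  # schedule application
```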
  • For example, if the recommended application determined based on the keyword is a schedule application, the processor 130 may display a UI asking whether to perform a task of registering a schedule-related keyword, among the obtained at least one keyword, in the schedule application. This embodiment will be described with reference to FIG. 13.
  • Referring to FIG. 13, when the call with the user 20 of the other electronic device is completed, the processor 130 may control the display 140 to display, immediately after the call is completed, a UI 1310 asking whether to perform a task of registering the keyword obtained in the call to the schedule application. When the UI 1310 is selected, the processor 130 may perform the task of registering the schedule in the schedule application.
  • In another example, when the recommended application determined based on the at least one keyword obtained in the call is a remittance application, the processor 130 may control the display 140 to display a UI (e.g., "Do you want to remit 15000 Won to Kim, Young-hee?") asking whether to perform a task of remitting the invoice amount through the remittance application, based on a keyword associated with the invoice amount among the obtained at least one keyword.
  • In another example, when the recommended application determined based on the at least one keyword obtained in the call is a contact list application, the processor 130 may control the display 140 to display a UI (e.g., "Do you want to transmit the contact number of Kim, Cheol-soo to Kim, Yeong-hee?") asking whether to perform a task of transmitting information about a first contact stored in the contact list application to a second contact stored in the contact list application, based on a keyword associated with transmission of a contact among the at least one keyword.
  • In another example, when the recommended application determined based on the at least one keyword obtained in the call is a message application, the processor 130 may control the display 140 to display a UI asking whether to perform a task of transmitting a text message composed of keywords, based on a keyword related to message transmission among the at least one keyword.
  • According to still another embodiment, personalization information may be provided based on at least one keyword obtained during a call. An example of personalized information will be described with reference to FIG. 14 .
  • FIG. 14 is a diagram illustrating a personalization information provision screen according to an embodiment.
  • The electronic device 100 may provide personalization information based on user profile information. The user profile information may include, for example, a user name, age, gender, occupation, schedule, a route of movement of a user (a route through which the electronic device 100 moves), and at least one keyword obtained in a call.
  • The electronic device 100 may provide information based on the user profile information. The electronic device 100 may provide information based on a keyword obtained in a call. For example, when the keywords "dinner", "7 o′clock", and "Gangnam Station" are obtained in a call with a user of another electronic device as shown in FIG. 14, information about "recommended amusements around Gangnam Station at 7 pm" may be provided based on the keywords. According to an embodiment, information which better suits the context of use, based on the call content, may be provided.
  • FIG. 15 is a diagram illustrating an overall structure of a service providing various functions by obtaining a keyword from a call according to an embodiment.
  • Referring to FIG. 15, when a user performs a call with a user of another electronic device through the electronic device 100, the call voice (the voices of the user and of the other party) is input, pre-processing is performed on the incoming call voice, and feature vectors may be obtained. By inputting the feature vectors to a trained artificial intelligence model, for example, a named entity recognition (NER) model, proper nouns may be extracted from the call voice.
  • The electronic device 100 may select keywords among the extracted proper nouns. At this time, the importance of the keyword may be considered. The electronic device 100 may provide the selected keywords to the user, perform a search with the keywords selected by the user, or perform a memo function. The electronic device 100 may also use the selected keywords to make a deep learning-based behavior recommendation (e.g., schedule registration, remittance, contact transmission, short message service (SMS) transmission, or the like).
  • The functions performed in the embodiments described above may be performed using an artificial intelligence model. For example, obtaining a keyword from a call content, determination of importance for the keyword, the arrangement of a keyword, determination of a recommended application based on the keyword, or the like, may be performed using the artificial intelligence model.
  • FIG. 16 is a block diagram illustrating a processor for learning and using a recognition model according to an embodiment.
  • Referring to FIG. 16 , a processor 1600 may include at least one of a learning unit 1610 and an analysis unit 1620.
  • The learning unit 1610 may generate an artificial intelligence model having an identification criterion using learning data.
  • For example, the learning unit 1610 may generate and train an artificial intelligence model that obtains a keyword from voice data, using the voice data as learning data.
  • The analysis unit 1620 may input voice data to the artificial intelligence model to obtain a keyword from the voice data. The artificial intelligence model may include, by function, a speech-to-text (STT) module, a named entity recognition (NER) module, a keyword selection module, and the like.
  • The STT module converts the input speech to text. The NER module, receiving the converted text, may extract proper nouns from the text. The keyword selection module may identify the importance of the extracted proper nouns and may select keywords. The importance may be determined according to a weight of the proper noun, a weight of the category to which the proper noun belongs, the frequency of use of the proper noun, a usage interval of the proper noun, the volume of the speech of the user uttering the proper noun, or the like.
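  • The three modules can be sketched structurally as below; the STT and NER stages are stubbed with toy implementations that trained models would replace, and the scoring formula is an assumption.

```python
def stt(audio_frames):
    # A real speech-to-text model would go here; this stub returns fixed text.
    return "see you at Gangnam Station at 7 pm tomorrow"

def ner(text):
    # A real NER model would go here; this stub tags two spans by hand.
    return [("Gangnam Station", "place"), ("7 pm", "time")]

def select_keywords(tagged, category_weight, counts, volumes, top_k=8):
    def score(item):
        noun, category = item
        # importance grows with category weight, mention count, and speech volume
        return (category_weight.get(category, 1)
                * counts.get(noun, 1)
                * volumes.get(noun, 1.0))
    return [noun for noun, _ in sorted(tagged, key=score, reverse=True)[:top_k]]

tagged = ner(stt(audio_frames=None))
print(select_keywords(tagged, {"time": 5, "place": 2}, {"7 pm": 1}, {}))
# ['7 pm', 'Gangnam Station']
```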
  • At least a portion of the learning unit 1610 and at least a portion of the analysis unit 1620 may be implemented as software modules, or manufactured in the form of at least one hardware chip, and mounted in the electronic device 100. For example, either one or both of the learning unit 1610 and the analysis unit 1620 may be manufactured in the form of an exclusive-use hardware chip for artificial intelligence (AI), or as part of a conventional general-purpose processor (e.g., a CPU or an application processor) or a graphics-only processor (e.g., a GPU), and may be mounted on the electronic device 100 described above or on a server providing an analysis result to the electronic device 100.
  • Herein, the exclusive-use hardware chip for artificial intelligence is a dedicated processor for probability calculation; it has higher parallel processing performance than an existing general-purpose processor, so it can quickly process computation tasks in artificial intelligence such as machine learning. When the learning unit 1610 and the analysis unit 1620 are implemented as a software module (or a program module including instructions), the software module may be stored in a non-transitory computer-readable medium. In this case, the software module may be provided by an operating system (OS) or by a predetermined application. Alternatively, some of the software modules may be provided by the OS, and some may be provided by a predetermined application.
  • The learning unit 1610 and the analysis unit 1620 may be mounted on one server, or may be mounted on separate servers, respectively. For example, the processor 1600 of FIG. 16 may be the processor 130 of FIG. 2 or FIG. 3. As another example, one of the learning unit 1610 and the analysis unit 1620 may be included in the electronic device 100, and the other may be included in an external server. In addition, the learning unit 1610 may provide the model information it constructs to the analysis unit 1620 via wired or wireless communication, and the data input to the analysis unit 1620 may be provided to the learning unit 1610 as additional learning data.
  • FIG. 17 is a block diagram illustrating the learning unit 1610 according to an embodiment.
  • Referring to FIG. 17 , the learning unit 1610 according to some embodiments may implement a learning data acquisition unit 1610-1 and a model learning unit 1610-4. The learning unit 1610 may further selectively implement at least one of a learning data preprocessor 1610-2, a learning data selection unit 1610-3, and a model evaluation unit 1610-5.
  • The learning data acquisition unit 1610-1 may obtain learning data to train a model for acquiring a keyword from voice data.
  • The learning data may be data collected or tested by the learning unit 1610 or the manufacturer of the learning unit 1610. For example, learning data may include voice or text.
  • The model learning unit 1610-4 may use the learning data so that the model has a criterion for understanding, recognizing, identifying, or inferring input data. The model learning unit 1610-4 may train the model through supervised learning, using at least a part of the learning data as a criterion for identification. Alternatively, the model learning unit 1610-4 may train the artificial intelligence model through unsupervised learning, in which the model learns by itself using the learning data without specific guidance and detects a criterion for identifying a situation. Also, the model learning unit 1610-4 may train the artificial intelligence model through reinforcement learning, using, for example, feedback on whether the result of providing a response according to learning is correct. The model learning unit 1610-4 may also train the artificial intelligence model using, for example, a learning algorithm including error back-propagation or gradient descent.
  • When there are a plurality of previously constructed models, the model learning unit 1610-4 may determine a model having a high relevance between the input learning data and basic learning data as the recognition model to be trained. In this case, the basic learning data may be pre-classified according to the type of data, and the model may be pre-constructed for each type of data. For example, the basic learning data may be pre-classified based on various criteria, such as the region in which the learning data was generated, the time at which the learning data was generated, the size of the learning data, the genre of the learning data, the generator of the learning data, and the like.
  • When the model is trained, the model learning unit 1610-4 may store the trained model. In this case, the model learning unit 1610-4 may store the trained model in the memory 120 of the electronic device 100. Alternatively, the model learning unit 1610-4 may store the trained model in a memory of a server connected to the electronic device 100 via a wired or wireless network.
  • The learning unit 1610 may further implement a learning data preprocessor 1610-2 and a learning data selection unit 1610-3 to improve the processing capability of the model or to save resources or time required for generation of the model.
  • The learning data preprocessor 1610-2 can preprocess acquired data so that the data obtained in the learning for identifying a situation may be used. That is, the learning data preprocessor 1610-2 can process the acquired data into a predetermined format so that the model learning unit 1610-4 may use the acquired data for learning to identify a situation.
  • The learning data selection unit 1610-3 may select data required for learning from the data acquired by the learning data acquisition unit 1610-1 or the data preprocessed by the learning data preprocessor 1610-2. The selected learning data may be provided to the model learning unit 1610-4. The learning data selection unit 1610-3 may select learning data necessary for learning from the acquired or preprocessed data in accordance with a predetermined selection criterion, or according to a selection criterion predetermined by the learning of the model learning unit 1610-4.
  • The learning unit 1610 may further implement the model evaluation unit 1610-5 to improve a processing capability of the model.
  • The model evaluation unit 1610-5 may input evaluation data to the model, and if the analysis result output for the evaluation data does not satisfy a predetermined criterion, the model evaluation unit 1610-5 may make the model learning unit 1610-4 train the model again. In this case, the evaluation data may be predefined data for evaluating the model.
  • For example, when the number or ratio of evaluation data items for which the analysis result is not correct, among the analysis results of the trained model for the evaluation data, exceeds a preset threshold, the model evaluation unit 1610-5 may evaluate that the model does not satisfy the predetermined criterion.
  • When there are a plurality of trained models, the model evaluation unit 1610-5 may evaluate whether each trained model satisfies the predetermined criterion, and determine a model which satisfies the predetermined criterion as the final model. Here, when there are a plurality of models that satisfy the predetermined criterion, the model evaluation unit 1610-5 may determine one model or a predetermined number of models, in order of higher evaluation score, as the final model(s).
  • FIG. 18 is a block diagram illustrating the analysis unit 1620 according to an embodiment. Referring to FIG. 18, the analysis unit 1620 according to some embodiments may include a data acquisition unit 1620-1 and an analysis result provision unit 1620-4. In addition, the analysis unit 1620 may selectively further implement at least one of a data preprocessor 1620-2, a data selection unit 1620-3, and a model update unit 1620-5.
  • The data acquisition unit 1620-1 may obtain data necessary for identifying a situation. The analysis result provision unit 1620-4 may apply the data obtained by the data acquisition unit 1620-1 to the trained model as an input value. The analysis result provision unit 1620-4 may apply data selected by the data preprocessor 1620-2 or the data selection unit 1620-3, described later, to the model to obtain the analysis result. The analysis result may be determined by the model.
  • In an embodiment, the analysis result provision unit 1620-4 may obtain at least one keyword by applying voice data obtained from the data acquisition unit 1620-1 to the AI model.
  • The analysis unit 1620 may further include the data preprocessor 1620-2 and the data selection unit 1620-3 in order to improve an analysis result of the model or save resources or time to provide the analysis result.
  • The data preprocessor 1620-2 may preprocess the acquired data so that the acquired data may be used to identify a situation. That is, the data preprocessor 1620-2 can process the obtained data into the pre-defined format so that the analysis result provision unit 1620-4 may use the acquired data.
  • The data selection unit 1620-3 may select data required for identifying a situation from the data acquired by the data acquisition unit 1620-1 or the data preprocessed by the data preprocessor 1620-2. The selected data may be provided to the analysis result provision unit 1620-4. The data selection unit 1620-3 may select some or all of the obtained or preprocessed data according to a predetermined selection criterion for identifying a situation, or according to a selection criterion predetermined by the learning of the model learning unit 1610-4.
  • The model update unit 1620-5 may control the updating of the model based on an evaluation of the analysis result provided by the analysis result provision unit 1620-4. For example, the model update unit 1620-5 may provide the analysis result provided by the analysis result provision unit 1620-4 to the model learning unit 1610-4, and request the model learning unit 1610-4 to further train or update the model.
  • FIG. 19 is a diagram illustrating an embodiment in which the learning unit 1610 and the analysis unit 1620 are implemented in different devices.
  • Referring to FIG. 19, the external server 200 may include the learning unit 1610, and the electronic device 100 may include the analysis unit 1620. The electronic device 100 and the server 200 may communicate with each other over a network.
  • The analysis result provision unit 1620-4 of the electronic device 100 may apply the data selected by the data selection unit 1620-3 to the model generated by the server 200 to obtain the analysis result. In this case, the analysis result provision unit 1620-4 of the electronic device 100 may receive the model generated by the server 200 from the server 200, and may use the received model to obtain at least one keyword from a call of the user of the electronic device 100.
  • FIG. 20 is a flowchart of a network system using an AI model according to various embodiments.
  • In FIG. 20, the network system using the AI model may include a first component 2010 and a second component 2020.
  • Here, the first component 2010 may be the electronic device 100 and the second component 2020 may be a server storing the AI model. Alternatively, the first component 2010 may be a general-purpose processor and the second component 2020 may be an AI-dedicated processor. Alternatively, the first component 2010 may be at least one application, and the second component 2020 may be an operating system (OS). That is, the second component 2020 may be more integrated or more dedicated than the first component 2010, and may have less delay, better performance, or more resources; it may thus be a component that can process the many operations required for generating, updating, or applying the model faster and more efficiently than the first component 2010.
  • An interface for transmitting/receiving data between the first component 2010 and the second component 2020 may be defined.
  • For example, an application program interface (API) that takes learning data (or an intermediate value or a transfer value) to be applied to the model as an argument may be defined. The API may be defined as a group of subroutines or functions that may be called by one protocol (for example, a protocol defined in the electronic device 100) for some processing of another protocol (e.g., a protocol defined in an external server of the electronic device 100). That is, an environment may be provided in which an operation of one protocol may be performed through another protocol via the API.
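  • A hypothetical example of such an interface between the two components, assuming a simple HTTP/JSON exchange; the endpoint URL and payload shape are not defined by the disclosure.

```python
import json
from urllib import request

def request_keywords(call_text, url="http://localhost:8080/keywords"):
    """Send transcribed call text to the keyword-extraction component."""
    body = json.dumps({"text": call_text}).encode("utf-8")
    req = request.Request(url, data=body,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:   # blocking call for simplicity
        return json.loads(resp.read())["keywords"]
```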
  • Referring to FIG. 20, the user of the first component 2010 may perform a call with a user of another device in operation S2001.
  • The first component 2010 may transmit the call content between the user and the other party to the second component 2020 in operation S2003. In this case, audio data including the call voice may be transmitted to the second component 2020, or the first component 2010 may convert the speech into text and transmit the converted text to the second component 2020.
  • The second component 2020 may obtain at least one keyword from the call using the trained artificial intelligence model in operation S2005. The keyword may be a proper noun having high importance among the proper nouns included in the call. The importance may be determined based on the frequency of the proper noun, the weight set for the category to which the proper noun belongs, or the like.
  • The second component 2020 may transmit at least one keyword obtained from the call to the first component 2010 in operation S2007.
  • The first component 2010 may provide the received at least one keyword in operation S2009.
  • The first component 2010 may provide the received keyword on the call screen and perform various functions using the keyword in response to the user selecting the keyword. As described above, a search function or a memo generation function may be provided.
  • The first component 2010 may determine a recommended application based on a keyword received from the second component 2020 and may execute the recommended application in association with the keyword.
  • FIG. 21 is a flowchart illustrating a method for controlling an electronic device according to an embodiment. The flowchart illustrated in FIG. 21 may be configured as operations processed in the electronic device 100 described herein. Accordingly, the contents described with respect to the electronic device 100 may be applied to the flowchart shown in FIG. 21, even where omitted below.
  • Referring to FIG. 21, at least one keyword is obtained from the content of a call with a user of another electronic device during the call, using the electronic device, in operation S2110.
  • The at least one obtained keyword is displayed during a call in operation S2120.
  • The obtained keywords may be displayed on the call screen. According to an embodiment, a keyword may automatically disappear from the call screen according to an algorithm that considers the frequency of mentions of the keyword and the time of its last mention.
  • According to still another embodiment, a keyword on the call screen may be moved or made to disappear by a user interaction (e.g., moved to a recycle bin using a long click).
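  • One possible form of the extinguishing algorithm, sketched with assumed constants: each mention grants the keyword additional time-to-live, and the keyword disappears once the time since its last mention exceeds that budget.

```python
import time

def keyword_alive(mention_count, last_mention_ts, now=None,
                  ttl_per_mention_s=30.0, max_ttl_s=180.0):
    """True while the keyword should remain on the call screen."""
    now = time.time() if now is None else now
    ttl = min(mention_count * ttl_per_mention_s, max_ttl_s)
    return (now - last_mention_ts) < ttl

print(keyword_alive(2, last_mention_ts=0.0, now=45.0))  # True (60 s budget)
print(keyword_alive(1, last_mention_ts=0.0, now=45.0))  # False (30 s budget)
```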
  • A search result for a keyword selected by the user, among the displayed at least one keyword, is provided in operation S2130. Alternatively, a keyword selected by the user among the displayed at least one keyword may be stored in the memo application.
  • The user may specify a plurality of keywords for searching or taking a memo. For example, when the initial keyword is touched, the keywords to be searched or added to a memo together may be touched in order; the electronic device 100 may identify that keyword selection is complete when the last keyword receives a long touch, and may display a screen asking whether to search or write a memo, determining a set from the initially selected keyword to the lastly selected keyword.
  • The electronic device may predict a scenario which the user may perform after the call, and display a UI asking the user whether to perform a task using the keyword. When the UI is selected, it is identified that the user agrees to perform the task, and the electronic device may automatically perform the task of inputting the keyword to the application by utilizing an API.
  • According to the embodiments described above, personalized recommendation information may be actively provided from the keywords extracted during a call, without intervention of the user, thereby significantly reducing the series of processes that a user would otherwise perform one by one during and after a call, and increasing efficiency and user satisfaction. Also, a service more suitable to the context of use may be provided based on the call content.
  • The various example embodiments described above may be implemented in software, hardware, or a combination of software and hardware. For hardware implementation, the example embodiments of the disclosure may be implemented using at least one of application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), processors, controllers, micro-controllers, microprocessors, or electric units for performing other functions. In some cases, example embodiments described herein may be implemented by the processor 130 of the electronic device 100. For software implementation, example embodiments of the disclosure, such as the procedures and functions described herein, may be implemented with separate software modules. Each of the above-described software modules may perform one or more of the functions and operations described herein.
  • Meanwhile, various embodiments of the disclosure may be implemented in software, including instructions stored on machine-readable storage media readable by a machine (e.g., a computer). A machine is a device capable of calling an instruction from a storage medium and operating according to the called instruction, and may include the electronic device 100 of the disclosed embodiments.
  • When the instruction is executed by a processor, the processor may perform the function corresponding to the instruction, either directly or under the control of the processor using other components. The instruction may contain code made by a compiler or code executable by an interpreter. For example, by executing an instruction stored in a storage medium by a processor, the control method of the electronic device 100 may be executed. For example, by executing an instruction stored in a storage medium by a processor of a device (or an electronic device), a control method may be performed that includes obtaining at least one keyword from the content of a call with a user of another electronic device during the call using the electronic device, displaying the at least one keyword during the call, and providing a search result for a keyword selected by the user among the at least one displayed keyword.
  • Herein, the “non-transitory” storage medium may not include a signal but is tangible, and does not distinguish the case in which a data is semi-permanently stored in a storage medium from the case in which a data is temporarily stored in a storage medium.
  • According to an embodiment of the disclosure, the method according to the above-described embodiments may be included in a computer program product. The computer program product may be traded as a product between a seller and a consumer. The computer program product may be distributed in the form of a machine-readable storage medium (e.g., compact disc read only memory (CD-ROM)), through an application store (e.g., Play Store™, APP Store™), or directly online. In the case of online distribution, at least a portion of the computer program product may be at least temporarily stored, or temporarily generated, in a server of the manufacturer, a server of the application store, or a machine-readable storage medium such as the memory of a relay server.
  • According to various embodiments of the disclosure, each of the elements mentioned above (e.g., a module or a program) may include a single entity or a plurality of entities. According to the embodiments, at least one element or operation from among the corresponding elements mentioned above may be omitted, or at least one other element or operation may be added. Alternatively or additionally, a plurality of components (e.g., modules or programs) may be combined to form a single entity. In this case, the integrated entity may perform the functions of each of the plurality of elements in the same or a similar manner as the corresponding element did before integration. Operations executed by a module, a program module, or other elements according to the various embodiments may be executed consecutively, in parallel, repeatedly, or heuristically; at least some operations may be executed in a different order or omitted, or another operation may be added.
  • While the disclosure has been shown and described with reference to various embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the disclosure as defined by the appended claims and their equivalents.

Claims (20)

What is claimed is:
1. A control method of an electronic device, the method comprising:
during a call with a user of another electronic device by using the electronic device, obtaining at least one keyword from a content of the call with the user of the other electronic device;
displaying the at least one keyword during the call;
based on a keyword selected from the at least one keyword, displaying a first UI element for storing the selected keyword and a second UI element for searching the selected keyword; and
based on the first UI element being selected, storing the selected keyword in a memo application.
2. The method of claim 1, further comprising:
based on the second UI element being selected, providing a search result for the selected keyword.
3. The method of claim 1, wherein the displaying the at least one keyword comprises displaying the at least one keyword on a call screen with the user of the other electronic device.
4. The method of claim 1, wherein the displaying the at least one keyword comprises displaying at least one circular user interface (UI) element including each of the at least one keyword.
5. The method of claim 4, further comprising:
determining a level of importance of the at least one keyword according to a number of times the at least one keyword is mentioned during the call.
6. The method of claim 5, wherein the displaying the at least one keyword comprises displaying a size of the at least one circular UI element differently according to the determined level of importance of the at least one keyword.
7. The method of claim 1, wherein the displaying the at least one keyword comprises, based on the obtained at least one keyword being plural, determining an arrangement interval of the plurality of keywords according to an interval mentioned in a call with the user of the other electronic device and displaying the plurality of keywords according to the determined arrangement interval.
8. The method of claim 1, further comprising:
based on the displayed at least one keyword being plural, in response to receiving a user input to sequentially select two or more keywords from among the plurality of keywords, determining the selected two or more keywords as one set, and
providing a search result about the determined set.
9. The method of claim 1, wherein the obtaining comprises obtaining at least one keyword from a content of a call with the user of the other electronic device using a learned artificial intelligence (AI) model.
10. The method of claim 1, further comprising:
based on a call with the user of the other electronic device being ended, determining a recommended application among a plurality of applications of the electronic device based on the obtained at least one keyword; and
displaying a UI for asking whether to perform a task associated with the determined recommended application and the obtained at least one keyword.
11. An electronic device comprising:
a communicator;
a display;
a memory configured to store computer executable instructions; and
a processor, by executing the computer executable instructions, configured to:
during a call with a user of another electronic device by using the electronic device,
obtain at least one keyword from a content of the call with the user of the other electronic device,
display, on the display, the at least one keyword during the call,
based on a keyword selected from the at least one keyword, display, on the display, a first UI element for storing the selected keyword and a second UI element for searching the selected keyword, and
based on the first UI element being selected, store the selected keyword in a memo application.
12. The electronic device of claim 11, wherein the processor is further configured to, based on the second UI element being selected, provide a search result for the selected keyword.
13. The electronic device of claim 11, wherein the processor is further configured to display, on the display, the at least one keyword on a call screen with the user of the other electronic device.
14. The electronic device of claim 11, wherein the processor is further configured to display, on the display, at least one circular user interface (UI) element including each of the at least one keyword.
15. The electronic device of claim 14, wherein the processor is further configured to determine a level of importance of the at least one keyword according to a number of times the at least one keyword is mentioned during the call.
16. The electronic device of claim 15, wherein the processor is further configured to display, on the display, a size of the at least one circular UI element differently according to the determined level of importance of the at least one keyword.
17. The electronic device of claim 11, wherein the processor is further configured to, based on the obtained at least one keyword being plural, determine an arrangement interval of the plurality of keywords according to an interval at which the plurality of keywords are mentioned in a call with the user of the other electronic device, and display, on the display, the plurality of keywords according to the determined arrangement interval.
18. The electronic device of claim 11, wherein the processor is further configured to:
based on the displayed at least one keyword being plural, in response to receiving a user input to sequentially select two or more keywords from among the plurality of keywords, determine the selected two or more keywords as one set, and
provide a search result about the determined set.
19. The electronic device of claim 11, wherein the processor is further configured to obtain at least one keyword from a content of a call with the user of the other electronic device using a learned artificial intelligence (AI) model.
20. The electronic device of claim 11, wherein the processor is further configured to:
based on a call with the user of the other electronic device being ended, determine a recommended application among a plurality of applications of the electronic device based on the obtained at least one keyword; and
display, on the display, a UI for asking whether to perform a task associated with the determined recommended application and the obtained at least one keyword.
US18/321,146 2018-09-06 2023-05-22 Electronic device and control method therefor Pending US20230290343A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US18/321,146 US20230290343A1 (en) 2018-09-06 2023-05-22 Electronic device and control method therefor

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
KR10-2018-0106303 2018-09-06
KR1020180106303A KR102608953B1 (en) 2018-09-06 2018-09-06 Electronic apparatus and controlling method thereof
PCT/KR2019/011135 WO2020050554A1 (en) 2018-09-06 2019-08-30 Electronic device and control method therefor
US202017255605A 2020-12-23 2020-12-23
US18/321,146 US20230290343A1 (en) 2018-09-06 2023-05-22 Electronic device and control method therefor

Related Parent Applications (2)

Application Number Title Priority Date Filing Date
PCT/KR2019/011135 Continuation WO2020050554A1 (en) 2018-09-06 2019-08-30 Electronic device and control method therefor
US17/255,605 Continuation US20210264905A1 (en) 2018-09-06 2019-08-30 Electronic device and control method therefor

Publications (1)

Publication Number Publication Date
US20230290343A1 true US20230290343A1 (en) 2023-09-14

Family

ID=69722580

Family Applications (2)

Application Number Title Priority Date Filing Date
US17/255,605 Pending US20210264905A1 (en) 2018-09-06 2019-08-30 Electronic device and control method therefor
US18/321,146 Pending US20230290343A1 (en) 2018-09-06 2023-05-22 Electronic device and control method therefor

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US17/255,605 Pending US20210264905A1 (en) 2018-09-06 2019-08-30 Electronic device and control method therefor

Country Status (3)

Country Link
US (2) US20210264905A1 (en)
KR (2) KR102608953B1 (en)
WO (1) WO2020050554A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7196122B2 (en) * 2020-02-18 2022-12-26 株式会社東芝 Interface providing device, interface providing method and program
JP7476645B2 (en) * 2020-04-23 2024-05-01 富士フイルムビジネスイノベーション株式会社 Portable information terminal and program
CN113079247A (en) * 2021-03-24 2021-07-06 广州三星通信技术研究有限公司 Associated service providing method and associated service providing device

Citations (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080319649A1 (en) * 2007-06-20 2008-12-25 Amadeus S.A.S. System and method for integrating and displaying travel advices gathered from a plurality of reliable sources
US7672845B2 (en) * 2004-06-22 2010-03-02 International Business Machines Corporation Method and system for keyword detection using voice-recognition
US20100114899A1 (en) * 2008-10-07 2010-05-06 Aloke Guha Method and system for business intelligence analytics on unstructured data
US20100175020A1 (en) * 2009-01-05 2010-07-08 Samsung Electronics Co., Ltd. Mobile terminal and method for providing application program thereof
US8452336B2 (en) * 2008-04-30 2013-05-28 Lg Electronics Inc. Mobile terminal and call content management method thereof
US20130144610A1 (en) * 2011-12-05 2013-06-06 Microsoft Corporation Action generation based on voice data
US8537983B1 (en) * 2013-03-08 2013-09-17 Noble Systems Corporation Multi-component viewing tool for contact center agents
US20140019846A1 (en) * 2012-07-12 2014-01-16 Yehuda Gilead Notes aggregation across multiple documents
US20140058831A1 (en) * 2008-09-08 2014-02-27 Invoca, Inc. Methods and systems for processing and managing telephonic communications
US20140164923A1 (en) * 2012-12-12 2014-06-12 Adobe Systems Incorporated Intelligent Adaptive Content Canvas
US20140164985A1 (en) * 2012-12-12 2014-06-12 Adobe Systems Incorporated Predictive Directional Content Queue
US20160104226A1 (en) * 2014-10-13 2016-04-14 Samsung Electronics Co., Ltd. Method and apparatus for providing content service
US20160330150A1 (en) * 2015-05-06 2016-11-10 Kakao Corp. Message service providing method for message service linked to search service and message server and user terminal to perform the method
US9495176B2 (en) * 2013-03-06 2016-11-15 Lg Electronics Inc. Mobile terminal and control method thereof using extracted keywords
US20170324858A1 (en) * 2014-12-09 2017-11-09 Huawei Technologies Co., Ltd. Information Processing Method and Apparatus
US20170344517A1 (en) * 2016-05-26 2017-11-30 Konica Minolta, Inc. Information processing apparatus and program
US9986397B2 (en) * 2012-01-18 2018-05-29 Samsung Electronics Co., Ltd. Apparatus and method for processing call services in mobile terminal
US20180181922A1 (en) * 2016-12-23 2018-06-28 Samsung Electronics Co., Ltd. System and method of providing to-do list of user
US20190171357A1 (en) * 2013-03-15 2019-06-06 Forbes Holten Norris, III Space optimizing micro keyboard method and apparatus
US20190332345A1 (en) * 2016-07-21 2019-10-31 Samsung Electronics Co., Ltd. Electronic device and control method thereof
US20200183971A1 (en) * 2017-08-22 2020-06-11 Subply Solutions Ltd. Method and system for providing resegmented audio content
US20210406299A1 (en) * 2020-06-30 2021-12-30 Beijing Baidu Netcom Science And Technology Co., Ltd. Method and apparatus for mining entity relationship, electronic device, and storage medium
US20220406304A1 (en) * 2021-06-21 2022-12-22 Kyndryl, Inc. Intent driven voice interface

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2011205238A (en) * 2010-03-24 2011-10-13 Ntt Docomo Inc Communication terminal and information retrieval method
US8797380B2 (en) * 2010-04-30 2014-08-05 Microsoft Corporation Accelerated instant replay for co-present and distributed meetings
KR20120104662A (en) * 2011-03-14 2012-09-24 주식회사 팬택 Apparatus and method for providing caller's information of mobile terminal
KR101798968B1 (en) * 2011-03-30 2017-11-17 엘지전자 주식회사 Mobiel Terminal And Mehtod For Controlling The Same
US9443518B1 (en) * 2011-08-31 2016-09-13 Google Inc. Text transcript generation from a communication session
KR20150134087A (en) * 2014-05-21 2015-12-01 삼성전자주식회사 Electronic device and method for recommending data in electronic device
US10002345B2 (en) * 2014-09-26 2018-06-19 At&T Intellectual Property I, L.P. Conferencing auto agenda planner
KR20160043842A (en) * 2014-10-14 2016-04-22 엘지전자 주식회사 Mobile terminal
KR101665969B1 (en) * 2015-03-25 2016-10-13 주식회사 카카오 Device, server and method for keyword retrieval via interaction
US10430070B2 (en) * 2015-07-13 2019-10-01 Sap Se Providing defined icons on a graphical user interface of a navigation system
WO2018039045A1 (en) * 2016-08-24 2018-03-01 Knowles Electronics, Llc Methods and systems for keyword detection using keyword repetitions
US20190156826A1 (en) * 2017-11-18 2019-05-23 Cogi, Inc. Interactive representation of content for relevance detection and review

Also Published As

Publication number Publication date
WO2020050554A1 (en) 2020-03-12
KR20230169016A (en) 2023-12-15
US20210264905A1 (en) 2021-08-26
KR102608953B1 (en) 2023-12-04
KR20200028089A (en) 2020-03-16

Similar Documents

Publication Publication Date Title
US11671386B2 (en) Electronic device and method for changing chatbot
US11575783B2 (en) Electronic apparatus and control method thereof
US11721333B2 (en) Electronic apparatus and control method thereof
US11954150B2 (en) Electronic device and method for controlling the electronic device thereof
US20230290343A1 (en) Electronic device and control method therefor
US11153426B2 (en) Electronic device and control method thereof
US20200133211A1 (en) Electronic device and method for controlling electronic device thereof
US20200258504A1 (en) Electronic apparatus and controlling method thereof
US11481551B2 (en) Device and method for providing recommended words for character input
US11817097B2 (en) Electronic apparatus and assistant service providing method thereof
US20220059088A1 (en) Electronic device and control method therefor
KR20190105182A (en) Electronic apparatus and control method thereof
CN112585911B (en) Electronic device and control method thereof

Legal Events

Date Code Title Description
STPP Information on status: patent application and granting procedure in general. Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION
STPP Information on status: patent application and granting procedure in general. Free format text: NON FINAL ACTION MAILED
STPP Information on status: patent application and granting procedure in general. Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER
STPP Information on status: patent application and granting procedure in general. Free format text: NON FINAL ACTION MAILED
STPP Information on status: patent application and granting procedure in general. Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER
STPP Information on status: patent application and granting procedure in general. Free format text: FINAL REJECTION MAILED