WO2024228718A1 - Change the tone of a message thread reply using machine learning - Google Patents
Change the tone of a message thread reply using machine learning
- Publication number
- WO2024228718A1 (PCT/US2023/032010)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- message
- suggested
- computing device
- messages
- tone
- Prior art date
Links
- 238000010801 machine learning Methods 0.000 title claims abstract description 139
- 230000008859 change Effects 0.000 title description 6
- 238000000034 method Methods 0.000 claims description 78
- 238000012549 training Methods 0.000 claims description 48
- 230000008451 emotion Effects 0.000 claims description 9
- 238000006243 chemical reaction Methods 0.000 claims description 5
- 238000004891 communication Methods 0.000 description 28
- 238000013528 artificial neural network Methods 0.000 description 25
- 230000004044 response Effects 0.000 description 19
- 238000004458 analytical method Methods 0.000 description 18
- 230000002787 reinforcement Effects 0.000 description 16
- 230000015654 memory Effects 0.000 description 14
- 230000000306 recurrent effect Effects 0.000 description 11
- 238000012508 change request Methods 0.000 description 10
- 230000006870 function Effects 0.000 description 10
- 238000003058 natural language processing Methods 0.000 description 7
- 238000010586 diagram Methods 0.000 description 6
- 230000003993 interaction Effects 0.000 description 6
- 230000008569 process Effects 0.000 description 6
- 238000012545 processing Methods 0.000 description 6
- 210000003813 thumb Anatomy 0.000 description 6
- 230000001413 cellular effect Effects 0.000 description 5
- 230000007246 mechanism Effects 0.000 description 5
- 238000013527 convolutional neural network Methods 0.000 description 4
- 238000005516 engineering process Methods 0.000 description 4
- 230000003287 optical effect Effects 0.000 description 4
- 239000013598 vector Substances 0.000 description 3
- 230000008901 benefit Effects 0.000 description 2
- 230000001427 coherent effect Effects 0.000 description 2
- 239000000835 fiber Substances 0.000 description 2
- 239000004973 liquid crystal related substance Substances 0.000 description 2
- 230000007774 longterm Effects 0.000 description 2
- 230000001052 transient effect Effects 0.000 description 2
- 238000013519 translation Methods 0.000 description 2
- WQZGKKKJIJFFOK-GASJEMHNSA-N Glucose Natural products OC[C@H]1OC(O)[C@H](O)[C@@H](O)[C@@H]1O WQZGKKKJIJFFOK-GASJEMHNSA-N 0.000 description 1
- 230000009471 action Effects 0.000 description 1
- 238000003491 array Methods 0.000 description 1
- 238000013500 data storage Methods 0.000 description 1
- 238000013135 deep learning Methods 0.000 description 1
- 238000013461 design Methods 0.000 description 1
- 238000001514 detection method Methods 0.000 description 1
- 238000011161 development Methods 0.000 description 1
- 230000000694 effects Effects 0.000 description 1
- 230000007613 environmental effect Effects 0.000 description 1
- 210000003811 finger Anatomy 0.000 description 1
- 239000008103 glucose Substances 0.000 description 1
- 239000011159 matrix material Substances 0.000 description 1
- 239000000203 mixture Substances 0.000 description 1
- 238000012544 monitoring process Methods 0.000 description 1
- 230000001537 neural effect Effects 0.000 description 1
- 210000002569 neuron Anatomy 0.000 description 1
- 238000010606 normalization Methods 0.000 description 1
- 238000005192 partition Methods 0.000 description 1
- 230000009467 reduction Effects 0.000 description 1
- 239000004984 smart glass Substances 0.000 description 1
- 230000003068 static effect Effects 0.000 description 1
- 238000010897 surface acoustic wave method Methods 0.000 description 1
- 230000009182 swimming Effects 0.000 description 1
- 230000036642 wellbeing Effects 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/30—Semantic analysis
- G06F40/35—Discourse or dialogue representation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L51/00—User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
- H04L51/02—User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail using automatic reactions or user delegation, e.g. automatic replies or chatbot-generated messages
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/20—Natural language analysis
- G06F40/253—Grammatical analysis; Style critique
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
Definitions
- a computing device may enable a user of the computing device to respond to an incoming communication by allowing the user to input textual information (e.g., using an input device) and send the textual information to other users as a response.
- a computing device or computing system may leverage the structured understanding of language inputs from a machine learning model (e.g., large language model) to transform inputted text according to a learned message tone. For example, a computing device may receive an input of text as a draft message to be sent in a messaging application. The computing device may receive a request to change the message tone of the draft message.
- the computing device may determine the context of the draft message.
- the computing device may instruct the machine learning model to apply changes to the draft message based on the determined context of the draft message and according to one or more message tones.
- the computing device may instruct the machine learning model to generate suggested messages according to the determined context and the one or more message tones included in the message tone change request.
- the computing device may output one or more of the suggested messages for a user operating the computing device to select.
- the computing device may replace the draft message with the selected message written in the one or more requested message tones.
- a method includes a computing device obtaining a message thread that includes a plurality of messages between a first user associated with the computing device and a second user.
- the method may further include the computing device generating a set of suggested messages by at least applying a machine learning model to a draft message, wherein the set of suggested messages includes at least one message having a message tone from a plurality of message tones.
- the method may further include the computing device outputting for display a graphical user interface that includes the at least one suggested message from the set of suggested messages.
- the method may further include the computing device receiving a user input that selects a suggested message from the at least one suggested message as a selected message.
- the method may further include the computing device outputting an updated graphical user interface that includes the selected message in place of the draft message.
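The claimed method (obtain a message thread, apply a machine learning model to a draft message, display suggested messages, accept a selection, and replace the draft) can be sketched end to end in code. The sketch below is illustrative only; the Message type, generate_suggested_messages function, and the apply_model callback are hypothetical names standing in for the claimed components.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Message:
    sender: str
    text: str

def generate_suggested_messages(
    thread: List[Message],
    draft: str,
    tone: str,
    apply_model: Callable[[str, str, str], List[str]],
) -> List[str]:
    """Build conversational context from the thread and ask the model
    (apply_model) for candidate rewrites of the draft in the requested tone."""
    context = "\n".join(f"{m.sender}: {m.text}" for m in thread)
    return apply_model(context, draft, tone)

def replace_draft(selected: str) -> str:
    """The selected suggestion takes the place of the draft in the text editing region."""
    return selected

if __name__ == "__main__":
    thread = [Message("Alex", "What are you up to tonight?")]
    stub_model = lambda ctx, d, t: [f"({t}) {d}"]  # stand-in for the ML model
    suggestions = generate_suggested_messages(thread, "Wanna grab dinner?", "Formal", stub_model)
    print(replace_draft(suggestions[0]))
```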
- a device comprises at least one processor, a display, and a storage device that stores instructions executable by the at least one processor.
- the instructions executable by the at least one processor may be configured to obtain a message thread that includes a plurality of messages between a first user associated with the device and a second user.
- the instructions executable by the at least one processor may also generate a set of suggested messages by at least applying a machine learning model to a draft message, wherein the set of suggested messages includes at least one message having a message tone from a plurality of message tones.
- the instructions executable by the at least one processor may also output, via the display, a graphical user interface that includes the at least one suggested message from the set of suggested messages.
- the instructions executable by the at least one processor may also receive a user input that selects a suggested message from the at least one suggested message as a selected message.
- the instructions executable by the at least one processor may also output, via the display, an updated graphical user interface that includes the selected message in place of the draft message.
- a computer-readable storage medium encoded with instructions that, when executed, cause at least one processor of a computing device to obtain a message thread that includes a plurality of messages between a first user associated with a computing device and a second user.
- the instructions may further cause the at least one processor to generate a set of suggested messages by at least applying a machine learning model to a draft message, wherein the set of suggested messages includes at least one message having a message tone from a plurality of message tones.
- the instructions may further cause the at least one processor to output, for display, a graphical user interface that includes the at least one suggested message from the set of suggested messages.
- the instructions may further cause the at least one processor to receive a user input that selects a suggested message from the at least one suggested message as a selected message.
- the instructions may further cause the at least one processor to output an updated graphical user interface that includes the selected message in place of the draft message.
- a system comprising at least one processor, a network interface, and a storage device that stores instructions executable by the at least one processor to obtain a message thread that includes a plurality of messages between a first user associated with a computing device and a second user.
- the instructions may further cause the at least one processor to generate a set of suggested messages by at least applying a machine learning model to a draft message, wherein the set of suggested messages includes at least one message having a message tone from a plurality of message tones.
- the instructions may further cause the at least one processor to send, to the computing device via the network interface, at least one suggested message from the set of suggested messages.
- FIG. 1 is a conceptual diagram illustrating an example computing environment and graphical user interface for providing suggested messages written in a selected message tone, in accordance with one or more techniques of this disclosure.
- FIG. 2 is a block diagram illustrating a computing device for providing suggested messages, in accordance with one or more techniques of this disclosure.
- FIGS. 3A-3D are conceptual diagrams illustrating example graphical user interfaces for providing suggested messages in accordance with one or more techniques of this disclosure.
- FIG. 4 is a flowchart illustrating an example operation for providing suggested messages in accordance with one or more techniques of this disclosure.
- FIG. 1 is a conceptual diagram illustrating an example computing environment 101 and GUI 110 for providing suggested messages written in a selected message tone, in accordance with one or more techniques of this disclosure.
- computing environment 101 includes computing device 100.
- Examples of computing device 100 may include, but are not limited to, portable, mobile, or other devices, such as mobile phones (including smartphones), wearable computing devices (e.g., smart watches, smart glasses, etc.), laptop computers, desktop computers, tablet computers, smart television platforms, server computers, mainframes, infotainment systems (e.g., vehicle head units), etc.
- computing device 100 may represent a cloud computing system that provides one or more services via a network. That is, in some examples, computing device 100 may be a distributed computing system.
- computing device 100 includes one or more user interface (UI) devices (“UI device 102”).
- UI device 102 of computing device 100 may be configured to function as an input device and/or an output device for computing device 100.
- UI device 102 may be implemented using various technologies. For instance, UI device 102 may be configured to receive input from a user through tactile, audio, and/or video feedback.
- a presence-sensitive display includes a touch-sensitive or presence-sensitive input screen, such as a resistive touchscreen, a surface acoustic wave touchscreen, a capacitive touchscreen, a projective capacitance touchscreen, a pressure sensitive screen, an acoustic pulse recognition touchscreen, or another presence-sensitive technology.
- UI device 102 of computing device 100 may include a presence-sensitive device that may receive tactile input from a user of computing device 100.
- UI device 102 may receive indications of the tactile input by detecting one or more gestures from the user (e.g., when the user touches or points to one or more locations of UI device 102 with a finger or a stylus pen). UI device 102 may additionally or alternatively be configured to function as an output device by providing output to a user using tactile, audio, or video stimuli.
- Examples of output devices include a sound card, a video graphics adapter card, or any of one or more display devices, such as a liquid crystal display (LCD), dot matrix display, light emitting diode (LED) display, miniLED, microLED, organic light-emitting diode (OLED) display, e-ink, or similar monochrome or color display capable of outputting visible information to a user of computing device 100.
- Additional examples of an output device include a speaker, a haptic device, or other device that can generate intelligible output to a user.
- UI device 102 may present output to a user of computing device 100 as a graphical user interface that may be associated with functionality provided by computing device 100.
- UI device 102 may present various user interfaces of applications executing at or accessible by computing device 100 (e.g., an electronic message application, an Internet browser application, etc.). A user of computing device 100 may interact with a respective user interface of an application to cause computing device 100 to perform operations relating to a function.
- UI device 102 of computing device 100 may detect two-dimensional and/or three-dimensional gestures as input from a user of computing device 100. For instance, a sensor of UI device 102 may detect the user's movement (e.g., moving a hand, an arm, a pen, a stylus, etc.) within a threshold distance of the sensor of UI device 102.
- UI device 102 may determine a two- or three-dimensional vector representation of the movement and correlate the vector representation to a gesture input (e.g., a hand-wave, a pinch, a clap, a pen stroke, etc.) that has multiple dimensions.
- UI device 102 may, in some examples, detect a multidimensional gesture without requiring the user to gesture at or near a screen or surface at which UI device 102 outputs information for display. Instead, UI device 102 may detect a multi-dimensional gesture performed at or near a sensor which may or may not be located near the screen or surface at which UI device 102 outputs information for display.
- In the example of FIG. 1, computing device 100 includes user interface (UI) module 104 and application modules 106A–106N (collectively “application modules 106”).
- Modules 104 and 106 may perform operations described herein using hardware, software, firmware, or a mixture thereof residing in and/or executing at computing device 100.
- Computing device 100 may execute modules 104 and 106 with one processor or with multiple processors.
- computing device 100 may execute modules 104 and 106 as a virtual machine executing on underlying hardware.
- Modules 104 and 106 may execute as one or more services of an operating system or computing platform or may execute as one or more executable programs at an application layer of a computing platform.
- UI module 104, as shown in the example of FIG. 1, may be operable by computing device 100 to perform one or more functions, such as receiving input and sending indications of such input to other components associated with computing device 100, such as application modules 106.
- UI module 104 may also receive data from components associated with computing device 100 such as application modules 106. Using the data received, UI module 104 may cause other components associated with computing device 100, such as UI device 102, to provide output based on the data. For instance, UI module 104 may receive data from one of application modules 106 to display a GUI.
- Application modules 106, as shown in the example of FIG. 1, may include functionality to perform any variety of operations on computing device 100.
- application modules 106 may include a word processor, a text application, a web browser, a multimedia player, a calendar application, an operating system, a distributed computing application, a graphic design application, a video editing application, a web development application, or any other application.
- One of application modules 106 (e.g., application module 106A) may be a text messaging or Short Message Service (SMS) application.
- Application module 106A may include functionality to compose outgoing text message communications, receive incoming text message communications, respond to incoming text message communications, and other functions.
- Application module 106A, in various examples, may provide data to UI module 104 causing UI device 102 to display a GUI.
- one or more of application modules 106 may be operable to receive incoming communications from other devices (e.g., via the network). For instance, one or more of application modules 106 may receive text messages for an account associated with a user of computing device 100, calendar alerts or meeting requests for a user of computing device 100, or other incoming communications.
- communications may include information (e.g., generated in response to input by users of the other devices). Examples of information include text (e.g., any combination of letters, words, numbers, punctuation, etc.), emoji, emoticons, images, video, audio, or any other content that may be included in a communication.
- In the example of FIG. 1, application module 106A may receive an incoming communication (e.g., a text message) from another computing device.
- Computing device 100 may communicate with other computing devices via a network.
- the network may include any public or private communication network, such as a cellular network, Wi-Fi network, or other type of network for transmitting data between computing devices.
- the network may represent one or more packet switched networks, such as the Internet.
- Computing device 100 may send and receive data across the network using any suitable communication techniques.
- computing device 100 may be operatively coupled to the network using respective network links.
- the network may include network hubs, network switches, network routers, terrestrial and/or satellite cellular networks, etc., that are operatively inter-coupled thereby providing for the exchange of information between computing device 100 and another computing device.
- network links of the network may be Ethernet, ATM or other network connections. Such connections may include wireless and/or wired connections.
- application module 106A may provide a user a platform to electronically communicate with other computing devices via computing device 100.
- computing device 100 may send a message to and receive a message from another computing device.
- UI device 102 may provide a user operating computing device 100 the ability to provide inputs (e.g., to select letters, emojis, etc.) at a graphical keyboard to compose draft message 114.
- draft message 114 may not be composed in a tone required or preferred by the user operating computing device 100.
- draft message 114 may be composed in a casual tone, but the user operating computing device 100 may want draft message 114 to be written in a formal tone.
- computing device 100 may provide the option for the user of computing device 100 to select message tone change request icon 116.
- UI device 102 may display GUI 110 that includes message tone change request icon 116.
- UI device 102 may detect an input from a user operating computing device 100 at a location corresponding to where message tone change request icon 116 is included in GUI 110.
- application module 106A may instruct UI module 104 to update UI device 102 to output message tones 118A-N (collectively referred to as “message tones 118”) in GUI 110.
- UI device 102 may allow the user operating computing device 100 to select one or more message tones of message tones 118, such as “Shakespeare” message tone 118A or “Excited” message tone 118B, in which draft message 114 may be written.
- UI device 102 may, for example, detect an input selecting one or more message tones 118.
- UI device 102 may send application module 106A an indication of the one or more selected message tones 118.
- Application module 106A may send draft message 114 and one or more selected message tones of message tones 118 to suggestion module 108.
- Computing device 100 may include suggestion module 108 that generates suggested messages 122.
- Computing device 100 may receive an input of draft message 114.
- Draft message 114 may either be input by a user or generated by computing device 100 or any other computing device.
- UI device 102 may output GUI 110 that displays draft message 114 in text editing region 160.
- suggestion module 108 may execute on a remote computing system or server external to computing device 100.
- Computing device 100 may send draft message 114 and the selected message tones of message tones 118 to the remote computing system or server hosting suggestion module 108.
- Suggestion module 108 executing on the remote computing system or server may generate suggested messages 122 based on the selected message tones of message tones 118 and the context of draft message 114.
- Suggestion module 108 executing on the remote computing system or server may send computing device 100 the generated suggested messages 122 via a network providing wired and/or wireless communication between computing device 100 and the remote computing system or server.
- Computing device 100 may output at least a subset of the received suggested messages 122 in GUI 110.
- Suggestion module 108 may generate suggested messages 122 according to one or more message tones of message tones 118.
- Suggestion module 108 may generate suggested messages 122 by rewriting draft message 114 in a particular message tone.
- Suggestion module 108 may represent or otherwise apply a machine learning (ML) model trained to generate contextually relevant and coherent messages (e.g., suggested messages 122) based on one or more learned message tones of message tones 118.
- Suggestion module 108 may suggest the generated messages to a first user operating computing device 100.
- Computing device 100 may receive a signal of a message selected by the user operating computing device 100. Responsive to the selection, computing device 100 may replace or substitute draft message 114 with the selected message. For example, computing device 100 may output the selected message in text editing region 160.
- Suggestion module 108 may represent or otherwise apply an ML model, such as a large language model. Suggestion module 108 is described in greater detail below with respect to FIG. 2.
- suggestion module 108 may be trained on a large amount of data in order to generate coherent and contextually relevant messages.
- suggestion module 108 may use complex algorithms and neural networks to analyze the structure, grammar, and context of language.
- Suggestion module 108 may perform various natural language processing (NLP) tasks, such as machine translation, sentiment analysis, summarization, question answering, text generation, and the like.
- Suggestion module 108 may obtain message thread 112 between a first user operating computing device 100 and a second user.
- the second user may also operate computing device 100.
- the second user may operate another computing device (not shown) that is not computing device 100.
- Message thread 112 may include a set (i.e., one or more) of messages exchanged between two or more users (e.g., the first user and the second user) within a communication platform (e.g., provided by application modules 106), such as a messaging application, a social media platform, or other applications.
- computing device 100 may receive an input from a user operating computing device 100 via UI device 102.
- Computing device 100 may receive an input as draft message 114 composed by the user operating computing device 100.
- suggestion module 108 may generate draft message 114 based on message thread 112.
- Computing device 100 may provide suggestion module 108 draft message 114.
- Computing device 100 may only provide suggestion module 108 draft message 114 with express consent of a user operating computing device 100. Thus, the user operating computing device 100 retains control over how information is collected about the user and used by computing device 100 and suggestion module 108.
- Suggestion module 108 may determine the context of draft message 114 and/or message thread 112 that includes relevant information associated with draft message 114 and/or message thread 112.
- Suggestion module 108 may determine the context of draft message 114, such as the identity of the recipient of draft message 114, a time-of-day draft message 114 was composed, an application of application modules 106 being used to send and receive message thread 112, a message tone used by one or more messages of message thread 112, or any other factors indicating an intent associated with draft message 114.
- Suggestion module 108 may determine the context of message thread 112, such as the identity of users communicating in message thread 112, the tone of one or more messages of message thread 112, recent or previous messages of message thread 112, punctuation used in one or more messages of message thread 112, or any other factors indicating an intent associated with one or more messages of message thread 112.
- suggestion module 108 may determine the context of draft message 114 and/or message thread 112 by applying the ML model.
- Suggestion module 108 may generate suggested messages 122 based on the context of draft message 114 and/or message thread 112. In the example of FIG. 1, suggestion module 108 may determine the context of draft message 114 stating “Wanna grab dinner?”.
- Suggestion module 108 may, for example, determine the context of the example draft message is a question about having dinner with the second user.
- suggestion module 108 may handle data associated with draft message 114 and message thread 112 such that personally identifiable information is removed.
- suggestion module 108 may determine a context of draft message 114 and/or message thread 112 in such a way that no personally identifiable information is used or obtained (e.g., in a way that a particular location of a user operating computing device 100 cannot be determined).
- Responsive to suggestion module 108 determining the context of draft message 114 and/or message thread 112, suggestion module 108 may generate suggested messages 122 based on the determined context and a selected message tone of message tones 118. In the example of FIG. 1, suggestion module 108 may generate suggested messages 122 based on “Shakespeare” message tone 118A.
- Suggestion module 108 may apply the determined context of draft message 114 (e.g., a question about having dinner with a second user) and “Shakespeare” message tone 118A to generate suggested message 122A stating “Prythee, shall we dine tonight?”, suggested message 122B stating “Wouldst thou do me the honor of accompanying me to dinner?”, suggested message 122N stating “wouldst thou join me for a repast?”, and other suggested messages 122.
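One plausible way a suggestion module could condition a large language model on a selected message tone is to assemble a text prompt from the draft message, the thread context, and the tone's plain text rules. The prompt builder below is a hypothetical sketch, not the disclosed implementation; build_tone_prompt and its parameters are illustrative.

```python
def build_tone_prompt(draft: str, tone: str, thread_context: str, tone_rules: str) -> str:
    """Assemble a single text prompt asking a language model to rewrite the
    draft in the requested tone while preserving its intent."""
    return (
        "Rewrite the draft message in the requested tone, keeping its meaning.\n"
        f"Conversation so far:\n{thread_context}\n"
        f"Requested tone: {tone}\n"
        f"Tone rules: {tone_rules}\n"
        f"Draft: {draft}\n"
        "Rewritten message:"
    )

print(build_tone_prompt(
    draft="Wanna grab dinner?",
    tone="Shakespeare",
    thread_context="Friend: What are you up to tonight?",
    tone_rules="Use Early Modern English pronouns (thou, thee) and archaic verb forms.",
))
```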
- Suggestion module 108 may apply the ML model to learn how to generate suggested messages 122 in each message tone of message tones 118.
- Suggestion module 108 may train the ML model with plain text rules associated with message tones 118 and example statements composed in message tones 118. Suggestion module 108 training the ML model to learn how to generate suggested messages 122 is described in more detail below with respect to FIG. 2.
- In some examples, suggestion module 108 may categorize message tones 118 within distinct dimensions. For example, suggestion module 108 may categorize each message tone of message tones 118 as being in a dimension that may include characters, emotions, or tactics. Suggestion module 108 may categorize message tones 118 as being part of a dimension to efficiently learn how to draft messages in each message tone of message tones 118.
- Suggestion module 108 may also categorize message tones 118 to assist in learning new message tones, such as message tones created by a user operating computing device 100.
- Suggestion module 108 may receive plain text rules associated with message tones 118. Suggestion module 108 may receive plain text rules that include a description of how message tones 118 format speech or statements. Suggestion module 108 may apply the target statements and plain text rules associated with message tones 118 to train the ML model on structural particularities of message tones 118. Suggestion module 108 may train the ML model to generate suggested messages 122 according to message tones 118.
- Suggestion module 108 may also update the target statements and plain text rules.
- Computing device 100 may provide an option to a user operating computing device 100 to select whether suggested messages 122 are properly written according to a selected message tone of message tones 118, while also maintaining the context of draft message 114 and/or message thread 112 (e.g., selecting a ‘thumbs up’ or ‘thumbs down’ corresponding to the quality of suggested messages 122 as perceived by a user operating computing device 100). Responsive to computing device 100 receiving an indication that a suggested message of suggested messages 122 has the appropriate message tone and context, suggestion module 108 may reinforce the understanding of the ML model. Responsive to computing device 100 receiving an indication that a suggested message of suggested messages 122 does not have the appropriate message tone or context, suggestion module 108 may provide the ML model the appropriate feedback to further train the ML model.
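The thumbs-up/thumbs-down feedback described above can be recorded as labeled examples that later feed reinforcement or corrective training. A minimal sketch under that assumption; the ToneFeedback and FeedbackStore types are hypothetical.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class ToneFeedback:
    draft: str
    suggestion: str
    tone: str
    thumbs_up: bool  # True if the user judged tone and context appropriate

@dataclass
class FeedbackStore:
    records: List[ToneFeedback] = field(default_factory=list)

    def add(self, record: ToneFeedback) -> None:
        self.records.append(record)

    def split_for_training(self) -> Tuple[List[ToneFeedback], List[ToneFeedback]]:
        """Positive records reinforce the model's understanding; negative
        records become corrective examples for further training."""
        positives = [r for r in self.records if r.thumbs_up]
        negatives = [r for r in self.records if not r.thumbs_up]
        return positives, negatives

store = FeedbackStore()
store.add(ToneFeedback("Wanna grab dinner?", "Prythee, shall we dine tonight?", "Shakespeare", True))
print(store.split_for_training())
```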
- computing device 100 may allow a user to create a new message tone of message tones 118.
- suggestion module 108 may receive target statements or plain text rules from a user operating computing device 100.
- UI device 102 may detect inputs from a user.
- UI device 102 may detect inputs including a new message tone name, target statements to train the new message tone (e.g., example statements of text written in the new message tone, corresponding example statements of text not written in the new message tone, or any sort of grammatical structure associated with the new message tone), or plain text rules associated with the new message tone.
- UI device 102 may send application module 106 an indication of the detected inputs via UI module 104.
- Application module 106 may send an indication of the inputs to suggestion module 108 to train the ML model to learn how to compose suggested messages 122 in the new message tone.
- Suggestion module 108 may train the ML model to learn how to write messages in the new message tone by identifying textual particularities associated with the new message tone.
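A user-created message tone, as described above, reduces to a name, a dimension (character, emotion, or tactic), plain text rules, and paired example statements. The dataclass below is one hypothetical way to represent that training input; all field names are illustrative.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class MessageToneDefinition:
    name: str                      # e.g., a new user-created tone name
    dimension: str                 # "character", "emotion", or "tactic"
    plain_text_rules: str          # description of how the tone formats speech
    target_statements: List[str]   # example statements written in the tone
    neutral_statements: List[str]  # corresponding statements not written in the tone

pirate_tone = MessageToneDefinition(
    name="Pirate",
    dimension="character",
    plain_text_rules="Use nautical slang and address the reader as 'matey'.",
    target_statements=["Arr, shall we feast tonight, matey?"],
    neutral_statements=["Do you want to have dinner tonight?"],
)
print(pirate_tone.name, len(pirate_tone.target_statements))
```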
- Although suggestion module 108 is depicted as part of computing device 100 in FIG. 1, suggestion module 108 may be stored and/or executed by any other computing device or computing system communicably coupled to computing device 100.
- suggestion module 108 may be stored and/or executed by an external computing system that is communicatively coupled to computing device 100 via a network connection.
- Computing device 100 may output one or more of suggested messages 122.
- suggestion module 108 may send application module 106 at least a subset of the generated suggested messages 122.
- Application module 106 may send UI module 104 the received messages of suggested messages 122.
- UI module 104 may instruct UI device 102 to update GUI 110 to output suggested messages 122.
- UI device 102 may display suggested messages 122 in message selection area 120.
- Computing device 100 may receive an input selecting one of suggested messages 122.
- UI device 102 of computing device 100 may detect an input corresponding to a location of a suggested message of suggested messages 122 output in message selection area 120.
- UI device 102 may send the input selecting a message of suggested messages 122 to application module 106 via UI module 104.
- Application module 106 may instruct UI module 104 to update UI device 102 to output GUI 110 that displays the selected message in text editing region 160 (e.g., by replacing draft message 114 with the selected message).
- computing device 100 may automatically respond to all incoming messages from message thread 112 according to one or more message tones of message tones 118.
- application module 106 may receive new, incoming messages associated with message thread 112.
- Application module 106 may continuously send message thread 112 to suggestion module 108 responsive to a new message received via a communication platform handled by application module 106.
- application module 106 may include a context or intent of the new message in message thread 112 sent to suggestion module 108.
- Suggestion module 108 may generate a response to the new message in a selected message tone of message tones 118 and according to the context or intent of the new message. Suggestion module 108 may send the generated message to application module 106. Application modules 106 may automatically send the generated message to the one or more other users involved in message thread 112. Application module 106 may also send the generated message to UI module 104. UI module 104 may instruct UI device 102 to update GUI 110 to output the automatically generated message sent to other users as part of message thread 112.
- Suggestion module 108 may generate a plurality of messages as potential responses to a recent message included in message thread 112.
- Suggestion module 108 may generate the plurality of messages according to a selected message tone of message tones 118. Suggestion module 108 may apply the machine learning model to generate potential messages that satisfy a message value condition. Suggestion module 108 may generate a set of potential messages based on a context of the recent message received in message thread 112. The context may include one or more of an identity of the recipient or sender of the new message, a time-of-day the new message was received, an application of application modules 106 being used to send and receive message thread 112, a message tone used by one or more messages of message thread 112, or any other factors indicating an intent associated with the new message included in message thread 112.
- Suggestion module 108 may apply the machine learning model to select at least a subset of the generated potential messages based on whether a potential message in the set of potential messages satisfies a message value condition, such as whether the potential message was appropriately composed in the selected message tone of message tones 118.
- suggestion module 108 may apply the machine learning model to determine a message value for each of the plurality of potential messages composed in a selected message tone of message tones 118.
- Suggestion module 108 may determine the message value for a potential message based on a confidence rating associated with how closely the potential message matches the selected message tone of message tones 118.
- Suggestion module 108 may determine a potential message satisfies the message value condition based on whether the message value assigned to each potential message satisfies a threshold confidence rating associated with the selected message tone of message tones 118. Suggestion module 108 may determine the message value condition based on factors and/or parameters used in training the ML model (e.g., plain text rules of message tones 118, structural particularities of message tones 118, language used in examples of message tones 118). Suggestion module 108 may send application modules 106 the potential message with the greatest message value. In some examples, application modules 106 may automatically send the potential message with the greatest message value to the users involved in message thread 112.
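The message value condition described above amounts to scoring each candidate, filtering by a per-tone confidence threshold, and keeping the highest-scoring message. A minimal sketch, assuming a caller-supplied score_fn that returns a confidence between 0 and 1:

```python
from typing import Callable, List, Optional, Tuple

def select_potential_messages(
    candidates: List[str],
    score_fn: Callable[[str], float],
    threshold: float,
) -> Tuple[List[str], Optional[str]]:
    """Score each candidate (confidence it matches the selected tone), keep
    those satisfying the threshold, and return them with the best candidate."""
    scored = [(message, score_fn(message)) for message in candidates]
    passing = sorted(
        (pair for pair in scored if pair[1] >= threshold),
        key=lambda pair: pair[1],
        reverse=True,
    )
    best = passing[0][0] if passing else None
    return [message for message, _ in passing], best

candidates = ["Prythee, shall we dine tonight?", "dinner?"]
kept, best = select_potential_messages(candidates, lambda m: 0.9 if "Prythee" in m else 0.4, 0.7)
print(kept, best)
```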
- FIG. 2 is a block diagram illustrating a computing device 200, in accordance with one or more techniques of this disclosure.
- Computing device 200 may be one example of computing device 100 in accordance with one or more techniques of this disclosure.
- computing device 200 may include one or more user interface devices 202 (“UI device 202” or “display 202”), one or more processors 224 (“processor 224”), one or more storage devices 228 (“storage device 228”), and one or more communication units 226 (“communication unit 226”). Also shown in FIG. 2, UI device 202 may include one or more input devices 234 (“input device 234”) and one or more output devices (“output devices 236”).
- As also shown in FIG. 2, storage device 228 may include a user interface module 204 (“UI module 204”), one or more application modules 206 (“application module 206”), a suggestion module 208, an operating system 230 (“OS 230”), and database 232.
- UI device 202 may be a presence-sensitive display configured to detect input (e.g., touch and non-touch input) from a user of respective computing device 200.
- UI device 202 may output information to a user in the form of a UI, which may be associated with functionality provided by computing device 200.
- Such UIs may be associated with computing platforms, operating systems, applications, and/or services executing at or accessible from computing device 200 (e.g., electronic message applications, chat applications, Internet browser applications, mobile or desktop operating systems, social media applications, electronic games, menus, and other types of applications).
- Processor 224 may implement functionality and/or execute instructions within computing device 200.
- processor 224 may receive and execute instructions that provide the functionality of modules 204-208 and OS 230. These instructions executed by processor 224 may cause computing device 200 to store and/or modify information within storage device 228 or processor 224 during program execution.
- Processor 224 may execute instructions of modules 204-208 and OS 230 to perform one or more operations.
- Storage device 228 within computing device 200 may store information for processing during operation of computing device 200 (e.g., computing device 200 may store data accessed by modules 204-208 and OS 230 during execution at computing device 200).
- storage device 228 may be a temporary memory, meaning that a primary purpose of storage device 228 is not long-term storage.
- Storage device 228 on computing device 200 may be configured for short-term storage of information as volatile memory and therefore not retain stored contents if powered off. Examples of volatile memories include random access memories (RAM), dynamic random access memories (DRAM), static random access memories (SRAM), and other forms of volatile memories known in the art.
- Storage device 228 may include one or more computer-readable storage media. Storage device 228 may be configured to store larger amounts of information than volatile memory. Storage device 228 may further be configured for long-term storage of information as non-volatile memory space and retain information after power on/off cycles. Examples of non-volatile memories include magnetic hard discs, optical discs, floppy discs, flash memories, or forms of electrically programmable memories (EPROM) or electrically erasable and programmable (EEPROM) memories. Storage device 228 may store program instructions and/or information (e.g., within database 232) associated with modules 204-208 and OS 230.
- Communication unit 226 of computing device 200 may communicate with one or more external devices via one or more wired and/or wireless networks by transmitting and/or receiving network signals on the one or more networks.
- Examples of communication units 226 include a network interface card (e.g., such as an Ethernet card), an optical transceiver, a radio frequency transceiver, a GNSS receiver, or any other type of device that can send and/or receive information.
- Other examples of communication unit 226 may include short wave radios, cellular data radios (for terrestrial and/or satellite cellular networks), wireless network radios, as well as universal serial bus (USB) controllers.
- Input device 234 of computing device 200 may receive input.
- Input device 234 of computing device 200 includes a presence-sensitive display, a fingerprint sensor, touch-sensitive screen, mouse, keyboard, voice responsive system, video camera, microphone or any other type of device for detecting input from a human or machine.
- Input devices 234 may include one or more sensors. Numerous examples of sensors exist and include any input component configured to obtain environmental information about the circumstances surrounding computing device 200 and/or physiological information that defines the activity state and/or physical well-being of a user of computing device 200. In some examples, a sensor may be an input component that obtains physical position, movement, and/or location information of computing device 200.
- sensors may include one or more location sensors (e.g., GNSS components, Wi-Fi components, cellular components), one or more temperature sensors, one or more motion sensors (e.g., multi-axial accelerometers, gyros), one or more pressure sensors (e.g., barometer), one or more ambient light sensors, and one or more other sensors (e.g., microphone, camera, infrared proximity sensor, hygrometer, and the like).
- Other sensors may include a heart rate sensor, magnetometer, glucose sensor, hygrometer sensor, olfactory sensor, compass sensor, step counter sensor, to name a few other non-limiting examples.
- Output device 236 of computing device 200 may generate one or more outputs.
- Output device 236 of computing device 200 includes a presence-sensitive display, sound card, video graphics adapter card, speaker, liquid crystal display (LCD), or any other type of device for generating output to a human or machine.
- Communication channels 250 (“COMM channel 250”) may interconnect each of the components 202, 224, 226, and 228 for inter-component communications (physically, communicatively, and/or operatively).
- communication channel 250 may include a system bus, a network connection, an inter-process communication data structure, or any other method for communicating data.
- Computing device 200 may include OS 230.
- OS 230 may control the operation of components of computing device 200.
- OS 230 may facilitate the communication of modules 204-208 with processor 224, storage device 228, and communication units 226.
- OS 230 may manage interactions between software applications (e.g., application module 206) and a user of computing device 200. OS 230 may have a kernel that facilitates interactions with underlying hardware of computing device 200 and provides a fully formed application space capable of executing a wide variety of software applications having secure partitions in which each of the software applications executes to perform various operations.
- UI module 204 may be considered a component of OS 230.
- suggestion module 208 may comprise a hardware device, such as a server computer, having various hardware, firmware, and software components.
- FIG. 2 illustrates only one particular example of suggestion module 208, and many other examples of suggestion module 208 may be used in accordance with techniques of this disclosure.
- components of suggestion module 208 may be located in a singular location.
- one or more components of suggestion module 208 may be in different locations (e.g., a computing system or server communicably connected to computing device 200 via a network). That is, in some examples suggestion module 208 may be a conventional computing device, while in other examples, suggestion module 208 may be a distributed or “cloud” computing device.
- Suggestion module 208 may treat data associated with the techniques described herein such that personally identifiable information is removed.
- Computing device 200 may also output, via UI device 202, an option for a user operating computing device 200 to grant computing device 200 explicit consent to provide suggestion module 208 information associated with draft message 114 and/or message thread 112, as shown in FIG. 1.
- Suggestion module 208 may apply a machine learning model, such as ML model 252, to generate suggested messages relevant to a message thread.
- ML model 252 can be or include one or more artificial neural networks (also referred to simply as neural networks).
- a neural network can include a group of connected nodes, which also can be referred to as neurons or perceptrons.
- a neural network can be organized into one or more layers. Neural networks that include multiple layers can be referred to as “deep” networks.
- a deep network can include an input layer, an output layer, and one or more hidden layers positioned between the input layer and the output layer.
- the nodes of the neural network can be connected or non-fully connected.
- ML model 252 can be or include one or more transformer-based neural networks, such as a large language model (LLM).
- Transformer-based neural networks may refer to a type of deep learning architecture specifically designed for handling sequential data, such as text or time series.
- transformer-based neural networks like LLMs may be configured to perform natural language processing (NLP) tasks, such as question answering, machine translation, text summarization, and sentiment analysis.
- Transformer-based neural networks may utilize a self-attention mechanism, which allows the model to weigh the importance of different elements in a given input sequence relative to each other.
- the self-attention mechanism may help ML model 252 effectively capture long-range dependencies and complex relationships between elements, such as words in a sentence.
- ML model 252 may include an encoder and a decoder that operate to process and generate sequential data, such as text. Both the encoder and decoder may include one or more of self-attention mechanisms, position-wise feedforward networks, layer normalization, or residual connections.
- the encoder may process an input sequence and create a representation that captures the relationships and context among the elements in the sequence. The decoder may then obtain the representation generated by the encoder and produce an output sequence.
- the decoder may generate the output one element at a time (e.g., one word at a time), using a process called autoregressive decoding, where the previously generated elements are used as input to predict the next element in the sequence.
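Autoregressive decoding, as described above, generates one element at a time and feeds previously generated elements back in as input. The loop below is a generic greedy sketch, not the disclosed decoder; next_token_fn is a hypothetical stand-in for a single decoder step.

```python
from typing import Callable, List

def autoregressive_decode(
    next_token_fn: Callable[[List[int]], int],
    prompt_tokens: List[int],
    eos_token: int,
    max_new_tokens: int = 32,
) -> List[int]:
    """Generate one token at a time; previously generated tokens are used as
    input when predicting the next token, until EOS or the length limit."""
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        token = next_token_fn(tokens)
        tokens.append(token)
        if token == eos_token:
            break
    return tokens

# Toy next-token function: echoes the last token + 1, stopping at 5 (EOS).
print(autoregressive_decode(lambda ts: min(ts[-1] + 1, 5), [1, 2], eos_token=5))
```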
- ML model 252 may generate a set of suggested messages by determining a set of information types of a recent message.
- An information type of a message may be or otherwise include a topic, theme, point, subject, purpose, intent, keyword, etc.
- ML model 252 may determine the information type by leveraging a self-attention mechanism to capture the relationships and dependencies between words in the input sequence.
- ML model 252 may tokenize (e.g., split) a sequence of words or subwords, which ML model 252 may convert into vectors (e.g., numerical representations) that ML model 252 can process.
- ML model 252 may use the self-attention mechanism to weigh the importance of each token in relation to the others. In this way, ML model 252 may identify patterns and relationships between the tokens, and in turn the words corresponding to the tokens, that indicate one or more information types of the recent message.
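The self-attention weighting described above can be illustrated with a small scaled dot-product attention computation over token vectors. This is a generic sketch of the mechanism, not the disclosed model; it assumes NumPy and randomly initialized projection matrices.

```python
import numpy as np

def self_attention(x: np.ndarray, wq: np.ndarray, wk: np.ndarray, wv: np.ndarray) -> np.ndarray:
    """Scaled dot-product self-attention over a sequence of token vectors.
    x has shape (seq_len, d_model); wq/wk/wv project tokens to queries, keys, values."""
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(k.shape[-1])          # token-to-token relevance
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over each row
    return weights @ v                               # weighted sum of value vectors

rng = np.random.default_rng(0)
tokens = rng.normal(size=(5, 8))                     # 5 tokens, d_model = 8
wq, wk, wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(tokens, wq, wk, wv).shape)      # (5, 8)
```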
- ML model 252 may generate suggested messages based on the determined one or more information types of the recent message. For example, information in each of the suggested messages generated by ML model 252 may be related (e.g., logically, semantically, etc.) to at least a subset of the set of information types of recent message included in message thread 112.
- ML model 252 may be or otherwise include one or more other types of neural networks.
- ML model 252 may be or include an autoencoder.
- the aim of an autoencoder is to learn a representation (e.g., a lower-dimensional encoding) for a set of data, typically for the purpose of dimensionality reduction.
- an autoencoder can seek to encode the input data and then provide output data that reconstructs the input data from the encoding. Recently, the autoencoder concept has become more widely used for learning generative models of data. In some examples, the autoencoder can include additional losses beyond reconstructing the input data.
- ML model 252 may be or include one or more other forms of artificial neural networks such as, for example, deep Boltzmann machines, deep belief networks, stacked autoencoders, etc. Any of the neural networks described herein can be combined (e.g., stacked) to form more complex networks.
- In some examples, ML model 252 can be or include one or more feed forward neural networks. In feed forward networks, the connections between nodes do not form a cycle.
- each connection can connect a node from an earlier layer to a node from a later layer.
- ML model 252 can be or include one or more recurrent neural networks.
- at least some of the nodes of a recurrent neural network can form a cycle.
- Recurrent neural networks can be especially useful for processing input data that is sequential in nature.
- a recurrent neural network can pass or retain information from a previous portion of the input data sequence to a subsequent portion of the input data sequence through the use of recurrent or directed cyclical node connections.
- Sequential input data may include words in a sentence (e.g., for natural language processing, speech detection or processing, etc.).
- sequential input data can include time-series data (e.g., sensor data versus time or imagery captured at different times).
- Example recurrent neural networks may include long short-term memory (LSTM) recurrent neural networks, gated recurrent units, bidirectional recurrent neural networks, continuous time recurrent neural networks, neural history compressors, echo state networks, Elman networks, Jordan networks, recursive neural networks, Hopfield networks, fully recurrent networks, sequence-to-sequence configurations, etc.
- ML model 252 can be or include one or more convolutional neural networks.
- a convolutional neural network can include one or more convolutional layers that perform convolutions over input data using learned filters.
- ML model 252 may determine a message value for each of suggested messages 122 generated by response generation module 242. In these examples, ML model 252 may compare the message value for each of suggested messages 122 to determine whether to display suggested messages 122 to a user. In other words, ML model 252 may only output, for display to a user, the subset of suggested messages 122 satisfying a message value condition. ML model 252 may define a message value condition as a confidence rating ML model 252 develops throughout training.
- ML model 252 may establish a message value condition for each message tone of messages tones 118.
- ML model 252 may establish a message value condition for a message tone of message tones 118 based on a message value ML model 252 determines as a threshold message value (e.g., threshold confidence rating associated with a selected message tone).
- ML model 252 may determine a threshold message value included in the message value condition based on factors and/or parameters used in training ML model 252 to compose messages in message tones 118 (e.g., plain text rules of message tones 118, structural particularities of message tones 118, language used in examples of message tones 118).
- ML model 252 may select one or more generated suggested messages to output based on whether the message value assigned to each generated suggested message satisfies the message value threshold included in the message value condition.
- For example, for each of suggested messages 122, ML model 252 may determine a message value from 0 to 1. In this example, a message value of 0 may be associated with a lowest confidence in the message being useful or of interest to a user and a message value of 1 may be associated with a highest confidence in the message being useful or of interest to a user.
- Responsive to determining that a message value satisfies a message value threshold, ML model 252 may transmit the corresponding suggested message to UI module 204 for display to a user.
- the message value may satisfy the threshold when the message value is equal to or greater than a message value threshold.
- If ML model 252 determines, for a particular suggested message, a message value of 0.8, and if the message value threshold is 0.7, then the particular suggested message may be in the subset of suggested messages that ML model 252 transmits to UI module 204 for display to a user.
- If the message value does not satisfy the message value threshold, the particular suggested message may not be in the subset of suggested messages for display, and computing device 200 may execute a different action, such as discarding the particular suggested message.
- UI module 204 may output, for display by UI device 202, at least the subset of suggested messages in an order based on the message value for each suggested message of at least the subset of suggested messages.
- ML model 252 may excel at performing NLP tasks, such as generating text and other content. However, with respect to specific types of content (e.g., specific information types), ML model 252 may have an increased likelihood of generating false or inaccurate information. To address the issue of generating false information, ML model 252 may be configured to exclude the generation of content relating to a set of excluded information types.
- the set of excluded information types may include one or more phone numbers, addresses, web addresses, etc.
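One simple way to exclude generation of the listed information types (phone numbers, addresses, web addresses) is to screen generated text against patterns and discard matches. The regular expressions below are illustrative heuristics, not the disclosed mechanism.

```python
import re

EXCLUDED_PATTERNS = {
    "phone_number": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "web_address": re.compile(r"https?://\S+|www\.\S+", re.IGNORECASE),
    "street_address": re.compile(r"\b\d{1,5}\s+\w+\s+(street|st|avenue|ave|road|rd)\b", re.IGNORECASE),
}

def contains_excluded_information(text: str) -> bool:
    """Return True if the generated text matches any excluded information type."""
    return any(pattern.search(text) for pattern in EXCLUDED_PATTERNS.values())

print(contains_excluded_information("Call me at 415-555-0100"))  # True
print(contains_excluded_information("Shall we dine tonight?"))   # False
```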
- Computing device 200 may include tone training module 240 that trains (e.g., pre-trains, fine-tunes, etc.) ML model 252.
- Tone training module 240 may pre-train ML model 252 on a large and diverse corpus of text from various sources, such as websites, books, articles, and other text repositories. This dataset may cover a wide range of topics and domains to ensure ML model 252 learns diverse linguistic patterns and contextual relationships.
- Tone training module 240 may train ML model 252 to optimize an objective function.
- the objective function may be or include a loss function, such as cross-entropy loss, that compares (e.g., determines a difference between) output data generated by the model from the training data and labels (e.g., ground-truth labels) associated with the training data.
- the objective function of ML model 252 may be to correctly predict the next word in a sequence of words or correctly fill in missing words as much as possible.
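- A toy illustration of this kind of objective is sketched below, assuming PyTorch is available; the random logits stand in for language-model output, and cross-entropy loss compares them to ground-truth next tokens. The dimensions and tensors are hypothetical and purely illustrative.

```python
import torch
import torch.nn.functional as F

vocab_size, seq_len, batch = 100, 8, 2
logits = torch.randn(batch, seq_len, vocab_size, requires_grad=True)  # stand-in for model output
targets = torch.randint(0, vocab_size, (batch, seq_len))              # ground-truth next tokens

# Cross-entropy loss over next-token predictions.
loss = F.cross_entropy(logits.reshape(-1, vocab_size), targets.reshape(-1))
loss.backward()  # gradients from this loss would drive an optimizer step during training
print(f"cross-entropy loss: {loss.item():.3f}")
```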
- suggestion module 208 may decrease the time and/or effort required to compose a message to send to another user while increasing the likelihood that suggested messages will actually be of value to the user.
- tone training module 240 may use ML model 252 to learn message tones 118.
- tone training module 240 may use ML model 252 that has been trained to understand language, such as a large language model (LLM), to learn message tone 118A.
- Tone training module 240 may receive target statements from database 232. Tone training module 240 may receive target statements that include example statements associated with message tone 118A.
- Tone training module 240 may also receive target statements that include example statements not associated with message tone 118A but include the same context or intent of corresponding example statements associated with message tone 118A. Tone training module 240 may train ML model 252 by analyzing textual differences between example statements associated with message tone 118A and corresponding example statements not associated with message tone 118A. [0074] In some examples, tone training module 240 may categorize message tones with categorization module 246. Categorization module 246 may categorize message tones 118 as either characters, emotions, or tactics. Categorization module 246 may assist the training of ML model 252.
- ML model 252 may use categories included in categorization module 246 to direct the learning of a message tone of message tones 118 according to a corresponding category specified in categorization module 246. For example, categorization module 246 may categorize “Shakespeare” message tone 118A as a character and “Excited” message tone 118B as an emotion. ML model 252 may use information associated with a character category when learning to compose messages in “Shakespeare” message tone 118A. Similarly, ML model 252 may use information associated with an emotion category when learning to compose messages in “Excited” message tone 118B. [0075] Reinforcement training module 244 may also receive refreshers from database 232.
- Reinforcement training module 244 may receive refreshers that include one or more example statements written in a trained message tone of message tones 118. Reinforcement training module 244 may receive refreshers not included in target statements used to initially train ML model 252. Reinforcement training module 244 may continuously or periodically train ML model 252 with the received refreshers. For example, reinforcement training module 244 may receive refreshers including example nursery rhymes for a nursery rhyme message tone. Reinforcement training module 244 may apply the refreshers to remind ML model 252 of the textual particularities associated with the nursery rhyme message tone. [0076] Database 232 may continuously or periodically update or change the refreshers used by reinforcement training module 244. In some instances, database 232 may update or change the refreshers based on a user generated example provided via UI module 204.
- database 232 may update or change refreshers by receiving the refreshers from an external computing device or computing system.
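- One way the training data described above (tone categories, paired target statements, and refreshers) might be organized is sketched below; the `ToneDefinition` structure and the example statements are hypothetical and are not part of the disclosure.

```python
from dataclasses import dataclass, field
from enum import Enum

class ToneCategory(Enum):
    CHARACTER = "character"
    EMOTION = "emotion"
    TACTIC = "tactic"

@dataclass
class ToneDefinition:
    name: str
    category: ToneCategory
    # Paired target statements: the same intent expressed without and with the tone.
    paired_examples: list = field(default_factory=list)
    # Refreshers: additional toned examples applied after initial training.
    refreshers: list = field(default_factory=list)

shakespeare = ToneDefinition(
    name="Shakespeare",
    category=ToneCategory.CHARACTER,
    paired_examples=[("Want to grab dinner?", "Wouldst thou sup with me this eve?")],
    refreshers=["Shall I compare thee to a summer's day?"],
)
excited = ToneDefinition(name="Excited", category=ToneCategory.EMOTION)

for tone in (shakespeare, excited):
    print(tone.name, tone.category.value, len(tone.paired_examples), "paired example(s)")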
- reinforcement training module 244 of tone training module 240 may fine-tune ML model 252 by using the feedback in the training process.
- UI device 202 may receive a user input – via input device 234 – that selects feedback (e.g., thumbs up, thumbs down, etc.) relating to at least one suggested message of the set of suggested messages.
- the feedback may indicate whether the suggested messages are accurate or inaccurate, correct or incorrect, high quality or low quality, etc.
- UI module 204 may send application module 206 the received input indicating feedback related to at least one suggested message of the set of suggested messages.
- Application module 206 may transmit the feedback to reinforcement training module 244.
- Reinforcement training module 244 may obtain this feedback from the user of computing device 200 (as well as feedback from other users) and use the feedback for training. For example, reinforcement training module 244 may convert the feedback into labeled data for supervised training. Additionally or alternatively, reinforcement training module 244 may fine-tune ML model 252 by monitoring the relationship between the performance of ML model 252 and user feedback, and iterate the fine-tuning process as necessary (e.g., to receive more positive user feedback and less negative user feedback). In this way, the techniques of this disclosure may establish a feedback loop that continuously improves the quality of the output of ML model 252.
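- A minimal sketch of converting thumbs-up/thumbs-down feedback into labeled data, as described above, is shown below; the event fields and the `feedback_to_labels` helper are hypothetical names introduced only for illustration.

```python
def feedback_to_labels(events):
    """Convert thumbs-up/thumbs-down feedback events into labeled examples
    that a supervised fine-tuning step could consume."""
    labeled = []
    for event in events:
        if event["feedback"] == "thumbs_up":
            label = 1.0
        elif event["feedback"] == "thumbs_down":
            label = 0.0
        else:
            continue  # free-form feedback might instead be routed to manual review
        labeled.append({"text": event["suggestion"], "tone": event["tone"], "label": label})
    return labeled

events = [
    {"suggestion": "Wouldst thou sup with me?", "tone": "Shakespeare", "feedback": "thumbs_up"},
    {"suggestion": "Dinner perchance?", "tone": "Shakespeare", "feedback": "thumbs_down"},
]
print(feedback_to_labels(events))
```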
- Context analysis module 238 may determine the context of draft message 114 and/or message thread 112, as depicted in FIG. 1. Context analysis module 238 may determine the context of draft message 114 by preserving the original intent of draft message 114, not simply the content of draft message 114. For example, context analysis module 238 may use the textual understanding of ML model 252 (e.g., LLM) to determine the context of draft message 114 in view of message thread 112.
- Context analysis module 238 may determine the context or intent of draft message 114, such as the identity of the recipient of draft message 114, a time-of-day draft message 114 was composed, an application of application modules 106 being used to send and receive message thread 112, a message tone used by one or more messages of message thread 112, or any other factors indicating an intent associated with draft message 114. Context analysis module 238 may also determine the context or intent of messages of message thread 112, such as the identity of users communicating in message thread 112, the tone of messages of message thread 112, recent messages of message thread 112, punctuation used in messages of message thread 112, or any other factors indicating an intent associated with one or more messages of message thread 112.
- Response generation module 242 may use ML model 252 to apply a message tone of message tones 118 to a determined context of draft message 114 and/or message thread 112. For example, response generation module 242 may generate suggested messages 122 by applying a message tone trained by tone training module 240 to a context determined by context analysis module 238. [0080] Response generation module 242 may include response elaboration module 248. Response elaboration module 248 may use context analysis module 238 to understand the context of message thread 112 (e.g., information included in one or more recent messages included in message thread 112, a tone of one or more recent messages included in message thread 112, users involved in message thread 112, etc.).
- context analysis module 238 may apply ML model 252 to understand the context of message thread 112.
- Context analysis module 238 may determine the context of message thread 112 based on parameters associated with the intent of messages included in message thread 112. For example, context analysis module 238 may determine the context of message thread 112 based on a time-of-day recent or previous messages in message thread 112 were sent, the users involved in message thread 112, a tone of one or more messages included in message thread 112, etc.
- Context analysis module 238 may send response elaboration module 248 the determined context of message thread 112.
- Response elaboration module 248 may apply ML model 252 to generate suggested messages 122.
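- One plausible way to combine a determined context with a selected tone when invoking a language model is sketched below; the prompt wording, the `build_tone_prompt` helper, and the `generate_suggestions` placeholder are hypothetical and stand in for whatever model-serving interface the system actually uses.

```python
def build_tone_prompt(thread, draft, tone, num_suggestions=3):
    """Assemble a prompt that asks a language model to rewrite the draft in the
    selected tone while preserving its intent, using the thread as context."""
    history = "\n".join(f"{sender}: {text}" for sender, text in thread)
    return (
        f"Recent conversation:\n{history}\n\n"
        f"Draft reply: {draft}\n\n"
        f"Rewrite the draft reply in a {tone} tone. Preserve the original intent. "
        f"Do not include phone numbers, street addresses, or web addresses. "
        f"Return {num_suggestions} candidate replies."
    )

def generate_suggestions(prompt):
    # Placeholder for a call into the trained model.
    return ["Wouldst thou join me for supper this eve?"]

thread = [("Alex", "Are you free tonight?"), ("Me", "I think so, why?")]
print(generate_suggestions(build_tone_prompt(thread, "Wanna grab dinner?", "Shakespeare")))
```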
- FIGS. 3A-3D are conceptual diagrams illustrating example graphical user interfaces for providing suggested messages in accordance with one or more techniques of this disclosure. The example of FIGS. 3A-3D is described below within the context of FIG. 2. As shown in FIGS. 3A-3D, GUIs 310A-310D may include message threads 312A-312D and draft messages 314A-314D displayed in text editing regions 360A-360D. Although not explicitly depicted, the examples of FIGS. 3A-3D may output an option granting computing device 200 consent to perform the techniques described herein.
- the user operating computing device 200 may have control over how information is collected about the user and used by computing device 200 and/or other computing systems or computing devices described herein.
- text editing region 360A includes draft message 314A stating “Wanna grab dinner?”. Draft message 314A may include text generated by a user interacting with computing device 200.
- Draft message 314A may include text of characters selected with keyboard 354A.
- UI device 202 may detect inputs of a user selecting characters included in keyboard 354A.
- UI device 202 may send inputs to application module 206.
- Application module 206 may instruct UI device 202 – via UI module 204 – to update GUI 310A to output the selected characters in text editing region 360.
- Text editing region 360 may include one or more characters that represent draft message 314A.
- application module 206 may instruct UI device 202 to output GUI 310A that includes message tone change request graphical element 316A.
- UI device 202 may receive a signal indicating a user selecting message tone change request graphical element 316A.
- UI module 204 may send draft message 314A and message thread 312A to application module 206.
- Application module 206 may transmit draft message 314A and message thread 312A to suggestion module 208.
- suggestion module 208 may determine the context of draft message 314A and messages of message thread 312A to suggest messages of draft message 314A composed in one or more message tones.
- UI device 202 has received a signal that a user interacting with computing device 200 has selected message tone change request graphical element 316B.
- application module 206 may instruct UI module 204 to update GUI 310B output by UI device 202.
- UI device 202 may output GUI 310B to include message tones 318A-318N (collectively referred to as “message tones 318”).
- “Shakespeare” message tone 318A may be selected by a user interacting with GUI 310B.
- one or more of message tones 318 may be selected.
- UI device 202 may detect an input from a user operating computing device 200 indicating one or more of message tones 318 have been selected.
- UI module 204 may send draft message 314B, message thread 312B, and one or more selected message tones 318 to application module 206.
- Application module 206 may transmit draft message 314B, message thread 312B, and the one or more selected message tones 318 to suggestion module 208.
- context analysis module 238 of suggestion module 208 may determine a context of draft message 314B and/or message thread 312B.
- Context analysis module 238 may determine the context of draft message 314B and/or message thread 312B based on any relevant information indicating an intent underlying draft message 314B and/or message thread 312B, respectively.
- Context analysis module 238 may determine the context of draft message 314B based on, for example, message thread 312B, a tone of one or more messages included in message thread 312B, a time-of-day draft message 314B was composed, etc. Context analysis module 238 may determine the context of message thread 312B based on, for example, the identity of users communicating in message thread 312B, the tone of messages of message thread 312B, recent messages of message thread 312B, punctuation used in messages of message thread 312B, or any other factors indicating an intent associated with one or more messages of message thread 312B.
- Response generation module 242 may generate a set of suggested messages 322A- 322N (collectively, “suggested messages 322”).
- Response generation module 242 may use ML model 252 to generate suggested messages 322 according to one or more selected message tones of message tones 318.
- Tone training module 240 may train ML model 252 to learn how to generate suggested messages 322 according to message tones 318, as discussed in more detail above.
- suggestion module 208 may send application module 206 the suggested messages 322.
- Application module 206 may transmit suggested messages 322 to UI module 204.
- UI module 204 may update UI device 202 with GUI 310B.
- UI device 202 may display suggested messages 322 in message selection area 320. [0086] In some examples, UI device 202 may output GUI 310B that includes feedback graphical elements 356. UI device 202 may detect an input of a user interacting with (e.g., selecting) one or more of feedback graphical elements 356 to provide feedback relating to suggested messages 322. UI module 204 may send the inputs associated with the feedback related to suggested messages 322 to application module 206. Application module 206 may transmit the feedback to tone training module 240 of suggestion module 208. Reinforcement training module 244 of tone training module 240 may use the user feedback to train ML model 252. For example, reinforcement training module 244 may train ML model 252 to maximize positive user feedback and minimize negative user feedback.
- feedback graphical elements 356 may include a “thumbs up” selection icon, a “thumbs down” selection icon, or a “manual feedback” selection icon.
- Application module 206 may instruct UI module 204 to output feedback graphical elements 356 in GUI 310B.
- UI device 202 may allow a user interacting with GUI 310B to select one or more of the icons included in feedback graphical elements 356.
- Responsive to interacting with feedback graphical elements 356, UI module 204 may send the feedback included in the user interaction with feedback graphical elements 356 to application module 206.
- Application module 206 may transmit the feedback to reinforcement training module 244 of tone training module 240.
- Reinforcement training module 244 may use the information included in the feedback received via feedback graphical elements 356 to further refine the training of ML model 252 with respect to the one or more selected message tones of message tones 318.
- application module 206 may instruct UI module 204 to output GUI 310C via UI device 202.
- UI device 202 may output the selected message of suggested messages 322 in text editing region 360C.
- Draft message 314C of FIG. 3C includes text of the selected message of suggested messages 322.
- UI device 202 may also output keyboard 354C.
- UI device 202 may output keyboard 354C to provide the user interacting with computing device 200 the opportunity to edit or manually change text included in text editing region 360C.
- UI device 202 may detect an input of a user selecting send button 358.
- UI module 204 may send the input of the user selecting send button 358 to application module 206.
- Send button 358 may interact with application module 206 to include draft message 314C in message thread 312C, and effectively send draft message 314C to the other users involved in message thread 312C.
- application module 206 may then instruct UI module 204 to update GUI 310C to include draft message 314C in message thread 312C.
- application module 206 may instruct UI module 204 to output GUI 310D via UI device 202.
- UI device 202 outputs GUI 310D to display draft message 314C (e.g., the selected message of suggested messages 322) as part of message thread 312D.
- UI device 202 outputs GUI 310D to display text editing region 360D as including no text.
- UI device 202 outputs GUI 310D to display keyboard 354D to allow a user to select characters.
- UI device 202 may detect input of a user selecting characters included in keyboard 354D.
- UI module 204 may send the detected inputs associated with keyboard 354D to application module 206.
- Application module 206 may then instruct UI module 204 to update GUI 310D to include the inputs associated with keyboard 354D in text editing region 360D.
- suggestion module 208 of computing device 200 may obtain message thread 112 (or another message thread) (402). Message thread 112 may include a plurality of messages exchanged between a user operating computing device 200 and a user operating one or more other computing devices. Message thread 112 may include a plurality of messages exchanged via a communication platform provided by application module 206.
- Suggestion module 208 may apply ML model 252 (e.g., a large language model) to draft message 114 to generate suggested messages 122 according to one or more selected message tones of message tones 118 (404).
- Response generation module 242 of suggestion module 208 may apply ML model 252 to generate suggested messages 122 according to one or more selected message tones of message tones 118, as well as a determined context of draft message 114 (e.g., the identity of the recipient of draft message 114, a time-of-day draft message 114 was composed, an application of application modules 106 being used to send and receive message thread 112, a message tone used by one or more messages of message thread 112, or any other factors indicating an intent associated with draft message 114) and/or a determined context of message thread 112 (e.g., the identity of users communicating in message thread 112, the tone of one or more messages of message thread 112, recent or previous messages of message thread 112, punctuation used in one or more messages of message thread 112, or any other factors indicating an intent associated with one or more messages of message thread 112).
- Tone training module 240 of suggestion module 208 may train ML model 252 on how to compose suggested messages 122 according to message tones 118.
- Response generation module 242 may apply the trained ML model 252, along with a context determined by context analysis module 238, to generate suggested messages 122.
- Suggestion module 208 may send at least a subset of the generated suggested messages 122 to application module 206.
- Application module 206 may transmit suggested messages 122 to user interface module 204 to output suggested messages 122 via UI device 202.
- Output device 236 of user interface devices 202 may output at least a subset of suggested messages 122 (406).
- Output devices 236 may output suggested messages 122 in message selection area 120 of GUI 110.
- Input devices 234 of user interface devices 202 may receive an input that selects a suggested message of the outputted suggested messages 122 (408). Responsive to input devices 234 receiving an input selecting a suggested message of suggested messages 122, UI module 204 may send application module 206 the selected message of suggested messages 122. Application module 206 may instruct UI module 204 to update user interface device 202. UI device 202 may output the selected message of suggested messages 122 in text editing region 160 (410). User interface device 202 may replace draft message 114 with the selected message of suggested messages 122 via instructions user interface module 204 receives from application module 206.
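- A compact, self-contained sketch of this overall flow (generate suggestions for the draft at 404, order them for display at 406, accept a selection at 408, and replace the draft with the selected message at 410) is shown below; the `generate_suggestions_for_draft` stub and the `choose` callback are hypothetical stand-ins for the trained model and the user interface, respectively.

```python
def generate_suggestions_for_draft(thread, draft, tone):
    # Stand-in for applying the trained model to the draft, the thread context,
    # and the selected tone; returns (text, message value) pairs.
    return [("Wouldst thou join me for supper this eve?", 0.9),
            ("Shall we dine together anon?", 0.8)]

def handle_tone_change(thread, draft, tone, choose=lambda options: options[0]):
    """Generate and order suggestions, accept a selection, and return the
    selected message that replaces the draft in the text editing region."""
    suggestions = sorted(generate_suggestions_for_draft(thread, draft, tone),
                         key=lambda pair: pair[1], reverse=True)
    selected_text, _ = choose(suggestions)  # a UI would present these for the user to pick
    return selected_text

thread = [("Alex", "Are you free tonight?")]
print(handle_tone_change(thread, "Wanna grab dinner?", "Shakespeare"))
```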
- a computing device and/or a computing system analyzes information (e.g., wireless ID tags and respective information, locations, context, motion, etc.) associated with a computing device and a user of the computing device, only if the computing device receives permission from the user of the computing device to analyze the information.
- a computing device or computing system can collect or may make use of information associated with a user
- the user may be provided with an opportunity to provide input to control whether programs or features of the computing device and/or computing system can collect and make use of user information (e.g., information about a user’s or user device’s current location, such as by GPS or wireless ID tag, etc.), or to dictate whether and/or how the device and/or system may receive content that may be relevant to the user.
- certain data may be treated in one or more ways before it is stored or used by the computing device and/or computing system, so that personally identifiable information is removed.
- a user’s identity and image may be treated so that no personally identifiable information can be determined about the user, or a user’s geographic location may be generalized where location information is obtained (such as to a city, ZIP code, or state level), so that a particular location of a user cannot be determined.
- the user may have control over how information is collected about the user and used by the computing device and computing system.
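- As one purely illustrative sketch of the data treatment described above (dropping direct identifiers, generalizing location to city level, and redacting digit sequences before storage), consider the following; the record fields and the `treat_user_record` helper are hypothetical.

```python
import re

def treat_user_record(record):
    """Drop direct identifiers, generalize precise location to city level,
    and redact digit sequences before the record is stored."""
    treated = dict(record)
    treated.pop("name", None)
    treated.pop("email", None)
    treated["location"] = record.get("location", {}).get("city", "unknown")
    treated["notes"] = re.sub(r"\d", "#", record.get("notes", ""))
    return treated

record = {
    "name": "A. User",
    "email": "user@example.com",
    "location": {"street": "123 Main St", "city": "Springfield"},
    "notes": "Call 555-0100 after dinner",
}
print(treat_user_record(record))
```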
- Example A1 A method includes obtaining, by a computing device, a message thread that includes a plurality of messages between a first user associated with the computing device and a second user; generating, by the computing device, a set of suggested messages by at least applying a machine learning model to a draft message, wherein the set of suggested messages includes at least one message having a message tone from a plurality of message tones; outputting, by the computing device and for display, a graphical user interface that includes the at least one suggested message from the set of suggested messages; receiving, by the computing device, a user input that selects a suggested message from the at least one suggested message as a selected message; and outputting, by the computing device, an updated graphical user interface that includes the selected message in place of the draft message.
- Example A2 The method of example A1, further includes determining a context of the draft message, wherein generating the set of suggested messages includes applying the machine learning model to both the draft message and the context of the draft message.
- Example A3 The method of any of examples A1 and A2, further includes outputting, for display, an indication of the plurality of message tones to apply to the draft message; and receiving, by the computing device, a user input that selects the message tone.
- Example A4 The method of any of examples A1 through A3, further includes determining a context of the message thread, wherein generating the set of suggested messages includes applying the machine learning model to both the draft message and the context of the message thread.
- Example A5 The method of any of examples A1 through A4, wherein applying the machine learning model comprises determining a message value for each suggested message of the set of suggested messages, and wherein the message value indicates a confidence rating associated with how closely each suggested message matches the message tone.
- Example A6 The method of example A5, wherein each suggested message of the set of suggested messages satisfies a message value condition, and wherein the message value condition includes a threshold confidence rating associated with the message tone.
- Example A7 The method of example A6, wherein outputting the plurality of messages includes outputting, by the computing device and for display, the set of suggested messages in an order based on the message value for each suggested message of the set of suggested messages.
- Example A8 The method of any of examples A1 through A7, wherein at least one suggested message from the set of suggested messages includes at least one of text, emoji, emoticons, images, reactions, animations, or videos.
- Example A9 The method of any of examples A1 through A8, wherein each message tone of the plurality of message tones is categorized as either a character, an emotion, or a tactic.
- Example A10 The method of any of examples A1 through A9, further includes for each respective message tone from the plurality of message tones: receiving a description of the respective message tone, wherein the description of the respective message tone includes one or more of a plurality of examples associated with the respective message tone and a plurality of rules associated with the respective message tone; and training the machine learning model using the description of the respective message tone.
- Example A11 The method of any of examples A1 through A10, further includes receiving, by the computing device, a user input that selects feedback relating to at least one suggested message from the set of suggested messages; and training the machine learning model based on the feedback.
- Example A12 The method of any of examples A1 through A11, wherein applying the machine learning model comprises excluding generation of content relating to an excluded set of information types comprising one or more of phone numbers, addresses, or web addresses.
- Example A13 The method of any of examples A1 through A12, wherein the machine learning model is a large language model.
- Example A14 The method of any of examples A1 through A13, wherein the second user is associated with the computing device.
- Example A15 The method of any of examples A1 through A13, wherein the computing device is a first computing device, and wherein the second user is associated with a second computing device different from the computing device.
- a device comprises at least one processor, a display, and a storage device that stores instructions executable by the at least one processor.
- the instructions executable by the at least one processor may be configured to obtain a message thread that includes a plurality of messages between a first user associated with the device and a second user; generate a set of suggested messages by at least applying a machine learning model to a draft message, wherein the set of suggested messages includes at least one message having a message tone from a plurality of message tones; output, for display, a graphical user interface that includes the at least one suggested message from the set of suggested messages; receive a user input that selects a suggested message from the at least one suggested message as a selected message; and output an updated graphical user interface that includes the selected message in place of the draft message.
- EXAMPLE B2 The device of example B1 wherein the at least one processor is configured to: determine a context of the draft message, wherein to generate the set of suggested messages includes applying the machine learning model to both the draft message and the context of the draft message.
- EXAMPLE B3 The device of any combination of examples B1 through B2 wherein the at least one processor is further configured to: output, for display, an indication of the plurality of message tones to apply to the draft message; and receive a user input that selects the message tone.
- EXAMPLE B4 The device of any combination of examples B1 through B3 wherein the at least one processor is configured to: determine a context of the message thread, wherein to generate the set of suggested messages includes applying the machine learning model to both the draft message and the context of the message thread.
- EXAMPLE B5 The device of any combination of examples B1 through B4 wherein applying the machine learning model comprises determining a message value for each suggested message of the set of suggested messages, and wherein the message value indicates a confidence rating associated with how closely each suggested message matches the message tone.
- EXAMPLE B6 The device of any combination of examples B1 through B5 wherein each suggested message of the set of suggested messages satisfies a message value condition, and wherein the message value condition includes a threshold confidence rating associated with the message tone.
- EXAMPLE B7 The device of any combination of examples B1 through B6 wherein to output the plurality of messages includes outputting, for display, the set of suggested messages in an order based on the message value for each suggested message of the set of suggested messages.
- EXAMPLE B8 The device of any combination of examples B1 through B7 wherein outputting the plurality of messages includes displaying, by the computing device, the set of suggested messages in an order based on the message value for each suggested message of the set of suggested messages.
- EXAMPLE B9 The device of any combination of examples B1 through B8 wherein at least one suggested message from the set of suggested messages includes at least one of text, emoji, emoticons, images, reactions, animations, or videos.
- EXAMPLE B10 The device of any combination of examples B1 through B9 wherein each message tone of the plurality of message tones is categorized as either a character, an emotion, or a tactic.
- EXAMPLE B11 The device of any combination of examples B1 through B10 wherein the at least one processor is further configured to: receive a user input that selects feedback relating to at least one suggested message from the set of suggested messages; and train the machine learning model based on the feedback.
- EXAMPLE B12 The device of any combination of examples B1 through B11 wherein applying the machine learning model comprises excluding generation of content relating to an excluded set of information types comprising one or more of phone numbers, addresses, or web addresses.
- EXAMPLE B13 The device of any combination of examples B1 through B12 wherein the machine learning model is a large language model.
- Example B14 The device of any of examples B1 through B13, wherein the second user is associated with the computing device.
- Example B15 The device of any of examples B1 through B13, wherein the computing device is a first computing device, and wherein the second user is associated with a second computing device different from the computing device.
- EXAMPLE C1 A computer-readable storage medium encoded with instructions, that when executed, cause at least one processor of a computing device to obtain a message thread that includes a plurality of messages between a first user associated with a computing device and a second user.
- the instructions may further cause the at least one processor to generate a set of suggested messages by at least applying a machine learning model to a draft message, wherein the set of suggested messages includes at least one message having a message tone from a plurality of message tones.
- the instructions may further cause the at least one processor to output, for display, a graphical user interface that includes the at least one suggested message from the set of suggested messages.
- the instructions may further cause the at least one processor to receive a user input that selects a suggested message from the at least one suggested message as a selected message.
- the instructions may further cause the at least one processor to output an updated graphical user interface that includes the selected message in place of the draft message.
- EXAMPLE C2 The computer-readable storage medium of example C1 wherein the instructions configure the at least one processor to: determine a context of the draft message, wherein generating the set of suggested messages includes applying the machine learning model to both the draft message and the context of the draft message.
- EXAMPLE C3 The computer-readable storage medium of any combination of examples C1 through C2 wherein the instructions configure the at least one processor to: output, for display, an indication of the plurality of message tones to apply to the draft message; and receive a user input that selects the message tone.
- EXAMPLE C4 The computer-readable storage medium of any combination of examples C1 through C3 wherein the instructions configure the at least one processor to: determine a context of the message thread, wherein generating the set of suggested messages includes applying the machine learning model to both the draft message and the context of the message thread.
- EXAMPLE C5 The computer-readable storage medium of any combination of examples C1 through C4 wherein the instructions configure the at least one processor to: determine a message value for each suggested message of the set of suggested messages, and wherein the message value indicates a confidence rating associated with how closely each suggested message matches the message tone.
- EXAMPLE C6 The computer-readable storage medium of any combination of examples C1 through C5 wherein each suggested message of the set of suggested messages satisfies a message value condition, and wherein the message value condition includes a threshold confidence rating associated with the message tone.
- EXAMPLE C7 The computer-readable storage medium of example C6 wherein outputting the plurality of messages includes outputting, for display, the set of suggested messages in an order based on the message value for each suggested message of the set of suggested messages.
- EXAMPLE C8 The computer-readable storage medium of any combination of examples C1 through C7 wherein at least one suggested message from the set of suggested messages includes at least one of text, emoji, emoticons, images, reactions, animations, or videos.
- EXAMPLE C9 The computer-readable storage medium of any combination of examples C1 through C8 wherein each message tone of the plurality of message tones is categorized as either a character, an emotion, or a tactic.
- EXAMPLE C10 The computer-readable storage medium of any combination of examples C1 through C9 wherein the instructions configure the at least one processor to: for each respective message tone from the plurality of message tones: receive a description of the respective message tone, wherein the description of the respective message tone includes one or more of a plurality of examples associated with the respective message tone and a plurality of rules associated with the respective message tone; and train the machine learning model using the description of the respective message tone.
- EXAMPLE C11 The computer-readable storage medium of any combination of examples C1 through C10 wherein the instructions configure the at least one processor to: receive a user input that selects feedback relating to at least one suggested message from the set of suggested messages; and train the machine learning model based on the feedback.
- EXAMPLE C12 The computer-readable storage medium of any combination of examples C1 through C11 wherein the instructions configure the at least one processor to exclude generation of content relating to an excluded set of information types comprising one or more of phone numbers, addresses, or web addresses.
- EXAMPLE C13 The computer-readable storage medium of any combination of examples C1 through C12 wherein the machine learning model is a large language model.
- Example C14 The computer-readable storage medium of any of examples C1 through C13, wherein the second user is associated with the computing device.
- Example C15 The computer-readable storage medium of any of examples C1 through C13, wherein the computing device is a first computing device, and wherein the second user is associated with a second computing device different from the computing device.
- EXAMPLE D1 A system comprising at least one processor, a network interface, and a storage device that stores instructions executable by the at least one processor to obtain a message thread that includes a plurality of messages between a first user associated with a computing device and a second user.
- the instructions may further cause the at least one processor to generate a set of suggested messages by at least applying a machine learning model to a draft message, wherein the set of suggested messages includes at least one message having a message tone from a plurality of message tones.
- the instructions may further cause the at least one processor to send, to the computing device via the network interface, at least one suggested message from the set of suggested messages.
- EXAMPLE D2 The system of example D1 wherein the instructions configure the at least one processor to: determine a context of the draft message, wherein generating the set of suggested messages includes applying the machine learning model to both the draft message and the context of the draft message.
- EXAMPLE D3 The system of any combination of examples D1 through D2 wherein the instructions configure the at least one processor to: receive, from the computing device via the network interface, a user input that selects the message tone of the plurality of message tones.
- EXAMPLE D4 The system of any combination of examples D1 through D3 wherein the instructions configure the at least one processor to: determine a context of the message thread, wherein generating the set of suggested messages includes applying the machine learning model to both the draft message and the context of the message thread.
- EXAMPLE D5 The system of any combination of examples D1 through D4 wherein the instructions configure the at least one processor to: determine a message value for each suggested message of the set of suggested messages, and wherein the message value indicates a confidence rating associated with how closely each suggested message matches the message tone.
- EXAMPLE D6 The system of any combination of examples D1 through D5 wherein each suggested message of the set of suggested messages satisfies a message value condition, and wherein the message value condition includes a threshold confidence rating associated with the message tone.
- EXAMPLE D7 The system of any combination of examples D1 through D6 wherein at least one suggested message from the set of suggested messages includes at least one of text, emoji, emoticons, images, reactions, animations, or videos.
- EXAMPLE D8 The system of any combination of examples D1 through D7 wherein each message tone of the plurality of message tones is categorized as either a character, an emotion, or a tactic.
- EXAMPLE D9 The system of any combination of examples D1 through D8 wherein the instructions configure the at least one processor to: for each respective message tone from the plurality of message tones: receive a description of the respective message tone, wherein the description of the respective message tone includes one or more of a plurality of examples associated with the respective message tone and a plurality of rules associated with the respective message tone; and train the machine learning model using the description of the respective message tone.
- EXAMPLE D10 The system of any combination of examples D1 through D9 wherein the instructions configure the at least one processor to: receive a user input that selects feedback relating to at least one suggested message from the set of suggested messages; and train the machine learning model based on the feedback.
- EXAMPLE D11 The system of any combination of examples D1 through D10 wherein the instructions configure the at least one processor to exclude generation of content relating to an excluded set of information types comprising one or more of phone numbers, addresses, or web addresses.
- EXAMPLE D12 The system of any combination of examples D1 through D11 wherein the machine learning model is a large language model.
- Example D13 The system of any of examples D1 through D12, wherein the second user is associated with the computing device.
- Example D14 The system of any of examples D1 through D12, wherein the computing device is a first computing device, and wherein the second user is associated with a second computing device different from the computing device.
- such computer-readable storage media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage, or other magnetic storage devices, flash memory, or any other storage medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer. Also, any connection is properly termed a computer-readable medium.
- Disk and disc includes compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of a computer-readable medium.
- the techniques described in this disclosure may be implemented, at least in part, in hardware, software, firmware, or any combination thereof.
- various aspects of the described techniques may be implemented within one or more processors, including one or more microprocessors, digital signal processors (DSPs), application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), or any other equivalent integrated or discrete logic circuitry, as well as any combinations of such components.
- processor or “processing circuitry” may generally refer to any of the foregoing logic circuitry, alone or in combination with other logic circuitry, or any other equivalent circuitry.
- a control unit including hardware may also perform one or more of the techniques of this disclosure.
- Such hardware, software, and firmware may be implemented within the same device or within separate devices to support the various techniques described in this disclosure.
- any of the described units, modules or components may be implemented together or separately as discrete but interoperable logic devices. Depiction of different features as modules or units is intended to highlight different functional aspects and does not necessarily imply that such modules or units must be realized by separate hardware, firmware, or software components.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Computational Linguistics (AREA)
- Health & Medical Sciences (AREA)
- Artificial Intelligence (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Signal Processing (AREA)
- General Health & Medical Sciences (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Computer Networks & Wireless Communication (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
A computing device that includes at least one processor and a storage device that stores instructions executable by the at least one processor to: obtain a message thread that includes a plurality of messages between a first user associated with the device and a second user associated with a second device; generate a set of suggested messages by at least applying a machine learning model to a draft message, wherein the set of suggested messages includes at least one message having a message tone from a plurality of message tones; output, for display, a graphical user interface that includes the at least one suggested message from the set of suggested messages; receive a user input that selects a suggested message from the at least one suggested message as a selected message; and output an updated graphical user interface that includes the selected message in place of the draft message.
Description
Docket No.: 1333-481WO01 CHANGING TONE OF TEXT CROSS-REFERENCE TO RELATED APPLICATION [0001] This application claims the benefit of U.S. Provisional Patent Application No. 63/500,189, filed May 4, 2023, which is incorporated by reference herein in its entirety. BACKGROUND [0002] Users may communicate with each other via one or more computing devices. In some cases, the incoming communications (e.g., an email, a text message, a meeting request, etc.) may include textual information. A computing device may enable a user of a computing device to respond to the incoming communication by allowing the user to input textual information (e.g., using an input device), and send the textual information to other users as a response. SUMMARY [0003] In general, described herein are techniques for changing or altering text according to a particular message tone without changing or altering the intent of the original text. A computing device or computing system may leverage the structured understanding of language inputs from a machine learning model (e.g., large language model) to transform inputted text according to a learned message tone. For example, a computing device may receive an input of text as a draft message to be sent in a messaging application. The computing device may receive a request to change the message tone of the draft message. The computing device may determine the context of the draft message. The computing device may instruct the machine learning model to apply changes to the draft message based on the determined context of the draft message and according to one or more message tones. In some instances, the computing device may instruct the machine learning model to generate suggested messages according to the determined context and the one or more message tones included in the message tone change request. The computing device may output one or more of the suggested messages for a user operating the computing device to select. The computing device may replace the draft message with the selected message written in the one or more requested message tones. [0004] In one example, a method includes a computing device obtaining a message thread that includes a plurality of messages between a first user associated with the computing device and a second user. The method may further include the computing
Docket No.: 1333-481WO01 device generating a set of suggested messages by at least applying a machine learning model to a draft message, wherein the set of suggested messages includes at least one message having a message tone from a plurality of message tones. The method may further include the computing device outputting for display a graphical user interface that includes the at least one suggested message from the set of suggested messages. The method may further include the computing device receiving a user input that selects a suggested message from the at least one suggested message as a selected message. The method may further include the computing device outputting an updated graphical user interface that includes the selected message in place of the draft message. [0005] In another example, a device comprises at least one processor, a display, and a storage device that stores instructions executable by the at least one processor. The instructions executable by the at least one processor may be configured to obtain a message thread that includes a plurality of messages between a first user associated with the device and a second user. The instructions executable by the at least one processor may also generate a set of suggested messages by at least applying a machine learning model to a draft message, wherein the set of suggested messages includes at least one message having a message tone from a plurality of message tones. The instructions executable by the at least one processor may also output, via the display, a graphical user interface that includes the at least one suggested message from the set of suggested messages. The instructions executable by the at least one processor may also receive a user input that selects a suggested message from the at least one suggested message as a selected message. The instructions executable by the at least one processor may also output, via the display, an updated graphical user interface that includes the selected message in place of the draft message. [0006] In another example, a computer-readable storage medium encoded with instructions, that when executed, cause at least one processor of a computing device to obtain a message thread that includes a plurality of messages between a first user associated with a computing device and a second user. The instructions may further cause the at least one processor to generate a set of suggested messages by at least applying a machine learning model to a draft message, wherein the set of suggested messages includes at least one message having a message tone from a plurality of message tones. The instructions may further cause the at least one processor to output, for display, a graphical user interface that includes the at least one suggested message from the set of suggested messages. The instructions may further cause the at least one
Docket No.: 1333-481WO01 processor to receive a user input that selects a suggested message from the at least one suggested message as a selected message. The instructions may further cause the at least one processor to output an updated graphical user interface that includes the selected message in place of the draft message. [0007] In yet another example, a system comprising at least one processor, a network interface, and a storage device that stores instructions executable by the at least one processor to obtain a message thread that includes a plurality of messages between a first user associated with a computing device and a second user. The instructions may further cause the at least one processor to generate a set of suggested messages by at least applying a machine learning model to a draft message, wherein the set of suggested messages includes at least one message having a message tone from a plurality of message tones. The instructions may further cause the at least one processor to send, to the computing device via the network interface, at least one suggested message from the set of suggested messages. [0008] The details of one or more examples of the disclosure are set forth in the accompanying drawings and the description below. Other features, objects, and advantages will be apparent from the description and drawings, and from the claims. BRIEF DESCRIPTION OF DRAWINGS [0009] FIG. 1 is a conceptual diagram illustrating an example computing environment and graphical user interface for providing suggested messages written in a selected message tone, in accordance with one or more techniques of this disclosure. [0010] FIG. 2 is a block diagram illustrating a computing device for providing suggested messages, in accordance with one or more techniques of this disclosure. [0011] FIGS. 3A-3D are conceptual diagrams illustrating example graphical user interfaces for providing suggested messages in accordance with one or more techniques of this disclosure. [0012] FIG. 4 is a flowchart illustrating an example operation for providing suggested messages in accordance with one or more techniques of this disclosure.
Docket No.: 1333-481WO01 DETAILED DESCRIPTION [0013] FIG. 1 is a conceptual diagram illustrating an example computing environment 101 and GUI 110 for providing suggested messages written in a selected message tone, in accordance with one or more techniques of this disclosure. As shown in the example of FIG. 1, computing environment 101 includes computing device 100. Examples of computing device 100 may include, but are not limited to, portable, mobile, or other devices, such as mobile phones (including smartphones), wearable computing devices (e.g., smart watches, smart glasses, etc.) laptop computers, desktop computers, tablet computers, smart television platforms, server computers, mainframes, infotainment systems (e.g., vehicle head units), etc. In some examples, computing device 100 may represent a cloud computing system that provides one or more services via a network. That is, in some examples, computing device 100 may be a distributed computing system. [0014] As shown in the example of FIG. 1, computing device 100 includes one or more user interface (UI) devices (“UI device 102”). UI device 102 of computing device 100 may be configured to function as an input device and/or an output device for computing device 100. UI device 102 may be implemented using various technologies. For instance, UI device 102 may be configured to receive input from a user through tactile, audio, and/or video feedback. Examples of input devices include a presence-sensitive display, a presence-sensitive or touch-sensitive input device, a mouse, a keyboard, a voice responsive system, video camera, microphone or any other type of device for detecting a command from a user. In some examples, a presence-sensitive display includes a touch- sensitive or presence-sensitive input screen, such as a resistive touchscreen, a surface acoustic wave touchscreen, a capacitive touchscreen, a projective capacitance touchscreen, a pressure sensitive screen, an acoustic pulse recognition touchscreen, or another presence-sensitive technology. That is, UI device 102 of computing device 100 may include a presence-sensitive device that may receive tactile input from a user of computing device 100. UI device 102 may receive indications of the tactile input by detecting one or more gestures from the user (e.g., when the user touches or points to one or more locations of UI device 102 with a finger or a stylus pen). [0015] UI device 102 may additionally or alternatively be configured to function as an output device by providing output to a user using tactile, audio, or video stimuli. Examples of output devices include a sound card, a video graphics adapter card, or any of one or more display devices, such as a liquid crystal display (LCD), dot matrix display, light emitting diode (LED) display, miniLED, microLED, organic light-emitting diode
Docket No.: 1333-481WO01 (OLED) display, e-ink, or similar monochrome or color display capable of outputting visible information to a user of computing device 100. Additional examples of an output device include a speaker, a haptic device, or other device that can generate intelligible output to a user. For instance, UI device 102 may present output to a user of computing device 100 as a graphical user interface that may be associated with functionality provided by computing device 100. In this way, UI device 102 may present various user interfaces of applications executing at or accessible by computing device 100 (e.g., an electronic message application, an Internet browser application, etc.). A user of computing device 100 may interact with a respective user interface of an application to cause computing device 100 to perform operations relating to a function. [0016] In some examples, UI device 102 of computing device 100 may detect two- dimensional and/or three-dimensional gestures as input from a user of computing device 100. For instance, a sensor of UI device 102 may detect the user's movement (e.g., moving a hand, an arm, a pen, a stylus, etc.) within a threshold distance of the sensor of UI device 102. UI device 102 may determine a two- or three-dimensional vector representation of the movement and correlate the vector representation to a gesture input (e.g., a hand-wave, a pinch, a clap, a pen stroke, etc.) that has multiple dimensions. In other words, UI device 102 may, in some examples, detect a multidimensional gesture without requiring the user to gesture at or near a screen or surface at which UI device 102 outputs information for display. Instead, UI device 102 may detect a multi-dimensional gesture performed at or near a sensor which may or may not be located near the screen or surface at which UI device 102 outputs information for display. [0017] In the example of FIG. 1, computing device 100 includes user interface (UI) module 104, application modules 106A–106N (collectively “application modules 106”). Modules 104 and 106 may perform operations described herein using hardware, software, firmware, or a mixture thereof residing in and/or executing at computing device 100. Computing device 100 may execute modules 104 and 106 with one processor or with multiple processors. In some examples, computing device 100 may execute modules 104 and 106 as a virtual machine executing on underlying hardware. Modules 104 and 106 may execute as one or more services of an operating system or computing platform or may execute as one or more executable programs at an application layer of a computing platform. [0018] UI module 104, as shown in the example of FIG. 1, may be operable by computing device 100 to perform one or more functions, such as receive input and send
Docket No.: 1333-481WO01 indications of such input to other components associated with computing device 100, such as application modules 106. UI module 104 may also receive data from components associated with computing device 100 such as application modules 106. Using the data received, UI module 104 may cause other components associated with computing device 100, such as UI device 102, to provide output based on the data. For instance, UI module 104 may receive data from one of application modules 106 to display a GUI. [0019] Application modules 106, as shown in the example of FIG.1, may include functionality to perform any variety of operations on computing device 100. For instance, application modules 106 may include a word processor, a text application, a web browser, a multimedia player, a calendar application, an operating system, a distributed computing application, a graphic design application, a video editing application, a web development application, or any other application. One of application modules 106 (e.g., application module 106A) may be a text messaging or Short Message Service (SMS) application. Application module 106A may include functionality to compose outgoing text message communications, receive incoming text message communications, respond to incoming text message communications, and other functions. Application module 106A, in various examples, may provide data to UI module 104 causing UI device 102 to display a GUI. [0020] In some examples, one or more of application modules 106 may be operable to receive incoming communications from other devices (e.g., via the network). For instance, one or more of application modules 106 may receive text messages for an account associated with a user of computing device 100, calendar alerts or meeting requests for a user of computing device 100, or other incoming communications. [0021] In general, communications may include information (e.g., generated in response to input by users of the other devices). Examples of information include text (e.g., any combination of letters, words, numbers, punctuation, etc.), emoji, emoticons, images, video, audio, or any other content that may be included in a communication. In the example of FIG. 1, application module 106A may receive an incoming communication (e.g., a text message) from another computing device. [0022] Computing device 100 may communicate with other computing devices via a network. The network may include any public or private communication network, such as a cellular network, Wi-Fi network, or other type of network for transmitting data between computing devices. In some examples, the network may represent one or more packet switched networks, such as the Internet. Computing device 100 may send and receive
Docket No.: 1333-481WO01 data across the network using any suitable communication techniques. For example, computing device 100 may be operatively coupled to the network using respective network links. The network may include network hubs, network switches, network routers, terrestrial and/or satellite cellular networks, etc., that are operatively inter- coupled thereby providing for the exchange of information between computing device 100 and another computing device. In some examples, network links of the network may be Ethernet, ATM or other network connections. Such connections may include wireless and/or wired connections. [0023] In general, application module 106A may provide a user a platform to electronically communicate with other computing devices via computing device 100. For example, computing device 100 may send a message to and receive a message from another computing device. UI device 102 may provide a user operating computing device 100 the ability to provide inputs (e.g., to select letters, emojis, etc.) at a graphical keyboard to compose draft message 114. However, draft message 114, illustrated in text editing region 160, may not be composed in a tone required or preferred by the user operating computing device 100. For example, draft message 114 may be composed in a casual tone, but the user operating computing device 100 may want draft message 114 to be written in a formal tone. [0024] In accordance with the techniques of this disclosure, computing device 100 may provide the option for the user of computing device 100 to select message tone change request icon 116. For example, UI device 102 may display GUI 110 that includes message tone change request icon 116. UI device 102 may detect an input from a user operating computing device 100 at a location corresponding to where message tone change request icon 116 is included in GUI 110. Responsive to selecting message tone change request icon 116, application module 106A may instruct UI module 104 to update UI device 102 to output message tones 118A-N (collectively referred to as “message tones 118”) in GUI 110. UI device 102 may allow the user operating computing device 100 to select one or more message tones of message tones 118, such as “Shakespeare” message tone 118A or “Excited” message tone 118B, in which draft message 114 may be written in. UI device 102 may, for example, detect an input selecting one or more message tones 118. UI device 102 may send application module 106A an indication of the one or more selected message tones 118. Application module 106A may send draft message 114 and one or more selected message tones of message tones 118 to suggestion module 108.
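For illustration only, the handoff described above — sending draft message 114 and the one or more selected message tones of message tones 118 to suggestion module 108 — might be organized as a simple request object passed to a suggestion callable, as in the following Python sketch. The class and function names are assumptions made for this example and are not defined by this disclosure.

```python
from dataclasses import dataclass, field
from typing import Callable, List

# Hypothetical names for illustration; this disclosure does not define this API.
@dataclass
class ToneChangeRequest:
    draft_message: str                               # text shown in the text editing region
    selected_tones: List[str]                        # e.g., ["Shakespeare", "Excited"]
    thread: List[str] = field(default_factory=list)  # prior messages, used as context

def handle_tone_change(request: ToneChangeRequest,
                       suggest: Callable[[str, List[str], List[str]], List[str]]) -> List[str]:
    # Forward the draft and the selected tones to a suggestion callable and return
    # the suggested rewrites to be shown in the message selection area.
    return suggest(request.draft_message, request.selected_tones, request.thread)

# Example usage with a stand-in suggestion function.
def fake_suggest(draft, tones, thread):
    return [f"[{tone}] {draft}" for tone in tones]

request = ToneChangeRequest("Wanna grab dinner?", ["Shakespeare", "Excited"])
print(handle_tone_change(request, fake_suggest))
```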
Docket No.: 1333-481WO01 [0025] Computing device 100 may include suggestion module 108 that generates suggested messages 122. Computing device 100 may receive an input of draft message 114. Draft message 114 may either be input by a user or generated by computing device 100 or any other computing device. UI device 102 may output GUI 110 that displays draft message 114 in text editing region 160. In some instances, suggestion module 108 may execute on a remote computing system or server external to computing device 100. Computing device 100 may send draft message 114 and the selected message tones of message tones 118 to the remote computing system or server hosting suggestion module 108. Suggestion module 108 executing on the remote computing system or server may generate suggested messages 122 based on the selected message tones of message tones 118 and the context of draft message 114. Suggestion module 108 executing on the remote computing system or server may send computing device 100 the generated suggested messages 122 via a network providing wired and/or wireless communication between computing device 100 and the remote computing system or server. Computing device 100 may output at least a subset of the received suggested messages 122 in GUI 110. [0026] Suggestion module 108 may generate suggested messages 122 according to one or more message tones of message tones 118. Suggestion module 108 may generate suggested messages 122 by rewriting draft message 114 in a particular message tone. Suggestion module 108 may represent or otherwise apply a machine learning (ML) model trained to generate contextually relevant and coherent messages (e.g., suggested messages 122) based on one or more learned message tones of message tones 118. Suggestion module 108 may suggest the generated messages to a first user operating computing device 100. Computing device 100 may receive a signal of a message selected by the user operating computing device 100. Responsive to the selection, computing device 100 may replace or substitute draft message 114 with the selected message. For example, computing device 100 may output the selected message in text editing region 160. [0027] Suggestion module 108 may represent or otherwise apply an ML model, such as a large language model. Suggestion module 108 is described in greater detail below with respect to FIG. 2. Briefly, however, suggestion module 108 may be trained on a large amount of data in order to generate coherent and contextually relevant messages. In some examples, suggestion module 108 may use complex algorithms and neural networks to analyze the structure, grammar, and context of language. Suggestion module 108 may
Docket No.: 1333-481WO01 perform various natural language processing (NLP) tasks, such as machine translation, sentiment analysis, summarization, question answering, text generation, and the like. [0028] Suggestion module 108 may obtain message thread 112 between a first user operating computing device 100 and a second user. In some examples, the second user may also operate computing device 100. In other examples, the second user may operate another computing device (not shown) that is not computing device 100. Message thread 112 may include a set (i.e., one or more) of messages exchanged between two or more users (e.g., the first user and the second user) within a communication platform (e.g., provided by application modules 106), such as a messaging application, a social media platform, or other applications. [0029] In some instances, computing device 100 may receive an input from a user operating computing device 100 via UI device 102. Computing device 100 may receive an input as draft message 114 composed by the user operating computing device 100. In other instances, suggestion module 108 may generate draft message 114 based on message thread 112. [0030] Computing device 100 may provide suggestion module 108 draft message 114. Computing device 100 may only provide suggestion module 108 draft message 114 with express consent of a user operating computing device 100. Thus, the user operating computing device 100 retains control over how information is collected about the user and used by computing device 100 and suggestion module 108. [0031] Suggestion module 108 may determine the context of draft message 114 and/or message thread 112 that includes relevant information associated with draft message 114 and/or message thread 112. Suggestion module 108 may determine the context of draft message 114, such as the identity of the recipient of draft message 114, a time-of-day draft message 114 was composed, an application of application modules 106 being used to send and receive message thread 112, a message tone used by one or more messages of message thread 112, or any other factors indicating an intent associated with draft message 114. Suggestion module 108 may determine the context of message thread 112, such as the identity of users communicating in message thread 112, the tone of one or more messages of message thread 112, recent or previous messages of message thread 112, punctuation used in one or more messages of message thread 112, or any other factors indicating an intent associated with one or more messages of message thread 112. In some instances, suggestion module 108 may determine the context of draft message 114 and/or message thread 112 by applying the ML model. Suggestion module 108 may
Docket No.: 1333-481WO01 generate suggested messages 122 based on the context of draft message 114 and/or message thread 112. In the example of FIG. 1, suggestion module 108 may determine the context of draft message 114 stating “Wanna grab dinner?”. Suggestion module 108 may, for example, determine the context of the example draft message is a question about having dinner with the second user. In some instances, suggestion module 108 may handle data associated with draft message 114 and message thread 112 such that personally identifiable information is removed. For example, suggestion module 108 may determine a context of draft message 114 and/or message thread 112 in such a way that no personally identifiable information is used or obtained (e.g., in a way that a particular location of a user operating computing device 100 cannot be determined). [0032] Responsive to suggestion module 108 determining the context of draft message 114 and/or message thread 112, suggestion module 108 may generate suggested messages 122 based on the determined context and a selected message tone of message tones 118. In the example of FIG. 1, suggestion modules may generate suggested messages 122 based on “Shakespeare” message tone 118A. Suggestion module 108 may apply the determined context of draft message 114 (e.g., a question about having dinner with a second user) and “Shakespeare” message tone 118A to generate suggested message 122A stating “Prythee, shall we dine tonight?”, suggested message 122B stating “Wouldst thou do me the honor of accompanying me to dinner?”, suggested message 122N stating “wouldst thou join me for a repast?”, and other suggested messages 122. [0033] Suggestion module 108 may apply the ML model to learn how to generate suggested messages 122 in each message tone of message tones 118. Suggestion module 108 may train the ML model with plain text rules associated with message tones 118 and example statements composed in message tones 118. Suggestion module 108 training the ML model to learn how to generate suggested messages 122 is described in more detail below with respect to FIG. 2. [0034] In some examples, suggestion module 108 may categorize message tones 118 within distinct dimensions. For example, suggestion module 108 may categorize each message tone of message tones 118 as being in a dimension that may include characters, emotions, or tactics. Suggestion module 108 may categorize message tones 118 as being part of a dimension to efficiently learn how to draft messages in each message tone of message tones 118. Suggestion module 108 may also categorize message tones 118 to assist in learning new message tones, such as message tones created by a user operating computing device 100.
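As an illustration of how message tones 118 might be organized within the dimensions described above, the following Python sketch groups each tone with a dimension, plain text rules, and paired example statements. The structure, the rule wording, and the "Excited" example rewrite are assumptions made for illustration and are not part of this disclosure.

```python
# Illustrative organization of message tones by dimension (characters, emotions,
# tactics), with plain text rules and paired example statements. The structure and
# the example content are assumptions, not the disclosure's data model.
MESSAGE_TONES = {
    "Shakespeare": {
        "dimension": "character",
        "rules": "Use Early Modern English pronouns (thou, thee) and archaic verb forms.",
        "examples": [("Wanna grab dinner?", "Prythee, shall we dine tonight?")],
    },
    "Excited": {
        "dimension": "emotion",
        "rules": "Use energetic wording and exclamation points; keep sentences short.",
        "examples": [("Wanna grab dinner?", "Dinner tonight?! Count me in!")],
    },
}

def tones_in_dimension(dimension: str):
    # Return the names of the tones that belong to the given dimension.
    return [name for name, spec in MESSAGE_TONES.items() if spec["dimension"] == dimension]

print(tones_in_dimension("character"))  # ['Shakespeare']
```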
Docket No.: 1333-481WO01 [0035] Suggestion module 108 may receive plain text rules associated with message tones 118. Suggestion module 108 may receive plain text rules that include a description of how each of message tones 118 formats speech or statements. Suggestion module 108 may apply the target statements and plain text rules associated with message tones 118 to train the ML model on structural particularities of message tones 118. Suggestion module 108 may train the ML model to generate suggested messages 122 according to message tones 118. [0036] Suggestion module 108 may also update the target statements and plain text rules. Computing device 100 may provide an option to a user operating computing device 100 to select whether suggested messages 122 are properly written according to a selected message tone of message tones 118, while also maintaining the context of draft message 114 and/or message thread 112 (e.g., selecting a ‘thumbs up’ or ‘thumbs down’ corresponding to the quality of suggested messages 122 as perceived by a user operating computing device 100). Responsive to computing device 100 receiving an indication that a suggested message of suggested messages 122 has the appropriate message tone and context, suggestion module 108 may reinforce the understanding of the ML model. Responsive to computing device 100 receiving an indication that a suggested message of suggested messages 122 does not have the appropriate message tone or context, suggestion module 108 may provide the ML model the appropriate feedback to further train the ML model. [0037] In some examples, computing device 100 may allow a user to create a new message tone of message tones 118. In some instances, suggestion module 108 may receive target statements or plain text rules from a user operating computing device 100. For example, UI device 102 may detect inputs from a user. UI device 102 may detect inputs including a new message tone name, target statements to train the new message tone (e.g., example statements of text written in the new message tone, corresponding example statements of text not written in the new message tone, or any sort of grammatical structure associated with the new message tone), or plain text rules associated with the new message tone. UI device 102 may send application module 106 an indication of the detected inputs via UI module 104. Application module 106 may send an indication of the inputs to suggestion module 108 to train the ML model to learn how to compose suggested messages 122 in the new message tone. Suggestion module 108 may train the ML model to learn how to write messages in the new message tone by identifying textual particularities associated with the new message tone.
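For illustration, the user-provided material described above (a tone name, plain text rules, and paired target statements) might be converted into supervised training pairs for a text rewriting model, as in the following Python sketch. The prompt wording and the "Pirate" tone are hypothetical assumptions for this example.

```python
# Illustrative conversion of a user-defined tone (name, plain text rules, paired
# target statements) into supervised training pairs for a text rewriting model.
# The prompt wording and the "Pirate" tone are hypothetical.
def build_training_pairs(tone_name, rules, paired_statements):
    # paired_statements: list of (plain_text, tone_text) tuples, where tone_text is
    # the same statement rewritten in the new message tone.
    pairs = []
    for plain_text, tone_text in paired_statements:
        prompt = (
            f"Rewrite the message in the '{tone_name}' tone.\n"
            f"Tone rules: {rules}\n"
            f"Message: {plain_text}\n"
            "Rewritten:"
        )
        pairs.append({"input": prompt, "target": tone_text})
    return pairs

pairs = build_training_pairs(
    "Pirate",
    "Use nautical slang and address the reader as 'matey'.",
    [("Wanna grab dinner?", "Care to feast with me tonight, matey?")],
)
print(pairs[0]["input"])
```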
Docket No.: 1333-481WO01 [0038] Although suggestion module 108 is depicted as part of computing device 100 in FIG. 1, suggestion module 108 may be stored and/or executed by any other computing device or computing system communicably coupled to computing device 100. For example, suggestion module 108 may be stored and/or executed by an external computing system that is communicatively coupled to computing device 100 via a network connection. [0039] Computing device 100 may output one or more of suggested messages 122. For example, responsive to suggestion module 108 generating suggested messages 122, suggestion module 108 may send application module 106 at least a subset of the generated suggested messages 122. Application module 106 may send UI module 104 the received messages of suggested messages 122. UI module 104 may instruct UI device 102 to update GUI 110 to output suggested messages 122. UI device 102 may display suggested messages 122 in message selection area 120. [0040] Computing device 100 may receive an input selecting one of suggested messages 122. For example, UI device 102 of computing device 100 may detect an input corresponding to a location of a suggested message of suggested messages 122 output in message selection area 120. UI device 102 may send the input selecting a message of suggested messages 122 to application module 106 via UI module 104. Application module 106 may instruct UI module 104 to update UI device 102 to output GUI 110 that displays the selected message in text editing region 160 (e.g., by replacing draft message 114 with the selected message). [0041] In some instances, computing device 100 may automatically respond to all incoming messages from message thread 112 according to one or more message tones of message tones 118. For example, application module 106 may receive new, incoming messages associated with message thread 112. Application module 106 may continuously send message thread 112 to suggestion module 108 responsive to a new message received via a communication platform handled by application module 106. In some instances, application module 106 may include a context or intent of the new message in message thread 112 sent to suggestion module 108. Suggestion module 108 may generate a response to the new message in a selected message tone of message tones 118 and according to the context or intent of the new message. Suggestion module 108 may send the generated message to application module 106. Application modules 106 may automatically send the generated message to the one or more other users involved in message thread 112. Application module 106 may also send the generated message to UI
Docket No.: 1333-481WO01 module 104. UI module 104 may instruct UI device 102 to update GUI 110 to output the automatically generated message sent to other users as part of message thread 112. [0042] Suggestion module 108 may generate a plurality of messages as potential responses to a recent message included in message thread 112. Suggestion module 108 may generate the plurality of messages according to a selected message tone of message tones 118. Suggestion module 108 may apply the machine learning model to generate potential messages that satisfies a message value condition. Suggestion module 108 may generate a set of potential messages based on a context of the recent message received in message thread 112. The context may include one or more of an identity of the recipient or sender of the new message, a time-of-day the new message was received, an application of application modules 106 being used to send and receive message thread 112, a message tone used by one or more messages of message thread 112, or any other factors indicating an intent associated with the new message included in message thread 112. [0043] Suggestion module 108 may apply the machine learning model to select at least a subset of the generated potential messages based on whether a potential message in the set of the potential message satisfies a message value condition, such as whether the potential message was appropriately composed in the selected message tone of message tones 118. For example, suggestion module 108 may apply the machine learning model to determine a message value for each of the plurality of potential messages composed in a selected message tone of message tones 118. Suggestion module 108 may determine the message value for a potential message based on a confidence rating associated with how closely the potential message matches the selected message tone of message tones 118. Suggestion module 108 may determine a potential message satisfies the message value condition based on whether the message value assigned to each potential message satisfies a threshold confidence rating associated with the selected message tone of message tones 118. Suggestion module 108 may determine the message value condition based on factors and/or parameters used in training the ML model (e.g., plain text rules of message tones 118, structural particularities of message tones 118, language used in examples of message tones 118). Suggestion module 108 may send application modules 106 the potential message with the greatest message value. In some examples, application modules 106 may automatically send the potential message with the greatest message value to the users involved in message thread 112. In this way, the techniques may streamline communication between the first user operating computing device 100
Docket No.: 1333-481WO01 and a second user operating another computing device by reducing the number of interactions with computing device 100 (e.g., generating messages in a particular message tone without the first user contemplating how to draft a message in said message tone). Reducing the number of interactions with computing device 100 may also improve the battery life of computing device 100 by saving processor power usage. [0044] FIG. 2 is a block diagram illustrating a computing device 200, in accordance with one or more techniques of this disclosure. Computing device 200 may be one example of computing device 100 in accordance with one or more techniques of this disclosure. FIG. 2 illustrates only one particular example of computing device 200, and many other examples of computing device 200 may be used in other instances and may include a subset of components included in example computing device 200 or may include additional components not shown in FIG. 2. [0045] As shown in FIG. 2, computing device 200 may include one or more user interface devices 202 (“UI device 202” or “display 202”), one or more processors 224 (“processor 224”), one or more storage devices 228 (“storage device 228”), and one or more communication units 226 (“communication unit 226”). Also shown in FIG. 2, UI device 202 may include one or more input devices 234 (“input device 234”) and one or more output devices 236 (“output device 236”). As also shown in FIG. 2, storage device 228 may include a user interface module 204 (“UI module 204”), one or more application modules 206 (“application module 206”), a suggestion module 208, an operating system 230 (“OS 230”), and database 232. [0046] In some examples, UI device 202 may be a presence-sensitive display configured to detect input (e.g., touch and non-touch input) from a user of respective computing device 200. UI device 202 may output information to a user in the form of a UI, which may be associated with functionality provided by computing device 200. Such UIs may be associated with computing platforms, operating systems, applications, and/or services executing at or accessible from computing device 200 (e.g., electronic message applications, chat applications, Internet browser applications, mobile or desktop operating systems, social media applications, electronic games, menus, and other types of applications). [0047] Processor 224 may implement functionality and/or execute instructions within computing device 200. For example, processor 224 may receive and execute instructions
Docket No.: 1333-481WO01 that provide the functionality of modules 204-208 and OS 230. These instructions executed by processor 224 may cause computing device 200 to store and/or modify information within storage device 228 or processor 224 during program execution. Processor 224 may execute instructions of modules 204-208 and OS 230 to perform one or more operations. That is Modules 204-208 and OS 230 may be operable by processor 224 to perform various functions described herein. [0048] Storage device 228 within computing device 200 may store information for processing during operation of computing device 200 (e.g., computing device 200 may store data accessed by modules 204-208 and OS 230 during execution at computing device 200). In some examples, storage device 228 may be a temporary memory, meaning that a primary purpose of storage device 228 is not long-term storage. Storage device 228 on computing device 200 may be configured for short-term storage of information as volatile memory and therefore not retain stored contents if powered off. Examples of volatile memories include random access memories (RAM), dynamic random access memories (DRAM), static random access memories (SRAM), and other forms of volatile memories known in the art. [0049] Storage device 228 may include one or more computer-readable storage media. Storage device 228 may be configured to store larger amounts of information than volatile memory. Storage device 228 may further be configured for long-term storage of information as non-volatile memory space and retain information after power on/off cycles. Examples of non-volatile memories include magnetic hard discs, optical discs, floppy discs, flash memories, or forms of electrically programmable memories (EPROM) or electrically erasable and programmable (EEPROM) memories. Storage device 228 may store program instructions and/or information (e.g., within database 232) associated with modules 204-208 and OS 230. [0050] Communication unit 226 of computing device 200 may communicate with one or more external devices via one or more wired and/or wireless networks by transmitting and/or receiving network signals on the one or more networks. Examples of communication units 226 include a network interface card (e.g., such as an Ethernet card), an optical transceiver, a radio frequency transceiver, a GNSS receiver, or any other type of device that can send and/or receive information. Other examples of communication unit 226 may include short wave radios, cellular data radios (for terrestrial and/or satellite cellular networks), wireless network radios, as well as universal serial bus (USB) controllers.
Docket No.: 1333-481WO01 [0051] Input device 234 of computing device 200 may receive input. Examples of input are tactile, audio, and video input. Input device 234 of computing device 200, in one example, includes a presence-sensitive display, a fingerprint sensor, touch-sensitive screen, mouse, keyboard, voice responsive system, video camera, microphone or any other type of device for detecting input from a human or machine. [0052] Input devices 234 may include one or more sensors. Numerous examples of sensors exist and include any input component configured to obtain environmental information about the circumstances surrounding computing device 200 and/or physiological information that defines the activity state and/or physical well-being of a user of computing device 200. In some examples, a sensor may be an input component that obtains physical position, movement, and/or location information of computing device 200. For instance, sensors may include one or more location sensors (e.g., GNSS components, Wi-Fi components, cellular components), one or more temperature sensors, one or more motion sensors (e.g., multi-axial accelerometers, gyros), one or more pressure sensors (e.g., barometer), one or more ambient light sensors, and one or more other sensors (e.g., microphone, camera, infrared proximity sensor, hygrometer, and the like). Other sensors may include a heart rate sensor, magnetometer, glucose sensor, hygrometer sensor, olfactory sensor, compass sensor, step counter sensor, to name a few other non-limiting examples. [0053] Output device 236 of computing device 200 may generate one or more outputs. Examples of outputs are tactile, audio, and video output. Output device 236 of computing device 200, in one example, includes a presence-sensitive display, sound card, video graphics adapter card, speaker, liquid crystal display (LCD), or any other type of device for generating output to a human or machine. [0054] Communication channels 250 (“COMM channel 250”) may interconnect each of the components 202, 224, 226, and 228 for inter-component communications (physically, communicatively, and/or operatively). In some examples, communication channel 250 may include a system bus, a network connection, an inter-process communication data structure, or any other method for communicating data. [0055] Computing device 200 may include OS 230. OS 230 may control the operation of components of computing device 200. For example, OS 230 may facilitate the communication of modules 204-208 with processor 224, storage device 228, and communication units 226. In some examples, OS 230 may manage interactions between software applications (e.g., application module 206) and a user of computing device 200.
Docket No.: 1333-481WO01 OS 230 may have a kernel that facilitates interactions with underlying hardware of computing device 200 and provides a fully formed application space capable of executing a wide variety of software applications having secure partitions in which each of the software applications executes to perform various operations. In some examples, UI module 204 may be considered a component of OS 230. [0056] In the example of FIG. 2, suggestion module 208 may comprise a hardware device, such as a server computer, having various hardware, firmware, and software components. However, FIG. 2 illustrates only one particular example of suggestion module 208, and many other examples of suggestion module 208 may be used in accordance with techniques of this disclosure. In some examples, components of suggestion module 208 may be located in a singular location. In other examples, one or more components of suggestion module 108 may be in different locations (e.g., a computing system or server communicably connected to computing device 200 via a network). That is, in some examples suggestion module 208 may be a conventional computing device, while in other examples, suggestion module 208 may be a distributed or “cloud” computing device. Suggestion module 208 may treat data associated with the techniques described herein such that personally identifiable information is removed. Computing device 200 may also output, via UI device 202, an option for a user operating computing device 200 to grant computing device 200 explicit consent to provide suggestion module 208 information associated with draft message 114 and/or message thread 112, as shown in FIG. 1. [0057] Suggestion module 208 may apply a machine learning model, such as ML model 252, to generate suggested messages relevant to a message thread. In some examples, ML model 252 can be or include one or more artificial neural networks (also referred to simply as neural networks). A neural network can include a group of connected nodes, which also can be referred to as neurons or perceptrons. A neural network can be organized into one or more layers. Neural networks that include multiple layers can be referred to as “deep” networks. A deep network can include an input layer, an output layer, and one or more hidden layers positioned between the input layer and the output layer. The nodes of the neural network can be connected or non-fully connected. [0058] ML model 252 can be or include one or more transformer-based neural networks, such as a large language model (LLM). Transformer-based neural networks may refer to a type of deep learning architecture specifically designed for handling sequential data, such as text or time series. In other words, transformer-based neural networks like LLMs
Docket No.: 1333-481WO01 may be configured to perform natural language processing (NLP) tasks, such as question- answering, machine translation, text summarization, and sentiment analysis. Transformer-based neural networks may utilize a self-attention mechanism, which allows the model to weigh the importance of different elements in a given input sequence relative to each other. The self-attention mechanism may help ML model 252 effectively capture long-range dependencies and complex relationships between elements, such as words in a sentence. [0059] ML model 252 may include an encoder and a decoder that operate to process and generate sequential data, such as text. Both the encoder and decoder may include one or more of self-attention mechanisms, position-wise feedforward networks, layer normalization, or residual connections. In some examples, the encoder may process an input sequence and create a representation that captures the relationships and context among the elements in the sequence. The decoder may then obtain the representation generated by the encoder and produce an output sequence. In some examples, the decoder may generate the output one element at a time (e.g., one word at a time), using a process called autoregressive decoding, where the previously generated elements are used as input to predict the next element in the sequence. [0060] In some examples, ML model 252 may generate a set of suggested messages by determining a set of information types of a recent message. An information type of a message may be or otherwise include a topic, theme, point, subject, purpose, intent, keyword, etc. In some examples, ML model 252 may determine the information type by leveraging a self-attention mechanism to capture the relationships and dependencies between words in the input sequence. For example, ML model 252 may tokenize (e.g., split) a sequence of words or subwords, which ML model 252 may convert into vectors (e.g., numerical representations) that ML model 252 can process. ML model 252 may use the self-attention mechanism to weigh the importance of each token in relation to the others. In this way, ML model 252 may identify patterns and relationships between the tokens, and in turn the words corresponding to the tokens, that indicate one or more information types of the recent message. Suggestion module 208 may treat data associated with the techniques described herein such that personally identifiable information is removed. Computing device 200 may also output, via UI device 202, an option for a user operating computing device 200 to grant computing device 200 explicit consent to provide suggestion module 208 information associated with draft message 114 and/or message thread 112.
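For illustration, the self-attention weighting described above may be sketched with toy values. The following Python example computes scaled dot-product attention over placeholder token embeddings; a trained model would learn the projection matrices rather than sampling them at random.

```python
import numpy as np

# Toy scaled dot-product self-attention over placeholder token embeddings. A real
# LLM learns the projection matrices; random values are used here only to show how
# attention weights express the importance of each token relative to the others.
rng = np.random.default_rng(0)
tokens = ["wanna", "grab", "dinner", "?"]
d_model = 8
X = rng.normal(size=(len(tokens), d_model))            # placeholder token embeddings

W_q, W_k, W_v = (rng.normal(size=(d_model, d_model)) for _ in range(3))
Q, K, V = X @ W_q, X @ W_k, X @ W_v

scores = Q @ K.T / np.sqrt(d_model)                    # pairwise relevance scores
weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)  # row-wise softmax
attended = weights @ V                                 # context-aware token representations

# Row i of `weights` shows how strongly token i attends to every other token.
print(np.round(weights, 2))
```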
Docket No.: 1333-481WO01 [0061] ML model 252 may generate suggested messages based on the determined one or more information types of the recent message. For example, information in each of the suggested messages generated by ML model 252 may be related (e.g., logically, semantically, etc.) to at least a subset of the set of information types of recent message included in message thread 112. For example, if ML model 252 determines that the set of information types of a recent message includes “vacation,” “swimming,” and “weekend,” ML model 252 may generate a suggested message like “Do you want to go to the beach this weekend?” and other similar messages. [0062] Although primarily described herein as being a transformer-based neural network, ML model 252 may be or otherwise include one or more other types of neural networks. For example, ML model 252 may be or include an autoencoder. In some examples, the aim of an autoencoder is to learn a representation (e.g., a lower- dimensional encoding) for a set of data, typically for the purpose of dimensionality reduction. For example, in some examples, an autoencoder can seek to encode the input data and the provided output data that reconstructs the input data from the encoding. Recently, the autoencoder concept has become more widely used for learning generative models of data. In some examples, the autoencoder can include additional losses beyond reconstructing the input data. ML model 252 may be or include one or more other forms of artificial neural networks such as, for example, deep Boltzmann machines, deep belief networks, stacked autoencoders, etc. Any of the neural networks described herein can be combined (e.g., stacked) to form more complex networks. [0063] In some examples, ML model 252 can be or include one or more feed forward neural networks. In feed forward networks, the connections between nodes do not form a cycle. For example, each connection can connect a node from an earlier layer to a node from a later layer. In some examples, ML model 252 can be or include one or more recurrent neural networks. In some examples, at least some of the nodes of a recurrent neural network can form a cycle. [0064] Recurrent neural networks can be especially useful for processing input data that is sequential in nature. For example, a recurrent neural network can pass or retain information from a previous portion of the input data sequence to a subsequent portion of the input data sequence through the use of recurrent or directed cyclical node connections. Sequential input data may include words in a sentence (e.g., for natural language processing, speech detection or processing, etc.). In some examples, sequential input data can include time-series data (e.g., sensor data versus time or imagery captured at different
Docket No.: 1333-481WO01 times). Example recurrent neural networks may include long short-term (LSTM) recurrent neural networks, gated recurrent units, bi-direction recurrent neural networks, continuous time recurrent neural networks, neural history compressors, echo state networks, Elman networks, Jordan networks, recursive neural networks, Hopfield networks, fully recurrent networks, sequence-to- sequence configurations, etc. [0065] In some examples, ML model 252 can be or include one or more convolutional neural networks. In some examples, a convolutional neural network can include one or more convolutional layers that perform convolutions over input data using learned filters. Filters can also be referred to as kernels. Convolutional neural networks can be especially useful for vision problems such as when the input data includes imagery such as still images or video. However, convolutional neural networks can also be applied for natural language processing. [0066] In some examples, ML model 252 may determine a message value for each of suggested messages 122 generated by response generation module 242. In these examples, ML model 252 may compare the message value for each of suggested messages 122 to determine whether to display suggested messages 122 to a user. In other words, ML model 252 may only output, for display to a user, the subset of suggested messages 122 satisfying a message value condition. ML model 252 may define a message value condition as a confidence rating ML model 252 develops throughout training. Responsive to ML model 252 being trained on how to compose suggested messages according to message tones 118, ML model 252 may establish a message value condition for each message tone of messages tones 118. ML model 252 may establish a message value condition for a message tone of message tones 118 based on a message value ML model 252 determines as a threshold message value (e.g., threshold confidence rating associated with a selected message tone). ML model 252 may determine a threshold message value included in the message value condition based on factors and/or parameters used in training ML model 252 to composed messages in message tones 118 (e.g., plain text rules of message tones 118, structural particularities of message tones 118, language used in examples of message tones 118). Responsive to ML model 252 establishing a message value condition associated with each message tone of message tones 118, ML model 252 may select one or more generated suggested messages to output based on whether an assigned message value to each generated suggested message satisfies the message value threshold included in the message value condition.
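For illustration, the selection of suggested messages against a per-tone message value condition described above might be sketched as follows. The scores and the threshold in this Python example are placeholders; in the disclosure they would be produced by the trained ML model 252.

```python
# Illustrative filtering and ordering of suggested messages by message value
# against a per-tone threshold. The scores and the threshold are placeholders; in
# the disclosure they would come from the trained model.
def select_suggestions(scored_messages, tone, thresholds):
    # scored_messages: list of (message_text, message_value) pairs.
    # Keep messages whose value meets the tone's threshold, highest value first.
    threshold = thresholds[tone]
    kept = [(text, value) for text, value in scored_messages if value >= threshold]
    return [text for text, _ in sorted(kept, key=lambda pair: pair[1], reverse=True)]

thresholds = {"Shakespeare": 0.7}
scored = [
    ("Prythee, shall we dine tonight?", 0.86),
    ("Wouldst thou join me for a repast?", 0.74),
    ("Want to get dinner?", 0.41),  # below the threshold, so it is discarded
]
print(select_suggestions(scored, "Shakespeare", thresholds))
```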
Docket No.: 1333-481WO01 [0067] For example, for each of suggested messages 122, ML model 252 may determine a message value from 0 to 1. In this example, a message value of 0 may be associated with a lowest confidence in the message being useful or of interest to a user and a message value of 1 may be associated with a highest confidence in the message being useful or of interest to a user. It should be understood that other ranges of message values (e.g., 0 to 100, 0 to -10, etc.), relationships between message values and amount of confidence (e.g., a message value of 0 being associated with a highest confidence and a message value of 1 being associated with a lowest confidence), and/or the like are contemplated. [0068] If the message value satisfies a message value threshold, ML model 252 may transmit the corresponding suggested message to UI module 204 for display to a user. In some examples, the message value may satisfy the threshold when the message value is equal to or greater than a message value threshold. For example, if ML model 252 determines, for a particular suggested message, a message value of 0.8, and if the message value threshold is 0.7, then the particular suggested message may be in the subset of suggested messages that ML model 252 transmits to UI module 204 for display to a user. [0069] On the other hand, if ML model 252 determines a message value that does not satisfy the message value threshold, such as a message value of 0.6, then the particular suggested message may not be in the subset of suggested messages for display, and computing device 200 may execute a different action, such as discard the particular suggested message. In some examples, UI module 204 may output, for display by UI device 202, at least the subset of suggested messages in an order based on the message value for each suggested message of at least the subset of suggested messages. [0070] In general, ML model 252 may excel at performing NLP tasks, such as generating text and other content. However, with respect to specific types of content (e.g., specific information types), ML model 252 may have an increased likelihood of generating false or inaccurate information. To address the issue of generating false information, ML model 252 may be configured to exclude the generation of content relating to a set of excluded information types. The set of excluded information types may include one or more phone numbers, addresses, web addresses, etc. [0071] Computing device 200 may include tone training module 240 that trains (e.g., pre- train, fine-tune, etc.) ML model 252. Tone training module 240 may pre-train ML model 252 on a large and diverse corpus of text from various sources, such as websites, books,
Docket No.: 1333-481WO01 articles, and other text repositories. This dataset may cover a wide range of topics and domains to ensure ML model 252 learns diverse linguistic patterns and contextual relationships. Tone training module 240 may train ML model 252 to optimize an objective function. The objective function may be or include a loss function, such as cross-entropy loss, that compares (e.g., determines a difference between) output data generated by the model from the training data and labels (e.g., ground-truth labels) associated with the training data. For example, the objective function of ML model 252 may be to correctly predict the next word in a sequence of words or correctly fill in missing words as much as possible. [0072] By leveraging ML model 252, suggestion module 208 may decrease the time and/or effort required to compose a message to send to another user while increasing the likelihood that suggested messages will actually be of value to the user. That is, by presenting the user of computing device 200 with a set of suggested messages, the techniques of this disclosure may reduce the likelihood that the user will need to manually type a message, thereby reducing processing power associated with manually typing a message (e.g., processing power associated with powering UI device 202, application module 206, etc.). [0073] In operation, tone training module 240 may use ML model 252 to learn message tones 118. For example, tone training module 240 may use ML model 252 that has been trained to understand language, such as a large language model (LLM), to learn message tone 118A. Tone training module 240 may receive target statements from database 232. Tone training module 240 may receive target statements that include example statements associated with message tone 118A. Tone training module 240 may also receive target statements that include example statements not associated with message tone 118A but include the same context or intent of corresponding example statements associated with message tone 118A. Tone training module 240 may train ML model 252 by analyzing textual differences between example statements associated with message tone 118A and corresponding example statements not associated with message tone 118A. [0074] In some examples, tone training module 240 may categorize message tones with categorization module 246. Categorization module 246 may categorize message tones 118 as either characters, emotions, or tactics. Categorization module 246 may assist the training of ML model 252. ML model 252 may use categories included in categorization module 246 to direct the learning of a message tone of message tones 118 according to a corresponding category specified in categorization module 246. For example,
Docket No.: 1333-481WO01 categorization module 246 may categorize “Shakespeare” message tone 118A as a character and “Excited” message tone 118B as an emotion. ML model 252 may use information associated with a character category when learning to compose messages in “Shakespeare” message tone 118A. Similarly, ML model 252 may use information associated with an emotion category when learning to compose messages in “Excited” message tone 118B. [0075] Reinforcement training module 244 may also receive refreshers from database 232. Reinforcement training module 244 may receive refreshers that include one or more example statements written in a trained message tone of message tones 118. Reinforcement training module 244 may receive refreshers not included in target statements used to initially train ML model 252. Reinforcement training module 244 may continuously or periodically train ML model 252 with the received refreshers. For example, reinforcement training module 244 may receive refreshers including example nursery rhymes for a nursery rhyme message tone. Reinforcement training module 244 may apply the refreshers to remind ML model 252 of the textual particularities associated with the nursery rhyme message tone. [0076] Database 232 may continuously or periodically update or change the refreshers used by reinforcement training module 244. In some instances, database 232 may update or change the refreshers based on a user generated example provided via UI module 204. In other instances, database 232 may update or change refreshers by receiving the refreshers from an external computing device or computing system. [0077] In some examples, reinforcement training module 244 of tone training module 240 may fine-tune ML model 252 by using the feedback in the training process. For example, UI device 202 may receive a user input – via input device 234 – that selects feedback (e.g., thumbs up, thumbs down, etc.) relating to at least one suggested message of the set of suggested messages. In some examples, the feedback may indicate whether the suggested messages are accurate or inaccurate, correct or incorrect, high quality or low quality, etc. UI module 104 may send application module 206 the received input indicating feedback related to at least one suggested message of the set of suggested messages. Application module 206 may transmit the feedback to reinforcement training module 244. Reinforcement training module 244 may obtain this feedback from the user of computing device 200 (as well as feedback from other users) and use the feedback for training. For example, reinforcement training module 244 may convert the feedback into labeled data for supervised training. Additionally or alternatively, reinforcement training
Docket No.: 1333-481WO01 module 244 may fine-tune ML model 252 by monitoring the relationship between the performance of ML model 252 and user feedback, and iterate the fine-tuning process as necessary (e.g., to receive more positive user feedback and less negative user feedback). In this way, the techniques of this disclosure may establish a feedback loop that continuously improves the quality of the output of ML model 252. [0078] Context analysis module 238 may determine the context of draft message 114 and/or message thread 112, as depicted in FIG. 1. Context analysis module 238 may determine the context of draft message 114 by preserving the original intent of draft message 114, not simply the content of draft message 114. For example, context analysis module 238 may use the textual understanding of ML model 252 (e.g., LLM) to determine the context of draft message 114 in view of message thread 112. Context analysis module 238 may determine the context or intent of draft message 114, such as the identity of the recipient of draft message 114, a time-of-day draft message 114 was composed, an application of application modules 106 being used to send and receive message thread 112, a message tone used by one or more messages of message thread 112, or any other factors indicating an intent associated with draft message 114. Context analysis module 238 may also determine the context or intent of messages of message thread 112, such as the identity of users communicating in message thread 112, the tone of messages of message thread 112, recent messages of message thread 112, punctuation used in messages of message thread 112, or any other factors indicating an intent associated with one or more messages of message thread 112. [0079] Response generation module 242 may use ML model 252 to apply a message tone of message tones 118 to a determined context of draft message 114 and/or message thread 112. For example, response generation module 242 may generate suggested messages 122 by applying a message tone trained by tone training module 240 to a context determined by context analysis module 238. [0080] Response generation module 242 may include response elaboration module 248. Response elaboration module 248 may use context analysis module 238 understand the context of message thread 112 (e.g., information included in one or more recent messages included in message thread 112, a tone of one or more recent messages included in message thread 112, users involved in message thread 112, etc.). In some instances, context analysis module 238 may apply ML model 252 to understand the context of message thread 112. Context analysis module 238 may determine the context of message thread 112 based on parameters associated with the intent of messages included in
Docket No.: 1333-481WO01 message thread 112. For example, context analysis module 238 may determine the context of message thread 112 based on a time-of-day recent or previous messages in message thread 112 were sent, the users involved in message thread 112, a tone of one or more messages included in message thread 112, etc. Context analysis module 238 may send response elaboration module 248 the determined context of message thread 112. Response elaboration module 248 may apply ML model 252 to generate suggested messages 122. Response elaboration module 248 may generate suggested messages 122 that include messages that elaborate or expand on draft message 114 based on the determined context of message thread 112. Response elaboration module 248 may use user interface module 204 to output the generated suggested messages 122 via output device 236. [0081] FIGS. 3A-3D are conceptual diagrams illustrating example graphical user interfaces for providing suggested messages in accordance with one or more techniques of this disclosure. The example of FIGS. 3A-3D is described below within the context of FIG. 2. As shown in FIGS. 3A-3D, GUIs 310A-310D may include message threads 312A-312D and draft messages 314A-314D displayed in text editing regions 360A-360D. Although not explicitly depicted, the examples of FIGS. 3A-3D may output an option granting computing device 200 consent to perform the techniques described herein. Thus, the user operating computing device 200 may have control over how information is collected about the user and used by computing device 200 and/or other computing systems or computing devices described herein. [0082] In the example of FIG. 3A, text editing region 360A includes draft message 314A stating “Wanna grab dinner?”. Draft message 314A may include text generated by a user interacting with computing device 200. Draft message 314A may include text of characters selected with keyboard 354A. For example, UI device 202 may detect inputs of a user selecting characters included in keyboard 354A. UI device 202 may send inputs to application module 206. Application module 206 may instruct UI device 202 – via UI module 204 – to update GUI 310A to output the selected characters in text editing region 360A. Text editing region 360A may include one or more characters that represent draft
Docket No.: 1333-481WO01 message 314A. In some instances, application module 206 may instruct UI device 202 to output GUI 310A that includes message tone change request graphical element 316A. UI device 202 may receive a signal indicating a user selecting message tone change request graphical element 316A. Responsive to selecting message tone change request graphical element 316A, UI module 204 may send draft message 314A and message thread 312A to application module 206. Application module 206 may transmit draft message 314A and message thread 312A to suggestion module 208. As described above, suggestion module 208 may determine the context of draft message 314A and messages of message thread 312A to suggest rewrites of draft message 314A composed in one or more message tones. [0083] In the example of FIG. 3B, UI device 202 has received a signal that a user interacting with computing device 200 has selected message tone change request graphical element 316B. Responsive to selecting message tone change request graphical element 316B, application module 206 may instruct UI module 204 to update GUI 310B output by UI device 202. UI device 202 may output GUI 310B to include message tones 318A-318N (collectively referred to as “message tones 318”). As illustrated in the example of FIG. 3B, “Shakespeare” message tone 318A may be selected by a user interacting with GUI 310B. In some examples, one or more of message tones 318 may be selected. UI device 202 may detect an input from a user operating computing device 200 indicating one or more of message tones 318 have been selected. Responsive to selecting one or more of message tones 318, UI module 204 may send draft message 314B, message thread 312B, and one or more selected message tones 318 to application module 206. Application module 206 may transmit draft message 314B, message thread 312B, and the one or more selected message tones 318 to suggestion module 208. [0084] As described above, context analysis module 238 of suggestion module 208 may determine a context of draft message 314B and/or message thread 312B. Context analysis module 238 may determine the context of draft message 314B and/or message thread 312B based on any relevant information indicating an intent underlying draft message 314B and/or message thread 312B, respectively. Context analysis module 238 may determine the context of draft message 314B based on, for example, message thread 312B, a tone of one or more messages included in message thread 312B, a time-of-day draft message 314B was composed, etc. Context analysis module 238 may determine the context of message thread 312B based on, for example, the identity of users communicating in message thread 312B, the tone of messages of message thread 312B,
Docket No.: 1333-481WO01 recent messages of message thread 312B, punctuation used in messages of message thread 312B, or any other factors indicating an intent associated with one or more messages of message thread 312B. [0085] Response generation module 242 may generate a set of suggested messages 322A-322N (collectively, “suggested messages 322”). Response generation module 242 may use ML model 252 to generate suggested messages 322 according to one or more selected message tones of message tones 318. Tone training module 240 may train ML model 252 to learn how to generate suggested messages 322 according to message tones 318, as discussed in more detail above. Responsive to response generation module 242 generating suggested messages 322 according to one or more selected message tones of message tones 318, suggestion module 208 may send application module 206 the suggested messages 322. Application module 206 may transmit suggested messages 322 to UI module 204. UI module 204 may update UI device 202 with GUI 310B. UI device 202 may display suggested messages 322 in message selection area 320. [0086] In some examples, UI device 202 may output GUI 310B that includes feedback graphical elements 356. UI device 202 may detect an input of a user interacting with (e.g., selecting) one or more of feedback graphical elements 356 to provide feedback relating to suggested messages 322. UI module 204 may send the inputs associated with the feedback related to suggested messages 322 to application module 206. Application module 206 may transmit the feedback to tone training module 240 of suggestion module 208. Reinforcement training module 244 of tone training module 240 may use the user feedback to train ML model 252. For example, reinforcement training module 244 may train ML model 252 to maximize positive user feedback and minimize negative user feedback. [0087] In the example of FIG. 3B, feedback graphical elements 356 may include a “thumbs up” selection icon, a “thumbs down” selection icon, or a “manual feedback” selection icon. Application module 206 may instruct UI module 204 to output feedback graphical elements 356 in GUI 310B. UI device 202 may allow a user interacting with GUI 310B to select one or more of the icons included in feedback graphical elements 356. Responsive to the user interacting with feedback graphical elements 356, UI module 204 may send the feedback included in the user interaction with feedback graphical elements 356 to application module 206. Application module 206 may transmit the feedback to reinforcement training module 244 of tone training module 240. Reinforcement training module 244 may use the information included in the feedback to further refine the
Docket No.: 1333-481WO01 training of ML model 252 with respect to the one or more selected message tones of message tones 318. [0088] In the example of FIG. 3C, application module 206 may instruct UI module 204 to output GUI 310C via UI device 202. UI device 202 may output the selected message of suggested messages 322 in text editing region 360C. Draft message 314C of FIG. 3C includes text of the selected message of suggested messages 322. UI device 202 may also output keyboard 354C. UI device 202 may output keyboard 354C to provide the user interacting with computing device 200 the opportunity to edit or manually change text included in text editing region 360C. [0089] UI device 202 may detect an input of a user selecting send button 358. UI module 204 may send the input of the user selecting send button 358 to application module 206. Send button 358 may interact with application module 206 to include draft message 314C in message thread 312C, and effectively send draft message 314C to the other users involved in message thread 312C. Responsive to sending draft message 314C, application module 206 may then instruct UI module 204 to update GUI 310C to include draft message 314C in message thread 312C. [0090] In the example of FIG. 3D, application module 206 may instruct UI module 204 to output GUI 310D via UI device 202. UI device 202 outputs GUI 310D to display draft message 314C (e.g., the selected message of suggested messages 322) as part of message thread 312D. UI device 202 outputs GUI 310D to display text editing region 360D as including no text. UI device 202 outputs GUI 310D to display keyboard 354D to allow a user to select characters. UI device 202 may detect an input of a user selecting characters included in keyboard 354D. UI module 204 may send the detected inputs associated with keyboard 354D to application module 206. Application module 206 may then instruct UI module 204 to update GUI 310D to include the inputs associated with keyboard 354D in text editing region 360D. [0091] FIG. 4 is a flowchart illustrating an example operation for providing suggested messages in accordance with one or more techniques of this disclosure. Although the example operation of FIG. 4 is described as being performed by computing device 200 of FIG. 2 and with respect to elements illustrated in FIG. 1, in other examples some or all of the example operations may be performed by another computing device or computing system. [0092] In accordance with the techniques of this disclosure, suggestion module 208 of computing device 200 may obtain message thread 112 (or another message thread) (402).
Docket No.: 1333-481WO01 Message thread 112 may include a plurality of messages exchanged between a user operating computing device 200 and a user operating one or more other computing devices. Message thread 112 may include a plurality of messages exchanged via a communication platform provided by application module 206. [0093] Suggestion module 208 may apply ML model 252 (e.g., a large language model) to draft message 114 to generate suggested messages 122 according to one or more selected message tones of message tones 118 (404). Response generation module 242 of suggestion module 208 may apply ML model 252 to generate suggested messages 122 according to one or more selected message tones of message tones 118, as well as a determined context of draft message 114 (e.g., the identity of the recipient of draft message 114, a time-of-day draft message 114 was composed, an application of application modules 106 being used to send and receive message thread 112, a message tone used by one or more messages of message thread 112, or any other factors indicating an intent associated with draft message 114) and or a determined context of message thread 112 (the identity of users communicating in message thread 112, the tone of one or more messages of message thread 112, recent or previous messages of message thread 112, punctuation used in one or more messages of message thread 112, or any other factors indicating an intent associated with one or more messages of message thread 112). Tone training module 240 of suggestion module 208 may train ML model 252 on how to compose suggested messages 122 according to message tones 118. Response generation module 242 may apply the trained ML model 252, along with a context determined by context analysis module 238, to generate suggested messages 122. [0094] Suggestion module 208 may send at least a subset of the generated suggested messages 122 to application module 206. Application module 206 may transmit suggested messages 122 to user interface module 204 to output suggested messages 122 via UI device 202. Output device 236 of user interface devices 202 may output at least a subset of suggested messages 122 (406). Output devices 236 may output suggested messages 122 in message selection area 120 of GUI 110. [0095] Input devices 234 of user interface devices 202 may receive an input that selects a suggested message of the outputted suggested messages 122 (408). Responsive to input devices 234 receiving an input selecting a suggested message of suggested messages 122, UI module 204 may send application module 206 the selected message of suggested messages 122. Application module 206 may instruct UI module 204 to update user interface device 202. UI device 202 may output the selected message of suggested
Docket No.: 1333-481WO01 messages 122 in text editing region 160 (410). User interface device 202 may replace draft message 114 with the selected message of suggested messages 122 via instructions user interface module 204 receives from application module 206. [0096] Throughout the disclosure, examples are described where a computing device and/or a computing system analyzes information (e.g., wireless ID tags and respective information, locations, context, motion, etc.) associated with a computing device and a user of the computing device, only if the computing device receives permission from the user of the computing device to analyze the information. For example, in situations discussed above and below, before a computing device or computing system can collect or may make use of information associated with a user, the user may be provided with an opportunity to provide input to control whether programs or features of the computing device and/or computing system can collect and make use of user information (e.g., information about a user’s or user device’s current location, such as by GPS or wireless ID tag, etc.), or to dictate whether and/or how to the device and/or system may receive content that may be relevant to the user. In addition, certain data may be treated in one or more ways before it is stored or used by the computing device and/or computing system, so that personally identifiable information is removed. For example, a user’s identity and image may be treated so that no personally identifiable information can be determined about the user, or a user’s geographic location may be generalized where location information is obtained (such as to a city, ZIP code, or state level), so that a particular location of a user cannot be determined. Thus, the user may have control over how information is collected about the user and used by the computing device and computing system. [0097] Example A1: A method includes obtaining, by a computing device, a message thread that includes a plurality of messages between a first user associated with the computing device and a second user; generating, by the computing device, a set of suggested messages by at least applying a machine learning model to a draft message, wherein the set of suggested messages includes at least one message having a message tone from a plurality of message tones; outputting, by the computing device and for display, a graphical user interface that includes the at least one suggested message from the set of suggested messages; receiving, by the computing device, a user input that selects a suggested message from the at least one suggested message as a selected message; and outputting, by the computing device, an updated graphical user interface that includes the selected message in place of the draft message.
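For illustration only, and not as part of the examples or claims, the following Python sketch shows one way the method of example A1 might be prototyped. The Message type, the TextGenerationModel interface, the prompt wording, and the helper names are assumptions introduced here for clarity; any text-generation model could stand behind the callable.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Message:
    sender: str
    text: str

# Hypothetical stand-in for a machine learning model such as ML model 252;
# the callable takes a prompt and returns candidate suggested messages.
TextGenerationModel = Callable[[str], List[str]]

def generate_suggested_messages(
    model: TextGenerationModel,
    thread: List[Message],
    draft: str,
    tone: str,
) -> List[str]:
    """Apply the model to the draft message to produce suggested messages
    composed in the selected message tone, given the message thread."""
    history = "\n".join(f"{m.sender}: {m.text}" for m in thread)
    prompt = (
        f"Conversation so far:\n{history}\n\n"
        f"Rewrite the draft reply below in a {tone} tone, preserving its intent.\n"
        f"Draft: {draft}\n"
    )
    return model(prompt)

def replace_draft_with_selection(suggestions: List[str], selected_index: int) -> str:
    """Return the selected suggestion, which a UI would place in the text
    editing region in place of the draft message."""
    return suggestions[selected_index]

if __name__ == "__main__":
    # Toy model used only so this sketch runs end to end.
    def toy_model(prompt: str) -> List[str]:
        return ["Wouldst thou join me for supper this eve?",
                "Pray, shall we dine together anon?"]

    thread = [Message("Alice", "Are you free tonight?")]
    suggestions = generate_suggested_messages(
        toy_model, thread, "Wanna grab dinner?", "Shakespeare")
    print(replace_draft_with_selection(suggestions, 0))
```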
Docket No.: 1333-481WO01 [0098] Example A2: The method of example A1, further includes determining a context of the draft message, wherein generating the set of suggested messages includes applying the machine learning model to both the draft message and the context of the draft message. [0099] Example A3: The method of any of examples A1 and A2, further includes outputting, for display, an indication of the plurality of message tones to apply to the draft message; and receiving, by the computing device, a user input that selects the message tone. [0100] Example A4: The method of any of examples A1 through A3, further includes determining a context of the message thread, wherein generating the set of suggested messages includes applying the machine learning model to both the draft message and the context of the message thread. [0101] Example A5: The method of any of examples A1 through A4, wherein applying the machine learning model comprises determining a message value for each suggested message of the set of suggested messages, and wherein the message value indicates a confidence rating associated with how closely each suggested message matches the message tone. [0102] Example A6: The method of example A5, wherein each suggested message of the set of suggested messages satisfies a message value condition, and wherein the message value condition includes a threshold confidence rating associated with the message tone. [0103] Example A7: The method of example A6, wherein outputting the plurality of messages includes outputting, by the computing device and for display, the set of suggested messages in an order based on the message value for each suggested message of the set of suggested messages. [0104] Example A8: The method of any of examples A1 through A7, wherein at least one suggested message from the set of suggested messages includes at least one of text, emoji, emoticons, images, reactions, animations, or videos. [0105] Example A9: The method of any of examples A1 through A8, wherein each message tone of the plurality of message tones is categorized as either a character, an emotion, or a tactic. [0106] Example A10: The method of any of examples A1 through A9, further includes for each respective message tone from the plurality of message tones: receiving a description of the respective message tone, wherein the description of the respective message tone includes one or more of a plurality of examples associated with the
Docket No.: 1333-481WO01 respective message tone and a plurality of rules associated with the respective message tone; and training the machine learning model using the description of the respective message tone. [0107] Example A11: The method of any of examples A1 through A10, further includes receiving, by the computing device, a user input that selects feedback relating to at least one suggested message from the set of suggested messages; and training the machine learning model based on the feedback. [0108] Example A12: The method of any of examples A1 through A11, wherein applying the machine learning model comprises excluding generation of content relating to an excluded set of information types comprising one or more of phone numbers, addresses, or web addresses. [0109] Example A13: The method of any of examples A1 through A12, wherein the machine learning model is a large language model. [0110] Example A14: The method of any of examples A1 through A13, wherein the second user is associated with the computing device. [0111] Example A15: The method of any of examples A1 through A13, wherein the computing device is a first computing device, and wherein the second user is associated with a second computing device different from the computing device. [0112] EXAMPLE B1: A device comprises at least one processor, a display, and a storage device that stores instructions executable by the at least one processor. The instructions executable by the at least one processor may be configured to obtain a message thread that includes a plurality of messages between a first user associated with the device and a second user; generate a set of suggested messages by at least applying a machine learning model to a draft message, wherein the set of suggested messages includes at least one message having a message tone from a plurality of message tones; output, for display, a graphical user interface that includes the at least one suggested message from the set of suggested messages; receive a user input that selects a suggested message from the at least one suggested message as a selected message; and output an updated graphical user interface that includes the selected message in place of the draft message. [0113] EXAMPLE B2: The device of example B1 wherein the at least one processor is configured to: determine a context of the draft message, wherein to generate the set of suggested messages includes applying the machine learning model to both the draft message and the context of the draft message.
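By way of illustration only, a minimal sketch of how the thumbs-up/thumbs-down feedback of paragraphs [0086]-[0087] and example A11 might be accumulated as a reward signal. The FeedbackStore class and its interface are assumptions of this sketch; the disclosure does not prescribe how feedback is represented, and a real reinforcement step would feed these rewards into model fine-tuning or re-ranking.

```python
from collections import defaultdict
from typing import Dict, Tuple

class FeedbackStore:
    """Accumulates positive and negative user feedback per (tone, suggestion)
    pair as a simple reward signal for later training."""

    def __init__(self) -> None:
        self._rewards: Dict[Tuple[str, str], int] = defaultdict(int)

    def record(self, tone: str, suggestion: str, thumbs_up: bool) -> None:
        # Positive feedback increases the reward; negative feedback decreases it.
        self._rewards[(tone, suggestion)] += 1 if thumbs_up else -1

    def reward(self, tone: str, suggestion: str) -> int:
        return self._rewards[(tone, suggestion)]

store = FeedbackStore()
store.record("Shakespeare", "Wouldst thou join me for supper?", thumbs_up=True)
store.record("Shakespeare", "Wouldst thou join me for supper?", thumbs_up=True)
store.record("Shakespeare", "Dine with me, knave.", thumbs_up=False)
print(store.reward("Shakespeare", "Wouldst thou join me for supper?"))  # 2
```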
Docket No.: 1333-481WO01 [0114] EXAMPLE B3: The device of any combination of examples B1 through B2 wherein the at least one processor is further configured to: output, for display, an indication of the plurality of message tones to apply to the draft message; and receive a user input that selects the message tone. [0115] EXAMPLE B4: The device of any combination of examples B1 through B3 wherein the at least one processor is configured to: determine a context of the message thread, wherein to generate the set of suggested messages includes applying the machine learning model to both the draft message and the context of the message thread. [0116] EXAMPLE B5: The device of any combination of examples B1 through B4 wherein applying the machine learning model comprises determining a message value for each suggested message of the set of suggested messages, and wherein the message value indicates a confidence rating associated with how closely each suggested message matches the message tone. [0117] EXAMPLE B6: The device of any combination of examples B1 through B5 wherein each suggested message of the set of suggested messages satisfies a message value condition, and wherein the message value condition includes a threshold confidence rating associated with the message tone. [0118] EXAMPLE B7: The device of any combination of examples B1 through B6 wherein to output the plurality of messages includes outputting, for display, the set of suggested messages in an order based on the message value for each suggested message of the set of suggested messages. [0119] EXAMPLE B8: The device of any combination of examples B1 through B7 wherein outputting the plurality of messages includes displaying, by the computing device, the set of suggested messages in an order based on the message value for each suggested message of the set of suggested messages. [0120] EXAMPLE B9: The device of any combination of examples B1 through B8 wherein at least one suggested message from the set of suggested messages includes at least one of text, emoji, emoticons, images, reactions, animations, or videos. [0121] EXAMPLE B10: The device of any combination of examples B1 through B9 wherein each message tone of the plurality of message tones is categorized as either a character, an emotion, or a tactic. [0122] EXAMPLE B11: The device of any combination of examples B1 through B10 wherein the at least one processor is further configured to: receive a user input that selects
Docket No.: 1333-481WO01 feedback relating to at least one suggested message from the set of suggested messages; and train the machine learning model based on the feedback. [0123] EXAMPLE B12: The device of any combination of examples B1 through B11 wherein applying the machine learning model comprises excluding generation of content relating to an excluded set of information types comprising one or more of phone numbers, addresses, or web addresses. [0124] EXAMPLE B13: The device of any combination of examples B1 through B12 wherein the machine learning model is a large language model. [0125] Example B14: The method of any of examples B1 through B13, wherein the second user is associated with the computing device. [0126] Example B15: The method of any of examples B1 through B13, wherein the computing device is a first computing device, and wherein the second user is associated with a second computing device different from the computing device. [0127] EXAMPLE C1: A computer-readable storage medium encoded with instructions, that when executed, cause at least one processor of a computing device to obtain a message thread that includes a plurality of messages between a first user associated with a computing device and a second user. The instructions may further cause the at least one processor to generate a set of suggested messages by at least applying a machine learning model to a draft message, wherein the set of suggested messages includes at least one message having a message tone from a plurality of message tones. The instructions may further cause the at least one processor to output, for display, a graphical user interface that includes the at least one suggested message from the set of suggested messages. The instructions may further cause the at least one processor to receive a user input that selects a suggested message from the at least one suggested message as a selected message. The instructions may further cause the at least one processor to output an updated graphical user interface that includes the selected message in place of the draft message. [0128] EXAMPLE C2: The computer-readable storage medium of example C1 wherein the instructions configure the at least one processor to: determine a context of the draft message, wherein generating the set of suggested messages includes applying the machine learning model to both the draft message and the context of the draft message. [0129] EXAMPLE C3: The computer-readable storage medium of any combination of examples C1 through C2 wherein the instructions configure the at least one processor to:
Docket No.: 1333-481WO01 output, for display, an indication of the plurality of message tones to apply to the draft message; and receive a user input that selects the message tone. [0130] EXAMPLE C4: The computer-readable storage medium of any combination of examples C1 through C3 wherein the instructions configure the at least one processor to: determine a context of the message thread, wherein generating the set of suggested messages includes applying the machine learning model to both the draft message and the context of the message thread. [0131] EXAMPLE C5: The computer-readable storage medium of any combination of examples C1 through C4 wherein the instructions configure the at least one processor to: determine a message value for each suggested message of the set of suggested messages, and wherein the message value indicates a confidence rating associated with how closely each suggested message matches the message tone. [0132] EXAMPLE C6: The computer-readable storage medium of any combination of examples C1 through C5 wherein each suggested message of the set of suggested messages satisfies a message value condition, and wherein the message value condition includes a threshold confidence rating associated with the message tone. [0133] EXAMPLE C7: The computer-readable storage medium of C6 wherein outputting the plurality of messages includes outputting, for display, the set of suggested messages in an order based on the message value for each suggested message of the set of suggested messages. [0134] EXAMPLE C8: The computer-readable storage medium of C6 wherein at least one suggested message from the set of suggested messages includes at least one of text, emoji, emoticons, images, reactions, animations, or videos. [0135] EXAMPLE C9: The computer-readable storage medium of any combination of examples C1 through C8 wherein each message tone of the plurality of message tones is categorized as either a character, an emotion, or a tactic. [0136] EXAMPLE C10: The computer-readable storage medium of any combination of examples C1 through C9 wherein the instructions configure the at least one processor to: for each respective message tone from the plurality of message tones: receive a description of the respective message tone, wherein the description of the respective message tone includes one or more of a plurality of examples associated with the respective message tone and a plurality of rules associated with the respective message tone; and train the machine learning model using the description of the respective message tone.
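For illustration only, a sketch of the kind of tone description contemplated by examples A10 and C10: a set of example sentences and rules associated with a tone. Folding the description into a prompt, as below, is one possible realization introduced here as an assumption; the disclosure instead describes training the machine learning model using the description.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ToneDescription:
    """A description of a message tone: examples written in the tone plus
    rules the tone should follow."""
    name: str
    examples: List[str] = field(default_factory=list)
    rules: List[str] = field(default_factory=list)

def tone_prompt(description: ToneDescription, draft: str) -> str:
    """Render the tone description as a few-shot style instruction for a
    text-generation model."""
    example_block = "\n".join(f"- {e}" for e in description.examples)
    rule_block = "\n".join(f"- {r}" for r in description.rules)
    return (
        f"Tone: {description.name}\n"
        f"Examples of this tone:\n{example_block}\n"
        f"Rules for this tone:\n{rule_block}\n"
        f"Rewrite the following draft in this tone: {draft}\n"
    )

shakespeare = ToneDescription(
    name="Shakespeare",
    examples=["Wouldst thou join me for supper this eve?"],
    rules=["Use Early Modern English pronouns", "Keep the original intent"],
)
print(tone_prompt(shakespeare, "Wanna grab dinner?"))
```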
Docket No.: 1333-481WO01 [0137] EXAMPLE C11: The computer-readable storage medium of any combination of examples C1 through C10 wherein the instructions configure the at least one processor to: receive a user input that selects feedback relating to at least one suggested message from the set of suggested messages; and train the machine learning model based on the feedback. [0138] EXAMPLE C12: The computer-readable storage medium of any combination of examples C1 through C11 wherein the instructions configure the at least one processor to: to exclude generation of content relating to an excluded set of information types comprising one or more of phone numbers, addresses, or web addresses. [0139] EXAMPLE C13: The computer-readable storage medium of any combination of examples C1 through C12 wherein the machine learning model is a large language model. [0140] Example C14: The method of any of examples C1 through C13, wherein the second user is associated with the computing device. [0141] Example C15: The method of any of examples C1 through C13, wherein the computing device is a first computing device, and wherein the second user is associated with a second computing device different from the computing device. [0142] EXAMPLE D1: A system comprising at least one processor, a network interface, and a storage device that stores instructions executable by the at least one processor to obtain a message thread that includes a plurality of messages between a first user associated with a computing device and a second user. The instructions may further cause the at least one processor to generate a set of suggested messages by at least applying a machine learning model to a draft message, wherein the set of suggested messages includes at least one message having a message tone from a plurality of message tones. The instructions may further cause the at least one processor to send, to the computing device via the network interface, at least one suggested message from the set of suggested messages. [0143] EXAMPLE D2: The system of example D1 wherein the instructions configure the at least one processor to: determine a context of the draft message, wherein generating the set of suggested messages includes applying the machine learning model to both the draft message and the context of the draft message. [0144] EXAMPLE D3: The system of any combination of examples D1 through D2 wherein the instructions configure the at least one processor to: receive, from the computing device via the network interface, a user input that selects the message tone of the plurality of message tones.
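For illustration only, a sketch of the message value handling described in examples A5-A7, B5-B7, C5-C7, and the corresponding D examples: each suggested message receives a confidence rating, only messages satisfying a threshold condition are kept, and the survivors are ordered by that value. The toy scoring function is an assumption standing in for whatever confidence the model actually reports.

```python
from typing import Callable, List, Tuple

def select_and_order_suggestions(
    candidates: List[str],
    score: Callable[[str], float],
    threshold: float,
) -> List[Tuple[str, float]]:
    """Keep only candidates whose message value (a confidence rating for how
    closely the candidate matches the selected tone) satisfies the threshold
    condition, then order them by that value, highest first."""
    scored = [(text, score(text)) for text in candidates]
    kept = [(text, value) for text, value in scored if value >= threshold]
    return sorted(kept, key=lambda item: item[1], reverse=True)

# Toy scorer standing in for the model's confidence output.
def toy_score(text: str) -> float:
    return 0.9 if "thou" in text.lower() else 0.4

candidates = ["Wouldst thou join me for supper?", "Want to get dinner?"]
print(select_and_order_suggestions(candidates, toy_score, threshold=0.5))
```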
Docket No.: 1333-481WO01 [0145] EXAMPLE D4: The system of any combination of examples D1 through D3 wherein the instructions configure the at least one processor to: determine a context of the message thread, wherein generating the set of suggested messages includes applying the machine learning model to both the draft message and the context of the message thread. [0146] EXAMPLE D5: The system of any combination of examples D1 through D4 wherein the instructions configure the at least one processor to: determine a message value for each suggested message of the set of suggested messages, and wherein the message value indicates a confidence rating associated with how closely each suggested message matches the message tone. [0147] EXAMPLE D6: The system of any combination of examples D1 through D5 wherein each suggested message of the set of suggested messages satisfies a message value condition, and wherein the message value condition includes a threshold confidence rating associated with the message tone. [0148] EXAMPLE D7: The system of any combination of examples D1 through D6 wherein at least one suggested message from the set of suggested messages includes at least one of text, emoji, emoticons, images, reactions, animations, or videos. [0149] EXAMPLE D8: The system of any combination of examples D1 through D7 wherein each message tone of the plurality of message tones is categorized as either a character, an emotion, or a tactic. [0150] EXAMPLE D9: The system of any combination of examples D1 through D8 wherein the instructions configure the at least one processor to: for each respective message tone from the plurality of message tones: receive a description of the respective message tone, wherein the description of the respective message tone includes one or more of a plurality of examples associated with the respective message tone and a plurality of rules associated with the respective message tone; and train the machine learning model using the description of the respective message tone. [0151] EXAMPLE D10: The system of any combination of examples D1 through D9 wherein the instructions configure the at least one processor to: receive a user input that selects feedback relating to at least one suggested message from the set of suggested messages; and train the machine learning model based on the feedback. [0152] EXAMPLE D11: The system of any combination of examples D1 through D10 wherein the instructions configure the at least one processor to: to exclude generation of content relating to an excluded set of information types comprising one or more of phone numbers, addresses, or web addresses.
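For illustration only, a sketch of the exclusion described in examples A12, C12, and D11: suggested messages containing phone numbers, addresses, or web addresses are not surfaced. A post-generation filter, as below, is one possible realization assumed here; the deliberately simple regular expressions are placeholders, not the disclosure's detection logic.

```python
import re
from typing import List

# Deliberately simple patterns; robust detection of phone numbers, street
# addresses, and web addresses would require more care.
EXCLUDED_PATTERNS = [
    re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),      # phone-number-like
    re.compile(r"https?://\S+|www\.\S+", re.IGNORECASE),   # web addresses
    re.compile(r"\b\d+\s+\w+\s+(Street|St|Avenue|Ave|Road|Rd)\b", re.IGNORECASE),
]

def drop_excluded_content(suggestions: List[str]) -> List[str]:
    """Discard suggested messages that contain content from the excluded set
    of information types (phone numbers, addresses, web addresses)."""
    return [
        s for s in suggestions
        if not any(pattern.search(s) for pattern in EXCLUDED_PATTERNS)
    ]

print(drop_excluded_content([
    "Call me at 555-123-4567",
    "See you at dinner!",
    "Details at https://example.com",
]))
```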
Docket No.: 1333-481WO01 [0153] EXAMPLE D12: The system of any combination of examples D1 through D11 wherein the machine learning model is a large language model. [0154] Example D13: The method of any of examples D1 through D12, wherein the second user is associated with the computing device. [0155] Example D14: The method of any of examples D1 through D12, wherein the computing device is a first computing device, and wherein the second user is associated with a second computing device different from the computing device. [0156] By way of example, and not limitation, such computer-readable storage media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage, or other magnetic storage devices, flash memory, or any other storage medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer. Also, any connection is properly termed a computer-readable medium. For example, if instructions are transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. It should be understood, however, that computer-readable storage mediums and media and data storage media do not include connections, carrier waves, signals, or other transient media, but are instead directed to non-transient, tangible storage media. Disk and disc, as used herein, includes compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of a computer-readable medium. [0157] The techniques described in this disclosure may be implemented, at least in part, in hardware, software, firmware, or any combination thereof. For example, various aspects of the described techniques may be implemented within one or more processors, including one or more microprocessors, digital signal processors (DSPs), application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), or any other equivalent integrated or discrete logic circuitry, as well as any combinations of such components. The term “processor” or “processing circuitry” may generally refer to any of the foregoing logic circuitry, alone or in combination with other logic circuitry, or any other equivalent circuitry. A control unit including hardware may also perform one or more of the techniques of this disclosure.
Docket No.: 1333-481WO01 [0158] Such hardware, software, and firmware may be implemented within the same device or within separate devices to support the various techniques described in this disclosure. In addition, any of the described units, modules or components may be implemented together or separately as discrete but interoperable logic devices. Depiction of different features as modules or units is intended to highlight different functional aspects and does not necessarily imply that such modules or units must be realized by separate hardware, firmware, or software components. Rather, functionality associated with one or more modules or units may be performed by separate hardware, firmware, or software components, or integrated within common or separate hardware, firmware, or software components. [0159] Various examples of the invention have been described. These and other examples are within the scope of the following claims.
Claims
Docket No.: 1333-481WO01 WHAT IS CLAIMED IS: 1. A method comprising: obtaining, by a computing device, a message thread that includes a plurality of messages between a first user associated with the computing device and a second user; generating, by the computing device, a set of suggested messages by at least applying a machine learning model to a draft message, wherein the set of suggested messages includes at least one suggested message having a message tone from a plurality of message tones; outputting, by the computing device and for display, a graphical user interface that includes the at least one suggested message from the set of suggested messages; receiving, by the computing device, a user input that selects a suggested message from the at least one suggested message as a selected message; and outputting, by the computing device, an updated graphical user interface that includes the selected message in place of the draft message. 2. The method of claim 1, further comprising: determining a context of the draft message, wherein generating the set of suggested messages includes applying the machine learning model to both the draft message and the context of the draft message. 3. The method of claim 1, further comprising, prior to generating the set of suggested messages: outputting, for display, an indication of the plurality of message tones to apply to the draft message; and receiving, by the computing device, a user input that selects the message tone. 4. The method of claim 1, further comprising: determining a context of the message thread, wherein generating the set of suggested messages includes applying the machine learning model to both the draft message and the context of the message thread. 5. The method of claim 1, wherein applying the machine learning model comprises determining a message value for each suggested message of the set of suggested
Docket No.: 1333-481WO01 messages, and wherein the message value indicates a confidence rating associated with how closely each suggested message matches the message tone. 6. The method of claim 5, wherein each suggested message of the set of suggested messages satisfies a message value condition, and wherein the message value condition includes a threshold confidence rating associated with the message tone. 7. The method of claim 5, wherein outputting the plurality of messages includes outputting, by the computing device and for display, the set of suggested messages in an order based on the message value for each suggested message of the set of suggested messages. 8. The method of claim 1, wherein at least one suggested message from the set of suggested messages includes at least one of text, emoji, emoticons, images, reactions, animations, or videos. 9. The method of claim 1, wherein each message tone of the plurality of message tones is categorized as either a character, an emotion, or a tactic. 10. The method of claim 1, further comprising: for each respective message tone from the plurality of message tones: receiving a description of the respective message tone, wherein the description of the respective message tone includes one or more of a plurality of examples associated with the respective message tone and a plurality of rules associated with the respective message tone; and training the machine learning model using the description of the respective message tone. 11. The method of claim 1, further comprising: receiving, by the computing device, a user input that selects feedback relating to at least one suggested message from the set of suggested messages; and training the machine learning model based on the feedback.
Docket No.: 1333-481WO01 12. The method of claim 1, wherein applying the machine learning model comprises excluding generation of content relating to an excluded set of information types comprising one or more of phone numbers, addresses, or web addresses. 13. The method of claim 1, wherein the machine learning model is a large language model. 14. The method of claim 1, wherein the second user is associated with the computing device. 15. The method of claim 1, wherein the computing device is a first computing device, and wherein the second user is associated with a second computing device different from the computing device. 16. A device comprising means for performing any of the methods of claims 1-15. 17. A system comprising means for performing any of the methods of claims 1-15. 18. A computer-readable storage medium encoded with instructions that cause one or more processors of a computing system to perform any of the methods of claims 1-15.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US202363500189P | 2023-05-04 | 2023-05-04 | |
US63/500,189 | 2023-05-04 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2024228718A1 true WO2024228718A1 (en) | 2024-11-07 |
Family
ID=88287365
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/US2023/032010 WO2024228718A1 (en) | 2023-05-04 | 2023-09-05 | Change the tone of a message thread reply using machine learning |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2024228718A1 (en) |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11809829B2 (en) | Virtual assistant for generating personalized responses within a communication session | |
US11062270B2 (en) | Generating enriched action items | |
US20230214781A1 (en) | Generating Smart Reminders by Assistant Systems | |
CN110892395B (en) | Virtual assistant providing enhanced communication session services | |
US9715498B2 (en) | Distributed server system for language understanding | |
US20180293483A1 (en) | Creating a Conversational Chat Bot of a Specific Person | |
US20180173692A1 (en) | Iconographic symbol predictions for a conversation | |
KR20190057357A (en) | Smart responses using the on-device model | |
US20230401170A1 (en) | Exploration of User Memories in Multi-turn Dialogs for Assistant Systems | |
CN107750360A (en) | Generated by using the context language of language understanding | |
US20220284904A1 (en) | Text Editing Using Voice and Gesture Inputs for Assistant Systems | |
US20180061393A1 (en) | Systems and methods for artifical intelligence voice evolution | |
CN114375449A (en) | Techniques for dialog processing using contextual data | |
US20230409615A1 (en) | Systems and Methods for Providing User Experiences on Smart Assistant Systems | |
KR102398386B1 (en) | Method of filtering a plurality of messages and apparatus thereof | |
US20200257954A1 (en) | Techniques for generating digital personas | |
US20220318499A1 (en) | Assisted electronic message composition | |
WO2024228718A1 (en) | Change the tone of a message thread reply using machine learning | |
US20210034946A1 (en) | Recognizing problems in productivity flow for productivity applications | |
WO2024233670A1 (en) | Generating suggested messages | |
US20240282300A1 (en) | Interaction Composer for Conversation Design Flow for Assistant Systems | |
US20240202460A1 (en) | Interfacing with a skill store | |
US20230353652A1 (en) | Presenting Personalized Content during Idle Time for Assistant Systems | |
WO2024137127A1 (en) | Interfacing with a skill store | |
WO2024232891A1 (en) | Generative custom stickers |