
CN114880062A - Chat expression display method and device, electronic device and storage medium - Google Patents

Chat expression display method and device, electronic device and storage medium

Info

Publication number
CN114880062A
CN114880062A (application CN202210602226.3A)
Authority
CN
China
Prior art keywords
expression
chat
identifier
candidate
expression identifier
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210602226.3A
Other languages
Chinese (zh)
Other versions
CN114880062B (en)
Inventor
薛源
沈姿绮
沈其
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd
Priority to CN202210602226.3A
Publication of CN114880062A
Application granted
Publication of CN114880062B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44 Arrangements for executing specific programs
    • G06F9/451 Execution arrangements for user interfaces
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04845 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/60 Editing figures and text; Combining figures or text
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00 Animation
    • G06T13/20 3D [Three Dimensional] animation
    • G06T13/40 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174 Facial expression recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Software Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Information Transfer Between Computers (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application provides a chat expression display method, an apparatus, an electronic device, and a storage medium. The method includes: receiving a chat expression identifier sent by a chat object, and displaying, in a graphical user interface, the chat expression identifier together with an operation candidate for the chat expression identifier, where the operation candidate is used to re-edit the received chat expression identifier; and, in response to a received operation instruction on the operation candidate, generating a composite expression identifier on the basis of the chat expression identifier. By generating a corresponding composite expression identifier from the chat expression identifier sent by the chat object according to the corresponding operation instruction, the application increases the interactivity between expression identifiers, improves the expressive effect when users communicate using expression identifiers, and improves the user experience during chat.

Description

Chat expression display method and device, electronic device and storage medium
Technical Field
The present application relates to the technical field of computer applications, and in particular to a chat expression display method, an apparatus, an electronic device, and a storage medium.
Background
Instant messaging (IM) tools on smart terminals have developed rapidly, bringing users modes of communication that are more convenient and richer than SMS and MMS messages. In mobile IM tools, expressions such as "magic expressions", emoji, or "fun expressions" are an important form of message. At the same time, the introduction of emoticons on social platforms has added interest and personalized display, and they are increasingly popular with and sought after by users.
However, when existing users communicate, the emoticons they send are essentially independent of one another; there is no connection between them, and users lack the interest of operation-based interaction built on the emoticons.
Disclosure of Invention
In view of this, the present application provides a chat expression display method, apparatus, electronic device, and storage medium, so as to increase the interaction of expression identifier operations between users and improve user experience.
Based on the above purpose, the present application provides a chat expression display method, including:
receiving a chat expression identifier sent by a chat object, and displaying, in a graphical user interface, the chat expression identifier and an operation candidate for the chat expression identifier, where the operation candidate is used to re-edit the received chat expression identifier;
and, in response to a received operation instruction on the operation candidate, generating a composite expression identifier on the basis of the chat expression identifier.
In some embodiments, the operation candidates include an expression fit candidate;
the generating of a composite expression identifier based on the chat expression identifier includes:
inserting a preset expression identifier at one side of the chat expression identifier to generate a composite expression identifier including at least two expression identifiers.
In some embodiments, the graphical user interface includes an expression candidate area containing a plurality of candidate expression identifiers;
the method includes:
in response to a selection operation on a first candidate expression identifier, replacing the preset expression identifier with the first candidate expression identifier determined by the selection operation.
In some embodiments, after the preset expression identifier is inserted at one side of the chat expression identifier, the method further includes:
in response to a received drag operation on the preset expression identifier, adjusting the position, size, and/or rotation angle of the preset expression identifier according to the drag operation.
In some embodiments, the generating of a composite expression identifier including at least two expression identifiers further includes:
generating and displaying a hide option corresponding to each expression identifier;
and, in response to a selection of a hide option, hiding, within the composite expression identifier, the expression identifier corresponding to that hide option.
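The hide option amounts to a per-part visibility flag on the composite, with only the visible parts rendered. A minimal sketch (function names hypothetical):

```python
def make_visibility(part_ids):
    """Every expression identifier in the composite starts out visible."""
    return {pid: True for pid in part_ids}

def select_hide_option(visibility, part_id):
    """Selecting the hide option of a part hides it within the composite."""
    visibility[part_id] = False
    return visibility

def visible_parts(part_ids, visibility):
    """Only the parts still marked visible are rendered."""
    return [pid for pid in part_ids if visibility.get(pid, True)]

parts = ["smile", "thumbs_up"]
vis = select_hide_option(make_visibility(parts), "smile")
```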
In some embodiments, the operation candidates include a recording candidate;
the generating of a composite expression identifier based on the chat expression identifier includes:
determining a model of the chat expression identifier according to the operation instruction corresponding to the recording candidate, and acquiring first expression feature data from a facial image of the user; and generating a first facial map according to the first expression feature data, and mapping the facial map onto the model to generate the composite expression identifier.
In some embodiments, the acquiring of first expression feature data from a facial image of the user includes:
in response to a recording instruction, continuously acquiring the first expression feature data for the duration of the recording instruction or for the time indicated by the recording instruction.
In some embodiments, the method further includes:
continuously acquiring the user's sound data, so as to load the sound data into the composite expression identifier.
In some embodiments, the chat expression identifier is determined by:
in response to the chat object's selection operation on at least one second candidate expression identifier, determining the selected second candidate expression identifier as a target expression identifier;
in response to the chat object's editing operation on the target expression identifier, collecting second expression feature data of the chat object;
and generating a second facial map according to the second expression feature data, and mapping the facial map onto a model of the target expression identifier to generate the chat expression identifier.
In some embodiments, the at least one second candidate expression identifier is determined by:
acquiring text information entered by the chat object in a conversation input box;
and identifying keyword information in the text information, and determining, according to the keyword information, at least one second candidate expression identifier matching the keyword information.
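The keyword-matching step can be sketched as a lookup from recognized keywords to candidate expression identifiers. The mapping below is invented for illustration; a real client would maintain its own:

```python
KEYWORD_TO_CANDIDATES = {   # hypothetical keyword -> expression-identifier mapping
    "happy": ["smile", "grin"],
    "haha": ["laugh"],
}

def match_candidates(text: str):
    """Identify keywords in the input text and return matching candidate
    expression identifiers, in dictionary order."""
    lowered = text.lower()
    matches = []
    for keyword, candidate_ids in KEYWORD_TO_CANDIDATES.items():
        if keyword in lowered:
            matches.extend(candidate_ids)
    return matches
```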
In some embodiments, the method further includes:
acquiring text information entered by the user, and adding the text information to the composite expression identifier.
Based on the same concept, the present application also provides a chat expression display apparatus, including:
a determining module, configured to receive a chat expression identifier sent by a chat object and to display, in a graphical user interface, the chat expression identifier and an operation candidate for the chat expression identifier, where the operation candidate is used to re-edit the received chat expression identifier;
and a generating module, configured to generate, in response to a received operation instruction on the operation candidate, a composite expression identifier on the basis of the chat expression identifier.
Based on the same concept, the present application also provides an electronic device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor, when executing the program, implements the method described in any one of the above.
Based on the same concept, the present application also provides a non-transitory computer-readable storage medium storing computer instructions for causing a computer to implement the method described in any one of the above.
As can be seen from the foregoing, the chat expression display method, apparatus, electronic device, and storage medium provided by the present application include: receiving a chat expression identifier sent by a chat object, and displaying, in a graphical user interface, the chat expression identifier and an operation candidate for the chat expression identifier, where the operation candidate is used to re-edit the received chat expression identifier; and, in response to a received operation instruction on the operation candidate, generating a composite expression identifier on the basis of the chat expression identifier. By generating a corresponding composite expression identifier from the chat expression identifier sent by the chat object according to the corresponding operation instruction, the application increases the interactivity between expression identifiers, improves the expressive effect when users communicate using expression identifiers, and improves the user experience during chat.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments or the related art, the drawings needed in the description of the embodiments or the related art are briefly introduced below. The drawings in the following description are, obviously, only embodiments of the present application; for those skilled in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a schematic flowchart of a chat emoticon display method according to an embodiment of the present application;
fig. 2 is a schematic view of a scenario in which a chat object sends a chat emoticon according to an embodiment of the present application;
fig. 3 is a schematic view of a scene into which a preset expression identifier is inserted according to an embodiment of the present application;
fig. 4 is a scene schematic diagram illustrating that a sending end and a receiving end receive a composite expression identifier according to an embodiment of the present application;
fig. 5 is a scene schematic diagram illustrating matching of corresponding candidate expression identifiers after text information is input according to an embodiment of the application;
fig. 6 is a scene schematic diagram of recording of a composite expression identifier according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of a chat emoticon display apparatus according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present specification more apparent, the present specification is further described in detail below with reference to the accompanying drawings in combination with specific embodiments.
It should be noted that, unless otherwise defined, technical or scientific terms used in the embodiments of the present application have the ordinary meaning understood by those skilled in the art to which the present application belongs. The terms "first", "second", and the like used in the embodiments of the present application do not denote any order, quantity, or importance, but are used only to distinguish one element from another. The word "comprising", "comprises", or the like means that the element, article, or method step preceding the word encompasses the elements, articles, or method steps listed after the word and their equivalents, without excluding other elements, articles, or method steps. The terms "connected", "coupled", and the like are not restricted to physical or mechanical connections, but may include electrical connections, whether direct or indirect. "Upper", "lower", "left", "right", and the like are used merely to indicate relative positional relationships; when the absolute position of the described object changes, the relative positional relationships may change accordingly.
As described in the background section, in the chat interaction function of current social platforms or applications, when a user is ready to communicate with an emoticon, the user generally clicks the icon of the emoticon library to pop up an emoticon selection box, selects an emoticon in the box, and clicks send after the selection is completed, whereupon the selected emoticon is transmitted. With the increasing popularity and usage of social networking, the expressions used by users are increasingly diverse, and users are increasingly accustomed to expressing simple words with expressions, for example directly expressing words such as "happy" or "haha" with a smiling expression. However, as emoticons are used more and more frequently, the lack of interactivity and correlation between the emoticons users send becomes more and more prominent: for example, when a chat object sends several emoticons and the user replies with one emoticon, it cannot be determined which of those emoticons the reply refers to. This hinders the continuity of thought expressed through emoticon interaction, leaves the emoticons without interactivity, and reduces the user experience.
In view of this actual situation, an embodiment of the present application provides a chat expression display scheme. On the basis of the chat expression identifier sent by the chat object, a corresponding composite expression identifier is generated according to the corresponding operation instruction, which increases the interactivity between expression identifiers, improves the expressive effect when users communicate using expression identifiers, and improves the user experience during chat.
Fig. 1 shows a schematic flow diagram of the chat expression display method provided by the present application. The method specifically includes:
Step 101: receiving a chat expression identifier sent by a chat object, and displaying, in a graphical user interface, the chat expression identifier and an operation candidate for the chat expression identifier, where the operation candidate is used to re-edit the received chat expression identifier.
In this step, the chat object is the user (or users) interacting with the current user in the current graphical user interface, and the chat expression identifier is a static or dynamic picture emoticon. A dynamic picture may display a number of static pictures in a certain sequence. The graphical user interface provides a chat window in which the user communicates over the network with other chat objects; it can be used to input and display text, images, and the like, and this window interface can display the chat expression identifiers.
Next, when determining an operation candidate for the chat expression identifier, the specific kind of chat expression identifier may first be determined. This determination may be a recognition process: during a chat, one or more chat expression identifiers sent by the chat object can be recognized, and their specific form of expression determined, for example whether the identifier is a static picture, a dynamic GIF picture, or a custom expression identifier generated by real-time recording. In a specific embodiment, a custom expression identifier generated by real-time recording (such as a custom "mimic-me" expression identifier) provides several 3D avatar models for the user to select. After selection, the real changes of the user's facial expression can be captured by an image-capture device such as a camera; a map is then generated from these changes and mapped onto the 3D avatar model, so that the model makes facial expression changes similar or identical to those the user has just recorded, finally producing a custom dynamic expression identifier. In some embodiments, these expression identifiers may then be stored in a pre-established expression library. Finally, according to the determined chat expression identifier, a corresponding operation candidate is displayed on the basis of the chat expression identifier. The operation candidate is an option for operating on the chat expression identifier, for example adding a further expression identifier onto the current chat expression identifier, or making a new expression identifier on the basis of the current expression's model or map.
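The capture-and-map pipeline described above (facial changes → map → 3D avatar model) is commonly implemented by converting a few facial measurements into per-frame blend-shape weights on the avatar. A minimal sketch; the measurement names and normalizing constants below are purely illustrative assumptions, not from the patent:

```python
def features_to_blend_weights(measurements: dict) -> dict:
    """Turn captured facial measurements into avatar blend-shape weights in [0, 1].

    The keys ("mouth_gap", "brow_height") and the normalizing constants are
    hypothetical; a real pipeline derives them from its face tracker.
    """
    def clamp01(v: float) -> float:
        return max(0.0, min(1.0, v))
    return {
        "mouth_open": clamp01(measurements.get("mouth_gap", 0.0) / 0.05),
        "brow_raise": clamp01(measurements.get("brow_height", 0.0) / 0.02),
    }

weights = features_to_blend_weights({"mouth_gap": 0.05, "brow_height": 0.01})
```

Applying one such weight dictionary per recorded frame is what makes the avatar reproduce the user's expression changes over time.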
In particular embodiments, the operation candidate may be an insertion operation, a recording operation, a modification operation, and so on. The operation candidates corresponding to each chat expression identifier, or each type of chat expression identifier, may be preset: for example, a static picture identifier may correspond to insertion and modification operations, while a dynamic identifier corresponds to insertion and recording operations. In a specific embodiment, as shown in Fig. 2, after the chat expression identifier sent by the chat object is recognized, the display position of the operation candidate may be set according to the specific application scenario, for example below the chat expression identifier, or on one side of the chat object's chat frame.
Step 102: in response to a received operation instruction on the operation candidate, generating a composite expression identifier on the basis of the chat expression identifier.
In this step, since the operation candidates for the chat expression identifier were presented in the previous step, the user can select one of them, generating a corresponding operation instruction and thereby a composite expression identifier based on the chat expression identifier. In some optional embodiments, the composite expression identifier may be formed by splicing and include at least two expression identifiers, among them the chat expression identifier sent by the chat object. Alternatively, a model may be determined from the chat expression identifier sent by the chat object (e.g., a dynamic image expression), and that model may then be fused with the current user's expression features to obtain a new dynamic expression, composed of the model of the chat expression identifier sent by the chat object and the expression features of the current user.
In a specific embodiment, a composite expression identifier containing two expression identifiers is formed by splicing: as shown in Fig. 3, a composite expression identifier is generated after the user performs an adding operation (shown by a sticker in Fig. 3) on a received chat expression identifier. The generated composite identifier may arrange two or more expression identifiers side by side, or alternate them in the order in which they were sent, such as "one left, one right". In some embodiments, to highlight the sequence of the expressions, an earlier expression in the composite may sit slightly higher than a later one. In some specific embodiments, the expressions added to the composite expression identifier may be adjusted: the user may select, in their own expression library, an expression identifier to add to the composite. The added expression identifier may be of the same type as the chat expression identifier or of a different type; for example, if the chat expression identifier is a static picture, the newly added identifier may be a dynamic picture, forming a composite of a static picture and a dynamic picture.
In some embodiments, after a basic composite expression identifier is generated, the user may further adjust it; for example, the position, size, orientation, and other attributes of each expression identifier in the composite may be adjusted to make the combined result more personalized. In a specific embodiment, the whole composite identifier, and each identifier composing it, can be adjusted by dragging, by operation buttons, and the like. Because the composite expression identifier contains the chat expression identifier previously sent by the chat object, or applies the same dynamic model as that identifier, it is more targeted: when the chat object receives the composite identifier, the interaction with the sent expression can be seen at a glance. The meaning of the expression identifier sent by the user can therefore be better understood, improving the user experience.
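The "one left, one right" alternation, with earlier expressions sitting slightly higher, can be sketched as a simple layout rule; the cell width and vertical step below are made-up values:

```python
def layout_parts(n_parts: int, cell_w: int = 120, drop_step: int = 8):
    """Alternate parts left/right in send order; each later part sits
    slightly lower, so earlier expressions appear slightly higher."""
    return [((i % 2) * cell_w, i * drop_step) for i in range(n_parts)]

positions = layout_parts(3)   # (x, y) offsets for three spliced expressions
```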
Finally, the composite expression identifier can be output, and the user and the chat object can continue to build further composite identifiers on top of it. Fig. 4a shows the composite expression identifier displayed at the sending end (i.e., the end used by the user); the user can continue to operate on the basis of the composite identifier. Fig. 4b shows the composite expression identifier displayed at the receiving end (i.e., the end used by the chat object); the chat object can likewise treat the composite identifier as a single overall expression identifier and continue with similar operations.
The composite expression identifier is then output, and the adjusted identifier can be stored, displayed, used, or reprocessed. The specific output mode can be chosen flexibly according to the application scenario and implementation requirements.
For example, in an application scenario where the method of this embodiment runs on a single device, the composite expression identifier may be output directly on a display component of the current device (a display, a projector, etc.), so that the operator of the device can see the content of the composite expression identifier directly.
For another example, in an application scenario where the method runs on a system composed of multiple devices, the composite expression identifier may be sent, over any data communication channel (e.g., wired connection, NFC, Bluetooth, Wi-Fi, cellular mobile network), to another preset device in the system acting as the receiver, i.e., a synchronization terminal, which can then carry out subsequent processing of the composite expression identifier. Optionally, the synchronization terminal may be a preset server, generally deployed in the cloud as a data processing and storage center, which can store and distribute composite expression identifiers; the recipients of the distribution are terminal devices, whose holders or operators may be the current user, the chat object, an emoticon administrator of the chat module in a social platform or application, a supervisor of that chat module, and so on.
For another example, in a multi-device scenario, the method of this embodiment may send the composite expression identifier directly, over any data communication channel, to a preset terminal device, which may be any one or more of those described in the preceding paragraph.
As can be seen from the foregoing, the chat expression display method according to the embodiment of the present application includes: receiving a chat expression identifier sent by a chat object, and displaying, in a graphical user interface, the chat expression identifier and an operation candidate for the chat expression identifier, where the operation candidate is used to re-edit the received chat expression identifier; and, in response to a received operation instruction on the operation candidate, generating a composite expression identifier on the basis of the chat expression identifier. By generating a corresponding composite expression identifier from the chat expression identifier sent by the chat object according to the corresponding operation instruction, the method increases the interactivity between expression identifiers, improves the expressive effect when users communicate using expression identifiers, and improves the user experience during chat.
It should be noted that the method of the embodiment of the present application may be executed by a single device, such as a terminal or a server. The method of the embodiment of the application may also be applied to a distributed scenario and completed through the mutual cooperation of multiple devices. In such a distributed scenario, one of the multiple devices may perform only one or more steps of the method of the embodiment, and the multiple devices interact with each other to complete the method. The terminal may be any of a variety of types of user terminals, such as a notebook computer, a tablet computer, a desktop computer, a set-top box, or a mobile device (e.g., a mobile phone, a portable music player, a personal digital assistant, a dedicated messaging device, a portable gaming device), or a combination of any two or more of these data processing devices.
It should be noted that the above-mentioned description describes specific embodiments of the present application. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments described above and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
In an optional exemplary embodiment, the operation candidates include an expression fit candidate; generating a composite expression identifier based on the chat expression identifier includes: inserting a preset expression identifier on one side of the chat expression identifier to generate a composite expression identifier comprising at least two expression identifiers.
In this embodiment, when the operation candidate is an expression fit candidate, the user may generate the composite expression identifier by inserting a preset expression identifier on the basis of the chat expression identifier. As shown in fig. 3, the preset expression identifier is inserted on one side of the chat expression identifier, and the insertion position may be the left side or the right side of the chat expression identifier, and so on. In a specific embodiment, in a typical chat window, information such as text and expression identifiers sent by the chat object is arranged on the left side of the graphical user interface (i.e., the chat window), while content sent by the user themselves is generally arranged on the right side of the chat window; therefore, the chat expression identifier in the composite expression identifier may be arranged on the left, and the newly added expression identifier on the right. Of course, the specific position may be set according to the specific application scenario.
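By way of illustration only, the side-insertion step described above can be sketched as simple list composition. The Python below is not part of the application; the class and function names (`ExpressionIdentifier`, `CompositeExpression`, `insert_beside`) are hypothetical stand-ins for whatever internal representation an implementation would use.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ExpressionIdentifier:
    name: str            # id of a sticker/emoticon in the expression library
    hidden: bool = False # used later by the hide-option feature

@dataclass
class CompositeExpression:
    parts: List[ExpressionIdentifier] = field(default_factory=list)

def insert_beside(chat_expr: ExpressionIdentifier,
                  preset_expr: ExpressionIdentifier,
                  side: str = "right") -> CompositeExpression:
    """Place the preset identifier on one side of the received chat identifier."""
    if side == "right":
        return CompositeExpression([chat_expr, preset_expr])
    return CompositeExpression([preset_expr, chat_expr])

# The received chat identifier stays on the left; the new one goes on the right.
composite = insert_beside(ExpressionIdentifier("smile"),
                          ExpressionIdentifier("thumbs_up"))
```

Placing the received identifier first mirrors the chat-window layout described above (chat object on the left, current user on the right).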
In an alternative exemplary embodiment, the graphical user interface includes an expression candidate area, and the expression candidate area includes a plurality of candidate expression identifiers; the method includes: in response to a selection operation of the user for a first candidate expression identifier, replacing the preset expression identifier with the first candidate expression identifier determined by the selection operation.
In this embodiment, the composite expression identifier is generally generated by adding an expression identifier set by the user. The added identifier may be one carried by a template or a default expression identifier (i.e., the preset expression identifier), and the user may also select the expression identifier in a user-defined manner. As shown in fig. 3, after the user clicks the expression fit option (the sticker label in fig. 3), a setting box may pop up directly, where the setting box includes the chat expression identifier and the preset expression identifier; the preset expression identifier may be a fixed expression identifier, an expression identifier randomly selected from the expression library of the current user, or may simply be represented by an empty frame. The expression library of the user may then pop up directly to form the expression candidate area, in which all expression identifiers in the user's expression library, i.e., the candidate expression identifiers, may be arranged. When the user selects one expression identifier, that identifier serves as the first candidate expression identifier and directly replaces the original preset expression identifier, thereby generating the composite expression identifier.
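The replacement step can be sketched in one line: the placeholder (preset) identifier is swapped for whichever candidate the user picked. All names below are hypothetical; this is an illustrative sketch, not the application's implementation.

```python
def replace_preset(parts: list, preset: str, selected: str) -> list:
    """Swap the default placeholder identifier for the candidate the user picked,
    leaving every other identifier in the composite unchanged."""
    return [selected if p == preset else p for p in parts]

# The chat identifier stays; only the placeholder slot changes.
parts = replace_preset(["chat_smile", "preset_default"],
                       "preset_default", "party_cat")
```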
In an optional exemplary embodiment, after inserting a preset expression identifier on one side of the chat expression identifier, the method further includes: in response to receiving a drag operation on the preset expression identifier, adjusting the position, size and/or rotation angle of the preset expression identifier according to the drag operation. In this way, the generated composite expression identifier is more personalized, and user experience is improved.
In this embodiment, the composite expression identifier may be adjusted, either as a whole or per expression identifier within it. In a specific embodiment, to keep the interaction obvious, the chat expression identifier sent by the chat object is generally not adjusted, so that after receiving the composite expression identifier the chat object can quickly recognize which expression identifier it responds to. Furthermore, generally only the newly added preset expression identifier, or the expression identifier to be added as selected by the user, is adjusted; it can be adjusted through touch control or through the drag operation of an external device such as a mouse, and specific attributes such as its position, size and/or rotation angle can be adjusted.
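As a minimal sketch (hypothetical names, not the application's code), the position/size/rotation adjustment amounts to folding each drag gesture into a per-sticker transform while leaving the received chat identifier untouched:

```python
from dataclasses import dataclass

@dataclass
class StickerTransform:
    x: float = 0.0
    y: float = 0.0
    scale: float = 1.0
    angle: float = 0.0   # degrees, rotation about the sticker centre

def apply_drag(t: StickerTransform, dx: float = 0.0, dy: float = 0.0,
               dscale: float = 1.0, dangle: float = 0.0) -> StickerTransform:
    """Fold one drag gesture into the inserted identifier's transform.
    Only the newly added identifier carries a transform; the received
    chat identifier is never adjusted."""
    return StickerTransform(x=t.x + dx, y=t.y + dy,
                            scale=max(0.1, t.scale * dscale),  # keep it visible
                            angle=(t.angle + dangle) % 360.0)

t = apply_drag(StickerTransform(), dx=12.0, dy=-4.0, dscale=1.5, dangle=30.0)
```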
In an optional exemplary embodiment, the generating a composite expression identifier including at least two expression identifiers further includes: generating and displaying a hidden option corresponding to each expression identifier; and responding to the selected operation of the hidden option, and hiding the expression identifier corresponding to the hidden option in the compound expression identifier.
In this embodiment, if the chat expression identifier sent by the chat object is itself a composite expression identifier, it may already include a plurality of expression identifiers. If too many expression identifiers are already included in the previous chat expression identifier, the newly generated composite expression identifier becomes too cluttered and inconvenient for the user's expression. Therefore, for simplicity of expression, some expression identifiers in the chat expression identifier can be hidden through hidden options: each expression identifier in the chat expression identifier may correspond to one hidden option, and each hidden option controls whether its corresponding expression identifier is hidden.
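The hide-option behaviour reduces to filtering the composite's parts at render time. A hypothetical sketch (names are illustrative only):

```python
def visible_parts(parts: list, hidden: set) -> list:
    """Keep only the identifiers whose hide option has not been selected;
    the hidden ones remain in the composite but are not rendered."""
    return [p for p in parts if p not in hidden]

# The user selected the hide option for "cry".
shown = visible_parts(["smile", "cry", "thumbs_up"], hidden={"cry"})
```

Keeping the hidden identifiers in the underlying data (rather than deleting them) lets the user un-hide them later by deselecting the option.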
In an alternative exemplary embodiment, the operation candidates include recording candidates; generating a composite expression identifier based on the chat expression identifier, including: determining a model of the chat expression identifier according to the operation instruction corresponding to the recording candidate, and acquiring first expression feature data of a face image of a user; and generating a first facial makeup map according to the first expression feature data, and mapping the facial makeup map to the model to generate the compound expression identifier.
In this embodiment, the chat expression identifier may also be a dynamically recorded expression identifier, for example virtual expressions such as the "pseudo-me expression" and AR expressions in a specific embodiment. These expression identifiers first provide the user with some base models, such as a "monkey head" or "rabbit head" model; real-time facial expression features of the user are then obtained through functions such as the camera carried by the terminal; a map corresponding to the base model is then made from the recorded facial expression features; and finally the map is applied to the base model, so that the base model performs facial actions consistent with or similar to the facial expressions recorded by the user, thereby generating the corresponding expression identifier. Furthermore, as shown in fig. 6, in a specific embodiment, if the chat object sends a dynamically recorded "monkey head" expression identifier as shown in the figure, the system may directly recognize the model. When the user selects the recording candidate function, the system may directly call the model and then record a facial image, so as to obtain feature data of the facial image, i.e., the first expression feature data, which directly reflects the expression changes of the current user. A facial makeup map corresponding to the "monkey head" base model, i.e., the first facial makeup map, can then be generated from the first expression feature data. Finally, the first facial makeup map is mapped to the base model to generate the final real-time dynamic expression, completing the production of the composite expression identifier.
Of course, in a specific embodiment, if the user wants to directly send a dynamic recording expression identifier, a basic model may be directly selected in a similar manner, and then the facial features of the user are obtained, so as to generate a map to be mapped onto the basic model, and finally generate a real-time dynamic expression identifier.
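The record-map-apply pipeline described above can be sketched as three stages. The Python below is a hypothetical stand-in: the function names, the single `mouth_open` feature, and the `0.8` jaw coefficient are assumptions for illustration, not the application's actual feature set or mapping.

```python
def capture_expression_features(mouth_openings: list) -> list:
    """Stand-in for the camera stage: one feature dict per captured frame."""
    return [{"frame": i, "mouth_open": v} for i, v in enumerate(mouth_openings)]

def build_face_map(features: list) -> list:
    """Turn per-frame features into per-frame texture/pose parameters
    (the 'facial makeup map' of the embodiment)."""
    return [{"frame": f["frame"], "jaw": f["mouth_open"] * 0.8} for f in features]

def apply_to_model(model_name: str, face_map: list) -> dict:
    """Bind the map to the recognised base model so the model mimics
    the recorded facial actions."""
    return {"model": model_name, "animation": face_map}

# Recognise the base model from the received identifier, then record and map.
composite = apply_to_model(
    "monkey_head",
    build_face_map(capture_expression_features([0.0, 0.5, 1.0])))
```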
In an optional exemplary embodiment, the acquiring first expressive feature data of a facial image of a user includes: and responding to a recording instruction, and continuously acquiring the first expression characteristic data within the recording instruction duration or the time indicated by the recording instruction.
In this embodiment, since recording the first expression feature data is a continuous process, recording may be started in either of two ways. One is to set a recording duration: for example, with the duration set to 10 seconds, face capture is performed for 10 seconds. The other is to keep triggering a recording button: recording starts when the user presses the recording button and ends when the user releases it, with the button held down throughout. Of course, the pressing may be done by touch or through an external device such as a mouse.
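The two trigger styles differ only in how the capture window is determined, which a small hypothetical helper makes explicit (names and the 10-second default are illustrative assumptions):

```python
def recording_duration(mode: str, pressed_at: float = 0.0,
                       released_at: float = 0.0, fixed: float = 10.0) -> float:
    """Capture window for either trigger style: a fixed timer ("fixed"),
    or press-and-hold ("hold"), where the window is press-to-release."""
    if mode == "fixed":
        return fixed
    if mode == "hold":
        return max(0.0, released_at - pressed_at)
    raise ValueError(f"unknown mode: {mode}")

# Press-and-hold: button pressed at t=2.0s, released at t=7.5s.
held = recording_duration("hold", pressed_at=2.0, released_at=7.5)
```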
In an optional exemplary embodiment, the method further comprises: and continuously acquiring the sound data of the user so as to load the sound data into the composite expression identifier. Therefore, the composite expression identifier can send out real-time voice of the user, and the user experience of expression use is improved.
In this embodiment, the voice of the user can be collected synchronously in the recording process of the user, and the voice is added into the composite expression identifier, so that the composite expression identifier can not only convey the real-time expression of the user, but also convey the real-time voice of the user. The user experience and interactivity of expression use are improved.
In an alternative exemplary embodiment, the chat expression identifier is determined by: in response to a selection operation of the chat object on at least one second candidate expression identifier, determining the selected second candidate expression identifier as a target expression identifier; in response to an editing operation of the chat object on the target expression identifier, collecting second expression feature data of the chat object; and generating a second facial makeup map according to the second expression feature data, and mapping the facial makeup map to a model of the target expression identifier to generate the chat expression identifier.
In this embodiment, in chat communication involving expression identifiers, the chat object is likewise provided with an expression identifier library, from which it may select a corresponding expression identifier to send. The system serving as the receiving party recognizes the expression identifier, and the recognition process may be based on the expression identifier library carried by the system; thus the chat object can be regarded as selecting a second candidate expression identifier, and the selected second candidate expression identifier can serve as the target expression identifier. Then, if the target expression identifier is a static or dynamic picture expression, the system serving as the receiving party may directly generate the expression fit operation candidate. If the target expression identifier is a real-time dynamic expression identifier, the chat object necessarily performs further editing operations such as expression recording on the basis of the target expression identifier; that is, in response to the editing operation of the chat object on the target expression identifier, the second expression feature data of the chat object is collected. A facial makeup map of the base model corresponding to the target expression identifier, i.e., the second facial makeup map, can then be generated from the second expression feature data. Finally, the second facial makeup map is mapped to the base model to generate the final real-time dynamic expression, completing the production of the chat expression identifier.
In an alternative exemplary embodiment, the at least one second candidate expression identifier is determined by: acquiring text information input by the chat object in a session input box; and identifying keyword information in the text information, and determining, according to the keyword information, at least one second candidate expression identifier matching the keyword information.
In this embodiment, because the expression identifiers in the user's expression library are various, and their number grows as the user uses more of them, displaying all of them to the user makes selection itself a problem to be solved. Matching and screening of expression identifiers can be performed through text association. In a specific embodiment, one or more keywords may be associated with each expression identifier; then, when the user selects an expression identifier, the expression identifiers associated with a keyword can be displayed by inputting that keyword (i.e., inputting text information), so as to filter and narrow down the expression identifiers, facilitating the user's selection. As shown in fig. 5, after the user inputs "happy" in the session input box, the expression identifiers in the expression library are retrieved, and only the expression identifiers associated with "happy" are displayed for the user to select. These expression identifiers may be static picture expressions, dynamic picture expressions, custom expressions, and the like. Of course, in a specific embodiment, if the user wants to send an expression identifier directly, in a similar manner, text information (containing a keyword) is input in the session input box of the graphical user interface, and the expression identifiers related to the keyword pop up directly.
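The keyword-association lookup can be sketched as a reverse index from identifiers to keyword sets. Everything below (the index contents and function name) is a hypothetical illustration of the matching step, not the application's data:

```python
# Hypothetical keyword index: each identifier is associated with several keywords.
EXPRESSION_KEYWORDS = {
    "grin_cat":  {"happy", "smile"},
    "party_dog": {"happy", "party"},
    "sad_frog":  {"sad", "cry"},
}

def match_candidates(text: str, index: dict = EXPRESSION_KEYWORDS) -> list:
    """Return every identifier whose associated keywords appear in the typed text."""
    words = set(text.lower().split())
    return sorted(name for name, keywords in index.items() if keywords & words)

# Typing "happy" surfaces only the identifiers associated with that keyword.
hits = match_candidates("so happy today")
```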
In an optional exemplary embodiment, the method further comprises: acquiring text information input by the user, and adding the text information to the composite expression identifier. Adding text to the composite expression identifier generates a personalized expression.
In this embodiment, after the composite expression identifier is generated, a text description may be added to it. After the text information input by the user is acquired, it can be added to the composite expression identifier in the form of a picture or the like, and its size, position, rotation angle, etc. can be adjusted during addition. The composite expression identifier thus better meets the personalized needs of the user.
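The caption step can be sketched as attaching an adjustable overlay record to the composite. The function name and field layout below are hypothetical; the point is that the text carries its own position/size/rotation, so it can be adjusted like the inserted sticker:

```python
def add_caption(composite: dict, text: str, x: int = 0, y: int = 0,
                size: int = 16, angle: int = 0) -> dict:
    """Return a copy of the composite identifier with the user's text attached
    as an overlay whose position, size and rotation can still be adjusted."""
    caption = {"text": text, "x": x, "y": y, "size": size, "angle": angle}
    # Copy rather than mutate, so the original composite is untouched.
    return {**composite, "captions": composite.get("captions", []) + [caption]}

tagged = add_caption({"parts": ["smile", "thumbs_up"]}, "LOL", x=10, size=24)
```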
Based on the same concept, the application also provides a chat expression display device corresponding to the method of any of the above embodiments.
Referring to fig. 7, the chat expression display device includes:
a determining module 210, configured to receive a chat expression identifier sent by a chat object, and display the chat expression identifier and an operation candidate for the chat expression identifier in a graphical user interface, where the operation candidate is used to re-edit the received chat expression identifier;
a generating module 220, configured to generate a composite expression identifier based on the chat expression identifier in response to receiving an operation instruction for the operation candidate.
For convenience of description, the above devices are described as being divided into various modules by functions, and are described separately. Of course, the functions of the modules may be implemented in the same or multiple software and/or hardware when implementing the embodiments of the present application.
The device of the above embodiment is used to implement the corresponding chat expression display method in the foregoing embodiments, and has the beneficial effects of the corresponding method embodiment, which are not described herein again.
In an optional exemplary embodiment, the operation candidates include expression fit candidates;
the generating module 220 is further configured to:
and inserting a preset expression mark into one side of the chat expression mark to generate a composite expression mark comprising at least two expression marks.
In an alternative exemplary embodiment, the graphical user interface includes an expression candidate area, and the expression candidate area includes a plurality of candidate expression identifiers therein;
the generating module 220 is further configured to:
and responding to the selection operation aiming at the first candidate expression identifier, and replacing the preset expression identifier with the first candidate expression identifier determined by the selection operation.
In an optional exemplary embodiment, the generating module 220 is further configured to:
and responding to the received drag operation of the preset expression identifier, and adjusting the position, the size and/or the rotation angle of the preset expression identifier according to the drag operation.
In an optional exemplary embodiment, the generating module 220 is further configured to:
generating and displaying a hidden option corresponding to each expression identifier;
and responding to the selected operation of the hidden option, and hiding the expression identifier corresponding to the hidden option in the compound expression identifier.
In an alternative exemplary embodiment, the operation candidates include recording candidates;
the generating module 220 is further configured to:
determining a model of the chat expression identifier according to the operation instruction corresponding to the recording candidate, and acquiring first expression feature data of a face image of a user; and generating a first facial makeup map according to the first expression feature data, and mapping the facial makeup map to the model to generate the compound expression identifier.
In an optional exemplary embodiment, the generating module 220 is further configured to:
and responding to a recording instruction, and continuously acquiring the first expression characteristic data within the duration of the recording instruction or the time indicated by the recording instruction.
In an optional exemplary embodiment, the generating module 220 is further configured to:
and continuously acquiring the sound data of the user so as to load the sound data into the composite expression identifier.
In an alternative exemplary embodiment, the chat expression identifier is determined by:
responding to the selection operation of the chat object on at least one second candidate expression identifier, and determining the selected second candidate expression identifier as a target expression identifier;
responding to the editing operation of the chat object on the target expression mark, and collecting second expression characteristic data of the chat object;
and generating a second facial makeup map according to the second expression feature data, and mapping the facial makeup map to a model of a target expression identifier to generate the chat expression identifier.
In an alternative exemplary embodiment, the at least one second candidate expression identifier is determined by:
acquiring text information input by the chat object in a session input box;
and identifying keyword information in the text information, and determining at least one second candidate expression identifier matched with the keyword information according to the keyword information.
In an optional exemplary embodiment, the generating module 220 is further configured to:
and acquiring character information input by a user, and adding the character information into the composite expression mark.
Based on the same concept, corresponding to the method of any of the above embodiments, the application further provides an electronic device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor, when executing the program, implements the chat expression display method of any of the above embodiments.
Fig. 8 is a schematic diagram illustrating a more specific hardware structure of an electronic device according to this embodiment, where the electronic device may include: a processor 1010, a memory 1020, an input/output interface 1030, a communication interface 1040, and a bus 1050. Wherein the processor 1010, memory 1020, input/output interface 1030, and communication interface 1040 are communicatively coupled to each other within the device via bus 1050.
The processor 1010 may be implemented by a general-purpose CPU (Central Processing Unit), a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits, and is configured to execute related programs to implement the technical solutions provided in the embodiments of the present disclosure.
The memory 1020 may be implemented in the form of a ROM (Read-Only Memory), a RAM (Random Access Memory), a static storage device, a dynamic storage device, or the like. The memory 1020 may store an operating system and other application programs; when the technical solution provided by the embodiments of the present specification is implemented by software or firmware, the relevant program code is stored in the memory 1020 and called by the processor 1010 for execution.
The input/output interface 1030 is used for connecting an input/output module to input and output information. The i/o module may be configured as a component in a device (not shown) or may be external to the device to provide a corresponding function. The input devices may include a keyboard, a mouse, a touch screen, a microphone, various sensors, etc., and the output devices may include a display, a speaker, a vibrator, an indicator light, etc.
The communication interface 1040 is used for connecting a communication module (not shown in the drawings) to implement communication interaction between the present apparatus and other apparatuses. The communication module can realize communication in a wired mode (such as USB, network cable and the like) and also can realize communication in a wireless mode (such as mobile network, WIFI, Bluetooth and the like).
The bus 1050 includes a path to transfer information between various components of the device, such as the processor 1010, memory 1020, input/output interface 1030, and communication interface 1040.
It should be noted that although the above-mentioned device only shows the processor 1010, the memory 1020, the input/output interface 1030, the communication interface 1040 and the bus 1050, in a specific implementation, the device may also include other components necessary for normal operation. In addition, those skilled in the art will appreciate that the above-described apparatus may also include only those components necessary to implement the embodiments of the present description, and not necessarily all of the components shown in the figures.
The electronic device of the above embodiment is used to implement the corresponding chat expression display method in any of the foregoing embodiments, and has the beneficial effects of the corresponding method embodiment, which are not described herein again.
Based on the same concept, corresponding to any of the above embodiments, the present application also provides a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the chat expression display method according to any of the above embodiments.
Computer-readable media of the present embodiments, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device.
The computer instructions stored in the storage medium of the foregoing embodiment are used to cause the computer to execute the chat expression display method according to any of the above embodiments, and have the beneficial effects of the corresponding method embodiment, which are not described herein again.
Those of ordinary skill in the art will understand that: the discussion of any embodiment above is meant to be exemplary only, and is not intended to intimate that the scope of the disclosure, including the claims, is limited to these examples; within the context of the present application, features from the above embodiments or from different embodiments may also be combined, steps may be implemented in any order, and there are many other variations of the different aspects of the embodiments of the present application as described above, which are not provided in detail for the sake of brevity.
In addition, well-known power/ground connections to Integrated Circuit (IC) chips and other components may or may not be shown in the provided figures for simplicity of illustration and discussion, and so as not to obscure the embodiments of the application. Further, devices may be shown in block diagram form in order to avoid obscuring embodiments of the application, and this also takes into account the fact that specifics with respect to implementation of such block diagram devices are highly dependent upon the platform within which the embodiments of the application are to be implemented (i.e., specifics should be well within purview of one skilled in the art). Where specific details (e.g., circuits) are set forth in order to describe example embodiments of the application, it should be apparent to one skilled in the art that the embodiments of the application can be practiced without, or with variation of, these specific details. Accordingly, the description is to be regarded as illustrative instead of restrictive.
While the present application has been described in conjunction with specific embodiments thereof, many alternatives, modifications, and variations of these embodiments will be apparent to those of ordinary skill in the art in light of the foregoing description. For example, other memory architectures (e.g., dynamic RAM (DRAM)) may use the discussed embodiments.
The present embodiments are intended to embrace all such alternatives, modifications and variations as fall within the broad scope of the appended claims. Therefore, any omissions, modifications, substitutions, improvements, and the like that may be made without departing from the spirit and principles of the embodiments of the present application are intended to be included within the scope of the present application.

Claims (14)

1. A chat expression display method, characterized by comprising the following steps:
receiving a chat expression identifier sent by a chat object, and displaying the chat expression identifier and an operation candidate aiming at the chat expression identifier in a graphical user interface, wherein the operation candidate is used for re-editing the received chat expression identifier;
and responding to the received operation instruction of the operation candidate, and generating a composite expression identifier on the basis of the chat expression identifier.
2. The method of claim 1, wherein the operation candidates comprise expression fit candidates;
generating a composite expression identifier based on the chat expression identifier, including:
and inserting a preset expression mark into one side of the chat expression mark to generate a composite expression mark comprising at least two expression marks.
3. The method of claim 2, wherein the graphical user interface comprises an expression candidate area, and wherein the expression candidate area comprises a plurality of candidate expression identifiers therein;
the method comprises the following steps:
and responding to the selection operation aiming at the first candidate expression identifier, and replacing the preset expression identifier with the first candidate expression identifier determined by the selection operation.
4. The method of claim 2, wherein after inserting a preset emoticon on one side of the chat emoticon, the method further comprises:
and responding to the received drag operation of the preset expression identifier, and adjusting the position, the size and/or the rotation angle of the preset expression identifier according to the drag operation.
5. The method of claim 2, wherein generating a composite expression mark comprising at least two expression marks further comprises:
generating and displaying a hidden option corresponding to each expression identifier;
and responding to the selected operation of the hidden option, and hiding the expression identifier corresponding to the hidden option in the compound expression identifier.
6. The method of claim 1, wherein the operation candidates comprise recording candidates;
generating a composite expression identifier based on the chat expression identifier, including:
determining a model of the chat expression identifier according to the operation instruction corresponding to the recording candidate, and acquiring first expression feature data of a face image of a user; and generating a first facial makeup map according to the first expression feature data, and mapping the facial makeup map to the model to generate the compound expression identifier.
7. The method of claim 6, wherein acquiring the first expression feature data of the face image of the user comprises:
in response to a recording instruction, continuously acquiring the first expression feature data for the duration of the recording instruction or for a time indicated by the recording instruction.
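The continuous acquisition in claim 7 is essentially a capture loop bounded by the recording duration. A hedged sketch, where `capture_frame` stands in for whatever camera/feature-extraction callback the real system would provide (the interval and return format are assumptions):

```python
import time

def record_feature_frames(capture_frame, duration_s: float, interval_s: float = 0.05):
    """Continuously acquire expression feature data until the recording time elapses."""
    frames = []
    deadline = time.monotonic() + duration_s
    while time.monotonic() < deadline:
        frames.append(capture_frame())  # one sample of expression feature data
        time.sleep(interval_s)
    return frames
```

In practice the duration would come either from how long the user holds the recording control or from a time carried in the recording instruction, as the claim describes.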
8. The method of claim 7, further comprising:
continuously acquiring sound data of the user, so as to load the sound data into the composite expression identifier.
9. The method of claim 1, wherein the chat expression identifier is determined by:
in response to a selection operation by a chat object on at least one second candidate expression identifier, determining the selected second candidate expression identifier as a target expression identifier;
in response to an editing operation by the chat object on the target expression identifier, collecting second expression feature data of the chat object;
generating a second facial makeup map according to the second expression feature data, and mapping the facial makeup map onto a model of the target expression identifier to generate the chat expression identifier.
10. The method of claim 9, wherein the at least one second candidate expression identifier is determined by:
acquiring text information input by the chat object in a session input box;
identifying keyword information in the text information, and determining, according to the keyword information, at least one second candidate expression identifier matching the keyword information.
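The keyword-to-candidate step of claim 10 can be sketched with a lookup table. The patent does not specify the matching algorithm, so simple case-insensitive substring matching and the table contents below are stand-in assumptions:

```python
# Hypothetical keyword -> candidate expression identifier table.
KEYWORD_TABLE = {
    "happy": ["smile_face", "laugh_face"],
    "sad": ["cry_face"],
}

def match_candidates(text: str) -> list:
    """Return candidate expression identifiers whose keywords occur in the input text."""
    candidates = []
    lowered = text.lower()
    for keyword, ids in KEYWORD_TABLE.items():
        if keyword in lowered:
            candidates.extend(ids)
    return candidates
```

A production system might instead use word segmentation or semantic matching, particularly for Chinese input where substring matching over raw text is a closer fit than whitespace tokenization.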
11. The method of claim 1, further comprising:
acquiring text information input by a user, and adding the text information to the composite expression identifier.
12. A chat expression display apparatus, comprising:
a determining module, configured to receive a chat expression identifier sent by a chat object, and to display, in a graphical user interface, the chat expression identifier and an operation candidate for the chat expression identifier, wherein the operation candidate is used for re-editing the received chat expression identifier;
a generating module, configured to generate, in response to a received operation instruction for the operation candidate, a composite expression identifier based on the chat expression identifier.
13. An electronic device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements the method of any one of claims 1 to 11 when executing the program.
14. A non-transitory computer-readable storage medium storing computer instructions for causing a computer to implement the method of any one of claims 1 to 11.
CN202210602226.3A 2022-05-30 2022-05-30 Chat expression display method, device, electronic device and storage medium Active CN114880062B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210602226.3A CN114880062B (en) 2022-05-30 2022-05-30 Chat expression display method, device, electronic device and storage medium


Publications (2)

Publication Number Publication Date
CN114880062A true CN114880062A (en) 2022-08-09
CN114880062B CN114880062B (en) 2023-11-14

Family

ID=82679050

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210602226.3A Active CN114880062B (en) 2022-05-30 2022-05-30 Chat expression display method, device, electronic device and storage medium

Country Status (1)

Country Link
CN (1) CN114880062B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115269886A (en) * 2022-08-15 2022-11-01 北京字跳网络技术有限公司 Media content processing method, device, equipment and storage medium
WO2024037012A1 (en) * 2022-08-16 2024-02-22 腾讯科技(深圳)有限公司 Interactive animated emoji sending method and apparatus, computer medium, and electronic device


Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103777891A (en) * 2014-02-26 2014-05-07 全蕊 Method for sending message by inserting an emoticon in message ending
CN106875460A (en) * 2016-12-27 2017-06-20 深圳市金立通信设备有限公司 A kind of picture countenance synthesis method and terminal
CN107450746A (en) * 2017-08-18 2017-12-08 联想(北京)有限公司 A kind of insertion method of emoticon, device and electronic equipment
CN109472849A (en) * 2017-09-07 2019-03-15 腾讯科技(深圳)有限公司 Method, apparatus, terminal device and the storage medium of image in processing application
CN110336733A (en) * 2019-04-30 2019-10-15 上海连尚网络科技有限公司 A kind of method and apparatus that expression packet is presented
WO2020221104A1 (en) * 2019-04-30 2020-11-05 上海连尚网络科技有限公司 Emoji packet presentation method and equipment
CN110780955A (en) * 2019-09-05 2020-02-11 连尚(新昌)网络科技有限公司 Method and equipment for processing emoticon message
CN111476154A (en) * 2020-04-03 2020-07-31 深圳传音控股股份有限公司 Expression package generation method, device, equipment and computer readable storage medium
CN112270733A (en) * 2020-09-29 2021-01-26 北京五八信息技术有限公司 AR expression package generation method and device, electronic equipment and storage medium
CN112463003A (en) * 2020-11-24 2021-03-09 维沃移动通信有限公司 Picture display method and device, electronic equipment and storage medium
CN112866475A (en) * 2020-12-31 2021-05-28 维沃移动通信有限公司 Image sending method and device and electronic equipment


Also Published As

Publication number Publication date
CN114880062B (en) 2023-11-14

Similar Documents

Publication Publication Date Title
US12094047B2 (en) Animated emoticon generation method, computer-readable storage medium, and computer device
CN109819313B (en) Video processing method, device and storage medium
EP3713159B1 (en) Gallery of messages with a shared interest
US10311916B2 (en) Gallery of videos set to an audio time line
EP3095091B1 (en) Method and apparatus of processing expression information in instant communication
US12047657B2 (en) Subtitle splitter
CN114880062B (en) Chat expression display method, device, electronic device and storage medium
CN108989609A (en) Video cover generation method, device, terminal device and computer storage medium
EP2242281A2 (en) Method and apparatus for producing a three-dimensional image message in mobile terminals
US11394888B2 (en) Personalized videos
CN106791535B (en) Video recording method and device
US11393134B2 (en) Customizing soundtracks and hairstyles in modifiable videos of multimedia messaging application
CN108845741A (en) A kind of generation method, client, terminal and the storage medium of AR expression
CN113747199A (en) Video editing method, video editing apparatus, electronic device, storage medium, and program product
KR20240096709A (en) Inserting ads into video within messaging system
CN104917672A (en) E-mail signature setting method and device
US10965629B1 (en) Method for generating imitated mobile messages on a chat writer server
WO2024022473A1 (en) Method for sending comment in live-streaming room, method for receiving comment in live-streaming room, and related device
CN110830845A (en) Video generation method and device and terminal equipment
CN107135087B (en) Information interaction method and terminal and computer storage medium
EP4399688A2 (en) System and method for dynamic profile photos
CN117461301A (en) System and method for animated emoticon recording and playback
CN118737169A (en) Voice processing method and device, electronic equipment and storage medium
CN118590462A (en) Processing method of expression symbol, electronic equipment and storage medium
CN113342444A (en) Method, device, terminal and storage medium for sending greeting card

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant