CN108038892A - Emoticon package making method, apparatus, electronic device and computer-readable storage medium - Google Patents
Emoticon package making method, apparatus, electronic device and computer-readable storage medium
- Publication number
- CN108038892A (Application CN201711219271.6A)
- Authority
- CN
- China
- Prior art keywords
- images
- frame
- video
- user
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/00—2D [Two Dimensional] image generation
- G06T11/60—Editing figures and text; Combining figures or text
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
- H04N5/272—Means for inserting a foreground image in a background image, i.e. inlay, outlay
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Processing Or Creating Images (AREA)
Abstract
The invention discloses an emoticon package making method and apparatus, an electronic device, and a computer-readable storage medium. The method includes: selecting a plurality of frames of images from a video; recognizing the portrait in the frames using face recognition technology, and replacing the background patterns other than the portrait with the pattern of a specified background picture; and generating an emoticon package picture in a preset format from the frames. With this technical solution, a user can generate an emoticon package in a preset format based on a background picture of the user's own choosing, instead of passively accepting or selecting emoticon packages provided by a third party, thereby obtaining the desired emoticon effect and enhancing the user experience.
Description
Technical Field
The invention relates to the technical field of computers, and in particular to a method and apparatus for making an emoticon package, an electronic device, and a computer-readable storage medium.
Background
To support communication between users, many social applications provide a chat function through which users can talk to each other or send various emoticons (emoticon packages) via a chat box. In practice, most of the emoticon packages that users send are obtained from third parties that specialize in making them: the third party generates emoticon packages from collected material and publishes them to the network, and the user picks the ones of interest from what the third party offers. In this situation, however, the user passively accepts or passively selects the emoticon packages, and the desired effect inevitably cannot always be achieved.
Disclosure of Invention
In view of the above, the present invention has been made to provide an emoticon making method, apparatus, electronic device, and computer-readable storage medium that overcome or at least partially solve the above-mentioned problems.
According to one aspect of the invention, a method for making an expression package is provided, wherein the method comprises the following steps:
selecting a plurality of frames of images from a video;
recognizing the portrait in the multi-frame images by using a face recognition technology, and replacing background patterns except the portrait in the multi-frame images with patterns of a specified background picture;
and generating an expression package picture in a preset format by using the multi-frame image.
Optionally, the selecting a plurality of frames of images from the video includes:
selecting a plurality of frames of images from the recorded video;
or,
in the process of recording the video, a plurality of frames of images are selected from the collected video images while the video images are collected.
Optionally, the selecting a plurality of frames of images from the video includes:
selecting a corresponding multi-frame image from the video according to a selection instruction of a user;
or selecting a frame of image every other preset number of frames;
or, when the change in the portrait expression in one frame of image relative to the previous frame is greater than a preset value, selecting that frame of image.
Optionally, the method further comprises:
acquiring a plurality of background pictures from a background picture library for displaying, and selecting a corresponding background picture as the specified background picture according to a selection instruction of a user;
or,
and taking a local picture of the user terminal equipment as the specified background picture according to an operation instruction of a user.
Optionally, before generating the emoticon picture in the preset format by using the multi-frame image, the method further includes:
acquiring a plurality of props from a prop library for displaying, and selecting corresponding props according to selection instructions of a user to add to one or more images in the multi-frame images;
and/or displaying a text editing control, and adding text to one or more of the plurality of frames of images according to the user's operation of the text editing control.
Optionally, the method further comprises:
and sharing the emoticon package picture in the preset format to a social application.
According to another aspect of the present invention, there is provided an emoticon making apparatus, wherein the apparatus comprises:
the selecting unit is used for selecting a plurality of frames of images from the video;
the replacing unit is used for recognizing the portrait in the multi-frame images by utilizing a face recognition technology and replacing background patterns except the portrait in the multi-frame images with patterns of a specified background picture;
and the generating unit is used for generating the emoticon package picture in a preset format from the plurality of frames of images.
Optionally, the selecting unit is configured to select a plurality of frames of images from the recorded video; or, in the process of recording the video, the multi-frame image is selected from the acquired video images while the video images are acquired.
Optionally, the selecting unit is configured to select the corresponding plurality of frames of images from the video according to a selection instruction of a user; or to select one frame of image every preset number of frames; or to select a frame of image when the change in the portrait expression in that frame relative to the previous frame is greater than a preset value.
Optionally, the apparatus further comprises:
the selecting unit is used for acquiring a plurality of background pictures from a background picture library for display, and selecting the corresponding background picture as the specified background picture according to a selection instruction of a user; or, according to an operation instruction of the user, using a picture local to the user terminal device as the specified background picture.
Optionally, the apparatus further comprises:
and the adding unit is used for, before the emoticon package picture in the preset format is generated from the plurality of frames of images, acquiring a plurality of props from a prop library for display and adding the prop selected by the user's selection instruction to one or more of the frames, and/or displaying a text editing control and adding text to one or more of the frames according to the user's operation of the text editing control.
Optionally, the apparatus further comprises:
and the sharing unit is used for sharing the expression package picture in the preset format to a social application.
According to still another aspect of the present invention, there is provided an electronic apparatus, wherein the electronic apparatus includes:
a processor; and
a memory arranged to store computer executable instructions that, when executed, cause the processor to perform a method according to the foregoing.
According to yet another aspect of the present invention, there is provided a computer readable storage medium, wherein the computer readable storage medium stores one or more programs which, when executed by a processor, implement the aforementioned method.
According to the technical scheme of the invention, a plurality of frames of images are selected from a video; the portrait in the frames is recognized using face recognition technology, and the background patterns other than the portrait are replaced with the pattern of a specified background picture; an emoticon package picture in a preset format is then generated from the frames. With this technical solution, the user can generate an emoticon package in a preset format based on a specified background picture of the user's own choosing, without having to passively accept or select emoticon packages provided by a third party, thereby obtaining the desired emoticon effect and enhancing the user experience.
The foregoing description is only an overview of the technical solutions of the present invention, and the embodiments of the present invention are described below in order to make the technical means of the present invention more clearly understood and to make the above and other objects, features, and advantages of the present invention more clearly understandable.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
FIG. 1 is a flow diagram illustrating a method for producing an expression package according to an embodiment of the invention;
FIG. 2 is a schematic diagram of the construction of an emoticon making apparatus according to an embodiment of the invention;
FIG. 3 shows a schematic structural diagram of an electronic device according to one embodiment of the invention;
fig. 4 shows a schematic structural diagram of a computer-readable storage medium according to an embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
Fig. 1 is a flow chart diagram illustrating an emoticon making method according to an embodiment of the invention. As shown in fig. 1, the method includes:
step S110, selecting a plurality of frames of images from the video.
In this embodiment, the user may generate an emoticon package picture in a preset format from an existing video. A user may specify or record a video, then select a plurality of frames of images from it; the emoticon package picture in the preset format is generated from those frames.
And step S120, recognizing the portrait in the multi-frame images by using a face recognition technology, and replacing the background patterns except the portrait in the multi-frame images with the patterns of the specified background pictures.
In this embodiment, an emoticon package picture in a preset format is generated from the portrait in a video. The portrait needs to be recognized in the selected frames using face recognition technology, and the part of each frame other than the portrait is replaced with the pattern of the specified background picture. In this way the user can replace the background according to his or her own wishes and obtain the desired effect.
In this embodiment, the user may specify a plurality of background pictures, and replace each frame of image with a different background picture pattern; it is also possible to specify a background picture, and replace each frame of image with the pattern of the specified background picture.
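As a concrete illustration of this background-replacement step, the compositing itself can be sketched with Pillow, assuming a binary portrait mask has already been produced by some face/person segmentation step; the patent does not name a particular segmentation algorithm or library, so `mask` here is a hypothetical input and the function name is illustrative:

```python
from PIL import Image

def replace_background(frame, mask, background):
    """Keep the portrait pixels where the mask is white (255) and
    fill everything else with the specified background picture.

    `mask` is a hypothetical binary "L"-mode image produced by an
    unspecified face/person segmentation step.
    """
    # Scale the specified background picture to the frame size, then
    # composite: mask=255 -> take `frame`, mask=0 -> take the background.
    return Image.composite(frame, background.resize(frame.size), mask)
```

`Image.composite` picks pixels from the first image wherever the mask is set, which matches the "replace everything except the portrait" behaviour described above; per-frame masks allow a different background per frame, as in this embodiment.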
And step S130, generating an emoticon package picture in a preset format from the plurality of frames of images.
The preset format is set as required; for example, it may be the GIF format. A GIF file can store a plurality of images, and reading out the stored images one by one and displaying them on screen forms a simple animation. In this embodiment, a GIF emoticon package picture may be generated from the frames whose backgrounds have been replaced.
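The GIF assembly just described can be sketched with Pillow; the function name and the 200 ms default frame duration are illustrative choices, not taken from the patent:

```python
from PIL import Image

def frames_to_gif(frames, out, duration_ms=200):
    """Store a sequence of frames in one GIF; players then display the
    stored images one by one, forming a simple animation."""
    first, *rest = frames
    first.save(out, format="GIF", save_all=True,
               append_images=rest, duration=duration_ms, loop=0)
```

`save_all=True` with `append_images` is what makes Pillow write all frames into one file rather than only the first; `loop=0` makes the animation repeat indefinitely.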
With this technical solution, therefore, the user can generate an emoticon package in a preset format based on a specified background picture of the user's own choosing, without having to passively accept or select emoticon packages provided by a third party, thereby obtaining the desired emoticon effect and enhancing the user experience.
In one embodiment of the present invention, the selecting a plurality of frames of images from the video in step S110 includes: selecting a plurality of frames of images from the recorded video; or, in the process of recording the video, the multi-frame image is selected from the acquired video images while the video images are acquired.
The present embodiment is a preferred embodiment of the timing of extracting a plurality of frame images. The video in step S110 may be recorded by the user, so that when the emoticon image in the preset format is generated, the portrait in the video recorded by the user may be used, and the user experience may be further improved.
In this embodiment, the extraction of the multiple frames of images may be after the video is recorded, or during the process of recording the video. For example, a user records a video, and if the user wants to generate an expression package picture in the GIF format, the user can select to generate the expression package picture in the GIF format after the recording is completed, and then select a plurality of frames of images from the recorded video. Or before recording the video, the user selects the option of generating the expression package picture in the GIF format, then records the video, and selects the multi-frame image in the video recording process.
In one embodiment of the present invention, the selecting of a plurality of frames of images from the video in step S110 includes: selecting the corresponding frames from the video according to a selection instruction of a user; or selecting one frame of image every preset number of frames; or selecting a frame of image when the change in the portrait expression in that frame relative to the previous frame is greater than a preset value.
The present embodiment is a preferred embodiment of the manner of extracting a plurality of frame images.
In this embodiment, when selecting frames, the user may specify which frames of the video are used as material for generating the emoticon package picture in the preset format. For example, from a video the user has recorded, the user may pick an image with a smiling expression, an image with an angry expression, and an image with a sad expression, and the corresponding images are selected according to the user's selection instructions. Concretely, the user may click an image while the recorded video is playing, upon which a prompt asking whether to use it for generating a GIF emoticon package may appear; if the user confirms, that frame is selected as material for the GIF emoticon package picture and processing proceeds to the next step.
Frames may also be selected at a fixed interval, one frame every preset number of frames. For example, with one frame selected every 5 frames, the 1st, 6th, and 11th frames of a video would be chosen, and so on. The selected images serve as material for generating the emoticon package picture in the preset format, and processing proceeds to the next step.
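The interval-based selection just described amounts to keeping every N-th frame. A minimal pure-Python sketch, with the frame sequence abstracted as any iterable (the function name is illustrative):

```python
def sample_every(frames, interval):
    """Keep the 1st frame, then one frame every `interval` frames:
    with interval=5 this keeps frames 1, 6, 11, ... (0-based 0, 5, 10)."""
    return [frame for i, frame in enumerate(frames) if i % interval == 0]
```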
Frames may also be selected whenever the change in the portrait expression in a frame relative to the previous frame is greater than a preset value. In this embodiment, the expression of the portrait in each frame needs to be recognized, and a frame is selected as soon as a large expression change is detected; this improves selection efficiency and ensures the display effect of the generated emoticon package picture. For example, if in a video the portrait is smiling in the 10th, 11th, and 12th frames with little change among them, only one of those three frames is selected; if the expression then changes sharply in the 13th frame (for example, to crying), the change relative to the 12th frame is large and the 13th frame is selected. Using all of the 10th, 11th, and 12th frames would increase the number of selected images and reduce the generation efficiency of the emoticon package without improving the display effect. Selecting a frame only when its expression change relative to the previous frame exceeds the preset value therefore improves generation efficiency while preserving the display effect.
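The change-based rule can be sketched as follows. `expression_score` stands in for whatever per-frame expression measurement the face recognition step provides; the patent does not specify one, so it is a hypothetical callback:

```python
def select_by_expression_change(frames, expression_score, threshold):
    """Keep a frame when its expression score differs from the previous
    frame's score by more than `threshold`; always keep the first frame."""
    selected, prev = [], None
    for frame in frames:
        score = expression_score(frame)
        if prev is None or abs(score - prev) > threshold:
            selected.append(frame)
        prev = score  # each frame is compared against its immediate predecessor
    return selected
```

This mirrors the example above: several consecutive frames with near-identical smiles collapse to a single selected frame, while a sudden change (such as the 13th frame) is kept.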
In one embodiment of the present invention, the method shown in fig. 1 further comprises: acquiring a plurality of background pictures from a background picture library for display, and selecting the corresponding background picture as the specified background picture according to a selection instruction of a user; or, according to an operation instruction of the user, using a picture local to the user terminal device as the specified background picture.
In this embodiment, the specified background picture may be a picture in a background picture library, or a picture locally stored by the user. The user is required to select a specified background picture before replacing the background pattern in the multi-frame image.
To make it convenient for the user to choose, a plurality of background pictures are acquired from the background picture library and displayed, so that the user can take the one he or she selects as the specified background picture; alternatively, the user opens the pictures local to the terminal device and selects one of them as the specified background picture.
In an embodiment of the present invention, before the emoticon package picture in the preset format is generated from the plurality of frames of images in step S130, the method shown in fig. 1 further includes: acquiring a plurality of props from a prop library for display, and adding the props selected by the user's selection instructions to one or more of the frames; and/or displaying a text editing control, and adding text to one or more of the frames according to the user's operation of the text editing control.
To further improve the display effect of the emoticon package pictures, the user can add prop effects to one or more of the frames according to his or her own wishes, such as hat or pumpkin-head props, and/or add some text, for example "I'm happy". When the emoticon package picture in the preset format is generated, the added props and/or text are included accordingly, making the emoticon package picture richer and further enhancing the user experience.
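Adding text to a frame, as described above, can be sketched with Pillow's drawing API; the position, colour, and use of the built-in default font are illustrative choices, and prop overlays would work analogously with `Image.paste` and an alpha mask:

```python
from PIL import Image, ImageDraw

def add_caption(frame, text, xy=(10, 10), fill="white"):
    """Return a copy of the frame with caption text drawn on it,
    using Pillow's built-in default font."""
    out = frame.copy()
    ImageDraw.Draw(out).text(xy, text, fill=fill)
    return out
```

Drawing on a copy keeps the original frame untouched, so the same source frames can be reused with different captions or props.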
In one embodiment of the present invention, the method shown in fig. 1 further comprises: sharing the emoticon package picture in the preset format to a social application.
After the emoticon package picture in the preset format is generated, the user can share it with others through social applications, for example with friends on WeChat, QQ, or Weibo, which improves the user experience.
In a specific example, the user records a video in which the portrait is the user, the background is a ceiling, and the video contains the user's laughing, crying, and angry expressions. To generate a GIF emoticon package picture, three frames can be selected from the video after recording finishes, and the portrait is recognized in those frames. According to a mountain background picture specified by the user, the ceiling around the portrait in the three frames is replaced with the mountain; the text "I want to go mountain climbing" is added to the frames according to the user's selection; and the three frames are then assembled into a GIF emoticon package picture. The resulting GIF shows the user laughing, crying, and being angry in front of a mountain, with the text "I want to go mountain climbing" displayed. Alternatively, the three frames with different expressions can be extracted in real time by recognizing each frame during recording; the processing after extraction is the same as described above and is not repeated here.
In one embodiment of the invention, the user can edit the size, font, color, and effects of the added text.
Fig. 2 is a schematic structural diagram of an emoticon making apparatus according to an embodiment of the invention. As shown in fig. 2, the emoticon making apparatus 200 includes:
the selecting unit 210 is configured to select a plurality of frames of images from the video.
In this embodiment, the user may generate an emoticon package picture in a preset format from an existing video. A user may specify or record a video, then select a plurality of frames of images from it; the emoticon package picture in the preset format is generated from those frames.
And a replacing unit 220, configured to recognize the portrait in the multi-frame image by using a face recognition technology, and replace the background pattern other than the portrait in the multi-frame image with the pattern of the specified background picture.
In this embodiment, an emoticon package picture in a preset format is generated from the portrait in a video. The portrait needs to be recognized in the selected frames using face recognition technology, and the part of each frame other than the portrait is replaced with the pattern of the specified background picture. In this way the user can replace the background according to his or her own wishes and obtain the desired effect.
In this embodiment, the user may specify a plurality of background pictures, and replace each frame of image with a different background picture pattern; it is also possible to specify a background picture, and replace each frame of image with the pattern of the specified background picture.
The generating unit 230 is configured to generate the emoticon package picture in a preset format from the plurality of frames of images.
The preset format is set as required; for example, it may be the GIF format. A GIF file can store a plurality of images, and reading out the stored images one by one and displaying them on screen forms a simple animation. In this embodiment, a GIF emoticon package picture may be generated from the frames whose backgrounds have been replaced.
With this technical solution, therefore, the user can generate an emoticon package in a preset format based on a specified background picture of the user's own choosing, without having to passively accept or select emoticon packages provided by a third party, thereby obtaining the desired emoticon effect and enhancing the user experience.
In an embodiment of the present invention, the selecting unit 210 is configured to select a plurality of frames of images from the recorded video; or, in the process of recording the video, the multi-frame image is selected from the acquired video images while the video images are acquired.
The present embodiment is a preferred embodiment of the timing of extracting a plurality of frame images. The video in step S110 may be recorded by the user, so that when the emoticon image in the preset format is generated, the portrait in the video recorded by the user may be used, and the user experience may be further improved.
In this embodiment, the extraction of the multiple frames of images may be after the video is recorded, or during the process of recording the video. For example, a user records a video, and if the user wants to generate an expression package picture in the GIF format, the user can select to generate the expression package picture in the GIF format after the recording is completed, and then select a plurality of frames of images from the recorded video. Or before recording the video, the user selects the option of generating the expression package picture in the GIF format, then records the video, and selects the multi-frame image in the video recording process.
In an embodiment of the present invention, the selecting unit 210 is configured to select the corresponding plurality of frames of images from the video according to a selection instruction of a user; or to select one frame of image every preset number of frames; or to select a frame of image when the change in the portrait expression in that frame relative to the previous frame is greater than a preset value.
The present embodiment is a preferred embodiment of the manner of extracting a plurality of frame images.
In this embodiment, when selecting frames, the user may specify which frames of the video are used as material for generating the emoticon package picture in the preset format. For example, from a video the user has recorded, the user may pick an image with a smiling expression, an image with an angry expression, and an image with a sad expression, and the corresponding images are selected according to the user's selection instructions. Concretely, the user may click an image while the recorded video is playing, upon which a prompt asking whether to use it for generating a GIF emoticon package may appear; if the user confirms, that frame is selected as material for the GIF emoticon package picture and processing proceeds to the next step.
Frames may also be selected at a fixed interval, one frame every preset number of frames. For example, with one frame selected every 5 frames, the 1st, 6th, and 11th frames of a video would be chosen, and so on. The selected images serve as material for generating the emoticon package picture in the preset format, and processing proceeds to the next step.
Frames may also be selected whenever the change in the portrait expression in a frame relative to the previous frame is greater than a preset value. In this embodiment, the expression of the portrait in each frame needs to be recognized, and a frame is selected as soon as a large expression change is detected; this improves selection efficiency and ensures the display effect of the generated emoticon package picture. For example, if in a video the portrait is smiling in the 10th, 11th, and 12th frames with little change among them, only one of those three frames is selected; if the expression then changes sharply in the 13th frame (for example, to crying), the change relative to the 12th frame is large and the 13th frame is selected. Using all of the 10th, 11th, and 12th frames would increase the number of selected images and reduce the generation efficiency of the emoticon package without improving the display effect. Selecting a frame only when its expression change relative to the previous frame exceeds the preset value therefore improves generation efficiency while preserving the display effect.
In one embodiment of the present invention, the apparatus shown in fig. 2 further comprises:
the selecting unit is used for acquiring a plurality of background pictures from the background picture library for display, and selecting the corresponding background picture as the specified background picture according to a selection instruction of the user; or, according to an operation instruction of the user, using a picture stored locally on the user's terminal device as the specified background picture.
In this embodiment, the specified background picture may be a picture from the background picture library or a picture stored locally by the user. The user must select the specified background picture before the background patterns in the multi-frame images are replaced.
To make this selection convenient, a plurality of background pictures are retrieved from the background picture library and displayed, so that the user can pick one of them as the specified background picture; alternatively, the user opens the pictures stored locally on the terminal device and selects one of them as the specified background picture.
In one embodiment of the present invention, the apparatus shown in fig. 2 further comprises:
the adding unit is used for, before the expression package picture in the preset format is generated from the multi-frame images: acquiring a plurality of props from a prop library for display, and adding the prop selected by a selection instruction of the user to one or more of the multi-frame images; and/or displaying a text editing control, and adding text to one or more of the multi-frame images according to the user's operation of the text editing control.
In order to further improve the display effect of the expression package pictures, the user can add prop effects, such as hats or pumpkin heads, to one or more of the multi-frame images as desired, and/or add some text, for example "I'm happy". When the expression package picture in the preset format is generated, the added props and/or text are included accordingly, making the expression package richer and further enhancing the user experience.
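Overlaying caption text on a selected frame could look like the following sketch using Pillow. The library choice, function name, and fixed caption position are assumptions for illustration; the patent does not prescribe a particular image library or layout:

```python
from PIL import Image, ImageDraw

def add_caption(frame: Image.Image, text: str) -> Image.Image:
    """Return a copy of `frame` with white caption text drawn near the
    bottom-left corner, leaving the original frame untouched."""
    out = frame.copy()
    draw = ImageDraw.Draw(out)
    width, height = out.size
    # Draw with Pillow's built-in default font; a real app would pick a
    # styled font and position based on the text editing control.
    draw.text((width // 10, height - height // 5), text, fill=(255, 255, 255))
    return out

frame = Image.new("RGB", (240, 180), (30, 30, 30))
captioned = add_caption(frame, "I'm happy")
print(captioned.size)  # -> (240, 180)
```

Prop overlays (hats, pumpkin heads, and the like) would work the same way, compositing a transparent PNG onto the frame instead of drawing text.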
In one embodiment of the present invention, the apparatus shown in fig. 2 further comprises:
the sharing unit is used for sharing the expression package picture in the preset format to a social application.
After the expression package picture in the preset format is generated, the user can share it with others through social applications such as WeChat, QQ or Weibo, which improves the user experience.
The present invention also provides an electronic device, wherein the electronic device includes:
a processor; and a memory arranged to store computer executable instructions that, when executed, cause the processor to perform a method of producing an expression package according to the embodiment shown in fig. 1 and its various embodiments.
Fig. 3 shows a schematic structural diagram of an electronic device according to an embodiment of the invention. As shown in fig. 3, the electronic device 300 includes:
a processor 310; and a memory 320 arranged to store computer-executable instructions (program code). The memory 320 contains a storage space 330 in which the program code 340 for performing the steps of the method according to the invention is stored; when executed, the program code causes the processor 310 to perform the expression package production method shown in fig. 1 and its various embodiments.
Fig. 4 shows a schematic structural diagram of a computer-readable storage medium according to an embodiment of the present invention. As shown in fig. 4, the computer readable storage medium 400 stores one or more programs (program code) 410, and the one or more programs (program code) 410, when executed by the processor, are configured to perform the method steps according to the present invention, i.e., the method for producing an emoticon as shown in fig. 1 and its various embodiments.
It should be noted that the embodiments of the electronic device shown in fig. 3 and the computer-readable storage medium shown in fig. 4 are the same as the embodiments of the method shown in fig. 1, and the detailed description is given above and is not repeated here.
In summary, according to the technical solution of the present invention, multiple frames of images are selected from a video; the portrait in these images is recognized using face recognition technology, and the background patterns other than the portrait are replaced with the pattern of a specified background picture; an expression package picture in a preset format is then generated from the multi-frame images. The user can thus generate an expression package based on a background picture of his or her own choosing, instead of passively receiving or selecting expression packages provided by third parties, obtaining exactly the expression package effect desired and enhancing the user experience.
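Assuming an upstream face/person segmentation step has already produced a binary portrait mask (the patent only states that face recognition technology is used; the mask representation here is an assumption), the background replacement step can be sketched with NumPy:

```python
import numpy as np

def replace_background(frame: np.ndarray, mask: np.ndarray, background: np.ndarray) -> np.ndarray:
    """Keep portrait pixels where mask is nonzero and take all other
    pixels from the user's designated background picture.

    frame, background: H x W x 3 uint8 arrays of the same shape.
    mask: H x W array, nonzero where the portrait is.
    """
    if frame.shape != background.shape:
        raise ValueError("frame and background must have the same shape")
    # Broadcast the mask over the color channels and composite.
    return np.where(mask[..., None].astype(bool), frame, background)

# Toy example: a 2x2 "portrait" on the diagonal over a black background.
frame = np.full((2, 2, 3), 200, dtype=np.uint8)
background = np.zeros((2, 2, 3), dtype=np.uint8)
mask = np.array([[1, 0], [0, 1]])
out = replace_background(frame, mask, background)
print(out[:, :, 0].tolist())  # -> [[200, 0], [0, 200]]
```

The composited frames could then be assembled into an animated picture (for example a GIF, one plausible "preset format") to obtain the final expression package.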
It should be noted that:
the algorithms and displays presented herein are not inherently related to any particular computer, virtual machine, or other apparatus. Various general purpose devices may be used with the teachings herein. The required structure for constructing such a device will be apparent from the description above. Moreover, the present invention is not directed to any particular programming language. It is appreciated that a variety of programming languages may be used to implement the teachings of the present invention as described herein, and any descriptions of specific languages are provided above to disclose the best mode of the invention.
In the description provided herein, numerous specific details are set forth. It is understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. However, the disclosed method should not be interpreted as reflecting an intention that: that the invention as claimed requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art will appreciate that the modules in the device in an embodiment may be adaptively changed and disposed in one or more devices different from the embodiment. The modules or units or components of the embodiments may be combined into one module or unit or component, and furthermore they may be divided into a plurality of sub-modules or sub-units or sub-components. All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where at least some of such features and/or processes or elements are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments described herein include some features included in other embodiments, rather than other features, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the following claims, any of the claimed embodiments may be used in any combination.
The various component embodiments of the invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that a microprocessor or Digital Signal Processor (DSP) may be used in practice to implement some or all of the functions of some or all of the components of the emoticon making apparatus, the electronic device, and the computer readable storage medium according to embodiments of the present invention. The present invention may also be embodied as apparatus or device programs (e.g., computer programs and computer program products) for performing a portion or all of the methods described herein. Such programs implementing the present invention may be stored on computer-readable media or may be in the form of one or more signals. Such a signal may be downloaded from an internet website or provided on a carrier signal or in any other form.
For example, fig. 3 shows a schematic structural diagram of an electronic device according to an embodiment of the invention. The electronic device 300 conventionally comprises a processor 310 and a memory 320 arranged to store computer-executable instructions (program code). The memory 320 may be an electronic memory such as a flash memory, an EEPROM (electrically erasable programmable read only memory), an EPROM, a hard disk, or a ROM. Memory 320 has storage space 330 for storing program code 340 for performing the method steps shown in fig. 1 and in any of the embodiments. For example, the storage space 330 for the program code may comprise respective program codes 340 for implementing respective steps in the above method. The program code can be read from or written to one or more computer program products. These computer program products comprise a program code carrier such as a hard disk, a Compact Disc (CD), a memory card or a floppy disk. Such a computer program product is generally a computer-readable storage medium 400 such as described in fig. 4. The computer-readable storage medium 400 may have memory segments, memory spaces, etc. arranged similarly to the memory 320 in the electronic device of fig. 3. The program code may be compressed, for example, in a suitable form. In general, the memory unit stores a program code 410 for performing the steps of the method according to the invention, i.e. a program code readable by a processor such as 310, which program code, when executed by an electronic device, causes the electronic device to perform the individual steps of the method described above.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The usage of the words first, second and third, etcetera do not indicate any ordering. These words may be interpreted as names.
The invention discloses A1, a method for making an expression package, wherein the method comprises the following steps:
selecting a plurality of frames of images from a video;
recognizing the portrait in the multi-frame images by using a face recognition technology, and replacing background patterns except the portrait in the multi-frame images with patterns of a specified background picture;
and generating an expression package picture in a preset format by using the multi-frame image.
A2, the method of A1, wherein the selecting the plurality of frames of images from the video comprises:
selecting a plurality of frames of images from the recorded video;
or,
in the process of recording the video, a plurality of frames of images are selected from the collected video images while the video images are collected.
A3, the method of A1 or A2, wherein the selecting a plurality of frames of images from a video comprises:
selecting a corresponding multi-frame image from the video according to a selection instruction of a user;
or selecting one frame of image every preset number of frames;
or, when the change value of the portrait expression in one frame of image relative to the previous frame is larger than a preset value, selecting the frame of image.
A4, the method of A1, wherein the method further comprises:
acquiring a plurality of background pictures from a background picture library for displaying, and selecting a corresponding background picture as the specified background picture according to a selection instruction of a user;
or,
and taking a local picture of the user terminal equipment as the specified background picture according to an operation instruction of a user.
A5, the method of A1, wherein before the generating the expression package picture in the preset format by using the plurality of frames of images, the method further comprises:
acquiring a plurality of props from a prop library for displaying, and selecting corresponding props according to selection instructions of a user to add to one or more images in the multi-frame images;
and/or displaying a text editing control, and adding text to one or more images in the multi-frame images according to the operation of the user on the text editing control.
A6, the method of A1, wherein the method further comprises:
and sharing the expression package picture in the preset format to a social application.
The invention also discloses B7, an expression package making apparatus, wherein the apparatus comprises:
the selecting unit is used for selecting a plurality of frames of images from the video;
the replacing unit is used for recognizing the portrait in the multi-frame images by utilizing a face recognition technology and replacing background patterns except the portrait in the multi-frame images with patterns of a specified background picture;
and the generating unit is used for generating the expression bag picture in a preset format by using the multi-frame image.
B8, the device of B7, wherein,
the selecting unit is used for selecting a plurality of frames of images from the recorded video; or, in the process of recording the video, the multi-frame image is selected from the acquired video images while the video images are acquired.
B9, the device of B7 or B8, wherein,
the selecting unit is used for selecting a corresponding multi-frame image from the video according to a selection instruction of a user; or selecting one frame of image every preset number of frames; or, when the change value of the portrait expression in one frame of image relative to the previous frame is larger than a preset value, selecting the frame of image.
B10, the apparatus of B7, wherein the apparatus further comprises:
the selecting unit is used for acquiring a plurality of background pictures from a background picture library for displaying, and selecting a corresponding background picture as the appointed background picture according to a selection instruction of a user; or, according to the operation instruction of the user, taking the picture local to the user terminal device as the specified background picture.
B11, the apparatus of B7, wherein the apparatus further comprises:
and the adding unit is used for, before the expression package picture in the preset format is generated from the multi-frame images, acquiring a plurality of props from a prop library for display and adding the prop selected by a selection instruction of the user to one or more images in the multi-frame images, and/or displaying a text editing control and adding text to one or more images in the multi-frame images according to the operation of the user on the text editing control.
B12, the apparatus of B7, wherein the apparatus further comprises:
and the sharing unit is used for sharing the expression package picture in the preset format to a social application.
The invention also discloses C13, an electronic device, wherein the electronic device comprises:
a processor; and
a memory arranged to store computer executable instructions that, when executed, cause the processor to perform a method according to any one of a 1-a 6.
The invention also discloses D14, a computer readable storage medium, wherein the computer readable storage medium stores one or more programs that, when executed by a processor, implement the method of any one of a 1-a 6.
Claims (10)
1. A method for making an expression package, wherein the method comprises the following steps:
selecting a plurality of frames of images from a video;
recognizing the portrait in the multi-frame images by using a face recognition technology, and replacing background patterns except the portrait in the multi-frame images with patterns of a specified background picture;
and generating an expression package picture in a preset format by using the multi-frame image.
2. The method of claim 1, wherein said selecting a plurality of frames of images from a video comprises:
selecting a plurality of frames of images from the recorded video;
or,
in the process of recording the video, a plurality of frames of images are selected from the collected video images while the video images are collected.
3. The method of claim 1 or 2, wherein said selecting a plurality of frames of images from a video comprises:
selecting a corresponding multi-frame image from the video according to a selection instruction of a user;
or selecting one frame of image every preset number of frames;
or, when the change value of the portrait expression in one frame of image relative to the previous frame is larger than a preset value, selecting the frame of image.
4. The method of claim 1, wherein the method further comprises:
acquiring a plurality of background pictures from a background picture library for displaying, and selecting a corresponding background picture as the specified background picture according to a selection instruction of a user;
or,
and taking a local picture of the user terminal equipment as the specified background picture according to an operation instruction of a user.
5. The method of claim 1, wherein before generating the emoticon picture in the preset format from the multi-frame image, the method further comprises:
acquiring a plurality of props from a prop library for displaying, and selecting corresponding props according to selection instructions of a user to add to one or more images in the multi-frame images;
and/or displaying a text editing control, and adding text to one or more images in the multi-frame images according to the operation of the user on the text editing control.
6. The method of claim 1, wherein the method further comprises:
and sharing the expression package picture in the preset format to a social application.
7. An expression package making device, wherein the device comprises:
the selecting unit is used for selecting a plurality of frames of images from the video;
the replacing unit is used for recognizing the portrait in the multi-frame images by utilizing a face recognition technology and replacing background patterns except the portrait in the multi-frame images with patterns of a specified background picture;
and the generating unit is used for generating the expression bag picture in a preset format by using the multi-frame image.
8. The apparatus of claim 7, wherein,
the selecting unit is used for selecting a plurality of frames of images from the recorded video; or, in the process of recording the video, the multi-frame image is selected from the acquired video images while the video images are acquired.
9. An electronic device, wherein the electronic device comprises:
a processor; and
a memory arranged to store computer executable instructions that, when executed, cause the processor to perform a method according to any one of claims 1 to 6.
10. A computer readable storage medium, wherein the computer readable storage medium stores one or more programs which, when executed by a processor, implement the method of any of claims 1-6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711219271.6A CN108038892A (en) | 2017-11-28 | 2017-11-28 | Expression package production method, apparatus, electronic device and computer-readable storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711219271.6A CN108038892A (en) | 2017-11-28 | 2017-11-28 | Expression package production method, apparatus, electronic device and computer-readable storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108038892A true CN108038892A (en) | 2018-05-15 |
Family
ID=62093059
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711219271.6A Pending CN108038892A (en) | Expression package production method, apparatus, electronic device and computer-readable storage medium | 2017-11-28 | 2017-11-28 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108038892A (en) |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103426194A (en) * | 2013-09-02 | 2013-12-04 | 厦门美图网科技有限公司 | Manufacturing method for full animation expression |
CN104917666A (en) * | 2014-03-13 | 2015-09-16 | 腾讯科技(深圳)有限公司 | Method of making personalized dynamic expression and device |
CN105872438A (en) * | 2015-12-15 | 2016-08-17 | 乐视致新电子科技(天津)有限公司 | Video call method and device, and terminal |
CN106204426A (en) * | 2016-06-30 | 2016-12-07 | 广州华多网络科技有限公司 | A kind of method of video image processing and device |
CN106412643A (en) * | 2016-09-09 | 2017-02-15 | 上海掌门科技有限公司 | Interactive video advertisement placing method and system |
WO2017116387A1 (en) * | 2015-12-28 | 2017-07-06 | Thomson Licensing | Context-aware feedback |
CN107240143A (en) * | 2017-05-09 | 2017-10-10 | 北京小米移动软件有限公司 | Bag generation method of expressing one's feelings and device |
CN107330408A (en) * | 2017-06-30 | 2017-11-07 | 北京金山安全软件有限公司 | Video processing method and device, electronic equipment and storage medium |
CN107370887A (en) * | 2017-08-30 | 2017-11-21 | 维沃移动通信有限公司 | A kind of expression generation method and mobile terminal |
Cited By (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108846881B (en) * | 2018-05-29 | 2023-05-12 | 珠海格力电器股份有限公司 | Expression image generation method and device |
CN108846881A (en) * | 2018-05-29 | 2018-11-20 | 珠海格力电器股份有限公司 | Expression image generation method and device |
CN109729284A (en) * | 2018-12-17 | 2019-05-07 | 惠州Tcl移动通信有限公司 | Image processing method, intelligent terminal and the storage device of intelligent terminal |
CN109816759A (en) * | 2019-01-25 | 2019-05-28 | 维沃移动通信有限公司 | A kind of expression generation method and device |
CN109816759B (en) * | 2019-01-25 | 2023-11-17 | 维沃移动通信有限公司 | Expression generating method and device |
CN111625740A (en) * | 2019-02-28 | 2020-09-04 | 阿里巴巴集团控股有限公司 | Image display method, image display device and electronic equipment |
CN110049377B (en) * | 2019-03-12 | 2021-06-22 | 北京奇艺世纪科技有限公司 | Expression package generation method and device, electronic equipment and computer readable storage medium |
CN110049377A (en) * | 2019-03-12 | 2019-07-23 | 北京奇艺世纪科技有限公司 | Expression packet generation method, device, electronic equipment and computer readable storage medium |
CN110321845B (en) * | 2019-07-04 | 2021-06-18 | 北京奇艺世纪科技有限公司 | Method and device for extracting emotion packets from video and electronic equipment |
CN110321009A (en) * | 2019-07-04 | 2019-10-11 | 北京百度网讯科技有限公司 | AR expression processing method, device, equipment and storage medium |
CN110321845A (en) * | 2019-07-04 | 2019-10-11 | 北京奇艺世纪科技有限公司 | A kind of method, apparatus and electronic equipment for extracting expression packet from video |
CN110582020A (en) * | 2019-09-03 | 2019-12-17 | 北京达佳互联信息技术有限公司 | Video generation method and device, electronic equipment and storage medium |
CN110582020B (en) * | 2019-09-03 | 2022-03-01 | 北京达佳互联信息技术有限公司 | Video generation method and device, electronic equipment and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108038892A (en) | 2018-05-15 | Expression package production method, apparatus, electronic device and computer-readable storage medium |
EP3758364B1 (en) | Dynamic emoticon-generating method, computer-readable storage medium and computer device | |
CN108377418B (en) | Video annotation processing method and device | |
CN108010112B (en) | Animation processing method, device and storage medium | |
CN110557678B (en) | Video processing method, device and equipment | |
CN108460104B (en) | Method and device for customizing content | |
CN108989609A (en) | Video cover generation method, device, terminal device and computer storage medium | |
CN111935505B (en) | Video cover generation method, device, equipment and storage medium | |
CN108401176A (en) | A kind of method and apparatus for realizing video personage mark | |
US20180143741A1 (en) | Intelligent graphical feature generation for user content | |
US20200236297A1 (en) | Systems and methods for providing personalized videos | |
CN105787976A (en) | Method and apparatus for processing pictures | |
KR20210113679A (en) | Systems and methods for providing personalized video featuring multiple people | |
CN107547922B (en) | Information processing method, device, system and computer readable storage medium | |
CN109960549B (en) | GIF picture generation method and device | |
CN109766155A (en) | A kind of bullet frame generation method, device and storage medium | |
CN109753145A (en) | A kind of methods of exhibiting and relevant apparatus of transition cartoon | |
CN114880062A (en) | Chat expression display method and device, electronic device and storage medium | |
EP3876543A1 (en) | Video playback method and apparatus | |
CN109190019B (en) | User image generation method, electronic equipment and computer storage medium | |
CN107820622A (en) | A kind of virtual 3D setting works method and relevant device | |
CN108875670A (en) | Information processing method, device and storage medium | |
CN113343027B (en) | Interactive video editing and interactive video displaying method and device | |
CN114091639A (en) | Interactive expression generation method and device, electronic equipment and storage medium | |
CN112270733A (en) | AR expression package generation method and device, electronic equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
TA01 | Transfer of patent application right |
Effective date of registration: 2018-09-17
Address after: 1701-48A, 17th floor, Building 3, No. 10 Jiuxianqiao Road, Chaoyang District, Beijing 100015
Applicant after: BEIJING MIJINGHEFENG TECHNOLOGY CO.,LTD.
Address before: Room 201, Unit 2, Building 28, Lai Chun Yuan, Chaoyang District, Beijing 100012
Applicant before: BEIJING CHUANSHANG TECHNOLOGY Co.,Ltd.
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20180515 |