CN110930475A - Cartoon making method, system, medium and electronic equipment - Google Patents
- Publication number: CN110930475A
- Application number: CN201910967196.4A
- Authority: CN (China)
- Prior art keywords: picture, cartoon, pictures, user, image
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/00—2D [Two Dimensional] image generation
- G06T11/60—Editing figures and text; Combining figures or text
Abstract
The invention provides a cartoon making method, system, medium and electronic device. The method comprises the following steps: acquiring one or more first pictures; automatically converting the character image in each first picture into a cartoon image to obtain a second picture; matching the cartoon image in the second picture with dialogue content to obtain a third picture; and synthesizing one or more third pictures in sequence to generate the cartoon. The method lowers the barrier to creation and reduces production cost while preserving the quality of the original content.
Description
Technical Field
The invention relates to the technical field of computers, in particular to a cartoon making method, a cartoon making system, a cartoon making medium and electronic equipment.
Background
A caricature is an art form that depicts everyday life or current events with simple, exaggerated strokes. It generally uses deformation, metaphor, symbolism and allusion to construct a humorous frame or frame group, achieving a satirical or laudatory effect. As a form of painting, the caricature has gone through a long development: once the hobby of a few, it has become common reading material and a favourite of young people, many of whom are devoted comic fans.
In the past, caricatures were typically printed in comic books, which people had to purchase in order to read them. With the development of technology, smartphones have become widespread, and nowadays people can read comics on their phones, which is very convenient.
However, the threshold for creating comics remains very high: a certain foundation in fine art and drawing tools are needed. For many non-professional creators, the process of creating comics is too complicated, and both the learning cost and the time cost are high.
Therefore, through long-term research and development on simplifying the comic-making process, the inventor has proposed a cartoon making method to solve at least one of the above technical problems.
Disclosure of Invention
An object of the present invention is to provide a cartoon making method, system, medium and electronic device, which can solve at least one of the above-mentioned technical problems. The specific scheme is as follows:
In a first aspect, the invention provides a cartoon making method, comprising the following steps: acquiring one or more first pictures; automatically converting the character image in the first picture into a cartoon image to obtain a second picture; matching the cartoon image in the second picture with dialogue content to obtain a third picture; and synthesizing one or more third pictures in sequence to generate the cartoon.
According to a second aspect, the present invention provides a caricature production system, including: the material acquisition module is used for acquiring one or more first pictures; the image conversion module is used for converting the character image in the first picture into a cartoon image to obtain a second picture; the content matching module is used for matching the cartoon image in the second picture with the dialogue content to obtain a third picture; and the picture synthesis module is used for synthesizing one or more third pictures in sequence to generate the cartoon.
According to a third aspect, the present invention provides a computer-readable storage medium, on which a computer program is stored, which, when executed by a processor, implements the caricature making method according to any one of the above.
According to a fourth aspect, the present invention provides an electronic device, including: one or more processors; and a storage device for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the cartoon making method according to any one of the above.
Compared with the prior art, the scheme of the embodiments of the invention automatically converts the acquired picture materials into cartoon images and edits dialogue onto them, so that a user does not need an art foundation or drawing tools. This lowers the barrier to creation and reduces production cost, allowing the user to make comics simply and conveniently; in addition, because the user matches the dialogue to the comic pictures autonomously, the quality of the original content is guaranteed.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention. It is obvious that the drawings in the following description are only some embodiments of the invention, and that for a person skilled in the art, other drawings can be derived from them without inventive effort. In the drawings:
FIG. 1 shows a flow diagram of a method of caricature creation according to an embodiment of the invention;
fig. 2 is a schematic illustration showing a result of performing secondary editing on the second picture according to an embodiment of the present invention;
FIG. 3 shows a display diagram of generating a cartoon result according to an embodiment of the invention;
FIG. 4 is a schematic diagram of a caricature production system according to an embodiment of the present invention;
fig. 5 shows a schematic diagram of an electronic device connection structure according to an embodiment of the invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the present invention will be described in further detail with reference to the accompanying drawings, and it is apparent that the described embodiments are only a part of the embodiments of the present invention, not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terminology used in the embodiments of the invention is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in the examples of the present invention and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise, and "a plurality" typically includes at least two.
It should be understood that the term "and/or" as used herein merely describes an association between objects, meaning that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. In addition, the character "/" herein generally indicates that the objects before and after it are in an "or" relationship.
It should be understood that although the terms first, second, third, etc. may be used in embodiments of the present invention to describe various elements, these elements should not be limited by the terms. The terms are used only to distinguish one element from another. For example, a first element could also be termed a second element, and similarly a second element could be termed a first element, without departing from the scope of embodiments of the present invention.
The word "if" as used herein may be interpreted as "when", "upon", "in response to determining" or "in response to detecting", depending on the context. Similarly, the phrases "if it is determined" or "if (a stated condition or event) is detected" may be interpreted as "when it is determined", "in response to determining", "when (a stated condition or event) is detected" or "in response to detecting (a stated condition or event)", depending on the context.
It is also noted that the terms "comprises", "comprising", or any other variation thereof, are intended to cover a non-exclusive inclusion, such that an article or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such article or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in the article or apparatus that includes the element.
Alternative embodiments of the present invention are described in detail below with reference to the accompanying drawings.
Example 1
This embodiment provides a cartoon making method applied to a terminal device, where the terminal device may be a PC (Personal Computer), a smartphone, a tablet computer, or the like; the specific type of terminal device is not limited in this embodiment. Specifically, as shown in fig. 1, the method includes the following steps:
s11, acquiring one or more first pictures;
as an alternative embodiment, the acquiring one or more first pictures includes:
receiving search information input by a user; searching a material library for one or more first pictures related to the search information; displaying the one or more first pictures to the user; and receiving a picture-upload confirmation instruction from the user to acquire the one or more first pictures.
In a specific implementation, the terminal is provided with a pre-established visualization tool, which may be a graphical configuration tool based on an open-source browser engine (e.g., WebKit).
Using the visualization tool, the user can open an operation interface on the user terminal, which provides a search box. When the user has a basic idea for the comic content, search information can be entered in the search box; the search information is a keyword extracted from the existing idea. The keyword can be key content of the idea, or the category to which the idea belongs. For example, when the user's comic script concerns a bantering conversation between a teacher and a student during a physical-education class, the search information "teacher and student" is entered in the search box.
Before the browser engine executes the search, the correspondence between keywords and related picture materials is already stored in the material library, so the search engine can find one or more pieces of picture-material information from the current search information and the correspondence. The client then displays the one or more pieces of picture information to the user through a browser page for selection; after the user selects and confirms on the browser page, the client obtains one or more picture materials, which serve as first pictures. A first picture is a picture containing a person. Preferably, the first picture may be temporarily stored in a template library.
The material library contains various kinds of original content, such as pictures, audio and video. The user can search for and obtain materials directly from the library, which reduces the user's costs of shooting, content searching and production; on the other hand, combined with the library's massive multimedia content, it unlocks secondary value through derivative, playful creation.
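The retrieval flow just described — a keyword goes in, candidate first pictures come out for the user to confirm — can be sketched as follows. The library contents and the containment-based keyword match are illustrative assumptions; a real material library would sit behind a database or search engine.

```python
# Toy material library: keyword phrases mapped to picture files.
# Both the entries and the matching rule are assumptions for illustration.
MATERIAL_LIBRARY = {
    "teacher and student": ["classroom_01.png", "sports_class_02.png"],
    "office workers": ["office_01.png"],
}

def search_materials(search_info):
    """Return candidate first pictures whose stored keywords match the query."""
    results = []
    for keywords, pictures in MATERIAL_LIBRARY.items():
        # Naive containment match stands in for a real search index.
        if search_info in keywords or keywords in search_info:
            results.extend(pictures)
    return results
```

With the hypothetical library above, searching "teacher and student" would surface both classroom pictures for the user to select and confirm.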
As another alternative embodiment, the acquiring one or more first pictures includes:
after receiving an upload instruction from a user, displaying an upload page to the user, the upload page being used to select and confirm a picture to upload; and receiving a picture-upload confirmation instruction from the user to acquire the one or more first pictures.
In a specific implementation, after the user clicks an upload button on the page, a picture meeting the requirements is selected from locally stored pictures, or a picture is generated by taking a photo on the spot; after the selection is finished, the user clicks the upload confirmation button and the client acquires the first picture. Taking a photo on the spot means using the photographing function of the terminal device (e.g., its camera) directly to obtain a first picture.
S12, automatically converting the character image in the first picture into a cartoon image to obtain a second picture;
In a specific implementation, after the picture material is obtained, a cartoon scene template is called to automatically edit the first picture. The cartoon scene template recognizes the first picture through Artificial Intelligence (AI) and automatically converts it into a cartoon image. In this embodiment, the cartoon scene template is an editor. The conversion process is not described in detail here; refer to prior-art methods of converting a character image into a cartoon image by AI techniques.
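The patent defers the person-to-cartoon conversion to prior-art AI techniques. Purely to illustrate the flavour of such a step, the sketch below posterizes pixel colours — one classic ingredient of non-AI cartoon stylization, not the patent's actual method. The image representation (a list of rows of RGB tuples) and the parameter choices are assumptions.

```python
def cartoonize_pixel(rgb, levels=4):
    # Snap each channel to one of `levels` bands, centred in the band,
    # giving the flat-colour look typical of cartoon shading.
    step = 256 // levels
    return tuple((c // step) * step + step // 2 for c in rgb)

def cartoonize(image, levels=4):
    """Posterize a 2D list-of-rows image of RGB tuples (a toy stand-in
    for the AI conversion that produces the second picture)."""
    return [[cartoonize_pixel(px, levels) for px in row] for row in image]
```

A production system would replace this with a learned style-transfer or cartoonization model, as the description indicates.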
As an alternative embodiment, after the step of converting the character image in the first picture into a cartoon image and obtaining the second picture, the method may further comprise:
and carrying out secondary editing on the second picture, wherein the secondary editing comprises canvas editing and cartoon image editing. Wherein the canvas editing comprises canvas size, canvas color, and the like. And the cartoon image editing comprises the step of modifying again on the basis of the cartoon image, such as wearing glasses, hair bands and the like. Specifically, as shown in fig. 2, a schematic diagram of a display result of the second picture editing is shown.
S13, matching the cartoon image in the second picture with the dialogue content to obtain a third picture;
as an alternative embodiment, the matching the cartoon character in the second picture with the dialog content to obtain a third picture includes:
after an input event of a user is detected, displaying an input box control in a corresponding area; and receiving the dialog content input by the user, displaying the dialog content in the input box control, and obtaining a third picture.
In a specific implementation, after the cartoon image has been converted, the user can choose to match the dialogue content manually or have the system match it automatically. If the user selects manual matching, the user clicks a position in the second picture, an input box appears there, and the user edits the dialogue content manually; after editing is completed, clicking anywhere outside the input box finishes editing at the current position and displays the content. The border of the input box may be hidden.
The dialogue content includes Chinese characters, symbols, emoticons and the like. The position of the dialogue content is not restricted; it is matched according to the actual scene.
Furthermore, if a sentence in the dialogue is an inner monologue, it can be displayed as a "black-screen cut-in": for example, a picture containing only the inner monologue is inserted between the current picture and the next one.
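Representing each picture as a plain dictionary, the manual flow — click a position, type the dialogue, optionally splice in a black-screen monologue frame — could look like this sketch. The frame schema is an assumption for illustration only.

```python
def attach_dialogue(frame, text, position):
    """Record user-entered dialogue at the clicked position, turning a
    second picture into a third picture."""
    frame.setdefault("dialogue", []).append({"text": text, "pos": position})
    return frame

def insert_monologue(frames, index, text):
    """Insert a black-screen frame holding only an inner monologue
    right after frames[index]."""
    mono = {"style": "black-screen",
            "dialogue": [{"text": text, "pos": (0.5, 0.5)}]}
    return frames[:index + 1] + [mono] + frames[index + 1:]
```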
As another optional embodiment, the matching the cartoon character in the second picture with the dialog content to obtain a third picture includes:
acquiring all the dialogue content to be matched from the user; automatically splitting all the dialogue content into clauses according to semantics, and intelligently matching each clause with a second picture to obtain third pictures.
In a specific implementation, if the user selects automatic matching, the user enters the complete dialogue content on a page; after the input is finished and a match button is clicked, the client automatically splits the dialogue into clauses according to semantics and matches each clause to the corresponding second picture. For example, if a clause of the dialogue relates to walking, the cartoon scene template identifies the picture showing walking by AI techniques and matches that picture with the dialogue.
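A minimal sketch of the automatic path: the patent speaks of semantic clause splitting and intelligent matching, which would require an NLP model; here punctuation-based splitting and in-order assignment stand in for both, purely as an assumption-laden illustration.

```python
import re

def split_clauses(dialogue):
    """Split the full dialogue into clauses on sentence-final punctuation
    (a crude proxy for the semantic splitting the patent describes)."""
    parts = re.split(r"[.!?\u3002\uff01\uff1f]+", dialogue)
    return [p.strip() for p in parts if p.strip()]

def match_clauses(clauses, frames):
    """Assign clauses to frames in order; extra clauses pile onto the last frame."""
    matched = [[] for _ in frames]
    for i, clause in enumerate(clauses):
        matched[min(i, len(frames) - 1)].append(clause)
    return matched
```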
And S14, synthesizing one or more third pictures in sequence to generate the cartoon.
As an alternative embodiment, the synthesizing one or more third pictures in sequence to generate the cartoon includes:
stacking one or more third pictures in sequence to obtain pictures with multiple layers; and switching the pictures of the multiple layers at a fixed frequency to generate a dynamic cartoon picture.
In a specific implementation, the user sorts the plurality of third pictures obtained in step S13; however, sorting of the pictures is not limited to this step and may be performed in any of the above steps.
After sorting, the user selects a comic type for synthesis on the operation interface, the comic types including dynamic pictures and static pictures. When the user selects the dynamic-picture type, the client stacks the sorted third pictures, with the first picture on top and the last at the bottom; the stacked pictures are then switched at a fixed frequency to present a dynamic comic. The data format of the dynamic comic can be JPG, PNG, GIF, UMD, or the like.
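The fixed-frequency switching of stacked layers amounts to a looping frame schedule, as in an animated GIF. A sketch (the 2 Hz switching rate is an arbitrary assumption):

```python
def visible_frame(frames, t, switch_hz=2.0):
    """Return the stacked layer shown at time t (seconds) when layers
    are switched at a fixed frequency and loop endlessly."""
    index = int(t * switch_hz) % len(frames)
    return frames[index]
```

With an actual image library one would instead export the stack directly, e.g. as an animated file whose per-frame duration is 1/switch_hz seconds.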
Further, the method comprises: and S15, storing the cartoon in a work library.
Further, the method further comprises: and S16, receiving a contribution instruction of the user, and sending the cartoon to one or more cartoon platforms. Specifically, the user clicks a contribution button, and the generated cartoon is automatically sent to the corresponding platform through a contribution link.
As another alternative embodiment, the synthesizing one or more third pictures in sequence to generate the cartoon includes:
placing one or more of the third pictures in sequence; and, in response to a synthesis instruction from the user, statically synthesizing the one or more third pictures to generate a long-strip comic. Fig. 3 is a schematic diagram of the comic result generated by this embodiment. Of course, the long-strip comic is not limited to vertical display; it can also be displayed horizontally.
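Static synthesis into a long strip is essentially a concatenation layout: the canvas grows along one axis while each frame keeps its own size. A geometry-only sketch (the frame sizes in the test are hypothetical):

```python
def long_strip_layout(sizes, vertical=True):
    """Given per-frame (width, height) pairs, return the overall canvas
    size and the paste offset of each frame in the long-strip comic."""
    offsets, cursor = [], 0
    for w, h in sizes:
        # Each frame starts where the previous one ended along the strip axis.
        offsets.append((0, cursor) if vertical else (cursor, 0))
        cursor += h if vertical else w
    if vertical:
        canvas = (max(w for w, _ in sizes), cursor)
    else:
        canvas = (cursor, max(h for _, h in sizes))
    return canvas, offsets
```

The `vertical` flag mirrors the text's note that the strip may be displayed either longitudinally or transversely.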
In summary, the cartoon making method provided by this embodiment of the invention automatically converts the obtained picture materials into cartoon images, so that a user does not need an art foundation or drawing tools. This lowers the barrier to creation and reduces production cost, allowing the user to make comics simply and conveniently; in addition, because the user matches the dialogue to the comic pictures autonomously, the quality of the original content is guaranteed.
Example 2
Referring to fig. 4, an embodiment of the invention provides a cartoon making system 400 applied to a terminal device. The system 400 includes: a material acquisition module 410, an image conversion module 420, a content matching module 430 and a picture synthesis module 440.
The material obtaining module 410 is configured to obtain one or more first pictures;
as an optional embodiment, the material obtaining module 410 is specifically configured to:
receiving search information input by a user; searching a preset material library for one or more first pictures related to the search information; displaying the one or more first pictures to the user; and receiving a picture-upload confirmation instruction from the user to acquire the one or more first pictures.
In a specific implementation, the terminal is provided with a pre-established visualization tool, which may be a graphical configuration tool based on an open-source browser engine (e.g., WebKit).
Using the visualization tool, the user can open an operation interface on the user terminal, which provides a search box. When the user has a basic idea for the comic content, search information can be entered in the search box; the search information is a keyword extracted from the existing idea. The keyword can be key content of the idea, or the category to which the idea belongs. For example, when the user's comic script concerns a bantering conversation between a teacher and a student during a physical-education class, the search information "teacher and student" is entered in the search box.
Before the material obtaining module 410 searches, the correspondence between keywords and related picture materials is already stored in the material library, so the module can find one or more pieces of picture-material information from the current search information and the correspondence. The material obtaining module 410 then displays the one or more pieces of picture information to the user through a browser page for selection; after the user selects and confirms on the browser page, the material obtaining module 410 obtains one or more picture materials, which serve as first pictures. A first picture is a picture containing a person. Preferably, the first picture may be temporarily stored in a template library.
The material library contains various kinds of original content, such as pictures, audio and video. The user can search for and obtain materials directly from the library, which reduces the user's costs of shooting, content searching and production; on the other hand, combined with the library's massive multimedia content, it unlocks secondary value through derivative, playful creation.
As another alternative embodiment, the material obtaining module 410 is specifically configured to:
after receiving an upload instruction from a user, displaying an upload page to the user, the upload page being used to select and confirm a picture to upload; and receiving a picture-upload confirmation instruction from the user to acquire the one or more first pictures.
In a specific implementation, after the user clicks an upload button on the page, the material obtaining module 410 presents an upload page, and the user selects a picture meeting the requirements from locally stored pictures or generates a picture by taking a photo on the spot; after the selection is finished, the user clicks the upload confirmation button and the material obtaining module 410 acquires the first picture. The material obtaining module 410 is further configured to use the photographing function of the terminal device directly (e.g., by controlling the camera to take a photo) to obtain a first picture.
The image conversion module 420 is configured to convert the character image in the first picture into a cartoon image, so as to obtain a second picture;
In a specific implementation, after the material obtaining module 410 obtains a picture material, the image conversion module 420 calls a cartoon scene template to automatically edit the first picture. The cartoon scene template recognizes the first picture through Artificial Intelligence (AI) and automatically converts it into a cartoon image. In this embodiment, the cartoon scene template is an editor. The conversion process is not described in detail here; refer to prior-art methods of converting a character image into a cartoon image by AI techniques.
As an alternative embodiment, the image conversion module 420 is further specifically configured to:
perform secondary editing on the second picture, where the secondary editing includes canvas editing and cartoon-image editing. Canvas editing includes adjusting the canvas size, canvas colour, and the like. Cartoon-image editing includes further modifications on top of the cartoon image, such as adding glasses or a hair band.
The content matching module 430 is configured to match the cartoon image in the second picture with the dialog content to obtain a third picture;
as an optional embodiment, the content matching module 430 is specifically configured to:
after an input event of a user is detected, displaying an input box control in a corresponding area; and receiving the dialog content input by the user, displaying the dialog content in the input box control, and obtaining a third picture.
In a specific implementation, after the image conversion module 420 converts the character image into the cartoon image, the user can choose to match the dialogue content manually or have the system match it automatically. If the user selects manual matching, the user clicks a position in the second picture, the content matching module 430 displays an input box at that position, and the user edits the dialogue content manually; after editing is completed and a click is made anywhere outside the input box, the content matching module 430 finishes editing at the current position and displays the dialogue. The border of the input box may be hidden.
The dialogue content includes Chinese characters, symbols, emoticons and the like. The position of the dialogue content is not restricted; it is matched according to the actual scene.
Further, if a sentence in the dialogue is an inner monologue, the content matching module 430 can display it as a "black-screen cut-in": for example, a picture containing only the inner monologue is inserted between the current picture and the next one.
As another alternative embodiment, the content matching module 430 is further configured to:
acquiring all the dialogue content to be matched from the user; automatically splitting all the dialogue content into clauses according to semantics, and intelligently matching each clause with a second picture to obtain third pictures.
In a specific implementation, if the user selects automatic matching, the content matching module 430 displays an input box in which the user enters the complete dialogue content; after the input is finished and a match button is clicked, the content matching module 430 automatically splits the dialogue into clauses according to semantics and matches each clause to the corresponding second picture. For example, if a clause of the dialogue relates to walking, the image conversion module 420 identifies the picture showing walking by AI techniques, and the content matching module 430 matches that picture with the dialogue.
The picture synthesizing module 440 is configured to synthesize one or more third pictures in sequence to generate a cartoon.
As an optional embodiment, the picture synthesis module 440 is specifically configured to:
stacking one or more third pictures in sequence to obtain pictures with multiple layers; and switching the pictures of the multiple layers at a fixed frequency to generate a dynamic cartoon picture.
In a specific implementation, the user sorts the plurality of third pictures obtained by the content matching module 430; of course, sorting of the pictures is not limited to the synthesis stage and may also be performed during the above obtaining, conversion and matching processes.
After sorting, the user selects a comic type for synthesis on the operation interface, the comic types including dynamic pictures and static pictures. When the user selects the dynamic-picture type, the picture synthesis module 440 stacks the sorted third pictures, with the first picture on top and the last at the bottom; the stacked pictures are then switched at a fixed frequency to present a dynamic comic. The data format of the dynamic comic can be JPG, PNG, GIF, UMD, or the like.
Further, the system 400 includes: the cartoon storage module 450 is configured to store the cartoon in a production library.
Further, the system 400 further comprises: and the cartoon contribution module 460 is configured to receive a contribution instruction of the user and send the cartoon to one or more cartoon platforms. Specifically, the user clicks a contribution button, and the caricature contribution module 460 automatically sends the generated caricature to the corresponding platform through the contribution link.
As another optional embodiment, the picture synthesis module 440 is specifically configured to:
placing one or more of the third pictures in sequence; and, in response to a synthesis instruction from the user, statically synthesizing the one or more third pictures to generate a long-strip cartoon. Of course, the long-strip cartoon is not limited to vertical display; it can also be displayed horizontally.
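Static synthesis into a long strip amounts to computing one tall (or wide) canvas and pasting each picture at a running offset. A minimal layout sketch, assuming each third picture is described only by its pixel size; the `long_strip_layout` helper is a hypothetical name, not taken from the patent.

```python
from typing import List, Tuple

def long_strip_layout(
    sizes: List[Tuple[int, int]], vertical: bool = True
) -> Tuple[Tuple[int, int], List[Tuple[int, int]]]:
    """Given (width, height) per picture, return the canvas size and the
    top-left paste offset of each picture for a long-strip cartoon.
    vertical=True stacks pictures top-to-bottom; False lays them
    left-to-right (the horizontal display mentioned in the text)."""
    offsets = []
    cursor = 0
    for w, h in sizes:
        offsets.append((0, cursor) if vertical else (cursor, 0))
        cursor += h if vertical else w
    if vertical:
        canvas = (max((w for w, _ in sizes), default=0), cursor)
    else:
        canvas = (cursor, max((h for _, h in sizes), default=0))
    return canvas, offsets
```

An image library would then create a blank canvas of the returned size and paste each picture at its offset to produce the final long-strip image.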
Finally, the cartoon making system 400 provided by the embodiment of the invention automatically converts the acquired picture materials into cartoon images, so that a user needs neither an art background nor drawing tools; this lowers the creation threshold, reduces production cost, and lets the user make cartoons simply and conveniently. In addition, because the user independently matches the dialogue to the cartoon pictures, the quality of the original content can be guaranteed.
Example 3
The disclosed embodiments provide a non-volatile computer storage medium storing computer-executable instructions that, when executed, perform the cartoon making method in any of the above method embodiments.
Example 4
The embodiment provides an electronic device, which is used for a cartoon making method, and the electronic device includes: at least one processor; and a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to cause the at least one processor to:
acquiring one or more first pictures;
automatically converting the character image in the first picture into a cartoon image to obtain a second picture;
matching the cartoon image in the second picture with the dialogue content to obtain a third picture;
and synthesizing one or more third pictures in sequence to generate the cartoon.
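The four steps executed by the processor form a linear pipeline. The toy end-to-end sketch below makes that ordering concrete; every function is a deliberately simplified stand-in (real cartoon conversion would use an AI style-transfer model, not string tagging), and all names are hypothetical.

```python
from typing import List

def acquire_first_pictures(query: str) -> List[str]:
    """Stand-in for material acquisition: return picture identifiers."""
    return [f"{query}_pic{i}" for i in range(1, 3)]

def to_cartoon(first: str) -> str:
    """Stand-in for the automatic cartoon conversion of a character image."""
    return f"cartoon({first})"

def attach_dialogue(second: str, line: str) -> str:
    """Stand-in for matching dialogue content to a cartoon picture."""
    return f"{second}+say[{line}]"

def synthesize(thirds: List[str]) -> List[str]:
    """Stand-in for in-order synthesis; a real system renders frames."""
    return list(thirds)  # order of the third pictures is preserved

def make_cartoon(query: str, lines: List[str]) -> List[str]:
    """Run the four steps: acquire -> convert -> match -> synthesize."""
    seconds = [to_cartoon(p) for p in acquire_first_pictures(query)]
    thirds = [attach_dialogue(s, l) for s, l in zip(seconds, lines)]
    return synthesize(thirds)
```

The point of the sketch is only the data flow: each first picture becomes a second picture, each second picture plus a dialogue line becomes a third picture, and the ordered third pictures become the cartoon.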
Example 5
Referring now to FIG. 5, shown is a schematic diagram of an electronic device suitable for use in implementing embodiments of the present disclosure. The terminal device in the embodiments of the present disclosure may include, but is not limited to, a mobile terminal such as a mobile phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player), or a vehicle terminal (e.g., a car navigation terminal), and a stationary terminal such as a digital TV or a desktop computer. The electronic device shown in fig. 5 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
As shown in fig. 5, the electronic device may include a processing means (e.g., a central processing unit, a graphics processor, etc.) 501 that may perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 502 or a program loaded from a storage means 508 into a Random Access Memory (RAM) 503. The RAM 503 also stores various programs and data necessary for the operation of the electronic device. The processing device 501, the ROM 502, and the RAM 503 are connected to each other through a bus 504. An input/output (I/O) interface 505 is also connected to the bus 504.
Generally, the following devices may be connected to the I/O interface 505: input devices 506 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 507 including, for example, a Liquid Crystal Display (LCD), speakers, vibrators, and the like; storage devices 508 including, for example, magnetic tape, hard disk, etc.; and a communication device 509. The communication means 509 may allow the electronic device to communicate with other devices wirelessly or by wire to exchange data. While fig. 5 illustrates an electronic device having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication means 509, or installed from the storage means 508, or installed from the ROM 502. The computer program performs the above-described functions defined in the methods of the embodiments of the present disclosure when executed by the processing device 501.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
Computer program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. In some cases, the name of a unit does not constitute a limitation on the unit itself.
Claims (10)
1. A cartoon making method is characterized by comprising the following steps:
acquiring one or more first pictures;
automatically converting the character image in the first picture into a cartoon image to obtain a second picture;
matching the cartoon image in the second picture with the dialogue content to obtain a third picture;
and synthesizing one or more third pictures in sequence to generate the cartoon.
2. The method of claim 1, wherein the obtaining one or more first pictures comprises:
receiving search information input by a user;
searching one or more first pictures related to the search information in a preset material library;
displaying the one or more first pictures to a user;
and receiving a picture confirmation and upload instruction from the user, and acquiring the one or more first pictures.
3. The method of claim 1, wherein after converting the character image in the first picture into a cartoon image to obtain a second picture, the method further comprises:
and carrying out secondary editing on the second picture, wherein the secondary editing comprises canvas editing and cartoon image editing.
4. The method of claim 1, wherein matching the cartoon character in the second picture with the dialog content to obtain a third picture comprises:
after an input event of a user is detected, displaying an input box control in a corresponding area in the second picture;
and receiving the dialog content input by the user, displaying the dialog content in the input box control, and obtaining a third picture.
5. The method of claim 1, wherein matching the cartoon character in the second picture with the dialog content to obtain a third picture comprises:
acquiring all conversation contents to be matched by a user;
and automatically performing clause processing on all the dialogue contents according to semantics, and intelligently matching each clause with the second picture to obtain a third picture.
6. The method according to claim 1, wherein said synthesizing one or more of the third pictures in sequence to generate a cartoon comprises:
stacking one or more third pictures in sequence to obtain pictures with multiple layers;
and switching the pictures of the multiple layers at a fixed frequency to generate a dynamic cartoon picture.
7. The method of claim 1, further comprising:
receiving a contribution instruction of a user, and sending the cartoon to one or more cartoon platforms.
8. A cartoon making system, comprising:
the material acquisition module is used for acquiring one or more first pictures;
the image conversion module is used for converting the character image in the first picture into a cartoon image to obtain a second picture;
the content matching module is used for matching the cartoon image in the second picture with the dialogue content to obtain a third picture;
and the picture synthesis module is used for synthesizing one or more third pictures in sequence to generate the cartoon.
9. A computer-readable storage medium, on which a computer program is stored, which program, when being executed by a processor, carries out the method according to any one of claims 1 to 7.
10. An electronic device, comprising:
one or more processors;
storage means for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to carry out the method of any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910967196.4A CN110930475A (en) | 2019-10-12 | 2019-10-12 | Cartoon making method, system, medium and electronic equipment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110930475A true CN110930475A (en) | 2020-03-27 |
Family
ID=69848862
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910967196.4A Pending CN110930475A (en) | 2019-10-12 | 2019-10-12 | Cartoon making method, system, medium and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110930475A (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102110304A (en) * | 2011-03-29 | 2011-06-29 | 华南理工大学 | Material-engine-based automatic cartoon generating method |
CN102184200A (en) * | 2010-12-13 | 2011-09-14 | 中国人民解放军国防科学技术大学 | Computer-assisted animation image-text continuity semi-automatic generating method |
CN103186908A (en) * | 2011-12-29 | 2013-07-03 | 方正国际软件(北京)有限公司 | Terminal, server and interactive type processing method based on caricature |
US20130188887A1 (en) * | 2012-01-20 | 2013-07-25 | Elwha Llc | Autogenerating video from text |
CN105574912A (en) * | 2015-12-15 | 2016-05-11 | 南京偶酷软件有限公司 | Method for converting natural languages into animation continuity data |
CN109215099A (en) * | 2018-08-27 | 2019-01-15 | 北京奇虎科技有限公司 | A kind of method and device making caricature |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | Application publication date: 20200327 ||