
CN105787982A - Method and device for manufacturing e-book - Google Patents

Method and device for manufacturing e-book

Info

Publication number
CN105787982A
CN105787982A (application CN201610112549.9A)
Authority
CN
China
Prior art keywords
image
person image
e-book
target person
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610112549.9A
Other languages
Chinese (zh)
Other versions
CN105787982B (en)
Inventor
田亮
陈凡
厍寅斌
陈开�
张锦
王新柱
孙淑芳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Beijing Co Ltd
Original Assignee
Tencent Technology Beijing Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Beijing Co Ltd filed Critical Tencent Technology Beijing Co Ltd
Priority to CN201610112549.9A priority Critical patent/CN105787982B/en
Publication of CN105787982A publication Critical patent/CN105787982A/en
Application granted granted Critical
Publication of CN105787982B publication Critical patent/CN105787982B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00 2D [Two Dimensional] image generation
    • G06T 11/60 Editing figures and text; Combining figures or text
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 Geometric image transformations in the plane of the image
    • G06T 3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T 3/4038 Image mosaicing, e.g. composing plane images from plane sub-images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/161 Detection; Localisation; Normalisation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a method and device for producing an e-book, and belongs to the field of Internet technology. The method comprises the steps of: obtaining a first person image of a target person, and obtaining a first avatar template image selected by a user from pre-stored avatar template images of the e-book; performing facial image recognition on the first person image, determining a facial region in the first person image, and obtaining the facial image contained in the facial region of the first person image; adding the facial image to a preset image-insertion area of the first avatar template image to obtain a combined avatar; and placing the combined avatar at a preset position on a target page of the e-book. The invention enhances the flexibility of e-book production.

Description

Method and apparatus for producing an e-book
Technical field
The present invention relates to the field of Internet technology, and in particular to a method and apparatus for producing an e-book.
Background art
With the development of Internet technology, various terminals are widely used and their functions are increasingly rich. People can entertain themselves through a terminal; for example, a user can produce an e-book (such as an electronic photo album) on a terminal and freely add content such as images and text to it.
Generally, the user can select one or more person images to be placed at specified locations in the e-book. With this processing approach, an image displayed in the e-book is simply shown as it is, so the presentation is rather monotonous and the flexibility of e-book production is therefore poor.
Summary of the invention
In order to solve the problems in the prior art, the embodiments of the present invention provide a method and apparatus for producing an e-book. The technical solution is as follows:
In a first aspect, a method for producing an e-book is provided, the method comprising:
obtaining a first person image of a target person, and obtaining a first avatar template image selected by a user from pre-stored avatar template images of the e-book;
performing facial image recognition on the first person image, determining a facial region in the first person image, and obtaining the facial image contained in the facial region of the first person image;
adding the facial image to a preset image-insertion area of the first avatar template image to obtain a combined avatar;
placing the combined avatar at a preset position on a target page of the e-book.
Optionally, performing facial image recognition on the first person image, determining the facial region in the first person image, and obtaining the facial image contained in the facial region of the first person image includes:
performing facial image recognition on the first person image to obtain a predetermined number of edge position points that locate the facial region in the first person image;
obtaining, according to the predetermined number of edge position points and the first person image, the facial image contained in the facial region of the first person image.
In this way, the facial image can be obtained from the first person image automatically through the edge position points, which improves the efficiency of obtaining the facial image.
Optionally, the method further includes:
obtaining at least one person image of the target person, and obtaining the shooting time of each person image in the at least one person image;
determining, according to the pre-stored date of birth of the target person and the shooting time of each person image, the age of the target person in each person image;
determining, according to a pre-stored correspondence between ages and text information and the age of the target person in each person image, the text information corresponding to each person image;
placing each person image and the text information corresponding to it on the target page.
In this way, the corresponding text information can be matched to each person image automatically, which improves the efficiency of matching text information.
Optionally, the method further includes:
obtaining at least one person image of the target person, and obtaining the shooting time of each person image in the at least one person image;
determining, according to a pre-stored correspondence between shooting times and text information and the shooting time of each person image, the text information corresponding to each person image;
placing each person image and the text information corresponding to it on the target page.
In this way, the corresponding text information can be matched to each person image automatically, which improves the efficiency of matching text information.
Optionally, obtaining the shooting time of each person image in the at least one person image includes:
obtaining the exchangeable image file (EXIF) information corresponding to each person image in the at least one person image;
obtaining the shooting time of each person image from the EXIF information corresponding to it.
In a second aspect, an apparatus for producing an e-book is provided, the apparatus comprising:
a first obtaining module, configured to obtain a first person image of a target person and to obtain a first avatar template image selected by a user from pre-stored avatar template images of the e-book;
a first determining module, configured to perform facial image recognition on the first person image, determine a facial region in the first person image, and obtain the facial image contained in the facial region of the first person image;
an adding module, configured to add the facial image to a preset image-insertion area of the first avatar template image to obtain a combined avatar;
a first placing module, configured to place the combined avatar at a preset position on a target page of the e-book.
Optionally, the first determining module includes:
a recognition submodule, configured to perform facial image recognition on the first person image to obtain a predetermined number of edge position points that locate the facial region in the first person image;
a first obtaining submodule, configured to obtain, according to the predetermined number of edge position points and the first person image, the facial image contained in the facial region of the first person image.
Optionally, the apparatus further includes:
a second obtaining module, configured to obtain at least one person image of the target person and to obtain the shooting time of each person image in the at least one person image;
a second determining module, configured to determine, according to the pre-stored date of birth of the target person and the shooting time of each person image, the age of the target person in each person image;
a third determining module, configured to determine, according to a pre-stored correspondence between ages and text information and the age of the target person in each person image, the text information corresponding to each person image;
a second placing module, configured to place each person image and the text information corresponding to it on the target page.
Optionally, the apparatus further includes:
a second obtaining module, configured to obtain at least one person image of the target person and to obtain the shooting time of each person image in the at least one person image;
a second determining module, configured to determine, according to a pre-stored correspondence between shooting times and text information and the shooting time of each person image, the text information corresponding to each person image;
a second placing module, configured to place each person image and the text information corresponding to it on the target page.
Optionally, the second obtaining module includes:
a second obtaining submodule, configured to obtain the exchangeable image file (EXIF) information corresponding to each person image in the at least one person image;
a third obtaining submodule, configured to obtain the shooting time of each person image from the EXIF information corresponding to it.
The technical solutions provided by the embodiments of the present invention bring the following beneficial effects:
In the embodiments of the present invention, a first person image of a target person is obtained, and a first avatar template image selected by the user is obtained from pre-stored avatar template images of the e-book; facial image recognition is performed on the first person image, the facial region in the first person image is determined, and the facial image contained in that facial region is obtained; the facial image is added to a preset image-insertion area of the first avatar template image to obtain a combined avatar; and the combined avatar is placed at a preset position on a target page of the e-book. In this way, the terminal can automatically restyle the person image and change its original content, so that the image displayed in the e-book is the automatically restyled version of the person image chosen by the user, which enhances the flexibility of e-book production.
Brief description of the drawings
To describe the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings required for describing the embodiments are briefly introduced below. Apparently, the accompanying drawings in the following description show only some embodiments of the present invention, and a person of ordinary skill in the art may derive other drawings from these drawings without creative effort.
Fig. 1 is a flowchart of a method for producing an e-book according to an embodiment of the present invention;
Fig. 2 is a schematic diagram of a combined avatar according to an embodiment of the present invention;
Fig. 3 is a schematic diagram of an interface according to an embodiment of the present invention;
Fig. 4 is a schematic diagram of an e-book according to an embodiment of the present invention;
Fig. 5 is a schematic structural diagram of an apparatus for producing an e-book according to an embodiment of the present invention;
Fig. 6 is a schematic structural diagram of an apparatus for producing an e-book according to an embodiment of the present invention;
Fig. 7 is a schematic structural diagram of an apparatus for producing an e-book according to an embodiment of the present invention;
Fig. 8 is a schematic structural diagram of an apparatus for producing an e-book according to an embodiment of the present invention;
Fig. 9 is a schematic structural diagram of an apparatus for producing an e-book according to an embodiment of the present invention;
Figure 10 is a schematic structural diagram of a terminal according to an embodiment of the present invention.
Detailed description of the embodiments
To make the objectives, technical solutions and advantages of the present invention clearer, the embodiments of the present invention are described in further detail below with reference to the accompanying drawings.
An embodiment of the present invention provides a method for producing an e-book, and the method is executed by a terminal. The terminal may be a mobile terminal such as a mobile phone or a tablet computer, or may be a PC (Personal Computer). The terminal may be provided with a processor and a memory: the processor may be used to perform facial image recognition on the first person image of the target person and to handle the related processing of adding the recognized facial image to the avatar template image, and the memory may be used to store the data needed and produced in the following processing. A transceiver may also be provided to receive and send data.
The processing flow shown in Fig. 1 is described in detail below with reference to specific embodiments; the content may be as follows:
Step 101: obtain a first person image of the target person, and obtain a first avatar template image selected by the user from the pre-stored avatar template images of the e-book.
The first person image may be a frontal image of the target person alone.
In implementation, the terminal may be installed with an application that has an e-book production function. Through this application the user can turn images (for example photos) of himself or of a family member (that is, the target person) into an e-book; in other words, the terminal can modify the content of the target person's image (that is, the first person image), for example by combining the first person image of the target person with a cartoon image to produce the e-book. Alternatively, the user may produce the e-book through a website that has an e-book production function. Specifically, when producing the e-book, the user may first select an avatar template image (the first avatar template image) from the pre-stored avatar template images of the e-book and choose the first person image of the target person, and the terminal then obtains the first person image of the target person and the first avatar template image selected by the user. The case in which the user produces the e-book through a website is described in detail below. After the terminal obtains the first person image and the first avatar template image, it may send them to a server, where the server may be the background server of the above-mentioned website.
Step 102: perform facial image recognition on the first person image, determine the facial region in the first person image, and obtain the facial image contained in the facial region of the first person image.
In implementation, the server may receive the first person image sent by the terminal, perform face recognition (that is, facial image recognition) on it, and determine the facial region in the first person image; the facial image contained in the facial region can then be obtained from the first person image.
Optionally, the facial image may be obtained from the first person image through the edge position points that locate the facial region. Accordingly, the processing of step 102 may be as follows: perform facial image recognition on the first person image to obtain a predetermined number of edge position points that locate the facial region in the first person image; obtain, according to the predetermined number of edge position points and the first person image, the facial image contained in the facial region of the first person image.
The edge position points may be points around the facial region in the first person image and may be pixel positions.
In implementation, the number of edge position points used to locate the facial region in the first person image may be set in advance. After the server receives the first person image sent by the terminal, it may perform facial image recognition on the first person image to obtain the predetermined number of edge position points that locate the facial region; for example, 21 edge position points may be obtained. After the server obtains the predetermined number of edge position points, it may send the storage location (for example a network address) of the first person image and the obtained edge position points to the terminal. The terminal receives the storage location and the edge position points sent by the server (the first person image may be stored at that storage location on the server), and obtains the facial image contained in the facial region according to the locally stored first person image and the edge position points sent by the server, that is, it cuts the facial image contained in the facial region out of the first person image according to the obtained edge position points.
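For readers who want to experiment with this step, the following is a minimal sketch of cropping the facial region from a person image. It is an illustration only: OpenCV's bundled Haar cascade face detector stands in for the predetermined number of edge position points (for example 21) described in the embodiment, and the file paths are hypothetical.

```python
import cv2

def crop_face(person_image_path: str, output_path: str) -> None:
    """Detect the facial region in a person image and cut it out."""
    image = cv2.imread(person_image_path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

    # Frontal-face cascade shipped with the opencv-python package.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        raise ValueError("no facial region found in the person image")

    # Take the largest detected region as the target person's face.
    x, y, w, h = max(faces, key=lambda box: box[2] * box[3])
    cv2.imwrite(output_path, image[y:y + h, x:x + w])
```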
Step 103: add the facial image to the preset image-insertion area of the first avatar template image to obtain a combined avatar.
In implementation, after the terminal obtains the facial image, it may add the obtained facial image to the preset image-insertion area of the first avatar template image obtained in advance, as shown in Fig. 2, to obtain a combined avatar composed of the facial image and the first avatar template image, where the first avatar template image may be a cartoon image with an image-insertion area. Specifically, the terminal may obtain the maximum length or maximum width of the image-insertion area of the first avatar template image and first scale the obtained facial image so that the size of the scaled facial image roughly matches the size of the preset image-insertion area, and then add the scaled facial image to the preset image-insertion area of the first avatar template image. At this point the user may manually fine-tune the position of the scaled facial image to obtain the combined avatar.
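To make the compositing concrete, here is a minimal Pillow sketch. It assumes the avatar template ships with the coordinates of its preset image-insertion area; the parameter names and the simple alpha-mask paste are illustrative choices, not details taken from the patent.

```python
from PIL import Image

def combine_avatar(face_path: str, template_path: str,
                   insert_box: tuple[int, int, int, int]) -> Image.Image:
    """Scale a cropped facial image into the template's preset insertion area."""
    face = Image.open(face_path).convert("RGBA")
    template = Image.open(template_path).convert("RGBA")

    left, top, right, bottom = insert_box
    # Scale the face so its size roughly matches the insertion area.
    face = face.resize((right - left, bottom - top))

    combined = template.copy()
    # Paste the scaled face into the insertion area, using its alpha channel
    # as the mask so transparent corners do not overwrite the template artwork.
    combined.paste(face, (left, top), face)
    return combined
```

In the embodiment above the user can still fine-tune the face position by hand; in this sketch the paste position is simply the top-left corner of the insertion area.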
Optionally, the terminal may also match text information to the other person images according to a correspondence between ages and text information. The corresponding processing may be as follows: obtain at least one person image of the target person, and obtain the shooting time of each person image in the at least one person image; determine, according to the pre-stored date of birth of the target person and the shooting time of each person image, the age of the target person in each person image; determine, according to a pre-stored correspondence between ages and text information and the age of the target person in each person image, the text information corresponding to each person image; and place each person image and the text information corresponding to it on the target page.
In implementation, when the user produces an e-book through the terminal, he may first input personal information of the target person, where the personal information may be the name, gender, hobbies, birthday (that is, date of birth) and so on, so that the e-book can be combined with this personal information during production. The terminal then sends the personal information entered by the user to the server, and the server may check the completeness and validity of the personal information; when the personal information is complete and valid, the user can go on to select the first person image described above. After the combined avatar is obtained, the terminal may display a page for uploading further images, as shown in Fig. 3, and the user may continue to upload at least one person image according to the page prompt (this person image may be the first person image, or another person image that contains the target person). For example, the page may include an Add Picture button; when the user wants to upload more images, he can click this button and select at least one person image of the target person from local storage. This triggers the terminal to obtain the at least one person image of the target person and send it to the server.
The server receives the at least one person image sent by the terminal and can obtain the shooting time of each person image from the received images. It can then determine, according to the date of birth of the target person entered by the user and the shooting time of each person image, the age of the target person in each person image, that is, the age of the target person when each person image was taken. A correspondence between ages and text information may be stored in advance; after the age of the target person in each person image has been determined, the text information corresponding to each obtained age can be looked up in this correspondence, and the text information corresponding to each person image is thereby determined. For example, if the age of the target person in one of the person images is 3, the matched text information may be "the baby can go to kindergarten". After the server determines the text information corresponding to each person image, it may send the storage location of each person image and the corresponding text information to the terminal, and the terminal may then place the personal information entered by the user, each person image and its corresponding text information on the target page of the e-book, as shown in Fig. 4. The e-book may be preset with several positions for placing person images, and after the terminal obtains each person image it may place it at one of the preset positions at random. The user can save the produced e-book, which triggers the terminal to send the target person's e-book to the server; when uploading the e-book, the terminal may send the storage location corresponding to each person image to the server. The server receives the e-book of the target person sent by the terminal, obtains the person image stored at each storage location according to the locations sent by the terminal, thereby obtains an e-book containing each person image, and saves it. When the user wants to preview the generated e-book, the server can send the saved e-book to the terminal so that the user can preview it.
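The age-to-text matching can be sketched as follows. The caption table is a made-up example of the pre-stored correspondence between ages and text information, not content from the patent, and the fallback to the nearest age is an assumption.

```python
from datetime import date

# Assumed correspondence between ages and text information.
AGE_CAPTIONS = {
    0: "Welcome to the world!",
    1: "First birthday.",
    3: "The baby can go to kindergarten.",
    6: "Off to primary school.",
}

def age_at(shooting_date: date, date_of_birth: date) -> int:
    """Age of the target person when the image was taken, in whole years."""
    years = shooting_date.year - date_of_birth.year
    had_birthday = (shooting_date.month, shooting_date.day) >= (
        date_of_birth.month, date_of_birth.day)
    return years if had_birthday else years - 1

def caption_for(shooting_date: date, date_of_birth: date) -> str:
    """Look up the text information matching the age in the image."""
    age = age_at(shooting_date, date_of_birth)
    if age in AGE_CAPTIONS:
        return AGE_CAPTIONS[age]
    # Fall back to the nearest age that has a caption.
    nearest = min(AGE_CAPTIONS, key=lambda a: abs(a - age))
    return AGE_CAPTIONS[nearest]
```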
Optionally, the terminal may also match text information to the other person images according to a correspondence between shooting times and text information. The corresponding processing may be as follows: obtain at least one person image of the target person, and obtain the shooting time of each person image in the at least one person image; determine, according to a pre-stored correspondence between shooting times and text information and the shooting time of each person image, the text information corresponding to each person image; and place each person image and the text information corresponding to it on the target page.
In implementation, the flow is the same as in the previous alternative: the user first inputs the personal information of the target person, which the terminal sends to the server for a completeness and validity check; after the combined avatar is obtained, the terminal displays the upload page shown in Fig. 3, the user uploads at least one person image of the target person (which may be the first person image or another person image containing the target person), and the server obtains the shooting time of each received person image. A correspondence between shooting times and text information may be stored in advance; after the shooting time of each person image has been obtained, the text information corresponding to each person image is determined from this correspondence. The server then sends the storage location of each person image and the corresponding text information to the terminal, and the terminal places the personal information entered by the user, each person image and its corresponding text information on the target page of the e-book, as shown in Fig. 4; the person images may again be placed at the preset positions at random. Saving, uploading and previewing the e-book proceed exactly as described above.
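For the shooting-time variant, here is a sketch under the assumption that the pre-stored correspondence maps period start dates to text information; the periods and captions are purely illustrative.

```python
from bisect import bisect_right
from datetime import date

# Assumed correspondence between shooting-time periods and text information,
# keyed by the first day of each period and kept in chronological order.
PERIOD_CAPTIONS = [
    (date(2015, 1, 1), "A year of firsts."),
    (date(2015, 6, 1), "Summer adventures."),
    (date(2016, 1, 1), "Growing up fast."),
]

def caption_for_shooting_time(shot_on: date) -> str | None:
    """Pick the text information for the period the shooting time falls in."""
    starts = [start for start, _ in PERIOD_CAPTIONS]
    index = bisect_right(starts, shot_on) - 1
    return PERIOD_CAPTIONS[index][1] if index >= 0 else None
```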
Optionally, the shooting time of each person image may be obtained through its EXIF information. Accordingly, the processing may be as follows: obtain the exchangeable image file (EXIF) information corresponding to each person image in the at least one person image; obtain the shooting time of each person image from the EXIF information corresponding to it.
In implementation, after the at least one person image is obtained, the exchangeable image file (EXIF) information corresponding to each person image in the at least one person image can be obtained, and the shooting time of each person image can then be read from the EXIF information corresponding to it.
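Reading the shooting time from EXIF can be sketched with Pillow as follows. The tag numbers are the standard EXIF DateTimeOriginal and DateTime tags; the fallback behaviour and the function name are assumptions made for the sketch.

```python
from datetime import datetime
from PIL import Image

def shooting_time(image_path: str) -> datetime | None:
    """Read the shooting time of a person image from its EXIF information."""
    exif = Image.open(image_path).getexif()
    if not exif:
        return None
    # DateTimeOriginal (tag 36867) lives in the Exif sub-IFD (pointer 0x8769);
    # DateTime (tag 306) sits in the base IFD and serves as a fallback.
    raw = exif.get_ifd(0x8769).get(36867) or exif.get(306)
    if raw is None:
        return None
    # EXIF timestamps look like "2016:02:29 10:30:00".
    return datetime.strptime(raw, "%Y:%m:%d %H:%M:%S")
```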
In addition, the user may save the produced album of the target person or share it online, and may also place an order to have it printed. In that case the terminal receives the order-and-print instruction and sends it to the server; after the staff receive the order, the album of the target person is printed and sent to the user according to a pre-stored address.
Step 104: place the combined avatar at the preset position of the target page in the e-book.
In implementation, a position for placing the combined avatar may be preset in the e-book, for example a certain position (that is, the preset position) on one of the pages (that is, the target page). After the combined avatar is obtained, the terminal may place it at the preset position of the target page in the e-book.
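Placing the combined avatar on the target page can be sketched in the same way; the page image, the preset position coordinates and the output path are assumed inputs for illustration.

```python
from PIL import Image

def place_on_page(page_path: str, avatar: Image.Image,
                  preset_position: tuple[int, int], out_path: str) -> None:
    """Place the combined avatar at the preset position of the target page."""
    page = Image.open(page_path).convert("RGBA")
    # Use the avatar's alpha channel as the paste mask so its transparent
    # background does not cover the page artwork.
    page.paste(avatar, preset_position, avatar)
    page.convert("RGB").save(out_path)
```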
In the embodiments of the present invention, a first person image of a target person is obtained, and a first avatar template image selected by the user is obtained from pre-stored avatar template images of the e-book; facial image recognition is performed on the first person image, the facial region in the first person image is determined, and the facial image contained in that facial region is obtained; the facial image is added to a preset image-insertion area of the first avatar template image to obtain a combined avatar; and the combined avatar is placed at a preset position on a target page of the e-book. In this way, the terminal can automatically restyle the person image and change its original content, so that the image displayed in the e-book is the automatically restyled version of the person image chosen by the user, which enhances the flexibility of e-book production.
Based on the same technical concept, an embodiment of the present invention further provides an apparatus for producing an e-book. As shown in Fig. 5, the apparatus includes:
a first obtaining module 510, configured to obtain a first person image of a target person and to obtain a first avatar template image selected by a user from pre-stored avatar template images of the e-book;
a first determining module 520, configured to perform facial image recognition on the first person image, determine a facial region in the first person image, and obtain the facial image contained in the facial region of the first person image;
an adding module 530, configured to add the facial image to a preset image-insertion area of the first avatar template image to obtain a combined avatar;
a first placing module 540, configured to place the combined avatar at a preset position on a target page of the e-book.
Optionally, as shown in Fig. 6, the first determining module 520 includes:
a recognition submodule 5201, configured to perform facial image recognition on the first person image to obtain a predetermined number of edge position points that locate the facial region in the first person image;
a first obtaining submodule 5202, configured to obtain, according to the predetermined number of edge position points and the first person image, the facial image contained in the facial region of the first person image.
Optionally, as shown in Fig. 7, the apparatus further includes:
a second obtaining module 550, configured to obtain at least one person image of the target person and to obtain the shooting time of each person image in the at least one person image;
a second determining module 560, configured to determine, according to the pre-stored date of birth of the target person and the shooting time of each person image, the age of the target person in each person image;
a third determining module 570, configured to determine, according to a pre-stored correspondence between ages and text information and the age of the target person in each person image, the text information corresponding to each person image;
a second placing module 580, configured to place each person image and the text information corresponding to it on the target page.
Optionally, as shown in Fig. 8, the apparatus further includes:
a second obtaining module 550, configured to obtain at least one person image of the target person and to obtain the shooting time of each person image in the at least one person image;
a second determining module 560, configured to determine, according to a pre-stored correspondence between shooting times and text information and the shooting time of each person image, the text information corresponding to each person image;
a second placing module 580, configured to place each person image and the text information corresponding to it on the target page.
Optionally, as shown in Fig. 9, the second obtaining module 550 includes:
a second obtaining submodule 5501, configured to obtain the exchangeable image file (EXIF) information corresponding to each person image in the at least one person image;
a third obtaining submodule 5502, configured to obtain the shooting time of each person image from the EXIF information corresponding to it.
In the embodiments of the present invention, a first person image of a target person is obtained, and a first avatar template image selected by the user is obtained from pre-stored avatar template images of the e-book; facial image recognition is performed on the first person image, the facial region in the first person image is determined, and the facial image contained in that facial region is obtained; the facial image is added to a preset image-insertion area of the first avatar template image to obtain a combined avatar; and the combined avatar is placed at a preset position on a target page of the e-book. In this way, the terminal can automatically restyle the person image and change its original content, so that the image displayed in the e-book is the automatically restyled version of the person image chosen by the user, which enhances the flexibility of e-book production.
It should be noted that when the apparatus for producing an e-book provided by the above embodiment produces an e-book, the division into the above functional modules is used only as an example for illustration; in practical applications, the above functions may be assigned to different functional modules as needed, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above. In addition, the apparatus for producing an e-book provided by the above embodiment belongs to the same concept as the method embodiment for producing an e-book; for its specific implementation, refer to the method embodiment, and details are not repeated here.
An embodiment of the present invention further provides a terminal. Referring to Figure 10, which shows a schematic structural diagram of the terminal involved in the embodiment of the present invention, this terminal may be used to implement the method for producing an e-book provided in the above embodiments. Specifically:
The terminal 1000 may include an RF (Radio Frequency) circuit 110, a memory 120 including one or more computer-readable storage media, an input unit 130, a display unit 140, a sensor 150, an audio circuit 160, a WiFi (wireless fidelity) module 170, a processor 180 including one or more processing cores, a power supply 190 and other components. A person skilled in the art will understand that the terminal structure shown in Figure 10 does not constitute a limitation on the terminal, which may include more or fewer components than illustrated, combine some components, or use a different arrangement of components. Here:
The RF circuit 110 may be used to receive and send signals during information transmission and reception or during a call; in particular, after receiving downlink information from a base station, it hands the information over to one or more processors 180 for processing, and it sends the uplink data concerned to the base station. Generally, the RF circuit 110 includes, but is not limited to, an antenna, at least one amplifier, a tuner, one or more oscillators, a subscriber identity module (SIM) card, a transceiver, a coupler, an LNA (Low Noise Amplifier), a duplexer and so on. In addition, the RF circuit 110 may also communicate with a network and other devices through wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to GSM (Global System for Mobile communications), GPRS (General Packet Radio Service), CDMA (Code Division Multiple Access), WCDMA (Wideband Code Division Multiple Access), LTE (Long Term Evolution), e-mail, SMS (Short Messaging Service) and so on.
The memory 120 may be used to store software programs and modules, and the processor 180 performs various functional applications and data processing by running the software programs and modules stored in the memory 120. The memory 120 may mainly include a program storage area and a data storage area, where the program storage area may store the operating system, applications required by at least one function (such as a sound playback function or an image playback function) and so on, and the data storage area may store data created according to the use of the terminal 1000 (such as audio data or a phone book) and so on. In addition, the memory 120 may include a high-speed random access memory and may also include a non-volatile memory, for example at least one magnetic disk storage device, a flash memory device or another non-volatile solid-state storage component. Correspondingly, the memory 120 may also include a memory controller to provide the processor 180 and the input unit 130 with access to the memory 120.
The input unit 130 may be used to receive input numbers or characters and to generate keyboard, mouse, joystick, optical or trackball signal input related to user settings and function control. Specifically, the input unit 130 may include a touch-sensitive surface 131 and other input devices 132. The touch-sensitive surface 131, also called a touch display screen or a touch pad, can collect touch operations by the user on or near it (for example, operations performed by the user on or near the touch-sensitive surface 131 with a finger, a stylus or any other suitable object or accessory) and drive the corresponding connecting device according to a preset program. Optionally, the touch-sensitive surface 131 may include two parts: a touch detection device and a touch controller. The touch detection device detects the touch direction of the user, detects the signal brought by the touch operation and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, sends them to the processor 180, and can receive and execute commands sent by the processor 180. In addition, the touch-sensitive surface 131 may be implemented in multiple types such as resistive, capacitive, infrared and surface acoustic wave. Besides the touch-sensitive surface 131, the input unit 130 may also include the other input devices 132, which may include, but are not limited to, one or more of a physical keyboard, function keys (such as a volume control key or a power switch key), a trackball, a mouse, a joystick and so on.
The display unit 140 may be used to display information entered by the user or provided to the user and the various graphical user interfaces of the terminal 1000; these graphical user interfaces may be composed of graphics, text, icons, video and any combination thereof. The display unit 140 may include a display panel 141, which may optionally be configured in the form of an LCD (Liquid Crystal Display), an OLED (Organic Light-Emitting Diode) and so on. Further, the touch-sensitive surface 131 may cover the display panel 141; when the touch-sensitive surface 131 detects a touch operation on or near it, it transmits the operation to the processor 180 to determine the type of the touch event, and the processor 180 then provides a corresponding visual output on the display panel 141 according to the type of the touch event. Although in Fig. 10 the touch-sensitive surface 131 and the display panel 141 implement the input and output functions as two separate components, in some embodiments the touch-sensitive surface 131 and the display panel 141 may be integrated to implement the input and output functions.
The terminal 1000 may further include at least one sensor 150, such as a light sensor, a motion sensor and other sensors. Specifically, the light sensor may include an ambient light sensor and a proximity sensor, where the ambient light sensor can adjust the brightness of the display panel 141 according to the brightness of the ambient light, and the proximity sensor can turn off the display panel 141 and/or the backlight when the terminal 1000 is moved to the ear. As one kind of motion sensor, a gravity acceleration sensor can detect the magnitude of acceleration in each direction (generally three axes), can detect the magnitude and direction of gravity when stationary, and can be used for applications that recognize the posture of the mobile phone (such as landscape/portrait switching, related games and magnetometer posture calibration) and for vibration-recognition related functions (such as a pedometer or tapping); other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer and an infrared sensor may also be configured on the terminal 1000 and are not described here.
The audio circuit 160, a speaker 161 and a microphone 162 can provide an audio interface between the user and the terminal 1000. The audio circuit 160 can transmit the electrical signal converted from the received audio data to the speaker 161, and the speaker 161 converts it into a sound signal for output; on the other hand, the microphone 162 converts the collected sound signal into an electrical signal, which is received by the audio circuit 160 and converted into audio data; after the audio data is output to the processor 180 for processing, it is sent through the RF circuit 110 to, for example, another terminal, or the audio data is output to the memory 120 for further processing. The audio circuit 160 may also include an earphone jack to provide communication between an external earphone and the terminal 1000.
WiFi is a short-range wireless transmission technology. Through the WiFi module 170, the terminal 1000 can help the user send and receive e-mail, browse web pages, access streaming media and so on; it provides the user with wireless broadband Internet access. Although Figure 10 shows the WiFi module 170, it can be understood that it is not a necessary component of the terminal 1000 and may be omitted as needed without changing the essence of the invention.
The processor 180 is the control center of the terminal 1000. It connects the various parts of the whole mobile phone through various interfaces and lines, and performs the various functions of the terminal 1000 and processes data by running or executing the software programs and/or modules stored in the memory 120 and calling the data stored in the memory 120, thereby monitoring the mobile phone as a whole. Optionally, the processor 180 may include one or more processing cores; preferably, the processor 180 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, the user interface, applications and so on, and the modem processor mainly handles wireless communication. It can be understood that the modem processor may also not be integrated into the processor 180.
The terminal 1000 further includes a power supply 190 (such as a battery) that supplies power to the various components. Preferably, the power supply may be logically connected to the processor 180 through a power management system, so that functions such as charging, discharging and power consumption management are implemented through the power management system. The power supply 190 may also include one or more DC or AC power sources, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator and any other components.
Although not shown, the terminal 1000 may also include a camera, a Bluetooth module and so on, which are not described here. Specifically, in this embodiment the display unit of the terminal 1000 is a touch-screen display, and the terminal 1000 further includes a memory and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs containing instructions for performing the following operations:
obtaining a first person image of a target person, and obtaining a first avatar template image selected by a user from pre-stored avatar template images of the e-book;
performing facial image recognition on the first person image, determining a facial region in the first person image, and obtaining the facial image contained in the facial region of the first person image;
adding the facial image to a preset image-insertion area of the first avatar template image to obtain a combined avatar;
placing the combined avatar at a preset position on a target page of the e-book.
Optionally, performing facial image recognition on the first person image, determining the facial region in the first person image, and obtaining the facial image contained in the facial region of the first person image includes:
performing facial image recognition on the first person image to obtain a predetermined number of edge position points that locate the facial region in the first person image;
obtaining, according to the predetermined number of edge position points and the first person image, the facial image contained in the facial region of the first person image.
Optionally, the method further includes:
obtaining at least one person image of the target person, and obtaining the shooting time of each person image in the at least one person image;
determining, according to the pre-stored date of birth of the target person and the shooting time of each person image, the age of the target person in each person image;
determining, according to a pre-stored correspondence between ages and text information and the age of the target person in each person image, the text information corresponding to each person image;
placing each person image and the text information corresponding to it on the target page.
Optionally, the method further includes:
obtaining at least one person image of the target person, and obtaining the shooting time of each person image in the at least one person image;
determining, according to a pre-stored correspondence between shooting times and text information and the shooting time of each person image, the text information corresponding to each person image;
placing each person image and the text information corresponding to it on the target page.
Optionally, obtaining the shooting time of each person image in the at least one person image includes:
obtaining the exchangeable image file (EXIF) information corresponding to each person image in the at least one person image;
obtaining the shooting time of each person image from the EXIF information corresponding to it.
In the embodiments of the present invention, a first person image of a target person is obtained, and a first avatar template image selected by the user is obtained from pre-stored avatar template images of the e-book; facial image recognition is performed on the first person image, the facial region in the first person image is determined, and the facial image contained in that facial region is obtained; the facial image is added to a preset image-insertion area of the first avatar template image to obtain a combined avatar; and the combined avatar is placed at a preset position on a target page of the e-book. In this way, the terminal can automatically restyle the person image and change its original content, so that the image displayed in the e-book is the automatically restyled version of the person image chosen by the user, which enhances the flexibility of e-book production.
A person of ordinary skill in the art will understand that all or part of the steps of the above embodiments may be implemented by hardware or by a program instructing the relevant hardware. The program may be stored in a computer-readable storage medium, and the storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disc or the like.
The foregoing is only preferred embodiments of the present invention and is not intended to limit the present invention. Any modification, equivalent replacement, improvement and the like made within the spirit and principles of the present invention shall fall within the protection scope of the present invention.

Claims (10)

1. the method making e-book, it is characterised in that described method includes:
Obtain the first object image of target person, and in the head portrait template image of the e-book prestored, obtain the first head portrait template image that user selects;
Described the first object image is carried out face-image identification, it is determined that the facial zone in described the first object image, obtain the face-image comprised in the facial zone of described the first object image;
By described face-image, add the image insertion area place preset in described first head portrait template image to, obtain combination head picture;
By described combination head picture, it is arranged in described e-book the predetermined position of target pages.
2. method according to claim 1, it is characterized in that, described described the first object image is carried out face-image identification, it is determined that the facial zone in described the first object image, obtain the face-image comprised in the facial zone of described the first object image, including:
Described the first object image is carried out face-image identification, obtains positioning the marginal position point of the predetermined number of the facial zone in described the first object image;
Marginal position point according to described predetermined number and described the first object image, obtain the face-image comprised in the facial zone of described the first object image.
3. method according to claim 1, it is characterised in that described method also includes:
Obtain at least one character image of described target person, and obtain the shooting time of everyone object image at least one character image described;
Date of birth according to the described target person prestored and the shooting time of everyone object image described, it is determined that the age that described target person is corresponding in everyone object image described;
Corresponding relation according to the age prestored Yu text message, and the age that described target person is corresponding in everyone object image, it is determined that the text message that everyone object image described is corresponding;
By everyone object image described, text message that everyone object image described is corresponding, it is arranged in described target pages.
4. method according to claim 1, it is characterised in that described method also includes:
Obtain at least one character image of described target person, and obtain the shooting time of everyone object image at least one character image described;
Corresponding relation according to the shooting time prestored Yu text message, and the shooting time of everyone object image described, it is determined that the text message that everyone object image described is corresponding;
By everyone object image described, text message that everyone object image described is corresponding, it is arranged in described target pages.
5. the method according to claim 3 or 4, it is characterised in that the shooting time of everyone object image at least one character image described in described acquisition, including:
Obtain the exchangeable image file EXIF information that at least one character image described, everyone object image is corresponding;
In the EXIF information that everyone object image described is corresponding, obtain the shooting time of everyone object image described.
6. A device for making an e-book, characterized in that the device comprises:
a first obtaining module, configured to obtain a first person image of a target person, and to obtain a first head portrait template image selected by a user from pre-stored head portrait template images of the e-book;
a first determining module, configured to perform facial image recognition on the first person image, determine the facial region in the first person image, and obtain the facial image contained in the facial region of the first person image;
an adding module, configured to add the facial image to a preset image insertion area of the first head portrait template image, to obtain a combined head portrait;
a first arranging module, configured to arrange the combined head portrait at a preset position of a target page in the e-book.
7. The device according to claim 6, characterized in that the first determining module comprises:
a recognition submodule, configured to perform facial image recognition on the first person image to obtain a predetermined number of edge position points that locate the facial region in the first person image;
a first obtaining submodule, configured to obtain, according to the predetermined number of edge position points and the first person image, the facial image contained in the facial region of the first person image.
8. The device according to claim 6, characterized in that the device further comprises:
a second obtaining module, configured to obtain at least one person image of the target person, and to obtain the shooting time of each person image in the at least one person image;
a second determining module, configured to determine, according to a pre-stored date of birth of the target person and the shooting time of each person image, the age of the target person in each person image;
a third determining module, configured to determine, according to a pre-stored correspondence between ages and text information and the age of the target person in each person image, the text information corresponding to each person image;
a second arranging module, configured to arrange each person image and its corresponding text information in the target page.
9. The device according to claim 6, characterized in that the device further comprises:
a second obtaining module, configured to obtain at least one person image of the target person, and to obtain the shooting time of each person image in the at least one person image;
a second determining module, configured to determine, according to a pre-stored correspondence between shooting times and text information and the shooting time of each person image, the text information corresponding to each person image;
a second arranging module, configured to arrange each person image and its corresponding text information in the target page.
10. The device according to claim 8 or 9, characterized in that the second obtaining module comprises:
a second obtaining submodule, configured to obtain the exchangeable image file (EXIF) information corresponding to each person image in the at least one person image;
a third obtaining submodule, configured to obtain, from the EXIF information corresponding to each person image, the shooting time of that person image.
CN201610112549.9A 2016-02-29 2016-02-29 Method and apparatus for making an e-book Active CN105787982B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610112549.9A CN105787982B (en) Method and apparatus for making an e-book

Publications (2)

Publication Number Publication Date
CN105787982A true CN105787982A (en) 2016-07-20
CN105787982B CN105787982B (en) 2018-11-09

Family

ID=56386522

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610112549.9A Active CN105787982B (en) Method and apparatus for making an e-book

Country Status (1)

Country Link
CN (1) CN105787982B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101477696A (en) * 2009-01-09 2009-07-08 彭振云 Human character cartoon image generating method and apparatus
CN104657974A (en) * 2013-11-25 2015-05-27 腾讯科技(上海)有限公司 Image processing method and device
CN104156476A (en) * 2014-08-25 2014-11-19 小米科技有限责任公司 Image synthesis method and device
CN105302315A (en) * 2015-11-20 2016-02-03 小米科技有限责任公司 Image processing method and device

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106454328A (en) * 2016-10-21 2017-02-22 航天恒星科技有限公司 Method and system for predicting image quality grade
CN108520508A (en) * 2018-04-04 2018-09-11 掌阅科技股份有限公司 User image optimization method based on user behavior, computing device and storage medium
CN108520508B (en) * 2018-04-04 2019-02-01 掌阅科技股份有限公司 User image optimization method based on user behavior, computing device and storage medium
CN111556251A (en) * 2020-05-20 2020-08-18 深圳前海微众银行股份有限公司 Electronic book generation method, device and medium
CN114205318A (en) * 2020-08-31 2022-03-18 荣耀终端有限公司 Head portrait display method and electronic equipment
CN114205318B (en) * 2020-08-31 2023-12-08 荣耀终端有限公司 Head portrait display method and electronic equipment

Also Published As

Publication number Publication date
CN105787982B (en) 2018-11-09

Similar Documents

Publication Publication Date Title
CN104618217B (en) Share method, terminal, server and the system of resource
CN104978115B (en) Content display method and device
CN104978176B (en) Application programming interfaces call method, device and computer readable storage medium
CN106371689B (en) Picture joining method, apparatus and system
CN104133624B (en) Web animation display packing, device and terminal
CN104717125B (en) Graphic code store method and device
CN103390034B (en) Method, device, terminal and the server of picture presentation
CN103941982A (en) Method for sharing interface processing and terminal
CN103310004A (en) Method, device and equipment for displaying number of unread messages
CN103616983A (en) Picture presentation method, picture presentation device and terminal device
CN104519404A (en) Graphics interchange format file playing method and device
CN104954159A (en) Network information statistics method and device
CN105808060A (en) Method and device for playing animation
CN105094809A (en) Combined picture layout modification method and device and terminal equipment
CN104090879A (en) Picture sharing method, device and system
CN104850406A (en) Page switching method and device
CN105094513A (en) User avatar setting method and apparatus as well as electronic device
CN104869465A (en) Video playing control method and device
CN106203254A (en) A kind of adjustment is taken pictures the method and device in direction
CN104965642A (en) Method and apparatus for generating a drop-down list
CN104021129A (en) Picture group display method and terminal
CN104516890A (en) Business processing method, business processing device and electronic equipment
CN105635553A (en) Image shooting method and device
CN104679724A (en) Page noting method and device
CN104007887A (en) Floating layer display method and terminal

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant