CN112817503B - Intelligent display method of electronic photo frame, electronic photo frame and readable storage medium - Google Patents
Intelligent display method of electronic photo frame, electronic photo frame and readable storage medium
- Publication number
- CN112817503B (application CN202110059500.2A)
- Authority
- CN
- China
- Prior art keywords
- information
- photo frame
- electronic photo
- output
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
- G06F3/04845—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0487—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
- G06F3/0488—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/16—Sound input; Sound output
- G06F3/167—Audio in a user interface, e.g. using voice commands for navigating, audio feedback
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/174—Facial expression recognition
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L17/00—Speaker identification or verification techniques
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Health & Medical Sciences (AREA)
- Audiology, Speech & Language Pathology (AREA)
- General Health & Medical Sciences (AREA)
- Acoustics & Sound (AREA)
- Oral & Maxillofacial Surgery (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
The invention discloses an intelligent display method of an electronic photo frame, which comprises the following steps: acquiring environment information of the environment where the electronic photo frame is located, wherein the environment information comprises at least one of voice information and image information; acquiring information to be output according to the environment information, wherein the information to be output comprises at least two of voice information, expression information and action information; and controlling the target character image displayed by the electronic photo frame to output the information to be output, so that each kind of output information in the information to be output is output in a corresponding and consistent manner. The invention also provides an electronic photo frame and a readable storage medium. According to the invention, the electronic photo frame actively interacts with its surroundings on the basis of environment information such as the voice information and/or image information of the environment in which it is placed, which improves the intelligence of the electronic photo frame; and by controlling at least two of the voice information, expression information and action information of the displayed target character image to be output in a corresponding and consistent manner, the display diversity and functional diversity of the electronic photo frame are increased.
Description
Technical Field
The present invention relates to the field of intelligent electronic technologies, and in particular, to an intelligent display method of an electronic photo frame, an electronic photo frame, and a readable storage medium.
Background
Photo frames are used to hold photos and are usually placed in the home. Conventionally, photo frames serve mainly as decoration: they can only be browsed by holding a fixed picture or by simply playing back stored photos.
The foregoing is provided merely for the purpose of facilitating understanding of the technical solutions of the present invention and is not intended to represent an admission that the foregoing is prior art.
Disclosure of Invention
The invention mainly aims to provide an intelligent display method of an electronic photo frame, an electronic photo frame and a readable storage medium, so as to solve the problem that a photo frame can only be browsed by holding a fixed picture or simply playing back stored photos.
In order to achieve the above object, the present invention provides an intelligent display method of an electronic photo frame, the intelligent display method of the electronic photo frame includes:
acquiring environment information of an environment where an electronic photo frame is located, wherein the environment information comprises at least one of voice information and image information;
acquiring information to be output according to the environment information, wherein the information to be output comprises at least two output information of voice information, expression information and action information;
and controlling the target character image displayed by the electronic photo frame to output the information to be output, so that the output information in the information to be output is correspondingly and consistently output.
Optionally, before the step of acquiring the environment information of the environment where the electronic photo frame is located, the method includes:
when an image switching instruction is received, acquiring the target person image corresponding to the image switching instruction;
and switching the character image currently displayed by the electronic photo frame into the target character image.
Optionally, the step of switching the character image displayed by the electronic photo frame to the target character image includes:
acquiring a first facial feature point of the target person image;
acquiring a second facial feature point of the character image displayed by the electronic photo frame;
correcting the second facial feature points according to the first facial feature points, and determining the display position of the character image displayed by the electronic photo frame after correction;
and outputting the target person image to the display position.
Optionally, the intelligent display method of the electronic photo frame further comprises the following steps:
acquiring a current display mode of the electronic photo frame;
executing the step of acquiring the environmental information of the environment where the electronic photo frame is located when the display mode is an interactive display mode; or,
when the display mode is a non-interactive display mode, identifying identity information of the target person image displayed by the electronic photo frame;
acquiring character information of the target character image according to the identity information;
and controlling the target character image displayed by the electronic photo frame to output the character information.
Optionally, the intelligent display method of the electronic photo frame further comprises the following steps:
when an image driving instruction is received, acquiring playing data information of reference video information;
and controlling the target figure image displayed by the electronic photo frame to be output according to the play data information.
Optionally, the step of obtaining the information to be output according to the environmental information includes:
identifying character identity information in the environment information;
and if the person identity information is matched with the authorized identity information, acquiring the information to be output associated with the authorized identity information.
Optionally, after the step of identifying the person identity information in the environment information, the method includes:
and if the identity information of the person is not matched with the authorized identity information, saving the environment information.
In addition, to achieve the above object, the present invention further provides an electronic photo frame, where the electronic photo frame includes: the intelligent display device comprises a memory, a processor and an intelligent display program of the electronic photo frame, wherein the intelligent display program of the electronic photo frame is stored in the memory and can run on the processor, and when being executed by the processor, the intelligent display program of the electronic photo frame realizes the steps of the intelligent display method of the electronic photo frame.
In addition, in order to achieve the above object, the present invention further provides a readable storage medium storing an intelligent display program of an electronic photo frame, which, when executed by a processor, implements the steps of the intelligent display method of an electronic photo frame as described above.
According to the intelligent display method of the electronic photo frame provided by the invention, the electronic photo frame actively interacts on the basis of the voice information and/or image information of the environment in which it is located, which improves the intelligence of the electronic photo frame. By controlling the target person image output by the electronic photo frame so that at least two of the voice information, expression information and action information are output in a corresponding and consistent manner, dynamic display of the target person in the electronic photo frame is realized, and the display diversity and functional diversity of the electronic photo frame are increased.
Drawings
FIG. 1 is a block diagram of an electronic photo frame according to various embodiments of the present invention;
FIG. 2 is a flowchart of a first embodiment of an intelligent display method of an electronic photo frame according to the present invention;
FIG. 3 is a flowchart of a second embodiment of an intelligent display method of an electronic photo frame according to the present invention;
FIG. 4 is a flowchart of a third embodiment of an intelligent display method of an electronic photo frame according to the present invention;
fig. 5 is a flowchart of a fourth embodiment of an intelligent display method of an electronic photo frame according to the present invention.
The achievement of the objects, functional features and advantages of the present invention will be further described with reference to the accompanying drawings, in conjunction with the embodiments.
Detailed Description
It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
In the following description, suffixes such as "module", "component" or "unit" used to denote elements are adopted only to facilitate the description of the present invention and have no specific meaning in themselves. Thus, "module", "component" and "unit" may be used interchangeably.
Referring to fig. 1, fig. 1 is a block diagram of an electronic photo frame according to various embodiments of the present invention. The electronic photo frame may include: a memory 101, a processor 102, a display unit 103, and a communication unit 104. It will be appreciated by those skilled in the art that the block diagram of the electronic photo frame shown in fig. 1 is not limiting, and that the electronic photo frame may include more or fewer components than shown, may combine certain components, or may use a different arrangement of components. The memory 101 stores an operating system and an intelligent display program of the electronic photo frame. The processor 102 is the control center of the electronic photo frame; the processor 102 executes the intelligent display program of the electronic photo frame stored in the memory 101 to implement the steps of the intelligent display method of the electronic photo frame according to the embodiments of the invention. The display unit 103 includes a display panel, which may be configured as a liquid crystal display (Liquid Crystal Display, LCD), an Organic Light-Emitting Diode (OLED) display, or the like, for displaying a person image. Optionally, the display unit 103 may integrate a touch panel: when the touch panel detects a touch operation of a finger on or near it, the touch operation is passed to the processor 102 to determine the type of touch event, and the processor 102 then performs the corresponding function according to that type. The communication unit 104 establishes data communication (over an IP connection or a Bluetooth channel) with other terminal devices, such as a mobile phone, a computer or a server, through a network protocol, so that those terminal devices can send data to the electronic photo frame; for example, a user's terminal device, such as a mobile phone or a wearable device, can directly send an image driving instruction containing reference video information, or an image switching instruction containing a target character image, to the electronic photo frame.
Based on the above structural block diagram of the electronic photo frame, various embodiments of the intelligent display method of the electronic photo frame of the present invention are presented.
The invention provides an intelligent display method of an electronic photo frame, please refer to fig. 2, fig. 2 is a flow chart of a first embodiment of the intelligent display method of the electronic photo frame of the invention. In this embodiment, the intelligent display method of the electronic photo frame includes the following steps:
step S10, acquiring environment information of an environment where an electronic photo frame is located, wherein the environment information comprises at least one of voice information and image information;
the environment information includes at least one of voice information and image information. The environment information of the environment where the electronic photo frame is located may be obtained by a voice collecting device, such as a microphone, disposed in the electronic photo frame, and/or may be obtained by an image capturing device disposed in the electronic photo frame, which is not limited in this embodiment.
Step S20, obtaining information to be output according to the environment information, wherein the information to be output comprises at least two output information of voice information, expression information and action information;
and step S30, controlling the target character image displayed by the electronic photo frame to output the information to be output, so that the output information in the information to be output is correspondingly and consistently output.
The information to be output comprises at least two of voice information, expression information and action information. The information to be output is obtained according to the environment information by first converting the environment information into a corresponding semantic description. For example, when the image information contains a human body and its characteristics, the corresponding semantic description is obtained, such as "person", age and gender, indicating that a person with certain characteristics is present in the current environment. The information to be output is then obtained according to this semantic description; for example, once the description of "person", age and gender has been obtained, these items can be treated as conditions for obtaining the language to be output, such as the greeting sentence "hello".
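A minimal sketch of this two-stage mapping, in which the `detect_person` callable and the single greeting rule are illustrative assumptions:

```python
def describe_environment(image, detect_person) -> dict:
    """Convert captured image information into a semantic description.
    `detect_person` is a placeholder for the person/age/gender detector."""
    description = {}
    if image is not None:
        person = detect_person(image)    # e.g. {"age": 30, "gender": "female"} or None
        if person is not None:
            description["person"] = True
            description.update(person)
    return description

def build_output_info(description: dict) -> dict:
    """Derive at least two kinds of output information from the description."""
    if description.get("person"):
        return {"voice": "Hello!",       # greeting sentence from the example
                "expression": "smile",   # expression information
                "action": "wave"}        # action information
    return {}
```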
Based on the at least two kinds of information to be output, such as voice information, expression information and action information, acquired according to the environment information, feature data corresponding to the information to be output can be acquired, such as expression feature data, action feature data, and mouth-shape feature data corresponding to the voice information. Face/limb dynamic transformation software or an algorithm then makes the target character image displayed by the electronic photo frame output the actions and/or postures corresponding to the information to be output according to the expression feature data and the action feature data, and controls the target character image to output the corresponding mouth shapes and voice information according to the mouth-shape feature data, so that each kind of output information in the information to be output is output in a corresponding and consistent manner. Optionally, when the voice data corresponding to the target person image is known, voice conversion software can be used to make the voice information output by the target person image conform to the voice characteristics of the person that the image represents.
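The following sketch illustrates one way such corresponding and consistent output could be coordinated; `avatar` and `tts`, with methods such as `set_expression`, `play_action`, `set_mouth_shape`, `visemes_for` and `speak`, are assumed interfaces standing in for the unnamed face/limb transformation and speech engines:

```python
import threading
import time

def render_output(avatar, tts, output_info: dict) -> None:
    """Step S30: drive the displayed target character so that expression,
    action, mouth shape and voice are output together."""
    if "expression" in output_info:
        avatar.set_expression(output_info["expression"])   # hypothetical renderer call
    if "action" in output_info:
        avatar.play_action(output_info["action"])           # hypothetical renderer call
    if "voice" in output_info:
        text = output_info["voice"]
        visemes = tts.visemes_for(text)                      # hypothetical: [(mouth_shape, seconds), ...]
        speech = threading.Thread(target=tts.speak, args=(text,))
        speech.start()                                       # play the audio ...
        for mouth_shape, seconds in visemes:                 # ... while stepping the mouth in time with it
            avatar.set_mouth_shape(mouth_shape)
            time.sleep(seconds)
        speech.join()
```

Playing the speech on a separate thread while stepping through the mouth shapes is one simple way to keep the voice output and the mouth-shape output aligned in time.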
As an optional implementation manner, the intelligent display method of the electronic photo frame further includes:
when an image driving instruction is received, acquiring playing data information of reference video information;
and controlling the target figure image displayed by the electronic photo frame to be output according to the play data information.
In practical applications, in order to further diversify the functions of the electronic photo frame, video driving can be used to control the electronic photo frame to dynamically display the target character image, enhancing the diversity and interest of the dynamic display. The reference video information is the source from which the playing data information used to drive the dynamic display of the target character image is obtained, so that the target character image is dynamically displayed according to that playing data information. The playing data information comprises at least one of the expression feature data, mouth-shape feature data, voice data and action feature data of the reference character image in each image frame of the reference video information, taken in video playing order. It should be noted that the reference character image is different from the target character image displayed by the electronic photo frame: the reference character image is the character image shown in each image frame of the reference video information, and optionally the reference video information shows a single character.
The image driving instruction can be triggered by directly detecting a touch gesture corresponding to the instruction, or by pressing a key or button corresponding to the instruction. It can also be triggered by receiving an image driving instruction containing the reference video information sent from another terminal device; for example, the user's terminal device, such as a mobile phone or a wearable device, can send the image driving instruction containing the reference video information directly to the electronic photo frame. This embodiment is not limited in this respect.
Obtaining the playing data information of the reference video information means obtaining, in video playing order, the data information of the reference character image in each image frame of the reference video information, such as expression feature data, mouth-shape feature data, voice data and action feature data, and then assembling the per-frame data into the playing data information according to the playing order. The reference video information may be stored in the electronic photo frame in advance and obtained directly, or it may be received from another terminal device.
Controlling the output of the target character image displayed by the electronic photo frame according to the playing data information means obtaining, from at least one of the expression feature data, mouth-shape feature data, voice data and action feature data in the playing data information, the corresponding expression, mouth-shape and action features of the displayed target character image, controlling the display of those features according to the playing data information, and outputting the corresponding voice data in step with the mouth-shape features. In this way the target character image displayed by the electronic photo frame is dynamically output according to the expressions, mouth shapes, voice and actions of the reference character image in the reference video information, which enhances the diversity and interest of the dynamic display.
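A rough sketch of extracting and replaying the playing data information, where `analyse_frame` and the `avatar` methods are assumed placeholders for the feature extractor and for the rendering of the target character image:

```python
import time

def extract_play_data(reference_frames, analyse_frame):
    """Build the playing data information: per-frame expression, mouth-shape
    and action features of the reference character, in playback order."""
    return [analyse_frame(frame) for frame in reference_frames]

def drive_target_image(avatar, play_data, fps=25):
    """Replay the reference character's features on the displayed target image."""
    for features in play_data:                               # one feature dict per image frame
        if "expression" in features:
            avatar.set_expression(features["expression"])    # hypothetical renderer call
        if "mouth" in features:
            avatar.set_mouth_shape(features["mouth"])
        if "action" in features:
            avatar.play_action(features["action"])
        time.sleep(1.0 / fps)                                # keep the video playing order and pace
```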
In the technical scheme disclosed in this embodiment, the electronic photo frame actively interacts on the basis of the voice information and/or image information of the environment in which it is located, which improves the intelligence of the electronic photo frame. By controlling the target person image output by the electronic photo frame so that at least two of the voice information, expression information and action information are output in a corresponding and consistent manner, dynamic display of the target person in the electronic photo frame is realized, and the display diversity and functional diversity of the electronic photo frame are increased.
Based on the above-mentioned first embodiment, a second embodiment of the intelligent display method of an electronic photo frame of the present invention is provided, please refer to fig. 3, fig. 3 is a flowchart of the second embodiment of the intelligent display method of an electronic photo frame of the present invention. In this embodiment, step S10, that is, before acquiring the environmental information of the environment in which the electronic photo frame is located, includes:
step S40, when an image switching instruction is received, acquiring the target person image corresponding to the image switching instruction;
and S50, switching the character image currently displayed by the electronic photo frame into the target character image.
The target person image is the person image finally displayed by the electronic photo frame after it receives the image switching instruction. The image switching instruction can be triggered by a terminal device of the user, such as a mobile phone or a wearable device, sending an image switching instruction containing the target character image directly to the electronic photo frame, or indirectly through a server. The operators allowed to trigger the image switching instruction can also be restricted, to prevent arbitrary persons from switching the character image displayed by the electronic photo frame at will. This embodiment is not limited in this respect.
The person image currently displayed by the electronic photo frame can be switched to the target person image through face-changing technology, so that the displayed person image can be switched quickly. In practical applications this makes it convenient, for example, to quickly capture the image of a person nearing the end of life, or to reproduce the image of a deceased person, such as a well-known figure or an ancestor, for commemoration.
Optionally, step S50 of switching the currently displayed character image of the electronic photo frame to the target character image includes:
acquiring a first facial feature point of the target person image;
acquiring a second facial feature point of the character image displayed by the electronic photo frame;
correcting the second facial feature points according to the first facial feature points, and determining the display position of the character image displayed by the electronic photo frame after correction;
and outputting the target person image to the display position.
The first facial feature points refer to face feature information, such as facial feature data information and facial contour data information, of the target person image displayed after the electronic photo frame image is switched. The second facial feature points refer to face feature information of the person image displayed before the electronic photo frame image is switched. The first facial feature point of the target person image or the second facial feature point of the person image displayed by the electronic photo frame can be obtained through recognition by a face recognition tool or software.
In practical applications, when the character image currently displayed in the electronic photo frame is switched, the second facial feature points are corrected according to the first facial feature points so that the target character image fits the character image previously displayed by the electronic photo frame and is displayed more realistically. The correction can be performed in full, by replacing each second facial feature point directly with the corresponding first facial feature point, or selectively, by comparing the first and second facial feature points to find the unmatched target facial feature points and correcting only those with the corresponding first facial feature points. Finally, the display position of the corrected character image is determined and the target person image is output to that position, completing the character image switch of the electronic photo frame and achieving the aim of displaying the target person image.
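A minimal sketch of the selective correction and of deriving the display position, assuming both sets of facial feature points are (x, y) pixel coordinates listed in the same order; the pixel tolerance and the bounding-box display position are illustrative choices:

```python
def correct_feature_points(first_points, second_points, tolerance=2.0):
    """Correct the second (currently displayed) facial feature points using the
    first (target image) points: only points differing by more than `tolerance`
    pixels are replaced, mirroring the 'compare, then correct the unmatched
    points' option described above."""
    corrected = []
    for (x1, y1), (x2, y2) in zip(first_points, second_points):
        if abs(x1 - x2) > tolerance or abs(y1 - y2) > tolerance:
            corrected.append((x1, y1))   # unmatched point: take the target image's value
        else:
            corrected.append((x2, y2))   # matched point: keep the displayed value
    return corrected

def display_position(points):
    """Bounding box of the corrected feature points, used as the position at
    which the target person image is output."""
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    return min(xs), min(ys), max(xs), max(ys)
```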
In the technical scheme disclosed in this embodiment, the target person image to be displayed after switching is determined through the image switching instruction, so that the user can selectively switch the person image currently displayed by the electronic photo frame to the target person image. This allows the person image required by the user to be replaced in real time and improves the convenience of changing the picture shown by the electronic photo frame.
Referring to fig. 4, fig. 4 is a schematic flow chart of a third embodiment of the intelligent display method of the electronic photo frame according to the present invention. In this embodiment, the intelligent display method of the electronic photo frame further includes:
step S60, acquiring the current display mode of the electronic photo frame;
step S70, when the display mode is an interactive display mode, executing step S10 to acquire environment information of the environment where the electronic photo frame is located; or,
step S80, when the display mode is a non-interactive display mode, identifying identity information of the target person image displayed by the electronic photo frame;
step S90, acquiring the character information of the target character image according to the identity information;
and step S100, controlling the target character image displayed by the electronic photo frame to output the character information.
The display mode can simply be understood as the way in which the electronic photo frame performs its display; it is easy to understand that different display modes correspond to different ways of displaying. Optionally, the display mode includes an interactive mode and/or a non-interactive mode. When the electronic photo frame is in the interactive mode, the target character image it displays can actively interact with the environment information; for example, when a person is identified through environment information such as voice information and/or image information, the target character image can actively interact with that person. In practical applications, when the electronic photo frame is placed in a private room and it identifies that a person has entered the room, it can output a greeting such as "welcome home". When the electronic photo frame is in the non-interactive mode, the target character image it displays can be controlled to output the character information corresponding to that character image, so that the user can learn about the person shown by the electronic photo frame, which increases the functional diversity of the electronic photo frame.
The current display mode of the electronic photo frame can be obtained by presetting a display identifier that records the display mode; reading the identifier value then confirms whether the electronic photo frame is in the interactive mode or the non-interactive mode. For example, when the identifier value is preset to 1, the display mode is confirmed to be the interactive mode, and when the identifier value is set to 0, the display mode is confirmed to be the non-interactive mode.
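Expressed as code, the identifier check is only a few lines; the `display_flag` key is an assumed name for the preset display identifier:

```python
INTERACTIVE_FLAG = 1       # identifier value for the interactive display mode
NON_INTERACTIVE_FLAG = 0   # identifier value for the non-interactive display mode

def current_display_mode(settings: dict) -> str:
    """Read the preset display identifier and map it to a display mode,
    following the 1/0 example given above."""
    flag = settings.get("display_flag", NON_INTERACTIVE_FLAG)
    return "interactive" if flag == INTERACTIVE_FLAG else "non-interactive"

# Example: {"display_flag": 1} -> "interactive", {"display_flag": 0} -> "non-interactive"
```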
When the display mode is an interactive display mode, that is, when the electronic photo frame actively interacts with the environmental information, step S10 may be performed to obtain the environmental information of the environment where the electronic photo frame is located, and the specific implementation may refer to the first embodiment, which is not specifically described in this embodiment. When the display mode is a non-interactive mode, that is, the electronic photo frame and the environment information are not interacted, in order to increase the diversification of the functions of the electronic photo frame, the target person image displayed by the electronic photo frame can be controlled to output the person information corresponding to the person image, so that a user can know the person information of the target person image displayed by the electronic photo frame conveniently.
The identity information of the target person image is a unique identification for determining the identity of the person, and the identity information of the target person image includes at least one of a person identification, a name, and a face feature of the target person image. The character information includes at least one of a brief introduction, a trace experience, and an achievement of the target character image.
To identify the identity information of the target person image displayed by the electronic photo frame, the electronic photo frame can send the target person image over the network to another terminal such as a server; the server identifies the identity information of the target person image, obtains the character information corresponding to that identity information, and sends both back to the electronic photo frame, so that the target person image displayed by the electronic photo frame can be controlled to output the character information. Alternatively, the identity information of one or more character images and the corresponding character information can be stored in the electronic photo frame in advance; after the identity information of the displayed target person image is identified, it is compared with the pre-stored identity information, and when they match, the character information corresponding to the pre-stored identity information can be obtained directly and output by the displayed target person image. When the identity information of the target person image does not match the pre-stored identity information, the above step of sending the target person image to another terminal such as a server over the network is performed. The specific implementation of this step is not limited in this embodiment.
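A sketch of this lookup with a local store and a server fallback, where `identify_face` and `query_server` are assumed placeholders for the face-recognition routine and the network request:

```python
def lookup_person_info(target_image, identify_face, local_store: dict, query_server):
    """Steps S80-S90: resolve the displayed person's identity and character information."""
    identity = identify_face(target_image)       # e.g. a person identifier or name
    if identity in local_store:                  # pre-stored identity information
        return local_store[identity]             # brief introduction, achievements, ...
    # No local match: let the server identify the image and return its character information.
    return query_server(target_image)
```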
Further, the target character image displayed by the electronic photo frame can be controlled to output the character information directly by voice, or by voice combined with other output forms, where the other output forms include at least one of actions and expressions. This increases the diversity of the dynamic display of the target character image in the electronic photo frame and helps attract the attention of listeners or viewers when the character information is output. For example, outputting the character information of the target character image through the electronic photo frame lets viewers learn about that person's achievements and improves the effect when the frame is used for learning.
In the technical scheme disclosed in this embodiment, the display mode of the electronic photo frame is made explicit, so that the electronic photo frame dynamically displays the target character image in different ways under different display modes, which increases the display diversity of the electronic photo frame. In addition, when the display mode is determined to be the interactive display mode, the electronic photo frame can actively interact on the basis of the voice information and/or image information of the environment in which it is located, which improves its intelligence; when the display mode is determined to be the non-interactive display mode, the target character image displayed by the electronic photo frame can be controlled to output the corresponding character information, so that the user can learn about the person shown, which increases the functional diversity of the electronic photo frame.
Referring to fig. 5, fig. 5 is a schematic flow chart of a fourth embodiment of the intelligent display method of the electronic photo frame according to the present invention. In this embodiment, step S20 includes the steps of:
step S21, identifying the person identity information in the environment information;
step S22, if the character identity information is matched with the authorized identity information, the information to be output associated with the authorized identity information is obtained.
The environment information includes voice information and/or image information. Person identity information refers to information that can be used to uniquely determine the identity of a person, such as voiceprint information, fingerprint information, and facial features of the person. The authorized identity information refers to legal identity information pre-stored in the electronic photo frame, wherein the legal identity information indicates a safe user entering the environment range of the electronic photo frame.
The person identity information in the environment information can be identified by recognizing voiceprint information in the voice information, so that the person's identity is determined from the voiceprint; it can also be determined from facial features identified in the image information. The voice information and the image information can also be recognized at the same time and used together as reference information for identifying the person identity information, which improves the accuracy of the identification and prevents identity spoofing that could occur if the person identity information were identified from the voice information or the image information alone. The specific implementation of this step is not limited in this embodiment.
If the character identity information is matched with the authorized identity information, namely, the character is a legal user, the information to be output associated with the authorized identity information is obtained, and then the target character image displayed by the electronic photo frame is controlled to output the information to be output, so that targeted interaction is carried out on the person corresponding to the authorized identity information. The obtaining of the information to be output associated with the authorized identity information may be performed by presetting a corresponding relationship between the authorized identity information and the information to be output, so as to obtain the information to be output corresponding to the authorized identity information based on the corresponding relationship.
It will be appreciated that, after identifying the person identity information in the environment information in step S21, it includes: and if the identity information of the person is not matched with the authorized identity information, saving the environment information.
If the identity information of the person does not match the authorized identity information, that is, the person is an unauthorized user, saving the environment information makes it easier to trace that user later. For example, when the electronic photo frame is placed in the user's room and the person identity information in the environment information does not match the authorized identity information, the environment information such as voice information and/or image information is stored so that evidence can be collected about, and traced back to, a person who entered the room illegally, avoiding the situation where an indoor theft cannot be traced and further increasing the functional diversity of the electronic photo frame.
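A minimal sketch of this matching-and-saving logic; the JSON-lines evidence file and the `env_record` structure (serialisable metadata such as file paths of the captured audio and image) are illustrative assumptions:

```python
import json
import time

def handle_identified_person(person_id, authorized_info: dict, env_record: dict,
                             evidence_path="unmatched_environment.jsonl"):
    """Steps S21/S22: return the output information associated with an authorized
    identity, or save the environment information when the identity is unmatched."""
    if person_id in authorized_info:
        return authorized_info[person_id]         # information to be output for this user
    # Unmatched identity: keep the environment information for later tracing.
    with open(evidence_path, "a", encoding="utf-8") as f:
        f.write(json.dumps({"time": time.time(),
                            "person_id": person_id,
                            "environment": env_record}) + "\n")
    return None
```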
In the technical scheme disclosed in this embodiment, it is determined whether the person in the environment corresponds to authorized identity information. When the person is an authorized user, information to be output such as voice information, expression information and/or action information is acquired for that person, and the electronic photo frame actively interacts with the person corresponding to the authorized identity information, which improves the intelligence of the electronic photo frame and increases its functional diversity.
The invention also provides an electronic photo frame, which comprises a memory, a processor and an intelligent display program of the electronic photo frame, wherein the intelligent display program of the electronic photo frame is stored in the memory and can run on the processor, and the intelligent display method of the electronic photo frame in any embodiment is realized when the intelligent display program of the electronic photo frame is executed by the processor.
The invention also provides a readable storage medium, the readable storage medium stores an intelligent display program of the electronic photo frame, and the intelligent display program of the electronic photo frame realizes the steps of the intelligent display method of the electronic photo frame in any embodiment when being executed by a processor.
In the embodiments of the terminal and the readable storage medium provided by the invention, all technical features of each embodiment of the intelligent display method of the electronic photo frame are included, and the expansion and explanation contents of the description are basically the same as each embodiment of the intelligent display method of the electronic photo frame, and are not repeated herein.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or system that comprises the element.
The foregoing embodiment numbers of the present invention are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments.
From the above description of the embodiments, it will be clear to those skilled in the art that the above-described embodiment method may be implemented by means of software plus a necessary general hardware platform, but of course may also be implemented by means of hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present invention may be embodied essentially or in a part contributing to the prior art in the form of a software product stored in a storage medium (e.g. ROM/RAM, magnetic disk, optical disk) as above, comprising instructions for causing a mobile terminal (which may be a handset, a computer, a server, a controlled terminal, or a network device, etc.) to perform the method of each embodiment of the present invention.
The foregoing description is only of the preferred embodiments of the present invention, and is not intended to limit the scope of the invention, but rather is intended to cover any equivalents of the structures or equivalent processes disclosed herein or in the alternative, which may be employed directly or indirectly in other related arts.
Claims (5)
1. The intelligent display method of the electronic photo frame is characterized by comprising the following steps of:
when an image switching instruction is received, acquiring a target person image corresponding to the image switching instruction;
acquiring a first facial feature point of the target person image;
acquiring a second facial feature point of the character image displayed by the electronic photo frame;
correcting the second facial feature points according to the first facial feature points, and determining the display position of the character image displayed by the corrected electronic photo frame;
outputting the target person image to the display position;
acquiring a current display mode of the electronic photo frame;
when the display mode is an interactive display mode, acquiring environment information of an environment where the electronic photo frame is located, wherein the environment information comprises at least one of voice information and image information;
obtaining information to be output according to the environment information, wherein the information to be output comprises voice information, expression information and action information, and when the information to be output comprises the voice information, the step of obtaining the information to be output according to the environment information comprises the following steps:
converting the environment information into semantic descriptions corresponding to the environment information;
obtaining the information to be output according to the semantic description corresponding to the environment information;
controlling a target character image displayed by the electronic photo frame to output the information to be output, so that the output information in the information to be output is correspondingly and consistently output;
the step of controlling the target person image displayed by the electronic photo frame to output the information to be output comprises the following steps:
acquiring feature data according to the information to be output, wherein the feature data comprises expression feature data, action feature data and mouth shape feature data corresponding to the voice information;
enabling a target person image displayed by the electronic photo frame to output actions and/or postures corresponding to the information to be output according to expression characteristic data and action characteristic data in the characteristic data through face/limb dynamic transformation software or algorithm, and controlling the target person image to output corresponding mouth shapes and voice information according to the mouth shape characteristic data in the characteristic data;
when an image driving instruction is received, playing data information of reference video information is obtained, wherein the playing data information comprises data information obtained by at least one of expression characteristic data, mouth shape characteristic data, voice data and action characteristic data of a reference character image in each image frame according to a video playing sequence of the reference video information, the reference character image is a character image displayed by each image frame in the reference video information, the reference character image is different from a target character image displayed by the electronic photo frame, and the reference video information is stored in the electronic photo frame in advance or is obtained by receiving other terminal equipment to send;
according to the expression characteristic data, the mouth shape characteristic data, the voice data and the action characteristic data in the play data information, the expression characteristic, the mouth shape characteristic and the action characteristic corresponding to the target character image displayed by the electronic photo frame are obtained, the expression characteristic, the mouth shape characteristic and the action characteristic of the target character image are correspondingly controlled and displayed according to the play data information, and corresponding voice data is output corresponding to the mouth shape characteristic;
when the display mode is a non-interactive display mode, identifying the identity information of the target person image displayed by the electronic photo frame, wherein the identity information of the target person image is a unique identifier for determining the identity of a person, and the identity information of the target person image comprises at least one of the person identifier, the name and the face characteristic of the target person image;
acquiring character information of the target character image according to the identity information, wherein the character information comprises at least one of a brief introduction, a trace experience and achievement of the target character image;
and controlling the target character image displayed by the electronic photo frame to output the character information.
2. The intelligent display method of an electronic photo frame according to claim 1, wherein the step of obtaining information to be output according to the environmental information comprises:
identifying character identity information in the environment information;
and if the person identity information is matched with the authorized identity information, acquiring the information to be output associated with the authorized identity information.
3. The intelligent display method of an electronic photo frame according to claim 2, wherein after the step of recognizing the person identity information in the environment information, comprising:
and if the identity information of the person is not matched with the authorized identity information, saving the environment information.
4. An electronic photo frame, the electronic photo frame comprising: a memory, a processor and an intelligent display program of an electronic photo frame stored in the memory and executable on the processor, the intelligent display program of the electronic photo frame, when executed by the processor, implementing the steps of the intelligent display method of an electronic photo frame according to any one of claims 1-3.
5. A readable storage medium, wherein the readable storage medium has stored thereon an intelligent display program of an electronic photo frame, the intelligent display program of the electronic photo frame, when executed by a processor, implementing the steps of the intelligent display method of an electronic photo frame according to any one of claims 1-3.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110059500.2A CN112817503B (en) | 2021-01-18 | 2021-01-18 | Intelligent display method of electronic photo frame, electronic photo frame and readable storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110059500.2A CN112817503B (en) | 2021-01-18 | 2021-01-18 | Intelligent display method of electronic photo frame, electronic photo frame and readable storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112817503A CN112817503A (en) | 2021-05-18 |
CN112817503B true CN112817503B (en) | 2024-03-26 |
Family
ID=75870267
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110059500.2A Active CN112817503B (en) | 2021-01-18 | 2021-01-18 | Intelligent display method of electronic photo frame, electronic photo frame and readable storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112817503B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115035785A (en) * | 2022-06-17 | 2022-09-09 | 云知声智能科技股份有限公司 | Method and device for displaying photos, electronic equipment and storage medium |
Citations (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101369439A (en) * | 2008-08-08 | 2009-02-18 | 深圳华为通信技术有限公司 | Method and device for switching and selecting digital photo frame photograph |
CN101795357A (en) * | 2009-01-29 | 2010-08-04 | 索尼公司 | Imaging device, search method and program |
CN102779251A (en) * | 2012-06-29 | 2012-11-14 | 鸿富锦精密工业(深圳)有限公司 | Electronic device and encrypting/decrypting method thereof |
CN104917666A (en) * | 2014-03-13 | 2015-09-16 | 腾讯科技(深圳)有限公司 | Method of making personalized dynamic expression and device |
CN106210271A (en) * | 2016-06-28 | 2016-12-07 | 上海青橙实业有限公司 | Data processing method and mobile terminal |
CN107180238A (en) * | 2017-07-27 | 2017-09-19 | 深圳市泰衡诺科技有限公司 | A kind of image preview device and method of intelligent terminal |
CN107203263A (en) * | 2017-04-11 | 2017-09-26 | 北京峰云视觉技术有限公司 | A kind of virtual reality glasses system and image processing method |
CN107330904A (en) * | 2017-06-30 | 2017-11-07 | 北京金山安全软件有限公司 | Image processing method, image processing device, electronic equipment and storage medium |
CN107391997A (en) * | 2017-08-25 | 2017-11-24 | 突维科技有限公司 | Digital photo frame device and its control method |
CN107507217A (en) * | 2017-08-17 | 2017-12-22 | 北京觅己科技有限公司 | Preparation method, device and the storage medium of certificate photo |
CN107610202A (en) * | 2017-08-17 | 2018-01-19 | 北京觅己科技有限公司 | Marketing method, equipment and the storage medium replaced based on facial image |
CN107622256A (en) * | 2017-10-13 | 2018-01-23 | 四川长虹电器股份有限公司 | Intelligent album system based on facial recognition techniques |
CN108305573A (en) * | 2018-01-29 | 2018-07-20 | 京东方科技集团股份有限公司 | Intelligent picture frame system and its control method |
CN108702602A (en) * | 2017-03-10 | 2018-10-23 | 华为技术有限公司 | Share method, electronic equipment and the system of image |
CN108985241A (en) * | 2018-07-23 | 2018-12-11 | 腾讯科技(深圳)有限公司 | Image processing method, device, computer equipment and storage medium |
CN109333539A (en) * | 2018-11-27 | 2019-02-15 | 深圳深度教育股份公司 | Robot and its control method, device and storage medium |
CN111176435A (en) * | 2019-11-06 | 2020-05-19 | 广东小天才科技有限公司 | User behavior-based man-machine interaction method and sound box |
CN111368796A (en) * | 2020-03-20 | 2020-07-03 | 北京达佳互联信息技术有限公司 | Face image processing method and device, electronic equipment and storage medium |
CN111507134A (en) * | 2019-01-31 | 2020-08-07 | 北京奇虎科技有限公司 | Human-shaped posture detection method and device, computer equipment and storage medium |
CN112101073A (en) * | 2019-06-18 | 2020-12-18 | 北京陌陌信息技术有限公司 | Face image processing method, device, equipment and computer storage medium |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2009123711A1 (en) * | 2008-04-02 | 2009-10-08 | Google Inc. | Method and apparatus to incorporate automatic face recognition in digital image collections |
US9477311B2 (en) * | 2011-01-06 | 2016-10-25 | Blackberry Limited | Electronic device and method of displaying information in response to a gesture |
KR102549689B1 (en) * | 2015-12-24 | 2023-06-30 | 삼성전자 주식회사 | Electronic device and method for controlling an operation thereof |
KR20180108341A (en) * | 2017-03-24 | 2018-10-04 | 삼성전자주식회사 | Electronic device and method for capturing contents |
CN107509198A (en) * | 2017-08-31 | 2017-12-22 | 高创(苏州)电子有限公司 | The control method and its relevant apparatus of a kind of digital photo frame |
US10509563B2 (en) * | 2018-01-15 | 2019-12-17 | International Business Machines Corporation | Dynamic modification of displayed elements of obstructed region |
Also Published As
Publication number | Publication date |
---|---|
CN112817503A (en) | 2021-05-18 |
Similar Documents
Publication | Title |
---|---|
US11102450B2 (en) | Device and method of displaying images |
CN109379497B (en) | Voice information playing method, mobile terminal and computer readable storage medium |
US20070139512A1 (en) | Communication terminal and communication method |
CN108108649B (en) | Identity verification method and device |
CN107832784B (en) | Image beautifying method and mobile terminal |
US9819784B1 (en) | Silent invocation of emergency broadcasting mobile device |
CN111372119A (en) | Multimedia data recording method and device and electronic equipment |
CN108521505B (en) | Incoming call processing method and mobile terminal |
CN102034060A (en) | Method and system for controlling operation access, and mobile terminal |
CN107832110A (en) | A kind of information processing method and mobile terminal |
US11553157B2 (en) | Device and method of displaying images |
CN107786427B (en) | Information interaction method, terminal and computer readable storage medium |
CN108881782B (en) | Video call method and terminal equipment |
CN114071425B (en) | Collaboration method and collaboration system between electronic devices, and electronic device |
CN111782115A (en) | Application program control method and device and electronic equipment |
KR20190016671A (en) | Communication device, server and communication method thereof |
CN112817503B (en) | Intelligent display method of electronic photo frame, electronic photo frame and readable storage medium |
CN108628644A (en) | A kind of the startup method, apparatus and mobile terminal of application |
WO2019129264A1 (en) | Interface display method and mobile terminal |
WO2020221024A1 (en) | Information reminding method, mobile terminal and computer readable storage medium |
WO2020173283A1 (en) | Reminding task processing method, terminal, and computer readable storage medium |
KR20110020131A (en) | System and method for delivering feeling during video call |
CN113676395A (en) | Information processing method, related device and readable storage medium |
CN112653789A (en) | Voice mode switching method, terminal and storage medium |
CN109361804B (en) | Incoming call processing method and mobile terminal |
Legal Events
Code | Title |
---|---|
PB01 | Publication |
SE01 | Entry into force of request for substantive examination |
GR01 | Patent grant |