CN115086738B - Information adding method, information adding device, computer equipment and storage medium - Google Patents
- Publication number
- CN115086738B (application CN202210638195.7A)
- Authority
- CN
- China
- Prior art keywords
- target
- key point
- information
- image
- determining
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/431—Generation of visual interfaces for content selection or interaction; Content or additional data rendering
- H04N21/4312—Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/431—Generation of visual interfaces for content selection or interaction; Content or additional data rendering
- H04N21/4318—Generation of visual interfaces for content selection or interaction; Content or additional data rendering by altering the content in the rendering process, e.g. blanking, blurring or masking an image region
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/488—Data services, e.g. news ticker
- H04N21/4882—Data services, e.g. news ticker for displaying messages, e.g. warnings, reminders
Abstract
The embodiment of the application discloses an information adding method, an information adding device, computer equipment and a storage medium. The method acquires the real-time changes of the face position of the anchor user in the live-broadcast interface and performs a region-intersection judgment between the moving position of the banner information and the acquired face position; if an occluding intersection exists, the display position of the banner information is offset accordingly, so that the face of the anchor user is not occluded and the video-viewing experience of audience users is improved.
Description
Technical Field
The present application relates to the field of computer technologies, and in particular, to an information adding method, an information adding device, a computer device, and a storage medium.
Background
With the rapid development of internet technology, network live broadcast has become an emerging mode of online interaction, and its real-time and interactive nature has made it popular with more and more viewers. When an anchor streams video, various banner messages can be triggered to liven up the live-broadcast atmosphere. Such banners are usually displayed scrolling across the upper or lower part of the middle of the live-room video area; however, when many banners are broadcast, some of them can cover the anchor's face for long stretches, degrading the viewing experience of audience users.
Disclosure of Invention
The embodiment of the application provides an information adding method, an information adding device, computer equipment and a storage medium, which can improve video watching experience of a user.
The embodiment of the application provides an information adding method, which comprises the following steps:
determining a current processed image, wherein the current processed image is obtained by adding target information to a current image frame of a target video;
acquiring the current position of the target information in the current processed image and a movement parameter corresponding to the target information;
determining a post-movement position of the target information in a next image frame of the current image frame based on a time interval between adjacent image frames in the target video and the movement parameter;
judging whether the target information in the next image frame shields the target object according to the moved position and the target position of the target object in the current processed image;
and if so, adjusting the moved position of the target information in the next image frame to obtain a next-frame processed image.
Correspondingly, the embodiment of the application also provides an information adding device, which comprises:
a first determining unit, configured to determine a current processed image, where the current processed image is obtained by adding target information to a current image frame of a target video;
an acquisition unit, configured to acquire the current position of the target information in the current processed image and the movement parameter corresponding to the target information;
a second determining unit, configured to determine a post-movement position of the target information in a next image frame of the current image frame based on a time interval between adjacent image frames in the target video and the movement parameter;
a judging unit, configured to judge whether the target information in the next image frame shields the target object according to the moved position and the target position of the target object in the current processed image;
and an adjusting unit, configured to adjust, if so, the moved position of the target information in the next image frame to obtain the next-frame processed image.
In some embodiments, the judging unit includes:
the identification subunit is used for carrying out identification processing on the current processed image and determining the target position of the target object in the current processed image;
And the judging subunit is used for judging whether the position after the movement in the next image frame is overlapped with the target position or not.
In some embodiments, the judging subunit is specifically configured to:
screening out target key point positions from a plurality of key point positions, and generating a key point track based on the target key point positions;
Determining a peripheral track of the target information according to the moved position;
and judging whether the key point track and the peripheral track are intersected or not.
In some embodiments, the judging subunit is specifically configured to:
Screening first candidate key point positions corresponding to the peripheral outline of the target object from the plurality of key point positions; screening second candidate key point positions from the first candidate key point positions according to the movement parameters; screening the target key point position from the second candidate key point positions according to the offset distance between adjacent key point positions in the second candidate key point positions; generating a key point track based on the target key point position;
Determining a peripheral track of the target information according to the moved position;
and judging whether the key point track and the peripheral track are intersected or not.
In some embodiments, the judging subunit is specifically configured to:
Screening first candidate key point positions corresponding to the peripheral outline of the target object from the plurality of key point positions; determining a target direction side opposite to the moving direction from among a plurality of direction sides of the target object; screening out the key point positions located at the target direction side from the first candidate key point positions to obtain the second candidate key point positions; screening the target key point position from the second candidate key point positions according to the offset distance between adjacent key point positions in the second candidate key point positions; generating a key point track based on the target key point position;
Determining a peripheral track of the target information according to the moved position;
judging whether the key point track and the peripheral track are intersected or not.
In some embodiments, the judging subunit is specifically configured to:
Screening first candidate key point positions corresponding to the peripheral outline of the target object from the plurality of key point positions; screening second candidate key point positions from the first candidate key point positions according to the movement parameters; calculating a first offset distance between a target second candidate key point position and the previous adjacent key point position among the second candidate key point positions and a second offset distance between the target second candidate key point position and the next adjacent key point position; determining a ratio of the first offset distance to the second offset distance; if the ratio meets a preset ratio range, determining the target second candidate key point position as the target key point position; generating a key point track based on the target key point position;
Determining a peripheral track of the target information according to the moved position;
and judging whether the key point track and the peripheral track are intersected or not.
In some embodiments, the identification subunit is specifically configured to:
processing the current processed image through a first detection model, and determining a human body area image in the current processed image;
Performing binarization processing on the human body region image to obtain a processed human body image;
And processing the processed human body image through a second detection model to obtain a target position of the human face in the current processed image.
In some embodiments, the adjustment unit comprises:
An acquisition subunit, configured to acquire, from the next image frame, an intersection point position at which the target position intersects the post-movement position;
a determining subunit, configured to determine position adjustment information according to a position relationship between the target position and the intersection position;
and the adjustment subunit is used for adjusting the position after the movement based on the position adjustment information to obtain the image after the next frame processing.
In some embodiments, the determining subunit is specifically configured to:
determining a first edge position and a second edge position in a specified direction from the target positions;
and determining an adjustment direction and an adjustment distance according to the distance between the intersection point position and the first edge position and the second edge position respectively, so as to obtain the position adjustment information.
In some embodiments, the determining subunit is specifically configured to:
determining a first edge position and a second edge position in a specified direction from the target positions;
calculating a first distance between the intersection point position and the first edge position in the appointed direction and a second distance between the intersection point position and the second edge position; if the first distance is greater than the second distance, determining the adjustment direction according to the direction of the intersection point position towards the second edge position, and determining the adjustment distance according to the second distance; and if the first distance is smaller than the second distance, determining the adjustment direction according to the direction of the intersection point position towards the first edge position, and determining the adjustment distance according to the first distance to obtain the position adjustment information.
Correspondingly, the embodiment of the application also provides a computer device, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the computer program, implements the information adding method provided by any one of the embodiments of the application.
Correspondingly, the embodiment of the application also provides a computer-readable storage medium storing a plurality of instructions, the instructions being suitable for being loaded by a processor to execute the above information adding method.
According to the embodiment of the application, the face position of the anchor user in the live-broadcast interface is tracked as it changes in real time, and a region-intersection judgment is performed between the moving position of the banner information and the acquired face position; if an occluding intersection exists, the display position of the banner information is offset accordingly, so that the face of the anchor user is not occluded and the video-viewing experience of audience users is improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flow chart of an information adding method according to an embodiment of the present application.
Fig. 2 is an application scenario schematic diagram of an information adding method according to an embodiment of the present application.
Fig. 3 is a flowchart of another information adding method according to an embodiment of the present application.
Fig. 4 is an application scenario schematic diagram of another information adding method according to an embodiment of the present application.
Fig. 5 is an application scenario schematic diagram of another information adding method according to an embodiment of the present application.
Fig. 6 is a block diagram of an information adding device according to an embodiment of the present application.
Fig. 7 is a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are only some, but not all embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to fall within the scope of the application.
The embodiment of the application provides an information adding method, an information adding device, a storage medium and computer equipment. Specifically, the information adding method of the embodiment of the application can be executed by a computer device, where the computer device can be a server or the like. The server may be an independent physical server, a server cluster or distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDN, big data, and artificial intelligence platforms.
For example, the computer device may be a terminal that determines a current processed image, wherein the current processed image is obtained by adding target information to a current image frame of a target video; acquiring the current position of the target information in the current processed image and the movement parameters corresponding to the target information; determining a post-movement position of target information in a next image frame of the current image frame based on a time interval between adjacent image frames in the target video and movement parameters; judging whether the target information in the next image frame shields the target object according to the moved position and the target position of the target object in the current processed image; if so, the position of the target information after the movement in the next image frame is adjusted, and the image after the next frame processing is obtained.
In view of the above problems, the embodiments of the present application provide an information adding method, apparatus, computer device, and storage medium, which can improve the video viewing experience of users.
Each of these is described in detail below. Note that the description of the embodiments below is not intended to limit the preferred embodiments.
The embodiment of the application provides an information adding method which can be executed by a terminal or a server. The embodiment of the application is described taking the case where the information adding method is executed by a terminal as an example, where the terminal can be an anchor client.
Referring to fig. 1, fig. 1 is a flow chart of an information adding method according to an embodiment of the application. The specific flow of the information adding method can be as follows:
101. A currently processed image is determined.
The current processed image is obtained by adding target information to the current image frame of the target video. The target video refers to the video played on the user interface of the current terminal and can be of various types, played through different video applications. For example, the target video can be pre-recorded entertainment video, including movies, television series, and other productions stored in a video database, played through video playing software; or the target video can be video recorded and played in real time, including live video, played through live-streaming software and the like.
The current video frame refers to any original video frame in the target video, and the target information refers to display information added in the playing process of the target video.
For example, when the target video is an entertainment video, the target information may be barrage information, that is, barrage information input by the user when the target video is played by the video playing software; or when the target video is a live video, the target information may be banner information, that is, when the target video is played through live software, the live software determines banner content according to the live content, and the like.
Specifically, the current processed image can be obtained by capturing the playing interface corresponding to the current video frame. For example, referring to fig. 2, fig. 2 is a schematic application scenario diagram of an information adding method according to an embodiment of the present application. The video playing interface shown in fig. 2 plays the current image frame of the target video together with the target information; taking a screenshot of the playing picture of the video playing interface then yields the current processed image.
102. The current position of the target information in the current processed image and the movement parameter corresponding to the target information are acquired.
In the embodiment of the application, the target information can be displayed in a moving manner while the target video plays, so the display position of the target information can differ across image frames of the target video. For example, if the video playing interface plays the first through tenth image frames of the target video during the period from when the target information starts to be displayed to when its display ends, the target information is displayed at a different position in each of the first through tenth image frames.
The movement parameters refer to various parameters of the target information in the movement display process, and may include an initial display position, a movement direction, a movement speed, and the like of the target information, for example, the initial display position may be the upper left side of the video playing interface, the movement direction may be the movement to the right, and the movement speed may be 1 cm per second.
103. The post-movement position of the target information in the next image frame of the current image frame is determined based on the time interval between adjacent image frames in the target video and the movement parameter.
The time interval between adjacent image frames refers to the time interval between two adjacent image frames played by the video playing interface in the process that the target video is played by the video playing interface, for example, the time interval may be 0.1 second, etc.
The movement parameters may include a moving speed. The moving distance of the target information from the current image frame to the next image frame can be determined from the time interval and the moving speed, and adding this moving distance to the target information's display position in the current image frame gives the position of the target information in the next image frame, that is, the post-movement position.
For example, if the acquired display position of the target information in the current image frame is P1, the time interval is 0.1 seconds, and the moving speed is 1 cm per second, then multiplying the time interval by the moving speed gives a moving distance of 0.1 cm, and adding the display position and the moving distance gives the post-movement position P1 + 0.1. Since the current image frame and the next image frame are both image frames of the target video, they have the same size, and each position in the current image frame corresponds to the same position in the next image frame.
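For illustration only, the per-frame displacement calculation can be sketched in a few lines of Python; the function and parameter names here are hypothetical and not part of the patent:

```python
def moved_position(current_x: float, frame_interval_s: float,
                   speed: float, direction: int) -> float:
    """Predict the banner's coordinate in the next frame.

    direction is +1 for rightward movement, -1 for leftward;
    speed is in the same length unit per second as current_x.
    """
    return current_x + direction * speed * frame_interval_s

# Matching the example above: P1 = 100, interval 0.1 s, speed 1 unit/s:
# moved_position(100, 0.1, 1, +1) -> 100.1, i.e. P1 + 0.1
```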
104. Whether the target information in the next image frame shields the target object is judged according to the moved position and the target position of the target object in the current processed image.
The target object refers to main display content in the target video, for example, the target object may be specified text, character, or pattern.
In the embodiment of the application, the video playing interface can display the target information while playing the target video. To prevent the target information from occluding the target object, which would interfere with the user's viewing of the target video, the display positions of the target object and of the target information on the video playing interface can be detected, and the display position of the target information can be adjusted according to the detection result, thereby resolving the problem of the target object being occluded.
In some embodiments, in order to obtain the accurate position of the target object, the step of "determining whether the target object is blocked by the target information in the next image frame according to the moved position and the target position where the target object is located in the current processed image" may include the following operations:
performing recognition processing on the current processed image, and determining the target position of the target object in the current processed image;
And judging whether the position after the movement in the next image frame is overlapped with the target position or not.
Specifically, when the identification processing is performed on the current processed image, the target object in the current processed image can be identified through the detection model, so that the position of the target object in the processed image, namely, the target position, is determined.
In some embodiments, the target object is a human face part, and in order to improve the detection accuracy of the target object, the step of "performing recognition processing on the current processed image and determining the target position of the target object in the current processed image" may include the following operations:
Processing the current processed image through a first detection model, and determining a human body area image in the current processed image;
Binarizing the human body region image to obtain a processed human body image;
and processing the processed human body image through a second detection model to obtain the target position of the human face in the current processed image.
First, image preprocessing is performed on a currently processed image, which refers to processing performed before feature extraction, segmentation, and matching are performed on the image. The main purpose of image preprocessing is to eliminate extraneous information in the image, recover useful real information, enhance the detectability of related information and maximally simplify data, thereby improving the reliability of feature extraction, image segmentation, matching and recognition. The image preprocessing process generally comprises the steps of digitizing, geometric transformation, normalization, smoothing, restoration, enhancement and the like.
After the current processed image is subjected to image preprocessing, the preprocessed current processed image is input into a first detection model, which is used to identify human body regions in the image. For example, the first detection model may be a YCbCr skin color model; detecting the current processed image with the YCbCr skin color model yields the human body regions such as the palms, arms, and face. Based on the obtained human body regions, the background outside those regions is quickly removed from the current processed image, segmenting out the human body region image.
The YCbCr skin color model is a color model commonly used for skin color detection, where Y represents luminance, Cr the red-difference chroma component, and Cb the blue-difference chroma component. Differences in the appearance of human skin color are driven mainly by chromaticity, and the skin colors of different people concentrate in a fairly small region: on the CbCr plane of the YCbCr color space, skin tones are distributed within an approximately elliptical area. Whether a pixel belongs to skin can therefore be confirmed simply by checking whether its CbCr value falls inside that elliptical region. Converting the image into YCbCr space and projecting it onto the CbCr plane yields the skin-color sample points.
In some embodiments, to reduce the amount of input data of the detection model, binarization processing may be performed on the segmented human body region image to obtain a processed human body image. The binarization of the image is to set the gray value of the pixel point on the image to 0 or 255, that is, the whole image is presented with obvious visual effects of only black and white, so that the calculated amount of the model can be reduced, and the detection efficiency of the model can be improved.
Further, the processed human body image is input into a second detection model, the second detection model is used for recognizing a human face part, for example, the second detection model can be a human face detection model, the processed human body image is detected and processed through the human face detection model, and the target position of the human face part in the current processed image is obtained, wherein the target position can comprise the positions of all human face key points of the human face part.
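As a rough illustration of this two-stage pipeline, the following Python/OpenCV sketch uses a rectangular CbCr skin range in place of the elliptical skin model and a Haar cascade in place of the unspecified second detection model; both substitutions are assumptions made for the example:

```python
import cv2
import numpy as np

def face_position(bgr_image: np.ndarray):
    # Stage 1: coarse skin-region detection in YCbCr space.
    ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)
    # Crude rectangular CbCr bounds standing in for the elliptical skin region.
    skin_mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))
    body_only = cv2.bitwise_and(bgr_image, bgr_image, mask=skin_mask)

    # Binarize the segmented body region to shrink the second model's input.
    gray = cv2.cvtColor(body_only, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)

    # Stage 2: face localisation (a Haar cascade here; the patent only
    # requires "a second detection model").
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(binary)
    return faces  # list of (x, y, w, h) face rectangles
```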
After determining the target position of the target object in the current processed image, the target position and the moved position of the target information can be compared, and whether the target position and the moved position overlap or not is judged.
In some embodiments, in order to reduce the calculation processing amount, the step of "determining whether there is overlap between the post-movement position and the target position in the next image frame" may include the following operations:
screening target key point positions from the plurality of key point positions, and generating a key point track based on the target key point positions;
determining the peripheral track of the target information according to the moved position;
and judging whether the key point track and the peripheral track are intersected or not.
The target position may include the key point position corresponding to each object key point forming the target object. For example, if the key points forming the target object include a first object key point, a second object key point, a third object key point, and so on, then the target position includes a first key point position corresponding to the first object key point, a second key point position corresponding to the second object key point, a third key point position corresponding to the third object key point, and so on.
The target key point position refers to a main key point position in the target position for detecting whether the target position and the moved position overlap.
Specifically, to generate the key point track based on the target key points, the target key points may be connected sequentially in their position arrangement order, yielding the key point track.
For example, the target key point positions may include the positions of the first object key point, the fifth object key point, the sixth object key point, the eighth object key point, and so on, and their position arrangement order can be: the first object key point, the fifth object key point, the sixth object key point, and the eighth object key point. The first object key point may then be connected to the fifth object key point, the fifth to the sixth, the sixth to the eighth, and the eighth back to the first, yielding the connecting track of the target key point positions, that is, the key point track.
In the embodiment of the application, the target information corresponds to a display area in the next image frame, and the position of the display area is the position after the movement. Specifically, the peripheral track of the target information is determined according to the moved position, that is, the peripheral boundary of the display area is selected from the moved position to obtain the peripheral track, for example, the display area may be a rectangular display area, and then the peripheral rectangular outline of the rectangular display area is obtained, that is, the peripheral track can be obtained.
Further, whether the position after the movement in the next image frame is overlapped with the target position is judged by judging whether the key point track is intersected with the peripheral track or not. If the key point track is intersected with the peripheral track, the position overlapping with the target position after the movement in the next image frame can be determined; if the key point track and the peripheral track are not intersected, it can be determined that the position of the next image frame after movement is not overlapped with the target position.
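A minimal sketch of this intersection test follows. It assumes the key point track is a closed polyline and the banner display area is an axis-aligned rectangle (x, y, w, h), and it ignores degenerate collinear touching cases:

```python
def segments_intersect(p1, p2, p3, p4):
    """Proper intersection of 2-D segments p1p2 and p3p4 via orientation signs."""
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    d1, d2 = cross(p3, p4, p1), cross(p3, p4, p2)
    d3, d4 = cross(p1, p2, p3), cross(p1, p2, p4)
    return ((d1 > 0) != (d2 > 0)) and ((d3 > 0) != (d4 > 0))

def trajectory_hits_banner(keypoints, banner_rect):
    """keypoints: ordered contour points; banner_rect: (x, y, w, h)."""
    x, y, w, h = banner_rect
    corners = [(x, y), (x+w, y), (x+w, y+h), (x, y+h)]
    edges = list(zip(corners, corners[1:] + corners[:1]))   # four side lines
    links = list(zip(keypoints, keypoints[1:] + keypoints[:1]))  # closed track
    return any(segments_intersect(a, b, c, d)
               for a, b in links for (c, d) in edges)
```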
In some embodiments, to further reduce the computational load, the step of "screening out the target key point positions from the plurality of key point positions" may include the following operations:
Screening first candidate key point positions corresponding to the peripheral outline of the target object from a plurality of key point positions;
screening a second candidate key point position from the first candidate key point positions according to the movement parameters;
and screening the target key point position from the second candidate key point positions according to the offset distance between adjacent key point positions in the second candidate key point positions.
The processed human body image is detected through the face detection model to obtain the key point position corresponding to each face key point in the current processed image; the key points mark the parts of the face, such as the eyes, nose, mouth, and facial contour.
The peripheral outline of the target object refers to the contour positions of the target object in the current processed image; the key points located at those contour positions are then determined from the plurality of key point positions corresponding to the detected target object, yielding the first candidate key point positions.
Specifically, a convex hull algorithm may be used to screen the key points of the peripheral contour. The convex hull (Convex Hull) is a concept in computational geometry (graphics): in a real vector space V, for a given set X, the intersection S of all convex sets containing X is called the convex hull of X. The convex hull of X may be constructed as the set of all convex combinations of points (x1, …, xn) in X. In two-dimensional Euclidean space, a convex hull can be visualized as a rubber band stretched to just enclose all the points. That is, given a set of points on a two-dimensional plane, the convex hull is the convex polygon formed by connecting the outermost points, which contains all the points in the point set.
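Assuming SciPy is available, this first screening step could look like the following sketch, which extracts the contour set m1 with the clockwise ordering used later:

```python
import numpy as np
from scipy.spatial import ConvexHull

def peripheral_contour(keypoints: np.ndarray) -> np.ndarray:
    """Keep only the face keypoints on the outer contour (set m1).

    keypoints: (n, 2) array of (x, y) positions. For 2-D input,
    ConvexHull returns hull vertices ordered counter-clockwise;
    reversing gives the clockwise ordering used by later steps.
    """
    hull = ConvexHull(keypoints)
    return keypoints[hull.vertices[::-1]]
```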
In some embodiments, to reduce the amount of computation, the step of "screening the second candidate keypoint location from the first candidate keypoint locations according to the movement parameter" may include the following operations:
Determining a target direction side opposite to the moving direction from among a plurality of direction sides of the target object;
and screening the key point positions positioned at the target direction side from the first candidate key point positions to obtain second candidate key point positions.
The movement parameter may include a movement direction, and the first candidate key points are further screened according to the moving direction of the target information. The moving direction of the target information may be horizontally left or horizontally right, etc.; in that case, judging whether the position of the target information occludes the target object reduces to judging whether the left or the right side of the target object intersects the position of the target information.
Wherein the plurality of directional sides of the target object include: the upper side, the lower side, the left side, the right side and the like of the target object, and if the moving direction is horizontal to the right, the target direction side can be determined as the left side of the target object; if the movement direction is horizontally moved to the left, the target direction side can be determined to be the right side of the target object or the like.
And after the target direction side is determined, screening the key point position positioned on the target direction side from the first candidate key point positions to obtain a second candidate key point position.
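A small sketch of this second screening step follows; splitting the contour at its horizontal midpoint is one plausible reading of "left side / right side" and is an assumption of this example:

```python
def screen_by_direction(contour_points, moving_left: bool):
    """From the hull points (m1), keep the side facing the banner (m2).

    If the banner moves left it approaches the face from the right, so
    keep points whose x lies right of the contour's x midpoint, and
    vice versa for a rightward-moving banner.
    """
    xs = [p[0] for p in contour_points]
    mid_x = (min(xs) + max(xs)) / 2
    if moving_left:
        return [p for p in contour_points if p[0] >= mid_x]
    return [p for p in contour_points if p[0] <= mid_x]
```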
In some embodiments, to reduce the amount of computation, the step of "screening the target keypoint location from the second candidate keypoint based on the offset distance between adjacent keypoint locations among the second candidate keypoint locations" may include the following operations:
calculating a first offset distance between a target second candidate key point position and a previous adjacent key point position in the second candidate key point positions and a second offset distance between the target second candidate key point position and a next adjacent key point position;
Determining a ratio of the first offset distance to the second offset distance;
And if the ratio meets the preset ratio range, determining the target second candidate key point position as the target key point position.
Specifically, the second candidate key point position is further screened according to the change trend of the connecting line of the second candidate key point position. Traversing each second candidate key point position, judging the trend of each second candidate key point position, namely judging whether the second candidate key point position is an inflection point, if so, reserving the second candidate key point position, and if not, deleting the second candidate key point position.
For example, for a key point Pn among the second candidate key points, to judge whether Pn is an inflection point, the offset values with respect to its two neighbouring key point positions, the previous second candidate key point Pn-1 and the next one Pn+1, are calculated first: the offsets of Pn and Pn-1 are offsetX1 = Pn.x - Pn-1.x and offsetY1 = Pn.y - Pn-1.y; the offsets of Pn and Pn+1 are offsetX2 = Pn+1.x - Pn.x and offsetY2 = Pn+1.y - Pn.y. The offset ratios are calculated from the offsets as offsetFactor1 = offsetX1 / offsetY1 and offsetFactor2 = offsetX2 / offsetY2, and finally the difference diff = offsetFactor2 - offsetFactor1 of the two ratios is compared: if diff lies outside the given reasonable offset range, i.e. the preset ratio range, diff > diffMax (the maximum range value) or diff < diffMin (the minimum range value), Pn can be determined to be an inflection point. Each second candidate key point position can be judged in this way, and the inflection points are retained, yielding the target key point positions.
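The inflection-point test above translates almost directly into code. This sketch adds a guard against division by zero, which the text does not discuss (an assumption), and keeps the two endpoints since they have no two neighbours:

```python
def inflection_points(pts, diff_min: float, diff_max: float):
    """Keep only inflection points of the contour polyline (set m3).

    A point Pn is kept when the change between the incoming and
    outgoing offset ratios falls outside [diff_min, diff_max].
    """
    kept = [pts[0]]                       # endpoints lack two neighbours
    for i in range(1, len(pts) - 1):
        ox1, oy1 = pts[i][0] - pts[i-1][0], pts[i][1] - pts[i-1][1]
        ox2, oy2 = pts[i+1][0] - pts[i][0], pts[i+1][1] - pts[i][1]
        if oy1 == 0 or oy2 == 0:          # guard the division (assumption)
            kept.append(pts[i])
            continue
        diff = ox2 / oy2 - ox1 / oy1      # offsetFactor2 - offsetFactor1
        if diff > diff_max or diff < diff_min:
            kept.append(pts[i])           # inflection point: retain
    kept.append(pts[-1])
    return kept
```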
105. The moved position of the target information in the next image frame is adjusted to obtain the next-frame processed image.
In some embodiments, to avoid the occlusion of the target object by the target information, the step of adjusting the moved position of the target information in the next image frame to obtain the processed image of the next frame may include the following operations:
Acquiring an intersection point position of the target position and the moved position from the next image frame;
Determining position adjustment information according to the position relation between the target position and the intersection point position;
And adjusting the moved position based on the position adjustment information to obtain the image after the next frame processing.
Obtaining the intersection point position where the target position intersects the moved position means obtaining the intersection point of the key point track generated from the target position with the peripheral contour track of the moved position. The position adjustment information for the target information is then determined according to the positional relationship between the target position of the target object in the next image frame and the intersection point position.
In some embodiments, in order to improve the position adjustment efficiency, the step of "determining the position adjustment information according to the positional relationship of the target position and the intersection position" may include the following operations:
Determining a first edge position and a second edge position in a specified direction from the target positions;
and determining an adjusting direction and an adjusting distance according to the distances between the intersection point position and the first edge position and the second edge position respectively, so as to obtain position adjusting information.
The specified direction may be the vertical direction; the first edge position refers to the edge position with the largest vertical coordinate among the target positions, and the second edge position refers to the edge position with the smallest vertical coordinate among the target positions.
For example, the first edge position is (X1, Y1), and the second edge position is (X2, Y2), where Y1 is greater than Y2, and Y1 takes the largest value among all Y values of all target positions, and Y2 takes the smallest value among all Y values of all target positions.
In some embodiments, the step of determining the adjustment direction and the adjustment distance according to the distance between the intersection point position and the first edge position and the second edge position, respectively, may include the following operations:
Calculating a first distance between the intersection point position and the first edge position in the appointed direction and a second distance between the intersection point position and the second edge position;
If the first distance is greater than the second distance, determining an adjustment direction according to the direction of the intersection point position towards the second edge position, and determining an adjustment distance according to the second distance;
If the first distance is smaller than the second distance, determining an adjustment direction according to the direction of the intersection point position towards the first edge position, and determining an adjustment distance according to the first distance.
For example, the first edge position is (X1, Y1), the second edge position is (X2, Y2), the intersection position is (X3, Y3), and the first distance between the intersection position and the first edge position in the specified direction is calculated as: Y1-Y3, a second distance between the intersection point position and the second edge position in the appointed direction is calculated as follows: Y3-Y2.
Specifically, if the intersection point position is closer to the first edge position, the adjustment direction may be determined as the direction from the intersection point toward the first edge position, and the adjustment distance may be determined from the first distance; if the intersection point position is closer to the second edge position, the adjustment direction may be determined as the direction from the intersection point toward the second edge position, and the adjustment distance may be determined from the second distance.
For example, if the first distance Y1-Y3 is greater than the second distance Y3-Y2, the direction of the intersection point position toward the second edge position, that is, the downward direction, may be determined as the adjustment direction, and the second distance is taken as the adjustment distance, thereby obtaining the position adjustment information; if the first distance Y1-Y3 is smaller than the second distance Y3-Y2, the direction of the intersection point position toward the first edge position, that is, the upward direction, may be determined as the adjustment direction, and the first distance may be taken as the adjustment distance, thereby obtaining the position adjustment information.
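Putting the edge-distance comparison into code (names hypothetical; the direction is expressed abstractly as "toward the first/second edge", since the up/down reading in the text depends on the image coordinate convention):

```python
def position_adjustment(y_intersect: float, y_max: float, y_min: float):
    """Pick the shorter escape direction along the vertical axis.

    y_max / y_min are the largest and smallest y over the face keypoints
    (the first and second edge positions). Returns (direction, distance),
    where direction +1 means "toward the first edge (y_max)" and
    -1 means "toward the second edge (y_min)".
    """
    first_distance = y_max - y_intersect    # Y1 - Y3 in the text
    second_distance = y_intersect - y_min   # Y3 - Y2 in the text
    if first_distance > second_distance:
        return -1, second_distance          # move toward the second edge
    return +1, first_distance               # move toward the first edge
```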
Further, the position after the movement is adjusted based on the position adjustment information, so that the next frame of processed image can be obtained, namely, the next frame of played image after the current processed image is played.
In some embodiments, if the image frame of the target video displays other information in addition to the target information, in order to ensure the display effect of the video playing interface, after the step of adjusting the post-movement position of the target information in the next image frame, the method may further include the following steps:
if other display information exists in the next-frame processed image and the adjusted position of the target information occludes the other display information;
acquiring the display level of the target information and the display level of the other display information;
and if the display level of the target information is higher than that of the other display information, hiding, in the next-frame processed image, the content of the other display information that is occluded by the target information.
Specifically, it is detected whether other display information besides the target information exists in the next-frame processed image. If other display information exists, it is judged whether the adjusted position of the target information overlaps the display position of the other display information; if they overlap, the target information is determined to occlude the other display information.
When the target information shields other display information, the display of the target information or other display information can be adjusted through the display level of the target information and the other display information. Wherein the display level indicates an importance level of the information, and the higher the importance level is, the higher the display level is.
If the display level of the target information is higher than that of other display information, in order to ensure the display of the target information preferentially, displaying the target information in an overlapping area of the target information and the other display information, and hiding the other display information in the overlapping area; if the display level of the target information is lower than that of the other display information, the other display information may be displayed in an overlapping area of the target information and the other display information so as to preferentially ensure the display of the other display information, and the target information in the overlapping area may be hidden.
In some embodiments, in order to ensure that other display information and target information are displayed together, when there is overlap between other display information and target information, the display position of the target information may be continuously adjusted, so that the images display the contents of the other display information and the target information together after the next frame is processed, so that a user can watch more display contents conveniently.
The embodiment of the application discloses an information adding method, which comprises the following steps: determining a current processed image, wherein the current processed image is obtained by adding target information on a current image frame of a target video; acquiring the current position of the target information in the current processed image and the movement parameters corresponding to the target information; determining a post-movement position of target information in a next image frame of the current image frame based on a time interval between adjacent image frames in the target video and movement parameters; judging whether the target information in the next image frame shields the target object according to the moved position and the target position of the target object in the current processed image; if so, the position of the target information after the movement in the next image frame is adjusted, and the image after the next frame processing is obtained. Thus, the video viewing experience of the user can be improved.
In accordance with the above description, the information adding method of the present application will be further described below by way of example. Referring to fig. 3, fig. 3 is a flow chart of another information adding method according to an embodiment of the present application, and taking an example that the information adding method is applied to a live scene, a specific flow may be as follows:
201. When the terminal receives a display instruction for the target banner, it acquires a live image of the current live-broadcast interface.
The target banner may be system prompt information or information input by a user in the live broadcasting room, and the live broadcasting image refers to a video frame image played by the current live broadcasting interface, that is, a real-time video stream image. In the embodiment of the application, when the target banner starts to be displayed, the detection logic is started, and the detection logic performs banner position detection and position judgment operations.
202. The terminal performs face detection on the live image and determines the face position of the anchor user in the live image.
Specifically, the obtained live image is preprocessed, and the preprocessed live image is detected with the YCbCr skin color model to obtain the palm, arm, face and other skin regions in the live image. Based on the obtained regions, the background of the live image is quickly removed, so that the human body region image of the anchor user is segmented out, and binarization is applied to the body region image; the binarization reduces the input data volume of the face detection model, which cuts the amount of calculation and improves efficiency. Detecting the processed body image with the face detection model then quickly locates the face, yielding the key point set m [(X1, Y1), …, (Xn, Yn)] of the face position.
For example, referring to fig. 4, fig. 4 is a schematic application scenario of another information adding method provided by the embodiment of the present application, in the live image shown in fig. 4, the face position of the anchor user is identified through a face detection model, so as to obtain key points of the face position.
203. The terminal judges whether the display position of the target banner overlaps the face position.
In the embodiment of the application, whether the banner intersects the face is pre-calculated in order to judge in advance the possibility that the banner will occlude the face; since the detection and judgment logic runs periodically, this keeps the amount of calculation low. The method mainly comprises the following steps:
Peripheral contour point screening: first the face key points are selected. The set m of face key point positions is obtained through the face detection model, and the key points mark the positions of the parts of the face. Judging whether the face intersects the banner reduces to judging whether the connecting lines of adjacent points on the face's peripheral outline intersect the banner, which cuts the amount of calculation. The peripheral contour points are therefore screened out of the obtained key point set m: the peripheral contour key points m1 are obtained quickly by a convex hull algorithm, and the screened face peripheral contour key points m1 are sorted clockwise by position, which simplifies the subsequent processing.
Secondary contour point screening: screening according to the movement direction of the banner. A banner generally moves in a fixed direction, usually horizontally from left to right or from right to left, so judging whether the banner intersects the face reduces to judging whether the left or the right side of the face intersects the banner. If the banner moves leftwards, the key points on the right side of the face are selected from the key point set m1; otherwise, the key points on the left side are selected. Direction screening finally yields the key point set m2.
Third contour point screening: removing part of the key points according to the trend of the key point connecting lines. The key point set m2 is traversed and the trend at each key point is judged in turn, that is, whether the key point is an inflection point; if it is not, the key point is deleted. Inflection points are determined by calculating the offset values with respect to the preceding and following key points: an offset range is set, and a point whose offset falls outside that range is judged to be an inflection point.
For example, referring to fig. 5, fig. 5 is a schematic diagram of an application scenario of another information adding method provided in an embodiment of the present application. To decide whether a key point Pn is an inflection point, the offset values against the two neighboring key points Pn-1 and Pn+1 are calculated. The offset from the previous key point is offsetX1 = Pn.x - Pn-1.x and offsetY1 = Pn.y - Pn-1.y; the offset from the following key point is offsetX2 = Pn+1.x - Pn.x and offsetY2 = Pn+1.y - Pn.y. The offset ratios are then offsetFactor1 = offsetX1 / offsetY1 and offsetFactor2 = offsetX2 / offsetY2, and the difference of the two ratios is diff = offsetFactor2 - offsetFactor1. If this difference lies outside the given reasonable offset range, i.e., diff > diffMax (maximum range value) or diff < diffMin (minimum range value), Pn is considered an inflection point. The key point set m3 is finally obtained through the above process.
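For illustration, a direct transcription of this inflection point test; the epsilon guarding the division by offsetY and the choice to always keep the two end points are assumptions not spelled out above:

```python
def screen_inflection_points(m2, diff_min, diff_max):
    """Third screening: keep only inflection points of the contour polyline."""
    eps = 1e-6
    m3 = [m2[0]]  # end points are kept (assumption)
    for i in range(1, len(m2) - 1):
        off_x1 = m2[i][0] - m2[i - 1][0]      # offset from previous point
        off_y1 = m2[i][1] - m2[i - 1][1]
        off_x2 = m2[i + 1][0] - m2[i][0]      # offset to next point
        off_y2 = m2[i + 1][1] - m2[i][1]
        factor1 = off_x1 / (off_y1 if abs(off_y1) > eps else eps)
        factor2 = off_x2 / (off_y2 if abs(off_y2) > eps else eps)
        diff = factor2 - factor1
        if diff > diff_max or diff < diff_min:
            m3.append(m2[i])                  # trend changes: inflection point
    m3.append(m2[-1])
    return m3  # key point set m3
```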
Calculating whether the banner intersects the contour point connecting lines: the three screening passes above all serve to reduce the amount of intersection calculation. It is only necessary to traverse the key point set m3 and test, in turn, whether the connecting line of each pair of adjacent key points intersects the side lines of the banner. Since the display position of the banner is a rectangle, the left and right side lines or the upper and lower side lines are selected for the intersection judgment according to the movement direction. Because this judgment is done in advance as preprocessing, the predicted offset of the banner's movement within the detection interval must be added to the positions of the banner's side lines, so that it is the pre-calculated next position of the banner's side lines that is intersected with the connecting lines of adjacent key points in the face key point set m3.
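For illustration, a minimal sketch of the final intersection test using the standard orientation method; the handling of collinear touching cases is simplified, and the shifted banner side lines stand for the pre-calculated next position described above:

```python
def segments_intersect(p1, p2, p3, p4):
    """Orientation test: do segments p1-p2 and p3-p4 properly cross?"""
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    d1 = cross(p3, p4, p1)
    d2 = cross(p3, p4, p2)
    d3 = cross(p1, p2, p3)
    d4 = cross(p1, p2, p4)
    return (d1 > 0) != (d2 > 0) and (d3 > 0) != (d4 > 0)

def banner_will_hit_face(m3, banner_edges, predicted_dx):
    """Test the banner's pre-calculated next side lines against m3.

    banner_edges is a list of ((x1, y1), (x2, y2)) side lines of the banner
    rectangle, chosen according to its movement direction; predicted_dx is
    the assumed horizontal offset the banner moves within the detection
    interval.
    """
    shifted = [((x1 + predicted_dx, y1), (x2 + predicted_dx, y2))
               for (x1, y1), (x2, y2) in banner_edges]
    for a, b in zip(m3, m3[1:]):           # adjacent key point connecting lines
        for e1, e2 in shifted:
            if segments_intersect(a, b, e1, e2):
                return True
    return False
```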
If the banner intersects the contour point connecting lines, step 204 may be performed; if the banner does not intersect the contour point connecting lines, step 206 may be performed.
204. The terminal determines the position adjustment information according to the intersection point position of the display position and the face position.
Specifically, the intersection point position of the display position of the target banner and the face position is obtained, and where the intersection point lies within the face is judged. The value ymax of the key point with the largest y coordinate and the value ymin of the key point with the smallest y coordinate are selected from the face key point set m3, and the y coordinate of the face center point is then ym = (ymax + ymin) / 2. If the y value of the intersection point is larger than ym, the intersection point is offset upwards relative to the face center, and the banner is accordingly offset upwards away from the face; otherwise, the banner is offset downwards away from the face.
205. The terminal adjusts the display position of the target banner in the next frame of live image according to the position adjustment information.
After the position adjustment information is determined, the display position of the target banner may be adjusted based on it. Only the y value of the banner's coordinates needs to be adjusted: the offset is the difference between the banner's y coordinate and ymax, the largest y coordinate among the key points, or ymin, the smallest. The banner can automatically adjust its position according to this offset, so that occlusion of the face is avoided in time.
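For illustration, a minimal sketch that follows steps 204 and 205 literally; the y-axis convention (and therefore whether "past ymax" means above or below the face on screen) is not fixed by the text and is an assumption here:

```python
def banner_y_offset(intersection_y, banner_y, m3):
    """Compute the banner's vertical offset from the intersection position."""
    ys = [y for _, y in m3]
    ymin, ymax = min(ys), max(ys)
    ym = (ymax + ymin) / 2          # y coordinate of the face center point
    if intersection_y > ym:
        return ymax - banner_y      # intersection above center: clear past ymax
    return ymin - banner_y          # intersection below center: clear past ymin
```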
206. Ending the operation.
When the banner does not intersect the contour point connecting lines, i.e., the display of the banner does not block the face of the anchor user, the display position of the banner does not need to be adjusted and the operation may be ended.
The embodiment of the application discloses an information adding method, which comprises the following steps: when a terminal receives a display instruction for a target banner, it acquires a live image of the current live interface, performs face detection on the live image, determines the face position of the anchor user in the live image, and judges whether the display position of the target banner overlaps the face position; if they overlap, position adjustment information is determined according to the intersection point position of the display position and the face position, and the display position of the target banner in the next frame of live image is adjusted according to the position adjustment information; if there is no overlap, the operation is ended. Therefore, the face of the anchor user in the live broadcast room can be prevented from being blocked, which improves the live viewing experience of audience users.
In order to facilitate better implementation of the information adding method provided by the embodiment of the application, the embodiment of the application further provides an information adding device based on the information adding method. The meanings of the terms therein are the same as in the above information adding method, and specific implementation details can refer to the description in the method embodiments.
Referring to fig. 6, fig. 6 is a block diagram of an information adding apparatus according to an embodiment of the present application, where the apparatus includes:
A first determining unit 301, configured to determine a current processed image, where the current processed image is obtained by adding target information to a current image frame of a target video;
An obtaining unit 302, configured to obtain a current position of the target information in the current processed image and a movement parameter corresponding to the target information;
a second determining unit 303, configured to determine a post-movement position of the target information in a next image frame of the current image frame based on a time interval between adjacent image frames in the target video and the movement parameter;
A judging unit 304, configured to judge whether the target information in the next image frame occludes the target object according to the moved position and a target position where the target object in the current processed image is located;
and the adjusting unit 305 is configured to adjust the moved position of the target information in the next image frame if yes, so as to obtain a processed image of the next frame.
In some embodiments, the judging unit includes:
the identification subunit is used for carrying out identification processing on the current processed image and determining the target position of the target object in the current processed image;
And the judging subunit is used for judging whether the position after the movement in the next image frame is overlapped with the target position or not.
In some embodiments, the judging subunit is specifically configured to:
screening out target key point positions from a plurality of key point positions, and generating a key point track based on the target key point positions;
Determining a peripheral track of the target information according to the moved position;
and judging whether the key point track and the peripheral track are intersected or not.
In some embodiments, the judging subunit is specifically configured to:
Screening first candidate key point positions corresponding to the peripheral outline of the target object from the plurality of key point positions; screening a second candidate key point position from the first candidate key point positions according to the movement parameters; screening the target key point position from the second candidate key point positions according to the offset distance between adjacent key point positions in the second candidate key point positions; generating a key point track based on the target key point position;
Determining a peripheral track of the target information according to the moved position;
and judging whether the key point track and the peripheral track are intersected or not.
In some embodiments, the judging subunit is specifically configured to:
Screening first candidate key point positions corresponding to the peripheral outline of the target object from the plurality of key point positions; determining a target direction side opposite to the moving direction from among a plurality of direction sides of the target object; screening out the key point positions located at the target direction side from the first candidate key point positions to obtain the second candidate key point positions; screening the target key point position from the second candidate key point positions according to the offset distance between adjacent key point positions in the second candidate key point positions; generating a key point track based on the target key point position;
Determining a peripheral track of the target information according to the moved position;
and judging whether the key point track and the peripheral track are intersected or not.
In some embodiments, the judging subunit is specifically configured to:
Screening first candidate key point positions corresponding to the peripheral outline of the target object from the plurality of key point positions; screening a second candidate key point position from the first candidate key point positions according to the movement parameters; calculating a first offset distance between a target second candidate key point position and a previous adjacent key point position in the second candidate key point positions and a second offset distance between the target second candidate key point position and a next adjacent key point position; determining a ratio of the first offset distance to the second offset distance; if the ratio meets a preset ratio range, determining the target second candidate key point position as the target key point position; generating a key point track based on the target key point position;
Determining a peripheral track of the target information according to the moved position;
and judging whether the key point track and the peripheral track are intersected or not.
In some embodiments, the identification subunit is specifically configured to:
processing the current processed image through a first detection model, and determining a human body area image in the current processed image;
Performing binarization processing on the human body region image to obtain a processed human body image;
And processing the processed human body image through a second detection model to obtain a target position of the human face in the current processed image.
In some embodiments, the adjustment unit comprises:
An acquisition subunit, configured to acquire, from the next image frame, an intersection point position at which the target position intersects the post-movement position;
a determining subunit, configured to determine position adjustment information according to a position relationship between the target position and the intersection position;
and the adjustment subunit is used for adjusting the position after the movement based on the position adjustment information to obtain the image after the next frame processing.
In some embodiments, the determining subunit is specifically configured to:
determining a first edge position and a second edge position in a specified direction from the target positions;
and determining an adjustment direction and an adjustment distance according to the distance between the intersection point position and the first edge position and the second edge position respectively, so as to obtain the position adjustment information.
In some embodiments, the determining subunit is specifically configured to:
determining a first edge position and a second edge position in a specified direction from the target positions;
calculating a first distance between the intersection point position and the first edge position in the specified direction and a second distance between the intersection point position and the second edge position; if the first distance is greater than the second distance, determining the adjustment direction according to the direction of the intersection point position towards the second edge position, and determining the adjustment distance according to the second distance; and if the first distance is smaller than the second distance, determining the adjustment direction according to the direction of the intersection point position towards the first edge position, and determining the adjustment distance according to the first distance to obtain the position adjustment information.
The embodiment of the application discloses an information adding device, which is used for determining a current processed image through a first determining unit 301, wherein the current processed image is obtained by adding target information on a current image frame of a target video; the acquiring unit 302 acquires the current position of the target information in the current processed image and the movement parameter corresponding to the target information; the second determining unit 303 determines a post-movement position of the target information in a next image frame of the current image frame based on a time interval between adjacent image frames in the target video and the movement parameter; the judging unit 304 judges whether the target information in the next image frame shields the target object according to the moved position and the target position of the target object in the current processed image; and if so, the adjusting unit 305 adjusts the moved position of the target information in the next image frame to obtain a processed image of the next frame. Thus, the video viewing experience of the user can be improved.
Correspondingly, the embodiment of the application also provides computer equipment which can be a server. Fig. 7 is a schematic structural diagram of a computer device according to an embodiment of the present application. The computer device 600 includes a processor 601 having one or more processing cores, a memory 602 having one or more computer readable storage media, and a computer program stored on the memory 602 and executable on the processor. The processor 601 is electrically connected to the memory 602. It will be appreciated by those skilled in the art that the computer device structure shown in the figures is not limiting of the computer device and may include more or fewer components than shown, or may combine certain components, or a different arrangement of components.
The processor 601 is a control center of the computer device 600, connects various parts of the entire computer device 600 using various interfaces and lines, and performs various functions of the computer device 600 and processes data by running or loading software programs and/or modules stored in the memory 602, and calling data stored in the memory 602, thereby performing overall monitoring of the computer device 600.
In an embodiment of the present application, the processor 601 in the computer device 600 loads the instructions corresponding to the processes of one or more application programs into the memory 602, and the processor 601 runs the application programs stored in the memory 602, thereby implementing various functions as in the following steps:
determining a current processed image, wherein the current processed image is obtained by adding target information on a current image frame of a target video;
acquiring the current position of the target information in the current processed image and the movement parameters corresponding to the target information;
Determining a post-movement position of the target information in a next image frame of the current image frame based on a time interval between adjacent image frames in the target video and the movement parameters (see the sketch after these steps);
judging whether the target information in the next image frame shields the target object according to the moved position and the target position of the target object in the current processed image;
If so, the position of the target information after the movement in the next image frame is adjusted, and the image after the next frame processing is obtained.
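For illustration, the post-movement position in the steps above follows directly from the movement speed and the inter-frame interval; a minimal sketch, with all names chosen for illustration rather than taken from the embodiment:

```python
def post_movement_position(current_pos, speed, frame_interval):
    """Moved position = current position + speed * inter-frame time.

    current_pos and speed are (x, y) tuples; frame_interval is the time
    between two adjacent frames of the target video.
    """
    (x, y), (vx, vy) = current_pos, speed
    return (x + vx * frame_interval, y + vy * frame_interval)
```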
In some embodiments, determining whether the target information in the next image frame occludes the target object according to the moved position and the target position of the target object in the current processed image includes:
performing recognition processing on the current processed image, and determining the target position of the target object in the current processed image;
And judging whether the position after the movement in the next image frame is overlapped with the target position or not.
In some embodiments, the target location includes a keypoint location corresponding to each object keypoint in the target object;
determining whether there is an overlap between the moved position and the target position in the next image frame includes:
screening target key point positions from the plurality of key point positions, and generating a key point track based on the target key point positions;
determining the peripheral track of the target information according to the moved position;
and judging whether the key point track and the peripheral track are intersected or not.
In some embodiments, screening the target keypoint location from the plurality of keypoint locations includes:
Screening first candidate key point positions corresponding to the peripheral outline of the target object from a plurality of key point positions;
screening a second candidate key point position from the first candidate key point positions according to the movement parameters;
and screening the target key point position from the second candidate key point positions according to the offset distance between adjacent key point positions in the second candidate key point positions.
In some embodiments, the movement parameter comprises a movement direction;
Screening the second candidate key point position from the first candidate key point positions according to the movement parameters comprises the following steps:
Determining a target direction side opposite to the moving direction from among a plurality of direction sides of the target object;
and screening the key point positions positioned at the target direction side from the first candidate key point positions to obtain second candidate key point positions.
In some embodiments, selecting the target keypoint location from the second candidate keypoint locations based on the offset distance between adjacent keypoint locations in the second candidate keypoint locations comprises:
calculating a first offset distance between a target second candidate key point position and a previous adjacent key point position in the second candidate key point positions and a second offset distance between the target second candidate key point position and a next adjacent key point position;
Determining a ratio of the first offset distance to the second offset distance;
And if the ratio meets the preset ratio range, determining the target second candidate key point position as the target key point position.
In some embodiments, the target object is a face region;
Performing recognition processing on the current processed image to determine a target position of a target object in the current processed image, including:
Processing the current processed image through a first detection model, and determining a human body area image in the current processed image;
Binarizing the human body region image to obtain a processed human body image;
and processing the processed human body image through a second detection model to obtain the target position of the human face in the current processed image.
In some embodiments, adjusting the moved position of the target information in the next image frame to obtain the next frame processed image includes:
Acquiring an intersection point position of the target position and the moved position from the next image frame;
Determining position adjustment information according to the position relation between the target position and the intersection point position;
And adjusting the moved position based on the position adjustment information to obtain the image after the next frame processing.
In some embodiments, determining the position adjustment information based on the positional relationship of the target position and the intersection position includes:
Determining a first edge position and a second edge position in a specified direction from the target positions;
and determining an adjusting direction and an adjusting distance according to the distances between the intersection point position and the first edge position and the second edge position respectively, so as to obtain position adjusting information.
In some embodiments, determining the adjustment direction and the adjustment distance based on the distance of the intersection point position from the first edge position and the second edge position, respectively, includes:
Calculating a first distance between the intersection point position and the first edge position in the specified direction and a second distance between the intersection point position and the second edge position;
If the first distance is greater than the second distance, determining an adjustment direction according to the direction of the intersection point position towards the second edge position, and determining an adjustment distance according to the second distance;
If the first distance is smaller than the second distance, determining an adjustment direction according to the direction of the intersection point position towards the first edge position, and determining an adjustment distance according to the first distance.
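For illustration, a scalar sketch of this edge-distance rule along the specified direction; the behavior when the two distances are equal is not specified above and is an assumption here:

```python
def position_adjustment(intersection, first_edge, second_edge):
    """Pick the adjustment direction and distance from the two edge gaps.

    Positions are scalar coordinates along the specified direction; the
    cheaper of the two escapes wins.
    """
    d1 = abs(intersection - first_edge)
    d2 = abs(intersection - second_edge)
    if d1 > d2:
        return "toward second edge", d2  # closer to the second edge
    return "toward first edge", d1       # closer to (or tied with) the first
```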
In some embodiments, after adjusting the post-movement position of the target information in the next image frame, further comprising:
if other display information exists in the next frame processed image and the adjusted position of the target information occludes the other display information:
Acquiring the display level of the target information and the display level of other display information;
And if the display level of the target information is higher than that of other display information, hiding the content which is blocked by the target information in the other display information in the image after the next frame processing.
In this scheme, a current processed image is determined; the current position of the target information in the current processed image and the movement parameter corresponding to the target information are acquired; the post-movement position of the target information in the next image frame of the current image frame is determined based on the time interval between adjacent image frames in the target video and the movement parameter; whether the target information in the next image frame occludes the target object is judged according to the moved position and the target position of the target object in the current processed image; if so, the moved position of the target information in the next image frame is adjusted to obtain the next frame processed image. Thus, the video viewing experience of the user can be improved.
The specific implementation of each operation above may be referred to the previous embodiments, and will not be described herein.
Optionally, as shown in fig. 7, the computer device 600 further includes: a touch display 603, a radio frequency circuit 604, an audio circuit 605, an input unit 606, and a power supply 607. The processor 601 is electrically connected to the touch display 603, the radio frequency circuit 604, the audio circuit 605, the input unit 606, and the power supply 607, respectively. Those skilled in the art will appreciate that the computer device structure shown in FIG. 7 is not limiting of the computer device, which may include more or fewer components than shown, combine certain components, or adopt a different arrangement of components.
The touch display 603 may be used to display a graphical user interface and receive operation instructions generated by a user acting on the graphical user interface. The touch display 603 may include a display panel and a touch panel. The display panel may be used to display information entered by or provided to the user, as well as various graphical user interfaces of the computer device, which may be composed of graphics, text, icons, video, and any combination thereof. Optionally, the display panel may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED) display, or the like. The touch panel may be used to collect the user's touch operations on or near it (such as operations performed on or near the touch panel with a finger, stylus, or any other suitable object or accessory) and generate corresponding operation instructions that trigger the corresponding programs. Optionally, the touch panel may include two parts, a touch detection device and a touch controller. The touch detection device detects the position of the user's touch, detects the signal produced by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, and sends them to the processor 601, and can also receive commands from the processor 601 and execute them. The touch panel may overlay the display panel; when the touch panel detects a touch operation on or near it, the operation is passed to the processor 601 to determine the type of touch event, and the processor 601 then provides a corresponding visual output on the display panel based on the type of touch event. In the embodiment of the present application, the touch panel and the display panel may be integrated into the touch display 603 to implement the input and output functions. In some embodiments, however, the touch panel and the display panel may be implemented as two separate components to perform the input and output functions. That is, the touch display 603 may also implement an input function as part of the input unit 606.
In the embodiment of the present application, the processor 601 executes the game application program to generate a graphical user interface on the touch display screen 603, where the virtual scene on the graphical user interface includes at least one skill control area, and the skill control area includes at least one skill control. The touch display 603 is configured to present a graphical user interface and receive an operation instruction generated by a user acting on the graphical user interface.
The radio frequency circuit 604 may be configured to receive and transmit radio frequency signals so as to establish wireless communication with a network device or other computer device, and to receive and transmit signals to and from the network device or other computer device.
The audio circuit 605 may be used to provide an audio interface between the user and the computer device through a speaker and a microphone. The audio circuit 605 may transmit the electrical signal converted from received audio data to the speaker, which converts it into a sound signal for output; conversely, the microphone converts collected sound signals into electrical signals, which the audio circuit 605 receives and converts into audio data; the audio data is processed by the processor 601 and then sent, for example, to another computer device via the radio frequency circuit 604, or output to the memory 602 for further processing. The audio circuit 605 may also include an earphone jack to provide communication between peripheral headphones and the computer device.
The input unit 606 may be used to receive input numbers, character information, or user characteristic information (e.g., fingerprints, irises, facial information, etc.), and to generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control.
The power supply 607 is used to power the various components of the computer device 600. Alternatively, the power supply 607 may be logically connected to the processor 601 through a power management system, so as to perform functions of managing charging, discharging, and power consumption management through the power management system. The power supply 607 may also include one or more of any of a direct current or alternating current power supply, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, and the like.
Although not shown in fig. 7, the computer device 600 may further include a camera, a sensor, a wireless fidelity module, a bluetooth module, etc., which will not be described herein.
In the foregoing embodiments, the descriptions of the embodiments are emphasized, and for parts of one embodiment that are not described in detail, reference may be made to related descriptions of other embodiments.
As can be seen from the above, the computer device provided in this embodiment determines a current processed image, where the current processed image is obtained by adding target information to a current image frame of a target video; acquiring the current position of the target information in the current processed image and the movement parameters corresponding to the target information; determining a post-movement position of target information in a next image frame of the current image frame based on a time interval between adjacent image frames in the target video and movement parameters; judging whether the target information in the next image frame shields the target object according to the moved position and the target position of the target object in the current processed image; if so, the position of the target information after the movement in the next image frame is adjusted, and the image after the next frame processing is obtained.
Those of ordinary skill in the art will appreciate that all or a portion of the steps of the various methods of the above embodiments may be performed by instructions, or by instructions controlling associated hardware, which may be stored in a computer-readable storage medium and loaded and executed by a processor.
To this end, an embodiment of the present application provides a computer-readable storage medium in which a plurality of computer programs are stored, the computer programs being capable of being loaded by a processor to perform steps in any of the information adding methods provided by the embodiment of the present application. For example, the computer program may perform the steps of:
determining a current processed image, wherein the current processed image is obtained by adding target information on a current image frame of a target video;
acquiring the current position of the target information in the current processed image and the movement parameters corresponding to the target information;
Determining a post-movement position of target information in a next image frame of the current image frame based on a time interval between adjacent image frames in the target video and movement parameters;
judging whether the target information in the next image frame shields the target object according to the moved position and the target position of the target object in the current processed image;
If so, the position of the target information after the movement in the next image frame is adjusted, and the image after the next frame processing is obtained.
In some embodiments, determining whether the target information in the next image frame occludes the target object according to the moved position and the target position of the target object in the current processed image includes:
performing recognition processing on the current processed image, and determining the target position of the target object in the current processed image;
And judging whether the position after the movement in the next image frame is overlapped with the target position or not.
In some embodiments, the target location includes a keypoint location corresponding to each object keypoint in the target object;
determining whether there is an overlap between the moved position and the target position in the next image frame includes:
screening target key point positions from the plurality of key point positions, and generating a key point track based on the target key point positions;
determining the peripheral track of the target information according to the moved position;
and judging whether the key point track and the peripheral track are intersected or not.
In some embodiments, screening the target keypoint location from the plurality of keypoint locations includes:
Screening first candidate key point positions corresponding to the peripheral outline of the target object from a plurality of key point positions;
screening a second candidate key point position from the first candidate key point positions according to the movement parameters;
and screening the target key point position from the second candidate key point positions according to the offset distance between adjacent key point positions in the second candidate key point positions.
In some embodiments, the movement parameter comprises a movement direction;
Screening the second candidate key point position from the first candidate key point positions according to the movement parameters comprises the following steps:
Determining a target direction side opposite to the moving direction from among a plurality of direction sides of the target object;
and screening the key point positions positioned at the target direction side from the first candidate key point positions to obtain second candidate key point positions.
In some embodiments, selecting the target keypoint location from the second candidate keypoint locations based on the offset distance between adjacent keypoint locations in the second candidate keypoint locations comprises:
calculating a first offset distance between a target second candidate key point position and a previous adjacent key point position in the second candidate key point positions and a second offset distance between the target second candidate key point position and a next adjacent key point position;
Determining a ratio of the first offset distance to the second offset distance;
And if the ratio meets the preset ratio range, determining the target second candidate key point position as the target key point position.
In some embodiments, the target object is a face region;
Performing recognition processing on the current processed image to determine a target position of a target object in the current processed image, including:
Processing the current processed image through a first detection model, and determining a human body area image in the current processed image;
Binarizing the human body region image to obtain a processed human body image;
and processing the processed human body image through a second detection model to obtain the target position of the human face in the current processed image.
In some embodiments, adjusting the moved position of the target information in the next image frame to obtain the next frame processed image includes:
Acquiring an intersection point position of the target position and the moved position from the next image frame;
Determining position adjustment information according to the position relation between the target position and the intersection point position;
And adjusting the moved position based on the position adjustment information to obtain the image after the next frame processing.
In some embodiments, determining the position adjustment information based on the positional relationship of the target position and the intersection position includes:
Determining a first edge position and a second edge position in a specified direction from the target positions;
and determining an adjusting direction and an adjusting distance according to the distances between the intersection point position and the first edge position and the second edge position respectively, so as to obtain position adjusting information.
In some embodiments, determining the adjustment direction and the adjustment distance based on the distance of the intersection point position from the first edge position and the second edge position, respectively, includes:
Calculating a first distance between the intersection point position and the first edge position in the specified direction and a second distance between the intersection point position and the second edge position;
If the first distance is greater than the second distance, determining an adjustment direction according to the direction of the intersection point position towards the second edge position, and determining an adjustment distance according to the second distance;
If the first distance is smaller than the second distance, determining an adjustment direction according to the direction of the intersection point position towards the first edge position, and determining an adjustment distance according to the first distance.
In some embodiments, after adjusting the post-movement position of the target information in the next image frame, further comprising:
if other display information exists in the next frame processed image and the adjusted position of the target information occludes the other display information:
Acquiring the display level of the target information and the display level of other display information;
And if the display level of the target information is higher than that of other display information, hiding the content which is blocked by the target information in the other display information in the image after the next frame processing.
In this scheme, a current processed image is determined; the current position of the target information in the current processed image and the movement parameter corresponding to the target information are acquired; the post-movement position of the target information in the next image frame of the current image frame is determined based on the time interval between adjacent image frames in the target video and the movement parameter; whether the target information in the next image frame occludes the target object is judged according to the moved position and the target position of the target object in the current processed image; if so, the moved position of the target information in the next image frame is adjusted to obtain the next frame processed image. Thus, the video viewing experience of the user can be improved.
The specific implementation of each operation above may be referred to the previous embodiments, and will not be described herein.
Wherein the storage medium may include: Read-Only Memory (ROM), Random Access Memory (RAM), magnetic disk, optical disk, and the like.
Since the computer program stored in the storage medium can execute the steps of any information adding method provided by the embodiment of the present application, the beneficial effects achievable by any information adding method provided by the embodiment of the present application can be achieved; see the foregoing embodiments for details, which are not repeated here.
The foregoing describes in detail an information adding method, apparatus, storage medium, and computer device provided by the embodiments of the present application. Specific examples are used herein to illustrate the principles and implementations of the present application, and the above embodiments are only intended to help understand the method and core idea of the present application. Meanwhile, those skilled in the art will make variations to the specific embodiments and application scope in light of the ideas of the present application. In summary, the contents of this description should not be construed as limiting the present application.
Claims (12)
1. An information adding method, characterized in that the method comprises:
Determining a current processed image, wherein the current processed image is obtained by adding target information on a current image frame of a target video;
Acquiring the current position of the target information in the current processed image and a movement parameter corresponding to the target information, wherein the movement parameter comprises a movement speed;
Determining a moving distance of the target information from the current image frame to a next image frame based on a time interval between adjacent image frames in the target video and the moving speed, and determining a post-moving position of the target information in the next image frame of the current image frame according to the current position and the moving distance, wherein the time interval refers to a time interval between two adjacent image frames of the target video played through a video playing interface;
Performing recognition processing on the current processed image, and determining a target position of the target object in the current processed image, wherein the target position comprises key point positions corresponding to key points of all objects in the target object;
Screening out target key point positions from a plurality of key point positions, and generating a key point track based on the target key point positions, wherein the target key point positions are part of key point positions in the peripheral outline of the target object, which are opposite to the moving direction of the target information;
Determining a peripheral track of the target information according to the moved position;
Judging whether the target information shields the target object in the next image frame according to whether the key point track is intersected with the peripheral track or not;
If yes, the position of the target information after the movement in the next image frame is adjusted, and a processed image of the next frame is obtained;
and if the target information overlaps other display information in the next frame processed image, continuing to adjust the display position of the target information so that the target information and the other display information do not overlap.
2. The method of claim 1, wherein the screening the target keypoint locations from the plurality of keypoint locations comprises:
Screening first candidate key point positions corresponding to the peripheral outline of the target object from the plurality of key point positions;
screening a second candidate key point position from the first candidate key point positions according to the movement parameters;
And screening the target key point position from the second candidate key point positions according to the offset distance between adjacent key point positions in the second candidate key point positions.
3. The method of claim 2, wherein the movement parameter comprises a direction of movement;
The screening the second candidate key point position from the first candidate key point positions according to the movement parameter comprises the following steps:
Determining a target direction side opposite to the moving direction from among a plurality of direction sides of the target object;
and screening out the key point positions positioned at the target direction side from the first candidate key point positions to obtain the second candidate key point positions.
4. The method of claim 2, wherein the screening the target keypoint location from the second candidate keypoint locations based on the offset distance between adjacent keypoint locations of the second candidate keypoint locations comprises:
calculating a first offset distance between a target second candidate key point position and a previous adjacent key point position in the second candidate key point positions and a second offset distance between the target second candidate key point position and a next adjacent key point position;
determining a ratio of the first offset distance to the second offset distance;
And if the ratio meets a preset ratio range, determining the target second candidate key point position as the target key point position.
5. The method of claim 1, wherein the target object is a face region;
the identifying the current processed image, determining the target position of the target object in the current processed image, includes:
processing the current processed image through a first detection model, and determining a human body area image in the current processed image;
Performing binarization processing on the human body region image to obtain a processed human body image;
And processing the processed human body image through a second detection model to obtain a target position of the human face in the current processed image.
6. The method of claim 1, wherein adjusting the moved position of the target information in the next image frame to obtain the next frame processed image comprises:
Acquiring an intersection point position of the target position and the moved position from the next image frame;
Determining position adjustment information according to the position relation between the target position and the intersection point position;
and adjusting the moved position based on the position adjustment information to obtain the next frame processed image.
7. The method of claim 6, wherein the determining position adjustment information based on the positional relationship of the target position and the intersection position comprises:
determining a first edge position and a second edge position in a specified direction from the target positions;
and determining an adjustment direction and an adjustment distance according to the distance between the intersection point position and the first edge position and the second edge position respectively, so as to obtain the position adjustment information.
8. The method of claim 7, wherein determining the adjustment direction and adjustment distance based on the distance of the intersection location from the first edge location and the second edge location, respectively, comprises:
Calculating a first distance between the intersection point position and the first edge position in the specified direction and a second distance between the intersection point position and the second edge position;
If the first distance is greater than the second distance, determining the adjustment direction according to the direction of the intersection point position towards the second edge position, and determining the adjustment distance according to the second distance;
if the first distance is smaller than the second distance, the adjusting direction is determined according to the direction of the intersection point position towards the first edge position, and the adjusting distance is determined according to the first distance.
9. The method of claim 1, further comprising, after said adjusting the post-movement position of the target information in the next image frame:
If other display information exists in the next frame processed image and the adjusted position of the target information occludes the other display information:
Acquiring the display level of the target information and the display level of the other display information;
and if the display level of the target information is higher than that of the other display information, hiding the content which is blocked by the target information in the other display information in the image after the next frame processing.
10. An information adding apparatus, characterized in that the apparatus comprises:
A first determining unit, configured to determine a current processed image, where the current processed image is obtained by adding target information to a current image frame of a target video;
the acquisition unit is used for acquiring the current position of the target information in the current processed image and the movement parameters corresponding to the target information, wherein the movement parameters comprise movement speed;
A second determining unit, configured to determine a moving distance of the target information from the current image frame to a next image frame based on a time interval between adjacent image frames in the target video and the moving speed, and determine a post-moving position of the target information in the next image frame of the current image frame according to the current position and the moving distance, where the time interval refers to a time interval between two adjacent image frames in which the target video is played through a video playing interface;
the judging unit is used for carrying out identification processing on the current processed image and determining a target position of the target object in the current processed image, wherein the target position comprises key point positions corresponding to key points of all objects in the target object; screening out target key point positions from a plurality of key point positions, and generating a key point track based on the target key point positions, wherein the target key point positions are part of key point positions in the peripheral outline of the target object, which are opposite to the moving direction of the target information; determining a peripheral track of the target information according to the moved position; judging whether the target information shields the target object in the next image frame according to whether the key point track is intersected with the peripheral track or not;
the adjusting unit is used for adjusting the moved position of the target information in the next image frame if yes, so as to obtain a processed image of the next frame;
The apparatus is further configured to: if the target information overlaps other display information in the next frame processed image, continue to adjust the display position of the target information so that the target information and the other display information do not overlap.
11. A computer device comprising a memory, a processor, and a computer program stored on the memory and runnable on the processor, wherein the processor implements the information adding method of any one of claims 1 to 9 when executing the program.
12. A computer readable storage medium storing a plurality of instructions adapted to be loaded by a processor to perform the information adding method of any one of claims 1 to 9.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210638195.7A CN115086738B (en) | 2022-06-07 | 2022-06-07 | Information adding method, information adding device, computer equipment and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210638195.7A CN115086738B (en) | 2022-06-07 | 2022-06-07 | Information adding method, information adding device, computer equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115086738A (en) | 2022-09-20
CN115086738B (en) | 2024-06-11
Family
ID=83252387
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210638195.7A Active CN115086738B (en) | 2022-06-07 | 2022-06-07 | Information adding method, information adding device, computer equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115086738B (en) |
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20160116585A (en) * | 2015-03-30 | 2016-10-10 | 한국전자통신연구원 | Method and apparatus for blocking harmful area of moving poctures |
CN104735518A (en) * | 2015-03-31 | 2015-06-24 | 北京奇艺世纪科技有限公司 | Information display method and device |
CN107147941A (en) * | 2017-05-27 | 2017-09-08 | 努比亚技术有限公司 | Barrage display methods, device and the computer-readable recording medium of video playback |
CN109710365A (en) * | 2018-12-28 | 2019-05-03 | 武汉斗鱼网络科技有限公司 | A kind of barrage display methods, device, electronic equipment and medium |
CN111385665A (en) * | 2018-12-29 | 2020-07-07 | 百度在线网络技术(北京)有限公司 | Bullet screen information processing method, device, equipment and storage medium |
CN113891154A (en) * | 2020-07-02 | 2022-01-04 | 武汉斗鱼鱼乐网络科技有限公司 | Method, device, medium and computer equipment for preventing bullet screen from shielding specific target |
CN113920167A (en) * | 2021-11-01 | 2022-01-11 | 广州博冠信息科技有限公司 | Image processing method, device, storage medium and computer system |
Also Published As
Publication number | Publication date |
---|---|
CN115086738A (en) | 2022-09-20 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20210281771A1 (en) | Video processing method, electronic device and non-transitory computer readable medium | |
US11114130B2 (en) | Method and device for processing video | |
CN109427083B (en) | Method, device, terminal and storage medium for displaying three-dimensional virtual image | |
US20220148337A1 (en) | Living body detection method and apparatus, electronic device, and storage medium | |
CN113538696B (en) | Special effect generation method and device, storage medium and electronic equipment | |
CN111147880A (en) | Interaction method, device and system for live video, electronic equipment and storage medium | |
WO2023065849A1 (en) | Screen brightness adjustment method and apparatus for electronic device, and electronic device | |
WO2022148293A1 (en) | Information prompting method and apparatus | |
CN103000054B (en) | Intelligent teaching machine for kitchen cooking and control method thereof | |
CN113645476B (en) | Picture processing method and device, electronic equipment and storage medium | |
CN109544441B (en) | Image processing method and device, and skin color processing method and device in live broadcast | |
CN112316425B (en) | Picture rendering method and device, storage medium and electronic equipment | |
CN115761638A (en) | Online real-time intelligent analysis method based on image data and terminal equipment | |
US10134164B2 (en) | Information processing apparatus, information processing system, information processing method, and program | |
CN115086738B (en) | Information adding method, information adding device, computer equipment and storage medium | |
CN117455753B (en) | Special effect template generation method, special effect generation device and storage medium | |
CN112511890A (en) | Video image processing method and device and electronic equipment | |
CN114071244B (en) | Method and device for generating live cover, computer storage medium and electronic equipment | |
CN112435173A (en) | Image processing and live broadcasting method, device, equipment and storage medium | |
CN111107264A (en) | Image processing method, image processing device, storage medium and terminal | |
CN115761867A (en) | Identity detection method, device, medium and equipment based on face image | |
CN111679737B (en) | Hand segmentation method and electronic device | |
CN111258408B (en) | Object boundary determining method and device for man-machine interaction | |
CN116189251A (en) | Real-time face image driving method and device, electronic equipment and storage medium | |
CN106020433A (en) | 3D vehicle terminal man-machine interactive system and interaction method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |