
WO2023284798A1 - Video playing method, device and electronic equipment - Google Patents

Video playing method, device and electronic equipment

Info

Publication number
WO2023284798A1
Authority
WO
WIPO (PCT)
Prior art keywords
video frame
video
displacement
data
target object
Prior art date
Application number
PCT/CN2022/105517
Other languages
English (en)
French (fr)
Inventor
谢锦洋
Original Assignee
维沃移动通信(杭州)有限公司
Priority date
Filing date
Publication date
Application filed by 维沃移动通信(杭州)有限公司
Priority to EP22841431.4A (EP4373073A4)
Publication of WO2023284798A1
Priority to US18/411,383 (US20240153538A1)

Classifications

    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11BINFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/10Indexing; Addressing; Timing or synchronising; Measuring tape travel
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/01Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level
    • H04N7/0127Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level by changing the field or frame frequency of the incoming video signal, e.g. frame rate converter
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N21/2343Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
    • H04N21/234318Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements by decomposing into objects, e.g. MPEG-4 objects
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/4402Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
    • H04N21/440281Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display by altering the temporal resolution, e.g. by frame skipping
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/2628Alteration of picture size, shape, position or orientation, e.g. zooming, rotation, rolling, perspective, translation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/76Television signal recording
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14Digital output to display device ; Cooperation and interconnection of the display device with other functional units
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2320/00Control of display operating conditions
    • G09G2320/10Special adaptations of display systems for operation with variable images
    • G09G2320/106Determination of movement vectors or equivalent parameters within the image
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2330/00Aspects of power supply; Aspects of display protection and defect management
    • G09G2330/02Details of power systems and of start or stop of display operation
    • G09G2330/021Power management, e.g. power saving
    • G09G2330/023Power management, e.g. power saving using energy recovery or conservation

Definitions

  • the present application relates to the technical field of communication, and in particular to a video playing method, device and electronic equipment.
  • video is typically processed by the central processing unit (CPU) of the electronic device.
  • the CPU will obtain the video to be played according to the user's operation, decode the video to be played to obtain a video frame, and then send the video frame to the independent display chip, so that the video frame can be displayed.
  • since the video to be played is composed of a plurality of video frames, the CPU has to repeat the above operations for each frame, which makes its power consumption high and thereby reduces the performance of the device system.
  • Embodiments of the present invention provide a video playing method, device, and electronic equipment, which can solve the problem that the performance of the equipment system will be reduced when the video is processed by the CPU.
  • the embodiment of the present application provides a video playing method.
  • the method includes: acquiring video frame data of the target video and displacement data of a target object in the video frames; in a case where the displacement data indicates that the displacement of the target object between a first video frame and a second video frame of the target video is greater than or equal to a preset value, decoding the video frame data to obtain the first video frame and the second video frame, and inserting a third video frame according to a preset rule; and playing the first video frame, the second video frame and the third video frame.
  • the embodiment of the present application provides a video playback device.
  • the device includes: an acquisition module, a first processing module, a second processing module and a playing module.
  • the obtaining module is used to obtain the video frame data of the target video and the displacement data of the target object in the video frames;
  • the first processing module is used to decode the video frame data to obtain the first video frame and the second video frame when the displacement data indicates that the displacement of the target object between the first video frame and the second video frame of the target video is greater than or equal to a preset value;
  • the second processing module is used to insert a third video frame according to a preset rule;
  • the playback module is used to play the first video frame, the second video frame and the third video frame.
  • the embodiment of the present application provides an electronic device, which includes a processor, a memory, and a program or instruction stored in the memory and executable on the processor; when the program or instruction is executed by the processor, the steps of the method provided in the first aspect are implemented.
  • an embodiment of the present application provides a readable storage medium, on which a program or an instruction is stored, and when the program or instruction is executed by a processor, the steps of the method provided in the first aspect are implemented.
  • in the embodiments of the present invention, the video frame data of the target video and the displacement data of the target object in the video frames can be obtained first; then, when the displacement data indicates that the displacement of the target object between the first video frame and the second video frame of the target video is greater than or equal to a preset value, the video frame data is decoded to obtain the first video frame and the second video frame, and a third video frame is inserted according to a preset rule; finally, the first video frame, the second video frame and the third video frame are played.
  • FIG. 1 is a schematic structural diagram of an electronic device provided in an embodiment of the present application.
  • FIG. 2 is one of the flow charts of a video playback method provided by the embodiment of the present application.
  • FIG. 3 is a schematic diagram of target objects in two video frames provided by an embodiment of the present application.
  • FIG. 4 is the second flowchart of a video playback method provided by the embodiment of the present application.
  • FIG. 5 is one of the schematic diagrams of the frame insertion operation provided by the embodiment of the present application.
  • FIG. 6 is the second schematic diagram of the frame insertion operation provided by the embodiment of the present application.
  • FIG. 7 is a schematic structural diagram of a video playback device provided by an embodiment of the present application.
  • FIG. 8 is one of the hardware schematic diagrams of the electronic device provided by the embodiment of the present application.
  • FIG. 9 is a second schematic diagram of hardware of the electronic device provided by the embodiment of the present application.
  • the identifiers in the embodiments of the present application are used to indicate information such as text, symbols and images; controls or other containers may be used as carriers for displaying the information, and the identifiers include but are not limited to text identifiers, symbol identifiers and image identifiers.
  • the embodiment of the present application provides a video playback method, device and electronic equipment, which can first obtain the video frame data of the target video and the displacement data of the target object in the video frames; then, when the displacement data indicates that the displacement of the target object between the first video frame and the second video frame of the target video is greater than or equal to a preset value, decode the video frame data to obtain the first video frame and the second video frame, and insert a third video frame according to a preset rule; and then play the first video frame, the second video frame and the third video frame.
  • as shown in FIG. 1, which is a schematic structural diagram of an electronic device provided in an embodiment of the present application.
  • the electronic device includes a CPU, an independent display chip, a display screen and the like.
  • the CPU not only has traditional functions such as recording video, playing video, video chat, Bluetooth pairing, calling and alarm clock, but can also be used to obtain video frame data and displacement data, decode the video frame data, and send the displacement data and the decoded video frames to the independent display chip.
  • the independent display chip includes a position controller, a zoom controller, a picture frame processor and a video memory, etc.
  • the position controller can be used to control the position of the image
  • the zoom controller can be used to control the zoom of the image
  • the frame processor can be used to perform frame interpolation; the video memory, also called the frame buffer, can be used to store rendering data that has been processed by the graphics chip or is about to be fetched.
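  • as a rough illustration only, the sketch below models the independent display chip described above as an object with the four sub-units named in this embodiment; all class and method names are hypothetical placeholders and do not correspond to any real driver or chip API.

```python
class IndependentDisplayChip:
    """Hypothetical sketch of the independent display chip described above."""

    def __init__(self):
        # video memory / frame buffer: stores rendered frames awaiting display
        self.frame_buffer = []

    def set_position(self, frame, x, y):
        """Position controller: controls where the image is placed."""
        raise NotImplementedError

    def set_zoom(self, frame, factor):
        """Zoom controller: controls the scaling of the image."""
        raise NotImplementedError

    def insert_frames(self, first_frame, second_frame, displacement, count):
        """Frame processor: generates `count` intermediate frames from the two
        decoded frames and the displacement supplied by the CPU."""
        raise NotImplementedError
```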
  • the embodiment of the present application provides a video playing method.
  • the video playing method may include the following steps 201 to 203. The following description will be made by taking the electronic device shown in FIG. 1 as an example for execution of the video playing method.
  • Step 201: the electronic device acquires video frame data of a target video and displacement data of a target object in the video frames.
  • the video frame data is data obtained by compressing the video frame of the target video.
  • the displacement data is data obtained by analyzing the target object in the video frames of the target video, and can be used to indicate the position vector of the target object in a video frame, for example the position of the target object within a video frame, or the relative displacement of the target object between adjacent video frames.
  • the aforementioned video frame data and displacement data may be two independent sets of data that are packaged together into one set of data; or, the aforementioned displacement data may be encoded and included in the video frame data. This can be determined according to actual usage requirements, and is not limited in this embodiment of the application.
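  • purely as an illustration of the second option above (displacement data encoded together with the frame data), the record layout below shows one way the packaged data could be organized; the field names are assumptions made for this sketch and are not defined by the present application.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class FrameRecord:
    """Hypothetical packaged entry: one compressed frame plus the motion vector
    of the target object relative to the previous frame (in pixels)."""
    compressed_frame: bytes
    motion_vector: Tuple[int, int] = (0, 0)

@dataclass
class TargetVideo:
    """Video frame data and displacement data packaged as one set of data."""
    frames: List[FrameRecord] = field(default_factory=list)
```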
  • the above-mentioned target video may be a video shot by the electronic device, a video shared by another device and received by the electronic device, or a video downloaded by the electronic device from a server.
  • for example, take the case where the target video is a video shot by a mobile phone. After the user triggers the mobile phone to shoot the target video, the mobile phone can start recording the target video, analyze the position vector (also called the motion vector) of the moving object in each recorded video frame, and then record the obtained displacement data into the video frame data.
  • in this way, when the user wants to watch the target video, the mobile phone can play the target video according to the video frame data and the displacement data in response to the user's input.
  • the above-mentioned target object may be an image of a moving object, and the moving object may be any object that can move, such as a galloping horse, a soaring bird, a falling leaf, a waving arm, flowing spring water, or a flashing neon light. It can be understood that when an electronic device shoots a video, if an object moves, the relative position of the object in the video frames will also change, that is, the position vector of the target object in the video frames changes.
  • the displacement data of the target object may be the color data of the video frame.
  • the color data of the video frame may be the pixel value of the pixel in the video frame.
  • in FIG. 3, one circle represents one pixel.
  • the image width and image height of the first video frame and the second video frame are both 10 pixels.
  • the coordinates of the four vertices of region 011 of the first video frame are (2,2), (4,2), (2,4) and (4,4) respectively, and the pixel values of the pixels in region 011 are all a1; the coordinates of the four vertices of region 012 of the second video frame are (5,6), (7,6), (5,8) and (7,8) respectively, and the pixel values of the pixels in region 012 are also all a1.
  • assuming that region 011 of the first video frame is the target object, since region 011 of the first video frame and region 012 of the second video frame have the same size and the pixels they contain have the same pixel values, it can be determined that, compared with the first video frame, the target object has moved to region 012 in the second video frame, that is, the target object has moved from region 011 to region 012.
  • FIG. 3 is illustrated by taking the case where the pixel values of all pixels in region 011 and region 012 are a1 as an example, which does not limit the embodiment of the present application. It can be understood that the pixel values of the pixels within region 011 (or region 012) can differ from one another, but the pixel values of corresponding pixels in region 011 and region 012 need to be the same; for example, the pixel value of pixel (2,2) in region 011 and that of pixel (5,6) in region 012 must be the same.
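  • the situation of FIG. 3 can be reproduced with a simple exhaustive block search: the region occupied by the target object in the first video frame is compared against equally sized regions of the second video frame, and the offset of the best match gives the displacement. The sketch below (using numpy) is only an illustration of this idea, not the matching algorithm prescribed by the present application.

```python
import numpy as np

def find_displacement(frame1: np.ndarray, frame2: np.ndarray,
                      top_left: tuple, size: tuple) -> tuple:
    """Locate the block of frame1 at top_left (row, col) with shape size (h, w)
    inside frame2 and return the displacement (dx, dy) of its best match."""
    y0, x0 = top_left
    h, w = size
    block = frame1[y0:y0 + h, x0:x0 + w].astype(int)
    best_offset, best_err = (0, 0), float("inf")
    for y in range(frame2.shape[0] - h + 1):
        for x in range(frame2.shape[1] - w + 1):
            candidate = frame2[y:y + h, x:x + w].astype(int)
            err = np.abs(candidate - block).sum()
            if err < best_err:
                best_err, best_offset = err, (x - x0, y - y0)
    return best_offset

# FIG. 3: a 3x3 block of identical pixel values moves from region 011 (top-left
# at row 2, column 2) to region 012 (top-left at row 6, column 5) in a 10x10 frame.
frame1 = np.zeros((10, 10), dtype=np.uint8)
frame2 = np.zeros((10, 10), dtype=np.uint8)
frame1[2:5, 2:5] = 200   # stand-in for pixel value "a1" in region 011
frame2[6:9, 5:8] = 200   # stand-in for pixel value "a1" in region 012
print(find_displacement(frame1, frame2, top_left=(2, 2), size=(3, 3)))  # -> (3, 4)
```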
  • Step 202: when the displacement data indicates that the displacement of the target object between the first video frame and the second video frame of the target video is greater than or equal to a preset value, the electronic device decodes the video frame data to obtain the first video frame and the second video frame, and inserts the third video frame according to a preset rule.
  • the above step 202 may be specifically implemented through the following (1) to (4).
  • the CPU determines that the displacement of the target object in the first video frame and the second video frame is greater than or equal to a preset value.
  • the CPU decodes the video frame data to obtain the first video frame and the second video frame.
  • the CPU sends the first video frame, the second video frame, and the displacement of the target object in the first video frame and the second video frame to the independent display chip.
  • the independent display chip obtains a third video frame according to the first video frame, the second video frame, the displacement of the target object in the first video frame and the second video frame.
  • the above-mentioned FIG. 3 is still taken as an example.
  • assume that the preset value is 3 pixels. Since the target object has moved 3 pixels along the horizontal axis and 4 pixels along the vertical axis, the magnitude of the motion vector of the target object is 5 pixels. Therefore, the CPU can determine that the displacement of the target object between the first video frame and the second video frame is greater than the preset value, and it can be considered that the target object has a large displacement. If the displacement of the target object between the first video frame and the second video frame were less than 3 pixels, the target object would be considered to have no displacement.
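  • in the 3-4-5 example above, the check reduces to comparing the Euclidean length of the motion vector with the preset value; a minimal sketch (the 3-pixel threshold is simply the value assumed in this example):

```python
import math

PRESET_VALUE = 3  # pixels; the threshold assumed in this example

def has_large_displacement(motion_vector: tuple, preset: float = PRESET_VALUE) -> bool:
    """Return True when the displacement of the target object between two video
    frames is greater than or equal to the preset value."""
    dx, dy = motion_vector
    return math.hypot(dx, dy) >= preset

# FIG. 3 case: 3 pixels horizontally and 4 pixels vertically -> magnitude 5 >= 3,
# so the CPU decodes both frames and hands them to the independent display chip.
print(has_large_displacement((3, 4)))   # True
print(has_large_displacement((1, 1)))   # False
```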
  • in this way, when it is determined that the displacement of the target object between the first video frame and the second video frame is greater than the preset value, the CPU only needs to decode the first video frame and the second video frame, and does not need to decode video frames in which the displacement of the target object is smaller, thereby temporarily releasing the CPU from video processing tasks so that it can handle other tasks such as calls and alarm clocks.
  • after the independent display chip receives, from the CPU, the first video frame, the second video frame and the displacement of the target object between the first video frame and the second video frame, the independent display chip can perform frame interpolation processing to obtain the third video frame, so that the loss caused by not decoding the video frames with a smaller displacement of the target object can be compensated, thereby ensuring the playback quality of the video.
  • Step 203 the electronic device plays the first video frame, the second video frame and the third video frame.
  • the above-mentioned first video frame and second video frame may each be a single video frame, and the third video frame may be one video frame or multiple video frames.
  • the first video frame and the second video frame may be adjacent video frames, or may be non-adjacent video frames.
  • if the first video frame and the second video frame are adjacent video frames, the electronic device may adopt a frame insertion method of inserting a video frame after the second video frame to replace the video frame that follows the second video frame; if the first video frame and the second video frame are non-adjacent video frames, for example there is a fourth video frame between the first video frame and the second video frame, a frame interpolation method of inserting a video frame between the first video frame and the second video frame to replace the fourth video frame is used. It should be noted that when different frame insertion methods are used, the playback order of the video frames is also different. For details, reference may be made to the description in the following embodiments, which will not be repeated here.
  • the embodiment of the present application provides a video playback method.
  • with this method, when the displacement of the target object between the first video frame and the second video frame of the target video is relatively large, only the first video frame and the second video frame need to be decoded, and there is no need to decode the video frames with a small displacement of the target object, so the CPU can be temporarily released from the task of processing video, reducing CPU power consumption and improving device system performance; in addition, inserting the third video frame makes up for the loss caused by not decoding the video frames with a small displacement of the target object, thereby ensuring the playback quality of the video.
  • the above-mentioned target video may include M sets of video frame data, and each set of video frame data is data of N video frames. Specifically, each set of video frame data is data obtained by compressing N video frames.
  • M is a positive integer
  • N is an integer greater than or equal to 2.
  • Step 202a: when the displacement data indicates that the displacement of the target object between the first video frame and the second video frame of the i-th group of video frame data is greater than or equal to a preset value, the electronic device decodes the i-th group of video frame data to obtain the first video frame and the second video frame.
  • i is a positive integer less than or equal to M, and i takes values 1, 2...M in sequence.
  • Step 202b the electronic device inserts the third video frame according to a preset rule.
  • Step 204: when the displacement data indicates that the displacement of the target object between the fifth video frame and the sixth video frame of the (i+1)-th group of video frame data is greater than or equal to a preset value, the electronic device decodes the (i+1)-th group of video frame data to obtain the fifth video frame and the sixth video frame, and inserts the seventh video frame according to a preset rule.
  • Step 205 the electronic device plays the fifth video frame, the sixth video frame and the seventh video frame.
  • for the specific implementation manners of step 204 and step 205, reference may be made to step 202 and step 203, which will not be repeated here.
  • the video playing method provided by the present application is exemplarily described below by taking as an example a target video that includes 10 sets of video frame data, each set of video frame data being obtained by compressing 5 frames of video.
  • the CPU detects the 5 frames of data of the first group. If it is detected that the displacement of the target object between the first frame data and each of the second frame data, the third frame data and the fourth frame data of the first group of video frame data is less than the preset value, while the displacement of the target object between the first frame data and the fifth frame data is greater than or equal to the preset value, the CPU only needs to decode the first frame data and the fifth frame data to obtain the first video frame and the fifth video frame, and then sends the first video frame, the fifth video frame, and the displacement of the target object between the first video frame and the fifth video frame to the independent display chip.
  • based on the first video frame and the fifth video frame, the independent display chip can interpolate the displacement of the target object between the first video frame and the fifth video frame according to a certain algorithm and displacement ratio, so that the 2nd video frame, the 3rd video frame and the 4th video frame are inserted between the 1st video frame and the 5th video frame.
  • the display screen can sequentially display the first video frame, the second video frame, the third video frame, the fourth video frame, and the fifth video frame.
  • the CPU can be notified that the task of video frame insertion and playback has been completed.
  • the CPU can continue to process 5 frames of data of the second group of video frame data stored in the video buffer area.
  • if the CPU detects that the displacement of the target object between the first frame data and the second frame data of the second group of video frame data is greater than or equal to the preset value, the CPU does not need to detect or decode the third frame data, the fourth frame data and the fifth frame data; it only needs to decode the first frame data and the second frame data to obtain the first video frame and the second video frame, and then sends the first video frame, the second video frame, and the displacement of the target object between the first video frame and the second video frame to the independent display chip.
  • the independent display chip can interpolate the displacement of the target object in the first video frame and the second video frame according to a certain algorithm and displacement ratio.
  • the display screen can sequentially display the first video frame, the second video frame, the third video frame, the fourth video frame, and the fifth video frame.
  • the CPU can be notified that the task of video frame insertion and playback has been completed. In this way, the CPU can continue to process the remaining 8 sets of video frame data in the above manner until all frames are played.
  • the video playing method provided by the embodiment of the present application is applied to the scene of video group processing.
  • in this scenario, the CPU can perform decompression and other processing operations on the video frames batch by batch, and after each set of video frame data is processed, the CPU is temporarily released from the task of processing video to handle other routine tasks such as calls, text messages, emails and Bluetooth, which can further improve device performance.
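  • the group-by-group flow described above can be condensed into the following sketch, which for each group of N frames picks the two frames the CPU actually needs to decode; this is an illustrative reading of the embodiment (what happens when no frame in a group crosses the threshold is not specified in the text, so the fallback below is an assumption):

```python
from typing import List, Tuple

def select_keyframes(displacements: List[float], preset: float) -> Tuple[int, int]:
    """For one group of N frames, return the indices of the only two frames the
    CPU needs to decode. displacements[k] is the displacement of the target
    object between frame 0 and frame k of the group."""
    for k in range(1, len(displacements)):
        if displacements[k] >= preset:
            return 0, k                  # large displacement found: decode frames 0 and k
    return 0, len(displacements) - 1     # assumption: fall back to first and last frame

# First group of the walkthrough: only the 5th frame moves by >= 3 pixels, so the
# CPU decodes frames 1 and 5 and the display chip interpolates frames 2-4.
print(select_keyframes([0, 1, 1.5, 2, 4], preset=3))    # -> (0, 4)
# Second group: the 2nd frame already moves by >= 3 pixels, so frames 1 and 2 are
# decoded and the display chip generates frames 3-5.
print(select_keyframes([0, 5, 0, 0, 0], preset=3))      # -> (0, 1)
```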
  • this embodiment of the application provides two video frame insertion methods:
  • the first video frame and the second video frame are adjacent video frames, and at this time, a frame insertion method of inserting a video frame after the first video frame and the second video frame is adopted.
  • the above step 202 can be realized through the following steps A1 and A2.
  • the above step 203 can be realized through the following step A3.
  • Step A1: when the displacement data indicates that the displacement of the target object between the first video frame and the second video frame of the target video is greater than or equal to the preset value, the electronic device decodes the video frame data to obtain the first video frame and the second video frame.
  • Step A2 The electronic device inserts a third video frame after the second video frame according to the displacement of the target object in the first video frame and the second video frame.
  • the displacement of the target object between the third video frame and the second video frame is greater than or equal to the preset value. It can be understood that, since the displacement of the target object between the first video frame and the second video frame of the target video is greater than or equal to the preset value, it is likely that the target object also has a large displacement in the video frames after the second video frame; setting the displacement of the target object between the third video frame and the second video frame to be greater than or equal to the preset value can therefore more accurately predict the movement trend of the target object in the video frames after the second video frame.
  • Step A3 the electronic device plays the first video frame, the second video frame and the third video frame in sequence.
  • an optional implementation is to play the first video frame and the second video frame immediately after the electronic device decodes the video frame data to obtain them; a third video frame is then inserted after the second video frame, so that the third video frame is played after the second video frame has been played.
  • another optional implementation is that, after the electronic device decodes the video frame data to obtain the first video frame and the second video frame, a third video frame is inserted after the second video frame according to the displacement of the target object between the first video frame and the second video frame, and then the first video frame, the second video frame and the third video frame are played.
  • assume the target video includes 10 sets of video frame data, each set of video frame data being data obtained by compressing 5 frames of video, and that the preset value is 3 pixels. As shown in FIG. 5, the CPU can detect the 5 frames of data of the first group of video frame data stored in the video buffer area.
  • if the CPU detects that the displacement of the target object between the first frame data and the second frame data of the first group of video frame data is 5 pixels, the CPU does not need to detect or decode the other frame data; it only needs to decode the first frame data and the second frame data to obtain the first video frame and the second video frame, and sends the first video frame, the second video frame, and the displacement of the target object between the first video frame and the second video frame to the independent display chip to notify it to perform video playback and frame insertion processing. In this way, the CPU is released from the video playback task to handle calls, text messages, emails, Bluetooth and other matters.
  • after the independent display chip receives the first video frame and the second video frame, it can directly display them. At the same time, based on the first video frame and the second video frame, the independent display chip can continue the displacement of the target object according to a certain algorithm and displacement ratio: for example, the target object in the second video frame continues to be displaced by 5 pixels to obtain the third video frame, the target object in the third video frame is then displaced by another 5 pixels to obtain the fourth video frame, and the target object in the fourth video frame is displaced by another 5 pixels to obtain the fifth video frame. In this way, after the playback of the second video frame is completed, the display screen can continue to play the third video frame, the fourth video frame and the fifth video frame.
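  • a minimal sketch of this "insert after" case, assuming the target object keeps the same per-frame motion vector; only the object's position is extrapolated here, standing in for the full frame generation the independent display chip would perform (the 5-pixel horizontal motion is an assumed direction, since the text only gives the magnitude):

```python
from typing import List, Tuple

def extrapolate_positions(pos_first: Tuple[int, int], pos_second: Tuple[int, int],
                          extra_frames: int) -> List[Tuple[int, int]]:
    """Positions of `extra_frames` frames inserted after the second video frame,
    repeating the displacement observed between the first and second frames."""
    dx = pos_second[0] - pos_first[0]
    dy = pos_second[1] - pos_first[1]
    positions = []
    x, y = pos_second
    for _ in range(extra_frames):
        x, y = x + dx, y + dy   # the per-frame displacement stays >= the preset value
        positions.append((x, y))
    return positions

# Example from the text: the object moved 5 pixels between frames 1 and 2, so
# frames 3, 4 and 5 continue that motion in 5-pixel steps.
print(extrapolate_positions((0, 0), (5, 0), extra_frames=3))  # [(10, 0), (15, 0), (20, 0)]
```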
  • this video playback method is applied to the scenario where a large displacement of the target object is detected in the first two video frames. According to the displacement of the target object in these two video frames, the displacement trend of the target object after the two frames can be estimated, so video frames can be inserted after the adjacent video frames; the CPU therefore does not need to decode the video frames that follow the two video frames and is temporarily released from the task of processing video to handle calls, text messages, mail, Bluetooth and other routine tasks.
  • the first video frame and the second video frame are non-adjacent video frames, and at this time, a frame interpolation method of inserting a video frame between the first video frame and the second video frame is adopted.
  • the above step 202 can be realized through the following steps B1 and B2.
  • the above step 203 can be realized through the following step B3.
  • Step B1: when the displacement data indicates that the displacement of the target object between the first video frame and the second video frame of the target video is greater than or equal to the preset value, and the displacement of the target object between the first video frame and the fourth video frame is less than the preset value, the electronic device decodes the video frame data to obtain the first video frame and the second video frame.
  • Step B2 the electronic device inserts a third video frame between the first video frame and the second video frame according to the displacement of the target object in the first video frame and the second video frame.
  • the displacement of the target object between the first video frame and the third video frame is smaller than the preset value. It can be understood that, since the displacement of the target object between the first video frame and the fourth video frame is less than the preset value and the third video frame is used as a replacement for the fourth video frame, if the displacement of the target object between the third video frame and the first video frame is also smaller than the preset value, the position vector of the target object in the replaced fourth video frame is more accurate.
  • Step B3 the electronic device sequentially plays the first video frame, the third video frame and the second video frame.
  • an optional implementation manner is to play the first video frame immediately after the electronic device decodes the video frame data to obtain the first video frame and the second video frame; meanwhile, according to the displacement of the target object between the first video frame and the second video frame, a third video frame is inserted between the first video frame and the second video frame, so that after the first video frame is played, the third video frame is played, and then the second video frame is played.
  • another optional implementation is that, after the electronic device decodes the video frame data to obtain the first video frame and the second video frame, a third video frame is inserted between the first video frame and the second video frame according to the displacement of the target object between the first video frame and the second video frame, and then the first video frame, the third video frame and the second video frame are played sequentially.
  • the target video includes 10 sets of video frame data, and each set of video frame data is data obtained after compressing 5 frames of video.
  • assume the preset value is 4 pixels.
  • the CPU can detect 5 frames of data of the first group of video frame data stored in the video buffer area.
  • if the CPU detects that the displacements of the target object between the first frame data and each of the second frame data, the third frame data and the fourth frame data of the first group of video frame data are less than 4 pixels, while the displacement of the target object between the first frame data and the fifth frame data is 4 pixels, the CPU does not need to decode the second frame data, the third frame data and the fourth frame data; it only needs to decode the first frame data and the fifth frame data to obtain the 1st video frame and the 5th video frame, and sends the 1st video frame, the 5th video frame, and the displacement of the target object between the 1st video frame and the 5th video frame to the independent display chip to notify it to perform video playback and frame insertion processing. In this way, the CPU is released from the video playback task to handle calls, text messages, emails, Bluetooth and other matters.
  • after the independent display chip receives the first video frame and the fifth video frame, it can directly display the first video frame. At the same time, based on the first video frame and the fifth video frame, the independent display chip can interpolate the displacement of the target object between the first video frame and the fifth video frame according to a certain algorithm and displacement ratio: for example, on the basis of the first video frame, the target object is shifted by 1 pixel to obtain the second video frame, the target object in the second video frame is then shifted by another 1 pixel to obtain the third video frame, and the target object in the third video frame is shifted by another 1 pixel to obtain the fourth video frame. In this way, after the first video frame is played, the display screen can continue to play the second video frame, the third video frame, the fourth video frame and the fifth video frame in sequence.
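  • the "insert between" case can be sketched in the same way, splitting the displacement between the two decoded frames evenly over the skipped frames (an even split is one possible reading of the "certain algorithm and displacement ratio" the text leaves open):

```python
from typing import List, Tuple

def interpolate_positions(pos_first: Tuple[float, float], pos_last: Tuple[float, float],
                          missing_frames: int) -> List[Tuple[float, float]]:
    """Positions of `missing_frames` frames inserted between the first and last
    decoded frames, moving the target object by equal steps so that the
    per-step displacement stays below the preset value."""
    steps = missing_frames + 1
    dx = (pos_last[0] - pos_first[0]) / steps
    dy = (pos_last[1] - pos_first[1]) / steps
    return [(pos_first[0] + dx * k, pos_first[1] + dy * k) for k in range(1, steps)]

# Example from the text: a 4-pixel displacement between the 1st and 5th frames is
# spread over 3 inserted frames, i.e. 1 pixel per frame.
print(interpolate_positions((0.0, 0.0), (4.0, 0.0), missing_frames=3))
# [(1.0, 0.0), (2.0, 0.0), (3.0, 0.0)]
```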
  • this video playback method is applied to the scenario where a large displacement of the target object is detected between non-adjacent video frames. According to the displacement of the target object in the two video frames, the displacement trend of the target object between the two frames can be estimated, so video frames can be inserted between the two video frames; the CPU therefore does not need to decode the video frames between the two video frames and is temporarily released from the task of processing video to handle calls, text messages, email, Bluetooth and other routine tasks.
  • the executing subject may be a video playing device, or a control module in the video playing device for executing the video playing method.
  • the video playing device provided in the embodiment of the present application is described by taking the video playing method executed by the video playing device as an example.
  • the embodiment of the present application provides a video playing device 700 .
  • the video playback device 700 includes an acquisition module 701 , a first processing module 702 , a second processing module 703 and a playback module 704 .
  • the acquiring module 701 may be configured to acquire video frame data of the target video and displacement data of the target object in the video frame.
  • the first processing module 702 may be configured to decode the video frame data to obtain the first video frame and the second video frame when the displacement data indicates that the displacement of the target object between the first video frame and the second video frame of the target video is greater than or equal to a preset value.
  • the second processing module 703 may be configured to insert the third video frame according to a preset rule.
  • the playing module 704 can be used to play the first video frame, the second video frame and the third video frame.
  • the first video frame and the second video frame are adjacent video frames.
  • the second processing module 703 may specifically be configured to insert a third video frame after the second video frame according to the displacement of the target object in the first video frame and the second video frame.
  • the playing module 704 can be specifically configured to play the first video frame, the second video frame and the third video frame in sequence.
  • the first processing module 702 can be specifically configured to decode the video frame data to obtain the first video frame and the second video frame when the displacement data indicates that the displacement of the target object between the first video frame and the second video frame is greater than or equal to a preset value and the displacement of the target object between the first video frame and the fourth video frame is less than the preset value.
  • the second processing module 703 may specifically be configured to insert a third video frame between the first video frame and the second video frame according to the displacement of the target object in the first video frame and the second video frame.
  • the playing module 704 can be specifically configured to play the first video frame, the third video frame and the second video frame in sequence.
  • the first processing module 702 can specifically be configured to determine, according to the displacement data, that the displacement of the target object in the first video frame and the second video frame is greater than or equal to a preset value; and decode the video frame data to obtain the first video frame and the second video frame; and sending the first video frame, the second video frame, the displacement of the target object in the first video frame and the second video frame to the second processing module 703.
  • the second processing module 703 may specifically be configured to acquire a third video frame according to the first video frame, the second video frame, the first video frame, and the displacement of the target object in the second video frame.
  • the target video includes M sets of video frame data, each set of video frame data is data of N video frames, M is a positive integer, and N is an integer greater than or equal to 2.
  • the first processing module 702 can be specifically configured to decode the i-th group of video frame data to obtain the first video frame and the second video frame when the displacement data indicates that the displacement of the target object between the first video frame and the second video frame of the i-th group of video frame data is greater than or equal to a preset value, where i is a positive integer less than or equal to M.
  • the first processing module 702 can also be used to decode the (i+1)-th group of video frame data to obtain the fifth video frame and the sixth video frame when, after the first video frame, the second video frame and the third video frame are played, the displacement data indicates that the displacement of the target object between the fifth video frame and the sixth video frame of the (i+1)-th group of video frame data is greater than or equal to a preset value.
  • the second processing module 703 may also be configured to insert the seventh video frame according to a preset rule.
  • the playing module 704 can also be used to play the fifth video frame, the sixth video frame and the seventh video frame.
  • the embodiment of the present application provides a video playback device.
  • when the displacement of the target object between the first video frame and the second video frame of the target video is relatively large, the device only needs to decode the first video frame and the second video frame and does not decode the video frames with a smaller displacement of the target object, so the CPU can be temporarily released from the task of processing video, reducing CPU power consumption and improving device system performance; in addition, inserting the third video frame makes up for the loss caused by not decoding the video frames with a small displacement of the target object, thereby ensuring the playback quality of the video.
  • the video playback device in this embodiment of the present application may be a device, or a component, an integrated circuit, or a chip in a terminal.
  • the device may be a mobile electronic device or a non-mobile electronic device.
  • the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a handheld computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a personal digital assistant (PDA).
  • the non-mobile electronic device may be a server, a network attached storage (NAS), a personal computer (PC), a television (TV), a teller machine or a self-service machine, etc., which is not specifically limited in the embodiments of the present application.
  • the video playback device in the embodiment of the present application may be a device with an operating system.
  • the operating system may be an Android (Android) operating system, an ios operating system, or other possible operating systems, which are not specifically limited in this embodiment of the present application.
  • the video playback device provided by the embodiment of the present application can realize various processes realized by the method embodiments in FIG. 1 to FIG. 6 , and details are not repeated here to avoid repetition.
  • the embodiment of the present application also provides an electronic device 800, including a processor 801, a memory 802, and a program or instruction stored in the memory 802 and executable on the processor 801; when the program or instruction is executed by the processor 801, each process of the above video playback method embodiment can be realized and the same technical effect can be achieved, which is not repeated here to avoid repetition.
  • the electronic devices in the embodiments of the present application include the above-mentioned mobile electronic devices and non-mobile electronic devices.
  • FIG. 9 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
  • the electronic device 100 includes but is not limited to: a radio frequency unit 101, a network module 102, an audio output unit 103, an input unit 104, a sensor 105, a display unit 106, a user input unit 107, an interface unit 108, a memory 109, a processor 110 and other components.
  • the electronic device 100 can also include a power supply (such as a battery) for supplying power to the various components, and the power supply can be logically connected to the processor 110 through a power management system, so that functions such as charging management, discharging management and power consumption management are realized through the power management system.
  • the structure of the electronic device shown in FIG. 9 does not constitute a limitation on the electronic device.
  • the electronic device may include more or fewer components than shown in the figure, combine some components, or arrange the components differently, and details will not be repeated here.
  • the processor 110 can be used to obtain the video frame data of the target video and the displacement data of the target object in the video frames, and to decode the video frame data to obtain the first video frame and the second video frame when the displacement data indicates that the displacement of the target object between the first video frame and the second video frame of the target video is greater than or equal to the preset value.
  • the independent display chip 1061 can be used to insert the third video frame according to preset rules.
  • the display panel 1062 can be used to play the first video frame, the second video frame and the third video frame.
  • the first video frame and the second video frame are adjacent video frames.
  • the independent display chip 1061 can be specifically configured to insert a third video frame after the second video frame according to the displacement of the target object in the first video frame and the second video frame.
  • the display panel 1062 can specifically be used to play the first video frame, the second video frame and the third video frame in sequence.
  • the processor 110 may be specifically configured to decode the video frame data to obtain the first video frame and the second video frame when the displacement data indicates that the displacement of the target object between the first video frame and the second video frame is greater than or equal to a preset value and the displacement of the target object between the first video frame and the fourth video frame is less than the preset value.
  • the independent display chip 1061 can be specifically configured to insert a third video frame between the first video frame and the second video frame according to the displacement of the target object in the first video frame and the second video frame.
  • the display panel 1062 can specifically be used to play the first video frame, the third video frame and the second video frame in sequence.
  • the processor 110 may specifically be configured to determine, according to the displacement data, that the displacement of the target object in the first video frame and the second video frame is greater than or equal to a preset value; and decode the video frame data to obtain the first video frame and the second video frame. the second video frame; and sending the first video frame, the second video frame, the displacement of the target object in the first video frame and the second video frame to the independent display chip 1061 .
  • the independent display chip 1061 can be specifically configured to acquire a third video frame according to the first video frame, the second video frame, the displacement of the target object in the first video frame and the second video frame.
  • the target video includes M sets of video frame data, each set of video frame data is data of N video frames, M is a positive integer, and N is an integer greater than or equal to 2.
  • the processor 110 may be specifically configured to decode the i-th group of video frame data to obtain the first video frame and the second video frame when the displacement data indicates that the displacement of the target object between the first video frame and the second video frame of the i-th group of video frame data is greater than or equal to a preset value, where i is a positive integer less than or equal to M.
  • the processor 110 may also be configured to decode the (i+1)-th group of video frame data to obtain the fifth video frame and the sixth video frame when, after the first video frame, the second video frame and the third video frame are played, the displacement data indicates that the displacement of the target object between the fifth video frame and the sixth video frame of the (i+1)-th group of video frame data is greater than or equal to a preset value.
  • the independent display chip 1061 can also be used to insert the seventh video frame according to preset rules.
  • the display panel 1062 can also be used to play the fifth video frame, the sixth video frame and the seventh video frame.
  • An embodiment of the present application provides an electronic device.
  • when the displacement of the target object between the first video frame and the second video frame of the target video is large, the device only needs to decode the first video frame and the second video frame and does not need to decode the video frames with a small displacement of the target object, so the CPU can be temporarily released from the task of processing video, reducing CPU power consumption and improving the performance of the device system; in addition, inserting the third video frame makes up for the loss caused by not decoding the video frames whose displacement of the target object is small, thereby ensuring the playback quality of the video.
  • the input unit 104 may include a graphics processing unit (GPU) 1041 and a microphone 1042, and the graphics processing unit 1041 processes image data of still pictures or video obtained by an image capture device (such as a camera).
  • the display unit 106 may include an independent display chip 1061 and a display panel 1062, and the display panel 1062 may be configured in the form of a liquid crystal display or an organic light emitting diode.
  • the user input unit 107 includes a touch panel 1071 and other input devices 1072 .
  • the touch panel 1071 is also called a touch screen.
  • the touch panel 1071 may include two parts, a touch detection device and a touch controller.
  • Other input devices 1072 may include, but are not limited to, physical keyboards, function keys (such as volume control keys, switch keys, etc.), trackballs, mice, and joysticks, which will not be repeated here.
  • Memory 109 may be used to store software programs as well as various data, including but not limited to application programs and operating systems.
  • the processor 110 may integrate an application processor and a modem processor, wherein the application processor mainly processes operating systems, user interfaces, and application programs, and the modem processor mainly processes wireless communications. It can be understood that the foregoing modem processor may not be integrated into the processor 110 .
  • the embodiment of the present application also provides a readable storage medium, on which a program or instruction is stored; when the program or instruction is executed by a processor, each process of the above-mentioned embodiment of the video playback method can be realized and the same technical effect can be achieved, which is not repeated here to avoid repetition.
  • the processor is the processor in the electronic device described in the above embodiments.
  • the readable storage medium includes computer readable storage medium, such as computer read-only memory (read-only memory, ROM), random access memory (random access memory, RAM), magnetic disk or optical disk, and the like.
  • the embodiment of the present application further provides a chip, the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is used to run programs or instructions to implement the above video playback method embodiment Each process can achieve the same technical effect, so in order to avoid repetition, it will not be repeated here.
  • the chip mentioned in the embodiments of the present application may also be called a system-on-chip, a system chip, a chip system, or a system-on-a-chip.
  • the terms "comprising", "including" or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article or apparatus comprising a set of elements includes not only those elements but also other elements not expressly listed, or elements inherent to the process, method, article or apparatus. Without further limitation, an element defined by the phrase "comprising a..." does not preclude the presence of additional identical elements in the process, method, article or apparatus comprising that element.
  • the scope of the methods and devices in the embodiments of the present application is not limited to performing functions in the order shown or discussed; depending on the functions involved, functions may also be performed in a substantially simultaneous manner or in the reverse order. For example, the described methods may be performed in an order different from that described, and various steps may also be added, omitted or combined. Additionally, features described with reference to certain examples may be combined in other examples.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

Embodiments of the present invention provide a video playback method and apparatus, and an electronic device, and pertain to the field of communication technology. The method includes: obtaining video frame data of a target video and displacement data of a target object in the video frames; in a case where the displacement data indicates that the displacement of the target object between a first video frame and a second video frame of the target video is greater than or equal to a preset value, decoding the video frame data to obtain the first video frame and the second video frame, and inserting a third video frame according to a preset rule; and playing the first video frame, the second video frame, and the third video frame.

Description

视频播放方法、装置及电子设备
相关申请的交叉引用
本申请主张在2021年07月15日在中国提交的中国专利申请号202110803088.0的优先权,其全部内容通过引用包含于此。
技术领域
本申请涉及通信技术领域,尤其涉及一种视频播放方法、装置及电子设备。
背景技术
随着通信技术的发展,视频播放已成为电子设备的重要功能。
通常,由电子设备的中央处理器(central processing unit,CPU)处理视频。具体地,在用户选择待播放视频之后,CPU会根据用户操作获取待播放视频,再解码待播放视频得到视频帧,然后将视频帧发送至独显芯片,从而可以显示视频帧。
然而,由于待播放视频是由多个视频帧组成的,因此若CPU重复上述操作,则会导致CPU功耗较大,从而降低了设备系统性能。
发明内容
本发明实施例提供一种视频播放方法、装置及电子设备,能够解决由CPU处理视频会降低设备系统性能的问题。
为了解决上述技术问题,本申请是这样实现的:
第一方面,本申请实施例提供了一种视频播放方法。该方法包括:获取目标视频的视频帧数据和视频帧中目标对象的位移数据;在该位移数据指示该目标视频的第一视频帧和第二视频帧中目标对象的位移大于或等于预设值的情况下,解码该视频帧数据得到该第一视频帧和该第二视频帧,并按照预设规则插入第三视频帧;播放该第一视频帧、该第二视频帧和该第三视频帧。
第二方面,本申请实施例提供了一种视频播放装置。该装置包括:获取模块、第一处理模块、第二处理模块和播放模块。获取模块,用于获取目标视频的视频帧数据和视频帧 中目标对象的位移数据;第一处理模块,用于在该位移数据指示该目标视频的第一视频帧和第二视频帧中目标对象的位移大于或等于预设值的情况下,解码该视频帧数据得到该第一视频帧和该第二视频帧;第二处理模块,用于按照预设规则插入第三视频帧;播放模块,用于播放该第一视频帧、该第二视频帧和该第三视频帧。
第三方面,本申请实施例提供了一种电子设备,该电子设备包括处理器、存储器及存储在存储器上并可在处理器上运行的程序或指令,程序或指令被处理器执行时实现如第一方面提供的方法的步骤。
第四方面,本申请实施例提供了一种可读存储介质,可读存储介质上存储程序或指令,程序或指令被处理器执行时实现如第一方面提供的方法的步骤。
在本发明实施例中,可以先获取目标视频的视频帧数据和视频帧中目标对象的位移数据;再在该位移数据指示该目标视频的第一视频帧和第二视频帧中目标对象的位移大于或等于预设值的情况下,解码该视频帧数据得到该第一视频帧和该第二视频帧,并按照预设规则插入第三视频帧;然后播放该第一视频帧、该第二视频帧和该第三视频帧。通过该方案,当目标视频的第一视频帧和第二视频帧中目标对象的位移较大时,由于仅需解码该第一视频帧和该第二视频帧,而无需解码目标对象的位移较小的视频帧,因此可以暂时将CPU从处理视频的任务中释放,降低了CPU功耗,提升了设备系统性能;另外,通过插入第三视频帧,可以弥补不解码目标对象的位移较小的视频帧造成的损失,从而保证了视频的播放质量。
附图说明
图1是本申请实施例提供的一种电子设备的架构示意图;
图2为本申请实施例提供的一种视频播放方法的流程图之一;
图3为本申请实施例提供的两个视频帧中的目标对象的示意图;
图4为本申请实施例提供的一种视频播放方法的流程图之二;
图5为本申请实施例提供的插帧操作的示意图之一;
图6为本申请实施例提供的插帧操作的示意图之二;
图7为本申请实施例提供的视频播放装置的结构示意图;
图8为本申请实施例提供的电子设备的硬件示意图之一;
图9为本申请实施例提供的电子设备的硬件示意图之二。
具体实施方式
下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行清楚地描述,显然,所描述的实施例是本申请一部分实施例,而不是全部的实施例。基于本申请中的实施例,本领域普通技术人员获得的所有其他实施例,都属于本申请保护的范围。
本申请的说明书和权利要求书中的术语“第一”、“第二”等是用于区别类似的对象,而不用于描述特定的顺序或先后次序。应该理解这样使用的数据在适当情况下可以互换,以便本申请的实施例能够以除了在这里图示或描述的那些以外的顺序实施,且“第一”、“第二”等所区分的对象通常为一类,并不限定对象的个数,例如第一对象可以是一个,也可以是多个。此外,说明书以及权利要求中“和/或”表示所连接对象的至少其中之一,字符“/”,一般表示前后关联对象是一种“或”的关系。
需要说明的是,本申请实施例中的标识用于指示信息的文字、符号、图像等,可以以控件或者其他容器作为显示信息的载体,包括但不限于文字标识、符号标识、图像标识。
本申请实施例提供一种视频播放方法、装置及电子设备,可以先获取目标视频的视频帧数据和视频帧中目标对象的位移数据;再在该位移数据指示该目标视频的第一视频帧和第二视频帧中目标对象的位移大于或等于预设值的情况下,解码该视频帧数据得到该第一视频帧和该第二视频帧,并按照预设规则插入第三视频帧;然后播放该第一视频帧、该第二视频帧和该第三视频帧。通过该方案,当目标视频的第一视频帧和第二视频帧中目标对象的位移较大时,由于仅需解码该第一视频帧和该第二视频帧,而无需解码目标对象的位移较小的视频帧,因此可以暂时将CPU从处理视频的任务中释放,降低了CPU功耗,提升了设备系统性能;另外,通过插入第三视频帧,可以弥补不解码目标对象的位移较小的视频帧造成的损失,从而保证了视频的播放质量。
下面结合附图,通过具体的实施例及其应用场景对本申请实施例提供的视频播放方法、装置及电子设备进行详细地说明。
如图1所示,为本申请实施例提供的一种电子设备的架构示意图。该电子设备包括CPU、独显芯片和显示屏等。
本申请实施例中,CPU不但具备录制视频、播放视频、视频聊天、蓝牙配对、通话、 闹钟等传统功能,还可以用于获取视频帧数据和位移数据、解码视频帧数据、以及将位移数据和解码后得到的视频帧发送至独显芯片。
独显芯片包括位置控制器、缩放控制器、图帧处理器和显存等。其中,位置控制器可以用于控制图像的位置;缩放控制器可以用于控制图像的缩放;图帧处理器可以用于进行插帧动作;显存也叫做帧缓存,可以用于存储显卡芯片处理过或者即将提取的渲染数据。
如图2所示,本申请实施例提供一种视频播放方法。该视频播放方法可以包括下述的步骤201至步骤203。下面以该视频播放方法执行主体为如图1所示的电子设备为例进行说明。
步骤201、电子设备获取目标视频的视频帧数据和视频帧中目标对象的位移数据。
本申请实施例中,视频帧数据为对目标视频的视频帧压缩后得到的数据。位移数据为对目标视频的视频帧中的目标对象进行分析后得到的数据,该位移数据可以用于指示目标对象在视频帧中的位置矢量,例如在视频帧中的位置、在相邻视频帧之间移动相对位移等。
可选地,上述视频帧数据和位移数据可以为两组独立的数据,并打包成一组数据;或者,上述位移数据可以为编码后包含在视频帧数据中的数据。可以根据实际使用需求确定,本申请实施例不作限定。
可选地,上述目标视频可以为电子设备拍摄的视频,也可以为电子设备接收其他设备分享的视频,还可以为电子设备从服务器下载的视频等。
示例性的,以目标视频为手机拍摄的视频为例。在用户触发手机拍摄目标视频之后,手机可以开始录制目标视频,并分析录制的每个视频帧中运动对象的位置矢量(也称运动矢量),然后将得到的位移数据记录到视频帧数据中。如此,在用户想要观看目标视频时,手机可以响应于用户输入,根据视频帧数据和位移数据,播放目标视频。具体播放方式可以参照下述实施例中的描述,此处不予赘述。
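As a rough illustration (not part of the original application text), the per-frame displacement metadata described above could be packaged with the compressed frame data along the following lines; the FrameData structure and the encoder/analyzer objects are hypothetical stand-ins, not a format defined by this application.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class FrameData:
    """One compressed video frame plus the motion metadata analysed while recording."""
    compressed: bytes                                  # encoder output for this frame
    motion_vector: Optional[Tuple[int, int]] = None    # (dx, dy) of the target object vs. the previous frame

def record_frame(encoder, analyzer, raw_frame, prev_raw_frame) -> FrameData:
    """Compress one frame and attach the displacement of the moving object
    relative to the previous frame (None for the first frame of the video)."""
    mv = None
    if prev_raw_frame is not None:
        mv = analyzer.motion_vector(prev_raw_frame, raw_frame)
    return FrameData(compressed=encoder.encode(raw_frame), motion_vector=mv)
```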
可选地,上述目标对象可以为运动物体的图像,该运动物体可以为奔跑马匹、飞腾的小鸟、坠落的树叶、挥动的手臂、流动的泉水、闪烁的霓虹灯等任意可以运动的物体。可以理解的是,当电子设备拍摄视频时,若某物体发生移动,则该物体在视频帧中的相对位置也会发生变化,即,视频帧目标对象的位置矢量发生变化。
可选地,由于目标对象在视频帧中是通过颜色表示的,因此目标对象的位移数据可以 为视频帧的颜色数据。
进一步地,上述视频帧的颜色数据可以为视频帧中像素点的像素值。
示例性的,如图3所示,1个圆圈代表1个像素点,第一视频帧和第二视频帧的图像宽度和图像高度均为10个像素点。第一视频帧的区域011的四个顶点的坐标分别是(2,2)、(4,2)、(2,4)、(4,4),且区域011的像素点的像素值均为a1;第二视频帧的区域012的四个顶点的坐标分别是(5,6)、(7,6)、(5,8)、(7,8),且区域012的像素点的像素值也均为a1。假设第一视频帧的区域011为目标对象,由于第一视频帧的区域011和第二视频帧的区域012的区域大小相同,且第一视频帧的区域011和第二视频帧的区域012包含的像素点的像素值相同,因此可以确定与第一视频帧相比,在第二视频帧中目标对象移动至区域012,即目标对象从区域011运动至区域012。
需要说明的是,上述图3是以区域011和区域012的所有像素点的像素值均为a1为例说明的,其并不对本申请实施例形成限定。可以理解,区域011和区域012中的各个像素点的像素值可以各不相同,但区域011和区域012对应的像素点的像素值需相同,例如区域011的像素点(2,2)和区域012的像素点(5,6)的像素值需相同。
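A minimal sketch of the Figure 3 example: locate the block of matching pixel values in the second frame and measure how far it moved. The brute-force search and the 3-pixel preset value (taken from the example further below) are illustrative assumptions, not a prescribed implementation.

```python
import numpy as np

def find_block(frame: np.ndarray, block: np.ndarray):
    """Brute-force search for the top-left corner (x, y) of `block` inside `frame`."""
    bh, bw = block.shape
    fh, fw = frame.shape
    for y in range(fh - bh + 1):
        for x in range(fw - bw + 1):
            if np.array_equal(frame[y:y + bh, x:x + bw], block):
                return x, y
    raise ValueError("target object not found in the second frame")

def object_displacement(frame1, frame2, x0, y0, w, h) -> float:
    """Displacement (in pixels) of the object occupying (x0, y0, w, h) in frame1,
    e.g. region 011 moving to region 012: 3 right and 4 down gives 5 pixels."""
    block = frame1[y0:y0 + h, x0:x0 + w]
    x1, y1 = find_block(frame2, block)
    return ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5

PRESET = 3  # pixels; the example threshold used later in the description

def is_large_motion(displacement: float, preset: int = PRESET) -> bool:
    return displacement >= preset
```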
步骤202、在该位移数据指示该目标视频的第一视频帧和第二视频帧中目标对象的位移大于或等于预设值的情况下,电子设备解码该视频帧数据得到该第一视频帧和该第二视频帧,并按照预设规则插入第三视频帧。
本申请实施例中,在执行主体为如图1所示的电子设备的情况下,上述步骤202具体可以通过下述的(1)至(4)实现。
(1)CPU根据位移数据,确定第一视频帧和第二视频帧中目标对象的位移大于或等于预设值。
(2)CPU解码视频帧数据得到该第一视频帧和该第二视频帧。
(3)CPU向独显芯片发送该第一视频帧、该第二视频帧、该第一视频帧和该第二视频帧中目标对象的位移。
(4)独显芯片根据该第一视频帧、该第二视频帧、该第一视频帧和该第二视频帧中目标对象的位移,获取第三视频帧。
示例性的,仍以上述图3为例。假设预设值为3个像素点。由于目标对象在横轴移动 了3个像素点,在纵轴移动了4个像素点,那么目标对象的运动矢量为5个像素点。因此,CPU可以确定第一视频帧和第二视频帧中目标对象的位移大于预设值,那么可以认为目标对象发生较大位移。若第一视频帧和第二视频帧中目标对象的位移小于3个像素点,则认为目标对象没有发生位移。
可以理解的是,一方面,在确定第一视频帧和第二视频帧中目标对象的位移大于预设值的情况下,由于CPU仅需解码该第一视频帧和该第二视频帧,而无需解码目标对象的位移较小的视频帧,从而暂时将CPU从处理视频的任务中释放,进而CPU可以处理通话、闹钟等其他任务。另一方面,在独显芯片接收到CPU发动的该第一视频帧、该第二视频帧、该第一视频帧和该第二视频帧中目标对象的位移后,独显芯片可以进行插帧处理,以得到第三视频帧,从而可以弥补不解码目标对象的位移较小的视频帧造成的损失,从而保证了视频的播放质量。
需要说明的是,对于独显芯片获取第三视频帧的具体实施方式,可以参照下述实施例提供的两种视频插帧方式,此处不予赘述。
步骤203、电子设备播放该第一视频帧、该第二视频帧和该第三视频帧。
可选地,上述第一视频帧和第二视频帧可以分别为一个视频帧,第三视频帧可以一个视频帧或多个视频帧。
可选地,第一视频帧和第二视频帧可以为相邻的视频帧,也可以为不相邻的视频帧。具体地,若第一视频帧和第二视频帧为相邻的视频帧,则电子设备可以采用在第一视频帧和第二视频帧之后插入视频帧的插帧方式,以替代第二视频帧之后的视频帧;若第一视频帧和第二视频帧为不相邻的视频帧,例如第一视频帧和第二视频帧之间还有第四视频帧,则采用在第一视频帧和第二视频帧之间插入视频帧的插帧方式,以替代第四视频帧。需要说明的是,当采用的插帧方式不同时,视频帧的播放顺序也不相同,具体可以参照下述实施例中的描述,此处不予赘述。
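The two playback orders discussed above can be summarised in a small sketch; the 0-based frame indices and the assumption that the later decoded frame closes the group are illustrative.

```python
def plan_playback(i1: int, i2: int, total: int) -> list:
    """Display order for a group of `total` frames in which only frames i1 and i2
    were decoded; 'D' marks a CPU-decoded frame, 'I' a frame inserted by the
    display chip."""
    if i2 == i1 + 1:
        # adjacent decoded frames: play both first, then the extrapolated frames
        return ["D", "D"] + ["I"] * (total - 2)
    # non-adjacent decoded frames (assumed here to be the first and last of the group):
    # the inserted frames replace the skipped frames between them
    return ["D"] + ["I"] * (i2 - i1 - 1) + ["D"]
```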
本申请实施例提供一种视频播放方法,当目标视频的第一视频帧和第二视频帧中目标对象的位移较大时,由于仅需解码该第一视频帧和该第二视频帧,而无需解码目标对象的位移较小的视频帧,因此可以暂时将CPU从处理视频的任务中释放,降低了CPU功耗,提升了设备系统性能;另外,通过插入第三视频帧,可以弥补不解码目标对象的位移较小的 视频帧造成的损失,从而保证了视频的播放质量。
可选地,上述目标视频可以包括M组视频帧数据,每组视频帧数据为N个视频帧的数据。具体地,每组视频帧数据为对N个视频帧压缩后得到的数据。其中,M为正整数,N为大于或等于2的整数。结合图2,如图4所示,上述步骤202可以通过下述的步骤202a和步骤202b实现,在上述步骤203之后还可以包括下述步骤204和步骤205。
步骤202a、在位移数据指示第i组视频帧数据的第一视频帧和第二视频帧中目标对象的位移大于或等于预设值的情况下,电子设备解码第i组视频帧数据得到该第一视频帧和该第二视频帧。
本申请实施例中,i为小于或等于M的正整数,i依次取值1、2……M。
步骤202b、电子设备按照预设规则插入第三视频帧。
步骤204、在位移数据指示第i+1组视频帧数据的第五视频帧和第六视频帧中目标对象的位移大于或等于预设值的情况下,解码该第i+1组视频帧数据得到该第五视频帧和该第六视频帧,并按照预设规则插入第七视频帧。
步骤205、电子设备播放该第五视频帧、该第六视频帧和该第七视频帧。
对于步骤204和步骤205具体实施方式,可以参照步骤202和步骤203,此处不予赘述。
下面以目标视频包括10组视频帧数据,且每组视频帧数据为对5帧视频压缩后得到的数据为例,对本申请提供的视频播放方法进行示例性说明。
当用户想要观看目标视频时,视频缓存区域中存储有第1组视频帧数据的5帧数据,CPU对这5帧数据进行检测。若检测到第1组视频帧数据的第2帧数据、第3帧数据、第4帧数据分别与第1帧数据中目标对象的位移小于预设值,且第5帧数据与第1帧数据中目标对象的位移大于或等于预设值,则CPU仅需解码第1帧数据与第5帧数据,得到第1视频帧和第5视频帧,并将第1视频帧、第5视频帧、第1视频帧和第5视频帧中目标对象的位移发送至独显芯片。然后,独显芯片可以根据第1视频帧、第5视频帧,将第1视频帧和第5视频帧中目标对象的位移,按照一定算法和位移比例进行插帧处理,在第1视频帧和第5视频帧之间插入第2视频帧、第3视频帧、第4视频帧。如此,显示屏可以依次显示第1视频帧、第2视频帧、第3视频帧、第4视频帧、第5视频帧。
在显示屏播放完成第1组视频帧数据的第5视频帧之后，可以通知CPU已完成视频插帧和播放任务。CPU可以继续处理视频缓存区域中存储的第2组视频帧数据的5帧数据。例如，若CPU检测到第2组视频帧数据的第2帧数据与第1帧数据中目标对象的位移大于或等于预设值，则CPU无需检测、解码第3帧数据、第4帧数据和第5帧数据，仅需解码第1帧数据与第2帧数据，得到第1视频帧和第2视频帧，并将第1视频帧、第2视频帧、第1视频帧和第2视频帧中目标对象的位移发送至独显芯片。然后，独显芯片可以根据第1视频帧、第2视频帧，将第1视频帧和第2视频帧中目标对象的位移，按照一定算法和位移比例进行插帧处理，在第1视频帧和第2视频帧之后插入第3视频帧、第4视频帧、第5视频帧。如此，显示屏可以依次显示第1视频帧、第2视频帧、第3视频帧、第4视频帧、第5视频帧。
在显示屏播放完成第2组视频帧数据的第5视频帧之后,可以通知CPU已完成视频插帧和播放任务。如此,CPU可以按照上述方式,继续对剩余的8组视频帧数据进行处理,直至所有帧播放结束。
本申请实施例提供的视频播放方法,应用于对视频分组处理的场景中,通过将目标视频划分为多组视频帧数据,可以使得CPU分批依次对视频帧进行解压等处理操作,并在对每组视频帧数据处理之后,暂时将CPU从处理视频的任务中释放,以处理通话、短信、邮件、蓝牙等其他常规任务,从而可以进一步提高设备性能。
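Put together, the grouped workflow might look roughly like the sketch below; `decoder`, `display_chip`, and the methods called on them are hypothetical stand-ins for the CPU-side decoder and the independent display chip rather than APIs defined by this application.

```python
def play_target_video(groups, preset, decoder, display_chip):
    """Process M groups of N compressed frames in turn: in each group, decode only
    the two frames whose object displacement reaches the preset value, hand them
    and the displacement to the display chip for interpolation and playback, and
    wait for the chip's completion notice before moving to the next group."""
    for group in groups:
        first = group[0]
        second = None
        for candidate in group[1:]:
            if candidate.displacement_from(first) >= preset:
                second = candidate
                break
        if second is None:
            continue  # no large motion in this group; that case is outside this sketch
        f1 = decoder.decode(first)
        f2 = decoder.decode(second)
        display_chip.play_with_interpolation(
            f1, f2, second.displacement_from(first), frames_in_group=len(group))
        display_chip.wait_for_completion()  # CPU is free for calls, SMS, Bluetooth, etc. meanwhile
```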
可选地,本申请实施例提供了两种视频插帧方式:
第1种视频插帧方式
第一视频帧和第二视频帧为相邻的视频帧,此时采用在第一视频帧和第二视频帧之后插入视频帧的插帧方式。上述步骤202可以通过下述步骤A1和步骤A2实现。相应的,上述步骤203可以通过下述步骤A3实现。
步骤A1、在位移数据指示目标视频的第一视频帧和第二视频帧中目标对象的位移大于或等于预设值的情况下,电子设备解码视频帧数据得到该第一视频帧和该第二视频帧。
步骤A2、电子设备根据该第一视频帧和该第二视频帧中目标对象的位移,在该第二视频帧后插入第三视频帧。
可选地,第三视频帧和第二视频帧中目标对象的位移大于或等于预设值。可以理解, 由于目标视频的第一视频帧和第二视频帧中目标对象的位移大于或等于预设值,因此在第二视频帧之后的视频帧中目标对象发生较大位移的可能性较大,将第三视频帧和第二视频帧中目标对象的位移设置为大于或等于预设值,可以更为准确地预测在第二视频帧之后的视频帧中目标对象的运动趋势。
步骤A3、电子设备依次播放第一视频帧、第二视频帧和第三视频帧。
一种可选的实现方式为,在电子设备解码视频帧数据得到第一视频帧和该第二视频帧后,立即播放第一视频帧和该第二视频帧;在播放该第一视频帧和该第二视频帧的过程中,根据该第一视频帧和该第二视频帧中目标对象的位移,在该第二视频帧后插入第三视频帧,从而在播放完成该第二视频帧后,播放第三视频帧。
另一种可选的实现方式为,在电子设备解码视频帧数据得到第一视频帧和该第二视频帧后,根据第一视频帧和第二视频帧中目标对象的位移,在该第二视频帧后插入第三视频帧。然后再播放第一视频帧、第二视频帧和第三视频帧。
示例性的,仍以目标视频包括10组视频帧数据,且每组视频帧数据为对5帧视频压缩后得到的数据为例。假设预设值为3个像素点。如图5所示,CPU可以检测视频缓存区域中存储的第1组视频帧数据的5帧数据。若CPU检测到第1组视频帧数据的第2帧数据与第1帧数据中目标对象的位移为5个像素点,则CPU无需检测、解码其他帧数据,仅需解码第1帧数据与第2帧数据,得到第1视频帧和第2视频帧,并将第1视频帧、第2视频帧、第1视频帧和第2视频帧中目标对象的位移发送至独显芯片,以通知独显芯片进行视频播放和插帧处理。如此,CPU可以释放视频播放任务去处理通话、短信、邮件、蓝牙等其他事情。
在独显芯片接收到第1视频帧、第2视频帧之后,可以直接显示第1视频帧和第2视频帧。同时,独显芯片可以根据第1视频帧、第2视频帧,将第1视频帧和第2视频帧中目标对象的位移,按照一定算法和位移比例进行插帧处理,例如,将第2视频帧中的目标对象继续位移5个像素点得到第3视频帧,再将第3视频帧中的目标对象继续位移5个像素点得到第4视频帧,再将第4视频帧中的目标对象继续位移5个像素点得到第5视频帧。如此,在第2视频帧播放完成后,显示屏可以继续播放第3视频帧、第4视频帧、第5视频帧。
本申请实施例提供的视频播放方法,应用于检测到前两帧视频帧中目标对象发生较大位移的场景中,根据这两帧视频帧中目标对象的位移,可以预估在这两帧之后目标对象的位移趋势,因此可以在相邻的视频帧之后插入视频帧,从而使得CPU无需解码两帧视频帧后的视频帧,暂时将CPU从处理视频的任务中释放,以处理通话、短信、邮件、蓝牙等其他常规任务。
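A minimal sketch of this first insertion mode, in which the object keeps moving by the same per-frame displacement after the second decoded frame; `np.roll` shifts the whole picture and is only a stand-in for moving the detected object region, and the pixel counts are illustrative.

```python
import numpy as np

def shift(frame: np.ndarray, dx: int, dy: int) -> np.ndarray:
    """Shift the picture content by (dx, dy) pixels (stand-in for moving just the object)."""
    return np.roll(np.roll(frame, dy, axis=0), dx, axis=1)

def extrapolate_after(frame2: np.ndarray, dx: int, dy: int, count: int) -> list:
    """Generate `count` frames after the second decoded frame, each continuing the
    same displacement (e.g. frames 3 to 5 when the object moved 5 pixels per frame)."""
    frames, current = [], frame2
    for _ in range(count):
        current = shift(current, dx, dy)
        frames.append(current)
    return frames
```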
第2种视频插帧方式
第一视频帧和第二视频帧为不相邻的视频帧,此时采用在第一视频帧和第二视频帧之间插入视频帧的插帧方式。上述步骤202可以通过下述步骤B1和步骤B2实现。相应的,上述步骤203可以通过下述步骤B3实现。
步骤B1、在位移数据指示目标视频的第一视频帧和第二视频帧中目标对象的位移大于或等于预设值,且第一视频帧和第四视频帧中目标对象的位移小于预设值的情况下,电子设备解码视频帧数据得到该第一视频帧和该第二视频帧。
步骤B2、电子设备根据该第一视频帧和该第二视频帧中目标对象的位移,在该第一视频帧和该第二视频帧之间插入第三视频帧。
可选地,第一视频帧和第三视频帧中目标对象的位移小于预设值。可以理解的是,由于第一视频帧和第四视频帧中目标对象的位移小于预设值,因此,第三视频帧作为第四视频帧的替代帧,若满足第三视频帧和第一视频帧中目标对象的位移小于预设值,则使得第四视频帧中目标对象的位置矢量更准确。
步骤B3、电子设备依次播放第一视频帧、第三视频帧和第二视频帧。
一种可选的实现方式为,在电子设备解码视频帧数据得到第一视频帧和第二视频帧后,立即播放该第一视频帧;在播放该第一视频帧的过程中,根据该第一视频帧和该第二视频帧中目标对象的位移,在该第一视频帧和该第二视频帧之间插入第三视频帧,从而在播放完成该第一视频帧后,播放第三视频帧。之后再播放第二视频帧。
另一种可选的实现方式为,在电子设备解码视频帧数据得到第一视频帧和该第二视频帧后,根据第一视频帧和第二视频帧中目标对象的位移,在该第一视频帧和该第二视频帧之间插入第三视频帧,然后依次播放第一视频帧、第三视频帧和第二视频帧。
示例性的,仍以目标视频包括10组视频帧数据,且每组视频帧数据为对5帧视频压 缩后得到的数据为例。假设预设值为4个像素点。如图6所示,CPU可以检测视频缓存区域中存储的第1组视频帧数据的5帧数据。若CPU检测到第1组视频帧数据的第1帧数据与第2帧数据、第3帧数据、第4帧数据中目标对象位移均小于4个像素点,第1帧数据与第5帧数据中目标对象的位移为4个像素点,则CPU无需解码第2帧数据、第3帧数据、第4帧数据,仅需解码第1帧数据与第5帧数据,得到第1视频帧和第5视频帧,并将第1视频帧、第5视频帧、第1视频帧和第5视频帧中目标对象的位移发送至独显芯片,以通知独显芯片进行视频播放和插帧处理。如此,CPU可以释放视频播放任务去处理通话、短信、邮件、蓝牙等其他事情。
在独显芯片接收到第1视频帧、第5视频帧之后,可以直接显示第1视频帧。同时,独显芯片可以根据第1视频帧、第5视频帧,将第1视频帧和第5视频帧中目标对象的位移,按照一定算法和位移比例进行插帧处理,例如,在第1视频帧的基础上,将第1视频帧中的目标对象继续位移1个像素点得到第2视频帧,再将第2视频帧中的目标对象继续位移1个像素点得到第3视频帧,再将第3视频帧中的目标对象继续位移1个像素点得到第4视频帧。如此,在第1视频帧播放完成后,显示屏可以继续依次播放第2视频帧、第3视频帧、第4视频帧、第5视频帧。
本申请实施例提供的视频播放方法,应用于检测到间隔的视频帧中目标对象发生较大位移的场景中,根据这两帧视频帧中目标对象的位移,可以预估在这两帧之间目标对象的位移趋势,因此可以在两个视频帧之间插入视频帧,从而使得CPU无需解码两帧视频帧之间的视频帧,暂时将CPU从处理视频的任务中释放,以处理通话、短信、邮件、蓝牙等其他常规任务。
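And a corresponding sketch for this second insertion mode, where the gap between the two decoded frames is filled with proportionally displaced frames (1 pixel per step in the example above); again `np.roll` merely stands in for moving the detected object region.

```python
import numpy as np

def interpolate_between(frame1: np.ndarray, dx: int, dy: int, gap: int) -> list:
    """frame1 and the later decoded frame are `gap` positions apart and the object
    moved (dx, dy) in total; return the gap - 1 intermediate frames with
    proportional displacement."""
    frames = []
    for k in range(1, gap):
        step_dx = round(dx * k / gap)
        step_dy = round(dy * k / gap)
        frames.append(np.roll(np.roll(frame1, step_dy, axis=0), step_dx, axis=1))
    return frames
```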
需要说明的是,本申请实施例提供的视频播放方法,执行主体可以为视频播放装置,或者该视频播放装置中的用于执行视频播放方法的控制模块。本申请实施例中以视频播放装置执行视频播放方法为例,说明本申请实施例提供的视频播放装置。
如图7所示,本申请实施例提供一种视频播放装置700。该视频播放装置700包括获取模块701、第一处理模块702、第二处理模块703和播放模块704。
获取模块701,可以用于获取目标视频的视频帧数据和视频帧中目标对象的位移数据。第一处理模块702,可以用于在该位移数据指示该目标视频的第一视频帧和第二视频帧中 目标对象的位移大于或等于预设值的情况下,解码该视频帧数据得到该第一视频帧和该第二视频帧。第二处理模块703,可以用于按照预设规则插入第三视频帧。播放模块704,可以用于播放该第一视频帧、该第二视频帧和该第三视频帧。
可选地,第一视频帧和第二视频帧为相邻的视频帧。第二处理模块703,具体可以用于根据第一视频帧和第二视频帧中目标对象的位移,在第二视频帧后插入第三视频帧。播放模块704,具体可以用于依次播放该第一视频帧、该第二视频帧和该第三视频帧。
可选地,第一视频帧和第二视频帧之间还有第四视频帧。第一处理模块702,具体可以用于在位移数据指示第一视频帧和第二视频帧中目标对象的位移大于或等于预设值,且第一视频帧和第四视频帧中目标对象的位移小于预设值的情况下,解码视频帧数据得到该第一视频帧和该第二视频帧。第二处理模块703,具体可以用于根据该第一视频帧和该第二视频帧中目标对象的位移,在该第一视频帧和该第二视频帧之间插入第三视频帧。播放模块704,具体可以用于依次播放该第一视频帧、该第三视频帧和该第二视频帧。
可选地,第一处理模块702,具体可以用于根据位移数据,确定第一视频帧和第二视频帧中目标对象的位移大于或等于预设值;并解码视频帧数据得到该第一视频帧和该第二视频帧;以及向第二处理模块703发送该第一视频帧、该第二视频帧、该第一视频帧和该第二视频帧中目标对象的位移。第二处理模块703,具体可以用于根据该第一视频帧、该第二视频帧、该第一视频帧和该第二视频帧中目标对象的位移,获取第三视频帧。
可选地,目标视频包括M组视频帧数据,每组视频帧数据为N个视频帧的数据,M为正整数,N为大于或等于2的整数。第一处理模块702,具体可以用于在位移数据指示第i组视频帧数据的第一视频帧和第二视频帧中目标对象的位移大于或等于预设值的情况下,解码该第i组视频帧数据得到该第一视频帧和该第二视频帧,i为小于或等于M的正整数。
第一处理模块702,还可以用于在播放第一视频帧、第二视频帧和第三视频帧之后,在位移数据指示第i+1组视频帧数据的第五视频帧和第六视频帧中目标对象的位移大于或等于预设值的情况下,解码该第i+1组视频帧数据得到该第五视频帧和该第六视频帧。
第二处理模块703,还可以用于按照预设规则插入第七视频帧。
播放模块704,还可以用于播放第五视频帧、第六视频帧和第七视频帧。
本申请实施例提供一种视频播放装置,当目标视频的第一视频帧和第二视频帧中目标 对象的位移较大时,由于该装置仅需解码该第一视频帧和该第二视频帧,而无需解码目标对象的位移较小的视频帧,因此可以暂时将CPU从处理视频的任务中释放,降低了CPU功耗,提升了设备系统性能;另外,通过插入第三视频帧,可以弥补不解码目标对象的位移较小的视频帧造成的损失,从而保证了视频的播放质量。
本申请实施例中的视频播放装置可以是装置,也可以是终端中的部件、集成电路、或芯片。该装置可以是移动电子设备,也可以为非移动电子设备。示例性的,移动电子设备可以为手机、平板电脑、笔记本电脑、掌上电脑、车载电子设备、可穿戴设备、超级移动个人计算机(ultra-mobile personal computer,UMPC)、上网本或者个人数字助理(personal digital assistant,PDA)等,非移动电子设备可以为服务器、网络附属存储器(Network Attached Storage,NAS)、个人计算机(personal computer,PC)、电视机(television,TV)、柜员机或者自助机等,本申请实施例不作具体限定。
本申请实施例中的视频播放装置可以为具有操作系统的装置。该操作系统可以为安卓(Android)操作系统,可以为ios操作系统,还可以为其他可能的操作系统,本申请实施例不作具体限定。
本申请实施例提供的视频播放装置能够实现图1至图6的方法实施例实现的各个过程,为避免重复,这里不再赘述。
可选地,如图8所示,本申请实施例还提供一种电子设备800,包括处理器801,存储器802,存储在存储器802上并可在处理器801上运行的程序或指令,该程序或指令被处理器801执行时实现上述视频播放方法实施例的各个过程,且能达到相同的技术效果,为避免重复,这里不再赘述。
需要说明的是,本申请实施例中的电子设备包括上述移动电子设备和非移动电子设备。
图9为实现本申请实施例的一种电子设备的硬件结构示意图。
该电子设备100包括但不限于:射频单元101、网络模块102、音频输出单元103、输入单元104、传感器105、显示单元106、用户输入单元107、接口单元108、存储器109、以及处理器110等部件。
本领域技术人员可以理解,电子设备100还可以包括给各个部件供电的电源(比如电池),电源可以通过电源管理系统与处理器110逻辑相连,从而通过电源管理系统实现管 理充电、放电、以及功耗管理等功能。图9中示出的电子设备结构并不构成对电子设备的限定,电子设备可以包括比图示更多或更少的部件,或者组合某些部件,或者不同的部件布置,在此不再赘述。
其中,处理器110,可以用于获取目标视频的视频帧数据和视频帧中目标对象的位移数据,并在该位移数据指示该目标视频的第一视频帧和第二视频帧中目标对象的位移大于或等于预设值的情况下,解码该视频帧数据得到该第一视频帧和该第二视频帧。
独显芯片1061,可以用于按照预设规则插入第三视频帧。
显示面板1062,可以用于播放第一视频帧、第二视频帧和第三视频帧。
可选地,第一视频帧和第二视频帧为相邻的视频帧。独显芯片1061,具体可以用于根据第一视频帧和第二视频帧中目标对象的位移,在第二视频帧后插入第三视频帧。显示面板1062,具体可以用于依次播放该第一视频帧、该第二视频帧和该第三视频帧。
可选地,第一视频帧和第二视频帧之间还有第四视频帧。处理器110,具体可以用于在位移数据指示第一视频帧和第二视频帧中目标对象的位移大于或等于预设值,且第一视频帧和第四视频帧中目标对象的位移小于预设值的情况下,解码视频帧数据得到该第一视频帧和该第二视频帧。独显芯片1061,具体可以用于根据该第一视频帧和该第二视频帧中目标对象的位移,在该第一视频帧和该第二视频帧之间插入第三视频帧。显示面板1062,具体可以用于依次播放该第一视频帧、该第三视频帧和该第二视频帧。
可选地,处理器110,具体可以用于根据位移数据,确定第一视频帧和第二视频帧中目标对象的位移大于或等于预设值;并解码视频帧数据得到该第一视频帧和该第二视频帧;以及向独显芯片1061发送该第一视频帧、该第二视频帧、该第一视频帧和该第二视频帧中目标对象的位移。独显芯片1061,具体可以用于根据该第一视频帧、该第二视频帧、该第一视频帧和该第二视频帧中目标对象的位移,获取第三视频帧。
可选地,目标视频包括M组视频帧数据,每组视频帧数据为N个视频帧的数据,M为正整数,N为大于或等于2的整数。处理器110,具体可以用于在位移数据指示第i组视频帧数据的第一视频帧和第二视频帧中目标对象的位移大于或等于预设值的情况下,解码该第i组视频帧数据得到该第一视频帧和该第二视频帧,i为小于或等于M的正整数。
处理器110,还可以用于在播放第一视频帧、第二视频帧和第三视频帧之后,在位移 数据指示第i+1组视频帧数据的第五视频帧和第六视频帧中目标对象的位移大于或等于预设值的情况下,解码该第i+1组视频帧数据得到该第五视频帧和该第六视频帧。
独显芯片1061,还可以用于按照预设规则插入第七视频帧。
显示面板1062,还可以用于播放第五视频帧、第六视频帧和第七视频帧。
本申请实施例提供一种电子设备,当目标视频的第一视频帧和第二视频帧中目标对象的位移较大时,由于该设备仅需解码该第一视频帧和该第二视频帧,而无需解码目标对象的位移较小的视频帧,因此可以暂时将CPU从处理视频的任务中释放,降低了CPU功耗,提升了设备系统性能;另外,通过插入第三视频帧,可以弥补不解码目标对象的位移较小的视频帧造成的损失,从而保证了视频的播放质量。
应理解的是,本申请实施例中,输入单元104可以包括图形处理器(graphics processing unit,GPU)1041和麦克风1042,图形处理器1041对在视频捕获模式或图像捕获模式中由图像捕获装置(如摄像头)获得的静态图片或视频的图像数据进行处理。显示单元106可包括独显芯片1061和显示面板1062,可以采用液晶显示器、有机发光二极管等形式来配置显示面板1062。用户输入单元107包括触控面板1071以及其他输入设备1072。触控面板1071,也称为触摸屏。触控面板1071可包括触摸检测装置和触摸控制器两个部分。其他输入设备1072可以包括但不限于物理键盘、功能键(比如音量控制按键、开关按键等)、轨迹球、鼠标、操作杆,在此不再赘述。存储器109可用于存储软件程序以及各种数据,包括但不限于应用程序和操作系统。处理器110可集成应用处理器和调制解调处理器,其中,应用处理器主要处理操作系统、用户界面和应用程序等,调制解调处理器主要处理无线通信。可以理解的是,上述调制解调处理器也可以不集成到处理器110中。
本申请实施例还提供一种可读存储介质,所述可读存储介质上存储有程序或指令,该程序或指令被处理器执行时实现上述视频播放方法实施例的各个过程,且能达到相同的技术效果,为避免重复,这里不再赘述。
其中,所述处理器为上述实施例中所述的电子设备中的处理器。所述可读存储介质,包括计算机可读存储介质,如计算机只读存储器(read-only memory,ROM)、随机存取存储器(random access memory,RAM)、磁碟或者光盘等。
本申请实施例另提供了一种芯片,所述芯片包括处理器和通信接口,所述通信接口和所述处理器耦合,所述处理器用于运行程序或指令,实现上述视频播放方法实施例的各个过程,且能达到相同的技术效果,为避免重复,这里不再赘述。
应理解,本申请实施例提到的芯片还可以称为系统级芯片、系统芯片、芯片系统或片上系统芯片等。
需要说明的是,在本文中,术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、物品或者装置不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、物品或者装置所固有的要素。在没有更多限制的情况下,由语句“包括一个……”限定的要素,并不排除在包括该要素的过程、方法、物品或者装置中还存在另外的相同要素。此外,需要指出的是,本申请实施方式中的方法和装置的范围不限按示出或讨论的顺序来执行功能,还可包括根据所涉及的功能按基本同时的方式或按相反的顺序来执行功能,例如,可以按不同于所描述的次序来执行所描述的方法,并且还可以添加、省去、或组合各种步骤。另外,参照某些示例所描述的特征可在其他示例中被组合。
通过以上的实施方式的描述,本领域的技术人员可以清楚地了解到上述实施例方法可借助软件加必需的通用硬件平台的方式来实现,当然也可以通过硬件,但很多情况下前者是更佳的实施方式。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分可以以计算机软件产品的形式体现出来,该计算机软件产品存储在一个存储介质(如ROM/RAM、磁碟、光盘)中,包括若干指令用以使得一台终端(可以是手机,计算机,服务器,或者网络设备等)执行本申请各个实施例所述的方法。
上面结合附图对本申请的实施例进行了描述,但是本申请并不局限于上述的具体实施方式,上述的具体实施方式仅仅是示意性的,而不是限制性的,本领域的普通技术人员在本申请的启示下,在不脱离本申请宗旨和权利要求所保护的范围情况下,还可做出很多形式,均属于本申请的保护之内。

Claims (14)

  1. 一种视频播放方法，所述方法包括：
    获取目标视频的视频帧数据和视频帧中目标对象的位移数据;
    在所述位移数据指示所述目标视频的第一视频帧和第二视频帧中目标对象的位移大于或等于预设值的情况下,解码所述视频帧数据得到所述第一视频帧和所述第二视频帧,并按照预设规则插入第三视频帧;
    播放所述第一视频帧、所述第二视频帧和所述第三视频帧。
  2. 根据权利要求1所述的方法,其中,所述第一视频帧和第二视频帧为相邻的视频帧;
    所述按照预设规则插入第三视频帧,包括:
    根据所述第一视频帧和所述第二视频帧中目标对象的位移,在所述第二视频帧后插入所述第三视频帧;
    所述播放所述第一视频帧、所述第二视频帧和所述第三视频帧,包括:
    依次播放所述第一视频帧、所述第二视频帧和所述第三视频帧。
  3. 根据权利要求1所述的方法,其中,所述第一视频帧和第二视频帧之间还有第四视频帧;
    所述在所述位移数据指示所述目标视频的第一视频帧和第二视频帧中目标对象的位移大于或等于预设值的情况下,解码所述视频帧数据得到所述第一视频帧和所述第二视频帧,并按照预设规则插入第三视频帧,包括:
    在所述位移数据指示所述第一视频帧和第二视频帧中目标对象的位移大于或等于预设值,且所述第一视频帧和所述第四视频帧中目标对象的位移小于预设值的情况下,解码所述视频帧数据得到所述第一视频帧和所述第二视频帧,并根据所述第一视频帧和所述第二视频帧中目标对象的位移,在所述第一视频帧和所述第二视频帧之间插入所述第三视频帧;
    所述播放所述第一视频帧、所述第二视频帧和所述第三视频帧,包括:
    依次播放所述第一视频帧、所述第三视频帧和所述第二视频帧。
  4. 根据权利要求1至3中任一项所述的方法,其中,所述在所述位移数据指示所 述目标视频的第一视频帧和第二视频帧中目标对象的位移大于或等于预设值的情况下,解码所述视频帧数据得到所述第一视频帧和所述第二视频帧,并按照预设规则插入第三视频帧,包括:
    中央处理器根据所述位移数据,确定所述第一视频帧和第二视频帧中目标对象的位移大于或等于预设值;
    所述中央处理器解码所述视频帧数据得到所述第一视频帧和所述第二视频帧,并向独显芯片发送所述第一视频帧、所述第二视频帧、所述第一视频帧和所述第二视频帧中目标对象的位移;
    所述独显芯片根据所述第一视频帧、所述第二视频帧、所述第一视频帧和所述第二视频帧中目标对象的位移,获取所述第三视频帧。
  5. 根据权利要求1至3中任一项所述的方法,其中,所述目标视频包括M组视频帧数据,每组视频帧数据为N个视频帧的数据,M为正整数,N为大于或等于2的整数;
    所述在所述位移数据指示所述目标视频的第一视频帧和第二视频帧中目标对象的位移大于或等于预设值的情况下,解码所述视频帧数据得到所述第一视频帧和所述第二视频帧,包括:
    在所述位移数据指示第i组视频帧数据的第一视频帧和第二视频帧中目标对象的位移大于或等于预设值的情况下,解码所述第i组视频帧数据得到所述第一视频帧和所述第二视频帧,i为小于或等于M的正整数;
    所述播放所述第一视频帧、所述第二视频帧和所述第三视频帧之后,所述方法还包括:
    在所述位移数据指示第i+1组视频帧数据的第五视频帧和第六视频帧中目标对象的位移大于或等于预设值的情况下,解码所述第i+1组视频帧数据得到所述第五视频帧和所述第六视频帧,并按照预设规则插入第七视频帧;
    播放所述第五视频帧、所述第六视频帧和所述第七视频帧。
  6. 一种视频播放装置,所述装置包括获取模块、第一处理模块、第二处理模块和播放模块;
    所述获取模块,用于获取目标视频的视频帧数据和视频帧中目标对象的位移数据;
    所述第一处理模块,用于在所述位移数据指示所述目标视频的第一视频帧和第二视频帧中目标对象的位移大于或等于预设值的情况下,解码所述视频帧数据得到所述第一视频帧和所述第二视频帧;
    所述第二处理模块,用于按照预设规则插入第三视频帧;
    所述播放模块,用于播放所述第一视频帧、所述第二视频帧和所述第三视频帧。
  7. 根据权利要求6所述的装置,其中,所述第一视频帧和第二视频帧为相邻的视频帧;
    所述第二处理模块,具体用于根据所述第一视频帧和所述第二视频帧中目标对象的位移,在所述第二视频帧后插入所述第三视频帧;
    所述播放模块,具体用于依次播放所述第一视频帧、所述第二视频帧和所述第三视频帧。
  8. 根据权利要求6所述的装置,其中,所述第一视频帧和第二视频帧之间还有第四视频帧;
    所述第一处理模块,具体用于在所述位移数据指示所述第一视频帧和所述第二视频帧中目标对象的位移大于或等于预设值,且所述第一视频帧和所述第四视频帧中目标对象的位移小于预设值的情况下,解码所述视频帧数据得到所述第一视频帧和所述第二视频帧;
    所述第二处理模块,具体用于根据所述第一视频帧和所述第二视频帧中目标对象的位移,在所述第一视频帧和所述第二视频帧之间插入所述第三视频帧;
    所述播放模块,具体用于依次播放所述第一视频帧、所述第三视频帧和所述第二视频帧。
  9. 根据权利要求6至8中任一项所述的装置,其中,
    所述第一处理模块,具体用于根据所述位移数据,确定所述第一视频帧和所述第二视频帧中目标对象的位移大于或等于预设值;并解码所述视频帧数据得到所述第一视频帧和所述第二视频帧;以及向所述第二处理模块发送所述第一视频帧、所述第二视频帧、第一视频帧和第二视频帧中目标对象的位移;
    所述第二处理模块,具体用于根据所述第一视频帧、所述第二视频帧、所述第一 视频帧和所述第二视频帧中目标对象的位移,获取所述第三视频帧。
  10. 根据权利要求6至8中任一项所述的装置,其中,所述目标视频包括M组视频帧数据,每组视频帧数据为N个视频帧的数据,M为正整数,N为大于或等于2的整数;
    所述第一处理模块,具体用于在所述位移数据指示第i组视频帧数据的第一视频帧和第二视频帧中目标对象的位移大于或等于预设值的情况下,解码所述第i组视频帧数据得到所述第一视频帧和所述第二视频帧,i为小于或等于M的正整数;
    所述第一处理模块,还用于在播放所述第一视频帧、所述第二视频帧和所述第三视频帧之后,在所述位移数据指示第i+1组视频帧数据的第五视频帧和第六视频帧中目标对象的位移大于或等于预设值的情况下,解码所述第i+1组视频帧数据得到所述第五视频帧和所述第六视频帧;
    所述第二处理模块,还用于按照预设规则插入第七视频帧;
    所述播放模块,还用于播放所述第五视频帧、所述第六视频帧和所述第七视频帧。
  11. 一种电子设备,包括处理器,存储器及存储在所述存储器上并可在所述处理器上运行的程序或指令,所述程序或指令被所述处理器执行时实现如权利要求1至5中任一项所述的视频播放方法的步骤。
  12. 一种可读存储介质,所述可读存储介质上存储程序或指令,所述程序或指令被处理器执行时实现如权利要求1至5中任一项所述的视频播放方法的步骤。
  13. 一种计算机可读存储介质,所述计算机可读存储介质上存储计算机程序,所述计算机程序被处理器执行时实现如权利要求1至5中任一项所述的视频播放方法的步骤。
  14. 一种电子设备，所述电子设备被配置成用于执行如权利要求1至5中任一项所述的视频播放方法。
PCT/CN2022/105517 2021-07-15 2022-07-13 视频播放方法、装置及电子设备 WO2023284798A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP22841431.4A EP4373073A4 (en) 2021-07-15 2022-07-13 VIDEO PLAYBACK METHOD AND APPARATUS AND ELECTRONIC DEVICE
US18/411,383 US20240153538A1 (en) 2021-07-15 2024-01-12 Video Playback Method and Apparatus, and Electronic Device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110803088.0A CN113691756A (zh) 2021-07-15 2021-07-15 视频播放方法、装置及电子设备
CN202110803088.0 2021-07-15

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/411,383 Continuation US20240153538A1 (en) 2021-07-15 2024-01-12 Video Playback Method and Apparatus, and Electronic Device

Publications (1)

Publication Number Publication Date
WO2023284798A1 true WO2023284798A1 (zh) 2023-01-19

Family

ID=78577253

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/105517 WO2023284798A1 (zh) 2021-07-15 2022-07-13 视频播放方法、装置及电子设备

Country Status (4)

Country Link
US (1) US20240153538A1 (zh)
EP (1) EP4373073A4 (zh)
CN (1) CN113691756A (zh)
WO (1) WO2023284798A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113691756A (zh) * 2021-07-15 2021-11-23 维沃移动通信(杭州)有限公司 视频播放方法、装置及电子设备

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009071842A (ja) * 2008-10-17 2009-04-02 Sharp Corp 画像表示装置及び方法、画像処理装置及び方法
US20090296819A1 (en) * 2008-05-30 2009-12-03 Kabushiki Kaisha Toshiba Moving Picture Decoding Apparatus and Moving Picture Decoding Method
CN101860755A (zh) * 2010-05-12 2010-10-13 北京数码视讯科技股份有限公司 用于台标字幕插入系统的解码方法及图像插入方法
CN111698500A (zh) * 2019-03-11 2020-09-22 杭州海康威视数字技术股份有限公司 一种编解码方法、装置及其设备
CN113691756A (zh) * 2021-07-15 2021-11-23 维沃移动通信(杭州)有限公司 视频播放方法、装置及电子设备

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5903313A (en) * 1995-04-18 1999-05-11 Advanced Micro Devices, Inc. Method and apparatus for adaptively performing motion compensation in a video processing apparatus
US6680973B2 (en) * 2001-03-29 2004-01-20 Koninklijke Philips Electronics N.V. Scalable MPEG-2 video decoder with selective motion compensation
JP2008035281A (ja) * 2006-07-28 2008-02-14 Sanyo Electric Co Ltd 画像符号化方法
KR100819404B1 (ko) * 2006-10-27 2008-04-04 삼성전자주식회사 휴대용 단말기에서 부화면 디코딩 방법 및 장치
US20100027663A1 (en) * 2008-07-29 2010-02-04 Qualcomm Incorporated Intellegent frame skipping in video coding based on similarity metric in compressed domain
US20110310956A1 (en) * 2010-06-22 2011-12-22 Jian-Liang Lin Methods for controlling video decoder to selectively skip one or more video frames and related signal processing apparatuses thereof
KR102126511B1 (ko) * 2015-09-02 2020-07-08 삼성전자주식회사 보충 정보를 이용한 영상 프레임의 보간 방법 및 장치
CN108810549B (zh) * 2018-06-06 2021-04-27 天津大学 一种面向低功耗的流媒体播放方法
CN109672886B (zh) * 2019-01-11 2023-07-04 京东方科技集团股份有限公司 一种图像帧预测方法、装置及头显设备
CN112055254B (zh) * 2019-06-06 2023-01-06 Oppo广东移动通信有限公司 视频播放的方法、装置、终端及存储介质
CN110399842B (zh) * 2019-07-26 2021-09-28 北京奇艺世纪科技有限公司 视频处理方法、装置、电子设备及计算机可读存储介质
CN110996170B (zh) * 2019-12-10 2022-02-15 Oppo广东移动通信有限公司 视频文件播放方法及相关设备

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090296819A1 (en) * 2008-05-30 2009-12-03 Kabushiki Kaisha Toshiba Moving Picture Decoding Apparatus and Moving Picture Decoding Method
JP2009071842A (ja) * 2008-10-17 2009-04-02 Sharp Corp 画像表示装置及び方法、画像処理装置及び方法
CN101860755A (zh) * 2010-05-12 2010-10-13 北京数码视讯科技股份有限公司 用于台标字幕插入系统的解码方法及图像插入方法
CN111698500A (zh) * 2019-03-11 2020-09-22 杭州海康威视数字技术股份有限公司 一种编解码方法、装置及其设备
CN113691756A (zh) * 2021-07-15 2021-11-23 维沃移动通信(杭州)有限公司 视频播放方法、装置及电子设备

Also Published As

Publication number Publication date
CN113691756A (zh) 2021-11-23
EP4373073A4 (en) 2024-11-13
US20240153538A1 (en) 2024-05-09
EP4373073A1 (en) 2024-05-22

Similar Documents

Publication Publication Date Title
CN109783178B (zh) 一种界面组件的颜色调整方法、装置、设备和介质
CN110636375B (zh) 视频流处理方法、装置、终端设备及计算机可读存储介质
US10257510B2 (en) Media encoding using changed regions
CN111225150B (zh) 插帧处理方法及相关产品
WO2017016339A1 (zh) 视频分享方法和装置、视频播放方法和装置
CN110365974A (zh) 用于视频编码和解码的自适应传递函数
CN104050040B (zh) 媒体重放工作负荷调度器
US10484690B2 (en) Adaptive batch encoding for slow motion video recording
CN114071223A (zh) 基于光流的视频插帧的生成方法、存储介质及终端设备
CN114245028B (zh) 图像展示方法、装置、电子设备及存储介质
CN111491208A (zh) 视频处理方法、装置、电子设备及计算机可读介质
CN111367434A (zh) 触控延迟的检测方法、装置、电子设备及存储介质
WO2023284798A1 (zh) 视频播放方法、装置及电子设备
CN113645476A (zh) 画面处理方法、装置、电子设备及存储介质
CN109753262B (zh) 帧显示处理方法、装置、终端设备及存储介质
CN109495762A (zh) 数据流处理方法、装置及存储介质、终端设备
WO2023125553A1 (zh) 插帧方法、装置及电子设备
CN104813342A (zh) 内容感知的改变视频大小
US10586367B2 (en) Interactive cinemagrams
CN106964154B (zh) 一种图像处理方法、装置及终端
TWI539795B (zh) 使用變化區域的媒體編碼
CN110830723B (zh) 拍摄方法、装置、存储介质及移动终端
CN111984173B (zh) 表情包生成方法和装置
CN113709372B (zh) 图像生成方法和电子设备
CN115514859A (zh) 图像处理电路、图像处理方法及电子设备

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22841431

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 2022841431

Country of ref document: EP

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2022841431

Country of ref document: EP

Effective date: 20240215