CN114063863A - Video processing method and device and electronic equipment - Google Patents
- Publication number
- CN114063863A (application number CN202111430562.6A)
- Authority
- CN
- China
- Prior art keywords
- input
- picture
- voiced
- video
- pictures
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
- G06F3/04845—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/40—Information retrieval; Database structures therefor; File system structures therefor of multimedia data, e.g. slideshows comprising image and additional audio data
- G06F16/44—Browsing; Visualisation therefor
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/70—Information retrieval; Database structures therefor; File system structures therefor of video data
- G06F16/74—Browsing; Visualisation therefor
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0481—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Human Computer Interaction (AREA)
- Multimedia (AREA)
- Data Mining & Analysis (AREA)
- Databases & Information Systems (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
The application discloses a video processing method, a video processing apparatus, and an electronic device, belonging to the technical field of communication. The method includes: an electronic device receives a first input directed at a video to be processed; in response to the first input, the device processes multiple frames of pictures included in the video to be processed and the audio file corresponding to each frame of picture, to obtain and display a voiced long picture; the voiced long picture includes multiple frames of voiced pictures.
Description
Technical Field
The application belongs to the technical field of communication, and in particular relates to a video processing method, a video processing apparatus, and an electronic device.
Background
As mobile phones have become ubiquitous, more and more people use them to record and share their lives on video. However, video files are often too large and too long: on one hand, this makes it harder for a user to locate the key content; on the other hand, it increases the transmission burden on the person sharing the video and discourages viewers from browsing.
Disclosure of Invention
The embodiments of the application aim to provide a video processing method, a video processing apparatus, and an electronic device, which can solve the prior-art problems of viewing, transmission, and sharing caused by oversized video files.
In a first aspect, an embodiment of the present application provides a video processing method, including:
receiving a first input for a video to be processed;
and in response to the first input, processing multiple frames of pictures included in the video to be processed and the audio file corresponding to each frame of picture, to obtain and display a voiced long picture; the voiced long picture includes multiple frames of voiced pictures.
In a second aspect, an embodiment of the present application provides a video processing apparatus, including:
the first receiving module is used for receiving a first input aiming at a video to be processed;
and a first response module, configured to, in response to the first input, process multiple frames of pictures included in the video to be processed and the audio file corresponding to each frame of picture, to obtain and display a voiced long picture; the voiced long picture includes multiple frames of voiced pictures.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a processor, a memory, and a program or instructions stored on the memory and executable on the processor, where the program or instructions, when executed by the processor, implement the steps of the method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the method according to the first aspect.
In the embodiments of the application, several highlight pictures and their nearby sound sources are extracted from the video and spliced into a voiced long picture; the highlight pictures of the video are thus retained, and by fusing the voiced-picture technique the original audio of the video is retained as well, so that a user can quickly grasp the key content of the video, improving the user experience.
Drawings
Fig. 1 is a flowchart illustrating the steps of a video processing method according to an embodiment of the present application;
Fig. 2 is a first diagram illustrating an example of user operation interaction in a video processing method according to an embodiment of the present application;
Fig. 3 is a second diagram illustrating an example of user operation interaction in a video processing method according to an embodiment of the present application;
Fig. 4 is a third diagram illustrating an example of user operation interaction in a video processing method according to an embodiment of the present application;
Fig. 5 is a fourth diagram illustrating an example of user operation interaction in a video processing method according to an embodiment of the present application;
Fig. 6 is a fifth diagram illustrating an example of user operation interaction in a video processing method according to an embodiment of the present application;
Fig. 7 is a sixth diagram illustrating an example of user operation interaction in a video processing method according to an embodiment of the present application;
Fig. 8 is a seventh diagram illustrating an example of user operation interaction in a video processing method according to an embodiment of the present application;
Fig. 9 is an eighth diagram illustrating an example of user operation interaction in a video processing method according to an embodiment of the present application;
Fig. 10 is a schematic structural diagram of a video processing apparatus according to an embodiment of the present application;
Fig. 11 is a first schematic structural diagram of an electronic device according to an embodiment of the present application;
Fig. 12 is a second schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments that can be derived by one of ordinary skill in the art from the embodiments given herein are intended to be within the scope of the present disclosure.
The terms "first," "second," and the like in the description and claims of the present application are used to distinguish between similar objects and are not necessarily used to describe a particular order or sequence. It should be understood that the objects so distinguished are interchangeable under appropriate circumstances, so that the embodiments of the application can be practiced in sequences other than those illustrated or described herein. Objects distinguished by "first," "second," and the like are usually of one type, and the number of objects is not limited; for example, the first object may be one or more than one. In addition, "and/or" in the description and claims denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the preceding and following objects.
The video processing method provided by the embodiment of the present application is described in detail below with reference to the accompanying drawings through specific embodiments and application scenarios thereof.
Referring to fig. 1, fig. 1 is a flowchart illustrating the steps of a video processing method according to an embodiment of the present application, where the video processing method includes:
Step 101: receiving a first input for a video to be processed;
Step 102: in response to the first input, processing multiple frames of pictures included in the video to be processed and the audio file corresponding to each frame of picture, to obtain and display a voiced long picture; the voiced long picture includes multiple frames of voiced pictures.
Optionally, the video to be processed may be a video stored locally on the electronic device, a video shot in real time, or a video played by an application program or a web page; this is not specifically limited here.
Optionally, the multiple frames of pictures included in the video to be processed may be pictures corresponding to highlight scenes of the video to be processed. It should be emphasized that these multiple frames are not all of the video frames in the video to be processed, but only some frames extracted from all of its video frames. For example, AI technology may be used to recognize the video to be processed, identify its highlight scenes, and capture the corresponding frame pictures, thereby obtaining the multiple frames of pictures.
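As a rough sketch of how such frame extraction could work, the following is a hypothetical stand-in for the AI recognition described above (the patent does not specify its detector): keep a frame whenever it differs noticeably from the last kept frame, so only scene changes survive.

```python
def pick_highlight_frames(frames, threshold=30.0):
    """frames: list of equal-length flat pixel lists. Keep the index of a
    frame when its mean absolute difference from the last kept frame
    exceeds `threshold` -- a crude illustrative stand-in for the AI
    highlight recognition described above, not the patent's method."""
    kept, last = [], None
    for i, frame in enumerate(frames):
        if last is None or sum(abs(a - b) for a, b in zip(frame, last)) / len(frame) > threshold:
            kept.append(i)
            last = frame
    return kept

# A static scene followed by a cut yields two kept frames:
indices = pick_highlight_frames(
    [[0, 0, 0, 0], [1, 1, 1, 1], [200, 200, 200, 200], [201, 201, 201, 201]]
)
```

A real implementation would decode frames with a video library and likely use a learned model rather than raw pixel differences; the point is only that a small subset of frames is selected.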
Optionally, the above "audio file corresponding to each frame of picture" may be understood as the audio file of the video clip in which that frame is located. The starting point, length, and so on of that video clip may be configured in advance or obtained by AI analysis. For example, the video clip may start at the frame itself and last 3 seconds; as another example, the clip may be 4 seconds long and start 2 seconds before the frame. Other configurations are not enumerated here.
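The two example configurations above amount to computing a clamped time window around the frame. A minimal sketch (parameter names `length` and `lead` are illustrative assumptions, not the patent's terminology):

```python
def clip_window(frame_t, video_duration, length=3.0, lead=0.0):
    """Return the (start, end) in seconds of the audio clip kept for a
    frame at time frame_t. Defaults mirror the first example above
    (a 3-second clip starting at the frame); length=4.0, lead=2.0
    mirrors the second example (4 seconds starting 2 seconds early).
    The window is clamped to [0, video_duration]."""
    start = max(0.0, frame_t - lead)
    end = min(video_duration, start + length)
    return start, end
```

For a frame at 10 s in a 60 s video, the defaults give (10.0, 13.0), and `length=4.0, lead=2.0` gives (8.0, 12.0).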
Optionally, the electronic device first analyzes the video to be processed to obtain multiple frames of pictures and the audio file corresponding to each picture; it then processes each frame of picture together with its corresponding audio file in turn to obtain multiple frames of voiced pictures; and it further splices the multiple frames of voiced pictures into a voiced long picture according to their chronological order.
Optionally, in the step above, processing each frame of picture and its corresponding audio file in turn to obtain multiple frames of voiced pictures includes: encoding the audio file into the frame picture as an extended data block of that picture, thereby obtaining a voiced picture.
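The "extended data block" idea can be illustrated with the PNG container, whose chunk mechanism allows private ancillary chunks that ordinary viewers ignore. The chunk name `auDo` and this whole scheme are illustrative assumptions, not the patent's actual encoding:

```python
import struct
import zlib

def _chunk(ctype: bytes, data: bytes) -> bytes:
    """Serialize one PNG chunk: 4-byte length, type, data, CRC-32 of type+data."""
    return (struct.pack(">I", len(data)) + ctype + data
            + struct.pack(">I", zlib.crc32(ctype + data) & 0xFFFFFFFF))

def embed_audio(png: bytes, audio: bytes) -> bytes:
    """Insert the audio bytes as a private ancillary chunk (here named
    'auDo', a made-up type) just before the IEND chunk."""
    iend = png.rindex(b"IEND") - 4          # back up over IEND's length field
    return png[:iend] + _chunk(b"auDo", audio) + png[iend:]

def extract_audio(png: bytes) -> bytes:
    """Recover the audio bytes from the 'auDo' chunk."""
    pos = png.index(b"auDo")
    (length,) = struct.unpack(">I", png[pos - 4:pos])
    return png[pos + 4:pos + 4 + length]

# Build a minimal PNG skeleton (signature + IHDR + IEND) to demonstrate:
SIG = b"\x89PNG\r\n\x1a\n"
ihdr = _chunk(b"IHDR", struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0))
iend = _chunk(b"IEND", b"")
voiced = embed_audio(SIG + ihdr + iend, b"pcm-bytes")
```

Because the first letter of `auDo` is lowercase (ancillary) and the second is lowercase (private), a conforming PNG decoder simply skips the chunk, so the voiced picture still displays as an ordinary image.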
Optionally, in this step, the electronic device automatically analyzes the video to be processed using artificial intelligence (AI) technology to obtain multiple frames of pictures while retaining the sound-source segments near the corresponding pictures (that is, retaining the audio files); it processes the multiple frames of pictures and the corresponding audio files to obtain multiple frames of voiced pictures, and then splices the voiced pictures into a voiced long picture according to their chronological order. The voiced long picture retains the highlight pictures of the video, and by fusing the voiced-picture technique it also retains the original audio of the video, so that a user can quickly grasp the key content of the video, improving the user experience.
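The chronological splicing step can be sketched as a vertical concatenation of equal-width frames sorted by timestamp (a minimal illustration on row lists; a real splicer would operate on decoded image buffers):

```python
def splice_long_picture(voiced_pictures):
    """voiced_pictures: list of (timestamp, rows) pairs, where `rows` is a
    list of equal-width pixel rows. Returns the rows of the long picture:
    all frames stacked top-to-bottom in chronological order."""
    widths = {len(row) for _, rows in voiced_pictures for row in rows}
    if len(widths) > 1:
        raise ValueError("all frames must share one width")
    ordered = sorted(voiced_pictures, key=lambda tp: tp[0])
    return [row for _, rows in ordered for row in rows]
```

For example, a frame captured at 0.5 s ends up above one captured at 2.0 s regardless of the order in which the pair was produced.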
In at least one optional embodiment of the present application, the method further comprises:
receiving a second input for a first voiced picture comprised by the voiced long picture;
in response to the second input, displaying the video clip corresponding to the frame in which the first voiced picture is located, where the audio file of the video clip is the audio file corresponding to the first voiced picture;
receiving a third input for the video clip;
and in response to the third input, generating a second picture and replacing the picture displayed by the first voiced picture with the second picture.
In this embodiment of the application, after the voiced long picture is automatically generated in step 102, if the user of the electronic device is not satisfied with one or more of the voiced pictures in it, the user can replace the picture displayed by at least one voiced picture through the second and third inputs. This allows the user to actively edit the voiced long picture and avoids the problem of the automatically generated voiced long picture failing to capture the highlight content of the video accurately.
For example, as shown in fig. 2, the voiced long picture on the electronic device is first in a to-be-modified state, in which each voiced screenshot (a voiced screenshot may also be called a voiced picture) is displayed with a certain spacing. When the user clicks a single voiced screenshot 2, the video clip corresponding to the frame in which voiced screenshot 2 is located pops up (for example, it is displayed at the position of the original video bar in fig. 2). The user can manually capture a new picture by dragging the video progress bar; the captured picture is shown on the "picture preview interface" in fig. 2, from which the user can judge whether it is suitable. After the new picture is captured, the electronic device automatically encodes the audio file near it into the picture (optionally, that audio file is the same as the one corresponding to the original voiced screenshot 2) and replaces the original voiced screenshot 2 with the newly formed voiced picture, thereby obtaining a new voiced long picture. As shown in fig. 2, the voiced long picture is then in a non-modification state (which may also be called a preview or display state), in which the voiced screenshots are displayed spliced together seamlessly.
In this embodiment, the voiced long picture corresponding to the video to be processed is generated through a combination of automatic generation and active modification, ensuring that the voiced long picture displays the key content of the video and improving the user experience.
When a user needs to share the video to be processed with other users, two sharing modes are provided in the embodiments of the application:
The first mode: sharing some of the voiced pictures in the voiced long picture with other users; that is, the method further includes:
receiving a fourth input for a third voiced picture in the voiced long picture;
in response to the fourth input, displaying a single-picture sharing component;
receiving a fifth input for the single-picture sharing component;
and in response to the fifth input, displaying a sharing page for sharing the third voiced picture.
For example, as shown in fig. 3, the user long-presses voiced screenshot 2 (i.e., the fourth input); the electronic device displays a single-picture sharing button; the user clicks the button (i.e., the fifth input) to open a sharing interface, through which voiced screenshot 2 is shared with other users.
The second mode: sharing the entire voiced long picture with other users; that is, the method further includes:
receiving a sixth input;
and in response to the sixth input, displaying a sharing page for sharing the voiced long picture.
For example, as shown in fig. 4, the user clicks the "share" button at the top of the voiced long picture page (i.e., the sixth input) to open a sharing interface, through which the entire voiced long picture is shared with other users.
In this embodiment, after the video to be processed has been processed into a voiced long picture, a sender who needs to share the corresponding video only needs to share the voiced long picture or a single voiced picture. This reduces the transmission cost while still letting the receiver quickly grasp the key content of the video, improving the user experience.
In at least one embodiment of the present application, the method further comprises:
receiving a seventh input;
and in response to the seventh input, displaying a first desktop component, where the first desktop component is used to display the voiced long picture.
For example, as shown in fig. 5, the user calls up the component gallery with a gesture and drags out the voiced long picture component (i.e., the first desktop component); this constitutes the seventh input.
Optionally, the first desktop component has a default size of 2 × 2, supports a maximum of 7 × 2, and initially displays default material. As shown in fig. 6, long-pressing the component and clicking the "morph" button at the lower right changes the component's length.
In this embodiment of the application, the voiced long picture can exist on the desktop of the electronic device as a desktop component, so the user can browse the voiced long picture by sliding on the desktop without entering the phone's album, and can choose to play the original video sound while sliding, which makes media browsing and interaction more engaging.
In at least one optional embodiment of the present application, the method further comprises:
receiving an eighth input for the first desktop component;
in response to the eighth input, displaying a voiced long picture gallery management page, the page including at least one voiced long picture;
receiving a ninth input for the voiced long picture gallery management page, the ninth input being used to select at least one voiced long picture in the voiced long picture gallery;
and in response to the ninth input, adding the selected voiced long picture to the first desktop component.
For example, the user clicks the first desktop component to enter the voiced long picture gallery management page and clicks "add voiced long picture", which invokes the system album containing voiced long pictures (i.e., the voiced long picture gallery); the user can select one or more already-made voiced long pictures to place in the first desktop component, and once the setting is complete the desktop displays the corresponding voiced long pictures.
In another optional embodiment of the present application, if the voiced long picture gallery management page does not include any voiced long picture, that is, the user has not previously made a voiced long picture, the method further includes:
receiving a thirteenth input for the first desktop component;
in response to the thirteenth input, displaying a video library, the video library including at least one video;
receiving a fourteenth input for the video library; the fourteenth input is used for selecting a video to be processed;
in response to the fourteenth input, displaying the selected to-be-processed video.
For example, the user clicks the first desktop component to enter the voiced long picture gallery management page, clicks "new voiced long picture" and then "new material", and jumps to the video library; after the user selects a video, the voiced long picture making process of step 101 and step 102 begins. The finished material is automatically stored in the component material management library and in the system's voiced long picture album.
Optionally, voiced long pictures in the material management library can be freely added or removed.
In at least one embodiment of the present application, the method further comprises:
receiving a tenth input for the voiced long picture displayed within the first desktop component;
and in response to the tenth input, sequentially displaying each frame of voiced picture in the voiced long picture.
For example, as shown in fig. 7, the user slides up and down on the first desktop component (i.e., the tenth input is an up-and-down slide) to browse the voiced long picture.
In at least one embodiment of the present application, the method further comprises:
receiving an eleventh input for a fourth voiced picture displayed within the first desktop component;
and in response to the eleventh input, playing the audio file corresponding to the fourth voiced picture.
For example, as shown in fig. 8, the user double-clicks a voiced picture (i.e., the eleventh input) to play the sound source corresponding to that picture.
In at least one embodiment of the present application, the method further comprises:
receiving a twelfth input for the voiced long picture displayed within the first desktop component;
and in response to the twelfth input, displaying another voiced long picture within the first desktop component.
For example, as shown in fig. 9, the user swipes left or right across the first desktop component (i.e., the twelfth input is a left or right swipe) to switch the voiced long picture material.
In this embodiment of the application, the voiced long picture can exist on the desktop of the electronic device as a desktop component, so the user can view and slide through the voiced long picture on the desktop without entering the phone's album, and can choose to play the original video sound while sliding. This makes media browsing and interaction more engaging, further strengthens the connection between image, sound, and user, and increases enjoyment.
It should be noted that, in the video processing method provided in the embodiment of the present application, the execution subject may be a video processing apparatus, or a control module in the video processing apparatus for executing the video processing method. In the embodiment of the present application, a video processing apparatus executing a video processing method is taken as an example, and the video processing apparatus provided in the embodiment of the present application is described.
Referring to fig. 10, fig. 10 is a structural diagram of a video processing apparatus according to an embodiment of the present application, where the video processing apparatus 1000 includes:
a first receiving module 1001, configured to receive a first input for a video to be processed;
and a first response module 1002, configured to, in response to the first input, process multiple frames of pictures included in the video to be processed and the audio file corresponding to each frame of picture, to obtain and display a voiced long picture; the voiced long picture includes multiple frames of voiced pictures.
As an alternative embodiment, the apparatus further comprises:
a second receiving module, configured to receive a second input for a first voiced picture included in the voiced long picture;
a second response module, configured to respond to the second input by displaying the video clip corresponding to the frame in which the first voiced picture is located, where the audio file of the video clip is the audio file corresponding to the first voiced picture;
a third receiving module, configured to receive a third input for the video clip;
and a third response module, configured to respond to the third input by generating a second picture and replacing the picture displayed by the first voiced picture with the second picture.
As an alternative embodiment, the apparatus further comprises:
a fourth receiving module, configured to receive a fourth input for a third voiced picture in the voiced long picture;
a fourth response module, configured to respond to the fourth input by displaying a single-picture sharing component;
a fifth receiving module, configured to receive a fifth input for the single-picture sharing component;
and a fifth response module, configured to respond to the fifth input by displaying a sharing page for sharing the third voiced picture.
As an alternative embodiment, the apparatus further comprises:
a sixth receiving module, configured to receive a sixth input;
and a sixth response module, configured to respond to the sixth input by displaying a sharing page for sharing the voiced long picture.
As an alternative embodiment, the apparatus further comprises:
a seventh receiving module, configured to receive a seventh input;
and a seventh response module, configured to display a first desktop component in response to the seventh input, where the first desktop component is used to display the voiced long picture.
As an alternative embodiment, the apparatus further comprises:
an eighth receiving module for receiving an eighth input for the first desktop component;
an eighth response module, configured to respond to the eighth input by displaying a voiced long picture gallery management page, the page including at least one voiced long picture;
a ninth receiving module, configured to receive a ninth input for the voiced long picture gallery management page, the ninth input being used to select at least one voiced long picture in the voiced long picture gallery;
and a ninth response module, configured to add the selected voiced long picture to the first desktop component in response to the ninth input.
As an alternative embodiment, the apparatus further comprises:
a tenth receiving module, configured to receive a tenth input for the voiced long picture displayed within the first desktop component;
and a tenth response module, configured to respond to the tenth input by sequentially displaying each frame of voiced picture in the voiced long picture.
As an alternative embodiment, the apparatus further comprises:
an eleventh receiving module, configured to receive an eleventh input for a fourth voiced picture displayed within the first desktop component;
and an eleventh response module, configured to respond to the eleventh input by playing the audio file corresponding to the fourth voiced picture.
As an alternative embodiment, the apparatus further comprises:
a twelfth receiving module, configured to receive a twelfth input for the voiced long picture displayed within the first desktop component;
and a twelfth response module, configured to respond to the twelfth input by displaying another voiced long picture within the first desktop component.
In the embodiments of the application, several highlight pictures and their nearby sound sources are extracted from the video and spliced into a voiced long picture; the highlight pictures of the video are thus retained, and by fusing the voiced-picture technique the original audio of the video is retained as well, so that a user can quickly grasp the key content of the video, improving the user experience. Further, when the corresponding video to be processed needs to be shared, the sender only needs to share the voiced long picture or a single voiced picture, which reduces the transmission cost while letting the receiver quickly grasp the key content of the video. The voiced long picture can also be stored on the desktop of the electronic device as a desktop component, so the user can view and slide through it on the desktop without entering the phone's album, and can choose to play the original video sound while sliding, making media browsing and interaction more engaging.
It should be noted that the video processing apparatus provided in the embodiments of the present application is an apparatus capable of executing the video processing method, and all embodiments of the video processing method are applicable to the apparatus and can achieve the same or similar beneficial effects.
The video processing apparatus in the embodiment of the present application may be an apparatus, or may be a component, an integrated circuit, or a chip in a terminal. The apparatus may be a mobile electronic device or a non-mobile electronic device. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, a Personal Digital Assistant (PDA), or the like, and the non-mobile electronic device may be a server, a Network Attached Storage (NAS), a Personal Computer (PC), a Television (TV), an automated teller machine, a self-service machine, or the like; the embodiments of the present application are not specifically limited thereto.
The video processing apparatus in the embodiment of the present application may be an apparatus having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system; the embodiments of the present application are not specifically limited thereto.
The video processing apparatus provided in the embodiment of the present application can implement each process implemented by the method embodiments in fig. 1 to fig. 9, and is not described herein again to avoid repetition.
Optionally, as shown in fig. 11, an embodiment of the present application further provides an electronic device 1100, including a processor 1101, a memory 1102, and a program or instruction stored in the memory 1102 and executable on the processor 1101. When executed by the processor 1101, the program or instruction implements each process of the above video processing method embodiments and achieves the same technical effect; details are not repeated here to avoid repetition.
It should be noted that the electronic device in the embodiment of the present application includes the mobile electronic device and the non-mobile electronic device described above.
Fig. 12 is a schematic hardware structure diagram of an electronic device implementing an embodiment of the present application.
The electronic device 1200 includes, but is not limited to: radio frequency unit 1201, network module 1202, audio output unit 1203, input unit 1204, sensors 1205, display unit 1206, user input unit 1207, interface unit 1208, memory 1209, and processor 1210.
Those skilled in the art will appreciate that the electronic device 1200 may further include a power source (e.g., a battery) for supplying power to the various components; the power source may be logically connected to the processor 1210 via a power management system, so that charging, discharging, and power-consumption management are implemented via the power management system. The electronic device structure shown in fig. 12 does not constitute a limitation of the electronic device; the electronic device may include more or fewer components than shown, combine some components, or arrange the components differently, which is not repeated here.
The radio frequency unit 1201 is configured to receive a first input for a video to be processed;
the processor 1210 is configured to, in response to the first input, process the multiple frames of pictures included in the video to be processed and the audio file corresponding to each frame of picture, to obtain and display a voiced long picture; the voiced long picture includes multiple frames of voiced pictures.
In the embodiments of the application, several highlight frames and the audio near each of them are extracted from the video and stitched into a voiced long picture, so that the highlight content of the video is preserved together with its original audio, the user can quickly grasp the key content of the video, and the user experience is improved.
Optionally, the radio frequency unit 1201 is further configured to receive a second input for a first voiced picture included in the voiced long picture;
the processor 1210 is further configured to display, in response to the second input, the video clip corresponding to the frame in which the first voiced picture is located, where the audio file of the video clip is the audio file corresponding to the first voiced picture;
the radio frequency unit 1201 is further configured to receive a third input for the video clip;
the processor 1210 is further configured to generate, in response to the third input, a second picture and replace the picture displayed by the first voiced picture with the second picture.
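The replacement described by the second and third inputs can be sketched as a pure function. The dictionary-based tile representation and the check that the replacement frame comes from the tile's own clip are assumptions of this sketch:

```python
def replace_tile(tiles, index, new_frame_time):
    """Swap the picture displayed by one tile of a voiced long picture for a
    frame picked from that tile's own video clip, keeping the audio clip.

    `tiles` is a list of dicts: {'frame_time': float, 'audio_span': (s, e)}.
    Returns a new tile list; the input list is left unchanged.
    """
    start, end = tiles[index]['audio_span']
    if not start <= new_frame_time <= end:
        raise ValueError('replacement frame must lie inside the tile clip')
    updated = dict(tiles[index], frame_time=new_frame_time)
    return tiles[:index] + [updated] + tiles[index + 1:]
```

Returning a new list rather than mutating in place keeps the previous voiced long picture available, e.g. for an undo of the edit.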
Optionally, the radio frequency unit 1201 is further configured to receive a fourth input for the voiced long picture;
the processor 1210 is further configured to display, in response to the fourth input, a single-picture sharing selection page, where the single-picture sharing selection page includes at least one voiced picture of the voiced long picture;
the radio frequency unit 1201 is further configured to receive a fifth input for the single-picture sharing selection page, where the fifth input is used to select, from the voiced long picture, a third voiced picture to be shared;
the processor 1210 is further configured to display, in response to the fifth input, a sharing page for sharing the third voiced picture.
Optionally, the radio frequency unit 1201 is further configured to receive a sixth input;
the processor 1210 is further configured to display, in response to the sixth input, a sharing page for sharing the voiced long picture.
According to the embodiments of the application, when the corresponding video to be processed needs to be shared, the sender only needs to share the voiced long picture, or a single voiced picture with audio, which not only reduces the transmission cost but also lets the receiver quickly obtain the key content of the video, improving the user experience.
Optionally, the radio frequency unit 1201 is further configured to receive a seventh input;
the processor 1210 is further configured to display, in response to the seventh input, a first desktop component, where the first desktop component is used to display a voiced long picture.
Optionally, the radio frequency unit 1201 is further configured to receive an eighth input for the first desktop component;
the processor 1210 is further configured to display, in response to the eighth input, a voiced long picture gallery management page, where the page includes at least one voiced long picture;
the radio frequency unit 1201 is further configured to receive a ninth input for the voiced long picture gallery management page, where the ninth input is used to select at least one voiced long picture in the voiced long picture gallery;
the processor 1210 is further configured to add, in response to the ninth input, the selected voiced long picture to the first desktop component.
Optionally, the radio frequency unit 1201 is further configured to receive a tenth input for the voiced long picture displayed in the first desktop component;
the processor 1210 is further configured to sequentially display, in response to the tenth input, each frame of voiced picture in the voiced long picture.
Optionally, the radio frequency unit 1201 is further configured to receive an eleventh input for a fourth voiced picture in the voiced long picture displayed within the first desktop component;
the processor 1210 is further configured to play, in response to the eleventh input, the audio file corresponding to the fourth voiced picture.
Optionally, the radio frequency unit 1201 is further configured to receive a twelfth input for the voiced long picture displayed within the first desktop component;
the processor 1210 is further configured to display, in response to the twelfth input, another voiced long picture within the first desktop component.
In the embodiments of the application, the voiced long picture can be placed on the desktop of the electronic device as a desktop component, so that the user can browse it by sliding on the desktop without opening the photo album, and can choose to play the original video audio while sliding, which makes media browsing and interaction more engaging.
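The slide-and-tap behavior of the desktop component can be sketched as a small state machine. Identifying tiles by their audio spans, the wrap-around sliding, and the idea that a real widget would forward the returned span to an audio player are all assumptions of this sketch:

```python
class VoicedPictureWidget:
    """Desktop-component interaction sketch: sliding steps through the tiles
    of a voiced long picture; tapping plays the current tile's audio clip."""

    def __init__(self, audio_spans):
        self.audio_spans = list(audio_spans)  # one (start, end) per tile
        self.index = 0                        # tile currently on screen

    def slide(self, step=1):
        """A slide input shows the next (or previous) tile, wrapping around."""
        self.index = (self.index + step) % len(self.audio_spans)
        return self.index

    def tap(self):
        """A tap returns the audio clip span of the tile on screen; a real
        widget would hand this span to the platform's audio player."""
        return self.audio_spans[self.index]
```

This mirrors the tenth through twelfth inputs: sliding browses the voiced pictures without opening the album, and tapping plays the original audio of the picture being viewed.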
It should be understood that, in the embodiment of the present application, the input Unit 1204 may include a Graphics Processing Unit (GPU) 12041 and a microphone 12042, and the Graphics Processing Unit 12041 processes image data of still pictures or videos obtained by an image capturing device (such as a camera) in a video capturing mode or an image capturing mode. The display unit 1206 may include a display panel 12061, and the display panel 12061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 1207 includes a touch panel 12071 and other input devices 12072. A touch panel 12071, also referred to as a touch screen. The touch panel 12071 may include two parts of a touch detection device and a touch controller. Other input devices 12072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein. The memory 1209 may be used to store software programs as well as various data, including but not limited to application programs and an operating system. Processor 1210 may integrate an application processor, which handles primarily the operating system, user interface, applications, etc., and a modem processor, which handles primarily wireless communications. It is to be appreciated that the modem processor described above may not be integrated into processor 1210.
The embodiments of the present application further provide a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements each process of the video processing method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium, such as a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and so on.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to run a program or an instruction to implement each process of the above video processing method embodiment, and can achieve the same technical effect, and the details are not repeated here to avoid repetition.
It should be understood that the chips mentioned in the embodiments of the present application may also be referred to as system-on-chip, system-on-chip or system-on-chip, etc.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed; the functions may be performed in a substantially simultaneous manner or in a reverse order depending on the functionality involved. For example, the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a computer software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.
Claims (12)
1. A video processing method, comprising:
receiving a first input for a video to be processed;
in response to the first input, processing multiple frames of pictures included in the video to be processed and an audio file corresponding to each frame of picture, to obtain and display a voiced long picture; wherein the voiced long picture comprises multiple frames of voiced pictures.
2. The method of claim 1, further comprising:
receiving a second input for a first voiced picture comprised by the voiced long picture;
in response to the second input, displaying a video clip corresponding to the frame in which the first voiced picture is located; wherein an audio file of the video clip is the audio file corresponding to the first voiced picture;
receiving a third input for the video segment;
and in response to the third input, generating a second picture, and replacing the picture displayed by the first voiced picture with the second picture.
3. The method according to claim 1 or 2, characterized in that the method further comprises:
receiving a fourth input for a third voiced picture in the voiced long picture;
in response to the fourth input, displaying a single-picture sharing component;
receiving a fifth input for the single-picture sharing component;
and in response to the fifth input, displaying a sharing page for sharing the third voiced picture.
4. The method according to claim 1 or 2, characterized in that the method further comprises:
receiving a sixth input;
and in response to the sixth input, displaying a sharing page for sharing the voiced long picture.
5. The method according to claim 1 or 2, characterized in that the method further comprises:
receiving a seventh input;
and in response to the seventh input, displaying a first desktop component, wherein the first desktop component is used for displaying a voiced long picture.
6. The method of claim 5, further comprising:
receiving an eighth input for the first desktop component;
in response to the eighth input, displaying a voiced long picture gallery management page, the voiced long picture gallery management page comprising at least one voiced long picture;
receiving a ninth input for the voiced long picture gallery management page, the ninth input being used to select at least one voiced long picture in the voiced long picture gallery;
and in response to the ninth input, adding the selected voiced long picture to the first desktop component.
7. The method of claim 5, further comprising:
receiving a tenth input for the voiced long picture displayed within the first desktop component;
and in response to the tenth input, sequentially displaying each frame of voiced picture in the voiced long picture.
8. The method of claim 5, further comprising:
receiving an eleventh input for a fourth voiced picture in the voiced long picture displayed within the first desktop component;
and in response to the eleventh input, playing an audio file corresponding to the fourth voiced picture.
9. The method of claim 5, further comprising:
receiving a twelfth input for the voiced long picture displayed within the first desktop component;
and in response to the twelfth input, displaying another voiced long picture within the first desktop component.
10. A video processing apparatus, comprising:
the first receiving module is used for receiving a first input aiming at a video to be processed;
a first response module, configured to, in response to the first input, process multiple frames of pictures included in the video to be processed and an audio file corresponding to each frame of picture, to obtain and display a voiced long picture; wherein the voiced long picture comprises multiple frames of voiced pictures.
11. An electronic device comprising a processor, a memory, and a program or instructions stored on the memory and executable on the processor, the program or instructions when executed by the processor implementing the steps of the video processing method according to any one of claims 1 to 9.
12. A readable storage medium, on which a program or instructions are stored, which, when executed by a processor, carry out the steps of the video processing method according to any one of claims 1 to 9.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111430562.6A CN114063863B (en) | 2021-11-29 | 2021-11-29 | Video processing method and device and electronic equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114063863A true CN114063863A (en) | 2022-02-18 |
CN114063863B CN114063863B (en) | 2024-10-15 |
Family
ID=80277143
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111430562.6A Active CN114063863B (en) | 2021-11-29 | 2021-11-29 | Video processing method and device and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114063863B (en) |
Citations (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2008227693A (en) * | 2007-03-09 | 2008-09-25 | Oki Electric Ind Co Ltd | Speaker video display control system, speaker video display control method, speaker video display control program, communication terminal, and multipoint video conference system |
CN103186578A (en) * | 2011-12-29 | 2013-07-03 | 方正国际软件(北京)有限公司 | Processing system and processing method for sound effects of cartoon |
CN104065869A (en) * | 2013-03-18 | 2014-09-24 | 三星电子株式会社 | Method for displaying image combined with playing audio in an electronic device |
CN109618224A (en) * | 2018-12-18 | 2019-04-12 | 腾讯科技(深圳)有限公司 | Video data handling procedure, device, computer readable storage medium and equipment |
WO2019205872A1 (en) * | 2018-04-25 | 2019-10-31 | 腾讯科技(深圳)有限公司 | Video stream processing method and apparatus, computer device and storage medium |
CN110619673A (en) * | 2018-06-19 | 2019-12-27 | 阿里巴巴集团控股有限公司 | Method for generating and playing sound chart, method, system and equipment for processing data |
CN110868632A (en) * | 2019-10-29 | 2020-03-06 | 腾讯科技(深圳)有限公司 | Video processing method and device, storage medium and electronic equipment |
CN111343496A (en) * | 2020-02-21 | 2020-06-26 | 北京字节跳动网络技术有限公司 | Video processing method and device |
CN111857517A (en) * | 2020-07-28 | 2020-10-30 | 腾讯科技(深圳)有限公司 | Video information processing method and device, electronic equipment and storage medium |
CN112188115A (en) * | 2020-09-29 | 2021-01-05 | 咪咕文化科技有限公司 | Image processing method, electronic device and storage medium |
CN112261453A (en) * | 2020-10-22 | 2021-01-22 | 北京小米移动软件有限公司 | Method, device and storage medium for transmitting subtitle splicing map |
WO2021073368A1 (en) * | 2019-10-14 | 2021-04-22 | 北京字节跳动网络技术有限公司 | Video file generating method and device, terminal, and storage medium |
CN112905837A (en) * | 2021-04-09 | 2021-06-04 | 维沃移动通信(深圳)有限公司 | Video file processing method and device and electronic equipment |
CN113407144A (en) * | 2021-07-15 | 2021-09-17 | 维沃移动通信(杭州)有限公司 | Display control method and device |
Non-Patent Citations (2)
Title |
---|
LIU, Junbo; HAN, Guoqiang: "Design and Implementation of a Video Surveillance Function in the Digital Home", Microcomputer Information, no. 22, 22 August 2009 (2009-08-22) *
ZHANG, Dexin; AN, Peng; ZHANG, Haoxiang: "Application of robust face recognition in video surveillance systems", Optoelectronics Letters, no. 02, 1 March 2018 (2018-03-01) *
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||