WO2021104209A1 - Video display method, electronic device, and medium - Google Patents
Video display method, electronic device, and medium
- Publication number
- WO2021104209A1 (PCT/CN2020/130920)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- video
- input
- user
- target object
- preview interface
- Prior art date
Classifications
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
- G11B27/02—Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
- G11B27/031—Electronic editing of digitised analogue information signals, e.g. audio or video signals
- G11B27/034—Electronic editing of digitised analogue information signals, e.g. audio or video signals on discs
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
- G11B27/02—Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
- G11B27/031—Electronic editing of digitised analogue information signals, e.g. audio or video signals
- G11B27/036—Insert-editing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F1/00—Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
- G06F1/16—Constructional details or arrangements
- G06F1/1613—Constructional details or arrangements for portable computers
- G06F1/1633—Constructional details or arrangements of portable computers not specific to the type of enclosures covered by groups G06F1/1615 - G06F1/1626
- G06F1/1684—Constructional details or arrangements related to integrated I/O peripherals not covered by groups G06F1/1635 - G06F1/1675
- G06F1/1686—Constructional details or arrangements related to integrated I/O peripherals not covered by groups G06F1/1635 - G06F1/1675 the I/O peripheral being an integrated camera
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
- G06F3/04845—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0487—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
- G06F3/0488—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0487—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
- G06F3/0488—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
- G06F3/04883—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
- G11B27/10—Indexing; Addressing; Timing or synchronising; Measuring tape travel
- G11B27/34—Indicating arrangements
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/61—Control of cameras or camera modules based on recognised objects
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/62—Control of parameters via user interfaces
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/63—Control of cameras or camera modules by using electronic viewfinders
- H04N23/631—Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/63—Control of cameras or camera modules by using electronic viewfinders
- H04N23/631—Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
- H04N23/632—Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2203/00—Indexing scheme relating to G06F3/00 - G06F3/048
- G06F2203/048—Indexing scheme relating to G06F3/048
- G06F2203/04808—Several contacts: gestures triggering a specific function, e.g. scrolling, zooming, right-click, when the user establishes several contacts with the surface simultaneously; e.g. using several fingers or a combination of fingers and pen
Definitions
- The embodiments of the present invention relate to the field of video processing, and in particular to a video display method, an electronic device, and a medium.
- Shooting a video oneself takes a great deal of the user's time. However, finding a desired video among the many videos saved on an electronic device also takes considerable time. Moreover, when the user's electronic device contains multiple folders of saved videos, the user must open each folder separately to search, making the search process tedious and time-consuming.
- an embodiment of the present invention provides a video display method applied to an electronic device, including:
- the shooting preview interface includes a target object
- the shooting preview interface displaying N video identifiers associated with the target object
- each video identifier indicates a video
- N is a positive integer
- an embodiment of the present invention also provides an electronic device, including:
- the first display module is used to display the shooting preview interface of the camera
- the second display module is configured to display N video identifiers associated with the target object in the shooting preview interface when the target object is included in the shooting preview interface, where each video identifier indicates a video and N is a positive integer.
- an embodiment of the present invention provides an electronic device, including a processor, a memory, and a computer program stored in the memory and executable on the processor, where the computer program, when executed by the processor, implements the steps of the video display method described in the first aspect above.
- an embodiment of the present invention provides a computer-readable storage medium on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of the video display method described in the first aspect above.
- The video display method, electronic device, and computer storage medium of the embodiments of the present invention can, while the shooting preview interface of the camera is displayed, display the video identifiers associated with the target object contained in that interface, so that the user can then directly obtain the videos indicated by those identifiers. Thus, the embodiments of the present invention require neither shooting a new video nor manually searching for the desired video; instead, the identifier of the desired video is obtained directly through the target object in the camera's shooting preview interface, which is convenient to operate and enables quick video retrieval.
- FIG. 1 is a schematic flowchart of a video display method provided by an embodiment of the present invention
- FIG. 2 is a schematic diagram of a sliding input provided by an embodiment of the present invention.
- FIG. 3 is a schematic diagram of another sliding input provided by an embodiment of the present invention.
- FIG. 4 is a schematic diagram of a third input provided by an embodiment of the present invention.
- FIG. 5 is a schematic diagram of a fourth input provided by an embodiment of the present invention.
- FIG. 6 is a schematic diagram of another fourth input provided by an embodiment of the present invention.
- FIG. 7 is a schematic diagram of a fifth input provided by an embodiment of the present invention.
- FIG. 8 is a schematic diagram of a sixth input provided by an embodiment of the present invention.
- FIG. 9 is a schematic diagram of a seventh input provided by an embodiment of the present invention.
- FIG. 10 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
- Fig. 1 shows a schematic flowchart of a video display method provided by an embodiment of the present invention. The method is applied to an electronic device and includes:
- Step 101: Display the shooting preview interface of the camera.
- Step 102: When the target object is included in the shooting preview interface, display N video identifiers associated with the target object in the shooting preview interface, where each video identifier indicates a video and N is a positive integer.
- the target object here may include one object or multiple objects.
- In this way, the video identifiers associated with the target object contained in the shooting preview interface can be displayed, so that the user can then directly obtain the videos indicated by those identifiers. Thus, this embodiment requires neither shooting a new video nor manually searching for the desired video; the identifier of the desired video is obtained directly through the target object in the camera's shooting preview interface, which is convenient to operate and enables quick video retrieval.
- After step 101, the method may further include: receiving a user's first input on at least one object in the shooting preview interface, and in response to the first input, taking at least one object selected by the first input as the target object.
- the target object in the shooting preview interface is determined according to the user's selection.
- This method can improve the user's autonomy and avoid displaying so many video identifiers that the user's video selection is hindered.
- the electronic device may also automatically recognize the object that meets the preset condition contained in the shooting preview interface as the target object, which is not limited in the present invention.
- the foregoing receiving a user's first input on at least one object in the shooting preview interface includes:
- the above-mentioned using at least one object selected by the first input as a target object includes:
- taking at least one object enclosed by the sliding track of the sliding input as the target object.
- the corresponding target object is selected through the trajectory of the user's operation gesture.
- the sliding input here can be: two fingers start from the same point, slide downward along separate paths, and then come back together.
- the figure formed by the sliding track may be a diamond, a circle, or a shape similar to a diamond or a circle.
- for example, the user's finger draws a rhombus without leaving the screen, and the object enclosed in the rhombus is used as the target object, as shown in FIG. 2.
- Figure 2 shows a schematic diagram of a sliding input provided by an embodiment of the present invention.
- the object 110 in FIG. 2 is the selected target object.
- FIG. 3 shows another schematic diagram of sliding input provided by an embodiment of the present invention, and the objects 110 and 120 in FIG. 3 are selected target objects.
- the above sliding input can also be: the user slides continuously with two fingers without leaving the screen, so that the sliding track forms several consecutive figures, each of which is a diamond, a circle, or a similar shape; the target objects selected by the sliding input then include the objects enclosed in each figure. For example, if the finger draws two circles in succession without leaving the screen, and the two circles respectively enclose a sofa and a table, then the target objects include the sofa and the table, and the video identifiers associated with the sofa and the table are displayed at the same time.
- the user may also use other sliding inputs on the shooting preview interface to select the target object.
- for example, if the sliding trajectory is a point or a check mark, an object within a predetermined distance above the sliding trajectory is used as the selected target object; alternatively, the user can tap a certain area to focus, and the object in the focused area is used as the target object.
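As an illustrative sketch only (not part of the patent text), selecting the objects enclosed by the closed figure traced by the sliding input can be implemented as a point-in-polygon test over the detected objects' center points. The object names and helper functions below are hypothetical.

```python
# Hypothetical sketch: which detected objects fall inside the closed
# figure drawn by the user's sliding track?

def point_in_polygon(point, polygon):
    """Ray-casting test: is `point` inside the closed `polygon`?"""
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            # x-coordinate where this edge crosses the horizontal ray
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def select_target_objects(sliding_track, detected_objects):
    """Treat the sliding track as a closed figure; keep enclosed objects."""
    return [name for name, center in detected_objects
            if point_in_polygon(center, sliding_track)]

# A roughly diamond-shaped track around screen coordinates (50, 50):
track = [(50, 10), (90, 50), (50, 90), (10, 50)]
objects = [("sofa", (50, 50)), ("table", (200, 200))]
print(select_target_objects(track, objects))  # ['sofa']
```

The same test works unchanged whether the track is a diamond, a circle, or one of several consecutive figures, since each figure is just a closed polygonal track.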
- the method may further include: acquiring the N video identifiers associated with the target object.
- the foregoing acquisition of the N video identifiers associated with the target object may specifically include:
- the videos may be obtained from a server or from the storage of the electronic device. If the videos are obtained from the electronic device's storage, the user can obtain and edit them without a network connection; if a server is chosen, the range of obtainable videos is broadened and a very rich set of video sources can be obtained from the Internet.
- the foregoing acquiring of the N video identifiers associated with the target object may further include:
- the shooting time of the videos whose identifiers are acquired can be limited according to the shooting angle of view used when the target object is captured.
- for example, the time interval corresponding to the standard angle of view may be within the last 3 days, and the time interval corresponding to the wide angle may be more than 3 days ago.
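A minimal sketch of the view-angle-based time filter described above, under the assumption that each candidate video carries a shooting timestamp; the record layout and function name are hypothetical, and the 3-day threshold follows the example in the text.

```python
# Hypothetical sketch: limit the shooting time of the returned videos
# by the angle of view used when the target object was captured.
from datetime import datetime, timedelta

def filter_by_view_angle(videos, view_angle, now):
    """Standard view: videos shot within 3 days; wide angle: older ones."""
    cutoff = now - timedelta(days=3)
    if view_angle == "standard":
        return [v for v in videos if v["shot_at"] >= cutoff]
    elif view_angle == "wide":
        return [v for v in videos if v["shot_at"] < cutoff]
    return videos  # unknown angle: no time restriction

now = datetime(2020, 11, 26)
videos = [
    {"id": "v1", "shot_at": datetime(2020, 11, 25)},  # 1 day old
    {"id": "v2", "shot_at": datetime(2020, 11, 10)},  # 16 days old
]
print([v["id"] for v in filter_by_view_angle(videos, "standard", now)])  # ['v1']
print([v["id"] for v in filter_by_view_angle(videos, "wide", now)])      # ['v2']
```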
- the video identifier associated with each target object may be displayed in a preset range around each target object in the shooting preview interface.
- the video identifiers can also be displayed in the form of a list, or in other ways; the present invention does not limit the display manner of the video identifiers.
- the foregoing step 102 may include:
- when the first characteristic part of the target object is included in the shooting preview interface, display the video identifiers of the N videos associated with the first characteristic part;
- when the second characteristic part of the target object is included in the shooting preview interface, update the video identifiers of the N videos to the video identifiers of the T videos associated with the second characteristic part;
- T is a positive integer
- different characteristic parts of the target object are associated with different video identifiers.
- for example, the characteristic parts of a sofa can be the armrests, seat cushions, back cushions, pillows, and so on; the armrests of the sofa can be associated with family-related videos, the pillows with party-related videos, and so on. This embodiment can thus obtain videos associated with the various characteristic parts of the target object, improving the richness of the obtained videos.
- the process of identifying the first characteristic part and the second characteristic part of the target object in the shooting preview interface may be: scanning each characteristic part of the target object in sequence. Specifically, the target object may be scanned in a predetermined order (for example, from top to bottom, from left to right, etc.), and during the scanning process, the characteristic part in the scanned part is identified.
- the video identifier associated with the first characteristic part is no longer displayed, but only the video identifier associated with the second characteristic part is displayed.
- the video identifier associated with the second characteristic part may be added, that is, the video identifier associated with the first characteristic part and the second characteristic part may be displayed at the same time.
- when multiple characteristic parts of the target object are scanned, that is, when multiple characteristic parts of the target object are included in the shooting preview interface, at least two of the scanned characteristic parts can also be combined, and a video identifier associated with each combination obtained. For example, after three characteristic parts are scanned, they can be combined in pairs and all three can also be used as one combination, and then the video identifier associated with each of the resulting four combinations is obtained. In this way, the richness of the obtained video identifiers can be further increased.
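The combination step above is standard combinatorics; a sketch using `itertools.combinations` reproduces the worked example (three pairs plus the triple gives the four groups mentioned in the text). The part names are illustrative.

```python
# Hypothetical sketch: combine at least two of the scanned characteristic
# parts, so each combination can be used to query an associated video
# identifier.
from itertools import combinations

def part_combinations(parts):
    """All combinations of size >= 2, from pairs up to the full set."""
    groups = []
    for size in range(2, len(parts) + 1):
        groups.extend(combinations(parts, size))
    return groups

parts = ["armrest", "cushion", "pillow"]
groups = part_combinations(parts)
print(len(groups))  # 4: three pairs plus the triple
```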
- the method may further include:
- the first video is edited, and the edited first video is output.
- this embodiment first plays the first video, and then edits the first video during the playback process.
- in response to the user's trigger input on the previous or next button, the previous video or the next video of the first video may be played.
- multiple first videos corresponding to the multiple first video identifiers are played at the same time.
- for example, the screen may be divided into upper and lower areas, each displaying one first video; alternatively, the screen may be divided into left and right areas, each displaying one first video.
- the clipped first video may be saved in response to the received video saving input.
- the video saving input here may be an operation of long pressing the play button of the video. Of course, other operations may also be set as the video saving input, which is not limited in the present invention.
- FIG. 4 shows a schematic diagram of a third input provided by an embodiment of the present invention.
- the above-mentioned third input on the first video identifier may be: tapping the first video identifier and, without the finger leaving the screen, stretching two fingers apart.
- the third input may also be an operation of double-clicking the first video identifier, or other input methods may also be set as the third input.
- receiving the fourth input of the user may specifically be: receiving the fourth input of the user on the playing interface of the first video.
- FIG. 5 shows a schematic diagram of a fourth input provided by an embodiment of the present invention. After receiving the fourth input from the user, the method may further include:
- the first information customized by the user, such as text or patterns, can be inserted into the play interface of the first video.
- the user can edit text onto the acquired first video according to his own needs, so as to obtain the desired video display effect; the user thus has high flexibility in editing videos.
- in order to facilitate editing and to prevent the user from missing the display content of the first video during editing, the first video can be paused in response to the fourth input, and when the user's editing-completion input is received, playback of the first video continues.
- the aforementioned fourth input may be an operation in which the user touches the screen with a single finger and moves along an arc exceeding a preset length.
- the fourth input can also be set to other types of operations, which is not limited in the present invention.
- the foregoing receiving the fourth input of the user, in response to the fourth input, editing the first video, and outputting the edited first video may also be:
- the fourth input is the delete input.
- when inserted first information needs to be adjusted, it first needs to be deleted. This embodiment enables the user to delete the inserted first information, improving the convenience of editing information on the first video.
- the fourth input in this embodiment may be an operation of a gesture sliding a line from one side of the screen (for example, the left side) to the other side of the screen.
- the first information that the sliding track of the user's gesture passes through is the target first information.
- FIG. 6 shows a schematic diagram of another fourth input provided by an embodiment of the present invention.
- the first information inserted on the sliding track of the user gesture can also be set as the target first information.
- other input methods can also be used as the fourth input, which is not limited in the present invention.
- the above-mentioned fourth input of the first information may be carried out before the completion of the saving of the edited first video.
- editing the first video and outputting the edited first video may further include:
- the first position is the position entered by the fifth input on the playback progress bar of the first video.
- the associated video identifiers are obtained through the target object in the shooting preview interface, the videos corresponding to those identifiers are used as the candidate videos to be processed, and the user selects at least one video from the candidate videos for editing.
- this embodiment additionally records a second video of the target object and inserts the additionally recorded second video into the first video.
- the video content required by the user can be added at will in the first video, which enriches the user's video editing mode, facilitates the user's custom editing of the video, and enhances the flexibility of video editing.
- one or more second videos can be recorded.
- FIG. 7 shows a schematic diagram of a fifth input provided by an embodiment of the present invention.
- the above-mentioned fifth input can be: sliding down the screen with two fingers, whereupon the second video is inserted at the position on the playback progress bar of the first video corresponding to the fifth input.
- the indicator 130 in FIG. 7 is the indication mark of the insertion position of the second video. In other embodiments, a single-finger sliding operation can also be used as the fifth input, or triggering a predetermined video-insertion menu item can be used as the fifth input.
- the present invention does not limit the specific operation mode of the fifth input.
- the indication mark of the second video is displayed; where the indication mark is used to indicate the insertion position of the second video.
- this embodiment identifies the insertion position of the second video on the playback progress bar of the first video, so that the user can intuitively see where the second video has been inserted, and it is convenient for the user to subsequently adjust the inserted second video, improving the user's convenience during video editing.
- the indication mark here can be represented by a bubble containing the video abbreviation, of course, other identification methods can also be used.
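To make the insertion mechanics concrete, a sketch (all names hypothetical, not from the patent) can map the position touched by the fifth input on the progress bar to an insertion time and record an indication mark for the inserted second video.

```python
# Hypothetical sketch: convert a fifth-input touch on the progress bar
# into an insertion time, and keep an indication mark for the clip.

def insertion_time(touch_x, bar_x, bar_width, video_duration_s):
    """Map a touch x-coordinate on the progress bar to a playback time."""
    fraction = (touch_x - bar_x) / bar_width
    fraction = min(max(fraction, 0.0), 1.0)  # clamp to the bar's extent
    return fraction * video_duration_s

class EditSession:
    def __init__(self, duration_s):
        self.duration_s = duration_s
        self.indicators = []  # (insert_time_s, clip_id) indication marks

    def insert_clip(self, clip_id, touch_x, bar_x, bar_width):
        t = insertion_time(touch_x, bar_x, bar_width, self.duration_s)
        self.indicators.append((t, clip_id))
        return t

session = EditSession(duration_s=60.0)
# A touch one quarter of the way along a 200-px bar on a 60 s video:
print(session.insert_clip("second_video", touch_x=150, bar_x=100,
                          bar_width=200))  # 15.0
```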
- the method may further include:
- in response to the sixth input, the indicator is moved from the first position to the second position, and the second video is inserted at the second position in the first video.
- the first position and the second position here refer to the position on the playback progress bar of the first video, that is, the position of the playback time point of the video.
- the first position is different from the second position.
- the user may fail to control the insertion position of the second video accurately, resulting in a wrong insertion position; in this case, the insertion position of the inserted second video needs to be adjusted.
- after the inserted video is selected, by moving the indication mark of the second video along the playback progress bar of the first video, the insertion position of the second video in the first video can be moved. During this process, the user can intuitively see how the current insertion position of the second video moves by following the position of the indication mark, which facilitates the user's adjustment.
- FIG. 8 shows a schematic diagram of a sixth input provided by an embodiment of the present invention.
- the above-mentioned sixth input may be: selecting the indication mark of the second video with one or two fingers and moving it left and right along the playback progress bar of the first video to adjust the insertion position of the second video frame by frame.
- the user can move the insertion position of the second video by pressing and holding the indicator with one or two fingers, and sliding the indicator left and right without leaving the screen.
- the indicator 130 in Fig. 8 is the indication mark of the adjusted second video. For example, swiping to the left inserts the second video at the frame before the current frame.
- other methods for adjusting the insertion position may also be used, and the present invention does not limit the specific operation content of the sixth input.
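The frame-by-frame adjustment triggered by the sixth input can be sketched as a small step function over the insertion frame index (frame rate, clamping behavior, and names are assumptions, not from the patent).

```python
# Hypothetical sketch: a left swipe on the indication mark moves the
# insertion position to the previous frame, a right swipe to the next,
# clamped to the first video's frame range.

def move_indicator(insert_frame, direction, total_frames):
    """Swipe left -> previous frame; swipe right -> next frame."""
    step = -1 if direction == "left" else 1
    return min(max(insert_frame + step, 0), total_frames - 1)

# Swiping left from frame 120 inserts before the previous frame:
frame = move_indicator(120, "left", total_frames=1800)
print(frame)  # 119
```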
- the method may further include:
- the second video that has been inserted into the first video is deleted, and the display of the indication mark is removed.
- the inserted second video can be deleted, further improving the user's ability to manage the additional video during editing, so that insertion of the additional video can be managed more conveniently and the editing process is more flexible.
- FIG. 9 shows a schematic diagram of a seventh input provided by an embodiment of the present invention.
- the aforementioned seventh input may be an operation of selecting the indicator of the second video with a single finger and, without leaving the screen, dragging it off the bottom edge of the screen.
- a corresponding closing flag can also be set for the indicator flag of the second video.
- the seventh input is an operation of selecting the closing flag corresponding to the indicator flag of the second video with a single finger.
- the indicator 140 in FIG. 9 is the indication mark of the deleted second video.
- the seventh input can also be set to other types of operations.
- After step 102, the method may further include:
- multiple target videos can be spliced directly.
- the user does not need to record a video each time, but can obtain the desired video by splicing multiple target videos together, which improves the flexibility of video editing and the richness of video editing operations.
- the eighth input here may be an operation of dragging the M target video identifiers together; the subsequent video splicing refers to joining the M target videos end to end in the order in which their recordings were completed.
- for example, the user can select two video identifiers and drag them together.
- suppose the two video identifiers correspond to target videos A and B, respectively.
- if the recording of A was completed earlier than that of B, the end frame of A is joined to the start frame of B, so that A and B are spliced in the order in which their recordings were completed.
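The splicing rule above — order clips by recording-completion time, then concatenate — can be sketched as follows. This is a schematic illustration under the assumption that clips are represented as frame lists; `TargetVideo` and `splice` are hypothetical names, not part of the disclosure.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class TargetVideo:
    name: str
    finished_at: float   # timestamp at which recording of this clip completed
    frames: List[str]    # placeholder for the clip's frames

def splice(videos: List[TargetVideo]) -> List[str]:
    """Join clips end to start, ordered by recording-completion time."""
    out: List[str] = []
    for v in sorted(videos, key=lambda v: v.finished_at):
        out.extend(v.frames)  # end frame of one clip meets start frame of the next
    return out
```

With A finished before B, `splice([B, A])` still places A's frames first, matching the described behavior regardless of the drag order.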
- FIG. 10 shows a schematic structural diagram of an electronic device provided by an embodiment of the present invention.
- the electronic device includes:
- the first display module 201, configured to display the shooting preview interface of the camera;
- the second display module 202, configured to display, in the shooting preview interface, N video identifiers associated with the target object when the target object is included in the shooting preview interface; wherein each video identifier indicates one video, and N is a positive integer.
- the video identifiers associated with the target object contained in the shooting preview interface can thus be displayed, so that the user can subsequently obtain the videos indicated by those identifiers directly. It can be seen that the embodiment of the present invention requires neither video shooting by the user nor a manual search for the desired video; instead, the identifier of the desired video is obtained directly through the target object in the camera's shooting preview interface, which is convenient to operate and enables quick video lookup.
- the electronic device may further include:
- the object selection module is configured to receive a user's first input on at least one object in the shooting preview interface; in response to the first input, the at least one object selected by the first input is used as the target object.
- the target object in the shooting preview interface is determined according to the user's selection.
- this method improves the user's autonomy and avoids displaying so many video identifiers that the user's video selection is hindered.
- the electronic device may also automatically recognize the object that meets the preset condition contained in the shooting preview interface as the target object, which is not limited in the present invention.
- the above-mentioned object selection module is specifically configured to: receive a user's sliding input on the shooting preview interface; and use at least one object circled by the sliding track of the sliding input as a target object.
- the corresponding target object is selected through the trajectory of the user's operation gesture.
- the sliding input here may be: two fingers, starting together, slide downward while separating and then coming back together, without leaving the screen.
- the figure formed by the sliding track may be a diamond, a circle, or a shape similar to a diamond or a circle.
- the sliding input may also be: the user slides continuously with two fingers without leaving the screen, so that the sliding track forms a plurality of consecutive figures, each being a diamond, a circle, or a similar shape.
- in that case, the target objects selected by the sliding input include the objects circled by each figure.
- other sliding inputs may also be used.
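Determining which objects are "circled" by the sliding track amounts to a point-in-polygon test on each object's position. The sketch below uses the standard ray-casting (even-odd) rule; the function names and the idea of representing each object by a center point are assumptions for illustration, not details from the disclosure.

```python
from typing import Dict, List, Tuple

Point = Tuple[float, float]

def point_in_polygon(pt: Point, poly: List[Point]) -> bool:
    """Ray-casting test: is pt inside the closed polygon traced by the swipe?"""
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        # Count edges crossed by a horizontal ray extending to the right of pt.
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def select_targets(objects: Dict[str, Point], track: List[Point]) -> List[str]:
    """Objects whose center falls inside the sliding track become target objects."""
    return [name for name, center in objects.items()
            if point_in_polygon(center, track)]
```

A diamond- or circle-shaped track would simply be a denser polygon of sampled touch points; the same test applies.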
- the video identifier acquisition module is used to acquire N video identifiers associated with the target object.
- the aforementioned video identifier obtaining module may be specifically used to obtain the N video identifiers associated with the target object from at least one video stored on a server or in the memory of the electronic device.
- that is, the videos are obtained from storage on the server or on the electronic device. If the electronic device's own storage is used, the user can obtain and edit videos without a network connection; if a server is used, the range of available videos is broadened and a rich video source can be drawn from the Internet.
- the above-mentioned video identification acquisition module may also be used for:
- the second display module 202 may be specifically configured to display, within a preset range around each target object in the shooting preview interface, the video identifiers associated with that target object, or to display the video identifiers in the form of a list.
- the present invention does not limit the display mode of the video identifiers.
- the above-mentioned second display module 202 may specifically include:
- the first identification display unit is configured to display video identifiers of N videos associated with a first feature part when the first feature part of the target object is included in the shooting preview interface;
- the second identification display unit is configured to update the video identifiers of the N videos to video identifiers of T videos associated with a second feature part when the shooting preview interface is updated to the second feature part of the target object; wherein T is a positive integer, and different feature parts of the target object are associated with different video identifiers.
- for example, the target object may be a sofa.
- the feature parts of the sofa may be the armrests, seat cushions, back cushions, pillows, and so on. Obtaining a video corresponding to each feature part of the target object in this way improves the richness of the obtained videos.
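The per-feature-part association can be pictured as a lookup table from a recognized feature part to its video identifiers; framing a different part of the object swaps the displayed list. The part names and identifiers below are illustrative placeholders, not values from the disclosure.

```python
# Hypothetical association table: each recognized feature part of the target
# object maps to the video identifiers displayed for it.
FEATURE_VIDEOS = {
    "sofa_armrest": ["v101", "v102"],          # N = 2 videos for the armrest
    "sofa_cushion": ["v201", "v202", "v203"],  # T = 3 videos for the cushion
}

def identifiers_for(feature_part: str) -> list:
    """Identifiers shown for the feature part currently framed in the
    shooting preview; an unrecognized part yields no identifiers."""
    return FEATURE_VIDEOS.get(feature_part, [])
```

When the preview updates from the armrest to the cushion, the displayed identifiers change from the N-entry list to the T-entry list, as described above.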
- the above-mentioned second identification display unit is further used for identifying the first characteristic part and the second characteristic part of the target object in the shooting preview interface.
- the electronic device further includes:
- the video editing module 204 is configured to receive a fourth input from the user; in response to the fourth input, the first video is edited, and the edited first video is output.
- the relevant video of the target object can be obtained and then edited. This simplifies the process of video editing for a specific target object.
- the electronic device may further include a video saving module, configured to save the edited first video in response to the received video saving input.
- the video saving input here may be a long press on the video's play button; of course, other operations may also be set as the video saving input.
- the video playing module 203 can also be used to: in the process of playing the first video, in response to the user's trigger input on a previous or next button, play the video before or after the first video.
- the video editing module 204 may be used to: in response to the received fourth input, pause the playing video; receive the input information to be inserted, and insert the information to be inserted in the video frame displayed when the playing is paused.
- multiple first videos corresponding to the multiple first video identifiers are played at the same time.
- for example, the screen may be divided into upper and lower areas, each displaying one first video separately, or into left and right areas, each displaying one first video separately.
- the above-mentioned video editing module 204 is specifically configured to: receive the user's fourth input on the play interface of the first video and, in response to the fourth input, display the first information entered by the fourth input on the play interface of the first video.
- that is, first information customized by the user, such as text and patterns, can be inserted into the play interface of the first video.
- the user can thus edit text on the acquired first video according to his or her own needs to obtain the desired video display effect, giving the user great flexibility in editing videos.
- the foregoing video editing module 204 may be further configured to: receive the user's fourth input on target first information that has been inserted in the first video, and delete the first information in response to the fourth input.
- here the fourth input is a delete input. This embodiment enables the user to delete the inserted first information, which improves the convenience of editing information on the first video.
- the video editing module 204 may also be used to: in response to the fourth input, record a second video of the target object in the first video; receive a fifth input from the user on the second video; and in response to the fifth input, insert the second video at a first position in the first video; where the first position is the position entered by the fifth input on the playback progress bar of the first video.
- the second video of the target object is additionally recorded, and the additionally recorded second video is inserted into the first video.
- the video content required by the user can be added at will in the first video, which enriches the user's video editing mode, facilitates the user's custom editing of the video, and enhances the flexibility of video editing.
- one or more second videos may be recorded.
- the video editing module 204 may also be used to: display an indication mark of the second video at the first position on the playback progress bar of the first video; wherein the indication mark is used to indicate the insertion position of the second video.
- the insertion position of the second video is marked, so that the user can intuitively see where the second video has been inserted and can conveniently view and adjust the previously inserted second video later, which improves the user's convenience during video editing.
- the video editing module 204 may also be used to: receive a sixth input from the user on the indication mark; and in response to the sixth input, move the indication mark from the first position to a second position and insert the second video at the second position in the first video.
- that is, the insertion position of the second video in the first video can be moved after the inserted video is selected. During this process, the user can intuitively see, from the position of the indication mark, how the current insertion position of the second video moves, which facilitates adjustment.
- the video editing module 204 can also be used to:
- this further enriches the user's means of editing the additional video during video editing, so that insertion management of the additional video is more convenient and editing is more flexible.
- the electronic device further includes:
- the video splicing module is used to receive the user's eighth input on M target video identifiers among the N video identifiers; and in response to the eighth input, perform video splicing on the M target videos indicated by the M target video identifiers and output a third video; where M is an integer greater than 1 and M ≤ N.
- multiple target videos can be spliced directly.
- the user does not need to record a new video each time, but can obtain the desired video by splicing multiple target videos together, which improves the flexibility of video editing and the richness of video editing operations.
- the electronic device provided in the embodiment of the present invention can implement each method step implemented by the electronic device in any of the foregoing method embodiments, and to avoid repetition, details are not described herein again.
- FIG. 11 is a schematic diagram of the hardware structure of an electronic device that implements various embodiments of the present invention.
- the electronic device 300 includes but is not limited to: a radio frequency unit 301, a network module 302, an audio output unit 303, an input unit 304, a sensor 305, a display unit 306, a user input unit 307, an interface unit 308, a memory 309, a processor 310, a power supply 311 and shooting component 312 and other components.
- electronic devices include, but are not limited to, mobile phones, tablet computers, notebook computers, palmtop computers, vehicle-mounted terminals, wearable devices, and pedometers.
- the photographing component 312 may include a camera and related components of the camera.
- the processor 310 is configured to display the shooting preview interface of the camera; and, when the shooting preview interface includes the target object, display, in the shooting preview interface, N video identifiers associated with the target object; wherein each video identifier indicates one video, and N is a positive integer.
- the video identifiers associated with the target object contained in the shooting preview interface can thus be displayed, so that the user can subsequently obtain the videos indicated by those identifiers directly. It can be seen that the embodiment of the present invention requires neither video shooting by the user nor a manual search for the desired video; instead, the identifier of the desired video is obtained directly through the target object in the camera's shooting preview interface, which is convenient to operate and enables quick video lookup.
- the radio frequency unit 301 can be used for receiving and sending signals during information transmission and calls. Specifically, downlink data from the base station is received and handed to the processor 310 for processing, and uplink data is sent to the base station.
- the radio frequency unit 301 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like.
- the radio frequency unit 301 can also communicate with the network and other devices through a wireless communication system.
- the electronic device provides users with wireless broadband Internet access through the network module 302, such as helping users to send and receive emails, browse web pages, and access streaming media.
- the audio output unit 303 may convert the audio data received by the radio frequency unit 301 or the network module 302 or stored in the memory 309 into an audio signal and output it as sound. Moreover, the audio output unit 303 may also provide audio output related to a specific function performed by the electronic device 300 (for example, call signal reception sound, message reception sound, etc.).
- the audio output unit 303 includes a speaker, a buzzer, a receiver, and the like.
- the input unit 304 is used to receive audio or video signals.
- the input unit 304 may include a graphics processing unit (GPU) 3041 and a microphone 3042.
- the graphics processor 3041 is configured to process image data of still pictures or video obtained by an image capture device (such as a camera) in video capture mode or image capture mode.
- the processed image frame may be displayed on the display unit 306.
- the image frame processed by the graphics processor 3041 may be stored in the memory 309 (or other storage medium) or sent via the radio frequency unit 301 or the network module 302.
- the microphone 3042 can receive sound, and can process such sound into audio data.
- the processed audio data can be converted into a format that can be sent to a mobile communication base station via the radio frequency unit 301 for output in the case of a telephone call mode.
- the electronic device 300 also includes at least one sensor 305, such as a light sensor, a motion sensor, and other sensors.
- the light sensor includes an ambient light sensor and a proximity sensor.
- the ambient light sensor can adjust the brightness of the display panel 3061 according to the brightness of the ambient light.
- the proximity sensor can turn off the display panel 3061 and/or the backlight when the electronic device 300 is moved close to the ear.
- the accelerometer sensor can detect the magnitude of acceleration in various directions (usually three axes) and, when stationary, the magnitude and direction of gravity; it can be used to identify the posture of the electronic device (such as portrait/landscape switching, related games, and magnetometer attitude calibration) and for vibration-recognition functions (such as a pedometer or tap detection). The sensor 305 may also include a fingerprint sensor, pressure sensor, iris sensor, molecular sensor, gyroscope, barometer, hygrometer, thermometer, infrared sensor, etc., which will not be repeated here.
- the user input unit 307 may be used to receive inputted numeric or character information, and generate key signal input related to user settings and function control of the electronic device.
- the user input unit 307 includes a touch panel 3071 and other input devices 3072.
- the touch panel 3071, also called a touch screen, can collect the user's touch operations on or near it (for example, operations performed by the user on or near the touch panel 3071 with a finger, stylus, or any other suitable object or accessory).
- the touch panel 3071 may include two parts: a touch detection device and a touch controller.
- the touch detection device detects the user's touch position, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, and sends them to the processor 310; it also receives and executes commands sent by the processor 310.
- the touch panel 3071 can be implemented in multiple types such as resistive, capacitive, infrared, and surface acoustic wave.
- the user input unit 307 may also include other input devices 3072.
- other input devices 3072 may include, but are not limited to, a physical keyboard, function keys (such as volume control buttons, switch buttons, etc.), trackball, mouse, and joystick, which will not be repeated here.
- the touch panel 3071 can be overlaid on the display panel 3061.
- when the touch panel 3071 detects a touch operation on or near it, it transmits the operation to the processor 310 to determine the type of touch event, and the processor 310 then provides corresponding visual output on the display panel 3061 according to that type.
- although the touch panel 3071 and the display panel 3061 are described here as two independent components implementing the input and output functions of the electronic device, in some embodiments they can be integrated to implement those functions; this is not specifically limited here.
- the interface unit 308 is an interface for connecting an external device and the electronic device 300.
- the external device may include a wired or wireless headset port, an external power source (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device with an identification module, audio input/output (I/O) port, video I/O port, headphone port, etc.
- the interface unit 308 can be used to receive input (for example, data information or power) from an external device and transmit the received input to one or more elements in the electronic device 300, or can be used to transfer data between the electronic device 300 and the external device.
- the memory 309 can be used to store software programs and various data.
- the memory 309 may mainly include a program storage area and a data storage area.
- the program storage area may store an operating system and application programs required by at least one function (such as a sound playback function or an image playback function); the data storage area may store data created according to the use of the mobile phone (such as audio data and a phone book).
- the memory 309 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or other volatile solid-state storage devices.
- the processor 310 is the control center of the electronic device. It uses various interfaces and lines to connect the parts of the entire electronic device, runs or executes the software programs and/or modules stored in the memory 309, and calls the data stored in the memory 309 to perform the various functions of the electronic device and process data, thereby monitoring the electronic device as a whole.
- the processor 310 may include one or more processing units; preferably, the processor 310 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interface, and application programs, and the modem processor mainly handles wireless communication. It can be understood that the above-mentioned modem processor may not be integrated into the processor 310.
- the electronic device 300 may also include a power source 311 (such as a battery) for supplying power to various components.
- preferably, the power source 311 may be logically connected to the processor 310 through a power management system, so that functions such as charging, discharging, and power-consumption management are handled through the power management system.
- the electronic device 300 includes some functional modules not shown, which will not be repeated here.
- the embodiment of the present invention also provides an electronic device, including a processor 310, a memory 309, and a computer program stored on the memory 309 and executable on the processor 310.
- when the computer program is executed by the processor 310, the various processes of the foregoing video display method embodiments are implemented, and the same technical effects can be achieved; to avoid repetition, details are not repeated here.
- the embodiment of the present invention also provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the various processes of the foregoing video display method embodiments are implemented.
- the computer-readable storage medium may include non-transitory memory, such as a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
- each block in the flowchart and/or block diagram and the combination of each block in the flowchart and/or block diagram can be implemented by a program or instruction.
- these programs or instructions can be provided to the processor of a general-purpose computer, a special-purpose computer, or another programmable data processing device to produce a machine, such that, when executed by the processor of the computer or other programmable data processing device, they implement the functions/actions specified in one or more blocks of the flowchart and/or block diagram.
- such a processor can be, but is not limited to, a general-purpose processor, a dedicated processor, a special application processor, or a field-programmable logic circuit. It can also be understood that each block in the block diagram and/or flowchart, and combinations of blocks in the block diagram and/or flowchart, can also be implemented by dedicated hardware that performs the specified function or action, or by a combination of dedicated hardware and computer instructions.
Claims (15)
- 1. A video display method, applied to an electronic device, the method comprising: displaying a shooting preview interface of a camera; and, in a case where the shooting preview interface includes a target object, displaying, in the shooting preview interface, N video identifiers associated with the target object; wherein each video identifier indicates one video, and N is a positive integer.
- 2. The method according to claim 1, wherein after displaying the shooting preview interface of the camera, the method further comprises: receiving a user's first input on at least one object in the shooting preview interface; and, in response to the first input, taking the at least one object selected by the first input as the target object.
- 3. The method according to claim 2, wherein receiving the user's first input on at least one object in the shooting preview interface comprises: receiving a user's sliding input on the shooting preview interface; and taking the at least one object selected by the first input as the target object comprises: taking at least one object circled by the sliding track of the sliding input as the target object.
- 4. The method according to claim 1, wherein displaying, in the shooting preview interface, the N video identifiers associated with the target object in the case where the shooting preview interface includes the target object comprises: in a case where the shooting preview interface includes a first feature part of the target object, displaying video identifiers of N videos associated with the first feature part; and, in a case where the shooting preview interface is updated to a second feature part of the target object, updating the video identifiers of the N videos to video identifiers of T videos associated with the second feature part; wherein T is a positive integer, and different feature parts of the target object are associated with different video identifiers.
- 5. The method according to claim 1, wherein before displaying, in the shooting preview interface, the N video identifiers associated with the target object, the method further comprises: receiving a user's second input; in response to the second input, adjusting the shooting field-of-view angle of the camera to a target shooting field-of-view angle; determining, according to a preset correspondence between shooting field-of-view angles and time intervals, a target time interval corresponding to the target shooting field-of-view angle; and acquiring video identifiers of N videos whose shooting times are within the target time interval and which are associated with the target object.
- 6. The method according to claim 1, further comprising: receiving a user's third input on a first video identifier among the N video identifiers; in response to the third input, playing a first video corresponding to the first video identifier; receiving a user's fourth input; and, in response to the fourth input, editing the first video and outputting the edited first video.
- 7. The method according to claim 6, wherein receiving the user's fourth input comprises: receiving a user's fourth input on a play interface of the first video; and, after receiving the user's fourth input, the method further comprises: in response to the fourth input, displaying, on the play interface of the first video, first information entered by the fourth input.
- 8. The method according to claim 6, wherein, in response to the fourth input, editing the first video and outputting the edited first video further comprises: in response to the fourth input, recording a second video of the target object in the first video; receiving a user's fifth input on the second video; and, in response to the fifth input, inserting the second video at a first position in the first video; wherein the first position is a position entered by the fifth input on a playback progress bar of the first video.
- 9. The method according to claim 8, wherein after inserting the second video at the first position in the first video, the method further comprises: displaying an indicator of the second video at the first position on the playback progress bar of the first video; wherein the indicator is used to indicate the insertion position of the second video.
- 10. The method according to claim 9, wherein after displaying the indicator of the second video at the first position on the playback progress bar of the first video, the method further comprises: receiving a user's sixth input on the indicator; and, in response to the sixth input, moving the indicator from the first position to a second position and inserting the second video at the second position in the first video.
- 11. The method according to claim 9 or 10, wherein after displaying the indicator of the second video at the first position on the playback progress bar of the first video, the method further comprises: receiving a user's seventh input on the indicator; and, in response to the seventh input, deleting the second video that has been inserted in the first video and removing the display of the indicator.
- 12. The method according to claim 1, wherein after displaying, in the shooting preview interface, the N video identifiers associated with the target object, the method further comprises: receiving a user's eighth input on M target video identifiers among the N video identifiers; and, in response to the eighth input, performing video splicing on M target videos indicated by the M target video identifiers and outputting a third video; wherein M is an integer greater than 1, and M ≤ N.
- 13. An electronic device, comprising: a first display module, configured to display a shooting preview interface of a camera; and a second display module, configured to display, in the shooting preview interface, N video identifiers associated with a target object in a case where the shooting preview interface includes the target object; wherein each video identifier indicates one video, and N is a positive integer.
- 14. An electronic device, comprising a processor, a memory, and a computer program stored on the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of the video display method according to any one of claims 1 to 12.
- 15. A computer-readable storage medium, on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the video display method according to any one of claims 1 to 12.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP20893151.9A EP4068754B1 (en) | 2019-11-29 | 2020-11-23 | Video display method, electronic device and medium |
KR1020227020704A KR102719323B1 (ko) | 2019-11-29 | 2020-11-23 | 비디오 표시 방법, 전자기기 및 매체 |
JP2022524947A JP7393541B2 (ja) | 2019-11-29 | 2020-11-23 | ビデオ表示方法、電子機器及び媒体 |
US17/825,692 US20220284928A1 (en) | 2019-11-29 | 2022-05-26 | Video display method, electronic device and medium |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911204479.X | 2019-11-29 | ||
CN201911204479.XA CN110913141B (zh) | 2019-11-29 | 2019-11-29 | 一种视频显示方法、电子设备以及介质 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/825,692 Continuation US20220284928A1 (en) | 2019-11-29 | 2022-05-26 | Video display method, electronic device and medium |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021104209A1 true WO2021104209A1 (zh) | 2021-06-03 |
Family
ID=69820771
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2020/130920 WO2021104209A1 (zh) | 2019-11-29 | 2020-11-23 | 视频显示方法、电子设备以及介质 |
Country Status (5)
Country | Link |
---|---|
US (1) | US20220284928A1 (zh) |
EP (1) | EP4068754B1 (zh) |
JP (1) | JP7393541B2 (zh) |
CN (1) | CN110913141B (zh) |
WO (1) | WO2021104209A1 (zh) |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110913141B (zh) * | 2019-11-29 | 2021-09-21 | 维沃移动通信有限公司 | 一种视频显示方法、电子设备以及介质 |
CN111638817A (zh) * | 2020-04-27 | 2020-09-08 | 维沃移动通信有限公司 | 目标对象显示方法及电子设备 |
CN112905837A (zh) * | 2021-04-09 | 2021-06-04 | 维沃移动通信(深圳)有限公司 | 视频文件处理方法、装置及电子设备 |
CN113473224B (zh) * | 2021-06-29 | 2023-05-23 | 北京达佳互联信息技术有限公司 | 视频处理方法、装置、电子设备及计算机可读存储介质 |
CN113727140A (zh) * | 2021-08-31 | 2021-11-30 | 维沃移动通信(杭州)有限公司 | 音视频处理方法、装置和电子设备 |
WO2023155143A1 (zh) * | 2022-02-18 | 2023-08-24 | 北京卓越乐享网络科技有限公司 | 视频制作方法、装置、电子设备、存储介质和程序产品 |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20110243474A1 (en) * | 2010-04-06 | 2011-10-06 | Canon Kabushiki Kaisha | Video image processing apparatus and video image processing method |
CN105426035A (zh) * | 2014-09-15 | 2016-03-23 | 三星电子株式会社 | 用于提供信息的方法和电子装置 |
CN106162355A (zh) * | 2015-04-10 | 2016-11-23 | 北京云创视界科技有限公司 | 视频交互方法及终端 |
CN106488002A (zh) * | 2015-08-28 | 2017-03-08 | Lg电子株式会社 | 移动终端 |
CN106658199A (zh) * | 2016-12-28 | 2017-05-10 | 网易传媒科技(北京)有限公司 | 一种视频内容的展示方法及装置 |
CN108124167A (zh) * | 2016-11-30 | 2018-06-05 | 阿里巴巴集团控股有限公司 | 一种播放处理方法、装置和设备 |
CN110121093A (zh) * | 2018-02-06 | 2019-08-13 | 优酷网络技术(北京)有限公司 | 视频中目标对象的搜索方法及装置 |
CN110913141A (zh) * | 2019-11-29 | 2020-03-24 | 维沃移动通信有限公司 | 一种视频显示方法、电子设备以及介质 |
Family Cites Families (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20190215449A1 (en) * | 2008-09-05 | 2019-07-11 | Telefonaktiebolaget Lm Ericsson (Publ) | Mobile terminal and method of performing multi-focusing and photographing image including plurality of objects using the same |
US10349077B2 (en) * | 2011-11-21 | 2019-07-09 | Canon Kabushiki Kaisha | Image coding apparatus, image coding method, image decoding apparatus, image decoding method, and storage medium |
KR102206184B1 (ko) * | 2014-09-12 | 2021-01-22 | 삼성에스디에스 주식회사 | 동영상 내 객체 관련 정보 검색 방법 및 동영상 재생 장치 |
KR102334618B1 (ko) * | 2015-06-03 | 2021-12-03 | 엘지전자 주식회사 | 이동 단말기 및 그 제어 방법 |
KR101705595B1 (ko) * | 2015-07-10 | 2017-02-13 | (주) 프람트 | 데이터 구조화를 통한 직관적인 동영상콘텐츠 재생산 방법 및 그 장치 |
CN107239203A (zh) * | 2016-03-29 | 2017-10-10 | 北京三星通信技术研究有限公司 | 一种图像管理方法和装置 |
JP2017182603A (ja) | 2016-03-31 | 2017-10-05 | 株式会社バンダイナムコエンターテインメント | プログラム及びコンピュータシステム |
CN105933538B (zh) * | 2016-06-15 | 2019-06-07 | 维沃移动通信有限公司 | 一种移动终端的视频查找方法及移动终端 |
JP2018006961A (ja) | 2016-06-30 | 2018-01-11 | カシオ計算機株式会社 | 画像処理装置、動画像選択方法及びプログラム |
CN106384264A (zh) * | 2016-09-22 | 2017-02-08 | 深圳市金立通信设备有限公司 | 一种信息查询方法及终端 |
US20180314698A1 (en) * | 2017-04-27 | 2018-11-01 | GICSOFT, Inc. | Media sharing based on identified physical objects |
Non-Patent Citations (1)
Title |
---|
See also references of EP4068754A4 * |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114390205A (zh) * | 2022-01-29 | 2022-04-22 | Photographing method and apparatus, and electronic device |
CN114390205B (zh) * | 2022-01-29 | 2023-09-15 | Photographing method and apparatus, and electronic device |
CN115309317A (zh) * | 2022-08-08 | 2022-11-08 | Media content acquisition method, apparatus, and device, readable storage medium, and product |
Also Published As
Publication number | Publication date |
---|---|
JP7393541B2 (ja) | 2023-12-06 |
CN110913141A (zh) | 2020-03-24 |
EP4068754A1 (en) | 2022-10-05 |
JP2023502326A (ja) | 2023-01-24 |
CN110913141B (zh) | 2021-09-21 |
EP4068754B1 (en) | 2024-08-21 |
EP4068754A4 (en) | 2023-01-04 |
KR20220098027A (ko) | 2022-07-08 |
US20220284928A1 (en) | 2022-09-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2021104209A1 (zh) | Video display method, electronic device, and medium | |
WO2021036536A1 (zh) | Video shooting method and electronic device | |
CN108089788B (zh) | Thumbnail display control method and mobile terminal | |
WO2021136134A1 (zh) | Video processing method, electronic device, and computer-readable storage medium | |
WO2021078116A1 (zh) | Video processing method and electronic device | |
WO2019137429A1 (zh) | Picture processing method and mobile terminal | |
WO2020042890A1 (zh) | Video processing method, terminal, and computer-readable storage medium | |
WO2019196929A1 (zh) | Video data processing method and mobile terminal | |
WO2021104236A1 (zh) | Method for sharing shooting parameters, and electronic device | |
CN108108114A (zh) | Thumbnail display control method and mobile terminal | |
WO2020020134A1 (zh) | Photographing method and mobile terminal | |
KR20180133743A (ko) | Mobile terminal and control method thereof | |
WO2021036553A1 (zh) | Icon display method and electronic device | |
CN111050070B (zh) | Video shooting method and apparatus, electronic device, and medium | |
WO2021179991A1 (zh) | Audio processing method and electronic device | |
WO2021036623A1 (zh) | Display method and electronic device | |
WO2021036659A1 (zh) | Video recording method and electronic device | |
WO2020233323A1 (zh) | Display control method, terminal device, and computer-readable storage medium | |
CN111010610A (zh) | Video screenshot method and electronic device | |
CN110719527A (zh) | Video processing method, electronic device, and mobile terminal | |
CN111177420B (zh) | Multimedia file display method, electronic device, and medium | |
WO2020238911A1 (zh) | Message sending method and terminal | |
KR20180131908A (ko) | Mobile terminal and operating method thereof | |
WO2021129818A1 (zh) | Video playback method and electronic device | |
WO2020011080A1 (zh) | Display control method and terminal device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the EPO has been informed by WIPO that EP was designated in this application |
Ref document number: 20893151 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2022524947 Country of ref document: JP Kind code of ref document: A |
|
ENP | Entry into the national phase |
Ref document number: 20227020704 Country of ref document: KR Kind code of ref document: A |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
ENP | Entry into the national phase |
Ref document number: 2020893151 Country of ref document: EP Effective date: 20220629 |