
WO2021104209A1 - Video display method, electronic device, and medium - Google Patents

Video display method, electronic device, and medium

Info

Publication number
WO2021104209A1 (PCT/CN2020/130920, CN2020130920W)
Authority
WO
WIPO (PCT)
Prior art keywords
video
input
user
target object
preview interface
Prior art date
Application number
PCT/CN2020/130920
Other languages
English (en)
French (fr)
Inventor
杨其豪
李明津
Original Assignee
维沃移动通信有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 维沃移动通信有限公司 filed Critical 维沃移动通信有限公司
Priority to EP20893151.9A priority Critical patent/EP4068754B1/en
Priority to KR1020227020704A priority patent/KR102719323B1/ko
Priority to JP2022524947A priority patent/JP7393541B2/ja
Publication of WO2021104209A1 publication Critical patent/WO2021104209A1/zh
Priority to US17/825,692 priority patent/US20220284928A1/en

Classifications

    • G PHYSICS
      • G06 COMPUTING; CALCULATING OR COUNTING
        • G06F ELECTRIC DIGITAL DATA PROCESSING
          • G06F1/00 Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
            • G06F1/16 Constructional details or arrangements
              • G06F1/1613 Constructional details or arrangements for portable computers
                • G06F1/1633 Constructional details or arrangements of portable computers not specific to the type of enclosures covered by groups G06F1/1615 - G06F1/1626
                  • G06F1/1684 Constructional details or arrangements related to integrated I/O peripherals not covered by groups G06F1/1635 - G06F1/1675
                    • G06F1/1686 Constructional details or arrangements related to integrated I/O peripherals not covered by groups G06F1/1635 - G06F1/1675 the I/O peripheral being an integrated camera
          • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
            • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
              • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
                • G06F3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
                  • G06F3/04845 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
                • G06F3/0487 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
                  • G06F3/0488 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
                    • G06F3/04883 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
          • G06F2203/00 Indexing scheme relating to G06F3/00 - G06F3/048
            • G06F2203/048 Indexing scheme relating to G06F3/048
              • G06F2203/04808 Several contacts: gestures triggering a specific function, e.g. scrolling, zooming, right-click, when the user establishes several contacts with the surface simultaneously; e.g. using several fingers or a combination of fingers and pen
      • G11 INFORMATION STORAGE
        • G11B INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
          • G11B27/00 Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
            • G11B27/02 Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
              • G11B27/031 Electronic editing of digitised analogue information signals, e.g. audio or video signals
                • G11B27/034 Electronic editing of digitised analogue information signals, e.g. audio or video signals on discs
                • G11B27/036 Insert-editing
            • G11B27/10 Indexing; Addressing; Timing or synchronising; Measuring tape travel
              • G11B27/34 Indicating arrangements
    • H ELECTRICITY
      • H04 ELECTRIC COMMUNICATION TECHNIQUE
        • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
          • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
            • H04N23/60 Control of cameras or camera modules
              • H04N23/61 Control of cameras or camera modules based on recognised objects
              • H04N23/62 Control of parameters via user interfaces
              • H04N23/63 Control of cameras or camera modules by using electronic viewfinders
                • H04N23/631 Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
                  • H04N23/632 Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters

Definitions

  • The embodiments of the present invention relate to the field of video processing, and in particular to a video display method, an electronic device, and a medium.
  • Shooting a video oneself takes up a considerable amount of the user's time. Finding the desired video among the videos saved in the electronic device also requires a lot of searching time; moreover, when there are multiple video folders on the user's electronic device, the user has to open each folder separately to search, which makes the search process tedious and time-consuming.
  • In a first aspect, an embodiment of the present invention provides a video display method applied to an electronic device, including:
  • displaying a shooting preview interface of a camera;
  • in a case where the shooting preview interface includes a target object, displaying, in the shooting preview interface, N video identifiers associated with the target object;
  • where each video identifier indicates one video and N is a positive integer.
  • In a second aspect, an embodiment of the present invention further provides an electronic device, including:
  • a first display module, configured to display a shooting preview interface of a camera;
  • a second display module, configured to display, in the shooting preview interface, N video identifiers associated with a target object in a case where the shooting preview interface includes the target object; where each video identifier indicates one video and N is a positive integer.
  • In a third aspect, an embodiment of the present invention provides an electronic device, including a processor, a memory, and a computer program stored on the memory and executable on the processor, where the computer program, when executed by the processor, implements the steps of the video display method according to the first aspect.
  • In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium storing a computer program, where the computer program, when executed by a processor, implements the steps of the video display method according to the first aspect.
  • With the video display method, electronic device, and computer storage medium of the embodiments of the present invention, when the shooting preview interface of the camera is displayed, the video identifiers associated with the target object contained in the shooting preview interface can be displayed, so that the user can subsequently obtain the videos indicated by those video identifiers directly. Thus, the embodiments of the present invention require neither video shooting by the user nor manual searching for the desired video; the video identifiers of the desired videos are obtained directly through the target object in the camera's shooting preview interface, which is convenient to operate and enables quick video lookup.
  • FIG. 1 is a schematic flowchart of a video display method provided by an embodiment of the present invention;
  • FIG. 2 is a schematic diagram of a sliding input provided by an embodiment of the present invention;
  • FIG. 3 is a schematic diagram of another sliding input provided by an embodiment of the present invention;
  • FIG. 4 is a schematic diagram of a third input provided by an embodiment of the present invention;
  • FIG. 5 is a schematic diagram of a fourth input provided by an embodiment of the present invention;
  • FIG. 6 is a schematic diagram of another fourth input provided by an embodiment of the present invention;
  • FIG. 7 is a schematic diagram of a fifth input provided by an embodiment of the present invention;
  • FIG. 8 is a schematic diagram of a sixth input provided by an embodiment of the present invention;
  • FIG. 9 is a schematic diagram of a seventh input provided by an embodiment of the present invention;
  • FIG. 10 is a schematic structural diagram of an electronic device provided by an embodiment of the present invention;
  • FIG. 11 is a schematic diagram of the hardware structure of an electronic device provided by an embodiment of the present invention.
  • FIG. 1 shows a schematic flowchart of a video display method provided by an embodiment of the present invention. The method is applied to an electronic device and includes:
  • Step 101: displaying a shooting preview interface of a camera;
  • Step 102: in a case where the shooting preview interface includes a target object, displaying, in the shooting preview interface, N video identifiers associated with the target object; where each video identifier indicates one video, and N is a positive integer.
  • the target object here may include one object or multiple objects.
  • In this embodiment, when the shooting preview interface of the camera is displayed, the video identifiers associated with the target object contained in the shooting preview interface can be displayed, so that the user can subsequently obtain the videos indicated by those video identifiers directly. The user neither needs to shoot a video nor manually search for the desired video; the video identifiers of the desired videos are obtained directly through the target object in the camera's shooting preview interface, which is convenient to operate and enables quick video lookup.
  • After step 101, the method may further include:
  • receiving a user's first input on at least one object in the shooting preview interface;
  • in response to the first input, taking the at least one object selected by the first input as the target object.
  • That is, the target object in the shooting preview interface is determined according to the user's selection. This approach improves the user's autonomy and avoids displaying too many video identifiers, which would interfere with the user's video selection.
  • Of course, in other embodiments, the electronic device may also automatically recognize an object that meets a preset condition in the shooting preview interface as the target object, which is not limited in the present invention.
  • In a specific implementation, the above receiving of a user's first input on at least one object in the shooting preview interface includes: receiving a user's sliding input on the shooting preview interface;
  • and the above taking of the at least one object selected by the first input as the target object includes:
  • taking at least one object circled by the sliding track of the sliding input as the target object.
  • That is, in this implementation, the corresponding target object is selected through the trajectory of the user's operation gesture.
  • The sliding input here may be: two fingers sliding down together, then separating, then coming back together.
  • The figure formed by the sliding track may be a rhombus, a circle, or a figure approximating a rhombus or a circle.
  • For example, the user draws a rhombus without the finger leaving the screen, and the object enclosed in the rhombus is taken as the target object, as shown in Figure 2.
  • Figure 2 shows a schematic diagram of a sliding input provided by an embodiment of the present invention.
  • Object 110 in Figure 2 is the selected target object.
  • FIG. 3 shows a schematic diagram of another sliding input provided by an embodiment of the present invention; objects 110 and 120 in FIG. 3 are the selected target objects.
  • The above sliding input may also be: the user slides continuously with two fingers without leaving the screen, and the sliding track forms multiple consecutive figures, each of which is a rhombus, a circle, or a figure approximating one of these; such a sliding input selects multiple target objects,
  • and the target objects include the objects circled in each figure. For example, if the finger draws two circles in succession without leaving the screen, and the two circles respectively contain a sofa and a table, then the target objects include the sofa and the table, and the video identifiers associated with the sofa and the table are then displayed at the same time.
  • In a specific application, the user may also use other sliding inputs on the shooting preview interface to select the target object.
  • For example, the sliding track may be a dot or a check mark, and an object within a predetermined distance above the sliding track is taken as the selected target object; or the user may tap a certain area to focus, and the object in the focused area is taken as the target object.
  • After step 101 and before step 102, the method may further include: acquiring N video identifiers associated with the target object.
  • Specifically, the above acquiring of the N video identifiers associated with the target object may include: acquiring, from at least one video stored on a server or in a memory of the electronic device, the N video identifiers associated with the target object.
  • In this embodiment, the videos are obtained from a server or from the memory of the electronic device. If the memory of the electronic device is chosen as the source, the user can obtain and edit videos without a network connection; if a server is chosen, the range of obtainable videos is enlarged and a very rich set of video sources can be obtained from the Internet.
  • The above acquiring of the N video identifiers associated with the target object may further include: receiving a user's second input; in response to the second input, adjusting the shooting field of view of the camera to a target shooting field of view; determining, according to a preset correspondence between shooting fields of view and time intervals, a target time interval corresponding to the target shooting field of view; and acquiring video identifiers of N videos that are associated with the target object and whose shooting times fall within the target time interval.
  • In this way, the shooting time of the acquired video identifiers can be restricted according to the shooting field of view used when capturing the target object.
  • For example, the time interval corresponding to the standard field of view may be within the last 3 days, and the time interval corresponding to the wide angle may be earlier than 3 days ago.
  • In step 102, the video identifiers associated with each target object may be displayed within a preset range around that target object in the shooting preview interface.
  • The video identifiers may also be displayed in the form of a list, or in other manners; the present invention does not limit the display manner of the video identifiers.
  • The above step 102 may include:
  • in a case where the shooting preview interface includes a first feature part of the target object, displaying video identifiers of N videos associated with the first feature part;
  • in a case where the shooting preview interface is updated to a second feature part of the target object, updating the video identifiers of the N videos to video identifiers of T videos associated with the second feature part;
  • where T is a positive integer,
  • and different feature parts of the target object are associated with different video identifiers.
  • For example, if the target object is a sofa, the feature parts of the sofa may be the armrests, cushions, pillows, and so on; the sofa armrests may be associated with family-related videos, the pillows with party-related videos, and so on. It can be seen that this embodiment can obtain videos associated with the various feature parts of the target object, thereby improving the richness of the obtained videos.
  • The process of identifying the first feature part and the second feature part of the target object in the shooting preview interface may be: scanning each feature part of the target object in sequence. Specifically, the target object may be scanned in a predetermined order (for example, from top to bottom, or from left to right), and the feature part within the scanned portion is identified during the scanning process.
  • When the shooting preview interface is updated to the second feature part, the video identifiers associated with the first feature part may no longer be displayed, and only the video identifiers associated with the second feature part are displayed.
  • Alternatively, the video identifiers associated with the second feature part may be added, that is, the video identifiers associated with the first feature part and the second feature part may be displayed at the same time.
  • In addition, in a case where multiple feature parts of the target object have been scanned, that is, when the shooting preview interface includes multiple feature parts of the target object, at least two of the scanned feature parts may also be combined, and the video identifiers associated with each combination are obtained. For example, after three feature parts are scanned, the three feature parts can be combined in pairs and all three can additionally be taken as one combination, and the video identifiers associated with each of the resulting four combinations can then be obtained, as sketched below. In this way, the richness of the obtained video identifiers can be further increased.
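The following hypothetical Kotlin sketch enumerates those combinations for the three-part example (the three pairs plus the triple, four combinations in total); the helper names and the lookup are invented for illustration and are not part of the disclosed implementation.

```kotlin
// Hypothetical sketch: combine the scanned feature parts (size >= 2) and look
// up the video identifiers associated with each combination.

fun combinations(parts: List<String>): List<List<String>> {
    val result = mutableListOf<List<String>>()
    val n = parts.size
    for (mask in 1 until (1 shl n)) {
        val combo = parts.filterIndexed { i, _ -> ((mask shr i) and 1) == 1 }
        if (combo.size >= 2) result += combo        // pairs, triples, ...
    }
    return result
}

// Assumed lookup from a combination of feature parts to associated identifiers.
fun identifiersFor(combo: List<String>): List<String> =
    listOf("video-for-" + combo.joinToString("+"))

fun main() {
    val scanned = listOf("armrest", "cushion", "pillow")
    val combos = combinations(scanned)
    println(combos.size)                         // 4: three pairs + the triple
    combos.forEach { println(identifiersFor(it)) }
}
```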
  • After step 102, the method may further include:
  • receiving a user's third input on a first video identifier among the N video identifiers; in response to the third input, playing the first video indicated by the first video identifier; receiving a user's fourth input; and in response to the fourth input, editing the first video and outputting the edited first video.
  • That is, this embodiment first plays the first video and then edits the first video during playback.
  • During playback of the first video, in response to a user's trigger input on a previous or next button, the previous video or the next video of the first video may be played.
  • If the user selects multiple first video identifiers, the multiple first videos corresponding to those identifiers are played at the same time.
  • For example, the screen may be divided into upper and lower areas, each displaying one first video, or into left and right areas, each displaying one first video.
  • In addition, the edited first video may be saved in response to a received video saving input.
  • The video saving input here may be an operation of long-pressing the play button of the video; of course, other operations may also be set as the video saving input, which is not limited in the present invention.
  • FIG. 4 shows a schematic diagram of a third input provided by an embodiment of the present invention.
  • The above third input on the first video identifier may be: tapping the first video identifier and then, without the finger leaving the screen, spreading two fingers apart.
  • The third input may also be an operation of double-tapping the first video identifier, or other input manners may be set as the third input.
  • Receiving the user's fourth input may specifically be: receiving the user's fourth input on the playing interface of the first video.
  • FIG. 5 shows a schematic diagram of a fourth input provided by an embodiment of the present invention. After the user's fourth input is received, the method may further include:
  • in response to the fourth input, displaying, on the playing interface of the first video, the first information entered by the fourth input.
  • That is, first information customized by the user, such as text or patterns, can be inserted into the playing interface of the first video.
  • In this way, the user can edit the acquired first video, for example by adding text, according to his or her own needs so as to obtain the desired display effect; the user thus has great flexibility in editing the video.
  • In order to facilitate editing and prevent the user from missing displayed content of the first video during editing, playback of the first video may be paused in response to the fourth input, and playback may continue after editing is completed and the user's editing-completed input is received.
  • The above fourth input may be an operation in which the user touches the screen with a single finger and moves along an arc exceeding a preset length.
  • Of course, the fourth input may also be set to other types of operations, which is not limited in the present invention.
  • The above receiving of the user's fourth input, editing the first video in response to the fourth input, and outputting the edited first video may also be: receiving a user's fourth input on target first information that has been inserted in the first video, and deleting the target first information in response to the fourth input.
  • In this case, the fourth input is a delete input.
  • That is, when the inserted first information needs to be adjusted, it can be deleted. This embodiment enables the user to delete the inserted first information, which improves the convenience of editing information on the first video.
  • The fourth input in this embodiment may be a gesture that slides a line from one side of the screen (for example, the left side) to the other side.
  • The first information that the sliding track of the user's gesture passes through is the target first information.
  • FIG. 6 shows a schematic diagram of another fourth input provided by an embodiment of the present invention.
  • The first information located on the sliding track of the user's gesture can also be set as the target first information.
  • Other input manners can also be used as the fourth input, which is not limited in the present invention.
  • The above fourth input on the first information may be performed before the saving of the edited first video is completed; a minimal illustrative sketch of inserting and deleting such information follows.
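The insertion and deletion of such user-defined first information can be pictured with the small hypothetical Kotlin sketch below, where each piece of information is stored with the playback position at which it was added; the class and field names are not from the patent.

```kotlin
// Hypothetical sketch: insert user-defined "first information" (text, patterns)
// on the playing interface of the first video, and delete it again on request.

data class FirstInformation(val content: String, val atSecond: Int)

class FirstVideoEditor {
    private val inserted = mutableListOf<FirstInformation>()

    // Fourth input (insert): attach information at the paused playback position.
    fun insert(content: String, atSecond: Int) {
        inserted += FirstInformation(content, atSecond)
    }

    // Fourth input (delete): remove the target first information, e.g. the one
    // crossed by a gesture sliding from one side of the screen to the other.
    fun delete(target: FirstInformation) {
        inserted.remove(target)
    }

    fun current(): List<FirstInformation> = inserted.toList()
}

fun main() {
    val editor = FirstVideoEditor()
    editor.insert("Happy birthday!", atSecond = 12)
    editor.insert("party pattern", atSecond = 30)
    println(editor.current())                          // two pieces of first information
    editor.delete(FirstInformation("party pattern", 30))
    println(editor.current())                          // only the text at second 12 remains
}
```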
  • Receiving the user's fourth input, editing the first video, and outputting the edited first video may further include: in response to the fourth input, recording a second video of the target object; receiving a user's fifth input on the second video; and in response to the fifth input, inserting the second video at a first position in the first video;
  • where the first position is the position entered by the fifth input on the playback progress bar of the first video.
  • That is, the associated video identifiers are obtained through the target object in the shooting preview interface, the videos corresponding to those identifiers serve as candidate videos to be processed, and the user selects at least one video from the candidates for editing.
  • On this basis, this embodiment additionally records a second video of the target object and inserts the additionally recorded second video into the first video.
  • In this way, the video content required by the user can be added anywhere in the first video, which enriches the user's video editing modes, facilitates custom editing of the video, and enhances the flexibility of video editing.
  • One or more second videos may be recorded.
  • FIG. 7 shows a schematic diagram of a fifth input provided by an embodiment of the present invention.
  • The above fifth input may be: sliding down the screen with two fingers, whereupon the second video is inserted at the position on the playback progress bar of the first video corresponding to the fifth input.
  • The indicator 130 in FIG. 7 is the indication mark of the position at which the second video is inserted. In other embodiments, a single-finger sliding operation may also be used as the fifth input, or triggering a predetermined video-insertion menu item may be used as the fifth input.
  • The present invention does not limit the specific operation manner of the fifth input.
  • At the first position on the playback progress bar of the first video, an indication mark of the second video is displayed, where the indication mark is used to indicate the insertion position of the second video.
  • That is, this embodiment marks the insertion position of the second video on the playback progress bar of the first video, so that the user can intuitively see where the second video has been inserted, which makes it convenient for the user to adjust the previously inserted second video later and improves the user's convenience during video editing.
  • The indication mark here may be represented by a bubble containing an abbreviated representation of the video (for example, a thumbnail); of course, other marking manners can also be used.
  • The method may further include:
  • receiving a user's sixth input on the indication mark; in response to the sixth input, moving the indication mark from the first position to a second position, and inserting the second video at the second position in the first video.
  • The first position and the second position here refer to positions on the playback progress bar of the first video, that is, playback time points of the video.
  • The first position is different from the second position.
  • When inserting the second video, the user may not control the insertion position accurately, resulting in a wrong insertion position; in that case, the insertion position of the inserted second video needs to be adjusted.
  • By moving the indication mark of the second video along the playback progress bar of the first video, the insertion position of the second video in the first video can be moved after the inserted video has been selected. In this process, the user can intuitively see, from the position of the indication mark, how the current insertion position of the second video moves, which facilitates the adjustment.
  • FIG. 8 shows a schematic diagram of a sixth input provided by an embodiment of the present invention.
  • The above sixth input may be: selecting the indication mark of the second video with one or two fingers and moving it left or right along the playback progress bar of the first video to adjust the insertion position of the second video frame by frame.
  • That is, the user can move the insertion position of the second video by pressing and holding the indication mark with one or two fingers and sliding it left or right without leaving the screen.
  • The indicator 130 in FIG. 8 is the indication mark of the adjusted second video. For example, if the user swipes to the left, the second video is inserted at the frame preceding the current frame.
  • Other manners of adjusting the insertion position may also be used, and the present invention does not limit the specific operation content of the sixth input; a minimal sketch of inserting the second video on the progress bar and moving its indication mark is given below.
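As a rough model of the fifth, sixth, and seventh inputs, the hypothetical Kotlin sketch below represents the insertion point of the second video as an indication mark holding a position on the first video's progress bar, which can then be moved (for example, one frame at a time) or deleted; all names and the assumed frame duration are illustrative only.

```kotlin
// Hypothetical sketch: insert the second video at a position on the first
// video's playback progress bar and adjust that position via its indication mark.

data class IndicationMark(val secondVideoId: String, var positionSeconds: Double)

class InsertionEditor(private val firstVideoLengthSeconds: Double) {
    val marks = mutableListOf<IndicationMark>()

    // Fifth input: insert the second video at the first position.
    fun insertSecondVideo(id: String, atSeconds: Double): IndicationMark {
        val mark = IndicationMark(id, atSeconds.coerceIn(0.0, firstVideoLengthSeconds))
        marks += mark
        return mark
    }

    // Sixth input: move the indication mark from the first position to a second
    // position; a left swipe could step back one frame (~1/30 s assumed here).
    fun moveMark(mark: IndicationMark, deltaSeconds: Double) {
        mark.positionSeconds = (mark.positionSeconds + deltaSeconds)
            .coerceIn(0.0, firstVideoLengthSeconds)
    }

    // Seventh input: delete the inserted second video and its indication mark.
    fun deleteMark(mark: IndicationMark) {
        marks.remove(mark)
    }
}

fun main() {
    val editor = InsertionEditor(firstVideoLengthSeconds = 60.0)
    val mark = editor.insertSecondVideo("second-video-130", atSeconds = 20.0)
    editor.moveMark(mark, deltaSeconds = -1.0 / 30.0)   // one frame to the left
    println(mark.positionSeconds)                        // ~19.9667
    editor.deleteMark(mark)
    println(editor.marks.isEmpty())                      // true
}
```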
  • The method may further include:
  • receiving a user's seventh input on the indication mark; in response to the seventh input, deleting the second video that has been inserted in the first video, and removing the display of the indication mark.
  • That is, the inserted second video can be deleted, which further improves the user's means of editing the additional video during the editing process, allows the user to manage the insertion of the additional video more conveniently, and makes editing more flexible.
  • FIG. 9 shows a schematic diagram of a seventh input provided by an embodiment of the present invention.
  • The above seventh input may be an operation of selecting the indication mark of the second video with a single finger and dragging it towards the lower edge of the screen without the finger leaving the screen.
  • Alternatively, a corresponding close mark can be set for the indication mark of the second video,
  • and the seventh input is then an operation of selecting, with a single finger, the close mark corresponding to the indication mark of the second video.
  • The indication mark 140 in FIG. 9 is the indication mark of the deleted second video.
  • Of course, the seventh input may also be set to other types of operations.
  • After step 102, the method may further include:
  • receiving a user's eighth input on M target video identifiers among the N video identifiers; in response to the eighth input, performing video splicing on the M target videos indicated by the M target video identifiers, and outputting a third video; where M is an integer greater than 1 and M ≤ N. That is, multiple target videos can be spliced directly.
  • In this way, the user does not need to record a video each time, but can obtain the desired video by splicing multiple target videos, which improves the flexibility of video editing and the richness of video editing operations.
  • The eighth input here may be an input operation of dragging the M target video identifiers together, and the subsequent video splicing refers to joining the M target videos together in the chronological order in which their recording was completed.
  • For example, the user may select two video identifiers and drag them together.
  • The two video identifiers correspond to target videos A and B respectively.
  • If the recording of A was completed earlier than that of B, the end frame of A is joined to the start frame of B,
  • so that A and B are spliced together in the order in which their recording was completed; a minimal splicing sketch follows.
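A minimal hypothetical Kotlin sketch of the eighth-input splicing: the M selected target videos are ordered by the time their recording finished and concatenated into a third video; the types and the string-based "frames" are purely illustrative.

```kotlin
// Hypothetical sketch: splice M target videos into a third video in the
// chronological order in which their recording was completed.

data class TargetVideo(val name: String, val recordingCompletedAtMillis: Long, val frames: List<String>)

fun splice(selected: List<TargetVideo>): List<String> {
    require(selected.size > 1) { "M must be greater than 1" }
    return selected
        .sortedBy { it.recordingCompletedAtMillis }   // A before B if A finished recording first
        .flatMap { it.frames }                        // end frame of A is followed by start frame of B
}

fun main() {
    val a = TargetVideo("A", recordingCompletedAtMillis = 1_000, frames = listOf("A1", "A2"))
    val b = TargetVideo("B", recordingCompletedAtMillis = 2_000, frames = listOf("B1", "B2"))
    println(splice(listOf(b, a)))   // [A1, A2, B1, B2]
}
```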
  • FIG. 10 shows a schematic structural diagram of an electronic device provided by an embodiment of the present invention.
  • The electronic device includes:
  • a first display module 201, configured to display a shooting preview interface of a camera;
  • a second display module 202, configured to display, in the shooting preview interface, N video identifiers associated with a target object in a case where the shooting preview interface includes the target object; where each video identifier indicates one video and N is a positive integer.
  • When the shooting preview interface of the camera is displayed, the video identifiers associated with the target object contained in the shooting preview interface can thus be displayed, so that the user can subsequently obtain the videos indicated by those video identifiers directly. The user neither needs to shoot a video nor manually search for the desired video; the video identifiers of the desired videos are obtained directly through the target object in the camera's shooting preview interface, which is convenient to operate and enables quick video lookup.
  • the electronic device may further include:
  • an object selection module, configured to receive a user's first input on at least one object in the shooting preview interface and, in response to the first input, take the at least one object selected by the first input as the target object.
  • That is, the target object in the shooting preview interface is determined according to the user's selection.
  • This approach improves the user's autonomy and avoids displaying too many video identifiers, which would interfere with the user's video selection.
  • Of course, in other embodiments, the electronic device may also automatically recognize an object that meets a preset condition in the shooting preview interface as the target object, which is not limited in the present invention.
  • The above object selection module is specifically configured to: receive a user's sliding input on the shooting preview interface, and take at least one object circled by the sliding track of the sliding input as the target object.
  • That is, the corresponding target object is selected through the trajectory of the user's operation gesture.
  • The sliding input here may be: two fingers sliding down together, then separating, then coming back together.
  • The figure formed by the sliding track may be a rhombus, a circle, or a figure approximating a rhombus or a circle.
  • The above sliding input may also be: the user slides continuously with two fingers without leaving the screen, and the sliding track forms multiple consecutive figures, each of which is a rhombus, a circle, or a figure approximating one of these; such a sliding input selects multiple target objects,
  • and the target objects include the objects circled in each figure.
  • Other sliding inputs may also be used.
  • A video identifier acquisition module is used to acquire the N video identifiers associated with the target object.
  • The above video identifier acquisition module may be specifically configured to acquire, from at least one video stored on a server or in a memory of the electronic device, the N video identifiers associated with the target object.
  • In this embodiment, the videos are obtained from a server or from the memory of the electronic device. If the memory of the electronic device is chosen as the source, the user can obtain and edit videos without a network connection; if a server is chosen, the range of obtainable videos is enlarged and a very rich set of video sources can be obtained from the Internet.
  • The above video identifier acquisition module may also be configured to: receive a user's second input; in response to the second input, adjust the shooting field of view of the camera to a target shooting field of view; determine, according to a preset correspondence between shooting fields of view and time intervals, a target time interval corresponding to the target shooting field of view; and acquire video identifiers of N videos that are associated with the target object and whose shooting times fall within the target time interval.
  • The second display module 202 may be specifically configured to display, within a preset range around each target object in the shooting preview interface, the video identifiers associated with that target object, or to display the video identifiers in the form of a list.
  • The present invention does not limit the display manner of the video identifiers.
  • The above second display module 202 may specifically include:
  • a first identifier display unit, configured to display video identifiers of N videos associated with a first feature part of the target object in a case where the shooting preview interface includes the first feature part;
  • a second identifier display unit, configured to update the video identifiers of the N videos to video identifiers of T videos associated with a second feature part of the target object in a case where the shooting preview interface is updated to the second feature part; where T is a positive integer, and different feature parts of the target object are associated with different video identifiers.
  • For example, if the target object is a sofa,
  • the feature parts of the sofa may be the sofa armrests, cushions, pillows, and so on. Since this embodiment can obtain the videos corresponding to each feature part of the target object, the richness of the obtained videos can be improved.
  • The above second identifier display unit is further configured to identify the first feature part and the second feature part of the target object in the shooting preview interface.
  • the electronic device further includes:
  • a video editing module 204, configured to receive a user's fourth input and, in response to the fourth input, edit the first video and output the edited first video.
  • In this way, the relevant videos of the target object can be obtained and then edited, which simplifies the process of editing videos of a specific target object.
  • The electronic device may further include a video saving module, configured to save the edited first video in response to a received video saving input.
  • The video saving input here may be an operation of long-pressing the play button of the video; of course, other operations can also be set as the video saving input.
  • A video playing module 203 may also be configured to: during playback of the first video, in response to a user's trigger input on a previous or next button, play the previous video or the next video of the first video.
  • The video editing module 204 may be configured to: in response to the received fourth input, pause the video being played; and receive input information to be inserted and insert it into the video frame displayed when playback is paused.
  • If the user selects multiple first video identifiers, the multiple first videos corresponding to those identifiers are played at the same time.
  • For example, the screen may be divided into upper and lower areas, each displaying one first video, or into left and right areas, each displaying one first video.
  • The above video editing module 204 is specifically configured to: receive a user's fourth input on the playing interface of the first video and, in response to the fourth input, display on the playing interface of the first video the first information entered by the fourth input.
  • That is, first information customized by the user, such as text or patterns,
  • can be inserted into the playing interface of the first video.
  • In this way, the user can edit the acquired first video, for example by adding text, according to his or her own needs so as to obtain the desired display effect; the user thus has great flexibility in editing the video.
  • The above video editing module 204 may be further configured to: receive a user's fourth input on target first information that has been inserted in the first video, and delete the target first information in response to the fourth input.
  • In this case, the fourth input is a delete input. This embodiment enables the user to delete the inserted first information, which improves the convenience of editing information on the first video.
  • The video editing module 204 may also be configured to: in response to the fourth input, record a second video of the target object in the first video; receive a user's fifth input on the second video; and in response to the fifth input, insert the second video at a first position in the first video, where the first position is the position entered by the fifth input on the playback progress bar of the first video.
  • That is, a second video of the target object is additionally recorded, and the additionally recorded second video is inserted into the first video.
  • In this way, the video content required by the user can be added anywhere in the first video, which enriches the user's video editing modes, facilitates custom editing of the video, and enhances the flexibility of video editing.
  • One or more second videos may be recorded.
  • The video editing module 204 may also be configured to: display an indication mark of the second video at the first position on the playback progress bar of the first video, where the indication mark is used to indicate the insertion position of the second video.
  • That is, the insertion position of the second video is marked, so that the user can intuitively see where the second video has been inserted, which makes it convenient for the user to adjust the previously inserted second video later and improves the user's convenience during video editing.
  • The video editing module 204 may also be configured to: receive a user's sixth input on the indication mark; in response to the sixth input, move the indication mark from the first position to a second position, and insert the second video at the second position in the first video.
  • That is, the insertion position of the second video in the first video can be moved after the inserted video has been selected; in this process, the user can intuitively see, from the position of the indication mark, how the current insertion position of the second video moves, which facilitates the adjustment.
  • The video editing module 204 may also be configured to: receive a user's seventh input on the indication mark; and in response to the seventh input, delete the second video that has been inserted in the first video and remove the display of the indication mark.
  • In this way, the user's means of editing the additional video during the editing process are further improved, so that the user can manage the insertion of the additional video more conveniently, and editing is more flexible.
  • the electronic device further includes:
  • a video splicing module, configured to receive a user's eighth input on M target video identifiers among the N video identifiers; in response to the eighth input, perform video splicing on the M target videos indicated by the M target video identifiers, and output a third video; where M is an integer greater than 1 and M ≤ N.
  • That is, multiple target videos can be spliced directly.
  • In this way, the user does not need to record a video each time, but can obtain the desired video by splicing multiple target videos, which improves the flexibility of video editing and the richness of video editing operations.
  • The electronic device provided in the embodiments of the present invention can implement each method step implemented by the electronic device in any of the foregoing method embodiments; to avoid repetition, details are not described here again.
  • FIG. 11 is a schematic diagram of the hardware structure of an electronic device that implements various embodiments of the present invention.
  • The electronic device 300 includes, but is not limited to: a radio frequency unit 301, a network module 302, an audio output unit 303, an input unit 304, a sensor 305, a display unit 306, a user input unit 307, an interface unit 308, a memory 309, a processor 310, a power supply 311, a shooting component 312, and other components.
  • Electronic devices include, but are not limited to, mobile phones, tablet computers, notebook computers, palmtop computers, vehicle-mounted terminals, wearable devices, and pedometers.
  • The shooting component 312 may include a camera and the camera's related components.
  • The processor 310 is configured to display a shooting preview interface of the camera and, in a case where the shooting preview interface includes a target object, display, in the shooting preview interface, N video identifiers associated with the target object; where each video identifier indicates one video and N is a positive integer.
  • When the shooting preview interface of the camera is displayed, the video identifiers associated with the target object contained in the shooting preview interface can thus be displayed, so that the user can subsequently obtain the videos indicated by those video identifiers directly. The user neither needs to shoot a video nor manually search for the desired video; the video identifiers of the desired videos are obtained directly through the target object in the camera's shooting preview interface, which is convenient to operate and enables quick video lookup.
  • The radio frequency unit 301 can be used for receiving and sending signals during the process of sending and receiving information or during a call. Specifically, downlink data from a base station is received and handed to the processor 310 for processing, and uplink data is sent to the base station.
  • The radio frequency unit 301 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like.
  • The radio frequency unit 301 can also communicate with a network and other devices through a wireless communication system.
  • The electronic device provides the user with wireless broadband Internet access through the network module 302, for example helping the user to send and receive e-mails, browse web pages, and access streaming media.
  • the audio output unit 303 may convert the audio data received by the radio frequency unit 301 or the network module 302 or stored in the memory 309 into an audio signal and output it as sound. Moreover, the audio output unit 303 may also provide audio output related to a specific function performed by the electronic device 300 (for example, call signal reception sound, message reception sound, etc.).
  • the audio output unit 303 includes a speaker, a buzzer, a receiver, and the like.
  • the input unit 304 is used to receive audio or video signals.
  • the input unit 304 may include a graphics processing unit (GPU) 3041 and a microphone 3042.
  • The graphics processor 3041 processes image data of still pictures or videos obtained by an image capture device (such as a camera) in a video capture mode or an image capture mode.
  • The processed image frames may be displayed on the display unit 306.
  • The image frames processed by the graphics processor 3041 may be stored in the memory 309 (or another storage medium) or sent via the radio frequency unit 301 or the network module 302.
  • the microphone 3042 can receive sound, and can process such sound into audio data.
  • the processed audio data can be converted into a format that can be sent to a mobile communication base station via the radio frequency unit 301 for output in the case of a telephone call mode.
  • the electronic device 300 also includes at least one sensor 305, such as a light sensor, a motion sensor, and other sensors.
  • the light sensor includes an ambient light sensor and a proximity sensor.
  • the ambient light sensor can adjust the brightness of the display panel 3061 according to the brightness of the ambient light.
  • The proximity sensor can turn off the display panel 3061 and/or the backlight when the electronic device 300 is moved to the ear.
  • The accelerometer sensor can detect the magnitude of acceleration in various directions (usually three axes), can detect the magnitude and direction of gravity when stationary, and can be used to identify the posture of the electronic device (such as switching between landscape and portrait screens, related games, and magnetometer attitude calibration) and for vibration-recognition-related functions (such as a pedometer or tapping); the sensor 305 may also include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, and the like, which will not be repeated here.
  • the user input unit 307 may be used to receive inputted numeric or character information, and generate key signal input related to user settings and function control of the electronic device.
  • the user input unit 307 includes a touch panel 3071 and other input devices 3072.
  • The touch panel 3071, also called a touch screen, can collect the user's touch operations on or near it (for example, operations performed by the user on or near the touch panel 3071 with a finger, a stylus, or any other suitable object or accessory).
  • the touch panel 3071 may include two parts: a touch detection device and a touch controller.
  • The touch detection device detects the user's touch position and the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, sends them to the processor 310, and receives and executes the commands sent by the processor 310.
  • the touch panel 3071 can be implemented in multiple types such as resistive, capacitive, infrared, and surface acoustic wave.
  • the user input unit 307 may also include other input devices 3072.
  • other input devices 3072 may include, but are not limited to, a physical keyboard, function keys (such as volume control buttons, switch buttons, etc.), trackball, mouse, and joystick, which will not be repeated here.
  • the touch panel 3071 can be overlaid on the display panel 3061.
  • When the touch panel 3071 detects a touch operation on or near it, it transmits the operation to the processor 310 to determine the type of the touch event; the processor 310 then provides a corresponding visual output on the display panel 3061 according to the type of the touch event.
  • The touch panel 3071 and the display panel 3061 may be used as two independent components to implement the input and output functions of the electronic device, but in some embodiments the touch panel 3071 and the display panel 3061 can be integrated
  • to implement the input and output functions of the electronic device, which is not specifically limited here.
  • the interface unit 308 is an interface for connecting an external device and the electronic device 300.
  • the external device may include a wired or wireless headset port, an external power source (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device with an identification module, audio input/output (I/O) port, video I/O port, headphone port, etc.
  • The interface unit 308 can be used to receive input (for example, data information or power) from an external device and transmit the received input to one or more elements in the electronic device 300, or can be used to transfer data between the electronic device 300 and the external device.
  • the memory 309 can be used to store software programs and various data.
  • the memory 309 may mainly include a program storage area and a data storage area.
  • The program storage area may store an operating system, an application program required by at least one function (such as a sound playback function or an image playback function), and the like; the data storage area may store data created according to the use of the mobile phone (such as audio data or a phone book), and the like.
  • the memory 309 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or other volatile solid-state storage devices.
  • The processor 310 is the control center of the electronic device. It uses various interfaces and lines to connect the various parts of the entire electronic device, and, by running or executing the software programs and/or modules stored in the memory 309 and calling the data stored in the memory 309, performs the various functions of the electronic device and processes data, so as to monitor the electronic device as a whole.
  • The processor 310 may include one or more processing units; preferably, the processor 310 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interface, application programs, and the like, and the modem
  • processor mainly handles wireless communication. It can be understood that the above modem processor may also not be integrated into the processor 310.
  • The electronic device 300 may also include a power supply 311 (such as a battery) for supplying power to the various components.
  • Preferably, the power supply 311 may be logically connected to the processor 310 through a power management system, so as to implement functions such as managing charging, discharging, and power consumption through the power management system.
  • In addition, the electronic device 300 includes some functional modules that are not shown, which will not be repeated here.
  • An embodiment of the present invention also provides an electronic device, including a processor 310, a memory 309, and a computer program stored on the memory 309 and executable on the processor 310. When the computer program is executed by the processor 310, the various processes of the foregoing video display method embodiments are implemented and the same technical effects can be achieved; to avoid repetition, details are not repeated here.
  • An embodiment of the present invention also provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the steps of the foregoing video display method are implemented.
  • The computer-readable storage medium may include non-transitory memory, such as a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disc.
  • each block in the flowchart and/or block diagram and the combination of each block in the flowchart and/or block diagram can be implemented by a program or instruction.
  • These programs or instructions can be provided to a processor of a general-purpose computer, a special-purpose computer, or another programmable data processing device to produce a machine, such that, when executed by the processor of the computer or other programmable data processing device, these programs or instructions enable the implementation of the functions/actions specified in one or more blocks of the flowchart and/or block diagram.
  • Such a processor can be, but is not limited to, a general-purpose processor, a dedicated processor, a special application processor, or a field-programmable logic circuit. It can also be understood that each block in the block diagram and/or flowchart, and combinations of blocks in the block diagram and/or flowchart, can also be implemented by dedicated hardware that performs the specified functions or actions, or by a combination of dedicated hardware and computer instructions.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Signal Processing (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • User Interface Of Digital Computer (AREA)
  • Television Signal Processing For Recording (AREA)
  • Studio Devices (AREA)

Abstract

Embodiments of the present invention disclose a video display method, an electronic device, and a medium. The video display method is applied to an electronic device and includes: displaying a shooting preview interface of a camera; in a case where the shooting preview interface includes a target object, displaying, in the shooting preview interface, N video identifiers associated with the target object; where each video identifier indicates one video, and N is a positive integer.

Description

Video display method, electronic device, and medium
Cross-reference to related applications
This application claims priority to Chinese patent application No. 201911204479.X, entitled "Video display method, electronic device, and medium", filed on November 29, 2019, the entire contents of which are incorporated herein by reference.
Technical field
Embodiments of the present invention relate to the field of video processing, and in particular to a video display method, an electronic device, and a medium.
Background
At present, when a user wants to obtain a video to be processed, the user can only shoot it himself or herself, or search for the desired video among the videos saved in the electronic device.
Shooting the video oneself takes up a considerable amount of the user's time. Finding the desired video among the videos saved in the electronic device also requires the user to spend a lot of time searching; moreover, when there are multiple video folders in the user's electronic device, the user has to open each folder separately to search, which makes the search process tedious and time-consuming.
Summary
Embodiments of the present invention provide a video display method, an electronic device, and a medium, so as to solve the problem that searching for a video is tedious and time-consuming.
第一方面,本发明实施例提供了一种视频显示方法,应用于电子设备,包括:
显示摄像头的拍摄预览界面;
在所述拍摄预览界面中包括目标对象的情况下,在所述拍摄预览界面内,显示与所述目标对象关联的N个视频标识;
其中,每个所述视频标识指示一个视频,N为正整数。
第二方面,本发明实施例还提供了一种电子设备,包括:
第一显示模块,用于显示摄像头的拍摄预览界面;
第二显示模块,用于在所述拍摄预览界面中包括目标对象的情况下,在所述拍摄预览界面内,显示与所述目标对象关联的N个视频标识;其中,每个所述视频标识指示一个视频,N为正整数。
第三方面,本发明实施例提供了一种电子设备,包括处理器、存储器及存储在所述存储器上并可在所述处理器上运行的计算机程序,所述计算机程序被所述处理器执行时实现如以上第一方面所述的视频显示方法的步骤。
第四方面,本发明实施例提供了一种计算机可读存储介质,所述计算机可读存储介质上存储计算机程序,所述计算机程序被处理器执行时实现如以上第一方面所述的视频显示方法的步骤。
本发明实施例的视频显示方法、电子设备以及计算机存储介质,在显示摄像头的拍摄预览界面的情况下,能够显示拍摄预览界面内包含的目标对象关联的视频标识,以供后续用户能够直接获取视频标识指示的视频。可见,本发明实施例不需要用户进行视频拍摄,也不需要用户手动查找所需视频,而是直接通过相机的拍摄预览界面内的目标对象即可获取所需视频的视频标识,操作便捷,实现了视频的快速查找。
附图说明
为了更清楚地说明本申请实施例的技术方案,下面将对本申请实施例中所需要使用的附图作简单地介绍,显而易见地,下面所描述的附图仅仅是本申请的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据附图获得其他的附图。
图1为本发明一个实施例提供的一种视频显示方法的流程示意图;
图2为本发明一个实施例提供的一种滑动输入的示意图;
图3为本发明一个实施例提供的另一种滑动输入的示意图;
图4为本发明一个实施例提供的一种第三输入的示意图;
图5为本发明一个实施例提供的一种第四输入的示意图;
图6为本发明一个实施例提供的另一种第四输入的示意图;
图7为本发明一个实施例提供的一种第五输入的示意图;
图8为本发明一个实施例提供的一种第六输入的示意图;
图9为本发明一个实施例提供的一种第七输入的示意图;
图10为本发明一个实施例提供的一种电子设备的结构示意图;
图11为本发明一个实施例提供的一种电子设备的硬件结构示意图。
具体实施方式
下面将结合本发明实施例中的附图,对本发明实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例是本发明一部分实施例,而不是全部的实施例。基于本发明中的实施例,本领域普通技术人员在没有作出创造性劳动前提下所获得的所有其他实施例,都属于本发明保护的范围。
图1示出了本发明一个实施例提供的视频显示方法的流程示意图。该方法应用于电子设备,该方法包括:
步骤101:显示摄像头的拍摄预览界面;
步骤102:在拍摄预览界面中包括目标对象的情况下,在拍摄预览界面内,显示与目标对象关联的N个视频标识;其中,每个视频标识指示一个视频,N为正整数。
其中,这里的目标对象可以包括一个对象或者多个对象。
本实施例在显示摄像头的拍摄预览界面的情况下,能够显示拍摄预览界面内包含的目标对象关联的视频标识,以供后续用户能够直接获取视频标识指示的视频。可见,本发明实施例不需要用户进行视频拍摄,也不需要用户手动查找所需视频,而是直接通过相机的拍摄预览界面内的目标对象即可获取所需视频的视频标识,操作便捷,实现了视频的快速查找。
在一个具体实施例中,上述步骤101之后,还可以包括:
接收用户对拍摄预览界面中的至少一个对象的第一输入;
响应于第一输入,将第一输入选择的至少一个对象作为目标对象。
即本实施例依据用户的选择,来确定拍摄预览界面内的目标对象,这种方式能够提高用户的自主性,并且避免显示过多的视频标识,影响用户的视频选择。当然,在其他实施例中,也可以由电子设备自动识别拍摄预览界面内包含的符合预设条件的对象作为目标对象,本发明对此不作限定。
进一步的,在一种具体实现方式中,上述接收用户对拍摄预览界面中的至少一个对象的第一输入,包括:
接收用户在拍摄预览界面上的滑动输入;
上述将第一输入选择的至少一个对象作为目标对象,包括:
将滑动输入的滑动轨迹所圈选的至少一个对象作为目标对象。
即在本实现方式中，通过用户的操作手势的轨迹来选择对应的目标对象，这里的滑动输入可以为：双指并拢下滑之后分开再并拢，滑动轨迹形成的图形可以为菱形或圆形或者近似于菱形或圆形的图形。例如，用户手指不离开屏幕画了一个菱形，菱形内圈定的对象即作为目标对象，如图2所示。图2示出了本发明一个实施例提供的一种滑动输入的示意图，图2中的物体110即为选择的目标对象。
在另一种实施例中,图3示出了本发明一个实施例提供的另一种滑动输入的示意图,图3中的物体110和120即为选择的目标对象。上述滑动输入还可以为:用户双指不离开屏幕连续滑动,滑动轨迹形成多个连续图形,每个图形分别为菱形或圆形或者近似于菱形或圆形的图形,该滑动输入所选择的目标对象为多个,目标对象包括每个图形内圈定的对象。例如,手指不离开屏幕连续画两个圆圈,两个圆圈内分别包含一个沙发和一个桌子,则目标对象包括该沙发和该桌子,之后同时显示沙发和桌子关联的视频标识。
当然,以上仅为几种滑动输入的具体的实施例,具体应用中,用户在拍摄预览界面上还可采用其他滑动输入来进行目标对象的选择。例如,滑动轨迹为点或者对勾等,该滑动轨迹上方预定距离内的对象作为所选择的目标对象,或者还可以为用户点击某个区域进行对焦,对焦的区域内的对象作为目标对象等。
此外，在其他实施例中，也可采用其他输入方式。例如，可以通过直接点击、双击或者长按拍摄预览界面内的某个对象，来实现目标对象的选择。当然，以上输入方式仅为几种具体实例，本发明并不限定第一输入的具体输入方式。
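By way of illustration only — the above embodiments describe the circling gesture but do not prescribe an algorithm for it — the closed sliding trajectory can be treated as a polygon and the centre of each detected object tested against it. The following Python sketch uses a standard ray-casting test; the object names and coordinates are made-up assumptions:

    from typing import List, Tuple

    Point = Tuple[float, float]

    def point_in_polygon(pt: Point, polygon: List[Point]) -> bool:
        """Ray-casting test: is pt inside the closed polygon formed by the gesture?"""
        x, y = pt
        inside = False
        n = len(polygon)
        for i in range(n):
            x1, y1 = polygon[i]
            x2, y2 = polygon[(i + 1) % n]
            if (y1 > y) != (y2 > y):
                x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
                if x < x_cross:
                    inside = not inside
        return inside

    def select_targets(trajectory: List[Point], object_centres: dict) -> list:
        """Objects whose centre lies inside the gesture trajectory become targets."""
        return [name for name, centre in object_centres.items()
                if point_in_polygon(centre, trajectory)]

    # Usage: a roughly diamond-shaped swipe drawn around the sofa at (5, 5).
    gesture = [(5, 0), (10, 5), (5, 10), (0, 5)]
    print(select_targets(gesture, {"sofa": (5, 5), "table": (20, 5)}))  # ['sofa']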
在本发明的一种实施例中,步骤101之后,步骤102之前,还可以包括:获取目标对象关联的N个视频标识。
具体的,上述获取目标对象关联的N个视频标识,具体可以包括:
从服务器或电子设备的存储器存储的至少一个视频中,获取与目标对象关联的N个视频标识。
需要注意的是，这里实际获取的是视频本身，视频标识仅用于在拍摄预览界面上显示，以起到指示视频的作用。
本实施例中,具体通过服务器或电子设备的存储器来获取视频,其中,选择电子设备的存储器来获取视频的话,使得用户能够在没有网络的情况下进行视频获取以及剪辑。选择服务器来获取视频的话,能够增大获取视频的范围,从网络上获取非常丰富的视频来源。
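As a hedged sketch of the association lookup described above, assuming each locally stored or server-side video carries a list of object labels produced by some recognition step (the VideoRecord structure and the label-overlap rule are illustrative assumptions, not the application's own data model), the N video identifiers could be collected like this in Python:

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class VideoRecord:
        video_id: str        # identifier that would be shown in the preview interface
        labels: List[str]    # object labels detected in the video (assumed)

    def find_associated_ids(target_labels: List[str],
                            library: List[VideoRecord]) -> List[str]:
        """Return identifiers of videos whose labels overlap the target object(s)."""
        wanted = set(target_labels)
        return [v.video_id for v in library if wanted & set(v.labels)]

    # Usage: a sofa selected in the preview matches two of the three stored videos.
    library = [
        VideoRecord("vid-001", ["sofa", "cushion"]),
        VideoRecord("vid-002", ["table"]),
        VideoRecord("vid-003", ["sofa", "table"]),
    ]
    print(find_associated_ids(["sofa"], library))   # ['vid-001', 'vid-003']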
在一种实施例中,上述获取目标对象关联的N个视频标识,还可以包括:
接收用户的第二输入;
响应于第二输入,调整摄像头的拍摄视场角至目标拍摄视场角;
按照预设的拍摄视场角与时间区间的预设对应关系,确定目标拍摄视场角对应的目标时间区间;
获取拍摄时间位于目标时间区间内且与目标对象关联的N个视频的视频标识。
在本实施例中,通过设置拍摄视场角与时间区间的对应关系,使得能够根据获取目标对象时的拍摄视场角,来限定所获取的视频标识的拍摄时间。例如,标准视场角对应的时间区间可以为3天内,广角对应的实际区间可以为3天前。这种方式,能够使用户根据自身需求来选择合适拍摄时间的视频,尽可能避免获取过多用户不需要的视频,方便了后续用户依据所获取的视频进行查看、剪辑时对视频的筛选。
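A minimal Python sketch of this field-of-view filter, assuming the example mapping given above (standard angle → shot within the last 3 days, wide angle → shot 3 days ago or earlier); the dictionary-shaped video records and the exact boundary handling are illustrative assumptions:

    import time

    THREE_DAYS = 3 * 24 * 3600

    def time_interval_for_fov(fov: str, now: float):
        """Map the shooting field of view to an allowed shooting-time interval."""
        if fov == "standard":
            return now - THREE_DAYS, now        # shot within the last 3 days
        if fov == "wide":
            return 0.0, now - THREE_DAYS        # shot 3 days ago or earlier
        raise ValueError(f"unknown field of view: {fov}")

    def filter_videos(videos, fov, now=None):
        """videos: list of dicts like {"id": ..., "shot_at": <unix timestamp>}."""
        now = time.time() if now is None else now
        earliest, latest = time_interval_for_fov(fov, now)
        return [v["id"] for v in videos if earliest <= v["shot_at"] <= latest]

    # Usage: with a wide field of view only the older recording is offered.
    now = time.time()
    videos = [{"id": "vid-001", "shot_at": now - 3600},
              {"id": "vid-002", "shot_at": now - 10 * 24 * 3600}]
    print(filter_videos(videos, "wide", now))   # ['vid-002']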
另外，在其他实施例中，在上述步骤102中，可以在拍摄预览界面内，在每个目标对象周围预设范围内显示该目标对象关联的视频标识，如图3所示。当然，视频标识也可以通过列表的方式进行显示，或者还可通过其他方式进行显示，本发明不限定视频标识的显示方式。
在一种实施例中,上述步骤102可以包括:
在拍摄预览界面中包括目标对象的第一特征部分的情况下,显示与第一特征部分关联的N个视频的视频标识;
在拍摄预览界面更新为目标对象的第二特征部分的情况下,将N个视频的视频标识更新为与第二特征部分关联的T个视频的视频标识;
其中,T为正整数,目标对象的不同特征部分关联不同的视频标识。
在本实施例中,由于某些目标对象是由不同的特征部分所组成的,例如目标对象为沙发,则沙发的特征部分可以为沙发扶手、靠垫、坐垫、抱枕等,其中,沙发扶手可以关联家庭相关的视频,抱枕可以关联聚会相关的视频等。可见,本实施例能够获取目标对象各个特征部分关联的视频,从而提高获取的视频的丰富性。
其中,上述识别拍摄预览界面中目标对象的第一特征部分和第二特征部分的过程可以为:依次扫描目标对象的各个特征部分。具体的,可以是按照预定顺序(例如从上到下,从左到右等)扫描目标对象,扫描过程中,识别已扫描部分中的特征部分。
上述实施例中,在扫描完第二特征部分后,即不再显示第一特征部分关联的视频标识,而是仅显示第二特征部分关联的视频标识。或者,也可以在扫描完第二特征部分后,增加显示第二特征部分关联的视频标识,即同时显示第一特征部分和第二特征部分关联的视频标识。
此外,基于上述实施例,在扫描完成目标对象的多个特征部分的情况下,即拍摄预览界面中包括目标对象的多个特征部分的情况下,还可以将扫描完成的多个特征部分中的至少两个特征部分进行组合,并获取每个组合关联的视频标识。例如,扫描完成3个特征部分后,则可以将这三个特征部分两两组合以及将这三个特征部分作为一个组合,之后获取这四组组合中每个组合关联的视频标识。这种方式,能够进一步增加所获得的视频标识的丰富性。
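The combination step can be illustrated with a short Python sketch; how combinations are actually mapped to videos is left open by the description, so the association lookup table below is an assumed stand-in:

    from itertools import combinations

    def feature_part_combinations(parts):
        """All combinations of at least two scanned feature parts.
        With three parts this yields the three pairs plus the full set, i.e. four groups."""
        groups = []
        for size in range(2, len(parts) + 1):
            groups.extend(combinations(parts, size))
        return groups

    def videos_for_combinations(parts, association):
        """association: dict mapping a frozenset of feature parts to video identifiers
        (an assumed lookup structure, not the application's own data model)."""
        return {combo: association.get(frozenset(combo), [])
                for combo in feature_part_combinations(parts)}

    parts = ["armrest", "cushion", "pillow"]
    print(feature_part_combinations(parts))
    # [('armrest', 'cushion'), ('armrest', 'pillow'), ('cushion', 'pillow'),
    #  ('armrest', 'cushion', 'pillow')]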
在一种具体实施例中,该方法还可以包括:
接收用户对N个视频标识中的第一视频标识的第三输入;其中,这里的第一视频标识可以是N个视频标识中的任意一个。
响应于第三输入,播放第一视频标识对应的第一视频;
接收用户的第四输入;
响应于第四输入,对第一视频进行剪辑,输出剪辑后的第一视频。
由于在视频剪辑过程中,通常需要对视频进行播放,因此本实施例首先播放第一视频,然后再对播放过程中的第一视频进行剪辑。
目前想要对特定的目标对象进行视频剪辑的话,通常需要用户自行进行录制,或者去网上查找相关视频,过程较为复杂,不够便利。而本实施例中,通过在拍摄预览界面选定目标对象后,即可获取目标对象的相关视频,进而进行剪辑。从而简化了对特定目标对象的视频剪辑的过程。
进一步的,在一种实施例中,在播放第一视频的过程中,响应于用户对上一个或下一个按键的触发输入,可以播放第一视频的上一个视频或者下一个视频。
在另一种实施例中,如果第一视频标识为多个,则同时播放多个第一视频标识对应的多个第一视频。播放多个第一视频时,屏幕上下区域将会分割开来,每个区域分别单独显示一个第一视频,或者也可以将屏幕左右区域分割开来,每个区域单独显示一个第一视频。
此外,得到剪辑后的第一视频之后,还可以响应于接收的视频保存输入,将剪辑后的第一视频进行保存。其中,这里的视频保存输入可以为长按视频的播放按钮的操作,当然,也可以设置其他操作作为视频保存输入,本发明对此不作限定。
另外，图4示出了本发明一个实施例提供的一种第三输入的示意图，上述对第一视频标识的第三输入可以为：点击第一视频标识，并在手指不离开屏幕的情况下两指分开往外拉伸的输入操作。当然，在其他实施例中，第三输入也可以为双击第一视频标识的操作，或者还可以设置其他输入方式作为第三输入。
在一种具体的实施例中，上述接收用户的第四输入具体可以为：接收用户对第一视频的播放界面的第四输入。
如图5所示,图5示出了本发明一个实施例提供的一种第四输入的示意图。上述接收用户的第四输入之后,还可以包括:
响应于第四输入,在第一视频的播放界面上,显示第四输入所输入的第一信息。
在本实施例中,可以在第一视频的播放界面上插入用户自定义输入的第一信息,例如文字和图案等。这种方式,使得用户能够根据自身需求对获取的第一视频进行文字编辑,从而得到自身想要的视频显示效果。用户编辑视频的灵活性高。
其中,为了方便用户的编辑,避免用户编辑过程中漏掉第一视频的显示内容,可以在响应于第四输入的情况下,暂停第一视频,并在编辑完成后,接收到用户的编辑完成输入的情况下,继续播放第一视频。
具体的,上述第四输入,可以为用户单指接触屏幕并移动一段超出预设长度的弧度的操作。当然,第四输入也可以设置为其他类型的操作,本发明对此并不限定。在将第一信息插入第一视频后,保存编辑完成后的第一视频,以后查看第一视频时,即可看到编辑的第一信息。
基于上述实施例,在一种具体实施例中,上述接收用户的第四输入,响应于第四输入,对第一视频进行剪辑,输出剪辑后的第一视频,还可以为:
接收用户对第一视频内已插入的目标第一信息的第四输入,响应于第四输入,删除第一视频内的目标第一信息。此时第四输入为删除输入。
由于用户可能存在插入信息错误或者想要调整之前的插入的第一信息的情况,因此需要将待调整的第一信息进行删除。本实施例使得用户能够进行已插入的第一信息的删除,提高了对第一视频进行信息编辑时的便利性。
具体的，本实施例中的第四输入，可以为手势从屏幕一侧（例如左侧）滑动一条线至屏幕另一侧的操作，用户手势的滑动轨迹所穿过的第一信息，即为目标第一信息。图6示出了本发明一个实施例提供的另一种第四输入的示意图。当然，在其他实施例中，也可以设置为用户手势的滑动轨迹之上的第一信息作为目标第一信息，或者还可以采用其他的输入方式作为第四输入，本发明对此不作限定。
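One possible way to model this swipe-to-delete behaviour — purely an assumption for illustration, since the description does not specify how the inserted first information is represented — is to keep each piece of first information with a screen position and drop those lying on the horizontal swipe line (Python):

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Annotation:
        text: str
        x: float   # position on the playback interface, in pixels (assumed model)
        y: float

    def delete_crossed(annotations: List[Annotation], swipe_y: float,
                       tolerance: float = 20.0) -> List[Annotation]:
        """Remove annotations whose vertical position lies within `tolerance`
        of a horizontal swipe from one screen edge to the other."""
        return [a for a in annotations if abs(a.y - swipe_y) > tolerance]

    # Usage: a swipe across y=300 deletes the caption sitting on that line.
    notes = [Annotation("Happy birthday!", 100, 300), Annotation("2019-11-29", 40, 520)]
    print([a.text for a in delete_crossed(notes, 300.0)])   # ['2019-11-29']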
此外,上述对第一信息的第四输入,可以是在剪辑后的第一视频保存完成前进行。当然,将剪辑后的第一视频保存完成后,也可以继续调用第一视频并对其进行剪辑,本发明对此不作限定。
在另一种实施例中,上述响应于第四输入,对第一视频进行剪辑,输出剪辑后的第一视频,还可以包括:
响应于第四输入,录制第一视频中的目标对象的第二视频;
接收用户对第二视频的第五输入;
响应于第五输入,将第二视频插入至第一视频中的第一位置;
其中,第一位置为第五输入在第一视频的播放进度条上所输入的位置。
本发明前述实施例中,是通过拍摄预览界面内的目标对象来获取关联的视频标识,将视频标识对应的视频作为待处理的候选视频,用户从待处理的候选视频中选择至少一个视频进行剪辑。在此过程中,由于待处理的候选视频为其他人录制的或者是用户之前录制的,因此可能不太符合目前的需求,基于此目的,本实施例追加录制了目标对象的第二视频,并将追加录制的第二视频插入第一视频中。这种方式,使得第一视频内能够随意增加用户需要的视频内容,丰富了用户的视频剪辑方式,方便了用户对视频的自定义剪辑,增强了视频剪辑的灵活性。其中,第二视频可以录制一个或多个。
具体的,图7示出了本发明一个实施例提供的一种第五输入的示意图。上述第五输入可以为:双指在屏幕内下滑,第二视频即插入第五输入在第一视频的播放进度条上对应的位置,图7中的指示标识130即为插入的第二视频的指示标识。或者在其他实施例中,也可以采用单指下滑的操作作为第五输入,或者还可以通过触发预定的视频插入菜单项作为第五输入。本发明不限定第五输入的具体操作方式。
在一种具体实施例中,如图7所示,上述将第二视频插入至第一视频中的第一位置之后,还可以包括:
在第一视频的播放进度条上的第一位置，显示第二视频的指示标识；其中，指示标识用于指示第二视频的插入位置。
由于在第一视频内插入的第二视频,本质上是与之前的第一视频不同的视频,其来源、录制场景等通常是不一样的,这种情况下,在对第一视频进行剪辑的过程中,即使将某个第二视频进行了插入,后续也可能需要对第二视频进行调整。因此,基于该目的,本实施例在第一视频的播放进度条上,标识了第二视频的插入位置,从而使得用户能够直观的了解到第二视频的插入情况,并方便了后续用户对之前插入的第二视频的调整,提高了视频剪辑的过程中用户的便利性。其中,这里的指示标识可通过包含视频简称的气泡表示,当然也可以采用其他标识方式。
基于上述实施例,上述在第一视频的播放进度条上的第一位置,显示第二视频的指示标识之后,还可以包括:
接收用户对指示标识的第六输入;
响应于第六输入,将指示标识从第一位置移动至第二位置,并将第二视频插入至第一视频中的第二位置。
其中,这里的第一位置和第二位置指的是第一视频的播放进度条上的位置,即视频的播放时间点的位置。第一位置与第二位置不同。
由于在第二视频插入时，可能无法准确控制第二视频的插入位置，导致出现插入位置错误的情况，这种情况下则需要对插入的第二视频的插入位置进行调整。在本实施例中，通过移动第二视频的指示标识在第一视频的播放进度条上的位置，即能够移动第二视频在第一视频中的插入位置。在此过程中，用户根据指示标识的位置能够直观地查看到当前第二视频插入位置的移动情况，方便了用户的调整。
此外,为了避免用户漏看第一视频的播放内容,简化后台处理程序,在响应于第六输入的过程中,优选暂停播放第一视频。
具体的，图8示出了本发明一个实施例提供的一种第六输入的示意图。上述第六输入可以为：单指或双指选中第二视频的指示标识，左右移动上述指示标识在第一视频的播放进度条上的位置，来逐帧调整第二视频的插入位置。例如，用户单指或双指按住指示标识，不离开屏幕左右滑动该指示标识，来移动第二视频的插入位置。其中，图8中的指示标识130即为被调整的第二视频的指示标识。例如向左滑一下，第二视频插入到当前帧的上一帧。在其他实施例中也可以采用其他插入位置的调整方式，本发明不限定第六输入的具体操作内容。
在一种实施例中,上述在第一视频的播放进度条上的第一位置,显示第二视频的指示标识之后,还可以包括:
接收用户对指示标识的第七输入;
响应于第七输入,删除第一视频中已插入的第二视频,并消除指示标识的显示。
在本实施例中,能够将插入的第二视频进行删除,从而进一步完善了用户在进行视频剪辑过程中对于追加视频的剪辑方式,使得用户能够更为方便的进行追加视频的插入管理,剪辑的灵活性更高。
具体的，图9示出了本发明一个实施例提供的一种第七输入的示意图。上述第七输入可以为：单指选中第二视频的指示标识，在不离开屏幕的情况下，拖动该指示标识向屏幕下方边缘移动的操作。在其他实施例中，也可以为第二视频的指示标识设置对应的关闭标识，第七输入为单指选中第二视频的指示标识对应的关闭标识的操作，图9中的指示标识140即为被删除的第二视频的指示标识。当然，在其他实施例中，第七输入也可设置为其他类型的操作。
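The insert, move and delete operations on the second video can be summarised with a toy timeline model in Python; the clip positions stand in for the indicator markers on the first video's progress bar, and all class and field names are illustrative assumptions rather than the application's implementation:

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class InsertedClip:
        name: str         # e.g. "second video", shown as a bubble indicator
        position: float   # insertion point on the first video's progress bar, seconds

    @dataclass
    class Timeline:
        duration: float
        clips: List[InsertedClip] = field(default_factory=list)

        def insert(self, name: str, position: float) -> InsertedClip:
            clip = InsertedClip(name, max(0.0, min(position, self.duration)))
            self.clips.append(clip)
            return clip

        def move(self, clip: InsertedClip, new_position: float) -> None:
            """Dragging the indicator left or right changes where the clip is inserted."""
            clip.position = max(0.0, min(new_position, self.duration))

        def delete(self, clip: InsertedClip) -> None:
            """Dragging the indicator off the screen removes the inserted clip."""
            self.clips.remove(clip)

    # Usage: insert a newly recorded clip at 12 s, nudge it slightly earlier, delete it.
    timeline = Timeline(duration=60.0)
    clip = timeline.insert("second video", 12.0)
    timeline.move(clip, 11.96)     # roughly one frame earlier at 25 fps
    timeline.delete(clip)
    print(timeline.clips)          # []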
在一种实施例中,上述步骤102之后,还可以包括:
接收用户对N个视频标识中的M个目标视频标识的第八输入;
响应于第八输入,将M个目标视频标识指示的M个目标视频进行视频拼接,输出第三视频;其中,M为大于1的整数,M≤N。
在本实施例中，能够直接将多个目标视频进行拼接，在单一视频无法满足用户需求的情况下，不需要用户每次均进行视频的追录，而是可以通过将多个目标视频进行拼接的方式来获得用户想要的视频，提高了视频剪辑的灵活性以及视频剪辑操作的丰富性。其中，这里的第八输入可以是将M个目标视频标识拖拽至一起的输入操作，后续的视频拼接指的是按照录制完成的时间顺序将M个目标视频依次贴合在一起。例如，用户可以任意选择两个视频标识拖拽至一起，两个视频标识分别对应目标视频A和B，其中A的录制完成时间早于B，则会将A的结尾帧与B的起始帧连接，使得A和B按照录制完成的时间顺序贴合在一起。
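A hedged Python sketch of the splicing step: sort the selected target videos by the time their recording finished and join them end to start, so the end frame of the earlier video meets the start frame of the later one. Frames are represented abstractly as list items; a real implementation would operate on decoded video streams:

    def splice(target_videos):
        """Concatenate the selected target videos in the order their recording finished."""
        ordered = sorted(target_videos, key=lambda v: v["finished_at"])
        third_video = []
        for v in ordered:
            third_video.extend(v["frames"])
        return third_video

    # Usage: A finished recording before B, so A's frames come first.
    a = {"finished_at": 100, "frames": ["A1", "A2"]}
    b = {"finished_at": 200, "frames": ["B1", "B2"]}
    print(splice([b, a]))   # ['A1', 'A2', 'B1', 'B2']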
本发明另一实施例还提供了一种电子设备,图10示出了本发明一个实施例提供的一种电子设备的结构示意图。该电子设备包括:
第一显示模块201,用于显示摄像头的拍摄预览界面;
第二显示模块202,用于在拍摄预览界面中包括目标对象的情况下,在拍摄预览界面内,显示与目标对象关联的N个视频标识;其中,每个视频标识指示一个视频,N为正整数。
本实施例在显示摄像头的拍摄预览界面的情况下,能够显示拍摄预览界面内包含的目标对象关联的视频标识,以供后续用户能够直接获取视频标识指示的视频。可见,本发明实施例不需要用户进行视频拍摄,也不需要用户手动查找所需视频,而是直接通过相机的拍摄预览界面内的目标对象即可获取所需视频的视频标识,操作便捷,实现了视频的快速查找。
在一种实施例中,该电子设备还可以包括:
对象选择模块,用于接收用户对拍摄预览界面中的至少一个对象的第一输入;响应于第一输入,将第一输入选择的至少一个对象作为目标对象。
即本实施例依据用户的选择,来确定拍摄预览界面内的目标对象,这种方式能够提高用户的自主性,并且避免显示过多的视频标识,影响用户的视频选择。当然,在其他实施例中,也可以由电子设备自动识别拍摄预览界面内包含的符合预设条件的对象作为目标对象,本发明对此不作限定。
进一步的,在一种具体实现方式中,上述对象选择模块具体用于:接收用户在拍摄预览界面上的滑动输入;将滑动输入的滑动轨迹所圈选的至少一个对象作为目标对象。
即在本实现方式中，通过用户的操作手势的轨迹来选择对应的目标对象，这里的滑动输入可以为：双指并拢下滑之后分开再并拢，滑动轨迹形成的图形可以为菱形或圆形或者近似于菱形或圆形的图形。此外，上述滑动输入还可以为：用户双指不离开屏幕连续滑动，滑动轨迹形成多个连续图形，每个图形分别为菱形或圆形或者近似于菱形或圆形的图形，该滑动输入所选择的目标对象为多个，目标对象包括每个图形内圈定的对象。此外，在其他实施例中，也可采用其他滑动输入。
在一种实施例中,该电子设备还可以包括:
视频标识获取模块,用于获取目标对象关联的N个视频标识。
具体的,上述视频标识获取模块具体可以用于:从服务器或电子设备的存储器存储的至少一个视频中,获取与目标对象关联的N个视频标识。
本实施例中,具体通过服务器或电子设备的存储器来获取视频,其中,选择电子设备的存储器来获取视频的话,使得用户能够在没有网络的情况下进行视频获取以及剪辑。选择服务器来获取视频的话,能够增大获取视频的范围,从网络上获取非常丰富的视频来源。
在另一实施例中,上述视频标识获取模块还可以用于:
接收用户的第二输入;响应于第二输入,调整摄像头的拍摄视场角至目标拍摄视场角;按照预设的拍摄视场角与时间区间的预设对应关系,确定目标拍摄视场角对应的目标时间区间;获取拍摄时间位于目标时间区间内且与目标对象关联的N个视频的视频标识。
在本实施例中,通过设置拍摄视场角与时间区间的对应关系,使得能够根据获取目标对象时的拍摄视场角,来限定所获取的视频标识的拍摄时间。这种方式,能够使用户根据自身需求来选择合适拍摄时间的视频,尽可能避免获取过多用户不需要的视频,方便了后续用户依据所获取的视频进行查看、剪辑时对视频的筛选。
另外,在其他实施例中,第二显示模块202可以具体用于在拍摄预览界面内,在每个目标对象周围预设范围内显示该目标对象关联的视频标识,或者通过列表的方式显示视频标识。本发明不限定视频标识的显示方式。
在一种实施例中,上述第二显示模块202具体可以包括:
第一标识显示单元,用于在拍摄预览界面中包括目标对象的第一特征部分的情况下,显示与第一特征部分关联的N个视频的视频标识;
第二标识显示单元，用于在拍摄预览界面更新为目标对象的第二特征部分的情况下，将N个视频的视频标识更新为与第二特征部分关联的T个视频的视频标识；其中，T为正整数，目标对象的不同特征部分关联不同的视频标识。
在本实施例中，由于某些目标对象是由不同的特征部分所组成的，例如目标对象为沙发，则沙发的特征部分可以为沙发扶手、靠垫、坐垫、抱枕等。因此，本实施例能够获取目标对象各个特征部分对应的视频，从而提高获取的视频的丰富性。
其中,上述第二标识显示单元还用于:识别拍摄预览界面中目标对象的第一特征部分和第二特征部分。
在另一种实施例中,该电子设备还包括:
视频播放模块203,用于接收用户对N个视频标识中的第一视频标识的第三输入;响应于第三输入,播放第一视频标识对应的第一视频。
视频剪辑模块204,用于接收用户的第四输入;响应于第四输入,对第一视频进行剪辑,输出剪辑后的第一视频。
本实施例中,通过在拍摄预览界面选定目标对象后,即可获取目标对象的相关视频,进而进行剪辑。从而简化了对特定目标对象的视频剪辑的过程。
在其他实施例中,得到剪辑后的第一视频之后,该电子设备还可以包括视频保存模块,用于响应于接收的视频保存输入,将剪辑后的第一视频进行保存。其中,这里的视频保存输入可以为长按视频的播放按钮的操作,当然,也可以设置其他操作作为视频保存输入。
在一种实施例中,视频播放模块203还可以用于:在播放第一视频的过程中,响应于用户对上一个或下一个按键的触发输入,可以播放第一视频的上一个视频或者下一个视频。
具体的,视频剪辑模块204可以用于:响应于接收的第四输入,暂停播放的视频;接收输入的待插入信息,在暂停播放时显示的视频帧内插入待插入信息。
在另一种实施例中,如果第一视频标识为多个,则同时播放多个第一视频标识对应的多个第一视频。播放多个第一视频时,屏幕上下区域将会分割开来,每个区域分别单独显示一个第一视频,或者也可以将屏幕左右区域分割开来,每个区域单独显示一个第一视频。
在一种具体的实施例中,上述视频剪辑模块204具体用于:接收用户对第一视频的播放界面的第四输入,响应于第四输入,在第一视频的播放界面上,显示第四输入所输入的第一信息。
在本实施例中,可以在第一视频的播放界面上插入用户自定义输入的第一信息,例如文字和图案等。这种方式,使得用户能够根据自身需求对获取的第一视频进行文字编辑,从而得到自身想要的视频显示效果。用户编辑视频的灵活性高。
基于上述实施例,在一种具体实施例中,上述视频剪辑模块204还可以用于:接收用户对第一视频内已插入的目标第一信息的第四输入,响应于第四输入,删除第一视频内的目标第一信息。此时第四输入为删除输入。本实施例使得用户能够进行已插入的第一信息的删除,提高了对第一视频进行信息编辑时的便利性。
在另一实施例中,视频剪辑模块204还可以用于:响应于第四输入,录制第一视频中的目标对象的第二视频;接收用户对第二视频的第五输入;响应于第五输入,将第二视频插入至第一视频中的第一位置;其中,第一位置为第五输入在第一视频的播放进度条上所输入的位置。
本实施例追加录制了目标对象的第二视频,并将追加录制的第二视频插入第一视频中。这种方式,使得第一视频内能够随意增加用户需要的视频内容,丰富了用户的视频剪辑方式,方便了用户对视频的自定义剪辑,增强了视频剪辑的灵活性。其中,第二视频可以录制一个或多个。
具体的,视频剪辑模块204还可以用于:在第一视频的播放进度条上的第一位置,显示第二视频的指示标识;其中,指示标识用于指示第二视频的插入位置。
本实施例在第一视频的播放进度条上,标识了第二视频的插入位置,从而使得用户能够直观的了解到第二视频的插入情况,并方便了后续用户对之前插入的第二视频的调整,提高了视频剪辑的过程中用户的便利性。
在另一实施例中,视频剪辑模块204还可以用于:接收用户对指示标识的第六输入;响应于第六输入,将指示标识从第一位置移动至第二位置,并将第二视频插入至第一视频中的第二位置。
在本实施例中，通过移动第二视频的指示标识在第一视频的播放进度条上的位置，即能够移动第二视频在第一视频中的插入位置。在此过程中，用户根据指示标识的位置能够直观地查看到当前第二视频插入位置的移动情况，方便了用户的调整。
在另一实施例中,视频剪辑模块204还可以用于:
接收用户对指示标识的第七输入;响应于第七输入,删除第一视频中已插入的第二视频,并消除指示标识的显示。
本实施例中,进一步完善了用户在进行视频剪辑过程中对于追加视频的剪辑方式,使得用户能够更为方便的进行追加视频的插入管理,剪辑的灵活性更高。
在另一实施例中,该电子设备还包括:
视频拼接模块,用于接收用户对N个视频标识中的M个目标视频标识的第八输入;响应于第八输入,将M个目标视频标识指示的M个目标视频进行视频拼接,输出第三视频;其中,M为大于1的整数,M≤N。
在本实施例中，能够直接将多个目标视频进行拼接，在单一视频无法满足用户需求的情况下，不需要用户每次均进行视频的追录，而是可以通过将多个目标视频进行拼接的方式来获得用户想要的视频，提高了视频剪辑的灵活性以及视频剪辑操作的丰富性。
本发明实施例提供的电子设备能够实现前述任一种方法实施例中电子设备实现的各个方法步骤,为避免重复,这里不再赘述。
图11为实现本发明各个实施例的一种电子设备的硬件结构示意图。
该电子设备300包括但不限于:射频单元301、网络模块302、音频输出单元303、输入单元304、传感器305、显示单元306、用户输入单元307、接口单元308、存储器309、处理器310、电源311以及拍摄组件312等部件。本领域技术人员可以理解,图11中示出的电子设备结构并不构成对电子设备的限定,电子设备可以包括比图示更多或更少的部件,或者组合某些部件,或者不同的部件布置。在本发明实施例中,电子设备包括但不限于手机、平板电脑、笔记本电脑、掌上电脑、车载终端、可穿戴设备、以及计步器等。拍摄组件312可以包括摄像头及摄像头的相关组件等。
其中,处理器310,用于显示摄像头的拍摄预览界面;在拍摄预览界面中包括目标对象的情况下,在拍摄预览界面内,显示与目标对象关联的N个视频标识;其中,每个视频标识指示一个视频,N为正整数。
在本发明实施例中,在显示摄像头的拍摄预览界面的情况下,能够显示拍摄预览界面内包含的目标对象关联的视频标识,以供后续用户能够直接获取视频标识指示的视频。可见,本发明实施例不需要用户进行视频拍摄,也不需要用户手动查找所需视频,而是直接通过相机的拍摄预览界面内的目标对象即可获取所需视频的视频标识,操作便捷,实现了视频的快速查找。
应理解的是,本发明实施例中,射频单元301可用于收发信息或通话过程中,信号的接收和发送,具体的,将来自基站的下行数据接收后,给处理器310处理;另外,将上行的数据发送给基站。通常,射频单元301包括但不限于天线、至少一个放大器、收发信机、耦合器、低噪声放大器、双工器等。此外,射频单元301还可以通过无线通信系统与网络和其他设备通信。
电子设备通过网络模块302为用户提供了无线的宽带互联网访问,如帮助用户收发电子邮件、浏览网页和访问流式媒体等。
音频输出单元303可以将射频单元301或网络模块302接收的或者在存储器309中存储的音频数据转换成音频信号并且输出为声音。而且,音频输出单元303还可以提供与电子设备300执行的特定功能相关的音频输出(例如,呼叫信号接收声音、消息接收声音等等)。音频输出单元303包括扬声器、蜂鸣器以及受话器等。
输入单元304用于接收音频或视频信号。输入单元304可以包括图形处理器(Graphics Processing Unit,GPU)3041和麦克风3042，图形处理器3041对在视频捕获模式或图像捕获模式中由图像捕获装置(如摄像头)获得的静态图片或视频的图像数据进行处理。处理后的图像帧可以显示在显示单元306上。经图形处理器3041处理后的图像帧可以存储在存储器309(或其它存储介质)中或者经由射频单元301或网络模块302进行发送。麦克风3042可以接收声音，并且能够将这样的声音处理为音频数据。处理后的音频数据可以在电话通话模式的情况下转换为可经由射频单元301发送到移动通信基站的格式输出。
电子设备300还包括至少一种传感器305,比如光传感器、运动传感器以及其他传感器。具体地,光传感器包括环境光传感器及接近传感器,其中,环境光传感器可根据环境光线的明暗来调节显示面板3061的亮度,接近传感器可在电子设备300移动到耳边时,关闭显示面板3061和/或背光。作为运动传感器的一种,加速计传感器可检测各个方向上(一般为三轴)加速度的大小,静止时可检测出重力的大小及方向,可用于识别电子设备姿态(比如横竖屏切换、相关游戏、磁力计姿态校准)、振动识别相关功能(比如计步器、敲击)等;传感器305还可以包括指纹传感器、压力传感器、虹膜传感器、分子传感器、陀螺仪、气压计、湿度计、温度计、红外线传感器等,在此不再赘述。
显示单元306用于显示由用户输入的信息或提供给用户的信息。显示单元306可包括显示面板3061,可以采用液晶显示器(Liquid Crystal Display,LCD)、有机发光二极管(Organic Light-Emitting Diode,OLED)等形式来配置显示面板3061。
用户输入单元307可用于接收输入的数字或字符信息,以及产生与电子设备的用户设置以及功能控制有关的键信号输入。具体地,用户输入单元307包括触控面板3071以及其他输入设备3072。触控面板3071,也称为触摸屏,可收集用户在其上或附近的触摸操作(比如用户使用手指、触笔等任何适合的物体或附件在触控面板3071上或在触控面板3071附近的操作)。触控面板3071可包括触摸检测装置和触摸控制器两个部分。其中,触摸检测装置检测用户的触摸方位,并检测触摸操作带来的信号,将信号传送给触摸控制器;触摸控制器从触摸检测装置上接收触摸信息,并将它转换成触点坐标,再送给处理器310,接收处理器310发来的命令并加以执行。此外,可以采用电阻式、电容式、红外线以及表面声波等多种类型实现触控面板3071。除了触控面板3071,用户输入单元307还可以包括其他输入设备3072。具体地,其他输入设备3072可以包括但不限于物理键盘、功能键(比如音量控制按键、开关按键等)、轨迹球、鼠标、操作杆, 在此不再赘述。
进一步的,触控面板3071可覆盖在显示面板3061上,当触控面板3071检测到在其上或附近的触摸操作后,传送给处理器310以确定触摸事件的类型,随后处理器310根据触摸事件的类型在显示面板3061上提供相应的视觉输出。虽然在图11中,触控面板3071与显示面板3061是作为两个独立的部件来实现电子设备的输入和输出功能,但是在某些实施例中,可以将触控面板3071与显示面板3061集成而实现电子设备的输入和输出功能,具体此处不做限定。
接口单元308为外部装置与电子设备300连接的接口。例如,外部装置可以包括有线或无线头戴式耳机端口、外部电源(或电池充电器)端口、有线或无线数据端口、存储卡端口、用于连接具有识别模块的装置的端口、音频输入/输出(I/O)端口、视频I/O端口、耳机端口等等。接口单元308可以用于接收来自外部装置的输入(例如,数据信息、电力等等)并且将接收到的输入传输到电子设备300内的一个或多个元件或者可以用于在电子设备300和外部装置之间传输数据。
存储器309可用于存储软件程序以及各种数据。存储器309可主要包括存储程序区和存储数据区，其中，存储程序区可存储操作系统、至少一个功能所需的应用程序(比如声音播放功能、图像播放功能等)等；存储数据区可存储根据手机的使用所创建的数据(比如音频数据、电话本等)等。此外，存储器309可以包括高速随机存取存储器，还可以包括非易失性存储器，例如至少一个磁盘存储器件、闪存器件、或其他非易失性固态存储器件。
处理器310是电子设备的控制中心，利用各种接口和线路连接整个电子设备的各个部分，通过运行或执行存储在存储器309内的软件程序和/或模块，以及调用存储在存储器309内的数据，执行电子设备的各种功能和处理数据，从而对电子设备进行整体监控。处理器310可包括一个或多个处理单元；优选的，处理器310可集成应用处理器和调制解调处理器，其中，应用处理器主要处理操作系统、用户界面和应用程序等，调制解调处理器主要处理无线通信。可以理解的是，上述调制解调处理器也可以不集成到处理器310中。
电子设备300还可以包括给各个部件供电的电源311(比如电池),优选的,电源311可以通过电源管理系统与处理器310逻辑相连,从而通过电源管理系统实现管理充电、放电、以及功耗管理等功能。
另外,电子设备300包括一些未示出的功能模块,在此不再赘述。
优选的,本发明实施例还提供一种电子设备,包括处理器310,存储器309,存储在存储器309上并可在所述处理器310上运行的计算机程序,该计算机程序被处理器310执行时实现上述视频显示方法实施例的各个过程,且能达到相同的技术效果,为避免重复,这里不再赘述。
本发明实施例还提供一种计算机可读存储介质,计算机可读存储介质上存储有计算机程序,该计算机程序被处理器执行时实现上述视频显示方法实施例的各个过程,且能达到相同的技术效果,为避免重复,这里不再赘述。
其中,所述的计算机可读存储介质可以包括非暂态存储器,如只读存储器(Read-Only Memory,ROM)、随机存取存储器(Random Access Memory,RAM)、磁碟或者光盘等。
需要说明的是,在本文中,术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、物品或者装置不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、物品或者装置所固有的要素。在没有更多限制的情况下,由语句“包括一个……”限定的要素,并不排除在包括该要素的过程、方法、物品或者装置中还存在另外的相同要素。
上面参考根据本发明的实施例的方法、装置(系统)和计算机程序产品的流程图和/或框图描述了本发明的各方面。应当理解，流程图和/或框图中的每个方框以及流程图和/或框图中各方框的组合可以由程序或指令实现。这些程序或指令可被提供给通用计算机、专用计算机、或其它可编程数据处理装置的处理器，以产生一种机器，使得经由计算机或其它可编程数据处理装置的处理器执行的这些程序或指令使能对流程图和/或框图的一个或多个方框中指定的功能/动作的实现。这种处理器可以是但不限于是通用处理器、专用处理器、特殊应用处理器或者现场可编程逻辑电路。还可理解，框图和/或流程图中的每个方框以及框图和/或流程图中的方框的组合，也可以由执行指定的功能或动作的专用硬件来实现，或可由专用硬件和计算机指令的组合来实现。
上面结合附图对本发明的实施例进行了描述,但是本发明并不局限于上述的具体实施方式,上述的具体实施方式仅仅是示意性的,而不是限制性的,本领域的普通技术人员在本发明的启示下,在不脱离本发明宗旨和权利要求所保护的范围情况下,还可做出很多形式,均属于本发明的保护之内。

Claims (15)

  1. 一种视频显示方法,应用于电子设备,所述方法包括:
    显示摄像头的拍摄预览界面;
    在所述拍摄预览界面中包括目标对象的情况下,在所述拍摄预览界面内,显示与所述目标对象关联的N个视频标识;
    其中,每个所述视频标识指示一个视频,N为正整数。
  2. 根据权利要求1所述的方法,其中,所述显示摄像头的拍摄预览界面之后,所述方法还包括:
    接收用户对所述拍摄预览界面中的至少一个对象的第一输入;
    响应于所述第一输入,将所述第一输入选择的所述至少一个对象作为所述目标对象。
  3. 根据权利要求2所述的方法,其中,所述接收用户对所述拍摄预览界面中的至少一个对象的第一输入,包括:
    接收用户在所述拍摄预览界面上的滑动输入;
    所述将所述第一输入选择的所述至少一个对象作为所述目标对象,包括:
    将所述滑动输入的滑动轨迹所圈选的至少一个对象作为所述目标对象。
  4. 根据权利要求1所述的方法,其中,所述在所述拍摄预览界面中包括目标对象的情况下,在所述拍摄预览界面中,显示与所述目标对象关联的N个视频标识,包括:
    在所述拍摄预览界面中包括所述目标对象的第一特征部分的情况下,显示与所述第一特征部分关联的N个视频的视频标识;
    在所述拍摄预览界面更新为所述目标对象的第二特征部分的情况下,将所述N个视频的视频标识更新为与所述第二特征部分关联的T个视频的视频标识;
    其中,T为正整数,所述目标对象的不同特征部分关联不同的视频标识。
  5. 根据权利要求1所述的方法，其中，所述在所述拍摄预览界面中，显示与所述目标对象关联的N个视频标识之前，所述方法还包括：
    接收用户的第二输入;
    响应于所述第二输入,调整所述摄像头的拍摄视场角至目标拍摄视场角;
    按照预设的拍摄视场角与时间区间的预设对应关系,确定所述目标拍摄视场角对应的目标时间区间;
    获取拍摄时间位于所述目标时间区间内且与所述目标对象关联的N个视频的视频标识。
  6. 根据权利要求1所述的方法,还包括:
    接收用户对所述N个视频标识中的第一视频标识的第三输入;
    响应于所述第三输入,播放所述第一视频标识对应的第一视频;
    接收用户的第四输入;
    响应于所述第四输入,对所述第一视频进行剪辑,输出剪辑后的所述第一视频。
  7. 根据权利要求6所述的方法,其中,所述接收用户的第四输入,包括:
    接收用户对所述第一视频的播放界面的第四输入;
    所述接收用户的第四输入之后,还包括:
    响应于所述第四输入,在所述第一视频的播放界面上,显示所述第四输入所输入的第一信息。
  8. 根据权利要求6所述的方法,其中,所述响应于所述第四输入,对所述第一视频进行剪辑,输出剪辑后的所述第一视频,还包括:
    响应于所述第四输入,录制所述第一视频中的目标对象的第二视频;
    接收用户对所述第二视频的第五输入;
    响应于所述第五输入,将所述第二视频插入至所述第一视频中的第一位置;
    其中,所述第一位置为所述第五输入在所述第一视频的播放进度条上所输入的位置。
  9. 根据权利要求8所述的方法，其中，所述将所述第二视频插入至所述第一视频中的第一位置之后，所述方法还包括：
    在所述第一视频的播放进度条上的第一位置,显示所述第二视频的指示标识;
    其中,所述指示标识用于指示所述第二视频的插入位置。
  10. 根据权利要求9所述的方法,其中,所述在所述第一视频的播放进度条上的第一位置,显示所述第二视频的指示标识之后,还包括:
    接收用户对所述指示标识的第六输入;
    响应于所述第六输入,将所述指示标识从所述第一位置移动至第二位置,并将所述第二视频插入至所述第一视频中的所述第二位置。
  11. 根据权利要求9或10所述的方法,其中,所述在所述第一视频的播放进度条上的第一位置,显示所述第二视频的指示标识之后,所述方法还包括:
    接收用户对所述指示标识的第七输入;
    响应于所述第七输入,删除所述第一视频中已插入的所述第二视频,并消除所述指示标识的显示。
  12. 根据权利要求1所述的方法,其中,所述在所述拍摄预览界面中,显示与所述目标对象关联的N个视频标识之后,还包括:
    接收用户对所述N个视频标识中的M个目标视频标识的第八输入;
    响应于所述第八输入,将所述M个目标视频标识指示的M个目标视频进行视频拼接,输出第三视频;
    其中,M为大于1的整数,M≤N。
  13. 一种电子设备,包括:
    第一显示模块,用于显示摄像头的拍摄预览界面;
    第二显示模块,用于在所述拍摄预览界面中包括目标对象的情况下,在所述拍摄预览界面内,显示与所述目标对象关联的N个视频标识,其中,每个所述视频标识指示一个视频,N为正整数。
  14. 一种电子设备,包括处理器、存储器及存储在所述存储器上并可在所述处理器上运行的计算机程序,所述计算机程序被所述处理器执行时实现如权利要求1至12中任一项所述的视频显示方法的步骤。
  15. 一种计算机可读存储介质,所述计算机可读存储介质上存储计算机程序,所述计算机程序被处理器执行时实现如权利要求1至12中任一项所述的视频显示方法的步骤。
PCT/CN2020/130920 2019-11-29 2020-11-23 视频显示方法、电子设备以及介质 WO2021104209A1 (zh)

Priority Applications (4)

Application Number Priority Date Filing Date Title
EP20893151.9A EP4068754B1 (en) 2019-11-29 2020-11-23 Video display method, electronic device and medium
KR1020227020704A KR102719323B1 (ko) 2019-11-29 2020-11-23 비디오 표시 방법, 전자기기 및 매체
JP2022524947A JP7393541B2 (ja) 2019-11-29 2020-11-23 ビデオ表示方法、電子機器及び媒体
US17/825,692 US20220284928A1 (en) 2019-11-29 2022-05-26 Video display method, electronic device and medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911204479.X 2019-11-29
CN201911204479.XA CN110913141B (zh) 2019-11-29 2019-11-29 一种视频显示方法、电子设备以及介质

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/825,692 Continuation US20220284928A1 (en) 2019-11-29 2022-05-26 Video display method, electronic device and medium

Publications (1)

Publication Number Publication Date
WO2021104209A1 true WO2021104209A1 (zh) 2021-06-03

Family

ID=69820771

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/130920 WO2021104209A1 (zh) 2019-11-29 2020-11-23 视频显示方法、电子设备以及介质

Country Status (5)

Country Link
US (1) US20220284928A1 (zh)
EP (1) EP4068754B1 (zh)
JP (1) JP7393541B2 (zh)
CN (1) CN110913141B (zh)
WO (1) WO2021104209A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114390205A (zh) * 2022-01-29 2022-04-22 西安维沃软件技术有限公司 拍摄方法、装置和电子设备
CN115309317A (zh) * 2022-08-08 2022-11-08 北京字跳网络技术有限公司 媒体内容获取方法、装置、设备、可读存储介质及产品

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110913141B (zh) * 2019-11-29 2021-09-21 维沃移动通信有限公司 一种视频显示方法、电子设备以及介质
CN111638817A (zh) * 2020-04-27 2020-09-08 维沃移动通信有限公司 目标对象显示方法及电子设备
CN112905837A (zh) * 2021-04-09 2021-06-04 维沃移动通信(深圳)有限公司 视频文件处理方法、装置及电子设备
CN113473224B (zh) * 2021-06-29 2023-05-23 北京达佳互联信息技术有限公司 视频处理方法、装置、电子设备及计算机可读存储介质
CN113727140A (zh) * 2021-08-31 2021-11-30 维沃移动通信(杭州)有限公司 音视频处理方法、装置和电子设备
WO2023155143A1 (zh) * 2022-02-18 2023-08-24 北京卓越乐享网络科技有限公司 视频制作方法、装置、电子设备、存储介质和程序产品

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110243474A1 (en) * 2010-04-06 2011-10-06 Canon Kabushiki Kaisha Video image processing apparatus and video image processing method
CN105426035A (zh) * 2014-09-15 2016-03-23 三星电子株式会社 用于提供信息的方法和电子装置
CN106162355A (zh) * 2015-04-10 2016-11-23 北京云创视界科技有限公司 视频交互方法及终端
CN106488002A (zh) * 2015-08-28 2017-03-08 Lg电子株式会社 移动终端
CN106658199A (zh) * 2016-12-28 2017-05-10 网易传媒科技(北京)有限公司 一种视频内容的展示方法及装置
CN108124167A (zh) * 2016-11-30 2018-06-05 阿里巴巴集团控股有限公司 一种播放处理方法、装置和设备
CN110121093A (zh) * 2018-02-06 2019-08-13 优酷网络技术(北京)有限公司 视频中目标对象的搜索方法及装置
CN110913141A (zh) * 2019-11-29 2020-03-24 维沃移动通信有限公司 一种视频显示方法、电子设备以及介质

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190215449A1 (en) * 2008-09-05 2019-07-11 Telefonaktiebolaget Lm Ericsson (Publ) Mobile terminal and method of performing multi-focusing and photographing image including plurality of objects using the same
US10349077B2 (en) * 2011-11-21 2019-07-09 Canon Kabushiki Kaisha Image coding apparatus, image coding method, image decoding apparatus, image decoding method, and storage medium
KR102206184B1 (ko) * 2014-09-12 2021-01-22 삼성에스디에스 주식회사 동영상 내 객체 관련 정보 검색 방법 및 동영상 재생 장치
KR102334618B1 (ko) * 2015-06-03 2021-12-03 엘지전자 주식회사 이동 단말기 및 그 제어 방법
KR101705595B1 (ko) * 2015-07-10 2017-02-13 (주) 프람트 데이터 구조화를 통한 직관적인 동영상콘텐츠 재생산 방법 및 그 장치
CN107239203A (zh) * 2016-03-29 2017-10-10 北京三星通信技术研究有限公司 一种图像管理方法和装置
JP2017182603A (ja) 2016-03-31 2017-10-05 株式会社バンダイナムコエンターテインメント プログラム及びコンピュータシステム
CN105933538B (zh) * 2016-06-15 2019-06-07 维沃移动通信有限公司 一种移动终端的视频查找方法及移动终端
JP2018006961A (ja) 2016-06-30 2018-01-11 カシオ計算機株式会社 画像処理装置、動画像選択方法及びプログラム
CN106384264A (zh) * 2016-09-22 2017-02-08 深圳市金立通信设备有限公司 一种信息查询方法及终端
US20180314698A1 (en) * 2017-04-27 2018-11-01 GICSOFT, Inc. Media sharing based on identified physical objects

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110243474A1 (en) * 2010-04-06 2011-10-06 Canon Kabushiki Kaisha Video image processing apparatus and video image processing method
CN105426035A (zh) * 2014-09-15 2016-03-23 三星电子株式会社 用于提供信息的方法和电子装置
CN106162355A (zh) * 2015-04-10 2016-11-23 北京云创视界科技有限公司 视频交互方法及终端
CN106488002A (zh) * 2015-08-28 2017-03-08 Lg电子株式会社 移动终端
CN108124167A (zh) * 2016-11-30 2018-06-05 阿里巴巴集团控股有限公司 一种播放处理方法、装置和设备
CN106658199A (zh) * 2016-12-28 2017-05-10 网易传媒科技(北京)有限公司 一种视频内容的展示方法及装置
CN110121093A (zh) * 2018-02-06 2019-08-13 优酷网络技术(北京)有限公司 视频中目标对象的搜索方法及装置
CN110913141A (zh) * 2019-11-29 2020-03-24 维沃移动通信有限公司 一种视频显示方法、电子设备以及介质

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP4068754A4 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114390205A (zh) * 2022-01-29 2022-04-22 西安维沃软件技术有限公司 拍摄方法、装置和电子设备
CN114390205B (zh) * 2022-01-29 2023-09-15 西安维沃软件技术有限公司 拍摄方法、装置和电子设备
CN115309317A (zh) * 2022-08-08 2022-11-08 北京字跳网络技术有限公司 媒体内容获取方法、装置、设备、可读存储介质及产品

Also Published As

Publication number Publication date
JP7393541B2 (ja) 2023-12-06
CN110913141A (zh) 2020-03-24
EP4068754A1 (en) 2022-10-05
JP2023502326A (ja) 2023-01-24
CN110913141B (zh) 2021-09-21
EP4068754B1 (en) 2024-08-21
EP4068754A4 (en) 2023-01-04
KR20220098027A (ko) 2022-07-08
US20220284928A1 (en) 2022-09-08

Similar Documents

Publication Publication Date Title
WO2021104209A1 (zh) 视频显示方法、电子设备以及介质
WO2021036536A1 (zh) 视频拍摄方法及电子设备
CN108089788B (zh) 一种缩略图显示控制方法及移动终端
WO2021136134A1 (zh) 视频处理方法、电子设备及计算机可读存储介质
WO2021078116A1 (zh) 视频处理方法及电子设备
WO2019137429A1 (zh) 图片处理方法及移动终端
WO2020042890A1 (zh) 视频处理方法、终端及计算机可读存储介质
WO2019196929A1 (zh) 一种视频数据处理方法及移动终端
WO2021104236A1 (zh) 一种共享拍摄参数的方法及电子设备
CN108108114A (zh) 一种缩略图显示控制方法及移动终端
WO2020020134A1 (zh) 拍摄方法及移动终端
KR20180133743A (ko) 이동 단말기 및 그 제어 방법
WO2021036553A1 (zh) 图标显示方法及电子设备
CN111050070B (zh) 视频拍摄方法、装置、电子设备及介质
WO2021179991A1 (zh) 音频处理方法及电子设备
WO2021036623A1 (zh) 显示方法及电子设备
WO2021036659A1 (zh) 视频录制方法及电子设备
WO2020233323A1 (zh) 显示控制方法、终端设备及计算机可读存储介质
CN111010610A (zh) 一种视频截图方法及电子设备
CN110719527A (zh) 一种视频处理方法、电子设备及移动终端
CN111177420B (zh) 一种多媒体文件显示方法、电子设备及介质
WO2020238911A1 (zh) 消息发送方法及终端
KR20180131908A (ko) 이동 단말기 및 그것의 동작방법
WO2021129818A1 (zh) 视频播放方法及电子设备
WO2020011080A1 (zh) 显示控制方法及终端设备

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20893151

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2022524947

Country of ref document: JP

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 20227020704

Country of ref document: KR

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2020893151

Country of ref document: EP

Effective date: 20220629