
CN104866477B - Information processing method and electronic equipment - Google Patents

Info

Publication number: CN104866477B
Authority: CN (China)
Prior art keywords: frame data, video, electronic device, output unit, audio
Prior art / priority / filing date: 2014-02-21
Legal status: Active (granted)
Application number: CN201410059057.9A
Other languages: Chinese (zh)
Other versions: CN104866477A (en)
Inventor: 邓袁圆
Current Assignee: Lenovo Beijing Ltd
Original Assignee: Lenovo Beijing Ltd
Application filed by Lenovo Beijing Ltd
Publication of CN104866477A: 2015-08-26
Application granted; publication of CN104866477B: 2021-08-17

Landscapes

  • Telephone Function (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The invention discloses an information processing method and electronic equipment. The method comprises the following steps: obtaining video data; calling a video player; parsing the video data through the video player and outputting video frame data of the parsed video data through a display output unit; outputting audio frame data of the parsed video data through an audio output unit; and, during the outputting of the video frame data and the audio frame data of the parsed video data, displaying an object identifier for indicating that a recognition search is performed on part of the audio frame data in the video data. With the method or the electronic equipment, the object identifier can prompt the user, during the playing of the video data, to perform a recognition search on the audio frame data in the video data through the electronic equipment, which simplifies the process of extracting the audio information and makes it convenient for the user to operate on the audio information of interest.

Description

Information processing method and electronic equipment
Technical Field
The present invention relates to the field of data processing, and in particular, to an information processing method and an electronic device.
Background
Modern electronic devices usually have a video playing function, and most of the video files they play contain audio information. When a user is interested in the audio information in a video file, the user can only save the video file and then, while playing it, extract the audio track from it with a separate professional tool to generate an audio file.
In the prior art, therefore, a user watching a video file cannot efficiently and quickly obtain an audio file from the current video file.
Disclosure of Invention
The invention aims to provide an information processing method and electronic equipment, which can extract and store audio information in a video file played by the electronic equipment, simplify the process of extracting the audio information, and make it convenient for a user to operate on audio information of interest.
In order to achieve the purpose, the invention provides the following scheme:
an information processing method applied to a first electronic device having a display output unit and an audio output unit, the method comprising:
obtaining video data;
calling a video player;
analyzing the video data through the video player and outputting video frame data of the analyzed video data through the display output unit;
outputting the audio frame data of the analyzed video data through the audio output unit;
displaying an object identifier for indicating that a recognition search is performed for a part of audio frame data in the video data during the process of outputting the video frame data of the parsed video data through the display output unit and outputting the audio frame data of the parsed video data through the audio output unit.
Optionally, after displaying the object identifier, the method further includes:
obtaining a first trigger operation;
responding to the first trigger operation, and calling a first music searching program of the first electronic equipment;
obtaining first audio frame data currently output in the video data based on the first music search program;
and determining the music parameter corresponding to the first audio frame data.
Optionally, after displaying the object identifier, the method further includes:
obtaining a second trigger operation;
generating a first configuration message in response to the second trigger operation;
the first configuration message is used for setting a second electronic device to enter a working state and starting a second music searching program of the second electronic device;
the second electronic device is an electronic device which establishes a connection relationship with the first electronic device.
Optionally, the displaying of the object identifier specifically includes: displaying the object identifier when a predetermined condition is met.
Optionally, the predetermined condition is that the video frame data output by the display output unit includes a specific object;
displaying the object identifier when the predetermined condition is met, specifically including:
determining whether the specific object is included in the video frame data output through the display output unit through image analysis;
displaying the object identifier when the specific object is included in the video frame data output by the display output unit;
and/or,
the predetermined condition is that the parameter of the audio frame data output by the audio output unit meets a specific condition;
and displaying the object identifier when the parameter of the audio frame data output by the audio output unit meets the specific condition.
Optionally, after the displaying the object identifier when the predetermined condition is satisfied, the method further includes:
and sending second configuration information to second electronic equipment which is connected with the first electronic equipment, wherein the second configuration information is used for setting the second electronic equipment to enter a working state and starting a second music search program of the second electronic equipment, so that the second electronic equipment displays a graphical interaction interface of the second music search program when a screen of the second electronic equipment is lit up.
An electronic apparatus having a display output unit and an audio output unit, the electronic apparatus comprising:
a video data acquisition unit for acquiring video data;
the video player calling unit is used for calling the video player;
a video frame data output unit, configured to parse the video data through the video player and output video frame data of the parsed video data through the display output unit;
the audio frame data output unit is used for outputting the audio frame data of the analyzed video data through the audio output unit;
and the object identifier display unit is used for displaying an object identifier in the process of outputting the video frame data of the parsed video data through the display output unit and outputting the audio frame data of the parsed video data through the audio output unit, wherein the object identifier is used for indicating that identification search is carried out on partial audio frame data in the video data.
Optionally, the electronic device further includes:
a first trigger operation acquisition unit for acquiring a first trigger operation after the object identifier is displayed;
a first music search program calling unit, configured to call a first music search program of the first electronic device in response to the first trigger operation;
a first audio frame data acquisition unit configured to acquire first audio frame data currently output in the video data based on the first music search program;
and the music parameter determining unit is used for determining the music parameter corresponding to the first audio frame data.
Optionally, the electronic device is a first electronic device, and after the object identifier is displayed, the electronic device further includes:
a second trigger operation acquisition unit configured to acquire a second trigger operation after the object identifier is displayed;
a first configuration message generating unit, configured to generate a first configuration message in response to the second trigger operation;
the first configuration message is used for setting a second electronic device to enter a working state and starting a second music searching program of the second electronic device;
the second electronic device is an electronic device which establishes a connection relationship with the first electronic device.
Optionally, the object identifier display unit is specifically configured to: display the object identifier when a predetermined condition is met.
Optionally, the predetermined condition is that the video frame data output by the display output unit includes a specific object;
the object identifier display unit specifically includes:
a first determining subunit operable to determine, through image analysis, whether the specific object is included in the video frame data output through the display output unit;
a first display subunit, configured to display the object identifier when the specific object is included in the video frame data output by the display output unit;
and/or,
the predetermined condition is that the parameter of the audio frame data output by the audio output unit meets a specific condition;
the object identifier display unit specifically includes:
a second determining subunit for determining, through audio analysis, whether a parameter of the audio frame data output through the audio output unit meets a specific condition;
and the second display subunit is used for displaying the object identifier when the parameter of the audio frame data output by the audio output unit meets the specific condition.
Optionally, the electronic device is a first electronic device, and further includes:
and the second configuration information sending unit is used for sending second configuration information to second electronic equipment which is connected with the first electronic equipment after the object identification is displayed when a preset condition is met, wherein the second configuration information is used for setting the second electronic equipment to enter a working state and starting a second music search program of the second electronic equipment, so that the second electronic equipment displays a graphical interaction interface of the second music search program when a screen of the second electronic equipment is lit up.
According to the specific embodiment provided by the invention, the invention discloses the following technical effects:
the information processing method and the electronic device of the invention display the object identifier in the process of outputting the video frame data of the analyzed video data through the display output unit and outputting the audio frame data of the analyzed video data through the audio output unit, the object identifier is used for indicating that a recognition search is performed for a part of audio frame data in the video data, the user can be prompted to carry out identification search on the audio frame data in the video data through the electronic equipment through the object identification in the playing process of the video data, so that the user can directly search the audio frame data in the video data without saving the video file for enjoying music in the video file, therefore, the process of extracting the audio information is simplified, the working efficiency of the electronic equipment is improved, the user operation is simplified, and the user can operate interested audio information conveniently.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed to be used in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings without inventive exercise.
FIG. 1 is a flowchart of an information processing method embodiment 1 of the present invention;
FIG. 2 is a flow chart of an embodiment 2 of the information processing method of the present invention;
FIG. 3 is a flow chart of an embodiment 3 of the information processing method of the present invention;
FIG. 4 is a flowchart of an embodiment 4 of the information processing method of the present invention;
FIG. 5 is a flow chart of an embodiment 5 of the information processing method of the present invention;
fig. 6 is a block diagram of an embodiment of an electronic device of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
The information processing method of the present invention is applied to an electronic device having a display output unit and an audio output unit. The electronic device can be a mobile phone, a tablet computer or a television and the like. The display output unit may be various types of displays, such as a touch screen, a liquid crystal screen, or an LED screen, etc. The audio output unit may be various types of speakers.
Fig. 1 is a flowchart of an information processing method embodiment 1 of the present invention. As shown in fig. 1, the method may include:
step 101: obtaining video data;
the video data may be data in various types of video files. The format of the video file may be RMVB or AVI, etc.
The video data may be local data stored on the electronic device or network data acquired through a network.
For example, when the electronic device is a mobile phone, the video data may be data of a video file already stored on a memory card of the mobile phone, or data of a video file browsed online through a video website. When the electronic device is a television, the video data may also be video data transmitted through a broadcast television information network.
Step 102: calling a video player;
the video player may be various types of video players. The video player can play local video data and can also play video data on a network.
Step 103: analyzing the video data through the video player and outputting video frame data of the analyzed video data through the display output unit;
the video player has a function of parsing video data. The video data typically includes both video frame data and audio frame data. For example, video data in a video file of a movie includes both video frame data for displaying pictures of the movie and audio frame data for representing sounds of the movie.
The video frame data of the analyzed video data can be output through the display output unit, so that the picture playing is realized.
Step 104: outputting the audio frame data of the analyzed video data through the audio output unit;
the audio output unit can output the audio frame data of the analyzed video data, so that the sound can be played.
Step 105: displaying an object identifier for indicating that a recognition search is performed for a part of audio frame data in the video data during the process of outputting the video frame data of the parsed video data through the display output unit and outputting the audio frame data of the parsed video data through the audio output unit.
The object identifier may be an identifier of a text type, or an identifier of an image type, such as an icon.
In the process of playing the video data of the video file by the electronic device, if the output audio frame data meets certain specific conditions, the electronic device may display the object identifier on a video playing interface of the video player to prompt a user that the audio frame data of the currently played video file meets the specific conditions. After the user sees the object identifier, if some specific operation is performed, the electronic device or another electronic device may be instructed to start a music search program to perform an identification search on a part of the audio frame data in the video data.
Or, in the process of playing the video data of the video file by the electronic device, the electronic device may continuously display the object identifier on a video playing interface of the video player. When a user finds interesting music in the process of watching the video file, a specific operation can be executed based on the object identifier, so that the electronic device or another electronic device is instructed to start a music search program to perform an identification search on part of the audio frame data in the video data.
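The two display policies just described (show the identifier only while the output audio meets a condition, or show it continuously) can be summarized in a small hedged sketch. The policy names and the looks_like_music predicate below are illustrative assumptions, not terms used by the disclosure.

```python
# Sketch of step 105: when to overlay the object identifier on the playing interface.

def should_show_identifier(audio_frame, policy="conditional", looks_like_music=None):
    if policy == "always":
        # The identifier stays on the playing interface for the whole playback.
        return True
    # Conditional policy: show the identifier only while the current audio frame
    # data meets the specific condition (for example, it sounds like music).
    return looks_like_music is not None and looks_like_music(audio_frame)

# Example with a trivial stand-in predicate that flags frames tagged as music.
print(should_show_identifier({"tag": "music"},
                             looks_like_music=lambda f: f["tag"] == "music"))
```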
In summary, in this embodiment, an object identifier is displayed in the process of outputting the video frame data of the parsed video data through the display output unit and outputting the audio frame data of the parsed video data through the audio output unit, where the object identifier is used to indicate that a recognition search is performed on part of the audio frame data in the video data. In the playing process of the video data, the object identifier can thus prompt the user to perform a recognition search, through the electronic device, on the audio frame data in the video data, and the user can directly search for the audio frame data in the video data without saving the video file in order to enjoy the music in it, thereby simplifying the process of extracting the audio information and making it convenient for the user to operate on audio information of interest.
Fig. 2 is a flowchart of an information processing method embodiment 2 of the present invention. In this embodiment, the electronic device is referred to as a first electronic device. As shown in fig. 2, the method may include:
step 201: obtaining video data;
step 202: calling a video player;
step 203: analyzing the video data through the video player and outputting video frame data of the analyzed video data through the display output unit;
step 204: outputting the audio frame data of the analyzed video data through the audio output unit;
step 205: displaying an object identifier for indicating that a recognition search is performed for a part of audio frame data in the video data during the process of outputting the video frame data of the parsed video data through the display output unit and outputting the audio frame data of the parsed video data through the audio output unit.
Step 206: obtaining a first trigger operation;
the first trigger operation may be a click operation of the user on the object identifier.
Step 207: responding to the first trigger operation, and calling a first music searching program of the first electronic equipment;
After the first trigger operation is detected, a connection between the first electronic device and a network may be established in response to the first trigger operation, so that the first electronic device is in a networked state.
A first music search program on the first electronic device may also be invoked. The first music search program may acquire the audio frame data of the parsed video data and search for audio file information corresponding to the acquired audio frame data.
Step 208: obtaining first audio frame data currently output in the video data based on the first music search program;
Specifically, the currently output first audio frame data in the video data may be obtained in either of the following two ways.
One way is to intercept audio frame data to be played through a speaker of the electronic device before an audio output unit of the electronic device outputs audio information corresponding to the audio frame data.
Another way is that after the audio output unit of the electronic device outputs the audio information corresponding to the audio frame data, the audio acquisition unit picks up the audio information played by the speaker of the electronic device, and the picked-up audio information is used as the audio frame data.
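The two capture routes can be sketched as follows; the decoder tap, the recorder object and their method names are hypothetical stand-ins, since the disclosure only specifies whether the audio is taken before or after the audio output unit plays it.

```python
# Hedged sketch of the two capture routes for step 208.

def capture_from_decoder(player, seconds):
    """Route 1: intercept decoded audio frame data before the speaker plays it."""
    captured, total = [], 0.0
    for frame in player.decoded_audio_frames():   # hypothetical tap on the player
        captured.append(frame)
        total += frame.duration
        if total >= seconds:
            break
    return captured

def capture_from_microphone(recorder, seconds):
    """Route 2: pick up the sound after playback through an audio acquisition unit."""
    return recorder.record(seconds)               # hypothetical microphone recorder
```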
Step 209: and determining the music parameter corresponding to the first audio frame data.
Through the first music search program, a search can be performed according to the currently output first audio frame data in the video data. An audio file matching the first audio frame data is found, and the music parameters corresponding to that audio file are determined. For example, when the currently output first audio frame data in the video data is a segment of the melody of the song "The Moon Represents My Heart", the first music search program may find that the song corresponding to the first audio frame data is "The Moon Represents My Heart", and may determine the corresponding music parameters such as the song title and the singer's name. The song can also be downloaded and played through the network.
It should be noted that, in this embodiment, the user may determine the first audio frame data that needs to be acquired by clicking the object identifier. The playing progress of the video file when the user clicks the object identifier for the first time may be taken as the starting time point of the first audio frame data, and the playing progress when the user clicks the object identifier for the second time may be taken as the ending time point of the first audio frame data. The audio frame data between the starting time point and the ending time point is determined as the first audio frame data. When searching for a corresponding audio file, the first music search program may then perform the search based on the audio frame data between the starting time point and the ending time point.
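A minimal sketch of this click-to-click selection is given below, assuming a hypothetical RangeSelector that is notified on each click on the object identifier together with the current playback position; the search_program object and its search() method are likewise placeholders.

```python
# Sketch of the click-to-click selection: the first tap marks the starting time
# point, the second tap marks the ending time point, and the frames in between
# become the first audio frame data passed to the music search program.

class RangeSelector:
    def __init__(self):
        self.start = None

    def on_identifier_clicked(self, playback_position, audio_frames, search_program):
        if self.start is None:
            self.start = playback_position            # first click: starting time point
            return None
        start, end = self.start, playback_position    # second click: ending time point
        self.start = None
        selected = [f for f in audio_frames if start <= f.timestamp <= end]
        return search_program.search(selected)        # step 209: determine music parameters
```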
Fig. 3 is a flowchart of an information processing method embodiment 3 of the present invention. The main difference between embodiment 3 and embodiment 2 is that in the present embodiment, the electronic device that performs the recognition search for the audio frame data is a second electronic device different from the first electronic device.
As shown in fig. 3, the method may include:
step 301: obtaining video data;
step 302: calling a video player;
step 303: analyzing the video data through the video player and outputting video frame data of the analyzed video data through the display output unit;
step 304: outputting the audio frame data of the analyzed video data through the audio output unit;
step 305: displaying an object identifier for indicating that a recognition search is performed for a part of audio frame data in the video data during the process of outputting the video frame data of the parsed video data through the display output unit and outputting the audio frame data of the parsed video data through the audio output unit.
Step 306: obtaining a second trigger operation;
the second trigger operation may be a click operation of the object identifier by a user, an operation on an operation input unit such as a key or a touch panel on the first electronic device, or an operation on an operation input unit such as a key or a touch panel on a controller of the first electronic device.
Step 307: generating a first configuration message in response to the second trigger operation;
the first configuration message is used for setting the second electronic equipment to enter a working state and starting a second music searching program of the second electronic equipment.
The second electronic device is an electronic device which establishes a connection relationship with the first electronic device.
Specifically, the first electronic device may be a television, and the second electronic device may be a mobile phone. The second electronic device and the first electronic device can be connected in a wireless or wired manner; for example, the first electronic device and the second electronic device may establish a data connection over Wi-Fi or Bluetooth. The second electronic device may have a second music search program installed on it. The second music search program may be a program dedicated to identifying and searching for the music information corresponding to audio frame data based on that audio frame data.
The first configuration message may configure the second electronic device. For example, the first electronic device and the second electronic device are connected in a local area network over Bluetooth or Wi-Fi, but the second electronic device is not connected to an external network; the first configuration message may then set the second electronic device to a state of being connected to an external network. For another example, when the second electronic device is a mobile phone and the mobile phone is in a standby state with its screen off, the first configuration message may set the mobile phone to an operating state, and after receiving the first configuration message, the mobile phone may automatically turn on the screen and start the second music search program.
A specific example of an application scenario of this embodiment may be that the first electronic device is a television and the second electronic device is a mobile phone. While the user is watching television, the television displays an object identifier in the shape of a musical note on its screen. When the user is interested in a sound segment in a television program, the user can perform the second trigger operation by clicking a key on the remote controller of the television (which can be a mobile phone with a television remote control function), so that the television generates the first configuration message. The television sends the first configuration message to the mobile phone. The mobile phone then acquires the audio frame data currently played by the television, sends the audio frame data to a server on the network for analysis and identification, receives the identification result sent by the server, and displays the identification result on its screen. The identification result may be song information corresponding to the audio frame data, such as the song title and the singer's name. The mobile phone can also download the song corresponding to the audio frame data through the second music search program.
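The television-and-mobile-phone scenario can be sketched as the following message flow; the field names of the first configuration message, the connection object and the recognition_server are assumptions made for illustration, not a protocol defined by the disclosure.

```python
# Hedged sketch of the scenario above: trigger -> first configuration message ->
# the phone wakes, captures the TV's current audio, and asks a server to identify it.

def on_second_trigger(connection):
    config = {"enter_working_state": True,           # wake the second electronic device
              "start_program": "music_search"}       # start its second music search program
    connection.send(config)                          # e.g. over Wi-Fi or Bluetooth

def on_config_received(phone, config, tv_audio, recognition_server):
    if config.get("enter_working_state"):
        phone.wake_screen()                          # leave standby, light the screen
    if config.get("start_program") == "music_search":
        clip = tv_audio.capture(seconds=10)          # audio currently played by the TV
        result = recognition_server.identify(clip)   # server-side analysis and identification
        phone.display(result)                        # e.g. song title and singer's name
```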
The above two embodiments provide methods in which the user manually triggers the retrieval of the audio frame data; that is, the process of performing the identification search on the audio frame data is triggered by a trigger operation of the user. In practical application, the process of performing the identification search on the audio frame data can also be triggered automatically by the electronic device.
Fig. 4 is a flowchart of an information processing method embodiment 4 of the present invention. As shown in fig. 4, the method may include:
step 401: obtaining video data;
step 402: calling a video player;
step 403: analyzing the video data through the video player and outputting video frame data of the analyzed video data through the display output unit;
step 404: outputting the audio frame data of the analyzed video data through the audio output unit;
step 405: determining whether the video frame data output through the display output unit includes the specific object through image analysis in a process of outputting the video frame data of the parsed video data through the display output unit and outputting the audio frame data of the parsed video data through the audio output unit;
the specific object may be a microphone and character image, etc. Because, usually, when a microphone and a character image exist in a video picture at the same time, the corresponding audio frame data is a song that the character image sings through the microphone. It is possible to determine whether the current audio frame data has sound information that needs to be identified and searched according to whether the specific object exists in the video frame data. Such sound information is largely sound information of interest to the user.
Step 406: when a specific object is included in the video frame data output by the display output unit, the object identifier is displayed, and the object identifier is used for indicating that a recognition search is performed on a part of audio frame data in the video data.
The object identifier may be a musical-note icon, or may be a text prompt such as "music", indicating that the current audio frame data has been automatically identified as data meeting the predetermined condition; the first electronic device may then control the second electronic device to automatically perform the identification search on the audio frame data.
Step 407: sending second configuration information to a second electronic device that is connected with the first electronic device, wherein the second configuration information is used for setting the second electronic device to enter a working state and starting a second music search program of the second electronic device, so that the second electronic device displays a graphical interaction interface of the second music search program when a screen of the second electronic device is lit up.
The second electronic device may be a mobile phone and the first electronic device may be a television. Alternatively, the first electronic device is a desktop computer, the second electronic device is a cell phone, or the like. The second configuration information may set the second electronic device to enter an operating state and start a second music search program of the second electronic device. When the second electronic device is a mobile phone, the mobile phone may be in a standby state. In the standby state, the screen of the mobile phone is turned off. And after receiving the second configuration information, the mobile phone can automatically light the screen and start the second music searching program. In addition, after the mobile phone receives the second configuration information, the second music search program can be started to perform recognition search on the current audio frame data, but the off state of the screen can be maintained, and when a user picks up the mobile phone or manually lights up the screen of the mobile phone, the graphical interactive interface of the second music search program can be directly displayed. And the graphical interactive interface can display the result of the identification search of the current audio frame data.
The posture of the mobile phone can be monitored with devices such as a gravity sensor or a gyroscope. When the posture of the mobile phone changes, it can be determined that the mobile phone has been picked up by the user. At this time, the screen may be automatically turned on and the graphical interactive interface of the second music search program may be displayed. The graphical interactive interface may also display the result of the recognition search on the current audio frame data. In this way, the operation of the user can be greatly simplified, and the user can more easily obtain the music information corresponding to the audio frame data in the currently played video data.
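The pick-up detection can be sketched as a simple polling loop; the sensor, phone and search_ui objects and the chosen threshold are illustrative assumptions made for the sketch.

```python
import time

# Sketch of the pick-up detection described above: gravity-sensor/gyroscope readings
# are polled, and a large enough change in posture is treated as "the phone has been
# picked up", at which point the screen is lit and the search interface is shown.

def watch_for_pickup(sensor, phone, search_ui, threshold=0.5, poll_interval=0.1):
    baseline = sensor.read()                      # resting (x, y, z) acceleration
    while True:
        reading = sensor.read()
        change = sum(abs(a - b) for a, b in zip(reading, baseline))
        if change > threshold:                    # posture changed: the user picked up the phone
            phone.wake_screen()
            search_ui.show_latest_result()        # graphical interface with the recognition result
            return
        time.sleep(poll_interval)
```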
Fig. 5 is a flowchart of an information processing method embodiment 5 of the present invention. As shown in fig. 5, the method may include:
step 501: obtaining video data;
step 502: calling a video player;
step 503: analyzing the video data through the video player and outputting video frame data of the analyzed video data through the display output unit;
step 504: outputting the audio frame data of the analyzed video data through the audio output unit;
step 505: determining that the parameters of the audio frame data output by the sound output unit meet specific conditions in the process of outputting the video frame data of the analyzed video data by the display output unit and outputting the audio frame data of the analyzed video data by the audio output unit;
the specific condition may be that the frequency of the continuous piece of audio frame data conforms to the melody characteristics of the music piece, or may be that no character dialog appears in the continuous piece of audio frame data, or may be that the timbre of the sound corresponding to the audio frame data conforms to the timbre of some musical instrument, and so on. The above conditions may each indicate to some extent that the audio information corresponding to the current audio frame data is a music piece.
The sound corresponding to these audio frame data is largely sound information of interest to the user.
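These audio-side conditions can be combined as in the sketch below; the three feature helpers (has_melody, has_speech, matches_instrument_timbre) are hypothetical placeholders for the signal-processing checks named in the text, not concrete implementations.

```python
# Sketch of the audio-side predetermined condition in steps 505-506.

def audio_meets_condition(frames, has_melody, has_speech, matches_instrument_timbre):
    return (has_melody(frames)                      # frequencies follow a melody
            or not has_speech(frames)               # no spoken dialogue in the segment
            or matches_instrument_timbre(frames))   # timbre matches some instrument

def maybe_show_identifier_for_audio(frames, ui, checks):
    if audio_meets_condition(frames, *checks):
        ui.show_object_identifier()                 # step 506
```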
Step 506: When the parameter of the audio frame data output by the audio output unit meets the specific condition, displaying the object identifier, wherein the object identifier is used for indicating that identification search is carried out on part of the audio frame data in the video data.
The object identifier may be a musical-note icon, or may be a text prompt such as "music", indicating that the current audio frame data has been automatically identified as data meeting the predetermined condition; the first electronic device may then control the second electronic device to automatically perform the identification search on the audio frame data.
Step 507: sending second configuration information to a second electronic device that is connected with the first electronic device, wherein the second configuration information is used for setting the second electronic device to enter a working state and starting a second music search program of the second electronic device, so that the second electronic device displays a graphical interaction interface of the second music search program when a screen of the second electronic device is lit up.
In the above two embodiments, the process of performing the identification search on the audio frame data may also be performed by the first electronic device. When the first electronic device performs identification search on the current audio frame data, the current video playing program can jump to the interface of the music searching program.
In addition, in the above two embodiments, the process of performing the identification search on the audio frame data may be triggered automatically by the electronic device. That is, the electronic device may automatically determine whether the currently played audio frame data contains sound information of the music or song type by performing image analysis on the video frame data in the video data or sound signal analysis (for example, sound spectrum analysis) on the audio frame data in the video data. When the currently played audio frame data is judged to contain sound information of the music or song type, the object identifier is displayed on the screen of the first electronic device, so that the user can be intelligently reminded to identify the music or song in the audio frame data of the current video data.
The invention also discloses an electronic device. The electronic device can be a mobile phone, a tablet computer or a television and the like. The electronic device has a display output unit and an audio output unit. The display output unit may be various types of displays, such as a touch screen, a liquid crystal screen, or an LED screen, etc. The audio output unit may be various types of speakers.
Fig. 6 is a block diagram of an embodiment of an electronic device of the present invention. As shown in fig. 6, the electronic device may include:
a video data acquisition unit 601 for obtaining video data;
a video player calling unit 602, configured to call a video player;
a video frame data output unit 603, configured to parse the video data through the video player and output video frame data of the parsed video data through the display output unit;
an audio frame data output unit 604, configured to output audio frame data of the parsed video data through the audio output unit;
an object identifier display unit 605, configured to display an object identifier, which is used to indicate that a recognition search is performed for a part of audio frame data in the video data, in a process of outputting video frame data of the parsed video data through the display output unit and outputting audio frame data of the parsed video data through the audio output unit.
The video data may be data in various types of video files. The format of the video file may be RMVB or AVI, etc.
The video data may be local data stored on the electronic device or network data acquired through a network.
For example, when the electronic device is a mobile phone, the video data may be data of a video file already stored on a memory card of the mobile phone, or data of a video file browsed online through a video website. When the electronic device is a television, the video data may also be video data transmitted through a broadcast television information network.
The video player may be various types of video players. The video player can play local video data and can also play video data on a network.
The video player has a function of parsing video data. The video data typically includes both video frame data and audio frame data. For example, video data in a video file of a movie includes both video frame data for displaying pictures of the movie and audio frame data for representing sounds of the movie.
The video frame data of the analyzed video data can be output through the display output unit, so that the picture playing is realized.
The audio output unit can output the audio frame data of the analyzed video data, so that the sound can be played.
The object identifier may be an identifier of a text type or an identifier of an image type.
In the process of playing the video data of the video file by the electronic device, if the output audio frame data meets certain specific conditions, the electronic device may display the object identifier on a video playing interface of the video player to prompt a user that the audio frame data of the currently played video file meets the specific conditions. After the user sees the object identifier, if some specific operation is performed, the electronic device or another electronic device may be instructed to start a music search program to perform an identification search on a part of the audio frame data in the video data.
Or, in the process of playing the video data of the video file by the electronic device, the electronic device may continuously display the object identifier on a video playing interface of the video player. When a user finds interesting music in the process of watching the video file, a specific operation can be executed based on the object identifier, so that the electronic device or another electronic device is instructed to start a music search program to perform an identification search on part of the audio frame data in the video data.
In summary, in this embodiment, an object identifier is displayed in the process of outputting the video frame data of the parsed video data through the display output unit and outputting the audio frame data of the parsed video data through the audio output unit, and the object identifier is used for indicating that a recognition search is performed on part of the audio frame data in the video data. During the playing of the video data, the object identifier can therefore prompt the user to perform a recognition search, through the electronic device, on the audio frame data in the video data, so that the user can directly search for the audio frame data in the video data without saving the video file merely to enjoy the music in it. This simplifies the process of extracting the audio information, improves the working efficiency of the electronic device, simplifies user operation, and makes it convenient for the user to operate on audio information of interest.
In practical applications, the electronic device may further include:
a first trigger operation acquisition unit for acquiring a first trigger operation after the object identifier is displayed;
a first music search program calling unit, configured to call a first music search program of the first electronic device in response to the first trigger operation;
a first audio frame data acquisition unit configured to acquire first audio frame data currently output in the video data based on the first music search program;
and the music parameter determining unit is used for determining the music parameter corresponding to the first audio frame data.
In practical application, the electronic device is a first electronic device, and after the object identifier is displayed, the electronic device further includes:
a second trigger operation acquisition unit configured to acquire a second trigger operation after the object identifier is displayed;
a first configuration message generating unit, configured to generate a first configuration message in response to the second trigger operation;
the first configuration message is used for setting a second electronic device to enter a working state and starting a second music searching program of the second electronic device;
the second electronic device is an electronic device which establishes a connection relationship with the first electronic device.
In practical applications, the object identifier display unit 605 may be specifically configured to: display the object identifier when a predetermined condition is met.
In practical application, the predetermined condition is that the video frame data output by the display output unit contains a specific object;
the object identifier display unit 605 may specifically include:
a first determining subunit operable to determine, through image analysis, whether the specific object is included in the video frame data output through the display output unit;
a first display subunit, configured to display the object identifier when the specific object is included in the video frame data output by the display output unit;
and/or,
the predetermined condition is that the parameter of the audio frame data output by the audio output unit meets a specific condition;
the object identifier display unit 605 may specifically include:
a second determining subunit for determining, through audio analysis, whether a parameter of the audio frame data output through the audio output unit meets a specific condition;
and the second display subunit is used for displaying the object identifier when the parameter of the audio frame data output by the audio output unit meets the specific condition.
In practical application, the electronic device is a first electronic device, and may further include:
and the second configuration information sending unit is used for sending second configuration information to second electronic equipment which is connected with the first electronic equipment after the object identification is displayed when a preset condition is met, wherein the second configuration information is used for setting the second electronic equipment to enter a working state and starting a second music search program of the second electronic equipment, so that the second electronic equipment displays a graphical interaction interface of the second music search program when a screen of the second electronic equipment is lit up.
Finally, it should also be noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the present invention may be implemented by software plus a necessary hardware platform, and certainly may be implemented by hardware, but in many cases, the former is a better embodiment. With this understanding in mind, all or part of the technical solutions of the present invention that contribute to the background can be embodied in the form of a software product, which can be stored in a storage medium, such as a ROM/RAM, a magnetic disk, an optical disk, etc., and includes instructions for causing a computer device (which can be a personal computer, a server, or a network device, etc.) to execute the methods according to the embodiments or some parts of the embodiments of the present invention.
The embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. For the electronic equipment disclosed by the embodiment, the description is relatively simple because the electronic equipment corresponds to the method disclosed by the embodiment, and the relevant part can be referred to the method part for description.
The principles and embodiments of the present invention have been described herein using specific examples, which are provided only to help understand the method and the core concept of the present invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, the specific embodiments and the application range may be changed. In view of the above, the present disclosure should not be construed as limiting the invention.

Claims (10)

1. An information processing method applied to a first electronic device having a display output unit and an audio output unit, the method comprising:
obtaining video data;
calling a video player;
analyzing the video data through the video player and outputting video frame data of the analyzed video data through the display output unit;
outputting the audio frame data of the analyzed video data through the audio output unit;
in the process of outputting the video frame data of the parsed video data through the display output unit and outputting the audio frame data of the parsed video data through the audio output unit, displaying an object identifier on a video playing interface of the video player, wherein the object identifier is used for indicating that identification search is carried out on partial audio frame data in the video data; the displaying of the object identifier is specifically: displaying the object identifier when a predetermined condition is met.
2. The method of claim 1, wherein after displaying the object identifier, the method further comprises:
obtaining a first trigger operation;
responding to the first trigger operation, and calling a first music searching program of the first electronic equipment;
obtaining first audio frame data currently output in the video data based on the first music search program;
and determining the music parameter corresponding to the first audio frame data.
3. The method of claim 1, wherein after displaying the object identifier, the method further comprises:
obtaining a second trigger operation;
generating a first configuration message in response to the second trigger operation;
the first configuration message is used for setting a second electronic device to enter a working state and starting a second music searching program of the second electronic device;
the second electronic device is an electronic device which establishes a connection relationship with the first electronic device.
4. The method according to claim 1, wherein the predetermined condition is that a specific object is included in video frame data output through the display output unit;
displaying the object identifier when the predetermined condition is met, specifically including:
determining whether the specific object is included in the video frame data output through the display output unit through image analysis;
displaying the object identifier when the specific object is included in the video frame data output by the display output unit;
and/or,
the predetermined condition is that the parameter of the audio frame data output by the audio output unit meets a specific condition;
and displaying the object identifier when the parameter of the audio frame data output by the audio output unit meets the specific condition.
5. The method according to claim 1, further comprising, after displaying the object identifier when the predetermined condition is satisfied:
and sending second configuration information to second electronic equipment which is connected with the first electronic equipment, wherein the second configuration information is used for setting the second electronic equipment to enter a working state and starting a second music search program of the second electronic equipment, so that the second electronic equipment displays a graphical interaction interface of the second music search program when a screen of the second electronic equipment is lit up.
6. An electronic apparatus having a display output unit and an audio output unit, comprising:
a video data acquisition unit for acquiring video data;
the video player calling unit is used for calling the video player;
a video frame data output unit, configured to parse the video data through the video player and output video frame data of the parsed video data through the display output unit;
the audio frame data output unit is used for outputting the audio frame data of the analyzed video data through the audio output unit;
an object identifier display unit, configured to display an object identifier on a video playing interface of the video player during outputting, by the display output unit, video frame data of the parsed video data and outputting, by the audio output unit, audio frame data of the parsed video data, where the object identifier is used to instruct to perform recognition search on a part of audio frame data in the video data; the object identifier display unit is specifically configured to: display the object identifier when a predetermined condition is met.
7. The electronic device of claim 6, further comprising:
a first trigger operation acquisition unit for acquiring a first trigger operation after the object identifier is displayed;
a first music search program calling unit, configured to call a first music search program of the electronic device in response to the first trigger operation;
a first audio frame data acquisition unit configured to acquire first audio frame data currently output in the video data based on the first music search program;
and the music parameter determining unit is used for determining the music parameter corresponding to the first audio frame data.
8. The electronic device of claim 6, wherein the electronic device is a first electronic device, and after the displaying the object identifier, the electronic device further comprises:
a second trigger operation acquisition unit configured to acquire a second trigger operation after the object identifier is displayed;
a first configuration message generating unit, configured to generate a first configuration message in response to the second trigger operation;
the first configuration message is used for setting a second electronic device to enter a working state and starting a second music searching program of the second electronic device;
the second electronic device is an electronic device which establishes a connection relationship with the first electronic device.
9. The electronic device according to claim 6, wherein the predetermined condition is that a specific object is included in video frame data output by the display output unit;
the object identifier display unit specifically includes:
a first determining subunit operable to determine, through image analysis, whether the specific object is included in the video frame data output through the display output unit;
a first display subunit, configured to display the object identifier when the specific object is included in the video frame data output by the display output unit;
and/or,
the predetermined condition is that the parameter of the audio frame data output by the audio output unit meets a specific condition;
the object identifier display unit specifically includes:
a second determining subunit for determining, through audio analysis, whether a parameter of the audio frame data output through the audio output unit meets a specific condition;
and the second display subunit is used for displaying the object identifier when the parameter of the audio frame data output by the audio output unit meets the specific condition.
10. The electronic device of claim 6, wherein the electronic device is a first electronic device, further comprising:
and the second configuration information sending unit is used for sending second configuration information to second electronic equipment which is connected with the first electronic equipment after the object identification is displayed when a preset condition is met, wherein the second configuration information is used for setting the second electronic equipment to enter a working state and starting a second music search program of the second electronic equipment, so that the second electronic equipment displays a graphical interaction interface of the second music search program when a screen of the second electronic equipment is lit up.
CN201410059057.9A 2014-02-21 2014-02-21 Information processing method and electronic equipment Active CN104866477B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410059057.9A CN104866477B (en) 2014-02-21 2014-02-21 Information processing method and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410059057.9A CN104866477B (en) 2014-02-21 2014-02-21 Information processing method and electronic equipment

Publications (2)

Publication Number Publication Date
CN104866477A CN104866477A (en) 2015-08-26
CN104866477B (en) 2021-08-17

Family

ID=53912316

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410059057.9A Active CN104866477B (en) 2014-02-21 2014-02-21 Information processing method and electronic equipment

Country Status (1)

Country Link
CN (1) CN104866477B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10153002B2 (en) * 2016-04-15 2018-12-11 Intel Corporation Selection of an audio stream of a video for enhancement using images of the video
CN106131590A (en) * 2016-06-30 2016-11-16 乐视控股(北京)有限公司 A kind of music acquisition methods and associated terminal, system
CN111526242B (en) * 2020-04-30 2021-09-07 维沃移动通信有限公司 Audio processing method and device and electronic equipment
CN111954076A (en) * 2020-08-27 2020-11-17 维沃移动通信有限公司 Resource display method and device and electronic equipment

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1825936A * 2006-02-24 2006-08-30 Peking University Founder Group Co., Ltd. News video retrieval method based on speech classification identification
CN101740083A * 2008-11-12 2010-06-16 Sony Corporation Information processing apparatus, information processing method, information processing program and imaging apparatus
CN101847158A * 2009-03-24 2010-09-29 Sony Corporation Context-based video finder
CN101599179A * 2009-07-17 2009-12-09 Beijing University of Posts and Telecommunications Method for automatically generating highlights of exciting scenes in field sports

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Research and Implementation of a Key Frame Extraction Method in a Content-Based Video Retrieval System; Tao Dan; China Master's Theses Full-text Database, Information Science and Technology; 2004-12-15; pp. I138-1140 *
Research on Feature Extraction Techniques for Content-Based Audio Retrieval; Wang Wei; China Master's Theses Full-text Database, Information Science and Technology; 2014-01-15; pp. I138-2418 *
How to Quickly Extract Background Music from a Video; lsyzdyp; Baidu Experience (百度经验); 2012-06-06; pp. 1-4 *
An Effective Content-Based Audio Feature Extraction Method; Zheng Jiming et al.; Computer Engineering and Applications; 2009-12-31; Vol. 45, No. 12; pp. 131-137 *

Also Published As

Publication number Publication date
CN104866477A (en) 2015-08-26

Similar Documents

Publication Publication Date Title
CN109547819B (en) Live list display method and device and electronic equipment
CN108132805B (en) Voice interaction method and device and computer readable storage medium
KR101454950B1 (en) Deep tag cloud associated with streaming media
CN107920256A (en) Live data playback method, device and storage medium
WO2016155562A1 (en) Content item display system, method and device
JP2016533075A (en) Information acquisition method, apparatus, program, and recording medium
CN107864410B (en) Multimedia data processing method and device, electronic equipment and storage medium
CN113852767B (en) Video editing method, device, equipment and medium
JP6665200B2 (en) Multimedia information processing method, apparatus and system, and computer storage medium
CN104598502A (en) Method, device and system for obtaining background music information in played video
US10468004B2 (en) Information processing method, terminal device and computer storage medium
US11511200B2 (en) Game playing method and system based on a multimedia file
CN104866275B (en) Method and device for acquiring image information
CN104866477B (en) Information processing method and electronic equipment
CN108090140A (en) A kind of playback of songs method and mobile terminal
EP3310066A1 (en) Identifying media content for simultaneous playback
CN107680614B (en) Audio signal processing method, apparatus and storage medium
CN105335414A (en) Music recommendation method, device and terminal
TW201535358A (en) Interactive beat effect system and method for processing interactive beat effect
CN102970427A (en) Method for playing songs by mobile phone
EP3654194A1 (en) Information processing device, information processing method, and program
CN112269898A (en) Background music obtaining method and device, electronic equipment and readable storage medium
CN111343509A (en) Action control method of virtual image and display equipment
CN110337041B (en) Video playing method and device, computer equipment and storage medium
CN105005612B (en) A kind of acquisition methods and mobile terminal of music file

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
EXSB Decision made by SIPO to initiate substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant