CN108196813B - Method and device for adding sound effect - Google Patents
- Publication number: CN108196813B (application CN201711458023.7A)
- Authority
- CN
- China
- Prior art keywords
- channel
- image
- sub
- frequency band
- color channel
- Prior art date
- Legal status: Active (the legal status is an assumption and is not a legal conclusion)
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/16—Sound input; Sound output
- G06F3/165—Management of the audio stream, e.g. setting of volume, audio stream path
Abstract
The disclosure relates to a method and a device for adding sound effects, and belongs to the technical field of music. The method comprises the following steps: acquiring image data of each color channel in a target image; respectively determining the amplitude adjustment value of each frequency band according to the image data of each color channel and a pre-stored correspondence between color channels and frequency bands; and performing spectrum adjustment on the audio data to be played based on the determined amplitude adjustment value of each frequency band, and playing the adjusted audio data. With the method and device, a user can not only select a pre-stored sound effect mode to add sound effects to music, but can also generate a sound effect corresponding to a target image from the image data of its color channels and add that sound effect to the music, thereby improving the flexibility with which the user adds sound effects to music.
Description
Technical Field
The present disclosure relates to the field of music technology, and more particularly, to a method and apparatus for adding sound effects.
Background
An equalizer is usually provided in a music player to change the playing effect of music. The preset sound effect modes in the equalizer include normal, classical, rock, and so on. For example, for the same song, the user can choose to play it in the classical sound effect mode or in the rock sound effect mode, so that the same song can be played with different effects, bringing the user more enjoyment of the music.
In carrying out the present disclosure, the inventors found that at least the following problems exist:
the user can only select a pre-stored sound effect mode in the equalizer to add sound effects to music, resulting in poor flexibility in how the user adds sound effects to music.
Disclosure of Invention
To overcome the problems in the related art, the present disclosure provides a method and apparatus for adding sound effects. The technical scheme is as follows:
according to an embodiment of the present disclosure, a method for adding sound effect is provided, the method including:
acquiring image data of each color channel in a target image;
respectively determining the amplitude adjustment value of each frequency band according to the image data of each color channel and a pre-stored correspondence between color channels and frequency bands;
and performing frequency spectrum adjustment on the audio data to be played based on the determined amplitude adjustment value of each frequency band, and playing the audio data after the frequency spectrum adjustment.
Optionally, the method further includes:
adding a target sound effect option in the sound effect list;
the frequency spectrum adjustment is performed on the audio data to be played based on the determined amplitude adjustment value of each frequency band, and the audio data after the frequency spectrum adjustment is played comprises the following steps:
and when a triggering instruction of the target sound effect option is received, performing frequency spectrum adjustment on the audio data to be played based on the determined amplitude adjustment value of each frequency band, and playing the audio data after the frequency spectrum adjustment.
Optionally, the acquiring image data of each color channel in the target image includes:
dividing a target image into N sub-images;
determining image data corresponding to each color channel of each sub-image;
the determining the amplitude adjustment value of each frequency band according to the image data of each color channel and the pre-stored correspondence between the color channel and the frequency band includes:
and respectively determining the amplitude adjustment value of each frequency band according to the image data corresponding to each color channel of each sub-image and a pre-stored correspondence among sub-images, color channels, and frequency bands.
Optionally, the image data is an average channel value;
the determining image data corresponding to each color channel of each sub-image comprises:
and determining an average channel value corresponding to each color channel of each sub-image, wherein for sub-image i and color channel j, the average of the channel values of color channel j over all pixel points of sub-image i is calculated to obtain the average channel value corresponding to color channel j of sub-image i; sub-image i is any one of the N sub-images, and color channel j is any color channel of the target image.
Optionally, the determining the amplitude adjustment value of each frequency band according to the average channel value corresponding to each color channel of each sub-image and the pre-stored correspondence between the sub-image, the color channel and the frequency band respectively includes:
determining an amplitude adjustment value corresponding to each color channel of each sub-image according to the average channel value corresponding to each color channel of each sub-image and the formula A = [(I - 127) ÷ 127] × 10, wherein A is the amplitude adjustment value and I is the average channel value;
and respectively determining the amplitude adjustment value of each frequency band according to the amplitude adjustment value corresponding to each color channel of each sub-image and the corresponding relationship among the pre-stored sub-images, the color channels and the frequency bands.
Optionally, the color channels of the target image include an R channel, a G channel, and a B channel, and in the correspondence, the frequency of the frequency band corresponding to the R channel is higher than the frequency of the frequency band corresponding to the G channel, and the frequency of the frequency band corresponding to the G channel is higher than the frequency of the frequency band corresponding to the B channel.
According to an embodiment of the present disclosure, there is also provided a device for adding sound effect, the device including:
the acquisition module is used for acquiring the image data of each color channel in the target image;
the determining module is used for respectively determining the amplitude adjustment value of each frequency band according to the image data of each color channel and the corresponding relation between the color channel and the frequency band which is stored in advance;
and the adjusting module is used for carrying out frequency spectrum adjustment on the audio data to be played based on the determined amplitude adjustment value of each frequency band, and playing the audio data after the frequency spectrum adjustment.
Optionally, the adjusting module is configured to add a target sound effect option in the sound effect list;
and when a triggering instruction of the target sound effect option is received, performing frequency spectrum adjustment on the audio data to be played based on the determined amplitude adjustment value of each frequency band, and playing the audio data after the frequency spectrum adjustment.
Optionally, the obtaining module includes:
the segmentation unit is used for segmenting the target image into N sub-images;
a first determining unit, configured to determine image data corresponding to each color channel of each sub-image;
the determining module comprises:
and the second determining unit is used for respectively determining the amplitude adjustment value of each frequency band according to the image data corresponding to each color channel of each sub-image and the corresponding relationship among the pre-stored sub-images, the color channels and the frequency bands.
Optionally, the image data is an average channel value;
the first determining unit is configured to:
and determining an average channel value corresponding to each color channel of each sub-image, wherein for sub-image i and color channel j, the average of the channel values of color channel j over all pixel points of sub-image i is calculated to obtain the average channel value corresponding to color channel j of sub-image i; sub-image i is any one of the N sub-images, and color channel j is any color channel of the target image.
Optionally, the second determining unit is configured to:
determining an amplitude adjustment value corresponding to each color channel of each sub-image according to the average channel value corresponding to each color channel of each sub-image and the formula A = [(I - 127) ÷ 127] × 10, wherein A is the amplitude adjustment value and I is the average channel value;
and respectively determining the amplitude adjustment value of each frequency band according to the amplitude adjustment value corresponding to each color channel of each sub-image and the corresponding relationship among the pre-stored sub-images, the color channels and the frequency bands.
Optionally, the color channels of the target image include an R channel, a G channel, and a B channel, and in the correspondence, the frequency of the frequency band corresponding to the R channel is higher than the frequency of the frequency band corresponding to the G channel, and the frequency of the frequency band corresponding to the G channel is higher than the frequency of the frequency band corresponding to the B channel.
According to the embodiment of the present disclosure, a terminal is further provided, where the terminal includes a processor and a memory, where the memory stores at least one instruction, and the instruction is loaded and executed by the processor to implement the method for adding sound effect.
According to the embodiment of the present disclosure, a computer-readable storage medium is further provided, where at least one instruction is stored in the storage medium, and the instruction is loaded and executed by a processor to implement the method for adding sound effect described above.
The technical scheme provided by the embodiment of the disclosure can have the following beneficial effects:
in the embodiment of the disclosure, the sound effect mode added by the user to the music to be played may be a sound effect generated according to the target image, specifically, the terminal first acquires image data of each color channel in the target image; then, the terminal respectively determines the amplitude adjustment value of each frequency band according to the image data of each color channel and the corresponding relation between the color channel and the frequency band which is stored in advance; and finally, the terminal performs frequency spectrum adjustment on the audio data to be played based on the determined amplitude adjustment value of each frequency band, and plays the audio data after the frequency spectrum adjustment. Therefore, the user can select the pre-stored sound effect mode to add sound effects to the music, and can add sound effects to the audio data based on the pictures, and then the flexibility of adding sound effects to the music by the user can be improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure. In the drawings:
FIG. 1 is a schematic diagram of an interface of an equalizer according to an embodiment;
FIG. 2 is a flow diagram illustrating a method of generating an audio effect pattern, according to an embodiment;
FIG. 3 is a flow diagram illustrating a method for generating an audio effect pattern, according to an embodiment;
FIG. 4 is a diagram illustrating an apparatus for generating sound effect patterns according to an embodiment;
FIG. 5 is a diagram illustrating an apparatus for generating sound effect patterns according to an embodiment;
FIG. 6 is a schematic diagram of an apparatus for generating an audio effect mode according to an embodiment.
With the foregoing drawings in mind, certain embodiments of the disclosure have been shown and described in more detail below. These drawings and written description are not intended to limit the scope of the disclosed concepts in any way, but rather to illustrate the concepts of the disclosure to those skilled in the art by reference to specific embodiments.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
The embodiments of the present disclosure provide a method for adding sound effects, which can be executed by a terminal. The terminal can be a mobile phone, a tablet computer, a desktop computer, a notebook computer, or the like.
The terminal may include components such as a transceiver, processor, memory, and the like. The transceiver may be configured to perform data transmission with the server, for example, may receive a target image sent by the server, and the like. The transceiver may include bluetooth components, WiFi (Wireless-Fidelity) components, antennas, matching circuitry, modems, and the like. The processor, which may be a CPU (Central Processing Unit), may be configured to determine an amplitude adjustment value for each frequency band according to image data of each color channel and a pre-stored correspondence between the color channel and the frequency band, and perform other Processing. The Memory may be a RAM (Random Access Memory), a Flash (Flash Memory), or the like, and may be configured to store received data, data required by the processing procedure, data generated in the processing procedure, or the like, such as image data of each color channel in the target image.
The terminal may also include input components, display components, audio output components, and the like. The input means may be a microphone, a touch screen, a keyboard, a mouse, etc. The audio output component may be a speaker, headphones, or the like.
The terminal may have a system program and an application program installed therein. In the process of using the terminal, a user can use various applications based on different requirements of the user, for example, a music player application can be installed in the terminal.
The embodiment of the disclosure provides a method for adding sound effect, wherein the sound effect is a playing effect added for music when a user plays the music, and the sound effect can be added for the music according to own preference for a music user. The user usually adds sound effects to music through an equalizer, for example, some players or player software are provided with an equalizer, and when playing music, the user can select sound effects of a certain mode in the sound effect mode of the equalizer, such as classical sound effects, pop sound effects, rock sound effects, and the like.
As shown in fig. 1, which is an interface diagram of an equalizer in a player, the equalizer divides the frequency range into 10 frequency bands, and the amplitude adjustment value of each band lies between -12 db and +12 db. In the default mode of the equalizer, the sound effect of the music is the effect adjusted by a disc jockey during production of the audio; in this mode, the amplitude adjustment value of each band may be 0 db. Fig. 1 shows the amplitude adjustment values of each band in the classical sound effect mode. The number of bands in the equalizer can be set arbitrarily, for example 10 bands as shown in fig. 1, or 9 bands, and the frequency value corresponding to each band can also be set arbitrarily; although different divisions yield bands of different widths, the bands play a similar role in adjusting the sound effect.
The present disclosure also provides a method for adding sound effects, which generates sound effects based on image data of each color channel in a picture, associates the color channels of the picture with the sound effects, and then adds the generated sound effects to music through an equalizer in a player. As shown in fig. 2, the processing flow of the method may include the following steps:
in step 201, the terminal acquires image data for each color channel in the target image.
Each color channel may include an R channel, a G channel, and a B channel, which respectively represent channels of three colors of red, green, and blue, and accordingly, the image data of the color channel may include an R channel value, a G channel value, and a B channel value.
In an implementation, the target image may be stored locally in the terminal or sent by a server. For example, the terminal may send an image request for the corresponding target image to the server, and after receiving the image request, the server sends the target image to the terminal; this embodiment does not particularly limit the source of the target image. An image is composed of pixels, and the color of each pixel is formed by superimposing the R channel, the G channel, and the B channel, so the target image carries a large amount of color-channel data.
In step 202, the terminal determines an amplitude adjustment value of each frequency band according to the image data of each color channel and a pre-stored correspondence between the color channel and the frequency band.
In an implementation, each color channel may be associated with a frequency band in the equalizer. Since the target image contains far more color-channel data than there are frequency bands, the corresponding processing may be to cut the target image and reduce the number of channel values through a series of averaging calculations. In an alternative manner, step 201 may be performed according to the flow shown in fig. 3:
in step 2011, the terminal slices the target image into N sub-images.
In an implementation, the terminal may cut the target image horizontally or vertically into N equal parts, where N is a positive integer; for example, N may be 3.
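As an illustration of this splitting step, a minimal sketch in Python (the function name, the NumPy representation, and the choice of a horizontal cut are illustrative assumptions, not part of the patent):

```python
import numpy as np

def split_image(image: np.ndarray, n: int = 3) -> list:
    """Split an H x W x 3 RGB image into n horizontal strips.

    The patent allows horizontal or vertical cuts; a horizontal
    cut along the height axis is shown here as one option.
    """
    return np.array_split(image, n, axis=0)
```

A vertical cut would simply use `axis=1` instead.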
In step 2012, image data corresponding to each color channel of each sub-image is determined.
The image data may be an average channel value, such as an average R channel value, an average G channel value, and an average B channel value.
In an implementation, after the terminal obtains the N sub-images, then for sub-image i and color channel j, the terminal calculates the average of the channel values of color channel j over all pixel points of sub-image i to obtain the average channel value corresponding to color channel j of sub-image i, where sub-image i is any one of the N sub-images and color channel j is any color channel of the target image.
Specifically, after the terminal obtains N sub-images, for each sub-image, an R set composed of all R channel values, a G set composed of all G channel values, and a B set composed of all B channel values are obtained. Then, for each sub-image, the terminal may calculate an average R-channel value in the R-set, an average G-channel value in the G-set, and an average B-channel value in the B-set, such that the terminal obtains N average R-channel values, N average G-channel values, and N average B-channel values.
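The per-sub-image averaging described above can be sketched as follows; the function name and the assumption that each sub-image is an H x W x 3 NumPy array are illustrative:

```python
import numpy as np

def average_channel_values(sub_images):
    """For each sub-image, return (avg_R, avg_G, avg_B): the mean of
    each color channel over all pixel points of that sub-image."""
    return [tuple(float(s[..., c].mean()) for c in range(3))
            for s in sub_images]
```

For N sub-images this yields N triples, i.e. the N average R channel values, N average G channel values, and N average B channel values described above.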
The corresponding step 202 may be performed according to step 2021: and respectively determining the amplitude adjustment value of each frequency band according to the image data corresponding to each color channel of each sub-image and the corresponding relationship among the pre-stored sub-images, the color channels and the frequency bands.
In practice, each channel value ranges from 0 to 255, while the amplitude adjustment value typically ranges from -12 to +12 as described above; the amplitude adjustment value for each color channel of each sub-image may therefore be obtained with the following formula:
A=[(I-127)÷127]×10
where A is the amplitude adjustment value and I is the average channel value (the average R channel value, the average G channel value, or the average B channel value, respectively).
After the terminal determines the amplitude adjustment value corresponding to each color channel of each sub-image by using the formula, specifically, the N average R channel values, the N average G channel values, and the N average B channel values are respectively substituted into the formula to obtain 3N amplitude adjustment values. Then, the terminal may determine the amplitude adjustment value of each frequency band according to the pre-stored corresponding relationship between the sub-image, the color channel and the frequency band.
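The conversion formula can be written directly as a small helper (the function name is illustrative); note that it maps the channel range [0, 255] to roughly [-10, +10.08], which stays within the equalizer's typical -12 to +12 range:

```python
def amplitude_adjustment(avg_channel_value: float) -> float:
    """Map an average channel value I in [0, 255] to an amplitude
    adjustment A, per the formula A = [(I - 127) / 127] * 10."""
    return (avg_channel_value - 127) / 127 * 10
```

Applying it to the N average R, G, and B channel values yields the 3N amplitude adjustment values mentioned above.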
In practical application, the color channels can indicate whether an image is bright or dark, and the frequency of a band determines whether the effect sounds lively or deep. Based on these respective characteristics, the image data of a bright image may generate a lively sound effect, and the image data of a dark image may generate a deep sound effect. For example, in the above correspondence among sub-images, color channels, and frequency bands, the frequency of the band corresponding to the R channel may be higher than that of the band corresponding to the G channel, and the frequency of the band corresponding to the G channel may be higher than that of the band corresponding to the B channel.
For example, the number of the sub-images may be 3, which are respectively denoted as sub-image a, sub-image b, and sub-image c, and correspondingly, the number of the frequency bands in the equalizer is 9, and then based on the above correspondence, the correspondence between the sub-images, the color channels, and the frequency bands may be as shown in table 1.
TABLE 1 Table of correspondence between sub-images, color channels and frequency bands
|             | R channel | G channel | B channel |
| Sub-image a | 4 KHz     | 500 Hz    | 62 Hz     |
| Sub-image b | 8 KHz     | 1 KHz     | 125 Hz    |
| Sub-image c | 16 KHz    | 2 KHz     | 250 Hz    |
Table 1 is only one possible representation, and the actual correspondence is not limited to the above relationship. For example, according to the possible arrangements of sub-images a, b, and c, the terminal may pre-store six correspondences (the six permutations of the three sub-images), of which Table 1 is one. In a specific application, the terminal may select a correspondence according to the image data of the color channels of the target image; for example, if the terminal calculates that the channel values of sub-image c are higher overall, the correspondence in Table 1 may be selected.
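Table 1 can be represented as a simple lookup, assuming the frequencies in the table label the nine equalizer bands; the names and data layout here are illustrative:

```python
# Table 1 as a mapping from (sub_image, channel) to the
# equalizer band (its center frequency) that it controls.
BAND_MAP = {
    ("a", "R"): "4KHz",  ("a", "G"): "500Hz", ("a", "B"): "62Hz",
    ("b", "R"): "8KHz",  ("b", "G"): "1KHz",  ("b", "B"): "125Hz",
    ("c", "R"): "16KHz", ("c", "G"): "2KHz",  ("c", "B"): "250Hz",
}

def band_adjustments(avg_values):
    """avg_values: {(sub_image, channel): average channel value}.
    Returns {band: amplitude adjustment}, applying the patent's
    formula A = [(I - 127) / 127] * 10 per band."""
    return {BAND_MAP[key]: (i - 127) / 127 * 10
            for key, i in avg_values.items()}
```

The other five pre-stored correspondences would simply be further dictionaries with the sub-image rows permuted.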
In step 203, the terminal performs spectrum adjustment on the audio data to be played based on the determined amplitude adjustment value of each frequency band, and plays the audio data after the spectrum adjustment.
After the terminal determines the amplitude adjustment value corresponding to each frequency band, a sound effect corresponding to the target image can be generated and recorded as the target sound effect, and a target sound effect option is then added to the sound effect list of the equalizer. After the terminal obtains the target sound effect, when a trigger instruction for the target sound effect option is received, the terminal performs spectrum adjustment on the audio data to be played based on the determined amplitude adjustment value of each frequency band, and plays the adjusted audio data. For example, when playing music, the user may select a pre-stored sound effect mode from the sound effect list of the equalizer to add sound effects to the music, or may select the target sound effect to add to the music.
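One way to sketch the spectrum-adjustment step is to apply per-band dB gains via a one-shot FFT. This is an illustrative assumption about the implementation, not the patent's specified mechanism: a real player would process overlapping frames with proper filters, and the band edges and names here are made up for the example.

```python
import numpy as np

def apply_equalizer(samples, sample_rate, band_gains_db):
    """Apply per-band gains (in dB) to a mono signal.

    band_gains_db: {(f_lo_hz, f_hi_hz): gain_db}. Each FFT bin
    whose frequency falls in a band is scaled by 10**(gain/20).
    """
    spectrum = np.fft.rfft(samples)
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    for (f_lo, f_hi), gain_db in band_gains_db.items():
        mask = (freqs >= f_lo) & (freqs < f_hi)
        spectrum[mask] *= 10 ** (gain_db / 20)
    return np.fft.irfft(spectrum, n=len(samples))
```

In this sketch, the dictionary of per-band amplitude adjustment values derived from the target image would be fed in as `band_gains_db` before playback.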
Based on the above, the user can not only select a sound effect mode pre-stored in the equalizer to add sound effects to music, but can also generate a target sound effect corresponding to a target image of the user's own choosing from the image data of the image's color channels and then add that sound effect to the music, improving the flexibility with which the user adds sound effects to music.
In addition, by generating a sound effect corresponding to any image with this method and adding the generated sound effect to the music, the user's enjoyment of listening to music is correspondingly increased.
In the embodiment of the disclosure, the sound effect added by the user to the music to be played may be a sound effect generated according to the target image, specifically, the terminal first acquires image data of each color channel in the target image; then, the terminal respectively determines the amplitude adjustment value of each frequency band according to the image data of each color channel and the corresponding relation between the color channel and the frequency band which is stored in advance; and finally, the terminal performs frequency spectrum adjustment on the audio data to be played based on the determined amplitude adjustment value of each frequency band, and plays the audio data after the frequency spectrum adjustment. Therefore, the user can select the pre-stored sound effect mode to add sound effects to the music, can generate the sound effects corresponding to the target images according to the image data corresponding to the color channels of the target images, and then add the sound effects to the music, and further can improve the flexibility of adding the sound effects to the music by the user.
The embodiment of the present disclosure further provides a device for adding sound effect, which may be a terminal in the foregoing embodiment, as shown in fig. 4, the device includes:
an obtaining module 410, configured to obtain image data of each color channel in a target image;
a determining module 420, configured to determine an amplitude adjustment value of each frequency band according to the image data of each color channel and a pre-stored correspondence between the color channel and the frequency band;
the adjusting module 430 is configured to perform spectrum adjustment on the audio data to be played based on the determined amplitude adjustment value of each frequency band, and play the audio data after the spectrum adjustment.
Optionally, the adjusting module 430 is specifically configured to add a target sound effect option in the sound effect list;
and when a triggering instruction of the target sound effect option is received, performing frequency spectrum adjustment on the audio data to be played based on the determined amplitude adjustment value of each frequency band, and playing the audio data after the frequency spectrum adjustment.
Optionally, as shown in fig. 5, the obtaining module 410 includes:
a segmentation unit 411 configured to segment the target image into N sub-images;
a first determining unit 412, configured to determine image data corresponding to each color channel of each sub-image;
the determining module 420 includes:
the second determining unit 421 is configured to determine an amplitude adjustment value of each frequency band according to the image data corresponding to each color channel of each sub-image and the pre-stored correspondence between the sub-image, the color channel, and the frequency band.
Optionally, the image data is an average channel value;
the first determining unit 412 is configured to:
and determining an average channel value corresponding to each color channel of each sub-image, wherein for sub-image i and color channel j, the average of the channel values of color channel j over all pixel points of sub-image i is calculated to obtain the average channel value corresponding to color channel j of sub-image i; sub-image i is any one of the N sub-images, and color channel j is any color channel of the target image.
Optionally, the second determining unit 421 is configured to:
determining an amplitude adjustment value corresponding to each color channel of each sub-image according to the average channel value corresponding to each color channel of each sub-image and the formula A = [(I - 127) ÷ 127] × 10, wherein A is the amplitude adjustment value and I is the average channel value;
and respectively determining the amplitude adjustment value of each frequency band according to the amplitude adjustment value corresponding to each color channel of each sub-image and the corresponding relationship among the pre-stored sub-images, the color channels and the frequency bands.
Optionally, the color channels of the target image include an R channel, a G channel, and a B channel, and in the correspondence, the frequency of the frequency band corresponding to the R channel is higher than the frequency of the frequency band corresponding to the G channel, and the frequency of the frequency band corresponding to the G channel is higher than the frequency of the frequency band corresponding to the B channel.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
In the embodiment of the disclosure, the sound effect added to the music to be played by the user using the device may be a sound effect generated according to the target image, specifically, image data of each color channel in the target image is obtained first; then, respectively determining the amplitude adjustment value of each frequency band according to the image data of each color channel and the corresponding relation between the color channel and the frequency band which is stored in advance; and finally, performing frequency spectrum adjustment on the audio data to be played based on the determined amplitude adjustment value of each frequency band, and playing the audio data after the frequency spectrum adjustment. Therefore, the user can select the sound effect mode pre-stored in the equalizer to add sound effects to the music, can generate the sound effects corresponding to the target images according to the image data corresponding to the color channels of the target images, and then add the sound effects to the music, and further can improve the flexibility of adding the sound effects to the music by the user.
It should be noted that, in the device for adding a sound effect provided by the above embodiment, the division into the functional modules described above is merely an example. In practical applications, the functions may be distributed among different functional modules as needed; that is, the internal structure of the device may be divided into different functional modules to perform all or part of the functions described above. In addition, the apparatus for adding a sound effect and the method for adding a sound effect provided by the above embodiments belong to the same concept; for the specific implementation, refer to the method embodiments, which are not repeated here.
Fig. 6 shows a block diagram of a terminal 600 according to an exemplary embodiment of the present invention. The terminal 600 may be a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a notebook computer, or a desktop computer. The terminal 600 may also be referred to by other names such as user equipment, portable terminal, laptop terminal, or desktop terminal.
In general, the terminal 600 includes: a processor 601 and a memory 602.
The processor 601 may include one or more processing cores, for example a 4-core or 8-core processor. The processor 601 may be implemented in at least one hardware form of a DSP (Digital Signal Processor), an FPGA (Field-Programmable Gate Array), or a PLA (Programmable Logic Array). The processor 601 may also include a main processor and a coprocessor: the main processor is a processor for processing data in the awake state, also called a CPU (Central Processing Unit); the coprocessor is a low-power processor for processing data in the standby state. In some embodiments, the processor 601 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content to be displayed on the display screen. In some embodiments, the processor 601 may also include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
The memory 602 may include one or more computer-readable storage media, which may be non-transitory. The memory 602 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 602 is used to store at least one instruction for execution by processor 601 to implement the method of adding sound effects provided by method embodiments herein.
In some embodiments, the terminal 600 may further optionally include: a peripheral interface 603 and at least one peripheral. The processor 601, memory 602, and peripheral interface 603 may be connected by buses or signal lines. Various peripheral devices may be connected to the peripheral interface 603 via a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of a radio frequency circuit 604, a touch screen display 605, a camera 606, an audio circuit 607, a positioning component 608, and a power supply 609.
The peripheral interface 603 may be used to connect at least one peripheral related to I/O (Input/Output) to the processor 601 and the memory 602. In some embodiments, the processor 601, memory 602, and peripheral interface 603 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 601, the memory 602, and the peripheral interface 603 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The radio frequency circuit 604 is used to receive and transmit RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuit 604 communicates with communication networks and other communication devices via electromagnetic signals, converting an electrical signal into an electromagnetic signal for transmission, or converting a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 604 includes an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so on. The radio frequency circuit 604 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocol includes, but is not limited to: the World Wide Web, metropolitan area networks, intranets, mobile communication networks of various generations (2G, 3G, 4G, and 5G), wireless local area networks, and/or Wi-Fi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 604 may further include NFC (Near Field Communication) related circuits, which is not limited in this application.
The display 605 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display 605 is a touch display, it can also capture touch signals on or above its surface. The touch signal may be input to the processor 601 as a control signal for processing. In that case, the display 605 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there may be one display 605, provided on the front panel of the terminal 600; in other embodiments, there may be at least two displays 605, respectively disposed on different surfaces of the terminal 600 or in a folded design; in still other embodiments, the display 605 may be a flexible display disposed on a curved or folded surface of the terminal 600. The display 605 may even be arranged in a non-rectangular irregular pattern, that is, an irregularly shaped screen. The display 605 may be made of materials such as an LCD (Liquid Crystal Display) or an OLED (Organic Light-Emitting Diode).
The camera assembly 606 is used to capture images or video. Optionally, the camera assembly 606 includes a front camera and a rear camera. Generally, the front camera is disposed on the front panel of the terminal, and the rear camera is disposed on the back of the terminal. In some embodiments, there are at least two rear cameras, each being any one of a main camera, a depth-of-field camera, a wide-angle camera, and a telephoto camera, so that the main camera and the depth-of-field camera can be fused to realize a background blurring function, and the main camera and the wide-angle camera can be fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fused shooting functions. In some embodiments, the camera assembly 606 may also include a flash. The flash may be a single-color-temperature flash or a dual-color-temperature flash. A dual-color-temperature flash is a combination of a warm-light flash and a cold-light flash and can be used for light compensation at different color temperatures.
The positioning component 608 is used to determine the current geographic location of the terminal 600 to implement navigation or LBS (Location Based Service). The positioning component 608 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, or the Galileo system of the European Union.
In some embodiments, the terminal 600 also includes one or more sensors 610. The one or more sensors 610 include, but are not limited to: acceleration sensor 611, gyro sensor 612, pressure sensor 613, fingerprint sensor 614, optical sensor 615, and proximity sensor 616.
The acceleration sensor 611 may detect the magnitude of acceleration in three coordinate axes of the coordinate system established with the terminal 600. For example, the acceleration sensor 611 may be used to detect components of the gravitational acceleration in three coordinate axes. The processor 601 may control the touch screen display 605 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 611. The acceleration sensor 611 may also be used for acquisition of motion data of a game or a user.
The gyro sensor 612 may detect a body direction and a rotation angle of the terminal 600, and the gyro sensor 612 and the acceleration sensor 611 may cooperate to acquire a 3D motion of the user on the terminal 600. The processor 601 may implement the following functions according to the data collected by the gyro sensor 612: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
The pressure sensor 613 may be disposed on a side frame of the terminal 600 and/or on a lower layer of the touch display screen 605. When the pressure sensor 613 is disposed on the side frame of the terminal 600, a user's holding signal of the terminal 600 can be detected, and the processor 601 performs left-right hand recognition or shortcut operation according to the holding signal collected by the pressure sensor 613. When the pressure sensor 613 is disposed at the lower layer of the touch display screen 605, the processor 601 controls the operability control on the UI interface according to the pressure operation of the user on the touch display screen 605. The operability control comprises at least one of a button control, a scroll bar control, an icon control and a menu control.
The fingerprint sensor 614 is used for collecting a fingerprint of a user, and the processor 601 identifies the identity of the user according to the fingerprint collected by the fingerprint sensor 614, or the fingerprint sensor 614 identifies the identity of the user according to the collected fingerprint. Upon identifying that the user's identity is a trusted identity, the processor 601 authorizes the user to perform relevant sensitive operations including unlocking the screen, viewing encrypted information, downloading software, paying, and changing settings, etc. The fingerprint sensor 614 may be disposed on the front, back, or side of the terminal 600. When a physical button or vendor Logo is provided on the terminal 600, the fingerprint sensor 614 may be integrated with the physical button or vendor Logo.
The optical sensor 615 is used to collect the ambient light intensity. In one embodiment, processor 601 may control the display brightness of touch display 605 based on the ambient light intensity collected by optical sensor 615. Specifically, when the ambient light intensity is high, the display brightness of the touch display screen 605 is increased; when the ambient light intensity is low, the display brightness of the touch display screen 605 is turned down. In another embodiment, the processor 601 may also dynamically adjust the shooting parameters of the camera assembly 606 according to the ambient light intensity collected by the optical sensor 615.
The proximity sensor 616, also known as a distance sensor, is typically disposed on the front panel of the terminal 600. The proximity sensor 616 is used to collect the distance between the user and the front surface of the terminal 600. In one embodiment, when the proximity sensor 616 detects that the distance between the user and the front surface of the terminal 600 gradually decreases, the processor 601 controls the touch display 605 to switch from the screen-on state to the screen-off state; when the proximity sensor 616 detects that the distance gradually increases, the processor 601 controls the touch display 605 to switch from the screen-off state to the screen-on state.
Those skilled in the art will appreciate that the configuration shown in fig. 6 is not intended to be limiting of terminal 600 and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components may be used.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.
Claims (12)
1. A method of adding sound effects, the method comprising:
dividing a target image into N sub-images;
determining image data corresponding to each color channel of each sub-image;
respectively determining an amplitude adjustment value of each frequency band according to image data corresponding to each color channel of each sub-image and the corresponding relation among the pre-stored sub-images, the color channels and the frequency bands;
and performing frequency spectrum adjustment on the audio data to be played based on the determined amplitude adjustment value of each frequency band, and playing the audio data after the frequency spectrum adjustment.
2. The method of claim 1, further comprising:
adding a target sound effect option in the sound effect list;
the frequency spectrum adjustment is performed on the audio data to be played based on the determined amplitude adjustment value of each frequency band, and the audio data after the frequency spectrum adjustment is played comprises the following steps:
and when a triggering instruction of the target sound effect option is received, performing frequency spectrum adjustment on the audio data to be played based on the determined amplitude adjustment value of each frequency band, and playing the audio data after the frequency spectrum adjustment.
3. The method of claim 1, wherein the image data is an average channel value;
the determining image data corresponding to each color channel of each sub-image comprises:
and determining an average channel value corresponding to each color channel of each sub-image, wherein for the sub-image i and the color channel j, the average value of the channel values of the color channels j of all pixel points of the sub-image i is calculated to obtain the average channel value corresponding to the color channel j of the sub-image i, the sub-image i is any sub-image in the N sub-images, and the color channel j is any color channel of the target image.
4. The method according to claim 3, wherein the determining the amplitude adjustment value of each frequency band according to the average channel value corresponding to each color channel of each sub-image and the pre-stored correspondence relationship between the sub-image, the color channel and the frequency band comprises:
determining an amplitude adjustment value corresponding to each color channel of each sub-image according to the average channel value corresponding to each color channel of each sub-image and the formula A = [(I - 127) ÷ 127] × 10, wherein A is the amplitude adjustment value and I is the average channel value;
and respectively determining the amplitude adjustment value of each frequency band according to the amplitude adjustment value corresponding to each color channel of each sub-image and the corresponding relationship among the pre-stored sub-images, the color channels and the frequency bands.
5. The method according to any one of claims 1 to 4, wherein the color channels of the target image include an R channel, a G channel, and a B channel, and in the correspondence relationship, the frequency band corresponding to the R channel is higher than the frequency band corresponding to the G channel, and the frequency band corresponding to the G channel is higher than the frequency band corresponding to the B channel.
6. An apparatus for adding sound effects, the apparatus comprising:
an acquisition module, comprising: the segmentation unit is used for segmenting the target image into N sub-images; a first determining unit, configured to determine image data corresponding to each color channel of each sub-image;
a determination module comprising: the second determining unit is used for respectively determining the amplitude adjustment value of each frequency band according to the image data corresponding to each color channel of each sub-image and the corresponding relation of the pre-stored sub-image, color channel and frequency band;
and the adjusting module is used for carrying out frequency spectrum adjustment on the audio data to be played based on the determined amplitude adjustment value of each frequency band, and playing the audio data after the frequency spectrum adjustment.
7. The apparatus of claim 6, wherein the adjustment module is configured to:
adding a target sound effect option in the sound effect list;
and when a triggering instruction of the target sound effect option is received, performing frequency spectrum adjustment on the audio data to be played based on the determined amplitude adjustment value of each frequency band, and playing the audio data after the frequency spectrum adjustment.
8. The apparatus of claim 6, wherein the image data is an average channel value;
the first determining unit is configured to:
and determining an average channel value corresponding to each color channel of each sub-image, wherein for the sub-image i and the color channel j, the average value of the channel values of the color channels j of all pixel points of the sub-image i is calculated to obtain the average channel value corresponding to the color channel j of the sub-image i, the sub-image i is any sub-image in the N sub-images, and the color channel j is any color channel of the target image.
9. The apparatus of claim 8, wherein the second determining unit is configured to:
determining an amplitude adjustment value corresponding to each color channel of each sub-image according to the average channel value corresponding to each color channel of each sub-image and the formula A = [(I - 127) ÷ 127] × 10, wherein A is the amplitude adjustment value and I is the average channel value;
and respectively determining the amplitude adjustment value of each frequency band according to the amplitude adjustment value corresponding to each color channel of each sub-image and the corresponding relationship among the pre-stored sub-images, the color channels and the frequency bands.
10. The apparatus according to any one of claims 6 to 9, wherein the color channels of the target image include an R channel, a G channel, and a B channel, and in the correspondence relationship, the frequency band corresponding to the R channel is higher than the frequency band corresponding to the G channel, and the frequency band corresponding to the G channel is higher than the frequency band corresponding to the B channel.
11. A terminal, characterized in that it comprises a processor and a memory, in which at least one instruction is stored, which is loaded and executed by the processor to implement the method of adding sound effects according to any of claims 1-5.
12. A computer-readable storage medium having stored thereon at least one instruction which is loaded and executed by a processor to implement the method of adding sound effects according to any one of claims 1-5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711458023.7A CN108196813B (en) | 2017-12-27 | 2017-12-27 | Method and device for adding sound effect |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711458023.7A CN108196813B (en) | 2017-12-27 | 2017-12-27 | Method and device for adding sound effect |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108196813A CN108196813A (en) | 2018-06-22 |
CN108196813B true CN108196813B (en) | 2021-03-30 |
Family
ID=62585559
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711458023.7A Active CN108196813B (en) | 2017-12-27 | 2017-12-27 | Method and device for adding sound effect |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108196813B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110377212B (en) * | 2019-06-28 | 2021-03-16 | 上海元笛软件有限公司 | Method, apparatus, computer device and storage medium for triggering display through audio |
CN112133267B (en) * | 2020-09-04 | 2024-02-13 | 腾讯音乐娱乐科技(深圳)有限公司 | Audio effect processing method, device and storage medium |
CN112185325A (en) * | 2020-10-12 | 2021-01-05 | 上海闻泰电子科技有限公司 | Audio playing style adjusting method and device, electronic equipment and storage medium |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1503159A (en) * | 2002-11-25 | 2004-06-09 | Matsushita Electric Industrial Co., Ltd. | Short film generation/reproduction apparatus and method thereof
CN104090883A (en) * | 2013-11-15 | 2014-10-08 | 腾讯科技(深圳)有限公司 | Playing control processing method and playing control processing device for audio file |
CN104574453A (en) * | 2013-10-17 | 2015-04-29 | 付晓宇 | Software for expressing music with images |
CN106534962A (en) * | 2016-10-11 | 2017-03-22 | 腾讯科技(北京)有限公司 | Television content playing method and device |
CN107147792A (en) * | 2017-05-23 | 2017-09-08 | 惠州Tcl移动通信有限公司 | A kind of method for automatically configuring audio, device, mobile terminal and storage device |
CN107506171A (en) * | 2017-08-22 | 2017-12-22 | 深圳传音控股有限公司 | Audio-frequence player device and its effect adjusting method |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4935385B2 (en) * | 2007-02-01 | 2012-05-23 | ソニー株式会社 | Content playback method and content playback system |
TW201021550A (en) * | 2008-11-19 | 2010-06-01 | Altek Corp | Emotion-based image processing apparatus and image processing method |
KR20150024650A (en) * | 2013-08-27 | 2015-03-09 | 삼성전자주식회사 | Method and apparatus for providing visualization of sound in a electronic device |
CN105487780B (en) * | 2016-01-15 | 2021-03-19 | 腾讯科技(深圳)有限公司 | Control display method and device |
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1503159A (en) * | 2002-11-25 | 2004-06-09 | Matsushita Electric Industrial Co., Ltd. | Short film generation/reproduction apparatus and method thereof
CN104574453A (en) * | 2013-10-17 | 2015-04-29 | 付晓宇 | Software for expressing music with images |
CN104090883A (en) * | 2013-11-15 | 2014-10-08 | 腾讯科技(深圳)有限公司 | Playing control processing method and playing control processing device for audio file |
CN106534962A (en) * | 2016-10-11 | 2017-03-22 | 腾讯科技(北京)有限公司 | Television content playing method and device |
CN107147792A (en) * | 2017-05-23 | 2017-09-08 | 惠州Tcl移动通信有限公司 | A kind of method for automatically configuring audio, device, mobile terminal and storage device |
CN107506171A (en) * | 2017-08-22 | 2017-12-22 | 深圳传音控股有限公司 | Audio-frequence player device and its effect adjusting method |
Also Published As
Publication number | Publication date |
---|---|
CN108196813A (en) | 2018-06-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108401124B (en) | Video recording method and device | |
CN110764730B (en) | Method and device for playing audio data | |
CN110992493B (en) | Image processing method, device, electronic equipment and storage medium | |
CN112492097B (en) | Audio playing method, device, terminal and computer readable storage medium | |
CN110166890B (en) | Audio playing and collecting method and device and storage medium | |
CN110740340B (en) | Video live broadcast method and device and storage medium | |
CN110618805B (en) | Method and device for adjusting electric quantity of equipment, electronic equipment and medium | |
CN109688461B (en) | Video playing method and device | |
CN110996305B (en) | Method and device for connecting Bluetooth equipment, electronic equipment and medium | |
CN111142838B (en) | Audio playing method, device, computer equipment and storage medium | |
CN111028144B (en) | Video face changing method and device and storage medium | |
CN112965683A (en) | Volume adjusting method and device, electronic equipment and medium | |
CN108922506A (en) | Song audio generation method, device and computer readable storage medium | |
CN109634688B (en) | Session interface display method, device, terminal and storage medium | |
CN108196813B (en) | Method and device for adding sound effect | |
CN112133332A (en) | Method, device and equipment for playing audio | |
CN111092991B (en) | Lyric display method and device and computer storage medium | |
CN110837300B (en) | Virtual interaction method and device, electronic equipment and storage medium | |
CN113963707A (en) | Audio processing method, device, equipment and storage medium | |
CN108540732B (en) | Method and device for synthesizing video | |
CN108966026B (en) | Method and device for making video file | |
CN110933454A (en) | Method, device, equipment and storage medium for processing live broadcast budding gift | |
CN109491636A (en) | Method for playing music, device and storage medium | |
CN111369434B (en) | Method, device, equipment and storage medium for generating spliced video covers | |
CN110708582B (en) | Synchronous playing method, device, electronic equipment and medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||