CN107506409A - Method for processing multi-audio data - Google Patents
Method for processing multi-audio data
- Publication number
- CN107506409A CN107506409A CN201710673700.0A CN201710673700A CN107506409A CN 107506409 A CN107506409 A CN 107506409A CN 201710673700 A CN201710673700 A CN 201710673700A CN 107506409 A CN107506409 A CN 107506409A
- Authority
- CN
- China
- Prior art keywords
- sound
- source
- frequency
- audio
- data
- Prior art date
- 2017-08-09
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/60—Information retrieval; Database structures therefor; File system structures therefor of audio data
- G06F16/61—Indexing; Data structures therefor; Storage structures
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B20/00—Signal processing not specific to the method of recording or reproducing; Circuits therefor
- G11B20/10—Digital recording or reproducing
- G11B20/10527—Audio or video recording; Data buffering arrangements
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B20/00—Signal processing not specific to the method of recording or reproducing; Circuits therefor
- G11B20/10—Digital recording or reproducing
- G11B20/10527—Audio or video recording; Data buffering arrangements
- G11B2020/10537—Audio or video recording
- G11B2020/10546—Audio or video recording specifically adapted for audio data
- G11B2020/10555—Audio or video recording specifically adapted for audio data wherein the frequency, the amplitude, or other characteristics of the audio signal is taken into account
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Signal Processing (AREA)
- Software Systems (AREA)
- Data Mining & Analysis (AREA)
- Databases & Information Systems (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Signal Processing For Digital Recording And Reproducing (AREA)
- Stereophonic System (AREA)
Abstract
The present invention discloses a method for processing multi-audio data, in which multi-audio data is stored in a single audio file. The method comprises the following steps. S1: select audio source devices and set up an independent acquisition channel for each device. S2: set source information and frequency-related data for the audio data to be acquired. S3: record the number of sound sources and the basic information of each source in the audio file header. S4: for each source, record the number of occurrences, the frequency, and the position and duration of each occurrence. S5: save the audio file header and begin encoding and writing the audio data stream. S6: the encoder writes each source's data into the audio file at its configured frequency. A multi-source file stored by this method is not only smaller than the traditional approach of storing each source in a separate file, but also makes it easier to build an audio data index, which is valuable in fields such as audio/video recording and management.
Description
Technical field
The present invention relates to a method for processing multi-audio data and belongs to the technical field of electronic equipment.
Background technology
As a kind of waveform data, audio data has two important parameters in the traditional acquisition process: frequency and volume. Frequency usually serves as the main parameter for identifying the characteristics of a sound source, while volume is an important indicator of sound intensity.

During sound acquisition and encoding, the waveform data of different sound sources interleave on the acquisition device, ultimately producing a single audio file in which multiple sources are mixed. In subsequent processing, the relevant algorithms typically have to filter out clutter at specific frequencies before the target data can be found, and in this process recognition is difficult and accuracy is low. Because multiple sources usually overlap during acquisition and storage, data from different sources at different frequencies pile up together, and filtering specific information out of these overlapping frequencies is very hard. In particular, when other sources are louder than the retrieval target, the target is usually masked by the background volume and cannot be detected.

As a result, retrieving from a batch of recording files usually requires a large investment of labor and time before the target can be found, and no effective automated fast-retrieval method has been available to replace this manual work.
The content of the invention
The purpose of the present invention is to solve the above problems in the prior art by providing a method for processing multi-audio data.

The above purpose of the present invention is achieved by the following technical solution: a method for processing multi-audio data, characterized in that multi-audio data is stored in a single audio file.
Preferably, the method comprises the following steps (a header-layout sketch follows these steps):
S1: select audio source devices and set up an independent acquisition channel for each device;
S2: set source information and frequency-related data for the audio data to be acquired;
S3: record the number of sound sources and the basic information of each source in the audio file header;
S4: for each source, record the number of occurrences, the frequency, and the position and duration of each occurrence;
S5: save the audio file header and begin encoding and writing the audio data stream;
S6: the encoder writes each source's data into the audio file at its configured frequency; when several groups of source data exist in the same frequency band, they are written in source order, i.e. source 1 | source 2 | source 3.
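The patent does not define a binary layout for the header, so the following is only a minimal Python sketch to make steps S3–S5 concrete: it writes the source count, then for each source its frequency, occurrence count, and the position and length of every occurrence. All field names, dictionary keys and `struct` formats here are illustrative assumptions, not part of the claimed method.

```python
import struct

def write_header(f, sources):
    """Sketch of steps S3-S5: write the source count, then per-source metadata.

    `sources` is a list of dicts such as
    {"frequency": 440.0, "occurrences": [(start_sample, length_samples), ...]}.
    The dict keys, byte order and field widths are illustrative assumptions.
    """
    f.write(struct.pack("<I", len(sources)))              # S3: number of sound sources
    for src in sources:
        occurrences = src["occurrences"]
        f.write(struct.pack("<fI", src["frequency"],      # S4: frequency, the source's unique id
                            len(occurrences)))            # S4: how many times the source occurs
        for start, length in occurrences:
            f.write(struct.pack("<II", start, length))    # S4: position and duration of each occurrence
```

A reader would parse the same fields in the same order before decoding the source data written in step S6.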
Preferably, in step S3, audio at different frequencies is treated as coming from different sources, regardless of whether it was captured by the same acquisition device.

Preferably, in step S4, the occurrence count of a source is the number of periods during which audio at that frequency is effectively present; background data outside that frequency is ignored.

Preferably, in step S4, the frequency of a source serves as its unique identifier, which facilitates index creation.
Preferably, in step S6, when several groups of source data exist in the same frequency band, they are written in source order, i.e. source 1 | source 2 | source 3.
Preferably, the method for processing multi-audio data can store multi-audio data in a single audio file and comprises the following steps:
S1: select audio source devices and set up an independent acquisition channel for each device;
S2: set source information and frequency-related data for the audio data to be acquired;
S3: record the number of sound sources and the basic information of each source in the audio file header;
S4: for each source, record the number of occurrences, the frequency, and the position and duration of each occurrence;
S5: save the audio file header and begin encoding and writing the audio data stream;
S6: the encoder writes each source's data into the audio file at its configured frequency; when several groups of source data exist in the same frequency band, they are written in source order, i.e. source 1 | source 2 | source 3. In step S3, audio at different frequencies is treated as coming from different sources, regardless of whether it was captured by the same acquisition device. In step S4, the occurrence count of a source is the number of periods during which audio at that frequency is effectively present, and background data outside that frequency is ignored; the frequency of a source serves as its unique identifier, which facilitates index creation. When several groups of source data exist in the same frequency band, they are written in source order, i.e. source 1 | source 2 | source 3.
The advantages of technical solution of the present invention, is mainly reflected in:The processing method of the Multi-audio-frequency data is one kind in single audio frequency
The method for storing multitone source data simultaneously in file, the voice data of various tone sources not only can be arbitrarily integrated by this method,
Store the voice data of multiple sound resource, multi-frequency simultaneously in same audio file, more can freely be cut during playback
Source of sound is changed, any switching laws sound source data or mixing, casts any one source of sound aside, real-time mixing sound operation is realized, can also be directed to not
Same sound source data setting sound line characteristic key index so that the retrieving of voice data is more efficient.Pass through this method institute
The multitone source file of storage is not only smaller than traditional multifile separate storage mode volume, is also easier to establish voice data rope
Draw, all have the function that in the related fields such as audio video recording, management it is important, it is suitable industrially to promote the use of.
Brief description of the drawings

Fig. 1 is a schematic diagram of the dimensional space occupied by different source data in the present invention.

Fig. 2 shows the data-stream representation, in a multidimensional data space, of two groups of audio data of different frequencies from different sources in the present invention.
Embodiment

The purpose, advantages and features of the present invention are illustrated and explained below through the non-limiting description of preferred embodiments. These embodiments are only prominent examples of applying the technical solution of the present invention; all technical solutions formed by equivalent substitution or equivalent transformation fall within the protection scope of the present invention.

The present invention discloses a method for processing multi-audio data, in which multi-audio data is stored in a single audio file.
Specifically, the method for processing multi-audio data comprises the following steps:
S1: select audio source devices and set up an independent acquisition channel for each device;
S2: set source information, frequency and other related data for the audio data to be acquired;
S3: record the number of sound sources and the basic information of each source in the audio file header;
S4: for each source, record the number of occurrences, the frequency, and the position and duration of each occurrence. Audio at different frequencies is treated as coming from different sources, regardless of whether it was captured by the same acquisition device. The occurrence count of a source is the number of periods during which audio at that frequency is effectively present; background data outside that frequency is ignored. The frequency of a source serves as its unique identifier, which facilitates index creation. Specifically, in this embodiment the count, positions and durations are obtained during acquisition; the frequency range is the normal speech range, since frequency is used to distinguish voices, and the frequency range of the human voice is 300 Hz to 3400 Hz.
S5: save the audio file header and begin encoding and writing the audio data stream;
S6: the encoder writes each source's data into the audio file at its configured frequency; when several groups of source data exist in the same frequency band, they are written in source order, i.e. source 1 | source 2 | source 3 (a sketch of this write order follows these steps).
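To illustrate the write order of step S6, here is a minimal sketch that writes the encoded data of each frequency band in source order (source 1 | source 2 | source 3). The chunk framing with a source id and a length prefix is an assumption introduced only so the fragment is self-contained; the patent specifies the ordering, not a container format.

```python
def write_band_data(f, bands):
    """Sketch of step S6: for each frequency band, write its sources back to back.

    `bands` maps a band label to a list of (source_id, encoded_bytes) pairs;
    within a band the chunks are written in source order, i.e.
    source 1 | source 2 | source 3. The framing below is an assumption.
    """
    for band in sorted(bands):
        for source_id, payload in sorted(bands[band], key=lambda chunk: chunk[0]):
            f.write(source_id.to_bytes(2, "little"))     # which source the chunk belongs to
            f.write(len(payload).to_bytes(4, "little"))  # length prefix so a reader can skip sources
            f.write(payload)                             # the encoded audio of this source
```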
In this embodiment, the choice of source data groups is unrestricted and is made during operation according to actual needs. A given frequency band corresponds, for example, to the voice of one person: unless that person deliberately speaks in an unusually high or low pitch, the frequency of one person's speech generally stays within a single band. Frequency here does not refer only to the pitch of the audio itself; it also covers speech rate, that is, how fast the words are spoken.
Ordinary audio data storage differs in that, at any given moment, only valid data from the loudest source at a node can be captured; quieter sources are often drowned out, or survive only in the gaps between frequencies as background sound. The source data storage scheme proposed by this method of processing multi-audio data uses a multidimensional approach to combine the audio data of multiple sources and multiple frequencies freely: different sources, or audio of the same source at different frequencies, can all be separated along different dimensions and mixed in real time at playback, which gives great flexibility. Fig. 1 is a schematic diagram of the dimensional space occupied by different source data. As shown in Fig. 1, the x axis is the audio data stream (which can also be read as time), the y axis is frequency, and the z axis distinguishes the different audio streams. Based on the volume captured independently for each audio stream at each moment, the streams are separated along the y dimension and each source's data stream and volume are collected separately. Audio at different frequencies is treated as different sources, and different sources, or the same source at different frequencies, can be separated along different dimensions.
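A minimal sketch of the separation along the y (frequency) dimension of Fig. 1: samples carrying a time, a frequency and a volume are grouped into one stream per frequency, so each source keeps its own data stream and volume. The tuple representation of a sample is an assumption made for illustration.

```python
from collections import defaultdict

def separate_by_frequency(samples):
    """Group (time, frequency_hz, volume) samples into one stream per frequency,
    treating each frequency as a distinct source (the y dimension of Fig. 1)."""
    streams = defaultdict(list)
    for time, frequency_hz, volume in samples:
        streams[frequency_hz].append((time, volume))  # one independent stream per source
    return dict(streams)
```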
Two groups of audio data of different frequencies from different sources exist in the multidimensional data space as independent data streams; when they are encoded into a single audio file, they are stored as shown in Fig. 2. Audio 1 and audio 2 differ in frequency and source: specifically, the occurrence count, frequency, and the start position and length of each occurrence of audio 1 differ from those of audio 2.
Existing audio data storage methods combine an audio-format file header with a single audio data stream; storing multi-source audio in that way makes the mixed audio data difficult to separate. The method for processing multi-audio data proposed here instead uses a multidimensional encoding and storage scheme based on source and frequency, storing the audio information of each source and each frequency separately within the same audio data stream, which makes creating an audio data index highly efficient.
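Since each source is identified by its frequency and its occurrences are recorded in the header, an index can be derived without scanning the audio stream. A minimal sketch, reusing the assumed header fields from the sketch above:

```python
def build_index(sources):
    """Map each source frequency (its unique identifier) to its occurrence list,
    so a retrieval by frequency returns positions and lengths directly."""
    return {src["frequency"]: src["occurrences"] for src in sources}

# Hypothetical usage: find every occurrence of the source stored at 440 Hz.
# index = build_index(sources)
# occurrences_440 = index.get(440.0, [])
```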
With this method, not only can the audio data of multiple sources and multiple frequencies be stored simultaneously in the same audio file, but sources can also be switched freely during playback, enabling real-time mixing, and retrieval indexes based on voice-line characteristics can be created for different sources, making retrieval of audio data more efficient.
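To illustrate the playback flexibility described above, here is a small sketch that mixes only the requested sources and leaves the others out, which amounts to free source switching and real-time mixing. The per-source sample representation is an assumption for illustration.

```python
def mix_sources(streams, selected_ids, length):
    """Additively mix only the selected source streams into one output buffer.

    `streams` maps a source id to a list of sample values; any source not in
    `selected_ids` is skipped, i.e. muted. The representation is illustrative.
    """
    output = [0.0] * length
    for source_id in selected_ids:
        for i, value in enumerate(streams.get(source_id, [])[:length]):
            output[i] += value  # simple additive mix; any source can be switched in or out
    return output
```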
The present invention has further embodiments; all technical solutions formed by equivalent substitution or equivalent transformation fall within the protection scope of the present invention.
Claims (8)
- 1. A method for processing multi-audio data, characterized in that multi-audio data is stored in a single audio file.
- 2. The method for processing multi-audio data according to claim 1, characterized in that the method comprises the following steps: S1: select audio source devices and set up an independent acquisition channel for each device; S2: set source information and frequency-related data for the audio data to be acquired; S3: record the number of sound sources and the basic information of each source in the audio file header; S4: for each source, record the number of occurrences, the frequency, and the position and duration of each occurrence; S5: save the audio file header and begin encoding and writing the audio data stream; S6: the encoder writes each source's data into the audio file at its configured frequency; when several groups of source data exist in the same frequency band, they are written in source order, i.e. source 1 | source 2 | source 3.
- 3. The method for processing multi-audio data according to claim 2, characterized in that in step S3, audio at different frequencies is treated as coming from different sources, regardless of whether it was captured by the same acquisition device.
- 4. The method for processing multi-audio data according to claim 2, characterized in that in step S4, the occurrence count of a source is the number of periods during which audio at that frequency is effectively present, and background data outside that frequency is ignored.
- 5. The method for processing multi-audio data according to claim 2, characterized in that in step S4, the frequency of a source serves as its unique identifier, which facilitates index creation.
- 6. The method for processing multi-audio data according to claim 2, characterized in that in step S6, when several groups of source data exist in the same frequency band, they are written in source order, i.e. source 1 | source 2 | source 3.
- 7. The method for processing multi-audio data according to claim 2, characterized in that in step S6, when several groups of source data exist in the same frequency band, they are written in source order, i.e. source 1 | source 2 | source 3.
- 8. The method for processing multi-audio data according to any one of claims 1 to 7, characterized in that multi-audio data can be stored in a single audio file and that the method comprises the following steps: S1: select audio source devices and set up an independent acquisition channel for each device; S2: set source information and frequency-related data for the audio data to be acquired; S3: record the number of sound sources and the basic information of each source in the audio file header; S4: for each source, record the number of occurrences, the frequency, and the position and duration of each occurrence; S5: save the audio file header and begin encoding and writing the audio data stream; S6: the encoder writes each source's data into the audio file at its configured frequency; when several groups of source data exist in the same frequency band, they are written in source order, i.e. source 1 | source 2 | source 3; in step S3, audio at different frequencies is treated as coming from different sources, regardless of whether it was captured by the same acquisition device; in step S4, the occurrence count of a source is the number of periods during which audio at that frequency is effectively present, and background data outside that frequency is ignored; in step S4, the frequency of a source serves as its unique identifier, which facilitates index creation; when several groups of source data exist in the same frequency band, they are written in source order, i.e. source 1 | source 2 | source 3.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710673700.0A CN107506409B (en) | 2017-08-09 | 2017-08-09 | Method for processing multi-audio data |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710673700.0A CN107506409B (en) | 2017-08-09 | 2017-08-09 | Method for processing multi-audio data |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107506409A true CN107506409A (en) | 2017-12-22 |
CN107506409B CN107506409B (en) | 2021-01-08 |
Family
ID=60689589
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710673700.0A Active CN107506409B (en) | 2017-08-09 | 2017-08-09 | Method for processing multi-audio data |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107506409B (en) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1507628A (en) * | 2002-04-01 | 2004-06-23 | 索尼株式会社 | Editing method and device |
CN101001485A (en) * | 2006-10-23 | 2007-07-18 | 中国传媒大学 | Finite sound source multi-channel sound field system and sound field analogy method |
US7869616B2 (en) * | 2005-01-12 | 2011-01-11 | Logitech International, S.A. | Active crossover and wireless interface for use with multi-driver in-ear monitors |
CN102867514A (en) * | 2011-07-07 | 2013-01-09 | 腾讯科技(北京)有限公司 | Sound mixing method and sound mixing apparatus |
CN104064191A (en) * | 2014-06-10 | 2014-09-24 | 百度在线网络技术(北京)有限公司 | Audio mixing method and device |
CN105981411A (en) * | 2013-11-27 | 2016-09-28 | Dts(英属维尔京群岛)有限公司 | Multiplet-based matrix mixing for high-channel count multichannel audio |
CN106486128A (en) * | 2016-09-27 | 2017-03-08 | 腾讯科技(深圳)有限公司 | A kind of processing method and processing device of double-tone source audio data |
Also Published As
Publication number | Publication date |
---|---|
CN107506409B (en) | 2021-01-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10540993B2 (en) | Audio fingerprinting based on audio energy characteristics | |
CN106503184B (en) | Determine the method and device of the affiliated class of service of target text | |
EP3440564B1 (en) | Audio fingerprinting based on audio energy characteristics | |
CN102799605A (en) | Method and system for monitoring advertisement broadcast | |
CN103971689A (en) | Audio identification method and device | |
TWI569263B (en) | Method and apparatus for signal extraction of audio signal | |
CN105898556A (en) | Plug-in subtitle automatic synchronization method and device | |
CN106708990A (en) | Music clip extraction method and device | |
CN105227966A (en) | To televise control method, server and control system of televising | |
CN102929887A (en) | Quick video retrieval method and system based on sound feature identification | |
CN104298748A (en) | Device and method for face search in videos | |
CN103886860A (en) | Information processing method and electronic device | |
CN104978961B (en) | A kind of audio-frequency processing method, device and terminal | |
CN102157150B (en) | Stereo decoding method and device | |
CN110545124B (en) | False-proof hidden communication structure and method based on cricket cry | |
JP2020013272A (en) | Feature amount generation method, feature amount generation device, and feature amount generation program | |
CN107506409A (en) | A kind of processing method of Multi-audio-frequency data | |
CN101950564A (en) | Remote digital voice acquisition, analysis and identification system | |
CN103440870A (en) | Method and device for voice frequency noise reduction | |
CN101296224B (en) | P2P flux recognition system and method | |
CN115985331B (en) | Audio automatic analysis method for field observation | |
CN103870466A (en) | Automatic extracting method for audio examples | |
CN103514196B (en) | Information processing method and electronic equipment | |
CN102214218A (en) | System and method for retrieving contents of audio/video | |
Kuchkorov | Analysis of sound signals in wav format in telecommunication systems and algorithms for them processing |
Legal Events

Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |