CN114124631A - Processing method suitable for audio synchronization control between airborne cabin embedded devices - Google Patents
Processing method suitable for audio synchronization control between airborne cabin embedded devices
- Publication number
- CN114124631A (application number CN202111344854.8A)
- Authority
- CN
- China
- Prior art keywords
- audio
- audio data
- receiving
- processing
- data
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Images
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L27/00—Modulated-carrier systems
- H04L27/0014—Carrier regulation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04H—BROADCAST COMMUNICATION
- H04H20/00—Arrangements for broadcast or for distribution combined with broadcast
- H04H20/53—Arrangements specially adapted for specific applications, e.g. for traffic information or for mobile receivers
- H04H20/61—Arrangements specially adapted for specific applications, e.g. for traffic information or for mobile receivers for local area broadcast, e.g. instore broadcast
- H04H20/62—Arrangements specially adapted for specific applications, e.g. for traffic information or for mobile receivers for local area broadcast, e.g. instore broadcast for transportation systems, e.g. in vehicles
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04H—BROADCAST COMMUNICATION
- H04H60/00—Arrangements for broadcast applications with a direct linking to broadcast information or broadcast space-time; Broadcast-related systems
- H04H60/09—Arrangements for device control with a direct linkage to broadcast information or to broadcast space-time; Arrangements for control of broadcast-related services
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L69/00—Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
- H04L69/22—Parsing or analysis of headers
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L27/00—Modulated-carrier systems
- H04L27/0014—Carrier regulation
- H04L2027/0024—Carrier regulation at the receiver end
- H04L2027/0026—Correction of carrier offset
Abstract
The invention discloses a processing method for audio synchronization control among embedded devices of an airborne passenger cabin, comprising the following step: based on comparison values maintained in the audio data receiving process and the audio data sending process, a handling procedure for audio data backlog caused by mismatched receiving and sending frequencies is added to the audio data sending process. The invention solves the problem of audio desynchronization caused by frequency offset and preserves the fidelity of the audio data to the greatest extent.
Description
Technical Field
The invention relates to the technical field of airborne cabin audio control, and in particular to a processing method for audio synchronization control between airborne cabin embedded devices.
Background
The cabin broadcast and intercom system of an aircraft generally comprises audio acquisition devices, audio transmission relay devices, audio control and playback devices, microphones, loudspeakers and other components. It provides voice input and output interfaces for the pilots and cabin crew, and implements the passenger address service for the flight crew as well as point-to-point and group intercom services between the flight deck and the cabin crew and among multiple crew members. The audio acquisition device samples and packetizes the voice data, and the audio transmission relay device distributes the audio data to the audio control and playback devices for playback according to the communication needs of the broadcast and intercom functions. To reduce buffering delay, 384 bytes of audio sample data are transmitted as one audio packet every 4 ms; the transmission rate and the interface protocol conform to the ARINC 628P3 standard.
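As a quick check on the packet sizing above, the 384-byte / 4 ms figure is consistent with the 48 kHz, 16-bit sampling described later for the audio acquisition device, assuming a single (mono) audio channel. The short C sketch below only illustrates this arithmetic; it is not part of the claimed method.

#include <stdio.h>

/* Illustrative check of the packet sizing described above.
 * Assumptions (not stated together in the original text): 48 kHz sample
 * rate, 16-bit samples, one audio channel. */
int main(void)
{
    const int sample_rate_hz   = 48000; /* samples per second */
    const int bytes_per_sample = 2;     /* 16-bit PCM */
    const int packet_period_ms = 4;     /* one packet every 4 ms */

    int samples_per_packet = sample_rate_hz * packet_period_ms / 1000; /* 192 */
    int bytes_per_packet   = samples_per_packet * bytes_per_sample;    /* 384 */
    int packets_per_second = 1000 / packet_period_ms;                  /* 250 */

    printf("samples/packet=%d bytes/packet=%d packets/s=%d\n",
           samples_per_packet, bytes_per_packet, packets_per_second);
    return 0;
}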
To meet the requirements of real-time performance, synchronism and high stability, the audio acquisition devices, audio transmission relay devices and audio control and playback devices of a cabin broadcast and intercom system are generally developed on embedded operating systems. Because the hardware circuits and signal processing software differ from device to device, even when crystal oscillators of the same type are used as frequency sources, the clock frequency offsets between devices cause the error accumulated during audio data transmission to grow; audible lag appears and the overall broadcast and intercom services are degraded.
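To give a feel for the magnitudes involved, the sketch below estimates how quickly packets back up in a receiving buffer for a given clock mismatch. The 100 ppm offset used here is an illustrative assumption, not a figure from the original text.

#include <stdio.h>

/* Rough estimate of receive-buffer growth caused by a clock frequency
 * offset. Assumption: the sender's 4 ms tick runs 'ppm_offset' parts per
 * million fast relative to the receiver's tick, so it emits slightly more
 * than 250 packets per receiver-second. */
int main(void)
{
    const double packets_per_second = 250.0;  /* nominal, 4 ms send period */
    const double ppm_offset         = 100.0;  /* illustrative 100 ppm mismatch */

    double extra_packets_per_second = packets_per_second * ppm_offset * 1e-6;
    double seconds_per_extra_packet = 1.0 / extra_packets_per_second;

    /* ~0.025 extra packets/s, i.e. one surplus 4 ms packet roughly every 40 s */
    printf("extra packets/s: %.3f, one surplus packet every %.0f s\n",
           extra_packets_per_second, seconds_per_extra_packet);
    return 0;
}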
Disclosure of Invention
The invention aims to overcome the defects of the prior art by providing a processing method for audio synchronization control between embedded devices of an airborne cabin, which solves the problem of audio desynchronization caused by frequency offset and preserves the fidelity of the audio data to the greatest extent.
The purpose of the invention is realized by the following scheme:
A processing method for audio synchronization control between airborne cabin embedded devices comprises the following step: based on comparison values maintained in the audio data receiving process and the audio data sending process, a handling procedure for audio data backlog caused by mismatched receiving and sending frequencies is added to the audio data sending process.
Further, the comparison value comprises the difference between the audio data receiving counter and the audio data sending counter.
Further, the handling procedure for audio data backlog comprises the following sub-step: with a set time as the period, calculating the difference diffAudio between the audio data receiving counter RevAudio and the audio data sending counter SendAudio; if the diffAudio of the current period equals the diffAudio of the previous period, no frequency offset exists between the audio transmission relay device and the audio acquisition device; if they differ, a frequency offset exists and the audio backlog caused by the frequency offset is eliminated.
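A minimal sketch of this detection step is given below, assuming that RevAudio and SendAudio are counters updated elsewhere by the receiving and sending processes and that the routine is called once per set period; the counter types and the surrounding timer framework are assumptions.

/* Periodic frequency-offset detection as described above. */
extern volatile unsigned int RevAudio;   /* packets received */
extern volatile unsigned int SendAudio;  /* packets sent */

static int prevDiffAudio;                /* diffAudio from the previous period */

int detect_frequency_offset(void)        /* called once per set period (1 s) */
{
    int diffAudio   = (int)(RevAudio - SendAudio);
    int offsetValue = diffAudio - prevDiffAudio;   /* change over one period */

    prevDiffAudio = diffAudio;

    /* offsetValue == 0: no frequency offset between the relay device and the
     * acquisition device; offsetValue != 0: an offset exists and the backlog
     * must be drained by the send-side handling procedure. */
    return offsetValue;
}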
Further, the method comprises a receiving process for audio control data, which comprises the following sub-steps: S11, receiving a network audio control message through a predefined network port; S12, identifying the source of the audio control message from the receiving port and the source IP address, the sources including a cockpit microphone, cabin microphones, and background music, pre-recorded audio and video stored in the entertainment system; S13, parsing the received audio control message; S14, interpreting the audio control information and judging whether it requests starting or stopping audio playback; if starting, performing step S15, and if stopping, performing step S16; S15, judging, according to the broadcast and intercom priorities of the cabin broadcast and intercom system, whether the priority of the device requesting playback is higher than that of the currently playing audio source; if so, setting the audio playing state to playing, otherwise setting it to stopped; and S16, ending the processing and continuing to wait for the next network audio control message frame.
Further, the receiving process for audio data comprises the following sub-steps: S21, receiving a network audio data message through a predefined network port; S22, identifying the source of the audio data message from the receiving port and the source IP address; S23, if the source of the data message matches the source device set to the playing state in step S15, going to step S24, otherwise going to step S26; S24, storing the audio data in a receiving buffer; S25, incrementing the audio data receiving counter RevAudio by 1; and S26, ending the processing and continuing to wait for the next network audio data message frame.
Further, the sending process for audio data comprises the following sub-steps: S31, if a 4 ms clock interrupt is received, going to step S32, otherwise continuing to wait; S32, judging whether the number of audio packets in the receiving buffer is greater than or equal to 1; if so, going to step S33, otherwise returning to step S31; S33, assembling the audio data in the receiving buffer and the identification number of the destination device identified in step S13 into an audio data packet and sending it through the network; S34, incrementing the audio data sending counter SendAudio by 1 and the beat counter period by 1; S35, ending the sending and returning to step S31.
Further, the handling procedure for audio data backlog comprises the following sub-steps: S41, calculating the difference offsetValue between the diffAudio of the current 1 s and the diffAudio of the previous 1 s; S42, if offsetValue > 0, going to step S43; if offsetValue < 0, going to step S49; S43, under normal conditions audio packets are sent on a 4 ms cycle, so 1000 / 4 = 250 audio packets are transmitted per second; when offsetValue > 0, the number of audio packets that must be transmitted in the following 1 s is 250 + offsetValue so that packets do not back up in the receiving buffer, and mixValue is calculated according to the formula mixValue = 250 / offsetValue; S44, calculating the sending beats mixPeriod[i] that require mixing according to the following procedure,
for (i = 1; i <= offsetValue; i++)
{
    mixPeriod[i] = i * mixValue;
}
S45, when the beat counter period in step S33 equals mixPeriod[i], mixing the audio data of the two currently adjacent packets using the average weight adjustment method, whose formula is mixAudio[offset] = (Audio1[offset] + Audio2[offset]) / 2; S46, assembling mixAudio as new audio data and sending it through step S33; S47, incrementing the audio data sending counter SendAudio of step S34 by 2; S48, repeating the processing of step S45 until all mixPeriod[i] values have been traversed; S49, ending the processing, and returning to step S41 when the next 1 s clock period expires.
Further, in step S44, when offsetValue is 3 there are 3 sending beats that require mixing, namely mixPeriod[0] = 83, mixPeriod[1] = 166 and mixPeriod[2] = 249.
The beneficial effects of the invention include:
the audio control synchronization method realized by the invention can solve the problem of audio asynchronism caused by frequency deviation by using a simplest and most effective algorithm and processing logic on the basis of not moving the original audio data processing frame, controls the influence of the frequency deviation on the current equipment, and does not cause the influence of frequency deviation superposition on the back-end equipment. Meanwhile, the audio mixing algorithm adopts a 2-channel average adjustment weight method, so that the maximum fidelity of the audio data is achieved under the condition that new noise is not introduced into the audio data.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
FIG. 1 is a block diagram of a cabin broadcast and intercom system;
FIG. 2 is a flow chart of audio control message reception according to an embodiment of the present invention;
FIG. 3 is a flow chart of audio data reception according to an embodiment of the present invention;
fig. 4 is a flow chart of audio data transmission according to an embodiment of the present invention.
Detailed Description
All features disclosed in all embodiments in this specification, or all methods or process steps implicitly disclosed, may be combined and/or expanded, or substituted, in any way, except for mutually exclusive features and/or steps.
Example 1: in view of the problems described in the background art, this embodiment provides a processing method for audio synchronization control between airborne cabin embedded devices, on the basis that the existing audio communication interface conforms to the ARINC 628P3 protocol, so as to solve the problem of audio desynchronization caused by frequency offsets among the devices of a cabin broadcast and intercom system. The method comprises the following step: based on comparison values maintained in the audio data receiving process and the audio data sending process, a handling procedure for audio data backlog caused by mismatched receiving and sending frequencies is added to the audio data sending process.
As shown in fig. 1, the cabin broadcast and intercom system provides services such as passenger address, group conference, emergency/normal calls and audio/video playback for the pilots and cabin crew, and generally comprises audio acquisition devices, audio transmission relay devices, audio control and playback devices, microphones, handsets, loudspeakers and other components; its structural block diagram is shown in fig. 1. The audio acquisition device captures sound from the pilot microphone, the cabin crew microphones, background music and pre-recorded audio and video, sampling the audio data at 48 kHz with 16 bits per sample. The audio transmission relay device implements the transmission control functions for cabin broadcast and intercom audio and serves as the control hub and core of the whole system; its functions include broadcast/intercom priority control, audio data distribution and relay control, volume and indicator light control, system maintenance and other processing logic. The audio control and playback device drives the playback of audio data; it is the terminal control unit of the whole system and directly drives the loudspeakers and handsets.
Example 2: building on embodiment 1, this embodiment illustrates the method of the invention using the most complex and central device, the audio transmission relay device, including the adaptive control mechanism through which audio synchronization is achieved; the audio control and playback device can be implemented by direct analogy. The audio transmission relay device generally runs a receiving process for audio control data, a receiving process for audio data and a sending process for audio data. The audio control data receiving and processing steps are shown in fig. 2 and are as follows (an illustrative code sketch is given after the listed steps):
step 11, receiving a network audio control message through a predefined network port;
step 12, identifying the source of the audio control message according to the receiving port and the sending source IP address, wherein the source comprises a cockpit microphone, a cabin microphone and background music, pre-recorded audio, video and the like which are pre-stored in an entertainment system;
step 13, parsing the received audio control message;
step 14, interpreting the audio control information and judging whether it requests starting or stopping audio playback; if starting, performing step 15, and if stopping, performing step 16;
step 15, judging, according to the broadcast and intercom priorities of the cabin broadcast and intercom system, whether the priority of the device requesting playback is higher than that of the currently playing audio source; if so, setting the audio playing state to playing, otherwise setting it to stopped;
and step 16, ending the data processing and continuing to wait for the next frame of network audio control message.
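The following C sketch summarizes steps 11 to 16 as they might look inside the relay device's control-message handler. The message layout, the source identifiers and the priority table are assumptions made for illustration; the patent text does not define these data structures, and the priority comparison shown here keeps the current source when the request has lower priority, which is one reading of step 15.

/* Hypothetical control-message layout and source/priority table. */
enum ctrl_type { CTRL_START_PLAY, CTRL_STOP_PLAY };
enum source_id { SRC_COCKPIT_MIC, SRC_CABIN_MIC, SRC_ENTERTAINMENT, SRC_NONE };

struct ctrl_msg {
    enum ctrl_type type;      /* start or stop playback (step 14) */
    enum source_id source;    /* derived from port + source IP (step 12) */
};

static enum source_id playing_source = SRC_NONE;

static int priority_of(enum source_id s)   /* assumed: higher value = higher priority */
{
    static const int prio[] = { 3 /*cockpit*/, 2 /*cabin*/, 1 /*entertainment*/, 0 /*none*/ };
    return prio[s];
}

void handle_ctrl_msg(const struct ctrl_msg *m)       /* steps 13-16 */
{
    if (m->type == CTRL_START_PLAY) {                /* step 15: priority check */
        if (playing_source == SRC_NONE ||
            priority_of(m->source) > priority_of(playing_source))
            playing_source = m->source;              /* state: playing */
    } else {
        playing_source = SRC_NONE;                   /* state: stopped */
    }
    /* step 16: return and wait for the next control message frame */
}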
The audio data receiving and processing steps are shown in fig. 3 and are as follows (an illustrative code sketch is given after the listed steps):
step 21, receiving a network audio data message through a predefined network port;
step 22, identifying the source of the audio data message according to the receiving port and the sending source IP address;
step 23, if the receiving source of the data message is consistent with the source device in the playing state in step 15, performing step 24, otherwise, performing step 26;
step 24, storing the audio data in a receiving buffer area;
step 25, incrementing the audio data receiving counter RevAudio by 1;
and step 26, ending the data processing and continuing to wait for the next frame of network audio data message.
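A compact sketch of the receiving path (steps 21 to 26) is given below, assuming a simple ring buffer for the received packets; the buffer type and depth are illustrative assumptions, and the source matching of step 23 is assumed to have been performed by the caller.

#include <string.h>
#include <stdint.h>

#define PKT_SIZE   384   /* bytes per audio packet (4 ms of 48 kHz, 16-bit audio) */
#define RING_SLOTS 32    /* illustrative receiving buffer depth */

uint8_t      ring[RING_SLOTS][PKT_SIZE];
unsigned int ring_head, ring_tail;           /* producer / consumer indices */
volatile unsigned int RevAudio;              /* audio data receiving counter */

/* Steps 24-26: called for each received audio data message whose source
 * matches the source device currently in the playing state. */
void on_audio_packet(const uint8_t *payload)
{
    memcpy(ring[ring_head % RING_SLOTS], payload, PKT_SIZE);  /* step 24 */
    ring_head++;
    RevAudio++;                                               /* step 25 */
    /* step 26: return and wait for the next audio data message frame */
}

/* Number of packets waiting in the receiving buffer (used by the send path). */
unsigned int packets_pending(void)
{
    return ring_head - ring_tail;
}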
The audio data sending and processing steps are shown in fig. 4 and are as follows (an illustrative code sketch is given after the listed steps):
step 31, if a 4ms clock interrupt is received, step 32 is carried out, otherwise, the waiting is continued;
step 32, judging whether the number of audio packets in the receiving buffer is greater than or equal to 1; if so, performing step 33, otherwise returning to step 31;
step 33, assembling the audio data in the receiving buffer and the identification number of the destination device identified in step 13 into an audio data packet, and sending the audio data packet through the network;
step 34, incrementing the audio data sending counter SendAudio by 1 and the beat counter period by 1;
in step 35, data transmission ends, and the process continues to step 31.
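The sending path (steps 31 to 35) might look like the sketch below when driven from the 4 ms clock interrupt. The network send helper and the destination-ID handling are placeholders, and the ring buffer and packets_pending() from the previous sketch are reused.

#include <stdint.h>

extern uint8_t       ring[][384];            /* receiving buffer from the sketch above */
extern unsigned int  ring_tail;
extern unsigned int  packets_pending(void);

volatile unsigned int SendAudio;   /* audio data sending counter (step 34) */
volatile unsigned int period;      /* beat counter, one tick per packet sent */

/* Placeholder: assemble destination ID + payload and send over the network. */
extern void net_send_audio(uint32_t dest_id, const uint8_t *payload);

/* Called from the 4 ms clock interrupt (step 31). */
void on_4ms_tick(uint32_t dest_id)
{
    if (packets_pending() < 1)                       /* step 32: nothing buffered */
        return;                                      /* wait for the next tick */

    net_send_audio(dest_id, ring[ring_tail % 32]);   /* step 33 (32 = ring depth) */
    ring_tail++;

    SendAudio++;                                     /* step 34 */
    period++;
    /* step 35: done until the next 4 ms interrupt */
}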
The audio synchronization control adds, to the audio data sending process, a handling procedure for the audio data backlog caused by mismatched receiving and sending frequencies. Concretely, with 1 s as the period, the difference diffAudio between the audio data receiving counter RevAudio of step 25 and the audio data sending counter SendAudio of step 34 is calculated. If the diffAudio of the current 1 s equals the diffAudio of the previous 1 s, no frequency offset exists between the audio transmission relay device and the audio acquisition device; if they differ, a frequency offset exists and the backlog it causes must be eliminated. The specific implementation steps are as follows (an illustrative code sketch is given after the listed steps):
step 41, calculating the difference offsetValue between the diffAudio of the current 1 s and the diffAudio of the previous 1 s;
step 42, if offsetValue > 0, the clock frequency of the audio transmission relay device is lower than that of the audio acquisition device, so going to step 43; if offsetValue < 0, the clock frequency of the relay device is higher than that of the acquisition device, in which case data does not accumulate in the receiving buffer and no backlog occurs, so going directly to step 49;
step 43, under normal conditions audio packets are sent on a 4 ms cycle, so 1000 / 4 = 250 packets are transmitted per second; when offsetValue > 0, the number of packets that must be transmitted in the following 1 s is 250 + offsetValue to ensure that packets do not back up in the receiving buffer, and mixValue is calculated according to the formula mixValue = 250 / offsetValue;
step 44, calculating the sending beats mixPeriod[i] that require mixing according to the following procedure; for example, when offsetValue is 3 there are 3 sending beats that require mixing, namely mixPeriod[0] = 83, mixPeriod[1] = 166 and mixPeriod[2] = 249;
for (i = 1; i <= offsetValue; i++)
{
    mixPeriod[i] = i * mixValue;
}
step 45, when the beat counter period in step 33 equals mixPeriod[i], mixing the audio data of the two currently adjacent packets using the average weight adjustment method, whose formula is mixAudio[offset] = (Audio1[offset] + Audio2[offset]) / 2;
step 46, assembling mixAudio as new audio data and sending it through step 33;
step 47, incrementing the audio data sending counter SendAudio of step 34 by 2;
step 48, repeating the processing of step 45 until all mixPeriod[i] values have been traversed;
and step 49, ending the processing, and returning to step 41 when the next 1 s clock period expires.
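Putting steps 41 to 49 together, the once-per-second backlog handling might look like the sketch below. The packet layout (192 16-bit samples per packet), the counter types and the way the send path consumes the mixPeriod[] schedule are assumptions made for illustration.

#include <stdint.h>

#define SAMPLES_PER_PKT 192            /* 384 bytes of 16-bit samples */
#define PKTS_PER_SEC    250            /* nominal 4 ms send cycle */

extern volatile unsigned int RevAudio, SendAudio;

static int prevDiffAudio;
static int mixPeriod[PKTS_PER_SEC];    /* beats at which mixing is scheduled */
static int mixCount;

/* Step 45: equal-weight mix of two adjacent packets into one. */
void mix_packets(const int16_t *audio1, const int16_t *audio2, int16_t *mixAudio)
{
    for (int offset = 0; offset < SAMPLES_PER_PKT; offset++)
        mixAudio[offset] = (int16_t)(((int)audio1[offset] + (int)audio2[offset]) / 2);
}

/* Steps 41-44: run once per 1 s clock period. */
void schedule_mixing(void)
{
    int diffAudio   = (int)(RevAudio - SendAudio);
    int offsetValue = diffAudio - prevDiffAudio;        /* step 41 */
    prevDiffAudio   = diffAudio;

    mixCount = 0;
    if (offsetValue <= 0)                               /* steps 42 and 49 */
        return;

    int mixValue = PKTS_PER_SEC / offsetValue;          /* step 43 */
    for (int i = 1; i <= offsetValue && i < PKTS_PER_SEC; i++)   /* step 44 */
        mixPeriod[mixCount++] = i * mixValue;

    /* At each scheduled beat the send path mixes the next two buffered packets
     * into one with mix_packets(), sends the result through step 33, and adds
     * 2 to SendAudio (steps 45-48). */
}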
The audio synchronization control method implemented by this scheme solves the problem of audio desynchronization caused by frequency offset with a simple and effective method and processing logic, without disturbing the original audio data processing framework; it confines the influence of the frequency offset to the current device, so that the offset does not accumulate and propagate to downstream devices. In addition, the mixing algorithm uses a two-packet average weight adjustment method, so the audio data retains maximum fidelity without introducing new noise.
The parts not involved in the present invention are the same as or can be implemented using the prior art.
The above-described embodiment is only one embodiment of the present invention. Those skilled in the art can easily make various modifications and variations based on the application and principles disclosed herein, and the present invention is not limited to the method described in the above embodiment; the embodiment is therefore illustrative rather than restrictive.
Those skilled in the art may devise other embodiments based on the foregoing disclosure or by adapting knowledge and techniques of the relevant art, and features of the various embodiments may be interchanged or substituted. Such modifications and variations that do not depart from the spirit and scope of the present invention are intended to fall within the scope of the following claims.
Claims (8)
1. A processing method for audio synchronization control between airborne cabin embedded devices, characterized by comprising the following step: based on comparison values maintained in the audio data receiving process and the audio data sending process, a handling procedure for audio data backlog caused by mismatched receiving and sending frequencies is added to the audio data sending process.
2. The processing method for audio synchronization control between airborne cabin embedded devices according to claim 1, wherein the comparison value comprises the difference between the audio data receiving counter and the audio data sending counter.
3. The processing method for audio synchronization control between airborne cabin embedded devices according to claim 2, wherein the handling procedure for audio data backlog comprises the following sub-step: with a set time as the period, calculating the difference diffAudio between the audio data receiving counter RevAudio and the audio data sending counter SendAudio; if the diffAudio of the current period equals the diffAudio of the previous period, no frequency offset exists between the audio transmission relay device and the audio acquisition device; if they differ, a frequency offset exists and the audio backlog caused by the frequency offset is eliminated.
4. The processing method for audio synchronization control between airborne cabin embedded devices according to any one of claims 1 to 3, characterized by comprising a receiving process for audio control data, which comprises the following sub-steps:
S11, receiving a network audio control message through a predefined network port;
S12, identifying the source of the audio control message from the receiving port and the source IP address, the sources including a cockpit microphone, cabin microphones, and background music, pre-recorded audio and video stored in the entertainment system;
S13, parsing the received audio control message;
S14, interpreting the audio control information and judging whether it requests starting or stopping audio playback; if starting, performing step S15, and if stopping, performing step S16;
S15, judging, according to the broadcast and intercom priorities of the cabin broadcast and intercom system, whether the priority of the device requesting playback is higher than that of the currently playing audio source; if so, setting the audio playing state to playing, otherwise setting it to stopped;
and S16, ending the processing and continuing to wait for the next network audio control message frame.
5. The processing method for audio synchronization control between airborne cabin embedded devices according to claim 4, wherein the receiving process for audio data comprises the following sub-steps:
S21, receiving a network audio data message through a predefined network port;
S22, identifying the source of the audio data message from the receiving port and the source IP address;
S23, if the source of the data message matches the source device set to the playing state in step S15, going to step S24, otherwise going to step S26;
S24, storing the audio data in a receiving buffer;
S25, incrementing the audio data receiving counter RevAudio by 1;
and S26, ending the processing and continuing to wait for the next network audio data message frame.
6. The processing method for audio synchronization control between airborne cabin embedded devices according to claim 5, wherein the sending process for audio data comprises the following sub-steps:
S31, if a 4 ms clock interrupt is received, going to step S32, otherwise continuing to wait;
S32, judging whether the number of audio packets in the receiving buffer is greater than or equal to 1; if so, going to step S33, otherwise returning to step S31;
S33, assembling the audio data in the receiving buffer and the identification number of the destination device identified in step S13 into an audio data packet and sending it through the network;
S34, incrementing the audio data sending counter SendAudio by 1 and the beat counter period by 1;
S35, ending the sending and returning to step S31.
7. The processing method for audio synchronization control between airborne cabin embedded devices according to claim 6, wherein the handling procedure for audio data backlog comprises the following sub-steps:
S41, calculating the difference offsetValue between the diffAudio of the current 1 s and the diffAudio of the previous 1 s;
S42, if offsetValue > 0, going to step S43; if offsetValue < 0, going to step S49;
S43, under normal conditions audio packets are sent on a 4 ms cycle, so 1000 / 4 = 250 audio packets are transmitted per second; when offsetValue > 0, the number of audio packets that must be transmitted in the following 1 s is 250 + offsetValue, so that packets do not back up in the receiving buffer;
calculating mixValue according to the formula mixValue = 250 / offsetValue;
S44, calculating the sending beats mixPeriod[i] that require mixing according to the following procedure,
for (i = 1; i <= offsetValue; i++)
{
    mixPeriod[i] = i * mixValue;
}
S45, when the beat counter period in step S33 equals mixPeriod[i], mixing the audio data of the two currently adjacent packets using the average weight adjustment method, whose formula is mixAudio[offset] = (Audio1[offset] + Audio2[offset]) / 2;
S46, assembling mixAudio as new audio data and sending it through step S33;
S47, incrementing the audio data sending counter SendAudio of step S34 by 2;
S48, repeating the processing of step S45 until all mixPeriod[i] values have been traversed;
S49, ending the processing, and returning to step S41 when the next 1 s clock period expires.
8. The processing method for audio synchronization control between airborne cabin embedded devices according to claim 7, wherein in step S44, when offsetValue is 3, there are 3 sending beats that require mixing, namely mixPeriod[0] = 83, mixPeriod[1] = 166 and mixPeriod[2] = 249.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111344854.8A CN114124631B (en) | 2021-11-15 | 2021-11-15 | Processing method suitable for audio synchronous control between embedded equipment of aircraft cabin |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111344854.8A CN114124631B (en) | 2021-11-15 | 2021-11-15 | Processing method suitable for audio synchronous control between embedded equipment of aircraft cabin |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114124631A true CN114124631A (en) | 2022-03-01 |
CN114124631B CN114124631B (en) | 2023-10-27 |
Family
ID=80395389
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111344854.8A Active CN114124631B (en) | 2021-11-15 | 2021-11-15 | Processing method suitable for audio synchronous control between embedded equipment of aircraft cabin |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114124631B (en) |
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103297173A (en) * | 2012-02-24 | 2013-09-11 | 国家广播电影电视总局广播科学研究院 | Method and device for transmission, distribution and receiving of data of digital audio broadcasting system in China |
CN103634254A (en) * | 2012-08-23 | 2014-03-12 | 华东师范大学 | Indirect modulation and demodulation data transmission device and transmission method thereof |
US20180192385A1 (en) * | 2014-02-21 | 2018-07-05 | Summit Semiconductor, Llc | Software based audio timing and synchronization |
US20170155460A1 (en) * | 2014-03-24 | 2017-06-01 | Park Air Systems Limited | Simultaneous call transmission detection |
CN104968043A (en) * | 2015-04-29 | 2015-10-07 | 重庆邮电大学 | A Clock Synchronization Frequency Offset Estimation Method Applicable to WIA-PA Networks |
CN109168059A (en) * | 2018-10-17 | 2019-01-08 | 上海赛连信息科技有限公司 | A kind of labial synchronization method playing audio & video respectively on different devices |
CN111857645A (en) * | 2020-07-31 | 2020-10-30 | 北京三快在线科技有限公司 | Audio data processing method, audio data playing method, audio data processing device, audio data playing device, audio data medium and unmanned equipment |
CN113473426A (en) * | 2021-06-04 | 2021-10-01 | 深圳市欧思微电子有限公司 | Audio signal Bluetooth low-delay transmission method based on PC-USB connection |
Also Published As
Publication number | Publication date |
---|---|
CN114124631B (en) | 2023-10-27 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||