1337043

IX. Description of the Invention

[Technical Field]

The present invention provides an audio/video (A/V) data transmission method and an A/V system, and more particularly a data transmission method and an A/V system in which the audio and video streams are split onto separate paths yet played back synchronously.

[Prior Art]

Audio/video (A/V) transmission technology has a wide range of functions and applications and is commonly used in A/V systems such as security surveillance systems, projectors, and home theaters. Prior-art A/V systems generally adopt either a transmission scheme in which the audio and video are synchronized and merged, or one in which they are split and not synchronized.

A security surveillance system usually comprises a plurality of monitors and a monitoring center, while projectors are often used for large conferences or multi-presenter briefings. During a conference or briefing, whenever the presenter changes, a conventional wired projector requires frequent plugging and unplugging of cables and switching of computers and the projector, which not only wastes time but is also inconvenient to use. With the spread of wireless networking (wireless fidelity, WiFi) and the increasing speed of embedded central processing units (embedded CPUs), wireless projectors have become more and more common. A wireless projector can connect wirelessly to every participant's computer, so the presenter can be switched at any time without repeatedly plugging and unplugging the display cable.

Please refer to FIG. 1. FIG. 1 is a functional block diagram of a prior-art A/V system 100. The A/V system 100 adopts a synchronized, merged transmission scheme and comprises a wireless transmitting-end system 110 and a wireless receiving-end system 120. The wireless transmitting-end system 110 receives video source data I_VIDEO and audio source data I_AUDIO from an A/V input device 112, compresses and encodes the video source data I_VIDEO and the audio source data I_AUDIO with an A/V processing unit 114 to produce a corresponding A/V bit-stream S_A/V, and finally outputs the A/V bit-stream S_A/V through a wireless output port 116.
The wireless receiving-end system 120 receives the A/V bit-stream S_A/V transmitted by the wireless transmitting-end system 110 through a wireless receiving port 126, decompresses and decodes the A/V bit-stream S_A/V with an A/V processing unit 124 to produce corresponding video output data O_VIDEO and audio output data O_AUDIO, and finally delivers the video output data O_VIDEO and the audio output data O_AUDIO to an A/V output device 122. Because the prior-art A/V system 100 adopts a synchronized architecture in which the video and audio data are merged at the transmitting end and sent wirelessly as a single stream, only the handling of the A/V data within a single receiving-end system needs to be considered. When the A/V input device 112 is such a monitor and the A/V output device 122 is a screen in the monitoring center, the A/V system 100 can serve as a security surveillance system; when the A/V input device 112 is a notebook computer and the A/V output device 122 is a wireless projector, the A/V system 100 can serve as a data projection system.
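The patent does not define the internal format of the merged bit-stream S_A/V. Purely as an illustration of the merged, synchronized scheme of FIG. 1, the sketch below (the Frame, merge_av, and demux_av names are hypothetical) interleaves compressed video and audio frames by timestamp so that a single receiving end can demultiplex them and play both together.

    # Illustrative only: interleave compressed video and audio frames into one
    # merged stream, tagging each frame so the receiver can demultiplex it.
    from dataclasses import dataclass
    from typing import List, Tuple

    @dataclass
    class Frame:
        kind: str        # "video" or "audio"
        timestamp_ms: int
        payload: bytes

    def merge_av(video: List[Frame], audio: List[Frame]) -> List[Frame]:
        """Merge two frame lists into one stream ordered by timestamp."""
        return sorted(video + audio, key=lambda f: f.timestamp_ms)

    def demux_av(stream: List[Frame]) -> Tuple[List[Frame], List[Frame]]:
        """Split the merged stream back into video and audio frame lists."""
        video = [f for f in stream if f.kind == "video"]
        audio = [f for f in stream if f.kind == "audio"]
        return video, audio

    if __name__ == "__main__":
        v = [Frame("video", t, b"V") for t in (0, 33, 66)]
        a = [Frame("audio", t, b"A") for t in (0, 20, 40, 60)]
        s_av = merge_av(v, a)              # plays the role of S_A/V
        assert demux_av(s_av) == (v, a)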
On the other hand, as wireless network bandwidth has grown (for example under the IEEE 802.11a and 802.11g wireless network standards), the application of wireless projectors has gradually expanded from wireless data projectors used in offices, conferences, and briefings toward wireless video projectors used in the home. In a home-theater environment, the user wants the video signal to be output through the projector while the audio signal is fed to an audio set.

Please refer to FIG. 2. FIG. 2 is a functional block diagram of a prior-art A/V system 200. The A/V system 200 adopts an unsynchronized, split transmission scheme and comprises a wireless transmitting-end system 210 and a wireless receiving-end system 220. The wireless transmitting-end system 210 receives video source data I_VIDEO and audio source data I_AUDIO from an A/V input device 212, compresses and encodes the video source data I_VIDEO with a video processing unit 214 to produce a corresponding video bit-stream S_V, outputs the video bit-stream S_V through a wireless output port 216, and at the same time outputs the audio source data I_AUDIO through a wired output port 218. The wireless receiving-end system 220 receives the video bit-stream S_V and the audio source data I_AUDIO transmitted by the wireless transmitting-end system 210 through a wireless receiving port 226 and a wired receiving port 228 respectively, decompresses and decodes the video bit-stream S_V with a video processing unit 224 to produce corresponding video output data O_VIDEO, and finally delivers the video output data O_VIDEO and the audio source data I_AUDIO to a video output device 222 and an audio output device 223 respectively. The prior-art A/V system 200 can serve as a home theater, with the video output device 222 and the audio output device 223 being a projector and an audio set respectively. However, because the architecture is unsynchronized and split, with the video data and the audio data delivered over wireless and wired paths respectively, the video bit-stream may suffer varying degrees of interference during transmission, so the video bit-stream S_V and the audio source data I_AUDIO received by the receiving-end system 220 may be out of step with each other, and the video and audio are then not played back synchronously on the two output devices.
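To make the drawback concrete: with two independent paths and no feedback, the offset seen at the output devices is simply the difference of the two end-to-end delays, and nothing drives it back toward zero. The sketch below is illustrative only; the delay figures and names are assumptions, not values from the patent.

    # Illustrative only: the offset between the two split paths is the difference
    # of their end-to-end delays; retransmissions on the wireless video path make
    # it grow while the wired audio path stays roughly steady.
    def playback_offset_ms(video_path_delay_ms: float, audio_path_delay_ms: float) -> float:
        """Positive result: video lags audio at the output devices."""
        return video_path_delay_ms - audio_path_delay_ms

    if __name__ == "__main__":
        audio_delay = 5.0                                  # wired path, roughly constant
        for retransmissions in range(4):
            video_delay = 20.0 + 15.0 * retransmissions    # wireless interference
            print(retransmissions, playback_offset_ms(video_delay, audio_delay))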
[Summary of the Invention]

The present invention provides a split and synchronized A/V data transmission method. The method comprises: initializing a transmitting-end system and a receiving-end system to obtain a predetermined compression ratio and a predetermined capacity of a first video data buffer in the transmitting-end system; having the transmitting-end system process the video data and the audio data and use a video data playback notification message to convey the processing status; having the transmitting-end system transmit the video data and the audio data according to the signals and notification messages sent back by the receiving-end system; having the receiving-end system receive and process the video data and the audio data to produce corresponding video output data and audio output data; and having the receiving-end system output the video output data to a video output device and the audio output data to an audio output device.

The present invention further provides a split and synchronized A/V data transmission system comprising a transmitting-end system and a receiving-end system. The transmitting-end system comprises a video processing unit for receiving and processing video data sent from an A/V input device, an audio processing unit for processing audio data sent from the A/V input device, a video data buffer for storing the video data, an audio data buffer for storing the audio data, a wireless output port for transmitting the video data wirelessly, and a wired output port for transmitting the audio data over a wire. The receiving-end system comprises a wireless receiving port for receiving the video data wirelessly, a wired receiving port for receiving the audio data, a video processing unit for processing the video data to produce corresponding video output data, and an audio processing unit for processing the audio data to produce corresponding audio output data.

[Embodiments]

The present invention provides a split and synchronized A/V data transmission method and A/V system, so that video and audio data sent from the same source can be played back synchronously even though they are handled by different output systems. The method of the present invention comprises initializing the transmitting end and the receiving end of the A/V system, and then starting synchronized playback of the A/V data.
Pi m 二方統3。。採用影音分流且同步之輪 “傳心糸統3! 一接收端系統3 2 〇 音輸入設備312接收影像來源資料I·。 資料T AUDIG。影像處理單元314可對影像來源 貝抖Iv,DEO進行壓縮和編碼等處 像位元,、…h 。寺處理,進而產生相對應之影 “貝抖SV’再將影像位元流資料Sv存人-影像資 料緩衝器311。聲立卢搜留-像貝 ,41主/日處理早兀阳可控制輸出聲音來源資 '、audio日’之"。_里及時間。傳送端系統 可計算影像資料緩衝器311和签立次^㈣衣置317 園時’此時可透過-益線輪出端料量在一預定範 …線輸出知3】6輸出影像位元流資料Pi m two-party system 3. . Adopting the audio and video shunting and synchronizing wheel "Transmission system 3! A receiving end system 3 2 Arpeggio input device 312 receives the image source data I·. Data T AUDIG. The image processing unit 314 can perform image source BV Iv, DEO Compression and encoding are like bit bits, ... h. The temple process, and then the corresponding shadow "Bei shake SV" and then the image bit stream data Sv - image data buffer 311. Sound Li Lu search - like Bei, 41 main / day processing early Xiangyang can control the output of sound source ', audio day' ". _ and time. The transmitting end system can calculate the image data buffer 311 and the signing time ^(4) clothes 317 garden time 'At this time, the throughput of the light-transmitting wheel end is output at a predetermined range... 3 output 6 pixel output stream data
The receiving-end system 320 receives the video bit-stream S_V and the audio source data I_AUDIO transmitted by the transmitting-end system 310 through a wireless receiving port 326 and a wired receiving port 328 respectively, and stores the video bit-stream S_V in a video data buffer 321. When the amount of data in the video data buffer 321 lies within a predetermined range, a video processing unit 324 decompresses and decodes the video bit-stream S_V to produce corresponding video output data O_VIDEO; finally, the video output data O_VIDEO and the audio source data I_AUDIO are delivered to a video output device 322 and an audio output device 323 respectively.
The A/V system 300 of the present invention can serve as a home theater, with the video output device 322 and the audio output device 323 being a projector and an audio set respectively. Because the architecture is split, with the video data and the audio data delivered over wireless and wired paths respectively, the video bit-stream S_V may suffer varying degrees of interference during transmission, and the video bit-stream S_V and the audio source data I_AUDIO received by the receiving-end system 320 may therefore be out of step with each other. The A/V system 300 of the present invention thus uses the video processing unit 314, the audio processing unit 315, and the control device 317 to control parameters such as the transmission time, the transmission flow, and the compression ratio of the A/V data, and adjusts these parameters according to the A/V data actually received by the receiving-end system 320 and the times at which it is received, so that the video output data O_VIDEO played by the video output device 322 and the audio source data I_AUDIO played by the audio output device 323 remain synchronized.
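This coordination relies on a small set of control messages that the flowcharts below refer to: the video data reception notification, the video data playback notification, the two playback-time notifications, and the error signal. The patent does not specify how they are encoded; the sketch below merely gives them hypothetical Python names so the later steps are easier to follow.

    # Illustrative message definitions for the control traffic between the
    # transmitting-end system 310 and the receiving-end system 320; the enum and
    # field names are assumptions, not taken from the patent.
    from dataclasses import dataclass
    from enum import Enum, auto

    class MsgType(Enum):
        VIDEO_DATA_RECEIVE_NOTIFY = auto()   # FIG. 4 step 480 / FIG. 5 step 560
        VIDEO_DATA_PLAYBACK_NOTIFY = auto()  # checked in FIG. 6 step 640
        VIDEO_PLAYBACK_TIME_NOTIFY = auto()  # FIG. 8 step 870 / FIG. 7 step 740
        AUDIO_PLAYBACK_TIME_NOTIFY = auto()  # FIG. 7 step 770 / FIG. 8 step 880
        ERROR_SIGNAL = auto()                # FIG. 8 step 890 / FIG. 7 step 750

    @dataclass
    class ControlMessage:
        kind: MsgType
        system_time_ms: int = 0      # playback system time carried by the message
        error_ms: int = 0            # video/audio playback time difference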
Please refer to FIG. 4. The flowchart of FIG. 4 illustrates the method by which the A/V system 300 of the present invention initializes the transmitting-end system 310, comprising the following steps:

Step 400: Start.
Step 410: Initialize the network of the transmitting-end system 310.
Step 420: Determine whether a connection can be successfully established between the transmitting-end system 310 and the receiving-end system 320; if it can, go to step 430; if it cannot, go to step 410.
Step 430: Transmit dummy data to the receiving-end system 320 at a predetermined communication bandwidth.
Step 440: Receive the bandwidth information sent back by the receiving-end system 320.
Step 450: Determine, according to the bandwidth information, whether the system times of the transmitting-end system 310 and the receiving-end system 320 are synchronized with each other; if they are, go to step 470; if they are not yet synchronized, go to step 460.
Step 460: Update the predetermined communication bandwidth according to the bandwidth information; go to step 430.
Step 470: Set the predetermined compression ratio of the video data and the predetermined capacity of the video data buffer 311 according to the bandwidth information.
Step 480: Send a video data reception notification message to the receiving-end system 320.
Step 490: Prepare to capture the A/V data.

Please refer to FIG. 5. The flowchart of FIG. 5 illustrates the method by which the A/V system 300 of the present invention initializes the receiving-end system 320, comprising the following steps:

Step 500: Start.
Step 510: Initialize the network of the receiving-end system 320.
Step 520: Determine whether a connection can be successfully established between the transmitting-end system 310 and the receiving-end system 320; if it can, go to step 530; if it cannot, go to step 510.
Step 530: Receive the dummy data transmitted by the transmitting-end system 310.
Step 540: Generate the corresponding bandwidth information according to the reception status of the dummy data.
Step 550: Send the bandwidth information to the transmitting-end system 310.
Step 560: Determine whether the video data reception notification message from the transmitting-end system 310 has been received; if it has, go to step 570; if it has not, repeat step 560.
Step 570: Prepare to receive the video data output by the transmitting-end system 310.
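Steps 450 and 460 state only that the system-time check is made according to the bandwidth information, not how the offset between the two clocks is measured. Purely as an illustrative assumption (not the patent's method), the sketch below estimates the clock offset from timestamps carried by a dummy packet and its reply, in the style of a simple round-trip exchange.

    # Illustrative clock-offset estimate from a timestamped dummy-data exchange.
    # t1: transmitter send time, t2: receiver receive time, t3: receiver reply
    # time, t4: transmitter receive time (milliseconds on each local clock).
    def estimate_clock_offset_ms(t1: float, t2: float, t3: float, t4: float) -> float:
        """Offset of the receiver clock relative to the transmitter clock."""
        return ((t2 - t1) + (t3 - t4)) / 2.0

    def clocks_synchronized(offset_ms: float, tolerance_ms: float = 1.0) -> bool:
        """Step 450 analogue: treat the clocks as synchronized once the offset is small."""
        return abs(offset_ms) <= tolerance_ms

    if __name__ == "__main__":
        offset = estimate_clock_offset_ms(t1=100.0, t2=130.0, t3=131.0, t4=106.0)
        print(offset, clocks_synchronized(offset))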
The present invention first initializes the networks of the transmitting-end system 310 and the receiving-end system 320 in steps 410 and 510 respectively, and determines in steps 420 and 520 whether a connection can be successfully established between the transmitting-end system 310 and the receiving-end system 320 of the A/V system 300. Once the connection has been established, the system checks whether the system times of the transmitting-end system 310 and the receiving-end system 320 are synchronized with each other and confirms the predetermined communication bandwidth to be used for data transmission.

First, the transmitting-end system 310 transmits dummy data at an initial predetermined communication bandwidth in step 430. The receiving-end system 320 then receives the dummy data in step 530 and, in step 540, generates the corresponding communication bandwidth information according to the reception status of the dummy data. For example, if the transmitting-end system 310 transmits 100 packets of 8000 bits each within 100 microseconds, but after 100 microseconds the receiving-end system 320 has correctly received only 90 packets of 8000 bits, the receiving-end system 320 can compute the corresponding bandwidth information in step 540 from the packet transmission time, the packet length, and the number of packets received correctly, and the transmitting-end system 310 then adjusts the number and size of the packets it transmits according to that bandwidth information until the receiving-end system 320 receives every packet intact. For example, after updating the predetermined communication bandwidth in step 460, the transmitting-end system 310 may transmit only 90 packets of 8000 bits within 100 microseconds, or 100 packets of 7000 bits; if the receiving-end system 320 can then receive the expected number of correct packets after 100 microseconds and report the corresponding bandwidth information, the synchronization of the system times of the transmitting-end system 310 and the receiving-end system 320 and the confirmation of the predetermined communication bandwidth are complete. At this point, the transmitting-end system 310 has finished preparing to capture the A/V data, and the receiving-end system 320 has finished preparing to receive the A/V data.
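A sketch of the arithmetic behind the worked example above. The function names and the strategy of scaling the next probe to the measured goodput are illustrative assumptions; the patent only requires that the bandwidth information reflect the transmission time, the packet length, and the number of packets received correctly.

    # Illustrative step-540 computation: effective bandwidth seen by the receiver,
    # plus a step-460 style adjustment of the next probe's packet count and size.
    def bandwidth_info_bits_per_us(packet_bits: int, packets_received: int,
                                   elapsed_us: float) -> float:
        """Bits correctly received per microsecond during the probe interval."""
        return packet_bits * packets_received / elapsed_us

    def next_probe(packets_sent: int, packet_bits: int,
                   measured_bw: float, elapsed_us: float, shrink_size: bool):
        """Scale the next probe so its load matches the measured goodput."""
        budget_bits = measured_bw * elapsed_us
        if shrink_size:
            return packets_sent, int(budget_bits // packets_sent)
        return int(budget_bits // packet_bits), packet_bits

    if __name__ == "__main__":
        # The example from the text: 100 packets of 8000 bits sent in 100 us,
        # only 90 received correctly.
        bw = bandwidth_info_bits_per_us(8000, 90, 100.0)             # 7200 bits/us
        print(next_probe(100, 8000, bw, 100.0, shrink_size=False))   # (90, 8000)
        print(next_probe(100, 8000, bw, 100.0, shrink_size=True))    # (100, 7200);
        # the patent's example rounds the shrunken packet size further, to 7000 bits.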
Please refer to FIG. 6. The flowchart of FIG. 6 illustrates the method by which the transmitting-end system 310 of the present invention processes the video data, comprising the following steps:

Step 600: Start capturing the video data.
Step 610: Compress the video data according to the predetermined compression ratio.
Step 620: Store the compressed video data in the video data buffer 311.
Step 630: Determine whether the amount of data in the video data buffer 311 has reached a first predetermined level; if it has, go to step 640; if it has not, go to step 620.
Step 640: Determine whether a video data playback notification message has been received; if it has, go to step 660; if it has not, go to step 650.
Step 650: Do not allow the audio data to be transmitted; go to step 640.
Step 660: Allow the audio data to be transmitted; go to step 670.
Step 670: Determine whether the amount of data in the video data buffer 311 has exceeded a second predetermined level; if it has, go to step 680; if it has not, go to step 690.
Step 680: Stop storing the compressed video data in the video data buffer 311; go to step 690.
Step 690: Start transmitting the video data; go to step 630.
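A compact sketch of the FIG. 6 loop. The names are assumptions, the two watermarks stand in for the first and second predetermined levels, and the playback notification is modeled as a plain flag.

    # Illustrative FIG. 6 video-path logic for the transmitting end: fill buffer
    # 311 up to a first level before doing anything, gate audio on the playback
    # notification, stop filling past a second level, then transmit.
    from collections import deque

    def video_step(buffer_311: deque, frame: bytes, playback_notified: bool,
                   level_1: int, level_2: int, send):
        """Returns True when transmission of the audio data is allowed."""
        if len(buffer_311) < level_1:           # steps 620/630: keep buffering
            buffer_311.append(frame)
            return False
        allow_audio = playback_notified         # steps 640/650/660
        if len(buffer_311) <= level_2:          # steps 670/680: room left?
            buffer_311.append(frame)
        if buffer_311:                          # step 690: transmit video data
            send(buffer_311.popleft())
        return allow_audio

    if __name__ == "__main__":
        buf = deque()
        for i in range(6):
            allowed = video_step(buf, b"frame%d" % i, playback_notified=(i >= 3),
                                 level_1=2, level_2=4, send=lambda f: print("tx", f))
            print("audio allowed:", allowed)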
Please refer to FIG. 7. The flowchart of FIG. 7 illustrates the method by which the transmitting-end system 310 of the present invention processes the audio data, comprising the following steps:

Step 700: Start capturing the audio data.
Step 710: Store the audio data in the audio data buffer 313.
Step 720: Determine whether transmission of the audio data is allowed; if it is, go to step 740; if it is not, go to step 730.
Step 730: Stop transmitting the audio data; go to step 720.
Step 740: Receive the notification message related to the video playback system time.
Step 750: Receive the error signal related to the difference between the video playback system time and the audio playback system time.
Step 760: Adjust the predetermined flow and the predetermined data amount used for transmitting the audio data according to the error signal.
Step 770: Send a notification message related to the audio playback system time to the receiving-end system 320.
Step 780: Transmit the audio data according to the predetermined flow and the predetermined data amount; go to step 720.
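A sketch of the FIG. 7 audio path, folding steps 750 through 780 into one function; the step 740 notification is omitted for brevity. The rule used here for step 760 (one unit of data per millisecond of error) is an illustrative assumption, since the patent only states that the predetermined flow and data amount are adjusted according to the error signal.

    # Illustrative FIG. 7 audio-path logic for the transmitting end: when audio is
    # allowed, adjust the amount sent per round from the error signal, report the
    # audio playback time, then send that much audio data over the wired port.
    from collections import deque

    def audio_step(buffer_313: deque, allowed: bool, error_ms: float,
                   audio_playback_time_ms: int, base_chunk: int,
                   send, notify_audio_time):
        if not allowed:                             # steps 720/730: hold the audio back
            return 0
        # Step 760 (assumed rule): send less when audio runs ahead of the video
        # and more when it lags, one unit of data per millisecond of error.
        chunk = max(0, base_chunk + int(error_ms))
        notify_audio_time(audio_playback_time_ms)   # step 770
        sent = 0
        while buffer_313 and sent < chunk:          # step 780: predetermined amount
            send(buffer_313.popleft())
            sent += 1
        return sent

    if __name__ == "__main__":
        buf = deque(b"sample%d" % i for i in range(20))
        n = audio_step(buf, allowed=True, error_ms=-2.0, audio_playback_time_ms=500,
                       base_chunk=5, send=lambda s: None, notify_audio_time=print)
        print("samples sent:", n)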
Please refer to FIG. 8. The flowchart of FIG. 8 illustrates the method by which the receiving-end system 320 of the present invention processes the A/V data, comprising the following steps:

Step 800: Start receiving the compressed video data and the audio data transmitted by the transmitting-end system 310.
Step 810: Store the compressed video data in the video data buffer 321.
Step 820: Calculate the amount of data held in the video data buffer 321.
Step 830: Determine whether the amount of data in the video data buffer 321 has reached a predetermined level; if it has, go to step 840; if it has not, go to step 810.
Step 840: Decompress and decode the video data held in the video data buffer 321.
Step 850: Output the decompressed and decoded video data to the video output device 322.
Step 860: Output the audio data to the audio output device 323.
Step 870: Send a notification message related to the video playback system time to the transmitting-end system 310.
Step 880: Generate the error signal related to the difference between the video playback system time and the audio playback system time according to the notification message related to the audio playback system time.
Step 890: Send the error signal to the transmitting-end system 310.

After the A/V system 300 of the present invention completes the above initialization, the transmitting-end system 310 starts processing the A/V data (as shown in FIG. 6 and FIG. 7). For the video data, after capturing the video data in step 600, the transmitting-end system 310 compresses the video data in step 610 and stores the compressed video data in the video data buffer 311 in step 620. The control device 317 of the transmitting-end system 310 monitors the amount of data in the video data buffer 311 in steps 630 and 670, and determines in step 640 whether the video data playback notification message has been received. If the amount of data in the video data buffer 311 has not yet reached the first predetermined level, step 620 is executed to continue storing the compressed video data in the video data buffer 311. If the amount of data in the video data buffer 311 has reached the first predetermined level and the video data playback notification message has been received, step 660 is executed to allow the audio data to be transmitted; if the amount of data has reached the first predetermined level but the video data playback notification message has not yet been received, the audio data is not allowed to be transmitted. After transmission of the audio data has been allowed, if the amount of data in the video data buffer 311 has exceeded the second predetermined level, step 680 is executed to stop storing the compressed video data in the video data buffer 311 and step 690 is then executed to start transmitting the video data; if the amount of data has not yet exceeded the second predetermined level, step 690 is executed directly to start transmitting the video data.

For the audio data, on the other hand, after capturing the audio data in step 700, the transmitting-end system 310 stores the audio data in the audio data buffer 313 in step 710. If the control device 317 of the transmitting-end system 310 determines in step 720 that transmission of the audio data is allowed, the transmitting-end system 310 receives from the receiving-end system 320 the notification message related to the video playback system time and the error signal related to the difference between the video playback system time and the audio playback system time. The transmitting-end system 310 can therefore tell how far apart the video and audio are when they are played back by the receiving-end system 320, and accordingly adjusts, in step 760, the predetermined flow and the predetermined data amount used for transmitting the audio data, so that the A/V data can be played back synchronously at the receiving-end system 320.

After the transmitting-end system 310 of the A/V system 300 of the present invention has finished processing and transmitting the A/V data (as shown in FIG. 6 and FIG. 7), the receiving-end system 320 starts receiving the A/V data (as shown in FIG. 8). The receiving-end system 320 stores the received compressed video data in the video data buffer 321 and calculates the amount of data held in the video data buffer 321 in step 820. Once the amount of data in the video data buffer 321 has reached the predetermined level, step 840 is executed to decompress and decode the video data, and the video data and the audio data are then played on the video output device 322 and the audio output device 323 respectively. The receiving-end system 320 also reports the actual playback times, generates the error signal related to the difference between the video playback system time and the audio playback system time, and feeds the error signal back to the transmitting-end system 310 in step 890, so that the transmitting-end system 310 can adjust the transmission of the audio data according to the playback timing actually observed at the receiving-end system 320 and keep the video and audio synchronized.
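A sketch of the receiver-side feedback of steps 870 through 890, with assumed names and plain values in place of real message encodings; the error signal is simply the signed difference between the two playback system times.

    # Illustrative steps 870-890 at the receiving end: report the video playback
    # time, compare it with the audio playback time announced by the transmitter,
    # and feed the signed difference back as the error signal.
    from typing import Callable, Tuple

    def make_feedback(video_playback_time_ms: int,
                      audio_playback_time_ms: int) -> Tuple[int, int]:
        """Returns (video time notification, error signal) to send to system 310."""
        error_ms = video_playback_time_ms - audio_playback_time_ms   # step 880
        return video_playback_time_ms, error_ms

    def send_feedback(send: Callable[[str, int], None],
                      video_ms: int, audio_ms: int) -> None:
        notify, error = make_feedback(video_ms, audio_ms)
        send("VIDEO_PLAYBACK_TIME_NOTIFY", notify)   # step 870
        send("ERROR_SIGNAL", error)                  # step 890

    if __name__ == "__main__":
        send_feedback(lambda kind, value: print(kind, value),
                      video_ms=1040, audio_ms=1000)   # video lags audio by 40 ms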
[Brief Description of the Drawings]

FIG. 1 is a functional block diagram of a prior-art A/V system.
FIG. 2 is a functional block diagram of another prior-art A/V system.
FIG. 3 is a functional block diagram of an A/V system according to the present invention.
FIG. 4 is a flowchart of initializing the transmitting-end system according to the present invention.
FIG. 5 is a flowchart of initializing the receiving-end system according to the present invention.
FIG. 6 is a flowchart of the transmitting-end system of the present invention processing the video data.
FIG. 7 is a flowchart of the transmitting-end system of the present invention processing the audio data.
FIG. 8 is a flowchart of the receiving-end system of the present invention processing the A/V data.

[Description of Main Reference Numerals]

100, 200, 300: A/V systems
110, 210, 310: wireless transmitting-end systems
112, 212, 312: A/V input devices
114, 124: A/V processing units
116, 216, 316: wireless output ports
120, 220, 320: wireless receiving-end systems
122: A/V output device
126, 226, 326: wireless receiving ports
214, 224, 314, 324: video processing units
218, 318: wired output ports
222, 322: video output devices
223, 323: audio output devices
228, 328: wired receiving ports
311, 321: video data buffers
313: audio data buffer
315: audio processing unit
317: control device
400~490, 500~570, 600~690, 700~780, 800~890: steps