
TW200951765A - System of inputting instruction by image identification and method of the same - Google Patents


Info

Publication number
TW200951765A
TW200951765A (application TW98114338A)
Authority
TW
Taiwan
Prior art keywords
image
area
module
image recognition
instruction input
Prior art date
Application number
TW98114338A
Other languages
Chinese (zh)
Other versions
TWI394063B (en)
Inventor
Yeong-Sung Lin
Original Assignee
Tlj Intertech Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tlj Intertech Inc filed Critical Tlj Intertech Inc
Priority to TW98114338A priority Critical patent/TWI394063B/en
Publication of TW200951765A publication Critical patent/TW200951765A/en
Application granted granted Critical
Publication of TWI394063B publication Critical patent/TWI394063B/en

Landscapes

  • User Interface Of Digital Computer (AREA)

Abstract

A system of inputting an instruction by image identification and a method of the same are applicable to a data processing device coupled to an image capturing device. The method includes: defining at least one instruction input region in the image data captured by the image capturing device; determining a foreground image in the image data by image identification technology; detecting, by a detection module, status information of the foreground image appearing in the instruction input region; and retrieving a control instruction from a storage module according to the status information detected by the detection module, so that the data processing device operates under the control of the control instruction. The system and method can replace conventional man-machine-interface instruction input so as to provide a man-machine interface characterized by low cost, high performance, and convenience.

Description

200951765 六、發明說明: 【發明所屬之技術領域】 本發明係有關於一種資料處理技術,更具體言之,係 有關於一種利用影像辨識技術作為指令輸入之資料處理技 術。 【先前技術】 隨著資料處理軟硬體技術的日新月異,使用如個人電 腦、筆記型電腦、智慧型行動電話等資料處理裝置作為處 理文書、播放簡報或多媒體訊息的工作型態也日益普遍。 舉例言之,學生、教授或商務人士能夠利用筆記型電 腦搭配如投影機的影像產生裝置,以播放報告、教學内容 或商業訊息之簡報檔,一般習知的簡報檔可例如為微軟所 推出的 Microsoft Office PowerPoint™。 而於具體實施的過程中,使用者必須透過筆記型電腦 的按鍵或與其搭接之滑鼠執行如上一頁、下一頁、開啟或 關閉檔案或應用程式等操作指令的輸入。此時,若使用者 是位於接近筆記型電腦的位置,固然可以便利的進行操 作。然若使用者與筆記型電腦間有相當距離,或使用者必 須處於移動狀態時,即無法便利地透過按鍵或滑鼠輸入操 作指令。 為解決此一缺點,習知技術有提供利用紅外線或藍芽 之無線遙控技術,亦即將操作按鍵或軌跡球等輸入單元設 置於類似遙控器的裝置上,並透過無線訊號方式發出操作 指令,使筆記型電腦接收該無線訊號,藉以執行允符該指 4 110906DP01 200951765 •:的%作。此種技㈣能解決使肖 •記型電腦過遠的問題,作額外 去離開葦 ‘而丄又备β 卜的無線遙控裝置對於使用者 而3不啻疋一種限制與負擔’者 ^πτ 虿可尚須考慮到葦記型電 需要;相容的問題。此外,無線遙控褒置 攜柄_,因此難免會發生電絲“無法使用 ❹ Ι289=ΐ述習知技術的缺失,錢專利公告第 U的203號揭露一種「手 ° ^ 一種手指彳I向m及=」’係提供 操取使用者之數張手部影像,並:二=:取裝置 換程式轉換二 =:=:二:_轉 線之交點,即為使用者手指指::二π 指向-平面上之任音目=成=套即可_出使用者 ❹ 應用於岛報系統,作為取代雷射筆或滑鼠之人機入面 前述專利案雖然能解決人機介面存在的; 要利用數台影像擷取裝置並需要 τΗ… 轉換等複雜計算,相_ 牛重I確認與座標 本,且系統的安穿手續相… 夕的使用與建置成 對於資❹ 祕。此外,複雜的計算過程 硬體環境中可能導致辨識率下降。口素在文限的 率高種安裝便利、成本低廉且辨識 辨為之心令輪入系統以取代習知的指令輸入技 J10906DP0] 5 200951765 術,實為亟待解決之課題。 【發明内容】 為解決前述習知技術的種種問題,本發明提供一種應 用影像辨識之指令輸人系統,係應用於資料處理裝置中, •^育料處理裳置搭接有影像擷取H該應用影像辨識之 才曰令輸人系統包括:設定模組,係用以由該影義取裝置 所擷取之影像資料中定義至少—指令輸人區域;债測模 組’係由該f彡像資射_前景影像,則貞測該指令輸入 區域内出現之該前景影像的狀態資訊;儲存模組,係用以 儲存對應該狀態資訊之控制指令;以及控組,係用以 依據該_模組所偵測之該狀態資訊,自該儲存模組中操 取出對應之控制指令,以透過該控制指令使該資料處理裝 置執行功能動作。 於一較佳態樣中,該資料處理裝置復搭接一影像呈現 裝置用以於該影像資料中呈現—特定區域,俾該設定模 、且將》玄特疋區域定義為顯示區域,其巾,該設定模組於該 顯=區域的範_或範圍外定義至少—指令輸人區域, 該指令輸人區域與該顯示區域具有函數對應_,以由該 $測模組偵測該指令輸入區域中的狀態資訊。較佳地,該 衫像呈現裝置依序呈現不同尺寸之第—區域與第二區域, 以由該設定模組將該第一區域之邊界與該第二區域之邊界 。斤圍出之區域定義為邊框,再定義該邊框内之區域為顯示 區域。 再者,本發明復提供一種應用影像辨識之指令輸入方 110906DP01 6 200951765 .法,絲用於資料處理裝置中,該資料處理裝置搭接有影 ,像擷取裝置’該應用影像辨識之指令輸入方法包括下列步 驟:⑴於該影像擷取裂置所擷取之影像資料中定義至少 一指令輸人區域;(2)於該影像資料中判斷前景影像,以 價測該指令輸入區域内出現之該前景影像的狀態資訊;以 及⑺儲存對應該狀態資訊之控制指令,以依據所债測之 該狀態資訊擷取出對應之控制指令,俾透過該控制指令使 該資料處理裝置執行功能動作。 0 較佳態樣中’該資料處理裝置復搭接-影像呈現 裝置,用以於該影像資料中呈現一特定區域,以於步驟⑴ 將該特定區域定義為顯示區域,且此步驟⑴復包括於該 顯示區域的範圍内或範圍外定義至少一指令輸入區域,且 該指令輸入區域與該顯示區域具有函數對應關係,以俄測 該指令輸入區域中的狀態資訊的步驟。較佳地,上述之步 驟⑴復包括下列步驟:(M)由該影像呈現裝置依序呈 ❾現不同尺寸之第-區域與第二區域;(1·2)將該第一區域 
之邊界與該第二區域之邊界所圍出之區域定義為邊框;以 及(1-3)定義該邊框内或邊框外所包含之區域為顯示區域。 〃於另-較佳態樣中,上述之步驟⑺復包括儲存前 景影像之預設姿態及對應該預設姿態之控制指令,以於所 债測到之則景影像之姿態符合該前景影像之預設姿態時, 操取出對應5玄預设安態之控制指令,俾透過該控制指令使 該肓料處理裝置執行功能動作的步驟。 相較於習知技術,本發明透過搭接影像擷取裝置之資 110906DP01 7 200951765 料處理裝置定義-顯示區域,再以影像辨識技術讓使用者 直接於該顯示區域中輸入控制指令,大大減少了指八輸入 控制系統的建置成本與安裂的複雜度,且透過輔助辨識 術更能提高辨識率,解決了習知硬體輸入單元三 技術所產生的問題。 剧入 【實施方式】 以下係藉由特定的具體實施例說明本發明之每 式,熟悉此技藝之人士可由本說明書所揭示之内容=地 瞭解本發明之其他優點與功效。本發明亦可藉由 ==加以施行或應用,本說明書中的各項細= 可基於不_點與應用,在不⑽本發明之精 種修飾與變更。 适仃谷 請參閱第1圖,其係用以顯 之指令輸入系統之第一實施例的應用架構 影像辨識之指令輸入系統係應用於 裝置0中,資料處理裝置20可例如但不 個人電腦、筆記型電腦、智慧型行動 限疋為 此外,資料處理裝置20捭接有至、/、地理裝置。 彳〇接有衫像擷取裝置30,於太者 方例中影像擷取裝置3G係内建_料處 ' = 發明之其他實施例中,影像φ ^ 於本 裝置2〇。 取裳置30可外接於資料處理 承上述’影像擷取裝置30 影像訊號轉換成數位影像資料,而經過轉二= 貧料會輸人及/或儲存於資料處理裳置2G中,並透過^ 8 1J0906DP0] 200951765200951765 VI. Description of the Invention: [Technical Field of the Invention] The present invention relates to a data processing technique, and more particularly to a data processing technique using image recognition technology as an instruction input. [Prior Art] With the rapid development of data processing software and hardware technology, it is becoming more and more common to use data processing devices such as personal computers, notebook computers, and smart mobile phones as processing files, playing briefings, or multimedia messages. For example, a student, a professor, or a business person can use a notebook computer with an image generating device such as a projector to play a report file, a teaching content, or a briefing file for a commercial message. For example, a conventional briefing file can be, for example, a Microsoft product. Microsoft Office PowerPointTM. In the process of implementation, the user must perform the input of the operation commands such as the previous page, the next page, and the file or the application through the button of the notebook computer or the mouse connected thereto. At this time, if the user is located close to the notebook, it is convenient to operate. 
However, if there is a considerable distance between the user and the notebook computer, or the user must keep moving about, it is inconvenient to input operation commands through keys or a mouse. To overcome this drawback, the prior art provides wireless remote control techniques based on infrared or Bluetooth, in which input units such as operation keys or a trackball are disposed on a remote-controller-like device that issues operation commands as wireless signals; the notebook computer receives the wireless signals and performs the operations corresponding to the commands. Although such techniques free the user from staying close to the notebook computer, the extra wireless remote control device is itself a limitation and a burden on the user, and compatibility between the remote control device and the notebook computer must also be taken into account. Moreover, since a wireless remote control device is battery-powered and portable, situations in which it runs out of power or goes missing and cannot be used are hard to avoid. To remedy these shortcomings of the prior art, Taiwan Patent Publication No. I289203 discloses a finger-pointing identification method in which several images of the user's hand are captured by image capturing devices and processed by a conversion program; the intersection of the computed pointing lines gives the position on a plane at which the user's finger is pointing, so that the method can be applied to a presentation system as a man-machine interface replacing a laser pointer or a mouse. Although the aforesaid patent alleviates some problems of conventional man-machine interfaces, it requires several image capturing devices as well as complex computations such as feature confirmation and coordinate transformation, which entail considerable cost of use and installation and complicated setup procedures. In addition, the complex computation may reduce the recognition rate on hardware with limited performance.
Accordingly, how to provide an instruction input system that is easy to install, low in cost, and high in recognition rate, so as to replace conventional instruction input techniques, is a problem urgently awaiting a solution. SUMMARY OF THE INVENTION To solve the foregoing problems of the prior art, the present invention provides an instruction input system applying image recognition, which is used in a data processing device coupled to an image capturing device. The instruction input system comprises: a setting module for defining at least one instruction input region in the image data captured by the image capturing device; a detection module for determining a foreground image from the image data and detecting status information of the foreground image appearing in the instruction input region; a storage module for storing control commands corresponding to the status information; and a control module for retrieving, according to the status information detected by the detection module, the corresponding control command from the storage module, so that the data processing device performs a functional operation through the control command. In a preferred aspect, the data processing device is further coupled to an image presentation device for presenting a specific region in the image data, and the setting module defines the specific region as a display region, wherein the setting module defines at least one instruction input region inside or outside the display region, the instruction input region having a functional correspondence with the display region, so that the detection module detects status information in the instruction input region.
Preferably, the image presentation device sequentially presents a first region and a second region of different sizes, so that the setting module defines the area enclosed between the boundary of the first region and the boundary of the second region as a frame, and then defines the area inside the frame as the display region. Furthermore, the present invention provides an instruction input method applying image recognition, which is used in a data processing device coupled to an image capturing device. The method comprises the following steps: (1) defining at least one instruction input region in the image data captured by the image capturing device; (2) determining a foreground image in the image data so as to detect status information of the foreground image appearing in the instruction input region; and (3) storing control commands corresponding to the status information, retrieving the control command corresponding to the detected status information, and causing the data processing device to perform a functional operation through the control command. In a preferred aspect, the data processing device is further coupled to an image presentation device for presenting a specific region in the image data, the specific region being defined as a display region in step (1), and step (1) further comprises defining at least one instruction input region inside or outside the display region, the instruction input region having a functional correspondence with the display region, so as to detect status information in the instruction input region.
Preferably, step (1) above includes the following steps: (1-1) sequentially presenting, by the image presentation device, a first region and a second region of different sizes; (1-2) defining the area enclosed between the boundary of the first region and the boundary of the second region as a frame; and (1-3) defining the area inside or outside the frame as the display region. In another preferred aspect, step (3) above includes storing a preset posture of the foreground image and a control command corresponding to the preset posture, so that when the detected posture of the foreground image conforms to the preset posture, the corresponding control command is retrieved and the data processing device performs a functional operation through it. Compared with the prior art, the present invention defines a display region through a data processing device coupled to an image capturing device and then applies image recognition to let the user input control commands directly with respect to that display region, which greatly reduces the construction cost and installation complexity of an instruction input control system; the recognition rate can further be raised by auxiliary recognition techniques, solving the problems of conventional hardware input units and of the prior-art techniques described above. MODES FOR CARRYING OUT THE INVENTION The following describes the implementations of the present invention by way of specific embodiments; those skilled in the art can readily understand other advantages and effects of the present invention from the disclosure herein. The present invention may also be implemented or applied through other embodiments, and the details of this specification may be modified and changed in various ways without departing from the spirit of the invention. Please refer to FIG.
1 , which is applied to the device 0 for the application architecture image recognition system of the first embodiment of the command input system. The data processing device 20 can be, for example, but not a personal computer. In addition, the notebook computer and the smart action limit are connected to the data processing device 20 to the geographic device. The splicing device 30 is attached to the splicing device 30, and the image capturing device 3G is built in the singular case. In other embodiments of the invention, the image φ ^ is in the device 2 . The singer 30 can be externally connected to the data processing device. The image signal of the above image capturing device 30 is converted into digital image data, and after the conversion = the poor material will be input and/or stored in the data processing skirt 2G, and through ^ 8 1J0906DP0] 200951765

. 處理裝置20利用習4 A 像呈現在顯示單元21°卜W處理應^里式’將所擷取的影 理裝置20執行与俊# '於其他貫施例中,透過資料處 採而去π 像處理可能僅於後端進行fM象資偏声 理而未顯示於顯示 艰仃〜像貧科的處 。於本貫施例中,該㈣處理 勺零。己裂電腦,而顯示單元 顯示螢幕,影像擷取衷置 /為筆記型電腦之 機’且所擷取的影像會 聿己“細之攝衫 ❹ 21上。 θ王見在貝枓處理裝置20的顯示單元 本發明之應用影像辨識之指令輸 組1卜偵測模組12、儲存槎,且η… ^ ^ 爾;模組13以及控制模組14。 /史定模组11係用以由該影像擷取裝置30所擷取之影 像貢料中定義至少一指令輪入區域。 於本貫施例中’設定模組11可選擇性地透過資料處 理m〇接收指令輸人區域的定義訊息’並依據指令輸入 區域的定義訊息辨識出該指令輸入區域。 〇 偵測模組12係由該影像資料中判斷前景影像’以偵 測該指令輸入區域内出現之該前景影像的狀態資訊。較佳 者,該狀態資訊可為該前景影像之姿態、明滅變化、動態 軌跡及/或停留之時間。於一較佳實施例中,該設定模組 11可利用一初始化程序依據該前景影像之特定狀態資訊 定義該指令輸入區域。The processing device 20 is presented in the display unit 21 by using the image of the 4A image, and the image processing device 20 is executed in the other embodiment, and is taken through the data. The π image processing may only be performed on the back end of the fM image, but not on the display. In the present example, the (iv) treatment spoon zero. The computer has been cracked, and the display unit displays the screen, and the image captures the machine for the notebook computer' and the captured image will be smashed on the "fine camera ❹ 21. θ Wang see in the Bellow processing device 20 The display unit of the present invention applies the image recognition command to the detection module 12, the storage module 12, the storage port, and the η...^^; the module 13 and the control module 14. / The history module 11 is used for the image At least one command wheeling area is defined in the image shards captured by the capturing device 30. In the present embodiment, the setting module 11 can selectively receive the definition information of the command input area through the data processing. The command input area is identified according to the definition information of the command input area. The detection module 12 determines the foreground image from the image data to detect the status information of the foreground image appearing in the command input area. 
Preferably, the status information may be the posture of the foreground image, its brightness (on/off) changes, its dynamic trajectory, and/or its dwell time. In a preferred embodiment, the setting module 11 may use an initialization procedure to define the instruction input region according to specific status information of the foreground image.
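The status-information categories above (posture, brightness change, dynamic trajectory, dwell time) are described only functionally in the patent; no algorithm is disclosed. As an illustration only, a minimal classifier over a tracked centroid might look like the following sketch, where all function names and thresholds are assumptions rather than part of the disclosure:

```python
# Illustrative sketch only: the detection module is described functionally;
# the thresholds and names below are assumptions, not part of the patent.

def classify_status(track, dwell_thresh=2.0, move_thresh=40):
    """Classify a foreground-image track inside an instruction input region.

    track: list of (t, x, y) samples of the foreground image's centroid,
    with t in seconds and x/y in pixels.
    Returns 'dwell', 'swipe_up', 'swipe_down', or None.
    """
    if len(track) < 2:
        return None
    t0, x0, y0 = track[0]
    t1, x1, y1 = track[-1]
    dx, dy = x1 - x0, y1 - y0
    if abs(dx) < move_thresh and abs(dy) < move_thresh:
        # Nearly stationary: report a dwell once it has lasted long enough.
        return "dwell" if (t1 - t0) >= dwell_thresh else None
    # Image coordinates grow downward, so negative dy means an upward swipe.
    return "swipe_up" if dy < 0 else "swipe_down"
```

In this sketch a short, nearly stationary track yields no status at all, matching the text's idea that a dwell only counts once it reaches a preset time such as 2 seconds.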

具體實施時,以動態軌跡為例,請參閱第2a至2c圖, 其係用以顯示本發明之應用影像辨識之指令輸入系統之偵 測模組的操作示意圖。如2a圖所示,係於指令輸入區域A 9 110906DP01 200951765 内偵測之前景影像乂為手臂或手掌時 手掌進行向上揮動時,_莫組12會^則到如第 =之前景影像X的動態_是抑讀人區域 二 第T:或手掌向下揮動時,則偵測模組:: 會偵利到如第2c圖所示之前景影後 令輸入區域A的下方移動。讀X的動態執跡是朝指 承上述,本實施例雖係針對於指令輸入 内之影像的三維動態執跡做_,舉例而 ……,像X若以Ζ軸方向移動,則该測模电 過積測影像的放大料作為其動態執跡。 組13係用以儲存對應該狀態資訊之控制指令。 匕餘組! 4係用以依據偵測模組 訊,自該儲存模組13中操取出對應之控制指令= 控制指令使該資料處理裝置2G執行功能動作。 “ X的可先由儲存模組13預先儲存前景影像 關閉檔案及,或應用程:按鍵=:== 裝置2G播放簡報槽時,可利用本發明之系統定義 輸人區域A ’只要當前景影像X出現在指令輪入區 1内’即可偵測X向上揮動或向下揮動的動態轨跡,再 的動態軌跡與儲存模組13已預先儲存的影像執 跡進仃比對,以自儲存模組13中擷取對應之控制指令,以 '如、,该影像執跡可例如但不限定為分別相 聯於控制資料處理裝置2G「·——〜购對關 110906DP01 200951765 透過該控制指令使該資料處理裝置2 一頁」的換頁功能動作。 上—頁」或「下 於另一較佳實施例中, X停留時間被_到達到2秒二二二前景影像 館存模組〗3預先儲存對應之控制指令;=二;广 執行點擊清鼠左鍵或右鍵之功能動作 中,亦可設宕A力乂旦〜& V、+月施例 秒或其他預二==留時間到達到2 二m 開啟依#作模式清單,接著再判鼢 =二是否有停留在顯示單元21 “清單= ❹ 使用者it二t達到2秒或其他預設時間,若是’可判斷 程★二心項細作模式’如開啟與關閉樓案及/或廡用 :"控制資料處理裝置2 〇的操作。舉例而言,該摔 使用狀㈣分為—般狀態(編輯模式,例如: =/存 =/關閉/剪下/複製/貼上/刪除/螢幕虛擬鍵盤功能 接編輯文字)/切換程式/顯示桌面/拖拉視窗/改變視 ::··.等).及簡報狀態(簡報模式,例如:存槽/離開/ 里筆功能/簡報過程影音錄製/切換至簡報外的程式...等) 的功能動作。 值得主思的是,上述偵測模組12所偵測之狀態資訊 =可同時預設動態執跡以及停㈣間的判斷條件。舉例而 言,當資料處理裝置20正在執行所輸入「上一頁」或「下 匕頁」控制指令時,可同時令偵測模組12暫時停止偵測於 才曰令輸入區域A内所出現之前景影像乂 —段時間(例如: 矛>)以避免因多餘的動作造成誤判斷的情形,例如當使 11 110906DP01 200951765 用者以手臂或手掌向上/下揮之後,會因往下/上揮的反向 的慣性復歸動作,造成與前一次的動作狀態抵銷效應而無 法完成換頁的動作。 於又一較佳實施例中,上述偵測模組12所偵測之狀 態資訊除了可設定為前景影像X之動態執跡及/或停留時 間的判斷條件以外,亦可設定為前景影像X之姿態,例如, 手掌與手指或手臂彎度等姿態,但不以此為限。具體言之, 可於該儲存模組13中儲存前景影像X之預設姿態及對應 該預設姿態之控制指令,以於該偵測模組12所偵測到之前 景影像X之姿態符合該前景影像X之預設姿態時,由控制 模組14自該儲存模組13中擷取出對應該預設姿態之控制 指令,以透過該控制指令使該資料處理裝置執行功能動作。 承上述,具體實施時,儲存模組13所儲存之前景影 像X之預設姿態係特定前景影像之單一姿態或至少二個不 同姿態的連續組合,以依據偵測模組12所偵測的前景影像 X本身的單一個或至少二個不同的連續組合,而直接自儲 存模組14擷取對應該預設姿態之控制指令的關聯資料。舉 例而言,如第3a圖所示,前景影像X本身的特定影像晝 面Μ係可預設為不同數字的手勢變化,以當由偵測模組 12所辨識的影像符合預設特定影像晝面Μ時,即由控制 模組14自儲存模組13擷取出相對關聯之控制指令。另如 第3b與3c圖所示,係分別顯示以前景影像X代表 '' 張開〃 與''握合〃動作的不同特定影像晝面Ml、M2的連續組合, 並可依據該連續組合的不同循缳次數,分別預設與不同控 12 110906DP01 200951765 _ 制指令的關聯,亦即當使用者開合次數於符合預設特定影 « 像晝面的連續組合的循環次數,由控制模組14自儲存模組 13擷取出相對關聯之控制指令,例如,可預設連續開合二 次為功能點選之操作,以及預設連續開合三次為模式切換 的操作,但不以此為限,亦即,可根據使用需求預設不同 的開合次數來進行其他功能之操作,且於其他具體實施 
上,也可例如依據前景影像X於預設時間内的開合次數 (如:連續兩次)呼叫 ''游標〃功能後,再依據彳貞測單元 ® 12補捉前景影像X的動態執跡而對應將 ''游標〃拖曳而移 動至顯示單元21晝面清單中特定的操作模式選項上,以當 所偵測的前景影像X於該特定的操作模式選項的對應影像 重疊停留達到2秒或其他預設時間時,即執行對應預設控 制指令的操作。 於再一較佳實施例中,本發明所述之設定模組11係 可於偵測模組12執行偵測之前,預先執行一前景影像之註 Q 冊程序,以取得該前景影像之尺寸、方向或姿態,俾提高 該偵測模組14偵測該前景影像之準確率,如第2d圖所示, 偵測模組12所偵測於指令輸入區域A内出現且符合預設 型態之前景影像X可透過一註冊程序來預先設定,亦即使 用者可於偵測模組12執行前景影像動態軌跡的偵測之 前,先於指令輸入區域A内暫時產生註冊區域D,並移動 前景影像X使其影像資料移至與該註冊區域D對應重疊而 進行註冊程序,以鎖定例如特定手臂或手掌型態的前景影 像X的影像資料後,始執行前景影像X動態執跡的偵測, 13 U0906DP01In the specific implementation, taking the dynamic trajectory as an example, please refer to the figures 2a to 2c, which are used to display the operation diagram of the detection module of the instruction input system for applying image recognition according to the present invention. As shown in Fig. 2a, when the foreground image is detected in the command input area A 9 110906DP01 200951765, when the palm is swung upwards for the arm or the palm, the _mo group 12 will go to the dynamics of the image X as before. _ is the anti-reader area 2nd T: or when the palm is swung downward, the detection module:: will detect the movement before the scene as shown in Figure 2c and then move below the input area A. The dynamic trajectory of reading X is directed to the above. Although the embodiment is for the three-dimensional dynamic trajectory of the image in the command input, for example, ..., if X moves in the Ζ-axis direction, the modulo The amplified material of the electrical over-the-counter image is used as its dynamic track. Group 13 is used to store control commands corresponding to status information. Yu Yu group! 4 is configured to perform a function operation on the data processing device 2G by operating a corresponding control command=control command from the storage module 13 according to the detection module. 
For example, the storage module 13 may pre-store image trajectories of the foreground image X and associate them with control commands such as opening or closing a file or an application, or pressing particular keys. When the data processing device 20 plays a presentation file, the system of the present invention can define an instruction input region A; whenever the foreground image X appears in the instruction input region A, the dynamic trajectory of X waving upward or downward is detected and compared with the trajectories pre-stored in the storage module 13, so that the corresponding control command is retrieved from the storage module 13. For instance, the upward and downward trajectories may be associated with, but are not limited to, the "previous page" and "next page" commands of the data processing device 20, respectively, so that the retrieved control command causes the data processing device 20 to perform the page-turning function. In another preferred embodiment, when the dwell time of the foreground image X reaches 2 seconds or another preset time, a corresponding control command pre-stored in the storage module 13 is retrieved, for example to perform the left-click or right-click function of a mouse. It is also possible, when the dwell time of the foreground image X reaches 2 seconds or another preset time, to open a list of operation modes, and then to determine whether the foreground image X stays on a particular item of the list shown on the display unit 21 for 2 seconds or another preset time; if so, the user is judged to have selected that operation mode, such as opening or closing a file and/or an application, thereby controlling the operation of the data processing device 20. For example, the operation modes may be divided into a general state (an edit mode covering, for example, open/save/close, cut/copy/paste/delete, an on-screen virtual keyboard for text editing, program switching, showing the desktop, dragging windows, changing views, and so on) and a presentation state (a presentation mode covering, for example, saving, exiting, a marker-pen function, audio/video recording of the presentation, switching to a program outside the presentation, and so on). It is worth noting that the status information detected by the detection module 12 may combine judgment conditions for both the dynamic trajectory and the dwell time. For example, while the data processing device 20 is executing an input "previous page" or "next page" control command, the detection module 12 may temporarily stop detecting the foreground image X appearing in the instruction input region A for a short period of time so as to avoid misjudgment caused by redundant movements; for instance, after the user waves an arm or palm upward or downward, the reverse inertial return movement would otherwise cancel out the preceding movement and prevent the page-turning action from being completed. In yet another preferred embodiment, the status information detected by the detection module 12 may, in addition to the dynamic-trajectory and/or dwell-time conditions of the foreground image X, be set to the posture of the foreground image X, for example gestures of the palm and fingers or the bend of the arm, but not limited thereto. Specifically, preset postures of the foreground image X and the control commands corresponding to them may be stored in the storage module 13, so that when the posture of the foreground image X detected by the detection module 12 conforms to a preset posture, the control module 14 retrieves the corresponding control command from the storage module 13 and causes the data processing device to perform a functional operation through that control command.
The specific posture of the foreground image X stored by the storage module 13 is a single gesture of a specific foreground image or a continuous combination of at least two different gestures according to the foreground detected by the detection module 12. The image X itself has a single or at least two different consecutive combinations, and the associated data of the control command corresponding to the preset posture is directly retrieved from the storage module 14. For example, as shown in FIG. 3a, the specific image surface of the foreground image X itself can be preset to a different number of gesture changes, so that when the image recognized by the detection module 12 conforms to the preset specific image. In the case of a facet, the control module 14 retrieves the relative associated control commands from the storage module 13. As shown in Figures 3b and 3c, respectively, the continuous combination of different specific image planes M1, M2 representing the foreground image X and the ''opening jaw'' movement is displayed, respectively, and according to the continuous combination The number of different cycles is preset to be associated with a different control 12 110906DP01 200951765 _ command, that is, when the number of times the user opens and closes is a number of cycles that meet a continuous combination of preset specific images, the control module 14 The control module 13 extracts the relative associated control command, for example, the operation of the function of selecting the continuous opening and closing twice, and the operation of the mode switching by three consecutive times, but not limited thereto. 
That is, the operation of other functions may be performed by preset different opening and closing times according to the usage requirement, and in other specific implementations, for example, according to the number of times of opening and closing of the foreground image X in a preset time (for example: two consecutive times) After the call to the ''cursor' function, the dynamic track of the foreground image X is captured according to the test unit® 12, and the ''cursor' is dragged to move to the specific operation mode option in the list of the display unit 21 To detect when the foreground image corresponding to the image X in the particular operation mode option overlapping stay up to 2 seconds or other preset time, i.e., corresponding to a predetermined control command execution operation. In a further preferred embodiment, the setting module 11 of the present invention pre-executes a foreground image recording program to obtain the size of the foreground image before the detecting module 12 performs the detecting. The direction or posture is used to improve the accuracy of the detection module 14 for detecting the foreground image. As shown in FIG. 2d, the detection module 12 detects the presence of the command input area A and conforms to the preset type. The foreground image X can be pre-set through a registration process, that is, the user can temporarily generate the registration area D and move the foreground image before the detection module 12 performs the detection of the foreground image dynamic track before the command input area A. X moves the image data to overlap with the registration area D to perform a registration process to lock the image data of the foreground image X of a specific arm or palm type, and then performs detection of the foreground image X dynamic tracking, 13 U0906DP01
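The pairing of the storage module (pre-stored associations between status information and control commands) and the control module (lookup and dispatch), together with the brief post-command suppression of detection described above, could be sketched as follows. The table entries, mode names, and function signature are illustrative assumptions, not the patent's own interface:

```python
# Hypothetical sketch of the storage/control module pairing: a lookup table
# maps (operation mode, detected status) to a control command, as the text
# describes for "previous page"/"next page" and dwell-based clicks.

COMMAND_TABLE = {
    ("presentation", "swipe_up"): "previous_page",
    ("presentation", "swipe_down"): "next_page",
    ("presentation", "dwell"): "open_mode_menu",
    ("edit", "dwell"): "mouse_left_click",
}

def retrieve_command(mode, status, suppress_until=0.0, now=0.0):
    """Return the control command for a detected status, or None.

    suppress_until implements the anti-bounce idea from the text: after a
    page-turn, detection pauses briefly so the arm's inertial return swing
    is ignored instead of cancelling the previous gesture.
    """
    if status is None or now < suppress_until:
        return None
    return COMMAND_TABLE.get((mode, status))
```

Keeping the associations in a plain table mirrors the text's division of labor: the storage module only holds data, while the control module merely looks up and dispatches.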

A 200951765 例如,於具體實施時,可藉由該註冊程序確定所欲操作之 前景影像X的尺寸大小,以供偵測模組12有效偵測例如 上述的A張開〃、''握合〃或其他不同的手勢變化的動作的 單一個或至少二個不同的連續組合之特定的前景影像,同 時,藉此註冊程序也可提昇手勢指令辨識的效率及精準度。 再者,在其他的較佳實施例中,該偵測模組12亦可 利用至少一輔助光源的照射,使該影像擷取裝置2 0擷取該 影像資料時,提高該偵測模組12判斷前景影像之準確率, 亦即可排除背景晝面之變動區域,以凸顯主體影像(即對 於前景影像X的偵測),而於上述實施例中,該影像擷取 裝置20可搭配一濾光設備或反光/發光物體,以於擷取該 影像資料時,提高該偵測模組判斷該前景影像之準確率, 例如,以紅外線光源為輔助光源為例,係可於該影像擷取 裝置20上裝設紅外線濾鏡(Infrared fliter )以濾除可見光, 透過紅外線的照射處使影像凸顯後,再將可見光濾除以得 到單純無背景的主體影像,亦即,該濾鏡係可為其他頻譜 的單色或彩色濾鏡,俾提高擷取影像動態軌跡的精準度, 且無須經過多道影像處理如去背、邊緣、二值化等手續, 使得主體影像可以更容易做辨識。 此外,上述之實施例中所偵測的前景影像X並非侷限 於手臂或手掌等實體物件,在其它具體實施上,所偵測的 前景影像X亦為以明滅變化作為控制訊號之實體物件,以 根據其明滅變化作為狀態資訊,俾依據該狀態資訊自儲存 模組13中擷取出對應之控制指令,以透過該控制指令使該 14 110906DP01 200951765 資料處理裝置執行功能動作,舉例而言,前景影像χ所對 應的實體物件為反光裝置(例如,反光手環),並搭配照明 裝置來照射該反光裝置’使得主體影像可以更容易做辨 識。而在其他實施财,前景影像X賴應的實體物件為 發光裝置或照明裝置(例如,高亮度的發光二極體),俾直 接產生明滅變化的狀態資訊,但不以此為限。A 200951765 For example, in the specific implementation, the size of the foreground image X to be operated may be determined by the registration program, so that the detection module 12 can effectively detect, for example, the above-mentioned A-opening, ''grip 〃 Or a specific foreground image of a single one or at least two different consecutive combinations of different gesture-changing actions, and at the same time, the registration procedure can also improve the efficiency and accuracy of gesture command recognition. Furthermore, in other preferred embodiments, the detection module 12 can also use the illumination of at least one auxiliary light source to enable the image capturing device 20 to capture the image data, thereby improving the detection module 12 In the above embodiment, the image capturing device 20 can be combined with a filter to determine the accuracy of the foreground image, and the region of the background can be excluded to highlight the subject image (ie, the detection of the foreground image X). 
ing device or a reflective/light-emitting object, so as to improve, when the image data is captured, the accuracy with which the detection module judges the foreground image. For example, taking an infrared light source as the auxiliary light source, an infrared filter may be mounted on the image capturing device 20 to filter out visible light: the subject is highlighted where the infrared light strikes it, and the visible light is filtered out to obtain a clean subject image with no background. The filter may likewise be a monochromatic or color filter for another part of the spectrum. The accuracy of capturing the dynamic trajectory of the image is thereby improved without multiple image-processing procedures such as background removal, edge detection, or binarization, so that the subject image can be recognized more easily. In addition, the foreground image X detected in the above embodiments is not limited to a physical object such as an arm or a palm. In other implementations, the detected foreground image X may correspond to a physical object whose brightness (on/off) changes serve as the control signal, the brightness changes being taken as the status information, so that the corresponding control command is retrieved from the storage module 13 according to that status information and the data processing device performs a functional operation through the control command. For example, the physical object corresponding to the foreground image X may be a reflective device (for example, a reflective wristband) illuminated by a lighting device, so that the subject image can be recognized more easily. In other embodiments, the physical object corresponding to the foreground image X is a light-emitting or lighting device (for example, a high-brightness light-emitting diode) that directly produces brightness-change status information, but the invention is not limited thereto.
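The infrared-filter idea above amounts to segmenting the subject by a single intensity threshold: with visible light filtered out, only the illuminated subject remains bright, so no background-removal, edge-detection, or other multi-pass pipeline is needed. A minimal sketch, assuming an 8-bit grayscale frame and an arbitrarily chosen threshold:

```python
# Minimal sketch of the thresholding idea (threshold value is an assumption).

def foreground_mask(frame, threshold=128):
    """frame: 2-D list of grayscale pixel intensities (0-255).
    Returns a same-sized mask of 1 (subject) / 0 (background)."""
    return [[1 if px >= threshold else 0 for px in row] for row in frame]

def centroid(mask):
    """Centroid (x, y) of the subject pixels, or None if the mask is empty.
    A sequence of such centroids is what trajectory detection operates on."""
    pts = [(x, y) for y, row in enumerate(mask)
           for x, v in enumerate(row) if v]
    if not pts:
        return None
    return (sum(x for x, _ in pts) / len(pts),
            sum(y for _, y in pts) / len(pts))
```

A real deployment would read frames from a camera fitted with the spectral filter; the point of the sketch is that one threshold plus a centroid already yields the track that the earlier trajectory logic consumes.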

Please refer to FIG. 4, a flowchart of the first embodiment of the instruction input method applying image recognition according to the present invention. As shown in the figure, the method is carried out through the aforementioned instruction input system applying image recognition (shown in FIG. 1), in which the system is applied in a data processing device 20 connected to an image capturing device 30. The method first executes step S10.

In step S10, at least one instruction input region A is defined in the image data captured by the image capturing device 30. The method then proceeds to step S11.

In step S11, a foreground image is determined in the image data, so as to detect state information of the foreground image appearing in the instruction input region A. Preferably, the state information is the posture, the appearing/disappearing (blinking) change, the dynamic trajectory and/or the dwell time of the foreground image. The method then proceeds to step S12.
In step S12, control commands corresponding to the state information are stored, so that the control command matching the detected state information can be retrieved and used, through that control command, to make the data processing device execute the corresponding function.

Please refer to FIG. 5, a schematic diagram of the application architecture of the second embodiment of the instruction input system applying image recognition according to the present invention. As shown in the figure, the basic architecture of this embodiment is the same as that of the first embodiment; the difference is that the data processing device 20 of this embodiment is further connected to an image presentation device 40, for example a projector. In practice, the image captured by the image capturing device 30 may selectively include all or part of the specific area in which the image presentation device 40 presents an image. For example, if the image presentation device 40 is a projector, the image it produces is presented on an object such as a projection screen or a wall, forming on that object a specific area in which the image is presented.

In this embodiment, the instruction input system applying image recognition comprises a setting module 11, a detection module 12, a storage module 13, a control module 14, a vibration compensation module 15, an interference detection module 16 and an interference cancellation module 17.

In addition to defining at least one instruction input region A in the image data captured by the image capturing device 30, the setting module 11 can further identify the specific area presented in the image data and define that specific area as a display region S.
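The three-step flow of FIG. 4 (S10 define a region, S11 detect foreground state information inside it, S12 look up the matching control command) can be sketched as a simple lookup pipeline. This is a minimal illustration, not the patent's implementation: the region bounds, state-information names and command table are all assumed values.

```python
# Sketch of the S10-S12 flow: only state information observed inside the
# instruction input region A is mapped to a control command.

INPUT_REGION_A = (100, 100, 300, 300)  # assumed bounds: x1, y1, x2, y2

COMMAND_TABLE = {          # stands in for storage module 13
    "swipe_up": "previous_page",
    "swipe_down": "next_page",
    "dwell_2s": "select",
}

def in_region(point, region):
    x, y = point
    x1, y1, x2, y2 = region
    return x1 <= x <= x2 and y1 <= y <= y2

def dispatch(state_info, position):
    """Steps S11-S12: ignore events outside region A, else look up a command."""
    if not in_region(position, INPUT_REGION_A):
        return None
    return COMMAND_TABLE.get(state_info)

print(dispatch("swipe_up", (150, 200)))   # inside region A -> previous_page
print(dispatch("swipe_up", (10, 10)))     # outside region A -> None
```

The control-module step is reduced here to a dictionary lookup; the patent's storage module would hold richer associations between state information and commands.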
Preferably, the setting module 11 identifies the display region according to color, grayscale level, homogeneity of color gradients, differences between successive frames, a specific object, a specific pattern, a specific type and/or edge detection. Specifically, within the image data the setting module 11 can recognize the area formed by a uniform color, grayscale level, color gradient, inter-frame content, specific object, specific pattern or specific type, and distinguish it from areas with different characteristics; the area so formed serves as the display region S. Alternatively, the display region S can be defined through image edge detection techniques.

Please refer to FIGS. 6a to 6d, which illustrate the operation of the setting module of the second embodiment of the instruction input system applying image recognition according to the present invention. The image presentation device 40 sequentially presents a first region a and a second region b of different sizes, so that the setting module 11 defines the area enclosed between the boundary of the first region a and the boundary of the second region b as a frame c, and then defines the area inside (or outside) the frame c as the display region. As shown in FIG. 6a, the display region S is initially set to the full extent of the image data captured by the image capturing device 30 and shown on the display unit 21, and the detection module identifies the captured image data. Next, as shown in FIG. 6b, the setting module defines a specific area as the first region a. Then, as shown in FIG. 6c, the setting module proportionally shrinks the first region a to define the second region b; in other embodiments, the first region a may instead be proportionally enlarged to define the second region b. Finally, as shown in FIG. 6d, the area enclosed between the boundaries of the first region a and the second region b is defined as the frame c, and the area inside the frame c is the display region B, whose four corners serve as four positioning points. The invention is not limited to this: for a non-quadrilateral linear frame, the positioning points correspond to the number of corners, and for non-linear (curved) frame segments additional positioning points can be added beyond the corners to resolve the non-linear distortion (warping) caused by projection. Obtaining more positioning points allows a more precise display region to be defined. Preferably, the setting module also outlines the frame c in a specific color, such as blue or red, and displays it on the data processing device 20.

In other embodiments, the system can be applied in an environment with a plurality of data processing devices, where the data processing device associated with a particular frame color may be a different device; the remaining implementation variations are similar to the first embodiment and are not repeated here.

In the above embodiment, the detection area used to detect the state information of the foreground image X is not limited to the display region B. As shown in FIG. 6e, the setting module can define at least one instruction input region E inside or outside the display region B, the instruction input region E having a functional correspondence with the display region B, so that the detection module 12 detects the state information within the instruction input region E. The details are as follows:
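The nested-region calibration of FIGS. 6a-6d can be sketched with axis-aligned rectangles: region a is scaled about its center to obtain region b, the band between the two boundaries is the frame c, and the inner rectangle is taken as display region B with its corners as positioning points. This is an illustrative simplification; real frames may be non-rectangular and warped, and the coordinates and scale factor are assumptions.

```python
# Sketch of deriving frame c and display region B from two nested rectangles.

def scale_rect(rect, factor):
    """Scale a rectangle (x1, y1, x2, y2) about its center."""
    x1, y1, x2, y2 = rect
    cx, cy = (x1 + x2) / 2, (y1 + y2) / 2
    w, h = (x2 - x1) * factor / 2, (y2 - y1) * factor / 2
    return (cx - w, cy - h, cx + w, cy + h)

def corners(rect):
    """Four corner positioning points of a rectangular display region."""
    x1, y1, x2, y2 = rect
    return [(x1, y1), (x2, y1), (x1, y2), (x2, y2)]

region_a = (0, 0, 640, 480)                 # first region (assumed size)
region_b = scale_rect(region_a, 0.9)        # second region, proportionally smaller
display_B = region_b                        # region inside frame c
anchor_points = corners(region_b)           # four corner positioning points
# frame c is the band between the boundaries of region_a and region_b
```

For a curved or keystone-distorted frame, extra positioning points along each edge would be collected the same way, as the text describes for non-linear projection warping.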
Specifically, when the detection module 12 recognizes, in the instruction input region E, the specific image preset as the foreground image as shown in FIG. 3a, the control module 14 retrieves the associated control command from the storage module so that the data processing device operates through that command. For example, a specific gesture made in the instruction input region E can call up a "cursor" function, and only after the function has been called does the detection of the dynamic trajectory of the foreground image X begin.

In another preferred embodiment, the control module 14 can further preset an execution order for the commands associated with the display region B or the instruction input region E, in accordance with, for example, the aforementioned general mode, so as to achieve layered control. A foreground image with a specific posture, or a continuous combination or number of repetitions of postures, may correspond to a command a, such as a "special function menu" function; only after command a has been executed does the control module retrieve the relatively associated command b from the storage module 13, i.e., command b becomes available once the "special function menu" function has been called. In this way the system can accomplish operations that would otherwise require a physical or virtual input device (for example, a mouse or keyboard).

In practice, please refer to FIGS. 7a to 7c, which illustrate the operation of the detection module of the second embodiment. When the user waves an arm or palm upward, the foreground image X moves toward the upper part of the display region B as shown in FIG. 7b; when the arm or palm is waved downward, the foreground image X moves toward the lower part of the display region B as shown in FIG. 7c, and the detection module 12 detects that its dynamic trajectory is directed toward the upper or lower part of the display region B.

It is worth noting that, in this embodiment, after the setting module frames the border c in a specific color, the detection module can detect the state information of images appearing within the display region B; the other implementation variations are similar to those of the first embodiment and are not repeated here.
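The layered ordering described above, in which a command a must run before a command b becomes available, can be sketched as a small state machine. The gesture names and command names here are hypothetical; the point is only that the second command is retrievable only after the first has executed.

```python
# Sketch of layered command control: command "a" (opening a special-function
# menu) unlocks command "b" (operating within that menu).

class LayeredDispatcher:
    def __init__(self):
        self.menu_open = False   # becomes True once command "a" has run

    def handle(self, gesture):
        if gesture == "circle":          # command a: call up the menu
            self.menu_open = True
            return "open_menu"
        if gesture == "swipe_right":     # command b: only valid after a
            return "menu_next_item" if self.menu_open else None
        return None

d = LayeredDispatcher()
print(d.handle("swipe_right"))  # None: menu not yet opened
print(d.handle("circle"))       # open_menu
print(d.handle("swipe_right"))  # menu_next_item
```

The same pattern extends to deeper hierarchies by replacing the boolean with a current-state value keyed into the command table.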

Accordingly, when the user waves an arm or palm within the display region B so that the foreground image X produces an upward or downward dynamic trajectory as shown in FIG. 7b or 7c, the control module 14 retrieves from the storage module 13 the command associated with that trajectory, i.e., a "previous page" or "next page" command, or an open-file/close-file and/or application button command, and operates the data processing device 20 according to the retrieved control command.

As for the technique of background (frame) elimination, this embodiment differs from the first embodiment in that, besides using an auxiliary light source to exclude changing areas of the background, the interference detection module 16 also determines the interference areas within the display region B.
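A minimal sketch of mapping the up/down trajectory of foreground image X to the page commands of FIGS. 7a-7c follows. It assumes image coordinates with y growing downward; the centroid track and the threshold are illustrative values, not parameters from the patent.

```python
# Sketch: classify the vertical displacement of the tracked foreground
# centroid as a "previous page" (upward) or "next page" (downward) swipe.

def classify_swipe(track, threshold=40):
    """track: list of (x, y) centroids of foreground image X over time.
    Returns a command name, or None for short or small movements."""
    if len(track) < 2:
        return None
    dy = track[-1][1] - track[0][1]
    if dy <= -threshold:      # moved toward the top of display region B
        return "previous_page"
    if dy >= threshold:       # moved toward the bottom of display region B
        return "next_page"
    return None

print(classify_swipe([(100, 300), (102, 240), (101, 180)]))  # previous_page
print(classify_swipe([(100, 100), (101, 200)]))              # next_page
```

A real detector would first segment the foreground image from the captured frames and smooth the centroid track before classifying it.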
輔助辨識技術來執行前景影像辨識’能大幅度提高辨識 率,因此解決了習知硬體輸入單元之指令輸入技術所產生 的問題。 上述實施例僅為例示性說明本發明之原理及豆功 效,=非用於限制本發明。任何熟習此項技藝之人士均可 在不延为本發明之精神及範脅下,對上述實施例進行修飾 ❹與變化。因此,本發明之權利保護範圍,應如後述之申請 專利範圍所列。 【圖式簡單說明】 —第1圖係本發明之應用影像辨識之指令輸入系統之第 —實施例的應用架構示意圖; 第2a至2d圖係本發明之應用影像辨識之指令輸入系 統之第一實施例之偵測模組的操作示意圖; 第3a至3c圖係本發明之應用影像辨識之指令輸入系 ’先之第只轭例以前景影像之預設姿態影像偵測的操作示 110906DP01 23 200951765 意圖; 第4圖係本發明之應用影像辨識之指令輸入方法之第 一實施例的流程圖; 第5圖係本發明之應用影像辨識之指令輸入系統之第 二貫施例的應用架構不意圖, 第6a至6e圖係本發明之應用影像辨識之指令輸入系 統之第二實施例之設定模組的操作示意圖; 第7a至7c圖係本發明之應用影像辨識之指令輸入系 統之第二實施例之偵測模組的操作示意圖; 第8圖係為本發明之應用影像辨識之指令輸入系統之 第二實施例以晝面補償偵測的設定示意圖;以及 第9圖係本發明之應用影像辨識之指令輸入方法之第 二貫施例的流程圖。 【主要元件符號說明】 11 設定模組 12 領測模組 13 儲存模組 14 控制模組 15 震動補償模組 16 干擾偵測模組 17 干擾消除模組 20 資料處理裝置 21 顯示單元 30 影像擷取裝置 24 110906DP01 200951765 40 影像呈現裝置 S、B 顯示區域 D 註冊區域 A、E 指令輸入區域 a 第一區域 b 第二區域 c 邊框 X 前景影像 M、Ml、M2 特定影像晝面 Cl、C2、C3、 C4、C5 定位點 S10-S12 步驟 S20〜S24 步驟1 2 i: factory track: 'The control module 14 can be based on the display area B group 13, medium: wide 刖?, image X up or down dynamic trajectory 'from the storage module ^ 相对 out of the relative association The order, that is, "previous page" or "below the "1" and "close" and / or application button command 'and according to 4 + down - page", open and close files and / or applications The control commands cause the operation of the data processing device 20. Fen expansion = outside - in! ^7, the technical aspect of face elimination, the difference between this embodiment and the first embodiment is that the detection of the grip] ^ team drops the North Gold &,,,, and 12 In addition to the auxiliary light source to exclude the area of the change of the surface, it is also determined that the copying module 16 ^ . 
The interference areas are determined so that the detection module 12 stops detecting the foreground image X within them. For example, at least one instruction input region E inside the display region B, as in FIG. 6e, can be set as an interference area, preventing the detection module 12 from picking up an unexpected foreground image that would cause the control module 14 to operate the data processing device 20 erroneously.

Further, in another preferred embodiment, the interference cancellation module compares the predicted change content of the display region B with the actual change content of the display region B, so that the detection module determines the foreground image X according to the comparison result, improving the accuracy of its judgment. For example, the data processing device 20 can obtain the image it is about to present (i.e., the background frame) and dynamically perform background elimination on the image presented by the image presentation device 40: an image segment designated for background elimination in the predicted projection content of the data processing device 20 is dynamically removed when the image presentation device 40 presents that segment, so that the background frame projected by the image presentation device 40 does not trigger unintended actions of the data processing device 20.

Further, referring to the instruction input system applying image recognition shown in FIG. 8, the vibration compensation module 15 in the data processing device 20 prevents vibration of the data processing device 20 or the image presentation device 40, caused by external force, from making the image capturing device 30 capture unstable frames or excessive frame differences in the image detection area. As shown in FIG. 8, the vibration compensation module 15 sets five positioning points C1, C2, C3, C4 and C5 on the display region B within the frame c defined in FIG. 6d, where the positioning points C1-C4 are located at the four corners of the display region B and the positioning point C5 is located between the positioning points C1 and C3 on the upper edge of the display region B. The storage module 13 then records the original coordinates of each positioning point, and for every subsequent frame displayed by the data processing device 20 the coordinates of the positioning points are compared. In this embodiment, since the user occludes at most two points at a time (for example, points C1 and C2 occluded from the left side), only the three positioning points with the smallest displacement are needed to compute the motion vector; when the average offset of these three points is too large, vibration is determined, and the vibration compensation module 15 performs a frame correction opposite to the direction of the vibration.

Please refer to FIG. 9, a flowchart of the second embodiment of the instruction input method applying image recognition according to the present invention.

In step S20, a specific area in the image data captured by the image capturing device is identified. In this step, once the image data has been input and/or stored in the data processing device, the specific area in the captured image data is identified so that it can be defined as the display region. The method then proceeds to step S21.

In step S21, the image presentation device sequentially presents a first region and a second region of different sizes. In this step, the setting module defines a specific area as the first region and then proportionally shrinks the first region to define the second region; in other embodiments, the first region may instead be proportionally enlarged to define the second region.
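The anchor-point shake check described above can be sketched as follows. The five point positions and the offset threshold are assumed values; in practice the coordinates would come from detecting the colored frame c in each captured frame.

```python
# Sketch of vibration detection over five positioning points: because a user
# occludes at most two points, the three least-moved points estimate the
# global motion, and a large average offset is treated as vibration.

import math

def estimate_shake(orig_pts, seen_pts, threshold=5.0):
    """orig_pts/seen_pts: dicts like {"C1": (x, y), ...} for the five points.
    Returns (is_vibration, (dx, dy)) from the three least-moved points."""
    moves = []
    for name, (ox, oy) in orig_pts.items():
        sx, sy = seen_pts[name]
        moves.append((math.hypot(sx - ox, sy - oy), sx - ox, sy - oy))
    moves.sort()                      # smallest displacement first
    best = moves[:3]                  # drop up to two occluded/outlier points
    dx = sum(m[1] for m in best) / 3
    dy = sum(m[2] for m in best) / 3
    avg = sum(m[0] for m in best) / 3
    return avg > threshold, (dx, dy)

orig = {"C1": (0, 0), "C2": (100, 0), "C3": (0, 100),
        "C4": (100, 100), "C5": (50, 0)}
seen = {k: (x + 8, y + 6) for k, (x, y) in orig.items()}   # whole frame shifted
is_vib, (dx, dy) = estimate_shake(orig, seen)
# compensation would shift the frame opposite to the motion: (-dx, -dy)
```

A correction opposite to the detected motion vector then realigns the display region, as the text describes for the vibration compensation module 15.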
The method then proceeds to step S22.

In step S22, the area enclosed between the boundary of the first region and the boundary of the second region is defined as a frame, and the area inside (or outside) the frame is defined as the display region. The method then proceeds to step S23.

In step S23, the state information of a foreground image appearing in the display region is detected. In this step, the detection module determines the foreground image from the image data so as to detect the state information of the foreground image appearing in the display region. Preferably, the state information is the posture, blinking change, dynamic trajectory and/or dwell time of the foreground image. In another embodiment, after the setting module outlines the frame in a specific color, the detection module detects the state information of images appearing within the display region; the other implementation variations are similar to those of the first embodiment and are not repeated here. The method then proceeds to step S24.

In step S24, control commands corresponding to the state information are stored, so that the control command matching the detected state information is retrieved and used, through that control command, to make the data processing device execute the corresponding function.

In summary, the instruction input system and method applying image recognition of the present invention have the following advantages:

(1) Low deployment cost and convenient installation. The invention performs image recognition using only an image capturing device together with a data processing device, with no need to purchase additional equipment, which reduces the cost of building the instruction input system while keeping installation simple.

(2) High recognition rate.
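The predicted-versus-actual comparison used for interference removal can be sketched as a per-pixel difference against the frame the device is about to project. Small grayscale grids stand in for real frames here; the sizes and the threshold are assumptions, not values from the patent.

```python
# Sketch of background elimination against predicted projection content:
# pixels that differ from the predicted frame by more than a threshold are
# kept as foreground, so projected background changes are suppressed.

def foreground_mask(captured, predicted, threshold=30):
    """captured/predicted: equal-sized grayscale grids (lists of rows).
    Returns a 0/1 mask of pixels treated as genuine foreground."""
    return [
        [1 if abs(c - p) > threshold else 0 for c, p in zip(crow, prow)]
        for crow, prow in zip(captured, predicted)
    ]

predicted = [[10, 10, 10], [10, 200, 10]]    # content about to be projected
captured  = [[12, 90, 11], [9, 205, 10]]     # camera frame: object over (0, 1)
print(foreground_mask(captured, predicted))  # [[0, 1, 0], [0, 0, 0]]
```

In a real system the predicted frame would first be warped into camera coordinates using the frame-c positioning points before the subtraction.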
The invention uses auxiliary recognition techniques such as background elimination and vibration correction to perform foreground image recognition, which greatly improves the recognition rate and thereby solves the problems of conventional instruction input through hardware input units.

The above embodiments merely illustrate the principles and effects of the present invention and are not intended to limit it. Anyone skilled in the art may modify or vary the above embodiments without departing from the spirit and scope of the invention. The scope of protection of the invention shall therefore be as listed in the claims set forth below.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a schematic diagram of the application architecture of the first embodiment of the instruction input system applying image recognition according to the present invention;
FIGS. 2a to 2d illustrate the operation of the detection module of the first embodiment;
FIGS. 3a to 3c illustrate detection, in the first embodiment, of preset posture images of the foreground image;
FIG. 4 is a flowchart of the first embodiment of the instruction input method applying image recognition;
FIG. 5 is a schematic diagram of the application architecture of the second embodiment of the instruction input system;
FIGS. 6a to 6e illustrate the operation of the setting module of the second embodiment;
FIGS. 7a to 7c illustrate the operation of the detection module of the second embodiment;
FIG. 8 illustrates the positioning-point setup used by the second embodiment for frame-compensation detection; and
FIG. 9 is a flowchart of the second embodiment of the instruction input method applying image recognition.

[Main component symbol description]
11 setting module
12 detection module
13 storage module
14 control module
15 vibration compensation module
16 interference detection module
17 interference cancellation module
20 data processing device
21 display unit
30 image capturing device
40 image presentation device
S, B display regions
D registration region
A, E instruction input regions
a first region
b second region
c frame
X foreground image
M, M1, M2 specific image frames
C1, C2, C3, C4, C5 positioning points
S10-S12 steps
S20-S24 steps


Claims (1)

1.
An instruction input system applying image recognition, for use in a data processing device connected to an image capturing device, the system comprising: a setting module for defining at least one instruction input region in the image data captured by the image capturing device; a detection module for determining a foreground image from the image data and detecting state information of the foreground image appearing in the instruction input region; a storage module for storing control commands corresponding to the state information; and a control module for retrieving from the storage module the control command corresponding to the state information detected by the detection module, so that the data processing device executes a function through the control command.
2. The instruction input system applying image recognition of claim 1, wherein the state information is the posture, blinking change, dynamic trajectory and/or dwell time of the foreground image.
3. The instruction input system applying image recognition of claim 2, wherein the dynamic trajectory comprises a two-dimensional or three-dimensional dynamic trajectory of the foreground image within the instruction input region.
4. The instruction input system applying image recognition of claim 1, wherein the setting module uses an initialization procedure to define the instruction input region according to specific state information of the foreground image.
5.
The instruction input system applying image recognition of claim 2, wherein the storage module stores preset postures of the foreground image and control commands corresponding to the preset postures, so that when the posture detected by the detection module matches a preset posture, the control module retrieves from the storage module the control command corresponding to that preset posture and makes the data processing device execute the function through the control command.
6. The instruction input system applying image recognition of claim 5, wherein, before the detection module performs detection, the setting module performs a registration procedure for the foreground image to acquire its size, direction or posture, improving the accuracy with which the detection module detects the foreground image.
7. The instruction input system applying image recognition of claim 5, wherein a preset posture of the foreground image is a single posture of a specific foreground image or a continuous combination of at least two different postures.
8. The instruction input system applying image recognition of claim 1, wherein illumination from at least one auxiliary light source, applied while the image capturing device captures the image data, improves the accuracy with which the detection module determines the foreground image.
9. The instruction input system applying image recognition of claim 8, wherein the image capturing device is paired with a filtering device or a reflective/luminous object so that, when the image data is captured, the accuracy with which the detection module determines the foreground image is improved.
10.
The instruction input system applying image recognition of claim 1, wherein the data processing device is further connected to an image presentation device for presenting a specific area in the image data, the setting module defining that specific area as a display region.
11. The instruction input system applying image recognition of claim 10, wherein the setting module defines at least one instruction input region inside or outside the display region, the instruction input region having a functional correspondence with the display region, so that the detection module detects the state information within the instruction input region.
12. The instruction input system applying image recognition of claim 10, wherein the setting module defines the display region according to color, grayscale level, homogeneity of color gradients, differences between successive frames, a specific object, a specific pattern, a specific type and/or edge detection.
13. The instruction input system applying image recognition of claim 10, wherein the image presentation device sequentially presents a first region and a second region of different sizes, the setting module defining the area enclosed between the boundary of the first region and the boundary of the second region as a frame and defining the area inside or outside the frame as the display region.
14. The instruction input system applying image recognition of claim 13, wherein the setting module further defines the corresponding corners of a linear frame as positioning points.
15.
The instruction input system applying image recognition of claim 13, wherein the setting module further defines the corresponding corners of a non-linear frame as positioning points and adds positioning points on the curved segments beyond the corners.
16. The instruction input system applying image recognition of claim 13, wherein the setting module further outlines the frame in a specific color and displays it on the data processing device.
17. The instruction input system applying image recognition of claim 13, further comprising a vibration compensation module for setting a plurality of display positioning points in the display region and recording their original coordinates, then comparing them with the coordinates of the corresponding image positioning points in the image data captured by the image capturing device; when the average offset is too large, vibration is determined, and the vibration compensation module corrects the coordinates of the plurality of image positioning points.
18. The instruction input system applying image recognition of claim 13, further comprising an interference detection module for determining interference areas in the display region, so that the detection module stops detecting the foreground image within those areas.
19. The instruction input system applying image recognition of claim 13, further comprising an interference cancellation module for comparing the predicted change content of the display region with the actual change content of the display region, so that the detection module determines the foreground image according to the comparison result, improving the accuracy of its judgment.
20. The instruction input system applying image recognition of claim 1, wherein the physical object corresponding to the foreground image is a reflective device.
21. The instruction input system applying image recognition of claim 1, wherein the physical object corresponding to the foreground image is a light-emitting device or an illumination device.
22.
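The three-step method of claims 22–26 — define an instruction input region, detect the foreground image's status inside it, and look up a stored control instruction for that status — can be sketched as follows. The region geometry, status labels, and command table are invented for illustration and are not the patent's:

```python
# Illustrative-only pipeline: region definition (step 1), crude status
# classification from foreground positions (step 2), and a stored
# status -> control-instruction table (step 3).
INPUT_REGION = (50, 50, 150, 150)          # x0, y0, x1, y1 (assumed)
COMMANDS = {"swipe_left": "PREV_PAGE",     # step (3): status -> instruction
            "swipe_right": "NEXT_PAGE",
            "dwell": "SELECT"}

def in_region(point, region):
    x, y = point
    x0, y0, x1, y1 = region
    return x0 <= x <= x1 and y0 <= y <= y1

def classify_status(track):
    """Step (2): derive status info (here a crude trajectory label) from
    foreground-image positions observed inside the input region."""
    inside = [p for p in track if in_region(p, INPUT_REGION)]
    if len(inside) < 2:
        return None
    dx = inside[-1][0] - inside[0][0]
    if abs(dx) < 10:
        return "dwell"                     # little horizontal motion
    return "swipe_right" if dx > 0 else "swipe_left"

def dispatch(track):
    """Map detected status to the stored control instruction, if any."""
    return COMMANDS.get(classify_status(track))

print(dispatch([(60, 100), (90, 100), (130, 100)]))  # → NEXT_PAGE
```

Positions outside the input region are ignored, matching the claims' requirement that only status information appearing inside the instruction input region triggers an instruction.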
27. The instruction input method applying image recognition of claim [number garbled], further comprising, before step (3) is performed, executing a calibration procedure on the foreground image so as to correct the size, direction or posture of the foreground image.
:申Π利範圍第31項之應用影像辨識之指令輸入方 對廣了!( 1-2 )復包括將非線性的該邊框之 加ΐ位‘並於該頂角外的曲面對應增 =申利範圍第31項之應用影像辨識之指令輸入方 奢 。亥步轉(1_3)復包括將該邊框以特定之色 t框出’並顯示於該資料處理裝置上的步驟。 U0906DP01 32 200951765 、‘ 38.如申凊專利範圍第3ι項之應用影像辨識之 法,復包括下列步驟·· 曰7糕入方 (】_4)於該顯示區域内設有複數個顯示 以記錄該複數個顯示定位點之原始座標;以及··’ ⑴)將該影像棘裝置㈣取之 應該顯示定位點之影像定位點與該複數個顯示1 、 〇 動,以對該影像資;中,^置過大時即判斷為震 行修正。 财—數個影像定位點的座標進 39.如申請專利範圍第Μ項之應用影像 法,復包括判斷該顯示區域中 才”輪入方 擾區域中停止债測該前景影像的步驟f域’以於該干 4。· ΓΪΓ::!31項之應用影像辨識之指令輸人方 區域中之實際變動内容進行比動内容與該顯示 對的結果判斷該前景影像 ^驟’用以依據比 準確率。 卑^㈣職組判斷之 110906DP013〇Hf profit range 22nd application of image recognition command input party '-, step (1) complex includes the image capture device with a filter ^ device illuminating / illuminating object ' (4) when picking up the material The detection module determines the accuracy of the foreground image. 31. The method for inputting an image recognition method according to claim 22, wherein the data processing device is multiplexed with an image presenting device for presenting a specific region in the image data to be used in step (1) A specific area is defined as a display area. 32. The instruction input method for applying image recognition according to claim 31 of the patent application scope is: wherein the step (1) is included in the range of the display area or the out-of-range defines at least one instruction input area, and the instruction input The area has a function correspondence with the display area; to detect the state information of the instruction]]0906DP01 200951765 input area. 33. ^Applicant for image recognition in the 31st section of the patent application scope: 'This step (1) includes the homogeneity according to color, gray-scale, color gradient, and before and after the completion of the map. Steps for the opposite sex, specific objects, special patterns, special patterns, and/or edge detection fields. I am out of the instructions of the application of image recognition in the 31st paragraph of the application for the application of Fang Ling. 
34. The instruction input method applying image recognition of claim 31, wherein step (1) further comprises the steps of: (1-1) sequentially presenting, by the image presentation device, a first region and a second region; (1-2) defining a border from the boundary of the first region, and defining the region contained within or outside the border [remainder of step garbled in source]; and (1-3) defining the display area [remainder of step garbled in source].

35. The instruction input method applying image recognition of claim 31 [remainder of claim garbled in source].

36. The instruction input method applying image recognition of claim 31, wherein step (1-2) further comprises mapping the non-linear boundary of the border to positioning points, and adding corresponding positioning points on the curved surfaces outside the vertex angles.

37. The instruction input method applying image recognition of claim 31, wherein step (1-3) further comprises the step of framing the border in a specific color and displaying it on the data processing device.

38. The instruction input method applying image recognition of claim 31, further comprising the steps of: (1-4) setting a plurality of display positioning points in the display area so as to record the original coordinates of the plurality of display positioning points; and (1-5) comparing the coordinates of the image positioning points, corresponding to the display positioning points in the image data captured by the image capturing device, with the original coordinates; when the average offset is excessive, vibration is determined to have occurred, and the coordinates of the plurality of image positioning points are corrected accordingly.

39. The instruction input method applying image recognition of claim [number garbled], further comprising the step of determining an interference region in the display area, so as to stop detecting the foreground image within the interference region.

40. The instruction input method applying image recognition of claim 31, further comprising the step of comparing the predicted variation content in the display area with the actual variation content in the display area, so as to determine the foreground image according to the comparison result, thereby improving the accuracy of the detection module's determination.
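The interference-cancellation idea recited above (comparing predicted variation content in the display area with actual variation content, and treating only unpredicted change as the foreground image) can be sketched minimally. The flat-list cell representation, tolerance parameter, and function name are all assumptions for illustration:

```python
# Minimal sketch: change that the display itself was predicted to make
# (e.g. content it rendered) is discarded; only cells whose actual change
# differs from the predicted change are kept as foreground candidates.
def unpredicted_change(prev, curr, predicted, tol=0):
    """Return the indices of cells whose actual change differs from the
    predicted change by more than tol; these are foreground candidates."""
    candidates = set()
    for i, (p, c, pr) in enumerate(zip(prev, curr, predicted)):
        actual_delta = c - p
        if abs(actual_delta - pr) > tol:
            candidates.add(i)
    return candidates

prev      = [10, 10, 10, 10]
curr      = [10, 50, 30, 10]
predicted = [0, 40, 0, 0]      # display planned to brighten cell 1 itself
print(unpredicted_change(prev, curr, predicted))  # → {2}
```

Subtracting predicted variation before foreground detection is what lets a detector ignore the device's own on-screen animation, which would otherwise be mistaken for a moving foreground object.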
TW98114338A 2008-06-02 2009-04-30 System of inputting instruction by image identification and method of the same TWI394063B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW98114338A TWI394063B (en) 2008-06-02 2009-04-30 System of inputting instruction by image identification and method of the same

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
TW97120576 2008-06-02
TW98114338A TWI394063B (en) 2008-06-02 2009-04-30 System of inputting instruction by image identification and method of the same

Publications (2)

Publication Number Publication Date
TW200951765A true TW200951765A (en) 2009-12-16
TWI394063B TWI394063B (en) 2013-04-21

Family

ID=44871838

Family Applications (1)

Application Number Title Priority Date Filing Date
TW98114338A TWI394063B (en) 2008-06-02 2009-04-30 System of inputting instruction by image identification and method of the same

Country Status (1)

Country Link
TW (1) TWI394063B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103677442A (en) * 2012-08-28 2014-03-26 广达电脑股份有限公司 Keyboard device and electronic device
TWI494772B (en) * 2009-12-22 2015-08-01 Fih Hong Kong Ltd System and method for operating a powerpoint file
CN111643888A (en) * 2019-03-04 2020-09-11 仁宝电脑工业股份有限公司 Game device and method for identifying game device

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AU3436001A (en) * 1999-12-23 2001-07-03 Justsystem Corporation Method and apparatus for vision-based coupling between pointer actions and projected images
JP4281954B2 (en) * 2002-12-27 2009-06-17 カシオ計算機株式会社 Camera device
US6840627B2 (en) * 2003-01-21 2005-01-11 Hewlett-Packard Development Company, L.P. Interactive display device
TW200512652A (en) * 2003-09-26 2005-04-01 Jia-Zhang Hu Cursor simulator using limbs to control cursor and method for simulating the same
SE526119C2 (en) * 2003-11-24 2005-07-05 Abb Research Ltd Method and system for programming an industrial robot
US7308112B2 (en) * 2004-05-14 2007-12-11 Honda Motor Co., Ltd. Sign based human-machine interaction
TW200601180A (en) * 2004-06-30 2006-01-01 Inventec Corp Gesture recognition system and the method thereof
TW200816798A (en) * 2006-09-22 2008-04-01 Altek Corp Method of automatic shooting by using an image recognition technology
TWM318766U (en) * 2007-04-11 2007-09-11 Chi-Wen Chen Operation device of computer cursor

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI494772B (en) * 2009-12-22 2015-08-01 Fih Hong Kong Ltd System and method for operating a powerpoint file
CN103677442A (en) * 2012-08-28 2014-03-26 广达电脑股份有限公司 Keyboard device and electronic device
TWI476639B (en) * 2012-08-28 2015-03-11 Quanta Comp Inc Keyboard device and electronic device
US9367140B2 (en) 2012-08-28 2016-06-14 Quanta Computer Inc. Keyboard device and electronic device
CN103677442B (en) * 2012-08-28 2017-03-22 广达电脑股份有限公司 Keyboard device and electronic device
CN111643888A (en) * 2019-03-04 2020-09-11 仁宝电脑工业股份有限公司 Game device and method for identifying game device
CN111643888B (en) * 2019-03-04 2023-07-11 仁宝电脑工业股份有限公司 Game device and method for identifying game device

Also Published As

Publication number Publication date
TWI394063B (en) 2013-04-21

Similar Documents

Publication Publication Date Title
US11494000B2 (en) Touch free interface for augmented reality systems
US11227446B2 (en) Systems, methods, and graphical user interfaces for modeling, measuring, and drawing using augmented reality
US10761612B2 (en) Gesture recognition techniques
US11048333B2 (en) System and method for close-range movement tracking
DK179874B1 (en) USER INTERFACE FOR AVATAR CREATION
JP5855343B2 (en) Image manipulation based on improved gestures
JP6074170B2 (en) Short range motion tracking system and method
JP5807686B2 (en) Image processing apparatus, image processing method, and program
JP6573755B2 (en) Display control method, information processing program, and information processing apparatus
JP2013037675A5 (en)
WO2014066908A1 (en) Processing tracking and recognition data in gestural recognition systems
TW201214266A (en) Three dimensional user interface effects on a display by using properties of motion
WO2018000519A1 (en) Projection-based interaction control method and system for user interaction icon
JP2004246578A (en) Interface method and device using self-image display, and program
KR20230011349A (en) Trackpad on the back part of the device
JP6834197B2 (en) Information processing equipment, display system, program
TW200951765A (en) System of inputting instruction by image identification and method of the same
US20230379427A1 (en) User interfaces for managing visual content in a media representation
JP6699406B2 (en) Information processing device, program, position information creation method, information processing system
EP3584679B1 (en) Avatar creation user interface
CN114327063A (en) Interaction method and device of target virtual object, electronic equipment and storage medium
US20240291944A1 (en) Video application graphical effects