
TWI638278B - An online verification method and system for real-time gesture detection - Google Patents

An online verification method and system for real-time gesture detection

Info

Publication number
TWI638278B
TWI638278B
Authority
TW
Taiwan
Prior art keywords
model
module
recognition result
recognition
new model
Prior art date
Application number
TW106112333A
Other languages
Chinese (zh)
Other versions
TW201737139A (en)
Inventor
張宏鑫
陳鼎熠
池立盈
Original Assignee
芋頭科技(杭州)有限公司
Priority date
Filing date
Publication date
Application filed by 芋頭科技(杭州)有限公司
Publication of TW201737139A
Application granted
Publication of TWI638278B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition
    • G06V40/28 Recognition of hand or arm movements, e.g. recognition of deaf sign language
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Social Psychology (AREA)
  • Psychiatry (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention belongs to the field of electronic technology and relates in particular to an online verification method for real-time gesture detection, comprising the following steps: Step 1, an image acquisition module captures images within its visual range in real time; Step 2, an embedded terminal loads a trained model and performs gesture recognition and tracking on the captured images; Step 3, the recognition result is recorded and responded to; Step 4, the correctness of the recognition result is analyzed, a new model is retrained according to set rules, and the accuracy of the new model is verified; Step 5, the previously trained model is updated with the new model. The technical solution provides a real-time gesture recognition method and system which, compared with conventional gesture recognition based on non-depth cameras, offers a more accurate online model optimization system and improves recognition accuracy.

Description

Online verification method and system for real-time gesture detection

The invention belongs to the field of electronic technology, and in particular relates to an online verification method and system for gesture detection.

With the maturation of embedded technology, smart products of all kinds have sprung up. Within smart devices, machine vision has long been a topic of intense interest. Existing gesture technologies fall mainly into two categories. The first is three-dimensional visual recognition based on a depth camera: in addition to an ordinary camera, a depth camera is provided which acquires spatial information through infrared reflection, enriching the features captured by the camera and greatly increasing recognition accuracy. It has already been applied in leading-edge technology products such as Microsoft's Xbox series, whose companion Kinect device is one of the more mature depth cameras in the industry and lets users interact with games through hand or body gestures. However, although depth-camera-based three-dimensional recognition enhances the perception of the image, it is limited by the timeliness and stability of the depth camera and by some architectural compatibility issues, and so cannot yet be deployed on a large scale. The second category is based on a conventional two-dimensional imaging camera: Haier's smart TVs, for example, control the television through gesture images captured by the camera, and the underlying principle is to load a model trained in advance to filter out the windows that match a specified gesture. Such conventional solutions have two problems: 1) for different scenes and environments they are limited by the pre-trained model; 2) accuracy and real-time performance cannot be satisfied at the same time.

The present invention provides an online verification method and system for real-time gesture detection to solve the problems of the prior art.

The specific technical solution is as follows: an online verification method for real-time gesture detection, comprising the following steps: Step 1, an image acquisition module captures images within its visual range in real time; Step 2, an embedded terminal loads a trained model and performs gesture recognition and tracking on the captured images; Step 3, the recognition result is recorded and responded to; Step 4, the correctness of the recognition result is analyzed, a new model is retrained according to set rules, and the accuracy of the new model is verified; Step 5, the previously trained model is updated with the new model.

In the above online verification method for real-time gesture detection, step 4 is specifically as follows: Step 41, the recognition results are periodically uploaded to a back-end server, which verifies their correctness using a deep learning method; Step 42, cases in which the recognition result is incorrect are recorded as error cases, and once the error cases reach a set number or have been collected for a set period of time, their data are added to the training data of the previous model and a new model is obtained by retraining; Step 43, a standard validation set is used to analyze the quality of the new model.

In the above online verification method for real-time gesture detection, step 5 is specifically as follows: Step 51, when the new model is judged to be better than the previous model, the back-end server sends the embedded terminal a request to upgrade the model; Step 52, the embedded terminal responds to the request and the back-end server automatically downloads the new model to the embedded terminal.

In the above online verification method for real-time gesture detection, step 2 is specifically as follows: Step 21, when there is a moving object within the visual range, the embedded terminal starts gesture recognition; Step 22, the pre-trained model is loaded, the target gesture is filtered out of the image, and subsequent images are tracked and detected.

Also provided is an online verification system for real-time gesture detection, comprising: an image acquisition module for capturing images within a visual range in real time; a gesture recognition and tracking module, located on an embedded terminal and connected to the image acquisition module, which loads the trained model to perform gesture recognition and tracking on the captured images; a recording and response module, connected to the gesture recognition and tracking module, for recording the recognition result and responding to it; a verification operation module, connected to the recording and response module, for analyzing the correctness of the recognition result, retraining a new model according to set rules, and verifying the accuracy of the new model; and a model update module, connected to the verification operation module, for updating the model with the new model.

In the above online verification system for real-time gesture detection, the verification operation module is located on a back-end server and comprises: a detection back-testing submodule, connected to the recording and response module, which back-tests the recognition results and records incorrect recognition information and noise information; a model training submodule, connected to the detection back-testing submodule, which adds a set number, or a set time span, of the incorrect recognition information and noise information to the training data and retrains to obtain a new model; and a verification submodule, connected to the model training submodule, which quantitatively evaluates the new model against a periodically updated validation set and issues a model update message when the new model is better than the previous model.

In the above online verification system for real-time gesture detection, the recording and response module includes a visual feedback unit which responds to the recognition result by displaying a corresponding icon on the display interface of the embedded terminal; and/or the recording and response module includes an audio feedback unit which responds to the recognition result by playing music or adding music to a favorites list.

In the above online verification system for real-time gesture detection, the recording and response module is located on the embedded terminal.

In the above online verification system for real-time gesture detection, the image acquisition module is a two-dimensional imaging camera.

Also provided is an embedded smart device comprising the above online verification system for real-time gesture detection.

Beneficial effects: the above technical solution implements a real-time gesture recognition method and system which, compared with conventional gesture recognition based on non-depth cameras, provides a more accurate online model optimization system and improves recognition accuracy.

The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings of the embodiments. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the scope of protection of the present invention.

It should be noted that, provided there is no conflict, the embodiments of the present invention and the features of the embodiments may be combined with one another.

The invention is further described below with reference to the drawings and specific embodiments, which are not to be construed as limiting the invention.

Referring to FIG. 1, an online verification method for real-time gesture detection comprises the following steps:

Step 1, an image acquisition module captures images within its visual range in real time;

Step 2, an embedded terminal loads a trained model and performs gesture recognition and tracking on the captured images;

Step 3, the recognition result is recorded and responded to;

Step 4, the correctness of the recognition result is analyzed, a new model is retrained according to set rules, and the accuracy of the new model is verified;

Step 5, the previously trained model is updated with the new model.

In the prior art, a large amount of data is collected in advance: local image patches containing gestures are cropped as positive samples, and a larger number of patches not containing gestures are cropped as negative samples, the two together forming the training set. The training set is then fed to a suitable algorithm to train a model; before recognition the pre-trained model is loaded and used to decide whether each image contains a gesture. Because the result is limited by the pre-trained model, recognition quality suffers across different scenes and environments. The present invention records and analyzes the recognition results and periodically retrains a new model on specific data for updating, providing a more accurate online model optimization system, improving recognition accuracy, and allowing target gestures appearing anywhere in the field of view to be recognized effectively.
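
As a concrete illustration of this prior-art window-classifier pipeline, the sketch below crops positive and negative patches, extracts features, and trains a binary classifier. The patent names no particular algorithm or library, so the HOG features and linear SVM (OpenCV and scikit-learn), the 64x64 patch size, and the directory names are all illustrative assumptions.

```python
# Minimal sketch: train a binary gesture/background window classifier from
# cropped patches. HOG + LinearSVC is an assumed choice, not the patent's.
import glob

import cv2
import numpy as np
from sklearn.svm import LinearSVC

hog = cv2.HOGDescriptor((64, 64), (16, 16), (8, 8), (8, 8), 9)

def load_patches(pattern, label):
    feats, labels = [], []
    for path in glob.glob(pattern):
        patch = cv2.resize(cv2.imread(path, cv2.IMREAD_GRAYSCALE), (64, 64))
        feats.append(hog.compute(patch).ravel())   # one feature vector per patch
        labels.append(label)
    return feats, labels

# Hypothetical directories holding the cropped training patches.
pos_x, pos_y = load_patches("gesture/*.png", 1)
neg_x, neg_y = load_patches("background/*.png", 0)

clf = LinearSVC(C=1.0)
clf.fit(np.array(pos_x + neg_x), np.array(pos_y + neg_y))

# At recognition time each candidate window w would be scored with
# clf.predict(hog.compute(w).ravel()[None, :]).
```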

In the above online verification method for real-time gesture detection, step 4 is specifically as follows:

Step 41, the recognition results are periodically uploaded to a back-end server, which verifies their correctness using a deep learning method;

Step 42, cases in which the recognition result is incorrect are recorded as error cases; once the error cases reach a set number or have been collected for a set period of time, their data are added to the training data of the previous model and a new model is obtained by retraining;

Step 43, a standard validation set is used to analyze the quality of the new model.

The back-end server receives and records the recognition results sent by the front end, then back-tests them with a more accurate detection method, recording the incorrect recognition information together with some noise information. At regular intervals, part of the checked data is added to the training data and the model is retrained. After the retrained model is obtained, the new model is quantitatively evaluated against a periodically updated validation set.
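
A minimal sketch of this server-side loop is given below; the threshold, record layout, and the train/evaluate callables are assumptions, since the patent only describes back-testing, accumulation of error cases, periodic retraining, and evaluation against a refreshed validation set.

```python
# Back-test uploaded results with a stronger verifier, accumulate error cases,
# retrain once enough have been collected, then score the candidate model on a
# periodically refreshed validation set. Names and thresholds are illustrative.
from dataclasses import dataclass, field
from typing import Callable, List, Tuple

RETRAIN_THRESHOLD = 500  # assumed "set number" of error cases

@dataclass
class ErrorStore:
    cases: List[Tuple[object, int]] = field(default_factory=list)

def backtest(uploaded, verify: Callable[[object], int], store: ErrorStore) -> None:
    """Record every uploaded result that the stronger verifier disagrees with."""
    for patch, predicted_label in uploaded:
        truth = verify(patch)
        if truth != predicted_label:            # misrecognition or noise
            store.cases.append((patch, truth))

def maybe_retrain(store: ErrorStore, base_data, validation_set,
                  train: Callable, evaluate: Callable, current_score: float):
    """Retrain when enough error cases exist; return the new model only if better."""
    if len(store.cases) < RETRAIN_THRESHOLD:
        return None
    new_model = train(base_data + store.cases)
    new_score = evaluate(new_model, validation_set)   # quantitative evaluation
    store.cases.clear()
    return new_model if new_score > current_score else None
```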

Most existing detection and recognition algorithms cannot balance speed against accuracy. Accurate models generally require a large amount of computation and therefore struggle to meet the requirements of a real-time interactive system, while fast algorithms are prone to misrecognition and low recall. The present invention therefore distributes the corresponding functions between the embedded terminal and the back-end server. The embedded terminal provides fast, real-time recognition; for performance reasons, detection is only performed when object motion is detected, which greatly reduces the system's resource usage, and the detected region is then tracked, which both speeds up detection and further reduces resource usage, so that the timeliness and stability of the system are greatly improved. The back-end server provides the more precise functions; because its operations are periodic updates, its timeliness requirements are very low. The data transmitted from the embedded terminal is periodically used by the back-end algorithms to check the recognition accuracy, and the collected data is periodically used to retrain the client-side detection model. In actual use the client is deployed to different environments and varying degrees of false detection may occur early on, but after several rounds of server-side checking and retraining the new model becomes fully adapted to the deployed environment, providing the dual guarantee of timeliness and accuracy.
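
The following sketch illustrates this motion-gated detection and tracking behaviour on the embedded terminal; OpenCV's background subtractor and tracker are assumed implementation choices, the motion threshold is made up, and detect_gesture is a placeholder for whatever model the terminal currently holds.

```python
# Run the expensive gesture detector only when frame-to-frame motion is found,
# then hand the hit region to a lightweight tracker for the following frames.
import cv2

def detect_gesture(frame):
    """Placeholder for the terminal's loaded gesture model; returns (x, y, w, h) or None."""
    return None

def run(camera_index: int = 0, motion_area: int = 1500) -> None:
    cap = cv2.VideoCapture(camera_index)
    bg = cv2.createBackgroundSubtractorMOG2()
    tracker = None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if tracker is not None:                   # tracking mode: cheap per-frame update
            found, box = tracker.update(frame)
            if not found:
                tracker = None                    # target lost, fall back to motion gating
            continue
        mask = bg.apply(frame)                    # motion gate
        if cv2.countNonZero(mask) > motion_area:
            box = detect_gesture(frame)           # expensive step, only on motion
            if box is not None:
                tracker = cv2.TrackerMIL_create() # tracker namespace varies across OpenCV versions
                tracker.init(frame, box)
    cap.release()
```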

In the above online verification method for real-time gesture detection, step 5 is specifically as follows:

Step 51, when the new model is judged to be better than the previous model, the back-end server sends the embedded terminal a request to upgrade the model;

Step 52, the embedded terminal responds to the request and the back-end server automatically downloads the new model to the embedded terminal.

After the newly trained model is obtained, its quality is analyzed on a standard test set. At regular intervals, if a new model appears that is better than the one on the embedded terminal, the back-end server issues an update request; once the embedded terminal responds, the back-end server automatically downloads the new model to the embedded terminal. Each user can thus obtain a customized model, so that the embedded gesture recognition system can adapt to different environments.
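
One way this upgrade handshake could look is sketched below, with HTTP as an assumed transport; the endpoint names, the terminal address, and the version/score fields are illustrative only, as the patent does not define the protocol.

```python
# Server offers an upgrade, the terminal acknowledges, and the new model file
# is then pushed down. requests is an assumed client library.
import requests

TERMINAL_ENDPOINT = "http://terminal.local:8080"   # hypothetical terminal address

def offer_upgrade(new_version: str, new_score: float, old_score: float) -> bool:
    """Server side: ask the terminal whether it will accept the better model."""
    if new_score <= old_score:
        return False                               # only upgrade when strictly better
    reply = requests.post(f"{TERMINAL_ENDPOINT}/model/upgrade-request",
                          json={"version": new_version, "score": new_score},
                          timeout=5)
    return reply.status_code == 200                # 200 = terminal accepts

def push_model(new_version: str, model_path: str) -> None:
    """Server side: transfer the accepted model file to the terminal."""
    with open(model_path, "rb") as f:
        requests.put(f"{TERMINAL_ENDPOINT}/model/{new_version}", data=f, timeout=60)
```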

In the above online verification method for real-time gesture detection, step 2 is specifically as follows:

Step 21, when there is a moving object within the visual range, the embedded terminal starts gesture recognition;

Step 22, the pre-trained model is loaded, the target gesture is filtered out of the image, and subsequent images are tracked and detected.

Specifically, the captured image is examined with the pre-trained classifier; if the target gesture appears, it is recorded and corresponding feedback is given, the position at which the gesture appears is also recorded, and subsequent images are tracked and detected.
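
A sliding-window pass of this kind might look like the sketch below; the window size, stride, and the classify callable are assumptions used only to show how the gesture position is recorded so that subsequent frames can be tracked.

```python
# Slide a window over the captured frame, score each window with the classifier,
# and return the position of the first window accepted as the target gesture.
from typing import Callable, Optional, Tuple

import numpy as np

def find_gesture(frame: np.ndarray,
                 classify: Callable[[np.ndarray], bool],
                 win: int = 64, stride: int = 16) -> Optional[Tuple[int, int, int, int]]:
    """Return (x, y, w, h) of the first window classified as the target gesture."""
    h, w = frame.shape[:2]
    for y in range(0, h - win + 1, stride):
        for x in range(0, w - win + 1, stride):
            if classify(frame[y:y + win, x:x + win]):
                return (x, y, win, win)   # the recorded position seeds the tracker
    return None
```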

Also provided is an online verification system for real-time gesture detection which, referring to FIG. 5, comprises:

an image acquisition module 11, for capturing images within a visual range in real time;

a gesture recognition and tracking module 12, located on an embedded terminal 1 and connected to the image acquisition module 11, which loads the trained model to perform gesture recognition and tracking on the captured images;

a recording and response module 13, connected to the gesture recognition and tracking module 12, for recording the recognition result and responding to it;

a verification operation module 20, connected to the recording and response module 13, for analyzing the correctness of the recognition result, retraining a new model according to set rules, and verifying the accuracy of the new model;

a model update module 21, connected to the verification operation module 20, for updating the previous model with the new model.

The gesture recognition and tracking module 12 may also be a gesture recognition program running on the embedded terminal 1; it carries the real-time monitoring function and supplies the monitoring results and data to a back-end server 2 for detection back-testing. The embedded terminal 1 can run independently of the back-end server 2 when no network connection is available.

In the above online verification system for real-time gesture detection, the verification operation module 20 is located on a back-end server 2 and comprises:

a detection back-testing submodule, connected to the recording and response module, which back-tests the recognition results and records incorrect recognition information and noise information;

a model training submodule, connected to the detection back-testing submodule, which adds a set number, or a set time span, of the incorrect recognition information and noise information to the training data and retrains to obtain a new model;

a verification submodule, connected to the model training submodule, which quantitatively evaluates the new model against a periodically updated validation set and issues a model update message when the new model is better than the previous model.

The back-end server 2 can be provided with data collection and detection functions: it performs more accurate deep learning, back-tests and analyzes the detection results of the embedded terminal 1, records any false detections that occur, trains on the data at regular intervals, and updates the embedded terminal 1 with the new model.

The above deep learning can be implemented with a multi-layer neural network: the lower convolutional layers extract basic image information such as edges or points, and progressively more abstract features are extracted layer by layer. In the gesture example, the middle layers extract information such as skin color and skin folds, the higher layers extract local features of the gesture, and finally a fully connected layer fits the most reasonable classification function. The whole process is trained automatically; although it is relatively slow, it is a background optimization and update service, so timeliness is not a concern. Meanwhile, the back-end server collects training data and retrains the deep network model at regular intervals, ensuring that the model accuracy of the back-end server 2 is higher than that of the embedded terminal 1 and that it can serve the purposes of checking and of optimizing updates.
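
The layer structure described here could be realized, for example, as the small convolutional network sketched below; PyTorch, the channel widths, and the 64x64 input size are assumptions, since the patent only specifies the general progression from low-level convolutions to a fully connected classifier.

```python
# Low-level convolutions -> progressively abstract features -> fully connected
# classifier, mirroring the description above. Sizes are illustrative.
import torch
import torch.nn as nn

class GestureNet(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # edges / points
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # mid-level cues (e.g. skin tone)
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # local gesture parts
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 8 * 8, 128), nn.ReLU(),
            nn.Linear(128, num_classes),                                  # final classification
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# Example: logits = GestureNet()(torch.randn(1, 3, 64, 64))
```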

In the above online verification system for real-time gesture detection, the recording and response module 13 includes a visual feedback unit which responds to the recognition result by displaying a corresponding icon on the display interface of the embedded terminal; and/or

the recording and response module includes an audio feedback unit which responds to the recognition result by playing music or adding music to a favorites list.

These feedback actions call on other services of the embedded terminal 1: after the instruction of the target gesture is received, an interactive response such as playing or stopping music is made, while corresponding icons and visual effects are shown on the external display module of the embedded terminal.

In the above online verification system for real-time gesture detection, the recording and response module 13 is located on the embedded terminal.

In the above online verification system for real-time gesture detection, the image acquisition module 11 may be a two-dimensional imaging camera for capturing real-time images, supporting both still images and video capture at 30 frames per second.
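
Configuring such a camera for still images and 30-frame-per-second capture could look like the following sketch; OpenCV, the camera index, and the 1280x720 resolution are assumptions (the embodiment below only says "high-definition camera").

```python
# Request 30 fps capture from an assumed 2D camera; whether the requested rate
# and resolution are honoured depends on the device.
import cv2

cap = cv2.VideoCapture(0)                  # 0 = default camera, an assumption
cap.set(cv2.CAP_PROP_FPS, 30)              # request 30 frames per second
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1280)    # assumed HD resolution
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 720)

ok, still = cap.read()                     # single still image
cap.release()
```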

Also provided is an embedded smart device comprising the above online verification system for real-time gesture detection. The embedded smart device may be a robot running an embedded system.

In a specific embodiment, referring to FIG. 6, a high-definition camera is connected to the embedded smart device through a MIPI (Mobile Industry Processor Interface) or USB interface; the complete gesture control example is shown in FIG. 6:

On the embedded terminal: the high-definition camera continuously captures the image data appearing within its visual range. The gesture recognition system is started only when there is a moving object within the camera's range. When a target gesture is detected, the local image region in which it appears is recorded immediately, and the corresponding command is executed according to which target gesture appeared. For example, when the play-music gesture appears, the local music interface is called and music starts to play; if the recognized target gesture is the favorite-music command, a favorites icon appears on the screen and the music collection interface is called to add the currently playing music to the favorites list.
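
The gesture-to-command dispatch could be organized as in the sketch below; the gesture labels and the player interface are assumptions, since the patent only states that different target gestures call different local interfaces.

```python
# Map recognized gesture labels to terminal actions. `player` stands in for an
# assumed local music interface with play(), show_icon(), add_to_favorites(),
# and current_track; the label strings are illustrative.
from typing import Callable, Dict

def make_dispatcher(player) -> Dict[str, Callable[[], None]]:
    def favorite() -> None:
        player.show_icon("favorite")                   # favorites icon on screen
        player.add_to_favorites(player.current_track)  # add the playing track
    return {"play": player.play, "favorite": favorite}

def on_gesture(label: str, dispatch: Dict[str, Callable[[], None]]) -> None:
    action = dispatch.get(label)
    if action is not None:
        action()                                       # recognized gesture triggers the command
```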

On the back-end server: the recognition results recorded on the embedded terminal are periodically uploaded to the back-end server, and the system verifies their correctness using a deep learning method while recording the incorrect cases. When the incorrect cases reach a certain number, or have been collected for a certain amount of time, a background program adds these error cases to the original training data and retrains the model. Once a new model is obtained, a standard validation set is used to analyze its quality. When the new model is better than the original model, the server sends the embedded terminal a request to upgrade the model; after the embedded terminal responds, the server automatically downloads the new model to the client. After several such iterations, the recognition accuracy is greatly improved.

The above are only preferred embodiments of the present invention and do not limit its implementation or scope of protection. Those skilled in the art should appreciate that all solutions obtained through equivalent substitutions and obvious variations made on the basis of the specification and drawings of the present invention fall within the scope of protection of the present invention.

S1-S5‧‧‧Steps

41-43‧‧‧Steps

51-52‧‧‧Steps

21-22‧‧‧Steps

1‧‧‧Embedded terminal

2‧‧‧Back-end server

11‧‧‧Image acquisition module

12‧‧‧Gesture recognition and tracking module

13‧‧‧Recording and response module

20‧‧‧Verification operation module

21‧‧‧Model update module

FIG. 1 is a schematic flowchart of the method of the present invention; FIG. 2 is a schematic flowchart of step 4 of the present invention; FIG. 3 is a schematic flowchart of step 5 of the present invention; FIG. 4 is a schematic flowchart of step 2 of the present invention; FIG. 5 is a schematic diagram of the system structure of the present invention; FIG. 6 is a schematic structural diagram of a specific embodiment of the present invention.

Claims (8)

1. An online verification method for real-time gesture detection, comprising the following steps: Step 1, an image acquisition module captures images within a visual range in real time; Step 2, an embedded terminal loads a trained model and performs gesture recognition and tracking on the captured images; Step 3, the recognition result is recorded and responded to; Step 4, the correctness of the recognition result is analyzed, a new model is retrained according to set rules, and the accuracy of the new model is verified; Step 5, the previously trained model is updated with the new model; wherein step 4 is specifically as follows: Step 41, the recognition results are periodically uploaded to a back-end server, which verifies their correctness using a deep learning method; Step 42, cases in which the recognition result is incorrect are recorded as error cases, and once the error cases reach a set number or have been collected for a set period of time, their data are added to the training data of the previous model and a new model is obtained by retraining; Step 43, a standard validation set is used to analyze the quality of the new model.

2. The online verification method for real-time gesture detection according to claim 1, wherein step 5 is specifically as follows: Step 51, when the new model is judged to be better than the previous model, the back-end server sends the embedded terminal a request to upgrade the model; Step 52, the embedded terminal responds to the request and the back-end server automatically downloads the new model to the embedded terminal.

3. The online verification method for real-time gesture detection according to claim 1, wherein step 2 is specifically as follows: Step 21, when there is a moving object within the visual range, the embedded terminal starts gesture recognition; Step 22, the pre-trained model is loaded, the target gesture is filtered out of the image, and subsequent images are tracked and detected.

4. An online verification system for real-time gesture detection, comprising: an image acquisition module for capturing images within a visual range in real time; a gesture recognition and tracking module, located on an embedded terminal and connected to the image acquisition module, which loads the trained model to perform gesture recognition and tracking on the captured images; a recording and response module, connected to the gesture recognition and tracking module, for recording the recognition result and responding to it; a verification operation module, connected to the recording and response module, for analyzing the correctness of the recognition result, retraining a new model according to set rules, and verifying the accuracy of the new model; and a model update module, connected to the verification operation module, for updating the model with the new model; wherein the verification operation module is located on a back-end server and comprises: a detection back-testing submodule, connected to the recording and response module, which back-tests the recognition results and records incorrect recognition information and noise information; a model training submodule, connected to the detection back-testing submodule, which adds a set number, or a set time span, of the incorrect recognition information and noise information to the training data and retrains to obtain a new model; and a verification submodule, connected to the model training submodule, which quantitatively evaluates the new model against a periodically updated validation set and issues a model update message when the new model is better than the previous model.

5. The online verification system for real-time gesture detection according to claim 4, wherein the recording and response module includes a visual feedback unit which responds to the recognition result by displaying a corresponding icon on the display interface of the embedded terminal; and/or the recording and response module includes an audio feedback unit which responds to the recognition result by playing music or adding music to a favorites list.

6. The online verification system for real-time gesture detection according to claim 4, wherein the recording and response module is located on the embedded terminal.

7. The online verification system for real-time gesture detection according to claim 4, wherein the image acquisition module is a two-dimensional imaging camera.

8. An embedded smart device, employing the online verification system for real-time gesture detection according to claim 4, 5, 6 or 7.
TW106112333A 2016-04-13 2017-04-13 An online verification method and system for real-time gesture detection TWI638278B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610231456.8A CN107292223A (en) 2016-04-13 2016-04-13 A kind of online verification method and system of real-time gesture detection
CN 201610231456.8 2016-04-13

Publications (2)

Publication Number Publication Date
TW201737139A TW201737139A (en) 2017-10-16
TWI638278B true TWI638278B (en) 2018-10-11

Family

ID=60041441

Family Applications (1)

Application Number Title Priority Date Filing Date
TW106112333A TWI638278B (en) 2016-04-13 2017-04-13 An online verification method and system for real-time gesture detection

Country Status (3)

Country Link
CN (1) CN107292223A (en)
TW (1) TWI638278B (en)
WO (1) WO2017177903A1 (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108549881A (en) * 2018-05-02 2018-09-18 杭州创匠信息科技有限公司 The recognition methods of certificate word and device
CN109492675B (en) * 2018-10-22 2021-02-05 深圳前海达闼云端智能科技有限公司 Medical image recognition method and device, storage medium and electronic equipment
CN109683938B (en) * 2018-12-26 2022-08-02 思必驰科技股份有限公司 Voiceprint model upgrading method and device for mobile terminal
CN109858380A (en) * 2019-01-04 2019-06-07 广州大学 Expansible gesture identification method, device, system, gesture identification terminal and medium
JP7262232B2 (en) * 2019-01-29 2023-04-21 東京エレクトロン株式会社 Image recognition system and image recognition method
CN112347947B (en) * 2020-11-10 2024-08-27 厦门长江电子科技有限公司 Image data processing system and method integrating intelligent detection and automatic test
CN112378916B (en) * 2020-11-10 2024-03-29 厦门长江电子科技有限公司 Automatic image grading detection system and method based on machine vision
CN112684887A (en) * 2020-12-28 2021-04-20 展讯通信(上海)有限公司 Application device and air gesture recognition method thereof
CN112396042A (en) * 2021-01-20 2021-02-23 鹏城实验室 Real-time updated target detection method and system, and computer-readable storage medium
CN112861934A (en) * 2021-01-25 2021-05-28 深圳市优必选科技股份有限公司 Image classification method and device of embedded terminal and embedded terminal

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110296353A1 (en) * 2009-05-29 2011-12-01 Canesta, Inc. Method and system implementing user-centric gesture control
US20140354540A1 (en) * 2013-06-03 2014-12-04 Khaled Barazi Systems and methods for gesture recognition
TWI469813B (en) * 2010-01-15 2015-01-21 Microsoft Corp Tracking groups of users in motion capture system
TWM514600U (en) * 2015-08-04 2015-12-21 Univ Feng Chia A motional control and interactive navigation system of virtual park
TWI525407B (en) * 2010-01-29 2016-03-11 東京威力科創股份有限公司 Method and system for self-learning and self-improving a semiconductor manufacturing tool

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102831439B (en) * 2012-08-15 2015-09-23 深圳先进技术研究院 Gesture tracking method and system
CN103632143B (en) * 2013-12-05 2017-02-08 冠捷显示科技(厦门)有限公司 Cloud computing combined object identifying system on basis of images
US9886094B2 (en) * 2014-04-28 2018-02-06 Microsoft Technology Licensing, Llc Low-latency gesture detection
CN105205436B (en) * 2014-06-03 2019-09-06 北京创思博德科技有限公司 A kind of gesture recognition system based on forearm bioelectricity multisensor

Also Published As

Publication number Publication date
TW201737139A (en) 2017-10-16
CN107292223A (en) 2017-10-24
WO2017177903A1 (en) 2017-10-19

Similar Documents

Publication Publication Date Title
TWI638278B (en) An online verification method and system for real-time gesture detection
JP6681342B2 (en) Behavioral event measurement system and related method
US10488195B2 (en) Curated photogrammetry
US9471763B2 (en) User input processing with eye tracking
CN111652087B (en) Car inspection method, device, electronic equipment and storage medium
EP3860133A1 (en) Audio and video quality enhancement method and system employing scene recognition, and display device
KR102092931B1 (en) Method for eye-tracking and user terminal for executing the same
CN104049749A (en) Method and apparatus to generate haptic feedback from video content analysis
CN104239416A (en) User identification method and system
CN108109010A (en) A kind of intelligence AR advertisement machines
CN103985137A (en) Moving object tracking method and system applied to human-computer interaction
WO2022095516A1 (en) Livestreaming interaction method and apparatus
CN112527113A (en) Method and apparatus for training gesture recognition and gesture recognition network, medium, and device
CN109887331A (en) A kind of parking stall monitoring terminal and its monitoring method with Car license recognition function
CN109286848B (en) Terminal video information interaction method and device and storage medium
US11494596B2 (en) Systems and methods for performing object detection and motion detection on video information
WO2022041182A1 (en) Method and device for making music recommendation
CN107340868A (en) A kind of data processing method, device and VR equipment
US20220122341A1 (en) Target detection method and apparatus, electronic device, and computer storage medium
CN116259104A (en) Intelligent dance action quality assessment method, device and system
CN115118536B (en) Sharing method, control device and computer readable storage medium
CN113066024B (en) Training method of image blur detection model, image blur detection method and device
CN115311723A (en) Living body detection method, living body detection device and computer-readable storage medium
WO2023221273A1 (en) Server pressure testing method and device, and computer storage medium
CN118941862A (en) Smoking detection method and system based on deep learning

Legal Events

Date Code Title Description
MM4A Annulment or lapse of patent due to non-payment of fees