
CN109558812B - Face image extraction method and device, practical training system and storage medium - Google Patents

Face image extraction method and device, practical training system and storage medium Download PDF

Info

Publication number
CN109558812B
CN109558812B
Authority
CN
China
Prior art keywords
face image
face
image
images
extracting
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811347342.5A
Other languages
Chinese (zh)
Other versions
CN109558812A (en)
Inventor
刘国成
张杨
霍睿
王仁正
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Railway Polytechnic
Original Assignee
Guangzhou Railway Polytechnic
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Railway Polytechnic filed Critical Guangzhou Railway Polytechnic
Priority to CN201811347342.5A priority Critical patent/CN109558812B/en
Publication of CN109558812A publication Critical patent/CN109558812A/en
Application granted granted Critical
Publication of CN109558812B publication Critical patent/CN109558812B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • G06V40/164Detection; Localisation; Normalisation using holistic features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • G06V40/169Holistic features and representations, i.e. based on the facial image taken as a whole

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to a face image extraction method and device, a practical training system, and a storage medium. The method extracts a first face image based on face skin color features, removes image background similar to face skin color from the first face image by using the Euler number of a standard face image to obtain a second face image, and then extracts the face image from the overlapping region of the second face images across different images, thereby improving the accuracy of face image extraction. When applied to a practical training system for embedded devices, the method accurately extracts trainees' face images from practical training scene images, which facilitates analysis of the trainees.

Description

Face image extraction method and device, practical training system and storage medium
Technical Field
The invention relates to the technical field of image processing, in particular to a face image extraction method, a face image extraction device, a practical training system of embedded equipment and a computer readable storage medium.
Background
With the development of image processing technology, more and more industries need to acquire images for data analysis. In teaching and practical training, for example, facial images of trainees in a classroom can be collected to statistically analyze whether a trainee is absent, among other conditions. Accurately extracting facial images from captured images is the basis for such data analysis.
Face image extraction approaches in the traditional technology are easily affected by background noise, so that parts of the image background are mistakenly extracted as face images, making it difficult to accurately extract a face image from a captured source image.
Disclosure of Invention
Based on this, it is necessary to provide a face image extraction method and device, a practical training system for an embedded device, and a computer-readable storage medium, to solve the technical problem in the traditional technology that it is difficult to accurately extract a face image from a captured source image.
A method for extracting a face image comprises the following steps:
acquiring a plurality of images shot by a multi-view camera;
extracting a first face image from the image according to the face skin color characteristics;
determining a second face image in the first face image according to the Euler number of the standard face image;
acquiring an overlapping area of the second face image among the images; and extracting a third face image in the image according to the overlapping area.
An extraction device of a face image, comprising:
the acquisition module is used for acquiring a plurality of images shot by the multi-view camera;
the first extraction module is used for extracting a first face image from the image according to the face complexion characteristics;
the determining module is used for determining a second face image in the first face image according to the Euler number of a standard face image;
the second extraction module is used for acquiring an overlapping area of the second face image among the images; and extracting a third face image in the image according to the overlapping area.
A practical training system for an embedded device, comprising: the system comprises embedded equipment, a programming host used for programming the embedded equipment, a plug-in platform of the embedded equipment, a plurality of extension modules of the embedded equipment, a binocular camera and a director; wherein,
the embedded equipment is electrically connected with the plurality of extension modules through the plug wire platform and is used for training personnel to perform training operation;
the binocular camera is used for acquiring practical training scene images and sending the practical training scene images to the director device; the training scene image carries a face image of the training personnel;
the director device is used for extracting the face image of the practical training personnel from the practical training scene image according to the face image extraction method.
A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the above-mentioned face image extraction method.
The face image extraction method and device, practical training system, and storage medium acquire a plurality of images shot by a multi-view camera, extract a first face image from the images according to the face skin color features, determine a second face image in the first face image according to the Euler number of a standard face image, acquire the overlapping area of the second face images among the images, and extract a third face image from the images according to the overlapping area. The first face image is extracted based on the face skin color features; image background similar to face skin color is excluded from the first face image by combining the Euler number of the standard face image, yielding the second face image; and an accurate face image is then extracted from the images according to the mutually overlapping areas of the second face images among the different images. This overcomes the problem in the traditional technology that image background is easily mistaken for a face image and improves the accuracy of face image extraction. The method can be applied to a practical training system of an embedded device to accurately extract trainees' face images from practical training scene images, which is convenient for statistical analysis of the trainees.
Drawings
FIG. 1 is a schematic flow chart of a method for extracting a face image according to an embodiment;
FIG. 2 is a block diagram of an embodiment of an apparatus for extracting a face image;
FIG. 3 is a schematic structural diagram of a practical training system of an embedded device in one embodiment;
FIG. 4 is a schematic diagram of the structure of the socket in one embodiment;
FIG. 5 is a schematic structural view of a socket in another embodiment;
FIG. 6 is a schematic diagram of a pin structure according to an embodiment;
FIG. 7 is a schematic diagram of a jack connector according to one embodiment;
FIG. 8 is a diagram illustrating an internal structure of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
When an element is referred to as being "connected" to another element, it can be directly connected to the other element or intervening elements may also be present. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used herein in the description of the invention is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. The terms "first" and "second" used herein do not denote any particular order or quantity, but rather are used to distinguish one element from another.
In an embodiment, a method for extracting a face image is provided, referring to fig. 1, fig. 1 is a schematic flow chart of the method for extracting a face image in an embodiment, the method may be implemented by a computer device such as a personal computer, a tablet computer, and the like, and the method for extracting a face image may include the following steps:
and step S101, acquiring a plurality of images shot by the multi-view camera.
In this step, the target area can be photographed by the multi-view camera, and each camera captures one image, so that a plurality of images, one per camera, can be obtained. Taking a binocular camera as an example, photographing the same target area yields two images, one from the left-eye camera and one from the right-eye camera. When a person is present in the target area, the images captured by the multi-view camera carry face images, and the face images can be extracted from these images. Specifically, the target area may be a practical training scene containing trainees in a training classroom or training room.
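As a minimal illustration (not part of the claimed method), the following Python sketch shows one way to acquire a pair of images of the same scene from a binocular camera with OpenCV; the device indices 0 and 1 are assumptions and depend on how the left and right cameras are enumerated on the host.

```python
import cv2

def capture_stereo_pair():
    left_cam = cv2.VideoCapture(0)   # assumed index of the left-eye camera
    right_cam = cv2.VideoCapture(1)  # assumed index of the right-eye camera
    ok_l, left_img = left_cam.read()
    ok_r, right_img = right_cam.read()
    left_cam.release()
    right_cam.release()
    if not (ok_l and ok_r):
        raise RuntimeError("failed to read from one of the cameras")
    return left_img, right_img  # BGR images of the same target area
```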
And S102, extracting a first face image from the image according to the face skin color characteristics.
This step extracts a first face image from each of the plurality of images based on the face skin color features. According to the face skin color features, the face skin color range can be taken as 100 ≤ B ≤ 120 and 140 ≤ R ≤ 160, so the gray value of pixels whose color channels fall within this range can be set to 1 and the gray values of the remaining pixels set to 0, where B represents the blue channel and R represents the red channel in the RGB color standard.
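A minimal sketch of this skin-color binarization, assuming 8-bit BGR images as delivered by OpenCV; the channel ranges follow the values given in this step.

```python
import numpy as np

def skin_color_binarize(img_bgr, b_range=(100, 120), r_range=(140, 160)):
    """Set pixels whose B and R channels fall inside the stated skin-color
    ranges to 1 and all other pixels to 0 (candidate face mask)."""
    b = img_bgr[:, :, 0].astype(np.int32)  # OpenCV images are ordered B, G, R
    r = img_bgr[:, :, 2].astype(np.int32)
    in_range = ((b >= b_range[0]) & (b <= b_range[1]) &
                (r >= r_range[0]) & (r <= r_range[1]))
    return in_range.astype(np.uint8)  # binary image: 1 = suspected skin pixel
```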
And step S103, determining a second face image in the first face image according to the Euler number of the standard face image.
In this step, the Euler number is a constant calculated from the number of fragments (connected pieces) within a region of the image. A standard face image has a relatively stable Euler number, so the Euler number of the standard face image is used here as the main feature for identifying a face. The Euler number is easier to evaluate for a static face image, and since the captured images are usually static, image background that resembles face skin color but is not a face can be removed from the first face image based on the Euler number of the standard face image; that is, background regions suspected of being faces but which are not faces are removed, and the second face image of each image is obtained.
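For illustration, the sketch below computes an Euler number of a binary region as the number of connected foreground pieces minus the number of enclosed holes; the patent does not fix the connectivity rule, so the default connectivity of scipy.ndimage.label is an assumption. A candidate region of the first face image could then be kept or discarded by comparing its Euler number with that of the standard face image.

```python
import numpy as np
from scipy import ndimage

def euler_number(binary_region):
    """Euler number of a binary mask: foreground components minus holes."""
    region = binary_region.astype(bool)
    _, n_objects = ndimage.label(region)
    # Pad with background so the outer background forms one component;
    # every additional background component is a hole inside the region.
    padded_bg = np.pad(~region, 1, mode='constant', constant_values=True)
    _, n_background = ndimage.label(padded_bg)
    n_holes = n_background - 1
    return n_objects - n_holes
```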
Step S104, acquiring an overlapping area of a second face image among the images; and extracting a third face image in the image according to the overlapping area.
This step obtains the overlapping area of the second face images among the images and determines the third face image in each image according to the position of the overlapping area within each second face image. Identifying face images by combining the overlapping areas of the second face images from different images further eliminates noise interference and improves the accuracy of face image identification.
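A minimal sketch of the overlap test, under the assumption that the second face masks of the different views have already been registered to a common pixel grid (the patent does not describe the alignment step).

```python
import numpy as np

def overlap_mask(mask_left, mask_right):
    """Keep only the pixels marked as face in both registered views;
    the result serves as the basis of the third face image."""
    return np.logical_and(mask_left.astype(bool),
                          mask_right.astype(bool)).astype(np.uint8)
```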
The face image extraction method acquires a plurality of images shot by a multi-view camera, extracts a first face image from the images according to the face skin color features, determines a second face image in the first face image according to the Euler number of a standard face image, acquires the overlapping area of the second face images among the images, and extracts a third face image from the images according to the overlapping area. Based on the first face image extracted from the face skin color features, image background similar to face skin color is excluded by combining the Euler number of the standard face image to obtain the second face image, and an accurate face image is extracted from the images according to the mutually overlapping areas of the second face images among the different images. This overcomes the problem in the traditional technology that image background is easily mistaken for a face image and improves the accuracy of face image extraction.
In one embodiment, the step of extracting the first face image from the image according to the face skin color feature may include:
step S201, binarization is carried out on the image according to the face complexion characteristics to obtain a fourth face image.
In this step, the value range of the color channels of each pixel can be set according to the face skin color features, and the image binarized according to this range to obtain a fourth face image. Specifically, the face skin color range is 100 ≤ B ≤ 120 and 140 ≤ R ≤ 160, so in each image the gray value of pixels whose color channels fall within this range can be set to 1 and the gray values of the remaining pixels set to 0, yielding the fourth face image, where B represents the blue channel and R represents the red channel in the RGB color standard.
In step S202, a plurality of target regions in the fourth face image are determined.
In this step, the target areas are the areas of the fourth face image suspected of being faces. In the process of binarizing the image according to the face skin color features, the gray value of pixels in suspected face areas is set to 1 and that of other pixels to 0, so the target areas correspond to the white areas (i.e., areas whose pixels have a gray value of 1) in the fourth face image.
Step S203, acquiring the number of pixels in each target area.
In this step, the number of pixels in each target area of the fourth face image is counted, that is, the number of pixels in each area suspected of being a face. If the gray value of pixels in the target areas is set to 1 and that of other pixels to 0, this amounts to counting the number of pixels in each white area of the fourth face image.
And step S204, if the number of the pixel points is less than a set threshold value, removing a target area from a fourth face image to obtain the first face image.
The number of pixels in each target area can be compared with a set threshold, and if the number of pixels in a target area is less than the threshold, that target area is removed from the fourth face image. The threshold can be set according to the number of pixels in a standard face image: if a target area contains too few pixels (e.g., fewer than 1000), it can be considered not to be the area where a face image is located and is therefore removed; if the gray value of its pixels was set to 1, it is reset to 0. The result is the first face image. Furthermore, before comparing the pixel counts with the threshold, the fourth face image can be processed with spherical erosion and then adaptive median filtering, which smooths the image and further improves the accuracy of face image recognition.
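The following sketch removes candidate regions below the pixel-count threshold; the grey erosion and plain median filter stand in for the spherical erosion and adaptive median filtering mentioned above, and the 1000-pixel threshold is the example value from the description.

```python
import numpy as np
from scipy import ndimage

def filter_small_regions(mask, min_pixels=1000):
    """Smooth the binary mask, then drop regions with too few pixels."""
    smoothed = ndimage.grey_erosion(mask, size=(3, 3))  # erosion step
    smoothed = ndimage.median_filter(smoothed, size=3)  # smoothing step
    labeled, n_regions = ndimage.label(smoothed)
    kept = np.zeros_like(smoothed)
    for label_id in range(1, n_regions + 1):
        region = labeled == label_id
        if region.sum() >= min_pixels:
            kept[region] = 1
    return kept  # first face image mask
```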
According to this embodiment, the number of pixels in each suspected face area of the fourth face image is counted and non-face areas are removed to obtain the first face image, which further improves the accuracy of face image extraction and effectively removes the interference of background noise.
In one embodiment, after the step of extracting the third face image in the image according to the overlapping region, the method may further include:
carrying out binarization processing on the third face image; and counting the third face image after the binarization processing to determine the number of people in the image.
In this embodiment, the third face image is binarized, and the face areas in the third face image can be segmented and marked for binarized counting; that is, the gray value of pixels in face areas is set to 1 and that of pixels in non-face areas to 0, and the number of people in each image is obtained by counting the number of areas whose pixels have a gray value of 1. This quickly and accurately counts the number of people in the image, which is convenient for taking real-time attendance of trainees in a training classroom or training room.
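A minimal counting sketch, assuming the third face image is already a binary mask in which each face area is a connected region of 1-valued pixels.

```python
from scipy import ndimage

def count_people(third_face_mask):
    """The number of connected 1-valued regions is taken as the head count."""
    _, n_faces = ndimage.label(third_face_mask)
    return n_faces
```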
In one embodiment, after the step of extracting the third face image in the image according to the overlapping region, the method may further include:
acquiring a face image database; the human face image database records a plurality of human face images and the identity of a person corresponding to the human face images; and matching the third face image with the face image in the face image database to determine the identity of the person in the image.
In this embodiment, the third face image is matched against the face images in a face image database to determine the identities of the people in each image. Specifically, the face areas in the third face image can be segmented and marked and then compared, by face matching, with a face image database pre-stored in the background (for example, covering all enrolled trainees), so as to determine the identities of the trainees present at the training session and to identify absent trainees.
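A toy nearest-neighbour sketch of the matching step; the descriptor below (grayscale, resize, flatten) is only an assumption for illustration, since the patent does not specify the face-matching algorithm, and a real deployment would use a trained face-recognition model.

```python
import cv2
import numpy as np

def simple_descriptor(face_crop_bgr, size=(32, 32)):
    """Toy descriptor (assumption): grayscale, resize, flatten."""
    gray = cv2.cvtColor(face_crop_bgr, cv2.COLOR_BGR2GRAY)
    return cv2.resize(gray, size).astype(np.float32).ravel()

def identify_faces(face_crops, database):
    """Match each extracted face region against pre-computed descriptors;
    `database` maps a person identity to a stored descriptor."""
    identities = []
    for crop in face_crops:
        query = simple_descriptor(crop)
        best_id = min(database,
                      key=lambda pid: np.linalg.norm(query - database[pid]))
        identities.append(best_id)
    return identities
```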
In one embodiment, after the step of extracting the third face image in the image according to the overlapping region, the method may further include:
determining various standard human face postures; and carrying out attitude analysis on the third face image according to various standard face attitudes to determine the face attitude in the image.
In this embodiment, multiple standard face poses can be obtained and compared with the poses of the face areas segmented and marked from the third face image, for example by comparing the areas of the standard face poses with the corresponding face areas in the third face image, so as to determine the face pose of each person in the images. In a practical training teaching scene, recognizing the trainees' face poses makes it possible to identify trainees who are having difficulty with an operation, since such trainees generally show characteristic face poses; an early warning prompt can then be sent to the training teacher in time, informing the teacher of the trainees' training state.
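A sketch of one simple, area-based reading of this comparison; the pose labels and the use of region area as the pose feature are assumptions for illustration only, not the claimed analysis.

```python
from scipy import ndimage

def classify_pose_by_area(third_face_mask, standard_pose_areas):
    """For each face region, pick the standard pose whose reference area is
    closest to the region's pixel area. `standard_pose_areas` maps a pose
    label (e.g. 'frontal', 'head down') to a reference area in pixels."""
    labeled, n_faces = ndimage.label(third_face_mask)
    poses = []
    for label_id in range(1, n_faces + 1):
        area = int((labeled == label_id).sum())
        pose = min(standard_pose_areas,
                   key=lambda p: abs(standard_pose_areas[p] - area))
        poses.append(pose)
    return poses
```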
In an embodiment, an extracting apparatus of a face image is provided, and referring to fig. 2, fig. 2 is a block diagram of a structure of the extracting apparatus of a face image in an embodiment, and the extracting apparatus of a face image may include:
the acquisition module 101 is used for acquiring a plurality of images shot by the multi-view camera;
the first extraction module 102 is configured to extract a first face image from an image according to a face skin color feature;
the determining module 103 is configured to determine a second face image in the first face image according to the euler number of the standard face image;
a second extraction module 104, configured to obtain an overlapping area of a second face image between the images; and extracting a third face image in the image according to the overlapping area.
In one embodiment, the first extraction module 102 is further configured to:
carrying out binarization on the image according to the skin color characteristics of the human face to obtain a fourth human face image; determining a plurality of target regions in a fourth face image; acquiring the number of pixel points of each target area; and if the number of the pixel points is less than a set threshold value, removing a target area from a fourth face image to obtain the first face image.
In one embodiment, the method may further include:
the quantity determining unit is used for carrying out binarization processing on the third face image; and counting the third face image after the binarization processing to determine the number of people in the image.
In one embodiment, the method may further include:
the identity determination unit is used for acquiring a face image database; the human face image database records a plurality of human face images and the identity of a person corresponding to the human face images; and matching the third face image with the face image in the face image database to determine the identity of the person in the image.
In one embodiment, the method may further include:
the gesture determining unit is used for determining various standard human face gestures; and carrying out attitude analysis on the third face image according to various standard face attitudes to determine the face attitude in the image.
The face image extraction device of the present invention corresponds one-to-one with the face image extraction method of the present invention. For specific limitations of the extraction device, refer to the limitations of the extraction method above; the technical features and beneficial effects described in the method embodiments apply equally to the device embodiments and are not repeated here. All or part of the modules in the extraction device can be implemented in software, hardware, or a combination of the two. The modules can be embedded in or independent from a processor of the computer device in hardware form, or stored in a memory of the computer device in software form, so that the processor can call and execute the operations corresponding to the modules.
In an embodiment, a practical training system of an embedded device is provided, referring to fig. 3, where fig. 3 is a schematic structural diagram of the practical training system of the embedded device in an embodiment, and the practical training system of the embedded device may include:
the system comprises an embedded device 300, a programming host 400 for programming the embedded device 300, a plug-in platform 100 of the embedded device, a plurality of extension modules 200 of the embedded device, a binocular camera 800 and a director 700; wherein,
the embedded device 300 is electrically connected with the plurality of expansion modules 200 through the plug wire platform 100 and is used for training personnel to perform training operation;
the binocular camera 800 is used for acquiring a practical training scene image and sending the practical training scene image to the director 700; the training scene image carries a face image of a training person;
the director device 700 is configured to extract the facial image of the training person from the training scene image according to the facial image extraction method according to any one of the embodiments.
In this embodiment, the embedded device 300 can be placed on the plug wire platform 100 and electrically connected through this platform to a plurality of extension modules plugged into the platform 100. The embedded device may be an Arduino embedded device, which can be used in programming teaching and can carry different extension modules. The extension modules may include, but are not limited to: an LED lamp module, a traffic light module, a laser head sensor module, a temperature and humidity sensor module, a PS2 joystick module, a relay module, an obstacle-avoidance photoelectric sensing module, a finger heartbeat detection module, a high-sensitivity microphone sensor module, a touch sensor module, a flame sensor module, a full-color LED module, a tracking sensor module, a Hall sensor module, a rotary encoder module, a buzzer module, a soil sensor module, a moisture sensor module, a mercury switch module, a gas sensor module, a photoresistor module, and a vibration switch module. In this way, by writing corresponding programs, the embedded device implements corresponding functions based on the different extension modules.
The programming host 400 can be connected to the embedded device 300 through a connecting line such as a USB data cable. The programming host 400 provides programming and debugging functions, and once it is successfully connected to the embedded device 300, a trainee can write a program into the embedded device 300 through the programming host 400 to debug the hardware and software of the embedded device 300.
The binocular camera 800 can collect practical training scene images that carry face images of the trainees and send them to the director device 700. The director device 700 can accurately identify the trainees' face images from the training scene images and analyze the trainees based on those images, for example by face recognition analysis and face pose analysis, and can provide timely early warning prompts to the teacher for trainees having difficulty with an operation, so that the teacher can assist and answer questions in time. The practical training system thus integrates embedded-device teaching, teaching video recording, directing, and classroom monitoring and analysis, which helps improve the practical training effect of the embedded device and facilitates both teaching and trainee practice.
In one embodiment, the wire plugging platform is provided with a plurality of jacks; a first magnetic area and a first conductive area are arranged on the inner surface of the jack; the expansion module is provided with a plurality of pins matched with the jacks; a second magnetic region and a second conductive region are arranged on the outer surface of the pin; when the expansion module is inserted into the jack of the plug wire platform through the pins, the second magnetic area and the first magnetic area are attracted through magnetism, and the second conductive area is in conductive contact with the first conductive area.
In this embodiment, the plug wire platform 100 is provided with a plurality of jacks 110 matched with the pins of the extension modules; that is, the shape, size, and other attributes of the jacks match the pins, so that the pins of an extension module can be inserted into the jacks. The inner surface of each jack 110 is provided with a first magnetic region and a first conductive region. The first magnetic region attracts components with corresponding magnetism; for example, when a pin with corresponding magnetism is inserted into the jack, it is held firmly by the first magnetic region. The first conductive region is mainly used to make conductive contact with other conductive components; for example, when a conductive pin is inserted into the jack, the first conductive region makes conductive contact with the pin and current can be conducted between them. The jacks 110 can be formed on the plug wire platform 100 in various ways: they may be opened directly on the platform or provided through sockets 120 mounted on the platform.
Taking a socket 120 mounted on the plug wire platform 100 as an example (see fig. 4, a schematic structural diagram of the jack in one embodiment), the socket 120 may be mounted on the platform by welding, integral forming, or similar means. A through hole can be formed in the socket 120 as the jack 110; the inner surface of the jack 110 is the side surface 111, on which the first conductive region and the first magnetic region can be provided. The conductive and magnetic regions can be provided on the inner surface of the jack in various ways, for example by coating the inner surface with magnetic and conductive materials or by attaching magnetic and conductive patches. Specifically, a patch of magnetic conductive metal can be attached to the inner surface of the jack; since this metal is both magnetic and conductive, it forms the first magnetic region and the first conductive region at the attached position. In this way, when a pin is inserted into the jack, the corresponding magnetic region of the pin is attracted to the magnetic conductive metal, and the metal makes conductive contact with the corresponding conductive region of the pin, so that the pin and the jack fit together more tightly, do not come loose easily, and conduct more reliably.
In one embodiment, a socket may be provided with a plurality of jacks whose first conductive regions are connected to each other. Specifically, referring to fig. 5, a schematic structural diagram of a socket in another embodiment, a first jack 131, a second jack 132, and a third jack 133 are provided on the socket 130, and the first conductive regions on their inner surfaces are connected to each other, so that a first pin inserted into the first jack 131 can be in conductive contact with a second pin inserted into the second jack 132 or the third jack 133, and the current carried by any jack can flow to the other jacks.
In one embodiment, further, the shape of the socket may be a cube.
In this embodiment, the socket body is a cube. Using a cube as the shape of the socket helps the socket to be mounted more stably on the plug wire platform, and jacks can be provided on all six faces of the socket with their first conductive regions connected to each other, so that the jacks on all six faces of the socket body can receive pins of different extension modules, which also improves the utilization of the plug wire platform.
The extension module 200 is provided with a plurality of pins 210 matched with the jacks 110 of the plug wire platform 100; that is, the shape, size, and other attributes of the pins 210 match the jacks, so that the extension module 200 can be plugged into the jacks 110 through the pins 210. The outer surface of each pin 210 is provided with a second magnetic region and a second conductive region. The second magnetic region attracts the corresponding magnetic region of the jack; for example, when the pin 210 is inserted into a jack 110 with a corresponding magnetic region, the pin 210 is held firmly in the jack by the second magnetic region. The second conductive region is mainly used to make conductive contact with the corresponding conductive region of the jack 110; for example, when the pin 210 is inserted into a jack 110 with a corresponding conductive region, the second conductive region makes conductive contact with the jack and current can be conducted between them. Placing the first magnetic region on the inner surface of the jack also prevents the jack from being contaminated by magnetic debris, which would affect the firmness of contact with the pin.
The manner of providing the conductive region and the magnetic region on the outer surface of the pin 210 may include various manners, for example, the outer surface of the pin 210 may be coated with a magnetic material and a conductive material, or a magnetic patch and a conductive patch may be attached to the outer surface of the pin 210.
Specifically, referring to fig. 6, a schematic structural diagram of a pin in one embodiment, a patch of magnetic conductive metal may be attached to the outer surface 211 of the pin 210. Since the magnetic conductive metal is both magnetic and conductive, it forms the second magnetic region and the second conductive region at the attached position, so that when the pin 210 is inserted into the jack 110, the metal attracts the first magnetic region on the inner surface of the jack 110 and makes conductive contact with the first conductive region on the inner surface of the jack. The pin 210 and the jack 110 therefore fit together more tightly, do not come loose easily, and conduct more reliably.
The jacks 110 of the plug wire platform 100 are used together with the pins 210 of the extension module 200. When the extension module 200 is plugged into the jacks 110 of the plug wire platform 100 through the pins 210, the second magnetic regions of the pins 210 and the first magnetic regions of the jacks 110 are magnetically attracted, and the second conductive regions of the pins 210 make conductive contact with the first conductive regions of the jacks 110. The pins 210 and jacks 110 are thus held in stable contact by magnetism, so the extension module 200 does not easily fall off the plug wire platform, while the conductive contact between the first and second conductive regions ensures that current can be conducted between the extension module 200 and the platform.
In the above embodiment, the plug wire platform of the embedded device is provided with jacks whose inner surfaces carry first magnetic regions and first conductive regions, and the extension modules are provided with pins whose outer surfaces carry second magnetic regions and second conductive regions, the pins being matched with the jacks on the platform. When an extension module is plugged into the jacks through its pins, the second magnetic regions and the first magnetic regions attract each other magnetically and the second conductive regions make conductive contact with the first conductive regions. The plug wire platform and the extension modules are therefore held together magnetically while being conductively connected, so the contact is stable, the modules do not easily fall off, and poor circuit contact between the platform and the modules is avoided.
In one embodiment, the inner surface of the jack of the plug wire platform is pasted with magnetic conductive metal; and the pins of the expansion module are made of magnetic conductive metal.
In this embodiment, a patch of magnetic conductive metal may be attached to the inner surface of the jacks 110 of the plug wire platform 100; since the metal is both magnetic and conductive, the patch forms the first magnetic region and the first conductive region at the attached position. The pins 210 of the extension module 200 may themselves be made of magnetic conductive metal, so that the second magnetic region and the second conductive region are formed directly on the pins 210. In this way, when a pin 210 of the extension module 200 is inserted into a jack 110 of the plug wire platform 100, the outer surface of the pin 210 and the inner surface of the jack 110 attract each other magnetically and make conductive contact, so the pins of the extension module are held firmly in the jacks of the platform while being conductively connected, the contact is stable, the module does not easily fall off, and poor circuit contact is avoided.
In one embodiment, the socket and the pin are each hexagonal prism shaped.
In this embodiment, the jacks of the plug wire platform may be hexagonal prisms, and correspondingly the pins of the extension module may also be hexagonal prisms. When a pin mates with a jack, the fit is closer, which avoids the problem in the traditional technology that a pin connected to a jack easily rotates relative to it and affects conductivity, further improving the reliability of conduction. Furthermore, a patch of magnetic conductive metal can be attached to the inner surface of the hexagonal-prism jack, so that when a pin of the extension module is inserted, contact is made on all six faces through the magnetic conductive metal, ensuring the reliability of the conductive connection.
In one embodiment, the jacks may be connected by jack connection wires.
In this embodiment, the jacks 110 shown in fig. 3 may be electrically connected by jack connection wires. The back of the plug wire platform 100 may be provided with a plurality of back jacks, such as back jack 141 and back jack 142, which are electrically connected to the jacks on the upper surface of the platform 100; for example, the jack 110 on the upper surface is connected to the back jack 141, so a pin in the jack 110 can be electrically connected to other jacks such as the back jack 142 through the corresponding back jack 141. The jack connection wires of this embodiment thus provide conductive connections between different jacks.
Specifically, referring to fig. 7, a schematic structural diagram of a jack connection wire in one embodiment, the jack connection wire may include a first pin 221 and a second pin 222 matched with the jacks 110 of the plug wire platform. The first pin 221 and the second pin 222 are conductively connected through a wire 20: one end of the wire 20 is connected to the second conductive region on the outer surface of the first pin 221, and the other end to the second conductive region on the outer surface of the second pin 222. The jack connection wire can thus be used to conductively connect different jacks, improving the reliability and flexibility of the electrical connections between jacks. For specific limitations of the first pin 221 and the second pin 222, refer to the description of the pin 210 of the extension module in the foregoing embodiment, which is not repeated here.
In one embodiment, the method may further include: a first image acquisition device 500 and a projection device 600.
The first image acquisition device 500 is configured to acquire video images of a trainee operating the plug wire platform and extension modules of the embedded device and to send the video images to the director device 700, which can receive the video images and project them through the projection device 600.
In this embodiment, the first image acquisition device 500 may be arranged above the plug wire platform of the embedded device and may include a camera 510 and a plurality of auxiliary lamps 520 arranged around the camera; the auxiliary lamps 520 provide auxiliary light when lighting is insufficient, so that the camera 510 can capture clear video images. The first image acquisition device 500 can record video of a trainee operating the plug wire platform and extension modules of the embedded device, for example a teacher's demonstration or a student's operation, and send the video to the director device 700. The director device 700 can run built-in intelligent recording and directing software for recording and directing teaching videos; the first image acquisition device 500 can record the teacher's hardware-connection and programming operations on the plug wire platform, extension modules, and programming host, and the recorded video can be imported into a database through the background data-management software of the directing software and then played on demand. The director device 700 can be provided with a touch screen on which a trainee can directly select and order teaching videos and learning courseware by touch; the ordered video or courseware is projected onto a screen through the projection device 600, which plays and displays it, including practical teaching or learning operations of the embedded device.
According to this embodiment, video of trainees operating the embedded device, the plug wire platform, and the extension modules can be collected by the first image acquisition device and broadcast and projected through the director device and the projection device, which facilitates practical training on the embedded device and improves the quality of teaching and practical training.
In one embodiment, the method may further include: a work bench.
In this embodiment, the practical training system of the embedded device may further include a workbench 10, which can be used to hold the programming host 400 and the director device 700, so that trainees can operate them on the workbench 10 and teachers can conveniently teach and monitor the trainees.
In one embodiment, a first telescopic rod can be further included; one end of the first telescopic rod is connected to the workbench, and the other end is connected to the projection device and the binocular camera.
In this embodiment, the practical training system of the embedded device may further include a first telescopic rod 910. One end of the first telescopic rod 910 may be fixedly or detachably connected to the workbench 10, and the other end may be connected to the projection device 600 and the binocular camera 800; the binocular camera 800 may be arranged on the back of the projection device 600. By adjusting the telescopic state of the first telescopic rod 910, the rod can be extended when the projection device 600 or the binocular camera 800 is in use, with its length adjusted according to projection or image-acquisition needs, and retracted when not in use to save space.
In one embodiment, the method may further include: a second telescopic rod; one end of the second telescopic rod is arranged on the first telescopic rod, and the other end of the second telescopic rod is connected with the first image acquisition device.
In this embodiment, the practical training system of the embedded device may further include a second telescopic rod 920. One end of the second telescopic rod 920 may be fixedly or detachably connected to the first telescopic rod 910, and the other end is connected to the first image acquisition device 500, so that the first image acquisition device 500 can capture video of a trainee operating the plug wire platform and extension modules of the embedded device. By adjusting the telescopic state of the second telescopic rod 920, the rod can be extended when the first image acquisition device 500 is in use, with its length adjusted according to the positions of the plug wire platform and extension modules, and retracted when not in use; when the first image acquisition device 500, the projection device 600, and the binocular camera 800 are all out of use, the first telescopic rod 910 and the second telescopic rod 920 can both be retracted to save space.
In one embodiment, the method may further include: a mobile controller for controlling the director.
In this embodiment, the practical training system of the embedded device may further include a mobile controller 30 for controlling the director device 700. The mobile controller 30 is connected to the director device 700 through WiFi or Bluetooth, so that a teacher or other trainer can control the director device 700 with the mobile controller 30 while moving freely around the classroom or training room. The mobile controller 30 may be embedded with directing control software and, through the WiFi or Bluetooth connection, provides selection of learning videos and a video on-demand function.
In an embodiment, a computer device is provided, and the computer device is applied to the director apparatus pertaining to any one of the above embodiments, where an internal structure diagram of the computer device may be as shown in fig. 8, and fig. 8 is an internal structure diagram of the computer device in an embodiment. The computer device includes a processor, a memory, a network interface, a display screen, and an input device connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a method of extracting a face image. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, a key, a track ball or a touch pad arranged on the shell of the computer equipment, an external keyboard, a touch pad or a mouse and the like.
Those skilled in the art will appreciate that the architecture shown in fig. 8 is merely a block diagram of some of the structures associated with the inventive arrangements and is not intended to limit the computing devices to which the inventive arrangements may be applied, as a particular computing device may include more or less components than those shown, or may combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, the processor implementing the following steps when executing the computer program:
acquiring a plurality of images shot by a multi-view camera; extracting a first face image from the image according to the face skin color characteristics; determining a second face image in the first face image according to the Euler number of the standard face image; acquiring an overlapping area of a second face image among the images; and extracting a third face image in the image according to the overlapping area.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
carrying out binarization on the image according to the skin color characteristics of the human face to obtain a fourth human face image; determining a plurality of target regions in a fourth face image; acquiring the number of pixel points of each target area; and if the number of the pixel points is less than a set threshold value, removing a target area from a fourth face image to obtain the first face image.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
carrying out binarization processing on the third face image; and counting the third face image after the binarization processing to determine the number of people in the image.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
acquiring a face image database; the human face image database records a plurality of human face images and the identity of a person corresponding to the human face images; and matching the third face image with the face image in the face image database to determine the identity of the person in the image.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
determining various standard human face postures; and carrying out attitude analysis on the third face image according to various standard face attitudes to determine the face attitude in the image.
With the above computer device, the computer program running on the processor improves the accuracy of face image extraction. When applied to a practical training system of an embedded device, it can accurately extract trainees' face images from practical training scene images, which is convenient for statistical analysis of the trainees.
It will be understood by those skilled in the art that all or part of the processes in the method for extracting a face image according to any of the above embodiments may be implemented by a computer program, which may be stored in a non-volatile computer-readable storage medium, and when executed, the computer program may include the processes of the above embodiments of the methods. Any reference to memory, storage, databases, or other media used in embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), Programmable ROM (PROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDRSDRAM), Enhanced SDRAM (ESDRAM), Synchronous Link DRAM (SLDRAM), Rambus Direct RAM (RDRAM), direct bus dynamic RAM (DRDRAM), and memory bus dynamic RAM (RDRAM).
Accordingly, in one embodiment there is provided a computer readable storage medium having a computer program stored thereon, the computer program when executed by a processor implementing the steps of:
acquiring a plurality of images captured by a multi-view camera; extracting a first face image from the images according to face skin-color features; determining a second face image within the first face image according to the Euler number of a standard face image; acquiring the overlapping area of the second face image among the images; and extracting a third face image from the images according to the overlapping area.
In one embodiment, the computer program when executed by the processor further performs the steps of:
binarizing the image according to the face skin-color features to obtain a fourth face image; determining a plurality of target regions in the fourth face image; acquiring the number of pixels in each target region; and if the number of pixels in a target region is less than a set threshold, removing that target region from the fourth face image to obtain the first face image.
In one embodiment, the computer program when executed by the processor further performs the steps of:
binarizing the third face image; and counting the binarized third face image to determine the number of people in the image.
In one embodiment, the computer program when executed by the processor further performs the steps of:
acquiring a face image database, in which a plurality of face images and the identities of the persons corresponding to those face images are recorded; and matching the third face image against the face images in the face image database to determine the identity of the person in the image.
In one embodiment, the computer program when executed by the processor further performs the steps of:
determining a plurality of standard face poses; and performing pose analysis on the third face image according to the plurality of standard face poses to determine the face pose in the image.
The computer-readable storage medium improves the accuracy of face image extraction through the stored computer program; when applied to the practical training system of an embedded device, it can accurately extract the face images of practical training students from practical training scene images, which facilitates statistical analysis of the practical training personnel.
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of these technical features are described; nevertheless, as long as a combination of technical features contains no contradiction, it should be regarded as falling within the scope of this specification.
The above embodiments express only several implementations of the present invention, and their description is relatively specific and detailed, but they should not therefore be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and improvements without departing from the inventive concept, and these all fall within the protection scope of the present invention. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. A method for extracting a face image, characterized by comprising the following steps:
acquiring a plurality of images captured by a multi-view camera; wherein each of the plurality of images comprises a face image, the multi-view camera captures the same target area through a plurality of cameras to obtain the images, and the target area comprises a practical training scene with practical training personnel in a practical training classroom;
extracting a first face image from the images according to face skin-color features;
determining a second face image in the first face image according to the Euler number of a standard face image; wherein the second face image corresponds to each of the images;
acquiring an overlapping area of the second face image among the images, and extracting a third face image from the images according to the position of the overlapping area in each of the second face images;
and performing pose analysis on the third face image according to a plurality of preset standard face poses to determine the face pose of the third face image, and if the face pose is a target face pose, determining that the practical training student corresponding to the third face image is a practical training student having difficulty with the operation.
2. The method for extracting a face image according to claim 1, wherein the step of extracting a first face image from the images according to the face skin-color features comprises:
binarizing the image according to the face skin-color features to obtain a fourth face image;
determining a plurality of target regions in the fourth face image;
acquiring the number of pixels in each target region;
and if the number of pixels in a target region is less than a set threshold, removing that target region from the fourth face image to obtain the first face image.
3. The method for extracting a face image according to claim 1 or 2, wherein after the step of extracting a third face image from the images according to the position of the overlapping area in each of the second face images, the method further comprises:
binarizing the third face image;
and counting the binarized third face image to determine the number of people in the image.
4. The method for extracting a face image according to claim 1 or 2, wherein after the step of extracting a third face image from the images according to the position of the overlapping area in each of the second face images, the method further comprises:
acquiring a face image database; wherein the face image database records a plurality of face images and the identities of the persons corresponding to those face images;
and matching the third face image against the face images in the face image database to determine the identity of the person in the image.
5. The method for extracting a face image according to claim 1 or 2, wherein after the step of extracting a third face image from the images according to the position of the overlapping area in each of the second face images, the method further comprises:
determining a plurality of standard face poses;
and performing pose analysis on the third face image according to the plurality of standard face poses to determine the face pose in the image.
6. An extraction apparatus for a face image, comprising:
an acquisition module, configured to acquire a plurality of images captured by a multi-view camera; wherein each of the plurality of images comprises a face image, the multi-view camera captures the same target area through a plurality of cameras to obtain the images, and the target area comprises a practical training scene with practical training personnel in a practical training classroom;
a first extraction module, configured to extract a first face image from the images according to face skin-color features;
a determining module, configured to determine a second face image in the first face image according to the Euler number of a standard face image; wherein the second face image corresponds to each of the images;
and a second extraction module, configured to acquire an overlapping area of the second face image among the images; extract a third face image from the images according to the position of the overlapping area in each of the second face images; perform pose analysis on the third face image according to a plurality of preset standard face poses to determine the face pose of the third face image; and if the face pose is a target face pose, determine that the practical training student corresponding to the third face image is a practical training student having difficulty with the operation.
7. A practical training system for an embedded device, characterized by comprising: an embedded device, a programming host for programming the embedded device, a plug-wire platform of the embedded device, a plurality of expansion modules of the embedded device, a binocular camera, and a director; wherein,
the embedded device is electrically connected to the plurality of expansion modules through the plug-wire platform and is used for practical training personnel to perform practical training operations;
the binocular camera is configured to acquire practical training scene images and send the practical training scene images to the director; wherein the practical training scene images carry face images of the practical training personnel;
and the director is configured to extract the face images of the practical training personnel from the practical training scene images according to the method for extracting a face image of any one of claims 1 to 5.
8. The practical training system of the embedded device according to claim 7, wherein the plug-wire platform is provided with a plurality of jacks; a first magnetic region and a first conductive region are arranged on the inner surface of each jack; each expansion module is provided with a plurality of pins matched with the jacks; and a second magnetic region and a second conductive region are arranged on the outer surface of each pin; wherein,
when the expansion module is inserted into the jacks of the plug-wire platform through the pins, the second magnetic region is attracted to the first magnetic region by magnetic force, and the second conductive region is in conductive contact with the first conductive region.
9. The practical training system of the embedded device according to claim 8, wherein a magnetically conductive metal is attached to the inner surface of each jack; the pins are made of a magnetically conductive metal; and the jacks and the pins are each shaped as a hexagonal prism.
10. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the method for extracting a face image according to any one of claims 1 to 5.
CN201811347342.5A 2018-11-13 2018-11-13 Face image extraction method and device, practical training system and storage medium Active CN109558812B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811347342.5A CN109558812B (en) 2018-11-13 2018-11-13 Face image extraction method and device, practical training system and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811347342.5A CN109558812B (en) 2018-11-13 2018-11-13 Face image extraction method and device, practical training system and storage medium

Publications (2)

Publication Number: CN109558812A; Publication Date: 2019-04-02
Publication Number: CN109558812B; Publication Date: 2021-07-23

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant