IDENTITY SYSTEMS
TECHNICAL FIELD OF THE INVENTION
This invention relates to a Facial Identification Matrix (FIM), a means of holding facial biometric data for application across the whole spectrum of security and identification requirements.
BACKGROUND
Biometrics is the science of automatically identifying individuals based on their unique physiological and/or behavioural characteristics. Biometric information based on the unique characteristics of a person's face, iris, voice, fingerprints, signature, palm prints, hand prints, or DNA can all be used to authenticate a person's identity or establish an identity from a database.
The use of facial biometric information has a number of advantages. Existing methods have a good level of reliability and the information can be obtained quickly using non-intrusive techniques. Furthermore, the data is not sensitive to superficial facial features such as sunglasses and beards, primarily because the system can continue training on the data whilst processing the image. One of the major methods of the first generation of facial biometric software, based on a "neural net", creates a template from a two-dimensional (2D) image of the subject's face; although this can be very accurate, both the angle at which the image is captured and the nature of the lighting are critical. Problems also arise with individuals having low skin contrast levels (e.g. people of Afro-Caribbean origin).
A second generation of facial biometric software is now available which has overcome some of the issues with regard to image angle, lighting and skin contrast.
SUMMARY OF THE INVENTION
In simple terms the invention allows captured images to be rendered into 3D models and, depending on the number of cameras used, the image can be accurately rotated through 360 degrees. Our invention makes use of a 3D Biometric Engine. [The 3D Biometric Engine includes the following modules: 3D Patch Engine, 3D Graph Engine, 3D Feature Selector Engine, 3D Indexing Engine and a control module which controls the modules and the input and output functions of the system.] It takes data from a 3D camera system, which is passed to both a 2D and a separate 3D Biometric Engine; the 3D engine enables a correlated index key to be created. The present invention provides a process for producing a FIM [A FIM contains both 2D and 3D templates, indexing data and optionally a copy of the original captured image.] which is more accurate than any currently existing biometric identifier.
The present invention proposes a methodology for producing a FIM from a subject which includes:
- the linking of various known, and new, techniques to produce a facial biometric product with new and enhanced capabilities;
- obtaining a 3D facial image of the subject with improved and balanced contrast levels through the use of modified lighting and filtering methods;
- producing both 2D and 3D biometric data from the captured facial image;
- providing the capability of using the 3D data to generate a 2D biometric template of the subject's face and to manipulate images captured from various angles;
- providing the capability of using the 2D data to generate a 3D biometric template of the subject's face and to manipulate images captured from various angles;
- through the use of the Patch Engine, providing the ability to detect motion at minute levels, down to 3 to 4 pixels, for example the partial blink of an eyelid;
- providing a 3D depth filter, working with Skin Segmentation, to define a Region of Interest (ROI) to a much greater degree of accuracy than currently available;
- combining the two sets of biometric data to form a unique FIM for the subject, thus providing the means of creating a correlated indexing system.
The invention is more accommodating of poor lighting conditions and weak skin contrast levels, and the range of angles through which a successful image can be captured is increased through the ability to correct angular displacement. The use of the two different kinds of biometric data
significantly increases the overall accuracy of identification when measured against using either of the biometric methods individually.
The accuracy of the matrix-generating process can be improved by obtaining a plurality (verification set) of facial images from the subject, generating the two kinds of biometric information for each successive image, and comparing each new set of information with the information previously generated until a predetermined correlation level is achieved. The FIM may be stored together with the facial images which were used to generate it.
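By way of non-limiting illustration, the correlation loop described above might take the following form in Python. The helper callables (capture, make_templates, correlate) and the 0.98 threshold are hypothetical placeholders, not part of the engine itself.

def enrol_with_verification_set(capture, make_templates, correlate,
                                threshold=0.98, max_attempts=10):
    # Capture successive images until two consecutive sets of
    # biometric identifiers reach the predetermined correlation level.
    previous = None
    for _ in range(max_attempts):
        image = capture()                          # one "live" frame
        current = make_templates(image)            # (2D template, 3D template)
        if previous is not None and correlate(previous, current) >= threshold:
            return current, image                  # accepted templates plus image
        previous = current
    return None, None                              # correlation level not reached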
It will be appreciated that the invention further provides a FIM which contains both a 3D facial biometric template and a 2D biometric template for a subject.
The invention, using the FIM, further provides a process for determining the identity of a subject or for verifying the subject against a known FIM. The process compares the matrix obtained through the image-capture process with a stored FIM, which contains 3D facial biometric data and a 2D biometric template derived therefrom, to determine whether the two matrixes match.
The identification matrix is small enough to be written to portable identification cards, added to a database along with numerous similar matrixes or transmitted electronically. In one use of the process the stored matrix may be included on a portable data carrier such as an identity card, a 2D Barcode printout, smart card or any portable media capable of storing data. Thus, the verification process can be used to verify that the data carrier belongs to the individual presenting it.
The process results in a considerable increase in the accuracy with which the
identity of an individual can be verified, to a level previously unobtainable using non-intrusive biometric methodology. There is also a significant reduction in the number of false acceptances.
The accuracy of the verification process can be improved by repeatedly generating identification matrixes from the subject and comparing them with the stored matrix until a predetermined correlation level is achieved.
In another use of the process the stored FIM can be found from a database of similar matrixes using the index key value of the FIM. Thus, for example, the identity of an unknown individual can be ascertained by first finding all the stored index values which match that of the newly generated FIM and then comparing each of the stored matrixes until a best match is found. The invention allows for duplicate index key values.
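A minimal sketch of such a duplicate-tolerant index is given below; the FimDatabase class and the score callable are illustrative assumptions, not the invention's actual storage layer.

from collections import defaultdict

class FimDatabase:
    def __init__(self):
        self._by_key = defaultdict(list)           # index key -> list of FIMs

    def add(self, index_key, fim):
        self._by_key[index_key].append(fim)        # duplicate keys simply accumulate

    def identify(self, index_key, probe_fim, score):
        # Return the stored FIM that best matches the probe, or None.
        candidates = self._by_key.get(index_key, [])
        return max(candidates, key=lambda fim: score(fim, probe_fim), default=None)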
BRIEF DESCRIPTION OF THE DRAWINGS
The following description and the accompanying drawings referred to therein are included by way of non-limiting example in order to illustrate how the invention may be put into practice. In the drawings:
Figure 0 - This is a high level overview of the main modules forming the core of the invention.
Figure 1 - A high level diagrammatic representation of the process for producing a FIM in accordance with the invention;
Figure 2 - A high level diagrammatic representation of the
process for verifying the identity of a subject using the matrix.
Figure 3 - This is a high level diagrammatic representation of the process for identifying a subject using the FIM recalled from storage.
Figures 4a, b, c & d - Concern image capture and lighting issues.
Figure 5 - Basic 2D & 3D Enrolment.
Figure 6 - Capture a 3D Image using a Single Camera Source.
Figure 7 - Capture a 3D Image using a Pair of Cameras.
Figure 8 - Creation of a 3D Image of the face.
DETAILED DESCRIPTION OF THE DRAWINGS
Overview: [See Figure 0.]
This is a high level overview of the main modules involved in the invention.
It explains which modules are linked together and in which order.
Equipment
Suggested equipment required for use by the various processes described: a standard PC (750 MHz processor, 128 MB of RAM, 10 gigabyte hard drive, 15 inch CRT or TFT LCD screen, keyboard, mouse, video grabber card); Microsoft Windows 2000 Professional operating system (the system requires a secure O/S); an approved video camera (for the low-end systems a USB Web camera can be utilised); and lighting units as specified. (Basically the lights illuminate the subject with light which is outside the spectrum visible to a human being.) In certain circumstances an optional control device such as a touch screen may be utilised. If Process Step 7 (as described below) is implemented then a device such as a smart card reader or a printer capable of outputting 2D barcodes would be required.
Enrolment: [See Figure 1.]
The enrolment process involves the following steps:
Process Step 1. Image capture. This is the act of obtaining a digital image of the subject as a direct result of an input into the system from either a digital video camera or a Web camera. Depending on which methodology is used to obtain the third coordinate (required for the 3D biometric engine), either a single camera or a stereoscopic camera system is used. For details of the two available methodologies see the entries below on pseudo 3D image calculation and stereoscopic 3D image capture. Whilst the image capture is being processed the system will normally display to the subject a copy of the captured image; however, in certain circumstances this will not be the case. Both the Web camera and the video camera provide a number of frames which are available for capture and utilisation by the system. The process monitors the quality of the image and assigns a value which relates to the acceptability of the image quality provided. In the event that the image quality is adversely affected by strong peripheral lighting the subject will be advised of the problem. A series of "live" images is preferably obtained from the subject, from which successive pairs of biometric identifiers are created as described. The newly formed identifiers are compared with previously stored identifiers until a perfect match is achieved or the verification level reaches a preset figure. The image can be taken at various angles covering the front of the subject.
Process Step 2. Image separation. At this point the 2D and 3D images are created separately in order that they may be passed to the appropriate biometric engine. Depending on the method used to obtain the third coordinate either the 3D image will be calculated from a single camera system or a 2D image will be calculated from the images provided by the stereoscopic system.
Process Step 3. The 2D data is passed to the 2D biometric engine which performs a number of tasks and results in the 2D biometric template.
Process Step 4. The 3D data is passed to the 3D biometric engine which performs a number of tasks and results in the creation of a 3D biometric template.
Process Step 5. Indexing. At this point data from the 3D biometric engine is used to create an index value which has a specific correlation to the features of the subject involved. [The system allows for duplicated index values; for example, identical twins who are truly identical may well produce a calculated value which is itself identical.] The two sets of data (2D and 3D templates) linked by the index values are then encapsulated together; this data set is known as a FIM.
Process Step 6. The FIM produced is appended to a database.
Process Step 7. (Optional) If required the FIM calculated can be output directly to a portable data storage product.
The two best biometric identifiers are then converted into a binary format which is suitable for storage. These biometric identifiers are then stored as a FIM, along with the "best match" image of the subject and any other desired data. Since the size of the matrix is remarkably small it can be recorded on a plastic or paper identity card, e.g. airline boarding pass or a company ID card, a smart card or any suitable carrier. The matrix could, for example, be incorporated into a barcode in PDF417 format (capacity about 1200 bytes), a USB token, a memory stick or similar binary recording means. In the case of a barcode the information can be printed using a laser printer for paper/cardboard or a commercial 150 dpi thermal printer for plastic cards. Other biometric, image or reference data may be included on the card as desired.
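The following sketch illustrates one way such a compact binary record might be packed; the field layout, the length prefixes and the use of deflate compression are assumptions for illustration only.

import struct
import zlib

PDF417_CAPACITY = 1200  # approximate symbol capacity quoted above

def pack_fim(template_2d: bytes, template_3d: bytes, index_key: bytes) -> bytes:
    # Three length-prefixed fields, compressed so that the record
    # fits within a single PDF417 symbol.
    body = b"".join(struct.pack("<H", len(field)) + field
                    for field in (template_2d, template_3d, index_key))
    packed = zlib.compress(body, 9)
    if len(packed) > PDF417_CAPACITY:
        raise ValueError("FIM too large for a single PDF417 symbol")
    return packed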
The FIM may be transmitted rapidly by any communication means, for example Bluetooth, fax or email, to enable the FIM to be used at a remote location.
Verification: [See Figure 2.]
The verification process involves the following steps:
Process Step 1. The FIM or unique ID of the person to be verified is entered into the system. The method and type of input device by which data is entered into the system can vary considerably; for example PDF417 (2D barcode), smart cards, smart keys, proximity cards, keyboard entry or even a floppy disk could be the means of triggering the system to verify the subject. The FIM does not actually have to be entered into the system; the input need only identify the location of the FIM, which must be available to the system. If a touch screen facility is provided the image may be displayed on the touch screen so that the required facial image can be selected from a group of people by touching the appropriate face.
Process Step 2. The 2D and 3D images are acquired as per step 1 of the enrolment. Depending on the application either a single or multi camera system would be utilised.
Process Step 3. Image separation. At this point the 2D and 3D images are created separately in order that they may be passed to the appropriate biometric engine. Depending on the method used to obtain the third coordinate either the 3D data will be calculated from the single image (provided from a one camera system) or 2D data will be calculated from the images provided by the stereoscopic system. The 2D data is passed to the 2D biometric engine which performs a number of tasks resulting in the creation of a 2D biometric template. The 3D data is passed to the 3D biometric engine which performs a number of tasks and results in the creation of a 3D biometric template.
Process Step 4. The 2D and 3D biometric engines are used to compare the data held in the FIM against the newly acquired image. The user, as part of the system setup, can set a predefined threshold with regard to the accuracy of the system; this value is used in determining whether there is an acceptable match between the two sets of data.
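As a hedged illustration of this step, the sketch below combines the two engine scores against a user-defined threshold; the equal weighting and the 0.85 default are assumed values, and match_2d/match_3d stand in for the engines' actual comparison functions.

def verify(fim_stored, fim_live, match_2d, match_3d, threshold=0.85):
    # match_2d and match_3d each return a similarity score in [0, 1];
    # the combined score is tested against the setup-defined threshold.
    score = 0.5 * match_2d(fim_stored, fim_live) + 0.5 * match_3d(fim_stored, fim_live)
    return score >= threshold, score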
Process Step 5. Dependent on the results of Process Step 4, an action or series of actions could be implemented, for example a trigger signal to open a door release.
Process Step 6. In most circumstances a message will be output indicating the results of the verification. This might be through the means of a message displayed on a computer screen, an audible response or the illumination of a sign.
Process Step 7. The system can record the details of the verification process which has just been enacted in the form of an Audit Trail. This ability enables the verification process to form a major part of an automated Time and Attendance system. The system can also run a training module which updates the FIM with the latest acquired image of the subject. In the case of a failed verification, or where the input device failed a particular test (e.g. a valid date), the system may withhold the return of the input device or overwrite the device, effectively cancelling its operational use.
Identification: [See Figure 3.]
Process Step 1. The 2D and 3D images are acquired as per step 1 of the enrolment. Depending on the application either a single or multi camera system would be utilised.
Process Step 2. Image separation. At this point the 2D and 3D images are created separately in order that they may be passed to the appropriate biometric engine. Depending on the method used to obtain the third coordinate either the 3D data will be calculated from the single image
(provided from a one camera system) or 2D data will be calculated from the images provided by the stereoscopic system. The 2D data is passed to the 2D biometric engine which performs a number of tasks and results in the 2D biometric template. The 3D data is passed to the 3D biometric engine which performs a number of tasks and results in the creation of a 3D biometric template.
Process Step 3. Using the 3D biometric data a new index key is generated.
Process Step 4. Using the key value generated at Process Step 3 the system recovers all FIMs with that specific key value. The 2D and 3D biometric engines are used to compare the data held in the returned FIMs against the FIM of the newly acquired image. The user, as part of the system setup, can set a predefined threshold with regard to the accuracy of the system; this value is used in determining whether there is an acceptable match between the two sets of data.
Process Step 5. Dependent on the results of Process Step 4, an action or series of actions could be implemented, for example a trigger signal needed to open the door release. In most circumstances a message will be output indicating the results of the verification. This might be through the means of a message displayed on a computer screen, an audible response or the illumination of a sign.
Process Step 6. The system can record the details of the identification process which has just been enacted to form an Audit Trail. The system can also run a training module which updates the FIM with the latest acquired image of the subject, thus helping to maintain an accurate collection of images of the subject. In the case of a failed verification, or where the input device failed a particular test (e.g. a valid date), the system may withhold the return of the input device or overwrite the device, effectively cancelling its operational use.
The enrolment and verification processes are not restricted to live CCTV for image capture. Digital images can be acquired by means of digital still cameras, laptop computers with camera attachments (i.e. a Web camera), a PDA hand-held barcode scanner with an image capture head, images scanned from printed documents or photographs, camera-enabled WAP phones, etc. Images acquired by these means would normally result in only the 2D biometric process being applied.
Uses of the identification method include travel documents, banking, healthcare, social security, passports and immigration, education, the prison service, ATM machines, retail, secure access, document security, internet services and identification of football fans.
Image Capture and Lighting: [See Figures 4a - 4d.] The system makes use of cameras which have been tested in order to ensure their compliance for use with the invention. The cameras, whether video cameras or Web cameras, require high-grade optics and CCDs; the resolution and scanning levels of the system are also important, and poor quality products will not perform adequately.
Figure 4a shows a single camera system where a projector is used to illuminate the subject with a calibrated grid; the wavelength of illumination is outside that which a person is able to register. The subject is further illuminated by a balanced light source (i.e. the light falling on each side of the subject's face is balanced to reflect an equal level of light) providing light of a wavelength outside the visible spectrum. The camera is capable of switching, via software control, between colour and the wavelength(s) of the emitters used in the projector and lamps.
Figure 4b uses two cameras; the angle between the cameras is known and fixed. The lighting characteristics are the same as for figure 4a. The images from both cameras are fed simultaneously to the computer system.
At figure 4c multiple cameras are used. At least one pair of cameras will be used as described at figure 4b; the additional camera or cameras will normally be used to take colour images of the subject in addition to providing extra calibration information for the stereoscopic imaging process.
At figure 4d a single camera is used, meeting the range of wavelength sensitivity for the lighting and camera conditions as at figure 4a. The subject is partially surrounded by tracking which enables the camera to move at high speed from one side of the subject to the other, taking a number of images at predetermined points along the tracking.
Note: the lighting fix does not have to be applied in every situation; if a constant level of light is available then the lighting fix may not need to be applied.
Basic 2D & 3D Enrolment: [See Figure 5]
Process Step 1. Image capture is the process of obtaining a digital image of the subject, normally as a direct result of an input into the system from either a digital video camera or a Web camera. Either a single camera or a stereoscopic/multi-camera system is used (applying the appropriate methodology) to obtain the third coordinate required for the 3D biometric engine. A framing box is positioned around the face; in order to achieve the maximum clarity of the image either a telescopic lens is used to zoom in on the subject or the image is digitally enlarged to fill the framing box.
Process Step 2. Once the application has determined that the subject is correctly positioned within the framing box, the user is required to confirm that his face is correctly positioned; the process of enrolment can now commence.
Process Step 3. The main process (see steps 3a and 3b below), like a software 'read ahead' operation, sets an initial value which the main process can use as a comparator.
Process Step 3a. The captured image is passed for processing by the 2D biometric engine; the first template created is a master.
Process Step 3b. The template is passed back to the application including an assessment score and size of the template.
Process Step 3 (Repeated). A further n images of the subject are taken (the actual number is determined in the program setup; for example 10 images may be taken of which only 5 are used in the process). The images are passed to the 2D biometric engine in order to determine that (a) they are valid template material, (b) the score obtained is of an acceptable value and (c) the size of the template is acceptable.
Process Step 4. The templates are sorted in order, based on their quality and their size; a predetermined number of templates are selected to be encapsulated into the FIM.
Process Step 5. The selected templates are used to create a 2D person template which is then written to the application database.
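By way of illustration, the selection performed in Steps 3 to 5 might be sketched as follows; the Template record, the ranking rule and the keep count are assumptions rather than the engine's actual interface.

from dataclasses import dataclass

@dataclass
class Template:
    data: bytes
    score: float    # quality assessment returned by the 2D biometric engine
    size: int       # template size in bytes

def select_templates(candidates, keep=5):
    # Rank by quality (highest first), breaking ties by compactness,
    # and keep the predetermined number for encapsulation into the FIM.
    ranked = sorted(candidates, key=lambda t: (-t.score, t.size))
    return ranked[:keep]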
Process Step 6 (not shown in Figure). The 3D enrolment follows a similar process but uses the 3D biometric engine.
Capture a 3D Image Using a Single Camera as Source: [See Figure 6]
Process Step 1. Projection using non-visible light of a calibration grid
The target subject is illuminated using a wavelength of the spectrum which is not visible to the naked eye. Included within the projected light is a calibration grid (horizontal and vertical lines at a known distance) which, through the use of filters and an appropriate camera device able to detect the grid, is captured along with the image. The camera must have both a high-resolution lens and a high-resolution image capture element (Charge Coupled Device (CCD) or low-noise CMOS), and must also be able to see the wavelength of light projected. Modern CCDs can now work at resolutions up to 16.8M pixels.
Process Step 2. The captured image contains the target subject overlaid with the lines of the grid. Where the horizontal and vertical lines of the calibration grid fall across the contours of the subject's face, the lines are deformed (appear to bend).
Process Step 3. Preparation of image data. In order to build a 3D model from 2D information using a calibration grid two images are required: a standard image incorporating colour information, and a calibrated image which incorporates the standard image overlaid with the calibration grid.
Process Step 4. Linear scan of image data. The standard image overlaid with the information from the generated calibration grid is then placed in 3D space (R3); this gives the effect of a plane in 3D space (with all z co-ordinates set to zero). Note that the generated calibration grid has the same resolution as the projected calibration grid. A linear scan of the image can now be performed; the process involves scanning along each of the horizontal calibration grid lines, from left to right, top to bottom. For each pass of the scan the generated calibration grid is adjusted: at each intersecting point on the grid the corresponding picture element is checked in the image which incorporates both the image and the calibration grid. If the pixel information reveals a colour which is part of the calibration grid itself, the next scan is processed; otherwise the point referenced is adjusted until all checks have been processed.
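A simplified sketch of this scan is given below; the is_grid_colour test, the displacement strategy and the max_shift bound are illustrative assumptions rather than the engine's actual adjustment procedure.

def linear_scan(grid_points, image, is_grid_colour, max_shift=20):
    # grid_points: rows of (x, y, z=0) intersection points placed in R3.
    # Each point is displaced until it lands on a grid-coloured pixel
    # of the captured (image + grid) frame, following the deformed lines.
    adjusted = []
    for row in grid_points:                        # top to bottom
        new_row = []
        for (x, y, z) in row:                      # left to right
            shift = 0
            while shift < max_shift and not is_grid_colour(image, x + shift, y):
                shift += 1
            new_row.append((x + shift, y, z))
        adjusted.append(new_row)
    return adjusted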
Process Step 5. Applying image texture data. A specific area of interest (nose, mouth etc.) in the standard image can be used to map across the surface area of the patch, normally however the area of the face (forehead down to the bottom of the chin) would be used. Texture mapping co-ordinates (s, t) can be calculated from the u and v co-ordinates that map the actual surface of the patch, taking into account patch dimensions (top/left
= -1,-1 and bottom/right = 1,1).
Calculating the actual texture mapping co-ordinates: s = (u + 1.0) / 2; t = (v + 1.0) / 2.
The resulting values of s and t range from 0.0 through to 1.0. These values can be scaled to device co-ordinates if required, although most 3D texture mapping hardware uses unit values ranging from 0.0 to 1.0.
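For clarity, the mapping can be expressed as a short Python function (an illustrative sketch only):

def texture_coords(u, v):
    # Map patch co-ordinates u, v in [-1, 1] to texture co-ordinates
    # s, t in [0, 1], as per the formula above.
    return (u + 1.0) / 2.0, (v + 1.0) / 2.0

# Patch corners: top/left (-1, -1) maps to (0.0, 0.0);
# bottom/right (1, 1) maps to (1.0, 1.0).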
Capture a 3D Image Using a Stereoscopic / Multiple Camera System as a Source: [See Figure 7.]
Figure 7 shows the layout of a pair of cameras for use in stereoscopic image capture. The principles on which this process functions are well known (see Mori, K., Kidode, M. & Asada, H. (1973). An iterative prediction and correction method for automatic stereo comparison. Comp. Graph. Image Process). The figure indicates how two cameras can be used to determine depth information of a subject. By taking an image point from camera 1 (the point x1,y1), searching for the same point in the image captured by camera 2 (the point x2,y2) and using image block cross-correlation, it is possible to calculate the depth of this point in the real world. As camera 1 will have the coordinate system (x1,y1,z1), which will be different to that of the second camera 2 (x2,y2,z2), the stereo search will have two dimensions (x2,y2). Whilst there are different methods for calculating depth information, the use of the block cross-correlation method is preferred (even though costly in processor time) as it provides a more reliable way of obtaining the z1 values. The information obtained is used to build up a data set of the
x,y,z coordinates for use by other modules within the application.
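A simplified sketch of the block cross-correlation search follows. Unlike the full two-dimensional search described above it assumes rectified images (so the search collapses to one scan line), and the focal length and baseline values are placeholders.

import numpy as np

def block_depth(img1, img2, x1, y1, block=8, focal=1.0, baseline=1.0):
    # Find the block in img2 (same scan line, rectified assumption)
    # that best cross-correlates with the block around (x1, y1) in img1,
    # then triangulate a depth value from the disparity.
    h = block // 2
    patch = img1[y1 - h:y1 + h, x1 - h:x1 + h].astype(float)
    patch -= patch.mean()
    best_x2, best_score = x1, -np.inf
    for x2 in range(h, img2.shape[1] - h):
        cand = img2[y1 - h:y1 + h, x2 - h:x2 + h].astype(float)
        cand -= cand.mean()
        denom = np.sqrt((patch ** 2).sum() * (cand ** 2).sum()) or 1.0
        score = (patch * cand).sum() / denom       # normalised cross-correlation
        if score > best_score:
            best_score, best_x2 = score, x2
    disparity = abs(x1 - best_x2) or 1
    return focal * baseline / disparity            # estimated z1 value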
Creation of a 3D Image of the Face Using a 3D Patch Engine: [See Figure 8.]
Process Step 1. Obtain Working Resolution. The size of the patch array (resolution) is obtained directly from the resolution provided by the image capture device; effectively this establishes the working parameters of the patch. The unit scale of the patch is 2 by 2 (represented as a true mathematical model). A scaling factor can be applied to create a true (real world) scale. The minimum working resolution will be 90 units; thus the scale is 90/2, which is 45 units.
Process Step 2. Initialise Patch Variables. Two variables are required in order to specify the points in the patch; these are commonly known as u and v. The variables u and v are the working variables required for the generation of the 3D co-ordinates. At initialisation both u and v are set to the value minus one, representing the top left co-ordinate of the patch. The top left coordinate serves as the starting point for the linear scan process. Further variables are defined in order for the process to be completed.
Process Step 3. Generation of Patch Constraint Points. This process generates 3D point components, where the x and y components are calculated and the third component (z) is set to the value zero. Note that the patch at this stage represents a flat plane in 3D space, standing upright spanning the Y axis, with a width that spans the X axis. This process also serves as an initialisation process for the patch array.
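Steps 1 to 3 might be sketched as follows; the use of numpy and the exact array layout are illustrative choices, not the engine's actual representation.

import numpy as np

def init_patch(resolution=90, scale=45.0):
    # Steps 1-3: a resolution x resolution grid of constraint points
    # spanning u, v in [-1, 1] (the 2-by-2 unit patch), scaled to
    # real-world units, with every z component initialised to zero.
    u = np.linspace(-1.0, 1.0, resolution)
    v = np.linspace(-1.0, 1.0, resolution)
    uu, vv = np.meshgrid(u, v)
    return np.stack([uu * scale, vv * scale, np.zeros_like(uu)], axis=-1)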
Process Step 4. Import of constraint data. Constraint information can be imported into the process from the data output by the 3D image capture engine in the form of 3D co-ordinates. The 3D coordinates now act as constraint points to be used in the generation of the curves; this provides the ability to automate the process of patch curve generation.
Process Step 5. Generation of Patch Curves. The generation of patch curves is accomplished by taking a collection of constraint points and calculating each part of the curve. The smoothness of the curve is variable depending on the level of smoothness required. This process is applied across the u array of constraint points with the results feeding back into the patch array; on completion of the u array the process is repeated for the v array of constraint points. The final result is a grid of curves which breaks the patch down into quadrants, with all curves inter-connecting.
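The description does not fix a particular curve formulation; as a stand-in, the sketch below sweeps a Catmull-Rom spline through one row or column of constraint points, with samples_per_span controlling the level of smoothness.

def catmull_rom(p0, p1, p2, p3, t):
    # One Catmull-Rom segment evaluated at t in [0, 1]; the points may
    # be scalars or numpy vectors.
    return 0.5 * ((2 * p1) + (-p0 + p2) * t
                  + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t ** 2
                  + (-p0 + 3 * p1 - 3 * p2 + p3) * t ** 3)

def curve_through(points, samples_per_span=4):
    # Sweep a smooth curve through a u-direction row (or v-direction
    # column) of constraint points, feeding back into the patch array.
    curve = []
    for i in range(1, len(points) - 2):
        p0, p1, p2, p3 = points[i - 1:i + 3]
        curve.extend(catmull_rom(p0, p1, p2, p3, s / samples_per_span)
                     for s in range(samples_per_span))
    return curve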
Process Step 6. Storage of 3D template information via a database. The final process is the creation of a 3D template, which is in the form of a FIM 3D Class; this is then stored in a database. Information held in the database can be recalled to be rendered on the display device. The application module has the ability to rotate the model about a central axis +/- 15 degrees.
3D Graph Pattern Matching
The graph-matching module has been developed at the University of York (UK); it is an advanced system for finding solutions to constraint-based problems. The solution avoids the exponential complexity of the problem by using methods based on those originally proposed by Hancock and Kittler. This method was extended and developed as neural relaxation by Austin and Turner; further development introduced correlation matrix memories as a calculation engine for the system. This technique and its extensions are embodied in the Advanced Uncertain Reasoning Architecture (AURA) graph matching engine (from the University of York), versions 1.0 onwards. The underlying AURA library is Copyright of the University of York and Cybula Ltd.
3D Feature Selector Engine
The engine is used to delimit a face from the surrounding background and to define the key features of the face, for example eyes, mouth, nose etc.
Skin Chrominance Process
This element of the invention (the 3D Feature Selector Engine) uses a face/facial characteristics detection algorithm which is able to handle a wide variety of variations in colour images; the engine also makes use of pixel depth data from information supplied by the Patch Engine. The use of colour information can simplify the task of finding faces in different environmental lighting conditions. The process for locating facial images/facial characteristics is shown below:
1. Acquire the image in colour.
2. Provide for lighting compensation.
3. Collect information on skin tone.
4. Create skin segmentation data.
5. Consolidate information on skin tones in order to create an overall skin image.
Lighting Compensation and Skin Colour Detection
It is known that the appearance of the skin colour will change under different lighting conditions. Modelling skin tone colour involves isolating the skin
colour space and using the cluster associated with the skin colour in this space.
Skin Segmentation
A problem arises when two faces (or a face and a skin-coloured object) in a scene touch or overlap each other, causing them to be segmented from the scene as a single object and hence to form a single Region of Interest (ROI). The solution to this problem in the past has been to use an edge enhancement filter before skin segmentation; care must be exercised in how strongly the filters are applied, as enhancement filters tend to degrade the overall image. As part of this invention we now apply a 3D depth filter to aid the segmentation process. It follows that the reverse is also true and that the skin tone information can be mapped to the Patch Engine output as a fourth component. The invention also applies the method of making the colour difference between adjacent pixels proportional to the original image colour difference.
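A minimal sketch of how such a depth filter might separate overlapping skin regions is shown below; the clustering-by-depth-gap approach and the max_gap threshold are assumptions for illustration only.

import numpy as np

def split_by_depth(skin_mask, depth_map, max_gap=0.15):
    # Pixels that pass skin segmentation but lie at clearly different
    # depths are split into separate clusters, so two touching faces
    # yield two regions of interest instead of one.
    depths = depth_map[skin_mask]
    if depths.size == 0:
        return []
    ordered = np.sort(depths)
    gaps = np.where(np.diff(ordered) > max_gap)[0]   # depth discontinuities
    clusters = np.split(ordered, gaps + 1)
    return [(band.min(), band.max()) for band in clusters]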
Region of Interest Detection
The essence of ROI detection relies on skin tones, which allow areas of interest (eyes, nose etc.) to be processed. This process does not isolate a face in a scene but isolates areas of interest. Using a combination of TRUE positives with FALSE positives and various other percentage results, a probability face graph can be constructed and a decision threshold implemented. A neural net is then trained with results taken from the ROI process; both FALSE and TRUE results are used, resulting in face detection in a scene.
Indexing, using 3D Elements.
The index module works with information provided from several modules, the major ones being the delimitation of facial features and the 3D depth calculations. Common facial features, for example eyes, eyes to ear edge, nose to closed-jaw, depth of eyebrows etc., can be used as elements of a compound index. The element values would be a normalised set of data based on five or more images of the subject. The delimitation of facial features module has to determine whether a true element value is used or, if there is a concern regarding the accuracy of the element, whether an average value should be substituted. The more elements used, the greater the ability to separate out individuals and the lower the risk arising from using average values for some of the elements in the key. Each element has a one-bit Boolean flag to indicate whether a real or substituted value has been supplied for the element.
The following is an illustration of the concept using only four elements: L.Eye (205); R.Eye (204); Nose to Closed-Jaw (650); Eye to Eye (200). The key would look like 205b204b650B200b, where 650 is a substituted value. Using 30 or more elements creates a highly selective and searchable index; identification can be against any part(s) of the key or the whole key.
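The key construction can be expressed directly in Python; the build_index_key function is an illustrative assumption, while the element list reproduces the four-element example above.

def build_index_key(elements):
    # elements: (value, substituted) pairs; a lowercase 'b' flags a
    # measured value, an uppercase 'B' a substituted (average) value.
    return "".join(f"{value}{'B' if substituted else 'b'}"
                   for value, substituted in elements)

key = build_index_key([(205, False), (204, False), (650, True), (200, False)])
assert key == "205b204b650B200b"   # matches the example above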
Capturing images at various angles. (Optional Feature)
Description of method.
The invention will identify a subject, relative to a specific camera mounted at a known position in the environment, with the subject's face being at various angles.
Enrolment Process
The subject's head position is first aligned to 0 degrees, that is perpendicular to the camera from which the subject will be tracked. The first enrolment image is taken; the subject then rotates their head by a number of degrees and a second enrolment image is taken. This procedure is repeated until the subject's head is at right angles to the camera, that is at 90 degrees to the camera. Once a number of enrolment images have been taken, with the head rotating from centre to right, the procedure is repeated for the opposite side of the face. Note that there is no need to enrol the image of the subject's first head position at centre left, as this would duplicate the first enrolment image. When the enrolment procedure is completed there will be a collection of enrolled images spanning 90 degrees either side of a front-of-head view. A new template is created holding a number of images representing the subject covering an angular range of 180 degrees.
Verification Process
The normal verification process is now applied with the new template information, which can now manage multi-angled images previously outside of the scope of the system.
It will be appreciated that the features disclosed herein may be present in any feasible combination. Whilst the above description lays emphasis on those areas which, in combination, are believed to be new, protection is claimed for any inventive combination of the features disclosed herein.