
CN117854163A - Palm vein recognition method, system and storage medium - Google Patents

Palm vein recognition method, system and storage medium

Info

Publication number
CN117854163A
CN117854163A
Authority
CN
China
Prior art keywords
feature
feature vector
fingerprint
gesture
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202410251729.XA
Other languages
Chinese (zh)
Inventor
李伟
邓宝玉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Pinsheng Technology Co ltd
Original Assignee
Shenzhen Pinsheng Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Pinsheng Technology Co ltd
Priority to CN202410251729.XA
Publication of CN117854163A
Legal status: Withdrawn


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/70 Multimodal biometrics, e.g. combining information from different biometric modalities
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V 10/761 Proximity, similarity or dissimilarity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V 10/80 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V 10/806 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/12 Fingerprints or palmprints
    • G06V 40/1347 Preprocessing; Feature extraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/14 Vascular patterns
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/20 Movements or behaviour, e.g. gesture recognition
    • G06V 40/28 Recognition of hand or arm movements, e.g. recognition of deaf sign language
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/40 Spoof detection, e.g. liveness detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Computer Interaction (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Medical Informatics (AREA)
  • Evolutionary Computation (AREA)
  • Databases & Information Systems (AREA)
  • Artificial Intelligence (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Vascular Medicine (AREA)
  • Collating Specific Patterns (AREA)

Abstract

The invention discloses a palm vein recognition method, a palm vein recognition system and a storage medium, which aim to improve the accuracy and security of identity authentication by comprehensively analyzing multiple types of biometric data. The method first collects a palm vein image of the user and extracts a palm vein feature vector from it. It also collects fingerprint images, palm print images and gesture dynamic data of the user, and extracts fingerprint, palm print and gesture feature vectors from these data respectively. These different types of biometric feature vectors are then fused to form a comprehensive feature representation of the user. The method compares this comprehensive feature representation with the target feature representations stored in a database and calculates a similarity score between them. Finally, the identity information of the user is identified based on the similarity score. The method is particularly suitable for authentication scenarios requiring high security and accuracy.

Description

Palm vein recognition method, system and storage medium
Technical Field
The invention relates to the technical field of biological recognition, in particular to a palm vein recognition method, a palm vein recognition system and a storage medium.
Background
Touch screen technology has been widely used in a variety of devices including, but not limited to, smart phones, tablet computers, self-service terminals, and industrial control interfaces. As these applications have increased in performance requirements, assessing the speed of the touch screen response has become a critical task. The speed of reaction directly affects the user experience, including the smoothness and accuracy of the touch response.
Conventional touch screen testing methods typically focus on a single performance index, such as image quality or current variation, and ignore the complex relationship of these factors to each other. Still other methods provide relatively accurate reaction rate assessment, but this typically requires the use of costly test equipment and complex data analysis algorithms. This not only increases the cost of testing, but also limits the application of these methods in low cost or resource constrained environments.
Accordingly, there is a need to provide a touch screen testing method that provides a low cost and high accuracy touch screen testing scheme.
In the existing technical field of biological recognition, various different recognition methods have advantages and disadvantages, but all have common technical challenges. Palm vein identification technology, while excellent in terms of safety and uniqueness, is limited by certain environmental and operating conditions. For example, the quality of the palmar vein image may be affected by the lighting condition or the hand position of the user, resulting in degradation of recognition accuracy.
Fingerprint recognition is another common biometric technology; it has the advantages of easy operation and high recognition speed, but it is easily affected by finger dryness, abrasion or stains, which can cause recognition failure or false recognition. Palm print recognition has uniqueness and universality, but palm print features are complex and place higher demands on image processing and feature extraction.
In addition, with the continuous improvement of security requirements, the single biometric identification method has not been able to meet the application scenario of high security level. For example, in financial transactions or security-sensitive situations, a single biometric approach may be more susceptible to fraud attacks or misidentification.
Therefore, it is highly desirable to develop a multi-biometric method based on the palmar veins.
Disclosure of Invention
The application provides a palm vein recognition method, a palm vein recognition system and a storage medium, so as to improve the accuracy and the security of identity authentication.
The application provides a palm vein recognition method, which comprises the following steps:
collecting a palm vein image of a user, and obtaining a palm vein feature vector in the palm vein image;
collecting fingerprint images, palm print images and gesture dynamic data of a user, obtaining fingerprint feature vectors in the fingerprint images, obtaining palm print feature vectors in the palm print images and obtaining gesture feature vectors in the gesture dynamic data;
Fusing the palm vein feature vector with the fingerprint feature vector, the palm print feature vector and the gesture feature vector to obtain a comprehensive feature representation of the user;
comparing the comprehensive feature representation with target feature representations stored in a database, and calculating a similarity score between the comprehensive feature representation and the target feature representation;
and identifying the identity information of the user according to the similarity score.
Still further, the acquiring the palmar vein feature vector in the palmar vein image includes:
wavelet transformation is applied to the acquired palm vein image to strengthen the visibility of the vein pattern, so that an enhanced image is obtained;
processing the reinforced image by using a customized convolutional neural network to generate a feature map; wherein the customized convolutional neural network is trained to specifically identify and emphasize key features of the palm vein, including branch points, shape, and direction of the palm vein;
and extracting key information from the feature map, and constructing a palm vein feature vector.
Still further, the acquiring the fingerprint feature vector in the fingerprint image includes:
preprocessing the acquired fingerprint image, wherein the preprocessing comprises image enhancement and denoising so as to improve the definition and the identifiability of the fingerprint pattern, thereby obtaining the preprocessed fingerprint image.
Analyzing the preprocessed fingerprint image using an improved convolutional neural network specifically designed to identify fingerprint minutiae features in the fingerprint image, including ridge lines, minutiae points, bifurcation points;
extracting key information from the identified fingerprint detail characteristics, and constructing a preliminary fingerprint characteristic vector representing fingerprint uniqueness;
optimizing and enhancing the preliminary fingerprint feature vector by applying an edge detection and pattern matching technology to generate an enhanced fingerprint feature vector;
and combining the fingerprint reinforcement feature vectors generated by the fingerprint images acquired from different angles or under different conditions to form the fingerprint feature vectors in the fingerprint images.
Still further, the acquiring the palmprint feature vector in the palmprint image includes:
preprocessing the acquired palm print image, wherein the preprocessing comprises image enhancement and filtering to improve the definition of palm print lines, so as to obtain a preprocessed palm print image;
analyzing the preprocessed palm print image by using edge detection and image segmentation technology, and identifying palm print detail characteristics of the palm print, wherein the palm print detail characteristics comprise ridge lines, bifurcation points and termination points;
Constructing a preliminary palm print feature vector containing palm print key information according to the extracted palm print detail features;
and further analyzing and optimizing the preliminary palm print feature vector by using a pattern recognition technology based on deep learning to generate the palm print feature vector.
Still further, the acquiring the gesture feature vector in the gesture dynamic data includes:
analyzing the collected gesture dynamic data, and identifying key gesture action characteristics, wherein the key gesture action characteristics comprise relative position change among fingers, duration time of gestures and speed curves of movement;
according to the key gesture motion characteristics, a preliminary gesture feature vector reflecting gesture characteristics is constructed, wherein the preliminary gesture feature vector comprises a gesture motion mode and dynamic characteristics;
processing the preliminary gesture feature vector by applying a time sequence analysis technology to capture the mode and rule of gesture motion changing along with time;
and optimizing the mode and rule of the captured gesture motion along with the change of time based on the long-short-time memory network, and generating a gesture feature vector.
Further, the fusing the palm vein feature vector with the fingerprint feature vector, the palm print feature vector and the gesture feature vector to obtain a comprehensive feature representation of the user includes:
The palm vein feature vector v1, the fingerprint feature vector v2, the palm print feature vector v3 and the gesture feature vector v4 are each subjected to a non-linear mapping according to Formula 1:
v'_i = φ(v_i), i = 1, ..., N
wherein v_i is the i-th feature vector and v'_i is the i-th feature vector after mapping;
the weight w_i of each mapped feature vector v'_i is calculated according to Equation 2, an entropy-based weighting, wherein β is an adjustment parameter and H(v'_i) is the information entropy of v'_i;
the user's composite feature representation is calculated according to equation 3 below
Wherein,is a weight coefficient, and N is the number of feature vectors.
Still further, the computing a similarity score between the composite feature representation and the target feature representation includes:
the similarity score S between the comprehensive feature representation F and the target feature representation T is computed according to a formula combining the cosine similarity of F and T, an adjustable scale parameter and weight coefficients, wherein cos(F, T) is the cosine similarity between F and T, used to measure the consistency of the two vector directions; σ is the adjustable scale parameter; and w1 and w2 are weight coefficients.
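The following is a reference sketch of this fusion-and-scoring scheme in NumPy. Because the formula images are not reproduced above, the tanh mapping, the entropy-based softmax weighting and the combination of cosine similarity with a distance term are assumptions made for illustration; the function names fuse_features and similarity_score and the parameters beta, sigma, w1 and w2 are likewise illustrative.
import numpy as np

def entropy(v, bins=16):
    """Shannon entropy of a feature vector, estimated from a histogram."""
    hist, _ = np.histogram(v, bins=bins)
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def fuse_features(vectors, beta=1.0):
    """Map each feature vector non-linearly, weight it by its information entropy,
    and combine the weighted vectors into one comprehensive representation."""
    mapped = [np.tanh(np.asarray(v, dtype=np.float64)) for v in vectors]  # assumed mapping
    h = np.array([entropy(m) for m in mapped])
    w = np.exp(beta * h) / np.exp(beta * h).sum()       # assumed entropy-based weighting
    return sum(w_i * m for w_i, m in zip(w, mapped))    # weighted combination of the mapped vectors

def similarity_score(f, t, sigma=1.0, w1=0.7, w2=0.3):
    """Assumed combination of cosine similarity and a scale-controlled distance term."""
    cos = float(np.dot(f, t) / (np.linalg.norm(f) * np.linalg.norm(t) + 1e-12))
    dist = float(np.exp(-np.linalg.norm(f - t) ** 2 / (2 * sigma ** 2)))
    return w1 * cos + w2 * dist

# Example: palm vein, fingerprint, palm print and gesture feature vectors of equal length
rng = np.random.default_rng(0)
vecs = [rng.normal(size=128) for _ in range(4)]
user_repr = fuse_features(vecs)
target_repr = fuse_features([v + 0.05 * rng.normal(size=128) for v in vecs])
print("similarity:", similarity_score(user_repr, target_repr))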
The application provides a palmar vein recognition system, comprising:
the palm vein image processing unit is used for acquiring a palm vein image of a user and acquiring a palm vein feature vector in the palm vein image;
The multi-feature processing unit is used for collecting fingerprint images, palm print images and gesture dynamic data of a user, acquiring fingerprint feature vectors in the fingerprint images, acquiring palm print feature vectors in the palm print images and acquiring gesture feature vectors in the gesture dynamic data;
the fusion unit is used for fusing the palm vein feature vector with the fingerprint feature vector, the palm print feature vector and the gesture feature vector to obtain comprehensive feature representation of a user;
a comparison unit, configured to compare the integrated feature representation with a target feature representation stored in a database, and calculate a similarity score between the integrated feature representation and the target feature representation;
and the identification unit is used for identifying the identity information of the user according to the similarity score.
The present application provides a computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of:
acquiring a palm vein feature vector in a palm vein image of a user;
acquiring fingerprint feature vectors in a fingerprint image of a user, acquiring palm print feature vectors in a palm print image of the user, and acquiring gesture feature vectors in gesture dynamic data of the user;
Fusing the palm vein feature vector with the fingerprint feature vector, the palm print feature vector and the gesture feature vector to obtain a comprehensive feature representation of the user;
comparing the comprehensive feature representation with target feature representations stored in a database, and calculating a similarity score between the comprehensive feature representation and the target feature representation;
and identifying the identity information of the user according to the similarity score.
The beneficial effects of this application include: (1) By fusing palm vein features, fingerprint features, palm print features and gesture dynamic features, the method comprehensively utilizes multiple biometric characteristics, which greatly improves recognition accuracy. This multi-feature fusion can effectively reduce the false recognition rate of single-biometric recognition and provide more comprehensive identity verification. (2) Multi-feature fusion also provides additional security and anti-spoofing capability. It is difficult to imitate or spoof several different biometric features simultaneously, so the security of the method is far higher than that of a single-biometric recognition system.
Drawings
Fig. 1 is a flowchart of a method for palm vein recognition according to a first embodiment of the present application.
Fig. 2 is a schematic diagram of a palmar vein recognition system according to a second embodiment of the present application.
Detailed Description
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present application. This application is, however, susceptible of embodiment in many other ways than those herein described and similar generalizations can be made by those skilled in the art without departing from the spirit of the application and the application is therefore not limited to the specific embodiments disclosed below.
The first embodiment of the application provides a palm vein recognition method. Referring to fig. 1, a schematic diagram of a first embodiment of the present application is shown. A detailed description of a method for palm vein recognition is provided in the first embodiment of the present application with reference to fig. 1.
The palm vein identification method comprises the following steps:
step S101: and collecting palm vein images of the user, and extracting palm vein features in the palm vein images.
This step involves two key links: and collecting palm vein images and extracting palm vein characteristics.
Acquiring palm vein images typically requires the use of specialized imaging equipment, such as near infrared cameras. Such cameras are capable of capturing a pattern of veins that are not visible under the palm skin because venous blood has different characteristics of absorbing near infrared light than the surrounding tissue.
To ensure image quality, the acquisition should be performed in an environment that avoids direct sunlight or intense light reflection. In addition, the palm should remain clean, avoiding wetting or grease, to reduce noise during image capture.
The user needs to place the palm of the hand on a support platform at a designated location, typically below or in front of the camera. The palm of the user should lie flat to avoid compression or distortion to maintain the natural state of the vein pattern.
Under proper device and environment settings, the imaging device is activated to capture a high definition image of the palm. Multiple angles of images may be captured as needed to ensure the integrity of the vein pattern.
The captured image is first pre-processed, including denoising, contrast enhancement, graying, etc., to enhance the visibility and sharpness of the vein pattern. Next, vein patterns are identified and extracted using image processing algorithms, such as edge detection or image segmentation techniques. These algorithms can identify characteristics of the path, width, density, and shape of the vein.
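The following is a reference sketch of such a preprocessing pipeline using OpenCV; the median-blur kernel, the CLAHE parameters and the Canny thresholds are illustrative values rather than values specified above.
import cv2
import numpy as np

def preprocess_palm_vein(gray):
    """Denoise, enhance contrast and highlight vein paths in a grayscale NIR palm image."""
    denoised = cv2.medianBlur(gray, 5)                            # denoising
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = clahe.apply(denoised)                               # contrast enhancement
    edges = cv2.Canny(enhanced, 50, 150)                           # edge detection of vein paths
    return enhanced, edges

# Stand-in for a captured near-infrared palm image (grayscale, uint8)
gray = (np.random.rand(480, 640) * 255).astype(np.uint8)
enhanced, edges = preprocess_palm_vein(gray)
print(enhanced.shape, edges.shape)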
Still further, the acquiring the palmar vein feature vector in the palmar vein image includes:
wavelet transformation is applied to the acquired palm vein image to strengthen the visibility of the vein pattern, so that an enhanced image is obtained;
processing the reinforced image by using a customized convolutional neural network to generate a feature map; wherein the customized convolutional neural network is trained to specifically identify and emphasize key features of the palm vein, including branch points, shape, and direction of the palm vein;
And extracting key information from the feature map, and constructing a palm vein feature vector.
The following is a detailed description:
1. wavelet transformation was applied to the palmar vein image:
collecting palm vein images: first, a palm vein image of the user is acquired using a suitable imaging device (e.g., a near infrared camera). The image is ensured to be clear so as to accurately capture the details of the palm vein.
Wavelet transformation is applied: wavelet transformation is applied to the acquired palm vein image. Wavelet transformation is an efficient image processing technique for enhancing specific features in an image. In this step it is used to enhance the visibility of the vein pattern, in particular to enhance the fine vein structures in the image, making them easier to identify in subsequent processing.
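The following is a reference sketch of such a wavelet-based enhancement step using the PyWavelets library; the wavelet family, decomposition level and detail gain are assumptions made for illustration.
import numpy as np
import pywt

def wavelet_enhance(gray_image, wavelet='db4', level=2, detail_gain=1.8):
    """Boost the wavelet detail coefficients, where fine vein structures live,
    then reconstruct the enhanced image."""
    coeffs = pywt.wavedec2(gray_image.astype(np.float32), wavelet, level=level)
    approx, details = coeffs[0], coeffs[1:]
    boosted = [tuple(detail_gain * d for d in lvl) for lvl in details]
    enhanced = pywt.waverec2([approx] + boosted, wavelet)
    return np.clip(enhanced, 0, 255).astype(np.uint8)

# Example on a synthetic 256x256 grayscale palm vein image
img = (np.random.rand(256, 256) * 255).astype(np.uint8)
print(wavelet_enhance(img).shape)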
2. Processing the enhanced image using a custom convolutional neural network:
network architecture: a custom Convolutional Neural Network (CNN) is designed and used to process the enhanced image. The network should be optimized specifically for the characteristics of the palmar vein image, with appropriate convolutional, active, and pooling layers.
A custom Convolutional Neural Network (CNN) includes multiple layers, each with specific inputs, outputs, and implementations.
(1) Input layer
Input: the preprocessed and wavelet-transformed palm vein image, assumed to be 256×256 pixels in size.
Output: image data of the same size as the input image.
(2) First convolution layer
Input: image data from the input layer.
Implementation: a convolution operation is performed using a 3×3 convolution kernel and 64 filters.
Output: a feature map of slightly reduced size with a depth of 64.
(3) Second convolution layer
Input: the output of the first convolution layer.
Implementation: a convolution operation is performed using a 3×3 convolution kernel and 128 filters.
Output: a feature map of further reduced size with a depth of 128.
(4) Max pooling layer
Input: the output of the second convolution layer.
Implementation: max pooling is performed using a 2×2 pooling window.
Output: a feature map of reduced size; the depth remains 128.
(5) Multi-scale convolution layer
Input: the output of the pooling layer.
Implementation: convolution operations are performed in parallel using convolution kernels of different sizes (e.g. 3×3, 5×5, 7×7), with 128 filters of each size.
Output: feature maps at multiple scales.
(6) Feature merge layer
Input: all outputs of the multi-scale convolution layer.
Implementation: the feature maps of the different scales are merged together.
Output: the merged feature map.
(7) Intermediate convolution layer
Input: the output of the feature merge layer.
Implementation: convolution is performed using a 3×3 convolution kernel with 256 filters.
Output: a higher-level feature map.
(8) Second max pooling layer
Input: the output of the intermediate convolution layer.
Implementation: max pooling is performed using a 2×2 pooling window.
Output: a feature map of further reduced size.
(9) Dropout layer (regularization)
Input: the output of the second max pooling layer.
Implementation: dropout is applied (e.g. with a dropout rate of 0.5).
Output: the regularized feature map.
(10) Output layer
Input: the output of the Dropout layer.
Implementation: a final convolution is performed using an appropriate number (e.g. 3) of convolution kernels to extract key features.
Output: the final feature maps, reflecting key features in the palm vein image such as vein branch points, shape and direction.
Implementation notes:
the parameters of each layer (e.g. number of filters, size) can be adjusted according to the actual data and the needs when constructing the network.
The network needs to be trained on a large number of palm vein images in order to effectively identify the detailed features of the palm veins.
The loss function and accuracy are monitored during training to evaluate the model performance and make the necessary adjustments.
The following is one custom Convolutional Neural Network (CNN) example code implemented using Python and TensorFlow libraries. This network is designed to process the palm vein image and output a set of feature maps reflecting the key features in the image. Note that this is just one basic example, and more complex structures and tuning may be required in practical applications.
import tensorflow as tf
from tensorflow.keras import layers, models

def create_custom_cnn():
    # Input layer: assume the image is 256x256 size, 3 channels (colour)
    inputs = layers.Input(shape=(256, 256, 3))

    # First convolution layer
    x = layers.Conv2D(64, (3, 3), activation='relu', padding='same')(inputs)
    x = layers.BatchNormalization()(x)

    # Second convolution layer
    x = layers.Conv2D(128, (3, 3), activation='relu', padding='same')(x)
    x = layers.BatchNormalization()(x)

    # Max pooling layer
    x = layers.MaxPooling2D((2, 2))(x)

    # Multi-scale convolution layer ('same' padding keeps the three branches
    # the same spatial size so that they can be concatenated)
    branch1 = layers.Conv2D(128, (3, 3), activation='relu', padding='same')(x)
    branch2 = layers.Conv2D(128, (5, 5), activation='relu', padding='same')(x)
    branch3 = layers.Conv2D(128, (7, 7), activation='relu', padding='same')(x)

    # Feature merge layer
    merged = layers.concatenate([branch1, branch2, branch3], axis=-1)

    # Intermediate convolution layer
    x = layers.Conv2D(256, (3, 3), activation='relu', padding='same')(merged)

    # Second max pooling layer
    x = layers.MaxPooling2D((2, 2))(x)

    # Dropout layer (regularization)
    x = layers.Dropout(0.5)(x)

    # Output layer: assume 3 key feature maps are extracted
    outputs = layers.Conv2D(3, (3, 3), activation='relu', padding='same')(x)

    return models.Model(inputs=inputs, outputs=outputs)

# Create model
model = create_custom_cnn()

# Output model structure
model.summary()
Training a custom Convolutional Neural Network (CNN) whose output is a feature map is a process focused on feature extraction. Unlike conventional classification or regression tasks, the goal here is to train the network to efficiently extract and represent key features in the input image. The following are general steps for training such networks:
(1) Data preparation:
data set: a dataset is prepared containing a plurality of palm vein images. These images should cover different hand types, different lighting conditions and diverse backgrounds.
Pretreatment: the image is subjected to the necessary pre-processing such as resizing, normalization and possibly data enhancement (e.g. rotation, scaling, cropping etc.).
(2) Marking the characteristics:
in order to train the feature extraction network, labeling data about the key features of the palm vein is required. This may include information on the location, branching points, shape, etc. of the vein.
In some cases, specialized software or manual labeling may be used to create such labeling data.
(3) Defining a loss function:
since the goal is feature extraction rather than direct classification, a loss function suitable for feature learning needs to be defined. For example, a pixel-based loss function, such as a Mean Square Error (MSE) loss, may be used to measure the difference between the network-generated feature map and the actual annotated feature.
(4) Designing a network architecture:
a CNN architecture is constructed that includes a plurality of convolutional layers and an active layer. This network should be able to gradually extract higher level feature representations from the original image.
(5) Training process:
the network is trained using the annotated data set. During the training process, the network will learn how to extract useful features from the input images and generate corresponding feature maps.
The values of the loss function are monitored and an optimizer such as Adam or SGD is used to optimize the network parameters to reduce the difference between the predicted feature map and the actual labels.
Cross-validation may be employed to evaluate the performance of the model and adjust network architecture or parameters as necessary.
(6) Evaluation and optimization:
the performance of the trained network is evaluated on a separate test set. Ensuring that the network not only performs well on training data, but also can effectively process new images that are not seen.
And adjusting and optimizing a network structure or a training strategy according to the evaluation result so as to improve the accuracy and the robustness of feature extraction.
Through these steps, a CNN dedicated to extracting palm vein image features can be trained effectively, generating high-quality feature maps and providing reliable data support for the subsequent identification and verification steps.
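The following is a reference sketch of such a training run, reusing the create_custom_cnn function from the example above; the random arrays stand in for the labelled dataset, and the MSE loss, batch size and epoch count are illustrative choices rather than values specified here.
import numpy as np
import tensorflow as tf

# Placeholder data standing in for the labelled dataset described above:
# 32 palm vein images and their annotated target feature maps.
images = np.random.rand(32, 256, 256, 3).astype('float32')
# The target spatial size must match the network output
# (two 2x2 poolings reduce 256x256 to 64x64 in the example network).
targets = np.random.rand(32, 64, 64, 3).astype('float32')

model = create_custom_cnn()  # the custom CNN defined in the example above

# Pixel-based MSE loss between generated and annotated feature maps, Adam optimizer
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3), loss='mse')

# Train and monitor the loss on a held-out split
model.fit(images, targets, batch_size=8, epochs=2, validation_split=0.25)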
3. Extracting key information and constructing feature vectors:
analyzing a characteristic diagram: the CNN generated feature map is analyzed to identify key features of the palmar vein. This includes identifying salient features in the vein pattern, such as particular vein branching structures, shape changes, and patterns of vein orientation.
Analyzing feature maps generated by Convolutional Neural Networks (CNNs) to identify key features of the palmar veins involves a series of image processing and pattern recognition techniques. The purpose of this procedure is to extract specific information about the characteristics of the palmar veins, such as branching pattern, shape and direction, from the feature map. The following steps are detailed:
(1) Understanding the characteristic diagram:
feature maps are the outputs of the CNN, each map representing the response of the network to a particular feature of the input image at a particular layer.
Different feature maps may highlight different information, for example, some maps may represent the edges of a vein, while other maps may highlight the texture or direction of a vein.
(2) Feature positioning:
the critical portion of the vein is located using image processing techniques such as thresholding, edge detection (e.g., using a Canny edge detector), or image segmentation (e.g., threshold-based segmentation or clustering techniques) for each feature map.
Major paths and branch points of the vein are identified, and these areas typically appear as distinct lines or concentrated points on the feature map.
(3) Characteristic interpretation:
the characteristics of these locations are analyzed to determine their characteristics. For example, by measuring the angle of the branch point, the direction of the vein can be derived; by analyzing the thickness and continuity of the lines, the shape and pattern of the vein can be assessed.
These features are further understood using pattern recognition techniques such as shape analysis, geometry and vector field analysis. This may involve calculating statistics of a particular region, such as average pixel intensity, color distribution, or texture pattern.
(4) Feature abstraction and coding:
these interpreted features are converted into numerical or symbolic form for use in constructing feature vectors.
For example, the position of a vein branching point may be encoded as a coordinate value, and the direction of the vein may be encoded as an angle or a direction vector.
Through these steps, the technician can extract detailed and specific information about the palmar veins from the feature map generated by the CNN. This information is critical to constructing feature vectors that reflect the unique palm vein features of the user.
Constructing a feature vector: and constructing a palm vein feature vector according to the key information extracted from the feature map. This vector will be used as a mathematical representation of the unique palm vein features of the user for subsequent identification and comparison steps. The feature vector should contain enough information to accurately describe the user's palm vein pattern while maintaining a modest dimension for efficient processing.
(1) Definition of feature vectors:
The feature vector is a mathematical representation summarizing key features in the palmar vein image. It needs to contain enough information to accurately describe the user's palm vein pattern while maintaining a modest dimension for ease of computation.
(2) Feature extraction and coding:
the key information extracted from the feature map needs to be converted into a digital form. For example, the position of a vein branching point may be encoded as a coordinate value, and the direction of a vein may be represented by an angle or a vector.
These values may be obtained using various feature extraction techniques, such as local feature descriptors or statistical methods.
(3) Vector assembly:
and assembling all the extracted characteristic values into a unified characteristic vector. This vector may contain different types of data such as real numbers, binary values or angles, etc.
It is necessary to ensure that the dimension of this feature vector is both sufficient to represent the palm vein complexity and efficient for subsequent computation and comparison.
(4) Dimension normalization and normalization:
the feature vectors are normalized and standardized to ensure that the different dimensions of the vectors are comparable and suitable for subsequent machine learning or pattern recognition tasks.
In step S101, the important point is to accurately and efficiently collect and extract the features of the palmar vein. This step is critical to the overall palmar vein recognition process, as high quality image acquisition and accurate feature extraction can significantly improve the accuracy and reliability of subsequent recognition.
Step S102: collecting fingerprint images, palm print images and gesture dynamic data of a user, acquiring fingerprint feature vectors in the fingerprint images, acquiring palm print feature vectors in the palm print images and acquiring gesture feature vectors in the gesture dynamic data.
Step S102-1: collecting fingerprint images and extracting features:
a fingerprint of the user is acquired using a fingerprint scanner. The scanner may be of the optical, capacitive or ultrasonic type, each having its own advantages. Ensure that the user's finger is clean and moisture free to obtain a clear fingerprint image. The user is instructed to place the finger correctly and hold it for a period of time until the device acquisition is complete.
Preprocessing the acquired fingerprint image, including enhancing image contrast, denoising, etc. The ridge of the fingerprint is extracted using an image processing algorithm, such as a refinement algorithm. Key feature points (e.g., bifurcation points, end points) are identified and recorded.
Step S102-2: collecting palmprint images and extracting features:
1. collecting palmprint images:
a high resolution scanner or camera is selected. These devices should be able to capture high definition palm images with the details clearly visible. The device should be arranged on a stable platform to ensure no shaking in the image acquisition process. The user is instructed to properly lay the palm in a designated area under the scanner or camera. The palm should be fully extended to ensure that every portion of the palmprint is fully captured.
Adjust the indoor lighting or the device's light source so that light is distributed evenly over the palm, reducing shadows and reflections; this is important for improving image quality. With the user's palm held still, the scanner or camera is activated to capture the palm image. If necessary, multiple captures may be made from different angles or positions to ensure the integrity of the palmprint features.
2. Feature extraction of palm print images:
and preprocessing the acquired palmprint image. Including adjusting the brightness and contrast of the image to improve image quality. If the image contains noise, a denoising algorithm is applied to sharpen the image details.
Image processing algorithms (e.g., edge detection, image segmentation) are used to identify the dominant lines in the palm print. These lines include the major ridges, wrinkles and folds of the palm lines.
In addition to the main lines, attention is paid to detailed features in palm prints, such as point features, line spacing and line branching. These features can be identified and extracted using more elaborate image processing techniques.
The extracted palmprint features are then converted into a numerical representation (e.g., feature vector). This step involves converting the visual features in the image into a digital format that can be used for calculation and comparison.
The coded palmprint characteristic data is stored in the system for subsequent authentication process.
In step S102-2, detailed and accurate palm print image acquisition and feature extraction are critical. Through high-quality image capturing and accurate image analysis, enough information can be extracted from palmprints to perform effective identity verification, which is a key link for the whole biological feature recognition process.
Step S102-3: gesture dynamic data acquisition and feature extraction:
1. gesture dynamic data acquisition:
gesture motion is captured using a device such as a 3D camera or motion capture sensor. These devices are capable of recording the movements of the hands and fingers in space, capturing fine movements and changes in direction. The user is guided to perform a series of predefined gesture actions, such as waving a hand, pointing, grabbing, etc. These actions should cover various hand movements to adequately capture the dynamic characteristics of the user. During the gesture execution process, the device records three-dimensional motion data of the hand in real time, including the position, the moving speed, the moving direction and the like of each finger.
2. Gesture feature extraction
Analyzing the recorded data and extracting the motion trail of the hand and the finger. This includes the starting point, ending point, motion path, and motion direction of the gesture. The speed and acceleration during the gesture are calculated. This involves quantifying the change in hand movement speed, including the speed at which the gesture starts, proceeds, and ends. The relative movement and coordination between the fingers is identified. This includes analyzing how the fingers move in concert, as well as the relative position change between the fingers when performing a particular gesture.
Key gesture features are extracted from the above analysis, such as the overall dynamic pattern of a particular gesture, the duration of the gesture, the complexity of the gesture (e.g., coordinated movement of fingers), etc. These features are encoded into a numerical form that can be used in a subsequent identification process.
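The following is a reference sketch of this kind of dynamic feature extraction, assuming each gesture is recorded as a sequence of 3D hand positions sampled at a fixed frame rate; the frame rate and the particular summary statistics are assumptions made for illustration.
import numpy as np

def gesture_features(trajectory, fps=30.0):
    """trajectory: (T, 3) array of hand/fingertip positions over T frames.
    Returns duration, speed statistics and path length as a small feature set."""
    dt = 1.0 / fps
    steps = np.diff(trajectory, axis=0)                   # frame-to-frame displacement
    speed = np.linalg.norm(steps, axis=1) / dt             # speed curve of the movement
    accel = np.diff(speed) / dt                             # change of speed over time
    return np.array([
        len(trajectory) * dt,                               # duration of the gesture (s)
        speed.mean(), speed.max(),                           # summary of the speed profile
        np.abs(accel).mean(),                                 # smoothness of the movement
        np.linalg.norm(steps, axis=1).sum(),                  # total path length
    ])

# Example: a synthetic two-second gesture sampled at 30 frames per second
traj = np.cumsum(np.random.randn(60, 3) * 0.01, axis=0)
print(gesture_features(traj))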
In step S102-3, the key is to accurately and comprehensively capture the dynamic characteristics of the user gesture and extract key information from it that facilitates personal authentication. These dynamic gesture features, while not as unique as traditional biometric features, can significantly enhance the overall performance and security of the system when used in conjunction with other biometric features (e.g., palm vein, fingerprint, palm print, etc.) for multi-modal authentication.
Still further, the acquiring the fingerprint feature vector in the fingerprint image includes:
preprocessing the acquired fingerprint image, wherein the preprocessing comprises image enhancement and denoising so as to improve the definition and the identifiability of the fingerprint pattern, thereby obtaining the preprocessed fingerprint image.
Analyzing the preprocessed fingerprint image using an improved convolutional neural network specifically designed to identify fingerprint minutiae features in the fingerprint image, including ridge lines, minutiae points, bifurcation points;
Extracting key information from the identified fingerprint detail characteristics, and constructing a preliminary fingerprint characteristic vector representing fingerprint uniqueness;
optimizing and enhancing the preliminary fingerprint feature vector by applying an edge detection and pattern matching technology to generate an enhanced fingerprint feature vector;
and combining the fingerprint reinforcement feature vectors generated by the fingerprint images acquired from different angles or under different conditions to form the fingerprint feature vectors in the fingerprint images.
The following describes in detail the acquisition of fingerprint feature vectors in a fingerprint image:
1. preprocessing a fingerprint image:
image enhancement: image enhancement techniques, such as contrast enhancement and brightness adjustment, are applied to the acquired fingerprint image to improve image quality. The goal of the enhancement is to make the ridges and valleys of the fingerprint more pronounced.
Denoising: a denoising algorithm, such as gaussian filtering or median filtering, is applied to remove noise from the image. This helps to reduce erroneous judgment in subsequent processing.
2. Analysis of fingerprint images using a modified Convolutional Neural Network (CNN):
network design: an improved CNN is designed specifically for identifying minutiae features in a fingerprint image. This network should contain multiple convolution layers, activation layers and pooling layers to effectively extract fingerprint features.
And (3) feature recognition: the training network identifies key minutiae features of the fingerprint, such as ridges, minutiae, and bifurcation points. These features are key elements in fingerprint recognition.
3. Constructing a preliminary fingerprint feature vector:
feature extraction: and extracting key information from the result after CNN processing. This involves identifying and quantifying the ridge, minutiae, and bifurcation points of the fingerprint.
Vector construction: these extracted features are combined into a preliminary fingerprint feature vector. This vector should contain enough information to reflect the uniqueness of the fingerprint.
4. Optimizing and enhancing feature vectors:
edge detection: edge detection techniques are applied to further analyze and refine fingerprint features, particularly edges of ridges.
Pattern matching technology: pattern matching techniques, such as template-based matching or feature point matching, are used to optimize the fingerprint feature vectors and enhance their expressive power.
5. Fusing fingerprint characteristics of different angles:
multi-angle fusion: if any, a more comprehensive fingerprint feature vector is formed in combination with fingerprint images acquired from different angles or under different conditions.
Feature vector integration: the feature vectors from these different sources are integrated together to form the final fingerprint feature vector. This process may involve feature alignment and normalization.
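The following is a reference sketch of the integration step, assuming each capture has already been reduced to a fixed-length fingerprint feature vector; L2 normalization followed by simple averaging is one possible alignment-and-merging choice and is used here only for illustration.
import numpy as np

def integrate_fingerprint_vectors(vectors):
    """Normalize per-capture fingerprint feature vectors and merge them into one."""
    normed = []
    for v in vectors:
        v = np.asarray(v, dtype=np.float32)
        n = np.linalg.norm(v)
        normed.append(v / n if n > 0 else v)      # feature normalization per capture
    merged = np.mean(normed, axis=0)              # integrate captures from different angles
    norm = np.linalg.norm(merged)
    return merged / norm if norm > 0 else merged  # final fingerprint feature vector

# Example: three captures of the same finger, each reduced to a 256-dimensional vector
captures = [np.random.rand(256) for _ in range(3)]
print(integrate_fingerprint_vectors(captures).shape)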
Through the steps, a detailed characteristic vector can be extracted from the fingerprint image, and the vector not only contains the basic characteristics of the fingerprint, but also is subjected to optimization and enhancement processing so as to improve the accuracy and reliability of identification. This process is critical to achieving efficient and accurate fingerprint identification.
The following details the specific implementation of the improved convolutional neural network:
1. Input layer:
Function: receives the preprocessed fingerprint image as input.
Input: the preprocessed fingerprint image, assumed to be 256×256 pixels in size.
Output: image data of the same size as the input image.
2. First convolution layer:
Function: captures fine features in the image using a small convolution kernel.
Input: image data from the input layer.
Implementation: a 3×3 convolution kernel with 64 filters.
Output: feature maps, one generated for each filter.
3. Second convolution layer:
Function: extracts further features.
Input: the output feature maps of the first convolution layer.
Implementation: a 3×3 convolution kernel with 128 filters.
Output: additional feature maps highlighting more complex features.
4. Pooling layer:
Function: reduces the size of the feature maps and improves the generalization ability of the model.
Input: the output feature maps of the second convolution layer.
Implementation: max pooling with a 2×2 pooling window.
Output: feature maps halved in size.
5. Third convolution layer:
Function: further refines and highlights key features.
Input: the output of the pooling layer.
Implementation: a 3×3 convolution kernel with 256 filters.
Output: higher-level feature maps.
6. Activation layer:
Function: introduces non-linearity so that the network can learn more complex features.
Input: the output of the third convolution layer.
Implementation: the ReLU (Rectified Linear Unit) activation function.
Output: the activated feature maps.
7. Fourth convolution layer:
Function: performs the final feature extraction, focusing on high-level features.
Input: the output of the activation layer.
Implementation: a 3×3 convolution kernel with 512 filters.
Output: high-level feature maps.
8. Output layer:
Function: outputs the final feature representation.
Input: the output of the fourth convolution layer.
Implementation: an appropriate convolution layer or fully connected layer, depending on the needs of the subsequent processing.
Output: the final feature representation of the fingerprint, which can be used for subsequent feature comparison.
The following is a reference code for a modified Convolutional Neural Network (CNN) implemented using Python and TensorFlow libraries. This network is designed to process fingerprint images and extract key fingerprint features.
import tensorflow as tf
from tensorflow.keras import layers, models

def create_improved_cnn():
    model = models.Sequential()
    # Input layer: assume the image is 256x256 size, grayscale (single channel)
    model.add(layers.Input(shape=(256, 256, 1)))
    # First convolution layer
    model.add(layers.Conv2D(64, (3, 3), activation='relu'))
    # Second convolution layer
    model.add(layers.Conv2D(128, (3, 3), activation='relu'))
    # Pooling layer
    model.add(layers.MaxPooling2D((2, 2)))
    # Third convolution layer
    model.add(layers.Conv2D(256, (3, 3), activation='relu'))
    # Activation layer
    model.add(layers.Activation('relu'))
    # Fourth convolution layer
    model.add(layers.Conv2D(512, (3, 3), activation='relu'))
    # Output layer: if a fixed-length feature vector is required, a fully connected layer can be added
    model.add(layers.Flatten())                       # flattening layer
    model.add(layers.Dense(512, activation='relu'))   # fully connected layer, assumed output length of 512
    return model

# Create model
model = create_improved_cnn()

# Output model structure
model.summary()
Feature recognition training:
during the training process, the modified convolutional neural network needs to be trained to identify key minutiae features of the fingerprint, such as ridges, minutiae and bifurcation points.
-performing supervised learning using annotated fingerprint images, wherein the annotation information indicates the locations of the ridge lines, minutiae points and bifurcation points.
Training of the network with appropriate loss functions, such as cross entropy loss, and optimizers, such as Adam.
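The following is a reference sketch of such a training setup. The pixel-wise minutiae maps, the sigmoid map output head and the array shapes are assumptions made for the sketch rather than details specified above.
import numpy as np
from tensorflow.keras import layers, models

# Assumed variant of the improved CNN with a sigmoid map output, so that pixel-wise
# binary cross-entropy against annotated minutiae maps can be used as the loss.
def create_minutiae_cnn():
    return models.Sequential([
        layers.Input(shape=(256, 256, 1)),
        layers.Conv2D(64, (3, 3), activation='relu', padding='same'),
        layers.Conv2D(128, (3, 3), activation='relu', padding='same'),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(256, (3, 3), activation='relu', padding='same'),
        layers.Conv2D(3, (1, 1), activation='sigmoid'),  # ridge / minutia / bifurcation maps
    ])

# Placeholder arrays standing in for annotated fingerprint images and minutiae maps
images = np.random.rand(16, 256, 256, 1).astype('float32')
labels = (np.random.rand(16, 128, 128, 3) > 0.9).astype('float32')

model = create_minutiae_cnn()
model.compile(optimizer='adam', loss='binary_crossentropy')
model.fit(images, labels, batch_size=4, epochs=1)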
Through the steps above, a convolutional neural network dedicated to extracting fingerprint features can be constructed and trained; it can identify and emphasize the most important details in fingerprint images, providing an accurate feature representation for subsequent fingerprint identification.
Still further, the acquiring the palmprint feature vector in the palmprint image includes:
preprocessing the acquired palm print image, wherein the preprocessing comprises image enhancement and filtering to improve the definition of palm print lines, so as to obtain a preprocessed palm print image;
analyzing the preprocessed palm print image by using edge detection and image segmentation technology, and identifying palm print detail characteristics of the palm print, wherein the palm print detail characteristics comprise ridge lines, bifurcation points and termination points;
constructing a preliminary palm print feature vector containing palm print key information according to the extracted palm print detail features;
and further analyzing and optimizing the preliminary palm print feature vector by using a pattern recognition technology based on deep learning to generate the palm print feature vector.
Step 1, preprocessing the palmprint image
Image enhancement and filtering:
the palm print image is preprocessed using an image processing library (e.g., openCV or tillow).
First, image enhancement techniques, such as contrast adjustment, are applied to increase the sharpness of the palmprint.
Filtering techniques, such as gaussian blur or median filtering, are then applied to smooth the image and remove noise.
Step 2, applying edge detection and image segmentation
Identifying palmprint detail features
Edge detection algorithms (e.g., canny edge detector) are used to identify the ridge of the palm print.
Image segmentation techniques (such as threshold segmentation or contour-based segmentation) are applied to identify bifurcation points and end points.
Step 3, constructing a preliminary palmprint feature vector
Extracting key information and constructing vector
And analyzing the image after edge detection and segmentation, and extracting the ridge line, bifurcation point and termination point of the palm print.
These features are encoded into a feature vector. This may involve calculating the position, angle, etc. of the feature.
Step 4, feature vector optimization using deep learning
Pattern recognition based on deep learning:
the preliminary feature vectors are analyzed and optimized using a deep learning model (e.g., convolutional neural network).
This involves training a network to further extract and optimize palmprint features.
The following is an exemplary code for the four steps described above:
import cv2
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Flatten

# Palm print image preprocessing
def preprocess_palmprint(image_path):
    """Preprocess the palm print image."""
    # Read the image and convert it to grayscale
    image = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    # Image enhancement: contrast enhancement
    enhanced_image = cv2.equalizeHist(image)
    # Filtering: Gaussian filtering for denoising
    filtered_image = cv2.GaussianBlur(enhanced_image, (5, 5), 0)
    return filtered_image

# Edge detection and image segmentation
def detect_palmprint_features(image):
    """Extract palm print features using edge detection and image segmentation techniques."""
    # Edge detection: Canny algorithm
    edges = cv2.Canny(image, 100, 200)
    # Image segmentation: thresholding
    ret, segmented = cv2.threshold(edges, 50, 255, cv2.THRESH_BINARY)
    return segmented

# Construction of the preliminary palm print feature vector
def construct_initial_feature_vector(segmented_image):
    """Construct a feature vector from the segmented image."""
    # Find contours
    contours, hierarchy = cv2.findContours(segmented_image, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
    # Simple example: use the number of contours as a feature
    feature_vector = np.array([len(contours)])
    return feature_vector

# Create the deep learning model
def create_deep_learning_model(input_shape):
    """Create a simple deep learning model to optimize the feature vector."""
    model = Sequential()
    model.add(Flatten(input_shape=input_shape))
    model.add(Dense(128, activation='relu'))
    model.add(Dense(64, activation='relu'))
    model.add(Dense(1, activation='sigmoid'))  # assume the output vector length is 1
    return model

# Main function: example
def main():
    # Image preprocessing
    preprocessed_image = preprocess_palmprint('path_to_your_palmprint_image.jpg')
    # Feature extraction
    segmented_image = detect_palmprint_features(preprocessed_image)
    feature_vector = construct_initial_feature_vector(segmented_image)
    # Create and train the model (only model creation is shown here)
    model = create_deep_learning_model(feature_vector.shape)
    model.compile(optimizer='adam', loss='binary_crossentropy')
    # Assuming training data and labels are available, model training could be performed here
    # model.fit(training_data, labels, epochs=10)
    print("Feature vector:", feature_vector)

if __name__ == "__main__":
    main()
Still further, the acquiring the gesture feature vector in the gesture dynamic data includes:
analyzing the collected gesture dynamic data, and identifying key gesture action characteristics, wherein the key gesture action characteristics comprise relative position change among fingers, duration time of gestures and speed curves of movement;
According to the key gesture motion characteristics, a preliminary gesture feature vector reflecting gesture characteristics is constructed, wherein the preliminary gesture feature vector comprises a gesture motion mode and dynamic characteristics;
processing the preliminary gesture feature vector by applying a time sequence analysis technology to capture the mode and rule of gesture motion changing along with time;
and optimizing the mode and rule of the captured gesture motion along with the change of time based on the long-short-time memory network, and generating a gesture feature vector.
In order to obtain the gesture feature vector of the gesture dynamic data in the palm vein recognition method, an optimization process comprising gesture data analysis, feature vector construction, time sequence analysis and long-short-term memory network (LSTM) based is required to be developed. The following is a detailed description of each step, and how these steps are implemented using Python code.
Step 1, analysis of gesture dynamic data
Identifying key gesture motion features:
using sensor data (such as data captured by an accelerometer or a depth camera) to record dynamic data of the gesture.
-analyzing the gesture data, extracting key features such as the relative position change between the fingers, the duration of the gesture and the speed profile of the movement.
Step 2, constructing a preliminary gesture feature vector
Extracting motion modes and dynamic characteristics of gestures:
-constructing a preliminary feature vector comprising gesture motion patterns and dynamics based on the analyzed gesture features.
The feature vector may comprise values of speed, acceleration, duration, etc. of the gesture.
Step 3, applying time sequence analysis technology
Processing gesture feature vectors:
processing the gesture feature vectors using a time-series analysis method (such as an autoregressive model) to capture the patterns and regularities of the gesture motion over time (a combined sketch of steps 1 to 3 is given after step 4 below).
Step 4 optimization based on Long short time memory network (LSTM)
Optimizing gesture features using LSTM:
-creating an LSTM network for analyzing and optimizing gesture feature vectors.
The LSTM network is able to process time series data, optimizing the understanding of gesture dynamics.
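Steps 1 to 3 can be sketched with a small amount of NumPy code before the LSTM stage. This is a minimal illustration that assumes the raw gesture data are 3-D fingertip position samples taken at a fixed sampling rate; the chosen features (duration, speed-profile statistics, change of inter-finger distance) and the first-order autoregressive fit are illustrative assumptions rather than forms prescribed by the method:
import numpy as np

def gesture_feature_vector(positions, dt=0.01):
    # Steps 1-2: extract key motion features from raw gesture samples.
    # `positions` is a (T, 2, 3) array: T time steps, 3-D coordinates of two
    # fingertips (an assumed sensor format). Returns a preliminary feature
    # vector with duration, speed statistics and the change of the relative
    # distance between the fingers.
    rel = np.linalg.norm(positions[:, 0] - positions[:, 1], axis=1)       # inter-finger distance
    vel = np.linalg.norm(np.diff(positions[:, 0], axis=0), axis=1) / dt   # fingertip speed profile
    acc = np.diff(vel) / dt
    duration = positions.shape[0] * dt
    return np.array([duration, vel.mean(), vel.max(), acc.mean(), rel.max() - rel.min()])

def ar1_residual_energy(series):
    # Step 3: a very small time-series analysis - fit a first-order autoregressive
    # model x_t = a * x_(t-1) by least squares and use the residual energy as an
    # extra temporal-pattern feature.
    x_prev, x_next = series[:-1], series[1:]
    a = np.dot(x_prev, x_next) / (np.dot(x_prev, x_prev) + 1e-12)
    return float(np.mean((x_next - a * x_prev) ** 2))

# Example with synthetic samples (100 time steps, 2 fingertips, 3-D points)
positions = np.cumsum(np.random.randn(100, 2, 3) * 0.01, axis=0)
speed = np.linalg.norm(np.diff(positions[:, 0], axis=0), axis=1) / 0.01
preliminary_vector = np.append(gesture_feature_vector(positions), ar1_residual_energy(speed))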
The following is reference code for the LSTM-based optimization of step 4, using the Python and TensorFlow libraries:
import numpy as np
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

# hand_gesture_data is a NumPy array containing gesture dynamic data.
# The data format is [number of samples, number of time steps, number of features].
hand_gesture_data = np.random.rand(100, 10, 3)  # e.g. 100 samples, 10 time steps each, 3 features each

# Create an LSTM model for gesture feature vector optimization
def create_lstm_model(input_shape):
    model = Sequential()
    model.add(LSTM(64, return_sequences=True, input_shape=input_shape))
    model.add(LSTM(64))
    model.add(Dense(32, activation='relu'))
    model.add(Dense(input_shape[1]))  # output dimension equals the number of features per time step
    model.compile(optimizer='adam', loss='mean_squared_error')
    return model

# Build the LSTM model
input_shape = hand_gesture_data.shape[1:]  # shape of the input data (time steps, features)
model = create_lstm_model(input_shape)

# Train the model
# Suppose hand_gesture_labels is the label or target output corresponding to the gesture data
# model.fit(hand_gesture_data, hand_gesture_labels, epochs=10)

# Use the model for prediction or feature extraction
# predicted_features = model.predict(hand_gesture_data)
Note that this is a basic example. In practical applications, the data preprocessing step needs to be adjusted to the specific type and format of the gesture data, and the structure and parameters of the LSTM model should be tuned to the task requirements. In addition, training the model requires a large amount of annotated gesture data.
In step S102, emphasis is placed on capturing and analyzing various biometric data using different acquisition devices and specialized algorithms. Efficient extraction of each feature is critical for subsequent data fusion and authentication. By combining the multi-modal features of fingerprint, palm print and gesture dynamics, the recognition capability and security of the system can be significantly improved.
Step S103: and fusing the palm vein feature vector with the fingerprint feature vector, the palm print feature vector and the gesture feature vector to obtain the comprehensive feature representation of the user.
In step S103 of the palm vein recognition method, the focus is on fusing the different biometric feature vectors into one comprehensive feature representation. This is a key step, as it provides the complete data basis for final identity verification.
First, different types of biometric vectors have been obtained from steps S101 and S102. In step S101, a palm vein image of the user is acquired and processed, and a palm vein feature vector is obtained. Also, in step S102, not only the fingerprint and palm print images are collected, but also gesture dynamic data of the user are collected, from which corresponding feature vectors are extracted. These feature vectors represent unique biological features of the user, including palm vein, fingerprint, palm print, and gesture dynamics, respectively.
Next, in step S103, the respective independent feature vectors are combined into a single, integrated feature representation. This process involves a number of important sub-steps and considerations. First, it is necessary to ensure that the data formats and scales of the different feature vectors are consistent in order to facilitate efficient fusion. For this purpose, some pre-processing, such as normalization and standardization, may be required to ensure that the different data are comparable.
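A minimal sketch of such a preprocessing step is shown below; the vector names and the choice between min-max scaling and z-score standardization are illustrative assumptions, not requirements of the method:
import numpy as np

def normalize_feature_vector(vec, eps=1e-8):
    # Min-max scale a feature vector to [0, 1] so that vectors from
    # different modalities become comparable before fusion.
    vec = np.asarray(vec, dtype=np.float64)
    return (vec - vec.min()) / (vec.max() - vec.min() + eps)

def standardize_feature_vector(vec, eps=1e-8):
    # Z-score standardization: zero mean, unit variance.
    vec = np.asarray(vec, dtype=np.float64)
    return (vec - vec.mean()) / (vec.std() + eps)

# Hypothetical feature vectors produced by steps S101 and S102
palm_vein_vec   = normalize_feature_vector(np.random.rand(128))
fingerprint_vec = normalize_feature_vector(np.random.rand(128))
palmprint_vec   = normalize_feature_vector(np.random.rand(128))
gesture_vec     = normalize_feature_vector(np.random.rand(128))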
Next, an advanced fusion algorithm may be employed, for example a machine learning based approach such as a deep learning model, to integrate the different features. This fusion process involves not only simple data consolidation but also discovering the associations and complementary information between different features. For example, palm vein and fingerprint features may provide complementary information in some respects, while gesture dynamic data adds an additional behavioral feature dimension.
Finally, after processing by a fusion algorithm, a comprehensive characteristic representation containing all biological characteristic information is obtained. The comprehensive representation not only reflects the unique biological characteristics of the user, but also fuses the advantages of different types of characteristics, and enhances the accuracy and reliability of the whole recognition system.
Further, the fusing the palm vein feature vector with the fingerprint feature vector, the palm print feature vector and the gesture feature vector to obtain a comprehensive feature representation of the user includes:
the palm vein feature vector x_1, the fingerprint feature vector x_2, the palm print feature vector x_3 and the gesture feature vector x_4 are each subjected to a nonlinear mapping f(·) according to formula 1, wherein x_i is the i-th feature vector and f(x_i) is the i-th feature vector after mapping; for example, x_1 is the palm vein feature vector and x_2 is the fingerprint feature vector;
the weight w_i of each mapped feature vector f(x_i) is calculated according to formula 2 from its information entropy, wherein α is an adjustment parameter that can be obtained from experimental data or set directly according to expert knowledge, and H(f(x_i)) is the information entropy of f(x_i);
the user's comprehensive feature representation F is calculated according to formula 3 as a weighted combination of the mapped feature vectors, wherein β_i is a weight coefficient that can be obtained from experimental data or set directly according to expert knowledge, and N is the number of feature vectors, where N may be 4.
The information entropy of a feature vector x_i is calculated as H(x_i) = -Σ_j p_ij · log(p_ij), wherein H(x_i) is the information entropy of the feature vector x_i, x_ij is the j-th element of the feature vector x_i, and p_ij is the probability of the j-th element.
The fusion method combines nonlinear feature mapping, weight distribution based on information entropy and consideration of interaction between features. This approach not only takes into account the uniqueness and relevance of each feature, but also the complex interactions that may exist between different features. In this way, a more comprehensive and accurate representation of the integrated features can be obtained.
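A minimal Python sketch of this fusion scheme is given below. Since the exact forms of formulas 1 to 3 are not reproduced in the text, the sketch assumes tanh as the nonlinear mapping, a softmax over α-scaled entropies for the weights w_i, and a β_i·w_i-weighted sum for the comprehensive representation F; these concrete choices and the helper names are illustrative assumptions, not the patented formulas:
import numpy as np

def entropy(vec, bins=16, eps=1e-12):
    # Information entropy of a feature vector, estimated from a histogram
    # of its elements (the element "probabilities" p_ij).
    hist, _ = np.histogram(vec, bins=bins)
    p = hist / (hist.sum() + eps)
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

def fuse_features(vectors, alpha=1.0, betas=None):
    # Formula 1 (assumed form): element-wise tanh as the nonlinear mapping.
    mapped = [np.tanh(np.asarray(v, dtype=np.float64)) for v in vectors]
    # Formula 2 (assumed form): softmax over alpha-scaled information entropies.
    ent = np.array([entropy(m) for m in mapped])
    w = np.exp(alpha * ent) / np.exp(alpha * ent).sum()
    # Formula 3 (assumed form): beta_i * w_i weighted sum of the mapped vectors
    # (all vectors are assumed to have been brought to the same length).
    if betas is None:
        betas = np.ones(len(vectors))
    return sum(b * wi * m for b, wi, m in zip(betas, w, mapped))

# Illustrative inputs; in practice these come from steps S101 and S102.
rng = np.random.default_rng(0)
palm_vein_vec, fingerprint_vec, palmprint_vec, gesture_vec = (rng.random(128) for _ in range(4))
F = fuse_features([palm_vein_vec, fingerprint_vec, palmprint_vec, gesture_vec])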
In general, step S103 plays a crucial role in the palmar vein recognition method. By efficiently fusing the various features from palm vein, fingerprint, palm print, and gesture dynamics, it provides powerful support for user authentication to achieve higher accuracy and security.
Step S104: and comparing the comprehensive characteristic representation with target characteristic representations stored in a database, and calculating a similarity score between the comprehensive characteristic representation and the target characteristic representation.
In the palm vein recognition method of step S104, a key task is to compare the comprehensive feature representation of the user with the target feature representations stored in the database and calculate a similarity score between the two. This step is critical in the identification process, as it directly determines the accuracy of the identification.
First, consider that in step S103 a comprehensive representation of the user's features has already been generated, namely a composite data structure fusing the user's palm vein, fingerprint, palm print, and gesture features. In step S104, this integrated feature representation is first compared with the target feature representations pre-stored in a database. These target feature representations, which were acquired and processed in advance, are typically stored in the database during the user registration phase. If the comprehensive feature representation is identical to a target feature representation pre-stored in the database, the user corresponding to that target feature representation can be determined to be the target user.
If they are not identical, a specific algorithm is used to calculate the similarity score. This process can involve not only traditional distance measures (such as Euclidean distance or Manhattan distance), but also more complex similarity calculation methods such as angle-based cosine similarity or model-based similarity assessment. The choice among these methods depends on the nature of the feature data and the desired recognition accuracy.
When calculating the similarity score, multiple factors need to be considered together. For example, certain features may be more important than others, or more reliable under certain conditions. The system may therefore assign different weights to different features or apply more complex strategies to resolve possible conflicts between features.
Once the similarity score is calculated, this score will be used to determine the identity of the user. The higher the score, the more likely the user-provided feature representation is to be similar to the target feature representation in the database, the more likely the verification of the user's identity is successful.
Still further, the computing a similarity score between the composite feature representation and the target feature representation includes:
the similarity score S between the comprehensive feature representation F and the target feature representation T is calculated according to a formula based on their cosine similarity,
wherein cos(F, T) denotes the cosine similarity between the comprehensive feature representation F and the target feature representation T and is used to measure the consistency of the directions of the two vectors; γ is an adjustable scale parameter that can be obtained through experimental data; λ_1 and λ_2 are weight coefficients that can be specified by experimental data or expert knowledge.
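As a minimal sketch of such a score, the code below computes cos(F, T) and squashes it through a λ-weighted logistic function of the γ-scaled similarity so that S falls in (0, 1). The exact combination used by the method is not reproduced here, so this particular form, the parameter defaults, and the hypothetical template database in the usage example are assumptions:
import numpy as np

def cosine_similarity(f, t, eps=1e-12):
    # cos(F, T): directional consistency of the two representations.
    f, t = np.asarray(f, dtype=np.float64), np.asarray(t, dtype=np.float64)
    return float(np.dot(f, t) / (np.linalg.norm(f) * np.linalg.norm(t) + eps))

def similarity_score(f, t, gamma=1.0, lam=1.0):
    # Assumed form: lambda-weighted logistic squashing of the gamma-scaled
    # cosine similarity, mapping the score into (0, 1).
    c = cosine_similarity(f, t)
    return lam / (1.0 + np.exp(-gamma * c))

# Example: compare the fused representation F against every enrolled template.
# templates = {"user_a": np.load("user_a.npy"), ...}   # hypothetical database
# scores = {uid: similarity_score(F, tpl) for uid, tpl in templates.items()}
# best_match = max(scores, key=scores.get)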
In general, step S104 is a fine and complex process involving in-depth data analysis and intelligent decision-making. This step ensures that the system can accurately identify and verify the identity of the user, which is an indispensable part of the whole palm vein identification method.
Step S105: and identifying the identity information of the user according to the similarity score.
Step S105 is a key element in the palmar vein recognition method, which involves a process of recognizing the identity of the user based on the similarity score calculated previously. In this step, it will be determined whether the user is a true target user using the similarity score obtained in step S104.
First, a threshold value for distinguishing successful authentication from failed authentication needs to be determined. This threshold is set based on system security requirements and previous empirical data, and may be fixed or dynamically adjusted depending on the security requirements of the particular application scenario.
Next, the similarity score calculated in step S104 is compared with this threshold value. If the similarity score is greater than or equal to the threshold, the system will determine that the authentication of the user was successful; if the similarity score is below the threshold, the authentication is deemed to fail and the user identity cannot be confirmed.
In addition to simply comparing the similarity score to a threshold, more complex logic may be employed in the decision process of identity verification. For example, if the similarity score is close to a threshold, the system may require the user to provide additional information or perform a secondary authentication, such as entering a password, answering a security question, or using another biometric authentication.
Finally, once an authentication decision is made, the system will perform the corresponding operation. If the user is successfully authenticated, they will gain access to the system or service; if the verification fails, access may be denied, an unsuccessful attempt may be recorded, and a security alarm or other protective measures may be triggered.
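A minimal illustration of this decision logic is given below; the threshold value, the margin that triggers secondary verification, and the fallback actions are assumptions for the example, not values prescribed by the method:
def verify_identity(score, threshold=0.80, margin=0.05):
    # Decide the verification outcome from the similarity score.
    # A score at or above the threshold is accepted; a score just below it
    # (within the margin) triggers an extra factor such as a password or a
    # second biometric; anything lower is rejected.
    if score >= threshold:
        return "accept"
    if score >= threshold - margin:
        return "secondary_verification"
    return "reject"

# Example outcomes for a few hypothetical scores
for s in (0.91, 0.78, 0.40):
    print(s, "->", verify_identity(s))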
In general, step S105 is not only a process of judging the identity of the user according to the similarity score, but also a policy for flexibly handling the authentication result according to circumstances, so as to balance the security of the system with convenience for the user. This step is the final phase of the overall palm vein identification process and is critical to the security and efficiency of the system as a whole.
In the above embodiment, a method for identifying a palmar vein is provided, and accordingly, a palmar vein identification system is also provided. Referring to fig. 2, a flowchart of an embodiment of a palmar vein recognition system is shown. Since this embodiment, i.e. the second embodiment, is substantially similar to the method embodiment, the description is relatively simple, and reference should be made to the description of the method embodiment for relevant points. The device embodiments described below are merely illustrative.
A second embodiment of the present application provides a palmar vein recognition system, including:
a palm vein image processing unit 201, configured to collect a palm vein image of a user, and obtain a palm vein feature vector in the palm vein image;
the multi-feature processing unit 202 is configured to collect a fingerprint image, a palm print image and gesture dynamic data of a user, and obtain a fingerprint feature vector in the fingerprint image, obtain a palm print feature vector in the palm print image, and obtain a gesture feature vector in the gesture dynamic data;
a fusion unit 203, configured to fuse the palm vein feature vector with the fingerprint feature vector, the palm print feature vector, and the gesture feature vector, to obtain a comprehensive feature representation of the user;
A comparison unit 204, configured to compare the integrated feature representation with target feature representations stored in a database, and calculate a similarity score between the integrated feature representation and the target feature representation;
and the identifying unit 205 is configured to identify identity information of the user according to the similarity score.
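A minimal sketch of how these units could be composed in code is given below. The class, parameter, and callable names are illustrative assumptions: the extractor, fusion, and scoring callables stand in for the processing of units 201 to 204 and are not a prescribed API.
class PalmVeinRecognitionSystem:
    """Toy composition of the five units described above."""

    def __init__(self, extractors, fuse, score, database, threshold=0.8):
        self.extractors = extractors  # units 201/202: one feature-extraction callable per modality
        self.fuse = fuse              # unit 203: fusion into the comprehensive representation
        self.score = score            # unit 204: similarity between representations
        self.database = database      # user id -> stored target feature representation
        self.threshold = threshold    # unit 205: decision threshold

    def identify(self, samples):
        # samples: palm vein image, fingerprint image, palm print image, gesture data
        vectors = [extract(sample) for extract, sample in zip(self.extractors, samples)]
        comprehensive = self.fuse(vectors)
        scores = {uid: self.score(comprehensive, tpl) for uid, tpl in self.database.items()}
        best_uid = max(scores, key=scores.get)
        if scores[best_uid] >= self.threshold:
            return best_uid, scores[best_uid]
        return None, scores[best_uid]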
A fourth embodiment of the present application provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of:
acquiring a palm vein feature vector in a palm vein image of a user;
acquiring fingerprint feature vectors in a fingerprint image of a user, acquiring palm print feature vectors in a palm print image of the user, and acquiring gesture feature vectors in gesture dynamic data of the user;
fusing the palm vein feature vector with the fingerprint feature vector, the palm print feature vector and the gesture feature vector to obtain a comprehensive feature representation of the user;
comparing the comprehensive feature representation with target feature representations stored in a database, and calculating a similarity score between the comprehensive feature representation and the target feature representation;
and identifying the identity information of the user according to the similarity score.
While the preferred embodiment has been described, it is not intended to limit the invention thereto, and any person skilled in the art may make variations and modifications without departing from the spirit and scope of the present invention, so that the scope of the present invention shall be defined by the claims of the present application.

Claims (9)

1. A method for palm vein identification, comprising:
collecting a palm vein image of a user, and obtaining a palm vein feature vector in the palm vein image;
collecting fingerprint images, palm print images and gesture dynamic data of a user, obtaining fingerprint feature vectors in the fingerprint images, obtaining palm print feature vectors in the palm print images and obtaining gesture feature vectors in the gesture dynamic data;
fusing the palm vein feature vector with the fingerprint feature vector, the palm print feature vector and the gesture feature vector to obtain a comprehensive feature representation of the user;
comparing the comprehensive feature representation with target feature representations stored in a database, and calculating a similarity score between the comprehensive feature representation and the target feature representation;
and identifying the identity information of the user according to the similarity score.
2. The method of claim 1, wherein the acquiring the palm vein feature vector in the palm vein image comprises:
wavelet transformation is applied to the acquired palm vein image to strengthen the visibility of the vein pattern, so that an enhanced image is obtained;
processing the reinforced image by using a customized convolutional neural network to generate a feature map; wherein the customized convolutional neural network is trained to specifically identify and emphasize key features of the palm vein, including branch points, shape, and direction of the palm vein;
and extracting key information from the feature map, and constructing a palm vein feature vector.
3. The method of claim 1, wherein the acquiring a fingerprint feature vector in the fingerprint image comprises:
preprocessing the acquired fingerprint image, wherein the preprocessing comprises image enhancement and denoising so as to improve the definition and the identifiability of the fingerprint pattern, thereby obtaining a preprocessed fingerprint image;
analyzing the preprocessed fingerprint image using an improved convolutional neural network specifically designed to identify fingerprint minutiae features in the fingerprint image, including ridge lines, minutiae points, bifurcation points;
Extracting key information from the identified fingerprint detail characteristics, and constructing a preliminary fingerprint characteristic vector representing fingerprint uniqueness;
optimizing and enhancing the preliminary fingerprint feature vector by applying an edge detection and pattern matching technology to generate an enhanced fingerprint feature vector;
and combining the fingerprint reinforcement feature vectors generated by the fingerprint images acquired from different angles or under different conditions to form the fingerprint feature vectors in the fingerprint images.
4. The method of claim 1, wherein the acquiring the palm print feature vector in the palm print image comprises:
preprocessing the acquired palm print image, wherein the preprocessing comprises image enhancement and filtering to improve the definition of palm print lines, so as to obtain a preprocessed palm print image;
analyzing the preprocessed palm print image by using edge detection and image segmentation technology, and identifying palm print detail characteristics of the palm print, wherein the palm print detail characteristics comprise ridge lines, bifurcation points and termination points;
constructing a preliminary palm print feature vector containing palm print key information according to the extracted palm print detail features;
and further analyzing and optimizing the preliminary palm print feature vector by using a pattern recognition technology based on deep learning to generate the palm print feature vector.
5. The method of claim 1, wherein the obtaining the gesture feature vector in the gesture dynamic data comprises:
analyzing the collected gesture dynamic data, and identifying key gesture action characteristics, wherein the key gesture action characteristics comprise relative position change among fingers, duration time of gestures and speed curves of movement;
according to the key gesture motion characteristics, a preliminary gesture feature vector reflecting gesture characteristics is constructed, wherein the preliminary gesture feature vector comprises a gesture motion mode and dynamic characteristics;
processing the preliminary gesture feature vector by applying a time-series analysis technique to capture the patterns and regularities of gesture motion over time;
and optimizing the captured patterns and regularities of gesture motion over time based on a long short-term memory network to generate the gesture feature vector.
6. The method for recognizing a palmvein as recited in claim 1, wherein the fusing the palmvein feature vector with the fingerprint feature vector, the palmprint feature vector, and the gesture feature vector to obtain the comprehensive feature representation of the user includes:
subjecting the palm vein feature vector x_1, the fingerprint feature vector x_2, the palm print feature vector x_3 and the gesture feature vector x_4 to a nonlinear mapping according to formula 1, wherein x_i is the i-th feature vector and f(x_i) is the i-th feature vector after mapping;
calculating the weight w_i of each mapped feature vector f(x_i) according to formula 2, wherein α is an adjustment parameter and H(f(x_i)) is the information entropy of f(x_i);
and calculating the user's comprehensive feature representation F according to formula 3, wherein β_i is a weight coefficient and N is the number of feature vectors.
7. The method of palm vein recognition according to claim 1, wherein the calculating a similarity score between the integrated feature representation and a target feature representation comprises:
calculating the similarity score S between the comprehensive feature representation F and the target feature representation T according to a formula based on their cosine similarity,
wherein cos(F, T) denotes the cosine similarity between the comprehensive feature representation F and the target feature representation T and is used to measure the consistency of the directions of the two vectors; γ is an adjustable scale parameter; λ_1 and λ_2 are weight coefficients.
8. A palmar vein recognition system, comprising:
the palm vein image processing unit is used for acquiring a palm vein image of a user and acquiring a palm vein feature vector in the palm vein image;
The multi-feature processing unit is used for collecting fingerprint images, palm print images and gesture dynamic data of a user, acquiring fingerprint feature vectors in the fingerprint images, acquiring palm print feature vectors in the palm print images and acquiring gesture feature vectors in the gesture dynamic data;
the fusion unit is used for fusing the palm vein feature vector with the fingerprint feature vector, the palm print feature vector and the gesture feature vector to obtain comprehensive feature representation of a user;
a comparison unit, configured to compare the integrated feature representation with a target feature representation stored in a database, and calculate a similarity score between the integrated feature representation and the target feature representation;
and the identification unit is used for identifying the identity information of the user according to the similarity score.
9. A computer readable storage medium having stored thereon a computer program, characterized in that the program, when executed by a processor, realizes the steps of:
acquiring a palm vein feature vector in a palm vein image of a user;
acquiring fingerprint feature vectors in a fingerprint image of a user, acquiring palm print feature vectors in a palm print image of the user, and acquiring gesture feature vectors in gesture dynamic data of the user;
Fusing the palm vein feature vector with the fingerprint feature vector, the palm print feature vector and the gesture feature vector to obtain a comprehensive feature representation of the user;
comparing the comprehensive feature representation with target feature representations stored in a database, and calculating a similarity score between the comprehensive feature representation and the target feature representation;
and identifying the identity information of the user according to the similarity score.
CN202410251729.XA 2024-03-06 2024-03-06 Palm vein recognition method, system and storage medium Withdrawn CN117854163A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410251729.XA CN117854163A (en) 2024-03-06 2024-03-06 Palm vein recognition method, system and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410251729.XA CN117854163A (en) 2024-03-06 2024-03-06 Palm vein recognition method, system and storage medium

Publications (1)

Publication Number Publication Date
CN117854163A true CN117854163A (en) 2024-04-09

Family

ID=90534892

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410251729.XA Withdrawn CN117854163A (en) 2024-03-06 2024-03-06 Palm vein recognition method, system and storage medium

Country Status (1)

Country Link
CN (1) CN117854163A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118552533A (en) * 2024-07-29 2024-08-27 南通市肿瘤医院(南通市第五人民医院) Cancerous wound monitoring data analysis processing method, device and storage medium
CN118552533B (en) * 2024-07-29 2024-10-29 南通市肿瘤医院(南通市第五人民医院) Cancerous wound monitoring data analysis processing method, device and storage medium


Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20240409

WW01 Invention patent application withdrawn after publication