Article

Research on Authentic Signature Identification Method Integrating Dynamic and Static Features

1 School of Information Engineering, Huzhou University, Huzhou 313000, China
2 Zhejiang Province Key Laboratory of Smart Management & Application of Modern Agricultural Resources, Huzhou University, Huzhou 313000, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2022, 12(19), 9904; https://doi.org/10.3390/app12199904
Submission received: 22 August 2022 / Revised: 23 September 2022 / Accepted: 26 September 2022 / Published: 1 October 2022
(This article belongs to the Special Issue Human and Artificial Intelligence)

Featured Application

This study focuses on fusing the static features of traditional pen-and-paper writing with the dynamic features of digital writing, seeking more understandable features for precise signature identification.

Abstract

In many fields of social life, such as justice, finance and communication, signatures are used for identity recognition. Increasingly convenient and widespread technology also increases the opportunity for signature forgery, and effectively identifying a forged signature remains a research challenge. Offline static handwriting has a unique structure and strong interpretability, while online handwriting contains dynamic information such as timing and pressure. This paper therefore proposes an authentic signature identification method that integrates dynamic and static features. The dynamic data and structural style of the signature are captured with dot matrix pen technology; global and local features, as well as temporal and spatial features, are fused, and clearer, more understandable features are applied to signature identification. At the same time, forged signatures are classified in finer detail according to their characteristics, and a variety of machine learning models and a deep learning network structure are used for classification and recognition. With 5 classes, simple forged signatures are identified better; with 15 classes, the accuracy is mostly about 96.7% and the highest accuracy reaches 100% with a CNN. This paper focuses on feature extraction, incorporates the advantages of dynamic and static features and improves the classification accuracy of signature identification.

1. Introduction

Biometric recognition is based on human characteristics and signatures are considered one of the most common biological features [1,2]. The active mode of handwriting is widely associated with signature identification in biometric user authentication systems [3]. In a sense, handwriting is a behavioral manifestation of human thought; signatures in particular have unique characteristics and a strong personal style [4]. Signature identification is required for office approval in corporate units, signing in cell phone business offices and banks, corroboration in the judicial industry and identification in examination scenarios. With the further development of information technology, the increasing popularity of handwritten signature acquisition devices and the maturity of digital writing technology are replacing pen-and-paper writing in the traditional sense. In signature verification, dynamic features are the trend and static structure is the basis; a more comprehensive, simple and accurate method would have a profound impact on many industries. However, as signature verification and identification systems are often used for forgery and fraud detection [5], forged signatures complicate otherwise simple procedures and can even cause huge losses.
Currently, there are two types of signature identification: offline and online. Offline identification works from handwriting written on paper with traditional writing tools and then captured as an image by a camera or scanner [6]. The features extracted from offline images can be combined into a variety of effective features whose uniqueness cannot be ignored. Online signatures are obtained by signing on touch-screen devices, such as tablets and cell phones, and many features are obtained using a special pen and tablet together with a scanned signature image [7]. Online handwriting recognition can draw on rich information collected during writing, such as speed, angle, applied force and stroke order [8]. The online data are very clean, captured on a digital device as a discrete number of samples [9], and contain additional supporting information [10].
With the popularity of paperless scenarios, online signature verification is widely used in various fields [11]. Electronic signatures are influenced by writing carriers and writing tools, resulting in many handwriting feature changes [12]. Handwriting is especially important from the perspective of handwriting verification but relying on handwriting signatures alone also loses some important features.
This study uses a dot matrix pen tool to identify handwriting by combining the static features of traditional pen-and-paper writing with the dynamic features of digitized writing. The dot matrix digital pen is a writing tool that captures the pen's motion track through a high-speed camera at the tip, obtains pen-tip pressure data from a pressure sensor and simultaneously transmits the dynamic information of the writing process as coordinate and pressure changes. After preprocessing the written dynamic data and static image information, easily understood structural static features and fine-grained dynamic features are extracted; the data are then divided into training and test sets for each subtask and different models are used to classify and discriminate on the fused features.
The paper is organized as follows: Section 2 describes the work related to this study. Section 3 describes materials and methods. Section 4 shows the results of the study. Section 5 is the discussion. Section 6 provides a summary.

2. Related Works

Handwriting identification determines the identity of the writer from human handwriting [13]. Offline signature verification is more practical than online signature verification because it is more widespread and its structural information more intuitively reflects the characteristics of the writer. The online signature verification mode is more robust than the offline mode because it captures the dynamic information of the signature in real time, which is not easy for impersonators to copy [14].
There are three types of signature forgery: simple, random and skilled. In the case of simple forgery, the forger knows the name information of the signer, but does not know the real signature of the signer. In the case of random forgery, the forger knows the name of the signer or one of the real signatures. In the case of skilled forgery, both the signer’s name information and real signature information are known to the forger and the forger often practices imitating the signature of the signer [15].
The features used in identification methods can be divided into global features, local features, statistical features and dynamic features. For Chinese offline handwriting, Qingwu Li et al. [16] generated handwriting feature sets to identify handwriting samples by extracting curvature features of the stroke skeleton in four directions: horizontal, vertical, apostrophe and down. The samples were divided into reference handwriting and query handwriting, and a similarity measurement method was used to find the writer of the corresponding handwriting. The handwriting of 10 people was randomly selected for the query, with 30 characters per sample, achieving an identification rate of 86%. Ding et al. [17] proposed an offline signature identification method based on the scale-invariant feature transform (SIFT) for local details of signature images: it detects SIFT feature points of the signature image and extracts feature descriptors, matches them by Euclidean distance, filters matching pairs by the ratio of adjacent distances and the angle difference of feature points and computes histogram statistics over the angle differences of matched feature points to form an ODH feature vector. Identification is then completed according to the number of matching pairs and the similarity of the ODH feature vectors. The 4NsigComp2010 database contains three types of signatures: genuine, imitated and disguised. A genuine signature is written by the same author as the reference signature; an imitated signature is written by other authors imitating the reference signature; and a disguised signature is written by the same author as the reference signature while deliberately concealing the writing style. Tested on the local database, the error acceptance rate (EAR) was 5.3%, the error rejection rate (ERR) was 7% and the equal error rate (EER) was 6.7%; the EER was 20% on the 4NsigComp2010 database.
GRAPHJ is a forensic tool for handwriting analysis that automatically detects lines and words in handwritten documents. Its feature extraction mainly measures quantities such as the distance between words and characters, as well as the height and width of characters. The relative position of the dot above the "i" character is also used as a parameter to infer authorship [18,19].
Huang Feiteng et al. [20] studied recognition of electronic signatures based on dynamic features, using writing duration, number of strokes and average writing strength per stroke for feature classification, and collected three types of signature samples: simple, general and complex. The results of discriminant analysis (DA), K-nearest neighbor (KNN), random forest (RF) and support vector machine (SVM) were all above 77%, which, to some extent, shows the feasibility of machine learning algorithms for electronic signature handwriting recognition. Bhowal P et al. [21] designed an online signature verification system that extracts three different types of features from the online signature: physical features, frequency-based features and statistical features. The first ensemble, using a feature-classifier strategy, combines the results of the seven classifiers through the sum of normalized distributions, while the second ensemble, using a majority voting strategy, uses the decisions of the first ensemble to make the final prediction. Evaluated on SVC 2004 and MCYT-100, which include real signatures and skilled forged signatures, the system achieved 98.43% accuracy on the SVC 2004 dataset and 97.87% on MCYT-100.
Yelmati et al. [22] obtained a total of 42 feature vectors containing static and dynamic features, such as average velocity, pen up/down ratio, maximum pressure, pressure range, x-velocity variance, signature width and signature height. They obtained good accuracy and fast training times on the SVC2004 dataset but used few static features with weak interpretability. Kunshuai Wu [23] extracted and fused GLCM and LBP features; after extracting texture features, he proposed an extraction method for signature stroke depth features, treating depth as the dynamic feature of the signature. The GPDS dataset was used under the same rules as the local dataset and divided into three parts: real signatures, skilled forged signatures and random forged signatures, collecting 10 real and 10 skilled signatures for a total of 20 groups. The highest overall correct rate was 87.75% for texture-feature identification and 97.378% for depth-feature identification, but no attempt was made to combine the two by fusing dynamic and static feature information. Zhou et al. [6] proposed a handwritten signature verification method based on improved combined features. From the acquired offline images and online data, texture features were extracted using GLCM and HOG and nine geometric features were extracted. In addition to the horizontal and vertical coordinates and pressure contained in the online data, four dynamic features were extracted: velocity, acceleration, angle and radius of curvature. Support vector machine (SVM) and dynamic time warping (DTW) were used for verification. Forged signatures were obtained by having 2–3 experimenters provide real signatures and forge them after pre-training. Signatures from a total of 20 authors were collected, including 1200 forged signatures; 3, 5, 8 and 10 real signatures were selected for small-sample training and the remaining signatures were used as test samples. After feature fusion, the highest accuracy with 10 samples was 97.83%, the false accept rate (FAR) was 1.00% and the false reject rate (FRR) was 3.33%, but the characteristics of Chinese signatures were not well utilized and the forged-signature tasks were not detailed enough.

3. Materials and Methods

3.1. Sample Collection

Handwritten handwriting identification has become a very active research direction because of its wide application fields and numerous advantages [24]. Establishing a handwriting database is the basis of such research. Although signature identification has been discussed for many years, and science and technology continue to advance, there is still no practical forged-signature handwriting database that combines dynamic and static information. Therefore, this study sets out to establish a practical Chinese forged-signature handwriting database for research purposes.
We collected forged signature handwriting from writers and established a Chinese forged-signature handwriting database. The database contains Chinese signatures for 44 different signers. The acquisition device is a dot matrix pen, composed of a high-speed camera and a pressure sensor; it collects not only the coordinates and pressure values of the sampling points during writing but also the offline images of the written signatures. The multi-task design covers the relevant issues comprehensively from two perspectives: the complexity of the strokes and the difficulty of the imitation. The signature handwriting is divided into two types, simple forged signatures and skilled forged signatures, as shown in Figure 1. Signature handwriting is collected at different levels of difficulty and each type of signature is written 10 times under natural conditions, in compliance with the personal habits of the writer, to collect as many signature samples as possible. The online raw data are X and Y coordinate points, pressure, timestamp and pen up–down marks, which indicate when the pen is lifted and when it touches down. A simple forged signature is written when the writer does not know the real signature. According to the complexity of the strokes, signatures are divided into simple, general and complex. The simple forgery tasks of different writers are shown in Table 1, where P1, P2 and P3 are different writers (the same below). A skilled forged signature is written after the writer has seen and practiced imitating the real signature; the real signatures are shown in Table 2. According to the degree of imitation, signatures are divided into simple, general and complex imitation. The skilled forgery tasks of different writers are shown in Table 3.
Task 1 is a simple forgery task and task 2 is a skilled forgery task, containing a total of 2640 images and the corresponding signature data.
The X and Y coordinate points, pressure, time and signature images obtained during writing constitute the raw data collected for each sample. Figure 2 shows the X and Y points of the signature data, i.e., the changes in the X and Y coordinates over time during the writing process.
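As an illustration, the raw online record just described (X and Y coordinates, pressure, timestamp and pen up–down mark) can be sketched as a simple per-point data structure, with strokes recovered from the runs of pen-down points; the field and function names below are illustrative, not taken from the paper.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class PenSample:
    """One dot-matrix-pen sampling point (field names are illustrative)."""
    x: float          # X coordinate
    y: float          # Y coordinate
    pressure: float   # pen-tip pressure from the sensor
    t: float          # timestamp
    pen_down: bool    # pen up-down mark: True while the tip touches the paper

def split_strokes(samples: List[PenSample]) -> List[List[PenSample]]:
    """Group consecutive pen-down samples into strokes; pen-up samples separate them."""
    strokes, current = [], []
    for s in samples:
        if s.pen_down:
            current.append(s)
        elif current:
            strokes.append(current)
            current = []
    if current:
        strokes.append(current)
    return strokes
```

Grouping by pen-down runs also yields the stroke count directly, one of the dynamic features used later.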

3.2. Preprocessing

In the preprocessing stage, the collected handwriting information is processed to remove irrelevant information, enhance the availability of the data and facilitate feature extraction. The collected handwriting information is divided into online data and offline images, which are preprocessed separately; a flow chart of the data preprocessing is shown in Figure 3. The original online data, such as the X and Y coordinates and pressure, are further processed to calculate dynamic information such as speed, acceleration and pause time, as shown in Table 4, improving the diversity of the dynamic data and enhancing the quality of the signature data. For the offline images, after selecting the required samples, each signature is trimmed to a fixed size; then de-noising, opening and closing operations and binarization are carried out. Finally, the binary image is refined with a fast thinning algorithm to extract the skeleton, as in Figure 4, reducing interference from the external environment.
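As a sketch of the kind of derivation described above, point-to-point speed and acceleration can be computed from the raw coordinates and timestamps with finite differences (the paper does not give its exact formulas; this is one common convention):

```python
import numpy as np

def derive_dynamics(x, y, t):
    """Point-to-point speed and acceleration from raw coordinates and timestamps."""
    x, y, t = map(np.asarray, (x, y, t))
    dt = np.diff(t)
    speed = np.hypot(np.diff(x), np.diff(y)) / dt   # distance covered per time step
    accel = np.diff(speed) / dt[1:]                 # change of speed per time step
    return speed, accel
```

For example, moving 5 units in the first second and standing still in the next gives speeds `[5, 0]` and an acceleration of `[-5]`.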

3.3. Classification Model

After feature selection, a variety of classification methods is used for verification. The training data are input into the classifier to learn the model and then the model predicts the label. Finally, the accuracy of model prediction is calculated to achieve the effect of signature identification.
Machine learning is one way to realize artificial intelligence and includes many kinds of algorithms. It uses data to train models and then uses the models to predict. For smaller datasets, classical machine learning algorithms are usually better than deep learning, which often requires a large amount of data. The experiments mainly use four traditional algorithms: discriminant analysis, K-nearest neighbor, random forest and support vector machine.
Convolutional neural network (CNN) is a kind of feedforward neural network with convolution operations and a deep structure, which has been applied to varying degrees in image processing, natural language processing and other fields [25]. Its advantage is that multiple convolutional filters extract high-level information from low-level information; its disadvantage is that the network's encapsulation is not conducive to targeted performance improvement. The model used in the experiment has 9 layers: 4 convolution layers, 4 max-pooling layers and 1 fully connected layer.
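A minimal PyTorch sketch of such a 9-layer architecture (4 convolution layers, 4 max-pooling layers, 1 fully connected layer) might look as follows; the channel counts, kernel sizes and input resolution are assumptions, not taken from the paper.

```python
import torch
import torch.nn as nn

class SignatureCNN(nn.Module):
    """Sketch of a 9-layer CNN: 4 conv + 4 max-pool + 1 fully connected layer.
    Channel counts, kernel sizes and the 64x64 input size are illustrative."""
    def __init__(self, n_classes=15):
        super().__init__()
        layers, in_ch = [], 1
        for out_ch in (16, 32, 64, 128):            # 4 conv / max-pool stages
            layers += [nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(),
                       nn.MaxPool2d(2)]
            in_ch = out_ch
        self.features = nn.Sequential(*layers)
        self.fc = nn.Linear(128 * 4 * 4, n_classes)  # 1 fully connected layer

    def forward(self, x):                            # x: (batch, 1, 64, 64)
        x = self.features(x)
        return self.fc(x.flatten(1))
```

With a 64x64 input, four 2x2 poolings reduce the feature map to 4x4 before the fully connected classification layer.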
Long short-term memory (LSTM) is a special recurrent neural network (RNN) model that addresses the short-term memory problem of RNNs and mitigates the vanishing and exploding gradient problems that arise when training on long sequences [26]. LSTM is well suited to problems strongly tied to time series, such as machine translation, dialogue generation and encoding/decoding. The model used in the experiment has 4 layers: 1 input layer, 2 hidden layers and 1 output layer.

4. Results

4.1. Feature Extraction

Feature selection removes irrelevant features, retains relevant ones and transforms raw data that an algorithm cannot use directly into recognizable data features. Feature selection usually follows the principles of representativeness, stability and comprehensibility: too many features increase the amount of computation, while too few lead to a loss of information. To extract features from the preprocessed data, we first judge which features are likely to be effective, select temporal and spatial features as well as local and global features, construct features according to the stylistic characteristics of the signature and finally screen all features further. The dynamic and static features extracted are shown in Table 5.

4.1.1. Dynamic Feature Extraction

Writing a signature leaves a series of movement tracks; each person's stroke characteristics, writing strength and speed differ. Dynamic features are obtained by further processing attributes such as speed, time and pressure collected during online signing. They offer higher accuracy and can reflect the writing style of the writer to a certain extent.
In the process of screening dynamic features, the obtained features are analyzed with a heat map, whose color shades correspond to different correlation coefficients, to explore the correlation between the identity of the writer and each feature. As shown in Figure 5, the selected features have larger correlation values and lighter colors, indicating that the selected dynamic features are more effective.
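A simplified stand-in for this screening step computes the absolute correlation between each candidate feature column and a numeric writer label (the paper visualizes these coefficients as a heat map; treating the label as numeric is only meaningful for illustration):

```python
import numpy as np

def feature_label_correlation(X, labels):
    """Absolute Pearson correlation of each feature column with a numeric label."""
    X = np.asarray(X, dtype=float)
    labels = np.asarray(labels, dtype=float)
    return np.array([abs(np.corrcoef(X[:, j], labels)[0, 1])
                     for j in range(X.shape[1])])
```

Features whose coefficient is close to zero are candidates for discarding; the surviving columns form the fused feature set.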
(1)
Total strokes
As a representative of the extremely strong embodiment of writing style, the number of strokes can show the connection of the written signature, as shown in Figure 6, which provides favorable conditions for identification. Especially in the imitation of complex signatures, the writing habits and psychological states of different signers will have a certain impact on the number of strokes.
(2)
Average pressure
Pressure is the force exerted on the paper by the individual through the nib when writing, as in Figure 7. The behavior of writing a signature is a dynamic process. As a continuous and hard-to-copy feature, pressure is difficult for forgers to accurately reproduce. Although there are different degrees of pressure values in continuous strokes, considering the number of strokes, the average pressure value of the signature is better.
(3)
Total hang time
This refers to the total pause time between each stroke during writing, as shown in Figure 8. Different authors have different proficiency in signature and personal writing habits and the pause time during writing is also different.
(4)
Total time
In the case of forged signatures, the time spent writing varies from author to author, as shown in Figure 9, which is directly related to the author’s original writing speed, the difficulty of signing and the author’s proficiency in forging signatures.
(5)
Maximum velocity
The maximum velocity of the author’s writing is shown in Figure 10. Velocity is a key feature that cannot be ignored when writing and it can express the natural degree and accuracy of a signature.
(6)
Minimum velocity
It shows the minimum value of the writer’s speed when writing, as in Figure 11.
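Under the assumption that each sampling point carries coordinates, pressure, a timestamp and a pen-down mark, the six dynamic features above can be sketched as follows (the stroke-count and pause-time conventions are illustrative assumptions):

```python
import numpy as np

def dynamic_features(x, y, p, t, pen_down):
    """The six dynamic features, computed from per-point raw samples."""
    x, y, p, t = map(np.asarray, (x, y, p, t))
    pen_down = np.asarray(pen_down, dtype=bool)
    # total strokes = number of pen-down runs
    starts = pen_down & ~np.roll(pen_down, 1)
    starts[0] = pen_down[0]
    # intervals ending at a pen-up sample count as hang (pause) time
    dt = np.diff(t)
    hang_time = float(dt[~pen_down[1:]].sum())
    speed = np.hypot(np.diff(x), np.diff(y)) / dt
    in_stroke = pen_down[1:] & pen_down[:-1]        # intervals inside a stroke
    return {
        "total_strokes": int(starts.sum()),
        "avg_pressure": float(p[pen_down].mean()),
        "hang_time": hang_time,
        "total_time": float(t[-1] - t[0]),
        "v_max": float(speed[in_stroke].max()),
        "v_min": float(speed[in_stroke].min()),
    }
```

Restricting the speed extrema to within-stroke intervals keeps the artificial pen-up "jumps" between strokes from contaminating the velocity features.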

4.1.2. Static Feature Extraction

Static features are extracted from offline signature images and are similar to the results of visual analysis. They distinguish writers mainly by analyzing the image structure, including the shape, position and writing style of the signature. Because writers handle individual strokes differently, the combined characteristics of their signatures differ; the extracted static characteristics also align with the public's intuitive judgment, making them easier for the public to understand.
(1)
Aspect ratio
It is the horizontal and vertical span of the signature image, such as Figure 12, which reflects the habits of the writer when writing the signature, such as flat or rounded. Although this is not a unique key feature, the writing characteristics reflected by the aspect ratio are less likely to change for different writers.
(2)
Area
The area is the most basic feature describing the size of a blob pattern. The pattern area in the image can be represented by the number of pixels in the same marked region, as shown in Figure 13. Here it refers to the total number of ink pixels after binarization; the sample area reflects, to a certain extent, the size of different people's signatures and is informative as a supplement to the aspect-ratio feature.
(3)
Center of gravity
The center of gravity of the signature is the center point of the weight for the whole signature, as shown in Figure 14. The center of gravity is the foundation of writing and a good grasp of the center of gravity will result in flatter words, which will vary from writer to writer.
(4)
Spindle direction
Among the axes that pass through the center of gravity of the graph, the longest is called the principal axis. The angle between the principal axis and the x-axis is called the principal-axis direction angle θ, as in Figure 15, which can be used to represent the orientation of the signature graph.
(5)
Quadrilateral defining signature structure
Since a signature has obvious personalized features and is not easy to change, it is possible to find representative features that contribute significantly to distinguishing different signatures. Strokes such as horizontal, vertical, apostrophe, down, dot and hook are handled differently by different signers, so the shapes of the structural quadrilaterals they form are also completely different; from this feature, the general writing characteristics of the writer can be deduced. We process the refined sample image to find the extreme edge points in the four directions (up, down, left and right) and connect the four points in turn to obtain a quadrilateral, as in Figure 16. To reflect personal characteristics, we extract the four internal angles of this edge-point quadrilateral.
(6)
Chain code for signature quadrilateral
Starting from one vertex of the quadrilateral, the edges are marked in anticlockwise order and the angle between each edge and the horizontal direction is quantized into chain codes 0–7, as shown in Figure 17. The chain code is based on the eight directions of the Chinese "米"-character grid, adjusted for floating offsets in how people write horizontal and vertical strokes: each original grid direction is expanded by 45 degrees to both sides, so that each direction covers a 45-degree sector, which largely eliminates errors caused by different handwriting. The chain code complements the internal-angle feature and distinguishes signatures with the same angles but different directions, as shown in Figure 18.
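The static measures described in this subsection can be sketched for a binarized signature image (a 2-D 0/1 array, 1 = ink pixel); the axis conventions and the 8-sector chain-code quantization below are illustrative assumptions:

```python
import numpy as np

def static_features(img):
    """Aspect ratio, area, center of gravity, principal-axis angle and
    the edge-point quadrilateral of a binarized signature image."""
    ys, xs = np.nonzero(img)
    width = xs.max() - xs.min() + 1
    height = ys.max() - ys.min() + 1
    cx, cy = xs.mean(), ys.mean()                    # center of gravity
    # principal-axis direction from central second moments
    mu11 = ((xs - cx) * (ys - cy)).sum()
    mu20 = ((xs - cx) ** 2).sum()
    mu02 = ((ys - cy) ** 2).sum()
    theta = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)
    # extreme edge points in the four directions: top, right, bottom, left
    quad = ((xs[ys.argmin()], ys.min()), (xs.max(), ys[xs.argmax()]),
            (xs[ys.argmax()], ys.max()), (xs.min(), ys[xs.argmin()]))
    return {"aspect_ratio": width / height, "area": int(img.sum()),
            "centroid": (cx, cy), "axis_angle": float(theta), "quad": quad}

def chain_code(p, q):
    """Quantize the direction from p to q into eight 45-degree sectors
    (0 = +x, counter-clockwise), mirroring the grid-based chain code."""
    ang = np.arctan2(q[1] - p[1], q[0] - p[0])
    return int(np.round(ang / (np.pi / 4))) % 8
```

Walking the four quadrilateral vertices in order and applying `chain_code` to each edge yields the four codes that complement the internal-angle feature.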

4.2. Classification Results

To verify the feasibility of this method, experiments were conducted on a local dataset. The extracted dynamic features are fused with static features to obtain the signature feature set, which consists of 12 features: 6 dynamic and 6 static. Signatures contain few characters, which makes identification difficult. The local dataset contains the signatures of 44 writers. The experiments used the signatures of 15 writers for the two major categories of simple forgery and skilled forgery, with samples randomly split into training and test sets at a ratio of 7:3. A variety of typical machine learning methods was used for classification and, in addition, RNN and CNN were used in experiments with these features, showing that the method of fusing dynamic and static features achieves better classification accuracy. We found that no classifier works well when trained and tested on the dynamic or static feature set alone. This is because the selected features incorporate representative parts unique to each of the dynamic and static sides, which complement each other: relatively redundant parts are discarded and the uniqueness of dynamic and static features is highlighted, while interpretability increases and the features become easier for the public to understand.
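The 7:3 experimental protocol with the four classical classifiers can be sketched with scikit-learn; the hyperparameters below are library defaults or assumptions rather than the paper's settings.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

def evaluate_classifiers(X, y, seed=0):
    """Fit DA, KNN, RF and SVM on a stratified 7:3 split; return test accuracies."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.3, random_state=seed, stratify=y)
    models = {
        "DA": LinearDiscriminantAnalysis(),
        "KNN": KNeighborsClassifier(n_neighbors=3),
        "RF": RandomForestClassifier(random_state=seed),
        "SVM": SVC(),
    }
    return {name: m.fit(X_tr, y_tr).score(X_te, y_te)
            for name, m in models.items()}
```

Here `X` would hold the 12 fused dynamic/static features per signature and `y` the writer labels; stratification keeps the 7:3 proportion within each writer class.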
For the multi-classification experiment on forged signatures, to verify the effectiveness of the selected features, the machine learning algorithms were run separately for different numbers of writers. The results are shown in Table 6. With 5 or fewer writers, essentially 100% of samples are classified correctly; with 10 writers, accuracy is above 90%; with 15 writers, accuracy is stable at about 90%. Table 7 shows the experimental results of forged-signature identification with 15 writers. Overall, simple forged signatures are identified better than skilled forged signatures, consistent with the characteristics of forged signatures. Among simple forgeries, complex signatures are identified best, with a DA classification accuracy of 100%; among skilled forgeries, simple imitations are identified best, with a DA classification accuracy of 93.3%.
For deep learning, CNN and LSTM network structures are mainly used, with attention modules added to the networks. Table 8 shows the experimental results on forged signatures with 15 writers. On the whole, the classification results for simple forgery are better than those for skilled forgery. Complex signatures are classified best among simple forgeries, with an accuracy of 96.7%; complex imitations are classified best among skilled forgeries, with an average accuracy of 96.7% and a maximum of 100%.

5. Discussion

In this study, the method of combining dynamic and static feature extraction is used to achieve better results. For the whole signature, dynamic features pay more attention to fine and clear information and use the value of each sampling point to obtain other data, while offline images pay more attention to the overall and structural information and use static features to complete the macro supplement. From the two dimensions, we can integrate more comprehensive features and complement each other. In the aspect of dynamic features, the more prominent and special features in the writing process, such as writing speed and pressure, are selected. In the aspect of static features, the public’s impression of the signature, such as the aspect ratio (the signature is flat or square) and the angle of the signature quadrangle (whether the signature is inclined to the left or right as a whole), is referred to. The features that can best represent the dynamic and static features are screened out, which can also achieve better results and be more easily recognized by the public.
At present, there are few studies on combined dynamic and static signature identification and it is difficult to find a database for such research. We therefore compared our results with existing studies. Zhou et al. gradually improved accuracy when training with 3, 5, 8 and 10 real samples, reaching a highest classification result of 97.83% with 10 samples [6]. Apart from using fewer features, their setup is basically consistent with the two-class experimental design in this study; with two writers, the accuracy of all our tasks was 100%. When Huang et al. studied multi-class recognition of electronic signatures, the machine learning results for 3000 samples from 30 authors exceeded 90% [20]. In this study, with 15 writers, the machine learning algorithms reached about 90%, while the deep learning networks were accurate and stable. The dynamic features selected by Yelmati et al. largely coincide with the features we extracted [22] and, through correlation analysis, we found that features such as the standard deviation and variance of velocity have relatively low correlation. The above discussion shows that this study is relatively comprehensive in feature extraction, adding structural static features when combining dynamic and static features. With few classes, identification can reach 100%; with 15 authors, most tasks reach 96.7%. At this level of accuracy the features remain easy to understand, but the identification of individual tasks still needs improvement.
Signature verification now pervades people’s daily life, so reasonable and effective identification is essential for the general public. O’Toole et al. performed identity recognition from human faces and bodies under both static and dynamic conditions; their experiments showed that fusing static and dynamic features, which emphasize different information, works better and achieves perfect performance [27]. Such feature fusion is not limited to identity recognition: typhoon-induced transmission line outages can be predicted by coordinating static and dynamic data [28] and multi-scale and hierarchical features can be fused for super-resolution image detection [29]. The effectiveness of feature fusion has thus been demonstrated across fields: approaching a problem from multiple dimensions gives a comprehensive view, with each feature type compensating for the weaknesses of the other.

6. Conclusions

This paper proposes a handwriting identification method that integrates dynamic and static features and establishes a database of forged Chinese signature handwriting. By combining the static features of traditional pen-and-paper writing with the dynamic features of digital writing, and by experimenting with multiple classifiers across different numbers of classes, the feasibility of the selected features for multi-class forged-handwriting identification is verified.
The fusion of dynamic and static features makes handwriting identification more interpretable and yields a more comprehensive set of effective features, enabling forged signature handwriting to be identified with better accuracy. The multi-class experiments on forged signatures also offer a new way of thinking about handwriting identification.

Author Contributions

Conceptualization, J.L. and H.Q.; methodology, J.L. and H.Q.; software, J.L.; validation, J.L.; formal analysis, J.L.; investigation, J.L.; resources, H.Q.; data curation, J.L.; writing—original draft preparation, J.L.; writing—review and editing, J.L., H.Q., C.Z. and Q.T.; supervision, H.Q., X.W., C.Z. and Q.T.; project administration, H.Q., X.W., C.Z. and Q.T. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by Zhejiang Key R&D Plan (Grant number: 2017C03047).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. Correspondence: [email protected] (H.Q.)

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Liu, L. The Application of Weighted DTW on Handwritten Signature Verification. Ph.D. Thesis, Shandong Normal University, Jinan, China, 2011.
  2. Plamondon, R.; Srihari, S.N. Online and off-line handwriting recognition: A comprehensive survey. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 63–84.
  3. Dhieb, T.; Boubaker, H.; Njah, S.; Ben Ayed, M.; Alimi, A.M. A novel biometric system for signature verification based on score level fusion approach. Multimed. Tools Appl. 2022, 81, 7817–7845.
  4. Ye, J. Online Signature Verification Based on SVM and One-class SVM. Ph.D. Thesis, South China University of Technology, Guangzhou, China, 2016.
  5. Naz, S.; Bibi, K.; Ahmad, R. DeepSignature: Fine-tuned transfer learning based signature verification system. Multimed. Tools Appl. 2022, 81, 38113–38122.
  6. Zhou, Y.; Zheng, J.; Hu, H.; Wang, Y. Handwritten Signature Verification Method Based on Improved Combined Features. Appl. Sci. 2021, 11, 5687.
  7. Saleem, M.; Kovari, B. Online signature verification using signature down-sampling and signer-dependent sampling frequency. Neural Comput. Appl. 2021, 1–13.
  8. Chen, S.; Wang, Y. A Robust Off-line Writer Identification Method. Acta Autom. Sin. 2020, 46, 108–116.
  9. Yapıcı, M.M.; Tekerek, A.; Topaloğlu, N. Deep learning-based data augmentation method and signature verification system for offline handwritten signature. Pattern Anal. Appl. 2021, 24, 165–179.
  10. Jain, A.; Singh, S.K.; Singh, K.P. Handwritten signature verification using shallow convolutional neural network. Multimed. Tools Appl. 2020, 79, 19993–20018.
  11. Okawa, M. Online Signature Verification Using Locally Weighted Dynamic Time Warping via Multiple Fusion Strategies. IEEE Access 2022, 10, 40806–40817.
  12. Wang, S.; Wang, J. Analysis on the Changes of Handwriting and Electronic Signature. J. Railw. Police Coll. 2019, 29, 7.
  13. Zhang, C.; Tong, X.; Wang, J. Review of Handwritten Signature Identification Based on Machine Learning. J. Jiangsu Police Inst. 2021, 36, 6.
  14. Rohilla, S.; Sharma, A.; Singla, R. Role of sub-trajectories in online signature verification. Array 2020, 6, 100028.
  15. Hameed, M.M.; Ahmad, R.; Kiah, M.L.M.; Murtaza, G. Machine learning-based offline signature verification systems: A systematic review. Signal Process. Image Commun. 2021, 93, 116139.
  16. Li, Q.; Ma, Y.; Zhou, Y.; Zhou, L. Method of Writer Identification Based on Curvature of Strokes. J. Chin. Inf. Process. 2016, 30, 6.
  17. Ding, Y.; Zhan, E.; Zheng, J.; Wang, Y. Offline signature identification based on improved SIFT. Appl. Res. Comput. 2017, 34, 5.
  18. Guarnera, L.; Farinella, G.M.; Furnari, A.; Salici, A.; Ciampini, C.; Matranga, V.; Battiato, S. GRAPHJ: A forensics tool for handwriting analysis. In Proceedings of the International Conference on Image Analysis and Processing; Springer: Berlin/Heidelberg, Germany, 2017; pp. 591–601.
  19. Guarnera, L.; Farinella, G.M.; Furnari, A.; Salici, A.; Ciampini, C.; Matranga, V.; Battiato, S. Forensic analysis of handwritten documents with GRAPHJ. J. Electron. Imaging 2018, 27, 051230.
  20. Huang, F.; Hao, H.; Chen, W.; Sun, J.; Shi, W.; Zhang, L.; Wang, Z. Research on Electronic Signature Handwriting Recognition Based on Dynamic Features. Mod. Comput. 2020, 5, 84–88.
  21. Bhowal, P.; Banerjee, D.; Malakar, S.; Sarkar, R. A two-tier ensemble approach for writer dependent online signature verification. J. Ambient Intell. Humaniz. Comput. 2022, 13, 21–40.
  22. Yelmati, S.R.; Rao, J.H. Online Signature Verification Using Fully Connected Deep Neural Networks. Int. J. Eng. Manuf. 2021, 11, 41–47.
  23. Wu, K. Signatures Verification Based on Texture Feature and Depth Feature. Ph.D. Thesis, Chinese Academy of Sciences, Beijing, China, 2020.
  24. Batool, F.E.; Attique, M.; Sharif, M.; Javed, K.; Nazir, M.; Abbasi, A.A.; Iqbal, Z.; Riaz, N. Offline signature verification system: A novel technique of fusion of GLCM and geometric features using SVM. Multimed. Tools Appl. 2020, 1–20.
  25. Zhang, B. Off-line signature verification and identification by pyramid histogram of oriented gradients. Int. J. Intell. Comput. Cybern. 2010, 3, 611–630.
  26. Hochreiter, S.; Schmidhuber, J. Long short-term memory. Neural Comput. 1997, 9, 1735–1780.
  27. O’Toole, A.J.; Phillips, P.J.; Weimer, S.; Roark, D.A.; Ayyad, J.; Barwick, R.; Dunlop, J. Recognizing people from dynamic and static faces and bodies: Dissecting identity with a fusion approach. Vis. Res. 2011, 51, 74–83.
  28. Tang, L.; Xie, H.; Wang, Y.; Zhu, H.; Bie, Z. Predicting typhoon-induced transmission line outages with coordination of static and dynamic data. Int. J. Electr. Power Energy Syst. 2022, 142, 108296.
  29. Luo, J.; Liu, L.; Xu, W.; Yin, Q.; Lin, C.; Liu, H.; Lu, W. Stereo super-resolution images detection based on multi-scale feature extraction and hierarchical feature fusion. Gene Expr. Patterns 2022, 45, 119266.
Figure 1. Sample collection tasks.
Figure 2. X and Y sites of signature data. (a) Change in X coordinate point; (b) change in Y coordinate point.
Figure 3. Data preprocessing.
Figure 4. Offline image preprocessing.
Figure 5. Heat map.
Figure 6. Total strokes.
Figure 7. Pressure value.
Figure 8. Hang time.
Figure 9. Writing time.
Figure 10. Maximum velocity.
Figure 11. Minimum velocity.
Figure 12. Aspect ratio. (a) A signature with a certain aspect ratio A; (b) a signature with a certain aspect ratio B.
Figure 13. Area.
Figure 14. Graphic center of gravity. (a) A signature with a center of gravity A; (b) a signature with a center of gravity B.
Figure 15. Spindle direction. (a) Directional angle; (b) signature corresponds to spindle direction.
Figure 16. Quadrilateral defining signature structure. (a) Signature written by P1; (b) signature written by P2; (c) signature written by P3.
Figure 17. Chain code.
Figure 18. Signature quadrilateral with different chain codes at the same angle. (a) Signature quadrilateral with chain code 7; (b) signature quadrilateral with chain code 0.
Table 1. Simple forgery task. (Signature image samples: writers P1–P3 × simple, general and complex signatures.)
Table 2. Genuine signature. (Image samples of simple, general and complex genuine signatures.)
Table 3. Skilled forgery task. (Signature image samples: writers P1–P3 × simple, general and complex imitations.)
Table 4. Online image preprocessing.

Raw Data  | Processed Data
X         | StrokeSum
Y         | HangTime
Pressure  | StrokeTime
State     | StrokeLength
StrokeNum | Velocity
Timestamp | Acceleration
          | Pressure

Table 5. Feature extraction.

Dynamic Feature | Static Feature
StrokeSum       | AspectRatio
AveragePressure | Area
HangTime        | Center of Gravity
StrokeTime      | SpindleDirection
SpeedMax        | Quadrilateral defining signature structure
SpeedMin        | ChainCode
Table 6. Result of multi-classification experiment (accuracy, %).

Number of Writers | Simple Forged Signature (Simple / General / Complex Signature) | Skilled Forged Signature (Simple / General / Complex Imitation)
2  | 100.0 / 100.0 / 100.0 | 100.0 / 100.0 / 100.0
3  | 100.0 / 100.0 / 100.0 | 100.0 / 100.0 / 100.0
4  | 100.0 / 100.0 / 100.0 | 100.0 / 100.0 / 100.0
5  | 94.3 / 100.0 / 100.0 | 97.1 / 97.1 / 100.0
6  | 90.5 / 100.0 / 97.6 | 97.6 / 97.6 / 97.6
7  | 93.9 / 98.0 / 100.0 | 98.0 / 98.0 / 91.8
8  | 91.1 / 96.4 / 98.2 | 98.2 / 94.8 / 91.1
9  | 92.1 / 96.8 / 93.7 | 96.8 / 95.2 / 93.7
10 | 92.9 / 94.3 / 91.4 | 97.1 / 95.7 / 90.0
11 | 84.4 / 96.1 / 94.8 | 98.7 / 96.1 / 92.2
12 | 84.5 / 98.8 / 92.9 | 95.2 / 95.2 / 94.0
13 | 90.1 / 96.7 / 94.5 | 90.1 / 94.5 / 91.2
14 | 83.7 / 92.9 / 91.8 | 89.8 / 93.9 / 85.7
15 | 86.7 / 97.1 / 90.5 | 88.6 / 93.3 / 86.7
Table 7. Result on multiple classifiers (accuracy, %).

Classifier | Simple Forged Signature (Simple / General / Complex Signature) | Skilled Forged Signature (Simple / General / Complex Imitation)
KNN | 73.3 / 86.7 / 95.6 | 80.0 / 75.6 / 82.2
DA  | 75.6 / 91.1 / 100.0 | 93.3 / 77.8 / 93.3
RF  | 80.0 / 88.9 / 95.6 | 88.9 / 75.6 / 84.4
SVM | 75.6 / 77.8 / 95.6 | 73.3 / 75.6 / 75.6
Table 8. Deep learning network classification results (accuracy, %).

Network | Simple Forged Signature (Simple / General / Complex Signature) | Skilled Forged Signature (Simple / General / Complex Imitation)
CNN | 90.0 / 90.0 / 96.7 | 96.7 / 83.3 / 93.0
CNN + Att | 96.7 / 96.7 / 96.7 | 93.3 / 90.0 / 100.0
LSTM | 95.7 / 96.7 / 96.7 | 90.0 / 90.0 / 96.7
LSTM + Att | 95.7 / 96.7 / 96.7 | 93.3 / 80.0 / 96.7
CNN-LSTM | 83.3 / 90.0 / 86.7 | 90.0 / 83.3 / 93.3
CNN-LSTM + Att | 83.3 / 93.3 / 93.3 | 100.0 / 83.3 / 96.7
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

Lu, J.; Qi, H.; Wu, X.; Zhang, C.; Tang, Q. Research on Authentic Signature Identification Method Integrating Dynamic and Static Features. Appl. Sci. 2022, 12, 9904. https://doi.org/10.3390/app12199904
