CN110956229A - Method and device for realizing goods safe transaction - Google Patents
- Publication number
- CN110956229A (application CN201811134755.5A)
- Authority
- CN
- China
- Prior art keywords
- image
- information
- goods
- acquiring
- face
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06K—GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
- G06K17/00—Methods or arrangements for effecting co-operative working between equipments covered by two or more of main groups G06K1/00 - G06K15/00, e.g. automatic card files incorporating conveying and reading operations
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V30/00—Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
- G06V30/40—Document-oriented image-based pattern recognition
-
- G—PHYSICS
- G07—CHECKING-DEVICES
- G07B—TICKET-ISSUING APPARATUS; FARE-REGISTERING APPARATUS; FRANKING APPARATUS
- G07B17/00—Franking apparatus
- G07B17/00733—Cryptography or similar special procedures in a franking system
-
- G—PHYSICS
- G07—CHECKING-DEVICES
- G07B—TICKET-ISSUING APPARATUS; FARE-REGISTERING APPARATUS; FRANKING APPARATUS
- G07B17/00—Franking apparatus
- G07B17/00733—Cryptography or similar special procedures in a franking system
- G07B2017/00822—Cryptography or similar special procedures in a franking system including unique details
- G07B2017/0083—Postal data, e.g. postage, address, sender, machine ID, vendor
Abstract
The application discloses a method and a device for realizing secure goods transactions. The method comprises the following steps: acquiring an image of a pickup receipt to be identified; determining whether the image is a real-time image according to the additional information of the image; if so, extracting the pickup receipt information contained in the image; determining, in a predetermined manner, whether the extracted pickup receipt information is compliant; and, if the pickup receipt is determined to be compliant, allowing the goods corresponding to the pickup receipt to be handed over. By adopting the method provided by the application, the security problem in goods transactions is solved.
Description
Technical Field
The invention relates to the technical field of logistics, in particular to a method and a device for realizing safe transaction of goods.
Background
A logistics system is an organic whole with specific functions, formed within a certain time and space by the materials to be conveyed and a number of mutually constrained dynamic elements, including related equipment, transport vehicles, storage facilities, personnel and communication links.
In a logistics system, documents, certificates and identity information can all be forged by technical means during a goods transaction. How to provide a complete chain of evidence for the transfer of property rights in goods during a transaction, and thereby realize secure goods transactions, has therefore become an urgent problem to be solved.
Disclosure of Invention
The application provides a method and a device for realizing secure goods transactions, which are used to solve the security problem in goods transactions.
The method for realizing secure goods transactions comprises the following steps:
acquiring an image of a pickup receipt to be identified;
determining whether the image is a real-time image according to the additional information of the image;
if so, extracting the pickup receipt information contained in the image;
determining, in a predetermined manner, whether the extracted pickup receipt information is compliant;
and if the pickup receipt is determined to be compliant, allowing the goods corresponding to the pickup receipt to be handed over.
Optionally, the additional information of the image adopts image capturing time, and the step of determining whether the image is a real-time image includes:
acquiring the image shooting time;
and comparing the image shooting time with the current time, and if the difference value of the image shooting time and the current time meets a preset threshold range, judging the image to be a real-time image.
Optionally, the extracted pickup document information includes geographical location information recorded by the pickup document; the step of judging whether the delivery receipt information is in compliance in a preset judgment mode comprises the following steps: and acquiring geographical position information in the additional information of the image, comparing the geographical position information with the geographical position information recorded by the pickup receipt, and judging whether the comparison result is consistent or not as one of the bases for judging whether the pickup receipt information is in compliance or not.
Optionally, the obtaining the geographic location information in the additional information of the image, where the geographic location information in the additional information of the image is encrypted, includes:
and executing decryption operation aiming at the encrypted geographic position information to obtain the decrypted geographic position information.
Optionally, before the obtaining of the image of the pickup document to be identified, the method includes:
acquiring certificate information of a person who presents the delivery receipt;
comparing the certificate information with prestored certificate information;
and if the certificate comparison is passed, the step of acquiring the image of the document to be identified is carried out.
Optionally, before the acquiring the image of the document to be recognized, the method includes:
acquiring the face image information of the person who presents the delivery receipt;
performing face verification according to the face image information;
and if the face verification is passed, entering the step of acquiring the image of the document to be identified.
Optionally, before the step of performing face verification, the method includes:
according to the face image information, performing living body detection;
and if the living body detection passes, entering the step of face verification.
Optionally, before the obtaining of the image of the pickup document to be identified, the method includes:
acquiring certificate information of a person who presents the delivery receipt;
comparing the certificate information with prestored certificate information;
if the certificate comparison is passed, acquiring the face image information of the delivery receipt person;
according to the face image information, performing living body detection;
if the living body detection is passed, carrying out face verification according to the face image information;
and if the face verification is passed, entering the step of acquiring the image of the document to be identified.
Optionally, the certificate comparison includes:
comparing the text information of the certificate information with the prestored related text information of the certificate;
and comparing the image information of the certificate information with the prestored related image information of the certificate.
Optionally, the living body detection is implemented by the following steps:
providing an indication to a user to complete a related action;
acquiring the state of the facial organ of the user when the user executes the related action in real time;
and judging whether the face organ is a living body according to the state of the face organ.
Optionally, the living body detection is implemented by the following steps:
acquiring a real-time video of a user;
executing random frame extraction operation aiming at the video to obtain a corresponding random picture;
performing silent picture living body detection on the random picture;
and judging whether the living body is the living body according to the result of the living body detection of the silent picture.
Optionally, before acquiring a segment of real-time video of a user, the method includes:
randomly allocating predetermined content for a user;
instructing a user to read the distributed preset content in the process of recording the real-time video;
and in the step of acquiring a piece of video of the user, simultaneously storing the sound of the preset content read by the user in the video.
Optionally, in the step of obtaining the face image information, face quality monitoring is adopted to ensure the quality of the face image information, and the face quality monitoring includes:
in the process of face acquisition, setting verification conditions for the image acquisition conditions of the face; the image acquisition conditions comprise at least one of the following when the face image is acquired: a face pose angle condition, an illumination condition, a face occlusion condition, a face blurriness condition and a face completeness condition;
checking the collected face image according to the checking condition;
and the checked face image meeting the condition requirement enters a subsequent processing step.
Optionally, after the allowing of the delivery of the goods corresponding to the pickup receipt, the method includes:
acquiring certificate information of a person who presents the delivery receipt;
acquiring the face image information of the bill taker for taking the goods;
acquiring the geographical position information of the bill taker for showing the goods;
the information obtained in the above steps is stored in a predetermined database as evidence of the property transfer of the goods at the time of shipment.
Optionally, the method includes:
acquiring the geographical position information of the goods in the transportation process;
the geographic location information provides evidence of movement of the item.
Optionally, during the handing over of the goods, the method includes:
acquiring an image of a delivery document;
acquiring a certificate image of a receiver;
acquiring a face image of the receiver;
acquiring the geographical position information of the receiver;
acquiring the information of a goods-taking receipt of a receiver;
the information obtained in the above steps is stored in a predetermined database as proof of the transfer of the property of the goods at the time of delivery.
Correspondingly, this application still provides a device of realizing goods safety transaction, includes:
the acquisition unit is used for acquiring an image of the delivery document to be identified;
a determination unit configured to determine whether the image is a real-time image based on additional information of the image;
the extracting unit is used for extracting the pickup receipt information contained in the image if the image is determined to be a real-time image;
the compliance judging unit is used for judging whether the goods taking receipt information is in compliance or not in a preset judging mode according to the extracted goods taking receipt information;
and the transaction control unit is used for allowing the delivery of the goods corresponding to the delivery receipt if the delivery receipt is judged to be in compliance.
Optionally, the additional information of the image adopts image capturing time, and the determining unit is specifically configured to:
acquiring the image shooting time;
and comparing the image shooting time with the current time, and if the difference value of the image shooting time and the current time meets a preset threshold range, judging the image to be a real-time image.
Optionally, the extracted pickup document information includes geographical location information recorded by the pickup document; the compliance determination unit is specifically configured to: and acquiring geographical position information in the additional information of the image, comparing the geographical position information with the geographical position information recorded by the pickup receipt, and judging whether the comparison result is consistent or not as one of the bases for judging whether the pickup receipt information is in compliance or not.
Optionally, the obtaining the geographic location information in the additional information of the image, where the geographic location information in the additional information of the image is encrypted, includes:
and executing decryption operation aiming at the encrypted geographic position information to obtain the decrypted geographic position information.
Optionally, the certificate comparison unit, before the image of the pickup document to be identified is acquired, is specifically configured to:
acquiring certificate information of a person who presents the delivery receipt;
comparing the certificate information with prestored certificate information;
and if the certificate comparison is passed, the step of acquiring the image of the document to be identified is carried out.
Optionally, the face verification unit, before the obtaining of the image of the document to be recognized, is specifically configured to:
acquiring the face image information of the person who presents the delivery receipt;
performing face verification according to the face image information;
and if the face verification is passed, entering the step of acquiring the image of the document to be identified.
Optionally, the living body detecting unit is specifically configured to, before the step of performing face verification:
according to the face image information, performing living body detection;
and if the living body detection passes, entering the step of face verification.
Optionally, the identity information verification unit, before the obtaining of the image of the pickup document to be identified, is specifically configured to:
acquiring certificate information of a person who presents the delivery receipt;
comparing the certificate information with prestored certificate information;
if the certificate comparison is passed, acquiring the face image information of the delivery receipt person;
according to the face image information, performing living body detection;
if the living body detection is passed, carrying out face verification according to the face image information;
and if the face verification is passed, entering the step of acquiring the image of the document to be identified.
Optionally, the certificate comparison implementation unit is specifically configured to:
comparing the text information of the certificate information with the prestored related text information of the certificate;
and comparing the image information of the certificate information with the prestored related image information of the certificate.
Optionally, the first in-vivo detection unit is specifically configured to:
providing an indication to a user to complete a related action;
acquiring the state of the facial organ of the user when the user executes the related action in real time;
and judging whether the face organ is a living body according to the state of the face organ.
Optionally, the second in-vivo detection unit is specifically configured to:
acquiring a real-time video of a user;
executing random frame extraction operation aiming at the video to obtain a corresponding random picture;
performing silent picture living body detection on the random picture;
and judging whether the living body is the living body according to the result of the living body detection of the silent picture.
Optionally, the sound adding unit, before acquiring a segment of real-time video of the user, is specifically configured to:
randomly allocating predetermined content for a user;
instructing a user to read the distributed preset content in the process of recording the real-time video;
and in the step of acquiring a piece of video of the user, simultaneously storing the sound of the preset content read by the user in the video.
Optionally, the face quality monitoring unit is specifically configured to, in the step of obtaining the face image information:
in the process of face acquisition, setting verification conditions for the image acquisition conditions of the face; the image acquisition conditions comprise at least one of the following when the face image is acquired: a face pose angle condition, an illumination condition, a face occlusion condition, a face blurriness condition and a face completeness condition;
checking the collected face image according to the checking condition;
and the checked face image meeting the condition requirement enters a subsequent processing step.
Optionally, the pickup asset transfer recording unit is specifically configured to, after the acceptance of the handover of the goods corresponding to the pickup receipt:
acquiring certificate information of a person who presents the delivery receipt;
acquiring the face image information of the bill taker for taking the goods;
acquiring the geographical position information of the bill taker for showing the goods;
the information obtained in the above steps is stored in a predetermined database as evidence of the property transfer of the goods at the time of shipment.
Optionally, the geographic location information recording unit is specifically configured to:
acquiring the geographical position information of the goods in the transportation process;
the geographic location information provides evidence of movement of the item.
Optionally, the delivery asset transfer recording unit, during the handing over of the goods, is specifically configured to:
acquiring an image of a delivery document;
acquiring a certificate image of a receiver;
acquiring a face image of the receiver;
acquiring the geographical position information of the receiver;
acquiring the information of a goods-taking receipt of a receiver;
the information obtained in the above steps is stored in a predetermined database as proof of the transfer of the property of the goods at the time of delivery.
The present application also provides an electronic device, characterized in that the electronic device includes:
a processor;
a memory for storing a program that, when read and executed by the processor, performs the following:
acquiring an image of a delivery document to be identified;
judging whether the image is a real-time image or not according to the additional information of the image;
if yes, extracting the goods taking document information contained in the image;
judging whether the goods taking receipt information is in compliance or not in a preset judgment mode according to the extracted goods taking receipt information;
if the goods taking receipt is judged to be in compliance, the goods corresponding to the goods taking receipt are allowed to be handed over.
The present application further provides a computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, performs the steps of:
acquiring an image of a delivery document to be identified;
judging whether the image is a real-time image or not according to the additional information of the image;
if yes, extracting the goods taking document information contained in the image;
judging whether the goods taking receipt information is in compliance or not in a preset judgment mode according to the extracted goods taking receipt information;
if the goods taking receipt is judged to be in compliance, the goods corresponding to the goods taking receipt are allowed to be handed over.
The method for realizing secure goods transactions provided by the application comprises the following steps: acquiring an image of a pickup receipt to be identified; determining whether the image is a real-time image according to the additional information of the image; if so, extracting the pickup receipt information contained in the image; determining, in a predetermined manner, whether the extracted pickup receipt information is compliant; and, if the pickup receipt is determined to be compliant, allowing the goods corresponding to the pickup receipt to be handed over. By adopting the method provided by the application, the security problem in goods transactions is solved.
Drawings
FIG. 1 is a flow chart of an embodiment of a method of implementing a secure transaction for goods according to the present application.
Fig. 2 is a flow chart of an embodiment of an apparatus for performing a secure transaction of goods according to the present application.
Detailed Description
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present application. However, the present application can be implemented in many ways other than those described herein, and those skilled in the art can make similar generalizations without departing from the spirit of the application; the application is therefore not limited to the specific implementations disclosed below.
The first embodiment of the application provides a method for realizing goods secure transaction. Please refer to fig. 1, which is a flowchart illustrating a first embodiment of the present application. The first embodiment of the present application will be described in detail below with reference to fig. 1. The method comprises the following steps:
step S101: and acquiring an image of the delivery document to be identified.
The step is used for obtaining the image of the goods taking receipt to be identified.
During the transaction of goods, some documents are used. For example, in supply chain finance, a bill of lading is used in the pick-up phase and a delivery slip is used in the delivery phase, where both the bill of lading and the delivery slip are logistics documents.
A supply chain is a functional network chain structure that connects suppliers, manufacturers, distributors, retailers and end users into a whole: starting from the purchase of raw materials, intermediate products and final products are manufactured under the control of information flow, logistics and capital flow, and the finished products are finally delivered to consumers through a sales network.
Supply chain finance is a specialized field (bank level) of commercial bank credit business, and is also a financing channel (enterprise level) of enterprises, especially small and medium-sized enterprises. Banks provide financing and other settlement and financing services to customers (core enterprises), while providing the suppliers of these customers with the convenience of timely receipt of loans, or with prepaid proxy payment and inventory financing services to their distributors.
In supply chain finance, after a commercial bank issues a loan to a small or medium-sized enterprise, the bank needs to monitor whether the loan is indeed used, as agreed, to purchase a particular commodity from a particular supplier. The bill of lading and the delivery slip occupy a very important position in this whole process: if the bill of lading and the delivery slip used during logistics transportation are authentic and reliable, the use of the loan that the commercial bank provides to the small or medium-sized enterprise can be effectively monitored.
The image of the pickup document to be identified can be obtained by a mobile client and then transmitted back to the server for further identification.
Before the acquiring the image of the delivery document to be identified, the method comprises the following steps: acquiring certificate information of a person who presents the delivery receipt; comparing the certificate information with prestored certificate information; and if the certificate comparison is passed, the step of acquiring the image of the document to be identified is carried out.
The certificate information comprises identity certificate information such as an identity card, a driving license, a passport and the like.
The certificate comparison comprises the following steps: comparing the text information of the certificate information with the prestored related text information of the certificate; and comparing the image information of the certificate information with the prestored related image information of the certificate.
The text information of the certificate information refers to text information on the certificate, such as name, gender, household registration location and the like.
The image information of the certificate information refers to an image on a certificate, such as a photo on an identity card.
Through certificate comparison, face images that differ from the pre-stored certificate information are screened out, which improves the efficiency of identity authentication. For example, suppose a Mr. X registers in the system with his second-generation ID card, and his face image is collected at the pickup stage. The system first compares the collected face image with the photo on Mr. X's ID card held in the identity authentication system of the Ministry of Public Security; if the difference between the two exceeds a preset threshold, the person picking up the goods is not Mr. X, and the face verification step does not need to be executed.
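As a concrete illustration of the two comparison steps just described, the following Java sketch checks the OCR-extracted text fields and the certificate photo against the pre-stored record. The CertificateText fields, the injected face-similarity function and the 0.8 threshold are assumptions for illustration and are not fixed by this application.

import java.util.function.ToDoubleBiFunction;

public class CertificateVerifier {

    /** Text fields read from the presented certificate, e.g. by OCR. */
    public static class CertificateText {
        String name;
        String gender;
        String registeredResidence;
    }

    /**
     * Compares the presented certificate against the pre-stored certificate information.
     * The face-similarity function is passed in because this application does not fix a
     * particular face-comparison algorithm; the 0.8 threshold is likewise an assumption.
     */
    public static boolean certificateMatches(CertificateText presented,
                                             CertificateText stored,
                                             byte[] presentedPhoto,
                                             byte[] storedPhoto,
                                             ToDoubleBiFunction<byte[], byte[]> faceSimilarity) {
        // 1. Text comparison: the key fields must match the pre-stored record.
        boolean textOk = presented.name.equals(stored.name)
                && presented.gender.equals(stored.gender)
                && presented.registeredResidence.equals(stored.registeredResidence);
        if (!textOk) {
            return false;
        }
        // 2. Image comparison: the certificate photo must resemble the pre-stored photo.
        return faceSimilarity.applyAsDouble(presentedPhoto, storedPhoto) >= 0.8;
    }
}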
Before the acquiring of the image of the document to be identified, the method comprises the following steps: acquiring the face image information of the person who presents the delivery receipt; performing face verification according to the face image information; and if the face verification is passed, entering the step of acquiring the image of the document to be identified.
The face verification is realized by face recognition technology. To improve transaction security, the logistics transport personnel are identified by face recognition, and only approved personnel can transport specific goods.
Face recognition is a biometric technology that identifies a person based on facial feature information: images or video streams containing faces are collected with a camera or video camera, the faces in the images are automatically detected and tracked, and face recognition is then performed on the detected faces.
Since face recognition is now a well-established technology, a detailed description thereof will not be provided here.
Before the step of performing face verification, the method comprises the following steps: according to the face image information, performing living body detection; and if the living body detection passes, entering the step of face verification.
The living body detection is used to counter spoofing attacks that use static pictures. There are many methods of living body detection; two implementations are provided below.
The living body detection can be realized by the following steps: providing an indication to the user to complete a related action; acquiring, in real time, the state of the user's facial organs while the user performs the related action; and judging whether it is a living body according to the state of the facial organs.
The related actions can be lowering and raising the head, opening and closing the mouth, or opening and closing the eyes. In this way, the states of the eyes, mouth, head and so on while the user performs the requested actions are compared with the pre-stored expected postures, and it is judged whether a real user is performing the operation.
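A minimal sketch of this action-based check, assuming a face SDK that reports eye openness, mouth openness and head pitch for each captured frame; the organ-state fields, the thresholds and the "must observe both extreme states" rule are illustrative assumptions rather than part of this application.

import java.util.List;

public class ActionLivenessChecker {

    enum Action { BLINK, OPEN_MOUTH, NOD }

    /** One observation of the facial organs while the user performs the requested action. */
    static class OrganState {
        double eyeOpenness;    // 0 = closed, 1 = fully open
        double mouthOpenness;  // 0 = closed, 1 = fully open
        double headPitch;      // head pitch angle in degrees
    }

    /** Returns true if the observed sequence of organ states matches the requested action. */
    public static boolean isLive(Action requested, List<OrganState> observed) {
        boolean sawLow = false, sawHigh = false;
        for (OrganState s : observed) {
            double value, low, high;
            switch (requested) {
                case BLINK:      value = s.eyeOpenness;   low = 0.3;   high = 0.7;  break;
                case OPEN_MOUTH: value = s.mouthOpenness; low = 0.2;   high = 0.6;  break;
                default:         value = s.headPitch;     low = -15.0; high = 15.0; break; // NOD
            }
            if (value <= low)  sawLow = true;
            if (value >= high) sawHigh = true;
        }
        // A static photograph cannot show the organ in both extreme states, so observing a
        // transition between them is taken as evidence that a live person performed the action.
        return sawLow && sawHigh;
    }
}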
The living body detection can also be realized by the following steps: acquiring a piece of real-time video of the user; performing a random frame extraction operation on the video to obtain a corresponding random picture; performing silent picture living body detection on the random picture; and judging whether it is a living body according to the result of the silent picture living body detection.
Silent picture living body detection distinguishes whether a submitted picture is a re-shot (replay) attack by detecting clues in the picture such as screen borders, light reflections, moire fringes and imaging distortion.
Firstly, a section of video of a user is obtained and can be realized through a mobile phone client.
And secondly, randomly extracting video frames from the video to serve as corresponding random pictures.
Then, silent picture living body detection is performed on the random picture, distinguishing re-shot attacks by the clues described above;
and finally, judging whether the image is a living body according to the result of the silent image living body detection.
Before acquiring a real-time video of a user, the method comprises the following steps:
randomly allocating predetermined content for a user; instructing a user to read the distributed preset content in the process of recording the real-time video; and in the step of acquiring a piece of video of the user, simultaneously storing the sound of the preset content read by the user in the video.
In the process of acquiring a section of video of the user, predetermined content can be randomly distributed for the user, such as reading a series of specified numbers; and storing the number read out by the user in the video. After the server receives the video, the audio information in the video is extracted and compared with the audio information prestored in the server, and the reliability of the living body detection is further ensured.
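The following sketch ties these pieces together: random digits are assigned, frames are sampled at random from the uploaded video for silent picture living body detection, and the recognized speech is compared with the assigned digits. The FrameSource, SilentLivenessDetector and SpeechRecognizer interfaces and the sample count are assumptions; the application does not prescribe particular components.

import java.awt.image.BufferedImage;
import java.util.Random;

public class VideoLivenessChecker {

    // Assumed interfaces standing in for a video decoder, a silent-picture liveness model
    // and a speech recogniser; none of them is specified by this application.
    interface FrameSource { int frameCount(); BufferedImage frameAt(int index); }
    interface SilentLivenessDetector { boolean isLive(BufferedImage frame); }
    interface SpeechRecognizer { String transcribe(byte[] audio); }

    private static final Random RANDOM = new Random();

    /** Randomly assigns a numeric string for the user to read aloud while recording. */
    public static String assignContent(int digits) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < digits; i++) {
            sb.append(RANDOM.nextInt(10));
        }
        return sb.toString();
    }

    /** Checks randomly sampled frames and the spoken content of the uploaded video. */
    public static boolean isLive(FrameSource video,
                                 byte[] audioTrack,
                                 String assignedContent,
                                 SilentLivenessDetector detector,
                                 SpeechRecognizer recognizer) {
        if (video.frameCount() == 0) {
            return false;
        }
        // 1. Random frame extraction followed by silent-picture living body detection.
        int samples = Math.min(5, video.frameCount());      // assumed number of sampled frames
        for (int i = 0; i < samples; i++) {
            BufferedImage frame = video.frameAt(RANDOM.nextInt(video.frameCount()));
            if (!detector.isLive(frame)) {
                return false;
            }
        }
        // 2. The digits heard in the audio track must match the randomly assigned content.
        String spoken = recognizer.transcribe(audioTrack).replaceAll("\\D", "");
        return spoken.equals(assignedContent);
    }
}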
Before the acquiring the image of the delivery document to be identified, the method comprises the following steps: acquiring certificate information of a person who presents the delivery receipt; comparing the certificate information with prestored certificate information; if the certificate comparison is passed, acquiring the face image information of the delivery receipt person; according to the face image information, performing living body detection; if the living body detection is passed, carrying out face verification according to the face image information; and if the face verification is passed, entering the step of acquiring the image of the document to be identified.
This embodiment uses the three technologies of certificate comparison, living body detection and face verification together, and is not described here in detail.
In the step of obtaining the face image information, face quality monitoring is adopted to ensure the quality of the face image information, and the face quality monitoring comprises:
in the process of face acquisition, setting verification conditions for the image acquisition conditions of the face; the image acquisition conditions comprise at least one of the following when the face image is acquired: a face pose angle condition, an illumination condition, a face occlusion condition, a face blurriness condition and a face completeness condition; checking the collected face image against the verification conditions; and passing the face images that meet the requirements on to the subsequent processing steps.
The final effect of face recognition depends on whether the face collected during acquisition meets the standard quality requirements. If the quality of the face at registration is poor, feature extraction at the registration stage is affected, so the originally registered face information is of low quality; this directly affects subsequent recognition, and the resulting similarity scores are often not accurate enough. Because of the poor registration quality, every recognition carries a certain score error, which can lead to the situation where the user is clearly the genuine person yet cannot pass.
The pose angle refers to the angular orientation of the face in three-dimensional space. The pose angles are Pitch, Roll and Yaw; they represent the angles of the face in a three-dimensional coordinate system and are commonly used to judge the limit angles for recognition.
The angle ranges are as follows:
Pitch: the pitch angle of three-dimensional rotation, range [-90 (up), 90 (down)].
Roll: the in-plane rotation angle, range [-180 (counterclockwise), 180 (clockwise)].
Yaw: the left-right rotation angle of three-dimensional rotation, range [-90 (left), 90 (right)].
The illumination refers to the illumination intensity of the human face.
Too dark a face can have a significant impact on recognition, so it is usually preferable to keep the face sufficiently illuminated in all quality checks.
The occlusion refers to the occlusion proportion of each part of the face.
The degree of occlusion of each part of the face is judged; the face can be divided into 7 regions in total: left eye, right eye, nose, mouth, left cheek, right cheek and chin. If the occluded area of one or more regions is too large, the final recognition effect is usually affected.
the ambiguity refers to the degree of clarity of the face. The fuzzy degree of the human face generally takes a value range as follows: 0 to 1, with 0 being the clearest and 1 being the most blurred, and generally less than 0.7 being considered satisfactory.
The completeness refers to whether the face in the picture is complete or not.
In the face acquisition process, verification conditions are set for the pose angle, illumination, occlusion, blurriness, completeness and so on of the face. For example, the following check conditions may be used for occlusion:
the value range of the shielding is as follows: [0 to 1] 0: no shielding 1: complete shielding
left _ eye: 0.6,// threshold for left eye occluded
right _ eye: 0.6,// threshold for occlusion of the right eye
no se: 0.7,// threshold where nose is occluded
mouth: 0.7,// threshold for occluded mouth
left _ check: 0.8,// threshold for left cheek occlusion
right _ check: 0.8,// threshold for right cheek occlusion
chi _ contour: 0.6,// chin occluded threshold
Then, the collected face image is checked against these conditions; face images that meet the requirements enter the subsequent processing steps.
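A minimal sketch of such a quality check follows: the occlusion thresholds are taken from the example configuration above and the 0.7 blurriness limit from the text, while the pose-angle limits and the FaceQualityInfo structure are assumptions for illustration.

import java.util.Map;

// Hypothetical container for the quality metrics returned by a face SDK.
class FaceQualityInfo {
    double pitch, roll, yaw;            // pose angles in degrees
    double blur;                        // 0 = sharpest, 1 = most blurred
    boolean complete;                   // face fully inside the frame
    Map<String, Double> occlusion;      // per-region occlusion ratio, 0..1
}

public class FaceQualityChecker {

    // Occlusion thresholds from the example configuration above.
    private static final Map<String, Double> OCCLUSION_LIMITS = Map.of(
            "left_eye", 0.6, "right_eye", 0.6,
            "nose", 0.7, "mouth", 0.7,
            "left_cheek", 0.8, "right_cheek", 0.8,
            "chin_contour", 0.6);

    /** Returns true if the collected face image is good enough for feature extraction. */
    public static boolean passesQualityCheck(FaceQualityInfo q) {
        // Assumed pose limits; the application only requires "a pose angle condition".
        if (Math.abs(q.pitch) > 30 || Math.abs(q.yaw) > 30 || Math.abs(q.roll) > 30) {
            return false;
        }
        if (q.blur >= 0.7) {            // blurriness below 0.7 is considered acceptable
            return false;
        }
        if (!q.complete) {
            return false;
        }
        for (Map.Entry<String, Double> e : OCCLUSION_LIMITS.entrySet()) {
            double value = q.occlusion.getOrDefault(e.getKey(), 0.0);
            if (value > e.getValue()) {
                return false;           // this region is occluded too much
            }
        }
        return true;
    }
}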
Step S102: and judging whether the image is a real-time image or not according to the additional information of the image.
This step is for determining whether the image is a real-time image based on the additional information of the image.
The real-time property of the image is checked to ensure the reliability of the document. The prior art already provides ways of adding additional information to an image. For example, in the Android system a picture file stores pixel information as a two-dimensional array of pixel points, and additional shooting parameter information, called Exif, is added at the beginning of the picture file. The Exif information includes shooting conditions such as aperture, shutter, white balance, ISO, focal length and the date and time of shooting, as well as the camera brand, model, color coding and geographical location information.
Exif, short for Exchangeable Image File format, conforms to the JPEG file format standard.
In the Android system, the Exif information of a picture is accessed through the ExifInterface class.
Exif data is stored in a picture in a way that can be understood as key-value pairs, and is generally operated on through the following methods:
String getAttribute(String tag): obtains the String value of the given tag in the picture.
double getAttributeDouble(String tag, double defaultValue): obtains the double value of the given tag in the picture.
int getAttributeInt(String tag, int defaultValue): obtains the int value of the given tag in the picture.
void setAttribute(String tag, String value): sets the value of the given Exif tag according to the input parameters.
void saveAttributes(): writes the Exif attributes held in memory back into the picture file.
Static String constants are defined in ExifInterface to represent these tag values; commonly used ones are as follows:
TAG_APERTURE: the aperture value.
TAG_DATETIME: the shooting time, which depends on the time set on the device.
TAG_EXPOSURE_TIME: the exposure time.
TAG_FLASH: the flash.
TAG_FOCAL_LENGTH: the focal length.
TAG_IMAGE_LENGTH: the picture height.
TAG_IMAGE_WIDTH: the picture width.
TAG_ISO: the ISO.
TAG_MAKE: the device brand.
TAG_MODEL: the device model.
TAG_ORIENTATION: the rotation angle, an integer value with corresponding constants defined in ExifInterface.
After an Android mobile phone is used to collect the image of the document, the image is transmitted to the server side in real time. The server side parses the additional information of the image to obtain the time information it contains, and can then judge whether the image is a real-time image according to the acquired shooting time. For example, if the shooting time the server obtains from the additional information of the image is 13:01:59 on a certain day in July 2018 and the current server time is 13:02 on the same day, then, taking into account the time consumed during data transmission, the server considers the image to be a real-time image.
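As an illustration of the real-time check in step S102, the following sketch reads the Exif capture time with the ExifInterface class described above and compares it with the current time. The 60-second freshness threshold is an assumption; the application only requires that the difference satisfy a preset threshold range, and in practice the comparison is performed on the server side after the image has been uploaded.

import android.media.ExifInterface;
import java.io.IOException;
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.Locale;

public class RealTimeImageChecker {

    // Exif stores TAG_DATETIME as "yyyy:MM:dd HH:mm:ss".
    private static final SimpleDateFormat EXIF_FORMAT =
            new SimpleDateFormat("yyyy:MM:dd HH:mm:ss", Locale.US);

    // Assumed freshness threshold; the application only requires "a preset threshold range".
    private static final long MAX_AGE_MILLIS = 60_000L;

    /** Returns true if the picture's Exif capture time is close enough to the current time. */
    public static boolean isRealTimeImage(String imagePath) throws IOException {
        ExifInterface exif = new ExifInterface(imagePath);
        String dateTime = exif.getAttribute(ExifInterface.TAG_DATETIME);
        if (dateTime == null) {
            return false; // no capture time recorded: treat as not real-time
        }
        try {
            Date captured = EXIF_FORMAT.parse(dateTime);
            long age = Math.abs(System.currentTimeMillis() - captured.getTime());
            return age <= MAX_AGE_MILLIS;
        } catch (ParseException e) {
            return false; // unparseable timestamp: treat as not real-time
        }
    }
}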
Step S103: and if so, extracting the information of the goods taking bill contained in the image.
This step is used to extract document information contained in the image, such as recipient information in the document.
Extracting the text in the image may be implemented using OCR (Optical Character Recognition) technology.
Generally, OCR uses the following steps to achieve text extraction.
Firstly, image input and preprocessing are carried out. The image input means inputting an image to be recognized. The preprocessing mainly comprises binarization, noise removal, inclination correction and the like. Binarization refers to defining the foreground information to be black and the background information to be white for a picture shot by a camera. The noise removal means to remove noise according to the characteristics of noise. The tilt correction refers to correcting the tilt of a picture.
Next, layout analysis is performed. The layout analysis refers to a process of segmenting and dividing the document pictures into lines.
Then, character segmentation is performed. Character segmentation refers to splitting apart characters that are stuck together.
Finally, character recognition is performed. Character recognition is based on feature extraction.
Since the OCR algorithm is a relatively mature technology, a detailed implementation will not be given here.
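A sketch of the text-extraction step using the open-source tess4j wrapper around the Tesseract OCR engine; the choice of library, the language data path and the "chi_sim+eng" language setting are assumptions, since the application does not name a particular OCR implementation.

import java.io.File;
import net.sourceforge.tess4j.Tesseract;
import net.sourceforge.tess4j.TesseractException;

public class ReceiptTextExtractor {

    /** Extracts the text printed on a pickup receipt image. */
    public static String extractText(File receiptImage) throws TesseractException {
        Tesseract ocr = new Tesseract();
        // Path to the Tesseract language data and the languages to use are deployment-specific.
        ocr.setDatapath("/usr/share/tesseract-ocr/4.00/tessdata");
        ocr.setLanguage("chi_sim+eng");
        return ocr.doOCR(receiptImage);
    }
}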
Step S104: and judging whether the goods taking receipt information is in compliance or not in a preset judgment mode according to the extracted goods taking receipt information.
The step is used for judging whether the bill information is in compliance or not in a preset judging mode according to the extracted bill information.
The extracted bill information comprises the geographic position information of the bill record; the step of judging whether the document information is in compliance in a preset judgment mode comprises the following steps: and acquiring geographical position information in the additional information of the image, comparing the geographical position information with the geographical position information recorded by the bill, and judging whether the comparison result is consistent or not as one of the bases for judging whether the bill information is consistent or not.
In the transaction process of goods, the geographical location information of the transaction is one of the factors for ensuring the reliability of the transaction. For example, in goods transaction in the logistics industry, it is possible to determine from the geographical location information of the delivery location whether the goods have arrived at the specified location.
The geographic location information is descriptive information of geographic coordinates of a certain geographic location point to be identified.
According to different geographical position positioning methods, the geographical position information can adopt different description modes.
In the prior art, common geographical location positioning methods include a GPS positioning method and a base station positioning method.
The GPS positioning method is that a GPS positioning module on the mobile equipment is utilized to send own position signals to a positioning background to realize the positioning of the mobile equipment.
The base station positioning method is to determine the position of the mobile equipment by utilizing the measurement and calculation of the distance of the base station to the mobile equipment.
Base station positioning does not require the mobile device to have GPS positioning capability, but its precision depends to a great extent on the distribution of the base stations and the size of the coverage area, and the error can exceed one kilometer; GPS positioning offers higher precision.
The geographical location information in the additional information of the image is used in much the same way as the time information, which has already been explained in detail in step S102, and is not described here again.
The acquiring the geographical position information in the additional information of the image, which adopts the encrypted geographical position information, includes:
and executing decryption operation aiming at the encrypted geographic position information to obtain the decrypted geographic position information.
Since the geographical location information is easily forged, the geographical location information is encrypted. The encryption algorithm may be implemented using the RSA algorithm.
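A sketch of the decryption and comparison described above, assuming the client encrypts a "latitude,longitude" string with RSA and the server holds the private key. The Base64 transport encoding, the coordinate format, the equirectangular distance approximation and the 500-metre tolerance are all assumptions for illustration.

import java.nio.charset.StandardCharsets;
import java.security.PrivateKey;
import java.util.Base64;
import javax.crypto.Cipher;

public class GeoLocationVerifier {

    /** Decrypts Base64-encoded, RSA-encrypted "lat,lon" location data from the image's Exif. */
    public static double[] decryptLocation(String encryptedBase64, PrivateKey privateKey)
            throws Exception {
        Cipher cipher = Cipher.getInstance("RSA");
        cipher.init(Cipher.DECRYPT_MODE, privateKey);
        byte[] plain = cipher.doFinal(Base64.getDecoder().decode(encryptedBase64));
        String[] parts = new String(plain, StandardCharsets.UTF_8).split(",");
        return new double[] { Double.parseDouble(parts[0]), Double.parseDouble(parts[1]) };
    }

    /** Rough distance in metres between two lat/lon points (equirectangular approximation). */
    public static double distanceMetres(double[] a, double[] b) {
        double latRad = Math.toRadians((a[0] + b[0]) / 2.0);
        double dLat = Math.toRadians(a[0] - b[0]);
        double dLon = Math.toRadians(a[1] - b[1]) * Math.cos(latRad);
        return 6371000.0 * Math.sqrt(dLat * dLat + dLon * dLon);
    }

    /** One compliance check: the image location must be close to the location on the receipt. */
    public static boolean locationsMatch(double[] imageLocation, double[] receiptLocation) {
        return distanceMetres(imageLocation, receiptLocation) <= 500.0; // assumed tolerance
    }
}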
Step S105: and if the goods taking bill is judged to be in compliance, the goods corresponding to the goods taking bill is allowed to be handed over.
And if the receipt is judged to be in compliance, the delivery of the goods corresponding to the goods taking receipt is allowed.
After the allowing of the delivery of the goods corresponding to the goods taking receipt, the method comprises the following steps: acquiring certificate information of a person who presents the delivery receipt; acquiring the face image information of the bill taker for taking the goods; acquiring the geographical position information of the bill taker for showing the goods; the information obtained in the above steps is stored in a predetermined database as evidence of the property transfer of the goods at the time of shipment.
The information obtained in the above steps is used as evidence of the transfer of the property rights in the goods at pickup time. For example, a financial institution may issue a loan to the small or medium-sized enterprise based on this evidence of the property transfer at pickup.
The method for realizing the goods secure transaction comprises the following steps: acquiring the geographical position information of the goods in the transportation process; the geographic location information provides evidence of movement of the item.
The evidence of the movement of the goods can be provided to a financial institution to monitor the movement track of the goods.
In the handing over of the goods, comprising: acquiring an image of a delivery document; acquiring a certificate image of a receiver; acquiring a face image of the receiver; acquiring the geographical position information of the receiver; acquiring the information of a goods-taking receipt of a receiver; the information obtained in the above steps is stored in a predetermined database as proof of the transfer of the property of the goods at the time of delivery.
Evidence of the transfer of the property rights in the goods at delivery may be used by a financial institution to monitor whether the goods flow to the intended recipient.
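A sketch of persisting the delivery-time evidence listed above to a relational database over JDBC; the table and column names are assumptions, since the application only requires that the collected items be stored in a predetermined database as proof of the property transfer at delivery.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.Timestamp;
import java.time.Instant;

public class DeliveryEvidenceStore {

    /** Stores one delivery-time evidence record: images, location and receipt information. */
    public static void save(Connection db,
                            byte[] deliverySlipImage,
                            byte[] recipientCertificateImage,
                            byte[] recipientFaceImage,
                            double latitude,
                            double longitude,
                            String pickupReceiptInfo) throws SQLException {
        String sql = "INSERT INTO delivery_evidence "
                + "(slip_image, certificate_image, face_image, latitude, longitude, "
                + " receipt_info, recorded_at) VALUES (?, ?, ?, ?, ?, ?, ?)";
        try (PreparedStatement ps = db.prepareStatement(sql)) {
            ps.setBytes(1, deliverySlipImage);
            ps.setBytes(2, recipientCertificateImage);
            ps.setBytes(3, recipientFaceImage);
            ps.setDouble(4, latitude);
            ps.setDouble(5, longitude);
            ps.setString(6, pickupReceiptInfo);
            ps.setTimestamp(7, Timestamp.from(Instant.now()));
            ps.executeUpdate();
        }
    }
}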
In the above embodiments, a method for realizing safe transaction of goods is provided, and correspondingly, the application also provides a device for realizing safe transaction of goods. Please refer to fig. 2, which is a flowchart of an embodiment of an apparatus for secure transaction of goods according to the present application. Since this embodiment, i.e., the second embodiment, is substantially similar to the method embodiment, it is relatively simple to describe, and reference may be made to some descriptions of the method embodiment for relevant points. The device embodiments described below are merely illustrative.
The device for realizing secure goods transactions of this embodiment comprises: an acquisition unit 201, configured to acquire an image of a pickup receipt to be identified; a determining unit 202, configured to determine whether the image is a real-time image according to the additional information of the image; an extracting unit 203, configured to extract the pickup receipt information contained in the image if the image is determined to be a real-time image; a compliance judging unit 204, configured to judge, in a predetermined manner, whether the extracted pickup receipt information is compliant; and a transaction control unit 205, configured to allow handing over of the goods corresponding to the pickup receipt if the pickup receipt is determined to be compliant.
Optionally, the additional information of the image adopts image capturing time, and the determining unit is specifically configured to:
acquiring the image shooting time;
and comparing the image shooting time with the current time, and if the difference value of the image shooting time and the current time meets a preset threshold range, judging the image to be a real-time image.
Optionally, the extracted pickup document information includes geographical location information recorded by the pickup document; the compliance determination unit is specifically configured to: and acquiring geographical position information in the additional information of the image, comparing the geographical position information with the geographical position information recorded by the pickup receipt, and judging whether the comparison result is consistent or not as one of the bases for judging whether the pickup receipt information is in compliance or not.
Optionally, the obtaining the geographic location information in the additional information of the image, where the geographic location information in the additional information of the image is encrypted, includes:
and executing decryption operation aiming at the encrypted geographic position information to obtain the decrypted geographic position information.
Optionally, the certificate comparison unit, before the image of the pickup document to be identified is acquired, is specifically configured to:
acquiring certificate information of a person who presents the delivery receipt;
comparing the certificate information with prestored certificate information;
and if the certificate comparison is passed, the step of acquiring the image of the document to be identified is carried out.
Optionally, the face verification unit, before the obtaining of the image of the document to be recognized, is specifically configured to:
acquiring the face image information of the person who presents the delivery receipt;
performing face verification according to the face image information;
and if the face verification is passed, entering the step of acquiring the image of the document to be identified.
Optionally, the living body detecting unit is specifically configured to, before the step of performing face verification:
according to the face image information, performing living body detection;
and if the living body detection passes, entering the step of face verification.
Optionally, the identity information verification unit, before the obtaining of the image of the pickup document to be identified, is specifically configured to:
acquiring certificate information of a person who presents the delivery receipt;
comparing the certificate information with prestored certificate information;
if the certificate comparison is passed, acquiring the face image information of the delivery receipt person;
according to the face image information, performing living body detection;
if the living body detection is passed, carrying out face verification according to the face image information;
and if the face verification is passed, entering the step of acquiring the image of the document to be identified.
Optionally, the certificate comparison implementation unit is specifically configured to:
comparing the text information of the certificate information with the prestored related text information of the certificate;
and comparing the image information of the certificate information with the prestored related image information of the certificate.
Optionally, the first in-vivo detection unit is specifically configured to:
providing an indication to a user to complete a related action;
acquiring the state of the facial organ of the user when the user executes the related action in real time;
and judging whether the face organ is a living body according to the state of the face organ.
Optionally, the second in-vivo detection unit is specifically configured to:
acquiring a real-time video of a user;
executing random frame extraction operation aiming at the video to obtain a corresponding random picture;
performing silent picture living body detection on the random picture;
and judging whether the living body is the living body according to the result of the living body detection of the silent picture.
Optionally, the sound adding unit, before acquiring a segment of real-time video of the user, is specifically configured to:
randomly allocating predetermined content for a user;
instructing a user to read the distributed preset content in the process of recording the real-time video;
and in the step of acquiring a piece of video of the user, simultaneously storing the sound of the preset content read by the user in the video.
Optionally, the face quality monitoring unit is specifically configured to, in the step of obtaining the face image information:
in the process of face acquisition, setting verification conditions for the image acquisition conditions of the face; the image acquisition conditions comprise at least one of the following when the face image is acquired: a face pose angle condition, an illumination condition, a face occlusion condition, a face blurriness condition and a face completeness condition;
checking the collected face image according to the checking condition;
and the checked face image meeting the condition requirement enters a subsequent processing step.
Optionally, the pickup asset transfer recording unit is specifically configured to, after the acceptance of the handover of the goods corresponding to the pickup receipt:
acquiring certificate information of a person who presents the delivery receipt;
acquiring the face image information of the bill taker for taking the goods;
acquiring the geographical position information of the bill taker for showing the goods;
the information obtained in the above steps is stored in a predetermined database as evidence of the property transfer of the goods at the time of shipment.
Optionally, the geographic location information recording unit is specifically configured to:
acquiring the geographical position information of the goods in the transportation process;
the geographic location information provides evidence of movement of the item.
Optionally, the delivery asset transfer recording unit, during the handing over of the goods, is specifically configured to:
acquiring an image of a delivery document;
acquiring a certificate image of a receiver;
acquiring a face image of the receiver;
acquiring the geographical position information of the receiver;
acquiring the information of a goods-taking receipt of a receiver;
the information obtained in the above steps is stored in a predetermined database as proof of the transfer of the property of the goods at the time of delivery.
A third embodiment of the present application provides an electronic device, comprising:
a processor;
a memory for storing a program which, when read and executed by the processor, performs the following operations:
acquiring an image of a pickup document to be identified;
judging, according to additional information of the image, whether the image is a real-time image;
if so, extracting the pickup document information contained in the image;
judging, in a predetermined judgment manner and according to the extracted pickup document information, whether the pickup document information is compliant;
and if the pickup document information is judged to be compliant, allowing handover of the goods corresponding to the pickup document.
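Purely as an illustrative sketch of the flow performed by the stored program, the snippet below chains the real-time check, the information extraction, and the compliance judgment; the five-minute threshold window and the placeholder callables `extract_info` and `is_compliant` are assumptions, not values fixed by this embodiment.

```python
from datetime import datetime, timedelta, timezone

REALTIME_WINDOW = timedelta(minutes=5)      # the threshold range is not fixed by the text

def is_realtime(capture_time: datetime, now=None) -> bool:
    # The "additional information" is taken here to be a timezone-aware capture
    # timestamp carried in the image metadata (EXIF-like).
    now = now or datetime.now(timezone.utc)
    return abs(now - capture_time) <= REALTIME_WINDOW

def handle_pickup(image, capture_time, extract_info, is_compliant) -> bool:
    """extract_info and is_compliant are placeholders standing in for the
    OCR/extraction step and the predetermined compliance check described above."""
    if not is_realtime(capture_time):
        return False                         # stale or re-used photo: refuse handover
    info = extract_info(image)               # pickup document fields (number, goods, geo, ...)
    if not is_compliant(info):
        return False
    return True                              # handover of the corresponding goods is allowed
```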
A fourth embodiment of the present application provides a computer-readable storage medium on which a computer program is stored, the program, when executed by a processor, implementing the following steps:
acquiring an image of a pickup document to be identified;
judging, according to additional information of the image, whether the image is a real-time image;
if so, extracting the pickup document information contained in the image;
judging, in a predetermined judgment manner and according to the extracted pickup document information, whether the pickup document information is compliant;
and if the pickup document information is judged to be compliant, allowing handover of the goods corresponding to the pickup document.
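One of the compliance checks recited in the claims compares decrypted geographic location metadata from the image with the location recorded on the pickup document. The sketch below illustrates that check, assuming Fernet symmetric encryption (from the `cryptography` package) and a simple distance tolerance, neither of which is specified by this application.

```python
import json
import math
from cryptography.fernet import Fernet   # one possible symmetric scheme; not mandated here

def decrypt_geo(token: bytes, key: bytes) -> dict:
    # The additional information is assumed to carry an encrypted JSON blob
    # of the form {"lat": ..., "lon": ...}; the encryption scheme is an assumption.
    return json.loads(Fernet(key).decrypt(token))

def locations_consistent(image_geo: dict, document_geo: dict, tolerance_km=1.0) -> bool:
    # Equirectangular approximation is sufficient for a small-radius consistency check.
    lat1, lon1 = math.radians(image_geo["lat"]), math.radians(image_geo["lon"])
    lat2, lon2 = math.radians(document_geo["lat"]), math.radians(document_geo["lon"])
    x = (lon2 - lon1) * math.cos((lat1 + lat2) / 2)
    y = lat2 - lat1
    return 6371.0 * math.hypot(x, y) <= tolerance_km
```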
Although the present application has been described with reference to preferred embodiments, these embodiments are not intended to limit it. Those skilled in the art can make variations and modifications without departing from the spirit and scope of the present application; therefore, the scope of protection of the present application should be determined by the appended claims.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory media, such as modulated data signals and carrier waves.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, a system, or a computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, and optical storage) having computer-usable program code embodied therein.
Claims (34)
1. A method for realizing a secure goods transaction, comprising:
acquiring an image of a pickup document to be identified;
judging, according to additional information of the image, whether the image is a real-time image;
if so, extracting the pickup document information contained in the image;
judging, in a predetermined judgment manner and according to the extracted pickup document information, whether the pickup document information is compliant;
and if the pickup document information is judged to be compliant, allowing handover of the goods corresponding to the pickup document.
2. The method for realizing a secure goods transaction according to claim 1, wherein the additional information of the image is the image capture time, and the judging whether the image is a real-time image comprises:
acquiring the image capture time;
and comparing the image capture time with the current time, and judging the image to be a real-time image if the difference between the two falls within a predetermined threshold range.
3. The method for realizing a secure goods transaction according to claim 1, wherein the extracted pickup document information comprises geographic location information recorded on the pickup document, and the judging, in a predetermined judgment manner, whether the pickup document information is compliant comprises: acquiring the geographic location information in the additional information of the image, comparing it with the geographic location information recorded on the pickup document, and taking whether the comparison results are consistent as one of the bases for judging whether the pickup document information is compliant.
4. The method for realizing a secure goods transaction according to claim 3, wherein the geographic location information in the additional information of the image is encrypted geographic location information, and the acquiring the geographic location information in the additional information of the image comprises:
performing a decryption operation on the encrypted geographic location information to obtain the decrypted geographic location information.
5. The method for realizing a secure goods transaction according to claim 1, wherein before the acquiring an image of a pickup document to be identified, the method comprises:
acquiring the certificate information of the person presenting the pickup document;
comparing the certificate information with prestored certificate information;
and if the certificate comparison passes, entering the step of acquiring an image of the pickup document to be identified.
6. The method for realizing a secure goods transaction according to claim 1, wherein before the acquiring an image of a pickup document to be identified, the method comprises:
acquiring the face image information of the person presenting the pickup document;
performing face verification according to the face image information;
and if the face verification passes, entering the step of acquiring an image of the pickup document to be identified.
7. The method for realizing a secure goods transaction according to claim 6, wherein before the step of performing face verification, the method comprises:
performing liveness detection according to the face image information;
and if the liveness detection passes, entering the step of face verification.
8. The method for realizing a secure goods transaction according to claim 1, wherein before the acquiring an image of a pickup document to be identified, the method comprises:
acquiring the certificate information of the person presenting the pickup document;
comparing the certificate information with prestored certificate information;
if the certificate comparison passes, acquiring the face image information of the person presenting the pickup document;
performing liveness detection according to the face image information;
if the liveness detection passes, performing face verification according to the face image information;
and if the face verification passes, entering the step of acquiring an image of the pickup document to be identified.
9. The method for realizing a secure goods transaction according to claim 5 or 8, wherein the certificate comparison comprises:
comparing the text information in the certificate information with the prestored certificate text information;
and comparing the image information in the certificate information with the prestored certificate image information.
10. The method for realizing a secure goods transaction according to claim 7 or 8, wherein the liveness detection is implemented by the following steps:
prompting the user to complete a specified action;
acquiring, in real time, the state of the user's facial organs while the user performs the specified action;
and judging, according to the state of the facial organs, whether the subject is a living body.
11. The method for realizing a secure goods transaction according to claim 7 or 8, wherein the liveness detection is implemented by the following steps:
acquiring a segment of real-time video of the user;
performing a random frame-extraction operation on the video to obtain corresponding random pictures;
performing silent-picture liveness detection on the random pictures;
and judging, according to the result of the silent-picture liveness detection, whether the subject is a living body.
12. The method for realizing a secure goods transaction according to claim 11, wherein before the acquiring a segment of real-time video of the user, the method comprises:
randomly assigning predetermined content to the user;
instructing the user to read the assigned predetermined content aloud while the real-time video is recorded;
and, in the step of acquiring the video of the user, storing in the video the audio of the user reading the predetermined content.
13. The method for realizing a secure goods transaction according to any one of claims 6 to 8, wherein in the step of acquiring the face image information, face quality monitoring is adopted to ensure the quality of the face image information, the face quality monitoring comprising:
during face acquisition, setting verification conditions for the image acquisition conditions of the face, the image acquisition conditions comprising at least one of the following at the time the face image is acquired: a face pose angle condition, an illumination condition, a face occlusion condition, a face blurriness condition, and a face completeness condition;
verifying the acquired face image against the verification conditions;
and passing to subsequent processing only the face images that meet the verification conditions.
14. The method for realizing a secure goods transaction according to claim 1, wherein after the handover of the goods corresponding to the pickup document is allowed, the method comprises:
acquiring the certificate information of the person presenting the pickup document;
acquiring the face image information of the person picking up the goods;
acquiring the geographic location information of the person picking up the goods;
and storing the information obtained in the above steps in a predetermined database as evidence of the property transfer of the goods at the time of shipment.
15. The method for realizing a secure goods transaction according to claim 1, comprising:
acquiring the geographic location information of the goods during transportation;
and using the geographic location information as evidence of the movement of the goods.
16. The method for realizing a secure goods transaction according to claim 1, wherein during handover of the goods, the method comprises:
acquiring an image of the delivery document;
acquiring a certificate image of the receiver;
acquiring a face image of the receiver;
acquiring the geographic location information of the receiver;
acquiring the pickup receipt information of the receiver;
and storing the information obtained in the above steps in a predetermined database as evidence of the property transfer of the goods at the time of delivery.
17. An apparatus for realizing a secure goods transaction, comprising:
an acquisition unit, for acquiring an image of a pickup document to be identified;
a determination unit, for judging, according to additional information of the image, whether the image is a real-time image;
an extraction unit, for extracting, if the image is a real-time image, the pickup document information contained in the image;
a compliance determination unit, for judging, in a predetermined judgment manner and according to the extracted pickup document information, whether the pickup document information is compliant;
and a transaction control unit, for allowing handover of the goods corresponding to the pickup document if the pickup document information is judged to be compliant.
18. The apparatus for realizing a secure goods transaction according to claim 17, wherein the additional information of the image is the image capture time, and the determination unit is specifically configured to:
acquiring the image capture time;
and comparing the image capture time with the current time, and judging the image to be a real-time image if the difference between the two falls within a predetermined threshold range.
19. The apparatus for realizing a secure goods transaction according to claim 17, wherein the extracted pickup document information comprises geographic location information recorded on the pickup document, and the compliance determination unit is specifically configured to: acquiring the geographic location information in the additional information of the image, comparing it with the geographic location information recorded on the pickup document, and taking whether the comparison results are consistent as one of the bases for judging whether the pickup document information is compliant.
20. The apparatus for realizing a secure goods transaction according to claim 19, wherein the geographic location information in the additional information of the image is encrypted geographic location information, and the acquiring the geographic location information in the additional information of the image comprises:
performing a decryption operation on the encrypted geographic location information to obtain the decrypted geographic location information.
21. The apparatus for realizing a secure goods transaction according to claim 17, wherein the certificate comparison unit, before the acquiring an image of a pickup document to be identified, is specifically configured to:
acquiring the certificate information of the person presenting the pickup document;
comparing the certificate information with prestored certificate information;
and if the certificate comparison passes, entering the step of acquiring an image of the pickup document to be identified.
22. The apparatus for realizing a secure goods transaction according to claim 17, wherein the face verification unit, before the acquiring an image of a pickup document to be identified, is specifically configured to:
acquiring the face image information of the person presenting the pickup document;
performing face verification according to the face image information;
and if the face verification passes, entering the step of acquiring an image of the pickup document to be identified.
23. The apparatus for realizing a secure goods transaction according to claim 22, wherein the liveness detection unit, before the step of performing face verification, is specifically configured to:
performing liveness detection according to the face image information;
and if the liveness detection passes, entering the step of face verification.
24. The apparatus for realizing a secure goods transaction according to claim 17, wherein the identity information verification unit, before the acquiring an image of a pickup document to be identified, is specifically configured to:
acquiring the certificate information of the person presenting the pickup document;
comparing the certificate information with prestored certificate information;
if the certificate comparison passes, acquiring the face image information of the person presenting the pickup document;
performing liveness detection according to the face image information;
if the liveness detection passes, performing face verification according to the face image information;
and if the face verification passes, entering the step of acquiring an image of the pickup document to be identified.
25. The apparatus for realizing a secure goods transaction according to claim 21 or 24, wherein the certificate comparison implementation unit is specifically configured to:
comparing the text information in the certificate information with the prestored certificate text information;
and comparing the image information in the certificate information with the prestored certificate image information.
26. The apparatus for realizing a secure goods transaction according to claim 23 or 24, wherein the first liveness detection unit is specifically configured to:
prompting the user to complete a specified action;
acquiring, in real time, the state of the user's facial organs while the user performs the specified action;
and judging, according to the state of the facial organs, whether the subject is a living body.
27. The apparatus for realizing a secure goods transaction according to claim 23 or 24, wherein the second liveness detection unit is specifically configured to:
acquiring a segment of real-time video of the user;
performing a random frame-extraction operation on the video to obtain corresponding random pictures;
performing silent-picture liveness detection on the random pictures;
and judging, according to the result of the silent-picture liveness detection, whether the subject is a living body.
28. The apparatus for realizing a secure goods transaction according to claim 27, wherein the sound adding unit, before the acquiring a segment of real-time video of the user, is specifically configured to:
randomly assigning predetermined content to the user;
instructing the user to read the assigned predetermined content aloud while the real-time video is recorded;
and, in the step of acquiring the video of the user, storing in the video the audio of the user reading the predetermined content.
29. The apparatus for realizing a secure goods transaction according to any one of claims 22 to 24, wherein the face quality monitoring unit, in the step of acquiring the face image information, is specifically configured to:
during face acquisition, setting verification conditions for the image acquisition conditions of the face, the image acquisition conditions comprising at least one of the following at the time the face image is acquired: a face pose angle condition, an illumination condition, a face occlusion condition, a face blurriness condition, and a face completeness condition;
verifying the acquired face image against the verification conditions;
and passing to subsequent processing only the face images that meet the verification conditions.
30. The apparatus for realizing a secure goods transaction according to claim 17, wherein the pickup property-transfer recording unit is specifically configured to, after handover of the goods corresponding to the pickup document is allowed:
acquiring the certificate information of the person presenting the pickup document;
acquiring the face image information of the person picking up the goods;
acquiring the geographic location information of the person picking up the goods;
and storing the information obtained in the above steps in a predetermined database as evidence of the property transfer of the goods at the time of shipment.
31. The apparatus for realizing a secure goods transaction according to claim 17, wherein the geographic location information recording unit is specifically configured to:
acquiring the geographic location information of the goods during transportation;
and using the geographic location information as evidence of the movement of the goods.
32. The apparatus for realizing a secure goods transaction according to claim 17, wherein the delivery property-transfer recording unit, during handover of the goods, is specifically configured to:
acquiring an image of the delivery document;
acquiring a certificate image of the receiver;
acquiring a face image of the receiver;
acquiring the geographic location information of the receiver;
acquiring the pickup receipt information of the receiver;
and storing the information obtained in the above steps in a predetermined database as evidence of the property transfer of the goods at the time of delivery.
33. An electronic device, comprising:
a processor;
a memory for storing a program which, when read and executed by the processor, performs the following operations:
acquiring an image of a pickup document to be identified;
judging, according to additional information of the image, whether the image is a real-time image;
if so, extracting the pickup document information contained in the image;
judging, in a predetermined judgment manner and according to the extracted pickup document information, whether the pickup document information is compliant;
and if the pickup document information is judged to be compliant, allowing handover of the goods corresponding to the pickup document.
34. A computer-readable storage medium on which a computer program is stored, the program, when executed by a processor, implementing the following steps:
acquiring an image of a pickup document to be identified;
judging, according to additional information of the image, whether the image is a real-time image;
if so, extracting the pickup document information contained in the image;
judging, in a predetermined judgment manner and according to the extracted pickup document information, whether the pickup document information is compliant;
and if the pickup document information is judged to be compliant, allowing handover of the goods corresponding to the pickup document.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811134755.5A CN110956229A (en) | 2018-09-27 | 2018-09-27 | Method and device for realizing goods safe transaction |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811134755.5A CN110956229A (en) | 2018-09-27 | 2018-09-27 | Method and device for realizing goods safe transaction |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110956229A true CN110956229A (en) | 2020-04-03 |
Family
ID=69975401
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811134755.5A Pending CN110956229A (en) | 2018-09-27 | 2018-09-27 | Method and device for realizing goods safe transaction |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110956229A (en) |
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101303761A (en) * | 2008-06-10 | 2008-11-12 | 裘炅 | Integrated system for claim settlement of vehicle insurance capable of shooting and uploading evidence-obtaining photograph or video by mobile phone |
CN102629985A (en) * | 2011-02-04 | 2012-08-08 | 佳能株式会社 | Information processing apparatus and control method therefor |
CN104240050A (en) * | 2013-06-14 | 2014-12-24 | 华为技术有限公司 | Logistics information processing method and device and business system |
CN104484916A (en) * | 2014-11-14 | 2015-04-01 | 广州宝钢南方贸易有限公司 | Image identification-based cargo entry-exit safety inspection apparatus and method thereof |
CN105405239A (en) * | 2015-11-13 | 2016-03-16 | 合肥安奎思成套设备有限公司 | On-line security monitoring method for freight |
CN106778525A (en) * | 2016-11-25 | 2017-05-31 | 北京旷视科技有限公司 | Identity identifying method and device |
CN107545287A (en) * | 2017-08-30 | 2018-01-05 | 广州云从信息科技有限公司 | A kind of data carrying method based on testimony of a witness unification |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10885644B2 (en) | Detecting specified image identifiers on objects | |
US10789465B2 (en) | Feature extraction and matching for biometric authentication | |
US20210124919A1 (en) | System and Methods for Authentication of Documents | |
US11669607B2 (en) | ID verification with a mobile device | |
US10332118B2 (en) | Efficient prevention of fraud | |
US9483629B2 (en) | Document authentication based on expected wear | |
JP2019534526A (en) | How to detect falsification of ID | |
KR20200118842A (en) | Identity authentication method and device, electronic device and storage medium | |
US20140270404A1 (en) | Efficient prevention of fraud | |
US10403076B2 (en) | Method for securing and verifying a document | |
US20140270409A1 (en) | Efficient prevention of fraud | |
CN110956567A (en) | Intelligent logistics realization method, device and system based on Internet of things | |
CN107609364B (en) | User identity confirmation method and device | |
US20160125404A1 (en) | Face recognition business model and method for identifying perpetrators of atm fraud | |
CN110415113A (en) | Finance data processing method, device, server and readable storage medium storing program for executing | |
CN110008664A (en) | Authentication information acquisition, account-opening method, device and electronic equipment | |
CN113222952B (en) | Method and device for identifying copied image | |
CN115035533B (en) | Data authentication processing method and device, computer equipment and storage medium | |
CN110956229A (en) | Method and device for realizing goods safe transaction | |
US20220067151A1 (en) | Defense mechanism against component-wise hill climbing using synthetic face generators | |
KR102579610B1 (en) | Apparatus for Detecting ATM Abnormal Behavior and Driving Method Thereof | |
CN114756844A (en) | Identity authentication method, device and equipment | |
KR20220165423A (en) | Identification card spoofing detection method and server therefor | |
JP2023184013A (en) | Age estimation system and age estimation method | |
JP2023172637A (en) | Personal authentication system and personal authentication method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| AD01 | Patent right deemed abandoned | Effective date of abandoning: 20240322 |