
WO2015070537A1 - User information extraction method and user information extraction apparatus - Google Patents

User information extraction method and user information extraction apparatus

Info

Publication number
WO2015070537A1
WO2015070537A1 (PCT/CN2014/071143)
Authority
WO
WIPO (PCT)
Prior art keywords
user
image
related information
fundus
module
Prior art date
Application number
PCT/CN2014/071143
Other languages
English (en)
French (fr)
Inventor
杜琳
Original Assignee
北京智谷睿拓技术服务有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京智谷睿拓技术服务有限公司
Priority to US14/888,219 (US9877015B2)
Publication of WO2015070537A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30 Image reproducers
    • H04N13/327 Calibration thereof
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/10 Protecting distributed programs or content, e.g. vending or licensing of copyrighted material; Digital rights management [DRM]
    • G06F21/16 Program or content traceability, e.g. by watermarking
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30 Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/31 User authentication
    • G06F21/36 User authentication by graphic or iconic representation
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01 Head-up displays
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01 Head-up displays
    • G02B27/017 Head mounted
    • G02B27/0172 Head mounted characterised by optical features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/10 Protecting distributed programs or content, e.g. vending or licensing of copyrighted material; Digital rights management [DRM]
    • G06F21/106 Enforcing content protection by specific content processing
    • G06F21/1063 Personalisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30 Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/31 User authentication
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013 Eye tracking input arrangements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00 General purpose image data processing
    • G06T1/0021 Image watermarking
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18 Eye characteristics, e.g. of the iris
    • G06V40/193 Preprocessing; Feature extraction
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00 Network architectures or network communication protocols for network security
    • H04L63/08 Network architectures or network communication protocols for network security for authentication of entities
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30 Image reproducers
    • H04N13/366 Image reproducers using viewer tracking
    • H04N13/383 Image reproducers using viewer tracking for tracking with gaze detection, i.e. detecting the lines of sight of the viewer's eyes
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01 Head-up displays
    • G02B27/0101 Head-up displays characterised by optical features
    • G02B2027/0138 Head-up displays characterised by optical features comprising image capture systems, e.g. camera
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01 Head-up displays
    • G02B27/0101 Head-up displays characterised by optical features
    • G02B2027/014 Head-up displays characterised by optical features comprising information/image processing systems
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01 Head-up displays
    • G02B27/0101 Head-up displays characterised by optical features
    • G02B2027/0141 Head-up displays characterised by optical features characterised by the informative content of the display
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01 Head-up displays
    • G02B27/0179 Display position adjusting means not related to the information to be displayed
    • G02B2027/0187 Display position adjusting means not related to the information to be displayed slaved to motion of at least a part of the body of the user, e.g. head, eye
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/0093 Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00 with means for monitoring data relating to the user, e.g. head-tracking, eye-tracking
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/67 Focus control based on electronic image sensor signals
    • H04N23/672 Focus control based on electronic image sensor signals based on the phase difference signals

Definitions

  • the present application relates to the field of digital authentication technologies, and in particular, to a user information extraction method and apparatus.
  • a mobile or wearable device generally sets a screen lock for energy saving and to prevent misoperation.
  • the screen unlocking may or may not require a password.
  • when a password is required, the user usually needs to remember special passwords, patterns, actions, and the like; although security can be guaranteed, these are easily forgotten, causing inconvenience to the user.
  • similar problems arise in cases where another password is required for further operations.
  • Digital watermarking technology can directly embed identification information (i.e., a digital watermark) into a digital carrier without affecting the use of the original carrier, and the watermark is not easy to detect or modify.
  • Digital watermarking technology is applied in many areas, such as copyright protection, anti-counterfeiting, authentication, information hiding, and so on. If digital watermarking technology can be used to securely and covertly carry user input such as passwords to obtain the corresponding authorization, the above-mentioned problem of failed authentication due to the user forgetting can be solved, and the user experience improved.
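To make the embedding idea concrete, the following is a minimal illustrative sketch (not the scheme of the present application; the least-significant-bit approach and the function names are assumptions chosen for simplicity) of hiding a short payload, such as a password, in the pixel values of a grayscale image:

```python
def embed_watermark(pixels, payload):
    """Embed payload bytes into pixel LSBs, one bit per pixel."""
    bits = [(byte >> i) & 1 for byte in payload for i in range(8)]
    if len(bits) > len(pixels):
        raise ValueError("image too small for payload")
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit  # overwrite least-significant bit
    return out


def extract_watermark(pixels, n_bytes):
    """Recover n_bytes previously embedded with embed_watermark."""
    data = bytearray()
    for b in range(n_bytes):
        byte = 0
        for i in range(8):
            byte |= (pixels[b * 8 + i] & 1) << i
        data.append(byte)
    return bytes(data)
```

Because only the least-significant bit of each pixel changes, the carrier image remains visually unchanged, which is the property the watermarking discussion above relies on.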
  • Therefore, a user information extraction technology is provided to supply users with relevant information for use in corresponding situations while preserving confidentiality, thereby avoiding the inconvenience caused by the user forgetting the corresponding user-related information.
  • the present application provides a user information extraction method, which includes: acquiring an image comprising at least one digital watermark; acquiring at least one piece of user-related information corresponding to a user contained in the digital watermark in the image; and projecting the user-related information to the fundus of the user.
  • the present application further provides a user information extraction apparatus, including:
  • an image acquisition module, configured to acquire an image containing at least one digital watermark;
  • an information acquisition module, configured to acquire at least one piece of user-related information corresponding to a user contained in the digital watermark in the image;
  • a projection module, configured to project the user-related information to the fundus of the user.
  • in the method and apparatus of the embodiments of the present application, the user-related information corresponding to an occasion is extracted from a digital watermark and projected to the fundus of the user, so that the user does not need to memorize the user-related information, avoiding the inconvenience caused by forgetting it.
  • FIG. 1 is a flowchart of steps of a user information extraction method according to an embodiment of the present application;
  • FIG. 2a-2c are schematic diagrams of an application of the user information extraction method according to an embodiment of the present application;
  • FIG. 3a and FIG. 3b are schematic diagrams of another application of the user information extraction method according to an embodiment of the present application;
  • FIG. 4a and FIG. 4b are schematic diagrams of a light spot pattern used in the user information extraction method and of a fundus image containing the light spot pattern, according to an embodiment of the present application;
  • FIG. 5 is a schematic structural block diagram of a user information extraction apparatus according to an embodiment of the present application;
  • FIG. 6a and FIG. 6b are schematic structural block diagrams of two other user information extraction apparatuses according to embodiments of the present application;
  • FIG. 7a is a schematic structural block diagram of a position detection module used in a user information extraction apparatus according to an embodiment of the present application;
  • FIG. 7b is a schematic structural block diagram of another user information extraction apparatus according to an embodiment of the present application;
  • FIG. 7c and FIG. 7d are schematic diagrams of the position detection performed by a user information extraction apparatus according to an embodiment of the present application;
  • FIG. 8 is a schematic diagram of a user information extraction apparatus applied to glasses according to an embodiment of the present application;
  • FIG. 9 is a schematic diagram of another user information extraction apparatus applied to the glasses according to an embodiment of the present application;
  • FIG. 10 is a schematic diagram of yet another user information extraction apparatus applied to the glasses according to an embodiment of the present application;
  • FIG. 12 is a flowchart of steps of a user information embedding method according to an embodiment of the present application.
  • FIG. 13 is a schematic block diagram showing the structure of a user information embedding apparatus according to an embodiment of the present application.
  • the embodiment of the present application provides a user information extraction method, including:
  • S120 acquires an image that includes at least one digital watermark
  • S140 acquires at least one piece of user-related information corresponding to a user contained in the digital watermark in the image
  • S160 projects the user-related information to the fundus of the user.
  • in the embodiment of the present application, the user-related information related to the user is obtained from the digital watermark of an image and projected to the user's fundus, so that the user can obtain the corresponding user-related information in a confidential manner without special memorization, which improves the user experience.
  • S120 acquires an image including at least one digital watermark.
  • 1) the image is acquired by shooting: an object seen by the user may be photographed by a smart glasses device; for example, when the user looks at the image, the smart glasses device captures it. 2) the image is acquired by means of reception.
  • the image may also be acquired by another device or by interaction with a device that displays the image.
  • S140 acquires at least one piece of user-related information corresponding to a user contained in the digital watermark in the image.
  • the digital watermark in the image may be analyzed by, for example, a personal private key and a public or private watermark extraction method to extract the user related information.
  • the image may be sent to the outside, for example, to a cloud server and/or a third-party authority, and the digital watermark of the image is extracted by the cloud server or the third-party authority.
  • correspondingly, the user-related information may be received from the outside, for example, from the cloud server and/or the third-party authority that extracts the digital watermark of the image.
  • the image corresponds to a graphical user interface displayed by a device.
  • the graphical user interface may be, for example, a lock screen interface of the device (as shown in FIG. 2a and FIG. 3a), or may also be an input interface for user authentication information such as an application or website password.
  • the graphical user interface includes an input interface of the user related information.
  • for example, the track key 230 and the number keys 330 shown in FIG. 2a and FIG. 3a are provided for the user to input the corresponding user-related information.
  • the user related information includes user authentication information corresponding to the image.
  • the user authentication information may be, for example, the user password described above, or specific actions (such as a hand gesture, an overall body posture, etc.).
  • the user authentication information "corresponding to the image" may be, for example: the image is a lock screen interface displayed by an electronic device (such as a mobile phone, a tablet, or a notebook computer), the lock screen interface includes a user authentication information input prompt screen prompting the user to input the corresponding user authentication information, and the user-related information obtained by the user according to the image is the user authentication information to be input; or, for example, an image containing a watermark is set in the vicinity of an access control requiring a password (the image may be a still image generated by, for example, printing, or an electronic image displayed by an electronic device), and the user can obtain the password information of the access control from the image by the above method.
  • the user-related information may also be other information. For example, the image is a user environment interface displayed by an electronic device (such as a computer) shared by multiple people; when the user uses the method of the embodiment of the present application, the entry information of the application corresponding to the user on that user environment interface can be obtained from the image, which is convenient for the user.
  • in a possible implementation manner, the method may further include user authentication to confirm the identity of the user, so that the step S140 can obtain the user-related information corresponding to the user.
  • for example, the user uses authenticated smart glasses to implement the functions of the steps of the embodiments of the present application.
  • S160 projects the user-related information to the fundus of the user.
  • in order to enable the user to obtain the user-related information in a confidential situation, the embodiment of the present application projects the user-related information to the user's fundus, so that the user obtains the corresponding user-related information.
  • the projection may be to directly project the user-related information to the user's fundus through the projection module.
  • the projecting may also display the user-related information in a location that only the user can see (for example, a display surface of smart glasses), and project the user-related information to the user's fundus through the display surface.
  • the first method has higher privacy because the user-related information does not need to pass through an intermediate display, but directly reaches the user's fundus.
  • the present embodiment is further described below.
  • in a possible implementation manner, the projecting the user-related information to the fundus of the user includes: projecting the user-related information; and adjusting at least one projection imaging parameter of an optical path between the projection position and the eye of the user until the user-related information satisfies at least one set first definition standard at the fundus of the user.
  • the sharpness criteria described herein can be set according to sharpness measurement parameters commonly used by those skilled in the art, such as parameters such as the effective resolution of the image.
  • the adjusting at least one projection imaging parameter of the optical path between the projection position and the eye of the user includes:
  • at least one imaging parameter of at least one optical device on the optical path between the projection position and the user's eye, and/or the position of the optical device in the optical path, is adjusted.
  • the imaging parameters described herein include the focal length of the optical device, the optical axis direction, and the like.
  • the user-related information can be properly projected onto the user's fundus, for example by adjusting the focal length of the optics such that the user-related information is clearly imaged at the user's fundus.
  • "Clear" herein refers to satisfying the at least one set first definition standard.
  • the stereoscopic display effect of the user-related information can also be achieved by projecting the user-related information to the two eyes with a certain deviation. At this time, for example, the optical axis parameter of the optical device can be adjusted.
  • in a possible implementation manner, the step S160 further includes: when the optical axis direction of the eye changes, transmitting the user-related information to the fundus of the user corresponding to the different positions of the pupil.
  • the function of the foregoing step may be implemented by a curved optical device such as a curved beam splitter; however, the content displayed through a curved optical device is generally deformed, so
  • in a possible implementation manner, the step S160 further includes:
  • pre-processing the projected user-related information so that it carries an inverse deformation opposite to the deformation; after passing through the curved optical device described above, the inverse deformation cancels the deformation effect of the curved optical device, so that the user-related information received by the user's eyes is presented as intended.
  • the user-related information projected into the eyes of the user does not need to be aligned with the image.
  • when the user-related information is a specific action at a specific location, for example, drawing a specific trajectory at a specific location as shown in FIG. 2b, the user-related information needs to be displayed in alignment with the image. Therefore, in a possible implementation manner of the embodiment of the present application, the step S160 includes:
  • the projected user-related information is aligned with the image seen by the user at the fundus of the user.
  • in step S160, after the user-related information 220 is aligned with the image 210 (presenting the user-related information at the corresponding position on the image), it is projected onto the fundus of the user, so that the user sees the screen shown in FIG. 2b, wherein the trajectory shown by the dotted line is the user-related information 220 projected onto the user's fundus.
  • for example, the user first sees the image 310 shown in FIG. 3a; the image is, for example, a lock screen interface of a mobile phone device, and the corresponding user-related information 320 is acquired according to the image 310, for example, the dotted-line frame shown in FIG. 3b, wherein the numeric key 330 in the circular frame is the number to be input next.
  • in a possible implementation manner, the aligning the projected user-related information with the image seen by the user at the fundus of the user includes: aligning, at the user's fundus, the projected user-related information with the image seen by the user according to the position of the user's gaze point relative to the user.
  • the position corresponding to the user's gaze point is the location where the image is located.
  • for example, the current gaze point position of the user may be detected with a depth sensor (such as infrared ranging).
  • the step of detecting the current gaze point position of the user by the method iii) includes: collecting an image of the user's fundus;
  • the definition standard is the definition standard commonly used by those skilled in the art described above, which may be the same as or different from the first definition standard described above.
  • the image presented at the "fundus" here is mainly an image presented on the retina, which may be an image of the fundus itself, or an image of another object projected onto the fundus, such as the light spot pattern mentioned below.
  • adjusting the at least one imaging parameter of the optical path between the eye and the collection position includes adjusting the focal length of an optical device on that optical path and/or the position of the optical device in the optical path, so that an image of the fundus satisfying at least one set second definition standard is obtained when the optical device is at a certain position or in a certain state. This adjustment can be continuous and in real time.
  • the optical device may be a focal-length-adjustable lens, which performs adjustment of its focal length by adjusting its own refractive index and/or shape. Specifically: 1) the focal length is adjusted by adjusting the curvature of at least one surface of the focal-length-adjustable lens, for example, by increasing or decreasing the liquid medium in the cavity formed by a double-layer transparent membrane; 2) the focal length is adjusted by changing the refractive index of the focal-length-adjustable lens,
  • for example, the focal-length-adjustable lens is filled with a specific liquid crystal medium, and the arrangement of the liquid crystal medium is adjusted by adjusting the voltage of the corresponding electrode of the liquid crystal medium, thereby changing the refractive index of the focal-length-adjustable lens.
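Both adjustments act on the focal length through the thin-lens lensmaker's equation, 1/f = (n - 1)(1/R1 - 1/R2). The following sketch (illustrative only; the function name is an assumption, and the thin-lens approximation is a simplification) shows the relationship:

```python
def focal_length(n, r1, r2):
    """Thin-lens focal length via the lensmaker's equation.

    n  -- refractive index of the lens medium
    r1 -- radius of curvature of the first surface (positive if
          convex toward the incoming light)
    r2 -- radius of curvature of the second surface
    Returns f in the same units as r1 and r2.
    """
    return 1.0 / ((n - 1.0) * (1.0 / r1 - 1.0 / r2))
```

For a symmetric biconvex lens (n = 1.5, R1 = 100 mm, R2 = -100 mm) this gives f = 100 mm, and raising the refractive index shortens the focal length, which is the effect the liquid-crystal voltage adjustment above exploits.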
  • the optical device may be: a lens group for performing adjustment of the focal length of the lens group itself by adjusting the relative position between the lenses in the lens group.
  • one or more of the lenses in the lens group are the focus adjustable lenses described above.
  • the optical path parameters of the system can also be changed by adjusting the position of the optical device on the optical path.
  • in a possible implementation manner, the analyzing the collected image of the fundus further includes:
  • An image of the collected fundus is analyzed to find an image that satisfies at least one set of second sharpness criteria
  • the optical parameters of the eye are calculated based on the image that satisfies at least one of the set second sharpness criteria, and the imaging parameters that are known when the image that satisfies at least one of the set second sharpness criteria is obtained.
  • by adjusting the at least one imaging parameter of the optical path between the collection position and the user's eye, an image satisfying at least one set second definition standard can be obtained; the collected images of the fundus are analyzed to find that image, and the optical parameters of the eye can be calculated from it together with the known optical path parameters.
  • in a possible implementation manner, the step of detecting a current gaze point location of the user may further include: projecting a light spot to the fundus of the user.
  • the projected spot may be used only to illuminate the fundus without a specific pattern.
  • the projected spot can also include a feature rich pattern.
  • the rich features of the pattern facilitate detection and improve detection accuracy.
  • the spot is an infrared spot that is invisible to the eye.
  • light other than the eye-invisible light in the projected spot can be filtered out.
  • in a possible implementation manner, the method of the embodiment of the present application may further comprise the step of:
  • controlling the brightness of the projected spot according to the result of analyzing the collected image.
  • the analysis result includes, for example, characteristics of the collected image, including the contrast of image features, texture features, and the like.
  • a special case of controlling the brightness of the projected spot is to start or stop the projection.
  • for example, when the observer keeps gazing at one point, the projection can be stopped periodically; or, when the observer's fundus is bright enough, the projection can be stopped and the fundus information used to detect the distance from the eye's current gaze point to the eye.
  • the brightness of the projected spot can be controlled according to the ambient light.
  • in a possible implementation manner, the analyzing the collected image of the fundus further includes:
  • the calibration of the fundus image is performed to obtain at least one reference image corresponding to the image presented by the fundus.
  • the captured image is compared with the reference image to obtain the image that satisfies at least one set second sharpness criterion.
  • the image satisfying at least one set second sharpness standard may be an image obtained with the smallest difference from the reference image.
  • the difference between the currently obtained image and the reference image can be calculated by an existing image processing algorithm, for example, using a classical phase difference autofocus algorithm.
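The comparison against the calibrated reference image can be sketched as follows (a minimal illustration with hypothetical names, not the patent's implementation; a real system would use a robust measure such as the phase-difference autofocus algorithm mentioned above):

```python
def image_difference(img, ref):
    """Sum of squared pixel differences between two equal-size images."""
    return sum((a - b) ** 2 for a, b in zip(img, ref))


def best_match(candidates, ref):
    """Index of the candidate image with minimal difference to ref,
    i.e. the one taken to satisfy the second definition standard."""
    return min(range(len(candidates)),
               key=lambda i: image_difference(candidates[i], ref))
```

Images are represented here as flat sequences of pixel intensities; the candidate closest to the reference is selected as the clearest fundus image.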
  • the optical parameters of the eye may include the optical axis direction of the eye, derived from the features of the eye when the image satisfying the at least one set second definition standard is captured.
  • the features of the eye here may be obtained from the image that satisfies at least one of the set second definition criteria, or may be otherwise obtained.
  • the gaze direction of the user's eye line of sight can be obtained according to the optical axis direction of the eye.
  • the optical axis direction of the eye can be obtained from the features of the fundus when the image satisfying at least one set second definition standard is obtained, and determining the optical axis direction through the features of the fundus has higher accuracy.
  • the size of the spot pattern may be larger than the fundus viewable area or smaller than the fundus viewable area, where:
  • when the area of the light spot pattern is smaller than or equal to the fundus visible area, a classical feature point matching algorithm, for example, the Scale Invariant Feature Transform (SIFT) algorithm, may be used to determine the position of the spot pattern on the captured image relative to the original spot pattern;
  • when the area of the light spot pattern is larger than the fundus visible area, the direction of the observer's line of sight can be determined by deriving the optical axis direction of the eye from the position of the spot pattern on the captured image relative to the original spot pattern (obtained by image calibration).
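As an illustration of this alignment idea (a brute-force sketch under a simplifying one-dimensional assumption; as the text notes, a real system would use a feature matcher such as SIFT), the shift of the captured spot pattern relative to the original can be estimated by minimizing the squared error over candidate offsets:

```python
def estimate_shift(original, captured, max_shift):
    """Offset dx in [-max_shift, max_shift] best aligning original[i]
    with captured[i + dx], by mean squared error over the overlap."""
    best_dx, best_err = 0, float("inf")
    for dx in range(-max_shift, max_shift + 1):
        err, count = 0, 0
        for i in range(len(original)):
            j = i + dx
            if 0 <= j < len(captured):
                err += (original[i] - captured[j]) ** 2
                count += 1
        if count and err / count < best_err:
            best_err, best_dx = err / count, dx
    return best_dx
```

The recovered offset is the quantity that, in the description above, maps to the optical axis direction of the eye.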
  • the optical axis direction of the eye may also be obtained according to the feature of the eye pupil when the image satisfying the at least one set second definition standard is obtained.
  • the features of the eye pupil may be obtained from the image satisfying the at least one set second definition standard, or may be obtained separately.
  • in a possible implementation manner, a calibration of the optical axis direction of the eye may also be included, so as to determine the direction of the optical axis of the eye more accurately.
  • the known imaging parameters include fixed imaging parameters and real-time imaging parameters, wherein the real-time imaging parameters are the parameter information of the optical device at the time when the image satisfying the at least one set second definition standard is acquired; this parameter information may be recorded in real time when that image is acquired.
  • the calculated distance from the eye focus to the eye can be combined with the optical axis direction of the eye (the specific process will be described in detail in conjunction with the device section) to obtain the position of the eye gaze point.
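Combining the two quantities can be sketched as follows (an illustrative computation with hypothetical names, not text from the application): the gaze point lies along the eye's optical axis at the focusing distance from the eye.

```python
import math


def gaze_point(eye_pos, axis_dir, distance):
    """3-D gaze point: start at the eye position and move `distance`
    along the (normalized) optical-axis direction vector."""
    norm = math.sqrt(sum(c * c for c in axis_dir))
    return tuple(p + distance * c / norm
                 for p, c in zip(eye_pos, axis_dir))
```

For an eye at the origin looking along +z with a focusing distance of 5, the gaze point is (0, 0, 5).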
  • in a possible implementation manner, the user-related information may be stereoscopically presented to the user in the step S160.
  • the stereoscopically displayed content may be the same information for both eyes; by adjusting the projection positions in step S160, the information seen by the two eyes of the user carries parallax, forming a stereoscopic display effect.
  • in another possible implementation, the user-related information includes stereoscopic information corresponding respectively to the two eyes of the user, and in step S160 the corresponding user-related information is projected to each of the two eyes of the user. That is, the user-related information includes left-eye information corresponding to the left eye of the user and right-eye information corresponding to the right eye of the user; during projection, the left-eye information is projected to the left eye of the user and the right-eye information to the right eye of the user, so that the user-related information seen by the user has a suitable stereoscopic display effect, resulting in a better user experience.
  • the stereoscopic projection described above allows the user to view the three-dimensional spatial information.
  • for example, when the user is required to perform a specific gesture at a specific position in three-dimensional space to correctly input the user-related information, the above method of the embodiment of the present application enables the user to see stereoscopic user-related information and learn the specific location and the specific gesture, so that the user can perform the prompted gesture at that location. Other people may see the gesture performed by the user, but since they do not know the three-dimensional spatial information, the user-related information remains well protected.
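As a toy illustration of the parallax adjustment described above, the per-eye horizontal offsets for making identical content appear at a chosen virtual depth can be computed with a simple pinhole model. The interpupillary distance, depth, and focal-length values are hypothetical, and the real device adjusts optical rather than pixel parameters:

```python
def eye_offsets(ipd_mm, depth_mm, focal_px):
    """Horizontal pixel offsets for the left/right eye projections so
    that identical content appears at depth_mm in front of the user.
    Pinhole model: total disparity = focal_px * IPD / depth."""
    disparity = focal_px * ipd_mm / depth_mm
    # Shift the left-eye image right and the right-eye image left.
    return disparity / 2.0, -disparity / 2.0
```

For a hypothetical 64 mm interpupillary distance, a 1 m virtual depth, and a 500 px focal length, each eye's projection is offset by 16 px in opposite directions.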
  • the embodiment of the present application further provides a user information extraction apparatus 500, including: an image acquisition module 510, configured to acquire an image including at least one digital watermark;
  • An information obtaining module 520 configured to acquire at least one user-related information corresponding to a user included in the digital watermark in the image
  • a projection module 530 is configured to project the user-related information to the fundus of the user.
  • the device of the embodiment of the present application obtains the user-related information related to the user from the digital watermark of an image and projects it to the user's fundus, so that the user can obtain the corresponding user-related information in a confidential manner without special memorization, which is convenient for the user and improves the user experience.
  • the device of the embodiment of the present application may be, for example, a wearable device used near the user's eyes, such as smart glasses.
  • the image acquisition module 510 may take multiple forms. For example, as shown in FIG. 6a, the image acquisition module 510 includes a shooting sub-module 511 for capturing the image.
  • the shooting sub-module 511 can be, for example, a camera of smart glasses for capturing images seen by a user.
  • the image acquisition module 510 includes:
  • a first communication sub-module 512 is configured to receive the image.
  • the image may be acquired by another device and then sent to the device of the embodiment of the present application.
  • the information acquiring module 520 may take various forms. For example, as shown in FIG. 6a, the information acquiring module 520 includes: an information extraction sub-module 521, configured to extract the user-related information from the image.
  • the information extraction sub-module 521 can analyze the digital watermark in the image by, for example, a personal private key and a public or private watermark extraction method, and extract the user-related information.
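As a hedged sketch of what a key-based extraction might look like, the toy scheme below hides and recovers an ASCII message in pixel least-significant bits visited in a key-seeded pseudo-random order. Real watermarking systems use far more robust transforms; every name and parameter here is illustrative, not the patent's method:

```python
import numpy as np

def _bit_positions(key, size, n_bits):
    # Key-seeded pseudo-random pixel order shared by embedder and extractor.
    return np.random.default_rng(key).permutation(size)[:n_bits]

def embed_watermark(pixels, key, message):
    """Hide an ASCII message in pixel least-significant bits (toy scheme)."""
    bits = np.unpackbits(np.frombuffer(message.encode("ascii"), dtype=np.uint8))
    flat = pixels.reshape(-1).copy()
    idx = _bit_positions(key, flat.size, bits.size)
    flat[idx] = (flat[idx] & 0xFE) | bits
    return flat.reshape(pixels.shape)

def extract_watermark(pixels, key, n_chars):
    """Recover the message; only the holder of `key` knows the positions."""
    idx = _bit_positions(key, pixels.size, n_chars * 8)
    bits = pixels.reshape(-1)[idx] & 1
    return np.packbits(bits).tobytes().decode("ascii")
```

Extraction with the same private key recovers the user-related information; without the key, the bit positions are unknown and the payload stays hidden.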
  • the information acquiring module 520 includes: a second communication sub-module 522, configured to: send the image to the outside; and receive, from the outside, the user-related information contained in the image.
  • the image may be sent to the outside, for example, to a cloud server and/or a third-party authority; the cloud server or the third-party authority extracts the user-related information from the digital watermark of the image and sends it back to the second communication sub-module 522 of the embodiment of the present application.
  • the functions of the first communication submodule 512 and the second communication submodule 522 may be implemented by the same communication module.
  • the image corresponds to a graphical user interface displayed by a device.
  • the graphical user interface includes an input interface for the user related information.
  • the user-related information includes user authentication information corresponding to the image.
  • before the user-related information is acquired, identity confirmation of the user may be performed. For example, the smart glasses first authenticate the user so that they know the identity of the wearer; the user-related information is then extracted by the information acquiring module 520, so that one pair of glasses can acquire, on each of the user's own or public devices, the user-related information corresponding to that user.
  • the projection module 530 includes:
  • An information projection submodule 531 configured to project the user related information
  • a parameter adjustment sub-module 532, configured to adjust at least one projection imaging parameter of the optical path between the projection position and the eye of the user until the image formed by the user-related information at the fundus of the user satisfies at least one set first definition standard.
  • the parameter adjustment submodule 532 includes:
  • At least one tunable lens device has an adjustable focal length and/or an adjustable position on the optical path between the projected position and the user's eye.
  • the projection module 530 includes:
  • a curved surface beam splitting device 533 is configured to respectively transmit the user-related information to the fundus of the user corresponding to the position of the pupil when the optical axis direction of the eye is different.
  • the projection module 530 includes:
  • An inverse deformation processing sub-module 534, configured to perform, on the user-related information, inverse deformation processing corresponding to the positions of the pupil at different optical axis directions of the eye, so that the fundus receives the user-related information that needs to be presented.
  • the projection module 530 includes:
  • An alignment adjustment sub-module 535 is configured to align the projected user-related information with the image seen by the user in the fundus of the user.
  • in a possible implementation, the device further includes: a location detecting module 540, configured to detect the position of the user's gaze point relative to the user; the alignment adjustment sub-module 535 is configured to align, at the fundus of the user, the projected user-related information with the image seen by the user, according to the position of the gaze point relative to the user.
  • the location detecting module 540 can have multiple implementation manners, for example, the devices corresponding to methods i)-iii) in the method embodiment.
  • the embodiment of the present application further illustrates the position detecting module corresponding to method iii) through the embodiments corresponding to FIG. 7a to FIG. 7d, FIG. 8 and FIG. 9.
  • the location detection module 700 includes:
  • a fundus image collection sub-module 710 configured to collect an image of the user's fundus
  • an adjustable imaging sub-module 720, configured to adjust at least one imaging parameter of the optical path between the fundus image collection position and the user's eye, until an image satisfying at least one set second sharpness criterion is captured;
  • An image processing sub-module 730, configured to analyze the collected fundus image to obtain the imaging parameters of the optical path between the fundus image collection position and the eye, together with at least one optical parameter of the eye, corresponding to the image satisfying the at least one set second definition standard, and to calculate the position of the user's current gaze point relative to the user.
  • the position detecting module 700 analyzes and processes the image of the fundus of the eye to obtain the optical parameters of the eye at the time the fundus image collection sub-module obtains an image satisfying the at least one set second sharpness standard, so that the position of the eye's current gaze point can be calculated.
  • the "eye funda” presented here is mainly an image presented on the retina, which may be an image of the fundus itself, or may be an image of other objects projected to the fundus.
  • the eyes here can be the human eye or the eyes of other animals.
  • in a possible implementation, the fundus image collection sub-module 710 is a micro camera; in another possible implementation, the fundus image collection sub-module 710 can also directly use a photosensitive imaging device, such as a CCD or CMOS device.
  • the adjustable imaging sub-module 720 includes: an adjustable lens device 721 located on the optical path between the eye and the fundus image collection sub-module 710, with an adjustable focal length and/or an adjustable position on the optical path. Through the adjustable lens device 721, the equivalent focal length of the system between the eye and the fundus image collection sub-module 710 becomes adjustable, and by adjusting the adjustable lens device 721, the fundus image collection sub-module 710 obtains an image satisfying the at least one set second sharpness criterion at a certain position or state of the adjustable lens device 721. In the present embodiment, the adjustable lens device 721 is adjusted continuously in real time during the detection process.
  • in a possible implementation, the adjustable lens device 721 is a focus adjustable lens that adjusts its own focal length by adjusting its own refractive index and/or shape. Specifically: 1) the focal length is adjusted by adjusting the curvature of at least one surface of the focus adjustable lens, for example by increasing or decreasing the liquid medium in a cavity formed by a double transparent layer; 2) the focal length is adjusted by changing the refractive index of the focus adjustable lens, for example by filling the lens with a specific liquid crystal medium and adjusting the voltage of the electrodes corresponding to the liquid crystal medium to change the arrangement of the liquid crystal medium, thereby changing the refractive index of the focus adjustable lens.
  • in another possible implementation, the adjustable lens device 721 includes: a lens group composed of a plurality of lenses, the relative positions between the lenses in the lens group being adjusted to complete the adjustment of the focal length of the lens group itself.
  • the lens group may also include a lens whose imaging parameters such as its own focal length are adjustable.
  • in addition to changing the optical path parameters of the system by adjusting the characteristics of the adjustable lens device 721 itself, the optical path parameters of the system can also be changed by adjusting the position of the adjustable lens device 721 on the optical path.
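Both focal-length adjustment routes described above (changing the surface curvature and changing the refractive index) act through the thin-lens lensmaker's equation; a small sketch under the thin-lens approximation, with illustrative values:

```python
def lensmaker_focal_length(n, r1, r2):
    """Thin-lens focal length from refractive index n and the radii
    of curvature r1, r2 of the two surfaces (same length units):
    1/f = (n - 1) * (1/r1 - 1/r2).
    Tuning either the curvature or n retunes f."""
    return 1.0 / ((n - 1.0) * (1.0 / r1 - 1.0 / r2))
```

For a symmetric biconvex lens (r1 = 0.1 m, r2 = -0.1 m) with n = 1.5 the focal length is 0.1 m; raising the refractive index, as a liquid-crystal medium can, shortens the focal length.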
  • the adjustable imaging sub-module 720 further includes: a beam splitting unit 722, configured to form optical transmission paths between the eye and the observation object, and between the eye and the fundus image collection sub-module 710. This allows the optical path to be folded, reducing the size of the system while minimizing impact on the user's other visual experience.
  • the beam splitting unit includes: a first beam splitting unit, located between the eye and the observation object, configured to transmit the light from the observation object to the eye and to transmit the light from the eye to the fundus image collection sub-module; the first beam splitting unit may be a beam splitter, a beam-splitting optical waveguide (including an optical fiber), or another suitable beam splitting device.
  • in a possible implementation, the image processing sub-module 730 of the system includes an optical path calibration unit, configured to calibrate the optical path of the system, for example, aligning the optical axis of the optical path, to ensure measurement accuracy.
  • the image processing sub-module 730 includes:
  • the image analyzing unit 731 is configured to analyze an image obtained by the fundus image collection sub-module, and find an image that satisfies at least one set second definition standard;
  • a parameter calculating unit 732, configured to calculate the optical parameters of the eye according to the image satisfying the at least one set second sharpness criterion and the imaging parameters known to the system at the time that image is obtained.
  • the fundus image collection sub-module 710 can obtain an image satisfying the at least one set second definition standard through the adjustable imaging sub-module 720, but that image needs to be found by the image analysis unit 731; at that point, the optical parameters of the eye can be calculated from the found image and the optical path parameters known to the system.
  • the optical parameters of the eye may include the optical axis direction of the eye.
  • the system further includes: a projection sub-module 740, configured to project a light spot to the fundus.
  • the function of the projection sub-module can be implemented, for example, by a pico projector.
  • the spot projected here may have no specific pattern and be used only to illuminate the fundus.
  • the projected spot includes a feature-rich pattern.
  • the rich features of the pattern facilitate inspection and improve detection accuracy.
  • the spot is an infrared spot that is invisible to the eye.
  • the exit surface of the projection sub-module may be provided with an invisible light transmission filter for the eye.
  • the incident surface of the fundus image collection sub-module is provided with an eye invisible light transmission filter.
  • the image processing sub-module 730 further includes:
  • the projection control unit 734 is configured to control the brightness of the projection spot of the projection sub-module 740 according to the result obtained by the image analysis unit 731.
  • the projection control unit 734 can adaptively adjust the brightness according to the characteristics of the image obtained by the fundus image collection sub-module 710.
  • the characteristics of the image include the contrast of the image features as well as the texture features.
  • a special case of controlling the brightness of the projection spot of the projection sub-module 740 is to open or close the projection sub-module 740.
  • for example, the projection sub-module 740 can be closed periodically when the user keeps gazing at a point; or, when the user's fundus is bright enough, the illumination source can be turned off and the distance from the eye's current line-of-sight focus to the eye can be detected using fundus information alone.
  • the projection control unit 734 can also control the brightness of the projection spot of the projection sub-module 740 according to the ambient light.
  • the image processing sub-module 730 further includes: an image calibration unit 733, configured to perform calibration of the fundus image to obtain at least one reference image corresponding to the image presented by the fundus.
  • the image analyzing unit 731 compares the image obtained by the fundus image collection sub-module 710 with the reference image to obtain the image satisfying the at least one set second definition standard.
  • the image satisfying at least one set second definition standard may be an image obtained with the smallest difference from the reference image.
  • the difference between the currently obtained image and the reference image is calculated by an existing image processing algorithm, for example, using a classical phase difference autofocus algorithm.
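A minimal sketch of the comparison described above: among candidate fundus frames captured while the adjustable lens sweeps, select the one with the smallest difference from the calibrated reference image. Sum-of-squared-differences stands in here for the phase-difference autofocus algorithm mentioned in the text; names and data are illustrative:

```python
import numpy as np

def best_frame(frames, reference):
    """Return the index of the fundus frame with the smallest
    sum-of-squared-differences from the calibrated reference image."""
    def ssd(a, b):
        d = np.asarray(a, dtype=np.float64) - np.asarray(b, dtype=np.float64)
        return float((d * d).sum())
    return min(range(len(frames)), key=lambda i: ssd(frames[i], reference))
```

The selected frame plays the role of the image satisfying the second definition standard, and the lens state at which it was captured supplies the real-time imaging parameters.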
  • in a possible implementation, the parameter calculation unit 732 includes: an eye optical axis direction determining sub-unit 7321, configured to obtain the direction of the optical axis of the eye according to the features of the eye at the time the image satisfying the at least one set second definition standard is obtained.
  • in a possible implementation, the eye optical axis direction determining sub-unit 7321 includes: a first determining sub-unit, configured to obtain the direction of the optical axis of the eye according to the features of the fundus at the time the image satisfying the at least one set second definition standard is obtained. Compared with obtaining the direction of the optical axis of the eye from the features of the pupil and the surface of the eye, determining it from the features of the fundus is more accurate.
  • the size of the spot pattern may be larger or smaller than the fundus viewable area. A classical feature point matching algorithm, for example the Scale Invariant Feature Transform (SIFT) algorithm, may be used to detect the position of the spot pattern on the captured image; the direction of the user's line of sight can then be determined from the direction of the optical axis of the eye, obtained from the position of the spot pattern on the captured image relative to the original spot pattern (obtained by the image calibration unit).
  • in another possible implementation, the eye optical axis direction determining sub-unit 7321 includes: a second determining sub-unit, configured to obtain the direction of the optical axis of the eye according to the features of the pupil of the eye at the time the image satisfying the at least one set second definition standard is obtained.
  • the features of the pupil of the eye may be obtained from the image satisfying the at least one set second definition standard, or may be acquired separately; the direction of the optical axis of the eye is then obtained from the pupil features.
  • in a possible implementation of the embodiment of the present application, the image processing sub-module 730 further includes: an eye optical axis direction calibration unit 735, configured to calibrate the direction of the optical axis of the eye, in order to determine the direction of the optical axis of the eye more accurately.
  • the imaging parameters known to the system include fixed imaging parameters and real-time imaging parameters, where the real-time imaging parameters are the parameter information of the adjustable lens device at the time the image satisfying the at least one set second definition standard is acquired; this parameter information may be recorded in real time when that image is acquired.
  • from the imaging relationship of the eye, formula (1) can be obtained:

    1/d_o + 1/d_e = 1/f_e    (1)

  where d_o and d_e are, respectively, the distance from the eye's current observation object 7010 to the eye equivalent lens 7030 and the distance from the real image 7020 on the retina to the eye equivalent lens 7030; f_e is the equivalent focal length of the eye equivalent lens 7030; and X is the line-of-sight direction of the eye (which may be obtained from the direction of the optical axis of the eye).
  • Figure 7d is a schematic diagram showing the distance from the eye gaze point to the eye according to the known optical parameters of the system and the optical parameters of the eye.
  • the spot 7040 forms a virtual image through the adjustable lens device 721 (the virtual image is not shown in FIG. 7d). Assuming that the distance of this virtual image from the lens is x (not shown in FIG. 7d), the following system of equations can be obtained in combination with formula (1):

    1/d_p - 1/x = 1/f_p
    1/(d_i + x) + 1/d_e = 1/f_e    (2)

  where d_p is the optical equivalent distance from the spot 7040 to the adjustable lens device 721, d_i is the optical equivalent distance from the adjustable lens device 721 to the eye equivalent lens 7030, and f_p is the focal length value of the adjustable lens device 721.
  • from (1) and (2), the distance d_o from the current observation object 7010 (the eye gaze point) to the eye equivalent lens 7030 can be obtained:

    d_o = d_i + x = d_i + d_p * f_p / (f_p - d_p)    (3)
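Assuming the reconstruction of formulas (1)-(3) above, the gaze-point distance follows directly from the recorded real-time imaging parameters; a sketch with hypothetical values, all distances in consistent units:

```python
def gaze_distance(d_i, d_p, f_p):
    """Distance d_o from the eye gaze point to the eye equivalent
    lens, per formula (3): the spot's virtual-image distance is
    x = d_p * f_p / (f_p - d_p), and d_o = d_i + x."""
    x = d_p * f_p / (f_p - d_p)  # virtual-image distance of the spot
    return d_i + x
```

For example, with d_i = 0.02, d_p = 0.05, and f_p = 0.1 (meters, hypothetical), the virtual-image distance x is 0.1 and the gaze point lies 0.12 from the eye equivalent lens.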
  • FIG. 8 shows an embodiment of a position detecting module 800 applied to glasses G according to a possible implementation of the embodiment of the present application, which includes the contents of the embodiment shown in FIG. 7b. Specifically, as can be seen from FIG. 8, in the present embodiment the module 800 is integrated on the right side of the glasses G (without limitation thereto), and includes:
  • the micro camera 810 has the same function as the fundus image collection sub-module described in the embodiment of FIG. 7b, and is disposed on the right outer side of the glasses G so as not to affect the line of sight of the user's normal viewing object;
  • the first beam splitter 820, whose function is the same as that of the first beam splitting unit described in the embodiment of FIG. 7b, is disposed at a certain inclination angle at the intersection of the gaze direction of the eye A and the incident direction of the camera 810, transmitting the light from the observation object entering the eye A and reflecting the light from the eye to the camera 810;
  • the focal length adjustable lens 830 has the same function as the focus adjustable lens described in the embodiment of FIG. 7b, is located between the first beam splitter 820 and the camera 810, and adjusts its focal length value in real time, so that at a certain focal length value the camera 810 can capture an image of the fundus satisfying the at least one set second definition standard.
  • the image processing sub-module is not shown in Fig. 8, and its function is the same as that of the image processing sub-module shown in Fig. 7b.
  • the fundus is illuminated by one illumination source 840.
  • the illumination source 840 is a light source invisible to the eye, preferably a near-infrared light source that has little effect on the eye A and to which the camera 810 is sensitive.
  • since the illumination source 840 is located outside the spectacle frame on the right side, the transfer of the light emitted by the illumination source 840 to the fundus is completed via a second beam splitter 850 together with the first beam splitter 820. The second beam splitter 850 is located before the incident surface of the camera 810, so it also needs to transmit the light from the fundus to the second beam splitter 850.
  • the first beam splitter 820 may have a high infrared reflectance and a high visible light transmittance.
  • for example, an infrared reflecting film may be provided on the side of the first beam splitter 820 facing the eye A to achieve the above characteristics.
  • since the position detecting module 800 is located on the side of the lens of the glasses G away from the eye A, the lens can be regarded as part of the eye A when calculating the optical parameters of the eye, and it is not necessary to know the optical characteristics of the lens.
  • in other embodiments of the present application, the position detecting module 800 may be located on the side of the lens of the glasses G close to the eye A; in this case, the optical characteristic parameters of the lens need to be obtained in advance, and the influence of the lens needs to be considered when calculating the gaze point distance.
  • the light emitted by the illumination source 840 is reflected by the second beam splitter 850, projected through the focal length adjustable lens 830, reflected by the first beam splitter 820, passes through the lens of the glasses G into the user's eye, and finally arrives at the retina of the fundus; the camera 810 captures the fundus image through the pupil of the eye A via the optical path formed by the first beam splitter 820, the focal length adjustable lens 830, and the second beam splitter 850.
  • in other embodiments of the present application, the position detecting module and the projection module may both include: a device having a projection function (the information projection sub-module of the projection module described above, and the projection sub-module of the position detecting module), and an imaging device with adjustable imaging parameters (the parameter adjustment sub-module of the projection module described above, and the adjustable imaging sub-module of the position detecting module); therefore, the functions of the position detecting module and the projection module may be implemented by the same device.
  • in a possible implementation, the illumination source 840 can be used, in addition to providing illumination for the position detecting module, as a light source of the information projection sub-module of the projection module to assist in projecting the user-related information.
  • the illumination source 840 can simultaneously project an invisible light for illumination of the position detecting module, and a visible light for assisting in projecting the user related information;
  • in another possible implementation, the illumination source 840 can also switch between the invisible light and the visible light in a time-sharing manner; in still another possible implementation, the position detecting module can use the projected user-related information to complete the function of illuminating the fundus.
  • in a possible implementation, the first beam splitter 820, the second beam splitter 850, and the focal length adjustable lens 830 can serve as the parameter adjustment sub-module of the projection module, as well as the adjustable imaging sub-module of the position detecting module.
  • in a possible implementation, the focal length adjustable lens 830 may be adjusted by sub-regions, with different regions corresponding respectively to the position detecting module and the projection module, so that their focal lengths may also differ.
  • alternatively, the focal length of the focal length adjustable lens 830 is adjusted as a whole, but the front end of the photosensitive unit (such as a CCD) of the micro camera 810 of the position detecting module is further provided with other optical devices for auxiliary adjustment of the imaging parameters of the position detecting module.
  • in another possible implementation, the optical path from the light emitting surface of the illumination source 840 (that is, the position from which the user-related information is projected) to the eye may be configured to be the same as the optical path from the eye to the micro camera 810; in this case, when the focal length adjustable lens 830 is adjusted until the micro camera 810 receives the clearest fundus image, the user-related information projected by the illumination source 840 is also clearly imaged at the fundus.
  • FIG. 9 is a schematic structural diagram of a position detecting module 900 according to another embodiment of the present application.
  • the present embodiment is similar to the embodiment shown in FIG. 8 in that it includes a micro camera 910, a second beam splitter 920, and a focal length adjustable lens 930, except that the projection sub-module 940 in the present embodiment projects a spot pattern, and the first beam splitter in the embodiment of FIG. 8 is replaced by a curved beam splitter 950 serving as a curved beam splitting device. The curved beam splitter 950 corresponds respectively to the positions of the pupil at different optical axis directions of the eye, and transmits the image presented at the fundus to the fundus image collection sub-module.
  • the camera can capture images of the fundus mixed and superimposed from various angles of the eyeball, but since only the fundus portion passing through the pupil can be clearly imaged on the camera, other parts are out of focus and cannot be clearly imaged, and thus do not seriously interfere with the imaging of the fundus portion; the features of the fundus portion can still be detected. Therefore, compared with the embodiment shown in FIG. 8, the present embodiment can obtain a good fundus image when the eye gazes in different directions, so the position detecting module of the present embodiment has a wider application range and higher detection precision.
  • in the case where the position detecting module and the projection module are multiplexed, similar to the embodiment shown in FIG. 8, the projection sub-module 940 can project the spot pattern and the user-related information simultaneously or in a time-sharing manner, or the position detecting module can use the projected user-related information as the spot pattern for detection. Also similar to the embodiment shown in FIG. 8, the second beam splitter 920, the curved beam splitter 950, and the focal length adjustable lens 930 can serve as the parameter adjustment sub-module of the projection module, as well as the adjustable imaging sub-module of the position detecting module.
  • at this time, the curved beam splitter 950 is further configured to correspond respectively to the positions of the pupil at different optical axis directions of the eye, forming the optical path between the projection module and the fundus. Since the user-related information projected by the projection sub-module 940 is deformed after passing through the curved beam splitter 950, in the present embodiment the projection module includes:
  • an anti-deformation processing module (not shown in FIG. 9), configured to perform, on the user-related information, inverse deformation processing corresponding to the curved beam splitting device, so that the fundus receives the user-related information that needs to be presented.
  • the projection module is configured to project the user-related information stereoscopically to the fundus of the user.
  • the user-related information includes stereoscopic information corresponding to the two eyes of the user, and the projection module respectively projects corresponding user-related information to the two eyes of the user.
  • the user information extracting apparatus 1000 needs to set two sets of projection modules respectively corresponding to the two eyes of the user, including:
  • a first projection module corresponding to a left eye of the user
  • a second projection module corresponding to the user's right eye.
  • the structure of the second projection module 1020 is similar to the structure combined with the function of the position detecting module described above, and can likewise implement the function of the position detecting module and the function of the projection module at the same time. It includes a micro camera 1021, a second beam splitter 1022, a second focal length adjustable lens 1023, and a first beam splitter 1024, with the same functions as described above (the image processing sub-module of the position detecting module is not shown in FIG. 10), except that the projection sub-module here is a second projection sub-module 1025 that projects the user-related information corresponding to the right eye. It can detect the position of the gaze point of the user's eye and clearly project the user-related information corresponding to the right eye to the fundus of the right eye.
  • the structure of the first projection module is similar to that of the second projection module 1020, but it does not have a miniature camera and does not have the function of a composite position detecting module. As shown in FIG. 10, the first projection module includes:
  • the first projection sub-module 1011 is configured to project user-related information corresponding to the left eye to the fundus of the left eye;
  • the first focal length adjustable lens 1013 is configured to adjust the imaging parameters between the first projection sub-module 1011 and the fundus, so that the corresponding user-related information can be clearly presented at the fundus of the left eye and the user can see the user-related information presented in the image;
  • a third beam splitter 1012 configured to perform optical path transfer between the first projection sub-module 1011 and the first focus adjustable lens 1013;
  • the fourth beam splitter 1014 is configured to perform optical path transmission between the first focus adjustable lens 1013 and the left fundus.
  • in this way, the user-related information seen by the user has a suitable stereoscopic display effect, resulting in a better user experience.
  • the stereoscopic projection described above allows the user to view the three-dimensional spatial information. For example, when the user is required to perform a specific gesture in a specific position in the three-dimensional space to correctly input the user-related information, the above method of the embodiment of the present application enables the user to see stereoscopic user-related information, and learn the specific location.
  • FIG. 11 is a schematic structural diagram of another user information extraction apparatus 1100 according to an embodiment of the present application.
  • the specific implementation of the user information extraction apparatus 1100 is not limited in the specific embodiment of the present application.
  • the user information extracting apparatus 1100 may include:
  • a processor 1110, a communication interface 1120, a memory 1130, and a communication bus 1140, where:
  • the processor 1110, the communication interface 1120, and the memory 1130 perform communication with each other via the communication bus 1140.
  • the communication interface 1120 is configured to communicate with a network element such as a client.
  • the processor 1110 is configured to execute the program 1132, and may specifically perform the related steps in the foregoing method embodiments.
  • program 1132 can include program code, the program code including computer operating instructions.
  • the processor 1110 may be a central processing unit (CPU), or an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present application.
  • the memory 1130 is configured to store the program 1132.
  • Memory 1130 may include high speed RAM memory and may also include non-volatile memory, such as at least one disk memory.
  • the program 1132 may be specifically configured to cause the user information extracting apparatus 1100 to perform the following steps: acquiring an image containing at least one digital watermark; acquiring, from the digital watermark in the image, at least one piece of user-related information corresponding to a user; and projecting the user-related information onto the fundus of the user.
  • the embodiment of the present application provides a user information embedding method, including:
  • S1200 embed at least one digital watermark in an image, the digital watermark including at least one user-related information corresponding to at least one user.
  • the digital watermark can be divided into symmetric and asymmetric watermarks according to symmetry.
  • the traditional symmetric watermark embeds and detects the same key, so that once the detection method and key are disclosed, the watermark can be easily removed from the digital carrier.
  • the asymmetric watermarking technique uses a private key to embed the watermark and uses the public key to extract and verify the watermark, making it difficult for an attacker to destroy or remove the watermark embedded with the private key through the public key. Therefore, the asymmetric digital watermark is used in the embodiment of the present application.
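As a loose, non-authoritative sketch of the embed/extract flow only (a single shared seed stands in for the key material here, so this toy does not capture the asymmetric private/public split described above; the pixel-LSB scheme and all names are assumptions for illustration):

```python
import random

def embed_watermark(pixels, bits, key):
    """Toy sketch: write watermark bits into the LSBs of key-selected pixels."""
    out = list(pixels)
    rng = random.Random(key)                      # the key determines positions
    positions = rng.sample(range(len(out)), len(bits))
    for pos, bit in zip(positions, bits):
        out[pos] = (out[pos] & ~1) | bit          # overwrite least-significant bit
    return out

def extract_watermark(pixels, n_bits, key):
    """Recover the embedded bits by re-deriving the same positions from the key."""
    rng = random.Random(key)
    positions = rng.sample(range(len(pixels)), n_bits)
    return [pixels[pos] & 1 for pos in positions]
```

A real asymmetric scheme would instead embed with a private key and let any holder of the public key verify the mark without being able to forge or remove it.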
  • the user related information includes user authentication information corresponding to the image.
  • the image corresponds to a graphical user interface displayed by a device
  • the digital watermark includes user-related information corresponding to the graphical user interface.
  • the graphical user interface includes an input interface of the user related information.
  • the graphical user interface is a lock screen interface of the device.
  • the embodiment of the present application provides a user information embedding apparatus 1300, including: a watermark embedding module 1310, configured to embed at least one digital watermark in an image, where the digital watermark includes at least one piece of user-related information corresponding to at least one user.
  • the user information embedding device 1300 is an electronic device terminal, such as a mobile phone, a tablet computer, a notebook computer, or a desktop computer.
  • the device 1300 further includes:
  • a display module 1320 configured to display a graphical user interface, the image corresponding to the graphical user interface (as shown in FIG. 2a and FIG. 3a);
  • the digital watermark includes the user related information corresponding to the graphical user interface.
  • the graphical user interface includes an input interface of the user related information.
  • for the implementation of the functions of the modules of the user information embedding device 1300, refer to the corresponding descriptions in the embodiments described above; details are not repeated here.
  • the functions, if implemented in the form of a software functional unit and sold or used as a standalone product, may be stored in a computer readable storage medium. Based on this understanding, the technical solution of the present application, or the part of it that contributes over the prior art, may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the methods described in the various embodiments of the present application.
  • the foregoing storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Security & Cryptography (AREA)
  • Software Systems (AREA)
  • Computer Hardware Design (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Optics & Photonics (AREA)
  • Human Computer Interaction (AREA)
  • Technology Law (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Ophthalmology & Optometry (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Eye Examination Apparatus (AREA)
  • Image Processing (AREA)

Abstract

本申请公开了一种用户信息提取方法及用户信息提取装置,所述方法包括:获取一包含至少一数字水印的图像;获取所述图像中所述数字水印包含的与一用户对应的至少一用户相关信息;将所述用户相关信息向所述用户的眼底投射。本申请将一定场合使用的用户相关信息在该对应场合下提取出来并投射到用户眼底,使得用户无需记忆这些用户相关信息,并避免了因遗忘所述用户相关信息带来的不便。

Description

用户信息提取方法及用户信息提取装置

本申请要求于 2013 年 11 月 15 日提交中国专利局、 申请号为 201310572121.9, 发明名称为 "用户信息获取方法及用户信息获取装置" 的中国专利申请的优先权, 其全部内容通过引用结合在本申请中。

技术领域

本申请涉及数字鉴权技术领域, 尤其涉及一种用户信息提取方法及装置。

背景技术

移动或可穿戴式设备基于节能和防止误操作的原因, 通常都会设置屏幕锁定, 而屏幕解锁可以加密或者不加密, 对加密的屏幕进行解锁时通常都需要用户记忆一些特殊的密码、 图案、 动作等, 虽然能够保证安全性, 但很容易发生遗忘的情况, 给用户带来不便。 当然, 除了上述屏幕解锁的场合外, 其它需要输入密码以进行进一步操作的场合下, 也存在上述的问题。

通过数字水印技术可以将一些标识信息 (即数字水印) 直接嵌入数字载体中, 且不影响原载体的使用, 也不容易被探知和修改。 数字水印技术应用于很多方面, 例如版权保护、 防伪、 鉴权、 信息隐藏等。 如果可以将数字水印技术用于安全隐蔽地帮助用户进行密码等输入以获得相应授权, 则可以解决上述因用户遗忘而无法进行鉴权的问题, 提高用户体验。

发明内容
本申请要解决的技术问题是: 提供一种用户信息提取技术, 以在保密的 前提下向用户提供对应场合下使用的用户相关信息, 避免了因用户忘记对应 用户相关信息而造成不便的问题。
为实现上述目的, 第一方面, 本申请提供了一种用户信息提取方法, 其 特征在于, 包括: 获取一包含至少一数字水印的图像;
获取所述图像中所述数字水印包含的与一用户对应的至少一用户相关信息; 将所述用户相关信息向所述用户的眼底投射。
第二方面, 本申请提供了一种用户信息提取装置, 包括:
一图像获取模块, 用于获取一包含至少一数字水印的图像;
一信息获取模块, 用于获取所述图像中所述数字水印包含的与一用户对应的至少一用户相关信息;
一投射模块, 用于将所述用户相关信息向所述用户的眼底投射。
本申请实施例的至少一个技术方案将一定场合使用的用户相关信息在该对应场合下提取出来并投射到用户眼底, 使得用户无需记忆这些用户相关信息, 并避免了因遗忘所述用户相关信息带来的不便。
附图说明

图 1为本申请实施例的一种用户信息提取方法的步骤流程图;
图 2a-2c为本申请实施例的一种用户信息提取方法的应用示意图;
图 3a和 3b为本申请实施例的另一种用户信息提取方法的应用示意图;
图 4a和图 4b为本申请实施例一种用户信息提取方法使用的光斑图案以及在眼底获得的包括所述光斑图案的图像的示意图;
图 5为本申请实施例一种用户信息提取装置的结构示意框图;
图 6a和 6b为本申请实施例另外两种用户信息提取装置的结构示意框图;
图 7a为本申请实施例一种用户信息提取装置使用的位置检测模块的结构示意框图;
图 7b为本申请实施例另一种用户信息提取装置使用的位置检测模块的结构示意框图;
图 7c和图 7d为本申请实施例一种用户信息提取装置使用的位置检测模块的光路示意图;
图 8为本申请实施例一种用户信息提取装置应用在眼镜上的示意图;
图 9为本申请实施例另一种用户信息提取装置应用在眼镜上的示意图;
图 10为本申请实施例又一种用户信息提取装置应用在眼镜上的示意图;
图 11为本申请实施例再一种用户信息提取装置的结构示意图;
图 12为本申请实施例一种用户信息嵌入方法的步骤流程图;
图 13为本申请实施例一种用户信息嵌入装置的结构示意框图。
具体实施方式
本申请的方法及装置结合附图及实施例详细说明如下。
用户在日常的生活中经常需要使用到各种各样的用户相关信息, 例如用 户需要在各电子设备的锁屏界面中输入的用户密码或特定手势、 在登陆一些 网站或应用的账户时需要使用的用户密码、 或者在一些门禁设备中需要使用 的密码信息等各种用户鉴权信息。 用户需要记忆这些各种各样的用户相关信 息, 并且在对应的场合中输入对应用户相关信息才可以进行下一步的操作, 否则会给用户带来很大的不便。 因此在本申请中, 如图 1 所示, 本申请实施 例提供了一种用户信息提取方法, 包括:
S120获取一包含至少一数字水印的图像;
S140获取所述图像中所述数字水印包含的与一用户对应的至少一用户相关信息;
S160将所述用户相关信息向所述用户的眼底投射。
本申请实施例从一图像的数字水印中获取与用户相关的用户相关信息并 投射到用户的眼底, 使得用户不需要特地记忆就可以在需要使用的场合下保 密地获取对应的用户相关信息, 方便了用户的使用, 提高了用户体验。
下面本申请实施例通过以下的实施方式对各步骤进行进一步的描述: S120获取一包含至少一数字水印的图像。
本申请实施例获取所述图像的方式有多种, 例如:
1 )通过拍摄的方式获取所述图像。
在本申请实施例中, 可以通过一智能眼镜设备, 拍摄用户看到的物体, 例如当用户看到所述图像时, 所述智能眼镜设备拍摄到所述图像。 2 )通过接收的方式获取所述图像。
在本申请实施例的一种可能的实施方式中, 还可以通过其它设备获取所 述图像, 或者通过与一显示该图像的设备之间的交互来获取所述图像。
S140获取所述图像中所述数字水印包含的与一用户对应的至少一用户相关信息。
在本申请实施例中, 获取所述用户相关信息的方法有多种, 例如为以下
1 )从所述图像中提取所述用户相关信息。
在本实施方式中, 例如可以通过个人私钥及公开或私有的水印提取方法 来分析所述图像中的数字水印, 提取所述用户相关信息。
2 )将所述图像向外部发送; 并从外部接收所述图像中的所述用户相关信息。
在本实施方式中, 可以将所述图像向外部发送, 例如发送至云端服务器 和 /或一第三方权威机构, 通过所述云端服务器或所述第三方权威机构来提取 所述图像的数字水印中的所述用户相关信息。
在本申请实施例的一种可能的实施方式中, 所述图像与一设备显示的一 图形用户界面对应。 如上面所述, 所述图形用户界面例如可以为所述设备的 锁屏界面 (如图 2a和图 3a所示)、 或者还可以为一个应用或者网站等的密码 等用户鉴权信息的输入界面。
在一种可能的实施方式中, 所述图形用户界面包括一所述用户相关信息 的输入接口。 例如图 2a和图 3a中所示的轨迹键 230和数字键 330, 以便用户 输入对应的用户相关信息。
在本申请实施例的一种可能的实施方式中, 所述用户相关信息包括一与 所述图像对应的用户鉴权信息。
这里, 所述的用户鉴权信息例如可以为上面所述的用户密码、 或特定的 姿势 (如手势、 身体整体姿势等) 等。 所述用户鉴权信息 "与图像对应" 例 如可以为, 所述图像上显示的是一个电子设备(例如手机、 平板电脑、 笔记 本电脑、 台式电脑等) 的锁屏界面, 所述锁屏界面包含有用户鉴权信息输入 提示画面, 提示用户输入对应的用户鉴权信息, 则用户根据该图像获得的所 述用户相关信息即为待输入的用户鉴权信息; 或者例如, 在一个需要输入密 码的门禁附近具有一个包含水印的图像 (该图像可以是例如印刷等产生的静 止的图像, 也可以是通过电子设备显示的电子图像), 用户通过上面的方法可 以通过该图像获取该门禁的密码信息。
当然, 在本申请实施例的其它可能的实施方式中, 所述用户相关信息还 可以为其它信息, 例如, 该图像是一台多人共用的电子设备(如电脑)所显 示的用户环境界面, 当其中一个用户使用本申请实施例的方法时, 其可以从 所述图像上获取到与自己对应的用户环境界面上相应应用的进入信息, 方便 用户使用。
在本申请实施例的一种可能的实施方式中, 在所述步骤 S140之前, 还可 以包括用户认证, 对用户的身份进行确认, 进而使得所述步骤 S140能够获得 与所述用户对应的所述用户相关信息。 例如, 用户使用一经过身份认证的智 能眼镜来实现本申请实施例各步骤的功能。
S160将所述用户相关信息向所述用户的眼底投射。
在本申请实施例中, 为了使得用户可以在保密的场合下得到所述用户相 关信息, 通过将所述用户相关信息向用户眼底投射的方式来使得用户获得对 应的所述用户相关信息。
在一种可能的实施方式中, 该投射可以是将所述用户相关信息通过投射 模块直接投射至用户的眼底。
在另一种可能的实施方式中, 该投射还可以是将所述用户相关信息显示 在只有用户可以看到的位置(例如一智能眼镜的显示面上), 通过该显示面将 所述用户相关信息投射至用户眼底。
其中, 第一种方式由于不需要将用户相关信息通过中间显示, 而是直接 到达用户眼底, 因此其隐私性更高。 下面进一步说明本实施方式, 所述将所 述用户相关信息向所述用户的眼底投射包括: 投射所述用户相关信息;
调整所述投射位置与所述用户的眼睛之间光路的至少一投射成像参数, 直至所述用户相关信息在所述用户的眼底所成的像满足至少一设定的第一清 晰度标准。
这里所述的清晰度标准可以根据本领域技术人员常用的清晰度衡量参数 来设定, 例如图像的有效分辨率等参数。
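作为示意 (这里的度量仅为本示例的假设, 并非本申请限定的清晰度衡量参数), 可以用一个简化的清晰度度量来说明"调节成像参数直至所成的像满足清晰度标准"的选择过程:

```python
def sharpness(image):
    # 简化的清晰度度量: 水平方向相邻像素灰度差的平方和, 数值越大图像越清晰
    return sum((row[i + 1] - row[i]) ** 2
               for row in image for i in range(len(row) - 1))

def pick_clearest(images_by_setting):
    # images_by_setting: {成像参数设置: 该设置下得到的图像}
    # 返回使所成的像最清晰的那组成像参数设置
    return max(images_by_setting, key=lambda s: sharpness(images_by_setting[s]))
```

实际系统中也可以改用有效分辨率等参数作为判据, 流程相同: 逐一调节成像参数并保留满足清晰度标准的结果。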
在本申请实施例的一种可能的实施方式中, 所述调整所述投射位置与所 述用户的眼睛之间光路的至少一投射成像参数包括:
调节所述投射位置与所述用户的眼睛之间光路的至少一光学器件的至少 一成像参数和 /或在光路中的位置。
这里所述成像参数包括光学器件的焦距、 光轴方向等等。 通过该调节, 可以使得所述用户相关信息被合适地投射在用户的眼底, 例如通过调节所述 光学器件的焦距使得所述用户相关信息清晰地在用户的眼底成像。这里的 "清 晰" 指的是满足所述至少一设定的第一清晰度标准。 或者在下面提到的, 需 要进行立体显示时, 除了在生成所述用户相关信息时直接生成带视差的左、 右眼图像外, 还可以与用户的两只眼睛对应的用户相关信息是相同的, 但是 通过具有一定偏差地将所述用户相关信息分别投射至两只眼睛也可以实现所 述用户相关信息的立体显示效果, 此时, 例如可以调节所述光学器件的光轴 参数。
由于用户观看所述用户相关信息时眼睛的视线方向可能会变化, 需要在 用户眼睛的视线方向不同时都能将所述用户相关信息较好的投射到用户的眼 底,因此,在本申请实施例的一种可能的实施方式中,所述步骤 S160还包括: 分别对应所述眼睛光轴方向不同时瞳孔的位置, 将所述用户相关信息向 所述用户的眼底传递。
在本申请实施例的一种可能的实施方式中, 可能会需要通过曲面分光镜 等曲面光学器件来实现所述上述步骤的功能, 但是通过曲面光学器件后待显 示的内容一般会发生变形, 因此,在本申请实施例的一种可能的实施方式中, 所述步骤 S160还包括:
对所述用户相关信息进行与所述眼睛光轴方向不同时瞳孔的位置对应的 反变形处理, 使得眼底接收到需要呈现的所述用户相关信息。
例如: 对投射的用户相关信息进行预处理, 使得投射出的用户相关信息 具有与所述变形相反的反变形, 该反变形效果再经过上述的曲面光学器件后, 与曲面光学器件的变形效果相抵消, 从而用户眼底接收到的用户相关信息是 需要呈现的效果。
在一种可能的实施方式中, 投射到用户眼睛中的用户相关信息不需要与 所述图像进行对齐, 例如, 当需要用户按一定顺序输入一组密码, 如" 1234" 时, 只需要将该组密码投射在用户的眼底被用户看到即可。 但是, 在一些情 况下, 例如, 当所述用户相关信息为一在特定位置完成特定动作, 例如图 2b 中所述的在特定位置画出特定轨迹时, 需要将所述用户相关信息与所述图像 进行对齐显示。 因此, 在本申请实施例的一种可能的实施方式中, 所述步骤 S160包括:
将所述投射的用户相关信息与所述用户看到的图像在所述用户的眼底对 齐。
如图 2a和 2b所示, 在本实施方式中, 用户看到图 2a所示的图像 210, 该图像例如为一手机设备的锁屏界面, 并且根据该图像 210获取了对应的用 户相关信息 220, 所述步骤 S160将所述用户相关信息 220与所述图像 210对 齐后 (将所述用户相关信息呈现在所述图像上对应的位置)投射在用户的眼 底, 使得用户看到了如图 2b所示的画面, 其中, 虛线所示的轨迹为投射在用 户眼底的所述用户相关信息 220。用户根据该用户相关信息在所述锁屏界面上 进行相应的轨迹输入(得到图 2c所示) 之后, 即对手机进行了解锁。
图 3a和 3b为另一个实施方式, 用户首先看到图 3a所示的图像 310, 该图像例如为一手机设备的锁屏界面, 根据该图像 310获取了对应的用户相关信息 320, 例如图 3b所示的虛线框, 其中圆形框中对应的数字键 330为接下来要输入的数字。 为了实现上述的对齐功能, 在一种可能的实施方式中, 所述方法还包括: 检测所述用户的注视点相对于所述用户的位置; 所述将所述投射的用户相关信息与所述用户看到的图像在所述用户的眼底对齐包括: 根据所述用户注视点相对于所述用户的所述位置将所述投射的用户相关信息与所述用户看到的图像在所述用户的眼底对齐。
这里, 由于用户此时正在看所述图像,例如用户的手机锁屏界面, 因此, 用户的注视点对应的位置即为所述图像所在的位置。
本实施方式中, 检测所述用户注视点位置的方式有多种, 例如包括以下几种: i )釆用一个瞳孔方向检测器检测一个眼睛的光轴方向、 再通过一个深度传感器 (如红外测距)得到眼睛注视场景的深度, 得到眼睛视线的注视点位置, 该技术为已有技术, 本实施方式中不再赘述。
ii )分别检测两眼的光轴方向, 再根据所述两眼光轴方向得到用户两眼视 线方向, 通过所述两眼视线方向的交点得到眼睛视线的注视点位置, 该技术 也为已有技术, 此处不再赘述。
iii )根据釆集到眼睛的成像面呈现的满足至少一设定的第二清晰度标准的 图像时图像釆集位置与眼睛之间光路的光学参数以及眼睛的光学参数, 得到 所述眼睛视线的注视点位置, 本申请实施例会在下面给出该方法的详细过程, 此处不再赘述。
当然, 本领域的技术人员可以知道, 除了上述几种形式的注视点检测方 法外, 其它可以用于检测用户眼睛注视点的方法也可以用于本申请实施例的 方法中。
其中, 通过第 iii )种方法检测用户当前的注视点位置的步骤包括: 釆集一所述用户眼底的图像;
进行所述眼底图像釆集位置与所述用户眼睛之间光路的至少一成像参数 的调节直至釆集到一满足至少一设定的第二清晰度标准的图像;
对釆集到的所述眼底的图像进行分析, 得到与所述满足至少一设定的第 二清晰度标准的图像对应的所述眼底图像釆集位置与所述眼睛之间光路的所 述成像参数以及所述眼睛的至少一光学参数, 并计算所述用户当前的注视点 相对于所述用户的位置。
这里所述的第二清晰度标准中, 所述清晰度标准为上面所述本领域技术 人员常用的清晰度标准, 其可以与上面所述的第一清晰度标准相同, 也可以 不同。
通过对眼睛眼底的图像进行分析处理, 得到釆集到满足至少一设定的第 二清晰度标准的图像时眼睛的光学参数, 从而计算得到视线当前的对焦点位 置, 为进一步基于该精确的对焦点的位置对观察者的观察行为进行检测提供 基础。
这里的 "眼底" 呈现的图像主要为在视网膜上呈现的图像, 其可以为眼 底自身的图像, 或者可以为投射到眼底的其它物体的图像, 例如下面提到的 光斑图案。
所述调整投射位置与所述用户的眼睛之间光路的至少一投射成像参数包 括, 通过对眼睛与釆集位置之间的光路上的光学器件的焦距和 /或在光路中的 位置进行调节, 以在该光学器件在某一个位置或状态时获得眼底满足至少一 设定的第二清晰度标准的图像。 该调节可为连续实时的。
在本申请实施例方法的一种可能的实施方式中, 该光学器件可为焦距可 调透镜,用于通过调整该光学器件自身的折射率和 /或形状完成其焦距的调整。 具体为: 1 )通过调节焦距可调透镜的至少一面的曲率来调节焦距, 例如在双 层透明层构成的空腔中增加或减少液体介质来调节焦距可调透镜的曲率; 2 ) 通过改变焦距可调透镜的折射率来调节焦距, 例如焦距可调透镜中填充有特 定液晶介质, 通过调节液晶介质对应电极的电压来调整液晶介质的排列方式, 从而改变焦距可调透镜的折射率。
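上述两种调焦方式可以用经典的薄透镜制造者公式 (lensmaker's equation) 做一个数值示意: 焦距由折射率 n 与两面曲率半径 R1、R2 共同决定, 改变其中任一项即改变焦距 (以下数值仅为示例假设):

```python
def thin_lens_focal_length(n, r1, r2):
    # 薄透镜近似下的透镜制造者公式: 1/f = (n - 1) * (1/R1 - 1/R2)
    return 1.0 / ((n - 1.0) * (1.0 / r1 - 1.0 / r2))

# 方式 1): 改变曲率 (例如向双层透明层构成的空腔中增减液体介质)
f_base   = thin_lens_focal_length(1.5, 0.10, -0.10)  # 对称双凸透镜
f_curved = thin_lens_focal_length(1.5, 0.05, -0.05)  # 曲率增大, 焦距变短

# 方式 2): 改变折射率 (例如调节液晶介质对应电极的电压)
f_denser = thin_lens_focal_length(1.7, 0.10, -0.10)  # 折射率增大, 焦距变短
```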
在本申请实施例的方法的另一种可能的实施方式中, 该光学器件可为: 透镜组, 用于通过调节透镜组中透镜之间的相对位置完成透镜组自身焦距的 调整。 或者, 所述透镜组中的一片或多片透镜为上面所述的焦距可调透镜。 除了上述两种通过光学器件自身的特性来改变系统的光路参数以外, 还 可以通过调节光学器件在光路上的位置来改变系统的光路参数。
此外, 在本申请实施例的方法中, 所述对釆集到的所述眼底的图像进行 分析进一步包括:
对釆集到的眼底的图像进行分析, 找到满足至少一设定的第二清晰度标 准的图像;
根据所述满足至少一设定的第二清晰度标准的图像、 以及得到所述满足 至少一设定的第二清晰度标准的图像时已知的成像参数计算眼睛的光学参数。
所述调整投射位置与所述用户的眼睛之间光路的至少一投射成像参数能 够釆集到满足至少一设定的第二清晰度标准的图像, 但是需要通过所述对釆 集到的所述眼底的图像进行分析来找到该满足至少一设定的第二清晰度标准 的图像, 根据所述满足至少一设定的第二清晰度标准的图像以及已知的光路 参数就可以通过计算得到眼睛的光学参数。
在本申请实施例的方法中, 所述检测用户当前的注视点位置的步骤还可 包括:
向眼底投射光斑。
所投射的光斑可以没有特定图案仅用于照亮眼底。 所投射的光斑还可包 括特征丰富的图案。 图案的特征丰富可以便于检测, 提高检测精度。 如图 4a 所示为一个光斑图案 P 的示例图, 该图案可以由光斑图案生成器形成, 例如 毛玻璃; 图 4b所示为在有光斑图案 P投射时釆集到的眼底的图像。
为了不影响眼睛的正常观看,所述光斑为眼睛不可见的红外光斑。此时, 为了减小其它光谱的干扰: 可滤除投射的光斑中眼睛不可见光之外的光。
相应地, 本申请实施的方法还可包括步骤:
根据上述步骤分析得到的结果, 控制投射光斑亮度。 该分析结果例如包 括所述釆集到的图像的特性, 包括图像特征的反差以及紋理特征等。
需要说明的是, 控制投射光斑亮度的一种特殊的情况为开始或停止投射, 例如观察者持续注视一点时可以周期性停止投射; 观察者眼底足够明亮时可 以停止投射, 利用眼底信息来检测眼睛当前视线对焦点到眼睛的距离。
此外, 还可以根据环境光来控制投射光斑亮度。
在本申请实施例的方法中, 所述对釆集到的所述眼底的图像进行分析还 包括:
进行眼底图像的校准, 获得至少一个与眼底呈现的图像对应的基准图像。 具言之, 将釆集到的图像与所述基准图像进行对比计算, 获得所述满足至少 一设定的第二清晰度标准的图像。 这里, 所述满足至少一设定的第二清晰度 标准的图像可以为获得的与所述基准图像差异最小的图像。 在本实施方式的 方法中, 可以通过现有的图像处理算法计算当前获得的图像与基准图像的差 异, 例如使用经典的相位差值自动对焦算法。
所述眼睛的光学参数可包括根据釆集到所述满足至少一设定的第二清晰 度标准的图像时眼睛的特征得到的眼睛光轴方向。 这里眼睛的特征可以是从 所述满足至少一设定的第二清晰度标准的图像上获取的, 或者也可以是另外 获取的。 根据所述眼睛的光轴方向可以得到用户眼睛视线的注视方向。 具言 之, 可根据得到所述满足至少一设定的第二清晰度标准的图像时眼底的特征 得到眼睛光轴方向, 并且通过眼底的特征来确定眼睛光轴方向精确度更高。
在向眼底投射光斑图案时, 光斑图案的大小有可能大于眼底可视区域或 小于眼底可视区域, 其中:
当光斑图案的面积小于等于眼底可视区域时, 可以利用经典特征点匹配 算法(例如尺度不变特征转换( Scale Invariant Feature Transform, SIFT )算法) 通过检测图像上的光斑图案相对于眼底位置来确定眼睛光轴方向。
当光斑图案的面积大于等于眼底可视区域时, 可以通过得到的图像上的 光斑图案相对于原光斑图案 (通过图像校准获得) 的位置来确定眼睛光轴方 向确定观察者视线方向。
在本申请实施例的方法的另一种可能的实施方式中, 还可根据得到所述 满足至少一设定的第二清晰度标准的图像时眼睛瞳孔的特征得到眼睛光轴方 向。 这里眼睛瞳孔的特征可以是从所述满足至少一设定的第二清晰度标准的 图像上获取的, 也可以是另外获取的。 通过眼睛瞳孔特征得到眼睛光轴方向 为已有技术, 此处不再赘述。
此外, 在本申请实施例的方法中, 还可包括对眼睛光轴方向的校准, 以 便更精确的进行上述眼睛光轴方向的确定。
在本申请实施例的方法中, 所述已知的成像参数包括固定的成像参数和 实时成像参数, 其中实时成像参数为获取满足至少一设定的第二清晰度标准 的图像时所述光学器件的参数信息, 该参数信息可以在获取所述满足至少一 设定的第二清晰度标准的图像时实时记录得到。
在得到眼睛当前的光学参数之后, 就可以结合计算得到的眼睛对焦点到 眼睛的距离 (具体过程将结合装置部分详述), 得到眼睛注视点的位置。
为了让用户看到的用户相关信息具有立体显示效果、 更加真实, 在本申 请实施例的一种可能的实施方式中, 可以在所述步骤 S160中, 将所述用户相 关信息立体地向所述用户的眼底投射。
如上面所述的, 在一种可能的实施方式中, 该立体的显示可以是将相同 的信息, 通过步骤 S160投射位置的调整, 使得用户两只眼睛看到的具有视差 的信息, 形成立体显示效果。
在另一种可能的实施方式中, 所述用户相关信息包括分别与所述用户的 两眼对应的立体信息, 所述步骤 S160中, 分别向所述用户的两眼投射对应的 用户相关信息。 即: 所述用户相关信息包括与用户左眼对应的左眼信息以及 与用户右眼对应的右眼信息, 投射时将所述左眼信息投射至用户左眼, 将右 眼信息投射至用户右眼, 使得用户看到的用户相关信息具有适合的立体显示 效果, 带来更好的用户体验。 此外, 在对用户输入的用户相关信息中含有三 维空间信息时,通过上述的立体投射,使得用户可以看到所述三维空间信息。 例如: 当需要用户在三维空间中特定的位置做特定的手势才能正确输入所述 用户相关信息时, 通过本申请实施例的上述方法使得用户看到立体的用户相 关信息, 获知所述特定的位置和特定的手势, 进而使得用户可以在所述特定 位置做所述用户相关信息提示的手势, 此时其它人即使看到用户进行的手势 动作, 但是由于无法获知所述空间信息, 使得所述用户相关信息的保密效果 更好。 如图 5所示, 本申请实施例还提供了一种用户信息提取装置 500, 包括: 一图像获取模块 510, 用于获取一包含至少一数字水印的图像;
一信息获取模块 520,用于获取所述图像中所述数字水印包含的与一用户 对应的至少一用户相关信息;
一投射模块 530, 用于将所述用户相关信息向所述用户的眼底投射。
本申请实施例从一图像的数字水印中获取与用户相关的用户相关信息并 投射到用户的眼底, 使得用户不需要特地记忆就可以在需要使用的场合下保 密地获取对应的用户相关信息, 方便了用户的使用, 提高了用户体验。
为了让用户更加自然、 方便地获取所述用户相关信息, 本申请实施例的 装置例如可以为一用于用户眼睛附近的可穿戴设备, 例如一智能眼镜。 在用 户视线的注视点落在所述图像上时, 通过所述图像获取模块 510 自动获取所 述图像, 并在获得所述用户相关信息后将所述信息投射至用户眼底。
下面本申请实施例通过以下的实施方式对上述装置的各模块进行进一步 的描述:
在本申请实施例的实施方式中, 所述图像获取模块 510 的形式可以有多种, 例如: 如图 6a所示, 所述图像获取模块 510包括一拍摄子模块 511, 用于拍摄所述图像。
其中所述拍摄子模块 511 , 例如可以为一智能眼镜的摄像头, 用于对用户 看到的图像进行拍摄。
如图 6b所示, 在本申请实施例的另一个实施方式中, 所述图像获取模块 510包括:
一第一通信子模块 512, 用于接收所述图像。
在本实施方式中, 可以通过其它设备获取所述图像, 再发送本申请实施 例的装置; 或者通过与一显示该图像的设备之间的交互来获取所述图像(即 所述设备将显示的图像信息传送给本申请实施例的装置)。
在本申请实施例中,所述信息获取模块 520的形式也可以有多种,例如: 如图 6a所示, 所述信息获取模块 520包括: 一信息提取子模块 521 , 用 于从所述图像中提取所述用户相关信息。
在本实施方式中, 所述信息提取子模块 521例如可以通过个人私钥及公开或私有的水印提取方法来分析所述图像中的数字水印, 提取所述用户相关信息。
如图 6b所示, 在本申请实施例的另一个实施方式中, 所述信息获取模块 520包括: 一第二通信子模块 522, 用于:
将所述图像向外部发送; 从外部接收所述图像中的所述用户相关信息。 在本实施方式中, 可以将所述图像向外部发送, 例如发送至云端服务器 和 /或一第三方权威机构, 通过所述云端服务器或所述第三方权威机构来提取 所述图像的数字水印中的所述用户相关信息后, 再回发给本申请实施例的所 述第二通信子模块 522。
这里, 所述第一通信子模块 512与所述第二通信子模块 522的功能有可 能由同一通信模块实现。
在本申请实施例的一种可能的实施方式中, 所述图像与一设备显示的一 图形用户界面对应。 所述图形用户界面包括一所述用户相关信息的输入接口。 具体参见上述方法实施例中对应的描述, 这里不再赘述。
在本申请实施例中, 所述用户相关信息包括一与所述图像对应的用户鉴 权信息。 具体参见上述方法实施例中对应的描述, 这里不再赘述。
在本申请实施例中,所述装置 500在使用时,可以进行用户的身份确认。 例如, 当用户使用能实现本申请实施例装置的功能的智能眼镜时, 首先智能 眼镜对用户进行认证, 使得智能眼镜知道用户的身份, 在之后通过所述信息 提取模块 520提取所述用户相关信息时,只获取与用户对应的用户相关信息。 即, 用户只需要对自己的智能眼镜进行一次用户认证后, 就可以通过所述智 能眼镜对自己的或公用的各设备进行用户相关信息的获取。
如图 6a所示, 在本实施方式中, 所述投射模块 530包括:
一信息投射子模块 531 , 用于投射所述用户相关信息;
一参数调整子模块 532,用于调整所述投射位置与所述用户的眼睛之间光 路的至少一投射成像参数, 直至所述用户相关信息在所述用户的眼底所成的 像满足至少一设定的第一清晰度标准。
在一种实施方式中, 所述参数调整子模块 532包括:
至少一可调透镜器件, 其自身焦距可调和 /或在所述投射位置与所述用户 的眼睛之间光路上的位置可调。
如图 6b所示, 在一种实施方式中, 所述投射模块 530包括:
一曲面分光器件 533 ,用于分别对应所述眼睛光轴方向不同时瞳孔的位置, 将所述用户相关信息向所述用户的眼底传递。
在一种实施方式中, 所述投射模块 530包括:
一反变形处理子模块 534,用于对所述用户相关信息进行与所述眼睛光轴 方向不同时瞳孔的位置对应的反变形处理, 使得眼底接收到需要呈现的所述 用户相关信息。
在一种实施方式中, 所述投射模块 530包括:
一对齐调整子模块 535 ,用于将所述投射的用户相关信息与所述用户看到 的图像在所述用户的眼底对齐。
在一种实施方式中, 所述装置还包括: 一位置检测模块 540, 用于检测所述用户注视点相对于所述用户的位置; 所述对齐调整子模块 535, 用于根据所述用户注视点相对于所述用户的所述位置将所述投射的用户相关信息与所述用户看到的图像在所述用户的眼底对齐。
上述投射模块的各子模块的功能参见上面方法实施例中对应步骤的描述, 并且在下面图 7a-7d, 图 8和图 9中所示的实施例中也会给出实例。
本申请实施例中, 所述位置检测模块 540可以有多种实现方式, 例如方 法实施例的 i)-iii)种所述方法对应的装置。本申请实施例通过图 7a-图 7d、 图 8 以及图 9对应的实施方式来进一步说明第 iii )种方法对应的位置检测模块: 如图 7a所示, 在本申请实施例的一种可能的实施方式中, 所述位置检测 模块 700包括:
一眼底图像釆集子模块 710, 用于釆集一所述用户眼底的图像; 一可调成像子模块 720,用于进行所述眼底图像釆集位置与所述用户眼睛 之间光路的至少一成像参数的调节直至釆集到一满足至少一设定的第二清晰 度标准的图像;
一图像处理子模块 730, 用于对釆集到的所述眼底的图像进行分析, 得到 与所述满足至少一设定的第二清晰度标准的图像对应的所述眼底图像釆集位 置与所述眼睛之间光路的所述成像参数以及所述眼睛的至少一光学参数, 并 计算所述用户当前的注视点相对于所述用户的位置。
本位置检测模块 700通过对眼睛眼底的图像进行分析处理, 得到所述眼 底图像釆集子模块获得满足至少一设定的第二清晰度标准的图像时眼睛的光 学参数, 就可以计算得到眼睛当前的注视点位置。
这里的 "眼底" 呈现的图像主要为在视网膜上呈现的图像, 其可以为眼 底自身的图像, 或者可以为投射到眼底的其它物体的图像。 这里的眼睛可以 为人眼, 也可以为其它动物的眼睛。
如图 7b所示, 本申请实施例的一种可能的实施方式中, 所述眼底图像釆 集子模块 710为微型摄像头, 在本申请实施例的另一种可能的实施方式中, 所述眼底图像釆集子模块 710 还可以直接使用感光成像器件, 如 CCD 或 CMOS等器件。
在本申请实施例的一种可能的实施方式中, 所述可调成像子模块 720包 括: 可调透镜器件 721 , 位于眼睛与所述眼底图像釆集子模块 710之间的光路 上, 自身焦距可调和 /或在光路中的位置可调。 通过该可调透镜器件 721 , 使 得从眼睛到所述眼底图像釆集子模块 710之间的系统等效焦距可调, 通过可 调透镜器件 721 的调节, 使得所述眼底图像釆集子模块 710在可调透镜器件 721 的某一个位置或状态时获得眼底满足至少一设定的第二清晰度标准的图 像。在本实施方式中,所述可调透镜器件 721在检测过程中连续实时的调节。
在本申请实施例的一种可能的实施方式中, 所述可调透镜器件 721 为: 焦距可调透镜, 用于通过调节自身的折射率和 /或形状完成自身焦距的调整。 具体为: 1 )通过调节焦距可调透镜的至少一面的曲率来调节焦距, 例如在双 层透明层构成的空腔中增加或减少液体介质来调节焦距可调透镜的曲率; 2 ) 通过改变焦距可调透镜的折射率来调节焦距, 例如焦距可调透镜中填充有特 定液晶介质, 通过调节液晶介质对应电极的电压来调整液晶介质的排列方式, 从而改变焦距可调透镜的折射率。
在本申请实施例的另一种可能的实施方式中, 所述可调透镜器件 721包 括: 多片透镜构成的透镜组, 用于调节透镜组中透镜之间的相对位置完成透 镜组自身焦距的调整。 所述透镜组中也可以包括自身焦距等成像参数可调的 透镜。
除了上述两种通过调节可调透镜器件 721 自身的特性来改变系统的光路 参数以外, 还可以通过调节所述可调透镜器件 721 在光路上的位置来改变系 统的光路参数。
在本申请实施例的一种可能的实施方式中, 为了不影响用户对观察对象 的观看体验, 并且为了使得系统可以便携应用在穿戴式设备上, 所述可调成 像子模块 720还包括: 分光单元 722, 用于形成眼睛和观察对象之间、 以及眼 睛和眼底图像釆集子模块 710之间的光传递路径。这样可以对光路进行折叠, 减小系统的体积, 同时尽可能不影响用户的其它视觉体验。
在本实施方式中, 所述分光单元包括: 第一分光单元, 位于眼睛和观察对象之间, 用于透射观察对象到眼睛的光, 传递眼睛到眼底图像釆集子模块的光。 所述第一分光单元可以为分光镜、 分光光波导 (包括光纤)或其它适合的分光设备。 在本申请实施例的一种可能的实施方式中, 所述系统的图像处理子模块 730包括光路校准单元, 用于对系统的光路进行校准, 例如进行光路光轴的对齐校准等, 以保证测量的精度。
在本申请实施例的一种可能的实施方式中, 所述图像处理子模块 730包 括:
图像分析单元 731 ,用于对所述眼底图像釆集子模块得到的图像进行分析, 找到满足至少一设定的第二清晰度标准的图像;
参数计算单元 732,用于根据所述满足至少一设定的第二清晰度标准的图 像、 以及得到所述满足至少一设定的第二清晰度标准的图像时系统已知的成 像参数计算眼睛的光学参数。
在本实施方式中, 通过可调成像子模块 720使得所述眼底图像釆集子模 块 710可以得到满足至少一设定的第二清晰度标准的图像, 但是需要通过所 述图像分析单元 731 来找到该满足至少一设定的第二清晰度标准的图像, 此 时根据所述满足至少一设定的第二清晰度标准的图像以及系统已知的光路参 数就可以通过计算得到眼睛的光学参数。 这里眼睛的光学参数可以包括眼睛 的光轴方向。
在本申请实施例的一种可能的实施方式中, 所述系统还包括: 投射子模块 740, 用于向眼底投射光斑。 在一个可能的实施方式中, 可以通过微型投影仪来实现该投射子模块的功能。
这里投射的光斑可以没有特定图案仅用于照亮眼底。
在本申请实施例的一种实施方式中, 所述投射的光斑包括特征丰富的图 案。 图案的特征丰富可以便于检测, 提高检测精度。 如图 4a所示为一个光斑 图案 P的示例图, 该图案可以由光斑图案生成器形成, 例如毛玻璃; 图 4b所 示为在有光斑图案 P投射时拍摄到的眼底的图像。
为了不影响眼睛的正常观看, 所述光斑为眼睛不可见的红外光斑。
此时, 为了减小其它光谱的干扰:
所述投射子模块的出射面可以设置有眼睛不可见光透射滤镜。
所述眼底图像釆集子模块的入射面设置有眼睛不可见光透射滤镜。 在本申请实施例的一种可能的实施方式中, 所述图像处理子模块 730还 包括:
投射控制单元 734, 用于根据图像分析单元 731得到的结果, 控制所述投 射子模块 740的投射光斑亮度。
例如所述投射控制单元 734可以根据眼底图像釆集子模块 710得到的图 像的特性自适应调整亮度。 这里图像的特性包括图像特征的反差以及紋理特 征等。
这里, 控制所述投射子模块 740 的投射光斑亮度的一种特殊的情况为打 开或关闭投射子模块 740,例如用户持续注视一点时可以周期性关闭所述投射 子模块 740;用户眼底足够明亮时可以关闭发光源只利用眼底信息来检测眼睛 当前视线注视点到眼睛的距离。
此外, 所述投射控制单元 734还可以根据环境光来控制投射子模块 740 的投射光斑亮度。
在本申请实施例的一种可能的实施方式中, 所述图像处理子模块 730还 包括: 图像校准单元 733 , 用于进行眼底图像的校准, 获得至少一个与眼底呈 现的图像对应的基准图像。
所述图像分析单元 731将眼底图像釆集子模块 730得到的图像与所述基 准图像进行对比计算, 获得所述满足至少一设定的第二清晰度标准的图像。 这里, 所述满足至少一设定的第二清晰度标准的图像可以为获得的与所述基 准图像差异最小的图像。 在本实施方式中, 通过现有的图像处理算法计算当 前获得的图像与基准图像的差异, 例如使用经典的相位差值自动对焦算法。
在本申请实施例的一种可能的实施方式中,所述参数计算单元 732包括: 眼睛光轴方向确定子单元 7321 , 用于根据得到所述满足至少一设定的第 二清晰度标准的图像时眼睛的特征得到眼睛光轴方向。
这里眼睛的特征可以是从所述满足至少一设定的第二清晰度标准的图像 上获取的, 或者也可以是另外获取的。 根据眼睛光轴方向可以得到用户眼睛 视线注视的方向。 在本申请实施例的一种可能的实施方式中, 所述眼睛光轴方向确定子单 元 7321包括: 第一确定子单元, 用于根据得到所述满足至少一设定的第二清 晰度标准的图像时眼底的特征得到眼睛光轴方向。 与通过瞳孔和眼球表面的 特征得到眼睛光轴方向相比, 通过眼底的特征来确定眼睛光轴方向精确度更 高。
在向眼底投射光斑图案时, 光斑图案的大小有可能大于眼底可视区域或 小于眼底可视区域, 其中:
当光斑图案的面积小于等于眼底可视区域时, 可以利用经典特征点匹配 算法(例如尺度不变特征转换( Scale Invariant Feature Transform, SIFT )算法) 通过检测图像上的光斑图案相对于眼底位置来确定眼睛光轴方向;
当光斑图案的面积大于等于眼底可视区域时, 可以通过得到的图像上的 光斑图案相对于原光斑图案 (通过图像校准单元获得) 的位置来确定眼睛光 轴方向确定用户视线方向。
在本申请实施例的另一种可能的实施方式中, 所述眼睛光轴方向确定子单元 7321包括: 第二确定子单元, 用于根据得到所述满足至少一设定的第二清晰度标准的图像时眼睛瞳孔的特征得到眼睛光轴方向。 这里眼睛瞳孔的特征可以是从所述满足至少一设定的第二清晰度标准的图像上获取的, 也可以是另外获取的。 通过眼睛瞳孔特征得到眼睛光轴方向为已有技术, 此处不再赘述。 在本申请实施例的一种可能的实施方式中, 所述图像处理子模块 730还包括: 眼睛光轴方向校准单元 735, 用于进行眼睛光轴方向的校准, 以便更精确地进行上述眼睛光轴方向的确定。
在本实施方式中, 所述系统已知的成像参数包括固定的成像参数和实时 成像参数, 其中实时成像参数为获取满足至少一设定的第二清晰度标准的图 像时所述可调透镜器件的参数信息, 该参数信息可以在获取所述满足至少一 设定的第二清晰度标准的图像时实时记录得到。
下面再计算得到眼睛注视点到眼睛的距离, 具体为:
图 7c所示为眼睛成像的示意图, 结合经典光学理论中的透镜成像公式, 可以得到公式 (1):

1/d_o + 1/d_e = 1/f_e        (1)

其中 d_o 和 d_e 分别为眼睛当前观察对象 7010 和视网膜上的实像 7020 到眼睛等效透镜 7030 的距离, f_e 为眼睛等效透镜 7030 的等效焦距, X 为眼睛的视线方向 (可以由所述眼睛的光轴方向得到)。
图 7d所示为根据系统已知光学参数和眼睛的光学参数得到眼睛注视点到眼睛的距离的示意图, 图 7d中光斑 7040通过可调透镜器件 721会成一个虛像 (图 7d中未示出), 假设该虛像距离透镜距离为 x (图 7d中未示出), 结合公式 (1)可以得到如下方程组 (2):

1/d_p − 1/x = 1/f_p
1/(d_i + x) + 1/d_e = 1/f_e        (2)

其中 d_p 为光斑 7040 到可调透镜器件 721 的光学等效距离, d_i 为可调透镜器件 721 到眼睛等效透镜 7030 的光学等效距离, f_p 为可调透镜器件 721 的焦距值。
由 (1) 和 (2) 可以得出当前观察对象 7010 (眼睛注视点) 到眼睛等效透镜 7030 的距离 d_o 如公式 (3) 所示:

d_o = d_i + d_p·f_p / (f_p − d_p)        (3)
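公式 (2)、(3) 的一致性可以用一小段示例代码验证 (其中的具体数值仅为示例假设):

```python
def virtual_image_distance(d_p, f_p):
    # 由方程组 (2) 的第一式 1/d_p - 1/x = 1/f_p 解出虚像距离 x
    return d_p * f_p / (f_p - d_p)

def gaze_distance(d_i, d_p, f_p):
    # 公式 (3): d_o = d_i + d_p * f_p / (f_p - d_p)
    return d_i + d_p * f_p / (f_p - d_p)
```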
根据上述计算得到观察对象 7010到眼睛的距离, 又由于之前的记载可以得到眼睛光轴方向, 则容易得到眼睛的注视点位置, 为后续与眼睛相关的进一步交互提供了基础。 如图 8所示为本申请实施例的一种可能的实施方式的位置检测模块 800 应用在眼镜 G上的实施例, 其包括图 7b所示实施方式的记载的内容, 具体为: 由图 8可以看出, 在本实施方式中, 在眼镜 G右侧 (不局限于此)集成了本实施方式的模块 800, 其包括:
微型摄像头 810, 其作用与图 7b实施方式中记载的眼底图像釆集子模块 相同, 为了不影响用户正常观看对象的视线, 其被设置于眼镜 G右外侧; 第一分光镜 820, 其作用与图 7b实施方式中记载的第一分光单元相同, 以一定倾角设置于眼睛 A注视方向和摄像头 810入射方向的交点处, 透射观 察对象进入眼睛 A的光以及反射眼睛到摄像头 810的光;
焦距可调透镜 830,其作用与图 7b实施方式中记载的焦距可调透镜相同, 位于所述第一分光镜 820和摄像头 810之间, 实时进行焦距值的调整, 使得 在某个焦距值时, 所述摄像头 810 能够拍到眼底满足至少一设定的第二清晰 度标准的图像。
在本实施方式中, 所述图像处理子模块在图 8 中未表示出, 其功能与图 7b所示的图像处理子模块相同。
由于一般情况下, 眼底的亮度不够, 因此, 最好对眼底进行照明, 在本 实施方式中, 通过一个发光源 840来对眼底进行照明。 为了不影响用户的体 验, 这里所述发光源 840为眼睛不可见光发光源, 优选对眼睛 A影响不大并 且摄像头 810又比较敏感的近红外光发光源。
在本实施方式中, 所述发光源 840位于右侧的眼镜架外侧, 因此需要通 过一个第二分光镜 850与所述第一分光镜 820—起完成所述发光源 840发出 的光到眼底的传递。 本实施方式中, 所述第二分光镜 850 又位于摄像头 810 的入射面之前, 因此其还需要透射眼底到第二分光镜 850的光。
可以看出, 在本实施方式中, 为了提高用户体验和提高摄像头 810 的釆 集清晰度, 所述第一分光镜 820 可以具有对红外反射率高、 对可见光透射率 高的特性。 例如可以在第一分光镜 820朝向眼睛 A的一侧设置红外反射膜实 现上述特性。 由图 8可以看出, 由于在本实施方式中, 所述位置检测模块 800位于眼 镜 G的镜片远离眼睛 A的一侧, 因此进行眼睛光学参数进行计算时, 可以将 镜片也看成是眼睛 A的一部分, 此时不需要知道镜片的光学特性。
在本申请实施例的其它实施方式中, 所述位置检测模块 800可能位于眼 镜 G的镜片靠近眼睛 A的一侧, 此时, 需要预先得到镜片的光学特性参数, 并在计算注视点距离时, 考虑镜片的影响因素。
本实施例中发光源 840发出的光通过第二分光镜 850的反射、 焦距可调 透镜 830的投射、 以及第一分光镜 820的反射后再透过眼镜 G的镜片进入用 户眼睛, 并最终到达眼底的视网膜上; 摄像头 810经过所述第一分光镜 820、 焦距可调透镜 830以及第二分光镜 850构成的光路透过眼睛 A的瞳孔拍摄到 眼底的图像。
在一种可能的实施方式中, 本申请实施例的装置的其它部分也实现在所 述眼镜 G上,并且,由于所述位置检测模块和所述投射模块有可能同时包括: 具有投射功能的设备(如上面所述的投射模块的信息投射子模块, 以及所述 位置检测模块的投射子模块); 以及成像参数可调的成像设备(如上面所述的 投射模块的参数调整子模块, 以及所述位置检测模块的可调成像子模块)等, 因此在本申请实施例的一种可能的实施方式中, 所述位置检测模块和所述投 射模块的功能由同一设备实现。
如图 8所示, 在本申请实施例的一种可能的实施方式中, 所述发光源 840 除了可以用于所述位置检测模块的照明外, 还可以作为所述投射模块的信息 投射子模块的光源辅助投射所述用户相关信息。 在一种可能的实施方式中, 所述发光源 840可以同时分别投射一个不可见的光用于所述位置检测模块的 照明; 以及一个可见光, 用于辅助投射所述用户相关信息; 在另一种可能的 实施方式中, 所述发光源 840还可以分时地切换投射所述不可见光与所述可 见光; 在又一种可能的实施方式中, 所述位置检测模块可以使用所述用户相 关信息来完成照亮眼底的功能。
在本申请实施例的一种可能的实施方式中, 所述第一分光镜 820、 第二分 光镜 850以及所述焦距可调透镜 830除了可以作为所述的投射模块的参数调 整子模块外, 还可以作为所述位置检测模块的可调成像子模块。 这里, 所述 的焦距可调透镜 830在一种可能的实施方式中, 其焦距可以分区域的调节, 不同的区域分别对应于所述位置检测模块和所述投射模块, 焦距也可能会不 同。 或者, 所述焦距可调透镜 830 的焦距是整体调节的, 但是所述位置检测 模块的微型摄像头 810的感光单元(如 CCD等 )的前端还设置有其它光学器 件, 用于实现所述位置检测模块的成像参数辅助调节。 此外, 在另一种可能 的实施方式中, 可以配置使得从所述发光源 840 的发光面 (即用户相关信息 投出位置) 到眼睛的光程与所述眼睛到所述微型摄像头 810 的光程相同, 则 所述焦距可调透镜 830调节至所述微型摄像头 810接收到最清晰的眼底图像 时, 所述发光源 840投射的用户相关信息正好在眼底清晰地成像。
由上述可以看出, 本申请实施例用户信息提取装置的位置检测模块与投 射模块的功能可以由一套设备实现, 使得整个系统结构简单、 体积小、 更加 便于携带。 如图 9所示为本申请实施例的另一种实施方式位置检测模块 900的结构 示意图。 由图 9可以看出, 本实施方式与图 8所示的实施方式相似, 包括微 型摄像头 910、 第二分光镜 920、 焦距可调透镜 930, 不同之处在于, 在本实 施方式中的投射子模块 940为投射光斑图案的投射子模块 940,并且通过一个 曲面分光镜 950作为曲面分光器件取代了图 8实施方式中的第一分光镜。
这里釆用了曲面分光镜 950分别对应眼睛光轴方向不同时瞳孔的位置, 将眼底呈现的图像传递到眼底图像釆集子模块。 这样摄像头可以拍摄到眼球 各个角度混合叠加的成像, 但由于只有通过瞳孔的眼底部分能够在摄像头上 清晰成像, 其它部分会失焦而无法清晰成像, 因而不会对眼底部分的成像构 成严重干扰, 眼底部分的特征仍然可以检测出来。 因此, 与图 8所示的实施 方式相比, 本实施方式可以在眼睛注视不同方向时都能很好的得到眼底的图 像, 使得本实施方式的位置检测模块适用范围更广, 检测精度更高。 在本申请实施例的一种可能的实施方式中, 本申请实施例的用户信息提 取装置的其它部分也实现在所述眼镜 G上。 在本实施方式中, 所述位置检测 模块和所述投射模块也可以复用。 与图 8所示的实施例类似地, 此时所述投 射子模块 940可以同时或者分时切换地投射光斑图案以及所述用户相关信息; 或者所述位置检测模块将投射的用户相关信息作为所述光斑图案进行检测。 与图 8所示的实施例类似地, 在本申请实施例的一种可能的实施方式中, 所 述第一分光镜 920、第二分光镜 950以及所述焦距可调透镜 930除了可以作为 所述的投射模块的参数调整子模块外, 还可以作为所述位置检测模块的可调 此时, 所述第二分光镜 950还用于分别对应眼睛光轴方向不同时瞳孔的 位置, 进行所述投射模块与眼底之间的光路传递。 由于所述投射子模块 940 投射的用户相关信息经过所述曲面的第二分光镜 950之后会发生变形, 因此 在本实施方式中, 所述投射模块包括:
反变形处理模块(图 9中未示出), 用于对所述用户相关信息进行与所述 曲面分光器件对应的反变形处理, 使得眼底接收到需要呈现的用户相关信息。 在一种实施方式中, 所述投射模块用于将所述用户相关信息立体地向所 述用户的眼底投射。
所述用户相关信息包括分别与所述用户的两眼对应的立体信息, 所述投 射模块, 分别向所述用户的两眼投射对应的用户相关信息。
如图 10所示,在需要进行立体显示的情况下,所述用户信息提取装置 1000 需要分别与用户的两只眼睛对应的设置两套投射模块, 包括:
与用户的左眼对应的第一投射模块; 以及
与用户的右眼对应的第二投射模块。
其中, 所述第二投射模块的结构与图 10的实施例中记载的复合有位置检 测模块功能的结构类似, 也为可以同时实现位置检测模块功能以及投射模块 功能的结构,包括与所述图 10所示实施例功能相同的微型摄像头 1021、 第二 分光镜 1022、 第二焦距可调透镜 1023, 第一分光镜 1024 (所述位置检测模块 的图像处理子模块在图 10 中未示出), 不同之处在于, 在本实施方式中的投 射子模块为可以投射右眼对应的用户相关信息的第二投射子模块 1025。 其同 时可以用于检测用户眼睛的注视点位置, 并且把与右眼对应的用户相关信息 清晰投射至右眼眼底。
所述第一投射模块的结构与所述第二投射模块 1020的结构类似, 但是其 不具有微型摄像头, 并且没有复合位置检测模块的功能。 如图 10所示, 所述 第一投射模块包括:
第一投射子模块 1011 , 用于将与左眼对应的用户相关信息向左眼眼底投 射;
第一焦距可调透镜 1013,用于对所述第一投射子模块 1011与眼底之间的 成像参数进行调节, 使得对应的用户相关信息可以清晰地呈现在左眼眼底并 且使得用户可以看到呈现在所述图像上的所述用户相关信息;
第三分光镜 1012,用于在所述第一投射子模块 1011与所述第一焦距可调 透镜 1013之间进行光路传递;
第四分光镜 1014,用于在所述第一焦距可调透镜 1013与所述左眼眼底之 间进行光路传递。
通过本实施例, 使得用户看到的用户相关信息具有适合的立体显示效果, 带来更好的用户体验。 此外, 在对用户输入的用户相关信息中含有三维空间信息时, 通过上述的立体投射, 使得用户可以看到所述三维空间信息。 例如: 当需要用户在三维空间中特定的位置做特定的手势才能正确输入所述用户相关信息时, 通过本申请实施例的上述方法使得用户看到立体的用户相关信息, 获知所述特定的位置和特定的手势, 进而使得用户可以在所述特定位置做所述用户相关信息提示的手势, 此时其它人即使看到用户进行的手势动作, 但是由于无法获知所述空间信息, 使得所述用户相关信息的保密效果更好。 此外, 本申请还提供一种计算机可读介质, 包括在被执行时进行以下操作的计算机可读指令: 执行图 1所示方法实施例中的步骤 S120、 S140和 S160 的操作。
图 11为本申请实施例提供的又一种用户信息提取装置 1100的结构示意 图, 本申请具体实施例并不对用户信息提取装置 1100的具体实现做限定。 如 图 11所示, 该用户信息提取装置 1100可以包括:
处理器 (processor)1110、 通信接口(Communications Interface) 1120、 存储器 (memory) 1130, 以及通信总线 1140。 其中:
处理器 1110、 通信接口 1120、 以及存储器 1130通过通信总线 1140完成 相互间的通信。
通信接口 1120, 用于与比如客户端等的网元通信。
处理器 1110, 用于执行程序 1132, 具体可以执行上述方法实施例中的相 关步骤。
具体地, 程序 1132可以包括程序代码, 所述程序代码包括计算机操作指 令。
处理器 1110 可能是一个中央处理器 CPU, 或者是特定集成电路 ASIC ( Application Specific Integrated Circuit ) , 或者是被配置成实施本申请实施例 的一个或多个集成电路。
存储器 1130, 用于存放程序 1132。 存储器 1130可能包含高速 RAM存储 器, 也可能还包括非易失性存储器 ( non- volatile memory ), 例如至少一个磁 盘存储器。 程序 1132具体可以用于使得所述用户信息提取装置 1100执行以 下步骤:
获取一包含至少一数字水印的图像;
获取所述图像中所述数字水印包含的与一用户对应的至少一用户相关信 息;
将所述用户相关信息向所述用户的眼底投射。
程序 1132中各步骤的具体实现可以参见上述实施例中的相应步骤和单元 中对应的描述, 在此不赘述。 所属领域的技术人员可以清楚地了解到, 为描 述的方便和简洁, 上述描述的设备和模块的具体工作过程, 可以参考前述方 法实施例中的对应过程描述, 在此不再赘述。 如图 12所示, 本申请实施例提供了一种用户信息嵌入方法, 包括:
S1200在一图像中嵌入至少一数字水印,所述数字水印中包含与至少一用 户对应的至少一用户相关信息。
其中, 所述数字水印可以按照对称性分为对称式和非对称式水印两种。 传统的对称水印嵌入和检测的密钥相同, 这样一旦公开了检测的方法和密钥, 水印就会很容易从数字载体中移除。 而非对称水印技术通过使用私钥来嵌入 水印, 使用公钥来提取和验证水印, 这样攻击者就很难通过公钥来破坏或者 移除利用私钥嵌入的水印。 因此, 本申请实施例中使用所述非对称式数字水 印。
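"私钥嵌入、公钥验证"的思想可以用一个极小的 RSA 式教学示例来示意 (小参数 n=3233, e=17, d=2753 仅为演示假设, 不具备任何安全性, 也不代表实际的水印嵌入算法):

```python
# 教学用 RSA 小参数: n = 61 * 53 = 3233, 公钥指数 e = 17, 私钥指数 d = 2753
N, E, D = 3233, 17, 2753

def sign_with_private_key(message):
    # 嵌入方用私钥 d 对水印消息生成签名
    return pow(message, D, N)

def verify_with_public_key(message, signature):
    # 任何持有公钥 (N, E) 的一方都可以验证签名, 但难以借此伪造或移除水印
    return pow(signature, E, N) == message
```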
在本申请实施例的一种可能的实施方式中, 所述用户相关信息包括一与 所述图像对应的用户鉴权信息。
在本申请实施例的一种可能的实施方式中, 所述图像与一设备显示的一图形用户界面对应, 所述数字水印中包含与所述图形用户界面对应的用户相关信息。
在本申请实施例的一种可能的实施方式中, 所述图形用户界面包括一所 述用户相关信息的输入接口。
在本申请实施例的一种可能的实施方式中, 所述图形用户界面为所述设 备的锁屏界面。
上述各步骤的实现参见图 1、 图 2a-2c以及图 3a和 3b所示的方法实施例 中对应的描述, 此处不再赘述。
应理解, 在本申请的各种实施例中, 上述各过程的序号的大小并不意味 着执行顺序的先后, 各过程的执行顺序应以其功能和内在逻辑确定, 而不应 对本申请实施例的实施过程构成任何限定。 如图 13所示, 本申请实施例提供了一种用户信息嵌入装置 1300, 包括: 一水印嵌入模块 1310, 用于在一图像中嵌入至少一数字水印, 所述数字 水印中包含与至少一用户对应的至少一用户相关信息。
在本申请实施例的一种可能的实施方式中, 所示用户信息嵌入装置 1300 为一电子设备终端, 例如手机、 平板电脑、 笔记本、 台式机等用户终端。
此时, 所述装置 1300还包括:
一显示模块 1320, 用于显示一图形用户界面, 所述图像与所述图形用户 界面对应 (如图 2a和图 3a所示);
所述数字水印中包含与所述图形用户界面对应的所述用户相关信息。 在本申请实施例的一种可能的实施方式中, 所述图形用户界面包括一所 述用户相关信息的输入接口。
上述用户信息嵌入装置 1300的各模块功能的实现参见上面所述的实施例 中对应的描述, 此处不再赘述。
本领域普通技术人员可以意识到, 结合本文中所公开的实施例描述的各 示例的单元及方法步骤, 能够以电子硬件、 或者计算机软件和电子硬件的结 合来实现。 这些功能究竟以硬件还是软件方式来执行, 取决于技术方案的特 定应用和设计约束条件。 专业技术人员可以对每个特定的应用来使用不同方 法来实现所描述的功能, 但是这种实现不应认为超出本申请的范围。
所述功能如果以软件功能单元的形式实现并作为独立的产品销售或使用 时, 可以存储在一个计算机可读取存储介质中。 基于这样的理解, 本申请的 技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的部分可 以以软件产品的形式体现出来, 该计算机软件产品存储在一个存储介质中, 包括若干指令用以使得一台计算机设备(可以是个人计算机, 服务器, 或者 网络设备等) 执行本申请各个实施例所述方法的全部或部分步骤。 而前述的 存储介质包括: U盘、 移动硬盘、 只读存储器(ROM, Read-Only Memory ), 随机存取存储器(RAM, Random Access Memory ),磁碟或者光盘等各种可以 存储程序代码的介质。
以上实施方式仅用于说明本申请, 而并非对本申请的限制, 有关技术领 域的普通技术人员, 在不脱离本申请的精神和范围的情况下, 还可以做出各 种变化和变型, 因此所有等同的技术方案也属于本申请的范畴, 本申请的专 利保护范围应由权利要求限定。

Claims

权 利 要 求
1、 一种用户信息提取方法, 其特征在于, 包括:
获取一包含至少一数字水印的图像;
获取所述图像中所述数字水印包含的与一用户对应的至少一用户相关信息; 将所述用户相关信息向所述用户的眼底投射。
2、 如权利要求 1所述的方法, 其特征在于, 所述用户相关信息包括一与 所述图像对应的用户鉴权信息。
3、 如权利要求 1所述的方法, 其特征在于, 所述图像与一设备显示的一 图形用户界面对应。
4、 如权利要求 3所述的方法, 其特征在于, 所述图形用户界面包括一所 述用户相关信息的输入接口。
5、 如权利要求 1所述的方法, 其特征在于, 所述获取一包含至少一数字 水印的图像包括:
通过拍摄的方式获取所述图像。
6、 如权利要求 1所述的方法, 其特征在于, 所述获取一包含至少一数字 水印的图像包括:
通过接收的方式获取所述图像。
7、 如权利要求 1所述的方法, 其特征在于, 所述获取所述图像中所述数 字水印包含的与一用户对应的至少一用户相关信息包括:
从所述图像中提取所述用户相关信息。
8、 如权利要求 1所述的方法, 其特征在于, 所述获取所述图像中所述数 字水印包含的与一用户对应的至少一用户相关信息包括:
将所述图像向外部发送;
从外部接收所述图像中的所述用户相关信息。
9、 如权利要求 1所述的方法, 其特征在于, 所述将所述用户相关信息向 所述用户的眼底投射包括: 投射所述用户相关信息;
调整投射位置与所述用户的眼睛之间光路的至少一投射成像参数, 直至 所述用户相关信息在所述用户的眼底所成的像满足至少一设定的第一清晰度 标准。
10、 如权利要求 9所述的方法, 其特征在于, 所述调整投射位置与所述 用户的眼睛之间光路的至少一投射成像参数包括:
调节所述投射位置与所述用户的眼睛之间光路的至少一光学器件的至少 一成像参数和 /或在光路中的位置。
11、 如权利要求 9所述的方法, 其特征在于, 所述将所述用户相关信息 向所述用户的眼底投射还包括:
分别对应所述眼睛光轴方向不同时瞳孔的位置, 将所述用户相关信息向 所述用户的眼底传递。
12、 如权利要求 11所述的方法, 其特征在于, 所述将所述用户相关信息 向所述用户的眼底投射包括:
对所述用户相关信息进行与所述眼睛光轴方向不同时瞳孔的位置对应的 反变形处理, 使得眼底接收到需要呈现的所述用户相关信息。
13、 如权利要求 1 所述的方法, 其特征在于, 所述将所述用户相关信息 向所述用户的眼底投射还包括:
将所述投射的用户相关信息与所述用户看到的图像在所述用户的眼底对 齐。
14、 如权利要求 13所述的方法, 其特征在于, 所述方法还包括: 检测所述用户的注视点相对于所述用户的位置; 所述将所述投射的用户相关信息与所述用户看到的图像在所述用户的眼底对齐包括:
根据所述用户注视点相对于所述用户的所述位置将所述投射的用户相关 信息与所述用户看到的图像在所述用户的眼底对齐。
15、 如权利要求 14所述的方法, 其特征在于, 所述检测所述用户的注视 点相对于所述用户的位置包括:
釆集所述用户眼底的一图像;
进行所述眼底图像釆集位置与所述用户眼睛之间光路的至少一成像参数 的调节直至釆集到一满足至少一设定的第二清晰度标准的图像;
对釆集到的所述眼底的图像进行分析, 得到与所述满足至少一设定的第 二清晰度标准的图像对应的所述眼底图像釆集位置与所述眼睛之间光路的所 述成像参数以及所述眼睛的至少一光学参数, 并计算所述用户当前的注视点 相对于所述用户的位置。
16、 如权利要求 15所述的方法, 其特征在于, 所述调整投射位置与所述 用户的眼睛之间光路的至少一投射成像参数包括:
调节所述眼底图像釆集位置与所述眼睛之间光路上的至少一光学器件的 焦距和 /或在所述光路上的位置。
17、 如权利要求 15所述的方法, 其特征在于, 所述检测所述用户的注视 点相对于所述用户的位置还包括:
分别对应所述眼睛光轴方向不同时瞳孔的位置, 将所述用户眼底的图像 传递到所述眼底图像釆集位置。
18、 如权利要求 15所述的方法, 其特征在于, 所述检测所述用户的注视 点相对于所述用户的位置还包括: 向所述眼底投射一光斑图案。
19、 如权利要求 1 所述的方法, 其特征在于, 所述将所述用户相关信息 向所述用户的眼底投射包括:
将所述用户相关信息立体地向所述用户的眼底投射。
20、 如权利要求 19所述的方法, 其特征在于, 所述用户相关信息包括分 别与所述用户的两眼对应的立体信息;
所述将所述用户相关信息向所述用户的眼底投射包括: 分别向所述用户 的两眼投射对应的用户相关信息。
21、 一种用户信息提取装置, 其特征在于, 包括:
一图像获取模块, 用于获取一包含至少一数字水印的图像; 一信息获取模块, 用于获取所述图像中所述数字水印包含的与一用户对 应的至少一用户相关信息;
一投射模块, 用于将所述用户相关信息向所述用户的眼底投射。
22. The apparatus of claim 21, wherein the image corresponds to a graphical user interface displayed by a device.
23. The apparatus of claim 22, wherein the graphical user interface comprises an input interface for the user related information.
24. The apparatus of claim 21, wherein the image acquisition module comprises: a photographing submodule, configured to photograph the image.
25. The apparatus of claim 21, wherein the image acquisition module comprises: a first communication submodule, configured to receive the image.
26. The apparatus of claim 21, wherein the information acquisition module comprises: an information extraction submodule, configured to extract the user related information from the image.
27. The apparatus of claim 21, wherein the information acquisition module comprises a second communication submodule, configured to:
send the image to an external party; and
receive, from the external party, the user related information in the image.
28. The apparatus of claim 21, wherein the projection module comprises:
an information projection submodule, configured to project the user related information; and
a parameter adjustment submodule, configured to adjust at least one projection imaging parameter of an optical path between the projection position and an eye of the user, until the image formed on the fundus of the user by the user related information satisfies at least one defined first clarity criterion.
29. The apparatus of claim 28, wherein the parameter adjustment submodule comprises:
at least one adjustable lens device, a focal length of which is adjustable and/or a position of which in the optical path between the projection position and the eye of the user is adjustable.
30. The apparatus of claim 28, wherein the projection module further comprises:
a curved beam splitting device, configured to transfer the user related information to the fundus of the user corresponding to the positions of the pupil when the direction of the optical axis of the eye varies.
31. The apparatus of claim 30, wherein the projection module further comprises:
a counter-deformation processing submodule, configured to perform, on the user related information, counter-deformation processing corresponding to the positions of the pupil when the direction of the optical axis of the eye varies.
32. The apparatus of claim 21, wherein the projection module further comprises:
an alignment adjustment submodule, configured to align, on the fundus of the user, the projected user related information with the image seen by the user.
33. The apparatus of claim 32, wherein the apparatus further comprises: a position detection module, configured to detect a position, relative to the user, of a gaze point of the user; and
the alignment adjustment submodule is configured to align, on the fundus of the user, the projected user related information with the image seen by the user according to the position of the gaze point of the user relative to the user.
34. The apparatus of claim 33, wherein the position detection module comprises:
a fundus image collection submodule, configured to collect an image of the fundus of the user;
an adjustable imaging submodule, configured to adjust at least one imaging parameter of an optical path between the fundus image collection position and the eye of the user, until an image that satisfies at least one defined second clarity criterion is collected; and
an image processing submodule, configured to analyze the collected image of the fundus to obtain the imaging parameter of the optical path between the fundus image collection position and the eye corresponding to the image that satisfies the at least one defined second clarity criterion, as well as at least one optical parameter of the eye, and to calculate the position of the current gaze point of the user relative to the user.
35. The apparatus of claim 34, wherein the adjustable imaging submodule comprises:
an adjustable lens device, a focal length of which is adjustable and/or a position of which in the optical path between the fundus image collection position and the eye is adjustable.
36. The apparatus of claim 34, wherein the adjustable imaging submodule comprises:
a curved beam splitting device, configured to transfer the image of the fundus of the user to the fundus image collection position corresponding to the positions of the pupil when the direction of the optical axis of the eye varies.
37. The apparatus of claim 34, wherein the position detection module further comprises:
a projection submodule, configured to project a light spot pattern to the fundus.
38. The apparatus of claim 34, wherein the function of the position detection module and the function of the projection module are implemented by the same device.
39. The apparatus of claim 21, wherein the projection module is configured to:
project the user related information to the fundus of the user stereoscopically.
40. The apparatus of claim 39, wherein the user related information comprises stereoscopic information respectively corresponding to the two eyes of the user; and
the projection module is configured to project the corresponding user related information to the two eyes of the user respectively.
41. The apparatus of claim 21, wherein the apparatus is a pair of eyeglasses.
42. A computer readable storage medium, wherein the computer readable storage medium comprises executable instructions, and when a central processing unit of a wearable device executes the executable instructions, the executable instructions cause the wearable device to perform the following method:
acquiring an image comprising at least one digital watermark;
acquiring at least one piece of user related information that is comprised in the digital watermark of the image and corresponds to a user; and
projecting the user related information to a fundus of the user.
43. An apparatus for user information extraction, comprising a central processing unit and a memory, wherein the memory stores computer executable instructions, the central processing unit is connected to the memory via a communication bus, and when the apparatus runs, the central processing unit executes the computer executable instructions stored in the memory, causing the apparatus to perform the following method:
acquiring an image comprising at least one digital watermark;
acquiring at least one piece of user related information that is comprised in the digital watermark of the image and corresponds to a user; and
projecting the user related information to a fundus of the user.
PCT/CN2014/071143 2013-11-15 2014-01-22 User information extraction method and user information extraction apparatus WO2015070537A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US14/888,219 US9877015B2 (en) 2013-11-15 2014-01-22 User information extraction method and user information extraction apparatus

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201310572121.9A CN103678971B (zh) 2013-11-15 2013-11-15 User information extraction method and user information extraction apparatus
CN201310572121.9 2013-11-15

Publications (1)

Publication Number Publication Date
WO2015070537A1 (zh) 2015-05-21

Family

ID=50316494

Country Status (3)

Country Link
US (1) US9877015B2 (zh)
CN (1) CN103678971B (zh)
WO (1) WO2015070537A1 (zh)


Also Published As

Publication number Publication date
CN103678971B (zh) 2019-05-07
US20160073099A1 (en) 2016-03-10
CN103678971A (zh) 2014-03-26
US9877015B2 (en) 2018-01-23


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application — Ref document number: 14862756; Country of ref document: EP; Kind code of ref document: A1
WWE Wipo information: entry into national phase — Ref document number: 14888219; Country of ref document: US
NENP Non-entry into the national phase — Ref country code: DE
122 Ep: pct application non-entry in european phase — Ref document number: 14862756; Country of ref document: EP; Kind code of ref document: A1