
CN110955797A - User position determining method and device, electronic equipment and storage medium - Google Patents

User position determining method and device, electronic equipment and storage medium

Info

Publication number
CN110955797A
Authority
CN
China
Prior art keywords
user
coordinate
determining
preset
head
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911181575.7A
Other languages
Chinese (zh)
Other versions
CN110955797B (en)
Inventor
朱兆琪
董玉新
车广富
陈宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Jingdong Century Trading Co Ltd
Beijing Jingdong Shangke Information Technology Co Ltd
Original Assignee
Beijing Jingdong Century Trading Co Ltd
Beijing Jingdong Shangke Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Jingdong Century Trading Co Ltd, Beijing Jingdong Shangke Information Technology Co Ltd filed Critical Beijing Jingdong Century Trading Co Ltd
Priority to CN201911181575.7A priority Critical patent/CN110955797B/en
Publication of CN110955797A publication Critical patent/CN110955797A/en
Application granted granted Critical
Publication of CN110955797B publication Critical patent/CN110955797B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50 Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/587 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using geographical or spatial information, e.g. location
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50 Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/5866 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using information manually generated, e.g. tags, keywords, comments, manually generated location and time information

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Library & Information Science (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Image Analysis (AREA)

Abstract

The application provides a user position determining method and device, an electronic device and a storage medium. The user position determining method provided by the embodiments of the application includes: acquiring a target image covering a preset spatial range, determining head position coordinates of a user in the target image, and determining foot position coordinates of the user according to the head position coordinates and a preset mapping function, where the foot position coordinates represent the physical position of the user within the preset spatial range. Because the head position coordinates are determined from the acquired target image and the corresponding foot position coordinates are derived through the preset mapping function, the actual physical position of the user within the preset spatial range is determined, and the user in the preset space is accurately positioned.

Description

User position determining method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a method and an apparatus for determining a user position, an electronic device, and a storage medium.
Background
In recent years, as Internet of Things technology in the retail industry has matured, the demand for unmanned stores and intelligent containers has become increasingly urgent.
In order to realize the binding between the user and the goods in the unmanned store, the position of the user needs to be located first.
Therefore, a method for determining the position of a user is needed to accurately locate the position of the user in an unmanned store.
Disclosure of Invention
The embodiment of the application provides a user position determining method and device, electronic equipment and a storage medium, and aims to solve the technical problem that users in an unmanned store cannot be accurately positioned in the prior art.
In a first aspect, an embodiment of the present application provides a method for determining a user location, including:
acquiring a target image covering a preset space range;
determining head position coordinates of a user in the target image;
and determining foot position coordinates of the user according to the head position coordinates and a preset mapping function, wherein the foot position coordinates are used for representing the physical position of the user in the preset spatial range.
In one possible design, after the determining the head position coordinates of the user in the target image, the method further includes:
and determining a corrected head position coordinate according to the head position coordinate and a preset camera correction mapping function, and determining the foot position coordinate according to the corrected head position coordinate and the preset mapping function.
In one possible design, after determining the foot position coordinates of the user according to the head position coordinates and a preset mapping function, the method further includes:
acquiring commodity taking and placing information, wherein the commodity taking and placing information comprises commodity information and taking and placing position information;
and determining attribution information of the commodity information according to the pick-and-place position information and the foot position coordinates, wherein the attribution information comprises user information of the user.
In one possible design, the method for determining a user location further includes:
acquiring an annotation data set, wherein the annotation data set comprises a first coordinate and a second coordinate, the first coordinate is a pixel coordinate of the head of a user in a target image, and the second coordinate is a numerical coordinate of the foot of the user in a preset marking coordinate system;
and determining the preset mapping function according to the labeled data set and a preset fitting algorithm.
In one possible design, before the determining the preset mapping function according to the labeled data set and a preset fitting algorithm, the method further includes:
and determining a correction first coordinate according to the first coordinate and a preset camera correction mapping function, and determining the preset mapping function according to the correction first coordinate, the second coordinate and the preset fitting algorithm.
In one possible design, the predetermined fitting algorithm is a least squares method.
In one possible design, the method for determining a user location further includes:
obtaining distortion parameters corresponding to the wide-angle camera according to a cylindrical projection algorithm;
and determining the preset camera rectification mapping function according to the distortion parameter.
In one possible design, the determining head position coordinates of the user in the target image includes:
determining a head marking box of a user in the target image, wherein the head marking box is used for identifying the range of the head of the user in the target image;
and determining the coordinate corresponding to the central position of the head marking frame as the head position coordinate.
In a second aspect, an embodiment of the present application further provides a user position determining apparatus, including:
the acquisition module is used for acquiring a target image covering a preset space range;
a processing module for determining head position coordinates of a user in the target image;
the processing module is further configured to determine foot position coordinates of the user according to the head position coordinates and a preset mapping function, where the foot position coordinates are used to represent a physical position of the user in the preset spatial range.
In one possible design, the processing module is further configured to determine a corrected head position coordinate according to the head position coordinate and a preset camera correction mapping function, so as to determine the foot position coordinate according to the corrected head position coordinate and the preset mapping function.
The acquisition module is further used for acquiring commodity taking and placing information, wherein the commodity taking and placing information comprises commodity information and taking and placing position information;
the processing module is further configured to determine attribution information of the commodity information according to the pick-and-place position information and the foot position coordinates, where the attribution information includes user information of the user.
In one possible design, the user position determining apparatus further includes:
the marking module is used for acquiring a marking data set, wherein the marking data set comprises a first coordinate and a second coordinate, the first coordinate is a pixel coordinate of the head of the user in the target image, and the second coordinate is a numerical coordinate of the foot of the user in a preset marking coordinate system;
and the fitting module is used for determining the preset mapping function according to the labeled data set and a preset fitting algorithm.
In one possible design, the user position determining apparatus further includes:
and the correction module is used for determining a first correction coordinate according to the first coordinate and a preset camera correction mapping function so as to determine the preset mapping function according to the first correction coordinate, the second coordinate and the preset fitting algorithm.
In one possible design, the predetermined fitting algorithm is a least squares method.
In one possible design, the user position determining apparatus further includes:
the determining module is used for acquiring distortion parameters corresponding to the wide-angle camera according to a cylindrical projection algorithm;
the determining module is further configured to determine the preset camera rectification mapping function according to the distortion parameter.
In one possible design, the processing module is specifically configured to:
determining a head marking box of a user in the target image, wherein the head marking box is used for identifying the range of the head of the user in the target image;
and determining the coordinate corresponding to the central position of the head marking frame as the head position coordinate.
In a third aspect, an embodiment of the present application further provides an electronic device, including:
a processor; and,
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform any one of the possible user location determination methods of the first aspect via execution of the executable instructions.
In a fourth aspect, the present application further provides a storage medium, on which a computer program is stored, where the computer program is executed by a processor to implement any one of the possible user location determining methods in the first aspect.
According to the user position determining method and device, the electronic device and the storage medium, the head position coordinate is determined from the obtained target image, and the foot position coordinate corresponding to the head position coordinate is determined according to the preset mapping function, so that the actual physical position of the user in the preset space range is determined, and the user in the preset space is accurately positioned.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present application, and for those skilled in the art, other drawings can be obtained according to these drawings without inventive exercise.
FIG. 1 is a schematic diagram illustrating an application scenario architecture of a user location determination method according to an example embodiment;
FIG. 2 is a flow diagram illustrating a user location determination method according to an example embodiment;
FIG. 3 is a schematic flow chart illustrating the step of determining the predetermined mapping function in the embodiment of FIG. 2;
FIG. 4A is a target image corresponding to region 1 in FIG. 1;
FIG. 4B is a target image corresponding to region 2 of FIG. 1;
FIG. 4C is a target image corresponding to region 3 of FIG. 1;
FIG. 4D is a target image corresponding to region 4 of FIG. 1;
FIG. 5 is a flow diagram illustrating a user location determination method according to another example embodiment;
FIG. 6 is a flowchart illustrating the step of determining the predetermined camera rectification mapping function in the embodiment shown in FIG. 5;
FIGS. 7A-7B are schematic diagrams of cylindrical projections provided in the embodiment shown in FIG. 5;
FIGS. 8A-8B are schematic diagrams of test images used to determine distortion parameters for the embodiment shown in FIG. 5;
FIG. 9 is a flowchart illustrating the step of determining the predetermined mapping function in the embodiment of FIG. 5;
FIG. 10 is a flow chart diagram illustrating a user location determination method according to yet another example embodiment;
FIG. 11 is a schematic diagram illustrating the structure of a user position determination device according to an example embodiment;
FIG. 12 is a schematic diagram of a user position determination device shown in the present application according to another example embodiment;
FIG. 13 is a schematic diagram of a user position determination device shown in the present application according to yet another example embodiment;
fig. 14 is a schematic structural diagram of an electronic device shown in the present application according to an example embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first," "second," "third," "fourth," and the like in the description and in the claims of the present application and in the above-described drawings (if any) are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the application described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Fig. 1 is a schematic diagram of an application scenario architecture of a user position determination method according to an example embodiment. As shown in fig. 1, the user position determining method provided by this embodiment may be used to locate the physical position of a user within a preset space, for example, to locate a customer in an unmanned store. When a customer shops in an unmanned store, the position where the customer stands needs to be confirmed, so a coordinate system needs to be established for the store. As shown in fig. 1, the coordinate system may be established for an unmanned store measuring 55 dm by 40 dm, and the store is then monitored by a plurality of cameras (for example, 4 cameras), which may be wide-angle cameras.
Specifically, with reference to fig. 1, the area 1 is an area that can be monitored by the first camera, the area 2 is an area that can be monitored by the second camera, the area 3 is an area that can be monitored by the third camera, and the area 4 is an area that can be monitored by the fourth camera, and the entire unmanned shop can be covered by the monitoring areas of the four cameras.
Then, head position coordinates of the user in the target image are determined through the acquired target image, and foot position coordinates of the user are determined according to the head position coordinates and a preset mapping function, wherein the foot position coordinates are used for representing the physical position of the user in a preset space range, and therefore the position of the customer standing in the unmanned store is determined.
The user position determining method provided by the present application is described below by taking an unmanned store as an example of the preset spatial range:
fig. 2 is a flowchart illustrating a user location determination method according to an example embodiment of the present application. As shown in fig. 2, the method for determining a user position provided by this embodiment includes:
step 101, acquiring a target image covering a preset spatial range.
Specifically, the target images in the unmanned shop range may be acquired by cameras (for example, wide-angle cameras) arranged in the unmanned shop, where the number of the arranged cameras may be one or multiple, and the specific number may be determined according to characteristic factors such as layout and area of the unmanned shop.
As shown in fig. 1, the unmanned store may be monitored by 4 cameras, and optionally, the camera is a wide-angle camera, where an area 1 is an area that can be monitored by a first camera, an area 2 is an area that can be monitored by a second camera, an area 3 is an area that can be monitored by a third camera, and an area 4 is an area that can be monitored by a fourth camera, and the monitoring areas of the four cameras can cover the entire unmanned store.
And 102, determining the head position coordinates of the user in the target image.
After the target image covering the preset spatial range is acquired, the region where the user's head is located can be found in the target image by means of image recognition, and the head position coordinates of the user in the target image are determined from that region.
Specifically, a head marking frame of the user in the target image may be determined, where the head marking frame is used to identify a range of the head of the user in the target image, and then a coordinate corresponding to a center position of the head marking frame is determined as a head position coordinate.
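To make the center-of-box computation concrete, the following is a minimal sketch in Python; the (left, top, right, bottom) box format and all names are illustrative assumptions, since the patent does not specify a detector interface.

```python
# Minimal sketch: derive the head position coordinate from a head marking
# box. The (left, top, right, bottom) pixel format and the function name
# are assumptions for illustration; the patent does not fix an interface.

def head_center(box):
    """Return the pixel coordinate of the center of a head marking box."""
    left, top, right, bottom = box
    return ((left + right) / 2.0, (top + bottom) / 2.0)

# Example: a head detected within pixels (420, 130)-(480, 196).
head_x, head_y = head_center((420, 130, 480, 196))  # -> (450.0, 163.0)
```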
And 103, determining the foot position coordinates of the user according to the head position coordinates and a preset mapping function.
In this step, after the head position coordinates of the user in the target image are determined, the foot position coordinates of the user may be determined according to the head position coordinates and a preset mapping function, and it is understood that the preset mapping function may be determined by way of labeling and fitting.
FIG. 3 is a flow chart illustrating the step of determining the predetermined mapping function in the embodiment shown in FIG. 2. As shown in fig. 3, the step of determining the preset mapping function in this embodiment includes:
and step 1031, obtaining the labeling data set.
Specifically, marking points are arranged at 30 cm intervals along the X and Y axes of the unmanned store's floor. An experimenter then stands at each marking point while images are recorded, so that image information is acquired under the different cameras. The head position of the experimenter in each image is annotated to obtain the pixel position of the head center, and the experimenter's position in the store's global coordinate system can be read off from the marking points, so a set of paired head position coordinates and foot position coordinates is obtained from the experiment.
Fig. 4A-4D are the target images corresponding to areas 1-4 in fig. 1, respectively. As shown in fig. 4A-4D, an annotation data set may be established by annotating the head position in each target image to obtain the head position coordinates and determining the foot position coordinates by reference to the marking points on the X and Y axes of the floor.
And 1032, determining a preset mapping function according to the marked data set and a preset fitting algorithm.
After the annotation data set is established, the preset mapping function may be determined according to the annotation data set and a preset fitting algorithm, where the preset fitting algorithm may be a least squares method. It is worth noting that the least squares method is a mathematical optimization technique that finds the best functional match for the data by minimizing the sum of squared errors. With it, unknown parameters can be readily estimated such that the sum of squared errors between the fitted values and the actual data is minimized. The least squares method can also be used for curve fitting, and other optimization problems may be expressed in least-squares form by minimizing energy or maximizing entropy.
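As a rough illustration of this step, the sketch below fits such a mapping with NumPy's least-squares solver; the quadratic feature basis, array shapes, and all names are assumptions, since the patent specifies only that a least squares method may be used.

```python
import numpy as np

def features(pts):
    # Quadratic polynomial basis over pixel coordinates (an assumption;
    # the patent only names a least-squares fit, not a model form).
    x, y = pts[:, 0], pts[:, 1]
    return np.stack([np.ones_like(x), x, y, x * y, x ** 2, y ** 2], axis=1)

def fit_mapping(heads, feet):
    """heads: (N, 2) head pixel coordinates from the annotation data set;
    feet: (N, 2) foot coordinates in the store's global coordinate system.
    Returns a (6, 2) coefficient matrix, one least-squares fit per axis."""
    coef, *_ = np.linalg.lstsq(features(heads), feet, rcond=None)
    return coef

def apply_mapping(coef, heads):
    """Predict foot position coordinates for head pixel coordinates."""
    return features(heads) @ coef
```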
In this embodiment, the head position coordinates are determined from the acquired target image, and the foot position coordinates corresponding to the head position coordinates are determined according to the preset mapping function, so as to determine the actual physical position of the user in the preset space range, so as to accurately position the user in the preset space.
Fig. 5 is a flowchart illustrating a user location determination method according to another example embodiment. As shown in fig. 5, the method for determining a user position provided by this embodiment includes:
step 201, acquiring a target image covering a preset space range.
Step 202, determining the head position coordinates of the user in the target image.
It should be noted that the specific implementation of steps 201 to 202 in this embodiment is similar to that of steps 101 to 102 in the embodiment shown in fig. 2, and is not described again here.
And step 203, determining the corrected head position coordinate according to the head position coordinate and a preset camera correction mapping function.
It is worth mentioning that wide-angle cameras are commonly used in unmanned stores and unmanned restaurants for cost reasons. However, a wide-angle camera has large distortion, which deforms the image, increases the difficulty of subsequent data analysis, and reduces its accuracy. Correcting the images collected by a wide-angle camera is therefore of great significance; existing image correction methods, however, do not fully consider the distortion characteristic of the wide-angle camera and so cannot effectively correct its images.
Fig. 6 is a flowchart illustrating a step of determining a predetermined camera rectification mapping function in the embodiment shown in fig. 5. As shown in fig. 6, the step of determining the preset camera rectification mapping function in this embodiment includes:
step 2031, obtaining a distortion parameter corresponding to the wide-angle camera according to the cylindrical projection algorithm.
Step 2032, determining a preset camera rectification mapping function according to the distortion parameter.
Fig. 7A-7B are schematic diagrams of cylindrical projections provided in the embodiment shown in fig. 5. Fig. 7A is a perspective view of the cylindrical projection, and fig. 7B is a top view. Cylindrical projection projects an image on a plane onto a cylindrical surface. In fig. 7A and 7B, O is the observation point; points A, A' and G all lie on a plane; point A has coordinates (x, y, z); the plane is tangent to the cylinder, and R is the radius of the cylinder. Point G is the tangent point of the cylindrical surface and the plane; A' is the vertical projection of point A in the horizontal direction; B is the projection of point A on the cylindrical surface, with coordinates (x', y', z'); point B' is the projection of point A' on the cylindrical surface, with coordinates (x', 0, z'); F is the foot of the perpendicular from B' to GO. The size of the cylinder radius R reflects the distortion characteristic of the wide-angle camera; therefore, R is used in this embodiment to represent the distortion parameter corresponding to the wide-angle camera.
As shown in fig. 7A, the similar geometric relationship of the triangles gives:

BB'/AA' = B'F/A'G = OF/OG = k

Then BB' = kAA', B'F = kA'G, and OF = kR, where k is the similarity coefficient and k < 1. It follows that kx = x' and ky = y'.
According to the Pythagorean theorem, OF² + B'F² = B'O², that is, k²R² + k²x² = R². Thus the relationship between k and R can be found as follows:

k = R / √(R² + x²)
Then:

x' = kx = Rx / √(R² + x²)

y' = ky = Ry / √(R² + x²)
According to the two formulas above, the correspondence between a pixel's position in the plane image and its position on the cylindrical surface can be obtained. Because an image acquired by a wide-angle camera has the characteristic of a plane image mapped onto a cylindrical surface, inverting the two formulas gives:

x = Rx' / √(R² − x'²)   (formula 1)

y = Ry' / √(R² − x'²)   (formula 2)

The pixels on the cylinder can thus be mapped onto the plane according to formula 1 and formula 2.
In this embodiment, the distortion parameter of the wide-angle camera is obtained experimentally by performing correction processing on a test image. Fig. 8A-8B are schematic diagrams of the test images used to determine the distortion parameter in the embodiment shown in fig. 5. Fig. 8A is the test image used in this embodiment, and fig. 8B is the distorted image obtained by shooting the test image of fig. 8A with a wide-angle camera. Different values are set for R, and the image shown in fig. 8B is corrected according to formula 1 and formula 2; the value of R that minimizes the error between the corrected image and the test image of fig. 8A is taken as the distortion parameter of the wide-angle camera.
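A compact version of this search could look as follows; the forward plane-to-cylinder warp, nearest-neighbor sampling, image-center origin, and mean-squared-error criterion are all simplifying assumptions about the experiment described above.

```python
import numpy as np

def plane_to_cylinder(x, y, R):
    # Forward mapping from the derivation above:
    # x' = Rx / sqrt(R^2 + x^2), y' = Ry / sqrt(R^2 + x^2).
    d = np.sqrt(R ** 2 + x ** 2)
    return R * x / d, R * y / d

def correct_image(distorted, R):
    """Inverse-warp a distorted (cylindrical) image onto the plane with
    nearest-neighbor sampling, using the image center as the origin
    (an assumption; the patent does not fix the principal point)."""
    h, w = distorted.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    xp, yp = plane_to_cylinder(xs - w / 2.0, ys - h / 2.0, R)
    src_x = np.clip(np.rint(xp + w / 2.0).astype(int), 0, w - 1)
    src_y = np.clip(np.rint(yp + h / 2.0).astype(int), 0, h - 1)
    return distorted[src_y, src_x]

def estimate_R(distorted, reference, candidates):
    """Return the candidate R whose corrected image best matches the
    undistorted test image (mean squared error criterion)."""
    errs = [np.mean((correct_image(distorted, R).astype(float)
                     - reference.astype(float)) ** 2) for R in candidates]
    return candidates[int(np.argmin(errs))]
```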
In some embodiments, one implementation of obtaining the abscissa of the corrected head coordinate corresponding to the head position coordinate from the distortion parameter corresponding to the wide-angle camera and the abscissa of the head position coordinate may be:
according to formula 1, the abscissa value of the corrected head coordinate corresponding to the abscissa value of the head position coordinate is obtained:

x = Rx' / √(R² − x'²)

where x' is the abscissa value of the head position coordinate, x is the corresponding abscissa value of the corrected head coordinate, and R is the distortion parameter corresponding to the wide-angle camera. It is understood that, if the abscissa value x calculated according to formula 1 is a decimal, the method further includes rounding x, including but not limited to rounding to the nearest integer, rounding up, and rounding down.
In some embodiments, one implementation of obtaining the ordinate of the corrected head coordinate corresponding to the head position coordinate from the distortion parameter corresponding to the wide-angle camera and the ordinate of the head position coordinate may be:
according to formula 2, the ordinate value of the corrected head coordinate corresponding to the ordinate value of the head position coordinate is obtained:

y = Ry' / √(R² − x'²)

where y' is the ordinate value of the head position coordinate, x' is its abscissa value, y is the corresponding ordinate value of the corrected head coordinate, and R is the distortion parameter corresponding to the wide-angle camera. It is understood that, if the ordinate value y calculated according to formula 2 is a decimal, the method further includes rounding y, including but not limited to rounding to the nearest integer, rounding up, and rounding down.
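The two correction formulas, together with the rounding options above, reduce to a few lines of Python; treating the coordinates as offsets from the image center is an assumption, since the patent does not fix the origin.

```python
import math

def correct_head(xp, yp, R, rounding=round):
    """Apply formula 1 and formula 2 to a head position coordinate (x', y'),
    then round, e.g. with round, math.floor, or math.ceil as listed above.
    Coordinates are assumed to be offsets from the image center; |x'| < R."""
    d = math.sqrt(R * R - xp * xp)
    x = R * xp / d  # formula 1
    y = R * yp / d  # formula 2
    return rounding(x), rounding(y)
```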
The image correction provided by this embodiment of the application fully considers the distortion characteristic of the wide-angle camera; it not only effectively corrects the distorted images collected by the wide-angle camera but also improves the accuracy of the corrected image. Moreover, because the correction is realized by coordinate transformation, the processing complexity is low, the speed of image correction is increased, its time consumption is reduced, and a foundation is laid for real-time data processing.
And 204, determining foot position coordinates according to the corrected head position coordinates and a preset mapping function.
Specifically:

(x_global, y_global) = f_mapping(x, y)

where (x_global, y_global) is the foot position coordinate, (x, y) is the corrected head coordinate, and f_mapping is the preset mapping function.
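Continuing the fitting sketch given earlier, the corrected head coordinate is simply pushed through the fitted mapping; the coordinate values below are illustrative.

```python
import numpy as np

# Continuing the earlier fitting sketch: coef is the matrix returned by
# fit_mapping, and the head coordinate below is an illustrative value.
corrected_head = np.array([[450.0, 163.0]])          # corrected (x, y)
x_global, y_global = apply_mapping(coef, corrected_head)[0]
```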
Fig. 9 is a flowchart illustrating the step of determining the predetermined mapping function in the embodiment shown in fig. 5. As shown in fig. 9, the step of determining the preset mapping function in this embodiment includes:
and 2041, acquiring an annotation data set.
Step 2042, determining a rectification first coordinate according to the first coordinate and a preset camera rectification mapping function.
Step 2043, determining a preset mapping function according to the corrected first coordinate, the corrected second coordinate and a preset fitting algorithm.
In one possible design, the annotation data set includes a first coordinate and a second coordinate, where the first coordinate is a pixel coordinate of the head of the user in the target image, and the second coordinate is a numerical coordinate of the foot of the user in a preset mark coordinate system.
This embodiment realizes global mapping within the unmanned store and solves well the problem of obtaining the customer's actual global position in the store from the pixel position at which the customer is detected in the camera image.
FIG. 10 is a flowchart illustrating a user location determination method according to yet another example embodiment. As shown in fig. 10, the method for determining a user position provided by this embodiment includes:
step 301, acquiring a target image covering a preset spatial range.
Step 302, determining the head position coordinates of the user in the target image.
And step 303, determining a corrected head position coordinate according to the head position coordinate and a preset camera correction mapping function.
And step 304, determining foot position coordinates according to the corrected head position coordinates and a preset mapping function.
It should be noted that the specific implementation of steps 301 to 304 in this embodiment is similar to that of steps 201 to 204 in the embodiment shown in fig. 5, and is not described again here.
And 305, acquiring commodity taking and placing information.
Commodity taking and placing information is acquired, where the commodity taking and placing information includes commodity information and taking and placing position information. This information may be obtained through an intelligent shelf; the intelligent shelf may be any intelligent shelf in the prior art and is not specifically limited in this embodiment.
And step 306, determining attribution information of the commodity information according to the pick-and-place position information and the foot position coordinates.
And determining attribution information of the commodity information according to the pick-and-place position information and the foot position coordinates, wherein the attribution information comprises user information of the user.
It should be understood that the intelligent shelf detects when a commodity on the shelf is taken or put back, and the attribution of that commodity is then determined in combination with the determined foot position coordinates. For example, the customer whose foot position coordinate is closest to the taking and placing position may be taken as the user who actually took or placed the commodity, so that the attribution information of the commodity information includes that user's user information, and a payment operation is subsequently performed.
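A sketch of the nearest-customer rule described above follows; the data layout and names are assumptions for illustration.

```python
import numpy as np

def attribute_commodity(pick_position, foot_coords, user_ids):
    """pick_position: (x, y) taking and placing position in the global
    coordinate system; foot_coords: (N, 2) foot position coordinates of the
    customers currently in the store; user_ids: their N identifiers.
    Returns the user closest to the taking and placing position."""
    dists = np.linalg.norm(foot_coords - np.asarray(pick_position), axis=1)
    return user_ids[int(np.argmin(dists))]
```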
Fig. 11 is a schematic structural diagram of a user position determination device according to an example embodiment. As shown in fig. 11, the user position determining apparatus 400 provided in the present embodiment includes:
an obtaining module 401, configured to obtain a target image covering a preset spatial range;
a processing module 402 for determining head position coordinates of a user in the target image;
the processing module 402 is further configured to determine foot position coordinates of the user according to the head position coordinates and a preset mapping function, where the foot position coordinates are used to represent a physical position of the user in the preset spatial range.
In a possible design, the processing module 402 is further configured to determine corrected head position coordinates according to the head position coordinates and a preset camera correction mapping function, so as to determine the foot position coordinates according to the corrected head position coordinates and the preset mapping function.
The obtaining module 401 is further configured to obtain commodity taking and placing information, where the commodity taking and placing information includes commodity information and taking and placing position information;
the processing module 402 is further configured to determine attribution information of the commodity information according to the pick-and-place position information and the foot position coordinate, where the attribution information includes user information of the user.
On the basis of the embodiment shown in fig. 11, fig. 12 is a schematic structural diagram of a user position determination device according to another exemplary embodiment shown in the present application. As shown in fig. 12, the user position determining apparatus 400 provided in this embodiment further includes:
a labeling module 403, configured to obtain a labeling data set, where the labeling data set includes a first coordinate and a second coordinate, where the first coordinate is a pixel coordinate of a head of a user in a target image, and the second coordinate is a numerical coordinate of a foot of the user in a preset labeling coordinate system;
a fitting module 405, configured to determine the preset mapping function according to the labeled data set and a preset fitting algorithm.
In one possible design, the user position determining apparatus 400 further includes:
a correcting module 404, configured to determine a corrected first coordinate according to the first coordinate and a preset camera correcting mapping function, so as to determine the preset mapping function according to the corrected first coordinate, the second coordinate, and the preset fitting algorithm.
In one possible design, the predetermined fitting algorithm is a least squares method.
On the basis of the embodiment shown in fig. 12, fig. 13 is a schematic structural diagram of a user position determination device according to another exemplary embodiment shown in the present application. As shown in fig. 13, the user position determining apparatus 400 provided in this embodiment further includes:
the determining module 406 is configured to obtain a distortion parameter corresponding to the wide-angle camera according to a cylindrical projection algorithm;
the determining module 406 is further configured to determine the preset camera rectification mapping function according to the distortion parameter.
In one possible design, the processing module 402 is specifically configured to:
determining a head marking box of a user in the target image, wherein the head marking box is used for identifying the range of the head of the user in the target image;
and determining the coordinate corresponding to the central position of the head marking frame as the head position coordinate.
It should be noted that the user position determining apparatus provided in the embodiments shown in fig. 11 to 13 may be used to execute the steps provided in any of the above method embodiments, and the specific implementation manner and the technical effect are similar and will not be described again here.
Fig. 14 is a schematic structural diagram of an electronic device shown in the present application according to an example embodiment. As shown in fig. 14, the present embodiment provides an electronic device 500, including:
a processor 501; and,
a memory 502 for storing executable instructions of the processor (the memory may, for example, be a flash memory);
wherein the processor 501 is configured to perform the steps of the above-described method via execution of the executable instructions. Reference may be made in particular to the description relating to the preceding method embodiment.
Alternatively, the memory 502 may be separate or integrated with the processor 501.
When the memory 502 is a device independent of the processor 501, the electronic apparatus may further include:
a bus 503 for connecting the processor 501 and the memory 502.
The present embodiment also provides a readable storage medium, in which a computer program is stored, and when at least one processor of the electronic device executes the computer program, the electronic device executes the steps in the methods provided in the above-mentioned various embodiments.
The present embodiment also provides a program product comprising a computer program stored in a readable storage medium. The computer program can be read from a readable storage medium by at least one processor of the electronic device, and the execution of the computer program by the at least one processor causes the electronic device to implement the steps of the methods provided by the various embodiments described above.
Those of ordinary skill in the art will understand that: all or a portion of the steps of implementing the above-described method embodiments may be performed by hardware associated with program instructions. The program may be stored in a computer-readable storage medium. When executed, the program performs steps comprising the method embodiments described above; and the aforementioned storage medium includes: various media that can store program codes, such as ROM, RAM, magnetic or optical disks.
Finally, it should be noted that: the above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present application.

Claims (11)

1. A method for determining a location of a user, comprising:
acquiring a target image covering a preset space range;
determining head position coordinates of a user in the target image;
and determining foot position coordinates of the user according to the head position coordinates and a preset mapping function, wherein the foot position coordinates are used for representing the physical position of the user in the preset spatial range.
2. The method of claim 1, further comprising, after the determining head position coordinates of the user in the target image:
and determining a corrected head position coordinate according to the head position coordinate and a preset camera correction mapping function, and determining the foot position coordinate according to the corrected head position coordinate and the preset mapping function.
3. The method of claim 1, further comprising, after determining the foot position coordinates of the user according to the head position coordinates and a preset mapping function:
acquiring commodity taking and placing information, wherein the commodity taking and placing information comprises commodity information and taking and placing position information;
and determining attribution information of the commodity information according to the pick-and-place position information and the foot position coordinates, wherein the attribution information comprises user information of the user.
4. The method of any one of claims 1-3, further comprising:
acquiring an annotation data set, wherein the annotation data set comprises a first coordinate and a second coordinate, the first coordinate is a pixel coordinate of the head of a user in a target image, and the second coordinate is a numerical coordinate of the foot of the user in a preset marking coordinate system;
and determining the preset mapping function according to the labeled data set and a preset fitting algorithm.
5. The method of claim 4, wherein prior to determining the predetermined mapping function based on the annotated data set and a predetermined fitting algorithm, further comprising:
and determining a correction first coordinate according to the first coordinate and a preset camera correction mapping function, and determining the preset mapping function according to the correction first coordinate, the second coordinate and the preset fitting algorithm.
6. The method of claim 5, wherein the predetermined fitting algorithm is a least squares method.
7. The method of claim 5, further comprising:
obtaining distortion parameters corresponding to the wide-angle camera according to a cylindrical projection algorithm;
and determining the preset camera rectification mapping function according to the distortion parameter.
8. The method according to any one of claims 1 to 3, wherein the determining head position coordinates of the user in the target image comprises:
determining a head marking box of a user in the target image, wherein the head marking box is used for identifying the range of the head of the user in the target image;
and determining the coordinate corresponding to the central position of the head marking frame as the head position coordinate.
9. A user position determination apparatus, comprising:
the acquisition module is used for acquiring a target image covering a preset space range;
a processing module for determining head position coordinates of a user in the target image;
the processing module is further configured to determine foot position coordinates of the user according to the head position coordinates and a preset mapping function, where the foot position coordinates are used to represent a physical position of the user in the preset spatial range.
10. An electronic device, comprising:
a processor; and,
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform the user location determination method of any of claims 1 to 8 via execution of the executable instructions.
11. A storage medium on which a computer program is stored, which program, when being executed by a processor, is adapted to carry out the method of user position determination according to any one of claims 1 to 8.
CN201911181575.7A 2019-11-27 2019-11-27 User position determining method, device, electronic equipment and storage medium Active CN110955797B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911181575.7A CN110955797B (en) 2019-11-27 2019-11-27 User position determining method, device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911181575.7A CN110955797B (en) 2019-11-27 2019-11-27 User position determining method, device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN110955797A true CN110955797A (en) 2020-04-03
CN110955797B CN110955797B (en) 2023-05-02

Family

ID=69978578

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911181575.7A Active CN110955797B (en) 2019-11-27 2019-11-27 User position determining method, device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN110955797B (en)


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140348382A1 (en) * 2013-05-22 2014-11-27 Hitachi, Ltd. People counting device and people trajectory analysis device
WO2016141744A1 (en) * 2015-03-09 2016-09-15 杭州海康威视数字技术股份有限公司 Target tracking method, apparatus and system
CN109146969A (en) * 2018-08-01 2019-01-04 北京旷视科技有限公司 Pedestrian's localization method, device and processing equipment and its storage medium

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
MARKUS ZANK: "Tracking human locomotion by relative positional feet tracking", 2015 IEEE VIRTUAL REALITY (VR) *
CUI Yibo et al.: "Automatic camera calibration based on pedestrian tracking in video surveillance", Journal of Tsinghua University (Science and Technology) *
JIN Lu et al.: "Target positioning based on multiple cameras", Industrial Control Computer *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113516707A (en) * 2020-04-10 2021-10-19 支付宝(杭州)信息技术有限公司 Object positioning method and device based on image
CN111540106A (en) * 2020-04-14 2020-08-14 合肥工业大学 Unmanned supermarket system

Also Published As

Publication number Publication date
CN110955797B (en) 2023-05-02

Similar Documents

Publication Publication Date Title
CN110111388B (en) Three-dimensional object pose parameter estimation method and visual equipment
US9165365B2 (en) Method and system for estimating attitude of camera
CN112132907B (en) Camera calibration method and device, electronic equipment and storage medium
JP5773436B2 (en) Information terminal equipment
CN112254633B (en) Object size measuring method, device and equipment
CN110728754B (en) Rigid body mark point identification method, device, equipment and storage medium
CN110955797B (en) User position determining method, device, electronic equipment and storage medium
CN111958604A (en) Efficient special-shaped brush monocular vision teaching grabbing method based on CAD model
CN107680112B (en) Image registration method
CN111881894A (en) Method, system, equipment and storage medium for collecting goods selling information of container
CN111274848A (en) Image detection method and device, electronic equipment and storage medium
CN111401363A (en) Frame number image generation method and device, computer equipment and storage medium
CN111996883A (en) Method for detecting width of road surface
CN111428743B (en) Commodity identification method, commodity processing device and electronic equipment
CN109784227A (en) Image detection recognition methods and device
CN111681268A (en) Method, device, equipment and storage medium for identifying and detecting sequence number of optical mark point by mistake
EP3825804A1 (en) Map construction method, apparatus, storage medium and electronic device
CN109166136B (en) Target object following method of mobile robot based on monocular vision sensor
CN112489240B (en) Commodity display inspection method, inspection robot and storage medium
CN111399634A (en) Gesture-guided object recognition method and device
CN116258663A (en) Bolt defect identification method, device, computer equipment and storage medium
CN109829951B (en) Parallel equipotential detection method and device and automatic driving system
CN106778925B (en) Face recognition pose over-complete face automatic registration method and device
CN111898552A (en) Method and device for distinguishing person attention target object and computer equipment
CN116721582A (en) Three-dimensional machine vision training method, system, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant