
HK1201618A1 - Method and device for recognizing image - Google Patents

Method and device for recognizing image

Info

Publication number
HK1201618A1
HK1201618A1 HK15102029.4A HK15102029A HK1201618A1 HK 1201618 A1 HK1201618 A1 HK 1201618A1 HK 15102029 A HK15102029 A HK 15102029A HK 1201618 A1 HK1201618 A1 HK 1201618A1
Authority
HK
Hong Kong
Prior art keywords
picture
white
point
main body
track
Prior art date
Application number
HK15102029.4A
Other languages
Chinese (zh)
Other versions
HK1201618B (en)
Inventor
曹陽
曹阳
Original Assignee
阿里巴巴集团控股有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 阿里巴巴集团控股有限公司 filed Critical 阿里巴巴集团控股有限公司
Publication of HK1201618A1 publication Critical patent/HK1201618A1/en
Publication of HK1201618B publication Critical patent/HK1201618B/en

Landscapes

  • Image Analysis (AREA)

Description

Method and device for recognizing a picture
Technical Field
The present application relates to the field of image recognition, and in particular to a method and a device for recognizing a picture.
Background
At present there are many B2C (business-to-consumer) websites offering goods from a large number of merchants. On a B2C website the pictures come from different vendors, and their production style and quality vary. As a result, the overall display effect of some B2C websites is uneven, which greatly affects the positioning of the merchandise and the degree of trust consumers place in it.
At present, such B2C websites can only require their merchants to produce pictures according to a uniform standard by publishing a picture specification, while non-compliant pictures are found and penalized manually. Because of the very large number of products, this approach is inefficient and extremely labor-intensive, so it is highly desirable to determine automatically, by technical means, whether a picture meets the specification.
How to deal with the uneven display effect of commodities from different merchants on a B2C website, and with the inability to detect whether pictures follow a uniform standard, has therefore become a technical problem to be solved urgently.
Disclosure of Invention
In view of this, the present application provides a method and an apparatus for recognizing a picture, to address the problem that the commodity display effect of different merchants on a B2C website is uneven and that non-uniform picture standards cannot be detected.
In order to solve the above technical problem, the present application discloses a method for recognizing a picture, including: receiving a picture and identifying a white region and a non-white region in the picture, wherein the white region is formed only of connected white pixels and the non-white region is formed only of connected non-white pixels; and, when the area ratio of the white region in the picture exceeds a preset area ratio threshold, acquiring a non-white region whose saliency exceeds a preset threshold and locating a subject object in the picture according to attributes of that non-white region.
Further, the attributes of the non-white region whose saliency exceeds the preset threshold include one or more of the following: the number of non-white regions, their area ratio in the picture, their centroid (barycentric) coordinates, and their leftmost and rightmost horizontal coordinates and uppermost and lowermost vertical coordinates. Locating the subject object in the picture according to these attributes includes one or more of the following: acquiring the number of subject objects in the picture from the non-white regions whose saliency exceeds the preset threshold; acquiring the area ratio of the subject object in the picture from the area ratio of those regions in the picture; acquiring the position of the subject object in the picture from the centroid coordinates of those regions; and acquiring the boundary of the subject object in the picture from the leftmost and rightmost horizontal coordinates and the uppermost and lowermost vertical coordinates of those regions.
Further, the method also includes: for any non-white region located as a subject object, fitting all of its non-white pixels to obtain a contour trajectory of the corresponding subject object, and taking the major-axis direction of the shape enclosed by the contour trajectory as the placement direction of the subject object.
Further, the contour trajectory of the subject object is an ellipse or a rectangle; the major-axis direction of the shape enclosed by the contour trajectory is expressed as the angle between the major axis of that shape and the horizontal axis of the picture; and taking the major-axis direction as the placement direction of the subject object includes: if the angle between the major axis and the positive half of the horizontal axis of the picture is greater than 90 degrees, the subject object faces left; otherwise, it faces right.
Further, the method also includes: for any non-white region located as a subject object, acquiring the convex hull trajectory of the non-white region and, from it, the contour trajectory of the bottom of the subject object; acquiring the lowest point (maximum vertical coordinate) of the bottom contour trajectory and, combined with the placement direction of the subject object, taking the part of the bottom contour trajectory extending from this lowest point opposite to the placement direction as the effective bottom contour trajectory; enumerating each point on the effective contour trajectory as a starting point and expanding from it, continuously computing the angle between the line connecting the current point and its successor and the horizontal axis of the picture, taking the current point as an end point when the angle exceeds a threshold, and intercepting the trajectory from the starting point to the end point as a candidate trajectory; selecting the longest of the candidate trajectories as the target contour trajectory of the bottom of the subject object; and acquiring the angle between the line from the starting point to the end point of the target contour trajectory and the horizontal axis of the picture, so as to judge the placement angle trend of the subject object in the picture from that angle.
Further, acquiring the contour trajectory of the bottom of the subject object from the convex hull trajectory includes: starting from a point at the upper-left corner of the convex hull trajectory and iterating counterclockwise, comparing the horizontal coordinate of the current point with that of its successor; setting the successor as the starting point when its horizontal coordinate first becomes larger than the current point's; then setting the current point as the end point when, later, a successor's horizontal coordinate becomes smaller than the current point's; and taking the traversal from the starting point to the end point as the contour trajectory of the bottom of the subject object.
Further, the method also includes: when the area ratio of the white region in the picture exceeds the preset area ratio threshold, also acquiring the non-white regions whose saliency does not exceed the preset threshold, locating the description information in the picture according to their attributes, and recognizing the text content of the description information.
Further, the attributes of the non-white regions whose saliency does not exceed the preset threshold include one or more of the following: the number of such non-white regions, their area ratio in the picture, their centroid coordinates, and their leftmost and rightmost horizontal coordinates and uppermost and lowermost vertical coordinates. Locating the description information in the picture according to these attributes includes one or more of the following: acquiring the number of pieces of description information in the picture from the non-white regions whose saliency does not exceed the preset threshold; acquiring the area ratio of the description information in the picture from the area ratio of those regions in the picture; acquiring the position of the description information in the picture from the centroid coordinates of those regions; and acquiring the boundary of the description information in the picture from the leftmost and rightmost horizontal coordinates and the uppermost and lowermost vertical coordinates of those regions.
In order to solve the above technical problem, the present application also discloses an apparatus for recognizing a picture, including a region identification module and a location identification module. The region identification module is configured to receive a picture, identify a white region and a non-white region in the picture, and send them to the location identification module, wherein the white region is formed only of connected white pixels and the non-white region is formed only of connected non-white pixels. The location identification module is configured to, when the area ratio of the white region in the picture exceeds a preset area ratio threshold, acquire a non-white region whose saliency exceeds a preset threshold and locate the subject object in the picture according to attributes of that non-white region.
Further, the attributes of the non-white region whose saliency exceeds the preset threshold, as obtained by the location identification module, include one or more of the following: the number of non-white regions, their area ratio in the picture, their centroid coordinates, and their leftmost and rightmost horizontal coordinates and uppermost and lowermost vertical coordinates. The location identification module locates the subject object in the picture according to these attributes by one or more of the following: acquiring the number of subject objects in the picture from the non-white regions whose saliency exceeds the preset threshold; acquiring the area ratio of the subject object in the picture from the area ratio of those regions in the picture; acquiring the position of the subject object in the picture from the centroid coordinates of those regions; and acquiring the boundary of the subject object in the picture from the leftmost and rightmost horizontal coordinates and the uppermost and lowermost vertical coordinates of those regions.
Further, the apparatus also includes a placement identification module, configured to fit all non-white pixels in any non-white region located as a subject object to obtain a contour trajectory of the corresponding subject object, and to take the major-axis direction of the shape enclosed by the contour trajectory as the placement direction of the subject object.
Further, the contour trajectory obtained by the placement identification module is an ellipse or a rectangle; the major-axis direction of the shape enclosed by the contour trajectory is expressed as the angle between its major axis and the horizontal axis of the picture; and the placement identification module takes this direction as the placement direction of the subject object as follows: if the angle between the major axis and the positive half of the horizontal axis of the picture is greater than 90 degrees, the subject object faces left; otherwise, it faces right.
Further, the apparatus also includes a bottom identification module, configured to: for any non-white region located as a subject object, acquire the convex hull trajectory of the non-white region and, from it, the contour trajectory of the bottom of the subject object; acquire the lowest point (maximum vertical coordinate) of the bottom contour trajectory and, combined with the placement direction of the subject object, take the part of the bottom contour trajectory extending from this lowest point opposite to the placement direction as the effective bottom contour trajectory; enumerate each point on the effective contour trajectory as a starting point and expand from it, continuously computing the angle between the line connecting the current point and its successor and the horizontal axis of the picture, take the current point as an end point when the angle exceeds a threshold, and intercept the trajectory from the starting point to the end point as a candidate trajectory; select the longest of the candidate trajectories as the target contour trajectory of the bottom of the subject object; and acquire the angle between the line from the starting point to the end point of the target contour trajectory and the horizontal axis of the picture, so as to judge the placement angle trend of the subject object in the picture from that angle.
Further, when acquiring the contour trajectory of the bottom of the subject object from the convex hull trajectory, the bottom identification module is configured to start from a point at the upper-left corner of the convex hull trajectory and iterate counterclockwise, comparing the horizontal coordinate of the current point with that of its successor; to set the successor as the starting point when its horizontal coordinate first becomes larger than the current point's; then to set the current point as the end point when, later, a successor's horizontal coordinate becomes smaller than the current point's; and to take the traversal from the starting point to the end point as the contour trajectory of the bottom of the subject object.
Further, the location identification module is also configured to, when the area ratio of the white region in the picture exceeds the preset area ratio threshold, acquire the non-white regions whose saliency does not exceed the preset threshold, locate the description information in the picture according to their attributes, and recognize the text content of the description information.
Further, the attributes of the non-white regions whose saliency does not exceed the preset threshold, as obtained by the location identification module, include one or more of the following: the number of such non-white regions, their area ratio in the picture, their centroid coordinates, and their leftmost and rightmost horizontal coordinates and uppermost and lowermost vertical coordinates. The location identification module locates the description information in the picture according to these attributes by one or more of the following: acquiring the number of pieces of description information in the picture from the non-white regions whose saliency does not exceed the preset threshold; acquiring the area ratio of the description information in the picture from the area ratio of those regions in the picture; acquiring the position of the description information in the picture from the centroid coordinates of those regions; and acquiring the boundary of the description information in the picture from the leftmost and rightmost horizontal coordinates and the uppermost and lowermost vertical coordinates of those regions.
Compared with existing schemes, the present application achieves the following technical effects:
1) The method and device for recognizing a picture can address the problem that the commodity display effects of different merchants on a B2C website are uneven and that non-uniform picture standards cannot be detected.
2) The method and device can further obtain, in more detail, the number of main commodities in a picture, the area ratio of each main commodity (number of pixels in the connected region / picture area), the position of the main commodity (centroid coordinates), and its top, bottom, left and right boundaries (leftmost and rightmost horizontal coordinates and uppermost and lowermost vertical coordinates).
Of course, a single product implementing the present application does not need to achieve all of the above technical effects at the same time.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
FIG. 1 is a flowchart of a method for recognizing a picture according to a first embodiment of the present application;
FIG. 2 is a block diagram of an apparatus for recognizing a picture according to a second embodiment of the present application;
FIG. 3 is a flowchart of a method for determining whether a picture meets the evaluation criteria in an embodiment of the present application;
FIG. 4 is a schematic diagram of a commodity picture involved in the embodiments of the present application.
Detailed Description
The embodiments of the present application are described in detail below with reference to the drawings and examples, so that the technical means used to solve the technical problems and to achieve the technical effects of the present application can be fully understood and implemented.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory media, such as modulated data signals and carrier waves.
As shown in FIG. 1, a method for recognizing a picture according to the first embodiment of the present application includes:
step 101, receiving a picture and identifying the white region and the non-white regions in the picture, wherein a white region is formed only of connected white pixels and a non-white region is formed only of connected non-white pixels;
step 102, when the area ratio of the white region in the picture exceeds a preset area ratio threshold, acquiring the non-white regions whose saliency exceeds a preset threshold and locating the subject object (the main commodity) in the picture according to their attributes.
Further, the attributes of the non-white regions whose saliency exceeds the preset threshold in step 102 include one or more of the following: the number of non-white regions, their area ratio in the picture, their centroid coordinates, and their leftmost and rightmost horizontal coordinates and uppermost and lowermost vertical coordinates.
Locating the subject object in the picture according to these attributes further includes one or more of the following:
acquiring the number of subject objects in the picture from the non-white regions whose saliency exceeds the preset threshold;
acquiring the area ratio of the subject object in the picture from the area ratio of those regions in the picture;
acquiring the position of the subject object in the picture from the centroid coordinates of those regions;
and acquiring the boundary of the subject object in the picture from the leftmost and rightmost horizontal coordinates and the uppermost and lowermost vertical coordinates of those regions.
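As an illustration of how the attributes listed above might be computed, the following is a minimal sketch based on connected-component labelling of a binary non-white mask. It assumes OpenCV and NumPy are available; the function name and the attribute field names are illustrative and are not taken from the application.

```python
import cv2
import numpy as np

def region_attributes(non_white_mask: np.ndarray):
    """Per-region attributes from a binary non-white mask (uint8, 0 or 255)."""
    h, w = non_white_mask.shape
    picture_area = h * w
    num, labels, stats, centroids = cv2.connectedComponentsWithStats(non_white_mask, connectivity=8)
    regions = []
    for i in range(1, num):                           # label 0 is the background
        x, y, bw, bh, area = stats[i]                 # bounding box and pixel count
        regions.append({
            "area_ratio": area / picture_area,        # area ratio of the region in the picture
            "centroid": tuple(centroids[i]),          # centroid (barycentric) coordinates
            "bounds": (x, x + bw - 1, y, y + bh - 1)  # leftmost/rightmost x, uppermost/lowermost y
        })
    return regions
```

The number of regions, their area ratios, centroids and boundaries then feed directly into the localization rules of step 102.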
In addition, the method according to the first embodiment of the present application further includes:
for any non-white region located as a subject object, fitting all of its non-white pixels to obtain a contour trajectory of the corresponding subject object, and taking the major-axis direction of the shape enclosed by the contour trajectory as the placement direction of the subject object.
For the above content of the first embodiment, more specifically:
the contour trajectory of the subject object is an ellipse or a rectangle;
the major-axis direction of the shape enclosed by the contour trajectory is expressed as the angle between the major axis of that shape and the horizontal axis of the picture;
and taking the major-axis direction as the placement direction of the subject object includes: if the angle between the major axis and the positive half of the horizontal axis of the picture is greater than 90 degrees, the subject object faces left; otherwise, it faces right.
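A minimal sketch of one way to obtain the major-axis direction and the left/right decision described above. It estimates the principal axis of the region's pixels with PCA instead of an explicit ellipse or rectangle fit, which is an assumption of this illustration rather than the fitting method prescribed by the application; the function name is likewise illustrative.

```python
import numpy as np

def placement_direction(non_white_mask: np.ndarray) -> str:
    """Return 'left' or 'right' from the major-axis angle of a subject region."""
    ys, xs = np.nonzero(non_white_mask)               # pixel coordinates of the region
    pts = np.stack([xs, ys], axis=1).astype(float)
    pts -= pts.mean(axis=0)                           # centre the point cloud
    cov = np.cov(pts, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    major = eigvecs[:, np.argmax(eigvals)]            # principal (major-axis) direction
    # angle to the positive half of the horizontal axis, folded into [0, 180)
    angle = np.degrees(np.arctan2(major[1], major[0])) % 180.0
    return "left" if angle > 90.0 else "right"
```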
For the above content of the first embodiment, if specific goods such as shoes or bags are targeted, the method further includes the following steps:
for any non-white region located as a subject object, acquiring the convex hull trajectory of the non-white region and, from it, the contour trajectory of the bottom of the subject object;
acquiring the lowest point (maximum vertical coordinate) of the bottom contour trajectory and, combined with the placement direction of the subject object, taking the part of the bottom contour trajectory extending from this lowest point opposite to the placement direction as the effective bottom contour trajectory;
enumerating each point on the effective contour trajectory as a starting point and expanding from it, continuously computing the angle between the line connecting the current point and its successor and the horizontal axis of the picture, taking the current point as an end point when the angle exceeds a threshold, and intercepting the trajectory from the starting point to the end point as a candidate trajectory; selecting the longest of the candidate trajectories as the target contour trajectory of the bottom of the subject object;
and acquiring the angle between the line from the starting point to the end point of the target contour trajectory and the horizontal axis of the picture, so as to judge the placement angle trend of the subject object in the picture from that angle.
In addition, acquiring the contour trajectory of the bottom of the subject object from the convex hull trajectory further includes:
traversing the points of the convex hull trajectory counterclockwise, taking the leftmost point of the trajectory as the origin; setting a point as the starting point when the horizontal coordinate of its successor is greater than its own, and setting a point as the end point when the horizontal coordinate of its successor is smaller than its own;
and taking the traversal from the starting point to the end point as the contour trajectory of the bottom of the subject object.
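The traversal just described can be sketched as follows, assuming the convex hull is already available as a list of (x, y) vertices ordered counterclockwise starting near the upper-left corner (for example from a Graham scan, as mentioned later in the description); the function name is illustrative.

```python
from typing import List, Tuple

Point = Tuple[float, float]

def bottom_contour(hull_ccw: List[Point]) -> List[Point]:
    """Extract the bottom arc of a counterclockwise convex hull (image y axis points down)."""
    n = len(hull_ccw)
    start = end = None
    for i in range(n):
        cur, nxt = hull_ccw[i], hull_ccw[(i + 1) % n]
        if start is None and nxt[0] > cur[0]:         # x starts increasing: entering the bottom
            start = i + 1
        elif start is not None and nxt[0] < cur[0]:   # x decreases again: leaving the bottom
            end = i
            break
    if start is None or end is None:
        return []
    return [hull_ccw[j % n] for j in range(start, end + 1)]
```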
Step 102 further includes:
when the area ratio of the white region in the picture exceeds the preset area ratio threshold, also acquiring the non-white regions whose saliency does not exceed the preset threshold, locating the description information in the picture (non-main commodities, typically logos and other marking content, although the application is not limited thereto) according to their attributes, and recognizing the text content of the description information.
Further, in the above,
the attributes of the non-white regions whose saliency does not exceed the preset threshold include one or more of the following: the number of such non-white regions, their area ratio in the picture, their centroid coordinates, and their leftmost and rightmost horizontal coordinates and uppermost and lowermost vertical coordinates;
and locating the description information in the picture according to these attributes further includes one or more of the following:
acquiring the number of pieces of description information in the picture from the non-white regions whose saliency does not exceed the preset threshold;
acquiring the area ratio of the description information in the picture from the area ratio of those regions in the picture;
acquiring the position of the description information in the picture from the centroid coordinates of those regions;
and acquiring the boundary of the description information in the picture from the leftmost and rightmost horizontal coordinates and the uppermost and lowermost vertical coordinates of those regions.
As shown in FIG. 2, the apparatus for recognizing a picture according to the second embodiment of the present application includes a region identification module 201 and a location identification module 202, wherein:
the region identification module 201 receives a picture, identifies the white region and the non-white regions in the picture, and sends them to the location identification module 202, wherein a white region is formed only of connected white pixels and a non-white region is formed only of connected non-white pixels;
and the location identification module 202, when the area ratio of the white region in the picture exceeds a preset area ratio threshold, acquires the non-white regions whose saliency exceeds a preset threshold and locates the subject object in the picture according to their attributes.
Further, the attributes of the non-white regions whose saliency exceeds the preset threshold, as obtained by the location identification module 202, include one or more of the following:
the number of non-white regions, their area ratio in the picture, their centroid coordinates, and their leftmost and rightmost horizontal coordinates and uppermost and lowermost vertical coordinates;
and the location identification module 202 locates the subject object in the picture according to these attributes by one or more of the following:
acquiring the number of subject objects in the picture from the non-white regions whose saliency exceeds the preset threshold;
acquiring the area ratio of the subject object in the picture from the area ratio of those regions in the picture;
acquiring the position of the subject object in the picture from the centroid coordinates of those regions;
and acquiring the boundary of the subject object in the picture from the leftmost and rightmost horizontal coordinates and the uppermost and lowermost vertical coordinates of those regions.
As shown in FIG. 2, the apparatus further includes:
a placement identification module 203, which fits all non-white pixels in any non-white region located as a subject object to obtain a contour trajectory of the corresponding subject object, and takes the major-axis direction of the shape enclosed by the contour trajectory as the placement direction of the subject object.
Specifically, the contour trajectory of the subject object is an ellipse or a rectangle;
the major-axis direction of the shape enclosed by the contour trajectory is expressed as the angle between the major axis of that shape and the horizontal axis of the picture;
and taking the major-axis direction as the placement direction of the subject object includes: if the angle between the major axis and the positive half of the horizontal axis of the picture is greater than 90 degrees, the subject object faces left; otherwise, it faces right.
For the above content of the second embodiment, if specific goods such as shoes or bags are targeted, the apparatus further includes:
a bottom identification module 204, configured to: for any non-white region located as a subject object, acquire the convex hull trajectory of the non-white region and, from it, the contour trajectory of the bottom of the subject object; acquire the lowest point (maximum vertical coordinate) of the bottom contour trajectory and, combined with the placement direction of the subject object, take the part of the bottom contour trajectory extending from this lowest point opposite to the placement direction as the effective bottom contour trajectory; enumerate each point on the effective contour trajectory as a starting point and expand from it, continuously computing the angle between the line connecting the current point and its successor and the horizontal axis of the picture, take the current point as an end point when the angle exceeds a threshold, and intercept the trajectory from the starting point to the end point as a candidate trajectory; select the longest of the candidate trajectories as the target contour trajectory of the bottom of the subject object; and acquire the angle between the line from the starting point to the end point of the target contour trajectory and the horizontal axis of the picture, so as to judge the placement angle trend of the subject object in the picture from that angle.
When acquiring the contour trajectory of the bottom of the subject object from the convex hull trajectory, the bottom identification module 204 is further configured to:
start from a point at the upper-left corner of the convex hull trajectory and iterate counterclockwise, comparing the horizontal coordinate of the current point with that of its successor; set the successor as the starting point when its horizontal coordinate first becomes larger than the current point's; then set the current point as the end point when, later, a successor's horizontal coordinate becomes smaller than the current point's; and take the traversal from the starting point to the end point as the contour trajectory of the bottom of the subject object.
The location identification module 202 of the apparatus is further configured to, when the area ratio of the white region in the picture exceeds the preset area ratio threshold, acquire the non-white regions whose saliency does not exceed the preset threshold, locate the description information in the picture according to their attributes, and recognize the text content of the description information.
Specifically, the attributes of the non-white regions whose saliency does not exceed the preset threshold include one or more of the following: the number of such non-white regions, their area ratio in the picture, their centroid coordinates, and their leftmost and rightmost horizontal coordinates and uppermost and lowermost vertical coordinates;
and locating the description information in the picture according to these attributes further includes one or more of the following:
acquiring the number of pieces of description information in the picture from the non-white regions whose saliency does not exceed the preset threshold;
acquiring the area ratio of the description information in the picture from the area ratio of those regions in the picture;
acquiring the position of the description information in the picture from the centroid coordinates of those regions;
and acquiring the boundary of the description information in the picture from the leftmost and rightmost horizontal coordinates and the uppermost and lowermost vertical coordinates of those regions.
Following the detailed description of the method and apparatus of the above embodiments, a specific application embodiment is described below. A certain B2C website issues a uniform production standard for commodity pictures and requires merchants to produce pictures accordingly. Based on this standard, the present application designs and quantifies evaluation criteria for a commodity picture, including: a background requirement (the area ratio of the white region in the commodity picture must exceed a preset area ratio threshold); position and size requirements for the subject object in the picture (the number of non-white regions whose saliency exceeds the preset threshold, their area ratio in the picture, their centroid coordinates, and their leftmost and rightmost horizontal coordinates and uppermost and lowermost vertical coordinates); a requirement that the subject object in the picture be placed facing left (whether the angle between the major axis of the shape enclosed by the contour trajectory of the subject object and the positive half of the horizontal axis of the picture is greater than 90 degrees; of course, the application is not limited to left-facing placement); and a requirement on the angle between the subject object and the horizontal line (the placement angle trend of the subject object in the picture).
According to the operation steps of the present application, whether a commodity picture uploaded and displayed by a merchant meets the requirements is judged in combination with these evaluation criteria, as shown in FIG. 3, through the following steps:
Step 300: receive a picture (as shown in FIG. 4, the picture is a commodity picture in which the subject object is a shoe and the white area is the background) and identify the white region and the non-white regions in it, wherein a white region is formed only of connected white pixels and a non-white region is formed only of connected non-white pixels. In this embodiment, with reference to the HSV (hue, saturation, value) color model, a white pixel is defined as one whose saturation is lower than 0.04 and whose value (lightness) is higher than 0.96; non-white pixels are all those outside this range, i.e., the shoe portion of the figure is a non-white region.
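A minimal sketch of the segmentation of step 300 together with the area-ratio gate of step 301 below, assuming OpenCV's 8-bit HSV convention in which saturation and value are scaled to 0-255 (so the thresholds 0.04 and 0.96 become roughly 10 and 245); function and variable names are illustrative.

```python
import cv2
import numpy as np

def white_mask_and_ratio(bgr_image: np.ndarray):
    """Split a picture into white / non-white masks and return the white area ratio."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    s, v = hsv[:, :, 1], hsv[:, :, 2]
    # white: saturation below 0.04 and value (lightness) above 0.96, per this embodiment
    white = (s < 0.04 * 255) & (v > 0.96 * 255)
    white_ratio = float(white.mean())                 # fraction of white pixels in the picture
    to_mask = lambda m: m.astype(np.uint8) * 255
    return to_mask(white), to_mask(~white), white_ratio

# usage sketch for step 301: reject early when the background requirement is violated
# white_mask, non_white_mask, ratio = white_mask_and_ratio(picture)
# if ratio <= AREA_RATIO_THRESHOLD: the commodity picture is judged unqualified
```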
Generally, a white background is a primary requirement for pictures that present commodities. Therefore, in the description of the present application, the background of the commodity picture to be processed is also white. A commodity picture with a white background can be processed to accurately identify indexes such as the location, placement direction and bottom of the subject object; a commodity picture with a non-white background seriously violates the white-background requirement, and the subsequent steps need not be performed.
Step 301: when the area ratio of the white region in the picture meets the evaluation criterion, i.e., exceeds the preset area ratio threshold, perform step 302; otherwise, directly judge the commodity picture to be unqualified.
Step 302: when the area ratio of the white region in the picture exceeds the preset area ratio threshold, acquire the non-white regions whose saliency exceeds the preset threshold (judged from the number of such regions, their area ratio in the picture, their centroid coordinates, and their leftmost and rightmost horizontal coordinates and uppermost and lowermost vertical coordinates), and locate the subject object (the main commodity) in the picture according to their attributes.
This specifically includes the following:
1) acquiring the number of subject objects in the picture from the non-white regions whose saliency exceeds the preset threshold. In practice, the non-white regions may be sorted by saliency and the regions ranked in the top 10% regarded as subject objects in the picture, whose number is then counted (a sketch of this selection follows this list); of course, other approaches are possible, for example regarding a non-white region that lies in the middle of the picture and occupies more than 50% of its area as the subject object, and the application is not limited thereto;
2) acquiring the area ratio of the subject object in the picture from the area ratio of those regions in the picture;
3) acquiring the position of the subject object in the picture from the centroid coordinates of those regions;
4) acquiring the boundary of the subject object in the picture from the leftmost and rightmost horizontal coordinates and the uppermost and lowermost vertical coordinates of those regions.
The attributes of the located subject object thus include: the number of non-white regions, their area ratio in the picture, their centroid coordinates, and their leftmost and rightmost horizontal coordinates and uppermost and lowermost vertical coordinates.
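The selection referred to in item 1) can be sketched as follows, building on the region_attributes() sketch given earlier; approximating saliency by the region's area ratio is an assumption of this illustration, not a definition given by the application.

```python
def select_subject_regions(regions, top_fraction=0.10):
    """Keep the most salient non-white regions as subject-object candidates.

    `regions` is a list of attribute dicts such as those returned by the
    region_attributes() sketch above; saliency is approximated by area ratio.
    """
    ranked = sorted(regions, key=lambda r: r["area_ratio"], reverse=True)
    keep = max(1, int(len(ranked) * top_fraction))    # top 10% by default, per the embodiment
    return ranked[:keep]
```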
Step 303: if these attributes meet the evaluation criteria, perform step 304; otherwise, directly judge the commodity picture to be unqualified.
Step 304: for any non-white region located as a subject object, fit all of its non-white pixels to obtain a contour trajectory of the corresponding subject object, and take the major-axis direction of the shape enclosed by the contour trajectory as the placement direction of the subject object (as shown in FIG. 4, the horizontal arrowed line indicates the major-axis direction).
Specifically, when the contour trajectory of the subject object is an ellipse, the major-axis direction of the enclosed shape is obtained from the angle between its major axis and the horizontal axis of the picture. If the angle between the major axis and the positive half of the horizontal axis of the picture is greater than 90 degrees, the subject object faces left; otherwise, it faces right.
Of course, the contour trajectory of the subject object (the main commodity) may also be a rectangle, i.e., a rectangle is fitted to all of its non-white pixels. The fitting algorithm is generally a least-squares method, which is not limited herein.
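Where a rectangular contour trajectory is preferred, one convenient stand-in for the fit is the minimum-area rotated rectangle of the region's pixels, sketched below with OpenCV's minAreaRect; this substitutes a minimum-area rectangle for the least-squares fit mentioned above, and the returned angle convention differs between OpenCV versions, so the orientation handling should be treated as illustrative.

```python
import cv2
import numpy as np

def rectangle_contour(non_white_mask: np.ndarray):
    """Fit a rotated rectangle to a subject region; its long side plays the role of the major axis."""
    ys, xs = np.nonzero(non_white_mask)
    pts = np.stack([xs, ys], axis=1).astype(np.float32)
    (cx, cy), (w, h), angle = cv2.minAreaRect(pts)    # centre, side lengths, rotation angle
    return (cx, cy), (w, h), angle
```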
Step 305: if the placement direction of the subject object is judged to face left and thus to meet the evaluation criterion, perform step 306; otherwise, directly judge the commodity picture to be unqualified.
Step 306: for some goods (e.g., shoes, bags, cosmetics), the angle at which the bottom is placed is of more interest than the placement direction of the object as a whole. Step 306 is therefore also required for this type of merchandise; under certain conditions, step 306 may even be used instead of step 304.
For any non-white region located as a subject object, acquire the convex hull trajectory of the non-white region and, from it, the contour trajectory of the bottom of the subject object: start from a point at the upper-left corner of the convex hull trajectory and iterate counterclockwise, comparing the horizontal coordinate of the current point with that of its successor; set the successor as the starting point when its horizontal coordinate first becomes larger than the current point's; then set the current point as the end point when, later, a successor's horizontal coordinate becomes smaller than the current point's; and take the traversal from the starting point to the end point as the contour trajectory of the bottom of the subject object.
Acquire the lowest point (maximum vertical coordinate) of the bottom contour trajectory and, combined with the placement direction of the subject object, take the part of the bottom contour trajectory extending from this lowest point opposite to the placement direction as the effective bottom contour trajectory.
Enumerate each point on the effective contour trajectory as a starting point and expand from it, continuously computing the angle between the line connecting the current point and its successor and the horizontal axis of the picture; take the current point as an end point when the angle exceeds a threshold, and intercept the trajectory from the starting point to the end point as a candidate trajectory; select the longest of the candidate trajectories as the target contour trajectory of the bottom of the subject object.
Acquire the angle between the line from the starting point to the end point of the target contour trajectory and the horizontal axis of the picture, so as to judge the placement angle trend of the subject object in the picture from that angle.
Step 307: if the placement angle trend of the subject object meets the evaluation criterion, finally judge the commodity picture to be qualified; otherwise, judge it to be unqualified.
In step 306, the convex hull is computed with the Graham scan; of course, those skilled in the art may adopt other methods, which are not limited herein.
The convex hull trajectory is a convex polygon that completely encloses the main commodity region and can be characterized by a sequence of vertices (pixels). Taking the point at the upper-left corner of the convex hull trajectory as the origin (0, 0) of the coordinate system, the points of the convex hull trajectory are traversed counterclockwise in an iterative manner: starting from the origin, the horizontal coordinate of each successor is compared with that of the current point. If the successor's horizontal coordinate is less than or equal to the current point's, the traversal is still on the upper or side part of the main commodity and has not yet reached the bottom, so the successor becomes the current point and the comparison continues backward. When, at some moment, a successor's horizontal coordinate becomes greater than the current point's, the traversal is considered to have entered the bottom of the main commodity, and that successor is set as the starting point, denoted S. The traversal then continues from S until the horizontal coordinate of some current point's successor is smaller than the current point's, which indicates that the traversal is entering the other side of the main commodity and has left the bottom; that current point is set as the end point E. This yields a point queue P = {S, X1, X2, …, Xk, E}. In this embodiment, by the properties of the convex hull trajectory, there is one and only one queue that meets the condition; it represents the basic bottom of the main commodity (SE is the basic bottom, as shown in FIG. 4).
However, locating the basic bottom alone does not determine the bottom angle of the commodity. In this queue, every pair of adjacent points determines an angle, giving an angle queue D = {d(S, X1), d(X1, X2), …, d(Xk−1, Xk), d(Xk, E)}. Each element of this angle queue is distinct from the others; otherwise the properties of the convex hull trajectory would be contradicted (some vertex would then not be a vertex of the convex hull at all).
Next, the truly effective bottom range of the commodity must be located within this basic bottom range. In the queue P, the lowest point L (the point with the maximum vertical coordinate; if there are several, the convex hull property guarantees at most two, and one of them is chosen) is found. If the main commodity was identified as left-facing in the previous step, the points from L to E (L and everything to its right) are taken as the true bottom candidates; if it was identified as right-facing, the points from S to L (L and everything to its left) are taken as the true bottom candidates (as shown in FIG. 4, LE is the candidate bottom range). Even with the bottom range narrowed in this way, the angle trend of the commodity bottom is still not well represented, so the "longest continuous" line must next be found in this queue (S to L, or L to E). The method is as follows:
Let {X1, X2, …, Xn} be the vertex queue under examination, {d1, d2, …, dn−1} the angles of the lines between consecutive points, and {l1, l2, …, ln−1} the Euclidean distances between consecutive points, where lk = ‖Xk − Xk+1‖2. Each point is enumerated as a starting point, denoted Xp, and expansion backward along the queue is attempted. The condition for expansion is that the angle difference between dp and dp+1 is no more than 15 degrees; if it is satisfied, Xp → Xp+1 → Xp+2 is regarded as one continuous line, and dp+1 and dp+2 are examined next. Expansion continues by this rule until some angle difference exceeds 15 degrees or the end of the queue is reached. If the cut-off point is Xq, the sum of the distances along the line from Xp to Xq is defined as the continuous bottom distance from Xp to Xq; if no expansion is possible at the start, the continuous bottom line starting from Xp is only Xp → Xp+1 and its continuous bottom distance is lp. All choices of Xp are enumerated, and the continuous line with the longest continuous bottom distance is taken to represent the real bottom area, denoted Xp′ to Xq′. The angle formed by the line connecting the starting point Xp′ and the end point Xq′ of this longest continuous line represents the true bottom angle trend of the commodity (as shown in FIG. 4, Xp′ to Xq′ represent the true bottom angle trend of the commodity).
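The "longest continuous" search just described can be sketched as follows on the effective bottom vertices (L to E, or S to L, depending on the facing direction); the 15-degree tolerance follows this embodiment, while the function name and return format are illustrative.

```python
import math
from typing import List, Optional, Tuple

Point = Tuple[float, float]

def longest_bottom_segment(vertices: List[Point], tol_deg: float = 15.0) -> Optional[Tuple[Point, Point, float]]:
    """Find the longest run of near-collinear edges and the angle of its chord."""
    n = len(vertices)
    if n < 2:
        return None
    # per-edge angle with respect to the horizontal axis, and per-edge length
    ang = [math.degrees(math.atan2(b[1] - a[1], b[0] - a[0]))
           for a, b in zip(vertices, vertices[1:])]
    length = [math.dist(a, b) for a, b in zip(vertices, vertices[1:])]
    best = (length[0], 0, 1)                          # (continuous bottom distance, start, end)
    for p in range(n - 1):                            # enumerate every starting vertex Xp
        dist, q = length[p], p + 1
        while q < n - 1 and abs(ang[q] - ang[q - 1]) <= tol_deg:
            dist += length[q]                         # extend while consecutive edges stay within 15 degrees
            q += 1
        if dist > best[0]:
            best = (dist, p, q)
    _, p, q = best
    chord_angle = math.degrees(math.atan2(vertices[q][1] - vertices[p][1],
                                          vertices[q][0] - vertices[p][0]))
    return vertices[p], vertices[q], chord_angle      # Xp', Xq', bottom angle trend
```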
Compared with existing schemes, the present application achieves the following technical effects:
1) The method and device for recognizing a picture can address the problem that the commodity display effects of different merchants on a B2C website are uneven and that non-uniform picture standards cannot be detected.
2) The method and device can further obtain, in more detail, the number of main commodities in a picture, the area ratio of each main commodity (number of pixels in the connected region / picture area), the position of the main commodity (centroid coordinates), and its top, bottom, left and right boundaries (leftmost and rightmost horizontal coordinates and uppermost and lowermost vertical coordinates).
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, apparatus, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The foregoing shows and describes several preferred embodiments of the present application. As noted above, it is to be understood that the application is not limited to the forms disclosed herein and is not to be construed as excluding other embodiments; it is capable of use in various other combinations, modifications and environments, and of changes within the scope of the concept described herein, in accordance with the above teachings or with the skill or knowledge of the relevant art. Modifications and variations made by those skilled in the art without departing from the spirit and scope of the application are intended to fall within the protection of the appended claims.

Claims (16)

1. A method of recognizing a picture, comprising:
receiving a picture and identifying a white region and a non-white region in the picture, wherein the white region is formed only of connected white pixels and the non-white region is formed only of connected non-white pixels; and
when an area ratio of the white region in the picture exceeds a preset area ratio threshold, acquiring a non-white region whose saliency exceeds a preset threshold, and locating a subject object in the picture according to attributes of the non-white region.
2. The method of recognizing a picture according to claim 1, wherein
the attributes of the non-white region whose saliency exceeds the preset threshold comprise one or more of the following: the number of non-white regions, their area ratio in the picture, their centroid coordinates, and their leftmost and rightmost horizontal coordinates and uppermost and lowermost vertical coordinates; and
locating the subject object in the picture according to the attributes of the non-white region whose saliency exceeds the preset threshold comprises one or more of the following:
acquiring the number of subject objects in the picture from the non-white regions whose saliency exceeds the preset threshold;
acquiring the area ratio of the subject object in the picture from the area ratio of those regions in the picture;
acquiring the position of the subject object in the picture from the centroid coordinates of those regions;
and acquiring the boundary of the subject object in the picture from the leftmost and rightmost horizontal coordinates and the uppermost and lowermost vertical coordinates of those regions.
3. The method of recognizing pictures according to claim 2, further comprising:
and for any non-white area positioned as a main body object, fitting all of its non-white pixel points to obtain a contour track of the corresponding main body object, and determining the long axis direction of the shape surrounded by the contour track as the placing direction of the main body object.
4. The method of recognizing pictures according to claim 3,
the contour track of the main body object comprises: an ellipse or a rectangle;
the long axis direction of the shape surrounded by the contour track comprises: an included angle between the long axis of the shape surrounded by the contour track and the horizontal transverse axis of the picture; and
determining the long axis direction of the shape surrounded by the contour track as the placing direction of the main body object further comprises: if the included angle between the long axis of the shape surrounded by the contour track and the positive half axis of the horizontal transverse axis of the picture is larger than 90 degrees, the placing direction of the main body object faces to the left; otherwise, the placing direction of the main body object faces to the right.
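As an illustration of claims 3 and 4 only, the sketch below estimates the long-axis direction with a simple principal-axis (PCA) fit of the region's pixels and maps the 90-degree angle test onto a left/right decision. The PCA fit is a stand-in for whatever ellipse or rectangle fitting the application intends, and the y-axis flip reflects the assumption that image coordinates grow downward.

```python
import numpy as np

def placing_direction(non_white_mask):
    ys, xs = np.nonzero(non_white_mask)              # coordinates of the non-white pixels
    pts = np.column_stack((xs, -ys)).astype(float)   # flip y so that "up" is positive
    pts -= pts.mean(axis=0)
    # The eigenvector belonging to the largest eigenvalue of the covariance
    # matrix points along the long (major) axis of the fitted shape.
    eigvals, eigvecs = np.linalg.eigh(np.cov(pts, rowvar=False))
    major = eigvecs[:, np.argmax(eigvals)]
    # Included angle with the positive half of the horizontal axis, in [0, 180).
    angle = np.degrees(np.arctan2(major[1], major[0])) % 180.0
    return "left" if angle > 90.0 else "right"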
5. The method of recognizing pictures according to claim 1, further comprising:
for any non-white area positioned as a main body object, acquiring a convex hull track of the non-white area, and acquiring a contour track of the bottom of the main body object according to the convex hull track;
acquiring the point with the lowest vertical coordinate on the contour track of the bottom of the main body object, and, in combination with the placing direction of the main body object, determining the portion of that contour track which extends from this lowest point in the direction opposite to the placing direction as the effective contour track of the bottom of the main body object;
enumerating any point on the effective contour track as a starting point and expanding from it, continuously calculating an included angle between the connecting line of the current point and its subsequent point and the horizontal transverse axis of the picture, taking the current point as a terminal point when the included angle is larger than a threshold, and intercepting the track from the starting point to the terminal point on the effective contour track as a candidate track; selecting, from the obtained plurality of candidate tracks, the longest candidate track as the target contour track of the bottom of the main body object;
and acquiring an included angle between a connecting line from the starting point to the end point of the target contour track and a horizontal transverse axis of the picture so as to judge the placing angle trend of the main body object in the picture according to the included angle.
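The sketch below illustrates the candidate-track search of claim 5 over an ordered list of (x, y) points forming the effective contour track of the bottom. The 20-degree segment-angle threshold is an assumed value, and the points are assumed to be ordered from left to right; neither assumption comes from the claim.

```python
import math

def bottom_placing_angle(effective_contour, angle_threshold_deg=20.0):
    n = len(effective_contour)
    if n < 2:
        return 0.0
    best = (0, 0, 0)  # (length in points, start index, end index)
    for start in range(n - 1):
        end = start
        # Expand while the segment to the subsequent point stays close to horizontal.
        while end + 1 < n:
            (x0, y0), (x1, y1) = effective_contour[end], effective_contour[end + 1]
            a = abs(math.degrees(math.atan2(y1 - y0, x1 - x0)))
            if min(a, 180.0 - a) > angle_threshold_deg:
                break  # included angle too large: the current point becomes the terminal point
            end += 1
        if end - start > best[0]:
            best = (end - start, start, end)
    _, s, e = best
    (xs, ys), (xe, ye) = effective_contour[s], effective_contour[e]
    # Included angle of the longest candidate track (start point to end point) with the
    # horizontal transverse axis: the placing angle trend of the main body object.
    return math.degrees(math.atan2(ye - ys, xe - xs))
```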
6. The method of recognizing pictures according to claim 5,
obtaining a contour trajectory of the bottom of the subject object according to the convex hull trajectory, further comprising:
starting from any point in the upper left corner of the convex hull track, iteratively comparing, in counterclockwise order, the abscissa of the current point with the abscissa of the subsequent point; setting the subsequent point as the starting point when, at a certain moment, the abscissa of the subsequent point is larger than the abscissa of the current point; and then setting the current point as the end point when, at another moment, the abscissa of the subsequent point is smaller than the abscissa of the current point;
and forming a contour track of the bottom of the main body object through the traversal track from the starting point to the end point.
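For claim 6, a literal sketch of the abscissa scan is shown below. It assumes the convex-hull vertices are already available in counterclockwise order as plain (x, y) pairs (for example, the output of cv2.convexHull reshaped with hull.reshape(-1, 2)) and ignores the wrap-around case, so it is only an approximation of the traversal described above.

```python
def bottom_contour(hull_points):
    """hull_points: list of (x, y) convex-hull vertices in counterclockwise order."""
    start = end = None
    for i in range(len(hull_points) - 1):
        x_cur = hull_points[i][0]
        x_next = hull_points[i + 1][0]
        if start is None and x_next > x_cur:
            start = i + 1   # the subsequent point becomes the starting point
        elif start is not None and x_next < x_cur:
            end = i         # the current point becomes the end point
            break
    if start is None or end is None:
        return list(hull_points)  # degenerate hull: fall back to the full track
    # The traversal track from the starting point to the end point forms the
    # contour track of the bottom of the main body object.
    return list(hull_points[start:end + 1])
```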
7. The method of recognizing pictures according to claim 1, further comprising:
when the area ratio of the white area in the picture exceeds a preset area ratio threshold, a non-white area with the significance not exceeding the preset threshold is further obtained, the description information in the picture is positioned according to the attributes of the non-white area, and the text content of the description information is identified.
8. The method of recognizing pictures according to claim 7,
the attribute of the non-white area with the significance not exceeding the preset threshold comprises one or more of the following combinations: the number of the non-white areas, the area ratio of the non-white areas in the picture, the barycentric coordinates of the non-white areas, the left-most and right-most horizontal coordinates and the top-most and bottom-most vertical coordinates of the non-white areas;
positioning the description information in the picture according to the attribute of the non-white area with the significance not exceeding the preset threshold, and further comprising one or more of the following combinations:
acquiring the number of the description information in the picture according to the non-white area with the significance degree not exceeding a preset threshold;
acquiring the area ratio of the description information in the picture according to the area ratio of the non-white area with the significance degree not exceeding a preset threshold in the picture;
acquiring azimuth information of the description information in the picture according to the barycentric coordinates of the non-white area with the significance degree not exceeding a preset threshold;
and acquiring the boundary of the description information in the picture according to the left and right horizontal coordinates and the top and bottom vertical coordinates of the non-white area with the significance degree not exceeding a preset threshold.
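A possible sketch of claims 7 and 8 is given below: regions whose significance (again approximated by area ratio) does not exceed the threshold are treated as description information and handed to an OCR engine. The use of pytesseract/Tesseract and the threshold value are assumptions of this sketch; the application does not name a particular text recognizer. It reuses region_stats() from the earlier sketch.

```python
import pytesseract  # assumed OCR backend; any text recognizer could be substituted

def recognize_descriptions(image_bgr, significance_threshold=0.05):
    descriptions = []
    for r in region_stats(image_bgr):
        if r["area_ratio"] <= significance_threshold:    # low-significance non-white region
            x0, x1, y0, y1 = r["bbox"]
            crop = image_bgr[y0:y1 + 1, x0:x1 + 1]        # boundary of the description information
            text = pytesseract.image_to_string(crop)      # identify the text content
            descriptions.append({"bbox": r["bbox"], "centroid": r["centroid"], "text": text.strip()})
    return descriptions
```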
9. An apparatus for recognizing a picture, comprising: an area identification module and a positioning identification module; wherein,
the area identification module is used for receiving the picture, identifying a white area and a non-white area in the picture and sending the white area and the non-white area to the positioning identification module, wherein the white area is only formed by connecting white pixel points, and the non-white area is only formed by connecting non-white pixel points;
and the positioning identification module is used for acquiring a non-white area with the significance exceeding a preset threshold when the area ratio of the white area in the picture exceeds a preset area ratio threshold, and positioning the main object in the picture according to the attribute of the non-white area.
10. The apparatus for recognizing picture according to claim 9,
the positioning identification module acquires the attribute of the non-white area with the significance exceeding a preset threshold, wherein the attribute comprises one or more of the following combinations: the number of the non-white areas, the area ratio of the non-white areas in the picture, the barycentric coordinates of the non-white areas, the left-most and right-most horizontal coordinates and the top-most and bottom-most vertical coordinates of the non-white areas;
the positioning identification module positions the main object in the picture according to the attribute of the non-white area with the significance exceeding a preset threshold, and further comprises one or more of the following combinations:
acquiring the number of main body objects in the picture according to the non-white area with the significance exceeding a preset threshold;
acquiring the area ratio of a main object in the picture according to the area ratio of the non-white area with the significance degree exceeding a preset threshold in the picture;
acquiring azimuth information of a main object in the picture according to the barycentric coordinates of a non-white area with the significance exceeding a preset threshold;
and acquiring the boundary of the main object in the picture according to the leftmost and rightmost horizontal coordinates and the uppermost and lowermost vertical coordinates of the non-white area with the significance exceeding a preset threshold.
11. The apparatus for recognizing a picture according to claim 10, further comprising:
and the placement identification module is used for fitting all non-white pixel points in any non-white area positioned as the main body object to obtain a contour track of the corresponding main body object, and determining the long axis direction of the shape surrounded by the contour track as the placement direction of the main body object.
12. The apparatus for recognizing picture according to claim 11,
the contour track of the main body object obtained by the placement identification module comprises: an ellipse or a rectangle;
the long axis direction of the shape surrounded by the contour track obtained by the placement identification module comprises: an included angle between the long axis of the shape surrounded by the contour track and the horizontal transverse axis of the picture; and
the placement identification module determines the long axis direction of the shape surrounded by the contour track as the placing direction of the main body object, and is further configured to: if the included angle between the long axis of the shape surrounded by the contour track and the positive half axis of the horizontal transverse axis of the picture is larger than 90 degrees, determine that the placing direction of the main body object faces to the left; otherwise, determine that the placing direction of the main body object faces to the right.
13. The apparatus for recognizing a picture according to claim 9, further comprising:
the bottom identification module is used for acquiring a convex hull track of any non-white area positioned as a main body object and acquiring a contour track of the bottom of the main body object according to the convex hull track; is further configured to acquire the point with the lowest vertical coordinate on the contour track of the bottom of the main body object and, in combination with the placing direction of the main body object, determine the portion of that contour track which extends from this lowest point in the direction opposite to the placing direction as the effective contour track of the bottom of the main body object; is further configured to enumerate any point on the effective contour track as a starting point and expand from it, continuously calculating an included angle between the connecting line of the current point and its subsequent point and the horizontal transverse axis of the picture, taking the current point as a terminal point when the included angle is larger than a threshold, and intercepting the track from the starting point to the terminal point on the effective contour track as a candidate track, and then select, from the obtained plurality of candidate tracks, the longest candidate track as the target contour track of the bottom of the main body object; and is further configured to acquire the included angle between the connecting line from the starting point to the end point of the target contour track and the horizontal transverse axis of the picture, so as to judge the placing angle trend of the main body object in the picture according to the included angle.
14. The apparatus for recognizing picture according to claim 13,
when acquiring the contour track of the bottom of the main body object according to the convex hull track, the bottom identification module is further used for: starting from any point in the upper left corner of the convex hull track, iteratively comparing, in counterclockwise order, the abscissa of the current point with the abscissa of the subsequent point; setting the subsequent point as the starting point when, at a certain moment, the abscissa of the subsequent point is larger than the abscissa of the current point; then setting the current point as the end point when, at another moment, the abscissa of the subsequent point is smaller than the abscissa of the current point; and forming the contour track of the bottom of the main body object from the traversal track from the starting point to the end point.
15. The apparatus for recognizing picture according to claim 9,
wherein the positioning identification module is further used for acquiring a non-white area with the significance not exceeding a preset threshold when the area ratio of the white area in the picture exceeds a preset area ratio threshold, positioning the description information in the picture according to the attribute of the non-white area, and identifying the text content of the description information.
16. The apparatus for recognizing picture according to claim 15,
the attribute, obtained by the positioning identification module, of the non-white area with the significance not exceeding the preset threshold comprises one or more of the following combinations: the number of the non-white areas, the area ratio of the non-white areas in the picture, the barycentric coordinates of the non-white areas, the left-most and right-most horizontal coordinates and the top-most and bottom-most vertical coordinates of the non-white areas;
the positioning identification module positions the description information in the picture according to the attribute of the non-white area with the significance not exceeding a preset threshold, and further comprises one or more of the following combinations:
acquiring the number of the description information in the picture according to the non-white area with the significance degree not exceeding a preset threshold;
acquiring the area ratio of the description information in the picture according to the area ratio of the non-white area with the significance degree not exceeding a preset threshold in the picture;
acquiring azimuth information of the description information in the picture according to the barycentric coordinates of the non-white area with the significance degree not exceeding a preset threshold;
and acquiring the boundary of the description information in the picture according to the left and right horizontal coordinates and the top and bottom vertical coordinates of the non-white area with the significance degree not exceeding a preset threshold.

Applications Claiming Priority (1)

CN201310146353.8A, priority date 2013-04-24, filing date 2013-04-24: A kind of method and device for identifying picture

Publications (2)

HK1201618A1 (en), published 2015-09-04
HK1201618B (en), published 2018-10-05


Also Published As

CN104123528A (en), published 2014-10-29
CN104123528B (en), published 2018-01-02


Legal Events

PC: Patent ceased (i.e. patent has lapsed due to the failure to pay the renewal fee), effective date 2022-04-22