CN112541380B - Item selection method, selection device and terminal equipment - Google Patents

Item selection method, selection device and terminal equipment

Info

Publication number
CN112541380B
CN112541380B
Authority
CN
China
Prior art keywords
candidate
article
type
item
articles
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010278689.XA
Other languages
Chinese (zh)
Other versions
CN112541380A (en)
Inventor
顾震江
孙其民
刘大志
罗沛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Youdi Robot Wuxi Co ltd
Original Assignee
Youdi Robot Wuxi Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Youdi Robot Wuxi Co ltd
Priority to CN202010278689.XA
Publication of CN112541380A
Application granted
Publication of CN112541380B
Legal status: Active
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/20Scenes; Scene-specific elements in augmented reality scenes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/06Buying, selling or leasing transactions
    • G06Q30/0601Electronic shopping [e-shopping]
    • G06Q30/0621Item configuration or customization

Landscapes

  • Business, Economics & Management (AREA)
  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Accounting & Taxation (AREA)
  • Finance (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Development Economics (AREA)
  • General Business, Economics & Management (AREA)
  • Strategic Management (AREA)
  • Marketing (AREA)
  • Economics (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The present application is applicable to the field of artificial intelligence and provides an item selection method, a selection device, and a terminal device. The method comprises: obtaining a selection instruction that includes a target item type; obtaining item images of the candidate items indicated by the selection instruction; performing type recognition on the candidate items according to the item images to obtain each candidate item's type; screening out, from the candidate items, initially selected items whose candidate item type belongs to the target item type; performing quality inspection on the initially selected items to obtain their quality information; screening out, from the initially selected items, target items whose quality information meets a preset quality standard; and taking the target items as the result of the item selection. The present application can improve the degree of matching between the items selected during online shopping and the user's requirements.

Description

Article selection method, selection device and terminal equipment
Technical Field
The application belongs to the field of artificial intelligence, and particularly relates to an article selecting method, an article selecting device and terminal equipment.
Background
With the development of society, the pace of life keeps accelerating. To save shopping time, people therefore often choose to purchase articles online. Currently, online shopping is generally performed in two ways: purchasing articles through web pages or applications (Apps), or transmitting a shopping list to a shopping robot, which then shops according to the list. However, both approaches may leave the user with articles that do not meet his or her needs. When purchasing through a web page or application, the user cannot see the actual appearance of the articles and therefore finds it difficult to pick ones that meet his or her requirements. When shopping is performed by a shopping robot, the robot may pick articles that do not meet the user's requirements, because the market or mall offers many articles of the same type but of uneven quality. Existing online article selection methods therefore cannot reliably select articles that meet the user's needs; that is, the degree of matching between the selected articles and the user is low.
Disclosure of Invention
The embodiments of the present application provide an article selection method, an article selection device, and terminal equipment, which can address the prior-art problem of the low degree of matching between the articles selected during online shopping and the user.
In a first aspect, an embodiment of the present application provides a method for selecting an article, including:
acquiring a selection instruction, wherein the selection instruction comprises a target article type;
acquiring article images of candidate articles according to the selection instruction;
performing type recognition on the candidate articles according to the article images to obtain the candidate article type of each candidate article;
screening, from the candidate articles, the initially selected articles whose candidate article type belongs to the target article type, and performing quality detection on the initially selected articles to obtain quality information of the initially selected articles;
and screening, from the initially selected articles, the target articles whose quality information meets a preset quality standard, and taking the target articles as the result of article selection.
In a second aspect, an embodiment of the present application provides an article selecting apparatus, including:
a selection instruction acquisition module, configured to acquire a selection instruction, wherein the selection instruction comprises a target article type;
an image acquisition module, configured to acquire article images of candidate articles according to the selection instruction;
an identification module, configured to perform type recognition on the candidate articles according to the article images to obtain the candidate article type of each candidate article;
a quality information acquisition module, configured to screen, from the candidate articles, the initially selected articles whose candidate article type belongs to the target article type, and to perform quality detection on the initially selected articles to obtain quality information of the initially selected articles;
and a screening module, configured to screen, from the initially selected articles, the target articles whose quality information meets a preset quality standard, and to take the target articles as the result of article selection.
In a third aspect, an embodiment of the present application provides a terminal device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of the method according to the first aspect when the processor executes the computer program.
In a fourth aspect, embodiments of the present application provide a computer readable storage medium storing a computer program which, when executed by a processor, implements the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a computer program product, which, when run on a terminal device, causes the terminal device to perform the article selection method according to any one of the first aspects above.
In a sixth aspect, an embodiment of the present application provides another article selecting method, including:
acquiring a selection instruction, wherein the selection instruction comprises a target article type;
acquiring article images of candidate articles according to the selection instruction;
performing type recognition on the candidate articles according to the article images to obtain the candidate article type of each candidate article;
screening, from the candidate articles, the initially selected articles whose candidate article type belongs to the target article type, and performing quality detection on the initially selected articles to obtain quality information of the initially selected articles;
outputting the quality information of the initially selected articles to a user in a preset output mode;
and receiving a selection instruction fed back by the user based on the quality information, screening out, from the initially selected articles, the target article indicated by that selection instruction, and taking the target article as the result of article selection.
In a seventh aspect, an embodiment of the present application provides another article selection device, including:
a selection instruction acquisition module, configured to acquire a selection instruction, wherein the selection instruction comprises a target article type;
an image acquisition module, configured to acquire article images of candidate articles according to the selection instruction;
an identification module, configured to perform type recognition on the candidate articles according to the article images to obtain the candidate article type of each candidate article;
a quality information acquisition module, configured to screen, from the candidate articles, the initially selected articles whose candidate article type belongs to the target article type, and to perform quality detection on the initially selected articles to obtain quality information of the initially selected articles;
a quality information output module, configured to output the quality information of the initially selected articles to a user in a preset output mode;
and a receiving module, configured to receive a selection instruction fed back by the user based on the quality information, to screen out, from the initially selected articles, the target article indicated by that selection instruction, and to take the target article as the result of article selection.
In an eighth aspect, an embodiment of the present application provides a terminal device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of the method according to the sixth aspect when the processor executes the computer program.
In a ninth aspect, embodiments of the present application provide a computer readable storage medium storing a computer program which, when executed by a processor, implements the steps of the method according to the sixth aspect.
In a tenth aspect, an embodiment of the present application provides a computer program product which, when run on a terminal device, causes the terminal device to perform the article selection method according to any one of the sixth aspects above.
It will be appreciated that the advantages of the second to fifth aspects may be found in the relevant description of the first aspect, and are not described here again.
Compared with the prior art, the embodiments of the present application have the following beneficial effects. A selection instruction containing the target article type is first obtained. Article images of the candidate articles are then acquired according to the selection instruction, and type recognition is performed on the candidate articles according to the article images to obtain the candidate article type of each candidate article. The initially selected articles, namely those whose candidate article type belongs to the target article type, are screened out from the candidate articles, and quality detection is performed on them to obtain their quality information. Finally, the target articles whose quality information meets a preset quality standard are screened out from the initially selected articles and taken as the result of article selection. Because quality detection is performed after the initially selected articles are obtained, and only the initially selected articles whose quality information meets the preset quality standard are taken as target articles, the initially selected articles are not used directly as the selection result; an effective screening of the articles is thus achieved. Moreover, the preset quality standard can be set according to the user's requirements. Therefore, even when shopping is performed online, the embodiments of the present application can further select articles according to the user's requirements, so that articles with a higher degree of matching with the user are selected.
Drawings
In order to illustrate the technical solutions of the embodiments of the present application more clearly, the drawings needed in the embodiments or in the description of the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application, and other drawings can be obtained from them by a person skilled in the art without inventive effort.
FIG. 1 is a flow chart of a method for selecting an article according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a neural network model according to an embodiment of the present application;
FIG. 3 is a flow chart of another method for selecting an article according to an embodiment of the present application;
FIG. 4 is a schematic view of an article selecting apparatus according to an embodiment of the present application;
FIG. 5 is a schematic view of another article selection apparatus according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of a terminal device according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth such as the particular system architecture, techniques, etc., in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It should be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in the present specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items, and includes such combinations.
As used in the present specification and the appended claims, the term "if" may be interpreted, depending on the context, as "when", "once", "in response to determining", or "in response to detecting". Similarly, the phrases "if it is determined" or "if [a described condition or event] is detected" may be interpreted, depending on the context, as "upon determining", "in response to determining", "upon detecting [the described condition or event]", or "in response to detecting [the described condition or event]".
Furthermore, the terms "first," "second," "third," and the like in the description of the present specification and in the appended claims, are used for distinguishing between descriptions and not necessarily for indicating or implying a relative importance.
Reference in the specification to "one embodiment" or "some embodiments" or the like means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," and the like in the specification are not necessarily all referring to the same embodiment, but mean "one or more but not all embodiments" unless expressly specified otherwise. The terms "comprising," "including," "having," and variations thereof mean "including but not limited to," unless expressly specified otherwise.
The article selection method provided by the embodiments of the present application can be applied to terminal devices such as mobile phones, tablet computers, vehicle-mounted devices, augmented reality (AR)/virtual reality (VR) devices, notebook computers, ultra-mobile personal computers (UMPCs), netbooks, and personal digital assistants (PDAs); the embodiments of the present application do not limit the specific type of the terminal device.
For example, the terminal device may be a cellular telephone, a cordless telephone, a Session Initiation Protocol (SIP) phone, a wireless local loop (WLL) station, a personal digital assistant (PDA) device, a handheld device with wireless communication capabilities, a computing device or other processing device connected to a wireless modem, an in-vehicle device, an Internet-of-Vehicles terminal, a computer, a laptop computer, a handheld communication device, a handheld computing device, a satellite radio device, a set top box (STB), customer premises equipment (CPE), and/or another device for communicating over a wireless system or a next-generation communication system, such as a mobile terminal in a 5G network or in a future evolved Public Land Mobile Network (PLMN).
In order to illustrate the technical solutions of the present application, specific embodiments are described below.
Example 1
A method for selecting an article according to the first embodiment of the present application is described below with reference to FIG. 1; the method includes:
Step S101, acquiring a selection instruction, wherein the selection instruction comprises a target article type.
In step S101, the selection instruction carries certain requirements on the articles that the user needs to purchase, such as the physical attributes of the article and the purchase location. The physical attributes are not limited here and may be set according to the user's actual requirements, for example any one or more of the price, color, quantity, and article type that the user cares about. The target article type refers to the type of article that needs to be selected, for example apples, grapes, or strawberries. When the user needs to purchase articles, the user inputs the requirements on the purchased articles to the terminal device, and the terminal device generates the corresponding selection instruction.
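For illustration only, the following is a minimal sketch of how a selection instruction carrying a target article type and optional user requirements might be represented. All class and field names here are assumptions introduced for this sketch; the embodiments do not prescribe any concrete data format.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SelectionInstruction:
    """Hypothetical container for the requirements carried by a selection instruction."""
    target_item_type: str                   # e.g. "apple", "grape", "strawberry"
    max_price: Optional[float] = None       # optional physical/commercial attributes
    color: Optional[str] = None
    quantity: int = 1
    purchase_location: Optional[str] = None

# The terminal device would build such an instruction from the user's input:
instruction = SelectionInstruction(target_item_type="apple", max_price=6.0, quantity=3)
```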
Step S102, acquiring article images of the candidate articles according to the selection instruction.
In step S102, the present application does not specifically limit the source of the article images of the candidate articles. An article image may be acquired by the terminal device of this embodiment itself after it receives the selection instruction, or it may be acquired by a robot after the terminal device receives the selection instruction, with the robot then transmitting the acquired image to the terminal device. The candidate articles include the target article. For example, when the target article is an apple, the candidate articles may include apples, pears, oranges, and dragon fruit; the terminal device of this embodiment controls the robot to collect article images at the shopping site, the collected images include apple, pear, orange, and dragon fruit images, and the robot sends them to the terminal device after collection.
Step S103, performing type recognition on the candidate articles according to the article images to obtain the candidate article type of each candidate article.
In step S103, once the article images of the candidate articles are obtained, the type of each candidate article needs to be identified from its article image to obtain its candidate article type, so that it can subsequently be determined which of the candidate articles are target articles. The specific type recognition method is not limited here and may be selected or designed by a skilled person according to actual requirements. For example, in some alternative embodiments, article identification may be implemented using a neural-network-based type recognition method or a wavelet-moment-based type recognition method.
In some possible implementations, the candidate article may be type-identified by a pre-trained first neural network model that includes a first processing layer, a determiner, and a plurality of second processing layers. First, the first processing layer performs feature extraction on the article image to obtain first features, and the type of the candidate article is identified according to the first features. If identifying the candidate article type from the first features fails, a first type set corresponding to the candidate article is determined according to the first features and input to the determiner; the first type set comprises a plurality of article types.
It should be appreciated that the first features may comprise several features of the candidate article. For example, when the candidate article is spinach, the first features may include a green feature, a leaf feature, a root feature, and the like. The number of first features may be set according to the article features that the first processing layer can identify, and is not limited in detail here. In addition, the first processing layer and the plurality of second processing layers can each identify several kinds of articles; for example, the first processing layer may identify spinach, water spinach, cabbage, and the like. The number of article types each processing layer can identify may be set according to the user's actual requirements and is not limited in detail here. The first type set corresponding to the candidate article is the set of article types having the first features: although the first processing layer cannot identify what the candidate article is, it can use the first features to narrow the possibilities down to the article types that have those features, which reduces the amount of computation in the subsequent type recognition and thereby improves recognition efficiency.
For example, when the candidate article is a cabbage, the first processing layer extracts from the cabbage image a green feature, a leaf feature, and a flat-shape feature, and can then identify the candidate article type as cabbage. If the candidate article is a cucumber, however, the only feature the first processing layer extracts may be the green feature. The first processing layer then cannot identify the type of the candidate article from the green feature alone, i.e., the identification fails; but it can still screen out which article types the candidate article might be, namely the article types having the green feature, and input that set into the determiner.
After the determiner receives the first type set, it performs feature extraction on the article image to obtain a second feature and identifies the type of the candidate article according to the second feature and the first type set. If identifying the candidate article type from the second feature and the first type set fails, one or more article types are removed from the first type set according to the second feature, yielding a corresponding second type set.
Identifying the candidate article in the first type set according to the second feature means type-matching the article types in the first type set against the second feature: if the matching succeeds, the successfully matched article type is judged to be the candidate article type; if it fails, the identification of the candidate article type is judged to have failed. For example, when the first type set includes cucumber, white gourd, luffa, bitter gourd, and pumpkin: if the second feature includes a long-strip feature and a thorn feature, the candidate article is matched as a cucumber and the matching succeeds; if the second feature includes only a long-strip feature, the determiner cannot identify the type of the candidate article, but it can still eliminate article types from the first type set according to the second feature. Pumpkin, for instance, does not have the long-strip feature, so pumpkin is removed from the first type set to obtain the second type set.
After the determiner obtains the second type set, it selects the target processing layer corresponding to the second type set from the plurality of second processing layers and inputs the second type set into that target processing layer. Each second processing layer performs feature extraction on the article images of one or more candidate article types and identifies the corresponding article types. Selecting the target processing layer corresponding to the second type set therefore means selecting, from the plurality of second processing layers, a processing layer capable of processing every article type in the second type set, and taking that processing layer as the target processing layer.
Because each processing layer only identifies preset article types, after the determiner obtains the second type set it first determines which processing layers can identify the article types in the second type set, i.e., which are target processing layers. Once the target processing layer corresponding to the second type set is found, the second type set is input into it directly, without passing through any other processing layer, which improves the efficiency of identifying the candidate article's type.
Consider an example. Referring to FIG. 2, assume the neural network model has 10 processing layers, and the tenth processing layer is the target processing layer for recognizing a cucumber image. A prior-art scheme would input the cucumber image into the first processing layer; if that layer fails to identify it, the image is input into the second processing layer, then the third and fourth, and so on up to the tenth. That is, in the prior-art scheme it is not known which processing layer is the target processing layer for the cucumber image, and the target layer is found only by passing the image sequentially through the layers (as indicated by 201 in FIG. 2), which makes recognition inefficient. In the present application, the determiner determines that the tenth processing layer is the target processing layer for the cucumber image, so the image is input directly into the tenth layer (as shown by 202 in FIG. 2); the cucumber image skips the layers before the tenth, and recognition efficiency improves.
It should be noted that, since a determiner may be connected to each processing layer, when the determiner cannot find the target processing layer of the object type in the second type set, the second type set may be directly input to the next processing layer. If the next processing layer fails to recognize, the second type set is input to a determiner connected to the next processing layer, and the determiner connected to the next processing layer determines which processing layers are target processing layers for processing the object types in the second type set.
After the target processing layer receives the second type set, it performs feature extraction on the article image to obtain a third feature and performs type recognition on the candidate article according to the third feature and the second type set, obtaining the candidate article type. This means type-matching the article types in the second type set against the third feature: if the matching succeeds, the successfully matched article type is judged to be the candidate article's type; if it fails, the identification of the candidate article type is judged to have failed.
If the recognition fails, the first neural network model may be exited. Alternatively, when the target processing layer is connected to a further determiner, one or more article types can be removed from the second type set according to the third feature to obtain a third type set, which is input into the determiner connected to the target processing layer for further processing.
A specific application scenario of the first neural network model is described below:
When the candidate article is a cucumber, the first processing layer extracts the green feature from the cucumber image. The first processing layer cannot identify the type of the candidate article from this alone, i.e., identification fails, but it can identify the article types having the green feature: for example cucumber, white gourd, luffa, bitter gourd, and pumpkin. These green-featured types are input into the determiner, which further extracts the second feature. If the second feature is the thorn feature, the candidate article corresponding to the cucumber image is judged to be a cucumber. If the second feature is the long-strip feature, the determiner cannot yet determine the type; however, pumpkin does not have the long-strip feature, so pumpkin is screened out and cucumber, white gourd, luffa, and bitter gourd are input into the target processing layer that handles those types. The target processing layer further extracts the third feature; if the third feature is the thorn feature, the candidate article corresponding to the cucumber image is judged to be a cucumber.
In this embodiment, the determiner in the first neural network model determines the target processing layer for identifying the candidate article's type, and that target processing layer then identifies the article image, so the image does not need to pass sequentially through the ordered processing layers; the efficiency of identifying the article image corresponding to the candidate article is thereby improved.
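To make the routing behaviour of the first neural network model concrete, the following is a simplified, non-authoritative sketch. Real feature extraction by the processing layers is abstracted into sets of feature tags, and every name here (ProcessingLayer, signatures, and so on) is an assumption of this sketch rather than part of the disclosed model.

```python
from typing import Callable, Dict, List, Optional, Set

FeatureExtractor = Callable[[bytes], Set[str]]  # article image -> observed feature tags

class ProcessingLayer:
    """A layer that can identify a fixed, preset set of article types."""
    def __init__(self, known_types: Set[str], extract: FeatureExtractor,
                 signatures: Dict[str, Set[str]]):
        self.known_types = known_types
        self.extract = extract
        self.signatures = signatures         # article type -> features required to match

    def recognize(self, image: bytes, candidates: Set[str]) -> Optional[str]:
        feats = self.extract(image)
        for t in candidates & self.known_types:
            if self.signatures[t] <= feats:  # every required feature was observed
                return t
        return None                          # recognition failed

def determiner(image: bytes, type_set: Set[str], extract: FeatureExtractor,
               signatures: Dict[str, Set[str]],
               second_layers: List[ProcessingLayer]) -> Optional[str]:
    """Try to match on the second feature; otherwise prune the type set and
    dispatch the pruned set straight to the target processing layer."""
    feats = extract(image)
    for t in type_set:
        if signatures[t] <= feats:           # matched directly on the second feature
            return t
    # Remove types inconsistent with the observed features (e.g. pumpkin
    # lacks the long-strip feature), yielding the second type set.
    pruned = {t for t in type_set if feats <= signatures[t]}
    for layer in second_layers:              # target layer covers the whole pruned set
        if pruned <= layer.known_types:
            return layer.recognize(image, pruned)
    return None                              # defer to the next layer's determiner
```

Because the determiner jumps directly to the target processing layer, the image never visits layers that cannot recognize any of the remaining candidate types, which is exactly the efficiency gain illustrated by FIG. 2.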
Step S104, screening, from the candidate articles, the initially selected articles whose candidate article type belongs to the target article type, and performing quality detection on the initially selected articles to obtain quality information of the initially selected articles.
In step S104, the candidate articles whose candidate article type belongs to the target article type are screened out and taken as the initially selected articles. After the initially selected articles are obtained, quality detection is performed on them to obtain their quality information.
In some embodiments, the quality of the initially selected articles may be detected using a pre-trained second neural network model to obtain their quality information. The pre-trained second neural network model may be a convolutional neural network (CNN), a recurrent neural network (RNN), a deep neural network (DNN), or the like; the type of neural network model used for quality detection can be chosen according to actual requirements and is not limited in detail here.
In other embodiments, the terminal device may first acquire physical parameters of the initially selected article and then obtain its quality information from those parameters. Physical parameters are non-visual parameters of the article, that is, parameters that cannot be obtained from the article image, including but not limited to the smell, temperature, viscosity, and hardness of the initially selected article. It should be understood that when the quality information needs to be obtained from both visual and physical parameters, the non-visual parameters of the initially selected article must be acquired in addition to inputting the article image into the pre-trained second neural network model. For example, when the initially selected article is a tomato, the hardness parameter of the tomato needs to be acquired besides its image information, and the quality information of the tomato is finally obtained from the hardness parameter together with the image information.
In some possible implementations, the physical parameters of the initially selected article may be obtained through the manipulator of a robot. Since at least one sensor is arranged in the manipulator, when the manipulator is controlled to touch or grasp the initially selected article the sensors collect corresponding data; for example, a tactile sensor can collect hardness information, viscosity information, and the like. The quality information of the initially selected article is then obtained from the data collected by the sensors.
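As a hedged illustration of this step, the sketch below combines manipulator sensor readings with the output of an image-based quality model. The sensor interface, its read method, and quality_model are all assumptions made for the example; the embodiments do not fix any particular sensor API.

```python
def inspect_quality(image, sensors, quality_model):
    """Assemble quality information from visual and non-visual parameters.

    `sensors` is assumed to expose a read(name) method backed by the sensors
    in the robot's manipulator; `quality_model` stands in for the pre-trained
    second neural network model.
    """
    # Non-visual (physical) parameters gathered while the manipulator
    # touches or grasps the initially selected article.
    physical = {
        "hardness": sensors.read("hardness"),
        "viscosity": sensors.read("viscosity"),
        "temperature": sensors.read("temperature"),
    }
    visual_score = quality_model(image)  # image-based quality estimate
    return {"visual": visual_score, **physical}
```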
Step S105, screening out, from the initially selected articles, the target articles whose quality information meets the preset quality standard, and taking the target articles as the result of article selection.
In step S105, when the quality information of the initially selected item meets a preset quality standard, the initially selected item is taken as a target item, and the target item is taken as a selection result of item selection.
In some embodiments, the quality information of the initially selected article includes article attribute data. For example, when the initially selected article is an apple, the article attribute data may include the apple's hardness information, shape information, price, and the like. The preset quality standard comprises several pieces of attribute standard data set by the user. For example, when the apple's attribute data includes hardness, shape, and price, the user may set the attribute standard data to hardness A, shape B, price 6, or to hardness A, shape B, price 4, and so on; the attribute standard data may be set according to the user's needs and is not specifically limited here. The article attribute data may also include overall evaluation information of the article. For example, when the article is an apple, overall evaluation information is derived from the apple's hardness information and appearance information, and it is then judged whether the overall evaluation information and the price meet the attribute standard data set by the user; in this case the user's attribute standard data must also include overall evaluation data. For instance, an apple with hardness A and appearance B may have overall evaluation information B and a price of 6; it is then further judged whether an apple with overall evaluation information B and price 6 satisfies the attribute standard data set by the user.
Since the user sets several pieces of attribute standard data, the user can also set a priority for each. For example, the attribute standard data hardness A, shape B, price 4 may be set as first priority; hardness A, shape B, price 6 as second priority; and hardness B, shape B, price 4 as third priority. Alternatively, overall evaluation B with price 4 may be set as first priority, overall evaluation B with price 6 as second priority, and overall evaluation A with price 8 as third priority. The priorities are set according to the user's actual requirements and are not specifically limited here.
In the process of screening the initially selected articles, whether the article attribute data meets each piece of attribute standard data is judged in order of priority from high to low. For example, let the first priority be hardness A, shape B, price 4 and the second priority be hardness A, shape B, price 6. For an article with attribute data hardness A, shape B, price 6, it is first judged whether the data meets the first-priority standard; since it does not, it is then judged against the second-priority standard. Because the article attribute data satisfies the second-priority standard, judging stops; otherwise judging would continue against the remaining priorities.
When the user has not set priorities for the attribute standard data, the article attribute data must be checked against all the attribute standard data during screening until a match is found. If the article attribute data meets none of the attribute standard data, information is sent to the user so that the user can make a further selection. A minimal sketch of this priority-ordered screening follows.
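The sketch assumes each piece of attribute standard data is a simple dictionary and that the list is already ordered from highest to lowest priority; all names are illustrative.

```python
from typing import Dict, List, Optional

AttributeData = Dict[str, object]  # e.g. {"hardness": "A", "shape": "B", "price": 6}

def matches(item: AttributeData, standard: AttributeData) -> bool:
    # An item satisfies a standard when every required attribute agrees.
    return all(item.get(key) == value for key, value in standard.items())

def screen_item(item: AttributeData,
                standards: List[AttributeData]) -> Optional[AttributeData]:
    """Return the first (highest-priority) standard the item satisfies,
    or None, in which case the user is asked to make a further selection."""
    for standard in standards:      # ordered by priority, high to low
        if matches(item, standard):
            return standard
    return None

apple = {"hardness": "A", "shape": "B", "price": 6}
standards = [
    {"hardness": "A", "shape": "B", "price": 4},  # first priority: not met
    {"hardness": "A", "shape": "B", "price": 6},  # second priority: met, stop here
]
assert screen_item(apple, standards) == standards[1]
```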
In other embodiments, user information may first be obtained and the preset quality standard generated from it, where the user information includes one or more of the user's historical article selection records, consumption data, and identity information. In this case the preset quality standard is generated from the user information without requiring the user to set it, and articles with a higher degree of matching with the user can still be selected, which is more convenient.
It should be understood that the article selection method of this embodiment may be applied to a terminal device: after determining the selection result, the terminal device sends it to a robot, and the robot grasps the articles according to the selection result. The method may also be applied to a robot directly: after obtaining the selection result, the robot controls its manipulator to grasp the articles accordingly.
In view of the foregoing, the present application provides an article selection method. First, a selection instruction is obtained, which contains the target article type (the type of article that needs to be selected, for example apples, grapes, or strawberries). Article images of the candidate articles are acquired according to the selection instruction, and type recognition is performed on the candidate articles according to the article images to obtain the candidate article type of each candidate article (for example apples, pears, watermelons, and the like). The initially selected articles whose candidate article type belongs to the target article type are screened out from the candidate articles (for example, if the target article type is apple and the candidate articles include apples, pears, and watermelons, the apples among the candidate articles are determined to be the initially selected articles), and quality detection is performed on the initially selected articles to obtain their quality information. Finally, the target articles whose quality information meets a preset quality standard (which can be set according to the user's requirements) are screened out from the initially selected articles and taken as the result of article selection. Because quality detection is performed after the initially selected articles are obtained, and only the initially selected articles whose quality information meets the preset quality standard are taken as target articles, the initially selected articles are not used directly as the selection result; an effective screening of the articles is thus achieved. Moreover, since the preset quality standard can be set according to the user's requirements, even when shopping is performed online the embodiments of the present application can further select articles according to those requirements, so that articles with a higher degree of matching with the user are selected.
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic and does not limit the implementation of the embodiments of the present application.
Example two
Another method for selecting an article according to the second embodiment of the present application is described below with reference to FIG. 3; the method includes:
S301, acquiring a selection instruction, wherein the selection instruction comprises a target object type.
S302, acquiring an article image of the candidate article according to the selection instruction.
S303, carrying out type recognition on the item image to obtain candidate item types of the candidate items.
S304, screening, from the candidate items, the initially selected items whose candidate item type belongs to the target item type, and performing quality detection on the initially selected items to obtain quality information of the initially selected items.
The specific implementation manners of steps S301 to S304 are the same as S101 to S104 in the first embodiment, and specific reference may be made to the description of the first embodiment, which is not repeated here.
S305, outputting the quality information of the initially selected object to a user in a preset output mode.
In step S305, after the quality information of the initially selected articles is obtained, it is output to the user in the preset output mode so that the user can learn the quality information. It should be noted that, since there are many articles to be selected, the terminal device may either send the quality information of each article to the user as soon as it is obtained, or collect the quality information of all the articles to be selected and send it to the user together. In the latter case, after obtaining the quality information of an article, the terminal device must establish and store a mapping between that article's quality information and its position information. In this way, once the selection result is obtained, the position information of the selected article can be retrieved directly and sent to the robot together with the selection result, so that the robot moves to the position of the selected article to grasp it.
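For illustration, the following sketch stores the quality-to-position mapping described above so that, once the user's choice arrives, the robot can be dispatched straight to the chosen article. The record structure and the robot interface (move_to, grasp) are assumptions of this sketch.

```python
item_records = {}  # article id -> stored quality and position information

def record_item(item_id, quality_info, position):
    # Establish and store the mapping between an article's quality
    # information and its position, as described above.
    item_records[item_id] = {"quality": quality_info, "position": position}

def dispatch_selection(item_id, robot):
    # After the selection result is obtained, retrieve the stored position
    # and send the robot there to grasp the article.
    record = item_records[item_id]
    robot.move_to(record["position"])
    robot.grasp()
```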
S306, receiving a selection instruction fed back by a user based on the quality information, screening a target object pointed by the selection instruction from the initially selected objects, and taking the target object as a selection result of object selection.
In step S306, after receiving the quality information sent by the terminal device, the user can select an article based on it and feed a selection instruction back to the terminal device. The terminal device then screens out, from the initially selected articles, the target article indicated by that selection instruction and takes it as the result of article selection.
In some possible implementations, because a mapping has been established between the position information and the quality information of each selectable article, a user who wants to interact further with a vendor or shopping guide can include interaction information in the selection instruction. After receiving the instruction, the terminal device controls the robot to move, according to the interaction information, to the position of the target article indicated by the instruction; at the same time the robot can relay the user's voice and avatar, so that the user can interact with the vendor or shopping guide and experience remote shopping while selecting articles that meet his or her requirements.
It should be appreciated that the contents of the alternative embodiments described in the first embodiment can be combined with this embodiment wherever no contradiction arises. For example, for type recognition of an article, the first neural network model of the first embodiment may be used: the determiner screens the target processing layer for the second type set out of the plurality of second processing layers and inputs the second type set directly into it, so that the set does not pass through other processing layers, improving the efficiency of identifying the candidate article type. Likewise, for quality detection of the initially selected articles, the second neural network model of the first embodiment may be used, or the quality information may be obtained from the acquired physical parameters of the initially selected articles; the physical parameters may in turn be obtained from the sensors on the robot's manipulator. Such combined embodiments also fall within the scope of the present application.
In this embodiment, after obtaining the quality information of the initially selected articles, the terminal device outputs it to the user; the user selects articles based on that information and feeds a selection instruction back to the terminal device, and the terminal device screens out the target article indicated by the instruction from the initially selected articles and takes it as the selection result. The quality information of the initially selected articles thus allows users to select articles that meet their needs more conveniently.
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic and does not limit the implementation of the embodiments of the present application.
It should be particularly noted that although the first neural network model is used for article type identification in each of the above embodiments, its essence is to perform type recognition on an object, and that object may be an article as in the above embodiments, a human body, or other animals and plants. Therefore, according to the different requirements of practical applications, the first neural network model can be applied in combination with many more practical scenarios; only the recognition object of the model and the choice of its features need to be adapted. In all cases, the beneficial effect of the first neural network model is that, compared with existing type recognition methods, it improves recognition efficiency and makes type recognition quicker and more accurate, although the beneficial effects may vary appropriately in different practical situations.
For example, the first neural network model may be applied to face recognition scenarios, where the recognition object is a human face. The features may be one or more of skin color, expression, facial texture, coordinates of the facial features, and so on; other facial features may also be selected and are not limited here. Face classification (such as classification by age, gender, or ethnicity, or identity recognition) is thereby realized. Correspondingly, the beneficial effect of the first neural network model here is that it can improve the efficiency of face recognition, making it more accurate and efficient.
For another example, the first neural network model may be applied to animal type recognition, where the recognition object is an animal. The features may be one or more of fur, limbs, tail, and so on; other animal features may also be selected and are not limited here. Classification of animals (for example distinguishing cats from dogs, or mammals from reptiles) is thereby realized. Correspondingly, the beneficial effect is that animal recognition becomes more accurate and efficient.
In theory, any object type recognition scenario can be combined with the first neural network model, for example type recognition of different kinds of vehicles, machines, and the like. The present application is therefore not limited to the above examples, and the application may be determined according to the actual scenario. It should be understood that these different practical applications all fall within the scope of "object type identification with the first neural network model" of the present application.
Example III
Fig. 4 shows a block diagram of an article selecting apparatus according to an embodiment of the present application, corresponding to the article selecting method described in the first embodiment, and only the portions related to the embodiment of the present application are shown for convenience of explanation. The apparatus 400 includes:
the selection instruction obtaining module 401 is configured to obtain a selection instruction, where the selection instruction includes a target object type.
An image acquisition module 402, configured to acquire an item image of the candidate item according to the selection instruction.
And the identification module 403 is configured to perform type identification on the candidate item according to the item image, so as to obtain a candidate item type of the candidate item.
And the quality information acquisition module 404 is configured to screen out, from the candidate items, the initially selected items whose candidate item type belongs to the target item type, and to perform quality detection on the initially selected items to obtain their quality information.
And the screening module 405 is configured to screen a target item whose quality information meets a preset quality standard from the initially selected items, and take the target item as a selection result of item selection.
Optionally, the identification module 403 is configured to perform:
acquiring a pre-trained first neural network model, where the first neural network model includes a first processing layer, a determiner, and a plurality of second processing layers;
performing feature extraction on the item image by using the first processing layer to obtain a first feature, and identifying the type of the candidate item according to the first feature;
if identifying the type of the candidate item according to the first feature fails, identifying a first type set corresponding to the candidate item according to the first feature, and inputting the first type set to the determiner, where the first type set includes a plurality of item types;
performing feature extraction on the item image by using the determiner to obtain a second feature, and identifying the type of the candidate item according to the second feature and the first type set;
if identifying the type of the candidate item according to the second feature and the first type set fails, removing one or more item types from the first type set according to the second feature, to obtain a corresponding second type set;
screening a target processing layer corresponding to the second type set from the plurality of second processing layers, and inputting the second type set to the target processing layer; and
performing feature extraction on the item image by using the target processing layer to obtain a third feature, and identifying the type of the candidate item according to the third feature and the second type set, to obtain the candidate item type of the candidate item.
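By way of illustration only, the following is a minimal sketch of the coarse-to-fine identification flow performed by the identification module 403. The names first_layer, determiner, and second_layers, the probability thresholds, the keying of second_layers by type set, and the use of probability arrays are assumptions introduced for this sketch; the embodiments do not prescribe any particular implementation.

```python
def identify_candidate_type(item_image, first_layer, determiner, second_layers,
                            accept=0.9, keep=0.1):
    """Hypothetical coarse-to-fine identification: first processing layer,
    then determiner, then a type-set-specific second processing layer.
    All three models are assumed pre-trained callables returning
    probability arrays (e.g. NumPy ndarrays)."""
    # Stage 1: the first processing layer extracts the first feature and
    # attempts direct identification.
    probs = first_layer(item_image)
    if probs.max() >= accept:
        return int(probs.argmax())                 # identification succeeded

    # Stage 1 failed: the plausible types form the first type set.
    first_type_set = [t for t, p in enumerate(probs) if p >= keep]

    # Stage 2: the determiner extracts the second feature and retries,
    # restricted to the first type set.
    probs2 = determiner(item_image, first_type_set)
    if probs2.max() >= accept:
        return first_type_set[int(probs2.argmax())]

    # Stage 2 failed: remove implausible types to obtain the second type set.
    second_type_set = [t for t, p in zip(first_type_set, probs2) if p >= keep]

    # Stage 3: the second processing layer specialised for this type set
    # extracts the third feature and makes the final decision.
    target_layer = second_layers[frozenset(second_type_set)]
    probs3 = target_layer(item_image, second_type_set)
    return second_type_set[int(probs3.argmax())]
```

Routing by the frozen set of remaining types is only one way to pick the "target processing layer corresponding to the second type set"; a real system might instead map each coarse category to its own refinement network.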
Optionally, the quality information acquisition module 404 is configured to perform:
acquiring physical parameters of the initially selected item; and
obtaining the quality information of the initially selected item according to the physical parameters.
Optionally, if the article selecting apparatus 400 is a robot having a manipulator in which at least one sensor is disposed, the quality information acquisition module 404 is configured to perform:
controlling the manipulator to touch or grasp the initially selected item, and determining the physical parameters according to data generated by the at least one sensor during the touching or grasping.
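As a concrete (and purely illustrative) example of this sensor-based path, the sketch below derives a stiffness estimate from force readings captured while the manipulator squeezes the item. The sampling format, the squeeze_mm parameter, and the stiffness formula are assumptions of this sketch, not features required by the embodiment.

```python
def physical_params_from_grasp(sensor_readings, squeeze_mm):
    """Hypothetical reduction of grasp-sensor data to physical parameters.

    sensor_readings: list of (timestamp_s, force_newtons) samples recorded
    by the at least one sensor during the touch or grasp.
    """
    forces = [force for _, force in sensor_readings]
    peak_force = max(forces)
    # Stiffness estimate: peak resisting force per millimetre of squeeze;
    # a soft (e.g. overripe) item yields a low value.
    stiffness = peak_force / squeeze_mm if squeeze_mm > 0 else float("inf")
    return {"peak_force_n": peak_force, "stiffness_n_per_mm": stiffness}

# Example: four samples taken while squeezing a fruit by 3 mm.
readings = [(0.0, 0.2), (0.1, 1.1), (0.2, 1.8), (0.3, 1.6)]
print(physical_params_from_grasp(readings, squeeze_mm=3.0))
```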
Optionally, the quality information acquisition module 404 is configured to perform:
inputting the item image into a pre-trained second neural network model for processing, to obtain the quality information of the initially selected item.
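For completeness, a sketch of the image-based path is given below. The second_model callable and the quality labels are placeholders for whatever pre-trained second neural network model an implementer supplies; nothing about its architecture is specified by the embodiment.

```python
import numpy as np

def quality_from_image(item_image, second_model,
                       labels=("fresh", "bruised", "spoiled")):
    """Hypothetical image-based quality detection: normalise the item image
    and let an assumed pre-trained model score each quality label."""
    x = item_image.astype(np.float32) / 255.0   # scale 8-bit pixels to [0, 1]
    probs = second_model(x)                     # assumed: iterable of label scores
    return dict(zip(labels, probs))
```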
Optionally, the quality information includes item attribute data, and the preset quality standard includes a plurality of attribute standard data set by a user. In the screening operation on a single initially selected item, the screening module 405 is configured to perform:
acquiring the priority of each attribute standard datum, and judging in turn, in descending order of priority, whether the item attribute data meet the requirements of each attribute standard datum; and
if the item attribute data meet the requirements of one or more attribute standard data, determining the initially selected item as a target item.
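The priority-ordered screening step can be pictured as follows; the representation of attribute standard data as (priority, predicate) pairs and the required_hits parameter are assumptions of this sketch.

```python
def passes_quality_standard(item_attrs, standards, required_hits=1):
    """Hypothetical screening: check attribute standards from highest to
    lowest priority; the item becomes a target item once the required
    number of standards is satisfied."""
    hits = 0
    for priority, check in sorted(standards, key=lambda s: -s[0]):
        if check(item_attrs):
            hits += 1
            if hits >= required_hits:
                return True
    return False

# Example: ripeness is the highest-priority standard, blemish count second.
standards = [
    (2, lambda attrs: attrs["ripeness"] >= 0.8),
    (1, lambda attrs: attrs["blemishes"] == 0),
]
print(passes_quality_standard({"ripeness": 0.85, "blemishes": 1}, standards))  # True
```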
Optionally, the screening module 405 includes:
a user information acquisition unit, configured to acquire user information and to generate the preset quality standard according to the user information, where the user information includes one or more of the user's historical item selection records, consumption data, and identity information.
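Continuing the same illustrative representation, a preset quality standard might be derived from user information as sketched below; the field names and thresholds are invented for the example.

```python
def build_quality_standard(user_info):
    """Hypothetical generation of a preset quality standard from user
    information (history, consumption data, identity)."""
    standards = []
    history = user_info.get("history", [])
    if history:
        # Require at least the average ripeness the user previously accepted.
        avg_ripeness = sum(h["ripeness"] for h in history) / len(history)
        standards.append((2, lambda attrs, t=avg_ripeness: attrs["ripeness"] >= t))
    if user_info.get("premium_member", False):
        # A premium identity tightens the blemish requirement.
        standards.append((1, lambda attrs: attrs["blemishes"] == 0))
    return standards

profile = {"history": [{"ripeness": 0.8}, {"ripeness": 0.9}], "premium_member": True}
print(len(build_quality_standard(profile)))  # 2 standards generated
```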
It should be noted that, because the information interaction and execution processes between the above devices/units are based on the same concept as the method embodiments of the present application, their specific functions and technical effects may be found in the method embodiment section and are not repeated here.
Example IV
Fig. 5 shows a block diagram of an article selecting apparatus according to an embodiment of the present application, corresponding to the article selecting method described in the second embodiment; for convenience of explanation, only the portions related to this embodiment are shown. The apparatus 500 includes:
the selection instruction acquisition module 501, configured to acquire a selection instruction, where the selection instruction includes a target item type;
the image acquisition module 502, configured to acquire an item image of a candidate item according to the selection instruction;
the identification module 503, configured to perform type identification on the candidate item according to the item image, to obtain a candidate item type of the candidate item;
the quality information acquisition module 504, configured to screen, from the candidate items, initially selected items whose candidate item type belongs to the target item type, and to perform quality detection on the initially selected items to obtain quality information of the initially selected items;
the quality information output module 505, configured to output the quality information of the initially selected items to the user in a preset output mode; and
the receiving module 506, configured to receive a selection instruction fed back by the user based on the quality information, to screen, from the initially selected items, the target item to which that instruction points, and to take the target item as the selection result of item selection.
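The interactive variant of this embodiment can be pictured as the loop below; console print/input merely stand in for whatever preset output mode (screen, speech, app message, and the like) the device actually uses.

```python
def interactive_select(preselected):
    """Hypothetical sketch: output quality information for each initially
    selected item, then honour the selection instruction fed back by the user."""
    for index, item in enumerate(preselected):
        print(f"[{index}] {item['name']}: quality = {item['quality']}")
    choice = int(input("Enter the index of the item to select: "))
    return preselected[choice]   # the target item, i.e. the selection result

# Example (run interactively):
# interactive_select([{"name": "apple A", "quality": "fresh"},
#                     {"name": "apple B", "quality": "bruised"}])
```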
Optionally, the identification module 503 is configured to perform:
acquiring a pre-trained first neural network model, where the first neural network model includes a first processing layer, a determiner, and a plurality of second processing layers;
performing feature extraction on the item image by using the first processing layer to obtain a first feature, and identifying the type of the candidate item according to the first feature;
if identifying the type of the candidate item according to the first feature fails, identifying a first type set corresponding to the candidate item according to the first feature, and inputting the first type set to the determiner, where the first type set includes a plurality of item types;
performing feature extraction on the item image by using the determiner to obtain a second feature, and identifying the type of the candidate item according to the second feature and the first type set;
if identifying the type of the candidate item according to the second feature and the first type set fails, removing one or more item types from the first type set according to the second feature, to obtain a corresponding second type set;
screening a target processing layer corresponding to the second type set from the plurality of second processing layers, and inputting the second type set to the target processing layer; and
performing feature extraction on the item image by using the target processing layer to obtain a third feature, and identifying the type of the candidate item according to the third feature and the second type set, to obtain the candidate item type of the candidate item.
Optionally, the quality information acquisition module 504 is configured to perform:
acquiring physical parameters of the initially selected item; and
obtaining the quality information of the initially selected item according to the physical parameters.
Optionally, if the article selecting apparatus 500 is a robot having a manipulator in which at least one sensor is disposed, the quality information acquisition module 504 is configured to perform:
controlling the manipulator to touch or grasp the initially selected item, and determining the physical parameters according to data generated by the at least one sensor during the touching or grasping.
Optionally, the quality information acquisition module 504 is configured to perform:
inputting the item image into a pre-trained second neural network model for processing, to obtain the quality information of the initially selected item.
It should be noted that, because the information interaction and execution processes between the above devices/units are based on the same concept as the method embodiments of the present application, their specific functions and technical effects may be found in the method embodiment section and are not repeated here.
Example V
Fig. 6 is a schematic diagram of a terminal device according to a fifth embodiment of the present application. As shown in Fig. 6, the terminal device 600 of this embodiment includes a processor 601, a memory 602, and a computer program 603 stored in the memory 602 and executable on the processor 601. When the processor 601 executes the computer program 603, the steps of the various method embodiments described above are implemented; alternatively, when the processor 601 executes the computer program 603, the functions of the modules/units in the apparatus embodiments are performed.
Illustratively, the computer program 603 may be divided into one or more modules/units, which are stored in the memory 602 and executed by the processor 601 to implement the present application. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, and the instruction segments are used to describe the execution of the computer program 603 in the terminal device 600. For example, the computer program 603 may be divided into a selection instruction acquisition module, an image acquisition module, an identification module, a quality information acquisition module, and a screening module, where the modules have the following specific functions:
acquiring a selection instruction, where the selection instruction includes a target item type;
acquiring an item image of a candidate item according to the selection instruction;
performing type identification on the candidate item according to the item image, to obtain a candidate item type of the candidate item;
screening, from the candidate items, initially selected items whose candidate item type belongs to the target item type, and performing quality detection on the initially selected items to obtain quality information of the initially selected items; and
screening, from the initially selected items, a target item whose quality information meets a preset quality standard, and taking the target item as the selection result of item selection.
The terminal device may include, but is not limited to, the processor 601 and the memory 602. It will be appreciated by those skilled in the art that Fig. 6 is merely an example of the terminal device 600 and does not limit it; the terminal device 600 may include more or fewer components than shown, combine certain components, or use different components; for example, it may also include input and output devices, network access devices, buses, and the like.
The processor 601 may be a central processing unit (CPU), or may be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 602 may be an internal storage unit of the terminal device 600, for example, a hard disk or an internal memory of the terminal device 600. The memory 602 may also be an external storage device of the terminal device 600, for example, a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, or a flash card provided on the terminal device 600. Further, the memory 602 may include both an internal storage unit and an external storage device of the terminal device 600. The memory 602 is configured to store the computer program and other programs and data required by the terminal device, and may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above division of functional units and modules is illustrated; in practical applications, the above functions may be distributed among different functional units and modules as needed, that is, the internal structure of the apparatus may be divided into different functional units or modules to perform all or part of the functions described above. The functional units and modules in the embodiments may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware or in the form of a software functional unit. In addition, the specific names of the functional units and modules are only for convenience of distinguishing them from each other and are not intended to limit the protection scope of the present application. For the specific working processes of the units and modules in the above system, reference may be made to the corresponding processes in the foregoing method embodiments, which are not repeated here.
In the foregoing embodiments, the description of each embodiment has its own emphasis; for parts that are not described or detailed in one embodiment, reference may be made to the related descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other manners. For example, the apparatus/terminal device embodiments described above are merely illustrative; for instance, the division of the modules or units is merely a logical function division, and there may be other divisions in actual implementation, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections via interfaces, devices, or units, and may be in electrical, mechanical, or other forms.
The units described above as separate components may or may not be physically separate, and components shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
If the integrated modules/units are implemented in the form of software functional units and sold or used as stand-alone products, they may be stored in a computer readable storage medium. Based on this understanding, the present application may implement all or part of the flows of the above method embodiments by instructing related hardware through a computer program; the computer program may be stored in a computer readable storage medium, and when executed by a processor, the computer program may implement the steps of the above method embodiments. The computer program includes computer program code, which may be in source code form, object code form, an executable file, some intermediate form, or the like. The computer readable medium may include any entity or device capable of carrying the computer program code, a recording medium, a USB flash disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so on. It should be noted that the content included in the computer readable medium may be appropriately increased or decreased according to the requirements of legislation and patent practice in a jurisdiction; for example, in some jurisdictions, the computer readable medium does not include electrical carrier signals and telecommunications signals according to legislation and patent practice.
The foregoing embodiments are merely intended to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those skilled in the art that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be equivalently replaced; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application.

Claims (9)

1. An item selection method, comprising:
acquiring a selection instruction, wherein the selection instruction comprises a target item type;
acquiring an item image of a candidate item according to the selection instruction;
performing type identification on the candidate item according to the item image, to obtain a candidate item type of the candidate item;
screening, from the candidate items, initially selected items whose candidate item type belongs to the target item type, and performing quality detection on the initially selected items to obtain quality information of the initially selected items; and
screening, from the initially selected items, a target item whose quality information meets a preset quality standard, and taking the target item as a selection result of item selection;
wherein the performing type identification on the candidate item according to the item image to obtain the candidate item type of the candidate item comprises:
acquiring a pre-trained first neural network model, wherein the first neural network model comprises a first processing layer, a determiner, and a plurality of second processing layers;
performing feature extraction on the item image by using the first processing layer to obtain a first feature, and identifying the type of the candidate item according to the first feature;
if identifying the type of the candidate item according to the first feature fails, identifying a first type set corresponding to the candidate item according to the first feature, and inputting the first type set to the determiner, wherein the first type set comprises a plurality of item types;
performing feature extraction on the item image by using the determiner to obtain a second feature, and identifying the type of the candidate item according to the second feature and the first type set;
if identifying the type of the candidate item according to the second feature and the first type set fails, removing one or more item types from the first type set according to the second feature, to obtain a corresponding second type set;
screening a target processing layer corresponding to the second type set from the plurality of second processing layers, and inputting the second type set to the target processing layer; and
performing feature extraction on the item image by using the target processing layer to obtain a third feature, and identifying the type of the candidate item according to the third feature and the second type set, to obtain the candidate item type of the candidate item.
2. The item selection method according to claim 1, wherein the performing quality detection on the initially selected items to obtain quality information of the initially selected items comprises:
acquiring physical parameters of the initially selected item; and
obtaining the quality information of the initially selected item according to the physical parameters.
3. The item selection method according to claim 2, applied to a robot having a manipulator in which at least one sensor is disposed, wherein the acquiring physical parameters of the initially selected item comprises:
controlling the manipulator to touch or grasp the initially selected item, and determining the physical parameters according to data generated by the at least one sensor during the touching or grasping.
4. The item selection method according to claim 1, wherein the performing quality detection on the initially selected items to obtain quality information of the initially selected items comprises:
inputting the item image into a pre-trained second neural network model for processing, to obtain the quality information of the initially selected item.
5. The item selection method according to claim 1, wherein the quality information comprises item attribute data, and the preset quality standard comprises a plurality of attribute standard data set by a user;
correspondingly, in the operation of screening, from the initially selected items, target items whose quality information meets the preset quality standard, the screening operation on a single initially selected item comprises:
acquiring the priority of each attribute standard datum, and judging in turn, in descending order of priority, whether the item attribute data meet the requirements of each attribute standard datum; and
if the item attribute data meet the requirements of one or more attribute standard data, determining the initially selected item as a target item.
6. The item selection method according to claim 1, further comprising:
acquiring user information, and generating the preset quality standard according to the user information, wherein the user information comprises one or more of the user's historical item selection records, consumption data, and identity information.
7. An item selection method, comprising:
acquiring a selection instruction, wherein the selection instruction comprises a target item type;
acquiring an item image of a candidate item according to the selection instruction;
performing type identification on the candidate item according to the item image, to obtain a candidate item type of the candidate item;
screening, from the candidate items, initially selected items whose candidate item type belongs to the target item type, and performing quality detection on the initially selected items to obtain quality information of the initially selected items;
outputting the quality information of the initially selected items to a user in a preset output mode; and
receiving a selection instruction fed back by the user based on the quality information, screening, from the initially selected items, the target item to which that instruction points, and taking the target item as a selection result of item selection;
wherein the performing type identification on the candidate item according to the item image to obtain the candidate item type of the candidate item comprises:
acquiring a pre-trained first neural network model, wherein the first neural network model comprises a first processing layer, a determiner, and a plurality of second processing layers;
performing feature extraction on the item image by using the first processing layer to obtain a first feature, and identifying the type of the candidate item according to the first feature;
if identifying the type of the candidate item according to the first feature fails, identifying a first type set corresponding to the candidate item according to the first feature, and inputting the first type set to the determiner, wherein the first type set comprises a plurality of item types;
performing feature extraction on the item image by using the determiner to obtain a second feature, and identifying the type of the candidate item according to the second feature and the first type set;
if identifying the type of the candidate item according to the second feature and the first type set fails, removing one or more item types from the first type set according to the second feature, to obtain a corresponding second type set;
screening a target processing layer corresponding to the second type set from the plurality of second processing layers, and inputting the second type set to the target processing layer; and
performing feature extraction on the item image by using the target processing layer to obtain a third feature, and identifying the type of the candidate item according to the third feature and the second type set, to obtain the candidate item type of the candidate item.
8. An item selection device, comprising:
a selection instruction acquisition module, configured to acquire a selection instruction, wherein the selection instruction comprises a target item type;
an image acquisition module, configured to acquire an item image of a candidate item according to the selection instruction;
an identification module, configured to perform type identification on the candidate item according to the item image, to obtain a candidate item type of the candidate item;
a quality information acquisition module, configured to screen, from the candidate items, initially selected items whose candidate item type belongs to the target item type, and to perform quality detection on the initially selected items to obtain quality information of the initially selected items; and
a screening module, configured to screen, from the initially selected items, a target item whose quality information meets a preset quality standard, and to take the target item as a selection result of item selection;
wherein the identification module is specifically configured to perform:
acquiring a pre-trained first neural network model, wherein the first neural network model comprises a first processing layer, a determiner, and a plurality of second processing layers;
performing feature extraction on the item image by using the first processing layer to obtain a first feature, and identifying the type of the candidate item according to the first feature;
if identifying the type of the candidate item according to the first feature fails, identifying a first type set corresponding to the candidate item according to the first feature, and inputting the first type set to the determiner, wherein the first type set comprises a plurality of item types;
performing feature extraction on the item image by using the determiner to obtain a second feature, and identifying the type of the candidate item according to the second feature and the first type set;
if identifying the type of the candidate item according to the second feature and the first type set fails, removing one or more item types from the first type set according to the second feature, to obtain a corresponding second type set;
screening a target processing layer corresponding to the second type set from the plurality of second processing layers, and inputting the second type set to the target processing layer; and
performing feature extraction on the item image by using the target processing layer to obtain a third feature, and identifying the type of the candidate item according to the third feature and the second type set, to obtain the candidate item type of the candidate item.
9. A terminal device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the method according to any one of claims 1 to 7 when executing the computer program.