
WO2022014133A1 - Mobile manipulator, method for controlling same, program - Google Patents

Mobile manipulator, method for controlling same, program

Info

Publication number
WO2022014133A1
Authority
WO
WIPO (PCT)
Prior art keywords
manipulator
posture
mobile
mobile manipulator
target object
Prior art date
Application number
PCT/JP2021/018229
Other languages
French (fr)
Japanese (ja)
Inventor
弘之 岡
義弘 坂本
健太 加藤
高志 佐藤
史朗 佐久間
雄希 松尾
Original Assignee
東京ロボティクス株式会社
Priority date
Filing date
Publication date
Application filed by 東京ロボティクス株式会社 (Tokyo Robotics Inc.)
Publication of WO2022014133A1


Classifications

    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B25 - HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J - MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J13/00 - Controls for manipulators
    • B25J13/08 - Controls for manipulators by means of sensing devices, e.g. viewing or touching devices
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B25 - HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J - MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J5/00 - Manipulators mounted on wheels or on carriages
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B25 - HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J - MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 - Programme-controlled manipulators
    • B25J9/10 - Programme-controlled manipulators characterised by positioning means for manipulator elements

Definitions

  • The present invention relates to a mobile body that is equipped with a manipulator and moves autonomously, for example a mobile manipulator.
  • Patent Document 1 discloses a mobile robot that transports an article acquired from an article storage device.
  • To transport an object to a predetermined destination, the mobile manipulator must, for example, move to the vicinity of a shelf or similar structure on which the target object is placed while performing self-position estimation and the like, acquire the target object by gripping it, move to the next destination while holding it, and then place the target object.
  • The present invention has been made to solve the above technical problem, and its object is to provide a mobile manipulator and the like that can reliably accomplish subsequent operations such as reaching even if an error occurs in the stop position and posture after movement.
  • The mobile manipulator according to the present invention moves to the vicinity of a placement body and reaches for a target object on the placement body. It comprises: a carriage unit provided with moving means that moves toward a carriage target stop position and a carriage target stop posture near the placement body; a manipulator provided with a hand camera at or near its tip; a relative-relationship recognition unit that, after stopping, recognizes the relative distance and relative posture of the hand camera with respect to the placement body; an object recognition unit that, after stopping, recognizes an object located within the field of view of the hand camera; and a reaching control unit that, when the recognized object includes the target object, controls the reaching operation of the manipulator toward the target object based on the relative distance and relative posture, and, when the recognized object does not include the target object, acquires position information of the recognized object, moves the hand position of the manipulator based on that position information, and then controls the reaching operation of the manipulator toward the target object based on the relative distance and relative posture.
  • The carriage target stop position and carriage target stop posture may be generated based on a reaching preparation distance and a reaching preparation posture.
  • The reaching preparation distance and reaching preparation posture may be generated based on the position of the target object and a preset relative relationship between the placement body and the hand.
  • With this configuration, a preparation distance and preparation posture optimal for reaching are generated from the position of the target object, so stable reaching, gripping, and the like can be realized.
  • Recognition of the target object by the hand camera may be performed at the reaching preparation distance and in the reaching preparation posture.
  • A self-position estimation unit that performs self-position estimation processing using environment recognition means may be provided, with the moving means controlled based on the result of the self-position estimation processing.
  • The environment recognition means may be a lidar.
  • The hand camera may be a monocular camera.
  • A gripping control unit that performs gripping control of the target object after the reaching operation control may further be provided.
  • With this configuration, the target object can be gripped and can therefore be transported.
  • The gripping control unit may further include a gripping position determination unit that determines the gripping position of the target object based on an image acquired by the hand camera.
  • With this configuration, the gripping position of the target object can be determined appropriately, reducing the possibility of gripping failure.
  • The gripping position determination unit may further include a gripping position estimation unit containing a trained model machine-learned to estimate the gripping position of the target object based on an image acquired by the hand camera.
  • With this configuration, the gripping position of the target object can be determined adaptively through machine learning, reducing the possibility of gripping failure.
  • The movement of the hand position of the manipulator may be performed parallel to the placement body.
  • With this configuration, the relative relationship between the hand and the placement body does not change, so the hand can be moved in a manner suited to the subsequent reaching operation.
  • A carriage movement control unit that moves the carriage unit in order to move the hand position of the manipulator may be provided.
  • The relative-relationship recognition unit may further include a marker recognition unit that recognizes the relative distance and relative posture between the hand camera and the placement body based on image information including a first marker provided on the target object or the placement body.
  • A plurality of the first markers may be arranged on the target object or the placement body.
  • The relative distance and relative posture may be recognized based on information about the plurality of first markers and on constraint conditions based on the shape of the placement body.
  • With this configuration, when markers are provided on the front surface of a shelf, for example, adding the constraint that the markers lie on a plane perpendicular to the floor makes recognition of the relative distance and relative posture between the hand camera and the placement body more accurate.
  • The image information including the first marker may be captured by the hand camera.
  • The image information including the plurality of first markers may be obtained by pointing the hand camera obliquely at the plurality of first markers so as to keep them all within its field of view.
  • The mobile manipulator may further include a camera arranged on the manipulator so as to have a wider field of view than the hand camera, and the image information including the first marker may be captured by that camera.
  • With this configuration, a plurality of first markers can easily be imaged within a wide field of view, enabling rapid imaging.
  • The mobile manipulator may further include a head, and the camera may be provided on the head.
  • With this configuration, a plurality of first markers can easily be imaged using a camera that can image a wide field of view from the head.
  • The carriage unit may be an omnidirectional carriage.
  • The manipulator may be an articulated arm provided with one or more force sensors at its tip, at its joints, or at its base, and the reaching operation control unit may further include a reaching stop control unit that stops the reaching operation when a force sensor detects a value at or above a threshold.
  • The manipulator may include an articulated arm and an end effector provided at the tip of the articulated arm; the end effector may include a force sensor and/or a contact sensor on its contact surface with the object, and the reaching operation control unit may include an effector reaching stop control unit that stops the reaching operation when the force sensor and/or the contact sensor detects a value at or above a threshold.
  • The manipulator may include an arm and a vertical movement mechanism provided at the base of the arm.
  • With this configuration, the manipulator can be moved vertically as well, which makes parallel movement easier and, as a result, makes the reaching operation toward the target object easier.
  • Object recognition in the object recognition unit may be performed by imaging two-dimensional code information provided on the object.
  • The reaching operation control unit may further include a transmission unit that transmits identification information of the recognized object to a predetermined data server via a network, and a reception unit that receives position information corresponding to the identification information from the data server.
  • The present invention can also be conceived of as a method. That is, the method according to the present invention is a method for controlling a mobile manipulator that moves to the vicinity of a placement body and reaches for a target object on the placement body, the mobile manipulator comprising a carriage unit provided with moving means that moves toward a carriage target stop position and a carriage target stop posture near the placement body, and a manipulator provided with a hand camera at or near its tip. The control method comprises: a relative-relationship recognition step of recognizing, after stopping, the relative distance and relative posture of the hand camera with respect to the placement body; an object recognition step of recognizing, after stopping, an object located within the field of view of the hand camera; and a reaching control step of, when the recognized object includes the target object, controlling the reaching operation of the manipulator toward the target object based on the relative distance and relative posture, and, when the recognized object does not include the target object, acquiring position information of the recognized object, moving the hand position of the manipulator based on that position information, and then controlling the reaching operation of the manipulator toward the target object based on the relative distance and relative posture.
  • The present invention can also be conceived of as a program. That is, the program according to the present invention is a control program for a mobile manipulator that moves to the vicinity of a placement body and reaches for a target object on the placement body, the mobile manipulator comprising a carriage unit provided with moving means that moves toward a carriage target stop position and a carriage target stop posture near the placement body, and a manipulator provided with a hand camera at or near its tip. The control program comprises: a relative-relationship recognition step of recognizing, after stopping, the relative distance and relative posture of the hand camera with respect to the placement body; an object recognition step of recognizing, after stopping, an object located within the field of view of the hand camera; and a reaching control step of, when the recognized object includes the target object, controlling the reaching operation of the manipulator toward the target object based on the relative distance and relative posture, and, when the recognized object does not include the target object, acquiring position information of the recognized object, moving the hand position of the manipulator based on that position information, and then controlling the reaching operation of the manipulator toward the target object based on the relative distance and relative posture.
  • According to the present invention, it is possible to provide a mobile manipulator and the like that can reliably accomplish subsequent operations such as reaching even if an error occurs in the stop position and posture after movement.
  • FIG. 1 is an overall configuration diagram of the system.
  • FIG. 2 is an external perspective view of the mobile manipulator.
  • FIG. 3 is a functional block diagram of a mobile manipulator that moves autonomously.
  • FIG. 4 is a functional block diagram of the mobile manipulator for control of the arm.
  • FIG. 5 is an explanatory diagram regarding the layout in the factory.
  • FIG. 6 is a general flowchart regarding the operation of the mobile manipulator.
  • FIG. 7 is a detailed flowchart of the target setting process.
  • FIG. 8 is an explanatory diagram regarding the target position and the target posture of the hand camera.
  • FIG. 9 is a detailed flowchart of the movement process.
  • FIG. 10 is an explanatory diagram showing the state of the mobile manipulator when it is determined that the carriage has reached its target position and posture.
  • FIG. 11 is a detailed flowchart of the picking process.
  • FIG. 12 is a perspective view of the mobile manipulator and the shelf after the movement process.
  • FIG. 13 is a conceptual diagram relating to the process of calculating the relative distance and posture of the hand camera with respect to the shelf.
  • FIG. 14 is an explanatory diagram showing the state of the mobile manipulator after the hand camera control process.
  • FIG. 15 is an explanatory diagram showing a state after moving the hand camera to the target position and posture.
  • FIG. 16 is a perspective view of a mobile manipulator that performs a translation operation of the hand camera.
  • FIG. 1 is an overall configuration diagram of the system 500.
  • The system 500 is configured by connecting the mobile manipulator 100, the data server 200, and the client device 300 to one another via a LAN in the factory.
  • The present invention is not limited to such an example; a plurality of each device may be provided.
  • The data server 200 consists of an information processing device, stores various information described later, and provides it in response to requests from the mobile manipulator 100 or the client device 300. For example, it manages the object transport commands and setting information, the book IDs, the book position information, and the like described later, and provides the corresponding information in response to a request from the mobile manipulator 100 or the like.
  • The data server 200 includes a control unit including a CPU or the like for executing various programs, a storage unit including ROM, RAM, or flash memory for storing the programs to be executed and various data, a communication unit for communicating with external devices, an input unit for processing input from a keyboard, mouse, and the like, and a display unit for displaying various images, all connected to one another via a bus.
  • The client device 300 consists of an information processing device and, in cooperation with the data server 200, provides management information about the system 500 to the system administrator and the like, and allows the user to input various settings of the mobile manipulator 100, as described later.
  • The client device 300 likewise includes a control unit including a CPU or the like for executing various programs, a storage unit including ROM, RAM, or flash memory for storing the programs to be executed and various data, a communication unit for communicating with external devices, an input unit for processing input from a keyboard, mouse, and the like, and a display unit for displaying various images, all connected to one another via a bus.
  • FIG. 2 is an external perspective view of the mobile manipulator 100.
  • The mobile manipulator 100 of the present embodiment has a roughly humanoid shape and is composed of a robot main body 10, a mobile carriage 20 that supports the robot main body 10, a head 30 provided at the upper end of the robot main body 10, and an arm 40 extending from the front surface of the robot main body 10.
  • The mobile carriage 20 that supports the robot main body 10 is an omnidirectional carriage with the function of moving over the surface (floor) on which the mobile manipulator 100 is placed.
  • An omnidirectional carriage is a carriage provided with, for example, a plurality of omni wheels as drive wheels and configured to be able to move in any direction.
  • The omnidirectional carriage may also be referred to as an omnidirectional cart, an omnidirectional mobile cart, or the like.
  • The omnidirectional carriage can move in any direction through 360 degrees and can maneuver freely even in narrow passages and the like.
  • The mobile carriage 20 of the present embodiment is composed of three drive wheels 22 and moving mechanism driving means 2 that drive the drive wheels 22 via belts or the like, housed inside the externally visible skirt 21.
  • The moving mechanism driving means 2 of the present embodiment consist of one or more motors; hereinafter, the moving mechanism driving means 2 are also referred to as the motor 2.
  • The head 30 is provided at the upper end of the robot main body 10 as the part corresponding to a human head (the portion above the neck) in the roughly humanoid mobile manipulator 100.
  • The head 30 is configured so that its front surface can be recognized as a face when viewed by a person.
  • The head 30 is also configured so that the direction its front faces can be controlled.
  • The mobile manipulator 100 of the present embodiment includes one or more head driving means 3 (actuators 3) and is configured to control the direction of the front surface of the head by moving the head via the head driving means 3.
  • Hereinafter, the front surface of the head 30 is referred to as the face 31.
  • A head camera 35 is provided on the upper part of the face 31. Owing to its placement, the head camera 35 can capture images over a wider field of view than the hand camera 42 described later.
  • The mobile manipulator 100 of the present embodiment has a vertical rotation axis that rotates the head 30 horizontally with respect to the floor, and includes head driving means 3 (servomotor 3a) that enable the head 30 to be rotationally driven left and right. As a result, the mobile manipulator 100 can rotate only the head 30 left and right about the vertical axis.
  • The arrangement of the servomotor 3a is not limited to the illustrated arrangement and may be changed as appropriate as long as the direction of the face 31 can be moved left and right.
  • For example, the servomotor 3a may be provided so as to rotationally drive the robot main body 10 left and right with respect to the mobile carriage 20. In this case, the servomotor 3a changes the direction of the face 31 by rotationally driving the head 30 together with the robot main body 10 left and right about the vertical axis.
  • The mobile manipulator 100 may also include head driving means 3 (servomotor 3b) with a horizontal rotation axis that drives the head 30 vertically with respect to the floor, enabling the head 30 to look up and down.
  • In that case, the mobile manipulator 100 can also move the direction of the face 31 vertically.
  • Environment recognition means 32 is provided in the protrusion at the front upper edge of the mobile carriage 20.
  • In the present embodiment, the environment recognition means 32 is a lidar (LiDAR: Light Detection and Ranging) unit that measures the distance and direction to objects using laser light.
  • The environment recognition means 32 is not limited to a lidar unit; various other sensors and the like can be adopted.
  • For example, an image sensor may be provided to recognize the environment through image processing, or a radar, a microphone array, or the like may be used.
  • The mobile manipulator 100 is provided with an arm 40 on the front surface of the robot main body 10.
  • The arm 40 constitutes a manipulator mechanism and is provided, at the tip corresponding to its free end, with a gripping mechanism 41 for gripping the article to be transported; in the present embodiment, the gripping mechanism 41 is a gripper.
  • A monocular hand camera 42 is provided near the base of the gripping mechanism 41. With this hand camera 42, an object to be gripped that faces the gripping mechanism 41 can be imaged from the front.
  • The shape, number, and arrangement of the arms 40 are not limited to the illustrated embodiment and may be changed as appropriate according to the purpose.
  • For example, the mobile manipulator 100 may be configured as a dual-arm robot having arms 40 on both side surfaces of the robot main body 10.
  • The mobile manipulator 100 also includes other components required to control its operation, for example other actuators for driving the arm 40 and the like, and a power source such as a battery.
  • FIG. 3 is a functional block diagram of the mobile manipulator 100 that moves autonomously.
  • The environment information obtained from the environment recognition means 32, which consist of a lidar, is acquired by the environment information acquisition unit 101 and provided to the mobile carriage control unit 102.
  • The mobile carriage 20 is provided with encoders (not shown) that detect the rotation of each drive wheel 22, and the values from these encoders are read out by the encoder information acquisition unit 106 and provided to the mobile carriage control unit 102.
  • The mobile carriage control unit 102 performs the various processes described later based on the environment information and the encoder information, generates control information for the mobile carriage 20, and controls the moving mechanism driving means 2.
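To make the encoder path concrete, here is a minimal sketch of dead-reckoning odometry for a three-omni-wheel base of the kind described above. The wheel geometry, wheel radius, and function names are illustrative assumptions; the publication does not specify them.

```python
import numpy as np

# Assumed geometry (not from the publication): three omni wheels at 120 degree
# spacing, distance L from the center, wheel radius r.
L, r = 0.20, 0.05
ANGLES = np.deg2rad([90.0, 210.0, 330.0])

# Each row maps a body twist (vx, vy, omega) to one wheel's surface speed.
J = np.array([[-np.sin(a), np.cos(a), L] for a in ANGLES])

def integrate_odometry(pose, wheel_dphi):
    """Dead-reckon one step: pose = (x, y, theta), wheel_dphi = encoder increments [rad]."""
    x, y, th = pose
    # Body-frame displacement from wheel increments (solve J @ d = r * dphi).
    dx_b, dy_b, dth = np.linalg.lstsq(J, r * np.asarray(wheel_dphi), rcond=None)[0]
    # Rotate the body-frame displacement into the world frame and accumulate.
    c, s = np.cos(th), np.sin(th)
    return (x + c * dx_b - s * dy_b, y + s * dx_b + c * dy_b, th + dth)

pose = (0.0, 0.0, 0.0)
pose = integrate_odometry(pose, [0.02, -0.01, -0.01])  # example encoder deltas
```

In practice such dead reckoning drifts, which is exactly why the description fuses it with lidar-based self-position estimation.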
  • FIG. 4 is a functional block diagram of the mobile manipulator 100 for control of the arm 40.
  • The manipulator control unit 113 controls one or more actuators 118, including servomotors and the like, that drive the arm 40.
  • The arm 40 is provided with sensors 116, including encoders (not shown), torque sensors, and the like, that sense the movement of each link; information from the sensors 116 is acquired by the sensor information acquisition unit 117 and provided to the manipulator control unit 113.
  • The hand camera information acquisition unit 110 acquires image information from the hand camera 42 and provides it to the manipulator control unit 113. Similarly, the head camera information acquisition unit 111 acquires image information from the head camera 35 and provides it to the manipulator control unit 113. That is, the manipulator control unit 113 can control the arm 40 based on image information acquired from each camera (35, 42).
  • FIG. 5 is an explanatory diagram of the layout in the factory 800 in which the mobile manipulator 100 operates. First, an outline of the operation according to the present embodiment will be described with reference to the figure.
  • In the initial state, the mobile manipulator 100 is at the lower-left position in the figure.
  • When the mobile manipulator 100 receives a transport command for the target object, in the present embodiment the book 601a, it starts moving and stops in front of the shelf 600 arranged at the upper right of the figure.
  • The transport command for the book 601a is set via the client device 300 and stored in the data server 200.
  • There, the picking operation described later is performed to acquire the book 601a.
  • The mobile manipulator 100 holding the book 601a then moves to the front of the workbench 700 arranged at the lower right of the figure, stops, and releases the book 601a onto the workbench. This ends the series of operations.
  • Although a book is used as an example of the target object in the present embodiment, the present invention is not limited to such a configuration. For example, in a factory, the target object may be a material, a work tool, or the like used in manufacturing.
  • FIG. 6 is a general flowchart regarding the operation of the mobile manipulator 100.
  • When operation starts, the mobile manipulator 100 requests setting information from the data server 200 via the LAN in the factory 800 and acquires it (S1).
  • The setting information includes relative relationship information between the hand camera 42 and the shelf 600, and target object information.
  • The relative relationship information describes the relationship to be established between the hand camera 42 and the shelf 600 after moving to the front of the shelf 600 and before the reaching operation, that is, the relative distance and relative posture between the hand camera 42 and the shelf 600.
  • In the present embodiment, the relative distance is 200 mm and the relative posture is 180°.
  • A relative posture of 180° means a state in which the hand squarely faces the shelf 600.
  • The target object information includes information about the object to be gripped and transported by the mobile manipulator 100, that is, in the present embodiment, the book ID, which is the identification information of a book, and the book position information, which indicates the position of that book in the factory 800.
  • In the present embodiment, the setting information is acquired from the data server 200, but the present invention is not limited to such a configuration; for example, the mobile manipulator 100 may store the setting information in advance.
  • FIG. 7 is a detailed flowchart of the target setting process. As is clear from the figure, when the process starts, the book ID of the target book and the book position information are read from the setting information (S31).
  • Next, the target position and target posture of the hand camera 42 before the reaching operation are calculated based on the book position information (S32). That is, the position and posture of the hand camera 42 at which the relative distance from the target book is 200 mm and the relative posture is 180° are calculated.
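As a rough illustration of this S32 computation, the sketch below derives a hand-camera target pose from a book position, assuming the book position comes with the outward normal of the shelf front; the data layout and the helper name `hand_camera_target` are hypothetical, and the 200 mm standoff and face-on posture come from the setting information above.

```python
import numpy as np

def hand_camera_target(book_xy, shelf_normal_xy, standoff=0.200):
    """Return (x, y, yaw): the hand camera 200 mm out along the shelf normal
    from the book, facing back toward it (relative posture 180 degrees)."""
    n = np.asarray(shelf_normal_xy, float)
    n /= np.linalg.norm(n)                 # unit outward normal of the shelf front
    cam_xy = np.asarray(book_xy, float) + standoff * n
    yaw = np.arctan2(-n[1], -n[0])         # look opposite the normal, i.e. at the book
    return cam_xy[0], cam_xy[1], yaw

# Example: book at (5.0, 3.0) on a shelf whose front faces the -y direction.
print(hand_camera_target((5.0, 3.0), (0.0, -1.0)))
```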
  • A predetermined reference posture is preset in the mobile manipulator 100.
  • In the present embodiment, this reference posture is a predetermined posture in which the arm 40 is folded to some extent in consideration of safety during movement.
  • FIG. 8 is an explanatory diagram regarding the target position and the target posture of the hand camera 42.
  • The mobile manipulator 100 in its initial state is drawn at the lower left of the figure, and the shelf 600 with the target object, the book 601a, arranged on it is drawn at the upper right. In the figure, the initial position and initial posture of the hand camera 42 are indicated by two orthogonal arrows C1, and the initial position and initial posture of the mobile carriage 20 are likewise indicated by two orthogonal arrows C2.
  • The calculated target position and target posture of the hand camera 42 are indicated by two orthogonal arrows C3 in front of the shelf 600, and the target position and target posture of the mobile carriage 20 are indicated by two orthogonal arrows C4.
  • The mobile manipulator 100 autonomously moves toward the target position and target posture of the mobile carriage 20.
  • FIG. 9 is a detailed flowchart of the movement process (S4).
  • When the movement process starts, the environment information acquisition unit 101 acquires environment information using the environment recognition means 32 and provides it to the mobile carriage control unit 102 (S41).
  • Upon acquiring the environment information, the mobile carriage control unit 102 performs recognition processing of the local environment around itself based on that information, and performs self-position estimation processing on the global environment map generated in advance for the factory 800 (S43). Various methods known to those skilled in the art can be applied as this self-position estimation processing.
  • Next, the mobile carriage control unit 102 performs path planning, that is, a process of planning a movement route (S44).
  • In the present embodiment, a global map and a local map around the mobile manipulator 100 are generated from the environment information, and a route from the estimated self-position to the goal is generated. More specifically, a route is planned such that the cost, defined by the distance from walls and the like, is at or below a predetermined threshold and the arrival time at the goal is minimized.
  • The route planning method is not limited to this; any method known to those skilled in the art may be used.
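For concreteness, one conventional choice consistent with the description above is grid-based A* search with a cost cutoff. The sketch below is such a planner under assumed conventions (4-connected grid, unit step cost as a proxy for travel time, Manhattan heuristic), not the publication's algorithm.

```python
import heapq
import itertools

def plan_path(grid_cost, start, goal, cost_max=50):
    """A* on a 2D cost grid. Cells whose cost exceeds cost_max (e.g. too
    close to a wall) are treated as untraversable, mirroring the
    distance-from-wall cost threshold described above."""
    rows, cols = len(grid_cost), len(grid_cost[0])
    tie = itertools.count()  # tie-breaker so the heap never compares nodes

    def h(p):  # admissible Manhattan heuristic for a 4-connected grid
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    frontier = [(h(start), next(tie), 0, start, None)]
    parent, best_g = {}, {start: 0}
    while frontier:
        _, _, g, cur, prev = heapq.heappop(frontier)
        if cur in parent:
            continue
        parent[cur] = prev
        if cur == goal:  # reconstruct the route back to the start
            path = [cur]
            while parent[path[-1]] is not None:
                path.append(parent[path[-1]])
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if not (0 <= nxt[0] < rows and 0 <= nxt[1] < cols):
                continue
            if grid_cost[nxt[0]][nxt[1]] > cost_max:
                continue  # violates the cost threshold
            ng = g + 1
            if ng < best_g.get(nxt, float("inf")):
                best_g[nxt] = ng
                heapq.heappush(frontier, (ng + h(nxt), next(tie), ng, nxt, cur))
    return None  # no route satisfying the constraint
```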
  • Next, the mobile carriage control unit 102 performs the movement control process (S46), that is, the process of controlling the moving mechanism driving means 2 so as to move along the route planned in the path planning process (S44).
  • It is then determined whether the mobile carriage 20 has reached its target position and posture (S47). That is, self-position estimation is performed, and it is determined whether the mobile carriage 20 matches, or has come sufficiently close to, its target position and posture. If not (S47 NO), the series of processes (S41 to S47) is repeated. When it is determined that the mobile carriage 20 matches or has come sufficiently close to its target position and posture (S47 YES), the route movement process ends and the carriage stops.
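Schematically, steps S41 to S47 form a sense-estimate-plan-act loop. In the sketch below, every device object (`lidar`, `base`, `localizer`, `planner`) is a placeholder for the processing blocks described above, and the arrival tolerances are assumed values.

```python
import math

GOAL_TOL_POS = 0.05               # "sufficiently close" in meters (assumed)
GOAL_TOL_YAW = math.radians(5.0)  # "sufficiently close" in yaw (assumed)

def move_to(target_pose, lidar, base, localizer, planner):
    """Drive the carriage until it is sufficiently close to target_pose = (x, y, yaw)."""
    while True:
        scan = lidar.read()                     # S41: acquire environment information
        pose = localizer.estimate(scan)         # S43: self-position on the global map
        path = planner.plan(pose, target_pose)  # S44: path planning
        base.follow(path)                       # S46: drive the moving mechanism
        dx = math.hypot(target_pose[0] - pose[0], target_pose[1] - pose[1])
        dyaw = abs((target_pose[2] - pose[2] + math.pi) % (2 * math.pi) - math.pi)
        if dx < GOAL_TOL_POS and dyaw < GOAL_TOL_YAW:  # S47: arrival check
            base.stop()
            return pose
```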
  • FIG. 10 is an explanatory diagram showing the state of the mobile manipulator 100 when it is determined that the mobile carriage 20 has reached its target position and posture.
  • Even when the mobile manipulator 100 determines, based on self-position estimation, that the mobile carriage 20 has reached its target position and posture, the actual position and posture of the mobile carriage 20 and of the hand camera 42 deviate from their respective targets.
  • That is, the actual position and posture of the hand camera 42, indicated by C1 in the figure, deviate from the target position and posture of the hand camera 42 indicated by C3. Likewise, the actual position and posture of the mobile carriage 20, indicated by C2, deviate from the target position and posture of the mobile carriage 20 indicated by C4. If the picking process or the like were performed on the book 601a in this state, it could fail.
  • FIG. 11 is a detailed flowchart of the picking process. As is clear from the figure, when the process starts, the relative relationship between the mobile manipulator 100 and the shelf 600 is first recognized (S61).
  • FIG. 12 is a perspective view of the mobile manipulator 100 and the shelf 600 after the movement process.
  • AR markers 603 are arranged at predetermined intervals on the surface of the shelf 600 that faces the mobile manipulator 100.
  • In the present embodiment, each AR marker 603 is a piece of two-dimensional code information.
  • In addition, a QR code (registered trademark) 604 consisting of a two-dimensional dot pattern is arranged at the lower part of the spine cover 601x of each book arranged on the shelf.
  • The mobile manipulator 100 uses the head camera 35 to capture an image with a plurality of AR markers 603 in its field of view. The relative distance and posture between the head camera 35 and the shelf 600 can then be acquired from the sizes and angles of the plurality of AR markers 603 in the field of view.
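One plausible implementation of this marker-based measurement is OpenCV's ArUco module, sketched below. The dictionary choice, marker side length, and camera intrinsics are assumptions, `estimatePoseSingleMarkers` is available in opencv-contrib builds, and the publication does not name a library.

```python
import cv2
import numpy as np

MARKER_SIDE = 0.05  # marker edge length in meters (assumed)
K = np.array([[600.0, 0.0, 320.0],  # assumed head-camera intrinsics
              [0.0, 600.0, 240.0],
              [0.0, 0.0, 1.0]])
DIST = np.zeros(5)                  # assumed: negligible lens distortion

def detect_shelf_markers(image_bgr):
    """Detect AR markers and return per-marker (id, rvec, tvec) poses in the camera frame."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
    corners, ids, _ = cv2.aruco.detectMarkers(gray, dictionary)
    if ids is None:
        return []
    rvecs, tvecs, _ = cv2.aruco.estimatePoseSingleMarkers(corners, MARKER_SIDE, K, DIST)
    return [(int(i), r.ravel(), t.ravel()) for i, r, t in zip(ids.ravel(), rvecs, tvecs)]
```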
  • FIG. 13 is a conceptual diagram of the process of calculating the relative distance and posture of the hand camera 42 with respect to the shelf 600 based on the plurality of AR markers 603.
  • Using a plurality of AR markers improves the calculation accuracy compared with using a single AR marker.
  • In the present embodiment, the manipulator control unit 113 fits the equation of a plane perpendicular to the floor based on the positions and postures of the six AR markers 603 acquired by the head camera 35.
  • The plane equation is set in this way based on the assumption that the front surface of the shelf 600 is a plane perpendicular to the floor.
  • Next, the relative distance and relative posture of the hand camera 42 with respect to the fitted plane are calculated geometrically. In this way, the actual positional relationship of the hand camera 42 with respect to the shelf 600 can be grasped.
  • The calculation of the relative relationship is not limited to the method of the present embodiment; various known methods can be adopted.
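A hedged sketch of the constrained fit: under the floor-perpendicularity assumption, the shelf front reduces to a line a·x + b·y = c in the floor plane, fitted to the markers' horizontal positions, from which the camera's distance and facing error follow geometrically. The frames and names below are illustrative, not the publication's notation.

```python
import numpy as np

def fit_vertical_plane(marker_points_xyz):
    """Fit a floor-perpendicular plane, i.e. a line a*x + b*y = c in the
    floor plane, to marker positions given in a world frame with z up."""
    P = np.asarray(marker_points_xyz, float)[:, :2]   # z drops out under the constraint
    centroid = P.mean(axis=0)
    # Principal direction of the centered points = direction along the shelf front.
    _, _, vt = np.linalg.svd(P - centroid)
    direction = vt[0]
    normal = np.array([-direction[1], direction[0]])  # unit in-floor normal of the front
    c = normal @ centroid
    return normal, c  # plane: normal . (x, y) = c

def camera_relative_to_plane(cam_xy, cam_yaw, normal, c):
    """Signed distance to the shelf front and yaw error from facing it squarely."""
    dist = normal @ np.asarray(cam_xy, float) - c
    facing = np.arctan2(-normal[1], -normal[0])       # yaw that looks into the shelf
    yaw_err = (cam_yaw - facing + np.pi) % (2 * np.pi) - np.pi
    return dist, yaw_err
```

Averaging over six markers, as the description does, is what makes this least-squares fit meaningfully more accurate than a single-marker pose.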
  • Next, the manipulator control unit 113 controls the arm 40, without moving the mobile carriage 20, so as to bring the hand camera 42 to the relative distance and relative posture designated by the setting information, that is, so that the hand camera 42 squarely faces the shelf 600 at a distance of 200 mm (S62).
  • Next, the book located at or near the front of the hand camera 42 within its field of view is recognized; that is, the QR code 604 arranged on the spine of the book placed on the shelf board of the shelf 600 is recognized and the book ID is detected (S63).
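For the book-ID detection in S63, OpenCV's built-in QRCodeDetector is one workable option, as sketched below; treating the decoded string directly as the book ID is an illustrative assumption.

```python
import cv2

detector = cv2.QRCodeDetector()

def read_book_id(image_bgr):
    """Decode the QR code in the hand-camera image; return the book ID or None."""
    data, points, _ = detector.detectAndDecode(image_bgr)
    return data if data else None  # an empty string means no code was decoded
```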
  • By sending the result of this recognition process as an inquiry to the data server 200, the manipulator control unit 113 receives from the data server 200 the position information of the books currently in the field of view (S65).
  • Next, the manipulator control unit 113 determines whether the book located in front of or near the hand camera 42 is the target book (S66). If, as a result of this determination, the book located in front of or near the hand camera 42 is the target book 601a (S66 YES), the gripping position specification process (S74) described later is performed.
  • Otherwise, based on the position information acquired from the data server 200, the hand is translated along the front of the shelf 600 toward the target book 601a (S68). After this predetermined translation, the book in the field of view of the hand camera 42 is recognized again via its QR code 604 (S69), and the recognition result is again sent as an inquiry to the data server 200 (S70).
  • FIG. 14 is an explanatory diagram showing an example of the state of the mobile manipulator 100 after the hand camera control process (S62).
  • In this example, the book 601b, which is not the target book 601a, is located in front of or near the hand camera 42.
  • Since the mobile manipulator 100 determines through the object recognition process (S63) that the book in front of or near the hand camera 42 is not the target book 601a (S66 NO), the hand is translated toward the target book 601a (S68) based on the relative positional relationship, obtained through the inquiry to the data server 200 (S65), between the object in the field of view and the target book 601a. This translation process is repeated until the target object 601a comes to lie in front of or near the hand camera (S69 to S72 YES). That is, the position and posture of the hand camera 42 indicated by C1 are moved toward the target position and posture of the hand camera 42 indicated by C3.
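A minimal sketch of the displacement computed in one iteration of S68, assuming the data server returns shelf-frame coordinates (x along the front, z up) for both the recognized book and the target book; the per-step clamp is an illustrative safety choice, not from the publication.

```python
import numpy as np

MAX_STEP = 0.10  # clamp each translation to 10 cm per iteration (assumed)

def shelf_translation_step(seen_book_pos, target_book_pos):
    """Hand displacement along the shelf front (shelf frame: x along the
    front, y toward the shelf, z up). The distance to the shelf is unchanged."""
    delta = np.asarray(target_book_pos, float) - np.asarray(seen_book_pos, float)
    step = np.array([delta[0], 0.0, delta[2]])  # move only parallel to the front
    n = np.linalg.norm(step)
    return step if n <= MAX_STEP else step * (MAX_STEP / n)
```

Stepping, re-reading the QR code, and re-querying the server each iteration is what lets the loop converge even though the initial stop pose was off.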
  • FIG. 15 is an explanatory diagram showing the state after the hand camera 42 has been translated to its target position and posture.
  • As is clear from the figure, the position and posture of the hand camera 42 indicated by C1 now coincide with the target position and posture of the hand camera 42 indicated by C3.
  • FIG. 16 is a perspective view of an example of a mobile manipulator 100 that performs a translation operation of the hand camera 42.
  • The figure on the left shows the state before the translation operation (S68) of the hand camera 42, and the figure on the right shows the state after the translation operation (S68) of the hand camera 42.
  • In the state before the translation operation, the book 601b, which is not the target book 601a, is located in front of or near the hand camera 42.
  • When the mobile manipulator 100 determines that the book in front of or near the hand camera 42 is not the target book 601a, it acquires the position information of the object in the field of view from the data server 200 and starts translating toward the target book 601a (S68 to S72).
  • In the state after the translation operation, the target book 601a is located in front of or near the hand camera 42. In this state, by performing the reaching operation and the like described later (S74, S75), the target book 601a can be reliably gripped.
  • In the present embodiment, the translation of the hand or hand camera 42 was performed in a plane along the front surface of the shelf 600, but the present invention is not limited to such a configuration. For example, the hand may be moved while maintaining a constant distance and posture along a surface, including a curved surface or a stepped surface, corresponding to the shape of a placement body such as a shelf or storage unit.
  • In the gripping position specification process (S74), the contour of the object to be gripped is detected from the image obtained from the hand camera 42, and the gripping position is specified using a trained model that takes information on the detected contour as input.
  • This trained model is generated by machine learning based on training data consisting of object contours annotated by humans with the correct gripping positions.
  • Although the machine learning method of the present embodiment uses a neural network, in particular deep learning, the method is not limited to this; other known methods may be used.
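As an illustration of the kind of trained model described here, the sketch below defines a small network that regresses a grasp point from a resampled contour. The architecture, the fixed contour length, and the training setup are assumptions, not the publication's model.

```python
import torch
import torch.nn as nn

N_POINTS = 64  # contour resampled to a fixed number of (x, y) points (assumed)

class GraspPointRegressor(nn.Module):
    """Maps a flattened contour (N_POINTS x 2) to a normalized grasp point (u, v)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_POINTS * 2, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, 2), nn.Sigmoid(),  # grasp point in normalized image coords
        )

    def forward(self, contour):
        return self.net(contour.flatten(1))

model = GraspPointRegressor()
contour = torch.rand(1, N_POINTS, 2)   # stand-in for a detected, resampled contour
u, v = model(contour)[0].tolist()      # predicted grasp position in [0, 1]^2
```

Training such a regressor against human-annotated grasp points, as the description outlines, would use an ordinary supervised loss such as mean squared error.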
  • Next, the manipulator control unit 113 controls the reaching motion of extending or flexing each joint of the arm 40 toward the gripping position (S74). This reaching operation is performed until a certain reaction force is detected by the sensors 116, including the joint torque sensors of the arm 40, that is, until the arm 40 comes into contact with the book 601a.
  • The manipulator control unit 113 then controls the operation of gripping the target object 601a using the gripping mechanism 41, which is a gripper (S75). This gripping operation is performed until a certain reaction force is detected, for example by measuring the current value of the gripper.
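The two guarded motions S74 and S75 share one pattern: advance until a sensed reaction crosses a threshold. A schematic sketch follows, with all device interfaces and threshold values assumed for illustration only.

```python
REACH_TORQUE_NM = 0.5   # joint-torque level taken to mean "contact" (assumed)
GRIP_CURRENT_A = 1.2    # gripper motor current taken to mean "holding" (assumed)

def guarded_reach(arm, target_pose):
    """S74: step the hand toward the grip position until a joint torque spikes."""
    while max(abs(t) for t in arm.joint_torques()) < REACH_TORQUE_NM:
        if arm.step_towards(target_pose):   # hypothetical: True once pose is reached
            break

def guarded_grip(gripper):
    """S75: close the gripper until the motor current indicates a firm grip."""
    while gripper.motor_current() < GRIP_CURRENT_A:
        gripper.close_step()
```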
  • After the gripping process, the mobile manipulator 100 folds the arm 40 by bending each of its joints while gripping the book 601a (S77). This ends the picking process (S6).
  • The mobile manipulator 100 then performs the movement process again (S8). That is, the mobile manipulator 100 reads the target placement position 701 on the workbench 700 from the data server 200 and calculates the target position and posture of the hand camera 42. It then calculates the target stop position and posture of the mobile carriage 20 from the target position and posture of the hand camera 42, and moves toward them while holding the book 601a and estimating its own position. Since the details of this movement process based on self-position estimation are substantially the same as those shown in FIG. 9, their description is omitted.
  • When the mobile manipulator 100 reaches the target stop position and posture of the mobile carriage 20 near the workbench 700, it places the book 601a at the predetermined placement position 701 on the workbench 700 and releases it (S9). This completes the series of transport processes.
  • The present invention can be used at least in industries that manufacture autonomously moving bodies such as mobile manipulators.
  • 10 Robot main body, 2 Moving mechanism driving means, 20 Mobile carriage, 21 Skirt, 22 Drive wheel, 3 Head driving means, 30 Head, 31 Face, 32 Environment recognition means, 35 Head camera, 40 Arm, 41 Gripping mechanism, 42 Hand camera, 100 Mobile manipulator, 200 Data server, 300 Client device, 500 System, 800 Factory


Abstract

Provided is a mobile manipulator equipped with: a carriage unit provided with moving means that moves toward a carriage target stop position and a carriage target stop posture in the vicinity of a placement body; a manipulator provided with a hand camera at or near its tip; a relative-relationship recognition unit for recognizing, after stopping, the relative distance and relative posture of the hand camera with respect to the placement body; an object recognition unit for recognizing an object located in the field of view of the hand camera after stopping; and a reaching control unit. If the recognized object includes the target object, the reaching control unit controls the reaching operation of the manipulator toward the target object based on the relative distance and relative posture. If the recognized object does not include the target object, the reaching control unit acquires position information for the recognized object, moves the hand position of the manipulator based on that position information, and then controls the reaching operation of the manipulator toward the target object based on the relative distance and relative posture.

Description

Mobile manipulator, control method therefor, and program

The present invention relates to a mobile body that is equipped with a manipulator and moves autonomously, for example a mobile manipulator.

In recent years, mobile manipulators that move autonomously for the purpose of transporting goods in factories, warehouses, and the like have been attracting attention. As an example of a mobile robot provided with a manipulator, Patent Document 1 discloses a mobile robot that transports an article acquired from an article storage device.

Japanese Patent No. 4007204

To transport an object to a predetermined destination, the mobile manipulator must, for example, move to the vicinity of a shelf or similar structure on which the target object is placed while performing self-position estimation and the like, acquire the target object by gripping it, move to the next destination while holding it, and then place the target object.

However, in a mobile manipulator that moves autonomously while performing self-position estimation in a relatively large space such as a factory or warehouse, errors in self-position estimation and control errors in the carriage used for movement can occur. Because of these errors, a discrepancy arises between the target stop position and posture of the mobile manipulator near the shelf or the like and its actual stop position and posture, and as a result subsequent operations such as reaching and gripping may fail.

The present invention has been made to solve the above technical problem, and its object is to provide a mobile manipulator and the like that can reliably accomplish subsequent operations such as reaching even if an error occurs in the stop position and posture after movement.

The above technical problem can be solved by a mobile manipulator or the like having the following configuration.

That is, the mobile manipulator according to the present invention moves to the vicinity of a placement body and reaches for a target object on the placement body. It comprises: a carriage unit provided with moving means that moves toward a carriage target stop position and a carriage target stop posture near the placement body; a manipulator provided with a hand camera at or near its tip; a relative-relationship recognition unit that, after stopping, recognizes the relative distance and relative posture of the hand camera with respect to the placement body; an object recognition unit that, after stopping, recognizes an object located within the field of view of the hand camera; and a reaching control unit that, when the recognized object includes the target object, controls the reaching operation of the manipulator toward the target object based on the relative distance and relative posture, and, when the recognized object does not include the target object, acquires position information of the recognized object, moves the hand position of the manipulator based on that position information, and then controls the reaching operation of the manipulator toward the target object based on the relative distance and relative posture.

With this configuration, it is possible to provide a mobile manipulator and the like that can reliably accomplish subsequent operations such as reaching even if an error occurs in the stop position and posture after movement.
A reaching preparation operation control unit may further be provided that controls the manipulator so as to assume a reaching preparation distance and a reaching preparation posture based on the relative distance and relative posture between the hand camera and the placement body.

With this configuration, various subsequent operations can always be performed starting from the reaching preparation distance and reaching preparation posture.

The carriage target stop position and carriage target stop posture may be generated based on the reaching preparation distance and reaching preparation posture.

With this configuration, the stop position and posture of the carriage are generated by working backward from a position and posture suitable for reaching, so stable reaching, gripping, and the like can be realized after movement.

The reaching preparation distance and reaching preparation posture may be generated based on the position of the target object and a preset relative relationship between the placement body and the hand.

With this configuration, a preparation distance and preparation posture optimal for reaching are generated from the position of the target object, so stable reaching, gripping, and the like can be realized.

Recognition of the target object by the hand camera may be performed at the reaching preparation distance and in the reaching preparation posture.

With this configuration, the target object can always be recognized under a fixed positional relationship, which improves object recognition accuracy.

A self-position estimation unit that performs self-position estimation processing using environment recognition means may be provided, with the moving means controlled based on the result of the self-position estimation processing.

With this configuration, it is possible to provide a mobile manipulator and the like that can reliably accomplish subsequent operations such as reaching even if some error arises in the stop position or posture due to self-position estimation errors and the like.

The environment recognition means may be a lidar.

With this configuration, self-position estimation can be performed with high accuracy more simply than by incorporating a camera system or the like.

The hand camera may be a monocular camera.

With this configuration, the hand camera can be realized at low cost.
A gripping control unit that performs gripping control of the target object after the reaching operation control may further be provided.

With this configuration, the target object can be gripped and can therefore be transported.

The gripping control unit may further include a gripping position determination unit that determines the gripping position of the target object based on an image acquired by the hand camera.

With this configuration, the gripping position of the target object can be determined appropriately, reducing the possibility of gripping failure.

The gripping position determination unit may further include a gripping position estimation unit containing a trained model machine-learned to estimate the gripping position of the target object based on an image acquired by the hand camera.

With this configuration, the gripping position of the target object can be determined adaptively through machine learning, reducing the possibility of gripping failure.

The movement of the hand position of the manipulator may be performed parallel to the placement body.

With this configuration, the relative relationship between the hand and the placement body does not change, so the hand can be moved in a manner suited to the subsequent reaching operation.

A carriage movement control unit that moves the carriage unit in order to move the hand position of the manipulator may be provided.

With this configuration, reaching and the like can be realized even when the target object is outside the manipulator's reach.

The relative-relationship recognition unit may further include a marker recognition unit that recognizes the relative distance and relative posture between the hand camera and the placement body based on image information including a first marker provided on the target object or the placement body.

With this configuration, the relative distance and relative posture between the hand camera and the placement body can be recognized with high accuracy using markers.

A plurality of the first markers may be arranged on the target object or the placement body.

With this configuration, the relative relationship can be recognized with even higher accuracy using a plurality of markers.

The relative distance and relative posture may be recognized based on information about the plurality of first markers and on constraint conditions based on the shape of the placement body.

With this configuration, when markers are provided on the front surface of a shelf, for example, adding the constraint that the markers lie on a plane perpendicular to the floor makes recognition of the relative distance and relative posture between the hand camera and the placement body more accurate.

The image information including the first marker may be captured by the hand camera.

With this configuration, no separate camera for imaging the first marker is needed, allowing costs to be reduced.

The image information including the plurality of first markers may be obtained by pointing the hand camera obliquely at the plurality of first markers so as to keep them all within its field of view.

With this configuration, by pointing the hand camera obliquely at the plurality of first markers, imaging can be performed efficiently even when a hand camera with a narrow field of view is used.
The mobile manipulator may further include a camera arranged on the manipulator so as to have a wider field of view than the hand camera, and the image information including the first marker may be captured by that camera.

With this configuration, a plurality of first markers can easily be imaged within a wide field of view, enabling rapid imaging.

The mobile manipulator may further include a head, and the camera may be provided on the head.

With this configuration, a plurality of first markers can easily be imaged using a camera that can image a wide field of view from the head.

The carriage unit may be an omnidirectional carriage.

With this configuration, movement in narrow places, parallel movement along shelves, and the like become easy.

The manipulator may be an articulated arm provided with one or more force sensors at its tip, at its joints, or at its base, and the reaching operation control unit may further include a reaching stop control unit that stops the reaching operation when a force sensor detects a value at or above a threshold.

With this configuration, the reaching operation is performed until the manipulator comes into contact with the target object, so subsequent gripping and the like of the target object can be realized reliably.

The manipulator may include an articulated arm and an end effector provided at the tip of the articulated arm; the end effector may include a force sensor and/or a contact sensor on its contact surface with the object, and the reaching operation control unit may include an effector reaching stop control unit that stops the reaching operation when the force sensor and/or the contact sensor detects a value at or above a threshold.

With this configuration, the motion is performed until contact with the target object is made, so gripping and the like of the target object can be realized reliably.

The manipulator may include an arm and a vertical movement mechanism provided at the base of the arm.

With this configuration, the manipulator can be moved vertically as well, which makes parallel movement easier and, as a result, makes the reaching operation toward the target object easier.

Object recognition in the object recognition unit may be performed by imaging two-dimensional code information provided on the object.

With this configuration, objects can be recognized reliably based on the two-dimensional code.

The reaching operation control unit may further include a transmission unit that transmits identification information of the recognized object to a predetermined data server via a network, and a reception unit that receives position information corresponding to the identification information from the data server.
 このような構成によれば、物体の位置情報を一元的に管理しているデータサーバを利用して正確に目標物体までの制御情報を生成することができる。 According to such a configuration, it is possible to accurately generate control information up to the target object by using a data server that centrally manages the position information of the object.
 The present invention can also be conceived as a method. That is, the method according to the present invention is a control method for a mobile manipulator that moves to the vicinity of a mounting body and reaches for a target object on the mounting body, the mobile manipulator including a trolley unit provided with moving means that moves toward a trolley target stop position and a trolley target stop posture near the mounting body, and a manipulator provided with a hand camera at or near its tip, the control method including: a relative relationship recognition step of recognizing, after stopping, the relative distance and relative posture of the hand camera with respect to the mounting body; an object recognition step of recognizing, after stopping, an object placed within the field of view of the hand camera; and a reaching control step of, when the recognized object includes the target object, performing reaching operation control of the manipulator with respect to the target object based on the relative distance and the relative posture, and, when the recognized object does not include the target object, acquiring position information of the recognized object, moving the hand position of the manipulator based on the position information, and then performing reaching operation control of the manipulator with respect to the target object based on the relative distance and the relative posture.
 Furthermore, the present invention can also be conceived as a program. That is, the program according to the present invention is a control program for a mobile manipulator that moves to the vicinity of a mounting body and reaches for a target object on the mounting body, the mobile manipulator including a trolley unit provided with moving means that moves toward a trolley target stop position and a trolley target stop posture near the mounting body, and a manipulator provided with a hand camera at or near its tip, the control program executing: a relative relationship recognition step of recognizing, after stopping, the relative distance and relative posture of the hand camera with respect to the mounting body; an object recognition step of recognizing, after stopping, an object placed within the field of view of the hand camera; and a reaching control step of, when the recognized object includes the target object, performing reaching operation control of the manipulator with respect to the target object based on the relative distance and the relative posture, and, when the recognized object does not include the target object, acquiring position information of the recognized object, moving the hand position of the manipulator based on the position information, and then performing reaching operation control of the manipulator with respect to the target object based on the relative distance and the relative posture.
 According to the present invention, it is possible to provide a mobile manipulator and the like that can reliably achieve subsequent operations such as reaching even if an error arises in the stop position and posture after movement.
FIG. 1 is an overall configuration diagram of the system.
FIG. 2 is an external perspective view of the mobile manipulator.
FIG. 3 is a functional block diagram of the mobile manipulator performing autonomous movement.
FIG. 4 is a functional block diagram of the mobile manipulator performing arm control.
FIG. 5 is an explanatory diagram of the layout inside the factory.
FIG. 6 is a general flowchart of the operation of the mobile manipulator.
FIG. 7 is a detailed flowchart of the target setting process.
FIG. 8 is an explanatory diagram of the target position and target posture of the hand camera.
FIG. 9 is a detailed flowchart of the movement process.
FIG. 10 is an explanatory diagram showing the state of the mobile manipulator when it is determined that the trolley target position and posture have been reached.
FIG. 11 is a detailed flowchart of the picking process.
FIG. 12 is a perspective view of the mobile manipulator and the shelf after the movement process.
FIG. 13 is a conceptual diagram of the process of calculating the relative distance and posture of the hand camera with respect to the shelf.
FIG. 14 is an explanatory diagram showing the state of the mobile manipulator after the hand camera control process.
FIG. 15 is an explanatory diagram showing the state after the hand camera has been moved to its target position and posture.
FIG. 16 is a perspective view of the mobile manipulator performing a parallel translation of the hand camera.
 Hereinafter, preferred embodiments of the present invention will be described in detail with reference to the accompanying drawings.
 (1. First Embodiment)
  As a first embodiment, an example in which the present invention is applied to a mobile manipulator that moves autonomously within a factory to perform work will be described. Note that the present invention is not limited to such a device and can also be applied to various environments other than factories.
 (1.1 Configuration)
  The configuration of the system 500 according to the present embodiment will be described with reference to FIGS. 1 to 4.
 FIG. 1 is an overall configuration diagram of the system 500. As is clear from the figure, the system 500 is configured by connecting a mobile manipulator 100, a data server 200, and a client device 300 to one another via a LAN inside the factory. Although one of each is shown in the figure, the present invention is not limited to this example, and a plurality of each device may be provided.
 The data server 200 is an information processing device that stores the various kinds of information described later and provides them in response to requests from the mobile manipulator 100 and the client device 300. For example, it manages the object transport commands, setting information, book IDs, and book position information described later, and provides the corresponding information in response to requests from the mobile manipulator 100 and the like.
 The data server 200 includes a control unit consisting of a CPU or the like that executes various programs, a storage unit consisting of ROM, RAM, flash memory, or the like that stores the executed programs and various data, a communication unit that communicates with external devices, an input unit that processes input from a keyboard, mouse, and the like, and a display unit that performs various image displays, all connected to one another via a bus.
 The client device 300 is an information processing device that, in cooperation with the data server 200, provides management information about the system 500 to a system administrator or the like and, as described later, allows a user to enter various settings for the mobile manipulator 100.
 The client device 300 likewise includes a control unit consisting of a CPU or the like that executes various programs, a storage unit consisting of ROM, RAM, flash memory, or the like that stores the executed programs and various data, a communication unit that communicates with external devices, an input unit that processes input from a keyboard, mouse, and the like, and a display unit that performs various image displays, all connected to one another via a bus.
 FIG. 2 is an external perspective view of the mobile manipulator 100. As is clear from the figure, the mobile manipulator 100 of the present embodiment has a roughly humanoid shape and is composed of a robot body 10, a mobile trolley 20 that supports the robot body 10, a head 30 provided at the upper end of the robot body 10, and an arm 40 extending from the front of the robot body 10.
 The mobile trolley 20 supporting the robot body 10 is an omnidirectional mobile trolley that can move on the surface (floor) on which the mobile manipulator 100 is placed. An omnidirectional mobile trolley is a trolley that includes, for example, a plurality of omni wheels as drive wheels and is configured to be able to move in any direction; it may also be called an omnidirectional trolley or an omnidirectional movement trolley. Such a trolley can move 360 degrees in any direction and can therefore maneuver freely even in narrow passages. As shown in the figure, the mobile trolley 20 of the present embodiment is composed of three drive wheels 22 inside the externally visible skirt 21 and a movement mechanism driving means 2 for driving the drive wheels 22 via belts or the like. The movement mechanism driving means 2 of the present embodiment consists of one or more motors and is hereinafter also referred to as the motor 2.
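 As a non-limiting illustration of how control information for such a trolley can be generated, the following sketch maps a desired body velocity to individual wheel speeds. It assumes three omni wheels evenly spaced at 120° around the trolley center; the wheel radius and base radius are illustrative placeholders rather than values from the present embodiment.

```python
import numpy as np

def omni3_wheel_speeds(vx, vy, omega, wheel_radius=0.06, base_radius=0.20):
    """Map a desired body twist (vx, vy [m/s], omega [rad/s]) to the
    angular speeds [rad/s] of three omni wheels.

    Assumes the wheels are evenly spaced at 120 degrees around the
    trolley centre with their rolling directions tangential; the two
    radii are illustrative placeholders, not values from the patent.
    """
    # Mounting angles of the three wheels relative to the body frame.
    angles = np.radians([90.0, 210.0, 330.0])
    speeds = []
    for a in angles:
        # Velocity of the wheel contact point along its rolling direction.
        v_wheel = -np.sin(a) * vx + np.cos(a) * vy + base_radius * omega
        speeds.append(v_wheel / wheel_radius)
    return speeds

# Example: translate sideways at 0.2 m/s while rotating slowly.
print(omni3_wheel_speeds(vx=0.0, vy=0.2, omega=0.1))
```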
 The head 30 may be provided at the upper end of the robot body 10 as the component corresponding to a human head (the part above the neck) of the roughly humanoid mobile manipulator 100. The head 30 is configured so that a person viewing it can recognize its front as a face, and the direction in which its front faces is controllable. Specifically, the mobile manipulator 100 of the present embodiment includes one or more head driving means 3 (actuators 3) and controls the orientation of the front by moving the head via the head driving means 3. Hereinafter, the front of the head 30 is referred to as the face 31.
 A head camera 35 is provided in the upper part of the face 31. Owing to its placement, the head camera 35 can capture images with a wider field of view than the hand camera 42 described later.
 The mobile manipulator 100 of the present embodiment has a head driving means 3 (servomotor 3a) with a vertical rotation axis that rotates the head 30 horizontally (left and right) relative to the floor, enabling the head 30 to be rotationally driven left and right. This allows the mobile manipulator 100 to rotate only the head 30 left and right about the vertical axis. However, the placement of the servomotor 3a is not limited to the illustrated arrangement and may be changed as appropriate as long as the orientation of the face 31 can be moved left and right. For example, the servomotor 3a may be provided so as to rotate the robot body 10 left and right relative to the mobile trolley 20; in this case, the servomotor 3a can change the orientation of the face 31 by rotating the head 30 together with the robot body 10 left and right about the vertical axis.
 The mobile manipulator 100 may also have a head driving means 3 (servomotor 3b) with a horizontal rotation axis that drives the head 30 up and down relative to the floor, enabling the head 30 to tilt up and down. This allows the mobile manipulator 100 to move the orientation of the face 31 vertically as well.
 An environment recognition means 32 is provided inside the protrusion at the upper front edge of the mobile trolley 20. In the present embodiment, the environment recognition means 32 is a LiDAR (Light Detection And Ranging) unit that measures the distance and direction to objects using laser light. However, the environment recognition means 32 is not limited to a LiDAR unit, and various other sensors can be adopted; for example, an image sensor may be provided and environment recognition performed by image processing, or a radar, a microphone array, or the like may be used.
 The mobile manipulator 100 includes an arm 40 on the front of the robot body 10. The arm 40 includes a manipulator mechanism and has, at the tip corresponding to its free end, a gripping mechanism 41 for gripping articles to be transported, in the present embodiment a gripper. A monocular hand camera 42 is provided near the base of the gripping mechanism 41. With this hand camera 42, a gripping target or the like facing the gripping mechanism 41 can be imaged from the front.
 However, the shape, number, and arrangement of the arm 40 are not limited to the illustrated embodiment and may be changed as appropriate according to the purpose. For example, the mobile manipulator 100 may be configured as a dual-arm robot having arms 40 on both sides of the robot body 10.
 The above is a configuration example of the mobile manipulator 100 of the present embodiment. Although omitted from FIG. 2, the mobile manipulator 100 may include other components necessary for controlling its operation, for example other actuators for driving the arm 40 and the like, and a battery as a power source.
 FIG. 3 is a functional block diagram of the mobile manipulator 100 performing autonomous movement. As is clear from the figure, environmental information obtained from the environment recognition means 32, which consists of a LiDAR, is acquired by the environmental information acquisition unit 101 and provided to the mobile trolley control unit 102. The mobile trolley 20 is also provided with encoders (not shown) that detect the rotation of each drive wheel 22; the values from these encoders are read by the encoder information acquisition unit 106 and provided to the mobile trolley control unit 102.
 The mobile trolley control unit 102 performs the various processes described later based on the environmental information and the encoder information, generates control information for the mobile trolley 20, and controls the movement mechanism driving means 2.
 FIG. 4 is a functional block diagram of the mobile manipulator 100 when controlling the arm 40. As is clear from the figure, the manipulator control unit 113 controls one or more actuators 118, consisting of servomotors or the like, that drive the arm 40. The arm 40 is also provided with sensors 116, consisting of encoders, torque sensors, and the like (not shown), that sense the motion of each arm link; information from the sensors 116 is acquired by the sensor information acquisition unit 117 and provided to the manipulator control unit 113.
 The hand camera information acquisition unit 110 acquires image information from the hand camera 42 and provides it to the manipulator control unit 113. Likewise, the head camera information acquisition unit 111 acquires image information from the head camera 35 and provides it to the manipulator control unit 113. That is, the manipulator control unit 113 can control the arm 40 based on the image information acquired from each camera (35, 42).
 (1.2 Operation)
  Next, the operation of the system 500 according to the present embodiment will be described.
 FIG. 5 is an explanatory diagram of the layout inside the factory 800 where the mobile manipulator 100 operates. Referring to this figure, an outline of the operation according to the present embodiment will first be described.
 In the initial state, the mobile manipulator 100 is at the lower-left position in the figure. In this state, upon receiving a transport command for the book 601a, which is the target object in the present embodiment, the mobile manipulator 100 starts moving and stops in front of the shelf 600 arranged at the upper right of the figure. The transport command for the book 601a was set by the client device 300 and stored in the data server 200.
 After stopping, the mobile manipulator 100 performs the picking operation described later to acquire the book 601a. The mobile manipulator 100 holding the book 601a then moves to the front of the workbench 700 arranged at the lower right of the figure, stops, and releases the book 601a onto the workbench. This completes the series of operations. Although a book is used as an example of the target object in the present embodiment, the present invention is not limited to such a configuration; in a factory, for example, the target object may be a material used for manufacturing, a work tool, or the like.
 FIG. 6 is a general flowchart of the operation of the mobile manipulator 100. As is clear from the figure, when processing starts, the mobile manipulator 100 requests setting information from the data server 200 via the LAN inside the factory 800 and acquires it (S1). In the present embodiment, the setting information includes relative relationship information between the hand camera 42 and the shelf 600, and target object information.
 The relative relationship information describes the relationship between the hand camera 42 and the shelf 600 after the move to the front of the shelf 600 and before the reaching operation, that is, the relative distance and relative posture between the hand camera 42 and the shelf 600, for example a relative distance of 200 mm and a relative posture of 180°. A relative posture of 180° means that the hand squarely faces the shelf 600.
 The target object information is information about the object to be gripped or transported by the mobile manipulator 100; in the present embodiment it includes the book ID, which is the identification information of the book, and book position information indicating the position of the book within the factory 800.
 In the present embodiment the setting information is acquired from the data server 200, but the present invention is not limited to such a configuration; for example, the setting information may be stored in the mobile manipulator 100 in advance.
 The mobile manipulator 100, which waits in a standby state until setting information is received (S2 NO), performs the target setting process (S3) upon receiving the setting information from the data server 200 (S2 YES).
 FIG. 7 is a detailed flowchart of the target setting process. As is clear from the figure, when processing starts, the book ID of the target book and the book position information are read from the setting information (S31).
 Then, based on the book position information, the target position and target posture of the hand camera 42 before the reaching operation are calculated (S32). That is, the position and posture of the hand camera 42 at which the relative distance from the target book is 200 mm and the relative posture is 180° are calculated.
 Once the target position and target posture of the hand camera 42 have been calculated, the target position and target posture of the mobile trolley 20 are calculated by working backwards from them (S34). At this time, a predetermined reference posture is preset in the mobile manipulator 100; in the present embodiment, this reference posture is a predetermined posture in which the arm 40 is folded to a certain extent, with movement safety also taken into consideration.
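 As a non-limiting sketch of the calculations of S32 and S34 on the plane, the following first places the hand camera 200 mm in front of the book, squarely facing it, and then recovers the trolley pose, assuming the preset reference posture fixes the camera at a known planar offset in the trolley frame; the offset values are illustrative assumptions, not taken from the present embodiment.

```python
import math

def camera_target_from_book(book_xy, book_yaw, standoff=0.20):
    """Place the hand camera `standoff` metres in front of the book,
    squarely facing it (relative posture 180 deg).  `book_yaw` is the
    outward normal of the shelf front in world coordinates."""
    cx = book_xy[0] + standoff * math.cos(book_yaw)
    cy = book_xy[1] + standoff * math.sin(book_yaw)
    cam_yaw = book_yaw + math.pi          # camera looks back at the book
    return cx, cy, cam_yaw

def trolley_target_from_camera(cam_pose, cam_offset_in_base=(0.35, 0.0, 0.0)):
    """Back-calculate the trolley pose that puts the hand camera at
    `cam_pose`, assuming the reference (folded-arm) posture fixes the
    camera at `cam_offset_in_base` (x, y, yaw) in the trolley frame.
    The offset values are illustrative, not from the patent."""
    cx, cy, cyaw = cam_pose
    ox, oy, oyaw = cam_offset_in_base
    base_yaw = cyaw - oyaw
    bx = cx - (ox * math.cos(base_yaw) - oy * math.sin(base_yaw))
    by = cy - (ox * math.sin(base_yaw) + oy * math.cos(base_yaw))
    return bx, by, base_yaw

cam = camera_target_from_book((5.0, 3.0), book_yaw=-math.pi / 2)
print(trolley_target_from_camera(cam))
```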
 The calculation results are then stored in the storage unit, and the target setting process ends (S36).
 FIG. 8 is an explanatory diagram of the target position and target posture of the hand camera 42. The mobile manipulator 100 in its initial state is drawn at the lower left of the figure, and the shelf 600 with the target book 601a arranged on it is drawn at the upper right. In the figure, the initial position and posture of the hand camera 42 are indicated by two orthogonal arrows C1, and the initial position and posture of the mobile trolley 20 are likewise indicated by two orthogonal arrows C2.
 The calculated target position and target posture of the hand camera 42 are indicated by two orthogonal arrows C3 in front of the shelf 600, and the target position and target posture of the mobile trolley 20 are indicated by two orthogonal arrows C4. As described later, the mobile manipulator 100 moves autonomously toward the target position and target posture of the mobile trolley 20.
 Returning to FIG. 6, when the target setting process (S3) is complete, the movement process of the mobile manipulator 100 is performed next (S4).
 FIG. 9 is a detailed flowchart of the movement process (S4). As is clear from the figure, when processing starts, the environmental information acquisition unit 101 acquires environmental information using the environment recognition means 32 and provides it to the mobile trolley control unit 102 (S41).
 Upon acquiring the environmental information, the mobile trolley control unit 102 recognizes the local environment around itself based on that information and performs self-position estimation on a global environment map generated in advance for the factory 800 (S43). Various methods known to those skilled in the art can be applied to this self-position estimation.
 After the self-position estimation, the mobile trolley control unit 102 plans a movement route (path planning) (S44). In the present embodiment, a global map and a local map of the surroundings of the mobile manipulator 100 are generated from the environmental information, and a route from the estimated self-position to the end point is generated. More specifically, a route is planned such that a cost defined by the distance from walls and the like remains at or below a predetermined threshold and the arrival time at the end point is minimized. However, the path planning method is not limited to this, and any method known to those skilled in the art may be used.
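 As one non-limiting way to realize such a plan, the following sketch runs a uniform-cost (Dijkstra) search on an occupancy grid, excluding cells whose wall-distance cost exceeds the threshold so that the shortest (fastest at constant speed) admissible route is returned; the grid representation and threshold value are assumptions for illustration.

```python
import heapq

def plan_path(grid, cost, start, goal, cost_threshold=0.5):
    """Dijkstra search on a 2D occupancy grid, a stand-in for the path
    planning of S44.  `grid[r][c]` is True for free cells, `cost[r][c]`
    is a wall-distance cost; cells above `cost_threshold` are excluded,
    and among the rest the shortest path is returned.  Illustrative
    only; the patent fixes no particular algorithm."""
    rows, cols = len(grid), len(grid[0])
    frontier = [(0.0, start, [start])]
    seen = set()
    while frontier:
        dist, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        r, c = node
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols and grid[nr][nc]
                    and cost[nr][nc] <= cost_threshold):
                heapq.heappush(frontier, (dist + 1.0, (nr, nc), path + [(nr, nc)]))
    return None  # no admissible route

free = [[True] * 5 for _ in range(5)]
near_wall = [[0.0] * 5 for _ in range(5)]
print(plan_path(free, near_wall, start=(0, 0), goal=(4, 4)))
```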
 After the path planning is complete, the mobile trolley control unit 102 performs movement control (S46), that is, it controls the movement mechanism driving means 2 so that the trolley moves along the route planned in the path planning process (S44).
 When the movement control is complete, it is determined whether the mobile trolley 20 has reached the target position and posture (S47). That is, self-position estimation is performed and it is determined whether the pose matches, or has come sufficiently close to, the target position and posture of the mobile trolley 20. If it is not yet sufficiently close (S47 NO), the series of processes (S41 to S47) is repeated; if it is determined to match or to be sufficiently close (S47 YES), the route movement process ends and the trolley stops.
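 The arrival determination of S47 can be sketched as a simple tolerance check on the estimated pose; the tolerance values below are illustrative assumptions.

```python
import math

def reached_target(pose, target, pos_tol=0.03, yaw_tol=math.radians(2.0)):
    """S47-style check: the estimated trolley pose (x, y, yaw) counts as
    arrived when it is within a position and heading tolerance of the
    target.  The tolerance values are illustrative assumptions."""
    dx, dy = target[0] - pose[0], target[1] - pose[1]
    dyaw = (target[2] - pose[2] + math.pi) % (2 * math.pi) - math.pi
    return math.hypot(dx, dy) <= pos_tol and abs(dyaw) <= yaw_tol

print(reached_target((1.00, 2.01, 0.01), (1.0, 2.0, 0.0)))  # True
```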
 FIG. 10 is an explanatory diagram showing the state of the mobile manipulator 100 when it is determined that the mobile trolley 20 has reached the target position and posture. As is clear from the figure, although the mobile manipulator 100 has determined, based on self-position estimation, that the mobile trolley 20 has reached its target position and posture, the actual position and posture of the mobile trolley 20 and of the hand camera 42 deviate from their respective targets.
 That is, the actual position and posture of the hand camera 42, indicated by C1 in the figure, deviate from its target position and posture indicated by C3, and the actual position and posture of the mobile trolley 20, indicated by C2, deviate from its target position and posture indicated by C4. If, for example, the picking process were performed on the book 601a in this state, it might fail.
 Returning to FIG. 6, when the movement process is complete, the picking process (S6) is performed next.
 FIG. 11 is a detailed flowchart of the picking process. As is clear from the figure, when processing starts, the relative relationship between the mobile manipulator 100 and the shelf 600 is first recognized (S61).
 FIG. 12 is a perspective view of the mobile manipulator 100 and the shelf 600 after the movement process. As is clear from the figure, in the present embodiment, AR markers 603 are arranged at predetermined intervals on the face of the shelf boards of the shelf 600 that faces the mobile manipulator 100. As is clear from the enlarged view in the figure, the AR marker 603 is two-dimensional code information. In addition, a QR code (registered trademark) 604 consisting of a two-dimensional dot pattern is placed at the bottom of the spine 601x of each book arranged on the shelf.
 To recognize the relative relationship, the mobile manipulator 100 uses the head camera 35 to capture a plurality of AR markers 603 within its field of view. From the sizes and angles of the plurality of AR markers 603 in the field of view, the relative distance and posture between the head camera 35 and the shelf 600 can be acquired.
 FIG. 13 is a conceptual diagram of the process of calculating the relative distance and posture of the hand camera 42 with respect to the shelf 600 based on the plurality of AR markers 603. Using a plurality of AR markers 603 improves the calculation accuracy compared with using a single AR marker.
 First, as shown on the left side of the figure, the manipulator control unit 113 sets a plane equation perpendicular to the floor based on the positions and postures of the six AR markers 603 acquired by the head camera 35. The plane equation is set this way on the assumption that the front face of the shelf 600 is a plane perpendicular to the floor. Then, as shown on the right side of the figure, the relative distance and relative posture of the hand camera 42 with respect to the set plane are calculated geometrically. In this way, the actual positional relationship of the hand camera 42 with respect to the shelf 600 can be grasped. The calculation of the relative relationship is not limited to the method of the present embodiment, and various known methods can be adopted.
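 A simplified sketch of this computation is shown below: a plane constrained to be perpendicular to the floor is fitted through the marker center points expressed in the camera frame, and the camera's distance and heading relative to the shelf front are derived from it. Treating z as the vertical axis and the use of an SVD fit are assumptions for illustration.

```python
import numpy as np

def fit_vertical_plane(marker_points):
    """Fit a plane constrained to be perpendicular to the floor (normal
    has no vertical component) through AR-marker centre points given in
    the camera frame, then return the camera's distance to that plane
    and its yaw relative to the plane normal.  A simplified sketch of
    the FIG. 13 computation; z is taken as the vertical axis here."""
    pts = np.asarray(marker_points, dtype=float)
    xy = pts[:, :2] - pts[:, :2].mean(axis=0)
    # Smallest principal component of the horizontal spread = plane normal.
    _, _, vt = np.linalg.svd(xy)
    n = np.array([vt[-1][0], vt[-1][1], 0.0])
    d = float(n[:2] @ pts[:, :2].mean(axis=0))   # plane: n . p = d
    distance = abs(d)                            # camera sits at the origin
    yaw = np.arctan2(n[1], n[0])                 # heading of the shelf normal
    return distance, yaw

markers = [(0.18, 0.30, -0.1), (0.21, 0.10, 0.0), (0.24, -0.10, 0.1),
           (0.19, 0.25, 0.2), (0.22, 0.05, -0.2), (0.25, -0.15, 0.0)]
print(fit_vertical_plane(markers))
```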
 Returning to FIG. 11, when the recognition of the relative relationship with the shelf 600 is complete, the manipulator control unit 113 controls the arm 40, without moving the mobile trolley 20, so that the hand camera 42 assumes the relative distance and relative posture specified in the setting information, that is, so that it squarely faces the shelf 600 at a distance of 200 mm (S62).
 After this control of the hand camera 42, the book located in front of, or near the front of, the hand camera within its field of view is recognized; that is, the QR code 604 placed on the spine of a book arranged on the shelf board of the shelf 600 is read and the book ID is detected (S63).
 The manipulator control unit 113 queries the data server 200 with the result of this recognition and thereby receives from the data server 200 the position information of the books currently in the field of view (S65).
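 As a non-limiting sketch of S63 and S65, the following decodes the QR codes in a hand-camera frame and queries the server for the corresponding positions; the pyzbar decoding library, the endpoint URL, and the payload shape are assumptions for the sketch, not part of the present embodiment.

```python
import requests
from PIL import Image
from pyzbar.pyzbar import decode

DATA_SERVER = "http://dataserver.local/api/books"   # hypothetical endpoint

def detect_book_ids(image_path):
    """S63: decode every QR code visible in a hand-camera frame and
    return the embedded book IDs."""
    frame = Image.open(image_path)
    return [code.data.decode("utf-8") for code in decode(frame)]

def query_book_positions(book_ids):
    """S65: ask the data server for the position of each recognised
    book.  The endpoint and response shape are assumptions for the
    sketch."""
    response = requests.get(DATA_SERVER, params={"ids": ",".join(book_ids)})
    response.raise_for_status()
    return response.json()   # e.g. {"B0123": {"shelf": "600", "x": 0.42}, ...}
```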
 The manipulator control unit 113 then determines whether the book in front of, or near the front of, the hand camera 42 is the target book (S66). If it is the target book 601a (S66 YES), the gripping position identification process described later (S73) is performed.
 If it is not the target book 601a (S66 NO), the hand is translated along the front face of the shelf 600 toward the target book 601a based on the position information acquired from the data server 200 (S68). After this predetermined translation, the books in the field of view of the hand camera 42 are recognized again via their QR codes 604 (S69), and the recognition result is again queried against the data server 200 (S70).
 It is then determined once more whether the book in front of, or near the front of, the hand camera 42 is the target book 601a (S72). If it is (S72 YES), the gripping position identification process described later (S73) is performed; if it is not (S72 NO), the series of processes (S68 to S70) is performed again.
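 The loop of S66 to S72 can be sketched as follows; the `robot` object bundling the recognition, query, and hand-translation primitives is a hypothetical interface introduced only for illustration.

```python
def seek_target_book(robot, target_id, step_limit=20):
    """S66 to S72 as a loop: keep translating the hand along the shelf
    front until the target book sits in front of the hand camera.
    `robot` is a hypothetical interface bundling the recognition,
    server-query, and hand-translation primitives described above."""
    for _ in range(step_limit):
        ids = robot.recognize_books_in_view()          # S63 / S69
        if target_id in ids:                           # S66 / S72 YES
            return True
        positions = robot.query_positions(ids)         # S65 / S70
        # Signed offset along the shelf front from the current view
        # to the target book, derived from the server's position data.
        offset = robot.offset_to(target_id, positions)
        robot.translate_hand_along_shelf(offset)       # S68
    return False                                       # give up after limit
```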
 FIG. 14 is an explanatory diagram showing an example of the state of the mobile manipulator 100 after the hand camera control process (S62). In this example, a book 601b that is not the target book 601a is located in front of, or near the front of, the hand camera 42.
 In this case, the mobile manipulator 100 determines through the recognition of objects in its field of view (S63) that the book in front of, or near the front of, the hand camera 42 is not the target book 601a (S66 NO). Based on the relative positional relationship from the objects in the field of view to the target book 601a, obtained by querying the data server 200 (S65), the hand is translated toward the target book 601a (S68). This translation is repeated until the target book 601a is located in front of, or near the front of, the hand camera (S69 to S72 YES). That is, the position and posture of the hand camera 42 indicated by C1 are moved toward its target position and posture indicated by C3.
 FIG. 15 is an explanatory diagram showing the state after the hand camera 42 has been translated to its target position and posture. As is clear from the figure, by translating the hand camera 42 by the distance d, the position and posture of the hand camera 42 indicated by C1 coincide with its target position and posture indicated by C3.
 According to such a configuration, even when the position and posture of the mobile trolley 20 deviate from their targets due to self-position estimation errors or the like, the hand camera 42 can be reliably brought to its target position and posture, so the possibility of failure of the subsequent reaching operation and the like (S74, S75) can be greatly reduced.
 FIG. 16 is a perspective view of an example of the mobile manipulator 100 performing the translation of the hand camera 42. The left side of the figure shows the state before the translation (S68) of the hand camera 42, and the right side shows the state after it.
 As is clear from the left-hand view, before the translation (S68) a book 601b that is not the target book 601a is located in front of, or near the front of, the hand camera 42. When the mobile manipulator 100 determines that this book is not the target book 601a, it acquires the position information of the objects in the field of view from the data server 200 and starts translating toward the target book 601a (S68 to S72).
 As is clear from the right-hand view after the translation (S72 YES), the target book 601a is now located in front of, or near the front of, the hand camera 42. In this state, by performing the reaching operation and the like described later (S74, S75), the target book 601a can be reliably gripped.
 The above translation of the hand or hand camera 42 was performed in a plane along the front face of the shelf 600, but the present invention is not limited to such a configuration. For example, the hand may be moved while keeping a constant distance and posture along a surface, including a curved surface or a stepped surface, corresponding to the shape of a mounting or storage body such as a shelf.
 Returning to FIG. 11, after the hand camera 42 has been positioned at a relative distance of 200 mm squarely facing (180°) the target book 601a, the gripping position of the target object, that is, of the book 601a, is identified (S73).
 The gripping position identification detects the contour of the gripping target from the image obtained by the hand camera 42 and uses a trained model that takes information on the detected contour as input. This trained model was generated by machine learning based on training data that include correct gripping positions annotated by humans on the contours of gripping targets.
 Gripping position identification using such a trained model can accurately identify gripping positions with a high probability of successful gripping.
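 As a non-limiting sketch of this inference step, the following resamples the detected contour to a fixed length and feeds it to a regression model that returns a grip candidate; the 64-point resampling and the model interface are assumptions, since the present embodiment only specifies that annotated contours were used for training.

```python
import numpy as np

def estimate_grip_position(contour_xy, model):
    """Sketch of the trained-model grip-position step: resample the
    detected contour to a fixed length, feed it to a regression model,
    and get back an (x, y, angle) grip candidate in image coordinates.
    The model interface and 64-point resampling are assumptions."""
    pts = np.asarray(contour_xy, dtype=float)
    # Resample the contour to 64 evenly spaced points by arc length.
    seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])
    u = np.linspace(0.0, s[-1], 64)
    resampled = np.stack([np.interp(u, s, pts[:, 0]),
                          np.interp(u, s, pts[:, 1])], axis=1)
    features = resampled.flatten()[None, :]   # shape (1, 128)
    x, y, angle = model.predict(features)[0]  # e.g. a multi-output regressor
    return x, y, angle
```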
 In the present embodiment, the machine learning method is a neural network, in particular deep learning, but the method is not limited to this and other known methods may be used.
 After this gripping position identification, the manipulator control unit 113 controls the reaching operation of extending or flexing each joint of the arm 40 toward the gripping position (S74). This reaching operation is performed until a certain reaction force is detected by the sensors 116, including the joint torque sensors of the arm 40, that is, until the arm 40 comes into contact with the book 601a.
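 A minimal sketch of this contact-terminated reaching follows, assuming a hypothetical arm interface and an illustrative 3 N force threshold.

```python
import time

def reach_until_contact(arm, grip_pose, force_limit=3.0, timeout=10.0):
    """S74 sketch: step the hand toward the grip pose and stop as soon
    as the joint torque sensors report a reaction force above
    `force_limit` [N], i.e. the hand has touched the book.  The `arm`
    interface and the 3 N threshold are assumptions for illustration."""
    arm.start_cartesian_motion(grip_pose)
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if arm.estimated_external_force() >= force_limit:
            arm.stop()
            return True          # contact made: ready to grasp
        time.sleep(0.01)         # 100 Hz monitoring loop
    arm.stop()
    return False                 # no contact within the timeout
```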
 After this reaching operation, the manipulator control unit 113 controls the operation of gripping the target book 601a using the gripping mechanism 41, which is a gripper (S75). This gripping operation is performed, while the gripper's motor current and the like are measured, until a certain reaction force is detected.
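 The current-monitored gripping of S75 can likewise be sketched as follows, with the gripper interface and the current threshold as illustrative assumptions.

```python
import time

def grasp_until_reaction(gripper, current_limit=0.8):
    """S75 sketch: close the gripper until its motor current, used as a
    proxy for the grip reaction force, exceeds `current_limit` [A].
    The gripper interface and the threshold are illustrative."""
    gripper.start_closing()
    while gripper.motor_current() < current_limit:
        time.sleep(0.005)        # poll at 200 Hz while closing
    gripper.hold()               # maintain grip force on the book
    return gripper.opening_width()
```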
 After the gripping process, the mobile manipulator 100 folds the arm 40 by flexing each of its joints while holding the book 601a (S77). This completes the picking process (S6).
 Returning to FIG. 6, when the picking process is complete, the mobile manipulator 100 performs the movement process again (S8). That is, it reads the target placement position 701 on the workbench 700 from the data server 200 and calculates the target position and posture of the hand camera 42. It then calculates the target stop position and posture of the mobile trolley 20 from the target position and posture of the hand camera 42 and, while performing self-position estimation, moves toward that target position and posture with the book 601a still in its grip. The details of this movement process based on self-position estimation are substantially the same as those shown in FIG. 9, so their description is omitted.
 When the mobile manipulator 100 reaches the target stop position and posture of the mobile trolley 20 near the workbench 700, it releases the book 601a by placing it at the predetermined placement position 701 on the workbench 700 (S9). This completes the series of transport processes.
 According to such a configuration, it is possible to provide a mobile manipulator 100 and the like that can reliably achieve subsequent operations such as reaching even if an error arises in the stop position and posture after movement due to self-position estimation errors or the like.
 Although embodiments of the present invention have been described above, the above embodiments merely show some examples of application of the present invention, and there is no intention to limit the technical scope of the present invention to their specific configurations. The above embodiments may also be combined as appropriate to the extent that no contradiction arises.
 The present invention can be used at least in industries that manufacture autonomous mobile bodies such as mobile manipulators.
 10 robot body
 2 movement mechanism driving means
 20 mobile trolley
 21 skirt
 22 drive wheel
 3 head driving means
 30 head
 31 face
 32 environment recognition means
 35 head camera
 40 arm
 41 gripping mechanism
 42 hand camera
 100 mobile manipulator
 200 data server
 300 client device
 500 system
 800 factory

Claims (28)

  1.  A mobile manipulator that moves to the vicinity of a mounting body and reaches for a target object on the mounting body, comprising:
     a trolley unit provided with moving means that moves toward a trolley target stop position and a trolley target stop posture near the mounting body;
     a manipulator provided with a hand camera at or near its tip;
     a relative relationship recognition unit that recognizes, after stopping, the relative distance and relative posture of the hand camera with respect to the mounting body;
     an object recognition unit that recognizes, after stopping, an object placed within the field of view of the hand camera; and
     a reaching control unit that, when the recognized object includes the target object, performs reaching operation control of the manipulator with respect to the target object based on the relative distance and the relative posture, and, when the recognized object does not include the target object, acquires position information of the recognized object, moves the hand position of the manipulator based on the position information, and then performs reaching operation control of the manipulator with respect to the target object based on the relative distance and the relative posture.
  2.  The mobile manipulator according to claim 1, further comprising a reaching preparation operation control unit that controls the manipulator to assume a reaching preparation distance and a reaching preparation posture based on the relative distance and the relative posture between the hand camera and the mounting body.
  3.  The mobile manipulator according to claim 2, wherein the cart target stop position and the cart target stop posture are generated based on the reaching preparation distance and the reaching preparation posture.
  4.  The mobile manipulator according to claim 3, wherein the reaching preparation distance and the reaching preparation posture are generated based on the position of the target object and a preset relative relationship between the mounting body and the hand.
  5.  The mobile manipulator according to claim 2, wherein the recognition of the target object by the hand camera is performed at the reaching preparation distance and in the reaching preparation posture.
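Claims 2 to 5 describe deriving a reaching preparation pose from the target object's position and a preset mounting-body-to-hand relationship. That derivation is a composition of rigid transforms, sketched below with homogeneous matrices (numpy assumed; all numeric offsets are illustrative values, not parameters taken from the patent).

```python
import numpy as np

def make_pose(rotation: np.ndarray, translation: np.ndarray) -> np.ndarray:
    """Assemble a 4x4 homogeneous transform from R and t."""
    pose = np.eye(4)
    pose[:3, :3] = rotation
    pose[:3, 3] = translation
    return pose

# Pose of the target object in the mounting-body frame (illustrative).
T_body_target = make_pose(np.eye(3), np.array([0.30, 0.10, 0.95]))

# Preset hand offset relative to the target, e.g. 15 cm back along the
# approach axis with the gripper facing the object (illustrative values).
T_target_hand = make_pose(np.eye(3), np.array([-0.15, 0.0, 0.0]))

# Reaching preparation pose of the hand, expressed in the body frame.
T_body_hand_prep = T_body_target @ T_target_hand
print(T_body_hand_prep[:3, 3])  # preparation position: [0.15 0.10 0.95]
```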
  6.  The mobile manipulator according to claim 1, further comprising a self-position estimation unit that performs self-position estimation processing using environment recognition means, wherein the moving means is controlled based on the result of the self-position estimation processing.
  7.  The mobile manipulator according to claim 6, wherein the environment recognition means is a LiDAR.
  8.  The mobile manipulator according to claim 1, wherein the hand camera is a monocular camera.
  9.  The mobile manipulator according to claim 1, further comprising a grip control unit that performs grip control of the target object after the reaching operation control.
  10.  The mobile manipulator according to claim 9, wherein the grip control unit further comprises a grip position determination unit that determines a grip position of the target object based on an image acquired by the hand camera.
  11.  The mobile manipulator according to claim 10, wherein the grip position determination unit further comprises a grip position estimation unit including a trained model machine-learned to estimate the grip position of the target object based on an image acquired by the hand camera.
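The trained model of claim 11 can be pictured as an image-to-coordinate regressor: hand-camera pixels in, a grip point out. Below is a deliberately tiny PyTorch sketch of that idea; the architecture, input resolution, and two-coordinate output are assumptions chosen for illustration, not the model the patent contemplates.

```python
import torch
import torch.nn as nn

class GripPositionNet(nn.Module):
    """Toy regressor: hand-camera image -> (u, v) grip pixel."""

    def __init__(self) -> None:
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # collapse to one feature vector
        )
        self.head = nn.Linear(32, 2)  # grip position in image coordinates

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).flatten(1))

model = GripPositionNet().eval()
with torch.no_grad():
    image = torch.rand(1, 3, 128, 128)  # dummy hand-camera frame
    grip_uv = model(image)
print(grip_uv.shape)  # torch.Size([1, 2])
```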
  12.  The mobile manipulator according to claim 1, wherein the movement of the hand position of the manipulator is performed parallel to the mounting body.
  13.  The mobile manipulator according to claim 1, further comprising a cart movement control unit that moves the cart unit in order to move the hand position of the manipulator.
  14.  The mobile manipulator according to claim 1, wherein the relative relationship recognition unit further comprises a marker recognition unit that recognizes the relative distance and relative posture between the hand camera and the mounting body based on image information including a first marker provided on the target object or the mounting body.
  15.  The mobile manipulator according to claim 14, wherein a plurality of the first markers are arranged on the target object or the mounting body.
  16.  The mobile manipulator according to claim 15, wherein the relative distance and the relative posture are recognized based on information about the plurality of first markers and a constraint condition based on the shape of the mounting body.
  17.  The mobile manipulator according to claim 14, wherein the image information including the first marker is captured by the hand camera.
  18.  The manipulator according to claim 15, wherein the image information including the plurality of first markers is obtained by pointing the hand camera obliquely at the plurality of first markers so as to fit them within its field of view.
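Marker-based recognition of the kind recited in claims 14 to 18 reduces to a perspective-n-point problem: marker corners at known positions on the mounting body are matched against their observed pixels. A sketch with OpenCV's solvePnP follows. The intrinsics, marker layout, and pixel values are placeholders (a real system would first obtain the pixels from a marker detector such as ArUco), and expressing every corner in one body-fixed frame is one way the shape constraint of claim 16 could enter.

```python
import numpy as np
import cv2

# 3D corner coordinates of one 4 cm marker in the mounting-body frame
# (placeholder layout; with several markers, all corners would be
# expressed in this single body-fixed frame).
object_points = np.array([
    [0.00, 0.00, 0.0],
    [0.04, 0.00, 0.0],
    [0.04, 0.04, 0.0],
    [0.00, 0.04, 0.0],
], dtype=np.float64)

# Observed pixel coordinates of the same corners (placeholder values
# that a marker detector would supply).
image_points = np.array([
    [310.0, 240.0], [370.0, 238.0], [372.0, 300.0], [312.0, 302.0],
], dtype=np.float64)

# Assumed pinhole intrinsics of the hand camera.
camera_matrix = np.array([
    [600.0, 0.0, 320.0],
    [0.0, 600.0, 240.0],
    [0.0, 0.0, 1.0],
])
dist_coeffs = np.zeros(5)  # assume an undistorted camera

ok, rvec, tvec = cv2.solvePnP(object_points, image_points,
                              camera_matrix, dist_coeffs)
# tvec is the relative distance and rvec the relative posture
# (axis-angle) of the mounting body in the hand-camera frame.
print(ok, tvec.ravel(), rvec.ravel())
```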
  19.  The mobile manipulator according to claim 14, further comprising a camera arranged on the manipulator so as to have a wider field of view than the hand camera, wherein the image information including the first marker is captured by that camera.
  20.  The mobile manipulator according to claim 19, further comprising a head, wherein the camera is provided on the head.
  21.  The mobile manipulator according to claim 1, wherein the cart unit is an omnidirectional mobile cart.
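An omnidirectional cart as recited in claim 21 can execute any planar velocity command directly, which is what allows fine adjustment of the stop position without turning maneuvers. As one concrete, generic example, the textbook inverse kinematics of a mecanum-wheel base is sketched below; the wheel geometry values are illustrative and nothing here is specific to the patented cart.

```python
import numpy as np

def mecanum_wheel_speeds(vx: float, vy: float, omega: float,
                         lx: float = 0.2, ly: float = 0.15,
                         r: float = 0.05) -> np.ndarray:
    """Wheel angular velocities [FL, FR, RL, RR] for a body twist.

    vx, vy: linear velocity (m/s); omega: yaw rate (rad/s);
    lx, ly: half wheelbase/track (m); r: wheel radius (m).
    """
    k = lx + ly
    return np.array([
        vx - vy - k * omega,   # front-left
        vx + vy + k * omega,   # front-right
        vx + vy - k * omega,   # rear-left
        vx - vy + k * omega,   # rear-right
    ]) / r

# Pure sideways translation: wheels counter-rotate in the [-, +, +, -]
# pattern, so the cart slides laterally without changing heading.
print(mecanum_wheel_speeds(0.0, 0.3, 0.0))
```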
  22.  The mobile manipulator according to claim 1, wherein the manipulator is an articulated arm, the articulated arm comprises one or more force sensors at its tip, at its joints, or at its base, and the reaching operation control unit further comprises a reaching stop control unit that performs control to stop the reaching operation when the force sensor detects a value equal to or greater than a predetermined value.
  23.  The mobile manipulator according to claim 1, wherein the manipulator comprises an articulated arm and an end effector provided at the tip of the articulated arm, the end effector comprises a force sensor and/or a contact sensor on its contact surface with the object, and the reaching operation control unit comprises an effector reaching stop control unit that performs control to stop the reaching operation when the force sensor and/or the contact sensor detects a value equal to or greater than a predetermined value.
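The stop behavior of claims 22 and 23 is a guard on the reaching loop: abort the motion the moment a force or contact reading crosses a threshold. A schematic sketch follows; the threshold, the simulated readings, and the step granularity are all invented for illustration, and a real controller would poll the sensors at a fixed rate while streaming arm commands.

```python
from typing import Iterable

FORCE_LIMIT_N = 5.0  # illustrative threshold, not a value from the patent

def run_reaching(force_readings: Iterable[float]) -> str:
    """Advance the reach step by step; stop on excessive force."""
    for step, force in enumerate(force_readings):
        if force >= FORCE_LIMIT_N:
            # Reaching stop control: halt the arm immediately.
            return f"stopped at step {step}: {force:.1f} N"
        # ...command the next small reaching increment here...
    return "reach completed without unexpected contact"

# Simulated wrist force sensor values during a reach; the spike at the
# fourth sample triggers the stop.
print(run_reaching([0.2, 0.3, 0.4, 6.1, 0.2]))
```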
  24.  The mobile manipulator according to claim 1, wherein the manipulator comprises an arm and a vertical movement mechanism provided at the base of the arm.
  25.  The mobile manipulator according to claim 1, wherein the recognition of an object by the object recognition unit is performed by imaging two-dimensional code information provided on the object.
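Recognition via a two-dimensional code, as in claim 25, can be exercised directly with OpenCV's built-in QR detector. A minimal sketch, assuming the code printed on the object encodes its identification string (the blank test frame below naturally decodes to nothing):

```python
from typing import Optional

import cv2
import numpy as np

def identify_object(frame: np.ndarray) -> Optional[str]:
    """Decode a QR code in a hand-camera frame, if one is present."""
    detector = cv2.QRCodeDetector()
    text, points, _ = detector.detectAndDecode(frame)
    return text if text else None

# A blank dummy frame contains no code, so this prints None.
print(identify_object(np.zeros((480, 640, 3), dtype=np.uint8)))
```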
  26.  The mobile manipulator according to claim 1, wherein the reaching operation control unit further comprises:
     a transmission unit that transmits identification information of the recognized object to a predetermined data server via a network; and
     a reception unit that receives position information corresponding to the identification information from the data server.
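The transmission and reception units of claim 26 amount to an ordinary request-and-response exchange with the data server. The sketch below uses the requests library; the endpoint URL and the JSON schema are invented for illustration and would in practice be defined by the actual server.

```python
import requests

DATA_SERVER = "http://dataserver.example/api/objects"  # hypothetical endpoint

def query_position(object_id: str) -> dict:
    """Send the recognized object's ID; receive its stored position."""
    response = requests.post(DATA_SERVER, json={"id": object_id}, timeout=5.0)
    response.raise_for_status()
    # Expected payload shape (assumed): {"id": ..., "position": [x, y, z]}
    return response.json()

# Example (requires the hypothetical server to be running):
# info = query_position("box-0042")
# print(info["position"])
```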
  27.  A method for controlling a mobile manipulator that moves to the vicinity of a mounting body and reaches for a target object on the mounting body, wherein
     the mobile manipulator comprises:
      a cart unit comprising moving means for moving toward a cart target stop position and a cart target stop posture in the vicinity of the mounting body; and
      a manipulator comprising a hand camera at or near its tip,
     the control method comprising:
      a relative relationship recognition step of recognizing, after the cart unit stops, the relative distance and relative posture of the hand camera with respect to the mounting body;
      an object recognition step of recognizing, after the cart unit stops, an object placed within the field of view of the hand camera; and
      a reaching control step of, when the recognized object includes the target object, performing reaching operation control of the manipulator toward the target object based on the relative distance and the relative posture, and, when the recognized object does not include the target object, acquiring position information of the recognized object, moving the hand position of the manipulator based on the position information, and then performing reaching operation control of the manipulator toward the target object based on the relative distance and the relative posture.
  28.  A control program for a mobile manipulator that moves to the vicinity of a mounting body and reaches for a target object on the mounting body, wherein
     the mobile manipulator comprises:
      a cart unit comprising moving means for moving toward a cart target stop position and a cart target stop posture in the vicinity of the mounting body; and
      a manipulator comprising a hand camera at or near its tip,
     the control program comprising:
      a relative relationship recognition step of recognizing, after the cart unit stops, the relative distance and relative posture of the hand camera with respect to the mounting body;
      an object recognition step of recognizing, after the cart unit stops, an object placed within the field of view of the hand camera; and
      a reaching control step of, when the recognized object includes the target object, performing reaching operation control of the manipulator toward the target object based on the relative distance and the relative posture, and, when the recognized object does not include the target object, acquiring position information of the recognized object, moving the hand position of the manipulator based on the position information, and then performing reaching operation control of the manipulator toward the target object based on the relative distance and the relative posture.
PCT/JP2021/018229 2020-07-16 2021-05-13 Mobile manipulator, method for controlling same, program WO2022014133A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2020122033A JP7475663B2 (en) 2020-07-16 2020-07-16 Mobile manipulator and control method and program thereof
JP2020-122033 2020-07-16

Publications (1)

Publication Number Publication Date
WO2022014133A1 (en)

Family

ID=79554695

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2021/018229 WO2022014133A1 (en) 2020-07-16 2021-05-13 Mobile manipulator, method for controlling same, program

Country Status (2)

Country Link
JP (1) JP7475663B2 (en)
WO (1) WO2022014133A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2024145678A * 2023-03-31 2024-10-15 Johnan株式会社 Robot control system

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH05338713A (en) * 1992-06-15 1993-12-21 Hitachi Electron Eng Co Ltd Storage rack positioning controller
JP2010162635A (en) * 2009-01-14 2010-07-29 Fanuc Ltd Method for correcting position and attitude of self-advancing robot
JP2018158391A (en) * 2017-03-22 2018-10-11 株式会社東芝 Object handling device and calibration method for the same

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115860642A (en) * 2023-02-02 2023-03-28 上海仙工智能科技有限公司 Access management method and system based on visual identification

Also Published As

Publication number Publication date
JP7475663B2 (en) 2024-04-30
JP2022018716A (en) 2022-01-27

Similar Documents

Publication Publication Date Title
US11241796B2 (en) Robot system and method for controlling robot system
JP6359756B2 (en) Manipulator, manipulator operation planning method, and manipulator control system
US11396101B2 (en) Operating system, control device, and computer program product
US11584004B2 (en) Autonomous object learning by robots triggered by remote operators
JP6855492B2 (en) Robot system, robot system control device, and robot system control method
WO2022014133A1 (en) Mobile manipulator, method for controlling same, program
JP6697204B1 (en) Robot system control method, non-transitory computer-readable recording medium, and robot system control device
JP2018020423A (en) Robot system and picking method
JP7395877B2 (en) Robot system and control method
WO2018230517A1 (en) Operation system
JP2012135820A (en) Automatic picking device and automatic picking method
CN113146614B (en) Control method and control device for mobile robot and robot system
JP2008264901A (en) Mobile robot, its automatic coupling method, and drive control method for parallel link mechanism
JP7353948B2 (en) Robot system and robot system control method
TWI851310B (en) Robot and method for autonomously moving and grabbing objects
US20240316767A1 (en) Robot and method for autonomously moving and grasping objects
WO2024057800A1 (en) Method for controlling mobile object, transport device, and work system
JP2015104796A (en) Gripping method, transportation method and robot
CN116100562B (en) Visual guiding method and system for multi-robot cooperative feeding and discharging
Jayarathne et al. Vision-based Leader-Follower Mobile Robots for Cooperative Object Handling
WO2024075394A1 (en) Control device, and control method
TW202438256A (en) Robot and method for autonomously moving and grabbing objects
WO2023053374A1 (en) Control device and robot system
WO2024166393A1 (en) Information generation system
Ali et al. Mobile robot transportation for multiple labware with hybrid pose correction in life science laboratories

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
    Ref document number: 21841801
    Country of ref document: EP
    Kind code of ref document: A1
NENP Non-entry into the national phase
    Ref country code: DE
122 Ep: pct application non-entry in european phase
    Ref document number: 21841801
    Country of ref document: EP
    Kind code of ref document: A1