
US9292924B2 - Method and device for monitoring a spatial region - Google Patents


Info

Publication number
US9292924B2
Authority
US
United States
Prior art keywords
image recording
distance
spatial region
image
images
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active, expires
Application number
US13/354,405
Other versions
US20120182419A1 (en)
Inventor
Martin WIETFELD
Current Assignee
Pilz GmbH and Co KG
Original Assignee
Pilz GmbH and Co KG
Priority date
Filing date
Publication date
Application filed by Pilz GmbH and Co KG filed Critical Pilz GmbH and Co KG
Assigned to PILZ GMBH & CO. KG. Assignors: WIETFELD, MARTIN
Publication of US20120182419A1 publication Critical patent/US20120182419A1/en
Application granted granted Critical
Publication of US9292924B2 publication Critical patent/US9292924B2/en

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/70: Determining position or orientation of objects or cameras
    • G06T 7/73: Determining position or orientation of objects or cameras using feature-based methods
    • G06T 7/0042
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J 9/00: Programme-controlled manipulators
    • B25J 9/16: Programme controls
    • B25J 9/1674: Programme controls characterised by safety, monitoring, diagnostic
    • B25J 9/1676: Avoiding collision or forbidden zones
    • F: MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
    • F16: ENGINEERING ELEMENTS AND UNITS; GENERAL MEASURES FOR PRODUCING AND MAINTAINING EFFECTIVE FUNCTIONING OF MACHINES OR INSTALLATIONS; THERMAL INSULATION IN GENERAL
    • F16P: SAFETY DEVICES IN GENERAL; SAFETY DEVICES FOR PRESSES
    • F16P 3/00: Safety devices acting in conjunction with the control or operation of a machine; Control arrangements requiring the simultaneous use of two or more parts of the body
    • F16P 3/12: Safety devices acting in conjunction with the control or operation of a machine; Control arrangements requiring the simultaneous use of two or more parts of the body with means, e.g. feelers, which in case of the presence of a body part of a person in or near the danger zone influence the control or operation of the machine
    • F16P 3/14: Safety devices acting in conjunction with the control or operation of a machine; Control arrangements requiring the simultaneous use of two or more parts of the body with means, e.g. feelers, which in case of the presence of a body part of a person in or near the danger zone influence the control or operation of the machine, the means being photocells or other devices sensitive without mechanical contact
    • F16P 3/142: Safety devices acting in conjunction with the control or operation of a machine; Control arrangements requiring the simultaneous use of two or more parts of the body with means, e.g. feelers, which in case of the presence of a body part of a person in or near the danger zone influence the control or operation of the machine, the means being photocells or other devices sensitive without mechanical contact, using image capturing devices
    • G06T 7/0085
    • G06T 7/10: Segmentation; Edge detection
    • G06T 7/13: Edge detection
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10016: Video; Image sequence
    • G06T 2207/10021: Stereoscopic video; Stereoscopic image sequence
    • G06T 2207/30: Subject of image; Context of image processing
    • G06T 2207/30196: Human being; Person

Definitions

  • The present invention relates to a method and a device for monitoring a spatial region, especially in the context of safeguarding a hazardous machine.
  • EP 1 543 270 B1 discloses a method and a device which serve for safeguarding the hazardous region of a machine operating in automated fashion.
  • The device has at least two image recording units arranged at a defined distance from one another.
  • Each image recording unit provides an image of the spatial region in which the machine is arranged.
  • The images of the two image recording units are evaluated by means of at least two different scene analysis methods, in order to obtain a three-dimensional image of the monitored spatial region.
  • Positions of individual objects in the spatial region are determined on the basis of the images recorded in parallel, wherein said positions also include distance information.
  • The scene analysis methods and, in particular, the determination of object distances are typically based on objects and/or structures in each of the two recorded images, which have to be identified and assigned to one another. Since the two image recording units are arranged at a distance from one another, each image recording unit has a different viewing angle. Therefore, identical objects and/or structures appear offset with respect to one another in the two images, wherein the relative offset of an object in the two images depends on the defined distance between the image recording units and on the distance of the object from the image recording units. Consequently, given a known distance between the image recording units, it is possible to determine the distance to the object on the basis of the images.
  • The functional principle corresponds in a certain way to human three-dimensional vision.
  • Difficulties arise with structures having a plurality of closely spaced parallel edges. Such edges can be three-dimensional structures such as, for instance, a staircase, a lattice fence or vertical blinds, or they can be two-dimensional structures such as, for instance, a striped pattern on the floor or on a wall.
  • Parallel edges make it more difficult to unambiguously assign corresponding edges between the two recorded images.
  • An incorrect assignment, where different edges are assigned to one another, generally leads to an incorrect distance measurement. This is particularly problematic if the device and the method are used for safeguarding a hazardous region, such as, for instance, the hazardous region of a machine operating in automated fashion.
  • DE 10 2005 063 217 A1 describes a method for configuring a prior art device for monitoring a spatial region.
  • The configuration includes the definition of protection spaces and/or protection regions on the basis of variable geometry elements.
  • The geometry elements are produced graphically in or above a real image of the spatial region to be monitored.
  • The real image can include a number of reference marks that are used to define a configuration plane.
  • The variable geometry element is then produced relative to the configuration plane in order to provide an unambiguous relationship between the “virtual” geometry element and the real spatial region. Incorrect assignments of parallel edges, and measurement errors ensuing therefrom, cannot, however, be avoided with the known method.
  • According to one aspect, there is provided a method for monitoring a spatial region comprising a number of movable objects, the method comprising the steps of: providing a first and a second image recording unit, which are arranged at a defined distance from one another; recording a first image of the spatial region by means of the first image recording unit and recording a second image of the spatial region by means of the second image recording unit; determining a number of object positions on the basis of the first and second images, wherein each object position represents a spatial distance of an object relative to the image recording units; and generating a switching signal depending on the object positions; wherein the spatial region comprises a structure having a plurality of relatively closely spaced substantially parallel edges; wherein a number of defined reference marks is arranged at the structure; wherein a number of reference distances between the image recording units and the reference marks is determined; and wherein a structure position of the structure is determined on the basis of the reference distances.
  • According to another aspect, there is provided a device for monitoring a spatial region comprising a number of movable objects and comprising a stationary structure having a plurality of relatively closely spaced substantially parallel edges, the device comprising: a first image recording unit for recording a first image of the spatial region; a second image recording unit for recording a second image of the spatial region, the first and second image recording units being arranged at a defined distance from one another; and an evaluation and control unit designed to determine a number of object positions on the basis of the first and second images and to generate a switching signal depending on the object positions; wherein each object position represents a spatial distance of one of the number of objects relative to the image recording units; wherein a number of reference marks are arranged at the stationary structure; and wherein the evaluation and control unit is further designed to determine a number of defined reference distances between the image recording units and the reference marks, and to determine a structure position of the stationary structure on the basis of the reference distances.
  • According to yet another aspect, there is provided a computer readable storage medium designed for interfacing with a computer that is connected to a first and a second image recording unit which are arranged at a defined distance from one another, the computer readable storage medium comprising an interface for communicating with the computer and comprising program code configured to execute a method comprising the steps of: recording a first image of a spatial region by means of the first image recording unit and recording a second image of the spatial region by means of the second image recording unit; determining a number of object positions using the first and second images, wherein each object position represents a distance between a movable object located in the spatial region and the image recording units; and generating a switching signal depending on the object positions; wherein the spatial region also comprises a stationary structure having a plurality of relatively closely spaced substantially parallel edges; wherein a number of defined reference marks is arranged at the structure; wherein a number of reference distances between the image recording units and the reference marks is determined; and wherein a structure position of the structure is determined on the basis of the reference distances.
  • The novel method and the novel device are advantageously implemented by means of such a computer program.
  • The novel method and the novel device use defined reference marks in order to selectively mark the problematic edge structure.
  • The reference marks are arranged and preferably fixed at the structure having a plurality of relatively closely spaced substantially parallel edges.
  • The reference marks have a visible surface having light and/or dark regions, which differs from the structure and its immediate vicinity, such that the reference marks can be unambiguously identified and assigned to one another in the recorded images.
  • The reference marks are adhesive films that are adhesively bonded to the circumference and/or to corners of the structure having the parallel edges.
  • It is advantageous if the reference marks are very flat perpendicular to their visible surface, i.e. embodied in paper- or film-like fashion, such that the reference distances between the reference marks and the image recording units have no difference, or at most a very small difference, with respect to the corresponding distances of the structure in the region of the reference marks. Consequently, the reference marks are artificially generated target points within the recorded images which are used to mark the structure in order to facilitate the distance measurement with regard to the structure.
  • The position of the problematic structure can be determined relatively simply and reliably in an “indirect” manner. This includes, in particular, determining the distance of the structure relative to the image recording units.
  • The reference distances are at least temporarily used as distance measurement values for the structure.
  • The novel method and the novel device can be realized cost-effectively in a relatively simple manner. Moreover, they enable high identification reliability and make it possible to avoid, or at least reduce, incorrect assignments in the simultaneously recorded images. Therefore, the novel method and the novel device are suitable, in particular, for safeguarding hazardous regions.
  • The evaluation and control unit of the novel device is designed to determine a virtual reference area on the basis of at least three reference marks which are arranged at the structure and define a plane.
  • The virtual reference area, which is “invisible” in the images, can be determined simply and with high accuracy by means of at least three, preferably at least four, reference marks. Furthermore, the position (including the distance with respect to the image recording units) of the virtual reference area can also be determined simply and with high accuracy on the basis of the images recorded in parallel. Consequently, the reference area marks the problematic edge region, such that this region can be subjected to a separate evaluation in a targeted manner. In this way, the detection dependability and reliability of the novel device and of the novel method can be improved in a very efficient manner.
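The determination of a virtual reference area from at least three reference marks can be sketched as a least-squares plane fit. The following is a minimal illustration under the assumption that the 3-D positions of the marks have already been triangulated from the image pair; the function and variable names are illustrative, not taken from the patent:

```python
import numpy as np

def fit_reference_plane(points: np.ndarray):
    """Fit a plane n . x = c to N >= 3 reference-mark positions (shape (N, 3))."""
    centroid = points.mean(axis=0)
    # The singular vector of least variance of the centered points
    # is the plane normal.
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    return normal, float(normal @ centroid)

# Four marks at the corners of a flat structure lying in the plane z = 2.0
marks = np.array([[0.0, 0.0, 2.0], [1.0, 0.0, 2.0],
                  [1.0, 1.0, 2.0], [0.0, 1.0, 2.0]])
normal, offset = fit_reference_plane(marks)
```

With four marks, as preferred above, the least-squares fit also averages out small measurement errors in the individual reference distances.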
  • An edge distance relative to the image recording units is determined on the basis of the first and second images, and also on the basis of the virtual reference area.
  • An edge distance is determined for a plurality, and particularly preferably for all, of the edges of the structure.
  • On the one hand, this refinement yields accurate distance information for the problematic edges by virtue of the fact that, in addition to the image information and the known distance between the image recording units, the newly obtained information representing position and distance of the reference area is evaluated. On the other hand, the distance information is attributed to the actual image contents, i.e. the parallel edges visible in the images. With this refinement it is possible to obtain unambiguous and accurate distance information for all problematic edges.
  • The further evaluation of the images, in particular the monitoring of protection areas, is simplified by this refinement because problematic edge regions can be processed like “normal” or unproblematic image regions on the basis of the unambiguously determined distance.
  • An edge position within the first and/or second image is determined and permanently stored.
  • The position of the at least one edge in the two-dimensional images is determined and permanently stored.
  • “Permanently stored” means that the edge position is provided in a memory of the novel device at least for the entire duration of uninterrupted operation.
  • The edge position is stored in a nonvolatile memory of the device, such that the edge position can be read out even after an interruption of the power supply.
  • The refinement has the advantage that the problematic edge region can in each case be identified very simply and rapidly in the recorded images during a monitoring mode of operation in which first and second images are repeatedly recorded. The evaluation of a new image pair in a cyclic monitoring mode of operation is simplified, such that this refinement contributes to real-time operation that can be realized in a cost-effective manner.
  • The edge distance for the at least one edge is also permanently stored.
  • This refinement further contributes to enabling a monitoring mode of operation in real time, with the problematic edge structure being taken into account in a simple and cost-effective manner.
  • The unambiguously determined and stored edge distance can advantageously be used in the evaluation of each new image pair for the verification of a currently obtained measurement value and/or instead of a current measurement value.
  • The first and second images are repeatedly recorded, and the object positions are repeatedly determined, wherein the structure position is determined on the basis of the stored edge distance. It is particularly advantageous if the stored edge distance, which, in particular, was determined in a separate refinement mode, is permanently stored and used for each new image pair as the current edge distance.
  • The edge distance determined once replaces subsequent “measurement values” of the edge distance in the repeatedly recorded images.
  • The distance of the edges, despite the repeated image recordings, is thus determined only once and then permanently used.
  • This refinement is advantageously employed only in the case of structures which are spatially stationary, i.e. not moved. By contrast, current distances are determined for all other objects in the spatial region, including for areas that do not have parallel edges.
  • The refinement has the advantage that correct “measurement values” are made available for the problematic edge region in a very efficient manner. False alarms, and resulting machine stops, are avoided in a simple manner during the monitoring of a dangerous machine.
  • This refinement contributes to high detection reliability, since objects in the foreground of the structure must always have a smaller distance with respect to the image recording units and, consequently, can be verified very easily on the basis of the different distances.
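The reuse of a once-determined edge distance during cyclic monitoring can be sketched as follows; the class and method names are assumptions for illustration only, not the patent's implementation:

```python
class EdgeDistanceStore:
    """Holds edge distances determined once (e.g. during configuration)
    for a stationary structure and reuses them in every monitoring cycle."""

    def __init__(self):
        self._stored = {}

    def store(self, edge_id: str, distance: float) -> None:
        self._stored[edge_id] = distance

    def current(self, edge_id: str, measured: float) -> float:
        # For a stored (stationary) edge, the stored value replaces the
        # per-cycle "measurement"; all other objects keep their live value.
        return self._stored.get(edge_id, measured)

store = EdgeDistanceStore()
store.store("edge_48a", 4.0)   # edge distance determined once
```

In each monitoring cycle, `store.current("edge_48a", ...)` then returns the stored 4.0 regardless of the per-cycle measurement, while unknown (movable) objects keep their measured distance.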
  • At least one of the reference marks remains permanently attached to the structure. It is particularly preferred if the stored edge distance is accepted on the basis of the permanently arranged reference mark; in particular, it is only accepted if the at least one reference mark is detected at its originally determined position and/or with its originally determined reference distance.
  • This refinement provides even higher failsafety when safeguarding a machine, since a stored edge distance is used as a “current measurement value” only when, on the basis of the at least one reference mark that has remained, it has been ascertained that the structure has not changed its distance and position relative to the image recording units.
  • The object position of a movable object is verified on the basis of the structure position of the structure. It is advantageous if the stored edge distance in this case represents the structure position of the structure, such that the object position of a movable object is verified on the basis of the at least one edge distance.
  • A plausibility check takes place, which is used to check whether the measured object position of a detected movable object lies within the available spatial region. If the edge structure is, e.g., a lattice or a striped pattern on the floor of the spatial region, the object distance to a movable object has to be less than the edge distance. In other words, the movable object is situated in the foreground, while the structure having the parallel edges constitutes the background.
  • Such a plausibility check contributes to even higher failsafety when monitoring a spatial region, and it contributes to a lower false alarm rate.
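The plausibility check described above reduces to a simple comparison of distances; a minimal sketch, in which the function name and the tolerance value are illustrative assumptions:

```python
def object_position_plausible(object_distance: float,
                              edge_distance: float,
                              tolerance: float = 0.05) -> bool:
    """A movable object in front of a background structure (e.g. a lattice
    or a striped pattern on the floor) must be closer to the image
    recording units than the structure itself."""
    return object_distance < edge_distance - tolerance
```

A measured object distance larger than the stored edge distance would place the object "behind" the floor or lattice, which is physically impossible and indicates a measurement or assignment error.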
  • The structure is spatially stationary.
  • This refinement makes it possible to implement the novel method very simply and efficiently, in particular if the edge distances determined once are intended to be used instead of current measurement values in the course of a monitoring operation.
  • This refinement fits an application that occurs very often in practice, because structures having parallel edges are often the result of barriers, background markings and/or other structural measures.
  • The refinement enables, in a very efficient manner, a high detection reliability when monitoring a spatial region in which parallel edges make it difficult to achieve an unambiguous assignment in the images recorded in parallel.
  • FIG. 1 shows a schematic illustration of an exemplary embodiment of the novel device
  • FIG. 2 shows the device of FIG. 1 (in parts) in a plan view
  • FIG. 3 shows a simplified illustration of a first image of a monitored spatial region
  • FIG. 4 shows a simplified illustration of a second image of the spatial region
  • FIG. 5 shows a further image of the spatial region, wherein defined reference marks are arranged at a structure having parallel edges
  • FIG. 6 shows a further image of the spatial region with a person
  • FIG. 7 shows a flow chart for illustrating an exemplary embodiment of the novel method
  • FIG. 8 shows a further flow chart for illustrating an exemplary embodiment of the novel method.
  • An exemplary embodiment of the novel device is designated by the reference numeral 10 in its entirety.
  • The device 10 serves here for safeguarding the working region 12 of a robot 14.
  • The robot 14 is a machine which operates in automated fashion and which, on account of its movements, constitutes a hazard for persons who enter the working region 12.
  • The novel device and the novel method are not restricted to safeguarding machines. They can also be used for monitoring spatial regions for other reasons, for instance for monitoring a strongroom.
  • The device 10 has two image recording units 16, 18, which are arranged at a defined and known distance b from one another (see FIG. 2).
  • The device 10 has a third image recording unit (not illustrated here), wherein two image recording units in each case are arranged at the defined and known distance b from one another and wherein the three image recording units are arranged in an L-shaped manner with respect to one another.
  • The three image recording units form two pairs of image recording units, wherein only the pair comprising the image recording units 16, 18 is described hereinafter for the sake of simplicity.
  • An L-shaped arrangement of at least three image recording units ensures that edges having arbitrary directions of progression can be identified by at least one of the pairs and evaluated.
  • The device 10 further has an evaluation and control unit 20, which is connected to the image recording units 16, 18.
  • The evaluation and control unit 20 controls the image recording by the image recording units 16, 18 and evaluates the recorded images, in order to drive the robot 14 in a manner dependent on the images.
  • The evaluation and control unit 20 can be part of an operating controller of the robot 14.
  • In preferred embodiments, the evaluation and control unit 20 is a separate safety controller that serves for safeguarding the hazardous working region 12.
  • The operating controller (not illustrated here) of the robot 14 is then realized separately.
  • The evaluation and control unit 20 is able to generate a switching signal in a manner dependent on the images of the image recording units 16, 18, by means of which switching signal the robot 14 can be switched off completely or switched into operation at reduced speed.
  • Reference numeral 22 represents a cable via which the evaluation and control unit 20 transmits the switching signal to the robot 14.
  • The switching signal can include a switch-off signal, a control signal, an enable signal and/or a warning signal.
  • Reference numeral 24 designates a configuration device for configuring the device 10.
  • The configuration device 24 includes a PC 26 and a display 28.
  • The display 28 is designed, inter alia, for displaying images recorded by the image recording units 16, 18. However, the display of these images is primarily used for configuring and/or checking the device 10.
  • The device 10 preferably operates fully automatically, i.e. the evaluation and control unit 20 generates the switching signal 22 in a manner dependent on the images of the image recording units 16, 18, without the images having to be displayed on the display 28.
  • The PC 26 and the evaluation and control unit 20 are connected via a bidirectional data link for the transmission of data and images, said data link being indicated by a double-headed arrow in FIG. 1.
  • Reference numerals 30, 32 represent the recording regions of the image recording units 16, 18. As can readily be seen, the recording regions 30, 32 are somewhat offset with respect to one another. However, there is a common or overlapping recording region 33, which defines the spatial region to be monitored and in which the working region 12 lies here. Only within the common recording region 33 is the evaluation and control unit 20 able to determine distances between the image recording units 16, 18 and objects in the manner described below.
  • Reference numeral 34 designates a warning zone and reference numeral 36 designates a switch-off zone.
  • The warning zone 34 and the switch-off zone 36 are monitoring regions extending at least partly around the working region 12 of the robot 14.
  • The monitoring regions 34, 36 are virtual protective fences that can be defined by means of the configuration device 24. If a moving object, such as a person 38, enters the warning zone 34 and/or the switch-off zone 36, this is identified by means of the device 10 and the evaluation and control unit 20 generates the switching signal 22 for the robot 14. If the person 38 enters the warning zone 34, the evaluation and control unit 20 may generate e.g. a switching signal which has the effect that the robot 14 operates at a reduced speed. Moreover, an acoustic and/or visual warning signal may be generated. If the person 38 enters the switch-off zone 36 despite these measures, the evaluation and control unit 20 may generate a further switching signal, which shuts down the robot 14.
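The two-stage reaction described above can be sketched as a small decision function; the enum and function names are illustrative assumptions, not part of the patent:

```python
from enum import Enum

class Reaction(Enum):
    NONE = "none"
    REDUCED_SPEED = "reduced speed"  # object entered warning zone 34
    SHUT_DOWN = "shut down"          # object entered switch-off zone 36

def reaction_for(in_warning_zone: bool, in_switch_off_zone: bool) -> Reaction:
    # The switch-off zone takes precedence over the warning zone.
    if in_switch_off_zone:
        return Reaction.SHUT_DOWN
    if in_warning_zone:
        return Reaction.REDUCED_SPEED
    return Reaction.NONE
```

Evaluating the switch-off zone first ensures that the stronger reaction always wins when a person is detected in both zones at once.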
  • The evaluation and control unit 20 determines a three-dimensional image of the entire recording region 33 in a manner dependent on the images of the image recording units 16, 18.
  • The three-dimensional image includes distance information representing a distance between the image recording units 16, 18 and individual objects and structures within the common recording region 33.
  • The warning and switch-off zones 34, 36 are defined on the basis of coordinates within the three-dimensional image, such that the entry of a person 38 into one of the zones can be detected solely on the basis of the image data from the image recording units 16, 18.
  • An advantageous method for configuring the warning and switch-off zones 34, 36 is described in DE 10 2005 063 217 A1, cited in the introduction, the disclosure of which is incorporated herein by reference in its entirety.
  • FIG. 3 shows, by way of example and in a simplified manner, a first image 40 of the monitored spatial region, which was recorded by the first image recording unit 16.
  • The image 40 shows here, by way of example, a machine 42 and a switchgear cabinet 44.
  • A structure 46 having a plurality of relatively closely spaced substantially parallel edges 48 is illustrated in front of the machine 42.
  • The structure 46 could be, e.g., a striped pattern comprising light and dark stripes which is arranged on the floor in front of the machine 42.
  • Alternatively, the structure 46 could be a lattice arranged in front of the machine 42, e.g. with run-off channels for liquids.
  • Further examples of structures 46 having parallel or substantially parallel edges may be lattice fences or staircase-like three-dimensional structures.
  • FIG. 4 shows a second image 50, which was recorded by the second image recording unit 18.
  • The second image 50 shows the same spatial region as the first image 40.
  • The first and second images are recorded synchronously and substantially simultaneously with respect to one another.
  • It is also conceivable for the images 40, 50 to be recorded in a temporally offset manner, provided that the temporal offset between the two image recordings is small enough that only a small change in the position of movable objects can occur in the common recording region 33.
  • The first image 40 and the second image 50 share the same scene.
  • The objects and structures 42, 44, 46 in the two images 40, 50 appear offset with respect to one another, as is illustrated by reference numeral 52.
  • The offset d is a consequence of the offset recording regions 30, 32 of the two image recording units 16, 18.
  • The two image recording units 16, 18 have parallel optical axes, and the two image recording units 16, 18 are at least substantially identical.
  • Given the known geometry, the distance to an object follows from the relationship r = (b · f) / d, where
  • f is the focal length of the image recording units,
  • b is the distance (the so-called base width) between the two image recording units 16, 18,
  • r is the distance between an object or a structure edge and the common reference point of the image recording units 16, 18, and
  • d is the so-called disparity, i.e. the offset 52 in the two images.
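The relationship between these quantities is the standard stereo triangulation formula r = b · f / d for parallel optical axes, with f and d expressed in the same units. A minimal numeric sketch with assumed example values:

```python
def distance_from_disparity(f: float, b: float, d: float) -> float:
    """Stereo range r = b * f / d for focal length f (pixels),
    base width b (metres) and disparity d (pixels)."""
    if d <= 0:
        raise ValueError("disparity must be positive")
    return b * f / d

# Example: f = 1000 px, b = 0.12 m, d = 40 px  ->  r is about 3 m
```

Note that r grows as the disparity d shrinks, so distant objects produce small offsets 52 and the measurement becomes more sensitive to an incorrect edge assignment.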
  • A prerequisite for determining the distance r is that the mutually corresponding structure or object edges in the two images 40, 50 can be unambiguously identified and assigned to one another, in order to determine the respective disparity d.
  • The assignment of mutually corresponding edges is difficult and susceptible to errors, particularly if a large number of parallel edges having the same appearance exist. An incorrect edge assignment leads to incorrect distance information.
  • FIG. 5 shows a first image 40 ′, in which four reference marks 56 a , 56 b , 56 c , 56 d were arranged at the corners of the structure 46 .
  • The reference marks 56 are punctiform or circular film pieces which have a defined light-dark pattern.
  • The reference marks are fixed, e.g. adhesively bonded, to the structure 46 .
  • Each reference mark 56 forms a defined target point within the images, wherein only the first image 40 ′ is illustrated here for the sake of simplicity.
  • The reference marks 56 can easily be identified and assigned to one another in the recorded images on the basis of their known and/or high-contrast features. On the basis of the respective disparity it is possible to determine the distance r between the image recording units 16 , 18 and the individual reference marks 56 a , 56 b , 56 c , 56 d.
  • the reference marks 56 a , 56 b , 56 c , 56 d define a virtual reference area 58 , which overlays the structure 46 congruently here. Since the distances of the individual reference marks 56 a , 56 b , 56 c , 56 d can be determined accurately, the position and distance of the virtual reference area 58 can also be determined accurately on the basis of the recorded images. In the preferred exemplary embodiments, the disparities and distances of the individual parallel edges 48 a , 48 b are determined based on these data and are assigned to the respective edges 48 a , 48 b .
  • For determining the disparities of the parallel edges 48 a , 48 b , it is advantageous that the disparities of the edges change proportionally within the (now known) reference area.
  • For each edge in the first image, the respectively corresponding edge in the second image can therefore be unambiguously assigned. In this way, the disparities and distances of all the edges 48 a , 48 b can be determined unambiguously and in a manner free of errors.
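One way to exploit this proportionality can be sketched as follows: the disparity expected at each edge position is predicted by linear interpolation between two reference marks, and the candidate edge in the second image whose offset comes closest to this prediction is chosen. Function and parameter names are hypothetical:

```python
def assign_edges(edge_x_first, candidate_x_second, mark_x, mark_disp):
    """Assign each edge in the first image to the candidate edge in the
    second image whose offset best matches the disparity predicted by
    linear interpolation between two reference marks (hypothetical sketch).

    Returns a list of (first-image x, second-image x, disparity) tuples.
    """
    (x0, x1), (d0, d1) = mark_x, mark_disp
    assignments = []
    for x in edge_x_first:
        # disparity predicted from the (planar) reference area
        t = (x - x0) / (x1 - x0)
        d_pred = d0 + t * (d1 - d0)
        # choose the candidate whose resulting disparity is closest to the prediction
        best = min(candidate_x_second, key=lambda xc: abs((x - xc) - d_pred))
        assignments.append((x, best, x - best))
    return assignments
```

For example, with marks at x = 100 and x = 300 having disparities 20 and 10, an edge at x = 200 is expected to have a disparity near 15, so among several identically looking candidates the one offset by about 15 pixels is selected.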
  • FIG. 6 shows a further image 40 ′′ of the first image recording unit 16 , wherein now the person 38 is approaching the machine 42 .
  • Evidently, the person 38 has to be situated in front of the structure 46 , which forms a background.
  • In order to determine the position of the person 38 , it is necessary to detect contour edges 39 of the person in the parallel images and to assign them to one another in order to determine the disparities. Should it emerge on the basis of the disparities with respect to the contour edges 39 that the person 38 is further away than the edge structure 46 , an incorrect edge assignment with regard to the contour edge 39 is evidently present.
  • The edge assignment with regard to the contour edge 39 is then corrected and the disparities are determined anew with the corrected edge assignments.
  • If a plausible distance results, the corrected measurement is accepted as a valid measurement. Only if no edge assignment is possible which produces a valid measurement is the robot switched off. Otherwise, the distance accepted as plausible is used for the further evaluation. Consequently, it is possible to determine the distance with respect to the contour edge 39 , and thus the three-dimensional position of the person 38 within the monitored spatial region, in an efficient and reliable manner despite the parallel edges.
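The plausibility rule described here — a movable object must lie in front of the background structure — amounts to a simple distance comparison. A minimal sketch, where the tolerance `margin` is an assumption:

```python
def plausible_object_distance(object_distance: float,
                              background_distance: float,
                              margin: float = 0.0) -> bool:
    """Plausibility check: a movable object detected in front of the
    stationary edge structure must be closer to the image recording units
    than the structure itself. Returns False for an implausible (likely
    misassigned) measurement. `margin` is a hypothetical tolerance.
    """
    return object_distance < background_distance - margin

# A person measured at 3.2 m in front of a lattice at 5.0 m is plausible;
# a "person" measured at 6.1 m behind the lattice indicates a wrong
# edge assignment that must be corrected before the distance is accepted.
```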
  • FIGS. 7 and 8 show a preferred exemplary embodiment of the novel method on the basis of two flow charts.
  • FIG. 7 shows a configuration mode by means of which the device 10 is configured before the actual monitoring mode of operation begins.
  • The images 40 , 50 are recorded—preferably synchronously—by means of the image recording units 16 , 18 .
  • Step 66 involves searching for the disparities with respect to the various object edges and structure edges in the images. This search includes identifying and assigning mutually corresponding object edges and structure edges.
  • Step 68 involves identifying structures for which the disparities can be determined only with difficulty or cannot be determined at all, owing to a lack of unambiguous assignment. In the examples of FIGS. 3 to 6 , this applies to the structure 46 .
  • In step 70 , reference marks 56 are arranged at the identified structure.
  • In steps 72 and 74 , a further pair of images is recorded by means of the image recording units 16 , 18 .
  • Step 76 involves identifying the reference marks in the further images.
  • Step 78 involves determining the disparities of the reference marks.
  • In addition, the virtual reference area 58 is determined by means of the disparities of the reference marks.
  • Step 80 involves determining the disparities of the individual edges 48 a , 48 b of the structure 46 , wherein the now known position of the reference area 58 and the known disparities of the reference marks 56 are used in order to obtain an unambiguous assignment of the respectively corresponding edges 48 a , 48 b in the two images.
  • Step 82 involves permanently storing the disparities of the individual edges 48 a , 48 b .
  • Step 84 involves storing the image positions of the individual edges 48 a , 48 b and also the positions of the reference marks 56 .
  • Here, image position designates the position of the edges and reference marks within the two-dimensional images of the image recording units 16 , 18 .
  • In step 86 , it is possible to configure individual protection spaces such as, for instance, the warning zone 34 or the switch-off zone 36 .
  • The protection spaces are preferably configured according to the method described in DE 10 2005 063 217 A1 cited in the introduction.
  • The device 10 can then be switched to the monitoring mode, which is illustrated with reference to FIG. 8 .
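The configuration sequence of FIG. 7 can be condensed into a short sketch. The data layout and function signatures below are assumptions for illustration; `reference_disparity` stands for any predictor derived from the reference area, such as the interpolation between marks shown earlier:

```python
from dataclasses import dataclass

@dataclass
class Configuration:
    """Record stored permanently in steps 82 and 84 (layout is an assumption)."""
    edge_disparities: dict   # edge image position -> disparity (step 82)
    mark_positions: list     # image positions of the reference marks (step 84)
    mark_disparities: list   # disparities of the reference marks

def run_configuration(mark_obs, edge_positions, reference_disparity):
    """Condensed sketch of steps 76 to 84: take the identified marks and
    their disparities, assign a disparity to every parallel edge via the
    reference area, and collect everything for permanent storage.

    `mark_obs` maps a mark id to an (image position, disparity) pair.
    """
    mark_positions = [pos for pos, _ in mark_obs.values()]
    mark_disparities = [disp for _, disp in mark_obs.values()]
    edge_disparities = {x: reference_disparity(x) for x in edge_positions}
    return Configuration(edge_disparities, mark_positions, mark_disparities)
```

Storing this record once is what later allows the monitoring mode to skip a fresh disparity search for the problematic edges.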
  • In steps 88 and 90 , a pair of images is recorded by means of the image recording units 16 , 18 .
  • The reference marks 56 are identified in the recorded images in accordance with step 92 .
  • In addition, the distances and positions of the reference marks 56 are determined on the basis of the disparities.
  • Step 94 involves checking whether the distances and/or positions of the reference marks 56 are unchanged relative to the stored distance and position information for the reference marks. If this is not the case, step 96 involves generating a switching signal which, in the preferred exemplary embodiments, includes a switch-off signal for switching off the robot or the monitored machine. As an alternative thereto, the switching signal can trigger a new disparity determination in the region of the reference marks and/or of the edge structure, with the machine being switched off only if no plausible distance values arise in the new disparity determination.
  • Step 98 involves identifying the parallel edges of the structure 46 on the basis of the stored image positions.
  • The stored edge disparities are then assigned to the identified edges 48 a , 48 b .
  • The preferred exemplary embodiment of the novel method thus dispenses with a procedure in which the disparities and distances of the parallel edges 48 a , 48 b are repeatedly determined during the monitoring mode of operation. Rather, the edge disparities determined and stored in the configuration mode replace current measurement values.
  • Step 102 involves determining the disparities for all further object edges.
  • Step 104 then involves monitoring the protection space, i.e. evaluating whether a person or some other object has entered one of the defined protection spaces 34 , 36 . If an intrusion into a protection space is detected in accordance with step 106 , step 108 involves generating a signal which can once again include a switch-off signal for switching off the monitored machine. If no intrusion into a protection space is detected, the method returns to the image recording in steps 88 , 90 in accordance with loop 110 . In the monitoring mode, the method in accordance with FIG. 8 is therefore repeated cyclically.
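One cycle of the FIG. 8 loop can be sketched as a pure function. The parameter names and the simplified protection-zone test (a single distance threshold) are assumptions, not the patent's method:

```python
def monitoring_cycle(mark_obs, object_distances, stored_marks, zone_limit):
    """One pass of the FIG. 8 loop, condensed into a sketch.

    Returns 'switch_off' if the reference marks have moved relative to the
    stored configuration (steps 94/96) or if an object has intruded into the
    protection zone (steps 104-108); otherwise 'continue' (loop 110).
    `mark_obs` and `stored_marks` map mark id -> measured distance;
    `zone_limit` is a hypothetical single-threshold protection zone.
    """
    # step 94: reference marks must be where the configuration stored them
    if any(abs(mark_obs[m] - stored_marks[m]) > 1e-6 for m in stored_marks):
        return "switch_off"          # step 96
    # steps 104/106: any object closer than the zone limit counts as intrusion
    if any(r < zone_limit for r in object_distances):
        return "switch_off"          # step 108
    return "continue"                # loop 110
```

In the real device this function would be called for every newly recorded image pair, with the stored edge disparities substituted before the object distances are evaluated.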
  • To summarize, the novel method and the novel device serve to prevent an incorrect assignment of parallel edges in stereoscopically recorded images by using defined reference marks, such as artificial ring patterns, in order to carry out a correct disparity and distance determination for the parallel edges.
  • The disparities and/or distances for the parallel edges are advantageously stored permanently in a memory of the evaluation and control unit and assigned to the problematic edges in the subsequent monitoring mode of operation.
  • In addition, the position of the parallel edges in the two-dimensional images is determined and stored together with the disparity and/or distance information. Incorrect assignments of edges can be reduced to a minimum by means of this method.
  • Preferably, at least three reference marks are used in the configuration mode in order to determine the disparities/distances with respect to the parallel edges.
  • At least one of the reference marks remains permanently at the problematic structure in order to ensure, by means of said reference mark, that the disparities/distances determined in the configuration mode are also valid in the monitoring mode of operation. It is therefore advantageous if the problematic structures having parallel edges are stationary and delimit the monitored spatial region, since the parallel edges in this case form a background against which movable objects move in the monitoring mode of operation. The respective distance of the moving objects must therefore be smaller than the distances to the parallel edges in the background, which enables a simple plausibility check. By means of the novel method, it is possible to significantly reduce computational complexity when determining disparities and distances at parallel edges. Moreover, incorrect assignments and measurement errors ensuing therefrom can be avoided.


Abstract

A first and a second image recording unit, which are arranged at a defined distance from one another, are provided for monitoring a spatial region. The spatial region has at least one structure having a plurality of substantially parallel edges. A number of reference marks are arranged at the structure. A first image of the spatial region is recorded by means of the first image recording unit. A second image is recorded by means of the second image recording unit. A number of reference distances between the image recording units and the reference marks are determined. A structure position of the structure is determined on the basis of the reference distances. Moreover, a number of object positions are determined on the basis of the first and second images, wherein each object position represents the spatial distance of an object relative to the image recording units. Depending on the object positions, a switching signal is generated.

Description

CROSS REFERENCES TO RELATED APPLICATIONS
This application is a continuation of international patent application PCT/EP2010/060686 filed on Jul. 23, 2010 designating the U.S., which international patent application has been published in German language and claims priority from German patent application DE 10 2009 035 755.6 filed on Jul. 24, 2009. The entire contents of these priority applications are incorporated herein by reference.
BACKGROUND OF THE INVENTION
The present invention relates to a method and a device for monitoring a spatial region, especially in the context of safeguarding a hazardous machine.
EP 1 543 270 B1 discloses a method and a device which serve for safeguarding the hazardous region of a machine operating in automated fashion. The device has at least two image recording units arranged at a defined distance from one another. Each image recording unit provides an image of the spatial region in which the machine is arranged. The images of the two image recording units are evaluated by means of at least two different scene analysis methods, in order to obtain a three-dimensional image of the monitored spatial region. In other words, positions of individual objects in the spatial region are determined on the basis of the images recorded in parallel, wherein said positions also include distance information. Afterward, on the basis of defined protection regions or protection spaces, it is possible to decide whether an object, such as a person, for instance, has approached the machine operating in automated fashion to an extent such that a hazard exists. If appropriate, the machine is switched off or operated at reduced speed.
With such a device, the scene analysis methods and, in particular, the determination of object distances are typically based on objects and/or structures in each of the two recorded images, which objects have to be identified and assigned to one another. Since the two image recording units are arranged at a distance from one another, each image recording unit has a different viewing angle. Therefore, identical objects and/or structures appear offset with respect to one another in the two images, wherein the relative offset of an object in the two images is dependent on the defined distance between the image recording units and on the distance of the object with respect to the image recording units. Consequently, given a known distance between the image recording units, it is possible to determine the distance to the object on the basis of the images. The functional principle corresponds in a certain way to human three-dimensional vision.
The Assignee's practical experience with a device of this type has shown that a reliable and accurate distance measurement using this principle is very complex and difficult if structures having a plurality of relatively closely spaced substantially parallel edges are present within the monitored spatial region. Such edges can be three-dimensional structures such as, for instance, a staircase, a lattice fence or vertical blinds, or they can be two-dimensional structures such as, for instance, a striped pattern on the floor or on a wall. The parallel edges make it more difficult to unambiguously assign an edge in the two recorded images. An incorrect assignment, where different edges are assigned to one another, generally leads to an incorrect distance measurement. This is particularly problematic if the device and the method are used for safeguarding a hazardous region, such as, for instance, the hazardous region of a machine operating in automated fashion.
DE 10 2005 063 217 A1 describes a method for configuring a prior art device for monitoring a spatial region. The configuration includes the definition of protection spaces and/or protection regions on the basis of variable geometry elements. In one exemplary embodiment, the geometry elements are produced graphically in or above a real image of the spatial region to be monitored. The real image can include a number of reference marks that are used to define a configuration plane. The variable geometry element is then produced relative to the configuration plane in order to provide an unambiguous relationship between the “virtual” geometry element and the real spatial region. Incorrect assignments of parallel edges and measurement errors ensuing therefrom cannot, however, be avoided with the known method.
SUMMARY OF THE INVENTION
Against this background, it is an object of the present invention to provide a method and a device that enable reliable monitoring of a spatial region even when structures having relatively closely spaced substantially parallel edges are present in the spatial region. It is another object to provide a method and a device that enable reliable safeguarding of a hazardous region of a machine operating in automated fashion if structures having substantially parallel edges are present in the vicinity of the machine. It is yet another object to provide a method and device that make it possible to monitor a region including structures having substantially parallel edges in a simple and cost-effective manner.
In accordance with a first aspect of the invention, there is provided a method for monitoring a spatial region comprising a number of movable objects, the method comprising the steps of providing a first and a second image recording unit, which are arranged at a defined distance from one another; recording a first image of the spatial region by means of the first image recording unit and recording a second image of the spatial region by means of the second image recording unit; determining a number of object positions on the basis of the first and second images, wherein each object position represents a spatial distance of an object relative to the image recording units; and generating a switching signal depending on the object positions; wherein the spatial region comprises a structure having a plurality of relatively closely spaced substantially parallel edges; wherein a number of defined reference marks is arranged at the structure; wherein a number of reference distances between the image recording units and the reference marks is determined; and wherein a structure position of the structure is determined on the basis of the reference distances.
According to another aspect of the invention, there is provided a device for monitoring a spatial region comprising a number of movable objects and comprising a stationary structure having a plurality of relatively closely spaced substantially parallel edges, the device comprising a first image recording unit for recording a first image of the spatial region; a second image recording unit for recording a second image of the spatial region, the first and second image recording units being arranged at a defined distance from one another; and an evaluation and control unit designed to determine a number of object positions on the basis of the first and second images and to generate a switching signal depending on the object positions; wherein each object position represents a spatial distance of one of the number of objects relative to the image recording units; wherein a number of reference marks are arranged at the stationary structure; and wherein the evaluation and control unit is further designed to determine a number of defined reference distances between the image recording units and the reference marks, and to determine a structure position of the stationary structure on the basis of the reference distances.
According to yet another aspect, there is provided a computer readable storage medium designed for interfacing with a computer that is connected to a first and a second image recording unit which are arranged at a defined distance from one another, the computer readable storage medium comprising an interface for communicating with the computer and comprising program code configured to execute a method comprising the steps of recording a first image of a spatial region by means of the first image recording unit and recording a second image of the spatial region by means of the second image recording unit; determining a number of object positions using the first and second images, wherein each object position represents a distance between a moveable object located in the spatial region and the image recording units; and generating a switching signal depending on the object positions; wherein the spatial region also comprises a stationary structure having a plurality of relatively closely spaced substantially parallel edges; wherein a number of defined reference marks is arranged at the structure; wherein a number of reference distances between the image recording units and the reference marks is determined; and wherein a structure position of the structure is determined on the basis of the reference distances.
The novel method and the novel device are advantageously implemented by means of such a computer program.
The novel method and the novel device use defined reference marks in order to selectively mark the problematic edge structure. The reference marks are arranged and preferably fixed at the structure having a plurality of relatively closely spaced substantially parallel edges. The reference marks have a visible surface having light and/or dark regions, which differs from the structure and the immediate vicinity thereof, such that the reference marks can be unambiguously identified and assigned to one another in the recorded images. In preferred exemplary embodiments, the reference marks are adhesive films that are adhesively bonded to the circumference and/or to corners of the structure having the parallel edges. Generally, it is advantageous if the reference marks are very flat perpendicular to their visible surface, i.e. are embodied in paper- or film-like fashion, such that the reference distances between the reference marks and the image recording units have no or at most a very small difference with respect to the corresponding distances of the structure in the region of reference marks. Consequently, the reference marks are artificially generated target points within the recorded images which are used to mark the structure in order to facilitate the distance measurement with regard to the structure. By means of the reference marks, the position of the problematic structure can be determined relatively simply and reliably in an “indirect” manner. This includes, in particular, determining the distance of the structure relative to the image recording units. The reference distances are at least temporarily used as distance measurement values for the structure.
The novel method and the novel device can be realized cost-effectively in a relatively simple manner. Moreover, they enable high identification reliability and make it possible to avoid or at least reduce incorrect assignments in the images simultaneously recorded. Therefore, the novel method and the novel device are suitable, in particular, for use for safeguarding hazardous regions.
In a preferred refinement, at least three reference marks defining a virtual reference area are arranged at the structure, wherein the virtual reference area substantially overlays the plurality of parallel edges. Accordingly, the evaluation and control unit of the novel device is designed to determine a virtual reference area on the basis of at least three reference marks which are arranged at the structure and define a plane.
This refinement has the advantage that the majority or even all of the problematic edges of the structure can be marked, without fundamentally altering or concealing the structure. The virtual reference area, which is "invisible" in the images, can be determined simply and with high accuracy by means of at least three, preferably at least four, reference marks. Furthermore, the position (including the distance with respect to the image recording units) of the virtual reference area can also be determined simply and with high accuracy on the basis of the images recorded in parallel. Consequently, the reference area marks the problematic edge region, such that this region can be subjected to a separate evaluation in a targeted manner. In this way, the detection dependability and reliability of the novel device and of the novel method can be improved in a very efficient manner.
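A minimal sketch of deriving the virtual reference area from three mark positions, assuming the 3-D coordinates of the marks have already been reconstructed from the reference disparities (function names are hypothetical):

```python
def plane_from_marks(p0, p1, p2):
    """Virtual reference area through three reference marks, as a plane in
    Hessian normal form. p0..p2 are 3-D mark positions (x, y, z)."""
    def sub(a, b):
        return tuple(ai - bi for ai, bi in zip(a, b))
    def cross(a, b):
        return (a[1]*b[2] - a[2]*b[1],
                a[2]*b[0] - a[0]*b[2],
                a[0]*b[1] - a[1]*b[0])
    n = cross(sub(p1, p0), sub(p2, p0))
    norm = sum(c * c for c in n) ** 0.5
    n = tuple(c / norm for c in n)                 # unit normal of the plane
    d = sum(ni * pi for ni, pi in zip(n, p0))      # plane offset: n . x = d
    return n, d

def mark_plane_distance(p, plane):
    """Distance of a point from the plane; useful for checking that a fourth
    reference mark is consistent (coplanar) with the other three."""
    n, d = plane
    return abs(sum(ni * pi for ni, pi in zip(n, p)) - d)
```

With a fourth mark, `mark_plane_distance` provides a simple consistency check before the reference area is trusted.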
In a further refinement, for at least one of the parallel edges, an edge distance relative to the image recording units is determined on the basis of the first and second images, and also on the basis of the virtual reference area. Preferably, an edge distance is determined for a plurality and particularly preferably for all of the edges of the structure.
This refinement yields accurate distance information for the problematic edges by virtue of the fact that, in addition to the image information and the known distance between the image recording units, the newly obtained information representing position and distance of the reference area is evaluated. On the other hand, the distance information is attributed to the actual image contents, i.e. the parallel edges visible in the images. With this refinement it is possible to obtain unambiguous and accurate distance information for all problematic edges. The further evaluation of the images, in particular the monitoring of protection areas, is simplified by this refinement because problematic edge regions can be processed like "normal" or unproblematic image regions on the basis of the unambiguously determined distance.
In a further refinement, for the at least one edge, an edge position within the first and/or second image is determined and permanently stored.
In this refinement, the position of the at least one edge in the two-dimensional images is determined and permanently stored. “Permanently stored” means that the edge position is provided in a memory of the novel device at least for the entire duration of uninterrupted operation. Preferably, the edge position is stored in a nonvolatile memory of the device, such that the edge position can be read out even after an interruption of the power supply. The refinement has the advantage that the problematic edge region can be identified in each case very simply and rapidly in the recorded images during a monitoring mode of operation in which first and second images are repeatedly recorded. The evaluation of a new image pair in a cyclic monitoring mode of operation is simplified, such that this refinement contributes to real-time operation that can be realized in a cost-effective manner.
In a further refinement, the edge distance for the at least one edge is also permanently stored.
This refinement further contributes to enabling a monitoring mode of operation in real time, with the problematic edge structure being taken into account in a simple and cost-effective manner. The unambiguously determined and stored edge distance can advantageously be used in the evaluation of each new image pair for the verification of a currently obtained measurement value and/or instead of a current measurement value.
In a further refinement, the first and second images are repeatedly recorded, and the object positions are repeatedly determined, wherein the structure position is determined on the basis of the stored edge distance. It is particularly advantageous if the stored edge distance, which, in particular, was determined in a separate configuration mode, is permanently stored and used for each new image pair as the current edge distance.
In this refinement, the edge distance determined once replaces subsequent “measurement values” of the edge distance in the repeatedly recorded images. In other words, the distance of the edges, despite the repeated image recordings, is determined only once and then permanently used. This refinement is advantageously employed only in the case of structures which are spatially stationary or not moved. By contrast, current distances are determined for all other objects in the spatial region, including for areas that do not have parallel edges. The refinement has the advantage that correct “measurement values” are made available for the problematic edge region in a very efficient manner. False alarms and resulting machine stops are avoided in a simple manner during the monitoring of a dangerous machine. On the other hand, this refinement contributes to high detection reliability, since objects in the foreground of the structure must always have a smaller distance with respect to the image recording units and, consequently, can be verified very easily on the basis of the different distances.
In a further refinement, at least one of the reference marks remains permanently attached at the structure. It is particularly preferred if the stored edge distance is accepted on the basis of the permanently arranged reference mark; in particular, it is only accepted if the at least one reference mark is detected at its originally determined position and/or with its originally determined reference distance.
This refinement provides even higher failsafety when safeguarding a machine, since a stored edge distance is used as a “current measurement value” only when, on the basis of the at least one reference mark that has remained, it has been ascertained that the structure has not changed its distance and position relative to the image recording units.
In a further refinement, the object position of a movable object is verified on the basis of the structure position of the structure. It is advantageous if the stored edge distance in this case represents the structure position of the structure, such that the object position of a movable object is verified on the basis of the at least one edge distance.
In this refinement, a plausibility check takes place, which is used to check whether the measured object position of a detected movable object lies within the available spatial region. If the edge structure is e.g. a lattice or a striped pattern on the floor of the spatial region, the object distance to a movable object has to be less than the edge distance. In other words, the movable object is situated in the foreground, while the structure having the parallel edges constitutes a background. Such a plausibility check contributes to even higher failsafety when monitoring a spatial region, and it contributes to a lower false alarm rate.
In a further refinement, the structure is spatially stationary.
This refinement makes it possible to implement the novel method very simply and efficiently, in particular if the edge distances determined once are intended to be used instead of current measurement values in the course of a monitoring operation. On the other hand, this refinement fits an application that occurs very often in practice because structures having parallel edges are often the result of barriers, background markings and/or other structural measures. The refinement enables, in a very efficient manner, a high detection reliability when monitoring a spatial region in which parallel edges make it difficult to achieve unambiguous assignment in the images recorded in parallel.
It goes without saying that the features mentioned above and those yet to be explained below can be used not only in the combination respectively specified, but also in other combinations or by themselves, without departing from the scope of the present invention.
BRIEF DESCRIPTION OF THE DRAWINGS
Exemplary embodiments of the invention are illustrated in the drawing and are explained in greater detail in the description below. In the Figures:
FIG. 1 shows a schematic illustration of an exemplary embodiment of the novel device,
FIG. 2 shows the device of FIG. 1 (in parts) in a plan view,
FIG. 3 shows a simplified illustration of a first image of a monitored spatial region,
FIG. 4 shows a simplified illustration of a second image of the spatial region,
FIG. 5 shows a further image of the spatial region, wherein reference marks defined here are arranged at a structure having parallel edges,
FIG. 6 shows a further image of the spatial region with a person,
FIG. 7 shows a flow chart for illustrating an exemplary embodiment of the novel method, and
FIG. 8 shows a further flow chart for illustrating an exemplary embodiment of the novel method.
DESCRIPTION OF PREFERRED EMBODIMENTS
In FIG. 1, an exemplary embodiment of the novel device is designated by the reference numeral 10 in its entirety. The device 10 serves here for safeguarding the working region 12 of a robot 14. The robot 14 is a machine which operates in automated fashion and which, on account of its movements, constitutes a hazard for persons who enter the working region 12. Even though it is a preferred exemplary embodiment, the novel device and the novel method are not restricted to safeguarding machines. They can also be used for monitoring spatial regions for other reasons, for instance for monitoring a strongroom.
The device 10 has two image recording units 16, 18, which are arranged at a defined and known distance b from one another (see FIG. 2). In the preferred exemplary embodiments, the device 10 has a third image recording unit (not illustrated here), wherein two image recording units in each case are arranged at the defined and known distance b from one another and wherein the three image recording units are arranged in an L-shaped manner with respect to one another. In this case, the three image recording units form two pairs of image recording units, wherein only one pair comprising the image recording units 16, 18 is described hereinafter for the sake of simplicity. An L-shaped arrangement of at least three image recording units ensures that edges having arbitrary directions of progression can be identified by at least one of the pairs and evaluated.
The device 10 further has an evaluation and control unit 20, which is connected to the image recording units 16, 18. The evaluation and control unit 20 controls the image recording by the image recording units 16, 18 and evaluates the recorded images, in order to drive the robot 14 in a manner dependent on the images.
In principle, the evaluation and control unit 20 can be part of an operating controller of the robot 14. In some exemplary embodiments, the evaluation and control unit 20 is a separate safety controller that serves for safeguarding the hazardous working region 12. The operating controller (not illustrated here) of the robot 14 is then realized separately. The evaluation and control unit 20 is able to generate a switching signal in a manner dependent on the images of the image recording units 16, 18, by means of which switching signal the robot 14 can be switched off completely or switched to operation at reduced speed. Reference numeral 22 represents a cable via which the evaluation and control unit 20 transmits the switching signal to the robot 14. The switching signal can include a switch-off signal, a control signal, an enable signal and/or a warning signal.
Reference numeral 24 designates a configuration device for configuring the device 10. In the preferred exemplary embodiments, the configuration device 24 includes a PC 26 and a display 28. The display 28 is designed, inter alia, for displaying images recorded by the image recording units 16, 18. However, the display of these images is primarily used for configuring and/or checking the device 10. In the normal working mode of operation, the device 10 preferably operates fully automatically, i.e. the evaluation and control unit 20 generates the switching signal 22 in a manner dependent on the images of the image recording units 16, 18, without the images having to be displayed on the display 28. The PC 26 and the evaluation and control unit 20 are connected via a bidirectional data link for the transmission of data and images, said data link being indicated by a double-headed arrow in FIG. 1.
Reference numerals 30, 32 represent the recording regions of the image recording units 16, 18. As can be readily seen, the recording regions 30, 32 are somewhat offset with respect to one another. However, there is a common or overlapping recording region 33, which defines the spatial region to be monitored and in which the working region 12 lies here. Only within the common recording region 33 is the evaluation and control unit 20 able to determine distances between the image recording units 16, 18 and objects in the manner described below.
Reference numeral 34 designates a warning zone and reference numeral 36 designates a switch-off zone. The warning zone 34 and the switch-off zone 36 are monitoring regions extending at least partly around the working region 12 of the robot 14. The monitoring regions 34, 36 are virtual protective fences that can be defined by means of the configuration device 24. If a moving object, such as a person 38, enters the warning zone 34 and/or the switch-off zone 36, this is identified by means of the device 10 and the evaluation and control unit 20 generates the switching signal 22 for the robot 14. If the person 38 enters the warning zone 34, the evaluation and control unit 20 may generate e.g. a switching signal which has the effect that the robot 14 operates at a reduced speed. Moreover, an acoustic and/or visual warning signal may be generated. If the person 38 enters the switch-off zone 36 despite these measures, the evaluation and control unit 20 may generate a further switching signal, which shuts down the robot 14.
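The graduated reaction described above — reduced speed on entry into the warning zone, shutdown on entry into the switch-off zone — can be sketched as follows. This is a hypothetical Python illustration only; the box-shaped zones, function names and signal strings are assumptions and not part of the disclosure:

```python
def switching_signal(position, warning_zone, switch_off_zone):
    """Decide the switching signal for a detected object position.

    Zones are modelled as axis-aligned boxes ((xmin, ymin, zmin),
    (xmax, ymax, zmax)) for illustration; real monitoring regions
    may have arbitrary configured shapes.
    """
    def inside(point, zone):
        lo, hi = zone
        return all(l <= c <= h for c, l, h in zip(point, lo, hi))

    if inside(position, switch_off_zone):
        return "switch_off"      # shut the machine down
    if inside(position, warning_zone):
        return "reduced_speed"   # warning signal, operation at reduced speed
    return "normal"              # no intrusion detected
```

The switch-off zone is checked first, so a position inside both zones always yields the stronger reaction.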
In order to detect the intrusion of a person 38 or some other movable object into one of the zones 34, 36, the evaluation and control unit 20 determines a three-dimensional image of the entire recording region 33 in a manner dependent on the images of the image recording units 16, 18. The three-dimensional image includes distance information representing a distance between the image recording units 16, 18 and individual objects and structures within the common recording region 33. The warning and switch-off zones 34, 36 are defined on the basis of coordinates within the three-dimensional image, such that the entry of a person 38 into one of the zones can be detected solely on the basis of the image data from the image recording units 16, 18. An advantageous method for configuring the warning and switch-off zones 34, 36 is described in DE 10 2005 063 217 A1 cited in the introduction, the disclosure of which is here incorporated by reference in its entirety.
A preferred method for determining the distance information is briefly explained below with reference to FIGS. 3 and 4. FIG. 3 shows, by way of example and in a simplified manner, a first image 40 of the monitored spatial region, which was recorded by the first image recording unit 16. The image 40 shows here by way of example a machine 42 and a switchgear cabinet 44. In addition, a structure 46 having a plurality of relatively closely spaced substantially parallel edges 48 is illustrated in front of the machine 42. The structure 46 could be e.g. a striped pattern comprising light and dark stripes which is arranged on the floor in front of the machine 42. Furthermore, the structure 46 could be a lattice arranged in front of the machine 42, e.g. a grating covering run-off channels for liquids. Further examples of structures 46 having parallel or substantially parallel edges may be lattice fences or staircase-like three-dimensional structures.
FIG. 4 shows a second image 50, which was recorded by the second image recording unit 18. The second image 50 shows the same spatial region as the first image 40. In all of the preferred exemplary embodiments, the first and second images are recorded synchronously and substantially simultaneously with respect to one another. In principle, however, it is also conceivable for the images 40, 50 to be recorded in a temporally offset manner, provided that the temporal offset between the two image recordings is small enough that only a small change in the position of movable objects can occur in the common recording region 33.
As is illustrated with reference to FIGS. 3 and 4, the first image 40 and the second image 50 share the same scene. However, the objects and structures 42, 44, 46 in the two images 40, 50 appear offset with respect to one another, as is illustrated by reference numeral 52. The offset d is a consequence of the offset recording regions 30, 32 of the two image recording units 16, 18.
Preferably, the two image recording units 16, 18 have parallel optical axes and the two image recording units 16, 18 are at least substantially identical. For this case, the following relationship holds true:
r=b·f/d
where f is the focal length of the image recording units, b is the distance (the so-called base width) between the two image recording units 16, 18, r is the distance between an object or a structure edge and the common reference point of the image recording units 16, 18, and d is the so-called disparity, i.e. the offset 52 in the two images. Given a known base width b and a known focal length f, the respective distance r of an object or structure edge with respect to the image recording units can be determined on the basis of the measured disparities.
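The relationship r = b·f/d can be expressed as a small calculation routine (an illustrative sketch; the function name and units are assumptions — b and r share one length unit, while f and d share another, e.g. pixels):

```python
def distance_from_disparity(b, f, d):
    """Stereo range from the relationship r = b * f / d.

    b: base width between the two image recording units,
    f: focal length, d: disparity (offset of a feature between the images).
    b and r share one unit (e.g. metres); f and d share another (e.g. pixels).
    """
    if d <= 0:
        raise ValueError("disparity must be positive for a finite distance")
    return b * f / d

# Example: base width 0.1 m, focal length 1000 px, disparity 50 px.
r = distance_from_disparity(0.1, 1000, 50)
```

Note that a nearer object produces a larger disparity, so r decreases as d grows; an object at infinite distance would have zero disparity.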
A prerequisite for determining the distance r, however, is that the mutually corresponding structure or object edges in the two images 40, 50 can be unambiguously identified and assigned to one another, in order to determine the respective disparity d. In the case of structures 46 having a plurality of relatively closely spaced parallel edges 48 a, 48 b, the assignment of mutually corresponding edges is difficult and susceptible to errors, particularly if a large number of parallel edges having the same appearance exist. An incorrect edge assignment leads to incorrect distance information.
FIG. 5 shows a first image 40′, in which four reference marks 56 a, 56 b, 56 c, 56 d were arranged at the corners of the structure 46. In preferred exemplary embodiments, the reference marks 56 are punctiform or circular film pieces which have a defined light-dark pattern. Furthermore, it is advantageous if the reference marks are fixed, e.g. adhesively bonded, to the structure 46.
As is illustrated in FIG. 5, each reference mark 56 forms a defined target point within the images, wherein only the first image 40′ is illustrated here for the sake of simplicity. The reference marks 56 can easily be identified and assigned to one another in the recorded images on the basis of their known and/or high-contrast features. On the basis of the respective disparity it is possible to determine the distance r between the image recording units 16, 18 and the individual reference marks 56 a, 56 b, 56 c, 56 d.
As can further be discerned in FIG. 5, the reference marks 56 a, 56 b, 56 c, 56 d define a virtual reference area 58, which overlays the structure 46 congruently here. Since the distances of the individual reference marks 56 a, 56 b, 56 c, 56 d can be determined accurately, the position and distance of the virtual reference area 58 can also be determined accurately on the basis of the recorded images. In the preferred exemplary embodiments, the disparities and distances of the individual parallel edges 48 a, 48 b are determined based on these data and are assigned to the respective edges 48 a, 48 b. For determining the disparities of the parallel edges 48 a, 48 b, use is made of the fact that the disparities of the edges change proportionally within the (now known) reference area. By means of this additional information, for each edge 48 a, 48 b in the first image 40′, the respectively corresponding edge in the second image can be unambiguously assigned. In this way, the disparities and distances of all the edges 48 a, 48 b can be determined unambiguously and in a manner free of errors.
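The proportional change of disparity across the reference area can be exploited as in the following sketch: the expected disparity of each edge is interpolated between two reference-mark disparities, and the candidate edge in the second image whose disparity best matches the prediction is assigned. All names and the one-dimensional coordinates are illustrative assumptions, not taken from the disclosure:

```python
def assign_parallel_edges(edges_left, candidates_right,
                          d_ref_a, d_ref_b, x_ref_a, x_ref_b):
    """Assign parallel edges between the two images.

    edges_left: x-positions of the edges in the first image.
    candidates_right: candidate x-positions in the second image.
    d_ref_a, d_ref_b: disparities measured at two reference marks,
    located at x_ref_a and x_ref_b in the first image.
    Returns (x_left, x_right, disparity) triples.
    """
    assignments = []
    for x in edges_left:
        # Disparity varies proportionally with position inside the
        # reference area spanned by the two marks.
        t = (x - x_ref_a) / (x_ref_b - x_ref_a)
        d_expected = d_ref_a + t * (d_ref_b - d_ref_a)
        # The corresponding edge should appear near x - d_expected;
        # pick the candidate closest to that prediction.
        best = min(candidates_right, key=lambda xr: abs((x - xr) - d_expected))
        assignments.append((x, best, x - best))
    return assignments
```

Without the interpolated prediction, several identical-looking parallel edges would be equally plausible matches; the reference marks remove that ambiguity.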
FIG. 6 shows a further image 40″ of the first image recording unit 16, wherein now the person 38 is approaching the machine 42. The person 38 is necessarily situated in front of the structure 46, which forms a background. In order to be able to determine the distance of the person 38, it is necessary to detect contour edges 39 of the person in the two images and to assign them to one another in order to determine the disparities. Should it emerge on the basis of the disparities with respect to the contour edges 39 that the person 38 is further away than the structure 46, an incorrect assignment of the contour edge 39 is evidently present. Preferably, the edge assignment with regard to the contour edge 39 is then corrected and the disparities are determined with the corrected edge assignments. If the distance with respect to the person is now less than that with respect to the structure 46, the corrected measurement is accepted as a valid measurement. Only if no edge assignment is possible which produces a valid measurement is the robot switched off. Otherwise, the distance accepted as plausible is used for the further evaluation. Consequently, it is possible to determine the distance with respect to the contour edge 39, and thus the three-dimensional position of the person 38 within the monitored spatial region, in an efficient and reliable manner despite the parallel edges.
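This plausibility check — a moving object must be closer to the cameras than the background structure — can be sketched as follows. The routine is a hypothetical illustration (candidate disparities from alternative edge assignments are tried in turn; names and the None convention are assumptions):

```python
def validated_distance(candidate_disparities, b, f, background_distance):
    """Return the first plausible distance for a contour edge, or None.

    candidate_disparities: disparities from alternative edge assignments,
    tried in order. b: base width, f: focal length (r = b * f / d).
    A measurement is plausible only if the object lies in front of the
    background structure; None signals that no valid assignment exists,
    in which case the machine would be switched off.
    """
    for d in candidate_disparities:
        if d <= 0:
            continue                     # no finite distance for this candidate
        r = b * f / d
        if r < background_distance:      # object must be in front of the structure
            return r
    return None
```

Usage: with b = 0.1 m, f = 1000 px and a background at 5 m, an assignment with disparity 10 px (10 m, behind the background) is rejected, while a corrected assignment with disparity 40 px (2.5 m) is accepted.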
FIGS. 7 and 8 show a preferred exemplary embodiment of the novel method on the basis of two flow charts. FIG. 7 shows a configuration mode by means of which the device 10 is configured before the actual monitoring mode of operation begins. In steps 62 and 64, the images 40, 50 are recorded—preferably synchronously—by means of the image recording units 16, 18. Step 66 involves searching for the disparities with respect to the various object edges and structure edges in the images. This search includes identifying and assigning mutually corresponding object edges and structure edges. Afterward, step 68 involves identifying structures for which the disparities can be determined only with difficulty or cannot be determined at all, owing to lack of assignment. In the examples with FIGS. 3 to 6, this applies to the structure 46.
In accordance with step 70, reference marks 56 are arranged at the identified structure. In accordance with steps 72 and 74, a further pair of images is recorded by means of the image recording units 16, 18. Afterward, step 76 involves identifying the reference marks in the further images. Step 78 involves determining the disparities of the reference marks. The virtual reference area 58 is determined by means of the disparities of the reference marks. Afterward, step 80 involves determining the disparities of the individual edges 48 a, 48 b of the structure 46, wherein the now known position of the reference area 58 and the known disparities of the reference marks 56 are used in order to obtain an unambiguous assignment of the respectively corresponding edges 48 a, 48 b in the two images. Step 82 involves permanently storing the disparities of the individual edges 48 a, 48 b. Moreover, step 84 involves storing the image positions of the individual edges 48 a, 48 b and also the positions of the reference marks 56. In this case, “image position” designates the position of the edges and reference marks within the two-dimensional images of the image recording units 16, 18. Afterward, in accordance with step 86, it is possible to configure individual protection spaces such as, for instance, the warning zone 34 or the switch-off zone 36. The protection spaces are preferably configured according to the method described in DE 10 2005 063 217 A1 cited in the introduction. The device 10 can then be brought to the monitoring mode, which is illustrated with reference to FIG. 8.
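The data flow of the configuration mode (steps 76–84) can be outlined as below. This is a sketch under stated assumptions: the observation format, the match_edges callable standing in for step 80, and the returned dictionary are all illustrative, not taken from the disclosure:

```python
def run_configuration(mark_observations, edge_positions, match_edges):
    """Outline of the configuration mode of FIG. 7, steps 76-84.

    mark_observations: {mark_id: (image_position, disparity)} measured
    for the reference marks (steps 76/78).
    edge_positions: image positions of the parallel edges (step 84).
    match_edges: callable that derives one disparity per parallel edge
    from the mark disparities (step 80).
    Returns the data that is permanently stored for the monitoring mode.
    """
    mark_positions = {m: pos for m, (pos, _) in mark_observations.items()}
    mark_disparities = {m: d for m, (_, d) in mark_observations.items()}
    edge_disparities = match_edges(mark_disparities, edge_positions)  # step 80
    # Steps 82/84: everything below is stored permanently.
    return {
        "mark_positions": mark_positions,
        "mark_disparities": mark_disparities,
        "edge_positions": edge_positions,
        "edge_disparities": edge_disparities,
    }
```

The returned record is exactly what the monitoring mode later relies on: mark data for the consistency check of step 94 and edge data to avoid re-measuring the parallel edges.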
In accordance with steps 88 and 90 in FIG. 8, in the monitoring mode of operation, too, a pair of images is recorded by means of the image recording units 16, 18. Afterward, the reference marks 56 are identified in the recorded images in accordance with step 92. Moreover, the distances and positions of the reference marks 56 are determined on the basis of the disparities.
Step 94 involves checking whether the distances and/or positions of the reference marks 56 are unchanged relative to the stored distance and position information for the reference marks. If this is not the case, step 96 involves generating a switching signal which, in the preferred exemplary embodiments, includes a switch-off signal for switching off the robot or the monitored machine. As an alternative thereto, the switching signal can trigger a new disparity determination in the region of the reference marks and/or of the edge structure, with the machine being switched off only if no plausible distance values arise in the new disparity determination.
If the position of the reference marks is unchanged in comparison with the data obtained in the configuration mode, the monitoring mode of operation is continued. Step 98 involves identifying the parallel edges of the structure 46 on the basis of the stored image positions. In accordance with step 100, the stored edge disparities are assigned to the identified edges 48 a, 48 b. In other words, the preferred exemplary embodiment of the novel method dispenses with a procedure in which the disparities and distances of the parallel edges 48 a, 48 b are repeatedly determined in each case during the monitoring mode of operation. Rather, the edge disparities determined and stored in the configuration mode replace current measurement values.
Afterward, step 102 involves determining the disparities for all further object edges. Step 104 then involves monitoring the protection space, i.e. evaluating whether a person or some other object has entered one of the defined protection spaces 34, 36. If an intrusion into a protection space is detected in accordance with step 106, step 108 involves generating a signal which can once again include a switch-off signal for switching off the monitored machine. If no intrusion into a protection space is detected, the method returns to image recording in steps 88, 90 in accordance with loop 110. In the monitoring mode, the method in accordance with FIG. 8 is therefore repeated cyclically.
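One cycle of the monitoring mode (steps 88–110) can be sketched as follows. The stored-data layout, the callables standing in for image recording and intrusion detection, and the tolerance parameter are hypothetical assumptions introduced for illustration:

```python
def monitoring_cycle(stored, measure_mark_disparities, detect_intrusion,
                     tolerance=0.5):
    """One cycle of the monitoring mode of FIG. 8.

    stored: record produced in the configuration mode, containing
    "mark_disparities" ({mark_id: disparity}) and "edge_disparities".
    measure_mark_disparities: callable yielding current mark disparities
    (steps 88-92). detect_intrusion: callable that checks the protection
    spaces given the edge disparities (steps 102-106).
    Returns "switch_off" or None (continue with the next cycle).
    """
    current = measure_mark_disparities()               # steps 88-92
    for mark, d_ref in stored["mark_disparities"].items():
        if abs(current[mark] - d_ref) > tolerance:     # step 94
            return "switch_off"                        # step 96
    # Steps 98-100: reuse the stored edge disparities instead of
    # re-measuring the problematic parallel edges in every cycle.
    edge_disparities = stored["edge_disparities"]
    if detect_intrusion(edge_disparities):             # steps 102-106
        return "switch_off"                            # step 108
    return None                                        # loop 110
```

The mark check acts as a guard: the stored edge disparities are only trusted while the reference marks are found at their configured positions.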
Overall, the novel method and the novel device serve for preventing an incorrect assignment of parallel edges in stereoscopically recorded images, by using defined reference marks, such as artificial ring patterns for instance, in order to carry out correct disparity and distance determination for the parallel edges. The disparities and/or distances for the parallel edges are advantageously stored permanently in a memory of the evaluation and control unit and assigned to the problematic edges in the subsequent monitoring mode of operation. Preferably, for this purpose, the position of the parallel edges in the two-dimensional images is determined and stored together with the disparity and/or distance information. Incorrect assignments of edges can be reduced to a minimum by means of this method. Advantageously, at least three reference marks are used in the configuration mode in order to determine the disparities/distances with respect to the parallel edges. Furthermore, it is preferred that at least one of the reference marks remains permanently at the problematic structure in order to ensure, by means of said reference mark, that the disparities/distances determined in the configuration mode are also valid in the monitoring mode of operation. It is therefore advantageous if the problematic structures having parallel edges are stationary and delimit the monitored spatial region, since the parallel edges in this case form a background against which movable objects move in the monitoring mode of operation. The respective distance of the moving objects must therefore be smaller than the distances to the parallel edges in the background, which enables a simple plausibility check. By means of the novel method, it is possible to significantly reduce computational complexity when determining disparities and distances at parallel edges. Moreover, incorrect assignments and measurement errors ensuing therefrom can be avoided.

Claims (18)

What is claimed is:
1. A method for monitoring a spatial region comprising a number of movable objects and a structure having a plurality of relatively closely spaced substantially parallel edges, the method comprising the steps of:
providing a first and a second image recording unit, which are arranged at a defined distance from one another,
recording a first image of the spatial region by means of the first image recording unit and recording a second image of the spatial region by means of the second image recording unit,
determining a number of object positions on the basis of the first and second images, wherein each object position represents a spatial distance of an object relative to each of the first and second image recording units,
determining a position of said structure on the basis of the first and second images, wherein the position of said structure is defined by a spatial distance of said structure relative to each of the image recording units, and
generating a switching signal depending on the object positions,
wherein a number of defined reference marks are placed in the spatial region proximate to said structure,
wherein a number of reference distances between the first and second image recording units and the reference marks is determined, and
wherein the spatial distance of said structure relative to each of said image recording units is determined on the basis of the reference distances.
2. The method of claim 1, wherein at least three reference marks are arranged at the structure, said at least three reference marks defining a reference area substantially overlaying the plurality of parallel edges.
3. The method of claim 2, wherein an edge distance relative to the first and second image recording units is determined for at least one of the parallel edges using the first and second images and using the reference area.
4. The method of claim 3, wherein the edge distance for the at least one edge is permanently stored into a memory.
5. The method of claim 4, wherein the first and second images are repeatedly recorded and the object positions are repeatedly determined thereby defining a plurality of operation cycles, with the structure position being repeatedly determined on the basis of the stored edge distance in each operation cycle.
6. The method of claim 1, wherein an edge position for the at least one of the parallel edges is determined, and said edge position is permanently stored into a memory.
7. The method of claim 1, wherein at least one of the reference marks is permanently arranged at the structure.
8. The method of claim 1, wherein an object position of at least one of the movable objects is verified using the structure position.
9. The method of claim 1, wherein the structure is stationary within the spatial region.
10. A device for monitoring a spatial region comprising a number of movable objects and comprising a stationary structure having a plurality of relatively closely spaced substantially parallel edges, the device comprising:
a first image recording unit for recording a first image of the spatial region,
a second image recording unit for recording a second image of the spatial region, the first and second image recording units being arranged at a defined distance from one another, and
an evaluation and control unit designed to determine a number of object positions on the basis of the first and second images and to generate a switching signal depending on the object positions,
wherein each object position represents a spatial distance of one of the number of objects relative to the first and second image recording units,
wherein a number of reference marks are placed in the spatial region proximate to said stationary structure, and
wherein the evaluation and control unit is further designed to determine a number of defined reference distances between the first and second image recording units and the reference marks, and to determine a position of said stationary structure on the basis of the reference distances.
11. The device of claim 10, wherein at least three reference marks defining a reference area are arranged at the stationary structure, with the reference area substantially overlaying the plurality of parallel edges.
12. The device of claim 11, wherein the evaluation and control unit is further designed to determine an edge distance relative to the first and second image recording units for at least one of the parallel edges using the first and second images and using the reference area.
13. The device of claim 12, further comprising a memory for permanently storing the edge distance.
14. The device of claim 11, wherein the evaluation and control unit is further designed to determine an edge position within at least one of the first and second images for at least one of the plurality of edges.
15. The device of claim 14, further comprising a memory for permanently storing the edge position.
16. The device of claim 10, wherein at least one of the reference marks is permanently arranged at the stationary structure.
17. The device of claim 10, wherein the evaluation and control unit is further designed to verify an object position of one of the number of movable objects using at least one of the reference distances and the position of the structure.
18. A non-transitory computer readable storage medium designed for interfacing with a computer that is connected to a first and a second image recording unit which are arranged at a defined distance from one another, the computer readable storage medium comprising an interface for communicating with the computer and comprising program code configured to execute a method comprising the steps of:
recording a first image of a spatial region by means of the first image recording unit and recording a second image of the spatial region by means of the second image recording unit,
determining a number of object positions using the first and second images, wherein each object position represents a distance between a moveable object located in the spatial region and the first and second image recording units, and
generating a switching signal depending on the object positions,
wherein the spatial region also comprises a stationary structure having a plurality of relatively closely spaced substantially parallel edges,
wherein a number of defined reference marks are placed in the spatial region proximate to said stationary structure,
wherein a number of reference distances between the first and second image recording units and the reference marks is determined, and
wherein a position of said stationary structure is determined on the basis of the reference distances.
US13/354,405 2009-07-24 2012-01-20 Method and device for monitoring a spatial region Active 2032-05-15 US9292924B2 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
DE102009035755.6 2009-07-24
DE102009035755A DE102009035755A1 (en) 2009-07-24 2009-07-24 Method and device for monitoring a room area
DE102009035755 2009-07-24
PCT/EP2010/060686 WO2011009933A1 (en) 2009-07-24 2010-07-23 Method and device for monitoring a spatial region

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2010/060686 Continuation WO2011009933A1 (en) 2009-07-24 2010-07-23 Method and device for monitoring a spatial region

Publications (2)

Publication Number Publication Date
US20120182419A1 US20120182419A1 (en) 2012-07-19
US9292924B2 true US9292924B2 (en) 2016-03-22

Family

ID=42835492

Family Applications (1)

Application Number Title Priority Date Filing Date
US13/354,405 Active 2032-05-15 US9292924B2 (en) 2009-07-24 2012-01-20 Method and device for monitoring a spatial region

Country Status (7)

Country Link
US (1) US9292924B2 (en)
EP (1) EP2457217B1 (en)
JP (1) JP5806212B2 (en)
CN (1) CN102483846B (en)
DE (1) DE102009035755A1 (en)
HK (1) HK1168682A1 (en)
WO (1) WO2011009933A1 (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5924134B2 (en) * 2012-05-30 2016-05-25 セイコーエプソン株式会社 Intrusion detection device, robot system, intrusion detection method, and intrusion detection program
DE102012209316A1 (en) * 2012-06-01 2013-12-05 Robert Bosch Gmbh Method and device for processing sensor data of a stereo sensor system
US9855664B2 (en) * 2015-11-25 2018-01-02 Denso Wave Incorporated Robot safety system
ES2673167B2 (en) * 2016-12-20 2018-10-10 Universidad De Extremadura CURTAIN SAFETY VIDEO FOR REDUNDANT AND DIRECTIONAL DETECTION OF ACCESS TO DANGER AREA ASSOCIATED WITH INDUSTRIAL MACHINERY
US11014240B2 (en) * 2017-09-05 2021-05-25 Abb Schweiz Ag Robot having dynamic safety zones
WO2019097676A1 (en) * 2017-11-17 2019-05-23 三菱電機株式会社 Three-dimensional space monitoring device, three-dimensional space monitoring method, and three-dimensional space monitoring program
EP3572971B1 (en) * 2018-05-22 2021-02-24 Sick Ag Securing a surveillance area with at least one machine
US20200311886A1 (en) * 2019-03-28 2020-10-01 Carl Zeiss Microscopy Gmbh Method for determining an image recording aberration
US20220012911A1 (en) * 2020-07-13 2022-01-13 United States Postal Service System and method for analyzing an image to determine spacing between a person and an object
US11486893B2 (en) * 2020-10-06 2022-11-01 Pixart Imaging Inc. Optical sensing system and optical navigation system
SE2250812A1 (en) * 2022-06-29 2023-12-30 Flasheye Ab Information carrier for providing information to a lidar sensor

Citations (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE19536297A1 (en) 1995-09-29 1997-04-03 Daimler Benz Ag Geometric calibration of optical 3-D sensors for three=dimensional measurement of objects
JP2000115810A (en) 1998-09-30 2000-04-21 Matsushita Electric Ind Co Ltd Method and device for processing stereoscopic image and system for monitoring intruding object
DE10026710A1 (en) * 2000-05-30 2001-12-06 Sick Ag Optoelectronic protection device for surveillance area has latter enclosed by reflector with coded reflector segments
JP2001351200A (en) 2000-06-09 2001-12-21 Nissan Motor Co Ltd Onboard object detecting device
US6538732B1 (en) * 1999-05-04 2003-03-25 Everest Vit, Inc. Inspection system and method
JP2004184240A (en) 2002-12-03 2004-07-02 Topcon Corp Image profiling apparatus, image-measuring method and image processing apparatus
US20040234123A1 (en) * 2002-06-26 2004-11-25 Pentax Corporation Surveying system
JP2005250994A (en) 2004-03-05 2005-09-15 Fuji Heavy Ind Ltd Stereo image processor
US20050207618A1 (en) 2002-09-24 2005-09-22 Christian Wohler Method and device for safeguarding a hazardous area
US6950550B1 (en) * 2000-07-07 2005-09-27 Koji Kajimura Tracing technique and recording media of object motion
EP1647357A1 (en) * 2004-10-13 2006-04-19 Robosoft N.V. Method and device for observing and securing a danger zone of a machine or the like
EP1543270B1 (en) 2002-09-24 2006-08-09 Daimler Chrysler AG Method and device for making a hazardous area safe
DE102005063217A1 (en) 2005-12-22 2007-07-05 Pilz Gmbh & Co. Kg Method for configuring monitoring device to monitor space area, involves recording and indicating three dimensional image of space area
WO2007085330A1 (en) 2006-01-30 2007-08-02 Abb Ab A method and a system for supervising a work area including an industrial robot
JP2007212187A (en) 2006-02-07 2007-08-23 Mitsubishi Electric Corp Stereo photogrammetry system, stereo photogrammetry method, and stereo photogrammetry program
US20070211143A1 (en) * 2006-03-10 2007-09-13 Brodie Keith J Systems and methods for prompt picture location tagging
DE102006023787A1 (en) * 2006-05-20 2007-11-22 Sick Ag Opto-electronic protection device operating method, involves partially deactivating image signals of monitoring devices, proportional to respective viewing angle of camera at zones and combining with switching signal in evaluating unit
US20080069406A1 (en) * 2006-09-19 2008-03-20 C/O Pentax Industrial Instruments Co., Ltd. Surveying Apparatus
WO2008061607A1 (en) 2006-11-24 2008-05-29 Pilz Gmbh & Co. Kg Method and apparatus for monitoring a three-dimensional spatial area
US20080123903A1 (en) * 2006-07-03 2008-05-29 Pentax Industrial Instruments Co., Ltd. Surveying apparatus
US20080170755A1 (en) * 2007-01-17 2008-07-17 Kamal Nasser Methods and apparatus for collecting media site data
WO2008098831A1 (en) 2007-02-15 2008-08-21 Kuka Roboter Gmbh Method and device for securing a work space
US20090046895A1 (en) * 2007-08-10 2009-02-19 Leica Geosystems Ag Method and measurement system for contactless coordinate measurement on an object surface
US20090096884A1 (en) * 2002-11-08 2009-04-16 Schultz Stephen L Method and Apparatus for Capturing, Geolocating and Measuring Oblique Images
US7564990B2 (en) * 2005-08-18 2009-07-21 Nu Skin International, Inc. Imaging system and method for physical feature analysis
US20110054665A1 (en) * 2007-12-21 2011-03-03 Jochen Wingbermuehle Machine tool device having a monitoring unit

US20080123903A1 (en) * 2006-07-03 2008-05-29 Pentax Industrial Instruments Co., Ltd. Surveying apparatus
US20080069406A1 (en) * 2006-09-19 2008-03-20 C/O Pentax Industrial Instruments Co., Ltd. Surveying Apparatus
WO2008061607A1 (en) 2006-11-24 2008-05-29 Pilz Gmbh & Co. Kg Method and apparatus for monitoring a three-dimensional spatial area
US20090268029A1 (en) * 2006-11-24 2009-10-29 Joerg Haussmann Method and apparatus for monitoring a three-dimensional spatial area
US20080170755A1 (en) * 2007-01-17 2008-07-17 Kamal Nasser Methods and apparatus for collecting media site data
WO2008098831A1 (en) 2007-02-15 2008-08-21 Kuka Roboter Gmbh Method and device for securing a work space
US20090046895A1 (en) * 2007-08-10 2009-02-19 Leica Geosystems Ag Method and measurement system for contactless coordinate measurement on an object surface
US20110054665A1 (en) * 2007-12-21 2011-03-03 Jochen Wingbermuehle Machine tool device having a monitoring unit

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ISA/EP; English language translation of International Preliminary Report on Patentability (Chapter 1); issued by WIPO Feb. 7, 2012; 12 pages.

Also Published As

Publication number Publication date
CN102483846B (en) 2015-11-25
HK1168682A1 (en) 2013-01-04
EP2457217A1 (en) 2012-05-30
EP2457217B1 (en) 2013-09-11
WO2011009933A1 (en) 2011-01-27
CN102483846A (en) 2012-05-30
US20120182419A1 (en) 2012-07-19
DE102009035755A1 (en) 2011-01-27
JP5806212B2 (en) 2015-11-10
JP2013500512A (en) 2013-01-07

Similar Documents

Publication Publication Date Title
US9292924B2 (en) Method and device for monitoring a spatial region
US10265035B2 (en) Method and device for motion control of a mobile medical device
US9596451B2 (en) Device for monitoring at least one three-dimensional safety area
US9151446B2 (en) Method and system for configuring a monitoring device for monitoring a spatial area
US10452939B2 (en) Monitoring system, monitoring device, and monitoring method
ES2616487T3 (en) Procedure, device and computer program to extract information about a ladder
JP5655134B2 (en) Method and apparatus for generating texture in 3D scene
JP2020531849A5 (en)
US10459451B2 (en) Method for processing a floor
US20140091140A1 (en) System for evaluating identification marks, identification marks and use thereof
JP2018106695A (en) Method of operating autonomously traveling transfer vehicle
CN113557713A (en) Context aware monitoring
WO2016040271A1 (en) Method for optically measuring three-dimensional coordinates and controlling a three-dimensional measuring device
JP2013113610A (en) Method and apparatus for measuring radiation
KR102489290B1 (en) System and method for detecting and notifying access to dangerous areas in workplace using image processing and location tracking technology
JP7569376B2 (en) Enhancing triangulation-based 3D distance measurements using time-of-flight information
JP6825623B2 (en) Monitoring system setting method and monitoring system
KR102290218B1 (en) Position tracking system using a plurality of cameras and method for position tracking using the same
WO2020111053A1 (en) Monitoring device, monitoring system, monitoring method, and monitoring program
JP6404985B1 (en) Imaging device for detecting abnormality of range image
KR20200042782A (en) 3d model producing apparatus and method for displaying image thereof
KR101684098B1 (en) Monitoring system with 3-dimensional sensor and image analysis integrated
JP2018160145A (en) Three-dimensional measurement device, three-dimensional measurement program, and three-dimensional measurement system

Legal Events

Date Code Title Description
AS Assignment

Owner name: PILZ GMBH & CO. KG, GERMANY

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:WIETFELD, MARTIN;REEL/FRAME:027964/0041

Effective date: 20120314

STCF Information on status: patent grant

Free format text: PATENTED CASE

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 4

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 8