WO2019066693A1 - Operator assistance system and a method in relation to the system
- Publication number: WO2019066693A1
- Application: PCT/SE2018/050829
- Authority: WO (WIPO, PCT)
Classifications
- B—PERFORMING OPERATIONS; TRANSPORTING
- B66—HOISTING; LIFTING; HAULING
- B66C—CRANES; LOAD-ENGAGING ELEMENTS OR DEVICES FOR CRANES, CAPSTANS, WINCHES, OR TACKLES
- B66C13/00—Other constructional features or details
- B66C13/18—Control systems or devices
- B66C13/46—Position indicators for suspended loads or for crane elements
Abstract
An operator assistance system (2) for a vehicle (4) provided with a working equipment (6), the assistance system comprises an image capturing system (8) arranged at said vehicle and/or at said working equipment (6), and capable of capturing parameters related to images of said working equipment (6) and of the environment outside said vehicle in a predetermined field of view (10). A processing unit (12) is provided, configured to receive image related parameter signals (14) from said image capturing system (8), to process said image related parameter signals (14), and to generate a merged image based upon essentially overlapping images captured by at least two sensor unit assemblies (18, 20), and a display unit (16) is configured to present images to an operator. The processing unit (12) is configured to determine a shape and position of an image representation of said working equipment (6) occluding a part of an image in the predetermined field of view (10), obtained by one of the at least two sensor unit assemblies (18, 20), by processing image related signals from at least one of the other sensor unit assemblies. The processing unit (12) is further configured to determine if the part occluded by said working equipment (6) is visible in any image in the predetermined field of view (10) obtained by any of the other sensor unit assemblies. If the occluded part is visible by any of the other sensor unit assemblies, an occluded part image, being an image representation of said occluded part, is determined in the field of view (10) obtained by said other sensor unit assembly. The processing unit is configured to merge said occluded part image into said merged image in said determined position of said working equipment (6), and to display said merged image at said display unit (16).
Description
Operator assistance system and a method in relation to the system
Technical field
The present disclosure relates to an operator assistance system and a method in connection with the system. The operator assistance system is in particular used on a vehicle provided with a working equipment, e.g. a crane, in assisting the operator during loading and unloading procedures.
Background
Working vehicles are often provided with various working equipment, e.g. movable cranes, which are attached to the vehicle via a joint. These cranes comprise movable crane parts, e.g. booms, that may be extended, and that are joined together by joints such that the crane parts may be folded together at the vehicle and extended to reach a load. Various tools, e.g. buckets or forks, may be attached to the crane tip, often via a rotator.
Today, many working vehicles are provided with various camera systems applied to display images of the load and the environment around the vehicle in order to assist the operator during the working procedure. Full visibility of terrain and targets is an important requirement in the design and operation of mobile work machines. The machine operator must see the work targets all the time while the machine and its parts, such as a boom, are moving. Furthermore, the operator must be aware of any bystanders around or passing by the machine.
However, providing full visibility is not always possible due to occlusions caused by the working equipment, e.g. by boom motion or obstructions caused by the machine's profile. The lack of visibility slows down the use of the machine, endangers others working and moving in the area, and physically stresses the operator, who must constantly avoid and try to look around any obstacles blocking the view.
The object of the present invention is to achieve an operator assistance system provided with a digital see-through capability for vehicles with a working equipment that solves the aforementioned visibility issues.
Summary
The above-mentioned object is achieved by the present invention according to the independent claims.
Preferred embodiments are set forth in the dependent claims.
According to a first aspect, the invention relates to an operator assistance system for a vehicle provided with a working equipment. The assistance system comprises an image capturing system arranged at the vehicle and/or at the working equipment and capable of capturing parameters related to images of the working equipment and of the environment outside the vehicle in a predetermined field of view. The operator assistance system further comprises a processing unit configured to receive image related parameter signals from the image capturing system and to process the image related parameter signals, and a display unit configured to present images to an operator. The image capturing system comprises at least two sensor unit assemblies capable of capturing parameters related to essentially overlapping images of the predetermined field of view, and the processing unit is configured to generate a merged image based upon said overlapping images.
The processing unit is configured to determine a shape and position of an image representation of the working equipment occluding a part of an image in the predetermined field of view, obtained by one of the sensor unit assemblies, by processing image related signals from at least one of the other sensor unit assemblies. The processing unit is further configured to determine if the part occluded by the working equipment is visible in any image in the predetermined field of view obtained by any of the other sensor unit assemblies. If the occluded part is visible by any of the other sensor unit assemblies, an occluded part image, being an image representation of the occluded part, is determined in the field of view obtained by said other sensor unit assembly.
The processing unit is then configured to merge the occluded part image into the merged image in the determined position of the working equipment, and to display the merged image at the display unit.
According to an embodiment, at least one of the sensor unit assemblies comprises at least one angle sensor and/or at least one length sensor structured to be arranged at the working equipment, and adapted to measure angles and lengths related to movements of the working equipment, and at least one camera unit mounted at the vehicle and/or at the working equipment.
According to another embodiment the sensor unit assemblies comprise at least two camera units that are mounted at separate mounting positions at the vehicle in relation to the working equipment, or at the working equipment, such that different sides of the working equipment are visible in images obtained by the at least two camera units.
According to one embodiment the processing unit is provided with a set of image representations of the working equipment, and is further configured to apply a pattern recognition algorithm to identify any image representation of the working equipment in an image in the predetermined field of view by comparison to the set of image representations. This ensures safe and fast recognition of an occluding object.
According to another embodiment the processing unit is further configured to identify a predetermined part, e.g. a hook, a load, a fork, of the occluding working equipment to be visible in the merged image, and to display said predetermined part at said display unit. This is advantageous as the operator can then easily manoeuvre the hook or fork, as it is clearly visible and highlighted at the display unit.
According to still another embodiment the processing unit is configured to determine a presentation mode of the occluded part image, from a set of presentation modes including a transparent mode, wherein the working equipment defining the occluded part image is fully transparent, and a semi-transparent mode, wherein the working equipment defining the occluded part image has a variable opacity. This is an important feature as an operator may choose an optimal presentation in dependence of a specific situation.
In a further embodiment of the assistance system the processing unit is configured to determine a presentation mode where an outer boundary of the occluded part image is indicated in the merged image. By indicating the outer boundary of the working equipment the operator will have a complete overview of the working site and also of the working equipment. The working equipment may be a crane, a demountable arm, a boom, or a bucket, or any other tool arranged at a vehicle.
According to a second aspect of the present invention a method is provided that is applied by an operator assistance system for a vehicle provided with a working equipment. The assistance system comprises an image capturing system arranged at the vehicle, and/or at the working equipment, and capable of capturing parameters related to images of the working equipment and of the environment outside the vehicle in a predetermined field of view. The assistance system further comprises a processing unit configured to receive image related parameter signals from the image capturing system and to process the image related parameter signals, and a display unit configured to present images to an operator. The image capturing system comprises at least two sensor unit assemblies capable of capturing parameters related to essentially overlapping images of the predetermined field of view, and the processing unit is configured to generate a merged image based upon the overlapping images. The method comprises:
- determining a shape and position of an image representation of the working equipment occluding a part of an image in the predetermined field of view, by processing image related signals,
- determining if the part occluded by the working equipment is visible in any image in the predetermined field of view obtained by any of the other sensor unit assemblies, and if the part is visible by any of the other sensor unit assemblies, the method further comprises:
- determining an occluded part image, being an image representation of the occluded part, in the field of view obtained by the other sensor unit assembly,
- merging, by the processing unit, the occluded part image into the merged image in the determined position of the working equipment,
- displaying the merged image at the display unit.
According to one embodiment the method comprises:
- applying a pattern recognition algorithm to identify any image representation of the working equipment in an image in the predetermined field of view by comparison to a set of image representations of the working equipment. This ensures safe and fast recognition of an occluding object.
According to a further embodiment the method comprises:
- identifying a predetermined part, e.g. a hook, a load, a fork, of the occluding working equipment to be visible in said merged image, and
- displaying the predetermined part at the display unit.
This is advantageous as the operator can then easily manoeuvre the hook or fork, as it is clearly visible and highlighted at the display unit.
In still another embodiment the method comprises:
- determining a presentation mode of the occluded part image, from a set of presentation modes including a transparent mode, wherein the working equipment, defining the occluded part image, is fully transparent, and a semi-transparent mode, wherein the working equipment defining the occluded part image has a variable opacity. This is an important feature as an operator may choose an optimal presentation in dependence of a specific situation.
According to another further embodiment the method comprises:
- determining a presentation mode where an outer boundary of the occluded part image is indicated in the merged image. By indicating the outer boundary of the working equipment the operator will have a complete overview of the working site and also of the working equipment.
Brief description of the drawings
Figure 1 is a schematic illustration of a vehicle provided with an operator assistance system according to the present invention.
Figure 2 is a schematic illustration of images obtained by sensor unit assemblies, and of a merged image.
Figures 3a-3c are schematic views from above illustrating various aspects of embodiments of the present invention.
Figure 4 is a schematic view from above illustrating another embodiment of the present invention.
Figure 5 is a flow diagram illustrating the method according to the present invention.
Detailed description
The operator assistance system, and the method, will now be described in detail with reference to the appended figures. Throughout the figures the same, or similar, items have the same reference signs. Moreover, the items and the figures are not necessarily to scale, emphasis instead being placed upon illustrating the principles of the invention.
With reference to figure 1, a block diagram is disclosed schematically illustrating an operator assistance system 2 for a vehicle 4 provided with a working equipment 6. The vehicle may be a cargo vehicle, a truck, a forklift or a working vehicle provided with a working equipment 6 that is e.g. a crane, a demountable arm, a mast boom, or a hook-lift equipment provided with a predetermined part 22, e.g. a hook, a fork, or a bucket.
The assistance system comprises an image capturing system 8 arranged at the vehicle and/or at the working equipment 6, and capable of capturing parameters related to images of the working equipment 6 and of the environment outside the vehicle in a predetermined field of view 10, a processing unit 12 configured to receive image related parameter signals 14 from the image capturing system 8 and to process the image related parameter signals, and a display unit 16 configured to present captured images to an operator.
The image capturing system 8 comprises at least two sensor unit assemblies 18, 20 capable of capturing parameters related to essentially overlapping images of said predetermined field of view 10. The processing unit 12 has a processing capability and is configured to generate a merged image based upon the overlapping images. As will be further discussed below, the sensor unit assembly may be a camera unit, an angle sensor, a length sensor, or any other sensing unit capable of capturing parameters related to images.
The parameters related to images may be parameters directly related to images, e.g. optical parameters detected by e.g. a camera unit, or indirectly related to images, e.g. various parameters representing movement and position of the working equipment, such as angles, lengths, and widths related to parts of the working equipment.
The processing unit is configured to determine a shape and position of an image representation of the working equipment 6 occluding a part of an image in the predetermined field of view 10, obtained by one of said sensor unit assemblies, by processing image related signals from at least one of the other sensor unit assemblies.
The processing unit 12 is further configured to determine if the part occluded by the working equipment 6 is visible in any image in the predetermined field of view 10 obtained by any of the other sensor unit assemblies. If the occluded part is visible by any of the other sensor unit assemblies, an occluded part image, being an image representation of the occluded part, is determined in the field of view 10 obtained by said other sensor unit assembly.
The processing unit is then configured to merge the determined occluded part image into the merged image in the determined position of the working equipment 6, and to display the merged image at the display unit 16.
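As an illustration of the steps just described, the following minimal sketch (not the patented implementation; all names are illustrative) fills the occluded pixels of a primary view from whichever other sensor unit assembly sees them, assuming the overlapping views have already been registered to a common image frame and that a boolean equipment mask is available per view:

```python
import numpy as np

def see_through_merge(primary_id, views, equipment_masks):
    """views: camera id -> HxWx3 image, registered to a common frame.
    equipment_masks: camera id -> HxW boolean mask of the pixels
    occluded by the working equipment in that view."""
    merged = views[primary_id].copy()
    still_occluded = equipment_masks[primary_id].copy()
    for cam_id, view in views.items():
        if cam_id == primary_id:
            continue
        # The occluded part is visible in this view wherever that view's
        # own equipment mask is clear: copy those pixels into the merge.
        recoverable = still_occluded & ~equipment_masks[cam_id]
        merged[recoverable] = view[recoverable]
        still_occluded &= ~recoverable
    # Pixels still flagged True are occluded in every available view.
    return merged, still_occluded
```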
According to one embodiment, at least one of the sensor unit assemblies comprises at least one angle sensor and at least one length sensor structured to be arranged at the working equipment, and adapted to measure angles and lengths related to movements of said working equipment. In addition, at least one camera unit 18, 20 is provided, which is mounted at the vehicle 4 and/or at the working equipment 6. The angle sensor(s) and/or the length sensor(s) are arranged at the working equipment and structured to generate sensor signals including angle values and length values, which are applied to the processing unit. Based upon those sensor signals and information indicating the type of working equipment, the processing unit may determine the shape and position of the working equipment.
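To illustrate how such sensor signals can yield the occluding shape and position, the sketch below assumes a single telescopic boom with one elevation-angle sensor and one extension-length sensor, viewed side-on by a calibrated pinhole camera; the planar geometry and all names are illustrative assumptions, not the patented method:

```python
import numpy as np

def boom_endpoints_in_image(elev_angle_rad, extension_m, base_cam,
                            fx, fy, cx, cy):
    """Forward kinematics plus pinhole projection: returns the pixel
    coordinates of the boom base and tip. Rasterising the segment
    between them, widened by the boom's apparent width, approximates
    the occluding image region."""
    # Boom modelled as rotating in a plane roughly parallel to the image
    # plane (camera viewing the boom from the side); image y grows down,
    # hence the negative sign on the vertical component.
    tip_cam = base_cam + extension_m * np.array(
        [np.cos(elev_angle_rad), -np.sin(elev_angle_rad), 0.0])

    def project(p):
        # Pinhole model: u = fx * x / z + cx, v = fy * y / z + cy
        return np.array([fx * p[0] / p[2] + cx, fy * p[1] / p[2] + cy])

    return project(base_cam), project(tip_cam)
```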
According to another embodiment the sensor unit assemblies comprise at least two camera units 18, 20 that are mounted at separate mounting positions at the vehicle 4 in relation to the working equipment 6, or at the working equipment 6, such that different sides of the working equipment are visible in images obtained by the at least two camera units 18, 20.
In one variation, which is schematically illustrated in figure 4, the camera units 18, 20 are mounted at separate mounting positions at the working equipment, e.g. a boom of a crane, and thereby move together with the crane. The definition that the camera units are mounted at different sides of the working equipment should be interpreted broadly, and is not limited to different sides in a horizontal plane. The important aspect is that the fields of view obtained by the camera units mounted at the different sides cover parts that potentially may be occluded by the working equipment during normal use. Thus, the camera units may be mounted at different heights or at other positions where a full overview may be obtained. In one exemplary variation two camera units are mounted at different sides of the working equipment, e.g. at the roof of the operator's cabin, and a third camera unit is mounted at an opposite end of the vehicle.
The upper part of figure 2 shows two images obtained either by two camera units arranged at opposite sides of the working equipment, or by a sensor unit assembly at the working equipment together with at least one camera unit. The working equipment is visible in the right image, where it occludes parts of the environment and thereby prevents the operator from having a complete visual overview. The position and shape of the occluding working equipment is determined, and the positions are then applied in the image obtained by another camera unit to identify the corresponding positions therein. This is schematically illustrated in the image to the left, where the image representation of the occluded part is indicated by dashed lines. The working equipment is also seen in this image, occluding a part of the image a bit further to the right, which is due to the different positions of the camera units in the horizontal plane. These two images are then merged into the image shown at the bottom of figure 2. The merging procedure may be applied by identifying easily identifiable objects in both images and then combining and positioning the images such that these identified objects correspond to each other.
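The alignment step described above, finding easily identifiable objects in both images and positioning the images so that they coincide, is commonly realised with feature matching and a homography. The OpenCV sketch below is one plausible implementation under that assumption, not the patent's own method:

```python
import cv2
import numpy as np

def register_views(img_a, img_b):
    """Warp img_b onto img_a's image frame by matching distinctive
    keypoints visible in both overlapping views."""
    orb = cv2.ORB_create(2000)
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_b, des_a), key=lambda m: m.distance)
    src = np.float32([kp_b[m.queryIdx].pt for m in matches[:200]]).reshape(-1, 1, 2)
    dst = np.float32([kp_a[m.trainIdx].pt for m in matches[:200]]).reshape(-1, 1, 2)
    # RANSAC discards matches on moving parts such as the boom itself.
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return cv2.warpPerspective(img_b, H, (img_a.shape[1], img_a.shape[0]))
```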
In one embodiment the processing unit is provided with a set of image representations of the working equipment, and more particularly a set of searchable data representations of images of the working equipment in various views and sizes.
The processing unit is then configured to apply a search procedure to identify any image representation of the working equipment in an image in the predetermined field of view obtained by any of the sensor unit assemblies by comparison to the set of image representations. This is preferably performed by applying a dedicated pattern recognition algorithm capable of comparing captured images with the stored set of data representations of images of the working equipment. The working equipment may occlude a part, e.g. a larger part or a smaller part, of the field of view obtained by one camera unit. Even if only a smaller part of the field of view is occluded, the search procedure may be performed.
It should be noted that the search procedure is preferably performed continuously for all images obtained by all sensor unit assemblies, e.g. the camera units, and that occluded parts in images obtained from different sensor unit assemblies and in different positions may then be identified, and eventually applied in a merged image.
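One conventional way to realise such a search procedure is template matching of each captured frame against the stored data representations; the sketch below assumes grayscale frames and templates smaller than the frame, and is illustrative only:

```python
import cv2

def find_equipment(frame_gray, templates, threshold=0.7):
    """Return (score, top-left corner, (w, h)) of the best-matching stored
    image representation of the working equipment, or None if no template
    scores above the threshold."""
    best = None
    for tmpl in templates:
        result = cv2.matchTemplate(frame_gray, tmpl, cv2.TM_CCOEFF_NORMED)
        _, score, _, top_left = cv2.minMaxLoc(result)
        if score >= threshold and (best is None or score > best[0]):
            h, w = tmpl.shape
            best = (score, top_left, (w, h))
    return best
```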
According to one further embodiment the processing unit is configured to identify a predetermined part 22 of the occluding working equipment (see figure 1 and figures 3a-3c), e.g. a hook, a load, a bucket, or a fork, to be visually presented in the merged image, and to display the predetermined part at the display unit. This is preferably performed by applying another pattern recognition algorithm, set up in advance depending on the presently used predetermined part of the working equipment. The merged image is presented to the operator on the display unit. The display unit may be any type of presentation unit configured and arranged such that the operator is provided with the necessary support in order to operate the working equipment safely and efficiently. Various presentation modes are available, where the occluding working equipment is made invisible, is slightly visible, and optionally has its borders indicated by e.g. dashed lines.
Thus, the processing unit is configured to determine a presentation mode of the occluded part image, from a set of presentation modes including a transparent mode, wherein the working equipment defining the occluded part image is fully transparent, and a semi-transparent mode, wherein the working equipment defining the occluded part image has a variable opacity.
In addition the processing unit is configured to determine a presentation mode where an outer boundary of the occluded part image is indicated in the merged image. The boundary of the occluded part image may e.g. be indicated by a dashed line.
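A minimal sketch of these presentation modes, reusing the merged image, the unmodified camera view and the equipment mask from the earlier steps (a solid contour stands in for the dashed boundary line; all names are illustrative):

```python
import cv2
import numpy as np

def render_occluded_part(merged, original_view, mask,
                         mode="semi-transparent", opacity=0.4,
                         draw_boundary=True):
    """merged: view with the occluded part replaced by the recovered
    background; original_view: the unmodified camera image;
    mask: HxW boolean mask of the working equipment."""
    out = merged.copy()
    m = mask.astype(bool)
    if mode == "semi-transparent":
        # Variable opacity: blend the equipment back over the background.
        out[m] = (opacity * original_view[m]
                  + (1.0 - opacity) * merged[m]).astype(out.dtype)
    # mode == "transparent" leaves the recovered background untouched.
    if draw_boundary:
        # Indicate the outer boundary of the occluded part image.
        contours, _ = cv2.findContours(mask.astype(np.uint8),
                                       cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        cv2.drawContours(out, contours, -1, (255, 255, 255), 2)
    return out
```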
Figures 3a-3c illustrate various presentation modes.
In figure 3a the boundary of the working equipment 6 and the predetermined part 22 are indicated by dashed lines.
In figure 3b the boundary of the working equipment 6 is indicated by a dashed line and the predetermined part 22 is highlighted by a solid line.
In figure 3c the working equipment 6 is invisible and the predetermined part 22 is highlighted by a solid line.
In the embodiment where two camera units are used they have overlapping fields of view. This allows the camera units to simulate human binocular vision, and therefore gives them the ability to capture three-dimensional images, a process known as stereo photography. Stereo cameras may be used for making 3D pictures, or for range imaging. Unlike most other approaches to depth sensing, such as structured light or time-of-flight measurements, stereo vision is a purely passive technology which also works in bright daylight.
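Depth recovered from such a stereo pair follows the classic pinhole relation Z = f·B/d, where f is the focal length in pixels, B the baseline between the camera units and d the disparity; the figures below are purely illustrative:

```python
def stereo_depth_m(disparity_px: float, focal_px: float, baseline_m: float) -> float:
    """Pinhole stereo: the nearer a point, the larger its disparity
    between the two overlapping views."""
    return focal_px * baseline_m / disparity_px

# e.g. camera units 2 m apart, 1000 px focal length, 50 px disparity
print(stereo_depth_m(50.0, 1000.0, 2.0))  # -> 40.0 m
```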
As discussed above, the image capturing system may comprise at least one camera unit, but may in addition also include one or many sensor units, e.g. angle sensor units, length sensor units, capable of capturing various supporting image data to be supplied to the processing unit.
In one variation the image capturing system applies Lidar technology. Lidar is sometimes considered an acronym of Light Detection and Ranging (sometimes Light Imaging, Detection, and Ranging), and is a surveying method that measures distance to a target by illuminating that target with laser light. Lidar is popularly used to make high-resolution maps, with applications in geodesy, forestry, laser guidance, airborne laser swath mapping (ALSM), and laser altimetry. Lidar is sometimes called laser scanning or 3D scanning, with terrestrial, airborne, and mobile applications.
In still another variation the image capturing system also includes a 3D scanning device. A 3D scanner is a device that analyses a real-world object or environment to collect data on its shape and possibly its appearance (e.g. colour). The collected data can then be used to construct digital three-dimensional models. Many different technologies can be used to build these 3D-scanning devices; each technology comes with its own limitations, advantages and costs. Many limitations in the kind of objects that can be digitised are still present; for example, optical technologies encounter many difficulties with shiny, mirroring or transparent objects. As an alternative, industrial computed tomography scanning can be used to construct digital 3D models, applying non-destructive testing.
The purpose of a 3D scanner is usually to create a point cloud of geometric samples on the surface of the subject. These points can then be used to extrapolate the shape of the subject (a process called reconstruction). If colour information is collected at each point, then the colours on the surface of the subject can also be determined.
3D scanners share several traits with cameras. Like most cameras, they have a cone-like field of view, and like cameras, they can only collect information about surfaces that are not obscured. While a camera collects colour information about surfaces within its field of view, a 3D scanner collects distance information about surfaces within its field of view. The "picture" produced by a 3D scanner describes the distance to a surface at each point in the picture. This allows the three dimensional position of each point in the picture to be identified.
In still another variation a so-called time-of-flight Lidar scanner may be used, together with the camera units, to produce a 3D model. The Lidar can aim its laser beam in a wide range: its head rotates horizontally, a mirror flips vertically. The laser beam is used to measure the distance to the first object on its path.
The time-of-flight 3D laser scanner is an active scanner that uses laser light to probe the subject. At the heart of this type of scanner is a time-of-flight laser range finder. The laser range finder finds the distance of a surface by timing the round-trip time of a pulse of light. A laser is used to emit a pulse of light and the amount of time before the reflected light is seen by a detector is measured. Since the speed of light c is known, the round-trip time determines the travel distance of the light, which is twice the distance between the scanner and the surface. The accuracy of a time-of-flight 3D laser scanner depends on how precisely the round-trip time t can be measured; 3.3 picoseconds (approx.) is the time taken for light to travel 1 millimetre. The laser range finder only detects the distance of one point in its direction of view. Thus, the scanner scans its entire field of view one point at a time by changing the range finder's direction of view to scan different points. The view direction of the laser range finder can be changed either by rotating the range finder itself, or by using a system of rotating mirrors. The latter method is commonly used because mirrors are much lighter and can thus be rotated much faster and with greater accuracy. Typical time-of-flight 3D laser scanners can measure the distance of 10,000-100,000 points every second.
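The quoted figure follows directly from the relation d = c·t/2, as this small check shows:

```python
C_M_PER_S = 299_792_458.0  # speed of light

def tof_distance_m(round_trip_time_s: float) -> float:
    """The pulse travels to the surface and back, so the scanner-to-surface
    distance is c * t / 2."""
    return C_M_PER_S * round_trip_time_s / 2.0

# A round trip of ~6.67 ps corresponds to 1 mm of range, matching the
# ~3.3 ps per millimetre of one-way travel quoted above.
print(tof_distance_m(6.67e-12))  # -> ~0.001 m
```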
In another variation the image capturing system uses a structured-light 3D scanner that projects a pattern of light on the subject and looks at the deformation of the pattern on the subject. The pattern is projected onto the subject using either an LCD projector or other stable light source. A camera, offset slightly from the pattern projector, looks at the shape of the pattern and calculates the distance of every point in the field of view. The advantage of structured-light 3D scanners is speed and precision. Instead of scanning one point at a time, structured light scanners scan multiple points or the entire field of view at once. Scanning an entire field of view in a fraction of a second reduces or eliminates the problem of distortion from motion. Some existing systems are capable of scanning moving objects in real-time.
The display unit may be a display arranged e.g. at a control unit or in the vehicle. As an alternative, the display unit 16 is a pair of glasses, for example of the type sold under the trademark Hololens. The pair of glasses is structured to present the 3D representation such that the 3D representation is overlaid on the transparent glasses through which a user observes the object. Various additional information may also be presented as overlaid information, preferably presented such that the additional information is close to an illustrated part of the object.
In still another alternative the display unit 16 is a pair of virtual reality goggles. These types of goggles comprise two displays to be arranged in front of the operator's eyes. This variation is particularly advantageous when the operator has no direct line of sight to an object to be handled. Often VR goggles are provided with an orientation sensor that senses the orientation of the VR goggles. It may then be possible for a user to change the field of view to locate potential obstacles close to the load, provided that the object detecting device has a larger field of vision than the image presented at the displays of the VR goggles.
The present invention also relates to a method applied by an operator assistance system for a vehicle provided with a working equipment. The method will now be described in detail with reference to the flow diagram shown in figure 5.
The assistance system is described above and comprises thus an image capturing system arranged at the vehicle and/or at the working equipment, and capable of capturing parameters related to images of the working equipment and of the environment outside said vehicle in a predetermined field of view. The assistance system further comprises a processing unit configured to receive image related parameter signals from the image capturing system and to process the image related parameter signals, and a display unit configured to present images to an operator. The image capturing system comprises at least two sensor unit assemblies capable of capturing parameters related to essentially overlapping images of said predetermined field of view. As discussed above the sensor unit assembly may be a camera unit, an angle sensor, a length sensor, or any other sensor unit capable of capturing parameters related to images.
The processing unit is configured to generate a merged image based upon said overlapping images.
The method then comprises determining a shape and position of an image representation of the working equipment occluding a part of an image in the predetermined field of view, by processing image related signals, and determining if the part occluded by the working equipment is visible in any image in the predetermined field of view obtained by any of the other sensor unit assemblies. If the occluded part is visible by any of the other sensor unit assemblies, the method further comprises determining an occluded part image, being an image representation of the occluded part, in the field of view obtained by the other sensor unit assembly, merging, by said processing unit, the occluded part image into said merged image in said determined position of said working equipment, and finally displaying said merged image at said display unit.
According to one embodiment the method comprises applying a pattern recognition algorithm to identify any image representation of the working equipment in an image in the predetermined field of view by comparison to a set of image representations of the working equipment.
The method preferably comprises identifying a predetermined part, e.g. a hook, a load, a fork, of the occluding working equipment to be visible in the merged image, and then displaying the predetermined part at the display unit.
In a further embodiment the method comprises determining a presentation mode of the occluded part image. The presentation mode is determined from a set of presentation modes including a transparent mode, wherein the working equipment defining the occluded part image is fully transparent, and a semi-transparent mode, wherein the working equipment defining the occluded part image has a variable opacity.
In addition the method further comprises determining a presentation mode where an outer boundary of the occluded part image is indicated in the merged image, e.g. by a dashed line.
The present invention is not limited to the above-described preferred embodiments. Various alternatives, modifications and equivalents may be used. Therefore, the above embodiments should not be taken as limiting the scope of the invention, which is defined by the appended claims.
Claims
1. An operator assistance system (2) for a vehicle (4) provided with a working equipment (6), the assistance system comprises an image capturing system (8) arranged at said vehicle and/or at said working equipment (6), and capable of capturing parameters related to images of said working equipment (6) and of the environment outside said vehicle in a predetermined field of view (10), a processing unit (12) configured to receive image related parameter signals (14) from said image capturing system (8) and to process said image related parameter signals (14), and a display unit (16) configured to present images to an operator, wherein said image capturing system (8) comprises at least two sensor unit assemblies (18, 20) capable of capturing parameters related to essentially overlapping images of said predetermined field of view (10), and said processing unit (12) is configured to generate a merged image based upon said overlapping images,
c h a r a c t e r i z e d i n that said processing unit (12) is configured to determine a shape and position of an image representation of said working equipment (6) occluding a part of an image in the predetermined field of view (10), obtained by one of said sensor unit assemblies, by processing image related signals from at least one of the other sensor unit assemblies, the processing unit (12) is further configured to determine if the part occluded by said working equipment (6) is visible in any image in the predetermined field of view (10) obtained by any of the other sensor unit assemblies, and if said part is visible by any of the other sensor unit assemblies, an occluded part image, being an image representation of said occluded part, is determined in the field of view (10) obtained by said other sensor unit assembly,
wherein said processing unit is configured to merge said occluded part image into said merged image in said determined position of said working equipment (6), and to display said merged image at said display unit (16).
2. The operator assistance system according to claim 1, wherein at least one of said sensor unit assemblies comprises at least one angle sensor and/or at least one length sensor structured to be arranged at said working equipment, and
adapted to measure angles and lengths related to movements of said working equipment, and at least one camera unit (18, 20) mounted at said vehicle (4) and/or at said working equipment (6).
3. The operator assistance system according to claim 1 or 2, wherein said sensor unit assemblies comprise at least two camera units (18, 20) that are mounted at separate mounting positions at said vehicle (4) in relation to said working equipment (6), or at said working equipment (6), such that different sides of said working equipment are visible in images obtained by said at least two camera units (18, 20).
4. The assistance system according to any of claims 1-3, wherein said processing unit is provided with a set of image representations of the working equipment, and is further configured to apply a pattern recognition algorithm to identify any image representation of the working equipment in an image in the predetermined field of view by comparison to the set of image representations.
5. The assistance system according to any of claims 1-4, wherein said processing unit is further configured to identify a predetermined part, e.g. a hook, a load, a fork, of said occluding working equipment to be visible in said merged image, and to display said predetermined part at said display unit.
6. The assistance system according to any of claims 1-5, wherein said processing unit is configured to determine a presentation mode of said occluded part image, from a set of presentation modes including a transparent mode, wherein said working equipment defining the occluded part image is fully transparent, and a semi-transparent mode, wherein said working equipment defining said occluded part image has a variable opacity.
7. The assistance system according to any of claims 1-6, wherein said processing unit is configured to determine a presentation mode where an outer boundary of the occluded part image is indicated in the merged image.
8. The assistance system according to any of claims 1-7, wherein said working equipment is a crane, a demountable arm, a boom, or a bucket.
9. A method applied by an operator assistance system (2) for a vehicle
(4) provided with a working equipment (6), the assistance system comprises an image capturing system (8) arranged at said vehicle and/or at the working equipment (6), and capable of capturing parameters related to images of said working equipment (6) and of the environment outside said vehicle in a
predetermined field of view (10), a processing unit (12) configured to receive image related parameter signals (14) from said image capturing system (8) and to process said image related parameter signals (14), and a display unit (16) configured to present images to an operator, wherein said image capturing system (8) comprises at least two sensor unit assemblies (18, 20) capable of capturing parameters related to essentially overlapping images of said predetermined field of view (10), and said processing unit (12) is configured to generate a merged image based upon said overlapping images,
c h a r a c t e r i z e d i n that the method comprises:
- determining a shape and position of an image representation of said working equipment (6) occluding a part of an image in the predetermined field of view (10), by processing image related signals,
- determining if the part occluded by said working equipment (6) is visible in any image in the predetermined field of view (10) obtained by any of the other sensor unit assemblies, and if said part is visible by any of the other sensor unit assemblies, the method further comprises:
- determining an occluded part image, being an image representation of said occluded part, in the field of view (10) obtained by said other sensor unit assembly,
- merging, by said processing unit, said occluded part image into said merged image in said determined position of said working equipment (6),
- displaying said merged image at said display unit (16).
10. The method according to claim 9, comprising:
- applying a pattern recognition algorithm to identify any image representation of the working equipment in an image in the predetermined field of view by comparison to a set of image representations of the working equipment.
11. The method according to claim 9 or 10, comprising:
- identifying a predetermined part, e.g. a hook, a load, a fork, of said occluding working equipment to be visible in said merged image, and
- displaying said predetermined part at said display unit.
12. The method according to any of claims 9-11, comprising:
- determining a presentation mode of said occluded part image, from a set of presentation modes including a transparent mode, wherein said working equipment defining the occluded part image is fully transparent, and a semi-transparent mode, wherein said working equipment defining said occluded part image has a variable opacity.
13. The method according to any of claims 9-12, comprising:
- determining a presentation mode where an outer boundary of the occluded part image is indicated in the merged image.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP18762407.7A EP3687937B1 (en) | 2017-09-26 | 2018-08-16 | Operator assistance system and a method in relation to the system |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
SE1751193 | 2017-09-26 | ||
SE1751193-2 | 2017-09-26 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2019066693A1 (en) | 2019-04-04 |
Family
ID=63442763
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/SE2018/050829 WO2019066693A1 (en) | 2017-09-26 | 2018-08-16 | Operator assistance system and a method in relation to the system |
Country Status (2)
Country | Link |
---|---|
EP (1) | EP3687937B1 (en) |
WO (1) | WO2019066693A1 (en) |
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2010204821A (en) * | 2009-03-02 | 2010-09-16 | Hitachi Constr Mach Co Ltd | Working machine equipped with periphery monitoring device |
JP2013113044A (en) * | 2011-11-30 | 2013-06-10 | Sumitomo (Shi) Construction Machinery Co Ltd | Monitor system for construction machine |
WO2014157567A1 (en) * | 2013-03-28 | 2014-10-02 | 三井造船株式会社 | Crane operator cab and crane |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2020229593A1 (en) * | 2019-05-16 | 2020-11-19 | Jungheinrich Ag | Method for warehousing assistance for an industrial truck, and industrial truck |
WO2023100889A1 (en) * | 2021-11-30 | 2023-06-08 | 株式会社タダノ | Maneuvering assistance system and work vehicle |
JP7613609B2 (en) | 2021-11-30 | 2025-01-15 | 株式会社タダノ | Pilot assistance system and work vehicle |
Also Published As
Publication number | Publication date |
---|---|
EP3687937A1 (en) | 2020-08-05 |
EP3687937B1 (en) | 2021-10-06 |
Legal Events
Code | Title | Description |
---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 18762407; Country of ref document: EP; Kind code of ref document: A1 |
NENP | Non-entry into the national phase | Ref country code: DE |
ENP | Entry into the national phase | Ref document number: 2018762407; Country of ref document: EP; Effective date: 20200428 |