WO2024184519A1 - Method for three-dimensional reconstruction of a mobile element - Google Patents
- Publication number
- WO2024184519A1 (PCT application PCT/EP2024/056207)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- mobile element
- images
- optronic system
- imager
- dimensional model
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
- G06T7/55—Depth or shape recovery from multiple images
- G06T7/579—Depth or shape recovery from multiple images from motion
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
Definitions
- TITLE: Method for three-dimensional reconstruction of a mobile element
- the present invention relates to a method for three-dimensional reconstruction of a mobile element in a scene.
- the present invention also relates to a corresponding optronic system and a platform comprising such an optronic system.
- the present description relates to a method for three-dimensional reconstruction of a mobile element in a scene, the method being implemented by an optronic system comprising an imager and a computer, the method comprising the following steps: a. setting up a tracking of a mobile element of the scene so that the imager of the optronic system acquires several successive images of the mobile element seen from different angles depending on the movement of the mobile element relative to the optronic system, and b. three-dimensional reconstruction of the mobile element based on the acquired images to obtain a three-dimensional model of the mobile element.
- the proposed method addresses the limitations of the state of the art by intertwining automatic reconstruction with automatic tracking of mobile objects.
- the quality of the information produced relies in particular on opportunity events (it is, after all, not rare for a vehicle to change direction as it moves), since the relative rotation of a non-cooperating object with respect to the optronic system is not controlled over a short time (<10 seconds) or for approach/recede relative movements at a great distance from the object.
- the tracking remains permanent and of sufficient quality, particularly in the absence of any rotation event of the object, even though the information produced during the reconstruction can maintain the tracking in rare, more critical conditions where it might otherwise have been interrupted.
- the method comprises one or more of the following characteristics, taken in isolation or in all technically possible combinations: the reconstruction step is triggered automatically when the acquired images comprise at least two images of the mobile element taken at angles compatible with the three-dimensional reconstruction;
- the method comprises a step of determining the geographical position and the speed of movement of the mobile element relative to the optronic system as a function of the image acquired at the current time, of at least one image acquired at a previous time, and of the three-dimensional model of the mobile element;
- the method comprises updating the shooting and detection conditions of the mobile element according to primitives of the images used to obtain the three-dimensional model;
- the method comprises calculating the precisions on the determined geographical position and speed of movement, and possibly on the shooting conditions of the images;
- the tracking of the mobile element is updated according to the last position and the last speed determined for the mobile element, and according to the three-dimensional model of the mobile element;
- the method comprises, over time, a step of updating the three-dimensional model as a function of the last image acquired by the imager;
- the three-dimensional model of the mobile element is obtained by extracting primitives characteristic of the mobile element in the images acquired by the imager and by matching the extracted primitives between said images;
- the three-dimensional model of the mobile element is also obtained by extracting primitives characteristic of the scene in the images acquired by the imager and by matching the extracted primitives between said images;
- the three-dimensional model obtained is displayed on a screen;
- the reconstruction step comprises determining a texture for the mobile element as a function of the acquired images, the three-dimensional model obtained being a model reproducing a texture for the mobile element; and
- the optronic system further comprises a distance evaluation unit capable of determining a distance between the mobile element and the optronic system, the evaluation unit comprising a rangefinder and/or a digital terrain model.
- the present invention also relates to an optronic system comprising an imager and a computer, the optronic system being configured to implement a method as described above.
- the present invention also relates to a platform, such as a vehicle, comprising such an optronic system.
- FIG 1 figure 1, a schematic representation of a scene in which a mobile element (car on a road) and an optronic system carried by a platform evolve,
- FIG 2, a schematic representation of an example of an optronic system comprising an imager, a computer and optionally a distance evaluation unit, and
- Figure 3 a flowchart of an example of implementation of a three-dimensional reconstruction method of a mobile element in a scene.
- the scene 10 comprises at least one element 12 that is mobile in the scene 10, that is, moving in the scene 10.
- the mobile element 12 is, for example, an object, such as a vehicle, or a living being, such as an animal or a human.
- the mobile element 12 is a car moving on a road.
- the platform 14 is mobile or fixed. In particular, when the platform 14 is mobile, the trajectory of the platform 14 is not modified specifically for the implementation of the reconstruction method which will be described in the remainder of the description.
- the platform 14 is, for example, a vehicle, such as a land, sea or air vehicle (aircraft, such as an airplane or a drone). In the example illustrated by FIG. 1, the platform 14 is a drone. Alternatively, the platform 14 is a ground installation.
- the platform 14 preferably has at least one position and attitude measuring device, making it possible to obtain the position and attitude of the optronic system 16, as well as to date these measurements.
- the measuring device is, for example, an inertial unit, or a GNSS (Global Navigation Satellite System). Alternatively, the measuring device is integrated into the optronic system 16.
- the optronic system 16 is configured to implement a three-dimensional reconstruction method of mobile elements of the scene 10, such as the mobile element 12.
- the optronic system 16 comprises an imager 20 and a computer 22.
- the optronic system 16 further comprises a distance evaluation unit 24.
- the imager 20 (also referred to as “sensor” in the description) is a passive sensor capable of acquiring images of scene 10.
- the imager 20 has a line of sight that can be oriented according to commands issued by the computer 22, making it possible in particular to track a mobile element 12 in the scene 10.
- the imager 20 comprises, for example, an actuator making it possible to orient the imager 20.
- the actuator is preferably automatic in nature.
- the actuator thus comprises an automatic attitude control system for the line of sight of the optronic system, possibly itself intertwined with the object tracking function.
- the imager 20 comprises at least one camera.
- the imager 20 comprises a single camera, which makes it possible to dispense with a communication network between different sensors.
- the imager 20 is formed by a set of cameras.
- the imager 20 comprises at least one optical channel.
- the optical channel is suitable for producing a digital video at a sufficient rate with respect to the rotation speed of the mobile element 12.
- at night, an infrared detector is preferred, whereas during the day a visible channel, generally better resolved and having richer textures, is preferred.
- the imager 20 has several optical channels, in different spectral bands and/or different fields of vision, suitable for acquiring simultaneous videos.
- the information obtained on the different channels can be merged to enrich/improve the processing results.
- the use of a multi-spectral camera also helps to characterize the nature of the materials of the mobile element 12 and to enrich its visual representation.
- the computer 22 comprises a computing unit, and preferably also a display unit (screen) and a human-machine interface.
- the computing unit includes, for example, a processor and memories.
- the computing unit interacts with a computer program product that includes an information medium.
- the information medium is a medium readable by the computing unit.
- the readable information medium is a medium suitable for storing electronic instructions and capable of being coupled to a bus of a computer system.
- the computer program product including program instructions is stored on the information medium.
- the computer program is loadable onto the computing unit and causes the implementation of a three-dimensional reconstruction method of a mobile element 12 in a scene 10, when the computer program is implemented on the computing unit as will be described in the remainder of the description.
- the distance evaluation unit 24 comprises, for example, either a rangefinder, such as a laser rangefinder, or a digital terrain model (DTM) associated with ray tracing processing, or both sources of information.
- a DTM is data representing the ground surface of the scene 10.
- the DTM makes it possible to retrieve the altitude of a point with known coordinates in latitude and longitude, or to measure a distance from the sensor to the scene 10 by ray tracing processing.
- the DTM is, for example, incorporated in software in a memory of the computer 22.
- the two distances obtained are, for example, merged according to their respective covariances. This operation makes it possible, first of all, to assess whether the mobile object is indeed moving on the ground surface, by means of a statistical test on the two distances and their variances, and, in that case, to obtain a better evaluation of the distance.
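The covariance-based fusion of the rangefinder and DTM distances described above can be sketched as follows. This is a minimal illustration using inverse-variance weighting and a chi-square consistency gate; the function name, the 95% gate value and the scalar-variance model are assumptions, not taken from the patent text.

```python
def fuse_distances(d_rf, var_rf, d_dtm, var_dtm, gate=3.84):
    """Fuse a rangefinder distance with a DTM ray-tracing distance.

    Returns (fused_distance, fused_variance, consistent), where
    `consistent` is True when a chi-square test (1 dof, 95% gate)
    accepts that both measurements observe the same ground distance,
    i.e. that the mobile object is indeed moving on the ground surface.
    """
    # Chi-square statistic of the residual between the two measurements.
    chi2 = (d_rf - d_dtm) ** 2 / (var_rf + var_dtm)
    consistent = chi2 <= gate

    # Inverse-variance (minimum-variance) weighted average.
    w_rf = var_dtm / (var_rf + var_dtm)
    fused = w_rf * d_rf + (1.0 - w_rf) * d_dtm
    fused_var = (var_rf * var_dtm) / (var_rf + var_dtm)
    return fused, fused_var, consistent
```

When the test rejects consistency, the object is likely airborne or the DTM is stale, and the rangefinder value alone would be kept.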
- the reconstruction method comprises a step 110 of tracking a mobile element 12 of the scene 10.
- the tracking is carried out so that the imager 20 of the optronic system 16 acquires several successive images (at least two) of the mobile element 12 seen from different angles depending on the movement of the mobile element 12 relative to the optronic system 16.
- the mobile element 12 has a rotational movement relative to the optronic system 16 during acquisitions, which makes it possible to image different faces of the mobile element 12.
- Such rotational movements (as well as, more generally, the evolution of the shooting angles) are analyzed during tracking, which allows the reconstruction to be triggered when the conditions on the feasibility of the 3D calculations are met, as will be described in the remainder of the description.
- the tracking of the mobile element 12 makes it possible to maintain the mobile element 12 in the field of vision of the imager 20, and advantageously close to the center of the image if telemetry is used.
- the tracking is, for example, carried out by servocontrols making it possible to control the line of sight of the imager 20 (i.e. the orientation of the imager 20) as a function of inertial data from the optronic system 16 (obtained via the position and attitude measuring device).
- the orientation of the imager 20 is, for example, carried out manually, or automatically by coupling with the tracking processing.
- the tracking also makes it possible to locate the mobile element 12 on the successive images, and this in a precise and robust manner.
- Such tracking is, for example, based on a classic image processing algorithm compatible with the frame rate.
- the algorithm is an ATDR algorithm (for “automatic target detection and recognition”).
- the algorithm implements a correlation approach or a Siamese neural network type approach.
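The correlation approach mentioned above can be sketched with a brute-force normalized cross-correlation template match. This is purely illustrative: a frame-rate tracker would use FFT-based correlation or a learned (e.g. Siamese) similarity, and all names here are assumptions.

```python
import numpy as np

def ncc_track(frame, template):
    """Locate `template` in `frame` by normalized cross-correlation.

    Returns the (row, col) of the top-left corner of the best match.
    Brute-force scan over all positions; flat (constant) patches are
    skipped because their NCC denominator is zero.
    """
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.linalg.norm(t)
    best, best_pos = -np.inf, (0, 0)
    for r in range(frame.shape[0] - th + 1):
        for c in range(frame.shape[1] - tw + 1):
            patch = frame[r:r + th, c:c + tw]
            p = patch - patch.mean()
            denom = np.linalg.norm(p) * t_norm
            if denom == 0:
                continue
            score = float((p * t).sum() / denom)
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos
```

The template would typically be the image chip of the mobile element 12 from the previous frame, updated as its appearance evolves.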
- the tracking makes it possible to obtain two-dimensional information on the mobile element 12, namely its location, its kinematics, and its contour.
- the images obtained are preferably associated with auxiliary image data characterizing the geographic position and attitude of the optronic system 16 during the acquisition of said images.
- Figure 1 illustrates an example of image acquisition of a mobile element 12 at different times t1, t2, t3 and t4, making it possible to obtain images of the mobile element 12 from different viewing angles.
- the reconstruction method comprises a step 120 of three-dimensional reconstruction of the mobile element 12 based on the images acquired during tracking, to obtain a three-dimensional model of the mobile element 12.
- the reconstruction step 120 is triggered automatically when the acquired images comprise at least two images of the mobile element 12 taken at angles compatible with the three-dimensional reconstruction, that is to say that at least one different face of the mobile element 12 is imaged on at least two images.
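The triggering condition can be illustrated as follows, assuming line-of-sight directions expressed in the mobile element's frame (so that the object's own rotation contributes to the angular separation). The 5-degree threshold is an illustrative assumption; the patent does not specify numerical values.

```python
import math

def reconstruction_ready(los_dirs, min_angle_deg=5.0):
    """Decide whether the acquired views allow 3D reconstruction.

    `los_dirs` is a list of unit line-of-sight vectors in the mobile
    element's frame. Triggers when some pair of views is separated by
    at least `min_angle_deg` degrees, i.e. at least one face of the
    element is imaged from two sufficiently different angles.
    """
    n = len(los_dirs)
    for i in range(n):
        for j in range(i + 1, n):
            dot = sum(a * b for a, b in zip(los_dirs[i], los_dirs[j]))
            # Clamp before acos to absorb floating-point round-off.
            ang = math.degrees(math.acos(max(-1.0, min(1.0, dot))))
            if ang >= min_angle_deg:
                return True
    return False
```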
- the reconstruction of the three-dimensional model is, for example, carried out by determining the 2D-3D optical flow or by correspondence between characteristic points on the different images of the mobile element 12.
- the choice of technique depends in particular on the level of resolution of the image and the spectral band.
- the reconstruction step 120 comprises the extraction of primitives (points or set of points) characteristic of the mobile element 12 in the images acquired by the imager 20, and the matching of the extracted primitives between said images to obtain the three-dimensional model.
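The matching of extracted primitives between images can be sketched with a simple nearest-neighbour descriptor matcher. The ratio test and all parameter values are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def match_primitives(desc_a, desc_b, ratio=0.8):
    """Match feature descriptors between two images.

    desc_a, desc_b: (N, D) arrays of descriptors computed around
    primitives (points or sets of points) extracted on the mobile
    element. Uses nearest-neighbour matching with a ratio test to
    reject ambiguous pairs. Returns (index_in_a, index_in_b) matches.
    """
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        order = np.argsort(dists)
        best = order[0]
        second = order[1] if len(order) > 1 else order[0]
        # Keep the match only if it is clearly better than the runner-up.
        if dists[best] < ratio * dists[second]:
            matches.append((i, int(best)))
    return matches
```

The retained correspondences then feed the triangulation that produces the 3D points of the model.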
- the reconstruction step 120 also comprises the extraction of characteristic primitives of the scene 10 in the images acquired by the imager 20, and the matching of the extracted primitives between said images to improve the knowledge of the shooting parameters and consequently the quality of the reconstruction (3D model, kinematics, etc.).
- the three-dimensional model is also determined as a function of a determined distance between the mobile element 12 and the optronic system 16. The distance is determined by the evaluation unit.
- when the evaluation unit comprises a rangefinder, the distance is determined via the rangefinder.
- the distance is, for example, determined by a ray tracing method.
- the distance obtained is the distance between the position of the optronic system 16 and the intersection of a predetermined half-line with the ground of a digital terrain model.
- the predetermined half-line passes through the position of the optronic system 16 and has the orientation of the line of sight of the imager 20 of the optronic system 16.
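The ray-tracing distance described above can be sketched as a march along the line of sight until it crosses the DTM surface, refined by bisection. The `dtm_height` callback stands in for a gridded DTM lookup; the step size and ranges are illustrative assumptions.

```python
def ray_trace_dtm(origin, direction, dtm_height, step=1.0, max_range=50000.0):
    """Distance from `origin` along unit `direction` to the DTM surface.

    `dtm_height(x, y)` returns the terrain altitude at horizontal
    position (x, y). Marches along the half-line until the ray drops
    below the terrain, then refines the crossing by bisection.
    Returns None if no ground intersection is found within range.
    """
    ox, oy, oz = origin
    dx, dy, dz = direction
    prev = 0.0
    t = step
    while t <= max_range:
        x, y, z = ox + t * dx, oy + t * dy, oz + t * dz
        if z <= dtm_height(x, y):          # ray crossed below the surface
            lo, hi = prev, t
            for _ in range(30):            # bisection refinement
                mid = 0.5 * (lo + hi)
                zm = oz + mid * dz
                if zm <= dtm_height(ox + mid * dx, oy + mid * dy):
                    hi = mid
                else:
                    lo = mid
            return 0.5 * (lo + hi)
        prev = t
        t += step
    return None
```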
- the shooting conditions (CPDV) and the distance make it possible to build a 3D model to scale, with the right dimensions. Note that, in the absence of a rangefinder, the DTM would still provide a sufficient scale quality (say, better than the 5/100 class). Note also that the quality of the CPDV and of the distance conditions that of the kinematics: a distance precision of x% hardly allows the geographical position and the speed of the object to be determined to better than x%.
- the reconstruction step 120 comprises determining a texture for the mobile element 12 based on the acquired images.
- the texture is the appearance of the surface of the mobile element 12 (smooth, rough, grainy, etc.).
- the three-dimensional model obtained is a model reproducing a texture for the mobile element 12.
- the three-dimensional model obtained is displayed on a screen.
- the model is displayed in the form of points or facets, with or without texture.
- the display is carried out, for example, from a direction fixed by default or chosen by the user, possibly in the form of a video presenting different views under varying directions, retracing the angular domain over which the object has been reconstructed so far.
- the reconstruction step 120 also comprises the determination of the 3D position and the 3D movement speed of the mobile element 12 relative to the optronic system 16 as a function of the acquired images and the three-dimensional model of the mobile element 12.
- the three-dimensional model makes it possible, in fact, to complete the two-dimensional positions and kinematics of the mobile element 12, obtained during tracking.
- the tracking step 110 operates independently, preferably at the image acquisition rate, while the reconstruction step 120 is only triggered at specific times (for example when its triggering conditions are met).
- the tracking is implemented in parallel with the possible reconstruction and is not modified regardless of the state of the reconstruction (inactive, initialization or maintenance).
- the tracking is enriched with the results of the reconstruction.
- the tracking function implementing the tracking takes into account, for example, the last position and the last speed determined for the mobile element 12, as well as the three-dimensional model of the mobile element 12.
- the three-dimensional model makes it possible to better characterize the dimensions of the mobile element 12, its own rotations and its kinematics, and thus to feed the automatic tracking and recognition algorithm. Tracking thus benefits from spatial segmentation and from a better prediction of the appearance of the mobile element 12 in the current image, which eliminates some cases of track loss in difficult conditions. Tracking also benefits from a better characterization of the 3D kinematics of the mobile element 12 and of its 3D rotation. Automatic recognition is improved with the measurements of the 3D model.
- the method comprises a step 130 of updating the three-dimensional model over time based on the current images of the mobile element 12 acquired by the imager 20.
- the 3D reconstruction is an incremental method which is enriched with image information over time, both in the angular coverage of the reconstruction and in the resolution of the model that can be reconstructed.
- the three-dimensional model is, thus, either the same model as the previous model, or a better resolved model (shape with more details) than the previous model. This can also allow a “progressive unmasking” of certain parts of the element considered.
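The incremental enrichment of the model can be sketched as a voxel-grid fusion of newly triangulated points. The voxel representation and the averaging rule are illustrative assumptions; the patent does not prescribe a particular data structure.

```python
def update_model(model, new_points, voxel=0.05):
    """Incrementally enrich a 3D point model with new triangulated points.

    `model` maps voxel indices to (representative_point, count).
    Points falling in an already-occupied voxel are averaged with the
    stored representative; new voxels extend the model, capturing the
    "progressive unmasking" of previously unseen parts of the element.
    """
    for p in new_points:
        key = tuple(int(c // voxel) for c in p)
        if key in model:
            q, n = model[key]
            # Running mean of all points observed in this voxel.
            model[key] = (tuple((qi * n + pi) / (n + 1)
                                for qi, pi in zip(q, p)), n + 1)
        else:
            model[key] = (p, 1)
    return model
```

A finer `voxel` size yields the better-resolved model mentioned above, at the cost of memory.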
- the updated three-dimensional model is displayed on a screen.
- the update step 130 comprises updating the shooting and detection conditions of the mobile element according to primitives of the images used to obtain the three-dimensional model.
- a triangulation is, for example, carried out.
- the updating step 130 comprises updating over time the 3D geographic position and the 3D movement speed of the mobile element 12 relative to the optronic system 16 as a function of the image acquired at the current time, of at least one image acquired at a previous time, and of the three-dimensional model of the mobile element 12.
- data relating to the precision of the 3D geographic position and of the 3D speed are also determined.
- the update step 130 also includes the calculation of the precisions on the determined geographic position and speed of movement, and possibly on the shooting conditions of the images.
- the reconstruction method described makes it possible to reconstruct a mobile element 12 in three dimensions under the following operating conditions:
- by means of the imager 20 of the optronic system 16 tracking the mobile element 12 (meaning that the mobile element 12 is kept in the image of the optronic video independently of the respective movements of the mobile element 12 and of the optronic system 16);
- even with imperfect measurements of the positions of the optronic system 16, of the attitudes of the images and of the internal parameters of the optronic channel used;
- by detecting, in a timely manner, rotational movements of the mobile element 12;
- on a non-cooperating element, therefore without the possibility of acting on the presentation of the mobile element 12 relative to the imager 20;
- in acquisition conditions that are critical in terms of the ratio between the baseline of the shooting positions and the distance from the optronic system 16 to the mobile element 12, in a short time, as soon as an opportunity for rotation of the object on itself arises.
- the reconstruction method described makes it possible to reconstruct in 3D the external shape of a mobile element in a scene 10, with a level of detail depending on the resolution of the images, even if the element is non-cooperative and evolves at a great distance (several km and even several tens of km) from the optronic system 16, therefore with an unfavorable base distance ratio (typically less than 0.1).
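The impact of the unfavourable base-to-distance ratio can be quantified with the standard stereo/motion depth-error model. This formula is textbook epipolar geometry, not taken from the patent, and all parameter values below are illustrative.

```python
def depth_precision(distance_m, baseline_m, focal_px, match_sigma_px=0.5):
    """Standard depth-error model: sigma_Z ~ Z^2 / (B * f) * sigma_d.

    distance_m:    range Z from sensor to object (m)
    baseline_m:    baseline B between shooting positions (m)
    focal_px:      focal length f expressed in pixels
    match_sigma_px: feature-matching uncertainty (pixels)

    With B/Z < 0.1, the Z^2 / B term dominates, which is why angular
    diversity from the object's own rotation is so valuable.
    """
    return (distance_m ** 2) / (baseline_m * focal_px) * match_sigma_px
```

For instance, at 10 km range with a 500 m baseline and a 4000 px focal length, a 0.5 px matching error already yields a 25 m depth uncertainty, illustrating why the method exploits every rotation opportunity of the object itself.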
- the reconstruction can be carried out with a single passive sensor (therefore discreetly and without communication between sensors), and in a short time of the order of 0 to 10 seconds.
- Such a method also makes it possible to characterize the kinematics of the mobile element 12.
- the reconstruction is all the more complete as the mobile element 12 moves relative to the optronic system 16, and if necessary performs rotational movements.
- the reconstruction is all the more precise as the images are of better resolution, conditioned by the characteristics of the optronic channel used in the system, as well as the distance to the object. Furthermore, the precision of the model is also improved by the redundancy of the images used for the reconstruction.
- the proposed 3D reconstruction process of a mobile element is called optronic inverse reconstruction (RIO), since, unlike most classic 3D reconstruction techniques which exploit the movement of the sensor to reconstruct a scene 10, the 3D reconstruction of an element here exploits the mobility of the element, the sensor possibly being completely stationary during the acquisitions.
- RIO: optronic inverse reconstruction
- the three-dimensional model obtained also makes it possible to improve the robustness of other algorithms implemented by the optronic system 16 (tracking, etc.), as well as the visual decision-making of operators.
- the method also improves visualization, allowing a better resolution of the mobile element 12 and the possibility of detaching the mobile element 12 from the scene 10 and of presenting it from different sides with complementary visual cues useful for its identification (which reduces delay and improves analysis).
- the method is also applicable for the simultaneous tracking and reconstruction of several moving elements of the scene 10, as long as these elements remain visible from the same line of sight for the imager 20.
- the method comprises, either at each instant (in real time) or over a set of images, an update of the shooting parameters, benefiting from the correspondences between the primitives extracted from the current image and the fixed primitives of the scene previously extracted, by means of a process of the simultaneous localization and mapping (SLAM) type or of the aero-triangulation type.
- SLAM: simultaneous localization and mapping
- the recognition of a vehicle-type object could, for example, give rise to a 3D-model completion treatment by symmetries, if a more exhaustive view of the reconstructed partial model is desired.
- this completion by symmetrization is limited to duplicating the extracted shapes and structures onto the opposite side face of the vehicle.
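The symmetrization can be sketched as a reflection of the reconstructed points through the vehicle's longitudinal symmetry plane. The plane parameterization and all names are illustrative assumptions.

```python
def complete_by_symmetry(points, plane_point, plane_normal):
    """Mirror reconstructed points through a symmetry plane.

    `plane_point` and `plane_normal` define the vehicle's longitudinal
    symmetry plane (the normal need not be unit length). Returns the
    original points plus their mirror images, densifying the side of
    the vehicle that was never imaged.
    """
    nx, ny, nz = plane_normal
    n2 = nx * nx + ny * ny + nz * nz
    px, py, pz = plane_point
    mirrored = []
    for x, y, z in points:
        # Signed distance along the (non-normalized) normal direction.
        d = ((x - px) * nx + (y - py) * ny + (z - pz) * nz) / n2
        mirrored.append((x - 2 * d * nx, y - 2 * d * ny, z - 2 * d * nz))
    return list(points) + mirrored
```

In practice the symmetry plane would be estimated from the recognized vehicle pose before mirroring.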
- the process is particularly suitable for the following industrial applications:
Abstract
The method is implemented by an optronic system (16) comprising an imager and a computer. The method comprises the following steps: a. carrying out tracking of a mobile element of the scene such that the imager of the optronic system acquires several successive images of the mobile element seen at different angles according to the movement of the mobile element relative to the optronic system, and b. three-dimensionally reconstructing the mobile element as a function of the acquired images to obtain a three-dimensional model of the mobile element.
Description
DESCRIPTION
TITLE: Method for three-dimensional reconstruction of a mobile element
The present invention relates to a method for three-dimensional reconstruction of a mobile element in a scene. The present invention also relates to a corresponding optronic system and a platform comprising such an optronic system.
When carrying out a mission, for example an observation mission, it is of interest to detect and locate elements of the scene, or even to obtain more precise information on these elements, in particular three-dimensional (3D) information describing the shape of these elements.
For this purpose, reconstruction methods using passive sensors are known. However, these methods are relatively complex and take a long time to set up. In particular, in the case of a single sensor, these methods impose trajectory constraints on the sensor so as to have a sufficient baseline, and a time delay that can be significant to cover the base distance (it takes about a minute, for example, to travel 20 km at Mach 1 while benefiting from a transverse movement). In the case of several delocalized sensors, these methods require a network infrastructure and communications between the sensors, which complicates organization, logistics and decision-making autonomy. In addition, matching the common parts of the element between the views of the different sensors is not always easy.
Other approaches rely on the use of active sensors, such as lidars or radars. However, these methods do not allow discreet operation. In addition, they require significant energy consumption for the production of signals by the sensor.
There is therefore a need for a method enabling an optronic system to reconstruct in three dimensions non-cooperating mobile elements, potentially at a great distance from the optronic system, in a simple, rapid and discreet manner, and without imposing displacement constraints, specific to the reconstruction, on the optronic system.
For this purpose, the present description relates to a method for three-dimensional reconstruction of a mobile element in a scene, the method being implemented by an optronic system comprising an imager and a computer, the method comprising the following steps:
a. setting up tracking of a mobile element of the scene so that the imager of the optronic system acquires several successive images of the mobile element seen from different angles depending on the movement of the mobile element relative to the optronic system, and
b. three-dimensional reconstruction of the mobile element based on the acquired images to obtain a three-dimensional model of the mobile element.
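As an illustration only, steps a. and b. can be sketched as the following loop, where `imager`, `tracker` and `reconstructor` are hypothetical interfaces standing in for the imager, the tracking processing and the reconstruction processing (none of these names appear in the description):

```python
def reconstruct_mobile_element(imager, tracker, reconstructor, n_views=2):
    """Sketch of steps a. and b.: track the mobile element while the
    imager acquires successive views, then reconstruct in 3D once
    enough views from distinct angles are available."""
    views = []
    while len(views) < n_views:
        frame = imager.grab()            # step a.: acquisition under tracking
        views.append(tracker.locate(frame))
    return reconstructor.build(views)    # step b.: 3D model of the element
```

The point of the sketch is only the interleaving: acquisition continues under tracking until reconstruction becomes feasible, then reconstruction consumes the accumulated views.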
Thus, the proposed method makes it possible to address the limitations of the state of the art, by intertwining the automatic reconstruction and mobile-object tracking capabilities. The quality of the information produced relies in particular on events of opportunity, although it is not unusual for a vehicle to change direction as it moves, since the relative rotation of a non-cooperating object with respect to the optronic system is not controlled over a short time (less than 10 seconds), for relative movements of the approach/retreat type, or at a great distance from the object. Tracking remains permanent and its quality is sufficient, particularly in the absence of any rotation event of the object, while the information produced during the reconstruction is capable of maintaining the tracking in the rare, more critical conditions where it might otherwise have been interrupted.
According to particular modes of implementation, the method comprises one or more of the following features, taken alone or in any technically possible combination:
- the reconstruction step is triggered automatically when the acquired images comprise at least two images of the mobile element taken at angles compatible with three-dimensional reconstruction;
- the method comprises a step of determining the geographical position and the speed of movement of the mobile element relative to the optronic system as a function of the image acquired at the current time, of at least one image acquired at a previous time, and of the three-dimensional model of the mobile element;
- the method comprises updating the shooting and detection conditions of the mobile element according to primitives of the images used to obtain the three-dimensional model;
- the method comprises calculating the accuracies of the determined geographical position and speed of movement, and optionally of the shooting conditions of the images;
- the tracking of the mobile element is updated according to the last position and the last speed determined for the mobile element, and to the three-dimensional model of the mobile element;
- the method comprises, over time, a step of updating the three-dimensional model as a function of the last image acquired by the imager;
- the three-dimensional model of the mobile element is obtained by extracting primitives characteristic of the mobile element in the images acquired by the imager and by matching the extracted primitives between said images;
- the three-dimensional model of the mobile element is also obtained by extracting primitives characteristic of the scene in the images acquired by the imager and by matching the extracted primitives between said images;
- the obtained three-dimensional model is displayed on a screen;
- the reconstruction step comprises determining a texture for the mobile element as a function of the acquired images, the obtained three-dimensional model being a model reproducing a texture for the mobile element; and
- the optronic system further comprises a distance evaluation unit capable of determining a distance between the mobile element and the optronic system, the evaluation unit comprising a rangefinder and/or a digital terrain model.
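The position-and-speed determination feature listed above can be illustrated by a minimal finite-difference sketch over successive geolocated fixes (a real system would rather filter the fixes, for example with a Kalman filter; names and units here are illustrative, not taken from the description):

```python
def update_kinematics(track, position, t):
    """Append a geolocated observation (x, y in metres, time in
    seconds) and return the current velocity estimate (vx, vy) by
    finite differences over the last two fixes, or None if fewer
    than two fixes are available."""
    track.append((position, t))
    if len(track) < 2:
        return None
    (x0, y0), t0 = track[-2]
    (x1, y1), t1 = track[-1]
    dt = t1 - t0
    return ((x1 - x0) / dt, (y1 - y0) / dt)
```

In the method itself, each fix would come from the image acquired at the current time combined with the three-dimensional model; the sketch only shows how successive fixes yield a speed of movement.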
The present invention also relates to an optronic system comprising an imager and a computer, the optronic system being configured to implement a method as described above.
The present invention also relates to a platform, such as a vehicle, comprising such an optronic system.
Other features and advantages of the invention will become apparent on reading the following description of embodiments of the invention, given by way of example only and with reference to the drawings, which are:
- [Fig 1] Figure 1, a schematic representation of a scene in which a mobile element (a car on a road) moves, and of an optronic system carried by a platform,
- [Fig 2] Figure 2, a schematic representation of an example of an optronic system comprising an imager, a computer and optionally a distance evaluation unit, and
- [Fig 3] Figure 3, a flowchart of an example of implementation of a method for three-dimensional reconstruction of a mobile element in a scene.
A scene 10 is illustrated by way of example in Figure 1. A scene designates a theater of operations, that is, the place where an action takes place. The scene is therefore an extended space with dimensions sufficient to allow an action to unfold. The scene is typically an outdoor space.
The scene 10 comprises at least one element 12 that is mobile in the scene 10, that is, moving in the scene 10. The mobile element 12 is, for example, an object, such as a vehicle, or a living being, such as an animal or a human. In the example of Figure 1, the mobile element 12 is a car traveling on a road.
A platform 14, on which an optronic system 16 is mounted, is also illustrated in Figure 1.
The platform 14 is mobile or fixed. In particular, when the platform 14 is mobile, the trajectory of the platform 14 is not modified specifically for the implementation of the reconstruction method that will be described in the remainder of the description.
The platform 14 is, for example, a vehicle, such as a land, sea or air vehicle (an aircraft, such as an airplane or a drone). In the example illustrated by Figure 1, the platform 14 is a drone. Alternatively, the platform 14 is a ground installation.
The platform 14 preferably has at least one position and attitude measuring device, making it possible to obtain the position and attitude of the optronic system 16 and to timestamp these measurements. The measuring device is, for example, an inertial navigation unit or a GNSS (Global Navigation Satellite System) receiver. Alternatively, the measuring device is integrated into the optronic system 16.
The optronic system 16 is configured to implement a method for three-dimensional reconstruction of mobile elements of the scene 10, such as the mobile element 12.
As schematically illustrated in Figure 2, the optronic system 16 comprises an imager 20 and a computer 22. Optionally, the optronic system 16 further comprises a distance evaluation unit 24.
The imager 20 (also referred to as the “sensor” in the description) is a passive sensor capable of acquiring images of the scene 10.
The imager 20 has a line of sight that can be steered according to commands issued by the computer 22, making it possible in particular to track a mobile element 12 in the scene 10. In particular, the imager 20 comprises, for example, an actuator for orienting the imager 20. The actuator is preferably automatic. The actuator thus comprises an automatic attitude control system for the line of sight of the optronic system, possibly itself intertwined with the object tracking function.

The imager 20 comprises at least one camera. Preferably, the imager 20 comprises a single camera, which makes it possible to dispense with a communication network between different sensors. Alternatively, the imager 20 is formed by a set of cameras.
The imager 20 comprises at least one optical channel. The optical channel is capable of producing a digital video at a frame rate sufficient with respect to the rotation speed of the mobile element 12. For nighttime use, an infrared detector is preferred, whereas during the day a visible channel, generally higher-resolution and presenting richer textures, is preferred.
Advantageously, the imager 20 has several optical channels, in different spectral bands and/or with different fields of view, capable of acquiring simultaneous videos. The information obtained on the different channels can thus be merged to enrich and improve the processing results. Using a multispectral camera also helps characterize the nature of the materials of the mobile element 12 and enrich its visual representation.
The computer 22 comprises a computing unit, and preferably also a display unit (screen) and a human-machine interface.
The computing unit comprises, for example, a processor and memories.
In one example, the computing unit interacts with a computer program product that includes an information medium. The information medium is a medium readable by the computing unit. The readable information medium is a medium suitable for storing electronic instructions and capable of being coupled to a bus of a computer system. The computer program product, comprising program instructions, is stored on the information medium.
The computer program is loadable onto the computing unit and causes the implementation of a method for three-dimensional reconstruction of a mobile element 12 in a scene 10 when it is run on the computing unit, as will be described in the remainder of the description.
The distance evaluation unit 24 comprises, for example, either a rangefinder, such as a laser rangefinder, or a digital terrain model (DTM) associated with ray-tracing processing, or both sources of information. In particular, a DTM is a data set representing the ground surface of the scene 10. The DTM makes it possible to retrieve the altitude of a point whose latitude and longitude are known, or to measure a distance from the sensor to the scene 10 by ray-tracing processing. The DTM is, for example, incorporated in software form in a memory of the computer 22. For a system benefiting from both a rangefinder and a DTM, the two distances obtained are, for example, fused according to their respective covariances. This operation first makes it possible to assess, by means of a statistical test on the two distances and their variances, whether the mobile object is indeed moving on the ground surface and, if so, to obtain a better evaluation of the distance.
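The covariance-based fusion of the rangefinder and DTM distances, together with the statistical consistency test, can be sketched as follows. The 1-degree-of-freedom chi-square gate of 3.84 (about 95%) is an illustrative choice, not a value from the description:

```python
def fuse_distances(d_lrf, var_lrf, d_dtm, var_dtm, gate=3.84):
    """Fuse a laser-rangefinder distance and a DTM-derived distance.

    Returns (fused_distance, fused_variance, consistent), where
    `consistent` is a chi-square test (1 dof) checking that the two
    measurements agree, i.e. that the object is indeed on the ground.
    """
    # Chi-square statistic on the difference of the two measurements.
    chi2 = (d_lrf - d_dtm) ** 2 / (var_lrf + var_dtm)
    consistent = chi2 <= gate
    # Inverse-variance (minimum-variance) weighted average.
    w_lrf = 1.0 / var_lrf
    w_dtm = 1.0 / var_dtm
    fused = (w_lrf * d_lrf + w_dtm * d_dtm) / (w_lrf + w_dtm)
    fused_var = 1.0 / (w_lrf + w_dtm)
    return fused, fused_var, consistent
```

Note that the fused variance is always smaller than either input variance, which is the "better evaluation of the distance" mentioned above; when the test fails, a real system would fall back on one source rather than fuse.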
The operation of the optronic system 16, leading to the implementation of a method for three-dimensional reconstruction of a mobile element 12 in a scene 10, will now be described with reference to the flowchart of Figure 3.
The reconstruction method comprises a step 110 of tracking a mobile element 12 of the scene 10. The tracking is carried out so that the imager 20 of the optronic system 16 acquires several successive images (at least two) of the mobile element 12 seen from different angles depending on the movement of the mobile element 12 relative to the optronic system 16.
Preferably, the mobile element 12 exhibits a rotational movement relative to the optronic system 16 during the acquisitions, which makes it possible to image different faces of the mobile element 12. Such rotational movements (and, more generally, the evolution of the viewing angles) are analyzed during tracking. This allows the reconstruction to be triggered when the conditions for the feasibility of the 3D computations are met, as will be described in the remainder of the description.
The tracking of the mobile element 12 makes it possible to keep the mobile element 12 in the field of view of the imager 20, and advantageously close to the center of the image if telemetry is used. The tracking is, for example, carried out by servo controls that steer the line of sight of the imager 20 (that is, the orientation of the imager 20) as a function of inertial data of the optronic system 16 (obtained via the position and attitude measuring device). The orientation of the imager 20 is, for example, performed manually, or automatically by coupling with the tracking processing.
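A minimal proportional sketch of such a line-of-sight servo, steering the imager so that the tracked element drifts toward the image center, is given below. The gain and the small-angle degrees-per-pixel mapping are illustrative assumptions; a real servo would also use the inertial data:

```python
def los_correction(target_px, image_size, fov_deg, gain=0.5):
    """Return a proportional (pan, tilt) correction in degrees for
    the line of sight, from the pixel position of the tracked
    element. `fov_deg` is the horizontal field of view."""
    cx, cy = image_size[0] / 2.0, image_size[1] / 2.0
    # Small-angle approximation: degrees per pixel along the width.
    deg_per_px = fov_deg / image_size[0]
    pan = gain * (target_px[0] - cx) * deg_per_px
    tilt = gain * (target_px[1] - cy) * deg_per_px
    return pan, tilt
```

A target already at the image center yields a zero correction; a target at the right edge yields a positive pan command proportional to its angular offset.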
The tracking also makes it possible to locate the mobile element 12 in the successive images, precisely and robustly. Such tracking is, for example, based on a classical image processing algorithm compatible with the frame rate. For example, the algorithm is an ATDR (automatic target detection and recognition) algorithm. In addition or as a variant, the algorithm implements a correlation approach or a Siamese neural network approach.
Thus, the tracking makes it possible to obtain two-dimensional information on the mobile element 12, namely its location, its kinematics and its contour.

The images obtained are preferably associated with auxiliary image data characterizing the geographic position and attitude of the optronic system 16 during the acquisition of said images.
Figure 1 illustrates an example of image acquisition of a mobile element 12 at different times t1, t2, t3 and t4, making it possible to obtain images of the mobile element 12 from different viewing angles.
The reconstruction method comprises a step 120 of three-dimensional reconstruction of the mobile element 12, based on the images acquired during tracking, to obtain a three-dimensional model of the mobile element 12.
Preferably, the reconstruction step 120 is triggered automatically when the acquired images comprise at least two images of the mobile element 12 taken at angles compatible with three-dimensional reconstruction, that is, when at least one distinct face of the mobile element 12 is imaged in at least two images.
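This automatic triggering condition can be sketched as a check on the angular baseline spanned by the views acquired so far. The minimum-baseline threshold below is a hypothetical value, not one stated in the description:

```python
def reconstruction_ready(view_angles_deg, min_baseline_deg=3.0):
    """Return True when the viewing angles recorded during tracking
    span a baseline sufficient to attempt 3D reconstruction."""
    if len(view_angles_deg) < 2:
        return False
    return max(view_angles_deg) - min(view_angles_deg) >= min_baseline_deg
```

During tracking, each new image appends its viewing angle to the list; reconstruction fires on the first frame for which the check passes.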
The reconstruction of the three-dimensional model is, for example, carried out by determining the 2D-3D optical flow or by matching characteristic points across the different images of the mobile element 12. The choice of technique depends in particular on the resolution level of the image and on the spectral band.
In an exemplary implementation, the reconstruction step 120 comprises extracting primitives (points or sets of points) characteristic of the mobile element 12 in the images acquired by the imager 20, and matching the extracted primitives between said images to obtain the three-dimensional model.
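Once primitives have been matched between two images, each matched primitive can be lifted to a 3D point. A standard linear (DLT) triangulation sketch is shown below, assuming known 3x4 projection matrices for the two views (the description does not prescribe this particular technique):

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one matched primitive seen in
    two views. P1, P2 are 3x4 camera projection matrices; x1, x2
    are the (u, v) image coordinates of the matched feature."""
    # Each view contributes two linear constraints on the homogeneous
    # 3D point X, derived from x ~ P X.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]   # homogeneous -> Euclidean 3D point
```

Repeating this over all matched primitives yields the point cloud from which the three-dimensional model is built; the projection matrices come from the auxiliary image data (position and attitude of the system).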
Preferably, the reconstruction step 120 also comprises extracting primitives characteristic of the scene 10 in the images acquired by the imager 20, and matching the extracted primitives between said images, to improve the knowledge of the shooting parameters and, consequently, the quality of the reconstruction (3D model, kinematics, etc.). This enriches the observations and increases the constraints in the estimation of the model, and improves the values of the shooting parameters carried by the auxiliary data of the image.

In an optional exemplary implementation, the three-dimensional model is also determined as a function of a distance determined between the mobile element 12 and the optronic system 16. The distance is determined by the evaluation unit 24. Thus, optionally, in one example, when the evaluation unit comprises a rangefinder, the distance is determined via the rangefinder. When the evaluation unit comprises a digital terrain model, the distance is, for example, determined by a ray-tracing method. In this case, the distance obtained is the distance between the position of the optronic system 16 and the intersection of a predetermined half-line with the ground of the digital terrain model. The predetermined half-line passes through the position of the optronic system 16, and its orientation is that of the line of sight of the imager 20 of the optronic system 16.
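The ray-tracing distance against the DTM can be sketched as a simple ray-marching loop along that half-line. Here `dtm_height` is a stand-in for the terrain-model altitude lookup, and the step size and maximum range are illustrative:

```python
def ray_ground_distance(origin, direction, dtm_height,
                        step=1.0, max_range=20000.0):
    """March along the line of sight from the sensor position until
    the ray drops below the DTM surface; return the sensor-to-ground
    distance, or None if no intersection is found within range.

    `origin` is the sensor position (x, y, z), `direction` a unit
    vector along the line of sight, `dtm_height(x, y)` the terrain
    altitude at horizontal coordinates (x, y)."""
    ox, oy, oz = origin
    dx, dy, dz = direction
    r = 0.0
    while r < max_range:
        x, y, z = ox + r * dx, oy + r * dy, oz + r * dz
        if z <= dtm_height(x, y):
            return r
        r += step
    return None
```

A production implementation would refine the hit by bisection between the last two samples and adapt the step to the DTM resolution; the fixed step here is only for clarity.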
De manière générale, les conditions de prise de vue (CPDV) et la distance permettent de construire un modèle 3D à l’échelle ayant les bonnes dimensions. Notons qu’en l’absence de télémètre, le MNT permettrait de disposer d’une qualité d’échelle suffisante (disons mieux que la classe de 5/100). Notons que la qualité des CPDV et de la distance conditionne aussi celle de la cinématique : une précision de distance de X % ne permet guère de déterminer la position géographique et la vitesse de l’objet à mieux que X %. Generally speaking, the shooting conditions (CPDV) and the distance make it possible to build a 3D model to scale with the correct dimensions. Note that, in the absence of a rangefinder, the DTM would provide sufficient scale quality (say better than the 5/100 class). Note also that the quality of the CPDV and of the distance conditions that of the kinematics: a distance precision of X% hardly allows the geographical position and the speed of the object to be determined to better than X%.
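The order-of-magnitude remark above can be made concrete with a first-order error propagation: a relative error on the range carries over directly to the derived position and speed. A minimal sketch, with illustrative numbers that are not part of the disclosure:

```python
def propagate_range_error(range_m, rel_range_err, speed_mps):
    """First-order propagation: a relative error on the measured range
    scales the derived cross-range position and speed estimates by the
    same relative amount."""
    position_err_m = range_m * rel_range_err
    speed_err_mps = speed_mps * rel_range_err
    return position_err_m, speed_err_mps
```

For instance, a 1 % range error at 10 km leaves roughly 100 m of position uncertainty and 0.5 m/s of speed uncertainty on a 50 m/s target.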
De préférence, l’étape de reconstruction 120 comprend la détermination d’une texture pour l’élément mobile 12 en fonction des images acquises. La texture est l’aspect de la surface de l’élément mobile 12 (lisse, rugueux, granuleux...). Le modèle tridimensionnel obtenu est un modèle reproduisant une texture pour l’élément mobile 12. Preferably, the reconstruction step 120 comprises determining a texture for the mobile element 12 based on the acquired images. The texture is the appearance of the surface of the mobile element 12 (smooth, rough, grainy, etc.). The three-dimensional model obtained is a model reproducing a texture for the mobile element 12.
De préférence, le modèle tridimensionnel obtenu est affiché sur un écran. Par exemple, le modèle est affiché sous forme de points ou de facettes, avec et sans texture. L’affichage est effectué par exemple depuis une direction fixée par défaut ou choisie par l’utilisateur, éventuellement sous la forme d’une vidéo présentant différentes images sous des directions variables historisant le domaine angulaire sur lequel l’objet a pu être reconstruit jusqu’alors. Preferably, the three-dimensional model obtained is displayed on a screen. For example, the model is displayed in the form of points or facets, with and without texture. The display is carried out, for example, from a direction fixed by default or chosen by the user, possibly in the form of a video presenting different views from varying directions, recording the angular domain over which the object has been reconstructed so far.
De préférence, l’étape de reconstruction 120 comprend aussi la détermination de la position en 3D et de la vitesse de déplacement 3D de l’élément mobile 12 par rapport au système optronique 16 en fonction des images acquises et du modèle tridimensionnel de l’élément mobile 12. Le modèle tridimensionnel permet, en effet, de compléter les positions et cinématiques en deux dimensions de l’élément mobile 12, obtenues lors du suivi. Preferably, the reconstruction step 120 also comprises the determination of the 3D position and the 3D movement speed of the mobile element 12 relative to the optronic system 16 as a function of the acquired images and the three-dimensional model of the mobile element 12. The three-dimensional model makes it possible, in fact, to complete the two-dimensional positions and kinematics of the mobile element 12, obtained during tracking.
Ainsi, l’étape de suivi 110 fonctionne de manière indépendante, de préférence à la cadence d’acquisition image, alors que l’étape de reconstruction 120 n’est déclenchée que ponctuellement (par exemple si des conditions sont remplies). En particulier, le suivi est mis en œuvre en parallèle de l’éventuelle reconstruction et n’est pas modifié quel que soit l’état de la reconstruction (inactif, initialisation ou entretien). Thus, the tracking step 110 operates independently, preferably at the image acquisition rate, whereas the reconstruction step 120 is only triggered occasionally (for example when certain conditions are met). In particular, the tracking is implemented in parallel with any reconstruction and is not modified regardless of the state of the reconstruction (inactive, initialisation or maintenance).
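The decoupling described above — tracking at frame rate, reconstruction triggered only when conditions are met — can be sketched with a simple angular-parallax test. The 2° threshold is a hypothetical placeholder, echoing claim 2's requirement of views taken at angles compatible with 3D reconstruction:

```python
def should_trigger_reconstruction(view_angles_deg, min_parallax_deg=2.0):
    """Return True once the acquired views of the mobile element span
    enough angular parallax for triangulation. Tracking itself keeps
    running every frame regardless of this decision."""
    if len(view_angles_deg) < 2:
        return False
    return max(view_angles_deg) - min(view_angles_deg) >= min_parallax_deg
```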
De préférence, lorsqu’une reconstruction a eu lieu, le suivi s’enrichit des résultats de la reconstruction. En particulier, la fonction de poursuite mettant en œuvre le suivi prend, par exemple, en compte la dernière position, et la dernière vitesse déterminées pour l’élément mobile 12, ainsi que le modèle tridimensionnel de l’élément mobile 12. Preferably, when a reconstruction has taken place, the tracking is enriched with the results of the reconstruction. In particular, the tracking function implementing the tracking takes into account, for example, the last position and the last speed determined for the mobile element 12, as well as the three-dimensional model of the mobile element 12.
En particulier, le modèle tridimensionnel permet de mieux caractériser les dimensions de l’élément mobile 12, ses rotations propres et sa cinématique, et ainsi
d’alimenter l’algorithme de suivi et de reconnaissance automatique. Le suivi bénéficie, ainsi, d’une segmentation spatiale et d’une meilleure prédiction d’apparence de l’élément mobile 12 sur l’image courante, ce qui supprime quelques cas de décrochage en conditions difficiles. Le suivi bénéficie aussi d’une meilleure caractérisation de la cinématique 3D de l’élément mobile 12 et de sa rotation 3D. La reconnaissance automatique est améliorée avec les mensurations du modèle 3D. In particular, the three-dimensional model makes it possible to better characterise the dimensions of the mobile element 12, its own rotations and its kinematics, and thus to feed the automatic tracking and recognition algorithm. Tracking thus benefits from spatial segmentation and from a better prediction of the appearance of the mobile element 12 in the current image, which eliminates some cases of track loss in difficult conditions. Tracking also benefits from a better characterisation of the 3D kinematics of the mobile element 12 and of its 3D rotation. Automatic recognition is improved with the measurements of the 3D model.
De préférence, le procédé comprend une étape 130 de mise à jour du modèle tridimensionnel au cours du temps en fonction des images courantes de l’élément mobile 12 acquises par l’imageur 20. En effet, la reconstruction 3D est un procédé incrémental qui s’enrichit des informations images aux cours du temps tant par la variabilité angulaire de la reconstruction que par la résolution du modèle pouvant être reconstruit. Le modèle tridimensionnel est, ainsi, soit le même modèle que le modèle précédent, soit un modèle mieux résolu (forme avec plus de détails) que le modèle précédent. Cela peut permettre également un « démasquage progressif » de certaines parties de l’élément considéré. De préférence, le modèle tridimensionnel mis à jour est affiché sur un écran. Preferably, the method comprises a step 130 of updating the three-dimensional model over time based on the current images of the mobile element 12 acquired by the imager 20. Indeed, the 3D reconstruction is an incremental method which is enriched with image information over time both by the angular variability of the reconstruction and by the resolution of the model which can be reconstructed. The three-dimensional model is, thus, either the same model as the previous model, or a better resolved model (shape with more details) than the previous model. This can also allow a “progressive unmasking” of certain parts of the element considered. Preferably, the updated three-dimensional model is displayed on a screen.
De préférence, l’étape de mise à jour 130 comprend la mise à jour des conditions de prise de vue et de détection de l’élément mobile en fonction de primitives des images utilisées pour obtenir le modèle tridimensionnel. Pour cela, une triangulation est par exemple réalisée. Preferably, the update step 130 comprises updating the shooting and detection conditions of the mobile element according to primitives of the images used to obtain the three-dimensional model. For this purpose, a triangulation is carried out, for example.
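One classical way to triangulate a matched primitive from two views — a plausible building block for the update described above, not the method prescribed by the disclosure — is the midpoint of the shortest segment between the two viewing rays:

```python
import numpy as np

def triangulate_midpoint(c1, d1, c2, d2):
    """Midpoint of the shortest segment between two viewing rays
    p1(s) = c1 + s*d1 and p2(t) = c2 + t*d2 (camera centres c1, c2,
    unit direction vectors d1, d2)."""
    c1, d1, c2, d2 = (np.asarray(v, dtype=float) for v in (c1, d1, c2, d2))
    r = c2 - c1
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    denom = a * c - b * b                  # near zero for parallel rays
    s = (c * (d1 @ r) - b * (d2 @ r)) / denom
    t = (b * (d1 @ r) - a * (d2 @ r)) / denom
    return 0.5 * ((c1 + s * d1) + (c2 + t * d2))
```

When the two rays actually intersect, the midpoint coincides with the intersection; with noisy shooting parameters it gives a least-squares-style compromise between the rays.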
De préférence, l’étape de mise à jour 130 comprend la mise à jour au cours du temps de la position géographique 3D et de la vitesse de déplacement 3D de l’élément mobile 12 par rapport au système optronique 16 en fonction de l’image acquise à l’instant courant, d’au moins une image acquise à un instant précédent, et du modèle tridimensionnel de l’élément mobile 12. De préférence, des données relatives à la précision de la position géographique 3D et de la vitesse 3D sont aussi déterminées. Preferably, the updating step 130 comprises updating over time the 3D geographic position and the 3D movement speed of the mobile element 12 relative to the optronic system 16 as a function of the image acquired at the current time, of at least one image acquired at a previous time, and of the three-dimensional model of the mobile element 12. Preferably, data relating to the precision of the 3D geographic position and of the 3D speed are also determined.
De préférence, l’étape de mise à jour 130 comprend aussi le calcul des précisions sur la position géographique et la vitesse de déplacement déterminées, et éventuellement sur les conditions de prise de vue des images. Preferably, the update step 130 also comprises calculating the accuracies of the determined geographical position and speed of movement, and possibly of the shooting conditions of the images.
Ainsi, le procédé de reconstruction décrit permet de reconstruire un élément mobile 12 en trois dimensions, dans les conditions de fonctionnement suivantes :
- au moyen de l’imageur 20 du système optronique 16 en poursuite sur l’élément mobile 12 (cela signifie que l’élément mobile 12 se trouve maintenu dans l’image de la vidéo optronique indépendamment des mouvements respectifs de l’élément mobile 12 et du système optronique 16) ;
- même en disposant de mesures imparfaites des positions du système optronique 16, des attitudes des images et des paramètres internes de la voie optronique utilisée ;
- en détectant, de manière opportune, des mouvements de rotation de l’élément mobile 12 ;
- sur un élément non coopérant, donc sans la possibilité d’agir sur la présentation de l’élément mobile 12 par rapport à l’imageur 20 ;
- dans des conditions d’acquisition critiques en matière de rapport entre la base des sommets de prise de vue et la distance du système optronique 16 à l’élément mobile 12 ;
- dans un temps court, dès lors qu’on dispose d’une opportunité de rotation de l’objet sur lui-même.
Thus, the reconstruction method described makes it possible to reconstruct a mobile element 12 in three dimensions, under the following operating conditions:
- by means of the imager 20 of the optronic system 16 tracking the mobile element 12 (meaning that the mobile element 12 is kept within the image of the optronic video regardless of the respective movements of the mobile element 12 and of the optronic system 16);
- even with imperfect measurements of the positions of the optronic system 16, of the attitudes of the images and of the internal parameters of the optronic channel used;
- by detecting, in a timely manner, rotational movements of the mobile element 12;
- on a non-cooperative element, therefore without the possibility of acting on the presentation of the mobile element 12 relative to the imager 20;
- under acquisition conditions that are critical in terms of the ratio of the baseline between shooting positions to the distance from the optronic system 16 to the mobile element 12;
- within a short time, as soon as an opportunity arises for the object to rotate about itself.
Ainsi, le procédé de reconstruction décrit permet de reconstruire en 3D la forme externe d’un élément mobile dans une scène 10, avec un niveau de détail fonction de la résolution des images, et cela même si l’élément est non coopérant et évolue à grande distance (plusieurs km, voire plusieurs dizaines de km) du système optronique 16, donc avec un rapport base/distance défavorable (typiquement inférieur à 0,1). La reconstruction peut être réalisée avec un unique senseur passif (donc de manière discrète et sans communication entre senseurs), et dans un délai court de l’ordre de 0 à 10 secondes. Un tel procédé permet également de caractériser la cinématique de l’élément mobile 12. Thus, the reconstruction method described makes it possible to reconstruct in 3D the external shape of a mobile element in a scene 10, with a level of detail depending on the resolution of the images, even if the element is non-cooperative and moves at a great distance (several km, or even several tens of km) from the optronic system 16, and therefore with an unfavourable base/distance ratio (typically less than 0.1). The reconstruction can be carried out with a single passive sensor (and therefore discreetly and without communication between sensors), and within a short time, of the order of 0 to 10 seconds. Such a method also makes it possible to characterise the kinematics of the mobile element 12.
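The impact of the unfavourable base/distance ratio can be illustrated with the standard first-order stereo depth-error model, sigma_Z ≈ Z² · sigma_d / (B · f). The numbers below are illustrative assumptions, not values from the disclosure:

```python
def depth_sigma(range_m, baseline_m, focal_px, match_sigma_px):
    """First-order depth uncertainty for a two-view geometry: it grows
    with the square of the range and shrinks with the baseline, which
    is why a base/distance ratio below 0.1 is demanding."""
    return (range_m ** 2) * match_sigma_px / (baseline_m * focal_px)
```

At 10 km with a 500 m baseline (ratio 0.05), a 5000 px focal length and 0.5 px matching noise, this gives a depth uncertainty of about 20 m.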
La reconstruction est d’autant plus complète que l’élément mobile 12 se déplace par rapport au système optronique 16, et effectue le cas échéant des mouvements de rotation. La reconstruction est d’autant plus précise que les images sont de meilleure résolution, conditionnée par les caractéristiques de la voie optronique utilisée du système, ainsi que par la distance à l’objet. Par ailleurs, la précision du modèle est aussi améliorée par la redondance des images utilisées pour la reconstruction. The reconstruction is all the more complete as the mobile element 12 moves relative to the optronic system 16 and, where applicable, performs rotational movements. The reconstruction is all the more precise as the images are of better resolution, conditioned by the characteristics of the optronic channel used in the system, as well as by the distance to the object. Furthermore, the precision of the model is also improved by the redundancy of the images used for the reconstruction.
Le processus de reconstruction 3D d’un élément mobile proposé est qualifié de reconstruction inverse optronique (RIO), puisque, à la différence de la plupart des techniques de reconstruction 3D classiques qui exploitent le mouvement du senseur pour reconstruire une scène 10, le problème de reconstruction 3D d’un élément exploite ici la mobilité de l’élément, le senseur pouvant à l’extrême être fixe durant les acquisitions. The proposed 3D reconstruction process for a mobile element is termed optronic inverse reconstruction (RIO) since, unlike most conventional 3D reconstruction techniques, which exploit the movement of the sensor to reconstruct a scene 10, the 3D reconstruction of an element here exploits the mobility of the element, the sensor possibly even remaining fixed, in the extreme case, during the acquisitions.
Le modèle tridimensionnel obtenu permet également d’améliorer la robustesse d’autres algorithmes mis en œuvre par le système optronique 16 (suivi, pistage...), ainsi que la décision visuelle des opérateurs. Le procédé permet aussi d’améliorer la
visualisation, en permettant une meilleure résolution de l’élément mobile 12, et la possibilité de détacher l’élément mobile 12 de la scène 10, de le présenter sous différentes faces avec indices visuels complémentaires utiles à son identification (réduit le délai et améliore l’analyse). The three-dimensional model obtained also makes it possible to improve the robustness of other algorithms implemented by the optronic system 16 (tracking, trailing, etc.), as well as the operators' visual decision-making. The method also improves visualisation, by allowing better resolution of the mobile element 12 and the possibility of detaching the mobile element 12 from the scene 10 and presenting it from different faces with complementary visual cues useful for its identification (reducing the delay and improving the analysis).
Le procédé est aussi applicable pour le suivi et la reconstruction en simultanée de plusieurs éléments mobiles de la scène 10, tant que ces éléments restent visibles à partir d’une même ligne de visée pour l’imageur 20. The method is also applicable for the simultaneous tracking and reconstruction of several moving elements of the scene 10, as long as these elements remain visible from the same line of sight for the imager 20.
De préférence, le procédé comprend, soit à chaque instant (en temps réel), soit avec un ensemble d’images, une mise à jour des paramètres de prise de vue en bénéficiant des correspondances entre les primitives extraites de l’image courante et les primitives fixes de la scène précédemment extraites au moyen d’un processus de type cartographie et localisation simultanées (SLAM) ou de type aéro-triangulation. Preferably, the method comprises, either at each instant (in real time) or with a set of images, an update of the shooting parameters by benefiting from the correspondences between the primitives extracted from the current image and the fixed primitives of the scene previously extracted by means of a process of the simultaneous mapping and localization (SLAM) type or of the aero-triangulation type.
De préférence, la reconnaissance d’un objet de type véhicule pourra par exemple donner lieu à un traitement de complétion du modèle 3D par symétries si l’on souhaite proposer une vision plus exhaustive du modèle partiel reconstruit. Cette complétion par symétrisation se limite à dupliquer les formes et structures extraites sur la face latérale opposée du véhicule. Preferably, the recognition of a vehicle-type object may, for example, give rise to a symmetry-based completion processing of the 3D model if a more exhaustive view of the reconstructed partial model is desired. This completion by symmetrisation is limited to duplicating the extracted shapes and structures on the opposite lateral face of the vehicle.
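The symmetry-based completion just described amounts to reflecting the partial point cloud across the vehicle's longitudinal plane of symmetry. A minimal sketch, assuming the plane (a point on it and its normal) has already been estimated from the recognised vehicle pose:

```python
import numpy as np

def complete_by_symmetry(points, plane_point, plane_normal):
    """Append to a partial point cloud its mirror image across the
    plane defined by `plane_point` and `plane_normal` (the vehicle's
    assumed longitudinal plane of symmetry)."""
    n = np.asarray(plane_normal, dtype=float)
    n = n / np.linalg.norm(n)
    p = np.asarray(points, dtype=float)
    d = (p - np.asarray(plane_point, dtype=float)) @ n   # signed distances
    mirrored = p - 2.0 * d[:, None] * n                  # plane reflection
    return np.vstack([p, mirrored])
```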
Le procédé est particulièrement adapté pour les applications industrielles suivantes : The process is particularly suitable for the following industrial applications:
- reconstruction de véhicule depuis un senseur positionné sur un autre véhicule. - vehicle reconstruction from a sensor positioned on another vehicle.
- aide à une meilleure identification d’un élément mobile par ajout d’une dimension spatiale et d’une amélioration de la résolution grâce à la multiplicité des images (super résolution). - helps to better identify a moving element by adding a spatial dimension and improving the resolution thanks to the multiplicity of images (super resolution).
- caractérisation de la cinématique de l’élément mobile en termes de vitesse et d’orientation 3D au cours du temps. - characterisation of the kinematics of the mobile element in terms of speed and 3D orientation over time.
L’homme du métier comprendra que les modes de réalisation et variantes de la description peuvent être combinés pourvu qu’ils soient compatibles techniquement.
Those skilled in the art will understand that the embodiments and variants of the description can be combined provided that they are technically compatible.
Claims
1. Procédé de reconstruction en trois dimensions d’un élément mobile (12) dans une scène (10), le procédé étant mis en œuvre par un système optronique (16) comprenant un imageur (20) et un calculateur (22), le procédé comprenant les étapes suivantes: a. la mise en place d’un suivi d’un élément mobile (12) de la scène (10) de sorte que l’imageur (20) du système optronique (16) acquiert plusieurs images successives de l’élément mobile (12) vu sous différents angles en fonction du déplacement de l’élément mobile (12) par rapport au système optronique (16), et b. la reconstruction en trois dimensions de l’élément mobile (12) en fonction des images acquises pour obtenir un modèle tridimensionnel de l’élément mobile (12). 1. A method for three-dimensional reconstruction of a mobile element (12) in a scene (10), the method being implemented by an optronic system (16) comprising an imager (20) and a computer (22), the method comprising the following steps: a. setting up tracking of a mobile element (12) of the scene (10) such that the imager (20) of the optronic system (16) acquires several successive images of the mobile element (12) seen from different angles as a function of the movement of the mobile element (12) relative to the optronic system (16), and b. three-dimensional reconstruction of the mobile element (12) as a function of the acquired images to obtain a three-dimensional model of the mobile element (12).
2. Procédé selon la revendication 1, dans lequel l’étape de reconstruction est déclenchée automatiquement lorsque les images acquises comprennent au moins deux images de l’élément mobile (12) prises selon des angles compatibles avec la reconstruction tridimensionnelle. 2. Method according to claim 1, in which the reconstruction step is triggered automatically when the acquired images comprise at least two images of the mobile element (12) taken at angles compatible with the three-dimensional reconstruction.
3. Procédé selon la revendication 1 ou 2, dans lequel le procédé comprend une étape de détermination de la position géographique et de la vitesse de déplacement de l’élément mobile (12) par rapport au système optronique (16) en fonction de l’image acquise à l’instant courant, d’au moins une image acquise à un instant précédent, et du modèle tridimensionnel de l’élément mobile (12). 3. Method according to claim 1 or 2, in which the method comprises a step of determining the geographical position and the speed of movement of the mobile element (12) relative to the optronic system (16) as a function of the image acquired at the current time, of at least one image acquired at a previous time, and of the three-dimensional model of the mobile element (12).
4. Procédé selon l’une quelconque des revendications 1 à 3, dans lequel le procédé comprend la mise à jour des conditions de prise de vue et de détection de l’élément mobile en fonction de primitives des images utilisées pour obtenir le modèle tridimensionnel. 4. Method according to any one of claims 1 to 3, in which the method comprises updating the conditions for taking pictures and detecting the moving element as a function of primitives of the images used to obtain the three-dimensional model.
5. Procédé selon la revendication 3 ou 4, dans lequel le procédé comprend le calcul des précisions sur la position géographique et la vitesse de déplacement déterminées, et éventuellement sur les conditions de prise de vue des images.
5. Method according to claim 3 or 4, in which the method comprises calculating the accuracies of the determined geographical position and speed of movement, and possibly of the shooting conditions of the images.
6. Procédé selon l’une quelconque des revendications 1 à 5, dans lequel le suivi de l’élément mobile (12) est mis à jour en fonction de la dernière position et de la dernière vitesse déterminées pour l’élément mobile (12), et du modèle tridimensionnel de l’élément mobile (12). 6. Method according to any one of claims 1 to 5, in which the tracking of the mobile element (12) is updated according to the last position and the last speed determined for the mobile element (12), and the three-dimensional model of the mobile element (12).
7. Procédé selon l’une quelconque des revendications 1 à 6, dans lequel le procédé comprend, au cours du temps, une étape de mise à jour du modèle tridimensionnel en fonction de la dernière image acquise par l’imageur (20). 7. Method according to any one of claims 1 to 6, in which the method comprises, over time, a step of updating the three-dimensional model as a function of the last image acquired by the imager (20).
8. Procédé selon l’une quelconque des revendications 1 à 7, dans lequel le modèle tridimensionnel de l’élément mobile (12) est obtenu par extraction de primitives caractéristiques de l’élément mobile (12) dans les images acquises par l’imageur (20) et par mise en correspondance des primitives extraites entre lesdites images. 8. Method according to any one of claims 1 to 7, in which the three-dimensional model of the mobile element (12) is obtained by extracting characteristic primitives of the mobile element (12) in the images acquired by the imager (20) and by matching the extracted primitives between said images.
9. Procédé selon la revendication 8, dans lequel le modèle tridimensionnel de l’élément mobile (12) est aussi obtenu par extraction de primitives caractéristiques de la scène (10) dans les images acquises par l’imageur (20) et par mise en correspondance des primitives extraites entre lesdites images. 9. Method according to claim 8, in which the three-dimensional model of the mobile element (12) is also obtained by extracting characteristic primitives of the scene (10) in the images acquired by the imager (20) and by matching the extracted primitives between said images.
10. Procédé selon l’une quelconque des revendications 1 à 9, dans lequel le modèle tridimensionnel obtenu est affiché sur un écran. 10. Method according to any one of claims 1 to 9, in which the three-dimensional model obtained is displayed on a screen.
11. Procédé selon l’une quelconque des revendications 1 à 10, dans lequel l’étape de reconstruction comprend la détermination d’une texture pour l’élément mobile (12) en fonction des images acquises, le modèle tridimensionnel obtenu étant un modèle reproduisant une texture pour l’élément mobile (12). 11. Method according to any one of claims 1 to 10, in which the reconstruction step comprises determining a texture for the mobile element (12) based on the acquired images, the three-dimensional model obtained being a model reproducing a texture for the mobile element (12).
12. Procédé selon l’une quelconque des revendications 1 à 11, dans lequel le système optronique (16) comprend, en outre, une unité d’évaluation de distances (24) propre à déterminer une distance entre l’élément mobile (12) et le système optronique (16), l’unité d’évaluation (24) comprenant un télémètre et/ou un modèle numérique de terrain. 12. Method according to any one of claims 1 to 11, in which the optronic system (16) further comprises a distance evaluation unit (24) capable of determining a distance between the mobile element (12) and the optronic system (16), the evaluation unit (24) comprising a rangefinder and/or a digital terrain model.
13. Système optronique (16) comprenant un imageur (20) et un calculateur (22), le système optronique (16) étant configuré pour mettre en œuvre un procédé selon l’une quelconque des revendications 1 à 12.
13. Optronic system (16) comprising an imager (20) and a computer (22), the optronic system (16) being configured to implement a method according to any one of claims 1 to 12.
14. Plateforme (14), telle qu’un véhicule, comprenant un système optronique (16) selon la revendication 13.
14. Platform (14), such as a vehicle, comprising an optronic system (16) according to claim 13.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
FRFR2302133 | 2023-03-08 | ||
FR2302133A FR3146517A1 (en) | 2023-03-08 | 2023-03-08 | Method for three-dimensional reconstruction of a moving element |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2024184519A1 true WO2024184519A1 (en) | 2024-09-12 |
Family
ID=88146516
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/EP2024/056207 WO2024184519A1 (en) | 2023-03-08 | 2024-03-08 | Method for three-dimensional reconstruction of a mobile element |
Country Status (2)
Country | Link |
---|---|
FR (1) | FR3146517A1 (en) |
WO (1) | WO2024184519A1 (en) |
-
2023
- 2023-03-08 FR FR2302133A patent/FR3146517A1/en active Pending
-
2024
- 2024-03-08 WO PCT/EP2024/056207 patent/WO2024184519A1/en unknown
Non-Patent Citations (3)
Title |
---|
BULLINGER SEBASTIAN ET AL: "Moving object reconstruction in monocular video data using boundary generation", 2016 23RD INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), IEEE, 4 December 2016 (2016-12-04), pages 240 - 246, XP033085591, DOI: 10.1109/ICPR.2016.7899640 * |
JONATHON LUITEN ET AL: "Track to Reconstruct and Reconstruct to Track", ARXIV.ORG, CORNELL UNIVERSITY LIBRARY, 201 OLIN LIBRARY CORNELL UNIVERSITY ITHACA, NY 14853, 1 October 2019 (2019-10-01), XP081646252, DOI: 10.1109/LRA.2020.2969183 * |
RÜNZ MARTIN ET AL: "MaskFusion: Real-Time Recognition, Tracking and Reconstruction of Multiple Moving Objects", 22 October 2018 (2018-10-22), XP093119012, Retrieved from the Internet <URL:https://arxiv.org/pdf/1804.09194.pdf> [retrieved on 20240112] * |
Also Published As
Publication number | Publication date |
---|---|
FR3146517A1 (en) | 2024-09-13 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP3278301B1 (en) | Method of determining a direction of an object on the basis of an image of the object | |
EP2724203B1 (en) | Generation of map data | |
EP2428934B1 (en) | Method for estimating the movement of a carrier in relation to an environment and calculation device for a navigation system | |
KR102200299B1 (en) | A system implementing management solution of road facility based on 3D-VR multi-sensor system and a method thereof | |
FR2960082A1 (en) | METHOD AND SYSTEM FOR MERGING DATA FROM IMAGE SENSORS AND MOTION OR POSITION SENSORS | |
EP1724592A1 (en) | System for estimating the speed of an aircraft and its application to the detection of obstacles | |
EP2478334B1 (en) | Three-dimensional location of target land area by merging images captured by two satellite-based sensors | |
EP3200153B1 (en) | Method for detecting targets on the ground and in motion, in a video stream acquired with an airborne camera | |
FR2953313A1 (en) | OPTRONIC SYSTEM AND METHOD FOR PREPARING THREE-DIMENSIONAL IMAGES FOR IDENTIFICATION | |
FR2879791A1 (en) | METHOD FOR PROCESSING IMAGES USING AUTOMATIC GEOREFERENCING OF IMAGES FROM A COUPLE OF IMAGES TAKEN IN THE SAME FOCAL PLAN | |
EP3359978B1 (en) | Method for processing an sar image and associated target-detecting method | |
EP3679517B1 (en) | Method for determining projecting edges of a target on an image | |
Yun et al. | Sthereo: Stereo thermal dataset for research in odometry and mapping | |
EP0863488A1 (en) | Method for detecting level contours in two stereoscopic images | |
WO2022144366A1 (en) | Method for determining, using an optronic system, positions in a scene, and associated optronic system | |
WO2024184519A1 (en) | Method for three-dimensional reconstruction of a mobile element | |
WO2017093057A1 (en) | Method for characterising a scene by calculating the 3d orientation | |
FR3085082A1 (en) | ESTIMATION OF THE GEOGRAPHICAL POSITION OF A ROAD VEHICLE FOR PARTICIPATORY PRODUCTION OF ROAD DATABASES | |
WO2021165237A1 (en) | Method and device for determining altitude obstacles | |
FR3065097B1 (en) | AUTOMATED METHOD FOR RECOGNIZING AN OBJECT | |
EP4341897B1 (en) | Method and device for processing a sequence of images in order to determine continuous thumbnails in said sequence of images | |
EP3999865A1 (en) | Method for determining extrinsic calibration parameters for a measuring system | |
FR2749419A1 (en) | METHOD AND DEVICE FOR IDENTIFYING AND LOCATING FIXED OBJECTS ALONG A ROUTE | |
FR3113330A1 (en) | Method for aligning at least two images formed from three-dimensional points | |
FR2678127A1 (en) | IMAGE FORMATION SYSTEM. |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 24709751 Country of ref document: EP Kind code of ref document: A1 |