WO2009042933A1 - Photogrammetric networks for positional accuracy and ray mapping - Google Patents
- Publication number
- WO2009042933A1 (PCT/US2008/077972)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- determining
- point
- correlation
- ray
- location
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T17/05—Geographic models
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
- G06T7/55—Depth or shape recovery from multiple images
- G06T7/593—Depth or shape recovery from multiple images from stereo images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
- G06T2207/10021—Stereoscopic video; Stereoscopic image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30232—Surveillance
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30248—Vehicle exterior or interior
- G06T2207/30252—Vehicle exterior; Vicinity of vehicle
Definitions
- the invention relates to position determining software and to methods to analyze an image. More specifically, the field of the invention is that of visual survey system software for enhancing the accuracy of visual survey systems in real time applications and for generating a ray map of a region being imaged.
- the purpose of a surveying system is to record the absolute position/orientation of objects that can be seen in imagery taken from survey vehicles. To do this, the survey vehicle must accurately record its own absolute position/orientation and it must be able to attain the relative position/orientation of the target object. It can then calculate the absolute position/orientation of the target object to within the combined errors of the absolute and relative systems.
- the survey vehicle has several instruments designed to record the position and orientation of itself and the objects around it.
- the vehicle travels through the survey area capturing and recording the data from the sensors at specified time or distance intervals. That is, at particular moments, the cameras, GPS, INS, and other instruments capture their readings.
- Objects visible in particular images can be located by correlating the capture point data to the image.
- Other information about the objects can be extracted such as sign types, road markings, centerlines, and other visible attributes.
- the present invention involves a positional computation system and method for surveying system which minimizes the potential error in the calculation of location information.
- a feedback technique using data captured by the survey system corrects the determination of relative position of nearby survey capture points. This will primarily use the imagery data from the survey; however the technique in general is not limited to this type of data.
- These nearby relative corrections may be used to create a "rigid" mesh over the entire survey. This mesh may be used to correct the survey as a whole by pinning it to points with known low error or by allowing averaging over greater sample sets.
- the present invention, in one form, relates to a surveying system for determining the location of an object point from images. Two image gathering devices are coupled at a known relative distance, and each image gathering device is adapted to generate an image.
- a location calculator has a plurality of instructions enabling the location calculator to correlate at least two reference points appearing on the two images, and to determine the position of the object point based on the two images and the at least two reference points.
- the present invention, in another form, is a method for determining the position of an object point using two images.
- the first step is correlating at least two reference points appearing on the two images.
- the next step is determining the position of the object point based on the two images and at least the two reference points.
- the method mitigates the multiplication of errors through the several measurements by an incremental type of calculation, deriving relatively accurate reference points which are subsequently used for determining the location of the object point.
- the present invention, in a further form, is a method of generating a ray map for a first camera.
- the method comprises the steps of: obtaining a digital image with the first camera; obtaining camera position information of the first camera; and determining, for a plurality of pixels in the digital image, a direction vector based on the camera position information and region information.
- the ray map including the direction vector and the region information.
- the present invention, in a still further form, is a method of associating a plurality of rays with a point in a region.
- the method comprises the steps of: for each of a plurality of images of the region, obtaining camera position information for the camera taking the image and determining, for a plurality of pixels in the image, a direction vector based on the camera position information and region information.
- the region information including an intensity.
- the method further comprises the steps of determining intersecting direction vectors from multiple images which intersect at the point; and associating the intersecting direction vectors with the point.
- the present invention, in still another form, is a method of generating a virtual image of a region for a first position.
- the method comprises the steps of: determining a ray map associated with the region including a plurality of rays, each ray including region information; determining a subset of the plurality of rays which are viewable from the first position; assigning the region information for each ray of the subset of the plurality of rays to a corresponding location in the virtual image; and determining the region information for a remainder of the virtual image.
- the remainder of the virtual image corresponding to points in the region for which a known ray is not viewable from the first position.
- the present invention, in a yet still further form, is a computer readable medium including instructions to generate a virtual image of a region for a first position.
- the computer readable medium comprises instructions to determine a ray map associated with the region including a plurality of rays, each ray including region information; instructions to determine a subset of the plurality of rays which are viewable from the first position; instructions to assign the region information for each ray of the subset of the plurality of rays to a corresponding location in the virtual image; and instructions to determine the region information for a remainder of the virtual image, the remainder of the virtual image corresponding to points in the region for which a known ray is not viewable from the first position.
- Figure 1 is a perspective view of an example survey vehicle.
- Figure 2 is a perspective view of the stereo camera pair of Figure 1.
- Figure 3 is a left and right image view of a possible image from the stereo camera pair of Figure 2.
- Figure 4 is a top plan view of stereo camera pair in relation to a viewed scene.
- Figure 5 is a top and side view of the arrangement of Figure 4.
- Figure 6 is a dual view of a similar object having different depths.
- Figures 7A-C are schematic diagrams illustrating positional accuracy errors.
- Figure 8 is a schematic diagram illustrating directional accuracy error.
- Figure 9 is a perspective view of uncorrelated views.
- Figure 10 is a schematic diagram of multiple view discrepancy.
- Figure 11 is a flow chart diagram of a method of one embodiment of the present invention.
- Figures 12A and 12B are left and right image views, respectively.
- Figure 13 is a top plan view of correlating reference points.
- Figure 14 is a perspective view of a reference correlation.
- Figure 15 is a perspective view of the results of a mutual reference correlation.
- Figure 16 is a two-dimensional representation of a plurality of camera views imaging a region.
- Figure 17 is a detail view of a portion of Figure 16.
- Figure 18 is an exemplary method of generating a ray map which is associated with points in the region.
- Figure 19 is a two-dimensional representation of the use of ray mapping in the generation of a virtual image.
- Figure 20 is a perspective view of a vehicle including a pair of cameras supported thereon.
- Data structures greatly facilitate data management by data processing systems, and are not accessible except through sophisticated software systems.
- Data structures are not the information content of a memory, rather they represent specific electronic structural elements which impart a physical organization on the information stored in memory. More than mere abstraction, the data structures are specific electrical or magnetic structural elements in memory which simultaneously represent complex data accurately and provide increased efficiency in computer operation.
- the manipulations performed are often referred to in terms, such as comparing or adding, commonly associated with mental operations performed by a human operator. No such capability of a human operator is necessary, or desirable in most cases, in any of the operations described herein which form part of the present invention; the operations are machine operations.
- Useful machines for performing the operations of the present invention include general purpose digital computers or other similar devices. In all cases the distinction between the method operations in operating a computer and the method of computation itself should be recognized.
- the present invention relates to a method and apparatus for operating a computer in processing electrical or other (e.g., mechanical, chemical) physical signals to generate other desired physical signals.
- the present invention also relates to an apparatus for performing these operations.
- This apparatus may be specifically constructed for the required purposes or it may comprise a general purpose computer as selectively activated or reconfigured by a computer program stored in the computer.
- the algorithms presented herein are not inherently related to any particular computer or other apparatus.
- various general purpose machines may be used with programs written in accordance with the teachings herein, or it may prove more convenient to construct more specialized apparatus to perform the required method steps. The required structure for a variety of these machines will appear from the description below.
- the present invention deals with "object-oriented" software, and particularly with an "object-oriented" operating system.
- the "object-oriented" software is organized into "objects", each comprising a block of computer instructions describing various procedures ("methods") to be performed in response to "messages" sent to the object or "events" which occur with the object.
- Such operations include, for example, the manipulation of variables, the activation of an object by an external event, and the transmission of one or more messages to other objects.
- Messages are sent and received between objects having certain functions and knowledge to carry out processes. Messages are generated in response to user instructions, for example, by a user activating an icon with a "mouse" pointer generating an event. Also, messages may be generated by an object in response to the receipt of a message. When one of the objects receives a message, the object carries out an operation (a message procedure) corresponding to the message and, if necessary, returns a result of the operation. Each object has a region where internal states (instance variables) of the object itself are stored and where the other objects are not allowed to access.
- One feature of the object-oriented system is inheritance. For example, an object for drawing a "circle" on a display may inherit functions and knowledge from another object for drawing a "shape" on a display.
- a programmer "programs" in an object-oriented programming language by writing individual blocks of code each of which creates an object by defining its methods.
- a collection of such objects adapted to communicate with one another by means of messages comprises an object-oriented program.
- Object-oriented computer programming facilitates the modeling of interactive systems in that each component of the system can be modeled with an object, the behavior of each component being simulated by the methods of its corresponding object, and the interactions between components being simulated by messages transmitted between objects.
- Objects may also be invoked recursively, allowing for multiple applications of an object's methods until a condition is satisfied. Such recursive techniques may be the most efficient way to programmatically achieve a desired result.
- An operator may stimulate a collection of interrelated objects comprising an object-oriented program by sending a message to one of the objects.
- the receipt of the message may cause the object to respond by carrying out predetermined functions which may include sending additional messages to one or more other objects.
- the other objects may in turn carry out additional functions in response to the messages they receive, including sending still more messages.
- sequences of message and response may continue indefinitely or may come to an end when all messages have been responded to and no new messages are being sent.
- a programmer need only think in terms of how each component of a modeled system responds to a stimulus and not in terms of the sequence of operations to be performed in response to some stimulus. Such sequence of operations naturally flows out of the interactions between the objects in response to the stimulus and need not be preordained by the programmer.
- although object-oriented programming makes simulation of systems of interrelated components more intuitive, the operation of an object-oriented program is often difficult to understand because the sequence of operations carried out by an object-oriented program is usually not immediately apparent from a software listing as in the case for sequentially organized programs. Nor is it easy to determine how an object-oriented program works through observation of the readily apparent manifestations of its operation. Most of the operations carried out by a computer in response to a program are "invisible" to an observer since only a relatively few steps in a program typically produce an observable computer output. In the following description, several terms which are used frequently have specialized meanings in the present context.
- object relates to a set of computer instructions and associated data which can be activated directly or indirectly by the user.
- "windowing environment", "running in windows", and "object oriented operating system" are used to denote a computer user interface in which information is manipulated and displayed on a video display such as within bounded regions on a raster scanned video display.
- network means two or more computers which are connected in such a manner that messages may be transmitted between the computers.
- typically one or more computers operate as a “server”, a computer with large storage devices such as hard disk drives and communication hardware to operate peripheral devices such as printers or modems.
- a processor may be a microprocessor, a digital signal processor (“DSP"), a central processing unit (“CPU”), or other circuit or equivalent capable of interpreting instructions or performing logical actions on information.
- Memory includes both volatile and non-volatile memory, including temporary and cache, in electronic, magnetic, optical, printed, or other format used to store information.
- CDMA means code-division multiple access.
- TDMA means time division multiple access.
- GSM means Global System for Mobile Communications.
- PDC means personal digital cellular.
- CDPD means cellular digital packet data, a packet-data technology used over analog systems.
- AMPS means Advanced Mobile Phone System.
- wireless application protocol or "WAP” mean a universal specification to facilitate the delivery and presentation of web-based data on handheld and mobile devices with small user interfaces.
- GPS means Global Positioning System.
- INS means Inertial Navigation System.
- object when not used in its software programming definition, means the target for which the location or position is being obtained.
- the exemplary embodiment disclosed herein relates to vehicle 10 with two cameras 12 located fixed distance 20 relative to each other, see Figures 1 and 2.
- the sensing involved is visual sensing and the methods disclosed directly relate to visual image processing.
- other embodiments of the invention may use other sensory devices and work with different data in a similar manner to accomplish the error minimization of the present invention.
- the camera is the primary instrument for determining the relative position and orientation of objects to survey vehicle 10.
- a single image can be used to determine the relative direction to the object; however, two images at a known distance and orientation are needed to determine the relative distance to the object.
- Other sensory instruments e.g., sonar, radar, ultrasound, may alternatively be used in the appropriate circumstances.
- cameras 12 that take these images are referred to as a stereo pair. Since cameras 12 are fixed to survey vehicle 10, their orientation and distance to each other are measured very accurately.
- the two images (e.g., left image 30 and right image 40 of Figure 3) taken by stereo pair 12 work together to determine the location of the objects they see.
- the intersection of the viewing rays 50 from the left and right cameras 12 is used to calculate the distance and orientation from vehicle 10, see Figures 4 and 5. Since the orientation and geometry of each camera 12 is known, the angle of an object (left/right and up/down) may be determined (e.g., view angles 32 and 42). This, combined with the calculated distance, may be used to determine the relative three dimensional position of the object in the images.
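As an illustration of this ray-intersection step, the following is a minimal sketch in Python, assuming a simplified two-dimensional (top-down) geometry, a stereo pair with known baseline, and bearing angles already extracted from the two images; the function name and conventions are illustrative, not taken from the patent.

```python
import math

def triangulate_2d(baseline_m, left_angle_rad, right_angle_rad):
    """Intersect the viewing rays of a stereo pair in the ground plane.

    Cameras sit at (-baseline/2, 0) and (+baseline/2, 0), both facing +Y;
    angles are bearings to the target measured from the forward (+Y) axis
    toward +X. Returns (lateral offset, depth) in the units of the baseline.
    """
    dl = (math.sin(left_angle_rad), math.cos(left_angle_rad))    # left ray direction
    dr = (math.sin(right_angle_rad), math.cos(right_angle_rad))  # right ray direction
    xl, xr = -baseline_m / 2.0, baseline_m / 2.0
    denom = dl[0] * dr[1] - dl[1] * dr[0]    # 2D cross product of the directions
    if abs(denom) < 1e-12:
        raise ValueError("rays are parallel; target effectively at infinity")
    t = ((xr - xl) * dr[1]) / denom          # distance parameter along the left ray
    return (xl + t * dl[0], t * dl[1])

# Example: a 2 m baseline viewing a target 10 m ahead on the centerline.
print(triangulate_2d(2.0, math.atan2(1.0, 10.0), math.atan2(-1.0, 10.0)))
```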
- the survey system determines the absolute vehicle position and orientation. GPS 14 measures the position of survey vehicle 10, and INS 16 measures its direction and orientation. In this way the survey system may know where the vehicle was and the direction it was facing when a set of images was taken. Known systems have relative accuracy (vehicle to object) which is very good.
- Figure 6 shows several sizes of stripes on the images, whereas each stripe on the road is actually the same size. The stripes that are further from the survey vehicle have less resolution; therefore their relative position is less accurately known.
- Radial determination (relative angle or direction to a point) may be made with a single image; when combined with the radial determination from the other image of a stereo pair, radial accuracy increases slightly. The accuracy of the radial determination decreases with distance, but it does so linearly. Depth determination (relative distance to a point) requires both images in the stereo pair. Because of this, depth determination accuracy is at best half of the radial accuracy. Depth accuracy also decreases geometrically over distance. This is due to the decreasing angle of incidence between the two images and between the survey vehicle and the ground. As such, depth determination accuracy decreases much more rapidly over distance than does radial accuracy.
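The linear-versus-geometric behavior described above matches the standard range-error relation for a parallel stereo pair. The following is the textbook form, not quoted from the patent, with b the baseline, f the focal length in pixels, and sigma_d the disparity measurement error:

```latex
% Standard stereo range-error relation (textbook result, not from the patent).
\[
  Z = \frac{b\,f}{d_{\mathrm{disp}}}, \qquad
  \sigma_Z \approx \frac{Z^{2}}{b\,f}\,\sigma_d, \qquad
  \sigma_{\perp} \approx Z\,\sigma_{\theta}
\]
% Depth uncertainty sigma_Z grows with the square of the distance Z, while
% the lateral (radial) uncertainty sigma_perp grows only linearly with Z.
```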
- the absolute accuracy of the survey vehicle is based primarily on two devices, which provide the GPS location and heading.
- GPS 14 primarily determines the position (latitude, longitude and altitude) of the survey vehicle, and the INS determines the direction the survey vehicle is facing.
- the INS also assists the GPS by recording changes to the vehicle position and orientation.
- Drift is the error caused by atmospheric conditions and other ambient factors. GPS units can be very accurate when given time to fix on a point. By averaging readings over a short period while stationary, precision error can be reduced. If a survey vehicle were to stop and not move for several minutes, its location could be determined to a higher degree of accuracy. This, however, is not practical for a mobile survey application. Likewise, drift error may be reduced by monitoring trends over a long period of time. To do this the unit must be stationary as well. But, in many cases, a second stationary GPS unit may be used to record these drift trends, which are then applied to the mobile survey data. Theoretically, the second GPS unit is affected by the same ambient conditions as the mobile unit. Using a second GPS unit does not eliminate all drift, as there will be differences between the ambient conditions experienced by the two units that are compounded by their distance.
- Directional accuracy is the error in determining the orientation of the vehicle, see Figure 8.
- this error is multiplied over distance.
- a two degree error will cause an object 3 meters away to be miscalculated by about 10 centimeters. But an object 30 meters away would be off by more than a meter.
- measurement and calculation error may introduce significant error in absolute object position determination.
- relative and absolute errors are combined.
- the system used for determining the absolute position of the survey vehicle is essentially separate from the system used to locate the objects relative to the vehicle. In this system, the error follows the form:
- ε is the total error
- p_ε is the absolute positional error (precision and drift combined)
- a_ε is the absolute angular error
- l_ε is the relative depth error
- r_ε is the relative radial error
- d is the distance of the target point from the survey vehicle
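The form of the combining equation is not recoverable from the text above. Purely as an illustration of how such a budget might be evaluated, here is a sketch that assumes the four terms add, with the angular errors scaled by distance; this combination rule is an assumption, not the patent's formula.

```python
import math

def total_error(d, p_err, a_err_rad, l_err, r_err_rad):
    """Hypothetical worst-case error budget at target distance d (metres):
    absolute positional error, plus the distance-scaled absolute angular
    and relative radial errors, plus the relative depth error. The
    additive combination is an illustrative assumption only."""
    return p_err + d * math.tan(a_err_rad) + l_err + d * math.tan(r_err_rad)

# Example: 2 cm positional error and a 0.1 degree heading error at 30 m.
print(total_error(30.0, 0.02, math.radians(0.1), 0.10, math.radians(0.05)))
```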
- capture points A (92) and B (94) represent survey vehicle 10 at different times and places in the survey. If we attempt to extract the sign feature, we will get a different location depending on whether we choose A's view or B's view. The discrepancy can be as high as the sum of the individual errors (ε_A + ε_B).
- This potential discrepancy may not only cause confusion, but may make the system seem less accurate than it actually is. For example, say that the location of the object can be determined to within 1 meter. If both views are off a meter in opposite directions, then it will appear that the object is off by 2 meters. This also makes it difficult to map previously located objects within other views since they may not line up from view to view.
- the creation and utilization of a photogrammetric feedback network for survey correction is implemented to achieve more precise location determinations.
- accuracy issues and causes with known methods used for image based survey vehicles.
- the discussion below involves an overall method for increasing the accuracy of the survey and sub-methods and techniques that address specific aspects of the correction process. Some methods may be optional or there may be alternates with various applications.
- the techniques are discussed with the goal of increasing the survey accuracy to the point that advanced applications, requiring highly accurate maps, are feasible.
- FIG. 11 shows the task steps 200 involved in the procedure. Attached to each task are potential methods 220 that may be alternatively employed to achieve the corresponding task step.
- the first listed method associated with each task involves computations which are the easiest to implement, while the last listed method is the most accurate of the group.
- this rating of tasks does not necessarily exist for every application. At least one method is typically implemented for each task, and tasks may be computationally combined where appropriate. In some cases, a combination of methods for different situations may yield better results.
- the survey system identifies points in the survey image that are to be used to orient the capture point data. These points typically represent stationary features that may be identified in other images of the capture point and ideally in the imagery from other nearby capture points as well. Additionally, the relative position of points to the capture point's reference frame must be attainable to within a known radius of error. Basically the reference points lie within the field of view of a stereo pair. A minimum of three (non-collinear) points is required. Possible methods include manual observation 222, strip mapping 224, depth mapping 226, and ray mapping 228. In one embodiment of the invention, ray mapping 228 involves the teaching disclosed in the section herein titled RAY MAPPING.
- in FIGs 12A and 12B, some example points are shown from the left and right views 110 and 112, respectively, of capture point A.
- the base of the stop sign, the end of the double yellow line and the corner of a pavement patch are chosen. Since these are visible and identifiable in both views of the stereo pair, their location may be determined by the survey system. Points on mobile objects like other vehicles, or on shadows that will change later in the day, will work for stereo correlation, but are poor choices where images taken at different times may need to be used.
- these reference points are found and recorded in the views of other capture points (e.g., views 120 and 122 of Figure 13). This correlation process produces a data table (not shown) that lists all the possible capture point views of each reference point.
- capture points A (132) and B (134) perceive the locations of the correlated reference points differently, as shown in Figure 14.
- the red and blue X's and ghosted images show where capture points A and B have calculated the locations of the reference points.
- the reference points have been correlated as indicated by the ovals 132.
- On the left of the image we can see the difference in position and orientation of the B capture point.
- the ghosted image of the survey vehicle indicates its location based on its own frame of reference and that of capture point A (136, 138).
- the references associated with each capture point are weighted based on an error function for that point.
- the weighting factors are used to determine a weighted average point between the references as the correlation point. This is done for all the correlated reference points, for example by ramp averaging 236 or spline averaging 238.
- Ramp averaging involves a simple interpolation between the two or three closest correctable points, a linear correction or best fit line.
- a cubic spline curve fit may be used to adjust the correlation points. This provides for a smoother interpolation and provides better vehicle heading correction.
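A sketch of the two averaging options in Python, assuming per-view weights are already available from the error function; scipy's CubicSpline stands in for the patent's spline averaging, and plain linear interpolation would stand in for ramp averaging. Names are illustrative.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def weighted_correlation_point(estimates, weights):
    """Weighted average of each capture point's estimate of one reference;
    the result is used as the correlation point."""
    estimates = np.asarray(estimates, dtype=float)   # shape (k, 3)
    weights = np.asarray(weights, dtype=float)
    return (estimates * weights[:, None]).sum(axis=0) / weights.sum()

def spline_corrections(chainage, corrections):
    """Spline averaging: fit a cubic spline to the correction vectors along
    the survey path. Ramp averaging would instead use np.interp between
    the two or three closest correctable points."""
    return CubicSpline(np.asarray(chainage, dtype=float),
                       np.asarray(corrections, dtype=float))
```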
- in step 228, the position of each of the capture points is recalculated so that its reference points best match the correlation points, see Figure 15. In the case of three correlation points between capture points, this results in the new capture points being in the same frame of reference.
- the correction may be accomplished using reference correction 240, 3 point correction 242, or RMS correction 244. These corrections use the correlation points to correct the capture points (lat, long, alt, pitch, roll, azimuth) recorded by survey vehicle 10.
- Reference correction may be the simplest method, shifting the lat and long of the capture points so that they best match the adjusted correlation points. This method works with a single correlation point between capture points. If more than one exists, either the best (the one weighted the most by both capture points), or an average of the results of several may be used.
- with three correlation points, the relative position between two capture points may be solved and all six degrees corrected. If more than 3 correlation points are available between capture points, either the best 3 or an average of the results of several combinations may be used. This correction has the advantage of correcting the survey vehicle 10 capture point readings in all six degrees of freedom.
- RMS stands for root mean squared; this correction uses 4 or more points together with a weighted average based on distance to find the best corrected location of the capture points relative to each other.
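Both the 3 point and RMS corrections amount to estimating a rigid transform that maps a capture point's perceived reference locations onto the adjusted correlation points. The patent does not name an algorithm for this; a sketch using the standard weighted Kabsch/Procrustes solution follows.

```python
import numpy as np

def rigid_correction(perceived, correlated, weights=None):
    """Weighted least-squares rigid transform (R, t) mapping perceived
    reference positions onto correlation points (Kabsch algorithm).
    Three non-collinear points fix all six degrees of freedom; four or
    more points give the RMS-style weighted solution."""
    P = np.asarray(perceived, dtype=float)    # (n, 3) in the capture frame
    Q = np.asarray(correlated, dtype=float)   # (n, 3) adjusted correlation points
    w = np.ones(len(P)) if weights is None else np.asarray(weights, dtype=float)
    w = w / w.sum()
    mu_p, mu_q = w @ P, w @ Q                 # weighted centroids
    H = (P - mu_p).T @ np.diag(w) @ (Q - mu_q)   # weighted cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
    R = Vt.T @ D @ U.T
    return R, mu_q - R @ mu_p                 # corrected pose: x -> R @ x + t
```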
- a ray map 300 for a region 302 is represented.
- One or more cameras 304 A-D obtain one or more images of region 302.
- multiple stationary cameras are used.
- a single camera or multiple cameras which are supported by a moveable vehicle are used.
- An exemplary moveable vehicle including two cameras mounted thereto is the GPSVISION mobile mapping system available from Lambda Tech International, Inc. located at 1410 Production Road, Fort Wayne, IN 46808.
- although four cameras 304A-D are illustrated, a single camera 304 may be used and moved to the various locations. Further, the discussion related to one of the cameras, such as camera 304A, is applicable to the remaining cameras 304B-D.
- Camera 304A is at a position 306A and receives a plurality of rays of light 308A which carry information regarding objects within region 302.
- light reflected or generated by objects in region 302 is received through a lens system of camera 304A and imaged on a detecting device to produce an image 310A having a plurality of pixels.
- a standard photographic image records a 2D array of data that represents the color and intensity of light entering the lens at different angles at a moment in time. This is a still image.
- Each pixel has region information regarding a portion of region 302, such as color and intensity.
- ray map 308A, corresponding to image 310A, may be generated based on the region information of each pixel and the position 306A of camera 304A.
- position 306A of camera 304A includes both the location and the direction of camera 304A.
- Region 302 is within the viewing field of each of cameras 304A-D.
- ray map 308A includes a plurality of ray vectors 320 which correspond to a plurality of respective points 322 of region 302 and the location of camera 304A.
- the direction of vector 320 entering camera 304A from point 322 may be determined.
- a discussion of determining the position of point 322 is provided herein, but point 322 does lie on the ray defined by the pixel of image 310A associated with point 322 and the position 306A of camera 304A.
- the region information for the pixel in image 310A that corresponds to point 322 is associated with vector 320.
- a ray having an endpoint at the associated point 322, a direction defined by the associated vector 320, and color and intensity provided by the associated region information from image 310A may be determined.
- not all pixels are included in the ray map.
- the location of points 322 is determined in the following manner.
- the ray vectors 320 from several of the ray maps are combined.
- a given ray vector 320 passes through location 306A and has a direction based on position 306A and also passes through point 322, however the location of point 322 is not yet known.
- Another ray vector 320, associated with camera 304B, passes through location 306B and has a direction based on position 306B and also passes through point 322.
- ray vectors 320 which intersect within a given tolerance specify the location of a point 322.
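A sketch of that intersection test: the closest points on the two rays are found in closed form, and the rays are treated as meeting at the midpoint of the closest-approach segment when the gap is within tolerance. The tolerance value and names are illustrative.

```python
import numpy as np

def ray_intersection(o1, d1, o2, d2, tol=0.05):
    """Return the midpoint of the closest-approach segment of two rays
    (origin o, unit direction d) if they pass within `tol` of each other,
    else None. Parallel rays are rejected."""
    o1, d1, o2, d2 = (np.asarray(v, dtype=float) for v in (o1, d1, o2, d2))
    r = o2 - o1
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    denom = a * c - b * b
    if denom < 1e-12:
        return None                              # parallel rays never meet
    t1 = (c * (d1 @ r) - b * (d2 @ r)) / denom   # parameter along ray 1
    t2 = (b * (d1 @ r) - a * (d2 @ r)) / denom   # parameter along ray 2
    p1, p2 = o1 + t1 * d1, o2 + t2 * d2
    return 0.5 * (p1 + p2) if np.linalg.norm(p1 - p2) <= tol else None
```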
- Method 400 may be embodied in one or more software programs having instructions to direct one or more computing devices to carry out method 400.
- Data regarding region 302 is collected, as represented by block 402.
- a plurality of images are obtained from one or more cameras, as represented by blocks 404.
- camera position data is obtained, as represented by blocks 406.
- one or more ray maps are generated, as represented by block 408.
- the one or more ray maps are generated for a plurality of desired points in the region.
- a ray vector is determined for the point, as represented by block 410.
- the ray vector passes through the pixel in the respective image that contains point 322 and is in the direction defined by position 306A. Region information from the image regarding the desired point is associated with the ray vector, as represented by block 412.
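A sketch of the per-pixel direction computation of block 410, assuming an idealized pinhole model with a known intrinsic matrix and the camera's world-frame rotation from calibration; a real system would first undo the lens distortions discussed in the calibration section below.

```python
import numpy as np

def pixel_ray(u, v, K, R_world):
    """Unit world-frame direction vector of the ray through pixel (u, v).
    K is the 3x3 intrinsic matrix; R_world rotates camera-frame axes into
    the world frame. Both are assumed known from calibration."""
    d_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])  # back-project the pixel
    d_world = R_world @ d_cam
    return d_world / np.linalg.norm(d_world)
```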
- the ray maps 308 are maps for a given viewing position while ray map 300 is the overall ray map for region 302.
- in FIG. 19, one exemplary application of ray maps 308 is shown.
- a virtual camera 350 is represented.
- Camera 350 is at a virtual position 352.
- Virtual position 352 includes both the location and the direction of camera 350.
- a set of rays 364A-D from ray maps 308A-D which would enter the lens of camera 350 may be determined. These rays are indicated as reused rays from the map.
- additional rays 364A-G may be determined. In one embodiment, the additional rays may be determined by selecting the nearest neighbor ray for point 322 that would fall within the viewing field of the virtual camera.
- the additional rays are determined by a weighted average of a plurality of the nearest rays.
- a virtual image 370 may be generated of region 302 for virtual camera 350. This virtual image 370 may be used to compare to an actual image from a camera located at position 352.
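A sketch of the ray-reuse step: each mapped point 322 is projected into the virtual camera and its stored region information (color) is written to the corresponding pixel, leaving unfilled pixels for the interpolation described above. Occlusion handling (z-buffering) is omitted, and the world-to-camera pose convention is an assumption.

```python
import numpy as np

def render_virtual(points, colors, K, R, t, width, height):
    """Splat ray-map points into a virtual camera with intrinsics K and
    world-to-camera pose (R, t); returns an image with gaps wherever no
    known ray is viewable from the virtual position."""
    img = np.zeros((height, width, 3), dtype=np.uint8)
    cam = (R @ np.asarray(points, dtype=float).T).T + t
    front = cam[:, 2] > 0                      # keep points ahead of the camera
    uvw = (K @ cam[front].T).T
    uv = np.rint(uvw[:, :2] / uvw[:, 2:3]).astype(int)
    ok = ((uv[:, 0] >= 0) & (uv[:, 0] < width) &
          (uv[:, 1] >= 0) & (uv[:, 1] < height))
    img[uv[ok, 1], uv[ok, 0]] = np.asarray(colors)[front][ok]
    return img
```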
- an initial ray map is created for region 302.
- a mobile camera moves through an area wherein region 302 is imaged.
- the live images from the mobile camera are compared to virtual images for a camera determined based on the position of the mobile camera and the ray map.
- the mobile camera does not need to follow the exact path or take images at exactly the same place as the original cameras.
- the live and virtual images may be compared by a computing device and the differences highlighted. These differences may show changes in region 302, such as the addition of a section of a curb, the ground raked a different way, a pile of dirt, or other changes.
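The comparison can be as simple as a thresholded absolute difference between the live and virtual frames; a minimal sketch, with the threshold an arbitrary illustrative choice:

```python
import numpy as np

def highlight_changes(live, virtual, threshold=30):
    """Boolean mask of pixels whose live/virtual color difference exceeds
    `threshold` on a 0-255 scale; True marks a candidate change in the
    region, such as new curbing or a pile of dirt."""
    diff = np.abs(live.astype(np.int16) - virtual.astype(np.int16))
    return diff.max(axis=-1) > threshold       # collapse the color channels
```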
- the camera position 306 is calibrated as follows for a vehicle 500 (see Figure 20) having a pair of cameras 502 and 504 supported thereby. Camera and lens calibration are used to achieve an accurate ray map. Digital cameras do not linearly represent images across the imaging array. This is due to distortions caused by the lens, aperture and imaging element geometry, as explained in David A. Forsyth and Jean Ponce, "Computer Vision: A Modern Approach," Prentice Hall, 2006. The camera is the primary instrument for determining the relative position of objects in region 302 to vehicle 500. A single image 310 may be used to determine the relative direction to the object 322; however, two images 310 at a known distance and orientation are needed to determine the relative distance to the object 322. The cameras 502 and 504 that take these images 310 are known as a stereo pair. Since these cameras 502 and 504 are fixed to vehicle 500, their orientation and distance to each other may be measured very accurately.
- the calibration of the mobile mapping system consists of camera calibration, camera relative orientation and the offset determination.
- the camera calibration is performed by an analytical method which includes: capturing images of known control points in a test field from different locations and view angles, measuring the image coordinates, and performing the computations to obtain the camera parameters.
- the relative orientation and rotation offset are determined using constraints without ground control points.
- the camera calibration process determines the camera parameters by the well-known bundle adjustment method. Cameras, whether metric, semi-metric or non-metric, do not possess a perfect lens system. To achieve high positioning accuracy, the lens distortions have to be corrected. For this purpose, six distortion parameters are used to correct the radial, decentering and affine distortions. The total camera parameters to be determined consist of the focal length, the principal point, and the lens distortion. The unknown camera parameters are determined using the known control points based on the co-linearity equations, which are defined by:
- x = x_0 + dx - c (N_x / N_z)
- y = y_0 + dy - c (N_y / N_z)
- N_x = r_11 (X - X_0) + r_21 (Y - Y_0) + r_31 (Z - Z_0)
- N_y = r_12 (X - X_0) + r_22 (Y - Y_0) + r_32 (Z - Z_0)
- N_z = r_13 (X - X_0) + r_23 (Y - Y_0) + r_33 (Z - Z_0)
- c is the focal length, (x_0, y_0) is the principal point, (dx, dy) are the distortion corrections, (X_0, Y_0, Z_0) is the camera position, and r_ij are the elements of the rotation matrix.
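Evaluated numerically, the co-linearity equations above project a ground point into image coordinates. A sketch, assuming the rotation matrix elements r_ij, the distortion corrections (dx, dy), and the principal point are available from the calibration:

```python
import numpy as np

def collinearity_project(X, X0, R, c, principal, dx=0.0, dy=0.0):
    """Image coordinates of ground point X seen by a camera at X0 with
    rotation matrix R (elements r_ij as in the equations above), focal
    length c, principal point (x0, y0), and distortion corrections."""
    v = np.asarray(X, dtype=float) - np.asarray(X0, dtype=float)
    Nx = R[0, 0] * v[0] + R[1, 0] * v[1] + R[2, 0] * v[2]
    Ny = R[0, 1] * v[0] + R[1, 1] * v[1] + R[2, 1] * v[2]
    Nz = R[0, 2] * v[0] + R[1, 2] * v[1] + R[2, 2] * v[2]
    x = principal[0] + dx - c * Nx / Nz
    y = principal[1] + dy - c * Ny / Nz
    return x, y
```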
- using the camera parameters, the position and rotation of every image may be computed using known control points.
- the third calibration is to determine the position and orientation offset between the positioning system and the stereo cameras. This procedure may be conducted with or without known control points.
- the principle of the calibration is to determine the offset by using the following conditions:
- An object point located from different image pairs has a unique (X, Y, Z) coordinate.
- the calibration procedure is based on the above positioning equation. Only three rotation offset and three position offset parameters are unknown. By measuring objects from different image pairs, the six offset parameters may be accurately determined.
- the positioning component provides the system position and orientation. After the system is calibrated, every object "seen" by two cameras may be precisely located in a global coordinate system.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Geometry (AREA)
- Software Systems (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Remote Sensing (AREA)
- Computer Graphics (AREA)
- Image Processing (AREA)
Abstract
The present invention involves a surveying system and method which determines the position of an object point using two images. First, at least two reference points appearing on the two images are correlated. Then the position of the object point is determined based on the two images and the two reference points. The application also discloses the use of light rays from a region to generate a ray map of the region based on those rays.
Description
PHOTOGRAMMETRIC NETWORKS FOR POSITIONAL ACCURACY
AND RAY MAPPING
BACKGROUND OF THE INVENTION
Field of the Invention.
The invention relates to position determining software and to methods to analyze an image. More specifically, the field of the invention is that of visual survey system software for enhancing the accuracy of visual survey systems in real time applications and for generating a ray map of a region being imaged.
Description of the Related Art.
The approaches described in this section could be pursued, but are not necessarily approaches that have been previously conceived or pursued. Therefore, unless otherwise indicated herein, the approaches described in this section are not teachings or suggestions of the prior art to the claims in this application and are not admitted to be prior art by inclusion in this section.
There is increasing need for systems that can map transportation infrastructure to ever higher levels of accuracy. Highly accurate survey map data may be used for lane departure warning systems on highways, corridor clearance in transportation systems, and vehicle automation in general with particular application to detection of pavement surface difference for condition assessment and to automated roadway foreign object detection. As vehicular automation, robotic applications, and transportation electronic infrastructure evolve, further use of visual survey systems will be needed.
The purpose of a surveying system is to record the absolute position/orientation of objects that can be seen in imagery taken from survey vehicles. To do this, the survey vehicle must accurately record its own absolute position/orientation and it must be able to attain the relative position/orientation of the target object. It can then calculate the absolute position/orientation of the target object to within the combined errors of the absolute and relative systems.
To achieve this, the survey vehicle has several instruments designed to record the position and orientation of itself and the objects around it. The vehicle travels through the
survey area capturing and recording the data from the sensors at specified time or distance intervals. That is, at particular moments, the cameras, GPS, INS, and other instruments capture their readings.
Once data is collected for the survey, it is used to extract information about the survey area. Objects visible in particular images can be located by correlating the capture point data to the image. Other information about the objects can be extracted such as sign types, road markings, centerlines, and other visible attributes.
However, known systems have measurement error because of the physical limitations of the survey systems. While accurate in many respects, there are situations where enhanced accuracy is desired.
SUMMARY OF THE INVENTION
The present invention involves a positional computation system and method for a surveying system which minimizes the potential error in the calculation of location information. A feedback technique using data captured by the survey system corrects the determination of relative position of nearby survey capture points. This will primarily use the imagery data from the survey; however, the technique in general is not limited to this type of data. These nearby relative corrections may be used to create a "rigid" mesh over the entire survey. This mesh may be used to correct the survey as a whole by pinning it to points with known low error or by allowing averaging over greater sample sets. The present invention, in one form, relates to a surveying system for determining the location of an object point from images. Two image gathering devices are coupled at a known relative distance, and each image gathering device is adapted to generate an image. A location calculator has a plurality of instructions enabling the location calculator to correlate at least two reference points appearing on the two images, and to determine the position of the object point based on the two images and the at least two reference points.
The present invention, in another form, is a method for determining the position of an object point using two images. The first step is correlating at least two reference points appearing on the two images. The next step is determining the position of the object point based on the two images and at least the two reference points.
The method mitigates the multiplication of errors through the several measurements by an incremental type of calculation, deriving relatively accurate reference points which are subsequently used for determining the location of the object point. The present invention, in a further form, is a method of generating a ray map for a first camera. The method comprises the steps of: obtaining a digital image with the first camera; obtaining camera position information of the first camera; and determining, for a plurality of pixels in the digital image, a direction vector based on the camera position information and region information. The ray map includes the direction vector and the region information.
The present invention, in a still further form, is a method of associating a plurality of rays with a point in a region. The method comprises the steps of: for each of a plurality of images of the region, obtaining camera position information for the camera taking the image and determining, for a plurality of pixels in the image, a direction vector based on the camera position information and region information. The region information includes an intensity. The method further comprises the steps of determining intersecting direction vectors from multiple images which intersect at the point; and associating the intersecting direction vectors with the point.
The present invention, in still another form, is a method of generating a virtual image of a region for a first position. The method comprises the steps of: determining a ray map associated with the region including a plurality of rays, each ray including region information; determining a subset of the plurality of rays which are viewable from the first position; assigning the region information for each ray of the subset of the plurality of rays to a corresponding location in the virtual image; and determining the region information for a remainder of the virtual image. The remainder of the virtual image corresponds to points in the region for which a known ray is not viewable from the first position.
The present invention, in a yet still further form, is a computer readable medium including instructions to generate a virtual image of a region for a first position. The computer readable medium comprises instructions to determine a ray map associated with the region including a plurality of rays, each ray including region
information; instructions to determine a subset of the plurality of rays which are viewable from the first position; instructions to assign the region information for each ray of the subset of plurality of rays to a corresponding location in the virtual image; and instructions to determine the region information for a remainder of the virtual image, the remainder of the virtual image corresponding to points in the region for which a known ray is not viewable from the first position.
BRIEF DESCRIPTION OF THE DRAWINGS
The above mentioned and other features and objects of this invention, and the manner of attaining them, will become more apparent and the invention itself will be better understood by reference to the following description of an embodiment of the invention taken in conjunction with the accompanying drawings, wherein:
Figure 1 is a perspective view of an example survey vehicle. Figure 2 is a perspective view of the stereo camera pair of Figure 1.
Figure 3 is a left and right image view of a possible image from the stereo camera pair of Figure 2.
Figure 4 is a top plan view of stereo camera pair in relation to a viewed scene. Figure 5 is a top and side view of the arrangement of Figure 4. Figure 6 is a dual view of a similar object having different depths. Figures 7A-C are schematic diagrams illustrating positional accuracy errors. Figure 8 is a schematic diagram illustrating directional accuracy error.
Figure 9 is a perspective view of uncorrelated views. Figure 10 is a schematic diagram of multiple view discrepancy.
Figure 11 is a flow chart diagram of a method of one embodiment of the present invention. Figures 12A and 12B are left and right image views, respectively.
Figure 13 is a top plan view of correlating reference points. Figure 14 is a perspective view of a reference correlation.
Figure 15 is a perspective view of the results of a mutual reference correlation.
Figure 16 is a two-dimensional representation of a plurality of camera views imaging a region.
Figure 17 is a detail view of a portion of Figure 16. Figure 18 is an exemplary method of generating a ray map which is associated with points in the region.
Figure 19 is a two-dimensional representation of the use of ray mapping in the generation of a virtual image.
Figure 20 is a perspective view of a vehicle including a pair of cameras supported thereon.
Corresponding reference characters indicate corresponding parts throughout the several views. Although the drawings represent embodiments of the present invention, the drawings are not necessarily to scale and certain features may be exaggerated in order to better illustrate and explain the present invention. The exemplifications set out herein illustrate embodiments of the invention, in several forms, and such exemplifications are not to be construed as limiting the scope of the invention in any manner.
DETAILED DESCRIPTION OF THE DRAWINGS
The embodiments disclosed below are not intended to be exhaustive or to limit the invention to the precise forms disclosed in the following detailed description. Rather, the embodiments are chosen and described so that others skilled in the art may utilize their teachings.
The detailed descriptions which follow are presented in part in terms of algorithms and symbolic representations of operations on data bits within a computer memory representing alphanumeric characters or other information. These descriptions and representations are the means used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art.
An algorithm is here, and generally, conceived to be a self-consistent sequence of steps leading to a desired result. These steps are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of
electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated. It proves convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, symbols, characters, display data, terms, numbers, or the like. It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely used here as convenient labels applied to these quantities.
Some algorithms may use data structures for both inputting information and producing the desired result. Data structures greatly facilitate data management by data processing systems, and are not accessible except through sophisticated software systems. Data structures are not the information content of a memory, rather they represent specific electronic structural elements which impart a physical organization on the information stored in memory. More than mere abstraction, the data structures are specific electrical or magnetic structural elements in memory which simultaneously represent complex data accurately and provide increased efficiency in computer operation. Further, the manipulations performed are often referred to in terms, such as comparing or adding, commonly associated with mental operations performed by a human operator. No such capability of a human operator is necessary, or desirable in most cases, in any of the operations described herein which form part of the present invention; the operations are machine operations. Useful machines for performing the operations of the present invention include general purpose digital computers or other similar devices. In all cases the distinction between the method operations in operating a computer and the method of computation itself should be recognized. The present invention relates to a method and apparatus for operating a computer in processing electrical or other (e.g., mechanical, chemical) physical signals to generate other desired physical signals.
The present invention also relates to an apparatus for performing these operations. This apparatus may be specifically constructed for the required purposes or it may comprise a general purpose computer as selectively activated or reconfigured by a computer program stored in the computer. The algorithms presented herein are not inherently related to any particular computer or other apparatus. In particular, various general purpose machines may be used with programs written in accordance with the
teachings herein, or it may prove more convenient to construct more specialized apparatus to perform the required method steps. The required structure for a variety of these machines will appear from the description below.
The present invention deals with "object-oriented" software, and particularly with an "object-oriented" operating system. The "object-oriented" software is organized into "objects", each comprising a block of computer instructions describing various procedures ("methods") to be performed in response to "messages" sent to the object or "events" which occur with the object. Such operations include, for example, the manipulation of variables, the activation of an object by an external event, and the transmission of one or more messages to other objects.
Messages are sent and received between objects having certain functions and knowledge to carry out processes. Messages are generated in response to user instructions, for example, by a user activating an icon with a "mouse" pointer generating an event. Also, messages may be generated by an object in response to the receipt of a message. When one of the objects receives a message, the object carries out an operation (a message procedure) corresponding to the message and, if necessary, returns a result of the operation. Each object has a region where internal states (instance variables) of the object itself are stored and where the other objects are not allowed to access. One feature of the object-oriented system is inheritance. For example, an object for drawing a "circle" on a display may inherit functions and knowledge from another object for drawing a "shape" on a display.
A programmer "programs" in an object-oriented programming language by writing individual blocks of code each of which creates an object by defining its methods. A collection of such objects adapted to communicate with one another by means of messages comprises an object-oriented program. Object-oriented computer programming facilitates the modeling of interactive systems in that each component of the system can be modeled with an object, the behavior of each component being simulated by the methods of its corresponding object, and the interactions between components being simulated by messages transmitted between objects. Objects may also be invoked recursively, allowing for multiple applications of an objects methods until a condition is
satisfied. Such recursive techniques may be the most efficient way to programmatically achieve a desired result.
An operator may stimulate a collection of interrelated objects comprising an object-oriented program by sending a message to one of the objects. The receipt of the message may cause the object to respond by carrying out predetermined functions which may include sending additional messages to one or more other objects. The other objects may in turn carry out additional functions in response to the messages they receive, including sending still more messages. In this manner, sequences of message and response may continue indefinitely or may come to an end when all messages have been responded to and no new messages are being sent. When modeling systems utilizing an object-oriented language, a programmer need only think in terms of how each component of a modeled system responds to a stimulus and not in terms of the sequence of operations to be performed in response to some stimulus. Such sequence of operations naturally flows out of the interactions between the objects in response to the stimulus and need not be preordained by the programmer.
Although object-oriented programming makes simulation of systems of interrelated components more intuitive, the operation of an object-oriented program is often difficult to understand because the sequence of operations carried out by an object-oriented program is usually not immediately apparent from a software listing, as it is in the case of sequentially organized programs. Nor is it easy to determine how an object-oriented program works through observation of the readily apparent manifestations of its operation. Most of the operations carried out by a computer in response to a program are "invisible" to an observer, since only a relatively few steps in a program typically produce an observable computer output. In the following description, several frequently used terms have specialized meanings in the present context. The term "object" relates to a set of computer instructions and associated data which can be activated directly or indirectly by the user. The terms "windowing environment", "running in windows", and "object oriented operating system" are used to denote a computer user interface in which information is manipulated and displayed on a video display, such as within bounded regions on a raster scanned video display. The terms "network", "local area network",
"LAN", "wide area network", or "WAN" mean two or more computers which are connected in such a manner that messages may be transmitted between the computers. In such computer networks, typically one or more computers operate as a "server", a computer with large storage devices such as hard disk drives and communication hardware to operate peripheral devices such as printers or modems. Other computers, termed "workstations", provide a user interface so that users of computer networks can access the network resources, such as shared data files, common peripheral devices, and inter-workstation communication. The computers have at least one processor for executing machine instructions, and memory for storing instructions and other information. Many combinations of processing circuitry and information storing equipment are known by those of ordinary skill in these arts. A processor may be a microprocessor, a digital signal processor ("DSP"), a central processing unit ("CPU"), or other circuit or equivalent capable of interpreting instructions or performing logical actions on information. Memory includes both volatile and non-volatile memory, including temporary and cache, in electronic, magnetic, optical, printed, or other format used to store information. Users activate computer programs or network resources to create "processes" which include both the general operation of the computer program and specific operating characteristics determined by input variables and its environment. In wireless wide area networks, communication primarily occurs through the transmission of radio signals over analog, digital cellular, or personal communications service ("PCS") networks. Signals may also be transmitted through microwaves and other electromagnetic waves. At the present time, most wireless data communication takes place across cellular systems using second generation technology such as code-division multiple access ("CDMA"), time division multiple access ("TDMA"), the Global System for Mobile Communications ("GSM"), personal digital cellular ("PDC"), or through packet-data technology over analog systems such as cellular digital packet data ("CDPD") used on the Advanced Mobile Phone Service ("AMPS").
The terms "wireless application protocol" or "WAP" mean a universal specification to facilitate the delivery and presentation of web-based data on handheld and mobile devices with small user interfaces. The term "GPS" means Global Positioning System. The term "INS" means Inertial Navigation System. The term "object," when not
used in its software programming definition, means the target for which the location or position is being obtained.
The exemplary embodiment disclosed herein relates to vehicle 10 with two cameras 12 located at a fixed distance 20 from each other; see Figures 1 and 2. The sensing involved is visual sensing, and the methods disclosed directly relate to visual image processing. However, other embodiments of the invention may use other sensory devices and work with different data in a similar manner to accomplish the error minimization of the present invention.
In the exemplary embodiment, the camera is the primary instrument for determining the relative position and orientation of objects with respect to survey vehicle 10. A single image can be used to determine the relative direction to the object; however, two images at a known distance and orientation are needed to determine the relative distance to the object. Other sensory instruments, e.g., sonar, radar, ultrasound, may alternatively be used in the appropriate circumstances. In this exemplary embodiment, cameras 12 that take these images are referred to as a stereo pair. Since cameras 12 are fixed to survey vehicle 10, their orientation and distance to each other may be measured very accurately.
The two images (e.g., left image 30 and right image 40 of Figure 3) taken by stereo pair 12 work together to determine the location of the objects they see. The intersection of the viewing rays 50 from left and right cameras 12 is used to calculate the distance and orientation from vehicle 10; see Figures 4 and 5. Since the orientation and geometry of each camera 12 is known, the angle of an object (left/right and up/down) may be determined (e.g., view angles 32 and 42). This, combined with the calculated distance, may be used to determine the relative three dimensional position of the object in the images. The survey system determines the absolute vehicle position and orientation. GPS 14 records the position of the vehicle, specifically recording the latitude, longitude, and altitude of the survey vehicle in the exemplary embodiment. INS 16 measures the direction and orientation of survey vehicle 10. In this way the survey system may know where vehicle 10 was and the direction it was facing when a set of images was taken. Known systems have very good relative (vehicle to object) accuracy.
This is because relative accuracy depends on the physical geometry of the survey vehicle, which can be measured and calibrated to a high degree of certainty. However, accuracy decreases with distance from the survey vehicle. Objects further from the survey vehicle appear smaller in the imagery and thus consume fewer pixels. As an example, Figure 6 shows several apparent sizes of stripes in the images, whereas each stripe on the road is actually the same size. The stripes that are further from the survey vehicle have less resolution, and therefore their relative position is less accurately known.
Both radial and depth determination are needed for survey systems. Radial determination (relative angle or direction to a point) may be made with a single image; when combined with the radial determination from the other image of a stereo pair, radial accuracy increases slightly. The accuracy of the radial determination decreases with distance, but it does so linearly. Depth determination (relative distance to a point) requires both images in the stereo pair. Because of this, depth determination accuracy is at best half of the radial accuracy. Depth accuracy also decreases geometrically over distance. This is due to the decreasing angle of incidence between the two images and between the survey vehicle and the ground. As such, depth determination accuracy decreases much more rapidly over distance than does radial determination.
Survey systems often need to be reliable in terms of absolute accuracy (GPS location and heading). The absolute accuracy of the survey vehicle is based primarily on two devices: GPS 14, which primarily determines the position (latitude, longitude and altitude) of the survey vehicle, and INS 16, which determines the direction the survey vehicle is facing. The INS also assists the GPS by recording changes to the vehicle position and orientation.
The accuracy of GPS depends on its precision and on GPS drift; see Figures 7A-C. Precision is the short-term repeatability. Drift is the error caused by atmospheric conditions and other ambient factors. GPS units can be very accurate when given time to fix on a point. By averaging readings over a short period while stationary, precision error can be reduced. If a survey vehicle were to stop and not move for several minutes, its location could be determined to a higher degree of accuracy. This, however, is not practical for a mobile survey application. Likewise, drift error may be reduced by monitoring trends over a long period of time; to do this, the unit must be stationary as well. But, in many cases, a second stationary GPS unit may be used to record these drift trends, which are then applied to the mobile survey data. Theoretically, the second GPS unit is affected by the same ambient conditions as the mobile unit. Using a second GPS unit does not eliminate all drift, as there will be differences between the ambient conditions experienced by the two units that are compounded by their distance.
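As an illustration of the dual-unit drift correction described above, the following Python sketch subtracts the per-epoch apparent drift of a stationary base unit from the mobile readings. The data layout, the base position constant, and the assumption of synchronized epochs are illustrative assumptions, not details taken from the disclosure.

```python
# Assumed: a stationary base GPS at a surveyed position records the same
# atmospheric drift as the nearby mobile unit, so the base station's
# apparent error at each epoch can be subtracted from the mobile fixes
# (a simplified differential-style correction).

BASE_TRUE = (40.0000000, -85.0000000)    # surveyed base position (assumed)

def correct_track(mobile_readings, base_readings):
    """Subtract per-epoch base-station drift from mobile fixes.

    Both inputs are lists of (lat, lon) pairs sampled at the same epochs.
    """
    corrected = []
    for (mlat, mlon), (blat, blon) in zip(mobile_readings, base_readings):
        drift_lat = blat - BASE_TRUE[0]  # apparent drift at this epoch
        drift_lon = blon - BASE_TRUE[1]
        corrected.append((mlat - drift_lat, mlon - drift_lon))
    return corrected
```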
Directional accuracy is the error in determining the orientation of the vehicle, see Figure 8. When used to determine the location of objects near the vehicle, this error is multiplied over distance. A two degree error will cause an object 3 meters away to be miscalculated by about 10 centimeters. But an object 30 meters away would be off by more than a meter. Thus there are several areas where measurement and calculation error may introduce significant error in absolute object position determination. When determining the surveyed position of an object in question, relative and absolute errors are combined. Currently the system used for determining the absolute position of the survey vehicle is essentially separate from the system used to locate the objects relative to the vehicle. In this system, the error follows the form:
$$\varepsilon = p_{\varepsilon} + d \cdot \sin(a_{\varepsilon} + \varphi_{\varepsilon}) + d^{2} \cdot l_{\varepsilon}$$

where $\varepsilon$ is the total error, $p_{\varepsilon}$ is the absolute positional error (precision and drift combined), $a_{\varepsilon}$ is the absolute angular error, $l_{\varepsilon}$ is the relative depth error, $\varphi_{\varepsilon}$ is the relative radial error, and $d$ is the distance of the target point from the survey vehicle.
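A minimal Python sketch of this error model, assuming the angular errors are given in radians and the depth error coefficient scales with the square of distance; the final loop reproduces the two-degree heading example above.

```python
import math

def total_error(d, p_err, a_err, phi_err, l_err):
    """Total positional error for a target at distance d (meters).

    p_err  : absolute positional error (precision + drift), meters
    a_err  : absolute angular error, radians
    phi_err: relative radial error, radians
    l_err  : relative depth error coefficient, 1/meters
    """
    return p_err + d * math.sin(a_err + phi_err) + d**2 * l_err

# Angular term alone, matching the two-degree example above:
for d in (3.0, 30.0):
    print(d, total_error(d, p_err=0.0, a_err=math.radians(2.0),
                         phi_err=0.0, l_err=0.0))  # ~0.10 m and ~1.05 m
```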
As illustrated in Figures 9 and 10, capture points A (92) and B (94) represent survey vehicle 10 at different times and places in the survey. If we attempt to extract the sign feature, we will get a different location depending on whether we choose A's view or B's view. The discrepancy may be as large as the sum of the individual errors $(\varepsilon_A + \varepsilon_B)$.
This potential discrepancy may not only cause confusion, but may make the system seem less accurate than it actually is. For example, say that the location of the object can be determined to within 1 meter. If the two views are off by a meter in opposite directions, then the object will appear to be off by 2 meters. This also makes it difficult to map previously located objects within other views, since they may not line up from view to view.
In accordance with one embodiment of the present invention, the creation and utilization of a photogrammetric feedback network for survey correction is implemented to achieve more precise location determinations. The discussion above identified some accuracy issues, and their causes, in known methods used for image based survey vehicles. The discussion below presents an overall method for increasing the accuracy of the survey, together with submethods and techniques that address specific aspects of the correction process. Some methods may be optional, or there may be alternates for various applications. The techniques are discussed with the goal of increasing the survey accuracy to the point that advanced applications requiring highly accurate maps are feasible.
This is accomplished by means of a feedback technique using data captured by the survey system to correct the relative position of nearby survey capture points. This will primarily use the imagery data from the survey; however, the technique in general is not limited to this type of data.
These nearby relative corrections may be used to create a "rigid" mesh over the entire survey. This mesh may be used to correct the survey as a whole by pinning it to points with known low error or by allowing averaging over now-larger sample sets. Thus, by incrementally decreasing error over specific segments, the total error introduced by the physical systems may be minimized. While this disclosed embodiment describes using one set of reference points to assist in determining the position of the target object, multiple levels of reference points may be used to mitigate against increases in total error, thus resulting in several levels of reference points. Figure 11 shows the task steps 200 involved in the procedure. Attached to each task are potential methods 220 that may be alternatively employed to achieve the corresponding task step. In general, the first listed method associated with each task involves computations which are the easiest to implement, while the last listed method is the most accurate of the group. However, this ordering does not necessarily hold for every application. At least one method is typically implemented for each task, and tasks may be computationally combined where appropriate. In some cases, a combination of methods for different situations may yield better results.
First, in step 202, the survey system identifies points in the survey image that are to be used to orient the capture point data. These points typically represent stationary features that may be identified in other images of the capture point and, ideally, in the imagery from other nearby capture points as well. Additionally, the relative position of the points with respect to the capture point's reference frame must be attainable to within a known radius of error. Basically, the reference points lie within the field of view of a stereo pair. A minimum of three (non-collinear) points is required. Possible methods include manual observation 222, strip mapping 224, depth mapping 226, and ray mapping 228. In one embodiment of the invention, ray mapping 228 involves the teaching disclosed in the section herein titled RAY MAPPING.
In Figures 12A and 12B, some example points are shown from the left and right views 110 and 112, respectively, of capture point A. In this exemplary embodiment, the base of the stop sign, the end of the double yellow line, and the corner of a pavement patch are chosen. Since these are visible and identifiable in both views of the stereo pair, their location may be determined by the survey system. Points on mobile objects like other vehicles, or on shadows that will change later in the day, will work for stereo correlation, but are poor choices where images taken at different times may need to be used. Next, in step 204, these reference points are found and recorded in the views of other capture points (e.g., views 120 and 122 of Figure 13). This correlation process produces a data table (not shown) that lists all the possible capture point views of each reference point. In Figure 13, three points 124 identified from capture point A are also visible to capture point B of the survey. In this case, all three reference points from A are assigned to each of the same reference points from B. These are the correlated reference points between A and B. Exemplary methods for this step include mesh correlation 230, sequence correlation 232, and orientation aware weighing 234.
Because of the errors in the position and orientation of the different capture points, capture point A (132) and capture point B (134) perceive the locations of the correlated reference points differently, as shown in Figure 14. Here the red and blue X's and ghosted images show where capture points A and B have calculated the locations of the reference points. The reference points have been correlated as indicated by the ovals 132. On the left of the image we can see the difference in position and orientation of the B capture point. The ghosted image of the survey vehicle indicates its location based on its own frame of reference and that of capture point A (136, 138). To determine the best place for the correlation point for each correlated reference, the references associated with each capture point are weighted based on an error function for that point. The weighting factors are used to determine a weighted average point between the references as the correlation point. This is done for all the correlated reference points, for example by ramp averaging 236 or spline averaging 238. Ramp averaging involves a simple interpolation between the two or three closest correctable points, a linear correction or best fit line. For spline averaging, a cubic spline curve fit may be used to adjust the correlation points. This provides a smoother interpolation and better vehicle heading correction.
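The weighting function is not specified in detail; a minimal sketch of the weighted-average correlation point, assuming inverse-error weights (our assumption), might look like this in Python.

```python
def correlation_point(refs):
    """Weighted average of correlated reference observations.

    refs: list of ((x, y, z), error) tuples, one observation per
    capture point; weights are illustratively taken as 1/error.
    """
    weights = [1.0 / e for _, e in refs]
    total = sum(weights)
    return tuple(
        sum(w * p[i] for (p, _), w in zip(refs, weights)) / total
        for i in range(3)
    )

# Two capture points see the same stop-sign base at slightly different
# places; the correlation point lands nearer the observation with the
# smaller error estimate.
print(correlation_point([((10.0, 5.0, 0.0), 0.2), ((10.6, 5.4, 0.0), 0.5)]))
```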
Finally, in step 206, the position of each of the capture points is recalculated so that its reference points best match the correlation points; see Figure 15. In the case of three correlation points between capture points, this results in the new capture points being in the same frame of reference. Here we show capture points A and B referencing their mutual correlation points. The correction may be accomplished using reference correction 240, 3-point correction 242, or RMS correction 244. These corrections use the correlation points to correct the capture point parameters (latitude, longitude, altitude, pitch, roll, azimuth) recorded by survey vehicle 10.
Reference correction may be the simplest method, shifting the latitude and longitude of the capture points so that they best match the adjusted correlation points. This method works with a single correlation point between capture points. If more than one exists, either the best (the one weighted the most by both capture points) or an average of the results of several may be used.
For the 3-point method, the relative positions between two capture points may be solved and all six degrees of freedom corrected. If more than three correlation points are available between capture points, either the best three or an average of the results of several combinations may be used. This correction has the advantage of correcting the survey vehicle 10 capture point readings in all six degrees of freedom.
RMS (root mean square) correction uses four or more points together, with a weighted average based on distance, to find the best corrected location of the capture points relative to each other.
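The disclosure does not spell out the RMS algorithm; one standard way to realize a least-squares fit between two sets of four or more correlated points is the SVD-based (Kabsch) rigid alignment sketched below, with the distance-based weights omitted for brevity. This is a stand-in technique, not necessarily the patented method.

```python
import numpy as np

def rms_correction(src, dst):
    """Best-fit rotation R and translation t mapping src points onto dst
    points in the least-squares (RMS) sense, via the SVD/Kabsch method.

    src, dst: (N, 3) arrays of correlated points, N >= 4.
    """
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)            # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    # Force a proper rotation (det = +1), guarding against reflections.
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ S @ U.T
    t = cd - R @ cs
    return R, t
```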
RAY MAPPING

Referring to Figure 16, a ray map 300 for a region 302 is represented. One or more cameras 304A-D obtain one or more images of region 302. In one embodiment, multiple stationary cameras are used. In one embodiment, a single camera or multiple cameras supported by a moveable vehicle are used. An exemplary moveable vehicle including two cameras mounted thereto is the GPSVISION mobile mapping system available from Lambda Tech International, Inc. located at 1410 Production Road, Fort Wayne, IN 46808. Although four cameras 304A-D are illustrated, a single camera 304 may be used and moved to the various locations. Further, the discussion related to one of the cameras, such as camera 304A, is applicable to the remaining cameras 304B-D. Camera 304A is at a position 306A and receives a plurality of rays of light 308A which carry information regarding objects within region 302. As is known, light reflected or generated by objects in region 302 is received through a lens system of camera 304A and imaged on a detecting device to produce an image 310A having a plurality of pixels. A standard photographic image records a 2D array of data that represents the color and intensity of light entering the lens at different angles at a moment in time; this is a still image. Each pixel has region information regarding a portion of region 302, such as color and intensity.
A ray map 308A corresponding to image 310A may be generated based on the region information of each pixel and the position 306A of camera 304A. In one embodiment, position 306A of camera 304A includes both the location and the direction of camera 304A. Region 302 is within the viewing field of each of cameras 304A-D. By knowing the location and attitude of camera 304A at the time image 310A was taken, the color and intensity of the rays of light traveling to a known point have been captured. Ray map 308A includes a plurality of ray vectors 320 which correspond to a plurality of respective points 322 of region 302 and the location of camera 304A. Based on the position 306A of camera 304A, the direction of a vector 320 entering camera 304A from a point 322 may be determined. A discussion of determining the position of point 322 is provided herein, but point 322 does lie on the ray defined by the pixel of image 310A associated with point 322 and the position 306A of camera 304A. The region information for the pixel in image 310A that corresponds to point 322 is associated with vector 320. As such, for each point 322 in region 302 for which a ray map is desired, a ray having an endpoint at the associated point 322, a direction defined by the associated vector 320, and color and intensity provided by the associated region information from image 310A may be determined. In one embodiment, not all pixels are included in the ray map. In one embodiment, the location of points 322 is determined in the following manner. The ray vectors 320 from several of the ray maps are combined. For camera 304A, a given ray vector 320 passes through location 306A, has a direction based on position 306A, and also passes through point 322; however, the location of point 322 is not yet known. Another ray vector 320, associated with camera 304B, passes through location 306B, has a direction based on position 306B, and also passes through point 322.
Since both of these vectors pass through point 322, their intersection defines the position of point 322 in space. Additional ray vectors from other cameras 304 may also intersect these two ray vectors and thereby further define the location of point 322. As such, each point will have multiple rays 320 with associated region information. In one embodiment, ray vectors 320 which intersect within a given tolerance specify the location of a point 322.
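A minimal sketch of such an intersection test in Python: the midpoint of the closest approach between two rays serves as the estimated point 322, and the residual gap can be checked against the tolerance. The closed-form closest-approach solution is standard geometry, not a formula taken from the disclosure.

```python
import numpy as np

def ray_intersection(o1, d1, o2, d2):
    """Point minimizing the distance to two rays (origin o, direction d);
    returns the midpoint of the closest approach and the residual gap.
    """
    o1, d1, o2, d2 = (np.asarray(v, float) for v in (o1, d1, o2, d2))
    d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
    # Solve for the ray parameters s, t of the closest points.
    b = d1 @ d2
    w = o1 - o2
    denom = 1.0 - b * b                      # zero for parallel rays
    s = (b * (d2 @ w) - (d1 @ w)) / denom
    t = ((d2 @ w) - b * (d1 @ w)) / denom
    p1, p2 = o1 + s * d1, o2 + t * d2
    gap = np.linalg.norm(p1 - p2)            # compare against the tolerance
    return (p1 + p2) / 2.0, gap
```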
Referring to Figure 18, an exemplary method 400 for generating one or more ray maps is shown. Method 400 may be embodied in one or more software programs having instructions to direct one or more computing devices to carry out method 400. Data regarding region 302 is collected, as represented by block 402. A plurality of images are obtained from one or more cameras, as represented by blocks 404. For each image, camera position data is obtained, as represented by blocks 406. Based on the obtained images and camera position data, one or more ray maps are generated, as represented by block 408. In one embodiment, the one or more ray maps are generated for a plurality of desired points in the region. For each desired point in the region, a ray vector is
determined for the point, as represented by block 410. The ray vector passes through the pixel in the respective image that contains point 322 and is in the direction defined by position 306A. Region information from the image regarding the desired point is associated with the ray vector, as represented by block 412. The ray maps 308 are maps for a given viewing position while ray map 300 is the overall ray map for region 302.
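A hedged sketch of blocks 410 and 412 for a simple pinhole camera model, assuming the intrinsics (focal length and principal point, in pixels) are available from the calibration described later; the function signature and the (origin, direction, color) data layout are illustrative only.

```python
import numpy as np

def ray_map(image, R, cam_pos, f, cx, cy):
    """Per-pixel ray vectors for one image (a sketch of blocks 410-412).

    image    : (H, W, 3) array of color values (the region information)
    R        : 3x3 world-from-camera rotation (camera attitude)
    cam_pos  : camera location in world coordinates
    f, cx, cy: assumed pinhole intrinsics in pixels
    """
    rays = []
    H, W = image.shape[:2]
    for v in range(H):
        for u in range(W):
            d_cam = np.array([u - cx, v - cy, f], float)  # camera frame
            d = R @ (d_cam / np.linalg.norm(d_cam))       # world frame
            rays.append((np.asarray(cam_pos, float), d, image[v, u]))
    return rays
```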
Referring to Figure 19, one exemplary application of ray maps 308 is shown. In Figure 19, a virtual camera 350 is represented. Camera 350 is at a virtual position 352. Virtual position 352 includes both the location and the direction of camera 350. Based on virtual position 352 and the known field of view of camera 350, a set of rays 364A-D from ray maps 308A-D which would enter the lens of camera 350 may be determined. These rays are indicated as reused rays from the map. Further, based on known rays from the maps 308, additional rays 364A-G may be determined. In one embodiment, the additional rays may be determined by selecting the nearest neighbor ray for each point 322 that would fall within the viewing field of the virtual camera. In one embodiment, the additional rays are determined by a weighted average of a plurality of the nearest rays. As such, a virtual image 370 may be generated of region 302 for virtual camera 350. This virtual image 370 may be compared to an actual image from a camera located at position 352.
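The nearest-neighbor selection might be sketched as follows, assuming the ray map has already been grouped by region point; the dot-product alignment test and the (direction, color) data layout are our assumptions, not details from the disclosure.

```python
import numpy as np

def pick_color(point, rays_at_point, virt_pos):
    """Nearest-neighbor ray reuse for one region point.

    rays_at_point: list of (unit_direction, color) pairs, each direction
    pointing from `point` toward a camera that saw it (assumed layout).
    """
    want = np.asarray(virt_pos, float) - np.asarray(point, float)
    want /= np.linalg.norm(want)
    # Reuse the captured ray that best aligns with the virtual view.
    _, color = max(rays_at_point, key=lambda r: float(np.dot(r[0], want)))
    return color
```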
In one embodiment, an initial ray map is created for region 302. A mobile camera moves through an area wherein region 302 is imaged. The live images from the mobile camera are compared to virtual images for a camera determined based on the position of the mobile camera and the ray map. The mobile camera does not need to follow the exact path or take images at exactly the same place as the original cameras. The live and virtual images may be compared by a computing device and the differences highlighted. These differences may show changes in region 302, such as the addition of a section of a curb, the ground raked a different way, a pile of dirt, or other changes.
In one embodiment, the camera position 306 is calibrated as follows for a vehicle 500 (see Figure 20) having a pair of cameras 502 and 504 supported thereby. Camera and lens calibration are used to achieve an accurate ray map. Digital cameras do not linearly represent images across the imaging array. This is due to distortions caused by the lens, aperture, and imaging element geometry, as explained in David A. Forsyth and Jean Ponce, "Computer Vision: A Modern Approach," Prentice Hall, 2006. The camera is the primary instrument for determining the position of objects in region 302 relative to vehicle 500. A single image 310 may be used to determine the relative direction to an object 322; however, two images 310 at a known distance and orientation are needed to determine the relative distance to the object 322. The cameras 502 and 504 that take these images 310 are known as a stereo pair. Since these cameras 502 and 504 are fixed to vehicle 500, their orientation and distance to each other may be measured very accurately.
An accurate position and orientation for each camera and sensor on vehicle 500 must be determined and registered. The calibration of the mobile mapping system consists of camera calibration, camera relative orientation, and offset determination. The camera calibration is performed by an analytical method which includes capturing images, with different locations and view angles, of known control points in a test field; measuring the image coordinates; and performing the computations to obtain the camera parameters. The relative orientation and rotation offset are determined using constraints, without ground control points.
Camera Calibration
In one embodiment, the camera calibration processing determines the camera parameters by a well-known bundle adjustment method. Cameras, whether metric, semi-metric or non-metric, do not possess a perfect lens system. To achieve high positioning accuracy, the lens distortions have to be corrected. For this purpose, six distortion parameters are used to correct the radial, decentering, and affine distortions. The total camera parameters to be determined consist of the focal length, the principal point, and the lens distortion. The unknown camera parameters are determined using the known control points based on the co-linearity equations, defined by:
$$x = x_0 + dx - c\,\frac{N_x}{N_z}, \qquad y = y_0 + dy - c\,\frac{N_y}{N_z} \tag{1}$$

with

$$\begin{aligned} N_x &= r_{11}(X - X_0) + r_{21}(Y - Y_0) + r_{31}(Z - Z_0) \\ N_y &= r_{12}(X - X_0) + r_{22}(Y - Y_0) + r_{32}(Z - Z_0) \\ N_z &= r_{13}(X - X_0) + r_{23}(Y - Y_0) + r_{33}(Z - Z_0) \end{aligned} \tag{2}$$

where $c$ is the focal length, $x_0, y_0$ are the coordinates of the principal point, $X_0, Y_0, Z_0$ is the perspective center of the camera, $dx, dy$ are the camera distortions, and $r_{11}, \ldots, r_{33}$ are the elements of the rotation matrix.
In one embodiment, with a least squares solution, the camera parameters and the position and rotation of every image may be computed using known control points.
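As an illustration, a least-squares (bundle adjustment) solver would minimize image-space residuals of the collinearity equations (1)-(2); the sketch below computes one such residual, collapsing the six distortion parameters into constant corrections dx and dy for brevity. The parameterization is an assumption made for illustration.

```python
import numpy as np

def collinearity_residual(params, R, X, obs):
    """Image-space residual of one control point for a least-squares
    (bundle) adjustment of equations (1)-(2).

    params: (c, x0, y0, dx, dy, X0, Y0, Z0), i.e. camera parameters and
            perspective center, with distortion collapsed to dx, dy
    R     : 3x3 rotation matrix of the image
    X     : known ground control point (X, Y, Z)
    obs   : measured image coordinates (x, y)
    """
    c, x0, y0, dx, dy, X0, Y0, Z0 = params
    # N = (Nx, Ny, Nz) per equation (2): columns of R dotted with (X - X0).
    N = R.T @ (np.asarray(X, float) - np.array([X0, Y0, Z0]))
    x = x0 + dx - c * N[0] / N[2]
    y = y0 + dy - c * N[1] / N[2]
    return np.array([obs[0] - x, obs[1] - y])
```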
Camera Relative Orientation
For a stereo camera system, two cameras are mounted on a stationary platform. This means that the relative relationship between the two cameras is constant. The method to determine the relative orientation uses the co-planarity equation, which states that the two conjugate image points and the two perspective centers lie in one plane:
$$\begin{vmatrix} b_x & b_y & b_z \\ u & v & w \\ u' & v' & w' \end{vmatrix} = 0 \tag{3}$$

where $(u, v, w)$ and $(u', v', w')$ are the three dimensional image coordinates on the left and right images and $(b_x, b_y, b_z)$ is the base vector between the two cameras.
Since the height of the camera is known, there are five independent relative orientation parameters: x, y, and the three angular parameters. At least five points are needed to solve for the relative orientation parameters. For relative orientation, only image points are measured and used for the determination; no control points are required. This method works as long as the parallax is large enough. That is true for aerial photography, but in most stereo camera systems the base vector is limited and the parallax is small. This causes a very high correlation between the relative orientation parameters. To fix this problem, one method is to determine the relative orientation by applying relative orientation constraints: the same distance measured from two different image pairs should have the same value in the calibration procedure.
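Equation (3) can be evaluated as a scalar triple product; the following sketch computes the coplanarity residual that a relative orientation solver would drive toward zero for each of the five or more conjugate point pairs. The determinant form is the standard rendering of the condition, assumed here.

```python
import numpy as np

def coplanarity_residual(b, p_left, p_right):
    """Scalar coplanarity condition for one conjugate point pair: the base
    vector b = (bx, by, bz) and the two image rays (u, v, w) and
    (u', v', w') must be coplanar, i.e. their triple product vanishes.
    """
    return float(np.linalg.det(np.array([b, p_left, p_right], float)))
```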
Offset Calibration
The third calibration determines the position and orientation offset between the positioning system and the stereo cameras. This procedure may be conducted with or without known control points. The principle of the calibration is to determine the offset by using the following conditions:
1) an object point located from different image pairs has a unique (X, Y, Z) coordinate; 2) different points on a vertical line have a unique (X, Y) coordinate; and 3) different points in a horizontal plane have a unique (Z) coordinate.
$$X_v = K\bigl(R_l\,(X^{*} - X_l) - D_b\bigr) \tag{4}$$
The calibration procedure is based on the above positioning equation. Only three rotation offset parameters and three position offset parameters are unknown. By measuring objects from different image pairs, the six offset parameters may be accurately determined. The positioning component provides the system position and orientation. After the system is calibrated, every object "seen" by the two cameras may be precisely located in a global coordinate system.
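Reading equation (4) as a standard direct-georeferencing transform (an assumption on our part), applying the calibrated offsets to map a stereo-measured point into the global frame might look like the following sketch.

```python
import numpy as np

def georeference(X_gps, R_ins, R_off, d_off, X_cam):
    """Map a point measured in the stereo-camera frame into the global
    frame using the calibrated rotation offset R_off and position offset
    d_off (a hedged reading of positioning equation (4)).

    X_gps: GPS antenna position in the world frame
    R_ins: world-from-body rotation supplied by the INS
    X_cam: point coordinates in the camera frame
    """
    X_cam = np.asarray(X_cam, float)
    return np.asarray(X_gps, float) + R_ins @ (R_off @ X_cam + d_off)
```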
While this invention has been described as having an exemplary design, the present invention may be further modified within the spirit and scope of this disclosure. This application is therefore intended to cover any variations, uses, or adaptations of the invention using its general principles. Further, this application is intended to cover such departures from the present disclosure as come within known or customary practice in the art to which this invention pertains.
Claims
1. A surveying system for determining the location of an object point from images, said system comprising:
two image gathering devices coupled at a known relative distance, each said image gathering device adapted to generate an image; and
a location calculator, said location calculator having a plurality of instructions enabling said location calculator to correlate at least two reference points appearing on the two images, and to determine the position of the object point based on the two images and the at least two reference points.
2. The surveying system of claim 1 further comprising a global positioning system associated with said location calculator.
3. The surveying system of claim 1 further comprising an inertial navigation system.
4. In a surveying system, a method of determining the position of an object point using two images, said method comprising the steps of:
correlating at least two reference points appearing on the two images; and
determining the position of the object point based on the two images and the at least two reference points.
5. The method of claim 4 further comprising the step of identifying the two reference points.
6. The method of claim 5 wherein said step of identifying is accomplished by one of manual observation, strip mapping, depth mapping, and ray mapping.
7. The method of claim 4 wherein said step of correlating is accomplished by one of mesh correlation, sequence correlation, and orientation aware weighing.
8. The method of claim 4 further comprising the step of determining the location of correlation points.
9. The method of claim 8 wherein said step of determining the location of correlation points uses the correlation points to correct the location of the object point.
10. The method of claim 8 wherein said step of determining the location of correlation points is accomplished by one of ramp averaging and spline averaging.
11. The method of claim 4 wherein said step of determining the position of the object point is accomplished by one of reference correction, 3-point correction, and RMS correction.
12. A machine-readable program storage device for storing encoded instructions for a method of determining the location of an object point using two images, said method comprising the steps of:
correlating at least two reference points appearing on the two images; and
determining the position of the object point based on the two images and the at least two reference points.
13. The machine-readable program storage device of claim 12 wherein said method further comprises the step of identifying the two reference points.
14. The machine-readable program storage device of claim 13 wherein said method has said step of identifying being accomplished by one of manual observation, strip mapping, depth mapping, and ray mapping.
15. The machine-readable program storage device of claim 12 wherein said method has said step of correlating being accomplished by one of mesh correlation, sequence correlation, and orientation aware weighing.
16. The machine-readable program storage device of claim 12 wherein said method further comprises the step of determining the location of correlation points.
17. The machine-readable program storage device of claim 16 wherein said method includes said step of determining the location of correlation points using the correlation points to correct the location of the object point.
18. The machine-readable program storage device of claim 16 wherein said method includes said step of determining the location of correlation points being accomplished by one of ramp averaging and spline averaging.
19. The machine-readable program storage device of claim 12 wherein said method includes said step of determining the position of the object point being accomplished by one of reference correction, 3-point correction, and RMS correction.
20. A method of generating a ray map for a first camera, including the steps of:
obtaining a digital image with the first camera;
obtaining camera position information of the first camera; and
determining, for a plurality of pixels in the digital image, a direction vector based on the camera position information and region information, the ray map including the direction vector and the region information.
21. A method of associating a plurality of rays with a point in a region, the method comprising the steps of:
for each of a plurality of images of the region,
(a) obtaining camera position information for the camera taking the image; and
(b) determining, for a plurality of pixels in the image, a direction vector based on the camera position information and region information, the region information including an intensity;
determining intersecting direction vectors from multiple images which intersect at the point; and
associating the intersecting direction vectors with the point.
22. A method of generating a virtual image of a region for a first position, the method comprising the steps of:
determining a ray map associated with the region including a plurality of rays, each ray including region information;
determining a subset of the plurality of rays which are viewable from the first position;
assigning the region information for each ray of the subset of the plurality of rays to a corresponding location in the virtual image; and
determining the region information for a remainder of the virtual image, the remainder of the virtual image corresponding to points in the region for which a known ray is not viewable from the first position.
23. The method of claim 22, wherein the step of determining the region information for the remainder of the virtual image includes the step of assigning, for each point in the remainder of the virtual image, the region information of a ray associated with the point which is the nearest to being viewable from the first position.

24. The method of claim 22, wherein the step of determining the region information for the remainder of the virtual image includes the step of assigning, for each point in the remainder of the virtual image, region information determined by a weighted average of a plurality of rays associated with the point.
25. A computer readable medium including instructions to generate a virtual image of a region for a first position, comprising:
instructions to determine a ray map associated with the region including a plurality of rays, each ray including region information;
instructions to determine a subset of the plurality of rays which are viewable from the first position;
instructions to assign the region information for each ray of the subset of the plurality of rays to a corresponding location in the virtual image; and
instructions to determine the region information for a remainder of the virtual image, the remainder of the virtual image corresponding to points in the region for which a known ray is not viewable from the first position.
26. The computer readable medium of claim 25, wherein the instructions to determine the region information for the remainder of the virtual image include instructions to assign, for each point in the remainder of the virtual image, the region information of a ray associated with the point which is the nearest to being viewable from the first position.

27. The computer readable medium of claim 25, wherein the instructions to determine the region information for the remainder of the virtual image include instructions to assign, for each point in the remainder of the virtual image, region information determined by a weighted average of a plurality of rays associated with the point.
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US11/864,515 US8264537B2 (en) | 2007-09-28 | 2007-09-28 | Photogrammetric networks for positional accuracy |
US11/864,515 | 2007-09-28 | ||
US11/864,377 | 2007-09-28 | ||
US11/864,377 US20090087013A1 (en) | 2007-09-28 | 2007-09-28 | Ray mapping |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2009042933A1 | 2009-04-02 |
Family
ID=40511887
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/US2008/077972 | Photogrammetric networks for positional accuracy and ray mapping | | 2008-09-26 |
Country Status (1)
Country | Link |
---|---|
WO | WO2009042933A1 |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108171695A | 2017-12-29 | 2018-06-15 | 安徽农业大学 | Expressway pavement detection method based on image processing |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6222482B1 | 1999-01-29 | 2001-04-24 | International Business Machines Corporation | Hand-held device providing a closest feature location in a three-dimensional geometry database |
US6707487B1 | 1998-11-20 | 2004-03-16 | In The Play, Inc. | Method for representing real-time motion |
US6928366B2 | 2002-03-01 | 2005-08-09 | Gentex Corporation | Electronic compass system |
Legal Events

Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 08834613; Country of ref document: EP; Kind code of ref document: A1 |
| NENP | Non-entry into the national phase | Ref country code: DE |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 08834613; Country of ref document: EP; Kind code of ref document: A1 |