US20170323448A1 - Pedestrian tracking at a traffic intersection to identify vulnerable roadway users for traffic signal timing, pedestrian safety, and traffic intersection control - Google Patents
- Publication number
- US20170323448A1 (application Ser. No. 15/470,627)
- Authority
- United States
- Prior art keywords
- pedestrian
- safe passage
- view
- field
- passage zone
- Prior art date
- Legal status
- Granted
Classifications
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/01—Detecting movement of traffic to be counted or controlled
- G08G1/0104—Measuring and analyzing of parameters relative to traffic conditions
- G08G1/0108—Measuring and analyzing of parameters relative to traffic conditions based on the source of data
- G08G1/0116—Measuring and analyzing of parameters relative to traffic conditions based on the source of data from roadside infrastructure, e.g. beacons
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/246—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/50—Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
- G06V20/54—Surveillance or monitoring of activities, e.g. for recognising suspicious objects of traffic, e.g. cars on the road, trains or boats
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/005—Traffic control systems for road vehicles including pedestrian guidance indicator
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/01—Detecting movement of traffic to be counted or controlled
- G08G1/0104—Measuring and analyzing of parameters relative to traffic conditions
- G08G1/0125—Traffic data processing
- G08G1/0129—Traffic data processing for creating historical data or processing based on historical data
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/01—Detecting movement of traffic to be counted or controlled
- G08G1/015—Detecting movement of traffic to be counted or controlled with provision for distinguishing between two or more types of vehicles, e.g. between motor-cars and cycles
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/01—Detecting movement of traffic to be counted or controlled
- G08G1/04—Detecting movement of traffic to be counted or controlled using optical or ultrasonic detectors
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/09—Arrangements for giving variable traffic instructions
- G08G1/091—Traffic information broadcasting
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/16—Anti-collision systems
- G08G1/166—Anti-collision systems for active traffic, e.g. moving vehicles, pedestrians, bikes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20092—Interactive image processing based on input by user
- G06T2207/20104—Interactive definition of region of interest [ROI]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30236—Traffic on road, railway or crossing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30241—Trajectory
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30242—Counting objects in image
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/09—Arrangements for giving variable traffic instructions
- G08G1/096—Arrangements for giving variable traffic instructions provided with indicators in which a mark progresses showing the time elapsed, e.g. of green phase
Definitions
- the present invention relates to the field of traffic detection. Specifically, the present invention relates to calibrating a pedestrian speed in a region of interest of a traffic detection zone used by pedestrians, for pedestrian detection and counting in traffic intersection control.
- Outputs are sent to external devices or locations for use or storage, such as for example to a traffic signal controller, which performs control and timing functions based on the information provided. These outputs also provide traffic planners and engineers with information on the volume of traffic at key points in a traffic network. This information is important for comparing volumes over periods of time to help with accurate adjustment of signal timing and managing traffic flow.
- Current systems and methods of traffic detection provide data that results only from a count of a total number of vehicles, which may or may not include bicycles or other road users, and therefore there is no way of differentiating between different types of vehicles. As the need for modified signal timing to accommodate bicyclists, pedestrians and others becomes more critical for proper traffic management, a method for separating the count of all modes of use on a thoroughfare is needed to improve the ability to accurately manage traffic environments.
- Traffic planners and engineers require data on the volume of pedestrian traffic at key points in a traffic network. This data is important for comparing volumes over periods of time to help with accurate adjustment of signal timing. No current method for automatic count and data collection for pedestrian activity exists in a traffic detection system. As the need for modified signal timing to accommodate roadway users such as pedestrians becomes more critical for proper traffic management, a method for accurately identifying and counting pedestrians using a roadway intersection would greatly improve the ability to efficiently manage traffic environments.
- Yet another objective of the present invention is to automatically calibrate a traffic detection system by calculating pedestrian speed in a field of view for improved traffic intersection control.
- A further objective of the present invention is to provide a system and method of identifying pedestrian incidents in a traffic detection zone, and triggering an alarm based on pedestrian incidents. It is still a further objective of the present invention to combine vehicle detection, bicycle detection, and pedestrian detection in a whole scene analysis of a field of view for traffic intersection control.
- The present invention provides systems and methods of identifying a presence, volume, velocity and trajectory of pedestrians in a region of interest in a field of view of a traffic detection zone. These systems and methods present an approach to traffic intersection control that includes, in one embodiment, both identification of a pedestrian detection zone in the field of view, and identification of individual pedestrians in the pedestrian detection zone. This approach, styled as a pedestrian zone detection, identification and counting framework, enables improved pedestrian counting in the pedestrian detection zone, and increased accuracy in various aspects of roadway management.
- Identification of a pedestrian detection zone in the field of view in the present invention is performed, in one embodiment, using a zone position analysis that automatically determines a pedestrian area in an intersection based on locations of one or both of vehicle and bicycle detection zones.
- Such vehicular and bicycle detection zones are either themselves automatically determined in a field of view, or drawn by a user. Regardless, knowledge of the location of these zones allows the present invention to calculate a pedestrian detection zone based on their position relative to a stop bar at a traffic intersection.
- Identification of a pedestrian detection zone in the field of view in the present invention is performed, in another embodiment, using an object movement analysis that automatically determines a pedestrian area in an intersection based on movement of various other objects within the field of view, irrespective of the location of other detection zones.
- Such objects include vehicles, motorcycles, bicycles, pedestrians, and any other moving objects that may be detected by sensors capturing data in the field of view. Regardless, analysis of image pixel activity of these moving objects allows the present invention to calculate a pedestrian detection zone.
- Identification of individual pedestrians in the pedestrian detection zone in the present invention is performed, in one embodiment, by comparing a part-based object recognition analysis with a model of a single walking pedestrian to differentiate individual pedestrians from groups of moving pedestrians. Such a comparison analyzes image characteristics to separate groups of pedestrians for improved count accuracy.
- the present invention also includes calibration of pedestrian speed in traffic intersection control.
- the pedestrian zone detection, identification and counting framework locates a region of interest based on locations of intersection or pavement markings and lane structures, such as a stop bar and lane lines, and computes features of an image inside the region of interest to calculate the pedestrian speed.
- the present invention also includes incident detection in traffic intersection control.
- The pedestrian zone detection, identification and counting framework learns a background of the pedestrian detection zone, and looks for changes in the background to identify non-moving objects such as prone objects or pedestrians, or unauthorized vehicles. Identification of such non-moving objects initiates an alarm for responsible authorities to improve emergency response and efficient intersection performance.
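The background-learning step above can be sketched as a simple running-average model. None of this code appears in the patent; the function names, the forgetting factor `alpha`, and the difference threshold are illustrative assumptions only:

```python
import numpy as np

def update_background(bg, frame, alpha=0.05):
    """Running-average background model; alpha is a hypothetical
    forgetting factor controlling how fast the background adapts."""
    return (1 - alpha) * bg + alpha * np.asarray(frame, dtype=np.float64)

def incident_mask(bg, frame, diff_thresh=40):
    """Pixels differing strongly from the learned background.
    A blob that stays in this mask over many frames is a non-moving
    object (e.g., a prone pedestrian or an unauthorized vehicle) and
    would trigger an alarm after some dwell time."""
    return np.abs(np.asarray(frame, dtype=np.float64) - bg) > diff_thresh
```

In practice the mask would also be filtered for blob size and persistence before an alarm is raised.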
- FIG. 1 is a system diagram for pedestrian zone detection, identification and counting according to one aspect of the present invention.
- FIG. 2 is a flowchart of steps performed for pedestrian zone detection, identification and counting according to one aspect of the present invention.
- FIG. 3 is a flowchart of steps performed for calibrating a pedestrian speed in pedestrian zone detection, identification and counting according to another aspect of the present invention.
- FIG. 4 is a flowchart of steps performed for incident detection in the pedestrian zone detection and pedestrian identification and counting according to one embodiment of the present invention.
- FIG. 5 is an exemplary representation of a field of view in a traffic detection zone, showing in particular a region of interest for pedestrian detection according to the present invention.
- FIG. 6 is another exemplary representation of a field of view in a traffic detection zone, showing vehicular, bicycle, and pedestrian detection zones according to one embodiment of the present invention.
- FIG. 7 is a further exemplary representation of a field of view in a traffic detection zone, showing accumulated tracks of vehicles and pedestrians according to one embodiment of the present invention.
- FIG. 1 is a system diagram illustrating elements of a pedestrian tracking and counting framework 100 , according to one aspect of the present invention.
- the pedestrian tracking and counting framework 100 is performed within one or more systems and/or methods that includes several components, each of which define distinct activities for defining an area used by pedestrians 102 in a traffic detection zone 114 , and accurately counting pedestrians 102 in such an area, for traffic intersection control.
- FIGS. 5-7 are exemplary screenshot images 111 of a field of view 112 in a traffic detection zone 114 .
- In FIG. 5, a region of interest 103 is highlighted for a pedestrian detection zone 104 , and pedestrians 102 are shown present therein.
- In FIG. 6, the pedestrian detection zone 104 is shown below user-drawn vehicular and bicycle detection zones 105 in the field of view 112 .
- In FIG. 7, arrows indicating accumulated tracks 106 of moving objects 107 are shown therein, as are pedestrian tracks 108 in the region of interest 103 .
- FIGS. 5-7 also show standard intersection roadway markings and lane structures 109 .
- the pedestrian tracking and counting framework 100 ingests, receives, requests, or otherwise obtains input data 110 that represents a field of view 112 of the traffic detection zone 114 .
- Input data 110 is collected from the one or more sensors 120 , which may be positioned in or near a roadway area for which the traffic detection zone 114 is identified and drawn.
- the one or more sensors 120 include video systems 121 such as cameras, thermal cameras, radar systems 122 , magnetometers 123 , acoustic sensors 124 , and any other devices or systems 125 which are capable of detecting a presence of objects within a traffic intersection environment.
- The input data 110 is applied to a plurality of data processing components 140 within a computing environment 130 that also includes one or more processors 132 , a plurality of software and hardware components, and one or more modular software and hardware packages configured to perform specific processing functions.
- the one or more processors 132 , plurality of software and hardware components, and one or more modular software and hardware packages are configured to execute program instructions to perform algorithms for various functions within the pedestrian tracking and counting framework 100 that are described in detail herein, and embodied in the one or more data processing modules 140 .
- The plurality of data processing components 140 include a data ingest component 141 configured to ingest, receive, request, or otherwise obtain input data 110 as noted above, and a pedestrian zone detection and counting initialization component 142 configured to initialize the pedestrian tracking and counting framework 100 and to retrieve input data 110 for performing the various functions of the present invention.
- the plurality of data processing modules 140 also include a pedestrian zone identification component 143 , an image processing and pedestrian detection learning component 144 , a speed calibration component 145 , an incident detection component 146 , and a counting component 147 .
- Output data 180 may include a pedestrian count, generated by the counting component 147 according to one or more embodiments of the present invention.
- Output data 180 may also include a calibrated pedestrian speed, generated by the speed calibration component 145 according to another embodiment of the present invention.
- Output data 180 may further include an alarm indicating an incident detected in a pedestrian detection zone 104 , generated by the incident detection component 146 according to still another embodiment of the present invention.
- Output data 180 may also be provided for additional analytics and processing in one or more third party or external applications 190 . These may include a traffic management tool 191 , a zone and lane analysis component 192 , a traffic management system 193 , and a signal controller 194 .
- The pedestrian zone identification component 143 is configured to define a pedestrian detection zone 104 in the field of view 112 of the traffic detection zone 114 for subsequent counting of pedestrians 102 therein. Different analytical approaches 160 may be applied to achieve this determination. In one embodiment, the pedestrian zone identification component 143 applies a zone position analysis 161 that determines the pedestrian detection zone 104 based on locations of one or more of vehicle and bicycle detection zones 105 in the field of view 112 .
- Vehicle and bicycle detection zones 105 are typically drawn in various places in the field of view 112 depending on user requirements. In most situations, the user requires detection at or near the stop bar. Detection zones 105 are usually drawn above the stop bar, and an algorithm is applied to identify the detection zones 105 nearest to the stop bar. An area comprised of a pedestrian strip is created up to the top line of these zones 105 , extending from the left to right edge of the field of view 112 below the top lines of the zones 105 . The pedestrian strip height is determined by a calculation of the vehicle and bicycle zone heights, and may be extended to cover a larger area that is more likely to be used by all pedestrians 102 .
- The zone position analysis 161 therefore accomplishes defining a pedestrian detection zone 104 by identifying a position of at least one vehicle detection zone 105 and at least one bicycle detection zone 105 in nearest proximity to a stop bar, with each of the at least one vehicle detection zone 105 and the at least one bicycle detection zone 105 having a height that extends to or near to the stop bar.
- the zone position analysis 161 calculates a height of a pedestrian strip in the field of view 112 from the height of the at least one vehicle detection zone 105 and the height of the at least one bicycle detection zone 105 , and extends a length of the pedestrian strip to a leftmost edge of the field of view 112 , and a rightmost edge of the field of view 112 .
- the zone position analysis 161 may also extend the height of the pedestrian strip into a portion of the at least one vehicle detection zone 105 and into a portion of the at least one bicycle detection zone 105 .
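The zone position analysis above reduces to a small geometric computation. The sketch below is not from the patent: the coordinate convention (y grows downward, zone bottoms sit at the stop bar side), the function name, and the extension fraction are all assumptions made for illustration:

```python
def pedestrian_strip(zones, frame_width, extend_frac=0.25):
    """Derive a pedestrian strip from vehicle/bicycle detection zones.

    zones: list of (x, y_top, width, height) rectangles in image
    coordinates for the detection zones nearest the stop bar.
    Returns (x, y_top, width, height) of the strip, spanning the
    field of view from its leftmost to its rightmost edge.
    """
    # Bottom edge of the zones, taken here as the stop bar side.
    bottom = max(y + h for (_, y, _, h) in zones)
    # Strip height computed from the mean zone height, extended
    # upward into the zones to cover more of the pedestrian area.
    mean_h = sum(h for (_, _, _, h) in zones) / len(zones)
    strip_h = max(1, int(mean_h * extend_frac))
    return (0, bottom - strip_h, frame_width, strip_h)
```

A single zone type (vehicle only, or bicycle only) works equally well, since only the zone rectangles themselves are needed.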
- In another embodiment, the pedestrian zone identification component 143 applies an object movement analysis 162 that determines the pedestrian detection zone 104 based on movement of various objects within the field of view 112 , such as vehicles, bicycles, and other pedestrians 102 .
- This analysis 162 does not rely upon any other data, such as the locations of vehicle and bicycle detection zones 105 in the field of view 112 , or user drawing of such zones 105 .
- The object movement analysis 162 determines the area of the field of view 112 where pedestrians 102 typically enter the roadway, by identifying and differentiating pedestrians 102 from other roadway users and tracking their position as they move through the field of view 112 .
- Pedestrians 102 have characteristics that differ markedly from other roadway objects, such as vehicles and bicycles. These characteristics include, but are not limited to, size, gesture, speed, and entry and exit points in the field of view 112 . Standard intersection roadway markings and lane structures 109 may also be used to identify areas where pedestrians 102 should be traveling.
- Once the pedestrian zone identification component 143 identifies normal pedestrian tracks 108 in the field of view 112 , a boundary box is created, and the area can then be used to collect additional data from various analytics, such as determining count, speed, trajectory, and grouping of pedestrians 102 . Additionally, by analyzing the motion strength and frequency of activity of each pixel, the pedestrian zone identification component 143 obtains accumulated tracks 106 of moving objects 107 in the field of view 112 . This enables refining the boundary of the pedestrian detection zone 104 , as well as other detection zones 105 .
- the object movement analysis 162 therefore accomplishes defining a pedestrian detection zone 104 by ascertaining a region of interest 103 in the field of view 112 for pedestrian tracks 108 , based on at least one of lane structures and intersection road markings 109 and movement of pixels representing moving objects 107 relative to those lane structures and intersection road markings 109 .
- Accumulated tracks 106 of moving objects 107 are determined in the field of view 112 by analyzing motion strength and frequency of activity of each pixel representing the moving objects 107 in the field of view 112 .
- the present invention also tracks pedestrian characteristics in the region of interest 103 to distinguish the accumulated tracks 106 of the moving objects 107 from the pedestrian tracks 108 .
- Analyzing motion strength of pixels in the object movement analysis 162 may include computing a binary thresholded image defining a histogram of oriented gradient features that further define a pedestrian contour, and updating the histogram as pixel activity occurs in the changing image.
- Analyzing a frequency of pixel activity may include computing an activity frequency threshold and finding accumulated tracks 106 from pixel frequency activity.
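A minimal version of the per-pixel motion-strength and activity-frequency analysis can be written with frame differencing. The thresholds and names below are illustrative assumptions, not values from the patent:

```python
import numpy as np

def accumulate_activity(frames, motion_thresh=25):
    """Per-pixel activity frequency from successive frame differences.

    frames: sequence of equal-shaped grayscale arrays.
    Returns a float array where each pixel holds the fraction of
    frame pairs in which it changed by more than motion_thresh.
    """
    frames = [np.asarray(f, dtype=np.int16) for f in frames]
    counts = np.zeros(frames[0].shape, dtype=np.float64)
    for prev, cur in zip(frames, frames[1:]):
        # Binary thresholded difference image: 1 where motion occurred.
        counts += (np.abs(cur - prev) > motion_thresh)
    return counts / (len(frames) - 1)

def active_region(freq, freq_thresh=0.5):
    """Mask of pixels whose activity frequency exceeds a threshold,
    approximating the accumulated tracks of moving objects."""
    return freq > freq_thresh
```

The resulting mask is the raw material from which accumulated tracks 106 and zone boundaries would be refined.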
- The image processing and pedestrian detection learning component 144 is configured to detect one or more pedestrians 102 in the pedestrian zone 104 from similarities of a single walking pedestrian model with part-based object recognition of individual pedestrians 102 , and increment a count for the counting component 147 . Multiple analytical approaches 170 may therefore be applied to detect the one or more pedestrians 102 for the counting component 147 .
- the image processing and pedestrian detection learning component 144 applies a part-based object recognition analysis 171 and image analysis using a histogram of oriented gradient features 172 to develop a model 173 of the single walking pedestrian.
- the image processing and pedestrian detection learning component 144 applies these analytical approaches 170 by, in one aspect of the present invention, analyzing portions of the field of view 112 by moving a sliding window through the pedestrian detection zone 104 in the field of view 112 , and computing features of current pixel content identified in the sliding window by identifying part-based features that define an individual pedestrian 102 .
- the part-based features include one or more of body structure combinations, body shape, body width and walking gestures.
- the image processing and pedestrian detection learning component 144 also determines a width and a height of one or more object parts, compares body structure combinations with one or more predetermined templates, and applies one or more geometric constraints to separate the part-based features.
- the image processing and pedestrian detection learning component 144 then proceeds with developing the model 173 of a single walking pedestrian to separate each individual pedestrian in a group of moving pedestrians in the field of view 112 . This is accomplished by computing a histogram of oriented gradient pedestrian features 172 based on pixels defining a pedestrian contour.
- the image processing and pedestrian detection learning component 144 next determines a matching confidence between an individual pedestrian and a group of moving pedestrians by calculating a mathematical similarity between the computed features of current pixel content and the model of the single walking pedestrian 173 . Where a matching confidence is high, this indicates that an individual pedestrian has been identified, and the present invention increments a pedestrian count in the counting component 147 . Where a matching confidence is low, the present invention analyzes the next portion of the field of view 112 by moving the sliding window to the next position in the field of view 112 for further image processing.
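The sliding-window matching step can be sketched as follows, with cosine similarity standing in for the patent's unspecified "mathematical similarity" and a plain column-profile feature standing in for the HoG descriptor. All names, features, and thresholds here are assumptions for illustration:

```python
import numpy as np

def cosine_similarity(a, b):
    """Similarity in [-1, 1] between two feature vectors."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def scan_for_pedestrians(feature_fn, image, model, win_w, step, confidence=0.9):
    """Slide a window across the image; count windows whose features
    match the single-walking-pedestrian model with high confidence."""
    count, x = 0, 0
    w = image.shape[1]
    while x + win_w <= w:
        feats = feature_fn(image[:, x:x + win_w])
        if cosine_similarity(feats, model) >= confidence:
            count += 1      # high confidence: individual pedestrian found
            x += win_w      # skip past the matched pedestrian
        else:
            x += step       # low confidence: move to the next position
    return count
```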
- FIG. 2 is a flowchart illustrating steps in a process 200 for performing the pedestrian tracking and counting framework 100 , according to certain embodiments of the present invention.
- a process 200 may include one or more algorithms for pedestrian zone identification within the component 143 , and for image processing and pedestrian detection learning within the component 144 , and for the various analytical approaches applied within each such component.
- Pedestrian zone identification and counting in the process 200 are initialized at step 210 by retrieving input data 110 representing a field of view 112 for a traffic detection zone 114 .
- the process 200 then detects and defines the pedestrian zone 104 , using one of the analytical approaches 160 , in either step 220 or 230 .
- At step 220 , the process 200 determines and defines a pedestrian zone 104 using existing positions of one or more of vehicle and bicycle detection zones 105 in the traffic detection zone 114 .
- Those, as noted above, may be either manually drawn by users, or automatically determined, and the process at step 220 proceeds by identifying a position of at least one of the vehicle detection zones and bicycle detection zones 105 in nearest proximity to a stop bar, and calculating a height of a pedestrian strip in the field of view 112 from the height of vehicle detection zone(s) 105 and the height of the bicycle detection zone(s) 105 .
- the process 200 does not require both vehicle detection zones and bicycle zones 105 , and therefore the pedestrian zone 104 may be calculated using one or both of these types of zones 105 . Additionally, one or more of each zone may be used to determine and define the pedestrian zone 104 according to this embodiment of the present invention.
- the process 200 applies the analytical approach 162 to determine and define pedestrian zones 104 at step 230 , using movement of one or more objects 107 in the field of view 112 .
- This approach 162 ascertains a region of interest 103 in the field of view 112 for pedestrian tracks 108 , based on lane structures and intersection road markings 109 and movement of pixels representing moving objects 107 .
- the process 200 identifies a region of interest 103 in the form of a pedestrian detection zone 104 for further processing of images to detect and count pedestrians 102 .
- pixels in the region of interest 103 are processed to analyze pixel content, using a combination of analytical approaches 170 that examine characteristics of a person to separate groups of people and improve count accuracy.
- One such analytical approach 170 is the part-based object recognition analysis 171 , which identifies an individual person within a group by using local features, which are less affected by occlusion than global features.
- A single object, in this case a human pedestrian 102 , can be thought of as having many individual parts, such as a head, arms, torso and legs, and each of those parts can be assigned a standard representative pixel size. Identification of these parts, and the relationship between them, can be used to recognize a person from a group, even if partly occluded.
- The head can be approximated as a circular-shaped feature, and the shoulders may be approximated as an arc in the image, using, for example, an edge feature space technique.
- Predetermined templates may be used to identify these parts using template matching techniques, such as, for example, edge intensity template matching.
- Geometric constraints relative to the relationship between the parts may also be applied. For example, a constraint that the head cannot be next to the torso may be used to remove false matches. Additionally, other techniques such as boosted-cascade classifiers with edgelet features may be applied to learn part detection.
- Parts can include the full body, head, torso, shoulders, legs, head-shoulders, and many other combinations of such parts.
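Geometric constraints on part relationships might look like the following check. The specific ratios, the part dictionary layout, and the function name are invented for illustration and do not come from the patent:

```python
def plausible_pedestrian(parts):
    """Apply simple geometric constraints to candidate body parts.

    parts: dict mapping part name to (x_center, y_center, w, h)
    in image coordinates (y grows downward).
    """
    head, torso, legs = parts.get("head"), parts.get("torso"), parts.get("legs")
    if not (head and torso and legs):
        return False
    # Vertical ordering: head above torso, torso above legs.
    if not (head[1] < torso[1] < legs[1]):
        return False
    # Head must sit roughly over the torso, not beside it
    # (the "head cannot be next to the torso" constraint).
    if abs(head[0] - torso[0]) > torso[2] / 2:
        return False
    # Head should be narrower than the torso.
    return head[2] < torso[2]
```

Candidate part sets that fail these checks are discarded as false matches before counting.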
- Another analytical approach 170 employed at step 250 is to develop a model 173 of a single walking pedestrian using a histogram of oriented gradient features 172 . Because pedestrians 102 often travel in groups, the ability to count pedestrians 102 accurately may degrade. The present invention therefore uses various characteristics of pedestrians, such as height, width, body shape, head shape, speed and location, to separate each individual that may be in a group.
- the process 200 creates a complex model 173 for the single walking pedestrian, based on all the ‘single walking pedestrians’ that have been identified.
- the model 173 therefore continually evolves as more data is collected within the present invention.
- The single walking pedestrian model 173 comprises histogram of oriented gradient (HoG) features 172 that include head-torso-leg body structure, body shape, body width, walking gestures, and others to define a pedestrian contour.
- the process 200 computes this by analyzing portions of the field of view 112 in a sliding window that moves through grouped pedestrians in the image 111 to separate individual pedestrians from the grouped pedestrians based on the matching confidence between the single walking pedestrian model 173 and the computed features of the current content in the portions of the field of view 112 in the sliding window.
- the matching confidence is the mathematical interpretation of the similarity between the single walking pedestrian model 173 and the computed features of the current content of the sliding window.
- If the matching confidence is high, the process 200 concludes that a single walking pedestrian is found. If it is low, the analysis proceeds to the next portion of the field of view 112 by moving the sliding window to the next position and performs the comparison again, until it reaches the end of the grouped pedestrians.
- pedestrian detection using a HoG approach 172 and a single walking pedestrian model 173 in step 250 therefore takes an image 111 from input data 110 , and may create a multi-scale image pyramid as the process 200 slides a moving window through the image to compute HoG features.
- the process 200 may also apply one or more statistical classifiers, such as for example SVM or the like, to detect a pedestrian using these HoG features.
- the process 200 learns by fusing results of these statistical classifiers across all portions of the field of view 112 in sliding window positions and different image scales, and develops the model 173 to detect pedestrians 102 .
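The sliding-window matching described above can be sketched as follows. This is a simplified illustration rather than the patented implementation: the `hog_features` descriptor, cell size, window size, and the cosine-style matching confidence are all assumptions chosen for brevity, standing in for the full HoG feature computation and model comparison of step 250.

```python
import numpy as np

def hog_features(patch, cell=8, bins=9):
    """Compute a simplified histogram-of-oriented-gradients descriptor
    for a grayscale patch (one normalized orientation histogram per cell)."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0   # unsigned orientation
    h, w = patch.shape
    feats = []
    for r in range(0, h - cell + 1, cell):
        for c in range(0, w - cell + 1, cell):
            m = mag[r:r + cell, c:c + cell].ravel()
            a = ang[r:r + cell, c:c + cell].ravel()
            hist, _ = np.histogram(a, bins=bins, range=(0, 180), weights=m)
            feats.append(hist / (np.linalg.norm(hist) + 1e-6))
    return np.concatenate(feats)

def sliding_window_scores(image, model_vec, win=(64, 32), step=8):
    """Slide a window over the image; the score is a cosine-style similarity
    (a 'matching confidence') between window features and the model vector."""
    H, W = image.shape
    wh, ww = win
    scores = []
    for r in range(0, H - wh + 1, step):
        for c in range(0, W - ww + 1, step):
            f = hog_features(image[r:r + wh, c:c + ww])
            denom = np.linalg.norm(f) * np.linalg.norm(model_vec) + 1e-6
            scores.append(((r, c), float(f @ model_vec) / denom))
    return scores
```

In practice a trained statistical classifier such as a linear SVM would replace the simple cosine similarity, and the window would be run over a multi-scale image pyramid as described above.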
- the pedestrian speed calibration component 145 is configured to calibrate a pedestrian speed with a region of interest 103 in the field of view 112 for more accurate detection and counting of pedestrians 102 in traffic intersection control. It is to be noted that pedestrian speed calibration may be performed manually by a user or automatically using one or more image processing steps as discussed below.
- the pedestrian speed calibration component 145 performs automatic calibration of pedestrian speed with a region of interest 103 in the field of view 112 through a transformation of image pixels to actual distance traveled of a pedestrian 102 in the image. Because of the constant possibility of movement of sensors 120 such as cameras, and other changes such as focal length in the case of video cameras, the pedestrian speed calibration component 145 attempts a transformation from a pixel-based image 111 to an actual distance-based environment so that a proper speed is calculated in relation to a defined pedestrian zone 104 .
- the pedestrian speed calibration component 145 uses the intersection pavement markings and lane structures 109 to determine the speed at which a pedestrian 102 is moving in the field of view 112 . Based on the position of vehicle and bicycle detection zones 105 in the field of view 112 , the present invention detects the horizontal stop bar and lane lines to determine the stop bar location. A stop bar finding algorithm may also be applied to identify one or more horizontally straight white lines in an image, by finding a peak in the horizontal projection. The layout of the traffic detection zone 114 may also be used to find the stop bar, as the bottom zones of each lane are typically close to the stop bar. Once the stop bar is found, the present invention attempts to find lane lines which intersect with the stop bar. Zone coordinates are also utilized to find the most vertically-oriented lane lines, either to the left or to the right of a vehicle detection zone 105 .
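The "peak in the horizontal projection" idea can be illustrated with a minimal sketch; the function name and the binary-image input are assumptions, and a real implementation would also validate the candidate line's thickness and straightness:

```python
import numpy as np

def find_stop_bar_row(binary_img):
    """Locate a horizontal white stop bar as the row with the peak
    horizontal projection (count of bright pixels per row)."""
    projection = binary_img.sum(axis=1)
    return int(np.argmax(projection))
```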
- the pedestrian speed calibration component 145 therefore performs automatic calibration of pedestrian speed with a region of interest 103 in the field of view 112 by initially identifying a location of one or more of a stop bar and lane lines in the field of view 112 , and determining an intersection of the lane lines with the stop bar to develop coordinates of the region of interest 103 .
- the pedestrian speed calibration component 145 also identifies a vertical orientation of the lane lines relative to the stop bar.
- the pedestrian speed calibration component 145 then computes features of an image 111 inside the region of interest 103 to differentiate between image pixels.
- Features analyzed may include edge gradients, thresholded grayscale pixels, and feature projections.
- the present invention measures an inter-lane distance between the image pixels using a known lane line width and the vertical orientation of the lane lines relative to the stop bar to map the image pixels to an actual distance traveled of a pedestrian 102 in the region of interest 103 .
- using this measurement and mapping, the pedestrian speed calibration component 145 calculates a pedestrian speed from the actual distance traveled that is calibrated with the region of interest 103 .
- the calculation includes computing the number of feet or meters traveled relative to lane lines and stop bar markings, and the distance per unit of time traveled by the pedestrian 102 .
- the calibrated pedestrian speed may then be provided to a traffic controller system as output data 180 , or other external devices or location for storage or use.
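The pixel-to-distance transformation can be sketched as follows, assuming a known lane-line spacing measured in the image (in pixels) and a standard lane width (in feet); the function and parameter names are illustrative only:

```python
def pedestrian_speed_fps(pixels_moved, lane_width_px, lane_width_ft, elapsed_s):
    """Convert a pixel displacement to feet using a lane width of known
    physical size seen in the image, then compute speed in feet per second."""
    feet_per_pixel = lane_width_ft / lane_width_px
    return pixels_moved * feet_per_pixel / elapsed_s
```

For example, a pedestrian who moves 120 pixels in 8 seconds, in an image where a 12-foot lane spans 60 pixels, is traveling about 3 feet per second.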
- FIG. 3 is a flowchart illustrating steps in a process 300 for performing the calibration of pedestrian speed in the pedestrian tracking and counting framework 100 , according to another embodiment of the present invention.
- Such a process 300 may include one or more algorithms for pedestrian speed calibration in a region of interest 103 within the component 145 .
- the process 300 is initialized at step 310 by retrieving input data 110 representing a field of view 112 in the traffic detection zone 114 .
- the process 300 analyzes this input data 110 to ascertain, at step 320 , a region of interest 103 in which pedestrians 102 may use the roadway within the traffic detection zone 114 .
- the region of interest 103 may or may not be the specific pedestrian detection zone 104 referenced above with respect to other aspects of the present invention.
- the process 300 determines the region of interest 103 using one or more of pavement or intersection markings and lane structures 109 , positions of other detection zones 105 , movement of objects 107 in the field of view 112 , or some combination of these approaches.
- the process 300 attempts to identify positions of both a stop bar and lane lines for vehicles and bicycles in the region of interest 103 . Using this information, the process 300 develops positional coordinates of the region of interest 103 at step 340 . This may be performed in combination with the approach(es) used to ascertain a region of interest 103 . Regardless, these zonal coordinates are used to further identify vertical orientations of the lane lines relative to the stop bar, so that those lane lines with the most vertical orientations relative to the detected stop bar are used for further computations of pedestrian speed as noted below.
- the process 300 attempts to ascertain a relationship between an actual distance traveled by a pedestrian 102 and image pixels in the input data 110 at step 350 . This involves measuring an inter-lane distance between the image pixels at step 360 , and mapping image pixels to the actual distance traveled. This is performed using standard lane widths, so that once most vertical orientations of lane lines are established in step 340 , the transformation from an image to actual distance traveled by a pedestrian 102 can be accomplished.
- the process 300 concludes at step 370 by calculating pedestrian speed. This is performed as noted above by computing the distance, in number of feet or meters, traveled relative to lane lines and stop bar markings, and the distance per unit of time traveled by the pedestrian 102 .
- the pedestrian speed is therefore calibrated to the region of interest 103 for appropriate traffic intersection control, and the speed is provided as output data 180 to one or more of a traffic management tool 191 , traffic management system 193 , intersection signal controller 194 , additional analytics 192 , or any other additional or external applications 190 .
- the incident detection component 146 is configured to detect various pedestrian incidents and to provide an alarm as output data 180 when a pedestrian incident is determined.
- Incidents may include non-moving objects within the pedestrian detection zone 104 , or within the field of view 112 generally, that can cause abnormal pedestrian and vehicle movements. Incidents may also include prone objects or pedestrians 102 within the pedestrian detection zone 104 , for example pedestrians 102 who have fallen to the pavement. Other types of incidents include a presence of unauthorized vehicles in the pedestrian detection zone 104 .
- the incident detection component 146 learns the background of the pedestrian detection zone 104 to continually search for parts of that area that are different than the background it has learned. If a change in the background has been present for some amount of time, and/or where moving vehicles (or even other walking pedestrians) being tracked are avoiding the area that has changed to avoid contact, the incident detection component 146 concludes that non-moving objects are in the pedestrian detection zone 104 and generates a warning signal.
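The background learning and change detection described above can be sketched as a running-average model; the update rate `alpha` and difference threshold are illustrative assumptions, not values from the invention, and persistence over time would be checked before concluding an incident:

```python
import numpy as np

def update_background(bg, frame, alpha=0.02):
    """Blend the new frame into an exponentially weighted background model."""
    return (1.0 - alpha) * bg + alpha * frame

def changed_regions(bg, frame, thresh=0.2):
    """Boolean mask of pixels that differ from the learned background,
    i.e. candidate non-moving objects once the change persists over time."""
    return np.abs(frame - bg) > thresh
```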
- Non-moving objects may include fallen pedestrians, stalled vehicles, objects that have fallen from moving vehicles, motorcyclists or bicyclists who are down, or objects placed in the pedestrian detection zone 104 by someone.
- the incident detection component 146 tracks walking pedestrians 102 as they move all the way through the pedestrian detection zone 104 , from the entry point through to the exit point. If a pedestrian 102 stops at the middle of the pedestrian detection zone 104 for some time, and does not move forward or backward and continues to be present in the zone 104 , then the present invention can issue an alarm signaling “pedestrian down in the roadway” to alert the responsible authorities.
- the incident detection component 146 may also track movement of vehicles, bicycles, motorcycles, and other objects 107 in the field of view 112 . Where it detects that an object 107 has entered the pedestrian detection zone 104 , and stopped there without proceeding for some time, the incident detection component 146 may signal that an unauthorized vehicle is present in the pedestrian detection zone 104 to alert authorities for further investigation.
- FIG. 4 is a flow diagram illustrating steps in a process 400 of incident detection in the pedestrian tracking and counting framework 100 according to one embodiment of the present invention.
- a process 400 may include one or more algorithms for incident detection within the component 146 .
- the present invention receives an image 111 representing the field of view 112 in step 405 and thereby initializes the incident detection component 146 .
- the present invention performs pedestrian detection using one or more methods as described herein, and if a pedestrian 102 is identified at step 420 , proceeds with tracking the pedestrian 102 at step 430 , together with updating the identification, location, speed, and other characteristics. If no pedestrian 102 is identified at step 420 , the algorithm loops back to begin processing a new image 111 representing the field of view 112 .
- the algorithm for incident detection in the component 146 determines if the pedestrian 102 is moving at step 440 . If the pedestrian is found to be in motion, the process 400 returns to begin processing a new image 111 representing the field of view 112 . If, however, the pedestrian 102 is not in motion at step 440 , the algorithm for incident detection proceeds to determine how long the pedestrian 102 has been stationary at step 450 . If the pedestrian 102 is not in motion in excess of a certain amount of time, a pedestrian down alarm is generated at step 470 as output data 180 .
- the certain amount of time may be preset by a user, and may also be learned by the process 400 as pedestrians 102 and other objects 107 are identified and tracked.
- a timer may be updated at step 460 for determining whether a pedestrian 102 is not in motion for a certain amount of time, and this value is returned to the beginning of the algorithm. In this manner, were the incident detection component 146 to determine that pedestrians 102 are not in motion for some specific reason (for example, a blockage in traffic), then this value can be stored and used by the process 400 .
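The stationary-pedestrian timer of steps 440 through 470 can be sketched as a small state machine; the class name and default threshold are hypothetical, and as noted above the threshold may in practice be preset by a user or learned:

```python
class StationaryMonitor:
    """Accumulates how long a tracked pedestrian has been stationary and
    raises a 'pedestrian down' alarm once a time threshold is exceeded."""

    def __init__(self, threshold_s=30.0):
        self.threshold_s = threshold_s   # preset by a user, or learned
        self.stationary_s = 0.0          # the step-460 timer

    def update(self, is_moving, dt_s):
        """Call once per processed frame; returns True when the alarm fires."""
        if is_moving:
            self.stationary_s = 0.0      # motion resets the timer
            return False
        self.stationary_s += dt_s
        return self.stationary_s >= self.threshold_s
```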
- the pedestrian tracking and counting framework 100 may be configured to provide a separate output 180 to a traffic signal controller 194 when a group of people of a pre-determined size is identified, to enable additional functions to be performed.
- a user may set a sample size for this output 180 using the traffic management tool 191 , or it may be automatically determined within the present invention.
- when such a group is identified, it is provided as an output 180 .
- the traffic signal controller 194 may extend the walk time or hold a red light for vehicles to allow safe passage through the intersection.
- the present invention may use an identified group of people to further identify periods of high pedestrian traffic for better intersection efficiency. It is therefore to be understood that many uses of output data 180 in applications for traffic intersection signal control are possible and within the scope of the present invention.
- the pedestrian tracking and counting framework 100 of the present invention may be applied in many different circumstances.
- the present invention may be used to identify pedestrians 102 during adverse weather conditions when physical danger may increase due to reduced visibility.
- the present invention may therefore perform pedestrian detection in low-light, fog, or other low-contrast conditions for improved roadway and intersection safety.
- the present invention may be used to identify the difference between a pedestrian 102 and the pedestrian's shadow.
- the pedestrian detection is improved through rejection of pedestrian shadows to ensure improved accuracy in pedestrian detection and counting.
- the present invention may be used to determine a normal or average crossing speed for pedestrians 102 in a detection zone 104 . This may then be used to identify slow-moving pedestrians 102 , such as the elderly, children, and disabled or wheelchair-bound persons, to extend and/or adjust a signal timing for crossing the intersection for safer passage. It may also be used to identify faster-moving intersection users, such as pedestrians 102 using hover boards, skateboards, or other such devices in the pedestrian detection zone 104 .
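One way to act on a learned average crossing speed is a simple ratio test; the cutoff ratios below are illustrative assumptions only, not values specified by the invention:

```python
def classify_crossing_speed(speed, zone_average, slow_ratio=0.5, fast_ratio=1.5):
    """Classify a pedestrian's crossing speed relative to the zone average.
    A 'slow' result might trigger a walk-time extension; 'fast' might flag
    skateboard or hoverboard users."""
    ratio = speed / zone_average
    if ratio < slow_ratio:
        return "slow"
    if ratio > fast_ratio:
        return "fast"
    return "normal"
```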
- the present invention may further be used to identify late arrivals in the pedestrian detection zone 104 , to extend and/or adjust signal timing for safe intersection passage.
- the present invention may also receive and use additional input from the traffic signal controller to identify when a pedestrian 102 starts to cross the intersection after a certain percentage of the crossing time has expired.
- the present invention may also be utilized to compute a crosswalk occupancy, for example to determine a pedestrian density in the detection zone 104 .
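Crosswalk occupancy can be expressed, for example, as the fraction of the detection zone 104 covered by detected pedestrians; this sketch assumes a boolean occupancy mask over the zone's pixels as input:

```python
import numpy as np

def crosswalk_occupancy(occupied_mask):
    """Pedestrian density as the fraction of detection-zone pixels
    currently covered by detected pedestrians."""
    return float(occupied_mask.mean())
```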
- the pedestrian tracking and counting framework 100 may be utilized in combination with existing approaches to determining vehicle and bicycle detection zones 105 , and may therefore be performed using the existing field of view 112 in a traffic detection zone 114 that is designed to detect vehicles, bicycles and other road users needing the traffic signal to cross an intersection.
- the present invention may use an existing vehicle detection status, such as speed or saturation, to dynamically change the sensitivity of pedestrian detection.
- a known vehicular status may be applied to increase the likelihood of pedestrian crossing when a stopped vehicle is detected, or when no vehicle is present. Conversely, it may be used to decrease the likelihood of pedestrian crossing while vehicular traffic is freely flowing. Therefore, the present invention uses knowledge of either stopped or moving vehicles or bicycles in the respective other detection zones 105 to improve pedestrian detection accuracy.
- the present invention may be part of a whole scene analysis that combines vehicular, bicycle, and pedestrian detection to identify different moving objects 107 , such as vehicles, motorcycles, bicycles and pedestrians.
- Each object type has its own unique characteristics, and the present invention is configured to automatically learn these unique characteristics and apply them to identify the different types.
- Output data 180 from such a whole scene analysis provides traffic engineers, responsible authorities, and the public with a complete picture of street and intersection activity (for example, who is using what and at what time and for how long) for improved roadway management.
- the pedestrian tracking and counting framework 100 may be configured to learn features of a traffic intersection, such as the background, using the image processing paradigms discussed herein. This may further include one or more approaches for learning roadway lane structures for improving accuracy in identifying vehicles, bicycles, pedestrians, and other objects 107 in a traffic detection zone 114 .
- detection and false call rates are key metrics used to measure accuracy; having low missed and false calls improves the overall performance and efficiency of a traffic management system.
- the present invention may include an approach that incorporates a highly robust model that learns roadway structures to improve sensing accuracy of a traffic management system.
- a roadway structure model provides a high confidence that learned structures correspond to physical roadway structures, and is robust to short-term changes in lighting conditions, such as shadows cast by trees, buildings, clouds, vehicles, rain, fog, etc.
- the model also adaptively learns long-term appearance changes of the roadway structures caused by natural wear and tear, seasonal changes in lighting (winter vs. summer), etc.
- the model also exhibits a low decision time (in milliseconds) for responding to occlusions caused by fast moving traffic, and low computational complexity capable of running on an embedded computing platform.
- the present invention looks at user-drawn zones 105 to initialize and establish borders for regions of interest 103 for various detection zones. Images 111 are processed to compute features inside borders for the region of interest 103 , and find roadway structures using these computed features. The model is then developed to learn background structures from these features to detect an occlusion, and learn the relationship between structure occlusions and detection zones 105 .
- roadway structures such as lane lines, curbs, and medians are generally found adjoining detection zone boundaries.
- roadway structures exhibit strong feature patterns that can be generalized. For example, they contain strong edges and are relatively bright in grayscale. Such structures can be effectively described by overlapping projection peaks of positive edges, negative edges and thresholded grayscale pixels. These structures are also persistent, and their feature signatures can be learned over time to detect occlusions and draw inferences regarding the presence of vehicles in the neighboring zones.
- every zone requires the computation of a left and a right border region of interest 103 . If two zones are considered horizontal neighbors, then they will share a border region of interest 103 , and the area between the zones is established as the border region of interest 103 . If a zone has no neighboring zones to the left or right, then the boundary of the corresponding side is extended by an area proportionate to the zone width, and this extended area serves as the border region of interest 103 for the zone. Also, each border region of interest 103 may be sub-divided into tile regions of interest based on the size of the user-drawn zones. A larger zone provides a larger border area, allowing the model to work with smaller tiles that provide a more localized knowledge of structures and occlusion.
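The border region-of-interest rules above can be sketched in one dimension (horizontal spans only); the `extend_frac` proportion and the tuple representation are assumptions for illustration:

```python
def border_rois(zones, extend_frac=0.25):
    """Compute left/right border regions of interest for each zone, given
    zones as (x_left, x_right) spans. Horizontal neighbors share the gap
    between them as a border ROI; an unshared side is extended outward by
    a fraction of the zone width."""
    zones = sorted(zones)
    rois = []
    for i, (x0, x1) in enumerate(zones):
        width = x1 - x0
        left = (zones[i - 1][1], x0) if i > 0 else (x0 - extend_frac * width, x0)
        right = ((x1, zones[i + 1][0]) if i < len(zones) - 1
                 else (x1, x1 + extend_frac * width))
        rois.append((left, right))
    return rois
```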
- Features are computed in the border region of interest 103 by computing edges from projecting positive and negative edges across rows, and finding peak segments from each projected positive and negative edge. Additionally, the peak segments may be determined by computing a gray histogram and a cumulative histogram from image pixels, determining a gray threshold image, and projecting resulting pixels across rows.
- Roadway structures are learned from each computed feature by finding overlapping feature segment locations, accumulating peak segment locations of overlapping features in a histogram, and finding peaks in the feature background histograms. The model of roadway structures is therefore established using feature histogram peak locations. This is used to identify an occlusion by finding overlapping positive edge peak segments, negative edge peak segments, and gray threshold peak segments with the background histogram. Matching scores are computed for each of these overlaps and compared to threshold values to differentiate between a visible structure and an occlusion.
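The matching-score comparison can be sketched as the fraction of learned background-structure peaks found among the current frame's projection peaks; the tolerance value and this particular score definition are assumptions, not the invention's exact formula:

```python
def occlusion_score(current_peaks, background_peaks, tol=2):
    """Fraction of learned background-structure peak locations matched by
    the current frame's projection peaks; a low score suggests the
    structure is occluded (e.g. by a vehicle in a neighboring zone)."""
    if not background_peaks:
        return 0.0
    matched = sum(
        1 for p in background_peaks
        if any(abs(p - q) <= tol for q in current_peaks)
    )
    return matched / len(background_peaks)
```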
- the systems and methods of the present invention may be implemented in many different computing environments. For example, they may be implemented in conjunction with a special purpose computer, a programmed microprocessor or microcontroller and peripheral integrated circuit element(s), an ASIC or other integrated circuit, a digital signal processor, electronic or logic circuitry such as a discrete element circuit, a programmable logic device or gate array such as a PLD, PLA, FPGA, or PAL, and any comparable means.
- any means of implementing the methodology illustrated herein can be used to implement the various aspects of the present invention.
- Exemplary hardware that can be used for the present invention includes computers, handheld devices, telephones (e.g., cellular, Internet enabled, digital, analog, hybrids, and others), and other such hardware, including:
- processors (e.g., single or multiple microprocessors or general processing units)
- memory
- nonvolatile storage
- input devices (e.g., keyboards, mice, joysticks, etc.)
- output devices
- alternative software implementations including, but not limited to, distributed processing, parallel processing, or virtual machine processing can also be configured to perform the methods described herein.
- the systems and methods of the present invention may also be wholly or partially implemented in software that can be stored on a non-transitory computer-readable storage medium, executed on programmed general-purpose computer with the cooperation of a controller and memory, a special purpose computer, a microprocessor, or the like.
- the systems and methods of this invention can be implemented as a program embedded on a mobile device or personal computer through such mediums as an applet, JAVA® or CGI script, as a resource residing on one or more servers or computer workstations, as a routine embedded in a dedicated measurement system, system component, or the like.
- the system can also be implemented by physically incorporating the system and/or method into a software and/or hardware system.
- the data processing functions disclosed herein may be performed by one or more program instructions stored in or executed by such memory, and further may be performed by one or more modules configured to carry out those program instructions. Modules are intended to refer to any known or later developed hardware, software, firmware, artificial intelligence, fuzzy logic, expert system or combination of hardware and software that is capable of performing the data processing functionality described herein.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Chemical & Material Sciences (AREA)
- Analytical Chemistry (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Human Computer Interaction (AREA)
- Traffic Control Systems (AREA)
Abstract
Description
- This patent application claims priority to, and is a continuation of, U.S. non-provisional application Ser. No. 15/150,280, filed on May 9, 2016, the contents of which are incorporated in their entirety herein. In accordance with 37 C.F.R. §1.76, a claim of priority is included in an Application Data Sheet filed concurrently herewith.
- The present invention relates to the field of traffic detection. Specifically, the present invention relates to calibrating a pedestrian speed in a region of interest of a traffic detection zone used by pedestrians, for pedestrian detection and counting in traffic intersection control.
- There are many conventional traffic detection systems for intersection control. Conventional systems typically utilize sensors, either in the roadway itself, or positioned at a roadside location or on traffic lights proximate to the roadway. Common types of vehicular sensors are inductive coils, or loops, embedded in a road surface, and video cameras, radar sensors, acoustic sensors, and magnetometers, either in the road itself, at the side of a roadway, or positioned higher above traffic to observe and detect vehicles in a desired area. Each type of sensor provides information used to determine a presence of vehicles in specific traffic lanes, to provide information for proper actuation of traffic signals.
- These conventional detection systems are commonly set up with ‘virtual zones’, which are hand- or machine-drawn areas on an image where objects may be moving or present. Traditionally, a vehicle passes through or stops in a zone, and these zones generate an “output” when an object is detected as passing through or resting within all or part of the zone. Many detection systems are capable of detecting different types of vehicles, such as cars, trucks, bicycles, motorcycles, pedestrians, etc. This is accomplished by creating special zones within a field of view to differentiate objects, such as bicycle zones and pedestrian zones. Therefore, conventional detection systems are capable of differentiating, for example, bicycles from other types of vehicles by analyzing these special zones.
- Outputs are sent to external devices or locations for use or storage, such as for example to a traffic signal controller, which performs control and timing functions based on the information provided. These outputs also provide traffic planners and engineers with information on the volume of traffic at key points in a traffic network. This information is important for comparing volumes over periods of time to help with accurate adjustment of signal timing and managing traffic flow. Current systems and methods of traffic detection provide data that results only from a count of a total number of vehicles, which may or may not include bicycles or other road users, and therefore there is no way of differentiating between different types of vehicles. As the need for modified signal timing to accommodate bicyclists, pedestrians and others becomes more critical for proper traffic management, a method for separating the count of all modes of use on a thoroughfare is needed to improve the ability to accurately manage traffic environments.
- Traffic planners and engineers require data on the volume of pedestrian traffic at key points in a traffic network. This data is important for comparing volumes over periods of time to help with accurate adjustment of signal timing. No current method for automatic count and data collection for pedestrian activity exists in a traffic detection system. As the need for modified signal timing to accommodate roadway users such as pedestrians becomes more critical for proper traffic management, a method for accurately identifying and counting pedestrians using a roadway intersection would greatly improve the ability to efficiently manage traffic environments.
- It is therefore one objective of the present invention to provide a system and method of determining a pedestrian area within a traffic detection zone for traffic intersection control. It is another objective of the present invention to provide a system and method of determining a pedestrian area within a traffic detection zone based on the location of one or both of a vehicle detection zone(s) and a bicycle detection zone(s). It is still another objective to provide a system and method of determining a pedestrian area within a traffic detection zone based on movement of various objects in a field of view of the traffic detection zone, such as pedestrians, vehicles, and bicycles.
- It is a further objective of the present invention to provide a system and method of accurately counting pedestrians within a traffic detection zone for traffic intersection control. It is yet another objective of the present invention to provide a system and method of identifying characteristics of a pedestrian to improve count accuracy. Another objective of the present invention is to incorporate part-based object recognition to identify characteristics of a pedestrian within a field of a view of a traffic detection zone.
- Yet another objective of the present invention is to automatically calibrate a traffic detection system by calculating pedestrian speed in a field of view for improved traffic intersection control. A further objective is to provide a system and method of identifying pedestrian incidents in a traffic detection zone, and triggering an alarm based on pedestrian incidents. It is still a further objective of the present invention to combine vehicle detection, bicycle detection, and pedestrian detection in a whole scene analysis of a field of view for traffic intersection control.
- The present invention provides systems and methods of identifying a presence, volume, velocity and trajectory of pedestrians in a region of interest in a field of view of a traffic detection zone. These systems and methods present an approach to traffic intersection control that includes, in one embodiment, both identification of a pedestrian detection zone in the field of view, and identification of individual pedestrians in the pedestrian detection zone. This approach, styled as a pedestrian zone detection, identification and counting framework, enables improved pedestrian counting in the pedestrian detection zone, and increased accuracy in various aspects of roadway management.
- Identification of a pedestrian detection zone in the field of view in the present invention is performed, in one embodiment, using a zone position analysis that automatically determines a pedestrian area in an intersection based on locations of one or both of vehicle and bicycle detection zones. Such vehicular and bicycle detection zones are either themselves automatically determined in a field of view, or drawn by a user. Regardless, knowledge of the location of these zones allows the present invention to calculate a pedestrian detection zone based on their position relative to a stop bar at a traffic intersection.
- Identification of a pedestrian detection zone in the field of view in the present invention is performed, in another embodiment, using an object movement analysis that automatically determines a pedestrian area in an intersection based on movement of various other objects within the field of view irrespective of the location of other detection zones. These objects include vehicles, motorcycles, bicycles, pedestrians, and any other moving objects that may be detected by sensors capturing data in the field of view. Regardless, analysis of image pixel activity of these moving objects allows the present invention to calculate a pedestrian detection zone.
- Identification of individual pedestrians in the pedestrian detection zone in the present invention is performed, in one embodiment, by comparing a part-based object recognition analysis with a model of a single walking pedestrian to differentiate individual pedestrians from groups of moving pedestrians. Such a comparison analyzes image characteristics to separate groups of pedestrians for improved count accuracy.
- The present invention also includes calibration of pedestrian speed in traffic intersection control. In this embodiment, the pedestrian zone detection, identification and counting framework locates a region of interest based on locations of intersection or pavement markings and lane structures, such as a stop bar and lane lines, and computes features of an image inside the region of interest to calculate the pedestrian speed.
- The present invention also includes incident detection in traffic intersection control. In this embodiment, the pedestrian zone detection, identification and counting framework learns a background of the pedestrian detection zone, and looks for changes in the background to identify non-moving objects such as prone objects or pedestrians or unauthorized vehicles. Identification of such non-moving objects initiates an alarm for responsible authorities to improve emergency response and efficient intersection performance.
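- The background-learning step can be sketched as below; the running-average model, thresholds, and persistence counter are illustrative assumptions rather than the patented method:

```python
# Simplified sketch of background learning for incident detection: a
# running-average background model, with pixels that stay far from the
# background for many consecutive frames flagged as non-moving objects.

def update_background(bg, frame, alpha=0.05):
    """Exponential running average of per-pixel intensities."""
    return [[(1 - alpha) * b + alpha * f for b, f in zip(brow, frow)]
            for brow, frow in zip(bg, frame)]

def stationary_mask(bg, frame, persist, diff_thresh=30, persist_thresh=3):
    """persist: per-pixel count of consecutive frames differing from the
    background. Returns (updated persist counts, alarm mask)."""
    new_persist, mask = [], []
    for brow, frow, prow in zip(bg, frame, persist):
        nrow, mrow = [], []
        for b, f, p in zip(brow, frow, prow):
            p = p + 1 if abs(f - b) > diff_thresh else 0
            nrow.append(p)
            mrow.append(1 if p >= persist_thresh else 0)
        new_persist.append(nrow)
        mask.append(mrow)
    return new_persist, mask

bg = [[0.0, 0.0]]
persist = [[0, 0]]
for _ in range(3):  # an object appears at pixel 0 and stays put
    persist, mask = stationary_mask(bg, [[200, 0]], persist)
    bg = update_background(bg, [[200, 0]])
print(mask)  # [[1, 0]]
```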
- Other objects, embodiments, features and advantages of the present invention will become apparent from the following description of the embodiments, taken together with the accompanying drawings, which illustrate, by way of example, the principles of the invention.
- The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate several embodiments of the invention and together with the description, serve to explain the principles of the invention.
-
FIG. 1 is a system diagram for a pedestrian zone detection, identification and counting according to one aspect of the present invention; -
FIG. 2 is a flowchart of steps performed for pedestrian zone detection, identification and counting according to one aspect of the present invention; -
FIG. 3 is a flowchart of steps performed for calibrating a pedestrian speed in pedestrian zone detection, identification and counting according to another aspect of the present invention; -
FIG. 4 is a flowchart of steps performed for incident detection in the pedestrian zone detection and pedestrian identification and counting according to one embodiment of the present invention; -
FIG. 5 is an exemplary representation of a field of view in a traffic detection zone, showing in particular a region of interest for pedestrian detection according to the present invention; -
FIG. 6 is another exemplary representation of a field of view in a traffic detection zone, showing vehicular, bicycle, and pedestrian detection zones according to one embodiment of the present invention; and -
FIG. 7 is a further exemplary representation of a field of view in a traffic detection zone, showing accumulated tracks of vehicles and pedestrians according to one embodiment of the present invention. - In the following description of the present invention, reference is made to the exemplary embodiments illustrating the principles of the present invention and how it is practiced. Other embodiments may be utilized to practice the present invention, and structural and functional changes may be made thereto, without departing from the scope of the present invention.
-
FIG. 1 is a system diagram illustrating elements of a pedestrian tracking and counting framework 100, according to one aspect of the present invention. The pedestrian tracking and counting framework 100 is performed within one or more systems and/or methods that include several components, each of which defines distinct activities for defining an area used by pedestrians 102 in a traffic detection zone 114, and for accurately counting pedestrians 102 in such an area, for traffic intersection control. -
FIGS. 5-7 are exemplary screenshot images 111 of a field of view 112 in a traffic detection zone 114. In the exemplary image 111 of FIG. 5, a region of interest 103 is highlighted for a pedestrian detection zone 104, and pedestrians 102 are shown present therein. In the exemplary image 111 of FIG. 6, the pedestrian detection zone 104 is shown below user-drawn vehicular and bicycle detection zones 105 in the field of view 112. In the exemplary image of FIG. 7, arrows indicating accumulated tracks 106 of moving objects 107 are shown therein, as are pedestrian tracks 108 in the region of interest 103. Each of FIGS. 5-7 also shows standard intersection roadway markings and lane structures 109. - Returning to
FIG. 1, the pedestrian tracking and counting framework 100 ingests, receives, requests, or otherwise obtains input data 110 that represents a field of view 112 of the traffic detection zone 114. Input data 110 is collected from the one or more sensors 120, which may be positioned in or near a roadway area for which the traffic detection zone 114 is identified and drawn. The one or more sensors 120 include video systems 121 such as cameras and thermal cameras, radar systems 122, magnetometers 123, acoustic sensors 124, and any other devices or systems 125 which are capable of detecting a presence of objects within a traffic intersection environment. - The
input data 110 is applied to a plurality of data processing components 140 within a computing environment 130 that also includes one or more processors 132, a plurality of software and hardware components, and one or more modular software and hardware packages configured to perform specific processing functions. The one or more processors 132, plurality of software and hardware components, and one or more modular software and hardware packages are configured to execute program instructions to perform algorithms for various functions within the pedestrian tracking and counting framework 100 that are described in detail herein, and embodied in the one or more data processing components 140. - The plurality of
data processing components 140 include a data ingest component 141 configured to ingest, receive, request, or otherwise obtain input data 110 as noted above, and a pedestrian zone detection and counting initialization component 142 configured to initialize the pedestrian tracking and counting framework 100 and retrieve input data 110 for performing the various functions of the present invention. The plurality of data processing components 140 also include a pedestrian zone identification component 143, an image processing and pedestrian detection learning component 144, a speed calibration component 145, an incident detection component 146, and a counting component 147. - At least some of these
data processing components 140 are configured to generate output data 180 that may take many different forms. Output data 180 may include a pedestrian count, generated by the counting component 147 according to one or more embodiments of the present invention. Output data 180 may also include a calibrated pedestrian speed, generated by the speed calibration component 145 according to another embodiment of the present invention. Output data 180 may further include an alarm indicating an incident detected in a pedestrian area 104, generated by the incident detection component 146 according to still another embodiment of the present invention. Output data 180 may also be provided for additional analytics and processing in one or more third-party or external applications 190. These may include a traffic management tool 191, a zone and lane analysis component 192, a traffic management system 193, and a signal controller 194. - The pedestrian
zone identification component 143 is configured to define a pedestrian detection zone 104 in the field of view 112 of the traffic detection zone 114 for subsequent counting of pedestrians 102 therein. Differential analytical approaches 160 may be applied to achieve this determination. In one embodiment, the pedestrian zone identification component 143 applies a zone position analysis 161 that determines the pedestrian detection zone 104 based on locations of one or more of vehicle and bicycle detection zones 105 in the field of view 112. - Vehicle and
bicycle detection zones 105 are typically drawn in various places in the field of view 112, depending on user requirements. In most situations, the user requires detection at or near the stop bar. Detection zones 105 are usually drawn above the stop bar, and an algorithm is applied to identify the detection zones 105 nearest to the stop bar. An area comprised of a pedestrian strip is created up to the top line of these zones 105, extending from the left to the right edge of the field of view 112 below the top lines of the zones 105. The pedestrian strip height is determined by a calculation of the vehicle and bicycle zone heights, and may be extended to cover a larger area that is more likely to be used by all pedestrians 102. - The
zone position analysis 161 therefore accomplishes defining a pedestrian detection zone 104 by identifying a position of at least one vehicle detection zone 105 and at least one bicycle detection zone 105 in nearest proximity to a stop bar, with each of the at least one vehicle detection zone 105 and the at least one bicycle detection zone 105 having a height that extends to or near to the stop bar. Next, the zone position analysis 161 calculates a height of a pedestrian strip in the field of view 112 from the height of the at least one vehicle detection zone 105 and the height of the at least one bicycle detection zone 105, and extends a length of the pedestrian strip to a leftmost edge of the field of view 112, and a rightmost edge of the field of view 112. As noted above, the zone position analysis 161 may also extend the height of the pedestrian strip into a portion of the at least one vehicle detection zone 105 and into a portion of the at least one bicycle detection zone 105. - In another embodiment, the pedestrian
zone identification component 143 applies anobject movement analysis 162 that determines thepedestrian detection zone 104 based on movement of various objects within the field ofview 112, such as vehicles, bicycles, andother pedestrians 102. Thisanalysis 162 does not rely upon any other data, such as the locations of vehicle andbicycle detection zones 105 in the field ofview 112, or user drawing ofsuch zones 105. - The
object movement analysis 162 determines the area of the field of view 112 where pedestrians 102 typically enter the roadway, by identifying and differentiating pedestrians 102 from other roadway users and tracking their positions as they move through the field of view 112. Pedestrians 102 have characteristics that differ markedly from those of other roadway objects, such as vehicles and bicycles. These characteristics include, but are not limited to, size, gesture, speed, and entry and exit points in the field of view 112. Standard intersection roadway markings and lane structures 109 may also be used to identify areas where pedestrians 102 should be traveling. - Once the pedestrian
zone identification component 143 identifies normal pedestrian tracks 108 in the field ofview 112, a boundary box is created and the area can then be used to collect additional data from various analytics, such as determining count, speed, trajectory, and grouping ofpedestrians 102. Additionally, by analyzing the motion strength and frequency of activity of each pixel, the pedestrianzone identification component 143 obtains accumulatedtracks 106 of movingobjects 107 in the field ofview 112. This enables refining the boundary ofpedestrian detection zone 104, as well asother detection zones 105. - The
object movement analysis 162 therefore accomplishes defining apedestrian detection zone 104 by ascertaining a region ofinterest 103 in the field ofview 112 for pedestrian tracks 108, based on at least one of lane structures andintersection road markings 109 and movement of pixels representing movingobjects 107 relative to those lane structures andintersection road markings 109.Accumulated tracks 106 of movingobjects 107 are determined in the field ofview 112 by analyzing motion strength and frequency of activity of each pixel representing the movingobjects 107 in the field ofview 112. The present invention also tracks pedestrian characteristics in the region ofinterest 103 to distinguish the accumulatedtracks 106 of the movingobjects 107 from the pedestrian tracks 108. Analyzing motion strength of pixels in theobject movement analysis 162 may include computing a binary thresholded image defining a histogram of oriented gradient features that further define a pedestrian contour, and updating the histogram as pixel activity occurs in the changing image. Analyzing a frequency of pixel activity may include computing an activity frequency threshold and finding accumulatedtracks 106 from pixel frequency activity. - The image processing and pedestrian
detection learning component 144 is configured to detect one or more pedestrians 102 in the pedestrian zone 104 from similarities of a single walking pedestrian model with part-based object recognition of individual pedestrians 102, and to increment a count for the counting component 147. Multiple analytical approaches 170 may therefore be applied to detect the one or more pedestrians 102 for the counting component 147. In one embodiment, the image processing and pedestrian detection learning component 144 applies a part-based object recognition analysis 171 and image analysis using a histogram of oriented gradient features 172 to develop a model 173 of the single walking pedestrian. - The image processing and pedestrian
detection learning component 144 applies theseanalytical approaches 170 by, in one aspect of the present invention, analyzing portions of the field ofview 112 by moving a sliding window through thepedestrian detection zone 104 in the field ofview 112, and computing features of current pixel content identified in the sliding window by identifying part-based features that define anindividual pedestrian 102. The part-based features include one or more of body structure combinations, body shape, body width and walking gestures. In this part-basedobject recognition analysis 171, the image processing and pedestriandetection learning component 144 also determines a width and a height of one or more object parts, compares body structure combinations with one or more predetermined templates, and applies one or more geometric constraints to separate the part-based features. - The image processing and pedestrian
detection learning component 144 then proceeds with developing themodel 173 of a single walking pedestrian to separate each individual pedestrian in a group of moving pedestrians in the field ofview 112. This is accomplished by computing a histogram of oriented gradient pedestrian features 172 based on pixels defining a pedestrian contour. The image processing and pedestriandetection learning component 144 next determines a matching confidence between an individual pedestrian and a group of moving pedestrians by calculating a mathematical similarity between the computed features of current pixel content and the model of thesingle walking pedestrian 173. Where a matching confidence is high, this indicates that an individual pedestrian has been identified, and the present invention increments a pedestrian count in thecounting component 147. Where a matching confidence is low, the present invention analyzes the next portion of the field ofview 112 by moving the sliding window to the next position in the field ofview 112 for further image processing. -
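- The histogram of oriented gradient features underlying the model 173 can be sketched in miniature as follows; this is an illustrative, simplified computation (central differences, nine unnormalized bins) and not the patent's exact feature pipeline:

```python
# Minimal sketch of computing a histogram of oriented gradients over a
# grayscale patch, the kind of feature the single-walking-pedestrian
# model is described as using. Bin count and the lack of block
# normalization are simplifying assumptions.

import math

def hog_histogram(patch, bins=9):
    """patch: 2-D list of grayscale ints. Returns a `bins`-length
    histogram of gradient orientations (0..180 deg) weighted by
    gradient magnitude, computed with central differences."""
    h, w = len(patch), len(patch[0])
    hist = [0.0] * bins
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = patch[y][x + 1] - patch[y][x - 1]
            gy = patch[y + 1][x] - patch[y - 1][x]
            mag = math.hypot(gx, gy)
            ang = math.degrees(math.atan2(gy, gx)) % 180.0
            hist[min(int(ang / (180.0 / bins)), bins - 1)] += mag
    return hist

# A patch with a vertical edge produces purely horizontal gradients,
# so all the mass lands in orientation bin 0.
patch = [[0, 0, 255, 255]] * 4
h = hog_histogram(patch)
print(h)
```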
FIG. 2 is a flowchart illustrating steps in aprocess 200 for performing the pedestrian tracking and countingframework 100, according to certain embodiments of the present invention. Such aprocess 200 may include one or more algorithms for pedestrian zone identification within thecomponent 143, and for image processing and pedestrian detection learning within thecomponent 144, and for the various analytical approaches applied within each such component. - Pedestrian zone identification and counting in the
process 200 are initialized atstep 210 by retrievinginput data 110 representing a field ofview 112 for atraffic detection zone 114. Theprocess 200 then detects and defines thepedestrian zone 104, using one of theanalytical approaches 160, in either step 220 or 230. - At
step 220, the process 200 determines and defines a pedestrian zone 104 using existing positions of one or more of vehicle and bicycle detection zones 105 in the traffic detection zone 114. These, as noted above, may be either manually drawn by users or automatically determined, and the process at step 220 proceeds by identifying a position of at least one of the vehicle detection zones and bicycle detection zones 105 in nearest proximity to a stop bar, and calculating a height of a pedestrian strip in the field of view 112 from the height of the vehicle detection zone(s) 105 and the height of the bicycle detection zone(s) 105. It should be noted that the process 200 does not require both vehicle detection zones and bicycle detection zones 105, and therefore the pedestrian zone 104 may be calculated using one or both of these types of zones 105. Additionally, one or more of each zone may be used to determine and define the pedestrian zone 104 according to this embodiment of the present invention. - Alternatively, the
process 200 applies the analytical approach 162 to determine and define pedestrian zones 104 at step 230, using movement of one or more objects 107 in the field of view 112. As noted above, this approach 162 ascertains a region of interest 103 in the field of view 112 for pedestrian tracks 108, based on lane structures and intersection road markings 109 and movement of pixels representing moving objects 107. Regardless of the approach used in either step 220 or step 230, the process 200 identifies a region of interest 103 in the form of a pedestrian detection zone 104 for further processing of images to detect and count pedestrians 102. - At
step 250, pixels in the region of interest 103 are processed to analyze pixel content, using a combination of analytical approaches 170 that examine characteristics of a person to separate groups of people and improve count accuracy. One such analytical approach 170 is a part-based object recognition approach 171, which identifies an individual person within a group by using local features, which are not affected by occlusion as compared to global features. A single object, in this case a human pedestrian 102, can be thought of as having many individual parts, such as a head, arms, a torso, and legs, and each of those parts can be assigned a standard representative pixel size. Identification of these parts, and the relationships between them, can be used to recognize a person in a group, even if partly occluded. - In this approach, assumptions may be made to identify the parts in an image. For example, the head can be approximated as a circular shaped feature, and the shoulders may be approximated as an arc in the image, using for example an edge feature space technique. Depending on the camera location and the focal length, predetermined templates may be used to identify these parts using template matching techniques, such as for example edge intensity template matching. Geometric constraints relative to the relationship between the parts may also be applied. For example, a constraint that the head cannot be next to the torso may be used to remove false matches. Additionally, other techniques such as boosted cascade-like classifiers with edgelet features may be applied to learn part detection. It is to be noted that parts can include the full body, head, torso, shoulders, legs, head-shoulders, and many other combinations of such parts.
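- One such geometric constraint might be sketched as follows; the box format, field names, and offset ratio are hypothetical illustrations of the head-torso relationship described above:

```python
# Assumed illustration of applying a geometric constraint between
# detected body parts: a candidate head must sit above, not beside,
# a candidate torso, which removes false part matches.

def plausible_head_torso(head, torso, max_dx_ratio=0.5):
    """head, torso: (cx, cy, w, h) boxes in image coordinates with y
    growing downward. Rejects heads that are level with or below the
    torso, or horizontally offset by more than half the torso width."""
    head_cx, head_cy = head[0], head[1]
    torso_cx, torso_cy, torso_w = torso[0], torso[1], torso[2]
    above = head_cy < torso_cy          # head must be above the torso
    aligned = abs(head_cx - torso_cx) <= max_dx_ratio * torso_w
    return above and aligned

# A head directly above a torso passes; one beside it is a false match.
print(plausible_head_torso((50, 30, 20, 20), (50, 70, 40, 60)))   # True
print(plausible_head_torso((120, 70, 20, 20), (50, 70, 40, 60)))  # False
```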
- Another
analytical approach 170 employed atstep 250 is to develop amodel 173 of a single walking pedestrian using a histogram of oriented gradient features 172. Becausepedestrians 102 often travel in groups, this may cause the ability to countpedestrians 102 accurately to degrade. The present invention therefore uses various characteristics of pedestrians such as height, width, body shape, head shape, speed and location to separate each individual that may be in a group. - Over time, the
process 200 creates acomplex model 173 for the single walking pedestrian, based on all the ‘single walking pedestrians’ that have been identified. Themodel 173 therefore continually evolves as more data is collected within the present invention. - The single
walking pedestrian model 173 is comprised of a histogram of oriented gradient features (HoG) 172 that include head-torso-leg body structure, body shape, body width, walking gestures, and others to define a pedestrian contour. Theprocess 200 computes this by analyzing portions of the field ofview 112 in a sliding window that moves through grouped pedestrians in theimage 111 to separate individual pedestrians from the grouped pedestrians based on the matching confidence between the singlewalking pedestrian model 173 and the computed features of the current content in the portions of the field ofview 112 in the sliding window. The matching confidence is the mathematical interpretation of the similarity between the singlewalking pedestrian model 173 and the computed features of the current content of the sliding window. If the matching confidence is high, theprocess 200 concludes that a single walking pedestrian is found. If it is low, the analysis proceeds to the next portion of the field ofview 112 by moving the sliding window to the next position and performs the comparison again, until it reaches the end of the grouped pedestrians. - In a further embodiment, pedestrian detection using a
HoG approach 172 and a singlewalking pedestrian model 173 instep 250 therefore takes animage 111 frominput data 110, and may create a multi-scale image pyramid as theprocess 200 slides a moving window through the image to compute HoG features. Theprocess 200 may also apply one or more statistical classifiers, such as for example SVM or the like, to detect a pedestrian using these HoG features. Theprocess 200 learns by fusing results of these statistical classifiers across all portions of the field ofview 112 in sliding window positions and different image scales, and develops themodel 173 to detectpedestrians 102. - Returning to
FIG. 1 , the pedestrianspeed calibration component 145 is configured to calibrate a pedestrian speed with a region ofinterest 103 in the field ofview 112 for more accurate detection and counting ofpedestrians 102 in traffic intersection control. It is to be noted that pedestrian speed calibration may be performed manually by a user or automatically using one or more image processing steps as discussed below. - The pedestrian
speed calibration component 145 performs automatic calibration of pedestrian speed with a region ofinterest 103 in the field ofview 112 through a transformation of image pixels to actual distance traveled of apedestrian 102 in the image. Because of the constant possibility of movement ofsensors 120 such as cameras, and other changes such as focal length in the case of video cameras, the pedestrianspeed calibration component 145 attempts a transformation from a pixel-basedimage 111 to an actual distance-based environment so that a proper speed is calculated in relation to a definedpedestrian zone 104. - The pedestrian
speed calibration component 145 uses the intersection pavement markings and lane structures 109 to determine the speed at which a pedestrian 102 is moving in the field of view 112. Based on the positions of vehicle and bicycle detection zones 105 in the field of view 112, the present invention detects the horizontal stop bar and lane lines to locate the stop bar. A stop bar finding algorithm may also be applied to identify one or more horizontally straight white lines in an image, by finding a peak in the horizontal projection. The layout of the traffic detection zone 114 may also be used to find the stop bar, as the bottom zones of each lane are typically close to the stop bar. Once the stop bar is found, the present invention attempts to find lane lines which intersect with the stop bar. Zone coordinates are also utilized to find the most vertically-oriented lane lines, either to the left or to the right of a vehicle detection zone 105. - The pedestrian
speed calibration component 145 therefore performs automatic calibration of pedestrian speed with a region ofinterest 103 in the field ofview 112 by initially identifying a location of one or more of a stop bar and lane lines in the field ofview 112, and determining an intersection of the lane lines with the stop bar to develop coordinates of the region ofinterest 103. The pedestrianspeed calibration component 145 also identifies a vertical orientation of the lane lines relative to the stop bar. - The pedestrian
speed calibration component 145 then computes features of animage 111 inside the region ofinterest 103 to differentiate between image pixels. Features analyzed may include edge gradients, thresholded grayscale pixels, and feature projections. The present invention then measures an inter-lane distance between the image pixels using a known lane line width and the vertical orientation of the lane lines relative to the stop bar to map the image pixels to an actual distance traveled of apedestrian 102 in the region ofinterest 103. Using this measurement and mapping, the pedestrianspeed calibration component 145 calculates a pedestrian speed from the actual distance traveled that is calibrated with the region ofinterest 103. The calculation includes computing the number of feet or meters traveled relative to lane lines and stop bar markings, and the distance per unit of time traveled by thepedestrian 102. The calibrated pedestrian speed may then be provided to a traffic controller system asoutput data 180, or other external devices or location for storage or use. -
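- The transformation from image pixels to actual distance traveled, and the resulting speed calculation, can be sketched as follows; the 12-foot lane width is an assumed U.S. standard, and all function names are hypothetical:

```python
# Hypothetical sketch of the calibration step: use a known lane width
# and the measured inter-lane pixel distance to map pixels to feet,
# then compute pedestrian speed from tracked positions over time.

LANE_WIDTH_FT = 12.0  # assumed standard lane width

def feet_per_pixel(lane_line_px_left, lane_line_px_right):
    """Pixel x-coordinates of two adjacent lane lines near the stop
    bar give the image-space width of one known-size lane."""
    return LANE_WIDTH_FT / abs(lane_line_px_right - lane_line_px_left)

def pedestrian_speed_fps(track_px, dt_seconds, fpp):
    """track_px: (x, y) pixel positions sampled dt_seconds apart.
    Returns average speed in feet per second."""
    dist_px = 0.0
    for (x0, y0), (x1, y1) in zip(track_px, track_px[1:]):
        dist_px += ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
    total_time = dt_seconds * (len(track_px) - 1)
    return dist_px * fpp / total_time

fpp = feet_per_pixel(100, 160)  # 60 px per 12 ft lane -> 0.2 ft/px
speed = pedestrian_speed_fps([(0, 0), (10, 0), (20, 0)], 0.5, fpp)
print(round(speed, 2))  # 4.0
```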
FIG. 3 is a flowchart illustrating steps in aprocess 300 for performing the calibration of pedestrian speed in the pedestrian tracking and countingframework 100, according to another embodiment of the present invention. Such aprocess 300 may include one or more algorithms for pedestrian speed calibration in a region ofinterest 103 within thecomponent 145. - The
process 300 is initialized atstep 310 by retrievinginput data 110 representing a field ofview 112 in thetraffic detection zone 114. Theprocess 300 analyzes thisinput data 110 to ascertain, atstep 320, a region ofinterest 103 in whichpedestrians 102 may use the roadway within thetraffic detection zone 114. The region ofinterest 103 may or may not be the specificpedestrian detection zone 104 referenced above with respect to other aspects of the present invention. Regardless, theprocess 300 determines the region ofinterest 103 using one or more of pavement or intersection markings andlane structures 109, positions ofother detection zones 105, movement ofobjects 107 in the field ofview 112, or some combination of these approaches. - At
step 330, theprocess 300 attempts to identify positions of both a stop bar and lane lines for vehicles and bicycles in the region ofinterest 103. Using this information, theprocess 300 develops positional coordinates of the region ofinterest 103 atstep 340. This may be performed in combination with the approach(es) used to ascertain a region ofinterest 103. Regardless, these zonal coordinates are used to further identify vertical orientations of the lane lines relative to the stop bar, so that those lane lines with the most vertical orientations relative to the detected stop bar are used for further computations of pedestrian speed as noted below. - Once the region of
interest 103 has been ascertained, theprocess 300 then attempts to ascertain a relationship between an actual distance traveled by apedestrian 102 and image pixels in theinput data 110 atstep 350. This involves measuring an inter-lane distance between the image pixels atstep 360, and mapping image pixels to the actual distance traveled. This is performed using standard lane widths, so that once most vertical orientations of lane lines are established instep 340, the transformation from an image to actual distance traveled by apedestrian 102 can be accomplished. - The process continues at
step 370 by calculating pedestrian speed. This is performed as noted above by computing the distance, in number of feet or meters, traveled relative to lane lines and stop bar markings, and the distance per unit of time traveled by thepedestrian 102. The pedestrian speed is therefore calibrated to the region ofinterest 103 for appropriate traffic intersection control, and the speed is provided asoutput data 180 to one or more of atraffic management tool 191,traffic management system 193,intersection signal controller 194,additional analytics 192, or any other additional orexternal applications 190. - Returning to
FIG. 1, the incident detection component 146 is configured to detect various pedestrian incidents and to provide an alarm as output data 180 when a pedestrian incident is determined. Incidents may include non-moving objects within the pedestrian detection zone 104, or within the field of view 112 generally, that can cause abnormal pedestrian and vehicle movements. Incidents may also include prone objects or pedestrians 102 within the pedestrian detection zone 104, for example pedestrians 102 that have fallen to the pavement. Other types of incidents include a presence of unauthorized vehicles in the pedestrian detection zone 104. - Once the
pedestrian detection zone 104 has been defined to track and identify movingpedestrians 102 as discussed above, theincident detection component 146 learns the background of thepedestrian detection zone 104 to continually search for parts of that area that are different than the background it has learned. If a change in the background has been present for some amount of time, and/or where moving vehicles (or even other walking pedestrians) being tracked are avoiding the area that has changed to avoid contact, theincident detection component 146 concludes that non-moving objects are in thepedestrian detection zone 104 and generates a warning signal. Non-moving objects may include fallen pedestrians, stalled vehicles, objects that have fallen from moving vehicles, motorcyclists or bicyclists who are down, or objects placed in thepedestrian detection zone 104 by someone. - The
incident detection component 146 tracks walking pedestrians 102 as they move all the way through the pedestrian detection zone 104, from the entry point through to the exit point. If a pedestrian 102 stops in the middle of the pedestrian detection zone 104 for some time, and does not move forward or backward and continues to be present in the zone 104, then the present invention can issue an alarm signaling "pedestrian down in the roadway" to alert the responsible authorities. - The
incident detection component 146 may also track movement of vehicles, bicycles, motorcycles, and other objects 107 in the field of view 112. Where it detects that an object 107 has entered the pedestrian detection zone 104 and stopped there, not proceeding for some time, the incident detection component 146 may signal that an unauthorized vehicle is present in the pedestrian detection zone 104 to alert authorities for further investigation. -
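- The "pedestrian down" timing logic described above can be sketched as follows; the class name, threshold, and per-frame update interface are illustrative assumptions:

```python
# Hedged sketch of the "pedestrian down" check: once a tracked
# pedestrian stops moving inside the zone, a timer accumulates; if it
# exceeds a (user-set or learned) threshold, an alarm is raised.

class PedestrianDownMonitor:
    def __init__(self, alarm_after_seconds=30.0):
        self.alarm_after = alarm_after_seconds
        self.stationary_time = {}  # track id -> seconds not moving

    def update(self, track_id, is_moving, dt):
        """Called once per frame; returns True when an alarm should be
        issued for this track."""
        if is_moving:
            self.stationary_time[track_id] = 0.0  # reset the timer
            return False
        t = self.stationary_time.get(track_id, 0.0) + dt
        self.stationary_time[track_id] = t
        return t >= self.alarm_after

mon = PedestrianDownMonitor(alarm_after_seconds=1.0)
alarms = [mon.update("ped-1", moving, 0.5)
          for moving in (False, False, True, False)]
print(alarms)  # [False, True, False, False]
```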
FIG. 4 is a flow diagram illustrating steps in a process 400 of incident detection in the pedestrian tracking and countingframework 100 according to one embodiment of the present invention. Such a process 400 may include one or more algorithms for incident detection within thecomponent 146. In this process 400, the present invention receives animage 111 representing the field ofview 112 instep 405 and thereby initializes theincident detection component 146. Atstep 410, the present invention performs pedestrian detection using one or more methods as described herein, and if apedestrian 102 is identified atstep 420, proceeds with tracking thepedestrian 102 atstep 430, together with updating the identification, location, speed, and other characteristics. If nopedestrian 102 is identified atstep 420, the algorithm loops back to begin processing anew image 111 representing the field ofview 112. - The algorithm for incident detection in the
component 146 then determines if thepedestrian 102 is moving atstep 440. If the pedestrian is found to be in motion, the process 400 returns to begin processing anew image 111 representing the field ofview 112. If, however, thepedestrian 102 is not in motion atstep 440, the algorithm for incident detection proceeds to determine how long thepedestrian 102 has been stationary atstep 450. If thepedestrian 102 is not in motion in excess of a certain amount of time, a pedestrian down alarm is generated atstep 470 asoutput data 180. - The certain amount of time may be preset by a user, and may also be learned by the process 400 as
pedestrians 102 and other objects 107 are identified and tracked. A timer may be updated at step 460 for determining whether a pedestrian 102 is not in motion for a certain amount of time, and this value is returned to the beginning of the algorithm. In this manner, were the incident detection component to learn that pedestrians 102 are not in motion for some specific reason (for example, a blockage in traffic), then this value can be stored and used by the process 400. - In one embodiment of the present invention, the pedestrian tracking and counting
framework 100 may be configured to provide a separate output 180 to a traffic signal controller 194 when a group of people of a pre-determined size is identified, to enable additional functions to be performed. A user may set the sample size for this output 180 using the traffic management tool 191, or it may be automatically determined within the present invention. - Regardless, several applications are possible with an identified group as an
output 180. For example, the traffic signal controller 194 may extend the walk time or hold a red light for vehicles to allow safe passage through the intersection. In another example, the present invention may use an identified group of people to further identify periods of high pedestrian traffic for better intersection efficiency. It is therefore to be understood that many uses of output data 180 in applications for traffic intersection signal control are possible and within the scope of the present invention. - The pedestrian tracking and counting
framework 100 of the present invention may be applied in many different circumstances. For example, the present invention may be used to identify pedestrians 102 during adverse weather conditions, when physical danger may increase due to reduced visibility. The present invention may therefore perform pedestrian detection in low-light, fog, or other low-contrast conditions for improved roadway and intersection safety. - In another example, the present invention may be used to identify the difference between a
pedestrian 102 and the pedestrian's shadow. In such an example, pedestrian detection is improved through rejection of pedestrian shadows to ensure improved accuracy in pedestrian detection and counting. - In still a further example, the present invention may be used to determine a normal or average crossing speed for
pedestrians 102 in a detection zone 104. This may then be used to identify slow-moving pedestrians 102, such as the elderly, children, and disabled or wheelchair-bound persons, to extend and/or adjust a signal timing for crossing the intersection for safer passage. It may also be used to identify faster-moving intersection users, such as pedestrians 102 using hoverboards, skateboards, or other such devices in the pedestrian detection zone 104. - The present invention may further be used to identify late arrivals in the
pedestrian detection zone 104, to extend and/or adjust signal timing for safe intersection passage. The present invention may also receive and use additional input from the traffic signal controller to identify when a pedestrian 102 starts to cross the intersection after a certain percentage of the crossing time has expired. The present invention may also be utilized to compute a crosswalk occupancy, for example to determine a pedestrian density in the detection zone 104. - As noted above, the pedestrian tracking and counting
framework 100, and the various processes described herein, may be utilized in combination with existing approaches to determining vehicle and bicycle detection zones 105, and may therefore be performed using the existing field of view 112 in a traffic detection zone 114 that is designed to detect vehicles, bicycles, and other road users needing the traffic signal to cross an intersection. - In one embodiment, in order to achieve better accuracy, the present invention may use an existing vehicle detection status, such as speed or saturation, to dynamically change the sensitivity of pedestrian detection. For example, a known vehicular status may be applied to increase the likelihood of pedestrian crossing when a stopped vehicle is detected, or when no vehicle is present. Conversely, it may be used to decrease the likelihood of pedestrian crossing while vehicular traffic is freely flowing. Therefore, the present invention uses knowledge of either stopped or moving vehicles or bicycles in the respective
other detection zones 105 to improve pedestrian detection accuracy. - Similarly, the present invention may be part of a whole scene analysis that combines vehicular, bicycle, and pedestrian detection to identify different moving
objects 107, such as vehicles, motorcycles, bicycles, and pedestrians. Each object type has its own unique characteristics, and the present invention is configured to automatically learn these unique characteristics and apply them to identify the different types. Output data 180 from such a whole scene analysis provides traffic engineers, responsible authorities, and the public with a complete picture of street and intersection activity (for example, who is using what, at what time, and for how long) for improved roadway management. - As noted above, the pedestrian tracking and counting
framework 100 may be configured to learn features of a traffic intersection, such as the background, using the image processing paradigms discussed herein. This may further include one or more approaches for learning roadway lane structures for improving accuracy in identifying vehicles, bicycles, pedestrians, and other objects 107 in a traffic detection zone 114. - Consider a scenario where a lane line is detected to the left side of a field of
view 112, and its feature signature was learned over time to account for how much natural variability can be expected. Consider further that this lane line structure gets occluded 80% of the time a vehicle is detected inside the traffic detection zone 114. It can be inferred that when the lane line gets occluded, a vehicle is likely present inside the traffic detection zone 114, thereby increasing a detection rate. Also, consider the case where a curb was detected to the right side of a zone, and it was learned that the curb gets occluded 90% of the time when a vehicle is present inside the traffic detection zone 114. This can be used to reduce a nuisance or false call rate, which can be caused by shadows or portions of vehicles present in the neighboring zone that can confuse existing image analysis algorithms into misconstruing the contents of a zone. Where detection and false call rates are key metrics used to measure accuracy, having low missed and false calls improves the overall performance and efficiency of a traffic management system. - The present invention may include an approach that incorporates a highly robust model that learns roadway structures to improve sensing accuracy of a traffic management system. Such a roadway structure model provides a high confidence that learned structures correspond to physical roadway structures, and is robust to short-term changes in lighting conditions, such as shadows cast by trees, buildings, clouds, vehicles, rain, fog, etc. The model also adaptively learns long-term appearance changes of the roadway structures caused by natural wear and tear, seasonal changes in lighting (winter vs. summer), etc. The model also exhibits a low decision time (in milliseconds) for responding to occlusions caused by fast-moving traffic, and low computational complexity capable of running on an embedded computing platform.
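The occlusion inference in the scenario above can be sketched as a simple Bayesian update: a learned occlusion rate for a structure (e.g., a lane line occluded 80% of the time a vehicle occupies the zone) revises a prior vehicle-presence probability. The function name, rates, and prior below are illustrative assumptions, not values prescribed by this specification.

```python
# Illustrative sketch (assumed names and rates): update the probability
# that a vehicle occupies a zone, given whether an adjoining learned
# structure (lane line, curb) is currently occluded.

def vehicle_probability(prior, p_occluded_given_vehicle,
                        p_occluded_given_empty, occluded):
    """Posterior P(vehicle in zone) after observing the structure's visibility."""
    if occluded:
        like_vehicle = p_occluded_given_vehicle
        like_empty = p_occluded_given_empty
    else:
        like_vehicle = 1.0 - p_occluded_given_vehicle
        like_empty = 1.0 - p_occluded_given_empty
    numerator = like_vehicle * prior
    return numerator / (numerator + like_empty * (1.0 - prior))
```

With an illustrative prior of 0.5, an 80% occlusion rate when a vehicle is present, and a 10% rate otherwise, observing an occlusion raises the posterior to about 0.89 (supporting detection), while a clearly visible structure lowers it to about 0.18 (suppressing false calls from shadows).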
- In one exemplary embodiment, the present invention looks at user-drawn
zones 105 to initialize and establish borders for regions of interest 103 for various detection zones. Images 111 are processed to compute features inside borders for the region of interest 103, and to find roadway structures using these computed features. The model is then developed to learn background structures from these features to detect an occlusion, and to learn the relationship between structure occlusions and detection zones 105. - Several roadway characteristics may aid in the model's ability to learn the background and the relationship between structure occlusions and detection zones. For example, roadway structures such as lane lines, curbs, and medians are generally found adjoining detection zone boundaries. Also, roadway structures exhibit strong feature patterns that can be generalized. For example, they contain strong edges and are relatively bright in grayscale. Such structures can be effectively described by overlapping projection peaks of positive edges, negative edges, and thresholded grayscale pixels. These structures are also persistent, and their feature signatures can be learned over time to detect occlusions and draw inferences regarding the presence of vehicles in the neighboring zones.
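The projection-peak description above can be sketched in plain Python: a binary feature map (positive edges, negative edges, or thresholded grayscale) is projected across rows, and contiguous columns whose projection exceeds a threshold become peak segments. A persistent structure such as a lane line yields a stable segment; its disappearance suggests an occlusion. The function names and thresholding scheme here are assumptions for illustration only.

```python
# Illustrative sketch (assumed names): project a 2-D binary feature map
# across rows and extract contiguous above-threshold column segments,
# which stand in for the "projection peaks" describing roadway structures.

def project_rows(binary_map):
    """Sum a 2-D list of 0/1 values down each column (row-wise projection)."""
    return [sum(col) for col in zip(*binary_map)]

def peak_segments(projection, threshold):
    """Return (start, end) column index pairs where projection > threshold."""
    segments, start = [], None
    for i, value in enumerate(projection):
        if value > threshold and start is None:
            start = i
        elif value <= threshold and start is not None:
            segments.append((start, i - 1))
            start = None
    if start is not None:
        segments.append((start, len(projection) - 1))
    return segments
```

For example, a vertical lane line occupying columns 2-3 of a 4-row edge map projects to a single stable peak segment at (2, 3); a frame in which that segment vanishes would be a candidate occlusion.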
- In the modeling approach described above, every zone requires the computation of a left and a right border region of
interest 103. If two zones are considered horizontal neighbors, then they will share a border region of interest 103, and the area between the zones is established as the border region of interest 103. If a zone has no neighboring zones to the left or right, then the boundary of the corresponding side is extended by an area proportionate to the zone width, and this extended area serves as the border region of interest 103 for the zone. Also, each border region of interest 103 may be sub-divided into tile regions of interest based on the size of the user-drawn zones. A larger zone provides a larger border area, allowing the model to work with smaller tiles that provide a more localized knowledge of structures and occlusion. - Features are computed in the border region of
interest 103 by computing edges, projecting the positive and negative edges across rows, and finding peak segments from each projected positive and negative edge. Additionally, the peak segments may be determined by computing a gray histogram and a cumulative histogram from image pixels, determining a gray threshold image, and projecting the resulting pixels across rows. Roadway structures are learned from each computed feature by finding overlapping feature segment locations, accumulating peak segment locations of overlapping features in a histogram, and finding peaks in the feature background histograms. The model of roadway structures is therefore established using feature histogram peak locations. This is used to identify an occlusion by finding overlapping positive edge peak segments, negative edge peak segments, and gray threshold peak segments with the background histogram. Matching scores are computed for each of these overlaps and compared to threshold values to differentiate between a visible structure and an occlusion. - The systems and methods of the present invention may be implemented in many different computing environments. For example, they may be implemented in conjunction with a special purpose computer, a programmed microprocessor or microcontroller and peripheral integrated circuit element(s), an ASIC or other integrated circuit, a digital signal processor, electronic or logic circuitry such as a discrete element circuit, a programmable logic device or gate array such as a PLD, PLA, FPGA, or PAL, and any comparable means. In general, any means of implementing the methodology illustrated herein can be used to implement the various aspects of the present invention. Exemplary hardware that can be used for the present invention includes computers, handheld devices, telephones (e.g., cellular, Internet enabled, digital, analog, hybrids, and others), and other such hardware. 
Some of these devices include processors (e.g., a single or multiple microprocessors or general processing units), memory, nonvolatile storage, input devices, and output devices. Furthermore, alternative software implementations including, but not limited to, distributed processing, parallel processing, or virtual machine processing can also be configured to perform the methods described herein.
- The systems and methods of the present invention may also be wholly or partially implemented in software that can be stored on a non-transitory computer-readable storage medium, executed on a programmed general-purpose computer with the cooperation of a controller and memory, a special purpose computer, a microprocessor, or the like. In these instances, the systems and methods of this invention can be implemented as a program embedded on a mobile device or personal computer through such mediums as an applet, JAVA® or CGI script, as a resource residing on one or more servers or computer workstations, as a routine embedded in a dedicated measurement system, system component, or the like. The system can also be implemented by physically incorporating the system and/or method into a software and/or hardware system.
- Additionally, the data processing functions disclosed herein may be performed by one or more program instructions stored in or executed by such memory, and further may be performed by one or more modules configured to carry out those program instructions. Modules are intended to refer to any known or later developed hardware, software, firmware, artificial intelligence, fuzzy logic, expert system or combination of hardware and software that is capable of performing the data processing functionality described herein.
- The foregoing descriptions of embodiments of the present invention have been presented for the purposes of illustration and description. They are not intended to be exhaustive or to limit the invention to the precise forms disclosed. Accordingly, many alterations, modifications and variations are possible in light of the above teachings, and may be made by those having ordinary skill in the art without departing from the spirit and scope of the invention. It is therefore intended that the scope of the invention be limited not by this detailed description. For example, notwithstanding the fact that the elements of a claim are set forth below in a certain combination, it must be expressly understood that the invention includes other combinations of fewer, more or different elements, which are disclosed above even when not initially claimed in such combinations.
- The words used in this specification to describe the invention and its various embodiments are to be understood not only in the sense of their commonly defined meanings, but to include by special definition in this specification structure, material or acts beyond the scope of the commonly defined meanings. Thus if an element can be understood in the context of this specification as including more than one meaning, then its use in a claim must be understood as being generic to all possible meanings supported by the specification and by the word itself.
- The definitions of the words or elements of the following claims are, therefore, defined in this specification to include not only the combination of elements which are literally set forth, but all equivalent structure, material or acts for performing substantially the same function in substantially the same way to obtain substantially the same result. In this sense it is therefore contemplated that an equivalent substitution of two or more elements may be made for any one of the elements in the claims below or that a single element may be substituted for two or more elements in a claim. Although elements may be described above as acting in certain combinations and even initially claimed as such, it is to be expressly understood that one or more elements from a claimed combination can in some cases be excised from the combination and that the claimed combination may be directed to a sub-combination or variation of a sub-combination.
- Insubstantial changes from the claimed subject matter as viewed by a person with ordinary skill in the art, now known or later devised, are expressly contemplated as being equivalently within the scope of the claims. Therefore, obvious substitutions now or later known to one with ordinary skill in the art are defined to be within the scope of the defined elements.
- The claims are thus to be understood to include what is specifically illustrated and described above, what is conceptually equivalent, what can be obviously substituted and also what essentially incorporates the essential idea of the invention.
Claims (30)
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/470,627 US9805474B1 (en) | 2016-05-09 | 2017-03-27 | Pedestrian tracking at a traffic intersection to identify vulnerable roadway users for traffic signal timing, pedestrian safety, and traffic intersection control |
PCT/US2017/028662 WO2017196515A1 (en) | 2016-05-09 | 2017-04-20 | Pedestrian counting and detection at a traffic intersection based on location of vehicle zones |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/150,280 US9607402B1 (en) | 2016-05-09 | 2016-05-09 | Calibration of pedestrian speed with detection zone for traffic intersection control |
US15/470,627 US9805474B1 (en) | 2016-05-09 | 2017-03-27 | Pedestrian tracking at a traffic intersection to identify vulnerable roadway users for traffic signal timing, pedestrian safety, and traffic intersection control |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/150,280 Continuation US9607402B1 (en) | 2016-05-09 | 2016-05-09 | Calibration of pedestrian speed with detection zone for traffic intersection control |
Publications (2)
Publication Number | Publication Date |
---|---|
US9805474B1 US9805474B1 (en) | 2017-10-31 |
US20170323448A1 true US20170323448A1 (en) | 2017-11-09 |
Family
ID=58360093
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/150,280 Active US9607402B1 (en) | 2016-05-09 | 2016-05-09 | Calibration of pedestrian speed with detection zone for traffic intersection control |
US15/470,627 Active US9805474B1 (en) | 2016-05-09 | 2017-03-27 | Pedestrian tracking at a traffic intersection to identify vulnerable roadway users for traffic signal timing, pedestrian safety, and traffic intersection control |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/150,280 Active US9607402B1 (en) | 2016-05-09 | 2016-05-09 | Calibration of pedestrian speed with detection zone for traffic intersection control |
Country Status (1)
Country | Link |
---|---|
US (2) | US9607402B1 (en) |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109584565A (en) * | 2018-12-25 | 2019-04-05 | 天津易华录信息技术有限公司 | A kind of Evaluation of Traffic Safety system and its evaluation number calculation method |
CN109948830A (en) * | 2019-01-29 | 2019-06-28 | 青岛科技大学 | Mix bicycle trajectory predictions method, equipment and the medium of environment certainly towards people |
US10363944B1 (en) * | 2018-02-14 | 2019-07-30 | GM Global Technology Operations LLC | Method and apparatus for evaluating pedestrian collision risks and determining driver warning levels |
CN110517484A (en) * | 2019-08-06 | 2019-11-29 | 南通大学 | Diamond interchange area planar crossing sign occlusion prediction model building method |
CN110956801A (en) * | 2018-09-27 | 2020-04-03 | 株式会社斯巴鲁 | Moving body monitoring device, vehicle control system using same, and traffic system |
US11189048B2 (en) * | 2018-07-24 | 2021-11-30 | Toyota Jidosha Kabushiki Kaisha | Information processing system, storing medium storing program, and information processing device controlling method for performing image processing on target region |
US11373534B2 (en) | 2018-12-05 | 2022-06-28 | Volkswagen Aktiengesellschaft | Method for providing map data in a transportation vehicle, transportation vehicle and central data processing device |
US20220375185A1 (en) * | 2021-08-06 | 2022-11-24 | Beijing Baidu Netcom Science Technology Co., Ltd. | Method of recognizing image, electronic device, and storage medium |
Families Citing this family (27)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20180376051A9 (en) * | 2015-12-31 | 2018-12-27 | Ground Zero at Center Stage LLC | Surface integrated camera mesh for semi-automated video capture |
US9607402B1 (en) | 2016-05-09 | 2017-03-28 | Iteris, Inc. | Calibration of pedestrian speed with detection zone for traffic intersection control |
CN106127292B (en) * | 2016-06-29 | 2019-05-07 | 上海小蚁科技有限公司 | Flow method of counting and equipment |
US10169647B2 (en) | 2016-07-27 | 2019-01-01 | International Business Machines Corporation | Inferring body position in a scan |
CN107316320A (en) * | 2017-06-19 | 2017-11-03 | 江西洪都航空工业集团有限责任公司 | The real-time pedestrian detecting system that a kind of use GPU accelerates |
US11322021B2 (en) * | 2017-12-29 | 2022-05-03 | Traffic Synergies, LLC | System and apparatus for wireless control and coordination of traffic lights |
CN110097768B (en) * | 2018-01-30 | 2021-12-21 | 西门子公司 | Traffic signal indication method, device, system and machine readable medium |
US20200090501A1 (en) * | 2018-09-19 | 2020-03-19 | International Business Machines Corporation | Accident avoidance system for pedestrians |
US11689707B2 (en) * | 2018-09-20 | 2023-06-27 | Shoppertrak Rct Llc | Techniques for calibrating a stereoscopic camera in a device |
KR20200076133A (en) * | 2018-12-19 | 2020-06-29 | 삼성전자주식회사 | Electronic device and method for providing vehicle to everything service thereof |
US10817728B2 (en) * | 2019-01-23 | 2020-10-27 | GM Global Technology Operations LLC | Automated data collection for continued refinement in the detection of objects-of-interest |
EP3922049A1 (en) * | 2019-02-04 | 2021-12-15 | Nokia Technologies Oy | Improving operation of wireless communication networks for detecting vulnerable road users |
CN111753579A (en) * | 2019-03-27 | 2020-10-09 | 杭州海康威视数字技术股份有限公司 | Detection method and device for designated walk-substituting tool |
CN110096959B (en) * | 2019-03-28 | 2021-05-28 | 上海拍拍贷金融信息服务有限公司 | People flow calculation method, device and computer storage medium |
CN111815671B (en) * | 2019-04-10 | 2023-09-15 | 曜科智能科技(上海)有限公司 | Target quantity counting method, system, computer device and storage medium |
US11643115B2 (en) * | 2019-05-31 | 2023-05-09 | Waymo Llc | Tracking vanished objects for autonomous vehicles |
US11447129B2 (en) | 2020-02-11 | 2022-09-20 | Toyota Research Institute, Inc. | System and method for predicting the movement of pedestrians |
US11878684B2 (en) * | 2020-03-18 | 2024-01-23 | Toyota Research Institute, Inc. | System and method for trajectory prediction using a predicted endpoint conditioned network |
US11651609B2 (en) * | 2020-06-10 | 2023-05-16 | Here Global B.V. | Method, apparatus, and system for mapping based on a detected pedestrian type |
US11763677B2 (en) * | 2020-12-02 | 2023-09-19 | International Business Machines Corporation | Dynamically identifying a danger zone for a predicted traffic accident |
CN114663793A (en) * | 2020-12-04 | 2022-06-24 | 丰田自动车株式会社 | Target behavior identification method and device, storage medium and terminal |
CN114693540A (en) * | 2020-12-31 | 2022-07-01 | 华为技术有限公司 | Image processing method and device and intelligent automobile |
CN112906483B (en) * | 2021-01-25 | 2024-01-23 | 中国银联股份有限公司 | Target re-identification method, device and computer readable storage medium |
CN113160182B (en) * | 2021-04-25 | 2022-04-08 | 湖南九九智能环保股份有限公司 | Stacking section counting system based on image recognition |
CN113888871B (en) * | 2021-10-20 | 2023-05-05 | 上海电科智能系统股份有限公司 | Automatic handling linkage system and method for expressway traffic incidents |
CN113709006B (en) * | 2021-10-29 | 2022-02-08 | 上海闪马智能科技有限公司 | Flow determination method and device, storage medium and electronic device |
CN116883938A (en) * | 2023-07-05 | 2023-10-13 | 禾多科技(北京)有限公司 | Pedestrian speed information generation method, device, equipment and computer readable medium |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130293394A1 (en) * | 2012-04-24 | 2013-11-07 | Zetta Research and Development, LLC–ForC Series | Operational efficiency in a vehicle-to-vehicle communications system |
US20140188365A1 (en) * | 2011-08-10 | 2014-07-03 | Toyota Jidosha Kabushiki Kaisha | Driving assistance device |
US20150046058A1 (en) * | 2012-03-15 | 2015-02-12 | Toyota Jidosha Kabushiki Kaisha | Driving assist device |
US20150210277A1 (en) * | 2014-01-30 | 2015-07-30 | Mobileye Vision Technologies Ltd. | Systems and methods for detecting traffic lights |
US20160027300A1 (en) * | 2014-07-28 | 2016-01-28 | Econolite Group, Inc. | Self-configuring traffic signal controller |
Family Cites Families (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5903217A (en) | 1997-10-21 | 1999-05-11 | Microwave Sensors, Inc. | Micro motion sensor |
US9311540B2 (en) | 2003-12-12 | 2016-04-12 | Careview Communications, Inc. | System and method for predicting patient falls |
WO2008071000A1 (en) | 2006-12-15 | 2008-06-19 | Micro Target Media Holdings Inc. | System and method for obtaining and using advertising information |
GB0709329D0 (en) | 2007-05-15 | 2007-06-20 | Ipsotek Ltd | Data processing apparatus |
US8629784B2 (en) | 2009-04-02 | 2014-01-14 | GM Global Technology Operations LLC | Peripheral salient feature enhancement on full-windshield head-up display |
WO2011077400A2 (en) | 2009-12-22 | 2011-06-30 | Leddartech Inc. | Active 3d monitoring system for traffic detection |
US9472097B2 (en) | 2010-11-15 | 2016-10-18 | Image Sensing Systems, Inc. | Roadway sensing systems |
WO2012135014A2 (en) | 2011-03-25 | 2012-10-04 | Tk Holdings Inc. | Image sensor calibration system and method |
US9165190B2 (en) | 2012-09-12 | 2015-10-20 | Avigilon Fortress Corporation | 3D human pose and shape modeling |
US9152865B2 (en) | 2013-06-07 | 2015-10-06 | Iteris, Inc. | Dynamic zone stabilization and motion compensation in a traffic management apparatus and system |
KR101769897B1 (en) | 2013-09-23 | 2017-08-22 | 한국전자통신연구원 | Apparatus and method for managing safety of pedestrian on crosswalk |
CN104318760B (en) | 2014-09-16 | 2016-08-24 | 北方工业大学 | Crossing violation behavior intelligent detection method and system based on analog model |
CN104299426B (en) | 2014-09-19 | 2016-05-11 | 辽宁天久信息科技产业有限公司 | A kind of traffic signal control system and method based on to pedestrian detection counting statistics |
CN104318263A (en) | 2014-09-24 | 2015-01-28 | 南京邮电大学 | Real-time high-precision people stream counting method |
US9165461B1 (en) | 2015-05-06 | 2015-10-20 | Intellectual Fortress, LLC | Image processing based traffic flow control system and method |
US9460613B1 (en) | 2016-05-09 | 2016-10-04 | Iteris, Inc. | Pedestrian counting and detection at a traffic intersection based on object movement within a field of view |
US9607402B1 (en) | 2016-05-09 | 2017-03-28 | Iteris, Inc. | Calibration of pedestrian speed with detection zone for traffic intersection control |
US9449506B1 (en) | 2016-05-09 | 2016-09-20 | Iteris, Inc. | Pedestrian counting and detection at a traffic intersection based on location of vehicle zones |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| STCF | Information on status: patent grant | Free format text: PATENTED CASE |
| FEPP | Fee payment procedure | Free format text: SURCHARGE FOR LATE PAYMENT, SMALL ENTITY (ORIGINAL EVENT CODE: M2554); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY |
| MAFP | Maintenance fee payment | Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YR, SMALL ENTITY (ORIGINAL EVENT CODE: M2551); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY; Year of fee payment: 4 |
| AS | Assignment | Owner name: CAPITAL ONE, NATIONAL ASSOCIATION, MARYLAND; Free format text: SECURITY INTEREST;ASSIGNOR:ITERIS, INC;REEL/FRAME:058770/0592; Effective date: 20220125 |
| AS | Assignment | Owner name: ITERIS, INC., CALIFORNIA; Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CAPITAL ONE, NATIONAL ASSOCIATION;REEL/FRAME:061109/0658; Effective date: 20220909 |