CN113065691A - Traffic behavior prediction method and system - Google Patents
- Publication number: CN113065691A
- Application number: CN202110300208.5A
- Authority: CN (China)
- Prior art keywords: data, road, user, scene, behavior
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/04—Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
- G06Q50/00—Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
- G06Q50/10—Services
- G06Q50/26—Government or public services
Abstract
The invention discloses a traffic behavior prediction method, which comprises the following steps: receiving first data in a current road scene, wherein the first data represents road data from a first user's perspective in the current road scene; acquiring second data in the current road scene, wherein the second data represents road data from at least one second user's perspective in the current road scene; establishing a road prediction model for the current road scene based on the mapping relationship between the first data and the second data; and using third data as the input of the road prediction model to predict a road traffic event and outputting the road traffic event to the first user's terminal for display, wherein the third data represents road data from the first user's perspective in the next road scene. The invention also discloses a traffic behavior prediction system.
Description
Technical Field
The invention relates to the technical field of traffic behavior, and in particular to a traffic behavior prediction method and a traffic behavior prediction system.
Background
When traveling, people often encounter traffic accidents or traffic congestion. Once such a situation occurs, a user may have to spend a large amount of time waiting to pass, which seriously affects people's travel.
Disclosure of Invention
Therefore, the invention provides a traffic behavior prediction method and a traffic behavior prediction system, which aim to solve the problem in the prior art that traffic accidents or road congestion make a user's travel inconvenient.
In order to achieve the above object, a first aspect of the present invention provides a traffic behavior prediction method, including:
receiving first data in a current road scene, wherein the first data represents road data from a first user's perspective in the current road scene;
acquiring second data in the current road scene, wherein the second data represents road data from at least one second user's perspective in the current road scene;
establishing a road prediction model for the current road scene based on the mapping relationship between the first data and the second data;
and using third data as the input of the road prediction model to predict a road traffic event, and outputting the road traffic event to the first user's terminal for display, wherein the third data represents road data from the first user's perspective in the next road scene.
In an embodiment of the application, before establishing the road prediction model for the current road scene based on the mapping relationship between the first data and the second data, the method further includes:
acquiring environmental data in the current road scene;
and the establishing of the road prediction model for the current road scene based on the mapping relationship between the first data and the second data includes:
establishing a mapping relationship among the first data, the second data, and the environmental data to generate road behavior statistical data for the current road scene;
and establishing the road prediction model for the current road scene based on the road behavior statistical data.
In an embodiment of the present application, the establishing of the road prediction model for the current road scene based on the mapping relationship between the first data and the second data further includes:
determining first road behavior data of the first user based on the first data, the first road behavior data characterizing a movement trend of the first user;
determining second road behavior data of at least one second user based on the second data, the second road behavior data characterizing a movement trend of the second user;
and establishing the road prediction model for the current road scene based on the mapping relationship between the first road behavior data and the second road behavior data.
In an embodiment of the present application, the establishing of the road prediction model for the current road scene based on the mapping relationship between the first data and the second data further includes:
classifying the second data to obtain behavior category data of each second user;
and establishing the road prediction model for the current road scene based on the mapping relationship between the behavior category data of each second user and the first data.
In an embodiment of the present application, the road scene of the third data is the same as that of the first data, and a time interval condition is satisfied between the next road scene and the current road scene;
or, the road scene of the third data is different from that of the first data, and a time interval condition is satisfied between the next road scene and the current road scene.
In an embodiment of the present application, the first road behavior data includes at least image pixel data, trend motion data, skewness data, and kurtosis data of the first user; the second road behavior data includes at least image pixel data, trend motion data, skewness data, and kurtosis data of a second user.
In an embodiment of the present application, the environment data at least includes weather information and time information.
In an embodiment of the application, the time information at least includes a date and a time, and the weather information is a weather state corresponding to the time information.
In an embodiment of the present application, the method further includes:
sending the first road behavior data to the first user so that it is displayed on the first user's terminal.
A second aspect of the present invention provides a traffic behavior prediction system, the system including:
a receiving unit, configured to receive first data in a current road scene, wherein the first data represents road data from a first user's perspective in the current road scene;
an acquisition unit, configured to acquire second data in the current road scene, wherein the second data represents road data from at least one second user's perspective in the current road scene;
a modeling unit, configured to establish a road prediction model for the current road scene based on the mapping relationship between the first data and the second data;
and a determining unit, configured to use third data as the input of the road prediction model to predict a road traffic event and output the road traffic event to the first user's terminal for display.
A third aspect of the present invention provides a traffic behavior prediction apparatus, including: a memory and a processor;
wherein the memory is configured to store a computer program operable on the processor;
and the processor is configured to perform the steps of any one of the above traffic behavior prediction methods when running the computer program.
A fourth aspect of the invention provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of any one of the above traffic behavior prediction methods.
The invention has the following advantages:
According to the traffic behavior prediction method and system provided by the invention, first data in a current road scene are received, the first data representing road data from a first user's perspective in the current road scene; second data in the current road scene are acquired, the second data representing road data from at least one second user's perspective in the current road scene; a road prediction model for the current road scene is established based on the mapping relationship between the first data and the second data; and third data are used as the input of the road prediction model to predict a road traffic event, which is output to the first user's terminal for display. In this way, road traffic behavior in the same road scene can be predicted, providing more useful road information to assist users' travel.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention.
Fig. 1 is a first schematic flow chart of an implementation of a traffic behavior prediction method according to an embodiment of the present invention;
Fig. 2 is a second schematic flow chart of an implementation of a traffic behavior presetting method according to an embodiment of the present invention;
Fig. 3 is a schematic structural diagram of a traffic behavior prediction system according to an embodiment of the present invention.
Detailed Description
The following detailed description of embodiments of the invention refers to the accompanying drawings. It should be understood that the detailed description and specific examples, while indicating the present invention, are given by way of illustration and explanation only, not limitation.
As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
When the terms "comprises" and/or "comprising" are used in this specification, they specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
The embodiments of the invention may be described with reference to plan and/or cross-sectional views in idealized schematic representations of the invention. Accordingly, the example illustrations can be modified in accordance with manufacturing techniques and/or tolerances.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and the present invention and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
Fig. 1 is a schematic flow chart of a first implementation of the traffic behavior prediction method in the present application. As shown in Fig. 1, the method includes:
Step 101: receiving first data in a current road scene, where the first data represents road data from a first user's perspective in the current road scene.
here, the method is mainly applied to a server of road traffic behaviors, and the server can communicate with terminals such as vehicle-mounted equipment, pedestrian mobile terminals, road monitoring equipment, road side equipment, traffic light monitoring equipment and the like.
Specifically, the current road scene can be shot through terminals such as vehicle-mounted equipment, a pedestrian mobile terminal, road monitoring equipment, road side equipment and traffic light monitoring equipment, and shot image or video data is uploaded to the server, so that the server obtains first data in the current road scene.
Here, the first data may refer to road image data or video data photographed at a first user perspective in a current road scene. For example, a user a is driving, and a camera is installed on the vehicle of the user a, and through a corresponding operation performed by the user a on the camera, the camera can be triggered to shoot a current road scene at the viewing angle of the user a, so as to obtain an image or a video in the current road scene. And uploading the image or video as first data to the server.
Or when the first data is obtained by shooting the current road scene through the road monitoring equipment and uploading the current road scene to the server, the first user visual angle is the visual angle of the road monitoring equipment.
Or when the first data is that the current road scene is shot by the mobile terminal of the pedestrian on the road and uploaded to the server, the first user visual angle is the visual angle of the mobile terminal of the pedestrian on the road.
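As a concrete illustration of this client-to-server data flow, the following minimal sketch shows how a terminal might upload one captured frame together with basic metadata. The endpoint URL, field names, and metadata fields are assumptions for illustration only and are not defined by the patent.

```python
# Minimal sketch (hypothetical endpoint and fields): a terminal uploading a captured
# road image plus metadata so the server can obtain the first data for the scene.
import time
import requests  # assumed to be available on the terminal side

SERVER_URL = "https://example.invalid/api/road-data"  # placeholder endpoint

def upload_first_data(image_path: str, device_type: str, lat: float, lon: float) -> dict:
    """Send one captured frame and metadata describing the first user's perspective."""
    with open(image_path, "rb") as f:
        response = requests.post(
            SERVER_URL,
            files={"image": f},
            data={
                "device_type": device_type,  # e.g. "vehicle_camera", "mobile", "road_monitor"
                "timestamp": time.time(),
                "latitude": lat,
                "longitude": lon,
            },
            timeout=10,
        )
    response.raise_for_status()
    return response.json()
```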
Step 102: acquiring second data in the current road scene, where the second data represents road data from at least one second user's perspective in the current road scene.
In the present application, the server can receive road data uploaded by all users. Therefore, after the server receives the first data uploaded by the first user and captured from the first user's perspective, it can analyze the first data to obtain the road scene corresponding to the first data, and then acquire, from a database, second data matching that road scene.
Here, the second data may be road image data or road video data captured in the same road scene from the perspectives of a plurality of second users. The server can communicate with terminals such as vehicle-mounted devices, pedestrians' mobile terminals, road monitoring devices, roadside devices, and traffic light monitoring devices.
Specifically, the current road scene can be photographed by these terminals and the captured image or video data uploaded to the server, so that the server obtains the second data in the current road scene. That is, a second user may be a pedestrian, a vehicle occupant, a road monitoring device, a roadside device, a traffic light monitoring device, or the like near user A.
In the present application, the more second users upload second data, the more comprehensive the second data the server can acquire for the current road scene, which improves the prediction accuracy when traffic events in the same road scene are later predicted.
In the present application, when the server analyzes the first data to obtain the corresponding road scene, it may specifically perform image segmentation on the images of the first data to extract environmental data representing buildings, vehicles, pedestrians, and the like, and determine the current road scene from this environmental data. Of course, the server may also determine the road scene corresponding to the first data using other scene determination methods known in the prior art, which are not described further here.
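A minimal sketch of this scene-determination step is given below. It assumes that some upstream semantic-segmentation model has already produced a per-pixel class mask; the class indices, thresholds, and scene labels are illustrative assumptions, not the patent's algorithm.

```python
# Minimal sketch (assumed label scheme): deriving a coarse road-scene label from a
# semantic-segmentation mask, based on the share of building, vehicle and pedestrian pixels.
import numpy as np

# Hypothetical class indices produced by an upstream segmentation model.
BUILDING, VEHICLE, PEDESTRIAN, ROAD = 1, 2, 3, 4

def classify_road_scene(mask: np.ndarray) -> str:
    """mask: 2-D array of per-pixel class indices for one frame of the first data."""
    total = mask.size
    vehicle_ratio = np.count_nonzero(mask == VEHICLE) / total
    pedestrian_ratio = np.count_nonzero(mask == PEDESTRIAN) / total
    building_ratio = np.count_nonzero(mask == BUILDING) / total
    if vehicle_ratio > 0.15:        # thresholds are illustrative only
        return "congested_road"
    if pedestrian_ratio > 0.10:
        return "pedestrian_crossing"
    if building_ratio > 0.30:
        return "urban_street"
    return "open_road"
```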
Step 103: establishing a road prediction model for the current road scene based on the mapping relationship between the first data and the second data.
In the present application, after acquiring the first data and the second data in the current road scene, the server may further determine first road behavior data of the first user based on the first data, where the first road behavior data characterizes a movement trend of the first user; for example, the first road behavior data includes at least image pixel data, trend motion data, skewness data, and kurtosis data of the first user. The server may also determine second road behavior data of one or more second users based on the second data, where the second road behavior data characterizes the movement trends of the second users; for example, the second road behavior data includes at least image pixel data, trend motion data, skewness data, and kurtosis data of the second users.
Image pixel data refers to the picture pixels of an image and is used to extract important related information, such as the pixel data of people, buildings, and vehicles, from structured video or image data.
Trend motion data: the motion trajectory data of pedestrians and vehicles, which can be extracted from the video frames.
Skewness data: after statistics are computed over the motion trajectory data of pedestrians or vehicles, skewness measures the direction and degree of deviation of the data set as a whole; it is a numerical characteristic of the asymmetry of the statistical data distribution.
Here, the degrees of skewness include a normal distribution (skewness of 0), a right-skewed distribution (also called a positively skewed distribution, with skewness > 0), and a left-skewed distribution (also called a negatively skewed distribution, with skewness < 0), each indicating a tendency of the data to lean toward larger or smaller values.
Kurtosis data: a characteristic number describing the peak of the probability density curve at the mean value. A large kurtosis value indicates that the variance of the data set is increased mainly by infrequent extreme values that are much greater or much less than the mean.
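As a worked illustration of the skewness and kurtosis statistics described above, the following sketch computes them over the speed series of a single trajectory. The choice of speed as the summarized quantity is an assumption; the patent does not specify which trajectory statistics are used.

```python
# Minimal sketch (not from the patent): skewness and kurtosis of a pedestrian/vehicle
# trajectory, as one possible way to derive the "skewness data" and "kurtosis data".
import numpy as np
from scipy.stats import skew, kurtosis

def trajectory_statistics(trajectory: np.ndarray) -> dict:
    """trajectory: array of shape (N, 2) with (x, y) positions over time."""
    displacements = np.diff(trajectory, axis=0)       # per-frame motion vectors
    speeds = np.linalg.norm(displacements, axis=1)    # per-frame speed magnitudes
    return {
        "mean_speed": float(speeds.mean()),
        "skewness": float(skew(speeds)),      # > 0: right-skewed, < 0: left-skewed
        "kurtosis": float(kurtosis(speeds)),  # large values: heavy-tailed speed distribution
    }

if __name__ == "__main__":
    # Example usage with a synthetic random-walk trajectory.
    traj = np.cumsum(np.random.randn(100, 2), axis=0)
    print(trajectory_statistics(traj))
```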
In the present application, after the server obtains the first road behavior data and the second road behavior data, it may establish a mapping relationship between them and build the road prediction model for the current road scene based on this mapping relationship. Traffic events in the current road scene can then be predicted with the established road prediction model.
In the present application, after obtaining the first road behavior data of the first user, the server may further send this road behavior data to the first user so that it is displayed on the first user's terminal. In this way, the user can view his or her own data in real time and adjust the direction of travel accordingly.
In another implementation of the present application, the server may further acquire environmental data in the current road scene; then establish a mapping relationship among the first data, the second data, and the environmental data to generate road behavior statistical data for the current road scene; and establish the road prediction model for the current road scene based on the road behavior statistical data.
Here, the environmental data includes at least weather information and time information, where the weather information is the weather state corresponding to the time information, and the time information includes at least a date and a time of day.
In another implementation of the present application, after the server acquires the second data in the current road scene, it may perform image pixel extraction on the second data to obtain the important pixel information in the second data, then classify the second data according to this important pixel information to obtain the behavior category data of each second user, and establish the road prediction model for the current road scene based on the mapping relationship between the behavior category data of each second user and the first data.
Here, the important pixel information may be information representing pedestrians, vehicles, or directions of travel in the images or videos corresponding to the second data. The resulting behavior category data of each second user may specifically indicate whether that second user is a pedestrian, a non-motor vehicle, a motor vehicle, or the like on the road.
In summary, the method generates a participant's own data from the images and video clips of the road scene captured from that participant's (first user's) perspective, forms the own data of the other participants (second user perspectives) in the same road scene into multiple subsets of response data (i.e., the image pixel data, trend motion data, skewness data, and kurtosis data of the second users in each category), aggregates these subsets and maps them to the participant's (first user's) own data (image pixel data, trend motion data, skewness data, and kurtosis data), and combines the result with the corresponding weather, time, and date data as the source of modeling data for establishing the road prediction model. In this way, various traffic behaviors, traffic accidents, and the like can be predicted intelligently.
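One possible data layout for such a modeling record is sketched below; the field names and structure are illustrative assumptions rather than the patent's actual schema.

```python
# Minimal sketch (illustrative data layout): mapping the first user's own data to
# categorized response-data subsets from second users, plus weather/time information,
# to form one record of road behavior statistics used as modeling data.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class BehaviorFeatures:
    pixel_summary: List[float]   # condensed image pixel data
    trend: List[float]           # trend motion data
    skewness: float
    kurtosis: float

@dataclass
class ModelingRecord:
    scene_id: str
    first_user: BehaviorFeatures
    # Response subsets keyed by behavior category, e.g. "pedestrian", "motor_vehicle".
    second_users: Dict[str, List[BehaviorFeatures]] = field(default_factory=dict)
    weather: str = ""
    date: str = ""
    time_of_day: str = ""
```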
Step 104: using third data as the input of the road prediction model to predict a road traffic event, and outputting the road traffic event to the first user's terminal for display, where the third data represents road data from the first user's perspective in the next road scene.
In the present application, after the server establishes the road prediction model based on the first data and the second data, it may further receive third data in the next road scene, where the third data may represent road image data or road video data captured from the first user's perspective in the next road scene. The server then uses the third data as the input of the road prediction model to predict the road traffic event in the current road scene, obtains the prediction result (specifically, a predictive judgment made after a large amount of data has been collected and classified), and outputs the prediction result, as the output data of the road prediction model, to the first user's terminal for display.
Here, the road scene of the third data may be the same as that of the first data, with a time interval condition satisfied between the next road scene and the current road scene; that is, the next road scene is the same as the current road scene but the capture time differs. For example, when user A is waiting at a traffic light and the vehicle driven by user A is stationary, the next road scene is the same as the current road scene. Alternatively, the road scene of the third data differs from that of the first data, again with a time interval condition satisfied between the next road scene and the current road scene; that is, the next road scene differs from the current road scene and the capture time differs. For example, when the vehicle driven by user A is moving, the next road scene differs from the current road scene.
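The following sketch shows how the third data might be fed to the trained model and the result pushed back to the first user's terminal. The `model` and `terminal` interfaces, the `timestamp`/`features` attributes, and the five-second interval are hypothetical placeholders, not definitions from the patent.

```python
# Minimal sketch (hypothetical interfaces): predicting a road traffic event from third
# data in the next road scene and returning it to the first user's terminal.
from datetime import timedelta

MIN_INTERVAL = timedelta(seconds=5)   # assumed time-interval condition

def predict_and_notify(model, third_data, current_scene, terminal):
    """third_data: road data from the first user's perspective in the next scene."""
    if third_data.timestamp - current_scene.timestamp < MIN_INTERVAL:
        return None  # time-interval condition not yet satisfied
    event = model.predict(third_data.features)  # e.g. "congestion ahead", "accident risk"
    terminal.display(event)                     # output to the first user's terminal
    return event
```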
According to the method and device of the present application, before modeling, each participant's own data is generated from the images and video clips of the road scene captured from that participant's (first user's) perspective; the own data of the other participants (second user perspectives) in the same road scene form multiple subsets of response data; these subsets are aggregated and mapped one-to-one to the participant's own data; and the mapping is associated with the corresponding weather, time, and date data as the source of modeling data. As a result, the established road prediction model fits the current road scene more closely, the road events predicted with it are more accurate, and the user's travel experience is improved.
Fig. 2 is a schematic flow chart of a second implementation, the traffic behavior presetting method in the present application, which is applied at the terminal side. As shown in Fig. 2, the method includes: sending first data in a current road scene to a server, where the first data represents road data from the first user's perspective in the current road scene; and receiving a road traffic event predicted by the server, where the server establishes a road prediction model for the current road scene based on the mapping relationship between the first data and second data and uses third data as the input of the road prediction model to predict the road traffic event.
Here, the second data characterizes road data from at least one second user's perspective in the current road scene, and the third data characterizes road data from the first user's perspective in the next road scene.
In the present application, this method is mainly applied to terminals such as vehicle-mounted devices, pedestrians' mobile terminals, road monitoring devices, roadside devices, and traffic light monitoring devices; the terminal is provided with an image capture sensor and can send the captured road images or video data to the road traffic behavior server.
Since this method corresponds to the method of Fig. 1, the related content may refer to the corresponding description of Fig. 1 and is not repeated here.
The steps of the above methods are divided as they are for clarity of description. In implementation, steps may be combined into one step or a step may be split into multiple steps, and as long as the same logical relationship is included, such variations fall within the protection scope of this patent; adding insignificant modifications to the algorithms or processes, or introducing insignificant design changes, without changing the core design also falls within the protection scope of the patent.
Fig. 3 is a schematic structural diagram of a traffic behavior prediction system according to the present application. As shown in Fig. 3, the system includes:
a receiving unit 301, configured to receive first data in a current road scene, where the first data represents road data from a first user's perspective in the current road scene;
an obtaining unit 302, configured to obtain second data in the current road scene, where the second data represents road data from at least one second user's perspective in the current road scene;
a modeling unit 303, configured to establish a road prediction model for the current road scene based on the mapping relationship between the first data and the second data;
and a determining unit 304, configured to use third data as the input of the road prediction model to determine a road traffic event and output the road traffic event to the first user's terminal for display.
In a preferred embodiment, the obtaining unit 302 is further configured to obtain environmental data in the current road scene;
the modeling unit 303 is further specifically configured to establish a mapping relationship among the first data, the second data, and the environmental data to generate road behavior statistical data for the current road scene, and to establish the road prediction model for the current road scene based on the road behavior statistical data.
In a preferred embodiment, the determining unit 304 is further configured to determine first road behavior data of the first user based on the first data, where the first road behavior data characterizes a movement trend of the first user, and to determine second road behavior data of at least one second user based on the second data, where the second road behavior data characterizes a movement trend of the second user;
the modeling unit 303 is further specifically configured to establish the road prediction model for the current road scene based on the mapping relationship between the first road behavior data and the second road behavior data.
In a preferred embodiment, the system further includes:
a classifying unit 305, configured to classify the second data to obtain behavior class data of each second user;
the modeling unit 303 is further specifically configured to establish a road prediction model in the current road scene based on a mapping relationship between the behavior category data of each second user and the first data.
Here, the road scene of the third data is the same as that of the first data, and a time interval condition is satisfied between the next road scene and the current road scene; or, the road scene of the third data is different from that of the first data, and a time interval condition is satisfied between the next road scene and the current road scene.
In the present application, the first road behavior data includes at least image pixel data, trend motion data, skewness data, and kurtosis data of the first user; the second road behavior data includes at least image pixel data, trend motion data, skewness data, and kurtosis data of a second user.
The environmental data includes at least weather information and time information; the time information includes at least a date and a time, and the weather information is the weather state corresponding to the time information.
In a preferred embodiment, the system further includes:
a sending unit 306, configured to send the road behavior data to the first user so that it is displayed on the first user's terminal.
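To make the cooperation of these units concrete, the sketch below wires them into a single processing pipeline. The unit interfaces (receive, obtain, classify, build, predict, send) are hypothetical and only illustrate the data flow described above, not the patent's implementation.

```python
# Minimal sketch (hypothetical interfaces): the system's units chained into one pipeline.
class TrafficBehaviorPredictionSystem:
    def __init__(self, receiver, obtainer, modeler, classifier, determiner, sender):
        self.receiver = receiver        # receiving unit 301
        self.obtainer = obtainer        # obtaining unit 302
        self.modeler = modeler          # modeling unit 303
        self.determiner = determiner    # determining unit 304
        self.classifier = classifier    # classifying unit 305
        self.sender = sender            # sending unit 306

    def handle_scene(self, first_user_upload, third_data):
        first_data = self.receiver.receive(first_user_upload)          # first data, first user's perspective
        second_data = self.obtainer.obtain(first_data.scene_id)        # matching second data from the database
        categories = self.classifier.classify(second_data)             # behavior category data per second user
        model = self.modeler.build(first_data, categories)             # road prediction model for this scene
        event = self.determiner.predict(model, third_data)             # predicted road traffic event
        self.sender.send(first_user_upload.user_id, event)             # display on the first user's terminal
        return event
```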
Each module in the present embodiment is a logical module, and in practical applications, one logical unit may be one physical unit, may be a part of one physical unit, or may be implemented by a combination of a plurality of physical units. In addition, in order to highlight the innovative part of the present invention, elements that are not so closely related to solving the technical problems proposed by the present invention are not introduced in the present embodiment, but this does not indicate that other elements are not present in the present embodiment.
The present embodiments also provide an electronic device including one or more processors and a storage device storing one or more programs. When the one or more programs are executed by the one or more processors, the one or more processors implement the traffic behavior presetting method provided by the embodiments; to avoid repetition, the detailed steps of traffic behavior presetting are not repeated here.
The present embodiments further provide a computer-readable medium on which a computer program is stored. When executed by a processor, the computer program implements the traffic behavior presetting method of the embodiments; to avoid repetition, the detailed steps of traffic behavior presetting are not repeated here.
It will be understood by those of ordinary skill in the art that all or some of the steps of the above inventive method, systems, functional modules/units in the apparatus may be implemented as software, firmware, hardware, and suitable combinations thereof. In a hardware implementation, the division between functional modules/units mentioned in the above description does not necessarily correspond to the division of physical components; for example, one physical component may have multiple functions, or one function or step may be performed by several physical components in cooperation. Some or all of the physical components may be implemented as software executed by a processor, such as a central processing unit, digital signal processor, or microprocessor, or as hardware, or as an integrated circuit, such as an application specific integrated circuit. Such software may be distributed on computer readable media, which may include computer storage media (or non-transitory media) and communication media (or transitory media). The term computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data, as is well known to those of ordinary skill in the art. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, Digital Versatile Disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by a computer. In addition, communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media as known to those skilled in the art.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Those skilled in the art will appreciate that although some embodiments described herein include some features included in other embodiments instead of others, combinations of features of different embodiments are meant to be within the scope of the embodiments and form different embodiments.
It will be understood that the above embodiments are merely exemplary embodiments taken to illustrate the principles of the present invention, which is not limited thereto. It will be apparent to those skilled in the art that various modifications and improvements can be made without departing from the spirit and substance of the invention, and these modifications and improvements are also considered to be within the scope of the invention.
Claims (11)
1. A traffic behavior prediction method, characterized in that the method comprises:
receiving first data in a current road scene, wherein the first data represents road data from a first user's perspective in the current road scene;
acquiring second data in the current road scene, wherein the second data represents road data from at least one second user's perspective in the current road scene;
establishing a road prediction model for the current road scene based on the mapping relationship between the first data and the second data;
and using third data as the input of the road prediction model to predict a road traffic event, and outputting the road traffic event to the first user's terminal for display, wherein the third data represents road data from the first user's perspective in the next road scene.
2. The prediction method according to claim 1, wherein before the establishing of the road prediction model for the current road scene based on the mapping relationship between the first data and the second data, the method further comprises:
acquiring environmental data in the current road scene;
and the establishing of the road prediction model for the current road scene based on the mapping relationship between the first data and the second data comprises:
establishing a mapping relationship among the first data, the second data, and the environmental data to generate road behavior statistical data for the current road scene;
and establishing the road prediction model for the current road scene based on the road behavior statistical data.
3. The prediction method according to claim 1, wherein the establishing of the road prediction model for the current road scene based on the mapping relationship between the first data and the second data further comprises:
determining first road behavior data of the first user based on the first data, the first road behavior data characterizing a movement trend of the first user;
determining second road behavior data of at least one second user based on the second data, the second road behavior data characterizing a movement trend of the second user;
and establishing the road prediction model for the current road scene based on the mapping relationship between the first road behavior data and the second road behavior data.
4. The prediction method according to claim 1, wherein the establishing of the road prediction model for the current road scene based on the mapping relationship between the first data and the second data further comprises:
classifying the second data to obtain behavior category data of each second user;
and establishing the road prediction model for the current road scene based on the mapping relationship between the behavior category data of each second user and the first data.
5. The prediction method of claim 1, wherein the road scene of the third data is the same as that of the first data, and a time interval condition is satisfied between the next road scene and the current road scene;
or, the road scene of the third data is different from that of the first data, and a time interval condition is satisfied between the next road scene and the current road scene.
6. The prediction method of claim 3, wherein the first road behavior data comprises at least image pixel data, trend motion data, skewness data, and kurtosis data of the first user; and the second road behavior data comprises at least image pixel data, trend motion data, skewness data, and kurtosis data of a second user.
7. The prediction method of claim 2, wherein the environmental data includes at least weather information and time information.
8. The prediction method according to claim 7, wherein the time information includes at least a date and a time, and the weather information is a weather state corresponding to the time information.
9. The prediction method of claim 3, wherein the method further comprises:
sending the first road behavior data to the first user so that it is displayed on a terminal of the first user.
10. A traffic behavior presetting method, characterized in that the method comprises:
sending first data in a current road scene to a server, wherein the first data represents road data from a first user's perspective in the current road scene;
receiving a road traffic event predicted by the server based on the first data and second data, wherein the server establishes a road prediction model for the current road scene based on the mapping relationship between the first data and the second data and uses third data as the input of the road prediction model to predict the road traffic event;
wherein the second data characterizes road data from at least one second user's perspective in the current road scene, and the third data characterizes road data from the first user's perspective in the next road scene.
11. A traffic behavior prediction system, characterized in that the system comprises:
a receiving unit, configured to receive first data in a current road scene, wherein the first data represents road data from a first user's perspective in the current road scene;
an acquisition unit, configured to acquire second data in the current road scene, wherein the second data represents road data from at least one second user's perspective in the current road scene;
a modeling unit, configured to establish a road prediction model for the current road scene based on the mapping relationship between the first data and the second data;
and a determining unit, configured to determine a road traffic event by using third data as the input of the road prediction model, and to output the road traffic event to the first user's terminal for display.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110300208.5A CN113065691A (en) | 2021-03-22 | 2021-03-22 | Traffic behavior prediction method and system |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113065691A (en) | 2021-07-02 |
Family
ID=76563451
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110300208.5A Pending CN113065691A (en) | 2021-03-22 | 2021-03-22 | Traffic behavior prediction method and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113065691A (en) |
Patent Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101853388A (en) * | 2009-04-01 | 2010-10-06 | 中国科学院自动化研究所 | Unchanged view angle behavior identification method based on geometric invariable |
CN104915628A (en) * | 2014-03-14 | 2015-09-16 | 株式会社理光 | Pedestrian movement prediction method and device by carrying out scene modeling based on vehicle-mounted camera |
CN108028020A (en) * | 2015-11-02 | 2018-05-11 | 大陆汽车有限公司 | For select and between vehicle transmission sensor data method and apparatus |
CN109421738A (en) * | 2017-08-28 | 2019-03-05 | 通用汽车环球科技运作有限责任公司 | Method and apparatus for monitoring autonomous vehicle |
CN111542831A (en) * | 2017-12-04 | 2020-08-14 | 感知自动机股份有限公司 | System and method for predicting human interaction with vehicle |
CN109558781A (en) * | 2018-08-02 | 2019-04-02 | 北京市商汤科技开发有限公司 | A kind of multi-angle video recognition methods and device, equipment and storage medium |
CN109285348A (en) * | 2018-10-26 | 2019-01-29 | 深圳大学 | A kind of vehicle behavior recognition methods and system based on two-way length memory network in short-term |
CN111815951A (en) * | 2020-07-17 | 2020-10-23 | 山东科技大学 | Road vehicle monitoring system and method based on intelligent vision Internet of things |
CN112466114A (en) * | 2020-11-19 | 2021-03-09 | 南京代威科技有限公司 | Traffic monitoring system and method based on millimeter wave technology |
CN112487905A (en) * | 2020-11-23 | 2021-03-12 | 北京理工大学 | Method and system for predicting danger level of pedestrian around vehicle |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20210702 |