CN116897380A - Road condition detection method, readable medium and electronic equipment
Classifications
- G08G 1/01: Traffic control systems for road vehicles; detecting movement of traffic to be counted or controlled
Abstract
A road condition detection method, a readable medium and an electronic device. The road condition detection method is applied to an electronic device and includes the following steps: acquiring a video stream captured by a camera on a target road section; determining, from the video stream, current characteristic information of target objects on the target road section, where the characteristic information includes motion state information and/or position state information; determining current road condition information of the target road section based on the current characteristic information and/or historical characteristic information of the target objects; and sending the current road condition information of the target road section to vehicles within a preset range of the target road section. In this way, vehicles within the preset range of the target road section can acquire its road condition information in real time, which ensures the accuracy of the road condition information received by an autonomous vehicle, improves the accuracy of autonomous vehicle control, and further improves the safety and user experience of the autonomous vehicle.
Description
The present application relates to the field of automatic driving, and in particular, to a road condition detection method, a readable medium, and an electronic device.
Traffic safety has long been a hot issue in the traffic field. At present, for accident-prone road sections, traffic warning signs are mainly placed on those sections, or the sections are marked in a vehicle's navigation map, to remind the driver that the section is accident-prone and requires cautious driving. However, both implementations require a period of time after a traffic accident occurs on the road before the driver can learn of it. As a result, an autonomous vehicle cannot obtain information about the most recent traffic accident situation at the first moment, which affects the driving safety and user experience of the autonomous vehicle.
For example, the information on accident-prone sections released by a navigation map is mainly compiled from third-party data on traffic accidents that occurred on the section over a past period of time; whether a section is accident-prone is determined by manual analysis, and the result is then published on the navigation map. However, there is a long interval between collecting and analyzing accident information and publishing it to the navigation map, and accidents occurring during that interval are not collected or analyzed, so the timeliness of the published accident-prone-section information cannot be guaranteed.
Disclosure of Invention
The embodiment of the application provides a road condition detection method, a readable medium and electronic equipment.
In a first aspect, an embodiment of the present application provides a road condition detection method for an electronic device, including: acquiring a video stream captured by a camera on a target road section; determining, from the video stream, current characteristic information of target objects on the target road section, where the characteristic information includes motion state information and/or position state information; and determining current road condition information of the target road section based on the current characteristic information of the target objects and/or historical characteristic information of the target objects.
For example, the roadside device acquires the video stream captured in real time by a camera arranged on the target road section, performs target detection on each frame of the video stream, determines each target object and its position in each frame, and determines the characteristic information corresponding to each target object from its positions across consecutive frames, where the characteristic information includes motion state information and/or position state information. The motion state information may be speed, acceleration and the like, and the position state information may be position coordinates in a world coordinate system. The current road condition information of the target road section is determined from the current characteristic information of each target object on the section and the historical characteristic information of each target object on the section, and is then transmitted to vehicles within a preset range of the target road section. The current road condition information of the target road section may be a road risk level of the target road section.
It can be appreciated that, with the road condition detection method of the present application, the electronic device can update the road condition information of the target road section in real time and transmit it in real time to vehicles within a preset range of the section. Vehicles within the preset range can thus acquire the road condition information of the target road section in real time, which ensures the accuracy of the acquired information, improves the accuracy of autonomous vehicle control, and further improves the safety and user experience of the autonomous vehicle.
In a possible implementation of the first aspect, the method further includes: and sending the current road condition information of the target road section to the vehicle within the preset range of the target road section.
In a possible implementation of the first aspect, the method further includes: the target object includes at least one of: vehicles, people, obstacles.
In a possible implementation of the first aspect, the current road condition information of the target road section includes a road risk level, and determining the current road condition information of the target road section based on the current characteristic information of the target object and/or the historical characteristic information of the target object includes: determining the current traffic flow and collision accident information and the historical traffic flow and collision accident information of the target road section based on the current and historical characteristic information of the target objects; and determining the current road risk level of the target road section based on the current and historical traffic flow and collision accident information.
In a possible implementation of the first aspect, the collision accident information includes at least one of: the number of collision accidents, the average relative acceleration of the colliding vehicles, the average relative speed of the colliding vehicles.
In a possible implementation of the first aspect, the method further includes: determining the current road risk level for the target link based on the current traffic and collision accident information for the target link and the historical traffic and collision accident information includes:
determining a collision risk indicator parameter of the target road segment based on the current traffic and collision accident information of the target road segment and the historical traffic and collision accident information, wherein the collision risk indicator parameter comprises at least one of the following: severity of collision accident, exposure of collision accident, controllability of collision accident.
And determining the current road risk level of the target road section based on the collision risk index parameter of the target road section.
In a possible implementation of the first aspect, the method further includes: the current road risk level $R_{dynamic}$ is calculated by the following formula:

$R_{dynamic} = S \cdot E \cdot C$

where $S$ represents the severity of the collision accidents, $E$ represents the exposure of the collision accidents, and $C$ represents the controllability of the collision accidents.
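As a minimal sketch of this combination (the function name and the example values below are illustrative, not from the patent):

```python
def road_risk_index(severity: float, exposure: float, controllability: float) -> float:
    """Combine the three collision risk indicator parameters into the
    dynamic road risk index R_dynamic = S * E * C."""
    return severity * exposure * controllability

# Example: moderately severe, fairly frequent, poorly controllable accidents.
print(road_risk_index(severity=2.5, exposure=0.04, controllability=1.8))
```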
In a possible implementation of the first aspect, the method further includes: determining collision risk indicator parameters for the target road segment based on the current traffic and collision accident information for the target road segment and the historical traffic and collision accident information includes:
the severity of the collision accident for the target road segment is determined based on the number of collision accidents for the current and historical target road segments, the average relative acceleration of the collision vehicle, the average relative speed of the collision vehicle.
In a possible implementation of the first aspect, the method further includes: the number of collision accidents includes: the number of vehicle-to-vehicle collisions, the number of vehicle-to-person collisions, the number of vehicle-to-obstacle collisions.
The severity $S$ of the collision accidents is calculated by the following formula:

$S = \alpha_0 v_r \exp(\beta_0 a_r) \cdot (\alpha_1 N_{CC} + \alpha_2 N_{CO} + \alpha_3 N_{CP}) \cdot \exp(\beta_1 N_{death})$

where $v_r$ represents the average relative speed of the colliding vehicles, $a_r$ represents the average relative acceleration of the colliding vehicles, $N_{CC}$ represents the number of vehicle-to-vehicle collisions, $N_{CO}$ represents the number of vehicle-to-obstacle collisions, $N_{CP}$ represents the number of vehicle-to-person collisions, $\alpha_0$, $\alpha_1$, $\alpha_2$, $\alpha_3$, $\beta_0$, $\beta_1$ represent weight parameters, $N_{death}$ represents the number of deaths in the collision accidents of the current and historical target road section, and $\exp$ denotes the exponential function.
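A sketch of this severity computation follows; the weight values in `alpha` and `beta` are illustrative placeholders, since the patent does not specify them:

```python
import math

def severity(v_r: float, a_r: float,
             n_cc: int, n_co: int, n_cp: int, n_death: int,
             alpha=(1.0, 1.0, 1.5, 3.0), beta=(0.1, 0.5)) -> float:
    """S = a0 * v_r * exp(b0 * a_r) * (a1*N_CC + a2*N_CO + a3*N_CP) * exp(b1*N_death)

    v_r: average relative speed of colliding vehicles (m/s)
    a_r: average relative acceleration of colliding vehicles (m/s^2)
    n_cc / n_co / n_cp: vehicle-vehicle / vehicle-obstacle / vehicle-person collisions
    n_death: number of deaths in the counted accidents
    The weights in `alpha` and `beta` are placeholder assumptions."""
    a0, a1, a2, a3 = alpha
    b0, b1 = beta
    return (a0 * v_r * math.exp(b0 * a_r)
            * (a1 * n_cc + a2 * n_co + a3 * n_cp)
            * math.exp(b1 * n_death))

# 10 accidents as in the description's example: 5 car-car, 3 car-obstacle, 2 car-person.
print(severity(v_r=12.0, a_r=2.0, n_cc=5, n_co=3, n_cp=2, n_death=1))
```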
In a possible implementation of the first aspect, the method further includes: determining collision risk indicator parameters for the target road segment based on the current traffic and collision accident information for the target road segment and the historical traffic and collision accident information includes:
The controllability of the collision accidents of the target road section is determined based on the average relative acceleration and the average relative speed of the colliding vehicles for the current and historical target road section.
In a possible implementation of the first aspect, the controllability $C$ of the collision accidents of the target road section is calculated by the following formula:

where $a_r$ represents the average relative acceleration of the colliding vehicles, $v_r$ represents the average relative speed of the colliding vehicles, $\beta_2$ represents a weight parameter, $N_{death}$ represents the number of deaths in the collision accidents of the current and historical target road section, and $\exp$ denotes the exponential function.
In a possible implementation of the first aspect, the method further includes: determining collision risk indicator parameters for the target road segment based on the current traffic and collision accident information for the target road segment and the historical traffic and collision accident information includes:
the exposure of the collision accident for the current target road segment is determined based on the traffic flow and the number of collision accidents for the current and historical target road segments.
In a possible implementation of the first aspect, the exposure $E$ of the collision accidents is calculated by the following formula:

$E = N_{accident} / N_{total}$

where $N_{accident}$ represents the number of collision accidents and $N_{total}$ represents the traffic flow.
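A minimal sketch of this exposure computation, under the ratio form reconstructed above (an assumption, since the original formula image is not reproduced in the text):

```python
def exposure(n_accident: int, n_total: int) -> float:
    """Exposure of collision accidents, assumed to be the share of collision
    accidents in the cumulative traffic count: E = N_accident / N_total."""
    if n_total == 0:
        return 0.0  # no traffic observed, so no exposure
    return n_accident / n_total

print(exposure(n_accident=10, n_total=25000))  # e.g. 10 accidents in 25,000 passes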
In a possible implementation of the first aspect, the current road condition information of the target road section includes characteristic information of sensitive traffic participants, and determining the current road condition information of the target road section based on the current characteristic information of the target object and/or the historical characteristic information of the target object includes: screening sensitive traffic participants from the target objects based on their current characteristic information, and determining the current characteristic information of the sensitive traffic participants, where the sensitive traffic participants include at least one of: pedestrians, heavy trucks, bicycles, vans.
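A minimal screening sketch, assuming each target object is represented as a record with hypothetical `cls`, `pos`, `speed` and `accel` fields (the field names are illustrative, not from the patent):

```python
SENSITIVE_CLASSES = {"pedestrian", "heavy_truck", "bicycle", "van"}

def screen_sensitive_participants(targets: list[dict]) -> list[dict]:
    """Keep only targets whose detected class marks them as sensitive traffic
    participants, carrying their current feature information (position,
    speed, acceleration) along unchanged."""
    return [t for t in targets if t["cls"] in SENSITIVE_CLASSES]

targets = [
    {"cls": "car", "pos": (3.1, 40.2), "speed": 16.7, "accel": 0.2},
    {"cls": "pedestrian", "pos": (5.8, 41.0), "speed": 1.2, "accel": 0.0},
]
print(screen_sensitive_participants(targets))  # keeps only the pedestrian
```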
In a second aspect, an embodiment of the present application provides a readable medium having instructions stored thereon which, when executed on an electronic device, cause the electronic device to perform the road condition detection method of the first aspect and any of its various possible implementations.
In a third aspect, an embodiment of the present application provides an electronic device, including:
a memory for storing instructions to be executed by one or more processors of the electronic device; and a processor, being one of the processors of the electronic device, configured to perform the road condition detection method of the first aspect and any of its various possible implementations.
In a fourth aspect, an embodiment of the present application provides a computer program product comprising a computer program/instruction which, when executed by a processor, implements the road condition detection method of the first aspect and any of the various possible implementations of the first aspect.
FIG. 1 is a diagram illustrating a road condition detection scenario according to an embodiment of the present application;
FIG. 2 is a diagram illustrating another road condition detection scenario according to an embodiment of the present application;
FIG. 3 is a flow chart illustrating a road condition detection method according to an embodiment of the present application;
FIG. 4 is a flow chart illustrating another road condition detection method according to an embodiment of the present application;
FIG. 5 is a flow chart illustrating another road condition detection method according to an embodiment of the present application;
FIG. 6 is a block diagram illustrating the structure of an electronic device according to an embodiment of the present application.
Illustrative embodiments of the present application include, but are not limited to, road condition detection methods, readable media, and electronic devices.
In order to solve the above problems, the present application provides a road condition detection method applied to an electronic device. The method includes: acquiring the video stream captured in real time by a camera arranged on the target road section; performing target detection on each frame of the video stream; determining each target object and its position in each frame; and determining the characteristic information corresponding to each target object from its positions across consecutive frames, where the characteristic information includes motion state information, position state information, and the like. The motion state information may be speed, acceleration and the like, and the position state information may be position coordinates in a world coordinate system. The current road condition information of the target road section is determined from the current characteristic information of each target object on the section and the historical characteristic information of each target object on the section, and is then transmitted to vehicles within a preset range of the target road section. The current road condition information of the target road section may be a road risk level of the target road section.
It will be appreciated that the target object may be any of a variety of vehicles, such as bicycles, cars, heavy trucks, and the like. The vehicles within the preset range of the target road section may be vehicles on the target road section or vehicles located no more than a preset distance from the target road section.
The current road condition information (for example, the road risk level) of the target road section may be determined from the current characteristic information of each target object (each vehicle) on the section and the historical characteristic information of each target object on the section in the following manner: based on the current characteristic information of each vehicle on the section and the historical positions of the vehicles on the section, the current and historical traffic flow and collision accident information of the target road section can be determined, where the collision accident information may include the number of collision accidents. It can be understood that, for example, when the positions of two vehicles fall within a certain range of each other, it may be determined that the two vehicles have collided (see the sketch below). The road risk level of the target road section is then determined from the traffic flow and the number of vehicle collision accidents at the current moment and over a past period of time: when these are relatively large, the road risk level is relatively high; when they are relatively small, the road risk level is relatively low.
It can be appreciated that, with the road condition detection method of the present application, the electronic device can update the road condition information of the target road section in real time and transmit it in real time to vehicles within a preset range of the section. Vehicles within the preset range can thus acquire the road condition information of the target road section in real time, which ensures the accuracy of the acquired information, improves the accuracy of autonomous vehicle control, and further improves the safety and user experience of the autonomous vehicle.
It can be understood that the electronic device in the embodiments of the present application may be a roadside device, a server, an in-vehicle head unit, or another device; the type of the electronic device is not specifically limited and depends on the actual application.
It can be appreciated that the road condition detection method of the present application can be applied to autonomous driving scenarios, manual driving scenarios, driverless scenarios, and the like; the actual application scenario is not specifically limited and depends on the actual application.
For ease of understanding, with respect to the road risk level herein, "severity" may refer to the extent to which the persons and property involved would be harmed once a risk materializes; "exposure" may refer to the probability that persons or property are affected when a risk occurs; and "controllability" may refer to the extent to which the driver or others can take proactive measures to avoid harm when a risk arises.
In order to make the objects, technical solutions and advantages of the present application more apparent, the technical solutions of the present application will be described in detail with reference to fig. 1 to 6.
Fig. 1 illustrates a road condition detection scenario according to an embodiment of the present application. The scenario of fig. 1 includes road section A, road section B, road section C and road section D. As shown in fig. 1, the electronic device 100 may be a roadside device, where the roadside device 100 is disposed at the roadside of the A section.
As shown in fig. 1, taking the A section as an example, cameras 200-1, 200-2 and 200-3 are disposed on the A section. The A section also contains a plurality of target objects: vehicle 300-1, colliding vehicle 300-2, colliding vehicle 300-3, pedestrian 300-4, pedestrian 300-5 and pedestrian 300-6.
As shown in fig. 1, cameras 200-1, 200-2, 200-3 are used to collect video streams on the a road segment in real time and send the video streams to road side device 100.
As shown in fig. 1, the roadside device 100 is configured to receive, in real time, the video streams of the A section captured by cameras 200-1 to 200-3 and to determine the target object information of the A section from those video streams, where the target object information describes the positions and/or motion states of vehicle 300-1, colliding vehicle 300-2, colliding vehicle 300-3, pedestrian 300-4, pedestrian 300-5 and pedestrian 300-6 on the A section. The roadside device 100 is further configured to determine the road condition information of the A section based on the characteristic information of its target objects and to transmit that information to vehicles within a preset range of the A section. A vehicle receiving the road condition information can adjust its travel speed on the A section accordingly.
For example, as shown in fig. 1, the vehicle 300-1 receives, in real time when entering the A section, the road risk level of the A section transmitted by the electronic device 100; if the received road risk level is high, the vehicle 300-1 may adjust its travel speed on the A section from 100 km/h to 30 km/h according to the road risk level.
It will be appreciated that the road condition information of the A section may be used to assist the driver or the autonomous vehicle in adjusting the vehicle's travel speed on the A section. After a vehicle within the preset range of the A section receives the road condition information, it can determine in advance, before being on or entering the A section, whether to adjust its travel speed there. This improves the accuracy of autonomous vehicle control and further improves the safety and driving experience of the autonomous vehicle.
It will be understood that, as shown in fig. 1, the roadside device 100 may also acquire the video streams captured by the cameras of the B, C and D sections and generate road condition information for those sections from the acquired video streams.
As shown in fig. 1, the roadside device 100 is disposed at the roadside of the A section; in other embodiments, other roadside devices may be disposed at the roadsides of the B, C and D sections, respectively. Those roadside devices can likewise acquire the video streams captured by the cameras of their sections and generate the corresponding road condition information. It will be appreciated that the scenario of the present application is not limited to that shown in fig. 1, and the specific locations and number of roadside devices disposed at the roadside are not specifically limited and depend on the actual application.
The roadside device 100 may be an infrastructure device, a fixed device, a Road Side Unit (RSU) disposed at the roadside, or a roadside device supporting vehicle-to-everything (V2X) applications. It will be appreciated that the roadside device 100 is not limited and depends on the actual application.
Fig. 2 illustrates another road condition detection scenario diagram according to an embodiment of the present application. In the scenario of fig. 2, the electronic device 100 may be a remotely located server, as compared to the scenario of fig. 1.
As shown in fig. 2, taking the section a as an example, the server 100 is configured to receive video streams on the section a collected by the cameras 200-1 to 200-3 in real time, and determine target object information of the section a according to the video streams on the section a collected by the cameras 200-1 to 200-3, where the target object information of the section a is used to describe a position and/or a motion state of a target object on the target section. The server 100 is further configured to determine road condition information of the a road segment based on the target object information of the a road segment, where the road condition information of the a road segment is used to adjust a traveling speed of the vehicle on the a road segment. The server 100 is further configured to transmit the road condition information of the a road segment to the vehicles within the preset range of the a road segment.
It can be understood that the server 100 is further configured to receive the video streams captured by the cameras of the B, C and D sections and to generate road condition information for those sections from the captured video streams. For details, refer to the description of the A section, which is not repeated here.
It may be appreciated that the server 100 may be a hardware server, and the server 100 may be an independent physical server, or may be a server cluster formed by a plurality of physical servers, or may be a server that provides basic cloud computing services such as a cloud database, cloud storage, CDN, and the like, which is not limited according to practical applications.
It will be appreciated that in the scenario of fig. 1 or 2, cameras 200-1, 200-2 and 200-3 and the electronic device 100 may be communicatively coupled via one or more networks. The network may be wired or wireless; a wireless network may be, for example, a mobile cellular network (e.g., 5G, 4G, 3G or GPRS) or a Wireless Fidelity (Wi-Fi) network, or another possible network, which the embodiments of the present application do not limit. For example, cameras 200-1 to 200-3 transmit the video streams captured in real time on the A section to the electronic device 100 through a wired network.
It will be appreciated that in the scenario of fig. 1 or 2, the electronic device 100 may be communicatively coupled to vehicles within the preset range of the A section via one or more wireless networks. For example, the wireless network may be a mobile cellular network (e.g., 5G, 4G, 3G or GPRS) or a Wireless Fidelity (Wi-Fi) network, although other networks are possible; the embodiments of the application are not limited in this respect. For example, the electronic device 100 transmits the road condition information of the A section to vehicles within the preset range of the A section through a wireless network.
It can be understood that the camera capturing the video stream of the road section may be a 360-degree rotating camera (e.g., camera 200-1), a long-range camera, a zoom camera, a close-range camera, a flash camera, a checkpoint camera, a speed-measuring camera, and the like.
It will be appreciated that the electronic device 100 to which the road condition detection method of the present application applies may be the roadside device in the scenario of fig. 1, the server in the scenario of fig. 2, an in-vehicle head unit, a laptop computer, a desktop computer, a tablet computer, a mobile phone, a wearable device, a head-mounted display, a mobile email device, a portable game machine, a portable music player, a reader device, or another electronic device capable of accessing a network. In some implementations, embodiments of the application may also be applied to wearable devices worn by users, for example smart watches, bracelets, jewelry (e.g., devices made into decorative items such as earrings or bracelets) or glasses, or as part of a watch, bracelet, piece of jewelry or pair of glasses. The embodiments of the present application do not limit the electronic device 100; it depends on the actual application.
Based on the above scenarios, fig. 3 shows a flow chart of road condition detection. The execution subject of fig. 3 is the electronic device 100, and as shown in fig. 3, the method specifically includes:
s301: and acquiring the target object of the target road section in real time.
In some embodiments, the electronic device 100 obtains a video stream captured on a target road segment, performs target detection on each frame of image in the video stream, and determines a target object in each frame of image.
In some embodiments, the target object may include at least one of: colliding vehicles, bicycles, cars, heavy trucks, and the like. In other embodiments, the target object may also include static objects such as accumulated water, roadblocks, and the like. It will be appreciated that the type of the target object on the target road section is not limited and depends on the actual application.
For example, as shown in fig. 1, taking the A section as an example, the electronic device 100 may perform target detection, using a target detection algorithm, on each frame of the video of the A section captured in real time by cameras 200-1 to 200-3 arranged on the A section, to obtain the target objects appearing in each frame of the video stream.
S302: determine the characteristic information corresponding to each target object from the target objects of the target road section acquired in real time.
In some embodiments, the electronic device 100 determines the feature information corresponding to the target object according to the target object of the target road segment acquired in real time. The feature information corresponding to the target object may include: target object motion state information and position state information. The location state information of the target object at the target link may be location coordinates of the target object at the target link. The motion state information of the target object in the target road section can be information such as speed, acceleration and the like.
In some embodiments, after the electronic device 100 detects the target object in each frame of image, the motion state information of the target object may also be determined through a speed detection algorithm. For example, the speed detection algorithm determines the travel time Δt of the target object from the frame rate of the camera capturing the video stream, converts the target object's position into three-dimensional world coordinates to obtain its actual travel distance Δs, and then calculates the instantaneous or average speed V or the acceleration a of the target object. Determining the information of the target object is described in detail below and is not repeated here.
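A sketch of this frame-rate-based estimate, assuming the target's per-frame world coordinates have already been obtained (function and variable names are illustrative):

```python
import math

def motion_state(positions: list[tuple], fps: float) -> tuple:
    """Estimate the average speed and an acceleration of a tracked target
    from its world-coordinate positions in consecutive frames. The time step
    per frame is 1/fps; the travel distance is the per-frame displacement."""
    dt = 1.0 / fps
    speeds = [math.dist(p, q) / dt for p, q in zip(positions, positions[1:])]
    avg_speed = sum(speeds) / len(speeds)
    # Acceleration from the change in per-frame speed over the elapsed time.
    accel = (speeds[-1] - speeds[0]) / (dt * (len(speeds) - 1)) if len(speeds) > 1 else 0.0
    return avg_speed, accel

track = [(0.0, 0.0), (0.5, 0.0), (1.02, 0.0), (1.56, 0.0)]  # positions at 25 fps
print(motion_state(track, fps=25.0))
```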
In some embodiments, the target objects in each frame of image may further include static objects such as accumulated water and roadblocks. The positions of these target objects in each frame are determined, and the characteristic information determined from their positions across consecutive frames may be position state information such as the location of accumulated water or of a roadblock on the target road section. The motion state information of such a target object may be a speed of 0, an acceleration of 0, and the like.
S303: determine the road condition information of the target road section based on the characteristic information of its target objects, where the road condition information describes the road condition of the target road section.
In some embodiments, the road condition information of the A section may include, in addition to the road risk level of the A section, the sensitive traffic participants on the A section, the traffic flow of the A section, the depth of accumulated water on the A section, the positions of roadblocks on the A section, and the like.
In some embodiments, the electronic device 100 determines the road condition information of the target road segment based on the target object information of the target road segment, where the road condition information of the target road segment is used to describe the road condition of the target road segment. The vehicle can adjust the running state (such as speed reduction, parking and the like) or running track (such as running route of the automatic driving vehicle) of the automatic driving vehicle in time before entering the target road section or when entering the target road section according to the road condition information of the target road section, so as to ensure the driving safety of the automatic driving vehicle.
In some embodiments, the road condition information of the target road section may be its road risk level. For example, the electronic device 100 may identify colliding vehicles on the target road section at the current time and over a past period of time from the relative positions of the vehicles, and determine the road risk level of the target road section from the speed, acceleration and the like of the vehicles at the time of collision. Specifically, taking vehicles as the target objects, the electronic device 100 determines the collision accident information and the cumulative number of vehicle passes of the target road section at the current time and within a past preset time period based on vehicle information of the target road section acquired in real time. The collision accident information includes at least one of: the cumulative number of vehicle collision accidents, the average relative speed at the time of vehicle collision, and the average relative acceleration at the time of vehicle collision, where the cumulative number of vehicle collision accidents includes the number of vehicle-to-vehicle collision accidents, the number of vehicle-to-obstacle collision accidents, the number of vehicle-to-pedestrian collision accidents, and the like.
In some embodiments, the electronic device 100 determines, from the target object information of the target road section acquired in real time and the target object information of the target road section within a past preset time period, the characteristic information of colliding vehicles on the target road section at the current time and within the past preset time period, and determines the collision accident information of the target road section for that period based on this characteristic information. Taking the vehicles in fig. 1 as an example, a collision between vehicle 300-2 and vehicle 300-3 is determined from the position states of the two vehicles, and the relative speed and relative acceleration of vehicle 300-2 and vehicle 300-3 at the time of the collision are thereby acquired.
Further, the electronic device 100 determines the collision risk indicator parameters of the target road section from its collision accident information within the past preset time period, and determines the road risk level of the target road section from those parameters. The collision risk indicator parameters of the target road section include at least one of the following: the severity of the collision accidents, the exposure of the collision accidents, the controllability of the collision accidents. The details of how the electronic device 100 determines the road risk level of the target road section based on the target object information are described below and are not repeated here.
In some embodiments, the road condition information of the target road segment may be sensitive traffic participants of the target road segment. Specifically, the electronic device 100 screens out the sensitive traffic participants of the target road section based on the target object information of the target road section, wherein the characteristic information of the sensitive traffic participants may include at least one of the following: the location of the sensitive traffic participant at the target link, the speed of travel, and the acceleration of travel.
In some embodiments, in urban road traffic, sensitive traffic participants may include, but are not limited to: pedestrians, bicycles, strollers, people in wheelchairs, and the like. In highway traffic, sensitive traffic participants include, but are not limited to, large trucks, vans, and the like.
In other embodiments, the road condition information of the target road segment may be the traffic flow of the target road segment, the water accumulation depth of the target road segment, the roadblock position of the target road segment, and the like. It is understood that the road condition information of the target road section may be used for the autonomous vehicle to adjust the running state (e.g., decelerating, stopping, etc.) or running track (e.g., running route of the autonomous vehicle) of the vehicle on the target road section, etc., so as to ensure the driving safety of the autonomous vehicle.
S304: send the road condition information to vehicles within the preset range of the target road section.
In some embodiments, the electronic device 100 transmits the road condition information to vehicles within the preset range of the target road section, and a vehicle may determine, based on that information, whether to adjust its travel speed on the target road section in advance of being on or entering the section. This improves the accuracy of autonomous vehicle control and further improves the safety and driving experience of the autonomous vehicle.
According to the road condition detection method described in fig. 3, the electronic device 100 determines the road condition information of the target road section according to the target object information of the target road section obtained in real time, and sends the road condition information of the target road section to the vehicle within the preset range of the target road section in real time. The road condition information of the target road section can be information such as road risk level, sensitive traffic participants, traffic flow, ponding depth, roadblock position and the like.
It can be seen that, according to the road condition detection method of the present application, the electronic device 100 can update the road condition information of the target road section in real time, and that information can be used to assist the driver or the autonomous vehicle in adjusting the vehicle's travel speed on the section. Vehicles within the preset range of the target road section obtain the road condition information in real time, so the driver or the autonomous vehicle can learn the section's condition in real time and determine in advance, before entering the section, whether to adjust the vehicle's travel speed there. This improves the accuracy of autonomous vehicle control and further improves the safety and driving experience of the autonomous vehicle.
In some embodiments, the road condition information of the target road segment may be a road risk level, and the electronic device 100 determines the road risk level of the target road segment according to the target object information of the target road segment acquired in real time and sends the road risk level to the vehicle within the preset range of the target road segment, as described in detail below with reference to fig. 4.
Based on the road condition detection scenario of fig. 1 or fig. 2, fig. 4 shows a flow chart of another road condition detection method. The execution subject of fig. 4 is the electronic device 100, and as shown in fig. 4, the method specifically includes:
S401: acquire, in real time, the video stream captured by at least one camera on the A section.
For example, as shown in fig. 1, the electronic device 100 may acquire the video streams of the A section captured by cameras 200-1 to 200-3 on the A section, the video streams of the B section captured by the three cameras on the B section, the video streams of the C section captured by the two cameras on the C section, and the video streams of the D section captured by the cameras on the D section. The road condition detection process of fig. 4 is described in detail below taking the A section as the target road section.
S402: determine the real-time target objects of the A section and their corresponding characteristic information based on the video stream of the A section acquired in real time.
In some embodiments, the electronic device 100 determines feature information corresponding to the target object according to the target object of the a road segment acquired in real time. The feature information corresponding to the target object may include: target object motion state information and position state information. The location state information of the target object at the a-road segment may be location coordinates of the target object at the a-road segment. The motion state information of the target object in the section a may be information such as speed, acceleration, etc.
In some embodiments, the electronic device 100 performs target detection on each frame of the acquired video stream of the A section captured by the cameras arranged on the A section, to obtain all target objects appearing in the video stream. Specifically, target detection may be performed on each frame of the video stream by a target detection algorithm. For example, in the scenario of fig. 1, if the target object is set as cars, the target detection algorithm detects three target objects in total on the A section, namely vehicle 300-1, vehicle 300-2 and vehicle 300-3. If the target objects are set as cars and people, six target objects are detected on the A section, namely vehicle 300-1, vehicle 300-2, vehicle 300-3, pedestrian 300-4, pedestrian 300-5 and pedestrian 300-6. In other embodiments, the target object may be configured as a colliding vehicle, a heavy truck, a van, accumulated water on the section, a roadblock, and the like. It will be appreciated that the type of the target object on the road is not limited and depends on the application.
In some embodiments, the target detection algorithm traverses each frame of the input video stream, classifies each frame's content into targets and non-targets, and determines the position coordinates of the targets in each frame. In some embodiments, target detection may be performed on each frame of the video stream by any of the Cascade R-CNN (cascade region-based convolutional neural network) algorithm, the Faster R-CNN (faster region-based convolutional neural network) algorithm, the SSD (Single Shot MultiBox Detector) algorithm, the YOLO (You Only Look Once) real-time object detection algorithm, and the like, to obtain all the target objects appearing in the video stream.
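A sketch of this per-frame traversal; the `Detector` interface below is an illustrative stand-in, not the API of any of the named detectors:

```python
# `Detector` stands in for any of the detectors named above (Cascade R-CNN,
# Faster R-CNN, SSD, YOLO); this interface is a simplifying assumption.
class Detector:
    def detect(self, frame):
        """Return a list of (class_name, bounding_box) pairs for one frame."""
        raise NotImplementedError

class DummyDetector(Detector):
    # Stub that pretends to find one target and one non-target per frame.
    def detect(self, frame):
        return [("car", (10, 20, 60, 90)), ("tree", (0, 0, 5, 5))]

def targets_per_frame(video_frames, detector, wanted=frozenset({"car", "person"})):
    """Traverse every frame of the video stream, classify detections into
    targets and non-targets, and keep only the configured target classes
    together with their position coordinates."""
    for frame in video_frames:
        yield [(cls, box) for cls, box in detector.detect(frame) if cls in wanted]

for detections in targets_per_frame([object(), object()], DummyDetector()):
    print(detections)  # [('car', (10, 20, 60, 90))]
```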
In some embodiments, after the electronic device 100 detects the target object in each frame of image, the speed, acceleration and the like of the target object may also be determined through a speed detection algorithm. For example, the speed detection algorithm determines the travel time Δt of the target object from the frame rate of the camera capturing the video stream, locates the target object to obtain its three-dimensional coordinates and thereby its actual travel distance Δs, and then calculates the instantaneous or average speed V or the acceleration a of the target object.
S403: determine the collision accident information of the A section within a past preset time period from the characteristic information corresponding to the real-time target objects of the A section and the characteristic information corresponding to the target objects of the A section within the past preset time period.
In some embodiments, the electronic device 100 determines the cumulative number of vehicle passes (i.e., the traffic flow) and the collision accident information of the A section within the past preset time period from the characteristic information corresponding to the real-time target objects of the A section and the characteristic information corresponding to the target objects of the A section within the past preset time period. The collision accident information of the A section may include at least one of: the cumulative number of vehicle collision accidents, the average relative speed at the time of vehicle collision, the average relative acceleration at the time of vehicle collision. The cumulative number of vehicle collision accidents includes the number of vehicle-to-vehicle collision accidents, the number of vehicle-to-obstacle collision accidents, and the number of vehicle-to-pedestrian collision accidents.
Taking the A section as an example, the preset time period may be the past year; for example, if the current time is 13:00 on December 24, 2021, the past year runs from 13:00 on December 24, 2020 to 13:00 on December 24, 2021.
Specifically, the electronic device 100 determines the collision accident information of the A section within the past preset time period from the characteristic information corresponding to the real-time target objects of the A section and the characteristic information corresponding to the target objects of the A section within the past preset time period. For example, it is determined that 10 collision accidents occurred on the A section in the past year, of which 5 were vehicle-to-vehicle collisions, 3 were vehicle-to-obstacle collisions, and 2 were vehicle-to-pedestrian collisions. The average relative speed at the time of vehicle collision is then the average of the relative speeds of the vehicles at the moment of collision over the 10 collision accidents, and the average relative acceleration at the time of vehicle collision is the average of the relative accelerations over the 10 collision accidents. In a collision between vehicle E and vehicle F, the relative speed at collision is the speed of vehicle E relative to vehicle F, and the relative acceleration is the acceleration of vehicle E relative to vehicle F. In a collision between vehicle X and obstacle Y, the relative speed at collision is the speed of vehicle X, and the relative acceleration is the acceleration of vehicle X. In a collision between vehicle M and pedestrian N, the relative speed at collision is the speed of vehicle M relative to pedestrian N, and the relative acceleration is the acceleration of vehicle M relative to pedestrian N.
S404: determine the collision risk indicator parameters of the A section based on its collision accident information within the past preset time period, where the collision risk indicator parameters of the A section include at least one of the following: the severity of the collision accidents, the exposure of the collision accidents, the controllability of the collision accidents.
For example, within the past preset time period, the collision accident information of the A section includes the average relative speed at the time of vehicle collision, the average relative acceleration at the time of vehicle collision, the number of vehicle-to-vehicle collision accidents, the number of vehicle-to-obstacle collision accidents, and the number of vehicle-to-pedestrian collision accidents. The electronic device 100 may determine the severity of the collision accidents of the A section from these quantities.
In some embodiments, based on the collision accident information of the A section within the past preset time period, the electronic device 100 may calculate the severity S of the collision accidents of the A section within that period by formula (1):

$S = \alpha_0 v_r \exp(\beta_0 a_r) \cdot (\alpha_1 N_{CC} + \alpha_2 N_{CO} + \alpha_3 N_{CP}) \cdot \exp(\beta_1 N_{death})$   (1)

In formula (1), $v_r$ represents the average relative speed at the time of vehicle collision on the A section within the preset time period, $a_r$ represents the average relative acceleration at the time of vehicle collision on the A section within the preset time period, $N_{CC}$ represents the number of vehicle-to-vehicle collision accidents on the A section within the preset time period, $N_{CO}$ represents the number of vehicle-to-obstacle collision accidents on the A section within the preset time period, $N_{CP}$ represents the number of vehicle-to-pedestrian collision accidents, $\alpha_0$, $\alpha_1$, $\alpha_2$, $\alpha_3$, $\beta_0$, $\beta_1$ represent weight parameters, $N_{death}$ represents the number of deaths in the collision accidents of the A section within the preset time period, $\exp$ denotes the exponential function, and $\cdot$ denotes multiplication.
It is understood that the higher the average relative speed at the time of collision of the vehicle on the a-section, the higher the severity of the collision accident on the a-section within the past preset period. The higher the average relative acceleration at the time of collision of the vehicle on the a-section, the higher the severity of the collision accident on the a-section within the past preset time period. The higher the cumulative number of the vehicle collision accidents of the a road segment in the past preset time period, the higher the severity of the collision accidents of the a road segment. The higher the number of deaths of the vehicle crash accident of the a section in the past preset time, the higher the severity of the crash accident of the a section.
For example, the collision accident information of the a section includes an average relative speed at the time of a vehicle collision, an average relative acceleration at the time of a vehicle collision, within a preset period of time in the past. The electronic device 100 may determine the degree of controllability of the collision accident of the a-section in the past preset period based on the average relative speed at the time of the collision of the vehicle of the a-section, the average relative acceleration at the time of the collision of the vehicle in the past preset period. Specifically, the controllability C of the collision accident of the a road section in the past preset time period can be calculated by the following formula (2):
In formula (2), a_r represents the average relative acceleration at the time of collision accidents on the A road section within the past preset time period, v_r represents the average relative speed at the time of collision accidents on the A road section within the past preset time period, β_2 represents a weight parameter, N_death represents the number of deaths from collision accidents, and exp represents an exponential function.
It is understood that the higher the average relative speed at the time of vehicle collision on the A road section within the past preset time period, the lower the controllability of collision accidents on the A road section within that period. The higher the average relative acceleration at the time of vehicle collision, the lower the controllability. And the higher the number of deaths from vehicle collision accidents on the A road section within the past preset time period, the lower the controllability.
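Since the body of formula (2) itself does not appear in the text above, the following Python sketch shows only one assumed functional form consistent with the monotonic relationships just described (controllability falling as relative speed, relative acceleration and deaths rise); it is not the application's formula (2).

```python
import math

def controllability(v_r, a_r, n_death, beta_2=0.5):
    # Assumed form: decreases in v_r, a_r and N_death, matching the
    # relationships stated above; beta_2 is a weight parameter.
    return math.exp(-beta_2 * n_death) / (1.0 + v_r * a_r)
```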
For example, the collision accident information of the A road section within the past preset time period includes the accumulated number of vehicle passes and the accumulated number of vehicle collision accidents. The electronic device 100 may determine the exposure of collision accidents on the A road section within the past preset time period from these two quantities. Specifically, the exposure E of collision accidents on the A road section within the past preset time period can be calculated by formula (3):

E = N_accident / N_total   (3)

In formula (3), N_accident represents the accumulated number of vehicle collision accidents on the A road section within the past preset time period, and N_total represents the accumulated number of vehicle passes on the A road section within the past preset time period.
It will be understood that the higher the share of vehicle collision accidents in the accumulated number of vehicle passes on the A road section within the past preset time period, the higher the exposure of collision accidents on the A road section.
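Under this reading of formula (3) as a simple ratio, a minimal sketch is:

```python
def exposure(n_accident: int, n_total: int) -> float:
    # E = N_accident / N_total per formula (3); the zero-traffic
    # guard is our addition.
    return n_accident / n_total if n_total > 0 else 0.0
```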
S405: and determining the road risk level of the A road section based on the collision risk index parameter of the A road section within the past preset time.
In some embodiments, the electronic device 100 determines a road risk index for the target road section based on the collision risk index parameters of the target road section within the past preset time period. The electronic device 100 may then determine the road risk level of the A road section according to the pre-divided value range of the road risk index corresponding to each road risk level.
For example, the road risk level of the A road section includes four levels: 1, 2, 3 and 4, where 1 is the lowest and 4 the highest. The pre-divided value ranges of the road risk index corresponding to road risk levels 1, 2, 3 and 4 are [a, b], [c, d], [e, f] and [g, h], respectively.
In some embodiments, the electronic device 100 determines the road risk index of the target road section based on the collision risk index parameters of the target road section within the past preset time period. Specifically, the road risk index R_dynamic of the target road section can be calculated by formula (4):

R_dynamic = S * E * C   (4)
In formula (4), S represents the severity of collision accidents on the A road section within the past preset time period, C represents the controllability of collision accidents on the A road section within the past preset time period, and E represents the exposure of collision accidents on the A road section within the past preset time period.
It can be understood that the higher the severity of collision accidents on the A road section, the higher the road risk level of the A road section; the higher the controllability of collision accidents on the A road section, the higher the road risk level; and the higher the exposure of collision accidents on the A road section, the higher the road risk level.
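Putting formula (4) together with the pre-divided value ranges, a minimal sketch of the level lookup might look as follows; the threshold values separating levels 1 to 4 are illustrative assumptions, not values from the application.

```python
import bisect

def road_risk_level(s: float, e: float, c: float,
                    thresholds=(0.1, 1.0, 10.0)) -> int:
    # R_dynamic = S * E * C per formula (4); map the index onto
    # four pre-divided ranges (threshold values assumed).
    r_dynamic = s * e * c
    return bisect.bisect_right(thresholds, r_dynamic) + 1  # levels 1..4
```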
S406: and sending the road risk grade of the section A to the vehicles within the preset range of the section A.
In some embodiments, a vehicle within the preset range of the A road section may be a vehicle on the A road section, or a vehicle whose distance from the A road section does not exceed a preset distance. For example, the preset distance may be 1 km. A vehicle within the preset range of the A road section may adjust its speed according to the received road risk level of the A road section. For example, in an automatic driving scenario, when the road risk level of the A road section received by the vehicle is 3, the vehicle may automatically reduce its speed before reaching or entering the A road section. When the received road risk level of the A road section is 1, the vehicle may keep its original speed and pass through the A road section without adjustment.
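A vehicle-side sketch of this reaction, with assumed speed-reduction factors per level:

```python
def adjusted_speed(current_kmh: float, risk_level: int) -> float:
    # Keep speed on level 1, slow down progressively on higher levels;
    # the per-level factors are illustrative assumptions.
    factor = {1: 1.0, 2: 0.9, 3: 0.7, 4: 0.5}.get(risk_level, 0.5)
    return current_kmh * factor
```

For example, adjusted_speed(60.0, 3) returns 42.0, i.e., the vehicle slows to 70% of its current speed before entering a level-3 road section.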
As can be seen from the above description, the electronic device 100 may acquire the video stream collected in real time by at least one camera on the A road section, analyze it to obtain the target object information of the A road section in real time, and determine the road risk level of the A road section based on that information. The electronic device 100 may further update the road risk level of the A road section in real time, so that vehicles within the preset range of the A road section can learn the road risk level in real time. An autonomous vehicle can thus decide in advance, before reaching or entering the A road section, whether to adjust its running speed on the target road section. This improves the accuracy of autonomous vehicle control, and further improves the safety and driving experience of the autonomous vehicle.
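An end-to-end sketch of this real-time loop is given below; the camera, detector, broadcaster and risk_fn objects are hypothetical interfaces introduced only for illustration and do not correspond to any named component of the application.

```python
import time

def monitor_segment(camera, detector, broadcaster, risk_fn, interval_s=1.0):
    # Acquire frames in real time, extract target-object information,
    # refresh the road risk level, and push it to nearby vehicles.
    while True:
        frame = camera.read()            # hypothetical camera interface
        objects = detector.detect(frame) # hypothetical detector interface
        level = risk_fn(objects)         # assumed risk-level helper
        broadcaster.send(level)          # hypothetical V2X broadcaster
        time.sleep(interval_s)
```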
In some embodiments, the road condition information of the target road section may be its sensitive traffic participants. The electronic device 100 determines the sensitive traffic participants of the target road section according to the target objects of the target road section and their feature information acquired in real time, and transmits them to vehicles within the preset range of the target road section, as described in detail below with reference to fig. 5.
Based on the road condition detection scenario of fig. 1 or fig. 2, fig. 5 shows a flowchart of another road condition detection method, executed by the electronic device 100, which specifically includes:
s501: the step S401 is referred to for specific content to acquire the video stream acquired by at least one camera on the target road section in real time, which is not described herein.
S502: based on the video stream of the target road section obtained in real time, the target object of the target road section and the feature information corresponding to the target object are determined, and specific content refers to step S402, which is not described herein.
S503: and screening sensitive traffic participants of the target road section from the target objects of the target road section according to the types of the target objects.
As described in step S402 of fig. 4, the target object of the target road section may be a vehicle traveling on the target road section, a pedestrian, or the like. Specifically, the target object may be a car, a truck, a van, a bicycle, a pedestrian, a stroller, and so on.
In some embodiments, a sensitive traffic participant is a target object that may affect the traveling speed of vehicles on the road section. In urban road traffic, sensitive traffic participants include, but are not limited to: pedestrians, bicycles, strollers, wheelchair users, and the like. In highway traffic, sensitive traffic participants include, but are not limited to, large trucks, vans, and the like.
In some embodiments, after screening the sensitive traffic participants of the target road section from the target objects according to the types of the target objects, the electronic device 100 obtains the feature information corresponding to each sensitive traffic participant. The feature information of a sensitive traffic participant on the target road section includes the participant's position, running speed and running acceleration on the target road section.
For example, at the current moment, a sensitive traffic participant p_n on the A road section can be expressed as (x_n, y_n, v_n, a_n), where (x_n, y_n) represents the position coordinates of p_n on the A road section, v_n represents the speed of p_n at the current moment, and a_n represents the acceleration of p_n at the current moment. Specifically, as shown in fig. 1, the A road section is an urban road, and its sensitive traffic participants may include pedestrians 300-4, 300-5 and 300-6. The state of pedestrian 300-4 can be expressed as (x_1, y_1, v_1, a_1), the state of pedestrian 300-5 as (x_2, y_2, v_2, a_2), and the state of pedestrian 300-6 as (x_3, y_3, v_3, a_3).
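The participant state and the type-based screening of step S503 can be sketched as follows; the type names and the urban/highway split are assumptions based on the examples above.

```python
from dataclasses import dataclass

URBAN_SENSITIVE = {"pedestrian", "bicycle", "stroller", "wheelchair"}
HIGHWAY_SENSITIVE = {"large_truck", "van"}

@dataclass
class Participant:
    kind: str   # detected object type
    x: float    # position coordinates on the road section
    y: float
    v: float    # speed at the current moment
    a: float    # acceleration at the current moment

def screen_sensitive(objects, road_type="urban"):
    # Keep only target objects whose type counts as a sensitive
    # traffic participant for the given road type (step S503).
    kinds = URBAN_SENSITIVE if road_type == "urban" else HIGHWAY_SENSITIVE
    return [o for o in objects if o.kind in kinds]
```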
In step S502, the electronic device 100 determines the position state of each target object of the target road section based on the video stream acquired in real time. If the position state of a target object is a position coordinate generated in the camera's own coordinate system, then the position coordinates of the sensitive traffic participants screened out in step S503 are likewise coordinates in the camera's own coordinate system. Accordingly, the electronic device 100 needs to convert the position coordinates of each sensitive traffic participant from the camera's own coordinate system into position coordinates in the world coordinate system before transmitting the sensitive traffic participant information to vehicles on the target road section.
In some embodiments, the electronic device 100 may convert the position coordinates of a sensitive traffic participant generated in the camera's own coordinate system into position coordinates in the world coordinate system according to the transformation matrix between the camera's own coordinate system and the world coordinate system, the reference coordinates of the camera's own coordinate system, and the position coordinates generated in the camera's own coordinate system. Specifically, the position coordinates L of the sensitive traffic participant in the world coordinate system can be expressed by formula (5):
L = M * P * N   (5)
In formula (5), M represents the transformation matrix from the camera's own coordinate system to the world coordinate system, N represents the reference coordinates of the camera's own coordinate system, and P represents the position coordinates of the sensitive traffic participant generated in the camera's own coordinate system.
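The exact composition M*P*N in formula (5) is not fully spelled out above; a common realization of such a camera-to-world conversion is a rotation/scaling by M followed by a translation to the camera's reference position N, sketched here under that assumption:

```python
import numpy as np

def camera_to_world(p_cam, m, n_ref):
    # One plausible reading of formula (5): transform the camera-frame
    # coordinates with matrix M and offset by reference coordinates N.
    p_cam = np.asarray(p_cam, dtype=float)
    return np.asarray(m, dtype=float) @ p_cam + np.asarray(n_ref, dtype=float)
```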
S504: and transmitting the characteristic information corresponding to the sensitive traffic participants of the target road section to the vehicles within the preset range of the target road section.
In some embodiments, a vehicle that has not yet entered the target road section may adjust its speed based on the received positions and speeds of the sensitive traffic participants on that road section. For example, as shown in fig. 1, in an automatic driving scenario, when a vehicle receives the information that the A road section contains three sensitive traffic participants (i.e., pedestrians 300-4, 300-5 and 300-6), the autonomous vehicle may automatically reduce its speed before or upon entering the A road section.
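A sketch of this vehicle-side check, reusing the Participant state above; the trigger radius is an assumption:

```python
def should_slow_down(ego_xy, participants, radius_m=50.0):
    # Slow down when any received sensitive traffic participant lies
    # within radius_m of the vehicle's planned position (assumed rule).
    ex, ey = ego_xy
    return any((p.x - ex) ** 2 + (p.y - ey) ** 2 <= radius_m ** 2
               for p in participants)
```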
It will be appreciated that, according to the road condition detection process described in fig. 5, the electronic device 100 may update the sensitive traffic participants of the target road section in real time and transmit them in real time to vehicles within the preset range of the target road section. Vehicles within the preset range can therefore learn the sensitive traffic participants of the target road section in real time and obtain the road condition information of the target road section accordingly, so that an autonomous vehicle can decide in advance, before entering the target road section, whether to adjust its running speed there. This improves the accuracy of autonomous vehicle control, and further improves the safety and driving experience of the autonomous vehicle.
Fig. 6 is a block diagram schematically illustrating the structure of an example electronic device 100 according to various embodiments of the application. In one embodiment, the electronic device 100 may include one or more processors 1404, system control logic 1408 coupled to at least one of the processors 1404, system memory 1412 coupled to the system control logic 1408, non-volatile memory (NVM) 1416 coupled to the system control logic 1408, and a network interface 1420 coupled to the system control logic 1408.
In some embodiments, the processor 1404 may include one or more single-core or multi-core processors. In some embodiments, the processor 1404 may include any combination of general-purpose processors and special-purpose processors (e.g., graphics processors, application processors, baseband processors, etc.). In embodiments in which the electronic device 100 employs an eNB (enhanced Node B) or RAN (Radio Access Network) controller, the processor 1404 may be configured to perform the various embodiments described above, such as one or more of the embodiments shown in fig. 3, 4 or 5. For example, the processor 1404 may be configured to perform the road condition detection method described above: to acquire the video stream collected by at least one camera on the A road section, determine the target object information, determine the road condition information of the target road section based on the target object information of the target road section, and so on.
In some embodiments, the system control logic 1408 may include any suitable interface controller to provide any suitable interface to at least one of the processors 1404 and/or any suitable device or component in communication with the system control logic 1408.
In some embodiments, the system control logic 1408 may include one or more memory controllers to provide an interface to the system memory 1412. The system memory 1412 may be used for loading and storing data and/or instructions. In some embodiments, the system memory 1412 may include any suitable volatile memory, such as a suitable dynamic random access memory (DRAM).
The NVM/storage 1416 may include one or more tangible, non-transitory computer-readable media for storing data and/or instructions. In some embodiments, the NVM/storage 1416 may include any suitable non-volatile memory, such as flash memory, and/or any suitable non-volatile storage device, such as at least one of an HDD (Hard Disk Drive), a CD (Compact Disc) drive, and a DVD (Digital Versatile Disc) drive.
The NVM/storage 1416 may include part of the storage resources of the device on which the electronic device 100 is installed, or it may be accessible by the device without necessarily being part of it. For example, the NVM/storage 1416 may be accessed over a network via the network interface 1420.
In particular, the system memory 1412 and the NVM/storage 1416 may include a temporary copy and a permanent copy of instructions 1424. The instructions 1424 may include instructions that, when executed by at least one of the processors 1404, cause the electronic device 100 to implement the methods shown in fig. 3 to fig. 5. In some embodiments, the instructions 1424, hardware, firmware, and/or software components thereof may additionally or alternatively be disposed in the system control logic 1408, the network interface 1420, and/or the processor 1404.
The network interface 1420 may include a transceiver to provide a radio interface for the electronic device 100 to communicate over one or more networks with any other suitable device (e.g., a front-end module, an antenna, etc.). In some embodiments, the network interface 1420 may be integrated with other components of the electronic device 100. For example, the network interface 1420 may be integrated with at least one of the processor 1404, the system memory 1412, the NVM/storage 1416, and a firmware device (not shown) having instructions which, when executed by at least one of the processors 1404, cause the electronic device 100 to implement the methods shown in fig. 3 to fig. 5.
The network interface 1420 may further include any suitable hardware and/or firmware to provide a multiple-input multiple-output radio interface or a wired interface. For example, the network interface 1420 may be a network adapter, a wireless network adapter, a telephone modem, and/or a wireless modem.
In one embodiment, at least one of the processors 1404 may be packaged together with logic for one or more controllers of the system control logic 1408 to form a System In Package (SiP). In one embodiment, at least one of the processors 1404 may be integrated on the same die with logic for one or more controllers of the system control logic 1408 to form a system on chip (SoC).
The electronic device 100 may further include input/output (I/O) devices 1432. The I/O devices 1432 may include a user interface designed to enable a user to interact with the electronic device 100, and a peripheral component interface designed to enable peripheral components to interact with the electronic device 100 as well. In some embodiments, the electronic device 100 further comprises sensors for determining at least one of environmental conditions and location information related to the electronic device 100.
In some embodiments, the user interface may include, but is not limited to, a display (e.g., a liquid crystal display, a touch screen display, etc.), a speaker, a microphone, one or more cameras (e.g., still-image cameras and/or video cameras), a flash (e.g., a light-emitting-diode flash), and a keyboard.
In some embodiments, the peripheral component interface may include, but is not limited to, a non-volatile memory port, an audio jack, and a power interface.
In some embodiments, the sensors may include, but are not limited to, gyroscopic sensors, accelerometers, proximity sensors, ambient light sensors, and positioning units. The positioning unit may also be part of the network interface 1420 or interact with the network interface 1420 to communicate with components of a positioning network, such as Global Positioning System (GPS) satellites.
It is to be understood that the structure illustrated in fig. 6 does not constitute a specific limitation on the electronic device 100. In other embodiments of the application, electronic device 100 may include more or fewer components than shown, or certain components may be combined, or certain components may be split, or different arrangements of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Embodiments of the disclosed mechanisms may be implemented in hardware, software, firmware, or a combination of these implementations. Embodiments of the application may be implemented as a computer program or program code that is executed on a programmable system comprising at least one processor, a storage system (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device.
Program code may be applied to input instructions to perform the functions described herein and generate output information. The output information may be applied to one or more output devices in a known manner. For the purposes of this application, a processing system includes any system having a processor such as, for example, a Digital Signal Processor (DSP), a microcontroller, an Application Specific Integrated Circuit (ASIC), or a microprocessor.
The program code may be implemented in a high level procedural or object oriented programming language to communicate with a processing system. Program code may also be implemented in assembly or machine language, if desired. Indeed, the mechanisms described in the present application are not limited in scope by any particular programming language. In either case, the language may be a compiled or interpreted language.
In some cases, the disclosed embodiments may be implemented in hardware, firmware, software, or any combination thereof. The disclosed embodiments may also be implemented as instructions carried by or stored on one or more transitory or non-transitory machine-readable (e.g., computer-readable) storage media, which may be read and executed by one or more processors. For example, the instructions may be distributed over a network or through other computer-readable media. Thus, a machine-readable medium may include any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computer), including but not limited to floppy disks, optical disks, compact disc read-only memories (CD-ROMs), magneto-optical disks, read-only memories (ROMs), random access memories (RAMs), erasable programmable read-only memories (EPROMs), electrically erasable programmable read-only memories (EEPROMs), magnetic or optical cards, flash memory, or a tangible machine-readable memory used to transmit information over the Internet via electrical, optical, acoustical or other forms of propagated signals (e.g., carrier waves, infrared signals, digital signals, etc.). Thus, a machine-readable medium includes any type of machine-readable medium suitable for storing or transmitting electronic instructions or information in a form readable by a machine (e.g., a computer).
In the drawings, some structural or methodological features may be shown in a particular arrangement and/or order. However, it should be understood that such a particular arrangement and/or ordering may not be required. Rather, in some embodiments, these features may be arranged in a different manner and/or order than shown in the illustrative figures. Additionally, the inclusion of structural or methodological features in a particular figure is not meant to imply that such features are required in all embodiments, and in some embodiments, may not be included or may be combined with other features.
It should be noted that, in the embodiments of the present application, each unit/module mentioned in each device is a logical unit/module. Physically, a logical unit/module may be one physical unit/module, a part of one physical unit/module, or a combination of multiple physical units/modules; the physical implementation of the logical unit/module itself is not what matters most, and the combination of functions implemented by these logical units/modules is the key to solving the technical problem posed by the application. Furthermore, in order to highlight the innovative part of the application, the above device embodiments do not introduce units/modules that are less closely related to solving that technical problem, which does not mean that no other units/modules exist in those embodiments.
It should be noted that in the examples and descriptions of this patent, relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
While the application has been shown and described with reference to certain preferred embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the application.
Claims (16)
1. A road condition detection method, used for an electronic device, characterized in that the method comprises:
acquiring a video stream collected by a camera on a target road section;
determining current characteristic information of a target object on the target road section according to the video stream, wherein the characteristic information comprises motion state information and/or position state information; and
determining current road condition information of the target road section based on the current characteristic information of the target object and/or historical characteristic information of the target object.
2. The method of claim 1, further comprising:
sending the current road condition information of the target road section to vehicles within a preset range of the target road section.
3. The method of claim 1, wherein the target object comprises at least one of: a vehicle, a person, an obstacle.
4. The method of claim 3, wherein the current road condition information of the target road section includes a road risk level; and
the determining the current road condition information of the target road section based on the current characteristic information of the target object and/or the historical characteristic information of the target object comprises:
determining current traffic and collision accident information, and historical traffic and collision accident information, of the target road section based on the current characteristic information and the historical characteristic information of the target object; and
determining the current road risk level of the target road section based on the current traffic and collision accident information and the historical traffic and collision accident information of the target road section.
5. The method of claim 4, wherein the collision accident information comprises at least one of: the number of collision accidents, the average relative acceleration of the collision vehicles, the average relative speed of the collision vehicles.
6. The method of claim 5, wherein the determining the current road risk level of the target road section based on the current traffic and collision accident information and the historical traffic and collision accident information of the target road section comprises:
determining collision risk index parameters of the target road section based on the current traffic and collision accident information and the historical traffic and collision accident information of the target road section, wherein the collision risk index parameters comprise at least one of the following: severity of collision accidents, exposure of collision accidents, controllability of collision accidents; and
determining the current road risk level of the target road section based on the collision risk index parameters of the target road section.
7. The method of claim 6, wherein the current road risk level R_dynamic is calculated by the following formula:
R_dynamic = S * E * C
where S represents the severity of collision accidents, C represents the controllability of collision accidents, and E represents the exposure of collision accidents.
8. The method of claim 6 or claim 7, wherein the determining collision risk index parameters of the target road section based on the current traffic and collision accident information and the historical traffic and collision accident information comprises:
determining the severity of collision accidents of the target road section based on the number of collision accidents, the average relative acceleration of the collision vehicles, and the average relative speed of the collision vehicles of the current and historical target road section.
9. The method of claim 8, wherein:
the number of collision accidents includes: the number of vehicle-to-vehicle collisions, the number of vehicle-to-person collisions, and the number of vehicle-to-obstacle collisions; and
the severity S of collision accidents is calculated by the following formula:
S = α_0 * v_r * exp(β_0 * a_r) * (α_1*N_CC + α_2*N_CO + α_3*N_CP) * exp(β_1*N_death)
where v_r represents the average relative speed of the collision vehicles, a_r represents the average relative acceleration of the collision vehicles, N_CC represents the number of vehicle-to-vehicle collisions, N_CO represents the number of vehicle-to-obstacle collisions, N_CP represents the number of vehicle-to-person collisions, α_0, α_1, α_2, α_3, β_0, β_1 respectively represent weight parameters, N_death represents the number of deaths of collision accidents of the current and historical target road section, and exp represents an exponential function.
10. The method of claim 6 or claim 7, wherein the determining collision risk index parameters of the target road section based on the current traffic and collision accident information and the historical traffic and collision accident information comprises:
determining the controllability of collision accidents of the target road section based on the average relative acceleration of the collision vehicles and the average relative speed of the collision vehicles of the current and historical target road section.
11. The method of claim 10, wherein the controllability C of collision accidents of the target road section is calculated by the following formula, where a_r represents the average relative acceleration of the collision vehicles, v_r represents the average relative speed of the collision vehicles, β_2 represents a weight parameter, N_death represents the number of deaths of collision accidents of the current and historical target road section, and exp represents an exponential function.
12. The method of claim 6 or claim 7, wherein the determining collision risk index parameters of the target road section based on the current traffic and collision accident information and the historical traffic and collision accident information comprises:
determining the exposure of collision accidents of the current target road section based on the vehicle flow and the number of collision accidents of the current and historical target road section.
13. The method of claim 12, wherein the exposure E of collision accidents is calculated by the following formula:
E = N_accident / N_total
where N_accident represents the number of collision accidents and N_total represents the vehicle flow.
14. The method of claim 3, wherein the current road condition information of the target road section includes characteristic information of sensitive traffic participants; and
the determining the current road condition information of the target road section based on the current characteristic information of the target object and/or the historical characteristic information of the target object comprises:
screening sensitive traffic participants from the target objects based on the current characteristic information of the target objects, and determining the current characteristic information of the sensitive traffic participants, wherein the sensitive traffic participants comprise at least one of the following: pedestrians, heavy trucks, bicycles, vans.
15. A readable medium having stored thereon instructions that, when executed on an electronic device, cause the electronic device to perform the road condition detection method according to any one of claims 1 to 14.
16. An electronic device, comprising:
a memory for storing instructions to be executed by one or more processors of the electronic device; and
a processor, which is one of the processors of the electronic device, for executing the road condition detection method according to any one of claims 1 to 14.
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2022/076064 WO2023151034A1 (en) | 2022-02-11 | 2022-02-11 | Traffic condition detection method, readable medium and electronic device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116897380A (en) | 2023-10-17 |
Family
ID=87563370
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202280003400.0A Pending CN116897380A (en) | 2022-02-11 | 2022-02-11 | Road condition detection method, readable medium and electronic equipment |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN116897380A (en) |
WO (1) | WO2023151034A1 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN118323143A (en) * | 2024-06-17 | 2024-07-12 | 吉利汽车研究院(宁波)有限公司 | Vehicle over-bending control method, vehicle, electronic equipment and storage medium |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117831278B (en) * | 2023-12-20 | 2024-08-06 | 特微乐行(广州)技术有限公司 | Expressway intelligent monitoring system based on cloud platform |
CN118379697B (en) * | 2024-06-27 | 2024-09-17 | 江西省公路工程检测中心 | Road state risk prediction method and system based on multi-source data analysis |
Family Cites Families (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2014174032A (en) * | 2013-03-11 | 2014-09-22 | Osaka Gas Co Ltd | Safe route search system |
CN103971523B (en) * | 2014-05-21 | 2016-08-17 | 南通大学 | A kind of mountain road traffic safety dynamic early-warning system |
US10024684B2 (en) * | 2014-12-02 | 2018-07-17 | Operr Technologies, Inc. | Method and system for avoidance of accidents |
JP2018155577A (en) * | 2017-03-17 | 2018-10-04 | パナソニックIpマネジメント株式会社 | Self-driving car and control program |
CN109389824B (en) * | 2017-08-04 | 2021-07-09 | 华为技术有限公司 | Driving risk assessment method and device |
CN109118773B (en) * | 2018-09-30 | 2019-10-01 | 中交第一公路勘察设计研究院有限公司 | A kind of traffic accidents methods of risk assessment |
CN111275960A (en) * | 2018-12-05 | 2020-06-12 | 杭州海康威视系统技术有限公司 | Traffic road condition analysis method, system and camera |
CN110020797B (en) * | 2019-03-27 | 2023-06-09 | 清华大学苏州汽车研究院(吴江) | Evaluation method of automatic driving test scene based on perception defect |
CN209822021U (en) * | 2019-06-20 | 2019-12-20 | 张志豪 | Vehicle-mounted real-time road traffic accident situation sensing device |
CN112037513B (en) * | 2020-09-01 | 2023-04-18 | 清华大学 | Real-time traffic safety index dynamic comprehensive evaluation system and construction method thereof |
CN112767695A (en) * | 2021-01-07 | 2021-05-07 | 哈尔滨工业大学 | Real-time prediction method and system for traffic accident risk at signalized intersection |
CN113963539B (en) * | 2021-10-19 | 2022-06-10 | 交通运输部公路科学研究所 | Highway traffic accident identification method, module and system |
Also Published As
Publication number | Publication date |
---|---|
WO2023151034A1 (en) | 2023-08-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11538114B1 (en) | Providing insurance discounts based upon usage of telematics data-based risk mitigation and prevention functionality | |
CN116897380A (en) | Road condition detection method, readable medium and electronic equipment | |
US10996073B2 (en) | Navigation system with abrupt maneuver monitoring mechanism and method of operation thereof | |
US11849375B2 (en) | Systems and methods for automatic breakdown detection and roadside assistance | |
US20210287530A1 (en) | Applying machine learning to telematics data to predict accident outcomes | |
US20230048622A1 (en) | Providing insurance discounts based upon usage of telematics data-based risk mitigation and prevention functionality | |
Ghosh et al. | Dynamic V2V Network: Advancing V2V Safety with Distance, Speed, Emergency Priority, SOS, and Accident Preemption | |
CN117496711A (en) | 5G-based man-vehicle road integrated intelligent traffic system and method | |
CN117334084A (en) | Blind area collision early warning method, device, equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |