KR101593676B1 - Method and device for perceiving driving situation - Google Patents

Method and device for perceiving driving situation

Info

Publication number
KR101593676B1
Authority
KR
South Korea
Prior art keywords
information
accident
sound
comparing
area
Prior art date
Application number
KR1020140102528A
Other languages
Korean (ko)
Inventor
고한석
이성재
Original Assignee
고려대학교 산학협력단
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 고려대학교 산학협력단
Priority to KR1020140102528A
Application granted
Publication of KR101593676B1

Links

Images

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60R: VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R21/00: Arrangements or fittings on vehicles for protecting or preventing injuries to occupants or pedestrians in case of accidents or other traffic risks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00: General purpose image data processing
    • G: PHYSICS
    • G08: SIGNALLING
    • G08G: TRAFFIC CONTROL SYSTEMS
    • G08G1/00: Traffic control systems for road vehicles
    • G08G1/16: Anti-collision systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Mechanical Engineering (AREA)
  • Theoretical Computer Science (AREA)
  • Traffic Control Systems (AREA)

Abstract

A driving situation recognition method is disclosed. The method according to an embodiment of the present invention separates a moving picture captured of the driving situation of a first area into video information and sound information, compares at least one of the video information and the sound information with accident reference information generated in advance with respect to traffic accidents, and then determines whether a traffic accident has occurred based on the comparison result.

Description

Technical Field [0001] The present invention relates to a method and apparatus for perceiving a driving situation.

An embodiment of the present invention relates to driving situation recognition, and more particularly, to a driving situation recognition method and apparatus capable of recognizing a driving situation accurately and quickly.

In general, information on driving situations, including traffic accidents, is acquired either by a person (for example, through a report) or by CCTV or traffic-information-collection cameras.

The Urban Traffic Information System (UTIS), operated by the National Police Agency and some municipalities, consists of On-Board Equipment (OBE) installed on probe vehicles such as taxis and patrol cars and Road-Side Equipment (RSE) installed at roadside base stations, and provides users with information such as traffic conditions, unexpected-situation information, and weather information.

In UTIS, however, the OBE does not rely on image or sound/voice; as a rule, it grasps the driving situation based only on location and speed.

In addition, existing systems may fail to take appropriate measures when witnesses are unfamiliar with the area, when people cannot report, or when an accident occurs in an area without traffic-information cameras.

Therefore, it is necessary to introduce a new driving situation recognition system which can overcome these problems.

A relevant prior art is disclosed in Korean Patent Laid-Open Publication No. 10-2006-0025930, entitled "Real-time Behavior Analysis & Context Awareness Smart Image Security System".

An object of an embodiment of the present invention is to provide a driving situation recognition method and apparatus that can recognize a driving situation accurately and quickly by integrally analyzing the video and sound information of a moving picture captured of the driving situation.

According to an aspect of the present invention, there is provided a driving situation recognition method comprising: dividing a moving picture captured of the driving situation of a first area into image information and sound information; comparing at least one of the image information and the sound information with accident reference information generated in advance by analyzing moving pictures of traffic accidents in a second area; and determining whether a traffic accident has occurred based on the comparison result.

Preferably, the step of comparing the image information with the accident reference information includes: extracting text information including at least one of a symbol, a letter, and a number from the image information; extracting object information from the area of the image information other than the area from which the text information was extracted; and comparing at least one of the text information and the object information with the accident reference information.

Preferably, when the accident reference information further includes keyword information related to traffic accidents, the step of comparing the text information with the accident reference information includes comparing the text information with the keyword information and selecting the relevant text as similar keyword information.

Preferably, the step of comparing the object information with the accident reference information includes: classifying the objects by type; and comparing each type of classified object with the accident reference information.

Preferably, the step of comparing the sound information with the accident reference information includes: separating the sound information into voice information and acoustic information; and comparing at least one of the voice information and the acoustic information with the accident reference information.

Preferably, when the accident reference information further includes keyword information related to traffic accidents, the step of comparing the voice information with the accident reference information includes comparing the voice information with the keyword information and selecting the relevant voice information as similar keyword information.

Preferably, the driving situation recognition method according to an embodiment of the present invention further includes analyzing the weather from the image information, and the step of determining whether a traffic accident has occurred may perform the determination further based on the weather analysis result.

Preferably, the moving picture of the driving situation of the first area may be captured by a fixed photographing apparatus installed in the first area or by a mobile photographing apparatus that captures images while moving through the first area.

Preferably, the driving situation recognition method according to an embodiment of the present invention further includes acquiring position information of the mobile photographing apparatus, and the step of determining whether a traffic accident has occurred may perform the determination further based on the position information.

According to an aspect of the present invention, there is provided a driving situation recognition apparatus comprising: a moving picture separating unit for dividing a moving picture captured of the driving situation of a first area into video information and sound information; a comparing unit including an image information analyzing unit for comparing the video information with accident reference information generated in advance by analyzing moving pictures of traffic accidents in a second area, and a sound information analyzing unit for comparing the sound information with the accident reference information; and a determination unit for determining whether a traffic accident has occurred based on the comparison result.

Preferably, the driving situation recognition apparatus according to an embodiment of the present invention includes: a text detection unit for extracting the text information from the image information; an object detection unit for extracting the object information from the area of the image information other than the area from which the text information was extracted; and an image comparison unit for comparing at least one of the text information and the object information with the accident reference information.

Preferably, when the accident reference information further includes keyword information related to traffic accidents, the image comparison unit may compare the text information with the keyword information and select the relevant text as similar keyword information.

Preferably, the driving situation recognition apparatus according to an embodiment of the present invention may classify the objects by type and compare each type of classified object with the accident reference information.

Preferably, the driving situation recognition apparatus according to an embodiment of the present invention further includes: a sound separating unit for separating the sound information into voice information and acoustic information; and a sound comparing unit for comparing at least one of the voice information and the acoustic information with the accident reference information.

Preferably, when the accident reference information further includes keyword information related to traffic accidents, the driving situation recognition apparatus according to an embodiment of the present invention may compare the voice information with the keyword information and select the relevant voice information as similar keyword information.

Preferably, the driving situation recognition apparatus according to an embodiment of the present invention further includes a weather analysis unit for analyzing the weather from the image information, and the determination unit may perform the determination further based on the weather analysis result.

Preferably, in the driving situation recognition apparatus according to an embodiment of the present invention, the moving picture of the driving situation of the first area may be captured by a fixed photographing apparatus installed in the first area or by a mobile photographing apparatus.

Preferably, the driving situation recognition apparatus according to an embodiment of the present invention further includes a position information acquiring unit that acquires position information of the mobile photographing apparatus, and the determination unit may perform the determination further based on the position information.

An embodiment of the present invention can promptly and accurately determine whether or not a traffic accident has occurred by determining the possibility of a traffic accident based on the image, voice, and sound of the traveling situation.

It is also possible to send a message to the person in charge of traffic control so that follow-up measures can be taken quickly.

Also, in case of an emergency, the driving situation recognition device can directly control the vehicle or warn the driver.

FIG. 1 is a flowchart illustrating a driving situation recognition method according to an embodiment of the present invention.
FIG. 2 is a flowchart illustrating an image information analysis method according to an embodiment of the present invention.
FIG. 3 is a flowchart illustrating a driving situation recognition method according to another embodiment of the present invention.
FIG. 4 is a view for explaining the operation of the determination unit according to an embodiment of the present invention.
FIG. 5 is a view for explaining a driving situation recognition apparatus according to an embodiment of the present invention.
FIG. 6 is a view for explaining a comparing unit according to an embodiment of the present invention.
FIG. 7 is a view for explaining an image information analyzing unit according to an embodiment of the present invention.
FIG. 8 is a diagram illustrating the UI of a driving situation providing application according to an embodiment of the present invention.

While the invention is susceptible to various modifications and alternative forms, specific embodiments thereof are shown by way of example in the drawings and will herein be described in detail. It is to be understood, however, that the invention is not to be limited to the specific embodiments, but includes all changes, equivalents, and alternatives falling within the spirit and scope of the invention. Like reference numerals are used for like elements in describing each drawing.

The terms first, second, A, B, etc. may be used to describe various elements, but the elements should not be limited by these terms. The terms are used only to distinguish one component from another. For example, without departing from the scope of the present invention, a first component may be referred to as a second component, and similarly, a second component may be referred to as a first component. The term "and/or" includes any combination of a plurality of related listed items or any one of the plurality of related listed items.

It is to be understood that when an element is referred to as being "connected" or "coupled" to another element, it may be directly connected or coupled to the other element, or intervening elements may be present. In contrast, when an element is referred to as being "directly connected" or "directly coupled" to another element, it should be understood that no intervening elements are present.

The terminology used in this application is used only to describe specific embodiments and is not intended to limit the invention. Singular expressions include plural expressions unless the context clearly dictates otherwise. In the present application, terms such as "comprises" or "having" are intended to specify the presence of the features, numbers, steps, operations, elements, components, or combinations thereof described in the specification, but do not preclude the presence or addition of one or more other features, numbers, steps, operations, elements, components, or combinations thereof.

Unless defined otherwise, all terms used herein, including technical and scientific terms, have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. Terms such as those defined in commonly used dictionaries are to be interpreted as having a meaning consistent with their contextual meaning in the related art, and are not to be interpreted in an idealized or overly formal sense unless expressly so defined in the present application.

Hereinafter, preferred embodiments according to the present invention will be described in detail with reference to the accompanying drawings.

FIG. 1 is a flowchart illustrating a driving situation recognition method according to an embodiment of the present invention.

In step 110, the moving picture captured of the driving situation of the first area is separated into video information and sound information.

In this case, the moving picture may be captured by a fixed image input device, such as a CCTV or a traffic-information-collection camera, or by a mobile image input device, such as a car black box or a smartphone.

Here, the first area is the area captured by the photographing apparatus, corresponding to the front, rear, or side area the camera faces. In other words, the first area is the area containing the information to be analyzed, from which the video to be analyzed is acquired.

Image information refers to the information that can be visually recognized in the moving picture of the driving situation. It includes text information, i.e., the symbols, characters, and numbers appearing in the video, and object information, i.e., the objects occupying the space other than the text information.

Sound information refers to the information that can be audibly perceived in the moving picture of the driving situation. It includes voice information, which is human speech, and acoustic information, which is every other sound.

In step 120, at least one of the video information and the sound information is analyzed and compared with accident reference information generated in advance with respect to traffic accidents in the second area.

The second area is an area where a traffic accident has occurred or is likely to occur; in other words, it is the area from which the analysis standard for the first-area video is obtained. Information related to the second area, needed to calculate the occurrence and likelihood of a traffic accident, is acquired and stored in advance; this is referred to as 'accident reference information'. The accident reference information stores text, object, voice, and acoustic information related to the occurrence of traffic accidents.
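The accident reference information described above can be organized, for illustration, as a simple lookup structure. The field names and example values below are hypothetical sketches, not taken from the patent itself:

```python
# Hypothetical sketch of the accident reference information: registered
# text/voice keywords plus acoustic and object classes, each class carrying
# an evaluation value (relevance weight) for accident judgment.

ACCIDENT_REFERENCE = {
    "keywords": {"watch out", "look ahead", "accident-prone zone"},
    "acoustic_weights": {          # higher = stronger evidence of an accident
        "car_crash_sound": 0.9,
        "sudden_braking_sound": 0.6,
        "radio_sound": 0.0,
    },
    "object_weights": {
        "car": 0.7, "person": 0.7, "traffic_light": 0.5, "cloud": 0.0,
    },
}

def is_accident_keyword(text: str) -> bool:
    """Return True if the text matches a registered accident keyword."""
    return text.lower() in ACCIDENT_REFERENCE["keywords"]

print(is_accident_keyword("Watch out"))       # True
print(is_accident_keyword("make a sandwich")) # False
```

Any real system would populate such a structure from previously analyzed second-area accident footage.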

In this case, the first area and the second area may be the same area or different areas.

In step 130, it is determined based on the comparison result whether a traffic accident has occurred or not.

At this time, the video information can be compared with the accident reference information to determine whether a traffic accident has occurred. The accident reference information stores information related to traffic accidents, and whether a traffic accident occurred can be determined by comparing the analysis data of the first-area video with the accident reference information.

The process of comparing the image information with the accident reference information will be described later with reference to FIG. 2.

FIG. 2 is a flowchart illustrating an image information analysis method according to an embodiment of the present invention.

In step 210, text information including at least one of a symbol, a letter, and a number is extracted from the image information.

The symbols may include lane markings and direction indications drawn on the road, the characters may include place names and road information on signs, and the numbers may include speed-limit indications on the road or on signs. At this time, the text information may be extracted by setting the area of the image information that contains text and then applying OCR (Optical Character Recognition) to that area.
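The region-then-OCR flow of this step can be sketched as follows. A real implementation would call an OCR engine (for example, Tesseract); here `detect_regions` and `ocr` are injected as hypothetical stand-ins so the flow itself can be shown:

```python
from typing import Callable

def extract_text_regions(frame, detect_regions: Callable, ocr: Callable) -> list:
    """Set the regions likely to contain text, then run OCR only on those
    regions, as described in step 210. Both callables are assumptions: any
    region detector and any OCR engine can be plugged in."""
    texts = []
    for region in detect_regions(frame):   # e.g. sign or road-marking boxes
        text = ocr(region)
        if text:                           # keep only non-empty recognitions
            texts.append(text)
    return texts

# Toy usage with stub functions (the frame is just an opaque object here):
frame = "dummy-frame"
regions = lambda f: ["sign-roi", "road-roi"]
ocr_stub = {"sign-roi": "SPEED LIMIT 60", "road-roi": ""}.get
print(extract_text_regions(frame, regions, ocr_stub))  # ['SPEED LIMIT 60']
```

Restricting OCR to detected regions, rather than the whole frame, is what lets step 220 treat the remaining area as the object-detection area.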

In step 220, object information is extracted from the area of the image information other than the area from which the text information was extracted.

An object is an item appearing in the image, such as an automobile, a road, a person, a cloud, a traffic light, or a bridge.

In step 230, at least one of the text information and object information is compared with accident reference information.

As described above, according to an embodiment of the present invention, by comparing the text information or the object information with the accident reference information, it is possible to calculate whether a traffic accident has occurred and its likelihood based on the image information of the first-area moving picture.

FIG. 3 is a flowchart illustrating a driving situation recognition method according to another embodiment of the present invention.

In step 310, the moving picture captured of the driving situation of the first area is separated into video information and sound information.

In step 320, the weather is analyzed from the image information.

For example, through weather analysis of the image information, it is possible to recognize currently falling rain or snow, the rate of rainfall, or road conditions caused by rain or snow.

In step 330, the video information is compared with accident reference information generated in advance with respect to traffic accidents in the second area.

At this time, the second area may be the same as or different from the first area.

In step 340, based on the weather analysis result and the comparison result, it is determined whether a traffic accident has occurred or not.

For example, suppose that the comparison between the image information and the accident reference information judges the occurrence and likelihood of a traffic accident to be ordinary ('middle'). If the weather is then found to be rainy, it can be judged that a traffic accident has occurred or that its likelihood is high.
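The adjustment in this example can be read as promoting the image-comparison likelihood by one level when the weather analysis reports adverse conditions. The three-level scale and the promotion rule below are a sketch of this example, not a formula given in the patent:

```python
LEVELS = ["low", "middle", "high"]

def adjust_for_weather(likelihood: str, weather: str) -> str:
    """Raise the accident likelihood by one level in adverse weather.
    The weather labels and the one-level bump are illustrative assumptions."""
    if weather in ("rain", "snow"):
        i = LEVELS.index(likelihood)
        return LEVELS[min(i + 1, len(LEVELS) - 1)]
    return likelihood

print(adjust_for_weather("middle", "rain"))   # high
print(adjust_for_weather("middle", "clear"))  # middle
```

The same pattern extends to other auxiliary signals, such as the position information discussed next.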

In another embodiment, in the case of a mobile camera such as a black box, whether a traffic accident has occurred can also be determined based on location information obtained using a separate location recognition device.

For example, when the positions of the first area and the second area coincide, the determination can be made more accurately.

In addition, when judging whether a traffic accident has occurred, if the driver is in an accident-prone area, the occurrence and likelihood of a traffic accident can be judged accordingly. If a traffic accident has already occurred in the area, or if the vehicle is about to enter a congested area, the driver can be notified through the control center. If an accident happens to the driver in a remote place, this information can be provided to the control center for follow-up.

FIG. 4 is a view for explaining the operation of the determination unit according to an embodiment of the present invention.

Referring to FIG. 4, the moving picture is divided into sections at 3-second intervals; acoustic information, text information, and object information are analyzed for each section, and the likelihood of a traffic accident can be judged based on the voice information in addition to this information.

In FIG. 4, the analysis of the first, second, and third sections initially judged the likelihood of a traffic accident to be 'middle'. However, because similar keyword information was detected in the third section, the likelihood of a traffic accident was raised to 'high'.

Similar keyword information is information resembling the keyword information previously registered in relation to traffic accidents. When text information or voice information is compared with the keyword information and judged relevant to a traffic accident, it is selected as similar keyword information and used in determining whether an accident occurred and its likelihood.

That is, in the embodiment of FIG. 4, similar keyword information is not detected in the voice information of the first and second sections, but is detected in the third section, thereby influencing the judgment of the likelihood of a traffic accident.
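The similar-keyword selection just described can be approximated with fuzzy string matching over per-section utterances. The registered keywords, the 0.8 similarity threshold, and the section contents below are illustrative assumptions:

```python
from difflib import SequenceMatcher

KEYWORDS = {"watch out", "look ahead"}  # hypothetical registered keywords

def similar_keywords(utterances, threshold=0.8):
    """Select utterances sufficiently similar to a registered accident
    keyword (a sketch of 'similar keyword information' selection)."""
    hits = []
    for u in utterances:
        for k in KEYWORDS:
            if SequenceMatcher(None, u.lower(), k).ratio() >= threshold:
                hits.append(u)
                break
    return hits

# Three 3-second sections as in FIG. 4: no hit in sections 1-2, a hit in 3
sections = [["i'm going to a party"], ["make a sandwich"], ["watch out!"]]
for idx, sec in enumerate(sections, 1):
    print(idx, similar_keywords(sec))
```

A production system would use an ASR lattice and a proper semantic similarity measure rather than character-level matching; the structure of the decision is the point here.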

FIG. 5 is a view for explaining a driving situation recognition apparatus according to an embodiment of the present invention.

Referring to FIG. 5, the driving situation recognition apparatus according to an embodiment of the present invention includes a moving picture separating unit 510, a comparing unit 520, and a determination unit 530.

The moving picture separating unit 510 separates a moving picture photographed by the photographing apparatus into image information and sound information.

The comparing unit 520 compares at least one of the separated image information and sound information with the accident reference information.

The comparing unit 520 may include an image information analyzing unit 522 and a sound information analyzing unit 524; their specific operations will be described later with reference to FIG. 6.

The determination unit 530 determines whether a traffic accident has occurred or not based on the comparison result.

FIG. 6 is a diagram illustrating the comparing unit 520 according to an embodiment of the present invention.

Referring to FIG. 6, the comparing unit 520 includes an image information analyzing unit 522 and a sound information analyzing unit 524.

The image information analyzing unit 522 compares the image information with the accident reference information.

A specific configuration of the image information analyzing unit 522 according to an embodiment of the present invention will be described later with reference to FIG. 7.

The sound information analyzing unit 524 compares the sound information with the accident reference information generated in advance.

Preferably, the sound information analyzing unit 524 includes a sound separating unit (not shown) that separates the sound information into voice information and acoustic information, and a sound comparing unit (not shown) that compares at least one of the voice information and the acoustic information with the accident reference information.

Here, the sound comparing unit may include a voice information comparing unit (not shown) for comparing the voice information with the keyword information related to traffic accidents stored in the accident reference information and selecting similar keyword information, and an acoustic information comparing unit (not shown) for comparing the acoustic information with the accident reference information.

The voice information comparing unit receives the voice information separated by the sound separating unit. For example, voice information such as 'Watch out', 'I'm going to a party', 'Look ahead', and 'Make a sandwich' may be input. Comparing this with the keyword information related to traffic accidents, voice information such as 'Watch out' and 'Look ahead', which relate to a traffic accident, will be selected as similar keyword information.

The acoustic information comparing unit receives the acoustic information separated by the sound separating unit. For example, if the separated acoustic information includes a sudden braking sound of a car, a radio sound, a car crash sound, and an airplane sound, the acoustic information comparing unit compares each of them with the accident reference information.

Preferably, an evaluation value can be calculated for each piece of acoustic information according to its relevance to a traffic accident; each piece of acoustic information thus has a different evaluation value depending on its correlation with traffic accidents. For example, a 'sudden braking sound of a car' is only a basis for suspecting a traffic accident, whereas a 'car crash sound' can be direct evidence of one, so the latter is assigned a higher evaluation value.
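The evaluation-value idea above can be sketched as a weighted lookup: each detected sound class contributes a relevance weight, and the strongest weight in a section serves as the acoustic evidence score. The class names and numeric weights are hypothetical:

```python
# Hypothetical per-class evaluation values: direct evidence (crash sound)
# outweighs indirect evidence (sudden braking); irrelevant sounds score zero.
ACOUSTIC_EVAL = {
    "car_crash_sound": 0.9,
    "sudden_braking_sound": 0.6,
    "radio_sound": 0.0,
    "airplane_sound": 0.0,
}

def acoustic_score(detected: list) -> float:
    """Score a section by the strongest accident-related sound detected."""
    return max((ACOUSTIC_EVAL.get(s, 0.0) for s in detected), default=0.0)

print(acoustic_score(["radio_sound", "sudden_braking_sound"]))  # 0.6
print(acoustic_score(["car_crash_sound"]))                      # 0.9
```

Taking the maximum rather than the sum reflects the text's framing of the crash sound as direct evidence on its own; a sum or learned combination would be an equally plausible design.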

FIG. 7 is a view for explaining an image information analyzing unit 522 according to an embodiment of the present invention.

Referring to FIG. 7, the image information analyzing unit 522 according to an embodiment of the present invention includes a text detection unit 522a, an object detection unit 522b, and an image comparison unit 522c.

The text detection unit 522a extracts text information from the image information.

The object detection unit 522b extracts object information from the area of the image information other than the area from which the text information was extracted.

The image comparing unit 522c compares at least one of the text information and the object information with accident reference information.

Preferably, the image comparison unit 522c includes a text information comparing unit (not shown) for comparing the text information extracted by the text detection unit 522a with the accident reference information, and an object information comparing unit (not shown) for comparing the object information extracted by the object detection unit 522b with the accident reference information.

The text information comparing unit can compare the extracted text information with the keyword information related to traffic accidents and select similar keyword information.

For example, when the text detection unit 522a reads the text 'accident-prone zone' on a traffic sign together with advertisement text on a roadside billboard, only 'accident-prone zone' is selected as keyword information and the advertisement text is excluded. Since a traffic accident is more likely where an 'accident-prone zone' sign is posted, this text information is closely related to traffic accidents and can be selected as keyword information.

The object information comparison unit can classify the extracted object information according to the type of the object, and compare the object information with the accident reference information.

For example, the object information comparison unit classifies the object information 'automobile, road, person, cloud, traffic light, bridge' extracted by the object detection unit 522b according to the object type and compares the classified object information with accident reference information.

At this time, the object information comparing unit can calculate an evaluation value for each object based on its relevance to traffic accidents. Among the object information above, 'automobile, road, person, traffic light, and bridge', which are closely related to the driving situation, will yield higher evaluation values than 'cloud'.

In another embodiment, the object information comparing unit according to an embodiment of the present invention may analyze the relationship between objects. For example, when a car and a person are closer to each other than a normal distance, or when the road forms a sharp curve and has no guard rail, the relationship between the objects can be analyzed and provided to the determination unit 530.
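The object-to-object relationship analysis can be illustrated with a simple proximity check between bounding-box centers. The (x1, y1, x2, y2) box format and the pixel threshold are assumptions made for the sketch:

```python
import math

def center(box):
    """Center of an (x1, y1, x2, y2) bounding box."""
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2, (y1 + y2) / 2)

def too_close(box_a, box_b, threshold=50.0) -> bool:
    """Flag a risky relationship (e.g. car vs. person) when the box centers
    are closer than a 'normal distance'. Threshold in pixels, assumed."""
    (ax, ay), (bx, by) = center(box_a), center(box_b)
    return math.hypot(ax - bx, ay - by) < threshold

car = (100, 100, 200, 160)      # hypothetical detections from 522b
person = (180, 110, 210, 180)
print(too_close(car, person))   # True: centers are about 47.4 px apart
```

In practice the 'normal distance' would need to be scaled by object size and camera geometry; the flag itself is just one more input handed to the determination unit.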

FIG. 8 is a diagram illustrating the UI of a driving situation providing application according to an embodiment of the present invention.

Referring to FIG. 8, a moving picture area 1110 is shown on the leftmost side of the driving situation providing application.

Video information of the moving picture for the currently selected driving situation can be displayed at the top of the moving picture area. When a specific time is selected, the video information corresponding to that time can be displayed in the moving picture area 1110. Below the video information, the sound information corresponding in time to the video information can be displayed. Menus for selecting and managing moving pictures can also be provided in the moving picture area 1110.

In FIG. 8, an acoustic information area 1120 is also shown.

Referring to FIG. 8, in the acoustic information area 1120, each piece of acoustic information included in the sound information is shown in a graph according to its frequency of occurrence, and the three most frequent kinds of acoustic information are displayed.

An object information area 1130 is shown in the upper right area of FIG.

In the object information area 1130, each piece of object information included in the image information is shown in a graph according to its frequency, and the three most frequent kinds of object information are displayed. When object information is selected, object information that is unrelated to traffic accidents or erroneously detected (for example, the 'candy' entry in area 1130) may be excluded from the judgment targets through the subsequent comparison with the accident reference information.

A text information and voice information area 1140 is shown in the middle left lower area of FIG. 8.

In the text information and voice information area 1140, text information recognized through OCR and voice information recognized through Automatic Speech Recognition (ASR) are displayed. In another embodiment, only the similar keyword information among the text information and the voice information may be displayed in this area. A process display area 1150 is shown in the center right lower area of FIG. 8.

In the process display area 1150, a series of processes may be displayed: separating the moving picture into video information and sound information, extracting text information and object information from the separated video information, separating the separated sound information into voice information and acoustic information, comparing the extracted information with the accident reference information, and determining whether a traffic accident has occurred based on the comparison result. The progress status and result of each process, the method by which each process proceeds, and any other necessary information can also be displayed in the process display area 1150.
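The series of processes listed above can be sketched, under heavy simplification, as the following pipeline; every function, data structure, and score here is an illustrative assumption rather than the patented implementation.

```python
# A heavily simplified sketch of the pipeline shown in area 1150.

def separate(clip):
    """Split a recorded clip into its image stream and sound stream."""
    return clip["frames"], clip["audio"]

def compare_with_reference(features, reference):
    """Fraction of extracted features that also appear in the reference set."""
    if not features:
        return 0.0
    return len(set(features) & reference) / len(features)

def accident_probability(clip, reference):
    """Average the per-channel match scores into one accident probability."""
    frames, audio = separate(clip)
    scores = [
        compare_with_reference(frames["text"], reference),     # OCR text
        compare_with_reference(frames["objects"], reference),  # detected objects
        compare_with_reference(audio["voice"], reference),     # ASR words
        compare_with_reference(audio["sounds"], reference),    # acoustic events
    ]
    return sum(scores) / len(scores)

reference = {"collision", "ambulance", "skid", "car"}
clip = {
    "frames": {"text": ["collision"], "objects": ["car", "tree"]},
    "audio": {"voice": ["ambulance"], "sounds": ["skid", "music"]},
}
print(accident_probability(clip, reference))  # 0.75
```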

A result display area 1160 is shown in the lower right area of FIG. 8.

In one embodiment of the present invention, when the calculated probability of a traffic accident is equal to or greater than a preset reference value, 'accident occurred' may be displayed, and when the probability is less than the reference value, 'no accident' may be displayed.

In another embodiment, the probability of occurrence of an accident may be expressed as 'high', 'medium', or 'low' based on preset probability ranges.
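Both decision embodiments can be sketched as simple threshold functions; the reference value 0.5 and the 0.4/0.7 range boundaries are assumed values chosen only for illustration.

```python
def accident_label(prob, reference_value=0.5):
    """Binary embodiment: 'accident occurred' at or above a preset reference
    value, 'no accident' below it. The 0.5 value is an assumption."""
    return "accident occurred" if prob >= reference_value else "no accident"

def accident_level(prob, low=0.4, high=0.7):
    """Graded embodiment: map the probability to 'high'/'medium'/'low'
    using assumed range boundaries."""
    if prob >= high:
        return "high"
    if prob >= low:
        return "medium"
    return "low"

print(accident_label(0.8), accident_level(0.8))  # accident occurred high
```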

The diagram shown in FIG. 8 corresponds to one embodiment of the UI of the driving situation providing application, so a person skilled in the art may add further areas or delete the illustrated areas as needed without departing from the claims.

While the present invention has been described with reference to preferred embodiments, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention. Therefore, the disclosed embodiments should be considered in an illustrative rather than a restrictive sense. The scope of the present invention is defined by the appended claims rather than by the foregoing description, and all differences within the scope of equivalents thereof should be construed as being included in the present invention.

Claims (18)

Separating a moving picture photographed with respect to a driving situation of a first area into image information and sound information;
Comparing at least one of the image information and the sound information with accident reference information generated in advance with respect to traffic accidents of a second area; and
Determining whether a traffic accident has occurred based on the comparison result,
Wherein the step of comparing the image information with the accident reference information comprises:
Extracting text information including at least one of a symbol, a letter, and a number from the image information;
Extracting object information from a region of the image information other than the region from which the text information is extracted; and
Comparing at least one of the text information and the object information with the accident reference information.
delete
The method according to claim 1,
When the accident reference information further includes keyword information related to a traffic accident,
The step of comparing the text information with the accident reference information
Comprises comparing the text information with the keyword information to select similar keyword information that is similar to the keyword information.
The method according to claim 1,
The step of comparing the object information with the accident reference information comprises:
Classifying the object information according to the type of the object; and
Comparing the object information with the accident reference information for each type of the classified objects.
The method according to claim 1,
The step of comparing the sound information with the accident reference information comprises:
Separating the sound information into voice information and acoustic information; and
Comparing at least one of the voice information and the acoustic information with the accident reference information.
The method of claim 5,
When the accident reference information further includes keyword information related to a traffic accident,
The step of comparing the voice information with the accident reference information
Comprises comparing the voice information with the keyword information to select similar keyword information that is similar to the keyword information.
The method according to claim 1,
Further comprising the step of analyzing weather from the image information,
Wherein the step of determining whether the traffic accident has occurred is performed further based on the weather analysis result.
The method according to claim 1,
Wherein the moving picture photographed with respect to the driving situation of the first area
Is photographed by a stationary photographing apparatus fixedly installed in the first area or by a mobile photographing apparatus that photographs while moving in the first area.
The method of claim 8,
Further comprising the step of acquiring position information of the mobile photographing apparatus,
Wherein the step of determining whether the traffic accident has occurred is performed further based on the position information.
A moving picture separating unit for separating a moving picture photographed with respect to a driving situation of a first area into image information and sound information;
An image information analyzing unit for comparing the image information with accident reference information generated in advance by analyzing moving pictures of traffic accidents of a second area, and a sound information analyzing unit for comparing the sound information with the accident reference information; and
A determining unit for determining whether a traffic accident has occurred based on the comparison results,
Wherein the image information analyzing unit comprises:
A text detection unit for extracting text information from the image information;
An object detection unit for extracting object information from a region of the image information other than the region from which the text information is extracted; and
An image comparison unit for comparing at least one of the text information and the object information with the accident reference information.
delete
The apparatus of claim 10,
When the accident reference information further includes keyword information related to a traffic accident,
The image comparison unit
Compares the text information with the keyword information to further select similar keyword information.
The apparatus of claim 10,
The image comparison unit
Classifies the object information according to the type of the object, and
Compares the object information with the accident reference information for each type of the classified objects.
The apparatus of claim 10,
The sound information analyzing unit comprises:
A sound separating unit for separating the sound information into voice information and acoustic information; and
A sound comparing unit for comparing at least one of the voice information and the acoustic information with the accident reference information.
The apparatus of claim 14,
When the accident reference information further includes keyword information related to a traffic accident,
The sound comparing unit
Compares the voice information with the keyword information to further select similar keyword information.
The apparatus of claim 10,
Further comprising a weather analyzing unit for analyzing weather from the image information,
Wherein the determining unit performs the determination further based on the weather analysis result.
The apparatus of claim 10,
Wherein the moving picture photographed with respect to the driving situation of the first area
Is photographed by a stationary photographing device fixedly installed in the first area or by a mobile photographing device that photographs while moving in the first area.
The apparatus of claim 17,
Further comprising a position information obtaining unit for obtaining position information of the mobile photographing device,
Wherein the determining unit performs the determination further based on the position information.
KR1020140102528A 2014-08-08 2014-08-08 Method and device for perceiving driving situation KR101593676B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020140102528A KR101593676B1 (en) 2014-08-08 2014-08-08 Method and device for perceiving driving situation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020140102528A KR101593676B1 (en) 2014-08-08 2014-08-08 Method and device for perceiving driving situation

Publications (1)

Publication Number Publication Date
KR101593676B1 true KR101593676B1 (en) 2016-02-15

Family

ID=55357460

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020140102528A KR101593676B1 (en) 2014-08-08 2014-08-08 Method and device for perceiving driving situation

Country Status (1)

Country Link
KR (1) KR101593676B1 (en)


Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101266326B1 (en) * 2011-12-27 2013-05-22 전자부품연구원 Accident cognition device and method


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109995934A (en) * 2019-02-27 2019-07-09 维沃移动通信有限公司 Reminding method and terminal device
CN110427882A (en) * 2019-08-01 2019-11-08 北京派克盛宏电子科技有限公司 For the intelligent analysis method of tour, device, equipment and its storage medium
KR20210050150A (en) * 2019-10-28 2021-05-07 고려대학교 세종산학협력단 Method and procedure for driving autonomous test scenario using traffic accident image based on operational environment information in road traffic
KR102306085B1 (en) * 2019-10-28 2021-09-28 고려대학교 세종산학협력단 Method and procedure for driving autonomous test scenario using traffic accident image based on operational environment information in road traffic

Similar Documents

Publication Publication Date Title
CN107067718B (en) Traffic accident responsibility evaluation method, traffic accident responsibility evaluation device, and traffic accident responsibility evaluation system
US10089877B2 (en) Method and device for warning other road users in response to a vehicle traveling in the wrong direction
CN106781458B (en) A kind of traffic accident monitoring method and system
US8085140B2 (en) Travel information providing device
CN110619747A (en) Intelligent monitoring method and system for highway road
US20120148092A1 (en) Automatic traffic violation detection system and method of the same
CN106652468A (en) Device and method for detection of violation of front vehicle and early warning of violation of vehicle on road
CN110866479A (en) Method, device and system for detecting that motorcycle driver does not wear helmet
CN101739809A (en) Automatic alarm and monitoring system for pedestrian running red light
KR20100119476A (en) An outomatic sensing system for traffic accident and method thereof
CN111126171A (en) Vehicle reverse running detection method and system
CN112744174B (en) Vehicle collision monitoring method, device, equipment and computer readable storage medium
US11482012B2 (en) Method for driving assistance and mobile device using the method
KR20160141226A (en) System for inspecting vehicle in violation by intervention and the method thereof
KR101498582B1 (en) System and Method for Providing Traffic Accident Data
CN115035491A (en) Driving behavior road condition early warning method based on federal learning
KR101593676B1 (en) Method and device for perceiving driving situation
CN111785050A (en) Expressway fatigue driving early warning device and method
KR102323692B1 (en) Method and apparatus for evaluating driver using adas
CN108230717B (en) Intelligent traffic management system
CN113192109A (en) Method and device for identifying motion state of object in continuous frames
CN114120250B (en) Video-based motor vehicle illegal manned detection method
CN113352989A (en) Intelligent driving safety auxiliary method, product, equipment and medium
CN104715615A (en) Electronic violation recognizing platform in traffic intersection
KR101793156B1 (en) System and method for preventing a vehicle accitdent using traffic lights

Legal Events

Date Code Title Description
E701 Decision to grant or registration of patent right
GRNT Written decision to grant
FPAY Annual fee payment

Payment date: 20190201

Year of fee payment: 4