
CN111854780B - Vehicle navigation method, device, vehicle, electronic equipment and storage medium - Google Patents

Vehicle navigation method, device, vehicle, electronic equipment and storage medium

Info

Publication number
CN111854780B
CN111854780B · CN202010524245.XA · CN202010524245A
Authority
CN
China
Prior art keywords
navigation
information
lane
vehicle
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010524245.XA
Other languages
Chinese (zh)
Other versions
CN111854780A (en)
Inventor
李雪冬 (Li Xuedong)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Evergrande Hengchi New Energy Automobile Research Institute Shanghai Co Ltd
Original Assignee
Evergrande Hengchi New Energy Automobile Research Institute Shanghai Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Evergrande Hengchi New Energy Automobile Research Institute Shanghai Co Ltd filed Critical Evergrande Hengchi New Energy Automobile Research Institute Shanghai Co Ltd
Priority to CN202010524245.XA
Publication of CN111854780A
Application granted
Publication of CN111854780B
Legal status: Active (current)
Anticipated expiration

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/34: Route searching; Route guidance
    • G01C21/3407: Route searching; Route guidance specially adapted for specific applications
    • G01C21/3415: Dynamic re-routing, e.g. recalculating the route when the user deviates from calculated route or after detecting real-time traffic data or accidents
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/28: Navigation in a road network with correlation of data from several navigational instruments
    • G01C21/30: Map- or contour-matching
    • G01C21/32: Structuring or formatting of map data
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/34: Route searching; Route guidance
    • G01C21/36: Input/output arrangements for on-board computers
    • G01C21/3626: Details of the output of route guidance instructions
    • G01C21/3658: Lane guidance
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01S: RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00: Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/86: Combinations of radar systems with non-radar systems, e.g. sonar, direction finder
    • G01S13/867: Combination of radar systems with cameras
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01S: RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00: Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/88: Radar or analogous systems specially adapted for specific applications
    • G01S13/93: Radar or analogous systems specially adapted for specific applications for anti-collision purposes
    • G01S13/931: Radar or analogous systems specially adapted for specific applications for anti-collision purposes of land vehicles

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • Navigation (AREA)
  • Traffic Control Systems (AREA)

Abstract

The application discloses a vehicle navigation method, a vehicle navigation device, a vehicle, electronic equipment and a storage medium, and belongs to the technical field of navigation. The method comprises the following steps: obtaining lane information of a target driving area corresponding to a vehicle and a navigation path of the vehicle in the target driving area; determining a navigation identifier according to the lane information and the navigation path; and projecting the navigation identifier onto the target driving area through the vehicle. By determining the navigation identifier from the lane information and the navigation path of the target driving area and projecting it into the target driving area, vehicle navigation is performed intuitively and the navigation identifier is easy for the user to understand. The method neither distracts the user nor lets the user choose a wrong path through misunderstanding, thereby improving navigation accuracy, achieving a good navigation effect, and improving the user's driving experience.

Description

Vehicle navigation method, device, vehicle, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of navigation technologies, and in particular, to a vehicle navigation method and apparatus, a vehicle, an electronic device, and a storage medium.
Background
With the development of navigation technology, more and more users drive vehicles according to navigation. In the related art, the navigation terminal corresponding to a vehicle usually navigates by means of video display or voice broadcast. The user therefore has to select a lane for the vehicle by himself or herself according to the displayed video or the broadcast voice, and then drive the vehicle in the selected lane.
However, video display and voice broadcast are not intuitive, so the lane the user selects from the video or voice may be wrong, and driving on a wrong lane not only wastes time but also carries considerable risk. The method provided by the related art is therefore not intuitive enough, its navigation effect is poor, and it degrades the user's driving experience.
Disclosure of Invention
The embodiment of the application provides a vehicle navigation method and device, a vehicle, electronic equipment and a storage medium, so as to improve the navigation effect. The technical scheme is as follows:
in one aspect, a vehicle navigation method is provided, the method comprising:
acquiring lane information of a target driving area corresponding to a vehicle and a navigation path of the vehicle in the target driving area;
determining a navigation identifier according to the lane information and the navigation path;
and projecting the navigation identifier onto the target driving area through the vehicle.
In an exemplary embodiment, the navigation identifier includes a navigation arrow, and the determining the navigation identifier according to the lane information and the navigation path includes:
determining relative bearing information of the navigation path relative to the current position of the vehicle based on the lane information and the navigation path;
and taking the current position of the vehicle as a starting point of the navigation arrow, determining an end point of the navigation arrow according to the relative bearing information, and determining the navigation arrow according to the starting point and the end point of the navigation arrow.
In an exemplary embodiment, the determining the relative bearing information of the navigation path with respect to the current position of the vehicle based on the lane information and the navigation path includes:
acquiring a lane model of the target driving area according to the lane information;
determining position information corresponding to the navigation path according to the lane model;
and determining the relative bearing information of the navigation path relative to the current position of the vehicle according to the position information corresponding to the navigation path.
In an exemplary embodiment, the acquiring a lane model of the target travel area according to the lane information includes:
determining the information quality of the lane information, wherein the information quality is used for indicating whether the lane model can be acquired according to the lane information, and the lane information is acquired through a camera device;
and in response to the information quality indicating that the lane model can be acquired according to the lane information, acquiring the lane model of the target driving area according to the lane information.
In an exemplary embodiment, the method further comprises:
in response to the information quality indicating that the lane model cannot be acquired according to the lane information, scanning the target driving area through a radar device to obtain point cloud information;
and acquiring a lane model of the target driving area according to the lane information and the point cloud information.
In an exemplary embodiment, before the obtaining of the lane information of the target driving area where the vehicle is located and the navigation path of the vehicle in the target driving area, the method further includes:
determining the road complexity of the target driving area;
in response to the road complexity being greater than a reference level, performing the obtaining of the lane information of the target driving area where the vehicle is located and the navigation path of the vehicle in the target driving area.
In an exemplary embodiment, the projecting the navigation identifier to the target driving area through the vehicle includes:
acquiring illumination intensity information of the target driving area, and determining the target brightness of the navigation identifier according to the illumination intensity information;
and projecting the navigation identifier onto the target driving area according to the target brightness through a lighting device of the vehicle.
In an exemplary embodiment, after the projecting of the navigation identifier onto the target driving area through the vehicle, the method further includes:
stopping projecting the navigation identifier onto the target driving area in response to detecting that the vehicle has reached an end position of the target driving area.
In an exemplary embodiment, the vehicle includes a plurality of sub-lighting devices distributed in an array, and the projecting of the navigation identifier onto the target driving area through the vehicle includes:
for any one of the sub-lighting devices, determining lighting information of the sub-lighting device according to the navigation identifier, wherein the lighting information is used for indicating whether the sub-lighting device emits light;
and controlling the sub-lighting device according to the lighting information, so as to project the navigation identifier onto the target driving area.
In one aspect, there is provided a navigation device, the device comprising:
the system comprises an acquisition module, a navigation module and a processing module, wherein the acquisition module is used for acquiring lane information of a target driving area corresponding to a vehicle and a navigation path of the vehicle in the target driving area;
the first determining module is used for determining a navigation mark according to the lane information and the navigation path;
and the projection module is used for projecting the navigation identifier to the target driving area through the vehicle.
In an exemplary embodiment, the navigation identifier includes a navigation arrow, and the first determining module is configured to determine, based on the lane information and the navigation path, relative bearing information of the navigation path with respect to the current position of the vehicle; take the current position of the vehicle as a starting point of the navigation arrow, determine an end point of the navigation arrow according to the relative bearing information, and determine the navigation arrow according to the starting point and the end point of the navigation arrow.
In an exemplary embodiment, the first determining module is configured to obtain a lane model of the target driving area according to the lane information; determine position information corresponding to the navigation path according to the lane model; and determine the relative bearing information of the navigation path relative to the current position of the vehicle according to the position information corresponding to the navigation path.
In an exemplary embodiment, the first determining module is configured to determine the information quality of the lane information, the information quality indicating whether the lane model can be acquired according to the lane information, the lane information being information acquired by a camera device; and in response to the information quality indicating that the lane model can be acquired according to the lane information, acquire the lane model of the target driving area according to the lane information.
In an exemplary embodiment, the first determining module is further configured to, in response to the information quality indicating that the lane model cannot be acquired according to the lane information, scan the target driving area through a radar device to obtain point cloud information; and acquire the lane model of the target driving area according to the lane information and the point cloud information.
In an exemplary embodiment, the apparatus further includes: a second determining module, configured to determine the road complexity of the target driving area; and in response to the road complexity being greater than a reference level, perform the obtaining of the lane information of the target driving area where the vehicle is located and the navigation path of the vehicle in the target driving area.
In an exemplary embodiment, the projection module is configured to acquire illumination intensity information of the target driving area, determine the target brightness of the navigation identifier according to the illumination intensity information, and project the navigation identifier onto the target driving area according to the target brightness through the lighting device.
In an exemplary embodiment, the apparatus further includes: a stopping module, configured to stop projecting the navigation identifier onto the target driving area in response to detecting that the vehicle has reached the end position of the target driving area.
In an exemplary embodiment, the vehicle includes a plurality of sub-lighting devices distributed in an array, and the projection module is configured to determine, for any one of the sub-lighting devices, its lighting information according to the navigation identifier, where the lighting information indicates whether that sub-lighting device emits light; and to control each sub-lighting device according to its lighting information, so as to project the navigation identifier onto the target driving area.
In one aspect, a vehicle is provided. The vehicle includes a lane information collector, a positioning sensor, a processor and a light emitting device, wherein the processor is electrically connected to the lane information collector, the positioning sensor and the light emitting device respectively;
the lane information collector is used for obtaining lane information of a target driving area corresponding to the vehicle;
the positioning sensor is used for obtaining a navigation path of the vehicle in the target driving area;
the processor is used for determining a navigation identifier according to the lane information and the navigation path;
and the light emitting device is used for projecting the navigation identifier onto the target driving area.
In one aspect, an electronic device is provided, which includes a processor and a memory electrically connected to the processor, the memory being configured to store a computer program, and the processor being configured to invoke the computer program to implement a vehicle navigation method provided in any of the exemplary embodiments of the present application.
In another aspect, a readable storage medium is provided, the storage medium having at least one instruction stored therein, the instruction being loaded and executed by a processor to implement a vehicle navigation method provided by any one of the exemplary embodiments of the present application.
The technical solutions provided in the embodiments of the present application bring at least the following beneficial effects:
the navigation mark is determined through the lane information and the navigation path of the target driving area, and the navigation mark is projected in the target driving area, so that vehicle navigation is performed visually, and a user can understand the navigation mark conveniently. The method can not only not disperse the attention of the user, but also avoid the situation that the user selects a wrong path due to wrong understanding, thereby improving the navigation accuracy, achieving a good navigation effect and improving the driving experience of the user.
Drawings
To describe the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below are only some embodiments of the present application; those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a schematic structural diagram of a vehicle navigation system provided in an embodiment of the present application;
FIG. 2 is a schematic structural diagram of a central processing module according to an embodiment of the present disclosure;
FIG. 3 is a flow chart of a method for vehicle navigation provided by an embodiment of the present application;
FIG. 4 is a flow chart of navigation provided by an embodiment of the present application;
fig. 5 is a schematic structural diagram of a vehicle navigation device provided in an embodiment of the present application;
fig. 6 is a schematic structural diagram of a terminal according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
The embodiment of the application provides a vehicle navigation method, which can be applied to the vehicle navigation system shown in fig. 1, where the vehicle navigation system is installed on a terminal or a cloud platform. The terminal may be any electronic product capable of human-computer interaction with a user through one or more modes such as a keyboard, a touch pad, a touch screen, a remote controller, voice interaction or handwriting equipment, for example, a PC (Personal Computer), a mobile phone, a smart phone, a PDA (Personal Digital Assistant), a wearable device, a Pocket PC, a tablet PC, a smart in-vehicle head unit, a smart television, a smart sound box, and the like. The cloud platform can be a server, a server cluster formed by a plurality of servers, or a cloud computing service center.
In fig. 1, the vehicle navigation system includes a lane acquisition module 101, a positioning navigation module 102, a remote sensing acquisition module 103, an intelligent headlight module 104, and a central processing module 105. Next, the above modules will be explained:
the lane acquisition module 101: the lane acquisition module 101 acquires a video signal in a driving direction of the vehicle through a monocular or monocular camera, and the video signal can be used to acquire lane information, such as traffic signs, lane lines, and the like. The video signal is then converted to a digital signal and passed to the image processing circuitry in the central processing module 105.
The positioning navigation module 102: marks the position information of the vehicle in real time in combination with an electronic navigation map, and passes the vehicle's driving position and navigation path to the control circuit in the central processing module 105.
Remote sensing acquisition module 103: may be a high-resolution millimeter wave radar or a multi-line laser radar. One or more such modules scan the road conditions in front of the vehicle, such as fences, street lamps and buildings, with millimeter waves or laser signals, and transmit the acquired point cloud information to the remote sensing imaging circuit in the central processing module 105.
The central processing module 105: referring to fig. 2, the central processing module 105 includes a power management circuit, an image processing circuit, a remote sensing imaging circuit, a lane model circuit, a control circuit, and a communication circuit. The power management circuit is used for managing a power supply and providing electric energy required by the operation of each module and each circuit through power input and power output. The image processing circuit receives and processes the digital signal provided by the lane acquisition module 101 by using an LVDS (Low-voltage Differential Signaling) technology, extracts lane information such as lane lines and signs according to the digital signal, and transmits the lane information to the lane model circuit. The remote sensing imaging circuit receives point cloud information provided by the remote sensing acquisition module 103 through LVDS, carries out point cloud clustering according to the point cloud information to form stereo image information, and transmits the stereo image information to the lane model circuit.
The lane model circuit obtains a lane model from the processing results of the image processing circuit and the remote sensing imaging circuit. The control circuit receives the navigation path provided by the positioning navigation module 102 and determines the position information of the navigation path according to the lane model, thereby determining the relative bearing information of the navigation path with respect to the current position of the vehicle. The control circuit then sends the determined relative bearing information to the communication circuit. The communication circuit transmits the relative bearing information to the intelligent headlight module 104 through LIN (Local Interconnect Network), CAN (Controller Area Network) bus, USB (Universal Serial Bus), or UART (Universal Asynchronous Receiver-Transmitter).
It should be noted that the central processing module 105 processes the data information of the lane acquisition module 101, the positioning navigation module 102 and the remote sensing acquisition module 103, and this data is only meaningful if processed within the same time window, so the operating clocks of the modules need to be synchronized. The central processing module 105 sends a time calibration signal CLK (Clock) to the lane acquisition module 101, the positioning navigation module 102 and the remote sensing acquisition module 103 at regular intervals, for example every 20 ms, to ensure that the operating time of these modules is synchronized with that of the central processing module 105. In fig. 2, the image processing circuit sends the time calibration signal to the lane acquisition module 101, the control circuit sends it to the positioning navigation module 102, and the remote sensing imaging circuit sends it to the remote sensing acquisition module 103.
The time calibration signal is composed of the time signal of the central processing module 105 plus a communication delay, where the communication delay is the transmission delay between the central processing module 105 and the respective module.
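As an illustration of this composition, the following Python sketch stamps the central module's clock and adds the per-module communication delay; all names, the 20 ms period and the delay values are assumptions, not the patent's implementation:

```python
import time

# Assumed one-way communication delays (seconds) between the central
# processing module and each peripheral module; values are illustrative.
COMM_DELAY = {
    "lane_acquisition_101": 0.002,
    "positioning_navigation_102": 0.001,
    "remote_sensing_103": 0.003,
}

def make_calibration_signal(module: str) -> float:
    """CLK value = central module time signal + communication delay."""
    return time.monotonic() + COMM_DELAY[module]

def broadcast_clk(rounds: int, period_s: float = 0.02) -> None:
    """Send a CLK to every module once per period (e.g. every 20 ms)."""
    for _ in range(rounds):
        for module in COMM_DELAY:
            clk = make_calibration_signal(module)
            # A real system would send clk over LVDS/CAN/etc.; printed here.
            print(f"CLK -> {module}: {clk:.6f}")
        time.sleep(period_s)
```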
Intelligent headlight module 104: according to the relative bearing information provided by the central processing module 105, controls the vehicle's lights to project a navigation identifier for guiding the user onto the actual road surface of the target driving area.
Based on the vehicle navigation system shown in fig. 1 and 2, referring to fig. 3, an embodiment of the present application provides a vehicle navigation method, where the vehicle navigation method is applied to a terminal or a cloud platform including the vehicle navigation system shown in fig. 1 and 2. The terminals including the vehicle navigation system include, but are not limited to, vehicle-mounted terminals of a vehicle to be navigated and vehicle-mounted terminals of other vehicles except the vehicle to be navigated. As shown in fig. 3, the method includes:
step 301, obtaining lane information of a target driving area corresponding to the vehicle and a navigation path of the vehicle in the target driving area.
The vehicle is the vehicle to be navigated, driven by a user. The target driving area corresponding to the vehicle is the drivable area in the vehicle's current driving direction. The lane information of the target driving area describes driving-related environmental elements in the area, such as lanes and the traffic signs corresponding to them, in a form the terminal or the cloud platform can interpret.
In an implementation, the area image of the target driving area may be acquired by an image pickup device of the vehicle, such as a monocular or binocular camera. Besides the driving-related environmental elements, the area image also contains environmental elements unrelated to driving, such as the sky around the lane, green belts and the like. Therefore, the driving-related environmental elements may be extracted from the area image as the lane information, so that the terminal or the cloud platform can interpret the scene from the lane information.
It should be noted that the image pickup device of the vehicle is usually arranged at the front of the vehicle, so that the area image of the target driving area is captured in the driving direction; the area image is thus taken from the vehicle's own viewpoint. By interpreting the lane information extracted from the area image, the terminal can confirm not only the lane in which the vehicle is currently located but also which other lanes around it are available for driving.
In addition, the navigation path of the vehicle in the target driving area is the path the vehicle will travel in the near future, determined from the electronic navigation map. Since the navigation path in the target driving area is usually part of the whole navigation path from the vehicle's start point to its destination, it can be derived from the whole path. In implementation, the whole navigation path from start point to destination is obtained and the vehicle is positioned to get its location and driving direction; then a segment of reference length ahead of the vehicle in the driving direction is taken from the whole path as the navigation path in the target driving area.
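A minimal sketch of this clipping step, assuming a polyline route in a local metric frame and an assumed 200 m reference length (neither the representation nor the length is fixed by the patent):

```python
import math

Point = tuple[float, float]  # (x, y) in a local metric frame

def path_in_target_area(full_path: list[Point],
                        vehicle_pos: Point,
                        reference_length_m: float = 200.0) -> list[Point]:
    """Clip the full start-to-destination route to a reference length ahead."""
    # Start from the route vertex nearest the vehicle's matched position.
    i = min(range(len(full_path)),
            key=lambda k: math.dist(full_path[k], vehicle_pos))
    clipped, travelled = [full_path[i]], 0.0
    for a, b in zip(full_path[i:], full_path[i + 1:]):
        travelled += math.dist(a, b)
        clipped.append(b)
        if travelled >= reference_length_m:
            break
    return clipped
```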
It should be noted that when step 301 is executed by the terminal of the vehicle to be navigated, that terminal can directly acquire the lane information and the navigation path by the method described above, after capturing the area image of the target driving area with its own camera device and obtaining the positioning information of the vehicle. When step 301 is executed by the terminal of another vehicle or by the cloud platform, the vehicle to be navigated needs to send the captured area image and its positioning information to that terminal or cloud platform, which then acquires the lane information and the navigation path from the received area image and positioning information.
As for the timing that triggers the acquisition of the lane information and the navigation path, the acquisition may simply be triggered by default once the vehicle is detected to have started. Alternatively, since the lane information and the navigation path are acquired to provide the vehicle navigation function, whether the vehicle navigation function is on may be checked first after the vehicle starts. If the function is already on after start-up, it was not turned off at the end of the previous drive, which indicates that the user wishes to keep using it during the current drive, so the acquisition of the lane information and the navigation path can be triggered. Conversely, if the function is off after start-up, it was turned off before the previous drive ended; in that case, the function needs to be turned on during the current drive once the user's turn-on operation is detected, that is, the system shown in fig. 4 is activated, and the acquisition of the lane information and the navigation path is triggered after the function is turned on.
In an implementation, a turn-on button may be provided in the vehicle; if the button is detected to be pressed, the user's turn-on operation for the vehicle navigation function is considered detected. Alternatively, a toggle switch may be provided in the vehicle; if the switch is detected to be toggled to the "on" side, the turn-on operation is considered detected. Or, in-vehicle audio can be monitored and semantic recognition performed on the user's speech to understand the user's intent; if the recognized semantics relate to turning on the vehicle navigation function, the turn-on operation is considered detected.
Alternatively, after detecting that the navigation function is on, this embodiment may leave it in a sleep state by default instead of immediately triggering the acquisition of the lane information and the navigation path. During driving, if the target driving area is detected to meet a reference condition, the navigation function is switched from the sleep state to a working state, which triggers the acquisition. For example, the reference condition may be that the road complexity of the target driving area is greater than a reference level. That is, before acquiring the lane information of the target driving area where the vehicle is located and the navigation path of the vehicle in the target driving area, the method further includes: determining the road complexity of the target driving area; and in response to the road complexity being greater than the reference level, performing the acquisition of the lane information and the navigation path.
In this embodiment, the location of the vehicle may be obtained, and whether the road complexity at that location is greater than the reference level is determined from the electronic navigation map. The electronic navigation map usually records the type of each road section, and for any road section this embodiment may determine its road complexity from its type. For example, a one-way road section has low road complexity, while multi-branch roads and overpasses have high road complexity. Besides the section type, this embodiment may also determine the road complexity from one or more other kinds of section information, such as the number of lanes the section contains and its traffic flow.
During driving, in response to the distance between the vehicle's position and a road section whose road complexity is greater than the reference level being smaller than a reference distance, the vehicle is approaching that section, so the road complexity of the target driving area corresponding to the vehicle's position can be considered greater than the reference level. In implementation, the reference level and the reference distance may be set according to experience or actual needs; this embodiment does not limit them.
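As a sketch of this gating logic (the type scores, reference level, reference distance and traffic-flow bump below are illustrative assumptions; the patent leaves the thresholds to experience or actual needs):

```python
# Illustrative complexity scores per road-section type.
COMPLEXITY_BY_TYPE = {"one_way": 1, "multi_branch": 3, "overpass": 4}
REFERENCE_LEVEL = 2
REFERENCE_DISTANCE_M = 150.0

def road_complexity(section_type: str, lane_count: int = 1,
                    traffic_flow: int = 0) -> int:
    """Complexity from section type, optionally bumped by lanes/traffic."""
    score = COMPLEXITY_BY_TYPE.get(section_type, 2)
    if lane_count >= 4:
        score += 1
    if traffic_flow > 1000:  # vehicles/hour, assumed unit
        score += 1
    return score

def should_wake_navigation(distance_m: float, section_type: str,
                           lane_count: int = 1, traffic_flow: int = 0) -> bool:
    """Wake the sleeping navigation function when a complex section is near."""
    complex_enough = road_complexity(section_type, lane_count,
                                     traffic_flow) > REFERENCE_LEVEL
    return complex_enough and distance_m < REFERENCE_DISTANCE_M
```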
Step 302, determining a navigation identifier according to the lane information and the navigation path.
The navigation identifier is used to guide the user of the vehicle; for intuitiveness, a navigation arrow can be used as the navigation identifier. The user can then drive in the direction of the navigation arrow, and the vehicle is thereby navigated. In an exemplary embodiment, determining the navigation identifier according to the lane information and the navigation path includes the following steps 3021 and 3022:
step 3021, determining the relative orientation information of the navigation path relative to the current position of the vehicle based on the lane information and the navigation path.
As described in step 301, the navigation path is determined from the electronic navigation map. Because of the accuracy limits of the electronic navigation map, the navigation path cannot be resolved down to a specific lane in the target driving area from the map alone. The navigation path therefore needs to be refined to lane level using the acquired lane information before the relative bearing information of the navigation path with respect to the current position of the vehicle can be determined.
In an exemplary embodiment, determining the relative bearing information of the navigation path with respect to the current position of the vehicle based on the lane information and the navigation path includes: acquiring a lane model of the target driving area according to the lane information, and determining the position information corresponding to the navigation path according to the lane model; and determining the relative bearing information of the navigation path relative to the current position of the vehicle according to the position information corresponding to the navigation path.
The lane information includes driving-related environmental elements such as lanes and traffic signs, so a lane model of the target driving area can be obtained from it. In implementation, the lane model can be constructed in real time from the lane information. Alternatively, a plurality of lane models can be built in advance from multiple kinds of reference information and stored together with that reference information, where the reference information includes but is not limited to reference lane counts, reference traffic signs and the like. The acquired lane information is then matched against the reference information, and the lane model corresponding to the reference information with the highest matching degree is taken as the lane model of the target driving area.
After the lane model of the target driving area is obtained, the position information corresponding to the navigation path can be determined from it. This position information lets the terminal or the cloud platform know in which lane of the lane model the navigation path lies, refining the navigation path to a specific lane in the target driving area. Then, from the relative position and relative direction between the position corresponding to the navigation path and the current position of the vehicle, the relative bearing information of the navigation path with respect to the current position of the vehicle is obtained.
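One way to express this relative bearing computation, assuming positions in a local metric frame (a sketch; the patent does not fix a coordinate convention):

```python
import math

def relative_bearing(vehicle_xy: tuple[float, float],
                     heading_rad: float,
                     path_xy: tuple[float, float]) -> tuple[float, float]:
    """Return (distance_m, bearing_rad) of the lane-level path point relative
    to the vehicle; bearing is measured from the vehicle's heading,
    positive to the left."""
    dx = path_xy[0] - vehicle_xy[0]
    dy = path_xy[1] - vehicle_xy[1]
    distance = math.hypot(dx, dy)
    bearing = math.atan2(dy, dx) - heading_rad
    # Normalize the angle to (-pi, pi].
    bearing = (bearing + math.pi) % (2 * math.pi) - math.pi
    return distance, bearing
```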
As is apparent from the above description, the lane information is extracted from the area image of the target driving area captured by the camera device. When visibility in the target driving area is low, for example in rainy or foggy weather, the area image lacks definition and lane information such as lane lines and traffic signs is hard to extract from it. Likewise, if the target driving area has no lane lines, for example at an intersection, lane lines cannot be extracted from the area image at all. In both cases, the lane model cannot be successfully acquired from the lane information alone.
Therefore, referring to fig. 4, in an exemplary embodiment, acquiring the lane model of the target driving area according to the lane information includes: determining the information quality of the lane information, where the information quality indicates whether a lane model can be acquired from the lane information, the lane information being information acquired by the camera device; and in response to the information quality indicating that a lane model can be acquired, acquiring the lane model of the target driving area from the lane information. That is, after the lane information is acquired, its information quality is determined first. Taking lane lines as the lane information, the information quality can be judged from the clarity of the lane lines and their continuity. If the information quality indicates that a lane model can be obtained from the lane information, the lane model is then obtained from it.
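A hedged sketch of such a quality check, using the two cues the text names for lane lines, clarity and continuity; the thresholds and the data shape are assumptions:

```python
from dataclasses import dataclass

@dataclass
class LaneLine:
    detection_confidence: float  # 0..1, from the camera pipeline (clarity cue)
    observed_length_m: float     # length actually seen (continuity cue)
    expected_length_m: float     # length the line should span

def lane_info_quality_ok(lines: list[LaneLine],
                         min_clarity: float = 0.6,
                         min_continuity: float = 0.7) -> bool:
    """True if the lane model can be built from camera lane information alone."""
    if not lines:
        return False  # e.g. an intersection with no lane lines at all
    clarity = sum(l.detection_confidence for l in lines) / len(lines)
    continuity = (sum(l.observed_length_m for l in lines)
                  / sum(l.expected_length_m for l in lines))
    return clarity >= min_clarity and continuity >= min_continuity
```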
Accordingly, in response to the information quality indicating that the lane model cannot be acquired from the lane information, other information needs to be additionally collected and integrated with the lane information to obtain the lane model. In an exemplary embodiment, in response to the information quality indicating that the lane model cannot be obtained from the lane information, the target driving area is scanned by a radar device to obtain point cloud information, and the lane model of the target driving area is acquired from the lane information and the point cloud information together.
In implementation, the radar device may be a millimeter wave radar, and may also be a laser radar. Taking the millimeter wave radar as an example, the millimeter wave radar can emit millimeter waves along the driving direction of the vehicle, and the emitted millimeter waves can be reflected by surrounding objects such as fences, street lamps and buildings in the driving direction to form reflected millimeter waves. And then, the millimeter wave radar receives the reflected millimeter waves, so that point cloud information for describing surrounding objects is obtained according to the transmitted millimeter waves and the reflected millimeter waves.
It should be noted that the lane information is two-dimensional information describing objects, while the point cloud information is three-dimensional information describing objects. However, whether obtained by the camera device or by the radar device, both describe the same objects in the target driving area. The lane information and the point cloud information can therefore be combined to obtain the lane model of the target driving area. In this way, even when lane information is hard to extract, the lane model can still be acquired successfully by combining it with the point cloud information, which facilitates the subsequent steps.
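An illustrative fusion sketch, not the patent's exact algorithm: the point cloud is clustered into roadside objects (as the remote sensing imaging circuit does) and the clusters serve as corridor boundaries alongside whatever camera lane points survived. The clustering parameters and the left/right split are assumptions:

```python
import numpy as np
from sklearn.cluster import DBSCAN  # assumed available; any clusterer works

def lane_model_from_fusion(camera_lane_points: np.ndarray,
                           point_cloud: np.ndarray) -> dict:
    """camera_lane_points: (M, 2) ground-plane lane points from the camera;
    point_cloud: (N, 3) xyz points from the radar, vehicle at the origin,
    y positive to the left."""
    # Cluster the point cloud into stereo object information.
    labels = DBSCAN(eps=0.8, min_samples=10).fit_predict(point_cloud[:, :2])
    clusters = [point_cloud[labels == k] for k in set(labels) if k != -1]
    # Roadside clusters (fences, street lamps, buildings) bound the corridor.
    left = [c for c in clusters if c[:, 1].mean() > 0]
    right = [c for c in clusters if c[:, 1].mean() <= 0]
    return {"camera_lane_points": camera_lane_points,
            "left_boundary": left,
            "right_boundary": right}
```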
Step 3022, taking the current position of the vehicle as the starting point of the navigation arrow, determining the end point of the navigation arrow according to the relative bearing information, and determining the navigation arrow according to its starting point and end point.
After the relative bearing information of the navigation path with respect to the current position of the vehicle is determined, the navigation arrow for guiding the user can be determined from it. In implementation, the current position of the vehicle is used as the starting point of the navigation arrow, and the arrow's direction is matched to the relative bearing information, so that the end point of the arrow lies toward the position of the navigation path. The navigation arrow thus points from the vehicle's current position to the navigation path, and as the vehicle moves, the arrow's direction changes dynamically, navigating the user intuitively.
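Continuing the earlier bearing sketch, the arrow construction of step 3022 might look like this (the 8 m arrow length is an assumption):

```python
import math

def navigation_arrow(vehicle_xy: tuple[float, float],
                     heading_rad: float,
                     bearing_rad: float,
                     arrow_length_m: float = 8.0):
    """Start at the vehicle's current position; place the end point along
    the relative bearing so the arrow points toward the navigation path."""
    world_angle = heading_rad + bearing_rad
    end = (vehicle_xy[0] + arrow_length_m * math.cos(world_angle),
           vehicle_xy[1] + arrow_length_m * math.sin(world_angle))
    return vehicle_xy, end  # recomputed as the vehicle moves
```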
Step 303, projecting the navigation identifier onto the target driving area through the vehicle.
After the navigation identifier is determined, it can be projected onto the target driving area. Understandably, when step 303 is executed by the terminal of the vehicle to be navigated, projection can be performed directly according to the determined navigation identifier. When step 303 is executed by the terminal of another vehicle or by the cloud platform, after determining the navigation identifier they need to send information indicating it to the vehicle to be navigated, which then projects the navigation identifier according to the received information.
This embodiment exemplarily projects the navigation identifier with a lighting device of the vehicle, for example a vehicle headlamp. Alternatively, a projection device can be installed on the vehicle and used for the projection. Since the target driving area is the drivable area in the vehicle's driving direction, the navigation identifier can be projected onto the ground ahead of the vehicle, tying the identifier to the actual road surface and making it more intuitive for the user. The user can then drive according to the projected navigation identifier, and vehicle navigation is realized through it.
In an exemplary embodiment, the vehicle includes a plurality of sub-lighting devices distributed in an array. Projecting the navigation identifier onto the target driving area through the vehicle then includes: for any one of the sub-lighting devices, determining its lighting information according to the navigation identifier, where the lighting information indicates whether that sub-lighting device emits light; and controlling each sub-lighting device according to its lighting information so as to project the navigation identifier onto the target driving area. Navigation identifiers of various shapes can be formed in this way. For example, when the navigation path and the vehicle's current position lie in different lanes, the navigation identifier may be a straight navigation arrow instructing the user to change lanes; when the navigation path involves a turn or similar maneuver, the navigation identifier may be an arc-shaped navigation arrow instructing the user to drive along it.
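A minimal sketch of driving such an array, assuming a rectangular grid of sub-lighting devices and a pre-rasterized set of cells the arrow covers (the grid size and rasterization are illustrative):

```python
def lighting_info(arrow_cells: set[tuple[int, int]],
                  rows: int = 8, cols: int = 16) -> list[list[bool]]:
    """Per-device lighting information: True = the sub-lighting device
    at (row, col) emits light, i.e. the arrow covers its cell."""
    return [[(r, c) in arrow_cells for c in range(cols)] for r in range(rows)]

# Usage: a short diagonal run of cells approximating a lane-change arrow.
grid = lighting_info({(4, 2), (3, 3), (2, 4), (1, 5), (0, 6)})
# The control circuit would then switch each lamp according to grid[r][c].
```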
In an exemplary embodiment, projecting the navigation identifier onto the target driving area through the lighting device of the vehicle includes: acquiring illumination intensity information of the target driving area, determining the target brightness of the navigation identifier according to the illumination intensity information, and projecting the navigation identifier onto the target driving area at the target brightness through the lighting device.
The illumination intensity of the target driving area differs between day and night, and so does the brightness of the navigation identifier the user needs. By acquiring the illumination intensity information of the target driving area, night can be inferred when the information indicates weak illumination, so a higher brightness is used as the target brightness and the navigation identifier is projected brightly enough to be clearly visible at night. Conversely, daytime can be inferred when the information indicates strong illumination, so a lower brightness is used as the target brightness; projecting the identifier at lower brightness during the day consumes less power and avoids wasting electric energy.
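One plausible mapping from illumination intensity to target brightness, with assumed lux breakpoints and a linear ramp between the night and day cases described above:

```python
def target_brightness(ambient_lux: float,
                      night_lux: float = 50.0,
                      day_lux: float = 10_000.0,
                      night_brightness: float = 1.0,
                      day_brightness: float = 0.3) -> float:
    """Weak ambient light -> high projection brightness; strong -> low."""
    if ambient_lux <= night_lux:
        return night_brightness
    if ambient_lux >= day_lux:
        return day_brightness
    # Linear interpolation between the two breakpoints.
    t = (ambient_lux - night_lux) / (day_lux - night_lux)
    return night_brightness + t * (day_brightness - night_brightness)
```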
Additionally, in an exemplary embodiment, after navigating the vehicle according to the projected navigation mark, the method further comprises: in response to detecting that the vehicle has reached the end position of the target travel area, stopping projecting the navigation mark on the target travel area.
As described above, in this embodiment the acquisition of the lane information and the navigation path is triggered when the road complexity of the target driving area is determined to be greater than the reference level, which starts the vehicle navigation function of projecting the navigation identifier. Correspondingly, in response to detecting that the vehicle has reached the end position of the target driving area, the vehicle has driven out of the area of high road complexity, so projection of the navigation identifier can stop and the navigation function enters the sleep state until the vehicle reaches the next driving area whose road complexity is greater than the reference level, at which point the function is restarted.
Referring to fig. 4, after the navigation function enters the sleep state, the user may be prompted that it can be turned off by pressing a button, toggling a switch, or issuing a voice command. If the user does not turn the navigation function off before the vehicle is shut off, the function is turned on directly the next time the vehicle is started, so that navigation is conveniently provided to the user.
To sum up, the navigation identifier is determined from the lane information and the navigation path of the target driving area and projected into the target driving area, so that vehicle navigation is performed intuitively and the navigation identifier is easy for the user to understand. The method neither distracts the user nor lets the user choose a wrong path through misunderstanding, thereby improving navigation accuracy, achieving a good navigation effect, and improving the user's driving experience.
Based on the same concept, an embodiment of the application provides a navigation device. Referring to fig. 5, the navigation device includes:
an obtaining module 501, configured to obtain lane information of a target driving area corresponding to a vehicle and a navigation path of the vehicle in the target driving area;
a first determining module 502, configured to determine a navigation identifier according to the lane information and the navigation path;
and a projection module 503, configured to project the navigation identifier to the target driving area through the vehicle.
In an exemplary embodiment, the navigation identifier includes a navigation arrow, and the first determining module 502 is configured to determine relative bearing information of the navigation path with respect to the current position of the vehicle based on the lane information and the navigation path; take the current position of the vehicle as a starting point of the navigation arrow, determine an end point of the navigation arrow according to the relative bearing information, and determine the navigation arrow according to the starting point and the end point of the navigation arrow.
In an exemplary embodiment, the first determining module 502 is configured to obtain a lane model of the target driving area according to the lane information; determine the position information corresponding to the navigation path according to the lane model; and determine the relative bearing information of the navigation path relative to the current position of the vehicle according to the position information corresponding to the navigation path.
In an exemplary embodiment, the first determining module 502 is configured to determine the information quality of the lane information, the information quality indicating whether the lane model can be acquired according to the lane information, the lane information being information acquired by a camera device; and in response to the information quality indicating that the lane model can be acquired according to the lane information, obtain the lane model of the target driving area according to the lane information.
In an exemplary embodiment, the first determining module 502 is further configured to, in response to the information quality indicating that the lane model cannot be obtained according to the lane information, scan the target driving area through a radar device to obtain point cloud information; and acquire the lane model of the target driving area according to the lane information and the point cloud information.
In an exemplary embodiment, the apparatus further includes: a second determining module, configured to determine the road complexity of the target driving area; and in response to the road complexity being greater than the reference level, perform the obtaining of the lane information of the target driving area where the vehicle is located and the navigation path of the vehicle in the target driving area.
In an exemplary embodiment, the projection module 503 is configured to obtain illumination intensity information of the target driving area, determine the target brightness of the navigation identifier according to the illumination intensity information, and project the navigation identifier onto the target driving area at the target brightness through the lighting device.
In an exemplary embodiment, the apparatus further includes: a stopping module, configured to stop projecting the navigation identifier onto the target driving area in response to detecting that the vehicle has reached the end position of the target driving area.
In an exemplary embodiment, the vehicle includes a plurality of sub-lighting devices distributed in an array, and the projection module is configured to determine, for any one of the sub-lighting devices, its lighting information according to the navigation identifier, where the lighting information indicates whether that sub-lighting device emits light; and to control each sub-lighting device according to its lighting information, projecting the navigation identifier onto the target driving area.
To sum up, the navigation identifier is determined from the lane information and the navigation path of the target driving area and projected into the target driving area, so that vehicle navigation is performed intuitively and the navigation identifier is easy for the user to understand. The method neither distracts the user nor lets the user choose a wrong path through misunderstanding, thereby improving navigation accuracy, achieving a good navigation effect, and improving the user's driving experience.
It should be noted that, when the apparatus provided in the foregoing embodiment implements the functions thereof, only the division of the functional modules is illustrated, and in practical applications, the functions may be distributed by different functional modules according to needs, that is, the internal structure of the apparatus may be divided into different functional modules to implement all or part of the functions described above. In addition, the apparatus and method embodiments provided by the above embodiments belong to the same concept, and specific implementation processes thereof are described in the method embodiments for details, which are not described herein again.
Referring to fig. 6, a schematic structural diagram of a terminal 600 according to an embodiment of the present application is shown. The terminal 600 may be a portable mobile terminal such as a smart phone, a tablet computer, an MP3 player (Moving Picture Experts Group Audio Layer III), an MP4 player (Moving Picture Experts Group Audio Layer IV), a notebook computer, or a desktop computer. The terminal 600 may also be referred to by other names such as user equipment, portable terminal, laptop terminal, or desktop terminal.
In general, the terminal 600 includes: a processor 601 and a memory 602.
The processor 601 may include one or more processing cores, such as a 4-core processor or a 6-core processor. The processor 601 may be implemented in at least one hardware form of a DSP (Digital Signal Processor), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 601 may also include a main processor and a coprocessor; the main processor is a processor for processing data in an awake state, also called a CPU (Central Processing Unit), while the coprocessor is a low-power processor for processing data in a standby state. In some embodiments, the processor 601 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content to be displayed on the display screen 605. In some embodiments, the processor 601 may also include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
The memory 602 may include one or more computer-readable storage media, which may be non-transitory. The memory 602 may also include high-speed random access memory and non-volatile memory, such as one or more magnetic disk storage devices or flash storage devices. In some embodiments, a non-transitory computer-readable storage medium in the memory 602 is used to store at least one instruction for execution by the processor 601 to implement the vehicle navigation method provided by the method embodiments herein.
In some embodiments, the terminal 600 may further optionally include: a peripheral interface 603 and at least one peripheral. The processor 601, memory 602 and peripherals interface 603 may be connected by buses or signal lines. Various peripheral devices may be connected to the peripheral interface 603 via a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of the group consisting of a radio frequency circuit 604, a display 605, a camera 606, an audio circuit 607, a positioning component 608, and a power supply 609.
The peripheral interface 603 may be used to connect at least one peripheral related to I/O (Input/Output) to the processor 601 and the memory 602. In some embodiments, the processor 601, memory 602, and peripheral interface 603 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 601, the memory 602, and the peripheral interface 603 may be implemented on separate chips or circuit boards, which is not limited by the present embodiment.
The Radio Frequency circuit 604 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuitry 604 communicates with a communication network and other communication devices via electromagnetic signals. The rf circuit 604 converts an electrical signal into an electromagnetic signal to transmit, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 604 comprises: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuit 604 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: metropolitan area networks, various generations of mobile communication networks (2G, 3G, 4G, and 5G), wireless local area networks, and/or Wi-Fi (Wireless Fidelity) networks. In some embodiments, the rf circuit 604 may further include NFC (Near Field Communication) related circuits, which are not limited in this application.
The display screen 605 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 605 is a touch display screen, the display screen 605 also has the ability to capture touch signals on or above its surface. The touch signal may be input to the processor 601 as a control signal for processing. In that case, the display screen 605 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there may be one display screen 605, provided on the front panel of the terminal 600; in other embodiments, there may be at least two display screens 605, respectively disposed on different surfaces of the terminal 600 or in a foldable design; in still other embodiments, the display screen 605 may be a flexible display screen disposed on a curved or folded surface of the terminal 600. The display screen 605 may even be arranged in an irregular, non-rectangular shape, i.e., an irregularly shaped screen. The display screen 605 may be made of materials such as LCD (Liquid Crystal Display) or OLED (Organic Light-Emitting Diode).
The camera assembly 606 is used to capture images or video. Optionally, the camera assembly 606 includes a front camera and a rear camera. Generally, the front camera is disposed on the front panel of the terminal, and the rear camera is disposed on the rear surface of the terminal. In some embodiments, there are at least two rear cameras, each being any one of a main camera, a depth-of-field camera, a wide-angle camera, and a telephoto camera, so that the main camera and the depth-of-field camera can be fused to realize a background blurring function, and the main camera and the wide-angle camera can be fused to realize panoramic shooting, VR (Virtual Reality) shooting, or other fusion shooting functions. In some embodiments, the camera assembly 606 may also include a flash. The flash may be a single-color-temperature flash or a dual-color-temperature flash. A dual-color-temperature flash is a combination of a warm-light flash and a cold-light flash and can be used for light compensation at different color temperatures.
The audio circuitry 607 may include a microphone and a speaker. The microphone is used to collect sound waves from the user and the environment, convert them into electrical signals, and input them to the processor 601 for processing or to the radio frequency circuit 604 for voice communication. A plurality of microphones may be provided at different portions of the terminal 600 for stereo sound collection or noise reduction. The microphone may also be an array microphone or an omnidirectional pickup microphone. The speaker is used to convert electrical signals from the processor 601 or the radio frequency circuit 604 into sound waves. The speaker may be a traditional diaphragm speaker or a piezoelectric ceramic speaker. A piezoelectric ceramic speaker can not only convert an electrical signal into sound waves audible to humans, but also convert an electrical signal into sound waves inaudible to humans for purposes such as distance measurement. In some embodiments, the audio circuitry 607 may also include a headphone jack.
The positioning component 608 is used to determine the current geographic location of the terminal 600 to implement navigation or LBS (Location Based Service). The positioning component 608 may be based on the United States' GPS (Global Positioning System), the Chinese BeiDou system, the Russian GLONASS system, or the European Union's Galileo system.
The power supply 609 is used to supply power to the various components in the terminal 600. The power supply 609 may be alternating current, direct current, a disposable battery, or a rechargeable battery. When the power supply 609 includes a rechargeable battery, the rechargeable battery may support wired or wireless charging, and may also support fast-charging technology.
In some embodiments, the terminal 600 also includes one or more sensors 610. The one or more sensors 610 include, but are not limited to: acceleration sensor 611, gyro sensor 612, pressure sensor 613, fingerprint sensor 614, optical sensor 615, and proximity sensor 616.
The acceleration sensor 611 may detect the magnitude of acceleration on the three coordinate axes of a coordinate system established with respect to the terminal 600. For example, the acceleration sensor 611 may be used to detect the components of gravitational acceleration on the three coordinate axes. The processor 601 may control the display screen 605 to display the user interface in a landscape or portrait view according to the gravitational acceleration signal collected by the acceleration sensor 611. The acceleration sensor 611 may also be used to collect motion data of a game or of the user.
The gyro sensor 612 may detect a body direction and a rotation angle of the terminal 600, and the gyro sensor 612 and the acceleration sensor 611 may cooperate to acquire a 3D motion of the user on the terminal 600. The processor 601 may implement the following functions according to the data collected by the gyro sensor 612: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
The pressure sensor 613 may be disposed on a side bezel of the terminal 600 and/or under the display screen 605. When the pressure sensor 613 is disposed on the side bezel of the terminal 600, a holding signal of the user on the terminal 600 can be detected, and the processor 601 performs left/right-hand recognition or shortcut operations according to the holding signal collected by the pressure sensor 613. When the pressure sensor 613 is disposed under the display screen 605, the processor 601 controls the operability controls on the UI according to the user's pressure operations on the display screen 605. The operability controls include at least one of a button control, a scroll-bar control, an icon control, and a menu control.
The fingerprint sensor 614 is used to collect the user's fingerprint, and the processor 601 identifies the user according to the fingerprint collected by the fingerprint sensor 614, or the fingerprint sensor 614 itself identifies the user according to the collected fingerprint. Upon identifying the user's identity as trusted, the processor 601 authorizes the user to perform relevant sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, making payments, and changing settings. The fingerprint sensor 614 may be provided on the front, back, or side of the terminal 600. When a physical button or a vendor logo is provided on the terminal 600, the fingerprint sensor 614 may be integrated with the physical button or the vendor logo.
The optical sensor 615 is used to collect the ambient light intensity. In one embodiment, the processor 601 may control the display brightness of the display screen 605 based on the ambient light intensity collected by the optical sensor 615. Specifically, when the ambient light intensity is high, the display brightness of the display screen 605 is increased; when the ambient light intensity is low, the display brightness of the display screen 605 is reduced. In another embodiment, the processor 601 may also dynamically adjust the shooting parameters of the camera assembly 606 according to the ambient light intensity collected by the optical sensor 615.
The proximity sensor 616, also known as a distance sensor, is typically disposed on the front panel of the terminal 600. The proximity sensor 616 is used to measure the distance between the user and the front surface of the terminal 600. In one embodiment, when the proximity sensor 616 detects that the distance between the user and the front surface of the terminal 600 is gradually decreasing, the processor 601 controls the display screen 605 to switch from the bright-screen state to the dark-screen state; when the proximity sensor 616 detects that the distance between the user and the front surface of the terminal 600 is gradually increasing, the processor 601 controls the display screen 605 to switch from the dark-screen state to the bright-screen state.
Those skilled in the art will appreciate that the configuration shown in fig. 6 is not limiting of terminal 600 and may include more or fewer components than shown, or some components may be combined, or a different arrangement of components may be used.
The embodiment of the application provides a vehicle. The vehicle includes a lane information collector, a positioning sensor, a processor, and a light emitting device, the processor being electrically connected to the lane information collector, the positioning sensor, and the light emitting device, respectively. The lane information collector is used to obtain the lane information of the target driving area corresponding to the vehicle. The positioning sensor is used to obtain the navigation path of the vehicle in the target driving area. The processor is used to determine the navigation identifier according to the lane information and the navigation path. The light emitting device is used to project the navigation identifier onto the target driving area.
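For exposition, one control cycle of such a vehicle might look like the sketch below. The component interfaces (get_lane_info, current_position, next_waypoint, project_arrow) are invented placeholders, since the embodiment does not prescribe concrete APIs; the arrow construction follows the scheme above, with the current position as the start point and the relative orientation fixing the end point.

```python
import math


def navigation_arrow(vehicle_xy, waypoint_xy):
    """Arrow from the vehicle's current position toward the next waypoint:
    returns the start point, the end point, and the heading in radians
    (the relative orientation of the path to the current position)."""
    (x0, y0), (x1, y1) = vehicle_xy, waypoint_xy
    heading = math.atan2(y1 - y0, x1 - x0)
    return (x0, y0), (x1, y1), heading


def navigation_cycle(collector, positioner, lights):
    lane_info = collector.get_lane_info()      # lane information collector
    position = positioner.current_position()   # positioning sensor
    waypoint = positioner.next_waypoint()      # next point on the path
    start, end, heading = navigation_arrow(position, waypoint)
    # Light emitting device projects the arrow onto the target driving area.
    lights.project_arrow(start, end, heading, lane_info)
```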
The embodiment of the application provides an electronic device, which comprises a processor and a memory electrically connected with the processor, wherein the memory is used for storing a computer program, and the processor is used for calling the computer program to realize the vehicle navigation method provided by any one of the exemplary embodiments of the application.
The embodiment of the application provides a computer-readable storage medium, wherein at least one instruction is stored in the storage medium, and the instruction is loaded and executed by a processor to realize the vehicle navigation method provided by any one of the exemplary embodiments of the application.
All the above optional technical solutions may be combined arbitrarily to form optional embodiments of the present application, and are not described in detail herein.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, and the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is intended only to be exemplary of the present application and should not be taken as limiting the present application, and any modifications, equivalents, improvements and the like that are made within the spirit and principle of the present application should be included in the scope of the present application.

Claims (7)

1. A method for navigating a vehicle, the method comprising:
acquiring lane information of a target driving area corresponding to a vehicle and a navigation path of the vehicle in the target driving area;
determining a navigation identifier according to the lane information and the navigation path;
projecting the navigation identifier onto the target driving area through the vehicle;
wherein the navigation identifier comprises a navigation arrow, and the determining a navigation identifier according to the lane information and the navigation path comprises: determining relative orientation information of the navigation path relative to the current position of the vehicle based on the lane information and the navigation path; taking the current position of the vehicle as a starting point of the navigation arrow, determining an end point of the navigation arrow according to the relative orientation information, and determining the navigation arrow according to the starting point and the end point of the navigation arrow;
wherein the determining relative orientation information of the navigation path relative to the current position of the vehicle based on the lane information and the navigation path comprises: acquiring a lane model of the target driving area according to the lane information; determining position information corresponding to the navigation path according to the lane model; and determining the relative orientation information of the navigation path relative to the current position of the vehicle according to the position information corresponding to the navigation path;
wherein the acquiring a lane model of the target driving area according to the lane information comprises: determining the information quality of the lane information, the information quality being used to indicate whether the lane model can be acquired according to the lane information, the lane information being acquired by an image pickup apparatus; and in response to the information quality indicating that the lane model can be acquired according to the lane information, acquiring the lane model of the target driving area according to the lane information.
2. The method of claim 1, further comprising:
in response to the information quality indicating that the lane model cannot be acquired according to the lane information, scanning the target driving area by a radar device to obtain point cloud information;
and acquiring a lane model of the target driving area according to the lane information and the point cloud information.
3. The method according to claim 1 or 2, wherein the vehicle comprises a plurality of sub-lighting devices distributed in an array, and the projecting the navigation identifier onto the target driving area through the vehicle comprises:
for any one of the sub-lighting devices, determining lighting information of that sub-lighting device according to the navigation identifier, the lighting information being used to indicate whether that sub-lighting device emits light;
and controlling each sub-lighting device according to its lighting information, so as to project the navigation identifier onto the target driving area.
4. A navigation device, characterized in that the navigation device comprises:
the system comprises an acquisition module, a navigation module and a processing module, wherein the acquisition module is used for acquiring lane information of a target driving area corresponding to a vehicle and a navigation path of the vehicle in the target driving area;
the first determining module is used for determining a navigation mark according to the lane information and the navigation path;
the projection module is used for projecting the navigation identifier to the target driving area through the vehicle;
wherein the navigation identifier includes a navigation arrow, and the first determining module is specifically configured to, in the process of determining the navigation identifier according to the lane information and the navigation path: determining relative position information of the navigation path relative to the current position of the vehicle based on the lane information and the navigation path; taking the current position of the vehicle as a starting point of the navigation arrow, determining an end point of the navigation arrow according to the relative azimuth information, and determining the navigation arrow according to the starting point and the end point of the navigation arrow;
wherein, in the process of determining the relative orientation information of the navigation path with respect to the current position of the vehicle based on the lane information and the navigation path, the first determining module is specifically configured to: acquiring a lane model of the target driving area according to the lane information; determining position information corresponding to the navigation path according to the lane model; determining the relative azimuth information of the navigation path relative to the current position of the vehicle according to the position information corresponding to the navigation path;
wherein, in the process of acquiring the lane model of the target driving area according to the lane information, the first determining module is specifically configured to: determining the information quality of the lane information, wherein the information quality is used for indicating whether the lane model can be acquired according to the lane information, and the lane information is acquired through a camera device; and responding to the information quality indication that the lane model can be obtained according to the lane information, and obtaining the lane model of the target driving area according to the lane information.
5. A vehicle is characterized by comprising a lane information collector, a positioning sensor, a processor and a light emitting device, wherein the processor is electrically connected with the lane information collector, the positioning sensor and the light emitting device respectively;
the lane information collector is used for obtaining lane information of a target driving area corresponding to the vehicle;
the positioning sensor is used for acquiring a navigation path of the vehicle in the target driving area;
the processor is used for determining a navigation identifier according to the lane information and the navigation path;
the light emitting device is used for projecting the navigation identifier onto the target driving area;
wherein the navigation identifier includes a navigation arrow, and in the process of determining the navigation identifier according to the lane information and the navigation path, the processor is specifically configured to: determine relative orientation information of the navigation path relative to the current position of the vehicle based on the lane information and the navigation path; take the current position of the vehicle as a starting point of the navigation arrow, determine an end point of the navigation arrow according to the relative orientation information, and determine the navigation arrow according to the starting point and the end point of the navigation arrow;
wherein, in the process of determining the relative orientation information of the navigation path relative to the current position of the vehicle based on the lane information and the navigation path, the processor is specifically configured to: acquire a lane model of the target driving area according to the lane information; determine position information corresponding to the navigation path according to the lane model; and determine the relative orientation information of the navigation path relative to the current position of the vehicle according to the position information corresponding to the navigation path;
wherein, in the process of acquiring the lane model of the target driving area according to the lane information, the processor is specifically configured to: determine the information quality of the lane information, the information quality being used to indicate whether the lane model can be acquired according to the lane information, the lane information being acquired by an image pickup apparatus; and in response to the information quality indicating that the lane model can be acquired according to the lane information, acquire the lane model of the target driving area according to the lane information.
6. An electronic device, comprising a processor and a memory electrically connected to the processor, the memory configured to store a computer program, the processor configured to invoke the computer program to implement the vehicle navigation method of any of claims 1-3.
7. A storage medium having stored therein at least one instruction that is loaded and executed by a processor to implement the vehicle navigation method of any one of claims 1-3.
CN202010524245.XA 2020-06-10 2020-06-10 Vehicle navigation method, device, vehicle, electronic equipment and storage medium Active CN111854780B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010524245.XA CN111854780B (en) 2020-06-10 2020-06-10 Vehicle navigation method, device, vehicle, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111854780A CN111854780A (en) 2020-10-30
CN111854780B (en) 2022-10-11

Family

ID=72987618

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010524245.XA Active CN111854780B (en) 2020-06-10 2020-06-10 Vehicle navigation method, device, vehicle, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111854780B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI766647B (en) * 2021-04-19 2022-06-01 逢甲大學 Virtual Navigation Path Generation Method
CN113587937A (en) * 2021-06-29 2021-11-02 阿波罗智联(北京)科技有限公司 Vehicle positioning method and device, electronic equipment and storage medium
CN113670323B (en) * 2021-08-17 2024-05-17 京东鲲鹏(江苏)科技有限公司 Method, device, equipment and medium for determining target area
CN113992709B (en) * 2021-10-12 2023-11-03 北京理工新源信息科技有限公司 Intelligent vehicle-mounted early warning terminal equipment based on 5G communication
CN114563788B (en) * 2022-02-17 2024-08-27 英博超算(南京)科技有限公司 Unmanned system based on single-line laser radar and millimeter wave radar
CN117990117A (en) * 2022-10-31 2024-05-07 华为技术有限公司 Navigation method, navigation device, navigation system and vehicle

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2505962A1 (en) * 2011-03-31 2012-10-03 Technisat Digital Gmbh Method for operating a navigation device in a vehicle and navigation device
CN104422462A (en) * 2013-09-06 2015-03-18 上海博泰悦臻电子设备制造有限公司 Vehicle navigation method and vehicle navigation device
CN106096525A (en) * 2016-06-06 2016-11-09 重庆邮电大学 A kind of compound lane recognition system and method
CN109141464A (en) * 2018-09-30 2019-01-04 百度在线网络技术(北京)有限公司 Navigate lane change reminding method and device
CN110375764A (en) * 2019-07-16 2019-10-25 中国第一汽车股份有限公司 Lane change reminding method, system, vehicle and storage medium
CN111256726A (en) * 2020-01-19 2020-06-09 北京无限光场科技有限公司 Navigation device and method, terminal and storage medium

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4572823B2 (en) * 2005-11-30 2010-11-04 アイシン・エィ・ダブリュ株式会社 Route guidance system and route guidance method
JP2011154003A (en) * 2010-01-28 2011-08-11 Sony Corp Navigation apparatus, navigation method and program
KR101843773B1 (en) * 2015-06-30 2018-05-14 엘지전자 주식회사 Advanced Driver Assistance System, Display apparatus for vehicle and Vehicle
CN106441319B (en) * 2016-09-23 2019-07-16 中国科学院合肥物质科学研究院 A kind of generation system and method for automatic driving vehicle lane grade navigation map
CN107521411B (en) * 2017-07-18 2020-07-17 吉林大学 Lane-level navigation augmented reality device for assisting driver
CN110988954A (en) * 2019-12-23 2020-04-10 长安大学 HUD dynamic navigation method for complex turnout


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant