
CN112836586A - Intersection information determination method, system and device

Info

Publication number
CN112836586A
CN112836586A (application number CN202110015030.XA)
Authority
CN
China
Prior art keywords
intersection
area
information
data
detection model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110015030.XA
Other languages
Chinese (zh)
Other versions
CN112836586B
Inventor
Liu Guoping (刘国平)
Tang Jianbo (唐建波)
Luo Bin (罗斌)
Wen Xiang (温翔)
Hu Runbo (胡润波)
Deng Min (邓敏)
Ma Nan (马楠)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Didi Infinity Technology and Development Co Ltd
Original Assignee
Beijing Didi Infinity Technology and Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Didi Infinity Technology and Development Co Ltd
Priority to CN202110015030.XA
Publication of CN112836586A
Application granted
Publication of CN112836586B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588 Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/01 Detecting movement of traffic to be counted or controlled
    • G08G1/0104 Measuring and analyzing of parameters relative to traffic conditions


Abstract

The present specification provides an intersection information determination method, including: acquiring vehicle trajectory data in an area to be detected; acquiring a trained intersection detection model, the trained intersection detection model being obtained by training on sample vehicle trajectory data in a sample area; and determining intersection information of the area to be detected based on the vehicle trajectory data and the trained intersection detection model, where the intersection information includes position information and range information of an intersection.

Description

Intersection information determination method, system and device
Technical Field
The present specification relates to the technical field of intersection identification, and in particular to an intersection information determination method, system, and device based on a deep neural network.
Background
With the development of cities and the construction of roads, it is necessary to update road network data frequently. Intersections are key nodes of a road network and are important basic data for path planning. The detection of the intersection is a key link in the topology reconstruction of the road network, and the accurate acquisition of the spatial position, the range and the topological structure of the intersection is a key step for constructing the urban high-precision navigation road network. Therefore, it is desirable to provide a more accurate intersection information determination method that can adapt to data environments with high noise, different sampling frequencies, heterogeneous track density distribution, and the like.
A deep neural network (DNN) is a neural network with at least one hidden layer. Like a shallow neural network, a deep neural network can model complex nonlinear systems, but the additional layers give the model greater modeling capability and thus a higher recognition rate for targets. Common deep neural networks include convolutional neural networks (CNNs), fully convolutional networks (FCNs), and the like. Deep neural networks have strong feature learning ability and are widely applied in the field of image target recognition.
Disclosure of Invention
One of the embodiments of the present application provides an intersection information determination method, executed by at least one processor, including: acquiring vehicle trajectory data in an area to be detected; acquiring a trained intersection detection model, the trained intersection detection model being obtained by training on sample vehicle trajectory data in a sample area; and determining intersection information of the area to be detected based on the vehicle trajectory data and the trained intersection detection model, where the intersection information includes position information and range information of an intersection.
One of the embodiments of the present application provides an intersection information determination system including an acquisition module and a detection module. The acquisition module is configured to acquire vehicle trajectory data in an area to be detected, and further to acquire a trained intersection detection model obtained by training on sample vehicle trajectory data in a sample area. The detection module is configured to determine intersection information of the area to be detected based on the vehicle trajectory data and the trained intersection detection model, the intersection information including position information and range information of an intersection.
One of the embodiments of the present application provides an intersection information determination apparatus including at least one processor and at least one memory; the at least one memory is configured to store instructions; and the at least one processor is configured to execute the instructions to implement the method of any one of the above.
One embodiment of the present application provides a computer-readable storage medium, where the storage medium stores computer instructions, and when the computer reads the computer instructions in the storage medium, the computer executes the method described in any one of the above.
One of the embodiments of the present application provides a computer program product including a computer program or instructions which, when executed by a processor, implement the steps of any one of the methods described above.
Drawings
The present description will be further described by way of exemplary embodiments, which will be described in detail by way of the accompanying drawings. These embodiments are not intended to be limiting, and in these embodiments like numerals are used to indicate like structures, wherein:
FIG. 1 is a schematic diagram of an application scenario of an intersection information determination system according to some embodiments of the present application;
FIG. 2 is an exemplary block diagram of an intersection information determination system according to some embodiments of the present application;
FIG. 3 is an exemplary flow chart of an intersection information determination method according to some embodiments of the present application;
FIG. 4 is an exemplary flow chart of an intersection detection model training process according to some embodiments of the present application;
FIG. 5 is a schematic diagram of a trained intersection detection model according to some embodiments of the present application;
FIG. 6 is a schematic structural diagram of an intersection detection model according to some embodiments of the present application;
FIG. 7 is a diagram illustrating the results of rasterizing taxi GPS trajectory data within an urban area;
FIG. 8 is a schematic diagram of the process and results of manually labeling intersections on rasterized images generated after framing of certain trajectory data;
FIG. 9 is a schematic diagram of the detection results for one frame of trajectory data and its intersection information in an urban area.
Detailed Description
In order to more clearly illustrate the technical solutions of the embodiments of the present specification, the drawings used in the description of the embodiments are briefly introduced below. The drawings in the following description are only examples or embodiments of the present specification; on the basis of these drawings, a person skilled in the art can apply the present specification to other similar scenarios without inventive effort. Unless otherwise apparent from the context or otherwise indicated, like reference numbers in the figures refer to the same structure or operation.
It should be understood that "system," "device," "unit," and/or "module" as used in this specification are terms for distinguishing different components, elements, parts, or assemblies at different levels. However, other words may be substituted if they accomplish the same purpose.
As used in this specification and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. In general, the terms "comprises" and "comprising" merely indicate that the explicitly identified steps and elements are included; the steps and elements do not form an exclusive list, and a method or apparatus may also include other steps or elements.
Flow charts are used in this specification to illustrate operations performed by a system according to embodiments of the present specification. It should be understood that the operations are not necessarily performed exactly in the order shown; the steps may instead be processed in reverse order or simultaneously. Other operations may also be added to the processes, or one or more steps may be removed from them.
Fig. 1 is a schematic diagram of an application scenario of an exemplary intersection information determination system according to some embodiments of the present application.
As shown in fig. 1, the intersection information determination system 100 may include a server 110, a network 120, a terminal device 130, a driver device 140, a vehicle 150, a storage device 160, and a locating device 170.
In some embodiments, the server 110 may be used to process information and/or data related to the intersection information determination. The server 110 may be a computer server. In some embodiments, the server 110 may be a single server or a group of servers. The server group may be a centralized server group connected to the network 120 via an access point, or a distributed server group respectively connected to the network 120 via one or more access points. In some embodiments, server 110 may be connected locally to network 120 or remotely from network 120. For example, the server 110 can access information and/or data stored in the terminal device 130, the driver device 140, and/or the storage device 160 via the network 120. As another example, storage device 160 may serve as back-end storage for server 110. In some embodiments, the server 110 may be implemented on a cloud platform. By way of example only, the cloud platform may include a private cloud, a public cloud, a hybrid cloud, a community cloud, a distributed cloud, an intermediate cloud, a multi-cloud, and the like, or any combination thereof.
In some embodiments, the server 110 may include a processing device 112. Processing device 112 may process information and/or data related to performing one or more of the functions described herein. For example, the processing device 112 may periodically (e.g., 1 time/second) receive positioning information transmitted from the vehicle 150 (e.g., the communication device and/or the driver device 140 mounted on the vehicle 150) and store it in the storage device 160 to form trajectory data for the vehicle. For another example, the processing device 112 may obtain vehicle trajectory data for a sample area from the storage device 160 and train an intersection detection model using the vehicle trajectory data for the sample area. As another example, the processing device 112 may obtain an intersection information display request from the terminal device 130. In some embodiments, the processing device 112 may include one or more processing units (e.g., single core processing engines or multiple core processing engines). By way of example only, the processing device 112 may include a Central Processing Unit (CPU), an Application Specific Integrated Circuit (ASIC), an application specific instruction set processor (ASIP), a Graphics Processing Unit (GPU), a Physical Processing Unit (PPU), a Digital Signal Processor (DSP), a Field Programmable Gate Array (FPGA), a Programmable Logic Device (PLD), a controller, a micro-controller unit, a Reduced Instruction Set Computer (RISC), a microprocessor, or the like, or any combination thereof.
Network 120 may facilitate the exchange of information and/or data. In some embodiments, one or more components of the intersection information determination system 100 (e.g., the server 110, the terminal device 130, the driver device 140, the vehicle 150, the storage device 160) may send information and/or data to other components of the intersection information determination system 100 via the network 120. For example, server 110 may access and/or obtain vehicle trajectory data within the area under test, sample vehicle trajectory data within the sample area from storage device 160 via network 120. In some embodiments, the network 120 may be any type or combination of wired or wireless network. By way of example only, network 120 may include a cable network, a wired network, a fiber optic network, a telecommunications network, an intranet, the internet, a Local Area Network (LAN), a Wide Area Network (WAN), a Wireless Local Area Network (WLAN), a Metropolitan Area Network (MAN), a Public Switched Telephone Network (PSTN), a bluetooth network, a ZigBee network, a Near Field Communication (NFC) network, the like, or any combination thereof. In some embodiments, network 120 may include one or more network access points. For example, the network 120 may include wired or wireless network access points, such as base stations and/or Internet switching points 120-1, 120-2, etc. One or more components of the intersection information determination system 100 may connect to the network 120 through a network access point to exchange data and/or information.
The terminal device 130 may enable user interaction with the intersection information determination system 100. For example, the user may transmit an intersection information display request through the terminal device 130. The server 110 may determine the area to be detected based on the service request. In some embodiments, the terminal device 130 may also output prompt information to the user based on the intersection information determined by the server 110. In some embodiments, the terminal device 130 may include a mobile device 130-1, a tablet computer 130-2, a laptop computer 130-3, a built-in vehicle device 130-4, or the like, or any combination thereof. In some embodiments, the mobile device 130-1 may include a smart home device, a wearable device, a smart mobile device, a virtual reality device, an augmented reality device, or the like, or any combination thereof. In some embodiments, the smart home device may include a smart lighting device, a control device for a smart electrical appliance, a smart monitoring device, a smart television, a smart camera, an intercom, or the like, or any combination thereof. In some embodiments, the wearable device may include a smart bracelet, smart footwear, smart glasses, a smart helmet, a smart watch, smart clothing, a smart backpack, a smart accessory, or the like, or any combination thereof. In some embodiments, the smart mobile device may include a smartphone, a personal digital assistant (PDA), a gaming device, a navigation device, a point-of-sale (POS) device, or the like, or any combination thereof. In some embodiments, the virtual reality device and/or the augmented reality device may include a virtual reality helmet, virtual reality glasses, a virtual reality eye mask, an augmented reality helmet, augmented reality glasses, an augmented reality eye mask, or the like, or any combination thereof. For example, the virtual reality device and/or the augmented reality device may include Google Glass™, Oculus Rift™, HoloLens™, Gear VR™, and the like. In some embodiments, the terminal device 130 may include a location-enabled device to determine the location of the user and/or the terminal device 130.
The driver devices 140 can include at least two driver devices 140-1, 140-2, …, 140-n. In some embodiments, the driver device 140 can be similar to or the same as the terminal device 130. In some embodiments, the driver device 140 includes a means with a positioning function to determine the location of the driver device 140. The driver device 140 can upload its position information to the server 110 or the storage device 160 to provide vehicle trajectory data for determining intersection information. In some embodiments, the driver device 140 can include one or any combination of a mobile device, a tablet computer, a laptop computer, a built-in vehicle device, and the like.
In some embodiments, the driver device 140 can correspond to one or more vehicles 150. The vehicles 150 may include at least two vehicles, such as vehicles 150-1, 150-2, …, 150-n. In some embodiments, the vehicle 150 may include a compact car, a van, a truck, or the like. In some embodiments, the vehicle 150 may include a private car, a taxi, and the like. In some embodiments, the vehicle 150 may include a manned vehicle and/or an unmanned autonomous vehicle; the present specification does not limit the type of the vehicle 150.
Storage device 160 may store data and/or instructions. In some embodiments, storage device 160 may store data and/or instructions that server 110 may execute to provide the methods or steps described herein. In some embodiments, storage device 160 may store data associated with vehicle 150, such as location information, log information, and the like associated with vehicle 150. In some embodiments, one or more components in the intersection information determination system 100 may access data or instructions stored in the storage device 160 via the network 120. In some embodiments, storage device 160 may be connected directly to server 110 as back-end storage. In some embodiments, storage device 160 may include mass storage, removable storage, volatile read-write memory, read-only memory (ROM), etc., or any combination thereof. Exemplary mass storage devices may include magnetic disks, optical disks, solid state drives, and the like. Exemplary removable memory may include flash drives, floppy disks, optical disks, memory cards, compact disks, magnetic tape, and the like. Exemplary volatile read and write memories can include Random Access Memory (RAM). Exemplary RAM may include Dynamic RAM (DRAM), double data rate synchronous dynamic RAM (DDR SDRAM), Static RAM (SRAM), thyristor RAM (T-RAM), zero capacitor RAM (Z-RAM), and the like. Exemplary ROMs may include Mask ROM (MROM), Programmable ROM (PROM), Erasable Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), compact disk ROM (CD-ROM), digital versatile disk ROM, and the like. In some embodiments, storage device 160 may be implemented on a cloud platform. By way of example only, the cloud platform may include a private cloud, a public cloud, a hybrid cloud, a community cloud, a distributed cloud, an intermediate cloud, a multi-cloud, and the like, or any combination thereof.
The locating device 170 can determine information related to the object (e.g., the terminal device 130, the driver device 140, the vehicle 150, etc.). For example, the locating device 170 may determine the current time and location of the user through the terminal device 130. As another example, the locating device 170 can determine trajectory data of the vehicle 150 via the driver device 140 or the vehicle 150. The information may include the position, altitude, speed or acceleration of the object (e.g., vehicle), and/or the current time. The location may be in the form of coordinates, such as latitude and longitude coordinates, and the like. In some embodiments, the positioning device 170 may be the Global Positioning System (GPS), the global navigation satellite system (GLONASS), the COMPASS navigation system (COMPASS), the beidou navigation satellite system, the galileo positioning system, the quasi-zenith satellite system (QZSS), or the like. In some embodiments, positioning device 170 may include one or more satellites, such as satellite 170-1, satellite 170-2, and satellite 170-3. Satellite 170-1, satellite 170-2, and satellite 170-3 may determine the above information independently or collectively. In some embodiments, the locating device 170 can send the information directly to the terminal device 130, the driver device 140, or the vehicle 150, or can send the information to the server 110 via the network 120.
It should be noted that the above description is merely for convenience and should not be taken as limiting the scope of the present application. It will be understood by those skilled in the art that, with the benefit of the teachings of this system, various modifications and changes in form and detail may be made to the above method and system and to their fields of application without departing from these teachings.
FIG. 2 is an exemplary block diagram of an intersection information determination system according to some embodiments of the present application.
As shown in fig. 2, the intersection information determination system 200 may include an acquisition module 210, a processing module 220, a training module 230, and a detection module 240.
The acquisition module 210 may be used to acquire vehicle trajectory data within the area under test. In some embodiments, the acquisition module 210 may also acquire vehicle trajectory data within the sample area. In some embodiments, the acquisition module 210 may also be used to acquire a trained intersection detection model.
The processing module 220 may be used to process the acquired vehicle trajectory data within the area under test and/or sample vehicle trajectory data within the sample area. For example, the processing module 220 may perform denoising processing on the vehicle trajectory data in the region to be detected according to a preset condition. The processing module 220 may divide the denoised vehicle track data into a plurality of sub-region track data according to the region range of the denoised vehicle track data. The processing module 220 may perform rasterization processing on each sub-region trajectory data to obtain sub-region raster data.
The training module 230 may be used to train the intersection detection model. For example, the training module may train an intersection detection model based on sample vehicle trajectory data within the sample area. The intersection detection model may include a feature extraction block, a feature fusion and semantic segmentation block, and a feature map analysis block. In some embodiments, the training module 230 is configured to: preprocess the sample vehicle trajectory data in the sample area to obtain a plurality of sample rasterized trajectory images; acquire marking information for each sample rasterized trajectory image and generate marked sample rasterized trajectory images; and train the initial intersection detection model based on the marked sample rasterized trajectory images to obtain the trained intersection detection model. In some embodiments, the marking information includes identification areas, where intersections of different ranges correspond to identification areas of different sizes. The training module is further configured to: mark trajectory points within an identification area with a first label, the first label indicating that the corresponding trajectory points are within the intersection range; and mark trajectory points outside the identification area with a second label, the second label indicating that the corresponding trajectory points are outside the intersection range.
The detection module 240 may be configured to determine intersection information of the area to be detected, including position information and range information of the intersections in the area, based on the vehicle trajectory data in the area to be detected and the trained intersection detection model. In some embodiments, the detection module is further configured to: input the sub-region raster data into the trained intersection detection model to obtain initial intersection information, the initial intersection information reflecting the intersection conditions in the sub-region raster data; and determine the intersection information of the area to be detected based on the initial intersection information and the correspondence between the sub-region raster data and the area to be detected.
It should be understood that the system and its modules shown in FIG. 2 may be implemented in a variety of ways. For example, in some embodiments, the system and its modules may be implemented in hardware, software, or a combination of software and hardware. Wherein the hardware portion may be implemented using dedicated logic; the software portions may be stored in a memory for execution by a suitable instruction execution system, such as a microprocessor or specially designed hardware. Those skilled in the art will appreciate that the methods and systems described above may be implemented using computer executable instructions and/or embodied in processor control code, such code being provided, for example, on a carrier medium such as a diskette, CD-or DVD-ROM, a programmable memory such as read-only memory (firmware), or a data carrier such as an optical or electronic signal carrier. The system and its modules of the present application may be implemented not only by hardware circuits such as very large scale integrated circuits or gate arrays, semiconductors such as logic chips, transistors, or programmable hardware devices such as field programmable gate arrays, programmable logic devices, etc., but also by software executed by various types of processors, for example, or by a combination of the above hardware circuits and software (e.g., firmware).
It should be noted that the above description of the system and its modules is merely for convenience of description and should not limit the present application to the scope of the illustrated embodiments. It will be appreciated by those skilled in the art that, given the teachings of the present system, any combination of modules or sub-system configurations may be used to connect to other modules without departing from such teachings. For example, in some embodiments, the obtaining module 210 and the processing module 220 disclosed in fig. 2 may be different modules in a system, or may be a module that implements the functions of two or more modules described above. For another example, each module may share one storage device 160, and each module may have its own storage device 160. Such variations are within the scope of the present application.
Fig. 3 is an exemplary flow chart of an intersection information determination method according to some embodiments of the present application.
In some embodiments, the intersection information determination method 300 may be performed by processing logic that may include hardware (e.g., circuitry, dedicated logic, programmable logic, microcode, etc.), software (instructions run on a processing device to perform hardware simulation), etc., or any combination thereof. One or more of the operations illustrated in fig. 3 for determining intersection information may be implemented by the intersection information determination system 100 illustrated in fig. 1 or the intersection information determination system 200 illustrated in fig. 2. For example, the intersection information determination method 300 may be stored in the storage device 160 in the form of instructions and invoked and/or executed by the processing device 112.
Step 310: acquire vehicle trajectory data in the area to be detected. In some embodiments, step 310 may be performed by the acquisition module 210 in the system 200.
The area to be detected is an area in which intersection information needs to be determined. In some embodiments, the area to be detected may be obtained by one or a combination of division methods, such as division by grid, division by clustering, or division by a specific rule. For example, the area to be detected may be divided based on one or more of an administrative district (e.g., West Lake District), latitude and longitude, a business district, a building, a street name, and the like. In some embodiments, the area to be detected may be designated by a user. For example, the user may transmit an intersection information display request through a terminal device (e.g., the terminal device 130). In response to the request, the server 110 may determine an area within a certain distance (e.g., 5 km, 10 km) of the current location of the user or the terminal device as the area to be detected.
In some embodiments, vehicle trajectory data within the area under test may be obtained from a database. The vehicle trajectory data may include latitude and longitude of the trajectory location, sampling point time (e.g., the time point at which the trajectory data is uploaded), sampling point separation distance, sampling point vehicle speed, trajectory length, etc., or any combination thereof. In some embodiments, the vehicle trajectory data may be trajectory data of vehicle travel within the area under test over a period of time, e.g., a day, a week, a month, a half year, a year, etc.
In some embodiments, the trajectory data in the database may be collected via tracking instrumentation ("embedded points") in the driver device 140, with the vehicle's trajectory acquired by a positioning system (e.g., a GPS system) while the driver device 140 is navigating. In some embodiments, a positioning system (e.g., a GPS system) may be installed in the vehicle and acquire trajectory data while the vehicle travels. In some embodiments, the trajectory data may be uploaded to the server in real time and recorded in the database in real time. Alternatively, the database may be updated at intervals, for example, every hour.
In some embodiments, the trajectory data set may be defined as S = {T_1, T_2, …, T_n}, where T_j denotes the j-th vehicle trajectory and n is the total number of input trajectories. Each trajectory is T_j = {P_1, P_2, …, P_m}, where P_i denotes the i-th sampling point of trajectory T_j and m is the total number of sampling points in T_j. Each sampling point P_i is a five-tuple, P_i = (x_i, y_i, t_i, o_i, v_i), where x_i and y_i are the X and Y coordinate values of P_i (unit: meters), t_i is the sampling time (unit: seconds), o_i is the vehicle heading recorded at P_i (measured clockwise from true north, in degrees), and v_i is the vehicle travel speed recorded at P_i (unit: km/h).
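The five-tuple definition above maps directly onto a small data structure. Below is a minimal Python sketch; the names SamplePoint and Trajectory are illustrative, not from the patent:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class SamplePoint:
    x: float   # X coordinate in meters (projected)
    y: float   # Y coordinate in meters (projected)
    t: float   # sampling time in seconds
    o: float   # vehicle heading in degrees, clockwise from true north
    v: float   # vehicle speed in km/h

# A trajectory T_j is an ordered list of sampling points P_1..P_m,
# and the data set S is a list of n trajectories.
Trajectory = List[SamplePoint]
TrajectorySet = List[Trajectory]
```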
In some embodiments, after acquiring vehicle trajectory data within the area under test, the vehicle trajectory data may be pre-processed. The pre-processing may be performed by the processing module 220. The preprocessing may include one or more of coordinate transformation processing, de-noising processing, framing processing, rasterization processing, and the like.
In some embodiments, to facilitate calculation of straight-line distances between sampling points, the vehicle trajectory data may be subjected to coordinate conversion. For example, when the vehicle trajectory data (e.g., GPS trajectory data) is in the WGS-84 latitude/longitude coordinate system, the processing module 220 may use coordinate conversion software (e.g., QGIS) to convert the GPS trajectory coordinates to spherical Mercator projection coordinates. In some embodiments, the coordinate unit of the converted trajectory sampling points is meters, which yields regular (rectangular) regions during the data framing step.
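As an illustration of this conversion step, the WGS-84 to spherical Mercator transform can also be done in code rather than in QGIS. The sketch below assumes the pyproj library and the standard EPSG codes (4326 for WGS-84, 3857 for spherical Mercator); this is one common tooling choice, not the patent's prescribed implementation:

```python
from pyproj import Transformer

# WGS-84 longitude/latitude -> spherical (web) Mercator, in meters.
# always_xy=True fixes the axis order to (lon, lat) / (x, y).
_to_mercator = Transformer.from_crs("EPSG:4326", "EPSG:3857", always_xy=True)

def wgs84_to_mercator(lon, lat):
    """Convert one GPS fix to projected coordinates in meters."""
    return _to_mercator.transform(lon, lat)

# Example: a fix near 102°54'E, 30°05'N (coordinates from the text).
x, y = wgs84_to_mercator(102.9, 30.083)
```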
Denoising refers to removing noise trajectories from the vehicle trajectory data. In some embodiments, the processing module 220 may denoise the vehicle trajectory data according to preset conditions. In some embodiments, a preset condition may be to delete a trajectory whose length is less than a trajectory length threshold (e.g., 200 m, 250 m, 300 m, 400 m, or 500 m). In some embodiments, a preset condition may be to delete a trajectory in which the distance between sampling points is greater than a sampling-point separation threshold (e.g., 400 m, 500 m, 550 m, 600 m, or 1000 m). In some embodiments, a preset condition may be to delete a trajectory in which any sampling point's speed is greater than a sampling speed threshold (e.g., 100 km/h, 150 km/h, 200 km/h, or 250 km/h). In some embodiments, a preset condition may be to delete a trajectory in which the distance between adjacent sampling points is greater than a set threshold (e.g., 300 m or 500 m). In some embodiments, a preset condition may be to delete a trajectory whose number of sampling points is less than a set threshold (e.g., 5 or 10). In some embodiments, the processing module 220 may denoise the vehicle trajectory data according to one or more of the preset conditions. Denoising effectively removes noise in the GPS trajectory data caused by GPS signal occlusion, poor receiver signal quality, and other factors, as well as erroneously recorded sampling points, thereby improving data quality.
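Taken together, the preset conditions amount to a per-trajectory filter. A minimal sketch follows, reusing the SamplePoint structure sketched earlier; the thresholds are the example values from the text and would be tuned in practice:

```python
import math

def keep_trajectory(traj, min_len_m=300.0, max_gap_m=500.0,
                    max_speed_kmh=150.0, min_points=10):
    """Return True if the trajectory passes all denoising conditions."""
    if len(traj) < min_points:
        return False
    length = 0.0
    for p, q in zip(traj, traj[1:]):
        gap = math.hypot(q.x - p.x, q.y - p.y)
        if gap > max_gap_m:          # adjacent sampling points too far apart
            return False
        length += gap
    if length < min_len_m:           # trajectory too short overall
        return False
    if any(p.v > max_speed_kmh for p in traj):  # implausible recorded speed
        return False
    return True

def denoise(trajs):
    return [t for t in trajs if keep_trajectory(t)]
```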
Framing refers to dividing the vehicle trajectory data into multiple sub-region trajectory data sets according to the area range of the data. In some embodiments, the range of the area to be detected may be directly taken as the area range of the vehicle trajectory data. In some embodiments, the processing module 220 may compute the maximum and minimum coordinate values over all trajectory sampling points and, based on these, take the minimum bounding rectangle of the trajectory data as the area range. The processing module 220 may divide this area range into multiple sub-region ranges using a grid of a certain size (e.g., 1500 m). In some embodiments, the area range may instead be divided into a fixed number (e.g., 100) of sub-region ranges. In some embodiments, the processing module 220 may assign the sampling-point data falling within each sub-region range to the corresponding sub-region trajectory data. In some embodiments, the processing module 220 may divide the denoised vehicle trajectory data into multiple sub-region trajectory data sets according to the area range of the denoised data, thereby framing the denoised data.
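A sketch of the framing step under the assumptions above (1500 m grid cells laid over the minimum bounding rectangle of all sampling points; function and variable names are illustrative):

```python
from collections import defaultdict

def frame_trajectories(trajs, cell_size=1500.0):
    """Bucket sampling points into grid cells, keeping per-vehicle runs."""
    xs = [p.x for t in trajs for p in t]
    ys = [p.y for t in trajs for p in t]
    x0, y0 = min(xs), min(ys)     # lower-left corner of the bounding rectangle

    cells = defaultdict(list)     # (col, row) -> list of per-trajectory point runs
    for traj in trajs:
        runs = defaultdict(list)
        for p in traj:
            key = (int((p.x - x0) // cell_size), int((p.y - y0) // cell_size))
            runs[key].append(p)
        for key, pts in runs.items():
            cells[key].append(pts)  # keep each vehicle's points as one run
    return cells, (x0, y0)
```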
Rasterization converts the point data of the trajectory data (i.e., the data of each sampling point) into raster data with a regular structure. In some embodiments, the processing module 220 may rasterize each sub-region's trajectory data to obtain sub-region raster data. Specifically, the processing module 220 may use an image-drawing function (e.g., an image-drawing function in the OpenCV library) to draw each sub-region's trajectory data onto an image of a certain size (e.g., 576 × 576 pixels), thereby converting the trajectory data to raster data. During rasterization, the trajectory sampling points and the trajectory lines of the same vehicle may be drawn simultaneously. In some embodiments, the trajectory sampling points and trajectory lines may be drawn in a certain color (e.g., black) and line width (e.g., 3 pixels). In some embodiments, to avoid interference from scattered noise points, the sub-region raster image may be further processed with morphological opening and closing operators (e.g., from the OpenCV library) to filter out noise formed by isolated trajectory points. FIG. 7 shows the rasterization results for taxi GPS trajectory data within an urban area; FIGS. 7(a), 7(b), and 7(c) show the rasterization results of different sub-region trajectory data.
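A hedged OpenCV sketch of the rasterization and noise filtering described here (576 × 576 image, 3-pixel lines, morphological opening); the pixel-scaling logic and the white-on-black drawing convention are assumptions made for clarity:

```python
import cv2
import numpy as np

def rasterize_cell(cell_trajs, origin, cell_size=1500.0, img_size=576):
    """Draw one sub-region's trajectory runs onto a raster image."""
    x0, y0 = origin                       # lower-left corner of this sub-region
    img = np.zeros((img_size, img_size), dtype=np.uint8)
    scale = img_size / cell_size
    for run in cell_trajs:                # one polyline per vehicle's point run
        px = np.array([[(p.x - x0) * scale, (p.y - y0) * scale] for p in run],
                      dtype=np.int32).reshape(-1, 1, 2)
        # Trajectory line and sampling points drawn together, 3-pixel width.
        # (The vertical flip between map and image coordinates is omitted.)
        cv2.polylines(img, [px], isClosed=False, color=255, thickness=3)
    # Opening removes specks left by isolated trajectory points. Foreground is
    # white here; the patent's example draws black on white, which is the same
    # operation up to inversion.
    kernel = np.ones((3, 3), np.uint8)
    return cv2.morphologyEx(img, cv2.MORPH_OPEN, kernel)
```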
Step 320: acquire a trained intersection detection model, the trained intersection detection model being obtained by training on sample vehicle trajectory data in the sample area. In some embodiments, step 320 may be performed by the acquisition module 210 in the system 200.
In some embodiments, the trained intersection detection model may be a deep neural network model pre-trained from sample vehicle trajectory data within the sample area. The trained intersection detection model can be used for determining intersection information in a certain area according to vehicle track data in the area. For example, vehicle trajectory data (e.g., pre-processed sub-area grid data) within a certain area may be input into a trained intersection detection model to obtain intersection information for the area.
In some embodiments, the trained intersection detection model may be retrieved from the terminal device 130, the storage device 160, or other memory. For example, the trained intersection detection model may be obtained by training the initial model offline using the same or a different processing device as the processing device 112. The trained initial model (i.e., the trained intersection detection model) may be stored in the terminal device 130, the storage device 160, or other memory. Upon receiving an intersection information display request initiated by the user through the terminal device 130, the obtaining module 210 may extract the trained intersection detection model from the storage device 160. For further description of the training process of the intersection detection model, refer to fig. 4 and its related description, which are not repeated herein.
Step 330: determine intersection information of the area to be detected based on the vehicle trajectory data in the area to be detected and the trained intersection detection model. In some embodiments, step 330 may be performed by the detection module 240 in the system 200.
In some embodiments, the intersection information may include position information and range information for the intersection. In some embodiments, the location information for the intersection may include a geographic location of the intersection, road information associated with the intersection, or the like, or any combination thereof. In some embodiments, the geographic location of an intersection may be represented by latitude and longitude coordinates (e.g., latitude and longitude coordinates for an intersection are 102 ° 54 'east longitude and 30 ° 05' north latitude).
The range information of the intersection may indicate the size of the intersection. In some embodiments, the range information for an intersection may be represented by a range of the set of trace sampling points and/or a minimum circumcircle of the set of trace sampling points. In some embodiments, the range information may also be represented in latitude and longitude coordinate ranges (e.g., the range for an intersection may be represented as east longitude 102 ° 54 'to 102 ° 53', north latitude 30 ° 05 'to 30 ° 04'). In some embodiments, the location information (e.g., location coordinates) of the intersection may be the geometric center (e.g., center of a circle) of its range information.
In some embodiments, the detection module 240 may determine initial intersection information using the trained intersection detection model based on the sub-region raster data, and then determine the intersection information of the area to be detected based on the initial intersection information. Specifically, the detection module 240 may take the sub-region raster data derived from the vehicle trajectory data as the input of the trained intersection detection model and predict the intersection information in the sub-region raster image (i.e., the initial intersection information). The initial intersection information reflects the intersection conditions (e.g., intersection positions and ranges) in the sub-region raster data. Further, the detection module 240 may convert the initial intersection information in the sub-region raster image into coordinates in the vehicle trajectory data coordinate system according to the correspondence between the sub-region raster data and the area to be detected (e.g., the correspondence between the sub-region size and the size and position of the area to be detected), thereby determining the intersection information of the area to be detected. FIG. 9 shows one frame of trajectory data (i.e., one sub-region's trajectory data) in an urban area and the detection result of its intersection information: FIG. 9(a) shows the result of rasterizing the sub-region trajectory data, and FIG. 9(b) shows the predicted intersection positions and ranges. In some embodiments, the system 200 may store and output the determined intersection information of the area to be detected (e.g., in the shapefile data format).
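The back-conversion described here is the inverse of the rasterization mapping. A minimal sketch under the same assumptions as the rasterization example (a predicted intersection circle in pixel space is mapped back to meters in the trajectory coordinate system):

```python
def pixel_to_world(cx_px, cy_px, r_px, cell_origin,
                   cell_size=1500.0, img_size=576):
    """Map a predicted intersection circle (pixels) back to meter coordinates."""
    scale = cell_size / img_size          # meters per pixel
    x0, y0 = cell_origin                  # lower-left corner of this sub-region
    center_x = x0 + cx_px * scale
    center_y = y0 + cy_px * scale         # (vertical flip omitted, as above)
    radius_m = r_px * scale               # intersection range as a radius
    return (center_x, center_y), radius_m
```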
It should be noted that the above description is merely for convenience and should not be taken as limiting the scope of the present application. It will be understood by those skilled in the art that, with the benefit of the teachings of this system, various modifications and changes in form and detail may be made to the above method and system and to their fields of application without departing from these teachings. For example, preprocessing the vehicle trajectory data may include all of the coordinate conversion, denoising, framing, and rasterization steps, or only some of them.
FIG. 4 is an exemplary flow chart of an intersection detection model training process according to some embodiments of the present application.
In some embodiments, training process 400 may be performed by processing logic that may comprise hardware (e.g., circuitry, dedicated logic, programmable logic, microcode, etc.), software (instructions run on a processing device to perform hardware simulation), etc., or any combination thereof. One or more of the operations illustrated in fig. 4 for determining intersection information may be implemented by the intersection information determination system 100 illustrated in fig. 1 or the intersection information determination system 200 illustrated in fig. 2. For example, training process 400 may be stored in storage device 160 in the form of instructions and invoked and/or executed by processing device 112 (or processing module 220, training module 230, etc.).
Step 410: preprocess the sample vehicle trajectory data in the sample area to obtain a plurality of sample rasterized trajectory images. In some embodiments, step 410 may be performed by the processing module 220 in the system 200.
In some embodiments, the sample vehicle trajectory data may be vehicle historical trajectory data stored in the storage device 160. In some embodiments, the pre-processing may include one or more of coordinate transformation processing, de-noising processing, framing processing, rasterization processing, and the like. In some embodiments, the process of preprocessing the sample vehicle trajectory data in the sample region by the processing module 220 is similar to the process of processing the vehicle trajectory data of the region to be measured in step 310, and for more details of the preprocessing, reference may be made to step 310 and the description thereof, and details are not repeated here.
Step 420: acquire the marking information of each sample rasterized trajectory image and generate marked sample rasterized trajectory images. In some embodiments, step 420 may be performed by the training module 230 in the system 200.
The training module 230 may obtain marking information produced by manually labeling each sample rasterized trajectory image and generate marked sample rasterized trajectory images. The marking information includes identification areas, with intersections of different ranges corresponding to identification areas of different sizes. In some embodiments, an annotator may mark intersections with identification areas of different sizes according to the position and range of each intersection in a sample rasterized trajectory image. In some embodiments, the identification area may be circular, and the annotator may mark intersections with circles of different diameters according to the position and range of each intersection. FIG. 8 shows an example of the process and result of manually labeling intersections on a rasterized image generated after framing of certain trajectory data: FIG. 8(a) is the image after trajectory rasterization, FIG. 8(b) shows the manually labeled circles, and FIG. 8(c) is the marked rasterized trajectory image. In some embodiments, the identification area may be elliptical, with ellipses of different semi-major and semi-minor axes marking intersections of different positions and ranges. In alternative embodiments, the identification area may have other shapes, such as a rectangle, a square, or a regular hexagon.
In some embodiments, the training module 230 may mark a track point within the identified area as a first label, which may indicate that the track point is within the intersection range. In some embodiments, the training module may mark a track point outside the identified region as a second label, which may indicate that the track point is outside the intersection range.
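A sketch of this two-label scheme, assuming the circular identification areas are given as (center, radius) pairs in pixel coordinates of the rasterized image:

```python
import numpy as np

def label_mask(circles, img_size=576):
    """Build a per-pixel label map: 1 inside any marked intersection, 0 outside."""
    yy, xx = np.mgrid[0:img_size, 0:img_size]
    mask = np.zeros((img_size, img_size), dtype=np.uint8)   # second label: outside
    for (cx, cy), r in circles:
        inside = (xx - cx) ** 2 + (yy - cy) ** 2 <= r ** 2
        mask[inside] = 1                                    # first label: inside
    return mask
```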
Step 430: train an initial intersection detection model based on the marked sample rasterized trajectory images to obtain the trained intersection detection model. In some embodiments, step 430 may be performed by the training module 230 in the system 200.
In some embodiments, the intersection detection model may be trained in batches. For example, all training samples may be divided into K subsets (or groups), each group containing a fixed number m (e.g., m = 30) of data samples, and the K subsets may be sequentially fed to the initial intersection detection model for learning and training. The specific steps are as follows:
Assume n manually marked sample data sets (i.e., marked sample rasterized trajectory images) are obtained through step 420; 80% of the samples are selected to form a training data set and the remaining 20% serve as a test data set. The initial intersection detection model is trained on the training data set, and the generalization capability of the trained model is checked on the test data set. When the intersection prediction accuracy of the model reaches 90% or more on the training data set and 80% or more on the test data set, training may be stopped. Otherwise, sample data is added and training continues until the model reaches the preset prediction accuracy.
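The split and stopping rule can be sketched as follows; the fit_batch and accuracy methods are placeholder interfaces, not the patent's API:

```python
import random

def train_until_accurate(samples, model, train_frac=0.8, batch_size=30,
                         train_acc_target=0.90, test_acc_target=0.80,
                         max_epochs=100):
    """80/20 split, batch training, stop once both accuracy targets are met."""
    random.shuffle(samples)
    split = int(train_frac * len(samples))
    train_set, test_set = samples[:split], samples[split:]

    for _ in range(max_epochs):
        # K subsets of m (= batch_size) samples, fed to the model in turn.
        for i in range(0, len(train_set), batch_size):
            model.fit_batch(train_set[i:i + batch_size])   # placeholder API
        if (model.accuracy(train_set) >= train_acc_target and
                model.accuracy(test_set) >= test_acc_target):
            return model
    # Per the text: otherwise add sample data and keep training.
    raise RuntimeError("target accuracy not reached; add training samples")
```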
FIG. 5 is a schematic diagram of a trained intersection detection model according to some embodiments of the present application. FIG. 6 is a schematic structural diagram of an intersection detection model according to some embodiments of the present application.
In some embodiments, as shown in FIGS. 5-6, the intersection detection model (or initial intersection detection model) may include a feature extraction block 520, a feature fusion and semantic segmentation block 530, and a feature map analysis block 540. The training module 230 may perform joint training on the feature extraction block 520, the feature fusion and semantic segmentation block 530, and the feature map analysis block 540 in the initial intersection detection model based on a training sample (e.g., a marked sample rasterized trajectory image), and update parameters of the blocks, thereby obtaining a trained intersection detection model. In some embodiments, the training module may input training data 510 (e.g., sample rasterized trajectory images) into an intersection detection model to be trained (e.g., an initial intersection detection model or an updated intersection detection model) to obtain a detection result 550. The training module 230 may calculate a loss function of the intersection detection model based on the labels 512 (e.g., label information of the sample rasterized trajectory image) and the detection results 550, and iteratively update the model parameters accordingly to obtain a trained intersection detection model.
The feature extraction block 520 may be used to extract features of the vehicle trajectory data. In some embodiments, the feature extraction block 520 may include a ResNeXt network (e.g., ResNeXt-50). The ResNeXt-50 network has a strong capability for extracting image-element features and can effectively extract road network intersection features. Compared with the classical ResNet-50 network, ResNeXt-50 introduces a grouped convolution structure, which improves network performance while reducing the number of computation parameters. Each convolution block gradually reduces the feature map size, which facilitates target spatial localization and semantic segmentation in the U-net network structure and suits multi-level spatial feature learning and extraction for intersections in the rasterized trajectory image.
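Grouped convolution, the ResNeXt ingredient highlighted here, is available directly in PyTorch via the groups argument. A minimal sketch of one ResNeXt-style bottleneck (cardinality 32 is the conventional choice and an assumption here; the residual addition is omitted for brevity):

```python
import torch.nn as nn

# A ResNeXt-style bottleneck: 1x1 reduce, 3x3 grouped conv, 1x1 expand.
def resnext_bottleneck(in_ch=256, mid_ch=128, out_ch=256, cardinality=32):
    return nn.Sequential(
        nn.Conv2d(in_ch, mid_ch, kernel_size=1, bias=False),
        nn.BatchNorm2d(mid_ch), nn.ReLU(inplace=True),
        # groups=cardinality splits the channels into 32 parallel paths.
        nn.Conv2d(mid_ch, mid_ch, kernel_size=3, padding=1,
                  groups=cardinality, bias=False),
        nn.BatchNorm2d(mid_ch), nn.ReLU(inplace=True),
        nn.Conv2d(mid_ch, out_ch, kernel_size=1, bias=False),
        nn.BatchNorm2d(out_ch))
```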
In some embodiments, the feature fusion and semantic segmentation block 530 includes a Pyramid Pooling Module (PPM). The pyramid pooling module is based on the Pyramid Scene Parsing Network (PSPNet) structure. This structure effectively exploits image context features; that is, the overall structural characteristics of an intersection are further considered when the intersection region is generated.
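A minimal PyTorch sketch of a pyramid pooling module in the PSPNet style (the bin sizes 1/2/3/6 and the channel split are typical PSPNet choices, not values taken from the patent):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PyramidPooling(nn.Module):
    """Pool the feature map at several scales, project, upsample, and concat."""
    def __init__(self, in_ch, bins=(1, 2, 3, 6)):
        super().__init__()
        out_ch = in_ch // len(bins)
        self.stages = nn.ModuleList([
            nn.Sequential(nn.AdaptiveAvgPool2d(b),
                          nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False),
                          nn.BatchNorm2d(out_ch),
                          nn.ReLU(inplace=True))
            for b in bins])

    def forward(self, x):
        h, w = x.shape[2:]
        pooled = [F.interpolate(stage(x), size=(h, w), mode="bilinear",
                                align_corners=False) for stage in self.stages]
        return torch.cat([x, *pooled], dim=1)  # aggregates multi-scale context
```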
In some embodiments, the feature extraction block 520 and the feature fusion and semantic segmentation block 530 form a U-net network structure. The U-shaped structure consists mainly of an encoding part and a decoding part. The encoding part uses ResNeXt-50 to extract multi-scale intersection features. The decoding part connects feature maps of the same size as those in the encoding part, progressively enlarging the feature map until it is restored to 1/4 of the input image size, and in the process gradually achieves spatial localization and semantic classification of intersection targets. In the decoding stage, the intersection detection model introduces a pyramid pooling module, which performs convolutions with kernels and strides of different scales and aggregates context information at multiple scales, so that intersection targets of different sizes can be segmented. In this embodiment, the intersection detection model adopts the U-net structure during feature map upsampling and fuses the same-size feature map information from the encoding stage; this recovers local information lost during convolution, effectively removes noise interference, and preserves the integrity of the intersection segmentation result.
In the network model shown in FIG. 6, the intersection feature map produced by the ResNeXt-50 network is 1/32 the size of the original input image, which makes pixel-by-pixel classification of the original image directly from that feature map difficult. For example, an intersection target occupying roughly 50 × 50 pixels in the original image shrinks to only about 2 × 2 at 1/32 scale, at which point the range and shape of the original intersection can hardly be segmented; the encoding result (feature map) therefore needs upsampling that fuses information from the encoding part. In some embodiments, the decoding part of the U-net network may use interpolation, such as linear interpolation (e.g., bilinear interpolation), polynomial interpolation, or spline interpolation. In some embodiments, the decoding part of the U-net network may instead include a transposition module (the CBT blocks in FIG. 6) in place of plain upsampling. The transposition module contains a transposed convolution (also called deconvolution), the inverse operation of convolution: its parameters are trained, allowing the network to intelligently restore the size of specific targets in the feature map and mitigating the deformation problems of the original segmentation targets. Compared with spline upsampling, transposed convolution lets the network learn a better upsampling of the feature map through convolution, so the edges of the final segmentation result are sharper. In the decoding part of the U-net network, the feature map size doubles after each transposition module and is then connected with the same-size feature map from the first half of the network (i.e., the process of progressively generating the intersection region), so that the final network output matches the input size.
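A hedged PyTorch sketch of one such decoder step: a transposed convolution doubles the feature-map size, and the result is concatenated with the same-size encoder feature map before further convolution. Reading the CBT block of FIG. 6 as roughly this structure is our assumption; the text does not spell it out:

```python
import torch
import torch.nn as nn

class UpBlock(nn.Module):
    """Transposed-conv upsampling fused with the matching encoder feature map."""
    def __init__(self, in_ch, skip_ch, out_ch):
        super().__init__()
        # Deconvolution: learned upsampling that doubles H and W.
        self.up = nn.ConvTranspose2d(in_ch, out_ch, kernel_size=2, stride=2)
        self.fuse = nn.Sequential(
            nn.Conv2d(out_ch + skip_ch, out_ch, kernel_size=3, padding=1,
                      bias=False),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True))

    def forward(self, x, skip):
        x = self.up(x)                       # e.g. 18x18 -> 36x36
        x = torch.cat([x, skip], dim=1)      # U-net style skip connection
        return self.fuse(x)
```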
Feature map analysis block 540 may determine intersection information based on the processing results of feature extraction block 520 and feature fusion and semantic segmentation block 530. In some embodiments, feature map analysis block 540 may include a Softmax function. After appropriate upsampling (such as the expanding path of a U-net network), a Softmax layer assigns each position in the feature map to the class of maximum probability; that is, the class of every pixel in the feature map is computed, and pixels of different classes are placed in their respective class channels. As shown in fig. 6, feature map analysis block 540 may further include an RR (Resize and Reshape) module, which may be configured to adjust the size of the feature map so that the final output of the intersection detection model matches the size of the input.
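A sketch of this per-pixel classification and resizing step might look as follows; treating the RR module as a simple bilinear resize is an assumption for illustration only.

```python
import torch
import torch.nn.functional as F

def classify_pixels(logits: torch.Tensor, out_size) -> torch.Tensor:
    """Assign every pixel to the class with maximum probability and
    resize the result to the requested output size (the roles played
    by the Softmax layer and the RR module described above)."""
    probs = torch.softmax(logits, dim=1)           # per-pixel class probabilities
    probs = F.interpolate(probs, size=out_size,
                          mode="bilinear", align_corners=False)
    return probs.argmax(dim=1)                     # class index per pixel
```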
FIG. 6 is a schematic structural diagram of an intersection detection model according to some embodiments of the present application. In the embodiment shown in fig. 6, the intersection detection model takes a trajectory raster image as input. First, a ResNeXt-50 network module extracts multi-level spatial structure features of the intersection. Next, a U-net network structure is added to achieve intersection localization and semantic segmentation, and a Pyramid Pooling Module (PPM) is introduced in the decoding stage to extract image context information, enhancing feature learning for large intersections. Finally, transposed convolution performs feature upsampling, the result is fused with the feature map of the same size in the original ResNeXt-50 network, and a Softmax layer completes the intersection area segmentation. The network output is a two-channel 576 × 756 probability map (zero pixels in channel 1 are background areas; non-zero pixels in channel 2 are intersection areas); a contour detection algorithm is then applied to the two-channel image to extract the outer contour of each intersection, and the minimum enclosing circle is computed as the intersection range estimate.
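The final post-processing step can be sketched with OpenCV as follows; the 0.5 binarization threshold is an assumed value, and cv2.findContours / cv2.minEnclosingCircle stand in for whichever contour detection algorithm the embodiment actually uses.

```python
import cv2
import numpy as np

def estimate_intersections(prob_map: np.ndarray, threshold: float = 0.5):
    """Given the intersection-channel probability map (H x W, float in
    [0, 1]), extract the outer contour of each connected intersection
    region and return the minimum enclosing circle of each contour as
    the (center, radius) range estimate."""
    mask = (prob_map > threshold).astype(np.uint8)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    circles = []
    for contour in contours:
        (cx, cy), radius = cv2.minEnclosingCircle(contour)
        circles.append(((float(cx), float(cy)), float(radius)))
    return circles
```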
It is noted that the intersection detection model is not limited to the above-described deep network model. In some embodiments, the intersection detection model may also be a network model of other architectures. For example, the intersection detection model may be an object detection network model. The target detection network model may include, but is not limited to, Yolo-v3, Fast RCNN, Mask RCNN, Detectron, and the like.
The beneficial effects that may be brought by the embodiments of the present description include, but are not limited to: (1) a neural network is used directly for feature learning on rasterized trajectory data, so that the position and range of intersections can be automatically located and identified, offering strong recognition capability, high recognition speed, adaptability to complex scenes, and independence from manual parameter setting; (2) an intersection information determination method and system based on a deep neural network are provided that can adapt to automatic detection of the positions and ranges of complex intersections in data environments with high noise, varying sampling frequencies, heterogeneous trajectory density distributions, and the like; (3) by simulating the human visual cognition process, the intersection range is automatically extracted from trajectory data, giving good adaptability to different data qualities and complex application scenarios, simple system deployment, and high detection speed; (4) the deep network model for image target recognition is extended: a ResNeXt-50 network module and a U-net network structure are introduced on top of the PSPNet infrastructure, improving the accuracy of the PSPNet network in recognizing small targets and allowing intersections of different sizes to be handled simultaneously. It should be noted that different embodiments may produce different advantages; in different embodiments, any one or a combination of the above advantages, or any other advantage, may be obtained.
Having thus described the basic concept, it will be apparent to those skilled in the art that the foregoing detailed disclosure is to be regarded as illustrative only and not as limiting the present specification. Various modifications, improvements and adaptations to the present description may occur to those skilled in the art, although not explicitly described herein. Such modifications, improvements and adaptations are proposed in the present specification and thus fall within the spirit and scope of the exemplary embodiments of the present specification.
Also, this specification uses specific words to describe embodiments of the specification. Reference to "one embodiment," "an embodiment," and/or "some embodiments" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of this specification. Therefore, it is emphasized and should be appreciated that two or more references to "an embodiment" or "one embodiment" or "an alternative embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, certain features, structures, or characteristics of one or more embodiments of the specification may be combined as appropriate.
Moreover, those skilled in the art will appreciate that aspects of this specification may be illustrated and described in terms of several patentable classes or contexts, including any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof. Accordingly, aspects of this specification may be implemented entirely in hardware, entirely in software (including firmware, resident software, micro-code, etc.), or in a combination of hardware and software. The above hardware or software may be referred to as a "data block," "module," "engine," "unit," "component," or "system." Furthermore, aspects of this specification may take the form of a computer program product embodied in one or more computer-readable media containing computer-readable program code.
The computer storage medium may comprise a propagated data signal with the computer program code embodied therewith, for example, on baseband or as part of a carrier wave. The propagated signal may take any of a variety of forms, including electromagnetic, optical, etc., or any suitable combination. A computer storage medium may be any computer-readable medium that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code located on a computer storage medium may be propagated over any suitable medium, including radio, cable, fiber optic cable, RF, or the like, or any combination of the preceding.
Computer program code required for the operation of various portions of this specification may be written in any one or more programming languages, including object-oriented programming languages such as Java, Scala, Smalltalk, Eiffel, JADE, Emerald, C++, C#, VB.NET, and Python; conventional procedural programming languages such as C, Visual Basic, Fortran 2003, Perl, COBOL 2002, PHP, and ABAP; dynamic programming languages such as Python, Ruby, and Groovy; or other programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or processing device. In the latter scenario, the remote computer may be connected to the user's computer through any form of network, such as a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet), or in a cloud computing environment, or as a service such as software as a service (SaaS).
Additionally, the order in which the elements and sequences of the process are recited in the specification, the use of alphanumeric characters, or other designations, is not intended to limit the order in which the processes and methods of the specification occur, unless otherwise specified in the claims. While various presently contemplated embodiments of the invention have been discussed in the foregoing disclosure by way of example, it is to be understood that such detail is solely for that purpose and that the appended claims are not limited to the disclosed embodiments, but, on the contrary, are intended to cover all modifications and equivalent arrangements that are within the spirit and scope of the embodiments herein. For example, although the system components described above may be implemented by hardware devices, they may also be implemented by software-only solutions, such as installing the described system on an existing processing device or mobile device.
Similarly, it should be noted that in the preceding description of embodiments of this specification, various features are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the embodiments. This method of disclosure, however, is not to be interpreted as implying that the claimed subject matter requires more features than are expressly recited in each claim. Rather, inventive embodiments may lie in less than all features of a single foregoing disclosed embodiment.
Finally, it should be understood that the embodiments described herein are merely illustrative of the principles of the embodiments of the present disclosure. Other variations are also possible within the scope of the present description. Thus, by way of example, and not limitation, alternative configurations of the embodiments of the specification can be considered consistent with the teachings of the specification. Accordingly, the embodiments of the present description are not limited to only those embodiments explicitly described and depicted herein.
The embodiments of the present application disclose the following technical solutions. TS1. An intersection information determination method, the method being performed by at least one processor, characterized by comprising the following steps:
acquiring vehicle trajectory data in an area to be detected;
acquiring a trained intersection detection model, wherein the trained intersection detection model is obtained by training with sample vehicle trajectory data in a sample area;
and determining intersection information of the area to be detected based on the vehicle trajectory data and the trained intersection detection model, wherein the intersection information comprises position information and range information of an intersection.
TS2. The method of TS1, further comprising, after acquiring the vehicle trajectory data within the area to be detected:
denoising the vehicle trajectory data according to a preset condition;
dividing the denoised vehicle trajectory data into a plurality of sub-area trajectory data according to the area range of the denoised vehicle trajectory data;
and rasterizing each sub-area trajectory data to obtain sub-area raster data.
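By way of illustration only, the rasterization step in TS2 might be sketched as follows; the 576 × 756 grid size echoes the network output size mentioned in the description, and the count-and-normalize encoding is an assumption rather than the disclosed scheme.

```python
import numpy as np

def rasterize_tracks(points: np.ndarray, bounds, grid_size=(576, 756)) -> np.ndarray:
    """Rasterize sub-area trajectory points (N x 2 array of lon, lat)
    into a grid whose cells count the trajectory points falling inside
    them; counts are then normalized to [0, 1] for the network input."""
    min_lon, min_lat, max_lon, max_lat = bounds
    rows, cols = grid_size
    # Map coordinates to integer cell indices inside the sub-area.
    col_idx = ((points[:, 0] - min_lon) / (max_lon - min_lon) * (cols - 1)).astype(int)
    row_idx = ((max_lat - points[:, 1]) / (max_lat - min_lat) * (rows - 1)).astype(int)
    grid = np.zeros(grid_size, dtype=np.float32)
    np.add.at(grid, (row_idx, col_idx), 1.0)   # accumulate point counts per cell
    return grid / max(grid.max(), 1.0)         # normalize for the model input
```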
TS3. The method of TS2, wherein the determining intersection information of the area to be detected based on the vehicle trajectory data and the trained intersection detection model includes:
inputting the sub-area raster data into the trained intersection detection model to obtain initial intersection information, wherein the initial intersection information reflects intersection conditions in the sub-area raster data;
and determining the intersection information of the area to be detected based on the initial intersection information and the correspondence between the sub-area raster data and the area to be detected.
TS4. The method of TS1, wherein the intersection detection model comprises a feature extraction block, a feature fusion and semantic segmentation block, and a feature map analysis block.
TS5. The method of TS4, wherein the feature extraction block comprises a ResNeXt network.
TS6. The method of TS4, wherein the feature fusion and semantic segmentation block comprises a pyramid pooling module.
TS7. The method of TS4, wherein the feature extraction block and the feature fusion and semantic segmentation block form a U-net network structure; the decoding portion of the U-net network structure includes a transpose module.
TS8. The method of TS1, wherein the training process of the intersection detection model includes:
preprocessing the sample vehicle track data in the sample area to obtain a plurality of sample rasterized track images;
acquiring marking information of each sample rasterization track image, and generating a marked sample rasterization track image;
training an initial intersection detection model based on the marked sample rasterized trajectory image to obtain the trained intersection detection model.
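A hedged sketch of the training step in TS8 follows, assuming a two-class per-pixel cross-entropy objective over the labeled raster images; the loss and the optimizer interface are assumptions for illustration, not the disclosed training configuration.

```python
import torch
import torch.nn as nn

def train_epoch(model, loader, optimizer, device="cpu"):
    """One training pass: each batch pairs a rasterized trajectory image
    with a per-pixel label map (0 = outside intersections, 1 = inside)."""
    criterion = nn.CrossEntropyLoss()
    model.train()
    for images, labels in loader:   # images: N x C x H x W, labels: N x H x W
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        logits = model(images)      # N x 2 x H x W class scores
        loss = criterion(logits, labels)
        loss.backward()
        optimizer.step()
```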
TS9. The method of TS8, wherein the marking information includes identification areas, and intersections of different ranges correspond to identification areas of different sizes; the training process further comprises:
marking the track points in the identification area as first labels, wherein the first labels indicate that the corresponding track points are in the intersection range;
and marking the track points outside the identification area as second labels, wherein the second labels indicate that the corresponding track points are outside the range of the intersection.
TS10. The method of TS9, wherein the identification area is a circular area.
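A minimal sketch of the labeling rule in TS9 and TS10, assuming the trajectory points and the circle centers share one planar coordinate system:

```python
import numpy as np

def label_points(points: np.ndarray, centers: np.ndarray,
                 radii: np.ndarray) -> np.ndarray:
    """Label each trajectory point 1 (first label: inside an intersection's
    circular identification area) or 0 (second label: outside all of them).
    `centers` is K x 2 and `radii` has length K, one circle per intersection,
    with larger intersections given larger radii."""
    labels = np.zeros(len(points), dtype=np.int64)
    for center, radius in zip(centers, radii):
        inside = np.linalg.norm(points - center, axis=1) <= radius
        labels[inside] = 1
    return labels
```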
TS11. An intersection information determination system, characterized by comprising an acquisition module and a detection module;
the acquisition module is configured to acquire vehicle trajectory data in an area to be detected;
the acquisition module is further configured to acquire a trained intersection detection model, wherein the trained intersection detection model is obtained by training with sample vehicle trajectory data in a sample area;
the detection module is configured to determine intersection information of the area to be detected based on the vehicle trajectory data and the trained intersection detection model, wherein the intersection information comprises position information and range information of an intersection.
TS12. The system of TS11, wherein the system further comprises a processing module for:
denoising the vehicle trajectory data according to a preset condition;
dividing the denoised vehicle trajectory data into a plurality of sub-area trajectory data according to the area range of the denoised vehicle trajectory data;
and rasterizing each sub-area trajectory data to obtain sub-area raster data.
TS13. The system of TS12, wherein the detection module is further configured to:
input the sub-area raster data into the trained intersection detection model to obtain initial intersection information, wherein the initial intersection information reflects intersection conditions in the sub-area raster data;
and determine the intersection information of the area to be detected based on the initial intersection information and the correspondence between the sub-area raster data and the area to be detected.
TS14. The system of TS11, wherein the intersection detection model includes a feature extraction block, a feature fusion and semantic segmentation block, and a feature map analysis block.
TS15. The system of TS14, wherein the feature extraction block comprises a ResNeXt network.
TS16. The system of TS14, wherein the feature fusion and semantic segmentation block comprises a pyramid pooling module.
TS17. The system of TS14, wherein the feature extraction block and the feature fusion and semantic segmentation block form a U-net network structure; the decoding portion of the U-net network structure includes a transpose module.
TS18. The system of TS11, wherein the system further comprises a training module for:
preprocessing the sample vehicle track data in the sample area to obtain a plurality of sample rasterized track images;
acquiring marking information of each sample rasterization track image, and generating a marked sample rasterization track image;
training an initial intersection detection model based on the marked sample rasterized trajectory image to obtain the trained intersection detection model.
TS19. The system of TS18, wherein the marking information includes identification areas, and intersections of different ranges correspond to identification areas of different sizes; the training module is further configured to:
marking the track points in the identification area as first labels, wherein the first labels indicate that the corresponding track points are in the intersection range;
and marking the track points outside the identification area as second labels, wherein the second labels indicate that the corresponding track points are outside the range of the intersection.
TS20. The system of TS19, wherein the identification area is a circular area.
TS21. An intersection information determination device, comprising at least one processor and at least one memory;
the at least one memory is to store instructions;
the at least one processor is configured to execute the instructions to implement the method of any one of TS1 to TS10.
TS22. A computer-readable storage medium storing computer instructions, wherein when the computer instructions in the storage medium are read by a computer, the computer performs the method of any one of TS1 to TS10.
TS23. A computer program product comprising a computer program or instructions, wherein the computer program or instructions, when executed by a processor, implement the steps of the method of any one of TS1 to TS10.

Claims (10)

1. A method for intersection information determination, the method being performed by at least one processor, comprising:
acquiring vehicle trajectory data in an area to be detected;
acquiring a trained intersection detection model, wherein the trained intersection detection model is obtained by training with sample vehicle trajectory data in a sample area;
and determining intersection information of the area to be detected based on the vehicle trajectory data and the trained intersection detection model, wherein the intersection information comprises position information and range information of an intersection.
2. The method of claim 1, comprising, after acquiring the vehicle trajectory data within the area to be detected:
denoising the vehicle trajectory data according to a preset condition;
dividing the denoised vehicle trajectory data into a plurality of sub-area trajectory data according to the area range of the denoised vehicle trajectory data;
and rasterizing each sub-area trajectory data to obtain sub-area raster data.
3. The method of claim 2, wherein determining intersection information of the area to be detected based on the vehicle trajectory data and the trained intersection detection model comprises:
inputting the sub-area raster data into the trained intersection detection model to obtain initial intersection information, wherein the initial intersection information reflects intersection conditions in the sub-area raster data;
and determining the intersection information of the area to be detected based on the initial intersection information and the correspondence between the sub-area raster data and the area to be detected.
4. The method of claim 1, wherein the intersection detection model comprises a feature extraction block, a feature fusion and semantic segmentation block, and a feature map analysis block.
5. The method according to claim 1, wherein the training process of the intersection detection model comprises:
preprocessing the sample vehicle track data in the sample area to obtain a plurality of sample rasterized track images;
acquiring marking information of each sample rasterization track image, and generating a marked sample rasterization track image;
training an initial intersection detection model based on the marked sample rasterized trajectory image to obtain the trained intersection detection model.
6. The method according to claim 5, wherein the marking information comprises identification areas, wherein intersections of different ranges correspond to identification areas of different sizes; the training process further comprises:
marking the track points in the identification area as first labels, wherein the first labels indicate that the corresponding track points are in the intersection range;
and marking the track points outside the identification area as second labels, wherein the second labels indicate that the corresponding track points are outside the range of the intersection.
7. An intersection information determination system, characterized by comprising an acquisition module and a detection module;
the acquisition module is configured to acquire vehicle trajectory data in an area to be detected;
the acquisition module is further configured to acquire a trained intersection detection model, wherein the trained intersection detection model is obtained by training with sample vehicle trajectory data in a sample area;
the detection module is configured to determine intersection information of the area to be detected based on the vehicle trajectory data and the trained intersection detection model, wherein the intersection information comprises position information and range information of an intersection.
8. An intersection information determination device comprising at least one processor and at least one memory;
the at least one memory is to store instructions;
the processor is configured to execute the instructions to implement the method of any one of claims 1 to 6.
9. A computer-readable storage medium, wherein the storage medium stores computer instructions, and when the computer instructions in the storage medium are read by a computer, the computer performs the method of any one of claims 1 to 6.
10. A computer program product comprising a computer program or instructions, characterized in that the computer program or instructions, when executed by a processor, implement the steps of the method of any of claims 1-6.
CN202110015030.XA 2021-01-06 2021-01-06 Intersection information determining method, system and device Active CN112836586B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110015030.XA CN112836586B (en) 2021-01-06 2021-01-06 Intersection information determining method, system and device

Publications (2)

Publication Number Publication Date
CN112836586A (en) 2021-05-25
CN112836586B (en) 2024-09-06

Family

ID=75926354

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110015030.XA Active CN112836586B (en) 2021-01-06 2021-01-06 Intersection information determining method, system and device

Country Status (1)

Country Link
CN (1) CN112836586B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200124423A1 (en) * 2018-10-19 2020-04-23 Baidu Usa Llc Labeling scheme for labeling and generating high-definition map based on trajectories driven by vehicles
CN111369783A (en) * 2018-12-25 2020-07-03 北京嘀嘀无限科技发展有限公司 Method and system for identifying intersection
CN110909788A (en) * 2019-11-19 2020-03-24 湖南博通信息股份有限公司 Statistical clustering-based road intersection position identification method in track data
CN111462488A (en) * 2020-04-01 2020-07-28 北京工业大学 Intersection safety risk assessment method based on deep convolutional neural network and intersection behavior characteristic model
CN112150804A (en) * 2020-08-31 2020-12-29 中国地质大学(武汉) City multi-type intersection identification method based on MaskRCNN algorithm

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
BADVEETI ADINARAYANA ET AL: "Development of Bicycle Safety Index Models for Safety of Bicycle Flow at 3-Legged Junctions on Urban Roads under Mixed Traffic Conditions", Transportation Research Procedia, vol. 48, pages 1227-1243
WAN Zijian et al.: "A decision tree model for extracting road intersection features from vehicle trajectory data", Acta Geodaetica et Cartographica Sinica, vol. 48, no. 11, pages 1391-1403

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113611107A (en) * 2021-07-15 2021-11-05 北京汇通天下物联科技有限公司 Non-networked intersection traffic reminding method
CN114140550A (en) * 2021-11-24 2022-03-04 武汉中海庭数据技术有限公司 Expressway branch and confluence point conjecture method and system based on track shape
CN114140550B (en) * 2021-11-24 2024-08-06 武汉中海庭数据技术有限公司 Expressway bifurcation and convergence point estimation method and system based on track shape
CN114092460A (en) * 2021-11-29 2022-02-25 株洲时代电子技术有限公司 Automatic identification method for bridge surface diseases
CN118215008A (en) * 2024-05-17 2024-06-18 北京九栖科技有限责任公司 Region self-adaptive image-letter fusion calculation method

Also Published As

Publication number Publication date
CN112836586B (en) 2024-09-06

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant