CN106873566B - Unmanned logistics vehicle based on deep learning - Google Patents
Unmanned logistics vehicle based on deep learning
- Publication number
- CN106873566B (application CN201710146233.6A)
- Authority
- CN
- China
- Prior art keywords
- logistic car
- module
- deep learning
- layer
- convolution
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B19/00—Programme-control systems
- G05B19/02—Programme-control systems electric
- G05B19/42—Recording and playback systems, i.e. in which the programme is recorded from a cycle of operations, e.g. the cycle of operations being manually controlled, after which this record is played back on the same machine
- G05B19/425—Teaching successive positions by numerical control, i.e. commands being entered to control the positioning servo of the tool head or end effector
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0231—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
- G05D1/0246—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
- G05D1/0251—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means extracting 3D information from a plurality of images taken from different locations, e.g. stereo vision
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0255—Control of position or course in two dimensions specially adapted to land vehicles using acoustic signals, e.g. ultra-sonic signals
Abstract
The present invention relates to an unmanned logistics vehicle based on deep learning, comprising a vehicle body, ultrasonic obstacle avoidance modules, a binocular stereo vision obstacle avoidance module, a motor drive module, an embedded system, a power module, and a vision navigation processing system. The binocular stereo vision module detects distant obstacles in the road scene and the ultrasonic modules detect nearby obstacles; the obstacle distance information acquired by the two is collectively called the obstacle avoidance information. The vision navigation processing system processes the acquired road image data with a deep learning model trained on a sample set and outputs control instruction information. Finally, a decision module weighs the control instructions against the obstacle avoidance information and controls the motor drive module, realizing unmanned operation of the vehicle. The invention needs no auxiliary equipment to be installed: by learning from the sample set, the deep learning model alone can perceive and understand the road environment and drive the vehicle autonomously.
Description
Technical field
The invention belongs to the field of unmanned driving technology and relates to an unmanned logistics vehicle based on deep learning, applicable to public places such as large campuses, warehouses, stations, airports, and harbors.
Background art
With the rapid development of the logistics industry — warehouse handling, express delivery, and take-away freight volumes keep soaring — unmanned logistics vehicles have great development potential and a huge market.
At present, however, warehouse handling mostly relies on AGVs (automated guided vehicles) guided by electromagnetic induction, magnetic tape, or laser navigation, while express and take-away delivery still depend largely on human labor. The former achieves unmanned transport but is generally limited to flat, clean indoor environments; the latter consumes considerable manpower and material resources, making delivery costly. Although AGVs achieve unmanned transport, they place strict requirements on the site: auxiliary guidance equipment (magnetic strips, color bands, reflectors, etc.) must be installed, the construction period is long, and the investment cost is high.
Currently, driverless cars mostly use lidar as the navigation sensor, but lidar is expensive and the features it extracts are sparse, which hinders scene understanding and perception. Moreover, a logistics vehicle working in a campus or warehouse generally only needs to run smoothly at low speed. Vision-based navigation not only requires low investment and a short construction period, but also extracts rich image features, and its processing speed meets the system's real-time requirements, making it well suited to an unmanned logistics vehicle.
The unmanned logistics vehicle of the invention takes a deep learning model as its core. It acquires road environment image data in real time, perceives the surrounding environment through the deep learning model, and issues control instruction information; a decision module then combines these with the obstacle avoidance information provided by the binocular stereo vision and ultrasonic obstacle avoidance modules to control the motor drive module, realizing unmanned operation of the vehicle.
Summary of the invention
The present invention makes full use of the respective advantages of visual detection and deep learning and proposes an unmanned logistics vehicle based on deep learning. Compared with existing AGVs (automated guided vehicles) that rely on auxiliary guidance, the vehicle of the present invention is easier to install and debug, cheaper, more flexible, and has smaller navigation control deviation; it can also adapt to more complex indoor or outdoor working road scenes.
To achieve the object of the present invention, the following technical solution is adopted:
An unmanned logistics vehicle based on deep learning, comprising two parts: a vehicle exterior structure and a vehicle interior structure.
The exterior structure mainly consists of the vehicle body, the ultrasonic obstacle avoidance modules, and the binocular stereo vision obstacle avoidance module. The vehicle body is provided with five drawer doors in total for storing cargo: two on each side and one at the tail. The bottom of the body carries four Mecanum wheels, enabling omnidirectional rotation in place. Each side of the head is equipped with an ultrasonic obstacle avoidance module for close-range protective ranging. The middle of the head carries camera A and camera B, which together form the binocular stereo vision obstacle avoidance module used to detect distant obstacles in the road scene, while the ultrasonic modules detect nearby obstacles; the obstacle distance information acquired by the two is collectively called the vehicle's obstacle avoidance information. Camera A, analogous to a person's dominant eye, provides the image data for the vision navigation processing system.
The interior structure mainly consists of the motor drive module, the embedded system, and the power module. The motor drive module drives brushless DC motors, which in turn drive the Mecanum wheels through belts. The embedded system hosts the vision navigation processing system, acquires the image data, and controls the motor drive module. The power module supplies the whole system from a battery pack.
The vehicle completes its unmanned functions in the road working environment through the vision navigation processing system, which mainly consists of the deep learning model, the decision module, and the sample set; the sample set is further divided into a training set and a test set.
The deep learning model is established as follows:
Step 1: Copy the road environment video images collected by the vehicle, together with the remote control instructions issued by the operator, to a computer, and split the sample set into a training set and a test set at a 9:1 ratio.
Step 2: Use the training set to train the deep learning model and the test set to evaluate it. Repeatedly adjust the model's parameters and re-test, observing the test error, until the control precision of the system is satisfied; the result is the required deep learning model.
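The 9:1 split of Step 1 can be sketched as follows. This is a minimal illustration; the file-name scheme, the shuffling policy, and the fixed seed are assumptions for the sketch, not details given in the patent:

```python
import random

def split_samples(samples, train_ratio=0.9, seed=0):
    """Shuffle (image, command) pairs and split them into
    a training set and a test set at the given ratio (9:1 here)."""
    rng = random.Random(seed)
    shuffled = samples[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_ratio)
    return shuffled[:cut], shuffled[cut:]

# Example: 1000 synthetic (frame name, command label) pairs.
samples = [(f"frame_{i:04d}.png", i % 6) for i in range(1000)]
train_set, test_set = split_samples(samples)
print(len(train_set), len(test_set))  # 900 100
```

Shuffling before the cut keeps consecutive video frames from all landing in the training set, which would make the test error look better than it is.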
Based on deep learning theory, the present invention learns the driving experience of a skilled operator and, by repeatedly adjusting and optimizing the parameters, trains a deep learning model that meets the system requirements. Through this model the vehicle can perceive the road environment and obtain the system's control instruction information. The decision module then combines these with the vehicle's obstacle avoidance information, issues decision instructions, and controls the motor drive module, realizing unmanned operation.
Brief description of the drawings
Fig. 1 is the external structure of the vehicle body of the invention.
Fig. 2 is the internal structure diagram of the vehicle body of the invention.
Fig. 3 is the structure of the deep learning model of the invention.
Fig. 4 is the system structure and function diagram of the invention.
Fig. 5 is the flowchart of the system function implementation of the invention.
In the figures: 100 is the vehicle body, 200 a drawer door, 300 a Mecanum wheel, 400 an ultrasonic obstacle avoidance module, 500 the binocular stereo vision obstacle avoidance module, 501 camera A, 502 camera B, 600 a brushless DC motor, 701 the embedded system, 702 the motor drive module, 800 the power module, 900 the sample set, 901 the training set, 902 the test set, 903 the deep learning model, 904 the decision module, and 905 the vision navigation processing system.
Specific embodiment
Specific embodiments of the present invention are described in detail below in conjunction with the technical solution and the drawings.
Fig. 1 is a schematic diagram of the external structure of the vehicle body. As shown in Fig. 1, each side of the vehicle body 100 carries two drawer doors 200 and the tail carries one, giving five drawer doors 200 in total for storing cargo. The bottom of the body 100 carries four Mecanum wheels 300, enabling four-wheel omnidirectional driving. Each side of the head of the body 100 is equipped with an ultrasonic obstacle avoidance module 400 for close-range protective ranging. Camera A 501 and camera B 502 are installed in the middle of the head and together form the binocular stereo vision obstacle avoidance module 500 (shown in Fig. 4), which obtains three-dimensional spatial information of the road environment and realizes visual obstacle avoidance. Camera A 501, analogous to a person's dominant eye, provides image data to the vision navigation processing system 905 (shown in Fig. 4).
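For a calibrated stereo pair such as cameras A and B, obstacle distance is commonly recovered from disparity using the standard pinhole stereo relation Z = f·B/d. The patent does not give the cameras' parameters, so the focal length, baseline, and disparity below are purely illustrative assumptions:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Pinhole stereo geometry: depth Z = f * B / d.
    focal_px     - focal length in pixels
    baseline_m   - distance between the two cameras in meters
    disparity_px - horizontal pixel shift of the same point
                   between the left and right images."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Illustrative numbers: 700 px focal length, 12 cm baseline,
# 21 px disparity for a distant obstacle.
z = depth_from_disparity(700.0, 0.12, 21.0)
print(round(z, 2))  # 4.0 (meters)
```

The inverse relationship between disparity and depth is why a stereo pair resolves distant obstacles better than ultrasonic sensors, whose range is short — matching the division of labor between modules 500 and 400.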
Fig. 2 is a schematic diagram of the internal structure of the vehicle body. As Fig. 2 shows, the embedded system 701 acquires image data through camera A 501 and, after processing by the vision navigation processing system 905 (shown in Fig. 4), controls the motor drive module 702. The motor drive module 702 drives the brushless DC motors 600, which drive the Mecanum wheels 300 (shown in Fig. 1) through belts. The power module 800 supplies the system from a battery pack.
In this embodiment of the deep learning model, as shown in Fig. 3, the model is a deep convolutional neural network. Its layers are connected in the following order: input layer; first convolution plus activation; first max pooling layer; second convolution plus activation; second max pooling layer; third convolution plus activation; third max pooling layer; first fully connected layer plus activation plus Dropout; second fully connected layer; output layer. As shown in Fig. 4, the image acquired by camera A 501 serves as the input layer, with input size 200 × 160 × 1. According to Table 1, the input layer is convolved with a 5 × 5 kernel, 20 filters, stride 1, and padding 1, giving a first convolutional layer of size 196 × 156 × 20. After the ReLU activation function, the first convolutional layer feeds the first max pooling layer, which downsamples with a 2 × 2 pooling kernel, 20 filters, and stride 2, giving a first max pooling layer of size 98 × 78 × 20. This completes the first convolution-and-pooling operation.
The second and third convolution-and-pooling operations follow the same procedure as the first; looking up the parameters in Table 1 gives a second convolutional layer of size 94 × 74 × 40, a second max pooling layer of 47 × 37 × 40, a third convolutional layer of 44 × 34 × 60, and a third max pooling layer of 22 × 17 × 60. After the three convolution-and-pooling operations, the network connects in sequence the first fully connected layer, a ReLU layer, a Dropout layer, and the second fully connected layer. The Dropout layer randomly suppresses some neurons to prevent overfitting during training. The first fully connected layer takes the 22 × 17 × 60 volume as input and outputs 400; the ReLU and Dropout layers keep the 400 inputs and outputs unchanged; the second fully connected layer takes 400 inputs and outputs 6.
Finally, after a Softmax layer normalizes the data, the output layer produces 6 classes. These 6 control instruction classes correspond in turn to the remote control commands: forward (0), turn left (1), translate left (2), turn right (3), translate right (4), and U-turn (5).
Table 1
Serial number | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
---|---|---|---|---|---|---|---|---|---|---|
Name | Input layer | First convolutional layer | First max pooling layer | Second convolutional layer | Second max pooling layer | Third convolutional layer | Third max pooling layer | First fully connected layer | Second fully connected layer | Output layer |
Filter number | 1 | 20 | 20 | 40 | 40 | 60 | 60 | | | |
Convolution kernel size | | 5×5 | | 5×5 | | 4×4 | | | | |
Convolution stride | | 1 | | 1 | | 1 | | | | |
Convolution padding | | 1 | | 1 | | 1 | | | | |
Pooling kernel size | | | 2×2 | | 2×2 | | 2×2 | | | |
Pooling stride | | | 2 | | 2 | | 2 | | | |
Output size | | | | | | | | 400 | 6 | 6 |
ReLU | | Yes | | Yes | | Yes | | Yes | | |
Dropout | | | | | | | | Yes | | |
Softmax | | | | | | | | | | Yes |
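The layer sizes in Table 1 can be checked with the standard output-size formula out = (in − k + 2p)/s + 1 for a convolution or pooling with kernel k, stride s, and padding p. Note that the stated sizes (200 → 196 → 98 → 94 → 47 → 44 → 22) are consistent with zero padding, so this sketch uses p = 0; the helper function is illustrative, not part of the patent:

```python
def out_size(n, k, s=1, p=0):
    """Output length of a convolution/pooling along one axis:
    (n - k + 2p) // s + 1."""
    return (n - k + 2 * p) // s + 1

h, w = 200, 160                              # input image: 200 x 160 x 1
h, w = out_size(h, 5), out_size(w, 5)        # conv1 5x5, stride 1 -> 196 x 156
h, w = out_size(h, 2, 2), out_size(w, 2, 2)  # pool1 2x2, stride 2 -> 98 x 78
h, w = out_size(h, 5), out_size(w, 5)        # conv2 5x5            -> 94 x 74
h, w = out_size(h, 2, 2), out_size(w, 2, 2)  # pool2 2x2, stride 2  -> 47 x 37
h, w = out_size(h, 4), out_size(w, 4)        # conv3 4x4            -> 44 x 34
h, w = out_size(h, 2, 2), out_size(w, 2, 2)  # pool3 2x2, stride 2  -> 22 x 17
print(h, w, h * w * 60)  # 22 17 22440 features into the first FC layer
```

Running the chain reproduces every spatial size in Table 1 and gives the 22 × 17 × 60 = 22440-element vector that the first fully connected layer maps down to 400.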
Fig. 4 is the system structure and function diagram of the vehicle. As Fig. 4 shows, the vision navigation processing system 905 comprises the deep learning model 903 and the decision module 904. The deep learning model 903 is obtained by training on the sample set 900, with its input images provided by camera A 501. The sample set 900 consists of the road environment video images collected by camera A 501 while the vehicle moves, together with the operator's remote control instructions, and is divided into the training set 901 and the test set 902 at a 9:1 ratio. The input of the decision module 904 is provided by the deep learning model 903, the binocular stereo vision obstacle avoidance module 500, and the ultrasonic obstacle avoidance module 400; its output decision instructions are transferred to the motor drive module 702, realizing autonomous motion control of the vehicle.
As shown in Fig. 5, the working process of the unmanned logistics vehicle of this embodiment is as follows:
Step 1: Copy the road environment video images collected by the vehicle, together with the operator's remote control instructions, to a computer, and split the sample set into a training set and a test set at a 9:1 ratio.
Step 2: On the computer, train the deep learning model with the training set and test it with the test set, repeatedly adjusting parameters and re-testing until the output control precision meets the system's control requirements; then port the resulting deep learning model to the embedded system.
Step 3: The binocular stereo vision obstacle avoidance module detects distant obstacles in the road scene and the ultrasonic modules detect nearby obstacles; the obstacle information acquired by the two is collectively called the vehicle's obstacle avoidance information.
Step 4: Through camera A, the vehicle transmits the collected road environment images to the deep learning model in real time, which outputs control instruction information; the decision module then weighs these instructions against the obstacle avoidance information, issues decision instructions, and controls the motor drive module, realizing unmanned operation of the vehicle.
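The arbitration in Step 4 between the model's command and the obstacle avoidance information could look like the following. This is a hypothetical sketch — the patent does not specify the decision module's rules, so the distance thresholds, the speed scaling, and the stop behavior are all assumptions:

```python
COMMANDS = ["forward", "turn_left", "translate_left",
            "turn_right", "translate_right", "u_turn"]

def decide(model_class, near_dist_m, far_dist_m,
           near_stop=0.5, far_slow=3.0):
    """Combine the CNN's output class (0-5) with the ultrasonic (near)
    and stereo (far) obstacle distances into a motor command.
    Returns (command, speed_factor)."""
    if near_dist_m < near_stop:    # ultrasonic: obstacle dangerously close
        return ("stop", 0.0)
    if far_dist_m < far_slow:      # stereo: distant obstacle, slow down
        return (COMMANDS[model_class], 0.5)
    return (COMMANDS[model_class], 1.0)

print(decide(0, near_dist_m=2.0, far_dist_m=10.0))  # ('forward', 1.0)
print(decide(3, near_dist_m=0.3, far_dist_m=10.0))  # ('stop', 0.0)
```

The key design point is precedence: the close-range ultrasonic reading overrides everything, the stereo reading only modulates speed, and the learned command is followed otherwise.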
Claims (2)
1. An unmanned logistics vehicle based on deep learning, comprising two parts: a vehicle exterior structure and a vehicle interior structure, characterized in that:
the exterior structure mainly consists of the vehicle body, ultrasonic obstacle avoidance modules, and a binocular stereo vision obstacle avoidance module; the vehicle body is provided with five drawer doors in total for storing cargo, two on each side and one at the tail; the bottom of the body carries four Mecanum wheels, enabling omnidirectional rotation in place; each side of the head is equipped with an ultrasonic obstacle avoidance module for close-range protective ranging; camera A and camera B are mounted in the middle of the head and together form the binocular stereo vision obstacle avoidance module for detecting distant obstacles in the road scene, while the ultrasonic modules detect nearby obstacles; the obstacle distance information acquired by the two is collectively called the vehicle's obstacle avoidance information; camera A, analogous to a person's dominant eye, provides image data to the vision navigation processing system;
the interior structure mainly consists of a motor drive module, an embedded system, and a power module; the motor drive module drives brushless DC motors, which drive the Mecanum wheels through belts; the embedded system acquires the image data, hosts the vision navigation processing system, and controls the motor drive module; the power module supplies the system from a battery pack;
the vehicle completes its unmanned functions in the road working environment through the vision navigation processing system, which mainly consists of a deep learning model, a decision module, and a sample set, the sample set being further divided into a training set and a test set;
the deep learning model is established as follows:
Step 1: copy the road environment video images collected by the vehicle, together with the operator's remote control instructions, to a computer, and split the sample set into a training set and a test set at a 9:1 ratio;
Step 2: use the training set to train the deep learning model and the test set to test it; repeatedly adjust the model's parameters and re-test, observing the test error, until the control precision of the system is satisfied, yielding the required deep learning model.
2. The unmanned logistics vehicle of claim 1, characterized in that:
the deep learning model is a deep convolutional neural network whose layers are connected in the following order: input layer; first convolution plus activation; first max pooling layer; second convolution plus activation; second max pooling layer; third convolution plus activation; third max pooling layer; first fully connected layer plus activation plus Dropout; second fully connected layer; output layer;
the image acquired by camera A serves as the input layer, with input size 200 × 160 × 1; the input layer is convolved with a 5 × 5 kernel, 20 filters, stride 1, and padding 1, giving a first convolutional layer of size 196 × 156 × 20; after the ReLU activation function, the first convolutional layer feeds the first max pooling layer, which downsamples with a 2 × 2 pooling kernel, 20 filters, and stride 2, giving a first max pooling layer of size 98 × 78 × 20 and completing the first convolution-and-pooling operation; the second and third convolution-and-pooling operations follow the same procedure, and looking up the parameters in Table 1 gives a second convolutional layer of size 94 × 74 × 40, a second max pooling layer of 47 × 37 × 40, a third convolutional layer of 44 × 34 × 60, and a third max pooling layer of 22 × 17 × 60;
Table 1
after the three convolution-and-pooling operations, the network connects in sequence the first fully connected layer, a ReLU layer, a Dropout layer, and the second fully connected layer, the Dropout layer randomly suppressing some neurons to prevent overfitting during training; the first fully connected layer takes 22 × 17 × 60 as input and outputs 400; the ReLU and Dropout layers keep the 400 inputs and outputs unchanged; the second fully connected layer takes 400 inputs and outputs 6; finally, after a Softmax layer normalizes the data, the output layer produces 6 classes, corresponding in turn to the remote control commands: forward (0), turn left (1), translate left (2), turn right (3), translate right (4), and U-turn (5).
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710146233.6A CN106873566B (en) | 2017-03-14 | 2017-03-14 | Unmanned logistics vehicle based on deep learning |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106873566A CN106873566A (en) | 2017-06-20 |
CN106873566B (en) | 2019-01-22 |
Family
ID=59170774
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710146233.6A CN106873566B (en) | 2017-03-14 | 2017-03-14 | Unmanned logistics vehicle based on deep learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106873566B (en) |
Families Citing this family (30)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109116374B (en) * | 2017-06-23 | 2021-08-17 | 百度在线网络技术(北京)有限公司 | Method, device and equipment for determining distance of obstacle and storage medium |
US10520940B2 (en) * | 2017-08-14 | 2019-12-31 | GM Global Technology Operations LLC | Autonomous operation using deep spatio-temporal learning |
CN107450593B (en) * | 2017-08-30 | 2020-06-12 | 清华大学 | Unmanned aerial vehicle autonomous navigation method and system |
CN107767487B (en) * | 2017-09-05 | 2020-08-04 | 百度在线网络技术(北京)有限公司 | Method and device for determining data acquisition route |
CN107515607A (en) * | 2017-09-05 | 2017-12-26 | 百度在线网络技术(北京)有限公司 | Control method and device for unmanned vehicle |
CN107491072B (en) * | 2017-09-05 | 2021-03-30 | 百度在线网络技术(北京)有限公司 | Vehicle obstacle avoidance method and device |
CN107563332A (en) * | 2017-09-05 | 2018-01-09 | 百度在线网络技术(北京)有限公司 | For the method and apparatus for the driving behavior for determining unmanned vehicle |
CN107544518B (en) * | 2017-10-17 | 2020-12-01 | 芜湖伯特利汽车安全系统股份有限公司 | ACC/AEB system based on anthropomorphic driving and vehicle |
CN107826105B (en) * | 2017-10-31 | 2019-07-02 | 清华大学 | Translucent automatic Pilot artificial intelligence system and vehicle |
CN109754625A (en) * | 2017-11-07 | 2019-05-14 | 天津工业大学 | Take out the drive manner of vehicle in a kind of unpiloted campus |
CN109933081A (en) * | 2017-12-15 | 2019-06-25 | 北京京东尚科信息技术有限公司 | Unmanned plane barrier-avoiding method, avoidance unmanned plane and unmanned plane obstacle avoidance apparatus |
CN107967468A (en) * | 2018-01-19 | 2018-04-27 | 刘至键 | A kind of supplementary controlled system based on pilotless automobile |
CN110298219A (en) | 2018-03-23 | 2019-10-01 | 广州汽车集团股份有限公司 | Unmanned lane keeping method, device, computer equipment and storage medium |
CN108897313A (en) * | 2018-05-23 | 2018-11-27 | 清华大学 | A kind of end-to-end Vehicular automatic driving system construction method of layer-stepping |
US11966838B2 (en) * | 2018-06-19 | 2024-04-23 | Nvidia Corporation | Behavior-guided path planning in autonomous machine applications |
CN109358614A (en) * | 2018-08-30 | 2019-02-19 | 深圳市易成自动驾驶技术有限公司 | Automatic Pilot method, system, device and readable storage medium storing program for executing |
CN109240123A (en) * | 2018-10-09 | 2019-01-18 | 合肥学院 | On-loop simulation method and system for intelligent logistics vehicle |
CN109407679B (en) * | 2018-12-28 | 2022-12-23 | 百度在线网络技术(北京)有限公司 | Method and device for controlling an unmanned vehicle |
CN109976153B (en) * | 2019-03-01 | 2021-03-26 | 北京三快在线科技有限公司 | Method and device for controlling unmanned equipment and model training and electronic equipment |
CN110473806A (en) * | 2019-07-13 | 2019-11-19 | 河北工业大学 | The deep learning identification of photovoltaic cell sorting and control method and device |
CN110646574B (en) * | 2019-10-08 | 2022-02-08 | 张家港江苏科技大学产业技术研究院 | Unmanned ship-based water quality conductivity autonomous detection system and method |
CN110673602B (en) * | 2019-10-24 | 2022-11-25 | 驭势科技(北京)有限公司 | Reinforced learning model, vehicle automatic driving decision method and vehicle-mounted equipment |
CN111142519A (en) * | 2019-12-17 | 2020-05-12 | 西安工业大学 | Automatic driving system based on computer vision and ultrasonic radar redundancy and control method thereof |
CN113470416B (en) * | 2020-03-31 | 2023-02-17 | 上汽通用汽车有限公司 | System, method and storage medium for realizing parking space detection by using embedded system |
CN111898732B (en) * | 2020-06-30 | 2023-06-20 | 江苏省特种设备安全监督检验研究院 | Ultrasonic ranging compensation method based on deep convolutional neural network |
CN111975769A (en) * | 2020-07-16 | 2020-11-24 | 华南理工大学 | Mobile robot obstacle avoidance method based on meta-learning |
CN114509077B (en) * | 2020-11-16 | 2024-09-06 | 阿里巴巴集团控股有限公司 | Method, device, system and computer program product for generating a guide wire |
CN112835333B (en) * | 2020-12-31 | 2022-03-15 | 北京工商大学 | Multi-AGV obstacle avoidance and path planning method and system based on deep reinforcement learning |
CN116409312A (en) * | 2021-12-31 | 2023-07-11 | 上海邦邦机器人有限公司 | Driver assistance system, method and storage medium for elderly mobility scooters |
CN114495041A (en) * | 2022-01-27 | 2022-05-13 | 北京合众思壮时空物联科技有限公司 | Method, device, equipment and medium for measuring distance between vehicle and target object |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103129468A (en) * | 2013-02-19 | 2013-06-05 | 河海大学常州校区 | Vehicle-mounted roadblock recognition system and method based on laser imaging technique |
CN103514456A (en) * | 2013-06-30 | 2014-01-15 | 安科智慧城市技术(中国)有限公司 | Image classification method and device based on compressed sensing multi-kernel learning |
CN104715021A (en) * | 2015-02-27 | 2015-06-17 | 南京邮电大学 | Multi-label learning design method based on hashing |
CN104793620A (en) * | 2015-04-17 | 2015-07-22 | 中国矿业大学 | Obstacle avoidance robot based on visual feature binding and reinforcement learning theory |
CN105629221A (en) * | 2014-10-26 | 2016-06-01 | 江西理工大学 | Logistics vehicle wireless-infrared-ultrasonic distance-measuring and positioning system |
- 2017-03-14: CN application CN201710146233.6A granted as CN106873566B (status: Active)
Also Published As
Publication number | Publication date |
---|---|
CN106873566A (en) | 2017-06-20 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106873566B (en) | Unmanned logistics vehicle based on deep learning | |
Oleynikova et al. | Safe local exploration for replanning in cluttered unknown environments for microaerial vehicles | |
Schwesinger et al. | Automated valet parking and charging for e-mobility | |
CN102707724B (en) | Visual localization and obstacle avoidance method and system for unmanned plane | |
CN114815654B (en) | Unmanned vehicle control-oriented digital twin system and construction method thereof | |
CN110362083A (en) | Autonomous navigation method under a space-time map based on multi-target tracking prediction
CN106054896A (en) | Intelligent navigation robot cart system
CA3139421A1 (en) | Automatic annotation of object trajectories in multiple dimensions | |
CA3139477A1 (en) | Systems and methods for simulating traffic scenes | |
Yu et al. | Baidu driving dataset and end-to-end reactive control model | |
CN106863259A (en) | Wheeled multi-robot intelligent ball-collecting robot
DE102022114201A1 (en) | Neural network for object detection and tracking | |
Gajjar et al. | A comprehensive study on lane detecting autonomous car using computer vision | |
Sanchez-Lopez et al. | A vision based aerial robot solution for the mission 7 of the international aerial robotics competition | |
Asadi et al. | An integrated aerial and ground vehicle (UAV-UGV) system for automated data collection for indoor construction sites | |
Dong et al. | A vision-based method for improving the safety of self-driving | |
CN204883371U (en) | Decide many rotor crafts of dimension flight and controller thereof | |
Soto et al. | Cyber-ATVs: Dynamic and Distributed Reconnaissance and Surveillance Using All-Terrain UGVs | |
Helble et al. | OATS: Oxford aerial tracking system | |
Cheng et al. | Technologies for Automating Rotorcraft Nap‐of‐the‐Earth Flight | |
Shi et al. | Path Planning of Unmanned Aerial Vehicle Based on Supervised Learning | |
Jarvis | An all-terrain intelligent autonomous vehicle with sensor-fusion-based navigation capabilities | |
CN112965494A (en) | Control system and method for a pure-electric autonomous special-purpose vehicle in a fixed area
CN207281589U (en) | Autonomous line-marking robot system
Anderson et al. | Semi-autonomous unmanned ground vehicle control system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||