WO2021051329A1 - Systems and methods for determining estimated time of arrival in online to offline services - Google Patents
Systems and methods for determining estimated time of arrival in online to offline services
- Publication number
- WO2021051329A1 (PCT/CN2019/106567)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- preset
- links
- eta
- service request
- determining
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/02—Reservations, e.g. for tickets, services or events
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/044—Recurrent networks, e.g. Hopfield networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q30/00—Commerce
- G06Q30/02—Marketing; Price estimation or determination; Fundraising
- G06Q30/0201—Market modelling; Market analysis; Collecting market data
- G06Q30/0202—Market predictions or forecasting for commercial activities
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q50/00—Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
- G06Q50/40—Business processes related to the transportation industry
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/123—Traffic control systems for road vehicles indicating the position of vehicles, e.g. scheduled vehicles; Managing passenger vehicles circulating according to a fixed timetable, e.g. buses, trains, trams
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/20—Monitoring the location of vehicles belonging to a group, e.g. fleet of vehicles, countable or determined number of vehicles
Definitions
- This disclosure generally relates to an online to offline service platform, and more particularly, relates to systems and methods for determining an estimated time of arrival (ETA) for a service request.
- a system providing the online taxi hailing services can recommend one or more suitable preset driving routes based on a starting location and a destination of the service request. It is important to accurately predict the arrival time at the destination, which can help estimate the service fee and/or provide the estimated time of arrival (ETA) at the destination to a user who wants to know it.
- the user may change the preset driving route due to objective reasons (e.g., traffic congestion, a traffic accident, water accumulation on a road segment) and/or a subjective preference (e.g., a view preference, shipping an object), which may lead to a low accuracy of the ETA.
- a method may include one or more of the following operations performed by at least one processor.
- the method may include obtaining a service request comprising a starting location and a destination.
- the method may include determining a preset route from the starting location to the destination.
- the preset route may include a plurality of preset links.
- the method may include, for each one of the plurality of preset links, determining a corresponding vector representing a topological relationship among the plurality of preset links.
- the method may include determining, using a machine learning model, the ETA for the service request based on the vectors corresponding to the plurality of preset links.
- the method may also include, for each one of the plurality of preset links, determining the corresponding vector using a word2vec model.
- the method may also include determining the word2vec model according to a training process.
- the training process may include obtaining a plurality of training samples comprising a plurality of historical routes associated with a plurality of historical orders.
- the training process may include determining the word2vec model based on the plurality of training samples.
- the method may also include removing a historical route including duplicate links.
- the method may also include removing a historical route that violates a traffic rule.
- the method may also include obtaining at least one feature of the each one of the plurality of preset links.
- the method may also include determining, based on the at least one feature of the each one of the plurality of preset links and the vector corresponding to the each one of the plurality of preset links, a sub-ETA corresponding to the each one of the plurality of preset links by using the machine learning model.
- the method may also include determining, based on the sub-ETA corresponding to the each one of the plurality of preset links, the ETA for the service request.
- the at least one feature of the each one of the plurality of preset links may include at least one of a length of the preset link, a width of the preset link, a direction of travel, a traffic light distribution, or a lane condition.
- the plurality of historical routes may be determined according to a map matching algorithm.
- the machine learning model may be a recurrent neural network (RNN) model.
- the method may also include transmitting the ETA to a terminal, directing the terminal to display the ETA.
- a system for determining an estimated time of arrival (ETA) for a service request may include at least one storage medium storing a set of instructions, and at least one processor in communication with the at least one storage medium.
- the at least one processor may cause the system to obtain a service request comprising a starting location and a destination.
- the at least one processor may also cause the system to determine a preset route from the starting location to the destination.
- the preset route may include a plurality of preset links.
- the at least one processor may also cause the system to, for each one of the plurality of preset links, determine a corresponding vector representing a topological relationship among the plurality of preset links.
- the at least one processor may also cause the system to determine, using a machine learning model, the ETA for the service request based on the vectors corresponding to the plurality of preset links.
- a non-transitory computer readable medium storing instructions, the instructions, when executed by at least one processor, causing the at least one processor to implement a method.
- the method may include obtaining a service request comprising a starting location and a destination.
- the method may include determining a preset route from the starting location to the destination.
- the preset route may include a plurality of preset links.
- the method may include, for each one of the plurality of preset links, determining a corresponding vector representing a topological relationship among the plurality of preset links.
- the method may include determining, using a machine learning model, the ETA for the service request based on the vectors corresponding to the plurality of preset links.
- FIG. 1 is a schematic diagram illustrating an exemplary online to offline service system according to some embodiments of the present disclosure
- FIG. 2 is a schematic diagram illustrating exemplary hardware and/or software components of an exemplary computing device according to some embodiments of the present disclosure
- FIG. 3 is a schematic diagram illustrating exemplary hardware and/or software components of an exemplary mobile device according to some embodiments of the present disclosure
- FIG. 4 is a block diagram illustrating an exemplary processing engine according to some embodiments of the present disclosure
- FIG. 5 is a flowchart illustrating an exemplary process for determining an ETA for a service request according to some embodiments of the present disclosure
- FIG. 6 is a flowchart illustrating an exemplary process for determining a word2vec model according to some embodiments of the present disclosure.
- FIG. 7 is a flowchart illustrating an exemplary process for determining an ETA for a service request according to some embodiments of the present disclosure.
- the flowcharts used in the present disclosure illustrate operations that systems may implement according to some embodiments of the present disclosure. It is to be expressly understood that the operations of the flowcharts need not be performed in the order shown. Conversely, the operations may be performed in reverse order or simultaneously. Moreover, one or more other operations may be added to the flowcharts, and one or more operations may be removed from the flowcharts.
- although the systems and methods disclosed in the present disclosure are described primarily with respect to online transportation services, it should also be understood that this is only one exemplary embodiment.
- the systems and methods of the present disclosure may be applied to any other kind of online to offline service.
- the systems and methods of the present disclosure may be applied to transportation systems of different environments including land (e.g. roads or off-road) , water (e.g. river, lake, or ocean) , air, aerospace, or the like, or any combination thereof.
- the vehicle of the transportation systems may include a taxi, a private car, a hitch, a bus, a train, a bullet train, a high-speed rail, a subway, a boat, a vessel, an aircraft, a spaceship, a hot-air balloon, a driverless vehicle, or the like, or any combination thereof.
- the transportation systems may also include any transportation system for management and/or distribution, for example, a system for sending and/or receiving an express delivery.
- the application of the systems and methods of the present disclosure may include a mobile device (e.g. smart phone or pad) application, a webpage, a plug-in of a browser, a client terminal, a custom system, an internal analysis system, an artificial intelligence robot, or the like, or any combination thereof.
- the terms “passenger,” “requester,” “requestor,” “service requester,” “service requestor,” and “customer” in the present disclosure are used interchangeably to refer to an individual, an entity, or a tool that may request or order a service.
- the terms “driver,” “provider,” “service provider,” and “supplier” in the present disclosure are used interchangeably to refer to an individual, an entity, or a tool that may provide a service or facilitate the providing of the service.
- the term “user” in the present disclosure refers to an individual, an entity, or a tool that may request a service, order a service, provide a service, or facilitate the providing of the service.
- the terms “requester” and “requester terminal” may be used interchangeably, and the terms “provider” and “provider terminal” may be used interchangeably.
- the terms “request, ” “service, ” “service request, ” and “order” in the present disclosure are used interchangeably to refer to a request that may be initiated by a passenger, a requester, a service requester, a customer, a driver, a provider, a service provider, a supplier, or the like, or any combination thereof.
- the service request may be accepted by any one of a passenger, a requester, a service requester, a customer, a driver, a provider, a service provider, or a supplier.
- the service request is accepted by a driver, a provider, a service provider, or a supplier.
- the service request may be chargeable or free.
- the positioning technology used in the present disclosure may be based on a global positioning system (GPS) , a global navigation satellite system (GLONASS) , a compass navigation system (COMPASS) , a Galileo positioning system, a quasi-zenith satellite system (QZSS) , a wireless fidelity (WiFi) positioning technology, or the like, or any combination thereof.
- An aspect of the present disclosure is directed to systems and methods for determining an estimated time of arrival (ETA) for a service request.
- the processing engine may obtain a service request.
- the service request may include a starting location and a destination.
- the processing engine may determine a preset route from the starting location to the destination.
- the preset route may include a plurality of preset links.
- the processing engine may determine a corresponding vector representing a topological relationship among the plurality of preset links.
- the processing engine may determine the corresponding vector using a word2vec model.
- the processing engine may determine, using a machine learning model (e.g., a RNN model) , the ETA for the service request based on the vectors corresponding to the plurality of preset links. Accordingly, the ETA for the service request may be determined more accurately, which may improve user experience.
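- A minimal end-to-end sketch of this flow is shown below, in Python; the helper names (plan_route, link_vector, eta_model) are hypothetical stand-ins for the route-planning, word2vec, and machine-learning components described later, not the patented implementation.

```python
from dataclasses import dataclass
from typing import Callable, List, Sequence

@dataclass
class ServiceRequest:
    starting_location: str
    destination: str

def estimate_eta(request: ServiceRequest,
                 plan_route: Callable[[str, str], List[str]],      # hypothetical: returns preset link IDs
                 link_vector: Callable[[str], Sequence[float]],    # hypothetical: word2vec vector per link
                 eta_model: Callable[[List[Sequence[float]]], float]) -> float:
    """Sketch of the claimed flow: preset route -> link vectors -> ML model -> ETA."""
    # 1. Determine a preset route (a sequence of preset links) for the request.
    preset_links = plan_route(request.starting_location, request.destination)
    # 2. Represent each preset link as a vector encoding its topology in the road network.
    vectors = [link_vector(link) for link in preset_links]
    # 3. Feed the vector sequence to a trained machine learning model (e.g., an RNN).
    return eta_model(vectors)
```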
- FIG. 1 is a schematic diagram illustrating an exemplary online to offline service system according to some embodiments of the present disclosure.
- the online to offline service system may be a system for online to offline services.
- the online to offline service system 100 may be an online transportation service platform for transportation services such as taxi hailing, chauffeur services, delivery services, express car, carpool, bus service, driver hiring, shuttle services, take-out services, navigation services, and vehicle sharing services.
- the online to offline service system 100 may include a server 110, a positioning system 120, a terminal device 130, a storage device 140, and a network 150.
- the server 110 may be a single server or a server group.
- the server group may be centralized or distributed (e.g., the server 110 may be a distributed system) .
- the server 110 may be local or remote.
- the server 110 may access information and/or data stored in the terminal device 130, the storage device 140, and/or the positioning system 120 via the network 150.
- the server 110 may be directly connected to the terminal device 130, and/or the storage device 140 to access stored information and/or data.
- the server 110 may be implemented on a cloud platform or an onboard computer.
- the cloud platform may include a private cloud, a public cloud, a hybrid cloud, a community cloud, a distributed cloud, an inter-cloud, a multi-cloud, or the like, or any combination thereof.
- the server 110 may be implemented on a computing device 200 having one or more components illustrated in FIG. 2 in the present disclosure.
- the server 110 may include a processing engine 112.
- the processing engine 112 may process information and/or data to perform one or more functions described in the present disclosure. For example, the processing engine 112 may obtain a service request comprising a starting location and a destination. As another example, the processing engine 112 may determine a preset route from a starting location to a destination. The preset route may include a plurality of preset links. As still another example, for each one of a plurality of preset links, the processing engine 112 may determine a corresponding vector representing a topological relationship among the plurality of preset links.
- the processing engine 112 may determine, using a machine learning model (e.g., a RNN model) , an ETA for a service request based on a plurality of vectors corresponding to a plurality of preset links.
- the processing engine 112 may include one or more processing engines (e.g., single-core processing engine (s) or multi-core processor (s) ) .
- the processing engine 112 may include a central processing unit (CPU) , an application-specific integrated circuit (ASIC) , an application-specific instruction-set processor (ASIP) , a graphics processing unit (GPU) , a physics processing unit (PPU) , a digital signal processor (DSP) , a field programmable gate array (FPGA) , a programmable logic device (PLD) , a controller, a microcontroller unit, a reduced instruction-set computer (RISC) , a microprocessor, or the like, or any combination thereof.
- the server 110 may be connected to the network 150 to communicate with one or more components (e.g., the terminal device 130, the storage device 140, and/or the positioning system 120) of the online to offline service system 100.
- the server 110 may be directly connected to or communicate with one or more components (e.g., the terminal device 130, the storage device 140, and/or the positioning system 120) of the online to offline service system 100.
- the positioning system 120 may determine information associated with an object, for example, one or more of the terminal devices 130.
- the positioning system 120 may be a global positioning system (GPS) , a global navigation satellite system (GLONASS) , a compass navigation system (COMPASS) , a BeiDou navigation satellite system, a Galileo positioning system, a quasi-zenith satellite system (QZSS) , etc.
- the information may include a location, an elevation, a velocity, or an acceleration of the object, or a current time.
- the positioning system 120 may include one or more satellites, for example, a satellite 120-1, a satellite 120-2, and a satellite 120-3.
- the satellites 120-1 through 120-3 may determine the information mentioned above independently or jointly.
- the positioning system 120 may send the information mentioned above to the network 150 and/or the terminal device 130 via wireless connections.
- the terminal device 130 may include a mobile device 130-1, a tablet computer 130-2, a laptop computer 130-3, a built-in device in a motor vehicle 130-4, or the like, or any combination thereof.
- the mobile device 130-1 may include a smart home device, a wearable device, a mobile device, a virtual reality device, an augmented reality device, or the like, or any combination thereof.
- the smart home device may include a smart lighting device, a control device of an intelligent electrical apparatus, a smart monitoring device, a smart television, a smart video camera, an interphone, or the like, or any combination thereof.
- the wearable device may include a bracelet, footgear, glasses, a helmet, a watch, clothing, a backpack, a smart accessory, or the like, or any combination thereof.
- the mobile device may include a mobile phone, a personal digital assistance (PDA) , a gaming device, a navigation device, a point of sale (POS) device, a laptop, a desktop, or the like, or any combination thereof.
- the virtual reality device and/or the augmented reality device may include a virtual reality helmet, virtual reality glasses, a virtual reality patch, an augmented reality helmet, augmented reality glasses, an augmented reality patch, or the like, or any combination thereof.
- the virtual reality device and/or the augmented reality device may include a Google Glass™, a RiftCon™, a Fragments™, a Gear VR™, etc.
- the built-in device in the motor vehicle 130-4 may include an onboard computer, an onboard television, etc.
- the terminal device 130 may be a device with positioning technology for locating the position of a user of the terminal device 130 (e.g., a service requester or a service provider) and/or the terminal device 130.
- the terminal device 130 may communicate with one or more other positioning devices to determine the position of the terminal device 130.
- the terminal device 130 may send positioning information to the server 110.
- the terminal device 130 may receive/transmit information related to the online to offline service system 100 from/to one or more components (e.g., the server 110, the storage device 140, the positioning system 120) of the online to offline service system 100.
- a user of the terminal device 130 may input a starting location and a destination through the terminal device 130.
- the terminal device 130 may transmit the user’s input to the server 110 (e.g., request a navigation service) .
- the terminal device 130 may receive, from the server 110, signals including a route from the starting location to the destination and/or an ETA of the route and display the route and/or the ETA.
- the terminal device 130 may include a requester terminal and a provider terminal.
- a requester may be a user of the requester terminal.
- the terms “passenger, ” “requester, ” “service requester, ” and “customer” in the present disclosure are used interchangeably to refer to an individual, an entity or a tool that may request or order a service.
- a provider may be a user of the provider terminal.
- driver, ” “provider, ” “service provider, ” and “supplier” in the present disclosure are used interchangeably to refer to an individual, an entity, or a tool that may provide a service or facilitate the providing of the service.
- the term “user” in the present disclosure may refer to an individual, an entity, or a tool that may request a service, order a service, provide a service, or facilitate the providing of the service.
- the user may be a passenger, a driver, an operator, or the like, or any combination thereof.
- the terms “passenger” and “passenger terminal” may be used interchangeably, and the terms “driver” and “driver terminal” may be used interchangeably.
- the terms “request,” “service request,” and “order” in the present disclosure refer to a request that may be initiated by a passenger, a requester, a service requester, a customer, a driver, a provider, a service provider, a supplier, or the like, or any combination thereof.
- the service request may be accepted by any one of a passenger, a requester, a service requester, a customer, a driver, a provider, a service provider, or a supplier.
- the service request may be chargeable, or free.
- the user of the terminal device 130 may be someone other than the requester.
- a user A of the requester terminal may use the requester terminal to send a service request for a user B, or receive service and/or information or instructions from the server 110.
- the user of the provider terminal may be someone other than the provider.
- a user C of the provider terminal may use the provider terminal to receive a service request for a user D, and/or information or instructions from the server 110.
- “service requester, ” “requester, ” and “requester terminal” may be used interchangeably, and “service provider, ” “provider, ” and “provider terminal” may be used interchangeably.
- the storage device 140 may store data and/or instructions. In some embodiments, the storage device 140 may store data obtained from the terminal device 130, the positioning system 120, the processing engine 112, and/or an external storage device. For example, the storage device 140 may store a service request received from a terminal device (e.g., the terminal device 130) . As another example, the storage device 140 may store a plurality of links associated with a route determined by the processing engine 112. As another example, the storage device 140 may store a preset route from a starting location to a destination determined by the processing engine 112. As another example, the storage device 140 may store a vector corresponding to each one of a plurality of preset links determined by the processing engine 112.
- the storage device 140 may store an ETA for a service request determined by the processing engine 112.
- the storage device 140 may store data and/or instructions that the server 110 may execute or use to perform exemplary methods described in the present disclosure.
- the storage device 140 may store instructions that the processing engine 112 may execute or use to determine a preset route from a starting location to a destination.
- the storage device 140 may store instructions that the processing engine 112 may execute or use to determine a vector corresponding to each one of a plurality of preset links.
- the storage device 140 may store instructions that the processing engine 112 may execute or use to determine, using a machine learning model, an ETA for a service request based on a plurality of vectors corresponding to a plurality of preset links.
- the storage device 140 may include a mass storage, a removable storage, a volatile read-and-write memory, a read-only memory (ROM) , or the like, or any combination thereof.
- Exemplary mass storage may include a magnetic disk, an optical disk, a solid-state drive, etc.
- Exemplary removable storage may include a flash drive, a floppy disk, an optical disk, a memory card, a zip disk, a magnetic tape, etc.
- Exemplary volatile read-and-write memory may include a random access memory (RAM) .
- Exemplary RAM may include a dynamic RAM (DRAM), a double data rate synchronous dynamic RAM (DDR SDRAM), a static RAM (SRAM), a thyristor RAM (T-RAM), a zero-capacitor RAM (Z-RAM), etc.
- Exemplary ROM may include a mask ROM (MROM) , a programmable ROM (PROM) , an erasable programmable ROM (EPROM) , an electrically-erasable programmable ROM (EEPROM) , a compact disk ROM (CD-ROM) , and a digital versatile disk ROM, etc.
- the storage device 140 may be implemented on a cloud platform.
- the cloud platform may include a private cloud, a public cloud, a hybrid cloud, a community cloud, a distributed cloud, an inter-cloud, a multi-cloud, or the like, or any combination thereof.
- the storage device 140 may be connected to the network 150 to communicate with one or more components (e.g., the server 110, the terminal device 130, and/or the positioning system 120) of the online to offline service system 100.
- One or more components of the online to offline service system 100 may access the data or instructions stored in the storage device 140 via the network 150.
- the storage device 140 may be directly connected to or communicate with one or more components (e.g., the server 110, the terminal device 130, and/or the positioning system 120) of the online to offline service system 100.
- the storage device 140 may be part of the server 110.
- the network 150 may facilitate exchange of information and/or data.
- one or more components (e.g., the server 110, the terminal device 130, the storage device 140, or the positioning system 120) of the online to offline service system 100 may send information and/or data to other component(s) of the online to offline service system 100 via the network 150.
- the server 110 may obtain/acquire a service request from the terminal device 130 via the network 150.
- the server 110 may transmit an ETA for a service request to the terminal device 130 to display via the network 150 in real-time.
- the server 110 may transmit a predicted/planned route for a service request to the terminal device 130 to display via the network 150 in real time.
- the network 150 may be any type of wired or wireless network, or combination thereof.
- the network 150 may include a cable network, a wireline network, an optical fiber network, a telecommunications network, an intranet, the Internet, a local area network (LAN), a wide area network (WAN), a wireless local area network (WLAN), a metropolitan area network (MAN), a public telephone switched network (PSTN), a Bluetooth network, a ZigBee network, a near field communication (NFC) network, or the like, or any combination thereof.
- the network 150 may include one or more network access points.
- the network 150 may include wired or wireless network access points (e.g., 150-1, 150-2) , through which one or more components of the online to offline service system 100 may be connected to the network 150 to exchange data and/or information.
- the online to offline service system 100 is merely provided for the purposes of illustration, and is not intended to limit the scope of the present disclosure.
- the online to offline service system 100 may further include a database, an information source, etc.
- the online to offline service system 100 may be implemented on other devices to realize similar or different functions.
- the GPS device may also be replaced by another positioning device, such as BeiDou.
- FIG. 2 is a schematic diagram illustrating exemplary hardware and/or software components of an exemplary computing device according to some embodiments of the present disclosure.
- the server 110, the positioning system 120, and/or the terminal device 130 may be implemented on the computing device 200.
- the processing engine 112 may be implemented on the computing device 200 and configured to perform functions of the processing engine 112 disclosed in this disclosure.
- the computing device 200 may be used to implement any component of the online to offline service system 100 as described herein.
- the processing engine 112 may be implemented on the computing device 200, via its hardware, software program, firmware, or a combination thereof.
- although only one such computer is shown for convenience, the computer functions relating to the online to offline service as described herein may be implemented in a distributed fashion on a number of similar platforms to distribute the processing load.
- the computing device 200 may include COM ports 250 connected to and from a network connected thereto to facilitate data communications.
- the computing device 200 may also include a processor 220, in the form of one or more, e.g., logic circuits, for executing program instructions.
- the processor 220 may include interface circuits and processing circuits therein.
- the interface circuits may be configured to receive electronic signals from a bus 210, wherein the electronic signals encode structured data and/or instructions for the processing circuits to process.
- the processing circuits may conduct logic calculations, and then determine a conclusion, a result, and/or an instruction encoded as electronic signals. Then the interface circuits may send out the electronic signals from the processing circuits via the bus 210.
- the computing device 200 may further include program storage and data storage of different forms including, for example, a disk 270, a read only memory (ROM) 230, or a random access memory (RAM) 240, for storing various data files to be processed and/or transmitted by the computing device 200.
- the computing device 200 may also include program instructions stored in the ROM 230, RAM 240, and/or another type of non-transitory storage medium to be executed by the processor 220.
- the methods and/or processes of the present disclosure may be implemented as the program instructions.
- the computing device 200 may also include an I/O component 260, supporting input/output between the computer and other components.
- the computing device 200 may also receive programming and data via network communications.
- multiple processors are also contemplated; thus, operations and/or steps performed by one processor as described in the present disclosure may also be jointly or separately performed by the multiple processors.
- for example, if the processor of the computing device 200 executes both operation A and operation B, it should be understood that operation A and operation B may also be performed by two different CPUs and/or processors jointly or separately in the computing device 200 (e.g., the first processor executes operation A and the second processor executes operation B, or the first and second processors jointly execute operations A and B).
- FIG. 3 is a schematic diagram illustrating exemplary hardware and/or software components of an exemplary mobile device according to some embodiments of the present disclosure.
- the terminal device 130 may be implemented on the mobile device 300.
- the mobile device 300 may include a communication platform 310, a display 320, a graphic processing unit (GPU) 330, a central processing unit (CPU) 340, an I/O 350, a memory 360, a mobile operating system (OS) 370, and a storage 390.
- any other suitable component including but not limited to a system bus or a controller (not shown) , may also be included in the mobile device 300.
- the mobile operating system (OS) 370 (e.g., iOS™, Android™, Windows Phone™) and one or more applications 380 may be loaded into the memory 360 from the storage 390 in order to be executed by the CPU 340.
- the applications 380 may include a browser or any other suitable mobile app for receiving and rendering information relating to online to offline services or other information from the online to offline service system 100.
- User interactions with the information stream may be achieved via the I/O 350 and provided to the processing engine 112 and/or other components of the online to offline service system 100 via the network 150.
- computer hardware platforms may be used as the hardware platform (s) for one or more of the elements described herein.
- a computer with user interface elements may be used to implement a personal computer (PC) or any other type of work station or terminal device.
- a computer may also act as a server if appropriately programmed.
- FIG. 4 is a block diagram illustrating an exemplary processing engine according to some embodiments of the present disclosure.
- the processing engine 112 may include an obtaining module 410, a route determination module 420, a vector determination module 430, a time determination module 440, and a training module 450.
- the obtaining module 410 may be configured to obtain data and/or information associated with the online to offline service system 100. For example, the obtaining module 410 may obtain a service request. As another example, the obtaining module 410 may obtain at least one feature of each one of a plurality of preset links. As still another example, the obtaining module 410 may obtain a plurality of training samples comprising a plurality of historical routes associated with a plurality of historical orders.
- the obtaining module 410 may obtain the data and/or the information from one or more components (e.g., the terminal device 130, and the storage device 140) of the online to offline service system 100 or an external storage device via the network 150.
- the route determination module 420 may be configured to determine a route from a starting location to a destination. In some embodiments, the route determination module 420 may determine a preset route from a starting location to a destination based on route planning techniques.
- the route planning techniques may include a machine learning technique, an artificial intelligence technique, a template approach technique, an artificial potential field technique, or the like, or any combination thereof.
- the vector determination module 430 may be configured to determine a vector of a link. In some embodiments, for each preset link of a plurality of preset links, the vector determination module 430 may determine a corresponding vector representing a topological relationship among the plurality of preset links. For example, for each preset link of a plurality of preset links, the vector determination module 430 may determine a corresponding vector using a word2vec model.
- the time determination module 440 may be configured to determine an ETA for a service request.
- the time determination module 440 may determine an ETA for a preset route including a plurality of preset links. For example, the time determination module 440 may determine, based on at least one feature of each one of the plurality of preset links and a vector corresponding to the each one of the plurality of preset links, a sub-ETA corresponding to the each one of the plurality of preset links by using a machine learning model. As another example, the time determination module 440 may determine, based on a sub-ETA corresponding to each one of a plurality of preset links, an ETA for a service request. More descriptions of the determination of the ETA for the service request may be found elsewhere in the present disclosure (e.g., FIGs. 5, 7 and the descriptions thereof) .
- the training module 450 may be configured to determine a trained word2vec model. In some embodiments, the training module 450 may determine the word2vec model based on a plurality of training samples. More descriptions of the determination of the word2vec model may be found elsewhere in the present disclosure (e.g., FIGs. 5, 6 and the descriptions thereof) .
- the modules in the processing engine 112 may be connected to or communicated with each other via a wired connection or a wireless connection.
- the wired connection may include a metal cable, an optical cable, a hybrid cable, or the like, or any combination thereof.
- the wireless connection may include a Local Area Network (LAN) , a Wide Area Network (WAN) , a Bluetooth, a ZigBee, a Near Field Communication (NFC) , or the like, or any combination thereof.
- Two or more of the modules may be combined into a single module, and any one of the modules may be divided into two or more units. In some embodiments, one or more modules may be combined into a single module.
- the vector determination module 430 and the time determination module 440 may be combined as a single module which may both determine a vector corresponding to a preset link and determine an ETA for the service request.
- one or more modules may be added.
- the processing engine 112 may further include a storage module (not shown) used to store information and/or data (e.g., a plurality of preset links, one or more features of a plurality of preset links) associated with a preset route.
- one or more modules may be omitted.
- the training module 450 may be unnecessary and the word2vec model may be obtained from a storage device (e.g., the storage device 140) , such as the ones disclosed elsewhere in the present disclosure.
- FIG. 5 is a flowchart illustrating an exemplary process for determining an ETA for a service request according to some embodiments of the present disclosure.
- the process 500 may be implemented as a set of instructions (e.g., an application) stored in the storage ROM 230 or RAM 240.
- the processor 220 and/or the modules in FIG. 4 may execute the set of instructions, and when executing the instructions, the processor 220 and/or the modules may be configured to perform the process 500.
- the operations of the illustrated process presented below are intended to be illustrative. In some embodiments, the process 500 may be accomplished with one or more additional operations not described and/or without one or more of the operations discussed herein. Additionally, the order in which the operations of the process are illustrated in FIG. 5 and described below is not intended to be limiting.
- the processing engine 112 may obtain a service request.
- a service request may be a request for any location based services.
- the service request may be a request for a transportation service (e.g., a taxi service, a delivery service, a vehicle hailing service) or a navigation service.
- the service request may include a starting location, a destination, a starting time, identity information (e.g., an identification (ID) , a telephone number, a user’s name) , or the like, or any combination thereof.
- a starting location may refer to a location that a user inputs/selects to start a service (e.g., an online taxi hailing service) via a terminal device (e.g., the terminal device 130) when the user initiates a service request, or a location detected using a positioning technology by the online to offline service system 100 or the terminal device 130.
- a destination may refer to a location that a user inputs/selects to end a service (e.g., an online taxi hailing service) via a terminal device (e.g., the terminal device 130) when the user initiates a service request.
- a starting time may refer to a time point when a user wants to start a service.
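- For illustration, a service request of this kind might be represented as a simple record; the field names and values below are hypothetical and are not mandated by the disclosure.

```python
# Hypothetical representation of a service request (field names are illustrative only).
service_request = {
    "starting_location": (39.9087, 116.3975),     # latitude, longitude of the starting point
    "destination": (39.9928, 116.3396),           # latitude, longitude of the destination
    "starting_time": "2019-09-19T08:30:00+08:00", # when the user wants the service to start
    "user_id": "user-12345",                      # identity information (ID, phone number, name)
}
```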
- the processing engine 112 may obtain the service request from the storage device 140, the terminal device (e.g., the terminal device 130) of one or more users via the network 150.
- the terminal device may establish a communication (e.g., a wireless communication) with the server 110, for example, through an application (e.g., the application 380 in FIG. 3) installed in the terminal device.
- the application may be associated with a service platform (e.g., an online to offline service platform) .
- the application may be associated with a taxi-hailing service platform.
- the user may log into the application and initiate the service request.
- the application installed in the terminal device may direct the terminal device to monitor the service request from the user continuously or periodically, and automatically transmit the service request to the processing engine 112 via the network 150.
- the processing engine 112 may determine a preset route from the starting location to the destination.
- the preset route may include a plurality of preset links.
- the processing engine 112 may determine a route that travels from the starting location to the destination in response to a service request obtained from a terminal device 130 of a user.
- the preset route may be determined based on route planning techniques.
- the route planning techniques may include a machine learning technique, an artificial intelligence technique, a template approach technique, an artificial potential field technique, or the like, or any combination thereof, which may improve the accuracy of the ETA.
- algorithms used in route planning may include a Dijkstra algorithm, a Floyd-Warshall algorithm, a Bellman-Ford algorithm, a double direction A algorithm, a Geometric Goal Directed Search (A*) algorithm, a priority queue algorithm, a Heuristics algorithm, a sample algorithm, or the like, or any combination thereof.
- the preset route may be determined based on a plurality of routes completed in a plurality of historical orders. For example, if route A is determined as the most frequently used route from a starting location to a destination in the plurality of historical orders, route A may be recommended to the user as the preset route to travel from the same starting location to the same destination.
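- A minimal sketch of this frequency-based selection, assuming each historical route between the same starting location and destination is already available as a tuple of link IDs, might look as follows.

```python
from collections import Counter
from typing import Iterable, Tuple

def most_frequent_route(historical_routes: Iterable[Tuple[str, ...]]) -> Tuple[str, ...]:
    """Return the route (sequence of link IDs) used most often in historical orders."""
    counts = Counter(historical_routes)
    route, _ = counts.most_common(1)[0]
    return route

# Example: route ("L1", "L2", "L3") was completed most often, so it is recommended.
routes = [("L1", "L2", "L3"), ("L1", "L4", "L3"), ("L1", "L2", "L3")]
print(most_frequent_route(routes))  # ('L1', 'L2', 'L3')
```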
- one or more preset routes from the starting location to the destination may be determined and recommended to the user, and at least one preset route may be selected from the one or more preset routes.
- the route selection may be performed by the user, or the processing engine 112.
- the preset route may be selected based on a time related criterion, a service cost related criterion, a path related criterion (e.g., road type, road width, traffic condition, speed limit, curve radius, number of intersections) from the one or more preset routes.
- the at least one preset route may be selected in terms of a shortest mileage, a shortest time, a least service cost, a safest route, a route with most scenarios, a route with less traffic, or the like, from the one or more preset routes.
- the preset route may include one or more preset links. Each preset link may correspond to at least a portion of the preset route.
- a “link” may be an element of road or street in a map.
- a link may correspond to a segment of a road or a street on the map.
- a road may include one or more links.
- for example, Changan Street may be mapped to five links on the map, and the five links may be connected one by one via their nodes to constitute Changan Street.
- for a region (e.g., Chaoyang district, Beijing city), a road network of the region may be represented as an aggregation of links.
- the processing engine 112 may divide a road into the one or more links based on one or more intersections of the road. For example, the processing engine 112 may determine a road segment (e.g., 200 meters) between two intersections of the road as a link. In some embodiments, if the distance of a road segment between two intersections of the road is larger than a distance threshold (e.g., 1 kilometer), the processing engine 112 may divide the road into the one or more links based on a location of a city. For example, the road may be a highway that is several kilometers long, and the processing engine 112 may divide the road into the one or more links based on the location of the city near the road. In some embodiments, the processing engine 112 may mark each link of the one or more links of the road, for example, L1, L2, L3...Ln.
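- A simplified sketch of such a splitting rule is shown below, assuming the road is described only by cumulative distances of its intersections; splitting long segments into equal pieces stands in for the city-location rule mentioned in the text.

```python
from typing import List, Tuple

def split_into_links(intersections_km: List[float],
                     max_link_km: float = 1.0) -> List[Tuple[float, float]]:
    """Split a road into links at its intersections; further split any segment
    longer than max_link_km into equal pieces (simplified stand-in for the
    city-location rule described in the text)."""
    links = []
    for start, end in zip(intersections_km, intersections_km[1:]):
        length = end - start
        pieces = max(1, int(-(-length // max_link_km)))  # ceiling division
        step = length / pieces
        for i in range(pieces):
            links.append((start + i * step, start + (i + 1) * step))
    return links

# A road with intersections at 0, 0.2, and 2.6 km -> links L1, L2, L3, L4.
for i, link in enumerate(split_into_links([0.0, 0.2, 2.6]), start=1):
    print(f"L{i}: {link[0]:.1f}-{link[1]:.1f} km")
```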
- the processing engine 112 may determine a corresponding vector representing a topological relationship among the plurality of preset links.
- the processing engine 112 may determine the corresponding vector using a word2vec model, which may improve the accuracy of the ETA.
- the word2vec model is typically a model used in the natural language processing (NLP) field.
- word2vec may refer to a group of related models that are used to produce word embeddings. These models are shallow, two-layer neural networks that may be trained to reconstruct linguistic contexts of words.
- the word2vec model may take as its input a large corpus of text and produce a vector space, typically of several hundred dimensions, with each unique word in the corpus being assigned a corresponding vector in the space.
- the processing engine 112 may obtain a corpus (e.g., ABCD, ABD, ADE, ABC, AE) including a plurality of words (e.g., A, B, C, D, E).
- the processing engine 112 may train the word2vec model by inputting the corpus into the word2vec model.
- the word2vec model may generate a vector corresponding to each word of the plurality of words, for example, A (0.1, 0.2, 0.3) , B (0.2, 0.4, 0.5) , C (0.3, 0.6, 0.8) , D (0.1, 0.5, 0.6) , and E (0.3, 0.9, 1.1) .
- the vector corresponding to each word may reflect the semantics of the word.
- the processing engine 112 may determine E (i.e., “Washington” ) by inputting D and B into the word2vec model. Accordingly, the word2vec model may determine a semantic relationship among the plurality of words.
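- A toy version of this example, assuming the gensim library (4.x API), could look like the following; the trained vector values will differ from the illustrative numbers above because training is stochastic.

```python
from gensim.models import Word2Vec

# The small corpus from the example above, with each "sentence" split into words.
corpus = [list(s) for s in ["ABCD", "ABD", "ADE", "ABC", "AE"]]

# Train a shallow two-layer word2vec model; vector_size=3 mirrors the
# 3-dimensional vectors in the example (window/min_count are illustrative choices).
model = Word2Vec(corpus, vector_size=3, window=2, min_count=1, sg=1, seed=1)

print(model.wv["A"])                                        # a 3-dimensional vector for word A
print(model.wv.most_similar(positive=["D", "B"], topn=1))   # word closest to D and B combined
```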
- the word2vec model may be determined according to a training process.
- the processing engine 112 may obtain a plurality of training samples.
- the plurality of training samples may include a plurality of historical routes associated with a plurality of historical orders.
- the processing engine 112 may train the word2vec model based on the plurality of training samples.
- the word2vec model may be configured to generate a vector corresponding to each link of a plurality of links associated with the plurality of historical routes.
- a plurality of vectors corresponding to a plurality of links associated with a specific area may be stored in a storage device (e.g., the storage device 140) of the online to offline service system 100 or an external storage device.
- the processing engine 112 may access the storage device and retrieve the vector corresponding to the each preset link of the plurality of preset links associated with the preset route.
- the processing engine 112 may determine, using a machine learning model, an ETA for the service request based on the vectors corresponding to the plurality of preset links.
- an estimated time of arrival may refer to an estimated time point when a user arrives at a destination.
- the processing engine 112 may obtain at least one feature of each one of the plurality of preset links.
- the processing engine 112 may determine, based on the at least one feature of the each one of the plurality of preset links and the vector corresponding to the each one of the plurality of preset links, a sub-ETA corresponding to the each one of the plurality of preset links by using the machine learning model.
- the machine learning model may be a recurrent neural network (RNN) model.
- the processing engine 112 may determine, based on the sub-ETA corresponding to the each one of the plurality of preset links, the ETA for the service request. More descriptions of the determination of the ETA for the service request may be found elsewhere in the present disclosure (e.g., FIG. 7 and descriptions thereof) .
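- The disclosure states only that a machine learning model such as an RNN is used; the sketch below, assuming PyTorch, shows one plausible shape of such a model, in which each time step receives a preset link's handcrafted features concatenated with its word2vec vector, a per-link sub-ETA is read off at every step, and the sub-ETAs are summed into the route ETA. All dimensions and the LSTM choice are assumptions for illustration.

```python
import torch
import torch.nn as nn

class LinkEtaRnn(nn.Module):
    """Illustrative RNN: per-link sub-ETA regression, summed into a route ETA."""
    def __init__(self, feature_dim: int, vector_dim: int, hidden_dim: int = 64):
        super().__init__()
        self.rnn = nn.LSTM(feature_dim + vector_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)   # maps each step's hidden state to a sub-ETA

    def forward(self, link_features: torch.Tensor, link_vectors: torch.Tensor) -> torch.Tensor:
        # link_features: (batch, n_links, feature_dim); link_vectors: (batch, n_links, vector_dim)
        x = torch.cat([link_features, link_vectors], dim=-1)
        hidden, _ = self.rnn(x)                   # (batch, n_links, hidden_dim)
        sub_etas = self.head(hidden).squeeze(-1)  # (batch, n_links) per-link travel times
        return sub_etas.sum(dim=-1)               # (batch,) route ETA = sum of sub-ETAs

# Toy usage: a route of 5 preset links, 5 handcrafted features and 32-d word2vec vectors per link.
model = LinkEtaRnn(feature_dim=5, vector_dim=32)
eta = model(torch.randn(1, 5, 5), torch.randn(1, 5, 32))
print(eta.shape)  # torch.Size([1])
```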
- the ETA for the route of the service request may be configured for further processing. For example, for online taxi-hailing services, the ETA for the service request may be used to determine a service fee from the starting location to the destination.
- the processing engine 112 may transmit the ETA for the service request to the terminal device (e.g., the terminal device 130) of the user. The ETA may be displayed on a visual interface of the terminal device.
- one or more operations may be combined into a single operation.
- operation 510 and operation 520 may be combined into an operation.
- one or more other optional operations (e.g., a storing operation) may be added elsewhere in the process 500.
- the processing engine 112 may store information and/or data associated with the online to offline service system (e.g., the one or more features of a preset link, the trained word2vec model) in a storage device (e.g., the storage device 140) disclosed elsewhere in the present disclosure.
- one or more operations may be performed simultaneously. For example, operation 510 and operation 520 may be performed simultaneously.
- FIG. 6 is a flowchart illustrating an exemplary process for determining a word2vec model according to some embodiments of the present disclosure.
- the process 600 may be implemented as a set of instructions (e.g., an application) stored in the storage ROM 230 or RAM 240.
- the processor 220 and/or the modules in FIG. 4 may execute the set of instructions, and when executing the instructions, the processor 220 and/or the modules may be configured to perform the process 600.
- the operations of the illustrated process presented below are intended to be illustrative. In some embodiments, the process 600 may be accomplished with one or more additional operations not described and/or without one or more of the operations discussed herein. Additionally, the order in which the operations of the process are illustrated in FIG. 6 and described below is not intended to be limiting.
- the processing engine 112 may obtain a plurality of training samples comprising a plurality of historical routes associated with a plurality of historical orders.
- each historical route of the plurality of historical routes may include a plurality of links.
- a historical order may refer to an order that has been fulfilled.
- information associated with the historical order may include an order number, a historical starting location, a historical destination, a historical pick-up location, a historical drop-off location, user’s identity information (e.g., an identification (ID) , a telephone number, a user’s name) , or the like, or any combination thereof.
- a pick-up location may refer to a location where a user starts a service.
- the pick-up location may be the location where a passenger actually gets on a vehicle.
- a drop-off location may refer to a location where a user ends a service.
- the drop-off location may be the location where a passenger actually gets off a vehicle.
- the pick-up location may be the same as or different from the starting location.
- the drop-off location may be the same as or different from the destination.
- the historical order may correspond to the historical route.
- the historical route may be an actual driving route of the user from the historical pick-up location to the historical drop-off location.
- the processing engine 112 may obtain, from the terminal device, position information (e.g., GPS information) indicating a current location of the terminal device at a certain time interval (e.g., 1 second, 10 seconds, 1 minute) , in real time or substantially in real time. Further, the processing engine 112 may store the position information in a storage device (e.g., the storage device 140) disclosed elsewhere in the present disclosure.
- the processing engine 112 may determine the historical route of the terminal device based on the position information of the terminal device. In some embodiments, the processing engine 112 may determine the historical route of the terminal device based on the position information of the terminal device and map information according to a map matching algorithm. Exemplary map matching algorithms may include a nearest neighbor algorithm, a Hidden Markov Model (HMM) , or the like. In some embodiments, the processing engine 112 may match recorded geographic coordinates of the terminal device to a logical model of the real world according to the map matching algorithm. For example, the processing engine 112 may associate a position point (e.g., a GPS point) of the terminal device with a location in an existing street graph.
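To make the nearest neighbor variant of map matching concrete, the following is a minimal sketch that snaps raw GPS points to the closest link and collapses the result into a link sequence. The link table, coordinates, and function names are illustrative assumptions; a production matcher would use projected coordinates and link geometry, and would typically rely on an HMM rather than a pure nearest-neighbor rule.

```python
import math

# Hypothetical link table: link_id -> representative (lat, lon) of the link.
LINKS = {
    "link_a": (31.2304, 121.4737),
    "link_b": (31.2310, 121.4790),
    "link_c": (31.2360, 121.4810),
}

def _distance(p, q):
    """Approximate planar distance between two (lat, lon) points, in degrees."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def match_point_to_link(gps_point):
    """Nearest-neighbour map matching: snap one GPS point to the closest link."""
    return min(LINKS, key=lambda link_id: _distance(gps_point, LINKS[link_id]))

def match_route(gps_trace):
    """Convert a GPS trace into a link sequence, dropping consecutive repeats."""
    route = []
    for point in gps_trace:
        link_id = match_point_to_link(point)
        if not route or route[-1] != link_id:
            route.append(link_id)
    return route

print(match_route([(31.2305, 121.4740), (31.2311, 121.4788), (31.2359, 121.4812)]))
# ['link_a', 'link_b', 'link_c']
```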
- the processing engine 112 may determine the word2vec model based on the plurality of training samples.
- the processing engine 112 may input the plurality of training samples into the word2vec model.
- the word2vec model may generate a vector corresponding to each link of the plurality of links associated with the plurality of historical routes.
- the vector corresponding to the each link of the plurality of links may represent a topological relationship among the plurality of links.
- the training process of the word2vec model in the natural language processing (NLP) field may be different from the training process of the word2vec model in the determination of the ETA in the route planning field.
- unlike a word, which may appear multiple times in a sentence, a road segment cannot be repeated multiple times in a same direction within a single route.
- an operation for preprocessing the plurality of training samples may be added before operation 620.
- the processing engine 112 may remove a historical route including duplicate links. For example, the processing engine 112 may remove a historical route in which a user takes a detour. In some embodiments, the processing engine 112 may remove a historical route that violates a traffic rule. For example, assuming that the driving direction of the historical route on a link is not consistent with a direction of travel of the link, the processing engine 112 may remove the historical route.
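A minimal sketch of this preprocessing step follows, assuming each historical route is stored as a sequence of (link id, traversal heading) pairs and that each link's allowed direction of travel is known; the data layout and names are hypothetical.

```python
def preprocess_routes(routes, link_directions):
    """Drop historical routes with duplicate links (detours) or with links
    traversed against the allowed direction of travel (traffic-rule violations)."""
    cleaned = []
    for route in routes:
        link_ids = [link_id for link_id, _ in route]
        has_duplicates = len(link_ids) != len(set(link_ids))
        violates_rule = any(
            heading != link_directions.get(link_id, heading)
            for link_id, heading in route
        )
        if not has_duplicates and not violates_rule:
            cleaned.append(link_ids)
    return cleaned

link_directions = {"link_a": "NE", "link_b": "NE", "link_c": "E"}
routes = [
    [("link_a", "NE"), ("link_b", "NE"), ("link_c", "E")],   # kept
    [("link_a", "NE"), ("link_b", "NE"), ("link_a", "NE")],  # dropped: duplicate link
    [("link_a", "SW"), ("link_b", "NE")],                    # dropped: wrong direction
]
print(preprocess_routes(routes, link_directions))  # [['link_a', 'link_b', 'link_c']]
```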
- the processing engine 112 may determine a plurality of training parameters for the word2vec model.
- the training parameters may include a window size, a min-count, a dimensionality of a vector, or the like, or any combination thereof.
- a window size may refer to a maximum distance between a current word (e.g., a current link) and a predicted word (e.g., a predicted link) within a sentence (e.g., a route) .
- a min-count may refer to a frequency threshold such that all words (e.g., links) with a total frequency lower than the min-count are ignored.
- the plurality of training parameters for the word2vec model in the NLP field may be different from the plurality of training parameters for the word2vec model in the determination of the ETA.
- a word may be related to five other words in a sentence.
- a link may be related to ten other links in a route.
- the window size of the word2vec model in the NLP field may be set as 5.
- the window size of the word2vec model in the determination of the ETA may be set as 10.
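The training itself can be sketched with gensim's Word2Vec, treating each cleaned historical route as a "sentence" whose "words" are link identifiers. gensim 4.x is assumed here; the tiny corpus and the parameter values (window of 10 as discussed above, a 64-dimensional vector, a min-count of 1) are purely illustrative, not values prescribed by the disclosure.

```python
from gensim.models import Word2Vec

# Cleaned historical routes, each represented as a sequence of link ids.
cleaned_routes = [
    ["link_a", "link_b", "link_c"],
    ["link_b", "link_c", "link_d"],
    ["link_a", "link_b", "link_d"],
]

model = Word2Vec(
    sentences=cleaned_routes,
    vector_size=64,   # dimensionality of each link vector
    window=10,        # maximum distance between a current link and a predicted link
    min_count=1,      # ignore links appearing fewer times than this (1 only for the toy corpus)
    workers=4,
)

link_vector = model.wv["link_b"]  # 64-dimensional vector for link_b
print(link_vector.shape)          # (64,)
```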
- FIG. 7 is a flowchart illustrating an exemplary process for determining an ETA for a service request according to some embodiments of the present disclosure.
- the process 700 may be implemented as a set of instructions (e.g., an application) stored in the storage ROM 230 or RAM 240.
- the processor 220 and/or the modules in FIG. 4 may execute the set of instructions, and when executing the instructions, the processor 220 and/or the modules may be configured to perform the process 700.
- the operations of the illustrated process presented below are intended to be illustrative. In some embodiments, the process 700 may be accomplished with one or more additional operations not described and/or without one or more of the operations herein discussed. Additionally, the order of the operations of the process 700 as illustrated in FIG. 7 and described below is not intended to be limiting.
- the processing engine 112 may obtain at least one feature of each one of a plurality of preset links.
- the one or more features of the link may include a length of the link, a width of the link, a direction of travel, a traffic light distribution, a lane condition (e.g., a number of lanes in the link, a classification of the lanes) , a road condition of the link (e.g., a shape, a surface roughness determination, a slippery condition of road surface) , a speed limit of the link, a traffic light duration of the link, or the like, or any combination thereof.
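One plausible way to carry such per-link features into the model is to pack them into a fixed-length numeric vector. The sketch below uses a small, hypothetical subset of the features listed above; the field names and units are assumptions rather than part of the disclosure.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class LinkFeatures:
    """Hypothetical container for a subset of the per-link features listed above."""
    length_m: float
    width_m: float
    num_lanes: int
    num_traffic_lights: int
    speed_limit_kmh: float
    traffic_light_duration_s: float

    def to_array(self) -> np.ndarray:
        """Pack the features into a numeric vector for the machine learning model."""
        return np.array([
            self.length_m, self.width_m, self.num_lanes,
            self.num_traffic_lights, self.speed_limit_kmh,
            self.traffic_light_duration_s,
        ], dtype=np.float32)

features = LinkFeatures(420.0, 12.0, 3, 1, 60.0, 45.0)
print(features.to_array())
```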
- a plurality of links associated with an area and corresponding features may be stored in a storage device (e.g., the storage device 140) of the online to offline service system 100 or an external storage device.
- the processing engine 112 may access the storage device and retrieve the one or more features of each one of the plurality of preset links.
- the processing engine 112 may determine, based on the at least one feature of the each one of the plurality of preset links and a vector corresponding to the each one of the plurality of preset links, a sub-ETA corresponding to the each one of the plurality of preset links by using a machine learning model.
- the machine learning model may be a recurrent neural network (RNN) model. Since the preset route of the service request includes a plurality of preset links, and the travel of the service request presumably proceeds from the starting location to the destination via the plurality of preset links, the plurality of preset links may be sequentially connected with each other.
- the RNN model may include a plurality of cells each of which may make use of sequential information (e.g., the one or more features of each preset link, a vector corresponding to each preset link) to obtain the sub-ETA for each one of the plurality of preset links.
- RNNs are called recurrent because they perform the same task for every element (e.g., each preset link) of a sequence, with the output depending on the previous computations.
- each cell of the RNN model may include an input layer, a hidden layer, and an output layer.
- the hidden layer may have one or more feedback loops. These feedback loops may provide RNNs with a type of “memory, ” in which past outputs from the hidden layer of a cell may inform future outputs from the hidden layer of another cell.
- each feedback loop may provide an output from the hidden layer in a previous cell back to the hidden layer of the current cell as input for the current cell to inform the output of the current cell. This can enable RNNs to recurrently process sequence data (e.g., data that exists in an ordered sequence, like a route having a sequence of links) over a sequence of steps.
- each cell may receive an input (e.g., the one or more features of a corresponding preset link, the vector of the corresponding preset link) and an output (e.g., a feature vector for a previous preset link, a vector of the previous preset link) from the hidden layer of a previous cell.
- the each cell may determine the feature vector for the corresponding preset link by analyzing the one or more features of the corresponding preset link. The each cell may generate the sub-ETA for the corresponding preset link based on the one or more features of the corresponding preset link, the vector of the corresponding preset link, the feature vector for the previous preset link, and the vector of the previous preset link.
- the processing engine 112 may determine a combined vector of the feature vector for the corresponding preset link, the vector of the corresponding preset link, the feature vector for the previous preset link, and the vector of the previous preset link. For example, the processing engine 112 may determine a weight for each one of the feature vector for the corresponding preset link, the vector of the corresponding preset link, the feature vector for the previous preset link, and the vector of the previous preset link, respectively. The processing engine 112 may determine the combined vector based on the feature vector for the corresponding preset link, the vector of the corresponding preset link, the feature vector for the previous preset link, the vector of the previous preset link, and their corresponding weights.
- the processing engine 112 may determine the sub-ETA for the corresponding preset link based on the combined vector of the feature vector for the corresponding preset link, the vector of the corresponding preset link, the feature vector for the previous preset link, and the vector of the previous preset link.
- the cell may determine the sub-ETA for the corresponding preset link based on the combined vector of the feature vector for the corresponding preset link, the vector of the corresponding preset link, the feature vector for the previous preset link, and the vector of the previous preset link, and output the sub-ETA for the corresponding preset link.
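The per-cell computation can be sketched as follows. This is a simplified numpy stand-in rather than the trained RNN described above: randomly initialized weights replace learned parameters, and the previous link's feature vector and embedding are folded into a conventional hidden state instead of being carried explicitly. The dimensions and names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: per-link feature vector and word2vec link vector.
FEATURE_DIM, EMBED_DIM, HIDDEN_DIM = 6, 64, 32

# Randomly initialised weights stand in for trained RNN parameters.
W_in = rng.normal(scale=0.1, size=(HIDDEN_DIM, FEATURE_DIM + EMBED_DIM))
W_rec = rng.normal(scale=0.1, size=(HIDDEN_DIM, HIDDEN_DIM))
w_out = rng.normal(scale=0.1, size=HIDDEN_DIM)

def rnn_cell(link_features, link_vector, prev_hidden):
    """One cell: combine the current link's inputs with the previous cell's
    hidden state and emit a sub-ETA (in seconds) plus the new hidden state."""
    x = np.concatenate([link_features, link_vector])
    hidden = np.tanh(W_in @ x + W_rec @ prev_hidden)
    sub_eta = float(np.exp(w_out @ hidden))  # keep the predicted travel time positive
    return sub_eta, hidden
```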
- the processing engine 112 may determine, based on the sub-ETA corresponding to the each one of the plurality of preset links, an ETA for a service request.
- the processing engine 112 may determine a sum of the sub-ETA corresponding to the each one of the plurality of preset links.
- the processing engine 112 may determine the sum of the sub-ETA corresponding to the each one of the plurality of preset links as the ETA for the service request.
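Continuing the cell sketch above (and reusing its `rnn_cell`, `HIDDEN_DIM`, `FEATURE_DIM`, `EMBED_DIM`, and `rng` definitions), the sub-ETAs emitted for the ordered preset links can simply be accumulated to obtain the ETA for the service request; the random inputs below merely stand in for real link features and link vectors.

```python
def estimate_eta(route_features, route_vectors):
    """Run the cell over the ordered preset links and sum the sub-ETAs."""
    hidden = np.zeros(HIDDEN_DIM)
    total = 0.0
    for feats, vec in zip(route_features, route_vectors):
        sub_eta, hidden = rnn_cell(feats, vec, hidden)
        total += sub_eta
    return total

route_features = [rng.normal(size=FEATURE_DIM) for _ in range(3)]
route_vectors = [rng.normal(size=EMBED_DIM) for _ in range(3)]
print(estimate_eta(route_features, route_vectors))  # total ETA over three links
```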
- aspects of the present disclosure may be illustrated and described herein in any of a number of patentable classes or contexts including any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof. Accordingly, aspects of the present disclosure may be implemented entirely in hardware, entirely in software (including firmware, resident software, micro-code, etc. ) or in an implementation combining software and hardware that may all generally be referred to herein as a “module, ” “unit, ” “component, ” “device, ” or “system. ” Furthermore, aspects of the present disclosure may take the form of a computer program product embodied in one or more computer readable media having computer readable program code embodied thereon.
- a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including electro-magnetic, optical, or the like, or any suitable combination thereof.
- a computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that may communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
- Program code embodied on a computer readable signal medium may be transmitted using any appropriate medium, including wireless, wireline, optical fiber cable, RF, or the like, or any suitable combination of the foregoing.
- Computer program code for carrying out operations for aspects of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Scala, Smalltalk, Eiffel, JADE, Emerald, C++, C#, VB. NET, Python or the like, conventional procedural programming languages, such as the "C" programming language, Visual Basic, Fortran 2003, Perl, COBOL 2002, PHP, ABAP, dynamic programming languages such as Python, Ruby and Groovy, or other programming languages.
- the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server.
- the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN) , or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider) or in a cloud computing environment or offered as a service such as a Software as a Service (SaaS) .
Abstract
Systems and methods for determining an estimated time of arrival (ETA) for a service request are provided. The method includes: obtaining a service request comprising a starting location and a destination (510); determining a preset route from the starting location to the destination, the preset route comprising a plurality of preset links (520); for each one of the plurality of preset links, determining a corresponding vector representing a topological relationship among the plurality of preset links (530); and determining, using a machine learning model, the ETA for the service request based on the vectors corresponding to the plurality of preset links (540).
Description
This disclosure generally relates to an online to offline service platform, and more particularly, relates to systems and methods for determining an estimated time of arrival (ETA) for a service request.
With the development of Internet technology, online to offline services, such as online taxi hailing services and/or navigation services, are starting to play a significant role in people’s daily lives. When a user (e.g., a passenger) initiates a service request, a system providing the online taxi hailing services can recommend one or more suitable preset driving routes based on a starting location and a destination of the service request. It is important to accurately predict the arrival time at the destination, which can help estimate a service fee and/or provide information to a user who wants to know the estimated time of arrival (ETA) at the destination. However, in some cases, it is common that the user changes the preset driving route due to objective reasons (e.g., a traffic congestion, a traffic accident, water accumulation on a road segment) and/or a subjective preference (e.g., a view preference, shipping an object) , which may lead to a low accuracy of the ETA. Thus, it is desirable to provide systems and methods for accurately predicting the ETA for the service request, which helps the user with travel planning and improves the user’s travel efficiency.
SUMMARY
According to an aspect of the present disclosure, a method may include one or more of the following operations performed by at least one processor. The method may include obtaining a service request comprising a starting location and a destination. The method may include determining a preset route from the starting location to the destination. The preset route may include a plurality of preset links. The method may include, for each one of the plurality of preset links, determining a corresponding vector representing a topological relationship among the plurality of preset links. The method may include determining, using a machine learning model, the ETA for the service request based on the vectors corresponding to the plurality of preset links.
In some embodiments, the method may also include, for each one of the plurality of preset links, determining the corresponding vector using a word2vec model.
In some embodiments, the method may also include determining the word2vec model according to a training process. The training process may include obtaining a plurality of training samples comprising a plurality of historical routes associated with a plurality of historical orders. The training process may include determining the word2vec model based on the plurality of training samples.
In some embodiments, the method may also include removing a historical route including duplicate links.
In some embodiments, the method may also include removing a historical route that violates a traffic rule.
In some embodiments, the method may also include obtaining at least one feature of the each one of the plurality of preset links. The method may also include determining, based on the at least one feature of the each one of the plurality of preset links and the vector corresponding to the each one of the plurality of preset links, a sub-ETA corresponding to the each one of the plurality of preset links by using the machine learning model. The method may also include determining, based on the sub-ETA corresponding to the each one of the plurality of preset links, the ETA for the service request.
In some embodiments, the at least one feature of the each one of the plurality of preset links may include at least one of a length of the preset link, a width of the preset link, a direction of travel, a traffic light distribution, or a lane condition.
In some embodiments, the plurality of historical routes may be determined according to a map matching algorithm.
In some embodiments, the machine learning model may be a recurrent neural network (RNN) model.
In some embodiments, the method may also include transmitting the ETA to a terminal and directing the terminal to display the ETA.
According to another aspect of the present disclosure, a system for determining an estimated time of arrival (ETA) for a service request may include at least one storage medium storing a set of instructions, and at least one processor in communication with the at least one storage medium. When executing the stored set of instructions, the at least one processor may cause the system to obtain a service request comprising a starting location and a destination. The at least one processor may also cause the system to determine a preset route from the starting location to the destination. The preset route may include a plurality of preset links. The at least one processor may also cause the system to, for each one of the plurality of preset links, determine a corresponding vector representing a topological relationship among the plurality of preset links. The at least one processor may also cause the system to determine, using a machine learning model, the ETA for the service request based on the vectors corresponding to the plurality of preset links.
According to still another aspect of the present disclosure, a non-transitory computer readable medium storing instructions may be provided. The instructions, when executed by at least one processor, may cause the at least one processor to implement a method. The method may include obtaining a service request comprising a starting location and a destination. The method may include determining a preset route from the starting location to the destination. The preset route may include a plurality of preset links. The method may include, for each one of the plurality of preset links, determining a corresponding vector representing a topological relationship among the plurality of preset links. The method may include determining, using a machine learning model, the ETA for the service request based on the vectors corresponding to the plurality of preset links.
Additional features will be set forth in part in the description which follows, and in part will become apparent to those skilled in the art upon examination of the following and the accompanying drawings or may be learned by production or operation of the examples. The features of the present disclosure may be realized and attained by practice or use of various aspects of the methodologies, instrumentalities and combinations set forth in the detailed examples discussed below.
The present disclosure is further described in terms of exemplary embodiments. These exemplary embodiments are described in detail with reference to the drawings. The drawings are not to scale. These embodiments are non-limiting exemplary embodiments, in which like reference numerals represent similar structures throughout the several views of the drawings, and wherein:
FIG. 1 is a schematic diagram illustrating an exemplary online to offline service system according to some embodiments of the present disclosure;
FIG. 2 is a schematic diagram illustrating exemplary hardware and/or software components of an exemplary computing device according to some embodiments of the present disclosure;
FIG. 3 is a schematic diagram illustrating exemplary hardware and/or software components of an exemplary mobile device according to some embodiments of the present disclosure;
FIG. 4 is a block diagram illustrating an exemplary processing engine according to some embodiments of the present disclosure;
FIG. 5 is a flowchart illustrating an exemplary process for determining an ETA for a service request according to some embodiments of the present disclosure;
FIG. 6 is a flowchart illustrating an exemplary process for determining a word2vec model according to some embodiments of the present disclosure; and
FIG. 7 is a flowchart illustrating an exemplary process for determining an ETA for a service request according to some embodiments of the present disclosure.
The following description is presented to enable any person skilled in the art to make and use the present disclosure and is provided in the context of a particular application and its requirements. Various modifications to the disclosed embodiments will be readily apparent to those skilled in the art, and the general principles defined herein may be applied to other embodiments and applications without departing from the spirit and scope of the present disclosure. Thus, the present disclosure is not limited to the embodiments shown but is to be accorded the widest scope consistent with the claims.
The terminology used herein is for the purpose of describing particular example embodiments only and is not intended to be limiting. As used herein, the singular forms “a, ” “an, ” and “the” may be intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprise, ” “comprises, ” and/or “comprising, ” “include, ” “includes, ” and/or “including, ” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
These and other features, and characteristics of the present disclosure, as well as the methods of operation, various components of the stated system, functions of the related elements of structure, and the combination of parts and economies of manufacture, may become more apparent upon consideration of the following description with reference to the accompanying drawings, all of which form part of this disclosure. It is to be expressly understood, however, that the drawings are for the purpose of illustration and description only and are not intended to limit the scope of the present disclosure. It is understood that the drawings are not to scale.
The flowcharts used in the present disclosure illustrate operations that systems implemented according to some embodiments of the present disclosure. It is to be expressly understood that the operations of the flowchart may be implemented not in order. Conversely, the operations may be implemented in inverted order, or simultaneously. Moreover, one or more other operations may be added to the flowcharts. One or more operations may be removed from the flowcharts.
Moreover, while the systems and methods disclosed in the present disclosure are described primarily regarding online transportation service, it should also be understood that this is only one exemplary embodiment. The systems and methods of the present disclosure may be applied to any other kind of online to offline service. For example, the systems and methods of the present disclosure may be applied to transportation systems of different environments including land (e.g. roads or off-road) , water (e.g. river, lake, or ocean) , air, aerospace, or the like, or any combination thereof. The vehicle of the transportation systems may include a taxi, a private car, a hitch, a bus, a train, a bullet train, a high-speed rail, a subway, a boat, a vessel, an aircraft, a spaceship, a hot-air balloon, a driverless vehicle, or the like, or any combination thereof. The transportation systems may also include any transportation system for management and/or distribution, for example, a system for sending and/or receiving an express. The application of the systems and methods of the present disclosure may include a mobile device (e.g. smart phone or pad) application, a webpage, a plug-in of a browser, a client terminal, a custom system, an internal analysis system, an artificial intelligence robot, or the like, or any combination thereof.
The terms “passenger, ” “requester, ” “requestor, ” “service requester, ” “service requestor, ” and “customer” in the present disclosure are used interchangeably to refer to an individual, an entity or a tool that may request or order a service. Also, the terms “driver, ” “provider, ” “service provider, ” and “supplier” in the present disclosure are used interchangeably to refer to an individual, an entity or a tool that may provide a service or facilitate the providing of the service. The term “user” in the present disclosure is used to refer to an individual, an entity or a tool that may request a service, order a service, provide a service, or facilitate the providing of the service. In the present disclosure, terms “requester” and “requester terminal” may be used interchangeably, and terms “provider” and “provider terminal” may be used interchangeably.
The terms “request, ” “service, ” “service request, ” and “order” in the present disclosure are used interchangeably to refer to a request that may be initiated by a passenger, a requester, a service requester, a customer, a driver, a provider, a service provider, a supplier, or the like, or any combination thereof. Depending on context, the service request may be accepted by any one of a passenger, a requester, a service requester, a customer, a driver, a provider, a service provider, or a supplier. In some embodiments, the service request is accepted by a driver, a provider, a service provider, or a supplier. The service request may be chargeable or free.
The positioning technology used in the present disclosure may be based on a global positioning system (GPS) , a global navigation satellite system (GLONASS) , a compass navigation system (COMPASS) , a Galileo positioning system, a quasi-zenith satellite system (QZSS) , a wireless fidelity (WiFi) positioning technology, or the like, or any combination thereof. One or more of the above positioning systems may be used interchangeably in the present disclosure.
An aspect of the present disclosure is directed to systems and methods for determining an estimated time of arrival (ETA) for a service request. According to some systems and methods of the present disclosure, the processing engine may obtain a service request. The service request may include a starting location and a destination. The processing engine may determine a preset route from the starting location to the destination. The preset route may include a plurality of preset links. For each one of the plurality of preset links, the processing engine may determine a corresponding vector representing a topological relationship among the plurality of preset links. For example, for the each one of the plurality of preset links, the processing engine may determine the corresponding vector using a word2vec model. The processing engine may determine, using a machine learning model (e.g., an RNN model) , the ETA for the service request based on the vectors corresponding to the plurality of preset links. Accordingly, the ETA for the service request may be determined more accurately, which may improve user experience.
FIG. 1 is a schematic diagram illustrating an exemplary online to offline service system according to some embodiments of the present disclosure. In some embodiments, the online to offline service system may be a system for online to offline services. For example, the online to offline service system 100 may be an online transportation service platform for transportation services such as taxi hailing, chauffeur services, delivery vehicles, express car, carpool, bus service, driver hiring, shuttle services, take-out services, navigation services, vehicle sharing services.
In some embodiments, the online to offline service system 100 may include a server 110, a positioning system 120, a terminal device 130, a storage device 140, and a network 150.
In some embodiments, the server 110 may be a single server or a server group. The server group may be centralized or distributed (e.g., the server 110 may be a distributed system) . In some embodiments, the server 110 may be local or remote. For example, the server 110 may access information and/or data stored in the terminal device 130, the storage device 140, and/or the positioning system 120 via the network 150. As another example, the server 110 may be directly connected to the terminal device 130, and/or the storage device 140 to access stored information and/or data. In some embodiments, the server 110 may be implemented on a cloud platform or an onboard computer. Merely by way of example, the cloud platform may include a private cloud, a public cloud, a hybrid cloud, a community cloud, a distributed cloud, an inter-cloud, a multi-cloud, or the like, or any combination thereof. In some embodiments, the server 110 may be implemented on a computing device 200 having one or more components illustrated in FIG. 2 in the present disclosure.
In some embodiments, the server 110 may include a processing engine 112. The processing engine 112 may process information and/or data to perform one or more functions described in the present disclosure. For example, the processing engine 112 may obtain a service request comprising a starting location and a destination. As another example, the processing engine 112 may determine a preset route from a starting location to a destination. The preset route may include a plurality of preset links. As still another example, for each one of a plurality of preset links, the processing engine 112 may determine a corresponding vector representing a topological relationship among the plurality of preset links. As still another example, the processing engine 112 may determine, using a machine learning model (e.g., an RNN model) , an ETA for a service request based on a plurality of vectors corresponding to a plurality of preset links. In some embodiments, the processing engine 112 may include one or more processing engines (e.g., single-core processing engine (s) or multi-core processor (s) ) . Merely by way of example, the processing engine 112 may include a central processing unit (CPU) , an application-specific integrated circuit (ASIC) , an application-specific instruction-set processor (ASIP) , a graphics processing unit (GPU) , a physics processing unit (PPU) , a digital signal processor (DSP) , a field programmable gate array (FPGA) , a programmable logic device (PLD) , a controller, a microcontroller unit, a reduced instruction-set computer (RISC) , a microprocessor, or the like, or any combination thereof.
In some embodiments, the server 110 may be connected to the network 150 to communicate with one or more components (e.g., the terminal device 130, the storage device 140, and/or the positioning system 120) of the online to offline service system 100. In some embodiments, the server 110 may be directly connected to or communicate with one or more components (e.g., the terminal device 130, the storage device 140, and/or the positioning system 120) of the online to offline service system 100.
The positioning system 120 may determine information associated with an object, for example, one or more of the terminal devices 130. In some embodiments, the positioning system 120 may be a global positioning system (GPS) , a global navigation satellite system (GLONASS) , a compass navigation system (COMPASS) , a BeiDou navigation satellite system, a Galileo positioning system, a quasi-zenith satellite system (QZSS) , etc. The information may include a location, an elevation, a velocity, or an acceleration of the object, or a current time. The positioning system 120 may include one or more satellites, for example, a satellite 120-1, a satellite 120-2, and a satellite 120-3. The satellites 120-1 through 120-3 may determine the information mentioned above independently or jointly. The positioning system 120 may send the information mentioned above to the network 150 and/or the terminal device 130 via wireless connections.
In some embodiments, the terminal device 130 may include a mobile device 130-1, a tablet computer 130-2, a laptop computer 130-3, a built-in device in a motor vehicle 130-4, or the like, or any combination thereof. In some embodiments, the mobile device 130-1 may include a smart home device, a wearable device, a mobile device, a virtual reality device, an augmented reality device, or the like, or any combination thereof. In some embodiments, the smart home device may include a smart lighting device, a control device of an intelligent electrical apparatus, a smart monitoring device, a smart television, a smart video camera, an interphone, or the like, or any combination thereof. In some embodiments, the wearable device may include a bracelet, footgear, glasses, a helmet, a watch, clothing, a backpack, a smart accessory, or the like, or any combination thereof. In some embodiments, the mobile device may include a mobile phone, a personal digital assistance (PDA) , a gaming device, a navigation device, a point of sale (POS) device, a laptop, a desktop, or the like, or any combination thereof. In some embodiments, the virtual reality device and/or the augmented reality device may include a virtual reality helmet, a virtual reality glass, a virtual reality patch, an augmented reality helmet, augmented reality glasses, an augmented reality patch, or the like, or any combination thereof. For example, the virtual reality device and/or the augmented reality device may include a Google Glass™, a RiftCon™, a Fragments™, a Gear VR™, etc. In some embodiments, the built-in device in the motor vehicle 130-4 may include an onboard computer, an onboard television, etc. In some embodiments, the terminal device 130 may be a device with positioning technology for locating the position of a user of the terminal device 130 (e.g., a service requester or a service provider) and/or the terminal device 130. In some embodiments, the terminal device 130 may communicate with one or more other positioning devices to determine the position of the terminal device 130. In some embodiments, the terminal device 130 may send positioning information to the server 110.
In some embodiments, the terminal device 130 may receive/transmit information related to the online to offline service system 100 from/to one or more components (e.g., the server 110, the storage device 140, the positioning system 120) of the online to offline service system 100. For example, for a navigation service, a user of the terminal device 130 may input a starting location and a destination through the terminal device 130. The terminal device 130 may transmit the user’s input to the server 110 (e.g., request a navigation service) . The terminal device 130 may receive, from the server 110, signals including a route from the starting location to the destination and/or an ETA of the route and display the route and/or the ETA.
In some embodiments, the terminal device 130 may include a requester terminal and a provider terminal. In some embodiments, a requester may be a user of the requester terminal. The terms “passenger, ” “requester, ” “service requester, ” and “customer” in the present disclosure are used interchangeably to refer to an individual, an entity or a tool that may request or order a service. In some embodiments, a provider may be a user of the provider terminal. The terms “driver, ” “provider, ” “service provider, ” and “supplier” in the present disclosure are used interchangeably to refer to an individual, an entity, or a tool that may provide a service or facilitate the providing of the service. The term “user” in the present disclosure may refer to an individual, an entity, or a tool that may request a service, order a service, provide a service, or facilitate the providing of the service. For example, the user may be a passenger, a driver, an operator, or the like, or any combination thereof. In the present disclosure, terms “passenger” and “passenger terminal” may be used interchangeably, and terms “driver” and “driver terminal” may be used interchangeably.
The terms “request, ” “service request, ” and “order” in the present disclosure refer to a request that is initiated by a passenger, a requester, a service requester, a customer, a driver, a provider, a service provider, a supplier, or the like, or any combination thereof. The service request may be accepted by any one of a passenger, a requester, a service requester, a customer, a driver, a provider, a service provider, or a supplier. The service request may be chargeable or free.
In some embodiments, the user of the terminal device 130 may be someone other than the requester. For example, a user A of the requester terminal may use the requester terminal to send a service request for a user B, or receive service and/or information or instructions from the server 110. In some embodiments, the user of the provider terminal may be someone other than the provider. For example, a user C of the provider terminal may use the provider terminal to receive a service request for a user D, and/or information or instructions from the server 110. In some embodiments, “service requester, ” “requester, ” and “requester terminal” may be used interchangeably, and “service provider, ” “provider, ” and “provider terminal” may be used interchangeably.
The storage device 140 may store data and/or instructions. In some embodiments, the storage device 140 may store data obtained from the terminal device 130, the positioning system 120, the processing engine 112, and/or an external storage device. For example, the storage device 140 may store a service request received from a terminal device (e.g., the terminal device 130) . As another example, the storage device 140 may store a plurality of links associated with a route determined by the processing engine 112. As another example, the storage device 140 may store a preset route from a starting location to a destination determined by the processing engine 112. As another example, the storage device 140 may store a vector corresponding to each one of a plurality of preset links determined by the processing engine 112. As another example, the storage device 140 may store an ETA for a service request determined by the processing engine 112. In some embodiments, the storage device 140 may store data and/or instructions that the server 110 may execute or use to perform exemplary methods described in the present disclosure. For example, the storage device 140 may store instructions that the processing engine 112 may execute or use to determine a preset route from a starting location to a destination. As another example, the storage device 140 may store instructions that the processing engine 112 may execute or use to determine a vector corresponding to each one of a plurality of preset links. As still another example, the storage device 140 may store instructions that the processing engine 112 may execute or use to determine, using a machine learning model, an ETA for a service request based on a plurality of vectors corresponding to a plurality of preset links.
In some embodiments, the storage device 140 may include a mass storage, a removable storage, a volatile read-and-write memory, a read-only memory (ROM) , or the like, or any combination thereof. Exemplary mass storage may include a magnetic disk, an optical disk, a solid-state drive, etc. Exemplary removable storage may include a flash drive, a floppy disk, an optical disk, a memory card, a zip disk, a magnetic tape, etc. Exemplary volatile read-and-write memory may include a random access memory (RAM) . Exemplary RAM may include a dynamic RAM (DRAM) , a double data rate synchronous dynamic RAM (DDR SDRAM) , a static RAM (SRAM) , a thyristor RAM (T-RAM) , and a zero-capacitor RAM (Z-RAM) , etc. Exemplary ROM may include a mask ROM (MROM) , a programmable ROM (PROM) , an erasable programmable ROM (EPROM) , an electrically-erasable programmable ROM (EEPROM) , a compact disk ROM (CD-ROM) , and a digital versatile disk ROM, etc. In some embodiments, the storage device 140 may be implemented on a cloud platform. Merely by way of example, the cloud platform may include a private cloud, a public cloud, a hybrid cloud, a community cloud, a distributed cloud, an inter-cloud, a multi-cloud, or the like, or any combination thereof.
In some embodiments, the storage device 140 may be connected to the network 150 to communicate with one or more components (e.g., the server 110, the terminal device 130, and/or the positioning system 120) of the online to offline service system 100. One or more components of the online to offline service system 100 may access the data or instructions stored in the storage device 140 via the network 150. In some embodiments, the storage device 140 may be directly connected to or communicate with one or more components (e.g., the server 110, the terminal device 130, and/or the positioning system 120) of the online to offline service system 100. In some embodiments, the storage device 140 may be part of the server 110.
The network 150 may facilitate exchange of information and/or data. In some embodiments, one or more components (e.g., the server 110, the terminal device 130, the storage device 140, or the positioning system 120) of the online to offline service system 100 may send information and/or data to other component (s) of the online to offline service system 100 via the network 150. For example, the server 110 may obtain/acquire a service request from the terminal device 130 via the network 150. As another example, the server 110 may transmit an ETA for a service request to the terminal device 130 to display via the network 150 in real-time. As another example, the server 110 may transmit a predicted/planned route for a service request to the terminal device 130 to display via the network 150 in real-time. In some embodiments, the network 150 may be any type of wired or wireless network, or combination thereof. Merely by way of example, the network 150 may include a cable network, a wireline network, an optical fiber network, a telecommunications network, an intranet, an Internet, a local area network (LAN) , a wide area network (WAN) , a wireless local area network (WLAN) , a metropolitan area network (MAN) , a public telephone switched network (PSTN) , a Bluetooth network, a ZigBee network, a near field communication (NFC) network, or the like, or any combination thereof. In some embodiments, the network 150 may include one or more network access points. For example, the network 150 may include wired or wireless network access points (e.g., 150-1, 150-2) , through which one or more components of the online to offline service system 100 may be connected to the network 150 to exchange data and/or information.
It should be noted that the online to offline service system 100 is merely provided for the purposes of illustration, and is not intended to limit the scope of the present disclosure. For persons having ordinary skills in the art, multiple variations or modifications may be made under the teachings of the present disclosure. For example, the online to offline service system 100 may further include a database, an information source, etc. As another example, the online to offline service system 100 may be implemented on other devices to realize similar or different functions. In some embodiments, the GPS device may also be replaced by other positioning device, such as BeiDou. However, those variations and modifications do not depart from the scope of the present disclosure.
FIG. 2 is a schematic diagram illustrating exemplary hardware and/or software components of an exemplary computing device according to some embodiments of the present disclosure. In some embodiments, the server 110, the positioning system 120, and/or the terminal device 130 may be implemented on the computing device 200. For example, the processing engine 112 may be implemented on the computing device 200 and configured to perform functions of the processing engine 112 disclosed in this disclosure.
The computing device 200 may be used to implement any component of the online to offline service system 100 as described herein. For example, the processing engine 112 may be implemented on the computing device 200, via its hardware, software program, firmware, or a combination thereof. Although only one such computer is shown, for convenience, the computer functions relating to the online to offline service as described herein may be implemented in a distributed fashion on a number of similar platforms to distribute the processing load.
The computing device 200 may include COM ports 250 connected to and from a network connected thereto to facilitate data communications. The computing device 200 may also include a processor 220, in the form of one or more processors (e.g., logic circuits) , for executing program instructions. For example, the processor 220 may include interface circuits and processing circuits therein. The interface circuits may be configured to receive electronic signals from a bus 210, wherein the electronic signals encode structured data and/or instructions for the processing circuits to process. The processing circuits may conduct logic calculations, and then determine a conclusion, a result, and/or an instruction encoded as electronic signals. Then the interface circuits may send out the electronic signals from the processing circuits via the bus 210.
The computing device 200 may further include program storage and data storage of different forms including, for example, a disk 270, a read only memory (ROM) 230, or a random access memory (RAM) 240, for storing various data files to be processed and/or transmitted by the computing device 200. The computing device 200 may also include program instructions stored in the ROM 230, RAM 240, and/or another type of non-transitory storage medium to be executed by the processor 220. The methods and/or processes of the present disclosure may be implemented as the program instructions. The computing device 200 may also include an I/O component 260, supporting input/output between the computer and other components. The computing device 200 may also receive programming and data via network communications.
Merely for illustration, only one processor is described in FIG. 2. Multiple processors are also contemplated, thus operations and/or steps performed by one processor as described in the present disclosure may also be jointly or separately performed by the multiple processors. For example, if in the present disclosure the processor of the computing device 200 executes both operation A and operation B, it should be understood that operation A and operation B may also be performed by two different CPUs and/or processors jointly or separately in the computing device 200 (e.g., the first processor executes operation A and the second processor executes operation B, or the first and second processors jointly execute operations A and B) .
FIG. 3 is a schematic diagram illustrating exemplary hardware and/or software components of an exemplary mobile device according to some embodiments of the present disclosure. In some embodiments, the terminal device 130 may be implemented on the mobile device 300. As illustrated in FIG. 3, the mobile device 300 may include a communication platform 310, a display 320, a graphic processing unit (GPU) 330, a central processing unit (CPU) 340, an I/O 350, a memory 360, a mobile operating system (OS) 370, and a storage 390. In some embodiments, any other suitable component, including but not limited to a system bus or a controller (not shown) , may also be included in the mobile device 300.
In some embodiments, the mobile operating system 370 (e.g., iOS™, Android™, Windows Phone™) and one or more applications 380 may be loaded into the memory 360 from the storage 390 in order to be executed by the CPU 340. The applications 380 may include a browser or any other suitable mobile app for receiving and rendering information relating to online to offline services or other information from the online to offline service system 100. User interactions with the information stream may be achieved via the I/O 350 and provided to the processing engine 112 and/or other components of the online to offline service system 100 via the network 150.
To implement various modules, units, and their functionalities described in the present disclosure, computer hardware platforms may be used as the hardware platform (s) for one or more of the elements described herein. A computer with user interface elements may be used to implement a personal computer (PC) or any other type of work station or terminal device. A computer may also act as a server if appropriately programmed.
FIG. 4 is a block diagram illustrating an exemplary processing engine according to some embodiments of the present disclosure. In some embodiments, the processing engine 112 may include an obtaining module 410, a route determination module 420, a vector determination module 430, a time determination module 440, and a training module 450.
The obtaining module 410 may be configured to obtain data and/or information associated with the online to offline service system 100. For example, the obtaining module 410 may obtain a service request. As another example, the obtaining module 410 may obtain at least one feature of each one of a plurality of preset links. As still another example, the obtaining module 410 may obtain a plurality of training samples comprising a plurality of historical routes associated with a plurality of historical orders.
In some embodiments, the obtaining module 410 may obtain the data and/or the information from one or more components (e.g., the terminal device 130, and the storage device 140) of the online to offline service system 100 or an external storage device via the network 150.
The route determination module 420 may be configured to determine a route based on a starting location to a destination. In some embodiments, the route determination module 420 may determine a preset route from a starting location to a destination based on route planning techniques. The route planning techniques may include a machine learning technique, an artificial intelligence technique, a template approach technique, an artificial potential field technique, or the like, or any combination thereof.
The vector determination module 430 may be configured to determine a vector of a link. In some embodiments, for each preset link of a plurality of preset links, the vector determination module 430 may determine a corresponding vector representing a topological relationship among the plurality of preset links. For example, for each preset link of a plurality of preset links, the vector determination module 430 may determine a corresponding vector using a word2vec model.
The time determination module 440 may be configured to determine an ETA for a service request. In some embodiments, the time determination module 440 may determine an ETA for a preset route including a plurality of preset links. For example, the time determination module 440 may determine, based on at least one feature of each one of the plurality of preset links and a vector corresponding to the each one of the plurality of preset links, a sub-ETA corresponding to the each one of the plurality of preset links by using a machine learning model. As another example, the time determination module 440 may determine, based on a sub-ETA corresponding to each one of a plurality of preset links, an ETA for a service request. More descriptions of the determination of the ETA for the service request may be found elsewhere in the present disclosure (e.g., FIGs. 5, 7 and the descriptions thereof) .
The training module 450 may be configured to determine a trained word2vec model. In some embodiments, the training module 450 may determine the word2vec model based on a plurality of training samples. More descriptions of the determination of the word2vec model may be found elsewhere in the present disclosure (e.g., FIGs. 5, 6 and the descriptions thereof) .
The modules in the processing engine 112 may be connected to or communicate with each other via a wired connection or a wireless connection. The wired connection may include a metal cable, an optical cable, a hybrid cable, or the like, or any combination thereof. The wireless connection may include a Local Area Network (LAN) , a Wide Area Network (WAN) , a Bluetooth, a ZigBee, a Near Field Communication (NFC) , or the like, or any combination thereof. Two or more of the modules may be combined into a single module, and any one of the modules may be divided into two or more units. In some embodiments, one or more modules may be combined into a single module. For example, the vector determination module 430 and the time determination module 440 may be combined as a single module which may both determine a vector corresponding to a preset link and determine an ETA for the service request. In some embodiments, one or more modules may be added. For example, the processing engine 112 may further include a storage module (not shown) used to store information and/or data (e.g., a plurality of preset links, one or more features of a plurality of preset links) associated with a preset route. In some embodiments, one or more modules may be omitted. For example, the training module 450 may be unnecessary and the word2vec model may be obtained from a storage device (e.g., the storage device 140) , such as the ones disclosed elsewhere in the present disclosure.
FIG. 5 is a flowchart illustrating an exemplary process for determining an ETA for a service request according to some embodiments of the present disclosure. In some embodiments, the process 500 may be implemented as a set of instructions (e.g., an application) stored in the storage ROM 230 or RAM 240. The processor 220 and/or the modules in FIG. 4 may execute the set of instructions, and when executing the instructions, the processor 220 and/or the modules may be configured to perform the process 500. The operations of the illustrated process presented below are intended to be illustrative. In some embodiments, the process 500 may be accomplished with one or more additional operations not described and/or without one or more of the operations herein discussed. Additionally, the order of the operations of the process 500 as illustrated in FIG. 5 and described below is not intended to be limiting.
In 510, the processing engine 112 (e.g., the obtaining module 410) may obtain a service request.
As used herein, a service request may be a request for any location-based service. In some embodiments, the service request may be a request for a transportation service (e.g., a taxi service, a delivery service, a vehicle hailing service) or a navigation service. In some embodiments, the service request may include a starting location, a destination, a starting time, identity information (e.g., an identification (ID) , a telephone number, a user’s name) , or the like, or any combination thereof. As used herein, “a starting location” may refer to a location that a user inputs/selects to start a service (e.g., an online taxi hailing service) via a terminal device (e.g., the terminal device 130) when the user initiates a service request, or a location detected using a positioning technology by the system 100 or the terminal device 130. “A destination” may refer to a location that a user inputs/selects to end a service (e.g., an online taxi hailing service) via a terminal device (e.g., the terminal device 130) when the user initiates a service request. “A starting time” may refer to a time point when a user wants to start a service.
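Merely for illustration, the fields listed above can be pictured as a simple record. The following Python dataclass is only a hypothetical shape for such a request and does not define the actual data model of the system 100; the field names and types are assumptions.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class ServiceRequest:
    """Illustrative container for the fields a service request may carry."""
    starting_location: Tuple[float, float]  # (latitude, longitude) entered by the user or detected
    destination: Tuple[float, float]        # (latitude, longitude) entered by the user
    starting_time: str                      # e.g., "2019-09-19T08:30:00"
    user_id: Optional[str] = None           # identity information (ID, telephone number, name, ...)

request = ServiceRequest((39.9087, 116.3975), (39.9990, 116.3267), "2019-09-19T08:30:00", "user-42")
```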
In some embodiments, the processing engine 112 may obtain the service request from the storage device 140 or from a terminal device (e.g., the terminal device 130) of one or more users via the network 150. In some embodiments, the terminal device may establish a communication (e.g., a wireless communication) with the server 110, for example, through an application (e.g., the application 380 in FIG. 3) installed in the terminal device. In some embodiments, the application may be associated with a service platform (e.g., an online to offline service platform) . For example, the application may be associated with a taxi-hailing service platform. In some embodiments, the user may log into the application and initiate the service request. In some embodiments, the application installed in the terminal device may direct the terminal device to monitor the service request from the user continuously or periodically, and automatically transmit the service request to the processing engine 112 via the network 150.
In 520, the processing engine 112 (e.g., the route determination module 420) may determine a preset route from the starting location to the destination. The preset route may include a plurality of preset links.
In some embodiments, the processing engine 112 may determine a route that travels from the starting location to the destination in response to a service request obtained from the terminal device 130 of a user. In some embodiments, the preset route may be determined based on route planning techniques. The route planning techniques may include a machine learning technique, an artificial intelligence technique, a template approach technique, an artificial potential field technique, or the like, or any combination thereof, which may improve the accuracy of the ETA. For example, algorithms used in route planning may include a Dijkstra algorithm, a Floyd-Warshall algorithm, a Bellman-Ford algorithm, a double direction A algorithm, a Geometric Goal Directed Search (A*) algorithm, a priority queue algorithm, a Heuristics algorithm, a sample algorithm, or the like, or any combination thereof.
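The present disclosure does not limit which of the above algorithms is used. Merely by way of example, the following is a minimal Python sketch of Dijkstra's algorithm over a hypothetical adjacency map of a road network; the graph structure, node names, and costs are assumptions for illustration only.

```python
import heapq

def dijkstra_route(graph, start, goal):
    """Return (total_cost, node_path) of the lowest-cost route from start to goal.

    graph: dict mapping node -> list of (neighbor, cost) pairs, where cost may be
    link length or expected travel time (hypothetical representation of a road network).
    """
    queue = [(0.0, start, [start])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, edge_cost in graph.get(node, []):
            if neighbor not in visited:
                heapq.heappush(queue, (cost + edge_cost, neighbor, path + [neighbor]))
    return float("inf"), []

# Toy road network: nodes are intersections, edges are links with lengths in meters.
road_graph = {
    "A": [("B", 200.0), ("C", 500.0)],
    "B": [("C", 150.0), ("D", 400.0)],
    "C": [("D", 100.0)],
    "D": [],
}
print(dijkstra_route(road_graph, "A", "D"))  # (450.0, ['A', 'B', 'C', 'D'])
```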
In some embodiments, the preset route may be determined based on a plurality of routes completed in a plurality of historical orders. For example, if route A is determined as the most frequently used route from a starting location to a destination in the plurality of historical orders, route A may be recommended to the user as the preset route to travel from the same starting location to the same destination.
In some embodiments, one or more preset routes from the starting location to the destination may be determined and recommended to the user, and at least one preset route may be selected from the one or more preset routes. The route selection may be performed by the user or by the processing engine 112. In some embodiments, the preset route may be selected from the one or more preset routes based on a time related criterion, a service cost related criterion, or a path related criterion (e.g., road type, road width, traffic condition, speed limit, curve radius, number of intersections) . For example, the at least one preset route may be selected from the one or more preset routes in terms of a shortest mileage, a shortest time, a least service cost, a safest route, a most scenic route, a route with less traffic, or the like.
In some embodiments, the preset route may include one or more preset links. Each preset link may correspond to at least a portion of the preset route. As used herein, a “link” may be an element of road or street in a map. A link may correspond to a segment of a road or a street on the map. In some embodiments, a road may include one or more links. For example, Changan Street may be mapped to five links on the map. The five links may be connected one by one via their nodes to constitute Changan Street. In some embodiments, a region (e.g., Chaoyang district, Beijing city) may include a plurality of roads. Thus, a road network of the region may be represented as an aggregation of links.
In some embodiments, the processing engine 112 may divide a road into the one or more links based on one or more intersections of the road. For example, the processing engine 112 may determine a road segment (e.g., 200 meters) between two intersections of the road as the link. In some embodiments, if a distance of the road segment between two intersections of the road is larger than a distance threshold (e.g., 1 kilometer) , the processing engine 112 may divide the road into the one or more links based on a location of a city. For example, the road may be a highway that is several kilometers long. The processing engine 112 may divide the road into the one or more links based on the location of the city near the road. In some embodiments, the processing engine 112 may mark each link of the one or more links of the road, for example, as L1, L2, L3, …, Ln.
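For illustration purposes only, the splitting rule described above may be sketched as follows. This sketch assumes the road is described by its total length and the cumulative positions of its intersections, and it subdivides over-long segments evenly rather than by the location of a nearby city, which is a simplification.

```python
def split_road_into_links(road_length, intersection_positions, max_link_length=1000.0):
    """Split a road into links at its intersections; further split any segment
    longer than max_link_length (e.g., 1 kilometer) into equal pieces.

    road_length: total length of the road in meters.
    intersection_positions: distances (meters) of intersections from the road start.
    Returns a list of (start_m, end_m) link boundaries.
    """
    cut_points = [0.0] + sorted(intersection_positions) + [road_length]
    links = []
    for start, end in zip(cut_points, cut_points[1:]):
        segment = end - start
        if segment <= 0:
            continue
        if segment <= max_link_length:
            links.append((start, end))
        else:
            # Over-long segment (e.g., a highway stretch): subdivide evenly.
            pieces = int(segment // max_link_length) + 1
            step = segment / pieces
            links.extend((start + i * step, start + (i + 1) * step) for i in range(pieces))
    return links

# A 3.2 km road with intersections at 200 m and 400 m from its start.
print(split_road_into_links(3200.0, [200.0, 400.0]))
```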
In 530, for each one of the plurality of preset links, the processing engine 112 (e.g., the vector determination module 430) may determine a corresponding vector representing a topological relationship among the plurality of preset links.
In some embodiments, for each one of the plurality of preset links, the processing engine 112 may determine the corresponding vector using a word2vec model, which may improve the accuracy of the ETA. In some embodiments, the word2vec model may be similar to the one commonly used in the natural language processing (NLP) field. In some embodiments, word2vec may refer to a group of related models that are used to produce word embeddings. These models are shallow, two-layer neural networks that may be trained to reconstruct linguistic contexts of words. The word2vec model may take a large corpus of text as its input and produce a vector space, typically of several hundred dimensions, with each unique word in the corpus being assigned a corresponding vector in the space.
In some embodiments, the processing engine 112 may obtain a corpus (e.g., ABCD, ABD, ADE, ABC, AE) including a plurality of words (e.g., A, B, C, D, E) . The processing engine 112 may train the word2vec model by inputting the corpus into the word2vec model. The word2vec model may generate a vector corresponding to each word of the plurality of words, for example, A (0.1, 0.2, 0.3) , B (0.2, 0.4, 0.5) , C (0.3, 0.6, 0.8) , D (0.1, 0.5, 0.6) , and E (0.3, 0.9, 1.1) . The vector corresponding to each word may reflect the semantics of the word. The processing engine 112 may perform multiple calculations on the plurality of words based on the vectors corresponding to the words. For example, assuming that A is “China” , B is “capital” , and A+B=C (i.e., “China” + “capital” = “Beijing” ) , the processing engine 112 may determine C (i.e., “Beijing” ) by inputting A and B into the word2vec model. As another example, assuming that D is “US” , B is “capital” , and D+B=E (i.e., “US” + “capital” = “Washington” ) , the processing engine 112 may determine E (i.e., “Washington” ) by inputting D and B into the word2vec model. Accordingly, the word2vec model may determine a semantic relationship among the plurality of words.
In some embodiments, the word2vec model may be determined according to a training process. In some embodiments, the processing engine 112 may obtain a plurality of training samples. The plurality of training samples may include a plurality of historical routes associated with a plurality of historical orders. The processing engine 112 may train the word2vec model based on the plurality of training samples. The word2vec model may be configured to generate a vector corresponding to each link of a plurality of links associated with the plurality of historical routes. The vector corresponding to each link may represent a topological relationship among the plurality of links. More descriptions for training the word2vec model may be found elsewhere in the present disclosure (e.g., FIG. 6 and descriptions thereof) .
In some embodiments, a plurality of vectors corresponding to a plurality of links associated with a specific area may be stored in a storage device (e.g., the storage device 140) of the online to offline service system 100 or an external storage device. The processing engine 112 may access the storage device and retrieve the vector corresponding to the each preset link of the plurality of preset links associated with the preset route.
In 540, the processing engine 112 (e.g., the time determination module 440) may determine, using a machine learning model, an ETA for the service request based on the vectors corresponding to the plurality of preset links.
As used herein, “an estimated time of arrival (ETA) ” may refer to an estimated time point when a user arrives at a destination. In some embodiments, the processing engine 112 may obtain at least one feature of each one of the plurality of preset links. The processing engine 112 may determine, based on the at least one feature of the each one of the plurality of preset links and the vector corresponding to the each one of the plurality of preset links, a sub-ETA corresponding to the each one of the plurality of preset links by using the machine learning model. In some embodiments, the machine learning model may be a recurrent neural network (RNN) model. The processing engine 112 may determine, based on the sub-ETA corresponding to the each one of the plurality of preset links, the ETA for the service request. More descriptions of the determination of the ETA for the service request may be found elsewhere in the present disclosure (e.g., FIG. 7 and descriptions thereof) .
In some embodiments, the ETA for the route of the service request may be used for further processing. For example, for online taxi-hailing services, the ETA for the service request may be used to determine a service fee for travel from the starting location to the destination. In some embodiments, the processing engine 112 may transmit the ETA for the service request to the terminal device (e.g., the terminal device 130) of the user. The ETA may be displayed on a visual interface of the terminal device.
It should be noted that the above description is merely provided for the purposes of illustration, and not intended to limit the scope of the present disclosure. For persons having ordinary skills in the art, multiple variations and modifications may be made under the teachings of the present disclosure. However, those variations and modifications do not depart from the scope of the present disclosure. In some embodiments, one or more operations may be combined into a single operation. For example, operation 510 and operation 520 may be combined into an operation. In some embodiments, one or more other optional operations (e.g., a storing operation) may be added elsewhere in process 500. In the storing operation, the processing engine 112 may store information and/or data associated with the online to offline service system (e.g., the one or more features of a preset link, the trained word2vec model) in a storage device (e.g., the storage device 140) disclosed elsewhere in the present disclosure. In some embodiments, one or more operations may be performed simultaneously. For example, operation 510 and operation 520 may be performed simultaneously.
FIG. 6 is a flowchart illustrating an exemplary process for determining a word2vec model according to some embodiments of the present disclosure. In some embodiments, the process 600 may be implemented as a set of instructions (e.g., an application) stored in the storage ROM 230 or RAM 240. The processor 220 and/or the modules in FIG. 4 may execute the set of instructions, and when executing the instructions, the processor 220 and/or the modules may be configured to perform the process 600. The operations of the illustrated process presented below are intended to be illustrative. In some embodiments, the process 600 may be accomplished with one or more additional operations not described and/or without one or more of the operations herein discussed. Additionally, the order of the operations of the process 600 as illustrated in FIG. 6 and described below is not intended to be limiting.
In 610, the processing engine 112 (e.g., the obtaining module 410) may obtain a plurality of training samples comprising a plurality of historical routes associated with a plurality of historical orders. In some embodiments, each historical route of the plurality of historical routes may include a plurality of links.
As used herein, a historical order may refer to an order that has been fulfilled. In some embodiments, information associated with the historical order may include an order number, a historical starting location, a historical destination, a historical pick-up location, a historical drop-off location, user’s identity information (e.g., an identification (ID) , a telephone number, a user’s name) , or the like, or any combination thereof. As used herein, “a pick-up location” may refer to a location where a user starts a service. For example, the pick-up location may be the location where a passenger actually gets on a vehicle. “A drop-off location” may refer to a location where a user ends a service. For example, the drop-off location may be the location where a passenger actually gets off a vehicle. In some embodiments, the pick-up location may be the same as or different from the starting location. In some embodiments, the drop-off location may be the same as or different from the destination.
In some embodiments, the historical order may correspond to the historical route. The historical route may be an actual driving route of the user from the historical pick-up location to the historical drop-off location. Taking a specific historical order associated with a terminal device (e.g., the terminal device 130) as an example, the processing engine 112 may obtain position information (e.g., GPS information) of the terminal device, which indicates a current location of the terminal device, from the terminal device at a certain time interval (e.g., 1 second, 10 seconds, 1 minute, etc. ) , in real time or substantially in real time. Further, the processing engine 112 may store the position information in a storage device (e.g., the storage device 140) disclosed elsewhere in the present disclosure. The processing engine 112 may determine the historical route of the terminal device based on the position information of the terminal device. In some embodiments, the processing engine 112 may determine the historical route of the terminal device based on the position information of the terminal device and map information according to a map matching algorithm. Exemplary map matching algorithms may include a nearest neighbor algorithm, a Hidden Markov Model (HMM) , or the like. In some embodiments, the processing engine 112 may match recorded geographic coordinates of the terminal device to a logical model of the real world according to the map matching algorithm. For example, the processing engine 112 may associate a position point (e.g., a GPS point) of the terminal device with a location in an existing street graph.
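As a simplified illustration of the nearest neighbor style of map matching mentioned above (the HMM variant is more involved and is not sketched here), the following Python sketch assumes each link is summarized by a single representative coordinate; a real implementation would project GPS points onto the link geometry instead.

```python
import math

def nearest_neighbor_match(gps_trace, link_points):
    """Snap each GPS point to its closest link (nearest neighbor map matching).

    gps_trace: list of (lat, lon) points reported by the terminal device.
    link_points: dict mapping link id -> representative (lat, lon) of the link.
    Returns the matched historical route as an ordered list of link ids,
    with consecutive duplicates collapsed.
    """
    def distance(p, q):
        # Planar approximation; adequate for the very short distances in this toy example.
        return math.hypot(p[0] - q[0], p[1] - q[1])

    route = []
    for point in gps_trace:
        link_id = min(link_points, key=lambda lid: distance(point, link_points[lid]))
        if not route or route[-1] != link_id:
            route.append(link_id)
    return route

links = {"L1": (39.900, 116.390), "L2": (39.902, 116.395), "L3": (39.905, 116.400)}
trace = [(39.9001, 116.3901), (39.9019, 116.3948), (39.9049, 116.4002)]
print(nearest_neighbor_match(trace, links))  # ['L1', 'L2', 'L3']
```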
In 620, the processing engine 112 (e.g., the training module 450) may determine the word2vec model based on the plurality of training samples.
In some embodiments, the processing engine 112 may input the plurality of training samples into the word2vec model. The word2vec model may generate a vector corresponding to each link of the plurality of links associated with the plurality of historical routes. In some embodiments, the vector corresponding to each link of the plurality of links may represent a topological relationship among the plurality of links. For example, assuming that a first route is L1+L2, a second route is L3, a first vector corresponding to L1 is (0.1, 0.2) , a second vector corresponding to L2 is (0.2, 0.3) , and a third vector corresponding to L3 is (0.3, 0.5) , the processing engine 112 may determine that the first route is equivalent to the second route, that is, L1+L2=L3.
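The equivalence in this example can be checked in a few lines of code. This is only a toy verification using the numbers from the preceding paragraph; it is not the comparison rule actually used by the processing engine 112.

```python
import numpy as np

v_l1 = np.array([0.1, 0.2])
v_l2 = np.array([0.2, 0.3])
v_l3 = np.array([0.3, 0.5])

combined = v_l1 + v_l2  # vector of the first route L1 + L2
cosine = combined @ v_l3 / (np.linalg.norm(combined) * np.linalg.norm(v_l3))
print(np.allclose(combined, v_l3), round(float(cosine), 6))  # True 1.0
```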
It should be noted that the above description is merely provided for the purposes of illustration, and not intended to limit the scope of the present disclosure. For persons having ordinary skills in the art, multiple variations and modifications may be made under the teachings of the present disclosure. However, those variations and modifications do not depart from the scope of the present disclosure. In some embodiments, the training process of the word2vec model in the natural language processing (NLP) field may be different from the training process of the word2vec model in the determination of the ETA in the route planning field. In some embodiments, in the NLP field, it is reasonable to repeat a word multiple times. For example, a user may say “hello” twice in succession. However, in route planning, a road segment cannot be repeated multiple times in a same direction. In some embodiments, an operation for preprocessing the plurality of training samples may be added before operation 620. In some embodiments, the processing engine 112 may remove a historical route including duplicate links. For example, the processing engine 112 may remove a historical route in which a user takes a detour. In some embodiments, the processing engine 112 may remove a historical route that violates a traffic rule. For example, assuming that the driving direction of the historical route on a link is inconsistent with the direction of travel of the link, the processing engine 112 may remove the historical route.
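A minimal sketch of this preprocessing step is given below. It assumes each historical route is a list of (link_id, driving_direction) pairs and that the permitted direction of travel of each link is known; both data structures are hypothetical.

```python
def preprocess_routes(historical_routes, allowed_direction):
    """Filter training samples before word2vec training.

    historical_routes: list of routes; each route is a list of (link_id, driving_direction) pairs.
    allowed_direction: dict mapping link_id -> permitted direction of travel.
    Drops routes that revisit a link (detours) or that traverse a link against
    its direction of travel (traffic-rule violations).
    """
    cleaned = []
    for route in historical_routes:
        link_ids = [link_id for link_id, _ in route]
        has_duplicates = len(link_ids) != len(set(link_ids))
        violates_rule = any(direction != allowed_direction.get(link_id)
                            for link_id, direction in route)
        if not has_duplicates and not violates_rule:
            cleaned.append(link_ids)
    return cleaned

routes = [
    [("L1", "E"), ("L2", "E"), ("L3", "N")],   # kept
    [("L1", "E"), ("L2", "E"), ("L1", "E")],   # detour: L1 is repeated
    [("L4", "W"), ("L5", "N")],                 # L4 is one-way eastbound in this toy map
]
allowed = {"L1": "E", "L2": "E", "L3": "N", "L4": "E", "L5": "N"}
print(preprocess_routes(routes, allowed))  # [['L1', 'L2', 'L3']]
```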
In some embodiments, the processing engine 112 may determine a plurality of training parameters for the word2vec model. The training parameters may include a window size, a min-count, a dimensionality of a vector, or the like, or any combination thereof. As used herein, “a window size” may refer to a maximum distance between a current word (e.g., a current link) and a predicted word (e.g., a predicted link) within a sentence (e.g., a route) . “A min-count” may refer to a frequency threshold such that all words (e.g., links) with a total frequency lower than the threshold are ignored. In some embodiments, the plurality of training parameters for the word2vec model in the NLP field may be different from the plurality of training parameters for the word2vec model in the determination of the ETA. For example, a word may be related to five other words in a sentence, while a link may be related to ten other links in a route. Accordingly, the window size of the word2vec model in the NLP field may be set as 5, and the window size of the word2vec model in the determination of the ETA may be set as 10.
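Merely by way of example, such a training run might look as follows with an off-the-shelf word2vec implementation; gensim is used here only as one possible library, and the parameter values (a 64-dimensional vector, a window of 10, a min-count of 1 so that rarely traveled links are kept) are illustrative assumptions rather than values required by the present disclosure.

```python
from gensim.models import Word2Vec  # gensim 4.x naming; earlier versions use size instead of vector_size

# Each "sentence" is a cleaned historical route, i.e., an ordered list of link ids
# (e.g., the output of the preprocessing sketch above). Toy data for illustration.
route_sentences = [
    ["L1", "L2", "L3", "L7"],
    ["L1", "L2", "L4"],
    ["L5", "L2", "L3", "L6"],
]

model = Word2Vec(
    sentences=route_sentences,
    vector_size=64,  # dimensionality of each link vector (assumed value)
    window=10,       # a link may be related to roughly ten surrounding links in a route
    min_count=1,     # keep even rarely traveled links
    sg=1,            # skip-gram variant
    workers=4,
)

link_vector = model.wv["L2"]                   # vector representing link L2's topological context
similar = model.wv.most_similar("L2", topn=2)  # links that tend to co-occur with L2 in routes
print(link_vector.shape, similar)
```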
FIG. 7 is a flowchart illustrating an exemplary process for determining an ETA for a service request according to some embodiments of the present disclosure. In some embodiments, the process 700 may be implemented as a set of instructions (e.g., an application) stored in the storage ROM 230 or RAM 240. The processor 220 and/or the modules in FIG. 4 may execute the set of instructions, and when executing the instructions, the processor 220 and/or the modules may be configured to perform the process 700. The operations of the illustrated process presented below are intended to be illustrative. In some embodiments, the process 700 may be accomplished with one or more additional operations not described and/or without one or more of the operations herein discussed. Additionally, the order of the operations of the process 700 as illustrated in FIG. 7 and described below is not intended to be limiting.
In 710, the processing engine 112 (e.g., the obtaining module 410) may obtain at least one feature of each one of a plurality of preset links.
In some embodiments, the at least one feature of a preset link may include a length of the link, a width of the link, a direction of travel, a traffic light distribution, a lane condition (e.g., a number of lanes in the link, a classification of the lanes) , a road condition of the link (e.g., a shape, a surface roughness, a slippery condition of the road surface) , a speed limit of the link, a traffic light duration of the link, or the like, or any combination thereof.
In some embodiments, a plurality of links associated with an area and corresponding features may be stored in a storage device (e.g., the storage device 140) of the online to offline service system 100 or an external storage device. The processing engine 112 may access the storage device and retrieve the one or more features of each one of the plurality of preset links.
In 720, the processing engine 112 (e.g., the time determination module 440) may determine, based on the at least one feature of the each one of the plurality of preset links and a vector corresponding to the each one of the plurality of preset links, a sub-ETA corresponding to the each one of the plurality of preset links by using a machine learning model.
In some embodiments, the machine learning model may be a recurrent neural network (RNN) model. Since the preset route of the service request includes a plurality of preset links, and since the travel of the service request presumably proceeds from the starting location to the destination via the plurality of preset links, the plurality of preset links may be sequentially connected with each other. The RNN model may include a plurality of cells, each of which may make use of sequential information (e.g., the one or more features of each preset link, a vector corresponding to each preset link) to obtain the sub-ETA for each one of the plurality of preset links. RNNs are called recurrent because they perform the same task for every element (e.g., each preset link) of a sequence, with the output depending on the previous computations.
In some embodiments, each cell of the RNN model may include an input layer, a hidden layer, and an output layer. The hidden layer may have one or more feedback loops. These feedback loops may provide RNNs with a type of “memory, ” in which past outputs from the hidden layer of a cell may inform future outputs from the hidden layer of another cell. Specifically, each feedback loop may provide an output from the hidden layer in a previous cell back to the hidden layer of the current cell as input for the current cell to inform the output of the current cell. This can enable RNNs to recurrently process sequence data (e.g., data that exists in an ordered sequence, like a route having a sequence of links) over a sequence of steps.
In some embodiments, each cell (except for the first cell) may receive an input (e.g., the one or more features of a corresponding preset link, the vector of the corresponding preset link) and an output (e.g., a feature vector for a previous preset link, a vector of the previous preset link) from the hidden layer of a previous cell. In some embodiments, the each cell may determine the feature vector for the corresponding preset link by analyzing the one or more features of the corresponding preset link. The each cell may generate the sub-ETA for the corresponding preset link based on the one or more features of the corresponding preset link, the vector of the corresponding preset link, the feature vector for the previous preset link, and the vector of the previous preset link. In some embodiments, the processing engine 112 may determine a combined vector of the feature vector for the corresponding preset link, the vector of the corresponding preset link, the feature vector for the previous preset link, and the vector of the previous preset link. For example, the processing engine 112 may determine a weight for each one of the feature vector for the corresponding preset link, the vector of the corresponding preset link, the feature vector for the previous preset link, and the vector of the previous preset link, respectively. The processing engine 112 may determine the combined vector based on the feature vector for the corresponding preset link, the vector of the corresponding preset link, the feature vector for the previous preset link, the vector of the previous preset link, and their corresponding weights.
The processing engine 112 may determine the sub-ETA for the corresponding preset link based on the combined vector of the feature vector for the corresponding preset link, the vector of the corresponding preset link, the feature vector for the previous preset link, and the vector of the previous preset link. For example, the cell may take the combined vector as input and output the sub-ETA for the corresponding preset link.
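To make the data flow concrete, the following is a deliberately simplified, framework-free sketch of one recurrent pass over the sequence of preset links: each step combines the current link's feature vector and word2vec-style link vector with the previous hidden state, updates the hidden state, and emits a sub-ETA; the per-link sub-ETAs are then summed, anticipating operation 730 below. The weights are random placeholders rather than trained parameters, and the actual cell structure of the RNN model is not limited to this form.

```python
import numpy as np

rng = np.random.default_rng(0)

def run_rnn_eta(link_features, link_vectors, hidden_size=16):
    """Sketch of a recurrent pass that emits one sub-ETA per preset link.

    link_features: array of shape (num_links, feature_dim), one row per preset link.
    link_vectors:  array of shape (num_links, embed_dim), word2vec-style link vectors.
    Returns (sub_etas, eta), where eta is the sum of the per-link sub-ETAs.
    """
    input_dim = link_features.shape[1] + link_vectors.shape[1]
    # Placeholder weights; in practice these would be learned from historical orders.
    w_in = rng.normal(scale=0.1, size=(input_dim, hidden_size))
    w_hid = rng.normal(scale=0.1, size=(hidden_size, hidden_size))
    w_out = rng.normal(scale=0.1, size=(hidden_size, 1))

    hidden = np.zeros(hidden_size)
    sub_etas = []
    for features, vector in zip(link_features, link_vectors):
        x = np.concatenate([features, vector])       # combined input for the current link
        hidden = np.tanh(x @ w_in + hidden @ w_hid)  # hidden state carries context from previous links
        sub_eta = np.exp(hidden @ w_out).item()      # keep the sub-ETA positive
        sub_etas.append(sub_eta)
    return sub_etas, sum(sub_etas)

# Three preset links, each with 4 features and an 8-dimensional link vector.
features = rng.random((3, 4))
vectors = rng.random((3, 8))
sub_etas, eta = run_rnn_eta(features, vectors)
print(sub_etas, eta)
```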
In 730, the processing engine 112 (e.g., the time determination module 440) may determine, based on the sub-ETA corresponding to the each one of the plurality of preset links, an ETA for a service request.
In some embodiments, the processing engine 112 may determine a sum of the sub-ETAs corresponding to the plurality of preset links. The processing engine 112 may determine the sum of the sub-ETAs as the ETA for the service request.
It should be noted that the above description is merely provided for the purposes of illustration, and not intended to limit the scope of the present disclosure. For persons having ordinary skills in the art, multiple variations and modifications may be made under the teachings of the present disclosure. However, those variations and modifications do not depart from the scope of the present disclosure.
Having thus described the basic concepts, it may be rather apparent to those skilled in the art after reading this detailed disclosure that the foregoing detailed disclosure is intended to be presented by way of example only and is not limiting. Various alterations, improvements, and modifications may occur to those skilled in the art, though not expressly stated herein. These alterations, improvements, and modifications are intended to be suggested by this disclosure, and are within the spirit and scope of the exemplary embodiments of this disclosure.
Moreover, certain terminology has been used to describe embodiments of the present disclosure. For example, the terms “one embodiment, ” “an embodiment, ” and “some embodiments” mean that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present disclosure. Therefore, it is emphasized and should be appreciated that two or more references to “an embodiment” or “one embodiment” or “an alternative embodiment” in various portions of this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures or characteristics may be combined as suitable in one or more embodiments of the present disclosure.
Further, it will be appreciated by one skilled in the art that aspects of the present disclosure may be illustrated and described herein in any of a number of patentable classes or context including any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof. Accordingly, aspects of the present disclosure may be implemented entirely in hardware, entirely in software (including firmware, resident software, micro-code, etc. ) , or in an implementation combining software and hardware that may all generally be referred to herein as a “module, ” “unit, ” “component, ” “device, ” or “system. ” Furthermore, aspects of the present disclosure may take the form of a computer program product embodied in one or more computer readable media having computer readable program code embodied thereon.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including electro-magnetic, optical, or the like, or any suitable combination thereof. A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that may communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable signal medium may be transmitted using any appropriate medium, including wireless, wireline, optical fiber cable, RF, or the like, or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Scala, Smalltalk, Eiffel, JADE, Emerald, C++, C#, VB. NET, Python or the like, conventional procedural programming languages, such as the "C" programming language, Visual Basic, Fortran 2003, Perl, COBOL 2002, PHP, ABAP, dynamic programming languages such as Python, Ruby and Groovy, or other programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN) , or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider) or in a cloud computing environment or offered as a service such as a Software as a Service (SaaS) .
Furthermore, the recited order of processing elements or sequences, or the use of numbers, letters, or other designations therefore, is not intended to limit the claimed processes and methods to any order except as may be specified in the claims. Although the above disclosure discusses through various examples what is currently considered to be a variety of useful embodiments of the disclosure, it is to be understood that such detail is solely for that purpose, and that the appended claims are not limited to the disclosed embodiments, but, on the contrary, are intended to cover modifications and equivalent arrangements that are within the spirit and scope of the disclosed embodiments. For example, although the implementation of various components described above may be embodied in a hardware device, it may also be implemented as a software only solution, e.g., an installation on an existing server or mobile device.
Similarly, it should be appreciated that in the foregoing description of embodiments of the present disclosure, various features are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various embodiments. This method of disclosure, however, is not to be interpreted as reflecting an intention that the claimed subject matter requires more features than are expressly recited in each claim. Rather, claimed subject matter may lie in less than all features of a single foregoing disclosed embodiment.
Claims (21)
- A method for determining an estimated time of arrival (ETA) for a service request implemented on a computing device having at least one processor and at least one storage device, the method comprising:obtaining a service request comprising a starting location and a destination;determining a preset route from the starting location to the destination, the preset route comprising a plurality of preset links;for each one of the plurality of preset links, determining a corresponding vector representing a topological relationship among the plurality of preset links; anddetermining, using a machine learning model, the ETA for the service request based on the vectors corresponding to the plurality of preset links.
- The method of claim 1, further comprising:for each one of the plurality of preset links, determining the corresponding vector using a word2vec model.
- The method of claim 2, further comprising determining the word2vec model according to a training process, the training process comprising operations of:obtaining a plurality of training samples comprising a plurality of historical routes associated with a plurality of historical orders; anddetermining the word2vec model based on the plurality of training samples.
- The method of claim 3, further comprising:removing a historical route including duplicate links.
- The method of claim 3, further comprising:removing a historical route that violates a traffic rule.
- The method of claim 3, wherein the determining, using a machine learning model, the ETA for the service request based on the vectors corresponding to the plurality of preset links further comprises:obtaining at least one feature of the each one of the plurality of preset links;determining, based on the at least one feature of the each one of the plurality of preset links and the vector corresponding to the each one of the plurality of preset links, a sub-ETA corresponding to the each one of the plurality of preset links by using the machine learning model; anddetermining, based on the sub-ETA corresponding to the each one of the plurality of preset links, the ETA for the service request.
- The method of claim 6, wherein the at least one feature of the each one of the plurality of preset links comprises at least one of a length of the preset link, a width of the preset link, a direction of travel, a traffic light distribution, or a lane condition.
- The method of claim 3, wherein the plurality of historical routes are determined according to a map matching algorithm.
- The method of any one of claims 1-8, wherein the machine learning model is a recurrent neural network (RNN) model.
- The method of any one of claims 1-9, further comprising:transmitting the ETA to a terminal, directing the terminal to display the ETA.
- A system for determining an estimated time of arrival (ETA) for a service request, comprising:at least one storage medium storing a set of instructions;at least one processor in communication with the at least one storage medium, when executing the stored set of instructions, the at least one processor causes the system to:obtain a service request comprising a starting location and a destination;determine a preset route from the starting location to the destination, the preset route comprising a plurality of preset links;for each one of the plurality of preset links, determine a corresponding vector representing a topological relationship among the plurality of preset links; anddetermine, using a machine learning model, the ETA for the service request based on the vectors corresponding to the plurality of preset links.
- The system of claim 11, the at least one processor causes the system to:for each one of the plurality of preset links, determine the corresponding vector using a word2vec model.
- The system of claim 12, wherein the word2vec model is determined according to a training process, the training process comprising operations of:obtaining a plurality of training samples comprising a plurality of historical routes associated with a plurality of historical orders; anddetermining the word2vec model based on the plurality of training samples.
- The system of claim 13, the at least one processor causes the system to:remove a historical route including duplicate links.
- The system of claim 13, the at least one processor causes the system to:remove a historical route that violates a traffic rule.
- The system of claim 13, wherein to determine, using a machine learning model, the ETA for the service request based on the vectors corresponding to the plurality of preset links, the at least one processor causes the system to:obtain at least one feature of the each one of the plurality of preset links;determine, based on the at least one feature of the each one of the plurality of preset links and the vector corresponding to the each one of the plurality of preset links, a sub-ETA corresponding to the each one of the plurality of preset links by using the machine learning model; anddetermine, based on the sub-ETA corresponding to the each one of the plurality of preset links, the ETA for the service request.
- The system of claim 16, wherein the at least one feature of the each one of the plurality of preset links comprises at least one of a length of the preset link, a width of the preset link, a direction of travel, a traffic light distribution, or a lane condition.
- The system of claim 13, wherein the plurality of historical routes are determined according to a map matching algorithm.
- The system of any one of claims 11-18, wherein the machine learning model is a recurrent neural network (RNN) model.
- The system of any one of claims 11-19, the at least one processor causes the system to:transmit the ETA to a terminal, directing the terminal to display the ETA.
- A non-transitory computer readable medium storing instructions, the instructions, when executed by at least one processor, causing the at least one processor to implement a method comprising:obtaining a service request comprising a starting location and a destination;determining a preset route from the starting location to the destination, the preset route comprising a plurality of preset links;for each one of the plurality of preset links, determining a corresponding vector representing a topological relationship among the plurality of preset links; anddetermining, using a machine learning model, an estimated time of arrival (ETA) for the service request based on the vectors corresponding to the plurality of preset links.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2019/106567 WO2021051329A1 (en) | 2019-09-19 | 2019-09-19 | Systems and methods for determining estimated time of arrival in online to offline services |
CN201980099967.0A CN114365205A (en) | 2019-09-19 | 2019-09-19 | System and method for determining estimated time of arrival in online-to-offline service |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2019/106567 WO2021051329A1 (en) | 2019-09-19 | 2019-09-19 | Systems and methods for determining estimated time of arrival in online to offline services |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021051329A1 true WO2021051329A1 (en) | 2021-03-25 |
Family
ID=74883784
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2019/106567 WO2021051329A1 (en) | 2019-09-19 | 2019-09-19 | Systems and methods for determining estimated time of arrival in online to offline services |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN114365205A (en) |
WO (1) | WO2021051329A1 (en) |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104123841A (en) * | 2014-08-14 | 2014-10-29 | 苏州大学 | Method and system for acquiring arrival time of vehicle |
CN108108854A (en) * | 2018-01-10 | 2018-06-01 | 中南大学 | City road network link prediction method, system and storage medium |
TW201901185A (en) * | 2017-05-22 | 2019-01-01 | 大陸商北京嘀嘀無限科技發展有限公司 | System and method for determining estimated arrival time |
DE102018005778A1 (en) * | 2018-07-23 | 2019-01-03 | Daimler Ag | Method for determining an arrival time for a vehicle |
CN109489679A (en) * | 2018-12-18 | 2019-03-19 | 成佳颖 | A kind of arrival time calculation method in guidance path |
CN109584552A (en) * | 2018-11-28 | 2019-04-05 | 青岛大学 | A kind of public transport arrival time prediction technique based on network vector autoregression model |
CN109791731A (en) * | 2017-06-22 | 2019-05-21 | 北京嘀嘀无限科技发展有限公司 | A kind of method and system for estimating arrival time |
CN109974735A (en) * | 2019-04-08 | 2019-07-05 | 腾讯科技(深圳)有限公司 | Predictor method, device and the computer equipment of arrival time |
CN110073426A (en) * | 2017-11-23 | 2019-07-30 | 北京嘀嘀无限科技发展有限公司 | The system and method for Estimated Time of Arrival |
CN110168313A (en) * | 2017-01-10 | 2019-08-23 | 北京嘀嘀无限科技发展有限公司 | For estimating the method and system of arrival time |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103268280B (en) * | 2013-04-16 | 2016-01-06 | 西安电子科技大学 | The software fault positioning system combined based on distance metric and statistical study and method |
CN104240260B (en) * | 2014-10-09 | 2017-01-18 | 武汉大学 | Junction identification based intelligent road extraction method |
CN107967532B (en) * | 2017-10-30 | 2020-07-07 | 厦门大学 | Urban traffic flow prediction method fusing regional vitality |
CN108399201B (en) * | 2018-01-30 | 2020-05-12 | 武汉大学 | Web user access path prediction method based on recurrent neural network |
CN110210604B (en) * | 2019-05-21 | 2021-06-04 | 北京邮电大学 | Method and device for predicting movement track of terminal equipment |
Also Published As
Publication number | Publication date |
---|---|
CN114365205A (en) | 2022-04-15 |
Legal Events
Date | Code | Title | Description
---|---|---|---
 | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 19945658; Country of ref document: EP; Kind code of ref document: A1
 | NENP | Non-entry into the national phase | Ref country code: DE
 | 122 | Ep: pct application non-entry in european phase | Ref document number: 19945658; Country of ref document: EP; Kind code of ref document: A1