
CN115631550A - User feedback method and system - Google Patents

User feedback method and system

Info

Publication number
CN115631550A
CN115631550A (Application CN202110806625.7A)
Authority
CN
China
Prior art keywords
user
vehicle
information
state
feedback
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110806625.7A
Other languages
Chinese (zh)
Inventor
李昌远
阮春彬
李晨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Voyager Technology Co Ltd
Original Assignee
Beijing Voyager Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Voyager Technology Co Ltd filed Critical Beijing Voyager Technology Co Ltd
Priority application: CN202110806625.7A
Publication: CN115631550A
Legal status: Pending

Classifications

    • G PHYSICS
    • G07 CHECKING-DEVICES
    • G07C TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C 5/00 Registering or indicating the working of vehicles
    • G07C 5/008 Registering or indicating the working of vehicles communicating information to a remotely located station
    • G07C 5/08 Registering or indicating performance data other than driving, working, idle, or waiting time, with or without registering driving, working, idle or waiting time
    • G07C 5/0841 Registering performance data
    • G07C 5/085 Registering performance data using electronic data carriers
    • G07C 5/0866 Registering performance data using electronic data carriers, the electronic data carrier being a digital video recorder in combination with a video camera
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B 21/00 Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
    • G08B 21/18 Status alarms
    • G08B 21/24 Reminder alarms, e.g. anti-loss alarms
    • G08B 25/00 Alarm systems in which the location of the alarm condition is signalled to a central station, e.g. fire or police telegraphic systems

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • Emergency Management (AREA)
  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Traffic Control Systems (AREA)

Abstract

Embodiments of this specification disclose a user feedback method and system. The method comprises: acquiring state information, the state information comprising at least one of a vehicle state and a user state; sending first reminding information to a user terminal based on the state information; and acquiring feedback information on the driving condition of the vehicle that the user inputs at the user terminal in response to the first reminding information, the vehicle being the one in which the user is riding.

Description

User feedback method and system
Technical Field
Embodiments of this specification relate to the technical field of vehicle control, and in particular to a user feedback method and system.
Background
With the continuous development of computer technology, intelligent travel (e.g., shared mobility and automatic driving) has also developed rapidly. Taking automatic driving as an example, an autonomous vehicle is a vehicle capable of achieving a certain level of driving automation; for example, it may be controlled by a system (e.g., a back-end remote control) to drive itself. An autonomous vehicle can serve as a taxi, as public transport, or the like. When such a vehicle provides service to people, how to collect user feedback on the vehicle in a timely manner, so as to optimize automatic control of the vehicle and better meet different user needs, is a problem that urgently needs to be solved.
Disclosure of Invention
One aspect of the embodiments of this specification provides a user feedback method, comprising: acquiring state information, the state information comprising at least one of a vehicle state and a user state; sending first reminding information to a user terminal based on the state information; and acquiring feedback information on the driving condition of the vehicle that the user inputs at the user terminal in response to the first reminding information, the vehicle being the one in which the user is riding.
One aspect of the embodiments of this specification provides a user feedback system, comprising: a first acquisition module configured to acquire state information, the state information comprising at least one of a vehicle state and a user state; a sending module configured to send first reminding information to a user terminal based on the state information; and a second acquisition module configured to acquire feedback information on the driving condition of the vehicle that the user inputs at the user terminal in response to the first reminding information, the vehicle being the one in which the user is riding.
An aspect of the embodiments of this specification provides a user feedback apparatus, the apparatus comprising at least one processor and at least one memory; the at least one memory is configured to store computer instructions, and the at least one processor is configured to execute at least some of the computer instructions to implement operations corresponding to the user feedback method described above.
An aspect of the embodiments of this specification provides a computer-readable storage medium storing computer instructions that, when executed by a processor, implement operations corresponding to the user feedback method described above.
Drawings
The present description will be further described by way of exemplary embodiments, which will be described in detail by way of the accompanying drawings. These embodiments are not intended to be limiting, and in these embodiments like numerals are used to indicate like structures, wherein:
FIG. 1 is a schematic diagram of an application scenario of a user feedback system shown in accordance with some embodiments of the present description;
FIG. 2 is a flow diagram of a user feedback method shown in accordance with some embodiments of the present description;
FIG. 3 is a flow diagram illustrating determining status information according to some embodiments of the present description;
FIG. 4 is a flow diagram illustrating sending first reminder information in accordance with some embodiments of the present description;
FIG. 5 is a flow diagram illustrating predicting a target location in accordance with some embodiments of the present description;
FIGS. 6A-6D are user interface diagrams shown in accordance with some embodiments of the present description.
Detailed Description
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings used in the description of the embodiments will be briefly described below. It is obvious that the drawings in the following description are only examples or embodiments of the present description, and that for a person skilled in the art, the present description can also be applied to other similar scenarios on the basis of these drawings without inventive effort. Unless otherwise apparent from the context, or stated otherwise, like reference numbers in the figures refer to the same structure or operation.
It should be understood that "system", "device", "unit" and/or "module" as used in this specification is a method for distinguishing different components, elements, parts or assemblies at different levels. However, other words may be substituted by other expressions if they accomplish the same purpose.
As used in this specification and the appended claims, the terms "a," "an," and "the" do not denote only the singular and may include the plural unless the context clearly dictates otherwise. In general, the terms "comprise" and "include" merely indicate that the explicitly identified steps and elements are included; those steps and elements do not form an exclusive list, and a method or apparatus may also include other steps or elements.
Flow charts are used in this description to illustrate operations performed by a system according to embodiments of the present description. It should be understood that the preceding or following operations are not necessarily performed in the exact order shown. Rather, the steps may be processed in reverse order or concurrently, and other operations may be added to or removed from these processes.
Fig. 1 is a schematic diagram of an application scenario of a user feedback system according to some embodiments of the present description. In some embodiments, the user feedback system 100 may include a server 110, a network 120, a user terminal 130, a database 140, and a vehicle 150. The server 110 may include a processing device 112.
In some embodiments, the server 110 may be used to process information and/or data related to user feedback. The server 110 may be a single server or a server group. The server group can be centralized or distributed (e.g., the server 110 can be a distributed system). In some embodiments, the server 110 may be local or remote. For example, the server 110 may access information and/or data stored in the user terminal 130, the database 140, the vehicle 150, and the detection unit 152 via the network 120. In some embodiments, the server 110 may be directly connected to the user terminal 130, the database 140, the vehicle 150, and the detection unit 152 to access the information and/or data stored therein. In some embodiments, the server 110 may run on a cloud platform.
In some embodiments, the server 110 may include a processing device 112. The processing device 112 may process data and/or information related to user feedback to perform one or more of the functions described in the specification. For example, the processing device 112 may obtain status information. For another example, the processing device 112 may send the first reminding information to the user terminal based on the state information to obtain the feedback information of the driving condition of the vehicle input at the user terminal by the user in response to the first reminding information.
In some embodiments, the processing device 112 may include one or more sub-processing devices (e.g., single-core or multi-core processing devices). By way of example only, the processing device 112 may include a central processing unit (CPU), an application-specific integrated circuit (ASIC), an application-specific instruction-set processor (ASIP), a graphics processing unit (GPU), a physics processing unit (PPU), a digital signal processor (DSP), a field-programmable gate array (FPGA), a programmable logic device (PLD), a controller, a microcontroller unit, a reduced instruction set computer (RISC), a microprocessor, or the like, or any combination thereof. In some embodiments, the processing device 112 may be integrated into the vehicle 150 and/or the user terminal 130.
The network 120 may facilitate the exchange of data and/or information. In some embodiments, one or more components of the system 100 (e.g., the server 110, the user terminal 130, the database 140, the vehicle 150, and the detection unit 152) may send data and/or information to other components over the network 120. In some embodiments, the network 120 may be any type of wired or wireless network. For example, the network 120 may include a cable network, a wired network, a fiber-optic network, a telecommunications network, an intranet, the Internet, a local area network (LAN), a wide area network (WAN), a wireless local area network (WLAN), a metropolitan area network (MAN), a public switched telephone network (PSTN), a Bluetooth network, a ZigBee network, a near-field communication (NFC) network, or the like, or any combination thereof. In some embodiments, the network 120 may include one or more network access points. For example, the network 120 may include wired or wireless network access points, such as base stations and/or Internet exchange points 120-1, 120-2, …, through which one or more components of the system 100 may connect to the network 120 to exchange data and/or information.
In some embodiments, the user terminal 130 may be the terminal through which a user inputs feedback information on the driving condition of the vehicle. In some embodiments, the user may be a service user. For example, service users may include passengers of an online ride-hailing platform, passengers of an autonomous vehicle, navigation service users, transport service users, and the like. In some embodiments, the user terminal 130 may include one or any combination of a mobile device 130-1, a tablet 130-2, a laptop 130-3, an in-vehicle device (not shown), a wearable device (not shown), and the like. In some embodiments, the mobile device 130-1 may include a smart home device, a smart mobile device, a virtual reality device, an augmented reality device, or the like, or any combination thereof. The smart mobile device may include a smartphone, a personal digital assistant (PDA), a gaming device, a navigation device, a point-of-sale (POS) device, or the like, or any combination thereof. The virtual reality or augmented reality device may include a virtual reality helmet, virtual reality glasses, a virtual reality eye shield, an augmented reality helmet, augmented reality glasses, an augmented reality eye shield, or the like, or any combination thereof. In some embodiments, the in-vehicle device may include an on-board computer, an on-board television, or the like. In some embodiments, the wearable device may include a smart bracelet, smart footwear, smart glasses, a smart helmet, a smart watch, smart clothing, a smart backpack, a smart accessory, or the like, or any combination thereof. In some embodiments, the user terminal 130 may be a device with positioning technology for locating the position of the user terminal 130.
The database 140 may store data and/or instructions. In some embodiments, the database 140 may store data obtained from the user terminal 130, the vehicle 150, the detection unit 152, the processing device 112, and the like. In some embodiments, the database 140 may store information and/or instructions for the server 110 to perform the example methods described herein. For example, the database 140 may store the current vehicle state (e.g., bump condition, speed, acceleration, etc.) collected by the detection unit 152. As another example, the database 140 may store the feedback information input by the user at the user terminal 130. As yet another example, the database 140 may store the target locations and the like determined by the processing device 112. In some embodiments, the database 140 may include mass storage, removable storage, volatile read-write memory (e.g., random-access memory, RAM), read-only memory (ROM), or the like, or any combination thereof. In some embodiments, the database 140 may be implemented on a cloud platform. For example, the cloud platform may include a private cloud, a public cloud, a hybrid cloud, a community cloud, a distributed cloud, an internal cloud, or the like, or any combination thereof.
In some embodiments, the database 140 may be connected to the network 120 to communicate with one or more components of the system 100 (e.g., the server 110, the user terminal 130, the vehicle 150, the detection unit 152, etc.). One or more components of the system 100 may access the data or instructions stored in the database 140 via the network 120. For example, the server 110 may obtain the user state from the database 140 and perform corresponding processing. In some embodiments, the database 140 may be directly connected to or communicate with one or more components of the system 100 (e.g., the server 110 and the user terminal 130). In some embodiments, the database 140 may be part of the server 110. In some embodiments, the database 140 may be integrated into the vehicle 150.
The vehicle 150 may be any type of vehicle, such as an autonomous vehicle or a driver-operated ride-hailing vehicle. In some embodiments, the vehicle 150 may be the vehicle in which the user is riding. As used herein, an autonomous vehicle may refer to a vehicle capable of achieving a certain level of driving automation. For example, the levels of driving automation may include: a first level, in which the vehicle is primarily supervised by a human and has specific autonomous functions (e.g., autonomous steering or acceleration); a second level, in which the vehicle has one or more advanced driver assistance systems (ADAS, e.g., an adaptive cruise control system or a lane keeping system) that can control braking, steering, and/or acceleration; a third level, in which the vehicle can drive itself when one or more specified conditions are met; a fourth level, in which the vehicle can operate without human input or attention but is still subject to certain limitations (e.g., being restricted to a certain area); a fifth level, in which the vehicle can operate autonomously in all circumstances; or the like, or any combination thereof.
In some embodiments, the vehicle 150 may have an equivalent structure that enables the vehicle 150 to move or fly. For example, the vehicle 150 may include the structures of a conventional vehicle, such as a chassis, a suspension, a steering device (e.g., a steering wheel), a braking device (e.g., a brake pedal), and an accelerator. As another example, the vehicle 150 may have a body and at least one wheel. The body may be of any type, such as a sports car, a coupe, a sedan, a light truck, a station wagon, a sport utility vehicle (SUV), a minivan, or the like. The at least one wheel may be configured for all-wheel drive (AWD), front-wheel drive (FWD), rear-wheel drive (RWD), or the like. In some embodiments, the vehicle 150 may be an electric vehicle, a fuel cell vehicle, a hybrid vehicle, a conventional internal combustion engine vehicle, or the like.
In some embodiments, the vehicle 150 can sense its environment and travel using one or more detection units 152. The one or more detection units 152 may include sensor devices (e.g., radar, such as a lidar device), a Global Positioning System (GPS) module, an inertial measurement unit (IMU), a camera, or the like, or any combination thereof. The radar (e.g., a lidar device) may be configured to scan the surroundings of the vehicle 150 and generate corresponding data. The GPS module may refer to a device capable of receiving geolocation and time information from GPS satellites and determining the device's geographic location. The IMU may refer to an electronic device that uses various inertial sensors to measure and report the vehicle's specific force and angular rate, and sometimes the magnetic field around the vehicle. In some embodiments, the inertial sensors may include acceleration sensors (e.g., piezoelectric sensors), velocity sensors (e.g., Hall sensors), distance sensors (e.g., radar or infrared sensors), steering angle sensors (e.g., tilt sensors), traction-related sensors (e.g., force sensors), and the like. The camera may be configured to acquire one or more images of targets (e.g., a person, an animal, a tree, a roadblock, a building, or a vehicle) within its range.
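As an illustration of the sensor data described above, the following Python sketch bundles one snapshot of detection-unit readings and flags a possible bump from an acceleration spike. The field names and the threshold are assumptions for illustration only, not values from this specification:

```python
from dataclasses import dataclass

@dataclass
class DetectionReading:
    """One snapshot from the detection units of vehicle 150 (illustrative)."""
    speed_mps: float           # from the velocity sensor
    accel_mps2: float          # from the acceleration sensor (IMU)
    latitude: float            # from the GPS module
    longitude: float           # from the GPS module
    steering_angle_deg: float  # from the steering angle sensor

def is_bump(reading: DetectionReading, accel_threshold: float = 3.0) -> bool:
    """Flag a possible bump when the acceleration magnitude spikes.
    The 3.0 m/s^2 threshold is an assumed placeholder."""
    return abs(reading.accel_mps2) > accel_threshold

reading = DetectionReading(speed_mps=12.0, accel_mps2=4.2,
                           latitude=39.9, longitude=116.4,
                           steering_angle_deg=2.5)
print(is_bump(reading))  # True for this sample spike
```

Such a flag could serve as one of the vehicle-state signals that later trigger a feedback reminder.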
It will be understood by those of ordinary skill in the art that the elements (or components) of the user feedback system 100, when implemented, may operate through electrical and/or electromagnetic signals. For example, when the user terminal 130 sends feedback information to the server 110, a processor of the user terminal 130 may generate an electrical signal encoding the feedback information and then send the electrical signal to an output port. If the user terminal 130 communicates with the server 110 via a wired network, the output port may be physically connected to a cable, which carries the electrical signal to an input port of the server 110. If the user terminal 130 communicates with the server 110 via a wireless network, the output port of the user terminal 130 may be one or more antennas that convert the electrical signal into an electromagnetic signal. Within an electronic device, such as the user terminal 130 and/or the server 110, when its processor processes instructions, issues instructions, and/or performs actions, this is done by means of electrical signals. For example, when the processor retrieves or saves data from a storage medium (e.g., the database 140), it may send electrical signals to the read/write device of the storage medium, which reads or writes structured data in the storage medium. The structured data may be transmitted to the processor in the form of electrical signals via a bus of the electronic device. Herein, an electrical signal may refer to one electrical signal, a series of electrical signals, and/or at least two discrete electrical signals.
In some embodiments, a processing device (e.g., processing device 112) may include a first acquisition module, a sending module, and a second acquisition module.
The first obtaining module may be configured to obtain status information including at least one of a vehicle status and a user status. For specific details of the status information, reference may be made to step 210 and the related description thereof, which are not described herein again.
The sending module may be configured to send the first reminding information to the user terminal based on the state information.
In some embodiments, the sending module may be further configured to determine a feedback reminding manner, where the feedback reminding manner is related to at least one of a user state, a user terminal state, a driving environment, and user association information; and generating the first reminding information based on the feedback reminding mode, and sending the first reminding information to the user terminal. For specific details of sending the first reminding information, reference may be made to fig. 4 and the related description thereof, which are not described herein again.
The second obtaining module may be configured to obtain feedback information of the driving condition of the vehicle, which is input at the user terminal by a user in response to the first reminding information.
It should be noted that the above description of the user feedback system and its modules is for convenience of description only and does not limit the present disclosure to the scope of the illustrated embodiments. It will be appreciated that, having understood the principle of the system, those skilled in the art may combine the modules arbitrarily or connect sub-systems to other modules without departing from this principle. In some embodiments, the first acquisition module, the sending module, and the second acquisition module may be different modules in a single system, or a single module may implement the functions of two or more of them. In addition, the modules may share one storage module, or each module may have its own storage module. Such variations are within the scope of the present disclosure.
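The three-module layout described above can be wired into a minimal pipeline. The following Python sketch is illustrative only; the class and method names, and the reminder text, are assumptions rather than the patent's implementation:

```python
# Minimal sketch of the three-module layout: acquire state, send a
# reminder based on it, then acquire the user's feedback.

class FirstAcquisitionModule:
    def acquire_state(self, source: dict) -> dict:
        # State info comprises at least one of vehicle state and user state.
        return {k: source[k] for k in ("vehicle_state", "user_state") if k in source}

class SendingModule:
    def send_reminder(self, state: dict) -> str:
        # Generate first reminding information based on the state information.
        if state.get("vehicle_state") == "bump":
            return "Please feed back whether the ride is smooth."
        return "Please feed back on the current driving condition."

class SecondAcquisitionModule:
    def acquire_feedback(self, user_reply: str) -> dict:
        # Collect what the user entered at the terminal in response.
        return {"feedback": user_reply}

state = FirstAcquisitionModule().acquire_state({"vehicle_state": "bump"})
prompt = SendingModule().send_reminder(state)
reply = SecondAcquisitionModule().acquire_feedback("A bit bumpy near the bridge")
```

In a real system each module would talk to the network, sensors, and user terminal rather than plain dictionaries; the sketch only shows how the three responsibilities hand data to one another.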
Fig. 2 is a flow diagram of a user feedback method shown in accordance with some embodiments of the present description. As shown in fig. 2, the process 200 may include the following steps 210, 220, and 230. In some embodiments, flow 200 may be performed by a processing device (e.g., processing device 112).
At step 210, status information is obtained, the status information including at least one of a vehicle status and a user status. In some embodiments, step 210 may be performed by the first obtaining module.
In some embodiments, the vehicle state is related to a driving condition of the vehicle. For specific details of the driving condition of the vehicle, reference may be made to step 230 and the related description thereof, which are not described herein again. The vehicle state may include information of a driving parameter, a driving state, a driving environment, a driving section, a driving location, a driving route, and the like.
In some embodiments, the vehicle state may include at least one of a current vehicle state, a vehicle state after a preset time, and a vehicle state at the target location.
In some embodiments, the current vehicle state may reflect some or all of the current driving conditions of the vehicle, for example, the navigation route currently in use, the road segment or position where the vehicle is currently located, the current driving state of the vehicle, and the current driving parameters of the vehicle. For specific details of the driving condition, reference may be made to step 230 and its related description, which are not repeated here.
In some embodiments, different data of the current vehicle state may be obtained in different ways. For example, the current driving parameters may be acquired by sensors (e.g., a speed sensor and an acceleration sensor) installed in the vehicle. As another example, the current driving state may be acquired by a monitoring system provided in the vehicle; the current weather and time environment may be obtained from a weather service platform; the road condition environment may be reported by users of other vehicles or obtained from a map or navigation service platform; and the current driving section and driving position may be acquired by positioning technology.
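The per-field data sources listed above can be sketched as a small dispatch table. The field and source names below are assumptions for illustration, not identifiers from this specification:

```python
# Each piece of the current vehicle state comes from a different source,
# mirroring the examples above (sensors, monitoring system, weather
# platform, other users / map platform, positioning technology).

STATE_SOURCES = {
    "driving_parameters": "on-board sensors (speed, acceleration)",
    "driving_state": "in-vehicle monitoring system",
    "weather_and_time": "weather service platform",
    "road_conditions": "other users' reports / map or navigation platform",
    "current_segment_and_position": "positioning technology (e.g. GPS)",
}

def collect_current_vehicle_state(readers: dict) -> dict:
    """Call one reader per state field; fields without a reader yield None."""
    return {field: readers.get(field, lambda: None)() for field in STATE_SOURCES}

state = collect_current_vehicle_state({
    "driving_parameters": lambda: {"speed_mps": 10.5},
    "current_segment_and_position": lambda: (39.9, 116.4),
})
```

Keeping the sources behind small reader callables lets the processing device swap in whichever data channels are actually available on a given vehicle.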
The vehicle state after the preset time may reflect part or all of the driving condition of the vehicle after the preset time. The vehicle state at the target position may reflect part or all of the traveling condition of the vehicle at the target position. For specific details of the vehicle state after the preset time and the vehicle state at the target position, reference may be made to step 220 and the related description thereof, which are not described herein again.
In some embodiments, the user state is related to one or more of the following: the user's vehicle usage, the user's idle status, and the user's mood. In some embodiments, the user state may include a current user state, a user state after a preset time, and a user state at the target location.
The user's vehicle usage may reflect the state relationship between the user and the vehicle, for example, that the user has boarded the vehicle, that the user is waiting for the vehicle to pick them up, or that the user is about to finish using the vehicle (or will finish using the vehicle after a certain time).
The user's idle status may reflect at least whether the user has time to provide feedback. For example, a user's idle status may include very busy, relatively busy, idle, and the like. The busyness levels may be set as needed. For example, a user who is on the phone may be considered very busy, while a user who is using the mobile phone for entertainment (e.g., chatting or playing games) may be considered relatively busy.
The user's mood may reflect how the user feels about the driving condition of the vehicle. For example, the user's mood may include angry, anxious, panicked, happy, bored, depressed, and the like.
In some embodiments, the processing device may obtain the current state information, for example the current user state, based on image information of the user and/or audio information of the vehicle. For specific details of obtaining the state information, reference may be made to fig. 3 and its related description, which are not repeated here.
In some embodiments, the processing device may determine the user's mood based on data collected by a wearable device (e.g., heartbeat, heart rate, pulse, and blood pressure). For example, if the heart rate exceeds a preset threshold for a preset time, the user's mood may be determined to be nervous. In some embodiments, the processing device may determine the current user state based on the state of the user terminal, for example, by inferring the user's mood from the web pages or information the user is currently viewing.
The approaches of the different embodiments may be combined. For example, in some embodiments, the processing device may obtain the user's emotional state based on data monitored by a wearable device and obtain the vehicle state from a storage device; in some embodiments, the processing device may obtain the user's vehicle usage or idle status based on image data and/or audio data and obtain the driving parameters from the sensors.
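The wearable-device heart-rate rule mentioned above can be sketched as follows. The threshold, sample count, and the "nervous"/"calm" labels are assumed placeholders, not values from this specification:

```python
# Sketch of the rule: if the heart rate exceeds a preset threshold for a
# preset time (here, a run of consecutive samples), classify the user's
# mood as nervous.

def classify_mood(heart_rates: list[float],
                  threshold_bpm: float = 100.0,
                  min_consecutive_samples: int = 3) -> str:
    run = 0
    for bpm in heart_rates:
        run = run + 1 if bpm > threshold_bpm else 0
        if run >= min_consecutive_samples:
            return "nervous"
    return "calm"

print(classify_mood([85, 104, 110, 112]))  # nervous
print(classify_mood([85, 104, 90, 88]))    # calm
```

Counting consecutive over-threshold samples, rather than any single spike, approximates the "for a preset time" condition in the text.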
At step 220, first reminding information is sent to the user terminal based on the state information. In some embodiments, step 220 may be performed by the sending module.
The first reminding information is information used to guide the user to feed back the driving condition at the user terminal. The first reminding information has both content and a presentation mode.
In some embodiments, the content of the first reminding information may be preset. In some embodiments, the content may be determined based on the driving condition to which the feedback is directed. For example, if a sensor of the vehicle detects a bump, the content of the first reminding information may include "please feed back whether the ride is smooth". As another example, if the target position Q is a turning section, the content may include "please feed back whether the turn at position Q was smooth". In some embodiments, the content may be determined based on the user state. For example, if the user's body is detected to be shaking, the content may include "please feed back whether the vehicle braked suddenly". In some embodiments, the content may be determined by combining the user state and the driving condition to which the feedback is directed. For example, if no vehicle abnormality is found but the user appears angry, the content may include "please feed back whether the current driving condition is satisfactory". Corresponding content can be preset for different user states and/or driving conditions, so that the processing device can directly retrieve the corresponding content once the user state and/or the driving condition to which the feedback is directed is obtained.
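The content-selection examples above amount to a lookup from (driving condition, user state) to preset text. The keys and messages in this sketch are illustrative assumptions, not the patent's actual mapping:

```python
# Choose the content of the first reminding information from the detected
# driving condition and/or user state, with a generic fallback.

REMINDER_CONTENT = {
    ("bump", None): "Please feed back whether the ride is smooth.",
    ("turning_section", None): "Please feed back whether the turn was smooth.",
    (None, "body_shaking"): "Please feed back whether the vehicle braked suddenly.",
    (None, "angry"): "Please feed back whether the current driving is satisfactory.",
}

def reminder_content(driving_condition=None, user_state=None) -> str:
    return REMINDER_CONTENT.get(
        (driving_condition, user_state),
        "Please feed back on the current driving condition.",
    )
```

Because the content is preset per key, the processing device only needs the observed state to retrieve the matching text, as the paragraph above describes.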
The first reminder information may be presented in a variety of ways, such as voice, text, image, video, vibration, etc. For example, the user terminal may display a "like" or thumbs-up icon as the first reminder information.
In some embodiments, the processing device may determine the representation of the first reminder information according to at least one of a user status, a user terminal status, a driving environment, and user association information, with particular reference to fig. 4 and its related description.
In some embodiments, the processing device may send the first alert information to the user terminal based on a preset rule. The preset rules can be specifically set according to actual conditions.
In some embodiments, the preset rule may include sending the first reminder information at preset time intervals. For example, the first reminding information is sent every 5 min. The preset time can be specifically set according to actual conditions, so that poor riding experience of a user caused by frequent sending of reminding information or incomplete feedback information collection caused by untimely reminding can be avoided.
In some embodiments, the preset rule may include sending the first reminder information every preset travel distance. For example, the first reminder information is sent every 50 m of travel distance. The preset rule may further include sending the first reminder information at a preset location, for example, at traffic-light positions. In some embodiments, the preset rules may be adjusted and optimized according to the driving conditions. For example, in rainy or snowy weather, the preset time or the preset travel distance may be shortened.
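The preset rules described above (preset time interval, preset travel distance, preset location) can be sketched as a simple trigger check. The following Python sketch is illustrative only; the rule values and parameter names are assumptions, not part of the disclosure.

```python
# Hypothetical preset-rule values; the disclosure leaves them configurable.
PRESET_INTERVAL_S = 300   # send every 5 min
PRESET_DISTANCE_M = 50    # or every 50 m of travel

def should_send_reminder(now_s, last_sent_s, odometer_m, last_odometer_m, at_preset_location):
    """Return True if any preset rule triggers sending the first reminder information."""
    if now_s - last_sent_s >= PRESET_INTERVAL_S:           # preset time interval rule
        return True
    if odometer_m - last_odometer_m >= PRESET_DISTANCE_M:  # preset travel distance rule
        return True
    if at_preset_location:                                 # preset location rule (e.g. at a traffic light)
        return True
    return False
```

In a real system the interval and distance would be tuned (and, as the text notes, shortened in rain or snow) to balance reminder coverage against riding experience.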
The ways of the different embodiments can be combined. For example, in some embodiments, the processing device may send first reminder information whose content relates to the vehicle speed to a wearable device worn by the user, which then alerts the user by vibrating. In some embodiments, the processing device may send first reminder information "please feed back the current driving status" to the user terminal at preset time intervals, using a different presentation mode each time, such as voice, image, or text.
In some embodiments, the processing device may send the first reminder information to the user terminal based on a user profile. The user profile may reflect the user's interests and needs, and may be determined based on the user association information. For more details on the user association information, see the related description below. For example, when the user profile reflects that the user is interested in the speed of the vehicle, the processing device may send first reminder information related to the speed of the vehicle. By means of the user profile, reminder information of greater interest to the user can be sent, increasing the user's willingness to respond and thereby yielding more feedback information.
In some embodiments, the processing device may determine whether to send the first reminder information to the user terminal and/or determine the content of the first reminder information to be sent based on the vehicle state and/or the user state. In some embodiments, the processing device may determine whether to send the first reminder information and/or determine its content based on current state information (including the current vehicle state and/or the current user state).
In some embodiments, the processing device may determine whether to send the first reminder information and its content based on the current vehicle state, or based on one item of information in the current vehicle state. For example, the processing device may determine whether to send the first reminder information based on whether the current vehicle state is an abnormal driving state in which a driving event occurs; specifically, if the current vehicle state is an abnormal driving state, the first reminder information is sent. The processing device may determine the content of the first reminder information according to the type of driving event of the abnormal driving state. For example, when the driving event "bump" occurs in the current vehicle state, the processing device may send first reminder information "please feed back whether the vehicle bumped". The processing device may determine whether to send the first reminder information and its content based on the road section where the vehicle is currently traveling. For example, if the vehicle is on a turning road section, first reminder information "please feed back whether the vehicle turned smoothly" is sent. The processing device may determine whether to send the first reminder information and its content based on the current driving environment of the vehicle. For example, when the vehicle's current road conditions are congested, the processing device may send first reminder information "please feed back whether the vehicle frequently brakes suddenly".
In some embodiments, the processing device may determine whether to send the first reminder information, and the content of the first reminder information, based on a variety of information about the current vehicle state. For example, the processing device may determine whether to send the first warning information based on a preset rule in combination with various information of the current vehicle state. The preset rule may be that various information of the current vehicle state is respectively scored, scores corresponding to the various information are fused (for example, averaging, weighted summing, weighted averaging and the like), and whether the first reminding information is sent is determined based on the fused scores (for example, the first reminding information is sent if the fused score is larger than a certain threshold value). The content of the first reminder information may be determined based on information of the plurality of information having a score greater than a threshold value.
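The score-fusion rule described above can be sketched as follows. The signal names, weights, and thresholds are illustrative assumptions; the disclosure only specifies that per-signal scores are fused (e.g., averaged or weighted) and compared against a threshold.

```python
def fuse_scores(scores, weights=None):
    """Weighted-average fusion of per-signal abnormality scores."""
    if weights is None:
        weights = [1.0] * len(scores)   # plain averaging by default
    return sum(s * w for s, w in zip(scores, weights)) / sum(weights)

def decide_reminder(state_scores, send_threshold=0.6, topic_threshold=0.7):
    """state_scores: mapping of vehicle-state signal name -> score in [0, 1].
    Returns (send?, content topics whose individual score exceeds the topic threshold)."""
    fused = fuse_scores(list(state_scores.values()))
    send = fused > send_threshold
    topics = [name for name, s in state_scores.items() if s > topic_threshold]
    return send, topics
```

For instance, with scores {"bump": 0.9, "overspeed": 0.2, "hard_braking": 0.8}, the fused score is about 0.63, so a reminder is sent, and the content topics are "bump" and "hard_braking".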
In some embodiments, the processing device may process various information of the current vehicle state based on the trained first reminder model to determine whether to send the first reminder information and the content of the first reminder information. The processing device inputs the various information into the first reminder model, which outputs whether to send the first reminder information and the content topic of the first reminder information. Before inputting the various information into the model, the processing device may pre-process it, including screening, vector representation, vector normalization, etc. A content topic refers to a driving condition for which user feedback is required. The content topics may include: bump, overspeed, hard braking, etc. The content of the corresponding first reminder information can be preset based on the content topic.
The trained first reminder model can be obtained by training on a plurality of first training samples, where each first training sample comprises a sample vehicle state, and the label comprises whether the first reminder information needs to be sent and the content topic of the first reminder information. The first reminder model can include, but is not limited to, a linear regression model, a neural network model, a decision tree, a support vector machine, naive Bayes, and the like.
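As a minimal stand-in for the model families listed above, the mapping from sample vehicle states to "send / content topic" labels can be sketched with a 1-nearest-neighbour classifier in pure Python. The feature layout (vibration, speed-over-limit, brake intensity) and the labels are toy assumptions, not the disclosure's actual training setup.

```python
def train_first_reminder_model(samples, labels):
    """'Training' for the 1-NN stand-in is just storing the labelled sample vehicle states."""
    return list(zip(samples, labels))

def predict_reminder(model, vehicle_state):
    """Return the label of the stored sample vehicle state closest to the input state."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda pair: sq_dist(pair[0], vehicle_state))[1]

# Toy feature vectors: [vibration, speed_over_limit, brake_intensity]
X = [[0.9, 0.0, 0.1], [0.1, 0.8, 0.0], [0.0, 0.1, 0.9], [0.0, 0.0, 0.1]]
y = ["send:bump", "send:overspeed", "send:hard_braking", "no_send"]
model = train_first_reminder_model(X, y)
```

A production system would instead fit one of the listed model families on a large labelled corpus; the point here is only the input/output contract (vehicle-state features in, send decision plus content topic out).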
In some embodiments, the processing device may determine whether to send the first reminder information, and/or determine the content of the first reminder information, based on the current user state. In some embodiments, the processing device may determine whether to send the first reminder information and its content based on one item of information in the current user state. For example, when the user's idle condition reflects that the user is idle, the processing device may send first reminder information "please evaluate the automatic driving technique of the vehicle". For another example, if the user's emotion is panic, the processing device may send first reminder information such as "is the vehicle driving smoothly" or "did the vehicle brake suddenly". For another example, if the user's vehicle usage reflects that the user has just boarded the vehicle, the processing device may send first reminder information such as "did the vehicle start smoothly".
In some embodiments, the processing device may determine whether to send the first reminder information, and its content, based on a variety of information of the current user state. For example, the processing device may determine whether to send the first reminder information and its content based on a preset rule combining various information of the current user state. For example, if the user's vehicle usage reflects that the user has just boarded the vehicle but the user is busy, the processing device may refrain from sending the first reminder information; if the user's emotion is panic even though the user is busy, the processing device may still send the first reminder information.
In some embodiments, the processing device may determine whether to send the first reminder information and its content based on one or more items of information in the current user state and one or more items of information in the current vehicle state. In some embodiments, the processing device may determine whether to send the first reminder information, and its content, based on the current user state and the current vehicle state using preset rules, which can be set as required. For example, if the vehicle's current road section is a turning road section and the user's emotion is panic, the processing device may send first reminder information "is the current turn smooth". For another example, if the current driving state of the vehicle is overspeed and the user appears uncomfortable, the processing device may send first reminder information "is the current vehicle speed too fast". For another example, if the user's emotion indicates discomfort and the current in-vehicle temperature is high, the processing device may send first reminder information "is the current in-vehicle temperature too high".
In some embodiments, the processing device may determine whether to send the first reminder information and its content based on the current user state and the current vehicle state using the second reminder model. The second reminder model is similar to the first reminder model.
In some embodiments, compared with the first reminder model, the content topic output by the second reminder model may also be related to the user's emotion. Correspondingly, the content of the first reminder information may include content matched to the user's emotion. For example, if the user's emotion is angry, the content of the first reminder information may include content for soothing the user.
Determining whether to send the first reminder information by comprehensively considering the current vehicle state and the current user state allows the current running condition of the vehicle to be evaluated more accurately. The user can thus be reminded promptly when feedback is needed, providing a basis for subsequent driving optimization, while unnecessary interference with the user is avoided.
In some embodiments, the processing device may determine whether to send the first reminder information based on the user's willingness to give feedback. For example, when the user's feedback willingness reflects that the user does not want to give feedback, the processing device does not send the first reminder information to the user terminal. The user's feedback willingness may be determined according to the user association information, i.e., information related to the user. The user association information includes user basic information and user historical behavior information. The user basic information includes the user's credit rating, gender, age, occupation, hobbies, how long the user has been registered on the platform, the user's feedback willingness and its degree, and the like. The user historical behavior information includes the user's historical order information, historical evaluation information, historical feedback information, and the like. In some embodiments, the processing device may obtain the user association information from a storage device or a platform (e.g., an online ride-hailing platform).
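Gating the reminder on feedback willingness can be sketched as a simple threshold check over the user association information; the field name and threshold below are assumptions, since the disclosure does not fix a representation for willingness.

```python
def willing_to_give_feedback(user_association_info, min_willingness=0.5):
    """Return True if the willingness score recorded in the user association
    information meets the (hypothetical) threshold; missing data means no send."""
    return user_association_info.get("feedback_willingness", 0.0) >= min_willingness
```

The processing device would evaluate this check before any of the send rules above, skipping the reminder entirely for unwilling users.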
The ways of the different embodiments can be combined. For example, in some embodiments, the processing device may determine whether to send the first reminder information and its content based on the vehicle state and the user profile. For example, if the user profile reflects that the user is sensitive to speed, the processing device may send the first reminder information when the vehicle's driving speed is detected as speeding.
The ways of the different embodiments can be combined. For example, in some embodiments, the processing device may send the first reminder information to the in-vehicle terminal in a text manner based on the vehicle state; in some embodiments, the processing device may send the first reminder information to the wearable device of the user in a vibrating manner based on the user state; in some embodiments, the processing device may send the first reminding information to a mobile phone of the user in an image manner based on the vehicle state and the user state, and the user performs feedback in an icon clicking manner after receiving the first reminding information.
In some embodiments, the processing device may determine whether to send the first reminder information, and the content and/or manner of the first reminder information, based on subsequent status information. The subsequent state information includes a vehicle state after a preset time, a user state after the preset time, a vehicle state at the target position and/or a user state at the target position.
The vehicle state after the preset time may be the vehicle state after a preset time counted from the current moment. There may be one or more preset times. The preset time may be preset, e.g., 10 min, 20 min, etc. The preset time may also be determined according to the driving condition, the user state, and/or the user association information. For example, if the current road is congested, the preset time may be extended. For another example, if it is determined from the user association information that the user is not inclined to give feedback, the preset time may be extended.
In some embodiments, the vehicle state after the preset time may be predicted, e.g., from the current vehicle state and the preset time. For example, whether a bumpy driving event will occur after the preset time may be determined based on the vehicle's likely driving position after the preset time; that predicted driving position may in turn be determined based on the current driving speed and the driving route.
In some embodiments, the prediction may be implemented by a first vehicle state prediction model. The first vehicle state prediction model can process the current vehicle state and the preset time to obtain the vehicle state after the preset time.
The second training samples used in training the first vehicle state prediction model comprise a sample current vehicle state and a sample preset time, and the label represents the vehicle state after the preset time. The first vehicle state prediction model may include, but is not limited to, a linear regression model, a neural network model, a decision tree, a support vector machine, naive Bayes, and the like.
In some embodiments, the input of the first vehicle state prediction model may include, in addition to the preset time and the current vehicle state, a navigation route (i.e., a first navigation route) used by the vehicle to currently travel, and the like, and accordingly, the second training sample may further include a sample first navigation route.
In some embodiments, the processing device may further optimize or update the vehicle state after the preset time using historical vehicle travel conditions at the position the vehicle will reach after the preset time. For example, if the processing device predicts that the vehicle reaches position X after the preset time, and the predicted vehicle state after the preset time includes an abnormal driving state "bump", but the historical vehicle travel conditions at position X that exceed the threshold do not include "bump", the vehicle state after the preset time may be updated to "no bump". Optimizing or updating the predicted state against the historical travel conditions of many vehicles at the arrival position makes the predicted vehicle state better match the actual situation, improving prediction accuracy.
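The historical-condition refinement described above can be sketched as filtering predicted driving events against historical event frequencies at the arrival position. The event names and the frequency threshold are illustrative assumptions.

```python
def refine_predicted_events(predicted_events, historical_event_freq, min_freq=0.3):
    """Keep only predicted driving events (e.g. "bump") that historical travel
    data at the arrival position supports above a frequency threshold."""
    return [e for e in predicted_events
            if historical_event_freq.get(e, 0.0) >= min_freq]
```

For example, a predicted "bump" is dropped when historical data shows bumps at position X only 5% of the time, matching the "updated to no bump" example in the text.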
The user state after the preset time is similar to the vehicle state after the preset time, and is not described again. In some embodiments, predicting the user state after the preset time may be implemented by a first user state prediction model. The first user state prediction model can process the current user state and the preset time to obtain the user state after the preset time. The first user state prediction model can also process the user state, the preset time and the user association information to obtain the user state after the preset time. For the user association information, see the above related description. The first user state prediction model is similar to the first vehicle state prediction model and will not be described in detail herein.
The target position in the vehicle state at the target position or the user state at the target position may be a preset position, e.g., set or determined in advance by a user terminal, a driver terminal, a processing device, or the like. The target position may be a point, an area, or a road section.
There may be one or more target positions. In some embodiments, the target position may be set based on preset rules. The preset rule may be to determine a target position every preset distance, or to take a position where a driving event may occur as a target position. For example, a turning position, or a road section or position with heavy pedestrian traffic (e.g., near a school), may be taken as a target position. The processing device, the user terminal, or the driver terminal may acquire the navigation route currently traveled by the vehicle (i.e., the first navigation route) and determine, from the navigation route, a position where a driving event may occur as a target position.
In some embodiments, the target position may be predicted based on feedback information, historical information, and the like, and for specific details of the predicted target position, reference may be made to step 520 and related description thereof, which are not described herein again.
In some embodiments, the vehicle state at the target position may be predicted, e.g., from the current vehicle state and the target position. In some embodiments, the prediction is achieved by a second vehicle state prediction model, which may process the current vehicle state and the target position to obtain the vehicle state at the target position. The third training samples used for training the second vehicle state prediction model comprise a sample current vehicle state and a sample target position, and the label represents the vehicle state at the target position. In some embodiments, the second vehicle state prediction model may include, but is not limited to, a linear regression model, a neural network model, a decision tree, a support vector machine, naive Bayes, and the like.
In some embodiments, the processing device may also determine the travel condition at the target location based on historical travel conditions at the target location. For example, the historical traveling condition at the target position, the current vehicle state, and the target position are input to the second vehicle state prediction model, and the traveling condition at the target position is determined.
In some embodiments, the processing device may further calculate a preset time for the vehicle to travel to the target position based on the target position and the current travel parameter, so as to obtain the vehicle state at the target position by using the first vehicle state prediction model based on the preset time and the current vehicle state.
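Computing the preset time for the vehicle to reach the target position from the current travel parameters can be sketched as a distance-over-speed estimate. This is an illustrative simplification that assumes roughly constant speed; the disclosure does not fix a particular formula.

```python
def preset_time_to_target_s(distance_to_target_m, current_speed_mps):
    """Estimate the preset time (seconds) to reach the target position as
    distance / current speed; a stopped vehicle yields an unknown (infinite) ETA."""
    if current_speed_mps <= 0:
        return float("inf")
    return distance_to_target_m / current_speed_mps
```

The resulting time, together with the current vehicle state, can then be fed into the first vehicle state prediction model as described above.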
In some embodiments, the prediction of the user state at the target location may be achieved by a second user state prediction model. The second user state prediction model can process the current user state and the target position to obtain the user state at the target position. The second user state prediction model is similar to the second vehicle state prediction model, and is not described herein again.
In some embodiments, the processing device may determine whether to send the first reminder information and its content based on the vehicle state after the preset time, similar to determining whether to send the first reminder information based on the current vehicle state. For example, the processing device may determine whether a driving event will occur based on the vehicle state after the preset time, and if so, may send the first reminder information.
In some embodiments, the processing device may determine whether to send the first reminder information and its content based on the user state after a preset time, similar to determining whether to send the first reminder information based on the current user state.
In some embodiments, the processing device may determine whether to send the first reminder information and its content based on the vehicle state after the preset time and the user state after the preset time, similar to determining whether to send the first reminder information based on the current user state and the current vehicle state.
In some embodiments, the processing device may determine whether to send the first reminder information and its content based on the vehicle status at the target location, similar to determining whether to send the first reminder information based on the current vehicle status.
In some embodiments, the processing device may determine whether to send the first reminder information and its content based on the user status at the target location, similar to determining whether to send the first reminder information based on the current user status.
In some embodiments, the processing device may determine whether to send the first reminder information and its content based on the vehicle status at the target location and the user status at the target location, similar to determining whether to send the first reminder information based on the current user status and the current vehicle status.
The first reminder information determined based on the subsequent state information is an advance reminder sent to the user, i.e., the user is reminded in advance to give feedback on the subsequent driving condition. In this case, the time of sending the first reminder information to the user terminal can be adjusted according to the actual situation. For example, it may be sent when the vehicle is a preset distance from the target position, or a certain time before the preset time is reached. For another example, the sending time may be determined based on the user state, the user terminal state, the driving environment, and/or the user association information; e.g., it may be sent at a moment before the preset time elapses or the target position is reached when the user state or the user terminal state is idle.
In some embodiments, the processing device may determine the manner of the first reminder information based on a subsequent user state, a subsequent user terminal state, a subsequent driving environment, and user association information. Specifically, refer to fig. 4 and the related description thereof, which are not repeated herein.
In some embodiments, the subsequent user terminal state may include the user terminal state after the preset time and the user terminal state at the target position. Regarding the user terminal state, reference may be made to fig. 4 and its associated description.
In some embodiments, the user terminal state after the preset time or the user terminal state at the target position may be predicted. For example, chat content may be analyzed to predict whether a chat will have ended after the preset time. In some embodiments, the processing device may predict the subsequent user terminal state based on the current user terminal state, the user association information, and/or the current user state.
The ways of the different embodiments can be combined. For example, in some embodiments, subsequent user terminal states may be predicted based on user emotions and current user terminal states. For example, if the user emotion is "happy" and the current user terminal state is game play, the predicted subsequent user terminal state is game play. In some embodiments, subsequent user terminal states may be predicted based on user idle conditions, user mood, and current user terminal state.
In some embodiments, the subsequent driving environment may include the driving environment after the preset time and the driving environment at the target position. In some embodiments, these may be obtained from a service platform (e.g., a map service or a weather service).
The ways of the different embodiments can be combined. For example, in some embodiments, the processing device may send the first reminder information to the user terminal in real time based on the status information; in some embodiments, the processing device may send the first reminding information to the user terminal in real time in an image manner based on the current state information, and send the first reminding information to the user terminal in a voice manner at a time before reaching the preset time or the target position based on the subsequent state information.
In some embodiments, the first reminder information determined based on the subsequent status information may be displayed directly on a navigation interface of the user terminal. In this way, the user can know the content needing feedback and the position of the feedback in advance by viewing the navigation route.
Figs. 6A-6D illustrate exemplary user interfaces. Fig. 6B is a traveling interface of the vehicle. As shown in fig. 6B, the vehicle is traveling from the current position A to position B1; the processing device may display the estimated time of travel from the current position to the next position on the interface of the user terminal, to prompt the user to give feedback when the next position is reached. For example, the user terminal may display "Heading to B1, estimated time to feedback: 20 min".
In some embodiments, the processing device may display, on the current position feedback interface and on the traveling interface of the vehicle, the road conditions at the driving positions corresponding to the subsequent vehicle states (i.e., the vehicle state after the preset time and the vehicle state at the target position). Taking fig. 6A or 6B as an example, the road-section icons from position A to position B1 displayed on the user terminal interface indicate that the section is a delayed-driving section, e.g., a muddy road or a section under construction that requires a longer travel time than usual; the road-section icon from position B2 to position B3 indicates that the section passes a school; the road-section icons from position B3 to position B4 indicate that the section has many traffic lights.
In some embodiments, the processing device may also display the road condition at the current position of the vehicle on the traveling interface of the vehicle. For example, as shown in fig. 6B, the floating window on the right of fig. 6B may show that the vehicle is currently on a muddy or under-construction road section.
In some embodiments, the processing device may dynamically adjust the current position feedback interface and the traveling interface of the vehicle based on a preset instruction of the user. The preset instruction may be an instruction generated after the user feeds back the driving condition at the current position. Still taking fig. 6A as an example, if the user clicks an emoticon (i.e., feeds back the driving condition at the current position), the processing device may switch the current position feedback interface to the traveling interface of the vehicle.
In some embodiments, the user may zoom the current position feedback interface and the travel interface of the vehicle on the user terminal interface. For example, taking fig. 6A or 6B as an example, the user may click on the "-" icon and "+" icon displayed on the current location feedback interface and the travel interface of the vehicle to zoom the interface.
In the user interface shown in fig. 6A, the current vehicle position is position A, and the vehicle positions after the preset times are positions B1, B2, and B3, which are the vehicle positions after 10 min, 20 min, and 30 min, respectively. The current user interface displays first reminder information for feedback on the driving condition after 10 min: "expected to reach position B1 in 10 min, please prepare to feed back the driving condition after 10 min". The user can click one of the three emoticons on the interface to give feedback; from left to right they represent very satisfied, generally satisfied, and unsatisfied. After 10 min, the user interface is updated to fig. 6B: the current vehicle position is updated to position B1, B2 and B3 become the vehicle positions after 10 min and 20 min, respectively, and the first reminder information on the current interface is updated to "expected to reach position B2 in 10 min, please prepare to feed back the driving condition after 10 min".
As shown in fig. 6C, in the user interface the current vehicle position is position A and the target positions include positions C1, C2, C3, C4, C5, and C6, which are, respectively, a turning position, a school road section, a turning position, a traffic-light position, and a turning position. The current user interface displays first reminder information for feedback on the driving condition at position C1: "please prepare to feed back whether the turn at C1 is smooth"; the user can click one of the three emoticons on the interface to give feedback. When the vehicle has traveled to C1, the user gives feedback on the driving condition at that position (specifically, whether the turn was smooth). The user interface then proceeds to fig. 6D, where the current vehicle position is updated to C1, the target positions are updated to positions C2, C3, C4, C5, and C6, and the first reminder information on the current interface is updated to "please prepare to feed back whether the turn at C2 is smooth".
Therefore, both the vehicle state after the preset time and the vehicle state at the target position reflect the future driving condition of the vehicle. By sending the first reminding information based on the future driving condition, the user can be reminded in advance to give feedback before the preset time elapses or before the target position is reached. This avoids the situation in which, because the first reminding information is sent with a delay, the vehicle has already driven past the position corresponding to the preset time, or past the target position, by the time the user sends feedback information.
And step 230, acquiring feedback information of the driving condition of the vehicle, which is input by the user at the user terminal in response to the first reminding information. In some embodiments, step 230 may be performed by the second acquisition module.
The user refers to a passenger riding in the vehicle. The vehicle may be a vehicle used in smart travel, i.e., a vehicle in which the user rides. For example, the vehicle may be a shared vehicle in an online ride-hailing service. As another example, the vehicle may be an autonomous vehicle or the like. For specific details of the autonomous vehicle, reference may be made to the related description above, which is not repeated herein.
The user terminal is a terminal on which a user performs operations or receives information. In some embodiments, the user terminal may be a terminal that provides an online service for the user. For example, the user terminal may be a terminal that provides an online ride-hailing service for the user. As another example, the user terminal may be a terminal that provides an autonomous driving service to the user. The user terminal may include, but is not limited to, one or a combination of a notebook computer, a tablet, a mobile phone, a personal digital assistant (PDA), an in-vehicle camera, a dedicated touch device, a wearable device, and the like. In some embodiments, the user terminal may also be a device terminal inside the vehicle for collecting or providing information, such as a vehicle-mounted terminal, an electronic device, or a remote control device.
The running condition of the vehicle may include information related to running of the vehicle. The running condition may include: one or more combinations of a travel route, a travel location, a travel section, a travel environment, a travel state, a travel parameter, and the like.
The travel route may be a route on which the vehicle travels from a departure point to a destination. In some embodiments, the travel route may be at least one navigation route automatically generated by the processing device based on the departure point and the destination. The departure point and destination can be obtained in a variety of ways; for example, from input entered by the user at the user terminal.
The travel position may be a position through which the vehicle travels. The location may include a point, area, or segment. In some embodiments, the travel location may be a location that the vehicle has passed, a current location, or an upcoming location. The travel location may be a point of a trajectory in the travel route. For example, the driving location may include a departure place, a destination, a current location, and the like.
The road segment may be a route formed by at least two position points on a road network. The at least two location points may be adjacent or non-adjacent. For example, a road segment may be a section or the entirety of a road.
In some embodiments, the travel segment may be a segment over which the vehicle travels. In some embodiments, the travel segment may be a segment that the vehicle has traveled, a segment that the current location is on, or a segment that is about to travel. In some embodiments, the travel segment may also be a preset segment included in the travel route. The preset section may be specifically set according to an actual demand, and for example, the preset section may include at least one of a turning section, a roundabout section, a no-parking section, a one-way driving section, and an accident-prone section.
The running environment may be environmental information related to the vehicle or the running of the vehicle. The running environment may include an environment inside the vehicle when the vehicle runs and an environment outside the vehicle when the vehicle runs. In some embodiments, the driving environment may include air quality in a vehicle, humidity level in a vehicle, temperature in a vehicle, number of occupants in a vehicle, cleanliness in a vehicle, lighting in a vehicle, and appearance of occupants in a vehicle. In some embodiments, the driving environment may include at least one of a weather environment, a road condition environment, and a time environment.
In some embodiments, the weather environment may reflect air temperature, air pressure, humidity, visibility, and the like. For example, weather conditions may include rain, snow, sunny, and the like.
In some embodiments, the road condition environment may reflect how smoothly the vehicle can travel. For example, the road condition environment may reflect road roughness, the number of lanes, road congestion, road closures, road construction, traffic control, and the like. For example, the road conditions may include traffic flow, the number of traffic lights, construction ahead, etc.
In some embodiments, the time environment may reflect time information of vehicle travel. For example, the time context may include whether it is a holiday, whether it is a workday, whether it is a commute peak, and so forth.
The running state may be state information of the vehicle during travel. In some embodiments, the running state may include a normal running state, an abnormal running state, and the like. The abnormal running state may be travel during which a driving event occurs, and the normal running state may be travel during which no driving event occurs. A driving event may be an event that occurs while the vehicle is driving, is related to vehicle operation, and affects the user's riding experience. In some embodiments, driving events may include starting, stopping, acceleration, deceleration, sudden braking, overspeed, sharp turning, turning, and bumping events of the vehicle.
The running parameter refers to a parameter reflecting a running condition of the vehicle. In some embodiments, the driving parameters include, but are not limited to: velocity parameters and acceleration parameters, etc.
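As an illustrative sketch of how a driving parameter such as speed can be turned into a driving event such as sudden braking, consider the following; the sampling interval and the deceleration threshold are assumptions for illustration, not values from this specification:

```python
def detect_sudden_brake(speeds_mps, dt_s=1.0, decel_threshold=3.0):
    """Flag sample indices where deceleration exceeds a threshold.

    speeds_mps: list of speed samples in m/s, taken every dt_s seconds.
    decel_threshold: deceleration (m/s^2) beyond which the change is
        treated as a sudden-braking event (illustrative value).
    """
    events = []
    for i in range(1, len(speeds_mps)):
        accel = (speeds_mps[i] - speeds_mps[i - 1]) / dt_s
        if accel <= -decel_threshold:  # large negative acceleration
            events.append(i)
    return events
```

A drop from 10 m/s to 5 m/s within one second would be flagged under these assumed values, while gradual deceleration would not.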
The feedback information may reflect the attitude of the user to the driving condition of the vehicle. In some embodiments, the feedback information may include whether and/or to what extent the driving conditions of the vehicle are satisfactory (e.g., highly satisfactory, generally satisfactory, and unsatisfactory, etc.), and/or the like. In some embodiments, the feedback information may also reflect other information. For example, a cause of satisfaction or dissatisfaction, a recommendation of a running condition of the vehicle, and the like.
In some embodiments, the user may input the feedback information at the user terminal according to his or her will or needs. For example, the user inputs feedback information at the user terminal according to a vehicle running condition (e.g., the vehicle is running too fast or the vehicle is suddenly braked). As another example, the user inputs feedback information at the user terminal at any time.
In some embodiments, the user may input the feedback information based on the first reminder information sent by the processing device.
The ways of the different embodiments can be combined. For example, in some embodiments, the user may input feedback information on the driving parameters of the vehicle in the mobile phone terminal; in some embodiments, the processing device may determine whether to send the first reminding information based on the vehicle state after the preset time, and the user may input feedback information on the driving route of the vehicle in the vehicle-mounted terminal at an appropriate time after receiving the first reminding information.
In some embodiments, the user may input the feedback information in a variety of ways. For example, the user may send feedback information by clicking an icon or text button, sending text or voice, or making a gesture or facial action, etc. For instance, the user may send feedback information expressing dissatisfaction by clicking a dislike icon, or make a preset gesture corresponding to satisfaction toward the terminal's camera as the feedback information.
The ways of the different embodiments can be combined. For example, in some embodiments, a user may click an icon on a mobile phone interface to input feedback information about a driving event. In some embodiments, the processing device may determine whether to send the first reminding information based on the vehicle state at the target position; if so, the processing device sends the first reminding information to a wearable device of the user when the vehicle is a preset distance from the target position, and after receiving the first reminding information, the user may send a voice message through the wearable device as feedback on the driving route.
After the user inputs the feedback information, the feedback information may be stored in the storage device or directly transmitted to the processing device. The processing device (e.g., the second obtaining module) may obtain the feedback information directly from the user terminal, or may read the feedback information from the storage device.
In some embodiments, after the processing device obtains the feedback information, the processing device may process the feedback information to determine an attitude reflected by the feedback information. For example, the processing device may determine the attitude represented by the icon clicked on the user terminal by the user through a preset relationship between the icon and the attitude. For another example, the processing device may identify (e.g., via a model or algorithm, etc.) text, pictures, video, or speech sent by the user terminal to determine the reflected attitude.
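The preset relationship between icons and attitudes described above can be sketched as a simple lookup; the icon identifiers and attitude labels here are hypothetical, not taken from this specification:

```python
# Hypothetical preset relationship between clickable icons and attitudes.
ICON_ATTITUDE = {
    "smile": "very satisfied",
    "neutral": "generally satisfied",
    "frown": "unsatisfied",
}

def attitude_from_icon(icon_id):
    """Resolve a clicked icon to the attitude it represents."""
    return ICON_ATTITUDE.get(icon_id, "unknown")
```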
The ways of the different embodiments can be combined. For example, in some embodiments, the processing device may determine whether to send the first reminding information based on a current user state (e.g., a current emotion of the user), and the user may input the feedback information in the mobile phone terminal based on the first reminding information received by the vehicle-mounted device; in some embodiments, the processing device may determine whether to send the first prompting information based on a vehicle state after a preset time, and if so, send the first prompting information to the mobile phone end at a certain moment before the preset time, and the user may input feedback information in the mobile phone end based on the first prompting information received by the mobile phone end, and when the feedback information reflects that the user is not satisfied with the navigation route, the navigation route may be updated.
Some embodiments of the present description obtain the state information and send the first reminding information to the user terminal in time, so that the user can respond to the first reminding information and input feedback information on the driving condition of the vehicle at the user terminal. This makes it convenient for the processing device to subsequently operate the vehicle's driving system, improve the driving conditions with which users have expressed dissatisfaction, and improve the user's riding experience. Meanwhile, through timely feedback during the trip, the user can avoid situations in which the automatic travel control set by the autonomous vehicle does not meet the user's actual needs, further improving the riding experience.
FIG. 3 is a flow diagram illustrating determining status information according to some embodiments of the present description. As shown in fig. 3, the process 300 may include the following steps 310 to 340. In some embodiments, flow 300 may be performed by a processing device (e.g., processing device 112). In some embodiments, the flow 300 may be performed by a first acquisition module.
Step 310, image information of a user is acquired from a first terminal.
The first terminal may be a terminal device for capturing images. For example, a camera device, a monitoring device, a user terminal with a camera, or an in-vehicle device with a camera, etc. The image information of the user may be an image of the user photographed by the first terminal or an image frame included in a video of the user photographed by the first terminal.
In some embodiments, the first terminal may obtain image information of the user in real time and upload the image information to the database. The processing device may obtain image information of the user from the first terminal in real time. The processing device may also retrieve image information of the user from a database.
At step 320, vehicle-related audio information is obtained from the second terminal.
In some embodiments, the second terminal may be a terminal device that obtains audio information. For example, a sound recording apparatus, an image pickup apparatus, a user terminal with a sound recording function, or an in-vehicle apparatus with a sound recording function, and the like. The second terminal and the first terminal may be the same or different.
Vehicle-related audio information refers to audio data emitted by a vehicle or emitted within the vehicle's internal environment. In some embodiments, the vehicle-related audio information may include: audio emitted by the vehicle during driving (e.g., engine sound, brake sound, etc.), audio emitted or received by the user using a user terminal, or communication audio between the user and a driver of the vehicle during driving, etc.
In some embodiments, the second terminal may obtain vehicle related audio information in real time and upload to the database. The processing device may obtain vehicle-related audio information from the second terminal in real-time. The processing device may also retrieve vehicle-related audio information from a database.
In step 330, image features of the image information and audio features of the audio information are extracted.
In some embodiments, the image features may include color features, texture features, shape features, spatial features, and the like. In some embodiments, the processing device may extract image features through an image feature extraction algorithm or an image feature extraction model. For example, the image feature extraction model may include a convolutional neural network model or the like.
In some embodiments, the audio features may include sampling frequency, bit rate, number of channels, frame rate, zero-crossing rate, short-time energy, and short-time autocorrelation coefficients, among others. In some embodiments, the processing device may extract the audio features through an audio feature extraction algorithm. The audio feature extraction algorithm may include, but is not limited to, linear Prediction Coefficients (LPC), perceptual Linear Prediction Coefficients (PLP), linear Prediction Cepstral Coefficients (LPCC), mel-Frequency Cepstral Coefficients (MFCC), and the like.
In some embodiments, the processing device pre-processes the image information or audio information prior to extracting the audio features or image features. The preprocessing of the image information comprises: smoothing, noise elimination, edge enhancement, edge feature extraction, and the like. The pre-processing of the audio information includes: pre-emphasis, framing, windowing, etc.
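A few of the operations named above — pre-emphasis, zero-crossing rate, and short-time energy — can be sketched directly on a framed signal; the pre-emphasis coefficient 0.97 is a conventional default, not a value mandated by this specification:

```python
def pre_emphasis(signal, alpha=0.97):
    """Boost high frequencies: y[n] = x[n] - alpha * x[n-1]."""
    return [signal[0]] + [signal[n] - alpha * signal[n - 1]
                          for n in range(1, len(signal))]

def zero_crossing_rate(frame):
    """Fraction of adjacent sample pairs whose signs differ."""
    crossings = sum(1 for a, b in zip(frame, frame[1:]) if a * b < 0)
    return crossings / (len(frame) - 1)

def short_time_energy(frame):
    """Sum of squared samples within one frame."""
    return sum(s * s for s in frame)
```

In practice these would be computed per frame after the framing and windowing steps described above.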
At step 340, state information is determined based on the processing of the image features and the audio features.
In some embodiments, the processing may be performed by a pre-trained feature processing model. The pre-trained feature processing model may determine the state information based on the image features and the audio features. In some embodiments, a fourth training sample of the feature processing model includes sample image features and sample audio features of a sample vehicle, and the label is used to characterize the state information of the sample vehicle. When the determined state information is a vehicle state, the label represents the vehicle state; for example, the label represents a driving event of the vehicle: starting, stopping, accelerating, turning, etc. When the determined state information is a user state, the label represents the user state; for example, the label represents the mood of the user: anger, unease, panic, happiness, etc. The processing device may iteratively update the parameters of the initial feature processing model based on a plurality of training samples so that the loss function of the model satisfies a preset condition, e.g., the loss function converges or the loss function value is less than a preset value. When the loss function satisfies the preset condition, model training is complete and the trained feature processing model is obtained.
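The training loop described above — iterate until the loss function value falls below a preset value — can be illustrated with a toy stand-in. The real feature processing model may well be a neural network; the sketch below replaces it with a simple linear scorer over one image feature and one audio feature so that the loop itself is visible:

```python
def train_feature_model(samples, lr=0.1, loss_target=0.05, max_iters=5000):
    """Toy stand-in for the feature processing model: a linear scorer,
    trained by gradient descent until the mean squared loss is below a
    preset value (the convergence condition described above).

    samples: list of ((image_feat, audio_feat), label), labels in {0, 1}.
    """
    w_img, w_aud, b = 0.0, 0.0, 0.0
    loss = float("inf")
    for _ in range(max_iters):
        loss = 0.0
        g_img = g_aud = g_b = 0.0
        for (x_img, x_aud), y in samples:
            pred = w_img * x_img + w_aud * x_aud + b
            err = pred - y
            loss += err * err
            g_img += 2 * err * x_img
            g_aud += 2 * err * x_aud
            g_b += 2 * err
        n = len(samples)
        loss /= n
        if loss < loss_target:          # preset condition met: stop training
            break
        w_img -= lr * g_img / n
        w_aud -= lr * g_aud / n
        b -= lr * g_b / n
    return (w_img, w_aud, b), loss
```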
In some embodiments, the image feature extraction model and the feature processing model may be trained end to end. Specifically, sample image information of a sample user is input into an initial image feature extraction model, the extracted image features together with sample audio features are input into an initial feature processing model, a loss function is constructed based on the label representing the state information of the sample vehicle and the result output by the initial feature processing model, and the parameters of both initial models are iteratively updated based on the loss function until the preset condition is met.
In some embodiments of the present description, the current user state and the current vehicle state may be determined in real time through the user image information and the vehicle audio information acquired in real time, so as to facilitate subsequent acquisition of the current state information in real time and transmission of the first reminding information.
FIG. 4 is a flow chart illustrating sending a first reminder message according to some embodiments of the present description. As shown in fig. 4, the process 400 may include the following steps 410 and 420. In some embodiments, flow 400 may be performed by a processing device (e.g., processing device 112).
And step 410, determining a feedback reminding mode. In some embodiments, step 410 may be performed by the sending module.
In some embodiments, the feedback alert mode may be used to determine the representation mode of the first alert information. Such as voice, text, images, video, vibrations, etc. The feedback reminding mode can be related to at least one of the user state, the user terminal state, the driving environment and the user associated information. For specific details of the user status, reference may be made to step 210 and its related description, which are not described herein again. For details of the driving environment, reference may be made to step 230 and its related description, which are not described herein again.
The user terminal state may reflect the terminal power, the terminal standby time, the terminal usage time and whether the terminal is being used, the current usage mode of the terminal (watching video, listening to songs, etc.), etc.
In some embodiments, the feedback alert mode may be determined based on the user state.
The ways of the different embodiments can be combined. For example, in some embodiments, when the idle condition of the user reflects that the user is reading a book, the feedback reminding manner may be voice; in some embodiments, when the idle condition of the user reflects that the user is listening to a song, the feedback reminding mode can be a text display or an image; in some embodiments, when the user's emotion is anger, then the feedback alert mode may be voice.
In some embodiments, the feedback reminding manner can be determined based on the state of the user terminal.
The ways of the different embodiments can be combined. For example, in some embodiments, when the user terminal state reflects that the user is talking on a speaker or playing music, etc., the feedback alert may be an image display or text; in some embodiments, when the state of the user terminal reflects that the mobile phone of the user is in a low power or standby state, the feedback reminding mode may be performed by the vehicle-mounted device or the wearable device, for example, by a voice display, an image display, a text display, or the like of the vehicle-mounted device or the smart watch.
In some embodiments, the feedback alert mode may be determined based on the driving environment.
The ways of the different embodiments can be combined. For example, in some embodiments, when the in-vehicle environment information reflects that the environment is noisy and it is inconvenient to listen to audio, the feedback reminding manner may be through image display or text, and when the in-vehicle environment information reflects that the in-vehicle light is dim or the light is too bright, the feedback reminding manner may be voice; in some embodiments, when the road condition environment is a congested road segment, the braking frequency of the vehicle is high, the user may be uncomfortable due to long-time viewing of the mobile phone screen, and the feedback reminding mode may be voice or vibration.
In some embodiments, the feedback reminding manner can be further determined based on the user association information. For example, the user-related information reflects that the age of the user exceeds a preset age, and the text and the sound in the feedback reminding mode can be amplified. For another example, the user information reflects that the common language of the user is english, and the text and sound in the feedback reminding mode can be switched to english. For another example, the user information reflects that the user is a Chinese, and the characters and sounds in the feedback reminding mode can be switched to Chinese.
In some embodiments, the feedback reminding manner can also be determined based on multiple types of user states, user terminal states, driving environments and user associated information. The processing device may determine the model based on a preset rule or a trained feedback reminding mode, process various information among the user state, the user terminal state, the driving environment, and the user association information, and determine the feedback reminding mode.
For example, as described above, each of the user state, the user terminal state, the driving environment, and the user association information may include a plurality of types, each type may preset a corresponding feedback reminding manner, the preset rule may be to score a plurality of information in the user state, the user terminal state, the driving environment, and the user association information, and determine a final feedback reminding manner (e.g., select a feedback reminding manner with a largest score after fusion) by fusing scores of feedback reminding manners of the same type (e.g., averaging, weighted summing, weighted averaging, etc.).
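The scoring-and-fusion rule described above — score each candidate reminding mode per information source, fuse the scores (e.g., by weighted averaging), and pick the mode with the largest fused score — might look like the following; the mode names and score values are illustrative assumptions:

```python
def fuse_reminder_scores(score_lists, weights=None):
    """Fuse per-source scores for each candidate reminding mode by
    weighted averaging, then pick the mode with the largest fused score.

    score_lists: dict mode -> list of scores, one per information source
        (e.g., user state, terminal state, driving environment, profile).
    weights: optional per-source weights; defaults to a plain average.
    """
    best_mode, best_score = None, float("-inf")
    for mode, scores in score_lists.items():
        w = weights or [1.0] * len(scores)
        fused = sum(s * wi for s, wi in zip(scores, w)) / sum(w)
        if fused > best_score:
            best_mode, best_score = mode, fused
    return best_mode

# Hypothetical scores from four information sources.
scores = {
    "voice": [0.9, 0.2, 0.8, 0.5],
    "text": [0.4, 0.9, 0.3, 0.6],
    "vibration": [0.1, 0.5, 0.6, 0.4],
}
```

Passing source weights lets, say, the terminal state dominate the decision when the phone is in use.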
For another example, the processing device may determine the model based on a preset rule or a trained feedback reminding mode, and simultaneously process a plurality of information in the user state, the user terminal state, the driving environment, and the user association information to determine a plurality of initial feedback reminding modes; and determining a final feedback reminding mode from the plurality of initial feedback reminding modes based on a preset screening rule. The preset screening rule can be random screening or set according to requirements.
In some embodiments, the feedback reminding mode can be changed in real time based on the actual conditions. For example, when the user is making a call, the feedback reminding mode can be changed from the original voice display to a text display or an image display.
And step 420, generating first reminding information based on the feedback reminding mode, and sending the first reminding information to the user terminal. In some embodiments, step 420 may be performed by the sending module.
For specific details of generating and sending the first reminding information to the user terminal, reference may be made to step 220 and its related description, which are not repeated herein. In some embodiments, the processing device may send the first reminding information to the user terminal via short message, the network, Bluetooth, or another transmission manner.
In some embodiments of the present description, the corresponding feedback reminding manner is determined intelligently and dynamically based on various information (e.g., a user state, a user terminal state, a driving environment, user association information, and the like), so that the fixed feedback reminding manner is prevented from disturbing the user, and the riding experience of the user is improved.
FIG. 5 is a flow diagram illustrating predicting a target location according to some embodiments of the present description. As shown in fig. 5, the process 500 may include steps 510 and 520 described below. In some embodiments, flow 500 may be performed by a processing device (e.g., processing device 112). In some embodiments, the flow 500 may be performed by a first acquisition module.
And step 510, obtaining historical feedback information related to at least one of the track points and the user on the first navigation route, and historical driving conditions corresponding to the historical feedback information.
In some embodiments, the first navigation route may be a navigation route being used by the vehicle. As previously described in step 230, the travel routes may include at least one navigation route, and in some embodiments, the first navigation route may be one selected by the user from the travel routes, or may be an optimal navigation route automatically selected by the processing device from the travel routes (e.g., the shortest travel navigation route, the shortest time-consuming navigation route, or the least expensive navigation route, etc.).
In some embodiments, historical feedback information from historical users on historical driving conditions of historical vehicles may be stored in a storage device (e.g., the database 140) or on the platform. It can be understood that each historical driving condition of a historical vehicle corresponds to at least one piece of historical feedback information. For example, the correspondence among the historical user, the historical vehicle, the historical driving condition, and the historical feedback information may be stored in the storage device.
The track points on the first navigation route may be location points included on the first navigation route. In some embodiments, the obtaining module may obtain historical feedback information related to the track points on the first navigation route. For example, the obtaining module may obtain historical feedback information corresponding to the track point on the first navigation route and historical driving conditions corresponding to the historical feedback information from a storage device or a platform.
In some embodiments, the acquisition module may acquire historical feedback information related to the user. For example, the obtaining module may obtain historical feedback information of a historical user for the user from a storage device or a platform, and historical driving conditions corresponding to the historical feedback information.
In some embodiments, the acquisition module may acquire historical feedback information related to the user and the track point on the first navigation route. For example, the obtaining module obtains historical feedback information of a historical user for track points on the first navigation route and historical driving conditions corresponding to the historical feedback information from a storage device or a platform, wherein the historical user is the user.
And step 520, predicting a target position on the first navigation route based on the historical feedback information and the historical driving condition.
In some embodiments, the processing device may predict the target position on the first navigation route based on historical feedback information related to the track points on the first navigation route and/or the user (which may be referred to as "first historical feedback information"). For example, the processing device may determine, as the target position, a track point whose corresponding feedback in the first historical feedback information is negative (e.g., the feedback is "unsatisfied"). For instance, if the user or another user has previously given dissatisfied feedback on the historical driving conditions at track point 1 and track point 2 on the first navigation route, the processing device may determine track points 1 and 2 as the target positions. As another example, the processing device may determine, as the target position, a track point whose number of negative feedbacks in the first historical feedback information is greater than a preset number. The preset number may be set according to actual requirements, for example, 3 or 5.
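The "negative feedback count reaches a preset number" rule might be sketched as follows; the feedback labels are illustrative, and the sketch treats reaching the preset number as the trigger:

```python
from collections import Counter

def predict_target_positions(history, preset_number=3):
    """Pick track points whose count of negative feedback reaches a
    preset number.

    history: list of (track_point, feedback) pairs, with feedback in
        {"satisfied", "unsatisfied"} (illustrative labels).
    """
    negatives = Counter(tp for tp, fb in history if fb == "unsatisfied")
    return sorted(tp for tp, n in negatives.items() if n >= preset_number)
```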
In some embodiments, the processing device may predict the target location on the first navigation route based on a historical travel condition (which may be referred to as a "first historical travel condition") to which the first historical feedback information corresponds. For example, the processing device may determine, as the target position, a trajectory point in the historical travel condition on which the travel event is reflected. By way of example, if the driving event "turn" is reflected at track points 2 and 3, then track points 2 and 3 can be determined as the target positions.
In some embodiments, the processing device may predict a target location on the first navigation route based on the first historical feedback information and the first historical driving condition. In some embodiments, the processing device may determine, as the target position, a trajectory point in the first historical driving situation in which the driving event is reflected and whose corresponding first historical feedback information is negative feedback. Still taking the above example as an example, the trajectory point 2 may be determined as the target position.
In some embodiments, the processing device may predict the target position on the first navigation route based on a trained target position prediction model. Specifically, the target position prediction model may process the first historical feedback information and the first historical driving condition and output the target position. The target position prediction model may include, but is not limited to, a graph neural network model, a linear regression model, a neural network model, a decision tree, a support vector machine, naive Bayes, and the like.
When the target position prediction model is the graph neural network model, the nodes of the graph neural network model are track points on the first navigation route, and the characteristics of the nodes at least comprise: the first historical feedback information, the first historical driving condition and the attributes of the track points. The attributes of the track points may reflect the type of track point, e.g., turn road segment, congested road segment, near school, etc. The edges of the graph neural network model are relationships between the track points, and the characteristics of the edges may include distance relationships, whether they occur on the same navigation route and/or on the same road segment, and the like. And inputting the characteristics of the edges and the characteristics of the nodes into the graph neural network model, and outputting whether the corresponding track points are the target positions or not by the nodes of the graph neural network model.
As previously described, the first historical feedback information may fall into several categories: the user's past feedback on track points on the first navigation route, other users' past feedback on those track points, and the user's past feedback at other positions (i.e., positions other than the track points on the first navigation route). In some embodiments, different categories of historical feedback information may be assigned different weights when determining the target location. The weights may be set according to circumstances; for example, the user's own past feedback on the track points may carry the highest weight, so that the determined target position better matches the user's needs.
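Weighting the feedback categories can be sketched as follows; the weight values and key names are illustrative assumptions and would in practice be tuned to the deployment:

```python
# Illustrative weights for the three feedback categories described above;
# the user's own feedback on the route's track points carries the most weight.
WEIGHTS = {
    "user_on_route": 0.6,    # this user's feedback on the route's track points
    "others_on_route": 0.3,  # other users' feedback on the same track points
    "user_elsewhere": 0.1,   # this user's feedback at other positions
}

def weighted_negative_score(counts: dict) -> float:
    """Combine negative-feedback counts from each category into one score."""
    return sum(WEIGHTS[k] * counts.get(k, 0) for k in WEIGHTS)

score = weighted_negative_score(
    {"user_on_route": 2, "others_on_route": 5, "user_elsewhere": 1}
)
print(round(score, 2))  # 2.8
```

A track point whose combined score exceeds a chosen threshold could then be treated as a target position candidate.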
In some embodiments of the present description, the target position is determined from the historical feedback information and the historical driving conditions, so that track points on the navigation route being used by the vehicle that have historically received feedback or been associated with driving events can be selected as target positions. Feedback from the user at these positions is then collected, which facilitates subsequent optimization of vehicle control and improves the user's riding experience.
Embodiments of the present specification also provide an apparatus for user feedback, the apparatus including at least one processor and at least one memory; the at least one memory is configured to store computer instructions; the at least one processor is configured to execute at least some of the computer instructions to implement operations corresponding to the method of user feedback described above.
The present specification also provides a computer-readable storage medium storing computer instructions. When the computer instructions are executed by a processor, operations corresponding to the method of user feedback described above are implemented.
Having thus described the basic concept, it will be apparent to those skilled in the art that the foregoing detailed disclosure is to be considered as illustrative only and not limiting of the present invention. Various modifications, improvements, and adaptations of the present description may occur to those skilled in the art, although not explicitly described herein. Such alterations, modifications, and improvements are suggested by this specification and are intended to be within the spirit and scope of its exemplary embodiments.
Also, the specification uses specific words to describe its embodiments. Reference to "one embodiment," "an embodiment," and/or "some embodiments" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the specification. Therefore, it is emphasized and should be appreciated that two or more references to "an embodiment," "one embodiment," or "an alternative embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, certain features, structures, or characteristics of one or more embodiments of the specification may be combined as appropriate.
Moreover, those skilled in the art will appreciate that aspects of the present description may be illustrated and described in terms of several patentable species or situations, including any new and useful combination of processes, machines, manufacture, or materials, or any new and useful improvement thereof. Accordingly, aspects of this description may be performed entirely by hardware, entirely by software (including firmware, resident software, micro-code, etc.), or by a combination of hardware and software. The above hardware or software may be referred to as a "data block," "module," "engine," "unit," "component," or "system." Furthermore, aspects of the present description may be represented as a computer product, including computer-readable program code, embodied in one or more computer-readable media.
The computer storage medium may comprise a propagated data signal with the computer program code embodied therewith, for example, on baseband or as part of a carrier wave. The propagated signal may take any of a variety of forms, including electromagnetic, optical, etc., or any suitable combination. A computer storage medium may be any computer-readable medium that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code located on a computer storage medium may be propagated over any suitable medium, including radio, cable, fiber optic cable, RF, or the like, or any combination of the preceding.
Computer program code required for the operation of various portions of this specification may be written in any one or more programming languages, including object-oriented programming languages such as Java, Scala, Smalltalk, Eiffel, JADE, Emerald, C++, C#, VB.NET, and Python; conventional procedural programming languages such as C, Visual Basic, Fortran 2003, Perl, COBOL 2002, PHP, and ABAP; dynamic programming languages such as Python, Ruby, and Groovy; or other programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or processing device. In the latter scenario, the remote computer may be connected to the user's computer through any form of network, such as a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet), or in a cloud computing environment, or as a service such as software as a service (SaaS).
Additionally, the order in which the elements and sequences of the processes are recited in the specification, the use of alphanumeric characters, or the use of other designations is not intended to limit the order of the processes and methods of the specification unless otherwise specified in the claims. While various presently contemplated embodiments of the invention have been discussed in the foregoing disclosure by way of example, it is to be understood that such detail is solely for that purpose and that the appended claims are not limited to the disclosed embodiments but, on the contrary, are intended to cover all modifications and equivalent arrangements that are within the spirit and scope of the embodiments herein. For example, although the system components described above may be implemented by hardware devices, they may also be implemented by software-only solutions, such as installing the described system on an existing processing device or mobile device.
Similarly, it should be noted that in the preceding description of embodiments of the present specification, various features are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the embodiments. This method of disclosure, however, is not intended to imply that more features are required than are expressly recited in the claims. Indeed, claimed subject matter may lie in less than all features of a single embodiment disclosed above.
Numerals describing quantities of components, attributes, and the like are used in some embodiments; it should be understood that such numerals, as used in the description of the embodiments, are in some instances modified by the terms "about," "approximately," or "substantially." Unless otherwise indicated, "about," "approximately," or "substantially" indicates that the number allows a variation of ±20%. Accordingly, in some embodiments, the numerical parameters used in the specification and claims are approximations that may vary depending upon the desired properties of the individual embodiments. In some embodiments, the numerical parameters should take into account the specified significant digits and employ a general digit-preserving approach. Notwithstanding that the numerical ranges and parameters setting forth the broad scope in some embodiments of the specification are approximations, in specific embodiments such numerical values are set forth as precisely as practicable.
For each patent, patent application publication, and other material cited in this specification, such as articles, books, specifications, publications, and documents, the entire contents are hereby incorporated by reference into this specification, except for any application history document that is inconsistent with or conflicts with the contents of this specification, and except for any document (currently or later appended to this specification) that limits the broadest scope of the claims of this specification. It is to be understood that if the descriptions, definitions, and/or use of terms in the materials accompanying this specification are inconsistent with or conflict with the statements of this specification, the descriptions, definitions, and/or use of terms in this specification shall control.
Finally, it should be understood that the embodiments described herein are merely illustrative of the principles of the embodiments of the present disclosure. Other variations are also possible within the scope of the present description. Thus, by way of example, and not limitation, alternative configurations of the embodiments of the specification can be considered consistent with the teachings of the specification. Accordingly, the embodiments of the present description are not limited to only those embodiments explicitly described and depicted herein.

Claims (10)

1. A method of user feedback, comprising:
acquiring state information, wherein the state information comprises at least one of a vehicle state and a user state;
based on the state information, sending first reminding information to a user terminal;
and acquiring feedback information on a driving condition of a vehicle taken by the user, the feedback information being input by the user at the user terminal in response to the first reminding information.
2. The method of claim 1, wherein the user state relates to one or more of the following: vehicle usage of the user, an idle state of the user, and a mood of the user.
3. The method of claim 1, the vehicle state comprising at least one of a current vehicle state, a vehicle state after a preset time, and a vehicle state at a target location; the vehicle state is related to a running condition of the vehicle.
4. The method of claim 3, the driving condition comprising: at least one of a driving route, a driving location, a driving section, a driving environment, a driving state, and a driving parameter.
5. The method of claim 3, the target location being obtained by a prediction process comprising:
obtaining historical feedback information related to at least one of a track point on a first navigation route and the user, and historical driving conditions corresponding to the historical feedback information; and
predicting the target location on the first navigation route based on the historical feedback information and the historical driving conditions, wherein,
the first navigation route is a navigation route being used by the vehicle.
6. The method of claim 1, wherein the sending the first reminding information to the user terminal comprises:
determining a feedback reminding mode, wherein the feedback reminding mode is related to at least one of a user state, a user terminal state, a driving environment and user association information; and
generating the first reminding information based on the feedback reminding mode, and sending the first reminding information to the user terminal.
7. The method of claim 1, the vehicle being an autonomous vehicle.
8. A system of user feedback, comprising:
a first acquisition module configured to acquire state information, the state information comprising at least one of a vehicle state and a user state;
a sending module configured to send first reminding information to a user terminal based on the state information; and
a second acquisition module configured to acquire feedback information on a driving condition of the vehicle, the feedback information being input by the user at the user terminal in response to the first reminding information.
9. An apparatus for user feedback, the apparatus comprising at least one processor and at least one memory;
the at least one memory is for storing computer instructions;
the at least one processor is configured to execute at least some of the computer instructions to implement the method of any of claims 1 to 7.
10. A computer readable storage medium storing computer instructions which, when executed by a processor, implement the method of any one of claims 1 to 7.
CN202110806625.7A 2021-07-16 2021-07-16 User feedback method and system Pending CN115631550A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110806625.7A CN115631550A (en) 2021-07-16 2021-07-16 User feedback method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110806625.7A CN115631550A (en) 2021-07-16 2021-07-16 User feedback method and system

Publications (1)

Publication Number Publication Date
CN115631550A true CN115631550A (en) 2023-01-20

Family

ID=84902876

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110806625.7A Pending CN115631550A (en) 2021-07-16 2021-07-16 User feedback method and system

Country Status (1)

Country Link
CN (1) CN115631550A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2012225683A (en) * 2011-04-15 2012-11-15 Nippon Soken Inc Car navigation device
US20180365740A1 (en) * 2017-06-16 2018-12-20 Uber Technologies, Inc. Systems and Methods to Obtain Passenger Feedback in Response to Autonomous Vehicle Driving Events
WO2019025120A1 (en) * 2017-08-01 2019-02-07 Audi Ag Method for determining user feedback during the use of a device by a user, and control device for carrying out the method
US20190047584A1 (en) * 2017-08-11 2019-02-14 Uber Technologies, Inc. Systems and Methods to Adjust Autonomous Vehicle Parameters in Response to Passenger Feedback
KR20190103521A (en) * 2018-02-13 2019-09-05 현대자동차주식회사 Vehicle and control method for the same

Similar Documents

Publication Publication Date Title
US10222227B2 (en) Navigation systems and associated methods
Chan et al. A comprehensive review of driver behavior analysis utilizing smartphones
CN108205830B (en) Method and system for identifying individual driving preferences for unmanned vehicles
US10192171B2 (en) Method and system using machine learning to determine an automotive driver's emotional state
US10875525B2 (en) Ability enhancement
JP2020522798A (en) Device and method for recognizing driving behavior based on motion data
US9928833B2 (en) Voice interface for a vehicle
JP6612707B2 (en) Information provision device
US12056198B2 (en) Method and apparatus for enhancing a geolocation database
US11460309B2 (en) Control apparatus, control method, and storage medium storing program
EP3296944A1 (en) Information processing device, information processing method, and program
JP7122239B2 (en) Matching method, matching server, matching system, and program
CN111540222A (en) Intelligent interaction method and device based on unmanned vehicle and unmanned vehicle
US20240344840A1 (en) Sentiment-based navigation
CN113320536A (en) Vehicle control method and system
CN113320537A (en) Vehicle control method and system
CN111797755A (en) Automobile passenger emotion recognition method and electronic equipment
JP7183864B2 (en) Information processing system, program, and control method
JP2018200192A (en) Point proposal device and point proposal method
CN115631550A (en) User feedback method and system
JP7151400B2 (en) Information processing system, program, and control method
US20230392936A1 (en) Method and apparatus for determining lingering communication indicators
CN115204262A (en) Pedestrian warning method and device, storage medium and electronic equipment
JP2022103553A (en) Information providing device, information providing method, and program
JP2020134193A (en) Server, stopover point proposal method, program, terminal device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination