
CN111510553A - Motion trail display method and device and readable storage medium - Google Patents

Motion trail display method and device and readable storage medium

Info

Publication number
CN111510553A
CN111510553A
Authority
CN
China
Prior art keywords
map magnification
motion
track
map
app
Prior art date
Legal status
Granted
Application number
CN202010217999.0A
Other languages
Chinese (zh)
Other versions
CN111510553B (en)
Inventor
牟桐
韩颜聪
蔡冰莹
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN202010217999.0A priority Critical patent/CN111510553B/en
Publication of CN111510553A publication Critical patent/CN111510553A/en
Application granted granted Critical
Publication of CN111510553B publication Critical patent/CN111510553B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M1/00Substation equipment, e.g. for use by subscribers
    • H04M1/72Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72448User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions
    • H04M1/72457User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions according to geographic location
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/451Execution arrangements for user interfaces
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4084Scaling of whole images or parts thereof, e.g. expanding or contracting in the transform domain, e.g. fast Fourier transform [FFT] domain scaling
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M1/00Substation equipment, e.g. for use by subscribers
    • H04M1/72Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72403User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
    • H04M1/72406User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality by software upgrading or downloading
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W4/00Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/02Services making use of location information
    • H04W4/029Location-based management or tracking services

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Environmental & Geological Engineering (AREA)
  • General Engineering & Computer Science (AREA)
  • Controls And Circuits For Display Device (AREA)
  • Navigation (AREA)

Abstract

A motion trail display method, a motion trail display device and a readable storage medium determine a target sampling parameter by predicting the first map magnification a user will use when displaying a motion trail on an electronic device, so that the number of sampled track points, and hence the time consumed in displaying the motion trail, can be reduced while the distortion of the motion trail is kept low. According to an embodiment of the application, the first map magnification is estimated from the first screen size of the electronic device; the target sampling parameter is then determined from the first map magnification, the maximum map magnification supported by the sports APP, and a preset correspondence between map magnification and sampling parameter; finally, the track points are sampled accordingly and the motion trail is displayed on the display screen of the electronic device through the graphical interface of the sports APP. In this way, on the premise of reducing the distortion of the motion trail, fewer track points need to be sampled and the time consumed by the motion trail display is reduced.

Description

Motion trail display method and device and readable storage medium
Technical Field
The present application relates to the field of communications, and in particular, to a method and an apparatus for displaying a motion trajectory, and a readable storage medium.
Background
In daily life, outdoor sports enthusiasts often like to record their own motion trails. Recording and displaying the track points generated by a user is a flagship function of many application programs (APPs).
As APPs have developed, motion trail display has evolved from simple single-color rendering, to gradient-color rendering over a map, to dynamic animation of the trajectory line. While these features increasingly satisfy user demands, the preprocessing of track points, the computation of colors, and the computation of timestamps for dynamic display all add to the time consumed by the display process, and the more track points there are, the more pronounced this time cost becomes. For example, a user taking part in a cross-country run may generate a large number of track points, on the order of 100,000; displaying these track points on a map takes a long time and may even cause the motion trail display to fail.
In the prior art, to reduce the time consumed by motion trail display, the sampling rate is usually set to a large fixed value so that fewer track points need to be sampled. However, with such a high fixed sampling rate only a few track points remain after sampling, and the generated motion trail is severely distorted.
Disclosure of Invention
The present application provides a motion trail display method, a motion trail display device and a readable storage medium, so as to reduce the time consumed by motion trail display while keeping motion trail distortion low.
In a first aspect, the present application provides a method for displaying a motion trail, including: estimating, according to a first screen size of a display screen of the electronic device, a first map magnification used when a user displays a motion trail on the electronic device; determining a target sampling parameter according to the first map magnification, the maximum map magnification supported by a sports APP installed in the electronic device, and a preset correspondence between map magnification and sampling parameter; sampling the track points acquired by the sports APP according to the target sampling parameter to obtain the sampled track points; and displaying the motion trail on the display screen of the electronic device, through the graphical interface of the sports APP, according to the sampled track points.
In this application, the sampling parameter is described as including a sampling ratio, and the sampling ratio included in the target sampling parameter is referred to as the target sampling ratio. In one possible implementation, the preset correspondence between map magnification and sampling parameter satisfies the following: the smaller the sampling ratio, the fewer the sampled track points, so a smaller sampling ratio can correspond to a smaller map magnification; at a small map magnification the sampling ratio can therefore be reduced, which reduces the time taken to generate the motion trail. Conversely, the larger the sampling ratio, the more sampled track points there are, so a larger sampling ratio can correspond to a larger map magnification, which alleviates the distortion of the motion trail at large map magnifications.
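The sampling step described above can be sketched as follows. The function name `downsample_track` and the even-spacing strategy are illustrative assumptions, not details specified in the application; the application only states that the track points are sampled according to the target sampling ratio.

```python
# Hypothetical sketch: thinning track points with a sampling ratio in (0, 1].

def downsample_track(points, ratio):
    """Keep roughly `ratio` of the points, evenly spaced, always
    retaining the first and last point so the trail endpoints survive."""
    if not points or ratio >= 1.0:
        return list(points)
    n_keep = max(2, round(len(points) * ratio))
    step = (len(points) - 1) / (n_keep - 1)
    return [points[round(i * step)] for i in range(n_keep)]

track = [(x, x * 0.5) for x in range(1000)]  # 1000 synthetic track points
sampled = downsample_track(track, 0.1)
print(len(sampled))  # 100 points kept, endpoints included
```

A smaller ratio keeps fewer points (faster rendering, more distortion); a ratio of 1.0 keeps the full track.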
In one possible implementation, predicting the first map magnification used when a user displays a motion trail on the electronic device through the sports APP may include: if the first screen size is larger than a second screen size, determining that the first map magnification is the maximum map magnification supported by a map APP installed in the electronic device; if the first screen size is not larger than the second screen size, determining a first ratio between the first screen size and the second screen size, and calculating the first map magnification according to the first ratio and a correspondence between the second screen size and a second map magnification, where the second map magnification is the map magnification estimated to be used when a user displays a motion trail through the sports APP on an electronic device with the second screen size.
In this way, when the first screen size is large, the user can see the complete motion trail even at a large map magnification, so it can be presumed that a user viewing the motion trail on a large screen will use as large a map magnification as possible; on a smaller screen, the map magnification corresponding to the first screen size can be chosen flexibly according to the scheme provided in this embodiment.
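The estimation logic above can be sketched as follows. The proportional formula in the else-branch is an assumption; the application only states that the first map magnification is calculated from the screen-size ratio and the reference (second) map magnification, without giving the formula.

```python
def estimate_first_magnification(first_size, second_size,
                                 second_magnification, map_app_max):
    """Estimate the map magnification level a user is likely to use on a
    screen of `first_size` inches, given a reference screen of
    `second_size` inches whose typical magnification is known."""
    if first_size > second_size:
        # Large screen: assume the user zooms in as far as the map APP allows.
        return map_app_max
    ratio = first_size / second_size
    # Hypothetical formula: shrink the reference magnification with the
    # screen ratio, never dropping below level 1.
    return max(1, round(second_magnification * ratio))

print(estimate_first_magnification(6.5, 5.0, 17, 19))  # phone: 19 (max)
print(estimate_first_magnification(1.2, 5.0, 17, 19))  # small band screen: 4
```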
In one possible implementation, determining the target sampling parameter according to the first map magnification and the maximum map magnification supported by the sports APP may include: if the first map magnification is larger than the maximum map magnification supported by the sports APP, taking the sampling parameter corresponding to that maximum map magnification as the target sampling parameter, according to the preset correspondence between map magnification and sampling parameter; if the first map magnification is not larger than the maximum map magnification supported by the sports APP, taking the sampling parameter corresponding to the first map magnification as the target sampling parameter, according to the same correspondence.
In this embodiment, the sports APP may determine the maximum map magnification it supports according to its own service characteristics. For example, if the sports APP is a travel APP, then because the distance spanned by a user's travel is relatively long, a relatively low maximum map magnification needs to be set in order to display the whole travel track. This scheme is thus compatible with the maximum map magnification supported by the sports APP itself.
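The clamping logic of this implementation amounts to taking the minimum of the two magnifications before the table lookup. The sketch below assumes a hypothetical correspondence table; the specific ratio values are illustrative, not taken from the application.

```python
# Hypothetical correspondence between map magnification (level) and
# sampling ratio; the numbers are illustrative assumptions.
MAGNIFICATION_TO_RATIO = {13: 0.02, 14: 0.05, 15: 0.1, 16: 0.2,
                          17: 0.4, 18: 0.7, 19: 1.0}

def target_sampling_ratio(first_magnification, app_max_magnification):
    """Clamp the predicted magnification to the sports APP's supported
    maximum, then look up the preset sampling ratio."""
    level = min(first_magnification, app_max_magnification)
    return MAGNIFICATION_TO_RATIO[level]

print(target_sampling_ratio(19, 16))  # clamped to level 16 -> 0.2
print(target_sampling_ratio(15, 16))  # within range, level 15 -> 0.1
```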
In one possible implementation, before the track points are sampled according to the target sampling parameter, redundant track points among M track points acquired by the sports APP can be deleted according to the sampling time and/or position of each of the M track points, where M is an integer greater than 2 and the M track points are obtained by sampling the motion trail in the sports APP at a preset sampling frequency. In this way, the storage space of the electronic device can be saved while the clarity of the displayed motion trail is preserved.
In one possible implementation, deleting redundant track points from the M track points according to the sampling time and/or position of each track point may include: deleting the (i+1)-th track point if it meets one or more of the following conditions, where i is a positive integer less than M: the acquisition time interval between the (i+2)-th track point and the i-th track point is less than a time threshold; or the angle between a first vector and a second vector falls within a preset target angle range, where the first vector points from the (i+1)-th track point to the (i+2)-th track point, and the second vector points from the (i+1)-th track point to the i-th track point. In this way, track points that are close in time, or that lie nearly on the same straight line, are deleted as redundant points, so that the distortion of the motion trail is kept as small as possible while storage space is saved.
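The two deletion conditions can be sketched as a per-point test. The concrete threshold values (2 seconds, a 175°–180° angle range for "nearly collinear") are illustrative assumptions; the application specifies only a time threshold and a preset target angle range.

```python
import math

def is_redundant(p_prev, p_mid, p_next, t_prev, t_next,
                 time_threshold=2.0, angle_range=(175.0, 180.0)):
    """Decide whether p_mid (the (i+1)-th point) can be dropped.
    Condition 1: its neighbours were acquired within `time_threshold`
    seconds of each other. Condition 2: the angle between the vectors
    p_mid->p_next and p_mid->p_prev falls in `angle_range`, i.e. the
    three points are nearly collinear."""
    if t_next - t_prev < time_threshold:
        return True
    v1 = (p_next[0] - p_mid[0], p_next[1] - p_mid[1])
    v2 = (p_prev[0] - p_mid[0], p_prev[1] - p_mid[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    norm = math.hypot(*v1) * math.hypot(*v2)
    if norm == 0:
        return True  # coincident points are trivially redundant
    angle = math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))
    return angle_range[0] <= angle <= angle_range[1]

# Three collinear points far apart in time: redundant by the angle test.
print(is_redundant((0, 0), (1, 1), (2, 2), t_prev=0.0, t_next=60.0))  # True
```

For an exactly collinear middle point the two vectors point in opposite directions, so the angle is 180° and the point is dropped; a sharp corner (e.g. 90°) is kept.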
In a second aspect, an electronic device is provided, where the electronic device includes modules for executing the motion trail display method in the first aspect or any one of the possible implementations of the first aspect.
In a third aspect, an electronic device is provided that includes a processor and a memory. The memory is used for storing computer-executable instructions, and the processor executes the computer-executable instructions in the memory to perform the operational steps of the method of the first aspect or any one of the possible implementations of the first aspect.
In a fourth aspect, the present application provides a computer-readable storage medium having stored therein instructions, which, when executed on a computer, cause the computer to perform the method of the first aspect or any one of the possible implementations of the first aspect.
In a fifth aspect, the present application provides a computer program product storing instructions that, when executed on a computer, cause the computer to perform the method of the first aspect or any one of the possible implementations of the first aspect.
Drawings
FIG. 1 is a schematic diagram of an electronic device;
fig. 2 is a schematic flow chart of a motion trajectory display method according to an embodiment of the present disclosure;
FIG. 3a is a schematic diagram showing a motion trail on a mobile phone at a map magnification of level 16 (1:200 m);
FIG. 3b is a schematic diagram showing a motion trail on a mobile phone at a map magnification of level 18 (1:50 m);
FIG. 3c is another schematic diagram showing a motion trail on a mobile phone at a map magnification of level 18 (1:50 m);
FIG. 4a is a schematic diagram showing a motion trail on a smart band at a map magnification of level 18 (1:50 m);
FIG. 4b is a schematic diagram showing a motion trail on a smart band at a map magnification of level 17 (1:100 m);
FIG. 4c is another schematic diagram showing a motion trail on a smart band at a map magnification of level 18 (1:50 m);
FIG. 5 is a diagram illustrating an angle between a first vector and a second vector provided by an embodiment of the present application;
fig. 6 schematically shows a structure of an electronic device;
fig. 7 schematically shows a structure of another electronic device.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the drawings in the embodiments of the present application. In the description of the embodiments of the present application, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance, nor as implicitly indicating the number of technical features referred to. Thus, a feature qualified by "first" or "second" may explicitly or implicitly include one or more of that feature.
The embodiments of the present application can be applied to electronic devices. The electronic device may be a portable electronic device that also provides functions such as a personal digital assistant and/or a music player, for example a mobile phone, a tablet computer, a wearable device with a wireless communication function (e.g., a smart watch), or an in-vehicle device. Exemplary embodiments of the portable electronic device include, but are not limited to, devices carrying the operating systems named in the source figure (image omitted) or other operating systems.
Fig. 1 schematically shows a structure of an electronic device 100.
It should be understood that the illustrated electronic device 100 is merely an example, and that the electronic device 100 may have more or fewer components than shown in the figures, may combine two or more components, or may have a different configuration of components. The various components shown in the figures may be implemented in hardware, software, or a combination of hardware and software, including one or more signal processing and/or application specific integrated circuits.
As shown in fig. 1, the electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a Universal Serial Bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, a button 190, a motor 191, an indicator 192, a camera 193, a display screen 194, and a Subscriber Identification Module (SIM) card interface 195, etc., wherein the sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, a barometric sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, etc.
The following describes the components of the electronic device 100 in detail with reference to fig. 1:
the processor 110 may include one or more processing units, for example, the processor 110 may include an Application Processor (AP), a modem processor, a Graphics Processing Unit (GPU), an Image Signal Processor (ISP), a controller, a memory, a video codec, a Digital Signal Processor (DSP), a baseband processor, and/or a neural-Network Processing Unit (NPU), etc. The different processing units may be separate devices or may be integrated into one or more processors. The controller may be, among other things, a neural center and a command center of the electronic device 100. The controller can generate an operation control signal according to the instruction operation code and the timing signal to complete the control of instruction fetching and instruction execution.
A memory may also be provided in processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that have just been used or recycled by the processor 110. If the processor 110 needs to reuse the instruction or data, it can be called directly from the memory, so that repeated accesses can be avoided, the waiting time of the processor 110 can be reduced, and the efficiency of the system can be improved.
The processor 110 may execute the motion trail display method provided in the embodiments of the present application. When the processor 110 integrates different devices, such as a CPU and a GPU, the CPU and the GPU may cooperate to execute the method provided in the embodiments of the present application; for example, part of the algorithm is executed by the CPU and another part by the GPU, so as to obtain higher processing efficiency.
In some embodiments, processor 110 may include one or more interfaces. For example, the interface may include an integrated circuit (I2C) interface, an inter-integrated circuit built-in audio (I2S) interface, a Pulse Code Modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a Mobile Industry Processor Interface (MIPI), a general-purpose input/output (GPIO) interface, a Subscriber Identity Module (SIM) interface, and/or a Universal Serial Bus (USB) interface.
The I2C interface is a bidirectional synchronous serial bus that includes a serial data line (SDA) and a serial clock line (SCL). In some embodiments, the processor 110 may include multiple sets of I2C buses. The processor 110 may be coupled to the touch sensor 180K, the charger, the flash, the camera 193, and the like through different I2C bus interfaces. For example, the processor 110 may be coupled to the touch sensor 180K through an I2C interface, so that the processor 110 and the touch sensor 180K communicate through the I2C bus interface to implement the touch function of the electronic device 100.
MIPI interfaces may be used to connect processor 110 with peripheral devices such as display screen 194, camera 193, and the like. The MIPI interface includes a Camera Serial Interface (CSI), a display screen serial interface (DSI), and the like. In some embodiments, processor 110 and camera 193 communicate through a CSI interface to implement the capture functionality of electronic device 100. The processor 110 and the display screen 194 communicate through the DSI interface to implement the display function of the electronic device 100.
The GPIO interface may be configured by software. The GPIO interface may be configured as a control signal and may also be configured as a data signal. In some embodiments, a GPIO interface may be used to connect the processor 110 with the camera 193, the display 194, the wireless communication module 160, the audio module 170, the sensor module 180, and the like. The GPIO interface may also be configured as an I2C interface, an I2S interface, a UART interface, a MIPI interface, and the like.
It should be understood that the interface connection relationship between the modules illustrated in the embodiments of the present application is only an illustration, and does not limit the structure of the electronic device 100. In other embodiments of the present application, the electronic device 100 may also adopt different interface connection manners or a combination of multiple interface connection manners in the above embodiments.
The electronic device 100 implements display functions via the GPU, the display screen 194, and the application processor. The GPU is a microprocessor for image processing, and is connected to the display screen 194 and an application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or alter display information.
The display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like.
The internal memory 121 may be used to store computer-executable program code, which includes instructions. The internal memory 121 may include a program storage area and a data storage area. The storage program area may store an operating system, an application program (such as a sound playing function, an image playing function, etc.) required by at least one function, and the like. The storage data area may store data (such as audio data, phone book, etc.) created during use of the electronic device 100, and the like. In addition, the internal memory 121 may include a high-speed random access memory, and may further include a nonvolatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (UFS), and the like. The processor 110 executes various functional applications of the electronic device 100 and data processing by executing instructions stored in the internal memory 121 and/or instructions stored in a memory provided in the processor.
The pressure sensor 180A is used for sensing a pressure signal, and converting the pressure signal into an electrical signal. In some embodiments, the pressure sensor 180A may be disposed on the display screen 194. The pressure sensor 180A can be of a wide variety, such as a resistive pressure sensor, an inductive pressure sensor, a capacitive pressure sensor, and the like. The capacitive pressure sensor may be a sensor comprising at least two parallel plates having an electrically conductive material. When a force acts on the pressure sensor 180A, the capacitance between the electrodes changes. The electronic device 100 determines the strength of the pressure from the change in capacitance. When a touch operation is applied to the display screen 194, the electronic apparatus 100 detects the intensity of the touch operation according to the pressure sensor 180A. The electronic apparatus 100 may also calculate the touched position from the detection signal of the pressure sensor 180A. In some embodiments, the touch operations that are applied to the same touch position but different touch operation intensities may correspond to different operation instructions.
The gyro sensor 180B may be used to determine the motion attitude of the electronic device 100. In some embodiments, the angular velocity of electronic device 100 about three axes (i.e., the x, y, and z axes) may be determined by gyroscope sensor 180B. The gyro sensor 180B may be used for photographing anti-shake. The gyroscope sensor 180B may also be used for navigation, somatosensory gaming scenes.
The air pressure sensor 180C is used to measure air pressure. In some embodiments, electronic device 100 calculates altitude, aiding in positioning and navigation, from barometric pressure values measured by barometric pressure sensor 180C.
The acceleration sensor 180E may detect the magnitude of acceleration of the electronic device 100 in various directions (typically along three axes). When the electronic device 100 is stationary, the magnitude and direction of gravity can be detected. It can also be used to recognize the attitude of the electronic device, and is applied to landscape/portrait switching, pedometers, and other applications.
The fingerprint sensor 180H is used to collect a fingerprint. The electronic device 100 can utilize the collected fingerprint characteristics to unlock the fingerprint, access the application lock, photograph the fingerprint, answer an incoming call with the fingerprint, and so on. For example, a fingerprint sensor may be disposed on the front side of the electronic apparatus 100 (below the display screen 194), or a fingerprint sensor may be disposed on the back side of the electronic apparatus 100 (below the rear camera). In addition, the fingerprint recognition function may also be implemented by configuring a fingerprint sensor in the touch screen, that is, the fingerprint sensor may be integrated with the touch screen to implement the fingerprint recognition function of the electronic device 100. In this case, the fingerprint sensor may be disposed in the touch screen, may be a part of the touch screen, or may be otherwise disposed in the touch screen. In addition, the fingerprint sensor can also be implemented as a full panel fingerprint sensor, and thus, the touch screen can be regarded as a panel which can perform fingerprint collection at any position. In some embodiments, the fingerprint sensor may process the acquired fingerprint (e.g., whether the fingerprint is verified) and send the processed fingerprint to the processor 110, and the processor 110 performs corresponding processing according to the processing result of the fingerprint. In other embodiments, the fingerprint sensor may also send the captured fingerprint to the processor 110 for processing (e.g., fingerprint verification, etc.) by the processor 110. The fingerprint sensor in embodiments of the present application may employ any type of sensing technology including, but not limited to, optical, capacitive, piezoelectric, or ultrasonic sensing technologies, among others.
The touch sensor 180K is also referred to as a "touch panel". The touch sensor 180K may be disposed on the display screen 194, and the touch sensor 180K and the display screen 194 form a touch screen, which is also called a "touch screen". The touch sensor 180K is used to detect a touch operation applied thereto or nearby. The touch sensor can communicate the detected touch operation to the application processor to determine the touch event type. Visual output associated with the touch operation may be provided through the display screen 194. In other embodiments, the touch sensor 180K may be disposed on a surface of the electronic device 100, different from the position of the display screen 194.
The bone conduction sensor 180M may acquire a vibration signal. In some embodiments, the bone conduction sensor 180M may acquire a vibration signal of the human vocal part vibrating the bone mass. The bone conduction sensor 180M may also contact the human pulse to receive the blood pressure pulsation signal. In some embodiments, the bone conduction sensor 180M may also be disposed in a headset, integrated into a bone conduction headset. The audio module 170 may analyze a voice signal based on the vibration signal of the bone mass vibrated by the sound part acquired by the bone conduction sensor 180M, so as to implement a voice function. The application processor can analyze heart rate information based on the blood pressure beating signals acquired by the bone conduction sensor 180M, and the heart rate detection function is realized.
Although not shown in fig. 1, the electronic device 100 may further include a bluetooth device, a positioning device, a flash, a micro-projection device, a Near Field Communication (NFC) device, and the like, which are not described in detail herein.
Some concepts related to the embodiments of the present application are described below.
(1) The screen size of the electronic device.
In the embodiment of the application, the screen sizes of the electronic devices may be different, and the screen sizes of the electronic devices are described by taking the electronic devices as notebook computers, mobile phones or smart bands as examples.
The screen size of a notebook computer may include the following specifications: 8.9 inches, 11.1 inches, 10.2 inches, 12.1 inches, 13.3 inches, 13.4 inches, 14 inches, etc.
The screen size of the mobile phone can include the following specifications: 3.7 inches, 3.8 inches, 4 inches, 4.2 inches, 4.3 inches, 4.4 inches, 4.5 inches, 4.7 inches, 4.8 inches, 5.0 inches, 5.1 inches, 5.2 inches, 5.3 inches, 5.5 inches, 5.7 inches, 5.8 inches, 6 inches, 6.1 inches, 6.5 inches, 7 inches.
The screen size of the smart band may include several specifications: 1.0 inch, 1.2 inches.
(2) Map APP
The map APP in the embodiment of the application refers to an APP capable of displaying a map, such as the Baidu map APP or the Gaode map APP.
(3) Map magnification supported by map APP
The map APP defines a set of correspondences between scales and map magnifications. Table 1 below exemplarily shows one possible correspondence between the map magnification and the scale of the map APP. Taking the first row of values as an example: when the map magnification of the map APP is set to level 19, 1 centimeter shown on the map represents 20 meters of real distance.
The correspondence between the map magnification and the scale of the map APP shown in Table 1 in the embodiment of the present application is merely an example, and is not limiting.
TABLE 1 map APP map magnification and scale correspondence
Map magnification (level)    Scale
19 1:20 m
18 1:50 m
17 1:100 m
16 1:200 m
15 1:500 m
14 1:1 km
13 1:2 km
12 1:5 km
11 1:10 km
10 1:20 km
9 1:25 km
8 1:50 km
7 1:100 km
6 1:200 km
5 1:500 km
4 1:1000 km
3 1:2000 km
2 1:5000 km
1 1:10000 km
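The Table 1 correspondence can be held in a simple lookup structure. The sketch below is illustrative only (the names and representation are our own, not part of the specification); it encodes each level's scale as the number of meters of real distance represented by 1 cm on the map.

```python
# Hypothetical encoding of Table 1: map magnification level -> meters of
# real distance represented by 1 cm on the map. Values follow Table 1.
SCALE_METERS_PER_CM = {
    19: 20, 18: 50, 17: 100, 16: 200, 15: 500,
    14: 1_000, 13: 2_000, 12: 5_000, 11: 10_000, 10: 20_000,
    9: 25_000, 8: 50_000, 7: 100_000, 6: 200_000, 5: 500_000,
    4: 1_000_000, 3: 2_000_000, 2: 5_000_000, 1: 10_000_000,
}

def scale_for_level(level: int) -> int:
    """Return the real distance (in meters) represented by 1 cm at a level."""
    return SCALE_METERS_PER_CM[level]
```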
(4) Sports APP.
The sports APP in the embodiment of the present application refers to an APP that can show a motion trajectory of a user based on a map. The map used by the sports APP to show the motion trajectory of the user may be, for example, the Baidu map, the Gaode map, or the like.
(5) Map magnification supported by sports APP.
The map magnification supported by the sports APP in the embodiment of the present application refers to the map magnifications allowed for the map used to show the user's motion trajectory in the sports APP.
The maximum map magnification supported by the sports APP may be the same as or different from the maximum map magnification supported by the map it uses. For example, if the maximum map magnification supported by the map used by the sports APP is level 19 as shown in Table 1, the maximum map magnification supported by the sports APP itself may be level 13, which means that when the sports APP shows the user's trajectory on that map, the user can magnify the map within the sports APP to at most level 13.
Based on the above description, fig. 2 exemplarily shows a flow chart of a motion trajectory display method provided in an embodiment of the present application, and as shown in fig. 2, the method includes:
in step 201, an electronic device obtains a first screen size of the electronic device.
For convenience of description, the screen size of the electronic device is referred to as a first screen size in the embodiments of the present application. In the embodiment of the application, the electronic device can obtain the first screen size from factory parameters.
Step 202, the electronic device judges whether the first screen size is larger than a preset second screen size; if yes, go to step 203; if not, go to step 204.
The second screen size in the embodiment of the present application is a preset value, and can be understood as a reference value. For example, the second screen size may be preset to 10 inches, or 4 inches, etc. Regarding the specific value of the second screen size, the following will be described in detail by specific examples, and will not be described herein again.
Step 203, the electronic device determines the maximum map magnification factor supported by the map APP installed in the electronic device as a first map magnification factor.
In step 204, the electronic device calculates a first ratio between the first screen size and the second screen size, and calculates a product of the first ratio and a preset scale of the second map magnification factor to obtain a first scale.
In the embodiment of the present application, a correspondence between the second screen size and a second map magnification is preset. When the user views the motion trajectory in the sports APP, the trajectory may be zoomed out and in, and in this process the user may use several map magnifications. In the embodiment of the present application, the second map magnification may be understood as the estimated maximum map magnification, that is, the maximum among the map magnifications used when the user views the motion trajectory on an electronic device of the second screen size.
In the embodiments of the present application, for convenience of description, the product of the first ratio and the scale of the magnification of the second map is referred to as a first scale.
In step 205, in the preset correspondence between map magnification and scale (see Table 1), the electronic device determines whether there is a map magnification whose corresponding scale is the same as the first scale. If yes, go to step 206; if not, go to step 207.
In step 206, the electronic device determines the map magnification corresponding to the first scale as the first map magnification according to the preset corresponding relationship between the map magnification and the scale.
Step 207, the electronic device determines, as a first map magnification, a map magnification corresponding to a scale adjacent to the value of the first scale in the correspondence, according to the correspondence between the preset map magnification and the scale.
For example, in the preset mapping relationship between the map magnification factor and the scale, if the first scale is smaller than the scale corresponding to the third map magnification factor and larger than the scale corresponding to the fourth map magnification factor, the third map magnification factor may be determined as the first map magnification factor, or the fourth map magnification factor may be determined as the first map magnification factor, and which value is specifically selected may be determined according to the actual application scenario.
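Steps 205 to 207 can be sketched as follows, assuming the Table 1 correspondence is stored as meters-per-centimeter values; the `prefer_larger_level` flag stands in for the application-specific choice between the third and fourth map magnifications. All names here are illustrative, not from the specification.

```python
# Illustrative sketch of steps 205-207. The table encodes part of Table 1
# (level -> meters of real distance represented by 1 cm on the map).
SCALE_METERS_PER_CM = {19: 20, 18: 50, 17: 100, 16: 200, 15: 500}

def first_map_magnification(first_scale_m_per_cm, table=SCALE_METERS_PER_CM,
                            prefer_larger_level=True):
    # Step 206: the first scale matches a table entry exactly.
    for level, meters in table.items():
        if meters == first_scale_m_per_cm:
            return level
    # Step 207: the first scale falls between two adjacent levels; pick one.
    for level in sorted(table, reverse=True):
        if table[level] > first_scale_m_per_cm:
            finer = min(level + 1, max(table))
            return finer if prefer_larger_level else level
    return min(table)  # coarser than every entry in the table
```

For a first scale of 1:66.7 meters, this returns level 18 when the larger adjacent level is preferred, and level 17 otherwise.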
Step 208, the electronic device judges whether the first map magnification is larger than the maximum map magnification supported by the sport APP; if yes, go to step 209; if not, go to step 210.
And step 209, the electronic equipment takes the sampling parameter corresponding to the maximum map magnification factor supported by the motion APP as a target sampling parameter according to the corresponding relation between the preset map magnification factor and the sampling parameter.
And step 210, the electronic equipment takes the sampling parameter corresponding to the first map magnification as a target sampling parameter according to the corresponding relation between the preset map magnification and the sampling parameter.
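Steps 208 to 210 amount to clamping the first map magnification to the sports APP's maximum before the sampling-parameter lookup. A minimal sketch, in which the correspondence table (keep 1 track point out of every N) is invented purely for illustration:

```python
# Hypothetical correspondence between map magnification and sampling
# parameter: at level L, keep 1 track point out of every N. Higher levels
# keep more points (smaller N); the N values themselves are invented.
KEEP_ONE_PER_N = {18: 1, 17: 10, 16: 50, 15: 100}

def target_sampling_parameter(first_level, app_max_level, table=KEEP_ONE_PER_N):
    # Steps 208-209: never exceed the maximum magnification the sports APP supports.
    effective_level = min(first_level, app_max_level)
    # Step 210: look up the sampling parameter for the effective level.
    return table[effective_level]
```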
In the embodiment of the present application, the sampling parameter is described as including a sampling ratio, and the sampling ratio included in the target sampling parameter is referred to as the target sampling ratio. In one possible implementation, the preset correspondence between map magnification and sampling parameter may satisfy the following condition: the smaller the sampling ratio, the fewer track points remain after sampling, and in this case the map magnification corresponding to that sampling ratio can be smaller. Thus, at a smaller map magnification the sampling ratio can be reduced, which reduces the time consumed in generating the motion trajectory. Conversely, the larger the sampling ratio, the more track points remain after sampling, and in this case the map magnification corresponding to that sampling ratio can be larger, which alleviates the distortion of the motion trajectory at larger map magnifications.
And step 211, the electronic equipment samples the track points according to the target sampling parameters to obtain the sampled track points.
For example, if the target sampling parameter is 1:100, the specific implementation process includes: for the track points stored in the storage area, collecting 1 track point out of every 100 track points, in order of their acquisition time, as a sampled track point. Optionally, the selection may be performed according to a preset rule; for example, for every 100 track points, the first of the 100 track points is used as the sampled track point; or the last of the 100 track points; or the 50th of the 100 track points; or a random one of the 100 track points.
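The sampling of step 211, with the preset keep-one-of-N rules just described, can be sketched as follows (the function and parameter names are our own):

```python
import random

def downsample(track_points, n, keep="first"):
    """Keep one track point from each group of n consecutive points (step 211).

    track_points are assumed ordered by acquisition time; which member of each
    group is kept (first, last, middle, or random) is the preset rule.
    """
    sampled = []
    for i in range(0, len(track_points), n):
        group = track_points[i:i + n]
        if keep == "first":
            sampled.append(group[0])
        elif keep == "last":
            sampled.append(group[-1])
        elif keep == "middle":
            sampled.append(group[len(group) // 2])
        else:  # "random"
            sampled.append(random.choice(group))
    return sampled
```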
And 212, displaying the motion trail on the display screen of the electronic equipment through a graphical interface of the motion APP by the electronic equipment according to the sampled track points.
It can be seen that, in the first aspect, the target sampling parameter is determined in combination with the first map magnification, and is not a fixed value. Since the screen sizes of electronic devices may differ, the estimated first map magnification also differs. On a mobile phone with a larger screen, the user may still see a relatively complete motion trajectory even after zooming the map to a larger magnification; therefore, when viewing the motion trajectory on a large-screen electronic device, the first map magnification used by the user may be larger. In the embodiment of the application, because the target sampling parameter is determined in combination with the first map magnification, more sampled track points may be used when the first map magnification is larger, which alleviates the distortion of the motion trajectory.
In the second aspect, the screen of a smart band is small: if the user sets the map magnification very high when viewing the motion trajectory on a smart band, only a small segment of the motion trajectory can be seen, rather than a more complete trajectory. Since a user viewing the motion trajectory through the sports APP usually wants to see a relatively complete trajectory, it can be inferred that, when viewing the motion trajectory on a small-screen electronic device such as a smart band, the first map magnification used is usually small. Because the first map magnification is small, fewer sampled track points are sufficient to alleviate the distortion of the motion trajectory. Moreover, in this case, the number of sampled track points is reduced, so the generation time of the motion trajectory can be shortened and time consumption reduced.
In the third aspect, because the processing capability of hardware such as the internal processor of a small-screen device is also weak, with the scheme provided by the present application the number of track points to be sampled by the electronic device is reduced; that is, the amount of data the electronic device must process can be reduced, so that the workload of the processor of the electronic device can be reduced, the requirements on the processor can be lowered, and the production cost of the electronic device can be reduced.
In summary, in the scheme provided by the embodiment of the application, the target sampling ratio can be determined according to the specific situation of the electronic device. Compared with a scheme in which all electronic devices share one sampling ratio, the number of sampled track points is reduced as much as possible while the distortion of the motion trajectory is still alleviated, reducing the time consumed in generating the motion trajectory. In the embodiment of the present application, the target sampling parameter takes both the distortion problem and the time-consumption problem of the motion trajectory into account, so that the determined target sampling parameter is more reasonable.
The beneficial effects of the embodiments of the present application are explained below with reference to fig. 3a to 4 c. In the embodiment of the present application, fig. 3a exemplarily shows a schematic diagram showing a movement track on a mobile phone at a map magnification of 16 levels (1:200 meters), fig. 3b exemplarily shows a schematic diagram showing a movement track on a mobile phone at a map magnification of 18 levels (1:50 meters), and fig. 3c exemplarily shows another schematic diagram showing a movement track on a mobile phone at a map magnification of 18 levels (1:50 meters). Fig. 4a illustrates a schematic diagram showing a movement track on the smart bracelet at a map magnification of 18 levels (1:50 meters), fig. 4b illustrates a schematic diagram showing a movement track on the smart bracelet at a map magnification of 17 levels (1:100 meters), and fig. 4c illustrates a schematic diagram showing a movement track on the smart bracelet at a map magnification of 18 levels (1:50 meters).
Figs. 3a to 4c all show part or all of the same motion trajectory. As shown in fig. 3a, when the motion trajectory is displayed on the mobile phone at map magnification level 16, the whole trajectory can be seen. If the sampling ratio were set small in consideration of the time consumed in generating the motion trajectory, the distortion would not be obvious when viewed at map magnification level 16; however, because there are fewer sampled points, the distortion of the motion trajectory becomes obvious when viewed at map magnification level 18, as shown in fig. 3b.
If the scheme provided by the embodiment of the present application is applied, it can be considered that the maximum map magnification that may be used by the user on the mobile phone is 18 levels, and then sampling is performed at the sampling rate corresponding to 18 levels, as shown in fig. 3c, even if the user views the motion trajectory with the map magnification of 18 levels, a serious distortion situation is not generated.
A user viewing the motion trajectory on a large screen may view it at map magnification level 18; but if the user views the motion trajectory at level 18 on a small screen, as shown in fig. 4a, essentially no motion trajectory can be seen because the screen of the electronic device is small. This is not what the user intends: the user's intention is to see a relatively complete motion trajectory on the display screen of the electronic device. Therefore, on a small-screen electronic device such as a smart band, the maximum map magnification used may be level 17 (as shown in fig. 4b) rather than level 18 (as shown in fig. 4c). The target sampling ratio is then determined directly based on the maximum map magnification likely to be used by the user on the electronic device; since that maximum map magnification is smaller, the target sampling ratio is smaller, and fewer track points need to be sampled.
The second screen size and the second map magnification referred to in the above step 202 and step 204 are explained below by specific examples.
Example one: the second screen size is 10 inches, and the second map magnification is set to level 19; in this case, the second map magnification is equal to the maximum map magnification supported by the map APP. When the first screen size is smaller than the second screen size, for example 3 inches, the product of the first ratio (0.3) and the scale of the second map magnification (1:20 meters) can be calculated to obtain the first scale (1:66.7 meters). Looking up Table 1, the first scale (1:66.7 meters) lies between level 18 (1:50 meters) and level 17 (1:100 meters), so the first map magnification is determined to be level 18 or level 17. Table 2 below takes the case in which the third map magnification (level 18) is determined as the first map magnification as an example.
Example two: the second screen size is 4 inches, and the second map magnification is level 18; in this example, the second map magnification is smaller than the maximum map magnification supported by the map APP, and the second screen size is smaller than the screen size corresponding to the maximum map magnification. When the first screen size is smaller than the second screen size, for example 3 inches, the product of the first ratio (0.75) and the scale of the second map magnification (1:50 meters) can be calculated to obtain the first scale (1:66.7 meters). Looking up Table 1, the first scale (1:66.7 meters) lies between level 18 (1:50 meters) and level 17 (1:100 meters), so the first map magnification is determined to be level 18 or level 17.
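The arithmetic of examples one and two can be checked as follows. With the scale expressed as meters of real distance per centimeter of map, multiplying the first ratio into the scale fraction (e.g. 0.3 × 1:2000) is equivalent to dividing the meters-per-centimeter value by the first ratio:

```python
def first_scale_m_per_cm(first_screen_inches, second_screen_inches,
                         second_scale_m_per_cm):
    """Compute the first scale (meters per cm) from the screen-size ratio."""
    first_ratio = first_screen_inches / second_screen_inches
    # e.g. 0.3 * (1 cm : 2000 cm) = 1 cm : 6667 cm, i.e. roughly 1:66.7 m.
    return second_scale_m_per_cm / first_ratio

# Example one: 3-inch screen against a 10-inch reference at 1:20 m (level 19).
# Example two: 3-inch screen against a 4-inch reference at 1:50 m (level 18).
# Both yield approximately 1:66.7 m.
```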
Based on the above example one and example two, the following table 2 exemplarily shows a correspondence relationship between a possible screen size, a map magnification, and a scale provided by an embodiment of the present application.
TABLE 2 correspondence between screen size, map magnification, and scale
Screen size    Map magnification    Scale
Over 10 inches    Level 19    1:20 m
10 inches, and 10 inches to 4 inches    Level 19    1:20 m
4.0 inches, and 4.0 inches to 2.0 inches    Level 18    1:50 m
2.0 inches, and 2.0 inches to 1.0 inch    Level 17    1:100 m
1.0 inch and below    Level 16    1:200 m
The correspondence shown in Table 2 is presented taking as an example a second screen size of 10 inches, as in Table 2, or smaller. Of course, the second screen size could instead be 20 inches with a second map magnification of level 19; in that case the relationship between screen size, map magnification, and scale would differ from that shown in Table 2, but the calculation method is similar and the description of Table 2 applies by analogy.
The values in the first three rows of Table 2 are taken as an example; the other rows are similar and are not described again. As shown in Table 2, when the screen of the electronic device is over 10 inches, the maximum map magnification that the user may use to display the motion trajectory on the electronic device is level 19, and the map scale corresponding to level 19 is 1:20 meters. When the screen of the electronic device is 10 inches, or between 10 inches and 4 inches, the maximum map magnification the user may use to display the motion trajectory is likewise level 19. When the screen of the electronic device is 4 inches, or between 4 inches and 2 inches, the maximum map magnification the user may use to display the motion trajectory is level 18.
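The Table 2 bands can be expressed as a simple threshold function. Boundary handling follows the row labels of Table 2: each band's lower boundary belongs to the next band down (e.g. exactly 4.0 inches maps to level 18).

```python
def estimated_max_level(screen_size_inches):
    """Estimated maximum map magnification for a screen size, per Table 2.

    Each band's lower boundary belongs to the next band down (e.g. exactly
    4.0 inches maps to level 18), following the row labels of Table 2.
    """
    if screen_size_inches > 4.0:
        return 19
    if screen_size_inches > 2.0:
        return 18
    if screen_size_inches > 1.0:
        return 17
    return 16
```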
In the above steps 202 to 206, the first map magnification is estimated according to the first screen size of the electronic device. Several possible ways for determining the magnification of the first map are also provided in the embodiments of the present application.
In a first mode, the electronic device may determine the first map magnification according to a personal preference setting of the user, for example, the user may set a map magnification that may be used when the user displays the movement track through the electronic device.
In a second mode, the corresponding relationship between the type identifier of the electronic device and the map magnification factor can be preset in the electronic device, so that the electronic device determines the first map magnification factor corresponding to the electronic device according to the corresponding relationship. The corresponding relationship may be stored in a memory on the electronic device side, and in this case, the electronic device may determine the type identifier of the electronic device from the factory parameters, and then query the corresponding relationship to obtain the first map magnification. The corresponding relationship may also be stored in the network side, and after the electronic device queries the type identifier of the electronic device from the factory parameters, the electronic device sends a query request to the network side to query the first map magnification factor, where the query request carries the type identifier of the electronic device. Optionally, the type of the electronic device may be set by a manufacturer, and the type of the electronic device may include: smart band, cell phone, computer, tablet, etc.
In this embodiment of the present application, before step 211, the user's motion trajectory may be sampled at a preset sampling frequency to obtain the full set of track points. For one track point in the embodiment of the present application, the specific processing procedure may include: acquiring information such as the current longitude, latitude, and timestamp from the GPS device of the electronic device, and storing this information, i.e. the relevant information of the track point, in a local memory.
In one possible embodiment, the sampling in step 211 may be performed on the full set of track points. In another possible implementation, the full set of track points can first be screened and redundant track points deleted, so as to save storage space. That is, for M track points obtained by sampling, where M is an integer greater than 2, redundant track points are deleted from the M track points according to the sampling time and/or position of each of the M track points. For example, if the (i+1)th track point among the M track points satisfies one or more of the following conditions, the (i+1)th track point is deleted:
the duration between the (i + 2) th track point and the ith track point is less than a duration threshold;
the included angle between the first vector and the second vector belongs to a preset target included angle range, where the first vector is composed of the (i+1)th track point and the (i+2)th track point, pointing from the (i+1)th track point to the (i+2)th track point, and the second vector is composed of the (i+1)th track point and the ith track point, pointing from the (i+1)th track point to the ith track point.
It should be noted that, in the embodiment of the present application, the algorithm for deleting the redundant trace points is executed in a loop. For example, 10 trace points are obtained, and the 10 trace points are the total number of trace points. The 10 th track point, the 9 th track point and the 8 th track point can be combined to judge whether the 9 th track point is a redundant track point or not, and if yes, the 9 th track point is deleted. After the deletion, whether the 8 th track point is a redundant track point is judged by combining the 10 th track point, the 8 th track point and the 7 th track point, and if so, the 8 th track point is deleted. After the deletion, whether the 7 th track point is a redundant track point is judged by combining the 10 th track point, the 7 th track point and the 6 th track point, and if so, the 7 th track point is deleted. If not, the 7 th track point, the 6 th track point and the 5 th track point are combined to judge whether the 6 th track point is a redundant track point or not. And so on. That is to say, in this embodiment of the application, for each track point, whether the track point is a redundant track point is determined, and the determination is made according to two adjacent track points before and after the track point in the track points that are not deleted in the storage area.
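The looped deletion just described can be sketched as follows. The duration threshold and target angle range are illustrative values only; an angle near 180° between the two vectors out of the middle point means the three points are roughly collinear, so the middle point adds little information.

```python
import math

def angle_deg(p_prev, p_mid, p_next):
    """Angle at p_mid between the vectors p_mid->p_next and p_mid->p_prev."""
    v1 = (p_next[0] - p_mid[0], p_next[1] - p_mid[1])
    v2 = (p_prev[0] - p_mid[0], p_prev[1] - p_mid[1])
    n1, n2 = math.hypot(*v1), math.hypot(*v2)
    if n1 == 0 or n2 == 0:
        return 0.0
    cos = (v1[0] * v2[0] + v1[1] * v2[1]) / (n1 * n2)
    return math.degrees(math.acos(max(-1.0, min(1.0, cos))))

def prune_redundant(points, times, min_angle=170.0, min_duration=2.0):
    """Delete redundant middle points, scanning from the end of the track.

    A middle point is deleted when its surviving neighbours were sampled less
    than min_duration seconds apart, or when the two vectors out of the middle
    point form an angle in [min_angle, 180], i.e. the three points are nearly
    collinear. Threshold values are illustrative, not from the specification.
    """
    pts = list(zip(points, times))
    i = len(pts) - 2                       # candidate middle point
    while i >= 1:
        (p_prev, t_prev), (p_mid, _), (p_next, t_next) = pts[i - 1], pts[i], pts[i + 1]
        too_close = (t_next - t_prev) < min_duration
        collinear = angle_deg(p_prev, p_mid, p_next) >= min_angle
        if too_close or collinear:
            del pts[i]                     # the later neighbour survives
        i -= 1
    return [p for p, _ in pts]
```

After a deletion, the next candidate is judged against its preceding point and the surviving later point, matching the 10th/9th/8th then 10th/8th/7th sequence described above.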
Fig. 5 is a schematic diagram illustrating an angle between the first vector and the second vector. As shown in fig. 5, when the included angle between the first vector and the second vector belongs to the preset target included angle range, the first vector and the second vector may be said to substantially belong to the same straight line, and the (i + 2) th track point, the (i + 1) th track point, and the (i) th track point may be said to substantially be on the same straight line.
In a possible embodiment, if the first map magnification is greater than the maximum map magnification supported by the sport APP, an angle range corresponding to the maximum map magnification supported by the sport APP is taken as a target angle range according to a preset corresponding relationship between the map magnification and the angle range. And if the first map magnification is not larger than the maximum map magnification supported by the motion APP, taking the included angle range corresponding to the first map magnification as a target included angle range according to the corresponding relation between the preset map magnification and the included angle range.
In one possible embodiment, the preset correspondence between map magnification and included angle range may satisfy the following condition: the smaller the minimum value in the included angle range, the smaller the map magnification corresponding to that range. That is to say, the smaller the map magnification, the smaller the minimum value of the included angle range within which points are allowed to be deleted as redundant; thus, when the map magnification is smaller, more redundant points can be deleted, further reducing the number of track points that need to be stored and saving storage space.
TABLE 3 map magnification and angle range
According to the foregoing method, fig. 6 is a schematic structural diagram of an electronic device provided in the embodiment of the present application, and as shown in fig. 6, the electronic device may be a chip or a circuit, such as a chip or a circuit that can be disposed on the electronic device.
Further, the electronic device 1401 may further comprise a bus system, wherein the processor 1402, the memory 1404, the communication interface 1403, and the display 1405 may be connected via the bus system.
For example, the processor 1402 may be a field programmable gate array (FPGA), an application specific integrated circuit (ASIC), a system on chip (SoC), a central processing unit (CPU), a network processor (NP), a digital signal processor (DSP), a microcontroller unit (MCU), a programmable logic device (PLD), or another integrated chip.
In implementation, the steps of the above method may be performed by integrated logic circuits of hardware or instructions in the form of software in the processor 1402. The steps of a method disclosed in connection with the embodiments of the present application may be embodied directly in a hardware processor, or in a combination of hardware and software modules within the processor 1402. The software module may be located in ram, flash memory, rom, prom, or eprom, registers, etc. storage media as is well known in the art. The storage medium is located in the memory 1404, and the processor 1402 reads the information in the memory 1404 and performs the steps of the above method in combination with the hardware thereof.
It should be noted that the processor 1402 in the embodiment of the present application may be an integrated circuit chip having signal processing capability. In implementation, the steps of the above method embodiments may be performed by integrated logic circuits of hardware in a processor or instructions in the form of software. The processor described above may be a general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components. The various methods, steps, and logic blocks disclosed in the embodiments of the present application may be implemented or performed. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like. The steps of the method disclosed in connection with the embodiments of the present application may be directly implemented by a hardware decoding processor, or implemented by a combination of hardware and software modules in the decoding processor. The software module may be located in ram, flash memory, rom, prom, or eprom, registers, etc. storage media as is well known in the art. The storage medium is located in a memory, and a processor reads information in the memory and completes the steps of the method in combination with hardware of the processor.
It is understood that the memory 1404 in the embodiments of the present application may be a volatile memory or a non-volatile memory, or may include both volatile and non-volatile memory. The non-volatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable programmable ROM (EPROM), an electrically erasable programmable ROM (EEPROM), or a flash memory. The volatile memory may be a random access memory (RAM), which functions as an external cache. By way of example and not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), and direct rambus RAM (DR RAM).
In one possible implementation, the processor 1402 is configured to predict a first map magnification used by a user in displaying a motion trajectory on an electronic device according to a first screen size of the electronic device; determining a target sampling parameter according to the first map magnification factor, the maximum map magnification factor supported by the motion APP installed on the electronic equipment, and the corresponding relation between the preset map magnification factor and the sampling parameter; sampling the track points according to the target sampling parameters to obtain the sampled track points; and displaying the motion trail on a display screen 1405 of the electronic device through a graphical interface of the motion APP according to the sampled track points.
In one possible implementation, when the first screen size is greater than a second screen size, the processor 1402 may determine the first map magnification to be the maximum map magnification supported by a map APP installed on the electronic device; when the first screen size is not greater than the second screen size, the processor 1402 may determine a first ratio between the first screen size and the second screen size, and calculate the first map magnification according to the first ratio and a correspondence between the second screen size and a second map magnification, where the second map magnification is an estimated map magnification used when the motion track is displayed on an electronic device having the second screen size.
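The screen-size-based estimate above can be sketched as follows. This is a minimal illustration, assuming a linear scaling between the screen-size ratio and the reference magnification; the function and parameter names are hypothetical, since the embodiment does not prescribe a specific formula or API.

```python
def estimate_first_map_magnification(first_screen_size, second_screen_size,
                                     second_map_magnification, max_map_magnification):
    """Estimate the map magnification a user is likely to use on this screen."""
    if first_screen_size > second_screen_size:
        # Large screen: assume the maximum magnification the map APP supports.
        return max_map_magnification
    # Smaller screen: scale the reference magnification by the screen-size ratio
    # (one plausible reading of "calculate according to the first ratio").
    first_ratio = first_screen_size / second_screen_size
    return second_map_magnification * first_ratio
```

The reference pair (second screen size, second map magnification) would typically be calibrated once from user data on a common device model.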
In one possible implementation, when the first map magnification is greater than the maximum map magnification supported by the motion APP, the processor 1402 may use, according to the preset correspondence between map magnifications and sampling parameters, the sampling parameter corresponding to the maximum map magnification supported by the motion APP as the target sampling parameter; when the first map magnification is not greater than the maximum map magnification supported by the motion APP, the processor 1402 may use, according to the preset correspondence, the sampling parameter corresponding to the first map magnification as the target sampling parameter.
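The clamping logic above can be sketched as a small lookup, assuming the preset correspondence is a table keyed by magnification level; the names are illustrative, not part of the embodiment.

```python
def select_target_sampling_parameter(first_map_magnification, app_max_magnification,
                                     magnification_to_parameter):
    """Clamp the predicted magnification to what the motion APP supports,
    then look up the preset sampling parameter for that magnification."""
    effective_magnification = min(first_map_magnification, app_max_magnification)
    return magnification_to_parameter[effective_magnification]
```

At lower magnifications the table would return a coarser sampling parameter, so fewer track points are drawn; that is the purpose of the correspondence.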
In one possible implementation, before the sampling, the processor 1402 may further delete redundant track points from M track points according to a sampling time and/or a position of each of the M track points acquired by the motion APP, where M is an integer greater than 2, and the M track points are obtained by the motion APP by sampling the motion track at a preset sampling frequency. Optionally, the processor 1402 may delete an (i+1)th track point of the M track points when the (i+1)th track point satisfies one or more of the following conditions:

the acquisition time interval between an (i+2)th track point and an ith track point is less than a duration threshold, where i is a positive integer less than M;

an included angle between a first vector and a second vector falls within a preset target angle range, where the first vector is formed by the (i+1)th track point and the (i+2)th track point and points from the (i+1)th track point to the (i+2)th track point, and the second vector is formed by the (i+1)th track point and the ith track point and points from the (i+1)th track point to the ith track point.
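The two deletion conditions above can be sketched as a predicate over three consecutive points. This is an illustrative reading only: each point is assumed to be a (time, x, y) tuple, the angle is computed in degrees, and the threshold values are hypothetical tuning parameters not fixed by the embodiment.

```python
import math

def is_redundant(p_i, p_i1, p_i2, duration_threshold, target_angle_range):
    """Return True if the middle point p_i1 may be deleted.

    Each point is a (t, x, y) tuple; target_angle_range is (low_deg, high_deg).
    """
    # Condition 1: the two neighbours were acquired close together in time.
    if p_i2[0] - p_i[0] < duration_threshold:
        return True
    # Condition 2: the angle between vector (p_i1 -> p_i2) and vector
    # (p_i1 -> p_i) lies in the target range; an angle near 180 degrees
    # means the three points are nearly collinear.
    v1 = (p_i2[1] - p_i1[1], p_i2[2] - p_i1[2])
    v2 = (p_i[1] - p_i1[1], p_i[2] - p_i1[2])
    n1, n2 = math.hypot(*v1), math.hypot(*v2)
    if n1 == 0 or n2 == 0:
        return True  # coincident points carry no extra information
    cos_angle = (v1[0] * v2[0] + v1[1] * v2[1]) / (n1 * n2)
    angle = math.degrees(math.acos(max(-1.0, min(1.0, cos_angle))))
    low, high = target_angle_range
    return low <= angle <= high
```

An angle range such as (170, 180) keeps corner points where the track turns and drops points lying on a nearly straight segment, which matches the intent of the condition.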
For the concepts, explanations, details and other steps related to the technical solutions provided in the embodiments of the present application related to the electronic device, please refer to the descriptions of the foregoing methods or other embodiments, which are not repeated herein.
Based on the above embodiments and the same concept, fig. 7 is a schematic diagram of an electronic device provided in an embodiment of the present application, and as shown in fig. 7, the electronic device 1501 may be a chip or a circuit, such as a chip or a circuit that may be disposed in the electronic device.
As shown in fig. 7, the electronic device 1501 may include a processing unit 1502, a display unit 1503.
In one possible implementation, the processing unit 1502 is configured to predict a first map magnification used by a user in displaying a motion trajectory on the electronic device according to a first screen size of the electronic device; determining a target sampling parameter according to the first map magnification, the maximum map magnification supported by the motion APP installed on the electronic device 1501 and the corresponding relationship between the preset map magnification and the sampling parameter; sampling the track points according to the target sampling parameters to obtain the sampled track points; and the display unit 1503 is used for displaying the motion trail through a graphical interface of the motion APP according to the sampled track points.
For the concepts, explanations, details and other steps related to the technical solutions provided in the embodiments of the present application related to the electronic device, please refer to the descriptions of the foregoing methods or other embodiments, which are not repeated herein.
It is to be understood that the functions of the units in the electronic device 1501 can refer to the implementation of the corresponding method embodiments, and are not described herein again.
It should be understood that the above division of the units of the electronic device is only a division of logical functions, and the actual implementation may be wholly or partially integrated into one physical entity or may be physically separated. In this embodiment, the display unit 1503 may be implemented by the display 1405 in fig. 6, and the processing unit 1502 may be implemented by the processor 1402 in fig. 6.
According to the method provided by the embodiment of the present application, the present application further provides a computer program product, which includes: computer program code which, when run on a computer, causes the computer to perform the method of any of the embodiments shown in fig. 2.
According to the method provided by the embodiment of the present application, the present application further provides a computer-readable storage medium storing program code, which when run on a computer, causes the computer to execute the method of any one of the embodiments shown in fig. 2.
The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium, for example, from one website, computer, server, or data center to another website, computer, server, or data center in a wired (for example, coaxial cable, optical fiber, or digital subscriber line (DSL)) or wireless (for example, infrared, radio, or microwave) manner.
The apparatuses in the foregoing apparatus embodiments correspond to the terminal devices or network devices in the method embodiments, and a corresponding module or unit performs a corresponding step; for example, a communication unit (transceiver) performs the receiving or transmitting steps in the method embodiments, and steps other than transmitting and receiving may be performed by a processing unit (processor). For functions of specific units, refer to the corresponding method embodiments. There may be one or more processors.
As used in this specification, the terms "component," "module," "system," and the like are intended to refer to a computer-related entity, either hardware, firmware, a combination of hardware and software, or software in execution. For example, a component may be, but is not limited to being, a process running on a processor, an object, an executable, a thread of execution, a program, and/or a computer. By way of illustration, both an application running on a computing device and the computing device can be a component. One or more components can reside within a process and/or thread of execution and a component may be localized on one computer and/or distributed between two or more computers. In addition, these components can execute from various computer readable media having various data structures stored thereon. The components may communicate by way of local and/or remote processes such as in accordance with a signal having one or more data packets (e.g., data from two components interacting with another component in a local system, distributed system, and/or across a network such as the internet with other systems by way of the signal).
Those of ordinary skill in the art will appreciate that the various illustrative logical blocks and steps (step) described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, a division of a unit is merely a logical division, and an actual implementation may have another division, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a read-only memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (11)

1. A motion trail display method is applied to electronic equipment and comprises the following steps:
according to the first screen size of the electronic equipment, a first map magnification factor used when a user shows a motion track on the electronic equipment is estimated;
determining a target sampling parameter according to the first map magnification factor, the maximum map magnification factor supported by a motion APP installed in the electronic equipment, and the corresponding relation between a preset map magnification factor and the sampling parameter;
sampling the track points acquired by the motion APP according to the target sampling parameters to obtain sampled track points;
and displaying the motion trail on the display screen of the electronic equipment through the graphical interface of the motion APP according to the sampled track points.
2. The method of claim 1, wherein the predicting a first map magnification used by a user when a motion track is displayed on the electronic device through the motion APP comprises:
if the first screen size is larger than the second screen size, determining that the first map magnification is the maximum map magnification supported by a map APP installed in the electronic equipment;
if the first screen size of the electronic equipment is not larger than the second screen size, determining a first ratio between the first screen size and the second screen size, and calculating the first map magnification according to the first ratio and the corresponding relation between the second screen size and the second map magnification; and the second map magnification factor is a pre-estimated map magnification factor used when the motion trail is displayed on the electronic equipment with the second screen size through the motion APP.
3. The method according to claim 1 or 2, wherein the determining a target sampling parameter according to the first map magnification and the maximum map magnification supported by the motion APP comprises:
if the first map magnification is greater than the maximum map magnification supported by the motion APP, using, according to the preset correspondence between map magnifications and sampling parameters, the sampling parameter corresponding to the maximum map magnification supported by the motion APP as the target sampling parameter;
and if the first map magnification is not greater than the maximum map magnification supported by the motion APP, using, according to the preset correspondence between map magnifications and sampling parameters, the sampling parameter corresponding to the first map magnification as the target sampling parameter.
4. The method according to any one of claims 1 to 3, wherein before the sampling the track points according to the target sampling parameter to obtain the sampled track points, the method further comprises:
deleting redundant track points from M track points according to a sampling time and/or a position of each of the M track points acquired by the motion APP, wherein M is an integer greater than 2;
and the M track points are obtained by the motion APP by sampling the motion track at a preset sampling frequency.
5. The method according to claim 4, wherein the deleting redundant track points from the M track points according to the sampling time and/or position of each of the M track points comprises:
if an (i+1)th track point of the M track points meets one or more of the following conditions, deleting the (i+1)th track point:
the acquisition time interval between an (i+2)th track point and an ith track point is less than a duration threshold, wherein i is a positive integer less than M;
an included angle between a first vector and a second vector falls within a preset target angle range, wherein the first vector is formed by the (i+1)th track point and the (i+2)th track point and points from the (i+1)th track point to the (i+2)th track point, and the second vector is formed by the (i+1)th track point and the ith track point and points from the (i+1)th track point to the ith track point.
6. An electronic device, comprising: a display screen; one or more processors; a memory; and one or more computer programs;
the one or more computer programs stored in the memory, the one or more computer programs comprising instructions that, when executed by the one or more processors, cause the electronic device to perform the steps of:
according to the first screen size of the electronic equipment, a first map magnification factor used when a user shows a motion track on the electronic equipment is estimated;
determining a target sampling parameter according to the first map magnification factor, the maximum map magnification factor supported by a motion APP installed in the electronic equipment, and the corresponding relation between a preset map magnification factor and the sampling parameter;
sampling the track points acquired by the motion APP according to the target sampling parameters to obtain sampled track points; and displaying the motion trail on the display screen through the graphical interface of the motion APP according to the sampled track points.
7. The electronic device of claim 6, wherein the processor, when estimating a first map magnification used by the user when presenting the motion trajectory on the electronic device through the motion APP, is specifically configured to:
if the first screen size is larger than the second screen size, determining that the first map magnification is the maximum map magnification supported by a map APP installed in the electronic equipment;
if the first screen size of the electronic equipment is not larger than the second screen size, determining a first ratio between the first screen size and the second screen size, and calculating the first map magnification according to the first ratio and the corresponding relation between the second screen size and the second map magnification; and the second map magnification factor is a pre-estimated map magnification factor used when the motion trail is displayed on the electronic equipment with the second screen size through the motion APP.
8. The electronic device of claim 6 or 7, wherein the processor, when determining the target sampling parameter according to the first map magnification and the maximum map magnification supported by the motion APP, is specifically configured to:
if the first map magnification is greater than the maximum map magnification supported by the motion APP, use, according to the preset correspondence between map magnifications and sampling parameters, the sampling parameter corresponding to the maximum map magnification supported by the motion APP as the target sampling parameter;
and if the first map magnification is not greater than the maximum map magnification supported by the motion APP, use, according to the preset correspondence between map magnifications and sampling parameters, the sampling parameter corresponding to the first map magnification as the target sampling parameter.
9. The electronic device according to any one of claims 6 to 8, wherein before sampling the track points according to the target sampling parameter to obtain the sampled track points, the processor is further configured to:
delete redundant track points from M track points according to a sampling time and/or a position of each of the M track points acquired by the motion APP, wherein M is an integer greater than 2;
and the M track points are obtained by the motion APP by sampling the motion track at a preset sampling frequency.
10. The electronic device according to claim 9, wherein when deleting redundant track points from the M track points according to the sampling time and/or position of each of the M track points, the processor is specifically configured to:
if an (i+1)th track point of the M track points meets one or more of the following conditions, delete the (i+1)th track point:
the acquisition time interval between an (i+2)th track point and an ith track point is less than a duration threshold, wherein i is a positive integer less than M;
an included angle between a first vector and a second vector falls within a preset target angle range, wherein the first vector is formed by the (i+1)th track point and the (i+2)th track point and points from the (i+1)th track point to the (i+2)th track point, and the second vector is formed by the (i+1)th track point and the ith track point and points from the (i+1)th track point to the ith track point.
11. A computer-readable storage medium having stored thereon computer-executable instructions which, when invoked by a computer, cause the computer to perform the method of any of claims 1 to 5.
CN202010217999.0A 2020-03-25 2020-03-25 Motion trail display method and device and readable storage medium Active CN111510553B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010217999.0A CN111510553B (en) 2020-03-25 2020-03-25 Motion trail display method and device and readable storage medium


Publications (2)

Publication Number Publication Date
CN111510553A true CN111510553A (en) 2020-08-07
CN111510553B CN111510553B (en) 2021-06-22

Family

ID=71875796

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010217999.0A Active CN111510553B (en) 2020-03-25 2020-03-25 Motion trail display method and device and readable storage medium

Country Status (1)

Country Link
CN (1) CN111510553B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113082698A (en) * 2021-04-15 2021-07-09 网易(杭州)网络有限公司 Game display control method and device, electronic equipment and readable storage medium
WO2024131584A1 (en) * 2022-12-21 2024-06-27 华为技术有限公司 Trajectory playback method and apparatus

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020178258A1 (en) * 2001-05-22 2002-11-28 Hushing Sumner K. System and method for processing and monitoring telemetry data
CN102707300A (en) * 2012-06-05 2012-10-03 大唐移动通信设备有限公司 Method, device and system for optimizing GPS track
CN105183766A (en) * 2015-07-31 2015-12-23 诚迈科技(南京)股份有限公司 Method and system for eliminating historical route redundant points of maps
CN106877875A (en) * 2017-01-22 2017-06-20 齐鲁工业大学 A kind of vehicle running orbit compression method based on threshold value combination algorithm
CN109683180A (en) * 2019-02-28 2019-04-26 广东小天才科技有限公司 GPS movement track optimization method and system


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113082698A (en) * 2021-04-15 2021-07-09 网易(杭州)网络有限公司 Game display control method and device, electronic equipment and readable storage medium
CN113082698B (en) * 2021-04-15 2024-02-23 网易(杭州)网络有限公司 Game display control method and device, electronic equipment and readable storage medium
WO2024131584A1 (en) * 2022-12-21 2024-06-27 华为技术有限公司 Trajectory playback method and apparatus

Also Published As

Publication number Publication date
CN111510553B (en) 2021-06-22

Similar Documents

Publication Publication Date Title
CN108388390B (en) Apparatus and method for controlling fingerprint sensor
CN109074158B (en) Electronic equipment and method for starting application thereof
CN108701178B (en) Authentication method and electronic device using the same
CN109992189B (en) Screen control method, electronic device and storage medium
WO2021052016A1 (en) Body posture detection method and electronic device
CN111127509B (en) Target tracking method, apparatus and computer readable storage medium
CN107885285B (en) Electronic device and noise control method thereof
KR20170136920A (en) Method for Outputting Screen and the Electronic Device supporting the same
CN111104980B (en) Method, device, equipment and storage medium for determining classification result
CN110134744B (en) Method, device and system for updating geomagnetic information
CN107784268B (en) Method and electronic device for measuring heart rate based on infrared sensor
CN111510553B (en) Motion trail display method and device and readable storage medium
KR102700131B1 (en) Apparatus and Method for Sequentially displaying Images on the Basis of Similarity of Image
CN110705614A (en) Model training method and device, electronic equipment and storage medium
CN113873083A (en) Duration determination method and device, electronic equipment and storage medium
CN116070035B (en) Data processing method and electronic equipment
CN116842047A (en) Cache updating method, device, equipment and computer readable storage medium
CN111797017A (en) Method and device for storing log, test equipment and storage medium
US9723402B2 (en) Audio data processing method and electronic device supporting the same
CN110580561B (en) Analysis method and device for oil well oil increasing effect and storage medium
CN109902844B (en) Optimization information determination method and device for water injection system and storage medium
CN112579661B (en) Method and device for determining specific target pair, computer equipment and storage medium
CN112749583A (en) Face image grouping method and device, computer equipment and storage medium
CN113673224B (en) Method and device for recognizing popular vocabulary, computer equipment and readable storage medium
CN112135256A (en) Method, device and equipment for determining movement track and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant