
WO2019085749A1 - Application program control method and apparatus, medium, and electronic device - Google Patents

Application program control method and apparatus, medium, and electronic device

Info

Publication number
WO2019085749A1
WO2019085749A1 (PCT/CN2018/110518)
Authority
WO
WIPO (PCT)
Prior art keywords
application
layer
feature information
training model
calculation
Prior art date
Application number
PCT/CN2018/110518
Other languages
French (fr)
Chinese (zh)
Inventor
梁昆 (Liang Kun)
Original Assignee
Oppo广东移动通信有限公司 (Guangdong Oppo Mobile Telecommunications Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Oppo广东移动通信有限公司 (Guangdong Oppo Mobile Telecommunications Co., Ltd.)
Publication of WO2019085749A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 - Arrangements for program control, e.g. control units
    • G06F 9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 - Arrangements for executing specific programs
    • G06F 9/445 - Program loading or initiating
    • G06F 9/44594 - Unloading
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/08 - Learning methods
    • G06N 3/084 - Backpropagation, e.g. using gradient descent

Definitions

  • The present application relates to the field of electronic terminals, and in particular to an application control method, apparatus, medium, and electronic device.
  • Embodiments of the present application provide an application control method, apparatus, medium, and electronic device for intelligently closing applications.
  • An embodiment of the present application provides an application control method applied to an electronic device, the application control method including the following steps:
  • obtaining a sample vector set of the application, where each sample vector in the sample vector set includes historical feature information x_i of multiple dimensions of the application;
  • calculating over the sample vector set with a back-propagation (BP) neural network algorithm to generate a training model;
  • when the application enters the background, inputting the current feature information s of the application into the training model for calculation; and
  • determining whether the application needs to be closed.
  • An embodiment of the present application further provides an application control apparatus, the apparatus including:
  • an obtaining module configured to obtain a sample vector set of the application, where each sample vector in the sample vector set includes historical feature information x_i of multiple dimensions of the application;
  • a generating module configured to calculate over the sample vector set with the BP neural network algorithm to generate a training model;
  • a calculation module configured to input the current feature information s of the application into the training model for calculation when the application enters the background; and
  • a determining module configured to determine whether the application needs to be closed.
  • An embodiment of the present application further provides a medium in which a plurality of instructions are stored, the instructions being adapted to be loaded by a processor to execute the application control method described above.
  • An embodiment of the present application further provides an electronic device, where the electronic device includes a processor and a memory, the processor is electrically connected to the memory, the memory is used to store instructions and data, and the processor is configured to execute the following steps:
  • obtaining a sample vector set of the application, where each sample vector in the sample vector set includes historical feature information x_i of multiple dimensions of the application;
  • calculating over the sample vector set with the BP neural network algorithm to generate a training model;
  • when the application enters the background, inputting the current feature information s of the application into the training model for calculation; and
  • determining whether the application needs to be closed.
  • The embodiments of the present application thereby provide an application control method, apparatus, medium, and electronic device for intelligently closing applications.
  • FIG. 1 is a schematic diagram of a system of an application control device according to an embodiment of the present application.
  • FIG. 2 is a schematic diagram of an application scenario of the application control device according to an embodiment of the present application.
  • FIG. 3 is a schematic flowchart of an application control method according to an embodiment of the present application.
  • FIG. 4 is another schematic flowchart of the application control method according to an embodiment of the present application.
  • FIG. 5 is a schematic structural diagram of an apparatus according to an embodiment of the present application.
  • FIG. 6 is another schematic structural diagram of the apparatus according to an embodiment of the present application.
  • FIG. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
  • FIG. 8 is another schematic structural diagram of the electronic device according to an embodiment of the present application.
  • An application control method is applied to an electronic device, wherein the application control method includes the following steps:
  • obtaining a sample vector set of the application, where each sample vector in the sample vector set includes historical feature information x_i of multiple dimensions of the application;
  • calculating over the sample vector set with a back-propagation (BP) neural network algorithm to generate a training model;
  • when the application enters the background, inputting the current feature information s of the application into the training model for calculation; and
  • determining whether the application needs to be closed.
  • In the application control method, calculating over the sample vector set with the BP neural network algorithm to generate the training model includes:
  • defining a network structure; and
  • bringing the sample vector set into the network structure for calculation to obtain the training model.
  • In the application control method, the step of defining the network structure includes:
  • setting an input layer, where the input layer includes N nodes and the number of nodes of the input layer equals the dimension of the historical feature information x_i;
  • setting a hidden layer, where the hidden layer includes M nodes;
  • setting a classification layer, where the classification layer adopts a softmax function p_k = e^{z_k} / \sum_{j=1}^{C} e^{z_j}, in which p_k is the predicted probability value, z_k is the intermediate value, C is the number of categories of the predicted result, and z_j is the j-th intermediate value;
  • setting an output layer, where the output layer includes 2 nodes;
  • setting an activation function, where the activation function adopts the sigmoid function f(x) = 1 / (1 + e^{-x}), whose range is 0 to 1;
  • setting a batch size, where the batch size is A; and
  • setting a learning rate, where the learning rate is B.
  • In the application control method, the hidden layer includes a first hidden sub-layer, a second hidden sub-layer, and a third hidden sub-layer, and the number of nodes in each of the three sub-layers is less than 10.
  • In the application control method, the dimension of the historical feature information x_i is less than 10, and the number of nodes of the input layer is less than 10.
  • In the application control method, bringing the sample vector set into the network structure for calculation to obtain the training model includes:
  • inputting the sample vector set at the input layer for calculation to obtain the output value of the input layer;
  • inputting the output value of the input layer into the hidden layer to obtain the output value of the hidden layer;
  • inputting the output value of the hidden layer into the classification layer for calculation to obtain the predicted probability value [p_1 p_2]^T;
  • bringing the predicted probability value into the output layer for calculation to obtain a predicted result value y, where y = [1 0]^T when p_1 > p_2 and y = [0 1]^T when p_1 ≤ p_2; and
  • modifying the network structure according to the predicted result value y to obtain the training model.
  • In the application control method, in the step of inputting the current feature information s of the application into the training model for calculation, the current feature information s is input into the training model to compute the predicted probability value [p_1' p_2']^T of the classification layer, where y = [1 0]^T when p_1' > p_2' and y = [0 1]^T when p_1' ≤ p_2'.
  • In the application control method, the step of determining whether the application needs to be closed includes:
  • when y = [1 0]^T, determining that the application needs to be closed; and
  • when y = [0 1]^T, determining that the application needs to be retained (this determination is formalized in the equation after this list).
  • In the application control method, the step of inputting the current feature information s of the application into the training model for calculation when the application enters the background includes:
  • collecting the current feature information s of the application; and
  • bringing the current feature information s into the training model for calculation.
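  • In equation form, the close/retain determination described above is:

```latex
y =
\begin{cases}
[1\ 0]^{\mathsf{T}}, & p_1 > p_2 \quad \text{(the application is closed)} \\
[0\ 1]^{\mathsf{T}}, & p_1 \le p_2 \quad \text{(the application is retained)}
\end{cases}
```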
  • An application control apparatus, wherein the apparatus includes:
  • an obtaining module configured to obtain a sample vector set of the application, where each sample vector in the sample vector set includes historical feature information x_i of multiple dimensions of the application;
  • a generating module configured to calculate over the sample vector set with the BP neural network algorithm to generate a training model;
  • a calculation module configured to input the current feature information s of the application into the training model for calculation when the application enters the background; and
  • a determining module configured to determine whether the application needs to be closed.
  • An electronic device, comprising a processor and a memory, the processor being electrically connected to the memory, the memory being used to store instructions and data, and the processor being configured to perform:
  • obtaining a sample vector set of the application, where each sample vector in the sample vector set includes historical feature information x_i of multiple dimensions of the application;
  • calculating over the sample vector set with the back-propagation (BP) neural network algorithm to generate a training model;
  • when the application enters the background, inputting the current feature information s of the application into the training model for calculation; and
  • determining whether the application needs to be closed.
  • In the electronic device, calculating over the sample vector set with the BP neural network algorithm to generate the training model includes:
  • defining a network structure; and
  • bringing the sample vector set into the network structure for calculation to obtain the training model.
  • In the electronic device, the step of defining the network structure includes:
  • setting an input layer, where the input layer includes N nodes and the number of nodes of the input layer equals the dimension of the historical feature information x_i;
  • setting a hidden layer, where the hidden layer includes M nodes;
  • setting a classification layer, where the classification layer adopts a softmax function p_k = e^{z_k} / \sum_{j=1}^{C} e^{z_j}, in which p_k is the predicted probability value, z_k is the intermediate value, C is the number of categories of the predicted result, and z_j is the j-th intermediate value;
  • setting an output layer, where the output layer includes 2 nodes;
  • setting an activation function, where the activation function adopts the sigmoid function f(x) = 1 / (1 + e^{-x}), whose range is 0 to 1;
  • setting a batch size, where the batch size is A; and
  • setting a learning rate, where the learning rate is B.
  • In the electronic device, the hidden layer includes a first hidden sub-layer, a second hidden sub-layer, and a third hidden sub-layer, and the number of nodes in each of the three sub-layers is less than 10.
  • In the electronic device, the dimension of the historical feature information x_i is less than 10, and the number of nodes of the input layer is less than 10.
  • In the electronic device, bringing the sample vector set into the network structure for calculation to obtain the training model includes:
  • inputting the sample vector set at the input layer for calculation to obtain the output value of the input layer;
  • inputting the output value of the input layer into the hidden layer to obtain the output value of the hidden layer;
  • inputting the output value of the hidden layer into the classification layer for calculation to obtain the predicted probability value [p_1 p_2]^T;
  • bringing the predicted probability value into the output layer for calculation to obtain a predicted result value y, where y = [1 0]^T when p_1 > p_2 and y = [0 1]^T when p_1 ≤ p_2; and
  • modifying the network structure according to the predicted result value y to obtain the training model.
  • In the electronic device, the step of inputting the current feature information s of the application into the training model for calculation includes: collecting the current feature information s of the application; and bringing the current feature information s into the training model for calculation.
  • The application control method provided by the present application is mainly applied to electronic devices such as wristbands, smartphones and tablets based on the Apple or Android system, or smart mobile electronic devices such as Windows- or Linux-based notebook computers.
  • The application may be a chat application, a video application, a music application, a shopping application, a shared-bicycle application, or a mobile banking application.
  • FIG. 1 is a schematic diagram of a system for controlling an application according to an embodiment of the present application.
  • The application control device is mainly configured to: obtain historical feature information x_i of the application from a database; calculate over the historical feature information x_i with an algorithm to obtain a training model; and then input the current feature information s of the application into the training model for calculation, the calculation result being used to judge whether the application can be closed so as to control the preset application, for example by closing or freezing it.
  • FIG. 2 is a schematic diagram of an application scenario of the application control method according to an embodiment of the present application.
  • The historical feature information x_i of the application is obtained from the database and calculated by an algorithm to obtain a training model; then, when the application control device detects that the application has entered the background of the electronic device, the current feature information s of the application is input into the training model for calculation, and the calculation result determines whether the application can be closed.
  • For example, the historical feature information x_i of application a is obtained from the database and calculated by an algorithm to obtain a training model. When the application control device detects that application a has entered the background of the electronic device, the current feature information s of application a is input into the training model for calculation; the calculation result determines that application a can be closed, and application a is closed. When the application control device detects that application b has entered the background of the electronic device, the current feature information s of application b is input into the training model for calculation; the calculation result determines that application b needs to be retained, and application b is retained.
  • The embodiment of the present application provides an application control method; the execution entity of the application control method may be the application control device provided by an embodiment of the present application, or an electronic device integrating the application control device, where the application control device may be implemented in hardware or software.
  • FIG. 3 is a schematic flowchart of the application control method according to an embodiment of the present application.
  • The application control method provided by the embodiment of the present application is applied to an electronic device, and the specific process may be as follows:
  • Step S101: obtain a sample vector set of the application, where each sample vector in the sample vector set includes historical feature information x_i of multiple dimensions of the application.
  • The sample vector set of the application is obtained from a sample database, where each sample vector in the sample vector set includes historical feature information x_i of multiple dimensions of the application.
  • The feature information of the multiple dimensions may refer to Table 1.
  • The feature information of the ten dimensions shown in Table 1 is only one embodiment of the present application; the application is not limited to the ten dimensions shown in Table 1, and may use one of them, at least two of them, or all of them, and may also include feature information of other dimensions, for example, whether the device is currently charging, the current battery level, or whether WiFi is currently connected.
  • For example, historical features of six dimensions can be selected, such as:
  • WiFi status: whether WiFi is turned on; for example, WiFi on is recorded as 1 and WiFi off is recorded as 0.
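  • As an illustration, a six-dimensional sample vector of this kind might be assembled as follows. This is a hedged sketch: only the WiFi on/off encoding (1/0) is spelled out in the text above, so the other five feature names here are hypothetical placeholders of the kind Table 1 could contain.

```python
import numpy as np

def encode_features(wifi_on, charging, screen_on, battery_level,
                    hour_of_day, minutes_since_last_use):
    """Encode one observation of an application into a 6-dimensional sample vector."""
    return np.array([
        1.0 if wifi_on else 0.0,     # WiFi on -> 1, WiFi off -> 0 (from the text)
        1.0 if charging else 0.0,    # hypothetical: currently charging
        1.0 if screen_on else 0.0,   # hypothetical: screen currently on
        battery_level / 100.0,       # hypothetical: battery level scaled to [0, 1]
        hour_of_day / 24.0,          # hypothetical: time of day, scaled
        min(minutes_since_last_use, 240) / 240.0,  # hypothetical: recency, clipped
    ])

# One historical sample x_i of the application.
x_i = encode_features(wifi_on=True, charging=False, screen_on=False,
                      battery_level=47, hour_of_day=21, minutes_since_last_use=35)
```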
  • Step S102: calculate over the sample vector set with the BP neural network algorithm to generate a training model.
  • FIG. 4 is another schematic flowchart of the application control method according to an embodiment of the present application.
  • The step S102 may include:
  • Step S1021: define a network structure; and
  • Step S1022: bring the sample vector set into the network structure for calculation to obtain a training model.
  • In step S1021, defining the network structure includes:
  • Step S1021a: set an input layer, where the input layer includes N nodes and the number of nodes of the input layer equals the dimension of the historical feature information x_i.
  • The dimension of the historical feature information x_i is less than 10, and the number of nodes of the input layer is less than 10, to simplify the computation.
  • For example, the historical feature information x_i has 6 dimensions, and the input layer accordingly includes 6 nodes.
  • Step S1021b: set a hidden layer, where the hidden layer includes M nodes.
  • The hidden layer may include a plurality of hidden sub-layers.
  • The number of nodes in each hidden sub-layer is less than 10, to simplify the computation.
  • For example, the hidden layer may include a first hidden sub-layer, a second hidden sub-layer, and a third hidden sub-layer,
  • where the first hidden sub-layer includes 10 nodes,
  • the second hidden sub-layer includes 5 nodes, and
  • the third hidden sub-layer includes 5 nodes.
  • Step S1021c: set a classification layer, where the classification layer adopts a softmax function p_k = e^{z_k} / \sum_{j=1}^{C} e^{z_j}, in which
  • p_k is the predicted probability value,
  • z_k is the intermediate value, and
  • C is the number of categories of the predicted result, z_j being the j-th intermediate value.
  • Step S1021d: set an output layer, where the output layer includes 2 nodes.
  • Step S1021e: set an activation function, where the activation function adopts the sigmoid function f(x) = 1 / (1 + e^{-x}), whose range is 0 to 1.
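  • The two functions defined in steps S1021c and S1021e can be written directly from the formulas above; a minimal numpy sketch (the shift by max(z) in the softmax is a standard numerical-stability trick and does not change the result):

```python
import numpy as np

def softmax(z):
    """Classification layer: p_k = e^{z_k} / sum_{j=1}^{C} e^{z_j}."""
    e = np.exp(z - np.max(z))
    return e / e.sum()

def sigmoid(x):
    """Activation function: f(x) = 1 / (1 + e^{-x}), with range 0 to 1."""
    return 1.0 / (1.0 + np.exp(-x))
```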
  • Step S1021f: set a batch size, where the batch size is A.
  • The batch size can be flexibly adjusted according to actual conditions.
  • For example, the batch size can be 50 to 200.
  • For example, the batch size is 128.
  • Step S1021g: set a learning rate, where the learning rate is B.
  • The learning rate can be flexibly adjusted according to actual conditions.
  • For example, the learning rate can be 0.1 to 1.5.
  • For example, the learning rate is 0.9.
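  • Putting steps S1021a through S1021g together, the defined structure can be sketched as plain weight matrices. This is one illustrative reading of the text (a 6-10-5-5 hidden stack feeding a 2-class softmax, batch size 128, learning rate 0.9), not the patent's reference implementation; the random initialization scale is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

# 6 input nodes (one per feature dimension), three hidden sub-layers of
# 10, 5 and 5 nodes, and a 2-node classification/output stage.
LAYER_SIZES = [6, 10, 5, 5, 2]
BATCH_SIZE = 128     # "A" in the text
LEARNING_RATE = 0.9  # "B" in the text

# One (weights, biases) pair per connection between consecutive layers.
weights = [rng.normal(0.0, 0.1, (n_in, n_out))
           for n_in, n_out in zip(LAYER_SIZES, LAYER_SIZES[1:])]
biases = [np.zeros(n_out) for n_out in LAYER_SIZES[1:]]
```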
  • In step S1022, bringing the sample vector set into the network structure for calculation to obtain the training model may include:
  • Step S1022a: input the sample vector set at the input layer for calculation to obtain the output value of the input layer.
  • Step S1022b: input the output value of the input layer into the hidden layer to obtain the output value of the hidden layer.
  • The output value of the input layer is the input value of the hidden layer.
  • The hidden layer may include a plurality of hidden sub-layers:
  • the output value of the input layer is the input value of the first hidden sub-layer,
  • the output value of the first hidden sub-layer is the input value of the second hidden sub-layer,
  • the output value of the second hidden sub-layer is the input value of the third hidden sub-layer, and so on.
  • Step S1022c: input the output value of the hidden layer into the classification layer for calculation to obtain the predicted probability value [p_1 p_2]^T.
  • The output value of the hidden layer is the input value of the classification layer.
  • When the hidden layer includes a plurality of hidden sub-layers,
  • the output value of the last hidden sub-layer is the input value of the classification layer.
  • Step S1022d: bring the predicted probability value into the output layer for calculation to obtain a predicted result value y,
  • where y = [1 0]^T when p_1 > p_2 and y = [0 1]^T when p_1 ≤ p_2.
  • The output value of the classification layer is the input value of the output layer.
  • Step S1022e: modify the network structure according to the predicted result value y to obtain the training model.
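  • Steps S1022a through S1022d chain the layers' outputs into the predicted probability value and the one-hot result. The sketch below reuses the softmax/sigmoid helpers and the weights defined above; step S1022e, the weight correction, would be the usual back-propagation update and is only indicated in a comment:

```python
def forward(x, weights, biases):
    """Forward pass: input layer -> hidden sub-layers (sigmoid) -> softmax."""
    a = x
    for W, b in zip(weights[:-1], biases[:-1]):
        a = sigmoid(a @ W + b)                    # each layer's output feeds the next
    return softmax(a @ weights[-1] + biases[-1])  # predicted probabilities [p_1, p_2]^T

def output_layer(p):
    """Output layer: y = [1 0]^T if p_1 > p_2, else y = [0 1]^T."""
    return np.array([1, 0]) if p[0] > p[1] else np.array([0, 1])

p = forward(x_i, weights, biases)
y = output_layer(p)
# Step S1022e: compare the prediction against the sample's label and
# back-propagate the error to correct the weights, e.g. by gradient
# descent at LEARNING_RATE over batches of BATCH_SIZE samples.
```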
  • Step S103: when the application enters the background, input the current feature information s of the application into the training model for calculation.
  • The step S103 may include:
  • Step S1031: collect the current feature information s of the application.
  • The dimension of the collected current feature information s of the application is the same as the dimension of the collected historical feature information x_i of the application.
  • Step S1032: bring the current feature information s into the training model for calculation.
  • In step S104, it is determined whether the application needs to be closed.
  • In summary, the application control method provided by the present application acquires the historical feature information x_i, generates a training model with the BP neural network algorithm, brings the current feature information s of the application into the training model when the application is detected entering the background, and then determines whether the application needs to be closed, thereby intelligently closing applications.
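  • Taken together, steps S103 and S104 amount to the following runtime check whenever an application drops to the background. This is again a hedged sketch: collect_current_features and close_app stand in for platform-specific calls that the text does not name.

```python
def on_app_enters_background(app, weights, biases):
    s = collect_current_features(app)  # hypothetical collector; same 6 dimensions as x_i
    p = forward(s, weights, biases)    # predicted probability value [p_1', p_2']^T
    y = output_layer(p)
    if (y == np.array([1, 0])).all():  # y = [1 0]^T: the application should be closed
        close_app(app)                 # hypothetical platform call
    # otherwise y = [0 1]^T and the application is retained in the background
```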
  • FIG. 5 is a schematic structural diagram of an application control apparatus according to an embodiment of the present application.
  • The device 30 includes an obtaining module 31, a generating module 32, a calculation module 33, and a determination module 34.
  • The application may be a chat application, a video application, a music application, a shopping application, a shared-bicycle application, or a mobile banking application.
  • The obtaining module 31 is configured to obtain the sample vector set of the application, where each sample vector in the sample vector set includes historical feature information x_i of multiple dimensions of the application.
  • The sample vector set of the application is obtained from a sample database, where each sample vector in the sample vector set includes historical feature information x_i of multiple dimensions of the application.
  • FIG. 6 is another schematic structural diagram of the application control apparatus according to an embodiment of the present application.
  • The device 30 further includes a detection module 35 for detecting that the application enters the background.
  • The device 30 can also include a storage module 36.
  • The storage module 36 is configured to store the historical feature information x_i of the application.
  • The feature information of the multiple dimensions may refer to Table 2.
  • The feature information of the ten dimensions shown in Table 2 is only one embodiment of the present application; the application is not limited to the ten dimensions shown in Table 2, and may use one of them, at least two of them, or all of them, and may also include feature information of other dimensions, for example, whether the device is currently charging, the current battery level, or whether WiFi is currently connected.
  • For example, historical features of six dimensions can be selected, such as:
  • WiFi status: whether WiFi is turned on; for example, WiFi on is recorded as 1 and WiFi off is recorded as 0.
  • The generating module 32 is configured to calculate over the sample vector set with the BP neural network algorithm to generate a training model.
  • The generating module 32 trains on the historical feature information x_i acquired by the obtaining module 31, inputting the historical feature information x_i into the BP neural network algorithm.
  • The generating module 32 includes a definition module 321 and a solving module 322.
  • The definition module 321 is used to define a network structure.
  • The definition module 321 may include an input layer definition module 3211, a hidden layer definition module 3212, a classification layer definition module 3213, an output layer definition module 3214, an activation function definition module 3215, a batch size definition module 3216, and a learning rate definition module 3217.
  • The input layer definition module 3211 is configured to set an input layer, where the input layer includes N nodes and the number of nodes of the input layer equals the dimension of the historical feature information x_i.
  • The dimension of the historical feature information x_i is less than 10, and the number of nodes of the input layer is less than 10, to simplify the computation.
  • For example, the historical feature information x_i has 6 dimensions, and the input layer includes 6 nodes.
  • The hidden layer definition module 3212 is configured to set a hidden layer, where the hidden layer includes M nodes.
  • The hidden layer may include a plurality of hidden sub-layers.
  • The number of nodes in each hidden sub-layer is less than 10, to simplify the computation.
  • For example, the hidden layer may include a first hidden sub-layer, a second hidden sub-layer, and a third hidden sub-layer,
  • where the first hidden sub-layer includes 10 nodes,
  • the second hidden sub-layer includes 5 nodes, and
  • the third hidden sub-layer includes 5 nodes.
  • The classification layer definition module 3213 is configured to set a classification layer, where the classification layer adopts a softmax function p_k = e^{z_k} / \sum_{j=1}^{C} e^{z_j}, in which p_k is the predicted probability value, z_k is the intermediate value, C is the number of categories of the predicted result, and z_j is the j-th intermediate value.
  • The output layer definition module 3214 is configured to set an output layer, where the output layer includes 2 nodes.
  • The activation function definition module 3215 is configured to set an activation function, where the activation function adopts the sigmoid function f(x) = 1 / (1 + e^{-x}), whose range is 0 to 1.
  • The batch size definition module 3216 is configured to set a batch size, where the batch size is A.
  • The batch size can be flexibly adjusted according to actual conditions.
  • For example, the batch size can be 50 to 200.
  • For example, the batch size is 128.
  • The learning rate definition module 3217 is configured to set a learning rate, where the learning rate is B.
  • The learning rate can be flexibly adjusted according to actual conditions.
  • For example, the learning rate can be 0.1 to 1.5.
  • For example, the learning rate is 0.9.
  • The order in which the input layer definition module 3211 sets the input layer, the hidden layer definition module 3212 sets the hidden layer, the classification layer definition module 3213 sets the classification layer, the output layer definition module 3214 sets the output layer, the activation function definition module 3215 sets the activation function, the batch size definition module 3216 sets the batch size, and the learning rate definition module 3217 sets the learning rate can be flexibly adjusted.
  • The solving module 322 is configured to bring the sample vector set into the network structure for calculation to obtain a training model.
  • The solving module 322 can include a first solving module 3221, a second solving module 3222, a third solving module 3223, a fourth solving module 3224, and a correction module 3225.
  • The first solving module 3221 is configured to input the sample vector set at the input layer for calculation to obtain the output value of the input layer.
  • The second solving module 3222 is configured to input the output value of the input layer into the hidden layer to obtain the output value of the hidden layer.
  • The output value of the input layer is the input value of the hidden layer.
  • The hidden layer may include a plurality of hidden sub-layers:
  • the output value of the input layer is the input value of the first hidden sub-layer,
  • the output value of the first hidden sub-layer is the input value of the second hidden sub-layer,
  • the output value of the second hidden sub-layer is the input value of the third hidden sub-layer, and so on.
  • The third solving module 3223 is configured to input the output value of the hidden layer into the classification layer for calculation to obtain the predicted probability value [p_1 p_2]^T.
  • The output value of the hidden layer is the input value of the classification layer.
  • The fourth solving module 3224 is configured to bring the predicted probability value into the output layer for calculation to obtain a predicted result value y,
  • where y = [1 0]^T when p_1 > p_2,
  • and y = [0 1]^T when p_1 ≤ p_2.
  • The output value of the classification layer is the input value of the output layer.
  • The correction module 3225 is configured to modify the network structure according to the predicted result value y to obtain the training model.
  • The calculation module 33 is configured to input the current feature information s of the application into the training model for calculation when the application enters the background.
  • The calculation module 33 may include a collection module 331 and an operation module 332.
  • The collection module 331 is configured to collect the current feature information s of the application.
  • The dimension of the collected current feature information s of the application is the same as the dimension of the collected historical feature information x_i of the application.
  • The operation module 332 is configured to bring the current feature information s into the training model for calculation.
  • The collection module 331 is configured to collect the current feature information s at predetermined acquisition times and store the current feature information s in the storage module 36.
  • The collection module 331 is further configured to collect the current feature information s corresponding to the time point at which the application is detected entering the background, and to input that current feature information s into the operation module 332 to be brought into the training model for calculation.
  • The determination module 34 is configured to determine whether the application needs to be closed.
  • The apparatus 30 can also include a shutdown module 37 for shutting down the application when it is determined that the application needs to be closed.
  • In summary, the application control apparatus provided by the present application obtains the historical feature information x_i, generates a training model with the BP neural network algorithm, brings the current feature information s of the application into the training model when the application is detected entering the background, and then determines whether the application needs to be closed, thereby intelligently closing applications.
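  • The module decomposition of device 30 can be pictured as a thin pipeline object. The class below is only an organizational sketch, assuming a model object with a forward method and hypothetical app/platform calls, of how the detection, collection, operation, determination, and shutdown modules described above hand data to one another:

```python
class AppControlDevice:
    """Sketch of device 30: detection -> collection -> operation -> determination."""

    def __init__(self, model, storage):
        self.model = model      # training model produced by the generating module 32
        self.storage = storage  # storage module 36: holds historical x_i and current s

    def on_background_event(self, app):   # detection module 35 fires this
        s = self.collect(app)             # collection module 331
        p = self.model.forward(s)         # operation module 332
        if p[0] > p[1]:                   # determination module 34: y = [1 0]^T
            self.close(app)               # shutdown module 37

    def collect(self, app):
        s = app.current_features()        # hypothetical feature source
        self.storage.append(app, s)       # keep s for future training samples
        return s

    def close(self, app):
        app.terminate()                   # hypothetical platform call
```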
  • FIG. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
  • The electronic device 500 includes a processor 501 and a memory 502.
  • The processor 501 is electrically connected to the memory 502.
  • The processor 501 is the control center of the electronic device 500: it connects the various parts of the electronic device 500 through various interfaces and lines, and by running or loading applications stored in the memory 502 and calling data stored in the memory 502, it executes the various functions of the electronic device, processes its data, and thereby monitors the electronic device 500 as a whole.
  • The processor 501 in the electronic device 500 loads the instructions corresponding to the processes of one or more applications into the memory 502, and runs the applications stored in the memory 502, thereby implementing various functions through the following steps:
  • obtaining a sample vector set of the application, where each sample vector in the sample vector set includes historical feature information x_i of multiple dimensions of the application;
  • calculating over the sample vector set with the BP neural network algorithm to generate a training model;
  • when the application enters the background, inputting the current feature information s of the application into the training model for calculation; and
  • determining whether the application needs to be closed.
  • The application may be a chat application, a video application, a music application, a shopping application, a shared-bicycle application, or a mobile banking application.
  • The sample vector set of the application is obtained from a sample database, where each sample vector in the sample vector set includes historical feature information x_i of multiple dimensions of the application.
  • The feature information of the multiple dimensions may refer to Table 3.
  • The feature information of the ten dimensions shown in Table 3 is only one embodiment of the present application; the application is not limited to the ten dimensions shown in Table 3, and may use one of them, at least two of them, or all of them, and may also include feature information of other dimensions, for example, whether the device is currently charging, the current battery level, or whether WiFi is currently connected.
  • For example, historical features of six dimensions can be selected, such as:
  • WiFi status: whether WiFi is turned on; for example, WiFi on is recorded as 1 and WiFi off is recorded as 0.
  • For the processor 501, calculating over the sample vector set with the BP neural network algorithm to generate the training model further includes:
  • defining a network structure; and
  • bringing the sample vector set into the network structure for calculation to obtain the training model.
  • Defining the network structure includes the following.
  • An input layer is set, where the input layer includes N nodes and the number of nodes of the input layer equals the dimension of the historical feature information x_i.
  • The dimension of the historical feature information x_i is less than 10, and the number of nodes of the input layer is less than 10, to simplify the computation.
  • For example, the historical feature information x_i has 6 dimensions, and the input layer includes 6 nodes.
  • A hidden layer is set, where the hidden layer includes M nodes.
  • The hidden layer may include a plurality of hidden sub-layers.
  • The number of nodes in each hidden sub-layer is less than 10, to simplify the computation.
  • For example, the hidden layer may include a first hidden sub-layer, a second hidden sub-layer, and a third hidden sub-layer,
  • where the first hidden sub-layer includes 10 nodes,
  • the second hidden sub-layer includes 5 nodes, and
  • the third hidden sub-layer includes 5 nodes.
  • A classification layer is set, where the classification layer adopts a softmax function p_k = e^{z_k} / \sum_{j=1}^{C} e^{z_j}, in which p_k is the predicted probability value, z_k is the intermediate value, C is the number of categories of the predicted result, and z_j is the j-th intermediate value.
  • An output layer is set, where the output layer includes 2 nodes.
  • An activation function is set, where the activation function adopts the sigmoid function
  • f(x) = 1 / (1 + e^{-x}), whose range is 0 to 1.
  • A batch size A is set; the batch size can be flexibly adjusted according to actual conditions.
  • For example, the batch size can be 50 to 200.
  • For example, the batch size is 128.
  • A learning rate B is set.
  • The learning rate can be flexibly adjusted according to actual conditions.
  • For example, the learning rate can be 0.1 to 1.5.
  • For example, the learning rate is 0.9.
  • Bringing the sample vector set into the network structure for calculation to obtain the training model may include the following steps.
  • The sample vector set is input at the input layer for calculation to obtain the output value of the input layer.
  • The output value of the input layer is input into the hidden layer to obtain the output value of the hidden layer.
  • The output value of the input layer is the input value of the hidden layer.
  • The hidden layer may include a plurality of hidden sub-layers:
  • the output value of the input layer is the input value of the first hidden sub-layer,
  • the output value of the first hidden sub-layer is the input value of the second hidden sub-layer,
  • the output value of the second hidden sub-layer is the input value of the third hidden sub-layer, and so on.
  • The output value of the hidden layer is input at the classification layer for calculation, and the predicted probability value [p_1 p_2]^T is obtained.
  • The output value of the hidden layer is the input value of the classification layer.
  • When the hidden layer includes a plurality of hidden sub-layers,
  • the output value of the last hidden sub-layer is the input value of the classification layer.
  • The predicted probability value is brought into the output layer for calculation to obtain a predicted result value y,
  • where y = [1 0]^T when p_1 > p_2 and y = [0 1]^T when p_1 ≤ p_2.
  • The output value of the classification layer is the input value of the output layer.
  • The network structure is modified according to the predicted result value y to obtain the training model.
  • The step of inputting the current feature information s of the application into the training model for calculation includes the following.
  • The current feature information s of the application is collected.
  • The dimension of the collected current feature information s of the application is the same as the dimension of the collected historical feature information x_i of the application.
  • The current feature information s is brought into the training model for calculation.
  • The memory 502 can be used to store applications and data.
  • The programs stored in the memory 502 contain instructions executable by the processor.
  • The programs can constitute various functional modules.
  • The processor 501 executes various functional applications and performs data processing by running the programs stored in the memory 502.
  • FIG. 8 is another schematic structural diagram of the electronic device according to an embodiment of the present application.
  • The electronic device 500 further includes a radio frequency circuit 503, a display screen 504, a control circuit 505, an input unit 506, an audio circuit 507, a sensor 508, and a power source 509.
  • The processor 501 is electrically connected to the radio frequency circuit 503, the display screen 504, the control circuit 505, the input unit 506, the audio circuit 507, the sensor 508, and the power source 509, respectively.
  • The radio frequency circuit 503 is configured to transmit and receive radio frequency signals in order to communicate with a server or other electronic devices over a wireless communication network.
  • The display screen 504 can be used to display information entered by the user or provided to the user, as well as the various graphical user interfaces of the terminal, which can be composed of images, text, icons, video, and any combination thereof.
  • The control circuit 505 is electrically connected to the display screen 504 and controls the display screen 504 to display information.
  • The input unit 506 can be configured to receive input digits, character information, or user characteristic information (e.g., fingerprints), and to generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control.
  • The audio circuit 507 can provide an audio interface between the user and the terminal through a speaker and a microphone.
  • The sensor 508 is used to collect external environment information.
  • The sensor 508 can include one or more of an ambient light sensor, an acceleration sensor, a gyroscope, and the like.
  • The power source 509 is used to supply power to the various components of the electronic device 500.
  • The power source 509 can be logically coupled to the processor 501 through a power management system, so that functions such as charging management, discharging management, and power consumption management are implemented through the power management system.
  • Although not shown, the electronic device 500 may further include a camera, a Bluetooth module, and the like; details are not described herein again.
  • In summary, the electronic device provided by the present application acquires the historical feature information x_i, generates a training model with the BP neural network algorithm, brings the current feature information s of the application into the training model when the application is detected entering the background, and then determines whether the application needs to be closed, thereby intelligently closing applications.
  • The embodiment of the present application further provides a medium in which a plurality of instructions are stored, the instructions being adapted to be loaded by a processor to execute the application control method described in any of the above embodiments.
  • The application control method, apparatus, medium, and electronic device provided by the embodiments of the present application belong to the same concept; their specific implementation process is described throughout the specification and is not repeated here.
  • The program may be stored in a computer-readable storage medium, and the storage medium may include: a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Biophysics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)
  • Stored Programmes (AREA)

Abstract

The present application provides an application program control method and apparatus, a medium, and an electronic device. The method comprises: obtaining historical feature information x_i; generating a training model using a back-propagation (BP) neural network algorithm; when it is detected that an application program enters the background, bringing the current feature information s of the application program into the training model; and then determining whether the application program needs to be closed, thereby intelligently closing the application program.

Description

Application Control Method, Apparatus, Medium and Electronic Device
This application claims priority to the Chinese patent application No. 201711044959.5, filed with the Chinese Patent Office on October 31, 2017 and entitled "Application Control Method, Apparatus, Medium and Electronic Device", the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the field of electronic terminals, and in particular to an application control method, apparatus, medium, and electronic device.
Background
End users use a large number of applications every day. Usually, after an application is pushed to the background, if it is not cleaned up in time it occupies valuable system memory and affects system power consumption. It is therefore necessary to provide an application control method, apparatus, medium, and electronic device.
Technical Problem
Embodiments of the present application provide an application control method, apparatus, medium, and electronic device for intelligently closing applications.
Technical Solution
An embodiment of the present application provides an application control method applied to an electronic device, the application control method including the following steps:
obtaining a sample vector set of the application, where each sample vector in the sample vector set includes historical feature information x_i of multiple dimensions of the application;
calculating over the sample vector set with a back-propagation (BP) neural network algorithm to generate a training model;
when the application enters the background, inputting the current feature information s of the application into the training model for calculation; and
determining whether the application needs to be closed.
An embodiment of the present application further provides an application control apparatus, the apparatus including:
an obtaining module configured to obtain a sample vector set of the application, where each sample vector in the sample vector set includes historical feature information x_i of multiple dimensions of the application;
a generating module configured to calculate over the sample vector set with the BP neural network algorithm to generate a training model;
a calculation module configured to input the current feature information s of the application into the training model for calculation when the application enters the background; and
a determining module configured to determine whether the application needs to be closed.
An embodiment of the present application further provides a medium in which a plurality of instructions are stored, the instructions being adapted to be loaded by a processor to execute the application control method described above.
An embodiment of the present application further provides an electronic device, where the electronic device includes a processor and a memory, the processor is electrically connected to the memory, the memory is used to store instructions and data, and the processor is configured to execute the following steps:
obtaining a sample vector set of the application, where each sample vector in the sample vector set includes historical feature information x_i of multiple dimensions of the application;
calculating over the sample vector set with the BP neural network algorithm to generate a training model;
when the application enters the background, inputting the current feature information s of the application into the training model for calculation; and
determining whether the application needs to be closed.
Beneficial Effects
Embodiments of the present application provide an application control method, apparatus, medium, and electronic device for intelligently closing applications.
Description of the Drawings
In order to describe the technical solutions in the embodiments of the present application more clearly, the drawings required in the description of the embodiments are briefly introduced below. Evidently, the drawings described below are only some embodiments of the present application, and a person skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a schematic diagram of a system of an application control device according to an embodiment of the present application.
FIG. 2 is a schematic diagram of an application scenario of the application control device according to an embodiment of the present application.
FIG. 3 is a schematic flowchart of an application control method according to an embodiment of the present application.
FIG. 4 is another schematic flowchart of the application control method according to an embodiment of the present application.
FIG. 5 is a schematic structural diagram of an apparatus according to an embodiment of the present application.
FIG. 6 is another schematic structural diagram of the apparatus according to an embodiment of the present application.
FIG. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
FIG. 8 is another schematic structural diagram of the electronic device according to an embodiment of the present application.
Embodiments of the Invention
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. Evidently, the described embodiments are only some of the embodiments of the invention, not all of them. All other embodiments obtained by a person skilled in the art based on the embodiments of the present invention without creative effort fall within the scope of the present invention.
An application control method is applied to an electronic device, wherein the application control method includes the following steps:
obtaining a sample vector set of the application, where each sample vector in the sample vector set includes historical feature information x_i of multiple dimensions of the application;
calculating over the sample vector set with a back-propagation (BP) neural network algorithm to generate a training model;
when the application enters the background, inputting the current feature information s of the application into the training model for calculation; and
determining whether the application needs to be closed.
In the application control method, calculating over the sample vector set with the BP neural network algorithm to generate the training model includes:
defining a network structure; and
bringing the sample vector set into the network structure for calculation to obtain the training model.
In the application control method, the step of defining the network structure includes:
setting an input layer, where the input layer includes N nodes and the number of nodes of the input layer equals the dimension of the historical feature information x_i;
setting a hidden layer, where the hidden layer includes M nodes;
setting a classification layer, where the classification layer adopts a softmax function p_k = e^{z_k} / \sum_{j=1}^{C} e^{z_j}, in which p_k is the predicted probability value, z_k is the intermediate value, C is the number of categories of the predicted result, and z_j is the j-th intermediate value;
setting an output layer, where the output layer includes 2 nodes;
setting an activation function, where the activation function adopts the sigmoid function f(x) = 1 / (1 + e^{-x}), whose range is 0 to 1;
setting a batch size, where the batch size is A; and
setting a learning rate, where the learning rate is B.
In the application control method, the hidden layer includes a first hidden sub-layer, a second hidden sub-layer, and a third hidden sub-layer, and the number of nodes in each of the three sub-layers is less than 10.
In the application control method, the dimension of the historical feature information x_i is less than 10, and the number of nodes of the input layer is less than 10.
In the application control method, bringing the sample vector set into the network structure for calculation to obtain the training model includes:
inputting the sample vector set at the input layer for calculation to obtain the output value of the input layer;
inputting the output value of the input layer into the hidden layer to obtain the output value of the hidden layer;
inputting the output value of the hidden layer into the classification layer for calculation to obtain the predicted probability value [p_1 p_2]^T;
bringing the predicted probability value into the output layer for calculation to obtain a predicted result value y, where y = [1 0]^T when p_1 > p_2 and y = [0 1]^T when p_1 ≤ p_2; and
modifying the network structure according to the predicted result value y to obtain the training model.
In the application control method, in the step of inputting the current feature information s of the application into the training model for calculation, the current feature information s is input into the training model to compute the predicted probability value [p_1' p_2']^T of the classification layer, where y = [1 0]^T when p_1' > p_2' and y = [0 1]^T when p_1' ≤ p_2'.
In the application control method, the step of determining whether the application needs to be closed includes:
when y = [1 0]^T, determining that the application needs to be closed; and
when y = [0 1]^T, determining that the application needs to be retained.
In the application control method, the step of inputting the current feature information s of the application into the training model for calculation when the application enters the background includes:
collecting the current feature information s of the application; and
bringing the current feature information s into the training model for calculation.
An application control apparatus, wherein the apparatus includes:
an obtaining module configured to obtain a sample vector set of the application, where each sample vector in the sample vector set includes historical feature information x_i of multiple dimensions of the application;
a generating module configured to calculate over the sample vector set with the BP neural network algorithm to generate a training model;
a calculation module configured to input the current feature information s of the application into the training model for calculation when the application enters the background; and
a determining module configured to determine whether the application needs to be closed.
A medium, wherein a plurality of instructions are stored in the medium, the instructions being adapted to be loaded by a processor to execute the application control method described above.
An electronic device, wherein the electronic device includes a processor and a memory, the processor is electrically connected to the memory, the memory is used to store instructions and data, and the processor is configured to perform:
obtaining a sample vector set of the application, where each sample vector in the sample vector set includes historical feature information x_i of multiple dimensions of the application;
calculating over the sample vector set with a back-propagation (BP) neural network algorithm to generate a training model;
when the application enters the background, inputting the current feature information s of the application into the training model for calculation; and
determining whether the application needs to be closed.
在所述电子设备中,采用BP神经网络算法对样本向量集进行计算,生成训练模型的步骤包括:In the electronic device, the BP neural network algorithm is used to calculate the sample vector set, and the steps of generating the training model include:
定义网络结构;以及Define the network structure;
将样本向量集带入网络结构进行计算,得到训练模型。The sample vector set is brought into the network structure for calculation to obtain a training model.
在所述电子设备中,在所述定义网络结构的步骤中,包括:In the electronic device, in the step of defining a network structure, the method includes:
Setting an input layer, where the input layer includes N nodes and the number of nodes of the input layer is the same as the dimension of the historical feature information x_i;
设定隐含层,所述隐含层包括M个节点;Setting a hidden layer, the hidden layer including M nodes;
Setting a classification layer, where the classification layer adopts a Softmax function given by

p_k = e^{Z_k} / \sum_{j=1}^{C} e^{Z_j}

where p_k is the predicted probability value, Z_k is the intermediate value, C is the number of categories of the prediction result, and Z_j is the j-th intermediate value;
设定输出层,所述输出层包括2个节点;Setting an output layer, the output layer comprising 2 nodes;
Setting an activation function, where the activation function adopts the sigmoid function

f(x) = 1 / (1 + e^{-x})

where f(x) ranges from 0 to 1;
设定批量大小,所述批量大小为A;以及Set the batch size, the batch size is A;
设定学习率,所述学习率为B。The learning rate is set, and the learning rate is B.
In the electronic device, the hidden layer includes a first hidden sub-layer, a second hidden sub-layer, and a third hidden sub-layer, and the number of nodes in each of the first, second, and third hidden sub-layers is less than 10.
In the electronic device, the dimension of the historical feature information x_i is less than 10, and the number of nodes of the input layer is less than 10.
在所述电子设备中,所述将样本向量集带入网络结构进行计算,得到训练模型的步骤包括:In the electronic device, the step of bringing the sample vector set into the network structure for calculation, and obtaining the training model includes:
在输入层输入所述样本向量集进行计算,得到输入层的输出值;Inputting the sample vector set at the input layer to perform calculation, and obtaining an output value of the input layer;
Inputting the output value of the input layer into the hidden layer to obtain the output value of the hidden layer;
Inputting the output value of the hidden layer into the classification layer for calculation to obtain the predicted probability value [p_1 p_2]^T;
Bringing the predicted probability value into the output layer for calculation to obtain a prediction result value y, where y = [1 0]^T when p_1 is greater than p_2, and y = [0 1]^T when p_1 is less than or equal to p_2; and
Modifying the network structure according to the prediction result value y to obtain the training model.
In the electronic device, in the step of inputting the current feature information s of the application into the training model for calculation, the current feature information s is input into the training model to obtain the predicted probability value [p_1' p_2']^T of the classification layer, where y = [1 0]^T when p_1' is greater than p_2', and y = [0 1]^T when p_1' is less than or equal to p_2'.
In the electronic device, the step of determining whether the application needs to be closed includes:
When y = [1 0]^T, determining that the application needs to be closed; and
When y = [0 1]^T, determining that the application needs to be retained.
The application management method provided by the present application is mainly applied to smart mobile electronic devices such as wristbands, smartphones, tablets based on Apple or Android systems, and notebook computers based on Windows or Linux systems. It should be noted that the application may be a chat application, a video application, a music application, a shopping application, a bicycle-sharing application, a mobile banking application, or the like.
Please refer to FIG. 1, which is a schematic diagram of the system of the application management apparatus provided by an embodiment of the present application. The application management apparatus is mainly configured to: obtain historical feature information x_i of an application from a database; compute a training model from the historical feature information x_i by means of an algorithm; then input the current feature information s of the application into the training model for calculation; and determine from the calculation result whether the application can be closed, so as to manage the preset application, for example by closing or freezing it.
Specifically, please refer to FIG. 2, which is a schematic diagram of an application scenario of the application management method provided by an embodiment of the present application. In one embodiment, historical feature information x_i of an application is obtained from a database and computed by an algorithm to obtain a training model; then, when the application management apparatus detects that an application has entered the background of the electronic device, the current feature information s of that application is input into the training model for calculation, and the calculation result determines whether the application can be closed. For example, historical feature information x_i of application a is obtained from the database and computed by the algorithm to obtain the training model; when the apparatus detects that application a has entered the background, its current feature information s is input into the training model, the calculation result indicates that application a can be closed, and application a is closed; when the apparatus detects that application b has entered the background, its current feature information s is input into the training model, the calculation result indicates that application b needs to be retained, and application b is retained.
An embodiment of the present application provides an application management method. The execution entity of the method may be the application management apparatus provided by an embodiment of the present invention, or an electronic device integrating that apparatus, where the apparatus may be implemented in hardware or in software.
请参阅图3,图3为本申请实施例提供的应用程序管控方法的流程示意图。本申请实施例提供的应用程序管控方法应用于电子设备,具体流程可以如下:Please refer to FIG. 3. FIG. 3 is a schematic flowchart diagram of an application management and control method according to an embodiment of the present application. The application management and control method provided by the embodiment of the present application is applied to an electronic device, and the specific process may be as follows:
In step S101, the application sample vector set is obtained, where the sample vectors in the sample vector set include historical feature information x_i of multiple dimensions of the application.
The application sample vector set is obtained from a sample database.
The feature information of the multiple dimensions may refer to Table 1.
[Table 1: feature information of ten dimensions of the application; the table is an image in the original publication and is not reproduced here.]
It should be noted that the ten dimensions of feature information shown in Table 1 are only one example of the embodiments of the present application. The present application is not limited to these ten dimensions: one of them, at least two of them, or all of them may be used, and feature information of other dimensions may also be included, for example, whether the device is currently charging, the current battery level, or whether WiFi is currently connected.
In one embodiment, historical feature information of six dimensions may be selected (an illustrative encoding is sketched after the list):
A. the time the application has resided in the background;
B. whether the screen is on, for example, screen on is recorded as 1 and screen off as 0;
C. the total number of uses in the current week;
D. the total usage time in the current week;
E. whether WiFi is on, for example, WiFi on is recorded as 1 and WiFi off as 0; and
F. whether the device is currently charging, for example, charging is recorded as 1 and not charging as 0.
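The following is a minimal, hypothetical sketch (not part of the disclosure) of how such a six-dimensional sample vector could be assembled in Python; every function and parameter name here is an assumption made for illustration only.

import numpy as np

def make_feature_vector(background_seconds, screen_on, uses_this_week,
                        minutes_this_week, wifi_on, charging):
    # Encode the six feature dimensions A-F above as one sample vector;
    # boolean dimensions are recorded as 1/0 as described in the text.
    return np.array([
        background_seconds,      # A: time the application has resided in the background
        1 if screen_on else 0,   # B: screen on = 1, screen off = 0
        uses_this_week,          # C: total number of uses in the current week
        minutes_this_week,       # D: total usage time in the current week
        1 if wifi_on else 0,     # E: WiFi on = 1, WiFi off = 0
        1 if charging else 0,    # F: charging = 1, not charging = 0
    ], dtype=float)

s = make_feature_vector(300, True, 42, 95, True, False)  # one 6-dimensional sample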
步骤S102,采用BP神经网络算法对样本向量集进行计算,生成训练模型。In step S102, the BP neural network algorithm is used to calculate the sample vector set to generate a training model.
请参阅图4,图4为本申请实施例提供的应用程序管控方法的流程示意图。在一种实施例中,所述步骤S102可以包括:Please refer to FIG. 4. FIG. 4 is a schematic flowchart diagram of an application management and control method according to an embodiment of the present application. In an embodiment, the step S102 may include:
步骤S1021:定义网络结构;以及Step S1021: defining a network structure;
步骤S1022:将样本向量集带入网络结构进行计算,得到训练模型。Step S1022: Bring the sample vector set into the network structure for calculation, and obtain a training model.
In step S1021, defining the network structure includes:
In step S1021a, an input layer is set, where the input layer includes N nodes and the number of nodes of the input layer is the same as the dimension of the historical feature information x_i.
The dimension of the historical feature information x_i is less than 10, and the number of nodes of the input layer is less than 10, so as to simplify the computation.
In one embodiment, the historical feature information x_i has 6 dimensions, and the input layer includes 6 nodes.
步骤S1021b,设定隐含层,所述隐含层包括M个节点。Step S1021b, setting a hidden layer, the hidden layer including M nodes.
其中,所述隐含层可以包括多个隐含分层。每一所述隐含分层的节点数小于10个,以简化运算过程。Wherein, the hidden layer may include a plurality of implicit layers. The number of nodes in each of the implicit layers is less than 10 to simplify the operation process.
在一种实施例中,所述隐含层可以包括第一隐含分层,第二隐含分层和第三隐含分层。所述第一隐含分层包括10个节点,第二隐含分层包括5个节点,第三隐含分层包括5个节点。In an embodiment, the hidden layer may include a first implicit layer, a second hidden layer, and a third hidden layer. The first implicit layering includes 10 nodes, the second implicit layering includes 5 nodes, and the third implicit layering includes 5 nodes.
In step S1021c, a classification layer is set, where the classification layer adopts a softmax function given by

p_k = e^{Z_k} / \sum_{j=1}^{C} e^{Z_j}

where p_k is the predicted probability value, Z_k is the intermediate value, C is the number of categories of the prediction result, and Z_j is the j-th intermediate value.
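As a purely illustrative check of the formula (the numbers below are not from the disclosure), take C = 2 with intermediate values Z_1 = 2 and Z_2 = 0.5:

p_1 = e^{2} / (e^{2} + e^{0.5}) ≈ 7.389 / 9.038 ≈ 0.818, and p_2 = e^{0.5} / (e^{2} + e^{0.5}) ≈ 0.182,

so the two predicted probabilities sum to 1, and the larger of the two determines the prediction result.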
步骤S1021d,设定输出层,所述输出层包括2个节点。In step S1021d, an output layer is set, and the output layer includes two nodes.
In step S1021e, an activation function is set, where the activation function adopts the sigmoid function

f(x) = 1 / (1 + e^{-x})

where f(x) ranges from 0 to 1.
步骤S1021f,设定批量大小,所述批量大小为A。In step S1021f, the batch size is set, and the batch size is A.
其中,所述批量大小可以根据实际情况灵活调整。所述批量大小可以为50-200。The batch size can be flexibly adjusted according to actual conditions. The batch size can be 50-200.
在一种实施例中,所述批量大小为128。In one embodiment, the batch size is 128.
步骤S1021g,设定学习率,所述学习率为B。In step S1021g, a learning rate is set, and the learning rate is B.
其中,所述学习率可以根据实际情况灵活调整。所述学习率可以为0.1-1.5。The learning rate can be flexibly adjusted according to actual conditions. The learning rate can be from 0.1 to 1.5.
在一种实施例中,所述学习率为0.9。In one embodiment, the learning rate is 0.9.
需要说明的是,所述步骤S1021a、S1021b、S1021c、S1021d、S1021e、S1021f、S1021g的先后顺序可以灵活调整。It should be noted that the order of the steps S1021a, S1021b, S1021c, S1021d, S1021e, S1021f, and S1021g can be flexibly adjusted.
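Taken together, steps S1021a to S1021g fix the shape and hyperparameters of the network. The following Python/NumPy sketch assumes the example values given above (6 input nodes; hidden sub-layers of 10, 5 and 5 nodes; a 2-class softmax classification layer; batch size 128; learning rate 0.9); the variable names and the random weight initialization are assumptions made for illustration, not part of the disclosure.

import numpy as np

rng = np.random.default_rng(0)
LAYER_SIZES = [6, 10, 5, 5, 2]   # input layer, three hidden sub-layers, classification layer
BATCH_SIZE = 128                 # batch size A
LEARNING_RATE = 0.9              # learning rate B

# One weight matrix and one bias vector per connection between consecutive layers.
weights = [rng.normal(0.0, 0.1, (m, n))
           for m, n in zip(LAYER_SIZES[:-1], LAYER_SIZES[1:])]
biases = [np.zeros(n) for n in LAYER_SIZES[1:]]

def sigmoid(x):
    # Activation function f(x) = 1 / (1 + e^{-x}), with outputs between 0 and 1.
    return 1.0 / (1.0 + np.exp(-x))

def softmax(z):
    # Classification layer p_k = e^{Z_k} / sum_j e^{Z_j}.
    z = z - z.max(axis=-1, keepdims=True)  # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)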
In step S1022, bringing the sample vector set into the network structure for calculation to obtain the training model may include the following steps:
步骤S1022a,在输入层输入所述样本向量集进行计算,得到输入层的输出值。In step S1022a, the sample vector set is input at the input layer for calculation, and an output value of the input layer is obtained.
In step S1022b, the output value of the input layer is input into the hidden layer to obtain the output value of the hidden layer.
其中,所述输入层的输出值为所述隐含层的输入值。Wherein, the output value of the input layer is an input value of the hidden layer.
在一种实施例中,所述隐含层可以包括多个隐含分层。所述输入层的输出值为第一隐含分层的输入值。所述第一隐含分层的输出值为第二隐含分层的输入值。所述第二隐含分层的输出值为所述第三隐含分层的输入值,依次类推。In an embodiment, the hidden layer may include a plurality of hidden layers. The output of the input layer is the input value of the first implicit layer. The output value of the first implicit layer is an input value of the second implicit layer. The output value of the second implicit layer is an input value of the third implicit layer, and so on.
In step S1022c, the output value of the hidden layer is input into the classification layer for calculation to obtain the predicted probability value [p_1 p_2]^T.
其中,所述隐含层的输出值为所述分类层的输入值。The output value of the hidden layer is an input value of the classification layer.
在一种实施例中,所述隐含层可以包括多个隐含分层。最后一个隐含分层的输出值为所述分类层的输入值。In an embodiment, the hidden layer may include a plurality of hidden layers. The output value of the last implicit layer is the input value of the classification layer.
In step S1022d, the predicted probability value is brought into the output layer for calculation to obtain a prediction result value y, where y = [1 0]^T when p_1 is greater than p_2, and y = [0 1]^T when p_1 is less than or equal to p_2.
其中,所述分类层的输出值为所述输出层的输入值。The output value of the classification layer is an input value of the output layer.
步骤S1022e,根据预测结果值y修正所述网络结构,得到训练模型。In step S1022e, the network structure is modified according to the prediction result value y to obtain a training model.
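Continuing the hypothetical sketch above, the forward computation of steps S1022a to S1022e could look as follows. The text does not spell out the update rule used to modify the network structure in step S1022e, so the single gradient step shown here (ordinary backpropagation of a cross-entropy loss) is an assumption.

def forward(x):
    # Steps S1022a/S1022b: input layer and hidden sub-layers with sigmoid activation;
    # the intermediate activations are kept for the backward pass.
    activations = [x]
    for w, b in zip(weights[:-1], biases[:-1]):
        activations.append(sigmoid(activations[-1] @ w + b))
    # Step S1022c: the classification layer yields the predicted probabilities [p_1 p_2]^T.
    activations.append(softmax(activations[-1] @ weights[-1] + biases[-1]))
    return activations

def train_step(x_batch, y_batch):
    # Step S1022e, sketched as one backpropagation update on a batch.
    acts = forward(x_batch)
    delta = (acts[-1] - y_batch) / len(x_batch)   # output-layer error under cross-entropy
    for i in reversed(range(len(weights))):
        grad_w = acts[i].T @ delta
        grad_b = delta.sum(axis=0)
        if i > 0:                                 # propagate the error through the sigmoid layers
            delta = (delta @ weights[i].T) * acts[i] * (1.0 - acts[i])
        weights[i] -= LEARNING_RATE * grad_w
        biases[i] -= LEARNING_RATE * grad_b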
步骤S103,当应用程序进入后台,将所述应用程序的当前特征信息s输入所述训练模型进行计算。Step S103, when the application enters the background, the current feature information s of the application is input into the training model for calculation.
请参阅图4,在一种实施例中,所述步骤S103可以包括:Referring to FIG. 4, in an embodiment, the step S103 may include:
步骤S1031:采集所述应用程序的当前特征信息s。Step S1031: Collect current feature information s of the application.
The dimension of the collected current feature information s of the application is the same as the dimension of the collected historical feature information x_i of the application.
步骤S1032:将当前特征信息s带入训练模型进行计算。Step S1032: Bring the current feature information s into the training model for calculation.
The current feature information s is input into the training model to obtain the predicted probability value [p_1' p_2']^T of the classification layer, where y = [1 0]^T when p_1' is greater than p_2', and y = [0 1]^T when p_1' is less than or equal to p_2'.
步骤S104,判断所述应用程序是否需要关闭。In step S104, it is determined whether the application needs to be closed.
It should be noted that when y = [1 0]^T, it is determined that the application needs to be closed; when y = [0 1]^T, it is determined that the application needs to be retained.
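As an illustrative continuation of the sketch above, once the model has been trained, steps S103 and S104 reduce to one forward pass and a comparison of p_1' and p_2'; the closing action itself is platform-specific and is therefore only stubbed out here.

def predict(s):
    # y = [1 0]^T when p_1' > p_2' (close), y = [0 1]^T when p_1' <= p_2' (retain).
    p = forward(s)[-1]
    return np.array([1, 0]) if p[0] > p[1] else np.array([0, 1])

def on_app_enters_background(s):
    y = predict(s)
    if (y == np.array([1, 0])).all():
        print("close the application")   # hypothetical platform-specific action
    else:
        print("retain the application")

on_app_enters_background(make_feature_vector(300, True, 42, 95, True, False))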
According to the application management method provided by the present application, historical feature information x_i is obtained and a training model is generated by means of the BP neural network algorithm; when an application is detected entering the background, its current feature information s is brought into the training model, so that whether the application needs to be closed can be determined and the application can be closed intelligently.
请参阅图5,图5为本申请实施例提供的应用程序管控装置的结构示意图。所述装置30包括获取模块31,生成模块32、计算模块33和判断模块34。Referring to FIG. 5, FIG. 5 is a schematic structural diagram of an application program management apparatus according to an embodiment of the present application. The device 30 includes an acquisition module 31, a generation module 32, a calculation module 33, and a determination module 34.
需要说明的是,所述应用程序可以为聊天应用程序、视频应用程序、音乐应用程序、购物应用程序、共享单车应用程序或手机银行应用程序等。It should be noted that the application may be a chat application, a video application, a music application, a shopping application, a shared bicycle application, or a mobile banking application.
The obtaining module 31 is configured to obtain the application sample vector set, where the sample vectors in the sample vector set include historical feature information x_i of multiple dimensions of the application.
The application sample vector set is obtained from a sample database.
请参阅图6,图6为本申请实施例提供的应用程序管控装置的结构示意图。所述装置30还包括检测模块35,用于检测所述应用程序进入后台。Please refer to FIG. 6. FIG. 6 is a schematic structural diagram of an application program management apparatus according to an embodiment of the present application. The device 30 further includes a detection module 35 for detecting that the application enters the background.
The device 30 may further include a storage module 36. The storage module 36 is configured to store the historical feature information x_i of the application.
The feature information of the multiple dimensions may refer to Table 2.
[Table 2: feature information of ten dimensions of the application; the table is an image in the original publication and is not reproduced here.]
It should be noted that the ten dimensions of feature information shown in Table 2 are only one example of the embodiments of the present application. The present application is not limited to these ten dimensions: one of them, at least two of them, or all of them may be used, and feature information of other dimensions may also be included, for example, whether the device is currently charging, the current battery level, or whether WiFi is currently connected.
In one embodiment, historical feature information of six dimensions may be selected:
A. the time the application has resided in the background;
B. whether the screen is on, for example, screen on is recorded as 1 and screen off as 0;
C. the total number of uses in the current week;
D. the total usage time in the current week;
E. whether WiFi is on, for example, WiFi on is recorded as 1 and WiFi off as 0; and
F. whether the device is currently charging, for example, charging is recorded as 1 and not charging as 0.
所述生成模块32用于采用BP神经网络算法对样本向量集进行计算,生成训练模型。The generating module 32 is configured to calculate a sample vector set by using a BP neural network algorithm to generate a training model.
The generating module 32 trains on the historical feature information x_i acquired by the obtaining module 31 by inputting the historical feature information x_i into the BP neural network algorithm.
请参阅图6,所述生成模块32包括定义模块321和求解模块322。Referring to FIG. 6, the generating module 32 includes a defining module 321 and a solving module 322.
所述定义模块321用于定义网络结构。The definition module 321 is used to define a network structure.
所述定义模块321可以包括输入层定义模块3211、隐含层定义模块3212、分类层定义模块3213、输出层定义模块3214、激活函数定义模块3215、批量大小定义模块3216和学习率定义模块3217。The definition module 321 may include an input layer definition module 3211, an implicit layer definition module 3212, a classification layer definition module 3213, an output layer definition module 3214, an activation function definition module 3215, a batch size definition module 3216, and a learning rate definition module 3217.
The input layer definition module 3211 is configured to set an input layer, where the input layer includes N nodes and the number of nodes of the input layer is the same as the dimension of the historical feature information x_i.
The dimension of the historical feature information x_i is less than 10, and the number of nodes of the input layer is less than 10, so as to simplify the computation.
In one embodiment, the historical feature information x_i has 6 dimensions, and the input layer includes 6 nodes.
所述隐含层定义模块3212用于设定隐含层,所述隐含层包括M个节点。The hidden layer definition module 3212 is configured to set an implicit layer, and the hidden layer includes M nodes.
其中,所述隐含层可以包括多个隐含分层。每一所述隐含分层的节点数小于10个,以简化运算过程。Wherein, the hidden layer may include a plurality of implicit layers. The number of nodes in each of the implicit layers is less than 10 to simplify the operation process.
在一种实施例中,所述隐含层可以包括第一隐含分层,第二隐含分层和第三隐含分层。所述第一隐含分层包括10个节点,第二隐含分层包括5个节点,第三隐含分层包括5个节点。In an embodiment, the hidden layer may include a first implicit layer, a second hidden layer, and a third hidden layer. The first implicit layering includes 10 nodes, the second implicit layering includes 5 nodes, and the third implicit layering includes 5 nodes.
The classification layer definition module 3213 is configured to set a classification layer, where the classification layer adopts a softmax function given by

p_k = e^{Z_k} / \sum_{j=1}^{C} e^{Z_j}

where p_k is the predicted probability value, Z_k is the intermediate value, C is the number of categories of the prediction result, and Z_j is the j-th intermediate value.
所述输出层定义模块3214用于设定输出层,所述输出层包括2个节点。The output layer definition module 3214 is configured to set an output layer, and the output layer includes 2 nodes.
The activation function definition module 3215 is configured to set an activation function, where the activation function adopts the sigmoid function

f(x) = 1 / (1 + e^{-x})

where f(x) ranges from 0 to 1.
所述批量大小定义模块3216用于设定批量大小,所述批量大小为A。The batch size definition module 3216 is configured to set a batch size, and the batch size is A.
其中,所述批量大小可以根据实际情况灵活调整。所述批量大小可以为50-200。The batch size can be flexibly adjusted according to actual conditions. The batch size can be 50-200.
在一种实施例中,所述批量大小为128。In one embodiment, the batch size is 128.
所述学习率定义模块3217用于设定学习率,所述学习率为B。The learning rate definition module 3217 is configured to set a learning rate, and the learning rate is B.
其中,所述学习率可以根据实际情况灵活调整。所述学习率可以为0.1-1.5。The learning rate can be flexibly adjusted according to actual conditions. The learning rate can be from 0.1 to 1.5.
在一种实施例中,所述学习率为0.9。In one embodiment, the learning rate is 0.9.
It should be noted that the order in which the input layer definition module 3211 sets the input layer, the hidden layer definition module 3212 sets the hidden layer, the classification layer definition module 3213 sets the classification layer, the output layer definition module 3214 sets the output layer, the activation function definition module 3215 sets the activation function, the batch size definition module 3216 sets the batch size, and the learning rate definition module 3217 sets the learning rate can be flexibly adjusted.
所述求解模块322用于将样本向量集带入网络结构进行计算,得到训练模型。The solving module 322 is configured to bring the sample vector set into the network structure for calculation to obtain a training model.
The solving module 322 may include a first solving module 3221, a second solving module 3222, a third solving module 3223, a fourth solving module 3224, and a correction module 3225.
所述第一求解模块3221用于在输入层输入所述样本向量集进行计算,得到输入层的输出值。The first solving module 3221 is configured to input the sample vector set at the input layer for calculation to obtain an output value of the input layer.
The second solving module 3222 is configured to input the output value of the input layer into the hidden layer to obtain the output value of the hidden layer.
其中,所述输入层的输出值为所述隐含层的输入值。Wherein, the output value of the input layer is an input value of the hidden layer.
在一种实施例中,所述隐含层可以包括多个隐含分层。所述输入层的输出值为第一隐含分层的输入值。所述第一隐含分层的输出值为第二隐含分层的输入值。所述第二隐含分层的输出值为所述第三隐含分层的输入值,依次类推。In an embodiment, the hidden layer may include a plurality of hidden layers. The output of the input layer is the input value of the first implicit layer. The output value of the first implicit layer is an input value of the second implicit layer. The output value of the second implicit layer is an input value of the third implicit layer, and so on.
The third solving module 3223 is configured to input the output value of the hidden layer into the classification layer for calculation to obtain the predicted probability value [p_1 p_2]^T.
其中,所述隐含层的输出值为所述分类层的输入值。The output value of the hidden layer is an input value of the classification layer.
The fourth solving module 3224 is configured to bring the predicted probability value into the output layer for calculation to obtain a prediction result value y, where y = [1 0]^T when p_1 is greater than p_2, and y = [0 1]^T when p_1 is less than or equal to p_2.
其中,所述分类层的输出值为所述输出层的输入值。The output value of the classification layer is an input value of the output layer.
所述修正模块3225用于根据预测结果值y修正所述网络结构,得到训练模型。The modification module 3225 is configured to modify the network structure according to the prediction result value y to obtain a training model.
所述计算模块33用于当应用程序进入后台,将所述应用程序的当前特征信息s输入所述训练模型进行计算。The calculating module 33 is configured to input the current feature information s of the application into the training model for calculation when the application enters the background.
请参阅图6,在一种实施例中,所述计算模块33可以包括采集模块331和运算模块332。Referring to FIG. 6 , in an embodiment, the calculation module 33 may include an acquisition module 331 and an operation module 332 .
所述采集模块331用于采集所述应用程序的当前特征信息s。The collecting module 331 is configured to collect current feature information s of the application.
The dimension of the collected current feature information s of the application is the same as the dimension of the collected historical feature information x_i of the application.
The operation module 332 is configured to bring the current feature information s into the training model for calculation.
The current feature information s is input into the training model to obtain the predicted probability value [p_1' p_2']^T of the classification layer, where y = [1 0]^T when p_1' is greater than p_2', and y = [0 1]^T when p_1' is less than or equal to p_2'.
In one embodiment, the collection module 331 is configured to periodically collect the current feature information s at predetermined collection times and store it in the storage module 36; the collection module 331 is further configured to collect the current feature information s corresponding to the time point at which the application is detected entering the background, and to input that current feature information s into the operation module 332 so that it can be brought into the training model for calculation.
所述判断模块34用于判断所述应用程序是否需要关闭。The determining module 34 is configured to determine whether the application needs to be closed.
It should be noted that when y = [1 0]^T, it is determined that the application needs to be closed; when y = [0 1]^T, it is determined that the application needs to be retained.
所述装置30还可以包括关闭模块37,用于当判断应用程序需要关闭时,将所述应用程序关闭。The apparatus 30 can also include a shutdown module 37 for shutting down the application when it is determined that the application needs to be closed.
According to the apparatus for application management provided by the present application, historical feature information x_i is obtained and a training model is generated by means of the BP neural network algorithm; when an application is detected entering the background, its current feature information s is brought into the training model, so that whether the application needs to be closed can be determined and the application can be closed intelligently.
请参阅图7,图7为本申请实施例提供的电子设备的结构示意图。所述电子设备500包括:处理器501和存储器502。其中,处理器501与存储器502电性连接。Please refer to FIG. 7. FIG. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application. The electronic device 500 includes a processor 501 and a memory 502. The processor 501 is electrically connected to the memory 502.
The processor 501 is the control center of the electronic device 500. It connects the various parts of the entire electronic device 500 through various interfaces and lines, and performs the various functions of the electronic device and processes data by running or loading applications stored in the memory 502 and calling data stored in the memory 502, thereby monitoring the electronic device 500 as a whole.
In this embodiment, the processor 501 in the electronic device 500 loads instructions corresponding to the processes of one or more applications into the memory 502 according to the following steps, and runs the applications stored in the memory 502, thereby implementing various functions:
Obtaining the application sample vector set, wherein the sample vectors in the sample vector set include historical feature information x_i of multiple dimensions of the application;
采用神经网络算法对样本向量集进行计算,生成训练模型;The neural network algorithm is used to calculate the sample vector set to generate a training model;
当应用程序进入后台,将所述应用程序的当前特征信息s输入所述训练模型进行计算;以及When the application enters the background, the current feature information s of the application is input into the training model for calculation;
判断所述应用程序是否需要关闭。Determine if the application needs to be closed.
需要说明的是,所述应用程序可以为聊天应用程序、视频应用程序、音乐应用程序、购物应用程序、共享单车应用程序或手机银行应用程序等。It should be noted that the application may be a chat application, a video application, a music application, a shopping application, a shared bicycle application, or a mobile banking application.
The application sample vector set is obtained from a sample database, wherein the sample vectors in the sample vector set include historical feature information x_i of multiple dimensions of the application.
The feature information of the multiple dimensions may refer to Table 3.
[Table 3: feature information of ten dimensions of the application; the table is an image in the original publication and is not reproduced here.]
It should be noted that the ten dimensions of feature information shown in Table 3 are only one example of the embodiments of the present application. The present application is not limited to these ten dimensions: one of them, at least two of them, or all of them may be used, and feature information of other dimensions may also be included, for example, whether the device is currently charging, the current battery level, or whether WiFi is currently connected.
In one embodiment, historical feature information of six dimensions may be selected:
A. the time the application has resided in the background;
B. whether the screen is on, for example, screen on is recorded as 1 and screen off as 0;
C. the total number of uses in the current week;
D. the total usage time in the current week;
E. whether WiFi is on, for example, WiFi on is recorded as 1 and WiFi off as 0; and
F. whether the device is currently charging, for example, charging is recorded as 1 and not charging as 0.
在一种实施例中,所述处理器501采用BP神经网络算法对样本向量集进行计算,生成训练模型还包括:In an embodiment, the processor 501 calculates a sample vector set by using a BP neural network algorithm, and the generating the training model further includes:
定义网络结构;以及Define the network structure;
将样本向量集带入网络结构进行计算,得到训练模型。The sample vector set is brought into the network structure for calculation to obtain a training model.
其中,所述定义网络结构包括:Wherein, the defined network structure includes:
Setting an input layer, where the input layer includes N nodes and the number of nodes of the input layer is the same as the dimension of the historical feature information x_i;
The dimension of the historical feature information x_i is less than 10, and the number of nodes of the input layer is less than 10, so as to simplify the computation.
In one embodiment, the historical feature information x_i has 6 dimensions, and the input layer includes 6 nodes.
设定隐含层,所述隐含层包括M个节点。A hidden layer is set, the hidden layer including M nodes.
其中,所述隐含层可以包括多个隐含分层。每一所述隐含分层的节点数小于10个,以简化运算过程。Wherein, the hidden layer may include a plurality of implicit layers. The number of nodes in each of the implicit layers is less than 10 to simplify the operation process.
在一种实施例中,所述隐含层可以包括第一隐含分层,第二隐含分层和第三隐含分层。所述第一隐含分层包括10个节点,第二隐含分层包括5个节点,第三隐含分层包括5个节点。In an embodiment, the hidden layer may include a first implicit layer, a second hidden layer, and a third hidden layer. The first implicit layering includes 10 nodes, the second implicit layering includes 5 nodes, and the third implicit layering includes 5 nodes.
Setting a classification layer, where the classification layer adopts a softmax function given by

p_k = e^{Z_k} / \sum_{j=1}^{C} e^{Z_j}

where p_k is the predicted probability value, Z_k is the intermediate value, C is the number of categories of the prediction result, and Z_j is the j-th intermediate value.
设定输出层,所述输出层包括2个节点。An output layer is set, the output layer comprising 2 nodes.
Setting an activation function, where the activation function adopts the sigmoid function

f(x) = 1 / (1 + e^{-x})

where f(x) ranges from 0 to 1.
设定批量大小,所述批量大小为A。Set the batch size, which is A.
其中,所述批量大小可以根据实际情况灵活调整。所述批量大小可以为50-200。The batch size can be flexibly adjusted according to actual conditions. The batch size can be 50-200.
在一种实施例中,所述批量大小为128。In one embodiment, the batch size is 128.
设定学习率,所述学习率为B。The learning rate is set, and the learning rate is B.
其中,所述学习率可以根据实际情况灵活调整。所述学习率可以为0.1-1.5。The learning rate can be flexibly adjusted according to actual conditions. The learning rate can be from 0.1 to 1.5.
在一种实施例中,所述学习率为0.9。In one embodiment, the learning rate is 0.9.
需要说明的是,所述设定输入层、设定隐含层、设定分类层、设定输出层、设定激活函数、设定批量大小、设定学习率的先后顺序可以灵活调整。It should be noted that the order of setting the input layer, setting the hidden layer, setting the classification layer, setting the output layer, setting the activation function, setting the batch size, and setting the learning rate can be flexibly adjusted.
所述将样本向量集带入网络结构进行计算,得到训练模型的步骤可以包括:The step of bringing the sample vector set into the network structure for calculation, and obtaining the training model may include:
在输入层输入所述样本向量集进行计算,得到输入层的输出值。The sample vector set is input at the input layer for calculation to obtain an output value of the input layer.
在所述隐含层的输入所述输入层的输出值,得到所述隐含层的输出值。An output value of the input layer is input to the hidden layer to obtain an output value of the hidden layer.
其中,所述输入层的输出值为所述隐含层的输入值。Wherein, the output value of the input layer is an input value of the hidden layer.
在一种实施例中,所述隐含层可以包括多个隐含分层。所述输入层的输出值为第一隐含分层的输入值。所述第一隐含分层的输出值为第二隐含分层的输入值。所述第二隐含分层的输出值为所述第三隐含分层的输入值,依次类推。In an embodiment, the hidden layer may include a plurality of hidden layers. The output of the input layer is the input value of the first implicit layer. The output value of the first implicit layer is an input value of the second implicit layer. The output value of the second implicit layer is an input value of the third implicit layer, and so on.
The output value of the hidden layer is input into the classification layer for calculation to obtain the predicted probability value [p_1 p_2]^T.
其中,所述隐含层的输出值为所述分类层的输入值。The output value of the hidden layer is an input value of the classification layer.
在一种实施例中,所述隐含层可以包括多个隐含分层。最后一个隐含分层的输出值为所述分类层的输入值。In an embodiment, the hidden layer may include a plurality of hidden layers. The output value of the last implicit layer is the input value of the classification layer.
The predicted probability value is brought into the output layer for calculation to obtain a prediction result value y, where y = [1 0]^T when p_1 is greater than p_2, and y = [0 1]^T when p_1 is less than or equal to p_2.
其中,所述分类层的输出值为所述输出层的输入值。The output value of the classification layer is an input value of the output layer.
根据预测结果值y修正所述网络结构,得到训练模型。The network structure is modified according to the predicted result value y to obtain a training model.
所述当应用程序进入后台,将所述应用程序的当前特征信息s输入所述训练模型进行计算的步骤包括:When the application enters the background, the step of inputting the current feature information s of the application into the training model for calculation includes:
采集所述应用程序的当前特征信息s。The current feature information s of the application is collected.
The dimension of the collected current feature information s of the application is the same as the dimension of the collected historical feature information x_i of the application.
将当前特征信息s带入训练模型进行计算。The current feature information s is brought into the training model for calculation.
The current feature information s is input into the training model to obtain the predicted probability value [p_1' p_2']^T of the classification layer, where y = [1 0]^T when p_1' is greater than p_2', and y = [0 1]^T when p_1' is less than or equal to p_2'.
In the step of determining whether the application needs to be closed, when y = [1 0]^T it is determined that the application needs to be closed, and when y = [0 1]^T it is determined that the application needs to be retained.
存储器502可用于存储应用程序和数据。存储器502存储的程序中包含有可在处理器中执行的指令。所述程序可以组成各种功能模块。处理器501通过运行存储在存储器502 的程序,从而执行各种功能应用以及数据处理。 Memory 502 can be used to store applications and data. The program stored in the memory 502 contains instructions executable in the processor. The program can constitute various functional modules. The processor 501 executes various function applications and data processing by running a program stored in the memory 502.
在一些实施例中,如图8所示,图8为本申请实施例提供的电子设备的结构示意图。所述电子设备500还包括:射频电路503、显示屏504、控制电路505、输入单元506、音频电路507、传感器508以及电源509。其中,处理器501分别与射频电路503、显示屏504、控制电路505、输入单元506、音频电路507、传感器508以及电源509电性连接。In some embodiments, as shown in FIG. 8, FIG. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present application. The electronic device 500 further includes a radio frequency circuit 503, a display screen 504, a control circuit 505, an input unit 506, an audio circuit 507, a sensor 508, and a power source 509. The processor 501 is electrically connected to the radio frequency circuit 503, the display screen 504, the control circuit 505, the input unit 506, the audio circuit 507, the sensor 508, and the power source 509, respectively.
射频电路503用于收发射频信号,以通过无线通信网络与服务器或其他电子设备进行通信。The radio frequency circuit 503 is configured to transceive radio frequency signals to communicate with a server or other electronic device over a wireless communication network.
显示屏504可用于显示由用户输入的信息或提供给用户的信息以及终端的各种图形用户接口,这些图形用户接口可以由图像、文本、图标、视频和其任意组合来构成。The display screen 504 can be used to display information entered by the user or information provided to the user as well as various graphical user interfaces of the terminal, which can be composed of images, text, icons, video, and any combination thereof.
控制电路505与显示屏504电性连接,用于控制显示屏504显示信息。The control circuit 505 is electrically connected to the display screen 504 for controlling the display screen 504 to display information.
输入单元506可用于接收输入的数字、字符信息或用户特征信息(例如指纹),以及产生与用户设置以及功能控制有关的键盘、鼠标、操作杆、光学或者轨迹球信号输入。The input unit 506 can be configured to receive input digits, character information, or user characteristic information (eg, fingerprints), and to generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function controls.
音频电路507可通过扬声器、传声器提供用户与终端之间的音频接口。The audio circuit 507 can provide an audio interface between the user and the terminal through a speaker and a microphone.
传感器508用于采集外部环境信息。传感器508可以包括环境亮度传感器、加速度传感器、陀螺仪等传感器中的一种或多种。 Sensor 508 is used to collect external environmental information. Sensor 508 can include one or more of ambient brightness sensors, acceleration sensors, gyroscopes, and the like.
电源509用于给电子设备500的各个部件供电。在一些实施例中,电源509可以通过电源管理系统与处理器501逻辑相连,从而通过电源管理系统实现管理充电、放电、以及功耗管理等功能。 Power source 509 is used to power various components of electronic device 500. In some embodiments, the power supply 509 can be logically coupled to the processor 501 through a power management system to enable functions such as managing charging, discharging, and power management through the power management system.
尽管图8中未示出,电子设备500还可以包括摄像头、蓝牙模块等,在此不再赘述。Although not shown in FIG. 8, the electronic device 500 may further include a camera, a Bluetooth module, and the like, and details are not described herein again.
According to the electronic device provided by the present application, historical feature information x_i is obtained and a training model is generated by means of the BP neural network algorithm; when an application is detected entering the background, its current feature information s is brought into the training model, so that whether the application needs to be closed can be determined and the application can be closed intelligently.
本发明实施例还提供一种介质,该介质中存储有多条指令,该指令适于由处理器加载以执行上述任一实施例所述的应用程序管控方法。The embodiment of the present invention further provides a medium in which a plurality of instructions are stored, the instructions being adapted to be loaded by a processor to execute the application management method described in any of the above embodiments.
本发明实施例提供的应用程序管控方法、装置、介质及电子设备属于同一构思,其具体实现过程详见说明书全文,此处不再赘述。The application management method, the device, the medium, and the electronic device provided by the embodiments of the present invention belong to the same concept, and the specific implementation process thereof is described in the full text of the specification, and details are not described herein again.
本领域普通技术人员可以理解上述实施例的各种方法中的全部或部分步骤是可以通过程序来指令相关的硬件来完成,该程序可以存储于一计算机可读存储介质中,存储介质可以包括:只读存储器(ROM,Read Only Memory)、随机存取记忆体(RAM,Random Access Memory)、磁盘或光盘等。A person skilled in the art may understand that all or part of the various steps of the foregoing embodiments may be performed by a program to instruct related hardware. The program may be stored in a computer readable storage medium, and the storage medium may include: Read Only Memory (ROM), Random Access Memory (RAM), disk or optical disk.
The application management method, apparatus, medium, and electronic device provided by the embodiments of the present application have been described in detail above. Specific examples are used herein to explain the principles and embodiments of the present application, and the description of the above embodiments is only intended to help understand the present application. Meanwhile, those skilled in the art may make changes to the specific embodiments and the scope of application according to the ideas of the present application. In summary, the contents of this specification should not be construed as limiting the present application.

Claims (19)

  1. 一种应用程序管控方法,应用于电子设备,其中,所述应用程序管控方法包括以下步骤:An application management method is applied to an electronic device, wherein the application management method comprises the following steps:
Obtaining the application sample vector set, wherein the sample vectors in the sample vector set include historical feature information x_i of multiple dimensions of the application;
    采用反向传播(Back Propagation,BP)神经网络算法对样本向量集进行计算,生成训练模型;The Back Propagation (BP) neural network algorithm is used to calculate the sample vector set to generate a training model.
    当应用程序进入后台,将所述应用程序的当前特征信息s输入所述训练模型进行计算;以及When the application enters the background, the current feature information s of the application is input into the training model for calculation;
    判断所述应用程序是否需要关闭。Determine if the application needs to be closed.
  2. 如权利要求1所述的应用程序管控方法,其中,采用BP神经网络算法对样本向量集进行计算,生成训练模型的步骤包括:The application management method according to claim 1, wherein the BP neural network algorithm is used to calculate the sample vector set, and the step of generating the training model comprises:
    定义网络结构;以及Define the network structure;
    将样本向量集带入网络结构进行计算,得到训练模型。The sample vector set is brought into the network structure for calculation to obtain a training model.
  3. 如权利要求2所述的应用程序管控方法,其中,在所述定义网络结构的步骤中,包括:The application management method according to claim 2, wherein in the step of defining the network structure, the method comprises:
Setting an input layer, wherein the input layer includes N nodes, and the number of nodes of the input layer is the same as the dimension of the historical feature information x_i;
    设定隐含层,所述隐含层包括M个节点;Setting a hidden layer, the hidden layer including M nodes;
Setting a classification layer, wherein the classification layer adopts a Softmax function given by

p_k = e^{Z_k} / \sum_{j=1}^{C} e^{Z_j}

wherein p_k is the predicted probability value, Z_k is the intermediate value, C is the number of categories of the prediction result, and Z_j is the j-th intermediate value;
    设定输出层,所述输出层包括2个节点;Setting an output layer, the output layer comprising 2 nodes;
Setting an activation function, wherein the activation function adopts the sigmoid function

f(x) = 1 / (1 + e^{-x})

wherein f(x) ranges from 0 to 1;
    设定批量大小,所述批量大小为A;以及Set the batch size, the batch size is A;
    设定学习率,所述学习率为B。The learning rate is set, and the learning rate is B.
4. The application management method according to claim 3, wherein the hidden layer includes a first hidden sub-layer, a second hidden sub-layer, and a third hidden sub-layer, and the number of nodes in each of the first, second, and third hidden sub-layers is less than 10.
5. The application management method according to claim 3, wherein the dimension of the historical feature information x_i is less than 10, and the number of nodes of the input layer is less than 10.
  6. 如权利要求3所述的应用程序管控方法,其中,所述将样本向量集带入网络结构进行计算,得到训练模型的步骤包括:The application management method according to claim 3, wherein the step of bringing the sample vector set into the network structure for calculation, and obtaining the training model comprises:
    在输入层输入所述样本向量集进行计算,得到输入层的输出值;Inputting the sample vector set at the input layer to perform calculation, and obtaining an output value of the input layer;
    在所述隐含层的输入所述输入层的输出值,得到所述隐含层的输出值;Inputting an output value of the input layer at the hidden layer to obtain an output value of the hidden layer;
Inputting the output value of the hidden layer into the classification layer for calculation to obtain the predicted probability value [p_1 p_2]^T;
Bringing the predicted probability value into the output layer for calculation to obtain a prediction result value y, wherein y = [1 0]^T when p_1 is greater than p_2, and y = [0 1]^T when p_1 is less than or equal to p_2; and
    根据预测结果值y修正所述网络结构,得到训练模型。The network structure is modified according to the predicted result value y to obtain a training model.
7. The application management method according to claim 6, wherein in the step of inputting the current feature information s of the application into the training model for calculation, the current feature information s is input into the training model to obtain the predicted probability value [p_1' p_2']^T of the classification layer, wherein y = [1 0]^T when p_1' is greater than p_2', and y = [0 1]^T when p_1' is less than or equal to p_2'.
  8. 如权利要求7所述的应用程序管控方法,其中,在所述判断所述应用程序是否需要关闭的步骤中,包括:The application management method according to claim 7, wherein in the step of determining whether the application needs to be closed, the method comprises:
When y = [1 0]^T, determining that the application needs to be closed; and
When y = [0 1]^T, determining that the application needs to be retained.
  9. 如权利要求1所述的应用程序管控方法,其中,在所述当应用程序进入后台,将所述应用程序的当前特征信息s输入所述训练模型进行计算中,包括:The application management method according to claim 1, wherein, when the application enters the background, the current feature information s of the application is input into the training model for calculation, including:
    采集所述应用程序的当前特征信息s;Collecting current feature information s of the application;
    将当前特征信息s带入训练模型进行计算。The current feature information s is brought into the training model for calculation.
  10. 一种应用程序管控装置,其中,所述装置包括:An application management device, wherein the device comprises:
an obtaining module, configured to obtain the application sample vector set, wherein the sample vectors in the sample vector set include historical feature information x_i of multiple dimensions of the application;
    生成模块,用于采用BP神经网络算法对样本向量集进行计算,生成训练模型;a generating module for calculating a sample vector set by using a BP neural network algorithm to generate a training model;
    计算模块,用于当应用程序进入后台,将所述应用程序的当前特征信息s输入所述训练模型进行计算;以及a calculation module, configured to input the current feature information s of the application into the training model for calculation when the application enters the background;
    判断模块,用于判断所述应用程序是否需要关闭。The determining module is configured to determine whether the application needs to be closed.
  11. 一种介质,其中,所述介质中存储有多条指令,所述指令适于由处理器加载以执行如权利要求1至9中任一项所述的应用程序管控方法。A medium in which a plurality of instructions are stored in the medium, the instructions being adapted to be loaded by a processor to perform the application management method according to any one of claims 1 to 9.
  12. An electronic device, wherein the electronic device comprises a processor and a memory, the processor is electrically connected to the memory, the memory is configured to store instructions and data, and the processor is configured to perform:
    obtaining a sample vector set of the application, wherein a sample vector in the sample vector set includes historical feature information x_i of multiple dimensions of the application;
    calculating the sample vector set by using a back propagation (BP) neural network algorithm to generate a training model;
    inputting the current feature information s of the application into the training model for calculation when the application enters the background; and
    determining whether the application needs to be closed.
  13. The electronic device according to claim 12, wherein the step of calculating the sample vector set by using the BP neural network algorithm to generate the training model comprises:
    defining a network structure; and
    bringing the sample vector set into the network structure for calculation to obtain the training model.
  14. The electronic device according to claim 13, wherein the step of defining the network structure comprises:
    setting an input layer, the input layer including N nodes, the number of nodes of the input layer being the same as the dimension of the historical feature information x_i;
    setting a hidden layer, the hidden layer including M nodes;
    setting a classification layer, the classification layer using a Softmax function, the Softmax function being

    $p_k = \frac{e^{Z_k}}{\sum_{j=1}^{C} e^{Z_j}}$

    where p_k is the predicted probability value, Z_k is the k-th intermediate value, C is the number of categories of the predicted result, and Z_j is the j-th intermediate value;
    setting an output layer, the output layer including 2 nodes;
    setting an activation function, the activation function using a sigmoid function, the sigmoid function being

    $f(x) = \frac{1}{1 + e^{-x}}$

    where f(x) ranges from 0 to 1;
    setting a batch size, the batch size being A; and
    setting a learning rate, the learning rate being B.
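A compact Python/NumPy sketch of this network structure, using the three hidden sublayers of claim 15. N, the sublayer widths, the batch size A, and the learning rate B are not fixed by the claims, so the values below are illustrative assumptions:

    import numpy as np

    rng = np.random.default_rng(0)

    class BPNetwork:
        """Input layer of N nodes, three hidden sublayers (each under 10 nodes),
        a softmax classification layer, and a 2-node output layer."""

        def __init__(self, n_input=8, hidden=(8, 8, 8), n_output=2,
                     batch_size=16, learning_rate=0.01):
            sizes = (n_input, *hidden, n_output)
            self.weights = [rng.normal(0.0, 0.1, (m, n))
                            for n, m in zip(sizes[:-1], sizes[1:])]
            self.biases = [np.zeros(m) for m in sizes[1:]]
            self.batch_size = batch_size        # the claim's batch size A
            self.learning_rate = learning_rate  # the claim's learning rate B

        @staticmethod
        def sigmoid(x):
            return 1.0 / (1.0 + np.exp(-x))     # activation f(x), range (0, 1)

        def forward(self, x):
            for W, b in zip(self.weights[:-1], self.biases[:-1]):
                x = self.sigmoid(W @ x + b)     # hidden sublayers
            z = self.weights[-1] @ x + self.biases[-1]
            e = np.exp(z - z.max())
            return e / e.sum()                  # softmax output [p1, p2]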
  15. The electronic device according to claim 14, wherein the hidden layer includes a first hidden sublayer, a second hidden sublayer, and a third hidden sublayer, and the number of nodes in each of the first hidden sublayer, the second hidden sublayer, and the third hidden sublayer is less than 10.
  16. The electronic device according to claim 14, wherein the dimension of the historical feature information x_i is less than 10, and the number of nodes of the input layer is less than 10.
  17. The electronic device according to claim 14, wherein the step of bringing the sample vector set into the network structure for calculation to obtain the training model comprises:
    inputting the sample vector set into the input layer for calculation to obtain an output value of the input layer;
    inputting the output value of the input layer into the hidden layer to obtain an output value of the hidden layer;
    inputting the output value of the hidden layer into the classification layer for calculation to obtain the predicted probability value [p_1 p_2]^T;
    passing the predicted probability value into the output layer for calculation to obtain the predicted result value y, where y = [1 0]^T when p_1 is greater than p_2, and y = [0 1]^T when p_1 is less than or equal to p_2; and
    correcting the network structure according to the predicted result value y to obtain the training model.
  18. The electronic device according to claim 17, wherein in the step of inputting the current feature information s of the application into the training model for calculation, the current feature information s is input into the training model to calculate the predicted probability value [p_1' p_2']^T of the classification layer, where y = [1 0]^T when p_1' is greater than p_2', and y = [0 1]^T when p_1' is less than or equal to p_2'.
  19. The electronic device according to claim 18, wherein the step of determining whether the application needs to be closed comprises:
    determining that the application needs to be closed when y = [1 0]^T; and
    determining that the application needs to be retained when y = [0 1]^T.
PCT/CN2018/110518 2017-10-31 2018-10-16 Application program control method and apparatus, medium, and electronic device WO2019085749A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201711044959.5 2017-10-31
CN201711044959.5A CN107885544B (en) 2017-10-31 2017-10-31 Application program control method, device, medium and electronic equipment

Publications (1)

Publication Number Publication Date
WO2019085749A1 true WO2019085749A1 (en) 2019-05-09

Family

ID=61783058

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/110518 WO2019085749A1 (en) 2017-10-31 2018-10-16 Application program control method and apparatus, medium, and electronic device

Country Status (2)

Country Link
CN (1) CN107885544B (en)
WO (1) WO2019085749A1 (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107885544B (en) * 2017-10-31 2020-04-10 Oppo广东移动通信有限公司 Application program control method, device, medium and electronic equipment
CN109101326A (en) * 2018-06-06 2018-12-28 三星电子(中国)研发中心 A kind of background process management method and device
CN110286949A (en) * 2019-06-27 2019-09-27 深圳市网心科技有限公司 Process based on the read-write of physical host storage device hangs up method and relevant device
CN110275760A (en) * 2019-06-27 2019-09-24 深圳市网心科技有限公司 Process based on fictitious host computer processor hangs up method and its relevant device
CN110286961A (en) * 2019-06-27 2019-09-27 深圳市网心科技有限公司 Process based on physical host processor hangs up method and relevant device

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102306095A (en) * 2011-07-21 2012-01-04 宇龙计算机通信科技(深圳)有限公司 Application management method and terminal
CN105718027A (en) * 2016-01-20 2016-06-29 努比亚技术有限公司 Management method of background application programs and mobile terminal
CN105808410A (en) * 2016-03-29 2016-07-27 联想(北京)有限公司 Information processing method and electronic equipment
US20170116511A1 (en) * 2015-10-27 2017-04-27 Pusan National University Industry-University Cooperation Foundation Apparatus and method for classifying home appliances based on power consumption using deep learning
CN106909447A (en) * 2015-12-23 2017-06-30 北京金山安全软件有限公司 Background application processing method and device and terminal
CN107608748A (en) * 2017-09-30 2018-01-19 广东欧珀移动通信有限公司 Application program management-control method, device, storage medium and terminal device
CN107643948A (en) * 2017-09-30 2018-01-30 广东欧珀移动通信有限公司 Application program management-control method, device, medium and electronic equipment
CN107885544A (en) * 2017-10-31 2018-04-06 广东欧珀移动通信有限公司 Application program management-control method, device, medium and electronic equipment

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20160091786A (en) * 2015-01-26 2016-08-03 삼성전자주식회사 Method and apparatus for managing user
CN105389193B (en) * 2015-12-25 2019-04-26 北京奇虎科技有限公司 Accelerated processing method, device and system, the server of application
CN106354836A (en) * 2016-08-31 2017-01-25 南威软件股份有限公司 Advertisement page prediction method and device
CN106648023A (en) * 2016-10-02 2017-05-10 上海青橙实业有限公司 Mobile terminal and power-saving method of mobile terminal based on neural network
CN107145215B (en) * 2017-05-06 2019-09-27 维沃移动通信有限公司 A kind of background application method for cleaning and mobile terminal
CN107133094B (en) * 2017-06-05 2021-11-02 努比亚技术有限公司 Application management method, mobile terminal and computer readable storage medium

Also Published As

Publication number Publication date
CN107885544B (en) 2020-04-10
CN107885544A (en) 2018-04-06

Similar Documents

Publication Publication Date Title
WO2019085749A1 (en) Application program control method and apparatus, medium, and electronic device
US11244672B2 (en) Speech recognition method and apparatus, and storage medium
US11249645B2 (en) Application management method, storage medium, and electronic apparatus
WO2019062413A1 (en) Method and apparatus for managing and controlling application program, storage medium, and electronic device
WO2019062317A1 (en) Application program control method and electronic device
WO2019062358A1 (en) Application program control method and terminal device
CN110176226A (en) A kind of speech recognition and speech recognition modeling training method and device
WO2019085750A1 (en) Application program control method and apparatus, medium, and electronic device
CN113284142B (en) Image detection method, image detection device, computer-readable storage medium and computer equipment
WO2019062405A1 (en) Application program processing method and apparatus, storage medium, and electronic device
CN107885545B (en) Application management method and device, storage medium and electronic equipment
CN111797288A (en) Data screening method and device, storage medium and electronic equipment
CN107659717B (en) State detection method, device and storage medium
CN107402808B (en) Process management method, device, storage medium and electronic equipment
WO2019062462A1 (en) Application control method and apparatus, storage medium and electronic device
CN111738365B (en) Image classification model training method and device, computer equipment and storage medium
CN107729144B (en) Application control method and device, storage medium and electronic equipment
CN112948763B (en) Piece quantity prediction method and device, electronic equipment and storage medium
CN112672405A (en) Power consumption calculation method and device, storage medium, electronic device and server
CN107861770B (en) Application program management-control method, device, storage medium and terminal device
CN115618232A (en) Data prediction method, device, storage medium and electronic equipment
CN114647703A (en) Data processing method and device, electronic equipment and storage medium
CN112367428A (en) Electric quantity display method and system, storage medium and mobile terminal
CN107766892B (en) Application program control method and device, storage medium and terminal equipment
CN107844375B (en) Using method for closing, device, storage medium and electronic equipment

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 18873865; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 18873865; Country of ref document: EP; Kind code of ref document: A1)