US20230422058A1 - Method and apparatus of sharing information related to status - Google Patents
- Publication number
- US20230422058A1 (application US 18/462,065)
- Authority
- US
- United States
- Prior art keywords
- state
- model
- state model
- information
- related information
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
- 238000000034 method Methods 0.000 title claims abstract description 43
- 238000004891 communication Methods 0.000 claims description 39
- 238000004422 calculation algorithm Methods 0.000 claims description 16
- 238000010801 machine learning Methods 0.000 claims description 10
- 238000012545 processing Methods 0.000 claims description 10
- 230000002159 abnormal effect Effects 0.000 claims 3
- 230000001186 cumulative effect Effects 0.000 description 12
- 230000006870 function Effects 0.000 description 12
- 230000008859 change Effects 0.000 description 9
- 238000005516 engineering process Methods 0.000 description 7
- 238000010295 mobile communication Methods 0.000 description 6
- 238000013459 approach Methods 0.000 description 4
- 238000004590 computer program Methods 0.000 description 4
- 238000012544 monitoring process Methods 0.000 description 4
- 230000008569 process Effects 0.000 description 4
- 238000010586 diagram Methods 0.000 description 3
- 238000009434 installation Methods 0.000 description 3
- 230000002776 aggregation Effects 0.000 description 2
- 238000004220 aggregation Methods 0.000 description 2
- 239000000470 constituent Substances 0.000 description 2
- 238000010276 construction Methods 0.000 description 2
- 230000007423 decrease Effects 0.000 description 2
- 230000005012 migration Effects 0.000 description 2
- 238000013508 migration Methods 0.000 description 2
- 238000012986 modification Methods 0.000 description 2
- 230000004048 modification Effects 0.000 description 2
- 230000000737 periodic effect Effects 0.000 description 2
- 238000007781 pre-processing Methods 0.000 description 2
- 238000004458 analytical method Methods 0.000 description 1
- 230000002547 anomalous effect Effects 0.000 description 1
- 238000003491 array Methods 0.000 description 1
- 238000013480 data collection Methods 0.000 description 1
- 230000003247 decreasing effect Effects 0.000 description 1
- 238000009826 distribution Methods 0.000 description 1
- 238000005315 distribution function Methods 0.000 description 1
- 230000000694 effects Effects 0.000 description 1
- 238000000605 extraction Methods 0.000 description 1
- 238000007689 inspection Methods 0.000 description 1
- 238000007726 management method Methods 0.000 description 1
- 238000004519 manufacturing process Methods 0.000 description 1
- 230000007246 mechanism Effects 0.000 description 1
- 230000006855 networking Effects 0.000 description 1
- 230000003287 optical effect Effects 0.000 description 1
- 238000005457 optimization Methods 0.000 description 1
- 238000007637 random forest analysis Methods 0.000 description 1
- 238000013468 resource allocation Methods 0.000 description 1
- 230000002441 reversible effect Effects 0.000 description 1
- 238000007619 statistical method Methods 0.000 description 1
- 238000003860 storage Methods 0.000 description 1
- 238000012706 support-vector machine Methods 0.000 description 1
- 238000012360 testing method Methods 0.000 description 1
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W24/00—Supervisory, monitoring or testing arrangements
- H04W24/02—Arrangements for optimising operational condition
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/08—Configuration management of networks or network elements
- H04L41/0803—Configuration setting
- H04L41/0813—Configuration setting characterised by the conditions triggering a change of settings
- H04L41/0816—Configuration setting characterised by the conditions triggering a change of settings the condition being an adaptation, e.g. in response to network events
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L5/00—Arrangements affording multiple use of the transmission path
- H04L5/003—Arrangements for allocating sub-channels of the transmission path
- H04L5/0058—Allocation criteria
- H04L5/006—Quality of the received signal, e.g. BER, SNR, water filling
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W16/00—Network planning, e.g. coverage or traffic planning tools; Network deployment, e.g. resource partitioning or cells structures
- H04W16/18—Network planning tools
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W24/00—Supervisory, monitoring or testing arrangements
- H04W24/04—Arrangements for maintaining operational condition
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W24/00—Supervisory, monitoring or testing arrangements
- H04W24/08—Testing, supervising or monitoring using real traffic
Definitions
- the present invention relates to a method and apparatus for sharing state information between a plurality of electronic devices, and more particularly, to a method and apparatus for predicting the state of a device based on information shared among multiple electronic devices.
- the state prediction refers to predicting the software and hardware state of the apparatus in the future on the basis of the previous operational log information of the mobile communication apparatus.
- the states in the mobile communication apparatus may include the state of network resource distribution, the state of power usage, and the state of maintaining throughput connection.
- sensor data of various elements that can be extracted from the apparatus can be collected.
- statistical analysis and machine learning techniques can be applied to the sensor data to predict the state of the apparatus.
- the sensor data can be classified into periodic data, event data, and configuration data according to the data collection approach.
- the apparatus may collect periodic data by periodically recording information about elements extracted from software and hardware, such as temperature, power interference, and resource utilization.
- the apparatus may collect event data by, for example, configuring a situation where a certain element exceeds a preset threshold as an event.
- the apparatus may collect configuration data by recording information on the firmware version, location, and cell setup thereof.
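The three collection approaches above can be sketched for the event-data case, where crossing a preset threshold is configured as an event. This is a minimal illustrative sketch; the element names and threshold values are hypothetical, not taken from the patent.

```python
# Hypothetical sketch: raise an event record whenever a monitored element
# exceeds its preset threshold. Element names and limits are illustrative.
EVENT_THRESHOLDS = {"temperature": 85.0, "resource_utilization": 0.9}

def collect_event_data(readings):
    """Return an event record for every element that exceeds its threshold."""
    events = []
    for name, value in readings.items():
        limit = EVENT_THRESHOLDS.get(name)
        if limit is not None and value > limit:
            events.append({"element": name, "value": value, "threshold": limit})
    return events

events = collect_event_data({"temperature": 91.2, "resource_utilization": 0.4})
```

Periodic data would instead be recorded on a timer regardless of value, and configuration data only when the firmware version, location, or cell setup changes.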
- FIGS. 1 A and 1 B illustrate the use of a prediction model for a particular state of the apparatus.
- the apparatus collects the data log during the learning period 100 and generates a prediction model 110 based on the collected data logs. Then, the apparatus predicts the state thereof during the state prediction period 120 by inputting data logs of a given period into the prediction model 110 .
- FIG. 1 B shows a process of deriving a prediction result based on the internal structure of the apparatus making a prediction on the state using a prediction model. More specifically, after collecting data logs, the apparatus produces a prediction result through a learning stage in the modeling unit (MU) and a prediction stage in the prediction unit (PU).
- the modeling unit and the prediction unit are based on machine learning algorithms widely known in the art, and the machine learning algorithms include naive Bayesian networks, support vector machines, and random forests. The prediction accuracy according to the prediction results produced by the listed algorithms may be different depending on the characteristics and amount of the data logs.
- the modeling unit creates a model using data and data classes.
- the data may include raw data or processed data logs.
- the terms “data” and “data log” can be used interchangeably.
- the data class refers to the desired output value for the data.
- the data class may refer to the result values for data values belonging to a specific period in the past.
- the model is generated based on stochastic or deterministic patterns of data for the data class.
- the prediction unit inputs a new data log to the model to produce an output value as a prediction result. That is, the prediction unit derives a data class for the new data log using a class decision rule of the model.
- the prediction unit also derives the prediction accuracy of the prediction result.
- the prediction accuracy can be different depending on the machine learning algorithm, the characteristics and quantity of data logs, the number of parameters, and the data processing precision.
- the prediction accuracy can be improved by using feature engineering or feature selection. In particular, it is possible to extract and learn various patterns as the number of learning data logs and the number of parameters increase, so that collecting and learning data logs of various parameters can improve the prediction accuracy.
- Feature engineering and feature selection, as well as the arrangement of modeling, prediction, and data classes for a typical machine learning operation, are general techniques in the field of the present invention and do not belong to the scope of the present invention, so a detailed description thereof is omitted.
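The modeling-unit/prediction-unit split described above can be sketched with a deliberately simple class decision rule — a per-parameter threshold learned from labeled logs. Real deployments would use the machine-learning algorithms listed earlier (naive Bayes, SVM, random forest); all names and values here are illustrative.

```python
# Minimal sketch of the modeling unit (MU) and prediction unit (PU).
# The "model" is a single learned threshold per parameter -- a stand-in
# for the stochastic/deterministic patterns a real algorithm would learn.

def build_model(data_logs, data_classes):
    """MU: for each parameter, learn the midpoint between the mean value
    of 'normal' logs and the mean value of 'abnormal' logs."""
    model = {}
    for p in data_logs[0].keys():
        normal = [log[p] for log, c in zip(data_logs, data_classes) if c == "normal"]
        abnormal = [log[p] for log, c in zip(data_logs, data_classes) if c == "abnormal"]
        model[p] = (sum(normal) / len(normal) + sum(abnormal) / len(abnormal)) / 2
    return model

def predict(model, new_log):
    """PU: class decision rule -- abnormal if any parameter exceeds its threshold."""
    exceeded = any(new_log[p] > t for p, t in model.items())
    return "abnormal" if exceeded else "normal"

logs = [{"temp": 40.0, "load": 0.2}, {"temp": 42.0, "load": 0.3},
        {"temp": 80.0, "load": 0.9}, {"temp": 78.0, "load": 0.8}]
classes = ["normal", "normal", "abnormal", "abnormal"]
model = build_model(logs, classes)
result = predict(model, {"temp": 79.0, "load": 0.4})
```

The data classes here play exactly the role described above: desired output values for past data, from which the model derives its class decision rule.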
- FIG. 2 illustrates a specific method of applying data learning and the prediction model.
- each base station learns generated data independently to generate a prediction model, and performs data prediction based on the generated prediction model.
- Each base station also calculates the accuracy of the prediction model. That is, one base station does not transmit or receive a data log to or from another base station.
- each base station transmits a data log generated thereat to the central server, which learns the collected data logs and generates a prediction model.
- the central server performs data prediction based on the prediction model and calculates the accuracy of the prediction model. That is, new data logs are also transmitted to the central server, where the accuracy is calculated.
- each apparatus learns and predicts using independently collected data logs. This makes it possible to create a prediction model by taking into consideration the characteristics of each apparatus, but it is necessary to accumulate data logs for a long period of time for practical use.
- although a model can be generated only when information on the output value for a data log (i.e., the data class) is available, state prediction for a newly installed apparatus may be impossible because no data logs have yet been accumulated.
- the central server can collect a large amount of data logs from various base stations and reach a high prediction accuracy by using recently introduced big data technology.
- as the amount of data increases, depending on the specific learning algorithm, more resources are needed for learning.
- the CPU, memory, and disk capacity requirements of the central server increase; the learning time becomes long; it is difficult to transmit the prediction result in real time; and it is difficult to reflect characteristics of each base station in the prediction model.
- an aspect of the present invention is to provide a method for sharing state related information including parameters selected based on a device state model between different devices having similar characteristics.
- a method of sharing state related information for a device may include: generating a state model of the device on the basis of state related data; selecting at least one parameter determining the state of the device based on the generated state model; and transmitting the selected at least one parameter to at least one different device.
- a device capable of sharing state related information.
- the device may include: a transceiver unit configured to transmit and receive a signal; and a controller configured to control generating a state model of the device on the basis of state related data, selecting at least one parameter determining the state of the device based on the generated state model, and transmitting the selected at least one parameter to at least one different device.
- state related information including parameters selected based on the state model, rather than the entire data logs, is shared among different devices having similar characteristics. Hence, only a small amount of resources is consumed among the devices.
- Each device can make a prediction even if a specific state to be predicted at the present time point has not occurred in the past. A high prediction accuracy can be achieved by learning a small amount of data logs.
- one device can combine the state related information received from another device with the state model generated by it to produce a prediction result. Hence, each device can obtain a prediction result in real time in consideration of the characteristics of the device.
- FIGS. 1 A and 1 B illustrate the use of a prediction model for a particular state of the apparatus.
- FIG. 2 illustrates a specific method of applying data learning and the prediction model.
- FIG. 3 is a block diagram illustrating the internal structure and operation of a device according to an embodiment of the present invention.
- FIG. 4 illustrates pieces of information transmitted and received between the internal components of the device according to an embodiment of the present invention.
- FIGS. 5 A and 5 B illustrate grouping of devices to share state related information according to an embodiment of the present invention.
- FIGS. 6 A and 6 B illustrate producing a prediction result for the device installation state and performing a feedback operation according to an embodiment of the present invention.
- FIG. 7 shows a graph representing the state model that predicts the device state.
- FIGS. 8 A, 8 B, and 8 C illustrate operations for selecting parameters to be shared with another device for a state model and determining whether to share the state model according to an embodiment of the present invention.
- FIG. 9 illustrates the operation of determining whether to share state related information with another device according to an embodiment of the present invention.
- FIG. 10 illustrates an operation by which the device derives a prediction result based on state related information according to an embodiment of the present invention.
- FIG. 11 illustrates a feedback operation performed by the device based on prediction results according to an embodiment of the present invention.
- blocks of a flowchart (or sequence diagram) and a combination of flowcharts may be represented and executed by computer program instructions.
- These computer program instructions may be loaded on a processor of a general purpose computer, special purpose computer or programmable data processing equipment. When the loaded program instructions are executed by the processor, they create a means for carrying out functions described in the flowchart.
- the computer program instructions may be stored in a computer readable memory that is usable in a specialized computer or a programmable data processing equipment, it is also possible to create articles of manufacture that carry out functions described in the flowchart.
- the computer program instructions may be loaded on a computer or a programmable data processing equipment, when executed as processes, they may carry out steps of functions described in the flowchart.
- a block of a flowchart may correspond to a module, a segment or a code containing one or more executable instructions implementing one or more logical functions, or to a part thereof.
- functions described by blocks may be executed in an order different from the listed order. For example, two blocks listed in sequence may be executed at the same time or executed in reverse order.
- unit may refer to a software component or hardware component such as an FPGA or ASIC capable of carrying out a function or an operation.
- unit or the like is not limited to hardware or software.
- a unit or the like may be configured so as to reside in an addressable storage medium or to drive one or more processors.
- Units or the like may refer to software components, object-oriented software components, class components, task components, processes, functions, attributes, procedures, subroutines, program code segments, drivers, firmware, microcode, circuits, data, databases, data structures, tables, arrays or variables.
- a function provided by a component and unit may be a combination of smaller components and units, and may be combined with others to compose large components and units.
- Components and units may be configured to drive a device or one or more processors in a secure multimedia card.
- the “state model” is generated by learning various data logs in the past that determine the state of the device, such as the resource usage state or the power usage state.
- the state model may output a prediction result for the state of the device in the future when a certain amount or more of data logs is input.
- the “state related information” refers to information on the factors that determine the state (e.g., software state or hardware state) of the device.
- the state related information may be derived from the state model generated based on the learning data for the state.
- the state related information may include at least one parameter determining the state, and may further include weight information indicating the extent to which the parameters determine the state.
- model related information may include information regarding the characteristics of data logs, the algorithm, and the parameters used to generate the state model in the device, or information on the accuracy of the state model.
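Under the definitions above, the two kinds of shared information could be represented as simple records. This is a sketch only; the field names and example values are illustrative assumptions, not terms fixed by the patent.

```python
from dataclasses import dataclass, field

@dataclass
class StateRelatedInfo:
    """Factors that determine a state: the selected parameters plus the
    weight indicating the extent to which each parameter determines it."""
    state: str                                   # e.g. "power_usage" (illustrative)
    parameters: list                             # selected parameter names
    weights: dict = field(default_factory=dict)  # parameter -> weight

@dataclass
class ModelRelatedInfo:
    """How the state model was built, and how accurate it is."""
    algorithm: str                  # e.g. "random_forest" (illustrative)
    data_log_characteristics: str   # e.g. "periodic, 30-day window"
    accuracy: float                 # agreement between predicted and actual results

info = StateRelatedInfo(state="power_usage",
                        parameters=["temperature", "cell_load"],
                        weights={"temperature": 0.6, "cell_load": 0.4})
```

Sharing records of this shape, rather than raw data logs, is what keeps the inter-device traffic small.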
- FIG. 3 is a block diagram illustrating the internal structure and operation of a device according to an embodiment of the present invention.
- the device may include a modeling unit 310 , a prediction unit 320 , a control unit 330 , and a remote communication unit 340 .
- the data log 300 can be input to the device and optimized device configuration information 350 can be output. That is, it can be seen that the control unit 330 is newly added to the existing prediction model approach.
- the device including the control unit 330 is referred to as a local device, and the other device is referred to as a remote device.
- the modeling unit 310 may learn data logs from the control unit 330 to generate the state model of the local device. In this case, the modeling unit 310 can generate the state model of the local device using state related information and model related information received through the remote communication unit 340 from the remote device.
- the prediction unit 320 may produce a prediction result for each state model on the basis of a preset amount or more of data logs and at least one state model from the control unit 330 . That is, the prediction unit 320 may input received data logs to the state model generated in the local device and the state model generated in the remote device to produce prediction results for each state model.
- the control unit 330 may cause the modeling unit 310 and the prediction unit 320 to work together and exchange information with the remote devices through the remote communication unit 340 . That is, the control unit 330 can act as an integrator for state prediction in the device. Specifically, the control unit 330 may include a monitoring part 331 , a parameter selector 333 , a prediction result determiner 335 , and a configuration change determiner 337 .
- the monitoring part 331 collectively manages the interfaces between the internal modules of the control unit 330 and may continuously monitor the status of the local device.
- the monitoring part 331 may store data logs 300 obtained from the sensors attached to the local device and may store the state model received from the modeling unit 310 .
- the state model may include state related information determining the state, and may be composed of parameters and weighting information for the parameters.
- the state model may include information on the parameters, arranged in order of weight, which determine the state of the device.
- the parameter selector 333 may select a parameter to be shared with at least one remote device according to a preset criterion from among one or more parameters included in the state model generated by the modeling unit 310 .
- the parameter selector 333 can dynamically determine the number of parameters to be shared based on the weight and the accuracy of the state model.
- the accuracy of the state model indicates the degree of agreement between the predicted result calculated by entering the data log into the state model and the actual result, and can be included in the model related information.
- An example of the state model is shown in FIG. 7 , and a more detailed description is given with reference to FIG. 8 .
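One plausible reading of this dynamic selection — an assumption, since the patent defers the details to FIGS. 7 and 8 — is to keep the highest-weight parameters until their cumulative weight reaches a target that shrinks as model accuracy grows, so an accurate model justifies sharing fewer parameters. A hedged sketch:

```python
def select_parameters(weights, accuracy, base_target=0.9):
    """Select parameters in descending weight order until their cumulative
    weight covers the target. A more accurate model lowers the target, so
    fewer parameters are shared. The scaling policy is purely illustrative."""
    target = base_target * (1.0 - 0.5 * accuracy)  # illustrative scaling rule
    selected, cumulative = [], 0.0
    for name, w in sorted(weights.items(), key=lambda kv: kv[1], reverse=True):
        selected.append(name)
        cumulative += w
        if cumulative >= target:
            break
    return selected

weights = {"temperature": 0.5, "cell_load": 0.3, "fw_version": 0.15, "location": 0.05}
chosen = select_parameters(weights, accuracy=0.8)
```

With the hypothetical weights above, the two heaviest parameters suffice and the low-weight ones are never transmitted.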
- the prediction result determiner 335 may use both the prediction result calculated based on the state model generated by the modeling unit 310 of the local device and the prediction result calculated based on the state model generated in the remote device to produce a prediction result for the state of the local device. Also, the prediction result determiner 335 may use both the accuracy of the state model of the local device and the reliability of the state model of the remote device to produce a prediction result for the state. The accuracy and reliability of the state model indicate the degree of agreement between the predicted result and the actual result for an input value to the state model. In the description, the term “accuracy” is used for the state model generated in the local device, and the term “reliability” is used for the state model generated in the remote device. A detailed description is given of the prediction result determiner 335 , which produces a prediction result based on the state model of the local device and the state model of the remote device, with reference to FIG. 10 .
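The combination step can be sketched as a confidence-weighted vote over the per-model predictions, where the local model contributes its accuracy and each remote model its reliability. The weighting scheme is an assumption; the patent leaves the actual procedure to FIG. 10.

```python
def combine_predictions(predictions):
    """predictions: list of (predicted_class, confidence) pairs, where
    confidence is the local model's accuracy or a remote model's reliability.
    Returns the class with the largest total confidence. Illustrative only."""
    totals = {}
    for cls, confidence in predictions:
        totals[cls] = totals.get(cls, 0.0) + confidence
    return max(totals, key=totals.get)

final = combine_predictions([
    ("normal",   0.60),  # local state model (accuracy)
    ("abnormal", 0.85),  # remote state model A (reliability)
    ("abnormal", 0.40),  # remote state model B (reliability)
])
```

Here the two remote models outvote the local one, which is exactly the benefit claimed above for a device whose own logs are too sparse to predict a state it has never observed.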
- the configuration change determiner 337 may generate a feedback value using the prediction result and the reliability thereof produced by the prediction result determiner 335 and determine whether to change the configuration of the device based on the feedback value. A detailed description is given of determining whether to change the configuration of the device with reference to FIG. 11 .
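A minimal sketch of this decision, assuming the feedback value is simply the prediction's reliability gated by a threshold — the actual policy is left to FIG. 11, and the threshold here is hypothetical:

```python
def should_change_configuration(predicted_state, reliability, threshold=0.7):
    """Illustrative policy: reconfigure only when an abnormal state is
    predicted with reliability at or above the threshold."""
    feedback = reliability if predicted_state == "abnormal" else 0.0
    return feedback >= threshold

decision = should_change_configuration("abnormal", reliability=0.9)
```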
- It is possible for the constituent modules of the control unit 330 to perform the above operations; alternatively, the control unit 330 may directly perform all of the above operations itself.
- the remote communication unit 340 may be connected to at least one remote device to share state related information and model related information with the remote device. More specifically, the remote communication unit 340 may transmit, to at least one remote device, a parameter selected based on the state model. The remote communication unit 340 may transmit the weight information of the parameter and the accuracy information of the state model to the at least one remote device. That is, the remote communication unit 340 does not transmit the data logs collected from the local device themselves, but transmits the state related information, thereby consuming a smaller amount of resources.
- the remote communication unit 340 may receive state-related information and model-related information from at least one remote device.
- the control unit 330 may receive state related information and model related information through the remote communication unit 340 from the remote device, construct a model of the remote device based on the received information, and use the constructed model to produce a prediction result for the local device.
- the remote communication unit 340 may receive, from at least one remote device, state related information including a parameter selected based on the state model of the remote device and weight information of the selected parameter, and model related information including the reliability information of that state model.
- the control unit 330 may finally produce an optimized device configuration 350 according to the predicted state of the device by using the constituent modules thereof.
- the controller 330 may control generating a state model of the device based on state related data, selecting at least one parameter that determines the state of the device based on the generated state model, and transmitting the selected at least one parameter to at least one different device.
- the control unit 330 may further control transmitting weight information corresponding to the selected at least one parameter to the different device.
- the control unit 330 may control transmitting only the parameter among the state related data and the parameter to at least one different device.
- the control unit 330 may control receiving, from at least one different device, at least one parameter, which determines the state of the different device and is selected based on the state model generated in the different device, and the weight information corresponding to the selected parameter, and producing a prediction result for the state of the device on the basis of at least one parameter determining the state of the device and at least one parameter determining the state of the different device.
- the control unit 330 may control receiving, from at least one different device, information about the reliability of the prediction result derived from the state model generated in the different device.
- the control unit 330 may control producing a prediction result for the state of the device in consideration of the accuracy of the prediction result derived from the state model generated in the device and the reliability of the prediction result derived from the state model generated in the different device.
- the control unit 330 may control determining whether to change the configuration of the device based on the prediction result for the state.
- FIG. 4 illustrates pieces of information transmitted and received between the internal components of the device according to an embodiment of the present invention.
- the device may further include interface mechanisms for transmitting and receiving information between the modeling unit 410 , the prediction unit 420 , and the control unit 430 .
- the interface for transmitting information from the control unit 430 to the modeling unit 410 may be referred to as “SND_to_MU” 431 .
- the control unit 430 may transmit data logs received from various sensors of the local device. Typically, the data log is streamed to the control unit 430 as an input value and processed through a pre-processing step. Pre-processing is not within the scope of the present invention and is not described herein.
- the control unit 430 may transmit state related information and model related information received through the remote communication unit 440 .
- the state related information may include parameters selected based on the state model generated at the remote device and weights for the parameters.
- the model related information may include information regarding characteristics of data logs, algorithms, and parameters used to generate the state model.
- the model related information may also include information on the reliability of the state model.
- the modeling unit 410 can learn the data logs received from the local device to generate a state model. To generate the state model, the modeling unit 410 may additionally use the state related information and model related information received through the remote communication unit 440 .
- the learning algorithms may include, for example, machine learning algorithms.
- the interface for transmitting information from the modeling unit 410 to the control unit 430 may be referred to as RCV_from_MU 433 .
- the modeling unit 410 may transmit the generated state model and model related information to the control unit 430 .
- the control unit 430 may derive state related information from the state model. That is, the controller 430 may produce information on the parameters determining the state and the weight information for the parameters.
- the interface for transmitting information from the control unit 430 to the prediction unit 420 may be referred to as SND_to_PU 435 .
- the control unit 430 may transmit a given amount or more of data logs at the present time for predicting the state of the device as part of SND_to_PU.
- the control unit 430 may transmit the state model received from the modeling unit 410 , that is, a state model generated based on the data logs collected from the local device and a state model received from the remote device.
- the prediction unit 420 can produce a prediction result on the state by applying pre-stored algorithms to each state model.
- the algorithms may include, for example, machine learning algorithms.
- the interface for transmitting information from the prediction unit 420 to the control unit 430 may be referred to as RCV_from_PU 437 .
- the prediction unit 420 may transmit a prediction result for the state of the device when each state model is applied.
- the control unit 430 can produce a prediction result for the state of the local device by using both the prediction result derived from the state model of the local device and the prediction result derived from the state model of the remote device.
- the interface for transmitting information from the control unit 430 to the remote communication unit 440 may be referred to as SND_to_Remote 438 .
- the control unit 430 may send state related information to at least one remote device based on the state model received from the modeling unit 410 through continuous monitoring.
- the remote device may be a member of a group of devices having characteristics similar to the local device.
- the control unit 430 may select state related information based on the received state model and determine whether to transmit the state related information to the at least one remote device. This is described in detail with reference to FIG. 9 .
- the interface for transmitting information from the remote communication unit 440 to the control unit 430 may be referred to as RCV_from_Remote 439 .
- the remote communication unit 440 may transmit state related information based on the state model generated in the remote device to the control unit 430 . More specifically, the control unit 430 may receive parameters selected based on the state model and weight information indicating the degree to which the parameters determine the state. The control unit 430 may also receive information on the reliability of the state model generated at the remote device. The control unit 430 may calculate a prediction result for the state using the received state related information. This is described in detail with reference to FIG. 10 . In addition, peer-to-peer (P2P) sharing schemes may be used as distributed algorithms for transmitting and receiving information through the remote communication unit 440 to and from other remote devices.
- FIGS. 5 A and 5 B illustrate grouping of devices to share state related information according to an embodiment of the present invention.
- FIG. 5 A shows groups of base stations sharing state related information and model related information for the model on a map.
- a group of one or more devices may be referred to as a shared model group.
- FIG. 5 B is a table showing attribute values of base stations. Base stations can be grouped based on the installation area, the software version, the number of assigned cells, and the like to create shared model groups.
- the group information can be determined by the base station management system based on the attribute values of the base stations as shown in FIG. 5 B .
- a k-mode clustering technique may be performed to group the base stations, and the information on the shared model groups may be notified to the base stations of the groups. Thereafter, the base stations belonging to the same shared model group may share state related information.
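The grouping step above can be sketched with a minimal k-modes clustering over categorical base-station attributes. The attribute rows, the initialization, and the iteration count are illustrative assumptions; a management system would use its own attribute set.

```python
from collections import Counter

def hamming(a, b):
    """Number of mismatched categorical attributes between two records."""
    return sum(x != y for x, y in zip(a, b))

def k_modes(records, k, iters=5):
    """Minimal k-modes clustering sketch. Modes are naively initialized with
    the first k distinct records (assumes at least k distinct rows)."""
    modes = list(dict.fromkeys(records))[:k]
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for r in records:
            idx = min(range(k), key=lambda i: hamming(r, modes[i]))
            clusters[idx].append(r)
        # new mode of each cluster: per-column most common value
        modes = [tuple(Counter(col).most_common(1)[0][0] for col in zip(*c))
                 if c else modes[i] for i, c in enumerate(clusters)]
    return clusters

# hypothetical rows: (installation area, software version, assigned cells)
stations = [("SiteA", "v1.0", 3), ("SiteA", "v1.0", 3),
            ("SiteB", "v2.0", 6), ("SiteB", "v2.0", 6)]
groups = k_modes(stations, k=2)
```

Each resulting cluster would correspond to one shared model group, and its membership would then be notified to the base stations in that group.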
- FIGS. 6 A and 6 B illustrate producing a prediction result for the device installation state and performing a feedback operation according to an embodiment of the present invention.
- FIG. 6 A illustrates operations between internal modules in a newly installed local device
- FIG. 6 B illustrates operations between internal modules in a previously installed local device.
- the control unit 610 may receive the state related information and model related information through the interface connected to the remote communication unit 600 (RCV_from_Remote 605 ) from the remote devices belonging to the same shared model group, and produce a prediction result. That is, the control unit 610 can receive the state related information selected from the remote base station through the remote communication unit 600 .
- the control unit 610 may transmit a new data log at the current local base station and the state related information received from at least one remote base station to the prediction unit 625 through SND_to_PU 620 .
- the prediction unit 625 can produce a prediction result based on the received new data log at the current time point and the state related information of at least one remote base station, and transmit the prediction result back to the control unit 610 via RCV_from_PU 627 .
- the control unit 610 can utilize the prediction result produced by at least one remote base station and the reliability of the state model at the remote base station to produce a prediction result at the current time point in the local base station, and perform a feedback operation ( 630 ).
- the feedback operation may include determining whether to change the current configuration of the device.
- the local device may share state related information derived from the state model generated based on the accumulated data logs therein with at least one remote device.
- the local device may produce a prediction result at the current time in the local device on the basis of the state related information received from the remote device and the state model generated in the local device.
- the control unit 655 can transmit the accumulated data logs to the modeling unit 665 via SND_to_MU 660 .
- the modeling unit 665 can generate a state model based on the data logs and transmit the state model back to the control unit 655 via RCV_from_MU 667 .
- the control unit 655 may transmit the state model generated based on the data logs to the prediction unit 675 through SND_to_PU 670 . In this case, the control unit 655 may also transmit a new data log collected at the current time point.
- the prediction unit 675 can produce a prediction result based on the new data log and the state model.
- the prediction unit 675 can transmit the prediction result to the control unit 655 via RCV_from_PU 677 .
- the control unit 655 can compute the prediction accuracy of the state model on the basis of the prediction result derived from the state model generated based on the data logs of the local device. Obtaining the prediction accuracy is not within the scope of the present invention, and a description thereof is omitted.
- the control unit 655 may select parameters to be shared with the remote device from among the parameters included in the state related information of the state model, and determine whether to share the state related information of the state model. This is described in more detail with reference to FIG. 8 .
- the control unit 655 may transmit the selected parameters and the weight information corresponding to the selected parameters to the remote communication unit 640 through SND_to_Remote 680 .
- the control unit 655 can receive state related information and model related information about the state model created in the remote device from the remote communication unit 640 through the corresponding receive interface (RCV_from_Remote).
- the description of creating the state model and selecting the parameters to be shared given for the local device applies equally to the remote device.
- the control unit 655 may forward the state related information received from the remote device to the prediction unit 675 via SND_to_PU 670 .
- the control unit 655 can also transmit a certain amount of data logs collected by the local device up to the current time point.
- the prediction unit 675 can produce a prediction result for the data logs at the current time point by using at least one parameter included in the state related information and a weight corresponding to the parameter.
- the prediction unit 675 can transmit the prediction result based on the state related information received from the remote device to the control unit 655 through RCV_from_PU 677 .
- the control unit 655 can utilize both the prediction result based on the state model of the local device and the prediction result based on the state model of the remote device to produce the prediction result for the state of the local device. This is described in more detail with reference to FIG. 10 .
- the control unit 655 may perform a feedback operation based on the produced prediction result ( 690 ). Specifically, the control unit 655 may determine whether to perform a feedback operation based on the prediction result.
- FIG. 7 shows a graph representing the state model that predicts the device state.
- FIG. 7 illustrates a state model generated by the modeling unit based on data logs collected in the device.
- the device may select parameters to be shared with a remote device based on the state model.
- the x-axis indicates the ranks of parameters ( 705 ) and the y-axis indicates the cumulative sum of weights of the parameters ( 700 ).
- the parameters refer to factors that determine the state of the device. For example, to make a prediction on the network throughput of the device, the parameters may include reference signal received power (RSRP), reference signal received quality (RSRQ), and channel quality indication (CQI).
- the graph of cumulative weights of the parameters determining the state is in the form of a cumulative distribution function (CDF) graph. It can be seen that the slope of the graph decreases as the number of parameters is accumulated according to the order of weighting ( 710 , 720 , 730 ). The device may determine the number of parameters to be shared on the basis of the slope of the graph. This is described in more detail with reference to FIG. 8 .
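The cumulative-weight curve and its slope angles can be sketched as follows. The weight values are assumed for illustration; with a unit step on the rank axis, the slope at rank N is simply the N-th weight.

```python
import math

# Sketch of the FIG. 7 curve: cumulative sum of parameter weights sorted in
# descending order, and the angle of the curve's slope at each rank.
weights = [0.40, 0.25, 0.15, 0.10, 0.06, 0.04]   # assumed example values

cumulative = []
total = 0.0
for w in weights:
    total += w
    cumulative.append(total)                      # CDF-shaped curve

# slope angle (degrees) at each rank; decreases as low-weight parameters
# are appended, which is the behavior referenced at 710, 720, 730
angles = [math.degrees(math.atan(w)) for w in weights]
```

The decreasing angle sequence is what the device inspects when deciding how many parameters to share.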
- FIGS. 8 A, 8 B and 8 C illustrate operations for selecting parameters to be shared with another device for a state model and determining whether to share the state model according to an embodiment of the present invention.
- the device may receive data logs associated with the state from the sensors ( 800 ).
- the device may generate a state model based on the data logs, and may identify parameters that determine the state according to the state model and weights of the parameters ( 805 ).
- the state model may be the one shown in FIG. 7 , and it is assumed in the following description that the state model is the same as the graph shown in FIG. 7 .
- the device may set N pre to 0 for N indicating the number of parameters ( 807 ).
- the device may set N to 1 ( 810 ).
- the device may calculate the angle difference between the slopes of the cumulative weight graph at consecutive ranks ( 820 ), and then determine whether the calculated angle difference is less than a preset threshold of the slope (threshold D ) ( 823 ). If the calculated angle difference is not less than threshold D , the device may increase the number of parameters by one ( 825 ), and repeat steps 820 and 823 . The device repeats steps 820 through 825 until the condition of step 823 is satisfied, and it can determine the number of parameters whose weight is greater than or equal to a preset value. If the calculated angle difference (I N − I N+1 ) is less than threshold D , the device may learn the first to N th parameters to thereby produce the prediction result and the prediction accuracy ( 830 ).
- the device may determine whether the produced prediction accuracy is higher than a preset threshold of the prediction accuracy (threshold P ) ( 833 ). If the prediction accuracy is lower than threshold P , the device may determine whether the value of N is equal to the total number of parameters ( 835 ). If N is equal to the total number of parameters, the device may determine the state model as unusable ( 837 ). That is, if the prediction accuracy is lower than threshold P although all the parameters in the state model are used for producing the prediction result, the state model is not shared with remote devices and is not used to make a prediction on the state in the local device.
- the device may determine whether flag is set to 1 ( 840 ). If flag is set to 1, the step size serving as the adjustment interval for threshold D can be halved ( 843 ). Otherwise, step 843 can be skipped.
- the step size means the interval value that changes the number of parameters in order to identify the minimum number of parameters whose prediction accuracy satisfies threshold P . Thereafter, the device may adjust threshold D downward by subtracting the step size from threshold D ( 845 ). This is to increase the number of parameters to be learned by lowering threshold D when the device has learned N parameters and produced a prediction result at step 830 with the prediction accuracy lower than threshold P . Then, the device may set flag to 0 and set N pre to N ( 847 ).
- steps 810 to 830 may be repeated to determine the number of parameters based on changed threshold D .
- the device may determine whether N is equal to N pre ( 850 ). If N is equal to N pre , the device may determine the current state model as a usable state model ( 855 ). If N is not equal to N pre , the device may determine whether flag is set to 0 ( 860 ). When the device first passes step 833 , N pre at step 850 is still 0, the value set at step 807 , so the result of step 850 is always "no" even if the first attempt at step 833 already exceeds the prediction accuracy threshold; the device then initiates learning again by decreasing the number of parameters through adjustment of threshold D .
- N pre is the number of parameters learned in the previous stage; and if N is equal to N pre at step 850 after adjusting threshold D , that is, when the number of learned parameters equals the number of learned parameters in the previous stage, the device can determine the current state model as a usable state model.
- if flag is set to 0, the device may reduce the step size serving as the adjustment interval for threshold D by half ( 863 ). If flag is set to 1, the device can maintain the step size.
- the device may adjust threshold D upward by adding the step size to threshold D ( 865 ). This is to decrease the number of parameters to be learned by increasing threshold D when the device has learned N parameters and produced a prediction result at step 830 with the prediction accuracy higher than threshold P . Then, the device may set flag to 1 and set N pre to N ( 870 ). Thereafter, step 810 and subsequent steps may be repeated to determine the number of parameters based on changed threshold D .
- flag is a criterion for determining whether the step size for adjusting threshold D is halved.
- when threshold D is adjusted downward, flag is set to 0 ( 847 ); and when threshold D is adjusted upward, flag is set to 1 ( 870 ). If flag is 1 before threshold D is adjusted downward, the previous adjustment was upward, and the step size is reduced by half. Likewise, if flag is 0 before threshold D is adjusted upward, the previous adjustment was downward, and the step size is reduced by half. That is, whenever the device adjusts threshold D in the opposite direction to the previous stage, it can reduce the step size by half.
- the device can select a minimum number of parameters reaching the desired prediction accuracy. For example, after selecting 50 parameters based on initially determined threshold D and learning, if the prediction accuracy satisfies threshold P , threshold D is increased to reduce the number of selected parameters. Thereafter, the device may learn 45 parameters (excluding 5 parameters) based on upwardly adjusted threshold D and calculate the prediction result. If the prediction accuracy of the prediction result does not satisfy threshold P , the device can select an increased number of parameters and learn again by adjusting threshold D downward.
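The search described above can be sketched as follows. This is a simplified sketch of the FIG. 8 flow, not a definitive implementation: the weight values are assumed, and `accuracy_of(n)` stands in for the learn-and-evaluate stage (steps 830/833), which the specification leaves to known machine learning techniques.

```python
import math

def select_parameters(weights, accuracy_of, thr_p, thr_d, step, max_rounds=20):
    """Sketch of the FIG. 8 search for the minimum number of parameters whose
    learned model reaches prediction accuracy thr_p."""
    angles = [math.degrees(math.atan(w)) for w in weights]
    n_pre, flag = 0, 0
    for _ in range(max_rounds):
        # steps 810-825: grow N while the slope-angle difference stays >= thr_d
        n = 1
        while n < len(angles) and angles[n - 1] - angles[n] >= thr_d:
            n += 1
        if accuracy_of(n) >= thr_p:              # step 833
            if n == n_pre:                       # steps 850/855: converged
                return n
            if flag == 0:                        # step 863: direction changed
                step /= 2
            thr_d += step                        # step 865: fewer parameters
            flag, n_pre = 1, n                   # step 870
        else:
            if n == len(weights):                # steps 835/837: unusable
                return None
            if flag == 1:                        # step 843: direction changed
                step /= 2
            thr_d -= step                        # step 845: more parameters
            flag, n_pre = 0, n                   # step 847
    return None                                  # no convergence within budget

# toy run: assume the model needs at least 3 of these 6 parameters
weights = [0.50, 0.25, 0.12, 0.07, 0.04, 0.02]
n_selected = select_parameters(
    weights, accuracy_of=lambda n: 1.0 if n >= 3 else 0.5,
    thr_p=0.9, thr_d=5.0, step=2.0)
```

With these assumed values the search settles on three parameters, mirroring the 50-then-45 narrative above on a smaller scale.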
- FIG. 9 illustrates the operation of determining whether to share state related information with another device according to an embodiment of the present invention.
- the device may determine whether the state model is updated ( 900 ). That is, the device can determine whether a new state model has been created using newly accumulated data logs. Thereafter, the device may determine whether the newly generated state model is a usable model ( 910 ). That is, the device can perform the operation described in FIG. 8 on the newly generated state model to determine whether it is a usable model. If the newly generated state model is not a usable model, it is not shared and the procedure proceeds to step 900 , at which the device may check whether a new state model is generated. If the newly generated state model is a usable model, the device may share the parameters selected from the state model with a remote device via the remote communication unit ( 920 ).
- FIG. 10 illustrates an operation by which the device produces a prediction result on the state according to an embodiment of the present invention.
- the device can produce the final prediction result of the local device on the basis of the reliability or accuracy of the state models generated in the local device and the remote device. That is, a state model with a high reliability or accuracy has a large influence on the final prediction result, and a state model with a low reliability or accuracy has a small influence on the final prediction result. In addition, if the reliability of a state model does not exceed a preset threshold, it may be regarded as an inaccurate model and be not used for calculating the prediction result.
- the device may obtain state related information, model related information, and state prediction results from the local device and the remote device ( 1000 ).
- the device may generate a state model based on locally collected data logs, and use the prediction unit to calculate a prediction result based on the state related information of the state model.
- the device may receive state related information of the state model generated in the remote device, and may use the prediction unit to calculate a prediction result based on the state related information of the remote device.
- the device may set initial values by setting w (cumulative weight) to 0, setting N (index of a state model) to 1, and setting p (cumulative prediction result) to 0 ( 1010 ).
- the device may utilize only a state model whose reliability is greater than threshold R (a preset threshold for reliability).
- the device may determine whether the reliability (or accuracy) of model N is greater than threshold R ( 1020 ). If the reliability of model N is less than threshold R , the device may increase the model index by one ( 1025 ) and may determine whether the reliability of the next model is greater than threshold R ( 1020 ).
- if the reliability of model N is greater than threshold R , the device may add the product of the reliability and the prediction result of model N to the cumulative prediction result p, and add the reliability of model N to the cumulative weight w. The device may then determine whether the model index N is equal to the total number of models corresponding to the state related information obtained at step 1000 ( 1050 ). If N is less than the total number of models, the device may increase the model index by one ( 1025 ), and the procedure returns to step 1020 for the next model. Thereafter, when the cumulative prediction result p and the cumulative weight w have been determined based on the reliability of all the models, the device may calculate the final prediction result using p/w ( 1060 ).
- model 1 has a reliability much higher than threshold R , and it has a large influence on calculating the prediction result.
- Model 2 has a reliability a little higher than threshold R , and it has a small influence on calculating the prediction result.
- Model 3 has a reliability not higher than threshold R , and it has no influence on calculating the prediction result.
- the cumulative prediction result p may be computed as shown in Equation 1 below, where the sum runs over the models whose reliability is greater than threshold R .
p = Σ N (reliability of model N × prediction result of model N)   (Equation 1)
- the cumulative weight w may be computed as shown in Equation 2 below, over the same models.
w = Σ N (reliability of model N)   (Equation 2)
- the final prediction result is 0.41. It can be seen that model 1, with higher reliability than model 2, dominates the calculation, and the final prediction result is closer to the prediction result of model 1 (0.3) than to the prediction result of model 2 (0.9).
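The reliability-weighted aggregation above can be sketched as follows. The reliability values are assumptions chosen so that the result reproduces the 0.41 of the example; the specification gives only the predictions (0.3 and 0.9) and the final result.

```python
def combine_predictions(models, thr_r):
    """Sketch of the FIG. 10 aggregation (steps 1020-1060): a reliability-
    weighted average over the models whose reliability exceeds thr_r."""
    p = w = 0.0
    for reliability, prediction in models:
        if reliability > thr_r:
            p += reliability * prediction    # cumulative prediction result
            w += reliability                 # cumulative weight
    return p / w if w else None              # no usable model: no prediction

# assumed reliabilities: model 1 high, model 2 barely usable, model 3 excluded
models = [(0.9, 0.3), (0.2, 0.9), (0.1, 0.6)]
final = combine_predictions(models, thr_r=0.15)
```

Here p = 0.9·0.3 + 0.2·0.9 = 0.45 and w = 1.1, so p/w ≈ 0.41, matching the worked example.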
- FIG. 11 illustrates a feedback operation performed by the device based on the prediction result according to an embodiment of the present invention.
- the device can generate a feedback value using the state prediction result of the local device and the reliability of the prediction result in FIG. 10 , and determine whether to execute the feedback operation using the feedback value. Whether to execute the feedback operation may include determining whether to change the configuration of the device.
- the device can obtain the state prediction result and the reliability of the prediction result of the local device derived as in FIG. 10 ( 1100 ).
- the reliability of the prediction result can be obtained by dividing the cumulative weight w by the number of models.
- the device may determine a feedback value f ( 1110 ).
- the feedback value f is a criterion value for the device to determine whether to perform the feedback operation.
- the device can determine the feedback value f using Equation 3 below.
- the device may determine whether the determined feedback value f is greater than or equal to preset threshold f ( 1120 ). If the feedback value f is less than threshold f , the procedure can be terminated without changing the configuration of the device. If the determined feedback value f is greater than or equal to threshold f , the device may change the configuration of the device based on the feedback value f ( 1130 ).
- suppose, for example, that the device is a base station and the failure state of a fan is to be predicted, the reliability of the prediction result is 0.8, and threshold f is 0.7. Then the feedback value f is given as follows.
- the device can determine that a fan failure will occur with a high probability and determine to change the hardware or software settings of the base station. By determining the feedback value using the reliability of the current prediction result, the device can predict and prepare for a more accurate state in real time.
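The fan-failure decision above can be sketched as follows. Equation 3 is not reproduced in this excerpt, so the product form used here is an illustrative guess consistent with "determining the feedback value using the reliability of the current prediction result"; the predicted failure probability 0.95 is likewise an assumed value, while the reliability 0.8 and threshold 0.7 come from the example.

```python
def feedback_value(prediction, reliability):
    """Assumed form of Equation 3: scale the predicted failure probability
    by the reliability of the prediction (an illustrative guess)."""
    return prediction * reliability

f = feedback_value(0.95, 0.8)     # 0.95 is an assumed prediction result
perform_feedback = f >= 0.7       # threshold_f from the example ( 1120 )
```

Since f exceeds threshold f here, the device would proceed to change its hardware or software settings ( 1130 ).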
- the above-described method of sharing state related information including parameters selected based on the state model between devices having similar characteristics, and predicting the state of the local device based on the shared state related information can be applied as follows.
- base stations installed in the same cell site may monitor their available resources and terminal traffic. If the terminal traffic amount exceeds a given threshold, the base stations may perform resource migration. In this way, the base station can keep the quality of experience (QoE) of the user at a high level.
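The monitoring rule above can be sketched as a simple threshold check. The capacity figures and the 0.8 ratio are assumptions for illustration.

```python
def needs_migration(used_traffic, capacity, threshold_ratio=0.8):
    """Sketch: trigger resource migration when terminal traffic exceeds a
    given fraction of the base station's capacity (ratio is assumed)."""
    return used_traffic / capacity > threshold_ratio

busy = needs_migration(90, 100)    # heavily loaded cell: migrate
idle = needs_migration(50, 100)    # lightly loaded cell: no action
```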
- the present invention can be utilized to introduce a rechargeable battery structure according to the peak/off-peak hours and the hourly power charges. More specifically, a parameter for the number of terminals served per hour can be used. In addition, parameters for the physical resource block (PRB) usage amount, terminal data throughput, cell load, hardware component load, and backhaul load can be used. Additionally, a parameter for the number of neighbor base stations can be used to account for the power consumed when the base station uses interference control modules. As the strength of a signal transmitted to the terminal can be influenced by the channel condition, parameters for the channel state can be used.
- the patterns of power consumption (voltage, ampere, ohm, interference) can be predicted based on the above parameters, and base station resources (radio power, frequency bandwidth) can be changed as needed based on the predicted power usage patterns.
- the base station may monitor radio information (e.g., reference signal received power (RSRP), reference signal received quality (RSRQ), channel quality indication (CQI)) and expected quality of experience (QoE).
- expected radio information and quality of experience (QoE) values of the terminals can be determined according to the billing plan, terminal type, and traffic pattern. For example, if the user subscribed to an expensive billing plan requires a terminal with a modem chipset supporting the high capacity downlink and a low latency application service, the base station can identify the available resources and the expected resources in the future in advance and perform resource migration to the base station with the highest performance.
- the present invention can be used for enabling the base stations connected with the terminals to predict software and hardware states.
- the base station may monitor the operation of various software modules and hardware modules internally (e.g., available resources and stability) and make a prediction on the normal operation. If a particular component exhibits an anomalous condition (e.g., optical error, communication error, memory error, fan error, memory full, CPU (central processing unit) full, DSP (digital signal processing) error), the base station can hand over the terminals to a normally operating base station.
- the present invention may be applied to flexibly steer network traffic through the central software defined networking (SDN) controller and to flexibly adjust apparatus resources as needed through network function virtualization (NFV) technology.
- deep packet inspection technology installed on a base station may be used with data logs for parameters such as bandwidth, throughput, latency, jitter, and error rate between base stations.
- the user application content and association patterns can be extracted.
- the extraction results can be sent to the SDN controller, which may result in higher user QoS and QoE and reduced operator capital expenditure and operational expenditure (CapEx/OpEx).
- all steps and messages may be selectively executed or omitted.
- the steps in each embodiment need not occur in the listed order, but may be reversed.
- Messages need not necessarily be delivered in order, but the order may be reversed. Individual steps and messages can be performed and delivered independently.
Abstract
The present invention relates to a method and device for sharing state related information among a plurality of electronic devices and, more particularly, to a method and device for predicting the state of a device on the basis of information shared among a plurality of electronic devices. In order to attain this purpose, a method for sharing state related information of a device, according to an embodiment of the present invention, comprises the steps of: generating a state model of a device on the basis of state related data; selecting one or more parameters for determining the state of the device on the basis of the generated state model; and transmitting the one or more selected parameters to at least one other device.
Description
- This application is a continuation application of prior application Ser. No. 17/105,840 filed on Nov. 27, 2020, which issued as U.S. Pat. No. 11,758,415 on Sep. 12, 2023; which is a continuation application of a prior application Ser. No. 15/776,528 filed on May 16, 2018, which issued as U.S. Pat. No. 10,880,758 on Dec. 29, 2020; which is a U.S. National Stage application under 35 U.S.C. § 371 of an International application number PCT/KR2016/013347 filed on Nov. 18, 2016, which is based on and claims priority to a Korean patent application number 10-2015-0163556 filed on Nov. 20, 2015 in the Korean Intellectual Property Office, the disclosure of each of which is incorporated by reference herein in its entirety.
- The present invention relates to a method and apparatus for sharing state information between a plurality of electronic devices, and more particularly, to a method and apparatus for predicting the state of a device based on information shared among multiple electronic devices.
- Recently, with the increase of mobile communication devices and the use of big data and cloud computing technologies, traffic has been rapidly increasing. In this trend, low-latency, high-throughput and secure end-to-end communication has become important. At the same time, in particular, as the role of the base station system (BSS) equipment relaying such communication becomes important, there is an ongoing discussion on the analysis capability for continuous real-time resource allocation, performance optimization, stability, and the cause of anomalies. In general, to satisfy the requirements of a mobile communication service provider, values for quality of service (QoS), quality of experience (QoE), and service level agreement (SLA) are defined, and mobile communication devices are operated based on them.
- In operating the mobile communication apparatus, the state prediction refers to predicting the software and hardware state of the apparatus in the future on the basis of the previous operational log information of the mobile communication apparatus. As a specific example, the states in the mobile communication apparatus may include the state of network resource distribution, the state of power usage, and the state of maintaining throughput connection.
- For the state prediction of the apparatus, sensor data of various elements that can be extracted from the apparatus can be collected. For example, statistical analysis and machine learning techniques can be applied to the sensor data to predict the state of the apparatus. The sensor data can be classified into periodic data, event data, and configuration data according to the data collection approach. For example, the apparatus may collect periodic data by periodically recording information about elements extracted from software and hardware, such as temperature, power, interference, and resource utilization. The apparatus may collect event data by, for example, configuring a situation where a certain element exceeds a preset threshold as an event. The apparatus may collect configuration data by recording information on the firmware version, location, and cell setup thereof.
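The event-data approach described above can be sketched as follows: a situation where an element exceeds a preset threshold is recorded as an event. The sample values and threshold are illustrative assumptions.

```python
def collect_event_data(samples, threshold):
    """Return (index, value) pairs for samples exceeding the threshold."""
    return [(i, v) for i, v in enumerate(samples) if v > threshold]

temperature_log = [45, 52, 48, 61]    # periodic temperature samples (assumed)
events = collect_event_data(temperature_log, threshold=50)
```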
- FIGS. 1A and 1B illustrate the use of a prediction model for a particular state of the apparatus.
- In FIG. 1A , to make a prediction on the state, the apparatus collects the data log during the learning period 100 and generates a prediction model 110 based on the collected data logs. Then, the apparatus predicts the state thereof during the state prediction period 120 by inputting data logs of a given period into the prediction model 110 .
- FIG. 1B shows a process of deriving a prediction result based on the internal structure of the apparatus making a prediction on the state using a prediction model. More specifically, after collecting data logs, the apparatus produces a prediction result through a learning stage in the modeling unit (MU) and a prediction stage in the prediction unit (PU). The modeling unit and the prediction unit are based on machine learning algorithms widely known in the art, such as naive Bayesian networks, support vector machines, and random forests. The prediction accuracy of the results produced by these algorithms may differ depending on the characteristics and amount of the data logs.
- Next, a description is given of the modeling unit and the prediction unit.
- First, the modeling unit creates a model using data and data classes. The data may include raw data or processed data logs; in the present invention, the terms data and data log are used interchangeably. The data class refers to the desired output value for the data. For example, the data class may refer to the result values for data values belonging to a specific period in the past. The model is generated based on stochastic or deterministic patterns of the data for the data class.
- Next, after the model is generated, the prediction unit inputs a new data log to the model to produce an output value as a prediction result. That is, the prediction unit derives a data class for the new data log using a class decision rule of the model. The prediction unit also derives the prediction accuracy of the prediction result. The prediction accuracy can differ depending on the machine learning algorithm, the characteristics and quantity of the data logs, the number of parameters, and the data processing precision. The prediction accuracy can be improved by using feature engineering or feature selection. In particular, more patterns can be extracted and learned as the number of learning data logs and the number of parameters increase, so collecting and learning data logs of various parameters can improve the prediction accuracy. Feature engineering or feature selection, which arranges modeling, prediction, and data classes for a typical machine learning operation, is a general technique in the field of the present invention and does not belong to the scope of the present invention, so a detailed description thereof is omitted.
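The modeling-unit/prediction-unit split can be sketched with a deliberately trivial learner. The nearest-class-mean rule below is a stand-in for the naive Bayes, SVM, or random-forest learners named above, and all data values and class labels are assumed.

```python
from statistics import mean

def modeling_unit(data, classes):
    """Learn a trivial one-dimensional nearest-class-mean decision rule:
    store the mean of the data values observed for each data class."""
    return {c: mean(x for x, y in zip(data, classes) if y == c)
            for c in set(classes)}

def prediction_unit(model, new_data):
    """Apply the class decision rule of the model to new data logs:
    assign each value to the class with the closest stored mean."""
    return [min(model, key=lambda c: abs(x - model[c])) for x in new_data]

# hypothetical data logs and their data classes (desired output values)
model = modeling_unit([1.0, 1.2, 3.8, 4.0], ["ok", "ok", "fault", "fault"])
predictions = prediction_unit(model, [1.1, 3.9])
```

A real deployment would substitute one of the named algorithms for the toy rule while keeping the same fit/predict division of labor.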
-
FIG. 2 illustrates a specific method of applying data learning and the prediction model. - More specifically, a description is given of a decentralized method and a centralized method, which are methods for, e.g., plural base stations that learn data and generate a prediction model to produce a prediction result.
- In the decentralized method shown in part (a) of
FIG. 2, each base station learns generated data independently to generate a prediction model, and performs data prediction based on the generated prediction model. Each base station also calculates the accuracy of the prediction model. That is, one base station does not transmit or receive a data log to or from another base station. In the centralized method shown in part (b) of FIG. 2, each base station transmits a data log generated thereat to the central server, which learns the collected data logs and generates a prediction model. The central server performs data prediction based on the prediction model and calculates the accuracy of the prediction model. That is, new data logs are also transmitted to the central server, which calculates the accuracy. - More specifically, in the decentralized method, each apparatus learns and predicts using independently collected data logs. This makes it possible to create a prediction model by taking into consideration the characteristics of each apparatus, but it is necessary to accumulate data logs for a long period of time for practical use. In addition, since a model can be generated only when information on the output value for a data log (i.e., the data class) is available, state prediction for a newly installed apparatus may be impossible because no data logs have accumulated.
- In the centralized method, the central server can collect a large amount of data logs from various base stations and reach a high prediction accuracy by using recently introduced big data technology. However, as the amount of data increases, a given learning algorithm requires more resources for learning. In addition, the CPU, memory, and disk capacity requirements of the central server increase; the learning time becomes long; it is difficult to transmit the prediction result in real time; and it is difficult to reflect the characteristics of each base station in the prediction model.
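The contrast between the two schemes can be sketched as follows; the toy majority-class learner and the station data are assumptions made only for illustration.

```python
# Hypothetical sketch contrasting the decentralized and centralized
# schemes. The majority-class "model" is a toy stand-in for a learner.
from collections import Counter

def learn(logs_with_classes):
    """Toy learner: predict the majority class seen in the training logs."""
    majority, _ = Counter(cls for _, cls in logs_with_classes).most_common(1)[0]
    return lambda new_log: majority

station_a = [((1, 2), "normal"), ((1, 3), "normal")]
station_b = [((9, 8), "overload"), ((9, 9), "overload"), ((8, 9), "overload")]

# Decentralized: each base station learns only from its own logs.
local_models = {name: learn(logs) for name, logs in
                {"A": station_a, "B": station_b}.items()}

# Centralized: every station ships its raw logs to the central server,
# which learns one model over the pooled data.
central_model = learn(station_a + station_b)

print(local_models["A"]((5, 5)))   # station A has never seen an overload
print(central_model((5, 5)))       # majority over the pooled logs
```

The sketch shows the trade-off named above: station A's local model cannot predict a state it has never observed, while the central model needs every station's raw logs.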
- The present invention has been made in view of the above problems. Accordingly, an aspect of the present invention is to provide a method for sharing state related information including parameters selected based on a device state model between different devices having similar characteristics.
- In accordance with an aspect of the present invention, there is provided a method of sharing state related information for a device. The method may include: generating a state model of the device on the basis of state related data; selecting at least one parameter determining the state of the device based on the generated state model; and transmitting the selected at least one parameter to at least one different device.
- In accordance with another aspect of the present invention, there is provided a device capable of sharing state related information. The device may include: a transceiver unit configured to transmit and receive a signal; and a controller configured to control generating a state model of the device on the basis of state related data, selecting at least one parameter determining the state of the device based on the generated state model, and transmitting the selected at least one parameter to at least one different device.
- In a feature of the present invention, state related information including parameters selected based on the state model, rather than the entire data logs, is shared among different devices having similar characteristics. Hence, only a small amount of resources is used among the devices. Each device can make a prediction even if a specific state to be predicted at the present time point has not occurred in the past. A high prediction accuracy can be achieved by learning a small amount of data logs. In addition, one device can combine the state related information received from another device with the state model it has generated to produce a prediction result. Hence, each device can obtain a prediction result in real time in consideration of the characteristics of the device.
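The resource argument above can be made concrete with a rough size comparison; the log layout, parameter names, and counts below are made-up examples.

```python
# A rough illustration of the resource argument: sharing a few selected
# parameters with their weights is far smaller than shipping raw data
# logs. The log layout and parameter names are made-up examples.
import json

raw_logs = [{"rsrp": -90.1, "rsrq": -10.3, "cqi": 9}] * 1000
state_related_info = {"parameters": ["RSRP", "RSRQ"],
                      "weights": {"RSRP": 0.5, "RSRQ": 0.3},
                      "accuracy": 0.9}

print(len(json.dumps(raw_logs)))            # payload if raw logs were sent
print(len(json.dumps(state_related_info)))  # payload for state related info
print(len(json.dumps(state_related_info)) < len(json.dumps(raw_logs)))
```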
-
FIGS. 1A and 1B illustrate the use of a prediction model for a particular state of the apparatus. -
FIG. 2 illustrates a specific method of applying data learning and the prediction model. -
FIG. 3 is a block diagram illustrating the internal structure and operation of a device according to an embodiment of the present invention. -
FIG. 4 illustrates pieces of information transmitted and received between the internal components of the device according to an embodiment of the present invention. -
FIGS. 5A and 5B illustrate grouping of devices to share state related information according to an embodiment of the present invention. -
FIGS. 6A and 6B illustrate producing a prediction result for the device installation state and performing a feedback operation according to an embodiment of the present invention. -
FIG. 7 shows a graph representing the state model that predicts the device state. -
FIGS. 8A, 8B, and 8C illustrate operations for selecting parameters to be shared with another device for a state model and determining whether to share the state model according to an embodiment of the present invention. -
FIG. 9 illustrates the operation of determining whether to share state related information with another device according to an embodiment of the present invention. -
FIG. 10 illustrates an operation by which the device derives a prediction result based on state related information according to an embodiment of the present invention. -
FIG. 11 illustrates a feedback operation performed by the device based on prediction results according to an embodiment of the present invention. - Hereinafter, preferred embodiments of the present invention are described in detail with reference to the accompanying drawings. The same or similar reference symbols are used throughout the drawings to refer to the same or like parts. Descriptions of well-known functions and constructions may be omitted to avoid obscuring the subject matter of the present invention.
- The following description is focused on 4G communication systems including the advanced E-UTRA (or LTE-A) system supporting carrier aggregation. However, it should be understood by those skilled in the art that the subject matter of the present invention is applicable to other communication systems having similar technical backgrounds and channel configurations without significant modifications departing from the scope of the present invention. For example, the subject matter of the present invention is applicable to multicarrier HSPA systems supporting carrier aggregation and next generation 5G communication systems.
- Descriptions of functions and structures well known in the art and not directly related to the present invention may also be omitted for clarity and conciseness without obscuring the subject matter of the present invention.
- In the drawings, some elements are exaggerated, omitted, or only outlined in brief, and thus may be not drawn to scale. The same or similar reference symbols are used throughout the drawings to refer to the same or like parts.
- The aspects, features and advantages of the present invention will be more apparent from the following detailed description taken in conjunction with the accompanying drawings. The description of the various embodiments is to be construed as exemplary only and does not describe every possible instance of the present invention. It should be apparent to those skilled in the art that the following description of various embodiments of the present invention is provided for illustration purpose only and not for the purpose of limiting the present invention as defined by the appended claims and their equivalents. The same reference symbols are used throughout the description to refer to the same parts.
- Meanwhile, it is known to those skilled in the art that blocks of a flowchart (or sequence diagram) and a combination of flowcharts may be represented and executed by computer program instructions. These computer program instructions may be loaded on a processor of a general purpose computer, special purpose computer or programmable data processing equipment. When the loaded program instructions are executed by the processor, they create a means for carrying out functions described in the flowchart. As the computer program instructions may be stored in a computer readable memory that is usable in a specialized computer or a programmable data processing equipment, it is also possible to create articles of manufacture that carry out functions described in the flowchart. As the computer program instructions may be loaded on a computer or a programmable data processing equipment, when executed as processes, they may carry out steps of functions described in the flowchart.
- A block of a flowchart may correspond to a module, a segment or a code containing one or more executable instructions implementing one or more logical functions, or to a part thereof. In some cases, functions described by blocks may be executed in an order different from the listed order. For example, two blocks listed in sequence may be executed at the same time or executed in reverse order.
- In the description, the word “unit”, “module” or the like may refer to a software component or hardware component such as an FPGA or ASIC capable of carrying out a function or an operation. However, “unit” or the like is not limited to hardware or software. A unit or the like may be configured so as to reside in an addressable storage medium or to drive one or more processors. Units or the like may refer to software components, object-oriented software components, class components, task components, processes, functions, attributes, procedures, subroutines, program code segments, drivers, firmware, microcode, circuits, data, databases, data structures, tables, arrays or variables. A function provided by a component and unit may be a combination of smaller components and units, and may be combined with others to compose large components and units. Components and units may be configured to drive a device or one or more processors in a secure multimedia card.
- In the description, the “state model” is generated by learning various data logs in the past that determine the state of the device, such as the resource usage state or the power usage state. The state model may output a prediction result for the state of the device in the future when a certain amount or more of data logs is input.
- In the description, the “state related information” refers to information on the factors that determine the state (e.g., software state or hardware state) of the device. The state related information may be derived from the state model generated based on the learning data for the state. The state related information may include at least one parameter determining the state, and may further include weight information indicating the extent to which the parameters determine the state.
- In the description, the “model related information” may include information regarding the characteristics of data logs, the algorithm, and the parameters used to generate the state model in the device, or information on the accuracy of the state model.
-
FIG. 3 is a block diagram illustrating the internal structure and operation of a device according to an embodiment of the present invention. - More specifically, the device may include a
modeling unit 310, a prediction unit 320, a control unit 330, and a remote communication unit 340. The data log 300 can be input to the device, and optimized device configuration information 350 can be output. That is, the control unit 330 is newly added to the existing prediction model approach. Hereinafter, to distinguish the device from a different or external device that exchanges information with the device through the remote communication unit, the device including the control unit 330 is referred to as a local device, and the other device is referred to as a remote device. - The
modeling unit 310 may learn data logs from the control unit 330 to generate the state model of the local device. In this case, the modeling unit 310 can generate the state model of the local device using state related information and model related information received through the remote communication unit 340 from the remote device. - The
prediction unit 320 may produce a prediction result for each state model on the basis of a preset amount or more of data logs and at least one state model from the control unit 330. That is, the prediction unit 320 may input received data logs to the state model generated in the local device and the state model generated in the remote device to produce prediction results for each state model. - The
control unit 330 may cause the modeling unit 310 and the prediction unit 320 to work together and exchange information with the remote devices through the remote communication unit 340. That is, the control unit 330 can act as an integrator for state prediction in the device. Specifically, the control unit 330 may include a monitoring part 331, a parameter selector 333, a prediction result determiner 335, and a configuration change determiner 337. - The
monitoring part 331 collectively manages the interfaces between the internal modules of the control unit 330 and may continuously monitor the status of the local device. The monitoring part 331 may store data logs 300 obtained from the sensors attached to the local device and may store the state model received from the modeling unit 310. The state model may include state related information determining the state, and may be composed of parameters and weighting information for the parameters. For example, the state model may include information on the parameters, arranged in order of weight, which determine the state of the device. - The
parameter selector 333 may select a parameter to be shared with at least one remote device according to a preset criterion from among one or more parameters included in the state model generated by the modeling unit 310. Here, the parameter selector 333 can dynamically determine the number of parameters to be shared based on the weight and the accuracy of the state model. The accuracy of the state model indicates the degree of agreement between the predicted result calculated by entering the data log into the state model and the actual result, and can be included in the model related information. An example of the state model is shown in FIG. 7, and a more detailed description is given with reference to FIG. 8. - The prediction result determiner 335 may use both the prediction result calculated based on the state model generated by the
modeling unit 310 of the local device and the prediction result calculated based on the state model generated in the remote device to produce a prediction result for the state of the local device. Also, the prediction result determiner 335 may use both the accuracy of the state model of the local device and the reliability of the state model of the remote device to produce a prediction result for the state. The accuracy and reliability of the state model indicate the degree of agreement between the predicted result and the actual result for an input value to the state model. In the description, the term "accuracy" is used for the state model generated in the local device, and the term "reliability" is used for the state model generated in the remote device. A detailed description is given of the prediction result determiner 335, which produces a prediction result based on the state model of the local device and the state model of the remote device, with reference to FIG. 10. - The
configuration change determiner 337 may generate a feedback value using the prediction result and the reliability thereof produced by the prediction result determiner 335 and determine whether to change the configuration of the device based on the feedback value. A detailed description is given of determining whether to change the configuration of the device with reference to FIG. 11. - It is possible for the constituent modules of the
control unit 330 to perform the above operations. Alternatively, as is well known in the art, the control unit 330 may directly perform all of the above operations. - The
remote communication unit 340 may be connected to at least one remote device to share state related information and model related information with the remote device. More specifically, the remote communication unit 340 may transmit, to at least one remote device, a parameter selected based on the state model. The remote communication unit 340 may transmit the weight information of the parameter and the accuracy information of the state model to the at least one remote device. That is, the remote communication unit 340 does not transmit the data logs themselves collected from the local device, but transmits the state related information, thereby consuming a smaller amount of resources. - Also, the
remote communication unit 340 may receive state related information and model related information from at least one remote device. The control unit 330 may receive state related information and model related information through the remote communication unit 340 from the remote device, construct a model of the remote device based on the received information, and use the constructed model to produce a prediction result for the local device. - That is, the
remote communication unit 340 may receive, from at least one remote device, state related information including a parameter selected based on the state model of the remote device and weight information of the selected parameter, and model related information including the reliability information of the state model. - The
control unit 330 may finally produce an optimized device configuration 350 according to the predicted state of the device by using the constituent modules thereof. - The
control unit 330 may control generating a state model of the device based on state related data, selecting at least one parameter that determines the state of the device based on the generated state model, and transmitting the selected at least one parameter to at least one different device. - The
control unit 330 may further control transmitting weight information corresponding to the selected at least one parameter to the different device. The control unit 330 may control transmitting, to at least one different device, only the parameter rather than the state related data. The control unit 330 may control receiving, from at least one different device, at least one parameter, which determines the state of the different device and is selected based on the state model generated in the different device, and the weight information corresponding to the selected parameter, and producing a prediction result for the state of the device on the basis of at least one parameter determining the state of the device and at least one parameter determining the state of the different device. - The
control unit 330 may control receiving, from at least one different device, information about the reliability of the prediction result derived from the state model generated in the different device. The control unit 330 may control producing a prediction result for the state of the device in consideration of the accuracy of the prediction result derived from the state model generated in the device and the reliability of the prediction result derived from the state model generated in the different device. In addition, the control unit 330 may control determining whether to change the configuration of the device based on the prediction result for the state. -
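The way the control unit may weigh the local model's accuracy against each remote model's reliability can be sketched as a weighted vote; the voting rule, labels, and values below are assumptions, not the specific combination rule of the invention.

```python
# Hedged sketch: combine the local prediction (weighted by the local
# model's accuracy) with remote predictions (weighted by reliability).
# The weighted-vote rule and the example values are assumptions.
from collections import defaultdict

def combine(local_prediction, local_accuracy, remote_results):
    """remote_results: list of (prediction, reliability) pairs."""
    score = defaultdict(float)
    score[local_prediction] += local_accuracy
    for prediction, reliability in remote_results:
        score[prediction] += reliability
    return max(score, key=score.get)

# A reliable remote model can outvote a less accurate local model.
print(combine("normal", 0.6, [("overload", 0.9), ("normal", 0.2)]))
```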
FIG. 4 illustrates pieces of information transmitted and received between the internal components of the device according to an embodiment of the present invention. - More specifically, the device may further include interface mechanisms for transmitting and receiving information between the
modeling unit 410, the prediction unit 420, and the control unit 430. - First, the interface for transmitting information from the control unit 430 to the
modeling unit 410 may be referred to as "SND_to_MU" 431. The control unit 430 may transmit data logs received from various sensors of the local device. Typically, the data log is streamed to the control unit 430 as an input value and processed through a pre-processing step. Pre-processing is not within the scope of the present invention and is not described herein. In addition, the control unit 430 may transmit state related information and model related information received through the remote communication unit 440. The state related information may include parameters selected based on the state model generated at the remote device and weights for the parameters. The model related information may include information regarding characteristics of data logs, algorithms, and parameters used to generate the state model. The model related information may also include information on the reliability of the state model. The modeling unit 410 can learn the data logs received from the local device to generate a state model. To generate the state model, the modeling unit 410 may additionally use the state related information and model related information received through the remote communication unit 440. The learning algorithms may include, for example, machine learning algorithms. - The interface for transmitting information from the
modeling unit 410 to the control unit 430 may be referred to as RCV_from_MU 433. The modeling unit 410 may transmit the generated state model and model related information to the control unit 430. The control unit 430 may derive state related information from the state model. That is, the control unit 430 may produce information on the parameters determining the state and the weight information for the parameters. - Next, the interface for transmitting information from the control unit 430 to the
prediction unit 420 may be referred to as SND_to_PU 435. The control unit 430 may transmit a given amount or more of data logs at the present time for predicting the state of the device as part of SND_to_PU. The control unit 430 may transmit the state model received from the modeling unit 410, that is, a state model generated based on the data logs collected from the local device and a state model received from the remote device. Upon receiving new data logs at the current time point, the state models, and the state related information, the prediction unit 420 can produce a prediction result on the state by applying pre-stored algorithms to each state model. The algorithms may include, for example, machine learning algorithms. - The interface for transmitting information from the
prediction unit 420 to the control unit 430 may be referred to as RCV_from_PU 437. The prediction unit 420 may transmit a prediction result for the state of the device when each state model is applied. Thereby, the control unit 430 can produce a prediction result for the state of the local device by using both the prediction result derived from the state model of the local device and the prediction result derived from the state model of the remote device. - Next, the interface for transmitting information from the control unit 430 to the
remote communication unit 440 may be referred to as SND_to_Remote 438. The control unit 430 may send state related information to at least one remote device based on the state model received from the modeling unit 410 through continuous monitoring. The remote device may be a member of a group of devices having characteristics similar to the local device. The control unit 430 may select state related information based on the received state model and determine whether to transmit the state related information to the at least one remote device. This is described in detail with reference to FIG. 9. - The interface for transmitting information from the
remote communication unit 440 to the control unit 430 may be referred to as RCV_from_Remote 439. The remote communication unit 440 may transmit state related information based on the state model generated in the remote device to the control unit 430. More specifically, the control unit 430 may receive parameters selected based on the state model and weight information indicating the degree to which the parameters determine the state. The control unit 430 may also receive information on the reliability of the state model generated at the remote device. The control unit 430 may calculate a prediction result for the state using the received state related information. This is described in detail with reference to FIG. 10. In addition, peer-to-peer (P2P) sharing schemes may be used as distributed algorithms for transmitting and receiving information through the remote communication unit 440 to and from other remote devices. -
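The six named interfaces can be sketched as tagged messages passed between the modules; the queue mechanism and payload shapes below are illustrative assumptions, while the interface names follow the text.

```python
# Minimal sketch of the interfaces named above, modeled as tagged
# messages on an internal queue. Interface names follow the text; the
# queue mechanism and payloads are illustrative assumptions.
from collections import deque

INTERFACES = {"SND_to_MU", "RCV_from_MU", "SND_to_PU", "RCV_from_PU",
              "SND_to_Remote", "RCV_from_Remote"}

queue = deque()

def send(interface, payload):
    if interface not in INTERFACES:
        raise ValueError("unknown interface: " + interface)
    queue.append((interface, payload))

send("SND_to_MU", {"data_logs": [(0.1, 0.2)]})   # control unit -> modeling unit
send("RCV_from_MU", {"state_model": "model-1"})  # modeling unit -> control unit
print(len(queue))
```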
FIGS. 5A and 5B illustrate grouping of devices to share state related information according to an embodiment of the present invention. - More specifically, assuming that the device is a base station,
FIG. 5A shows groups of base stations sharing state related information and model related information for the model on a map. A group of one or more devices may be referred to as a shared model group. FIG. 5B is a table showing attribute values of base stations. Base stations can be grouped based on the installation area, the software version, the number of assigned cells, and the like to create shared model groups. - The group information can be determined by the base station management system based on the attribute values of the base stations as shown in
FIG. 5B. For example, a k-mode clustering technique may be performed to group the base stations, and the base stations of each group may be notified of the shared model group information. Thereafter, the base stations belonging to the same shared model group may share state related information. -
FIGS. 6A and 6B illustrate producing a prediction result for the device installation state and performing a feedback operation according to an embodiment of the present invention. - More specifically,
FIG. 6A illustrates operations between internal modules in a newly installed local device, and FIG. 6B illustrates operations between internal modules in a previously installed local device. - In the case of a newly installed local device of
FIG. 6A, there are few or no accumulated data logs. Hence, the control unit 610 may receive the state related information and model related information through the interface connected to the remote communication unit 600 (RCV_from_Remote 605) from the remote devices belonging to the same shared model group, and produce a prediction result. That is, the control unit 610 can receive the state related information selected by the remote base station through the remote communication unit 600. The control unit 610 may transmit a new data log at the current local base station and the state related information received from at least one remote base station to the prediction unit 625 through SND_to_PU 620. The prediction unit 625 can produce a prediction result based on the received new data log at the current time point and the state related information of at least one remote base station, and transmit the prediction result back to the control unit 610 via RCV_from_PU 627. The control unit 610 can utilize the prediction result produced by at least one remote base station and the reliability of the state model at the remote base station to produce a prediction result at the current time point in the local base station, and perform a feedback operation (630). The feedback operation may include determining whether to change the current configuration of the device. - In the case of a previously installed local device of
FIG. 6B, there are accumulated data logs. Hence, the local device may share, with at least one remote device, state related information derived from the state model generated based on its accumulated data logs. In addition, the local device may produce a prediction result at the current time in the local device on the basis of the state related information received from the remote device and the state model generated in the local device. - First, a description is given of sharing state related information derived from the state model generated in the local device with at least one remote device. The
control unit 655 can transmit the accumulated data logs to the modeling unit 665 via SND_to_MU 660. The modeling unit 665 can generate a state model based on the data logs and transmit the state model back to the control unit 655 via RCV_from_MU 667. The control unit 655 may transmit the state model generated based on the data logs to the prediction unit 675 through SND_to_PU 670. In this case, the control unit 655 may also transmit a new data log collected at the current time point. The prediction unit 675 can produce a prediction result based on the new data log and the state model. - The
prediction unit 675 can transmit the prediction result to the control unit 655 via RCV_from_PU 677. The control unit 655 can compute the prediction accuracy of the state model on the basis of the prediction result derived from the state model generated based on the data logs of the local device. Obtaining the prediction accuracy is not within the scope of the present invention, and a description thereof is omitted. Based on the prediction accuracy of the state model, the control unit 655 may select parameters to be shared with the remote device from among the parameters included in the state related information of the state model, and determine whether to share the state related information of the state model. This is described in more detail with reference to FIG. 8. Upon determining to share the state model generated in the local base station and selecting the parameters to be shared among the state related information, the control unit 655 may transmit the selected parameters and weight information corresponding to the selected parameters to the remote communication unit 640 through SND_to_Remote 680. - Next, a description is given of producing a prediction result at the current time point in the local device based on the state model generated in the local device. The
control unit 655 can receive state related information and model related information about the state model created in the remote device from the remote communication unit 640 via the RCV_from_Remote interface. The description of creating the state model and selecting the parameters to be shared is the same for the local device and the remote device. The control unit 655 may forward the state related information received from the remote device to the prediction unit 675 via SND_to_PU 670. At the same time, the control unit 655 can also transmit a certain amount of data logs collected by the local device up to the current time point. - The prediction
unit 675 can produce a prediction result for the data logs at the current time point by using at least one parameter included in the state related information and a weight corresponding to the parameter. The prediction unit 675 can transmit the prediction result based on the state related information received from the remote device to the control unit 655 through RCV_from_PU 677. The control unit 655 can utilize both the prediction result based on the state model of the local device and the prediction result based on the state model of the remote device to produce the prediction result for the state of the local device. This is described in more detail with reference to FIG. 10. The control unit 655 may perform a feedback operation based on the produced prediction result (690). Specifically, the control unit 655 may determine whether to perform the feedback operation based on the prediction result. -
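The feedback step described above can be sketched as a simple decision: change the device configuration only when the combined prediction signals a problem state and the backing model is reliable enough. The threshold and the "overload" label are assumptions for illustration.

```python
# Hedged sketch of the feedback decision. The 0.8 reliability threshold
# and the "overload" problem-state label are illustrative assumptions.

def should_reconfigure(predicted_state, reliability, threshold=0.8):
    """Return True when a configuration change is warranted."""
    return predicted_state == "overload" and reliability >= threshold

print(should_reconfigure("overload", 0.9))   # reliable problem prediction
print(should_reconfigure("overload", 0.5))   # prediction not reliable enough
print(should_reconfigure("normal", 0.95))    # no problem predicted
```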
FIG. 7 shows a graph representing the state model that predicts the device state. - More specifically,
FIG. 7 illustrates a state model generated by the modeling unit based on data logs collected in the device. The device may select parameters to be shared with a remote device based on the state model. In the graph for the state model, the x-axis indicates the ranks of the parameters (705) and the y-axis indicates the cumulative sum of the weights of the parameters (700). The parameters refer to factors that determine the state of the device. For example, to make a prediction on the network throughput of the device, the parameters may include reference signal received power (RSRP), reference signal received quality (RSRQ), and channel quality indication (CQI).
- In
FIG. 7, the graph of cumulative weights of the parameters determining the state is in the form of a cumulative distribution function (CDF) graph. It can be seen that the slope of the graph decreases as the number of parameters is accumulated according to the order of weighting (710, 720, 730). The device may determine the number of parameters to be shared on the basis of the slope of the graph. This is described in more detail with reference to FIG. 8.
-
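The slope-based cutoff described above can be sketched as follows. This is an illustrative sketch only, not the patented implementation; the function name, the parameter weights, and the threshold value are all hypothetical:

```python
def select_by_slope(weights, threshold_d):
    """Pick how many top-ranked parameters to keep (the FIG. 7 idea).

    weights: per-parameter weights sorted in descending rank order; the
    difference between adjacent weights plays the role of the slope
    change of the cumulative-weight (CDF-like) curve.
    threshold_d: stop at the first rank where that difference drops
    below this value, i.e. where the curve flattens out.
    """
    for n in range(len(weights) - 1):
        if weights[n] - weights[n + 1] < threshold_d:
            return n + 1              # share parameters 1..N only
    return len(weights)               # curve never flattened: keep all

# Hypothetical weights for RSRP, RSRQ, CQI, and three further parameters.
ranked_weights = [0.40, 0.25, 0.15, 0.10, 0.06, 0.04]
print(select_by_slope(ranked_weights, threshold_d=0.03))   # prints 5
```

With these numbers the adjacent-weight differences are 0.15, 0.10, 0.05, 0.04, 0.02; the first difference below 0.03 occurs after the fifth parameter, so five parameters would be shared.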
FIGS. 8A, 8B and 8C illustrate operations for selecting parameters to be shared with another device for a state model and determining whether to share the state model according to an embodiment of the present invention. - More specifically, the device may receive data logs associated with the state from the sensors (800). The device may generate a state model based on the data logs, and may identify parameters that determine the state according to the state model and weights of the parameters (805). Here, the state model may be the one shown in
FIG. 7, and it is assumed in the following description that the state model is the same as the graph shown in FIG. 7.
- The device may set Npre to 0, where N denotes the number of parameters (807). The device may set N to 1 (810). To compute the angle difference in the graph between the parameters of adjacent ranks, the device may calculate the angle difference in the graph between the Nth parameter and the (N+1)th parameter (degree = I_N − I_(N+1)) (820).
- The device may then determine whether the calculated angle difference is less than a preset threshold of the slope (thresholdD) (823). If the calculated angle difference is not less than thresholdD, the device may increase the number of parameters by one (825), and repeat
steps 820 through 825 until the condition of step 823 is satisfied, and it can thereby determine the number of parameters whose weight is greater than or equal to a preset value. If the calculated angle difference (I_N − I_(N+1)) is less than thresholdD, the device may learn the first to Nth parameters to thereby produce the prediction result and the prediction accuracy (830). Thereafter, the device may determine whether the produced prediction accuracy is higher than a preset threshold of the prediction accuracy (thresholdP) (833). If the prediction accuracy is lower than thresholdP, the device may determine whether the value of N is equal to the total number of parameters (835). If N is equal to the total number of parameters, the device may determine the state model as unusable (837). That is, if the prediction accuracy is lower than thresholdP although all the parameters in the state model are used for producing the prediction result, the state model is not shared with remote devices and is not used to make a prediction on the state in the local device.
- If N is not equal to the total number of parameters, the device may determine whether flag is set to 1 (840). If flag is set to 1, the step size serving as the adjustment interval for thresholdD can be halved (843). Otherwise, step 843 can be skipped. Here, the step size means the interval value that changes the number of parameters in order to identify the minimum number of parameters whose prediction accuracy satisfies thresholdP. Thereafter, the device may adjust thresholdD downward by subtracting the step size from thresholdD (845). This is to increase the number of parameters to be learned by lowering thresholdD when the device has learned N parameters and produced a prediction result at
step 830 with the prediction accuracy lower than thresholdP. Then, the device may set flag to 0 and set Npre to N (847). - Thereafter, steps 810 to 830 may be repeated to determine the number of parameters based on changed thresholdD.
- On the other hand, if the prediction accuracy is higher than thresholdP at
step 833, the device may determine whether N is equal to Npre (850). If N is equal to Npre, the device may determine the current state model as a usable state model (855). If N is not equal to Npre, the device may determine whether flag is set to 0 (860). When the device tests step 833 for the first time, Npre at step 850 is still 0, the value set at step 807; the result of step 850 is therefore always "no" even if the prediction accuracy exceeds thresholdP on the first attempt at step 833, and the device initiates learning again by decreasing the number of parameters through adjustment of thresholdD. If the device previously tested step 833 and passes step 850 after adjusting thresholdD, that is, when the number of learned parameters equals the number of learned parameters in the previous stage, the device can determine the current state model as a usable state model.
- Thereafter, if flag is set to 0 at
step 860, the device may reduce the step size serving as the adjustment interval for thresholdD by half (863). If flag is set to 1, the device can maintain the step size. - Thereafter, the device may adjust thresholdD upward by adding the step size to thresholdD (865). This is to decrease the number of parameters to be learned by increasing thresholdD when the device has learned N parameters and produced a prediction result at
step 830 with the prediction accuracy higher than thresholdP. Then, the device may set flag to 1 and set Npre to N (870). Thereafter, step 810 and subsequent steps may be repeated to determine the number of parameters based on changed thresholdD.
- Here, flag is a criterion for determining whether the step size for adjusting thresholdD is to be halved. When thresholdD is adjusted downward, flag is set to 0 (847); and when thresholdD is adjusted upward, flag is set to 1 (870). If flag is 1 before thresholdD is adjusted downward, thresholdD was adjusted upward in the previous stage; in this case, the step size is reduced by half. If flag is 0 before thresholdD is adjusted upward, thresholdD was adjusted downward in the previous stage; in this case, the step size is reduced by half. That is, when the device adjusts thresholdD in the opposite direction from the previous stage, it can reduce the step size by half.
- Through the above process, the device can select a minimum number of parameters reaching the desired prediction accuracy. For example, after selecting 50 parameters based on initially determined thresholdD and learning, if the prediction accuracy satisfies thresholdP, thresholdD is increased to reduce the number of selected parameters. Thereafter, the device may learn 45 parameters (excluding 5 parameters) based on upwardly adjusted thresholdD and calculate the prediction result. If the prediction accuracy of the prediction result does not satisfy thresholdP, the device can select an increased number of parameters and learn again by adjusting thresholdD downward.
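The search of FIGS. 8A to 8C can be sketched as follows. This is a best-effort reading of the flowchart rather than the patented code: the helper names, the initial thresholdD and step size, and the synthetic accuracy function (which stands in for actually learning the first N parameters) are all assumptions made for illustration.

```python
def count_by_slope(weights, threshold_d):
    """Steps 810-825: number of top-ranked parameters kept before the
    angle difference between adjacent ranks falls below threshold_d."""
    for n in range(len(weights) - 1):
        if weights[n] - weights[n + 1] < threshold_d:
            return n + 1
    return len(weights)

def find_min_params(weights, accuracy_of, threshold_p,
                    threshold_d=0.10, step=0.05):
    """Steps 830-870: move threshold_d up or down, halving the step on
    each direction change, until the smallest parameter count whose
    accuracy passes threshold_p repeats across two passes. Returns that
    count, or None if even all parameters are not accurate enough."""
    n_pre, flag = 0, None                  # step 807; flag unset at start
    while True:
        n = count_by_slope(weights, threshold_d)
        if accuracy_of(n) > threshold_p:   # step 833: accuracy passes
            if n == n_pre:                 # step 850: same N as before
                return n                   # step 855: usable model
            if flag == 0:                  # direction change (step 863)
                step /= 2
            threshold_d += step            # step 865: fewer parameters
            flag, n_pre = 1, n             # step 870
        else:                              # accuracy too low
            if n == len(weights):          # steps 835/837: unusable
                return None
            if flag == 1:                  # direction change (step 843)
                step /= 2
            threshold_d -= step            # step 845: more parameters
            flag, n_pre = 0, n             # step 847

# Hypothetical weights in rank order, and a toy accuracy curve that
# needs at least 4 learned parameters to exceed thresholdP = 0.85.
weights = [0.30, 0.22, 0.16, 0.12, 0.09, 0.07, 0.04]
accuracy = lambda n: min(1.0, 0.5 + 0.1 * n)
print(find_min_params(weights, accuracy, threshold_p=0.85))   # prints 4
```

The run oscillates exactly as in the 50-then-45 example above: it first overshoots to all seven parameters, then cuts back too far, and the halved step sizes let it settle on the minimum count (here 4) that still passes thresholdP.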
-
FIG. 9 illustrates the operation of determining whether to share state related information with another device according to an embodiment of the present invention. - The device may determine whether the state model is updated (900). That is, the device can determine whether a new state model has been created using newly accumulated data logs. Thereafter, the device may determine whether the newly generated state model is a usable model (910). That is, the device can perform the operation described in
FIG. 8 on the newly generated state model to determine whether it is a usable model. If the newly generated state model is not a usable model, it is not shared and the procedure proceeds to step 900, at which the device may check whether a new state model is generated. If the newly generated state model is a usable model, the device may share the parameters selected from the state model with a remote device via the remote communication unit (920). -
FIG. 10 illustrates an operation by which the device produces a prediction result on the state according to an embodiment of the present invention.
- More specifically, the device can produce the final prediction result of the local device on the basis of the reliability or accuracy of the state models generated in the local device and the remote device. That is, a state model with a high reliability or accuracy has a large influence on the final prediction result, and a state model with a low reliability or accuracy has a small influence on the final prediction result. In addition, if the reliability of a state model does not exceed a preset threshold, it may be regarded as an inaccurate model and not be used for calculating the prediction result.
- The device may obtain state related information, model related information, and state prediction results from the local device and the remote device (1000). The device may generate a state model based on locally collected data logs, and use the prediction unit to calculate a prediction result based on the state related information of the state model. In addition, the device may receive state related information of the state model generated in the remote device, and may use the prediction unit to calculate a prediction result based on the state related information of the remote device.
- Then, the device may set initial values by setting w (cumulative weight) to 0, setting N (index of a state model) to 1, and setting p (cumulative prediction result) to 0 (1010). Among the state models generated in the local device and the remote device, the device may utilize only those whose reliability is greater than thresholdR (a preset threshold for reliability). The device may determine whether the reliability (or accuracy) of model N is greater than thresholdR (1020). If the reliability of model N is less than thresholdR, the device may increase the model index by one (1025) and may determine whether the reliability of the next model is greater than thresholdR (1020).
- If the reliability of model N is greater than thresholdR at
step 1020, the device may compute the cumulative prediction result p (1030). That is, p = p + (reliability of model N) × (prediction result of model N). Thereafter, the device may calculate the cumulative weight w (1040). That is, w = w + reliability (accuracy) of model N.
- For example, assume that the reliability of [
model 1, model 2, model 3] is [0.9, 0.2, 0.1] and that the prediction result is [0.3, 0.9, 0.8]. Assume that thresholdR is 0.1. Here, model 1 has a reliability much higher than thresholdR, and it has a large influence on calculating the prediction result. Model 2 has a reliability a little higher than thresholdR, and it has a small influence on calculating the prediction result. Model 3 has a reliability not higher than thresholdR, and it has no influence on calculating the prediction result. In this case, the cumulative prediction result p may be computed as shown in Equation 1 below.
-
p = 0.9 (reliability of model 1) × 0.3 (prediction result of model 1) + 0.2 (reliability of model 2) × 0.9 (prediction result of model 2) = 0.45 (Equation 1)
- The cumulative weight w may be computed as shown in
Equation 2 below. -
w = 0.9 (reliability of model 1) + 0.2 (reliability of model 2) = 1.1 (Equation 2)
- Hence, the final prediction result (p/w) is 0.45/1.1 ≈ 0.41.
- As described above, the final prediction result is 0.41. It can be seen that
model 1 with higher reliability than model 2 affects the prediction result and the final prediction result is closer to the prediction result (0.3) of model 1 than the prediction result (0.9) of model 2.
-
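The reliability-weighted combination of FIG. 10 and Equations 1 and 2 can be sketched as follows; the function name and data layout are our own, but the numbers are the example from the description:

```python
def combine_predictions(models, threshold_r):
    """FIG. 10: reliability-weighted average of per-model predictions,
    skipping models whose reliability does not exceed threshold_r."""
    p = w = 0.0                            # step 1010 initial values
    for reliability, prediction in models:
        if reliability > threshold_r:      # step 1020 reliability gate
            p += reliability * prediction  # step 1030 cumulative result
            w += reliability               # step 1040 cumulative weight
    return p / w if w else None            # step 1060 final result

# Models 1-3 from the example: reliabilities [0.9, 0.2, 0.1],
# predictions [0.3, 0.9, 0.8], thresholdR = 0.1 (model 3 is excluded).
final = combine_predictions([(0.9, 0.3), (0.2, 0.9), (0.1, 0.8)], 0.1)
print(round(final, 2))   # prints 0.41, matching Equations 1 and 2
```

Because model 3's reliability does not exceed thresholdR, it contributes to neither p nor w, and the highly reliable model 1 pulls the result toward its own prediction of 0.3.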
FIG. 11 illustrates a feedback operation performed by the device based on the prediction result according to an embodiment of the present invention. - The device can generate a feedback value using the state prediction result of the local device and the reliability of the prediction result in
FIG. 10 , and determine whether to execute the feedback operation using the feedback value. Whether to execute the feedback operation may include determining whether to change the configuration of the device. - More specifically, the device can obtain the state prediction result and the reliability of the prediction result of the local device derived as in
FIG. 10 (1100). The reliability of the prediction result can be obtained by dividing the cumulative weight w by the number of models. Thereafter, the device may determine a feedback value f (1110). The feedback value f is a criterion value for the device to determine whether to perform the feedback operation. The device can determine the feedback value f using Equation 3 below.
-
Feedback value (f) = prediction result × reliability of prediction result + (1 − prediction result) × (1 − reliability of prediction result) (Equation 3)
- The device may determine whether the determined feedback value f is greater than or equal to a preset threshold thresholdf (1120). If the feedback value f is less than thresholdf, the procedure can be terminated without changing the configuration of the device. If the determined feedback value f is greater than or equal to thresholdf, the device may change the configuration of the device based on the feedback value f (1130).
- For example, assume that the device is a base station and the failure state of a fan is to be predicted. When the prediction result for the failure of the fan is 0.9, the reliability of the prediction result is 0.8, and thresholdf is 0.7, the feedback value f is given as follows.
-
f = 0.9 (prediction result) × 0.8 (reliability of prediction result) + (1 − 0.9) × (1 − 0.8) = 0.72 + 0.02 = 0.74
- As this value is greater than thresholdf, the device can determine that a fan failure will occur with a high probability, and it can decide to change the hardware or software settings of the base station. By determining the feedback value using the reliability of the current prediction result, the device can make more accurate state predictions and prepare for them in real time.
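Equation 3 and the fan-failure example can be sketched as follows (illustrative only; the function name is an assumption, while the numbers come from the description):

```python
def feedback_value(prediction, reliability):
    """Equation 3: f is high when a confident model predicts the event,
    and also when an unconfident model predicts its absence."""
    return prediction * reliability + (1 - prediction) * (1 - reliability)

# Fan-failure example: prediction 0.9, reliability 0.8, thresholdf 0.7.
f = feedback_value(0.9, 0.8)
print(round(f, 2), f >= 0.7)   # prints 0.74 True
```

Since 0.74 exceeds thresholdf = 0.7, the device would proceed with the configuration change at step 1130.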
- The above-described method, in which state related information including parameters selected based on the state model is shared between devices having similar characteristics and the state of the local device is predicted based on the shared state related information, can be applied as follows.
- For example, to distribute network resources in real time by using the present invention, base stations installed in the same cell site, i.e., the same physical area, may monitor their available resources and terminal traffic. If the terminal traffic amount exceeds a given threshold, the base stations may perform resource migration. In this way, the base station can keep the quality of experience (QoE) of the user at a high level. By using the information about small states related to component resources of the base station, it is possible to predict and prepare for a large problem in advance while operating the network apparatus. As a result, it is possible to reduce the operation cost of the base station and to maintain the stability of the apparatus.
- To optimize the power consumption of the base stations by utilizing smart grid technology, the present invention can be utilized to introduce a rechargeable battery structure according to the peak/off-peak hours and the hourly power charges. More specifically, a parameter for the number of terminals served per hour can be used. In addition, parameters for physical resource block (PRB) usage amount, terminal data throughput, cell load, hardware component load, and backhaul load can be used. Additionally, parameters for the number of neighbor base stations can be used to account for the power consumed when the base station uses interference control modules. As the strength of a signal transmitted to the terminal can be influenced by the channel condition, parameters for the channel state can be used. The patterns of power consumption (voltage, ampere, ohm, interference) can be predicted based on the above parameters, and base station resources (radio power, frequency bandwidth) can be changed as needed based on the predicted power usage patterns. Hence, it is possible to operate fewer base stations, to reduce unnecessary facility investment, and to reduce the power operation cost as needed.
- To optimize the network throughput performance of terminals, the base station may monitor radio information (e.g., reference signal received power (RSRP), reference signal received quality (RSRQ), channel quality indication (CQI)) and expected quality of experience (QoE). Thereby, performance of the terminals can be optimized through effectively allocated resources. Here, expected radio information and quality of experience (QoE) values of the terminals can be determined according to the billing plan, terminal type, and traffic pattern. For example, if the user subscribed to an expensive billing plan requires a terminal with a modem chipset supporting the high capacity downlink and a low latency application service, the base station can identify the available resources and the expected resources in the future in advance and perform resource migration to the base station with the highest performance.
- To maintain the connectivity of the terminals, a high seamless handover rate, and high transparency, the present invention can be used to enable the base stations connected with the terminals to predict software and hardware states. The base station may monitor the operation of its various software modules and hardware modules internally (e.g., available resources and stability) and make a prediction on normal operation. If a particular component exhibits an anomalous condition (e.g., optical error, communication error, memory error, fan error, memory full, CPU (central processing unit) full, DSP (digital signal processing) error), the base station can hand over the terminals to a normally operating base station.
- In the 5G communication system, a next-generation network that has become popular in recent years, network function virtualization (NFV) technology is utilized in conjunction with software defined networking (SDN) technology; the present invention may be applied to flexibly steer network traffic through the central SDN controller and to flexibly drive the apparatus resources as needed through NFV technology. For example, deep packet inspection technology installed on a base station may be used based on data logs for parameters such as bandwidth, throughput, latency, jitter, and error rate between base stations. Thereby, the user application content and association patterns can be extracted. The extraction results can be sent to the SDN controller, which may result in higher user QoS and QoE and reduced operator capital expenditure and operational expenditure (CapEx/OpEx).
- In the embodiments described above, all steps and messages may be subject to selective execution or may be subject to omissions. The steps in each embodiment need not occur in the listed order, but may be reversed. Messages need not necessarily be delivered in order, but the order may be reversed. Individual steps and messages can be performed and delivered independently.
- Some or all of the tables in the above-described embodiments are shown to illustrate embodiments of the present invention and to facilitate understanding. Hence, the details of the tables can be regarded as representing a part of the method and apparatus proposed by the present invention. That is, a semantic approach to the contents of the tables herein may be more desirable than a syntactic approach.
- Hereinabove, various embodiments of the present invention have been shown and described for the purpose of illustration without limiting the subject matter of the present invention. It should be understood by those skilled in the art that many variations and modifications of the method and apparatus described herein will still fall within the spirit and scope of the present invention as defined in the appended claims and their equivalents.
Claims (20)
1. A method performed by a device of a base station (BS), the method comprising:
obtaining local data and a state model for local learning to predict a state of the device;
performing the local learning to update the state model using the local data based on a machine learning algorithm;
obtaining information for determining the state of the device using the updated state model, based on the local learning; and
transmitting, to a remote device through communication circuitry of the device, first state related information including the information for determining the state of the device using the updated state model,
wherein the state of the device is associated with at least one of a power consumption of the device, a resource usage of the device related to frequency or time resources, an abnormal operation of the BS for detecting at least one error occurring in the device, or a network throughput performance of at least one terminal.
2. The method of claim 1 , further comprising:
receiving, from the remote device through the communication circuitry of the device, second state related information for predicting the state of the device.
3. The method of claim 2 , wherein the performing of the local learning comprises:
performing the local learning to update the state model based on the second state related information.
4. The method of claim 2 , wherein the performing of the local learning further comprises:
updating the state model based on the first state related information and the second state related information.
5. The method of claim 1 ,
wherein the information for determining the state of the device comprises at least one parameter for updating the state model among a plurality of parameters for the state model, and
wherein the number of the at least one parameter is smaller than a total number of the plurality of parameters for the state model.
6. The method of claim 5 ,
wherein the information for determining the state of the device comprises weight information of the at least one parameter, and
wherein the at least one parameter and the weight information are used to determine the state of the BS based on the updated state model in the remote device.
7. The method of claim 1 , wherein the transmitting of the first state related information comprises:
determining whether the remote device belongs to a shared group for the device; and
in case that the remote device belongs to the shared group, transmitting the first state related information to the remote device.
8. The method of claim 1 , wherein the at least one error comprises at least one of a communication error, a memory error, a fan error, a memory full error, a central processing unit (CPU) full error, or a digital signal processing (DSP) error.
9. The method of claim 1 , further comprising:
receiving, from the remote device through the communication circuitry of the device, information associated with the state model for the local learning in the device.
10. The method of claim 1 , wherein the local data comprises sensor data.
11. A device of a base station (BS), comprising:
communication circuitry; and
a processor configured to:
obtain local data and a state model for local learning to predict a state of the device,
perform the local learning to update the state model using the local data based on a machine learning algorithm,
obtain information for determining the state of the device using the updated state model based on the local learning, and
control the communication circuitry to transmit, to a remote device, first state related information including the information for determining the state of the device using the updated state model,
wherein the state of the device is associated with at least one of a power consumption of the device, a resource usage of the device related to frequency or time resources, an abnormal operation of the BS for detecting at least one error occurring in the device, or a network throughput performance of at least one terminal.
12. The device of claim 11 , wherein the processor is further configured to control the communication circuitry to receive, from the remote device, second state related information for predicting the state of the device.
13. The device of claim 12 , wherein, to perform the local learning, the processor is configured to:
perform the local learning to update the state model based on the second state related information.
14. The device of claim 12 , wherein, to perform the local learning, the processor is configured to:
update the state model based on the first state related information and the second state related information.
15. The device of claim 11 ,
wherein the information for determining the state of the device comprises at least one parameter for updating the state model among a plurality of parameters for the state model, and
wherein the number of the at least one parameter is smaller than a total number of the plurality of parameters for the state model.
16. The device of claim 15 ,
wherein the information for determining the state of the device comprises weight information of the at least one parameter, and
wherein the at least one parameter and the weight information are used to determine the state of the BS based on the updated state model in the remote device.
17. The device of claim 11 , wherein, to transmit the first state related information, the processor is configured to:
determine whether the remote device belongs to a shared group for the device, and
in case that the remote device belongs to the shared group, transmit the state related information to the remote device.
18. The device of claim 11 , wherein the at least one error comprises at least one of a communication error, a memory error, a fan error, a memory full error, a central processing unit (CPU) full error, or a digital signal processing (DSP) error.
19. The device of claim 11 , wherein the processor is further configured to control the communication circuitry to receive, from the remote device, information associated with the state model for the local learning in the device.
20. An electronic device comprising:
a memory configured to store instructions,
wherein, when the instructions are executed on a device of a base station (BS), the instructions cause the device to:
obtain local data and a state model for local learning to predict a state of the device,
perform the local learning to update the state model using the local data based on a machine learning algorithm,
obtain information for determining the state of the device using the updated state model based on the local learning, and
transmit, to a remote device through communication circuitry of the device, first state related information including the information for determining the state of the device using the updated state model, and
wherein the state of the device is associated with at least one of a power consumption of the device, a resource usage of the device related to frequency or time resources, an abnormal operation of the BS for detecting at least one error occurring in the device, or a network throughput performance of at least one terminal.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US18/462,065 US20230422058A1 (en) | 2015-11-20 | 2023-09-06 | Method and apparatus of sharing information related to status |
Applications Claiming Priority (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR10-2015-0163556 | 2015-11-20 | ||
KR1020150163556A KR102292990B1 (en) | 2015-11-20 | 2015-11-20 | Method and apparatus of sharing information related to status |
PCT/KR2016/013347 WO2017086739A1 (en) | 2015-11-20 | 2016-11-18 | Method and device for sharing state related information |
US201815776528A | 2018-05-16 | 2018-05-16 | |
US17/105,840 US11758415B2 (en) | 2015-11-20 | 2020-11-27 | Method and apparatus of sharing information related to status |
US18/462,065 US20230422058A1 (en) | 2015-11-20 | 2023-09-06 | Method and apparatus of sharing information related to status |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/105,840 Continuation US11758415B2 (en) | 2015-11-20 | 2020-11-27 | Method and apparatus of sharing information related to status |
Publications (1)
Publication Number | Publication Date |
---|---|
US20230422058A1 true US20230422058A1 (en) | 2023-12-28 |
Family
ID=58719106
Family Applications (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/776,528 Active 2036-12-20 US10880758B2 (en) | 2015-11-20 | 2016-11-18 | Method and device for sharing state related information |
US17/105,840 Active 2037-03-14 US11758415B2 (en) | 2015-11-20 | 2020-11-27 | Method and apparatus of sharing information related to status |
US18/462,065 Pending US20230422058A1 (en) | 2015-11-20 | 2023-09-06 | Method and apparatus of sharing information related to status |
Family Applications Before (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/776,528 Active 2036-12-20 US10880758B2 (en) | 2015-11-20 | 2016-11-18 | Method and device for sharing state related information |
US17/105,840 Active 2037-03-14 US11758415B2 (en) | 2015-11-20 | 2020-11-27 | Method and apparatus of sharing information related to status |
Country Status (5)
Country | Link |
---|---|
US (3) | US10880758B2 (en) |
EP (2) | EP3654689A1 (en) |
KR (1) | KR102292990B1 (en) |
CN (1) | CN108293005B (en) |
WO (1) | WO2017086739A1 (en) |
Families Citing this family (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2018173121A1 (en) * | 2017-03-21 | 2018-09-27 | 株式会社Preferred Networks | Server device, trained model providing program, trained model providing method, and trained model providing system |
WO2018210876A1 (en) * | 2017-05-16 | 2018-11-22 | Tellmeplus | Process and system for remotely generating and transmitting a local device state predicting method |
US10817757B2 (en) | 2017-07-31 | 2020-10-27 | Splunk Inc. | Automated data preprocessing for machine learning |
US10715633B2 (en) * | 2018-01-10 | 2020-07-14 | Cisco Technology, Inc. | Maintaining reachability of apps moving between fog and cloud using duplicate endpoint identifiers |
US10735274B2 (en) * | 2018-01-26 | 2020-08-04 | Cisco Technology, Inc. | Predicting and forecasting roaming issues in a wireless network |
JP6984036B2 (en) * | 2018-09-28 | 2021-12-17 | 三菱電機株式会社 | Server equipment, data distribution system, data provision method, and program |
US11169239B2 (en) * | 2018-09-28 | 2021-11-09 | Intel Corporation | Methods and apparatus to trigger calibration of a sensor node using machine learning |
US10602383B1 (en) | 2018-10-15 | 2020-03-24 | Microsoft Technology Licensing Llc | Application of machine learning for building predictive models enabling smart fail over between different network media types |
WO2020126043A1 (en) * | 2018-12-21 | 2020-06-25 | Telefonaktiebolaget Lm Ericsson (Publ) | Methods, apparatus and machine-readable mediums relating to power failure notifications in a communication network |
US11113653B2 (en) * | 2018-12-26 | 2021-09-07 | Accenture Global Solutions Limited | Artificial intelligence and machine learning based incident management |
KR102386382B1 (en) * | 2019-01-09 | 2022-04-14 | 삼성전자 주식회사 | A method and apparatus for forecasting saturation of cell capacity in a wireless communication system |
US11277499B2 (en) * | 2019-09-30 | 2022-03-15 | CACI, Inc.—Federal | Systems and methods for performing simulations at a base station router |
CN112886996B (en) * | 2019-11-29 | 2024-08-20 | 北京三星通信技术研究有限公司 | Signal receiving method, user equipment, electronic equipment and computer storage medium |
WO2021126019A1 (en) | 2019-12-16 | 2021-06-24 | Telefonaktiebolaget Lm Ericsson (Publ) | Configuring network nodes in communication network |
KR20220132934A (en) * | 2021-03-24 | 2022-10-04 | 삼성전자주식회사 | Electronic device and operation method thereof |
KR102476700B1 (en) * | 2021-05-13 | 2022-12-12 | 서울대학교산학협력단 | Wireless distributed learning system including abnormal terminal and method of operation thereof |
KR102536005B1 (en) * | 2021-12-23 | 2023-05-26 | 광운대학교 산학협력단 | Method and Apparatus for Setting Reception Threshold in Backscatter Communication |
US12035286B2 (en) * | 2022-01-14 | 2024-07-09 | Qualcomm Incorporated | Gradient accumulation for federated learning |
CN115003140B (en) * | 2022-08-04 | 2022-11-08 | 浩鲸云计算科技股份有限公司 | Cooperative control energy-saving method for tail end air conditioner of water cooling unit of data center machine room |
US20240098533A1 (en) * | 2022-09-15 | 2024-03-21 | Samsung Electronics Co., Ltd. | Ai/ml model monitoring operations for nr air interface |
Family Cites Families (30)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5995805A (en) | 1997-10-17 | 1999-11-30 | Lockheed Martin Missiles & Space | Decision-theoretic satellite communications system |
US6353902B1 (en) | 1999-06-08 | 2002-03-05 | Nortel Networks Limited | Network fault prediction and proactive maintenance system |
US6892317B1 (en) | 1999-12-16 | 2005-05-10 | Xerox Corporation | Systems and methods for failure prediction, diagnosis and remediation using data acquisition and feedback for a distributed electronic system |
US7130805B2 (en) | 2001-01-19 | 2006-10-31 | International Business Machines Corporation | Method and apparatus for generating progressive queries and models for decision support |
US7164919B2 (en) * | 2002-07-01 | 2007-01-16 | Qualcomm Incorporated | Scheduling of data transmission for terminals with variable scheduling delays |
DE10251993B4 (en) * | 2002-11-06 | 2012-09-27 | Actix Gmbh | Method and apparatus for optimizing cellular wireless communication networks |
JP4333331B2 (en) | 2002-12-20 | 2009-09-16 | セイコーエプソン株式会社 | Failure prediction system, failure prediction program, and failure prediction method |
JP3922375B2 (en) | 2004-01-30 | 2007-05-30 | インターナショナル・ビジネス・マシーンズ・コーポレーション | Anomaly detection system and method |
JP4626852B2 (en) | 2005-07-11 | 2011-02-09 | 日本電気株式会社 | Communication network failure detection system, communication network failure detection method, and failure detection program |
US7865089B2 (en) | 2006-05-18 | 2011-01-04 | Xerox Corporation | Soft failure detection in a network of devices |
US8463297B2 (en) | 2007-12-27 | 2013-06-11 | Trueposition, Inc. | Subscriber selective, area-based service control |
JP5055221B2 (en) * | 2008-08-01 | 2012-10-24 | 株式会社エヌ・ティ・ティ・ドコモ | Mobile communication method and operation device |
US8437764B2 (en) * | 2009-01-05 | 2013-05-07 | Nokia Siemens Networks Oy | Determining an optimized configuration of a telecommunication network |
JP5228119B2 (en) | 2009-03-13 | 2013-07-03 | テレフオンアクチーボラゲット エル エム エリクソン(パブル) | Base station energy consumption management |
EP2474146B1 (en) | 2009-08-31 | 2013-06-05 | Telefonaktiebolaget LM Ericsson (publ) | Methods, base station and wireless communication system |
CN102668620A (en) * | 2009-11-11 | 2012-09-12 | 日本电气株式会社 | Wireless communication system, autonomous optimization system, wireless base station, and wireless parameter setting method |
FI123336B (en) * | 2009-12-23 | 2013-02-28 | 7Signal Oy | A method for monitoring and intelligently adjusting radio network parameters |
JP2011215964A (en) * | 2010-03-31 | 2011-10-27 | Sony Corp | Server apparatus, client apparatus, content recommendation method and program |
WO2011158298A1 (en) | 2010-06-17 | 2011-12-22 | 富士通株式会社 | Communication apparatus, control apparatus and transmission parameter adjustment method |
JP2012004997A (en) * | 2010-06-18 | 2012-01-05 | Kyocera Corp | Wireless communication system, wireless base station, and power consumption control method |
US9480078B2 (en) | 2011-06-10 | 2016-10-25 | Telefonaktiebolaget Lm Ericsson (Publ) | Closed control loop for uplink scheduling |
CN103369539B (en) * | 2012-04-06 | 2016-10-05 | 华为技术有限公司 | The method and apparatus of interference coordination |
CN103384372B (en) * | 2012-05-03 | 2016-08-10 | 华为技术有限公司 | A kind of optimize network capacity and cover compromise method, Apparatus and system |
US9119179B1 (en) * | 2012-06-06 | 2015-08-25 | Bae Systems Information And Electronic Systems Integration Inc. | Skypoint for mobile hotspots |
US8983486B2 (en) * | 2013-03-15 | 2015-03-17 | Blackberry Limited | Statistical weighting and adjustment of state variables in a radio |
DE102014205391A1 (en) * | 2014-03-24 | 2015-09-24 | Bayerische Motoren Werke Aktiengesellschaft | Device for predicting driving state transitions |
EP2947910B1 (en) * | 2014-05-23 | 2017-06-07 | Accenture Global Services Limited | Performance optimizations for wireless access points |
CN106664254A (en) * | 2014-08-21 | 2017-05-10 | 七网络有限责任公司 | Optimizing network traffic management in a mobile network |
US10327159B2 (en) * | 2014-12-09 | 2019-06-18 | Futurewei Technologies, Inc. | Autonomous, closed-loop and adaptive simulated annealing based machine learning approach for intelligent analytics-assisted self-organizing-networks (SONs) |
US9432901B1 (en) * | 2015-07-24 | 2016-08-30 | Cisco Technology, Inc. | System and method to facilitate radio access point load prediction in a network environment |
- 2015
  - 2015-11-20 KR KR1020150163556A patent/KR102292990B1/en active IP Right Grant
- 2016
  - 2016-11-18 EP EP19205827.9A patent/EP3654689A1/en active Pending
  - 2016-11-18 CN CN201680067656.2A patent/CN108293005B/en active Active
  - 2016-11-18 WO PCT/KR2016/013347 patent/WO2017086739A1/en active Application Filing
  - 2016-11-18 US US15/776,528 patent/US10880758B2/en active Active
  - 2016-11-18 EP EP16866691.5A patent/EP3352414B1/en active Active
- 2020
  - 2020-11-27 US US17/105,840 patent/US11758415B2/en active Active
- 2023
  - 2023-09-06 US US18/462,065 patent/US20230422058A1/en active Pending
Also Published As
Publication number | Publication date |
---|---|
KR20170059315A (en) | 2017-05-30 |
WO2017086739A1 (en) | 2017-05-26 |
EP3654689A1 (en) | 2020-05-20 |
US20210084505A1 (en) | 2021-03-18 |
US20180332483A1 (en) | 2018-11-15 |
US11758415B2 (en) | 2023-09-12 |
EP3352414A4 (en) | 2018-07-25 |
EP3352414A1 (en) | 2018-07-25 |
CN108293005B (en) | 2022-03-29 |
KR102292990B1 (en) | 2021-08-26 |
CN108293005A (en) | 2018-07-17 |
US10880758B2 (en) | 2020-12-29 |
EP3352414B1 (en) | 2019-10-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11758415B2 (en) | Method and apparatus of sharing information related to status | |
US10327159B2 (en) | Autonomous, closed-loop and adaptive simulated annealing based machine learning approach for intelligent analytics-assisted self-organizing-networks (SONs) | |
US10382979B2 (en) | Self-learning, adaptive approach for intelligent analytics-assisted self-organizing-networks (SONs) | |
CN107113635B (en) | Method and apparatus for determining cell status to adjust antenna configuration parameters | |
US9225652B2 (en) | Framework for traffic engineering in software defined networking | |
EP3000204B1 (en) | System and methods for multi-objective cell switch-off in wireless networks | |
US9432257B2 (en) | Traffic behavior driven dynamic zoning for distributed traffic engineering in SDN | |
US20160165472A1 (en) | Analytics assisted self-organizing-network (SON) for coverage capacity optimization (CCO) | |
TWI437895B (en) | Apparatus and method for determining a core network configuration of a wireless communication system | |
CN111466103B (en) | Method and system for generation and adaptation of network baselines | |
CN112042219A (en) | Radio access network controller method and system for optimizing load balancing between frequencies | |
US20220400394A1 (en) | Monitoring the Performance of a Plurality of Network Nodes | |
WO2020079678A1 (en) | Managing a cellular network using machine learning to classify the access points | |
WO2018162046A1 (en) | Protecting kpi during optimization of self-organizing network | |
Omar et al. | Downlink spectrum allocation in 5g hetnets | |
Virdis et al. | A Practical Framework for Energy‐Efficient Node Activation in Heterogeneous LTE Networks | |
CN109495402B (en) | Resource optimization method for minimizing physical layer resources of network function virtualization | |
EP4449687A1 (en) | Distributed machine learning | |
Lai et al. | A Multidepth Load‐Balance Scheme for Clusters of Congested Cells in Ultradense Cellular Networks | |
CN118368704A (en) | Optimizing power usage |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| STPP | Information on status: patent application and granting procedure in general | Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |