CN104536939A - Method for configurable energy-saving dispatching of multi-core embedded cache - Google Patents
- Publication number
- CN104536939A (application number CN201410755519.0A)
- Authority
- CN
- China
- Prior art keywords
- cache
- performance
- fairness
- energy
- embedded system
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Landscapes
- Memory System Of A Hierarchy Structure (AREA)
Abstract
The invention discloses a method for configurable energy-saving scheduling of a multi-core embedded cache. The method comprises the steps of setting the application performance monitoring parameters of the multi-core embedded system cache, performing algorithm optimization and improvement on the optimal-configuration method of the multi-core embedded system cache, and simulating the changes of performance indexes under different cache configurations to achieve the most reasonable optimized performance match. The method exploits the hardware performance to the maximum extent, improves the data management capability of the multi-core embedded system, and improves the overall power-consumption performance.
Description
Technical field
The invention belongs to the field of embedded system technology, and in particular relates to a method for configurable energy-saving scheduling of a multi-core embedded cache memory.
Background technology
With the development of society and the progress of science and technology, embedded systems, with their distinctive advantages, are applied ever more widely in every field of life. From high-end application domains such as network computing, communications electronics and avionics, down to low-end handheld devices such as MP5 players and e-book readers, consumers invariably demand faster processing speed, higher working efficiency and stronger power-supply capability. These trends have made practitioners in the embedded field realize that simply raising the clock frequency of the embedded processor cannot multiply the overall performance of the embedded system; techniques such as parallel operation, speculative execution and pipelining do not improve proportionally as the frequency rises, and frequency scaling alone cannot keep pace with the growing requirements of embedded data, image, audio and video processing. For a single-core embedded system, every 50% increase in frequency costs roughly twice the energy consumption, whereas a dual-core embedded system achieves the gain with only about a 30% increase in energy consumption, so energy consumption plays a vital role in embedded systems.
As an integrated component of high-end embedded systems, the cache memory (Cache) improves performance, but its introduction also raises new, hard-to-predict problems for reducing energy consumption. Current embedded processor design relies on providing a sufficiently large chip area for the cache to balance die size, chip performance and low power consumption; recent application-specific processor platforms (such as Tensilica's Xtensa platform) even allow the cache to be customized. In existing low-power embedded processors, the cache accounts for a very large share of the energy consumption; the literature [A 160MHz, 32b, 0.5W CMOS RISC microprocessor. Montanaro et al. JSSC, 1996, 31(11): 1703-1712] shows that the cache consumes about 43% of the total processor energy. Therefore, for an application program that runs repeatedly on a processor, or for a class of embedded systems with a configurable cache, the correct cache configuration is undoubtedly a crucial factor, and selecting it usually requires comparison and analysis from several aspects, such as cache hit rate, processor energy consumption and program execution time. The literature [W. Fornaciari et al. A Design Framework to Efficiently Explore Energy-Delay Tradeoffs. CODES, 2001, pp. 260-265; X. Vera et al. A Fast and Accurate Framework to Analyze and Optimize Cache Memory Behavior. ACM TOPLAS, 2004, 26(2): 263-300] mainly explores the cache parameter design space by heuristically estimating the cache hit ratio, while the literature [J. Edler and M. D. Hill. Dinero IV Trace-Driven Uniprocessor Cache Simulator] evaluates a single cache configuration accurately with a simulation tool. These methods all study the cache performance of embedded systems from a particular angle. Research on multi-core processor caches is also well developed domestically. For example, the literature [Design and analysis of real-time scheduling policies in multi-core systems [D]. Deng Qingxu. Northeastern University, 2009] studies cache-communication-aware real-time scheduling policies in depth and implements a multi-core real-time scheduling prototype system; the literature [Research on operating system scheduling for chip multithreaded processors [J]. Shao Lisong, Kong Jinzhu, Dai Huadong. Computer Engineering, 2009, 35(15): 277-279] discusses the load-balancing problem inside a multi-core processor theoretically and uses cooperative scheduling to avoid cache thrashing; the literature [Xiong Wei, Yin Jianping, Suo Guang, Zhao Zhiheng. A low-power-oriented shared cache partitioning scheme for multi-core processors. Computer Engineering and Science, 2010, 32(10): 26-29] studies low-power-oriented partitioning of the shared cache in multi-core processors: a miss-rate monitor added to the processor collects program miss rates dynamically, and a low-power-oriented shared cache partitioning algorithm then computes a shared cache partitioning strategy within a performance-loss threshold.
Existing single-core and dual-core embedded systems incur large energy consumption when increasing processing speed.
Summary of the invention
The object of the embodiments of the present invention is to provide a method for configurable energy-saving scheduling of a multi-core embedded cache memory, intended to solve the problem that existing single-core and dual-core embedded systems incur large energy consumption when increasing processing speed.
The embodiments of the present invention are realized as follows. A method for configurable energy-saving scheduling of a multi-core embedded cache memory comprises: setting the application performance monitoring parameters of the multi-core embedded system cache memory; performing algorithm optimization and improvement on the optimal-configuration method of the multi-core embedded system cache memory; and simulating the changes of performance indexes under different cache configurations to achieve the most reasonable optimized performance match.
Setting the application performance monitoring parameters of the multi-core embedded system cache memory means using a computer process to set the application performance monitoring parameters of the cache repeatedly, so as to obtain the best optimization parameters.
Performing algorithm optimization and improvement on the optimal-configuration method of the multi-core embedded system cache memory means taking the optimized monitoring parameters as input to set up the optimal-configuration method of the cache, and using a computer program to optimize and improve the algorithm of that method, so as to obtain the optimal configuration method.
Simulating the changes of performance indexes under different cache configurations means using the optimal configuration method to run simulation experiments on the changes of the indexes under different cache configurations, obtaining different experimental data and selecting the best experimental result.
Achieving the most reasonable optimized performance match means selecting, from the above simulation results, the configuration with the smallest energy consumption and using it to build the actual project, thereby realizing the most reasonable optimized performance match. An illustrative search loop over candidate configurations is sketched below.
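By way of illustration only, and not as part of the claimed method, the selection among simulated configurations can be organized as a simple search loop: every candidate cache configuration is simulated, and the one with the lowest energy consumption that still satisfies the performance requirement is kept. The structure fields, the simulate_config() hook and the performance threshold in the following C sketch are assumptions introduced here for explanation; the patent does not prescribe them.

```c
/*
 * Minimal sketch (assumed, not from the patent text): simulate each candidate
 * cache configuration and keep the one with the lowest energy that still
 * meets the execution-time target.
 */
#include <stddef.h>
#include <float.h>

typedef struct {
    unsigned size_kb;       /* total cache capacity in KB   */
    unsigned assoc;         /* associativity (ways)         */
    unsigned line_bytes;    /* cache line size in bytes     */
} cache_config_t;

typedef struct {
    double energy_mj;       /* simulated energy consumption     */
    double exec_time_ms;    /* simulated program execution time */
} sim_result_t;

/* Hypothetical hook: runs the trace-driven simulator for one configuration. */
sim_result_t simulate_config(const cache_config_t *cfg);

/* Pick the configuration with minimum energy within the allowed run time. */
const cache_config_t *select_best(const cache_config_t *cands, size_t n,
                                  double max_exec_time_ms)
{
    const cache_config_t *best = NULL;
    double best_energy = DBL_MAX;

    for (size_t i = 0; i < n; i++) {
        sim_result_t r = simulate_config(&cands[i]);
        if (r.exec_time_ms <= max_exec_time_ms && r.energy_mj < best_energy) {
            best_energy = r.energy_mj;
            best = &cands[i];
        }
    }
    return best;   /* NULL if no candidate meets the performance target */
}
```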
Further, the step of performing algorithm optimization and improvement on the optimal-configuration method of the multi-core embedded system cache memory comprises: cache dead-block prediction based on performance and fairness; cache access miss; cache prefetching; shared cache partitioning based on performance and fairness; and energy simulation calculation.
The cache dead-block prediction based on performance and fairness means first predicting, within the data, the dead blocks of the cache on the basis of performance and fairness, so as to prepare for cache accesses.
The cache access miss means that, during a cache access, a cache miss may occur.
The cache prefetching means that, after a cache miss occurs, cache prefetching measures are taken.
The shared cache partitioning based on performance and fairness means that, after the cache prefetching, the shared cache is partitioned on the basis of performance and fairness.
The energy simulation calculation means that, using the cache partition, an energy simulation model is set up and an energy simulation calculation is carried out to obtain the optimal calculation result; an illustrative form of such a model is sketched below.
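The patent does not fix the form of the energy simulation model, so the following C sketch only assumes a common cache energy model: dynamic energy per hit and per miss accumulated over the run, plus static leakage over the execution time. The per-access, per-miss and leakage constants are placeholders for illustration.

```c
/*
 * Assumed cache energy model sketch for the energy simulation calculation:
 *   E = hits*E_hit + misses*(E_hit + E_miss_extra) + T*P_leak
 * The constants are placeholders; the patent does not specify the model.
 */
typedef struct {
    double e_hit_nj;        /* dynamic energy of one cache hit (nJ)            */
    double e_miss_nj;       /* extra energy of a miss, incl. memory access (nJ) */
    double p_leak_mw;       /* static leakage power of the cache (mW)          */
} energy_model_t;

double cache_energy_mj(const energy_model_t *m,
                       unsigned long hits, unsigned long misses,
                       double run_time_ms)
{
    double dynamic_nj = hits * m->e_hit_nj
                      + misses * (m->e_hit_nj + m->e_miss_nj);
    double leakage_nj = m->p_leak_mw * run_time_ms * 1000.0; /* mW*ms = uJ -> nJ */
    return (dynamic_nj + leakage_nj) * 1e-6;                 /* nJ -> mJ */
}
```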
The method for configurable energy-saving scheduling of a multi-core embedded cache memory provided by the invention exploits the hardware performance to the maximum extent by adopting a corresponding optimal cache configuration. By simulating different cache configurations and comparing them against multi-core embedded benchmark suites, the correlation between cache configurations is found, and the correctness and validity of the proposed method are verified in terms of the energy consumption of the multi-core embedded system and the per-unit-area utilization of the cache, finally improving the data management capability and the overall power-consumption performance of the multi-core embedded system.
Brief description of the drawings
Fig. 1 is a flow chart of the method for configurable energy-saving scheduling of a multi-core embedded cache memory provided by an embodiment of the present invention;
Fig. 2 is a flow chart of the step of performing algorithm optimization and improvement on the optimal-configuration method of the multi-core embedded system cache memory in the method provided by an embodiment of the present invention;
Fig. 3 is a flow chart of the shared cache partitioning step based on performance and fairness within the algorithm optimization and improvement of the optimal-configuration method of the multi-core embedded system cache memory in the method provided by an embodiment of the present invention.
Embodiment
In order to make the object, technical scheme and advantages of the present invention clearer, the present invention is further elaborated below in conjunction with the embodiments. It should be appreciated that the specific embodiments described herein are only intended to explain the present invention and are not intended to limit the present invention.
The application principle of the present invention is further described below in conjunction with the drawings and the specific embodiments.
As shown in Fig. 1, a method for configurable energy-saving scheduling of a multi-core embedded cache memory comprises: setting the application performance monitoring parameters of the multi-core embedded system cache memory (S101); performing algorithm optimization and improvement on the optimal-configuration method of the multi-core embedded system cache memory (S102); simulating the changes of performance indexes under different cache configurations (S103); and achieving the most reasonable optimized performance match (S104).
Setting the application performance monitoring parameters of the multi-core embedded system cache memory (S101) means using a computer process to set the application performance monitoring parameters of the cache repeatedly, so as to obtain the best optimization parameters.
Performing algorithm optimization and improvement on the optimal-configuration method of the multi-core embedded system cache memory (S102) means taking the optimized monitoring parameters as input to set up the optimal-configuration method of the cache, and using a computer program to optimize and improve the algorithm of that method, so as to obtain the optimal configuration method.
Simulating the changes of performance indexes under different cache configurations (S103) means using the optimal configuration method to run simulation experiments on the changes of the indexes under different cache configurations, obtaining different experimental data and selecting the best experimental result.
Achieving the most reasonable optimized performance match (S104) means selecting, from the above simulation results, the configuration with the smallest energy consumption and using it to build the actual project, thereby realizing the most reasonable optimized performance match.
Further, as shown in Fig. 2, the algorithm optimization and improvement step (S102) of the optimal-configuration method of the multi-core embedded system cache memory comprises: cache dead-block prediction based on performance and fairness (S201); cache access miss (S202); cache prefetching (S203); shared cache partitioning based on performance and fairness (S204); and energy simulation calculation (S205).
The cache dead-block prediction based on performance and fairness (S201) means first predicting, within the data, the dead blocks of the cache on the basis of performance and fairness, so as to prepare for cache accesses; one possible implementation is sketched after these definitions.
The cache access miss (S202) means that, during a cache access, a cache miss may occur.
The cache prefetching (S203) means that, after a cache miss occurs, cache prefetching measures are taken.
The shared cache partitioning based on performance and fairness (S204) means that, after the cache prefetching, the shared cache is partitioned on the basis of performance and fairness.
The energy simulation calculation (S205) means that, using the cache partition, an energy simulation model is set up and an energy simulation calculation is carried out to obtain the optimal calculation result.
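The embodiment does not specify how the dead-block prediction (S201) is realized. Purely as an assumed illustration, the following C sketch shows a simple reference-count-based dead-block predictor: a block whose reference count in the current generation reaches the count observed in its previous generation is predicted dead and becomes a preferred candidate for early eviction or power gating. All names and the counting rule here are assumptions, not part of the claimed method.

```c
/*
 * Assumed dead-block predictor sketch (not specified by the patent): each
 * block remembers the reference count of its previous generation; when the
 * current generation reaches that count, the block is predicted dead.
 */
#include <stdbool.h>

typedef struct {
    unsigned long tag;        /* block tag                                  */
    unsigned last_gen_refs;   /* references seen before the last eviction   */
    unsigned cur_refs;        /* references seen in the current generation  */
    bool     valid;
} block_state_t;

/* Called on every hit to the block; returns true if the block is predicted dead. */
bool on_block_access(block_state_t *b)
{
    b->cur_refs++;
    return b->last_gen_refs != 0 && b->cur_refs >= b->last_gen_refs;
}

/* Called when the block is evicted: remember this generation's reference count. */
void on_block_evict(block_state_t *b)
{
    b->last_gen_refs = b->cur_refs;
    b->cur_refs = 0;
    b->valid = false;
}
```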
As shown in Fig. 3, the shared cache partitioning step based on performance and fairness within the algorithm optimization and improvement of the optimal-configuration method of the multi-core embedded system cache memory comprises the following steps (a code sketch of this loop is given after the list):
Step one: compute the performance-based fairness variable of each thread;
Step two: determine the cache block size that the system can allocate, according to the cache associativity principle;
Step three: confirm the priority of each thread;
Step four: allocate cache blocks to the threads according to the thread priorities;
Step five: calculate the miss-rate fairness metric of each thread according to the cache blocks already allocated to it;
Step six: compare the calculated miss-rate fairness metrics of the threads; if the number of threads is greater than two, select the threads with the maximum and the minimum value;
Step seven: judge whether the difference between the selected maximum and minimum miss-rate fairness metrics is less than the fairness metric threshold; if not, re-allocate the cache assigned to the two threads and repeat steps five and seven;
Step eight: if so, remove these two threads and repeat steps six and seven;
Step nine: when the number of threads is one or zero, the algorithm terminates.
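For clarity of explanation only, the following C sketch illustrates one way steps five to nine could be coded. The flow (recompute the metric, pick the extreme pair, settle or rebalance, terminate at zero or one remaining thread) follows the list above, while miss_rate_fairness() and rebalance_pair() are hypothetical hooks, since the embodiment does not fix the concrete metric or the reallocation rule.

```c
/*
 * Minimal sketch of the fairness-driven partition loop (steps five to nine).
 * miss_rate_fairness() and rebalance_pair() are hypothetical hooks.
 */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

typedef struct {
    unsigned blocks;     /* cache blocks currently allocated to the thread */
    double   fairness;   /* miss-rate fairness metric (step five)          */
    bool     active;     /* still participating in the balancing loop      */
} thread_part_t;

double miss_rate_fairness(const thread_part_t *t);            /* hypothetical */
void   rebalance_pair(thread_part_t *hi, thread_part_t *lo);  /* moves blocks */

void partition_shared_cache(thread_part_t th[], size_t n, double threshold)
{
    size_t active = n;

    while (active >= 2) {
        /* step five: recompute the metric for every active thread */
        for (size_t i = 0; i < n; i++)
            if (th[i].active)
                th[i].fairness = miss_rate_fairness(&th[i]);

        /* step six: pick the threads with the maximum and minimum metric */
        size_t hi = SIZE_MAX, lo = SIZE_MAX;
        for (size_t i = 0; i < n; i++) {
            if (!th[i].active) continue;
            if (hi == SIZE_MAX || th[i].fairness > th[hi].fairness) hi = i;
        }
        for (size_t i = 0; i < n; i++) {
            if (!th[i].active || i == hi) continue;
            if (lo == SIZE_MAX || th[i].fairness < th[lo].fairness) lo = i;
        }

        /* steps seven and eight: settle the pair if within the threshold,
         * otherwise shift cache blocks between them and re-evaluate        */
        if (th[hi].fairness - th[lo].fairness < threshold) {
            th[hi].active = th[lo].active = false;
            active -= 2;
        } else {
            rebalance_pair(&th[hi], &th[lo]);
        }
    }
    /* step nine: zero or one active thread left, the algorithm terminates */
}
```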
The foregoing is only a preferred embodiment of the present invention and is not intended to limit the present invention; any modification, equivalent replacement and improvement made within the spirit and principles of the present invention shall be included within the protection scope of the present invention.
Claims (3)
1. A method for configurable energy-saving scheduling of a multi-core embedded cache memory, characterized in that the method comprises: setting the application performance monitoring parameters of the multi-core embedded system cache memory; performing algorithm optimization and improvement on the optimal-configuration method of the multi-core embedded system cache memory; and simulating the changes of performance indexes under different cache configurations to achieve the most reasonable optimized performance match;
setting the cache memory application performance monitoring parameters means using a computer process to set the application performance monitoring parameters of the multi-core embedded system cache memory repeatedly, so as to obtain the best optimization parameters;
performing algorithm optimization and improvement on the optimal-configuration method of the cache memory means taking the optimized monitoring parameters as input to set up the optimal-configuration method of the multi-core embedded system cache memory, and using a computer program to optimize and improve the algorithm of that method, so as to obtain the optimal configuration method;
simulating the changes of performance indexes under different cache configurations means using the optimal configuration method to run simulation experiments on the changes of the indexes under different cache configurations, obtaining different experimental data and selecting the best experimental result;
achieving the most reasonable optimized performance match means selecting, from the above simulation results, the configuration with the smallest energy consumption and using it to build the actual project, thereby realizing the most reasonable optimized performance match.
2. The method for configurable energy-saving scheduling of a multi-core embedded cache memory as claimed in claim 1, characterized in that the algorithm optimization and improvement step of the optimal-configuration method of the multi-core embedded system cache memory comprises: cache dead-block prediction based on performance and fairness; cache access miss; cache prefetching; shared cache partitioning based on performance and fairness; and energy simulation calculation;
the cache dead-block prediction based on performance and fairness means first predicting, within the data, the dead blocks of the cache on the basis of performance and fairness, so as to prepare for cache accesses;
the cache access miss means that, during a cache access, a cache miss may occur;
the cache prefetching means that, after a cache miss occurs, cache prefetching measures are taken;
the shared cache partitioning based on performance and fairness means that, after the cache prefetching, the shared cache is partitioned on the basis of performance and fairness;
the energy simulation calculation means that, using the cache partition, an energy simulation model is set up and an energy simulation calculation is carried out to obtain the optimal calculation result.
3. The method for configurable energy-saving scheduling of a multi-core embedded cache memory as claimed in claim 1, characterized in that the shared cache partitioning step based on performance and fairness within the algorithm optimization and improvement of the optimal-configuration method of the multi-core embedded system cache memory comprises:
step one: computing the performance-based fairness variable of each thread;
step two: determining the cache block size that the system can allocate, according to the cache associativity principle;
step three: confirming the priority of each thread;
step four: allocating cache blocks to the threads according to the thread priorities;
step five: calculating the miss-rate fairness metric of each thread according to the cache blocks already allocated to it;
step six: comparing the calculated miss-rate fairness metrics of the threads, and if the number of threads is greater than two, selecting the threads with the maximum and the minimum value;
step seven: judging whether the difference between the selected maximum and minimum miss-rate fairness metrics is less than the fairness metric threshold, and if not, re-allocating the cache assigned to the two threads and repeating steps five and seven;
step eight: if so, removing these two threads and repeating steps six and seven;
step nine: when the number of threads is one or zero, terminating the algorithm.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410755519.0A CN104536939A (en) | 2014-12-10 | 2014-12-10 | Method for configurable energy-saving dispatching of multi-core embedded cache |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410755519.0A CN104536939A (en) | 2014-12-10 | 2014-12-10 | Method for configurable energy-saving dispatching of multi-core embedded cache |
Publications (1)
Publication Number | Publication Date |
---|---|
CN104536939A true CN104536939A (en) | 2015-04-22 |
Family
ID=52852468
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410755519.0A Pending CN104536939A (en) | 2014-12-10 | 2014-12-10 | Method for configurable energy-saving dispatching of multi-core embedded cache |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN104536939A (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104991884A (en) * | 2015-06-18 | 2015-10-21 | 中国科学院自动化研究所 | Heterogeneous multi-core SoC architecture design method |
CN105681306A (en) * | 2016-01-13 | 2016-06-15 | 华北水利水电大学 | Spatial data security control system based on access mode protection |
CN106844235A (en) * | 2016-12-23 | 2017-06-13 | 北京北大众志微系统科技有限责任公司 | A kind of method and device for realizing cache replacement |
CN115421918A (en) * | 2022-09-16 | 2022-12-02 | 河南省职工医院 | Transcranial magnetic stimulation equipment and system based on RT-Linux |
- 2014-12-10: CN application CN201410755519.0A filed; published as CN104536939A; status: Pending
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP0602808A2 (en) * | 1992-12-18 | 1994-06-22 | Advanced Micro Devices, Inc. | Cache systems |
CN102830616A (en) * | 2011-06-14 | 2012-12-19 | 北京三博中自科技有限公司 | Operation optimizing system and method of steam system |
CN103136039A (en) * | 2011-11-30 | 2013-06-05 | 国际商业机器公司 | Job scheduling method and system to balance energy consumption and schedule performance |
Non-Patent Citations (1)
Title |
---|
Pang Shoulei: "Research on Key Technologies of Multi-Core Processor Architecture for Specific Applications", China Master's Theses Full-Text Database (Electronic Journal) *
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104991884A (en) * | 2015-06-18 | 2015-10-21 | 中国科学院自动化研究所 | Heterogeneous multi-core SoC architecture design method |
CN104991884B (en) * | 2015-06-18 | 2017-12-05 | 中国科学院自动化研究所 | Heterogeneous polynuclear SoC architecture design method |
CN105681306A (en) * | 2016-01-13 | 2016-06-15 | 华北水利水电大学 | Spatial data security control system based on access mode protection |
CN106844235A (en) * | 2016-12-23 | 2017-06-13 | 北京北大众志微系统科技有限责任公司 | A kind of method and device for realizing cache replacement |
CN115421918A (en) * | 2022-09-16 | 2022-12-02 | 河南省职工医院 | Transcranial magnetic stimulation equipment and system based on RT-Linux |
CN115421918B (en) * | 2022-09-16 | 2023-05-12 | 河南省职工医院 | Transcranial magnetic stimulation equipment and system based on RT-Linux |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Shu et al. | A novel energy-efficient resource allocation algorithm based on immune clonal optimization for green cloud computing | |
CN103365726B (en) | A kind of method for managing resource towards GPU cluster and system | |
CN104038392A (en) | Method for evaluating service quality of cloud computing resources | |
CN104536939A (en) | Method for configurable energy-saving dispatching of multi-core embedded cache | |
NZ707185A (en) | Aggregation source routing | |
Zidenberg et al. | Multiamdahl: How should i divide my heterogenous chip? | |
CN107111553A (en) | System and method for providing dynamic caching extension in many cluster heterogeneous processor frameworks | |
CN105187327A (en) | Distributed message queue middleware | |
Liu et al. | A tensor-based holistic edge computing optimization framework for Internet of Things | |
Vishnu et al. | Designing energy efficient communication runtime systems: a view from PGAS models | |
Ren et al. | Multi-objective optimization for task offloading based on network calculus in fog environments | |
Zhang et al. | A new energy efficient VM scheduling algorithm for cloud computing based on dynamic programming | |
CN104821906A (en) | Efficient energy-saving virtual network node mapping model and algorithm | |
TWI681289B (en) | Method,computing device,and non-transitory processor readable medium of managing heterogeneous parallel computing | |
Lu et al. | Mildip: An energy efficient code offloading framework in mobile cloudlets | |
Zhang et al. | Dependent task offloading mechanism for cloud–edge-device collaboration | |
CN103617090A (en) | Energy saving method based on distributed management | |
CN103106112A (en) | Method and device based on maximum load and used for load balancing scheduling | |
Pham et al. | Incorporating energy and throughput awareness in design space exploration and run-time mapping for heterogeneous MPSoCs | |
CN103617305A (en) | Self-adaptive electric power simulation cloud computing platform job scheduling algorithm | |
Xiong et al. | An energy-aware task consolidation algorithm for cloud computing data centre | |
Hamid et al. | A Multi-core architecture for a hybrid information system | |
CN104699520B (en) | A kind of power-economizing method based on virtual machine (vm) migration scheduling | |
Liu et al. | Virtual machine dynamic deployment scheme based on double-cursor mechanism | |
CN107636636A (en) | Adjust processor core operation |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 20150422 |