Abstract
By some estimates, there will be close to one billion wireless devices capable of Internet connectivity within five years, surpassing the installed base of traditional wired compute devices. These devices will take the form of cellular phones, personal digital assistants (PDAs), embedded processors, and "Internet appliances". This proliferation of networked computing devices will enable a number of compelling applications, centered on ubiquitous access to global information services, just-in-time delivery of personalized content, and tight synchronization among compute devices/appliances in our everyday environment. However, one of the principal challenges of realizing this vision in the post-PC environment is the need to reduce the energy consumed in using these next-generation mobile and wireless devices, thereby extending the lifetime of the batteries that power them. While the processing power, memory, and network bandwidth of post-PC devices are increasing exponentially, their battery capacity is improving at a much more modest pace.
Thus, to ensure the utility of post-PC applications, it is important to develop low-level mechanisms and higher-level policies to maximize energy efficiency. In this paper, we propose the systematic re-examination of all aspects of operating system design and implementation from the point of view of energy efficiency rather than the more traditional OS metric of maximizing performance. In [7], we made the case for energy as a first-class OS-managed resource. We emphasized the benefits of higher-level control over energy usage policy and the application/OS interactions required to achieve them. This paper explores the implications that this major shift in focus can have upon the services, policies, mechanisms, and internal structure of the OS itself based on our initial experiences with rethinking system design for energy efficiency.
Our ultimate goal is to design an operating system whose major components cooperate to explicitly optimize for energy efficiency. A number of research efforts have recently investigated aspects of energy-efficient operating systems (a good overview is available in [16, 20]), and we intend to leverage existing "best practice" in our own work where such results exist. However, we are not aware of any systems that systematically revisit system structure with energy in mind. Further, our examination of operating system functionality reveals a number of opportunities that have received little attention in the literature. To illustrate this point, Table 1 presents major operating system functionality, along with possible techniques for improving power consumption characteristics. Several of the techniques are well studied, such as disk spindown policies or adaptively trading content fidelity for power [8]. For example, to reduce power consumption for MPEG playback, the system could adapt to a smaller frame rate and window size, consuming less bandwidth and computation.
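The fidelity-for-power trade could be driven by a simple threshold policy; the following sketch illustrates the idea for the MPEG example above. The fidelity levels and battery thresholds are illustrative assumptions, not values from any particular system.

```python
# Hypothetical fidelity-adaptation policy for MPEG playback: as the
# battery drains, step down frame rate and window size, reducing both
# the decode computation and the network bandwidth consumed.
# Thresholds and fidelity levels below are illustrative assumptions.

FIDELITY_LEVELS = [
    # (min battery fraction, frames/sec, width, height)
    (0.50, 30, 640, 480),   # ample energy: full fidelity
    (0.20, 15, 320, 240),   # constrained: half frame rate, quarter area
    (0.00, 10, 160, 120),   # critical: minimal fidelity
]

def choose_fidelity(battery_fraction):
    """Return (fps, width, height) for the current energy budget."""
    for threshold, fps, w, h in FIDELITY_LEVELS:
        if battery_fraction >= threshold:
            return fps, w, h
    return FIDELITY_LEVELS[-1][1:]  # below zero: minimal fidelity
```

A real policy would also weigh user preferences and whether the device is on wall power, but the structure is the same: a monotone mapping from remaining energy to content fidelity.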
One of the primary objectives of operating systems is allocating resources among competing tasks, typically for fairness and performance. Adding energy efficiency to the equation raises a number of interesting issues. For example, competing processes/users may be scheduled to receive a fair share of battery resources rather than CPU resources (e.g., an application that makes heavy use of disk I/O may be given lower priority relative to a compute-bound application when energy resources are low). Similarly, for tasks such as ad hoc routing, local battery resources are often consumed on behalf of remote processes. Fair allocation dictates that one battery is not drained in preference to others. Finally, for the communication subsystem, a number of efforts already investigate adaptively setting the polling rate for wireless networks (trading latency for energy).
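Scheduling for a fair share of battery resources can be sketched by metering each task in joules rather than CPU ticks. The sketch below is a minimal illustration under assumed per-task energy accounting (the `share` and `joules_used` fields are hypothetical bookkeeping, not an existing kernel interface).

```python
def pick_next(tasks):
    """Energy-fair pick: run the task that has consumed the least of
    its energy allotment so far -- proportional-share scheduling, but
    metered in joules rather than CPU time."""
    return min(tasks, key=lambda t: t["joules_used"] / t["share"])

tasks = [
    {"name": "compute", "share": 1.0, "joules_used": 3.0},
    # Heavy disk I/O drains the battery faster per unit of CPU time,
    # so this task has over-consumed its energy allotment.
    {"name": "disk_io", "share": 1.0, "joules_used": 9.0},
]
```

Under this metric the disk-heavy task is deprioritized even if it has received less CPU time, which is exactly the inversion of a performance-fair scheduler that the text describes.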
Our efforts to date have focused on the last four areas highlighted in Table 1. For memory allocation, our work explores how to exploit the ability of memory chips to transition among multiple power states. We also investigate metrics for picking energy-efficient routes in ad hoc networks, energy-efficient placement of distributed computation, and flexible RPC/name binding that accounts for power consumption.
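For the memory-allocation direction, the ability of memory chips to transition among multiple power states can be exploited with a threshold policy that moves an idle chip into progressively deeper states. The state names, power draws, and idle thresholds below are illustrative assumptions in the spirit of multi-state DRAM, not a specific part's datasheet.

```python
# Hypothetical threshold policy for a memory chip with multiple power
# states: the longer a chip sits idle, the deeper the state it enters.
# Deeper states draw less power but cost more time to re-access.

STATES = [
    # (state name, assumed power in mW, idle ms before entering)
    ("active",    300,   0),
    ("standby",   180,   1),
    ("nap",        30,  10),
    ("powerdown",   3, 100),
]

def target_state(idle_ms):
    """Return the deepest power state whose idle threshold is met."""
    best = STATES[0][0]
    for state, _power_mw, threshold_ms in STATES:
        if idle_ms >= threshold_ms:
            best = state
    return best
```

The interesting policy question, which this sketch elides, is choosing thresholds so that the energy saved in a deep state outweighs the energy and latency of transitioning back to active.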
These last two points of resource allocation and remote communication highlight an interesting property for energy-aware OS design in the post-PC environment. Many tasks are distributed across multiple machines, potentially running on machines with widely varying CPU, memory, and power source characteristics. Thus, energy-aware OS design must closely cooperate with and track the characteristics of remote computers to balance the often conflicting goals of optimizing for energy and speed.
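The placement trade-off above can be made concrete with a toy energy-cost model: run a task locally if the compute energy is cheaper than the radio energy needed to ship its data to a wall-powered server. The per-cycle and per-byte costs are illustrative assumptions, and a real system would also track the remote machine's load and power source, as the text notes.

```python
def place_task(cycles, bytes_io,
               local_nj_per_cycle=1.0, radio_nj_per_byte=500.0):
    """Decide where to run a task by comparing the energy the mobile
    device spends computing locally against the radio energy to ship
    the task's input/output to a remote, wall-powered server.
    Per-unit costs are illustrative assumptions (nanojoules)."""
    local_cost = cycles * local_nj_per_cycle
    remote_cost = bytes_io * radio_nj_per_byte
    return "remote" if remote_cost < local_cost else "local"
```

For example, a compute-heavy task with little data to move favors remote execution, while a task whose data dwarfs its computation stays local; this is the local/remote balancing act the distributed sections below take up.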
The rest of this paper illustrates our approach with selected examples extracted from our recent efforts toward building an integrated hardware/software infrastructure that incorporates cooperative power management to support mobile and wireless applications. The instances we present in subsequent sections cover the resource management policies and mechanisms necessary to exploit low power modes of various (existing or proposed) hardware components, as well as power-aware communications and the essential role of the wide-area environment. We begin our discussion with the resources of a single machine and then extend it to the distributed context.