Enhancing Energy Efficiency with DRAM-Aware Physical Memory Management in Linux

Illia Ostapyshyn (Leibniz Universität Hannover)

In contrast to the early days of computing, memory is no longer a scarce resource. The capacity of dynamic random-access memory (DRAM) continues to scale with the ever-growing requirements of applications. Modern server systems can house terabytes of DRAM, yet in large-scale data centers the average memory utilization does not exceed 70 percent. Especially in such systems, memory is a major contributor to the overall power consumption, prompting the question of whether unused memory can be deactivated to conserve energy. However, neither hardware nor software offers sufficient support to realize these energy savings, and millions of systems continue to expend energy on maintaining unused memory.

On the hardware side, the existing power-saving modes are applicable only at the coarse granularities of ranks and channels (>8 GiB). This is slowly changing with new power management features in the LPDDR5 standard, which bring the granularity down to the sub-rank level of around 1 GiB. On the software side, contemporary operating systems are still designed around the notion of memory scarcity. They manage memory primarily in 4 KiB pages and operate under the assumption that unused memory is a wasted resource. Over time, the available memory fills with file cache, and used memory inevitably becomes scattered across all memory devices. Consequently, it becomes impossible to find unused contiguous segments of memory for deactivation in any modern system with considerable uptime. In a nutshell, memory power management contradicts the foundational assumptions of conventional memory management. This work approaches the lack of power-saving mechanisms from the systems software perspective.
It demonstrates that Linux, too, suffers from this problem: memory quickly becomes unsuitable for deactivation and remains in this state even when utilization declines. To tackle this issue, this thesis proposes a novel compaction mechanism designed with DRAM power management in mind. Applied to real-world workloads, it successfully increases the number of unused memory segments and reduces the power consumption of a desktop system under heavy load. These savings multiply quickly when applied across the numerous systems with overprovisioned memory worldwide. Ultimately, this work is the first to reveal that actively reorganizing memory contents for the sake of energy savings actually pays off: the energy invested in a single compaction procedure is recovered in just under one minute.