Geomancy: Automated Data Placement for Exascale Storage Systems
Speaker(s) : Darrell Long (UC Santa Cruz)
Exascale cloud storage and high-performance computing (HPC) systems deliver unprecedented storage capacity and levels of computing power, yet the full potential of these systems remains untapped because of inefficient data placement. Due to the unpredictable nature of workload I/O patterns, changes in data accesses can cause a system's performance to suffer. Allocating more resources to the affected nodes is not always economically or technically feasible at the exascale level. To mitigate performance losses, system designers implement strategies that preemptively place popular data on higher-performance nodes. However, these strategies fail to serve a diverse userbase in which each user individually demands the highest performance, and they must be carefully constructed by an expert of the system.
We propose Geomancy, a tool that reorganizes data to increase I/O throughput. Geomancy needs no prior knowledge of the host system; instead, it builds its own understanding of the system and determines the memory and computing capabilities of each node. In systems where heuristic-based improvements are inadequate or too resource-intensive, Geomancy determines new placement policies by training a deep neural network on past workload and system traces. Using CERN traces, Geomancy calculated an example placement policy for the scientific data accessed by workloads on the EOS storage system. From system and workload data gathered on Pacific Northwest National Laboratory (PNNL) servers, we demonstrate a 49% increase in average throughput compared to the standard data layout over 50 runs of a physics simulation using a new placement policy calculated by Geomancy.
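The core idea described above can be sketched as follows: learn a model that predicts throughput for a (file, node) placement from past traces, then move each file to the node with the highest predicted throughput. The trace format, feature names, and the simple linear model below are illustrative assumptions; the actual system trains a deep neural network on real workload and system traces.

```python
import random

def train(traces, epochs=500, lr=0.01):
    """Fit a linear throughput predictor w.x + b by stochastic gradient descent.

    Each trace entry is (feature_vector, observed_throughput). In this sketch,
    a feature vector concatenates file features with candidate-node features.
    """
    dim = len(traces[0][0])
    w, b = [0.0] * dim, 0.0
    for _ in range(epochs):
        for x, y in traces:
            pred = sum(wi * xi for wi, xi in zip(w, x)) + b
            err = pred - y
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def best_node(model, file_features, nodes):
    """Return the node whose features yield the highest predicted throughput."""
    w, b = model

    def score(node_features):
        x = file_features + node_features
        return sum(wi * xi for wi, xi in zip(w, x)) + b

    return max(nodes, key=lambda name: score(nodes[name]))

# Synthetic traces (hypothetical): feature vector = [access_frequency,
# node_bandwidth], target = observed throughput, assumed to grow with both.
random.seed(0)
traces = []
for _ in range(200):
    freq = random.uniform(0.0, 1.0)
    bw = random.choice([0.2, 0.5, 1.0])   # three storage tiers
    throughput = bw * (0.5 + 0.5 * freq)  # assumed relationship, for the demo
    traces.append(([freq, bw], throughput))

model = train(traces)
nodes = {"hdd": [0.2], "sata_ssd": [0.5], "nvme": [1.0]}
# A frequently accessed file should land on the fastest node.
print(best_node(model, [0.9], nodes))
```

A real deployment would replace the linear predictor with a deep network and derive features from live system monitoring rather than synthetic data; the placement-by-predicted-throughput loop is the part this sketch illustrates.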