User-Level I/O Accelerations for High-Performance Deep Learning Applications
Zhu, Yue (author)
Yu, Weikuan (professor directing dissertation)
Liu, Guosheng (university representative)
Duan, Zhenhai (committee member)
Zhao, Peixiang (committee member)
Mohror, Kathryn (committee member)
Florida State University (degree granting institution)
College of Arts and Sciences (degree granting college)
Department of Computer Science (degree granting department)
2021
text
doctoral thesis
With the popularity of microprocessors and scale-out system architectures, many large-scale high-performance computing (HPC) systems are built from a collection of compute servers, each with an identical set of resources such as CPU, memory, and storage. A variety of applications leverage the tremendous computation capacity of these large-scale HPC systems; scientific applications and deep learning (DL) training are two of the most popular workloads. However, with the rapid growth of computation power, it has become increasingly important to close the performance gap between computation and I/O for these applications and workloads.

In recent years, many research efforts have explored user-level file systems on HPC systems for various workloads, owing to the flexibility of implementing and maintaining them in user space. In particular, scientific applications, which exhibit two typical I/O patterns (checkpoint/restart and multi-dimensional I/O), have been able to utilize different specialized user-level file systems within a single job. However, this approach can introduce non-trivial overheads, and these overheads must be reviewed carefully to mitigate the performance degradation.

In addition, the existing methods of using user-level file systems are not sufficient to meet the fundamental I/O needs of large-scale DL training on HPC systems. First, DL training organizes random samples into batches that update the model parameters iteratively. This prevents the model from being biased by noise in the input sequence, allows faster convergence, and reduces memory consumption during the training computation, but it also generates massive random reads for data shuffling across the entire dataset on the storage system. Such a random-read I/O pattern is significantly different from traditional scientific workloads, as the sketch below illustrates.
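To make this access pattern concrete, the following minimal C sketch shows how per-epoch shuffling turns contiguously stored samples into uniformly random small reads; the flat sample file dataset.bin and the fixed record size are hypothetical assumptions, not part of any system described here.

```c
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define NUM_SAMPLES 100000
#define SAMPLE_SIZE 4096            /* hypothetical fixed record size */

int main(void)
{
    int fd = open("dataset.bin", O_RDONLY);   /* hypothetical dataset file */
    if (fd < 0) { perror("open"); return 1; }

    /* Per-epoch shuffle: Fisher-Yates over the sample indices. */
    size_t *idx = malloc(NUM_SAMPLES * sizeof *idx);
    for (size_t i = 0; i < NUM_SAMPLES; i++) idx[i] = i;
    for (size_t i = NUM_SAMPLES - 1; i > 0; i--) {
        size_t j = (size_t)rand() % (i + 1);
        size_t t = idx[i]; idx[i] = idx[j]; idx[j] = t;
    }

    /* One epoch of reads: each sample is a small read at an effectively
     * random offset, the pattern that stresses HPC storage systems. */
    char buf[SAMPLE_SIZE];
    for (size_t i = 0; i < NUM_SAMPLES; i++)
        pread(fd, buf, SAMPLE_SIZE, (off_t)(idx[i] * SAMPLE_SIZE));

    free(idx);
    close(fd);
    return 0;
}
```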
Moreover, leadership HPC systems are often equipped with a large pool of burst buffers in the form of flash or non-volatile memory (NVM) devices, and DL applications running on these systems face a resource underutilization problem: the low latency and high bandwidth of NVM devices can be severely underutilized under heavy CPU and memory workloads. In this environment, the flash or NVMe storage devices are capable of low-latency and high-bandwidth I/O services, but the complex kernel software stack significantly hampers these capabilities during I/O processing. Furthermore, due to DL training accuracy and performance concerns, the storage capacity of the on-node devices allocated to a training job is often insufficient to hold an entire dataset, and their performance is insufficient to match the training speed.

This dissertation focuses on applying user-level file systems on HPC systems. Our overarching goal is to accelerate I/O support on HPC systems through specialized user-level file systems for popular workloads. Specifically, we want to bring lightweight user-level file systems as efficient intermediaries that reduce performance overhead and ease the use of multiple FUSE file systems in a single job, orchestrate data movement between storage tiers and DL applications, and improve storage resource utilization for a pool of NVMe SSDs in DL training. Based on these design goals, we investigate the issues and challenges of applying existing user-level file systems to these popular workloads, then propose three strategies to meet our goals.

First, we study the excessive cost of crossing the user/kernel boundary when using multiple traditional user-level file systems, and we design Direct-FUSE to support multiple FUSE file systems, as well as other custom user-level file systems, entirely in user space without crossing into the FUSE kernel module. All layers of Direct-FUSE reside in user space, and applications can directly use pre-defined, unified file system calls to interact with different user-defined file systems. Our performance results show that Direct-FUSE outperforms some native FUSE file systems and does not add significant overhead over the backend file systems.

Second, we examine the I/O patterns of deep neural networks and study the performance overheads of loading samples in popular DL applications. We then introduce DeepIO, an entropy-aware I/O framework for large-scale deep learning on HPC systems. DeepIO coordinates the use of memory, communication, and I/O resources for efficient training, and it features an I/O pipeline that utilizes several novel optimizations: RDMA (Remote Direct Memory Access)-assisted in-situ shuffling, input pipelining, and entropy-aware opportunistic ordering. It outperforms state-of-the-art persistent-memory-based distributed file systems for efficient sample loading during DL training.

Third, beyond examining the I/O patterns of deep neural networks, we reveal a critical need to load many small samples randomly and identify storage resource underutilization as an obstacle to successful training. Based on these findings, we design a specialized Deep Learning File System (DLFS) with an in-memory, tree-based sample directory for metadata management and user-level storage disaggregation through SPDK (the Storage Performance Development Kit). Our experimental results show that DLFS dramatically improves training throughput for deep neural networks compared with the kernel-based local Ext4 file system, and it achieves efficient user-level storage disaggregation with very little CPU utilization.

In conclusion, the first branch concentrates on enriching the functionality and enhancing the performance of the Direct-FUSE framework, while the second and third branches focus on wisely storing and prefetching datasets with the coordination of hierarchical storage tiers and fast interconnects, respectively. By exploring these three branches, we can further accelerate specialized user-level file systems for popular workloads on HPC systems. Minimal, illustrative sketches of the three components follow.
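To picture the unified-call idea behind Direct-FUSE, here is a small C sketch of prefix-based dispatch across user-level backends. The struct layout, backend names, and entry points are illustrative assumptions, not the actual Direct-FUSE interface.

```c
#include <stdio.h>
#include <string.h>

/* A backend is a user-level file system exposing POSIX-like entry points.
 * The function-pointer layout here is an illustrative assumption. */
struct backend {
    const char *prefix;                        /* mount-point prefix */
    int  (*open)(const char *path, int flags);
    long (*read)(int fd, void *buf, long n);
};

/* Hypothetical stand-ins for two user-level file system libraries. */
static int  fs_a_open(const char *p, int f) { (void)p; (void)f; return 3; }
static long fs_a_read(int fd, void *b, long n) { (void)fd; (void)b; return n; }
static int  fs_b_open(const char *p, int f) { (void)p; (void)f; return 4; }
static long fs_b_read(int fd, void *b, long n) { (void)fd; (void)b; return n; }

static struct backend backends[] = {
    { "/ckpt/", fs_a_open, fs_a_read },        /* e.g. a checkpoint FS */
    { "/data/", fs_b_open, fs_b_read },        /* e.g. a multi-dim I/O FS */
};

/* Route a path to the first backend whose prefix matches; everything
 * stays in user space, so no trip through the FUSE kernel module. */
static struct backend *resolve(const char *path)
{
    for (size_t i = 0; i < sizeof backends / sizeof *backends; i++)
        if (strncmp(path, backends[i].prefix, strlen(backends[i].prefix)) == 0)
            return &backends[i];
    return NULL;
}

int main(void)
{
    struct backend *fs = resolve("/ckpt/rank0.snapshot");
    if (fs)
        printf("dispatched to backend mounted at %s\n", fs->prefix);
    return 0;
}
```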
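Opportunistic ordering can be pictured as follows: rather than blocking on the strict shuffled order, a worker first serves the samples of an epoch that are already resident locally and defers the remote fetches, which can then overlap with training computation. The buffer layout and names below are assumptions made for illustration, not DeepIO's implementation.

```c
#include <stdbool.h>
#include <stdio.h>

#define N 16

/* Which samples happen to be resident in this node's local buffer. */
static bool local[N] = { [1] = true, [4] = true, [7] = true, [9] = true };

int main(void)
{
    int shuffled[N];
    for (int i = 0; i < N; i++)
        shuffled[i] = (i * 5) % N;   /* toy per-epoch permutation */

    /* Pass 1: serve samples that are already local, in shuffled order. */
    for (int i = 0; i < N; i++)
        if (local[shuffled[i]])
            printf("serve local  sample %d\n", shuffled[i]);

    /* Pass 2: fetch-and-serve the rest; in an RDMA-assisted pipeline the
     * fetches would overlap with the computation on the local samples. */
    for (int i = 0; i < N; i++)
        if (!local[shuffled[i]])
            printf("fetch remote sample %d\n", shuffled[i]);
    return 0;
}
```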
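Finally, a minimal sketch in the spirit of DLFS's metadata management: an in-memory, tree-based sample directory maps sample IDs to on-device extents, so a lookup yields the (offset, length) pair that a user-level SPDK read would consume. The plain binary search tree and field names are assumptions; only the in-memory tree organization comes from the dissertation.

```c
#include <stdio.h>
#include <stdlib.h>

/* Directory entry: sample id -> extent on the NVMe device. */
struct node {
    unsigned id;
    unsigned long long offset;   /* byte offset on device */
    unsigned len;                /* sample length in bytes */
    struct node *l, *r;
};

static struct node *insert(struct node *t, unsigned id,
                           unsigned long long off, unsigned len)
{
    if (!t) {
        t = calloc(1, sizeof *t);
        t->id = id; t->offset = off; t->len = len;
    } else if (id < t->id) {
        t->l = insert(t->l, id, off, len);
    } else {
        t->r = insert(t->r, id, off, len);
    }
    return t;
}

static struct node *lookup(struct node *t, unsigned id)
{
    while (t && t->id != id)
        t = id < t->id ? t->l : t->r;
    return t;
}

int main(void)
{
    struct node *dir = NULL;
    dir = insert(dir, 42, 0, 4096);
    dir = insert(dir, 7, 4096, 4096);

    struct node *e = lookup(dir, 7);
    if (e)  /* a real path would hand (offset, len) to a user-level SPDK read */
        printf("sample %u at offset %llu, %u bytes\n", e->id, e->offset, e->len);
    return 0;
}
```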
Deep Learning, FUSE, I/O, Storage Disaggregation, User-Level File System
March 15, 2021.
A Dissertation submitted to the Department of Computer Science in partial fulfillment of the requirements for the degree of Doctor of Philosophy.
Includes bibliographical references.
Weikuan Yu, Professor Directing Dissertation; Guosheng Liu, University Representative; Zhenhai Duan, Committee Member; Peixiang Zhao, Committee Member; Kathryn Mohror, Committee Member.
Florida State University
2020_Summer_Fall_Zhu_fsu_0071E_16381