TRANSCRIPT
KUAS HPDS Lab
http://hpds.ee.kuas.edu.tw/
dRamDisk: Efficient RAM Sharing on a Commodity Cluster
Vassil Roussev, Golden G. Richard
Reporter : Min-Jyun Chen
Publication year: 2006
Hpds Lab
Abstract
The main point of the paper is to present a practical solution—a distributed RAM disk (dRamDisk) with an adaptive read-ahead scheme—which demonstrates that spare RAM capacity can greatly benefit I/O-constrained applications.
Our experiments show that sequential read/write operations can be sped up approximately 3.5 times relative to a commodity hard drive.
2010/04/27 KUAS-EE-HPDS2
Contents
1. Introduction
2. Design rationale
3. Implementation issues
4. Evaluation
5. Conclusion
Introduction
Better performance - using remote RAM should yield non-trivial performance gains.
Transparency - RAM sharing should be transparent to the process.
Efficiency - this requirement has two sides:
The system should utilize as much of the available network resources as possible.
A CPU-bound process should see no performance degradation.
Ease of use - the system should be easy to deploy and administer.
Design rationale (Cont.)
User-level RAM server - one obvious drawback is that allocated memory cannot be locked into RAM, so it could be swapped out by the paging system.
Our rationale is that, on an idle system, the issue is moot: the user process can use all available memory simply because there is no competition.
Design rationale (Cont.)
Block-level device driver - there are two obvious choices here.
The first one is to implement file system-level caching, which would allow optimizations based on the logical structure of the file system (FS).
The alternative is to implement a (block-level) device that does not have the benefit of filesystem knowledge but would work with any filesystem.
Design rationale (Cont.)
TCP-based communication - IT users have been very reluctant to adopt solutions designed to take advantage of more efficient communication technologies.
As a result, many vendors provide TCP/IP emulation that lets users take advantage of the optimization without parting with the "good old" TCP/IP sockets.
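The sketch below illustrates the kind of plain-TCP block service the design implies: a server that exposes donated RAM and answers read/write requests addressed by byte offset. The wire format (a packed op/offset/length header) and all names here are our own illustration, not the paper's actual protocol; the real dRamDisk is a kernel-level driver.

```python
import socket
import struct
import threading

# Toy "remote RAM" block server over ordinary TCP sockets (illustrative only).
BLOCK_STORE = bytearray(1 << 20)   # 1 MiB of donated RAM (assumed size)
HDR = struct.Struct("!BQI")        # op (0 = read, 1 = write), offset, length

def recv_exact(conn, n):
    """Receive exactly n bytes or raise if the peer closes early."""
    buf = b""
    while len(buf) < n:
        chunk = conn.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed")
        buf += chunk
    return buf

def serve_one(listener):
    conn, _ = listener.accept()
    with conn:
        try:
            while True:
                op, off, length = HDR.unpack(recv_exact(conn, HDR.size))
                if op == 0:                                     # read request
                    conn.sendall(bytes(BLOCK_STORE[off:off + length]))
                else:                                           # write request
                    BLOCK_STORE[off:off + length] = recv_exact(conn, length)
        except ConnectionError:
            pass

def start_server():
    """Start a one-connection server on loopback; return its port."""
    listener = socket.socket()
    listener.bind(("127.0.0.1", 0))
    listener.listen(1)
    threading.Thread(target=serve_one, args=(listener,), daemon=True).start()
    return listener.getsockname()[1]

def remote_write(sock, off, data):
    sock.sendall(HDR.pack(1, off, len(data)) + data)

def remote_read(sock, off, length):
    sock.sendall(HDR.pack(0, off, length))
    return recv_exact(sock, length)
```

Because everything rides on standard TCP sockets, the same client code would work unchanged over a TCP/IP-emulation layer on a faster interconnect, which is exactly the portability argument made above.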
Implementation issues
We implemented an adaptive read-ahead scheme to accommodate the conflicting requirements of small and large files:
The initial transfer unit is set to the minimum (4 KB).
If a successive block request is adjacent to the previously transferred block, the transfer unit is doubled, subject to a maximum parameter (128 KB).
Otherwise, the transfer unit is reset to the minimum.
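The rules above can be sketched as a small state machine. This is a minimal illustration of the doubling/reset policy, not the paper's kernel code; the class and method names are our own.

```python
MIN_UNIT = 4 * 1024     # initial transfer unit: 4 KB
MAX_UNIT = 128 * 1024   # upper bound on the transfer unit: 128 KB

class AdaptiveReadAhead:
    """Adaptive read-ahead: double the unit on sequential access, reset otherwise."""

    def __init__(self):
        self.unit = MIN_UNIT
        self.next_expected = None   # offset where a sequential request would start

    def next_unit(self, offset):
        if self.next_expected is not None and offset == self.next_expected:
            # Request is adjacent to the previous transfer: double, capped at max.
            self.unit = min(self.unit * 2, MAX_UNIT)
        else:
            # Non-sequential access: fall back to the minimum unit.
            self.unit = MIN_UNIT
        self.next_expected = offset + self.unit
        return self.unit
```

Under a sequential scan, the unit ramps 4 KB, 8 KB, 16 KB, ... up to 128 KB, so large files amortize round trips while a random access immediately drops back to 4 KB and small files pay no read-ahead penalty.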
Evaluation
Experimental setup-
5 x Dell Pentium 4 @ 3 GHz / 2 GB RAM
1 x Gigabit 8-port Linksys Workgroup switch
Linux 2.4 kernel
One host was dedicated to running the application/benchmark.
The other four hosts provided the distributed block device (i.e., all I/O requests/results cross the network).
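With four hosts backing one block device, the driver must map each block offset to a server. The slides do not state the distribution policy, so the round-robin striping below is purely an assumed illustration of how such a mapping could work.

```python
NUM_SERVERS = 4          # the four RAM-providing hosts in the setup above
BLOCK_SIZE = 4 * 1024    # 4 KB blocks (matches the minimum transfer unit)

def server_for(offset):
    """Map a byte offset to a server index by round-robin block striping.

    This policy is an assumption for illustration; the actual dRamDisk
    distribution scheme may differ.
    """
    block = offset // BLOCK_SIZE
    return block % NUM_SERVERS
```

Striping at block granularity spreads sequential I/O across all four gigabit links, which is one plausible way the aggregate remote-RAM bandwidth can exceed that of a single local disk.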
Evaluation (Cont.)
Benchmark results- For our benchmark testing, we used the IOzone file system benchmark tool (http://www.iozone.org) and ran identical tests on both the local and the network drives.
Evaluation (Cont.)
Application results- We used three different command-line tools:
tar - the standard Unix archiving utility.
md5sum - a tool for computing MD5 file hashes.
scalpel - an optimized tool for carving files out of a disk image.
Conclusions
Unlike previous work, our solution is targeted at improving sequential read/write operations, which are the dominant disk access pattern.
Our experiments show that sequential read/write operations can be sped up approximately 3.5 times relative to a commodity IDE hard drive.
Furthermore, this speedup is approximately 90% of what is practically achievable for the tested system.