Volunteer data archival
Volunteer data archival means using disk space on volunteered home computers to store large data files. This document describes the design of a system to provide volunteer data archival on BOINC. We assume the goals include:
- Storing large (e.g. petabyte) files. Files may be thousands of times larger than the amount of space available on individual computers.
- Storing files for long periods.
- Reducing the probability of data loss to arbitrarily small levels.
Properties of the volunteer host population include:
- A host may be sporadically available because it is turned off, or because the user has suspended network activity. Unavailable periods may range from minutes to several days.
- The upload and download speeds of hosts vary widely, and can be fairly low (e.g. 1 Mbps) in some cases.
- The amount of disk space available to a project on a given host may fluctuate over time, because of the user's own disk usage or disk usage by other BOINC projects to which the host is attached.
- The population is dynamic: hosts are constantly arriving and leaving. The mean lifetime of a host may be fairly small (on the order of 100 days).
- Many hosts are behind firewalls. We assume that all communication is initiated by the BOINC client, and involves HTTP requests to trusted project servers. We don't consider direct client-to-client communication.
Recovering from the failure of a host, using techniques like replication, involves uploading data from a 2nd host, then downloading it to a 3rd host. Each of these steps may take days. Thus, for volunteer storage the ratio
average time to failure / average time to recover
may be fairly small (on the order of 100). In other distributed storage systems (such as RAIDs) this ratio may be on the order of 100,000. Such systems can therefore be modeled as a sequence of individual failures and recoveries.
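To make this concrete, the following back-of-the-envelope sketch estimates the ratio in Python. The piece size, bandwidth, and lifetime figures are illustrative assumptions drawn from the ranges above, not measurements:

```python
# Back-of-the-envelope estimate of the failure/recovery ratio for
# volunteer storage. All figures are illustrative assumptions.

PIECE_SIZE_GB = 10        # size of one stored piece (assumed)
BANDWIDTH_MBPS = 1        # per-host transfer speed (low end, see above)
HOST_LIFETIME_DAYS = 100  # mean host lifetime (see above)

# One transfer hop: piece size in bits / bandwidth in bits per second.
hop_seconds = PIECE_SIZE_GB * 8e9 / (BANDWIDTH_MBPS * 1e6)

# Recovery = upload from the 2nd host + download to the 3rd host.
recovery_days = 2 * hop_seconds / 86400

ratio = HOST_LIFETIME_DAYS / recovery_days
print(f"recovery takes ~{recovery_days:.1f} days; "
      f"failure/recovery ratio ~{ratio:.0f}")
```

With these assumptions, recovery takes roughly two days and the ratio comes out in the tens, i.e. the same order of magnitude as the figure quoted above.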
Volunteer storage, on the other hand, must be modeled as a process in which multiple recoveries may be in progress at the same time, and new failures may occur during these recoveries. There are two basic techniques for achieving reliable storage using unreliable resources:
Replication: a file is divided into N pieces, and each piece is stored on M hosts. If a replica is lost and another replica exists, that replica is uploaded to the server, then downloaded to another host.
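As an illustration, here is a minimal sketch of the server-side bookkeeping that replication implies. The names and in-memory model are hypothetical, not BOINC's actual data structures:

```python
import random

N, M = 4, 2  # N pieces per file, M replicas per piece (assumed)

class Piece:
    def __init__(self, index):
        self.index = index
        self.hosts = set()   # hosts currently holding a replica

def assign(pieces, hosts):
    """Place each piece on M distinct hosts."""
    for p in pieces:
        p.hosts = set(random.sample(hosts, M))

def on_host_failure(pieces, failed, available):
    """When a host fails, restore M replicas of each affected piece."""
    for p in pieces:
        if failed in p.hosts:
            p.hosts.discard(failed)
            if p.hosts:
                # Another replica exists: upload it to the server, then
                # download to a new host (modeled here as one step).
                candidates = [h for h in available if h not in p.hosts]
                if candidates:
                    p.hosts.add(random.choice(candidates))
            # If p.hosts is now empty, the piece (and the file) is lost.

pieces = [Piece(i) for i in range(N)]
hosts = list(range(10))
assign(pieces, hosts)
on_host_failure(pieces, failed=hosts[0], available=hosts[1:])
```

Note that each recovery here ties up upload and download bandwidth for days, which is why several recoveries may overlap in practice.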
Coding: with Reed-Solomon coding, a file is divided into N 'packets', and an additional K checksum packets are generated. The original data can be reconstructed from any N of these N+K packets.
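The following sketch illustrates the N+K idea using a single XOR parity packet (K = 1), a simple stand-in for Reed-Solomon; real codes with K > 1 require Galois-field arithmetic (e.g. libraries such as zfec or reedsolo):

```python
from functools import reduce

def xor(a, b):
    """XOR two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

def encode(packets):
    """Given N data packets, append one XOR checksum packet (K = 1).
    Any N of the resulting N+1 packets suffice to reconstruct the data."""
    return packets + [reduce(xor, packets)]

def reconstruct(packets, lost_index):
    """Rebuild the packet at lost_index by XOR-ing the N survivors."""
    survivors = [p for i, p in enumerate(packets) if i != lost_index]
    return reduce(xor, survivors)

data = [b"aaaa", b"bbbb", b"cccc"]   # N = 3 data packets
coded = encode(data)                  # N + K = 4 packets
assert reconstruct(coded, lost_index=1) == data[1]
```

Coding tolerates any K simultaneous losses with less storage overhead than M-fold replication, at the cost of extra computation during encoding and reconstruction.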
In