
Thursday 22 May 2:00pm

Location: Second Floor of Campus Center at NJIT

Speaker: Hartmut Reuter

Title: AFS and Object Storage

Abstract:

Starting from a joint R&D project of CERN, CASPUR, ENEA, and RZG, an extension to OpenAFS has been developed over the last three years. This OpenAFS+OSD (OSD = Object Storage Devices) allows files to be stored outside the vicep partition where the volume lives. The OSDs presently used are called "rxosd". They are based on mature AFS technologies (rx and namei) and are managed by a new database server, the "osdserver". The client learns from the fileserver where the files are (to be) stored and accesses the rxosds directly. The fileserver acts as a metadata server (in the sense of object storage systems such as Lustre).
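The division of labour between fileserver and OSDs can be pictured with a small sketch. This is an illustration only, written in Python for brevity; the real servers are implemented in C and communicate via Rx RPCs, and every class and function name below is invented for the example rather than taken from the OpenAFS+OSD code.

```python
# Purely illustrative sketch of the data flow; not the OpenAFS+OSD API.

class FileServer:
    """Plays the metadata-server role: maps an AFS file to its OSD objects."""
    def __init__(self, object_map):
        # fid -> list of (osd_id, object_id)
        self.object_map = object_map

    def get_object_locations(self, fid):
        return self.object_map[fid]


class RxOsd:
    """A disk-based object server storing opaque objects addressed by id."""
    def __init__(self, objects=None):
        self.objects = objects or {}

    def read(self, object_id):
        return self.objects[object_id]


def client_read(fileserver, osds, fid):
    """The client asks the fileserver where the data lives, then fetches it
    directly from the rxosds; the fileserver is bypassed on the data path."""
    data = b""
    for osd_id, object_id in fileserver.get_object_locations(fid):
        data += osds[osd_id].read(object_id)
    return data
```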

There are two major scenarios in which AFS+OSD offers great advantages:

1) It can be used with disk-only OSDs for better load balancing and higher throughput. Files may be striped over multiple OSDs and may also have read/write copies on multiple OSDs (a striping sketch follows this list).

2) It allows for HSM (hierarchical storage management) within AFS, similar to the old MR-AFS. This was the main reason for RZG to join the development and to deploy it as the first site. You may have archival OSDs: an rxosd running on a partition controlled by an underlying HSM system such as TSM-HSM. Other OSDs may be declared "wipeable": objects there may be wiped when a copy exists elsewhere. You can set a "high water mark" for an OSD in the osdserver database, and a special "archiver" and "wiper" automatically perform space control on the wipeable OSDs (a sketch of this space control also follows the list). Wiped files remain visible in the AFS tree and come back from tape when accessed.
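As a rough picture of scenario 1, the following toy sketch shows how a file might be striped round-robin over several OSDs. The stripe size, layout, and function name are assumptions made for illustration, not the actual OpenAFS+OSD on-disk format.

```python
# Toy illustration of striping over object storage devices.

def stripe(data: bytes, num_osds: int, stripe_size: int = 64 * 1024):
    """Split data into fixed-size chunks and distribute them round-robin
    over num_osds object storage devices."""
    stripes = [bytearray() for _ in range(num_osds)]
    for chunk_no, offset in enumerate(range(0, len(data), stripe_size)):
        stripes[chunk_no % num_osds] += data[offset:offset + stripe_size]
    return [bytes(s) for s in stripes]
```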
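For scenario 2, the sketch below illustrates the space-control idea: once a wipeable OSD exceeds its high water mark, objects that already have an archival copy are selected for wiping, coldest first. The least-recently-accessed policy and all names are assumptions for the example; they are not the actual archiver/wiper implementation.

```python
# Minimal sketch of high-water-mark space control on a wipeable OSD.

from dataclasses import dataclass

@dataclass
class StoredObject:
    object_id: int
    size: int                 # bytes
    last_access: float        # timestamp
    has_archive_copy: bool    # a copy exists on an archival (HSM-backed) OSD

def select_objects_to_wipe(objects, capacity, high_water_mark=0.9):
    """Pick objects to wipe until usage drops back under the high water mark.
    Only objects that already have an archival copy are candidates; wiped
    files stay visible in the AFS tree and return from tape on access."""
    used = sum(o.size for o in objects)
    candidates = sorted(
        (o for o in objects if o.has_archive_copy),
        key=lambda o: o.last_access,      # wipe the coldest objects first
    )
    to_wipe = []
    for obj in candidates:
        if used <= high_water_mark * capacity:
            break
        to_wipe.append(obj)
        used -= obj.size
    return to_wipe
```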

AFS+OSD runs in production at RZG; presently 45 of our 210 TB have been migrated from MR-AFS to AFS+OSD.

Slides: PDF

AFS & Kerberos Best Practices Workshop 2008: Thursday Session 3 Slot 2