Storage Management and Access in the Worldwide LHC Computing Grid
2007
Abstract
One of the big challenges in Grid computing is storage management and access. Several solutions exist to store data in a persistent way. In this work we describe our contribution within the Worldwide LHC Computing Grid (WLCG) project. Substantial samples of data produced by the High Energy Physics detectors at CERN are shipped for initial processing to specific large computing centers worldwide. Such centers are normally able to provide persistent storage for tens of Petabytes of data, mostly on tape. Special physics applications are used to refine and filter the data after spooling the required files from tape to disk. At smaller, geographically dispersed centers, physicists perform the analysis of such data stored on disk-only caches. In this thesis we analyze the application requirements, such as uniform storage management, quality of storage, POSIX-like file access, and performance. Furthermore, security, policy enforcement, monitoring, and accounting need to be addressed carefully in a Grid environment. We then survey the multitude of storage products, both hardware and software, deployed in the WLCG infrastructure, outlining the specific features, functionalities, and diverse interfaces they offer to users. We focus in particular on StoRM, a storage resource manager that we have designed and developed in answer to specific user requests for a fast and efficient Grid interface to available parallel file systems. We propose a model for the Storage Resource Management (SRM) protocol, an extension for uniform storage management and access on the Grid. The black-box testing methodology has been applied to verify the completeness of the specifications and to validate the existing implementations. We finally describe and report on the results obtained.
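The tape-to-disk spooling that the abstract describes is what the SRM interface exposes to clients: a request to stage a file is queued, and the client polls until the file is pinned on disk. The following is a minimal illustrative sketch of that polling cycle, not the thesis's implementation; the class and method names are hypothetical, though the status strings mirror return codes defined in the SRM v2.2 specification.

```python
# Toy state machine mimicking the SRM "prepare to get" cycle: a file on
# tape is staged to disk, and the client polls until it is pinned.
# ToySrmEndpoint and its methods are invented for illustration only.

class ToySrmEndpoint:
    """Simulates an SRM endpoint that stages a file from tape to disk."""

    def __init__(self, polls_until_staged=3):
        # Number of status polls before the simulated staging completes.
        self._remaining = polls_until_staged

    def prepare_to_get(self, surl):
        # A real SRM endpoint returns a request token to poll against.
        return {"token": "req-0001", "surl": surl,
                "status": "SRM_REQUEST_QUEUED"}

    def status_of_get_request(self, token):
        # Each poll moves the simulated request closer to completion.
        if self._remaining > 0:
            self._remaining -= 1
            return "SRM_REQUEST_INPROGRESS"
        return "SRM_FILE_PINNED"  # file is now on disk, pinned for access


def stage_file(endpoint, surl, max_polls=10):
    """Issue a staging request and poll until the file is on disk."""
    request = endpoint.prepare_to_get(surl)
    for _ in range(max_polls):
        status = endpoint.status_of_get_request(request["token"])
        if status == "SRM_FILE_PINNED":
            return status
    return "SRM_REQUEST_TIMED_OUT"
```

Under these assumptions, `stage_file(ToySrmEndpoint(), "srm://example.org/data/run42.root")` returns `"SRM_FILE_PINNED"` after a few polls; a real client would additionally handle authentication, pin lifetimes, and transfer-protocol negotiation.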
All attached files are open access, of type "other attached material."

| File | Size | Format |
|---|---|---|
| Chapter_1_Abstract.pdf | 180.52 kB | Adobe PDF |
| Chapter_0_Title.pdf | 49.09 kB | Adobe PDF |
| Chapter_1.pdf | 441.93 kB | Adobe PDF |
| Chapter_2.pdf | 4.47 MB | Adobe PDF |
| Chapter_3.pdf | 615.08 kB | Adobe PDF |
| Chapter_4.pdf | 574.17 kB | Adobe PDF |
| Chapter_5.pdf | 302.27 kB | Adobe PDF |
| Chapter_6.pdf | 1.03 MB | Adobe PDF |
| Chapter_7.pdf | 929.22 kB | Adobe PDF |
| Chapter_8.pdf | 3.45 MB | Adobe PDF |
| Chapter_9.pdf | 129.67 kB | Adobe PDF |
| Reference_Bibliography.pdf | 69.17 kB | Adobe PDF |
| Reference_Bibliography1.pdf | 125.2 kB | Adobe PDF |
Documents in UNITESI are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/20.500.14242/153950
URN:NBN:IT:UNIPI-153950