CERN Document Server. Search in 2,048 records: showing results 1 - 10 (the search took 0.29 seconds).
1.
Major changes to the LHCb Grid computing model in year 2 of LHC data / Arrabito, L (CC, Villeurbanne) ; Bernardoff, V (CERN) ; Bouvet, D (CC, Villeurbanne) ; Cattaneo, M (CERN) ; Charpentier, P (CERN) ; Clarke, P (Edinburgh U.) ; Closier, J (CERN) ; Franchini, P (INFN, Ferrara) ; Graciani, R (Barcelona U.) ; Lanciotti, E (CERN) et al.
The increase in luminosity of the LHC in 2011 also brought an increase in computing requirements for data processing. This paper describes the data processing operations during the 2011 prompt reconstruction as well as the end-of-year re-processing of the full data sample. [...]
2012 - 8 p.

In : Computing in High Energy and Nuclear Physics 2012, New York, NY, USA, 21 - 25 May 2012, pp.032092
2.
LHCb: The Evolution of the LHCb Grid Computing Model
Reference: Poster-2012-224
Created: 2012. - 1 p.
Creator(s): Arrabito, L; Bernardoff, V; Bouvet, D; Cattaneo, M; Charpentier, P [...]

The increase of luminosity in the LHC during its second year of operation (2011) was achieved by delivering more protons per bunch and increasing the number of bunches. Taking advantage of these changed conditions, LHCb ran with higher pileup and a much larger charm physics programme, which introduced a bigger event size and longer processing times. These changes led to shortages in the offline distributed data processing resources: the CPU capacity needed for reconstruction increased by a factor of 2, storage needs at T1 sites grew by 70%, and problems subsequently arose with data throughput for file access from the storage elements. To accommodate these changes, the online running conditions and the Computing Model for offline data processing had to be adapted accordingly. This paper describes the changes implemented for the offline data processing on the Grid, relaxing the MONARC model in a first step and going beyond it subsequently. It further describes other operational issues discovered and solved during 2011, presents the performance of the system, and concludes with lessons learned for further improving data processing reliability and quality for the 2012 run, augmented by first results on computing performance from 2012.

(A toy numerical illustration of these scaling factors follows this record.)

Related links:
Conference: CHEP 2012
© CERN Geneva

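As a toy illustration of the scaling factors quoted in the poster abstract above, the sketch below applies the stated increases (roughly a factor 2 in reconstruction CPU and 70% more Tier-1 storage) to placeholder baseline values; the baselines are invented for illustration and are not LHCb's real 2010 figures.

```python
# Toy illustration of the resource scaling quoted in the abstract above:
# ~2x reconstruction CPU and ~70% more Tier-1 storage in 2011.
# The baseline values are placeholders, NOT real LHCb numbers.
baseline_reco_cpu = 100.0    # hypothetical 2010 reconstruction CPU need (arbitrary units)
baseline_t1_storage = 10.0   # hypothetical 2010 Tier-1 storage need (arbitrary units)

cpu_2011 = baseline_reco_cpu * 2.0        # "a factor 2 for reconstruction"
storage_2011 = baseline_t1_storage * 1.7  # "higher storage needs at T1 sites by 70%"

print(f"2011 CPU need:   {cpu_2011:.1f} (vs {baseline_reco_cpu:.1f} in 2010)")
print(f"2011 T1 storage: {storage_2011:.1f} (vs {baseline_t1_storage:.1f} in 2010)")
```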
3.
The LHCb Distributed Computing Model and Operations during LHC Runs 1, 2 and 3 / Roiser, Stefan (CERN) ; Ramo, Adria Casajus (Barcelona U.) ; Cattaneo, Marco (CERN) ; Charpentier, Philippe (PIC, Bellaterra) ; Clarke, Peter (Edinburgh U.) ; Closier, Joel (CERN) ; Corvo, Marco (INFN, Padua) ; Falabella, Antonio (INFN, Bologna) ; Molina, José Flix (Caracas, IVIC) ; Medeiros, Joao Victor De Franca Messias (Rio de Janeiro, CBPF) et al.
SISSA, 2015 - Published in : PoS ISGC2015 (2015) 005
In : International Symposium on Grids and Clouds 2015, Taipei, Taiwan, 15-20 Mar 2015, pp.005
4.
LHCbDirac: Distributed computing in LHCb / Stagni, F (CERN) ; Charpentier, P (CERN) ; Graciani, R (Barcelona U.) ; Tsaregorodtsev, A (Marseille, CPPM) ; Closier, J (CERN) ; Mathe, Z (CERN) ; Ubeda, M (CERN) ; Zhelezov, A (U. Heidelberg (main)) ; Lanciotti, E (CERN) ; Romanovskiy, V (Serpukhov, IHEP) et al.
We present LHCbDirac, an extension of the DIRAC community Grid solution that handles LHCb specificities. The DIRAC software has been developed for many years within LHCb only. [...]
2012 - 10 p. - Published in : J. Phys.: Conf. Ser. 396 (2012) 032104
In : Computing in High Energy and Nuclear Physics 2012, New York, NY, USA, 21 - 25 May 2012, pp.032104
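The LHCbDirac system described in the abstract above extends the generic DIRAC framework, whose public Python API exposes a simple job-definition interface. The sketch below is illustrative only: it is not code from the paper, and the class and method names are assumed from public DIRAC documentation (they may differ between DIRAC releases).

```python
# Minimal DIRAC job-definition and submission sketch (illustrative only;
# API names assumed from public DIRAC documentation, not from the paper).
from DIRAC.Core.Base import Script
Script.parseCommandLine()  # initialise the DIRAC configuration before using the APIs

from DIRAC.Interfaces.API.Dirac import Dirac
from DIRAC.Interfaces.API.Job import Job

job = Job()
job.setName("toy_grid_job")
job.setExecutable("/bin/echo", arguments="hello grid")  # placeholder payload
job.setCPUTime(3600)  # requested CPU time limit

result = Dirac().submitJob(job)  # returns the usual DIRAC S_OK/S_ERROR dictionary
if result["OK"]:
    print("Submitted job", result["Value"])
else:
    print("Submission failed:", result["Message"])
```

LHCb-specific production workflows (reconstruction, stripping, simulation) are layered by LHCbDirac on top of this kind of generic interface.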
5.
Behaviour of Small Gap + GEM Chambers in close LHC conditions / Bouvet, D ; Chorowicz, V ; Contardo, D ; Haroutunian, R ; Mirabito, L ; Perriès, S ; Smadja, G
2002 - Published in : Nucl. Instrum. Methods Phys. Res., A 478 (2002) 267-70
6.
Radiation Hardness Study With Small Gap Chambers Of 5 And 9 Cm Strips / Bouvet, D ; Chorowicz, V ; Contardo, D ; Haroutunian, R ; Mirabito, L ; Smadja, G
LYCEN-99-90.
- 1999.
IN2P3 Publications database
7.
Behaviour of Small Gap + GEM Chambers in Close LHC Conditions / Bouvet, D ; Chorowicz, V ; Contardo, D ; Haroutunian, R ; Mirabito, L ; Perriès, S ; Smadja, G
LYCEN-2001-98.- Lyon : CNRS Lyon. Inst. Phys. Nucl., 2002 - Published in : Nucl. Instrum. Methods Phys. Res., A 478 (2002) 267-270
In : 9th International Vienna Conference on Instrumentation, Vienna, Austria, 19 - 23 Feb 2001, pp.267-270
8.
The LHCb Data Management System / Baud, JP (CERN) ; Charpentier, Ph (CERN) ; Ciba, K (CERN) ; Graciani, R (Barcelona U.) ; Lanciotti, E (CERN ; Dublin City U.) ; Mathe, Z (CERN ; Dublin City U.) ; Remenska, D (NIKHEF, Amsterdam) ; Santana, R (Rio de Janeiro, CBPF)
The LHCb Data Management System is based on the DIRAC Grid Community Solution. LHCbDirac provides extensions to the basic DMS such as a Bookkeeping System. [...]
2012 - 10 p.

In : Computing in High Energy and Nuclear Physics 2012, New York, NY, USA, 21 - 25 May 2012, pp.032023
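The Data Management System summarised in the abstract above rests on the usual grid idea of a file catalogue: a logical file name (LFN) is mapped to the physical replicas held at various storage elements, while the Bookkeeping System mentioned in the abstract records provenance information. The sketch below illustrates only the LFN-to-replica mapping concept; the class and method names are hypothetical and are not part of the DIRAC or LHCbDirac API.

```python
# Conceptual sketch of the LFN -> replica mapping at the heart of a grid
# data management system.  Purely illustrative: these names are hypothetical
# and are not the DIRAC/LHCbDirac API.
from dataclasses import dataclass, field


@dataclass
class ReplicaCatalog:
    # logical file name (LFN) -> set of storage elements holding a copy
    replicas: dict = field(default_factory=dict)

    def register(self, lfn: str, storage_element: str) -> None:
        """Record that a copy of `lfn` exists at `storage_element`."""
        self.replicas.setdefault(lfn, set()).add(storage_element)

    def lookup(self, lfn: str) -> set:
        """Return all storage elements known to hold `lfn`."""
        return self.replicas.get(lfn, set())


catalog = ReplicaCatalog()
catalog.register("/lhcb/data/2011/RAW/run_12345.raw", "CERN-RAW")
catalog.register("/lhcb/data/2011/RAW/run_12345.raw", "GRIDKA-RAW")
print(catalog.lookup("/lhcb/data/2011/RAW/run_12345.raw"))  # both replicas
```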
9.
Architectures and methodologies for future deployment of multi-site Zettabyte-Exascale data handling platforms / Acín, V (Barcelona, IFAE ; PIC, Bellaterra) ; Bird, I (CERN) ; Boccali, T (INFN, Italy ; INFN, Pisa) ; Cancio, G (CERN) ; Collier, I P (Rutherford) ; Corney, D (Rutherford) ; Delaunay, B (CNRS, France ; CC, Villeurbanne) ; Delfino, M (Barcelona, Autonoma U. ; PIC, Bellaterra) ; dell'Agnello, L (INFN, Italy ; INFN, CNAF) ; Flix, J (Madrid, CIEMAT ; PIC, Bellaterra) et al.
Several scientific fields, including Astrophysics, Astroparticle Physics, Cosmology, Nuclear and Particle Physics, and Research with Photons, estimate that by the 2020 decade they will require data handling systems with data volumes approaching a Zettabyte, distributed amongst as many as 10^18 individually addressable data objects (Zettabyte-Exascale systems). It may be convenient or necessary to deploy such systems using multiple physical sites [...]

(A back-of-the-envelope check of these scales follows this record.)
2015 - 8 p. - Published in : J. Phys.: Conf. Ser. 664 (2015) 042009
In : 21st International Conference on Computing in High Energy and Nuclear Physics, Okinawa, Japan, 13 - 17 Apr 2015, pp.042009
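To put the scales quoted in the abstract above into perspective: a Zettabyte is 10^21 bytes, so spreading it over 10^18 individually addressable objects gives an average of only about 1 kB per object. A one-line check:

```python
# Back-of-the-envelope check of the scales quoted in the abstract above.
total_bytes = 10**21   # one Zettabyte
n_objects = 10**18     # individually addressable data objects
print(total_bytes / n_objects, "bytes per object on average")  # -> 1000.0, i.e. ~1 kB
```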
10.
Service monitoring in the LHC experiments / Barreiro Megino, Fernando (CERN) ; Bernardoff, Vincent (Paris, LPTHE) ; da Silva Gomes, Diego (Rio de Janeiro State U.) ; di Girolamo, Alessandro (CERN) ; Flix, José (Madrid, CIEMAT) ; Kreuzer, Peter (Aachen, Tech. Hochsch.) ; Roiser, Stefan (CERN)
The LHC experiments' computing infrastructure is hosted in a distributed way across different computing centers in the Worldwide LHC Computing Grid (WLCG [1]) and needs to run with high reliability. It is therefore crucial to offer a unified view to shifters, who generally are not experts in the services, and give them the ability to follow the status of resources and the health of critical systems in order to alert the experts whenever a system becomes unavailable. [...]

(A simple illustrative sketch of such status aggregation follows this record.)
2012 - 8 p.

In : Computing in High Energy and Nuclear Physics 2012, New York, NY, USA, 21 - 25 May 2012, pp.032010
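The unified shifter view described in the abstract above can be thought of as an aggregation of per-service health probes into a single status summary. The sketch below is purely illustrative; the function and service names are hypothetical and are not the experiments' actual monitoring code.

```python
# Illustrative aggregation of per-service health probes into a single
# shifter-facing summary.  All names here are hypothetical.
from typing import Callable, Dict


def aggregate_status(probes: Dict[str, Callable[[], bool]]) -> Dict[str, str]:
    """Run each probe and map its service to 'OK' or 'ALARM'."""
    summary = {}
    for service, probe in probes.items():
        try:
            summary[service] = "OK" if probe() else "ALARM"
        except Exception:
            summary[service] = "ALARM"  # a probe that fails to run counts as an alarm
    return summary


# Stand-in probes for two hypothetical critical services.
status = aggregate_status({
    "data-transfer": lambda: True,
    "job-submission": lambda: False,
})
print(status)  # {'data-transfer': 'OK', 'job-submission': 'ALARM'}
```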
