Abstract
In 2014 the Insertable B-Layer (IBL) will extend the existing Pixel Detector of the ATLAS experiment at CERN by over 12 million additional pixels. For calibration and monitoring purposes, occupancy and time-over-threshold data are histogrammed in the read-out hardware. Further processing of the histograms takes place on commodity hardware, which requires not only the fast transfer of histogram data from the read-out hardware to the computing farm via Ethernet, but also the integration of the new software and hardware into the existing data-acquisition and calibration frameworks (TDAQ and PixelDAQ) of the ATLAS experiment and the current Pixel Detector.
We implement the software running on the compute cluster with an emphasis on modularity, allowing for flexible adjustment of the infrastructure and good scalability with respect to the number of network interfaces, available CPU cores, and deployed machines. The modular design allows us not only to employ CPU-based fitting algorithms, but also to take advantage of the performance offered by a GPU-based approach to fitting.
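To illustrate the back-end flexibility mentioned above, the following C++ fragment is a minimal, hypothetical sketch of how a fitting back-end could be chosen at run time through a common interface. All names (FitBackend, CpuFitBackend, GpuFitBackend, makeBackend) are illustrative assumptions and not the actual PixelDAQ interfaces.

```cpp
// Hypothetical sketch: a fitting back-end interface that lets the histogram
// processing switch between CPU- and GPU-based fitting without code changes.
// All class and function names are illustrative assumptions.
#include <iostream>
#include <memory>
#include <string>
#include <vector>

struct FitResult {
    double threshold;  // fitted threshold of the S-curve
    double noise;      // fitted noise width
};

// Common interface implemented by every fitting back-end.
class FitBackend {
public:
    virtual ~FitBackend() = default;
    // Fit one occupancy histogram (hit fraction per injected-charge step).
    virtual FitResult fit(const std::vector<double>& occupancy) = 0;
};

// CPU implementation: placeholder for a conventional S-curve fit.
class CpuFitBackend : public FitBackend {
public:
    FitResult fit(const std::vector<double>& occupancy) override {
        // A real implementation would perform an error-function fit here.
        return {occupancy.size() / 2.0, 1.0};
    }
};

// GPU implementation: placeholder for offloading many histograms at once.
class GpuFitBackend : public FitBackend {
public:
    FitResult fit(const std::vector<double>& occupancy) override {
        // A real implementation would launch a GPU kernel here.
        return {occupancy.size() / 2.0, 1.0};
    }
};

// Back-end selection becomes a configuration choice, not a code change.
std::unique_ptr<FitBackend> makeBackend(const std::string& name) {
    if (name == "gpu") return std::make_unique<GpuFitBackend>();
    return std::make_unique<CpuFitBackend>();
}

int main() {
    auto backend = makeBackend("cpu");
    FitResult r = backend->fit(std::vector<double>(64, 0.5));
    std::cout << "threshold=" << r.threshold << " noise=" << r.noise << '\n';
}
```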