LCB 99-xx

Models of Networked Analysis at Regional Centres for LHC Experiments

(MONARC)

PROGRESS REPORT

Version of 11:00 24th May 1999

Prepared by

M. Aderholz (MPI), K. Amako (KEK), E. Arderiu Ribera (CERN), E. Auge (L.A.L/Orsay), G. Bagliesi (Pisa/INFN), L. Barone (Roma1/INFN), G. Battistoni (Milano/INFN), J. Bunn (Caltech/CERN), J. Butler (FNAL), M. Campanella (Milano/INFN), P. Capiluppi (Bologna/INFN), M. Dameri (Genova/INFN), D. Diacono (Bari/INFN), A. di Mattia (Roma1/INFN), U. Gasparini (Padova/INFN), F. Gagliardi (CERN), I. Gaines (FNAL), P. Galvez (Caltech), C. Grandi (Bologna/INFN), F. Harris (Oxford/CERN), K. Holtman (CERN), V. Karimäki (Helsinki), J. Klem (Helsinki), M. Leltchouk (Columbia), D. Linglin (IN2P3/Lyon Computing Centre), P. Lubrano (Perugia/INFN), L. Luminari (Roma1/INFN), M. Michelotto (Padova/INFN), I. McArthur (Oxford), H. Newman (Caltech), S.W. O'Neale (Birmingham), B. Osculati (Genova/INFN), M. Pepe (Perugia/INFN), L. Perini (Milano/INFN), J. Pinfold (Alberta), R. Pordes (FNAL), S. Rolli (Tufts), T. Sasaki (KEK), L. Servoli (Perugia/INFN), R.D. Schaffer (Orsay), M. Sgaravatto (Padova/INFN), T. Schalk (BaBar), J. Shiers (CERN), L. Silvestris (Bari/INFN), G.P. Siroli (Bologna/INFN), K. Sliwa (Tufts), C. Stanescu (Roma3/INFN), T. Smith (CERN), C. von Praun (CERN), E. Valente (INFN), I. Willers (CERN), R. Wilkinson (Caltech), D.O. Williams (CERN)

Executive Summary

Laura Perini will contribute one page.

To be consistent with the PEP, this is not labelled as a chapter (you may think of it as Chapter 0).

Chapter 1: Introduction

This comes from Harvey Newman and should describe the structure of the document.

 


Chapter 2: Progress Report of the Architecture Working Group

2.1 Introduction

The basic task of the Architecture Working Group is to develop distributed computing system architectures for LHC which can be modeled to verify their performance and viability. To carry out this task, the group considers the LHC analysis problem "in the large". We start with the general parameters of an LHC experiment, such as:

From there we conduct detailed discussions about how the analysis task will be divided up between the computing facility at CERN and computing facilities located outside of CERN. We consider what kind of facilities will be viable given different analysis approaches and networking scenarios, what kind of issues each type of facility will face, and what kind of support will be required to sustain the facility and make it an effective contributor to LHC computing.

The general picture that has emerged from these discussions is:

The primary motivation for a hierarchical collection of computing resources, called Regional Centres (RCs), is to maximize the intellectual contribution of physicists all over the world without requiring their physical presence at CERN. An architecture based on RCs allows an organization of computing tasks which can take advantage of physicists no matter where they are located. Next, a computing architecture based on RCs acknowledges the facts of life about network bandwidths and costs: short-distance networks will always be cheaper and offer higher bandwidth than long-distance (especially intercontinental) networks, and a hierarchy of centres with associated data storage ensures that network realities will not interfere with physics analysis. Finally, RCs provide a way to utilize the expertise and resources residing in computing centres throughout the world. For a variety of reasons it is difficult to concentrate resources (not only hardware but, more importantly, personnel and support) in a single location; an RC architecture will provide greater total computing resources for the experiments by allowing flexibility in how these resources are configured and located. A corollary of these motivations is that the RC model allows one to optimize the efficiency of data delivery and access by making appropriate decisions on where the data are processed. One important motivation for having such 'large' Tier1 RCs is to have centres with a critical mass of support people, while not proliferating centres, which would create an enormous coordination problem for CERN and the collaborations.

There are many issues with regard to this approach. Perhaps the most important involves the coordination of the various Tiers. While the group has a rough understanding of the scale and role of the CERN centre and the Tier1 RCs, whether we need Tier2 centres and special purpose centres and what their roles should be is much less clear. Also, there are a variety of approaches to actually implementing a Tier1 centre. Regional centres may serve one or more than one collaboration and each arrangement has its advantages and disadvantages.

To keep its discussions well grounded in reality, the group has undertaken the following tasks, which are described in the MONARC Project Execution Plan (PEP):

  1. A survey of the computing architectures of selected existing HEP experiments;
  2. A survey of the computing architectures of experiments that are just starting to run or will start within the next year or so;
  3. Discussions and meetings with representatives of proposed Regional Centre candidate sites concerning their proposed level of services and support, architecture, and management;
  4. Technology evaluation and cost tracking; and
  5. Network performance and cost tracking.
Items 1 and 2 help us develop models to input to the Simulation and Testbed Working Groups. Item 3 is essential to ensure that the proposed models of distributed computing are "real" in the sense that they are compatible with the views of likely Tier1 RC sites. Items 4 and 5 keep model building within the boundaries of available technology and funding.

2.2 Results from the Last Year

This year, the Architecture Working Group has produced three documents that have been submitted to the full collaboration and is beginning work on a fourth:

  1. Report on Computing Architectures of Existing Experiments, V. O'Dell et al.;
  2. Rough Sizing Estimates for a Computing Facility for a Large LHC Experiment, Les Robertson;
  3. Regional Centers for LHC Computing, Luciano Barone et al.; and
  4. Report on Computing Architectures of Future Experiments (in progress).
The first three documents are available at
http://www.cern.ch/MONARC/docs/monarc_docs.htm
They are summarized briefly below, along with the plans for the fourth document.

2.2.1 Report on Computing Architectures of Existing Experiments

This survey included:

The main conclusion from this report is that the LHC experiments are at such a different scale from these experiments, and technology has changed so much since some of them ran, that the LHC experiments will need a new model of computing. We can, however, derive valuable lessons on individual topics and themes from these experiments.

Some of the most important lessons learned were:

2.2.2 Rough Sizing Estimates for a Computing Facility for a Large LHC Experiment

This document was prepared by Les Robertson of CERN IT. It attempts to summarize the rough capacities needed for the analysis of an LHC experiment and to derive from them the size of the CERN central facility and of a Tier1 Regional Centre. The information was obtained from estimates by CMS and cross-checked with ATLAS and with the MONARC Analysis Working Group. Some adjustments have been made to the numbers obtained from the experiments to account for overheads that are now measured but were not when the original estimates were made. While the result has not yet been reviewed by CERN management, it currently serves as our best indication of thinking on this topic at CERN, so we are using it as the basis for proceeding.

It is believed that CERN will be able to satisfy about half of the aggregate computing need of the LHC experiments. The remainder must come from elsewhere. The view expressed by the author is that it must come from a 'small' number of Tier1 Regional Centres, so that the problems of maintaining coherence and coordinating all the activities are not overwhelming. This sets the size of a Tier1 RC at 10-20% of the CERN centre in capacity.
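As a back-of-the-envelope check, these two figures fix the scale of the Tier1 system: if CERN supplies about half of the total, the external half equals the CERN centre in capacity, so roughly 5-10 Tier1 centres of the stated size are needed. A minimal sketch of this arithmetic in Python (the fractions come from the text; no absolute capacities are assumed):

    # Arithmetic implied by the sizing estimates above. The two fractions
    # are taken from the text; everything else is purely illustrative.
    cern_share_of_total = 0.5            # CERN covers about half the aggregate need
    external_share = 1.0 - cern_share_of_total

    for tier1_fraction_of_cern in (0.10, 0.20):
        # The external half equals the CERN centre in capacity, so the number
        # of Tier1 centres needed is (external/CERN) divided by the per-centre
        # fraction of CERN capacity.
        n_centres = (external_share / cern_share_of_total) / tier1_fraction_of_cern
        print(f"Tier1 at {tier1_fraction_of_cern:.0%} of CERN -> about {n_centres:.0f} centres")
    # -> about 10 centres at 10%, about 5 at 20%: a 'small' number, as argued above.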

2.2.3 Regional Centers for LHC Computing

Based on Les Robertson's estimates, and on the issues with distributed computing raised by the survey Computing Architectures of Existing Experiments, we developed a framework for discussing Regional Centres and produced a document which gives a profile of a Tier1 Regional Centre.

This profile is based on facilities (and the corresponding capacities) and services (capabilities) which need to be provided to users. There is a clear emphasis on data access by users since this is seen as one of the largest challenges for LHC computing.

It is important to recognize that MONARC cannot, and does not want to, dictate the details of the Regional Centre architecture. That is best left to the collaborations, the candidate sites, and CERN to work out on a case-by-case basis. MONARC wants to provide a forum for the discussion of how these centres will get started and develop, and can play the role of facilitator in the effort to locate candidate centres and bring them into the discussion.

The report describes the services that we believe CERN will supply for LHC data analysis (based on the work of Les Robertson and his team). These include:

CERN will have the original or master copy of the following data:

The regional centres will provide:

Support is called out as a key element in achieving the smooth functioning of this distributed architecture. It is essential for an RC to provide a critical mass of user support. It is also noted that, since this is a commitment that extends over a long period of time, permanent staff, a budget for hardware evolution, and support for R&D into new technologies must be provided.

2.2.4 Report on Computing Architectures of Future Experiments

Work on this report is just beginning. It will include a study of BaBar at SLAC, CDF and D0 Run II at Fermilab, COMPASS at CERN, and the STAR experiment at RHIC. The approach will be to survey the available public literature on these experiments and to abstract information that is particularly relevant to LHC computing. This can be supplemented where required by discussions with leaders of the computing and analysis efforts. There will not be an attempt to create complete, self-contained expositions of how each experiment does all its tasks. We will have a 'contact person' for each experiment who will be responsible for gathering the material and summarizing it for the report. Most of these contact persons are now in place. There will be an overall editor for the final report.

2.2.5 First Meeting of Regional Centre Representatives

On April 13, there was a meeting of representatives of potential Regional Centre sites. It was felt at this point that we had made good progress in understanding the issues of how Regional Centres could contribute to LHC computing and it was now time to share this with possible candidates, to hear their plans for the future, and to get their feedback on our discussions. The three documents discussed above, which had been made available in advance of the meeting, were summarized briefly. We then heard presentations from IN2P3/France, INFN/Italy, LBNL/US(ATLAS), FNAL/US(CMS), UK, Germany, KEK/Japan(ATLAS), Russia/Moscow. Transparencies of these presentations and a summary may be found at

http://www.fnal.gov/projects/monarc/task2/rc_mtg_apr_23_99.html

While nothing is yet certain, it did appear that several Regional Centre candidates have a good chance of obtaining support to proceed and will be at a scale roughly equivalent to MONARC's profile of a Tier1 RC. It was also clear that there will be several styles of implementation of the RC concept. One variation is that several centres see themselves serving all four major LHC experiments, while others, especially in the US and Japan, will serve only a single experiment. Another is that some Tier1 RCs will be located at a single site, while others may themselves be somewhat distributed, although presumably quite highly integrated.

2.2.6 Technology Tracking

The main initiative in technology tracking was to take advantage of CERN IT efforts in this area. We heard a report on the evolution of CPU costs from Sverre Jarp of CERN, who serves on a group called PASTA, which is tracking processor and storage technologies. We look forward to additional such presentations in the future.

2.3 Goals and Milestones for the July-December period

2.3.1 Complete the Report on Computing Architectures of Future Experiments by mid-July

2.3.2 Produce the final document on the Regional Centres by the end of the year

2.3.3 Begin to develop realistic models

Begin the task of developing models of computing that can be simulated. Focus on simulations which emphasize the large-scale production, data management, and analysis issues. Address real-world issues such as priority assignment and scheduling; a toy illustration of this kind of model follows below.
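As a purely illustrative example of what "priority assignment and scheduling" might look like in such a model (this is not the MONARC simulation tool; all names and numbers are hypothetical), the Python sketch below schedules jobs of different priorities onto a fixed pool of CPUs at a single centre. Realistic models will add data access, mass storage, and network transfers:

    import heapq

    def simulate(jobs, n_cpus):
        """Non-preemptive priority scheduling at one centre.

        jobs: list of (submit_time, priority, duration); a lower priority
        value is served first. Returns (submit_time, priority, finish_time).
        """
        pending = sorted(jobs)                # jobs ordered by submission time
        cpu_free = [0.0] * n_cpus             # heap of times at which CPUs free up
        heapq.heapify(cpu_free)
        queue, finished, clock, i = [], [], 0.0, 0
        while i < len(pending) or queue:
            # admit every job submitted up to the current time
            while i < len(pending) and pending[i][0] <= clock:
                t, prio, dur = pending[i]
                heapq.heappush(queue, (prio, t, dur))
                i += 1
            if queue:
                # dispatch the highest-priority waiting job to the next free CPU
                prio, t, dur = heapq.heappop(queue)
                start = max(clock, heapq.heappop(cpu_free))
                heapq.heappush(cpu_free, start + dur)
                finished.append((t, prio, start + dur))
                clock = start
            else:
                clock = pending[i][0]         # idle until the next submission
        return finished

    # e.g. two long production jobs and one short high-priority analysis job
    # competing for 2 CPUs (all numbers hypothetical):
    print(simulate([(0.0, 5, 10.0), (0.0, 5, 10.0), (1.0, 1, 2.0)], n_cpus=2))

Even this toy version exhibits the effect the models must capture: the high-priority analysis job, submitted while both CPUs are busy with production, must wait for a CPU to free up, so its turnaround is dominated by the production load rather than by its own two-unit duration.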

Chapter 3: Progress Report of the Testbed Working Group

The Working Group presents its Progress Report.
 
 

Chapter 4: Progress Reports of the Analysis and Simulation Working Groups

The Working Groups present Progress Reports.
 
 

Chapter 5: Workplan and Schedule

Laura

 

Chapter 6: Ideas for Phase Three

Harvey

 

Conclusions

(Is this another chapter or a subsection of Ideas?) Harvey

 

Chapter 7: References

The references (in the PEP) were more like a list of further reading and were not necessarily cited in the text. This makes editing a lot easier. The hypertext links worked and were presented in plain text for the use of a reader with a printed copy.

If we cite MONARC internal documents then they should be in a publicly accessible area. Do we have a mechanism to store and distribute printed copies?

  1. MONARC Home Page, http://www.cern.ch/MONARC/
  2. MONARC PAP, June 1998, http://atlasinfo.cern.ch/Atlas/GROUPS/WWCOMP/pap_june30.html
  3. MONARC PEP, September 1998, http://home.cern.ch/~soneale/monarc/pep-fab.html