The main topic of this meeting was the draft report on existing architectures. We began with some general comments on the report. It was observed that the report was intended to identify what could be learned from this generation of experiments and what could not. Harvey observed that it is clear from the report that the scale of the computing problem in these older experiments is so much smaller than for the LHC experiments that they cannot possibly DEFINE a framework for the LHC. However, the survey will help us make the best possible use of the experiences of the past in formulating what is essentially a new model of computing. We should state in the conclusion section that, at least based on this survey, a distributed computing model for the LHC will have to be developed as a qualitatively and quantitatively new kind of system in the course of the LHC project.

Mauro observed that we can learn much by contrasting the ways in which Monte Carlo event generation and data analysis were handled. First, remote sites made major contributions to the Monte Carlo work, whereas for the most part they did not contribute to the event reconstruction or to the data analysis. The reasons for this were discussed briefly; Vivian will amplify the discussion. Second, Mauro observed that even at central sites, Monte Carlo and analysis tended to be done on different machines. He wondered why this was, and whether in a new distributed model it would be possible, desirable, and even natural to use all machines for both purposes.

We then went through the document section by section.

Scope of the Survey: Perhaps it would be more polite to say that 'NA48 and NA45 are intermediate in the size of the computing task although smaller in the size of the collaboration'. Harvey recalls that the first estimates for early LEP and early Run I were low by one order of magnitude for startup and by two orders of magnitude for steady-state running.
This can be documented by comparing the numbers in our tables to the 'Green Book' of circa 1984. Tim Smith will try to locate a copy of the Green Book. From memory, we believe that the early LEP estimate was for 72 MIPS for analysis and reconstruction vs. perhaps 20,000 finally installed. The estimate for the 1988/89 FNAL collider run was 50 MIPS, which was off by a factor of at least 10 and is tiny compared to what was eventually required. We should also recall, based on our previous discussion, that there are new drivers of computing requirements, such as the object database overhead, which add to the need. Monte Carlo needs have also traditionally been underestimated; they are frequently of the same order as the real data analysis.

Methodology of the Survey: It was suggested that we add to the list of experiments and summarizers the 'contact person(s)' that they talked to. As long as the people agree, I think this is a good idea.

Quantitative Results of the Survey: Concerning the survey numbers, Tim reminded us that the numbers in the tables represent steady-state numbers for the solutions that finally worked. In some cases, other approaches were tried that are not reflected in the survey. It was impossible to get people to provide numbers for 'event analysis time'. These times varied greatly depending on the analysis, and the sources were reluctant to provide 'average' times. Tim suggested that people might be more willing to give a range of times. The summarizers will try to ask their sources on the experiments to answer the rephrased question. Tim also suggested that we leave the 'sparse' columns in the text for now, since he still has hopes of filling them in. On the question of non-central analysis resources, Laura remarked that the kinds of machines available for each experiment among the Italian research institutions can in fact be learned through the funding agencies.
It was felt that we should also leave the offsite CPU column in the report even though it is mostly blank. It was decided to include the FOCUS experiment at FNAL in the survey. FOCUS ran in 1996/97 and used wide area networking along with tape export to implement a distributed DST production. The major participating sites are connected to VBNS. Joel will get the required information. Other examples that people could think of where systems have been used for remote computing are a system at Bologna which did Monte Carlo work on workstation clusters, the ACP system at Michigan State, and D0 facilities in Brazil (CBPF), Texas, and elsewhere. These are examples of offsite responses to specific critical needs of an experiment. We need to try to get information about these efforts.

Analysis and Conclusions: People liked the idea of comparing the LHC requirements with those of this generation in a table. However, it was noted that the text claimed that ALEPH was in the table, and it is not. It was decided that we should include representative or even maximal numbers for the LEP experiments rather than single out one of them. It was observed that the BaBar and Run II numbers are also small compared to the LHC numbers; that is the subject of another report. We need to include the words quoted at the beginning of this note as a conclusion. I repeat them here: It is clear from the report that the scale of the computing problem in these older experiments is so much smaller than for the LHC experiments that they cannot possibly DEFINE a framework for the LHC. However, the survey will help us make the best possible use of the experiences of the past in formulating what is essentially a new model of computing.

People accepted the recommendation for a method of tracking the next round of experiments systematically. Joel will, however, come up with a better title for the section. A revision of this document will be issued early next week.
People are asked to read the revised document and to send comments to Vivian.

Action List:
1) Expand the analysis of why Monte Carlo was more successfully distributed than other parts of the task (Vivian)
2) Locate a copy of the Green Book and look up the initial LEP estimates (Tim)
3) Get permission and list the names of 'contact persons' on the experiments (survey team)
4) Get a 'range of times' for analysis tasks (survey team)
5) Contact Laura to learn whether we can get a list of resources available for analysis at Italian institutions (Joel)
6) Add FOCUS (FNAL E831) to the survey (Joel)
7) Try to learn about D0 Monte Carlo production at CBPF and elsewhere (Mike) and the Michigan State ACP farm (Joel)
8) Add the paragraph quoted above to the conclusions (Vivian)
9) Generate a revised draft (Vivian)