Minutes of November UEC Meeting

Present: Alton, Bertram, Finley, Gottschalk, Hagopian, Hughes, Messier, Nguyen, Rolli, Tanaka, Trischuk
Apologies: Artuso, Bloom
GSA: Clark, Katsanos, Sengupta

Chair Trischuk convened the meeting at 9 AM.

Victoria White, Head of the Computing Division (CD): Users at Fermilab and Computing Division Services and Directions
Slides: http://www.fnal.gov/orgs/fermilab_users_org/minutes/white_UEC_Nov04.ppt

Computing Division services support the computing infrastructure for the entire program at Fermilab, including the Run II collider experiments, the neutrino program (MiniBooNE and MINOS), astrophysics and cosmology (SDSS, SNAP, CDMS, Auger), as well as CMS, BTeV and accelerator computing. Many of the services, such as email, central storage, networking and OS support, are "24/7."

Infrastructure Upgrades: To keep up with the computing needs of Fermilab users, a large project has been undertaken to build satellite computing facilities for the Feynman Computing Center, which has run out of space, power and cooling. At the New Muon Lab, a new center for lattice gauge computing has been set up; it now also accommodates some computing for CDF and D0. Meanwhile, the conversion of the old Wide Band building into a 2500 sq. ft. satellite facility is nearly finished, with people now moving in. Though one-sixth the size of Feynman, it has more power and cooling (2.5 MW versus 1 MW). While power needs will continue to increase with computing demand, the new facility is designed to be sufficient through FY2007.

GRID: For future computing needs, CD is looking not only to increase on-site capacity, but also to leverage computing provided by users at their own facilities via the Grid. Fermilab has been a leader in Grid computing and is a Tier 1 CMS facility. Currently, D0 is reconstructing events across the world with 30 D0 SAM sites. The SAM grid is a fully functional distributed infrastructure that is also used by CDF (20 sites) and MINOS. To take the step from concept to reality, Fermilab took part in the GRID3 Challenge about a year ago to demonstrate the productivity of the Grid system. Nearly 3000 CPUs at 28 sites were part of the study, and CMS utilized nearly all the available computing resources during its data challenge. A new initiative, the Open Science Grid, will join together all the LHC computing resources in the US (both laboratories and universities) and attract a broader community of labs and universities in a wide range of disciplines (computer science, biology) to produce a shared Grid infrastructure. Fermilab has a strong role in this initiative. A white paper for the Open Science Grid is available at http://www.opensciencegrid.org

At Fermilab, CD is putting together FermiGrid to pull together the lab's computing and storage. This will allow optimal usage by letting otherwise idle resources be used by other experiments, moving away from computing farms dedicated to specific experiments and projects. FermiGrid also paves the way for integration into the Open Science Grid and the LCG (LHC Computing Grid). There have been many positive developments toward integration of the US and European grids, together with an emerging trend to view the "Grid" not as a monolithic structure but as a "grid of grids."

Networking: The primary link for Fermilab is the ESnet OC12, which delivers 622 Mbit/s. This infrastructure is currently more or less saturated.
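(For scale, a back-of-envelope estimate added here for clarity, not from the talk: a fully utilized OC12 at 622 Mbit/s moves about 622 x 10^6 bit/s x 86400 s/day / 8 ≈ 6.7 TB/day, while a 10 Gbit/s link could carry on the order of 100 TB/day.)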
The current plan is to utilize a 10 Gbit/s dark fiber link connected to the Starlight system owned by Northwestern University. This is currently in the R&D stage. Fermilab is working with ESnet and Chicago-area institutions to develop the network infrastructure in the Chicago area.

Cybersecurity: In an increasingly dangerous world, Kerberos has worked effectively to fend off attacks (which arrive at a rate of 1000 per minute) while maintaining the openness needed for Fermilab's scientific mission. Its effectiveness has prevented the need to implement more drastic measures such as one-time passwords and firewalls.

Central Storage: 2.3 petabytes of data are now stored at Fermilab. Transfer rates often exceed 40 TB/day, with peaks of up to 60 TB/day. The tape system is now completely automated: data is stored on tapes accessible by robots. Legacy data is also available on request (via the Helpdesk), with a typical turnaround of 24 hours, though one may have to wait up to two weeks.

Helpdesk: The helpdesk service has been upgraded to provide an after-hours call center, allowing users to talk to live operators when problems arise. The helpdesk system maintains a database to track problems and escalates calls when problems remain unsolved.

Scientific Linux (SL): Fermilab has an initiative, now joined by CERN, to produce a Red Hat Enterprise Linux (RHEL) clone. SL will remain binary compatible with RHEL on a core package set. There has been a positive response so far across the HEP community.

LHC Physics Center: CD is now working to make Fermilab an LHC Physics Center (LPC) by complementing the existing physics communities centered around CDF and D0 with the computing resources necessary to give Fermilab a central role in LHC analyses. The LPC is still under development. Avi Yagil and Sarah Eno are the contacts for information and comments.

Bill Flaherty (Chief of Security): Security at Fermilab

Security at Fermilab is managed by four Fermilab staff members with separate portfolios. In Fiscal Year 2003, there were 10 incidents of non-government items being stolen or lost, with a total value of about $12,000. In Fiscal Year 2004, there were 22 incidents involving items with a total value of $9,000; these include two stolen laptops reported in October. Most stolen items are wallets and purses. By comparison, inventory losses of government property amounted to $71.8 thousand (estimated value) or $485 thousand (purchase value).

The procedure followed by security personnel responding to theft reports is to:
1. Fill out an initial report.
2. Distribute it to lab management.
3. Follow up leads.
4. Contact local law enforcement if appropriate, or if the victim requests it.

Security responds to 1200-1400 reports per year (4 to 5 per day). Local law enforcement may be contacted when:
1. The elapsed time since the crime is short.
2. There was forced entry.
3. Evidence is present.
4. The value of the stolen items is high.

To report a theft:
1. Call x3414 (24/7 security dispatch).
2. Do not disturb the crime scene.

Some recommended theft deterrents:
1. Mark items: name and driver's license number are recommended markers.
2. Keep items out of sight.
3. Lock items up.
4. Challenge strangers or suspicious individuals.
5. Report suspicious activity to security.

Users can request locks by contacting their building manager. The UEC members discussed with Flaherty various means of alerting the user community when a pattern of crime emerges. Ideas included an email to the users' list and signs around areas where thefts have been reported.
The possibility of providing lockers to users, particularly those in areas where working spaces cannot be secured, was also discussed.

Jim Alexander: Chair of the Physics Advisory Committee (PAC)

The purpose of the PAC is to advise the Fermilab Directorate on the experimental program. Typically, users at Fermilab propose an experiment, which is reviewed by the PAC. The PAC can recommend that the proposal be approved, rejected, or deferred pending further investigation. The PAC may also ask proposers to address specific issues regarding the proposed experiment. With the expanding intersections among astrophysics, nuclear physics and high energy physics, the PAC membership now includes an astrophysicist and a nuclear physicist. Fermilab now has an extensive astrophysics program, while neutrino physics has become both a consumer and a driver of developments in nuclear physics.

Chair Trischuk reported on a meeting with Fermilab Director Witherell, who was unable to attend the UEC meeting due to the International Linear Collider meeting at KEK.

1) How is the current shutdown going? What is the schedule foreseen to return to physics production (for MiniBooNE and the collider)?

The shutdown has lived up to all expectations. Electron cooling is just on schedule, and it was always known to be on the critical path. It has been generally accepted that this was an ambitious schedule, and the thirteen-week nominal schedule will be accomplished in thirteen and a half weeks. This means the accelerators other than the proton source will start circulating beam just after Thanksgiving. The Linac and Booster are already operating. Other highlights: The MINOS Lambertson magnet is now installed; this was the critical path item for NuMI. The first store for the collider is expected in early December. Electron cooling commissioning will start by the end of the year, with the goal of test shots of electron-cooled pbars by summer next year. Booster shutdown work went according to plan: the new RF stations have been installed and the second dogleg has been reconfigured. When I spoke with MiniBooNE on November 4th, the Booster was already near pre-shutdown intensities. The Booster is now in studies, achieving batches of 4.5 x 10^12 with 88% efficiency. First physics beam is expected on November 30th.

2) Has the accident at SLAC had any impact on operations at FNAL?

This accident got the attention of all the DOE labs. The incident has been thoroughly discussed at Fermilab to ensure that we have learned all the lessons that are there to be learned. The review has not found any cause to change established FNAL procedures, so no stand-downs have been necessary. We have looked very hard and are confident that there aren't any obvious weaknesses, and we will look at ways to reinforce people's attention to the hazards of electrical work. Fermilab has provided lab staff to participate in the safety reviews at SLAC.

3) Where is BTeV in the review and approval process?

Erratum: The October UEC minutes included an incorrect statement that BTeV is expected to have lower flavor tagging efficiency compared to LHCb. The October minutes have been corrected.

BTeV CD-1 has been approved by the DOE Office of Science and is now pending approval from the DOE Office of Engineering and Construction Management. The schedule has been fixed in response to the CD-1 reviews; we are now headed for the CD-2 review (staging the electromagnetic calorimeter and some other parts). The CD-2 and CD-3a reviews will take place December 14-16 at Fermilab.
These reviews will establish the baseline and early procurement plans for BTeV. The Director's review was held September 28-30. The funding profile has been adjusted to match available funds, with the result that the detector, assembly hall outfitting, and IP insertion are now all rolled in.

Jeff Spalding: Tevatron Run II Upgrade Plan
Slides: http://www.fnal.gov/orgs/fermilab_users_org/minutes/spaulding_UEC_Nov04.ppt

The goal of the Run II upgrade plan is to maximize integrated luminosity by FY2009. To do this, there will be efforts on all aspects of the luminosity integral, including emittances, efficiencies, and bunch intensities. So far, a large amount of progress has been made by increasing the proton brightness and the reliability; however, there is only so far one can go by increasing the proton bunch intensity. The program includes some major upgrades, but has a large operational component.

An important part of the plan is to increase pbar production. To keep up with the pbar burn rate from collisions alone, one needs to produce 2 x 10^10 pbars/hour for 15 pb^-1 of weekly delivered luminosity. Other pbar losses mean that the required figure is actually 8 x 10^10 pbars/hour, which is roughly the current stacking rate. To increase the weekly integral from 15 to 50 pb^-1, pbar production must increase to 30-40 x 10^10 pbars/hour (see the worked scaling at the end of this section).

The upgrade plan is dynamic and broken into several phases, with decision points at the end of each phase to adapt the plan as it unfolds:

Phase I: Complete.
Phase II: Starting with the end of the current shutdown, operate with slip-stacking.
Phase III: Bring electron cooling in the Recycler into operations and implement an interim upgrade of the stacktail cooling system.
Phase IV: Full upgrade of the stacktail cooling system.

In Phase I, all goals were met except one: pbar production. The goal was to achieve 18 x 10^10 pbars/hour, but 12.7 x 10^10 pbars/hour was actually achieved. While the beam-on-target increased by 4%, the cycle time also increased by 29%. The current culprit is the debuncher-accumulator transfer line. During the shutdown, an alignment of the transfer line was performed, along with the replacement of a failing septum magnet. These should improve the situation, but it is still not known whether they will definitively resolve the problem.

The FY05 plan:
1. Delivered luminosity: 375 pb^-1 base and 475 pb^-1 design. The base plan is to match FY04 performance while bringing new items online.
2. Operate with slip-stacking for pbar production.
3. Commission electron cooling. This is the main consumer of the 20% pbar tax.
4. Multi-batch operations for NuMI, leading to slip-stacked batches for NuMI.

The design luminosity projection reaches 8 fb^-1 by the end of 2009, relying on improved performance of the pbar source and successful integration of electron cooling. A fall-back scenario without these improvements projects 4 fb^-1. The main hinge points are:
1. Whether the debuncher-accumulator transfer line issues have been solved by the shutdown work.
2. Whether electron cooling can be demonstrated by the 2005 shutdown.

The Tevatron Run II program is a coordinated effort across all the divisions at the lab, with significant help from the experiments on several of the projects.
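Worked scaling for the pbar figures above (an added illustration, not from the talk): if a stacking rate of 8 x 10^10 pbars/hour supports 15 pb^-1/week, then scaling linearly to 50 pb^-1/week would require roughly (50/15) x 8 x 10^10 ≈ 27 x 10^10 pbars/hour, consistent with the quoted target of 30-40 x 10^10 once some headroom for additional losses and inefficiencies is included.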
Bill Foster: The Fermilab Proton Driver Plan
Slides: http://www.fnal.gov/orgs/fermilab_users_org/minutes/Foster_UEC_Nov04.ppt

Plans for a Fermilab proton driver are centered around an 8 GeV superconducting linac injecting H- into the Main Injector, using SNS technology for the first 1.3 GeV and TESLA technology for the rest. As a result, 7/8ths of the accelerator will have a large overlap with the ILC. The initial setup will provide 0.5 MW of output beam power at 8 GeV, with an upgrade path to 2 MW. Another attractive feature is that this power can be provided either at 8 GeV (straight out of the linac) or at 120 GeV (out of the Main Injector), or at any energy in between; in other words, the power is independent of the beam energy (see the illustration at the end of this section). The small emittance of the linac also means smaller losses in the Main Injector.

The scientific program of the proton driver could include:
1. Near-term neutrino physics. The variable energy is particularly attractive for this, and the beam can be directed to multiple detectors in different directions.
2. A bridge to the ILC: the current design calls for 50 cryomodules and 12 RF stations, which is effectively 1.5% of the ILC.
3. A fixed-target and neutron spallation program.
4. By accelerating electrons in the linac, a free electron laser (FEL) program.
5. A muon storage ring/neutrino factory front end.

It would also serve as a seed project for industrial participation in the ILC.

Recent developments in the project include a first-round design and a recommendation from the Long-Range Planning Committee. A new design iteration has started that will result in a CD-0 by early 2005. The CD-0 will investigate both a synchrotron and a linac design, though the latter will be emphasized. The decision to use superconducting RF for the ILC was another boost, since the technology will now be common between the two accelerators. The recommendations of the recent APS neutrino study also have a strong accelerator-based component centered around a future proton driver. Finally, the development of the SMTF collaboration consolidates the R&D for both the proton driver and the ILC. Toward this end, a TESLA-compatible frequency has been chosen so that testing this component of the linac has complete overlap with the ILC.

The design will leverage as much existing R&D and as many existing components as possible:
1. The high-energy (beta nearly 1) part of the linac will be a TESLA copy.
2. For beta < 1, the SNS linac design will be used.
3. The front end is based on the pulsed RIA linac.
4. The H- source and RF quad are copied from the J-PARC front end.

As a result, collaboration will be important; a common R&D effort can accelerate the various projects. One key input from the Proton Driver to the ILC is the development of fast ferrite phase shifters, which will allow a large RF distribution system that reduces the electrical costs for the ILC. Another is the 3 ms pulse modulators that would reduce the number of klystrons needed by the ILC. Currently, multi-lab discussions are under way to consolidate multi-application superconducting RF development. Fortunately, the next steps for the ILC and Proton Driver R&D overlap: in each case it is the Superconducting Module Test Facility (SMTF).

The CD-0 documentation for the Proton Driver should be completed in early 2005. Currently, the resource allocation for R&D is insufficient; the next director will need to establish the priority of the project.
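To illustrate the energy independence of the beam power noted above (an added back-of-envelope, not from the talk): beam power is the energy per proton times the number of protons per second, so 0.5 MW corresponds to about 0.5 x 10^6 W / (8 x 10^9 eV x 1.6 x 10^-19 J/eV) ≈ 4 x 10^14 protons/s at 8 GeV, but only about 2.6 x 10^13 protons/s at 120 GeV; holding the power fixed simply means delivering proportionally fewer protons per second at higher energy.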
Andrew Alton: Inreach Committee Report

A draft version of an inreach and quality-of-life survey has been reviewed by the subcommittee, with the goal of finalizing it by December 1st. Some of the key issues are thefts at the lab and the lack of vegetarian options in the cafeteria. The e-mail list of postdocs working at Fermilab also needs to be updated.

Simona Rolli: Non-US Issues

A survey is being put together based on the survey distributed last year. There will be some changes and new areas of focus:
1. Information on country of residence, citizenship and birth. These are relevant inputs to the visa approval process and site access. There were concerns regarding the privacy of users in providing this information; the survey will indicate that providing it is voluntary.
2. A standardized list of experiments, so that this information can be more easily cross-referenced.
3. The one-year time lapse for the J-1 visa used by non-US hires. The survey may target employers to see what impact this restriction has had on hiring people to work at Fermilab.

Erik Gottschalk: Washington, D.C. Trip

The subcommittee held its first meeting on November 12th and will meet regularly on Fridays. It will start to collect relevant information such as congressional districts, etc. Currently, the projected date for the visit by SLUO members is January 15th. The visit to Washington is likely to be in mid-March, before the spring recess.

Next Meeting: 11 December 2004