April 1 Minutes
Mikhail's Slides
1_c2.ps
2_c5.ps
11_yeild_c+hg.ps
12_avery_one.ps
13_avery_two.ps
14_ang.ps
15_ropt.ps
16_ed.ps
17_delt.ps
Mikhail showed results from a study whose goal was to see what kind
of things must be changed in order to get a target which could survive
2MW of proton power. For two interaction lengths of solid material with
a 1mm sigma (in x and y) proton beam, the temperature change due to the
beam was too high. But if the beam spot size is increased to 2.5 or 3mm,
and the target transverse size is also increased to something like 3 times
the beam sigma, then the target can survive, or at least the temperature
rise is much lower and the target won't break or sublimate. The pion
yield (for 6-7 GeV pions) per unit proton power doesn't change by
more than about 10% for this modification. For the record, the
temperature for the shock wave limit is 2200-2400 degrees C, and the
sublimation temperature is 3600 degrees C (I think this is for graphite).
The graphite-graphite composite material only survives up to 1600 degrees C.
This study is promising. Fritz asked if a cooling scheme was included
in the study--it was not, but the study does indicate that perhaps
the cooling requirements will not be too tough,
although some cooling will be necessary. It was also suggested that
Mikhail look into beam sweeping.
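As a rough cross-check of why the larger spot helps (this is a back-of-the-envelope
sketch of my own, not Mikhail's calculation): for a round Gaussian beam the peak
deposited energy density per spill scales as 1/sigma^2, so the instantaneous
temperature rise at the target center drops by nearly an order of magnitude in
going from a 1mm to a 3mm spot.

# Back-of-the-envelope sketch (not from Mikhail's study): for a round
# Gaussian beam the peak deposited energy density scales as 1/(2*pi*sigma^2),
# so the instantaneous temperature rise at the target center falls roughly
# as 1/sigma^2 when the spot is enlarged.

def peak_dT_relative(sigma_mm, sigma_ref_mm=1.0):
    """Peak temperature rise relative to the 1mm reference beam,
    assuming dT ~ peak energy density ~ 1/sigma^2."""
    return (sigma_ref_mm / sigma_mm) ** 2

for sigma in (1.0, 2.5, 3.0):
    print(f"sigma = {sigma:.1f} mm -> peak dT / dT(1mm) = {peak_dT_relative(sigma):.2f}")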
Mayda Velasco's Slides
http://lotus.phys.nwu.edu/~mvelasco/april_1/
This talk showed some of the optimization studies (optimizing
L/E...looking at results from different L's or different E's) that Mayda
and Michal have done, using the GEANT-based NUMI beam Monte Carlo.
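For reference (not on Mayda's slides, just the standard two-flavor vacuum
approximation), the appearance probability behind the L/E optimization is

  P(\nu_\mu \to \nu_e) \approx \sin^2\theta_{23}\,\sin^2 2\theta_{13}\,
      \sin^2\!\left(\frac{1.27\,\Delta m^2_{23}[\mathrm{eV}^2]\,L[\mathrm{km}]}{E[\mathrm{GeV}]}\right)

so for delta m^2 = 3x10^-3 eV^2 the first oscillation maximum sits near
L/E ~ 412 km/GeV, i.e. roughly E ~ 1.8 GeV at L = 735km and E ~ 2.2 GeV
at L = 900km.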
Michal's talk (described below) optimized the signal reach by
increasing the signal efficiency at the expense of background
rejection, and by putting air between the various layers of steel
and scintillator. On page 6 of this file you can see a summary
of the different beams considered: 10km off axis at 735km,
12.5km off axis at 735km, 11.5km off axis at 900km, and the last
combination in antineutrino running. All these configurations are for
medium energy position of the target and horns. The numbers in the
quotients correspond to the number of events per (kton x 3.8x10^20 POT),
where the numerator is after all cuts (with oscillation probabilities
included) and the denominator is before cuts (but still with the oscillation
probabilities included, no matter effects, and I don't recall what value of
theta_13, but delta m^2 is 3x10^-3 eV^2).
The signal efficiencies
range between 38 and 44%, and the NC backgrounds are, in most cases,
about the same size as the beam nu_e backgrounds (except for the
case of L=735km and off-axis distance=12.5km, where the NC backgrounds
after all cuts are only about 40% of the beam nu_e backgrounds).
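To be explicit about what the quotients give (my notation, not from the
slides), the signal efficiency is

  \epsilon_{\rm sig} = \frac{N_{\rm after\ cuts}}{N_{\rm before\ cuts}}
  \qquad \text{(both per kton per } 3.8\times10^{20}\ \text{POT, oscillations included)}

so, for example, a hypothetical quotient of 42/100 would correspond to a
42% signal efficiency.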
Doug's question was, how well do we need to know delta m^2 before picking
a site for the far detector? (this was addressed in a later message from
Mayda, but not shown in this meeting--but really we could
debate this one a long time!).
Brajesh's question was: can we further optimize the flux by changing the
beam optics? Stan showed last week that a third horn probably wouldn't help
much, and the medium energy configuration was really where things were optimal
for the off-axis flux.
Michal Szleper, Detector Optimization
Michal reported on his study on optimizing the detector geometry.
Recall that earlier he did studies with single pi0's and electrons
to determine the best transverse and longitudinal segmentation.
That study provided the numbers of 4.5mm for the thickness of the
steel plates, and 2cm wide readout cells.
Now he's extending the study by simulating neutrino neutral
current interactions, and varying the amount of air between the
consecutive plates. (direct quote "Nice feature of air...it's free"
which isn't exactly true since it means you have to build a bigger
building). Originally there were 1.9cm air gaps between
the consecutive plates (a "feature" of the original MINOS monte carlo).
There the nue efficiency achieved was 20%, but the nc contamination
was only 1/4 as large as the intrinsic nue background. Air gaps
increase the effective radiation length so there is better separation
of 2 close tracks. On the other hand, individual showers get wider and more
scattered, so track fitting gets worse.
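A rough sketch of the radiation-length point (assumed PDG values, not numbers
from Michal's talk): treating the detector as a repeating 4.5mm-steel plus
air-gap cell and ignoring the scintillator, the effective radiation length in
centimeters of detector depth grows quickly with the air gap, which is what
spreads an EM shower over more planes and helps separate two close tracks
(e.g. the two photons of a pi0).

# Rough sketch with assumed PDG radiation lengths; scintillator ignored.

X0_FE_CM = 1.76      # radiation length of iron, ~1.76 cm
X0_AIR_CM = 30420.0  # radiation length of air, ~304 m

def effective_X0(steel_cm, air_cm):
    """Radiation length of the repeating (steel + air) cell, in cm of detector depth."""
    cell_thickness = steel_cm + air_cm
    radiation_lengths = steel_cm / X0_FE_CM + air_cm / X0_AIR_CM
    return cell_thickness / radiation_lengths

for gap_cm in (0.0, 1.9, 3.0):
    print(f"air gap = {gap_cm:3.1f} cm -> effective X0 ~ {effective_X0(0.45, gap_cm):.1f} cm")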
There are two ways to do the analysis: minimizing the background at the
expense of the signal, or "just" getting the background to where it's
comparable to the nu_e background, and keeping the signal efficiency as
high as possible. Air gaps tested ranged from 0cm to 3cm, with the
best results for 3cm. For the "no NC background" analysis, the signal
efficiency ended up being 24% and the NC background was 0.08%. The second
way to do the analysis was to maximize the efficiency: for that the
efficiency was 33% and the NC efficiency was 0.5%. However, by looking
at the events which survived, Michal estimated that the efficiency
could be as high as 38%, with an NC background of <=0.3% (for the 10km
off-axis beam at 735km, and the LE beam). Michal showed some plots of
event displays, and showed how one could even see the recoil proton in
some of the events.
The cuts (for the low-efficiency analysis) are as follows (a rough sketch of
applying them is given after the list):
- 1 track per view
- maximum width is >= 3 cells
- Slope of tracks < 0.25
- width times depth > 7.5 (em showers are wider than hadronic showers)
- 0.02
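A minimal sketch of how those cuts might be applied (the event record and all
names below are hypothetical, and the truncated last cut is left out since the
minutes don't say what the 0.02 applies to):

from dataclasses import dataclass

# Hypothetical sketch only: the field names are invented for illustration,
# and the truncated final cut ("0.02 ...") is omitted.

@dataclass
class RecoEvent:
    n_tracks_per_view: int    # reconstructed tracks in each view
    max_width_cells: int      # maximum transverse shower width, in cells
    track_slope: float        # slope of the candidate track
    width_times_depth: float  # transverse width x longitudinal depth

def passes_low_efficiency_cuts(event: RecoEvent) -> bool:
    """Apply the low-efficiency ("no NC background") selection listed above."""
    return (event.n_tracks_per_view == 1          # exactly 1 track per view
            and event.max_width_cells >= 3        # maximum width >= 3 cells
            and event.track_slope < 0.25          # slope of tracks < 0.25
            and event.width_times_depth > 7.5)    # width x depth > 7.5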
Steve Geer's Slides
Nu_Roadmap.pdf.pdf
Steve broke up the goals for this field into two categories:
1) those goals which are independent of whatever gets measured in the
next few years, and 2) those which depend on the outcomes of a few
future experiments. He suggests that we need to be as quantitative as
we can be in this "consensus" document.
Maury Goodman's Slides
Slide 1 (jpeg version)
Slide 2 (jpeg version)
Slide 3 (jpeg version)
Slide 4 (jpeg version)
Or get the whole file here:
http://www.hep.anl.gov/mcg/mapmap.ppt
Maury gave the analogy that in order to get from North America to
South America (by land) you have to go through Panama--and for us,
Panama is theta_13. How big that is determines how well you can
achieve various other goals in this field. He showed an email from
Lincoln Wolfenstein (forwarded to this group) stating the same thing--
we need to really focus on getting to a measurement of theta_13.
One big question in all of this (mentioned above) is: how well do we
need to know delta m^2_23 to get very far in the planning for future
possible experiments?