Search results


15,532 results

Book
1 online resource (TUPOA19 ): digital, PDF file.
The low-energy section of the photoinjector-based electron linear accelerator at the Fermilab Accelerator Science & Technology (FAST) facility was recently commissioned to an energy of 50 MeV. This linear accelerator relies primarily upon pulsed SRF acceleration and an optional bunch compressor to produce a stable beam within a large operational regime in terms of bunch charge, total average charge, bunch length, and beam energy. Various instrumentation was used to characterize fundamental properties of the electron beam, including the intensity, stability, emittance, and bunch length. While much of this instrumentation was commissioned in an earlier 20 MeV running period, some (including a new Martin-Puplett interferometer) was in development or pending installation at that time. All instrumentation has since been recommissioned over the wide operational range of beam energies up to 50 MeV, intensities up to 4 nC/pulse, and bunch structures from ~1 ps to more than 50 ps in length.
Superconducting linacs are capable of producing intense, stable, high-quality electron beams that have found widespread applications in science and industry. The 9-cell 1.3-GHz superconducting standing-wave accelerating RF cavity originally developed for $e^+/e^-$ linear-collider applications [B. Aune, {\em et al.} Phys. Rev. ST Accel. Beams {\bf 3}, 092001 (2000)] has been broadly employed in various superconducting-linac designs. In this paper we discuss the transfer matrix of such a cavity and present its measurement performed at the Fermilab Accelerator Science and Technology (FAST) facility. The experimental results are found to be in agreement with analytical calculations and numerical simulations.
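As a point of reference, one widely used analytical approximation for the transverse transfer matrix of a pure $\pi$-mode standing-wave cavity is the Rosenzweig-Serafini model; whether this is the specific model compared against in the paper is an assumption here. For on-crest acceleration in the ultra-relativistic limit it reads

\[
M_{\rm cav} \simeq
\begin{pmatrix}
\cos\alpha - \sqrt{2}\,\sin\alpha & \sqrt{8}\,\dfrac{\gamma_i}{\gamma'}\,\sin\alpha \\[6pt]
-\dfrac{3}{\sqrt{8}}\,\dfrac{\gamma'}{\gamma_f}\,\sin\alpha & \dfrac{\gamma_i}{\gamma_f}\left(\cos\alpha + \sqrt{2}\,\sin\alpha\right)
\end{pmatrix},
\qquad
\alpha \equiv \frac{1}{\sqrt{8}}\,\ln\frac{\gamma_f}{\gamma_i},
\]

where $\gamma_i$ and $\gamma_f$ are the Lorentz factors at the cavity entrance and exit and $\gamma' = d\gamma/dz$ is the normalized accelerating gradient; note that $\det M_{\rm cav} = \gamma_i/\gamma_f$, as required for an accelerating element.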
Book
1 online resource (00:07:08 ): digital, PDF file.
The use of superconducting radio frequency (SRF) technology is a driving force in the development of particle accelerators. Scientists from around the globe are working together to develop the newest materials and techniques to improve the quality and efficiency of the SRF cavities that are essential for this technology.
Experimental Particle Physics has been at the forefront of analyzing the world's largest datasets for decades. The HEP community was the first to develop suitable software and computing tools for this task. In recent times, new toolkits and systems collectively called Big Data technologies have emerged to support the analysis of Petabyte and Exabyte datasets in industry. While the principles of data analysis in HEP have not changed (filtering and transforming experiment-specific data formats), these new technologies use different approaches and promise a fresh look at the analysis of very large datasets, potentially reducing the time-to-physics with increased interactivity. In this talk, we present an active LHC Run 2 analysis, searching for dark matter with the CMS detector, as a testbed for Big Data technologies. We directly compare the traditional NTuple-based analysis with an equivalent analysis using Apache Spark on the Hadoop ecosystem and beyond. In both cases, we start the analysis with the official experiment data formats and produce publication physics plots. We will discuss the advantages and disadvantages of each approach and give an outlook on further studies needed.
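To make the comparison concrete, the following is a minimal PySpark sketch of the filter-and-histogram pattern such an analysis uses; the Parquet path and the column names (met_pt, muon_pt) are hypothetical placeholders, not the experiment's actual data format.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Hypothetical example: the real analysis starts from the official CMS data formats.
spark = SparkSession.builder.appName("darkmatter-cutflow").getOrCreate()

# Read a columnar copy of the events (path and schema are placeholders).
events = spark.read.parquet("hdfs:///user/analysis/events.parquet")

# Apply simple selection cuts, analogous to an NTuple-based cut flow.
selected = events.filter((F.col("met_pt") > 200.0) & (F.col("muon_pt") < 10.0))

# Histogram the missing transverse momentum in 10 GeV bins.
hist = (selected
        .withColumn("met_bin", (F.col("met_pt") / 10.0).cast("int") * 10)
        .groupBy("met_bin")
        .count()
        .orderBy("met_bin"))

hist.show()

The same selection expressed as a columnar query distributes automatically over a Hadoop cluster, which is the source of the potential reduction in time-to-physics mentioned above.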
Book
1 online resource (Article No. 022004 ): digital, PDF file.
The soft function relevant for transverse-momentum resummation in Drell-Yan or Higgs production at hadron colliders is computed through three loops in the expansion in the strong coupling, with the help of the bootstrap technique and supersymmetric decomposition. The corresponding rapidity anomalous dimension is extracted. An intriguing relation between the anomalous dimensions for transverse-momentum resummation and threshold resummation is found.
An in-situ calibration of a logarithmic periodic dipole antenna with a frequency coverage of 30 MHz to 80 MHz is performed. Such antennas are part of a radio station system used for detection of cosmic-ray induced air showers at the Engineering Radio Array of the Pierre Auger Observatory, the so-called Auger Engineering Radio Array (AERA). The directional and frequency characteristics of the broadband antenna are investigated using a remotely piloted aircraft (RPA) carrying a small transmitting antenna. The antenna sensitivity is described by the vector effective length relating the measured voltage with the electric-field components perpendicular to the incoming signal direction. The horizontal and meridional components are determined with overall uncertainties of 7.4^{+0.9}_{-0.3} % and 10.3^{+2.8}_{-1.7} %, respectively. The measurement is used to correct the simulated frequency and directional response of the antenna. In addition, the influence of the ground conductivity and permittivity on the antenna response is simulated. Both have a negligible influence given the ground conditions measured at the detector site. The overall uncertainties of the vector effective length components result in an uncertainty of 9.4^{+1.5}_{-1.6} % in the square root of the energy fluence for incoming signal directions with zenith angles smaller than 60°.
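In the frequency-domain convention typically used for such calibrations (a schematic form; normalization conventions vary), the vector effective length $\vec{H}$ relates the antenna output voltage to the incident electric field via

\[
\mathcal{V}(f) \;=\; \vec{H}(f)\cdot\vec{E}(f) \;=\; H_{\phi}(f)\,E_{\phi}(f) \;+\; H_{\theta}(f)\,E_{\theta}(f),
\]

where $H_{\phi}$ and $H_{\theta}$ denote the horizontal and meridional components quoted above and $E_{\phi}$, $E_{\theta}$ are the electric-field components perpendicular to the incoming signal direction.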
The muon anomaly $a_\mu$ is one of the most precisely known quantities in physics, both experimentally and theoretically. This level of accuracy makes the measurement of $a_\mu$ a stringent test of the Standard Model when compared with the theoretical calculation. Following the impressive result obtained at Brookhaven National Laboratory in 2001, with a total accuracy of 0.54 ppm, a new experiment, E989, is under construction at Fermilab, motivated by the discrepancy $a_\mu^{\rm exp} - a_\mu^{\rm SM} \sim 3\sigma$. The purpose of the E989 experiment is a fourfold reduction of the error, with a goal of 0.14 ppm, improving both the systematic and statistical uncertainties. Using the Fermilab beam complex, a statistics 21 times larger than at BNL will be accumulated in roughly two years of data taking, improving the statistical uncertainty to 0.1 ppm. Improvements to the systematic error involve the measurement techniques for $\omega_a$ and $\omega_p$, the anomalous precession frequency of the muon and the Larmor precession frequency of the proton, respectively. The measurement of $\omega_p$ relies on the magnetic-field measurement, and improvements in the uniformity of the field should reduce the corresponding systematic uncertainty from 170 ppb at BNL to 70 ppb. A reduction from 180 ppb to 70 ppb is also required for the measurement of $\omega_a$; a new DAQ, faster electronics, and new detectors and calibration system will be implemented with respect to E821 to reach this goal. In particular, the laser calibration system will reduce the systematic error due to gain fluctuations of the photodetectors from 0.12 to 0.02 ppm. The 0.02 ppm limit on the systematic error requires a system with a stability of $10^{-4}$ on short time scales (700 µs), while on longer time scales percent-level stability is sufficient. The required $10^{-4}$ stability is almost an order of magnitude better than that of existing laser calibration systems in particle physics, making the calibration system a very challenging item. In addition to the high level of stability, the particular environment, with a 14 m diameter storage ring, a highly uniform magnetic field, and the detectors distributed around the storage ring, sets specific guidelines and constraints. This thesis focuses on the final design of the laser calibration system developed for the E989 experiment. Chapter 1 introduces the anomalous magnetic moment of the muon; chapter 2 presents previous measurements of g-2, while chapter 3 discusses the Standard Model prediction and possible new-physics scenarios. Chapter 4 describes the E989 experiment: it presents the experimental technique and the experimental apparatus, focusing on the improvements necessary to reduce the statistical and systematic errors. The main subject of the thesis is discussed in the last two chapters: chapter 5 is focused on the laser calibration system, while chapter 6 describes the test beam performed at the Beam Test Facility of the Laboratori Nazionali di Frascati from 29 February to 7 March as a final test of the full calibration system. An introduction explains the physics motivation for the system and the different devices implemented. In the final chapter the setup used is described and some of the results obtained are presented.
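For orientation, the quantities referred to above are related by the standard definitions (sign conventions differ between references)

\[
a_\mu \equiv \frac{g-2}{2},
\qquad
\vec{\omega}_a \;=\; -\frac{e}{m_\mu}\left[\, a_\mu \vec{B} \;-\; \left(a_\mu - \frac{1}{\gamma^{2}-1}\right)\frac{\vec{\beta}\times\vec{E}}{c} \,\right],
\]

so that at the "magic" momentum ($\gamma \simeq 29.3$, $p \simeq 3.09$ GeV/$c$) the electric-field term vanishes and $a_\mu$ follows from the ratio of $\omega_a$ to the proton Larmor frequency $\omega_p$, which encodes the magnetic field seen by the muons.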
Book
1 online resource (P01020 ): digital, PDF file.
This paper describes the CMS trigger system and its performance during Run 1 of the LHC. The trigger system consists of two levels designed to select events of potential physics interest from a GHz (MHz) interaction rate of proton-proton (heavy ion) collisions. The first level of the trigger is implemented in hardware, and selects events containing detector signals consistent with an electron, photon, muon, tau lepton, jet, or missing transverse energy. A programmable menu of up to 128 object-based algorithms is used to select events for subsequent processing. The trigger thresholds are adjusted to the LHC instantaneous luminosity during data taking in order to restrict the output rate to 100 kHz, the upper limit imposed by the CMS readout electronics. The second level, implemented in software, further refines the purity of the output stream, selecting an average rate of 400 Hz for offline event storage. The objectives, strategy and performance of the trigger system during the LHC Run 1 are described.
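The quoted numbers correspond, as a simple order-of-magnitude check, to rejection factors of roughly

\[
\frac{\sim\!10^{9}\ \text{Hz}}{10^{5}\ \text{Hz}} \sim 10^{4} \ \ \text{(Level-1, hardware)},
\qquad
\frac{10^{5}\ \text{Hz}}{400\ \text{Hz}} \approx 250 \ \ \text{(high-level trigger, software)},
\]

i.e. an overall online rejection of order $10^{6}$ for proton-proton collisions.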
If QCD axions form a large fraction of the total mass of dark matter, then axion stars could be very abundant in galaxies. As a result, collisions with each other, and with other astrophysical bodies, can occur. We calculate the rate and analyze the consequences of three classes of collisions, those occurring between a dilute axion star and: another dilute axion star, an ordinary star, or a neutron star. In all cases we attempt to quantify the most important astrophysical uncertainties; we also pay particular attention to scenarios in which collisions lead to collapse of otherwise stable axion stars, and possible subsequent decay through number changing interactions. Collisions between two axion stars can occur with a high total rate, but the low relative velocity required for collapse to occur leads to a very low total rate of collapses. On the other hand, collisions between an axion star and an ordinary star have a large rate, $\Gamma_\odot \sim 3000$ collisions/year/galaxy, and for sufficiently heavy axion stars, it is plausible that most or all such collisions lead to collapse. We identify in this case a parameter space which has a stable region and a region in which collision triggers collapse, which depend on the axion number ($N$) in the axion star, and a ratio of mass to radius cubed characterizing the ordinary star ($M_s/R_s^3$). Finally, we revisit the calculation of collision rates between axion stars and neutron stars, improving on previous estimates by taking cylindrical symmetry of the neutron star distribution into account. Collapse and subsequent decay through collision processes, if occurring with a significant rate, can affect dark matter phenomenology and the axion star mass distribution.
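The rates quoted above follow from the usual number-density-times-cross-section estimate with gravitational focusing; schematically (the notation here is illustrative, and the number densities and relative velocity are the dominant astrophysical uncertainties),

\[
\Gamma \;\sim\; n_{\rm AS}\, n_{\star}\, \sigma\, v_{\rm rel}\, V_{\rm gal},
\qquad
\sigma \;=\; \pi \left(R_{\rm AS}+R_{\star}\right)^{2}\left[\,1 \;+\; \frac{2G\left(M_{\rm AS}+M_{\star}\right)}{\left(R_{\rm AS}+R_{\star}\right)v_{\rm rel}^{2}}\,\right],
\]

where the bracketed term is the gravitational-focusing enhancement, which dominates at the low relative velocities relevant to collapse-triggering encounters.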
Galaxy surveys probe both structure formation and the expansion rate, making them promising avenues for understanding the dark universe. Photometric surveys accurately map the 2D distribution of galaxy positions and shapes in a given redshift range, while spectroscopic surveys provide sparser 3D maps of the galaxy distribution. We present a way to analyse overlapping 2D and 3D maps jointly and without loss of information. We represent 3D maps using spherical Fourier-Bessel (sFB) modes, which preserve radial coverage while accounting for the spherical sky geometry, and we decompose 2D maps in a spherical harmonic basis. In these bases, a simple expression exists for the cross-correlation of the two fields. One very powerful application is the ability to simultaneously constrain the redshift distribution of the photometric sample, the sample biases, and cosmological parameters. We use our framework to show that combined analysis of DESI and LSST can improve cosmological constraints by factors of ${\sim}1.2$ to ${\sim}1.8$ on the region where they overlap relative to identically sized disjoint regions. We also show that in the overlap of DES and SDSS-III in Stripe 82, cross-correlating improves photo-$z$ parameter constraints by factors of ${\sim}2$ to ${\sim}12$ over internal photo-$z$ reconstructions.
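For reference, one common convention for the spherical Fourier-Bessel and spherical-harmonic decompositions used here (normalizations differ in the literature) is

\[
\delta(r\hat{n}) \;=\; \sqrt{\frac{2}{\pi}}\sum_{\ell m}\int_{0}^{\infty}\! dk\; k\, j_{\ell}(kr)\, Y_{\ell m}(\hat{n})\,\delta_{\ell m}(k),
\qquad
a_{\ell m} \;=\; \int d\hat{n}\; Y^{*}_{\ell m}(\hat{n})\,\Sigma(\hat{n}),
\]

where $\Sigma(\hat{n})$ is the projected 2D field; because both expansions share the same angular basis, the cross-correlation of the 2D and 3D maps reduces to a cross-spectrum $C_{\ell}(k) \propto \langle a_{\ell m}\,\delta^{*}_{\ell m}(k)\rangle$ between coefficients of matching $\ell$ and $m$, which is the simple expression referred to above.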
In the presence of non-standard neutrino interactions the neutrino flavor evolution equation is affected by a degeneracy which leads to the so-called LMA-Dark solution. It requires a solar mixing angle in the second octant and implies an ambiguity in the neutrino mass ordering. Non-oscillation experiments are required to break this degeneracy. We perform a combined analysis of data from oscillation experiments with the neutrino scattering experiments CHARM and NuTeV. We find that the degeneracy can be lifted if the non-standard neutrino interactions take place with down quarks, but it remains for up quarks. However, the CHARM and NuTeV constraints apply only if the new interactions take place through mediators not much lighter than the electroweak scale. For light mediators we consider the possibility of resolving the degeneracy by using data from future coherent neutrino-nucleus scattering experiments. We find that, for an experiment using a stopped-pion neutrino source, the LMA-Dark degeneracy will either be resolved, or the presence of new interactions in the neutrino sector will be established with high significance.
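The degeneracy in question arises because oscillation probabilities are invariant under a sign flip plus complex conjugation of the full flavor-evolution Hamiltonian; in the usual $\varepsilon$ parametrization this corresponds, schematically (only the leading replacements are shown, and the exact form depends on convention), to

\[
\theta_{12} \;\to\; \frac{\pi}{2}-\theta_{12},
\qquad
\Delta m^{2}_{31} \;\to\; -\Delta m^{2}_{31} + \Delta m^{2}_{21},
\qquad
\varepsilon_{ee}-\varepsilon_{\mu\mu} \;\to\; -\left(\varepsilon_{ee}-\varepsilon_{\mu\mu}\right) - 2,
\]

together with corresponding transformations of $\delta_{\rm CP}$ and the remaining $\varepsilon$ parameters, which is why oscillation data alone cannot separate the standard LMA solution from LMA-Dark and why scattering data are needed.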
Book
1 online resource (p. P01009-P01009 ): digital, PDF file.
We have developed a custom amplifier board coupled to a large-format 16-channel Hamamatsu silicon photomultiplier device for use as the light sensor for the electromagnetic calorimeters in the Muon g-2 experiment at Fermilab. The calorimeter absorber is an array of lead-fluoride crystals, which produces short-duration Cherenkov light. The detector sits in the high magnetic field of the muon storage ring. The SiPMs selected, and their accompanying custom electronics, must preserve the short pulse shape, have high quantum efficiency, be non-magnetic, exhibit gain stability under varying rate conditions, and cover a fairly large fraction of the crystal exit surface area. We describe an optimized design that employs the new generation of through-silicon-via (TSV) devices. The performance is documented in a series of bench and beam tests.
Integrable optics is an innovation in particle accelerator design that provides strong nonlinear focusing while avoiding parametric resonances. One promising application of integrable optics is to overcome the traditional limits on accelerator intensity imposed by betatron tune spread and collective instabilities. The efficacy of high-intensity integrable accelerators will undergo comprehensive testing over the next several years at the Fermilab Integrable Optics Test Accelerator (IOTA) and the University of Maryland Electron Ring (UMER). We propose an integrable Rapid-Cycling Synchrotron (iRCS) as a replacement for the Fermilab Booster to achieve multi-MW beam power for the Fermilab high-energy neutrino program. We provide an overview of the machine parameters and discuss an approach to lattice optimization. Integrable optics requires arcs with integer-pi phase advance followed by drifts with matched beta functions. We provide an example integrable lattice with features of a modern RCS: long dispersion-free drifts, low momentum compaction, superperiodicity, chromaticity correction, separate-function magnets, and bounded beta functions.
Book
1 online resource (Article No. 011001 ): digital, PDF file.
For this research, we study the measurement of transverse diffusion through beam echoes. We revisit earlier observations of echoes in RHIC and apply an updated theoretical model to these measurements. We consider three possible models for the diffusion coefficient and show that only one is consistent with the measured echo amplitudes and pulse widths. This model allows us to parameterize the diffusion coefficients as functions of bunch charge. We demonstrate that echoes can be used to measure diffusion much more quickly than with present methods and could be useful for a variety of hadron synchrotrons.
Plasma wake-field acceleration in a strongly nonlinear (a.k.a. the blowout) regime is one of the main candidates for future high-energy colliders. For this case, we derive a universal efficiency-instability relation between the power efficiency and the key instability parameter of the witness bunch. We also show that, in order to stabilize the witness bunch in a regime with high power efficiency, the bunch needs to have a high energy spread, which is not presently compatible with collider-quality beam properties. It is unclear how such limitations could be overcome for high-luminosity linear colliders.
Book
1 online resource (p. 86-91 ): digital, PDF file.
We report a test of many of the key elements of the laser-based calibration system for the muon g-2 experiment E989 at Fermilab. The test was performed at the Laboratori Nazionali di Frascati's Beam Test Facility using a 450 MeV electron beam impinging on a small subset of the final g-2 lead-fluoride crystal calorimeter system. The calibration system was configured as planned for the E989 experiment and uses the same type of laser and most of the final optical elements. We show results regarding the calibration of the calorimeter's response, the maximum equivalent electron energy that can be provided by the laser, and the stability of the calibration system components.
Book
1 online resource (218 p.) : digital, PDF file.
The NuMI Off-Axis $\nu_e$ Appearance (NO$\nu$A) experiment is a long-baseline, off-axis neutrino oscillation experiment. It is designed to search for oscillations of $\nu_\mu$ to $\nu_e$ by comparing measurements of the NuMI beam composition in two detectors. The two detectors are functionally identical, nearly fully active liquid-scintillator tracking calorimeters located at two points along the beam line to observe the neutrinos. The Near Detector (ND), situated 1 km away from the proton target at Fermilab, measures the neutrinos prior to oscillation. The Far Detector (FD), located 810 km away at Ash River, Minnesota, measures the neutrinos after they have traveled and potentially oscillated. The neutrino beam is generated at Fermi National Accelerator Laboratory in Batavia, Illinois by the Neutrinos at the Main Injector (NuMI) facility. By observing the $\nu_\mu\to\nu_e$ oscillation, NO$\nu$A is capable of measuring the neutrino mass hierarchy, CP violation, and the octant of the mixing angle $\theta_{23}$. This thesis presents the first measurement of $\nu_e$ appearance in the NO$\nu$A detectors with $3.52\times10^{20}$ protons-on-target (POT) accumulated from February 2014 to May 2015. In this analysis the primary $\nu_e$ CC particle selection, LID, observes 6 $\nu_e$-like events in the far detector with a background prediction of $0.99\pm0.11$ (syst.), which corresponds to a $3.3\sigma$ excess over the no-oscillation hypothesis. This result disfavors $0.1\pi < \delta_{CP} < 0.5\pi$ in the inverted mass hierarchy at $90\%$ C.L. with the reactor constraint on $\theta_{13}$.
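For context, the appearance channel used here is governed, at leading order in vacuum, by

\[
P(\nu_\mu \to \nu_e) \;\simeq\; \sin^{2}\theta_{23}\,\sin^{2}2\theta_{13}\,\sin^{2}\!\left(\frac{\Delta m^{2}_{31} L}{4E}\right),
\]

with subleading terms that depend on $\delta_{CP}$, on the mass hierarchy through matter effects over the 810 km baseline, and on the octant of $\theta_{23}$; these dependencies are what give the appearance measurement its sensitivity to the parameters listed above.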
In models with universal extra dimensions (UED), the lightest Kaluza-Klein excitation of the neutral electroweak gauge bosons is, thanks to Kaluza-Klein parity, a stable, weakly interacting massive particle and thus a candidate for dark matter. We examine concrete model realizations of such dark matter in the context of non-minimal UED extensions. The boundary-localized kinetic terms for the electroweak gauge bosons lead to a non-trivial mixing among the first Kaluza-Klein excitations of the ${\rm SU}(2)_W$ and ${\rm U}(1)_Y$ gauge bosons, and the resultant low-energy phenomenology is rich. We investigate the implications of various experiments, including low-energy electroweak precision measurements, direct and indirect detection of dark matter particles, and direct collider searches at the LHC. Notably, we show that the electroweak Kaluza-Klein dark matter can be as heavy as 2.4 TeV, significantly higher than the upper bound of $1.3$ TeV indicated in the minimal UED model.
The antiproton-to-proton ratio in the cosmic-ray spectrum is a sensitive probe of new physics. Using recent measurements of the cosmic-ray antiproton and proton fluxes in the energy range of 1-1000 GeV, we study the contribution to the $\bar{p}/p$ ratio from secondary antiprotons that are produced and subsequently accelerated within individual supernova remnants. We consider several well-motivated models for cosmic-ray propagation in the interstellar medium and marginalize our results over the uncertainties related to the antiproton production cross section and the time-, charge-, and energy-dependent effects of solar modulation. We find that the increase in the $\bar{p}/p$ ratio observed at rigidities above $\sim$ 100 GV cannot be accounted for within the context of conventional cosmic-ray propagation models, but is consistent with scenarios in which cosmic-ray antiprotons are produced and subsequently accelerated by shocks within a given supernova remnant. In light of this, the acceleration of secondary cosmic rays in supernova remnants is predicted to substantially contribute to the cosmic-ray positron spectrum, accounting for a significant fraction of the observed positron excess.
Beryllium is extensively used in various accelerator beam lines and target facilities as a material for beam windows and, to a lesser extent, as secondary particle production targets. With the increasing beam intensities of future accelerator facilities, it is critical to understand the response of beryllium under extreme conditions in order to reliably operate these components without compromising particle production efficiency by limiting beam parameters. As a result, an exploratory experiment at CERN's HiRadMat facility was carried out to take advantage of the test facility's tunable high-intensity proton beam to probe and investigate the damage mechanisms of several beryllium grades. The test matrix consisted of multiple arrays of thin discs of varying thicknesses as well as cylinders, each exposed to increasing beam intensities. This paper outlines the experimental measurements, as well as findings from Post-Irradiation Examination (PIE) work, where different imaging techniques were used to analyze and compare the surface evolution and microstructural response of the test matrix specimens.