We propose a way to simulate Cherenkov detector response using a generative adversarial neural network to bypass low-level details. The network is trained to reproduce high-level features of simulated detector events from the input observables of incident particles, which allows a dramatic increase in simulation speed. We demonstrate that this approach provides simulation precision consistent with the baseline and discuss possible implications of these results.
In this work, we propose an approach for electromagnetic shower generation at the track level. Currently, Monte Carlo simulation occupies 50-70\% of the total computing resources used by physics experiments worldwide, so speeding up the simulation step reduces its cost and accelerates synthetic experiments. In this paper, we suggest dividing the problem of shower generation into two separate subproblems: graph generation and track-feature generation. Both problems can be efficiently solved with a cascade of a deep autoregressive generative network and a graph convolution network. The novelty of the proposed approach lies in applying neural networks to the generation of a complex recursive physical process.
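To make the proposed cascade concrete, the following is a minimal sketch, assuming a toy autoregressive model that grows the shower graph by binary splits and a single plain graph-convolution layer that then predicts per-track features. All dimensions, feature definitions and module names are illustrative assumptions, not the architecture of the paper.

```python
import torch
import torch.nn as nn

class AutoregressiveGraphGenerator(nn.Module):
    """Grows the shower graph node by node, deciding whether each track splits."""
    def __init__(self, hidden=32):
        super().__init__()
        self.rnn = nn.GRUCell(input_size=4, hidden_size=hidden)  # 4 toy node features
        self.split_head = nn.Linear(hidden, 1)                   # P(split) for the current node

    def generate(self, root_feat, max_nodes=64):
        nodes, edges = [root_feat], []
        h = torch.zeros(1, self.rnn.hidden_size)
        frontier = [0]
        while frontier and len(nodes) < max_nodes:
            i = frontier.pop(0)
            h = self.rnn(nodes[i].unsqueeze(0), h)
            if torch.sigmoid(self.split_head(h)).item() > torch.rand(1).item():
                for _ in range(2):                               # toy binary split
                    child = nodes[i] + 0.1 * torch.randn(4)      # perturbed child kinematics
                    edges.append((i, len(nodes)))
                    frontier.append(len(nodes))
                    nodes.append(child)
        return torch.stack(nodes), edges

class TrackFeatureGCN(nn.Module):
    """One plain graph-convolution layer: average over neighbours, then a linear map."""
    def __init__(self, in_dim=4, out_dim=8):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, edges):
        n = x.size(0)
        adj = torch.eye(n)
        for i, j in edges:
            adj[i, j] = adj[j, i] = 1.0
        adj = adj / adj.sum(dim=1, keepdim=True)                 # row-normalised adjacency
        return torch.relu(self.lin(adj @ x))

generator, gcn = AutoregressiveGraphGenerator(), TrackFeatureGCN()
nodes, edges = generator.generate(torch.randn(4))
track_features = gcn(nodes, edges)                               # per-track feature vectors
```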
The increasing luminosities of future Large Hadron Collider runs and the next generation of collider experiments will require an unprecedented amount of simulated events. Such large-scale productions are extremely demanding in terms of computing resources, so new approaches to event generation and simulation of detector responses are needed. In LHCb, the accurate simulation of Cherenkov detectors takes a sizeable fraction of CPU time. An alternative approach is described here, in which high-level reconstructed observables are generated with a generative neural network that bypasses low-level details. The network is trained to reproduce the particle-species likelihood function values based on the track kinematic parameters and detector occupancy. The fast simulation is trained using real data samples collected by LHCb during Run 2. We demonstrate that this approach provides high-fidelity results.
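A minimal sketch of such a setup, assuming a small fully connected conditional generator and a Wasserstein-style critic; the input and output dimensions, the choice of critic loss, and the placeholder data are our assumptions, not the LHCb implementation (a complete WGAN would also need weight clipping or a gradient penalty).

```python
import torch
import torch.nn as nn

N_CONDITIONS = 4   # e.g. momentum, pT, eta, occupancy (assumed)
N_OUTPUTS = 5      # e.g. likelihood values for several particle hypotheses (assumed)
NOISE_DIM = 16

generator = nn.Sequential(
    nn.Linear(N_CONDITIONS + NOISE_DIM, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, N_OUTPUTS),
)
critic = nn.Sequential(
    nn.Linear(N_CONDITIONS + N_OUTPUTS, 64), nn.ReLU(),
    nn.Linear(64, 1),
)

def fake_batch(conditions):
    """Sample simulated PID responses for a batch of track conditions."""
    noise = torch.randn(conditions.size(0), NOISE_DIM)
    return generator(torch.cat([conditions, noise], dim=1))

# One adversarial update on a toy batch; real conditions and responses would
# come from the calibration samples collected by LHCb during Run 2.
real_cond = torch.randn(256, N_CONDITIONS)
real_resp = torch.randn(256, N_OUTPUTS)           # placeholder for measured likelihood values
opt_g = torch.optim.Adam(generator.parameters(), lr=1e-4)
opt_c = torch.optim.Adam(critic.parameters(), lr=1e-4)

score_real = critic(torch.cat([real_cond, real_resp], dim=1)).mean()
score_fake = critic(torch.cat([real_cond, fake_batch(real_cond).detach()], dim=1)).mean()
loss_c = score_fake - score_real                  # Wasserstein-style critic loss (our choice)
opt_c.zero_grad()
loss_c.backward()
opt_c.step()

loss_g = -critic(torch.cat([real_cond, fake_batch(real_cond)], dim=1)).mean()
opt_g.zero_grad()
loss_g.backward()
opt_g.step()
```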
The cross-sections of ψ(2S) meson production in proton-proton collisions at √s = 13 TeV are measured with a data sample collected by the LHCb detector corresponding to an integrated luminosity of 275 pb−1. The production cross-sections for prompt ψ(2S) mesons and those for ψ(2S) mesons from b-hadron decays (ψ(2S)-from-b) are determined as functions of the transverse momentum, pT, and the rapidity, y, of the ψ(2S) meson in the kinematic range 2 < pT < 20 GeV/c and 2.0 < y < 4.5. The production cross-sections integrated over this kinematic region are
σ(prompt ψ(2S), 13 TeV) = 1.430 ± 0.005 (stat) ± 0.099 (syst) μb,
σ(ψ(2S)-from-b, 13 TeV) = 0.426 ± 0.002 (stat) ± 0.030 (syst) μb.
A new measurement of ψ(2S) production cross-sections in pp collisions at √s = 7 TeV is also performed using data collected in 2011, corresponding to an integrated luminosity of 614 pb−1. The integrated production cross-sections in the kinematic range 3.5 < pT < 14 GeV/c and 2.0 < y < 4.5 are
σ(prompt ψ(2S), 7 TeV) = 0.471 ± 0.001 (stat) ± 0.025 (syst) μb,
σ(ψ(2S)-from-b, 7 TeV) = 0.126 ± 0.001 (stat) ± 0.008 (syst) μb.
All results show reasonable agreement with theoretical calculations.
We introduce SANgo (Storage Area Network in the Go language), a Go-based package for simulating the behavior of modern storage infrastructure. The software is based on the discrete-event modeling paradigm and captures the structure and dynamics of high-level storage system building blocks. The flexible structure of the package allows us to create a model of a real storage system with a configurable number of components. The granularity of the simulated system can be defined depending on the replicated patterns of actual system behavior. Accurate replication enables us to reach the primary goal of our simulator: to explore the stability boundaries of real storage systems. To meet this goal, SANgo offers a variety of interfaces for easy monitoring and tuning of the simulated model. These interfaces allow us to track a number of metrics of components such as storage controllers, network connections, and hard drives. Other interfaces allow altering the parameter values of the simulated system effectively in real time, thus providing the possibility of training a realistic digital twin using, for example, the reinforcement learning (RL) approach. One can train an RL model to reduce discrepancies between simulated and real SAN data: the external control algorithm adjusts the simulator parameters to make the difference as small as possible. SANgo supports the standard OpenAI gym interface; thus, the software can serve as a benchmark for comparison of different learning algorithms.
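As an illustration of how an external control algorithm could drive a gym-compatible simulator, a short Python sketch follows. The environment id "SANgo-v0", the reward semantics, and the use of the classic pre-0.26 gym step API are assumptions; SANgo's actual registration name and interface details may differ.

```python
import gym

env = gym.make("SANgo-v0")          # hypothetical id under which the simulator is registered

obs = env.reset()
total_reward = 0.0
for step in range(1000):
    # a trained policy would go here; we sample random tuning actions instead
    action = env.action_space.sample()
    obs, reward, done, info = env.step(action)   # classic gym 4-tuple API (assumed)
    total_reward += reward                        # e.g. negative discrepancy vs. real SAN traces
    if done:
        obs = env.reset()
print("episode return:", total_reward)
```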
A search for the doubly charmed baryon Ξ+cc is performed through its decay to the Λ+cK−π+ final state, using proton-proton collision data collected with the LHCb detector at centre-of-mass energies of 7, 8 and 13 TeV. The data correspond to a total integrated luminosity of 9 fb−1. No significant signal is observed in the mass range from 3.4 to 3.8 GeV/c2. Upper limits are set at 95% credibility level on the ratio of the Ξ+cc production cross-section times the branching fraction to that of the Λ+c and Ξ++cc baryons. The limits are determined as functions of the Ξ+cc mass for different lifetime hypotheses, in the rapidity range from 2.0 to 4.5 and the transverse momentum range from 4 to 15 GeV/c.
The number of space objects will grow several times over in the coming years due to the planned launches of constellations of thousands of microsatellites. This leads to a significant increase in the threat of satellite collisions, and spacecraft must undertake collision avoidance maneuvers to mitigate the risk. According to publicly available information, conjunction events are currently handled manually by operators on the Earth. Manual maneuver planning requires qualified personnel and will be impractical for constellations of thousands of satellites. In this paper we propose a new modular autonomous collision avoidance system called "Space Navigator". It is based on a novel maneuver optimization approach that combines domain knowledge with Reinforcement Learning methods.
The first untagged decay-time-integrated amplitude analysis of Bs0 → KS0K±π∓ decays is performed using a sample corresponding to 3.0 fb−1 of pp collision data recorded with the LHCb detector during 2011 and 2012. The data are described with an amplitude model that contains contributions from the intermediate resonances K*(892)0,+, K2*(1430)0,+ and K0*(1430)0,+, and their charge conjugates. Measurements of the branching fractions of the decay modes Bs0 → K*(892)±K∓ and Bs0 → K*(892)0K̄0 are in agreement with, and more precise than, previous results. The decays Bs0 → K0*(1430)±K∓ and Bs0 → K0*(1430)0K̄0 are observed for the first time, each with significance over 10 standard deviations.
The first amplitude analysis of the B±→π±K+K− decay is reported based on a data sample corresponding to an integrated luminosity of 3.0 fb−1 of pp collisions recorded in 2011 and 2012 with the LHCb detector. The data are found to be best described by a coherent sum of five resonant structures plus a nonresonant component and a contribution from ππ↔KK S-wave rescattering. The dominant contributions in the π±K∓ and K+K− systems are the nonresonant and the B±→ρ(1450)0π± amplitudes, respectively, with fit fractions around 30%. For the rescattering contribution, a sizable fit fraction is observed. This component has the largest CP asymmetry reported to date for a single amplitude of (−66±4±2)%, where the first uncertainty is statistical and the second systematic. No significant CP violation is observed in the other contributions.
A search for the Ξ++cc baryon through the Ξ++cc → D+pK−π+ decay is performed with a data sample corresponding to an integrated luminosity of 1.7 fb−1 recorded by the LHCb experiment in pp collisions at a centre-of-mass energy of 13 TeV. No significant signal is observed in the mass range from the kinematic threshold of the decay to 3800 MeV/c2. An upper limit is set on the ratio of branching fractions ℛ = B(Ξ++cc → D+pK−π+) / B(Ξ++cc → Λ+cK−π+π+), with ℛ < 1.7 (2.1) × 10−2 at the 90% (95%) confidence level at the known mass of the Ξ++cc state.
Formal language theory has a deep connection with such areas as static code analysis, graph database querying, formal verification, and compressed data processing. Many application problems can be formulated in terms of the intersection of languages. The Bar-Hillel theorem states that context-free languages are closed under intersection with a regular set. This theorem has a constructive proof and thus provides a formal justification of the correctness of the algorithms for the applications mentioned above. Mechanization of the Bar-Hillel theorem is therefore both a fundamental result of formal language theory and a basis for certified implementations of these algorithms. In this work, we present a mechanized proof of the Bar-Hillel theorem in Coq.
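For reference, a standard statement of the theorem, in our notation rather than that of the Coq development:

```latex
% If L1 is context-free and L2 is regular over the same alphabet,
% then their intersection is context-free.
\[
  L_1 \in \mathrm{CFL},\quad L_2 \in \mathrm{REG}
  \;\Longrightarrow\;
  L_1 \cap L_2 \in \mathrm{CFL}.
\]
```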
In HEP experiments, CPU resources required by MC simulations are constantly growing and now account for a very large fraction of the total computing power (greater than 75\%). At the same time, the pace of performance improvements from technology is slowing down, so the only solution is a more efficient use of resources. Efforts are ongoing in the LHC experiments to provide multiple options for simulating events in a faster way when higher statistics is needed. Key to the success of this strategy is the possibility of enabling fast simulation options in a common framework with minimal action by the final user. In this talk we will describe the solution adopted in Gauss, the LHCb simulation software framework, to selectively exclude particles from being simulated by the Geant4 toolkit and to insert the corresponding hits generated in a faster way. The approach, integrated within the Geant4 toolkit, has been applied to the LHCb calorimeter but could also be used for other subdetectors. The hit generation can be carried out by any external tool, e.g. a static library of showers or more complex machine-learning techniques. In LHCb, generative models, which are nowadays widely used for computer vision and image processing, are being investigated in order to accelerate the generation of showers in the calorimeter. These models are based on maximizing the likelihood between reference samples and those produced by a generator. The two main approaches are Generative Adversarial Networks (GAN), which take into account an explicit description of the reference, and Variational Autoencoders (VAE), which use latent variables to describe it. We will present how both approaches can be applied to the LHCb calorimeter simulation, their advantages as well as their drawbacks.
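As an illustration of the latent-variable approach mentioned above, here is a minimal variational-autoencoder sketch for flattened calorimeter-cell images; the grid size, latent dimension, and loss are assumptions and do not describe the LHCb models.

```python
import torch
import torch.nn as nn

CELLS = 30 * 30      # assumed flattened calorimeter cell grid
LATENT = 16

class ShowerVAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(CELLS, 128), nn.ReLU())
        self.mu = nn.Linear(128, LATENT)
        self.logvar = nn.Linear(128, LATENT)
        self.dec = nn.Sequential(nn.Linear(LATENT, 128), nn.ReLU(),
                                 nn.Linear(128, CELLS))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterisation trick
        return self.dec(z), mu, logvar

def vae_loss(x, recon, mu, logvar):
    # reconstruction term plus KL divergence to the unit Gaussian prior
    rec = nn.functional.mse_loss(recon, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kl

model = ShowerVAE()
batch = torch.rand(64, CELLS)                    # stand-in for simulated shower images
recon, mu, logvar = model(batch)
loss = vae_loss(batch, recon, mu, logvar)
loss.backward()
```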
The radiative decay Λ0b→Λγ is observed for the first time using a data sample of proton-proton collisions corresponding to an integrated luminosity of 1.7 fb−1 collected by the LHCb experiment at a center-of-mass energy of 13 TeV. Its branching fraction is measured exploiting the B0→K*0γ decay as a normalization mode and is found to be B(Λ0b→Λγ)=(7.1±1.5±0.6±0.7)×10−6, where the quoted uncertainties are statistical, systematic, and systematic from external inputs, respectively. This is the first observation of a radiative decay of a beauty baryon.
Simulation is one of the key components of high energy physics. Historically, it has relied on Monte Carlo methods, which require a tremendous amount of computational resources. These methods may struggle to meet the expected needs of the High Luminosity Large Hadron Collider (HL-LHC), so the experiments are in urgent need of new fast simulation techniques. We introduce a new deep learning framework based on Generative Adversarial Networks which can be faster than traditional simulation methods by five orders of magnitude with reasonable simulation accuracy. This approach will allow physicists to produce a sufficient amount of simulated data for the upcoming HL-LHC experiments using limited computing resources.
A search for rare leptonic decays was performed using the proton-proton collision data collected with the LHCb experiment at center-of-mass energies of 8 TeV and 13 TeV. The following analyses are reviewed in this work:
• Measurements of the B0s → µ+µ− branching fraction and effective lifetime and search for B0 → µ+µ− decays
• Search for the decays B0s → τ+τ− and B0 → τ+τ−
• Search for the lepton-flavour violating decays B0(s) → e±µ∓
• Search for the lepton-flavour-violating decays B0s → τ±µ∓ and B0 → τ±µ∓
• Search for the rare decay B+ → µ+µ−µ+νµ
All results are consistent with the Standard Model. Nearly all the results presented are either unique or the most precise to date.
Data analysis in high energy physics often deals with data samples consisting of a mixture of signal and background events. The sPlot technique is a common method for subtracting the contribution of the background by assigning weights to events. Some of the weights are negative by design. Negative weights can make the training of some machine learning algorithms diverge, since the loss function loses its lower bound. In this paper we propose a mathematically rigorous way to train machine learning algorithms on data samples with background described by sPlot, in order to obtain signal probabilities conditioned on observables, without encountering negative event weights at all. This allows out-of-the-box machine learning methods to be used on such data.
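One way to see how the negative weights can be sidestepped follows from a standard property of sPlot weights: the conditional expectation of the signal sWeight given the observables equals the signal probability. A minimal sketch under toy assumptions is given below; it regresses the weights as targets with a bounded squared-error loss, so no event enters the training with a negative weight. This illustrates the general idea and is not necessarily the exact loss proposed in the paper.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Toy stand-in data: x is the control observable, w mimics sPlot weights
# (some negative); real sWeights would come from a fit to a discriminating variable.
n = 10_000
is_signal = rng.random(n) < 0.4
x = np.where(is_signal, rng.normal(1.0, 1.0, n), rng.normal(-1.0, 1.0, n)).reshape(-1, 1)
w = np.where(is_signal, 1.2, -0.2) + 0.05 * rng.normal(size=n)

# The weights are used only as regression targets, never as sample weights,
# so the squared-error loss stays bounded from below.
model = GradientBoostingRegressor(max_depth=3, n_estimators=200)
model.fit(x, w)
p_signal = np.clip(model.predict(x), 0.0, 1.0)   # approximates p(signal | x)
print("mean predicted signal probability:", p_signal.mean())
```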
The production fractions of B̄0s and Λ0b hadrons, normalized to the sum of B− and B̄0 fractions, are measured in 13 TeV pp collisions using data collected by the LHCb experiment, corresponding to an integrated luminosity of 1.67 fb−1. These ratios, averaged over the b hadron transverse momenta from 4 to 25 GeV and pseudorapidity from 2 to 5, are 0.122±0.006 for B̄0s, and 0.259±0.018 for Λ0b, where the uncertainties arise from both statistical and systematic sources. The Λ0b ratio depends strongly on transverse momentum, while the B̄0s ratio shows a mild dependence. Neither ratio shows variations with pseudorapidity. The measurements are made using semileptonic decays to minimize theoretical uncertainties. In addition, the ratio of D+ to D0 mesons produced in the sum of B̄0 and B− semileptonic decays is determined as 0.359±0.006±0.009, where the uncertainties are statistical and systematic.
The production of charged hadrons within jets recoiling against a Z boson is measured in proton-proton collision data at √s=8 TeV recorded by the LHCb experiment. The charged-hadron structure of the jet is studied longitudinally and transverse to the jet axis for jets with transverse momentum pT>20 GeV and in the pseudorapidity range 2.5<η<4. These are the first measurements of jet hadronization at these forward rapidities and also the first where the jet is produced in association with a Z boson. In contrast to previous hadronization measurements at the Large Hadron Collider, which are dominated by gluon jets, these measurements probe predominantly light-quark jets which are found to be more longitudinally and transversely collimated with respect to the jet axis when compared to the previous gluon dominated measurements. Therefore, these results provide valuable information on differences between quarks and gluons regarding nonperturbative hadronization dynamics.
Measurements of CP observables in B0 → DK∗0 decays are presented, where D represents a superposition of D0 and D̄0 states. The D meson is reconstructed in the two-body final states K+π−, π+K−, K+K− and π+π−, and, for the first time, in the four-body final states K+π−π+π−, π+K−π+π− and π+π−π+π−. The analysis uses a sample of neutral B mesons produced in proton-proton collisions, corresponding to an integrated luminosity of 1.0, 2.0 and 1.8 fb−1 collected with the LHCb detector at centre-of-mass energies of √s = 7, 8 and 13 TeV, respectively. First observations of the decays B0 → D(π+K−)K∗0 and B0 → D(π+π−π+π−)K∗0 are obtained. The measured observables are interpreted in terms of the CP-violating weak phase γ.
A time-dependent analysis of the B0s→ϕγ decay rate is performed to determine the CP-violating observables Sϕγ and Cϕγ and the mixing-induced observable AΔϕγ. The measurement is based on a sample of pp collision data recorded with the LHCb detector, corresponding to an integrated luminosity of 3 fb−1 at center-of-mass energies of 7 and 8 TeV. The measured values are Sϕγ = 0.43 ± 0.30 ± 0.11, Cϕγ = 0.11 ± 0.29 ± 0.11, and AΔϕγ = −0.67 +0.37/−0.41 ± 0.17, where the first uncertainty is statistical and the second systematic. This is the first measurement of the observables S and C in radiative B0s decays. The results are consistent with the Standard Model predictions.