

The LHC Inverse Problem

We eagerly await data from the Large Hadron Collider (LHC), anticipating that it will provide hints of the physics that lies beyond the Standard Model (BSM). Studies that explore how accurately the LHC can determine the parameters of a specific BSM scenario often find that it can achieve a remarkable level of precision.

However, this seems to contradict the well-known fact that there are only a limited number of observables that the LHC can measure. In particular, if a BSM scenario predicts the production of dark matter candidates at the LHC, then these dark matter particles pass invisibly through the detector, resulting in a fundamental loss of information.

How is it then that we can measure the parameters of a given theory very accurately even though there are relatively few measurements we can make? The answer seems to be that the loss of information at the LHC results in discrete ambiguities in mapping LHC data back to theoretical models. That is, even if one model fits the data very well with small error bars, there will be a (possibly large) number of other models that fit the data equally well. Moreover, precisely because the error bars are small, these other “degenerate” models are hard to find through brute-force scanning.
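As a toy illustration of such a degeneracy, consider the standard kinematic endpoint of the dilepton invariant-mass distribution in a two-step cascade decay: the endpoint depends on a single combination of three masses, so very different spectra can produce the same measured edge. The two spectra below are invented purely for illustration.

```python
import math

def dilepton_edge(m_chi2, m_slep, m_chi1):
    """Endpoint of the dilepton invariant-mass distribution in the cascade
    chi2 -> slepton + lepton -> chi1 + lepton + lepton (standard formula)."""
    return math.sqrt((m_chi2**2 - m_slep**2) * (m_slep**2 - m_chi1**2)) / m_slep

# Two quite different (made-up) spectra, in GeV:
spectrum_a = (180.0, 130.0, 100.0)
spectrum_b = (300.0, 250.0, 219.4)

edge_a = dilepton_edge(*spectrum_a)  # ~79.6 GeV
edge_b = dilepton_edge(*spectrum_b)  # ~79.5 GeV: nearly the same edge
```

A single endpoint measurement, however precise, cannot distinguish these two spectra; breaking the degeneracy requires additional, independent observables.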

The presence of discrete ambiguities in interpreting LHC data is known as the LHC Inverse Problem. It is an “inverse problem” because it is well known how to solve the forward problem of determining the potential LHC signatures of a particular model, while in real life we are trying to invert this process and determine the underlying model from a set of measured LHC signatures. Because these ambiguities are discrete, it seems likely that simple observables might be able to resolve at least some of these degeneracies. This inspires us to search for new methods to analyze LHC data.

MARMOSET

Fall 2006 - Spring 2007

What is the minimal parameterization of new physics signals that does not suffer from the problem of ambiguities? In studying the LHC Inverse Problem, we found that discrete ambiguities arise when two models have similar particle masses, similar particle production modes, and similar particle decay modes. Therefore, we proposed the idea of an “On-Shell Effective Theory” (OSET), which summarizes new physics models directly in terms of these three properties: masses, cross sections, and branching ratios.

In the language of OSETs, the problem of discrete ambiguities is mitigated because it is less likely for two OSETs to look similar since OSETs are directly related to the actual LHC signals that are observed. Moreover, OSETs are parameterized in terms of quantities that can be easily calculated in a fundamental theory. In this way, OSETs are a useful intermediary between LHC data and theoretical models.
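In this language, a new-physics signal is specified entirely by masses, production cross sections, and branching ratios. A minimal sketch of how such an input might be organized follows; the class names and the gluino/neutralino numbers are illustrative inventions, not MARMOSET's actual input format.

```python
from dataclasses import dataclass, field

@dataclass
class OSETParticle:
    name: str
    mass: float                                      # GeV
    branchings: dict = field(default_factory=dict)   # decay mode -> branching ratio

@dataclass
class OSET:
    particles: list
    production: dict                                 # production mode -> cross section (pb)

    def check(self):
        """Branching ratios of each unstable particle must sum to one."""
        for p in self.particles:
            if p.branchings:
                assert abs(sum(p.branchings.values()) - 1.0) < 1e-6, p.name

# Illustrative two-particle OSET (all numbers are made up):
oset = OSET(
    particles=[
        OSETParticle("gluino", 600.0, {"q q chi0": 1.0}),
        OSETParticle("chi0", 100.0),   # stable: escapes as missing energy
    ],
    production={"gluino gluino": 5.0},
)
oset.check()
```

The point of the parameterization is that every entry here is both directly constrained by LHC observables and directly computable from any candidate fundamental theory.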

To provide a concrete implementation of the OSET philosophy, we created a program, MARMOSET (Mass and Ratio Modeling in On-Shell Effective Theories), that allows the user to create LHC pseudo-data based on OSET input parameters.

  • MARMOSET: The Path from LHC Data to the New Standard Model via On-Shell Effective Theories.
    Nima Arkani-Hamed, Bruce Knuteson, Stephen Mrenna, Philip Schuster, Jesse Thaler, Natalia Toro, and Lian-Tao Wang.
    hep-ph/0703088

SUSY and the LHC Inverse Problem

Summer/Fall 2005

The Minimal Supersymmetric Standard Model (MSSM) is one of the leading candidates for a theory beyond the Standard Model. The MSSM predicts a new supersymmetric partner particle for every Standard Model particle, effectively doubling the number of known fundamental states.

In principle, the spectrum of these partner states can yield information about the ultraviolet structure of the theory. However, we found that, given a generic strategy for analyzing LHC data, there are often discrete ambiguities in trying to determine the pattern of supersymmetric states. Moreover, these ambiguities are difficult to identify, because they correspond to drastically rearranging the MSSM spectrum.

This study offered an explicit confirmation of the LHC Inverse Problem.

LHC Olympics

Summer 2005 - Summer 2007

The LHC Olympics were a series of four workshops where theorists attempted to determine the underlying TeV scale model from pseudo-LHC “blackbox” data. Using crude but semi-realistic data analysis methods, we learned firsthand the challenges of interpreting ambiguous data, and developed an intuition for the LHC Inverse Problem.

I was a member of the Harvard team, and we created a U(1)_{B-L} gauge-mediated supersymmetric blackbox for the 2nd LHC Olympics. We were able to solve (or solve up to degeneracies) several blackboxes created by other teams. As part of my participation in the LHC Olympics, I helped improve the user interface to John Conway's PGS (Pretty Good Simulation) detector simulator.

lhc_inverse.1460218749.txt.gz · Last modified: 2016/04/09 16:19 by jthaler