### Split Supersymmetry for [Part III Students]

Yesterday I gave a `Part III talk,’ which is meant to be a chance for students here to practice their seminar skills and to share neat mathematical ideas with their peers. It’s also something of a commercial for Part II students who are interested in doing Part III. Most people are giving talks on their essay topics, but I thought I’d give a talk with fewer equations on what I think is a really cute idea.

My talk is available in pdf form here: Split Supersymmetry.

Summary (roughly following the slides in the link above)

A very important idea: physics at different scales **decouples**. This is why we can calculate the trajectory of a basketball without considering modifications from general relativity or quantum gravity. So at a given energy scale, one only needs an `**effective theory**’ valid at that scale. In fancy words, this idea is formalized by the **renormalization group**: flowing toward low energies near a fixed point, irrelevant operators die away while relevant operators remain robust.
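This power counting can be sketched in a few lines. The following is a toy illustration, not a real renormalization group computation: near a free fixed point in four dimensions, an operator of mass dimension d affects experiments at energy E at relative order (E/Λ)^(d−4), where Λ is the cutoff. The particular energies chosen below are just illustrative.

```python
# Toy power-counting sketch of decoupling (not a full RG computation):
# near a free fixed point in four dimensions, a dimension-d operator
# affects physics at energy E at relative order (E / Lambda)**(d - 4).

def dimensionless_effect(E, Lambda, d):
    """Relative size of a dimension-d operator's effect at energy E,
    for a theory with cutoff Lambda (same units for E and Lambda)."""
    return (E / Lambda) ** (d - 4)

E, Lambda = 1e3, 1e19  # a TeV-scale experiment with a Planck-scale cutoff (GeV)
irrelevant = dimensionless_effect(E, Lambda, d=6)  # suppressed by (E/Lambda)**2
relevant = dimensionless_effect(E, Lambda, d=2)    # enhanced by (Lambda/E)**2
# The irrelevant (dimension-6) operator is utterly negligible at low energy,
# while the relevant (dimension-2) operator -- like the Higgs mass term --
# is dragged up toward the cutoff.
```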

Another important idea: **fine-tuning** versus **naturalness**. These are awkward to define formally, but one probably already has an inherent sense for what these words mean. A finely-tuned theory is one that depends on the value of a parameter to within an unreasonable degree of accuracy. A parameter takes a `natural’ value if it is order 1, or of the order of the effective theory’s cutoff scale raised to the appropriate power. `Fine-tuning’ problems typically occur when a dimensionful parameter takes a value far below the cutoff, since the cutoff is the high energy scale at which new physics would `naturally’ set its value.

A few important background ideas:

- Our current understanding of physics is the **standard model**, which has passed every experimental test, but which we know is only an effective theory that may break down at scales as low as the TeV scale.
- We believe that at the Planck scale there exists a theory of **quantum gravity**. I shall assume that this is a string theory, though only so that I can use the concept of the `string landscape’ (introduced later).
- We know that **dark matter** exists and that it should be explained within particle physics. That is, we need to identify a dark matter particle.
- We have aesthetic and (some) experimental reasons to believe that there exists a `**grand unified theory**’ (GUT) at some “GUT scale.” This means that the Standard Model gauge group lives inside a larger gauge group and the gauge couplings unify at the GUT scale.
- The **cosmological constant problem**: the universe is expanding at an accelerating rate. In order to account for this, the cosmological constant must have an incredibly small (but nonzero) value of roughly 10^{-120} in Planck units. This is finely tuned in a bad way: non-zero, but unnaturally tiny.
- **The hierarchy problem**: the Higgs boson mass is light (~100 GeV). In the Standard Model, however, loop contributions would push the Higgs mass up to a *natural* value at the higher scale where the Standard Model breaks down. The naive choice for this high scale is the Planck scale, in which case the Higgs mass is finely tuned by 17 orders of magnitude, the gap between the electroweak symmetry breaking scale and the Planck scale. While not as bad as the cosmological constant problem, this is also a question of fine-tuning.

The general strategy for “beyond the Standard Model” model builders is to ignore the cosmological constant and attack the hierarchy problem. The hope is that by solving the smaller fine-tuning of the hierarchy, we will learn clever ways to attack the much bigger fine-tuning of the cosmological constant.

The main approach to solving the hierarchy problem is **supersymmetry** (SUSY). This is a symmetry between bosons and fermions that solves the hierarchy problem by introducing new particles and interactions that cancel the divergent contributions to the Higgs mass. In a nutshell: every standard model particle has a SUSY partner particle with exactly the same properties, except that fermions become bosons and vice versa (i.e. force particles become matter particles and vice versa).

As an *added bonus*, SUSY also gives us a dark matter particle and grand unification! If we impose a symmetry called **R-parity**, which (roughly) requires that superpartners couple to standard model particles only in pairs, then the **lightest supersymmetric particle** (LSP) is stable. Such a particle would have to decay into standard model particles and another SUSY particle, but since it’s the *lightest* SUSY particle, such a decay would violate energy conservation. As for grand unification, results from the LEP collider were highly suggestive of the convergence of the coupling strengths of the strong, weak, and electromagnetic forces at a high scale *if* one assumes that supersymmetry exists at a lower scale.
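To see what that LEP hint looks like numerically, here is a minimal one-loop sketch of gauge coupling running. The inputs at M_Z are rounded, and for simplicity the MSSM beta coefficients are used all the way from M_Z rather than switching on at a SUSY threshold, so this is a caricature rather than a fit:

```python
import math

# One-loop running of the inverse gauge couplings:
#     1/alpha_i(Q) = 1/alpha_i(M_Z) - (b_i / (2*pi)) * ln(Q / M_Z)
M_Z = 91.19                        # GeV
inv_alpha_MZ = [59.0, 29.6, 8.5]   # rough 1/alpha at M_Z for U(1)_Y
                                   # (GUT-normalized), SU(2)_L, SU(3)_c
b_SM = [41 / 10, -19 / 6, -7]      # one-loop Standard Model beta coefficients
b_MSSM = [33 / 5, 1, -3]           # one-loop MSSM beta coefficients

def run(inv_alpha, b, Q):
    """Run the inverse couplings from M_Z up to scale Q (GeV)."""
    t = math.log(Q / M_Z) / (2 * math.pi)
    return [ia - bi * t for ia, bi in zip(inv_alpha, b)]

Q_GUT = 2e16  # a typical GUT scale, GeV
sm = run(inv_alpha_MZ, b_SM, Q_GUT)      # the three couplings miss each other
mssm = run(inv_alpha_MZ, b_MSSM, Q_GUT)  # the three couplings nearly meet
```

With these rounded inputs the three MSSM inverse couplings land within about one unit of each other near 2×10^16 GeV, while the Standard Model ones stay spread apart by several units: that near-meeting is the unification hint referred to above.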

However, there’s a problem: we don’t observe any of these predicted SUSY particles. Surely if a bosonic particle with the same mass, quantum numbers and couplings as the electron existed, we would have detected it by now?

Thus, we suppose that supersymmetry is broken. When the temperature of the universe was above the **SUSY breaking scale**, supersymmetry was a good symmetry. After the universe cooled below this temperature, supersymmetry was broken and the particles and superparticles no longer have the same mass. (This is a little subtle: in finite-temperature field theory, the inclusion of thermal corrections to the Lagrangian changes the shape of the potential. For the Higgs potential, for example, the quadratic term is smoothed out at high temperatures so that the potential has only a single symmetry-preserving vacuum.)

To preserve the natural solution of the hierarchy problem, we assume SUSY is broken at around the TeV scale. (Recall that the natural value for a dimensionful parameter, such as the Higgs mass, is the scale at which the effective theory for that parameter breaks down.) Thus the reason we don’t see superpartner particles is that our colliders haven’t yet probed the SUSY breaking scale.

Unfortunately, there are a few more details to work out. If the SUSY particles are at the TeV scale, most models would predict interactions that don’t seem to occur: flavour changing neutral currents and proton decay, among others. It’s important to note here that these `bad’ processes are mediated by the SUSY scalars (partners of standard model fermions).

This is roughly the state of SUSY model-building. Lots of work has been put in to think up really clever models of supersymmetry and supersymmetry breaking, and for the most part the scientific community is just waiting to see what kinds of signatures for new physics we see at the LHC so that they can compare them to the predictions of the various models.

Let’s switch gears a little: why is it that string theory doesn’t make any predictions at the LHC scale? The straightforward answer is that string theory lives at the Planck scale, and in order to make predictions at low scales one has to traverse many orders of magnitude in energy. This involves passing through regions of possible `new physics’ that we don’t understand. On the way down, the idea of **decoupling** (or “relevant/irrelevant operators in an RG flow”) tells us that it’s nearly impossible for a predictive signal of string theory to survive to the TeV scale. It’s important to note that this is *different* from saying string theory is non-predictive. It only says that string theory lives at a scale far away from the TeV scale where we do experiments.

But there may be a second reason why we can’t make LHC predictions from string theory:

Maybe string theory is inherently non-predictive.

Physicists have recently proposed that string theory may have something like 10^{500} different `metastable vacua.’ First of all, a metastable vacuum is a possible state into which the universe (or a big chunk of it) could settle. Different vacuum states would have different values of the cosmological constant, Higgs mass, etc. They would essentially be different universes with different physics. (Subtle point: this is different from saying “different universes with different *laws of physics*,” since all such vacua are governed by string theory.)

Secondly: 10^{500} is a **huge** number! I would guess that this is the largest finite number anyone has ever talked about seriously. I’m unable to compare it to anything that would show how big it is. Even the volume of the known universe, measured in the smallest units of distance we know of, would *still* be off by hundreds of orders of magnitude. Maybe the integrated number of quantum fluctuations since the big bang? The point is, this number is practically infinite. This is **fine-tuning** in the worst possible way: why should the universe fall into one point in the landscape rather than any other?
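A back-of-the-envelope check of that claim: counting Planck-sized cells in the observable universe gives about the most extreme “physical” number one can build, and it still falls hundreds of orders of magnitude short. The radius and Planck length below are rounded values.

```python
import math

# Back-of-the-envelope: the observable universe's volume measured in
# Planck volumes, about the largest "physical" count that comes to mind.
r_universe = 4.4e26   # comoving radius of the observable universe, metres (rounded)
l_planck = 1.6e-35    # Planck length, metres (rounded)

volume_in_planck_units = (4 / 3) * math.pi * (r_universe / l_planck) ** 3
orders_of_magnitude = math.log10(volume_in_planck_units)
# Roughly 185 orders of magnitude -- still over 300 short of 10**500.
```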

There may be a selection principle. Anyone who grew up on popular science books with words like `theory of everything’ would *hope* that there is a selection principle that uniquely tells us why our universe ended up the way that it did. Then we wouldn’t have to worry about the 10^{500}.

But then again, maybe there isn’t a selection principle. Maybe the universe is in this string vacuum (assuming our universe is among the 10^{500} string vacua) only because of the **anthropic principle**. Things are the way they are because if they weren’t, then intelligent life wouldn’t form to ask such a question.

As you can imagine, this is a controversial idea that people have called `unscientific’ or `defeatist.’ There are a few good arguments one can make to take this landscape idea seriously. Here’s my favourite (I’ve heard it from a few people now):

In Kepler’s time, people “knew” the planets moved in circular orbits. They “knew” this because they believed circles were mathematically beautiful, the universe was mathematically beautiful, and therefore the planets `naturally’ travelled in circles. Once they (wrongly) understood this, the only relevant question was what set the orbital radii of the different planets. Kepler developed a very intricate theory of nested platonic solids that determined these radii. I’m sure it was a very beautiful mathematical theory, since he produced such pretty pictures. However, Kepler himself later realized that the planets don’t move in circles at all: they move in ellipses, and all of his `mathematical beauty’ was rubbish. In fact, if we look up in the sky with telescopes, we can observe different solar systems, each with a different spectrum of planetary orbital radii… there is a **landscape** of such configurations! There’s nothing special about our solar system’s particular set of planetary radii! We can ask why we live on a planet that is a certain distance from the sun… and then this is an **anthropic** argument. (Any closer and we burn up, any farther and we freeze.) But there *was* something important to learn: Newtonian gravity. This was the underlying `lesson’ in understanding planetary orbits, and it had a deeper beauty than the one Kepler originally assumed. The problem was that Kepler was originally asking the *wrong questions*.

So *maybe* when we’re concerned about the size of the landscape, we’re worrying about the wrong question. Maybe (and this is a big maybe) **nature is fine-tuned**. Before you go off and say that I’ve been promoting the end of science or that everything is anthropic, let’s think about this a little.

Why do people think the anthropic principle is unscientific? Because it’s non-predictive: without testable predictions the scientific method has nothing to work with, so it isn’t science. But string theory was already having difficulty making predictions from the Planck scale. Let’s take this message (“nature is fine-tuned”) and see what happens at the TeV scale.

Until now our TeV-scale model building has been based on the hierarchy problem. We’ve ignored the elephant in the room: the cosmological constant problem, which is even more finely tuned. However, if we accept fine-tuning as a fact of life, then the hierarchy problem and the cosmological constant aren’t problems at all. Do we still need supersymmetry, then?

Well, if nature is finely tuned, then the SUSY breaking scale doesn’t have to be at a TeV, and in fact can be set quite high. One can argue that the SUSY fermions are naturally light thanks to chiral symmetry (don’t worry about the details), but the SUSY scalars live at this higher SUSY breaking scale and are thus **decoupled** from the TeV-scale theory. Recall, however, that it was these SUSY scalars that mediated all of the bad processes that we don’t observe. Thus by raising the SUSY breaking scale, we’ve gotten rid of the main problems of most SUSY models! Even better, we’ve managed to retain a natural dark matter candidate and grand unification in our SUSY theories!

And so the split supersymmetry proposal is this:

- Nature may well be finely-tuned. The hierarchy problem is no longer an issue.
- Instead, take dark matter and grand unification as our primary motivation for supersymmetry.
- See what kind of models we get.

Thus far this is a cute idea, but we *still* haven’t said anything about experimental predictions. (Until now one could argue that all we’ve been doing is `intellectual wanking,’ as a string theorist I knew once lamented about his work.) But we now have a class of models at the TeV scale that are rather distinct from other SUSY models because of this raised SUSY breaking scale.

In fact, one finds a smoking-gun signature of split supersymmetry: long-lived gluinos (the SUSY partner of the gluon). In most split-SUSY spectra the gluino isn’t much heavier than a TeV (within an order of magnitude or two). A gluino decays via a squark (the SUSY partner of the quark) into quarks and other particles. However, the squarks are really heavy (much heavier than the gluino), so the squark can only appear virtually and the decay rate is strongly suppressed. Thus the gluino is a long-lived particle. This can lead to very obvious experimental signatures such as displaced vertices. There are other important signatures (akin to usual SUSY signatures) that can be used to distinguish between split supersymmetric models.
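One can see why the gluino is long-lived with a crude dimensional estimate. Since the decay goes through a virtual squark, the width scales like m_gluino^5/m_squark^4; the sketch below drops all couplings and order-one factors, and the 10^9 GeV squark mass is just an illustrative choice:

```python
# Order-of-magnitude gluino lifetime: decay through a virtual heavy squark
# gives a width Gamma ~ m_gluino**5 / m_squark**4 by dimensional analysis
# (gauge couplings, loop and phase-space factors of order one are dropped).
HBAR_GEV_S = 6.58e-25  # hbar in GeV * s, to convert a width into a lifetime

def gluino_lifetime(m_gluino, m_squark):
    """Very rough gluino lifetime in seconds; masses in GeV."""
    width = m_gluino ** 5 / m_squark ** 4  # GeV
    return HBAR_GEV_S / width

tau = gluino_lifetime(m_gluino=1e3, m_squark=1e9)
# An ordinary strong decay takes ~1e-24 s; a lifetime in the millisecond
# ballpark is enormous by collider standards, giving displaced vertices
# or gluinos that stop inside the detector.
```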

Split SUSY model building is also remarkably straightforward. One can start with the relic density of dark matter in the universe (known from astrophysical observations), and use this to constrain the cross section for dark matter annihilation in the early universe. This constrains the effective couplings of the dark matter particle’s annihilation modes. One can then impose supersymmetry at the SUSY breaking scale (a free parameter) and grand unification at the GUT scale (determined from the spectrum at the SUSY breaking scale) to further constrain the free parameters of the theory. Using the RG equations to flow back down to the TeV scale, one ends up with the spectrum of a split SUSY model.
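The first step of that chain can be made concrete with the standard thermal freeze-out rule of thumb, Omega h^2 ≈ 3×10^{-27} cm^3 s^{-1} / <sigma v>. This is a rough relation (mild dependence on the particle mass and freeze-out details is dropped), but it shows how the observed abundance pins down the annihilation cross section:

```python
# Thermal freeze-out rule of thumb: the relic abundance of a stable
# particle is inversely proportional to its annihilation cross section,
#     Omega * h**2  ~  3e-27 (cm^3/s) / <sigma v>
# (rough; logarithmic freeze-out factors are dropped).

def required_sigma_v(omega_h2):
    """<sigma v> in cm^3/s needed to leave a relic density omega_h2."""
    return 3e-27 / omega_h2

sigma_v = required_sigma_v(0.11)  # the observed dark matter abundance
# Roughly 3e-26 cm^3/s: a weak-interaction-sized cross section, which is
# why a weak-scale LSP lands on the observed abundance so naturally.
```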

So here’s what’s happened:

- We’ve assumed that fine tuning isn’t a problem. We motivated this by the string landscape, though the idea doesn’t actually depend on the landscape.
- We’ve used dark matter and grand unification as main motivations for supersymmetry and built our models around this.
- Such models have the SUSY breaking scale much higher than a TeV, removing scalar-mediated problems that plague most `natural’ SUSY models.
- We’ve been able to extract TeV-scale experimental signatures.

If we see such a signature, we would have to seriously consider the possibility that nature is fine-tuned and until now we’ve been asking the wrong questions. This doesn’t mean we should give up on natural solutions. In fact, this might be even more motivation to find a natural solution. However, it does say that we should be aware of the possibilities ahead of us.

For references please see the pdf at the top of this post. For other pedagogical introductions, please see various talks such as this one: “The Last Word on Nature’s Greatest Puzzles.” (From which I’ve borrowed several arguments!)


Flip, this is the most informative post on my feed that I’ve read all week. It has appeared at just the right time for me as I’m half way through reading ‘The trouble with physics’. Please keep up the good work… you will still have time to study, won’t you?!

Great post, Flip. Sorry I couldn’t come to your talk — I was in London seeing a concert.

Regarding 10^500, take a look at

http://en.wikipedia.org/wiki/Graham's_number

Also, http://xkcd.com/c207.html

more on the biggest numbers:

http://blag.xkcd.com/2007/03/14/large-numbers/