### [Semi-forgotten] Tricks for simple loop integrals

10Aug07

I used to have a fear of loop diagrams — something about being intimidated by having to integrate over loop momenta. I’ve gotten over this loop-a-phobia after being faced with several such integrals as part of my work this summer, where I’ve been calculating box and penguin diagrams for B-mesons.

The Form of the Beast

For such systems, one can usually ignore the momenta of the external particles since they will be much smaller than the loop momenta. This simplifies the loop integrals a bit, allowing them to be written in the form:

(1) $I_p^{(2n)} (m_1^2, ..., m_p^2) = \int \frac{d^4 k}{(2\pi)^4} \frac{k^{2n}}{(k^2-m_1^2) \cdots (k^2-m_p^2)}$.

(See the ‘Sanity Check’ appendix below for why loop integrals with no external momenta can be written this way.)

Keep this notation in mind as I’ll be using it for the rest of the post. The ‘standard’ notation I’ve found is to write $I_p^{(0)} = I_p$, $I_p^{(2)} = J_p$, and $I_p^{(4)} = K_p$. But I think this is an awkward system reminiscent of the ‘SPDF’ nomenclature for electron orbitals.

These integrals are rather common in flavour physics (where the external momenta of a heavy meson can usually be ignored) and have standard values that aren’t too hard to look up. It takes a bit of trickery, however, to actually do these integrals. These tricks seem to be well known among those in the field, but aren’t common in the usual QFT textbooks. Maybe they’re obvious to everyone but me. In case I’m not the only one, though, I’d like to share two of these useful tricks. Both boil down to simple algebra.

Easy tricks that you already know

Our goal will be to simplify general integrals $I^{(2n)}_p$ into easy-to-handle integrals $I^{(0)}_2$. (Or $I^{(0)}_1$ if you’re really timid.)

First of all, in order to reduce the powers in the numerator, we make use of the handy algebraic relation:

(2) $\frac{k^2}{k^2-m^2} = 1 + \frac{m^2}{k^2-m^2}$

This is straightforward to check in a line or two of algebra. If you’re really lazy, ask one of the physics-for-pre-meds students that you periodically have to TA.
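
If you’d rather not bother the pre-meds, sympy will vouch for it. (A sketch of my own, not part of the original calculation: I treat $k^2$ and $m^2$ as single symbols, which is all the identity cares about.)

```python
import sympy as sp

# k2 stands for k^2 and m2 for m^2; identity (2) is pure algebra in them
k2, m2 = sp.symbols('k2 m2')

lhs = k2 / (k2 - m2)
rhs = 1 + m2 / (k2 - m2)
assert sp.simplify(lhs - rhs) == 0
```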

Now what does this do when we apply it to integrals of the form $I^{(2n)}_p$? It gives us the handy relation:

(3) $I^{(2n)}_p(m_1^2, ..., m_p^2) = I^{(2n-2)}_{p-1}(m_1^2, ..., m_{p-1}^2) + m_p^2 I^{(2n-2)}_p(m_1^2, ..., m_p^2)$

And hence we’re trading one integral with a numerator $k^{2n}$ for two integrals with the simpler numerator $k^{2n-2}$. As a bonus, one of the integrals has one less factor of $(k^2-m_i^2)$ in the denominator. By applying this trick iteratively, one will end up with only integrals of the form $I^{(0)}_p$.

Now we’d like to simplify the denominator, so we need an algebraic trick that turns a product of denominators into a sum. Here’s what we’ll use:

(4) $\frac{1}{k^2-m_1^2}\cdot \frac{1}{k^2-m_2^2} = \frac{1}{m_1^2-m_2^2}\left( \frac{1}{k^2-m_1^2} - \frac{1}{k^2-m_2^2}\right)$

This is just as straightforward to check as (2). If you’re still really lazy and there are no pre-med students around, then call up your little sister.
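
In the absence of a little sister, sympy again does the job. Its `apart` function produces the same split directly, since (4) is nothing but partial fractions in $k^2$. (The symbol names below are my own; `m1s` and `m2s` stand for the squared masses, assumed distinct.)

```python
import sympy as sp

# k2 stands for k^2; m1s, m2s stand for m_1^2 and m_2^2 (assumed distinct)
k2, m1s, m2s = sp.symbols('k2 m1s m2s')

lhs = 1 / ((k2 - m1s) * (k2 - m2s))
rhs = (1 / (m1s - m2s)) * (1 / (k2 - m1s) - 1 / (k2 - m2s))
assert sp.simplify(lhs - rhs) == 0

# apart() gives the same decomposition -- it is partial fractions in k2
assert sp.simplify(sp.apart(lhs, k2) - rhs) == 0
```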

This allows us to write down the rule:

(5) $I^{(2n)}_p(m_1^2, ..., m_p^2) = \frac{1}{m_1^2 - m_p^2}\left( I^{(2n)}_{p-1}(m_1^2, ..., m_{p-1}^2) - I^{(2n)}_{p-1}(m_2^2, ..., m_p^2) \right)$.

Thus we can reduce the amount of junk in the denominator in exchange for having to do two simpler integrals and some multiplication.

Putting it all together, we can iterate these procedures to turn any complicated integral $I^{(2n)}_p$ into a sum of simple integrals $I^{(0)}_{1,2}$. These can be solved using dimensional regularization.
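
For the algebra-averse, the two rules are easy to automate. Here’s a minimal sympy sketch (the function names and interface are my own invention) that applies rules (3) and (5) recursively and checks the result at the integrand level, with $x$ standing in for $k^2$:

```python
import sympy as sp

x = sp.symbols('x')  # x stands for k^2 throughout

def reduce_I(n, masses):
    """Formally reduce I_p^{(2n)}(m_1^2, ..., m_p^2) to terms with no k^2
    left in the numerator and at most two denominator factors, via rules
    (3) and (5).  `masses` holds the *squared* masses, assumed distinct;
    n < p is assumed so no scaleless pieces appear.
    Returns a list of (coefficient, remaining squared masses) pairs."""
    masses = tuple(masses)
    if n > 0:
        # rule (3): peel one power of k^2 off the numerator
        out = list(reduce_I(n - 1, masses[:-1]))
        out += [(masses[-1] * c, ms) for c, ms in reduce_I(n - 1, masses)]
        return out
    if len(masses) > 2:
        # rule (5): split the first and last denominator factors
        pref = 1 / (masses[0] - masses[-1])
        out = [(pref * c, ms) for c, ms in reduce_I(0, masses[:-1])]
        out += [(-pref * c, ms) for c, ms in reduce_I(0, masses[1:])]
        return out
    return [(sp.Integer(1), masses)]

def integrand(c, ms):
    """Rebuild the integrand c * k^0 / prod (k^2 - m_i^2) for one term."""
    return c * sp.Mul(*[1 / (x - m) for m in ms])

# sanity check on the integrand level: I_3^{(2)}(m1^2, m2^2, m3^2)
m1s, m2s, m3s = sp.symbols('m1s m2s m3s')
terms = reduce_I(1, (m1s, m2s, m3s))
orig = x / ((x - m1s) * (x - m2s) * (x - m3s))
assert sp.simplify(sum(integrand(c, ms) for c, ms in terms) - orig) == 0
```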

The heart of the matter is the trick:

(6) $\int \frac{d^n k}{(2\pi)^n} \frac{(k^2)^\beta}{(k^2-A^2)^\alpha} = \frac{i}{(4\pi)^{n/2}}(-1)^{\alpha+\beta}(A^2)^{\beta-\alpha+n/2} \frac{\Gamma(\beta+n/2)\Gamma(\alpha-\beta-n/2)}{\Gamma(n/2)\Gamma(\alpha)}$

This is a standard result, but I call this Mars’ Trick after the fellow student who taught it to me. Some tips for proving this are provided in the Appendix below.
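
One can spot-check (6) numerically in a convergent case. Below I use the Euclidean (Wick-rotated) version, which trades $k^2 - A^2$ for $k_E^2 + A^2$ and absorbs the $i$ and the $(-1)^{\alpha+\beta}$; the $(2\pi)^n$ has been dropped from both sides. This is my own sketch with mpmath, not a proof:

```python
from mpmath import mp, quad, pi, gamma, inf

mp.dps = 15
n, alpha, beta, A = 4, 3, 0, 1.0  # convergent case: alpha - beta - n/2 > 0

# left side: surface area of S^{n-1} times the radial integral
omega = 2 * pi**(n / 2) / gamma(n / 2)
radial = quad(lambda k: k**(n - 1) * k**(2 * beta) / (k**2 + A**2)**alpha,
              [0, inf])
lhs = omega * radial

# right side: the Euclidean version of Mars' Trick (6)
rhs = (pi**(n / 2) * A**(2 * beta - 2 * alpha + n)
       * gamma(beta + n / 2) * gamma(alpha - beta - n / 2)
       / (gamma(n / 2) * gamma(alpha)))

assert abs(lhs - rhs) < 1e-6  # both should equal pi^2 / 2 here
```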

Anyway, that’s it! Unfortunately, complicated integrals end up costing you a mess of algebra, but the actual mess is easy enough to type into Mathematica if you’re really that lazy. If you’re even lazier than that and there are no pre-meds and your little sister isn’t home, you can ask a girl scout to do it for you in exchange for buying cookies.

At any rate, the girl scout will probably tell you that all you’re doing is partial fraction decomposition. You’ll slap your hand to your forehead and realize that you should have known this all along.

One neat feature of doing the integrals this way is that you can keep an eye on the divergences. They come from $I^{(0)}_2(m_1^2, m_2^2)$ and can be isolated as:

(7) $I_{\textrm{divergent}} = I^{(0)}_2(m^2,m^2) = \frac{i}{16 \pi^2}\left(\frac{2}{\epsilon} - \gamma_E + \log \left(\frac{4\pi}{m^2}\right)\right)$.

I’ve skipped some manipulations (see appendix below if you’re totally lost), but this shouldn’t be too hard to show if you play with the integrals. This expression should be familiar to anybody who has used dimensional regularization — indeed, that’s where it comes from.
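
You can convince yourself of the expansion numerically: strip off the overall $i/(16\pi^2)$ and compare $\Gamma(\epsilon/2)\,(4\pi/m^2)^{\epsilon/2}$, which is what (6) gives for this integral, against its expansion at a small value of $\epsilon$. (A quick check of my own, with $m^2$ set to 1.)

```python
import math

EULER_GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

eps = 1e-6
m2 = 1.0  # m^2, in units where the log argument is dimensionless

# what (6) gives for I_2^(0)(m^2, m^2), with the overall i/(16 pi^2) stripped
exact = math.gamma(eps / 2) * (4 * math.pi / m2)**(eps / 2)

# the expansion in (7): 2/eps - gamma_E + log(4 pi / m^2)
expanded = 2 / eps - EULER_GAMMA + math.log(4 * math.pi / m2)

assert abs(exact - expanded) < 1e-3
```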

That’s not the way I learned it from Peskin and Schroeder!

The reason why I referred to these as “semi-forgotten” tricks is that they seem to have fallen out of the usual QFT pedagogy. In the age of Mathematica, doing integrals is no longer a skill that will put bread on your table. To the best of my knowledge, this isn’t taught in any of the standard introductory texts. In fact, my interest in performing such calculations first came about when I was trying to solve a problem in Cheng and Li in which logarithms ‘mysteriously’ appeared. (I didn’t write it out above, but the integrals yield logs of mass ratios. These come from dimensional regularization and the trick $A^{\epsilon} = e^{\epsilon \log A} = 1 + \epsilon \log A + O(\epsilon^2)$.)

When faced with integrals of the form (1), a student’s gut feeling will be to reach for Mathematica. Unfortunately, those who aren’t very Mathematica-savvy would have to sift through a bunch of conditionals to decipher the solution. More complicated integrals also seem to take quite some time to compute analytically (I suspect this is because of the presence of poles).

Failing this approach, the next step is to try to follow an analogous calculation in Peskin and Schroeder. Thus one will attempt to combine the denominators of an integral of the form (1) using Feynman parameters:

(8) $\frac{1}{A_1 \cdots A_n} = \int_0^1 dx_1 \cdots dx_n \delta(\sum x_i -1) \frac{(n-1)!}{(\sum x_iA_i)^n}$

This makes the momentum integral easy since one can use Mars’ Trick, equation (6). However, one is left with a rather ugly integral over the Feynman parameters $x_i$, where the integration region is a unit box (rather than all of space). These integrals aren’t impossible, but they require carefully integrating logs and can somewhat obscure the nature of the divergences.
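
As a quick sanity check of (8), the two-denominator case can be integrated exactly; with concrete numbers (my choice, $A=2$, $B=3$) sympy confirms it in one line:

```python
import sympy as sp

x = sp.symbols('x')  # the single Feynman parameter
A, B = 2, 3          # arbitrary concrete denominators

# n = 2 case of (8): 1/(A*B) = int_0^1 dx / (x*A + (1 - x)*B)^2
val = sp.integrate(1 / (x * A + (1 - x) * B)**2, (x, 0, 1))
assert val == sp.Rational(1, 6)  # = 1/(A*B)
```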

Thus, the above tricks do come in handy for some calculations.

Harder Integrals

But wait a minute — will you ever have to calculate a loop integral in the limit where the external momenta are negligible? What about more general loop integrals?

Unfortunately, I won’t go into these manipulations, and chances are that the pre-meds, little sisters, and girl scouts mentioned above are going to be less inclined to help you.

However, it should put your soul at ease to know that such manipulations have been done by Passarino and Veltman (Nucl. Phys. B 160:151, 1979), after whom the integrals are named.

I’m told that a good reference for these manipulations is the set of appendices in Pokorski’s Gauge Field Theories from the Cambridge Monographs on Mathematical Physics series. For general background and exercises on doing loop integrals in their ‘natural setting’, Ramond’s Field Theory: A Modern Primer is a good reference. (Though bear in mind that it’s no longer so ‘modern’ and one might get frustrated that everything is Euclidean.)

Doing integrals is certainly becoming a lost art. (Gradshteyn and Ryzhik would be sad.) While there are still a few people in the world who can really appreciate a cleverly done integral, most of the nuts and bolts of calculations are now being passed to computers that can do the algebraic analysis. It’s nice to know, however, that these very same tricks are used by programs for evaluating loop integrals. For example: SOFTSUSY and xloops. (I’m sure there are heaps of others, but that’s what came up after a quick Google search.)

Appendix: Sanity Check

Why do loop integrals take the form of (1)? In the absence of external momenta our denominators naturally take this form, i.e. with no linear terms in $k$. Since they are functions of $k^2$, they are even in $k$. In general, the numerators take the form $k_{\mu_1} k_{\mu_2} \cdots k_{\mu_{2n}}$. If there were an odd number of $k$’s in the numerator, the integral would vanish identically since one would be integrating an odd function over all space.

Now how do we simplify the numerator? It is a symmetric product of vectors, i.e. a totally symmetric tensor. The only symmetric tensor we have lying around is the metric, so the numerator must be a multiple of $k^{2n} \sum_i g_{\mu_{i_1}\mu_{i_2}}g_{\mu_{i_3}\mu_{i_4}}\cdots g_{\mu_{i_{2n-1}}\mu_{i_{2n}}}$, where the sum is over permutations of the indices. It’s relatively easy to find the coefficient by contracting both forms of the numerator with, say, $g_{\mu_{1}\mu_{2}}g_{\mu_{3}\mu_{4}}\cdots g_{\mu_{{2n-1}}\mu_{{2n}}}$. It gets tedious for $n$ greater than 3, but if you write this out for arbitrary spacetime dimension $d$, you’ll notice a pattern. (If you’re clever you can work out the pattern a priori.)
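
A numerical stand-in for this argument (my own sketch, in Euclidean signature where the averaging is easy to sample): average the tensors over the unit sphere by Monte Carlo and compare against the symmetric-tensor ansatz. The $d=4$ fourth-rank coefficients below, $1/(d(d+2))$ and $3/(d(d+2))$, are what the contraction exercise yields.

```python
import numpy as np

rng = np.random.default_rng(0)
d, N = 4, 200_000

# uniform points on S^{d-1}: normalize Gaussian vectors
k = rng.standard_normal((N, d))
k /= np.linalg.norm(k, axis=1, keepdims=True)

# <k_mu k_nu> = delta_{mu nu} / d
second = (k[:, :, None] * k[:, None, :]).mean(axis=0)
assert np.allclose(second, np.eye(d) / d, atol=0.01)

# fourth moments: <k_1^2 k_2^2> = 1/(d(d+2)),  <k_1^4> = 3/(d(d+2))
assert abs((k[:, 0]**2 * k[:, 1]**2).mean() - 1 / (d * (d + 2))) < 0.01
assert abs((k[:, 0]**4).mean() - 3 / (d * (d + 2))) < 0.01
```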

Anyway, now you’ve separated the Lorentz structure from the integration variable. The metrics can then be used to simplify the Dirac structure of your amplitude, since the k‘s in the numerator are usually contracted with gamma matrices. And voila, you’ve written your integral in the form of equation (1) above. More importantly, you’ve disentangled the Lorentz/Dirac structure from the loop integral, allowing you to see what your effective operators look like. (You may have to play around a bit with Fierz identities.)

Appendix: Proving and Using Mars’ Trick

A few tips for proving and using Mars’ Trick, equation (6). First of all, you’ll need to use the trick:

$\frac{1}{A}= \int_0^\infty ds \, e^{-As}$ (valid for $A > 0$).

Next, you’ll need to keep in mind the definition of the $\Gamma$ function:

$\Gamma(z) = \int_0^\infty dt \, t^{z-1} \, e^{-t}$
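
sympy will verify both ingredients symbolically. (Note that the Schwinger trick needs $A > 0$, which is where the Wick rotation / $i\epsilon$ bookkeeping comes in; the positivity assumptions below encode that.)

```python
import sympy as sp

A, s, t, z = sp.symbols('A s t z', positive=True)

# Schwinger parameterization: 1/A as an exponential integral (needs A > 0)
assert sp.integrate(sp.exp(-A * s), (s, 0, sp.oo)) == 1 / A

# Euler's integral representation of the Gamma function (needs Re z > 0)
assert sp.integrate(t**(z - 1) * sp.exp(-t), (t, 0, sp.oo)) == sp.gamma(z)
```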

And that should do it. To actually use Mars’ Trick, you should be familiar with dimensional regularization. There are plenty of standard references; any introductory QFT book will do the trick. Here’s an old one by Leibbrandt from RMP.

Be sure to keep your factors of $\epsilon = 4-n$ straight. These come into play when you use equations like:

$A^{\epsilon} = e^{\epsilon \log A} = 1 + \epsilon \log A + O(\epsilon^2)$

and

$\Gamma(-n+\epsilon) = \frac{(-1)^n}{n!}\left[ \frac{1}{\epsilon} + 1 + \frac{1}{2} + \cdots + \frac{1}{n} - \gamma_E + O(\epsilon) \right]$.
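
Both expansions are easy to check: the first symbolically, the second numerically at $n = 2$, where the bracket is $1/\epsilon + 1 + 1/2 - \gamma_E$. (Again a sketch of my own, not from any particular reference.)

```python
import math
import sympy as sp

# A^eps = 1 + eps*log(A) + O(eps^2), checked symbolically
eps_s = sp.symbols('epsilon', positive=True)
A = sp.symbols('A', positive=True)
series = sp.series(A**eps_s, eps_s, 0, 2).removeO()
assert sp.expand(series - (1 + eps_s * sp.log(A))) == 0

# Gamma(-n + eps) for n = 2, checked numerically at small eps
EULER_GAMMA = 0.5772156649015329
eps = 1e-6
lhs = math.gamma(-2 + eps)
rhs = ((-1)**2 / math.factorial(2)) * (1 / eps + 1 + 1 / 2 - EULER_GAMMA)
assert abs(lhs - rhs) < 1e-3
```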

Anyway, with a little bit of practice, you’ll be doing most of this in your head. Just in time to catch up to all the pre-meds, little sisters, and girl scouts.

#### 9 Responses to “[Semi-forgotten] Tricks for simple loop integrals”

1. for one of my part 3 subjects a variant of trick no (6) was assumed knowledge and not given in an exam question where you had to get through some integration. if you didn’t know your gamma function integrals you could not press on and would be unable to do most of the question.

i will miss part 3

NOT!

2. robert

The facility with definite integrals is sadly a skill that is falling, at best, into abeyance. Over on Cosmic Variance, though, Sean Carroll gets reasonably excited about doing some fancy twiddling with spherical harmonics. He had to get his hands dirty because Mathematica could not do his example ‘from cold’. Whatever else Wolfram’s whiz might do, it can only furnish you with what, and not how or why. (And it can get the answer totally wrong, or in a misleading form.)

To me, doing integrals is a bit like the Times crossword (or even Part 3): it requires a broad but rather superficial knowledge of suitably arcane matters and, if you can do it, it looks good and enhances your rep as a mega-mind. It is neither a necessary nor a sufficient attribute when it comes to the real world.

At an altogether more exalted level, check out the tale of the youthful H. S. M. Coxeter’s first paper, in which he came up with some integrals whose values he could determine by geometrical arguments but which thwarted him on the analytical front. The youthful geometer was thrilled to the core to get a letter from Hardy, who confessed to an inability to leave such things alone, which supplied some twenty pages of derivation. (Recounted in the preface to The Beauty of Geometry by H. S. M. Coxeter, published as a Dover reprint.)

Needless to say, Mathematica gives up on these guys straight away.

3. Alejandro Rivero

Just to compare, could you show the Mathematica code for the same calculations?

4. Hi Alejandro — the short answer is no, I can’t. 🙂

I did the integral by hand using Feynman parameters and only put it into Mathematica once I’d already had an algebraic form. The integrals are symmetric in their arguments (the masses), so I was trying to test whether my algebraic answer was indeed symmetric. Unfortunately, it wasn’t, so I must have made errors somewhere.

However, the following webpage has a nice step-by-step guide to doing the integrals via Feynman parameters while using Mathematica all the way through:

http://www.scientificarts.com/feynman/feynman.html