Momentum Balance in Hadron Collisions

In hadron colliders like the LHC, one of the most important tools for finding new particles, and “dark matter” candidates in particular, is conservation of momentum. This simple rule, which Newton made rigorous when he developed his mechanics, holds even at the level of individual fundamental particles, as far as we can tell.

Most modern colliders at the energy frontier have focused on smashing protons into protons, or protons into antiprotons. The alternatives are, of course, to collide heavier things, like gold nuclei, or lighter things, like electrons. Protons are a great candidate for a few reasons: they have a long (probably infinite) lifetime; they are much heavier than electrons, which makes it easier to ramp them up to high energies without losing everything to synchrotron radiation; and they are not made of as many pieces as a heavy nucleus, so the total energy of the particle is not split among too many constituents (which would mean lower-energy collisions).

However, protons are still composite objects. When pieces of two protons smash into each other, no matter how well we tune the energies of the protons themselves, there is only a vanishingly small probability that those pieces will have precisely the same momentum. A Newtonian picture is sufficient here: if we imagine the proton as a ball filled with smaller balls called quarks and gluons, then fixing the net momentum of the larger ball only tells us the sum of the momenta of all the individual pieces; it tells us nothing about the momentum of any individual quark or gluon.

When we throw in relativity, the mass-energy relation tells us that no individual quark or gluon can carry so much momentum that its energy exceeds the mass of the proton. When we throw in quantum mechanics, we have to give up altogether on the idea of balls as well-defined objects with well-defined momenta at every instant of time. The result is what’s called a parton distribution function, and the upshot is that the two subatomic particles that collide will generally have some momentum imbalance in the lab frame. This means that anything made in a proton-proton collision will likely have some nonzero momentum in the lab frame. In electron-positron colliders like LEP this was not the case: the beam energies could be tuned so precisely that a particle produced in the collision would normally be at rest in the lab frame.
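To make this concrete, here is a minimal toy sketch in Python. The sampling distribution below is an arbitrary stand-in, not a real parton distribution function, and the 6.5 TeV beam energy is just an LHC-like number chosen for illustration; the only point is that the colliding parton pair almost never has zero net longitudinal momentum in the lab frame.

```python
# Toy sketch: two partons carry random fractions x1, x2 of their protons' momenta,
# so the parton-parton system is generally moving along the beam in the lab frame.
import random

BEAM_E = 6500.0  # GeV per beam, an LHC-like number chosen for illustration

def toy_momentum_fraction():
    """Crude, steeply falling stand-in distribution -- NOT a measured PDF."""
    return random.betavariate(0.8, 4.0)

random.seed(1)
for _ in range(5):
    x1, x2 = toy_momentum_fraction(), toy_momentum_fraction()
    # one proton travels in +z, the other in -z
    net_pz = x1 * BEAM_E - x2 * BEAM_E
    print(f"x1 = {x1:.3f}, x2 = {x2:.3f}, net longitudinal momentum = {net_pz:+9.1f} GeV")
```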

However, one thing we can say about newly produced particles is that they will have almost no transverse momentum, that is, momentum in the two-dimensional plane perpendicular to the beam direction. The most obvious reason is that the protons travel in a tight beam, so as a whole they cannot be moving up and down very much or they would escape that beam. But this is not enough: it tells us the total transverse momentum is conserved, but not that a single new particle produced on its own will have zero transverse momentum. Remember that the protons are composite objects, and fixing their net transverse momentum does not tell us, from a Newtonian standpoint, that the individual pieces don’t have large transverse momentum. Thankfully, conservation of energy does.

In the rest frame of one of the traveling protons, the momentum of any individual piece of that proton is bounded by the proton’s mass, about 1 GeV. Since you can get from the lab frame to the rest frame of a beam proton by a boost along the beam direction, and a boost along one axis leaves the orthogonal momentum components unchanged, the transverse momentum of a parton in the lab frame equals its transverse momentum in the proton rest frame. So, to a very good approximation, no constituent of the proton carries transverse momentum larger than roughly the proton’s rest mass. This fact, that everything produced in a collision should carry almost no net transverse momentum (and so a particle produced alone should have almost none at all), is a huge tool in searching for new ‘dark’ particles.
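Here is a quick numerical check of that statement, with made-up numbers: a four-momentum boosted along the beam (z) axis keeps its transverse components exactly.

```python
import math

def boost_along_z(E, px, py, pz, beta):
    """Lorentz boost of a four-momentum (E, px, py, pz) along the z (beam) axis."""
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    return gamma * (E + beta * pz), px, py, gamma * (pz + beta * E)

# a parton with a few hundred MeV of transverse momentum in the proton rest frame
E, px, py, pz = 0.6, 0.2, 0.2, 0.1          # GeV, toy numbers
E_lab, px_lab, py_lab, pz_lab = boost_along_z(E, px, py, pz, beta=0.99999999)

print(px_lab, py_lab)   # still 0.2, 0.2: the boost never touches the transverse plane
print(E_lab, pz_lab)    # the longitudinal pieces, on the other hand, become huge
```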

Dark matter is matter that does not interact electromagnetically, or at least does so extremely weakly, which is why we can’t see it with telescopes. If we make dark matter in a collider, there’s a good chance we won’t be able to see it in our detectors either: electromagnetically neutral particles are notoriously hard to catch. The main tool we have for finding dark particles is the absence of detection. If a dark particle produced in an LHC collision is stable and is produced alone, it will be very tough to see in the detector, since it will probably just escape down the beam line unnoticed.

However, if the dark particle is produced together with a charged particle that we do detect (because both carry some transverse momentum), or if it is unstable and decays to a Standard Model particle plus another dark particle (which again have transverse momentum), then we can find it by looking for an imbalance in the transverse momentum of the collision. If the transverse momenta in an event do not add up to nearly zero, there’s a good chance we’ve made a particle that escaped the detector unnoticed. Of course, sometimes we mismeasure the momentum of things, and the production of neutrinos leads to a momentum imbalance as well; these are the challenges we have to work around. But nevertheless, a lot of the game in looking for dark particles comes down to looking for momentum imbalance in collisions where there should not be any.
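In the experiments this imbalance is usually quantified as the missing transverse momentum: the negative of the vector sum of all the visible transverse momenta. Here is a minimal sketch of that bookkeeping with toy numbers; a real analysis would of course fold in calibrations, detector resolution, and the neutrino backgrounds mentioned above.

```python
import math

def missing_transverse_momentum(visible_pt):
    """Magnitude and direction of the negative vector sum of visible (px, py), in GeV."""
    met_x = -sum(px for px, _ in visible_pt)
    met_y = -sum(py for _, py in visible_pt)
    return math.hypot(met_x, met_y), (met_x, met_y)

# toy event: two visible objects whose transverse momenta clearly do not balance
visible_pt = [(120.0, 10.0), (-60.0, -5.0)]   # (px, py) in GeV
met, met_vec = missing_transverse_momentum(visible_pt)
print(f"missing transverse momentum = {met:.1f} GeV, pointing along {met_vec}")
```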

That’s all we can say about the initial momenta of new particles produced at hadron colliders. Next time I will talk about what can be said about the momenta of the daughter particles in a two-body decay.


Initial State Radiation

In collider events, we often see what’s called Initial State Radiation (ISR). Particle physicists use this term for events where one of the incoming particles radiated a boson before the hard interaction took place. I’d like to give a brief discussion of what this means for the physics, that is, what’s different between an event with initial state radiation and the same process without it?


The tree-level interaction vertices of the standard model, courtesy of Wikipedia.

First off, the ISR is typically a photon, a Z boson, or a gluon. You can see why by looking at this wonderful image above, which lists the “tree level” standard model vertices. Notice that these are mostly vertices joining three particles. To be honest, I don’t know whether the physicists’ consensus is that vertices joining a higher number of standard model particles have been ruled out, but I seem to recall some argument about renormalizability that might exclude such processes… In any case, such vertices are certainly not a part of the standard model Lagrangian (a topic for another post), so if we are working within the context of the standard model, they don’t exist. Another point, to further convince you that other vertices are less relevant: even if they were possible, with each extra particle you attach to a vertex you lose some probability of having all the ingredients you need to make it happen, so even if higher-multiplicity vertices existed, they would occur with lower probability (we say they would be suppressed) for that reason as well.

These are the most likely processes because they all happen at first order in perturbation theory; that is to say, they don’t involve any loops. So these are the most important processes to keep in mind when working in standard model physics: they are going to happen most frequently.

When we speak about ISR, we are typically talking within the context of another physical process. For instance, below is a diagram of an electron and a positron annihilating to make a photon (the star means it’s an “off-shell” photon with mass), and the final state is a charm/anti-charm pair.


A positron and electron combine into a massive (off-shell) photon, which then decays into a charm/anti-charm pair.

In the context of this process, we have the ISR of another photon. So when we talk about initial state radiation, we are talking about a process where we can tack on an extra emission and still have the same underlying physics process. If you look at the chart of standard model interactions, you’ll notice that the only way a fermion can come into and out of a vertex as the same particle is when it couples to one of the bosons I listed above.

Now that we’ve shown that ISR (in the sense we’ve defined it here) has to be a gluon, a photon, or a Z boson, we can get to the real point of this post… what happens to an event when you add in some ISR?

The first effect is the most obvious: the rate of the process is reduced. Each vertex in your Feynman diagram multiplies the amplitude for that diagram by a coupling factor which is, roughly speaking, smaller than one. So no matter what process you choose, adding a vertex to your diagram reduces the probability of the process occurring.
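As a very rough order-of-magnitude heuristic for the photon case (ignoring phase space, interference, and the logarithmic enhancements a real calculation produces), the extra vertex brings one more factor of the electric charge into the amplitude, so the rate picks up roughly one factor of the fine-structure constant:

\frac{\sigma_{\rm with\; ISR}}{\sigma_{\rm no\; ISR}} \sim \alpha \approx \frac{1}{137}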

Unfortunately, I am not positive how to compute precisely how the probability of an event changes when you require additional ISR particles. I’ll explain why. With two ISR bosons, you have to consider at least one extra diagram at tree level. For instance, if we wanted to add another \gamma to the electron-positron annihilation, we could attach it to either the electron or the positron, so we end up with three indistinguishable initial-state configurations that lead to our charm/anti-charm pair with two ISR particles: both bosons attached to the positron, both attached to the electron, or one on each. The bookkeeping in quantum field theory is still very confusing to me, so the best I can do is say that a second ISR emission probably reduces the probability of the event by roughly the same factor as the first. This follows the same line of logic as before, since there is only one diagram without any ISR and two diagrams with one ISR photon, one for each choice of lepton to which we attach the \gamma.
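If all we do is count where the photons can attach (ignoring, as above, the ordering of emissions along a single leg), then k indistinguishable ISR photons can be split between the electron leg and the positron leg in k + 1 ways, which reproduces the three configurations described for two photons. A tiny counting sketch:

```python
def isr_attachment_patterns(k):
    """Ways to split k indistinguishable ISR photons between the electron and positron legs
    (ordering of emissions along a single leg is ignored)."""
    return [(on_electron, k - on_electron) for on_electron in range(k + 1)]

for k in (1, 2, 3):
    patterns = isr_attachment_patterns(k)
    print(f"{k} ISR photon(s): {len(patterns)} configurations -> {patterns}")
# 2 ISR photons give 3 configurations: (2, 0), (1, 1), (0, 2), as described in the text
```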

The next, and most physically relevant, effect is the so-called boost of the off-shell photon. Consider the point in our diagram where the \gamma * exists, in the cases with and without ISR. When there is no ISR, the center of mass of the system is just the rest frame of the \gamma *; but when there is ISR, the system is the \gamma * together with the ISR photon, and its center of mass lies somewhere in between the two, so it no longer coincides with the \gamma *’s rest frame. Notice that “center of mass” is a misleading term in particle physics: what we really mean is the frame in which the particles have zero net momentum (the ISR photon in this system has momentum equal to its energy, but no mass). Why is this important?

Let’s think about what effect this has on the momenta of the charm quarks. In the decay without any ISR, there are a lot of nice things that are true about the charm momenta. First, at a collider there should be essentially no transverse momentum in the initial electron-positron system. That is to say, there is a beam line along which the electrons move, so the leptons should have no component of momentum in the plane at a right angle to that beam line. That means that if you add up the momentum vectors of the charm pair, the sum should also have no transverse momentum. With an ISR boson, however, momentum conservation tells us that the total momentum of the charm pair differs from that of the electron-positron pair by exactly the ISR photon’s momentum:

p_e + p_{\bar{e}} - p_{\gamma^{ISR}} = p_{\gamma *} = p_c + p_{\bar{c}}

which implies that

p_c + p_{\bar{c}} - p_e - p_{\bar{e}} = -p_{\gamma^{ISR}}

This means that the transverse momentum of the charm pair need not be zero: it is equal and opposite to the transverse momentum of the emitted ISR photon. In another post, I’d like to discuss the probability distribution for the direction of the emitted photon, or more generally, how to compute that distribution in arbitrary decays.

The other point to address is that in particle collisions we typically have a lot of symmetry in the initial momentum distribution. At an electron-positron collider like the one in the example above, the net momentum along the beam line is approximately zero for the e^+ e^- system by construction (normally we build colliders to accelerate the two beams to equal and opposite momenta). This means the \gamma * is most likely to be produced at rest in the lab frame. With ISR, however, this is clearly no longer the case: the \gamma * recoils against the ISR photon and is most likely to be boosted with respect to the lab frame, with a 3-velocity roughly equal to the ISR particle’s momentum divided by the mass of the \gamma *. That is,

v_{\gamma *} \approx \frac{|p_{\gamma^{ISR}}|}{m_{\gamma *}} .
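The formula above is the small-recoil approximation; the exact relativistic statement is v = |p| / E with E = \sqrt{p^2 + m^2}. A quick check with made-up numbers:

```python
import math

m_gamma_star = 10.0   # GeV, made-up off-shell photon mass
p_isr = 3.0           # GeV, made-up ISR photon momentum; the gamma* recoils with the same magnitude

E_gamma_star = math.sqrt(p_isr**2 + m_gamma_star**2)

v_exact = p_isr / E_gamma_star      # exact: |p| / E, in units of c
v_approx = p_isr / m_gamma_star     # the approximation above: |p| / m

print(f"exact  velocity = {v_exact:.3f} c")
print(f"approx velocity = {v_approx:.3f} c")
```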

This means two things for the momentum vectors of the charm quarks. First, their components will typically be larger: their kinetic energy in the lab is larger because they were produced by a particle that was in motion with respect to the lab. Second, they will tend to be closer together, since under a Lorentz boost the momentum vectors of the two particles get smooshed together. I’ve tried to illustrate this below.


The smooshing together of vectors under a Lorentz boost. In the rest frame of the off-shell photon, we have a ruler at rest with respect to the lab whose graduation lines are smooshed together by the Lorentz contraction of the lab. We then choose two vectors for the momenta of the daughter particles in one half of the plane (balancing an ISR vector, not shown, pointing to the left). Consider points A and A’ along the trajectories of the two momenta; after un-smooshing the ruler, we inspect A and A’ in the new coordinate system. The vertical distance to the ruler has not changed, but the angle between the two particles is now smaller.
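The same effect can be seen numerically with made-up momenta: take the two daughters back-to-back in the \gamma * rest frame, boost them into the lab along the \gamma *’s direction of motion, and the opening angle between them shrinks.

```python
import math

def boost_along_x(E, px, py, pz, beta):
    """Lorentz boost of a four-momentum along the x axis (the gamma*'s direction of motion)."""
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    return gamma * (E + beta * px), gamma * (px + beta * E), py, pz

def opening_angle_deg(p1, p2):
    """Angle in degrees between two 3-momentum vectors."""
    dot = sum(a * b for a, b in zip(p1, p2))
    norm = math.sqrt(sum(a * a for a in p1)) * math.sqrt(sum(b * b for b in p2))
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

m_c, p = 1.3, 4.0                 # GeV, made-up charm mass and momentum in the gamma* rest frame
E = math.sqrt(p**2 + m_c**2)
theta = math.radians(60)          # emission angle relative to the boost direction

# back-to-back daughters in the gamma* rest frame
c1 = (E,  p * math.cos(theta),  p * math.sin(theta), 0.0)
c2 = (E, -p * math.cos(theta), -p * math.sin(theta), 0.0)
print("rest-frame opening angle:", opening_angle_deg(c1[1:], c2[1:]), "degrees")   # 180

beta = 0.8                        # made-up boost of the gamma* with respect to the lab
c1_lab = boost_along_x(*c1, beta)
c2_lab = boost_along_x(*c2, beta)
print("lab-frame opening angle: ", opening_angle_deg(c1_lab[1:], c2_lab[1:]), "degrees")  # smaller
```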

So let’s recap.

  1. ISR is typically spoken about in the context of a known process that can also occur without the emission of a Z, \gamma, or gluon.
  2. Requiring ISR alongside that process reduces the rate at which it occurs.
  3. ISR boosts the rest of the decay system with respect to the lab, giving the decay products non-zero transverse momentum and adding kinetic energy to the system.
  4. Boosted decay products get smooshed together in the lab frame.

Thanks for reading!

-Bobak

The Unreasonable Effectiveness of Scalar Algebraic Relations

One of the most interesting things to me about the theoretical framework of physics is that basically all of our descriptions of physics are based on algebraic relations. There is a bit of a chicken-and-egg argument to be had here about whether this is because we as people like to think this way, or whether there is some deeper meaning, but when you look at the track record of following this line of thought, it’s quite compelling. There are formulas like

E^2 -p^2 = m^2

which somehow transcend the scale and nature of the problem and finagle their way into a point of view where they are true. The equation above is the famous Einstein energy-momentum relation. It holds not only in special relativity, where it was first conceived and where it describes the classical trajectory of an honest-to-goodness particle or rigid body; it is also a cornerstone of physical quantum field theories. In QFT this is called the mass-shell condition: any time a particle can be observed directly, it obeys that relationship.
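As a small illustration of what that frame independence means in practice, here is a sketch in natural units (c = 1) with toy numbers: the same particle, described once at rest and once in motion, returns the same mass from E^2 - p^2.

```python
import math

M_MUON = 0.1057   # GeV, roughly the muon mass

def invariant_mass(E, px, py, pz):
    """m = sqrt(E^2 - |p|^2) in natural units (c = 1); the same in every frame."""
    return math.sqrt(max(E**2 - (px**2 + py**2 + pz**2), 0.0))

# the same muon described in two frames: at rest, and moving with |p| = 5 GeV
px, py, pz = 3.0, 4.0, 0.0
E_moving = math.sqrt(px**2 + py**2 + pz**2 + M_MUON**2)

print(invariant_mass(M_MUON, 0.0, 0.0, 0.0))   # 0.1057
print(invariant_mass(E_moving, px, py, pz))    # 0.1057 again
```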

Somehow, even though we drag this relationship through the mud by building an extended object out of a bulk of elementary particles that have decohered to the point of obeying the laws of classical physics, the algebra still has an interpretation in which it holds. I find it a great mystery to ask ‘why?’, and that has proven to be an exceedingly difficult question to answer.

Starting from an algebraic relationship that holds in classical physics and finding a quantum interpretation is the story modern physics students are told about the origins of quantum mechanics. The time-independent Schrodinger equation for a non-relativistic quantum system is obtained by taking the system’s classical energy equation, called the Hamiltonian (a slightly less complicated cousin of the Einstein expression above), and turning it into an operator on functions over configuration space by substituting

p \rightarrow -i \hbar \frac{\partial}{\partial x}

and then acting on a “wave function” with the resulting operator. This is a boiled-down formulation of the famous canonical quantization prescribed by Dirac. To build a relativistic theory, Dirac applied the same idea, but starting from the Einstein relation above instead. There were some issues with doing so, as it seemed to push the idea of negative probabilities, but in the end the approach turned out to be correct.
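As a concrete one-dimensional, free-particle instance of that recipe: take the classical energy equation E = p^2 / 2m, make the substitution above, and let the resulting operator act on a wave function \psi(x), which gives

-\frac{\hbar^2}{2m} \frac{\partial^2 \psi(x)}{\partial x^2} = E \, \psi(x)

the time-independent Schrodinger equation for a free particle; a potential term V(x)\psi(x) joins the left-hand side when the particle isn’t free.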

Now I will go into some speculation. First things first, I am not the most competent physicist at quantum field theory, so take everything I say with a grain of salt. But as far as I can tell, the answer to the unreasonable effectiveness of algebraic relationships has to do with symmetry. Basically, the equation above is the expression for making a scalar quantity from a 4-vector (if one uses the Minkowski metric, which amounts to requiring the theory of vectors to be invariant under Lorentz transformations).

Now, doing such a thing does not bother me. There are good arguments for why all physical quantities should be describable in terms of tensor fields. So I suppose that if a rank-1 tensor exists in our theory, then we are basically forced to have a ‘representation’ of the Einstein relation somewhere that has some meaning, since the scalar made by the operation above (the square of the time component minus the squares of the spatial components) is a number that can be recovered from any reference frame. But why should that be enough to produce the Dirac equation? Look how abstract that all got; is there no simpler explanation for why we can go backwards here?

Maybe this should have been a post titled “Why the heck does canonical quantization work?”