Electrodynamics (maps)
The term electrodynamics here refers to Quantum Electrodynamics (QED), the relativistic quantum field theory of electromagnetism.
This section refers to wiki page-22 of gist section-18, which is inherited from gist section-113 by prime spin-31 and span- with the partitions as below.
In essence, it describes how light and matter interact and is the first theory where full agreement between quantum mechanics and special relativity is achieved.
Basic Transformation
The first formulation of a quantum theory describing radiation and matter interaction is attributed to British scientist Paul Dirac.
Dirac described the quantization of the electromagnetic field as an ensemble of harmonic oscillators with the introduction of the concept of creation and annihilation operators of particles.
Based on Bethe’s intuition and fundamental papers on the subject by Shin’ichirō Tomonaga,[16] Julian Schwinger,[17][18] Richard Feynman[1][19][20] and Freeman Dyson,[21][22] it was finally possible to get fully covariant formulations that were finite at any order in a perturbation series of quantum electrodynamics.
The key components of Feynman’s presentation of QED are three basic actions:
- A photon goes from one place and time to another place and time.
- An electron goes from one place and time to another place and time.
- An electron emits or absorbs a photon at a certain place and time.
These actions are represented in the form of visual shorthand by the three basic elements of diagrams: a wavy line for the photon, a straight line for the electron and a junction of two straight lines and a wavy one for a vertex representing emission or absorption of a photon by an electron. (Wikipedia)
QED is built from the above three building blocks, using probability amplitudes to calculate the probability of any such complex interaction.
It turns out that the basic idea of QED can be communicated while assuming that the square of the total of the probability amplitudes mentioned above (P(A to B), E(C to D) and j) acts just like our everyday probability (a simplification made in Feynman’s book). Later on, this will be corrected to include specifically quantum-style mathematics, following Feynman.
The basic rules of probability amplitudes that will be used are:
- If an event can occur via a number of indistinguishable alternative processes (a.k.a. “virtual” processes), then its probability amplitude is the sum of the probability amplitudes of the alternatives.
- If a virtual process involves a number of independent or concomitant sub-processes, then the probability amplitude of the total (compound) process is the product of the probability amplitudes of the sub-processes.
The indistinguishability criterion in (a) is very important: it means that there is no observable feature present in the given system that in any way “reveals” which alternative is taken. In such a case, one cannot observe which alternative actually takes place without changing the experimental setup in some way (e.g. by introducing a new apparatus into the system). (Wikipedia)
They are related to our everyday ideas of probability by the simple rule that the probability of an event is the square of the length of the corresponding amplitude arrow.
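The two amplitude rules and the squared-length rule can be sketched directly with Python's complex numbers; the amplitudes below are invented values for illustration only:

```python
import cmath

# Hypothetical amplitudes for two indistinguishable paths from a source
# to a detector; the lengths and angles are made up for illustration.
amp_path_1 = cmath.rect(0.6, 0.0)        # arrow of length 0.6, angle 0
amp_path_2 = cmath.rect(0.6, cmath.pi)   # arrow of length 0.6, angle 180 deg

# Rule (a): indistinguishable alternatives -> ADD the amplitude arrows.
total = amp_path_1 + amp_path_2
print(f"quantum probability:   {abs(total) ** 2:.3f}")  # arrows cancel here

# Distinguishable alternatives would instead add the probabilities:
classical = abs(amp_path_1) ** 2 + abs(amp_path_2) ** 2
print(f"classical probability: {classical:.3f}")

# Rule (b): independent sub-processes -> MULTIPLY the amplitudes.
compound = amp_path_1 * amp_path_2
print(f"compound arrow length: {abs(compound):.3f}")
```

With these particular arrows the quantum alternatives interfere destructively (probability near zero), while the classical sum of probabilities does not, which is exactly why the indistinguishability criterion matters.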
Feynman replaces complex numbers with spinning arrows, which start at emission and end at detection of a particle.
- The sum of all resulting arrows gives a final arrow whose length squared equals the probability of the event.
- In this diagram, light emitted by the source S can reach the detector at P by bouncing off the mirror (in blue) at various points.
- Each one of the paths has an arrow associated with it (whose direction changes uniformly with the time taken for the light to traverse the path).
- To correctly calculate the total probability for light to reach P starting at S, one needs to sum the arrows for all such paths.
The graph below depicts the total time spent to traverse each of the paths above.
Feynman avoids exposing the reader to the mathematics of complex numbers by using a simple but accurate representation of them as arrows on a piece of paper or screen.
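Those arrows are just complex phases. A minimal numerical sketch of the mirror experiment (the geometry, frequency, and sampling are all invented for illustration) shows why paths reflecting near the least-time point dominate the sum:

```python
import cmath
import math

# Hypothetical setup: source S and detector P above a flat mirror (y = 0);
# we sample candidate reflection points x along the mirror.
S = (-1.0, 1.0)   # (x, y) of the source
P = (1.0, 1.0)    # (x, y) of the detector
omega = 60.0      # arbitrary angular frequency; sets how fast arrows spin

def path_time(x):
    """Travel time S -> mirror point (x, 0) -> P, with speed of light = 1."""
    d1 = math.hypot(x - S[0], S[1])
    d2 = math.hypot(x - P[0], P[1])
    return d1 + d2

def arrow_sum(points):
    """Sum one unit arrow per path; the arrow angle is omega * travel time."""
    return sum(cmath.exp(1j * omega * path_time(x)) for x in points)

n = 2001
all_points = [-3 + 6 * k / (n - 1) for k in range(n)]
centre = [x for x in all_points if abs(x) < 0.5]   # near the least-time path
edges  = [x for x in all_points if abs(x) >= 2.5]  # far from it

# Near the stationary (least-time) point the travel time barely changes,
# so the arrows point the same way and add coherently; at the edges the
# arrows spin rapidly from path to path and nearly cancel.
print(abs(arrow_sum(centre)) / len(centre))   # large per-arrow average
print(abs(arrow_sum(edges)) / len(edges))     # small per-arrow average
```

This is the stationary-phase picture behind the figure: almost all of the final arrow comes from paths near the classical reflection point.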
Mathematically, QED is an abelian gauge theory with the symmetry group U(1), defined on Minkowski space (flat spacetime). The gauge field, which mediates the interaction between the charged spin-1/2 fields, is the electromagnetic field. The QED Lagrangian for a spin-1/2 field interacting with the electromagnetic field in natural units gives rise to the QED Action
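The Lagrangian itself is not written out above; its standard textbook form, for a Dirac field ψ of mass m and charge e coupled to the photon field A_μ (up to sign conventions for e), is:

```latex
\mathcal{L}_{\mathrm{QED}}
  = \bar{\psi}\,\bigl(i\gamma^{\mu}D_{\mu} - m\bigr)\,\psi
  - \tfrac{1}{4}\,F_{\mu\nu}F^{\mu\nu},
\qquad
D_{\mu} = \partial_{\mu} + ieA_{\mu},
\qquad
F_{\mu\nu} = \partial_{\mu}A_{\nu} - \partial_{\nu}A_{\mu},
\qquad
S_{\mathrm{QED}} = \int \mathrm{d}^{4}x\;\mathcal{L}_{\mathrm{QED}}.
```

Here D_μ is the gauge-covariant derivative and F_μν the electromagnetic field strength; the action S is what enters the path integral discussed later.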
Finally, one has to compute P(A to B) and E(C to D) corresponding to the probability amplitudes for the photon and the electron respectively.
Subsequent Theories
QED has served as the model and template for all subsequent quantum field theories. One such subsequent theory is Quantum Chromodynamics (QCD).
Quantum Chromodynamics (QCD for short) is the field that studies the strong force between quarks. Just as QED (Quantum Electrodynamics) studies the electromagnetic force on the basis of quantum field theory, so does QCD, and we will find many similarities in applying fields, waves, interactions and how the force comes about. However, many of the processes in QCD cannot be calculated exactly, so it is not as advanced as QED.
The three kinds of charge in QCD (as opposed to one in quantum electrodynamics or QED) are usually referred to as "color charge" by loose analogy to the three kinds of color (red, green and blue) perceived by humans.
Other than this nomenclature, the quantum parameter "color" is completely unrelated to the everyday, familiar phenomenon of color.
In the 1980s, scientists discovered that a proton’s three valence quarks (red, green, blue) account for only a fraction of the proton’s overall spin. More recent measurements have revealed that gluons (yellow corkscrews) contribute as much as or possibly more than the quarks. (Brookhaven National Laboratory)
Since the theory of electric charge is dubbed "electrodynamics", the Greek word χρῶμα (chrōma, "color") is applied to the theory of color charge, "chromodynamics".
By 1947, physicists believed that they had a good understanding of what the smallest bits of matter were. There were electrons, protons, neutrons, and photons (the components that make up the vast part of everyday experience such as atoms and light), along with a handful of unstable (i.e., radioactively decaying) exotic particles needed to explain cosmic ray observations, such as pions, muons and the hypothesized neutrino.
- In addition, the discovery of the positron suggested there could be anti-particles for each of them.
- It was known a “strong interaction” must exist to overcome electrostatic repulsion in atomic nuclei.
- Not all particles are influenced by this strong force, but those that are were dubbed “hadrons”, now further classified as mesons (middle mass) and baryons (heavyweight).
- But the discovery of the neutral kaon in late 1947 and the subsequent discovery of a positively charged kaon in 1949 extended the meson family in an unexpected way and in 1950 the lambda particle did the same thing for the baryon family.
- These particles decay much more slowly than they are produced, a hint that there are two different physical processes involved. This was first suggested by Abraham Pais in 1952.
- In 1953, Murray Gell-Mann and a collaboration in Japan, Tadao Nakano with Kazuhiko Nishijima, independently suggested a new conserved value now known as “strangeness” during their attempts to understand the growing collection of known particles.[4][5][b]
- The trend of discovering new mesons and baryons would continue through the 1950s as the number of known “elementary” particles ballooned. Physicists were interested in understanding hadron-hadron interactions via the strong interaction.
- The concept of isospin, introduced in 1932 by Werner Heisenberg shortly after the discovery of the neutron, was used to group some hadrons together into “multiplets” but no successful scientific theory as yet covered the hadrons as a whole.
This was the beginning of a chaotic period in particle physics that has become known as the “particle zoo” era. The eightfold way represented a step out of this confusion and towards the quark model, which proved to be the solution (Wikipedia).
The hadrons were sorted into groups having similar properties and masses using the eightfold way, invented in 1961 by Gell-Mann[11].
The pseudoscalar meson octet. Particles along the same horizontal line share the same strangeness, s, while those on the same left-leaning diagonals share the same charge, q (given as multiples of the elementary charge).
All the theories describing fundamental interactions are renormalizable, except gravitation, whose quantum counterpart is only conjectural and presently under very active research.
Renormalization was first developed in quantum electrodynamics (QED) to make sense of infinite integrals in perturbation theory. Initially viewed as a suspect provisional procedure even by some of its originators, renormalization eventually was embraced as an important and self-consistent actual mechanism of scale physics in several fields of physics and mathematics. Despite his later skepticism, it was Paul Dirac who pioneered early renormalization ideas, in his 1934 treatment of vacuum polarization.
Via the 11 partitions as the lexer and 13 frames as the parser we do a recombination to build the grammar in 6 periods.
Renormalizability has become an essential criterion for a quantum field theory to be considered as a viable one.
Renormalization is a collection of techniques in quantum field theory, statistical field theory, and the theory of self-similar geometric structures, that are used to treat infinities arising in calculated quantities by altering values of these quantities to compensate for effects of their self-interactions.
- But even if no infinities arose in loop diagrams in quantum field theory, it could be shown that it would be necessary to renormalize the mass and fields appearing in the original Lagrangian.[1]
- For example, an electron theory may begin by postulating an electron with an initial mass and charge. In quantum field theory a cloud of virtual particles, such as photons, positrons, and others surrounds and interacts with the initial electron.
- Accounting for the interactions of the surrounding particles (e.g. collisions at different energies) shows that the electron-system behaves as if it had a different mass and charge than initially postulated.
Renormalization, in this example, mathematically replaces the initially postulated mass and charge of an electron with the experimentally observed mass and charge. (Wikipedia)
When recombination occurs, the prime 13 is forced to → 12, where the impact (Δ1) goes to 18. The tensor product of a triplet with an octet reduces to a deciquintuplet, an anti-sextet, and a triplet, appearing diagrammatically as a total of 24 states.
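The 24-state count can be checked with the standard dimension formula for an SU(3) irreducible representation with Dynkin labels (p, q); a quick sketch:

```python
def su3_dim(p, q):
    """Dimension of the SU(3) irrep with Dynkin labels (p, q):
    dim = (p + 1)(q + 1)(p + q + 2) / 2."""
    return (p + 1) * (q + 1) * (p + q + 2) // 2

# 3 (x) 8 = 15 (+) 6bar (+) 3 -- the decomposition referred to above.
triplet     = su3_dim(1, 0)   # 3
octet       = su3_dim(1, 1)   # 8
deciquintet = su3_dim(2, 1)   # 15
anti_sextet = su3_dim(0, 2)   # 6, the conjugate of the sextet (2, 0)

assert triplet * octet == deciquintet + anti_sextet + triplet
print(triplet * octet)  # -> 24
```

Both sides of the decomposition count the same 24 states, as they must.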
The special unitary group SU is the group of unitary matrices whose determinant is equal to 1.[1] This set is closed under matrix multiplication.
- All transformations characterized by the special unitary group leave norms unchanged. The SU(3) symmetry appears in the light quark flavour symmetry (among up, down, and strange quarks) dubbed the Eightfold Way.
- The same group acts in quantum chromodynamics on the colour quantum numbers of the quarks that form the fundamental (triplet) representation of the group.
- The group SU(3) is a subgroup of group U(3), the group of all 3×3 unitary matrices. The unitarity condition imposes nine constraint relations on the total 18 degrees of freedom of a 3×3 complex matrix.
- Thus, the dimension of the U(3) group is 9. Furthermore, multiplying a U by a phase e^{iφ} leaves the norm invariant. Thus U(3) can be decomposed into a direct product U(1) × SU(3)/Z3.
Because of this additional constraint, SU(3) has dimension 8. (Wikipedia)
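These counting statements are easy to verify numerically. The pure-Python sketch below builds a unitary 3×3 matrix by Gram-Schmidt orthonormalization of random complex rows, rescales it to unit determinant, and checks that it preserves norms (the random seed and test vector are arbitrary choices):

```python
import random

def dot(u, v):
    """Hermitian inner product <u, v> = sum_i conj(u_i) * v_i."""
    return sum(a.conjugate() * b for a, b in zip(u, v))

def gram_schmidt(rows):
    """Orthonormalize complex row vectors; the result is a unitary matrix."""
    basis = []
    for v in rows:
        for b in basis:
            c = dot(b, v)
            v = [x - c * y for x, y in zip(v, b)]
        norm = abs(dot(v, v)) ** 0.5
        basis.append([x / norm for x in v])
    return basis

def det3(m):
    """Determinant of a 3x3 matrix given as nested lists."""
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

random.seed(1)  # arbitrary seed, for reproducibility only
rows = [[complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(3)]
        for _ in range(3)]

U = gram_schmidt(rows)   # unitarity: 9 constraints on 18 real parameters
phase = det3(U)          # det(U) is a pure phase e^{i*phi}
SU = [[x / phase ** (1 / 3) for x in row] for row in U]  # rescale: det = 1

# Special unitary transformations leave norms unchanged.
v = [1.0, 2.0 - 1.0j, 0.5j]                      # arbitrary test vector
SUv = [sum(SU[i][j] * v[j] for j in range(3)) for i in range(3)]
print(abs(det3(SU) - 1) < 1e-9)
print(abs(abs(dot(SUv, SUv)) ** 0.5 - abs(dot(v, v)) ** 0.5) < 1e-9)
```

The det = 1 condition is the one extra constraint that brings the 9 parameters of U(3) down to the 8 of SU(3).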
So then a logical explanation is needed. How can quantum electrodynamics (QED) be so successful when it is clearly not a complete theory?
Quantum Electrodynamics (QED) is a fundamental theory of nature and also the most accurate one. Since 1947 it has been tested to ever increasing precision. The highest precision measurement now matches the theoretical prediction to 12 significant digits.
- The QED theory can best be understood as a theory of the simplest natural system interacting via electromagnetic interaction - one electron interacting with its own electromagnetic field. In addition to the quantum state of the electron interacting electromagnetically, the theory also provides the quantum state of the electromagnetic field.
- In QED or any other quantum field theory one deals with fields. A field is different from a particle in the sense that unlike the particle which has finite degrees of freedom, a field has infinite degrees of freedom.
- QED consists of 2 quantum fields - the quantum field of the electron and the quantum field of the photon. For a quantum field, the behavior of the field is governed by Heisenberg’s uncertainty principle instead of Newtonian dynamics.
- Any particle is a quantized excitation of its unique field. For example, an electron is a quantized excitation of the electron’s quantum field. Similarly, a photon is a quantized excitation of the quantum electromagnetic field. All such excitations always lead to point particles - particles with zero size.
- The word quantized means that the mass energy and spin of the particle are always quantized. Spin is an entirely quantum mechanical phenomenon, with no classical analogue. Mass energy, by Einstein’s special theory of relativity, is the energy corresponding to the mass of a body via the famous mass-energy relationship. The mass energy of an electron is about 0.5 million electron volts, whereas the mass energy of a photon is zero since it is a massless particle.
- QED allows one to take into account, in a quantitatively concrete way, the interaction between these two fields. This is the most important feature of the theory and also the most complex one to understand and calculate. The problem arises from the fact that, since an electron is a point particle, from a purely Newtonian point of view its self-energy due to its interaction with its own electromagnetic field should become infinite. Since no physical quantity can have an infinite value, this is a major problem.
- It is successfully resolved in QED by a highly sophisticated and extremely complex mathematical procedure known as renormalization, which makes the theory free of infinities. After renormalization, QED provides extremely precise values of physically measurable quantities, for example the magnetic dipole moment of the electron.
- The basic equation of QED, known as the path integral, is far too complex to solve, and to this day no exact solution of this equation has ever been found. But that doesn’t mean the equation has not been solved at all: it is possible to solve the complex path integral of QED by transforming it into an infinite series in such a way that the sum of all its terms gives the exact value of the integral.
Each term represents an increasing number of Feynman loops: the first term has zero loops, the second term contains one loop, and so on. (Quora)
It is a measure of the complexity of QED that since 1948 only the first six terms have been computed. The last of these has been calculated only partially, and even that required the most powerful computers in the world.
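The flavour of this perturbation series can be conveyed with the electron's anomalous magnetic moment a_e = (g−2)/2, expanded in powers of α/π. The sketch below truncates the series after three loops and uses an approximate value of α, so it is an illustration of the expansion, not a state-of-the-art evaluation:

```python
import math

# Electron anomalous magnetic moment as a perturbation series in alpha/pi.
# The 1-, 2- and 3-loop coefficients below are the well-known values
# (Schwinger; Petermann/Sommerfield; Laporta/Remiddi); 4- and 5-loop
# terms are omitted, so the result is only good to roughly 1e-10.
alpha = 1 / 137.035999          # fine-structure constant (approximate)
x = alpha / math.pi

coeffs = [0.5, -0.328478965, 1.181241456]   # loop order 1, 2, 3
a_e = sum(c * x ** (n + 1) for n, c in enumerate(coeffs))

print(f"a_e ~ {a_e:.9f}")   # measured value: 0.001159652...
```

Even three terms reproduce the measured value to about eight decimal places, which is why each successive loop order, despite its enormous computational cost, is worth computing.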
Quantum Triviality Issues
If the only resulting value of the renormalized charge is zero, the theory is said to become a "trivial" theory of noninteracting free particles.
This phenomenon is referred to as quantum triviality. Strong evidence supports the idea that a field theory involving only a scalar Higgs boson is trivial in four spacetime dimensions,[1][2] but the situation for realistic models including other particles in addition to the Higgs boson is not known in general. Nevertheless, because the Higgs boson plays a central role in the Standard Model of particle physics, the question of triviality in Higgs models is of great importance. (Wikipedia)
From them, computations of probability amplitudes are straightforwardly given. An example is Compton scattering, with an electron and a photon undergoing elastic scattering.
Given a Model, MARTY may compute symbolically and automatically theoretical quantities. First, Feynman rules are derived.
MARTY is a code generator. Analytical expressions, squared amplitudes or Wilson coefficients are converted into C++ code in a self-contained library compiled independently of MARTY. This code can therefore be used for numerical evaluation in different scenarios to perform a phenomenological analysis. (marty-manual.pdf)
The coupling constant runs to infinity at finite energy, signalling a Landau pole. Quantum electrodynamics also leads to predictions beyond perturbation theory.
Feynman was concerned that all field theories known in the 1960s had the property that the interactions become infinitely strong at short enough distance scales. This property, called a Landau pole, made it plausible that quantum field theories were all inconsistent. In 1974, Gross, Politzer and Wilczek showed that another quantum field theory, quantum chromodynamics, does not have a Landau pole. Feynman, along with most others, accepted that QCD was a fully consistent theory.
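The Landau pole can be illustrated with the one-loop running of the QED coupling for a single charged fermion. This is a textbook estimate, not a full Standard Model computation; with all charged fermions included the coupling runs faster (reaching about 1/128 at the Z mass), but the qualitative picture is the same:

```python
import math

# One-loop QED running with one charged fermion of mass m:
#   alpha(Q) = alpha(m) / (1 - (2*alpha(m)/(3*pi)) * ln(Q/m))
# The denominator vanishes at the Landau pole Q_L = m * exp(3*pi / (2*alpha)).
alpha_m = 1 / 137.035999     # coupling at the electron mass scale
m_e = 0.000511               # electron mass in GeV

def alpha_run(Q):
    """Running coupling at scale Q (GeV), valid below the Landau pole."""
    denom = 1 - (2 * alpha_m / (3 * math.pi)) * math.log(Q / m_e)
    return alpha_m / denom

log_pole = 3 * math.pi / (2 * alpha_m)   # ln(Q_L / m_e), roughly 646
print(f"coupling at 100 GeV: {alpha_run(100.0):.5f}")
print(f"Landau pole at about m_e * e^{log_pole:.0f} GeV")
```

The pole sits at a fantastically high scale, which is why QED works so well in practice even though it is not well defined at arbitrarily high energy.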
In the presence of very strong electric fields, it predicts that electrons and positrons will be spontaneously produced, so causing the decay of the field.
The Schrödinger-Pauli theory of electrons explicitly considers the spin moment of the electrons, and therefore goes beyond the Schrödinger theory description of spinless electrons.
- As a consequence of the electrons possessing a spin moment, the Schrödinger-Pauli theory Hamiltonian is derived non-relativistically via the Feynman kinetic energy operator. In this chapter, the Schrödinger-Pauli theory of electrons in the presence of static and time-dependent electromagnetic fields is described from the new perspective of the individual electron via the corresponding ‘Quantal Newtonian’ First and Second Laws.
- These laws are a description in terms of ‘classical’ fields experienced by each electron, the fields arising from sources that are quantum-mechanical expectation values of Hermitian operators taken with respect to the system wave function. In the temporal case (the Second Law), each electron experiences an external field comprised of the Coulomb and Lorentz fields, and an internal field whose components are representative of electron correlations due to the Pauli principle and Coulomb repulsion, kinetic effects, the electron density, and an internal magnetic field.
- The response of the electron is described by a field representative of the physical current density which is a sum of its paramagnetic, diamagnetic and magnetization components. The First Law, descriptive of the stationary-state theory, constitutes a special case. The Schrödinger-Pauli theory is generalized such that the Hamiltonian operator is proved to be an exactly known universal functional of the wave function.
- This then shows the stationary-state and time-dependent Schrödinger-Pauli equations to be intrinsically self-consistent. To facilitate the understanding of this new description and of proofs within it, further relevant aspects of the stationary-state Schrödinger theory of spinless electrons in an electromagnetic field are discussed.
- The Hamiltonian operator, as obtained by the correspondence principle, is expressed in terms of operators representative of the gauge invariant properties of the electronic density and physical current density. It is also written so as to explicitly show the existence of the Lorentz force via the corresponding operator. Thus, with any scalar potential representative of external electrostatic forces, the Hamiltonian can now be seen to explicitly encompass both the external Coulomb and Lorentz forces.
Finally, it is proved that the stationary state wave function is a functional of a gauge function. (As will be proved in a future chapter, for a uniform magnetic field, the wave function is also a functional of the gauge invariant ground state density and physical current density). The wave function is thus ensured to be gauge variant.
The absence of any new particles beyond the Standard Model (SM) in high-energy collisions at the LHC highlights the need to probe the SM in low-energy experiments.
flavio is an open source tool for phenomenological analyses in flavour physics and other precision observables in the Standard Model and beyond. It consists of a library to compute predictions for a plethora of observables in quark and lepton flavour physics and electroweak precision tests, a database of experimental measurements of these observables, a statistics package that allows one to construct Bayesian and frequentist likelihoods, and convenient plotting and visualization routines. (flavio - pdf)
From a modern perspective, we say that QED is not well defined as a quantum field theory to arbitrarily high energy.
The entanglement implies that there remains a connection between the photon and the emitting atom after emission even in very strong fields.
Despite the conceptual clarity of this Feynman approach to QED, almost no early textbooks follow him in their presentation. When performing calculations, it is much easier to work with the Fourier transforms of the propagators.
- Experimental tests of quantum electrodynamics are typically scattering experiments. In scattering theory, particles’ momenta rather than their positions are considered, and it is convenient to think of particles as being created or annihilated when they interact. Feynman diagrams then look the same, but the lines have different interpretations.
- The electron line represents an electron with a given energy and momentum, with a similar interpretation of the photon line. A vertex diagram represents the annihilation of one electron and the creation of another together with the absorption or creation of a photon, each having specified energies and momenta.
Using Wick’s theorem on the terms of the Dyson series, all the terms of the S-matrix for quantum electrodynamics can be computed through the technique of Feynman diagrams. The rules for drawing the diagrams follow from the three basic elements introduced above.
An argument by Freeman Dyson shows that the radius of convergence of the perturbation series in QED is zero. Here we use observables of the Standard Model.
New physics effects are parameterised as Wilson coefficients of dimension-six operators in the weak effective theory below the electroweak scale, or the Standard Model EFT above it. The automatically generated tables below list all observables currently implemented in flavio. See also the notes on conventions at the bottom.
- W± decays
  - Hadronic decays (1 observable)
  - Leptonic decays (12 observables)
- Z° decays
  - FCNC decays (3 observables)
  - Flavour conserving decays (51 observables)
- τ lepton decays
  - Hadronic tree-level decays (2 observables)
  - LFV decays (16 observables)
  - Leptonic tree-level decays (2 observables)
- b hadron decays
  - FCNC decays (860 observables)
  - Leptonic tree-level decays (6 observables)
  - Lifetimes (1 observable)
  - Non-leptonic decays (2 observables)
  - Semi-leptonic tree-level decays (686 observables)
- c hadron decays
  - Leptonic tree-level decays (6 observables)
  - Semi-leptonic tree-level decays (52 observables)
- e+ e− scattering
  - VV (2 observables)
- s hadron decays
  - FCNC decays (8 observables)
  - Leptonic tree-level decays (6 observables)
  - Non-leptonic decays (1 observable)
  - Semi-leptonic tree-level decays (18 observables)
- Dipole moments
  - Atomic electric dipole moments (1 observable)
  - Lepton anomalous magnetic moments (3 observables)
  - Molecular energy shifts (3 observables)
  - Nucleon electric dipole moments (1 observable)
- Higgs production and decay
  - h→VV (30 observables)
  - h→ff (24 observables)
- Meson-antimeson mixing
  - B° B° mixing (5 observables)
  - Bs Bs mixing (5 observables)
  - D° D° mixing (8 observables)
  - K° K° mixing (1 observable)
- Nucleon decays
  - Beta decays (25 observables)
- Unflavoured meson decays
  - Leptonic tree-level decays (4 observables)
- contact interactions
  - pp→μν (1 observable)
  - pp→μ+ μ− (1 observable)
  - pp→eν (1 observable)
  - pp→e+ e− (1 observable)
- muon decays
  - LFV decays (5 observables)
- neutrino physics
  - scattering cross sections (1 observable)
- quarkonium lepton decays
  - P→ℓ+ ℓ− (16 observables)
  - S→ℓ+ ℓ− (24 observables)
  - V→ℓ+ ℓ− (135 observables)
  - V→ℓ+ ℓ−γ (120 observables)
We discuss how higher-spin operators and QED corrections alter the standard angular distribution used throughout the literature, potentially leading to differences between the method of moments and the likelihood fits. (flavio)
The problem is essentially that QED appears to suffer from quantum triviality issues. This is one of the motivations for embedding QED within a Grand Unified Theory.