The domain of the kime variable (\(k\)) is the complex plane, parameterized by Cartesian coordinates, conjugate-pair coordinates, or polar coordinates:
\[\mathbb{C} \cong \mathbb{R}^{2} = \left\{ z = (x, y) \mid x, y \in \mathbb{R} \right\} \cong\] \[\left\{ \left( z, \overline{z} \right) \mid z \in \mathbb{C},\ z = x + iy,\ \overline{z} = x - iy,\ x, y \in \mathbb{R} \right\} \cong\]
\[\{\ k = r\ e^{i\varphi} = r\left( \cos\varphi + i\sin\varphi \right)\ \mid \ r \geq 0,\ - \pi \leq \varphi < \pi\ \}.\]
The Wirtinger derivatives of a continuously differentiable function \(f(k)\) of kime \(k\), with respect to \(k\) and its conjugate \(\overline{k}\), are defined as first-order linear partial differential operators:
In Cartesian coordinates:
\(f'(z) = \frac{\partial f(z)}{\partial z} = \frac{1}{2}\left( \frac{\partial f}{\partial x} - i\frac{\partial f}{\partial y} \right)\) and \(f'\left( \overline{z} \right) = \frac{\partial f\left( \overline{z} \right)}{\partial\overline{z}} = \frac{1}{2}\left( \frac{\partial f}{\partial x} + i\frac{\partial f}{\partial y} \right).\)
In Conjugate-pair basis: \(df = \partial f + \overline{\partial}f = \frac{\partial f}{\partial z}dz + \frac{\partial f}{\partial\overline{z}}d\overline{z}.\)
In Polar kime coordinates:
\[f'(k) = \frac{\partial f(k)}{\partial k} = \frac{1}{2}\left( \cos\varphi\frac{\partial f}{\partial r} - \frac{1}{r}\sin\varphi\frac{\partial f}{\partial\varphi} - i\left( \sin\varphi\frac{\partial f}{\partial r} + \frac{1}{r}\cos\varphi\frac{\partial f}{\partial\varphi} \right) \right) = \frac{e^{- i\varphi}}{2}\left( \frac{\partial f}{\partial r} - \frac{i}{r}\ \frac{\partial f}{\partial\varphi} \right)\]
and
\[f'\left( \overline{k} \right) = \frac{\partial f\left( \overline{k} \right)}{\partial\overline{k}} = \frac{1}{2}\left( \cos\varphi\frac{\partial f}{\partial r} - \frac{1}{r}\sin\varphi\frac{\partial f}{\partial\varphi} + i\left( \sin\varphi\frac{\partial f}{\partial r} + \frac{1}{r}\cos\varphi\frac{\partial f}{\partial\varphi} \right) \right) = \frac{e^{i\varphi}}{2}\left( \frac{\partial f}{\partial r} + \frac{i}{r}\ \frac{\partial f}{\partial\varphi} \right).\]
Notes:
\[\left \| \begin{matrix} x = r\ \cos\varphi \\ y = r\ \sin\varphi \\ \end{matrix} \right . \] \[\left \| \begin{matrix} r = \sqrt{x^{2} + y^{2}} \\ \varphi = \operatorname{atan2}(y,x)\ \ \left(\text{the two-argument form of }\arctan\frac{y}{x}\right) \\ \end{matrix} \right. \]
\[\left\| \begin{matrix} \frac{\partial}{\partial x} = \cos\varphi\frac{\partial}{\partial r} - \frac{1}{r}\ \sin\varphi\frac{\partial}{\partial\varphi} \\ \frac{\partial}{\partial y} = \sin\varphi\frac{\partial}{\partial r} + \frac{1}{r}\ \cos\varphi\frac{\partial}{\partial\varphi} \\ \end{matrix} \right. \] see Korn GA, Korn TM. Mathematical handbook for scientists and engineers: definitions, theorems, and formulas for reference and review, Courier Corporation; 2000.
\[(x,y) \rightarrow \left( \frac{1}{2}\left( z + \overline{z} \right),\ \ \ \frac{1}{2i}\left( z - \overline{z} \right) \right),\]
\[\frac{\partial}{\partial z} = \frac{\partial}{\partial x}\frac{\partial x}{\partial z} + \frac{\partial}{\partial y}\frac{\partial y}{\partial z}.\]
Therefore, \(\frac{\partial x}{\partial z} = \frac{1}{2}\frac{\partial\left( z + \overline{z} \right)}{\partial z} = \frac{1}{2}\) and \(\frac{\partial y}{\partial z} = \frac{1}{2i}\frac{\partial\left( z - \overline{z} \right)}{\partial z} = \frac{1}{2i} = - \frac{i}{2}.\)
Similarly,
\(\frac{\partial x}{\partial\overline{z}} = \frac{1}{2}\frac{\partial\left( z + \overline{z} \right)}{\partial\overline{z}} = \frac{1}{2}\) and \(\frac{\partial y}{\partial\overline{z}} = \frac{1}{2i}\frac{\partial\left( z - \overline{z} \right)}{\partial\overline{z}} = - \frac{1}{2i} = \frac{i}{2}\).
This explains the Cartesian coordinate derivatives:
\(f'(z) = \frac{\partial f(z)}{\partial z} = \frac{1}{2}\left( \frac{\partial f}{\partial x} - i\frac{\partial f}{\partial y} \right)\) and \(f'\left( \overline{z} \right) = \frac{\partial f\left( \overline{z} \right)}{\partial\overline{z}} = \frac{1}{2}\left( \frac{\partial f}{\partial x} + i\frac{\partial f}{\partial y} \right)\).
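To make these Wirtinger operators concrete, here is a minimal numerical sketch in base R that approximates the Cartesian derivatives \(\frac{\partial f}{\partial z}\) and \(\frac{\partial f}{\partial\overline{z}}\) by central finite differences for the non-holomorphic test function \(f(z) = z\overline{z}\), whose exact derivatives are \(\overline{z}\) and \(z\). The helper wirtinger() and the step size h are illustrative choices, not part of the formal development above.
# A minimal numerical sketch (base R) of the Cartesian Wirtinger derivatives
# d/dz = (1/2)(d/dx - i d/dy) and d/dz_bar = (1/2)(d/dx + i d/dy),
# applied to the non-holomorphic test function f(z) = z * Conj(z) = x^2 + y^2,
# whose exact derivatives are df/dz = Conj(z) and df/dz_bar = z.
f <- function(z) z * Conj(z)
wirtinger <- function(f, z, h = 1e-6) {
  dfdx <- (f(z + h) - f(z - h)) / (2 * h)            # partial wrt x (real direction)
  dfdy <- (f(z + 1i * h) - f(z - 1i * h)) / (2 * h)  # partial wrt y (imaginary direction)
  list(d_dz    = 0.5 * (dfdx - 1i * dfdy),
       d_dzbar = 0.5 * (dfdx + 1i * dfdy))
}
z0 <- 0.7 + 0.4i
num <- wirtinger(f, z0)
c(num$d_dz, Conj(z0))      # both approximately 0.7 - 0.4i
c(num$d_dzbar, z0)         # both approximately 0.7 + 0.4i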
Below, we present the core principles of Wirtinger differentiation and integration:
\[\left| \begin{matrix} x = \frac{1}{2}\left( z + \overline{z} \right) \\ y = \frac{1}{2i}\left( z - \overline{z} \right) \\ \end{matrix}\ . \right.\ \]
Wirtinger differentiation: The Wirtinger derivative of \(f\), \(df_{z}\), is an \(\mathbb{R}\)-linear operator on the tangent space \(T_{z}\mathbb{C} \cong \mathbb{C}\), i.e., \(df_{z}\) is a differential 1-form on \(\mathbb{C}\). Any such \(\mathbb{R}\)-linear operator (\(A\)) on \(\mathbb{C}\) can be uniquely decomposed as \(A = B + C\), where \(B\) is its complex-linear part (i.e., \(B(iz) = iBz\)) and \(C\) is its complex-antilinear part (i.e., \(C(iz) = - iCz\)). Conversely, the two components are recovered as \(Bz = \frac{1}{2}\left( Az - iA(iz) \right)\) and \(Cz = \frac{1}{2}\left( Az + iA(iz) \right)\).
For the Wirtinger derivative, this duality of the decomposition of \(\mathbb{R}\)-linear operators characterizes the conjugate partial differential operators \(\partial\) and \(\overline{\partial}\). That is, for all differentiable complex functions \(f\mathbb{:C \rightarrow C}\), the derivative can be uniquely decomposed as \(\mathbf{d}\mathbf{f}_{\mathbf{z}}\mathbf{= \partial f +}\overline{\mathbf{\partial}}\mathbf{f}\), where \(\partial\) is its complex-linear part (\(\partial (iz) = i\,\partial z\)) and \(\overline{\partial}\) is its complex-antilinear part (\(\overline{\partial}(iz) = - i\,\overline{\partial}z\)).
Applying the operators \(\frac{\mathbf{\partial}}{\mathbf{\partial z}}\) and \(\frac{\mathbf{\partial}}{\mathbf{\partial}\overline{\mathbf{z}}}\) to the identity function (\(z \rightarrow z = x + iy\)) and its complex-conjugate (\(z \rightarrow \overline{z} = x - iy\)) yields the natural derivatives: \(dz = dx + idy\) and \(d\overline{z} = dx - idy\). For each point in \(\mathbb{C}\), \(\{ dz,d\overline{z}\}\) represents a conjugate-pair basis for the \(\mathbb{C}\) cotangent space, with a dual basis of the partial differential operators:
\[\left\{ \frac{\partial}{\partial z}\ ,\ \frac{\partial}{\partial\overline{z}} \right\}\ .\]
Thus, for any smooth complex functions \(f\mathbb{:C \rightarrow C}\),
\[df = \partial f + \overline{\partial}f = \frac{\partial f}{\partial z}dz + \frac{\partial f}{\partial\overline{z}}d\overline{z}\ .\]
Wirtinger integration: the path integral of \(f\) from \(z_{a}\) to \(z_{b}\) is the limit of generalized Riemann sums,
\[\lim_{\left| z_{i + 1} - z_{i} \right| \rightarrow 0}{\sum_{i = 1}^{n - 1}\left( f(z_{i})(z_{i + 1} - z_{i}) \right)} = \int_{z_{a}}^{z_{b}}{f\left( z \right)dz}\ .\]
This assumes the path is a polygonal arc joining \(z_{a}\) to \(z_{b}\), via \(z_{1} = z_{a},\ z_{2},\ z_{3},\ \cdots,\ z_{n} = z_{b}\), and we integrate the piecewise constant function \(f(z_{i})\) on the arc joining \(z_{i} \rightarrow z_{i + 1}\). The path \(z_{a} \rightarrow z_{b}\) needs to be specified, and the limit of the generalized Riemann sums, as \(n \rightarrow \infty\), yields a complex number representing the Wirtinger integral of the function over the path. Similarly, we can extend the classical area integrals, indefinite integrals, and the Laplacian:
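The generalized Riemann-sum definition can be verified numerically. The sketch below (base R, assuming a straight-line discretization of the path and an arbitrary number of nodes n) approximates \(\int_{z_a}^{z_b} z\, dz\) and compares it with the exact antiderivative value \((z_b^2 - z_a^2)/2\).
# A small sketch (base R) approximating the path integral of f(z) = z along the
# straight segment from z_a = 0 to z_b = 1 + 1i via the generalized Riemann sum
# sum_i f(z_i) * (z_{i+1} - z_i); the exact value is (z_b^2 - z_a^2)/2 = 1i.
f  <- function(z) z
za <- 0 + 0i; zb <- 1 + 1i
n  <- 10000
path <- za + (zb - za) * seq(0, 1, length.out = n)   # polygonal (here straight) arc
riemann <- sum(f(path[-n]) * diff(path))
c(riemann, (zb^2 - za^2) / 2)   # both approximately 0 + 1i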
Definite area integral: for \(\Omega \subseteq \mathbb{C}\), \(\int_{\Omega}^{}{f(z)dzd\overline{z}}\).
Indefinite integral: \(\int_{}^{}{f(z)dzd\overline{z}}\), \(df = \frac{\partial f}{\partial z}dz + \frac{\partial f}{\partial\overline{z}}d\overline{z}\),
The Laplacian in terms of conjugate-pair coordinates is \(\nabla^{2}f \equiv \mathrm{\Delta}f = 4\frac{\partial}{\partial z}\frac{\partial f}{\partial\overline{z}} = 4\frac{\partial}{\partial\overline{z}}\frac{\partial f}{\partial z}\ .\)
More details about Wirtinger calculus of differentiation and integration are provided later.
In general, the classical concepts of derivative and rate of change are well defined for a kime-process \(f(t,\varphi)\) with respect to the (positive) real variable time (\(t \equiv r = |\kappa| = |r\ e^{i\varphi}|\)). Depending on the context, we interchangeably use \(\varphi, \phi, \psi, \theta\) and other lowercase Greek letters to represent the kime-phases. The smooth time rate of change of the process \(f\) is explicated as the classical partial derivative, \(\frac{\partial f}{\partial r}\).
Above we showed that the partial derivatives of a kime-process in Cartesian kime coordinates are also well defined, using Wirtinger differentiation. However, in the polar representation, this strategy may not apply for the second variable (kime-phase), \(\varphi\), which could be interpreted as a distribution. In essence, \(\varphi\) represents an unobservable random sampling variable from a symmetric distribution compactly-supported on \(\lbrack - \pi,\pi)\), or a periodic distribution.
Above we showed that in polar kime coordinates, analytic functions can be differentiated as follows
\[f'(k) = \frac{\partial f(k)}{\partial k} = \frac{1}{2}\left( \cos\varphi\frac{\partial f}{\partial r} - \frac{1}{r}\sin\varphi\frac{\partial f}{\partial\varphi} - i\left( \sin\varphi\frac{\partial f}{\partial r} + \frac{1}{r}\cos\varphi\frac{\partial f}{\partial\varphi} \right) \right) = \frac{e^{- i\varphi}}{2}\left( \frac{\partial f}{\partial r} - \frac{i}{r}\ \frac{\partial f}{\partial\varphi} \right)\]
and
\[f'\left( \overline{k} \right) = \frac{\partial f\left( \overline{k} \right)}{\partial\overline{k}} = \frac{1}{2}\left( \cos\varphi\frac{\partial f}{\partial r} - \frac{1}{r}\sin\varphi\frac{\partial f}{\partial\varphi} + i\left( \sin\varphi\frac{\partial f}{\partial r} + \frac{1}{r}\cos\varphi\frac{\partial f}{\partial\varphi} \right) \right) = \frac{e^{i\varphi}}{2}\left( \frac{\partial f}{\partial r} + \frac{i}{r}\ \frac{\partial f}{\partial\varphi} \right).\]
The problem with computing and interpreting \(f'(k)\), and in particular \(\frac{\partial f}{\partial\varphi}\), is that \(f(r,\varphi)\) may not be analytic. This is especially interesting since one plausible spacekime interpretation of the enigmatic kime-phase is that \(\varphi\) indexes the intrinsic stochastic sampling from some symmetric distribution, which corresponds to repeated sampling, or multiple IID random observations. This suggests that \(\varphi\) does not vary smoothly, but is rather a chaotic quantity (random variable) following some symmetric distribution \(\Phi\), supported over \(\lbrack - \pi,\pi)\). In other words, the *kime-phase change* quantity \(\partial\varphi\) may be stochastic! Hence, we need to investigate approaches to:
Note that the differentiability of \(f(k)\) is not in question for the Cartesian representation of the kime domain, as the Wirtinger derivatives explicate the differentiation of a multi-variable complex function, e.g., \(f'(k),\ \forall k \equiv (x,y) \in \mathbb{C}\). Wirtinger differentiation provides a robust formulation even in the polar coordinate representation of the complex domain.
The problem arises in the spacekime-interpretation where we try to tie the analytical math formalism with the computational algorithms and the quantum-physics/statistical/experimental interpretation of the kime-phase as an intrinsically stochastic repeated sampling variability.
The following well-known lemma suggests that we can quantify the *distribution of the kime-phase changes* in the case when we merge the math formalism, computational modeling, and the statistical/experimental interpretation of the stochastic kime-phases.
We’ll use the standard convention where capital letters (\(\Gamma,\Phi,\Theta\)) denote random variables and their lower-case counterparts (\(\gamma,\phi,\theta\)) denote the corresponding values of these quantities. Let’s denote by \(P \equiv \Pr\) the probability functions with proper subindices indicating the specific corresponding random process.
Lemma: Suppose \(\phi,\ \theta \sim F_{\mathcal{D}}\) are IID random variables drawn from some distribution \(\mathcal{D}\), with CDF \(F_{\mathcal{D}}\) and PDF \(f_{\mathcal{D}}\). Then, the distribution of their difference \(\gamma = \phi - \theta \sim F_{\Gamma}\) is effectively given by the autocorrelation function
\[f_{\Gamma}(\gamma) = \ \ \int_{- \infty}^{\infty}{f_{\mathcal{D}}(\phi)f_{\mathcal{D}}(\phi - \gamma)d\phi}\ .\]
Proof: This lemma quantifies the phase-change distribution, not the actual phase-distribution. We’ll start with the most general case of two independent random variables, \(\Phi\bot\Theta\), following potentially different distributions, \(\Phi \sim F_{\Phi}\), \(\Theta \sim F_{\Theta}\).
Let's compute the cumulative distribution of the difference random variable \(\Gamma = \Phi - \Theta\). In practice, \(\Gamma\) corresponds to the instantaneous change in the kime-phase between two random repeated experiments. Note that due to the exchangeability property associated with IID sampling (see SOCR on Overleaf), the distribution of \(\Gamma = \Phi - \Theta\) is invariant with respect to the sequential order in which we record the pair of random observations. More specifically, if the random sample of kime-phases is \(\{\phi_{1},\ \phi_{2},\ \phi_{3},\ \cdots,\ \phi_{n},\ \cdots\}\), then for any index permutation \(\pi( \cdot )\), the distributions of
\[\Gamma_{1} = \Phi_{1} - \Phi_{2},\ \ \Gamma_{1}' = \Phi_{\pi(1)} - \Phi_{\pi(2)}\]
are the same, i.e., \(\Gamma_{1},\ \Gamma_{1}' \sim F_{\Gamma}\). Hence, the phase-change distribution, \(F_{\Gamma}\), is symmetric.
Let’s compute the CDF of \(\Gamma\), the difference between any two kime-phases,
\[F_{\Gamma}(\gamma) \equiv P_{\Gamma}(\Gamma \leq \gamma) = P_{\Gamma}(\Phi - \Theta \leq \gamma) = P_{\Phi}(\Phi \leq \gamma + \Theta)\ .\]
Jointly, \((\phi,\ \theta) \sim f_{(\Phi,\Theta)}\) and
\[\underbrace{\ \ F_{\Gamma}(\gamma)\ }_{difference\ CDF} = P_{\Phi}(\Phi \leq \gamma + \Theta) = \int_{- \infty}^{\infty}{\int_{- \infty}^{\gamma + \theta} {\underbrace{f_{\Phi,\Theta}(\phi,\ \theta)}_{joint\ PDF}\ d\phi d\theta}}\ \ \Longrightarrow\]
\[\underbrace{f_{\Gamma}(\gamma)}_{difference\ PDF} = \frac{d}{d\gamma}F_{\Gamma}(\gamma) = \frac{d}{d\gamma}\int_{- \infty}^{\infty}{\int_{- \infty}^{\gamma + \theta} {\underbrace{f_{\Phi,\Theta}(\phi,\ \theta)}_{joint\ PDF}\ d\phi d\theta}}\ .\]
For a fixed \(\theta_{o}\), we’ll make a variable substitution \(\phi = \nu + \theta_{o}\), so that \(d\phi = d\nu\). Hence,
\[f_{\Gamma}(\gamma) = \int_{- \infty}^{\infty}{\left( \frac{d}{d\gamma}\int_{- \infty}^{\gamma}{f_{\Phi,\Theta}(\nu + \theta,\ \theta)\ d\nu} \right)d\theta}\ .\]
Integrals are functions of their (lower and upper) limits and for any function \(g(\phi,\theta_{o})\),
\[\frac{d}{d\gamma}\int_{l(\gamma)}^{h(\gamma)}{g\left( \phi,\ \theta_{o} \right)\ d\phi} = g\left( h(\gamma),\ \theta_{o} \right)h'(\gamma) - g\left( l(\gamma),\ \theta_{o} \right)l'(\gamma).\]
In particular, \(\frac{d}{d\gamma}\int_{- \infty}^{\gamma}{g\left( \phi,\ \theta_{o} \right)\ d\phi} = g\left( \gamma,\ \theta_{o} \right)\).
Therefore, for any \(\theta\),
\[f_{\Gamma}(\gamma) = \int_{- \infty}^{\infty}{\underbrace{\left( \frac{d}{d\gamma}\int_{- \infty}^{\gamma}{f_{\Phi,\Theta}(\nu + \theta,\ \theta)\ d\nu} \right)}_{\theta\ is\ fixed}\ d\theta} = \int_{- \infty}^{\infty}{f_{\Phi,\Theta}(\gamma + \theta,\ \theta)d\theta}\ .\]
This derivation of the density of the kime-phase difference is rather general, \(f_{\Gamma}(\gamma) = \int_{- \infty}^{\infty}{f_{\Phi,\Theta}(\gamma + \theta,\ \theta)d\theta}\). It works for any bivariate process, whether \(\phi,\theta\) are independent or correlated, and whether or not they follow the same distribution.
In the special case when \(\phi,\ \theta \sim F_{\mathcal{D}}\) are IIDs, we can apply another change of variables transformation, \(\gamma + \theta = \phi\), to get \(d\theta = d\phi\) and
\[f_{\Gamma}(\gamma) = \int_{- \infty}^{\infty}{f_{\Phi,\Theta}(\phi,\phi - \gamma)d\phi}\ \ \underbrace{\ \ = \ \ }_{\phi\bot\theta}\ \ \int_{- \infty}^{\infty}{f_{\Phi}(\phi)\ f_{\Theta}(\phi - \gamma)\ d\phi} \underbrace{\ \ = \ \ }_{IIDs}\ \ \int_{- \infty}^{\infty}{f_{\mathcal{D}}(\phi)\ f_{\mathcal{D}}(\phi - \gamma)\ d\phi}\ .\]
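As a sanity check of this lemma, the following Monte Carlo sketch (base R, assuming a uniform kime-phase distribution on \([-\pi,\pi)\) purely for illustration) compares the empirical distribution of simulated phase differences with the numerically evaluated autocorrelation integral and with the known triangular closed form \(f_{\Gamma}(\gamma) = (2\pi - |\gamma|)/(4\pi^2)\).
# Monte Carlo sketch of the lemma for IID uniform kime-phases on [-pi, pi):
# the difference gamma = phi - theta should follow the triangular (autocorrelation) density
# f_Gamma(gamma) = (2*pi - |gamma|) / (4*pi^2) on (-2*pi, 2*pi).
set.seed(1234)
n     <- 1e5
phi   <- runif(n, -pi, pi)
theta <- runif(n, -pi, pi)
gamma <- phi - theta
# numeric autocorrelation integral: int f_D(phi) f_D(phi - gamma) d phi
fD      <- function(x) dunif(x, -pi, pi)
fGamma  <- function(g) integrate(function(p) fD(p) * fD(p - g), -pi, pi)$value
gGrid   <- seq(-2*pi, 2*pi, length.out = 200)
hist(gamma, breaks = 100, freq = FALSE, xlab = expression(gamma),
     main = "Kime-phase differences vs. autocorrelation density")
lines(gGrid, sapply(gGrid, fGamma), col = "red", lwd = 2)                    # lemma (numeric)
lines(gGrid, (2*pi - abs(gGrid)) / (4*pi^2), col = "blue", lwd = 2, lty = 2) # closed form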
In the most general case, we would like to compute the measure (i.e., size) of any Borel subset of the support \(\lbrack - \pi,\pi)\) of the distribution \(\mathcal{D}\). This will help with answering questions (2) and (3) from the above list. Of course, for any finite or countable Borel set, \(B = \left\{ \phi_{i} \right\}_{1}^{\alpha} \sim \mathcal{D}\), its measure will be trivial, \(\mu(B) = 0\). However, many regular (e.g., contiguous) subsets \(B' \subseteq \lbrack - \pi,\pi)\) will be non-trivial, i.e., \(0 < f_{\mathcal{D}}\left( B' \right) \leq 1\).
Recall that the entire observable universe is finite, since the age of the universe is \(\sim 13.8\) billion years. Due to the accelerating expansion of the universe, from any point in spacetime the radius and the full diameter of the entire visible universe are about 46 billion and 93 billion light-years, respectively; see article 1 and article 2. Hence, the *amount of observable data is always finite*. In statistical data-science terms, the number of observations and the sample sizes are always finite. However, theoretically, we can model them as infinitely increasing, as in the two fundamental limit theorems of probability theory, the central limit theorem (CLT) and the law of large numbers (LLN).
Now, let's assume that we have a sequence of randomly sampled kime-phases \(\left\{ \phi_{i} \right\}_{1}^{\alpha} \sim \mathcal{D}\), where \(\alpha < \infty\). In other words, assume we have observed a finite number of repeated measurements corresponding to multiple observations (corresponding to different kime-phase directions) acquired/recorded at the same time under identical experimental conditions. The phase-dispersion, i.e., the variability observed in the recorded measurements (e.g., \(\sigma_{(\phi - \theta)}\)), is directly related to the distribution of the kime-phase differences, \(F_{\Gamma} = F_{(\Phi - \Theta)}\).
Note that changes in spacetime (jointly in spacetime, or separately in space and time alone) always permit analytic calculations, since the presence of significant intrinsic spatiotemporal correlations induces smooth process dynamics over 4D Minkowski spacetime. However, this analyticity may be broken in the 5D spacekime, especially in the kime-phase dimension, since kime-phase indexing of observations is inherently stochastic, rather than smooth or analytic.
In our special bivariate case above, we can assume \(\phi \equiv \phi_{i},\ \ \theta \equiv \phi_{j} \sim F_{\mathcal{D}}\) form an IID pair drawn from an a priori kime-phase distribution \(\mathcal{D}\) and corresponding to identical experimental conditions. For simplicity, we only consider a pair of fixed kime-phase indices \(1 \leq i < j\). Since both kime-phase distributions coincide, \(\Phi \equiv \Theta \equiv \mathcal{D}\), the above Lemma shows that the PDF \(f_{\Gamma}(\gamma)\) of their difference (\(\gamma = \phi - \theta \equiv \phi_{i} - \phi_{j}\)) is the autocorrelation function
\[f_{\Gamma}(\gamma) = \int_{- \infty}^{\infty}{f_{\Phi}(\phi)f_{\Theta}(\phi - \gamma)d\phi} = \int_{- \infty}^{\infty}{f_{\mathcal{D}}(\phi)f_{\mathcal{D}} \left( \underbrace{\phi - \gamma}_{\theta} \right)d\phi} = \mathbb{E}(\phi,\ \theta) = \underbrace{R_{phase}(\phi_{i},\phi_{j})}_{autocorrelation}\ .\]
What if the kime-phase distribution is *infinitely supported*? How can we interpret the phase distribution \(\Phi\) over the finite bounded interval \(- \pi \leq \phi < \pi\)?
Does it make sense to interpret the phase distribution in terms of spherical coordinates as helical, instead of circular distribution?
Can we account for both compactly supported and infinitely supported (univariate) distributions by applying a renormalization (adjusting for self-interaction feedback) or a regularization? Clearly, there are many different (injective or bijective) transformations that map the real line onto \(\lbrack - \pi,\pi)\), or vice versa. For instance (a brief numerical check of two of these maps follows the examples below),
\[T_{o}:( - \pi,\pi)\ \overset{\text{bijective}}{\longrightarrow}\ \mathbb{R},\ \ T_{o}(x) = \frac{x}{\sqrt{\pi^{2} - x^{2}}},\]
\[T_{1}:\left\lbrack - \frac{\pi}{\sqrt{2}},\frac{\pi}{\sqrt{2}} \right)\rightarrow \mathbb{R},\ \ T_{1}(x) = \cot\left( \arccos\frac{x}{\sqrt{\pi^{2} - x^{2}}} \right), \] see Wolfram Alpha, cotan(arccos(x/Sqrt(Pi^2-x^2))),
\[T_{2}\mathbb{:R \rightarrow \lbrack -}\pi,\pi),\ \ T_{2}(x) = 2\arctan(x),\] one-to-one injective, see Wolfram Alpha, 2*arctan(x),
\[T_{3}\mathbb{:R \rightarrow}\lbrack - \pi,\pi),\ T_{3}(x) = \ \ \frac{2\pi}{1 + e^{x}} - \pi,\] one-to-one injective, see Wolfram Alpha, (2*Pi)/(1+exp(x))-Pi.
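A brief numerical check of two of these candidate maps (base R; the function names T2 and T3 simply mirror the formulas above) confirms that their ranges stay inside \((-\pi,\pi)\) and that both are strictly monotone, hence injective.
# Sketch: evaluate two of the candidate maps between R and [-pi, pi):
# T2(x) = 2*atan(x) and T3(x) = 2*pi/(1 + exp(x)) - pi. Both are monotone (hence injective)
# and their ranges stay inside (-pi, pi), so they can transport an infinitely supported
# phase distribution onto the bounded kime-phase support.
T2 <- function(x) 2 * atan(x)
T3 <- function(x) 2 * pi / (1 + exp(x)) - pi
x <- seq(-10, 10, length.out = 1001)
range(T2(x)); range(T3(x))          # both contained in (-pi, pi)
all(diff(T2(x)) > 0)                # TRUE: T2 strictly increasing
all(diff(T3(x)) < 0)                # TRUE: T3 strictly decreasing
curve(T2, -10, 10, col = "blue", ylim = c(-pi, pi), ylab = "phase")
curve(T3, -10, 10, col = "red", add = TRUE)
abline(h = c(-pi, pi), lty = 2)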
Does it make sense to denote the density function of the kime-phase differences \(\gamma = \phi - \theta\) by \(f_{\Gamma}(\gamma) = f_{\Gamma}(\phi_{i} - \phi_{j})\), or just by \(f_{\Gamma}(\gamma) = f_{\Gamma}(i - j)\)? In other words, can we drop the explicit phase reference and denote the density of the kime-phase differences simply by \(f_{\Gamma}(\gamma)\mathbb{= E}\left( \phi(i),\phi(j) \right)\mathbb{\equiv E}(i,j) = R_{phase}(i,j)\)?
At a given kime, \(k = te^{i\varphi}\), we only have an analytic representation of the distribution of the kime-phase difference, not an explicit analytic function estimate of the actual kime-phase change or its derivative (as the kime-phase is random). In this situation, how can we define the partial derivative (or a measure-theoretic/probabilistic rate of change) of the original process of interest, \(f'(k)\)? Specifically, even though we know how to compute \(\frac{\partial f}{\partial r}\), we also need to estimate \(\frac{\partial f}{\partial\varphi}\) while only having the probability density function of the kime-phase differences \(\Delta\varphi = \varphi_{i} - \varphi_{j}\), \(1 \leq i < j\). In other words, we can work with the *distribution* of \(\Delta\varphi\), but we cannot estimate a specific value for this rate of change, \(\Delta\varphi\).
Above, we discussed the notion of analyticity, which reflects functions \(f:\mathbb{C}\to \mathbb{C}\) whose Taylor series expansion about \(z_o\) converges to the functional value \(f(z)\) in some neighborhood, \(\forall z\in \mathcal{N}_{z_o}\subseteq \mathbb{C}\).
The (physical) measurability property is associated with the ability to precisely track the value of some physical observable of a quantum system. This typically involves an apparatus for measuring a specific property such that the system survives the acquisition process, and an immediate follow-up measurement of the same quantity would yield the same, or at least a highly stable, result. In reality, few experiments yield such measurements where the system being studied is physically unchanged and preserved by the measurement process itself. Consider the extreme example of a Geiger counter counting the number of positrons, or the simpler case of a photo-detector counting the number of photons in a single-mode cavity field. The tracked particles (photons or positrons) counted by the device (photo-detector or Geiger counter) are actually absorbed to create a pulse, a current, or an electrical impulse. While we count (track) the number of particles in the field, we transform the signal (particle dynamics) into some form of electrical data. After the measurement is completed, the cavity field is left in the vacuum state \(|0\rangle\), which has no memory of the state of the field before the experiment took place and the digital measurement was recorded. However, we may be able to redesign the experiment by using the observed outcome measurements (reflecting the number of particles detected in the cavity during a fixed time period) so that every time we measure the number of particles in the cavity, we maintain the same level of particles, \(n\). With such a feedback loop, the experimental procedure can maintain the cavity field in a constant state of \(n\) particles, i.e., we can assign the state \(|n\rangle\) to the cavity field.
The Von Neumann Measurement Postulate reflects the state of the system immediately after a measurement is made.
The Von Neumann measurement postulate may be expressed using projection operators \(\hat{P}_n = |q_n\rangle\langle q_n|\), where the state of the system after the measurement yields the result \(q_n\) is
\[\frac{\hat{P}_n|\psi\rangle}{\sqrt{\langle \psi |\hat{P}_n|\psi\rangle}}= \frac{\hat{P}_n|\psi\rangle}{\sqrt{|\langle {q}_n|\psi\rangle |^2}}\ .\]
Note that the denominator normalizes the state after the measurement to unity.
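The projection and renormalization in this postulate can be illustrated with a tiny base-R linear-algebra sketch; the 3-dimensional basis state \(|q_2\rangle\) and the state \(|\psi\rangle\) below are arbitrary illustrative choices, not tied to a specific physical system.
# Sketch of the Von Neumann projection postulate: project a normalized state |psi>
# onto the basis state |q_2> with P_2 = |q_2><q_2| and renormalize; the post-measurement
# state coincides with |q_2>, and the normalizing denominator equals |<q_2|psi>|.
q2  <- c(0, 1, 0)                                   # basis state |q_2> in a 3-dim example
P2  <- q2 %*% t(q2)                                 # projection operator P_2 = |q_2><q_2|
psi <- c(1, 2, 2) / 3                               # a normalized state |psi>
post <- (P2 %*% psi) / sqrt(as.numeric(t(psi) %*% P2 %*% psi))
post                                                # equals |q_2> = (0, 1, 0)
as.numeric(t(psi) %*% P2 %*% psi)                   # probability of outcome q_2: |<q_2|psi>|^2 = 4/9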
This postulate suggests that after performing a measurement and observing a particular outcome, if we immediately repeat the experiment we can expect to observe the same result, indicating the system is in the associated eigenstate. The fact that the outcome scalar value is stable upon repeated measurement indicates that the system is consistent and reliably yields the same instantaneous value after a measurement.
In statistical terms, the Von Neumann measurement postulate parallels the notion of repeated measurements corresponding to multiple (large) samples from the kime-phase distribution \(\{\phi_i\}_i \sim \Phi[-\pi,\pi)\). All measurable properties represent pooled sample statistics, such as the sample arithmetic average, which has an expected value equal to the phase distribution mean,
\[\mathbb{E}\left (\frac{1}{n}\sum_{i=1}^n {\phi_i}\right )=\mu_{\Phi}\ .\]
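The following short simulation sketch (assuming the circular package, which is also used in the kime-phase sampling simulation later in this section, and an arbitrary von Mises phase model with mean direction \(\mu=\pi/6\)) illustrates how the pooled sample (circular) mean of repeated kime-phase draws concentrates around the phase-distribution mean as the number of repeated measurements grows.
# Simulation sketch: repeated draws from a von Mises kime-phase distribution have sample
# circular means that concentrate around the distribution mean mu as n grows.
library(circular)
set.seed(1234)
mu <- pi/6
for (n in c(10, 100, 1000, 10000)) {
  phases <- rvonmises(n, mu = circular(mu), kappa = 5)
  cat(sprintf("n = %5d   circular sample mean = %.4f   (target mu = %.4f)\n",
              n, as.numeric(mean(phases)), mu))
}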
In TCIU Chapter 6 (Kime-Phases Circular distribution), we explore the agreement between the observed analyticity and Von Neumann measurability of real observables, on the one hand, and the stochastic quantum-mechanical nature of the enigmatic kime-phases, on the other.
This connection between quantum mechanical and relativistic properties is rooted in the fact that physical observable processes rely on phase-aggregators to pool quantum fluctuations. This process effectively smooths and denoises the actual quanta characteristics and yields highly polished classical physical measurements. This duality between quanta (single experimental data point) and relativistic observables (sample/distribution driven evidence) is related to the effort to define a kime operator whose eigenvalues are observable scalar measures of kime-phase aggregators (e.g., sample phase mean) and their corresponding eigenvectors are the state-vectors (e.g., system experimental states).
Let's expand on the notions of kime-phase observability, operator eigenspectra, and the kime operator. We'll attempt to draw direct parallels between quantum physics (statistical mechanics) and data science (statistical inference). Starting with a physical question (e.g., computing the momentum of a particle) or a statistical inference problem (e.g., estimating a population mean), we can run an experiment to collect data, which can be used to calculate the expectation value of \(X\), a physical observable (energy, momentum, spin, position, etc.) or another statistic (dispersion, range, IQR, 5th percentile, CDF, etc.),
\[ \mathbb E(X)= \sum_i x_i\,p(x_i)\ ,\] where \(\{x_i\}_i\) are the possible outcome values and \(\{p(x_i)\}_i\) are the corresponding outcome probabilities. If the outcomes are continuous, the expectation is naturally formulated in terms of an integral.
We can associate a vector \(|x_i\rangle\) to each of these outcome states \(x_i\), and since we assume IID sampling (random observations), we can consider orthonormal states, \(\langle x_i | x_j\rangle =\delta_{ij}\). As quantum mechanics and matrix algebra (linear models) are linear, a superposition of the outcome (solution) states \(\{|x_i\rangle \}_{i}\) represents another valid outcome solution \(|\psi \rangle =\sum_i {\alpha_i |x_i\rangle }\). By the Born rule postulate, the probability of observing a base state \(|x_i\rangle\) is \(p(x_i)=|\alpha_i|^2\), where the composite state \(|\psi \rangle\) is normalized, \[\sum_i {|\alpha_i|^2}=1\ .\]
By orthonormality and using Dirac notation, \[\alpha_i = \langle x_i|\psi \rangle,\ \alpha_i^* = \langle \psi|x_i \rangle,\ \forall i\ .\] By linearity, the expectation value of \(X\) is
\[\begin{align} \mathbb E(X) &= \sum_i {|\alpha_i |^2 x_i} = \sum_i {\alpha_i^* \alpha_i x_i} = \sum_i {\langle \psi|x_i\rangle \langle x_i|\psi\rangle x_i } \\ &=\langle \psi|\left( \underbrace{\sum_i x_i |x_i\rangle\langle x_i|}_{Operator,\ \hat{X}}\right)|\psi\rangle \equiv \langle \psi| \hat{X} |\psi\rangle \end{align} \ .\]
By the spectral theorem, any self-adjoint (Hermitian) matrix \(\hat{X}\) can be written as \[\hat{X} = \sum_i {\overbrace{\underbrace{|\lambda_i\rangle}_{eigenvectors} \langle \lambda_i|}^{Projection\ along\ |\lambda_i\rangle} \underbrace{\lambda_i}_{eigenvalues}}\ .\] The eigenvectors \(\{|\lambda_i\rangle\}_i\) form an orthonormal basis, suggesting that only linear combinations of these eigenvectors of \(\hat{X}\) give the observable outcomes of experimental measurements. For instance, all vectors \(|\gamma\rangle\) orthogonal to every \(|\lambda_i\rangle\) are annihilated. When a state \(|\gamma\rangle \perp |\lambda_i\rangle,\ \forall i\), the state cannot be written as a linear combination of the eigenvectors and it does not contribute to the expectation value.
The expectation value of an operator \(\hat {\mathcal{K}}\) corresponds to the average outcome of many repeated measurements of the observable associated with that operator,
\[\langle\hat {\mathcal{K}}\rangle_{\psi} \equiv \langle\hat {\mathcal{K}}\rangle =\langle\psi| \hat {\mathcal{K}}|\psi\rangle = \overbrace{\sum_{\alpha}{ \underbrace{\nu_{\alpha}\ }_{obs.value}\ \underbrace{\ |\langle \psi |\phi_{\alpha}\rangle|^2}_{(transition)probability}}} ^{weighted-average-of-outcomes }\ ,\] where \(\{|\phi_{\alpha}\rangle\}_{\alpha}\) is a complete set of eigenvectors for the observable operator \(\hat {\mathcal{K}}\), i.e., \(\hat {\mathcal{K}}|\phi_{\alpha}\rangle = \nu_{\alpha}| \phi_{\alpha}\rangle\).
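A tiny base-R linear-algebra sketch of this operator-expectation identity: we build \(\hat{X} = \sum_i x_i |x_i\rangle\langle x_i|\) in the standard orthonormal basis (the outcome values and the random state below are arbitrary) and verify that \(\langle \psi|\hat{X}|\psi\rangle\) equals the probability-weighted average \(\sum_i |\alpha_i|^2 x_i\).
# Sketch of <psi| X_hat |psi>: construct X_hat as a diagonal operator in the standard basis,
# pick a random normalized complex state, and compare the operator expectation with the
# weighted average of outcomes.
set.seed(1234)
xvals <- c(-1, 0, 2, 5)                       # possible outcome values x_i
Xhat  <- diag(xvals)                          # sum_i x_i |x_i><x_i| in the standard basis
alpha <- complex(real = rnorm(4), imaginary = rnorm(4))
psi   <- alpha / sqrt(sum(Mod(alpha)^2))      # normalized state, sum_i |alpha_i|^2 = 1
expect_operator <- Re(Conj(psi) %*% Xhat %*% psi)   # <psi| X_hat |psi>
expect_weighted <- sum(Mod(psi)^2 * xvals)          # sum_i p(x_i) x_i with p(x_i) = |alpha_i|^2
c(expect_operator, expect_weighted)                 # identical up to rounding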
In other words, if an observable quantity of a system is measured many times, the average or expectation value of these measurements is given by the corresponding operator \(\hat {\mathcal{K}}\) acting on the state of the system, \(|\psi\rangle\). The operator expectation is important because it ties the physical interpretation (the observable) with the mathematical representation of the quantum state (the operator). Each observable is associated with a specific self-adjoint (Hermitian) operator, which ensures that the observable scalar values (the eigenvalues of the operator) are real.
The expectation value of the kime-phase operator \(\hat {\mathcal{K}}_{\varphi}\) reflects the average of the kime-phase distribution, \(\varphi\sim\Phi[-\pi,\pi)\). For most kime-phase distributions, \(\langle\hat {\mathcal{K}}\rangle_{\psi}=\mu_{\Phi}=0\), reflecting the symmetry of the phase distribution. The expectation value of the operator \(\hat {\mathcal{K}}_{\varphi}\) in a quantum state described by a normalized wave function \(\psi(x)\) is given by the integral:
\[\langle\hat {\mathcal{K}}\rangle_{\psi}=\int_{-\pi}^{\pi}{\psi^*(x)\ \hat {\mathcal{K}}\ \psi(x)\ dx}\ .\]
The wave function \(\psi(x)\) encodes all the possible kime-phase states the system could be in and their corresponding probabilities. Each value \(x\) in the domain of the wave function is a possible kime-phase observation drawn from the phase distribution \(\Phi\). The complex-valued wavefunction portrays the probability amplitudes correlating with these potential measurements. Bounded operators acting on these phase state vectors help draw random kime-phases or measure the observable phases.
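As a numerical illustration (base R, using integrate(), and assuming purely for this sketch that the kime-phase operator acts by multiplication with the phase value \(x\)), the expectation vanishes for a symmetric wave function and shifts away from zero for an asymmetric one.
# Numeric sketch of <K>_psi = int_{-pi}^{pi} psi*(x) K psi(x) dx with K taken (as an
# assumption) to act by multiplication with the phase x; the wave functions are illustrative.
expect_phase <- function(psi) {
  norm2 <- integrate(function(x) Mod(psi(x))^2, -pi, pi)$value        # normalization <psi|psi>
  integrate(function(x) Mod(psi(x))^2 * x, -pi, pi)$value / norm2     # <psi| x |psi>
}
psi_sym  <- function(x) rep(1/sqrt(2*pi), length(x))       # flat (symmetric) phase amplitude
psi_asym <- function(x) 1 + 0.5*sin(x)                     # asymmetric amplitude (unnormalized)
c(expect_phase(psi_sym), expect_phase(psi_asym))           # ~0 and a positive phase shift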
Caution: In the simulation below, to emphasize the differences between the different observable processes, we artificially offset (shift) the random samples and their corresponding mean trajectories (observable patterns). These offsets are applied both in the vertical (space) dimension and in the radial (phase) direction. They are completely artificial and are only intended to enhance the interpretation of the kime-phase sampling process by avoiding significant overlaps of points and observable kime-dynamic trends.
library(animation)
library(circular)
library(plotly)
epsilon <- 0.1
sampleSize <- 1000 # total number of phases to sample for 3 different processes (x, y, z)
sizePerTime <- 100 # number of phases to use for each fixed time (must divide sampleSize)
circleUniformPhi <- seq(from=-pi, to=pi, length.out=sizePerTime)
oopt = ani.options(interval = 0.2)
set.seed(1234)
# sample the kime-phases for all 3 different processes and the r time points
x <- rvonmises(n=sampleSize, mu=circular(pi/5), kappa=3)
y <- rvonmises(n=sampleSize, mu=circular(-pi/3), kappa=5)
z <- rvonmises(n=sampleSize, mu=circular(0), kappa=10)
r <- seq(from=1, to=sampleSize/sizePerTime, length.out=10)
# Define a function that renormalizes the kime-phase to [-pi, pi)
pheRenormalize <- function (x) {
  # map angles from [0, 2*pi) to the equivalent angles in [-pi, pi)
  out <- ifelse(as.numeric(x) < pi, as.numeric(x), as.numeric(x) - 2*pi)
  return (out)
}
# transform Von Mises samples from [0, 2*pi) to [-pi, pi)
x <- pheRenormalize(x)
y <- pheRenormalize(y)
z <- pheRenormalize(z)
# vectorize the samples
vectorX = as.vector(x)
vectorY = as.vector(y)
vectorZ = as.vector(z)
# Starting phases, set the first phase index=1
plotX = c(vectorX[1])
plotY = c(vectorY[1])
plotZ = c(vectorZ[1])
# # iterate over all time points (outer loop) and all phases (inner loop)
# for (t in 1:length(r)) { # loop over time
# for (i in 2:100) { # loop over kime-phases
# plotX = c(plotX,vectorX[i])
# plotY = c(plotY,vectorY[i])
# plotZ = c(plotZ,vectorZ[i])
#
# # Try to "stack=T the points ....
# #r1 = sqrt((resx$x)^2+(resx$y)^2)/2;
# #resx$x = r1*cos(resx$data)
# #resx$x = r1*cos(resx$data)
#
# tempX = circular(plotX[[1]])
# tempY = circular(plotY[[1]])
# tempZ = circular(plotZ[[1]])
# resx <- density(tempX, bw=25, xaxt='n', yaxt='n')
# resy <- density(tempY, bw=25, xaxt='n', yaxt='n')
# resz <- density(tempZ, bw=25, xaxt='n', yaxt='n')
# res <- plot(resx, points.plot=TRUE, xlim=c(-1.1,1.5), ylim=c(-1.5, 1.5),
# main="Trivariate random sampling of\n kime-magnitudes (times) and kime-directions (phases)",
# offset=0.9, shrink=1.0, ticks=T, lwd=3, axes=F, stack=TRUE, bins=150)
# lines(resy, points.plot=TRUE, col=2, points.col=2, plot.info=res, offset=1.0, shrink=1.45, lwd=3, stack=T)
# lines(resz, points.plot=TRUE, col=3, points.col=3, plot.info=res, offset=1.1, shrink=1.2, lwd=3, stack=T)
# segments(0, 0, r[i]*cos(vectorX[i]), r[i]*sin(vectorX[i]), lwd=2, col= 'black')
# segments(0, 0, r[i]*cos(vectorY[i]), r[i]*sin(vectorY[i]), lwd=2, col= 'red')
# segments(0, 0, r[i]*cos(vectorZ[i]), r[i]*sin(vectorZ[i]), lwd=2, col= 'green')
# ani.pause()
# }
# }
####################################################
# pl_list <- list()
pl_scene <- plot_ly(type='scatter3d', mode="markers")
plotX <- list()
plotY <- list()
plotZ <- list()
plotX_df <- list() # need separate dataframes to store all time foliations
plotY_df <- list()
plotZ_df <- list()
for (t in 1:length(r)) { # loop over time
# loop over kime-phases
# each time point t uses its own non-overlapping block of sizePerTime phases
plotX[[t]] <- as.numeric(x[((t-1)*sizePerTime + 1):(t*sizePerTime)])
plotY[[t]] <- as.numeric(y[((t-1)*sizePerTime + 1):(t*sizePerTime)])
plotZ[[t]] <- as.numeric(z[((t-1)*sizePerTime + 1):(t*sizePerTime)])
# Try to "stack=T the points ....
#r1 = sqrt((resx$x)^2+(resx$y)^2)/2;
#resx$x = r1*cos(resx$data)
#resx$x = r1*cos(resx$data)
tempX = circular(unlist(plotX[[t]]))
tempY = circular(unlist(plotY[[t]]))
tempZ = circular(unlist(plotZ[[t]]))
resx <- density(tempX, bw=25, xaxt='n', yaxt='n')
resy <- density(tempY, bw=25, xaxt='n', yaxt='n')
resz <- density(tempZ, bw=25, xaxt='n', yaxt='n')
# res <- plot(resx, points.plot=TRUE, xlim=c(-1.1,1.5), ylim=c(-1.5, 1.5),
# main="Trivariate random sampling of\n kime-magnitudes (times) and kime-directions (phases)",
# offset=0.9, shrink=1.0, ticks=T, lwd=3, axes=F, stack=TRUE, bins=150)
# pl_list[[t]]
unifPhi_df <- as.data.frame(cbind(t=t, circleUniformPhi=circleUniformPhi))
plotX_df[[t]] <- as.data.frame(cbind(t=t, plotX=unlist(plotX[[t]])))
plotY_df[[t]] <- as.data.frame(cbind(t=t, plotY=unlist(plotY[[t]])))
plotZ_df[[t]] <- as.data.frame(cbind(t=t, plotZ=unlist(plotZ[[t]])))
pl_scene <- pl_scene %>% add_trace(data=unifPhi_df, showlegend=FALSE,
x = ~((t-epsilon)*cos(circleUniformPhi)),
y = ~((t-epsilon)*sin(circleUniformPhi)), z=0,
name=paste0("Time=",t), line=list(color='gray'),
mode = 'lines', opacity=0.3) %>%
add_markers(data=plotX_df[[t]], x=~(t*cos(plotX)), y=~(t*sin(plotX)), z=0,
type='scatter3d', name=paste0("X: t=",t),
marker=list(color='green'), showlegend=FALSE,
mode = 'markers', opacity=0.3) %>%
add_markers(data=plotY_df[[t]], x=~((t+epsilon)*cos(plotY)),
y=~((t+epsilon)*sin(plotY)), z=0-epsilon, showlegend=FALSE,
type='scatter3d', name=paste0("Y: t=",t),
marker=list(color='blue'),
mode = 'markers', opacity=0.3) %>%
add_markers(data=plotZ_df[[t]], x=~((t+2*epsilon)*cos(plotZ)),
y=~((t+2*epsilon)*sin(plotZ)), z=0+epsilon, showlegend=FALSE,
type='scatter3d', name=paste0("Z: t=",t),
marker=list(color='red'),
mode = 'markers', opacity=0.3)
}
means_df <- as.data.frame(cbind(t = c(1:length(r)),
plotX_means=unlist(lapply(plotX, mean)),
plotY_means=unlist(lapply(plotY, mean)),
plotZ_means=unlist(lapply(plotZ, mean))))
pl_scene <- pl_scene %>%
# add averaged (denoised) phase trajectories
add_trace(data=means_df, x=~(t*cos(plotX_means)),
y=~(t*sin(plotX_means)), z=0,
type='scatter3d', showlegend=FALSE, mode='lines', name="Expected Obs X",
line=list(color='green', width=15), opacity=0.8) %>%
add_trace(data=means_df, x=~(t*cos(plotY_means)),
y=~(t*sin(plotY_means)), z=0-epsilon,
type='scatter3d', showlegend=FALSE, mode='lines', name="Expected Obs X",
line=list(color='blue', width=15), opacity=0.8) %>%
add_trace(data=means_df, x=~(t*cos(plotZ_means)),
y=~(t*sin(plotZ_means)), z=0+epsilon,
type='scatter3d', showlegend=FALSE, mode='lines', name="Expected Obs X",
line=list(color='red', width=15), opacity=0.8) %>%
add_trace(x=0, y=0, z=c(-2,2), name="Space", showlegend=FALSE,
line=list(color='gray', width=15), opacity=0.8) %>%
layout(title="Pseudo Spacekime (1D Space, 2D Kime) Kime-Phase Sampling and Foliation",
scene = list(xaxis=list(title="Kappa1"), yaxis=list(title="Kappa2"),
zaxis=list(title="Space"))) %>% hide_colorbar()
pl_scene
# pl_list %>%
# subplot(nrows = length(pl_list)) %>%
# layout(title="Integral Approximations by Riemann Sums", legend = list(orientation = 'h'))
In this simulation, the \(3\) spatial dimensions \((x,y,z)\in\mathbb{R}^3\) are compressed into \(1D\) along the vertical axis \(z\in\mathbb{R}^1\), the radial displacement represents the time dynamics, \(t\in \mathbb{R}^+\), and the angular scatters of three different processes, representing repeated measurements from \(3\) different circular distributions at a fixed spatiotemporal location, are shown in different colors. Mean-dynamics across time of the three different time-varying distributions are shown as smooth curves color-coded to reflect the color of their corresponding process distributions, samples, and sampling-distributions.
Data from repeated measurements are acquired in time-dynamic experiments tracking univariate or multivariate processes repeatedly over time. Hence, repeated measurement data are tracked over multiple points in time. The simple situation when a single response variable is measured over time (the univariate case) makes it easier to describe the challenges, but the more general multivariate response is more realistic in many data science applications.
In general, the responses over time may be heavily temporally correlated, yet still corrupted by noise modeled by the kime-phase distribution. Note that individual observations collected at points in time close together are likely to be more similar to one another compared to observations collected at distant times. However, even at the same time point, the phase distribution controls the level of expected dispersion/variability between multiple measurements obtained under identical experimental conditions (at the fixed point in time). Classical multivariate analysis treats observations collected at different time points as (potentially correlated) different variables.
Consider a completely randomized block design experiment to determine the effects of \(5\) surgical treatments on cardiovascular health outcomes, where a group of \(35\) patients is divided into \(5\) treatment groups of sizes \(\{7, 6, 9, 5, 8\}\). The health outcomes of every patient \(\{p_j\}_{j=1}^{35}\) are recorded at \(10\) consecutive time points, \(\{t_k\}_{k=0}^9\), following one of the \(5\) available clinical treatment regimens, \(\{R_i\}_{i=1}^5\). Note that the treatment group sizes are \(\{|R_1|=7,|R_2|=6,|R_3|=9,|R_4|=5,|R_5|=8 \}\).
There are many alternative strategies to analyze data from such repeated-sampling experiments. Analysis of variance (ANOVA) and multivariate analysis of variance (MANOVA) represent a pair of common parametric techniques.
Let's introduce the following notation:
ANOVA provides a parametric statistical test (using Fisher's F distribution) to study interactions between treatment regimens and time, as well as the main effects of treatments and time. MANOVA facilitates testing for significant interaction effects between treatment regimens and time, and for the main effects of treatment regimen;
The ANOVA (linear) model \[Y_{ijk}=\mu+\alpha_i+\beta_{j(i)}+\tau_k+(\alpha \tau)_{ik} + \epsilon_{ijk},\] assumes that the health outcome data \(Y_{ijk}\) for the \(i^{th}\) treatment regimen \(R_i\), the \(j^{th}\) patient \(p_j\), and the \(k^{th}\) time point \(t_k\) is a mixture of some (yet unknown) overall mean response \(\mu\), plus the treatment effect \(\alpha_i\), the effect of the patient within that treatment regimen \(\beta_{j(i)}\), the effect of time \(\tau_k\), the effect of the interaction between treatment and time \((\alpha \tau)_{ik}\), and a residual error \(\epsilon_{ijk}\sim N(0,\sigma_{\epsilon}^2)\).
The explicit ANOVA assumptions include:
This ANOVA mixed-effects model includes a random effect of the patient and fixed effects for treatment regimen and time. Fisher's F-test is used to carry out the ANOVA using the following variance decomposition summary.
Source of Variation | Sum of Squares (SS) | Degrees of Freedom (df) | Mean Squares (MS) | F-Ratio |
---|---|---|---|---|
Between samples (Treatment) | \(SS_{treat}\) | a - 1 | \(\frac{SS_{treat}}{a-1}\) | \(\frac{MS_{treat}}{MS_{error(a)}}\) |
Within samples (Time) | \(SS_{time}\) | t - 1 | \(\frac{SS_{time}}{t-1}\) | \(\frac{MS_{time}}{MS_{error(b)}}\) |
Interaction (Treat*Time) | \(SS_{treat\times time}\) | (a-1)(t-1) | \(\frac{SS_{treat\times time}}{(a-1)(t-1)}\) | \(\frac{MS_{treat\times time}}{MS_{error(b)}}\) |
Error(a) | \(SS_{error(a)}\) | N - a | \(\frac{SS_{error(a)}}{N-a}\) | |
Error(b) | \(SS_{error(b)}\) | (N-a)(t-1) | \(\frac{SS_{error(b)}}{(N-a)(t-1)}\) | |
Total | \(SS_{total}\) | Nt - 1 | | |
where,
Sum of Squares \(SS\) formulas are given below:
\[\begin{array}{lll}SS_{total}& =& \sum_{i=1}^{a}\sum_{j=1}^{n_i}\sum_{k=1}^{t}Y^2_{ijk}-Nt\bar{y}^2_{...}\\SS_{treat} &= &t\sum_{i=1}^{a}n_i\bar{y}^2_{i..} - Nt\bar{y}^2_{...}\\SS_{error(a)}& =& t\sum_{i=1}^{a}\sum_{j=1}^{n_i}\bar{y}^2_{ij.} - t\sum_{i=1}^{a}n_i\bar{y}^2_{i..}\\SS_{time}& =& N\sum_{k=1}^{t}\bar{y}^2_{..k}-Nt\bar{y}^2_{...}\\SS_{\text{treat x time}} &=& \sum_{i=1}^{a}\sum_{k=1}^{t}n_i\bar{y}^2_{i.k} - Nt\bar{y}^2_{...}-SS_{treat} -SS_{time}\end{array}\ .\]
The statistical inference based on the ANOVA (variance decomposition) model includes:
\[F = \overbrace{\frac{MS_{\text{treat x time}}}{MS_{error(b)}}}^{obs.\ test\ statistic} > \overbrace{F_{\underbrace{(a-1)(t-1)}_{df_1}, \underbrace{(N-a)(t-1)}_{df_2}, \underbrace{\alpha}_{signif.}}}^{F-distr.\ critical\ value}\]
\[F = \dfrac{MS_{treat}}{MS_{error(a)}} > F_{a-1, N-a, \alpha}\ .\]
\[F = \frac{MS_{time}}{MS_{error(b)}} > F_{t-1, (N-a)(t-1), \alpha}\ .\]
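To ground the split-plot variance decomposition and the F-tests above, here is a hedged simulation sketch of the hypothetical 5-treatment, 35-patient, 10-time-point design, fit with base R's aov() using an Error(patient) stratum; all simulated effect sizes are made up for illustration.
# Simulation sketch of the repeated-measures (split-plot) design described above:
# 5 treatment regimens with group sizes 7, 6, 9, 5, 8 and 10 time points per patient.
set.seed(1234)
sizes <- c(7, 6, 9, 5, 8)                               # treatment group sizes |R_i|
a <- length(sizes); t_pts <- 10; N <- sum(sizes)
treat_of_patient <- rep(seq_len(a), times = sizes)      # which regimen each patient follows
df <- expand.grid(time = factor(seq_len(t_pts)), patient = factor(seq_len(N)))
df$treatment <- factor(treat_of_patient[as.integer(df$patient)])
# simulate Y_ijk = mu + alpha_i + beta_j(i) + tau_k + (alpha*tau)_ik + error (made-up effects)
df$Y <- 50 + 2 * as.integer(df$treatment) +              # treatment effect alpha_i
  rnorm(N)[as.integer(df$patient)] +                     # patient-within-treatment effect beta_j(i)
  0.5 * as.integer(df$time) +                            # time effect tau_k
  0.1 * as.integer(df$treatment) * as.integer(df$time) + # treatment-by-time interaction
  rnorm(nrow(df), sd = 1)                                # residual error
fit <- aov(Y ~ treatment * time + Error(patient), data = df)
summary(fit)   # Error(a) = patient stratum, Error(b) = within-patient stratum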
\[\mathbf{Y}_{ij} = \left(\begin{array}{c}Y_{ij1}\\ Y_{ij2} \\ \vdots\\ Y_{ijt}\end{array}\right)\ .\]
Effectively, we treat the data at different time-points as different variables or features. One-way MANOVA assumptions include:
MANOVA inference includes:
\[\mathbf{Z}_{ij} = \left(\begin{array}{c}Z_{ij1}\\ Z_{ij2} \\ \vdots \\ Z_{ij, t-1}\end{array}\right) = \left(\begin{array}{c}Y_{ij2}-Y_{ij1}\\ Y_{ij3}-Y_{ij2} \\ \vdots \\Y_{ijt}-Y_{ij,t-1}\end{array}\right)\ .\]
The (random) vector \(\mathbf{Z}_{ij}\) is a function of the random data and we can compute the expected population mean for treatment \(i\), \(E(\mathbf{Z}_{ij}) = \boldsymbol{\mu}_{Z_i}\). The MANOVA test on these \(\mathbf{Z}_{ij}\)'s is based on the null hypothesis \(H_o\colon \boldsymbol{\mu}_{Z_1} = \boldsymbol{\mu}_{Z_2} = \dots = \boldsymbol{\mu}_{Z_a}\).
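Continuing with the simulated long-format data frame df from the ANOVA sketch above (an assumption; the chunk below is not self-contained without it), the MANOVA-on-differences idea can be sketched by reshaping to wide form, forming the \((t-1)\) successive differences \(\mathbf{Z}_{ij}\), and testing equality of the mean difference profiles with manova().
# Sketch of MANOVA on successive time differences, using the simulated data frame `df`
# from the repeated-measures ANOVA chunk above.
wide <- reshape(df, idvar = c("patient", "treatment"), timevar = "time",
                direction = "wide")                       # one row per patient, columns Y.1..Y.10
Ymat <- as.matrix(wide[ , paste0("Y.", 1:10)])
Zmat <- Ymat[ , -1] - Ymat[ , -ncol(Ymat)]                # Z_ijk = Y_ij(k+1) - Y_ijk
fitZ <- manova(Zmat ~ wide$treatment)
summary(fitZ, test = "Wilks")                             # Wilks' Lambda test of equal mean profiles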
The key factor exploited by linear models such as ANOVA/MANOVA is the powerful analytical decomposition of the observed process variation, which quantifies the type and amount of complementary (orthogonal) dispersion observed in data samples. Recall that \(N=\sum_i^a{n_i}\) and
\[\bar{y}_{...} = \frac{1}{Nt} \sum_{i=1}^{a}\sum_{j=1}^{n_i}\sum_{k=1}^{t} {Y_{ijk}}, \]
\[SS_{total} = \sum_{i=1}^{a}\sum_{j=1}^{n_i}\sum_{k=1}^{t} {(Y_{ijk}- \bar{y}_{...})^2} = \sum_{i=1}^{a}\sum_{j=1}^{n_i}\sum_{k=1}^{t}Y^2_{ijk}-Nt\bar{y}^2_{...}\ .\]
Then we can expand the total variance as a sum of complementary factors
\[\begin{array}{lll} \underbrace{SS_{total}}_{\text{Overall Obs. Variance}} & =& \sum_{i=1}^{a}\sum_{j=1}^{n_i}\sum_{k=1}^{t}Y^2_{ijk}-Nt\bar{y}^2_{...}\\ \underbrace{SS_{treat}}_{\text{Obs. Var. due to Treatment}} &= & t\sum_{i=1}^{a}n_i\bar{y}^2_{i..} - Nt\bar{y}^2_{...} \\ \underbrace{SS_{time}}_{\text{Obs. Var. due to Time}} & =& N\sum_{k=1}^{t}\bar{y}^2_{..k}-Nt\bar{y}^2_{...} \\ \underbrace{SS_{treat \times time}}_{\text{Obs. Var. due to Treat*Time Interaction}} &=& \sum_{i=1}^{a}\sum_{k=1}^{t}n_i\bar{y}^2_{i.k} - Nt\bar{y}^2_{...}-SS_{treat} -SS_{time} \\ \underbrace{SS_{error(a)}}_{\text{Residual Var. (unaccounted by model)}} & =& t\sum_{i=1}^{a}\sum_{j=1}^{n_i}\bar{y}^2_{ij.} - t\sum_{i=1}^{a}n_i\bar{y}^2_{i..}\end{array}\ .\]
\[SS_{total} = \sum_{i=1}^{a}\sum_{j=1}^{n_i}\sum_{k=1}^{t} Y^2_{ijk}-Nt\bar{y}^2_{...} = \\ \underbrace{t\sum_{i=1}^{a}n_i\bar{y}^2_{i..} - Nt\bar{y}^2_{...}}_{SS_{treat}} + \underbrace{N\sum_{k=1}^{t}\bar{y}^2_{..k}-Nt\bar{y}^2_{...}}_{SS_{time}} +\\ \underbrace{\sum_{i=1}^{a}\sum_{k=1}^{t}n_i\bar{y}^2_{i.k} - Nt\bar{y}^2_{...}-SS_{treat} -SS_{time}}_{SS_{{treat \times time}}} +\\ \underbrace{t\sum_{i=1}^{a}\sum_{j=1}^{n_i}\bar{y}^2_{ij.} - t\sum_{i=1}^{a}n_i\bar{y}^2_{i..}}_{SS_{error(a)}} + SS_{error(b)}\ ,\] where the remaining within-patient error is obtained by subtraction, \(SS_{error(b)} = SS_{total} - SS_{treat} - SS_{time} - SS_{treat\times time} - SS_{error(a)}\).
To draw the connection between observed variability decomposition and complex-time, we will consider a study of single-subject task-based fMRI activation across multiple sessions. In this study, the researchers proposed a within-subject variance decomposition of task-based fMRI using a large number of repeated measures: \(500\) trials of three different subjects, each undergoing \(100\) functional scans in \(9-10\) different sessions. In this case, the observed within-subject variance was segregated into \(4\) primary components: variance across sessions, variance across runs within a session, variance across blocks within a run, and residual measurement error (unaccounted for by the model).
Within-subject variance decomposition:
That study reported that across \(16\) cortical networks, \(\sigma^2_{block}\) contributions to within-subject variance were larger in high-order cognitive networks compared to somatosensory brain networks. In addition, \(\sigma^2_{block} \gg \sigma^2_{session}\) in higher-order cognitive networks associated with emotion and interoception roles.
This spatiotemporal distribution factorization of the total observed within-subject variance illustrates the importance of identifying dominant variability components, which subsequently may guide prospective fMRI study-designs and data acquisition protocols.
All functional MRI runs had the same organization of blocks as shown in the diagram below.
# install.packages("DiagrammeR")
# See DiagrammeR Instructions:
# https://epirhandbook.com/en/diagrams-and-charts.html
library(DiagrammeR)
graphStr <- "# Mind commented instructions
digraph fMRI_Study_Design { # 'digraph' means 'directional graph', then the graph name
################# graph statement #################
graph [layout = dot, rankdir = LR, # TB = layout top-to-bottom
fontsize = 14]
################# nodes (circles) #################
node [shape = circle, fixedsize = true, width = 1.8]
Start [label = 'fMRI Run\nStart']
Begin [label = 'Beginning\nRest period\n(30 seconds)']
Task [label = 'Task block\n(20 seconds)']
Rest [label = 'Rest block\n(20 seconds)', fontcolor = darkgreen]
End [label = 'Ending\nRest block\n(20 seconds)', fontcolor=darkgreen]
Total [label = 'Run Total\n(340 seconds)', fontcolor = darkgreen]
################ edges #######
Start -> Begin [label='initialize', fontcolor=red, color=red]
Begin -> Task [label='task', fontcolor=red, color=red]
Task -> Rest [label='Loop', fontcolor=red, color=red]
Rest -> End [label='wrap up', fontcolor=red, color=red]
End -> Total [label='Complete', fontcolor=red, color=red]
################# grouped edge #################
{Rest} -> Task [label = 'Repeat\nTask-Rest Block\n5 times', fontcolor=darkgreen,
color = darkgreen, style = dashed]
}
"
grViz(graphStr)
Suppose the complex-valued fMRI signal is \(Y_{block=i,\ run=j,\ session=k}\) and we fit a linear model decomposing the effect estimates
\[\underbrace{\hat{\beta}_{i(j(k))}}_{effect\\ estimate}=\underbrace{\alpha}_{overall\\ mean} + \underbrace{\theta_k}_{session\\ effect} + \underbrace{\xi_{j(k)}}_{run\\effect} + \underbrace{\eta_{i(j(k))}}_{block\\ effect} + \underbrace{\varepsilon_{i(j(k))}}_{measurement\\ error } \ .\]
This partitioning of the effects is in terms of the indices \(block=i\), \(run=j\), and \(session=k\), where parentheses \(()\) indicate the nesting structure between consecutive levels. We assume Gaussian distributions for all random effects and the residual error,
\[\theta_k\sim N(0,\sigma^2_{session}),\ \ \xi_{j(k)}\sim N(0,\sigma^2_{run}),\ \ \eta_{i(j(k))}\sim N(0,\sigma^2_{block}),\ \ \varepsilon_{i(j(k))}\sim N(0,\sigma^2_{error})\ .\]
Then, the variance decomposition of the effect estimate model is
\[Var(\hat{\beta}_{i(j(k))}) = \sigma^2_{session} + \sigma^2_{run} + \sigma^2_{block} + \sigma^2_{error}\ .\]
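Here is a hedged sketch (assuming the lme4 package and arbitrary, made-up variance components) of how such nested variance components could be estimated from simulated block-level effect estimates; note that with a single estimate per block, the block-level variance and the measurement error are confounded in the residual term.
# Sketch: estimate nested variance components from simulated block-level effect estimates
# using a mixed-effects model with run nested within session.
library(lme4)
set.seed(1234)
n_session <- 10; n_run <- 8; n_block <- 5
dat <- expand.grid(block   = factor(1:n_block),
                   run     = factor(1:n_run),
                   session = factor(1:n_session))
sess_eff <- rnorm(n_session, sd = 2)                      # sigma^2_session = 4 (made up)
run_eff  <- rnorm(n_session * n_run, sd = sqrt(2))        # sigma^2_run = 2 (made up)
dat$beta_hat <- sess_eff[as.integer(dat$session)] +
  run_eff[(as.integer(dat$session) - 1) * n_run + as.integer(dat$run)] +
  rnorm(nrow(dat), sd = 1)                                # block + measurement error (pooled)
fit <- lmer(beta_hat ~ 1 + (1 | session/run), data = dat)
VarCorr(fit)   # recovers ~4 (session) and ~2 (run:session); residual pools block and error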
Question: Where do the random effects (and their estimates) come from?
Answer: As shown in this article, we start with a general linear model (GLM) \(Y = X\beta + \varepsilon\) representing the observed fMRI response (intensity) \(y\) at each voxel as a linear combination of some explanatory variables, columns in the design matrix \(X\). Each column in \(X\) corresponds to one effect that can be experimentally manipulated or that may confound the observed outcome \(y\).
The GLM expresses the response variable \(y\) as a mixture of explanatory variables and some residual error term \(\varepsilon\sim N(0,\Sigma)\). fMRI pre-processing protocols may filter the data with a convolution or residual forming matrix \(S\), leading to a generalized linear model that includes intrinsic serial correlations and applied extrinsic filtering. Different choices of \(S\) correspond to different estimation schemes
\[\underbrace{Y}_{observed\\ fMRI} = \overbrace{X}^{design\\ matrix} \underbrace{\beta}_{effects\\ vector} + \overbrace{\varepsilon}^{residual\\ error}\ .\]
\[S=\begin{cases} \Sigma^{-\frac{1}{2}}, & whitening \\ 1, & none \\ 1-X_oX_o^+, & adjustment \end{cases}\ .\]
\[SY = SX\beta + S\varepsilon, \ \ V=S\Sigma S',\ \ \underbrace{\hat{\beta}=(SX)^+ SY}_{\text {random effect estimates}},\ \ \frac{c'\hat{\beta}}{\sqrt{\hat{V}(c'\hat{\beta})}}\sim t_{df},\]
\[\hat{V}(c'\hat{\beta})=\hat{\sigma}^2 c'(SX)^+ V((SX)^+)'c,\\ R=I-SX(SX)^+, \ \ \hat{\sigma}^2=\frac{Y'RY}{trace(RV)}, \ \ df=\frac{trace(RV)^2}{trace(RVRV)}\ .\]
The effect parameter estimates are obtained using least squares based on the matrix pseudo-inverse of the filtered design matrix, denoted by \(^+\). The most general effect of interest is specified by a vector of contrast weights \(c\) that give a weighted sum, or compound, of parameter estimates, referred to as a contrast. However, we often use unitary contrasts corresponding to a single variable in the design matrix \(X\), e.g., \(c=(1,0,\cdots,0)\) corresponds to the effect of the first covariate (first column in \(X\)). Once the effects are estimated (i.e., we have computed the LS estimates \(\hat{\beta}\)), we can compute the \(T_c\sim t_{df}\) statistic to assess the statistical significance of the contrast effect.
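This estimation and contrast machinery can be sketched compactly in R (assuming MASS::ginv for the pseudo-inverse, taking \(S=I\), i.e., the "none" filtering option above, and a made-up single-voxel design): simulate a time series, estimate \(\hat\beta=(SX)^+SY\), and form the t-statistic for a unitary contrast.
# Sketch of GLM effect/contrast estimation with S = I (no filtering or whitening, so V = I):
# beta_hat = (SX)^+ S Y, sigma^2_hat = Y'RY/trace(R), and the t-statistic for c = (0, 1, 0).
library(MASS)
set.seed(1234)
Tlen <- 200
X <- cbind(intercept = 1,
           ev1 = rep(c(0, 1), each = 20, length.out = Tlen),   # boxcar task regressor
           ev2 = rnorm(Tlen))                                   # nuisance covariate
beta_true <- c(100, 2, 0)
Y <- X %*% beta_true + rnorm(Tlen, sd = 3)
S   <- diag(Tlen)                               # S = I ("none" estimation scheme)
SXp <- ginv(S %*% X)                            # (SX)^+
beta_hat <- SXp %*% S %*% Y
R <- diag(Tlen) - S %*% X %*% SXp               # residual-forming matrix
sigma2_hat <- as.numeric(t(Y) %*% R %*% Y) / sum(diag(R))   # with V = I, trace(RV) = trace(R)
cvec <- c(0, 1, 0)                              # unitary contrast for the task regressor
t_stat <- as.numeric(t(cvec) %*% beta_hat) /
  sqrt(sigma2_hat * as.numeric(t(cvec) %*% SXp %*% t(SXp) %*% cvec))
c(beta_hat = beta_hat[2], t = t_stat)           # task effect estimate and its t-statistic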
In essence, fMRI analyses typically involve two phases:
\[({\text{First-level analysis}})\ \underbrace{Y_m=X_m \beta_m + \varepsilon_m}_{Individual\ Subject\ Model}, \ \underbrace{Y_m=\{Y_{m,t=0},Y_{m,t=1},\cdots, Y_{m,t=T}\}}_{fMRI\ series\ signal},\ \forall \ (subject)\ m,\\ ({\text{Second-level analysis}})\ \ \underbrace{\beta_m=X_{gm} \beta_g +\varepsilon_{gm}}_{Population\ Model}, \ \ \underbrace{\beta_g=\{\beta_{mo},\beta_{m1},\cdots, \beta_{mG}\}}_{subject-level-1\ regression\ coefficients\\ (effects)\ \forall EV\ column\ in\ X},\ \forall g\in G\ (group)\ .\]
This two-level fMRI analysis can also be represented into a General Linear Mixed Effects Model:
\[Y_m=\underbrace{\beta_o+X_{m1} \beta_1+\cdots+X_{mp} \beta_p}_{fixed-effects}+ \underbrace{\beta_{mo}+Z_{m1} \beta_{m1}+\cdots + Z_{mq} \beta_{mq}}_{random-effects} +\varepsilon_m\ , \forall \ (subject)\ m .\]
Again, \(Y_m=\{Y_{m,t=0},Y_{m,t=1},\cdots, Y_{m,t=T}\}\) is a column vector of observed fMRI time-series for individual \(m\) at a fixed 3D voxel location, \(\{X_{mj}\}\) is the vector of the \(p\) regressors included in the linear (LME) model, \(\{\beta_o,\cdots, \beta_p\}\) are the fixed-effect regression coefficients, which are identical across all subjects, the column vectors \(\{Z_{mq}\}_q\) are the random effect regressors with corresponding random effects coefficients \(\{\beta_{mq}\}_q\sim N(0,\Psi)\), and \(q\) is the total number of random effects included in the model. The random effects capture the variability across subjects for each of the regressors \(\{Z_{mq}\}_q\), \(\Psi\) is a \((q+1)\times (q+1)\) variance-covariance matrix, and the \(n_m\times 1\) vector of within-group errors \(\varepsilon_m=\{\varepsilon_{m1},\cdots,\varepsilon_{mn_m}\}\sim N(0,\sigma^2 V)\), where \(\sigma^2\) and \(V\) are the variance and the correlation matrix.
The two-tier fMRI analysis can be formulated into a generalized linear mixed-effects model as follows. For simplicity of notation, assume that the first level design matrix for subject \(m\), \(X_m\), has only four columns representing an intercept and \(3\) covariates, \(EV_1, EV_2, EV_3\), and there are \(T\) temporal volumes in the 4D fMRI spatio-temporal data
\[X_m=\begin{pmatrix} 1 & EV_{11} & EV_{21} & EV_{31} \\ 1 & EV_{12} & EV_{22} & EV_{32} \\ \vdots & \vdots & \vdots & \vdots \\ 1 & EV_{1T} & EV_{2T} & EV_{3T} \end{pmatrix}\equiv [X_{mo}\ X_{m1}\ X_{m2}\ X_{m3}\ ]\ ,\]
where \(\forall m,\ X_{mj},\ 0\leq j \leq 3\) is a column vector of ones (for the intercept, \(j=0\)), or \(EV_j, \ 1\leq j \leq 3\). The vector of regression coefficients corresponding to each explanatory variable is
\[\beta_m=\begin{pmatrix} \beta_{mo} \\ \beta_{m1} \\ \beta_{m2} \\ \beta_{m3} \end{pmatrix} ,\]
Suppose that in the second-level analysis we want to estimate the average population-wide effect of each EV for the whole group. This corresponds to using the following design matrix
\[X_g=\begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}\equiv I_{4\times 4}\ .\]
Next, we plug the second-level population model \(\beta_m=X_{gm} \beta_g +\varepsilon_{gm}\) into the first-level model
\[\overbrace{Y_m=X_m \beta_m +\varepsilon_{m}}^{First \ Level\ (individual)}\\ =X_m \underbrace{\left ( X_{gm} \beta_g +\varepsilon_{gm}\right )}_{ Second\ Level\ (population)} +\varepsilon_{m}=\\ [X_{mo}\ X_{m1}\ X_{m2}\ X_{m3}\ ] \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} \beta_{go} \\ \beta_{g1} \\ \beta_{g2} \\ \beta_{g3} \end{pmatrix} + [X_{mo}\ X_{m1}\ X_{m2}\ X_{m3}\ ] \begin{pmatrix} \varepsilon_{gmo} \\ \varepsilon_{gm1} \\ \varepsilon_{gm2} \\ \varepsilon_{gm3} \end{pmatrix} +\varepsilon_{m}=\\ \underbrace{\underbrace{X_{mo}\beta_{go}+X_{m1}\beta_{g1}+X_{m2}\beta_{g2}+X_{m3}\beta_{g3}}_{fixed\ effects}+ \underbrace{X_{mo}\varepsilon_{gmo}+X_{m1}\varepsilon_{gm1}+X_{m2}\varepsilon_{gm2}+ X_{m3}\varepsilon_{gm3}}_{random\ effects}+ \underbrace{\varepsilon_m}_{residual\ errors}}_{General\ LME} \ ,\] where \(\varepsilon_m\sim N(0,\sigma^2V)\), \(\varepsilon_g\sim N(0,R_g)\), and \(R_g\) is the variance-covariance matrix of the group-level errors.
The intrinsic process error \(\varepsilon_m\sim N(0,\sigma^2V)\) reflects classical variability in the underlying process distribution. The more subtle error-tensor \(\varepsilon_g\sim N(0,R_g)\) depends on the variance-covariance matrix of the group-level errors, \(R_g\), and can be interpreted as repeated-measures variability due to the kime-phase distribution. Hence, the kime-phases can be used to model or track repeated-measure variations. Recall the kime-phase simulation discussed earlier, which demonstrated alternative kime-phase distributions coupled with corresponding mean-phase aggregators (solid radial lines) representing real, observable (deterministic, not stochastic) measurable quantities. The latter avoid the intrinsic kime-phase noise, due to random sampling from the phase-space distribution \(\Phi_{[-\pi,\pi)}\), since real measurements depend de facto on phase-aggregating kernels that smooth the intractable (unobservable) and intrinsically random kime-phases.
In 1915, Albert Einstein proposed the general theory of relativity (GTR). The following year, he proposed three tests to validate or falsify GTR, and some of these hypotheses were confirmed using data acquired by 1919.
Similarly, to test, affirm, or reject the complex-time model, i.e., the existence of kime-phases and the interpretation of the spacekime representation of repeated longitudinal measurements, we need to propose testable hypotheses that can be used to determine the validity of the spacekime representation. Ideally, we need to express such hypotheses as if-then statements indicating that under certain conditions or actions (investigator controlled, independent variables, IVs), specific outcomes or results (dependent variables, DVs) are to be expected. We may need to propose experiments that can be carried out to prove or disprove the existence of kime-phases, with substantial reproducibility.
In essence, there may be viable physics experiments or statistical-analytical experiments that can be formulated to test the spacekime representation.
For instance, potential physics experiments to consider include:
On the other hand, potential statistical and analytical experiments to test the validity of the kime-phase as a stochastic representation of repeated sampling may rely on tests confirming or disproving scientific inference improvements based on the spacekime-model compared to classical approaches. For example:
In quantum superposition, a particle is also a distribution, simultaneously maintaining all possible basis states (realities) until the moment it is sampled, drawn, measured, or detected, i.e., observed as a scalar, vector, or tensor. Similarly, in statistics, data science, and AI, we often collect, sample, simulate, or model data as a proxy of the physical phenomenon we are studying. The final data-driven scientific inference about the phenomenon is expected to be reproducible, consistent, unbiased, accurate, and reliable, reflecting the process distribution properties, despite the fact that all data-driven inference is sample evidence-based, whereas the underlying process distributions are generalized functions. This statistical analytic duality resembles the particle-wave duality in physics.
In applied sciences, sample statistics, such as the sample mean, variance, percentiles, range, inter-quartile range, etc., are always well-defined. Their values are not known at any given time until the (finite) sample \(\{x_i\}_{i=1}^n\) is collected (observations are recorded). For instance, the sample mean statistic is the arithmetic average of all observations, \(\bar{x}_n=\frac{1}{n}\sum_{i=1}^n {x_i}\); not knowing the value of the sample mean prior to collecting the data is simply due to lack of evidence, since we have a closed-form expression for how to obtain the sample-average statistic estimating the population (physical system's) expected mean response, \(\mu_X\ \underbrace{\leftarrow}_{{n\to\infty}}\ \bar{x}_n\). Quantum systems interact in ways that can be explained with superpositions of different discrete base-states, and quantum system measurements yield statistical results corresponding to any one of the possible states appearing at random.
Similarly, analytical systems interact in ways that can be explained with superpositions of different, discrete, and finite samples, and analytical system measurements yield statistics corresponding to any one of the specific sample-statistic outcomes, which vary between samples and experiments. However, the two fundamental laws of probability theory governing statistical inference (the CLT and the LLN) imply that this between-sample variability of sample statistics decreases rapidly as the sample size increases. For instance, IID samples \(\{x_i\}_{i=1}^n\sim \mathcal{D}(\mu_X,\sigma_X^2)\), drawn from most well-behaved distributions \(\mathcal{D}\) with mean \(\mu_X\) and variance \(\sigma_X^2\), yield sample means \(\bar{x}_n\) whose sampling distribution converges (in distribution) to a normal distribution, an asymptotic limiting distribution with rapidly decaying variance as the sample size increases
\[\underbrace{\bar{x}_n }_{Sampling\\ distribution}\ \ \ \overbrace{\underbrace{\longrightarrow}_{n\to\infty}}^{Convergence\\ in\ distribution} \underbrace{\mathcal{N}\left(\mu_X,\frac{\sigma_X^2}{n}\right)}_{Asymptotic\\ distribution}\ .\]
Let’s try to explicate the duality between sampling the kime-phase distribution and the complementary sampling of the process state space (spatiotemporal sampling).
Suppose the process \(X\sim F_X\), where \(F_X\) is an arbitrary distribution, and let \(\sigma^2=Var(X)\). If \(\{X_1,\cdots ,X_n\}\sim F_X\) is an IID sample, the CLT suggests that under mild assumptions \(\bar{X}\longrightarrow N(\mu,\sigma^2/n)\) as \(n\to\infty\). In repeated spatiotemporal sampling, \(\forall\ n\), we observe \(K\) (longitudinal) sample means \(\{\bar{X}_{n1},\cdots, \bar{X}_{nK}\}\), where \(\forall\ 1\leq k\leq K\) each IID sample \(\{X_{1k},\cdots ,X_{nk}\}\sim F_X\) (across the kime-phase space) allows us to compute the \(k^{th}\) sample mean \(\bar{X}_{nk}\).
Observe that computing \(\bar{X}_{nk}\) from the \(k^{th}\) sample \(\{X_{1k},\cdots ,X_{nk}\}\) is identical to directly sampling \(\bar{X}_{nk}\) from the distribution \(F_{\bar{X}_n}\). Let's reflect on this setup, which involves two independent sampling strategies.
Note that the distribution \(F_{\bar{X}_n}\) need not be normal; it may be, but in general it is not. For instance, suppose our spatiotemporal sampling scheme involves an exponential distribution, i.e., \[\{X_{1k_o},\cdots , X_{nk_o}\}\sim f_X\equiv Exp\left (\underbrace{\lambda}_{rate=\frac{1}{scale}}\right)\equiv\Gamma \left (\underbrace{1}_{shape},\underbrace{\lambda}_{rate=\frac{1}{scale}} \right )\] and \(\sum_{i=1}^n{X_{ik_o}}\sim \Gamma\left(n,\lambda\right )\), where
\[f_{Exp{(\lambda)}}(x)=\lambda e^{-\lambda x}\ \ ; \ \mathbb{E}(X_{Exp{(\lambda)}})=\frac{1}{\lambda} \ \ ; \ Var(X_{Exp{(\lambda)}})= \frac{1}{\lambda^2}\ ; \\ f_{\Gamma(shape=\alpha,\ rate=\lambda)}(x)=\frac{x^{\alpha-1}\lambda^{\alpha}}{\Gamma(\alpha)}e^{-\lambda x}\ \ ; \ \mathbb{E}(X_{\Gamma(\alpha,\lambda)})=\frac{\alpha}{\lambda} \ \ ; \ Var(X_{\Gamma(\alpha,\lambda)})= \frac{\alpha}{\lambda^2}\ \ .\]
Based on this sample, the sampling distribution of \(\bar{X}_{nk_o}\equiv \frac{1}{n}\sum_{i=1}^n{X_{ik_o}}\) is \(\Gamma\left(n,n\lambda\right )\), since a sum of \(n\) IID \(Exp(\lambda)\) variables is gamma distributed, \(\sum_{i=1}^n{X_{ik_o}}\sim \Gamma\left(n,\lambda\right )\), and scaling a random variable, \(Y=cX\), yields the density \(f_Y(x)=\frac{1}{c}f_X\left(\frac{x}{c}\right)\); in our case the constant multiplier is \(c=\frac{1}{n}\) and \(Y=\frac{1}{n}\sum_{i=1}^n{X_i}\).
Of course, as \(n\to\infty\), the standardized \(\Gamma\left(n,n\lambda\right )\) distribution converges to \(N(0,1)\), i.e., \(\Gamma\left(n,n\lambda\right )\approx N\left(\frac{1}{\lambda},\frac{1}{n\lambda^2}\right)\) for large \(n\); yet, for any fixed \(n\), the distribution is similar to, but not identical to, a normal distribution.
Prior to data acquisition, \(\bar{X}_{nk_o}\) is a random variable; once the observed data values are plugged in, it is a constant. Hence, the sample mean random variable \(\bar{X}_{nk_o}\), based on \(\{X_{1k_o},\cdots , X_{nk_o}\}\sim F_X\), and the random variable \(Y\sim F_{\bar{X}_{nk_o}}\) represent exactly the same random variable. In other words, drawing \(K\) samples of IID observations \(\{X_{1k_o},\cdots , X_{nk_o}\}\sim F_X\) and computing \(\bar{X}_{nk_o}=\frac{1}{n}\sum_{i=1}^n{X_{ik_o}}\) is equivalent to drawing \(K\) samples directly from \(F_{\bar{X}_{nk_o}}\).
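A minimal simulation sketch (base R, assuming the \(Exp(\lambda)\) spatiotemporal sampling scheme above with illustrative values of \(\lambda\), \(n\), and \(K\)) verifies this equivalence empirically: the \(K\) repeatedly computed sample means of \(n\) exponential draws follow the \(\Gamma(n, n\lambda)\) sampling distribution.
set.seed(1)
lambda <- 2; n <- 10; K <- 5000
xbar <- replicate(K, mean(rexp(n, rate = lambda)))   # K repeated (kime-phase) sample means
hist(xbar, breaks = 50, freq = FALSE,
     main = "Sampling distribution of the exponential sample mean")
curve(dgamma(x, shape = n, rate = n*lambda), add = TRUE, lwd = 2, col = "blue")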
Below is an example of a 3D Brownian motion (Wiener process), a prototypical stochastic random walk. In this example, the Wiener process is intentionally perturbed by random Poisson noise, which leads to occasional rapid and stochastic disruptions.
library(plotly)
# 3D Wiener Process
N=500 # Number of random walk steps
# Define the X, Y, and Z, coordinate displacements independently
# xdis = rnorm(N, 0 , 1)
# ydis = rnorm(N, 0 , 2)
# zdis = rnorm(N, 0 , 3)
# xdis = cumsum(xdis)
# ydis = cumsum(ydis)
# zdis = cumsum(zdis)
# To use simulated 3D MVN Distribution
dis <- mixtools::rmvnorm(N, mu=c(0,0,0),
sigma=matrix(c(1,0,0, 0,2,0, 0,0,3), ncol=3))
# aggregate the displacements to get the actual 3D Cartesian Coordinates
xdis = cumsum(dis[,1])
ydis = cumsum(dis[,2])
zdis = cumsum(dis[,3])
# add Poisson noise
at = rpois(N, 0.1)
for(i in c(1:N)) {
if(at[i] != 0) {
xdis[i] = xdis[i]*at[i]
ydis[i] = ydis[i]*at[i]
zdis[i] = zdis[i]*at[i]
}
}
# plot(xdis, ydis, type="l",
# main ="Brownian Motion in Two Dimension with Poisson Arrival Process",
# xlab="x displacement", ylab = "y displacement")
plot_ly(x=xdis, y=ydis, z=zdis, type="scatter3d", mode="markers+lines",
text=~c(1:N), hoverinfo='text',
marker=list(color='gray'), showlegend=F) %>%
# Accentuate the starting and ending points
add_markers(x=xdis[1], y=ydis[1], z=zdis[1], marker=list(size=20,color="green"),
text=paste0("Starting Node 1")) %>%
add_markers(x=xdis[N], y=ydis[N], z=zdis[N],
marker=list(size=20,color="red"), text=paste0("Ending Node ", N)) %>%
layout(title="3D Brownian Motion",
scene=list(xaxis=list(title="X displacement"),
yaxis=list(title="Y displacement"),
zaxis=list(title="Z displacement")))
# 1D Wiener Process
# dis = rnorm(N, 0, 1);
# at = rpois(N,1)
# for(i in 1:N) {
# if(at[i] != 0){
# dis[i]= dis[i]*at[i]
# }
# }
# dis = cumsum(dis)
# plot(dis, type= "l",
# main= "Brownian Motion in One Dimension with Poisson Arrival Process",
# xlab="time", ylab="displacement")
# ub = 20; lb = -20
# xdis = rnorm(N, 0 ,1)
# xdis1 = rep(1,N)
# xdis1[1] = xdis[1]
# for(i in c(1:(N-1))){
# if(xdis1[i] + xdis[i+1] > ub) { xdis1[i+1] <- ub }
# else if(xdis1[i] + xdis[i+1] < lb) { xdis[i+1] = lb }
# else { xdis1[i+1] = xdis1[i] + xdis[i+1] }
# }
#
# plot(xdis1, type="l",main="Brownian Motion with bound in 1-dim", xlab="displacement",ylab="time")
# Compute the pairwise Euclidean distances between all positions of the walk
df <- data.frame(cbind(xdis, ydis, zdis))
rowEuclidDistance <- dist(df)
plot_ly(z=as.matrix(rowEuclidDistance), type="heatmap") %>%
layout(title="Heatmap of Pairwise Euclidean Distances between Wiener Process Positions")
plot_ly(x=as.vector(as.matrix(rowEuclidDistance)), type = "histogram") %>%
layout(title="Histogram of Pairwise Euclidean Distances between Wiener Process Positions")
In our spacekime representation, we effectively have a (repeated-measurement) spatiotemporal process consisting of a 3D spatial Gaussian model that is dynamic in time. In other words, this is a 3D Gaussian process with a mean vector and a variance-covariance matrix tensor that both depend on time, \(\mu=\mu(t)=(\mu_x(t),\mu_y(t),\mu_z(t))'\) and \(\Sigma=\Sigma(t)=\left ( \begin{array}{ccc} \Sigma_{xx}(t) & \Sigma_{xy}(t) & \Sigma_{xz}(t)\\ \Sigma_{yx}(t) & \Sigma_{yy}(t) & \Sigma_{yz}(t) \\ \Sigma_{zx}(t) & \Sigma_{zy}(t) & \Sigma_{zz}(t) \end{array}\right )\).
The process distribution \(\mathcal{D}(x,y,z,t)\) is specified by \(\mu=\mu(t)\) and \(\Sigma=\Sigma(t)\). Given a spatial location, e.g., a brain voxel, the distribution's probability density function at \((x,y,z)\in\mathbb{R}^3\) depends on the time localization, \(t\in \mathbb{R}^+\). Actual repeated sample observations will draw phases from the phase distribution, \(\{\phi_i\}_i\sim\Phi_{[-\pi,\pi)}\), which are associated with the fixed spatiotemporal location \((x,y,z,t)\in\mathbb{R}^3\times \mathbb{R}^+\). The repeated spatiotemporal samples are \[\left\{\left (\underbrace{x_i}_{x(\phi_i)}, \underbrace{y_i}_{y(\phi_i)}, \underbrace{z_i}_{z(\phi_i)}, \underbrace{t_i}_{t(\phi_i)}\right )\right\}_i\in\mathbb{R}^3\times \mathbb{R}^+\ .\]
When the mean vector and the variance-covariance matrix vary with time \(t\), proper inference may require use of Wiener processes (Brownian motion), Ito calculus, Heston models, or stochastic differential equation models.
SDEs are generally expressed in differential or integral form.
\[(\text{Differential Form})\ \mathrm{d} X_t = \mu(X_t,t)\, \mathrm{d} t + \sigma(X_t,t)\, \mathrm{d} B_t ,\]
where \(B\) is a time-varying Wiener process, \(X\) is the (time-dependent) position of the system in its state space, and the spatiotemporally-dependent \(\mu\) and \(\sigma\) represent the process mean and standard deviation. This equation should be interpreted as an informal way of expressing the corresponding integral equation
\[(\text{Integral Form})\ X_{t+s} - X_{t} = \int_t^{t+s} \mu(X_u,u) \mathrm{d} u + \int_t^{t+s} \sigma(X_u,u)\, \mathrm{d} B_u \ .\]
The integral form of an SDE characterizes the behavior of a continuous-time stochastic process \(X_t\) as the sum of an ordinary Lebesgue integral and an Itô integral.
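As a minimal numerical sketch (with purely illustrative drift and diffusion functions, not tied to any particular dataset), the Euler-Maruyama scheme discretizes the differential form above as \(X_{t+\Delta t}\approx X_t+\mu(X_t,t)\Delta t+\sigma(X_t,t)\sqrt{\Delta t}\ Z\), with \(Z\sim N(0,1)\).
set.seed(123)
muF    <- function(x, t) 0.7*(1.5 - x)    # illustrative mean-reverting drift
sigmaF <- function(x, t) 0.3              # illustrative constant diffusion
T_end <- 10; Nsteps <- 1000; dt <- T_end/Nsteps
X <- numeric(Nsteps + 1); X[1] <- 0
for (i in 1:Nsteps) {
  t_i <- (i - 1)*dt
  X[i + 1] <- X[i] + muF(X[i], t_i)*dt + sigmaF(X[i], t_i)*sqrt(dt)*rnorm(1)
}
plot(seq(0, T_end, by = dt), X, type = "l", xlab = "t", ylab = "X_t (Euler-Maruyama path)")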
The Itô stochastic integral represents a generalization of the classical Riemann–Stieltjes integral, where the integrand \(H_s\) and the integrator \(dX_s\) are time-dynamic stochastic processes
\[Y_t = \int_0^t {H_s\ dX_s}\ .\]
The integrand \(H_s\) is a locally square-integrable process that is non-anticipating, i.e., adapted, with respect to the filtration generated by the Brownian motion integrator \(X_s\). A filtration \(\{\mathcal{F}_t\}_{t\ge 0}\) on a probability space \((\Omega,\mathcal{F}, P)\) is a collection of sub-sigma-algebras of \(\mathcal{F}\) such that \(\mathcal{F}_s\subseteq\mathcal{F}_t\), \(\forall s\le t\). In other words, the filtration \(\mathcal{F}_t\) represents the set of events observable by time \(t\). A probability space enriched with a filtration forms a filtered probability space \((\Omega,\mathcal{F},\{\mathcal{F}_t\}_{t\ge 0},P)\).
A sigma-algebra \(\sigma(\cdot)\) is generated by a collection of subsets of the state space \(\Omega\). Filtrations lead to the definition of adapted processes: a stochastic process \(X\) is adapted when \(X_t\) is an \(\mathcal{F}_t\)-measurable random variable for each time \(t\ge 0\), i.e., we can observe the value \(X_t=x_t\) at time \(t\). The smallest filtration with respect to which the stochastic process is adapted is called the natural filtration of \(X\), \(\mathcal{F}^X_t=\sigma\left(X_s\colon s\le t\right)\). The measurability constraints on the process imply that the process can be considered as a map \(X:{\mathbb R}_+\times\Omega\rightarrow{\mathbb R}\), \({\mathbb R}_+\times\Omega\ni (t,\omega)\longrightarrow X_t(\omega) \in {\mathbb R}\). This makes each \(X_t\) an \(\mathcal{F}_t\)-measurable random variable, i.e., its distribution is computable. Given an increasing filtration \(\{\mathcal{F}_t\}_{t\ge 0}\), the stochastic process \(\{X_t\}_{t}\) is adapted when each \(X_t\) is \(\mathcal{F}_t\)-measurable.
The stochastic Ito integral can be written in different (short-hand) forms:
\[Y_t = \int_0^t H\ dX \equiv \int_0^t {H_s}\ dX_s\ .\]
The time-parameterized stochastic process \(Y_t\) can also be expressed as \(Y = H \cdot X\) or as a differential form \(dY = H\ dX\), or even \(Y − Y_o = H\cdot X\) relative to an underlying filtered probability space \((\Omega ,{\mathcal {F}},({\mathcal {F}}_{t})_{t\geq 0},P )\), where the \(\sigma\)-algebra \(\mathcal {F}_{t}\) represents the information over time \(s\lt t\). The process \(X\) is adapted when \(X_t\) is \(\mathcal {F}_{t}\)-measurable.
As the earlier 3D Wiener process experiment shows, increments of a Wiener process are independent, and for Gaussian processes, the increments are expected to be normally distributed. As a time-varying function, the process \(\mu=\mu(t)\) is interpreted as drift or trend, whereas the time-varying standard deviation \(\sigma=\sigma(t)\) is referred to as diffusion.
Stochastic processes \(X_t\) satisfying the Markov property are sometimes called diffusion processes. Independent of all past behavior prior to the current time \(t\), over a tiny time interval of length \(|[t,t+s]|=\delta\), the stochastic process \(X_t\) changes its value according to a distribution with expected value \(\mu(X_t, t)\delta\) and variance \(\sigma(X_t, t)^2\delta\).
The Markov property constrains the conditional expectation and conditional probability of the future of a Wiener process path, given its present and its past. Denote the path over a time interval \([t_1, t_2],\ t_1 \lt t_2\), by \(X_{[t_1,t_2]}\). Also denote the time interval prior to time \(t_1\) by \([0, t_1]\), the past information about the process path by \(X_{[0,t_1]}\), and the present information by \(X_{t_1}\). Then, the Markov property about the future process trajectory beyond \(t_1\) requires that the distribution of \(X_{[t_1,t_2]}\) conditional on the past, \(X_{[0,t_1]}\), is the same as the distribution conditional on the present, \(X_{t_1}\). That is, the distribution conditional on \(X_{[0,t_1]}\) is the same as the distribution solely conditional on \(X_{t_1}\).
The Heston model is an example of an SDE describing the evolution of the volatility of a time-process, such as the valuation of an asset. Let \(S_t\) denote the valuation (price) of the asset; then the Heston SDE model is
\[dS_t=\mu S_t\ dt + {\sqrt {\nu_t}}S_t\,dW_t^S\ ,\]
where \(\nu_t\) is the variance at time \(t\), which evolves according to the Feller square-root, or Cox–Ingersoll–Ross (CIR), process
\[d\nu_t=\kappa (\theta -\nu_t)\ dt + \xi {\sqrt {\nu_t}}\ dW_t^{\nu}\ ,\]
where \(\theta\) is the long-run variance, \(\kappa\) is the rate of mean reversion, \(\xi\) is the volatility of the volatility, and \(W_t^{\nu}\) is a Wiener process that is generally correlated with \(W_t^{S}\).
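A minimal Euler-discretization sketch of the Heston system (hypothetical parameter values, with a simple truncation guard against negative variance; not calibrated to any real asset):
set.seed(42)
S0 <- 100; v0 <- 0.04; muH <- 0.05                  # illustrative price, variance, drift
kappa <- 1.5; theta <- 0.04; xi <- 0.3; rho <- -0.6 # illustrative CIR parameters
Nsteps <- 2000; dt <- 1/Nsteps
S <- numeric(Nsteps + 1); v <- numeric(Nsteps + 1); S[1] <- S0; v[1] <- v0
for (i in 1:Nsteps) {
  zS <- rnorm(1); zv <- rho*zS + sqrt(1 - rho^2)*rnorm(1)  # correlated Wiener increments
  vp <- max(v[i], 0)                                       # truncate negative variance
  v[i + 1] <- v[i] + kappa*(theta - vp)*dt + xi*sqrt(vp*dt)*zv
  S[i + 1] <- S[i] + muH*S[i]*dt + sqrt(vp*dt)*S[i]*zS
}
plot(S, type = "l", xlab = "step", ylab = "simulated asset price S_t")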
We may need to explore the utility of reproducing kernel Hilbert spaces (RKHS) for AI regression and classification tasks, especially for modeling time-varying probability distributions and the kime-phase representation.
A subset of a topological space is called a Borel set if it can be expressed as a countable union, countable intersection, and relative complement of either open subsets or, equivalently, closed subsets of the topological space.
For instance, if the reals \(\mathbb{R}\) represent the base topological space, for any countable collection of open subsets \(A_{i}\mathbb{\subseteq R,\ }i\mathbb{\in N,}\) and any finite collection of open subsets \(B_{j}\mathbb{\subseteq R,\ }1 \leq j \leq m\), let
\[A = \bigcup_{i = 1}^{\infty}A_{i}\ ,\ B = \bigcap_{j = 1}^{m}B_{j}\ .\]
Then, all of these subsets \(A,\ \left\{ A_{i} \right\},\ B,\ \left\{ B_{j} \right\}\) are Borel sets. The Borel \(\sigma\)-algebra on \(\mathbb{R}\) is a nonempty collection \(\Sigma\) of subsets of \(\mathbb{R}\) closed under complement, countable unions, and countable intersections. In this situation, the ordered pair \(\mathbb{(R,\ }\Sigma)\) is called a measurable space, which is subsequently coupled with a probability measure, such as the distribution of the kime-phase.
The Borel algebra on the reals is the smallest \(\sigma\)-algebra on \(\mathbb{R}\) that contains all the intervals. Also, given a real random variable, such as the kime-phase \(\varphi\), defined on a probability space, such as \(\left( \Omega \equiv \lbrack - \pi,\pi),\ E,\ Pr \equiv \Phi \right)\), where \(E\) represents observable events, the variable probability distribution \(\varphi \sim \Phi\) is a measure on the Borel \(\sigma\)-algebra.
A measure \(\nu\) on Borel subsets of the measurable space \((X,\ \Sigma)\) is absolutely continuous with respect to another measure \(\mu\) if \(\forall\ \mu\)-measurable sets \(A \subseteq X\), \(\mu(A) = 0 \Longrightarrow \nu(A) = 0\). We denote absolute continuity by \(\nu \ll \mu\) indicating that the measure \(\nu\) is dominated by the measure \(\mu\).
When \(X\mathbb{\equiv R}\), the following conditions for a finite measure \(\nu\) on the Borel subsets of the real line are equivalent:
1. \(\nu\) is absolutely continuous with respect to the Lebesgue measure \(\mu\) over the reals;
2. \(\forall\varepsilon \gt 0\), \(\exists\delta_{\varepsilon} \gt 0\) such that \(\nu(A) < \varepsilon\) for all Borel sets \(A\) of Lebesgue measure \(\mu(A) < \delta_{\varepsilon}\);
3. There exists a Lebesgue integrable function \(h( \cdot )\mathbb{:R \rightarrow}\mathbb{R}^{+}\), such that for all Borel sets \(A\)
\[\nu(A) = \int_{A}^{}{h\ d\mu},\ \ i.e.,\ \ \ \ h = \frac{\partial\nu}{\partial\mu}\ .\]
The last equivalence condition (3) suggests that for any pair of absolutely continuous measures \(\nu \ll \mu\), there exists a pseudo-derivative of the measure \(\nu\) with respect to its dominant measure \(\mu\), which can be denoted by \(h \equiv h_{\nu,\mu}\).
Suppose \(\nu\) and \(\mu\) are a pair of \(\sigma\)-finite measures defined on a measurable space \((X\ ,\ \Sigma)\). When \(\nu\) is absolutely continuous with respect to \(\mu\), i.e., \(\nu \ll \mu\), there exists a \(\Sigma\)-measurable function \(h:X \rightarrow \ \lbrack 0,\infty)\), such that for any measurable set \(A \subseteq \ X\), \(h\) is the Radon-Nikodym derivative of \(\nu\) with respect to the dominating measure \(\mu\)
\[\nu(A) = \int_{A}^{}{h\ d\mu},\ \ \ i.e.,\ \ \ h = \frac{\partial\nu}{\partial\mu}\ .\]
Furthermore, the Radon-Nikodym derivative function \(h\) is uniquely defined up to a \(\mu\)-zero measure set. In other words, if \(g\) is another Radon-Nikodym derivative of \(\nu\), then \(h \equiv g\) almost everywhere, except potentially on a set \(B \subseteq X\) of trivial measure, \(\mu(B) = 0\).
The Radon-Nikodym derivative is similar to the classical derivative, as it describes the rate of change of the density of \(\nu\) (the marginalized, numerator measure) with respect to the density of \(\mu\) (the dominating, denominator measure), just like the determinant of the Jacobian describes variable transformations (change of variables) in multivariable integration.
Example 1 (Classical Jacobian): Consider the 2D polar-to-Cartesian coordinate frame transformation
\[(r,\ \varphi) \in \mathbb{R}^{+} \times \lbrack - \pi,\pi)\underbrace{\longrightarrow}_{F}\ (x,\ y) \in \mathbb{R}^{2},\ \ F(r,\ \varphi) = \begin{pmatrix} x \\ y \\ \end{pmatrix} = \begin{pmatrix} r\cos\varphi \\ r\sin\varphi \\ \end{pmatrix}\ .\]
The Jacobian of the transformation \(F\) is the \(2 \times 2\) matrix of partial derivatives
\[\ \ J_{F}(r,\ \varphi) = \begin{pmatrix} \frac{\partial x}{\partial r} & \frac{\partial x}{\partial\varphi} \\ \frac{\partial y}{\partial r} & \frac{\partial y}{\partial\varphi} \\ \end{pmatrix} = \begin{pmatrix} \cos\varphi & - r\sin\varphi \\ \sin\varphi & r\ \cos\varphi \\ \end{pmatrix}\ ,\ \ \ \det\left( J_{F} \right) \equiv \left| J_{F} \right| = r\ .\]
The effect of the change-of-variables transformation \(F\) on computing the definite integrals is
\[\iint_{F(A)}^{}{f(x,y)}dx\ dy = \ \iint_{A}^{}{f(r\cos\varphi,r\sin\varphi)}\left| J_{F} \right|\ dr\ d\varphi = \iint_{A}^{}{f(r\cos\varphi,r\sin\varphi)}r\ dr\ d\varphi\ .\]
Example 2 (Radon-Nikodym derivative of the Euclidean measure): Suppose the topological space is the support of the kime-phase, \(X \equiv \lbrack - \pi,\pi)\), and \(\Sigma\) is the Borel \(\sigma\)-algebra on \(X\).
For any open interval \(I = (a,b) \subseteq \lbrack - \pi,\pi) \equiv X\), we can take \(\mu\) to be twice the length measure of the interval \(I\), hence, \(\mu(I) = 2(b - a)\). Let’s also choose \(\nu\) to be the standard Euclidean measure of the interval, i.e., \(\nu(I) = (b - a)\). In this case, \(\nu\) is absolutely continuous with respect to \(\mu\), i.e., \(\nu \ll \mu\). Then, the Radon-Nikodym derivative of \(\nu\) with respect to \(\mu\) will be constant, \(h = \frac{\partial\nu}{\partial\mu} = \frac{1}{2}\).
Example 3 (Undefined Radon-Nikodym derivative). Let \(\mu\) be the Euclidean (length) measure on \(X\) and \(\nu\) be a special measure that assigns to each subset \(Y \subseteq X \equiv \lbrack - \pi,\pi)\) the number of points from the set \(\{ -3, -2, -1, 0, 1, 2, 3\} \subseteq X\) that are contained in \(Y\). Then, \(\nu\) is not absolutely continuous with respect to \(\mu\), since \(\nu\left( Y \equiv \left\{ - 3 \right\} \right) = 1 \neq 0\), whereas \(\mu\left( Y \equiv \left\{ - 3 \right\} \right) = 0\). Therefore, the Radon-Nikodym derivative \(\frac{\partial\nu}{\partial\mu}\) is undefined. In other words, there is no finite function \(h\) such that
\[1 = \nu\left( Y \equiv \left\{ - 3 \right\} \right) \neq \int_{- 3 - \varepsilon}^{- 3 + \varepsilon}{h\ d\mu}\ ,\ \forall\ \varepsilon \gt 0\ ,\]
since \(\lim_{\varepsilon \rightarrow 0}{\int_{- 3 - \varepsilon}^{- 3 + \varepsilon}{h\ d\mu}} = 0\) for all finite functions \(h\).
Example 4 (Discontinuous Radon-Nikodym derivative). Let’s choose the relation between the dominating and the dominated measures to be \(\mu\ = \nu\ + \delta_{o}\), where \(\nu\) is the Euclidean (length) measure on \(X \equiv \lbrack - \pi,\pi)\) and \(\delta_o(A) \equiv 1_A(0) = \begin{cases} 0,\ 0 \notin A \\ 1,\ 0 \in A \end{cases}\) is the Dirac measure at \(0\). Then, \(\nu \ll \mu\), and the Radon-Nikodym derivative is a discontinuous function
\[h(x) = \frac{\partial\nu}{\partial\mu} = 1_{X \smallsetminus \{ 0\}} \equiv \left\{ \begin{matrix} 0,\ x = 0 \\ 1,x \neq 0\ \\ \end{matrix}\ . \right.\ \]
Example 5 (Radon-Nikodym kime-phase derivative): Again, we use the topological space with support equal to the support of the kime-phase, \(X \equiv \lbrack - \pi,\pi)\), and \(\Sigma\) is again the Borel \(\sigma\)-algebra on \(X\).
Let’s explore a more realistic kime-phase example, where we take the dominant measure \(\mu\) to be \(2 \times Laplace(\mu = 0,\ \sigma = 0.5)\) and the marginalized (dominated) measure \(\nu\) to be a different Laplace distribution, \(Laplace(\mu = 0,\ \sigma = 0.4)\). Again, by design, \(\nu\) is absolutely continuous with respect to \(\mu\), i.e., \(\nu \ll \mu\). For details, see the graph below and the R code in the Appendix.
Then, denote by \(h = \frac{\partial\nu}{\partial\mu}\) the Radon-Nikodym derivative of the phase distribution \(\nu\) with respect to the measure \(\mu\). Note that \(\nu\) is a probability distribution, whereas \(\mu\) is only a measure, which can be normalized to a distribution.
library(plotly)
N <- 10000
xNu <- extraDistr::rlaplace(N, mu = 0, sigma = 0.4)
yNu <- density(xNu, bw=0.2)
xMu <- extraDistr::rlaplace(N, mu = 0, sigma = 0.5)
yMu <- density(xMu, bw=0.2)
# Scale the second Laplace density by 2 to define the dominating measure mu = 2 x Laplace(0,0.5), so nu << mu
yMu$y <- 2*yMu$y
plot_ly(x = xNu, type = "histogram", name = "Data Histogram") %>%
add_trace(x=yNu$x, y=yNu$y, type="scatter", mode="lines", opacity=0.3,
fill="tozeroy", yaxis="y2", name="nu, Laplace(N,0,0.4) Density") %>%
add_trace(x=yMu$x, y = yMu$y, type="scatter", mode="lines", opacity=0.3,
fill="tozeroy", yaxis="y2", name="mu, Laplace(N,0,0.5) Density") %>%
layout(title="Absolutely Continuous Laplace Distributions, nu<<mu",
yaxis2 = list(overlaying = "y", side = "right"),
xaxis = list(range = list(-pi, pi)),
legend = list(orientation = 'h'))
## 1.000775 with absolute error < 3.4e-06
## 1.99773 with absolute error < 0.00021
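The two printed values above likely reflect a numerical check that the \(\nu\) density estimate integrates to approximately \(1\) and the scaled \(\mu\) density to approximately \(2\); a minimal sketch of such a check, assuming the yNu and yMu objects defined in the code above, is:
# numerically integrate the two (scaled) kernel density estimates over [-pi, pi)
integrate(approxfun(yNu$x, yNu$y, rule = 2), -pi, pi)   # ~ 1, nu is a probability density
integrate(approxfun(yMu$x, yMu$y, rule = 2), -pi, pi)   # ~ 2, mu = 2 x Laplace(0,0.5) density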
Here is a recipe to compute the (unique) Radon-Nikodym derivative \(h = \frac{\partial\nu}{\partial\mu}\) over any interval
\[I = (a,b) \subseteq \lbrack - \pi,\pi) \equiv X\ .\]
Since \(\nu \sim Laplace(\mu = 0,\ \sigma = 0.4)\), its Laplace CDF \(F_{\nu}\) is
\[F_{\nu} = \frac{1}{2} + \frac{1}{2}sign(x - \mu)\left( 1 - e^{- \left( \frac{|x - \mu|}{\sigma} \right)} \right)\]
and therefore,
\[\nu(I) = \nu(a,b) = F_{\nu}(b) - F_{\nu}(a) =\]
\[\frac{1}{2}\left\lbrack sign(b)\left( 1 - e^{- \left( \frac{|b|}{0.4} \right)} \right) - sign(a)\left( 1 - e^{- \left( \frac{|a|}{0.4} \right)} \right) \right\rbrack.\]
Also,
\[\nu(I) = \nu(a,b) = \frac{1}{2\sigma}\int_{- \pi\ }^{\pi}{\mathbf{1}_{\left\{ (a,b) \right\}}\ e^{- \left( \frac{|x - \mu|}{\sigma} \right)}}dx =\]
\[\ \frac{1}{2 \times 0.4}\int_{- \pi\ }^{\pi}{\mathbf{1}_{\{(a,b)\}}\ e^{- \left( \frac{|x|}{0.4} \right)}}dx = \int_{- \pi\ }^{\pi}{\mathbf{1}_{\{(a,b)\}}\ \underbrace{\frac{e^{- \left( \frac{|x|}{0.4} \right)}}{0.8}dx}_{d\nu}}\ .\]
Next, we need to change the variables to transform the above integral to
\[\nu(I) \equiv \nu(a,b) = \int_{a\ }^{b}\frac{d\nu}{d\mu}d\mu = \int_{- \pi\ }^{\pi}{\mathbf{1}_{\{(a,b)\}}\ \frac{d\nu}{d\mu}d\mu}\ .\]
By the uniqueness of the Radon-Nikodym derivative, the function \(h(a,b) = \frac{d\nu}{d\mu}\) will be the desired derivative, up to a set \(B \subseteq \lbrack - \pi,\pi)\) with trivial measure, \(\mu(B) = 0\). Recall that in this example, the dominant measure \(\mu\) is \(2 \times Laplace(\mu = 0,\ \sigma = 0.5)\), i.e., twice the Laplace density, whereas \(\nu\) is a Laplace distribution with a different scale parameter, \(\sigma\), \(\nu \sim Laplace(\mu = 0,\ \sigma = 0.4)\).
Rearranging the terms, we get
\[\nu(a,b) = \int_{- \pi\ }^{\pi}{\mathbf{1}_{\{(a,b)\}}\ h(x) \times \underbrace{2\frac{e^{- \left( \frac{|x|}{0.5} \right)}}{2 \times 0.5}dx}_{d\mu}} = \int_{a\ }^{b}\frac{d\nu}{d\mu}d\mu.\]
Therefore, we need to solve for \(h(x)\) this equation
\[\underbrace{\frac{e^{- \left( \frac{|x|}{0.4} \right)}}{0.8}dx}_{d\nu} \equiv h(x) \times \underbrace{2\frac{e^{- \left( \frac{|x|}{0.5} \right)}}{2 \times 0.5}dx}_{d\mu}\ \ .\]
The solution of this equation for \(h\) is the (unique) Radon-Nikodym derivative of \(\nu\) with respect to \(\mu\)
\[h(x) = \frac{\frac{e^{- \left( \frac{|x|}{0.4} \right)}}{0.8}}{2\frac{e^{- \left( \frac{|x|}{0.5} \right)}}{2 \times 0.5}} = \frac{\frac{e^{- \left( \frac{|x|}{0.4} \right)}}{0.8}}{2e^{- \left( \frac{|x|}{0.5} \right)}} = \frac{e^{- \left( \frac{|x|}{0.4} \right)}}{0.8 \times 2e^{- \left( \frac{|x|}{0.5} \right)}} = \frac{e^{- \left( \frac{|x|}{0.4} \right) + \left( \frac{|x|}{0.5} \right)}}{1.6} = \frac{5}{8}e^{- \frac{|x|}{2}} \equiv \frac{5}{2} \left( \underbrace{\frac{1}{2 \times 2}\ e^{- \frac{|x|}{2}}}_{LaplaceDistr(\mu = 0,\ \sigma = 2)} \right).\]
Therefore, given that \(\nu \sim Laplace(\mu = 0,\ \sigma = 0.4)\), \(\mu \sim 2 \times Laplace(\mu = 0,\ \sigma = 0.5)\), and \(\nu \ll \mu\), the Radon-Nikodym derivative is \(\frac{d\nu}{d\mu} = h(x) = \frac{5}{2} \times f_{Laplace(\mu = 0,\ \sigma = 2)}(x)\), i.e., \(\frac{5}{2}\) times the \(Laplace(0,\ 2)\) density.
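A quick numerical sanity check of this result (a base R sketch, using an arbitrary illustrative interval \((a,b)\subset[-\pi,\pi)\)) compares \(\nu(a,b)\), computed directly from the \(Laplace(0,0.4)\) density, with \(\int_a^b h\ d\mu\):
dLap <- function(x, s) exp(-abs(x)/s) / (2*s)        # Laplace(0, s) density
h    <- function(x) (5/8) * exp(-abs(x)/2)           # Radon-Nikodym derivative dnu/dmu
a <- -0.7; b <- 1.2                                  # arbitrary illustrative interval
nu_direct <- integrate(function(x) dLap(x, 0.4), a, b)$value
nu_via_h  <- integrate(function(x) h(x) * 2 * dLap(x, 0.5), a, b)$value
c(nu_direct = nu_direct, nu_via_h = nu_via_h)        # the two values agree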
Given several \(\sigma\)-finite measures, the following properties of the Radon-Nikodym derivative (additivity, chain rule, reciprocation, change of variables, and magnitude) are helpful in estimating the derivative function \(h\) in certain situations by breaking the calculations into basic building components.
Suppose \(\nu,\ \mu\), and \(\lambda\) are \(\sigma\)-finite measures on the same kime-phase measurable space \(X \equiv \lbrack - \pi,\pi)\). Assuming \(\nu\) and \(\mu\) are both absolutely continuous with respect to \(\lambda\), i.e., \(\nu \ll \lambda\) and \(\mu \ll \lambda\), then \(\lambda\)-almost everywhere in \(X\)
\[\frac{d(\nu \pm \ \mu)}{d\lambda} = \frac{d\nu}{d\lambda} \pm \frac{d\mu}{d\lambda}\ .\]
Proof: We have that \(\nu(A)=\int_A{\frac{d\nu}{d\lambda}d\lambda }\), \(\mu(A)=\int_A{\frac{d\mu}{d\lambda}d\lambda }\), and \(\nu,\mu \ll \lambda\). Hence, \(\lambda(A)=0\) implies that \(\nu(A)=0\) and \(\mu(A)=0\), and therefore \((\nu \pm \mu)(A)=0\), i.e., \((\nu \pm \mu)\) is an absolutely continuous measure with respect to \(\lambda\), \((\nu\pm\mu) \ll \lambda\). This implies the additivity property,
\[\forall A,\ \nu(A) \pm \mu(A)=(\nu \pm \mu)(A)= \int_A{\frac{d(\nu \pm \ \mu)}{d\lambda}d\lambda}= \int_A{\left(\frac{d\nu}{d\lambda}\pm\frac{d\mu}{d\lambda}\right) d\lambda} \Longrightarrow \frac{d(\nu \pm \ \mu)}{d\lambda} = \frac{d\nu}{d\lambda} \pm \frac{d\mu}{d\lambda}\ .\]
If \(\nu \ll \mu \ll \lambda\), then \(\lambda\)-almost everywhere \[\frac{d\nu}{d\lambda} = \frac{d\nu}{d\mu} \cdot \frac{d\mu}{d\lambda}\ .\]
Proof: Since \(\nu \ll \mu \ll \lambda\), \(\nu(A)=\int_A{\frac{d\nu}{d\lambda}d\lambda }\), \(\nu(A)=\int_A{\frac{d\nu}{d\mu}d\mu }\). As \(\mu \ll \lambda\), \(\exists \frac{d\mu}{d\lambda}\), such that
\[d\mu=\frac{d\mu}{d\lambda}d\lambda \Longrightarrow \nu(A)=\int_A{\frac{d\nu}{d\lambda}d\lambda }= \int_A{\frac{d\nu}{d\mu}\cdot \frac{d\mu}{d\lambda}d\lambda } \Longrightarrow \frac{d\nu}{d\lambda} = \frac{d\nu}{d\mu} \cdot \frac{d\mu}{d\lambda}\ .\]
If \(\mu \ll \nu\) and \(\nu \ll \mu\), then the Radon-Nikodym derivative \(\frac{d\mu}{d\nu}\) is the reciprocal of the Radon-Nikodym derivative \(\frac{d\nu}{d\mu}\), i.e., \[\frac{d\mu}{d\nu} = \left( \frac{d\nu}{d\mu} \right)^{- 1}\ .\]
Proof: Since \(\mu \ll \nu\) and \(\nu \ll \mu\), \(d\nu=\frac{d\nu}{d\mu}d\mu\). Hence,
\[\mu(A)=\int_A{\frac{d\mu}{d\nu}d\nu }=\int_A{\frac{d\mu}{d\nu}\frac{d\nu}{d\mu}d\mu }= \int_A{1\, d\mu }.\] Hence, \(\frac{d\mu}{d\nu}\frac{d\nu}{d\mu}\overbrace{=}^{a.e.}1\) \(\Longrightarrow \frac{d\mu}{d\nu}\overbrace{=}^{a.e.}\left(\frac{d\nu}{d\mu}\right )^{-1}\).
For any \(\mu\)-integrable function, \(g\), given that \(\mu \ll \lambda\), then \[\int_{- \pi}^{\pi}{g\ d\mu} = \ \ \int_{- \pi}^{\pi}{g\frac{d\mu}{d\lambda}d\lambda}\ .\]
Proof: Since \(\mu \ll \lambda\), \(\exists \frac{d\mu}{d\lambda}\) such that \(d\mu=\frac{d\mu}{d\lambda}d\lambda\). Therefore, \[\int_{- \pi}^{\pi}{g\ d\mu} = \ \ \int_{- \pi}^{\pi}{g\frac{d\mu}{d\lambda}d\lambda}\ .\]
When \(\nu\) is a complex measure and \(\nu \ll \mu\), the Radon-Nikodym derivative of the magnitude \(|\nu|\) is the magnitude of the Radon-Nikodym derivative of \(\nu\), i.e.,
\[\frac{d|\nu|}{d\mu} = \left| \frac{d\nu}{d\mu} \right|\ .\]
Proof: This proof follows Cohn's Measure Theory (1980), doi:10.1007/978-1-4899-0399-0, pages 135-136.
According to the definition of \(\left|\nu\right|\), for each \(A\) in the \(\sigma\)-algebra \(\Sigma\), \(\left|\nu\right|\left(A\right)\) is the supremum of the numbers \(\sum_{j=1}^{n}\left|\nu\left(A_j\right)\right|\),
\[\left|\nu\right|\left(A\right) = \sup\sum_{j=1}^{n}\left|\nu\left(A_j\right)\right|,\]
where \(\{A_j\}\ \left(j = 1,2,\cdots,n\right)\) ranges over all finite partitions of \(A\) into \(\Sigma\)-measurable sets.
Assume \(\nu\left(A\right) = \int_Af\,\mathrm{d}\mu\) for any Borel set \(A\). Then \(\frac{\mathrm{d}\left|\nu\right|}{\mathrm{d}\mu} = \left|\frac{\mathrm{d}\nu}{\mathrm{d}\mu}\right|\) if and only if \(\left|\nu\right|\left(A\right) = \int_A\left|f\right|\mathrm{d}\mu\) for any Borel set \(A\). We therefore show that \(\left|\nu\right|\left(A\right) = \int_A\left|f\right|\mathrm{d}\mu\) for any Borel set \(A\).
First, we prove \(\left|\nu\right|\left(A\right)\leq\int_A\left|f\right|\mathrm{d}\mu\) for any Borel set \(A\). Let \(\{A_j\}\ \left(j = 1,2,\cdots,n\right)\) be a finite sequence of disjoint \(\Sigma\)-measurable sets whose union is \(A\).
Thus, \[\sum_{j=1}^{n}\left|\nu\left(A_j\right)\right| = \sum_{j=1}^{n}\left|\int_{A_j}f\,\mathrm{d}\mu\right| \leq \sum_{j=1}^{n}\int_{A_j}\left|f\right|\mathrm{d}\mu = \int_A\left|f\right|\mathrm{d}\mu .\]
Since \(\left|\nu\right|\left(A\right) = \sup\sum_{j=1}^{n}\left|\nu\left(A_j\right)\right|\), we can derive \[\left|\nu\right|\left(A\right) \leq \int_A\left|f\right|\mathrm{d}\mu .\]
Next, we prove \(\left|\nu\right|\left(A\right)\geq\int_A\left|f\right|\mathrm{d}\mu\). Construct a sequence \(\{g_n\}\) of \(\Sigma\)-measurable simple functions \[g_n\left(x\right) = \sum_{j=1}^{k_n}a_{n,j}\,\mathbb{I}_{A_{n,j}}\left(x\right),\qquad j = 1,2,\cdots,k_n, \] where the values \(a_{n,j}\) (approximating \(sgn\left(f\left(x\right)\right)\)) are attained on the sets \(A_{n,j}\), and \(A_{n,j},\ j = 1,2,\cdots,k_n\), are disjoint \(\Sigma\)-measurable sets whose union is \(A\). By construction, \(g_n\) satisfies \(\left|g_n\left(x\right)\right| = 1\) and \(\lim_{n \to +\infty}g_n\left(x\right)f\left(x\right) = \left|f\left(x\right)\right|\) at each \(x\) in \(X\). Then, for any arbitrary set in the \(\sigma\)-algebra \(\Sigma\) we have
\[\left|\int_Ag_nf\,\mathrm{d}\mu\right| = \left|\sum_{j=1}^{k_n}a_{n,j}\int_{A\cap A_{n,j}}f\,\mathrm{d}\mu\right| = \left|\sum_{j=1}^{k_n}a_{n,j}\nu\left(A \cap A_{n,j}\right)\right| \leq \sum_{j=1}^{k_n}\left|a_{n,j}\right|\left|\nu\left(A \cap A_{n,j}\right)\right| \leq \left|\nu\right|\left(A\right) .\]
Since \(\left|\int_Ag_nf\,\mathrm{d}\mu\right| \leq \left|\nu\right|\left(A\right)\) and \(\lim_{n \to +\infty}\int_A\left|g_nf-\left|f\right|\right|\mathrm{d}\mu = 0\), according to the dominated convergence theorem, we have \[\lim_{n \to +\infty}\int_Ag_nf\,\mathrm{d}\mu = \int_A\left|f\right|\mathrm{d}\mu .\]
Thus, \[\int_A\left|f\right|\mathrm{d}\mu = \lim_{n \to +\infty}\int_Ag_nf\,\mathrm{d}\mu \leq \left|\nu\right|\left(A\right) .\] Combining the two inequalities,
\[\left|\nu\right|\left(A\right) = \int_A\left|f\right|\mathrm{d}\mu\] for any Borel set \(A\). Therefore,
\[\frac{\mathrm{d}\left|\nu\right|}{\mathrm{d}\mu} = \left|\frac{\mathrm{d}\nu}{\mathrm{d}\mu}\right|\ .\]
Distributions, also known as generalized functions, generalize the classical notion of functions in mathematical analysis to operators. Distributions make it possible to differentiate functions whose classical derivatives do not exist. For instance, any locally integrable function has a distributional derivative, even if it is not differentiable in the classical sense.
In classical mathematical analysis, a function \(f:D\mathbb{\rightarrow R}\) can be thought of as an operator acting on domain points \(x \in D\) by mapping them to corresponding points in the range, \(f(x)\mathbb{\in R}\). In functional analysis and distribution theory, a function \(f\) may instead be interpreted as an operator acting on test functions, i.e., the infinitely differentiable, compactly supported, complex-valued functions defined on some non-empty open subset \(U \subseteq \ \mathbb{R}^{n}\). The set of all such test functions forms a vector space
\[C_{c}^{\infty}(U)\mathfrak{\equiv D}(U) = \left\{\text{infinitely differentiable & compactly supported functions}:\ U \subseteq \mathbb{R}^{n}\mathbb{\rightarrow C} \right\}\ .\]
For instance, when \(U\mathbb{\equiv R}\), all continuous functions \(f\mathbb{:R \rightarrow R}\) act as operators integrating against a test function. Given a test function \(\psi \in \mathfrak{D}\left( \mathbb{R} \right)\), the operator \(f:D\left( \mathbb{R} \right)\mathbb{\rightarrow R}\) acts on \(\psi\) by mapping it to a real number as follows:
\[f\lbrack \cdot \rbrack \equiv \left\langle f| \cdot \right\rangle \equiv D_{f}( \cdot )\ :\ \underbrace{\psi}_{test function} \in \underbrace{\mathfrak{D}\left( \mathbb{R} \right)}_{domain} \rightarrow \left\langle f|\psi \right\rangle \equiv D_{f}(\psi) \equiv \int_{- \infty}^{\infty}{f(x)\psi(x)dx} \in \underbrace{\mathbb{\ \ R\ \ }}_{range}\ .\]
The \(f\) action \(\psi \longmapsto D_{f}(\psi)\) defines a continuous and linear functional that maps the domain of test functions \(\mathfrak{D}\left( \mathbb{R} \right)\) (infinitely differentiable and compactly supported functions) to scalar values, i.e., \(D_{f}\mathfrak{:D}\left( \mathbb{R} \right)\mathbb{\rightarrow R}\). This distribution \(D_{f}\) action, \(\psi \longmapsto D_{f}(\psi)\), by integration against a test function, \(\psi\), is effectively a weighted average of \(f\) over the support of the test function. Note that the values of the distribution at a single point may not be well-defined (cf. singularities). Also, not all probability distributions \(D_{f}\) have well-defined densities \(f\). Hence, some distributions \(D_{f}\) arise from (density) functions via such integration against test functions, whereas other distributions may be well defined but do not admit densities and cannot be defined by integration against any test function.
Examples of distributions that are not associated with a bounded and compactly supported density function are the Dirac delta function and other distributions that can only be defined by actions via integration of a test function, \(\psi \longmapsto \int_{U}^{}\psi d\mu\), against specific measures \(\mu\) on \(U\).
To recap distributions and Green’s theorem, we can consider a function \(f\) as a distribution \(D_{f}\) or a continuous linear functional on the set of infinitely differentiable functions with bounded support (denoted by \(C_{0}^{\infty}\), or simply \(\mathfrak{D}\)).
\(f\lbrack \cdot \rbrack \equiv \left\langle f| \cdot \right\rangle \equiv D_{f}( \cdot )\mathfrak{:D}\mathbb{\rightarrow R}\), and \(D_{f}(\psi) \equiv \int_{\mathbb{R}}^{}{f(x)\psi(x)dx},\ \forall\ \psi \in \mathfrak{D\ }.\)
Any continuous function \(f\) can be regarded as a distribution acting by integration against a test function \(\varphi\).
\[f(\varphi) = \int_{- \infty}^{\infty}{f(x)\varphi(x)dx}\ ,\ \forall\ \varphi \in \mathfrak{D.}\]
Function approximation: If there exists a sequence of functions approximating \(f\), i.e., \(f = \lim_{n \rightarrow \infty}{f_{n}(x)},\ \forall x\mathbb{\in R}\), then
\[f(\varphi) = \lim_{n \rightarrow \infty}\int_{- \infty}^{\infty}{f_{n}(x)\varphi(x)dx},\ \forall\ \varphi \in \mathfrak{D\ .}\]
Distributional derivative: Some non-differentiable functions may be differentiated as distributions. The distributional derivative of \(f\) is well defined as the distribution \(D_{f}'\) given by
\[D_{f}'(\varphi) \equiv - D_{f}\left( \varphi' \right),\ \ \ \forall\ \varphi \in \mathfrak{D.\ }\]
This makes sense since, \(\forall\ f \in C^{1}\) and \(\forall\ \varphi \in \mathfrak{D}\), \(D_{f}(\varphi) \equiv \int_{- \infty}^{\infty}{f(x)\varphi(x)dx}\), and integration by parts yields
\[D_{f}'(\varphi) = \left\langle f' \middle| \varphi \right\rangle = \int_{- \infty}^{\infty}{f'(x)\varphi(x)dx} = \underbrace{\left. f(x)\varphi(x) \right|_{- \infty}^{\infty}}_{0} - \int_{-\infty}^{\infty}{ \underbrace{f(x)\varphi'(x)}_{D_{f}(\varphi')}dx} = \ - D_{f}\left( \varphi' \right).\]
Example 1: Let \(f = 5\delta(x + 3) - 2\delta(x - 1)\). Then, \(\forall\ \varphi \in \mathfrak{D}\),
\[f(\varphi) = \int_{- \infty}^{\infty}{f(x)\varphi(x)dx} = \ \int_{- \infty}^{\infty}{\left\lbrack 5\delta(x + 3) - 2\delta(x - 1) \right\rbrack\varphi(x)dx} = 5\varphi( - 3) - 2\varphi(1)\ .\]
Example 2: Suppose the one-parameter family of functions \(f_{m}:\mathbb{R} \rightarrow \mathbb{R}\) are defined by
\[f_{m}(x) = \left\{ \begin{matrix} m,\ x \in \left\lbrack 0,\ \frac{1}{m} \right\rbrack\ \ \\ 0,\ otherwise \\ \end{matrix}\ . \right.\ \]
and \(D_{f_{m}}(\psi)\) is the distribution corresponding to \(f_{m}\), then, \(\forall\ \psi \in \mathfrak{D}\)
\[\lim_{m \rightarrow \infty}{D_{f_{m}}(\psi)} = \lim_{m \rightarrow \infty}\left( \int_{\mathbb{R}}^{}{f_{m}(x)\psi(x)dx} \right) = \lim_{m \rightarrow \infty}\left( m\int_{0}^{\frac{1}{m}}{\psi(x)dx} \right) = \psi(0) = \int_{\mathbb{R}}^{}{\delta(x)\psi(x)dx}.\]
This implies that, as the parameter \(m\) increases, \(D_{f_{m}} \rightarrow D_{\delta}\), i.e., the functions \(f_{m}(x)\) approximate the Dirac delta distribution, \(\delta(x)\).
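A short numerical sketch (with an arbitrary smooth, rapidly decaying test function, assumed only for illustration) confirms that the action of \(f_m\) on \(\psi\) approaches \(\psi(0)\) as \(m\) grows.
psi    <- function(x) exp(-x^2) * cos(x)            # illustrative test function, psi(0) = 1
action <- function(m) integrate(function(x) m * psi(x), 0, 1/m)$value
sapply(c(1, 10, 100, 1000), action)                 # values approach psi(0) = 1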
Example 3: Determine what function \(f\) corresponds with \(D_{f}(\psi) \equiv \int_{0}^{\infty}{x\psi'(x)dx},\ \forall\ \psi \in \mathfrak{D}\)?
First, we can validate that \(D_{f}\) is a linear operator, \(\forall\ \psi,\phi \in \mathfrak{D},\ \ \forall a,b \in \mathbb{R}\),
\[D_{f}(a\psi + b\phi) \equiv \int_{0}^{\infty}{x(a\psi + b\phi)'(x)dx} = \int_{0}^{\infty}{\left\lbrack ax\psi'(x) + bx\phi'(x) \right\rbrack dx} = aD_{f}(\psi) + bD_{f}(\phi)\ .\]
Next, \(\forall\ \psi \in \mathfrak{D}\), we need to express \(D_{f}(\psi) \equiv \int_{0}^{\infty}{f(x)\psi(x)dx}\), for some function \(f\).
Integrate by parts the definition of the distribution
\[D_{f}(\psi) \equiv \int_{0}^{\infty}{x\psi'(x)dx} = \left. \ \begin{matrix} \\ x\psi(x) \\ \\ \end{matrix} \right|_{0}^{\infty} - \int_{0}^{\infty}{\psi(x)dx} = 0 - \int_{- \infty}^{\infty}{H(x)\psi(x)dx} = \int_{- \infty}^{\infty}{\left( - H(x) \right)\psi(x)dx},\]
where the Heaviside function is \(H(x) = \left \{ \begin{matrix} 0,\ x \leq 0 \\ 1,\ x > 0 \\ \end{matrix} \right .\). Therefore, this distribution \(D_{f}(\psi) \equiv \int_{0}^{\infty}{x\psi'(x)dx}\), corresponds to \(f(x) = - H(x)\).
Example 4: Compute the distributional derivative of the Heaviside function \(f(x) \equiv H(x)\).
Since \(D_{f}'(\varphi) \equiv - D_{f}\left( \varphi' \right),\ \ \ \forall\ \varphi \in \mathfrak{D}\), we have
\[D_{H}'(\varphi) = \int_{- \infty}^{\infty}{\left( - H(x) \right)\varphi'(x)dx} = - \int_{0}^{\infty}{\varphi'(x)dx} = - \left. \ \begin{matrix} \\ \varphi(x) \\ \\ \end{matrix} \right|_{0}^{\infty} = 0 - \left( - \varphi(0) \right) = \varphi(0) \equiv D_{\delta}(\varphi).\]
Therefore, distributional derivative of the Heaviside function \(H(x)\) is the Dirac delta function, i.e., \(H' \equiv D_{H}' = \delta\).
Example 5: Compute the distributional derivative of the delta function \(f(x) \equiv \delta(x)\).
Again, since \(D_{f}'(\varphi) \equiv - D_{f}\left( \varphi' \right),\ \ \ \forall\ \varphi \in \mathfrak{D}\), we have
\[\delta'(\varphi) \equiv D_{\delta}'(\varphi) = - \int_{- \infty}^{\infty}{\delta(x)\varphi'(x)dx} = - \varphi'(0).\]
The order \(n\) distributional derivatives are defined by induction. The case of \(n = 1\), first derivative, is presented above. Higher order distributional derivatives are defined by repeated application of integration by parts (\(n\) times):
\[f^{(n)}(\varphi) \equiv D_{f}^{(n)}(\varphi) \equiv ( - 1)^{n}D_{f}\left( \varphi^{(n)} \right),\ \ \ \forall\ \varphi \in \mathfrak{D.\ }\]
For instance, let \(f(x) = |x|,\forall\ x\mathbb{\in R}\), then, for \(n = 2\),
\[f''(\varphi) \equiv D_{f}''(\varphi) \equiv ( - 1)^{2}D_{f}\left( \varphi'' \right) = \int_{- \infty}^{\infty}{|x|\varphi''(x)dx} = - \int_{- \infty}^{0}{x\varphi''(x)dx} + \int_{0}^{\infty}{x\varphi''(x)dx} =\]
\[- \left. \ \begin{matrix} \\ x\varphi'(x) \\ \\ \end{matrix} \right|_{- \infty}^{0} + \left. \ \begin{matrix} \\ x\varphi'(x) \\ \\ \end{matrix} \right|_{0}^{\infty} + \left. \ \begin{matrix} \\ \varphi(x) \\ \\ \end{matrix} \right|_{- \infty}^{0} - \left. \ \begin{matrix} \\ \varphi(x) \\ \\ \end{matrix} \right|_{0}^{\infty} = 2\varphi(0) = 2D_{\delta}(\varphi).\]
Therefore, the second order distributional derivative of \(f(x) = |x|\) is \(f'' = D_{f}'' = 2\delta\).
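The same distributional identity can be checked numerically; a sketch with a Gaussian-type test function (which decays fast enough to stand in for a compactly supported test function) gives \(\int |x|\varphi''(x)dx \approx 2\varphi(0)\):
phi  <- function(x) exp(-x^2)                       # illustrative test function, phi(0) = 1
phi2 <- function(x) (4*x^2 - 2) * exp(-x^2)         # its second derivative
integrate(function(x) abs(x) * phi2(x), -Inf, Inf)$value   # ~ 2 = 2 * phi(0)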
Goal: Derive a general kime-operator, \(Κ = \widehat{\kappa}\), or a kime-phase operator, \(\Theta = \widehat{\theta}\), similarly to the position \(\widehat{x} = x\) and momentum \(\widehat{p} = - i\hslash\frac{\partial}{\partial x}\) operators.
The exponential of an operator is defined to solve linear evolution equations. Suppose
\(A:X \rightarrow X\) is a bounded linear operator on a Banach space \(X\). Generalizing the power series expansion of \(e^{a},a \in \mathbb{R}\), we define the operator exponential by
\[\underbrace{e^{A}}_{operator} \equiv \underbrace{I}_{identity} + A + \frac{1}{2!}A^{2} + \cdots + \frac{1}{n!}A^{n} + \cdots = \sum_{n = 0}^{\infty}{\frac{1}{n!}A^{n}}\ .\]
The norm of the operator, \(\left\| A \right\|\), leads to a convergent real series
\[\underbrace{e^{\left\| A \right\|}}_{scalar} \equiv \underbrace{1}_{identity} + \left\| A \right\| + \frac{1}{2!}\left\| A \right\|^{2} + \cdots + \frac{1}{n!}\left\| A \right\|^{n} + \cdots = \sum_{n = 0}^{\infty}{\frac{1}{n!}\left\| A \right\|^{n}}\ .\]
The important operator exponential properties include:
\[\left\| e^{A} \right\| \leq e^{\left\| A \right\|}\ \ ,\]
\[when\ \underbrace{\lbrack A,B\rbrack}_{commutator} = 0\ \ \ \Longrightarrow \ \ e^{A}e^{B}\underbrace{=}_{A\ \&\ B\ commute} e^{A + B}\ .\]
In general, a time-evolution operator is a solution of an initial value problem for a linear scalar ODE \(\dot{f} = \frac{d}{dt}f(t) = af(t)\) with boundary condition \(f(0) = f_{o}\). The solution of the scalar linear time-evolution operator is \(f(t) = f_{0}e^{at}\). In the more general case of a (vector) linear system of ODEs, \(\dot{F} = \frac{d}{dt}F(t) = AF(t)\), the solution is
\[F(t) = \underbrace{\overbrace{e^{tA }}^{1 - parameter\ time\ operator}}_{evolution\ flow}\ F_{0}\ ,\]
where \(F\mathbb{:R \rightarrow}X\), \(X\) is a Banach space, and \(A:X \rightarrow X\) is a bounded linear operator on \(X.\) By definition, the derivative of the operator exponential \(\frac{d}{dt}e^{tA} = Ae^{tA}\), similar to the scalar case \(\frac{d}{dt}e^{at} = ae^{at}\):
\[\frac{d}{dt}e^{tA} = \lim_{h \rightarrow 0}\left( \frac{e^{(t + h)A} - e^{tA}}{h} \right) = e^{tA}\lim_{h \rightarrow 0}\left( \frac{e^{hA} - I}{h} \right) = Ae^{tA}\lim_{h \rightarrow 0}{\sum_{n = 0}^{\infty}\left( \frac{1}{(n + 1)!}A^{n}h^{n} \right)} = Ae^{tA}.\]
When \(X \equiv \mathbb{R}^{n}\), the linear system of ODEs corresponds to a linear operator \(A = \left( A_{i,j} \right)_{n \times n}\) representing a linear system of \(n\) equations in a finite-dimensional space. However, the same notation for the operator, \(A\), the time evolution, \(F(t)\), and the solution, \(F(t) = e^{tA}F_{0}\), applies to infinite-dimensional spaces, e.g., spaces of continuous functions or \(L^{2}\) integrable functions.
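For the finite-dimensional case, a minimal base-R sketch (with an arbitrary illustrative diagonalizable \(2\times 2\) matrix \(A\)) computes the evolution flow \(F(t)=e^{tA}F_0\) through an eigen-decomposition:
A  <- matrix(c(0, 1, -2, -3), nrow = 2, byrow = TRUE)   # illustrative 2x2 system matrix
F0 <- c(1, 0)                                           # initial condition F(0)
expAt <- function(A, t) {                               # matrix exponential e^{tA} via eigen()
  e <- eigen(A)
  Re(e$vectors %*% diag(exp(t * e$values)) %*% solve(e$vectors))
}
t_grid <- seq(0, 5, by = 0.1)
Ft <- sapply(t_grid, function(t) expAt(A, t) %*% F0)    # trajectory F(t) = e^{tA} F0
matplot(t_grid, t(Ft), type = "l", xlab = "t", ylab = "components of F(t)")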
The wavefunction of a free particle with momentum \(p = \hslash k\) and energy \(E = \hslash\omega\) can be expressed in the position space in the form of a de Broglie wave function, \(\psi(x,t) = e^{i(k \cdot x - \omega t)}\), up to a constant multiple, where \(k\) is the wave number, \(\omega\) is angular frequency. Then, the (spatial) partial derivative of \(\psi\) is
\[\frac{\partial\psi}{\partial x} = ike^{i(k \cdot x - \omega t)} = ik\psi\ .\]
Hence, \(\frac{\hslash}{i}\frac{\partial\psi}{\partial x} = \frac{\hslash}{i}ike^{i(k \cdot x - \omega t)} = \hslash k\psi = p\psi\). In terms of linear operators, this relation is expressed as
\[\underbrace{\frac{\hslash}{i}\frac{\partial}{\partial x}}_{Linear\ \\ operator}\ \overbrace{\ \ \psi\ \ }^{State} = \underbrace{\overbrace{\ \ \ p\ \ \ }^{momentum \\ eigenvalue }}_{numeric\ \\ value\ or\ vector}\ \cdot \underbrace{\overbrace{\ \ \ \ \ \ \psi\ \ \ \ \ \ \ }^{eigenfunction}}_{(Eigen)\ State}\ \ .\]
Therefore, the operator \(\widehat{p} ≔ \frac{\hslash}{i}\frac{\partial}{\partial x}\) is called the momentum operator, since it is in one-to-one correspondence with the momentum (eigenvalue). Applying \(\widehat{p}\) to the wavefunction (state) yields an estimate of the momentum numerical value/vector.
There is no time operator; see Pauli's argument against the existence of a time operator rooted in the boundedness of the energy operator, see article 1 and article 2. However, in the Schrödinger picture representation, there is a linear time-evolution operator \(U\) specifying the future state of an electron that is currently in state \(\left| \psi \right\rangle\), as \(U\left| \psi \right\rangle\), for each possible current state \(\left| \psi \right\rangle\). The time-evolution of a closed quantum system is unitary and reversible. This implies that the state of the system at a later point in time, \(t\), is given by \(\left| \psi(t) \right\rangle = U(t)\left| \psi(t_{o}) \right\rangle\), where \(U(t)\) is a unitary operator, i.e., its adjoint \(U^{\dagger} \equiv \left( U^{*} \right)^{T}\) operator is the inverse: \(U^{\dagger} = U^{- 1}\). The integral equation \(\left| \psi(t) \right\rangle = U(t)\left| \psi(t_{o}) \right\rangle\) relates the state of the particle at the initial time \(t_{o}\) with its state at time \(t\). Locally, we can express the position of an inertial particle at time \(t\) as \(x(t) = \ x\left( t_{o} \right) + \ v \cdot (t - t_{o})\), where \(v\) is the constant speed and \(x\left( t_{o} \right)\) is the initial position, i.e., \(\frac{dx}{dt} = v\). The (time-dependent) Schrödinger equation, \(i\hslash\frac{\partial}{\partial t}\psi(x,t) = H\psi(x,t)\), represents a generalization of this (ordinary) differential equation, where the particle system Hamiltonian is \(H\) and the PDE solution is the particle wavefunction \(\psi(x,t)\), which describes the particle state (e.g., position) at time \(t \geq t_{o}\), given its initial state \(\psi(x,t_{o})\).
The quantum argument for an external time is rooted in the contradiction associated with assuming the existence of a general time operator, see this article. A well-defined time operator \(\widehat{t}\) has to be paired to a conjugate energy operator \(\widehat{H}\), a Hamiltonian, as follows from the Heisenberg’s formulation of the pair of classically conjugate variables, time and energy. The commutator operator \(\left\lbrack \widehat{E},\widehat{t} \right\rbrack = (\widehat{E}\hat{t} - \widehat{t}\widehat{E}) = - i\hslash\) suggests a time-energy uncertainty relation in two separate forms, see this.
\[\mathrm{\Delta}E\mathrm{\Delta}t \sim \hslash\ \ (indeterminate\ form)\ ,\]
\[\mathrm{\Delta}E\mathrm{\Delta}t \geq \frac{\hslash}{2}\ \ (precise\ form)\ .\]
It’s worth reviewing the two complementary versions of the Schrödinger equation (SE): (1) the time-dependent SE (TDSE), where time is a parameter, not an operator, \(\left( H(t) - i\hslash\frac{\partial}{\partial t} \right)\psi(t) = 0\); and (2) the time-independent SE (TISE), for a constant energy \(E\), \((\hat{H} - E)\psi_x = 0\), whose corresponding full time-dependent solution is \(\psi(t) = \psi_x e^{-\frac{i}{\hbar}Et}\).
If \(\hat{H}\) is the system Hamiltonian, solving the eigenvalue equation \(\hat{H}\Psi_n = E_n\Psi_n\) yields the Hamiltonian eigenfunctions, \(\Psi_n\), and the corresponding (energy) eigenvalues, \(E_n\), which represent the observable energies. The Hamiltonian operator \(\hat{H}\) may either depend on time \(t\) or be time-independent (invariant of time).
Under explicit time-dependence, \(\hat{H}\equiv \hat{H}(t)\), the Hamiltonian operator has different eigenfunctions at different times, \(\Psi_n=\Psi_n(x,t)\), dynamically changing over time.
Under time-independence, the eigenfunctions of the Hamiltonian operator \(\hat{H}\) are the same at each time instant, \(\Psi_n=\Psi_n(x)\). We drop the \(t\) argument from these solutions, which are much simpler in this case compared to the more general (time-dependent) case. Nevertheless, the corresponding full solutions still evolve in time: the eigenfunctions oscillate, preserving their shapes and forms without changing their functional-analytic expressions. Below is an example illustrating this subtle point, that the time-independent eigenfunctions keep the same spatial shape at each instant, and yet their amplitudes change over time. The example shows the eigenfunctions \(\psi_k(x)=\sin(k\cdot x/2), k\in\{1,2,3,4\}\), where the (separable) time-dynamics act by multiplication, e.g., \(\psi_k'(x,t)=\psi_k(x)\cdot t\). Each solution corresponds to a different energy, with low frequencies corresponding to low energies. Note that these eigenfunctions \(\psi_k(x)\) are not the full solutions to the Schrödinger equation, as the full solutions \(\psi_k'(x,t)\) are time-dependent, with amplitudes oscillating in time.
The shapes of the time-dependent Hamiltonian eigenfunctions are the same as their corresponding solutions of the time-independent equation (TISE); however, the separable time-dependent eigenfunctions oscillate with time. The shapes of the spatial parts are unchanged, but their amplitudes oscillate according to their temporal counterparts, producing the full time-dependent solution of the Schrödinger equation. At any fixed instant of time, the Hamiltonian has the same eigenfunctions. In the case of a time-dependent Hamiltonian, the same simple Hamiltonian eigenfunctions would not be valid for all time instances, as their shapes would morph during the time-evolution beyond simple amplitude oscillations.
This explains the difference between the time-independent Schrodinger equation (TISE) and the time-dependent Schrodinger equation (TDSE).
library(plotly)
library(tidyr)
# define the space-time grid (domain)
lenSpace <- 100
lenTime <- 20
spaceInd <- seq(from=0, to=2*pi, length.out = lenSpace)
timeInd <- seq(from=-1, to=1, length.out = lenTime)
oscilatoryFunc <- function(freq=1) {
x <- (freq/2) * spaceInd     # spatial argument of the eigenfunction sin(freq*x/2)
t <- timeInd                 # time amplitudes in [-1, 1]
tensor <- sin(x) %o% t       # outer product: spatial eigenfunction times time amplitude
return(tensor)
}
e1 <- as.vector(oscilatoryFunc(freq=1)) # wide to long converted
e4 <- as.vector(oscilatoryFunc(freq=2))
e9 <- as.vector(oscilatoryFunc(freq=3))
e16 <- as.vector(oscilatoryFunc(freq=4))
df_wide <- cbind(space=spaceInd, time=rep(1:lenTime, each=lenSpace),
E1=e1, E4=e4, E9=e9, E16=e16)
df_long <- as.data.frame(df_wide) %>%
pivot_longer(c("E1", "E4", "E9", "E16"), names_to = "eigenfunctions", values_to = "values")
str(df_long)
## tibble [8,000 × 4] (S3: tbl_df/tbl/data.frame)
## $ space : num [1:8000] 0 0 0 0 0.0635 ...
## $ time : num [1:8000] 1 1 1 1 1 1 1 1 1 1 ...
## $ eigenfunctions: chr [1:8000] "E1" "E4" "E9" "E16" ...
## $ values : num [1:8000] 0 0 0 0 -0.0317 ...
fig <- df_long %>%
plot_ly(x=~space, y=~values, color=~eigenfunctions, frame=~time,
text=~eigenfunctions, hoverinfo = "text",
type='scatter', mode='lines') %>%
layout(xaxis = list(title = "space"), yaxis = list(title = "Eigenfunction Amplitude"),
showlegend = FALSE)
fig
In principle, we are interested in solving the general time-dependent Schrödinger equation. Only in the special case when the Hamiltonian is not an explicit function of time do we solve the time-independent Schrödinger equation to get the corresponding (stationary) eigenfunctions. The TDSE is
\[\hat{H}\Psi(x,t)=i\hbar \frac{\partial}{\partial t}\Psi(x,t)\ .\] In the special case when \(\hat{H}\) does not explicitly depend on time and \(\Psi(x,0)\) is the (initial) eigenstate of \(\hat{H}\) corresponding to an eigenvalue \(E_n\), the solution is
\[\Psi(x,t) = e^{-\frac{i}{\hbar}\hat{H}t}\Psi(x,0)\ .\] This can be quickly confirmed by starting with the TDSE \(\hat{H}\Psi(x,t)=i\hbar \frac{\partial}{\partial t}\Psi(x,t)\) and using the TISE separability condition \(\Psi(x,t)=\psi_x(x) \psi_t(t)\).
\[\hat{H}\Psi(x,t)=\hat{H}\psi_x(x) \psi_t(t) = i\hbar \frac{\partial}{\partial t}\psi_x(x) \psi_t(t)= \psi_x(x) i\hbar \frac{\partial}{\partial t}\psi_t(t)= \psi_x(x) E \psi_t(t).\]
Hence, \(i\hbar \frac{\partial}{\partial t}\psi_t(t)= E \psi_t(t)\) whose solution is the exponential function \(\psi_t(t)=e^{\frac{E}{i\hbar}t}=e^{-i\frac{E}{\hbar}t}\). Therefore, the TISE solution is \(\Psi(x,t)=\psi_x(x)\cdot\psi_t(t)=\psi_x(x)e^{-i\frac{E}{\hbar}t}\).
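As a quick numerical sanity check (a minimal R sketch, assuming natural units \(\hbar=1\) and an arbitrary, hypothetical eigenvalue \(E\)), a central finite difference confirms that \(\psi_t(t)=e^{-\frac{i}{\hbar}Et}\) satisfies \(i\hbar\ \partial_t\psi_t = E\ \psi_t\):
hbar <- 1; E <- 2.5                                   # natural units; hypothetical energy eigenvalue
psi_t <- function(t) exp(-1i * E * t / hbar)          # temporal factor of the separable solution
t0 <- 0.7; dt <- 1e-6
lhs <- 1i * hbar * (psi_t(t0 + dt) - psi_t(t0 - dt)) / (2 * dt)   # i*hbar * d/dt psi_t (central difference)
rhs <- E * psi_t(t0)
Mod(lhs - rhs)                                        # ~ 0, up to finite-difference error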
We can expand the exponential function, \(e^{-\frac{i}{\hbar}\hat{H}t}\) as a series of operators acting on the eigenstate
\[e^{-\frac{i}{\hbar}\hat{H}t}\Psi(x,0)=e^{-\frac{i}{\hbar}E_n t}\Psi(x,0)\ .\]
This suggests that having the initial wavefunction in terms of eigenstates yields the solution of the time-dependent Schrodinger equation.
Recall that the eigenstates of a Hermitian (Hamiltonian) operator, which has real eigenvalues, form a complete orthonormal basis in which all states can be expressed as linear combinations of the basis eigenstates. Suppose the corresponding pairs of eigenstates-eigenvalues are denoted by (\(\Psi_n\), \(E_n\)). Then
\[\Psi(x,0)=\sum_{n}{c_n \psi_n(x)}\ \ \Longrightarrow\ \ \Psi(x,t)= \sum_{n}{c_n e^{-\frac{i}{\hbar}E_n t}\psi_n(x)}\ .\]
Let’s examine the physical interpretation of the eigenstates \(\psi_n(x)\). First, the squared modulus \(|\psi_n(x)|^2\) represents the time-independent (stationary state) probability density function (PDF) of finding a particle at position \(x\). Second, when the system is in the state \(\psi_n(x)\), measuring its energy will almost surely yield an observation \(E_n\).
This probabilistic interpretation of wavefunctions implies the need for appropriate normalization conditions:
\[\int_{space}{|\psi_n(x)|^2dx} = 1\ \ \ , \ \ \ \int_{space}{|\Psi_n(x,t)|^2dx} = 1\ \ \forall t\ \ \ {\text{and}}\ \ \ \sum_{n}{|c_n|^2}=1\ .\]
Wavefunctions always evolve according to the TDSE; however, in the special case of time-independent Hamiltonians, the solutions are separable, \(\Psi(x,t)=\psi_x(x)\cdot \psi_t(t)\) (or linear combinations of such terms), where \(\psi_t(t)=e^{−i\frac{Et}{\hbar}}\) and \(\psi_x(x)\) solves the simpler TISE.
Therefore, given an initial state, the solutions of the general time-dependent Schrödinger equation are derived from the solutions of the time-independent equation (TISE), \(\hat{H}\psi(x)=E\psi(x)\). The solution of the TDSE, a partial differential equation, relies on (multiplicative) separation of variables to split the space- and time-dependent components of the wavefunction. This separability splits the PDE into a pair of ordinary differential equations, one for space and one for time.
The time part is easy to solve, whereas the space-part yields the TISE as an eigenvalue equation whose solution provides the most important characterization of the system wavefunction, since it explicates the energy-eigenspectrum for the problem. Multiplying the TISE solution by the corresponding time-part function also describes the temporal evolution of the system. Since the Hamiltonian doesn’t explicitly depend on time, the TISE energy-eigenstates are “stationary states”, preserving the same energy eigenvalues.
Assuming the existence of a time operator \(\widehat{t}\), we can construct a unitary operator \(\widehat{U} = e^{\pm i\widehat{t}\ dE}\) that acts by translating states along the energy spectrum, \(\widehat{U}\left| E \right\rangle \rightarrow |E + dE\rangle\). Iterating the \(\widehat{U}\) translation operator indefinitely can project the system into states of arbitrarily negative energy, implying instability and precluding a stable vacuum (ground) state.
In quantum theory, the time-energy uncertainty relation is not well-defined because of the multiple forms of time. Pragmatic or external time may be defined as the parameter entering the Schrodinger equation and measured by an external and independent clock. Dynamic time represents an intrinsic tracker defined through the dynamical behavior of the quantum objects themselves. Observable time represents a measurable characteristic of event ordering. Spacekime analytics regards the kime-magnitude as an observable time tracking the ordered ranking of sequential longitudinal events.
When the Hamiltonian operator \(\widehat{H}\) is constant, the Schrödinger equation has the solution
\[\left| \psi(t) \right\rangle = e^{- \frac{i\widehat{H}t}{\hslash}}\left| \psi(0) \right\rangle\ .\ \]
The time-evolution operator \(\widehat{U}(t) = e^{- \frac{i\widehat{H}t}{\hslash}}\) is unitary, preserving the inner product between vectors in the Hilbert space over the field \(\mathbb{C}\), i.e., \(\left\langle \widehat{U}\phi \middle| \widehat{U}\psi \right\rangle = \left\langle \phi \middle| \psi \right\rangle\mathbb{\in C}\). Let’s denote by \(|\psi(0)\rangle\) the initial state of the wavefunction and by \(\left| \psi(t) \right\rangle = \widehat{U}(t) \left| \psi(0) \right\rangle = e^{- \frac{i\widehat{H}t}{\hslash}}\left| \psi(0) \right\rangle\) the corresponding state at time \(t\), for some unitary operator \(\widehat{U}(t)\). These are all solutions to the Schrödinger equation since, given a continuous family of unitary operators \(\widehat{U}(t)\) parameterized by \(t\), we can choose the parameterization to ensure that \(\widehat{U}(0)\) is the identity operator and \(\left (\widehat{U}\left( \frac{t}{N} \right)\right )^{N} \equiv \widehat{U}(t),\forall N\mathbb{\in N}\). This specific dependence of \(\widehat{U}(t)\) on the time argument implies that \(\widehat{U}(t) = e^{- i\widehat{G}t}\), for some self-adjoint operator \(\widehat{G}\) called the generator of the family \(\widehat{U}(t)\).
In other words, the Hamiltonian operator \(\widehat{H}\) is an instance of a generator, up to a multiplicative constant, \(\frac{1}{\hslash}\), which may be set to \(1\) in natural units. The generator \(\widehat{G}\) that corresponds to the unitary operator \(\widehat{U}\) is Hermitian, since \(\widehat{U}(\delta t) \approx \widehat{U}(0) - i\widehat{G}\delta t\) and therefore
\[\widehat{U}(\delta t)^{\dagger}\widehat{U}(\delta t) \approx \left( \widehat{U}(0)^{\dagger} + i{\widehat{G}}^{\dagger}\delta t \right)\left( \widehat{U}(0) - i\widehat{G}\delta t \right) = I + i\delta t\left( {\widehat{G}}^{\dagger} - \widehat{G} \right) + O\left( \delta t^{2} \right)\ .\ \]
Thus, to a first order approximation, \(\widehat{U}(t)\) is unitary when its generator (derivative) is self-adjoint (Hermitian), i.e., \({\widehat{G}}^{\dagger} = \widehat{G}\).
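A minimal numerical sketch of this relation, using a hypothetical \(2\times 2\) Hermitian generator \(\widehat{G}\): we build \(\widehat{U}(t)=e^{-i\widehat{G}t}\) from the spectral decomposition in base R and confirm its unitarity.
G <- matrix(c(1, 1i, -1i, -1), nrow = 2)    # a hypothetical Hermitian generator, G = Conj(t(G))
eg <- eigen(G, symmetric = TRUE)            # real eigenvalues, orthonormal eigenvectors
U_t <- function(tt) eg$vectors %*% diag(exp(-1i * eg$values * tt)) %*% Conj(t(eg$vectors))
U <- U_t(0.3)
round(Conj(t(U)) %*% U, 12)                 # identity matrix  =>  U(t) is unitary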
Under the Copenhagen interpretation, the Schrödinger equation relates information about the system at one time, \(t_{o} = 0\), to information about the system at another time, \(t\). The Schrödinger equation encodes the time-evolution process continuously and deterministically, as knowing \(\psi\left( t_{o} \right)\) is in principle sufficient to calculate \(\psi(t),\forall t \ge t_{o}\). However, the wavefunction can also change discontinuously and stochastically during the measurement process, as the kime-phase is stochastic, \(\varphi \sim \Phi\lbrack - \pi,\pi)\). Each observation is an act of measurement, which corresponds to a random sample from the kime-phase distribution. In practice, this stochastic behavior of the wavefunction is mediated by acquiring multiple repeated samples from a tightly controlled experiment, i.e., obtaining multiple samples corresponding to an identical time \(t_{o}\). Then, we commonly use various aggregation functions, i.e., sample statistics, and rely on the law of large numbers (LLN) to argue that the expected values of our sample statistics tend to their corresponding population-wide distribution characteristics. Examples of such population parameters include measures of centrality, e.g., mean and median, measures of dispersion, e.g., variance and IQR, and measures of skewness or kurtosis. Prior to each measurement, the exact post-measurement value of the wavefunction is unknown; however, the kime-phase distribution model assigns likelihoods to specific classes (cf. Borel sets) of observable wavefunction values.
The wavefunction solutions of the Schrödinger equation simultaneously cover all possibilities described by quantum theory. Instantiations of these possibilities correspond to individual random samples from the underlying kime-phase distribution. This preserves the continuous longitudinal time-evolution of the wavefunctions solving the Schrödinger equation, as all possible states of the system (including the measuring instrument and the observer) are present in a real physical quantum superposition, reflecting the kime-phase model distribution. Even though the 5D spacekime universe is deterministic, in 4D Minkowski spacetime, we perceive non-deterministic behavior governed by probabilities, see the “spacekime interpretation” section in the Spacekime/TCIU book. That is, we are not equipped to observe and interpret spacekime as a whole. We can only detect, process and interpret tangible evidence of observable phenomena through finite sampling across spacetime.
Previously, we showed that (periodic) potential functions \(u\) of this type
\[u\left( \mathbf{x},\mathbf{\kappa} \right) = \underbrace{e^{2\pi i\left\langle \mathbf{\eta,\kappa} \right\rangle}}_{kime\ frequency\ part} \cdot \underbrace{e^{2\pi i\left\langle \mathbf{x,\xi} \right\rangle}}_{spatial\ frequency\ part}, \ \ {\text{subject to}}\ \ \left| \mathbf{\eta} \right|^{2} \equiv \left| \mathbf{\xi} \right|^{2}\ ,\]
solve the general ultrahyperbolic wave equation, \(\Delta_{\mathbf{x}}u\left( \mathbf{x},\mathbf{\kappa} \right) = \Delta_{\mathbf{\kappa}}u\left( \mathbf{x},\mathbf{\kappa} \right)\), where \(u\left( \mathbf{x},\mathbf{\kappa} \right) = e^{2\pi i\left\langle \mathbf{\eta,\kappa} \right\rangle} \cdot e^{2\pi i\left\langle \mathbf{x,\xi} \right\rangle} \in C^{2}\left( D_{t} \times D_{s} \right)\), \(\mathbf{\eta} = \left( \eta_{1},\eta_{2},\ldots,\ \eta_{d_{t}} \right)'\) and \(\mathbf{\xi} = \left( \xi_{1},\xi_{2},\ldots,\ \xi_{d_{s}} \right)'\) represent respectively the frequency vectors of integers corresponding to the temporal (angular frequency) and spatial frequencies (wave numbers) of the Fourier-transformed periodic solution of the wave equation.
More generally, since the wave equation is a linear PDE, any finite linear combination of \(M\) such basic potential functions will also represent a (composite, superposition) solution:
\[u\left( \mathbf{x},\mathbf{\kappa} \right) = \sum_{\begin{matrix} m = 1\ \\ \left\{ \mathbf{\xi}_{m},\mathbf{\eta}_{m}\mathbf{\ \ }s.t.\ \ \left| \mathbf{\xi}_{m} \right|^{2} = \left| \mathbf{\eta}_{m} \right|^{2} \right\} \\ \end{matrix}}^{M}\left( C_{m} \cdot e^{2\pi i\left\langle \mathbf{\eta}_{m}\mathbf{,\kappa} \right\rangle} \cdot e^{2\pi i\left\langle \mathbf{x,}\mathbf{\xi}_{m} \right\rangle} \right)\ .\]
In polar coordinate representation of kime, the simple (\(M = 1\)) separable solutions of the wave equation can be expressed via the Euler formula:
\[\mathbf{\kappa} = te^{i\theta} = t\left( \cos\theta + i\sin\theta \right) = \underbrace{\left( t\cos\theta \right)}_{\kappa_{1}} + i\ \underbrace{\left( t\sin\theta \right)}_{\kappa_{2}}\ , \mathbf{\ \ x} = \left( x_{1},x_{2},x_{3} \right),\ \ \mathbf{\eta} = \left( \eta_{1},\ \eta_{2} \right)\ ,\]
\[u\left( \mathbf{x},\mathbf{\kappa} \right) = e^{2\pi i\left\langle \mathbf{\eta,\kappa} \right\rangle} \cdot e^{2\pi i\left\langle \mathbf{x,\xi} \right\rangle} = e^{2\pi i\sum_{j = 1}^{d_{t} = 2}{\kappa_{j}\eta_{j}}} \cdot e^{2\pi i\sum_{l = 1}^{d_{s} = 3}{x_{l}\xi_{l}}} =\]
\[e^{2\pi i\left( \eta_{1}t\cos\theta + \eta_{2}t\sin\theta \right)} \cdot e^{2\pi i\left( x_{1}\xi_{1} + x_{2}\xi_{2} + x_{3}\xi_{3} \right)}\ .\]
One specific solution, illustrated in Figure 1, is given by:
\[u\left( \mathbf{x},\mathbf{\kappa} \right) = u\left( x_{1},x_{2},x_{3},\ t,\theta \right) = e^{2\pi t\ i\ \left( - 2\cos\theta + 3\sin\theta \right)} \cdot e^{2\pi i\left( - 3x_{1} + 2x_{2} \right)}\ ,\]
where \(\mathbf{\eta} = \left( \eta_{1},\eta_{2} \right) = ( - 2,\ 3)\) and \(\mathbf{\xi =}\left( \xi_{1},\ \xi_{2},\xi_{3} \right)\mathbf{=}( - 3,2,0)\), \(\left| \mathbf{\xi} \right|^{2} = \left| \mathbf{\eta} \right|^{2} = 13\).
Figure 1: Examples of the existence of a locally stable solution to the ultrahyperbolic wave equation in spacekime. The left and right figures illustrate alternative views and foliations of the 2D kime dynamics of the 5D spacekime wave projected onto a flat 2D (x,y) plane.
Question: Double check the signs of the exponential components in the wave equation solutions. At first glance, it may look as if we are off by a negative sign in front of the kime inner product term, \(\left\langle \mathbf{\eta,\kappa} \right\rangle\). In chapter 3, p. 149, we argue that \(\psi(x) = \frac{1}{\sqrt{2\pi\hslash}}\ e^{- \frac{i}{\hslash}\left( e_{1}\kappa_{1} + e_{2}\kappa_{2} - p_{x} \cdot x \right)}\), where the energy \(E = \left( e_{1},\ e_{2} \right),\ e_{i} = \hslash\omega_{i},\ i \in \{ 1,2\}\). Mind the sign differences between spatial (\(x\)) and kemporal (\(\kappa\)) parts in the exponent.
Answer: The \(\pm\) sign difference does not play a role here as \(\nabla^2\equiv\Delta\), cancelling the sign, \((-1)^2 = 1\). Note the connection to deriving the Schrödinger wave equation by quantization: \[LHS\equiv i\hbar\left ( \frac{\partial \psi}{\partial \kappa_1} + \frac{\partial \psi}{\partial \kappa_2}\right ) = -\frac{\hbar^2}{2m} \left ( \nabla^2 \psi\right )\equiv RHS,\]
\[LHS = i\hbar\left ( \frac{-i}{\hbar} \underbrace{E}_{e_1+e_2}\right ),\]
\[RHS= -\frac{\hbar^2}{2m} \left ( p_x^2 \frac{-1}{\hbar^2} \right ) = \frac{p_x^2}{2m}=(e_1 + e_2) = E.\]
In essence, the \(\pm\) sign in the exponent can be absorbed by the kemporal frequency term \(\eta\), since \(|-\eta|^2\equiv |\eta|^2\). Subject to \(|\underbrace{-\eta}_{\eta'}|^2\equiv |\eta|^2=|\xi|^2\),
\[u(x,k)=e^{-2\pi i\langle \eta, k \rangle} \cdot e^{2\pi i\langle x, \xi \rangle} = e^{2\pi i\langle \eta', k \rangle} \cdot e^{2\pi i\langle x, \xi \rangle}\ .\]
In 1D time, \(\frac{p_x^2}{2m}=e_1 = E\).
To explicate the kime-phase (or kime) Hermitian (self-adjoint) operator, \(\widehat{\mathcal{P}}\), we consider the particle wavefunction as a spatiotemporal wave-distribution, \(\Psi(x,\ \kappa)\), not really a function, since the kime-phase, \(\varphi\), is random. Assume there is a kime-phase operator
\[\underbrace{\ \ \ \widehat{\mathcal{P}}\ \ \ }_{Hermitian\ operator} \overbrace{\ \ \ \ \Psi\ \ \ \ \ }^{state} = \overbrace{\ \ \ \varphi\ \ \ }^{eigenvalue} \underbrace{\ \ \ \Psi\ \ \ }_{state}\ .\]
Hence, the action of the linear kime-phase Hermitian operator \(\widehat{\mathcal{P}}\) is to draw a random phase value from the circular phase distribution \(\varphi \sim \Phi_{\lbrack - \pi,\ \pi)}\). Such instantiation of the kime-phase, \(\varphi = \varphi_{o}\), localizes the spatiotemporal position of the observation in 4D Minkowski spacetime.
Let’s examine the process of random sampling from the phase distribution, \(\Phi_{\lbrack - \pi,\ \pi)}\). This can be done in many different ways. For instance, using inverse-transform (quantile) sampling,
\[\underbrace{{CDF(\Phi)}_{\lbrack - \pi,\ \pi)}^{- 1}}_{F^{- 1}}(U) \sim \Phi_{\lbrack - \pi,\ \pi)},\ \ \forall\ U \sim Uniform(0,1)\ .\]
The rationale behind that is that a continuous CDF, \(F\), is a one-to-one mapping of the domain of the CDF (range of \(\varphi\)) into the interval \(\lbrack 0,1\rbrack\). If \(U \sim Uniform(0,1)\), then \(\varphi = F_{\lbrack - \pi,\ \pi)}^{- 1}(U)\) would have the phase distribution \(\Phi_{\lbrack - \pi,\ \pi)}\), since \(F\) is monotonic, and \(Prob_{U}(U \leq u) \equiv u\), and
\[Prob_{\Phi}\left( F^{- 1}(U) \leq \varphi \right) = Prob_{U}\left( U \leq F(\varphi) \right) \equiv F(\varphi).\]
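A minimal R sketch of this inverse-transform (quantile) sampling scheme, assuming a hypothetical unnormalized phase density on \(\lbrack-\pi,\pi)\) (here a Laplace-like shape, chosen purely for illustration); the CDF is built numerically and inverted by interpolation:
set.seed(1234)
theta_grid <- seq(-pi, pi, length.out = 2001)
dens <- exp(-abs(theta_grid))                          # hypothetical unnormalized phase density on [-pi, pi)
h <- diff(theta_grid)[1]
dens <- dens / sum(dens * h)                           # numerical normalization
cdf  <- cumsum(dens) * h                               # numerical CDF, F(theta), on the grid
u <- runif(10000)                                      # U ~ Uniform(0,1)
phi <- approx(x = cdf, y = theta_grid, xout = u, rule = 2)$y   # phi = F^{-1}(U) via interpolation
hist(phi, breaks = 60, freq = FALSE, main = "Sampled kime-phases", xlab = "phase")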
If \(X\) is a random variable with probability density function \(f_{X}\), then the Laplace transform of \(f_{X}\) is the expectation of \(e^{- sX}\), i.e.,
\[\underbrace{\mathcal{\ \ \ L}\left( f_{X} \right)\ \ \ }_{ \begin{matrix} Laplace\ Transform\ \\ of\ r.v.\ \ X\mathcal{,\ \ \ L}(X) \\ \end{matrix}}(s)\mathbb{= E}\left( e^{- sX} \right) = \int_{}^{}{e^{- sx}f_{X}(x)dx}\ .\]
Setting \(s = - t\) yields the moment generating function (MGF) of \(X\), \[\mathcal{M}_{X}(t)\mathcal{\equiv M}(X)(t)\mathbb{= E}\left( e^{tX} \right) = \int_{}^{}{e^{tx}f_{X}(x)dx}\ .\]
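As a quick sanity check (a minimal Monte Carlo sketch with arbitrary illustrative values), the empirical average of \(e^{-sX}\) for an exponential random variable matches the closed-form transform \(\frac{\lambda}{\lambda+s}\) listed in the table below:
set.seed(2023)
lambda <- 1.5; s <- 0.8
x <- rexp(2e5, rate = lambda)                                  # samples from the exponential density f_X
c(monte_carlo = mean(exp(-s * x)),                             # E(exp(-s*X)) estimated by simulation
  closed_form = lambda / (lambda + s))                         # the tabulated Laplace transform
# Setting s = -t (for t < lambda) gives the MGF, E(exp(t*X)) = lambda/(lambda - t).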
For each continuous random variable \(X\), we can employ the Laplace transform to compute the cumulative distribution function, \(F_{X}\), and by extension, the inverse CDF (the quantile function), \(F_{X}^{- 1}\), as follows. \[p = F_{X}(x) \equiv \Pr(X \leq x) = \mathcal{L}^{- 1}\left( \frac{1}{s}\mathbb{E}\left( e^{- sX} \right)\ \right)(x) = \mathcal{L}^{- 1}\left( \frac{1}{s}\mathcal{L}\left( f_{X} \right)\ \right)(x)\mathbb{\ :R \rightarrow}\lbrack 0,1\rbrack\ \]
However, the random variable \(X\), and correspondingly its density function \(f(x)=f_{X}(x)\), map \(H\to\mathbb{C}\). Hence, the “inverse CDF function” \(F_{X}^{-1}\) and the “inverse linear operator” \(\mathcal{L}^{-1}\) have different meanings that are not interchangeable. Thus, the following relation is not an equality: \[x = F_{X}^{- 1}(p)\underbrace{\not=}_{not\ =}\frac{1}{p}\mathbb{E}\left( e^{- pX} \right) = \frac{1}{p}\mathcal{L}\left( f_{X} \right)(p):\lbrack 0,1\rbrack\mathbb{\rightarrow R\ ,}\]
\[U \sim Uniform(0,1)\ \Longrightarrow X = F_{X}^{- 1}(U) \underbrace{\not=}_{not\ =}\frac{1}{U}\mathcal{L}\left( f_{X} \right)(U) \sim \ f_{X}.\]
The Laplace transforms of many probability distributions have closed-form analytical expressions. The table below includes the LTs of some commonly used distributions.
Each entry below lists (left) the original density function \(\mathbf{f}_{\mathbf{X}}\mathbf{\equiv}\mathcal{L}^{\mathbf{- 1}}\left( \mathcal{L}\left( \mathbf{f}_{\mathbf{X}} \right) \right)\) and (right) its Laplace transform \(\mathcal{L}\left( \mathbf{f}_{\mathbf{X}} \right)\).
Exponential distribution \[f_{X} = \lambda e^{- \lambda x}\ ; \ \ \mathcal{L}\left( f_{X} \right) = \frac{\lambda}{\lambda + z\ },\ \forall z \neq -\lambda\]
Weibull distribution \[f_{X} = \alpha\lambda x^{\alpha - 1}e^{- \lambda\alpha x}\ ; \ \ \mathcal{L}\left( f_{X} \right) = \frac{\alpha\lambda\beta\Gamma(\alpha)}{(\lambda\alpha + z)^{\alpha}\ },\ \forall z \neq - \alpha\lambda ,\]
where \(\Gamma(\alpha) = \int_{0}^{\infty}{x^{\alpha - 1}e^{- x}}dx.\)
Normal distribution \[f_{X} = \frac{1}{\sigma\sqrt{2\pi}}e^{- \frac{(x - \mu)^{2}}{2\sigma^{2}}}\ ; \ \ \mathcal{L}\left( f_{X} \right) = e^{- \frac{\left( \mu + \sigma^{2}z \right)^{2}-\mu^2}{2\sigma^{2}}} \left( 1 - \Phi\left( - \frac{\mu + \sigma^{2}z}{\sigma} \right) \right)\ ; \ \ \forall z\mathbb{\in C}\]
Gamma distribution \[f_{X} = \frac{\lambda\beta x^{\beta - 1}e^{- \lambda x}}{\Gamma(\beta)}\ ; \ \ \mathcal{L}\left( f_{X} \right) = \frac{\lambda\beta}{(\lambda + z)^{\beta}\ },\ \forall z \neq - \lambda\]
Generalized Gamma distribution \[f_{X} = \frac{\lambda\alpha\beta\cdot x^{\alpha\beta - 1}\cdot e^{- \lambda\alpha x}}{\Gamma(\beta)}\ ; \ \ \mathcal{L}\left( f_{X} \right) = \frac{\lambda\alpha\beta\Gamma(\alpha\beta)}{{\Gamma(\beta) (\lambda\alpha + z)}^{\beta}\ },\ \forall z \neq - \alpha\lambda\]
Pareto distribution \[f_{X} = \frac{\theta\lambda^{\theta}}{x^{\theta + 1}},\ \ 0 \leq \lambda,\theta;\ \ \ \lambda \leq x\ ; \ \ \mathcal{L}\left( f_{X} \right) = \theta\lambda^{\theta}z^{\theta}\left( \Gamma( - \theta) - I(z\lambda,\ - \theta)\Gamma( - \theta) \right),\] \[I(t,\ \alpha) \equiv \frac{1}{\Gamma(\alpha)}\int_{0}^{t}{x^{\alpha - 1}e^{- x}}dx\ .\]
The phase problem is ubiquitous in experimental science and refers to the loss of information due to missing wave-phase details in many physical measurements. A motivating example is the complex wave representation, where the apparent loss arises by projecting the 3D “corkscrew” wave shape into 2D and showing only the 1D values in the space of complex amplitudes, see the accompanying interactive 3D scenes.
The following three examples of the “phase problem” are analogous to the “kime-phase problem” and demonstrate the potential inference benefits of recovering kime-phases to enhance subsequent modeling, interpretation, and forecasting of intrinsically stochastic phenomena.
The first example reflects recovering the 3D crystal structure from magnitude-only diffraction patterns in crystallographic studies (the crystallographic phase problem). For instance, X-ray crystallography diffraction data only capture the amplitude of the 3D Fourier transform of the molecule’s electron density in the unit cell. The lack of phase information obfuscates the complete recovery of the electron density in spacetime using Fourier synthesis, i.e., via the inverse Fourier transform of the data from the native acquisition frequency space.
The following image from the Spanish National Research Council (CSIC) depicts the challenge of phase-recovery in X-ray diffraction studies.
The 3D atomic structure of a crystal is imaged as diffraction effects on a 2D reciprocal lattice. Only the diffraction magnitudes, i.e., the dark intensities at the reciprocal (Fourier) lattice pixels representing the amplitudes of the fundamental vector quantities, are recorded. Their relative orientations (relative phases) are missing. This lack of phase information inhibits the exact recovery of the value of the electron density function at each point and the explication of the atomic positions in the crystal structure in spacetime.
By some accounts, physics aims to explain observed experiments through mechanistic theories, whereas mathematics aims to describe the fundamental principles of all possible solutions under strict conditions or a priori assumptions. The difference in focus between these two scientific domains sometimes leads to friction. Mathematical possibilities may include system configurations, observable states, or exotic designs that may be possible, likely, unlikely, or extremely rare (e.g., of trivial measure, or zero probability) and that are still absolutely necessary to complete a system. From a physical perspective, such “almost surely” unobservable systems or states are considered unreal, unobservable, and non-constructive, i.e., not worth investigating. This physics-mathematics dichotomy may also be phrased in terms of Kurt Gödel’s incompleteness theorem (1931), which proves that any system equipped with natural-number arithmetic cannot be simultaneously complete and self-consistent with respect to its core axioms.
The second example illustrating a need to generalize real to complex representations explores the differences between quantum physics predictions based on formulating quantum theories using Hilbert-spaces defined over the fields of the reals (\(\mathbb{R}\)) and the complex numbers (\(\mathbb{C}\)), see this article. In a nutshell, real and complex Hilbert-space quantum theoretic predictions yield different results (in network scenarios comprising independent states and measurements).
This suggests the existence of realizations disproving real quantum theory similarly to how the standard Bell experiments disproved local physics. The relevance to complex-time representation of this recent (2022) discovery is reflected in the dichotomy between (1) classical quantum mechanics formulation of self-adjoint (Hermitian) operators and their real eigenvalues (observable states), and (2) the less obvious but potentially more powerful abstraction of the more general bounded linear operators and their complex eigenvalues. This leads to the quest to formulate a kime-operator whose eigenspectrum contains the continuous kime values, \(\kappa = te^{i\theta} \in \mathbb{C}\), as observables over the smallest complete field that naturally extends time, \(\mathbb{R}^{+} \subset \mathbb{C}\).
(Top) In complex quantum theory, two independent sources distribute the two-qubit states and generate a 4-vector output Bell measurement. (Bottom) In real quantum theory, the observed correlations cannot be reproduced, or even well approximated, since all the states and measurements in the network are constrained to be real operators.
The third example illustrates the importance of the phase in 2D Fourier transformation. We will demonstrate two scenarios (see the illustrative sketch below).
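As a hedged illustration of the phase-importance idea (not necessarily the exact two scenarios intended here), the following sketch reconstructs a synthetic 2D signal from its Fourier magnitudes only and from its phases only, using the base-R fft():
n <- 64
img <- outer(1:n, 1:n, function(i, j) as.numeric((i > 16 & i < 48) & (j > 24 & j < 40)))  # synthetic 2D signal
FT  <- fft(img)                                                       # 2D Fourier transform
rec_mag   <- Re(fft(Mod(FT),            inverse = TRUE)) / (n * n)    # reconstruction from magnitudes only
rec_phase <- Re(fft(exp(1i * Arg(FT)),  inverse = TRUE)) / (n * n)    # reconstruction from phases only
par(mfrow = c(1, 3))
image(img, main = "original"); image(rec_mag, main = "magnitude-only"); image(rec_phase, main = "phase-only")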
In addition to the physical science motivations above, complex time offers completeness with respect to both additive and multiplicative operations over kime, similar to the spatial dimensions. The complex (algebraic) field \(\mathbb{C \ni}\kappa\), equipped with the \(( + ,*)\) operations, is the smallest algebraic field that naturally extends the multiplicative algebraic group, \(\left( \mathbb{R}^{+} \ni t, *\right)\), which does not form a group under addition.
In longitudinal data science and statistical inference, the enigmatic “phase of complex time” (kime-phase) represents a challenge analogous to the “physical-chemistry phase problem”. Complex-time representations offer significant opportunities to advance inference science, including the generation of large and realistic random IID samples. They also support a Bayesian formulation of Spacekime analytics, tensor modeling of kimesurfaces over \(\mathbb{C}\), and the development of completely novel analytical methods directly on the kimesurfaces.
Quantum system states are represented by ket vectors in the Hilbert space. For instance, spin states correspond to a 2D Hilbert space, see Chapter 5 of Quantum Mechanics - A Concise Introduction, corresponding with the \(2\times 2\) Pauli matrices.
In the general case, the dimension of the Hilbert space may be finite, \(n\), or infinite, and the space always has orthonormal basis vectors \(\{|\phi_{\alpha}\rangle\}\). Any quantum state can then be expressed as a linear superposition of these basis vectors \[|\psi\rangle=\sum_{\alpha}{ c_{\alpha}|\phi_{\alpha}\rangle }, \] where the expansion coefficient \(c_{\alpha}\), the projection of the state \(|\psi\rangle\) on the basis vector \(|\phi_{\alpha}\rangle\), is given by the inner product of \(|\psi\rangle\) and \(|\phi_{\alpha}\rangle\), i.e., \(c_{\alpha}=\langle \phi_{\alpha} | \psi\rangle,\ \forall\alpha\). To ensure a proper probabilistic interpretation, the normalization condition is imposed \[\sum_{\alpha}{ |c_{\alpha}|^2 = 1}. \]
Quantum system observables are represented by operators, expressed mathematically as second-order tensors (matrices). For instance, the Pauli matrices represent spin observables along different axes in a 2-dimensional Hilbert space, with two outcomes (\(\pm\), e.g., up-down) for the spin along a given unit direction, which could be a coordinate axis, \(x,y,z\), or any direction \[\vec{n}=(\sin\theta\cdot \cos\phi,\ \sin\theta\cdot\sin\phi,\ \cos\theta)\] in spherical coordinates.
For each observable \(\hat{O}\), we have the following relation between the (linear) matrix algebra supporting quantum computations and the corresponding physical interpretations.
\[\underbrace{\hat{O}}_{observable-operator} \overbrace{|\phi_{\alpha}\rangle}^{eigenstate} = \underbrace{\nu_{\alpha}}_{eigenvalue-experimental-outcome} \ \overbrace{| \phi_{\alpha}\rangle}^{eigenstate}\ .\]
The expectation value of the operator in any given quantum state \(|\psi\rangle\) in the Hilbert space represents the overall mean \[\langle\hat{O}\rangle_{\psi} \equiv \langle\hat{O}\rangle =\langle\psi| \hat{O}|\psi\rangle = \overbrace{\sum_{\alpha}{ \underbrace{\nu_{\alpha}\ }_{obs.value}\ \underbrace{\ |\langle \psi |\phi_{\alpha}\rangle|^2}_{(transition)probability}}} ^{weighted-average-of-outcomes }\ ,\]
where \(\{|\phi_{\alpha}\rangle\}_{\alpha}\) is a complete set of eigenvectors for the observable operator \(\hat{O}\), i.e., \(\hat{O}|\phi_{\alpha}\rangle = \nu_{\alpha}| \phi_{\alpha}\rangle\).
The probabilistic interpretation of any quantum state \(|\psi\rangle\) is as a linear superposition of all the eigenstates of the observable (operator) \[|\psi\rangle = \sum_{\alpha}{ c_{\alpha} |\phi_{\alpha}\rangle}\ ,\]
where \(|c_{\alpha}|^2=|\langle \phi_{\alpha} | \psi\rangle|^2,\ \forall\alpha\) is the probability of finding the system in an eigenstate \(|\phi_{\alpha}\rangle\) and measuring \(\hat{O}\) in this quantum state would yield an outcome \(\nu_{\alpha}\) with probability \(|c_{\alpha}|^2=|\langle \phi_{\alpha} | \psi\rangle|^2\).
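A minimal numerical sketch of this eigenvalue-weighted average, using the Pauli \(\sigma_y\) observable in a 2D Hilbert space and an arbitrarily chosen normalized state (illustrative values only, not tied to any specific experiment):
sigma_y <- matrix(c(0, 1i, -1i, 0), nrow = 2)          # Pauli sigma_y (Hermitian observable)
psi <- c(1, 1i) / sqrt(2)                              # an arbitrary normalized state
direct <- as.numeric(Re(Conj(psi) %*% sigma_y %*% psi))       # <psi| O |psi>
eg <- eigen(sigma_y, symmetric = TRUE)                 # nu_alpha and eigenstates |phi_alpha>
probs <- Mod(Conj(t(eg$vectors)) %*% psi)^2            # |<phi_alpha | psi>|^2
weighted <- sum(eg$values * probs)                     # sum_alpha nu_alpha * |<phi_alpha|psi>|^2
c(direct = direct, weighted = weighted, total_prob = sum(probs))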
There are two specific reasons for the most common quantum mechanics interpretation suggesting that the overall phase of a quantum state has no physical meaning. More specifically, this interpretation implies that all these vectors in the Hilbert space over the field of the complex numbers represent one (and unique) physical quantum state \[\left\{ \underbrace{|\psi\rangle}_{base-state/trivial-phase}, \underbrace{e^{i\theta}|\psi\rangle}_{non-trivial-phase} ,\ \forall \theta\in\mathbb{R} \right\}\ .\]
\[\langle\hat{O}\rangle_{e^{i\theta}\psi}\equiv \left\langle e^{i\theta}\psi |\hat{O} | e^{i\theta}\psi \right\rangle= \left\langle \psi | e^{-i\theta}\hat{O} e^{i\theta} | \psi \right\rangle=\] \[\left\langle \psi | \underbrace{e^{-i\theta}e^{i\theta}\hat{O}}_{scalars-commute} | \psi \right\rangle=\left\langle \psi | \hat{O} | \psi \right\rangle\equiv \langle\hat{O}\rangle_{\psi}.\]
\[|\langle\phi_{\alpha}| e^{i\theta}\psi\rangle |^2 = |\langle\phi_{\alpha}| e^{i\theta} |\psi\rangle |^2=\] \[{\underbrace{|e^{i\theta}|}_{1}}^2 |\langle\phi_{\alpha}|\psi\rangle |^2= |\langle\phi_{\alpha}|\psi\rangle |^2 \equiv |c_{\alpha}|^2 .\]
These properties may suggest that, \(\forall\theta\in\mathbb{R}\), there are no physical differences between the states \(|\psi\rangle\) and \(e^{i\theta}|\psi\rangle\); hence, state phases can be ignored.
Indeed, these finite dimensional Hilbert space derivations using sums naturally extend to integrals in the infinite dimensional Hilbert spaces, e.g., \(H\equiv L^2\), square integrable functions. For instance, the general expected value \[\langle\hat{O}\rangle_{\psi}\equiv \langle\psi(\omega)|\hat{O}(\omega)|\psi(\omega)\rangle = \int_{\Omega} {\psi^{\dagger}\hat{O}\psi\ d\omega}\] can be expressed for the position \(\hat{x}\) and momentum \(\hat{p}\) operators as
\[\langle\hat{x}\rangle\equiv \langle\psi|\hat{x}|\psi\rangle = \int_{\mathbb{R}} {\psi^{\dagger}(x,t)\psi(x,t) x dx},\]
\[\langle\hat{p}\rangle\equiv \langle\psi|\hat{p}|\psi\rangle = \int_{\mathbb{R}} { \left (\underbrace{i\hbar \frac{\partial}{\partial x}}_{\hat{p}^{\dagger}} \psi^{\dagger}(x,t)\right ) \psi(x,t) dx} \equiv \int_{\mathbb{R}} { \psi^{\dagger}(x,t) \left (-i\hbar \frac{\partial}{\partial x} \psi(x,t)\right ) dx}\ .\]
How about the variance of the linear operator? Would the variances of \(|\psi\rangle\) and \(e^{i\theta}|\psi\rangle, \forall\theta\) be the same?
\[\sigma_{(\hat{O},e^{i\theta}\psi)}^2\equiv \left\langle e^{i\theta}\psi |\hat{O}^2 | e^{i\theta}\psi \right\rangle - \left (\left\langle e^{i\theta}\psi |\hat{O} | e^{i\theta}\psi \right\rangle\right )^2= \left\langle \psi |e^{-i\theta}\hat{O}^2 e^{i\theta}| \psi \right\rangle - \left (\left\langle \psi |\hat{O} | \psi \right\rangle\right )^2=\] \[\left\langle \psi |e^{-i\theta+i\theta}\hat{O}^2| \psi \right\rangle - \left (\left\langle \psi |\hat{O} | \psi \right\rangle\right )^2= \left\langle \psi |\hat{O}^2| \psi \right\rangle - \left (\left\langle \psi |\hat{O} | \psi \right\rangle\right )^2\equiv \sigma_{(\hat{O},\psi)}^2\ .\]
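The same invariance can be checked numerically; the following self-contained R sketch reuses the hypothetical \(\sigma_y\) observable and state from the earlier sketch and confirms that both the mean and the variance are unchanged by a global phase:
sigma_y <- matrix(c(0, 1i, -1i, 0), nrow = 2)                     # the same hypothetical Hermitian observable
psi  <- c(1, 1i) / sqrt(2)                                        # normalized state
psi2 <- exp(1i * 0.9) * psi                                       # the same state, multiplied by a global phase
expval <- function(O, v) as.numeric(Re(Conj(v) %*% O %*% v))      # <v| O |v>
c(mean_psi   = expval(sigma_y, psi),  mean_phase = expval(sigma_y, psi2),
  var_psi    = expval(sigma_y %*% sigma_y, psi)  - expval(sigma_y, psi)^2,
  var_phase  = expval(sigma_y %*% sigma_y, psi2) - expval(sigma_y, psi2)^2)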
In a nutshell, the global phase may appear to be physically irrelevant because of the linearity of the Schrödinger equation: two states \(|\psi\rangle\) and \(e^{i\theta}|\psi\rangle\) are both solutions of the same equation. Another reason why phases are frequently ignored in some experiments is due to energy conservation laws; energy expectation values do not depend on the phase. However, other measurements and observables may certainly depend heavily on the phase.
The phase plays a role in quantum computing and throughout quantum physics; however, it is routinely ignored despite the fact that it often shows up in the underlying mechanistic models, mathematical representations, and quantum physics equations.
Consider the example of a free-falling object: due to gravity, a dropped ball accelerates towards the ground. The velocity of the ball (i.e., its kinetic energy) at the point it hits the ground is determined by the change, or difference, in the ball’s potential energy.
Suppose the initial velocity of a free-falling ball is zero, \(0=v_o=v(t)_{t=0}\) and the height of the initial drop is the position \(x_o=x(t)_{t=0}\). Since the acceleration of the earth’s gravity is constant, \(g\), which is the rate of change of the velocity with respect to time (\(t\)), the velocity changes uniformly over time.
Thus, \(v(t) = −g\cdot t\), where the negative sign reflects the downward direction of the velocity. Since the acceleration is constant, the velocity changes linearly and its average over the period \([0,t]\) is \(\bar{v} = \frac{v_o − gt}{2} = -\frac{gt}{2}\). The distance traveled by the ball as it falls is \(\Delta x = |\bar{v}|\cdot t = \frac{g\cdot t^2}{2}\) and as the initial height of the drop location is \(x_o\), the actual position (height) and momentum of the ball at any given time \(t\) are \[x(t) = x_o - \Delta x\equiv x_o -\frac{g\cdot t^2}{2}\ ,\ \ v(t)=-g\cdot t\ .\]
Hence, in classical mechanics we can completely determine the state of the ball (its position and velocity at a given time). This situation is drastically different in quantum mechanics, where measuring precisely the exact position and momentum of a particle at the same time is impossible.
Given the mass \(m=Volume\cdot Density\) of the ball, we can calculate the kinetic energy of the ball at time \(t\) \[K = \frac{mv^2}{2} = \frac{mg^2t^2}{2} = mg(x_o − x)\ .\] Rearranging the terms, we obtain the law of conservation of energy \[\underbrace{E}_{total\\ energy}=\underbrace{K}_{kinetic\\ energy} + \underbrace{mgx}_{potential\\ energy\ V(x)} = \underbrace{mgx_o}_{constant} \ .\]
The sum of the kinetic and potential energies is a constant of motion. Note the rebalancing between the kinetic and potential energies. As the ball falls, its kinetic energy increases exactly as much as its potential energy decreases. This result is valid for arbitrary systems without friction, where the total energy is composed of kinetic energy and potential energy. During the motion, the kinetic energy and potential energy are transformed into each other, preserving the total energy over time. At the start, the total energy is purely potential, \(E = mgx_o,\ K(t_0)=0\). In classical mechanics, the energy can vary continuously, whereas in quantum mechanics, energy can be discrete.
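A small numeric sketch of this energy bookkeeping (illustrative values \(m=1\) kg, \(g=9.81\ m/s^2\), \(x_o=10\) m): the total \(E = K + mgx\) stays equal to \(mgx_o\) throughout the fall.
m <- 1; g <- 9.81; x_o <- 10
t <- seq(0, sqrt(2 * x_o / g), length.out = 5)    # from release until the ball reaches the ground
x <- x_o - g * t^2 / 2                            # height x(t)
v <- -g * t                                       # velocity v(t)
K <- m * v^2 / 2                                  # kinetic energy
V <- m * g * x                                    # potential energy
round(data.frame(t, x, v, K, V, E = K + V), 3)    # E is constant and equals m*g*x_o = 98.1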
In terms of computing the dynamics of a physical system, only energy differences, not absolute energy values, are important. For modeling the motion of electrons, only the differences in voltages, as opposed to absolute values, tend to be important. Similarly, in quantum physics, the quantity that determines the dynamics is the Hamiltonian of the system, which is measured in units of energy. The Hamiltonian is a self-adjoint (Hermitian) operator with eigenvalues representing the observable energy levels during the time evolution of the system.
In quantum computing using binary quantum bits (qubits) \(\{|0\rangle,|1\rangle \}\), the global phase of a quantum state is not detectable, i.e., \(|\psi\rangle\) and \(e^{i\theta}|\psi\rangle\) represent the same state. Hence, a single-qubit state can be expressed as \[|\psi\rangle = \sqrt{1-p}|0\rangle + e^{i\phi}\sqrt{p}|1\rangle, \]
where \(0\leq p\leq 1\) is the probability of the qubit being in the state \(|1\rangle\) and the quantum phase \(\phi\in [0, 2\pi)\).
Hermitian operators like the Hamiltonian are used for modeling quantum computing gates, which can be expressed as exponentials of Hermitian operators. For instance, the (unitary) gate
\[U=e^{-\frac{iHt}{\hbar}}\] acts on states \(|\psi\rangle\) as follows \[U|\psi\rangle =e^{-\frac{iHt}{\hbar}}|\psi\rangle =|\psi'\rangle .\]
As absolute energy values depend on the chosen reference point (and measuring unit) and are not important, adding a constant amount of energy to the system Hamiltonian does not alter the physical states.
Let \(I\) be the identity operator. Then, for any \(\lambda\in\mathbb{R}\), \(H\to H+\lambda I\) has the effect of shifting all of the eigenvalues by a fixed amount \(\lambda\), which leaves the differences between values unchanged! Hence, the system dynamics are unaffected by this shift. Let’s examine the effect of this shift from the old (\(H\)) to the new \(H'=H+\lambda I\) Hamiltonian on the quantum gate by contrasting the effects of the corresponding evolution (gate) operators \(U\) and \(U'\) on a state vector \(|\psi\rangle\): \[U|\psi\rangle =e^{-\frac{iHt}{\hbar}}|\psi\rangle =|\psi'\rangle .\] \[U'|\psi\rangle =e^{-\frac{i(H+\lambda I)t}{\hbar}}|\psi\rangle = e^{i\frac{-\lambda It}{\hbar}} \underbrace{e^{-\frac{iHt}{\hbar}}|\psi\rangle}_{U|\psi\rangle}= e^{i\frac{-\lambda It}{\hbar}} \left ( U |\psi\rangle\right ) \underbrace{\ \ \ =\ \ \ }_{eigenvalue} e^{\gamma} |\psi'\rangle \ ,\]
where the action of the energy-shifted Hamiltonian is the same as that of the original gate, multiplied by a global phase factor \(e^{\gamma}\), with \(\gamma=-\frac{i \lambda t}{\hbar}\). These global phases can be ignored as they correspond to uniform shifts in energy that do not change the system dynamics.
Note that the Baker–Campbell–Hausdorff formula justifies this equality \[e^{-i\frac{(H+\lambda I)t}{\hbar}} = e^{-i\frac{\lambda I t}{\hbar}} \cdot e^{-i\frac{H t}{\hbar}} .\]
For any pair of elements \(X\) and \(Y\) in a Lie algebra, possibly non-commuting operators, the solution \(Z\) of the equation \(e^{X}e^{Y}=e^{Z}\) can be expressed as \(Z=\log \left(e^{X}e^{Y}\right)\), a series in \(X\), \(Y\), and their repeated commutators. With \(X\propto H\) and \(Y\propto I\), all commutator terms vanish since \([H,I] = HI - IH \equiv 0\), leaving only the leading term, \(X+Y\).
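A minimal numerical sketch of this energy-shift argument (hypothetical \(2\times 2\) Hermitian Hamiltonian, natural units \(\hbar=1\)): shifting \(H \to H+\lambda I\) changes the gate only by the global phase \(e^{-i\lambda t/\hbar}\).
hbar <- 1; t0 <- 0.4; lambda <- 2
H <- matrix(c(1, 1 - 1i, 1 + 1i, -1), nrow = 2)            # a hypothetical Hermitian Hamiltonian
U_of <- function(M) {                                      # gate exp(-i*M*t0/hbar) via spectral decomposition
  e <- eigen(M, symmetric = TRUE)
  e$vectors %*% diag(exp(-1i * e$values * t0 / hbar)) %*% Conj(t(e$vectors))
}
U  <- U_of(H)
Up <- U_of(H + lambda * diag(2))                           # shifted Hamiltonian H' = H + lambda*I
max(Mod(Up - exp(-1i * lambda * t0 / hbar) * U))           # ~ 0  =>  U' differs only by a global phase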
Let’s explore some strategies to amplify the utility of classical spacetime processes via complex time transformations. Both of these basic computable tasks – random simulation and inference (including forward prediction, regression, classification) – can be addressed through kime representation. The benefits of Spacekime analytics are realized by retaining both the commonly observed kime-magnitude (time) and the enigmatic kime-direction (phase).
Large-scale simulation of realistic spacetime observations based on a few measurements (smaller samples of repeated measures for a fixed time, \(t_{o}\))
\[\left\{ \underbrace{\ \ \ y_{l}\ \ \ }_{observed \\ eigen\ states} = \underbrace{\ \ f\left( t_{o}e^{i\theta_{l}} \right)\ \ }_{implicit \\ function} \right\}_{l = 1}^{n}\ .\]
Note that these (spatiotemporal) observations may also depend on the location, \(x \in \mathbb{R}^{3}\), however, under a controlled repeated experiment assumption, we are suppressing this spatial dependence. Let’s assume that the implicit Laplace transform of the spacetime function (wavefunction, \(f\)), \(F(z)\mathcal{= \ L}(f)(z)\), is evaluated (instantiated) at \(z = t_{o}e^{i\theta}\mathbb{\in C}\), for some fixed spatiotemporal location (\(t_{o} \in \mathbb{R}^{+},\ x_{o} \in \mathbb{R}^{3})\). Consider the kime-surface isocontour \(G(\theta)\mathcal{= \ L}(f)\left( t_{o}e^{i\theta} \right)\), parameterized by \(\theta \in \lbrack - \pi,\pi\ )\). Define the phase density \(f_{\theta} = \frac{G(\theta)}{\left\| G(\theta) \right\|}\), where the density normalization factor is
\[\left| \left| G(\theta) \right| \right| = \int_{- \pi}^{\pi}{G(\theta)}d\theta.\]
Then, \(\int_{- \pi}^{\pi}{f_{\theta}(\theta)}d\theta = 1\) and the corresponding cumulative phase distribution function is \[F_{\theta}\left( \theta_{o} \right) = \Pr\left( \theta \leq \theta_{o} \right) = \int_{- \pi}^{\theta_{o}}{f_{\theta}(\theta)}d\theta\ .\]
First, we will draw a large sample from the kime-phase distribution by taking a large uniform sample, \(\left\{ u_{k} \right\}_{k = 1}^{N} \sim Uniform(0,1)\) and evaluating the quantile function, \(F_{\theta}^{- 1}\), at the uniform points, \(\theta_{k} = F_{\theta}^{- 1}\left( u_{k} \right)\). Having this large sample from the phase distribution allows us to obtain a corresponding realistic spacetime sample via the inverse Laplace transform
\[\left\{ \underbrace{\ \ \ {\widehat{y}}_{k}\ \ \ }_{\begin{matrix} simulated \\ eigen\ states \\ \end{matrix}} = \underbrace{\ \ \mathcal{L}^{- 1}(F)\left( t_{o}e^{i\theta_{k}} \right)\ \ }_{\begin{matrix} implicit \\ function \\ \end{matrix}} \right\}_{k = 1}^{N}\ ,\ \ \ typically\ \ n \ll N.\]
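A hedged sketch of the phase-sampling steps of this protocol, assuming a hypothetical signal \(f(t)=e^{-t}\) with Laplace transform \(F(z)=\frac{1}{1+z}\), and using the modulus \(|F(t_o e^{i\theta})|\) as a real, nonnegative surrogate for the isocontour \(G(\theta)\); the final inverse-Laplace recovery of the \({\widehat{y}}_{k}\) is left implicit, as in the text.
set.seed(11)
t0 <- 1.5                                              # fixed kime-magnitude (time)
theta <- seq(-pi, pi, length.out = 1001)
G <- Mod(1 / (1 + t0 * exp(1i * theta)))               # surrogate isocontour G(theta) = |F(t0*e^{i*theta})|
h <- diff(theta)[1]
f_theta <- G / sum(G * h)                              # normalized phase density f_theta
F_theta <- cumsum(f_theta) * h                         # phase CDF on the grid
theta_k <- approx(x = F_theta, y = theta, xout = runif(5000), rule = 2)$y   # theta_k = F^{-1}(u_k)
z_k <- t0 * exp(1i * theta_k)                          # kime points z_k feeding the inverse Laplace step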
Question: Does this argument assume that we need to interpolate the signal \(f(t)\) over the small observed sample, \(\left\{ y_{l} \right\}_{l = 1}^{n}\), in spacetime, as we did in the arXiv LT/ILT paper, to be able to approximate \(F(z)\mathcal{= \ L}(f)(z)\)? If so, does the recovered large sample \(\left\{ {\widehat{y}}_{k} \right\}_{k = 1}^{N}\) depend on the interpolation scheme, or the basis, and how?
Alternatively, we can also sample the signal in spacetime as we discussed earlier, via the MGF and the quantile function. Set \(s = - t\) in the Laplace transform to get the moment generating function (MGF) of \(Y\), the random variable generating observable outcomes via the implicit function (\(f\)), i.e., \(y_{l} = f\left ( t_{o}e^{i\theta_l} \right ), \theta_l \sim \Phi_{\theta}\lbrack - \pi,\pi), 1 \leq l \leq n\). Then, \[\mathcal{M}_{Y}(t)\mathcal{\equiv M}(Y)(t)\mathbb{= E}\left( e^{tY} \right) = \int_{}^{}{e^{ty}f_{Y}(y)dy}\ .\]
Assume the (small) repeated sample, \(\left\{ y_{l} \right\}_{l = 1}^{n}\), represents instantiations of a continuous random (observable) variable \(Y \sim f_{Y}\), e.g., particle-position or value of a stock. We can employ the Laplace transform to compute the cumulative distribution function of the process, \(F_{Y}\), and by extension, the inverse CDF (the quantile function), \(F_{Y}^{- 1}\), as follows \[p = F_{Y}(y) \equiv \Pr(Y \leq y) = \mathcal{L}^{- 1}\left( \frac{1}{s}\mathbb{E}\left( e^{- sY} \right)\ \right)(y) = \mathcal{L}^{- 1}\left( \frac{1}{s}\mathcal{L}\left( f_{Y} \right)\ \right)(y)\mathbb{\ :R \rightarrow}\lbrack 0,1\rbrack\ .\]
\[\begin{matrix} H^{*} & \overbrace{\underbrace{\ \rightleftarrows \ }_{\mathcal{L}^{- 1}}}^{\mathcal{L}} & H^{*} \\ \uparrow & & \downarrow \\ H & \overbrace{\underbrace{\ \rightleftarrows \ }_{f_{X}^{- 1}}}^{X,f_{X}} & \mathbb{C} \\ \end{matrix}\]
Note that the Laplace transform and its inverse are linear operators acting on the Hilbert space of functions, i.e., the transforms are in the dual space, \(\mathcal{L,}\mathcal{L}^{- 1} \in H^{*}\), where \(H\) is the space of square integrable functions with sufficiently fast decay and \(H^{*}\) is its dual. However, the random variable \(Y\), and correspondingly its density function \(f(y) = f_{Y}(y)\), map \(H\mathbb{\rightarrow C}\). Hence, the “inverse CDF function” \(F_{Y}^{- 1}\) and the “inverse linear operator” \(\mathcal{L}^{- 1}\) have different meanings that are not interchangeable. Even though using the \(()^{- 1}\) notation for both operations is standard, to avoid any potential confusion, it may be more appropriate to denote the Laplace transform and its inverse by hats and checks, \(\widehat{f} = \mathcal{L}(f)\), \(f \equiv \check{\left( \widehat{f} \right)} = \mathcal{L}^{- 1}\left( \widehat{f} \right)\), instead of the inverse notation. Because of this difference, the equation above does not imply that inverting the CDF is equivalent to inverting the Laplace transform, \[\nRightarrow y = F_{Y}^{- 1}(p) = \frac{1}{p}\mathbb{E}\left( e^{- pY} \right) = \frac{1}{p}\mathcal{L}\left( f_{Y} \right)(p):\lbrack 0,1\rbrack\mathbb{\rightarrow R\ }(this\ is\ meaningless),\]
\[U \sim Uniform(0,1)\ \Longrightarrow Y = F_{Y}^{- 1}(U) \underbrace{\not=}_{not\ =}\frac{1}{U}\mathcal{L}\left( f_{Y} \right)(U) \sim \ f_{Y}.\]
Hence, \({\widehat{y}}_{k} = F_{Y}^{- 1}\left( u_{k} \right) \neq \frac{1}{u_{k}}\mathcal{L}\left( f_{Y} \right)\left( u_{k} \right) \sim f_{Y}\), where \(u_{k} \sim Uniform(0,1)\), \(1 \leq k \leq N\), and \(n \ll N\).
Question: Would this alternative sampling scheme work in practice? Without any interpolation of the implicit signal \(f(t)\) over the small observed sample, \(\left\{ y_{l} \right\}_{l = 1}^{n}\), in spacetime, compute \(F(z)\mathcal{= \ L}\left( \left\{ y_{l} \right\}_{l = 1}^{n} \right)(z)\). Then, take a large sample of kime-phases \(\left\{ \theta_{k} \right\}_{k = 1}^{N} \sim \ \Phi_{\theta}\lbrack - \pi,\pi)\), e.g., \(\Phi_{\theta}\lbrack - \pi,\pi)\) can be the truncated symmetric Laplace (double exponential) distribution over \(\lbrack - \pi,\pi)\). Finally, for a fixed time, \(t_{o}\), we can recover a large sample \(\left\{ {\widehat{y}}_{k} \right\}_{k = 1}^{N}\) of repeated measures in spacetime via \[\left\{ \underbrace{\ \ \ {\widehat{y}}_{k}\ \ \ }_{repeated \\ sample\ at\ \ t_{o}} \right\}_{k = 1}^{N} = \mathcal{L}^{- 1}\left( \left\{ \underbrace{t_{o}e^{i\theta_{k}}}_{z_{k}} \right\}_{k = 1}^{N} \right)\ .\ \]
Next, we describe a generic spacekime analytic protocol for modeling and inference of prospective process behavior, e.g., prediction, regression, or clustering, using limited spacetime observations.
There is one important difference between the prior (large-scale) simulation task discussed above and the current modeling, prediction and inference protocol. Earlier, the repeated samples represented multiple IID observations of a controlled experiment with a fixed spatiotemporal location, (\(t_{o} \in \mathbb{R}^{+},\ x_{o} \in \mathbb{R}^{3})\), whereas now the prediction may be sought across spacetime. In particular, we are interested in forward prediction of the longitudinal process at some fixed spatial location, \(x_{o} \in \mathbb{R}^{3}\), for future time points, \(t > t_{o}\), for some past times, \(t < t_{o}\), or even in more general situations [^1].
Naturally, we start with a small sample of observations measuring the outcome variable repeatedly \(n\) times \((1 \leq l \leq n)\), independently over a set of timepoints \(0 \leq t_{m} \leq t_{M}\),
\[\underbrace{\ \ \ Y\ \ \ }_{observed \\ tensor} = \underbrace{\ \ f\left( \overbrace{te^{i\theta}}^{\kappa} \right)\ \ }_{implicit \\ function} \equiv \left( y_{m,l} \right )_{\underbrace{\ \ t_{M}\ \ }_{time} \times \underbrace{\ \ n\ \ }_{repeat}} \equiv \left\{ \underbrace{\ \ \ y_{m,l}\ \ \ }_{observed \\ eigen\ states} = \underbrace{\ \ f\left( t_{m}e^{i\theta_{l}} \right)\ \ }_{implicit \\ function} \right\}_{m = 1,l = 1}^{M,\ n}\ .\]
The observed data represents a second order tensor \(\left( y_{m,l} \right)_{t_{M} \times n}\) indexed by longitudinal event order (time, \(0 \leq t_{m} \leq t_{M}\)) and the repeated IID sampling index (\(1 \leq l \leq n \ll N\)). This tensor data can be thought of as a design matrix with rows corresponding to time indices (\(0 \leq t_{m} \leq t_{M}\)) and columns tracking the repeated measurements within each time point, \(1 \leq l \leq n\). Note that the data tensor \(\left( y_{m,l} \right)_{t_{M} \times n}\) may potentially include missing elements. All supervised prediction, classification and regression models we can apply to observed spacetime data, \(\left\{ y_{m,l} \right\}\), are also applicable to the enhanced (kime-boosted) sample, \(\left( {\widehat{y}}_{j,k} \right)_{T \times N},\ \ N \gg n\). The kime-boosted sample is recovered much like we did in the earlier simulation task,
\[{\widehat{Y} = \left( {\widehat{y}}_{j,k} \right)_{T \times N} \equiv \left\{ \underbrace{\ \ \ {\widehat{y}}_{j,k}\ \ \ }_{simulated \\ eigen\ states} = \underbrace{\ \ \mathcal{L}^{- 1}(F)\left( t_{j}e^{i\theta_{k}} \right)\ \ }_{implicit \\ function } \right\}}_{j = 1,\ k = 1}^{T,\ N}\ ,\]
where the time indices \(1 \leq j \leq T,\ T \gg M\) and the phase indices \(1 \leq k \leq N,\ N \gg n\).
Forward inference, time predictions, and classification tasks using the kime-boosted sample can be accomplished in many alternative ways. For instance,
We can perform the desired modeling, forecasting, regression, or classification task on the Spacekime recovered large sample, i.e., design matrix \(\left( {\widehat{y}}_{j,k} \right)_{T \times N}\), via inverse Laplace transforming the kime-surfaces as spacetime data objects, that are much richer than their counterparts, \(\left( y_{m,l} \right)_{t_{M} \times n}\), originally observed in spacetime.
Alternatively, instead of directly modeling the native observed repeated measures time-series, \(\left( y_{m,l} \right)_{t_{M} \times n}\), all modeling, inference, prediction, or classification tasks can also be accomplished directly on the kimesurfaces, \(F(z)\mathcal{= \ L}(f)(z)\), where \(z = te^{i\theta}\mathbb{\in C}\) and the interpolation time-series function is \(f(t) \approx \left( y_{m,l} \right)_{t_{M} \times n}\). In this case, any inference, prediction, or derived class-labels need to be pulled back from spacekime into spacetime via the inverse Laplace transform,
\[\text{Spacetime Inference} \rightleftharpoons \mathcal{L}^{- 1}(\text{Spacekime Inference})\ .\]
This spacetime inference recovery facilitates direct (spacetime) interpretation in the context of the specific modeling or prediction task accomplished in spacekime.
Let’s define the notion of time-observables as well-defined measures on the complex time space (\(\kappa = t e^{i\theta}\mathbb{\in C}\)). Naturally, the time-lapse distance \(d_{TL}\) between a pair of kime moments \(\kappa_{1} = t_{1}e^{i\theta_{1}},\kappa_{2} = t_{2}e^{i\theta_{2}}\mathbb{\in C}\) is defined by
\[d_{TL}\left( \kappa_{1},\kappa_{2} \right) = \inf_{\phi_{1},\phi_{2}}\left| \left\| t_{1}e^{i\phi_{1}} \right\| - \left\| t_{2}e^{i\phi_{2}} \right\| \right| = \left| \left| t_{1} \right| - \left| t_{2} \right| \right|\ .\]
However, there are many other ways to track or measure the longitudinal or temporal size of kime-subsets that extend to kime the common notions of length, area and volume, which we often use to measure the size of spatial subsets.
For instance, suppose the boundary of a kime area \(\mathbb{C \supseteq}A \ni \kappa_{o}\) can be parameterized as a curve describing the radial displacement \(r(\theta)\) of kime points \(\kappa = re^{i\theta}\) along the (simple closed curve) boundary from a reference point \(\kappa_{o}\)
\[\partial A = \left\{ r = r(\theta)\ :\ \theta \in \lbrack - \pi,\ \pi) \right\}\ .\]
The kime-area measure (Lebesgue measure) of the set \(A\subset\mathbb{C}\) is
\[\mu(A) = \iint_{\mathbb{C}}{ \underbrace{\ \ \chi_{A}(\kappa)\ \ \ }_{characteristic \\ function}d \mu}.\]
The importance of defining a proper (Lebesgue?) measure \(\mu\) on kime is that this would naturally lead to formulating likelihoods on the space of complex time. In other words, as all observable and predictable time is finite, and the phase distribution is well defined, a properly defined measure \(\mu\) on kime can be normalized and extended to a kime probability measure \(p_{\kappa}\) that is absolutely continuous with respect to \(\mu\), \(p_{\kappa} \ll \mu\).
Assume the sample space of the radial (time) distribution is \([0,R_{0}]\), then the kime-probability measure is
\[P_{\kappa}(A) = \iint_{\mathbb{C}}{ \underbrace{\ \ \chi_{A}(\kappa)\ \ \ }_{characteristic \\ function}d F(\kappa)}= \iint_{\mathbb{C}}{\chi_{A}(\kappa)\ P(d\kappa)} = \iint_{A}{dF(\kappa)} =\iint_{A}{f_{X,Y}(x,y)\ dxdy} = \\\] \[\iint_{A}{f_{X,Y}(r(\theta)\cos\theta,r(\theta)\sin\theta)\ \begin{vmatrix} \frac{\partial x}{\partial r} & \frac{\partial x}{\partial \theta} \\ \frac{\partial y}{\partial r} & \frac{\partial y}{\partial \theta} \end{vmatrix}drd\theta} = \iint_{A}{\underbrace{f_{X,Y}(r(\theta)\cos\theta,r(\theta)\sin\theta)r}_{f_{R,\Theta}(r,\theta): = rf_{X,Y}(x,y)}drd\theta} = \iint_{A}{f_{R,\Theta}(r,\theta)drd\theta} \\ = \left\{ \begin{matrix} \ \int_{-\pi}^{\pi}{\int_{0}^{R(\theta)}{f_{R}(r)drf_{\Theta}(\theta)d\theta}}, \ \ r \perp \theta \\ \\ \ \ \ \int_{-\pi}^{\pi}{\int_{0}^{R(\theta)}{f_{R,\Theta}(r,\theta)drd\theta}},\ \ otherwise \\ \end{matrix} \right.\ \ \ .\ \ \]
Note: Specifically, if the radial (time) distribution is uniform on \([0,R_{0}]\) and the radial (time) and angular (phase) distributions are independent, i.e., \(f_R(r) = \frac{1}{R_{0}}, \forall r\in [0,R_{0}]\), then \[P_{\kappa}(A) = \int_{-\pi}^{\pi}{\int_{0}^{R(\theta)}{f_{R}(r)drf_{\Theta}(\theta)d\theta}} =\int_{-\pi}^{\pi}{\int_{0}^{R(\theta)}{\frac{1}{R_{0}}drf_{\Theta}(\theta)d\theta}}= \int_{-\pi}^{\pi}{\frac{R(\theta)}{R_{0}}f_{\Theta}(\theta)d\theta} \]
Note: If the radial (time) distribution is not uniform, i.e., \(f_R(r)\not= \frac{1}{R_{0}}\) for some \(r\in [0,R_{0}]\), then we may need to include the time marginal inside the inner integral, \(\int_{0}^{R(\theta)}{f_R(r)\ dr}\).
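A quick numerical check of the polar-coordinate formula above, assuming (purely for illustration) a uniform radial density on \([0,R_0]\) and a truncated Laplace(0,1) phase density: taking \(A\) to be the full disc, \(R(\theta)=R_0\), the kime-probability integrates to 1.
R0 <- 3
theta <- seq(-pi, pi, length.out = 20001)
h <- diff(theta)[1]
f_phase <- exp(-abs(theta))
f_phase <- f_phase / sum(f_phase * h)              # truncated Laplace(0,1) phase density, renormalized on [-pi, pi)
R_theta <- rep(R0, length(theta))                  # boundary of the full disc, R(theta) = R0
sum((R_theta / R0) * f_phase * h)                  # ~ 1, the total kime-probability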
Problem: Use Green’s theorem to derive a kime-area measure using the angular phase distribution density function, \(f_{\Theta,r}(\theta)\). Initially, we can assume that the phase-density is independent of the time magnitude, i.e., \(r\perp \theta\ ,\ f_{\Theta,r}(\theta) \equiv f_{\Theta}(\theta)\), but later we may need to potentially consider phase-densities that are time-dependent.
Proof Outline \[\iint_A {\underbrace{p(da)}_{p(a\in da)}}= \iint_A {\underbrace{p(dxdy)}_{p(x\in dx, y\in dy)}}\underbrace{=}_{Green's\ Thm} \frac{1}{2}\oint_{\partial A} {\left (-yp(dx) + x p(dy)\right )}\ .\]
Using the polar coordinate transformation \((x,y)=(r(\theta)\cos(\theta), r(\theta)\sin(\theta))\),
\[\mu(A)=\iint_A {p(da)}= \iint_A {p(dxdy)}\underbrace{=}_{Green's\ Thm} \frac{1}{2}\int_{\partial A} {\left (-yp(dx) + x p(dy)\right )} =\\ \frac{1}{2}\int_{\partial A} {\left (-r(\theta)\sin(\theta) p(r(\theta)\cos(\theta)) + r(\theta)\cos(\theta) p(r(\theta)\sin(\theta))\right )} =\\ \frac{1}{2}\int_{\partial A} {r^2(\theta)p(d\theta)}= \frac{1}{2}\int_{\partial A} {r^2(\theta)f_{r,\theta}(\theta)d\theta}\ .\]
Question: Which one comes first, the Lebesgue measure \(\mu\), kime-area measure \(\mu(A)\), or the kime probability measure \(p_{\kappa}\)? Need to avoid a circular argument! Perhaps start with a general measure space \((\mathbb{C}, \mathcal{X}, \mu)\), where \(\mathcal{X}\) is a \(\sigma\)-algebra over \(\mathbb{C}\) and \(\mu\) is a measure defined on set elements in \(\mathcal{X}\). Next define the kime-area measure, \(\mu(A) = \iint_{\mathbb{C}}{\chi_{A}(\kappa)d \mu}\), followed by the kime probability measure \(p_{\kappa}(A) = \iint_{\mathbb{C}}{\chi_{A}(\kappa)d F(\kappa)}=\iint_{\mathbb{C}}{\chi_{A}(\kappa)\ P(d\kappa)} = \iint_{A}{dF(\kappa)}\). Finally, argue that \(\exists \mu\) such that \(p_{\kappa}\ll\mu\), i.e., the kime probability measure is absolutely continuous with respect to Lebesgue measure.
In a statistical and computational data science context, the kime-space probability measure \(p_{\kappa}\) facilitates modeling random experiments involving observing repeated samples from controlled experiments (reflecting the kime-phase distribution) across a range of time-points (kime-magnitudes). Suppose the kime sample space \(\Omega\) consists of all possible observable or predictable outcomes of the experiment (cf. light cone), including outcomes that have trivial probability of occurring, \(p_{\kappa}(A \subset \Omega) = 0\). Kime-events (kevents) are measurable subsets \(A \subset \Omega\) and the entire collection of kevents forms a \(\sigma\)-algebra on \(\Omega\). Each kevent is associated with a probability \(0 \leq p_{\kappa}(A \subset \Omega) \leq 1\) and the \(\sigma\)-algebra on \(\Omega\) has a natural interpretation as the collection of measurable kevents about which information is observable.
Consider the following kime sample-space
\[\Omega = \underbrace{\ \ \ \left\{ t \in \mathbb{Z} \mid t \geq 0 \right\}\ \ \ }_{ T\ (time) \\ kime - magnitude} \times \underbrace{\Theta_{\lbrack - \pi,\pi)}(\theta)}_{kime - phase \\ (distribution) }\]
equipped with the following (joint kime distribution density) Poisson-Laplace kime probability measure
\[p_{\kappa}(A) = \int_{\Omega}^{}{\chi_{A}(a)f_{\kappa}(a)\mathbf{d}a},\ \forall A \subset \Omega\ ,\ \ \chi_{A}(a) = \left\{ \begin{matrix} 1,\ \ a \in A\ \\ 0,\ otherwise \\ \end{matrix}\ . \right.\ \]
The (first marginal) temporal distribution is \(Poisson(\lambda)\) with point mass (PMF, \(\nu\)) and cumulative distribution (CDF, \(Ν\)) functions
\[\underbrace{\ \ \nu\ \ }_{PMF}\left( \underbrace{\ \ \ \left\{ t \right\}\ \ \ }_{pointset} \right) \equiv \frac{\lambda^{t}}{t!}e^{- \lambda},\ \forall t \in T\ ,\ and\ \ \underbrace{\ Ν(R)\ \ }_{CDF} \equiv \sum_{t \in R} {\frac{\lambda^{t}}{t!}e^{- \lambda}},\ \forall R \subset T\ .\]
Below, we show that the (second marginal) phase distribution is the truncated \(Laplace(\mu = 0,b = 1)\) distribution on \(\lbrack - \pi,\pi)\) with the following PDF and CDF
\[\underbrace{\ \ \ f_{\Theta}\ \ \ }_{PDF}(\theta) \equiv \frac{1}{2}e^{- |\theta|} \cdot \chi_{\lbrack - \pi,\pi)}(\theta),\ \forall\theta\in\mathbb{R}\ ,\ and\ \]
\[\underbrace{F_{\Theta}\left( \theta \middle| - \pi \leq \ \Theta < \pi \right)}_{truncated\ Laplace\ CDF} = \left\{ \begin{matrix} \frac{e^{\theta} - 2F_{\Theta}( - \pi)}{2\left( F_{\Theta}(\pi) - F_{\Theta}( - \pi) \right)},\ \ \theta \leq 0 \\ \frac{2 - e^{- \theta} - 2F_{\Theta}( - \pi)}{2\left( F_{\Theta}(\pi) - F_{\Theta}( - \pi) \right)},\ \ \theta \geq 0 \\ \end{matrix} \right.\ \ \ ,\]
\[\underbrace{\ \ F_{\Theta}(\theta)\ \ }_{(unconditional) \\ Laplace\ CDF} = \left\{ \begin{matrix} \frac{1}{2}e^{\theta},\ \ \theta \leq 0 \\ 1 - \frac{1}{2}e^{- \theta},\ \theta \geq 0 \\ \end{matrix}\ \ ,\ \ \ \right.\ \]
\[F_{\Theta}( - \pi) = \frac{1}{2}e^{- \pi} \approx \ 0.0216,\ \ F_{\Theta}(\pi) = 1 - \frac{1}{2}e^{- \pi} \approx 0.978393,\ \ \]
\[2\left( F_{\Theta}(\pi) - F_{\Theta}( - \pi) \right) = 1.913572\ .\]
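As a quick numerical check of these truncation constants (a minimal sketch using only the Laplace CDF written above):

```python
import numpy as np

# Standard Laplace(mu=0, b=1) CDF, as defined above
F = lambda th: np.where(th <= 0, 0.5 * np.exp(th), 1 - 0.5 * np.exp(-th))

F_lo, F_hi = F(-np.pi), F(np.pi)
print(F_lo, F_hi)              # ~0.021607 and ~0.978393
print(2 * (F_hi - F_lo))       # ~1.913572, the truncation constant used in the truncated CDF
```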
Hence, the joint kime distribution density and distribution functions are defined by
\[\underbrace{\ \ \ f_{\kappa}\ \ \ }_{PDF}\left( te^{i\theta} \right) \underbrace{\ \ \equiv \ \ }_{probability \\ \ chain\ rule}\nu(t)f_{\Theta}\left( \theta \middle| t \right) \underbrace{\ \ = \ \ }_{T\bot\Theta}\left( \frac{\lambda^{t}}{t!} e^{- \lambda} \right) \cdot \left( \frac{e^{- |\theta|}}{2}\chi_{\lbrack - \pi,\pi)} (\theta) \right),\ \forall(t,\theta) \in T\times\mathbb{R}\ ,\]
\[\ \ \underbrace{\ F_{\kappa}\left( t'e^{i\theta'} \right)\ \ }_{CDF} \equiv \sum_{t \leq t'}^{}{\int_{- \infty}^{\theta'}{f_{\kappa}\left( te^{i\theta} \right)}d\theta} = \frac{1}{2}\sum_{t = 0}^{t'}{\frac{\lambda^{t}}{t!}e^{- \lambda}\int_{- \pi}^{\theta'} e^{- |\theta|}d\theta} = e^{- \lambda}\left( \sum_{t = 0}^{t'}\frac{\lambda^{t}}{t!} \right)\cdot \left\{ \begin{matrix} \frac{e^{\theta'} - e^{- \pi}}{2},\ \ \theta' \leq 0 \\ \frac{2 - e^{- \pi} - e^{- \theta'}}{2},\ \ \theta' \geq 0 \\ \end{matrix} \right.\ ,\ \]
\[\forall\ t' \in \mathbb{R}^{+},\ - \pi < \theta' < \pi\ .\]
Below we derive the truncated Laplace PDF and CDF. Recall that a Laplace random variable,
\[\Theta \sim Laplace\left( \overbrace{\ \ \ \mu\ \ \ }^{location}, \overbrace{\ \ \ b\ \ \ }^{scale} \right),\]
has Laplace (double-exponential) distribution, which is infinitely supported and has the following probability density (PDF) and cumulative distribution (CDF) functions
\[\underbrace{f_{\Theta}(\theta)}_{PDF} = \frac{1}{2b}e^{\left( - \frac{|\theta - \mu|}{b} \right)}\ \ ,\ \ \ \underbrace{F_{\Theta}(\theta)}_{CDF} = \left\{ \begin{matrix} \frac{1}{2}e^{\left( \frac{\theta - \mu}{b} \right)}, \ \ \theta \leq \mu \\ 1 - \frac{1}{2}e^{\left( - \frac{\theta - \mu}{b} \right)},\ \theta \geq \mu \\ \end{matrix} \right.\ \ \ .\ \ \]
Restricting the (kime-phase) variable \(\Theta\) to \(\lbrack - \pi,\pi)\) naturally truncates the probability density and the cumulative distribution functions to the same interval. Over \(- \pi \leq \Theta \leq \pi\),
\[\underbrace{f_{\Theta}\left( \theta \middle| - \pi \leq \Theta < \pi \right) }_{truncated\ Laplace\ density} = \frac{f_{\Theta}(\theta) \cdot \chi_{\lbrack - \pi,\pi)}(\theta)}{F_{\Theta}(\pi) - F_{\Theta}( - \pi)} = \frac{g_{\Theta}(\theta)}{F_{\Theta}(\pi) - F_{\Theta}( - \pi)} \propto_{\theta}f_{\Theta}(\theta) \cdot \ \chi_{\lbrack - \pi,\pi)}(\theta)\ ,\]
\[\ \underbrace{F_{\Theta}\left( \theta \middle| - \pi \leq \ \Theta < \pi \right)}_{ truncated\ Laplace\ CDF} \equiv \frac{F_{\Theta}(\theta) - F_{\Theta}( - \pi)}{F_{\Theta}(\pi) - F_{\Theta}( - \pi)} = \left\{ \begin{matrix} \frac{e^{\left( \frac{\theta - \mu}{b} \right)} - 2F_{\Theta}( - \pi)}{2\left( F_{\Theta}(\pi) - F_{\Theta}( - \pi) \right)},\ \ \theta \leq \mu \\ \frac{2 - e^{\left( - \frac{\theta - \mu}{b} \right)} - 2F_{\Theta}( - \pi)}{2\left( F_{\Theta}(\pi) - F_{\Theta}( - \pi) \right)},\ \theta \geq \mu \\ \end{matrix} \right.\ \ \ .\]
where \(g_{\Theta}(\theta) = f_{\Theta}(\theta)\cdot\chi_{\lbrack - \pi,\pi)}(\theta)\) denotes the restriction of the (unconditional) Laplace density to \(\lbrack - \pi,\pi)\).

Here, \(E_{1}\) and \(E_{2}\) are the two energy components of the particle defined with respect to the kime dimensions \(k_{1}\) and \(k_{2}\), respectively, where \[E_{\gamma} = {c^{2}m}_{0}\frac{{dk}_{\gamma}}{{dk}_{0}},\ \gamma \in \{ 1,2\},\ \ {dk}_{0} = \left\| {d\mathbf{k}}_{0} \right\| = \sqrt{\left( {dk}_{0,1} \right)^{2} + \left( {dk}_{0,2} \right)^{2}} \geq 0 ,\] see TCIU Chapter 3 Appendix and Chapter 5, Section G.4.
Consider the complexified energy (Hamiltonian) \[\widehat{H} = E = E_{o} - \frac{1}{2}i\Gamma\ .\]
At the initial state, \[E = \begin{pmatrix} E_{x} \\ E_{y} \\ \end{pmatrix} = \begin{pmatrix} E_{o} \\ 0 \\ \end{pmatrix}\]
and at the end state
\[E' = \begin{pmatrix} E_{x}' \\ E_{y}' \\ \end{pmatrix} = \begin{pmatrix} 0 \\ E_{o} \\ \end{pmatrix}.\]
By complexifying the energy (adding the term \(\ - \frac{1}{2}i\Gamma\)) to model eigenstate decay (fast particle disappearance), \(e^{- \frac{1}{2}t\Gamma} \underbrace{\rightarrow}_{t \rightarrow \infty}0\), we can represent the lifetime of an energy state, i.e., the lifetime of the particle is \(\tau = \frac{2}{\Gamma}\), \[e^{- i\widehat{H}t} = e^{- it\left( E_{o} - \frac{1}{2}i\Gamma \right)} = e^{- itE_{o}} \cdot \underbrace{\ \ \ e^{- \frac{1}{2}\Gamma t}\ \ \ }_{lifetime\ \ decay}\ .\]
The energy eigenvalues of a Hermitian Hamiltonian \(\widehat{H}\) are real, since the operator \(e^{i\widehat{H}t}\) is unitary, cf. law of total probability, i.e., the sum of the probabilities of observing all possible energy eigenstates always remains \(1\).
One important note is that a Hermitian operator acting on the space of square-integrable wavefunctions (\(L^{2}\mathbb{(R)}\)) must have real eigenvalues. However, this is not the case if we relax the conditions on the state-space. For instance, a Hermitian operator can have complex eigenvalues for non-square-integrable functions. For any \(z_{o}\mathbb{\in C}\), the operator \(\widehat{O} = - i\frac{\partial}{\partial x}\) has an eigenspectrum (\(\lambda = z_{o},\ \phi = e^{iz_{o}x}\)), since
\[\underbrace{- i\frac{\partial}{\partial x}}_{operator}e^{ax} = -i a e^{ax} \underbrace{\ \ \ = \ \ \ }_{eigenvalue \\ problem} \ \underbrace{z_{o}}_{eigenvalue} \overbrace{\ \ \ e^{ax}\ \ \ }^{eigenfunction}\ ,\]
hence, \(- ia = z_{o}\), and \(a = iz_{o}\). Clearly, the operator \(\widehat{O} = - i\frac{\partial}{\partial x}\) is Hermitian, yet the eigenvalue \(z_{o} = - ia\) is properly complex, not necessarily real. This is because the eigenstate/eigenfunction lies outside of the proper space, \(e^{ax} \notin L^{2}\mathbb{(R)}\)! So, the non-observability (non-realness) of the complex eigenvalue reflects the fact that in this case the operator is applied outside its natural domain.
In general, a bounded, linear self-adjoint (Hermitian) operator on an infinite dimensional Hilbert space, e.g., \(L^{2}\lbrack 0,1\rbrack\), cannot be guaranteed to have any eigenvalues, and hence, there may not be an orthonormal basis of eigenfunctions over the Hilbert space. The multiplication operator on the Hilbert space \(\mathcal{H} = L^{2}\lbrack 0,1\rbrack\) provides a counterexample of a bounded, linear self-adjoint (Hermitian) operator without eigenvalues:
\[\left( \mathcal{M}f \right)(x) \equiv x \cdot f(x):\mathcal{H \rightarrow H},\ \ \left\| \mathcal{M} \right\| = \sup_{\left\| f \right\| \leq 1}\left\| \mathcal{M}(f) \right\| = \sup_{f \neq 0\ }\frac{\left\| \mathcal{M}(f) \right\|}{\left\| f \right\|} = \sup_{f \neq 0\ }\frac{\left\| x\cdot f \right\|}{\left\| f \right\|} = 1\ .\ \]
Assuming
\[x \cdot f(x) = \left( \mathcal{M}f \right)(x) \equiv \overbrace{\ \ \ \lambda\ \ \ }^{const.} \cdot f(x) \Longrightarrow f(x) \overbrace{\ \ = \ \ }^{a.e.} 0 \]
and \(\mathcal{M}\) has no eigenvalues.
Theorem: Given a bounded, self-adjoint (Hermitian) operator \(\mathcal{M:H \rightarrow H}\), all of its eigenvalues (when they exist) are real, and eigenvectors corresponding to distinct eigenvalues are orthogonal.
Proof following these notes Chapters 8-9:
Suppose \(\lambda\) is an eigenvalue of \(\mathcal{M}\) (an element of the point spectrum of \(\mathcal{M}\)), and \(x_{\lambda} \neq 0\) is the corresponding eigenvector, \(\mathcal{M}x_{\lambda} = \lambda x_{\lambda}\). Then, \(\lambda\left\langle x_{\lambda} \middle| x_{\lambda} \right\rangle = \left\langle x_{\lambda} \middle| \lambda x_{\lambda} \right\rangle = \left\langle x_{\lambda} \middle| \mathcal{M}x_{\lambda} \right\rangle = \left\langle \mathcal{M}x_{\lambda} \middle| x_{\lambda} \right\rangle = \ \overline{\lambda}\left\langle x_{\lambda} \middle| x_{\lambda} \right\rangle \Longrightarrow \lambda \equiv \ \overline{\lambda} \Longrightarrow \lambda\mathbb{\in R}\).
If \(\lambda \neq \mu \in eigenspace\left\{ \mathcal{M} \right\}\mathbb{\cap R}\), and \(\mathcal{M}x_{\lambda} = \lambda x_{\lambda}\), \(\mathcal{M}x_{\mu} = \mu x_{\mu}\), then,
\[\lambda\left\langle x_{\lambda} \middle| x_{\mu} \right\rangle = \left\langle \lambda x_{\lambda} \middle| x_{\mu} \right\rangle = \left\langle \mathcal{M}x_{\lambda} \middle| x_{\mu} \right\rangle = \left\langle x_{\lambda} \middle| \mathcal{M}x_{\mu} \right\rangle = \mu\left\langle x_{\lambda} \middle| x_{\mu} \right\rangle \Longrightarrow (\lambda - \mu)\left\langle x_{\lambda} \middle| x_{\mu} \right\rangle = 0\underbrace{\ \ \Longrightarrow \ \ }_{\lambda \neq \mu}x_{\lambda}\bot x_{\mu}\ .\]
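For a finite-dimensional sanity check of both claims (real eigenvalues and orthogonal eigenvectors), the short sketch below builds a random Hermitian matrix; the size and random seed are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)
B = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
M = (B + B.conj().T) / 2                                  # Hermitian (self-adjoint) matrix

evals, evecs = np.linalg.eigh(M)
print(np.allclose(evals.imag, 0))                         # True: eigenvalues are real
print(np.allclose(evecs.conj().T @ evecs, np.eye(4)))     # True: eigenvectors are orthonormal
```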
Note: The spectrum of any self-adjoint operator is real, \(\lambda\mathbb{\in R}\); it is a superposition of a discrete point spectrum of eigenvalues and a continuous spectrum, see Edgar Raymond Lorch (2003), Chapter V: The Structure of Self-Adjoint Transformations, Spectral Theory, p. 106, and these notes. Hence, a kime-operator can’t be self-adjoint (Hermitian), as in general,
\[\forall\theta \in \lbrack - \pi,\ \pi)\setminus\{0\},\ \ \kappa = te^{i\theta}\mathbb{\in C\backslash R\ .}\]
This holds even though, on average, the phase-distributions are zero mean, i.e., they have trivial expectations \(\left\langle f_{\theta} \right\rangle = 0\). It ties in with the fact that, for operators that admit a real matrix representation (cf. Lemma 2 below), if a complex number \(\lambda=a+i\beta\) is an eigenvalue, then its conjugate \(\overline{\lambda}=a-i\beta\) is also an eigenvalue of the same operator. In other words, if \(\lambda \in \sigma\left( \mathcal{M} \right)\), then \(\overline{\lambda} \in \sigma\left( \mathcal{M} \right)\). Hence, there is a balance between operator eigenvalues corresponding to positive and negative kime-phase arguments, \(\theta \in \lbrack - \pi,\pi)\), and therefore the phase distribution \(\Phi_{\theta}\) is symmetric and zero mean.
Classically, only self-adjoint (Hermitian) linear operators, with real eigenvalues, are considered as (real) observable values.
Could the expectation value of a kime operator \(\left\langle \widehat{Κ} \right\rangle\) be equivalent to the classical time-evolution operator \(\widehat{U}(t) = e^{- \frac{i\widehat{H}t}{\hslash}}\)?
Note: The spectrum of an infinite-dimensional operator contains its eigenvalues, but some elements of the spectrum may not be eigenvalues. Let’s start with a bounded operator \(Κ:X \rightarrow X\) on the Banach space \(X\), e.g., \(X \equiv L^{2}\mathbb{(R)}\). The spectrum \(\sigma(Κ)\) is a disjoint union of 3 parts: the point spectrum (the eigenvalues), the continuous spectrum, and the residual spectrum.
The entire spectrum is always non-empty, \(\sigma(Κ) \neq \varnothing\), but there are operators with an empty point spectrum. For instance, we saw the multiplication operator earlier: if \(X \equiv L^{2}\lbrack 0,1\rbrack\) and \(M:X \rightarrow X\) is the multiplication operator \((Mf)(x) = x \cdot f(x),\forall x \in \lbrack 0,1\rbrack\), then \(\sigma(M) \equiv \lbrack 0,1\rbrack\), and each \(\lambda \in \lbrack 0,1\rbrack\) is in the continuous spectrum of the operator \(M\).
Here is an example of a family of non-Hermitian linear operators \(Κ_{A}:X \rightarrow X\) with properly complex eigenvalue spectrum, \(\kappa = te^{i\theta}\mathbb{\in C\backslash R}\):
\[Κ_{A}(x) = AxA^{\dagger}\ ,\]
where \(A\) is not Hermitian, \(A^{\dagger} \neq A\), with complex eigenvalues \(\lambda_{i} \in \sigma(A)\) and \({\overline{\lambda}}_{j} \in \sigma(A^{\dagger})\). Then, the eigenvector \(|u_{i}\rangle\langle u_{j}|\) corresponds to the complex eigenvalue \(\lambda_{i} \cdot {\overline{\lambda}}_{j}\mathbb{\in C}\), where
\[A\left| u_{i} \right\rangle = \lambda_{i}\left| u_{i} \right\rangle,\ \ \ A\left| u_{j} \right\rangle = \lambda_{j}\left| u_{j} \right\rangle\ \ \left(\text{hence}\ \left\langle u_{j} \right|A^{\dagger} = {\overline{\lambda}}_{j}\left\langle u_{j} \right|\right),\ \ \ i \neq j.\ \]
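A minimal numerical sketch of this construction follows; the \(2\times 2\) non-Hermitian matrix is an arbitrary illustrative choice, and the check only verifies that \(|u_i\rangle\langle u_j|\) is indeed an eigen-element of \(K_A\) with eigenvalue \(\lambda_i\overline{\lambda}_j\).

```python
import numpy as np

# A hypothetical non-Hermitian 2x2 matrix (illustrative choice)
A = np.array([[1.0, 2.0], [0.0, 1j]], dtype=complex)

lam, U = np.linalg.eig(A)                 # right eigenpairs of A
u_i, u_j = U[:, 0], U[:, 1]
lam_i, lam_j = lam[0], lam[1]

X = np.outer(u_i, u_j.conj())             # candidate eigen-element |u_i><u_j|
K_A_X = A @ X @ A.conj().T                # the map K_A(X) = A X A^dagger

print(np.allclose(K_A_X, lam_i * np.conj(lam_j) * X))   # True: eigenvalue lam_i * conj(lam_j)
```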
This paper by Ari Laptev and Oleg Safronov derives eigenvalue estimates for Schrödinger operators with complex potentials. Also, this blog post shows interesting counterexamples for linear operators on infinite dimensional vector spaces and their eigenvalues, e.g.,
\[M_{f}h(x) = f(x)h(x):L^{2}\left( \lbrack 0,1\rbrack \right) \rightarrow L^{2}\left( \lbrack 0,1\rbrack \right)\ .\]
Note the critical advantage of this kime-operator \(\widehat{\mathbf{\kappa}}\) formulation – it removes the main drawback of the corresponding time operator \(\widehat{t} = \frac{mx}{p} = \frac{m}{2}({\widehat{p}}^{\frac{1}{2}}\widehat{x} - \widehat{x}{\widehat{p}}^{\frac{1}{2}})\), which is singular in momentum \(p\)-space!
The kime-operator \(\widehat{\kappa}\) is well defined for any pair of position \(\widehat{x}\) and mokiphase \(\widehat{q - \theta}\), where the momentum-kime-phase (mokiphase) operator reflects the difference between the phases of the momentum (\(q\)) and kime (\(\theta\)).
Next, we’ll try to prove the enigmatic uncertainty relation for energy and kime (extending time), \(\Delta E\ \Delta\kappa \sim \hslash\). In general, for a pair of non-commuting observables, \(A,B\), the Heisenberg uncertainty relation is
\[\Delta A\ \Delta B \geq \frac{1}{2}\ \left| \left\langle \left\lbrack \widehat{A},\ \widehat{B} \right\rbrack \right\rangle \right|\ .\]
The momentum encodes the bi-directional speed of the moving particle (electron) indicating its rates of change in position, relative to the two kime directions, as observed from a particular frame of reference and as measured by a particular kime coordinate framework.
Let’s clarify two facts about bounded linear operators.
Lemma 1: Bounded linear Hermitian operators have only real eigenvalues.
Proof: Suppose \(\hat{M}\) is a bounded linear Hermitian operator. Let \(\lambda\) be an eigenvalue of \(\hat{M}\), and \(\mu\) be the corresponding eigenvector for \(\lambda\). Since \(\hat{M}\) is Hermitian, \[\left\langle \mu \middle| \hat{M}\mu \right\rangle = \left\langle \hat{M}\mu \middle| \mu \right\rangle .\]
Using the properties of eigenvalues, we have \[\left\langle \mu \middle| \hat{M}\mu \right\rangle = \left\langle \mu\middle| \lambda\mu\right\rangle = \lambda\left\langle \mu \middle|\mu \right\rangle \] and \[\left\langle \hat{M}\mu \middle| \mu \right\rangle = \left\langle \lambda\mu\middle| \mu\right\rangle = \lambda^*\left\langle \mu \middle|\mu \right\rangle \]
As the above equations are exactly equal, it follows that \[\lambda = \lambda^* ,\] which implies that the eigenvalue \(\lambda\) must be real.
Lemma 2: Linear operators defined on a finite dimensional complex Hilbert space that admit a matrix representation with real matrix elements have complex eigenvalues that occur in conjugate pairs.
Proof: Denote by \(A\) an arbitrary linear operator on a finite-dimensional vector space that can be represented as a matrix with real entries. By definition, the eigenvalues of \(A\) are the solutions for \(\lambda\) in the characteristic equation \[\det(A - \lambda I)=0 ,\] where \(I\) is the identity matrix. Since all the coefficients in the equation are real, the complex roots of the characteristic equation, i.e., the eigenvalues of \(A\), must appear as conjugate pairs.
Notes:
Linear operators on a finite-dimensional vector space whose matrix representations have non-real entries may not have conjugate pairs of complex eigenvalues. An example is the matrix \[\begin{pmatrix} 1 & 0 \\ 0 & i \end{pmatrix} \] which has eigenvalues \(1\) and \(i\); these are not conjugate pairs.
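The two cases can be illustrated with a short numerical sketch; the rotation matrix and the diagonal counterexample below are illustrative choices.

```python
import numpy as np

# Real-entry matrix: complex eigenvalues come in conjugate pairs
R = np.array([[0.0, -1.0], [1.0, 0.0]])                   # rotation by pi/2
print(np.linalg.eigvals(R))                                # [0.+1.j, 0.-1.j]

# Non-real entries: conjugate pairing can fail (the counterexample above)
C = np.array([[1.0, 0.0], [0.0, 1j]], dtype=complex)
print(np.linalg.eigvals(C))                                # [1.+0.j, 0.+1.j] -- not conjugates
```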
The situation with the matrix-operators above, representing finite-dimensional operators over Hilbert spaces, is different for bounded linear operators on infinite-dimensional spaces, which may not necessarily have eigenvalues! Consider, for instance, the right-shift operator. However, self-adjoint and compact operators always have eigenvalues, e.g., the Sturm-Liouville operator.
\[\left| \psi \right\rangle = \widehat{\mathbb{I}}\left| \psi \right\rangle = \int_{- \infty}^{\infty}{|x\rangle}\left\langle x \middle| \psi \right\rangle dx = \int_{- \infty}^{\infty}{\psi(x)|x\rangle}dx\ .\]
\[\left| \psi \right\rangle = \widehat{\mathbb{I}}\left| \psi \right\rangle = \int_{- \infty}^{\infty}{|p\rangle}\left\langle p \middle| \psi \right\rangle dp = \int_{- \infty}^{\infty}{\psi(p)|p\rangle}dp\ .\]
\(\psi(x) = \left\langle x \middle| \psi \right\rangle = \left\langle x \middle| \widehat{\mathbb{I}}|\psi \right\rangle = \int_{- \infty}^{\infty}{\psi(p)\langle x|p\rangle}dp = \int_{- \infty}^{\infty}{\psi(p)\frac{1}{\sqrt{2\pi\hslash}}e^{i\frac{p \cdot x}{\hslash}}}dp\)
(IFT), and similarly,
\(\phi(p) = \left\langle p \middle| \phi \right\rangle = \left\langle p \middle| \widehat{\mathbb{I}}|\phi \right\rangle = \int_{- \infty}^{\infty}{\phi(x)\langle p|x\rangle}dx = \int_{- \infty}^{\infty}{\phi(x)\frac{1}{\sqrt{2\pi\hslash}}e^{- i\frac{p \cdot x}{\hslash}}}dx\) (FT).
In the eigenvector bases (for the position & momentum),
\[\underbrace{\ \ \widehat{x}\ \ }_{operator} \overbrace{\ \ \left| x \right\rangle\ \ }^{eigenstate} = \underbrace{\ \ x\ \ }_{eigenvalue}|x\rangle \Longrightarrow \langle x'| \widehat{x}\left| x \right\rangle = x\left\langle x' \middle | x \right\rangle = x\delta\left( x - x' \right),\]
\[\widehat{x} = \int_{- \infty}^{\infty}{x |x\rangle \langle x|}dx\ .\]
\[\widehat{p}\left| p \right\rangle = p|p\rangle \Longrightarrow \langle p'|\widehat{p}\left| p \right\rangle = p\left\langle p' \middle| p \right\rangle = p\delta\left( p - p' \right)\ ,\]
\[\widehat{p} = \int_{- \infty}^{\infty}{p |p\rangle \langle p|}dp\ .\]
\[\langle x'|\widehat{p}\left| x \right\rangle \underbrace{\ \ =\ \ }_{\underbrace{\hat{p}}_{operator}\underbrace{|p\rangle}_{state}= \underbrace{p}_{eigenvalue}|p\rangle} \int_{- \infty}^{\infty}{p\left\langle x' \middle| p \right\rangle\left\langle p \middle| x \right\rangle}dp = \int_{- \infty}^{\infty}{p\frac{1}{2\pi\hslash}e^{- i\frac{p \cdot \left( x - x' \right)}{\hslash}}}dp =\]
\[\frac{1}{2\pi\hslash}\left( - \frac{\hslash}{i}\frac{\partial}{\partial x}\ \ \int_{- \infty}^{\infty}{e^{- i\frac{p \cdot \left( x - x' \right)}{\hslash}}}dp \right) = \frac{1}{2\pi\hslash}\left( -\frac{\hslash}{i}\frac{\partial}{\partial x}\left( 2\pi\delta\left( \frac{x - x'}{\hslash} \right) \right)\ \right) =\]
\[- \frac{\hslash}{i} \frac{\partial}{\partial x}\ \delta\left ( x - x' \right ) .\]
Note that the derivative of the Heaviside function is the Dirac distribution function, \(H(x-a)=\int_{-\infty}^{x}{\delta(s-a)ds}\), and \(\langle x| p\rangle =\frac{1}{\sqrt{2\pi \hbar}} e^{i\frac{xp}{\hbar}}\), \(\langle p| x\rangle =\frac{1}{\sqrt{2\pi \hbar}} e^{-i\frac{xp}{\hbar}}\), and the Fourier transform \(\int_{-\infty}^{\infty}{e^{-iwx}\cdot x\ dx}=i\frac{\partial}{\partial w}\int_{-\infty}^{\infty}{e^{-iwx}\ dx}\).
\[\langle x|\widehat{p}\left| \psi \right\rangle = \int_{- \infty}^{\infty}{\left\langle x \middle| \widehat{p}|x' \right\rangle\left\langle x' \middle| \psi \right\rangle}dx' =\]
\[\int_{- \infty}^{\infty}{\left( -\frac{\hslash}{i}\frac{\partial}{\partial x'}\delta\left( x' - x \right)\right)\psi\left( x' \right)}dx' \overbrace{\underbrace{\ \ = \ \ }_{by\ parts}}^{\psi(x')=\langle x'|\psi\rangle} \underbrace{-\frac{\hslash}{i}\delta(x'-x)\psi(x') \Big|_{-\infty}^{\infty}}_{0} + \frac{\hslash}{i}\int_{-\infty}^{\infty}{\delta\left( x' - x \right)\frac{\partial}{\partial x'}\psi\left( x' \right)}dx'=\\ \frac{\hslash}{i}\frac{\partial}{\partial x}\psi(x)\ \Rightarrow \ \widehat{p} = \frac{\hslash}{i}\frac{\partial}{\partial x}\ .\]
\[\langle p|\widehat{x}\left| p' \right\rangle = \cdots = \frac{\hslash}{i}\ \frac{\partial}{\partial p}\ \delta\left( p - p' \right) \Rightarrow \langle p|\widehat{x} \left| \psi \right\rangle = \cdots = - \frac{\hslash}{i}\frac{\partial}{\partial p} \underbrace{\ \ \widetilde{\psi}\ \ }_{FT(\psi)}\ \]
\[\Rightarrow \ \widehat{x} = - \frac{\hslash}{i}\frac{\partial}{\partial p}\ .\]
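Both coordinate representations can be verified symbolically on the (un-normalized) plane-wave kernels \(\langle x|p\rangle\) and \(\langle p|x\rangle\); the sketch below is a minimal check of the two eigenvalue relations.

```python
import sympy as sp

x, p, hbar = sp.symbols('x p hbar', real=True, positive=True)

psi_p = sp.exp(sp.I * p * x / hbar)     # <x|p> up to normalization
phi_x = sp.exp(-sp.I * p * x / hbar)    # <p|x> up to normalization

# p_hat = (hbar/i) d/dx acting on <x|p> returns the eigenvalue p
print(sp.simplify((hbar / sp.I) * sp.diff(psi_p, x) - p * psi_p))     # 0

# x_hat = -(hbar/i) d/dp acting on <p|x> returns the eigenvalue x
print(sp.simplify(-(hbar / sp.I) * sp.diff(phi_x, p) - x * phi_x))    # 0
```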
Problem: Explore transforming the additive group over \(t\in\mathbb{R}\) to a multiplicative group \(\kappa=te^{i\theta}\in\mathbb{C}\), where \(\forall\ \psi\in\mathcal{H}\), the \(1:1\) correspondence between one parameter kime continuous unitary groups of operators, i.e., operator-valued distributions, \(U^A :\mathcal{H}\to\mathcal{H}\) and self-adjoint operators \(A^U :\mathcal{H}\to\mathcal{H}\) is given by
\[\overbrace{A^U \psi}^{operator-valued\\ distribution} = \lim_{t\equiv|\kappa|\to 0}{\frac{U^{A^U}_{\kappa}(\psi) - \psi}{it}}\\ \overbrace{U^{A^U}_{\kappa}\underbrace{(\varphi)}_{test\\ function}}^{kime\ unitary\ operator}= \underbrace{e^{-itA^U}}_{operator} \ \ \underbrace{\ell\left(e^{i\theta}\right)}_{distribution} \varphi\ ,\]
where the action of the kime-inference function on test functions, \(\langle\mathfrak{P}, \phi\rangle,\) is defined by the kime-phase action on test functions, \(\langle\ell, \phi\rangle = \int_{\mathbb{R}}{\ell^*(\theta) \phi(\theta)}\ d\theta \in\mathbb{C}\),
\[\underbrace{\langle\mathfrak{P}, \phi\rangle}_{kime-function\\ action} = \overbrace{\Psi(x,y,z,t)}^{spacetime\\ wavefunction} \underbrace{\int_{\mathbb{R}}{\ell^*(\theta) \phi(\theta)}\ d\theta}_{\underbrace{\langle\ell, \phi\rangle}_{kime-phase\\ action}\in\mathbb{C}}\ .\]
Technically, \(\ell(\theta)\equiv \ell(e^{i\theta})\).
These one-parameter kime unitary operators \(\{U_{\kappa}\ |\ \kappa\in\mathbb{C} \}\) are continuous \[\forall \kappa_{o}\in \mathbb {C} ,\ \psi \in {\mathcal {H}}:\ \lim _{\kappa\to \kappa_o}U_{\kappa}(\psi )=U_{\kappa_o}(\psi)\ .\]
To simplify all notation, we will be suppressing the extra (unnecessary) superscripts signifying the \(U\equiv U^{A^U} \longleftrightarrow A^U\equiv A\) correspondence.
\[\forall \kappa_1=t_1e^{i\theta_1},\ \kappa_2=t_2e^{i\theta_2}\in \mathbb {C} ,\\ U_{\kappa_1\cdot \kappa_2}= U_{t_1e^{i\theta_1}\cdot t_2e^{i\theta_2}} = U_{t_1 t_2 e^{i(\theta_1+\theta_2)}} = \underbrace{e^{-i(t_1 t_2)A}}_{operator} \ \ \underbrace{\ell\left(e^{i(\theta_1+\theta_2)}\right)}_{distribution}\\ \overbrace{=}^{\theta_1\perp \theta_2\\ \ell,\ separable} \underbrace{e^{-i(t_1)A}\ \ell_1\left(e^{i\theta_1}\right)}_{U_{\kappa_1}} \ \ \underbrace{e^{-i(t_2)A}\ \ell_2\left(e^{i\theta_2}\right )}_{U_{\kappa_2}}= U_{\kappa_1}U_{\kappa_2}.\]
Recall that \(\forall\ \kappa=te^{i\theta}\in\mathbb{C},\) \(U_{\kappa}\) is an operator-valued distribution acting on test functions \(\varphi\in\mathcal{H}\) and producing complex scalars
\[\underbrace{U_{\kappa_1\cdot \kappa_2}(\varphi)}_{\in\mathbb{C}}= U_{\kappa_1}(\varphi) \cdot U_{\kappa_2}(\varphi)\ .\]
To explicate the kime-dynamics of states at any kime \(\kappa\in\mathbb{C}\), consider an initial state \(|\varphi_{\kappa_o}\rangle\). Without loss of generality, we can assume that \(\kappa_o=te^{i\theta}=0\), i.e., \(t=0\). So, the starting initial state is \(|\varphi_{\kappa_o}\rangle\equiv |\varphi_{o}\rangle\).
As the state at kime \(\kappa\in\mathbb{C}\) is measurable, the temporal dynamics of the system can be expressed in terms of the kime unitary operator group action
\[|\varphi_{\kappa}\rangle=U_{\kappa}(|\varphi_{o}\rangle)= \underbrace{e^{-i t A}}_{operator} \ \ \underbrace{\ell\left(e^{i\theta}\right)}_{distribution} (|\varphi_{o}\rangle)\ ,\]
where \(\ell\left(e^{i\theta}\right)\) is a (prior) model of the kime-phase distribution, which can be sampled once for single observations, or sampled multiple times corresponding to multiple repeated measurements.
Taking the partial derivative of \(|\varphi_{\kappa}\rangle\) with respect to the kime-magnitude (time, \(t\)) yields the kime-Schrödinger equation
\[\frac{\partial |\varphi_{\kappa}\rangle}{\partial t}=-iA \underbrace{\ \ |\varphi_{\kappa}\rangle\ \ }_{\left (e^{-i t A}\right ) \ell\left(e^{i\theta}\right) (|\varphi_{o}\rangle)} \ .\]
For instance, consider a free evolution (no external forces), where we are modeling the energy (Hamiltonian) kime-evolution of the state of a particle of mass \(m\) in \(1D\). Assume only kinetic energy \(K\) is at play, without potential energy, \(V=0\). Then, the energy Hamiltonian operator \(H=A\) is self-adjoint
\[A\equiv H=-\frac{1}{2m}\frac{d^2}{dx^2}\ .\]
In this case, the explicit form of the kime-independent Schrödinger equation describing the physical state of a quantum-mechanical system is
\[\frac{\partial |\varphi_{\kappa}\rangle}{\underbrace{\partial t}_{t=|\kappa|}}=-iA \underbrace{\ \ |\varphi_{\kappa}\rangle\ \ }_{\left (e^{-i t A}\right ) \ell\left(e^{i\theta}\right) (|\varphi_{o}\rangle)} = +i\frac{1}{2m}\frac{\partial^2}{\partial x^2} |\varphi_{\kappa}\rangle\ .\]
For simplicity, here we are working with normalized units, \(c=\hbar = 1\). In the more general \(3D\) kinetic energy (potential-free) case, the system Hamiltonian is
\[A\equiv H=-\frac{h{^2}}{8\pi{^2}m}\left(\underbrace{\dfrac{\partial{^2}} {\partial{x^2}}+\dfrac{\partial{^2}}{\partial{y^2}}+\dfrac{\partial{^2}} {\partial{z^2}}}_{Laplacian,\ \nabla^2}\right)\ .\]
This kime-independent Schrödinger equation has an explicit solution \[|\varphi(\kappa)\rangle = \left (e^{-i t E}\right ) \ell\left(e^{i\theta}\right) (|\varphi_{o}\rangle)\ ,\]
where \(E\) represents the observable energies. Since \(\ell\left(e^{i\theta}\right)\) is a kime-phase distribution on \([-\pi,+\pi)\) and the observable energies (eigenvalues of the Hamiltonian) are finite, this suggests that \(\forall\ \psi_o\in L^2(\mathbb{C})\)
\[||\psi_{\kappa}||_{L^{\infty}} \underset{t\to\infty}{\ \longrightarrow 0}\ .\]
Therefore, as \(t\to\infty\) the PDF of the position vanishes and there is no limiting probability distribution for the position of the particle in \(1D\), as it spreads out across the entire real line.
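This dispersive spreading can be visualized with a standard Fourier evolution of the free-particle state; the sketch below assumes \(\hbar=m=1\) and an illustrative Gaussian initial wave packet, and simply tracks the decay of the sup-norm.

```python
import numpy as np

# Free 1D Schrodinger evolution psi_t = IFFT[ exp(-i t p^2 / 2) FFT[psi_0] ], with hbar = m = 1
N, L = 4096, 400.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]
p = 2 * np.pi * np.fft.fftfreq(N, d=dx)

psi0 = np.exp(-x**2)                              # Gaussian initial state (illustrative)
psi0 /= np.sqrt((np.abs(psi0)**2 * dx).sum())     # L2-normalize

for t in (0.0, 1.0, 5.0, 25.0):
    psi_t = np.fft.ifft(np.exp(-1j * t * p**2 / 2) * np.fft.fft(psi0))
    print(t, np.abs(psi_t).max())                 # the sup-norm decays (~ t^(-1/2) for large t)
```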
This was the solution of the kime-independent Schrodinger equation in position coordinates. Let’s consider the same free evolution (kinetic energy only) in momentum coordinates. Recall that the Fourier transform provides a linear bijective mapping between position coordinates, spacetime representation, \(\varphi(x),\) and momentum coordinates, k-space frequency representation, \({\hat{\varphi}}(p)\).
Again for simplicity, we’ll consider the momentum of a free particle in \(1D\) and set \(c=\hbar=1\). The solution of the Schrodinger equation in momentum coordinates is
\[{\hat{\varphi}}_{\kappa}(p)=\langle p |{\hat{\varphi}}_{\kappa}\rangle = \left (e^{-i t p^2}\right ) \ell\left(e^{i\theta}\right) (\langle p |{\hat{\varphi}}_{o}\rangle)= \left (e^{-i t p^2}\right ) \ell\left(e^{i\theta}\right) ({\hat{\varphi}}_{o}(p))\ .\]
Clearly, \(||{\hat{\varphi}}_{\kappa}(p)||^2 \equiv ||{\hat{\varphi}}_{o}(p)||^2,\) suggesting that the momentum PDF is static in kime. For a free-evolution (no potential energy) this finding is not surprising, since the momentum of a free particle should be preserved. However, in real observations, quantum fluctuations (intrinsic randomness) will affect repeated measurements of a given observable (e.g., energy). This intrinsically stochastic behavior is modeled by the kime-phase distribution \(\ell\left(e^{i\theta}\right)\sim \Phi_{[-\pi,+\pi)}\).
The kime propagator between two kime points, \(K(\kappa_2,\kappa_1) = \langle x_2|U(\Delta\kappa)|x_1\rangle\), can be expanded to \[K(\kappa_2,\kappa_1) = \langle x_2|e^{-iH(t_2e^{i\theta_2} - t_1e^{i\theta_1})/\hbar}|x_1\rangle\ .\]
The kime propagator in path integral form is \[K(\kappa_2,\kappa_1) = \int \mathcal{D}[x(\kappa)] e^{iS[\kappa]/\hbar} ,\] where the complex action is \[S[\kappa] = \int_{\kappa_1}^{\kappa_2} L(x,\dot{x},\kappa)d\kappa\]
Note: For the kime-propagator path integral, we may properly (re)define the functional measure for paths in complex time. For instance, \[K(\kappa_2,\kappa_1) = \mathcal{N}\int \mathcal{D}[x(\kappa)] e^{iS[\kappa]/\hslash},\] where \(\mathcal{N}\) is a normalization factor, the complex time increments are \(\Delta \kappa_j = \kappa_{j+1} - \kappa_j\), and the measure is defined as a limit of discrete paths \[\mathcal{D}[x(\kappa)] = \lim_{N\to\infty} \prod_{j=1}^{N-1} \sqrt{\frac{m}{2\pi i\hslash \Delta\kappa_j}} dx_j.\] The action \(S[\kappa]\) also may need to be (re)defined for complex time paths \[S[\kappa] = \int_{\kappa_1}^{\kappa_2} \left(\frac{m}{2}\frac{dx}{d\kappa}\frac{dx}{d\kappa^*} - V(x)\right)d\kappa .\]
Hence, the full path integral can be written as \[K(\kappa_2,\kappa_1) = \lim_{N\to\infty} \left(\prod_{j=1}^{N-1} \sqrt{\frac{m}{2\pi i\hslash \Delta\kappa_j}}\right) \int\cdots\int \prod_{j=1}^{N-1} dx_j \times \\ \exp\left\{\frac{i}{\hslash}\sum_{j=0}^{N-1} \left[\frac{m}{2}\frac{(x_{j+1}-x_j)^2}{\Delta\kappa_j} - V(x_j)\Delta\kappa_j\right]\right\} .\] This formulation properly accounts for the complex nature of kime, includes an appropriate measure normalization, reflects the discrete-to-continuous limit, and properly treats complex conjugate pairs in the kinetic term.
The measured propagator requires averaging over phase differences \[\overline{K(\kappa_2,\kappa_1)} = \int_{-2\pi}^{2\pi} K(\Delta\kappa)\Phi_{\Delta}(\Delta\theta)d(\Delta\theta)\] Then, the state evolution is \[i\hbar\frac{\partial}{\partial\kappa}|\psi(\kappa)\rangle = He^{i\theta}|\psi(\kappa)\rangle\] and the corresponding propagator equation becomes \[i\hbar\frac{\partial}{\partial\kappa_2}K(\kappa_2,\kappa_1) = He^{i\theta_2}K(\kappa_2,\kappa_1)\]
For intermediate kime points, the Chapman-Kolmogorov equation is \[K(\kappa_3,\kappa_1) = \int K(\kappa_3,\kappa_2)K(\kappa_2,\kappa_1)d\kappa_2\] and for ordered kime points, the group property \(U(\kappa_3,\kappa_1) = U(\kappa_3,\kappa_2)U(\kappa_2,\kappa_1)\) yields the expectation via statistical averaging \[\overline{U(\kappa_3,\kappa_1)} = \int U(\kappa_3,\kappa_2)U(\kappa_2,\kappa_1)\Phi(\theta_2)d\theta_2 .\]
The kime-ordered propagator explicates the causal structure \[K_{\text{ordered}}(\kappa_2,\kappa_1) = \theta(t_2-t_1)K(\kappa_2,\kappa_1) ,\] where \(\theta(t)\) is the Heaviside function.
The free particle kime propagator becomes \[K_0(\kappa_2,\kappa_1) = \sqrt{\frac{m}{2\pi i\hbar\Delta\kappa}}\exp\left(\frac{im(x_2-x_1)^2}{2\hbar\Delta\kappa}\right) .\] Whereas the harmonic oscillator with frequency \(\omega\) is \[K_{\text{HO}}(\kappa_2,\kappa_1) = \sqrt{\frac{m\omega}{2\pi i\hbar\sin(\omega\Delta\kappa)}}\exp\left(\frac{im\omega}{2\hbar\sin(\omega\Delta\kappa)}[(x_2^2+x_1^2)\cos(\omega\Delta\kappa)-2x_2x_1]\right)\]
Note that the averaged propagator includes decoherence \[\overline{K(\kappa_2,\kappa_1)} = K_{\text{ideal}}(\kappa_2,\kappa_1)\exp(-\Gamma[\Delta\theta]) ,\] where \(\Gamma[\Delta\theta]\) depends on the phase difference distribution.
Non-Markovian kime evolution is \[\frac{\partial}{\partial\kappa}\rho(\kappa) = -\int_{\kappa_1}^{\kappa} M(\kappa-\kappa')\rho(\kappa')d\kappa' ,\] where \(M(\Delta\kappa)\) is the memory kernel. These suggest that the kime propagator combines quantum evolution with statistical phase differences, naturally modeling decoherence. Also, the phase difference distribution \(\Phi_{\Delta}\) determines the strength and nature of the quantum-to-classical transition.
Let \(\kappa\in\mathbb{C}\) be complex time (kime), \(\kappa= t\cdot e^{i\theta}\), where \(t\in\mathbb{R}^+\) is the classical time (kime-magnitude), the kime-phase \(\theta\sim \Phi_{[-\pi,\pi)}\) is a random variable, and \(\Phi_{[-\pi,\pi)}\) is a probability distribution supported on \([-\pi,\pi)\). For a quantum state \(|\psi (\kappa)\rangle\), the time evolution operator (quantum state evolution), is \(U(\kappa) = e^{-iH\kappa /\hbar} = e^{-iHte^{i\theta}/ \hbar}\), where \(H\) is the system Hamiltonian.
For a single observable \(O\), the expectation value at kime \(\kappa\) is \(\langle O(\kappa)\rangle = \langle \psi(0)|U^{\dagger}(\kappa)OU(\kappa)|\psi(0)\rangle\). For \(N\) repeated measurements, each corresponding to kime \(\kappa_i\), the expected observable measure is \(\langle O(\kappa)\rangle_{mean} = \frac{1}{N} \sum_{i=1}^N \langle O(\kappa_i)\rangle\).
The variance due to kime-phase fluctuations is \(\sigma^2_{kime} = \langle(O(\kappa) - \langle O\rangle_{mean})^2\rangle\).
Let’s explicate the main sources of variability: (1) Environmental coupling introduces phase noise in \(\theta\); (2) Control pulse imperfections affect the kime-magnitude \(t\); and (3) Decoherence effects modify the effective Hamiltonian. The total measurement error can be decomposed as \(\varepsilon_{total}^2 = \varepsilon_{intrinsic}^2 + \varepsilon_{kime}^2\), where \(\varepsilon_{intrinsic}\) is the theoretical quantum projection noise and \(\varepsilon_{kime}\) is the additional uncertainty from kime fluctuations (repeated sampling dispersion).
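A minimal Monte Carlo sketch of this error decomposition follows. It assumes an illustrative single-qubit setup (Hamiltonian \(H=\sigma_z\), initial state \(|+\rangle\), observable \(O=\sigma_x\), and a truncated Normal kime-phase with \(\sigma=0.3\)) and estimates the kime-phase mean and dispersion of \(\langle O(\kappa)\rangle\) as defined above; all parameter choices are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)
t, sigma, n = 1.0, 0.3, 100_000
theta = rng.normal(0.0, sigma, size=n)
theta = theta[np.abs(theta) < np.pi]                 # crude truncation to [-pi, pi)

# Qubit: H = sigma_z (E = +/-1), |psi(0)> = |+>, O = sigma_x, U(kappa) = exp(-i H t e^{i theta})
z = t * np.exp(1j * theta)                           # complex kime t * e^{i theta}
amp0 = np.exp(-1j * z) / np.sqrt(2)                  # amplitude of |0> in U(kappa)|+>
amp1 = np.exp(+1j * z) / np.sqrt(2)                  # amplitude of |1> in U(kappa)|+>
O = 2 * np.real(np.conj(amp0) * amp1)                # <psi(0)| U^dag sigma_x U |psi(0)>

print(O.mean(), O.var())                             # kime-averaged expectation and sigma^2_kime
print(np.cos(2 * t))                                 # theta = 0 reference (standard evolution)
```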
Measurement accuracy discrepancies may be explained by phase uncertainty (random fluctuations in \(\theta\) lead to measurement spreading, or environmental noise couples to the \(\theta\) distribution width), temporal correlations (successive measurements have correlated kime-phases, or time-dependent systematic errors accumulate), and decoherence effects (complex time evolution modifies the decoherence rates, or non-Markovian effects emerge from kime-phase memory). Potential mitigation strategies may involve phase tracking (monitoring \(\theta\) distribution changes or implementing adaptive measurement protocols), error budgeting (separating kime-induced vs. intrinsic errors or appropriately optimizing the measurement sequences), or alternative protocol designs (employing kime-aware measurement sequences or implementing error mitigation based on \(\theta\) statistics).
The kime evolution operator may be expressed as \[U(\kappa) = e^{-iHte^{i\theta}/\hbar} = e^{-iHt(\cos\theta + i\sin\theta)/\hbar}.\] In general, splitting such an operator exponential into a product of exponentials requires care: for non-commuting operators \(A\) and \(B\), \(e^{A+B} \neq e^A e^B\), and a careless split can violate quantum mechanical principles. Exponential operator expansions involve special sign-handling and commutators, see the Baker-Campbell-Hausdorff (BCH) formula.
The BCH formula expresses the product of operator exponentials as a series in nested commutators, \([A,B] \equiv AB - BA\), \[e^A e^B = e^{\,A + B + \frac{1}{2}[A,B] + \frac{1}{12}\left([A,[A,B]] - [B,[A,B]]\right) + \cdots}\ .\]
In the present case, however, both terms in the exponent are multiples of the same operator \(H\), so they commute and the split is exact \[U(\kappa) = e^{-iHt(\cos\theta + i\sin\theta)/\hslash} = e^{-iHt\cos\theta/\hslash}e^{-iHt(i\sin\theta)/\hslash} = e^{-iHt\cos\theta/\hslash}\,e^{\,Ht\sin\theta/\hslash} .\] The crucial detail is the sign of the second (real) exponential: since \(-i\cdot i = +1\), \(e^{-iHt(\cos\theta + i\sin\theta)/\hslash} = e^{(-iHt\cos\theta + Ht\sin\theta)/\hslash},\) so the kime-phase contributes a non-unitary amplification factor when \(\sin\theta > 0\) and a decay factor when \(\sin\theta < 0\). A more rigorous derivation would involve the spectral decomposition of the operator \(H\) (total energy) in terms of its eigenvalues \(E_n\) (observable energies) and the corresponding eigenstates \(|n\rangle\), \[U(\kappa) = \sum_n e^{-iE_n te^{i\theta}/\hslash} |n\rangle\langle n|\ .\]
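A quick numerical check of this commuting split is sketched below; the \(3\times 3\) Hamiltonian and the values of \(t\) and \(\theta\) are arbitrary illustrative choices.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(5)
B = rng.normal(size=(3, 3))
H = (B + B.T) / 2                        # random Hermitian (real symmetric) Hamiltonian
t, theta, hbar = 0.8, 0.5, 1.0

U_full  = expm(-1j * H * t * np.exp(1j * theta) / hbar)
U_split = expm(-1j * H * t * np.cos(theta) / hbar) @ expm(H * t * np.sin(theta) / hbar)
U_flip  = expm(-1j * H * t * np.cos(theta) / hbar) @ expm(-H * t * np.sin(theta) / hbar)

print(np.allclose(U_full, U_split))      # True: both factors are functions of H and commute
print(np.allclose(U_full, U_flip))       # False: the sign of the real exponential matters
```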
For a Uniform distribution, \(\theta \sim \text{Unif}(-\pi,\pi)\), \(\Phi_U(\theta) = \frac{1}{2\pi}, \quad \theta \in [-\pi,\pi)\).
Expected evolution operator is \[\mathbb{E}(U)=\overline{U(\kappa)}_U = \frac{1}{2\pi}\int_{-\pi}^{\pi} e^{-iHt(\cos\theta + i\sin\theta)/\hbar}d\theta \equiv \underbrace{J_0(Ht/\hbar)}_{Bessel\ function}.\]
For a (truncated) Normal Distribution, \(\theta \sim \mathcal{N}(0,\sigma^2)\) supported on \([-\pi,\pi]\), \(\Phi_N(\theta) = \frac{1}{Z_N}\exp(-\theta^2/2\sigma^2), \quad \theta \in [-\pi,\pi)\), where \(Z_N\) is the normalization constant. Then, the expected evolution operator is \[\mathbb{E}(U)=\overline{U(\kappa)}_N = \frac{1}{Z_N}\int_{-\pi}^{\pi} e^{-iHt(\cos\theta + i\sin\theta)/\hbar}e^{-\theta^2/2\sigma^2}d\theta .\]
Finally, for a (truncated) Laplace distribution, \(\theta \sim \text{Laplace}(0,b)\) limited to \([-\pi,\pi]\), \(\Phi_L(\theta) = \frac{1}{Z_L}e^{-|\theta|/b}, \quad \theta \in [-\pi,\pi)\). And the expected evolution operator would be \[\mathbb{E}(U)=\overline{U(\kappa)}_L = \frac{1}{Z_L}\int_{-\pi}^{\pi} e^{-iHt(\cos\theta + i\sin\theta)/\hbar}e^{-|\theta|/b}d\theta .\]
We need to examine some of the physical properties, such as various conservation laws. For a single realization of \(\theta\) \[\langle\psi(\kappa)|\psi(\kappa)\rangle = \langle\psi(0)|U^\dagger(\kappa)U(\kappa)|\psi(0)\rangle = \langle\psi(0)|\psi(0)\rangle .\]
However, when averaging over the \(\theta\) phase distribution, the norm \[||\psi||^2=\overline{\langle\psi(\kappa)|\psi(\kappa)\rangle} = \int_{-\pi}^{\pi} \langle\psi(0)|U^\dagger(\kappa)U(\kappa)|\psi(0)\rangle\Phi(\theta)d\theta \leq \langle\psi(0)|\psi(0)\rangle .\] Similarly, the energy expectation value is \[E(\kappa) = \langle\psi(\kappa)|H|\psi(\kappa)\rangle = \langle\psi(0)|U^\dagger(\kappa)HU(\kappa)|\psi(0)\rangle ,\] which is not conserved due to the complex nature of kime. The time-energy uncertainty relation becomes \(\Delta E \Delta\kappa \geq \frac{\hbar}{2}|e^{i\theta}|\). Hence, the kime evolution exhibits apparent causality violation due to the complex time components allowing imaginary propagation, phase distribution width effects on the temporal ordering, and non-local correlations in repeated measurements.
Potential violations include
Unitarity: Individual trajectories are unitary, but ensemble averages are not \(\overline{U^\dagger(\kappa)U(\kappa)} \neq I\).
Energy Conservation: Complex time evolution allows temporary energy fluctuations \(\frac{d}{d\kappa}\overline{E(\kappa)} \neq 0\).
Causality: Complex time components suggest apparent backwards propagation \(\text{Sign}[\text{Im}(\kappa)] \text{ determines temporal direction}\).
Possible resolutions to some of these violations of core physical principles include (1) Statistical interpretations (treating physical measurements as ensemble averages, resolving apparent violations through proper averaging (expectation), and modeling decoherence via the non-unitary behavior); (2) Measurement theoretic interpretations (using the phase distribution width \(\Delta\theta\) as measurement uncertainty, utilizing broader distributions that lead to stronger decoherence, or limit-theoretic recovery of standard quantum mechanics as \(\Delta\theta \to 0\)); and (3) Consistency conditions (restrictions ensuring physical consistency, e.g., symmetric phase distributions, \(\Phi(\theta) = \Phi(-\theta)\), compact-supported or finite-variance phase distributions, \(\int {\theta^2 \Phi(\theta) d\theta < \infty}\), and renormalization over \([-\pi,\pi)\)).
Extending the standard Schwarzschild metric using kime yields \[ds^2 = -c^2\left(1-\frac{2GM}{rc^2}\right)d\kappa d\kappa^* + \left(1-\frac{2GM}{rc^2}\right)^{-1}dr^2 + r^2(d\theta^2 + \sin^2\theta d\phi^2),\] where \(\kappa = te^{i\theta}\) and its conjugate \(\kappa^* = te^{-i\theta}\).
Consider using different phase distributions, such as
Laplace Distribution \(\Phi_L(\theta) = \frac{1}{2b}e^{-|\theta|/b}, \quad \theta \in [-\pi,\pi]\), which implies sharp transitions in spacetime curvature and may yield enhanced sensitivity to local gravitational fields.
Truncated Normal \(\Phi_N(\theta) = \frac{1}{Z_N}e^{-\theta^2/2\sigma^2}, \quad \theta \in [-\pi,\pi]\), using smooth variations in metric components and Gaussian-weighted averaging of geodesics.
Uniform Distribution \(\Phi_U(\theta) = \frac{1}{2\pi}, \quad \theta \in [-\pi,\pi]\), assuming maximum entropy configuration and isotropic phase-space sampling.
Let’s consider again the Solar System Planetary Perihelion Precession and explore opportunities for kime-representation enhancement of the precession dynamics.
The Newtonian framework, based on the gravitational law, allows us to estimate Mercury’s perihelion precession using the following orbital parameters (for Mercury): semi-major axis \(a = 5.79 \times 10^7\) km, eccentricity \(e = 0.206\), orbital period \(T = 87.969\) days, and mean angular velocity \(n = \frac{2\pi}{T}\), along with the following Solar parameters: mass \(M_{\odot} = 1.989 \times 10^{30}\) kg and gravitational parameter \(\mu = GM_{\odot} = 1.327 \times 10^{20}\ m^3/s^2\).
For a classical two-body problem, the Newtonian orbit equation is \(\frac{d^2r}{d\theta^2} + r = \frac{h^2}{GM_{\odot}}\), where \(h\) is the specific angular momentum \(h = \sqrt{GM_{\odot}a(1-e^2)}\).
Examining the Solar oblateness, the precession contribution of the Sun’s quadrupole moment \(J_2\) is \(\Delta\omega_{J_2} = \frac{3nJ_2R_{\odot}^2}{2a^2(1-e^2)^2}\). Estimating \(J_2 \approx 2 \times 10^{-7}\) and \(R_{\odot} = 696,340\) km, we get \(\Delta\omega_{J_2} \approx 0.0254\text{ arcsec/century}\).
The other planets in the Solar system also contribute by perturbing Mercury’s orbit. For instance, Mercury’s orbit is affected by Venus, \(\Delta\omega_V = \frac{GMn_V}{4a_V^3}\frac{a^4}{GM_{\odot}}\approx 0.277\) arcsec/century; Jupiter, \(\Delta\omega_J = \frac{GMn_J}{4a_J^3}\frac{a^4}{GM_{\odot}}\approx 0.152\) arcsec/century; Earth, \(\Delta\omega_E = \frac{GMn_E}{4a_E^3}\frac{a^4}{GM_{\odot}}\approx 0.069\) arcsec/century, etc.
The total Newtonian precession effect, summing all of the individual planetary Newtonian contributions, becomes \[\Delta\omega_{\text{Newtonian}} = \Delta\omega_{J_2} + \Delta\omega_V + \Delta\omega_J + \Delta\omega_E + \text{(smaller order terms)}\]
\[\Delta\omega_{\text{Newtonian}} = (0.0254 + 0.277 + 0.152 + 0.069 + 0.017)\text{ arcsec/century}\]
\[\Delta\omega_{\text{Newtonian}} = 0.540\text{ arcsec/century}\] However, data from historical observations suggests a total precession \(\Delta\omega_{\text{observed}} = 43.4 \pm 0.2\text{ arcsec/century}\), which is far from the Newtonian prediction \(\Delta\omega_{\text{Newtonian}} = 0.540\text{ arcsec/century}\). The General Relativity prediction provides an improved estimate, \(\Delta\omega_{\text{GR}} = 43.0\text{ arcsec/century}\). Below, we will show that a kime-representation may enhance the GR prediction under certain assumptions for the kime-phase distribution. For instance, a Laplace phase prior yields \(43.0 \pm 0.1\text{ arcsec/century}\), a Normal prior \(43.0 \pm 0.05\text{ arcsec/century}\), and a Uniform prior \(43.0 \pm 0.2\text{ arcsec/century}\).
The precession missing from the Newtonian approach is \(\Delta\omega_{\text{missing}} = \Delta\omega_{\text{observed}} - \Delta\omega_{\text{Newtonian}} = 42.86\text{ arcsec/century}\).
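The bookkeeping of the Newtonian budget and of the missing precession can be reproduced with the short sketch below, which only re-adds the contributions quoted above (values in arcsec/century).

```python
# Newtonian perihelion-precession budget for Mercury (contributions quoted in the text)
contributions = {"solar J2": 0.0254, "Venus": 0.277, "Jupiter": 0.152,
                 "Earth": 0.069, "smaller terms": 0.017}
newtonian = sum(contributions.values())
observed = 43.4

print(round(newtonian, 3))             # ~0.540 arcsec/century
print(round(observed - newtonian, 2))  # ~42.86 arcsec/century left unexplained
```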
The following table shows the observed and predicted precession rates for several planets.
Planet | Orbits per century | Eccentricity | \(r_{\min}\) (AU) | GR predicted (arcsec/century) | Observed (arcsec/century) |
---|---|---|---|---|---|
Mercury | 415.2 | 0.2060 | 0.307 | 43.0 | \(43.1 \pm 0.5\) |
Venus | 162.5 | 0.0068 | 0.717 | 8.6 | \(8.4 \pm 4.8\) |
Earth | 100.0 | 0.0170 | 0.981 | 3.8 | \(5.0 \pm 1.2\) |
Icarus* | 89.3 | 0.8270 | 0.186 | 10.0 | \(9.8 \pm 0.8\) |
Note: The perihelion of the asteroid Icarus, which is closer to the Sun than any other asteroid, is within Mercury’s orbit.
The standard GR precession rate enhanced by kime corrections may be expressed as \[\Delta\phi = \frac{6\pi GM}{c^2a(1-e^2)} + \Delta\phi_{\kappa} ,\] where \(\Delta\phi_{\kappa}\) is the kime correction \(\Delta\phi_{\kappa} = \frac{6\pi GM}{c^2a(1-e^2)}\int_{-\pi}^{\pi}\left(e^{i\theta} - 1\right)\Phi(\theta)d\theta\).
Let’s show that for Mercury’s orbit, the GR prediction implies \(43.0\) arcsec/century; the corresponding kime-enhanced predictions under the alternative kime-phase distributions (Laplace, truncated Normal, and Uniform) are derived and summarized below.
Similarly, for the Earth’s perihelion precession the GR prediction is \(3.8\) arcsec/century, with analogous kime-enhanced refinements under the same family of kime-phase distributions.
Using the classical GR framework to predict the orbit precessions involves using \[\Delta\phi_{GR} = \frac{6\pi GM_{\odot}}{c^2a(1-e^2)}.\]
For Mercury, the semi-major axis is \(a = 5.79 \times 10^7\) km, the eccentricity is \(e = 0.206\), and the solar mass is \(M_{\odot} = 1.989 \times 10^{30}\) kg. This yields the estimate \(\Delta\phi_{GR} = 43.0\) arcsec/century.
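This GR estimate is easy to reproduce numerically from the stated orbital parameters (a minimal sketch; constants as quoted above, with \(c \approx 2.998\times 10^8\ m/s\) and 415.2 Mercury orbits per century).

```python
import numpy as np

# GR perihelion advance: Delta_phi = 6 pi G M_sun / (c^2 a (1 - e^2)), in arcsec/century
GM_sun = 1.327e20              # m^3 / s^2 (gravitational parameter quoted in the text)
c = 2.998e8                    # m / s
a = 5.79e10                    # Mercury semi-major axis, m
e = 0.206
orbits_per_century = 415.2

dphi_per_orbit = 6 * np.pi * GM_sun / (c**2 * a * (1 - e**2))        # radians per orbit
arcsec_per_century = dphi_per_orbit * (180 / np.pi) * 3600 * orbits_per_century
print(round(arcsec_per_century, 1))    # ~43.0 arcsec/century
```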
The kime enhancement involves a correction term \[\Delta\phi_{\kappa} = \Delta\phi_{GR} \cdot \int_{-\pi}^{\pi}(e^{i\theta} - 1)\Phi(\theta)d\theta ,\] which introduces a complex phase factor modifying the standard precession. For different kime-phase distributions, the calculations will utilize the corresponding density functions. For Laplace distribution, \(\Phi_L(\theta) = \frac{1}{2b}e^{-|\theta|/b}, \quad \theta \in [-\pi,\pi]\), the kime correction integral is \[I_L = \int_{-\pi}^{\pi}(e^{i\theta} - 1)\frac{1}{2b}e^{-|\theta|/b}d\theta .\] Assume the Laplace scale parameter \(b = 0.1\). Then, \[I_L = \frac{1}{2b}\left[\frac{b}{1-b^2i}(1-e^{-\pi/b}\cos\pi) + \frac{bi}{1-b^2i}(e^{-\pi/b}\sin\pi)\right] - 1 .\]
This leads to \(\Delta\phi_{\kappa,L} = (0.1 \pm 0.1)\text{ arcsec/century}\).
For the second kime-phase distribution model, the truncated Normal distribution, \(\Phi_N(\theta) = \frac{1}{Z_N}e^{-\theta^2/2\sigma^2}, \quad \theta \in [-\pi,\pi]\), the kime correction integral is \[I_N = \frac{1}{Z_N}\int_{-\pi}^{\pi}(e^{i\theta} - 1)e^{-\theta^2/2\sigma^2}d\theta.\] Assuming a tight dispersion, \(\sigma = 0.05\) (optimized specifically for Mercury), the correction becomes \[I_N = \frac{1}{Z_N}\left[e^{-\sigma^2/2}(1-e^{-\pi^2/2\sigma^2}\cos\pi) + i(e^{-\sigma^2/2}e^{-\pi^2/2\sigma^2}\sin\pi)\right] - 1, \] which implies an unbiased and tight estimate \(\Delta\phi_{\kappa,N} = (0.0 \pm 0.05)\text{ arcsec/century}\).
Lastly, for Uniform distribution, \(\Phi_U(\theta) = \frac{1}{2\pi}, \quad \theta \in [-\pi,\pi]\), the kime correction integral will be \(I_U = \frac{1}{2\pi}\int_{-\pi}^{\pi}(e^{i\theta} - 1)d\theta\). This leads to an estimate of \(\Delta\phi_{\kappa,U} = (0.0 \pm 0.2)\text{ arcsec/century}\).
Three complementary factors affect the precision of the kime-representation model-prediction. These sources of uncertainty include the kime-phase distribution choice and its parameters, the choice of the orbital parameters, \(\frac{\delta\Delta\phi}{\Delta\phi} = \sqrt{(\frac{\delta a}{a})^2 + (\frac{2e\delta e}{1-e^2})^2}\), and the phase integration \(\delta I = \sqrt{\int_{-\pi}^{\pi}|e^{i\theta} - 1|^2(\delta\Phi(\theta))^2d\theta}\).
The computed kime-representation precession predictions include
Newtonian (law of gravity) estimation: \(\Delta\phi_{\text{total},Newton} = 0.540 \pm 0.021\text{ arcsec/century}\).
GR Estimation: \(\Delta\phi_{\text{total},GR} = 43.0\text{ arcsec/century}\).
Laplace Distribution: \(\Delta\phi_{\text{total},Laplace} = (43.0 \pm 0.1)\text{ arcsec/century}\).
Normal Distribution: \(\Delta\phi_{\text{total},Normal} = (43.0 \pm 0.05)\text{ arcsec/century}\).
Uniform Distribution: \(\Delta\phi_{\text{total},Uniform} = (43.0 \pm 0.2)\text{ arcsec/century}\).
Question: Are there specific kime-phase distribution models (and if so, with what parameters) that yield an unbiased estimate matching the historical data for Mercury’s precession, \(43.1\pm 0.5\)? That is, what phase distribution may increase the mean from 43 to 43.1 and reduce the variability below 0.5?
For example, consider the \(Laplace\left (\mu=0.1, b=\frac{1}{4}\right )\) distribution, which has mean \(\mu = 0.1\), median \(m= 0.1\), variance \(2b^2 = 2(1/4)^2 = 2(0.0625) = 0.125\), and standard deviation \(\sqrt{2}b = \sqrt{2}/4 \approx 0.3536 \ll 0.5\). Mercury’s precession using the baseline relativistic prediction is \(43.0\) arcseconds/century, whereas with the Laplace model (mean bias/shift), the prediction is \(43.0 + 0.1 = 43.1\) arcseconds/century with uncertainty \(\pm 0.3536\) arcseconds/century.
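The moments quoted for this phase prior, and the corresponding shifted prediction, can be checked with a small simulation (a sketch only; the sample size and random seed are arbitrary).

```python
import numpy as np

mu, b = 0.1, 0.25                          # Laplace location and scale from the example above
theta = np.random.default_rng(4).laplace(mu, b, size=1_000_000)

print(theta.mean(), theta.std())           # ~0.1 and ~0.354 (= sqrt(2) * b)
print(43.0 + mu, np.sqrt(2) * b)           # shifted prediction 43.1 +/- ~0.354 arcsec/century
```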
In gravitational time dilation, we need to use the following kime-modified time dilation factor \[\frac{d\tau}{dt} = \sqrt{1-\frac{2GM}{rc^2}}\cdot\left(1 + \int_{-\pi}^{\pi}(e^{i\theta}-1)\Phi(\theta)d\theta\right).\]
Whereas in gravitational Lensing, the kime-enhanced deflection angle is \[\alpha = \frac{4GM}{c^2b}\left(1 + \Delta\alpha_{\kappa}\right), \]
where \(\Delta\alpha_{\kappa} = \int_{-\pi}^{\pi}(e^{i\theta}-1)\Phi(\theta)d\theta\).
For modeling binary pulsar orbital decay, the kime-modified energy loss rate is \[\frac{dE}{dt} = -\frac{32G^4M^2\mu^2}{5c^5r^4}\left(1 + \Delta E_{\kappa}\right)\]
Examples of more novel kime-representation predictions may include phase transition effects for strong field regime transitions, horizon structure modifications, or singularity behavior changes. We may also explore the kime effects on quantum gravity interface, quantum-classical transition mechanisms, modified uncertainty relations, or novel vacuum energy corrections.
Any meaningful tests may need to be subjected to appropriate sensitivity requirements, e.g., \[\Delta\text{measurement} \sim \frac{GM}{c^2R}\cdot\Delta\theta_{\text{rms}}\ ,\] where \(\Delta\theta_{\text{rms}}\) is the RMS phase variation.
\[i\hbar \frac{\partial \psi}{\partial \kappa} = -\frac{\hbar^2}{2m} \left(\frac{\partial^2}{\partial t^2} + \frac{\partial^2}{\partial \theta^2}\right) \psi \]
Let’s try to work out a few explicit examples of the kime-Schrödinger equation solutions using each of the following kime-phase distributions: the Laplace, the truncated Normal, and the Uniform distributions.
We assume a separable solution of the form:
\[\psi(t, \theta, \kappa) = \psi_t(t) \psi_\theta(\theta) e^{-i E \kappa / \hbar}\]
The Laplace distribution for \(\theta\) is given by \(f(\theta; \mu, b) = \frac{1}{2b} \exp\left(-\frac{|\theta - \mu|}{b}\right)\), where \(( \mu = 0 , \sigma = 0.4)\) implies \(b = \frac{\sigma}{\sqrt{2}} = \frac{0.4}{\sqrt{2}} \approx 0.282.\)
For the time component \(\psi_t(t)\) \[i\hbar \frac{\partial \psi_t}{\partial t} = \frac{p_t^2}{2m} \psi_t \quad \text{with} \quad p_t = -i\hbar \frac{\partial}{\partial t}\] The solution is \(\psi_t(t) = A_t e^{i p_t t / \hbar}\).
For the kime-phase component \(\psi_\theta(\theta)\) \[-\frac{\hbar^2}{2m} \frac{\partial^2 \psi_\theta}{\partial \theta^2} = E_\theta \psi_\theta\] Given the Laplace distribution, we solve:
\[\frac{\partial^2 \psi_\theta}{\partial \theta^2} = -\lambda^2 \psi_\theta \quad \text{where} \quad \lambda^2 = \frac{2m E_\theta}{\hbar^2}\]
And the general solution is \(\psi_\theta(\theta) = A_\theta e^{i \lambda \theta} + B_\theta e^{-i \lambda \theta}\).
Given the Laplace distribution for \(\theta\), \(\psi_\theta(\theta) = \frac{1}{\sqrt{2 \times 0.282}} \exp\left(-\frac{|\theta|}{0.282}\right)\).
Combining the solutions, the full kime-wave function is
\[\psi(t, \theta, \kappa) = A_t e^{i p_t t / \hbar} \cdot \frac{1}{\sqrt{2 \times 0.282}} \exp\left(-\frac{|\theta|}{0.282}\right) e^{-i E \kappa / \hbar}\]
Next, consider the truncated normal distribution for \(\theta \in [- \pi, \pi]\) with parameters \((\mu = 0,\sigma = 0.4)\).
Same as before, \(\psi_t(t) = A_t e^{i p_t t / \hbar}\).
The PDF for a truncated normal distribution is \[f(\theta; \mu, \sigma, a, b) = \frac{\frac{1}{\sigma \sqrt{2\pi}} \exp\left(-\frac{(\theta - \mu)^2}{2\sigma^2}\right)}{\Phi\left(\frac{b - \mu}{\sigma}\right) - \Phi\left(\frac{a - \mu}{\sigma}\right)}\ ,\] where \(\Phi\) is the CDF of the standard normal distribution.
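For completeness, this quotient-of-CDFs normalization matches the standard truncated-normal density; a minimal check (assuming \(\mu=0,\ \sigma=0.4\), and truncation to \([-\pi,\pi]\)) using scipy is sketched below.

```python
import numpy as np
from scipy.stats import truncnorm, norm

mu, sigma, a, b = 0.0, 0.4, -np.pi, np.pi
a_std, b_std = (a - mu) / sigma, (b - mu) / sigma       # standardized truncation bounds

theta = np.linspace(a, b, 5)
manual = norm.pdf(theta, mu, sigma) / (norm.cdf(b_std) - norm.cdf(a_std))
print(np.allclose(truncnorm.pdf(theta, a_std, b_std, loc=mu, scale=sigma), manual))   # True
```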
Given the truncated normal distribution, the kime-phase component \(\psi_\theta(\theta)\) can be approximated by
\[\psi_\theta(\theta) = \frac{1}{\sqrt{\Phi\left(\frac{\pi}{0.4}\right) - \Phi\left(-\frac{\pi}{0.4}\right)}} \times \frac{1}{0.4 \sqrt{2\pi}} \exp\left(-\frac{\theta^2}{2 \times 0.4^2}\right)\]
Combining the solutions, the full kime-wave function is
\[\psi(t, \theta, \kappa) = A_t e^{i p_t t / \hbar} \times \frac{1}{\sqrt{\Phi\left(\frac{\pi}{0.4}\right) - \Phi\left(-\frac{\pi}{0.4}\right)}} \times \frac{1}{0.4 \sqrt{2\pi}} \exp\left(-\frac{\theta^2}{2 \times 0.4^2}\right) e^{-i E \kappa / \hbar}\]
For the uniform distribution, \(\theta\in [- \pi, \pi]\), \(f(\theta) = \frac{1}{2\pi}, \quad \theta \in [-\pi, \pi]\), the time component solution is \(\psi_t(t) = A_t e^{i p_t t / \hbar}\).
And for Uniform distribution, \(\psi_\theta(\theta) = \frac{1}{\sqrt{2\pi}}\), the full kime-wave solution is obtained by combining the solutions, \[\psi(t, \theta, \kappa) = A_t e^{i p_t t / \hbar} \times \frac{1}{\sqrt{2\pi}} e^{-i E \kappa / \hbar}\]
These explicit solutions to the kime-dependent Schrödinger equation demonstrate how different kime-phase distributions affect the kime wavefunction.
Alternatively, the kime-dependent Schrödinger equation solution could be expressed using the kime unitary operator group action, \(|\varphi_{\kappa}\rangle = e^{-i t A}\, \ell\left(e^{i\theta}\right) (|\varphi_{o}\rangle)\).
Here, \(\ell(e^{i\theta})\) represents the kime-phase distribution acting on the initial state.
Let’s demonstrate the explicit solutions for several kime-phase distributions.
For the Laplace distribution with parameters \((\mu = 0,\sigma = 0.4)\): \[f(\theta; \mu, b) = \frac{1}{2b} \exp\left(-\frac{|\theta - \mu|}{b}\right)\ ,\]
where \(b = \frac{\sigma}{\sqrt{2}} = \frac{0.4}{\sqrt{2}} \approx 0.282\).
The kime-phase distribution function \(\ell(e^{i\theta})\) is \[\ell(e^{i\theta}) = \frac{1}{\sqrt{2 \times 0.282}} \exp\left(-\frac{|\theta|}{0.282}\right) \]
Hence, the full kime-wave solution is obtained by combining the unitary operator with the kime-phase distribution: \[|\varphi_{\kappa}\rangle = e^{-i t A} \left(\frac{1}{\sqrt{2 \times 0.282}} \exp\left(-\frac{|\theta|}{0.282}\right)\right) (|\varphi_{o}\rangle)\ .\]
For the truncated normal distribution with parameters \((\mu = 0,\sigma = 0.4, a = -\pi,b = \pi)\) \[f(\theta; \mu, \sigma, a, b) = \frac{\frac{1}{\sigma \sqrt{2\pi}} \exp\left(-\frac{(\theta - \mu)^2}{2\sigma^2}\right)}{\Phi\left(\frac{b - \mu}{\sigma}\right) - \Phi\left(\frac{a - \mu}{\sigma}\right)}.\]
The kime-phase distribution function \(\ell(e^{i\theta})\) is: \[\ell(e^{i\theta}) = \frac{1}{\sqrt{\Phi\left(\frac{\pi}{0.4}\right) - \Phi\left(-\frac{\pi}{0.4}\right)}} \cdot \frac{1}{0.4 \sqrt{2\pi}} \exp\left(-\frac{\theta^2}{2 \times 0.4^2}\right)\]
and the full kime-wave solution combines the unitary operator with the kime-phase distribution \[|\varphi_{\kappa}\rangle = e^{-i t A} \left(\frac{1}{\sqrt{\Phi\left(\frac{\pi}{0.4}\right) - \Phi\left(-\frac{\pi}{0.4}\right)}} \cdot \frac{1}{0.4 \sqrt{2\pi}} \exp\left(-\frac{\theta^2}{2 \times 0.4^2}\right)\right) (|\varphi_{o}\rangle).\]
For the uniform distribution on \([- \pi, \pi]\) is \[f(\theta) = \frac{1}{2\pi}, \quad \theta \in [-\pi, \pi]. \]
Again, the kime-phase distribution function \(\ell(e^{i\theta})\) is \[\ell(e^{i\theta}) = \frac{1}{\sqrt{2\pi}}.\]
Hence, the complete kime-wave solution combines the unitary operator with the kime-phase distribution \[|\varphi_{\kappa}\rangle = e^{-i t A} \left(\frac{1}{\sqrt{2\pi}}\right) (|\varphi_{o}\rangle).\]
We need to confirm the validity of these solutions of the kime Schrödinger equation in terms of the kime unitary operator group action and incorporating the kime-phase distributions as part of the solution. This approach may maintain the proper role of the kime-phase distribution in the evolution of the quantum state.
Next, we solve the kime-independent Schrödinger equation for each of three kime-phase distributions. The kime-independent Schrödinger equation we consider is:
\[\frac{\partial |\varphi_\kappa\rangle}{\partial t} = -iA |\varphi_\kappa\rangle \]
For simplicity, we assume the Hamiltonian \(A\) (or \(H\)) is for a free particle in 1D, \(A \equiv H = -\frac{\hbar^2}{2m} \frac{\partial^2}{\partial x^2}\). The general solution to this equation is \(|\varphi_{\kappa}\rangle = e^{-i t A} \ell(e^{i\theta}) |\varphi_o\rangle\), where \(\ell(e^{i\theta})\) represents the kime-phase distribution acting on the initial state \(|\varphi_o\rangle\).
The Laplace distribution for \(\theta\) with parameters \((\mu = 0,\sigma = 0.4)\) corresponds to the probability density function \(f(\theta; \mu, b) = \frac{1}{2b} \exp\left(-\frac{|\theta - \mu|}{b}\right)\),
where \(b = \frac{\sigma}{\sqrt{2}} = \frac{0.4}{\sqrt{2}} \approx 0.282\).
Given the Laplace distribution for \(\theta\), the kime-phase component \(\psi_\theta(\theta)\) can be approximated by
\[\psi_\theta(\theta) = \frac{1}{\sqrt{2 \times 0.282}} \exp\left(-\frac{|\theta|}{0.282}\right)\]
Combining the kime-phase component with the evolution operator, the full kime-independent Schrödinger equation solution is \[|\varphi_\kappa\rangle = e^{-i t H} \ell(e^{i\theta}) |\varphi_o\rangle\]
\[|\varphi_\kappa\rangle = e^{-i t H} \times \frac{1}{\sqrt{2 \times 0.282}} \exp\left(-\frac{|\theta|}{0.282}\right) |\varphi_o\rangle \]
The truncated normal distribution for \(\theta\in [- \pi, \pi)\) with parameters \((\mu = 0,\sigma = 0.4)\) has the PDF
\[f(\theta; \mu, \sigma, a, b) = \frac{\frac{1}{\sigma \sqrt{2\pi}} \exp\left(-\frac{(\theta - \mu)^2}{2\sigma^2}\right)}{\Phi\left(\frac{b - \mu}{\sigma}\right) - \Phi\left(\frac{a - \mu}{\sigma}\right)},\]
where \(\Phi\) is the CDF of the standard normal distribution.
Given the truncated normal distribution, the kime-phase component \(\psi_\theta(\theta)\) can be approximated by
\[\psi_\theta(\theta) = \frac{1}{\sqrt{\Phi\left(\frac{\pi}{0.4}\right) - \Phi\left(-\frac{\pi}{0.4}\right)}} \times \frac{1}{0.4 \sqrt{2\pi}} \exp\left(-\frac{\theta^2}{2 \times 0.4^2}\right) \]
Combining the kime-phase component with the evolution operator, the full kime-independent Schrödinger equation solution is \(|\varphi_\kappa\rangle = e^{-i t H} \ell(e^{i\theta}) |\varphi_o\rangle\).
\[|\varphi_\kappa\rangle = e^{-i t H} \times \frac{1}{\sqrt{\Phi\left(\frac{\pi}{0.4}\right) - \Phi\left(-\frac{\pi}{0.4}\right)}} \times \frac{1}{0.4 \sqrt{2\pi}} \exp\left(-\frac{\theta^2}{2 \times 0.4^2}\right) |\varphi_o\rangle \ .\]
The uniform distribution for \(\theta\in [- \pi, \pi)\) has the PDF \(f(\theta) = \frac{1}{2\pi}, \quad \theta \in [-\pi, \pi]\).
Given the uniform distribution, the kime-phase component \(\psi_\theta(\theta)\) is \(\psi_\theta(\theta) = \frac{1}{\sqrt{2\pi}}\).
Combining the kime-phase component with the evolution operator, the full kime-independent Schrödinger equation solution is
\[|\varphi_\kappa\rangle = e^{-i t H} \ell(e^{i\theta}) |\varphi_o\rangle\ ,\]
with the explicit solution
\[|\varphi_\kappa\rangle = e^{-i t H} \times \frac{1}{\sqrt{2\pi}} |\varphi_o\rangle\ .\]
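For a quick visual comparison, the following sketch plots the three kime-phase components \(\psi_\theta(\theta)\) defined above (Laplace with \(b = 0.4/\sqrt{2}\), truncated Normal with \(\sigma = 0.4\), and uniform); the grid resolution is an arbitrary choice.
library(plotly)
theta <- seq(-pi, pi, length.out = 400)
b   <- 0.4/sqrt(2)                                 # Laplace scale
sig <- 0.4                                         # truncated-Normal standard deviation
ZN  <- pnorm(pi/sig) - pnorm(-pi/sig)              # Phi(pi/sigma) - Phi(-pi/sigma)
psiLaplace <- exp(-abs(theta)/b)/sqrt(2*b)
psiNormal  <- (1/sqrt(ZN))*exp(-theta^2/(2*sig^2))/(sig*sqrt(2*pi))
psiUniform <- rep(1/sqrt(2*pi), length(theta))
plot_ly(x=theta, y=psiLaplace, type="scatter", mode="lines", name="Laplace") %>%
  add_trace(x=theta, y=psiNormal, type="scatter", mode="lines", name="Truncated Normal") %>%
  add_trace(x=theta, y=psiUniform, type="scatter", mode="lines", name="Uniform") %>%
  layout(title="Kime-phase components psi_theta(theta)",
         xaxis=list(title="theta"), yaxis=list(title="psi_theta(theta)"))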
Question: Is there a relation to the Girsanov theorem, which relates the Wiener measure \(\mu\) to different probability measures \(\nu\) on the space of continuous paths and gives an explicit formula for the likelihood ratios between them?
The Girsanov theorem provides a formula for the change of measure between two probability measures on the space of continuous paths. Let’s denote the original probability measure as \(\mu\) and the new probability measure as \(\nu\).
Girsanov Theorem: Let \((\Omega, \mathcal{F}, \mu)\) be a probability space, and let \(\{W_t\}_{t \geq 0}\) be a \(d\)-dimensional \(\mu\)-Brownian motion. Suppose there exists a \(\mathbb{R}^d\)-valued, \(\{\mathcal{F}_t\}_{t \geq 0}\)-adapted process \(\{Z_t\}_{t \geq 0}\) such that \(Z_0 = 1\) and \[Z_t = \exp\left(\int_0^t \theta_s \cdot dW_s - \frac{1}{2}\int_0^t |\theta_s|^2 ds\right),\] where \(\theta_t\) is an \(\mathbb{R}^d\)-valued, \(\{\mathcal{F}_t\}_{t \geq 0}\)-adapted process satisfying \(\int_0^T |\theta_t|^2 dt < \infty\) for all \(T > 0\). Then, the measure \(\nu\) defined by \(\frac{d\nu}{d\mu} = Z_T\) is a probability measure, and the process \(\{W_t^{\nu}\}_{t \geq 0}\) defined by \(W_t^{\nu} = W_t - \int_0^t \theta_s ds\) is a \(d\)-dimensional \(\nu\)-Brownian motion.
In other words, the Girsanov theorem states that if we change the measure from \(\mu\) to \(\nu\) using the Radon-Nikodym derivative, \(\frac{d\nu}{d\mu} = Z_T\), then the process \(W_t^{\nu}\) becomes a Brownian motion under the new measure \(\nu\). The process \(\theta_t\) in the above formulation is often referred to as the Girsanov kernel or the market price of risk in the context of financial mathematics. The Girsanov theorem provides a framework for understanding how different probability measures on the space of continuous paths can be related to each other, which is crucial for the potential connections to the kime representation discussed earlier.
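As a quick numerical sanity check of the theorem (a sketch with an assumed constant kernel \(\theta\), unrelated to any specific kime-phase model), reweighting simulated Brownian paths by \(Z_T\) should reproduce the drift \(\theta T\) under the new measure \(\nu\):
set.seed(1234)
nPaths <- 20000; nSteps <- 200; Tmax <- 1; dt <- Tmax/nSteps
theta <- 0.7                                        # assumed constant Girsanov kernel
dW  <- matrix(rnorm(nPaths*nSteps, sd = sqrt(dt)), nPaths, nSteps)
W_T <- rowSums(dW)                                  # terminal Brownian values under mu
Z_T <- exp(theta*W_T - 0.5*theta^2*Tmax)            # Radon-Nikodym derivative dnu/dmu
c(meanZ = mean(Z_T),          # ~1:   Z_T is a mu-martingale started at 1
  muDrift = mean(W_T),        # ~0:   no drift under mu
  nuDrift = mean(W_T*Z_T))    # ~0.7: E_nu[W_T] = theta*Tmax under the new measure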
To explore potential connections between the complex-time (kime) representation of repeated-measurement longitudinal processes and the Girsanov theorem, we start with the formulation of the theorem. It is a result in stochastic analysis relating different probability measures on the space of continuous paths. Specifically, it provides a formula for the Radon-Nikodym derivative between two probability measures, which can be interpreted as a change of measure. Below are a few potential parallels we can start this exploration with.
In the kime representation, the complex time variable \(\kappa = t e^{i\theta}\) can be seen as a change of the underlying measure or representation of the dynamical system. Different kime-phase distributions (Laplace, normal, uniform, etc.) may represent different choices of probability measures.
The Girsanov theorem provides a formula for the Radon-Nikodym derivative between two probability measures \(\mu\) and \(\nu\) on the space of continuous paths. Specifically, once the measure is changed via \(\frac{d\nu}{d\mu}\), the drift-adjusted process \(W_t^{\nu} = W_t - \int_0^t \theta_s ds\) becomes a Brownian motion, and hence a martingale, under the new measure \(\nu\).
In the kime framework, the key modification is the introduction of the complex time variable \(\kappa = t e^{i\theta}\), where \(t\) is the real time and \(\theta\) is the phase variable. This leads to a modified line element \[ds^2 = -c^2 \left (1 - \frac{2GM}{rc^2}\right )d\kappa d\kappa^* + \left (1 - \frac{2GM}{rc^2}\right )^{-1}dr^2 + r^2(d\theta^2 + \sin^2\theta d\phi^2) .\]
To explore connections between the kime representation and the Girsanov theorem, we need to define the kime-modified probability measure. Denote the original, real-time probability measure by \(\mu\), and the kime-modified measure as \(\nu\). We can then express the Radon-Nikodym derivative \(\frac{d\nu}{d\mu}\) in terms of the kime-phase distribution \(\Phi(\theta)\) \[\frac{d\nu}{d\mu} = \exp\left(\int_{-\pi}^{\pi} (e^{i\theta} - 1)\Phi(\theta)d\theta\right)\]
This formulation mirrors the Girsanov theorem, where the Radon-Nikodym derivative is expressed in terms of the process under the new measure.
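The phase integral in the exponent above can be evaluated numerically; the following sketch assumes a Laplace kime-phase density with scale \(b = 0.4/\sqrt{2}\) (the value used in the earlier examples).
b   <- 0.4/sqrt(2)
phaseDens <- function(th) exp(-abs(th)/b)/(2*b)                        # Laplace phase density
reExp <- integrate(function(th) (cos(th) - 1)*phaseDens(th), -pi, pi)$value
imExp <- integrate(function(th) sin(th)*phaseDens(th), -pi, pi)$value  # ~0 by symmetry
exp(complex(real = reExp, imaginary = imExp))                          # d(nu)/d(mu) factor (per unit time)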
To derive the kime-modified stochastic process, consider the new kime-modified measure \(\nu\) and the stochastic process \(X_t\) as a martingale, just as in the Girsanov theorem. We can then explore the properties of this kime-modified process and its connections to the original real-time process. Let’s consider a stochastic differential equation (SDE) in the kime-modified representation \[dX = a(X,t,\theta)dt + b(X,t,\theta)d\theta .\] In the kime-modified framework, \(a(X,t,\theta)\) and \(b(X,t,\theta)\) are the drift and diffusion coefficients, respectively. Next, consider a kime-modification of the Itô formula, which plays a crucial role in the Girsanov theorem. We can try to derive a kime-modified Itô formula that relates differentials in the real-time representation to changes in the kime-time representation. At first glance, we can express the kime-modified Itô formula as \[dX = \left(\frac{\partial X}{\partial t} + \frac{i}{2}\frac{\partial^2 X}{\partial\theta^2}\right)dt + \frac{\partial X}{\partial\theta}d\theta .\] However, to provide insights into the underlying stochastic structure of the kime framework and its connection to the Girsanov theorem, we need to treat \(\theta\sim \Phi\) as a random variable. We therefore revise the kime-modified Itô formula to properly account for the fact that \(\theta\sim\Phi\) is a random quantity with a probability density function \(\Phi(\theta)\) supported on \([-\pi, \pi]\), \(\int_{-\pi}^{\pi} \Phi(\theta)d\theta = 1\). For an observed process \(X_t\), the kime-modified version should incorporate both the time evolution and the phase uncertainty, i.e., \(X_{\kappa}\) with \(\kappa = te^{i\theta}\). The modified Itô formula can be expressed as an expectation
\[dX_{\kappa} = \mathbb{E}_{\Phi}\left[\frac{\partial X}{\partial t}dt + \frac{1}{2}\frac{\partial^2 X}{\partial \theta^2}d\theta^2\right] ,\] where \(\mathbb{E}_{\Phi}\) denotes expectation with respect to the phase distribution \(\Phi\). More explicitly \[dX_{\kappa} = \int_{-\pi}^{\pi} \left(\frac{\partial X}{\partial t}dt + \frac{1}{2}\frac{\partial^2 X}{\partial \theta^2}d\theta^2\right)\Phi(\theta)d\theta\]
The corresponding stochastic differential equation (SDE) is \[dX_{\kappa} = a(X,t)\mathbb{E}_{\Phi}[dt] + b(X,t)\mathbb{E}_{\Phi}[dW_t] ,\] where \(\mathbb{E}_{\Phi}[dt] = \int_{-\pi}^{\pi} e^{i\theta}dt\Phi(\theta)d\theta\) and \(\mathbb{E}_{\Phi}[dW_t] = \int_{-\pi}^{\pi} e^{i\theta}dW_t\Phi(\theta)d\theta\).
Hence, the connection to Girsanov theorem involves the Radon-Nikodym derivative \[\frac{d\nu}{d\mu} = \exp\left(\int_{0}^{t}\int_{-\pi}^{\pi} (e^{i\theta} - 1)\Phi(\theta)d\theta ds\right) .\]
This formulation better reflects the nature of \(\theta\) as a random variable and leads to a more appropriate modification of the Itô formula for kime processes.
The martingale property suggests that under the new measure \(\nu\), we have \[\mathbb{E}_{\nu}[X_{\kappa}|\mathcal{F}_s] = X_{\kappa_s},\] where \(\mathcal{F}_s\) is the filtration generated by the process up to time \(s\). This revision makes the connection to the Girsanov theorem more precise and better reflects the probabilistic nature of the phase variable \(\theta\). The key insight is that we need to take expectations with respect to the phase distribution \(\Phi\) when computing differentials and defining the measure change.
We can use the Laplace and Normal phase distributions to explicate the revised kime-modified Itô formula.
For the Laplace phase distribution \(f(\theta; \mu, b) = \frac{1}{2b} \exp\left(-\frac{|\theta - \mu|}{b}\right)\), with centrality parameter \(\mu = 0\) and scale parameter \(b>0\), the expected differential is \[\mathbb{E}_{\Phi_L}[dt] = \int_{-\pi}^{\pi} e^{i\theta}dt \cdot \frac{1}{2b}\exp\left(-\frac{|\theta|}{b}\right)d\theta\]
\[= \frac{dt}{2b}\left[\int_{-\pi}^{0} e^{i\theta}\exp\left(\frac{\theta}{b}\right)d\theta + \int_{0}^{\pi} e^{i\theta}\exp\left(-\frac{\theta}{b}\right)d\theta\right]\]
\[ = dt \cdot \frac{1}{1+b^2}\left[1-e^{-\pi/b}(\cos\pi - b\sin\pi)\right] = dt \cdot \frac{1+e^{-\pi/b}}{1+b^2}\ ,\] since \(\cos\pi = -1\) and \(\sin\pi = 0\).
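A quick numerical cross-check of this factor (a sketch assuming \(\sigma = 0.4\), i.e., \(b = 0.4/\sqrt{2}\)):
b <- 0.4/sqrt(2)
lapPDF <- function(th) exp(-abs(th)/b)/(2*b)
numInt <- integrate(function(th) cos(th)*lapPDF(th), -pi, pi)$value   # imaginary part vanishes by symmetry
closedForm <- (1 + exp(-pi/b))/(1 + b^2)
c(numeric = numInt, closedForm = closedForm)                          # the two values should agree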
The kime-modified SDE for a process \(X_{\kappa}\) is \[dX_{\kappa} = a(X,t)\,\frac{1+e^{-\pi/b}}{1+b^2}\,dt + b(X,t)dW_t\ ,\] where \(b(X,t)\) denotes the diffusion coefficient (not to be confused with the Laplace scale parameter \(b\)).
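As an illustrative sketch of this SDE (with an assumed mean-reverting drift \(a(X,t) = -X\), unit diffusion coefficient, and Laplace scale \(b = 0.4/\sqrt{2}\)), a simple Euler-Maruyama discretization is:
set.seed(42)
b    <- 0.4/sqrt(2)
cPhi <- (1 + exp(-pi/b))/(1 + b^2)          # kime expected-differential factor from above
nSteps <- 1000; dt <- 0.01
X <- numeric(nSteps + 1); X[1] <- 1
for (s in 1:nSteps) {
  X[s + 1] <- X[s] - X[s]*cPhi*dt + sqrt(dt)*rnorm(1)   # drift damped by the kime factor
}
plot(seq(0, nSteps*dt, by = dt), X, type = "l", xlab = "t", ylab = "X_kappa",
     main = "Euler-Maruyama path of the kime-modified SDE (sketch)")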
The Radon-Nikodym derivative is a martingale and involves a stochastic integral. For the Laplace case it is \[\frac{d\nu_L}{d\mu} = \exp\left(\int_0^t \theta_s dW_s - \frac{1}{2}\int_0^t \theta_s^2 ds\right) ,\]
where \(\theta_s\) is our Girsanov kernel that incorporates the Laplace phase distribution \[\theta_s = \int_{-\pi}^{\pi} (e^{i\theta} - 1)\frac{1}{2b}\exp\left(-\frac{|\theta|}{b}\right)d\theta\]
Therefore, \[\frac{d\nu_L}{d\mu} = \mathcal{E}\left(\int_0^t \int_{-\pi}^{\pi} (e^{i\theta} - 1)\frac{1}{2b}\exp\left(-\frac{|\theta|}{b}\right)d\theta dW_s\right)\]
where \(\mathcal{E}\) is the stochastic exponential (Doléans-Dade exponential). This formulation ensures that (i) the Radon-Nikodym derivative is a true martingale; (ii) Novikov’s condition is satisfied; and (iii) the measure change is properly stochastic. We can work out the explicit form of the stochastic exponential by first computing the Girsanov kernel \(\theta_s^L\) \[\theta_s^L = \int_{-\pi}^{\pi} (e^{i\theta} - 1)\frac{1}{2b}\exp\left(-\frac{|\theta|}{b}\right)d\theta .\]
Break this into positive and negative parts and integrate each part \[\theta_s^L = \frac{1}{2b}\left[\int_{-\pi}^{0} (e^{i\theta} - 1)\exp\left(\frac{\theta}{b}\right)d\theta + \int_{0}^{\pi} (e^{i\theta} - 1)\exp\left(-\frac{\theta}{b}\right)d\theta\right]\] \[\theta_s^L = \frac{1}{2b}\left[\frac{b}{1-bi}\left(1 - e^{-\pi/b}(\cos\pi - b\sin\pi)\right) + \frac{b}{1+bi}\left(1 - e^{-\pi/b}(\cos\pi + b\sin\pi)\right) - 2\pi\right]\] Thus, the stochastic exponential becomes \[\frac{d\nu_L}{d\mu} = \mathcal{E}\left(\int_0^t \theta_s^L dW_s\right)\] \[ = \exp\left(\int_0^t \theta_s^L dW_s - \frac{1}{2}\int_0^t (\theta_s^L)^2 ds\right)\] To ensure this is a true martingale, we need to verify Novikov’s condition \[\mathbb{E}\left[\exp\left(\frac{1}{2}\int_0^T (\theta_s^i)^2 ds\right)\right] < \infty, \quad i \in \{L,N\}\] This verification requires evaluating the following integral \[\int_0^T |\theta_s^L|^2 ds = T\left|\frac{1}{2b}\left[\frac{b}{1-bi}\left(1 - e^{-\pi/b}(\cos\pi - b\sin\pi)\right) + \frac{b}{1+bi}\left(1 - e^{-\pi/b}(\cos\pi + b\sin\pi)\right) - 2\pi\right]\right|^2\] To evaluate the Laplace integral explicitly we start with the integral \(\int_0^T |\theta_s^L|^2 ds\), where \[\theta_s^L = \frac{1}{2b}\left[\frac{b}{1-bi}\left(1 - e^{-\pi/b}(\cos\pi - b\sin\pi)\right) + \frac{b}{1+bi}\left(1 - e^{-\pi/b}(\cos\pi + b\sin\pi)\right) - 2\pi\right]\] Simplify \(\theta_s^L\) by letting \(A = 1 - e^{-\pi/b}(\cos\pi - b\sin\pi)\) and \(B = 1 - e^{-\pi/b}(\cos\pi + b\sin\pi)\). Then, \[\theta_s^L = \frac{1}{2}\left[\frac{A}{1-bi} + \frac{B}{1+bi} - \frac{2\pi}{b}\right] .\] To find \(|\theta_s^L|^2\), multiply by the complex conjugate \[|\theta_s^L|^2 = \theta_s^L \cdot (\theta_s^L)^*\] \[= \frac{1}{4}\left[\frac{A}{1-bi} + \frac{B}{1+bi} - \frac{2\pi}{b}\right] \cdot \left[\frac{A}{1+bi} + \frac{B}{1-bi} - \frac{2\pi}{b}\right]\] \[ = \frac{1}{4}\left[\frac{|A|^2}{1+b^2} + \frac{|B|^2}{1+b^2} + \frac{AB^*}{(1-bi)(1-bi)} + \frac{A^*B}{(1+bi)(1+bi)} + \frac{4\pi^2}{b^2}\right.\] \[\left.- \frac{2\pi A}{b(1-bi)} - \frac{2\pi A^*}{b(1+bi)} - \frac{2\pi B}{b(1+bi)} - \frac{2\pi B^*}{b(1-bi)}\right]\] Since \(|A|^2 = (1 - e^{-\pi/b}\cos\pi)^2 + (be^{-\pi/b}\sin\pi)^2\) and \(|B|^2 = (1 - e^{-\pi/b}\cos\pi)^2 + (be^{-\pi/b}\sin\pi)^2\), \[AB^* = (1 - e^{-\pi/b}\cos\pi)^2 - (be^{-\pi/b}\sin\pi)^2 + 2ie^{-\pi/b}\sin\pi(1 - e^{-\pi/b}\cos\pi) .\] Next, \[\int_0^T |\theta_s^L|^2 ds = T\cdot\frac{1}{4(1+b^2)}\left[2(1 - 2e^{-\pi/b}\cos\pi + e^{-2\pi/b}) + \frac{4\pi^2}{b^2}\right.\] \[\left.- \frac{4\pi}{b}(1 - e^{-\pi/b}\cos\pi)\right] .\] Explicitly, Novikov’s condition requires \[\mathbb{E}\left[\exp\left(\frac{T}{8(1+b^2)}\left[2(1 - 2e^{-\pi/b}\cos\pi + e^{-2\pi/b}) + \frac{4\pi^2}{b^2} - \frac{4\pi}{b}(1 - e^{-\pi/b}\cos\pi)\right]\right)\right] < \infty .\]
For typical values of the scale parameter \(b\) (e.g., \(b = 0.1\)), we can verify this condition numerically.
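A numerical sketch of this verification, computing \(\theta_s^L\) directly from its integral definition (with the assumed values \(b = 0.1\) and horizon \(T = 1\)):
b      <- 0.1
lapPDF <- function(th) exp(-abs(th)/b)/(2*b)
thRe <- integrate(function(th) (cos(th) - 1)*lapPDF(th), -pi, pi)$value
thIm <- integrate(function(th) sin(th)*lapPDF(th), -pi, pi)$value      # ~0 by symmetry
kernelSq <- thRe^2 + thIm^2                                            # |theta_s^L|^2 (deterministic)
Tmax <- 1
exp(0.5*Tmax*kernelSq)   # finite, so Novikov's condition holds for this deterministic kernel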
For the truncated Normal phase distribution (with \(\mu = 0\), standard deviation \(\sigma\), and normalization constant \(Z_N\) defined below), the expected differential is \[\mathbb{E}_{\Phi_N}[dt] = \frac{dt}{Z_N\sigma\sqrt{2\pi}}\int_{-\pi}^{\pi} e^{i\theta}\exp\left(-\frac{\theta^2}{2\sigma^2}\right)d\theta = dt \cdot \frac{\exp\left(-\frac{\sigma^2}{2}\right)}{Z_N}\,\text{Re}\left[\text{erf}\left(\frac{\pi}{\sigma\sqrt{2}} + \frac{i\sigma}{\sqrt{2}}\right)\right] .\]
The modified SDE is \[dX_{\kappa} = a(X,t)\,\frac{\exp\left(-\frac{\sigma^2}{2}\right)}{Z_N}\,\text{Re}\left[\text{erf}\left(\frac{\pi}{\sigma\sqrt{2}} + \frac{i\sigma}{\sqrt{2}}\right)\right]dt + b(X,t)dW_t .\]
The Radon-Nikodym derivative for the Normal case is \[\frac{d\nu_N}{d\mu} = \mathcal{E}\left(\int_0^t \int_{-\pi}^{\pi} (e^{i\theta} - 1)\frac{1}{Z_N\sigma\sqrt{2\pi}}\exp\left(-\frac{\theta^2}{2\sigma^2}\right)d\theta dW_s\right)\] For the truncated Normal, \[\theta_s^N = \int_{-\pi}^{\pi} (e^{i\theta} - 1)\frac{1}{Z_N\sigma\sqrt{2\pi}}\exp\left(-\frac{\theta^2}{2\sigma^2}\right)d\theta\] \[ = \frac{1}{Z_N\sigma\sqrt{2\pi}}\left[\int_{-\pi}^{\pi} e^{i\theta}\exp\left(-\frac{\theta^2}{2\sigma^2}\right)d\theta - \int_{-\pi}^{\pi} \exp\left(-\frac{\theta^2}{2\sigma^2}\right)d\theta\right]\] After integration (using the complex error function) \[\theta_s^N = \frac{1}{Z_N}\left[\exp\left(-\frac{\sigma^2}{2}\right)\text{erf}\left(\frac{\pi}{\sigma\sqrt{2}} + \frac{i\sigma}{\sqrt{2}}\right) - \text{erf}\left(\frac{\pi}{\sigma\sqrt{2}}\right)\right]\] Thus, the corresponding stochastic exponential is \[\frac{d\nu_N}{d\mu} = \mathcal{E}\left(\int_0^t \theta_s^N dW_s\right)\] \[ = \exp\left(\int_0^t \theta_s^N dW_s - \frac{1}{2}\int_0^t (\theta_s^N)^2 ds\right)\] Again, verifying Novikov’s condition \[\mathbb{E}\left[\exp\left(\frac{1}{2}\int_0^T (\theta_s^i)^2 ds\right)\right] < \infty, \quad i \in \{L,N\}\] requires evaluating \[\int_0^T |\theta_s^N|^2 ds = T\left|\frac{1}{Z_N}\left[\exp\left(-\frac{\sigma^2}{2}\right)\text{erf}\left(\frac{\pi}{\sigma\sqrt{2}} + \frac{i\sigma}{\sqrt{2}}\right) - \text{erf}\left(\frac{\pi}{\sigma\sqrt{2}}\right)\right]\right|^2 ,\] where \(Z_N\) is the normalization constant: \[Z_N = \int_{-\pi}^{\pi} \frac{1}{\sigma\sqrt{2\pi}}\exp\left(-\frac{\theta^2}{2\sigma^2}\right)d\theta = \text{erf}\left(\frac{\pi}{\sigma\sqrt{2}}\right) .\]
Let’s evaluate the integral for the Normal distribution phase prior. Again, denote \(A = \exp\left(-\frac{\sigma^2}{2}\right)\text{erf}\left(\frac{\pi}{\sigma\sqrt{2}} + \frac{i\sigma}{\sqrt{2}}\right)\) and \(B = \text{erf}\left(\frac{\pi}{\sigma\sqrt{2}}\right)\).
Thus, \(\theta_s^N = \frac{1}{B}(A - B)\) and we can use the complex error function property \[\text{erf}(x + iy) = \text{erf}(x) + \frac{2}{\sqrt{\pi}}\exp(-x^2)\int_0^y \exp(t^2)\cos(2xt)dt + \\ i\frac{2}{\sqrt{\pi}}\exp(-x^2)\int_0^y \exp(t^2)\sin(2xt)dt\]
For \(x = \frac{\pi}{\sigma\sqrt{2}}\) and \(y = \frac{\sigma}{\sqrt{2}}\): \[A = \exp\left(-\frac{\sigma^2}{2}\right)\left[\text{erf}\left(\frac{\pi}{\sigma\sqrt{2}}\right) + \frac{2}{\sqrt{\pi}}\exp\left(-\frac{\pi^2}{2\sigma^2}\right)I_c + \\ i\frac{2}{\sqrt{\pi}}\exp\left(-\frac{\pi^2}{2\sigma^2}\right)I_s\right] .\] where \(I_c = \int_0^{\sigma/\sqrt{2}} \exp(t^2)\cos\left(\frac{2\pi t}{\sigma^2}\right)dt\) and \(I_s = \int_0^{\sigma/\sqrt{2}} \exp(t^2)\sin\left(\frac{2\pi t}{\sigma^2}\right)dt\).
Therefore, \[|\theta_s^N|^2 = \frac{1}{B^2}\left|A - B\right|^2\] \[= \frac{1}{B^2}\left[\left(\exp\left(-\frac{\sigma^2}{2}\right)\frac{2}{\sqrt{\pi}}\exp\left(-\frac{\pi^2}{2\sigma^2}\right)I_c\right)^2 + \left(\exp\left(-\frac{\sigma^2}{2}\right)\frac{2}{\sqrt{\pi}}\exp\left(-\frac{\pi^2}{2\sigma^2}\right)I_s\right)^2\right]\] Integrating over \([0,T]\) yields \[\int_0^T |\theta_s^N|^2 ds = \frac{4T}{\pi B^2}\exp(-\sigma^2)\exp\left(-\frac{\pi^2}{\sigma^2}\right)(I_c^2 + I_s^2) .\] The Novikov’s condition is \[\mathbb{E}\left[\exp\left(\frac{2T}{\pi B^2}\exp(-\sigma^2)\exp\left(-\frac{\pi^2}{\sigma^2}\right)(I_c^2 + I_s^2)\right)\right] < \infty ,\] where the integrals \(I_c\) and \(I_s\) can be evaluated numerically for specific values of \(\sigma\). For example, if \(\sigma = 0.4\), \(I_c \approx 0.4205\) and \(I_s \approx 0.1832\).
The key differences between these two cases lie in the tail behavior of the phase distributions, the form of the drift modifications, and the structure of the Radon-Nikodym derivatives.
We can also search for connections to path integrals and the Feynman-Kac formula, to which the Girsanov theorem is deeply related; investigating whether similar connections exist in the kime framework may yield further insights. For example, assume we can express the kime-modified wavefunction as a path integral \[\psi(t,\theta,\kappa) = \int \exp\left(i\int_0^t \left(L - \frac{i}{2}\frac{\partial^2}{\partial\theta^2}\right)dt'\right)\Phi(\theta)\psi_0(t,\theta)d\theta ,\]
where \(L\) is the Lagrangian and \(\psi_0(t,\theta)\) is the initial wavefunction. Then, the path integral formulation may reveal deeper connections to the Feynman-Kac formula and the Girsanov theorem.
Let’s explore the implications of these results for the measure change corresponding to different kime-phase distribution priors. In the two examples above (Laplace and Normal phase distributions), we have the Radon-Nikodym derivatives \[\frac{d\nu}{d\mu} = \exp\left(\int_0^t \theta_s dW_s - \frac{1}{2}\int_0^t \theta_s^2 ds\right) \] and the corresponding Girsanov kernels \[\underbrace{|\theta_s^L|^2}_{Laplace} = \frac{1}{4(1+b^2)}\left[2(1 - 2e^{-\pi/b}\cos\pi + e^{-2\pi/b}) + \frac{4\pi^2}{b^2} - \frac{4\pi}{b}(1 - e^{-\pi/b}\cos\pi)\right] ,\] \[\underbrace{|\theta_s^N|^2}_{Normal} = \frac{4}{\pi B^2}\exp(-\sigma^2)\exp\left(-\frac{\pi^2}{\sigma^2}\right)(I_c^2 + I_s^2)\] Both measures generate martingales, but with different characteristics. The Laplace case has heavier tails in the phase distribution, leading to potentially larger fluctuations, whereas the Normal prior provides more concentrated measure changes around the mean. In terms of stability, the Laplace prior yields a measure change that becomes unstable for small \(b\) (sharp peak/localization), \(\lim_{b \to 0} |\theta_s^L|^2 = \infty\) and \(\lim_{b \to \infty} |\theta_s^L|^2 = 0\), whereas the Normal prior is more stable across parameter values, \(\lim_{\sigma \to 0} |\theta_s^N|^2 = 0\) and \(\lim_{\sigma \to \infty} |\theta_s^N|^2 = 0\).
Under the new measures, the modified processes \(dW_t^\nu = dW_t - \theta_s dt\) are \[\underbrace{dW_t^{\nu_L}}_{Laplace} = dW_t - \theta_s^L dt ,\] \[\underbrace{dW_t^{\nu_N}}_{Normal} = dW_t - \theta_s^N dt .\] Physical interpretations of the corresponding kime-phase space structures suggest that Laplace models have sharp transitions in phase space, whereas the Normal prior models reflect smooth transitions with Gaussian weights.
For a general observable \(f(X_t)\), \[\mathbb{E}_\nu[f(X_t)] = \mathbb{E}_\mu\left [f(X_t)\frac{d\nu}{d\mu}\right] .\] The differences between alternative kime-phase priors manifest as drift modifications, different volatility scaling, or different tail behaviors in predictions, \(\Delta\phi\), where (Laplace prior) \(\Delta\phi_L = \Delta\phi_{GR}(1 + \mathcal{O}(b))\) and (Normal prior) \(\Delta\phi_N = \Delta\phi_{GR}(1 + \mathcal{O}(\sigma^2))\). There are also computational-efficiency differences associated with the different phase models: the Normal prior involves simpler numerical integration, whereas the Laplace prior requires special handling of the \(|\theta|\) term.
The Girsanov theorem gives an explicit formula for the likelihood ratio between the original Wiener measure \(\mu\) and the new measure \(\nu\). Similarly, the kime correction terms \(\Delta \varphi_\kappa\) calculated for the various phase distributions could be interpreted as likelihood-like factors modifying the standard predictions.
Both the kime representation and the Girsanov theorem deal with continuous-time stochastic processes. The solutions to the kime-dependent Schrödinger equation may have connections to stochastic differential equations and path integral formulations.
When applying the kime representations to model repeated measurement longitudinal data where the same individuals are measured over time, there may be similarities to the continuous path space considered in the Girsanov theorem context. One potential avenue of exploration would be to investigate whether the kime-phase distributions and corresponding correction terms can be cast in a Girsanov-like framework. This could provide a stronger theoretical foundation and potential generalizations of the kime approach.
Additionally, the connections to stochastic processes and path integrals may be fruitful to explore further. The Girsanov theorem has deep links to the Feynman-Kac formula and other results in stochastic analysis, which may offer insights into the kime representation.
library(plotly)      # interactive plotting (plot_ly)
# Simulate two Laplace kime-phase samples and estimate their densities
N <- 10000
xNu <- extraDistr::rlaplace(N, mu = 0, sigma = 0.4)
yNu <- density(xNu, bw=0.2)
xMu <- extraDistr::rlaplace(N, mu = 0, sigma = 0.5)
yMu <- density(xMu, bw=0.2)
# Correct second Laplace density to ensure absolute continuity, nu<<mu
yMu$y <- 2*yMu$y
plot_ly(x = xNu, type = "histogram", name = "Data Histogram") %>%
add_trace(x=yNu$x, y=yNu$y, type="scatter", mode="lines", opacity=0.3,
fill="tozeroy", yaxis="y2", name="nu, Laplace(N,0,0.4) Density") %>%
add_trace(x=yMu$x, y = yMu$y, type="scatter", mode="lines", opacity=0.3,
fill="tozeroy", yaxis="y2", name="mu, Laplace(N,0,0.5) Density") %>%
layout(title="Absolutely Continuous Laplace Distributions, nu<<mu",
yaxis2 = list(overlaying = "y", side = "right"),
xaxis = list(range = list(-pi, pi)),
legend = list(orientation = 'h'))
integrate(approxfun(yNu), -pi, pi) # 1.000199 with absolute error
# 7.6e-05
integrate(approxfun(yMu), -pi, pi) # 1.997212 with absolute error
# 0.00023
This is largely motivated by Juan Maldacena’s lecture “The Meaning of Spacetime” at the Perimeter Institute for Theoretical Physics, which suggests that the geometry of 4D spacetime is related to quantum field theory, entanglement entropy, and causality. Simply put, spacetime is an emergent phenomenon in quantum gravity. In the late 1990s, Maldacena proposed a model of a 5D holographic universe via a spacetime map.
The German mathematician Theodor Kaluza and the Swedish physicist Oskar Klein introduced a fifth dimension to spacetime to facilitate the unification of gravity and electromagnetism as two aspects of the same force. This work launched the search, later pursued in string theory, for a unification of forces in the hidden dimensions of hyper-spacetime.
A different 5D holographic universe model by Lisa Randall, Raman Sundrum, and Juan Maldacena proposed that the fifth dimension could explain why gravity is much weaker than the electromagnetic and strong/weak nuclear forces. The holographic universe model suggests that the observable 4D universe is the floating boundary enclosing an infinitely large negatively curved fifth dimension. The electromagnetic and nuclear forces are stuck inside a (mem)brane 4D boundary, whereas gravity propagates out into the fifth dimension.
The Wesson/Townson 5D STM (space-time-matter) model also uses a 5D representation of reality. Juan Maldacena drew connections (equivalence) between 5D string theory with gravity and ordinary quantum-field theory in 4D without gravity via a holographic projection of the latter, i.e., the 4D observable universe modeled as a holographic projection of a 5D space to a 4D boundary.
We should explore the synergies between the complex-time representation, the Holographic Principle, and anti-de Sitter - conformal field theory (AdS/CFT) correspondence.
Let’s start by rendering the map of the Earth using different projections, including area-preserving and conformal (angle-preserving) mappings. We can try to implement a new conformal-disk geo-projection. See these D3.js geo-projection examples and the currently implemented geo-projections.
library(plotly)
# See plot_ly "geo" docs: https://plotly.com/r/reference/layout/geo/
# The available projections are 'equirectangular', 'mercator', 'orthographic', 'natural earth', 'kavrayskiy7', 'miller', 'robinson', 'eckert4', 'azimuthal equal area', 'azimuthal equidistant', 'conic equal area', 'conic conformal', 'conic equidistant', 'gnomonic', 'stereographic', 'mollweide', 'hammer', 'transverse mercator', 'albers usa', 'winkel tripel', 'aitoff' and 'sinusoidal'.
g <- list(projection = list(type = 'orthographic'), # orthographic
showland = TRUE, landcolor="gray",
coastlinecolor = "red", showocean=TRUE, oceancolor="LightBlue",
showlakes=TRUE, lakecolor="navy", showrivers=TRUE, rivercolor="blue",
showcountries = TRUE, countrycolor = "Black",
lonaxis=list(showgrid=TRUE, griddash="dash"), lataxis=list(showgrid=TRUE))
plot_ly(type = 'scattergeo', mode = 'markers') %>%
layout(geo = g, title="Orthographic Earth Mapping")
# g <- list(projection = list(type = 'azimuthal equal area'), # area preserve
# g <- list(projection = list(type = 'conic conformal'), # conic conformal
g <- list(projection = list(type = 'azimuthal equidistant'), # azimuthal
showland = TRUE, landcolor="gray",
coastlinecolor = "red", showocean=TRUE, oceancolor="LightBlue",
showlakes=TRUE, lakecolor="navy", showrivers=TRUE, rivercolor="blue",
showcountries = TRUE, countrycolor = "Black",
lonaxis=list(showgrid=TRUE, griddash="dash"), lataxis=list(showgrid=TRUE))
plot_ly(type = 'scattergeo', mode = 'markers') %>%
layout(geo = g, title="Azimuthal-Equidistant Earth Mapping")
g <- list(projection = list(type = 'conic conformal'), # conic conformal
showland = TRUE, landcolor="gray",
coastlinecolor = "red", showocean=TRUE, oceancolor="LightBlue",
showlakes=TRUE, lakecolor="navy", showrivers=TRUE, rivercolor="blue",
showcountries = TRUE, countrycolor = "Black",
lonaxis=list(showgrid=TRUE, griddash="dash"), lataxis=list(showgrid=TRUE))
plot_ly(type = 'scattergeo', mode = 'markers') %>%
layout(geo = g, title="Conformal-Conic Earth Mapping")
Problem: Consider implementing a Möbius-transformation-based mapping, which uses a group-theoretic approach to representing invertible (and univariate) complex functions as Möbius transformations \(f:\mathbb{C}\to \mathbb{C}\), \(f(z)=\frac{a\cdot z + b}{c\cdot z + d}\) with \(ad - bc \neq 0\). Note that Möbius transformations can be represented by \(2\times 2\) complex matrices (up to a nonzero scalar factor, i.e., the projective group \(PSL(2,\mathbb{C})\)): \[\begin{pmatrix} a & b \\ c & d \\ \end{pmatrix}\ .\]
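A minimal sketch in R (complex arithmetic), with two arbitrarily chosen example matrices, showing that composing Möbius maps corresponds to multiplying their \(2\times 2\) matrices:
mobius <- function(z, M) (M[1,1]*z + M[1,2]) / (M[2,1]*z + M[2,2])
M1 <- matrix(c(1+0i, 0+1i, 0+0i, 1+0i), 2, 2, byrow = TRUE)   # z -> z + i (translation)
M2 <- matrix(c(0+0i, 1+0i, -1+0i, 0+0i), 2, 2, byrow = TRUE)  # z -> -1/z (inversion)
z  <- 0.3 + 0.4i
all.equal(mobius(mobius(z, M1), M2), mobius(z, M2 %*% M1))    # TRUE: matrix product = map composition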