
This TCIU section of Chapter 6 extends previous work by presenting alternative strategies for transforming repeated longitudinal measurements into 2D parametric manifolds (kimesurfaces).

Several alternative strategies map series of repeated longitudinal measurements into kimesurfaces, i.e., 2D manifolds parameterized by complex time \(\kappa = t e^{i\theta}\). These transformations include analytical, numerical, and algorithmic approaches. The choice of an appropriate transform generally depends on the specific data characteristics, including sampling density, noise levels, and interpretability needs. The resulting kimesurface unifies the repeated time-series into a structured 2D manifold, enabling subsequent AI modeling and analytics that build intuition and insight into the phenomenon’s temporal-phase variability.

There are multiple complementary approaches for mapping repeated time-series (longitudinal data) into a 2D kimesurface, i.e., a manifold parameterized by the kime magnitude and phase \((t,\theta)\), where \(t = |\kappa|\) is the usual notion of time and \(\theta\) is a phase angle indexing the repeated samples.

1 Analytical Transforms of Repeated Time-series into Kimesurfaces

Analytical transforms of time-series to kimesurfaces provide interpretability but require mathematical tractability.

1.1 Laplace and Fourier Transform Mapping

One idea is to mathematically transform the time-series data \(f(t)\) into a complex function \(F(\kappa)\) with \(\kappa = t e^{i\theta}\). For instance, consider the generalized Laplace transform, \(\mathcal{L}:\{f\in L^1(\mathbb{R}^+)\} \longrightarrow \{F: \mathbb{C}\overbrace{\longrightarrow}^{analytic}\mathbb{C}\}\), which is defined over \(\kappa = t e^{i\theta}\in \mathbb{C}\) by

\[F(\kappa) = \mathcal{L}\{f(t)\}(\kappa) = \int_0^\infty f(\tau)\, e^{-\kappa \tau}\, d\tau = \int_0^\infty f(\tau)\, e^{-\tau\, t\, e^{i\theta}}\, d\tau,\]

where \(\theta\) acts as a parameter. This maps time-series data to a complex surface, with \(\theta\) indexing the repeated longitudinal trials. The output of the Laplace transform is a complex analytic function with suitable decay, which has a convergent power series expansion whose coefficients decompose the function \(F\) into its (statistical) moments. Recall that the standard Laplace transform (LT) of \(f(t)\) is \(\mathcal{L}\{f\}(s) = \int_0^\infty f(t)\,e^{-st}dt\), for \(\mathrm{Re}(s) > 0\). The LT generalizes to

\[F(\kappa)\;=\;\int_0^\infty f(t)\, e^{-\kappa\,t}\,dt \quad\text{or}\quad F(\kappa)\;=\;\int_0^\infty f(\tau)\,\kappa^{- \tau}\,d\tau,\] where \(\kappa = t\,e^{i\theta}\) might be treated as the transform variable.
However, this requires ensuring convergence of the integral and an appropriate interpretation of how the factorization \(\kappa = t e^{i\theta}\) enters the transform.
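
As a quick illustration of what this generalized LT means numerically, the minimal sketch below (an assumption-laden toy, not the transform used later in this section) evaluates \(F(\kappa)\) on a complex grid by simple trapezoid-rule quadrature over a finite support; the NuLT() function defined below implements a more accurate piecewise-polynomial quadrature.

# Minimal sketch (trapezoid-rule quadrature on a finite support); the NuLT() function
# defined below uses a more accurate piecewise-polynomial rule
num_laplace <- function(t, f, kappa) {
  # t: sampled time points; f: signal values f(t); kappa: complex transform points
  sapply(kappa, function(z) {
    integrand <- f * exp(-z * t)
    sum(diff(t) * (head(integrand, -1) + tail(integrand, -1)) / 2)  # trapezoid rule
  })
}

# Toy example: evaluate F(kappa) of f(t) = cos(t) on a small complex grid
t_supp  <- seq(0, 2*pi, length.out = 400)
kappa   <- as.vector(outer(seq(0.1, 5, length.out = 20),
                           1i * seq(0, 5, length.out = 20), "+"))
F_kappa <- num_laplace(t_supp, cos(t_supp), kappa)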

Alternatively, we can employ a Fourier series expansion: for each fixed \(t\), expand the observations across \(\theta\) into Fourier coefficients \(c_k(t)\), \(f(t, \theta) = \sum_{k=-\infty}^\infty c_k(t) e^{ik\theta},\) where the time-dependent coefficients \(c_k(t)\) are estimated from the repeated trials.
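
A minimal sketch of this Fourier-series strategy follows, under the assumption that the \(N\) repeated trials can be treated as uniform samples of \(\theta\) over \([0, 2\pi)\); the trial matrix f_trials and the grid sizes are hypothetical illustrations.

# Minimal sketch: estimate c_k(t) by an FFT across the trial (theta) dimension,
# assuming the N repeated trials are uniform samples of theta over [0, 2*pi)
fourier_coeffs_theta <- function(f_trials) {   # rows = trials (theta), columns = time
  N <- nrow(f_trials)
  apply(f_trials, 2, fft) / N   # row k+1 holds c_k(t); rows past N/2 are aliased negative k
}

# Hypothetical repeated-trial matrix: 1Hz signal plus a phase-dependent offset and noise
N <- 16; t_grid <- seq(0, 1, length.out = 50)
theta <- 2*pi*(0:(N-1))/N
f_trials <- outer(theta, t_grid, function(th, tt) sin(2*pi*tt) + 0.3*cos(th)) +
            matrix(rnorm(N*length(t_grid), sd = 0.05), nrow = N)
c_k <- fourier_coeffs_theta(f_trials)   # N x length(t_grid) matrix of coefficients c_k(t)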

In practice, having a repeated dataset \(\{f_n(t)\}\) requires transforming each series, \(F_n(\kappa) = \int_0^T f_n(t)\,\varphi(\kappa,t)\,dt\), using some transform kernel \(\varphi(\kappa,t)\). Then, if the phase \(\theta\sim\Phi(t)\) is the experiment index, we gather all \(\{F_n(\kappa)\}_{n=1}^N\) and interpret them as samples of phase angles in the complex plane. We can also consider a Fourier transform in time, but then replace the real frequency \(\omega\) by the phase angle \(\theta\).

Another alternative is the Hilbert transform approach, which defines an analytic continuation of the real time-series and thereby yields a phase function for each sample. Many of these transforms require a purely real time variable as input. Extending time to kime, \(t\to \kappa = t\, e^{i\theta}\), can complicate standard convergence theorems, so the rigorous extension of these integral transforms to a complex domain needs to be validated.
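
The sketch below computes an FFT-based analytic signal (a discrete Hilbert transform) and extracts an instantaneous phase for a single sampled series; treating such empirical phases as kime-phase estimates is an assumption for illustration, and the helper name analytic_signal() is hypothetical.

# Minimal sketch of an FFT-based analytic signal (discrete Hilbert transform);
# assumes an evenly sampled, finite-length real signal
analytic_signal <- function(x) {
  n <- length(x)
  X <- fft(x)
  h <- rep(0, n)                 # spectral weights that zero out negative frequencies
  h[1] <- 1
  if (n %% 2 == 0) { h[n/2 + 1] <- 1; h[2:(n/2)] <- 2 } else { h[2:((n+1)/2)] <- 2 }
  fft(X * h, inverse = TRUE) / n # complex analytic signal x + i*H[x]
}

x <- cos(2*pi*1*seq(0, 10, length.out = 500))
z <- analytic_signal(x)
inst_phase <- Arg(z)             # instantaneous phase, in (-pi, pi]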

Many examples of Laplace-transformed time-series are shown in the TCIU Chapter 6 Tutorial.

1.1.1 Simulated fMRI Time-Series to Kimesurface Example

First, plot the simulated On (stimulus) and Off (rest) fMRI time-series at a fixed voxel location \((44,42,33)\in\mathbb{R}^3\), along with the averaged (pooled) On and Off signals over all \(8\) repeats in a single run (epoch) of \(160\) time-points.

# Extract just the On kime-series at voxel (44,42,33), each time-series has 8 repeats!
library(TCIU)
library("R.matlab")
library(plotly)

# Data import
# pathname <- file.path("./test_data", "subj1_run1_complex_all.mat")
mat1 = readMat("./test_data/subj1_run1_complex_all.mat")
bigim1 = mat1$bigim[,64:1,,]
bigim1_mod = Mod(bigim1) # Modulus

onFMRISeries <- 
  bigim1_mod[44,42,33, c(1:10, 21:30, 41:50, 61:70, 81:90, 101:110, 121:130, 141:150)]
# the corresponding Off kime-series at voxel (44,42,33) will be the temporal complement
offFMRISeries <- 
  bigim1_mod[44,42,33, -c(1:10, 21:30, 41:50, 61:70, 81:90, 101:110, 121:130, 141:150)]

t_indx <- seq(1, 80, 1) # index of time (not actual time value)
f_On <- onFMRISeries
f_Off <- offFMRISeries

vline <- function(x=0, color = "lightgray") {
  list(type="line", y0=0, y1=1, yref="paper", name="time break points", 
       opacity = 0.5, x0=x, x1=x, line=list(color=color))
}

hline <- function(y = 0, color = "blue") {
  list(type="line", x0=0, x1=1, xref="paper", name="intensity break points",
       opacity = 0.3, y0=y, y1=y, line=list(color=color))
}

plot_ly() %>%
  # On fMRI time-series
  add_trace(x=t_indx, y=f_On, type="scatter", mode="lines", name="On time-series") %>%
  # On fMRI time-series
  add_trace(x=t_indx, y=f_Off, type="scatter", mode="lines", name="Off time-series") %>%
  # Repeated measurement break points
  # add_trace(x=c(10,20,30,40,50,60,70,80), y=f_Off, type="scatter", mode="lines", name="Off time-series") %>%
  layout(title="3D fMRI Simulation: On & Off Time-series at Voxel(44,42,33)",
         shapes = list(
           vline(10),vline(20),vline(30),vline(40),vline(50),vline(60),vline(70),vline(80)),
         legend = list(orientation='h', y=-0.2))
# Compute and plot against each other the Average On and Average Off time-series
seriesAvg <- function(f=0, period=0) {
  tempAvg <- rep(0,period)  # initialize avg signal
  for (i in c(1:period)) {
    counter =0  #  empirically count the number of repeated samples (here it's 8)
    for (j in c(1:length(f))) {
      if (j %% period == i-1) {  # e.g., 159 %% 10 # [1] 9
        tempAvg[i] = tempAvg[i] + f[j]
        counter = counter + 1
      }
    }
    tempAvg[i] = tempAvg[i]/counter
  }
  return(tempAvg)
}

period <- 10
onAvg  <- seriesAvg(f=f_On,  period=period) 
offAvg <- seriesAvg(f=f_Off, period=period) 

plot_ly() %>%
  # Average On fMRI time-series
  add_trace(x=c(1:period), y=onAvg, type="scatter", mode="lines", name="On Average") %>%
  # On fMRI time-series
  add_trace(x=c(1:period), y=offAvg, type="scatter", mode="lines", name="Off Average") %>%
  # Repeated measurement break points
  layout(title=
  "3D fMRI Simulation: On & Off Time-series\n (averaged over all 8 repeats) at Voxel (44,42,33)",
  legend = list(orientation='h', y=-0.2))

Next, we’ll define and apply the Laplace Transform (LT) and its inverse (ILT) and use them to show the analytical kimesurface reconstruction.

# create the LT
NuLT = function(datax, datay, inputz, k = 3, fitwarning = FALSE, 
                mirror = FALSE, range = 2*pi) {
  
  datax = as.numeric(datax)
  datay = as.numeric(datay)

  n = length(datax)
  x1 = n/(n+0.5)*((datax-min(datax))/(max(datax)-min(datax)))*range
  
  if(mirror){
    x1 = c(x1,rev(2*range - x1))/2
    n = 2*n
    datay = c(datay, rev(datay))
    plot(x1, datay)
  }
  
  #generate the coefficients in indefinite integral of t^n*exp(-zt)
  coef = 1;
  coefm = as.matrix(coef)
  for(i in 1:k){
    coefm = cbind(coefm,0)
    coef = c(coef*i,1)
    coefm = rbind(coefm,coef)
  }
  # these coefficients ordered by ^0, ^1, ^2, ... in column format
  
  # compute 1, z, z^2...,z^k
  zz = cbind(1,inputz)
  zt = inputz
  for (i in 2:k){
    zt = zt*inputz
    zz = cbind(zz,zt)
  }
  zd = zt*inputz
  
  # compute 1, x, x^2...,x^k
  tx = x1;
  xm = cbind(1,x1)
  for (i in 2:k){
    tx = tx*x1
    xm = cbind(xm,tx)
  }
  
  
  # sum over intervals
  result = 0*inputz
  ii = 1
  while(ii+k <= n) {
    A = xm[seq(ii,ii+k),c(0:k+1)]
    b = datay[seq(ii,ii+k)]
    # polyfit might be faster when using polynomial basis, while matrix inverse, `solve()`,
    # is the more general approach that works for any function basis
    polyc = as.numeric(solve(A,b))

  
    #ordered by ^0, ^1, ^2, ... in column format
    
    # Enter a new function variable qualityCheck=FALSE
    # check fit quality; this step can be skipped for speed/efficiency
    # if (qualityCheck) { .... }
    
    if (fitwarning){
      xx = seq(A[1,2],A[k+1,2],length.out = 100);
      yy = polyval(rev(polyc),xx)
      if(max(abs(yy-mean(b)))>2*max(abs(b-mean(b)))){
        print(c("Warning: Poor Fit at ",ii,", Largest Deviation is",
                max(abs(yy-mean(b)))))
        print(c("Spline Polynomial is", polyc),3);
        #print(c(polyval(rev(polyc),A[,2]),b))
        plot(xx, yy, main="Polynomial fit", ylab="", type="l", col="blue")
        lines(A[,2],b, col="red")
        legend("topleft",c("fit","data"),fill=c("blue","red"))
        print(" ")
      }
    }
    
    # Use vector/matrix operations to avoid looping, 
    # some of the expressions look weird
    # May need to actually compare the efficiency/speed of
    # vector based vs. standard numeric calculations
    
    m1 = t(t(polyc*coefm)*A[1,])
    m11 = as.numeric(tapply(m1, col(m1)-row(m1), sum))[0:k+1] 
    
    m2 = t(t(polyc*coefm)*A[k+1,])
    m22 = as.numeric(tapply(m2, col(m2)-row(m2), sum))[0:k+1]
    
    intgl = (exp(-inputz*A[1,2])*colSums(t(zz)*m11)-
               exp(-inputz*A[k+1,2])*colSums(t(zz)*m22))/zd
    result = result+intgl
    ii=ii+k
  }
  
  # Computations over the last interval
  if(ii < n){
    nk = n-ii;
    A = xm[seq(ii,ii+nk),c(0:nk+1)]
    b = datay[seq(ii,ii+nk)]
    nc = as.numeric(solve(A,b))
    nc = c(nc,seq(0,0,length.out = k-nk))
    
    A = xm[seq(ii,ii+nk),]
    m1 = t(t(nc*coefm)*A[1,])
    m11 = as.numeric(tapply(m1, col(m1)-row(m1), sum))[0:k+1]
    
    m2 = t(t(nc*coefm)*A[nk+1,])
    m22 = as.numeric(tapply(m2, col(m2)-row(m2), sum))[0:k+1]
    
    # cc = colSums(coefm*polyc)
    intgl = (exp(-inputz*A[1,2])*colSums(t(zz)*m11)-
               exp(-inputz*A[nk+1,2])*colSums(t(zz)*m22))/zd
    result = result+intgl
  }
  #offset = 0.05*pi
  #result = result+datay[n]*(exp(-2*pi*inputz)-exp(-(2*pi+offset)*inputz))/inputz
  return(result)
}

Let’s test the discrete LT using the trigonometric test functions \(\sin(x)\) and \(\cos(x)\); the chunk below uses \(\cos(x)\).

datax = seq(1,200)

n = length(datax)

x1 = n/(n+0.5)*((datax-min(datax))/(max(datax)-min(datax)))*(2*pi)

datay = cos(x1)  # test function!!!

Lout = 61
range_limit <- 20
x2 <- seq(from = 0.1, to = range_limit, length.out = Lout)
y2 <- seq(from = 0, to = range_limit, length.out = Lout)

x2_g = x2 %o% seq(1,1, length.out = Lout)    
y2_g =  seq(1,1, length.out = Lout)%o%y2 
z2_grid = x2 %o% y2

argz = as.vector(x2_g + 1i*y2_g)
LTz = NuLT(x1, datay, argz)
rec1 = matrix(LTz,nrow = Lout)

recm = abs(rec1)
recr = Re(rec1)
reci = Im(rec1)
ph = reci/recm
surf_color <- atan2(reci,recr)
# colorscale = cbind(seq(-pi, pi, by=1/10, rainbow(11)))
p <-  plot_ly(hoverinfo="none", showscale = TRUE) %>%
    add_trace(x = x2_g, y = y2_g, z = surf_color, 
    # F-magnitude or Re(F),   # z = Im(z2_grid),  # Real or Imaginary part of F(t)
          surfacecolor=log(recm), # colorscale=colorscale, #Phase-based color
          type = 'surface', opacity=1, visible=T) %>%
  layout(title = 
    "Truncated Laplace transform of cos(x), Color = Mag(Z)", 
         showlegend = TRUE,
         scene = list(aspectmode = "manual", aspectratio = list(x=1, y=1, z=1.0), 
              xaxis=list(title= "Real=Re(z)"), yaxis=list(title = "Imaginary=Im(z)"),
              zaxis=list(title= "Z = tan-1(Im(L[f](z))/Re(L[f](z)))")) ) # 1:1:1 aspect ratio
p

Compare the above numerical calculation of the Laplace Transform (LT) of \(f(x)=\cos(x)\) against the below kimesurface of the exact analytical (ground truth integration) of the LT of \(f(x)=\cos(x)\), which is

\[F(z)\equiv \mathcal{L}(f)(z)\equiv \int_0^{2\pi}f(t)e^{-zt}dt = \frac{z-e^{-2\pi z}z}{1+z^2}\ .\]

z_val <- (x2_g+y2_g*1i)
ground_truth <- (z_val - exp(-2*pi*z_val)*z_val)/(1+z_val^2)
surf_ground_truth <- atan2(Im(ground_truth),Re(ground_truth))
plot_ly(hoverinfo="none", showscale = TRUE) %>%
  add_trace(x = x2_g, y = y2_g, z = surf_ground_truth,
            surfacecolor = log(abs(ground_truth)),
            type = 'surface', opacity=1, visible=T) %>%
  layout(title = "Ground Truth for the truncated Laplace transform of cos(x), Color = Mag(Z)",
         scene = list(aspectmode = "manual", aspectratio = list(x=1, y=1, z=1.0),
              xaxis=list(title= "Real=Re(z)"), yaxis=list(title = "Imaginary=Im(z)"),
              zaxis=list(title= "Z = tan-1(Im(L[f](z))/Re(L[f](z)))")) )

The visual agreement between the exact and numerical calculations of the kimesurface \(F(z)\) corresponding to the signal \(f(x)=\cos(x)\) can also be checked by computing the average surface difference as shown below.

print(paste0("The average difference between the ground truth and the numerical integration is ",
             sum(abs(ground_truth -rec1))/length(ground_truth)))
## [1] "The average difference between the ground truth and the numerical integration is 0.000156495820765476"

Next, apply the discrete LT to the average-On (onAvg) and average-Off (offAvg) signals, interpolating from their original size, \(n=10\), to a new supersampled size \(n=200\), and transforming the time support from \(t\in\{1,\dots,10\}\), in increments of \(\Delta t=1\), to \(t' \in[0,2\pi)\), in increments of \(\Delta t'=\frac{n}{n+0.5}\times \frac{2\pi}{200-1}.\)

This numerical longitudinal data preprocessing is done purely to establish some homologies in the structure of the LT domain, i.e., the input space signals (time-series), and the LT Range, i.e., the output space manifold (kimesurface). See the DSPA2 signal interpolation appendix to find out how to regularize either regularly (equally-spaced) sampled longitudinal data or irregularly (unequally-spaced) sampled longitudinal data.

datax = seq(1,200)
n = length(datax)
x_old <- c(1:10)
x1 = n/(n+0.5)*((datax-min(datax))/(max(datax)-min(datax)))*(2*pi)
# longitudinal data series are: onAvg & offAvg
xnew <- x1
spl_onAvg <- spline(x=x_old, y=onAvg, xmin=min(x_old), xmax=max(x_old), n=200)
spl_offAvg <- spline(x=x_old, y=offAvg, xmin=min(x_old), xmax=max(x_old), n=200)

plot_ly(type="scatter", mode="markers") %>%
  add_trace(x=x_old, y=onAvg, name="Raw ON-Avg", 
            marker=list(size=20, color="lightblue", sizemode="area")) %>%
  add_trace(x=x_old, y=offAvg, name="Raw OFF-Avg",
            marker=list(size=20, color="orange", sizemode="area")) %>%
  add_trace(x=spl_onAvg$x, y=spl_onAvg$y, type="scatter", mode="markers+lines",
            name="Spline ON-Avg Model", marker=list(size=8, color="blue"),
            line = list(color = "blue", width = 4)) %>%
  add_trace(x=spl_offAvg$x, y=spl_offAvg$y, type="scatter", mode="markers+lines",
            name="Spline OFF-Avg Model", marker=list(size=8, color="red"),
            line = list(color = "red", width = 4)) %>%
  layout(title="Spline Modeling of 1D ON and OFF fMRI data\n (averaged over repeated samples)",
         legend = list(orientation = 'h', y=-0.2)) %>%
  hide_colorbar()
# datay = sin(x1)  # test function!!!

# Define a time-series to kimesurface plotting function
plotKimeSurface <- function(datay = sin(x1),index="", 
           title="Laplace transform of the Time-Series, Color = phase(Z)") {
  Lout = 61
  range_limit <- 20
  x2 <- seq(from = 0.1, to = range_limit, length.out = Lout)
  y2 <- seq(from = 0, to = range_limit, length.out = Lout)
  
  x2_g = x2 %o% seq(1,1, length.out = Lout)    
  y2_g =  seq(1,1, length.out = Lout)%o%y2 
  # z2_grid = x2 %o% y2
  
  argz = as.vector(x2_g + 1i*y2_g)
  # print(argz)
  LTz = NuLT(x1, datay, argz)
  rec1 = matrix(LTz,nrow = Lout)
  
  recm = abs(rec1)
  recr = Re(rec1)
  reci = Im(rec1)
  ph = reci/recm
  surf_color <- atan2(reci,recr)
  # colorscale = cbind(seq(-pi, pi, by=1/10, rainbow(11)))
  p <-  plot_ly(hoverinfo="none", showscale = TRUE,
           scene = paste("scene",as.character(index),sep = "")) %>%
      add_trace(x = x2_g, y = y2_g, z = surf_color, 
      # F-magnitude or Re(F),   # z=Im(z2_grid),  # Real or Imaginary part of F(t)
            surfacecolor=log(recm), # colorscale=colorscale, #Phase-based color
            type = 'surface', opacity=1, visible=T) %>%
      layout(title = title, showlegend = TRUE,scene=list(aspectmode="manual", aspectratio=list(x=1, y=1, z=1.0), 
                xaxis=list(title= "Real"), yaxis=list(title = "Imaginary"),
                zaxis=list(title= "Z = phase")) ) # 1:1:1 aspect ratio)
  
  return(list("plot"=p,"phase"=surf_color,"magnitude"=recm,"x_grid"=x2_g,"y_grid"=y2_g))
}
#paste("scene",as.character(index),sep = "")
# list(aspectmode="manual", aspectratio=list(x=1, y=1, z=1.0), 
#                 xaxis=list(title= "Real"), yaxis=list(title = "Imaginary"),
#                 zaxis=list(title= "Z = log(mag(Lf(x+iy)))")) ) # 1:1:1 aspect ratio
pOn  <- plotKimeSurface(datay=spl_onAvg$y, "",
           title="ON fMRI (44,42,33): Laplace transform of Avg Time-Series, Color = phase(Z)")
pOff <- plotKimeSurface(datay=spl_offAvg$y, 2, 
           title="OFF fMRI (44,42,33): Laplace transform of Avg Time-Series, Color = phase(Z)")

Finally, we generate a synchronized 3D scene plot showing the ON and OFF fMRI kimesurfaces along with their difference (ON - OFF) kimesurface.

library(plotly)
library(htmlwidgets)

# Generate some example 3D data
# ... (Create your pOn and pOff plots here)
diff_plot <- plot_ly(hoverinfo="none", showscale = TRUE,
           scene = "scene3") %>%
      add_trace(x = pOn$x_grid, y = pOn$y_grid, z = pOn$phase - pOff$phase, 
      # F-magnitude or Re(F),   # z=Im(z2_grid),  # Real or Imaginary part of F(t)
            surfacecolor=pOn$magnitude-pOff$magnitude, # colorscale=colorscale, #Phase-based color
            type = 'surface', opacity=1, visible=T) %>%
      layout(title = "ON-OFF Kimesurface Difference", showlegend = TRUE, scene3=list(aspectmode="manual", aspectratio=list(x=1, y=1, z=1.0), 
                xaxis=list(title= "Real"), yaxis=list(title = "Imaginary"),
                zaxis=list(title= "Z = Phase")) )
# Combine them into a subplot
main_plot <- subplot(pOn$plot, pOff$plot,diff_plot) %>%
  layout(
    title = "kimesurfaces (Left: On; Right: Off; Bottom: Kimesurface Difference (On vs. Off))",
    scene  = list(domain = list(x = c(0, 0.5), y = c(0.5, 1)), aspectmode = "cube"),
    scene2 = list(domain = list(x = c(0.5, 1), y = c(0.5, 1)), aspectmode = "cube"),
    scene3 = list(domain = list(x = c(0.25, 0.75), y = c(0, 0.5)), aspectmode = "cube")
  )
#https://stackoverflow.com/questions/77757102/possible-to-synchronize-3d-scatter-plotly-plots-in-r
main_plot %>%
  htmlwidgets::onRender(
    "function(x, el) {
      x.on('plotly_relayout', function(d) {
        const camera = Object.keys(d).filter((key) => /\\.camera$/.test(key));
        if (camera.length) {
          const scenes = Object.keys(x.layout).filter((key) => /^scene\\d*/.test(key));
          const new_layout = {};
          scenes.forEach(key => {
            new_layout[key] = {...x.layout[key], camera: {...d[camera]}};
          });
          Plotly.relayout(x, new_layout);
        }
      });
    }")

1.2 Wavelet Transform

For the continuous wavelet transform (CWT) in our kime framework, given a mother wavelet \(\psi(t)\), we can define a complex kime-wavelet by \(\psi_{\kappa}(t) = \psi(t)e^{i\theta}\), where \(\kappa = te^{i\theta}\) is kime. The CWT of a time-series signal \(x(t)\) with respect to our kime-wavelet is \[W_x(a,\kappa) = \frac{1}{\sqrt{|a|}} \int_{-\infty}^{\infty} x(t)\psi^*\left(\frac{t-\tau}{a}\right)e^{-i\theta}dt ,\] where \(a\) is the scale parameter, \(\tau\) is the translation parameter, \(\psi^*\) is the complex conjugate of the mother wavelet, and \(\theta\) is the kime-phase, so that \(\kappa = \tau e^{i\theta}\) combines the translation and phase indices.

The R example below uses synthetic sample data (simulation) with multiple frequency components, \(f(t)=\sin(2\pi t) + 0.5 \sin(2\pi\, 3 t) + N(\mu=0, \sigma= 0.2)\), to demonstrate the use of the Morlet mother wavelet, \(e^{-t^2/2} \cdot e^{i \omega_o t}\), with central frequency \(\omega_o\), to implement the kime-wavelet transform. It computes the CWT across different scales and phases and creates a 3D visualization using plotly. In this implementation, the scale-phase relationship is represented by the wavelet transform capturing multi-scale features while the phase \(\theta\) modulates the complex oscillation; jointly they form a kimesurface in the \((t, \theta, scale)\) space. The analysis shows time-frequency localization through wavelets, phase-dependent feature extraction, and scale-dependent resolution.

This example animates the kimesurface in 3D across scales, encoding

  1. Time \(t\) as radial distance in polar coordinates \(x = t\cos(\theta), y = t\sin(\theta)\),
  2. Phase \(\theta\) as angular direction (sampled from a Laplace distribution for directional bias),
  3. Scale \(a\) as animation frames (controlled by a slider),
  4. Wavelet Magnitude \(|W_x(a, \kappa)|\) as surface height.

Each frame of the animation represents a scale-specific kimesurface, illustrating the frequency localization. At smaller scales, e.g., \(a=1\), high-frequency components (the 3Hz component of the signal) dominate and appear as sharp peaks. The phase modulation, here driven by a Laplace prior distribution for \(\theta\), creates asymmetric angular patterns, highlighting phase-dependent signal features. As the scale slider moves toward larger scales, e.g., \(a=32\), the low-frequency component (1Hz) emerges as smoother, broader peaks.

In this visualization, multi-scale analysis identifies the scales that capture the dominant signal components, phase-sensitive detection shows how phase randomization via \(\Phi(t)\) affects feature prominence, and noise discrimination distinguishes true signal components (persistent across scales) from transient noise. While the first 3D scene shows the kimesurface color-coded by its intensity, the second 3D plot utilizes enhanced directional phases to highlight the phase effect by color-coding the surface with phase angles.

# Check the animation frames as 2D heatmaps # image(all_surfaces[[1]]); image(all_surfaces[[3]])

library(plotly)

# 1. Morlet wavelet with correct complex handling
morlet_wavelet <- function(t, omega0 = 6) { exp(-t^2/2) * exp(1i * omega0 * t) }

# 2. Robust kimesurface computation
compute_kimesurface <- function(signal, t_time, scales, theta_values) {
  N <- length(signal)
  dt <- diff(t_time)[1]
  surfaces <- list()
  
  for (scale_idx in seq_along(scales)) {
    scale <- scales[scale_idx]
    z_grid <- matrix(0, nrow = length(theta_values), ncol = N)
    
    t_wavelet <- seq(-4*scale, 4*scale, by = dt)
    wavelet_base <- morlet_wavelet(t_wavelet/scale)
    
    for (theta_idx in seq_along(theta_values)) {
      theta <- theta_values[theta_idx]
      wavelet <- Conj(wavelet_base * exp(1i * theta)) / sqrt(scale)
      
      conv_full <- convolve(signal, rev(wavelet), conj = FALSE, type = "open")
      valid_conv <- conv_full[seq_len(N) + max(0, (length(conv_full)-N)/2)]
      
      z_grid[theta_idx, ] <- abs(valid_conv[1:N])
    }
    surfaces[[scale_idx]] <- z_grid
  }
  return(surfaces)
}

# 3. Generate data
N <- 100
t_time <- seq(0, 10, length.out = N)
signal <- sin(2*pi*1*t_time) + 0.5*sin(2*pi*3*t_time) + rnorm(N, sd = 0.2)
scales <- 2^seq(0, 5, length.out = 10)
n_grid <- 30  # x, and y axes grid size (30 * 30)
# theta_values <- seq(-pi, pi, length.out = n_grid)   # Uniform phase prior
theta_values <- ExtDist::rLaplace(n=n_grid, mu = 0, b = 1) # Laplace prior

######### do we need to order/sort the phases for smooth wavelet kime-surfaces???
# theta_values <- sort(theta_values)
###################################

# 4. Compute surfaces
all_surfaces <- compute_kimesurface(signal, t_time, scales, theta_values)

# 5. Create coordinate grids
x_grid <- outer(t_time, cos(theta_values), "*")
y_grid <- outer(t_time, sin(theta_values), "*")

# 6. Build frames with explicit 0-based indexing
frames <- lapply(seq_along(scales), function(i) {
  list(name = as.character(i-1),  # 0-based index for Plotly
    data = list(list(type = "surface",
      x = x_grid, y = y_grid, z = all_surfaces[[i]])),
    traces = list(0))
})

# 7. Initialize plot
fig <- plot_ly() %>%
  add_surface(x = x_grid, y = y_grid, z = all_surfaces[[1]], colorscale = "Viridis",
    showscale = TRUE)

# 8. Assign frames directly
fig$x$frames <- frames

# 9. Configure slider with explicit 0-based frame references
fig <- fig %>% layout(
  scene = list(xaxis = list(title = "X (t·cosθ)"),
    yaxis = list(title = "Y (t·sinθ)"), zaxis = list(title = "Magnitude")),
  sliders = list(list(active = 0, currentvalue = list(prefix = "Scale: "),
    steps = lapply(seq_along(scales), function(i) {
      list(args = list(list(i-1),  # 0-based frame index
          list(mode = "immediate", frame = list(duration = 0, redraw = TRUE),
            transition = list(duration = 0))),
        label = round(scales[i], 2), method = "animate")
    })
  )))

fig
# 10. Directional Phase Highlighting to enhance phase interpretation by 
#     color-coding the surface with phase angles
fig1 <- fig %>% add_surface(
  x = x_grid, y = y_grid, z = all_surfaces[[1]],
  surfacecolor = matrix(theta_values, nrow = n_grid, ncol = N),  # Phase-based color
  colorscale = "HSV", cmin = -pi, cmax = pi,
  colorbar = list(title = "Phase (θ)", len = 0.5)
)
fig1

In the above simulation, we actually used only a single (simulated) time-series, \(f(t) = \sin(2\pi t) + 0.5\sin(2\pi\, 3t) + N(\mu=0, \sigma=0.2)\), to show the wavelet transform surfaces across scales. In practice, we have a number of repeatedly measured time-series, \(\{f_i(t)\}_i\), and need to reconstruct a single kimesurface corresponding to the entire collection of time-series. In essence, the sequence of 1D time-series is mapped onto a 2D parametric manifold indexed by the ordered time (\(t\)) and the stochastic phase (\(\theta\)), randomly drawn from a prior phase distribution.

Many additional examples of wavelet-constructed kimesurfaces are shown in the TCIU Chapter 6 Tutorial.

1.3 Kimesurface as an Ensemble Manifold

To represent multiple repeated time-series as a single kimesurface parameterized by the time \(t\) and the stochastic phase \(\theta \sim \Phi(t)\), we need to integrate all signals into a unified 2D manifold using the Laplace phase prior and an ensemble-based integral transform.

We want the kimesurface to represent the aggregated behavior of all repeated time-series, where time \(t\) is the radial coordinate in the polar mapping \(x = t\cos\theta\), \(y = t \sin\theta\), the phase \(\theta\), drawn from a Laplace distribution, is the angular coordinate, and the kimesurface magnitude \(z\) encodes the expected signal value at \((t, \theta)\), derived from the ensemble.

library(KernSmooth)
library(plotly)

# 1. Compute Kimesurface function remains the same
compute_kimesurface <- function(signal, t_time, scales, theta_values) {
  N <- length(signal)
  dt <- diff(t_time)[1]
  n_theta <- length(theta_values)
  n_scales <- length(scales)
  
  cwt_array <- array(0, dim = c(n_scales, n_theta, N))
  
  for (scale_idx in seq_along(scales)) {
    scale <- scales[scale_idx]
    t_wavelet <- seq(-4*scale, 4*scale, by = dt)
    wavelet_base <- Conj(morlet_wavelet(t_wavelet/scale)) / sqrt(scale)
    
    for (theta_idx in seq_along(theta_values)) {
      theta <- theta_values[theta_idx]
      wavelet <- wavelet_base * exp(1i * theta)
      
      conv_full <- convolve(signal, rev(wavelet), conj = FALSE, type = "open")
      valid_conv <- conv_full[seq_len(N) + max(0, (length(conv_full)-N)/2)]
      cwt_array[scale_idx, theta_idx, ] <- abs(valid_conv[1:N])
    }
  }
  return(cwt_array)
}

# Generate the data
set.seed(123)  # For reproducibility
N_series <- 100
N_time <- 100
t_time <- seq(0, 10, length.out = N_time)
theta_values <- ExtDist::rLaplace(n = N_series, mu = 0, b = 1)

# Generate signals
signals <- lapply(theta_values, function(theta) {
  sin(2 * pi * 1 * (t_time + theta)) +
    0.5 * sin(2 * pi * 3 * t_time) +
    rnorm(N_time, sd = 0.2)
})

# Create phase grid
theta_grid <- seq(-pi, pi, length.out = 50)
bw <- 0.2

# Compute smoothed kimesurface with progress tracking
kimesurface <- matrix(0, nrow = N_time, ncol = length(theta_grid))
for (t_idx in 1:N_time) {
  t_vals <- sapply(signals, `[`, t_idx)
  # Use try-catch to handle potential locpoly errors
  result <- try({
    locpoly(
      x = theta_values,
      y = t_vals,
      bandwidth = bw,
      gridsize = length(theta_grid),
      range.x = c(-pi, pi)
    )
  }, silent = TRUE)
  
  if (!inherits(result, "try-error")) {
    kimesurface[t_idx, ] <- result$y
  }
}

# Transform to polar coordinates
x_grid <- outer(t_time, cos(theta_grid), "*")
y_grid <- outer(t_time, sin(theta_grid), "*")
z_grid <- kimesurface

# Ensure z_grid has finite values
z_grid[!is.finite(z_grid)] <- NA
z_range <- range(z_grid, na.rm = TRUE)

# Create the visualization with explicit colorscale range
p <- plot_ly(
  x = x_grid,
  y = y_grid,
  z = z_grid,
  type = "surface",
  colorscale = "Viridis",
  cmin = z_range[1],
  cmax = z_range[2]
) %>%
  layout(
    scene = list(
      xaxis = list(title = "X (t·cosθ)"),
      yaxis = list(title = "Y (t·sinθ)"),
      zaxis = list(title = "Signal Magnitude",
                  range = z_range),
      camera = list(
        eye = list(x = 1.5, y = 1.5, z = 1.5)
      )
    ),
    title = "Kimesurface Visualization"
  )
p

This alternative (ensemble) approach is compared to the earlier strategy in this table.

| Original Approach | Improved Ensemble Strategy |
|---|---|
| Single time-series analysis | Ensemble integration across repeated measurements |
| Phase as wavelet parameter | Phase as stochastic coordinate (Laplace prior) |
| Static scale animation | Static manifold showing phase-time relationships |

The ensemble-constructed kimesurface shows how the phase shifts (from the Laplace noise) affect the \(1Hz\) component’s alignment, while the \(3Hz\) component remains stable. Noise suppression is achieved via kernel smoothing, which averages out Gaussian noise and reveals more stable (true?) signal structure. The kime manifold topology exhibits radial ridges corresponding to time-localized events, while the angular variations reflect the underlying phase uncertainty.

This ensemble kimesurface representation may support subsequent time-frequency analysis using wavelets to compute the scale-phase energy density. The intensities of these ensemble kimesurfaces, \(z(t, \theta)\), represent multi-scale energy combining wavelet and phase analysis.

# 1. Compute CWT for each signal
cwt_list <- lapply(signals, function(s) {
  scales <- 2^seq(0, 5, length.out = 20)
  cwt <- compute_kimesurface(s, t_time, scales, theta_grid)
  colMeans(abs(cwt))  # Average over scales
})

# Aggregate energy density
energy_surface <- apply(simplify2array(cwt_list), 1:2, mean)

# 2: Ensemble Averaging - Compute the kimesurface across all signals and scales:
# Parameters
N_series <- 100  # Number of repeated trials
scales <- 2^seq(0, 5, length.out = 20)
theta_grid <- ExtDist::rLaplace(n = 30, mu = 0, b = 1)  # Laplace phases

# Precompute CWT for all signals
cwt_list <- lapply(1:N_series, function(i) {
  signal <- sin(2*pi*1*t_time) + 0.5*sin(2*pi*3*t_time) + rnorm(N_time, sd = 0.2)
  compute_kimesurface(signal, t_time, scales, theta_grid)
})

# Average across signals and scales
cwt_ensemble <- apply(simplify2array(cwt_list), 1:3, mean)  # [scales, theta, time]
kimesurface <- apply(cwt_ensemble, c(2,3), mean)  # Average over scales: [theta, time]

# 3: Visualization
# Polar coordinates
x_grid <- outer(t_time, cos(theta_grid), "*")  # [time × theta]
y_grid <- outer(t_time, sin(theta_grid), "*")

# Plot
plot_ly(z = kimesurface, x = x_grid, y = y_grid, type = "surface") %>%
  layout(
    scene = list(
      xaxis = list(title = "X (t·cosθ)"),
      yaxis = list(title = "Y (t·sinθ)"),
      zaxis = list(title = "Ensemble Magnitude")
    )
  )

By treating \(\theta\) as a stochastic coordinate (e.g., Laplace-distributed) and aggregating over repeated measurements, the ensemble kimesurfaces may be useful for visualizing phase-sensitive signal components, for characterizing noise resilience across trials, and for identifying time-localized phenomena in longitudinal studies.

The 3D ensemble kimesurface array structure handles the (scales, \(\theta\), time) dimensions. The ensemble averaging aggregates across repeated trials and scales and the phase randomization values (\(\theta\sim Laplace\) model) represent the stochastic phase variability.

The surface geometry reflects the radial direction \((X/Y)\) encoding time (\(t\)) and phase (\(\theta\)). The height (\(Z\)) is the expected wavelet magnitude across trials, and the phase sensitivity is due to the Laplace-distributed phases, which create asymmetric “streaks” showing likely phase alignments. Stable features (e.g., the \(3Hz\) component) remain consistent across \(\theta\). The multi-trial robustness of the ensemble kimesurface construction is due to cancellation of Gaussian noise during the ensemble averaging, which may suggest that the true signal components persist across trials.

2 Stochastic Gaussian Process (SGP) Kimesurface Modeling

We can model the kimesurface as a stochastic (Gaussian) process (SGP) indexed by \((t, \theta)\)

\[f(t, \theta) \sim \mathcal{GP}\left(\mu(t), K\left((t, \theta), (t', \theta')\right)\right),\] where the covariance kernel \(K\) combines time and phase dependencies (e.g., product of a squared-exponential kernel in \(t\) and a periodic kernel in \(\theta\)).
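
Before fitting anything, it may help to see what such a GP prior looks like. The minimal sketch below draws one realization of \(f(t,\theta)\) on a coarse \((t,\theta)\) grid, using a squared-exponential kernel in \(t\) and a periodic kernel in \(\theta\); the grid sizes and length-scales (\(l_t=5\), \(l_\theta=1\)) are hypothetical placeholders.

library(MASS)  # for mvrnorm

# Minimal sketch: one draw from the GP prior f(t, theta) on a coarse (t, theta) grid,
# with a squared-exponential kernel in t and a periodic kernel in theta
set.seed(1)
t_grid     <- seq(0, 30, length.out = 25)
theta_grid <- seq(-pi, pi, length.out = 15)
grid <- expand.grid(t = t_grid, theta = theta_grid)

k_time  <- exp(-outer(grid$t, grid$t, "-")^2 / (2 * 5^2))                    # l_t = 5
k_phase <- exp(-2 * sin(abs(outer(grid$theta, grid$theta, "-")) / 2)^2 / 1)  # l_theta = 1
K <- k_time * k_phase + diag(1e-6, nrow(grid))   # jitter for numerical stability

f_sample  <- MASS::mvrnorm(1, mu = rep(0, nrow(grid)), Sigma = K)
f_surface <- matrix(f_sample, nrow = length(t_grid))   # [t x theta] kimesurface draw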

Let’s demonstrate an example of a stochastic (Gaussian) process (GP) model for aggregating a collection of repeated fMRI time-courses into kime-surfaces. The input consists of multiple repetitions of simulated fMRI data representing repeated samples from multiple participants (e.g., \(n=15\)) undergoing the same fMRI/BOLD event-related block design of ON vs. OFF finger-tapping stimulus, with multiple runs per participant (e.g., \(m=20\)). For simplicity, we only focus on a single brain location, ignoring the brain spatial localization. The output is a pair of kime-surfaces, one corresponding to the aggregated ON (stimulus) condition from the stochastic (Gaussian) process model and the other to the OFF (rest) condition.

The fMRI simulation models the fMRI BOLD responses using canonical HRF (hemodynamic response function). It simulates block design (ON/OFF finger-tapping stimulus/rest conditions) with \(30s\) blocks and includes participant-specific noise and drift. For a kime pair, \(\kappa=(t,\theta), \kappa'=(t',\theta')\) the kime-kernel

\[K((t,\theta),(t',\theta')) = \sigma_t^2 e^{-\frac{(t-t')^2}{2l_t^2}} \times \sigma_\theta^2 e^{-\frac{2\sin^2(|\theta-\theta'|/2)}{l_\theta^2}}\] includes a squared-exponential (time) capturing temporal correlations in the BOLD responses and a periodic (phase) modeling phase relationships using a Laplace-distributed phases.

The SGP model kimesurface construction aggregates data across participants and runs, uses Cholesky decomposition for efficient GP computation, and maintains the polar coordinate parameterization \((t,\theta)\). Visualizing the ON condition kimesurface shows stimulus-locked responses with the characteristic HRF shape, while the OFF condition kimesurface displays baseline fluctuations and noise structure. The radial patterns indicate time-locked responses and the angular variations show phase-dependent modulation.

Compared to traditional fMRI analyses, a phase-sensitive spacekime analysis incorporates phase randomization through Laplace prior. Multi-subject integration of the kimesurface representation naturally combines data from multiple participants and runs, i.e., repeated experimental evidence. For uncertainty quantification, the full covariance structure captures the complete response variability. Finally, non-stationary GP modeling handles drifts and noise correlations explicitly.

2.1 Basic GP Model

A first-generation, oversimplified but computationally efficient SGP model is demonstrated below.

library(MASS)
library(plotly)
library(htmlwidgets)
library(kernlab)
library(parallel)
library(fields)
library(ExtDist)

# ----------------------------
# 1. Simulate fMRI Data
# ----------------------------
simulate_fmri_data <- function(n_participants=15, n_runs=20, tr=2, duration=300) {
  # Hemodynamic Response Function (HRF)
  hrf <- function(t) {
    a1 <- 6; a2 <- 12; b1 <- 0.9; b2 <- 0.9; c <- 0.35
    t^(a1-1)*b1^a1*exp(-b1*t)/gamma(a1) - c*(t^(a2-1)*b2^a2*exp(-b2*t)/gamma(a2))
  }
  
  # Experimental design parameters
  n_timepoints <- duration/tr
  block_length <- 30  # 30s blocks
  n_blocks <- duration/block_length
  
  # Storage arrays
  data <- list(
    ON = array(0, dim=c(n_participants, n_runs, n_timepoints)),
    OFF = array(0, dim=c(n_participants, n_runs, n_timepoints))
  )
  
  # Generate data for each participant and run
  set.seed(123)
  for(p in 1:n_participants) {
    for(r in 1:n_runs) {
      # Generate block design
      design <- rep(c(0,1), each=n_blocks/2)
      hrf_response <- rep(hrf(1:(block_length/tr)), n_blocks) * design
      
      # Add participant-specific noise and drift
      noise <- cumsum(rnorm(n_timepoints, sd=0.1))
      drift <- 0.01*seq(n_timepoints)
      
      # Store responses
      data$ON[p,r,] <- hrf_response + noise + drift + rnorm(n_timepoints, sd=0.2)
      data$OFF[p,r,] <- noise + drift + rnorm(n_timepoints, sd=0.2)
    }
  }
  return(data)
}

# ----------------------------
# 2. Gaussian Process Kernel
# ----------------------------
kime_kernel <- function(t1, t2, theta1, theta2, sigma_t=1, lt=10, sigma_theta=1, ltheta=pi/2) {
  # Squared-exponential kernel for time
  k_time <- sigma_t^2 * exp(-(t1 - t2)^2/(2*lt^2))
  
  # Periodic kernel for phase (θ)
  k_phase <- sigma_theta^2 * exp(-2*sin(abs(theta1 - theta2)/2)^2/(ltheta^2))
  
  return(k_time * k_phase)
}

# ----------------------------
# 3. Construct Kimesurfaces
# ----------------------------
# construct_kimesurface <- function(data, condition, theta_samples=30) {
#   # Aggregate data across participants and runs
#   aggregated <- apply(data[[condition]], 3, mean)
#   n_time <- length(aggregated)
#   
#   # Phase distribution (Laplace prior)
#   theta <- ExtDist::rLaplace(theta_samples, mu=0, b=1)
#   
#   # Create grid
#   t_grid <- seq(0, 300, length=n_time)
#   grid <- expand.grid(t=t_grid, theta=theta)
#   
#   # GP Prior
#   K <- matrix(0, nrow=nrow(grid), ncol=nrow(grid))
#   for(i in 1:nrow(grid)) {
#     for(j in 1:nrow(grid)) {
#       K[i,j] <- kime_kernel(grid$t[i], grid$t[j], grid$theta[i], grid$theta[j])
#     }
#   }
#   
#   # Add observation noise
#   K_obs <- K + diag(0.1, nrow(K))
#   
#   # GP Posterior
#   L <- chol(K_obs)
#   alpha <- solve(t(L), solve(L, aggregated))
#   
#   # Posterior mean
#   post_mean <- K %*% alpha
#   
#   return(list(
#     grid = grid,
#     mean = matrix(post_mean, nrow=length(t_grid), ncol=theta_samples),
#     covariance = K
#   ))
# }
construct_kimesurface <- function(data, condition, theta_samples=30) {
  # Aggregate data across participants and runs
  aggregated <- apply(data[[condition]], 3, mean)
  n_time <- length(aggregated)
  
  # Phase grid (a regular grid over [0, 2*pi] is used here instead of Laplace-sampled phases)
  theta <- seq(0, 2*pi, length.out=theta_samples)  # Changed to regular grid
  
  # Create grid
  t_grid <- seq(0, 300, length=n_time)
  
  # Initialize output matrix directly
  result_matrix <- matrix(0, nrow=length(t_grid), ncol=theta_samples)
  
  # Process each time point separately to avoid memory issues
  for(t_idx in 1:length(t_grid)) {
    # Create kernel only for this time point
    K_t <- matrix(0, nrow=theta_samples, ncol=theta_samples)
    for(i in 1:theta_samples) {
      for(j in 1:theta_samples) {
        K_t[i,j] <- kime_kernel(t_grid[t_idx], t_grid[t_idx], 
                               theta[i], theta[j])
      }
    }
    
    # Add observation noise
    K_t <- K_t + diag(0.1, theta_samples)
    
    # Compute posterior for this time point
    L <- chol(K_t)
    alpha <- solve(t(L), solve(L, rep(aggregated[t_idx], theta_samples)))
    result_matrix[t_idx,] <- alpha
  }
  
  return(list(
    grid = expand.grid(t=t_grid, theta=theta),
    mean = result_matrix,
    timepoints = t_grid,
    phases = theta
  ))
}

# ----------------------------
# 4. Visualization
# ----------------------------
visualize_kimesurface <- function(surface, title) {
  x <- outer(surface$grid$t, cos(surface$grid$theta), "*")
  y <- outer(surface$grid$t, sin(surface$grid$theta), "*")
  
  plot_ly(x = x, y = y, z = surface$mean, type = "surface",
          colorscale = "Viridis") %>%
    layout(
      scene = list(
        xaxis = list(title = "X (t·cosθ)"),
        yaxis = list(title = "Y (t·sinθ)"),
        zaxis = list(title = "BOLD Response"),
        camera = list(eye = list(x = 1.5, y = 1.5, z = 1.5))
      ),
      title = list(text = title)
    )
}

##### Kimesurface Visualization
visualize_kimesurfaces_triple <- function(on_surface, off_surface) {
  # Helper function to process surface data in chunks
  process_surface_data <- function(surface) {
    t_vals <- unique(surface$grid$t)
    theta_vals <- unique(surface$grid$theta)
    
    chunk_size <- 10
    n_chunks <- ceiling(length(t_vals) / chunk_size)
    
    x_all <- y_all <- z_all <- NULL
    
    for(i in 1:n_chunks) {
      start_idx <- (i-1) * chunk_size + 1
      end_idx <- min(i * chunk_size, length(t_vals))
      t_chunk <- t_vals[start_idx:end_idx]
      
      x_chunk <- outer(t_chunk, cos(theta_vals))
      y_chunk <- outer(t_chunk, sin(theta_vals))
      z_chunk <- surface$mean[start_idx:end_idx,]
      
      x_all <- rbind(x_all, x_chunk)
      y_all <- rbind(y_all, y_chunk)
      z_all <- rbind(z_all, z_chunk)
    }
    
    return(list(x = x_all, y = y_all, z = z_all))
  }
  
  # Process both surfaces
  on_data <- process_surface_data(on_surface)
  off_data <- process_surface_data(off_surface)
  
  # Reduce resolution for plotting
  n <- 3  # Use every 3rd point
  subsample <- function(mat) {
    mat[seq(1, nrow(mat), n), seq(1, ncol(mat), n)]
  }
  
  # Create the first plot (ON)
  fig <- plot_ly() %>%
    add_surface(
      x = subsample(on_data$x),
      y = subsample(on_data$y),
      z = subsample(on_data$z),
      colorscale = "Viridis",
      showscale = FALSE,
      name = "ON",
      scene = 'scene1'
    ) %>%
    layout(
      scene = list(
        domain = list(x = c(0, 0.33), y = c(0, 1)),
        aspectmode = "cube",
        xaxis = list(title = "X (t·cosθ)"),
        yaxis = list(title = "Y (t·sinθ)"),
        zaxis = list(title = "BOLD Response")
      )
    )

  # Add the second plot (OFF)
  fig <- fig %>%
    add_surface(
      x = subsample(off_data$x),
      y = subsample(off_data$y),
      z = subsample(off_data$z),
      colorscale = "Viridis",
      showscale = FALSE,
      name = "OFF",
      scene = 'scene2'
    ) %>%
    layout(
      scene2 = list(
        domain = list(x = c(0.33, 0.66), y = c(0, 1)),
        aspectmode = "cube",
        xaxis = list(title = "X (t·cosθ)"),
        yaxis = list(title = "Y (t·sinθ)"),
        zaxis = list(title = "BOLD Response")
      )
    )

  # Add the third plot (OVERLAY)
  fig <- fig %>%
    add_surface(
      x = subsample(on_data$x),
      y = subsample(on_data$y),
      z = subsample(on_data$z),
      colorscale = "Viridis",
      opacity = 0.7,
      showscale = FALSE,
      name = "ON (overlay)",
      scene = 'scene3'
    ) %>%
    add_surface(
      x = subsample(off_data$x),
      y = subsample(off_data$y),
      z = subsample(off_data$z),
      colorscale = "Portland",
      opacity = 0.7,
      showscale = FALSE,
      name = "OFF (overlay)",
      scene = 'scene3'
    ) %>%
    layout(
      scene3 = list(
        domain = list(x = c(0.66, 1), y = c(0, 1)),
        aspectmode = "cube",
        xaxis = list(title = "X (t·cosθ)"),
        yaxis = list(title = "Y (t·sinθ)"),
        zaxis = list(title = "BOLD Response")
      )
    )

  # Add final layout settings
  fig <- fig %>%
    layout(
      title = "Kimesurface Visualization of fMRI ON and OFF Conditions",
      showlegend = TRUE,
      legend = list(x = 0.85, y = 0.1),
      annotations = list(
        list(
          showarrow = FALSE,
          text = '(ON)',
          xref = 'paper',
          yref = 'paper',
          x = 0.15,
          y = 0.95,
          xanchor = 'center',
          yanchor = 'bottom',
          font = list(size = 16)
        ),
        list(
          showarrow = FALSE,
          text = '(OFF)',
          xref = 'paper',
          yref = 'paper',
          x = 0.5,
          y = 0.95,
          xanchor = 'center',
          yanchor = 'bottom',
          font = list(size = 16)
        ),
        list(
          showarrow = FALSE,
          text = '(OVERLAY)',
          xref = 'paper',
          yref = 'paper',
          x = 0.85,
          y = 0.95,
          xanchor = 'center',
          yanchor = 'bottom',
          font = list(size = 16)
        )
      )
    )

  # Add synchronization
  fig <- fig %>% onRender(
    "function(el) {
      el.on('plotly_relayout', function(d) {
        const camera = Object.keys(d).filter((key) => /\\.camera$/.test(key));
        if (camera.length) {
          const scenes = Object.keys(el.layout).filter((key) => /^scene\\d*/.test(key));
          const new_layout = {};
          scenes.forEach(key => {
            new_layout[key] = {...el.layout[key], camera: {...d[camera]}};
          });
          Plotly.relayout(el, new_layout);
        }
      });
    }"
  )

  return(fig)
}

# Usage example
fmri_data <- simulate_fmri_data()
on_surface <- construct_kimesurface(fmri_data, "ON")
off_surface <- construct_kimesurface(fmri_data, "OFF")
visualize_kimesurfaces_triple(on_surface, off_surface)

The 3D visualization below shows a simple rendition of the raw simulated fMRI On/Off data without any processing. Note the extreme level of noise, i.e., the low signal-to-noise ratio (SNR) of the fMRI data. One of the axes (\(y\)) is a proxy for the kime-phase, indexing the repeated runs within a participant and the multiple samples across participants.

#############################################
## 5) The triple visualization function
#############################################
visualize_kimesurfaces_tripleREV <- function(fmri_data) {
  t_min <- 1;     t_max <- dim(fmri_data$ON)[3]
  th_min<- 1;     th_max<- dim(fmri_data$ON)[1]*dim(fmri_data$ON)[2]
  
  t_seq     <- c(t_min:t_max)
  theta_seq <- c(th_min:th_max)
  
  # flatten the repeats across participants (15) and runs (20) into a single
  # dimension, which serves as a proxy for the random phase (theta) sampling
  reshapedOn <- matrix(fmri_data$ON, nrow = 15*20, ncol = 150)
  reshapedOff <- matrix(fmri_data$OFF, nrow = 15*20, ncol = 150)
  
  # 2D surface in (t,theta) space
  fig <- plot_ly() %>%
    # First: ON
    add_surface(
      z = reshapedOn,
      colorscale = "Viridis",
      showscale  = FALSE,
      name       = "ON",
      scene      = "scene1"
    ) %>%
    layout(
      scene = list(
        domain     = list(x = c(0, 0.33), y = c(0, 1)),
        aspectmode = "cube",
        xaxis = list(title = "time (t)"),
        yaxis = list(title = "Runs * Participants (θ)"),
        zaxis = list(title = "Intensity")
      )
    ) %>%
    # Second: OFF
    add_surface(
      z = reshapedOff,
      colorscale = "Viridis",
      showscale  = FALSE,
      name       = "OFF",
      scene      = "scene2"
    ) %>%
    layout(
      scene2 = list(
        domain     = list(x = c(0.33, 0.66), y = c(0, 1)),
        aspectmode = "cube",
        xaxis = list(title = "time (t)"),
        yaxis = list(title = "Runs * Participants (θ)"),
        zaxis = list(title = "Intensity")
      )
    ) %>%
    # Third: Overlay
    add_surface(
      z = reshapedOn,
      colorscale = "Viridis",
      opacity    = 0.5,
      showscale  = FALSE,
      name       = "ON (overlay)",
      scene      = "scene3"
    ) %>%
    add_surface(
      z = reshapedOff,
      colorscale = "Portland",
      opacity    = 0.8,
      showscale  = FALSE,
      name       = "OFF (overlay)",
      scene      = "scene3"
    ) %>%
    layout(
      scene3 = list(
        domain     = list(x = c(0.66, 1), y = c(0, 1)),
        aspectmode = "cube",
        xaxis = list(title = "time (t)"),
        yaxis = list(title = "Runs * Participants (θ)"),
        zaxis = list(title = "Intensity")
      )
    ) %>%
    layout(
      title      = "Kimesurface Visualization of fMRI ON and OFF",
      showlegend = TRUE,
      legend     = list(x=0.85, y=0.1),
      annotations= list(
        list(showarrow=FALSE, text='(ON)',  xref='paper', yref='paper',
             x=0.15, y=0.95, xanchor='center', yanchor='bottom',
             font=list(size=16)),
        list(showarrow=FALSE, text='(OFF)', xref='paper', yref='paper',
             x=0.5,  y=0.95, xanchor='center', yanchor='bottom',
             font=list(size=16)),
        list(showarrow=FALSE, text='(OVERLAY)',
             xref='paper', yref='paper', x=0.85, y=0.95,
             xanchor='center', yanchor='bottom',
             font=list(size=16))
      )
    ) %>%
    # Sync the cameras
    onRender("
      function(el) {
        el.on('plotly_relayout', function(d) {
          const camera = Object.keys(d).filter((key) => /\\.camera$/.test(key));
          if (camera.length) {
            const scenes = Object.keys(el.layout).filter((key) => /^scene\\d*/.test(key));
            const new_layout = {};
            scenes.forEach(key => {
              new_layout[key] = {...el.layout[key], camera: {...d[camera]}};
            });
            Plotly.relayout(el, new_layout);
          }
        });
      }
    ")
  
  fig
}

fmri_data <- simulate_fmri_data()
fig <- visualize_kimesurfaces_tripleREV(fmri_data)
fig

Clearly, these over-simplified approaches do not yield insightful kimesurface representations, and hence, we will next explore more elaborate kimesurface constructions from repeated time-series observations.

2.2 Advanced Hierarchical SGP Model

2.2.1 Time-dynamic Phase Distributions

Let’s demonstrate time-dynamic phase priors using the example of a 1-parameter Laplace distribution where the longitudinal parameter is time \(t\). As phase distributions are symmetric, we typically model the scale parameter \(b=b(t)\) as a function of time, while fixing the location parameter \(\mu=0\). The probability density function (PDF) of the 1-parameter Laplace distribution with time-dependent scale is \(f(x \mid t) = \frac{1}{2b(t)} \exp\left(-\frac{|x|}{b(t)}\right).\)

The time-dependent scale \(b(t) > 0\) controls the spread/variance over time and the fixed location \(\mu = 0\) ensures the process is unbiased. The mean is \(\mathbb{E}[X \mid t] = 0\), the variance is \(\text{Var}(X \mid t) = 2b(t)^2\), and the maximum density at \(x = 0\), with sharpness inversely proportional to \(b(t)\).
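
A quick Monte Carlo sanity check of the stated variance, \(\text{Var}(X \mid t) = 2b(t)^2\), using the same ExtDist::rLaplace sampler employed elsewhere in this chapter (the scale value is arbitrary):

# Sanity check of Var(X | t) = 2*b(t)^2 for a fixed (arbitrary) scale b(t) = 1.5
b_t <- 1.5
x <- ExtDist::rLaplace(1e5, mu = 0, b = b_t)
c(empirical = var(x), theoretical = 2*b_t^2)   # the two values should closely agree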

Suppose we model phase angles \(\theta(t)\) in a kime-surface framework, where phase variability increases linearly with time \(b(t) = b_0 + \alpha t\), where \(b_0\) is the initial scale (constant) and \(\alpha\) is the rate of scale increase (constant). Then, the PDF becomes \(f(\theta \mid t) = \frac{1}{2(b_0 + \alpha t)} \exp\left(-\frac{|\theta|}{b_0 +\alpha t}\right).\)

| Application | Interpretation |
|---|---|
| Time-series noise modeling | Increasing/decreasing variance over time |
| Phase angle distributions | Time-dependent phase uncertainty (e.g., sensor drift) |
| Financial volatility | Stochastic volatility models |
| Biological rhythms | Circadian or stimulus-locked phase variability |

The example below utilizes a Laplace PDF with a time-dependent scale; however, other more elaborate kime-phase distributions that explicitly introduce time-phase relations can similarly be implemented and tested.

library(plotly)

# Define the time-dependent Laplace distribution function
dlaplace_time <- function(x, t, b0=1, alpha=0.1) {
  b <- b0 + alpha * t
  1 / (2 * b) * exp(-abs(x) / b)
}

# Generate data
times <- c(0, 5, 10)
x <- seq(-10, 10, 0.1)
df <- data.frame()
for(t in times) {
  y <- dlaplace_time(x, t, b0=1, alpha=0.3)
  df <- rbind(df, data.frame(x=x, y=y, t=factor(t)))
}

# Create plotly visualization
plot_ly() %>%
  add_trace(
    data = subset(df, t == "0"),
    x = ~x,
    y = ~y,
    name = "t = 0",
    type = "scatter",
    mode = "lines",
    line = list(width = 2),
    hovertemplate = "x: %{x:.2f}<br>density: %{y:.4f}<extra>t = 0</extra>"
  ) %>%
  add_trace(
    data = subset(df, t == "5"),
    x = ~x,
    y = ~y,
    name = "t = 5",
    type = "scatter",
    mode = "lines",
    line = list(width = 2),
    hovertemplate = "x: %{x:.2f}<br>density: %{y:.4f}<extra>t = 5</extra>"
  ) %>%
  add_trace(
    data = subset(df, t == "10"),
    x = ~x,
    y = ~y,
    name = "t = 10",
    type = "scatter",
    mode = "lines",
    line = list(width = 2),
    hovertemplate = "x: %{x:.2f}<br>density: %{y:.4f}<extra>t = 10</extra>"
  ) %>%
  layout(
    title = list(
      text = "Time-Dependent Laplace Distributions",
      x = 0.5,
      xanchor = "center"
    ),
    xaxis = list(
      title = "x",
      zeroline = TRUE,
      gridcolor = "rgb(235, 235, 235)"
    ),
    yaxis = list(
      title = "Density",
      zeroline = TRUE,
      gridcolor = "rgb(235, 235, 235)"
    ),
    hovermode = "closest",
    showlegend = TRUE,
    legend = list(
      x = 0.85,
      y = 0.95
    ),
    plot_bgcolor = "rgb(255, 255, 255)",
    paper_bgcolor = "rgb(255, 255, 255)"
  )

To generalize for non-zero location we can replace \(|x|\) with \(|x - \mu(t)|\). For discrete-time processes, replace \(t\) with \(t_i\) for timesteps \(i = 1, 2, 3, \dots\). This formulation preserves the Laplace distribution’s characteristic sharp peak and heavy tails while allowing the spread to evolve dynamically with time.
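
A minimal sketch of this generalization extends the dlaplace_time() helper defined in the previous chunk with a time-dependent location; the linear drift \(\mu(t) = 0.2\,t\) is a hypothetical choice for illustration.

# Minimal sketch: generalize dlaplace_time() to a non-zero, time-dependent location mu(t);
# the default drift mu(t) = 0.2*t is a hypothetical placeholder
dlaplace_time_mu <- function(x, t, b0 = 1, alpha = 0.1, mu_fun = function(t) 0.2 * t) {
  b <- b0 + alpha * t
  1 / (2 * b) * exp(-abs(x - mu_fun(t)) / b)
}

# Example: densities at t = 5 for a few x values
dlaplace_time_mu(x = c(-2, 0, 2), t = 5)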

To enhance the complexity and biological plausibility of the kimesurface construction for repeated fMRI time-series, we can refine the Gaussian Process (GP) framework to include domain-specific modifications. A biophysical kernel may replace the simplistic product kernel we used in the basic model above. Here we will utilize a biophysically informed composite kernel

\[\underbrace{K\left(\overbrace{(t,\theta)}^{\kappa}, \overbrace{(t',\theta')}^{\kappa'}\right )}_{composite\ kernel} = \underbrace{K_{\text{HRF}}(t,t')}_{HRF\ temporal\ kernel} \times\ \underbrace{K_{\theta}(\theta,\theta')}_{phase\ kernel} \ + \underbrace{K_{\text{noise}}(t,t')}_{physio\ noise\ kernel},\] where the HRF-Derived Temporal Kernel \[K_{\text{HRF}}(t,t') = \sigma_{\text{HRF}}^2 \text{HRF}(t) \cdot \text{HRF}(t') \cdot e^{-\frac{(t-t')^2}{2l_{\text{HRF}}^2}}\] captures hemodynamic response shape and temporal correlations.

The phase-amplitude coupling kernel \[K_{\theta}(\theta,\theta') = \sigma_{\theta}^2 \left(1 + \cos(\theta - \theta')\right) \cdot e^{-\frac{|\theta - \theta'|}{b_{\text{Laplace}}}}\] combines the Laplace phase prior (via the exponential term) and periodic coupling. Alternatively, we can use a periodic phase kernel with a \(2\pi\)-periodic component enhancing the phase-dependent modulation \[K_\theta(\theta, \theta') = \exp\left(-\frac{2\sin^2(|\theta-\theta'|/2)}{l_\theta^2}\right).\]

And finally, the physiological noise kernel \[K_{\text{noise}}(t,t') = \sigma_{\text{AR}}^2 \rho^{|t-t'|} + \sigma_{\text{white}}^2 \delta_{tt'}\] represents an \(AR(1)\) process for low-frequency drift and Gaussian white noise.
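
A minimal sketch of this composite kernel is given below; the double-gamma HRF matches the form used elsewhere in this section, while the hyperparameter values are hypothetical placeholders rather than calibrated estimates.

# Minimal sketch of the composite kernel K = K_HRF * K_theta + K_noise
hrf_fun <- function(t) {   # double-gamma HRF, same form as used elsewhere in this section
  a1 <- 6; a2 <- 12; b1 <- 0.9; b2 <- 0.9; c <- 0.35
  t^(a1-1)*b1^a1*exp(-b1*t)/gamma(a1) - c*t^(a2-1)*b2^a2*exp(-b2*t)/gamma(a2)
}

composite_kime_kernel <- function(t1, t2, th1, th2,
                                  sigma_hrf = 1, l_hrf = 10,
                                  sigma_theta = 1, b_laplace = 1,
                                  sigma_ar = 0.3, rho = 0.9, sigma_white = 0.1) {
  k_hrf   <- sigma_hrf^2 * hrf_fun(t1) * hrf_fun(t2) * exp(-(t1 - t2)^2 / (2 * l_hrf^2))
  k_theta <- sigma_theta^2 * (1 + cos(th1 - th2)) * exp(-abs(th1 - th2) / b_laplace)
  k_noise <- sigma_ar^2 * rho^abs(t1 - t2) + sigma_white^2 * (t1 == t2)  # AR(1) + white noise
  k_hrf * k_theta + k_noise
}

# Example: covariance between two kime points
composite_kime_kernel(t1 = 4, t2 = 6, th1 = 0.2, th2 = -0.5)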

| Feature | Biological Rationale | Impact on Kimesurface |
|---|---|---|
| HRF-Informed Kernel | Matches BOLD response latency/undershoot | Creates characteristic "double-peak" temporal patterns |
| Laplace-Periodic Phase | Combines stochastic phase sampling (Laplace) with oscillatory coupling | Generates asymmetric angular modulations |
| Hierarchical Structure | Preserves individual participant/run variability | Adds surface texture from biological variability |
| AR(1) + White Noise | Models physiological (low-frequency) and thermal (high-frequency) noise | Realistic noise floor with temporal correlations |
| Uncertainty Quantification | \(95\%\) credible intervals via posterior variance | Reveals regions of low signal-to-noise |

The expected kimesurface characteristics include temporal complexity, e.g., double-peaked responses from the HRF kernel with post-stimulus undershoot; phase modulation, e.g., asymmetric angular patterns from the Laplace-periodic phase coupling; and hierarchical texture, e.g., fine-scale variations from preserved individual differences. Note that, for computational efficiency, specialized linear-algebra libraries are used for sparse hierarchical GP inference (\(O(n \log n)\) complexity vs. \(O(n^3)\) for exact GP modeling). Further calibration of the kernel parameters via empirical Bayes may be accomplished with inla.hyperpar(result), i.e., INLA (Integrated Nested Laplace Approximations) estimates the hyperparameters from the data itself.

A recipe for parameter tuning is shown in this table.

| Parameter | Role | Adjustment Strategy |
|---|---|---|
| b (Laplace) | Phase diversity | Increase for wider phase sampling |
| l_hrf | Temporal detail resolution | Decrease to capture faster HRF dynamics |
| sigma_theta | Phase coupling strength | Increase to enhance angular modulation |
| rho (AR noise) | Low-frequency drift correlation | Decrease to reduce slow signal drifts |

In practice, there should be distinct differences between the ON condition (e.g., radial coherence in HRF-shaped responses) and the OFF condition (e.g., random angular patterns with lower amplitude). This framework moves beyond simplistic averaging to model the multi-scale, multi-participant nature of fMRI data while respecting its biophysical constraints.

This SGP model implementation provides a principled framework for analyzing repeated neuroimaging experiments while maintaining phase-time relationships in a geometrically meaningful space. The simulation can be further improved and enhanced by using

  1. sparse GP approximations with inducing points for large datasets (see the Nyström sketch below)
  2. hierarchical modeling adding participant-level random effects
  3. dynamic phase distributions extending the phase prior to a time-dependent \(\Phi(t)\)
  4. Bayesian inference sampling from the posterior predictive probability distribution using Stan/TensorFlow Probability.
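As a rough illustration of the first enhancement, the base-R sketch below forms a Nyström low-rank approximation of a time kernel using \(m\) inducing points; the RBF kernel, the inducing-point locations, and all parameter values are illustrative assumptions rather than part of the pipeline above.

# Nystrom sketch of a sparse (inducing-point) GP approximation.
# The RBF kernel, inducing locations, and parameters are illustrative assumptions.
rbf <- function(a, b, ell = 20) exp(-outer(a, b, "-")^2 / (2 * ell^2))

t_all <- seq(0, 300, length.out = 400)     # full set of time points (n = 400)
m     <- 30                                # number of inducing points
t_ind <- seq(0, 300, length.out = m)       # inducing-point locations

K_nm <- rbf(t_all, t_ind)                  # n x m cross-covariance
K_mm <- rbf(t_ind, t_ind) + diag(1e-6, m)  # m x m inducing covariance (jittered)

# Low-rank surrogate K ~ K_nm K_mm^{-1} K_nm^T, avoiding the full n x n kernel
K_lowrank <- K_nm %*% solve(K_mm, t(K_nm))
dim(K_lowrank)   # 400 x 400, but with rank <= m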

The key is that the plotting function expects a rectangular grid in (time × phase). That means we need a 2D matrix of predictions of shape \([n_{\text{time}}, n_{\theta}]\), and a data frame grid with numeric columns t and theta that represent the same rectangular product \(\{t_i\}\times\{\theta_j\}\).
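For instance, a schematic of this structure (with arbitrary grid sizes) looks as follows.

# Schematic of the rectangular (time x phase) structure the plotting code expects
n_time  <- 50
n_theta <- 40
t_seq     <- seq(0, 300, length.out = n_time)
theta_seq <- seq(-pi, pi, length.out = n_theta)

grid     <- expand.grid(t = t_seq, theta = theta_seq)  # data frame with numeric t, theta
pred_mat <- matrix(0, nrow = n_time, ncol = n_theta)   # predictions, shape [n_time, n_theta]
stopifnot(nrow(grid) == n_time * n_theta)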

### (A) Helper Functions
library(parallel)
library(kernlab)
library(ExtDist)   # Only needed if you want Laplace draws
library(plotly)    # For 3D surface plotting
library(htmlwidgets)

# Example HRF (same as your prior snippets)
hrf <- function(t) {
  t_hrf <- seq(0, 30, by=2)
  a1 <- 6; a2 <- 12; b1 <- 0.9; b2 <- 0.9; c <- 0.35
  resp <- t_hrf^(a1-1)*b1^a1*exp(-b1*t_hrf)/gamma(a1) -
          c*t_hrf^(a2-1)*b2^a2*exp(-b2*t_hrf)/gamma(a2)
  approx(t_hrf, resp, xout=t, rule=2)$y
}

simulate_fmri_data <- function(n_time=100, n_runs=5) {
  # Creates an array [time, 1, runs]
  set.seed(123)
  arr <- array(rnorm(n_time * n_runs, sd=1.0), dim=c(n_time, 1, n_runs))
  # Could also simulate different patterns for ON vs OFF
  list(
    ON  = arr,
    OFF = -arr  # trivial example: negative signal
  )
}

visualize_kimesurfaces_triple <- function(on_surface, off_surface) {
  # Helper function to process surface data in chunks
  process_surface_data <- function(surface) {
    t_vals <- unique(surface$grid$t)
    theta_vals <- unique(surface$grid$theta)
    
    chunk_size <- 10
    n_chunks <- ceiling(length(t_vals) / chunk_size)
    
    x_all <- y_all <- z_all <- NULL
    
    for(i in 1:n_chunks) {
      start_idx <- (i-1) * chunk_size + 1
      end_idx <- min(i * chunk_size, length(t_vals))
      t_chunk <- t_vals[start_idx:end_idx]
      
      x_chunk <- outer(t_chunk, cos(theta_vals))
      y_chunk <- outer(t_chunk, sin(theta_vals))
      z_chunk <- surface$mean[start_idx:end_idx,]
      
      x_all <- rbind(x_all, x_chunk)
      y_all <- rbind(y_all, y_chunk)
      z_all <- rbind(z_all, z_chunk)
    }
    
    return(list(x = x_all, y = y_all, z = z_all))
  }
  
  # Process both surfaces
  on_data <- process_surface_data(on_surface)
  off_data <- process_surface_data(off_surface)
  
  # Reduce resolution for plotting
  n <- 3  # Use every 3rd point
  subsample <- function(mat) {
    mat[seq(1, nrow(mat), n), seq(1, ncol(mat), n)]
  }
  
  # Create the first plot (ON)
  fig <- plot_ly() %>%
    add_surface(
      x = subsample(on_data$x),
      y = subsample(on_data$y),
      z = subsample(on_data$z),
      colorscale = "Viridis",
      showscale = FALSE,
      name = "ON",
      scene = 'scene1'
    ) %>%
    layout(
      scene = list(
        domain = list(x = c(0, 0.33), y = c(0, 1)),
        aspectmode = "cube",
        xaxis = list(title = "X (t·cosθ)"),
        yaxis = list(title = "Y (t·sinθ)"),
        zaxis = list(title = "BOLD Response")
      )
    )

  # Add the second plot (OFF)
  fig <- fig %>%
    add_surface(
      x = subsample(off_data$x),
      y = subsample(off_data$y),
      z = subsample(off_data$z),
      colorscale = "Viridis",
      showscale = FALSE,
      name = "OFF",
      scene = 'scene2'
    ) %>%
    layout(
      scene2 = list(
        domain = list(x = c(0.33, 0.66), y = c(0, 1)),
        aspectmode = "cube",
        xaxis = list(title = "X (t·cosθ)"),
        yaxis = list(title = "Y (t·sinθ)"),
        zaxis = list(title = "BOLD Response")
      )
    )

  # Add the third plot (OVERLAY)
  fig <- fig %>%
    add_surface(
      x = subsample(on_data$x),
      y = subsample(on_data$y),
      z = subsample(on_data$z),
      colorscale = "Viridis",
      opacity = 0.7,
      showscale = FALSE,
      name = "ON (overlay)",
      scene = 'scene3'
    ) %>%
    add_surface(
      x = subsample(off_data$x),
      y = subsample(off_data$y),
      z = subsample(off_data$z),
      colorscale = "Portland",
      opacity = 0.7,
      showscale = FALSE,
      name = "OFF (overlay)",
      scene = 'scene3'
    ) %>%
    layout(
      scene3 = list(
        domain = list(x = c(0.66, 1), y = c(0, 1)),
        aspectmode = "cube",
        xaxis = list(title = "X (t·cosθ)"),
        yaxis = list(title = "Y (t·sinθ)"),
        zaxis = list(title = "BOLD Response")
      )
    )

  # Add final layout settings
  fig <- fig %>%
    layout(
      title = "Kimesurface Visualization of fMRI ON and OFF Conditions",
      showlegend = TRUE,
      legend = list(x = 0.85, y = 0.1),
      annotations = list(
        list(
          showarrow = FALSE,
          text = '(ON)',
          xref = 'paper',
          yref = 'paper',
          x = 0.15,
          y = 0.95,
          xanchor = 'center',
          yanchor = 'bottom',
          font = list(size = 16)
        ),
        list(
          showarrow = FALSE,
          text = '(OFF)',
          xref = 'paper',
          yref = 'paper',
          x = 0.5,
          y = 0.95,
          xanchor = 'center',
          yanchor = 'bottom',
          font = list(size = 16)
        ),
        list(
          showarrow = FALSE,
          text = '(OVERLAY)',
          xref = 'paper',
          yref = 'paper',
          x = 0.85,
          y = 0.95,
          xanchor = 'center',
          yanchor = 'bottom',
          font = list(size = 16)
        )
      )
    )

  # Add synchronization
  fig <- fig %>% onRender(
    "function(el) {
      el.on('plotly_relayout', function(d) {
        const camera = Object.keys(d).filter((key) => /\\.camera$/.test(key));
        if (camera.length) {
          const scenes = Object.keys(el.layout).filter((key) => /^scene\\d*/.test(key));
          const new_layout = {};
          scenes.forEach(key => {
            new_layout[key] = {...el.layout[key], camera: {...d[camera]}};
          });
          Plotly.relayout(el, new_layout);
        }
      });
    }"
  )

  return(fig)
}

construct_kimesurface_runs <- 
  function(fmri_data, condition  = "ON", cores = detectCores()-1,
           # Time dimension
           t_length   = NULL,    # optional subset
           # Phase dimension
           assign_phase = c("uniform", "random"), seed = 999) 
{
  # 1) Extract data array [time, 1, runs]
  data_array <- fmri_data[[condition]]  # e.g., [T, 1, R]
  T_available <- dim(data_array)[1]
  R_runs      <- dim(data_array)[3]
  
  # 2) Aggregate data across the second dimension if needed (if >1)
  #    Typically it's [time, 1, run], so no changes if second dim=1.
  
  # 3) Decide how many time points we use
  #    If t_length is NULL or > T_available, use T_available
  if (is.null(t_length) || t_length > T_available) {
    t_length <- T_available
  }
  
  # 4) Extract the first t_length points from each run
  #    => dimension: [t_length, 1, R_runs]
  data_array <- data_array[1:t_length, , , drop=FALSE]
  
  # 5) Flatten the data into a T×R matrix => y_mat[t, r]
  #    We'll do parApply for a partial speedup
  cl <- makeCluster(cores)
  # parApply(..., MARGIN=c(1,3)) => a 2D result [time, run]
  # because dimension 2 is length=1
  y_mat <- parApply(cl, data_array, c(1,3), mean)
  stopCluster(cl)
  
  # y_mat now has dimension [t_length, R_runs]
  T_final <- nrow(y_mat)
  R_final <- ncol(y_mat)
  
  # 6) Assign a single phase angle for each run
  assign_phase <- match.arg(assign_phase)
  set.seed(seed)
  if (assign_phase == "uniform") {
    # e.g. equally spaced angles around [0, 2π)
    run_angles <- seq(0, 2*pi, length.out=R_final+1)[-1]
    # drop the last to avoid duplication at 2*pi
  } else {
    # random
    run_angles <- runif(R_final, min=0, max=2*pi)
  }
  
  # 7) Build the time kernel
  t_grid <- seq(0, 300, length.out = T_final)
  
  # Example: RBF × HRF
  c_hrf <- 1.2
  ell_t <- 3.0
  time_diff <- as.matrix(dist(t_grid))
  hrf_vals <- hrf(t_grid)
  K_time <- (c_hrf^2)*outer(hrf_vals, hrf_vals)*exp(-time_diff^2/(2*ell_t^2))
  
  # Optionally add a small diagonal term
  diag(K_time) <- diag(K_time) + 0.01
  
  # 8) Build the "phase" kernel among runs
  #    K_phase(r, s) = periodic kernel in angles
  ell_phase   <- 0.5
  sigma_phase <- 2.5
  
  angle_diff <- outer(run_angles, run_angles, FUN=function(a,b) a - b)
  # periodic RBF
  K_phase <- sigma_phase^2 * exp(-2 * sin(angle_diff/2)^2 / ell_phase^2)
  
  # 9) Kronecker product => (T_final * R_final) x (T_final * R_final)
  #    Flatten the y_mat into a vector => length T_final*R_final
  K_2D <- kronecker(K_time, K_phase)
  y_vec <- c(y_mat)   # stack columns or rows; we just need consistent ordering
  # By default, `c(y_mat)` stacks in column-major order => the first column first, etc.
  # We'll keep that order consistent with the Kron ordering.

  # 10) as.kernelMatrix -> gausspr
  K_kern  <- as.kernelMatrix(K_2D)
  gp_model <- gausspr(x = K_kern, y = y_vec)
  
  # 11) Predict in-sample => length T_final*R_final
  pred_vec <- predict(gp_model)
  
  # 12) Reshape => [t_length, R_runs]
  pred_mat <- matrix(pred_vec, nrow=T_final, ncol=R_final)
  
  # 13) Build a "grid" data frame so that each row is (time_i, run_angle_r)
  #     => T_final*R_final rows
  # We'll do expand.grid in "column-major" order so it matches c(y_mat).
  df_grid <- expand.grid(
    t     = t_grid,
    runID = seq_len(R_final),
    KEEP.OUT.ATTRS = FALSE
  )
  # Now add the angle for each run ID
  # Because expand.grid goes rowwise in the sense of slowest-varying first,
  # we want runID to be the second factor. This matches the default c() pattern.
  
  # We'll replace runID with 'theta' for your radial plotting
  # (each run has exactly one angle)
  df_grid$theta <- run_angles[df_grid$runID]
  
  # Convert to numeric explicitly
  df_grid$t <- as.numeric(df_grid$t)
  df_grid$theta <- as.numeric(df_grid$theta)
  
  # Return a structure matching your visualize code:
  #   surface$grid$t, surface$grid$theta, surface$mean
  #   where surface$mean is [T_final, R_final]
  list(
    grid    = df_grid,
    mean    = pred_mat,   # matrix [time, run], each run is a column
    t_grid  = t_grid,
    thetas  = run_angles,
    runs    = R_final,
    gp_model= gp_model,
    y_raw   = y_mat
  )
}

# Suppose we have 60 runs, each with 80 time points
fmri_data <- simulate_fmri_data(n_time=80, n_runs=60)

# Build ON-surface
on_surface <- construct_kimesurface_runs(
  fmri_data   = fmri_data,
  condition   = "ON",
  cores       = 2,
  t_length    = 80,               # use all time points
  assign_phase= "uniform"         # equally spaced angles for the runs
)
## Using automatic sigma estimation (sigest) for RBF or laplace kernel
# Build OFF-surface
off_surface <- construct_kimesurface_runs(
  fmri_data   = fmri_data,
  condition   = "OFF",
  cores       = 2,
  t_length    = 80,
  assign_phase= "uniform"
)
## Using automatic sigma estimation (sigest) for RBF or laplace kernel
# Now we have:
#   on_surface$grid$t, on_surface$grid$theta,
#   on_surface$mean [80, 60]  => time x run
# 
# If you pass these to your visualization function:
fig <- visualize_kimesurfaces_triple(on_surface, off_surface)
fig

2.2.2 Time-Dependent Laplace \(\theta(t)\) Modeling

When we model each time point as corresponding to a different random draw \(\theta_i\sim\mathrm{Laplace}(0,b_0+\alpha t_i)\), we lose the standard rectangular grid, \((t,\theta)\). We can still fit an SGP with an elementwise 2D kernel, but the result is just a vector of predictions of length n_time (one \(\theta\) per \(t\)), which is not a regular \((t,\theta)\) surface that visualize_kimesurfaces_triple() can slice with outer(...).
Hence the simplest approach to display the 3D scenes with the kimesurfaces is to define a full 2D grid in \(time\times\theta\), as shown in the example above. To account for random \(\theta\) draws, we can sample them once and reuse the same set of \(\theta\) values at every time point, forming a rectangular mesh for the plotting logic.

The function construct_kimesurface_2D(...) generates a rectangular \((t,\theta)\) grid as a 2D matrix of predictions, and a data frame grid with numeric t, theta, which can be displayed with visualize_kimesurfaces_triple(...).
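A minimal sketch of such a function (not the actual implementation) is shown below: one shared set of Laplace-drawn phases forms a rectangular \((t,\theta)\) mesh, and a simple separable kernel GP is fit on it, mirroring the structure of construct_kimesurface_runs(); the kernel choices and hyperparameters are illustrative assumptions.

# Minimal sketch of a construct_kimesurface_2D-style helper (illustrative only):
# one shared set of Laplace phases forms a rectangular (t, theta) mesh,
# and a separable kernel GP is fit on it, mirroring construct_kimesurface_runs().
library(kernlab)
library(ExtDist)

construct_kimesurface_2D_sketch <- function(y_mat, T_max = 300, b = 0.5, seed = 999) {
  n_time <- nrow(y_mat); n_runs <- ncol(y_mat)
  t_grid <- seq(0, T_max, length.out = n_time)

  # Draw the phase set once and reuse it at every time point (rectangular mesh)
  set.seed(seed)
  theta_grid <- sort(rLaplace(n_runs, mu = 0, b = b))

  # Separable kernel: RBF in time x periodic kernel in phase (illustrative values)
  K_time  <- exp(-as.matrix(dist(t_grid))^2 / (2 * 3^2))
  K_phase <- exp(-2 * sin(outer(theta_grid, theta_grid, "-") / 2)^2 / 0.5^2)
  K_2D    <- kronecker(K_time, K_phase)

  gp_fit   <- gausspr(x = as.kernelMatrix(K_2D), y = c(y_mat))
  pred_mat <- matrix(predict(gp_fit), nrow = n_time, ncol = n_runs)

  list(grid = expand.grid(t = t_grid, theta = theta_grid, KEEP.OUT.ATTRS = FALSE),
       mean = pred_mat, t_grid = t_grid, thetas = theta_grid)
}

# Example on a small simulated [time x run] matrix
# y_demo <- matrix(rnorm(80 * 5), nrow = 80, ncol = 5)
# surf2D <- construct_kimesurface_2D_sketch(y_demo)   # surf2D$mean is 80 x 5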

2.2.3 Fourier k-Space Kime-surface Construction

Let’s show a kimesurface construction using time-dependent phases, \(\theta \sim \Phi(t)\), i.e., random draws from a Laplace distribution. This yields irregularly sampled kime-surface intensity values at locations \((t_i, \theta(t_i, run_j))\), which represent all observed experimental runs (i.e., repeated fMRI measurements across time). The irregularly sampled kime-surface values at these 2D (time, phase) coordinates are just the actually simulated fMRI intensities from the different runs. Specifically, we will explore a pair of complementary strategies for constructing the kime-surfaces.

  • The first approach (Strategy A) interpolates the irregularly sampled intensities in the time domain directly and converts the original irregular 2D coordinates into a regular interpolated grid, which is then used to display the kime-surface using plot_ly().
  • The second approach (Strategy B) performs the entire kime-surface reconstruction in the Fourier (frequency) k-space. In this case, we use the Fourier transform to map each of the individual fMRI time-series runs into the Fourier domain and construct the k-space kimesurface, again using random Laplace phases and interpolating in the 2D frequency space to resample the surface on a regular 2D grid in k-space. Finally, we use the inverse 2D Fourier transform to map the k-space kime-surface back into a kime-surface in the original time domain for analogous visualization.

2.2.3.1 First approach (Strategy A) Interpolating Irregularly Sampled Data in the Time Domain

In the example below, the smoothness parameter \(\lambda = 0.001\) plays a pivotal role in the kimesurface construction via interpolate_kimesurface(), where we use the thin-plate spline regression function Tps() to interpolate the irregular \((t,\theta)\) grid onto a regular equispaced grid.

#############################################
# Complete End-to-End: Strategy A
#############################################

library(parallel)
library(fields)
library(plotly)
library(ExtDist)

# (A) 1. Simulate
simulate_fmri_data <- function(n_time = 80, n_runs = 5) {
  set.seed(123)
  arr <- array(rnorm(n_time * n_runs, mean=0, sd=1),
               dim = c(n_time, 1, n_runs))
  list(ON = arr)
}

# (B) 2. Build (t,theta,intensity) with time-dependent Laplace angles
build_irregular_kimesurface_data <- function(fmri_data, 
                                             condition = "ON",
                                             b0 = 0.3,
                                             alpha = 0.005,
                                             T_max = 300,
                                             cores = parallel::detectCores()-1) {
  data_array <- fmri_data[[condition]]
  n_time <- dim(data_array)[1]
  n_runs <- dim(data_array)[3]
  
  cl <- makeCluster(cores)
  y_mat <- parApply(cl, data_array, c(1,3), mean)
  stopCluster(cl)
  
  t_grid <- seq(0, T_max, length.out = n_time)
  
  df <- expand.grid(
    timeIndex = seq_len(n_time),
    runID     = seq_len(n_runs),
    KEEP.OUT.ATTRS = FALSE
  )
  df$t <- t_grid[df$timeIndex]
  
  # Draw Laplace(0, b0+alpha*t)
  set.seed(999)
  df$theta <- mapply(function(tt) {
    scale_i <- b0 + alpha * tt
    rLaplace(1, mu=0, b=scale_i)
  }, df$t)
  
  df$intensity <- c(y_mat)
  
  df
}

# (C) 3. Interpolate with Tps
interpolate_kimesurface <- function(df) {
  fit <- Tps(
    x = cbind(df$t, df$theta),
    Y = df$intensity,
    lambda = 0.001   # Smoothing control, 0.0 to force "interpolate exactly" with no smoothing penalty
  )
  fit
}

# (D) 4. Predict on a regular grid & plot
plot_kimesurface_tps <- function(fit, df, 
                                 n_time_plot=50, 
                                 n_theta_plot=50) {
  t_min <- min(df$t);    t_max <- max(df$t)
  th_min<- min(df$theta);th_max<- max(df$theta)
  
  t_seq    <- seq(t_min,  t_max,   length.out=n_time_plot)
  theta_seq<- seq(th_min, th_max,  length.out=n_theta_plot)
  
  grid_mat <- as.matrix(expand.grid(t_seq, theta_seq))
  pred_vals<- predict(fit, grid_mat)
  
  pred_mat <- matrix(pred_vals, nrow=n_time_plot, ncol=n_theta_plot, byrow=FALSE)
  
  # 2D surface in (t,theta) space
  p <- plot_ly(
    x = t_seq,
    y = theta_seq,
    z = ~pred_mat
  ) %>%
    add_surface() %>%
    layout(
      title = "Kimesurface in (t, θ) from Tps Interpolation",
      scene = list(
        xaxis = list(title = "time (t)"),
        yaxis = list(title = "θ (Laplace-dist)"),
        zaxis = list(title = "Intensity")
      )
    )
  p
}


#############################################
# Example usage:
#############################################

# 1) Simulate data
fmri_data <- simulate_fmri_data(n_time=80, n_runs=5)

# 2) Build random-laplace (t,theta,intensity)
df_kime <- build_irregular_kimesurface_data(
  fmri_data  = fmri_data,
  condition  = "ON",
  b0         = 0.3,
  alpha      = 0.005,
  T_max      = 300,
  cores      = 2
)
str(df_kime)
## 'data.frame':    400 obs. of  5 variables:
##  $ timeIndex: int  1 2 3 4 5 6 7 8 9 10 ...
##  $ runID    : int  1 1 1 1 1 1 1 1 1 1 ...
##  $ t        : num  0 3.8 7.59 11.39 15.19 ...
##  $ theta    : num  -0.0753 0.0579 -0.5625 0.4361 0.3204 ...
##  $ intensity: num  -0.5605 -0.2302 1.5587 0.0705 0.1293 ...
summary(df_kime$theta)
##     Min.  1st Qu.   Median     Mean  3rd Qu.     Max. 
## -5.82481 -0.68465 -0.02791 -0.06985  0.66743  6.89110
# 3) Interpolate in 2D with Tps
fit_tps <- interpolate_kimesurface(df_kime)
## Warning: 
## Grid searches over lambda (nugget and sill variances) with  minima at the endpoints: 
##   (GCV) Generalized Cross-Validation 
##    minimum at  right endpoint  lambda  =  1134.969 (eff. df= 3.001009 )
# 4) Evaluate on a regular grid & plot
p <- plot_kimesurface_tps(fit_tps, df_kime, 
                          n_time_plot=60, 
                          n_theta_plot=60)
p

3 Numerical Strategies

The numerical transforms of time-series to kimesurfaces tend to be more flexible but need careful handling in cases of sparse, incomplete or irregularly-sampled data.

3.1 Strategy A - Interpolating an irregularly sampled \((t,\theta)\) grid

Assuming time-dependent Laplace phases, the Strategy-A protocol includes:

  1. Simulating the ON and OFF fMRI data with multiple runs.
  2. For each stimulus condition:
  • Building an irregular data frame \(\{(t,\theta), y\}\), where each run/time point has a random Laplace-distributed phase \(\theta_{r}(t)\).
  • Fitting a thin-plate spline (TPS) using the fields package for 2D interpolation.
  • Evaluating the spline on a regular \((t,\theta)\)-grid and storing the result in a structure for visualize_kimesurfaces_triple() plotting.
  3. Displaying both the ON and OFF surfaces side-by-side and in overlay, using the visualize_kimesurfaces_triple(...) function.
library(plotly)
library(htmlwidgets)

visualize_kimesurfaces_triple <- function(on_surface, off_surface) {
  # Helper function to process surface data in chunks
  process_surface_data <- function(surface) {
    t_vals    <- unique(surface$grid$t)
    theta_vals<- unique(surface$grid$theta)
    
    chunk_size <- 10
    n_chunks <- ceiling(length(t_vals) / chunk_size)
    
    x_all <- y_all <- z_all <- NULL
    
    for(i in seq_len(n_chunks)) {
      start_idx <- (i - 1) * chunk_size + 1
      end_idx   <- min(i * chunk_size, length(t_vals))
      t_chunk   <- t_vals[start_idx:end_idx]
      
      # radial coords
      x_chunk <- outer(t_chunk, cos(theta_vals))
      y_chunk <- outer(t_chunk, sin(theta_vals))
      # pick the corresponding rows in the 'mean' matrix
      z_chunk <- surface$mean[start_idx:end_idx, ]
      
      x_all <- rbind(x_all, x_chunk)
      y_all <- rbind(y_all, y_chunk)
      z_all <- rbind(z_all, z_chunk)
    }
    
    list(x = x_all, y = y_all, z = z_all)
  }
  
  # Process both surfaces
  on_data  <- process_surface_data(on_surface)
  off_data <- process_surface_data(off_surface)
  
  # Reduce resolution for plotting
  n <- 3  # sample every 3rd row/col
  subsample <- function(mat) {
    mat[seq(1, nrow(mat), n), seq(1, ncol(mat), n)]
  }
  
  # (1) ON surface
  fig <- plot_ly() %>%
    add_surface(
      x = subsample(on_data$x),
      y = subsample(on_data$y),
      z = subsample(on_data$z),
      colorscale = "Viridis",
      showscale  = FALSE,
      name       = "ON",
      scene      = 'scene1'
    ) %>%
    layout(
      scene = list(
        domain     = list(x = c(0, 0.33), y = c(0, 1)),
        aspectmode = "cube",
        xaxis      = list(title = "X (t·cosθ)"),
        yaxis      = list(title = "Y (t·sinθ)"),
        zaxis      = list(title = "BOLD Response")
      )
    )
  
  # (2) OFF surface
  fig <- fig %>%
    add_surface(
      x = subsample(off_data$x),
      y = subsample(off_data$y),
      z = subsample(off_data$z),
      colorscale = "Viridis",
      showscale  = FALSE,
      name       = "OFF",
      scene      = 'scene2'
    ) %>%
    layout(
      scene2 = list(
        domain     = list(x = c(0.33, 0.66), y = c(0, 1)),
        aspectmode = "cube",
        xaxis      = list(title = "X (t·cosθ)"),
        yaxis      = list(title = "Y (t·sinθ)"),
        zaxis      = list(title = "BOLD Response")
      )
    )
  
  # (3) OVERLAY surfaces
  fig <- fig %>%
    add_surface(
      x = subsample(on_data$x),
      y = subsample(on_data$y),
      z = subsample(on_data$z),
      colorscale = "Viridis",
      opacity    = 0.7,
      showscale  = FALSE,
      name       = "ON (overlay)",
      scene      = 'scene3'
    ) %>%
    add_surface(
      x = subsample(off_data$x),
      y = subsample(off_data$y),
      z = subsample(off_data$z),
      colorscale = "Portland",
      opacity    = 0.7,
      showscale  = FALSE,
      name       = "OFF (overlay)",
      scene      = 'scene3'
    ) %>%
    layout(
      scene3 = list(
        domain     = list(x = c(0.66, 1), y = c(0, 1)),
        aspectmode = "cube",
        xaxis      = list(title = "X (t·cosθ)"),
        yaxis      = list(title = "Y (t·sinθ)"),
        zaxis      = list(title = "BOLD Response")
      )
    )
  
  # Final layout
  fig <- fig %>%
    layout(
      title      = "Kimesurface Visualization of fMRI ON and OFF Conditions",
      showlegend = TRUE,
      legend     = list(x = 0.85, y = 0.1),
      annotations= list(
        list(showarrow=FALSE, text='(ON)',      xref='paper', yref='paper',
             x=0.15, y=0.95, xanchor='center', yanchor='bottom',
             font=list(size=16)),
        list(showarrow=FALSE, text='(OFF)',     xref='paper', yref='paper',
             x=0.5,  y=0.95, xanchor='center', yanchor='bottom',
             font=list(size=16)),
        list(showarrow=FALSE, text='(OVERLAY)', xref='paper', yref='paper',
             x=0.85, y=0.95, xanchor='center', yanchor='bottom',
             font=list(size=16))
      )
    )
  
  # Add camera sync across the 3 scenes
  fig <- fig %>% onRender("
    function(el) {
      el.on('plotly_relayout', function(d) {
        const camera = Object.keys(d).filter((key) => /\\.camera$/.test(key));
        if (camera.length) {
          const scenes = Object.keys(el.layout).filter((key) => /^scene\\d*/.test(key));
          const new_layout = {};
          scenes.forEach(key => {
            new_layout[key] = {...el.layout[key], camera: {...d[camera]}};
          });
          Plotly.relayout(el, new_layout);
        }
      });
    }
  ")
  
  fig
}

Next, we simulate the ON/OFF fMRI data; in this simple example we create two arrays for the ON (stimulus) and OFF (rest) conditions.

simulate_fmri_data <- function(n_time = 80, n_runs = 5) {
  set.seed(123)
  
  # Example: 'ON' data
  arr_on <- array(
    rnorm(n_time * n_runs, mean=0, sd=1),
    dim = c(n_time, 1, n_runs)
  )
  # 'OFF' could be something different. For demonstration, let's shift it
  arr_off <- array(
    rnorm(n_time * n_runs, mean=1, sd=1),
    dim = c(n_time, 1, n_runs)
  )
  
  list(ON = arr_on, OFF = arr_off)
}

We can expand this to a more elaborate simulation with different time signals, HRF modulations, etc., using irregular \((t,\theta)\) sampling with a time-dependent Laplace prior. Next, we create a function that:

  1. Takes fmri_data[[condition]] (shape [time, 1, runs]).
  2. Aggregates to get a matrix \(\{(t_i, \text{run}_j)\} \to y_{i,j}\).
  3. For each (i, j), draws a random \(\theta_{ij}\sim\mathrm{Laplace}(0, b_0+\alpha t_i)\).
  4. Returns a data frame with columns (t, theta, intensity, runID, timeIndex).
library(parallel)
library(ExtDist)  # for rLaplace

build_irregular_kimesurface_data <- 
  function(fmri_data, condition = "ON", b0 = 0.3, alpha   = 0.005, T_max   = 300,
           cores   = parallel::detectCores()-1) {
  data_array <- fmri_data[[condition]]  # [n_time, 1, n_runs]
  n_time <- dim(data_array)[1]
  n_runs <- dim(data_array)[3]
  
  # Flatten in parallel
  cl <- makeCluster(cores)
  y_mat <- parApply(cl, data_array, c(1,3), mean)
  stopCluster(cl)
  
  # Time vector 0..T_max
  t_grid <- seq(0, T_max, length.out=n_time)
  
  # Expand (timeIndex, runID)
  df <- expand.grid(
    timeIndex = seq_len(n_time),
    runID     = seq_len(n_runs),
    KEEP.OUT.ATTRS = FALSE
  )
  # Add actual continuous time
  df$t <- t_grid[df$timeIndex]
  
  set.seed(999)
  # For each row => random angle from Laplace(0, b0 + alpha * t)
  df$theta <- mapply(function(tt) {
    scale_i <- b0 + alpha * tt
    rLaplace(1, mu=0, b=scale_i)
  }, df$t)
  
  # Attach intensities from y_mat in the correct order
  # By default, expand.grid varies timeIndex (its first factor) fastest => matches the column-major c(y_mat)
  df$intensity <- c(y_mat)
  
  df
}

As the random phases yield scattered point-cloud data, we need to employ 2D interpolation with fields::Tps to construct a kime-surface approximation over a regular grid for plotting. This process requires

  1. Taking the irregular data frame \((t, \theta, intensity)\).
  2. Fitting a thin-plate spline in 2D.
  3. Evaluating on a regular \((t,\theta)\)-grid of size \((n_t \times n_\theta)\).
  4. Returning a structure with a “grid” data frame and a 2D matrix mean, matching what visualize_kimesurfaces_triple() expects.
construct_kimesurface_interpolated <- 
  function(df_kime, n_t = 50, n_theta = 50, lambda  = 0.01){  # 0 for no smoothing (exact interp)
  # 1) Fit Tps model
  fit <- Tps( x = cbind(df_kime$t, df_kime$theta), Y = df_kime$intensity, lambda = lambda)
    # or control smoothing with df=..., or leave defaults
  
  # 2) Build a regular grid in the (t, theta) domain
  t_min    <- min(df_kime$t)
  t_max    <- max(df_kime$t)
  theta_min<- min(df_kime$theta)
  theta_max<- max(df_kime$theta)
  
  t_seq    <- seq(t_min, t_max, length.out=n_t)
  theta_seq<- seq(theta_min, theta_max, length.out=n_theta)
  
  # 3) Evaluate Tps on that grid
  grid_mat <- as.matrix(expand.grid(t_seq, theta_seq))
  pred_vals<- predict(fit, grid_mat)
  
  # 4) Reshape => [n_t, n_theta]
  pred_mat <- matrix(pred_vals, nrow=n_t, ncol=n_theta, byrow=FALSE)
  
  # 5) Construct the 'grid' data frame
  df_grid <- expand.grid(
    t     = t_seq,
    theta = theta_seq,
    KEEP.OUT.ATTRS = FALSE
  )
  # Make sure numeric
  df_grid$t     <- as.numeric(df_grid$t)
  df_grid$theta <- as.numeric(df_grid$theta)
  
  # 6) Return in the format needed by visualize_kimesurfaces_triple
  #    with 'mean' = [n_t x n_theta]
  list(
    grid   = df_grid,
    mean   = pred_mat,
    t_grid = t_seq,
    theta  = theta_seq
  )
}

Notice we set lambda=0.01 to reduce smoothing and see more surface details. In practice, we can tweak this parameter for coarser or finer kimesurface detail, as needed.

#############################################
## 0) Libraries
#############################################
library(parallel)
library(fields)   # Tps for 2D interpolation
library(plotly)
library(ExtDist)  # For rLaplace if not built-in

#############################################
## 1) Simulate fMRI data (ON/OFF)
#############################################
simulate_fmri_data <- function(n_time = 80, n_runs = 5) {
  # Returns a list with "ON" and "OFF" conditions, each [n_time, 1, n_runs].
  # For demonstration, "OFF" is just negative of "ON."
  set.seed(123)
  arr_on <- array(rnorm(n_time * n_runs, mean=0, sd=1),
                  dim = c(n_time, 1, n_runs))
  
  list(
    ON  = arr_on,
    OFF = -arr_on  # trivial difference
  )
}

#############################################
## 2) Build Irregular (t, theta, intensity)
##    with time-dependent Laplace angles
#############################################
build_irregular_kimesurface_data <- function(fmri_data,
                                             condition = "ON",
                                             b0 = 0.3,
                                             alpha = 0.005,
                                             T_max = 300,
                                             cores = detectCores()-1) {
  # fmri_data[[condition]] is [n_time, 1, n_runs].
  data_array <- fmri_data[[condition]]
  
  n_time <- dim(data_array)[1]
  n_runs <- dim(data_array)[3]
  
  # Flatten intensities: [time, run]
  cl <- makeCluster(cores)
  y_mat <- parApply(cl, data_array, c(1,3), mean) 
  stopCluster(cl)
  
  # Build a time vector from 0..T_max
  t_grid <- seq(0, T_max, length.out = n_time)
  
  # We'll create one row per (time_i, run_j).
  df <- expand.grid(
    timeIndex = seq_len(n_time),
    runID     = seq_len(n_runs),
    KEEP.OUT.ATTRS = FALSE
  )
  # Add the continuous time
  df$t <- t_grid[df$timeIndex]
  
  # For each row => draw random Laplace(0, b0 + alpha*t).
  set.seed(999)
  df$theta <- mapply(function(tt) {
    scale_i <- b0 + alpha*tt
    rLaplace(1, mu=0, b=scale_i)
  }, df$t)
  
  # Map the intensity from y_mat -> vector
  y_vec <- c(y_mat)
  df$intensity <- y_vec
  
  df
}

#############################################
## 3) Fit a 2D Thin-Plate Spline over (t,theta)
#############################################
fit_kimesurface_interpolation <- function(df, lambda=NULL, df_smooth=NULL) {
  # df has columns (t, theta, intensity)
  # Tps automatically chooses a smoothing parameter by default,
  # but you can override with lambda=0 or set df=... for less smoothing.
  
  # Prepare inputs
  X <- cbind(df$t, df$theta)
  Y <- df$intensity
  
  if(!is.null(lambda)) {
    # Force a specific lambda (e.g. 0 => exact interpolation)
    fit <- Tps(x=X, Y=Y, lambda=lambda)
  } else if(!is.null(df_smooth)) {
    # Force a certain degrees of freedom
    fit <- Tps(x=X, Y=Y, df=df_smooth)
  } else {
    # Let Tps choose automatically
    fit <- Tps(x=X, Y=Y)
  }
  
  fit
}

#############################################
## 4) Construct a regular grid & predict
##    => Return an object for triple-plot
#############################################
construct_kimesurface_interpolated <- function(fit,
                                               df,
                                               n_time_plot=50,
                                               n_theta_plot=50) {
  # We'll build a regular grid in t & theta over the range of the data,
  # then evaluate 'fit' (the Tps model) there, returning a matrix of predictions.
  
  t_min <- min(df$t);    t_max <- max(df$t)
  th_min<- min(df$theta);th_max<- max(df$theta)
  
  t_seq    <- seq(t_min,  t_max,   length.out=n_time_plot)
  theta_seq<- seq(th_min, th_max,  length.out=n_theta_plot)
  
  # Predict for all combos
  grid <- expand.grid(t=t_seq, theta=theta_seq, KEEP.OUT.ATTRS=FALSE)
  
  pred_vals <- predict(fit, as.matrix(grid))
  
  # Reshape => [n_time_plot, n_theta_plot]
  pred_mat  <- matrix(pred_vals,
                      nrow=n_time_plot,
                      ncol=n_theta_plot,
                      byrow=FALSE)
  
  # Return a structure that "visualize_kimesurfaces_triple()" expects:
  list(
    grid = grid,       # data.frame with columns 't', 'theta'
    mean = pred_mat,   # 2D matrix of dimension [n_time_plot, n_theta_plot]
    timepoints = t_seq,
    phases     = theta_seq
  )
}

#############################################
## 5) The triple visualization function
#############################################
visualize_kimesurfaces_triple <- function(on_surface, off_surface) {
  # Helper to chunk surface data
  process_surface_data <- function(surface) {
    t_vals    <- unique(surface$grid$t)
    theta_vals<- unique(surface$grid$theta)
    
    chunk_size <- 10
    n_chunks <- ceiling(length(t_vals) / chunk_size)
    
    x_all <- y_all <- z_all <- NULL
    
    for(i in seq_len(n_chunks)) {
      start_idx <- (i-1)*chunk_size + 1
      end_idx   <- min(i*chunk_size, length(t_vals))
      t_chunk   <- t_vals[start_idx:end_idx]
      
      # radial coords => x=r*cosθ, y=r*sinθ, r=t
      x_chunk <- outer(t_chunk, cos(theta_vals))
      y_chunk <- outer(t_chunk, sin(theta_vals))
      z_chunk <- surface$mean[start_idx:end_idx, ]
      
      x_all <- rbind(x_all, x_chunk)
      y_all <- rbind(y_all, y_chunk)
      z_all <- rbind(z_all, z_chunk)
    }
    
    list(x=x_all, y=y_all, z=z_all)
  }
  
  # Process surfaces
  on_data  <- process_surface_data(on_surface)
  off_data <- process_surface_data(off_surface)
  
  # Downsample for plotting
  n <- 3
  subsample <- function(mat) {
    mat[seq(1, nrow(mat), n), seq(1, ncol(mat), n)]
  }
  
  fig <- plot_ly() %>%
    # First: ON
    add_surface(
      x = subsample(on_data$x),
      y = subsample(on_data$y),
      z = subsample(on_data$z),
      colorscale = "Viridis",
      showscale  = FALSE,
      name       = "ON",
      scene      = "scene1"
    ) %>%
    layout(
      scene = list(
        domain     = list(x = c(0, 0.33), y = c(0, 1)),
        aspectmode = "cube",
        xaxis = list(title = "X (t·cosθ)"),
        yaxis = list(title = "Y (t·sinθ)"),
        zaxis = list(title = "Intensity")
      )
    ) %>%
    # Second: OFF
    add_surface(
      x = subsample(off_data$x),
      y = subsample(off_data$y),
      z = subsample(off_data$z),
      colorscale = "Viridis",
      showscale  = FALSE,
      name       = "OFF",
      scene      = "scene2"
    ) %>%
    layout(
      scene2 = list(
        domain     = list(x = c(0.33, 0.66), y = c(0, 1)),
        aspectmode = "cube",
        xaxis = list(title = "X (t·cosθ)"),
        yaxis = list(title = "Y (t·sinθ)"),
        zaxis = list(title = "Intensity")
      )
    ) %>%
    # Third: Overlay
    add_surface(
      x = subsample(on_data$x),
      y = subsample(on_data$y),
      z = subsample(on_data$z),
      colorscale = "Viridis",
      opacity    = 0.7,
      showscale  = FALSE,
      name       = "ON (overlay)",
      scene      = "scene3"
    ) %>%
    add_surface(
      x = subsample(off_data$x),
      y = subsample(off_data$y),
      z = subsample(off_data$z),
      colorscale = "Portland",
      opacity    = 0.7,
      showscale  = FALSE,
      name       = "OFF (overlay)",
      scene      = "scene3"
    ) %>%
    layout(
      scene3 = list(
        domain     = list(x = c(0.66, 1), y = c(0, 1)),
        aspectmode = "cube",
        xaxis = list(title = "X (t·cosθ)"),
        yaxis = list(title = "Y (t·sinθ)"),
        zaxis = list(title = "Intensity")
      )
    ) %>%
    layout(
      title      = "Kimesurface Visualization of fMRI ON and OFF",
      showlegend = TRUE,
      legend     = list(x=0.85, y=0.1),
      annotations= list(
        list(showarrow=FALSE, text='(ON)',  xref='paper', yref='paper',
             x=0.15, y=0.95, xanchor='center', yanchor='bottom',
             font=list(size=16)),
        list(showarrow=FALSE, text='(OFF)', xref='paper', yref='paper',
             x=0.5,  y=0.95, xanchor='center', yanchor='bottom',
             font=list(size=16)),
        list(showarrow=FALSE, text='(OVERLAY)',
             xref='paper', yref='paper', x=0.85, y=0.95,
             xanchor='center', yanchor='bottom',
             font=list(size=16))
      )
    ) %>%
    # Sync the cameras
    onRender("
      function(el) {
        el.on('plotly_relayout', function(d) {
          const camera = Object.keys(d).filter((key) => /\\.camera$/.test(key));
          if (camera.length) {
            const scenes = Object.keys(el.layout).filter((key) => /^scene\\d*/.test(key));
            const new_layout = {};
            scenes.forEach(key => {
              new_layout[key] = {...el.layout[key], camera: {...d[camera]}};
            });
            Plotly.relayout(el, new_layout);
          }
        });
      }
    ")
  
  fig
}

################ NEW
#############################################
## 5) The triple visualization function
#############################################
visualize_kimesurfaces_tripleNEW <- function(fitOn, dfOn, fitOff, dfOff, n_time_plot=50, n_theta_plot=50) {
  t_min <- min(dfOn$t);     t_max <- max(dfOn$t)
  th_min<- min(dfOn$theta); th_max<- max(dfOn$theta)
  
  t_seq    <- seq(t_min,  t_max,   length.out=n_time_plot)
  theta_seq<- seq(th_min, th_max,  length.out=n_theta_plot)
  
  grid_mat <- as.matrix(expand.grid(t_seq, theta_seq))
  pred_valsOn  <- predict(fitOn, grid_mat)
  pred_valsOff <- predict(fitOff, grid_mat)
  
  pred_matOn <- matrix(pred_valsOn, nrow=n_time_plot, ncol=n_theta_plot, byrow=FALSE)
  pred_matOff <- matrix(pred_valsOff, nrow=n_time_plot, ncol=n_theta_plot, byrow=FALSE)
  
  # Build three synchronized surface scenes over the regular (t, theta) grid
  fig <- plot_ly() %>%
    # First: ON
    add_surface(
      x = t_seq,
      y = theta_seq,
      z = ~pred_matOn,
      colorscale = "Viridis",
      showscale  = FALSE,
      name       = "ON",
      scene      = "scene1"
    ) %>%
    layout(
      scene = list(
        domain     = list(x = c(0, 0.33), y = c(0, 1)),
        aspectmode = "cube",
        xaxis = list(title = "time (t)"),
        yaxis = list(title = "phase (θ)"),
        zaxis = list(title = "Intensity")
      )
    ) %>%
    # Second: OFF
    add_surface(
      x = t_seq,
      y = theta_seq,
      z = ~pred_matOff,
      colorscale = "Viridis",
      showscale  = FALSE,
      name       = "OFF",
      scene      = "scene2"
    ) %>%
    layout(
      scene2 = list(
        domain     = list(x = c(0.33, 0.66), y = c(0, 1)),
        aspectmode = "cube",
        xaxis = list(title = "time (t)"),
        yaxis = list(title = "phase (θ)"),
        zaxis = list(title = "Intensity")
      )
    ) %>%
    # Third: Overlay
    add_surface(
      x = t_seq,
      y = theta_seq,
      z = ~pred_matOn,
      colorscale = "Viridis",
      opacity    = 0.5,
      showscale  = FALSE,
      name       = "ON (overlay)",
      scene      = "scene3"
    ) %>%
    add_surface(
      x = t_seq,
      y = theta_seq,
      z = ~pred_matOff,
      colorscale = "Portland",
      opacity    = 0.8,
      showscale  = FALSE,
      name       = "OFF (overlay)",
      scene      = "scene3"
    ) %>%
    layout(
      scene3 = list(
        domain     = list(x = c(0.66, 1), y = c(0, 1)),
        aspectmode = "cube",
        xaxis = list(title = "time (t)"),
        yaxis = list(title = "phase (θ)"),
        zaxis = list(title = "Intensity")
      )
    ) %>%
    layout(
      title      = "Kimesurface Visualization of fMRI ON and OFF",
      showlegend = TRUE,
      legend     = list(x=0.85, y=0.1),
      annotations= list(
        list(showarrow=FALSE, text='(ON)',  xref='paper', yref='paper',
             x=0.15, y=0.95, xanchor='center', yanchor='bottom',
             font=list(size=16)),
        list(showarrow=FALSE, text='(OFF)', xref='paper', yref='paper',
             x=0.5,  y=0.95, xanchor='center', yanchor='bottom',
             font=list(size=16)),
        list(showarrow=FALSE, text='(OVERLAY)',
             xref='paper', yref='paper', x=0.85, y=0.95,
             xanchor='center', yanchor='bottom',
             font=list(size=16))
      )
    ) %>%
    # Sync the cameras
    onRender("
      function(el) {
        el.on('plotly_relayout', function(d) {
          const camera = Object.keys(d).filter((key) => /\\.camera$/.test(key));
          if (camera.length) {
            const scenes = Object.keys(el.layout).filter((key) => /^scene\\d*/.test(key));
            const new_layout = {};
            scenes.forEach(key => {
              new_layout[key] = {...el.layout[key], camera: {...d[camera]}};
            });
            Plotly.relayout(el, new_layout);
          }
        });
      }
    ")
  
  fig
}


#############################################
## 6) Usage: Full Pipeline
#############################################

# 1) Simulate data
fmri_data <- simulate_fmri_data(n_time=80, n_runs=5)

# 2) Build irregular (t,theta,intensity) with Laplace(0, b(t))
df_on  <- build_irregular_kimesurface_data(fmri_data, "ON",  b0=0.3, alpha=0.02, T_max=300)
df_off <- build_irregular_kimesurface_data(fmri_data, "OFF", b0=0.3, alpha=0.02, T_max=300)

# 3) Fit Tps in 2D
fit_on  <- fit_kimesurface_interpolation(df_on,  lambda=0.001)  # small lambda => minimal smoothing
## Warning: 
## Grid searches over lambda (nugget and sill variances) with  minima at the endpoints: 
##   (GCV) Generalized Cross-Validation 
##    minimum at  right endpoint  lambda  =  1271.662 (eff. df= 3.000885 )
fit_off <- fit_kimesurface_interpolation(df_off, lambda=0.001)
## Warning: 
## Grid searches over lambda (nugget and sill variances) with  minima at the endpoints: 
##   (GCV) Generalized Cross-Validation 
##    minimum at  right endpoint  lambda  =  1271.662 (eff. df= 3.000885 )
# pOn <- plot_kimesurface_tps(fit_on, df_on, n_time_plot=60, n_theta_plot=60)
# pOn
# pOff <- plot_kimesurface_tps(fit_off, df_off, n_time_plot=60, n_theta_plot=60)
# pOff
# 5) Visualize triple
fig <- visualize_kimesurfaces_tripleNEW(fit_on, df_on, fit_off, df_off)
fig

The 3D graph shows three synchronized scenes (ON, OFF, Overlay). In visualize_kimesurfaces_tripleNEW() each panel plots the interpolated surface over the rectangular \((t,\theta)\) grid, while visualize_kimesurfaces_triple() uses the radial embedding \(\bigl(x,y,z\bigr) = \bigl(t\cos\theta,\;t\sin\theta,\;\text{fMRI-BOLD}\bigr)\). Because we set alpha=0.02, the phase-angle distribution can get fairly wide by \(t=300\). If the plots look overly flat or constant, check whether the data contain real differences or variation; pure noise with no \((t,\theta)\) correlation can produce a nearly flat interpolation. Adjusting the parameter \(\lambda\) controls the kimesurface smoothness.
Larger \(\alpha\) or \(b_0\) values increase the spread of the random phase angles.
Larger or smaller n_time and n_runs may alter the pattern of the constructed kimesurface.
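For example, to explore these effects we can rebuild the irregular data with a wider phase spread and re-interpolate, reusing the functions defined above; the parameter values here are arbitrary illustrations.

# Wider phase spread: larger b0 and alpha (arbitrary illustrative values)
df_on_wide  <- build_irregular_kimesurface_data(fmri_data, "ON",
                                                b0 = 0.6, alpha = 0.05, T_max = 300)
fit_on_wide <- fit_kimesurface_interpolation(df_on_wide, lambda = 0.001)

# Compare against the original OFF fit from the pipeline above
fig_wide <- visualize_kimesurfaces_tripleNEW(fit_on_wide, df_on_wide, fit_off, df_off)
# fig_wide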

  1. For irregularly sampled data over \((t,\theta)\), each \((time, run)\) pair has a random phase \(\theta_{r}(t)\sim\mathrm{Laplace}(0, b_0 + \alpha t)\). This yields a scattered set of \(\{(t,\theta)\}\).
  2. 2D Interpolation: We use the function fields::Tps() to get a continuous function \(\hat{f}(t,\theta)\).
  3. Regular Grid: We then sample \(\hat{f}\) on a rectangular \((t,\theta)\)-grid and use visualize_kimesurfaces_triple(...) to display the kimesurfaces constructed with outer(t, cos(theta)).

3.2 Second approach (Strategy B) Interpolating Irregularly Sampled Data in Fourier k-Space

The next approach involves Fourier-transforming each time-series run \(\{y_{i,j}\}\) from the time domain to the frequency domain, constructing a 2D k-space representation using the random phases \(\theta_j(t)\), possibly interpolating in \((frequency, phase)\), i.e., \((k, \theta)\)-space, inverting the 2D Fourier transform to return to \(time \times phase\), and plotting the resulting kimesurface. This requires careful definition of what we are Fourier-transforming, because the original data are irregular in \(\theta\).

Typically, a Fourier transform is applied to a 1D or 2D function defined on a regular grid. When the time points are uneven or the phase \(\theta\) is random, we encounter a non-uniform or irregular FFT (sometimes called NUFFT). Mapping each observed fMRI run \(\{y_{i,j}\}\) to the frequency domain requires a 1D transform in time. But the data also vary in \(\theta\), which requires careful handling to incorporate that second dimension in the frequency domain. For the 2D Fourier transform \((t,\theta)\to (k_t,k_\theta)\), we still need the data on a rectangular grid \((t,\theta)\) for the standard 2D FFT, but in reality we have an irregular grid because \(\theta_j(t_i)\) is a function of time for each run. Hence, applying a standard 2D Fourier transform (with fft or fft2) requires an interpolation pre-processing step that moves the irregular data onto a regular grid \((t,\theta)\). The protocol involves the following steps.

  • Collection of the \((t,\theta(t),y)\) data from all runs.
  • Interpolation to a uniform grid in \((t,\theta)\).
  • Application of the 2D FFT to that grid to obtain the “k-space” data.
  • Optional manipulation or filtering in k-space.
  • Inversion of the 2D FFT back to a uniform \((t,\theta)\)-grid.

This is reminiscent of MRI image reconstruction from non-Cartesian k-space data. In MRI, k-space is sampled along radial or spiral trajectories, gridding or NUFFT produces a uniform sampling in k-space, and an inverse FFT finally yields the spatial image. By analogy, Strategy B starts with the “irregularly sampled” data in \((t,\theta)\) (the real domain), which is FFT transformed, manipulated, and FFT inverted. Potentially we can do spectral smoothing or filtering in \((k_t,k_\theta)\)-space that might yield different reconstructions than local interpolation in \((t,\theta)\). When the data are naturally periodic or quasi-periodic in time or \(\theta\), a Fourier approach might be more direct.

We still need to handle the irregular sampling in \((t,\theta)\). The standard 2D FFT requires uniform spacing, so an interpolation or NUFFT step is required. If \(-\infty < \theta < \infty\) (Laplace with unbounded domain), we may need a finite domain or a periodic assumption to do a standard FFT; usually we might forcibly wrap \(\theta\in[-\pi,\pi)\). A naive approach (interpolate data \(\to\) 2D FFT \(\to\) manipulate \(\to\) inverse 2D FFT \(\to\) re-sample) may introduce artifacts. The Fourier approach (Strategy B) can be more powerful if we want to do frequency-based filtering or to exploit strong signal periodicities.
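For instance, one simple (illustrative) pre-processing choice is to wrap the unbounded Laplace draws onto a periodic domain before any gridding or FFT step.

# Wrap unbounded Laplace phase draws onto [-pi, pi) before any periodic/FFT treatment
wrap_theta <- function(theta) ((theta + pi) %% (2 * pi)) - pi

theta_raw <- c(-5.82, -0.68, -0.03, 0.67, 6.89)  # values like summary(df_kime$theta) above
wrap_theta(theta_raw)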

The Strategy-B protocol involves the following steps:

  1. simulating multi-run fMRI data with random time-dependent Laplace angles \(\theta_{r,i}\).
  2. collecting \(\{(t,\theta_{r}(t), y)\}\) from each run/time point.
  3. interpolating those scattered points onto a uniform \((t,\theta)\)-grid \((t_m,\theta_n)\).
  4. applying the 2D FFT to obtain the k-space representation \((k_t, k_\theta)\).
  5. optionally, manipulating or filtering the k-space kimesurface.
  6. inverting the 2D FFT to get a reconstructed kimesurface in \((t,\theta)\).
  7. mapping \((t,\theta) \mapsto (x,y,z)\) with \(\,x = t\cos(\theta)\) for 3D plotting.

This example is intentionally oversimplified. Real pipeline workflows may also incorporate windowing (to reduce FFT boundary artifacts), zero-padding, or a NUFFT approach to avoid the interpolation step. Also, the 2D FFT typically assumes periodicity in both dimensions, but real data in time or phase angles might not be truly periodic.
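As one illustration of such windowing (a standard Hann taper, not applied anywhere in the Strategy-B code below), the interpolated matrix could be tapered before the forward FFT.

# Hann (raised-cosine) taper in both dimensions to reduce FFT boundary artifacts.
# Purely illustrative; the Strategy-B pipeline below does not apply any window.
hann <- function(n) 0.5 * (1 - cos(2 * pi * seq(0, n - 1) / (n - 1)))

apply_2d_window <- function(Z) {
  W <- outer(hann(nrow(Z)), hann(ncol(Z)))   # separable 2D Hann window
  Z * W
}

# e.g., Z_windowed <- apply_2d_window(grid_obj$Z) before calling fft2d(Z_windowed)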

library(parallel)
library(fields)   # We'll reuse Tps for scattered -> uniform grid interpolation
library(plotly)
library(ExtDist)  # for rLaplace

## 2) Simulate Multi-Run fMRI Data + Time-Dependent Laplace Angles
# Similar to “Strategy A,” but we’ll keep it separate for clarity. Each `(time_i, run_j)` 
# has a random draw $\theta_{r}(t_i)\sim\mathrm{Laplace}(0,\,b_0 + \alpha\,t_i)$. 
# We store these in a data frame.

simulate_fmri_data <- function(n_time = 80, n_runs = 5) {
  # For demonstration, we store ON and OFF
  set.seed(123)
  arr_on <- array(rnorm(n_time * n_runs, mean=0, sd=1), dim = c(n_time, 1, n_runs))
  list(ON  = arr_on, OFF = -arr_on)
}

build_irregular_kimesurface_data <- function(fmri_data, condition = "ON",
                                             b0 = 0.3, alpha = 0.005, T_max = 300,
                                             cores = parallel::detectCores()-1) {
  data_array <- fmri_data[[condition]]
  n_time <- dim(data_array)[1]
  n_runs <- dim(data_array)[3]
  
  cl <- makeCluster(cores)
  y_mat <- parApply(cl, data_array, c(1,3), mean)
  stopCluster(cl)
  
  # time from 0..T_max
  t_grid <- seq(0, T_max, length.out = n_time)
  
  df <- expand.grid(
    timeIndex = seq_len(n_time),
    runID     = seq_len(n_runs),
    KEEP.OUT.ATTRS = FALSE
  )
  df$t <- t_grid[df$timeIndex]
  
  # random Laplace(0, b0+alpha*t)
  set.seed(999)
  df$theta <- mapply(function(tt) {
    scale_i <- b0 + alpha*tt
    rLaplace(1, mu=0, b=scale_i)
  }, df$t)
  
  df$intensity <- c(y_mat)
  
  df
}

### Example Usage:

fmri_data <- simulate_fmri_data(n_time=80, n_runs=5)
df_on  <- build_irregular_kimesurface_data(fmri_data, "ON",  b0=0.3, alpha=0.02, T_max=300)
df_off <- build_irregular_kimesurface_data(fmri_data, "OFF", b0=0.3, alpha=0.02, T_max=300)

The resulting df_on and df_off each have \((80\times 5)=400\) irregular points in \((t,\theta)\) with intensities. To perform a 2D FFT, we need a rectangular matrix \(\{Z_{m,n}\}\) of intensities at regular \((t_m,\theta_n)\). We can do this interpolation with a scattered-data approach (e.g., fields::Tps or akima::interp). The final dimension must be \(n_t \times n_\theta\).

## 3) Interpolate onto a Uniform $(t,\theta)$-Grid
interpolate_to_grid <- function(df, n_time_plot = 50, n_theta_plot= 50, lambda= 0) { # less smoothing
  # 1) Scatter -> Tps fit
  X <- cbind(df$t, df$theta)
  Y <- df$intensity
  
  # A thin-plate spline with minimal smoothing
  fit <- Tps(X, Y, lambda=lambda)
  
  # 2) Regular grid
  t_min <- min(df$t);   t_max <- max(df$t)
  th_min<- min(df$theta); th_max<- max(df$theta)
  
  t_seq    <- seq(t_min,  t_max,   length.out=n_time_plot)
  theta_seq<- seq(th_min, th_max,  length.out=n_theta_plot)
  
  grid     <- expand.grid(t=t_seq, theta=theta_seq, KEEP.OUT.ATTRS=FALSE)
  Z_pred   <- predict(fit, as.matrix(grid))
  
  # 3) Reshape => [n_time_plot, n_theta_plot]
  Z_mat <- matrix(Z_pred, nrow=n_time_plot, ncol=n_theta_plot, byrow=FALSE)
  
  list(
    t_seq      = t_seq,
    theta_seq  = theta_seq,
    Z          = Z_mat,     # 2D matrix
    fit_object = fit        # might be used for debugging
  )
}

## 4) 2D FFT in R
# Base R's fft() can already transform a matrix/array directly (multidimensional DFT), but for clarity we define an explicit 2D FFT by applying the 1D FFT column-wise, then row-wise. For an $(M \times N)$ matrix `Z`, we can do:

fft2d <- function(Z, inverse=FALSE) {
  # 1) FFT each column
  # If inverse=TRUE => we use fft(z, inverse=TRUE)
  M <- nrow(Z)
  N <- ncol(Z)
  
  # Step 1: transform columns
  for (j in seq_len(N)) {
    Z[,j] <- fft(Z[,j], inverse=inverse)
  }
  
  # Step 2: transform rows
  for (i in seq_len(M)) {
    Z[i,] <- fft(Z[i,], inverse=inverse)
  }
  
  # Note: For an *unscaled* DFT, the inverse also is unscaled.
  # Typically, you'd divide by M*N if you want the "true" inverse.
  # We'll handle scaling below.
  
  Z
}

The base R fft(..., inverse=TRUE) does not automatically divide by the array size. Typically, the true inverse DFT formula is \[X_{m,n} = \frac{1}{MN} \sum_{k=0}^{M-1}\sum_{\ell=0}^{N-1} F_{k,\ell}\, e^{2\pi i (\dots)}.\] So if we do an inverse transform, we might need to manually divide by \(M\times N\). The next function interpolates the scattered data onto a regular grid \(Z(t,\theta)\), applies a 2D FFT \(\to\) F(k_t, k_theta), optionally filters or manipulates F, and inverts the 2D FFT \(\to\) reconstructed Z_ifft.
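A quick sanity check of this scaling convention, using the fft2d() helper above on a small random matrix:

# Round-trip check: forward 2D FFT, unscaled inverse, then divide by M*N
set.seed(1)
Z_test <- matrix(rnorm(6 * 4), nrow = 6, ncol = 4)
Z_back <- Re(fft2d(fft2d(Z_test), inverse = TRUE)) / (6 * 4)
max(abs(Z_back - Z_test))   # numerically ~0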

## 5) Constructing the k-Space Kimesurface
construct_kimesurface_fourier <- function(df, n_time_plot  = 50, n_theta_plot = 50,
                                          lambda_interp= 0, filter_fraction=1.0) {
  ########################################
  # 1) Interpolate to uniform grid
  ########################################
  grid_obj <- interpolate_to_grid(
    df,
    n_time_plot   = n_time_plot,
    n_theta_plot  = n_theta_plot,
    lambda        = lambda_interp
  )
  Z <- grid_obj$Z   # shape [n_time_plot, n_theta_plot]
  M <- nrow(Z)
  N <- ncol(Z)
  
  ########################################
  # 2) Forward 2D FFT => F in k-space
  ########################################
  F_k <- fft2d(Z, inverse=FALSE)
  
  # 'F_k' is an unscaled DFT. Its indices correspond to frequencies in rows, columns.
  # Typically the DC component is at [1,1] or [0,0]. 
  # If we want to do a "centered" transform, we might shift. We'll keep it simple here.
  
  ########################################
  # 3) Optional: Filter or manipulate in k-space
  ########################################
  # For demonstration, let's do a simple "low-pass" or partial filter if filter_fraction < 1.0
  # We'll keep only the top fraction of largest Fourier magnitudes, set the rest to 0.
  
  if(filter_fraction < 1.0) {
    mag <- Mod(F_k)
    cutoff <- quantile(mag, probs=1-filter_fraction)
    F_k[mag < cutoff] <- 0
  }
  
  ########################################
  # 4) Inverse 2D FFT => Z_ifft
  ########################################
  Z_ifft <- fft2d(F_k, inverse=TRUE)
  # Now we must divide by M*N for the "true" inverse in typical DFT conventions:
  Z_ifft <- Z_ifft / (M*N)
  
  # Re(Z_ifft) is typically your real output, but it might have small imaginary noise
  Z_ifft <- Re(Z_ifft)  # drop imaginary part if negligible
  
  ########################################
  # 5) Return everything for plotting
  ########################################
  list(
    t_seq      = grid_obj$t_seq,
    theta_seq  = grid_obj$theta_seq,
    Z_original = Z,
    Z_recon    = Z_ifft
  )
}
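As a quick illustration of the filter_fraction thresholding step used inside construct_kimesurface_fourier(), here is the same operation on a small toy matrix (values are illustrative only).

# Hard-threshold a toy k-space matrix: keep the top 30% of coefficients by magnitude
set.seed(2)
F_toy  <- matrix(complex(real = rnorm(16), imaginary = rnorm(16)), nrow = 4)
mag    <- Mod(F_toy)
cutoff <- quantile(mag, probs = 1 - 0.3)
F_toy[mag < cutoff] <- 0
mean(F_toy != 0)   # roughly 0.3 of the entries survive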

Using the filter_fraction parameter, e.g., filter_fraction=0.3, we keep only the top \(30\%\) largest Fourier coefficients by magnitude, setting the others to zero. This is a naive “hard thresholding”; the inverse transform then yields a “smoothed” surface. The next function takes the output of construct_kimesurface_fourier(...), which has t_seq, theta_seq, and a matrix Z_recon, and plots it via plot_ly() in \((x,y,z)\) space \[x = t \cdot \cos(\theta),\quad y = t \cdot \sin(\theta),\quad z = \text{intensity}.\]

plot_kimesurface_fourier <- function(kimesurf_obj,
                                     use_reconstructed=TRUE,
                                     title="Fourier Kimesurface") {
  # kimesurf_obj has:
  #   t_seq, theta_seq, Z_original, Z_recon
  t_seq    <- kimesurf_obj$t_seq
  theta_seq<- kimesurf_obj$theta_seq
  
  Z_mat <- if(use_reconstructed) kimesurf_obj$Z_recon
           else                 kimesurf_obj$Z_original
  M <- length(t_seq)
  N <- length(theta_seq)
  
  # Build the radial coordinate system
  x_mat <- outer(t_seq, cos(theta_seq))
  y_mat <- outer(t_seq, sin(theta_seq))
  
  plot_ly() %>%
    add_surface(
      x = ~x_mat,
      y = ~y_mat,
      z = ~Z_mat,
      colorscale = "Viridis"
    ) %>%
    layout(
      title = title,
      scene = list(
        xaxis = list(title="X = t cos(θ)"),
        yaxis = list(title="Y = t sin(θ)"),
        zaxis = list(title="Intensity")
      )
    )
}

### Also, to plot the original (unfiltered) surface:
# plot_kimesurface_fourier(kimesurf_obj, use_reconstructed=FALSE, title="Original Kimesurface")

##############################
# End-to-End Demo: Strategy B
##############################

# A) Simulate data
fmri_data <- simulate_fmri_data(n_time=80, n_runs=5)

# B) Build random Laplace-distributed phase angles => irregular (t, θ, intensity) samples
df_on  <- build_irregular_kimesurface_data(
  fmri_data, "ON",
  b0=0.3, alpha=0.02, T_max=300
)
df_off <- build_irregular_kimesurface_data(
  fmri_data, "OFF",
  b0=0.3, alpha=0.02, T_max=300
)

# C) For each condition => interpolate, do 2D FFT, etc.
kimesurf_on <- construct_kimesurface_fourier(
  df_on,
  n_time_plot  = 60, 
  n_theta_plot = 60,
  lambda_interp= 0,          # no smoothing in Tps interpolation
  filter_fraction= 0.5       # keep top 50% largest Fourier components
)
## Warning: 
## Grid searches over lambda (nugget and sill variances) with  minima at the endpoints: 
##   (GCV) Generalized Cross-Validation 
##    minimum at  right endpoint  lambda  =  1271.662 (eff. df= 3.000885 )
kimesurf_off <- construct_kimesurface_fourier(
  df_off,
  n_time_plot  = 60,
  n_theta_plot = 60,
  lambda_interp= 0,
  filter_fraction= 0.5
)
## Warning: 
## Grid searches over lambda (nugget and sill variances) with  minima at the endpoints: 
##   (GCV) Generalized Cross-Validation 
##    minimum at  right endpoint  lambda  =  1271.662 (eff. df= 3.000885 )
# D) Plot the reconstructed surfaces
p_on  <- plot_kimesurface_fourier(kimesurf_on,
                                  use_reconstructed=TRUE,
                                  title="ON (Reconstructed from 2D FFT)")
p_off <- plot_kimesurface_fourier(kimesurf_off,
                                  use_reconstructed=TRUE,
                                  title="OFF (Reconstructed)")
# ##### You can show them separately ....
# p_on
# p_off
# #### Or jointly and synchronously ....

#############################################
## 5) The triple visualization function
#############################################
visualize_kimesurfaces_tripleREV2 <- function(kimesurf_on, kimesurf_off) {
  t_min <- 1;     t_max  <- length(kimesurf_on$t_seq)
  th_min<- 1;     th_max <- length(kimesurf_on$theta_seq)
  t_seq     <- c(t_min:t_max)
  theta_seq <- c(th_min:th_max)
  
  reshapedOn  <- kimesurf_on$Z_recon
  reshapedOff <- kimesurf_off$Z_recon
  
  # 2D surface in (t,theta) space
  fig <- plot_ly() %>%
    # First: ON
    add_surface(
      z = reshapedOn,
      colorscale = "Viridis",
      showscale  = FALSE,
      name       = "ON",
      scene      = "scene1"
    ) %>%
    layout(
      scene = list(
        domain     = list(x = c(0, 0.33), y = c(0, 1)),
        aspectmode = "cube",
        xaxis = list(title = "time (t)"),
        yaxis = list(title = "Runs (θ)"),
        zaxis = list(title = "Intensity")
      )
    ) %>%
    # Second: OFF
    add_surface(
      z = reshapedOff,
      colorscale = "Viridis",
      showscale  = FALSE,
      name       = "OFF",
      scene      = "scene2"
    ) %>%
    layout(
      scene2 = list(
        domain     = list(x = c(0.33, 0.66), y = c(0, 1)),
        aspectmode = "cube",
        xaxis = list(title = "time (t)"),
        yaxis = list(title = "Runs (θ)"),
        zaxis = list(title = "Intensity")
      )
    ) %>%
    # Third: Overlay
    add_surface(
      z = reshapedOn,
      colorscale = "Viridis",
      opacity    = 0.5,
      showscale  = FALSE,
      name       = "ON (overlay)",
      scene      = "scene3"
    ) %>%
    add_surface(
      z = reshapedOff,
      colorscale = "Portland",
      opacity    = 0.8,
      showscale  = FALSE,
      name       = "OFF (overlay)",
      scene      = "scene3"
    ) %>%
    layout(
      scene3 = list(
        domain     = list(x = c(0.66, 1), y = c(0, 1)),
        aspectmode = "cube",
        xaxis = list(title = "time (t)"),
        yaxis = list(title = "Runs (θ)"),
        zaxis = list(title = "Intensity")
      )
    ) %>%
    layout(
      title      = "Kimesurface Visualization of fMRI ON and OFF",
      showlegend = TRUE,
      legend     = list(x=0.85, y=0.1),
      annotations= list(
        list(showarrow=FALSE, text='(ON)',  xref='paper', yref='paper',
             x=0.15, y=0.95, xanchor='center', yanchor='bottom',
             font=list(size=16)),
        list(showarrow=FALSE, text='(OFF)', xref='paper', yref='paper',
             x=0.5,  y=0.95, xanchor='center', yanchor='bottom',
             font=list(size=16)),
        list(showarrow=FALSE, text='(OVERLAY)',
             xref='paper', yref='paper', x=0.85, y=0.95,
             xanchor='center', yanchor='bottom',
             font=list(size=16))
      )
    ) %>%
    # Sync the cameras
    onRender("
      function(el) {
        el.on('plotly_relayout', function(d) {
          const camera = Object.keys(d).filter((key) => /\\.camera$/.test(key));
          if (camera.length) {
            const scenes = Object.keys(el.layout).filter((key) => /^scene\\d*/.test(key));
            const new_layout = {};
            scenes.forEach(key => {
              new_layout[key] = {...el.layout[key], camera: {...d[camera]}};
            });
            Plotly.relayout(el, new_layout);
          }
        });
      }
    ")
  
  fig
}

fig <- visualize_kimesurfaces_tripleREV2(kimesurf_on, kimesurf_off)
fig

Periodicity: A standard 2D FFT treats both dimensions of the domain as periodic. If the actual time or phase range is not truly periodic, we may see wraparound artifacts at the edges. In practice, real signals often also require windowing or zero-padding.
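As a hedged illustration of the windowing and zero-padding remedies, the helper below (hypothetical, not part of the pipeline above) tapers a gridded surface with a separable 2D Hann window and optionally zero-pads it before the forward FFT.

# Hypothetical sketch: taper a gridded kimesurface with a separable 2D Hann window
# and optionally zero-pad it, to reduce wraparound (edge) artifacts in the 2D FFT.
hann_window_2d <- function(Z, pad = 0) {
  M <- nrow(Z); N <- ncol(Z)
  w_t  <- 0.5 * (1 - cos(2 * pi * (0:(M - 1)) / (M - 1)))   # Hann taper along time
  w_th <- 0.5 * (1 - cos(2 * pi * (0:(N - 1)) / (N - 1)))   # Hann taper along phase
  Zw <- Z * outer(w_t, w_th)                                # separable 2D window
  if (pad > 0) {                                            # optional zero-padding
    Zp <- matrix(0, nrow = M + 2 * pad, ncol = N + 2 * pad)
    Zp[(pad + 1):(pad + M), (pad + 1):(pad + N)] <- Zw
    Zw <- Zp
  }
  Zw
}
# Example (hypothetical): F_k <- fft2d(hann_window_2d(Z, pad = 8), inverse = FALSE)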

Irregular \(\theta\): Because each run/time might have a unique \(\theta\), we must do an interpolation to get a uniform \((t_m,\theta_n)\) grid. This is exactly the same problem we faced in Strategy A, but now after we get the uniform grid, we do a 2D FFT. In real imaging contexts (e.g. fMRI), we can utilize a NUFFT (non-uniform FFT) when the sampling is radial or spiral in k-space.

Filtering: In this example, we applied an oversimplified approach, keeping only the largest Fourier coefficients by magnitude. In practice, we may need more effective radial low-pass or band-pass filtering, or a wavelet basis (again with wavelet shrinkage). The general principle is the same: apply a forward (integral) transform, manipulate the coefficients, and invert the transform.

Visualization: For aesthetic reasons, the example above uses a radial coordinate system \((x,y)=(t\cos\theta,t\sin\theta)\), which yields a circular pattern in the \(\theta\)-dimension when \(\theta\in[-\pi,\pi)\). If the phase values extend beyond \(\pm \pi\), the mapping may produce larger circles, helical wrapping, or negative phases \(\theta\), which is acceptable as long as the convention is applied consistently.

This example demonstrates how to incorporate Fourier (k-space) ideas into the kimesurface construction. Of course, the problem specifics might necessitate modifications of the protocol, particularly in how boundaries and periodicity assumptions are handled, or whether advanced non-uniform FFT libraries are needed.

3.3 Alternative Approaches for Kimesurface Construction

Radial Basis Function (RBF) Interpolation: For each time \(t\), we can interpolate the observed values \(\{f_n(t)\}\) at phase angles \(\{\theta_n\}\) using circular RBFs, \(f(t, \theta) = \sum_{n=1}^N w_n(t) \phi\left(|\theta - \theta_n|\right),\) where \(\phi\) is a radial basis function (e.g., von Mises kernel for angular smoothness).
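A minimal sketch of this circular-RBF idea at a fixed time \(t\) is shown below, assuming a von Mises kernel \(\exp(\kappa_c\cos(\theta-\theta_n))\) and a small ridge penalty for numerical stability; the concentration kappa_c and ridge lambda are illustrative choices, not tuned values.

# Illustrative sketch: circular RBF interpolation across phases at a fixed time t,
# with a von Mises kernel; kappa_c (concentration) and lambda (ridge) are assumptions.
rbf_circular_fit <- function(theta_obs, f_obs, kappa_c = 4, lambda = 1e-6) {
  K <- exp(kappa_c * cos(outer(theta_obs, theta_obs, "-")))  # Gram matrix of circular RBFs
  w <- solve(K + lambda * diag(length(theta_obs)), f_obs)    # ridge-regularized weights w_n(t)
  function(theta_new) {                                      # return the interpolant f(t, .)
    K_new <- exp(kappa_c * cos(outer(theta_new, theta_obs, "-")))
    as.vector(K_new %*% w)
  }
}
# Example: f_hat <- rbf_circular_fit(runif(8, -pi, pi), rnorm(8)); f_hat(seq(-pi, pi, length.out = 100))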

Kernel Smoothing with Adaptive Bandwidth: We can use Nadaraya-Watson kernel smoothing to estimate \(f(t, \theta)\) \[\hat{f}(t, \theta) = \frac{\sum_{n=1}^N K_t(t - t_n) K_\theta(\theta - \theta_n) f_n(t_n)}{\sum_{n=1}^N K_t(t - t_n) K_\theta(\theta - \theta_n)},\] using time-adaptive bandwidths to account for \(\Phi(t)\).
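A hedged sketch of this estimator is given below, using a Gaussian kernel in \(t\) and a von Mises (angular) kernel in \(\theta\); the bandwidths h_t and kappa_theta are placeholders that would normally be selected adaptively, e.g., as functions of \(t\).

# Illustrative Nadaraya-Watson estimator over (t, theta); h_t and kappa_theta are assumed bandwidths.
nw_kimesurface <- function(t_obs, theta_obs, f_obs, t_grid, theta_grid,
                           h_t = 1, kappa_theta = 4) {
  est <- matrix(NA_real_, nrow = length(t_grid), ncol = length(theta_grid))
  for (i in seq_along(t_grid)) {
    K_t <- dnorm(t_grid[i] - t_obs, sd = h_t)                   # Gaussian kernel in time
    for (j in seq_along(theta_grid)) {
      K_th <- exp(kappa_theta * cos(theta_grid[j] - theta_obs)) # von Mises kernel in phase
      w <- K_t * K_th
      est[i, j] <- sum(w * f_obs) / sum(w)                      # locally weighted average
    }
  }
  est
}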

Time-Phase Gridding: Discretize \((t, \theta)\) into a grid, aggregate observations in each cell, and apply bilinear interpolation for continuity.
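A possible sketch of the gridding step is shown below; it bins scattered \((t, \theta, f)\) observations into cell averages and leaves empty cells as NA, to be filled afterwards by bilinear interpolation or by the thin-plate spline interpolation used earlier (the function and argument names are illustrative).

# Sketch: aggregate scattered (t, theta, f) observations into a regular (t, theta) grid
# by cell-averaging; empty cells remain NA for a subsequent interpolation pass.
grid_time_phase <- function(t_obs, theta_obs, f_obs, n_t = 50, n_theta = 50) {
  t_breaks  <- seq(min(t_obs), max(t_obs), length.out = n_t + 1)
  th_breaks <- seq(-pi, pi, length.out = n_theta + 1)
  i_t  <- cut(t_obs,     breaks = t_breaks,  include.lowest = TRUE, labels = FALSE)
  i_th <- cut(theta_obs, breaks = th_breaks, include.lowest = TRUE, labels = FALSE)
  sums   <- matrix(0, n_t, n_theta)
  counts <- matrix(0, n_t, n_theta)
  for (k in seq_along(f_obs)) {                       # accumulate per-cell sums and counts
    sums[i_t[k], i_th[k]]   <- sums[i_t[k], i_th[k]] + f_obs[k]
    counts[i_t[k], i_th[k]] <- counts[i_t[k], i_th[k]] + 1
  }
  Z <- matrix(NA_real_, n_t, n_theta)
  Z[counts > 0] <- sums[counts > 0] / counts[counts > 0]
  Z
}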

3.4 Stitching Strategy

In the TCIU Chapter 6 Tutorial we show a simple numerical time-series stitching strategy for constructing kimesurface representations of fMRI data and other repeated-measurement longitudinal datasets. Consider \(N\) repeated time-series measurements \(\{x_n(t)\}_{n=1}^N\), each measured over a time interval \(t \in [0,T]\). Rather than treating these as separate 1D curves, we embed them in a 2D polar coordinate system \((t,\theta)\), where the radial coordinate \(t\) is the usual time and the angular coordinate \(\theta_n\) is an angle representing the phase index, i.e., the \(n\)-th repeated measurement.

Using a uniform assignment of \(\theta\), we can distribute the repeated experiments evenly around a circle, e.g., \(\theta_n = \frac{2\pi}{N}(n-1)\in S^1\). As \(t\in\{0, 1, \cdots, T\}\), each time series becomes a spoke, or radiating ray, in this polar diagram.
Alternatively, given an estimated or prior phase distribution \(\Phi(t)\), we can sample the phase angles \(\theta_n\sim\Phi(t)\) in a more adaptive manner, particularly if the variance of \(\theta\) depends on \(t\).

To construct the kimesurface, for each \(n\) and each time \(t\), we plot \(\bigl(r,\theta_n\bigr) \equiv (t,\theta_n)\) in polar coordinates and assign the measured time-series value \(x_n(t)\). Then, we can visualize the amplitude of the kimesurface, \(x_n(t)\), as a color, height, or contour on the 2D kimesurface manifold. Note that interpolation between the radial rays (i.e., the stitching spokes at \(\theta_n\)) is often necessary to fill the entire parametric grid over the continuous domain \(\theta \in [0, 2\pi)\).

The stitching kimesurface construction is conceptually straightforward and works directly from the repeated time-series data.
However, unless strongly motivated by prior knowledge about how the repeated experiments differ, the phase angles \(\theta_n\) can be quite arbitrary and break the analyticity of the kimesurface maps. The stitching might reveal structure visually, but can obscure deeper relationships unless carefully interpreted.

library(spatstat)
library(plotly)
library(htmlwidgets)

# Extract just the On kime-series at voxel (44,42,33), each time-series includes 10 time-points and has 8 repeats!
# onFMRISeries <- bigim1_mod[44,42,33, c(1:10, 21:30, 41:50, 61:70, 
#                                        81:90, 101:110, 121:130, 141:150)]
# # the corresponding Off kime-series at voxel (44,42,33) will be the temporal complement
# offFMRISeries <- bigim1_mod[44,42,33, -c(1:10, 21:30, 41:50, 61:70, 
#                                          81:90, 101:110, 121:130, 141:150)]

# t_indx <- seq(1, 80, 1) # index of time (not actual time value)
# f_On <- onFMRISeries
# f_Off <- offFMRISeries

# Compute and plot against each other the Average On and Average Off time-series
period <- 10   # repeated measurement period
numSeries <- 8 # number of fMRI timeseries in the entire run (for each of On and Off signals)
extract8fMRI_TimeSeries <- function(f=0, period=0) {
  # Split the fMRI vector into (numSeries=8) individual time-courses
  fMRI_splitSeries <- split(f, ceiling(seq_along(f) / period))
  return(fMRI_splitSeries)
}

# 1. Extract numSeries=8 series for stitching and kimesurface reconstruction
fMRI_ON_Series <- extract8fMRI_TimeSeries(f=f_On, period=10)
fMRI_OFF_Series <- extract8fMRI_TimeSeries(f=f_Off, period=10)

# 1.a. Stitch together (using random kime-phase draws, $\varphi\sim\Phi_{[-\pi,\pi)}$) the repeated time-series into a native-space kimesurface.
phi_8_vec <- matrix(NA, ncol=period, nrow = numSeries)
for (t in 1:period) { 
  # for a given t, generate 8 new phases
  set.seed(t);
  phi_8_vec[ ,t] <- extraDistr::rlaplace(numSeries, mu=0, sigma=0.5)
  # rank-order the phases for consistency
  # within the same foliation leaf
  phi_8_vec[ ,t] <- sort(phi_8_vec[ ,t])
  # force phases in [-pi: pi)
  for (i in 1:numSeries) {
    if (phi_8_vec[i,t] < -pi) phi_8_vec[i,t] <- -pi
    if (phi_8_vec[i,t] >= pi) phi_8_vec[i,t] <- pi
  }
}

matrix_ON <- matrix(0, nrow = 2*period+1, ncol = 2*period+1) 
matrix_OFF <- matrix(0, nrow = 2*period+1, ncol = 2*period+1) 
for (t in 1:period) {  # POLAR to CARTESIAN coordinates
  for (p in 1:numSeries) {
    # map (t, phi) to the nearest integer grid cell (matrix indices must be whole numbers)
    x = round((period+1) + t*cos(phi_8_vec[p,t]))
    y = round((period+1) + t*sin(phi_8_vec[p,t]))
    matrix_ON[x,y]  <- fMRI_ON_Series[[p]][t]
    matrix_OFF[x,y] <- fMRI_OFF_Series[[p]][t]
  }
}
# smooth/blur the matrices
matrix_ON_smooth <- (1/10000)*as.matrix(blur(as.im(matrix_ON), sigma=0.5))
matrix_OFF_smooth <- (1/10000)*as.matrix(blur(as.im(matrix_OFF), sigma=0.5))

# Display the stitched ON, OFF, and difference kimesurfaces
# (the stitched matrices are real-valued, so Re() below is effectively the identity)
# Create sequences for the x and y plotting axes
x <- seq(1, 2*period+1, length.out = 2*period+1)
y <- seq(1, 2*period+1, length.out = 2*period+1)

# Create the three surface plots: Re(ON), Re(OFF), and Re(ON - OFF)
pON_Re <- plot_ly(x=x, y=y, z=Re(matrix_ON), type="surface", 
                  scene='scene1', name="Re(ON)") %>% 
  layout(title = "Re(fMRI-ON Basic Stitching)", 
         scene = list(domain=list(x=c(0, 0.33), y=c(0, 1)),
                      aspectmode = "cube")) %>% hide_colorbar()
pOFF_Re <- plot_ly(x=x, y=y, z=Re(matrix_OFF), type="surface", 
                   scene='scene2', name="Re(OFF)") %>% 
  layout(title = "Re(fMRI-OFF Basic Stitching)", 
         scene = list(domain=list(x=c(0.33, 0.66), y=c(0, 1)),
                      aspectmode = "cube")) %>% hide_colorbar()
pDIFF_Re <- plot_ly(x=x, y=y, z=Re(matrix_ON-matrix_OFF), type="surface",
                    scene='scene3', name="Re(Diff)") %>% 
  layout(title = "Re(fMRI-ON - fMRI-OFF Basic Stitching)", 
         scene = list(domain=list(x=c(0.66, 1), y=c(0, 1)),
                      aspectmode = "cube")) %>% hide_colorbar()
# Scene titles
annotations=list(list(showarrow=FALSE, text='(ON)', 
                      xref='scene',yref='scene',zref='scene', x=0.0,y=0.9,z=0.1,
                      xanchor='left',yanchor='bottom',font=list(size=16)),
                 list(showarrow=FALSE, text='(OFF)', 
                      xref='scene2',yref='scene',zref='scene', x=0.35,y=0.9,z=0.1,
                      xanchor='left',yanchor='bottom',font=list(size=16)),
                 list(showarrow=FALSE, text='(DIFF)', 
                      xref='scene3',yref='scene',zref='scene', x=0.7,y=0.9,z=0.1,
                      xanchor='left',yanchor='bottom',font=list(size=16)))
# Combine the plots
combined_plot <- subplot(pON_Re, pOFF_Re, pDIFF_Re) %>% #, nrows=2,margin=0.05) %>%
  layout(title="Native-space Reconstruction of fMRI ON and OFF Kimesurfaces using Basic Stitching", annotations=annotations)

# Display the synchronized plot
combined_plot %>% htmlwidgets::onRender(
    "function(x, el) {
      x.on('plotly_relayout', function(d) {
        const camera = Object.keys(d).filter((key) => /\\.camera$/.test(key));
        if (camera.length) {
          const scenes = Object.keys(x.layout).filter((key) => /^scene\\d*/.test(key));
          const new_layout = {};
          scenes.forEach(key => {
            new_layout[key] = {...x.layout[key], camera: {...d[camera]}};
          });
          Plotly.relayout(x, new_layout);
        }
      });
    }")

3.5 Using a Phase Distribution \(\Phi(\theta;t)\)

Another numerical approach, enhancing the stitching strategy, involves treating the index \(\theta\) not just as a convenient (measurable) label for the experiments, but as a random variable drawn from \(\Phi(\theta;t)\). Suppose at each real time \(t\) we regard the repeated measurements \(x_1(t), x_2(t), \cdots, x_N(t)\) as IID draws from some underlying distribution that depends on \(\theta\). Then \(\theta\) is an unobserved phase parameter that may shift or shape the measurement outcomes.

A Bayesian or likelihood-based approach combines the likelihood \(p\bigl(x_n(t)\mid \theta\bigr)\) with a prior \(\Phi(\theta;t)\) and fits or updates \(\Phi(\theta;t)\) by maximizing the posterior given the repeated datasets.
Once we estimate \(\Phi(\theta;t)\), we can construct an expected amplitude function \(\langle x(t,\theta)\rangle\), or a pointwise map from \((t,\theta)\) to data space, that effectively describes how the system behaves if the phase is \(\theta\).
The resulting kimesurface may then be displayed as a 2D parametric manifold \(\{(t,\theta)\}\) with an estimated amplitude or distribution of amplitudes, e.g.
\(x_{\mathrm{mean}}(t,\theta)\;=\;\int x\,p\bigl(x\mid t,\theta\bigr)\,dx,\) or more directly \(\langle x_n(t)\rangle\approx f(t,\theta)\).
A Bayesian approach is more principled when we interpret \(\theta\) as a random phase influencing the time-series. It produces a data-driven estimate of \(\Phi(\theta;t)\), thereby removing some of the arbitrariness in how each repeated series is placed on the constructed kimesurface manifold.
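As a minimal, hedged illustration (far short of full Bayesian inference), one could map the within-time ranks of the \(N\) repeated measurements to quantiles of an assumed Laplace\((0,\sigma)\) phase prior and summarize \(\Phi(\theta;t)\) with a circular (von Mises) kernel density estimate per time slice; the function name and parameter values below are assumptions.

# Sketch: rank-based phase assignment under an assumed Laplace(0, sigma) prior, plus a
# simple von Mises kernel density estimate of Phi(theta; t) at each time slice.
# X is an N x T matrix of repeated series; sigma and kappa_d are assumed parameters.
estimate_phase_distribution <- function(X, sigma = 0.5, kappa_d = 8,
                                        theta_grid = seq(-pi, pi, length.out = 181)) {
  N <- nrow(X); T_len <- ncol(X)
  theta_hat <- matrix(NA_real_, N, T_len)
  Phi_hat   <- matrix(NA_real_, length(theta_grid), T_len)
  for (t in seq_len(T_len)) {
    u  <- (rank(X[, t]) - 0.5) / N                          # empirical quantiles of the N repeats
    th <- extraDistr::qlaplace(u, mu = 0, sigma = sigma)    # invert the Laplace prior CDF
    theta_hat[, t] <- pmin(pmax(th, -pi), pi)               # restrict phases to [-pi, pi)
    # circular (von Mises) kernel density estimate of the assigned phases at time t
    Phi_hat[, t] <- rowMeans(exp(kappa_d * cos(outer(theta_grid, theta_hat[, t], "-"))))
    Phi_hat[, t] <- Phi_hat[, t] / sum(Phi_hat[, t])        # normalize to a discrete density
  }
  list(theta = theta_hat, Phi = Phi_hat, theta_grid = theta_grid)
}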

4 Algorithmic Strategies

The algorithmic transforms of time-series to kimesurfaces scale well but may require larger datasets and more computational resources.

  • A Gaussian Process Regression (GPR) relies on training a Gaussian process with a composite kernel (e.g., a Matérn kernel in \(t\) combined with a periodic kernel in \(\theta\)) to predict \(f(t, \theta)\) from sparse observations; a minimal sketch follows this list.

  • Alternatively, Deep Learning with Phase Embedding trains a neural network (e.g., CNN or transformer) to map \((t, \theta)\) to \(f(t, \theta)\), using phase embeddings and time-series encoders.

  • Manifold Learning with Constraints applies UMAP, diffusion maps, or t-SNE with constraints to preserve temporal (\(t\)) and phase (\(\theta\)) structure in a 2D latent space.

  • Probabilistic and Statistical Methods, such as Bayesian hierarchical modeling, treat \(\theta\) as a latent variable with prior \(\Phi(t)\) and infer the posterior kimesurface using Markov chain Monte Carlo (MCMC) or variational inference.

  • Functional Data Analysis (FDA) represents time-series as functions in a Hilbert space and then uses the Karhunen-Loève expansion to decompose the variability into \(t\)- and \(\theta\)-dependent modes.
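To make the GPR bullet above concrete, here is a self-contained sketch of Gaussian-process prediction with a hand-coded composite kernel (a Matérn 3/2 kernel in \(t\) multiplied by a periodic kernel in \(\theta\)); all hyperparameters (length-scales and noise level) are illustrative assumptions rather than fitted values.

# Sketch: GP regression over (t, theta) with a composite kernel
# k = Matern_3/2(t) * periodic(theta); hyperparameters are assumed, not fitted.
k_composite <- function(T1, Th1, T2, Th2, ell_t = 20, ell_th = 1) {
  dt  <- abs(outer(T1, T2, "-"))                        # pairwise time distances
  dth <- abs(outer(Th1, Th2, "-"))                      # pairwise phase differences
  matern32 <- (1 + sqrt(3) * dt / ell_t) * exp(-sqrt(3) * dt / ell_t)
  periodic <- exp(-2 * sin(dth / 2)^2 / ell_th^2)       # exactly periodic in theta
  matern32 * periodic
}
gp_predict <- function(t_obs, th_obs, f_obs, t_new, th_new, sigma_n = 0.1) {
  K   <- k_composite(t_obs, th_obs, t_obs, th_obs) + sigma_n^2 * diag(length(f_obs))
  K_s <- k_composite(t_new, th_new, t_obs, th_obs)
  as.vector(K_s %*% solve(K, f_obs))                    # GP posterior mean at (t_new, th_new)
}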

4.1 Interpolation and Manifold Learning Approaches

Manifold learning may be useful for obtaining a direct 2D embedding of repeated time-series in the kime plane. We can collect all data points \(\bigl(t, x_n(t)\bigr)\) for each repeated series \(n\) and use a dimensionality reduction method (e.g., Isomap, diffusion maps, PCA with custom transformations, UMAP) to embed them in a 2D manifold. Then, we can post-process the 2D coordinates to interpret one axis as time and the other as phase, and possibly identify an angle \(\theta\) that increments smoothly around the manifold. Using spline or kernel interpolation, we can define an approximate mapping \((n,t)\mapsto (t,\theta_n)\) and then perform a 2D interpolation of the observed longitudinal data values to obtain a continuous kimesurface as a function of \((t,\theta)\). This is conceptually similar to the stitching approach, but it incorporates smoothing or weighting across the multiple repeated data sets.
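As a simple, hedged illustration of a data-driven phase assignment, one could run PCA on the \(N\times T\) matrix of repeated series and interpret the angle of each run’s first two principal-component scores as its phase \(\theta_n\); the function name below is illustrative.

# Sketch: derive a data-driven phase theta_n for each repeated series via PCA, by
# interpreting the angle of its first two principal-component scores as its position
# around the low-dimensional manifold. X is an N x T matrix (rows = repeated runs).
pca_phase_assignment <- function(X) {
  pc     <- prcomp(X, center = TRUE, scale. = FALSE)   # PCA across the repeated runs
  scores <- pc$x[, 1:2, drop = FALSE]                  # first two scores per run
  atan2(scores[, 2], scores[, 1])                      # angle in the PC1-PC2 plane as phase
}
# Example: theta_n <- pca_phase_assignment(matrix(rnorm(8 * 80), nrow = 8))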

5 Practical Aspects

In practice, implementing most of these algorithms involves the following steps.

  1. Data Alignment: Normalize the time-series to a common time grid, establishing direct homologies across all repeated experiments. Let the repeated series be arranged in a 2D array indexed by \((n, t)\). For each \(n\in \{1,\cdots,N\}\) and discrete time index \(t_j\in\{t_0,\dots,t_{max}\}\), we have measurements \(x_n(t_j)\).

  2. Phase Assignment: Assign each repetition a phase \(\theta_n \sim \Phi(t)\) (fixed or time-varying). For a uniform assignment, \(\theta_n = 2\pi (n-1)/N\), whereas a data-driven assignment relies on prior knowledge or an inferred PDF \(\Phi(\theta;t_j)\), sampling from it or inverting its CDF (quantile function) to obtain \(\theta_n(t_j)\); see the sketch after this list.

  3. Surface Construction: For analytical or numerical methods, interpolation or smoothing over \(\theta\) may be necessary at each \(t\). For purely algorithmic methods, model training over \((t, \theta, f(t, \theta))\) tuples may be required. In the simplest approach, we can treat \(\theta_n\) as constant in \(t\) and consider each repeated series a radial spoke in polar coordinates. Alternatively, we can allow \(\theta\) to vary in time to reflect time-dependent phases obtained from some inference procedure.

  4. Implementation: Implement a 2D grid or mesh in \((r,\theta)\). At each grid node \((t_i,\theta_j)\), define or interpolate the data value from the nearest repeated measurement. For large data sets, we can store the kimesurface in a 2D array of size \(\#(t)\times\#(\theta)\).

  5. Validation: Use held-out repetitions to assess surface fidelity. Given the 2D representation, standard data-visualization libraries (plotly, matplotlib, etc.) can display polar-coordinate contour plots, heatmaps, or 3D kimesurface plots.

  6. Statistical inference across repetitions utilizing the kimesurface representation is vital at the end. Both parametric and nonparametric inference on \(\Phi\) would be valuable. Assuming \(\theta\) influences the shape of the time series, we can parametrize \(\Phi(\theta;t)\) as a function and fit it by maximizing the likelihood that the repeated samples match the (marginalized) distribution over \(\theta\). If \(\Psi(t,\theta)\) is an amplitude model, we want repeated draws at real time \(t\) to be consistent with \(\Psi(t,\theta)\) integrated over \(\theta\). Alternatively, we can quantify the uncertainty of each repeated time series subject to noise, assuming the phase \(\theta\) is partially hidden. By modeling that noise, we may estimate a posterior distribution for \(\theta\), and summaries of this posterior can help place each experiment’s data onto the kimesurface. To assess whether the variability among repeated experiments is low-dimensional, we can use PCA or ICA on the time series; one or two principal components might be interpreted as phase shifts or timing offsets, mapping naturally to \(\theta\) in a 2D domain.
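As a brief illustration of step 2 above, phases can be assigned either uniformly or by inverting the CDF (quantile function) of a prior such as the Laplace\((0, 0.5)\) used in the stitching demo; the helper name is illustrative.

# Sketch of step 2: uniform vs. quantile-based phase assignment for N repetitions.
assign_phases <- function(N, method = c("uniform", "laplace"), sigma = 0.5) {
  method <- match.arg(method)
  if (method == "uniform") {
    2 * pi * (0:(N - 1)) / N - pi                        # evenly spaced over [-pi, pi)
  } else {
    u  <- (seq_len(N) - 0.5) / N                         # mid-point quantiles
    th <- extraDistr::qlaplace(u, mu = 0, sigma = sigma) # invert the Laplace prior CDF
    pmin(pmax(th, -pi), pi)                              # restrict to [-pi, pi)
  }
}
# Example: assign_phases(8, "laplace")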
