| Title: | Nonlinear Functional Principal Component Analysis using Neural Networks | 
| Version: | 1.0 | 
| Description: | Implementation of the 'nFunNN' method, a novel nonlinear functional principal component analysis approach based on neural networks. The key function of this package is nFunNNmodel(). | 
| License: | GPL (≥ 3) | 
| Encoding: | UTF-8 | 
| RoxygenNote: | 7.1.1 | 
| Imports: | fda, splines, stats, torch | 
| NeedsCompilation: | no | 
| Packaged: | 2024-04-26 01:50:25 UTC; 11013 | 
| Author: | Rou Zhong [aut, cre], Jingxiao Zhang [aut] | 
| Maintainer: | Rou Zhong <zhong_rou@163.com> | 
| Repository: | CRAN | 
| Date/Publication: | 2024-04-28 09:40:02 UTC | 
Curve reconstruction
Description
Curve reconstruction by the trained transformed functional autoassociative neural network.
Usage
nFunNN_CR(model, X_ob, L, t_grid)
Arguments
| model | The trained transformed functional autoassociative neural network obtained from nFunNNmodel(). | 
| X_ob | A matrix of the observed data, with one row per curve and one column per observation time point. | 
| L | An integer denoting the number of B-spline basis functions for the parameters in the network. | 
| t_grid | A vector denoting the observation time grid. | 
Value
A torch tensor denoting the predicted values.
Examples
n <- 2000                                    # sample size
m <- 51                                      # number of observation grid points
t_grid <- seq(0, 1, length.out = m)          # observation time grid
m_est <- 101
t_grid_est <- seq(0, 1, length.out = m_est)  # finer grid for estimation
err_sd <- 0.1                                # sd of the measurement error
Z_1a <- stats::rnorm(n, 0, 3)                # scores of the first component
Z_2a <- stats::rnorm(n, 0, 2)                # scores of the second component
Z_a <- cbind(Z_1a, Z_2a)
Phi <- cbind(sin(2 * pi * t_grid), cos(2 * pi * t_grid))
Phi_est <- cbind(sin(2 * pi * t_grid_est), cos(2 * pi * t_grid_est))
X <- Z_a %*% t(Phi)                          # true curves on the observation grid
X_to_est <- Z_a %*% t(Phi_est)               # true curves on the estimation grid
X_ob <- X + matrix(stats::rnorm(n * m, 0, err_sd), nrow = n, ncol = m)
L_smooth <- 10  # number of B-spline basis functions for presmoothing
L <- 10         # number of B-spline basis functions for the network parameters
J <- 20
K <- 2          # number of principal components
R <- 20
nFunNN_res <- nFunNNmodel(X_ob, t_grid, t_grid_est, L_smooth,
L, J, K, R, lr = 0.001, n_epoch = 1500, batch_size = 100)
model <- nFunNN_res$model                   # trained network
X_pre <- nFunNN_CR(model, X_ob, L, t_grid)  # reconstructed curves
sqrt(torch::nnf_mse_loss(X_pre, torch::torch_tensor(X_to_est))$item())  # RMSE
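The last line of the example reports the root-mean-squared prediction error via torch. For readers less familiar with torch, the same quantity can be computed in base R once the prediction tensor is converted back to a matrix (e.g. with torch's as_array()); the rmse() helper below is an illustrative stand-in, not part of the package:

```r
# Base-R equivalent of sqrt(nnf_mse_loss(pred, truth)): the square root
# of the mean squared elementwise difference between two matrices.
rmse <- function(pred, truth) sqrt(mean((pred - truth)^2))

# Toy check on plain matrices (stand-ins for as_array(X_pre) and X_to_est):
A <- matrix(c(1, 2, 3, 4), nrow = 2)
B <- matrix(c(1, 2, 3, 8), nrow = 2)
rmse(A, B)  # sqrt(mean(c(0, 0, 0, 16))) = 2
```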
Nonlinear FPCA using neural networks
Description
Nonlinear functional principal component analysis using a transformed functional autoassociative neural network.
Usage
nFunNNmodel(
  X_ob,
  t_grid,
  t_grid_est,
  L_smooth,
  L,
  J,
  K,
  R,
  lr = 0.001,
  batch_size,
  n_epoch
)
Arguments
| X_ob | A matrix of the observed data, with one row per curve and one column per observation time point. | 
| t_grid | A vector denoting the observation time grid. | 
| t_grid_est | A vector denoting the time grid for estimation. | 
| L_smooth | An integer denoting the number of B-spline basis functions used to presmooth the observed curves. | 
| L | An integer denoting the number of B-spline basis functions for the parameters in the network. | 
| J | An integer denoting the number of neurons in the first hidden layer of the network. | 
| K | An integer denoting the number of principal components. | 
| R | An integer denoting the number of neurons in the third hidden layer of the network. | 
| lr | A scalar denoting the learning rate. (default: 0.001) | 
| batch_size | An integer denoting the batch size used in training. | 
| n_epoch | An integer denoting the number of training epochs. | 
Value
A list containing the following components:
| model | The resulting neural network trained by the observed data. | 
| loss | A vector denoting the average training loss in each epoch. | 
| Comp_time | An object of class "difftime" denoting the computation time in seconds. | 
Examples
n <- 2000
m <- 51
t_grid <- seq(0, 1, length.out = m)
m_est <- 101
t_grid_est <- seq(0, 1, length.out = m_est)
err_sd <- 0.1
Z_1a <- stats::rnorm(n, 0, 3)
Z_2a <- stats::rnorm(n, 0, 2)
Z_a <- cbind(Z_1a, Z_2a)
Phi <- cbind(sin(2 * pi * t_grid), cos(2 * pi * t_grid))
Phi_est <- cbind(sin(2 * pi * t_grid_est), cos(2 * pi * t_grid_est))
X <- Z_a %*% t(Phi)
X_to_est <- Z_a %*% t(Phi_est)
X_ob <- X + matrix(stats::rnorm(n * m, 0, err_sd), nrow = n, ncol = m)
L_smooth <- 10
L <- 10
J <- 20
K <- 2
R <- 20
nFunNN_res <- nFunNNmodel(X_ob, t_grid, t_grid_est, L_smooth,
L, J, K, R, lr = 0.001, n_epoch = 1500, batch_size = 100)
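The loss component of the returned list gives one averaged loss value per epoch and can serve as a quick convergence diagnostic. As a sketch (the converged() helper below is hypothetical, not part of the package), one might check whether the loss has flattened over the final epochs:

```r
# Hypothetical helper: treat training as converged when the relative
# spread of the loss over the final `window` epochs falls below `tol`.
converged <- function(loss, window = 10, tol = 1e-4) {
  tail_loss <- utils::tail(loss, window)
  diff(range(tail_loss)) / max(abs(tail_loss)) < tol
}

# Usage with a fitted model (requires the example above to have run):
# converged(nFunNN_res$loss)
```

If the check fails, increasing n_epoch or adjusting lr in the nFunNNmodel() call is a natural next step.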