mlr3torch 0.3.2
Bug Fixes
- t_opt("adamw")now actually uses AdamW and not
Adam.
- Caching: The cache directory is now created even if its parent directory does not exist.
- Added `mlr3torch` to `mlr_reflections$loaded_packages` to fix errors when using `mlr3torch` in parallel.
mlr3torch 0.3.1
Bug Fixes
- FT Transformer can now be (un-)marshaled after being trained on
categorical data (#412).
- The (batch) sampler parameters now work (#420, thanks @tdhock).
mlr3torch 0.3.0
Breaking Changes:
- The output dimension of neural networks for binary classification tasks is now expected to be 1 and not 2 as before. The behavior of `nn("head")` was also changed to match this. This means that for binary classification tasks, `t_loss("cross_entropy")` now generates `nn_bce_with_logits_loss` instead of `nn_cross_entropy_loss`. This also came with a reparametrization of the `t_loss("cross_entropy")` loss (thanks to @tdhock, #374). A sketch of the new behavior follows below.
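A minimal sketch of the new behavior, assuming an MLP learner on a binary task (the learner choice and parameter values are illustrative, not part of this change):

```r
library(mlr3)
library(mlr3torch)

# Sketch: "sonar" is a binary classification task, so the head is expected to
# produce a single logit and t_loss("cross_entropy") is expected to resolve to
# nn_bce_with_logits_loss internally during training.
learner = lrn("classif.mlp",
  loss = t_loss("cross_entropy"),
  neurons = 32, epochs = 10, batch_size = 32
)
learner$train(tsk("sonar"))
```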
New Features:
PipeOps & Learners:
- Added `po("nn_identity")`.
- Added `po("nn_fn")` for calling custom functions in a network (see the sketch after this list).
- Added the FT Transformer model for tabular data.
- Added encoders for numerical and categorical features.
- nn("block")(which allows to repeat the same network
segment multiple times) now has an extra argument- trafo,
which allows to modify the parameter values per layer.
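A minimal sketch of `po("nn_fn")`, assuming its `fn` parameter applies the given function to the tensor produced by the previous layer (the surrounding graph is illustrative):

```r
library(mlr3pipelines)
library(mlr3torch)

# Sketch: insert a custom function between two layers of a network.
graph = po("torch_ingress_num") %>>%
  nn("linear", out_features = 32) %>>%
  po("nn_fn", fn = function(x) torch::torch_relu(x)) %>>%
  nn("head")
```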
Callbacks:
- The context for callbacks now includes the network prediction (`y_hat`).
- The `lr_one_cycle` callback now infers the total number of steps.
- The progress callback got the argument `digits` for controlling the precision with which validation/training scores are logged (see the sketch after this list).
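For example, the logging precision could be set at construction (a sketch, assuming `digits` can be passed like other callback parameters):

```r
library(mlr3torch)

# Sketch: log training/validation scores with 3 digits of precision.
cb = t_clbk("progress", digits = 3)
```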
Other:
- `TorchIngressToken` can now also take a `Selector` as argument `features`.
- Added the function `lazy_shape()` to get the shape of a lazy tensor (see the sketch after this list).
- Better error messages for MLP and TabResNet learners.
- TabResNet learner now supports lazy tensors.
- The `LearnerTorch` base class now supports the private method `$.ingress_tokens(task, param_vals)` for generating the `torch::dataset`.
- Shapes can now have multiple `NA`s, so not only the batch dimension can be missing. However, most `nn()` operators still expect only one missing value and will throw an error if multiple dimensions are unknown.
- Training no longer fails when encountering a missing value during validation but uses `NA` instead.
- It is now possible to specify parameter groups for optimizers via the `param_groups` parameter.
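A small sketch of `lazy_shape()` (the example tensor is illustrative; the first, batch, dimension of a lazy tensor is typically unknown and reported as `NA`):

```r
library(mlr3torch)

# Sketch: the shape of a lazy tensor created from an in-memory torch tensor.
lt = as_lazy_tensor(torch::torch_randn(16, 3))
lazy_shape(lt)
```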
Bug Fixes:
- Lazy tensors of length 0 can now be materialized.
- `NA` is now a valid shape for lazy tensors.
- The `lr_reduce_on_plateau` callback now works.
mlr3torch 0.2.1
Bug Fixes:
- `LearnerTorchModel` can now be parallelized and trained with encapsulation activated.
- `jit_trace` now works in combination with batch normalization.
- Ensures compatibility with `R6` version 2.6.0.
mlr3torch 0.2.0
Breaking Changes
- Removed some optimizers for which no fast (‘ignite’) variant
exists.
- The default optimizer is now AdamW instead of Adam.
- The private `LearnerTorch$.dataloader()` method no longer operates on the `task` but on the `dataset` generated by the private `LearnerTorch$.dataset()` method.
- The `shuffle` parameter during model training is now initialized to `TRUE` to sidestep issues where the data is sorted.
- Optimizers now use the faster (‘ignite’) implementations, which leads to considerable speed improvements.
- The `jit_trace` parameter was added to `LearnerTorch`, which, when set to `TRUE`, can lead to significant speedups. This should only be enabled for ‘static’ models; see the torch tutorial for more information. A sketch combining the new speed-related parameters follows after this list.
- Added the parameter `num_interop_threads` to `LearnerTorch`.
- The `tensor_dataset` parameter was added, which allows stacking all batches at the beginning of training to make subsequent batch loading faster.
- Use a faster default image loader.
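A minimal sketch combining the new speed-related parameters on a torch learner (the learner choice and values are illustrative assumptions):

```r
library(mlr3)
library(mlr3torch)

# Sketch: enable the new speed-related options.
learner = lrn("classif.mlp",
  epochs = 20, batch_size = 64,
  jit_trace = TRUE,          # trace the (static) network for faster execution
  num_interop_threads = 2,   # number of torch inter-op threads
  tensor_dataset = TRUE      # stack all batches once before training
)
```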
Features
- Added a `PipeOp` for adaptive average pooling.
- The `n_layers` parameter was added to the MLP learner.
- Added the multimodal melanoma and cifar-{10, 100} example tasks (see the sketch after this list).
- Added a callback to iteratively unfreeze parameters for
finetuning.
- Added different learning rate schedulers as callbacks.
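The new example tasks are accessed through the task dictionary; a sketch (assuming the task ids "melanoma", "cifar10" and "cifar100"; the data is downloaded and cached on first access):

```r
library(mlr3)
library(mlr3torch)

# Sketch: load one of the new example tasks.
task = tsk("cifar10")
task
```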
Bug Fixes:
- Torch learners can now be used with `AutoTuner`.
- Early stopping now uses `epochs - patience` for the internally tuned values instead of the trained number of `epochs` as before.
- The `dataset` of a learner no longer has to return the tensors on the specified `device`, which allows for parallel dataloading on GPUs.
- `PipeOpBlock` should no longer create ID clashes with other `PipeOp`s in the graph (#260).
mlr3torch 0.1.2
- Don’t use deprecated `data_formats` anymore.
- Added `CallbackSetTB`, which allows logging that can be viewed by TensorBoard.
mlr3torch 0.1.1
- fix(preprocessing): fixed the construction of some `PipeOp`s such as `po("trafo_resize")`, which failed in some cases.
- fix(ci): tests were not run in the CI
- fix(learner): `LearnerTabResnet` now works correctly.
- feat: added the `nn()` helper function to simplify the creation of neural network layers.
mlr3torch 0.1.0