NequIP-OAM-XL
Predictions
Convex hull distance prediction errors projected onto elements
Trained By
Model Info
- Model Version: 0.1
- Model Type: UIP
- Targets: EFSG
- Openness: OSOD
- Train Task: S2EFS
- Test Task: IS2RE-SR
- Trained for Benchmark: Yes
Training Set
OMat24: 101M structures from 3.23M materials
Subsampled Alexandria (sAlex): 10.4M structures from 3.23M materials
MPtrj: 1.58M structures from 146k materials
Description
Extra-large NequIP foundation potential; see https://www.nequip.net/models/mir-group/NequIP-OAM-XL:0.1 for details and https://arxiv.org/abs/2504.16068 for model/training infrastructure.
Steps
Training was performed in two stages: (1) pre-training on OMat24; (2) fine-tuning on MPtrj + sAlex with a reduced learning rate (1e-4), an upweighted energy loss (energy:force:stress weights of 1:1:0.01 instead of 1:5:0.01), and stochastic weight averaging (SWA).
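The stage-dependent loss weighting above can be sketched as follows. This is a hedged illustration, not the training code: the weight ratios follow the step description (1:5:0.01 changing to 1:1:0.01), the per-target loss values are dummy numbers, and the real objective uses the Huber losses listed under Hyperparameters.

```python
# Stage-dependent energy/force/stress loss weights (ratios from the card;
# note the fine-tuning stage lowers the force weight, which upweights the
# energy term relative to forces).
PRETRAIN_WEIGHTS = {"energy": 1.0, "force": 5.0, "stress": 0.01}  # OMat24 stage
FINETUNE_WEIGHTS = {"energy": 1.0, "force": 1.0, "stress": 0.01}  # MPtrj + sAlex stage

def total_loss(per_target: dict, weights: dict) -> float:
    """Weighted sum of per-target losses."""
    return sum(weights[k] * per_target[k] for k in weights)

# Dummy per-target loss values, purely for illustration.
per_target = {"energy": 0.02, "force": 0.05, "stress": 0.10}
pre = total_loss(per_target, PRETRAIN_WEIGHTS)   # 0.02 + 0.25 + 0.001
fine = total_loss(per_target, FINETUNE_WEIGHTS)  # 0.02 + 0.05 + 0.001
```

With identical per-target losses, the fine-tuning weights shrink the force contribution fivefold, shifting the optimization pressure toward energies.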
Hyperparameters
- max_force: 0.005
- max_steps: 500
- ase_optimizer: GOQN
- cell_filter: FrechetCellFilter
- optimizer: AdamW
- weight_decay: 1e-8
- graph_construction_radius: 6
- sph_harmonics_l_max: 4
- n_layers: 6
- n_features: 320 (l=0 scalars), 96 (l=1 vectors), 64 (l=2 tensors), 32 (l=3,4 tensors)
- parity: false
- zbl_potential: true
- type_embed_num_features: 32
- polynomial_cutoff: 5
- n_radial_bessel_basis: 8
- loss: Huber (delta=0.01 for energy, delta=0.1 for stress, stratified delta (0.01, 0.007, 0.004, 0.001) for force)
- loss_weights: {"energy": 1, "force": 5, "stress": 0.1}
- batch_size: 640
- initial_learning_rate: 0.005
- gradient_clip_val: 0.25
- learning_rate_schedule: ReduceLROnPlateau (factor=0.1, patience=100, min_lr=1e-6)
- epochs: 30
- max_neighbors: null
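A minimal sketch of the Huber losses listed above. The per-target deltas come from the card; how the stratified force delta is selected is not specified there, so the force-magnitude thresholds below are placeholders, not the model's actual binning.

```python
def huber(residual: float, delta: float) -> float:
    """Huber loss: quadratic for |r| <= delta, linear beyond."""
    r = abs(residual)
    if r <= delta:
        return 0.5 * r * r
    return delta * (r - 0.5 * delta)

# Deltas from the hyperparameter list.
ENERGY_DELTA = 0.01
STRESS_DELTA = 0.1
FORCE_DELTAS = (0.01, 0.007, 0.004, 0.001)  # stratified deltas for forces

def force_delta(force_norm: float, thresholds=(0.1, 0.05, 0.01)) -> float:
    """Pick a stratified force delta by reference force magnitude.

    The thresholds here are ASSUMED for illustration; only the four delta
    values themselves appear on the card.
    """
    for t, d in zip(thresholds, FORCE_DELTAS):
        if force_norm >= t:
            return d
    return FORCE_DELTAS[-1]
```

A small residual (e.g. 0.005 eV on energy) falls in the quadratic regime of `huber(..., ENERGY_DELTA)`, while outliers are penalized only linearly, which keeps noisy DFT labels from dominating the gradient.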