NequIP-OAM-XL
Predictions
Convex hull distance prediction errors projected onto elements

[Periodic-table heatmap: per-element convex hull distance prediction error. Largest errors: Pu 0.162, Te 0.103, Fe 0.0846, O 0.0788, Mn 0.0758, H 0.0681. The noble gases, Po–Ra, and elements beyond Pu report 0, presumably because they do not occur in the test set.]
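The quantity visualized above, distance to the convex hull, can be illustrated with a minimal pure-Python sketch for a hypothetical binary A–B system; the compositions and formation energies below are made up for illustration, not taken from the benchmark:

```python
def _cross(o, a, b):
    """2D cross product of vectors OA and OB; > 0 means a left turn."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def lower_hull(points):
    """Lower convex hull of (composition, energy) points, left to right."""
    hull = []
    for p in sorted(points):
        while len(hull) >= 2 and _cross(hull[-2], hull[-1], p) <= 0:
            hull.pop()
        hull.append(p)
    return hull

def e_above_hull(x, energy, hull):
    """Energy above the hull at composition x, by linear interpolation."""
    for (x0, e0), (x1, e1) in zip(hull, hull[1:]):
        if x0 <= x <= x1:
            e_hull = e0 + (e1 - e0) * (x - x0) / (x1 - x0)
            return energy - e_hull
    raise ValueError("composition outside hull range")

# Hypothetical A-B system: elemental endpoints plus three compounds,
# as (fraction of B, formation energy per atom) pairs.
entries = [(0.0, 0.0), (0.25, -0.1), (0.5, -0.4), (0.75, -0.05), (1.0, 0.0)]
hull = lower_hull(entries)  # [(0.0, 0.0), (0.5, -0.4), (1.0, 0.0)]
print(e_above_hull(0.75, -0.05, hull))  # ~0.15: this compound is unstable
```

A per-element error heatmap like the one above is then obtained by comparing predicted and reference hull distances and projecting the absolute errors onto the elements present in each structure.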
Model Info
- Model Version: 0.1
- Model Type: UIP
- Targets: EFSG
- Openness: OSOD
- Train Task: S2EFS
- Test Task: IS2RE-SR
- Trained for Benchmark: Yes
Training Set
- OMat24: 101M structures from 3.23M materials
- Subsampled Alexandria: 10.4M structures from 3.23M materials
- MPtrj: 1.58M structures from 146k materials
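Subsampling a trajectory dataset like Alexandria amounts to keeping only a fraction of the frames from each relaxation trajectory. A stdlib-only sketch follows; the stride-based selection rule here is an assumption for illustration, not the actual sAlex procedure:

```python
def subsample_trajectory(frames, stride=3, keep_last=True):
    """Keep every `stride`-th frame of a relaxation trajectory, always
    retaining the final (relaxed) frame. Selection rule is illustrative."""
    kept = frames[::stride]
    if keep_last and frames and frames[-1] not in kept:
        kept.append(frames[-1])
    return kept

# Hypothetical trajectories: material_id -> list of frame indices.
trajectories = {"mat-1": list(range(10)), "mat-2": list(range(4))}
subsampled = {mid: subsample_trajectory(f) for mid, f in trajectories.items()}
print(subsampled)  # {'mat-1': [0, 3, 6, 9], 'mat-2': [0, 3]}
```

Keeping the final frame matters because the relaxed endpoint is the structure whose energy enters hull-stability evaluation.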
Description
Extra-large NequIP foundation interatomic potential; see https://www.nequip.net/models/mir-group/NequIP-OAM-XL:0.1 for details and https://arxiv.org/abs/2504.16068 for the model and training infrastructure.
Steps
Training was performed in two stages: (1) pre-training on OMat24; (2) fine-tuning on MPtrj + subsampled Alexandria with a reduced learning rate (1e-4), energy loss upweighting (energy:force:stress weights of 1:1:0.01 instead of 1:5:0.01), and stochastic weight averaging (SWA).
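The SWA step in stage (2) can be illustrated with a stdlib-only sketch: an equal-weight running average of parameter snapshots collected along the fine-tuning trajectory (the snapshots and their schedule here are hypothetical):

```python
def swa_update(avg_weights, new_weights, n_averaged):
    """Fold one parameter snapshot into the running SWA average.
    After processing k snapshots, `avg_weights` is their plain mean."""
    return [a + (w - a) / (n_averaged + 1)
            for a, w in zip(avg_weights, new_weights)]

# Hypothetical flattened parameter snapshots from three checkpoints.
snapshots = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
avg = snapshots[0]
for n, snap in enumerate(snapshots[1:], start=1):
    avg = swa_update(avg, snap, n)
print(avg)  # equal-weight mean of the snapshots: [3.0, 4.0]
```

Averaging late-training checkpoints in this way tends to land the weights in a flatter region of the loss surface than any single checkpoint.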
Hyperparameters
- max_force: 0.005
- max_steps: 500
- ase_optimizer: "GOQN"
- cell_filter: "FrechetCellFilter"
- optimizer: "AdamW"
- weight_decay: 1e-8
- graph_construction_radius: 6
- sph_harmonics_l_max: 4
- n_layers: 6
- n_features: "320 (l=0 scalars), 96 (l=1 vectors), 64 (l=2 tensors), 32 (l=3,4 tensors)"
- parity: false
- zbl_potential: true
- type_embed_num_features: 32
- polynomial_cutoff: 5
- n_radial_bessel_basis: 8
- loss: "Huber - delta=0.01 for energy, delta=0.1 for stress, stratified delta (0.01, 0.007, 0.004, 0.001) for force"
- loss_weights: {"energy": 1, "force": 5, "stress": 0.1}
- batch_size: 640
- initial_learning_rate: 0.005
- gradient_clip_val: 0.25
- learning_rate_schedule: "ReduceLROnPlateau - factor=0.1, patience=100, min_lr=1e-6"
- epochs: 30
- max_neighbors: null
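The composite loss defined by the hyperparameters above can be sketched in plain Python: a standard Huber loss per target, combined with the listed weights and deltas. For simplicity a single delta of 0.01 stands in for the stratified force deltas, whose binning is not specified here:

```python
def huber(residual, delta):
    """Standard Huber loss: quadratic below `delta`, linear above it."""
    r = abs(residual)
    return 0.5 * r * r if r <= delta else delta * (r - 0.5 * delta)

def total_loss(e_err, f_errs, s_errs):
    """Weighted energy/force/stress loss with weights 1:5:0.1 and the
    listed deltas (single force delta used in place of stratification)."""
    e_term = huber(e_err, delta=0.01)
    f_term = sum(huber(r, delta=0.01) for r in f_errs) / len(f_errs)
    s_term = sum(huber(r, delta=0.1) for r in s_errs) / len(s_errs)
    return 1.0 * e_term + 5.0 * f_term + 0.1 * s_term

# Small residual stays quadratic; large residual is penalized linearly.
print(huber(0.005, delta=0.01))  # 1.25e-05
print(huber(0.1, delta=0.01))    # 0.00095
```

The linear regime above delta keeps occasional large force or stress outliers from dominating the gradient, which is the usual reason to prefer Huber over plain MSE for this kind of training data.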