Stockfish Download.
Forum rules
Any post on the Stockfish download MUST include a direct link to download the new executable - MANDATORY.
APOCALYPSE
Re: Stockfish Download.
Author: Guenther Demetz
Date: Sun Jun 4 23:01:14 2023 +0200
Timestamp: 1685912474
Simplify away SEE verification
After 4 simplifications over PR#4453 the idea does not yield significant
improvement anymore. Maybe also
https://tests.stockfishchess.org/tests/ ... 2c3394c1c5 was
a fluke.
Passed non-regression bounds:
STC:
https://tests.stockfishchess.org/tests/ ... 583146d873
LLR: 2.93 (-2.94,2.94) <-1.75,0.25>
Total: 131936 W: 35040 L: 34930 D: 61966 Elo +0.29
Ptnml(0-2): 336, 14559, 36035, 14735, 303
LTC:
https://tests.stockfishchess.org/tests/ ... cf2fb213cd
LLR: 2.95 (-2.94,2.94) <-1.75,0.25>
Total: 407700 W: 109999 L: 110164 D: 187537 Elo -0.14
Ptnml(0-2): 279, 39913, 123689, 39632, 337
closes https://github.com/official-stockfish/S ... /pull/4595
bench: 2675974
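For reference, the Elo figures in these fishtest summaries follow directly from the W/L/D totals. A quick sanity check of the STC result above, assuming the standard logistic Elo formula (the error bars additionally need the Ptnml counts, which this ignores):
```python
import math

def elo_from_wdl(wins: int, losses: int, draws: int) -> float:
    """Logistic Elo estimate from game totals (no error bars here)."""
    games = wins + losses + draws
    score = (wins + 0.5 * draws) / games          # average score per game
    return -400.0 * math.log10(1.0 / score - 1.0)

# STC totals from the test above: prints roughly 0.29
print(round(elo_from_wdl(35040, 34930, 61966), 2))
```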
APOCALYPSE
Re: Stockfish Download.
Author: Linmiao Xu
Date: Sun Jun 4 23:05:28 2023 +0200
Timestamp: 1685912728
Remove static eval threshold for extensions when giving check
Passed non-regression STC:
https://tests.stockfishchess.org/tests/ ... 3c4c9f4f2a
LLR: 2.93 (-2.94,2.94) <-1.75,0.25>
Total: 114688 W: 30701 L: 30571 D: 53416 Elo +0.39
Ptnml(0-2): 336, 12708, 31136, 12818, 346
Passed non-regression LTC:
https://tests.stockfishchess.org/tests/ ... 5b572de770
LLR: 2.94 (-2.94,2.94) <-1.75,0.25>
Total: 107310 W: 28920 L: 28796 D: 49594 Elo +0.40
Ptnml(0-2): 33, 10427, 32621, 10531, 43
closes https://github.com/official-stockfish/S ... /pull/4599
bench 2597974
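A hedged sketch of the change described above; the names and the exact form of the old static-eval gate are assumptions for illustration, not the engine's actual code:
```python
def check_extension(gives_check: bool, static_eval: int,
                    old_behaviour: bool = False, THRESHOLD: int = 0) -> int:
    """Illustrative only: the patch drops the static-eval gate, so every
    checking move is extended rather than only those meeting a margin."""
    if not gives_check:
        return 0
    if old_behaviour:                   # pre-patch: gated by static eval
        return 1 if abs(static_eval) > THRESHOLD else 0
    return 1                            # post-patch: always extend checks
```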
APOCALYPSE
Re: Stockfish Download.
Author: Michael Chaly
Date: Sun Jun 4 23:12:23 2023 +0200
Timestamp: 1685913143
Move internal iterative reduction before probcut
This patch moves IIR before probcut, which allows probcut
to be performed at lower depths. Comments in IIR are also slightly updated.
Passed STC:
https://tests.stockfishchess.org/tests/ ... e4cfa749fd
LLR: 2.94 (-2.94,2.94) <0.00,2.00>
Total: 387616 W: 103295 L: 102498 D: 181823 Elo +0.71
Ptnml(0-2): 976, 42322, 106381, 43187, 942
Passed LTC:
https://tests.stockfishchess.org/tests/ ... 3c4c9f42e8
LLR: 2.94 (-2.94,2.94) <0.50,2.50>
Total: 202836 W: 54901 L: 54281 D: 93654 Elo +1.06
Ptnml(0-2): 85, 19609, 61422, 20205, 97
closes https://github.com/official-stockfish/S ... /pull/4597
bench 2551691
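Schematically, the reordering can be pictured as below; the thresholds and the helper are placeholders for illustration, not the real search code:
```python
# Placeholder thresholds for illustration only (not Stockfish's values).
IIR_MIN_DEPTH = 4
PROBCUT_MIN_DEPTH = 3

def effective_probcut_depth(depth, has_tt_move):
    """Toy model of the reordering: IIR reduces the remaining depth first,
    so the probcut condition is then checked against the reduced depth."""
    if not has_tt_move and depth >= IIR_MIN_DEPTH:
        depth -= 1                       # internal iterative reduction
    return depth if depth >= PROBCUT_MIN_DEPTH else None

# A depth-5 node without a TT move now reaches probcut with depth 4.
print(effective_probcut_depth(5, has_tt_move=False))   # -> 4
```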
APOCALYPSE
Re: Stockfish Download.
Author: peregrineshahin
Date: Tue Jun 6 21:07:43 2023 +0200
Timestamp: 1686078463
Fix no previous moves on root.
Guards against no previous move existing when qSearch is called on the root node (i.e. when razoring).
Passed Non-regression STC:
https://tests.stockfishchess.org/tests/ ... 400e408143
LLR: 2.94 (-2.94,2.94) <-1.75,0.25>
Total: 53120 W: 14167 L: 13976 D: 24977 Elo +1.25
Ptnml(0-2): 109, 5597, 14981, 5740, 133
closes https://github.com/official-stockfish/S ... /pull/4604
Bench: 2551691
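A minimal sketch of the kind of guard described, with illustrative names rather than the engine's own:
```python
def previous_move(stack, ply):
    """When qsearch is entered at the root (e.g. via razoring) there is no
    previous move on the search stack, so fall back to a sentinel instead
    of reading past the start of the stack."""
    return stack[ply - 1].current_move if ply > 0 else None
```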
APOCALYPSE
Re: Stockfish Download.
Author: disservin
Date: Tue Jun 6 21:12:24 2023 +0200
Timestamp: 1686078744
Add binaries to releases with github actions
When a release is made with a tag matching sf_*, the binaries will also be uploaded to the release as assets.
closes https://github.com/official-stockfish/S ... /pull/4596
No functional change.
APOCALYPSE
Re: Stockfish Download.
Author: Linmiao Xu
Date: Tue Jun 6 21:17:36 2023 +0200
Timestamp: 1686079056
Update default net to nn-0dd1cebea573.nnue
Created by retraining an earlier epoch of the experiment leading to the first SFNNv6 net
on a more-randomized version of the nn-e1fb1ade4432.nnue dataset mixed with unfiltered
T80 apr2023 data. Trained using early-fen-skipping 28 and max-epoch 960.
The trainer settings and epochs used in the 5-step training sequence leading here were:
1. train from scratch for 400 epochs, lambda 1.0, constant LR 9.75e-4, T79T77-filter-v6-dd.min.binpack
2. retrain ep379, max-epoch 800, end-lambda 0.75, T60T70wIsRightFarseerT60T74T75T76.binpack
3. retrain ep679, max-epoch 800, end-lambda 0.75, skip 28, nn-e1fb1ade4432 dataset
4. retrain ep799, max-epoch 800, end-lambda 0.7, skip 28, nn-e1fb1ade4432 dataset
5. retrain ep439, max-epoch 960, end-lambda 0.7, skip 28, shuffled nn-e1fb1ade4432 + T80 apr2023
This net was epoch 559 of the final (step 5) retraining:
```bash
python3 easy_train.py \
--experiment-name L1-1536-Re4-leela96-dfrc99-T60novdec-v2-T80juntonovjanfebT79aprmayT78jantosepT77dec-v6dd-T80apr-shuffled-sk28 \
--training-dataset /data/leela96-dfrc99-T60novdec-v2-T80juntonovjanfebT79aprmayT78jantosepT77dec-v6dd-T80apr.binpack \
--nnue-pytorch-branch linrock/nnue-pytorch/misc-fixes-L1-1536 \
--early-fen-skipping 28 \
--start-lambda 1.0 \
--end-lambda 0.7 \
--max_epoch 960 \
--start-from-engine-test-net False \
--start-from-model /data/L1-1536-Re3-nn-epoch439.nnue \
--engine-test-branch linrock/Stockfish/L1-1536 \
--lr 4.375e-4 \
--gamma 0.995 \
--tui False \
--seed $RANDOM \
--gpus "0,"
```
During data preparation, most binpacks were unminimized by removing positions with
score 32002 (`VALUE_NONE`). This trades a larger dataset filesize on disk for more
randomness of positions in the interleaved datasets.
The code used for unminimizing is at:
https://github.com/linrock/Stockfish/tr ... s-unminify
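The filtering criterion itself is simple; a toy illustration operating on plain score values rather than the binary .binpack format:
```python
VALUE_NONE = 32002   # score used to mark positions without a usable eval

def keep_position(score: int) -> bool:
    """Drop positions whose stored score is VALUE_NONE, keep the rest."""
    return score != VALUE_NONE

scores = [31, -54, 32002, 112, 32002]
print([s for s in scores if keep_position(s)])   # -> [31, -54, 112]
```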
For preparing the dataset used in this experiment:
```bash
python3 interleave_binpacks.py \
leela96-filt-v2.binpack \
dfrc99-16tb7p-eval-filt-v2.binpack \
filt-v6-dd-min/test60-novdec2021-12tb7p-filter-v6-dd.min-mar2023.unmin.binpack \
filt-v6-dd-min/test80-aug2022-16tb7p-filter-v6-dd.min-mar2023.unmin.binpack \
filt-v6-dd-min/test80-sep2022-16tb7p-filter-v6-dd.min-mar2023.unmin.binpack \
filt-v6-dd-min/test80-jun2022-16tb7p-filter-v6-dd.min-mar2023.unmin.binpack \
filt-v6-dd/test80-jul2022-16tb7p-filter-v6-dd.binpack \
filt-v6-dd/test80-oct2022-16tb7p-filter-v6-dd.binpack \
filt-v6-dd/test80-nov2022-16tb7p-filter-v6-dd.binpack \
filt-v6-dd-min/test80-jan2023-3of3-16tb7p-filter-v6-dd.min-mar2023.unmin.binpack \
filt-v6-dd-min/test80-feb2023-16tb7p-filter-v6-dd.min-mar2023.unmin.binpack \
filt-v6-dd/test79-apr2022-16tb7p-filter-v6-dd.binpack \
filt-v6-dd/test79-may2022-16tb7p-filter-v6-dd.binpack \
filt-v6-dd-min/test78-jantomay2022-16tb7p-filter-v6-dd.min-mar2023.unmin.binpack \
filt-v6-dd/test78-juntosep2022-16tb7p-filter-v6-dd.binpack \
filt-v6-dd/test77-dec2021-16tb7p-filter-v6-dd.binpack \
test80-apr2023-2tb7p.binpack \
/data/leela96-dfrc99-T60novdec-v2-T80juntonovjanfebT79aprmayT78jantosepT77dec-v6dd-T80apr.binpack
```
T80 apr2023 data was converted using lc0-rescorer with ~2 TB of tablebases and can be found at:
https://robotmoon.com/nnue-training-data/
Local elo at 25k nodes per move vs. nn-e1fb1ade4432.nnue (L1 size 1024):
nn-epoch559.nnue : 25.7 +/- 1.6
Passed STC:
https://tests.stockfishchess.org/tests/ ... f0f53f9cbb
LLR: 2.95 (-2.94,2.94) <0.00,2.00>
Total: 59200 W: 16000 L: 15660 D: 27540 Elo +2.00
Ptnml(0-2): 159, 6488, 15996, 6768, 189
Passed LTC:
https://tests.stockfishchess.org/tests/ ... 400e4085d8
LLR: 2.95 (-2.94,2.94) <0.50,2.50>
Total: 58800 W: 16002 L: 15657 D: 27141 Elo +2.04
Ptnml(0-2): 44, 5607, 17748, 5962, 39
closes https://github.com/official-stockfish/S ... /pull/4606
bench 2141197
APOCALYPSE
Re: Stockfish Download.
Author: Linmiao Xu
Date: Tue Jun 6 21:21:56 2023 +0200
Timestamp: 1686079316
Remove optimism multiplier in nnue eval calculation
The same formula had passed SPRT against an earlier version of master.
Passed non-regression STC vs. d99942f:
https://tests.stockfishchess.org/tests/ ... 8e1d98f72e
LLR: 2.94 (-2.94,2.94) <-1.75,0.25>
Total: 118720 W: 31402 L: 31277 D: 56041 Elo +0.37
Ptnml(0-2): 301, 13148, 32344, 13259, 308
Passed non-regression LTC vs. d99942f:
https://tests.stockfishchess.org/tests/ ... 8e1d991146
LLR: 2.94 (-2.94,2.94) <-1.75,0.25>
Total: 74286 W: 20019 L: 19863 D: 34404 Elo +0.73
Ptnml(0-2): 31, 7189, 22540, 7359, 24
The earlier patch had conflicted with a faster SPRT passer, so this
was tested again after rebasing on latest master.
Passed non-regression STC:
https://tests.stockfishchess.org/tests/ ... 400e408790
LLR: 2.94 (-2.94,2.94) <-1.75,0.25>
Total: 166176 W: 44309 L: 44234 D: 77633 Elo +0.16
Ptnml(0-2): 461, 18252, 45557, 18387, 431
Passed non-regression LTC:
https://tests.stockfishchess.org/tests/ ... bc11255e7b
LLR: 2.95 (-2.94,2.94) <-1.75,0.25>
Total: 28170 W: 7713 L: 7513 D: 12944 Elo +2.47
Ptnml(0-2): 14, 2609, 8635, 2817, 10
closes https://github.com/official-stockfish/S ... /pull/4607
bench 2503095
APOCALYPSE
Re: Stockfish Download.
Author: Linmiao Xu
Date: Sun Jun 11 15:23:52 2023 +0200
Timestamp: 1686489832
Update default net to nn-ea57bea57e32.nnue
Created by retraining an earlier epoch (ep659) of the experiment that led to the first SFNNv6 net:
- First retrained on the nn-0dd1cebea573 dataset
- Then retrained with skip 20 on a smaller dataset containing unfiltered Leela data
- And then retrained again with skip 27 on the nn-0dd1cebea573 dataset
The equivalent 7-step training sequence from scratch that led here was:
1. max-epoch 400, lambda 1.0, constant LR 9.75e-4, T79T77-filter-v6-dd.min.binpack
ep379 chosen for retraining in step2
2. max-epoch 800, end-lambda 0.75, T60T70wIsRightFarseerT60T74T75T76.binpack
ep679 chosen for retraining in step3
3. max-epoch 800, end-lambda 0.75, skip 28, nn-e1fb1ade4432 dataset
ep799 chosen for retraining in step4
4. max-epoch 800, end-lambda 0.7, skip 28, nn-e1fb1ade4432 dataset
ep759 became nn-8d69132723e2.nnue (first SFNNv6 net)
ep659 chosen for retraining in step5
5. max-epoch 800, end-lambda 0.7, skip 28, nn-0dd1cebea573 dataset
ep759 chosen for retraining in step6
6. max-epoch 800, end-lambda 0.7, skip 20, leela-dfrc-v2-T77decT78janfebT79aprT80apr.binpack
ep639 chosen for retraining in step7
7. max-epoch 800, end-lambda 0.7, skip 27, nn-0dd1cebea573 dataset
ep619 became nn-ea57bea57e32.nnue
For the last retraining (step7):
```bash
python3 easy_train.py \
--experiment-name L1-1536-Re6-masterShuffled-ep639-sk27-Re5-leela-dfrc-v2-T77toT80small-Re4-masterShuffled-ep659-Re3-sameAs-Re2-leela96-dfrc99-16t-v2-T60novdecT80juntonovjanfebT79aprmayT78jantosepT77dec-v6dd-Re1-LeelaFarseer-new-T77T79 \
--training-dataset /data/leela96-dfrc99-T60novdec-v2-T80juntonovjanfebT79aprmayT78jantosepT77dec-v6dd-T80apr.binpack \
--nnue-pytorch-branch linrock/nnue-pytorch/misc-fixes-L1-1536 \
--early-fen-skipping 27 \
--start-lambda 1.0 \
--end-lambda 0.7 \
--max_epoch 800 \
--start-from-engine-test-net False \
--start-from-model /data/L1-1536-Re5-leela-dfrc-v2-T77toT80small-epoch639.nnue \
--lr 4.375e-4 \
--gamma 0.995 \
--tui False \
--seed $RANDOM \
--gpus "0,"
```
For preparing the step6 leela-dfrc-v2-T77decT78janfebT79aprT80apr.binpack dataset:
```bash
python3 interleave_binpacks.py \
leela96-filt-v2.binpack \
dfrc99-16tb7p-eval-filt-v2.binpack \
test77-dec2021-16tb7p.no-db.min-mar2023.binpack \
test78-janfeb2022-16tb7p.no-db.min-mar2023.binpack \
test79-apr2022-16tb7p-filter-v6-dd.binpack \
test80-apr2022-16tb7p.no-db.min-mar2023.binpack \
/data/leela-dfrc-v2-T77decT78janfebT79aprT80apr.binpack
```
The unfiltered Leela data used for the step6 dataset can be found at:
https://robotmoon.com/nnue-training-data
Local elo at 25k nodes per move:
nn-epoch619.nnue : 2.3 +/- 1.9
Passed STC:
https://tests.stockfishchess.org/tests/ ... d9fc6d7cc8
LLR: 2.94 (-2.94,2.94) <0.00,2.00>
Total: 40992 W: 11017 L: 10706 D: 19269 Elo +2.64
Ptnml(0-2): 113, 4400, 11170, 4689, 124
Passed LTC:
https://tests.stockfishchess.org/tests/ ... d9fc6d8208
LLR: 2.94 (-2.94,2.94) <0.50,2.50>
Total: 129174 W: 35059 L: 34579 D: 59536 Elo +1.29
Ptnml(0-2): 66, 12548, 38868, 13050, 55
closes https://github.com/official-stockfish/S ... /pull/4611
bench: 2370027
APOCALYPSE
Re: Stockfish Download.
Author: AndrovT
Date: Mon Jun 12 20:41:27 2023 +0200
Timestamp: 1686595287
Use block sparse input for the first layer.
Use block sparse input for the first fully connected layer on architectures with at least SSSE3.
Depending on the CPU architecture, this yields a speedup of up to 10%, e.g.
```
Result of 100 runs of 'bench 16 1 13 default depth NNUE'
base (...ockfish-base) = 959345 +/- 7477
test (...ckfish-patch) = 1054340 +/- 9640
diff = +94995 +/- 3999
speedup = +0.0990
P(speedup > 0) = 1.0000
CPU: 8 x AMD Ryzen 7 5700U with Radeon Graphics
Hyperthreading: on
```
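As a rough illustration of the block-sparse idea (a NumPy sketch, not the actual SIMD implementation; block size, dtypes, and shapes are arbitrary): only input blocks containing a nonzero activation contribute to the affine transform, so fully-zero blocks are skipped.
```python
import numpy as np

def affine_sparse_input(W, b, x, block=4):
    """Toy first-layer affine transform that skips all-zero input blocks;
    W is (out, in), b is (out,), x is a mostly-zero (in,) vector."""
    out = b.astype(np.int64).copy()
    for i in range(0, x.size, block):
        xb = x[i:i + block]
        if not xb.any():                  # whole block is zero: skip it
            continue
        out += W[:, i:i + block] @ xb     # dense work only for live blocks
    return out

rng = np.random.default_rng(0)
x = np.zeros(32, dtype=np.int64)
x[[3, 17]] = rng.integers(1, 127, size=2)      # sparse clipped activations
W = rng.integers(-64, 64, size=(8, 32), dtype=np.int64)
b = rng.integers(-100, 100, size=8, dtype=np.int64)
assert np.array_equal(affine_sparse_input(W, b, x), W @ x + b)
```
Skipping all-zero blocks pays off when most input blocks are entirely zero, which the weight reordering mentioned below is meant to make more likely.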
Passed STC:
https://tests.stockfishchess.org/tests/ ... 77ca12409c
LLR: 2.93 (-2.94,2.94) <0.00,2.00>
Total: 8864 W: 2479 L: 2223 D: 4162 Elo +10.04
Ptnml(0-2): 13, 829, 2504, 1061, 25
This commit includes a net with reordered weights, to increase the likelihood of block sparse inputs,
but otherwise equivalent to the previous master net (nn-ea57bea57e32.nnue).
Activation data collected with https://github.com/AndrovT/Stockfish/tr ... ctivations, running bench 16 1 13 varied_1000.epd depth NNUE on this data. Net parameters permuted with https://gist.github.com/AndrovT/9e3fbae ... 7e02094cb3.
closes https://github.com/official-stockfish/S ... /pull/4612
No functional change
APOCALYPSE
Re: Stockfish Download.
Author: Andreas Matthies
Date: Tue Jun 13 08:45:25 2023 +0200
Timestamp: 1686638725
Fix for MSVC compilation.
MSVC needs two more explicit casts to compile the new affine_transform_sparse_input.
See https://www.intel.com/content/www/us/en ... stsi256_ps
and https://www.intel.com/content/www/us/en ... stsi128_ps
closes https://github.com/official-stockfish/S ... /pull/4616
No functional change