From TalkChess
SPCC: EAS-Tool major update
Post by pohl4711 » Tue Nov 15, 2022 12:38 pm
I released a major update for the EAS-tool (Engines Aggressiveness Statistics-tool):
There are now two tools: the EAS-tool and the Gauntlet EAS-tool. The EAS-tool evaluates all played games in a source.pgn file, for all engines/players. The Gauntlet EAS-tool evaluates only the engine/player which played the most games in the source.pgn file. The Gauntlet EAS-tool is IMHO a good thing for engine developers, when they test their dev version against several opponents and are only interested in the EAS score of their own engine... or for analyzing human players (search for one player in the Megabase and calculate his EAS score, without doing so for all of his opponents).
The scoring system of the EAS-tools was completely rewritten and improved. Please check the ReadMe file in the EAS-tool download for the full explanation, because the scoring system has meanwhile become really complex...
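The gauntlet selection step is easy to picture: find the name that appears in the most games. A minimal sketch in Python with the python-chess library (an illustration of the idea only, not the EAS-tool's actual code):

```python
# Sketch: find the "gauntlet" player, i.e. the engine/player with the
# most games in a source PGN (illustration only, not the EAS-tool's code).
from collections import Counter

import chess.pgn

def gauntlet_player(pgn_path: str) -> str:
    """Return the player name that appears in the most games."""
    counts: Counter = Counter()
    with open(pgn_path, encoding="utf-8", errors="replace") as handle:
        while True:
            headers = chess.pgn.read_headers(handle)  # headers only, no moves
            if headers is None:
                break
            counts[headers.get("White", "?")] += 1
            counts[headers.get("Black", "?")] += 1
    return counts.most_common(1)[0][0]

print(gauntlet_player("source.pgn"))
```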
Download the EAS-tool V5.2 on my website or right here:
https://www.sp-cc.de/files/engines_aggr ... cs_tool.7z
(The EAS-ratinglist on my website is already updated with the scores of the new EAS-tool)
Re: From TalkChess
SPCC: 750000 clicks and a new tool released
Post by pohl4711 » Sat Nov 12, 2022 8:21 am
My website has reached 750,000 clicks/visits!!!
I never expected so much interest in my little website and my computerchess projects...
As a gift to the computerchess community, I am releasing a new tool: the Interesting Wins Search Tool.
https://www.sp-cc.de/files/interesting_ ... ch_tool.7z
The two output files of the IWS-Tool contain different types of wins:
interesting_wins.pgn contains:
1) Queen Sacrifices, followed by
2) 5+ PawnUnit Sacrifices, followed by
3) 4 PawnUnit Sacrifices, followed by
4) 3 PawnUnit Sacrifices, followed by
5) 2 PawnUnit Sacrifices, followed by
6) 1 PawnUnit Sacrifices, followed by
7) Games that ended before an endgame (by material) was reached, followed by
8) Games with material imbalance (Rook vs. Bishop and 2 pawns for example)
very_interesting_wins.pgn contains: the same as above, but without categories 6 and 8, because, if the IWS-Tool is used on huge databases, the interesting_wins.pgn file can be very huge, too.
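The split between the two output files is simple; a minimal sketch (assumed behaviour reconstructed from the description above, not the IWS-Tool's actual code), where games are taken as already classified into categories 1-8:

```python
# Sketch of the two-file split: very_interesting_wins.pgn drops
# category 6 (1 PawnUnit Sacrifices) and category 8 (material imbalance).
from typing import Iterable, Tuple

def write_outputs(classified_games: Iterable[Tuple[int, str]]) -> None:
    """classified_games yields (category, pgn_text) pairs, categories 1-8."""
    with open("interesting_wins.pgn", "w") as all_wins, \
         open("very_interesting_wins.pgn", "w") as best_wins:
        for category, pgn_text in classified_games:
            all_wins.write(pgn_text + "\n\n")
            if category not in (6, 8):
                best_wins.write(pgn_text + "\n\n")
```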
The games in the output files are sorted in two ways:
First: The games are sorted by categories (category 1 is followed by category 2, 3, ... etc.).
Second: In each category, the games are sorted by length (0-19 moves, followed by 20-29 moves, followed by 30-39 moves... and so on, up to 120 moves and beyond). So, in each category, the shortest wins are at the beginning and followed by the longer wins...
And each game gets a new Annotator tag, so it is clear which category the game belongs to.
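Read literally, that is a two-level sort key plus a header rewrite. A minimal sketch with python-chess (game_category() is a hypothetical stand-in for the tool's internal classifier, and games is assumed to be a list of parsed games):

```python
# Sketch of the sort-and-tag step described above (not the tool's code).
import chess.pgn  # games: list of chess.pgn.Game; game_category(): hypothetical

def length_bucket(moves: int) -> int:
    """0-19 moves -> bucket 0, 20-29 -> 1, ..., capped at the 120+ bucket."""
    if moves < 20:
        return 0
    return min((moves - 10) // 10, 11)

def sort_key(game: chess.pgn.Game) -> tuple:
    moves = int(game.headers.get("PlyCount", "0")) // 2  # plies -> full moves
    return (game_category(game), length_bucket(moves))

games.sort(key=sort_key)  # categories in order, shortest wins first in each
for game in games:
    game.headers["Annotator"] = f"IWS category {game_category(game)}"
```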
The IWS-Tool does not output any statistics, because I wanted the tool to be as fast as possible.
And - believe me - it is VERY fast! Filtering my SPCC-ratinglist gamebase, which contains 193000 games, only takes around 3 minutes on a modern PC (!!!). If you are interested in statistics, you can, of course, use my other free tools: EAS-Tool, Sacrifice Games Search Tool and Short Games Analyzer Tool.
So, the IWS-Tool can be used to quickly filter all interesting games out of any PGN database into a single PGN file, containing the interesting wins, well sorted and annotated.
Re: From TalkChess
Human Beats Top Go Computer
Post by towforce » Thu May 04, 2023 7:36 pm
Kellin Pelrine, an amateur Go player, and his team uncovered a weakness in the play of the top Go programs (a misunderstanding of the nature of a group of stones, a mistake a human simply wouldn't make). They built a program that exploited this weakness, and it consistently beat the top Go program.
More here >>
https://talkchess.com/forum3/viewtopic. ... 397ef92289
Re: From TalkChess
How did chess pieces get their names?
One player’s pawn is another’s farmer. And at one time, the queen was a rather powerless virgin.
Go here for more... a very interesting article >>
https://bigthink.com/strange-maps/names ... lasobscura
Re: From TalkChess
Re: EN Engine Test 2023
Post by Eduard » Fri May 12, 2023 3:44 pm
GUI Powerfritz 18, Ryzen 3900X, 20 threads, 4 GB hash, all 3-4-5-6-man Syzygy tablebases. Time: 60 s per move.
Updated May 12, 2023:
SugaR AI SE was removed from my ENET-2023 list. What is the reason? I heard that the search code of SugaR AI SE is 1:1 identical to the Stockfish dev released at the same time. I compared the code myself and got the same result. I don't want a duplicate Stockfish dev in the list. I am therefore also surprised that the author of SugaR AI SE mentions only himself as the author. With a 100% Stockfish dev search code, the other SF developers should also be mentioned.
[I have not checked this as yet so cannot confirm/refute his observation]
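For anyone who wants to check such a claim themselves, one simple way (not necessarily Eduard's method) is to diff the two unpacked source trees with Python's standard library; the directory names below are placeholders:

```python
# Recursively compare two source trees and report identical, differing,
# and unique files (directory names are placeholders).
import filecmp

cmp = filecmp.dircmp("SugaR-AI-SE/src", "Stockfish-dev/src")
cmp.report_full_closure()
```

A follow-up line-level diff of any differing files (e.g. with difflib) would then show how far apart the two search implementations really are.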
1-2) ShashChess GZ EXT-S, Result: 103 out of 110 = 93.6%. ShashChess GZ EXT-S.txt (ZIP)
1-2) Leptir Analyzer, Result: 103 out of 110 = 93.6%. Leptir Analyzer.txt (ZIP)
3-4) Corchess 4 050523, Result: 101 out of 110 = 91.8%. Corchess 4 050523.txt (ZIP)
3-4) Little Beast sl, Result: 101 out of 110 = 91.8%. Little Beast sl.txt (ZIP)
5-6) ShashChess GZ, Result: 100 out of 110 = 90.9%. ShashChess GZ.txt (ZIP)
5-6) Crystal 5 KWK, Result: 100 out of 110 = 90.9%. Crystal 5 KWK.txt (ZIP)
7) SugaR AI Iccf SE, Result: 99 out of 110 = 90.0%. SugaR AI Iccf SE.txt (ZIP)
8) Eman 8.91, Result: 96 out of 110 = 87.2%. Eman 8.91.txt (ZIP)
9-10) Stockfish 050523, Result: 94 out of 110 = 85.4%. Stockfish 050523.txt (ZIP)
9-10) Blue Marlin 15.7, Result: 94 out of 110 = 85.4%. Blue Marlin 15.7.txt (ZIP)
11) Dark SisTer 5.0, Result: 93 out of 110 = 84.5%. Dark SisTer 5.0.txt (ZIP)
12) Lc0 v0.30Beta BT2, RTX 3080+3080ti, Result: 86 out of 110 = 78.1%. Lc0 v0.30 BT2, 3080+3080ti.txt (ZIP)
13-14) CatroPOLY v1.8 060523, Result: 85 out of 110 = 77.2%. CatroPOLY v1.8.txt (ZIP)
13-14) Lc0 v0.30.0-rc1 BT2, RTX 4070ti, 85 out of 110 = 77.2%. Lc0 v0.30.0-rc1.txt (ZIP)
15) Lc0 v0.31.0 dev T1 RTX 3060m (CCC parameters), 80 out of 110 = 72.7%. Lc0 0.31.0 T1.txt (ZIP)
16) Lc0 v0.30.0-rc1 BT2, RTX 3060m, cache-opt=true, Result: 78 out of 110 = 70.9%. Download.txt (ZIP)
17-18) Lc0 v0.30Beta BT2, RTX 3080ti, Result: 75 out of 110 = 68.1%. Lc0 v0.30.0-BT2 RTX 3080ti.txt (ZIP)
17-18) Lc0 v0.30.0-rc1 BT2, RTX 3060m, Result: 75 out of 110 = 68.1%. Lc0 v0.30.0-rc1.txt (ZIP)
19) Rebel-16.2, Result: 50 out of 110 = 45.4%. Rebel-16.2.txt (ZIP)
20) Powerfritz 18, Result: 45 out of 110 = 40.9%. Powerfritz 18.txt (ZIP)
EDIT: clarification by MZ
Re: EN Engine Test 2023
Post by Zerbinati » Sat May 13, 2023 11:21 am
Dear Eduard,
I simply forgot to update the authors; as you can see, even in the new versions only the Iccf version reports the authors.
Re: From TalkChess
Post by massimilianogoi » Sat May 13, 2023 8:56 am
I don't get it... what kind of test is this? Is it a tournament? No specifics about the test here.
And he wrote about SugaR AI but left other Stockfish clones in the list: ShashChess, CorChess, Crystal, SugaR Iccf, Eman, Blue Marlin and Dark Sister.
janus wrote: ↑Sat May 13, 2023 6:39 am
[quotes Eduard's ENET-2023 post of Fri May 12, 2023 in full; see above]
People who have lost hope.
Re: From TalkChess
massimilianogoi wrote: ↑Sat May 13, 2023 8:56 am I don't get it... what kind of test is this? Is it a tournament? No specifics about the test here.
And he wrote about SugaR AI but left other Stockfish clones in the list: ShashChess, CorChess, Crystal, SugaR Iccf, Eman, Blue Marlin and Dark Sister.
janus wrote: ↑Sat May 13, 2023 6:39 am
[quotes Eduard's ENET-2023 post of Fri May 12, 2023 in full; see above]
I think the 'test' is to solve as many of the 110 problems provided as possible. The EPD was made available somewhere, but since it's of no interest to me I didn't download the file. I think Eduard was more concerned about attribution; I don't know if the ones you mention include the SF team as authors. I don't use those engines.
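For concreteness, a best-move suite of this kind is usually scored by replaying each EPD position and checking the engine's move against the "bm" operation. A minimal sketch with python-chess under the conditions reported above (20 threads, 4 GB hash, 60 s per position); the engine path and suite.epd are placeholders, since the ENET-2023 positions are not reproduced here:

```python
# Sketch of scoring a "bm" (best move) EPD suite; paths are placeholders.
import chess
import chess.engine

def score_suite(epd_path: str, engine_path: str) -> int:
    engine = chess.engine.SimpleEngine.popen_uci(engine_path)
    engine.configure({"Threads": 20, "Hash": 4096})  # plus SyzygyPath, if supported
    solved = 0
    with open(epd_path) as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            board = chess.Board()
            ops = board.set_epd(line)      # sets the position, returns the opcodes
            expected = ops.get("bm", [])   # list of expected best moves
            if not expected:
                continue
            played = engine.play(board, chess.engine.Limit(time=60.0))
            solved += played.move in expected
    engine.quit()
    return solved

print(score_suite("suite.epd", "./engine"), "out of 110")
```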