- Updated Monday, July 18th 2016 @ 17:31:56
I've tried using it, but it doesn't give me nearly enough nps. So I would have been very surprised if you were able to get that much.
- Updated Monday, July 18th 2016 @ 18:07:10
@Daporan A little bug I found whilst trying to challenge on my phone and on my browser was that they both go through with their own 5 minute interval. Then I tried with 4 different browsers, Tor browser, Edge, Opera and Mobile Chrome. Each of which had their own 5 minute interval.
One thing I noticed is that just changing your user agent isn't the whole story: I tried constantly changing my user agent in Opera with an extension, but it didn't have the same effect.
- Updated Friday, July 22nd 2016 @ 22:03:06
My bot (also) looks at about 4 million nodes a second. Depth varies between 8 and 12 normally. That's full search depth. Some partial deeper searches are possible. My evaluation is unlikely to be equal to zero, so I stop the search after three zeroes in a row and assume a draw.
- Created Tuesday, July 26th 2016 @ 17:45:20
Hi! My question might look a bit stupid, but I'm a beginner and I'm wondering how you get your AIs to search this fast. I've created my own kind of AI, but it only reaches a depth of 4 in about 15s...
What could I be doing wrong ? Do you have hints ?
- Created Wednesday, July 27th 2016 @ 02:30:29
@elirso, I also only got a depth of 4 in my earlier implementations, so I think this is the normal start. Here are the ideas that most people work on (in no particular order):
- Fast Board Representation
- Negamax & AlphaBeta Pruning
- Opening Book
- Transposition Table
- Horizon Effect & Quiescence Search
- Endgame Database
- Move Ordering
- Killer Moves
- History Heuristics
- Heavy or Light Evaluation Functions (will affect your search depth).
- Iterative Deepening
- And much more, but this list is good enough to get you on the right track
Also note that you should have a strong testing framework (especially unit tests), because you are going to be refactoring your code more than once.
- Updated Friday, July 29th 2016 @ 00:30:48
There is of course also the option of a Monte Carlo search, but for your typical minimax ghooo's list is pretty complete. On the chess programming wiki you can find explanations applied to chess for pretty much all of them. I'd leave the fast representation for last.
You'll need alpha beta pruning with minimax at least. After that the choice of ingredients is yours, but note that small mistakes in a minimax with alpha beta pruning framework can lead to very surprising results that can be difficult to trace.
Fast representation and optimization improved my speed by a factor of 8, more than I expected. After that, algorithmic improvements are more work, so keep your code slower and more readable at first.
- Created Wednesday, August 3rd 2016 @ 23:51:27
As a comparison, my bot is pure alpha-beta (nothing beyond cause I'm a noob) and my bot's rating oscillates between 1700 and 1800 ELO. So you'll probably need something beyond just alpha-beta to reach the playoffs.
- Updated Wednesday, August 10th 2016 @ 06:01:45
As for alpha-beta improvements, aspiration windows and PVS (principal variation search) are very popular. Techniques like late move reductions and futility pruning can also be useful.
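The core idea of PVS can be sketched in a few lines, here on a toy game tree rather than a real board: search the first child with the full window, then probe the rest with a zero-width window and only re-search the rare ones that beat alpha.

```python
# Principal variation search (PVS) over a toy tree: internal node = list
# of children, leaf = score for the side to move at that leaf. Move
# ordering matters: PVS only pays off when the first child is usually best.

def pvs(node, alpha, beta):
    if isinstance(node, int):
        return node
    first = True
    for child in node:
        if first:
            score = -pvs(child, -beta, -alpha)       # full window
            first = False
        else:
            score = -pvs(child, -alpha - 1, -alpha)  # null-window probe
            if alpha < score < beta:                 # probe failed high
                score = -pvs(child, -beta, -score)   # re-search properly
        if score > alpha:
            alpha = score
        if alpha >= beta:
            break
    return alpha

print(pvs([[3, 12, 8], [2, 4, 6], [14, 5, 2]],
          float("-inf"), float("inf")))   # 3, same as plain alpha-beta
```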
- Updated Tuesday, August 16th 2016 @ 13:54:05
@NotABug A bit late, but my bot is using MCTS. I pretty much tweaked it as far as I could - I guess not many bots in the upper ranks use it though. FrankTheTank manages ~30k playthroughs per turn. I only log the number of played boards. The number is quite low because I pimped the policies (it played a lot worse when making 200k playthroughs per turn).
- Updated Tuesday, August 16th 2016 @ 14:35:07
@Bytekeeper I'm glad to hear that a decent bot is using MCTS.
On that note, I believe the bot Standardbfs is also using it
- Created Tuesday, August 16th 2016 @ 17:11:59
I'm also using a modified MCTS algorithm. Jaeger is doing around 150k-200k simulations per second in the first couple of moves. I cranked really really hard on the board optimization knob, but my actual rollouts are pretty simple. I haven't had much luck using heavier rollout policies, though. When we're done, I would be really interested in comparing policies.
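For reference, the plumbing of a UCT-style MCTS with plain random rollouts looks roughly like this, sketched on a toy counting game (players alternately add 1-3; whoever hits exactly 21 wins) instead of a real board:

```python
import math
import random

# Bare-bones UCT: select via UCB1, expand one child, random rollout,
# backpropagate with the result flipped at each level. A real bot swaps
# in its own board code and a smarter playout policy.

def legal_moves(total):
    return [m for m in (1, 2, 3) if total + m <= 21]

class Node:
    def __init__(self, total, parent=None):
        self.total = total              # state after the move into this node
        self.parent = parent
        self.children = []
        self.untried = legal_moves(total)
        self.visits = 0
        self.wins = 0.0                 # for the player who moved INTO this node

def select(node, c=1.4):
    while not node.untried and node.children:   # descend fully expanded nodes
        node = max(node.children,
                   key=lambda ch: ch.wins / ch.visits
                   + c * math.sqrt(math.log(node.visits) / ch.visits))
    return node

def expand(node):
    if node.untried:
        move = node.untried.pop()
        child = Node(node.total + move, node)
        node.children.append(child)
        return child
    return node                          # terminal node

def rollout(total):
    # Uniformly random playout; 1 if the player who just moved into
    # `total` ends up winning, else 0.
    turn = 0
    while total < 21:
        total += random.choice(legal_moves(total))
        turn += 1
    return 1 if turn % 2 == 0 else 0

def backprop(node, result):
    while node is not None:
        node.visits += 1
        node.wins += result
        result = 1 - result              # flip perspective each level up
        node = node.parent

def best_move(total, iterations=2000):
    root = Node(total)
    for _ in range(iterations):
        leaf = expand(select(root))
        backprop(leaf, rollout(leaf.total))
    best = max(root.children, key=lambda ch: ch.visits)
    return best.total - total

random.seed(42)
print(best_move(18))   # from 18 the winning move is 3
```

Almost all of the "cranking on the optimization knob" happens inside the rollout and the board code, which is why raw playouts per second varies so much between bots.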
- Created Tuesday, August 16th 2016 @ 17:51:29
Wow cool. Jaeger is doing pretty awesome. I also did a lot of board optimizations, but only after much fiddling with the playout policy. Maybe I'll try again with more simulations instead. It's incredible how strong the first 10 bots are.
- Created Tuesday, August 16th 2016 @ 23:01:24
From earlier comments I was actually pretty sure that Jaeger used MCTS. (Some people didn't think pondering was worth it, but with MCTS things are a bit different from alpha beta.)
Indeed cool to see that alpha beta and MCTS are very close at the top. I was actually already pondering the idea of a hybrid approach, because both techniques have their own strengths. I do kind of like the repeatable exact answer of alpha beta, but then again that exact answer is based on your made up evaluation function.
It might be interesting to use MCTS to train the evaluation function (offline) for an alpha beta search. That would be a nice hybrid and game independent approach. (Sort of, because you'll still design the training variables based on the game.)
- Created Tuesday, August 23rd 2016 @ 10:22:00
Hi! I would also be interested in talking about techniques. Marvin is using MCTS, and a rather surprising result I found was that it's not effective to prefer winning subgames in the playout policy... I wish I had a better idea of what a generally good move is :) Marvin is not among the very top bots; judging from Jaeger's example, I think that's probably because it's not optimized for speed (maybe 25k playouts per second).
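One cheap way to experiment with this is an epsilon-greedy playout policy, so the heuristic bias can be dialed up or down. Everything here is hypothetical: `heuristic_move` stands in for whatever preference you are testing (e.g. "win the subgame if you can", which as noted may actually hurt), with `None` meaning no preference in this position.

```python
import random

# Epsilon-greedy playout move selection: with probability eps play a
# uniformly random move, otherwise follow a cheap heuristic preference.
# `heuristic_move` is a hypothetical stand-in for your policy's pick.

def playout_move(moves, heuristic_move=None, eps=0.3):
    if heuristic_move is not None and random.random() >= eps:
        return heuristic_move
    return random.choice(moves)
```

Setting `eps=1.0` recovers plain random playouts, which makes it easy to A/B-test whether a given heuristic actually gains strength.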
- Created Wednesday, August 24th 2016 @ 02:48:38
Chickenfeed is also MCTS, with some tweaks (90k simulations per move minimum). Can't wait for the tournament to finish so that we can all share in depth our strategies and tricks. Awesome to see other MCTS bots doing so well!