- Created Monday, March 21st 2016 @ 22:18:13
I'll open first since I'm posting this topic :-)
I implemented a heuristic to guess which move would result in the best board status, based on how many enemy stones it would surround, how much defensive advantage it gives (i.e., not losing my own stones), and how many stones it captures. No consideration for edges yet (but I did intend to add it). So this was turn-scoring, not board-scoring. As I've since found out, you can't take turn scoring into any kind of multi-turn analysis without turning it into board-scoring first.
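A rough sketch of the idea (not my actual code; the board representation and the weights are just placeholders, and the capture term is left out for brevity):

```python
def neighbors(x, y, size):
    """Orthogonal neighbors that stay on the board."""
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nx, ny = x + dx, y + dy
        if 0 <= nx < size and 0 <= ny < size:
            yield nx, ny

def score_move(board, x, y, me, enemy, size=9):
    """Score a candidate move by two illustrative terms: adjacent
    enemy stones (surrounding) and adjacent friendly stones (defense).
    The real heuristic would also count captures, which needs a
    liberty check and is omitted here."""
    surround = defense = 0
    for nx, ny in neighbors(x, y, size):
        if board[ny][nx] == enemy:
            surround += 1
        elif board[ny][nx] == me:
            defense += 1
    # Hypothetical weights; tuning these is exactly the hard part.
    return 2.0 * surround + 1.0 * defense
```

The bot would then simply play the legal move with the highest score each turn.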
I've realized that it's not going to get any better than it is right now. v22 is tuned a bit too much towards being defensive, making it in effect a bit weaker than v15-19 were, but I had hoped to get it just good enough to repeatedly beat Hohol or MrGobot. No success there.
I'm now rewriting it to do board-scoring instead and to search a number of levels deep. No idea yet on any specifics, other than "it's gonna do some kind of multi-turn searching and hopefully be better than its own older versions".
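To make the multi-turn part concrete, a fixed-depth negamax over some board-scoring function is one standard way to do it. This is just a sketch; `legal_moves`, `apply_move`, and `evaluate` are placeholders for whatever the engine provides, and there is no pruning or transposition table:

```python
def negamax(board, depth, me, opponent, legal_moves, apply_move, evaluate):
    """Minimal fixed-depth negamax over a board-scoring function.
    Returns (best_score, best_move) from the current player's view."""
    if depth == 0:
        return evaluate(board, me), None
    best_score, best_move = float("-inf"), None
    for move in legal_moves(board, me):
        child = apply_move(board, move, me)
        score, _ = negamax(child, depth - 1, opponent, me,
                           legal_moves, apply_move, evaluate)
        score = -score  # the opponent's best outcome is our worst
        if score > best_score:
            best_score, best_move = score, move
    return best_score, best_move
```

With 200 ms per turn, the depth you can afford on a 19x19 board is small unless the evaluation is very cheap.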
So what are you thinking about, doing and so on?
- Created Tuesday, March 22nd 2016 @ 16:15:10
Maybe we shouldn't talk about algorithms just a few months after the start? :) It kills enthusiasm.
- Updated Tuesday, April 12th 2016 @ 10:41:10
With respect to neurocore's request I will not go into detail here.
Rough outline: I am currently (v20) also only looking at short-term (i.e. turn-level) moves, with no higher-level (game-level) strategy yet. I hope to add this in the future, when time permits ^^.
Jorose - Created Wednesday, April 13th 2016 @ 13:05:19
I am writing a Go engine for my BSc thesis. Since my engine relies on libraries whose functionality I lack the time to reimplement, I won't be writing any serious engines for this competition. I may, however, upload an extremely basic engine that I originally used as a baseline myself (not anymore; even given a nine-stone handicap it gets completely and utterly destroyed), so that other people can use it as a baseline comparison for their own engines.
I was wondering whether you guys think it would be better to leave my primitive baseline closed source until the competition is over, or to open-source it now. I'm not actually sure I'm allowed to open-source it before my thesis is finished, but assuming I am, I'm curious what you think.
- Updated Wednesday, April 27th 2016 @ 03:59:26
I uploaded the semi-random bot (RandomBagBot) that my local AI was playing against, just for kicks, to see how it would do (currently 17th at a 1480 rating, better than expected). I plan to upload my real bot later (it easily beats the semi-random one), but if no one uploads a C++ starter bot I will need to convert my code from C++ and do a bit of work to adapt it from my own engine to theirs.
So I'd say my currently uploaded bot is a baseline to test against, since it's mostly random. It has just a few constraints, like avoiding suicidal and eye-killing moves (I used my own code to detect suicidal moves, since the starter bot didn't do it correctly and gave false positives).
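For reference, a suicide check can be done with a simple flood fill over the group's liberties. This sketch omits the capture case; a real check must first remove any opponent groups left without liberties, because a move that captures is not suicide:

```python
def has_liberty(board, x, y, size):
    """Flood-fill from (x, y): True if the group containing that
    stone has at least one adjacent empty point (a liberty)."""
    color = board[y][x]
    seen, stack = {(x, y)}, [(x, y)]
    while stack:
        cx, cy = stack.pop()
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = cx + dx, cy + dy
            if not (0 <= nx < size and 0 <= ny < size):
                continue
            if board[ny][nx] == '.':
                return True
            if board[ny][nx] == color and (nx, ny) not in seen:
                seen.add((nx, ny))
                stack.append((nx, ny))
    return False

def is_suicide(board, x, y, me, size):
    """Play the stone tentatively; if its group has no liberties,
    the move is suicidal. (Capture removal omitted for brevity.)"""
    board[y][x] = me
    dead = not has_liberty(board, x, y, size)
    board[y][x] = '.'  # undo the tentative stone
    return dead
```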
- Created Monday, May 9th 2016 @ 02:23:44
RandomBagBot has been taken down and replaced with GoGewBagBot. It had a rating of around 1504 when I swapped, hovering around there at a ranking of around 18th. I managed to convert some code to C#, and it seems to be working. Thanks for the C# starter bot; it proved useful, since no one uploaded a C++ one.
- Created Wednesday, May 11th 2016 @ 07:15:09
In broad, general terms... I'm going to try to transfer some of my existing knowledge of the game into a set of heuristics. I may implement some lookahead mechanism, but clearly there isn't enough time for much heavy lifting.
- Created Wednesday, June 1st 2016 @ 20:18:53
From what I've read online it seems like many good Go bots these days use Monte Carlo Tree Search.
Any thoughts about this? Would the shortened time limit (200 ms per turn) make this approach infeasible? I imagine you'd probably have to alter it a bit to fit within the time limit.
- Updated Thursday, June 2nd 2016 @ 10:56:30
Maybe one could use the AMAF (All Moves As First) approach to gather more data from each iteration of MCTS. Basically, you assume that a move at point X,Y has a similar value on the current turn as it does on a future turn, because moves in Go often have only a local impact. So you give each point a more negative score if you played it during a losing tree search, and a more positive score if you played it during a winning one. This could help you gather sufficient data on each point even with limited processing time.
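The bookkeeping for this is tiny. A sketch (the data structures and the uninformed 0.5 prior are just illustrative choices, not a full MCTS):

```python
from collections import defaultdict

amaf_wins = defaultdict(int)
amaf_plays = defaultdict(int)

def amaf_update(moves_played, won):
    """moves_played: all (x, y) points our side played during one
    simulated game; won: whether that simulation ended in a win.
    Each point is credited as if it had been played first."""
    for point in set(moves_played):  # count each point once per playout
        amaf_plays[point] += 1
        if won:
            amaf_wins[point] += 1

def amaf_value(point):
    """Estimated win rate of playing at `point` first."""
    if amaf_plays[point] == 0:
        return 0.5  # uninformed prior for unseen points
    return amaf_wins[point] / amaf_plays[point]
```

In practice AMAF statistics are usually blended with the ordinary per-node MCTS statistics (as in RAVE) rather than used alone.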
Also, I'm not sure if it is a hack / against the rules but if you send an invalid move to the engine before sending your "real" move it will give you an extra 200 ms. So at least right now you have 400 ms to make each move.
For myself, I am planning on building an early-game database for the first (10-20?) moves, based on games between good players found on the KGS server, then possibly attempting to use a neural net to come up with moves in the midgame, and switching to brute force / MCTS in the late game. I don't know much about the capabilities of neural nets yet, so that part of my plan may get scrapped.
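A hypothetical shape for such an opening book: map a hashable position key to a counter of expert replies, and play the most frequent one. The keys and moves below are made up purely for illustration:

```python
from collections import Counter, defaultdict

book = defaultdict(Counter)  # position key -> Counter of expert replies

def record_game(positions_and_moves, max_moves=15):
    """Feed the first few (position_key, move) pairs of one expert game."""
    for key, move in positions_and_moves[:max_moves]:
        book[key][move] += 1

def book_move(key):
    """Most common expert reply for this position, or None if unseen."""
    if key not in book:
        return None
    return book[key].most_common(1)[0][0]
```

The position key would need to account for symmetries (rotations/reflections) to get decent coverage from a limited game collection.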
I'm sure everyone has seen it already, but the AlphaGo paper describes how the AlphaGo bot used data from MCTS to train its "policy" neural network, which performed quite well against some of the better Go AIs.
- Created Thursday, June 2nd 2016 @ 11:45:24
@multivac This bug is also present in the UTTT challenge. I think it was mentioned before, but for some reason it never got fixed.
- Updated Thursday, October 27th 2016 @ 03:10:55
Ended up implementing a move predictor: given a board state, my bot tries to guess the most likely move an expert would make and plays that move. The model I use is described in this paper. I would describe it as similar to support vector machines (maybe?); I don't know ML that well. The nice thing about the model is that it was easy to implement, and it considers the value of pairwise interactions between features. I trained on games I found in the KGS game records. The features I used are slightly different from what's described in the paper, so I will keep them a secret ;) However, my prediction accuracy is still worse than what they achieved, so maybe they are onto something...
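For anyone curious what "pairwise interactions between features" means in practice, here is a generic sketch in the factorization-machine style: each active feature gets a linear weight plus a latent vector, and every pair of active features contributes the dot product of their latent vectors. This is just the general idea, not my actual model or features:

```python
import itertools

def move_score(features, w, v):
    """Score a candidate move from its active (binary) features.
    w: feature -> linear weight; v: feature -> latent vector.
    Both would be learned from expert games; here they are inputs."""
    score = sum(w[f] for f in features)
    for f, g in itertools.combinations(features, 2):
        # Pairwise interaction term: dot product of latent vectors.
        score += sum(a * b for a, b in zip(v[f], v[g]))
    return score
```

The predictor then scores every legal move's feature set and plays the argmax.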
Still a lot of work to go, though; the model still outputs some strange moves, so I will have to play around with the weights and parameters some more. Some of these are understandable, like how my bot doesn't like to capture stones caught in ladders (this makes sense, because you usually capture them at the last moment). However, things change with a turn limit.
There is a lot of literature out there about Go which is cool. It's a good opportunity to learn about some machine learning algorithms and try them out. I think the ML approach is especially good if you don't know much about the game (like me).