# Final bot strategies now that everyone is 'frozen'

- Updated Tuesday, May 17th 2016 @ 23:10:58
For my bot ShapeShifter:

Searching: This is what gave me my biggest jump in strength. I first do a full two-ply search, scoring each position and keeping the top 100. From those I look a full ply further and keep the best 200 positions, then another full ply further from those 200 and keep the next best 200, and so forth. Finally, with this last set of 200 positions, my bot takes the initial first move that contributed the majority of the positions in the set. So if the move <left, drop> led to 150 positions and <right, drop> led to 50 positions in this set of 200, my bot chooses <left, drop>. I would go 2 to 3 ply beyond the initial 2-ply search. The deeper you go with the search, the more "optimistic" your bot becomes, so I would make my searches shallower the higher my tetris stack went.
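
The scheme above is essentially a beam search with a final majority vote over root moves. A minimal sketch under that reading, where `expand` and `score` are hypothetical stand-ins for move generation and the evaluation function:

```python
from collections import Counter

def beam_search(root_children, expand, score, widths):
    """Beam search with a majority vote over first moves (hypothetical
    helper names).  root_children maps each first move to the positions it
    leads to after the initial full search; widths gives the beam size to
    keep at each extra ply."""
    # Tag every position with the first move that produced it.
    beam = [(move, pos) for move, positions in root_children.items()
            for pos in positions]
    for width in widths:
        # Expand every survivor one ply further, keep the best `width`.
        beam = [(move, child) for move, pos in beam for child in expand(pos)]
        beam.sort(key=lambda mp: score(mp[1]), reverse=True)
        beam = beam[:width]
    # Vote: pick the first move behind most of the surviving positions.
    votes = Counter(move for move, _ in beam)
    return votes.most_common(1)[0][0]
```

With toy positions that are just numbers, `beam_search({'left': [10, 11], 'right': [1, 2]}, lambda n: [n + 1, n + 2], lambda n: n, [3, 2])` keeps only descendants of the high-scoring root move and votes `'left'`.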

Evaluation: Basically the same as you guys. My evaluation function ended up being relatively simple: I looked at six features of the board, though I slightly modified the variables. For example, for the max height variable (just the absolute highest point on the field, with a range of [0, 20]), I chose to look at max height squared. I tweaked the variables in little ways that I think were beneficial; not sure, though. The features included max height, smoothness, holes, etc.
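
For anyone new to these features, here is a toy sketch of how max height (squared, as above), holes, and smoothness might be computed; the exact definitions and weights are my assumptions, not the author's:

```python
def features(board):
    """Toy Tetris board features.  `board` is a list of rows, top row
    first, with truthy cells for filled squares (a layout assumption)."""
    width = len(board[0])
    heights = [0] * width
    holes = 0
    for x in range(width):
        seen_block = False
        for y, row in enumerate(board):
            if row[x]:
                if not seen_block:
                    heights[x] = len(board) - y   # column height from floor
                    seen_block = True
            elif seen_block:
                holes += 1        # empty cell below the column's top block
    max_height = max(heights)
    # Smoothness: sum of height differences between adjacent columns.
    smoothness = sum(abs(a - b) for a, b in zip(heights, heights[1:]))
    # The post squares max height instead of using it directly.
    return {'max_height_sq': max_height ** 2,
            'holes': holes,
            'smoothness': smoothness}
```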

Optimization: I implemented an algorithm described in this paper: Continuous PBIL

Idea of the optimization algorithm: if you want to optimize a variable X with a value in [0, 1], you start off guessing pairs of values randomly and playing a match between them, then record who won. So let's say you sample x1 = 0.1 and x2 = 0.2. Play a game between your bots, with one initialized to 0.1 and the other to 0.2. If the bot with 0.2 wins, record that.

Now, after many games, you look at your results and see which ranges contributed the most wins. Maybe the values in [0, 0.1] contributed 3 wins and [0.1, 0.2] contributed 8, etc. So put this data into a histogram. Now start the second iteration of the algorithm and sample points, not uniformly at random, but from the distribution described by the histogram. Repeat the process above of sampling X's and playing matches between them. Under the histogram's distribution, you sample more in the ranges with higher win rates. After many iterations, you end up with a final histogram in which the range of values with the most wins is potentially the best range for your X.
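
A rough, stdlib-only sketch of this histogram-driven sampling loop; the `play_match` interface (returns the winning value) and the +1 smoothing that keeps every bin reachable are my assumptions, not the paper's exact algorithm:

```python
import random

def sample(rng, weights, bins):
    """Draw a value in [0, 1) from the histogram distribution."""
    bin_index = rng.choices(range(bins), weights=weights)[0]
    return (bin_index + rng.random()) / bins

def pbil_histogram(play_match, bins=10, iterations=20, games=50, seed=0):
    """Histogram-based PBIL-style tuning of one parameter in [0, 1].
    Each iteration samples candidate pairs from the current win histogram,
    plays them against each other, and rebuilds the histogram from wins."""
    rng = random.Random(seed)
    weights = [1.0] * bins                       # start uniform
    for _ in range(iterations):
        wins = [0] * bins
        for _ in range(games):
            x1 = sample(rng, weights, bins)
            x2 = sample(rng, weights, bins)
            winner = play_match(x1, x2)          # returns the winning value
            wins[min(int(winner * bins), bins - 1)] += 1
        weights = [w + 1 for w in wins]          # +1 keeps every bin alive
    return weights
```

With a fake match where the value closer to some hidden optimum always wins, the final histogram piles up around that optimum's bin.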

=====================================================

Separate question: how did you guys handle pathfinding for your bots? I did something similar to what Corniel mentioned in past threads: come up with all the hard-drop moves, then find all the spots for soft-drop moves and do an A* search from each such position to the top of the board. How did you guys make it fast? I would have loved to do a single pass over the board and come up with all the possible positions (and their corresponding moves) that could be reached.

- Updated Tuesday, May 17th 2016 @ 06:34:20
A* ran pretty fast when I set neighbors to one movement left and right, one rotation left and right, and then a neighbor that was up, but not by just one. For the up neighbor, I used a method that tried moving up one by one until it reached a collision. The highest position it could reach was its neighbor. This greatly improved the algorithm's speed. Also, I only ran A* for legal positions that had filled cells directly above. I also checked that there was no filled cell on the 4th row from the top (if there were, then it might've taken some special maneuvering to get around obstacles, even if there were no filled cells above the landing position).

I would like to know what you guys did for finding legal positions. I didn't want to miss any, so I iterated over all possible positions (filtering out stuff that was above the highest filled row). I then checked if there was no collision at that position and that there was a collision after one "down movement." If that were true, only then would I run A* to find out if it's possible to get to that position. I know this cannot be the best way to handle this problem, so I'm curious how others solved it.
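
The "no collision here, but a collision one row down" test described above can be sketched like this (the board layout and (row, col) cell convention are my assumptions):

```python
def collides(board, cells):
    """True if any cell is outside the board or overlaps a filled square.
    `board` is a list of rows (top first); `cells` are (row, col) pairs."""
    rows, cols = len(board), len(board[0])
    return any(r < 0 or r >= rows or c < 0 or c >= cols or board[r][c]
               for r, c in cells)

def is_resting_position(board, cells):
    """The piece fits here, and moving it one row down would collide,
    so this is a legal landing (resting) position."""
    return (not collides(board, cells)
            and collides(board, [(r + 1, c) for r, c in cells]))
```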

- Updated Tuesday, May 17th 2016 @ 04:58:03
Seems my bot has no unique parts :) A row & column bitboard representation helps to search deep (4 full moves in 0.3 s, 5 full moves in 1.5 s). I used killer moves, PVS, and some small search optimisations. Ganymede prunes all extremely bad moves (awkwardly placed pieces, moves that create big holes). Tracing a piece to its final position is A* too; it works well but is not completely optimized.

My evaluation function is tapered from "opening" to "ending". Opening: a low position, where you can build t-spin setups and 2-, 3-, and 4-line clears. Ending: you must defend the position and delete lines. In the evaluation function I consider:

- PST, or simply how far blocks are from the center. Placing pieces at the wall seems to be better.
- Separate scores for regular points, combo points, and t-spin points.
- Max-height penalty (exponential; the growth coefficient is tuned too).
- Hole and semi-hole penalties. A semi-hole can still be filled by some move; it is simply semi-open.
- Hole integrity (hole length). For a hole of length n the formula is 1*Hole + (n-1)*HoleIntegrity. Pressure on the hole is an important factor for clearing lines.
- Non-linear line completeness. 0, 8, or 9 blocks in a row give a completeness score of 0; the others scale exponentially.
- Complete-one (single-line clear) penalty. Decreases as the position grows.
- T-spin preparation score (step by step, 17 steps, exponential).
- Skip score. Works like a delta (or lower bound) for move scores.
- Surface depth and deep-side penalties. Deeper than 2 is bad; a one-sided well of depth 4 or more is bad.
- Surface plateau: the existence of two equal heights.
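
The opening-to-ending tapering could be as simple as a linear blend between two scores, loosely analogous to tapered evaluation in chess engines; using stack height as the phase measure is my assumption, not necessarily the author's:

```python
def tapered_eval(opening_score, ending_score, max_height, board_height=20):
    """Blend an 'opening' score (t-spin setups, multi-line builds) with an
    'ending' score (defend, delete lines) by how full the board is."""
    phase = max_height / board_height       # 0.0 = empty, 1.0 = full board
    return (1 - phase) * opening_score + phase * ending_score
```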

I really don't understand how aggregate height can help.

I used a GA at the start, and some gradient methods, but the best results I got were from a Nelder-Mead optimizer. I tuned at depth 1 with 1500 games against the current tuning, or at depth 3 with 100 games. The first is more defensive, but the second is more aggressive (it sees more t-spins and ways of building big line clears). All this tuning takes around a day.

- Updated Tuesday, May 17th 2016 @ 06:19:13
@cowzow Your method of searching is very smart! I am hitting myself in the head right now for not thinking of that. What to do with information from deeper levels of searching was the hardest thing for me to figure out.

@tarokong Your post earlier in the year about storing the game state in the bits of integers inspired me to rewrite DawgBlockerson. I have to say it was a super painful process, but the result was that I was able to speed things up enough to search to depth 4. I also learned a lot about bitwise operations, so thank you!
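
For readers who missed that earlier thread, the idea is to store each board row as the bits of an integer, so collision tests and full-row detection become a few bitwise operations. A minimal sketch (the row layout and a 10-wide field are assumptions):

```python
def collides(rows, piece_rows, top):
    """Bitboard collision test: each board row and each piece row is an
    integer bitmask, so overlap is a single AND per row.  `top` is the
    board row aligned with the piece's first row."""
    return any(rows[top + i] & piece_rows[i]
               for i in range(len(piece_rows)))

def clear_full_rows(rows, width=10):
    """Drop rows whose mask has every bit set; add empty rows on top."""
    full = (1 << width) - 1
    kept = [r for r in rows if r != full]
    cleared = len(rows) - len(kept)
    return [0] * cleared + kept, cleared
```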

#### Searching:

Unfortunately I didn't get much time to work on it in the last month and a half, so I ended up leaving my searching in a pretty bad state. DawgBlockerson is a gambler! He takes the best possible outcome from depth 4 and adds a percentage of that score to its parent at depth 3, then takes the best possible move from depth 3 and adds a percentage of its score to its parent at depth 2. As I said in an earlier thread, this causes my bot to either play amazingly or terribly. DawgBlockerson was able to take games off almost all the top 10 competitors (in challenge matches) while never cracking the top 18 bots.

The biggest increase in power that I found was when I stopped applying a score to the move at depth 1 and instead scored the pairs of depth 1 and 2 together. Often the best move at depth 1 would temporarily create a hole, bumpy terrain, or temporarily cover a t-spin hole while the next move would remedy the problem. My bot would not choose that move due to the negative score applied by analyzing the outcome of depth 1.

#### Opponents actions:

DawgBlockerson also ignores what his opponent is doing. I was hoping to implement this feature in the last month but I ended up being too busy to work on him at all.

#### Pathfinding:

For my pathfinding I did a breadth-first search starting from my target position, looking for the path back to the piece's starting position. The search first checks whether it can move up and, if so, moves as far up as it can; then it checks whether it is left or right of the target position and tries that direction first; then it checks whether it is in the right orientation and rotates toward the correct one. Finally, I iterated over my moves backwards, replacing the last run of "down"s with a "drop". I capped the search at a depth of 11 but counted all moves up in one sequence as 1 move. I printed to stderr any time it was unable to find a path, and although it was fairly ugly and terrible code, I never once found a situation where it failed to find a path when there was one, and it was pretty fast too. It was far from an elegant solution, but I found the idea of doing A* with rotations pretty intimidating.
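
The last step, turning the trailing run of "down" moves into a single "drop", might look like this:

```python
def compress_drop(moves):
    """Replace the trailing run of 'down' moves with a single 'drop';
    if the move list doesn't end in 'down's, leave it unchanged."""
    i = len(moves)
    while i > 0 and moves[i - 1] == 'down':
        i -= 1
    return moves[:i] + (['drop'] if i < len(moves) else [])
```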

#### Evaluation:

For evaluation I don't think I did anything unique. I penalized single line clears based on how close to the bottom of the board I was. I used bumpiness as well, but squared the height differences to penalize very large discrepancies. I classified a hole as any empty cell with a block above it, but penalized holes based on how many filled squares surrounded them, then used the distance from the hole to the top of the stack as another non-linear multiplier. For t-spins, I increased the value of building the structure in deeper moves so that the score from building a t-spin at depth 4 would be enough to affect my choice at depth 2. This was probably not a smart way to handle building t-spins, but DawgBlockerson does it just about as well as the best guys.

#### Tuning:

As far as tuning went I never got around to doing anything automated. I was hoping to try playing around with some GAs but I was too busy with exams and a new job (my first software developer job :) ). All of my heuristics are hand picked and corn fed!

This competition was a really awesome learning experience for me though as I have only been programming for 8 months and I just finished my first year of Computer Science in college. I am going to do a full write-up of all the different things I learned, tried and worked on sometime this week.

- Created Tuesday, May 17th 2016 @ 06:29:16
One way to do A* with rotation is with a coordinate system. My bot stores positions as an x, y, z coordinate. If x is the column position and y is the row position, then z is the orientation. Each piece has 4 orientations, and these are stored in a 2D array. To compute a given position's neighbors, you simply add or subtract from x, y, or z. Precomputing each position's h value is also a good idea. You just have to figure out the 3D Manhattan distance from the spawning position to the landing position.
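
A sketch of that (x, y, z) neighbor generation and the Manhattan-style heuristic; treating the rotation axis as wrapping around the 4 orientations is my assumption:

```python
def neighbors(x, y, z, num_orientations=4):
    """Neighbors in (column, row, orientation) space: translate left or
    right, move in y, or rotate either way (orientation wraps mod 4)."""
    return [(x - 1, y, z), (x + 1, y, z),
            (x, y - 1, z), (x, y + 1, z),
            (x, y, (z - 1) % num_orientations),
            (x, y, (z + 1) % num_orientations)]

def manhattan_h(pos, goal, num_orientations=4):
    """A* heuristic: 3D Manhattan distance, with the rotation component
    measured the short way around the orientation cycle."""
    (x, y, z), (gx, gy, gz) = pos, goal
    dz = abs(z - gz)
    return abs(x - gx) + abs(y - gy) + min(dz, num_orientations - dz)
```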

- Updated Tuesday, May 17th 2016 @ 06:42:11
@caffeine Thank you for the explanation. I was having trouble wrapping my head around the concept and I couldn't find many resources online of similar problems. Perhaps I will do some more reading on A* and try to implement it once the competition is over. I had never built any kind of pathfinding prior to this competition so I was still fairly proud of my crappy depth first search.

- Created Tuesday, May 17th 2016 @ 15:42:58
Re: A*, I also use it for pathfinding, but since I do an A* search for every possible placement I look at, it was starting to take the majority of my processing time. To work around that, I split my pathfinding into two use cases. During search, I use a modified version of A* whose goal test only considers whether the piece is sufficiently above the existing blocks. I also modified the heuristic to count only vertical distance, to encourage vertical movement. My reasoning was that once you are above the highest row with something in it, you're guaranteed to be able to rotate and translate as much as you want, so it's safe to assume that portion of the path exists. Once I've chosen which move I want to make, I run an additional, full-on A* search, using the usual 3-factor heuristic @caffeine was talking about, to get the real path that I return to the referee.

The result was that pathfinding completely dropped out of my top 10 hot spots in my profiler, while still guaranteeing that every placement I searched had a valid path (even if I didn't know exactly what it was).

- Updated Tuesday, May 17th 2016 @ 20:52:06
I do one BFS per game state. During this pass the BFS finds all possible moves. As a starting point I take the highest row that contains a block, minus four. During the BFS I only managed to avoid rotations of O-pieces. As for I, S, and Z, the pathfinder looks at all possible rotations and eliminates 'duplicates' just before adding them to the resulting list.

- Updated Tuesday, May 17th 2016 @ 20:09:14
BlockParty can only find moves that, after initial positioning, end with a drop, or with (down)* (left|right|rotate left|rotate right)*. This covers almost all common cases, and move generation is very quick: just a handful of simple comparisons per move. But it is very cool to see other bots make complex moves that BlockParty can only dream of.

- Created Tuesday, May 17th 2016 @ 21:19:54
I solved the slowness problem with a simple trick: at the root I find all moves using something similar to A* (with some hand-crafted heuristics), while at deeper levels I use BlockParty's approach.

- Created Tuesday, May 17th 2016 @ 22:33:27
What BBKing does (and what I have not read so far in this discussion) is give a positive or negative score per row for holes: one hole in a row scores +10 (but only +8 at the edges), two adjacent holes +5, three adjacent holes +4, and four adjacent holes 0. Two, three, four, and five hole groups in a row score -2, -8, -10, and -12 respectively.

For y-position scores, I use Fibonacci numbers (1, 1, 2, 3, 5, 8, etc.), starting at position 13. Below y = 13, there is no score for the y-position.
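
A small sketch of that Fibonacci y-position score; whether y counts from the top or the bottom of the board is an assumption here:

```python
def y_position_score(y, start=13):
    """0 below the starting row, then 1, 1, 2, 3, 5, 8, ... for each row
    from `start` onward."""
    if y < start:
        return 0
    a, b = 1, 1
    for _ in range(y - start):   # advance the Fibonacci pair (y - start) times
        a, b = b, a + b
    return a
```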

Unlike most of you, I optimized my (20) parameters manually. I wanted to automate it, but had no time left in the end.

I have a separate full-clear evaluator, which keeps as many options open as possible to play a full clear after the first 5 rounds (as long as the blocks so far, plus the current and next block, are only I's, L's, J's and O's). It plays 164 of the full clears. (Not sure how many more full-clear options there are.)

Optimization: I do a full two-ply search. After that I take the top 20 and search a 3rd ply. From these 20, I sort by the sum over the 7 possible 3rd-ply blocks, taking the best board for each of the 7 blocks. Then I search a 4th ply on the best 5 of these, and then a 5th ply on the best 5 again.

- Created Wednesday, May 18th 2016 @ 22:15:22
I did not expect that a manual method could give such good results. I mean, AdsRiskBot is somewhere around the top.

- Updated Thursday, May 19th 2016 @ 02:41:14
My bot zluhcs is pretty simple. It does a 2-ply search and has a basic evaluation. The main thing I did was run about 20000 games to test each version; I modified the server to run games in batches. I used the Elo calculation and LOS (likelihood of superiority), which you can look up in chess programming resources. I only accepted changes that showed an LOS of 99% or about a 10% Elo increase over 20000 games.
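
For reference, the LOS formula from the chess programming literature, based on a normal approximation over decisive games (draws ignored):

```python
import math

def likelihood_of_superiority(wins, losses):
    """Probability that one player is genuinely stronger than the other,
    given wins and losses: 0.5 * (1 + erf((w - l) / sqrt(2 * (w + l))))."""
    return 0.5 * (1 + math.erf((wins - losses)
                               / math.sqrt(2 * (wins + losses))))
```

An even score gives LOS = 0.5; 60 wins to 40 losses already puts LOS above 95%.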

My main strategy was to constantly generate 2/3 lines to overrun the opponent. I guess that was not enough. I had a couple of ideas left to try; only at the end did I start to implement the t-spin search, but work got in the way. I wanted to add look-ahead, improve pathfinding, etc.

Anyway, I hope everyone had as much fun as I did. It is amazing to see the top bots playing.

Good luck everyone on the finals!

- Created Sunday, May 22nd 2016 @ 17:33:50
@hogeris: would you mind sharing the secrets of your bot with us?

- Updated Sunday, May 22nd 2016 @ 21:54:10
@Ad as you can see, hogeris is not very chatty in these forums :DDD I would also be interested in his approach.