- Created Friday, November 6th 2015 @ 11:39:59
Hello dear coders,
Today we launched our latest competition: Four In A Row!
This competition differs from our other competitions in that it is quite a bit easier to write a bot for, but we wanted to provide some entry-level competitions as well. Of course we're always working on new competitions, and a totally new (more difficult) one is coming in the near future!
So have fun with this; questions or suggestions about the competition can always be posted here.
- Created Saturday, November 7th 2015 @ 11:01:14
Hm... isn't Four in a Row already solved? Will this competition also be like the other ones, with finals and stuff (just with the exception that everyone knows the bot moving first will win the game)?
- Created Saturday, November 7th 2015 @ 13:32:06
- Created Saturday, November 7th 2015 @ 15:29:47
I like the concept of an easy game with lower ambition.
There is currently no rating; to me that is just fine, although I guess many players would like to know theirs. Also, there is no game queue; is that intentional? I miss the queue. If the intention is to have this game unrated, then perhaps the queue could pair players totally randomly. There is no need for 20 games per day; just a handful would be enough for me, and you could stop pairing bots that have not been updated for a long time.
Of course it would be cool if someone could submit a bot with perfect strategy for the first (winning) player.
- Created Saturday, November 7th 2015 @ 15:41:51
BTW I got some nice flashbacks thanks to this competition; back in the early eighties I developed a connect-four program in Z80 assembler for my ZX Spectrum. Java is much easier...
- Updated Saturday, November 7th 2015 @ 16:21:35
I programmed such a bot for an S7-1500 already ☺ It's cool to see this challenge here.
- Created Saturday, November 7th 2015 @ 19:10:20
Although solved, this can be interesting, as long as you limit the time consumed and the allowed size, and play in pairs.
- Created Monday, November 9th 2015 @ 08:51:44
Yes this game has been solved, but it is intended for people who are looking for something smaller and less complicated, while still being quite enjoyable to watch. There will be no finals/prizes for this competition.
- Created Monday, November 9th 2015 @ 22:35:22
Nice competition! I also programmed my first Connect4 bot about 25 years ago.
I see two disadvantages with this competition (though they are not really problematic):
- Player 1 has a big advantage (probably bigger than in Warlight 2)
- There is no random element in the game, so if bot1 beats bot2 once, it will win forever: every subsequent game between bot1 and bot2 will be played exactly the same (unless the bots are non-deterministic)
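The determinism point is easy to work around on the bot side: if a bot breaks ties between equally scored moves at random, rematches can diverge instead of replaying identically. A minimal sketch (the evaluation that produces the scores is assumed, not shown):

```python
import random

def choose_column(scores):
    """Pick a column from per-column evaluation scores, breaking ties randomly.

    `scores` maps a playable column index to that move's score, as produced
    by some hypothetical evaluation function. Columns that score equally
    well are chosen among at random, so two otherwise deterministic bots
    will not replay the exact same game forever.
    """
    best = max(scores.values())
    candidates = [col for col, score in scores.items() if score == best]
    return random.choice(candidates)
```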
- Updated Thursday, December 3rd 2015 @ 11:30:14
Yes, the game is solved, but you could fix that with a little effort: just double the size of the field (rows and columns).
And one question: why can't I challenge another bot and choose which player I want to play (1 or 2)? Maybe there is a workaround for that, or do you plan to add this feature?
- Created Wednesday, December 9th 2015 @ 13:29:02
It seems some players are running games every 5-6 minutes, 24x7. Are they training neural networks?
- Created Wednesday, December 9th 2015 @ 16:07:04
I'll tell you that I'm not analyzing the games manually ;) I don't think there is a queue in this competition, so I wrote a script that takes care of that for me; another good use for the Raspberry Pi :)
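A script like that can be quite small; a hedged sketch in Python, where `issue_challenge` stands in for whatever call the script actually makes to the site (a hypothetical callback, not a real API):

```python
import time

def run_challenges(issue_challenge, daily_limit=10, interval_s=300):
    """Fire at most `daily_limit` challenges, sleeping `interval_s` seconds
    between them. `issue_challenge` is a hypothetical callback supplied by
    the caller; the real site interaction is not shown here."""
    sent = 0
    while sent < daily_limit:
        issue_challenge()
        sent += 1
        if sent < daily_limit:
            time.sleep(interval_s)
    return sent
```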
- Updated Wednesday, December 9th 2015 @ 16:22:57
Yes, I also have this kind of script. But I don't run it forever :)
- Created Thursday, December 10th 2015 @ 08:43:42
There is a queue for this competition, but it is unfair to challenge somebody every 5 minutes, 24/7. I'm going to put a restriction of 10 challenges a day in place soon.
- Updated Thursday, December 10th 2015 @ 09:15:41
Manual challenges are a useful feature for evaluating a strategy against other real strategies. If you limit the total number of challenges per day, you will reduce testing capabilities for programmers.
The only thing that is not fair in this scheme is that manual challenges affect rating. Indeed, you just don't get a correct rating if someone plays as "player 1" 90% of the time. You could classify such manual challenges as "test matches" and not consider them when you calculate the rating.
I think 10 challenges is enough only to evaluate a strategy once a day. Right now it is possible to create 200-240 test matches a day using automation; you probably need to limit it to at most 40-60 challenges per day to protect CPU resources (if that is an issue).
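The "test match" idea could be as simple as tagging where each game came from and filtering before the rating calculation. A minimal sketch, assuming a hypothetical `origin` field on each game record (not part of the real site):

```python
def rated_games(games):
    """Keep only queue-paired games for the rating calculation; games
    tagged as manual challenges count as test matches and are skipped.
    The `origin` field is a hypothetical tag added for illustration."""
    return [g for g in games if g.get("origin") == "queue"]
```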