We received eight registrations for the competition; however, no one submitted a final bot before the deadline. We have therefore decided to close the competition for this year. We would like to ask you for feedback, so that we can learn how to improve the competition and whether we should run it again next year.

As organizers, we prepared a (weak) baseline to play in the tournament, and we show its results here. We trained a DQN agent against a random player, based on the OpenSpiel implementation of DQN in libtorch. Every 1000 training iterations, 1000 evaluation games were played between the DQN agent (playing as either Player 0 or Player 1) and the random opponent to measure its performance:

As the above plot shows, in RBC the DQN agent quickly learned to beat the random opponent. In Gin Rummy, it improves slightly over time, but still keeps losing to the random player even after 10^6 iterations.
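The evaluation protocol described above (fixed-size batches of games with the trained agent alternating seats against a random opponent) can be sketched as follows. This is a minimal illustrative sketch with a toy stand-in game, not the actual baseline: the real evaluation used OpenSpiel's libtorch DQN, and all names here are hypothetical.

```python
import random

def play_game(agents, rng):
    # Toy zero-sum stand-in game: each player picks a number 0-9 and the
    # higher number wins. A real OpenSpiel game would replace this.
    moves = [agents[p](rng) for p in (0, 1)]
    if moves[0] == moves[1]:
        return 0.5  # draw; score is from Player 0's perspective
    return 1.0 if moves[0] > moves[1] else 0.0

def random_agent(rng):
    # Uniform random policy, as in the baseline's opponent.
    return rng.randrange(10)

def greedy_agent(rng):
    # Stand-in for the trained DQN policy (always plays the best move).
    return 9

def evaluate(trained, opponent, num_games=1000, seed=0):
    # Alternate seats: the trained agent plays as Player 0 in even games
    # and as Player 1 in odd games, so both positions are covered.
    rng = random.Random(seed)
    score = 0.0
    for g in range(num_games):
        if g % 2 == 0:
            score += play_game((trained, opponent), rng)
        else:
            score += 1.0 - play_game((opponent, trained), rng)
    return score / num_games

win_rate = evaluate(greedy_agent, random_agent)
```

In a real setup, `evaluate` would be called after every 1000 training iterations, and the resulting win rates plotted over training time, which is what the curves above report.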