This page lists my peer-reviewed publications, with links to the full papers, abstracts, and citation information.

Konstantinos Sfikas and Antonios Liapis: “Collaborative Agent Gameplay in the Pandemic Board Game” in Proceedings of the Foundations of Digital Games Conference, 2020.

@inproceedings{sfikas2020collaborative,
    author = {Konstantinos Sfikas and Antonios Liapis},
    title = {Collaborative Agent Gameplay in the Pandemic Board Game},
    booktitle = {Proceedings of the Foundations of Digital Games Conference},
    year = {2020},
}

“While artificial intelligence has been applied to control players’ decisions in board games for over half a century, little attention is given to games with no player competition. Pandemic is an exemplar collaborative board game where all players coordinate to overcome challenges posed by events occurring during the game’s progression. This paper proposes an artificial agent which controls all players’ actions and balances chances of winning versus risk of losing in this highly stochastic environment. The agent applies a Rolling Horizon Evolutionary Algorithm on an abstraction of the game-state that lowers the branching factor and simulates the game’s stochasticity. Results show that the proposed algorithm can find winning strategies more consistently in different games of varying difficulty. The impact of a number of state evaluation metrics is explored, balancing between optimistic strategies that favor winning and pessimistic strategies that guard against losing.”

View pdf
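
For readers unfamiliar with rolling horizon evolution, the snippet below is a minimal sketch of the general technique named in the abstract: candidate action sequences are evolved against a heuristic state evaluation, and only the first action of the best sequence is executed before re-planning. All names here (simulate, evaluate, legal_actions, the state's copy/is_terminal methods) are illustrative placeholders and not the paper's actual code.

# Minimal sketch of a Rolling Horizon Evolutionary Algorithm (RHEA) decision loop.
# All game-specific hooks (simulate, evaluate, legal_actions) are assumed placeholders.
import random

HORIZON = 10        # length of each evolved action sequence
POPULATION = 20     # number of candidate sequences per decision
GENERATIONS = 30    # evolutionary budget per decision
MUTATION_RATE = 0.2

def rollout_value(state, sequence, simulate, evaluate):
    """Apply a candidate action sequence on a copy of the (stochastic) state
    and score the resulting state with a heuristic evaluation function."""
    sim = state.copy()
    for action in sequence:
        sim = simulate(sim, action)   # one stochastic forward step
        if sim.is_terminal():
            break
    return evaluate(sim)              # e.g. balances chance of winning vs. risk of losing

def rhea_decide(state, legal_actions, simulate, evaluate):
    """Return the first action of the best evolved sequence."""
    actions = legal_actions(state)
    population = [[random.choice(actions) for _ in range(HORIZON)]
                  for _ in range(POPULATION)]
    for _ in range(GENERATIONS):
        ranked = sorted(population,
                        key=lambda seq: rollout_value(state, seq, simulate, evaluate),
                        reverse=True)
        elite = ranked[: POPULATION // 2]
        # Mutation-only offspring: resample each gene with a small probability.
        offspring = [[a if random.random() > MUTATION_RATE else random.choice(actions)
                      for a in parent]
                     for parent in elite]
        population = elite + offspring
    best = max(population, key=lambda seq: rollout_value(state, seq, simulate, evaluate))
    return best[0]   # execute only the first action, then re-plan ("rolling horizon")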

Antonios Liapis, Daniel Karavolos, Konstantinos Makantasis, Konstantinos Sfikas and Georgios N. Yannakakis: “Fusing Level and Ruleset Features for Multimodal Learning of Gameplay Outcomes” in Proceedings of the IEEE Conference on Games, 2019.

@inproceedings{liapis2019fusing,
    author = {Antonios Liapis and Daniel Karavolos and Konstantinos Makantasis and Konstantinos Sfikas and Georgios N. Yannakakis},
    title = {Fusing Level and Ruleset Features for Multimodal Learning of Gameplay Outcomes},
    booktitle = {Proceedings of the IEEE Conference on Games},
    year = {2019},
}

“Which features of a game influence the dynamics of players interacting with it? Can a level’s architecture change the balance between two competing players, or is it mainly determined by the character classes and roles that players choose before the game starts? This paper assesses how quantifiable gameplay outcomes such as score, duration and features of the heatmap can be predicted from different facets of the initial game state, specifically the architecture of the level and the character classes of the players. Experiments in this paper explore how different representations of a level and class parameters in a shooter game affect a deep learning model which attempts to predict gameplay outcomes in a large corpus of simulated matches. Findings in this paper indicate that a few features of the ruleset (i.e. character class parameters) are the main drivers for the model’s accuracy in all tested gameplay outcomes, but the levels (especially when processed) can augment the model.”

View pdf
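
As an illustration of the fusion idea described in the abstract, the sketch below combines a convolutional branch over a tile-based level map with a dense branch over a flat vector of character-class parameters, written in PyTorch. Layer sizes, input shapes and the number of predicted outcomes are arbitrary assumptions and are not taken from the paper.

# Illustrative late-fusion model: spatial level features + tabular ruleset features.
import torch
import torch.nn as nn

class FusionPredictor(nn.Module):
    def __init__(self, level_channels=4, ruleset_dim=16, n_outcomes=3):
        super().__init__()
        # Convolutional branch for the level layout (e.g. a 4-channel tile map).
        self.level_branch = nn.Sequential(
            nn.Conv2d(level_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )
        # Dense branch for the ruleset (character-class parameters).
        self.ruleset_branch = nn.Sequential(
            nn.Linear(ruleset_dim, 64),
            nn.ReLU(),
        )
        # Fused head predicting gameplay outcomes (e.g. score, duration).
        self.head = nn.Sequential(
            nn.Linear(64 + 64, 64),
            nn.ReLU(),
            nn.Linear(64, n_outcomes),
        )

    def forward(self, level, ruleset):
        fused = torch.cat([self.level_branch(level),
                           self.ruleset_branch(ruleset)], dim=1)
        return self.head(fused)

# Example forward pass on random data: a batch of 8 levels of size 32x32.
model = FusionPredictor()
outcomes = model(torch.randn(8, 4, 32, 32), torch.randn(8, 16))
print(outcomes.shape)  # torch.Size([8, 3])

A late-fusion design like this keeps each facet in its natural representation, spatial for the level and tabular for the ruleset, before the joint prediction head; the concatenation point is where the two modalities are fused.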

David Melhart, Konstantinos Sfikas, Giorgos Giannakakis, Georgios N. Yannakakis and Antonios Liapis: “A Study on Affect Model Validity: Nominal vs Ordinal Labels” in Proceedings of the IJCAI workshop on AI and Affective Computing, 2018.

@inproceedings{melhart2018study,
    author = {David Melhart and Konstantinos Sfikas and Giorgos Giannakakis and Georgios N. Yannakakis and Antonios Liapis},
    title = {A Study on Affect Model Validity: Nominal vs Ordinal Labels},
    booktitle = {Proceedings of the IJCAI workshop on AI and Affective Computing},
    year = {2018},
}

“The question of representing emotion computationally remains largely unanswered: popular approaches require annotators to assign a magnitude (or a class) of some emotional dimension, while an alternative is to focus on the relationship between two or more options. Recent evidence in affective computing suggests that following a methodology of ordinal annotations and processing leads to better reliability and validity of the model. This paper compares the generality of classification methods versus preference learning methods in predicting the levels of arousal in two widely used affective datasets. Findings of this initial study further validate the hypothesis that approaching affect labels as ordinal data and building models via preference learning yields models of better validity.”

View pdf
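
To make the nominal-versus-ordinal distinction concrete, the sketch below contrasts the two treatments on synthetic data: a classifier trained on thresholded "low"/"high" arousal labels versus a ranking function learned from pairwise preferences through the standard difference-of-features reduction. The data and models are illustrative assumptions, not the datasets or exact methods used in the paper.

# Nominal vs. ordinal treatment of arousal labels on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))                   # e.g. physiological features per session
arousal = X @ rng.normal(size=8) + rng.normal(scale=0.5, size=200)

# Nominal treatment: threshold arousal into "low"/"high" classes and classify.
y_nominal = (arousal > np.median(arousal)).astype(int)
clf = LogisticRegression().fit(X, y_nominal)

# Ordinal treatment: build pairwise preferences ("session i is more aroused than
# session j") and learn a ranking function on feature differences.
i = rng.integers(0, 200, size=500)
j = rng.integers(0, 200, size=500)
keep = arousal[i] != arousal[j]
diffs = X[i[keep]] - X[j[keep]]
prefs = (arousal[i[keep]] > arousal[j[keep]]).astype(int)
ranker = LogisticRegression(fit_intercept=False).fit(diffs, prefs)

# The ranker's scores order unseen samples by predicted arousal,
# without ever committing to absolute class boundaries.
scores = X @ ranker.coef_.ravel()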