TY - GEN
T1 - Cracking open the black box: What observations can tell us about reinforcement learning agents
AU - Dethise, Arnaud
AU - Canini, Marco
AU - Kandula, Srikanth
N1 - KAUST Repository Item: Exported on 2020-10-01
Acknowledgements: We thank the anonymous reviewers for their feedback. We are grateful to Nikolaj Bjørner, Bernard Ghanem, Hao Wang and Xiaojin Zhu for their valuable comments and suggestions. We also thank the Pensieve authors, in particular Mohammad Alizadeh and Hongzi Mao, for their help and feedback.
PY - 2019/8/14
Y1 - 2019/8/14
AB - Machine learning (ML) solutions to challenging networking problems, while promising, are hard to interpret; the uncertainty about how they would behave in untested scenarios has hindered adoption. Using a case study of an ML-based video rate adaptation model, we show that carefully applying interpretability tools and systematically exploring the model inputs can identify unwanted or anomalous behaviors of the model, hinting at a potential path towards increasing trust in ML-based solutions.
UR - http://hdl.handle.net/10754/658663
UR - http://dl.acm.org/citation.cfm?doid=3341216.3342210
UR - http://www.scopus.com/inward/record.url?scp=85072028490&partnerID=8YFLogxK
U2 - 10.1145/3341216.3342210
DO - 10.1145/3341216.3342210
M3 - Conference contribution
SN - 9781450368728
SP - 29
EP - 36
BT - Proceedings of the 2019 Workshop on Network Meets AI & ML - NetAI'19
PB - ACM Press
ER -