Cracking open the black box: What observations can tell us about reinforcement learning agents

Arnaud Dethise, Marco Canini, Srikanth Kandula

Research output: Conference contribution (chapter in book/report/conference proceeding)


Abstract

Machine learning (ML) solutions to challenging networking problems, while promising, are hard to interpret; the uncertainty about how they would behave in untested scenarios has hindered adoption. Using a case study of an ML-based video rate adaptation model, we show that carefully applying interpretability tools and systematically exploring the model inputs can identify unwanted or anomalous behaviors of the model, hinting at a potential path towards increasing trust in ML-based solutions.
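
The core idea the abstract describes, systematically probing the agent's input space and inspecting its decisions, is straightforward to illustrate. The sketch below is a hypothetical example, not the authors' tooling: `policy`, `select_bitrate`, and the throughput feature index are assumptions standing in for any trained RL rate-adaptation agent that maps an observation vector to a discrete bitrate choice. It sweeps one input feature while holding the rest fixed and flags decisions that violate a simple sanity property, such as choosing a lower bitrate when observed throughput improves.

    # A minimal sketch (not the paper's code) of systematic input exploration:
    # vary one input of a trained rate-adaptation policy while holding the
    # rest fixed, and flag surprising bitrate choices.
    import numpy as np

    def select_bitrate(policy, obs):
        """Hypothetical wrapper: return the policy's argmax action (bitrate index)."""
        return int(np.argmax(policy(obs)))

    def sweep_feature(policy, base_obs, feature_index, values):
        """Sweep one observation feature and record the chosen bitrate for each value."""
        choices = []
        for v in values:
            obs = base_obs.copy()
            obs[feature_index] = v
            choices.append(select_bitrate(policy, obs))
        return choices

    def flag_anomalies(values, choices):
        """Flag points where a *higher* feature value (e.g., throughput) led to a
        *lower* bitrate -- one example of the 'unwanted behavior' probing can expose.
        Assumes `values` is sorted in ascending order."""
        return [(values[i], choices[i - 1], choices[i])
                for i in range(1, len(choices))
                if choices[i] < choices[i - 1]]

One-dimensional sweeps like this are only a starting point; making the exploration systematic in the paper's sense would mean extending it to grids or pairwise sweeps over the full observation space.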
Original language: English (US)
Title of host publication: Proceedings of the 2019 Workshop on Network Meets AI & ML - NetAI'19
Publisher: ACM Press
Pages: 29-36
Number of pages: 8
ISBN (Print): 9781450368728
State: Published - Aug 14, 2019
