
Not All Benchmarks Are Created Equal

APA

Blume-Kohout, R. (2020). Not All Benchmarks Are Created Equal. The Simons Institute for the Theory of Computing. https://simons.berkeley.edu/talks/not-all-benchmarks-are-created-equal

MLA

Blume-Kohout, Robin. "Not All Benchmarks Are Created Equal." The Simons Institute for the Theory of Computing, 2 Apr. 2020, https://simons.berkeley.edu/talks/not-all-benchmarks-are-created-equal

BibTeX

@misc{scivideos_15572,
  url = {https://simons.berkeley.edu/talks/not-all-benchmarks-are-created-equal},
  author = {Robin Blume-Kohout},
  language = {en},
  title = {Not All Benchmarks Are Created Equal},
  publisher = {The Simons Institute for the Theory of Computing},
  year = {2020},
  month = {apr},
  note = {Talk 15572, see \url{https://scivideos.org/index.php/Simons-Institute/15572}}
}
Robin Blume-Kohout (Sandia National Labs)
Talk number: 15572
Source repository: Simons Institute

Abstract

Testbed-class quantum computers -- fully programmable 5-50 qubit systems -- have burst onto the scene in the past few years. The associated surge in funding, hype, and commercial activity has spurred interest in "benchmarks" for assessing their performance. Unsurprisingly, this has generated both a number of scientifically interesting ideas *and* a lot of confusion and kerfuffle. I will try to explain the state of play in this field -- known historically as "quantum characterization, verification, and validation (QCVV)" and more recently and generally as "quantum performance assessment" -- by briefly reviewing its history, explaining the different categories of benchmarks and characterization protocols, and identifying what they're good for. The overarching message of my talk will be that these are distinct tools in a diverse toolbox -- almost every known protocol and benchmark really measures a distinct and particular thing, and we probably need *more* of them, not fewer.