(2022). Aggregative Efficiency of Bayesian Learning in Networks. The Simons Institute for the Theory of Computing. https://old.simons.berkeley.edu/node/23047

MLA

Aggregative Efficiency of Bayesian Learning in Networks. The Simons Institute for the Theory of Computing, Dec. 02, 2022, https://old.simons.berkeley.edu/node/23047

BibTeX

@misc{ scivideos_23047,
doi = {},
url = {https://old.simons.berkeley.edu/node/23047},
author = {},
keywords = {},
language = {en},
title = {Aggregative Efficiency of Bayesian Learning in Networks},
publisher = {The Simons Institute for the Theory of Computing},
year = {2022},
month = {dec},
note = {See \url{https://scivideos.org/simons-institute/23047}}
}

When individuals in a social network learn about an unknown state from private signals and neighbors' actions, the network structure often causes information loss. We consider rational agents and Gaussian signals in the canonical sequential social-learning problem and ask how the network changes the efficiency of signal aggregation. Rational actions in our model are a log-linear function of observations and admit a signal-counting interpretation of accuracy. This generates a fine-grained ranking of networks based on their aggregative efficiency index. Networks where agents observe multiple neighbors but not their common predecessors confound information, and we show confounding can make learning very inefficient. In a class of networks where agents move in generations and observe the previous generation, aggregative efficiency is a simple function of network parameters: increasing in observations and decreasing in confounding. Generations after the first contribute very little additional information due to confounding, even when generations are arbitrarily large.
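The signal-counting interpretation mentioned in the abstract can be illustrated with a minimal sketch. With Gaussian signals, a Bayesian agent who pools k independent signals has posterior precision equal to the prior precision plus k times the per-signal precision, so accuracy is naturally measured in "signals' worth" of information. All parameter values below are assumptions for illustration, not taken from the talk, and the sketch models only the frictionless benchmark in which every signal is fully recovered (no confounding):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup (assumed parameters, not from the talk):
# state theta ~ N(0, 1/tau0); each agent i sees s_i = theta + eps_i,
# eps_i ~ N(0, sigma2), independently across agents.
sigma2 = 1.0          # private-signal noise variance
tau0 = 1.0            # prior precision on the state
theta = rng.normal()  # unknown state

def posterior(signals):
    """Posterior mean and precision after pooling independent Gaussian signals.

    Precision is tau0 + k / sigma2 for k signals, so the gain over the prior
    counts the number of signals aggregated -- the signal-counting measure
    of accuracy.
    """
    k = len(signals)
    tau = tau0 + k / sigma2
    mean = (np.sum(signals) / sigma2) / tau
    return mean, tau

signals = theta + rng.normal(scale=np.sqrt(sigma2), size=10)

# Benchmark without confounding: if each agent's action reveals its signal to
# successors (as in a line network), agent n effectively pools n signals and
# posterior precision grows linearly in n.
for n in (1, 5, 10):
    mean, tau = posterior(signals[:n])
    print(f"n={n:2d}  posterior precision={tau:.1f}  "
          f"(= {tau - tau0:.0f} signals' worth)")
```

Confounding, in the talk's sense, is what breaks this linear growth: when agents observe multiple neighbors without seeing the neighbors' common sources, the same underlying signals get double-counted and cannot be cleanly disentangled, so the effective signal count rises much more slowly than the number of observations.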