People help those with a reputation for helping others; as a result, people are more likely to behave generously when reputational concerns are present. Because people increasingly make helping decisions in the presence of both humans and AI in "hybrid systems," here we ask whether and how reputation-based reciprocity (RBR) promotes generosity in human-bot networks compared with human-only ones. In two experiments, one in which interactants were embedded in a patterned indirect-reciprocity network and either knew or did not know that bots were present, and another involving one-shot interactions between humans and bots, we demonstrate that RBR is significantly less effective at fostering generosity in hybrid systems. At the network level, people are less generous when they know bots are present. In line with prior work, our findings suggest that this effect is driven by altered helping norms in (known) hybrid networks governed by RBR: people do not believe bots deserve help as humans do, which reduces overall generosity. In one-shot dyadic interactions, we likewise demonstrate that people are less willing to help bots, even when they can receive reputational rewards for helping and even toward bots with reputations for helping humans (or bots). People are also less likely to help people who help bots (compared with people who help people) and less likely to punish people who fail to help bots (compared with people who fail to help people). Adding bots to RBR networks thus affects not only humans' prosocial behavior but also their evaluations of generosity toward human and bot alters.