There is a growing problem of so-called "bots" harvesting content from repositories in an aggressive manner. These crawlers are often poorly designed and show no regard for the bandwidth or processing capacity they demand from the repository as they attempt to "hoover up" all available content, increasingly for AI training purposes. The result can be a severe impact on the repository's performance.
This piece by Jonathan Rochkind outlines approaches to mitigating the problem. In particular, it describes an interesting approach of blocking only certain resources, e.g. the search function, from machine processes, while leaving content resources freely available for machines to access.
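One minimal way to express this kind of selective policy is a robots.txt sketch; the paths below are hypothetical, not taken from the piece:

```
# Hypothetical robots.txt for a repository:
# keep the computationally expensive search endpoint off-limits
# to crawlers, while item and download pages stay open for harvesting.
User-agent: *
Disallow: /search
Allow: /items/
Allow: /downloads/
```

Well-behaved crawlers honour robots.txt; the aggressive bots at issue here often do not, so in practice the same policy may also need to be enforced at the web-server level.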