
>If the AI archives/caches all the results it accesses and enough people use it, doesn't it become a scraper?

That's basically how many crowdsourced crawling/archive projects work. For instance, sci-hub and RECAP[1]. Do you think they should be shut down as well? In both cases there's an even stronger justification for shutting them down, because the original content is paywalled and you could plausibly argue there's lost revenue on the line.

[1] https://en.wikipedia.org/wiki/Free_Law_Project#RECAP
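To make the mechanism in the quoted question concrete, here's a minimal sketch (all names hypothetical; this is not any particular product's actual implementation): a fetch layer that caches on read and is shared across users will, given enough users, accumulate an archive of everything those users visit, which is functionally a crawl of those sites.

    # Hypothetical sketch: a shared read-through cache. Each page is
    # fetched from the origin at most once per max_age window; every
    # later user is served from the accumulated archive instead.
    import time
    import urllib.request

    CACHE = {}  # shared across all users: url -> (fetched_at, body)

    def fetch(url, max_age=86400):
        """Return the page body, serving from the shared cache when fresh."""
        entry = CACHE.get(url)
        if entry and time.time() - entry[0] < max_age:
            return entry[1]  # cache hit: the origin never sees this request
        with urllib.request.urlopen(url) as resp:  # cache miss: fetch once...
            body = resp.read()
        CACHE[url] = (time.time(), body)  # ...and archive it for everyone
        return body

The point being: no single user's behavior looks like scraping, but the aggregate cache does.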



I didn't suggest Perplexity should be shut down, though. And yes, in your analogy, sites are completely justified in taking whatever actions they can to block the people building those caches.



