This study leverages RSS feeds: machine-parseable documents that, among other uses, often inform search engines when new content becomes available. These data are collected in compliance with the Robots Exclusion Protocol by observing robots.txt files, the industry-standard instructions that tell systems such as this crawler which content is permissible to process. Requests for removal of content can be emailed to the corresponding author: inquiry [at] datadrivenempathy.com. Requestors may need to conduct physical correspondence to verify identity. Note that the crawler is no longer active.
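To illustrate the compliance step described above, the following is a minimal sketch of how a crawler can honor the Robots Exclusion Protocol using Python's standard `urllib.robotparser`. The robots.txt content and URLs here are hypothetical; the study's actual crawler is not shown, and a real system would fetch the live robots.txt from each host before requesting any page.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt content for illustration only; in practice a
# crawler retrieves https://<host>/robots.txt before processing content.
ROBOTS_TXT = """\
User-agent: *
Disallow: /private/
Allow: /
"""

def build_parser(robots_txt: str) -> RobotFileParser:
    """Parse robots.txt text into a parser that answers can_fetch queries."""
    rp = RobotFileParser()
    rp.parse(robots_txt.splitlines())
    return rp

rp = build_parser(ROBOTS_TXT)
# A crawler checks can_fetch() for its user agent before each request and
# skips any URL the rules disallow.
print(rp.can_fetch("*", "https://example.com/feed.rss"))      # permitted
print(rp.can_fetch("*", "https://example.com/private/data"))  # disallowed
```

Under these hypothetical rules, the RSS feed URL is permissible while anything under `/private/` is not.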