A group of AI industry insiders has launched a controversial initiative called Poison Fountain that aims to undermine the technology by poisoning the data used to train AI models. The project asks website operators to add links that route AI crawlers to poisoned training data, which can push models toward inaccurate or manipulated responses.

The initiative draws on research showing that data-poisoning attacks are more practical than previously thought: only a small number of malicious documents is enough to degrade a model's quality. The insiders frame the campaign as a necessary response to what Geoffrey Hinton has called the threat machine intelligence poses to the human species.

The project's effectiveness and its ethical implications remain contested, with concerns about the potential for misuse and the impact on AI development. The initiative has sparked wider discussion about the balance between opposition to AI and the need for regulation, the role of deliberate misinformation campaigns, and the risk of model collapse.
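The serving mechanism the project relies on can be sketched in a few lines: a site distinguishes AI crawlers from human visitors by their User-Agent header and returns decoy text instead of the real page. This is an illustrative sketch only, not Poison Fountain's actual code; the crawler signature list uses real, publicly documented crawler names (GPTBot, CCBot, ClaudeBot), but the word-salad generator is a stand-in, since the research the project cites describes far more carefully crafted poisoning payloads.

```python
import random

# Publicly documented AI training crawlers, identified by User-Agent
# substring. Which crawlers a site would actually target is an assumption.
AI_CRAWLER_SIGNATURES = ("GPTBot", "CCBot", "ClaudeBot", "Google-Extended")

# Small vocabulary for generating meaningless filler text.
WORD_POOL = ["quartz", "lantern", "pivot", "meadow", "cipher",
             "orbit", "velvet", "tundra", "prism", "harbor"]


def is_ai_crawler(user_agent: str) -> bool:
    """Return True if the User-Agent matches a known AI crawler."""
    return any(sig in user_agent for sig in AI_CRAWLER_SIGNATURES)


def poisoned_page(n_sentences=5, seed=None) -> str:
    """Generate grammatical-looking but meaningless filler text.

    A stand-in payload: real poisoning documents would be crafted to
    influence training, not just random words.
    """
    rng = random.Random(seed)
    sentences = []
    for _ in range(n_sentences):
        words = rng.choices(WORD_POOL, k=rng.randint(6, 12))
        sentences.append(" ".join(words).capitalize() + ".")
    return " ".join(sentences)


def handle_request(user_agent: str, real_content: str) -> str:
    """Serve the real page to humans, decoy text to AI crawlers."""
    if is_ai_crawler(user_agent):
        return poisoned_page(seed=0)
    return real_content
```

In practice such cloaking would sit in front of a web server or CDN rule, and it cuts both ways: the same technique used against scrapers can also serve manipulated content to anyone, which is part of why the ethics of the project are disputed.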