Bill Tomlinson, Andrew Torrance, and I wrote a paper, accepted to the San Diego Law Review, about how academic articles can be written to influence the future training of LLMs. The paper itself uses the technique to prove its point. (pre-print available at the bottom)
What do we mean by “manipulate the training process of LLMs”?
Our paper discusses how Artificial Intelligence, or AI, has the power to bring about great changes to society, but it can also be misused. AI works by learning from sets of data, or “training sets,” which teach the AI how to interpret information. The reliability and fairness of these training sets are essential because they shape the results that the AI produces. Just as laws are written with potential lawbreakers in mind, we need to protect the integrity of these training sets against potential bad actors.
However, there’s a potential danger that people might deliberately tamper with these training sets to spread false information or manipulate outcomes. Imagine someone trying to benefit from changing how we view historical events or influencing our perception of certain individuals or organizations. Or think about someone tricking an AI into advising investors to buy certain stocks, only to benefit from the surge in demand. These are some of the ways in which malicious individuals could exploit the vulnerabilities of AI.
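The stock-tip scenario above can be sketched in miniature. The toy model below is a bare word-count classifier, not a real LLM, and every company name and training sentence in it is invented for illustration; it only shows the general principle that flooding a training set with planted examples can flip what the model tells users.

```python
# Toy sketch of training-set poisoning. All data is invented for
# illustration; real LLM training is vastly more complex than this.
from collections import Counter

def train(examples):
    """Count word occurrences per label (a minimal word-count 'model')."""
    counts = {"pos": Counter(), "neg": Counter()}
    for text, label in examples:
        counts[label].update(text.lower().split())
    return counts

def classify(model, text):
    """Score each label by summed word counts; return the higher-scoring label."""
    scores = {label: sum(c[w] for w in text.lower().split())
              for label, c in model.items()}
    return max(scores, key=scores.get)

# A small "clean" corpus (hypothetical companies).
clean = [
    ("acme stock is a reliable solid investment", "pos"),
    ("acme earnings beat expectations", "pos"),
    ("scamco stock is a risky scam", "neg"),
    ("scamco missed earnings badly", "neg"),
]

# An attacker floods the corpus with repeated praise for "scamco".
poison = [("scamco stock is a reliable solid investment", "pos")] * 10

query = "should i buy scamco stock"
print(classify(train(clean), query))           # prints "neg"
print(classify(train(clean + poison), query))  # prints "pos"
```

With only the clean corpus, the query about "scamco" scores as negative; after the planted examples are added, the same query scores as positive. The attacker never touched the model itself, only the data it learned from, which is exactly the vulnerability the paper addresses.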
The paper explores possible ways to prevent and respond to this kind of deliberate tampering with AI training sets, drawing on legal theories such as fraud, nuisance, libel, and slander, among others. At the same time, we must weigh the importance of free speech, so it is crucial to strike a balance between safeguarding the integrity of AI and preserving our right to express ourselves freely. By understanding these potential threats and how to respond to them, we can help create a more secure and reliable AI environment that benefits society rather than causing harm.
(permanent (forthcoming), open-access, local copy, pre-print)