- As scientists, we want to communicate that privacy is essential to security, innovation, dignity, and flourishing. To do that, we must stop celebrating how big our data is.
- The Rumpelstiltskin (originally, Rumpelstilzchen) theory of AI is just wrong: you do not automatically get more or better intelligence in proportion to the amount of data you use. Even where you use machine learning to build your AI (which is by no means always the case), it is a Statistics 101 fact that how much data you need depends on the variation in the population you are studying, not on how much data you could collect.
- For many applications, a large amount of data is useful only for surveillance.
- Even where a lot of data might be useful, it is still a hazard.
- Data should not be routinely retained and stored without good reason. Where there is good reason, it must be stored with the highest standards of cybersecurity.
- We need both proactive and responsive systems for detecting and prosecuting the use of inappropriately retained data.
- We need to stop calling for projects like "Big data and [policy problem X]" and start calling for projects like "Data-led validation of [policy solution X]", so that we stop communicating to politicians that indiscriminately gathering and retaining data is ever a good thing.
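The Statistics 101 point above can be made concrete. Under the textbook formula for the standard error of a sample mean, the sample size needed for a given precision is set by the population's variation, not by the total amount of data available. A minimal sketch (the sigma values and precision target are invented for illustration):

```python
import math

def required_sample_size(sigma: float, target_se: float) -> int:
    """Smallest n such that the standard error sigma / sqrt(n)
    is at most target_se, i.e. n = (sigma / target_se) ** 2."""
    return math.ceil((sigma / target_se) ** 2)

# A low-variation population needs far fewer observations than a
# high-variation one to estimate its mean equally precisely --
# regardless of how big the population (or the available data) is.
print(required_sample_size(sigma=2.0, target_se=0.1))   # 400
print(required_sample_size(sigma=10.0, target_se=0.1))  # 10000
```

Past the point where the standard error is small enough for the decision at hand, extra data adds risk without adding insight.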
Update, May 2020: For a less brief version of the above, Will Lowe on Twitter directed us to Xiao-Li Meng, "Statistical paradises and paradoxes in big data". In short: unless you can be careful about how you subsample data, you need essentially ALL the data for your estimates to be right. That is seldom possible, so it is better to be careful about how you subsample.
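Meng's point can be sketched in a few lines of simulation. The population and the selection rule below are invented for illustration: when inclusion in a dataset is correlated with the quantity being measured, a "big" dataset covering most of the population stays biased no matter how large it is, while a small simple random sample does not.

```python
import random
import statistics

random.seed(0)

# Hypothetical population: 100,000 values with mean ~50, sd ~10.
population = [random.gauss(50, 10) for _ in range(100_000)]
true_mean = statistics.mean(population)

# "Big data": tens of thousands of records, but the chance of a record
# being included rises with the value itself (self-selection).
biased = [x for x in population
          if random.random() < 0.5 + 0.01 * (x - 50)]

# Small but careful: a simple random sample of 2,000.
srs = random.sample(population, 2_000)

print(f"true mean:          {true_mean:.2f}")
print(f"biased big sample:  {statistics.mean(biased):.2f}  (n={len(biased)})")
print(f"small random sample:{statistics.mean(srs):.2f}  (n={len(srs)})")
```

Here the biased estimate overshoots the truth by roughly its fixed selection bias, and collecting even more self-selected records would not shrink that gap; the random sample's error, by contrast, shrinks like one over the square root of its size.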