In January 2020 I learned about the results of a contest at DEF CON 2019 where hackers were challenged to come up with counter-offensive solutions to machine learning. One prize winner showed that a machine learning model could be evaded 100% of the time by mimicking the behavior of software that produces whitelisted events, and argued that any whitelisting in a machine learning pipeline is itself a vulnerability. Unfortunately I haven't found an exact source for this information.

Simply removing the capability to perform whitelisting on an endpoint usually doesn't change the base code of software that uses machine learning, since whitelisting (ignoring certain patterns found in machine data from an endpoint) happens before that data is ingested for training. I would go further and argue that any function that ignores or filters user input (for example, with a regex) is a form of whitelisting.

Do you have a suggestion for improving this blog? Let's talk about it. Contact me at David.Brenner.Jr@G...
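To make the point concrete, here is a minimal sketch (my own hypothetical example, not code from the contest) of a regex-based filter applied before training data is ingested. The pattern list, event strings, and function names are all assumptions for illustration; the point is that an attacker who mimics the shape of a whitelisted event is silently dropped before the model ever sees it.

```python
import re

# Hypothetical "known-good" patterns: events matching these are dropped
# before the data reaches model training -- this is whitelisting.
WHITELIST_PATTERNS = [
    re.compile(r"^svchost\.exe started service \w+$"),
    re.compile(r"^backup\.exe completed at \d{2}:\d{2}$"),
]

def is_whitelisted(event: str) -> bool:
    """Return True if the event matches any whitelisted pattern."""
    return any(p.match(event) for p in WHITELIST_PATTERNS)

def ingest(events: list[str]) -> list[str]:
    """Keep only non-whitelisted events for training."""
    return [e for e in events if not is_whitelisted(e)]

events = [
    "svchost.exe started service Evil",   # attacker mimicking an allowed shape
    "unknown.exe wrote to lsass memory",  # genuinely suspicious event
]
# Only the second event survives filtering; the mimicked event is never
# seen by the model, so the whitelist itself is the blind spot.
print(ingest(events))
```

The same logic applies whether the filter is a regex, a hash list, or a signed-binary check: whatever the filter accepts, the model never learns to flag.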
https://www.davidbrennerjr.com
https://1dbjr.blogspot.com
https://github.com/davidbrennerjr