pizza_is_yum@slrpnk.net 1 points 2 years ago* (last edited 2 years ago)

Cool. Btw, the authors tested two adversaries of their own. The first failed to breach the defense, and the second was deemed "impractical" because of how long it took to train.

I appreciate their positive outlook, but I'm not so sure. They say they're well-defended because their equations are non-differentiable. That's true, but reinforcement learning (RL) can get around that: an RL-style attacker only needs a scalar reward from the model's output, not gradients through the defense. I'm also curious whether attention-based adversaries would fare any better. Those seem to work magic, given enough training time.
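
To make that concrete, here's a minimal sketch of a gradient-free attack. It isn't the paper's setup; `defended_model_score` is a made-up stand-in for the defended, non-differentiable model, and the attack only ever queries it for scalar scores. That's the whole point: nothing here needs gradients through the defense.

```python
import numpy as np

# Hypothetical stand-in for the defended, non-differentiable model.
# It only exposes a scalar score; lower = closer to misclassification.
def defended_model_score(x):
    return float(np.sum(np.abs(x)))  # placeholder, not a real model

def black_box_attack(x, steps=200, sigma=0.01, lr=0.05):
    """Gradient-free attack: estimate a search direction from score
    differences alone (evolution-strategies style), so the defense's
    non-differentiability never comes into play."""
    x_adv = x.copy()
    for _ in range(steps):
        noise = np.random.randn(*x.shape)
        # Query the black box twice; only the scalar scores are used.
        s_plus = defended_model_score(x_adv + sigma * noise)
        s_minus = defended_model_score(x_adv - sigma * noise)
        # Estimate the direction that lowers the score and step along it.
        grad_est = (s_plus - s_minus) / (2 * sigma) * noise
        x_adv -= lr * grad_est
    return x_adv

x = np.random.rand(32)
print(defended_model_score(x), defended_model_score(black_box_attack(x)))
```

An RL policy-gradient attacker works the same way in spirit: it only needs a reward signal from the model's output, never the defense's gradients.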

Great work though. I love this "explainable" and "generalizable" approach they've taken. It's awesome to see research in the ML space that doesn't just throw a black box at the problem and call it a day. We need more like this.