Hi everyone, I'm Gundi! As an Adversarial AI Researcher here at SoSafe, I spend my days learning how cybercriminals are weaponizing AI to make their attacks smarter, more convincing, and harder to spot. With a background in Psychology and Cognitive Neuroscience, I come at this problem from a behavioral science angle, focusing on the human side of the equation: why these attacks work on us, and how we can build stronger defenses. And honestly, the best research starts with real conversations, so I'd love to hear from you: what AI-driven threats are you seeing out there, and what's keeping you up at night? Your real-world experiences are invaluable to the work we do, so please don't hesitate to reach out.
Welcome to the Community, Dr. G., happy to have you joining our network! Lars S. Lea K. Roald R. Sabrina H., nice conversation starter Gundi has introduced here: What keeps you up at night, in terms of AI-driven threats, of course?
Melissa G. Dr. G. -- to be honest, currently I am focused not on attackers using AI; it's more about colleagues adopting AI early without clear guidelines. In particular, colleagues experimenting with OpenClaw, and maybe even granting it access and rights to systems and services, keeps me up at night. OpenClaw has created a highly dynamic situation with little awareness of the security implications.
Thanks for bringing this up, a really important and worrying dynamic! How are you addressing this right now?
Thanks for sharing, Lars S., really helpful context! Labeling/DLP sounds like an important step, even if it doesn't fully cover the OpenClaw use case. Are you also looking into more structured approvals for tools, or is the focus mainly on awareness & training right now? I'm sure this is something many here are currently dealing with; it would be great to also share any learnings or food for thought in the 03_best-practices channel.
