Alex Rice, co-founder, CTO, and CISO of HackerOne, said in a statement to Ars that reports containing “hallucinated vulnerabilities, vague or incorrect technical content, or other forms of low-effort noise” are treated as spam and subject to enforcement.
“We believe AI, when used responsibly, can be a powerful tool for researchers, enhancing productivity, scale, and impact,” Rice said. “Innovation in this space is accelerating, and we support researchers who use AI to improve the quality and efficiency of their work. Overall, we’re seeing an aggregate increase in report quality as AI helps researchers bring clarity to their work, especially where English is a second language.”
“The key is ensuring that AI enhances the report rather than introducing noise,” Rice said. “Our goal is to encourage innovation that drives better security outcomes, while holding all submissions to the same high standards.”
“More tools to strike down this behavior”
In an interview with Ars, Stenberg said he was glad his post—which generated 200 comments and nearly 400 reposts as of Wednesday morning—was getting around. “I’m super happy that the issue [is getting] attention so that possibly we can do something about it [and] educate the audience that this is the state of things,” Stenberg said. “LLMs cannot find security problems, at least not like they are being used here.”
Stenberg said this week alone has brought four such misguided, obviously AI-generated vulnerability reports, apparently submitted in pursuit of either reputation or bug bounty funds. “One way you can tell is it’s always such a nice report. Friendly phrased, perfect English, polite, with nice bullet-points … an ordinary human never does it like that in their first writing,” he said.