A major meeting of computer hackers planned for this summer is to include an event that will test the limits of artificial intelligence (AI) tools.
The event, in August, will be held as part of the yearly DEF CON hacker meeting in Las Vegas, Nevada. In addition to hackers, the gathering draws computer security experts, students, federal government officials and others.
Organizers say this year’s event is expected to include thousands of hackers. The meeting provides a chance for hackers to hear from leading industry officials about the latest developments in computer security. It also includes hacking competitions.
This year, several major AI developers will take part in DEF CON. Among them will be OpenAI, which launched its latest AI model, GPT-4, in March. American software maker Microsoft has invested heavily in OpenAI. Google also released an AI system called Bard earlier this year.
The administration of President Joe Biden has said it will support the hacking event as part of efforts to study the latest AI tools. Administration officials said the government is aiming to ensure that the fast-developing systems will continue to improve without putting people’s rights and safety at risk.
Recently released AI tools are built by feeding huge amounts of information into machine learning computer systems. The data trains the AI systems to develop complex skills and produce human-like results.
Experts have warned that such systems may bring major changes to many different jobs and industries. They also fear the tools, known as “chatbots,” could greatly increase the amount of misinformation in the news media and on social media.
Organizers of the DEF CON event say some of the questions attendees will try to answer include: How can chatbots be manipulated by hackers to cause harm? Will they share with other users private data that is meant to stay secret? And why do the systems get easily confused when processing information about gender and race?
“This is why we need thousands of people,” Rumman Chowdhury told The Associated Press. She is an organizer of the hacking event and co-founder of the AI accountability nonprofit Humane Intelligence.
Chowdhury added, “We need a lot of people with a wide range of lived experiences, subject matter expertise and backgrounds hacking at these models and trying to find problems that can then go be fixed.”
Chowdhury said results of the event can provide helpful information to companies looking for ways to safely use the fast-developing AI systems. She noted that the hackers’ work will not end after the gathering. They will spend months afterward creating reports on their findings and identifying specific system vulnerabilities.
Alexandr Wang is the chief executive of AI developer Scale AI. He told the AP, “As these foundation models become more and more widespread, it’s really critical that we do everything we can to ensure their safety.”
Wang said he especially worries about chatbots giving out “unbelievably bad medical advice” or other misinformation that can cause serious harm.
Jack Clark is the co-founder of AI developer Anthropic. He said he hopes the DEF CON event will lead to deeper commitments from AI developers to measure and test the safety of their systems.
For this to happen, though, Clark said AI systems will need to be examined by third parties both before and after deployment. “We need to get practice at figuring out how to do this. It hasn't really been done before,” he said.
I’m Bryan Lynn.