
AI-Driven Cybercrime Will Explode in 2024: CrowdStrike Executive

Shawn Henry, Chief Security Officer at CrowdStrike, told CBS Mornings on Tuesday that the new year brings new cybersecurity threats based on artificial intelligence.

“I think this is a major concern for everyone,” Henry said.

“AI has really put this incredibly powerful tool into the hands of ordinary people and has improved their capabilities in incredible ways,” he explained. “So adversaries are using this new innovation, AI, to overcome a variety of cybersecurity capabilities to gain access to corporate networks.”

Henry highlighted that AI is being used to infiltrate corporate networks and spread misinformation online using increasingly sophisticated video, audio and text deepfakes.

Henry emphasized looking at the source of information and not taking what is posted online at face value.

“We need to find out where it came from,” Henry said. “Who is telling the story, what is their motivation, and can we verify this through multiple sources?”

“This is incredibly difficult, because people see a 15- or 20-second video and they don’t have the time, or often don’t make the effort, to verify that data. That’s the problem.”

Henry noted that 2024 is an election year in several countries, including the United States, Mexico, South Africa, Taiwan, and India. Democracy itself is on the ballot as cybercriminals seek to leverage AI to take advantage of the political turmoil, he said.

“We have seen foreign adversaries target American elections for years. It wasn’t just 2016. (China) targeted us in 2008,” Henry said. “We have seen Russia, China and Iran engage in this type of misinformation and disinformation over the years. They will use it here again in 2024.”

“People need to understand where their information is coming from,” Henry said. “Because there are people out there with evil intentions who are causing major problems.”

Of particular concern ahead of the 2024 U.S. elections is the security of voting machines. When asked whether AI could be used to hack voting machines, Henry expressed optimism that this would not happen, thanks to the decentralized nature of the U.S. voting system.

“I think the system in the United States is very decentralized,” Henry said. “There are individual pockets that could be targeted, such as voter registration rolls and the like. (But) I don’t think vote counting could be affected broadly enough to impact the election. I don’t think it’s a big problem.”

Henry also highlighted AI’s ability to put sophisticated attack tools in the hands of less technically skilled cybercriminals.

“AI puts a very capable tool in the hands of people who don’t have high technical skills,” Henry said. “They can write code and create malicious software, phishing emails, etc.”

Last October, the RAND Corporation published a report suggesting that terrorists could jailbreak generative AI to help plan biological attacks.

“Typically, if a malicious actor is explicit (in their intent), the response you get is ‘Sorry, we can’t help you with that,’” co-author and RAND Corporation principal engineer Christopher Mouton told Decrypt in an interview. “So you typically have to use one of these jailbreak techniques or prompt engineering to get one level below those guardrails.”

In a separate report, cybersecurity firm SlashNext said email phishing attacks have increased 1,265% since the beginning of 2023.

Global policymakers spent much of 2023 exploring ways to regulate and crack down on the misuse of generative AI, with the UN Secretary-General warning against the use of AI-generated deepfakes in conflict zones.

Last August, the U.S. Federal Election Commission advanced a petition to ban the use of artificial intelligence in campaign ads ahead of the 2024 election season.

Tech giants Microsoft and Meta have announced new policies aimed at curbing AI-based political misinformation.

“In 2024, we could see several authoritarian nation-states attempting to interfere in the electoral process,” Microsoft said. “And they may combine traditional technologies with AI and other new technologies to threaten the integrity of our election systems.”

Pope Francis, who has been the subject of viral AI-generated deepfakes, has also mentioned artificial intelligence several times in his sermons.

“We must recognize the rapid changes taking place and manage them in a way that respects the institutions and laws that protect fundamental human rights and promote integral human development,” Pope Francis said. “Artificial intelligence should serve humans’ highest potential and highest aspirations, not compete with them.”
