Experts worry that as international sanctions tighten, banks and other financial institutions that use artificial intelligence may be particularly vulnerable to retaliatory Russian cyberattacks.
According to a recent Wall Street Journal article, these concerns come as Russia’s war on Ukraine enters its second month and an unprecedented onslaught of international sanctions wreaks havoc on the Russian economy. From the beginning, international financial institutions have played a vital role in the sanctions regime: barring money transfers from particular Russian banks, denying them access to foreign markets, and even freezing the assets of Russian President Vladimir Putin and notable billionaires.
At the same time, experts are concerned that these institutions’ growing dependence on machine-learning models to automate ever more of their systems in the name of efficiency might backfire. According to Andrew Burt, a former policy adviser to the head of the FBI’s cyber division, AI vulnerabilities are “major and frequently neglected” at many financial institutions that have come to rely on them. Burt described it as a “major unaccounted-for risk.”
What, exactly, makes machine-learning algorithms more vulnerable to cyberattacks? Most of the problems stem from these models’ need to consume massive amounts of data to refine their predictions, which leaves them susceptible to data-manipulation attacks. Earlier research has shown that an attacker can “poison” an algorithm’s training data to distort or alter the results it delivers.
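The mechanics of a poisoning attack can be sketched with a toy example. Everything below is hypothetical, invented for illustration: a naive anomaly detector that flags transactions far above the training mean, and an attacker who plants inflated “normal” records so that a fraudulent transfer no longer looks unusual.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical detector: flag any amount more than 3 standard
# deviations above the mean of the training data.
def fit_threshold(amounts):
    return amounts.mean() + 3 * amounts.std()

clean = rng.normal(100, 10, size=1000)   # legitimate transaction amounts
threshold_clean = fit_threshold(clean)   # roughly 130 here

attack = 500.0                           # a fraudulent transfer

# Poisoning: the attacker slips 200 inflated "normal" records
# into the training set, dragging the learned threshold upward.
poison = np.full(200, 450.0)
poisoned = np.concatenate([clean, poison])
threshold_poisoned = fit_threshold(poisoned)  # roughly 550 here

print(attack > threshold_clean)     # flagged when trained on clean data
print(attack > threshold_poisoned)  # slips through after poisoning
```

The numbers are arbitrary, but the pattern is the point: the model’s behavior is a function of its training data, so whoever can corrupt that data can steer the model.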
While racial, gender, and other biases in AI algorithms caused by inadequate data are well known, some academics are concerned that criminal actors targeting financial institutions could use massive amounts of skewed data to attack the algorithms that financial systems use to gauge market sentiment. Think of it as a banking-industry adaptation of Russian disinformation memes.
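Why flooding works against sentiment models can be shown with a deliberately crude sketch. The keyword-based scorer and the sample posts below are hypothetical; real sentiment models are far more sophisticated, but they share the same weakness, in that their output tracks whatever data they ingest.

```python
NEGATIVE_WORDS = {"collapse", "default", "panic"}

def sentiment(posts):
    """Toy market-sentiment score: share of posts containing a negative keyword."""
    negative = sum(
        any(word in post.lower() for word in NEGATIVE_WORDS)
        for post in posts
    )
    return negative / len(posts)

organic = [
    "Bank X posts solid quarterly earnings",
    "Markets steady today",
]
print(sentiment(organic))  # 0.0 -- no negative signal

# A coordinated campaign floods the feed with fabricated negative posts.
flood = ["Rumors of default at Bank X spark panic"] * 50
print(sentiment(organic + flood))  # the score is now attacker-controlled
```

A model that ingests the flooded feed sees an overwhelmingly negative market, not because conditions changed, but because the input distribution was manufactured.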
According to a paper published by the Georgetown Center for Security and Emerging Technology in 2020, machine-learning vulnerabilities cannot be patched in the same manner as traditional software flaws, which means any successful attacks might persist far longer.
The paper states: “Lying latent in such systems are vulnerabilities distinct from the classic weaknesses with which we have decades of expertise. These flaws are widespread and inexpensive to attack using widely disseminated techniques against which there is typically no defense.”
These systems can also be tricked in real time without significant volumes of data. In 2019, for example, researchers from Tencent’s Keen Security Lab demonstrated several relatively simple techniques for fooling Tesla’s machine-learning capabilities: first tricking the windshield wipers into engaging when they weren’t supposed to, then using bright stickers on the road to convince a Tesla on Autopilot to drift into the opposing lane.
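The idea behind such real-time attacks, often called adversarial examples, can be illustrated against a toy model. The linear “detector” below is entirely hypothetical; the sketch follows the fast-gradient-sign approach of nudging the input a small amount in the direction that most changes the model’s decision, which for a linear model is simply the sign of its weights.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical linear detector: sign(w @ x + b) is the predicted class.
w = rng.normal(size=64)
b = 0.0

x = rng.normal(size=64)        # an ordinary input
score = w @ x + b              # the model's original decision

# FGSM-style perturbation: a small step per feature, against the
# gradient direction (for a linear model, the gradient is just w).
eps = 1.0
x_adv = x - eps * np.sign(w) * np.sign(score)

flipped = np.sign(w @ x_adv + b) != np.sign(score)
print(flipped)  # a tiny, structured change flips the decision
```

The per-feature change is bounded by `eps`, yet because every feature is nudged in the worst-case direction at once, the decision flips. A bright sticker on a road plays the same role for a vision model: a small, deliberately structured input change with an outsized effect.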
“I haven’t seen any real abilities in terms of being able to defend against the flood of disinformation,” Montreal AI Ethics Institute Founder Abhishek Gupta told The Journal. Gupta described machine learning security as a “novel” field filled with unknowns. “When you introduce machine learning into any software infrastructure, it opens up new attack surfaces and new modalities for how a system’s behavior might be corrupted.”
While such security flaws are concerning even in the best of circumstances, government leaders, including President Joe Biden, worry that Russia may employ cyberattacks to retaliate against these institutions as sanctions continue to bite.
In a statement issued earlier this week, Biden recommended that private organizations in the United States strengthen their security policies, citing developing evidence that the Russian government is studying options for possible cyberattacks. In the weeks since the Russian military launched its invasion, global banks have reportedly intensified network monitoring and run drills for potential hacking scenarios, according to Reuters.