2024 is poised to be a highly political year, with several countries around the world heading into democratic elections from March onward, and with this comes the looming threat of political deepfakes: AI-generated media capable of visually depicting political candidates, presidents, or delegates delivering speeches and directives that are entirely fake yet quite believable, since they carry the faces and even the voices of the real individuals.
Although AI emerged as a groundbreaking technological feat, it brought with it a potentially harmful form of visual and audio manipulation that has challenged cybersecurity companies for over a decade. AI abuse poses a particular risk of interference in electoral proceedings, capable of misinforming voters and the general public. One such incident took place in the UK, where voters encountered a number of AI-manipulated videos of Prime Minister Rishi Sunak. These videos were similar to content traced to non-state actors by the Microsoft Threat Analysis Center (MTAC).

To mitigate this issue, leading tech companies gathered at the Munich Security Conference, held on February 16th, to agree on steps needed to combat deceptive AI use in the elections taking place throughout the year. The conference, often dubbed the “Davos of Defense,” brought together intelligence and military personnel, diplomats, and heads of state alongside tech company executives, and it culminated in the signing of an agreement intended to track the creation and spread of deepfake video and audio content and curb potential damage.
Google launched its AI Cyber Defense Initiative shortly before the Munich Security Conference, unveiling tools to help harness AI for cybersecurity and for resilience against malicious cyberattacks. Its offerings included tools and services designed for research institutions, educational institutions, and businesses of all sizes, alongside $2 million in funding for AI initiatives covering code verification, cyber offense and defense, and more resilient large language models (LLMs).
Some of the major tech companies involved in this accord are OpenAI, Amazon, Meta, X, TikTok, and Google. Announcing the agreement, signed by more than 20 companies, Microsoft’s Vice Chair had this to say in a press release shortly after:
“The show of industry unity is itself an accomplishment. We all want and need to innovate. We want and need to compete with each other. We want and need to grow our businesses. But it is also indispensable that we adhere to a high level of responsibility, that we acknowledge and address the problems that are very real including to democracy.”